BINOMIAL EDGE IDEALS OF BIPARTITE GRAPHS
arXiv:1704.00152v2 [] 6 May 2017
DAVIDE BOLOGNINI, ANTONIO MACCHIA, FRANCESCO STRAZZANTI
Abstract. We classify the bipartite graphs G whose binomial edge ideal JG is Cohen-Macaulay. The
connected components of such graphs can be obtained by gluing a finite number of basic blocks with
two operations. In this context we prove the converse of a well-known result due to Hartshorne, showing
that the Cohen-Macaulayness of these ideals is equivalent to the connectedness of their dual graphs. We
study interesting properties also for non-bipartite graphs and in the unmixed case, constructing classes of
bipartite graphs with JG unmixed and not Cohen-Macaulay.
1. Introduction
Binomial edge ideals were introduced independently in [10] and [17]. They are a natural generalization
of the ideals of 2-minors of a (2 × n)-generic matrix [3]: their generators are those 2-minors whose column
indices correspond to the edges of a graph. In this perspective, the ideals of 2-minors are binomial edge
ideals of complete graphs. On the other hand, binomial edge ideals arise naturally in Algebraic Statistics,
in the context of conditional independence ideals, see [10, Section 4].
More precisely, given a finite simple graph G on the vertex set [n] = {1, . . . , n}, the binomial edge ideal
associated with G is the ideal
JG = (xi yj − xj yi : {i, j} is an edge of G) ⊂ R = K[xi , yi : i ∈ [n]].
Binomial edge ideals have been extensively studied, see e.g. [1], [5], [6], [13], [14], [15], [18], [19]. Yet
a number of interesting questions are still unanswered. In particular, many authors have studied classes of
Cohen-Macaulay binomial edge ideals in terms of the associated graph, see e.g. [1], [5], [13], [18], [19].
Some of these results concern a class of chordal graphs, the so-called closed graphs, introduced in [10], and
their generalizations, such as block and generalized block graphs [13].
In the context of squarefree monomial ideals, any graph can be associated with the so-called edge ideal,
whose generators are monomials of degree 2 corresponding to the edges of the graph. Herzog and Hibi, in
[9, Theorem 3.4], classified Cohen-Macaulay edge ideals of bipartite graphs in purely combinatorial terms.
In the same spirit, we provide a combinatorial classification of Cohen-Macaulay binomial edge ideals of
bipartite graphs. In particular, we present a family of bipartite graphs Fm whose binomial edge ideal
is Cohen-Macaulay, and we prove that, if G is connected and bipartite, then JG is Cohen-Macaulay if
and only if G can be obtained recursively by gluing a finite number of graphs of the form Fm via two
operations.
2010 Mathematics Subject Classification. Primary 13H10, 13C05, 05C40; Secondary 05E40, 05C99.
Key words and phrases. Binomial edge ideals, bipartite graphs, Cohen-Macaulay rings, unmixed ideals, dual graph of an
ideal.
The first and the second author were supported by INdAM.
The third author was partially supported by MTM2013-46231-P (Ministerio de Economía y Competitividad) and FEDER.
We now explain in more detail the basic blocks and the operations in our classification. For the
terminology about graphs we refer to [4].
Basic blocks: For every m ≥ 1, let Fm be the graph (see Figure 1) on the vertex set [2m] and with
edge set
E(Fm ) = {{2i, 2j − 1} : i = 1, . . . , m, j = i, . . . , m} .
Notice that F1 is the single edge {1, 2} and F2 is the path of length 3.
(a) The graph F3 (b) The graph F4
Figure 1
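As a concrete illustration of the definition, the edge sets E(Fm) can be generated programmatically. The following Python sketch is not part of the paper (the function name edge_set_F is ours); it builds E(Fm) and recovers the observations about F1 and F2:

```python
def edge_set_F(m):
    """Edge set of F_m on the vertex set [2m]:
    E(F_m) = {{2i, 2j-1} : i = 1, ..., m, j = i, ..., m}."""
    return {frozenset({2 * i, 2 * j - 1})
            for i in range(1, m + 1)
            for j in range(i, m + 1)}

# F_1 is the single edge {1, 2}.
print(sorted(tuple(sorted(e)) for e in edge_set_F(1)))  # [(1, 2)]

# F_2 is the path 1-2-3-4, i.e. a path of length 3.
print(sorted(tuple(sorted(e)) for e in edge_set_F(2)))  # [(1, 2), (2, 3), (3, 4)]
```

Note that |E(Fm)| = m(m + 1)/2, since each pair i ≤ j contributes exactly one edge.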
Operation ∗: For i = 1, 2, let Gi be a graph with at least one vertex fi of degree one, i.e., a leaf of
Gi . We denote the graph G obtained by identifying f1 and f2 by G = (G1 , f1 ) ∗ (G2 , f2 ), see Figure 2(a).
This is a particular case of an operation studied by Rauf and Rinaldo in [18, Section 2].
Operation ◦: For i = 1, 2, let Gi be a graph with at least one leaf fi , vi its neighbour and assume
degGi (vi ) ≥ 3. We define G = (G1 , f1 ) ◦ (G2 , f2 ) to be the graph obtained from G1 and G2 by removing
the leaves f1 , f2 and identifying v1 and v2 , see Figure 2(b).
For both operations, if it is not important to specify the vertices fi or it is clear from the context, we
simply write G1 ∗ G2 or G1 ◦ G2 .
(a) The graph F3 ∗ F4 (b) The graph F3 ◦ F4
Figure 2
Finally, we recall the notion of dual graph of an ideal, which is one of the main tools in the proof of our
classification. We follow the notation used in [2].
Dual graph: Let I be an ideal in a polynomial ring A = K[x1 , . . . , xn ] and let p1 , . . . , pr be the minimal
prime ideals of I. The dual graph D(I) is a graph with vertex set [r] and edge set
{{i, j} : ht(pi + pj ) − 1 = ht(pi ) = ht(pj ) = ht(I)}.
This notion was originally studied by Hartshorne in [8] in terms of connectedness in codimension one.
By [8, Corollary 2.4], if A/I is Cohen-Macaulay, then the algebraic variety defined by I is connected in
codimension one, hence I is unmixed by [8, Remark 2.4.1]. The connectedness of the dual graph translates
in combinatorial terms the notion of connectedness in codimension one, see [8, Proposition 1.1]. Thus, if
A/I is Cohen-Macaulay, then D(I) is connected. The converse does not hold in general, see for instance
Remark 5.1. We will show that for binomial edge ideals of connected bipartite graphs this is indeed an
equivalence. In geometric terms, this means that the algebraic variety defined by JG is Cohen-Macaulay
if and only if it is connected in codimension one.
Given a graph G, the ideal JG is Cohen-Macaulay if and only if the binomial edge ideal of each connected
component of G is Cohen–Macaulay. Thus, we may assume G connected with at least two vertices.
Before stating the main result, we recall the notion of cut set, which is central in the study of binomial
edge ideals. In fact, there is a bijection between the cut sets of a graph G and the minimal prime ideals
of JG , see [10, Section 3]. For a subset S ⊆ [n], let cG (S) be the number of connected components of the
induced subgraph G[n]\S . The set S is called a cut set of G if S = ∅, or if S ≠ ∅ and cG (S \ {i}) < cG (S) for
every i ∈ S. Moreover, we call a cut set of cardinality one a cut vertex. We denote by M(G) the set of cut
sets of G.
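For small graphs, cut sets can be enumerated by brute force directly from this definition. The following Python sketch is only an illustration of the definition, not part of the paper (the function names are ours):

```python
from itertools import combinations

def n_components(vertices, edges, removed):
    """Number of connected components of the subgraph induced on the
    vertices outside `removed`; this is c_G(S) with S = removed."""
    alive = set(vertices) - set(removed)
    adj = {v: set() for v in alive}
    for a, b in edges:
        if a in alive and b in alive:
            adj[a].add(b)
            adj[b].add(a)
    seen, count = set(), 0
    for v in alive:
        if v not in seen:
            count += 1
            stack = [v]
            while stack:
                u = stack.pop()
                if u not in seen:
                    seen.add(u)
                    stack.extend(adj[u] - seen)
    return count

def cut_sets(vertices, edges):
    """M(G): the empty set together with every S such that
    c_G(S \\ {i}) < c_G(S) for every i in S."""
    c = lambda S: n_components(vertices, edges, S)
    return {frozenset(S)
            for r in range(len(vertices))
            for S in combinations(vertices, r)
            if all(c(set(S) - {i}) < c(S) for i in S)}

# The path 1-2-3-4 (the graph F_2): its cut sets are ∅ and the
# two cut vertices {2} and {3}.
M = cut_sets([1, 2, 3, 4], [(1, 2), (2, 3), (3, 4)])
print(M == {frozenset(), frozenset({2}), frozenset({3})})  # True
```

In particular, removing a cut vertex such as 2 disconnects the path, while removing a leaf does not increase the number of components, matching the definition above.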
We are now ready to state our main result.
Theorem 6.1. Let G be a connected bipartite graph. The following properties are equivalent:
a) JG is Cohen-Macaulay;
b) the dual graph D(JG ) is connected;
c) G = A1 ∗ A2 ∗ · · · ∗ Ak , where Ai = Fm or Ai = Fm1 ◦ · · · ◦ Fmr , for some m ≥ 1 and mj ≥ 3;
d) JG is unmixed and for every non-empty S ∈ M(G), there exists s ∈ S such that S \ {s} ∈ M(G).
The paper is structured as follows. In Section 2 we study unmixed binomial edge ideals of bipartite
graphs. A combinatorial characterization of unmixedness was already proved in [10] (see also [18, Lemma
2.5]), in terms of the cut sets of the underlying graph.
A first distinguishing fact about bipartite graphs with JG unmixed is that they have exactly two leaves
(Proposition 2.3). This, in particular, means that G has at least two cut vertices. In Proposition 2.8,
we present a construction that is useful in the study of the basic blocks and to produce new examples of
unmixed binomial edge ideals, which are not Cohen-Macaulay.
In Section 3 we prove that the ideals JFm , associated with the basic blocks of our construction, are
Cohen-Macaulay, see Proposition 3.3. In Section 4 we study the operations ∗ and ◦. In [18, Theorem 2.7],
Rauf and Rinaldo proved that JG1 ∗G2 is Cohen-Macaulay if and only if so are JG1 and JG2 . In Theorem
4.9, we show that JG is Cohen-Macaulay if G = Fm1 ◦ · · · ◦ Fmk , for every k ≥ 2 and mi ≥ 3. Using these
results, we prove the implication c) ⇒ a) of Theorem 6.1.
Section 5 is devoted to the study of the dual graph of binomial edge ideals. This is one of the main tools
in the proof of Theorem 6.1. First of all, given a (not necessarily bipartite) graph G with JG unmixed, in
Theorem 5.2 we provide an explicit description of the edges of the dual graph D(JG ) in terms of the cut
sets of G. This allows us to show infinite families of bipartite graphs whose binomial edge ideal is unmixed
and not Cohen-Macaulay, see Examples 2.2 and 5.4.
A crucial result concerns a basic, yet elusive, property of cut sets of unmixed binomial edge ideals. In
Lemma 5.5, we show that, notably for bipartite graphs and under a suitable assumption, the intersection of any
two cut sets is again a cut set. This leads to the proof of the equivalence b) ⇔ d) in Theorem 6.1, see Theorem
5.7. On the other hand, if G = G1 ∗ G2 or G = G1 ◦ G2 is bipartite and D(JG ) is connected, then the dual
graphs of G1 and G2 are connected, see Theorem 5.8. Thus, we may reduce to consider bipartite graphs
with exactly two cut vertices and prove the implication b) ⇒ c) of Theorem 6.1.
It is worth noting that the main theorem also gives a classification of other classes of Cohen-Macaulay
binomial ideals associated with bipartite graphs, Corollary 6.2: Lovász-Saks-Schrijver ideals [11], permanental edge ideals [11, Section 3] and parity binomial edge ideals [12].
As an application of the main result, in Corollary 6.3, we show that Cohen-Macaulay binomial edge
ideals of bipartite graphs are Hirsch, meaning that the diameter of the dual graph of JG is bounded above
by the height of JG , verifying [2, Conjecture 1.6].
All the results presented in this paper are independent of the field.
2. Unmixed binomial edge ideals of bipartite graphs
In this paper all graphs are finite and simple (without loops and multiple edges). In what follows,
unless otherwise stated, we assume that G is a connected graph with at least two vertices. Given a graph
G, we denote by V (G) its vertex set and by E(G) its edge set. If G is a bipartite graph, we denote by
V (G) = V1 ⊔ V2 the bipartition of the vertex set and call V1 , V2 the bipartition sets of G.
For a subset S ⊆ V (G), we denote by GS the subgraph induced in G by S, which is the graph with
vertex set S and edge set consisting of all the edges of G with both endpoints in S.
We recall some definitions and results from [10]. Let G be a graph with vertex set [n]. We denote
by R = K[xi , yi : i ∈ [n]] the polynomial ring in which the ideal JG is defined and, if S ⊆ [n], we set
S̄ = [n] \ S. Let cG (S), or simply c(S), be the number of connected components of the induced subgraph
GS̄ and let G1 , . . . , GcG (S) be the connected components of GS̄ . For each Gi , denote by G̃i the complete
graph on V (Gi ) and define the ideal

PS (G) = ( ⋃i∈S {xi , yi }, JG̃1 , . . . , JG̃cG (S) ).
In [10, Section 3], it is shown that PS (G) is a prime ideal for every S ⊆ [n], that ht(PS (G)) = n + |S| − cG (S)
and that JG = ⋂S⊆[n] PS (G). Moreover, PS (G) is a minimal prime ideal of JG if and only if S = ∅, or S ≠ ∅
and cG (S \ {i}) < cG (S) for every i ∈ S. In simple terms, the last condition means that, adding a vertex
of S to GS̄ , we connect at least two connected components of GS̄ . We set

M(G) = {S ⊂ [n] : PS (G) is a minimal prime ideal of JG }
= {∅} ∪ {S ⊂ [n] : S ≠ ∅, cG (S \ {i}) < cG (S) for every i ∈ S},
and we call cut sets of G the elements of M(G). If {v} ∈ M(G), we say that v is a cut vertex of G.
We further recall that a clique of a graph G is a subset C ⊆ V (G) such that GC is complete. A free
vertex of G is a vertex that belongs to exactly one maximal clique of G. A vertex of degree 1 in G, which
in particular is a free vertex, is called a leaf of G.
Remark 2.1. If v is a free vertex of a graph G, then v ∉ S for every S ∈ M(G). In fact, if v ∈ S for
some S ∈ M(G), then cG (S) = cG (S \ {v}).
Recall that an ideal is unmixed if all its minimal primes have the same height. By [18, Lemma 2.5], JG
is unmixed if and only if, for every S ∈ M(G),

(1)    cG (S) = |S| + 1.

This follows from the equality ht(P∅ (G)) = n − 1 = ht(PS (G)) = n + |S| − cG (S).
Moreover, for every graph G, with JG unmixed, we have that dim(R/JG ) = |V (G)| + c, where c is the
number of connected components of G, see [10, Corollary 3.3].
In this section, we study some properties of unmixed binomial edge ideals of bipartite graphs. It is
well-known that if JG is Cohen-Macaulay, then JG is unmixed. The converse is, in general, not true,
not even for binomial edge ideals of bipartite graphs. In fact, in the following example we show two classes of
bipartite graphs whose binomial edge ideals are unmixed but not Cohen-Macaulay.
Example 2.2. For every k ≥ 4, let Mk,k be the graph with vertex set [2k] and edge set
E(Mk,k ) = {{1, 2}, {2k − 1, 2k}} ∪ {{2i, 2j − 1} : i = 1, . . . , k − 1, j = 2, . . . , k},
see Figure 3(a), and let Mk−1,k be the graph with vertex set [2k − 1] and edge set
E(Mk−1,k ) = {{1, 2}, {2k − 2, 2k − 1}} ∪ {{2i, 2j − 1} : i = 1, . . . , k − 1, j = 2, . . . , k − 1},
see Figure 3(b).
(a) The graph M4,4 (b) The graph M3,4
Figure 3
Notice that the graphs Mk,k and Mk−1,k are obtained by adding two whiskers to a complete bipartite
graph. Recall that adding a whisker to a graph G means adding a new vertex and connecting it to one of the
vertices of G.
Let V1 ⊔ V2 be the bipartition of Mk,k and of Mk−1,k such that V1 contains the odd labelled vertices
and V2 contains the even labelled vertices. We claim that
M(Mk,k ) = {∅, {2}, {2k − 1}, {2, 2k − 1}, V1 \ {1}, V2 \ {2k}} and
M(Mk−1,k ) = {∅, {2}, {2k − 2}, {2, 2k − 2}, V1 \ {1, 2k − 1}, V2 }.
The inclusion ⊇ is clear. We prove the other inclusion for Mk,k ; the proof is similar for Mk−1,k . Let
S ∈ M(Mk,k ). If S ⊆ {2, 2k − 1}, there is nothing to prove. If there exists v ∈ S \ {2, 2k − 1}, then
S = V1 \ {1} or S = V2 \ {2k}. In fact, if v ∈ V1 \ {1} and there is w ∈ (V1 \ {1}) \ S, then c(S \ {v}) = c(S),
a contradiction. Hence, V1 \ {1} ⊆ S. On the other hand, if w ∈ V2 \ {2k}, then w ∉ S. This shows that
S = V1 \ {1}. The other case is similar.
Moreover, it is easy to check that JMk,k and JMk−1,k are unmixed. In Example 5.4 we will show that
these ideals are not Cohen-Macaulay.
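The claimed list of cut sets of M4,4 and the unmixedness criterion (1) can be double-checked by brute force. The following self-contained Python sketch is not part of the paper (names such as M_edges are ours); it enumerates all cut sets of M4,4 and verifies cG (S) = |S| + 1 for each of them:

```python
from itertools import combinations

def M_edges(k):
    """Edge set of M_{k,k} on [2k]: the whiskers {1,2} and {2k-1,2k}
    plus the complete bipartite part {2i, 2j-1}, i = 1..k-1, j = 2..k."""
    E = {frozenset({1, 2}), frozenset({2 * k - 1, 2 * k})}
    E |= {frozenset({2 * i, 2 * j - 1})
          for i in range(1, k) for j in range(2, k + 1)}
    return E

def c(n, edges, S):
    """c_G(S): connected components of the subgraph induced on [n] without S."""
    alive = set(range(1, n + 1)) - set(S)
    adj = {v: set() for v in alive}
    for e in edges:
        a, b = tuple(e)
        if a in alive and b in alive:
            adj[a].add(b)
            adj[b].add(a)
    seen, count = set(), 0
    for v in alive:
        if v not in seen:
            count += 1
            stack = [v]
            while stack:
                u = stack.pop()
                if u not in seen:
                    seen.add(u)
                    stack.extend(adj[u] - seen)
    return count

k, n = 4, 8
E = M_edges(k)
cuts = {frozenset(S)
        for r in range(n)
        for S in combinations(range(1, n + 1), r)
        if all(c(n, E, set(S) - {i}) < c(n, E, S) for i in S)}

claimed = {frozenset(), frozenset({2}), frozenset({7}), frozenset({2, 7}),
           frozenset({3, 5, 7}),   # V1 minus {1}
           frozenset({2, 4, 6})}   # V2 minus {8}
print(cuts == claimed)                                # True
print(all(c(n, E, S) == len(S) + 1 for S in cuts))    # True: criterion (1) holds
```

The same brute-force check applies verbatim to Mk−1,k after adjusting the edge set.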
A first nice fact about bipartite graphs with unmixed binomial edge ideal is that they have exactly two
leaves.
Proposition 2.3. Let G be a bipartite graph such that JG is unmixed. Then G has exactly 2 leaves.
Proof. Let V (G) = V1 ⊔ V2 be the bipartition of G, with m1 = |V1 | ≥ 1 and m2 = |V2 | ≥ 1. Assume that G
has exactly h leaves, f1 , . . . , fh , in V1 and k leaves, g1 , . . . , gk , in V2 . We claim that S1 = V1 \ {f1 , . . . , fh }
and S2 = V2 \ {g1 , . . . , gk } are cut sets of G. Notice that cG (S1 ) = |V2 | = m2 and cG (S1 \ {v}) < cG (S1 )
for every v ∈ S1 , since the vertex v joins at least two connected components of GS1 . By symmetry, the claim is true for
S2 and, in particular cG (S2 ) = |V1 | = m1 . From the unmixedness of JG it follows that ht(P∅ (G)) =
ht(PS1 (G)) and ht(P∅ (G)) = ht(PS2 (G)). Thus n − 1 = n + |S1 | − cG (S1 ) = n + m1 − h − m2 and
n − 1 = n + |S2 | − cG (S2 ) = n + m2 − k − m1 . Hence h = m1 − m2 + 1 and k = m2 − m1 + 1. The sum of
the two equations yields h + k = 2.
Remark 2.4. Assume that G is bipartite and JG is unmixed. The proof of Proposition 2.3 implies that:
(i) either h = 2 and k = 0, i.e., the two leaves are in the same bipartition set and in this case
m1 = m2 + 1, or h = 1 and k = 1, i.e., each bipartition set contains exactly one leaf and in this
case m1 = m2 ;
(ii) if G has at least 4 vertices, then the leaves cannot be attached to the same vertex v, otherwise
cG ({v}) ≥ 3 > 2 = |{v}| + 1, against the unmixedness of JG , see (1). Hence G has at least two
distinct cut vertices, which are the neighbours of the leaves.
Remark 2.5. Notice that Proposition 2.3 does not hold if G is not bipartite. In fact, there are non-bipartite graphs G with an arbitrary number of leaves and such that JG is Cohen-Macaulay. For n ≥ 2
the binomial edge ideal JKn of the complete graph Kn is Cohen-Macaulay, since it is the ideal of 2-minors
of a generic (2 × n)-matrix (see [3, Corollary 2.8]). Moreover, for n ≥ 3, Kn has 0 leaves. Let W ⊆ [n],
with |W | = k ≥ 1. Adding a whisker to a vertex of W , the resulting graph H has 1 leaf and JH is
Cohen-Macaulay by [18, Theorem 2.7]. Applying the same argument to all vertices of W , we obtain a
graph H ′ with k leaves such that JH ′ is Cohen-Macaulay.
In the remaining part of the section we present a construction, Proposition 2.8, that produces new
examples of unmixed binomial edge ideals. It will also be important in the proof of the main theorem.
If X is a subset of V (G), we define the set of neighbours of the elements of X, denoted NG (X), or
simply N (X), as the set
NG (X) = {y ∈ V (G) : {x, y} ∈ E(G) for some x ∈ X}.
Lemma 2.6. Let G be a bipartite graph with bipartition V1 ⊔ V2 , JG unmixed and let v1 and v2 be the
neighbours of the leaves.
a) If X ⊆ V1 \ {v1 , v2 }, then N (X) is a cut set of G and |N (X)| ≥ |X|.
b) If {v1 , v2 } ∈ E(G), then m = |V1 | = |V2 | and vi has degree m, for i = 1, 2. Moreover, v1 and v2
are the only cut vertices of G.
Proof. a) First notice that N (X) is a cut set. In fact, every element of X is isolated in GN (X) . Let
v ∈ N (X). Then deg(v) ≥ 2, since v1 , v2 ∉ X. Adding v to GN (X) , it connects at least a vertex of X with
some other connected component.
Now, suppose by contradiction that |N (X)| < |X|. Then GN (X) has at least |X| isolated vertices and
another connected component containing a leaf, because v1 , v2 ∉ X. Hence, cG (N (X)) ≥ |X| + 1 >
|N (X)| + 1, against the unmixedness of JG .
b) Assume that v1 ∈ V1 . Then v2 ∈ V2 , since {v1 , v2 } ∈ E(G). By Remark 2.4(i), it follows that
m = |V1 | = |V2 |. Define X = {w ∈ V2 : {v1 , w} ∉ E(G)} and assume that X ≠ ∅. Since {v1 , v2 } ∈ E(G),
v2 ∉ X, hence N (X) is a cut set and |N (X)| ≥ |X| by a). We claim that the inequality is strict. Assume
|N (X)| = |X|. Let f be the leaf of G adjacent to v1 , then S = V2 \ (X ∪ {f }) is a cut set of G and
|S| = m − |X| − 1. In fact, in GS all vertices of V1 \ N (X) are isolated, except for v1 that is connected only
to f . Moreover, by definition of X, if we add an element of S to GS , we join the connected component
of v1 with some other connected component of GS . Thus, S is a cut set and GS consists of at least
|V1 | − |N (X)| − 1 = m − |X| − 1 isolated vertices, the single edge {v1 , f }, and the connected component
containing the vertices of X and N (X). Hence, cG (S) ≥ m − |X| + 1 > |S| + 1, a contradiction since JG
is unmixed. This shows that |N (X)| > |X|.
Now, the vertices of X are isolated in GN (X) . Moreover, the remaining vertices belong to the same
connected component, because, by definition of X, {v1 , w} ∈ E(G) for every w ∈ V2 \ X and all vertices
in V1 \ N (X) are adjacent to vertices of X. Hence, cG (N (X)) = |X| + 1 < |N (X)| + 1, which again
contradicts the unmixedness of JG . Hence, X = ∅ and v1 has degree m. In the same way it follows that
v2 has degree m.
For the last part of the claim, notice that if v ∈ V (G) \ {v1 , v2 }, the first part implies that every vertex
of G{v} is adjacent to either v1 or v2 . Hence, G{v} is connected and, thus, v is not a cut vertex of G.
Remark 2.7. Let G be a bipartite graph such that JG is unmixed. If G has exactly two cut vertices, they
are not necessarily adjacent. Thus, the converse of the last part of Lemma 2.6 b) does not hold. In fact,
if |V1 | = |V2 | + 1, then v1 and v2 belong to the same bipartition set, hence {v1 , v2 } ∉ E(G). On the other
hand, if |V1 | = |V2 |, let G be the graph in Figure 4. One can check with Macaulay2 [7] that the ideal JG
is unmixed, and we notice that the vertices 2 and 11 are the only cut vertices, but {2, 11} ∉ E(G).
Figure 4
Proposition 2.8. Let H be a bipartite graph with bipartition V1 ⊔ V2 and |V1 | = |V2 |. Let v and f be two
new vertices and let G be the bipartite graph with V (G) = V (H) ∪ {v, f } and E(G) = E(H) ∪ {{v, x} :
x ∈ V1 ∪ {f }}. If JH is unmixed and the neighbours of the leaves of H are adjacent, then JG is unmixed
and
M(G) = {∅, V1 } ∪ {S ∪ {v} : S ∈ M(H)} ∪ {T ⊂ V1 : T ∈ M(H)}.
Moreover, the converse holds if there exists w ∈ V1 such that degG (w) = 2.
Proof. Assume that JH is unmixed and the neighbours of the leaves of H are adjacent. Clearly, ∅, V1 ∈
M(G). If S ∈ M(H), then adding v to GS∪{v} we join f with some other connected component of
HS . Moreover, if w ∈ S, adding w to GS∪{v} we join at least two connected components of HS (since
S ∈ M(H)), which are different components of GS∪{v} . Finally, let T ∈ M(H), T ⊂ V1 . By Lemma 2.6
b), in H there exists a unique cut vertex v2 ∈ V2 and NH (v2 ) = V1 . Hence, adding w ∈ T to GT , we join
at least two components since NG (v) = V1 ∪ {f } and T ∈ M(H).
Conversely, let S ∈ M(G) and suppose first that v ∈ S. Then GS = HS\{v} ⊔ {f } and this implies that
S \ {v} is a cut set of H, since every element of S \ {v} has to join some connected components that only
contain vertices of HS\{v} . Therefore cG (S) = cH (S \ {v}) + 1 = |S| + 1.
Suppose now that v ∉ S. Let w be the leaf of H adjacent to v2 , that is also adjacent to v in G. First
of all, notice that S ⊂ V1 . Indeed, in GS every vertex of V1 \ S is in the same connected component
as v. Thus, a vertex of V2 cannot join different connected components. Since w is adjacent only to v
and v2 , if w ∈ S, then v and v2 cannot be in the same connected component of GS . This means that
V1 ⊂ S, because all the vertices of V1 are adjacent to v and v2 , by Lemma 2.6 b). Thus S = V1 and
cG (S) = |V2 | + 1 = |S| + 1. Hence, we may assume that w ∉ S. We claim that, in this case, S ∈ M(H).
In fact, it is clear that v2 , w, v and f are in the same connected component C of GS , which also contains
all vertices of V1 \ S, since they are adjacent to v. Then, the connected components of GS and HS are
the same except for C, which in HS becomes C \ {v, f } ≠ ∅. Therefore, if x ∈ S joins two connected components of
GS , it also joins the same connected components of HS (or C \ {v, f }, if it joins C), hence S is a cut set of
H. Moreover, cG (S) = cH (S) = |S| + 1.
Conversely, assume that JG is unmixed and let S ∈ M(H). Notice that w is a leaf of H, hence w ∉ S,
by Remark 2.1. We prove that T = S ∪ {v} is a cut set of G. As before, GT = HS ⊔ {f }. Thus the
elements of S join different connected components also in GT and v connects the isolated vertex f with
the connected component of w. Hence, T ∈ M(G) and cH (S) = cG (T ) − 1 = |T | + 1 − 1 = |S| + 1.
Finally, let vi be the cut vertex of H in Vi for i = 1, 2. Since {v, v1 } ∈ E(G), it follows, from Lemma
2.6 b), that {v1 , v2 } ∈ E(G). Then v1 and v2 are adjacent also in H.
In Figure 5, we show an example of the above construction. The ideal JG is unmixed by Proposition
2.8, since H = M4,4 and JH is unmixed by Example 2.2. Moreover, it will follow from Example 5.4 and
Proposition 5.14 that JG is not Cohen-Macaulay.
Figure 5
In Proposition 2.8, the existence of a vertex w ∈ V1 such that degG (w) = 2 means that w is a leaf of
H. Such a vertex need not exist in general, see for instance the graph Mk,k in Example 2.2 for k ≥ 4. However, if JH
is unmixed, it always exists:
Corollary 2.9. Let H be a bipartite graph with bipartition V1 ⊔ V2 , |V1 | = |V2 | and such that JH is
unmixed. Let G be the graph in Proposition 2.8. Then JG is unmixed if and only if the neighbours of the
leaves of H are adjacent.
Example 2.10. The graph H of Figure 4 is such that JH is unmixed, but the two cut vertices 2 and
11 are not adjacent. The graph in Figure 6 is the graph G obtained from H with the construction in
Proposition 2.8. According to Corollary 2.9, JG is not unmixed: in fact, S = N (11) = {8, 10, 12} is a cut
set and cG (S) = 3 ≠ |S| + 1.
Figure 6
3. Basic blocks
In this section we study the basic blocks Fm of our classification, proving that JFm is Cohen-Macaulay.
In what follows we will use several times the following argument.
Remark 3.1. Let G be a graph, v be a vertex of G, H ′ = G \ {v} and assume that M(H ′ ) = {S \ {v} :
S ∈ M(G), v ∈ S}; in particular, v is a cut vertex of G, since ∅ ∈ M(H ′ ). Let JG = ⋂S∈M(G) PS (G) be
the primary decomposition of JG and set A = ⋂S∈M(G), v∉S PS (G) and B = ⋂S∈M(G), v∈S PS (G). Then
JG = A ∩ B and we have the short exact sequence

(2)    0 −→ R/JG −→ R/A ⊕ R/B −→ R/(A + B) −→ 0.
Notice that
i) A = JH , where H is the graph obtained from G by adding all possible edges between the vertices of
NG (v). In other words, V (H) = V (G) and E(H) = E(G) ∪ {{k, ℓ} : k, ℓ ∈ NG (v), k ≠ ℓ}. In fact,
notice that v ∉ S for every S ∈ M(H) by Remark 2.1 and all cut sets of G not containing v are
cut sets of H as well. Thus, M(H) = {S ∈ M(G) : v ∉ S}. Moreover, for every S ∈ M(H), the
connected components of GS and HS are the same, except for the component containing v, which
is Gi in GS and Hi in HS . Nevertheless, G̃i = H̃i , hence PS (G) = PS (H) for every S ∈ M(H).
ii) B = (xv , yv ) + JH ′ , where H ′ = G \ {v}. In fact, if S ∈ M(G) with v ∈ S, then S \ {v} ∈ M(H ′ )
by assumption and we have that PS (G) = (xv , yv ) + PS\{v} (H ′ ). Thus,

B = (xv , yv ) + ⋂S∈M(G), v∈S PS\{v} (H ′ ) = (xv , yv ) + ⋂T ∈M(H ′ ) PT (H ′ ) = (xv , yv ) + JH ′ .
iii) A + B = (xv , yv ) + JH ′′ , where H ′′ = H \ {v}.
We now describe a new family of Cohen-Macaulay binomial edge ideals associated with non-bipartite
graphs, which will be useful in what follows. Let Kn be the complete graph on the vertex set [n] and
W = {v1 , . . . , vr } ⊆ [n]. Let H be the graph obtained from Kn by attaching, for every i = 1, . . . , r, a
complete graph Khi to Kn in such a way that V (Kn ) ∩ V (Khi ) = {v1 , . . . , vi }, for some hi > i. We say
that the graph H is obtained by adding a fan to Kn on the set W . For example, Figure 7 shows the result
of adding a fan to K6 on a set W of three vertices.
Lemma 3.2. Let Kn be the complete graph on [n] and W1 ⊔ · · · ⊔ Wk be a partition of a subset W ⊆ [n].
Let G be the graph obtained from Kn by adding a fan on each set Wi . Then JG is Cohen-Macaulay.
Figure 7. Adding a fan to K6 on three vertices
Proof. First we show that JG is unmixed. For every i = 1, . . . , k, set Wi = {vi,1 , . . . , vi,ri } and Mi =
{∅} ∪ {{vi,1 , . . . , vi,h } : 1 ≤ h ≤ ri }. We claim that
(3)    M(G) = {T1 ∪ · · · ∪ Tk : Ti ∈ Mi , T1 ∪ · · · ∪ Tk ⊊ [n]}.
Let T = T1 ∪ · · · ∪ Tk ≠ ∅, with Ti ∈ Mi for i = 1, . . . , k, and T ⊊ [n]. Let v ∈ T . Then v ∈ Tj
for some j, say v = vj,ℓ , with 1 ≤ ℓ ≤ rj . Hence, if we add v to the graph GT , it joins the connected
component containing Kn \ T (which is non-empty since T ⊊ [n]) with Khj,ℓ \ T , where V (Khj,ℓ ) ∩ V (Kn ) =
{vj,1 , . . . , vj,ℓ }. This shows that cG (T ) > cG (T \ {v}) for every v ∈ T , thus T ∈ M(G).
Conversely, let T ∈ M(G). First notice that T ≠ [n], since cG ([n]) = cG ([n] \ {vi,ri }) for every i.
Moreover, T does not contain any vertex v ∈ V (G) \ (W1 ∪ · · · ∪ Wk ), otherwise v belongs to exactly one maximal
clique of G, see Remark 2.1, and then cG (T ) = cG (T \ {v}). Hence T ⊆ W1 ∪ · · · ∪ Wk and T ⊊ [n]. Let T = T1 ∪ · · · ∪ Tk ,
where Ti ⊆ Wi . We want to show that, if vi,j ∈ Ti ⊆ T , then vi,h ∈ T for every 1 ≤ h < j. Assume
vi,h ∉ Ti for some h < j. Then cG (T ) = cG (T \ {vi,j }) because all maximal cliques of G containing vi,j
contain vi,h as well, since h < j. This shows that Ti ∈ Mi for every i.
Finally, for every T ∈ M(G), since GT consists of |T | connected components that are complete graphs
(Khj,ℓ \ T for every j = 1, . . . , k and ℓ = 1, . . . , |Tj |) and a graph obtained from Kn \ T by adding a fan on
each Wi \Ti , it follows that cG (T ) = |T |+ 1. This means that JG is unmixed and dim(R/JG ) = |V (G)|+ 1.
In order to prove that JG is Cohen-Macaulay, we proceed by induction on k ≥ 1 and |Wk | ≥ 1. Let
k = 1 and set W1 = {1, . . . , r}. If |W1 | = 1, then the claim follows by [18, Theorem 2.7]. Assume that
|W1 | = r ≥ 2 and the claim true for r − 1. Notice that G = cone(1, G1 ⊔ G2 ), where G1 ≅ Kh1 −1 (graph
isomorphism) and G2 is the graph obtained from Kn \ {1} by adding a fan on the clique {2, . . . , r}. We
know that JG1 is Cohen-Macaulay by [3, Corollary 2.8] and JG2 is Cohen-Macaulay by induction. Hence,
the claim follows by [18, Theorem 3.8].
Now, let k ≥ 2 and assume the claim true for k − 1. Again, if |Wk | = 1, the claim follows by
induction and by [18, Theorem 2.7]. Assume that |Wk | = rk ≥ 2 and the claim true for rk − 1. For
simplicity, let Wk = {1, . . . , rk }. Let JG = ⋂S∈M(G) PS (G) be the primary decomposition of JG and set
A = ⋂S∈M(G), 1∉S PS (G) and B = ⋂S∈M(G), 1∈S PS (G). Then JG = A ∩ B.
By Remark 3.1, A = JH , where H is a complete graph on the vertices of {1} ∪ NG (1) to which we add
a fan on the cliques W1 , . . . , Wk−1 . Hence R/A is Cohen-Macaulay by induction on k and depth(R/A) =
|V (G)| + 1.
Notice that H ′ = G \ {1} is the disjoint union of a complete graph and a graph K ′ , which is obtained
by adding a fan to Kn \ {1} ≅ Kn−1 on the cliques W1 , . . . , Wk−1 and Wk \ {1}. From (3), it follows
that M(H ′ ) = {S \ {1} : S ∈ M(G), 1 ∈ S}, thus B = (x1 , y1 ) + JH ′ by Remark 3.1. By induction on
|Wk |, JK ′ is Cohen-Macaulay, hence JH ′ is Cohen-Macaulay since it is the sum of Cohen-Macaulay ideals
on disjoint sets of variables. In particular, depth(R/B) = |V (H ′ )| + 2 = |V (G)| + 1 (it follows from the
formula for the dimension [10, Corollary 3.4]).
Finally, by Remark 3.1, A + B = (x1 , y1 ) + JH ′′ , where H ′′ = H \ {1}. Hence R/(A + B) is Cohen-Macaulay by induction on k and depth(R/(A + B)) = |V (G)|.
The Depth Lemma [20, Lemma 3.1.4] applied to the short exact sequence (2) yields depth(R/JG ) =
|V (G)| + 1. The claim follows from the first part, since dim(R/JG ) = |V (G)| + 1.
Notice that the graphs produced by Lemma 3.2 are neither generalized block graphs (see [13]) nor, for
k ≥ 2, closed graphs (studied in [5]). Hence they form a new family of non-bipartite graphs whose binomial edge
ideal is Cohen-Macaulay.
Now we prove that the binomial edge ideals of the graphs Fm (see Figure 1) are Cohen-Macaulay. The
graphs Fm are the basic blocks in our classification, Theorem 6.1.
Recall that, for every m ≥ 1, if n = 2m, Fm is the graph on the vertex set [n] and with edge set
E(Fm ) = {{2i, 2j − 1} : i = 1, . . . , m, j = i, . . . , m} .
Notice that Fm , with m ≥ 2, can be obtained from Fm−1 using the construction of Proposition 2.8.
Proposition 3.3. For every m ≥ 1, JFm is Cohen-Macaulay.
Proof. First we show that JFm is unmixed. We proceed by induction on m ≥ 1. If m = 1, then JF1 is
a principal ideal, hence it is prime and unmixed of height 1. Let m ≥ 2 and assume the claim true for
m − 1. Then Fm is obtained from Fm−1 by adding the vertices n − 1 and n and connecting n − 1 to the
vertices 2, 4, . . . , n. Since JFm−1 is unmixed by induction and {2, n − 3} ∈ E(Fm−1 ), by Proposition 2.8,
it follows that JFm is unmixed and
(4)
M(Fm ) = {∅} ∪ {{2, 4, . . . , 2i} : 1 ≤ i ≤ m − 1} ∪ {{n − 1} ∪ S : S ∈ M(Fm−1 )}.
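The recursive description (4) of M(Fm ) can be cross-checked against a brute-force enumeration of cut sets for small m. The following Python sketch is an illustration only, not part of the paper (the function names are ours):

```python
from itertools import combinations

def F_edges(m):
    """E(F_m) = {{2i, 2j-1} : i = 1..m, j = i..m} on the vertex set [2m]."""
    return {frozenset({2 * i, 2 * j - 1})
            for i in range(1, m + 1) for j in range(i, m + 1)}

def n_comp(n, edges, removed):
    """c_G(S): connected components of the subgraph induced on [n] without S."""
    alive = set(range(1, n + 1)) - set(removed)
    adj = {v: set() for v in alive}
    for e in edges:
        a, b = tuple(e)
        if a in alive and b in alive:
            adj[a].add(b)
            adj[b].add(a)
    seen, count = set(), 0
    for v in alive:
        if v not in seen:
            count += 1
            stack = [v]
            while stack:
                u = stack.pop()
                if u not in seen:
                    seen.add(u)
                    stack.extend(adj[u] - seen)
    return count

def cut_sets_brute(m):
    """M(F_m) straight from the definition of cut sets."""
    n, E = 2 * m, F_edges(m)
    c = lambda S: n_comp(n, E, S)
    return {frozenset(S)
            for r in range(n)
            for S in combinations(range(1, n + 1), r)
            if all(c(set(S) - {i}) < c(S) for i in S)}

def cut_sets_recursive(m):
    """Equation (4): M(F_m) = {∅} ∪ {{2,4,...,2i} : 1 ≤ i ≤ m-1}
    ∪ {{2m-1} ∪ S : S ∈ M(F_{m-1})}."""
    if m == 1:
        return {frozenset()}
    return ({frozenset()}
            | {frozenset(range(2, 2 * i + 1, 2)) for i in range(1, m)}
            | {S | {2 * m - 1} for S in cut_sets_recursive(m - 1)})

for m in (2, 3, 4):
    assert cut_sets_brute(m) == cut_sets_recursive(m)
print("equation (4) verified for m = 2, 3, 4")
```

For m = 2, both sides give {∅, {2}, {3}}, the cut sets of the path F2.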
Now we prove that JFm is Cohen-Macaulay by induction on m ≥ 1. The graphs F1 and F2 are paths,
hence the ideals JF1 and JF2 are complete intersections, by [5, Corollary 1.2], thus Cohen-Macaulay.
Let m ≥ 3 and assume that JFm−1 is Cohen-Macaulay. Let JFm = ⋂S∈M(Fm ) PS (Fm ) be the primary
decomposition of JFm and define A = ⋂S∈M(Fm ), n−1∉S PS (Fm ) and B = ⋂S∈M(Fm ), n−1∈S PS (Fm ). Then
JFm = A ∩ B.
By Remark 3.1, A = JH , where H is obtained by adding a fan to the complete graph with vertex
set NFm (n − 1) = {2, 4, . . . , n} on the set NFm (n − 1), hence it is Cohen-Macaulay by Lemma 3.2 and
depth(R/A) = n + 1.
Since Fm \ {n − 1} = Fm−1 ⊔ {n}, by (4), M(Fm−1 ⊔ {n}) = {S \ {n − 1} : S ∈ M(Fm ), n − 1 ∈ S}.
Thus, B = (xn−1 , yn−1 ) + JFm−1 ⊔{n} = (xn−1 , yn−1 ) + JFm−1 , hence it is Cohen-Macaulay by induction
and depth(R/B) = n + 1.
Finally, A + B = (xn−1 , yn−1 ) + JH ′′ , where H ′′ = H \ {n − 1}, which is Cohen-Macaulay again by
Lemma 3.2 and depth(R/(A + B)) = n.
The Depth Lemma applied to the exact sequence (2) yields depth(R/JFm ) = n + 1. Moreover, since
JFm is unmixed, it follows that dim(R/JFm ) = n + 1 and, therefore, JFm is Cohen-Macaulay.
4. Gluing graphs: operations ∗ and ◦
In this section we consider two operations that, together with the graphs Fm , are the main ingredients
of Theorem 6.1. Given two (not necessarily bipartite) graphs G1 and G2 , we glue them to obtain a
new graph G. If G1 and G2 are bipartite, both constructions preserve the Cohen-Macaulayness of the
associated binomial edge ideal. The first operation is a particular case of the one studied by Rauf and
Rinaldo in [18, Section 2].
Definition 4.1. For i = 1, 2, let Gi be a graph with at least one leaf fi . We define G = (G1 , f1 ) ∗ (G2 , f2 ) to be the graph obtained by identifying f1 and f2 (see Figure 8). If it is not important to specify the vertices fi or it is clear from the context, we simply write G1 ∗ G2 .
Figure 8. (a) A graph G1 with leaf f1 ; (b) a graph G2 with leaf f2 ; (c) the graph (G1 , f1 ) ∗ (G2 , f2 ), in which f1 = f2 .
In the next theorem we recall some results about the operation ∗, see [18, Lemma 2.3, Proposition 2.6,
Theorem 2.7].
Theorem 4.2. For i = 1, 2, consider a graph Gi with at least one leaf fi and G = (G1 , f1 ) ∗ (G2 , f2 ). Let
v1 and v2 be the neighbours of the leaves and let v be the vertex obtained by identifying f1 and f2 . If
A = {S1 ∪ S2 : Si ∈ M(Gi ), i = 1, 2} and
B = {S1 ∪ S2 ∪ {v} : Si ∈ M(Gi ) and vi ∉ Si , i = 1, 2},
the following properties hold:
a) M(G) = A ∪ B;
b) JG is unmixed if and only if JG1 and JG2 are unmixed;
c) JG is Cohen-Macaulay if and only if JG1 and JG2 are Cohen-Macaulay.
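As a quick sanity check of a) and b), one can glue two copies of the path F2 at a leaf and compare the brute-forced cut sets of the result with the sets A and B above. The labels below are chosen for illustration only (the second copy lives on {4, 5, 6, 7} and the identified leaf is v = 4); the helpers simply count components and enumerate cut sets.

```python
from itertools import combinations

def n_components(vertices, edges, removed=frozenset()):
    # number of connected components of the induced subgraph G_S
    verts = set(vertices) - set(removed)
    adj = {v: [] for v in verts}
    for e in edges:
        a, b = tuple(e)
        if a in verts and b in verts:
            adj[a].append(b)
            adj[b].append(a)
    seen, count = set(), 0
    for v in verts:
        if v not in seen:
            count += 1
            stack = [v]
            while stack:
                u = stack.pop()
                if u not in seen:
                    seen.add(u)
                    stack.extend(adj[u])
    return count

def cut_sets(vertices, edges):
    # S is a cut set when every v in S strictly increases the component count
    return [S for r in range(len(vertices) + 1)
            for S in map(frozenset, combinations(sorted(vertices), r))
            if all(n_components(vertices, edges, S - {v})
                   < n_components(vertices, edges, S) for v in S)]

# G1 = F2 (path 1-2-3-4, leaf f1 = 4 with neighbour v1 = 3),
# G2 = F2 relabelled (path 4-5-6-7, leaf f2 = 4 with neighbour v2 = 5).
V1, E1 = {1, 2, 3, 4}, [(1, 2), (2, 3), (3, 4)]
V2, E2 = {4, 5, 6, 7}, [(4, 5), (5, 6), (6, 7)]
# G = (G1, 4) * (G2, 4): the two leaves are identified in v = 4
V, E, v1, v2, v = V1 | V2, E1 + E2, 3, 5, 4

M1, M2 = cut_sets(V1, E1), cut_sets(V2, E2)
A = {S1 | S2 for S1 in M1 for S2 in M2}
B = {S1 | S2 | {v} for S1 in M1 for S2 in M2
     if v1 not in S1 and v2 not in S2}
assert set(cut_sets(V, E)) == A | B                              # property a)
assert all(n_components(V, E, S) == len(S) + 1 for S in A | B)   # property b)
```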
We now introduce the second operation.
Definition 4.3. For i = 1, 2, let Gi be a graph with at least one leaf fi , vi its neighbour and assume
degGi (vi ) ≥ 3. We define G = (G1 , f1 ) ◦ (G2 , f2 ) to be the graph obtained from G1 and G2 by removing
the leaves f1 , f2 and identifying v1 and v2 (see Figure 9). If it is not important to specify the leaves fi or
it is clear from the context, then we simply write G1 ◦ G2 .
We denote by v the vertex of G resulting from the identification of v1 and v2 and, with abuse of notation,
we write V (G1 ) ∩ V (G2 ) = {v}.
Notice that, if degGi (vi ) = 2 for i = 1, 2, then (G1 , f1 ) ◦ (G2 , f2 ) = (G1 \ {f1 }, v1 ) ∗ (G2 \ {f2 }, v2 ). On
the other hand, we do not allow degG1 (v1 ) = 2 and degG2 (v2 ) ≥ 3 (or vice versa), since in this case the
operation ◦ does not preserve unmixedness, see Remark 4.7 (ii).
Remark 4.4. Unlike the operation ∗ (cf. Theorem 4.2), if one of JG1 and JG2 is not Cohen-Macaulay, then JG1 ◦G2 may not be unmixed, even if G1 and G2 are bipartite. For example, let G1 and G2 be the graphs in Figure 9(a) and 9(b). Then JG1 ◦G2 is not unmixed, even though JG1 = JF4 is Cohen-Macaulay (by Proposition 3.3) and JG2 = JM4,4 is unmixed (by Example 2.2). In fact, S = {5, 7, 8, 10, 12} ∈ M(G), but cG (S) = 5 ≠ |S| + 1.
Figure 9. (a) The graph G1 , with leaf f1 and its neighbour v1 ; (b) the graph G2 , with leaf f2 and its neighbour v2 ; (c) the graph G = G1 ◦ G2 , with vertices labelled 1, . . . , 13.
We describe the structure of the cut sets of G1 ◦ G2 under some extra assumptions on G1 and G2 . Under these assumptions, ◦ preserves unmixedness.
Theorem 4.5. Let G = G1 ◦ G2 and set V (G1 ) ∩ V (G2 ) = {v}, where degGi (v) ≥ 3 for i = 1, 2. If for
i = 1, 2 there exists ui ∈ NGi (v) with degGi (ui ) = 2, then
(5) M(G) = A ∪ B,
where
A = {S1 ∪ S2 : Si ∈ M(Gi ), i = 1, 2, v ∉ S1 ∪ S2 } and
B = {S1 ∪ S2 : Si ∈ M(Gi ), i = 1, 2, S1 ∩ S2 = {v}}.
If JG1 and JG2 are unmixed and for i = 1, 2 there exists ui ∈ NGi (v) with degGi (ui ) = 2, then JG is
unmixed. The converse holds if G is bipartite. In particular, if G is bipartite and JG is unmixed, the cut
sets of G are described in (5).
Proof. Let S = S1 ∪ S2 ⊂ V (G), where S1 = S ∩ V (G1 ) and S2 = S ∩ V (G2 ). Notice that
(6) cG (S) = cG1 (S1 ) + cG2 (S2 ) − 1, if v ∉ S,
(7) cG (S) = cG1 (S1 ) + cG2 (S2 ) − 2, if v ∈ S.
In fact, if v ∉ S, the connected components of GS are those of (G1 )S1 and (G2 )S2 , where the component containing v is counted once. On the other hand, if v ∈ S, clearly v ∈ S1 ∩ S2 and the connected components of GS are those of (G1 )S1 and (G2 )S2 , except for the two leaves f1 and f2 .
In order to prove (5), we show the two inclusions.
⊆: Let S ∈ M(G) and define S1 and S2 as before. Suppose by contradiction that S1 ∉ M(G1 ), i.e., there exists w ∈ S1 such that cG1 (S1 ) = cG1 (S1 \ {w}). If v ∉ S, then by (6)
cG (S \ {w}) = cG1 (S1 \ {w}) + cG2 (S2 ) − 1 = cG1 (S1 ) + cG2 (S2 ) − 1 = cG (S),
a contradiction. On the other hand, if v ∈ S and w ≠ v, by (7) we have
cG (S \ {w}) = cG1 (S1 \ {w}) + cG2 (S2 ) − 2 = cG1 (S1 ) + cG2 (S2 ) − 2 = cG (S),
again a contradiction. We show that the case w = v cannot occur. In fact, by assumption, there exists u1 ∈ NG1 (v) such that degG1 (u1 ) = 2. Since v ∈ S, if u1 ∈ S we would have cG (S) = cG (S \ {u1 }), hence u1 ∉ S. Thus cG1 (S1 ) > cG1 (S1 \ {v}), because by adding v to (G1 )S1 we join the connected component of u1 and the isolated vertex f1 , which is a leaf in G1 . Hence w ≠ v. The same argument also shows that S2 ∈ M(G2 ).
⊇: Let S = S1 ∪ S2 , with Si ∈ M(Gi ), for i = 1, 2. Assume first S1 ∩ S2 = {v}. By the equalities (6)
and (7) we have
cG (S \ {v}) = cG1 (S1 \ {v}) + cG2 (S2 \ {v}) − 1 ≤ cG1 (S1 ) + cG2 (S2 ) − 3 = cG (S) − 1 < cG (S).
Let w ∈ S, w ≠ v. Without loss of generality, we may assume w ∈ S1 . Then
cG (S \ {w}) = cG1 (S1 \ {w}) + cG2 (S2 ) − 2 ≤ cG1 (S1 ) + cG2 (S2 ) − 3 = cG (S) − 1 < cG (S).
Assume now that v ∉ S1 ∪ S2 . Let w ∈ S, and without loss of generality w ∈ S1 . Then
cG (S \ {w}) = cG1 (S1 \ {w}) + cG2 (S2 ) − 1 ≤ cG1 (S1 ) + cG2 (S2 ) − 2 = cG (S) − 1 < cG (S).
Let now JG1 and JG2 be unmixed and suppose that for i = 1, 2 there exists ui ∈ NGi (v) with degGi (ui ) = 2. By the last assumption, the cut sets of G are described in (5). Let S ∈ M(G) and Si = S ∩ V (Gi ) for i = 1, 2. Thus, by (6) and (7),
(i) if v ∉ S, cG (S) = cG1 (S1 ) + cG2 (S2 ) − 1 = |S1 | + 1 + |S2 | + 1 − 1 = |S1 | + |S2 | + 1 = |S| + 1;
(ii) if v ∈ S, cG (S) = cG1 (S1 ) + cG2 (S2 ) − 2 = |S1 | + 1 + |S2 | + 1 − 2 = |S1 | + |S2 | = |S| + 1.
It follows that JG is unmixed.
Conversely, let JG be unmixed and G bipartite. If S is a cut set of G1 , then it is also a cut set of G
and clearly cG1 (S) = cG (S); therefore JG1 is unmixed and the same holds for JG2 . By Proposition 2.3,
the graphs G, G1 and G2 have exactly two leaves. Let fi be the leaf of Gi adjacent to v and gi be the
other leaf of Gi . Thus, g1 and g2 are the leaves of G.
By symmetry, it is enough to prove that there exists u1 ∈ NG1 (v) such that degG1 (u1 ) = 2. For i = 1, 2,
let V (Gi ) = Vi ∪ Wi and assume |V1 | ≤ |W1 |. By Remark 2.4, we have one of the following two cases:
a) if |V1 | = |W1 |, we may assume f1 ∈ W1 and g1 ∈ V1 . Set S = (W1 \ {f1 }) ∪ {v}. Hence,
cG1 (S) = |V1 | = |W1 | = |S|.
b) If |W1 | = |V1 | + 1, then f1 , g1 ∈ W1 . Hence v ∈ V1 . Set S = (W1 \ {f1 , g1 }) ∪ {v}. Thus,
cG1 (S) = |V1 | = |W1 | − 1 = |S|.
First suppose |V (G2 )| even and assume f2 ∈ W2 . Hence, v, g2 ∈ V2 and T = V2 \ {g2 } is a cut set of G2 .
Now, let |V (G2 )| be odd and assume f2 ∈ W2 . Hence, g2 ∈ W2 , v ∈ V2 and |W2 | = |V2 | + 1. Then
T = V2 is a cut set of G2 .
In both cases, notice that S ∪ T is not a cut set of G, since S ∩ T = {v} and, by (7),
cG (S ∪ T ) = cG1 (S) + cG2 (T ) − 2 = |S| + |T | − 1 = |S ∪ T |,
which contradicts the unmixedness of JG . Let u ∈ S ∪ T such that cG ((S ∪ T ) \ {u}) = cG (S ∪ T ) = |S ∪ T |.
We show that u ∈ S and u ≠ v. If u ∉ S, then u ∈ T and u ≠ v. By (7),
cG ((S ∪ T ) \ {u}) = cG1 (S) + cG2 (T \ {u}) − 2 < |S| + cG2 (T ) − 2 = |S| + |T | − 1 = |S ∪ T | = cG (S ∪ T ),
against our assumption (the inequality holds since T is a cut set of G2 and the second equality follows
from the unmixedness of JG2 ). Thus, u ∈ S. Moreover, in both cases cG1 (S \ {v}) = cG1 (S) = |S| (since
v is a leaf of (G1 )S\{v} ) and, by (6),
cG ((S∪T )\{v}) = cG1 (S\{v})+cG2 (T \{v})−1 = |S|+|T |−|NG2 (v)|+2−1 < |S|+|T |−1 = |S∪T | = cG (S∪T ),
where the inequality holds since degG2 (v) ≥ 3. This contradicts our assumption, thus u ≠ v.
We conclude that u ∈ S \ {v}. Since u ≠ f1 , g1 , we have degG1 (u) ≥ 2. On the other hand, since cG ((S ∪ T ) \ {u}) = cG (S ∪ T ), it follows that u ∈ NG1 (v) and degG1 (u) = 2.
Corollary 4.6. Let G = Fm1 ◦ · · · ◦ Fmk , where mi ≥ 3 for i = 1, . . . , k. Then JG is unmixed.
Proof. Set G1 = Fm1 ◦ · · · ◦ Fmk−1 , G2 = Fmk and let v be the only vertex of V (G1 ) ∩ V (G2 ). We proceed
by induction on k ≥ 2. If k = 2, the claim follows by Theorem 4.5, because JG1 and JG2 are unmixed by
Proposition 3.3 and for i = 1, 2, there exists ui ∈ NGi (v) such that degGi (ui ) = 2, by definition of Fmi .
Now let k > 2 and assume the claim true for k − 1. By induction, JG1 is unmixed. Since mk−1 ≥ 3,
there exists u1 ∈ NG1 (v) such that degG1 (u1 ) = 2. The claim follows again by Theorem 4.5.
Remark 4.7. In Corollary 4.6 the condition mi ≥ 3, for i = 2, . . . , k − 1, cannot be omitted. For instance, the binomial edge ideal JF3 ◦F2 ◦F3 is not unmixed: in fact S = {3, 5, 6, 8} is a cut set and cF3 ◦F2 ◦F3 (S) = 4 ≠ |S| + 1, see Figure 10.
Figure 10. The graph F3 ◦ F2 ◦ F3 , with vertices labelled 1, . . . , 10.
On the other hand, we may allow m1 = mk = 2 since, in this case, G = Fm1 ◦ · · · ◦ Fmk = F1 ∗ (Fm2 ◦ · · · ◦ Fmk−1 ) ∗ F1 . Hence, JG is unmixed by Theorem 4.2 and Corollary 4.6.
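The failure of unmixedness in Remark 4.7 is easy to check by machine. The sketch below is illustrative only; the edge list encodes F3 ◦ F2 ◦ F3 with the labelling of Figure 10 as reconstructed here (odd vertices in one bipartition class, even in the other). It confirms that S = {3, 5, 6, 8} is a cut set with cG(S) = 4 ≠ |S| + 1 = 5.

```python
def n_components(vertices, edges, removed=frozenset()):
    # number of connected components of the induced subgraph G_S
    verts = set(vertices) - set(removed)
    adj = {v: [] for v in verts}
    for a, b in edges:
        if a in verts and b in verts:
            adj[a].append(b)
            adj[b].append(a)
    seen, count = set(), 0
    for v in verts:
        if v not in seen:
            count += 1
            stack = [v]
            while stack:
                u = stack.pop()
                if u not in seen:
                    seen.add(u)
                    stack.extend(adj[u])
    return count

# F3 ◦ F2 ◦ F3 with the labelling of Figure 10 (as reconstructed here)
V = set(range(1, 11))
E = [(1, 2), (2, 3), (2, 5), (3, 4), (4, 5), (5, 6),
     (6, 7), (6, 9), (7, 8), (8, 9), (9, 10)]

S = frozenset({3, 5, 6, 8})
cS = n_components(V, E, S)
assert all(n_components(V, E, S - {v}) < cS for v in S)  # S is a cut set
assert cS == 4 and cS != len(S) + 1   # hence J_{F3◦F2◦F3} is not unmixed
```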
Let n ≥ 3, W1 ⊔ · · · ⊔ Wk be a partition of a subset of [n] and Wi = {vi,1 , . . . , vi,ri } for some ri ≥ 1 and
i = 1, . . . , k. Let E be the graph obtained from Kn by adding a fan on each set Wi in such a way that
we attach a complete graph Kh+1 to Kn , with V (Kn ) ∩ V (Kh+1 ) = {vi,1 , . . . , vi,h }, for i = 1, . . . , k and
h = 1, . . . , ri , see Figure 11 (cf. Figure 7). By Lemma 3.2, JE is Cohen-Macaulay.
Lemma 4.8. Let G = Fm1 ◦· · ·◦Fmk ◦E, where E is the graph defined above, mi ≥ 3 for every i = 2, . . . , k
and V (Fm1 ◦ · · · ◦ Fmk ) ∩ V (E) = {v}. Assume that v ∈ W1 and |W1 | ≥ 2. Then JG is unmixed.
Proof. Set G1 = Fm1 ◦ · · · ◦ Fmk and G2 = E. Then JG1 is unmixed by Corollary 4.6 and JG2 is Cohen-Macaulay by Lemma 3.2, hence it is unmixed.
Notice that, since mk ≥ 3, there exists u1 ∈ NG1 (v) such that degG1 (u1 ) = 2. Moreover, since
|W1 | ≥ 2 and by definition of G2 = E, we attach K3 to Kn in such a way that |V (Kn ) ∩ V (K3 )| = 2
and v ∈ V (Kn ) ∩ V (K3 ). Thus, there exists u2 ∈ K3 , hence u2 ∈ NG2 (v), such that degG2 (u2 ) = 2. The
statement follows by Theorem 4.5.
Figure 11. The graph E
In Lemma 4.8 we assume |W1 | ≥ 2, since this is the only case we need in the following theorem.
Moreover, in the next statement the case F = E is useful to prove that the binomial edge ideal associated
with Fm1 ◦ · · · ◦ Fmk ◦ Fn is Cohen-Macaulay.
Theorem 4.9. Let G = Fm1 ◦ · · · ◦ Fmk ◦ F , where mi ≥ 3 for every i = 1, . . . , k and either F = Fn for some n ≥ 3 or F = E is the graph defined before Lemma 4.8. Then JG is Cohen-Macaulay.
Proof. Let V (Fm1 ◦ · · · ◦ Fmk ) ∩ V (F ) = {w} and call fk and f the leaves that we remove from Fm1 ◦ · · · ◦ Fmk and F . Let JG = ⋂_{S∈M(G)} PS (G) be the primary decomposition of JG and set A = ⋂_{S∈M(G), w∉S} PS (G) and B = ⋂_{S∈M(G), w∈S} PS (G).
We proceed by induction on k ≥ 1. First assume k = 1 and, for simplicity, let m = m1 . By Remark
3.1, the ideal A is the binomial edge ideal of the graph H, obtained by adding a fan to the complete graph
with vertex set {w} ∪ NG (w) on the sets NFm (w) \ {fk } and NF (w) \ {f }. Hence R/A is Cohen-Macaulay
and depth(R/A) = |V (G)| + 1 by Lemma 3.2.
Notice that G \ {w} = (Fm \ {w, fk }) ⊔ (F \ {w, f }). By Theorem 4.5 and Remark 3.1, B = (xw , yw ) + JFm \{w,fk } + JF \{w,f } , where Fm \ {w, fk } ≅ Fm−1 . Moreover, if F = E, then E \ {w, f } is of the same form as E, otherwise F = Fn and Fn \ {w, f } ≅ Fn−1 . In any case, JFm \{w,fk } and JF \{w,f } are Cohen-Macaulay (by Lemma 3.2 and Proposition 3.3), hence B is Cohen-Macaulay since it is the sum of Cohen-Macaulay ideals on disjoint sets of variables. In particular, it follows from the formula for the dimension [10, Corollary 3.4] that depth(R/B) = |V (Fm−1 )| + 1 + |V (F \ {w, f })| + 1 = |V (G)| + 1.
Finally, A + B = (xw , yw ) + JH ′′ , where H ′′ = H \ {w} is the binomial edge ideal of the graph obtained
by adding a fan to the complete graph with vertex set NG (w) on the sets NFm (w) \ {fk } and NF (w) \ {f }.
Hence R/(A + B) is Cohen-Macaulay and depth(R/(A + B)) = |V (G)| by Lemma 3.2.
The Depth Lemma applied to the short exact sequence (2) yields depth(R/JG ) = |V (G)| + 1. The claim
follows by Lemma 4.8 (resp. Corollary 4.6), since dim(R/JG ) = |V (G)| + 1.
Now let k > 1 and assume the claim true for k − 1. By Remark 3.1, the ideal A is the binomial edge ideal of the graph H = Fm1 ◦ · · · ◦ Fmk−1 ◦ F ′ , where F ′ is obtained by adding a fan to the complete graph
with vertex set {w} ∪ NG (w) on the sets NFmk (w) \ {fk } and NF (w) \ {f }. Notice that, since mk ≥ 3,
|NFmk (w) \ {fk }| ≥ 2 and we are in the assumption of Lemma 4.8. Hence, R/A is Cohen-Macaulay by
induction and depth(R/A) = |V (G)| + 1.
Similarly to the case k = 1, the ideal B equals (xw , yw ) + J(Fm1 ◦···◦Fmk )\{w,fk } + JF \{w,f } , where (Fm1 ◦ · · · ◦ Fmk ) \ {w, fk } ≅ Fm1 ◦ · · · ◦ Fmk −1 and JFm1 ◦···◦Fmk −1 is Cohen-Macaulay by induction (notice that, if mk = 3, then Fm1 ◦ · · · ◦ Fmk −1 = Fm1 ◦ · · · ◦ Fmk−1 ∗ F1 and the corresponding binomial edge ideal is
Cohen-Macaulay by induction and Theorem 4.2). Moreover, if F = E, then E \ {w} is of the same form as E, otherwise F = Fn and Fn \ {w, f } ≅ Fn−1 . Thus JF \{w,f } is Cohen-Macaulay (by Lemma 3.2 and Proposition 3.3), hence B is Cohen-Macaulay since it is the sum of Cohen-Macaulay ideals on disjoint sets of variables. In particular, depth(R/B) = |V (Fm1 ◦ · · · ◦ Fmk −1 )| + 1 + |V (F \ {w, f })| + 1 = |V (G)| + 1 (it follows from the formula for the dimension [10, Corollary 3.4]).
Finally, A+ B = (xw , yw )+ JH ′′ , where H ′′ = H \{w} (again, since mk ≥ 3, we have |NFmk (w)\{fk }| ≥
2). Hence R/(A + B) is Cohen-Macaulay by induction and depth(R/(A + B)) = |V (G)|.
The Depth Lemma applied to the short exact sequence (2) yields depth(R/JG ) = |V (G)| + 1. Notice
that, if F = E, the ideal JG is unmixed by Lemma 4.8, whereas, if F = Fn , it is unmixed by Corollary
4.6. This implies that dim(R/JG ) = |V (G)| + 1 and the claim follows.
5. The dual graph of binomial edge ideals
In this section we study the dual graph of binomial edge ideals. This is one of the main tools to prove
that, if G is bipartite and JG is Cohen-Macaulay, then G can be obtained recursively via a sequence of
operations ∗ and ◦ on a finite set of graphs of the form Fm , Theorem 6.1 c).
Let I be an ideal in a polynomial ring A = K[x1 , . . . , xn ] and let p1 , . . . , pr be the minimal prime ideals
of I. Following [2], the dual graph D(I) of I is a graph with vertex set {1, . . . , r} and edge set
{{i, j} : ht(pi + pj ) − 1 = ht(pi ) = ht(pj ) = ht(I)}.
Notice that, if D(I) is connected, then I is unmixed. In [8], Hartshorne proved that if A/I is Cohen-Macaulay, then D(I) is connected. We will show that this is indeed an equivalence for binomial edge ideals
of bipartite graphs. Nevertheless, this does not hold when G is not bipartite, see Remark 5.1.
To ease the notation, we denote by D(G) the dual graph of the binomial edge ideal JG of a graph G.
Moreover, we denote by PS (G) or PS both the minimal primes of JG and the vertices of D(G).
Remark 5.1. The dual graph of the non-bipartite graph G in Figure 12(a) is connected, see Figure 12(b),
but using Macaulay2 [7] one can check that JG is not Cohen-Macaulay.
Figure 12. (a) The graph G, on the vertices 1, . . . , 8; (b) the dual graph of G, with vertices P∅ , P{2} , P{5} , P{2,4} , P{2,5} , P{3,4} , P{3,5} , P{2,3,4} , P{3,4,5} .
We now describe the edges of the dual graph of JG , when JG is unmixed. This result holds for non-bipartite graphs as well.
Theorem 5.2. Let G be a graph such that JG is unmixed and let S, T ∈ M(G), with |T | ≥ |S|. Denote
by PS the minimal primes of JG . Then the following properties hold:
a) if |T \ S| > 1, then {PS , PT } is not an edge of D(G);
b) if |T \ S| = 1 and S ⊂ T , then {PS , PT } is an edge of D(G);
c) if T \ S = {t} and S * T , then {PS , PT } is an edge of D(G) if and only if t is not a cut vertex of
GS .
Proof. Let E1 , E2 , . . . , Ec(S) be the connected components of GS .
a) Let v, w ∈ T \ S. Then PS + PT ⊇ PS + (xv , xw , yv , yw ). If Ej and Ek are the connected components
of GS containing v and w respectively (possibly j = k), it follows that
PS + (xv , xw , yv , yw ) = ( ⋃_{i∈S∪{v,w}} {xi , yi }, JẼ1 , . . . , J(Ẽj ){v} , . . . , J(Ẽk ){w} , . . . , JẼc(S) ).
Thus, ht(PS + PT ) ≥ ht(PS + (xv , xw , yv , yw )) = ht(PS ) + 4 − 2 = ht(PS ) + 2. Hence, {PS , PT } is not an
edge of D(G).
b) Let T \ S = {t} and let Ej be the connected component of GS containing t. Then
PS + PT = ( ⋃_{i∈S} {xi , yi }, (xt , yt ), JẼ1 , . . . , J(Ẽj ){t} , . . . , JẼc(S) ).
Thus, ht(PS + PT ) = ht(PS ) + 2 − 1 = ht(PS ) + 1. Hence, {PS , PT } is an edge of D(G).
c) Let G1 , G2 , . . . , Gr be the connected components of GS∩T . Let also S \ T = {s}, T \ S = {t} and
assume that s ∈ Gj and t ∈ Gk . Since S, T ∈ M(G), it follows that s and t are cut vertices of Gj and
Gk , respectively.
If j 6= k, then t is a cut vertex of GS . Moreover, if V (Gj ) = V (E1 ∪ · · · ∪ Eh ∪ {s}), where h ≥ 2 and
Gk = Eh+1 , then
PS + PT = ( ⋃_{i∈S∪{t}} {xi , yi }, J(G̃j ){s} , J(Ẽh+1 ){t} , JẼh+2 , . . . , JẼc(S) ).
It follows that ht(PS + PT ) = ht(PS ) + 2 + |V (Gj )| − 2 − Σ_{i=1}^{h} (|V (Ei )| − 1) − 1 = ht(PS ) + 2 + 1 − 2 + h − 1 = ht(PS ) + h > ht(PS ) + 1. Thus, {PS , PT } is not an edge of D(G).
Assume now that j = k and let j = 1 for simplicity. Denote by H1 , . . . , Hi the connected components of
(G1 ){s} and by K1 , . . . , Ki the connected components of (G1 ){t} (note that the number of components is
the same because S, T ∈ M(G) and JG is unmixed). Suppose also that t ∈ H1 and s ∈ K1 . If there exists
v ∈ Hp ∩ Kq with p, q ≠ 1, then, since v ∈ Hp , there exists a path from v to s that does not involve t. This
is a contradiction because v ∈ Kq and s ∈ K1 . Hence, Kq ⊆ H1 and Hp ⊆ K1 for all p, q = 2, . . . , i. In
particular, the connected components of GS∪T are H2 , . . . , Hi , K2 , . . . , Ki , G2 , . . . , Gr and the connected
components of H1 ∩ K1 , if it is not empty.
Suppose first that H1 ∩K1 = ∅. Hence, V (H1 ) = V (K2 ∪· · ·∪Ki ∪{t}) and V (K1 ) = V (H2 ∪· · ·∪Hi ∪{s}).
If i ≥ 3, then t is a cut vertex of H1 , hence a cut vertex of GS . It follows that
PS = ( ⋃_{h∈S} {xh , yh }, JH̃1 , JH̃2 , . . . , JH̃i , JG̃2 , . . . , JG̃r ) and
PS + PT = ( ⋃_{h∈S∪{t}} {xh , yh }, J(H̃1 ){t} , J(K̃1 ){s} , JG̃2 , . . . , JG̃r ).
Therefore, ht(PS + PT ) = ht(PS ) + 2 − 1 − Σ_{h=2}^{i} (|V (Hh )| − 1) + |V (K1 )| − 2 = ht(PS ) + 1 − Σ_{h=2}^{i} |V (Hh )| + (i − 1) + (Σ_{h=2}^{i} |V (Hh )| + 1) − 2 = ht(PS ) + i − 1 > ht(PS ) + 1, since i ≥ 3. Thus, {PS , PT } is not an edge of D(G).
On the other hand, if i = 2, then t is not a cut vertex of H1 , since K2 is connected. Therefore, t is not
a cut vertex of GS . It follows that
PS + PT = ( ⋃_{h∈S∪{t}} {xh , yh }, J(H̃1 ){t} , JH̃2 , JG̃2 , . . . , JG̃r ).
Hence, ht(PS + PT ) = ht(PS ) + 2 − 1 = ht(PS ) + 1 and {PS , PT } is an edge of D(G).
Let now H1 ∩ K1 ≠ ∅. It follows that
V (H1 ) = V (K2 ∪ · · · ∪ Ki ∪ (H1 ∩ K1 ) ∪ {t}) and
V (K1 ) = V (H2 ∪ · · · ∪ Hi ∪ (H1 ∩ K1 ) ∪ {s})
and in this case t is a cut vertex of GS . Moreover,
PS + PT = ( ⋃_{h∈S∪{t}} {xh , yh }, J(H̃1 ){t} , J(K̃1 ){s} , JG̃2 , . . . , JG̃r ).
In fact, JH̃h ⊆ J(K̃1 ){s} and JK̃h ⊆ J(H̃1 ){t} for all h = 2, . . . , i. We now compute the height of J = J(H̃1 ){t} + J(K̃1 ){s} .
Setting W1 = H2 ∪ · · · ∪ Hi and W2 = K2 ∪ · · · ∪ Ki , the ideal J is the binomial edge ideal of the graph F obtained from W̃1 ∪ W̃2 ∪ (H̃1 ∩ K̃1 ) by adding the edges {{v, w} : v ∈ H̃1 ∩ K̃1 , w ∈ W̃1 ∪ W̃2 }. It is easy to check that the only cut sets of F are ∅ and H̃1 ∩ K̃1 . Moreover,
ht(PH̃1 ∩K̃1 (F )) = |V (F )| + |V (H̃1 ∩ K̃1 )| − 2 ≥ |V (F )| − 1 = ht(P∅ (F )).
Thus ht(J) = |V (F )| − 1 = |V (W̃1 )| + |V (W̃2 )| + |V (H̃1 ∩ K̃1 )| − 1 = Σ_{h=1}^{i} |V (Hh )| − 2. Since i ≥ 2, we get
ht(PS + PT ) = ht(PS ) + 2 − Σ_{h=1}^{i} (|V (Hh )| − 1) + Σ_{h=1}^{i} |V (Hh )| − 2 = ht(PS ) + i > ht(PS ) + 1.
Hence, {PS , PT } is not an edge of D(G).
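When JG is unmixed, Theorem 5.2 makes the dual graph computable from the cut sets alone, with no primary decomposition. As an illustration (not from the paper), the sketch below assembles D(F3) in this way and checks that it is connected, consistent with the Cohen-Macaulayness of JF3 (Proposition 3.3) and Hartshorne's result.

```python
from itertools import combinations

def n_components(vertices, edges, removed=frozenset()):
    # number of connected components of the induced subgraph G_S
    verts = set(vertices) - set(removed)
    adj = {v: [] for v in verts}
    for e in edges:
        a, b = tuple(e)
        if a in verts and b in verts:
            adj[a].append(b)
            adj[b].append(a)
    seen, count = set(), 0
    for v in verts:
        if v not in seen:
            count += 1
            stack = [v]
            while stack:
                u = stack.pop()
                if u not in seen:
                    seen.add(u)
                    stack.extend(adj[u])
    return count

def cut_sets(vertices, edges):
    return [S for r in range(len(vertices) + 1)
            for S in map(frozenset, combinations(sorted(vertices), r))
            if all(n_components(vertices, edges, S - {v})
                   < n_components(vertices, edges, S) for v in S)]

def dual_edge(V, E, S, T):
    """Edge test of Theorem 5.2 for two distinct cut sets of an unmixed J_G."""
    if len(S) > len(T):
        S, T = T, S
    if len(T - S) != 1:
        return False                      # case a)
    (t,) = T - S
    if S < T:
        return True                       # case b)
    # case c): edge iff t is not a cut vertex of G_S
    return n_components(V, E, S | {t}) <= n_components(V, E, S)

# F_3: vertices 1..6, edges {2i, 2j-1} for 1 <= i <= j <= 3
V = set(range(1, 7))
E = [(2 * i, 2 * j - 1) for i in range(1, 4) for j in range(i, 4)]
M = cut_sets(V, E)
D_vertices = set(range(len(M)))
D_edges = [(i, j) for i in range(len(M)) for j in range(i + 1, len(M))
           if dual_edge(V, E, M[i], M[j])]
assert n_components(D_vertices, D_edges) == 1   # D(F_3) is connected
```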
Remark 5.3. Recall that a graph is k-connected if it has more than k vertices and the removal of any
h < k vertices does not disconnect the graph. In particular, every non-empty connected graph, which is
not reduced to a single vertex, is 1-connected.
Let G be a connected graph such that D(G) is connected. If G is not the complete graph, then G is
1-connected but not 2-connected. In fact, if G is 2-connected, then G does not have cut vertices and, by
Theorem 5.2 a), it follows that P∅ is an isolated vertex of the dual graph D(G), a contradiction. Notice
that, if G is bipartite, by Proposition 2.3, it is enough to require JG to be unmixed. Nevertheless, in the
non-bipartite case we need to assume D(G) connected. In fact, the graph G in Figure 13 is 2-connected,
JG is unmixed and D(G) consists of two isolated vertices.
We also observe that the above statement generalizes [1, Proposition 3.10], since having a connected
dual graph is a weaker condition (see also [8, Corollary 2.4]). In particular, being not 2-connected is a
necessary condition for JG to be Cohen-Macaulay.
Figure 13. A 2-connected graph G with JG unmixed and D(G) disconnected
Example 5.4. For every k ≥ 4, let Mk,k and Mk−1,k be the graphs defined in Example 2.2. With the
same notation used there and by Theorem 5.2, their dual graphs are represented in Figure 14.
Figure 14. (a) The dual graph of JMk,k , with vertices P∅ , P{2} , P{2k−1} , P{2,2k−1} , PV1 \{1} , PV2 \{2k} ; (b) the dual graph of JM3,4 , with vertices P∅ , P{2} , P{6} , P{2,6} , P{3,5} , P{2,4,6} ; (c) the dual graph of JMk−1,k , k ≥ 5, with vertices P∅ , P{2} , P{2k−2} , P{2,2k−2} , PV1 \{1,2k−1} , PV2 .
Thus, JMk,k and JMk−1,k are not Cohen-Macaulay by Hartshorne’s Theorem [8]. Notice that M3,4 is the bipartite graph with the smallest number of vertices whose binomial edge ideal is unmixed and not Cohen-Macaulay.
The following technical result has several crucial consequences, see Theorem 5.7 and Theorem 5.8. We
show that, under some assumption on the graph, the intersection of two cut sets, which differ by one
element and have the same cardinality, is again a cut set.
Lemma 5.5. Let G be a graph such that JG is unmixed. Let S, T ∈ M(G) with |S| = |T | and |S \ T | = 1.
(i) If {PS , PT } ∈ D(G), then S ∩ T ∈ M(G).
(ii) If S ∪ T ∈ M(G) and G is bipartite, then S ∩ T ∈ M(G).
Proof. Let S = (T \ {t}) ∪ {s} and let G1 , . . . , Gr be the connected components of GS∩T . Suppose first
that s ∈ Gi and t ∈ Gj with i 6= j. Let z ∈ S ∩ T such that cG ((S ∩ T ) \ {z}) = cG (S ∩ T ). Since z ∈ S and
S ∈ M(G), z joins at least two components of GS . Then in GS it is only adjacent to some components of
(Gi ){s} . This implies that it does not join any components in GT , a contradiction, since T ∈ M(G).
Assume now that s, t ∈ G1 and suppose first that r = |S ∩ T | + 1. We claim that S ∩ T ∈ M(G). In
this case, GS has r + 1 connected components, say H1 , H2 , G2 , . . . , Gr . Consider the set
Z = {z ∈ S ∩ T : adding z to GS it connects only H1 and H2 }.
We show that X = (S ∩ T ) \ Z ∈ M(G). For every x ∈ X, we know that cG (S \ {x}) < cG (S). In
particular, adding x to GS , it joins some connected components and at least one of them is Gi with i ≥ 2.
Hence, cG (X \ {x}) < cG (X). Moreover, cG (X) = |S ∩ T | − |Z| + 1, by the unmixedness of JG . On the
other hand, by definition of Z and since S ∈ M(G), it follows that cG (X) = r = |S| = |S ∩ T | + 1. Thus,
Z = ∅ and S ∩ T = X ∈ M(G).
Suppose now that H1 , . . . , Hi , G2 , . . . , Gr are the connected components of GS , with i ≥ 3, and that
t ∈ H1 . In the same way let K1 , . . . , Ki , G2 , . . . , Gr be the connected components of GT and let s ∈ K1 .
We show that this case cannot occur.
Following the same argument of the proof of Theorem 5.2 c), we conclude that the connected components
of GS∪T are H2 , . . . , Hi , K2 , . . . , Ki , G2 , . . . , Gr and the connected components of H1 ∩K1 , if it is not empty.
(i) If H1 ∩ K1 ≠ ∅, it follows that V (H1 ) = V (K2 ∪ · · · ∪ Ki ∪ (H1 ∩ K1 ) ∪ {t}) and V (K1 ) = V (H2 ∪ · · · ∪ Hi ∪ (H1 ∩ K1 ) ∪ {s}). In this case, t is a cut vertex of GS , hence {PS , PT } is not an edge of D(G) by Theorem 5.2 c), a contradiction.
Let now H1 ∩ K1 = ∅, then V (H1 ) = V (K2 ∪ · · · ∪ Ki ∪ {t}) and V (K1 ) = V (H2 ∪ · · · ∪ Hi ∪ {s}). Since
i ≥ 3, t is a cut vertex of H1 , hence {PS , PT } is not an edge of D(G) by Theorem 5.2 c), a contradiction.
(ii) In this case, since both S and S ∪ T are cut sets of G and i ≥ 3, we have that i = 3 and H1 ∩ K1 = ∅.
Therefore, the connected components of GS∪T are H2 , H3 , K2 , K3 , G2 , . . . , Gr .
We know that s is adjacent to x ∈ H1 and that s ∈ K1 . Hence, s is not adjacent to any vertices of K2
or K3 . Thus, x = t, since V (H1 ) = V (K2 ∪ K3 ∪ {t}). This means that {s, t} ∈ E(G). Let
Z = {z ∈ S ∩ T : adding z to GS∪T it connects only some Hi with some Kj }.
Notice that there are no vertices in S ∩ T that only connect H2 to H3 or K2 to K3 in GS∪T . In fact, if z ∈ S ∩ T only connects H2 to H3 in GS∪T , then cG (T \ {z}) = cG (T ), a contradiction, since T ∈ M(G). The same holds for K2 and K3 .
As above, since S ∪ T ∈ M(G), it follows that (S ∩ T ) \ Z ∈ M(G) and, by the unmixedness of JG ,
|Z| = 1, say Z = {z}. Without loss of generality, we may assume that z connects at least H2 and K2 .
Since s and t are adjacent in G, one of them is in the same bipartition set of z. Without loss of
generality, assume that this vertex is t, thus NH2 (s) ∩ NH2 (z) = ∅. Let
A = {x ∈ S ∩ T : if {x, v} ∈ E(G) for some v ∈ G1 , then v ∈ NH2 (s)}.
Notice that A also contains all vertices of S ∩ T that connect only some Gj ’s in GS∩T , with j ≥ 2. We
claim that
W = ((S ∩ T ) \ A) ∪ NH2 (s) ∈ M(G).
In Figure 15 the set W is colored in gray and the circles represent the connected components of GS∪T ,
where only some vertices are drawn.
Figure 15. The set W in gray; the circles represent the connected components H2 , H3 , K2 , K3 , G2 , . . . , Gr of GS∪T .
Notice that z ∈ W . Let w ∈ W . Adding w = z to GW , we connect a vertex of H2 \ NH2 (s) with K2
whereas, adding w ∈ NH2 (s) to GW , we connect s to H2 \ NH2 (s). Moreover, if w ∈ (S ∩ T ) \ (A ∪ {z}),
we know that, in GS∩T , w connects Gi for some i ≥ 2 to a vertex v of G1 \ NH2 (s). By construction, in
GW the connected components containing v and Gi are different and w still connects them. This proves
that W ∈ M(G).
Since JG is unmixed, we have that cG (W ) = |W | + 1 and a connected component of GW is the subgraph
induced on H3 ∪ K2 ∪ K3 ∪ {s, t}. Thus, removing t from GW , this component splits into three components,
H3 ∪ {s}, K2 , K3 . Therefore, if W ∪ {t} is a cut set of G, we get cG (W ∪ {t}) = cG (W ) + 2 = |W | + 3,
which contradicts the unmixedness of JG .
Hence, we may assume that W ∪ {t} ∉ M(G). Thus there exists y ∈ NG (t) that joins t with only one connected component of GW (i.e., cG ((W ∪ {t}) \ {y}) = cG (W ∪ {t})). In this case, we define
B = {y ∈ S ∩ T : {y, t} ∈ E(G) and NG (y) \ {t} is contained in one connected component of GW },
where |B| ≥ 1, since W ∪ {t} ∉ M(G). We claim that
W ′ = (W \ B) ∪ {t} ∈ M(G).
Notice that z ∈ W ′ . The proof is similar to the case of W . We only notice that, adding t to GW ′ , we
connect at least K2 , K3 and the connected component containing s. Moreover, each element in B does
not connect different connected components of GW and any two elements of B are not adjacent (since
they are adjacent to t and G is bipartite). Thus, |W ′ | < |W ∪ {t}| and
cG (W ′ ) = cG (W ∪ {t}) = cG (W ) + 2 = |W | + 3 = |W ∪ {t}| + 2 > |W ′ | + 1,
which contradicts the unmixedness of JG .
Remark 5.6. It could be true that, if G is bipartite and JG is unmixed, then S ∩ T ∈ M(G) for every S, T ∈ M(G). Both assumptions are needed: in fact, if G is the graph in Figure 16, one can check with Macaulay2 [7] that JG is Cohen-Macaulay and thus D(G) is connected. Nevertheless, {2, 4}, {4, 5} ∈ M(G) and {2, 4} ∩ {4, 5} = {4} ∉ M(G).
On the other hand, if G is the cycle of length 6 with consecutively labelled vertices, then JG is not unmixed, {1, 3}, {1, 5} ∈ M(G) and {1, 3} ∩ {1, 5} = {1} ∉ M(G).
Figure 16. (a) The graph G, on the vertices 1, . . . , 7; (b) the dual graph of G, with vertices P∅ , P{2} , P{5} , P{2,4} , P{2,5} , P{4,5} , P{2,4,5} .
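The six-cycle computation in Remark 5.6 can be reproduced mechanically. The sketch below (illustrative only) checks that for the cycle C6 the sets {1, 3} and {1, 5} are cut sets while their intersection {1} is not, and that JC6 is not unmixed, since cC6 ({1, 3}) = 2 ≠ |{1, 3}| + 1.

```python
from itertools import combinations

def n_components(vertices, edges, removed=frozenset()):
    # number of connected components of the induced subgraph G_S
    verts = set(vertices) - set(removed)
    adj = {v: [] for v in verts}
    for a, b in edges:
        if a in verts and b in verts:
            adj[a].append(b)
            adj[b].append(a)
    seen, count = set(), 0
    for v in verts:
        if v not in seen:
            count += 1
            stack = [v]
            while stack:
                u = stack.pop()
                if u not in seen:
                    seen.add(u)
                    stack.extend(adj[u])
    return count

def cut_sets(vertices, edges):
    # S is a cut set when every v in S strictly increases the component count
    return [S for r in range(len(vertices) + 1)
            for S in map(frozenset, combinations(sorted(vertices), r))
            if all(n_components(vertices, edges, S - {v})
                   < n_components(vertices, edges, S) for v in S)]

# the 6-cycle with consecutively labelled vertices
V = set(range(1, 7))
E = [(1, 2), (2, 3), (3, 4), (4, 5), (5, 6), (6, 1)]
M = set(cut_sets(V, E))
assert frozenset({1, 3}) in M and frozenset({1, 5}) in M
assert frozenset({1}) not in M                       # {1,3} ∩ {1,5} ∉ M(G)
assert n_components(V, E, {1, 3}) == 2 != 3          # J_G is not unmixed
```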
The next result is important for Theorem 6.1, since it at the same time provides the equivalence b) ⇔ d) and has important consequences for the proof of b) ⇒ c).
Theorem 5.7. Let G be a bipartite graph. If D(G) is connected, then for every non-empty S ∈ M(G),
there exists s ∈ S such that S \ {s} ∈ M(G).
Proof. By contradiction, let T ∈ M(G) such that T \ {t} ∉ M(G) for every t ∈ T . Notice that |T | ≥ 2, otherwise T \ {t} = ∅ ∈ M(G).
Let W ∈ M(G), W ≠ T , such that there exists a path P : PT = PS0 , PS1 , . . . , PSk , PSk+1 = PW in
D(G). Assume P is a shortest path from PT to PW .
Claim: For 1 ≤ i ≤ k + 1, |Si | > |Si−1 |. In particular, |W | > |T |.
We proceed by induction on k ≥ 0.
Let k = 0. First notice that |W | ≥ |T |, otherwise by Theorem 5.2 a), W = T \ {t} ∈ M(G) for some t ∈ T , a contradiction. If |W | = |T |, since {PT , PW } is an edge of D(G), by Theorem 5.2 a), we have that W = (T \ {t}) ∪ {w}, for some t ∈ T and w ∉ T . By Lemma 5.5, we have that W ∩ T = T \ {t} ∈ M(G), a contradiction. Then |W | > |T |.
Let k ≥ 1. By induction, |Si | > |Si−1 |, for every 1 ≤ i ≤ k. In particular, by Theorem 5.2 b),
Si = T ∪ {s1 , . . . , si } for i = 1, . . . , k and sj ∉ T , for j = 1, . . . , k. Set S = Sk .
If |W | < |S|, then |W | = |S| − 1 by Theorem 5.2 a). Hence W = S \ {s} for some s ∈ S.
First suppose that s ∈ T . Thus, W = (T \ {s}) ∪ {s1 , . . . , sk }. Since |W | = |Sk−1 |, |W \ Sk−1 | = 1 and
W ∪ Sk−1 = S ∈ M(G), by Lemma 5.5 (ii) it follows that Sk−1 ∩ W = (T \ {s}) ∪ {s1 , . . . , sk−1 } ∈ M(G).
For every i = 1, . . . , k − 2, let Ti = Si+1 ∩ · · · ∩ Sk−1 ∩ W . By induction on i ≤ k − 2, assume that
Ti ∈ M(G), then Ti−1 = Si ∩ Ti = (T \ {s}) ∪ {s1 , . . . , si } ∈ M(G) by Lemma 5.5 (ii), since |Si | = |Ti |,
|Ti \Si | = 1 and Si ∪Ti = Si+1 ∈ M(G). In particular, T0 = S1 ∩· · ·∩Sk−1 ∩W = (T \{s})∪{s1 } ∈ M(G),
|T0 | = |T |, |T0 \ T | = 1 and T0 ∪ T = S1 ∈ M(G). Again, by Lemma 5.5 (ii), T ∩ T0 = T \ {s} ∈ M(G), a
contradiction.
Now assume that s ∈ S \T , where s = sj for some j ∈ {1, . . . , k}. Since |W | = |Sk−1 | and |W \Sk−1 | = 1,
by Lemma 5.5 (ii), Sk−1 ∩W = Sk−1 \{sj } ∈ M(G). For every i = j, . . . , k−2, let Ti = Si+1 ∩· · ·∩Sk−1 ∩W .
By induction on i ≤ k − 2, assume that Ti ∈ M(G), then Ti−1 = Si ∩ Ti = Si \ {sj } ∈ M(G) by Lemma 5.5
(ii), since |Si | = |Ti |, |Ti \ Si | = 1 and Si ∪ Ti = Si+1 ∈ M(G). In particular, Tj−1 = Sj ∩ · · · ∩ Sk−1 ∩ W =
Sj \ {sj } = Sj−1 ∈ M(G). Therefore,
P ′ : PS0 = PT , PS1 , . . . , PSj−1 = PTj−1 , PTj , . . . , PTk−2 , PW
is a path from PT to PW , shorter than P, a contradiction.
If |W | = |S|, then W = (S \ {x}) ∪ {y} for some x, y. If x ∈ T , then W = (T \ {x}) ∪ {s1 , . . . , sk , y}.
By Lemma 5.5 (i), W ∩ S = (T \ {x}) ∪ {s1 , . . . , sk } ∈ M(G). We may proceed in a similar way to the
case |W | < |S|, setting Ti = Si+1 ∩ · · · ∩ Sk ∩ W for i = 1, . . . , k − 1.
Now assume x ∈ S \ T , where x = sj for some j ∈ {1, . . . , k}. Since |W | = |S|, by Lemma 5.5 (i),
S ∩W = S \{x} ∈ M(G). Again, we may proceed as in the case |W | < |S|, setting Ti = Si+1 ∩· · ·∩Sk ∩W
for i = j, . . . , k − 1.
In both cases we find a contradiction. In conclusion, we proved that, if there exists a path from PT
to PW in D(G), then |W | > |T | ≥ 2. Thus, there is no path from PT to P∅ in D(G), hence D(G) is
disconnected.
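For the graphs Fm this conclusion can be observed directly: JFm is Cohen-Macaulay (Proposition 3.3), so D(Fm) is connected by Hartshorne's result, and Theorem 5.7 then predicts that every non-empty cut set can be shrunk by one element. A brute-force confirmation (illustrative only):

```python
from itertools import combinations

def n_components(vertices, edges, removed=frozenset()):
    # number of connected components of the induced subgraph G_S
    verts = set(vertices) - set(removed)
    adj = {v: [] for v in verts}
    for a, b in edges:
        if a in verts and b in verts:
            adj[a].append(b)
            adj[b].append(a)
    seen, count = set(), 0
    for v in verts:
        if v not in seen:
            count += 1
            stack = [v]
            while stack:
                u = stack.pop()
                if u not in seen:
                    seen.add(u)
                    stack.extend(adj[u])
    return count

def cut_sets(vertices, edges):
    # S is a cut set when every v in S strictly increases the component count
    return [S for r in range(len(vertices) + 1)
            for S in map(frozenset, combinations(sorted(vertices), r))
            if all(n_components(vertices, edges, S - {v})
                   < n_components(vertices, edges, S) for v in S)]

def f_graph(m):
    # F_m: vertex set [2m], edges {2i, 2j-1} for 1 <= i <= j <= m
    return set(range(1, 2 * m + 1)), [(2 * i, 2 * j - 1)
                                      for i in range(1, m + 1)
                                      for j in range(i, m + 1)]

for m in range(2, 5):
    V, E = f_graph(m)
    M = set(cut_sets(V, E))
    # every non-empty cut set S admits s with S \ {s} again a cut set
    assert all(any(S - {s} in M for s in S) for S in M if S)
```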
Using the following result, we may reduce to considering bipartite graphs G with exactly two cut vertices and D(G) connected.
Theorem 5.8. Let G be a bipartite graph with at least three cut vertices and such that JG is unmixed.
a) There exist G1 and G2 such that G = G1 ∗ G2 or G = G1 ◦ G2 .
b) If D(G) is connected, then D(G1 ) and D(G2 ) are connected.
Proof. a) By Proposition 2.3, G has exactly two leaves. Let v be a cut vertex that is not a neighbour
of a leaf and let H1 and H2 be the connected components of G{v} . If v is a leaf of both GV (H1 )∪{v} and
GV (H2 )∪{v} , then G = GV (H1 )∪{v} ∗ GV (H2 )∪{v} .
Assume that v is not a leaf of GV (H1 )∪{v} and of GV (H2 )∪{v} . Then, given two new vertices w1 and w2 ,
for i = 1, 2 we set Gi to be the graph (GV (Hi )∪{v} ) ∪ {v, wi }. It follows that G = G1 ◦ G2 .
Now assume by contradiction that v is a leaf of GV (H2 )∪{v} , but not of GV (H1 )∪{v} , and let w be the
only neighbour of v in GV (H2 )∪{v} . Hence, w is a cut vertex of G and we may assume that it is not a leaf
of GV (H2 ) , otherwise G = GV (H1 )∪{v,w} ∗ GV (H2 ) .
The graphs GV (H1 )∪{v} and GV (H2 ) are bipartite with bipartitions V1 ⊔ V2 and W1 ⊔ W2 , respectively.
Without loss of generality, assume that v ∈ V1 and w ∈ W1 and let S = V1 \ {ℓ : ℓ is a leaf of G}. This
is a cut set of G: indeed in GS all vertices of V2 are either isolated or connected with only one leaf of
G, hence every element of S connects at least one vertex of V2 with some other connected component.
Therefore, since JG is unmixed, GS has |S| + 1 connected components, GV (H2 ) is one of them and the
vertices of GV (H1 ) not in S form the remaining |S| connected components. In the same way, the set
T = W1 \ {ℓ : ℓ is a leaf of G} ∈ M(G) and GT consists of the connected component GV (H1 )∪{v} and of
|T | connected components on the vertices of GV (H2 ) that are not in T . Notice that S ∪ T is a cut set of
G: in fact, adding either v or w to GS∪T , we join at least two connected components, since v is not a leaf
of GV (H1 )∪{v} and w is not a leaf of GV (H2 ) . Then GS∪T has |S| connected components on the vertices of
GV (H1 )∪{v} and |T | on the vertices of GV (H2 ) . Hence, cG (S ∪ T ) = |S| + |T |, a contradiction.
b) We prove the statement for G1 ; the argument for G2 is the same. Denote by PS the primary components
of JG1 for S ∈ M(G1 ), and let S0 ∈ M(G1 ) and k = |S0 |. Then S0 ∈ M(G) by Theorems 4.2 and 4.5. Moreover, by Theorem 5.7,
there exists s1 ∈ S0 such that S1 = S0 \ {s1 } ∈ M(G). Applying repeatedly Theorem 5.7, we find a finite
sequence of cut sets S2 = S0 \ {s1 , s2 }, S3 = S0 \ {s1 , s2 , s3 }, . . . , Sk = S0 \ S0 = ∅ ∈ M(G). Notice that
Si ∈ M(G1 ) for i = 1, . . . , k and, by Theorem 5.2, {PSi , PSi+1 } is an edge of D(G1 ) for i = 1, . . . , k − 1.
Hence,
P : PS0 , PS1 , PS2 , . . . , PSk = P∅
is a path from PS0 to P∅ in D(G1 ). Therefore, D(G1 ) is connected.
Remark 5.9. If the graph G is not bipartite, Theorem 5.8 a) does not hold. For instance, the ideal JG
of the graph in Figure 17 is unmixed, indeed Cohen-Macaulay, and G has four cut vertices, but it is not
possible to split it using the operations ∗ and ◦.
Figure 17
The remaining part of the section is useful to prove that a bipartite graph G with exactly two cut
vertices and D(G) connected is of the form Fm .
Corollary 5.10. Let G be a bipartite graph such that D(G) is connected. Then every non-empty cut set
S ∈ M(G) contains a cut vertex.
Proof. Let S ∈ M(G) and k = |S|. We may assume k ≥ 2. By Theorem 5.7, there exists s ∈ S such that
T = S \ {s} ∈ M(G). By induction, T contains a cut vertex and the claim follows.
Remark 5.11. All assumptions in Theorem 5.8 and Corollary 5.10 are needed. In fact, neither claim
holds if we only assume that G is bipartite but D(G) is not connected. For instance, let G = M3,4 . Then
{3, 5} is a cut set that does not contain any cut vertex (see Example 2.2).
On the other hand, neither result holds if D(G) is connected but G is not bipartite. For example,
if G is the graph in Figure 12(a), then {3, 4} ∈ M(G), but 3 and 4 are not cut vertices of G.
Corollary 5.12. Let G be a bipartite graph with bipartition V1 ⊔ V2 and with exactly two cut vertices v1
and v2 . If D(G) is connected, then {v1 , v2 } ∈ E(G). In particular |V1 | = |V2 |.
Proof. Let fi be the leaf adjacent to vi for i = 1, 2. Assume that {v1 , v2 } ∉ E(G). Then Si = NG (vi ) \ {fi }
is a cut set of G for i = 1, 2. Moreover, S1 and S2 do not contain cut vertices. By Corollary 5.10 it follows
that D(G) is disconnected, a contradiction. The last part of the claim follows from Remark 2.4.
Lemma 5.13. Let G be a bipartite graph with bipartition V1 ⊔ V2 , |V1 | = |V2 | and with exactly two cut
vertices. If D(G) is connected, then there exists a vertex of G with degree 2.
Proof. Suppose by contradiction that all the vertices of G, except the two leaves, have degree greater than
2. Let f be the only leaf of G in V1 and consider T = V1 \ {f }. Clearly GT is the disjoint union of |V2 | − 1
isolated vertices and the edge {v2 , f }, where v2 ∈ V2 is a cut vertex. Therefore, T is a cut set and we
claim that it is an isolated vertex in D(G).
Notice that T is not contained in any other cut set. Moreover, suppose that S is a cut set of G such
that S ⊂ T and T \ S = {v}. Since S ⊂ V1 , it follows that degGS (v) > 2. Then cG (S) = cG (T \ {v}) ≤
cG (T ) − 2 = |V1 | − 2, since GT consists of isolated vertices and one edge. This contradicts the unmixedness
of JG .
Finally, let T ′ be a cut set such that T \ T ′ = {v} and T ′ \ T = {v ′ }. If we set S = T \ {v} = T ′ \ {v ′ },
it follows that v ′ has to be a cut vertex of GS . As a consequence, v ′ = v2 is the cut vertex in V2 , and
{v, v ′ } ∈ E(G). On the other hand, as before, GS has at most |V2 | − 2 connected components, then
cG (T ′ ) = cG (S) + 1 ≤ |V2 | − 1. This contradicts the unmixedness of JG , because |T ′ | = |V2 | − 1. Therefore,
Theorem 5.2 implies that T is an isolated vertex in D(G), contradicting our assumption.
Proposition 5.14. Let H be a bipartite graph with bipartition V1 ⊔ V2 and |V1 | = |V2 |. Let v and f be two
new vertices and let G be the bipartite graph with V (G) = V (H) ∪ {v, f } and E(G) = E(H) ∪ {{v, x} :
x ∈ V1 ∪ {f }}. If D(G) is connected, then D(H) is connected.
Proof. Let f2 be the leaf of G in V2 and w its only neighbour, which is a cut vertex. Lemmas 2.6 b) and
5.13 imply that there is a vertex with degree 2 in G. Thus, by Proposition 2.8,
M(G) = {∅, V1 } ∪ {S ∪ {v} : S ∈ M(H)} ∪ {T ⊂ V1 : T ∈ M(H)}.
Let us denote by PS the primary components of JG and by QS those of JH . Using Theorem 5.2, we can
give a complete description of the edges of D(G):
(i) {P∅ , PT } ∈ E(D(G)) if and only if either T = {v} or T = {w};
(ii) {PV1 , PT } ∈ E(D(G)) if and only if either T = V1 \ {f1 } or T = (V1 \ {f1 }) ∪ {v};
(iii) if S1 , S2 ∈ M(H), then {PS1 ∪{v} , PS2 ∪{v} } ∈ E(D(G)) if and only if {QS1 , QS2 } ∈ E(D(H));
(iv) if T1 , T2 ∈ M(G) are strictly contained in V1 , then we have {PT1 , PT2 } ∈ E(D(G)) if and only if {QT1 , QT2 } ∈ E(D(H));
(v) if S, T ∈ M(H) and T ⊊ V1 , then {PS∪{v} , PT } ∈ E(D(G)) if and only if S = T .
If S ∈ M(H), it is enough to prove that QS is in the same connected component as Q∅ in D(H). By
(iii), this is equivalent to proving that in D(G) there exists a path P{v} = PU1 , PU2 , . . . , PUr = PS∪{v} such
that Ui contains v for all i. Since D(G) is connected, we know that there exists a path P from P{v} to
PS∪{v} . We first note that, if P contains P∅ or PV1 , we may avoid them: in fact, by (i) and (ii), they
only have two neighbours; for PV1 they are adjacent by (v), whereas we may replace P∅ with P{v,w} by
(iii) and (v). Let i be the smallest index for which Ui does not contain v. This means that Ui ⊊ V1 and
Ui−1 = Ui ∪ {v} by (v). Moreover, Ui+1 does not contain v, otherwise it would be equal to Ui−1 (again by
(v)). Therefore, Ui+1 ⊊ V1 and {QUi , QUi+1 } ∈ E(D(H)) by (iv). Thus, replacing Ui with Ui+1 ∪ {v} in
P, we get a new path from P{v} to PS∪{v} , by (iii) and (iv). Repeating the same argument finitely many
times, we eventually find a path from P{v} to PS∪{v} that involves only cut sets containing v. Thus D(H)
is connected by (iii).
6. The main theorem
In this section we prove the main theorem of the paper and give some applications.
Theorem 6.1. Let G be a connected bipartite graph. The following properties are equivalent:
a) JG is Cohen-Macaulay;
b) the dual graph D(G) is connected;
c) G = A1 ∗ A2 ∗ · · · ∗ Ak , where Ai = Fm or Ai = Fm1 ◦ · · · ◦ Fmr , for some m ≥ 1 and mj ≥ 3;
d) JG is unmixed and for every non-empty S ∈ M(G), there exists s ∈ S such that S \ {s} ∈ M(G).
Proof. The implication a) ⇒ b) follows by Hartshorne’s Connectedness Theorem [8, Proposition 1.1,
Corollary 2.4, Remark 2.4.1].
b) ⇒ c): We may assume that G has more than two vertices. Recall that, since D(G) is connected,
JG is unmixed. By Proposition 2.3, G has exactly two leaves, hence at least two cut vertices v1 , v2 ,
which are their neighbours. We proceed by induction on the number h ≥ 2 of cut vertices of G.
Let h = 2. We claim that G = Fm , for some m ≥ 2. Let V (G) = V1 ⊔ V2 be the bipartition of the vertex
set of G. By Corollary 5.12, we have that {v1 , v2 } ∈ E(G) and |V1 | = |V2 |, with vi ∈ Vi for i = 1, 2. We
proceed by induction on m = |V1 | = |V2 |. If m = 2, then G = F2 . Let m > 2 and consider the graph H
obtained by removing v2 and the leaf adjacent to it. Lemma 2.6 b) implies that v has degree m and H has
exactly two cut vertices, whereas by Proposition 5.14, D(H) is connected. Hence, by induction, it follows
that H = Fm−1 and G = Fm by construction.
Assume now h > 2. Let v be a cut vertex of G such that v ≠ v1 , v2 . By Theorem 5.8, there exist two
graphs G1 and G2 such that G = G1 ∗G2 or G = G1 ◦G2 and D(G1 ), D(G2 ) are connected. If G = G1 ∗G2 ,
by induction they are of the form A1 ∗ A2 ∗ · · · ∗ Ak , for some k ≥ 1, where Ai = Fm , with m ≥ 1, or
Ai = Fm1 ◦ · · · ◦ Fmr , with mj ≥ 3 for j = 1, . . . , r.
On the other hand, if G = G1 ◦ G2 , it follows that G1 = A1 ∗ A2 ∗ · · · ∗ As and G2 = B1 ∗ B2 ∗ · · · ∗ Bt ,
where each Ai and Bi are equal to Fm , for some m ≥ 1, or to Fm1 ◦ · · · ◦ Fmr , with mj ≥ 3 for j = 1, . . . , r.
By Theorem 4.5, it follows that if As = Fm or B1 = Fm , then m ≥ 3.
c) ⇒ a): Let G be a graph as in c). We proceed by induction on k ≥ 1.
If k = 1, then G = Fm for some m ≥ 1, or G = Fm1 ◦ · · · ◦ Fmr , with mj ≥ 3 for j = 1, . . . , r. In the
first case the claim follows from Proposition 3.3, in the latter from Theorem 4.9.
Let k > 1 and consider the graphs G1 = A1 ∗ A2 ∗ · · · ∗ Ak−1 and G2 = Ak . By induction, JG1 is
Cohen-Macaulay and, by the previous argument, also JG2 is Cohen-Macaulay. Then, the claim follows
from Theorem 4.2.
b) ⇔ d): The first implication follows from Theorem 5.7. Conversely, let S ∈ M(G), S ≠ ∅, and PS be
the primary components of JG . It suffices to show that there exists a path from P∅ to PS . If |S| = 1, the
claim follows by Theorem 5.2 b). If |S| > 1, by assumption, there exists s ∈ S such that S \ {s} ∈ M(G)
and, by induction, there exists a path from P∅ to PS\{s} . Thus, Theorem 5.2 b) implies that {PS\{s} , PS }
is an edge of D(G).
Theorem 6.1 can be restated in the following way. Let G be a connected bipartite graph. If it has
exactly two cut vertices, then JG is Cohen-Macaulay if and only if G = Fm for some m ≥ 1. If it has more
than two cut vertices, then JG is Cohen-Macaulay if and only if there exist two bipartite graphs G1 , G2
such that JG1 , JG2 are Cohen-Macaulay and G = G1 ∗ G2 or G = G1 ◦ G2 .
Figure 18 shows a graph G obtained by a sequence of operations ∗ and ◦ on a finite set of graphs of the
form Fm . More precisely, G = F3 ∗ F3 ◦ F4 ∗ F1 ∗ F3 ◦ F3 and vi denotes the only common vertex between
two consecutive blocks. By Theorem 6.1, JG is Cohen-Macaulay.
Figure 18. The graph G = F3 ∗ F3 ◦ F4 ∗ F1 ∗ F3 ◦ F3
It is interesting to notice that Theorem 6.1 gives, at the same time, a classification of other known
classes of Cohen-Macaulay binomial ideals associated with graphs. We recall that, given a graph G, the
Lovász-Saks-Schrijver ideal LG (see [11]), the permanental edge ideal ΠG (see [11, Section 3]) and the
parity binomial edge ideal IG (see [12]) are defined respectively as
LG = (xi xj + yi yj : {i, j} ∈ E(G)),
ΠG = (xi yj + xj yi : {i, j} ∈ E(G)),
IG = (xi xj − yi yj : {i, j} ∈ E(G)).
Corollary 6.2. Let G be a bipartite connected graph. Then Theorem 6.1 holds for LG , ΠG and IG .
Proof. Let G be a bipartite graph with bipartition V (G) = V1 ⊔ V2 . Then the binomial edge ideal JG can
be identified respectively with LG , ΠG and IG by means of the isomorphisms induced by:
LG : (xi , yi ) ↦ (xi , yi ) if i ∈ V1 , (xi , yi ) ↦ (yi , −xi ) if i ∈ V2 ;
ΠG : (xi , yi ) ↦ (xi , yi ) if i ∈ V1 , (xi , yi ) ↦ (−xi , yi ) if i ∈ V2 ;
IG : (xi , yi ) ↦ (xi , yi ) if i ∈ V1 , (xi , yi ) ↦ (yi , xi ) if i ∈ V2 .
Notice that the first transformation is more general than the one described in [11, Remark 1.5].
Thus, for bipartite graphs, these four classes of binomial ideals are essentially the same and Theorem
6.1 classifies which of these ideals are Cohen-Macaulay.
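These three substitutions can be sanity-checked on a single edge with a computer algebra system. The following sketch (a verification aid, not part of the proof) takes the edge {1, 2} with 1 ∈ V1 and 2 ∈ V2 , and checks that each map sends the generator x1 y2 − x2 y1 of JG to a generator of LG , ΠG and IG , up to sign:

```python
import sympy as sp

x1, y1, x2, y2 = sp.symbols('x1 y1 x2 y2')
f = x1*y2 - x2*y1  # generator of J_G for the edge {1, 2}, with 1 in V1 and 2 in V2

# (x_i, y_i) -> (y_i, -x_i) for i in V2: sends f to -(x1*x2 + y1*y2), a generator of L_G
L = sp.expand(f.subs({x2: y2, y2: -x2}, simultaneous=True))

# (x_i, y_i) -> (-x_i, y_i) for i in V2: sends f to x1*y2 + x2*y1, a generator of Pi_G
P = sp.expand(f.subs({x2: -x2}))

# (x_i, y_i) -> (y_i, x_i) for i in V2: sends f to x1*x2 - y1*y2, a generator of I_G
I = sp.expand(f.subs({x2: y2, y2: x2}, simultaneous=True))

print(L, P, I, sep='  |  ')
```

Running the same check over every edge of a bipartite graph shows that each substitution carries the generating set of JG onto that of the corresponding ideal, which is the identification used in the proof.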
As a final application, we show that [2, Conjecture 1.6] holds for Cohen-Macaulay binomial edge ideals
of bipartite graphs. Recall that the diameter, diam(G), of a graph G is the maximal distance between two
of its vertices. A homogeneous ideal I in A = K[x1 , · · · , xn ] is called Hirsch if diam(D(I)) ≤ ht(I). In [2],
the authors conjecture that every Cohen-Macaulay homogeneous ideal generated in degree two is Hirsch.
Corollary 6.3. Let G be a bipartite connected graph such that JG is Cohen-Macaulay. Then JG is Hirsch.
Proof. Let S ∈ M(G) be a cut set of G and let n = |V (G)|. We may assume n ≥ 3, otherwise D(JG ) is
a single vertex. Since JG is unmixed, GS has exactly |S| + 1 connected components and we claim that
|S| ≤ ⌈n/2⌉ − 1. In fact, if |S| ≥ ⌈n/2⌉, we would have
|V (G)| ≥ |S| + |S| + 1 ≥ ⌈n/2⌉ + ⌈n/2⌉ + 1 ≥ n/2 + n/2 + 1 = n + 1,
a contradiction. Consider now another cut set T of G. By Theorem 6.1 d), it follows that there is a
path connecting PS and PT , containing P∅ and with length |S| + |T | ≤ 2(⌈n/2⌉ − 1) ≤ n − 1. Thus,
diam(D(JG )) ≤ n − 1 = ht(JG ).
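The two combinatorial conditions driving this section — membership S ∈ M(G) and the unmixedness criterion cG (S) = |S| + 1 — are easy to verify by brute force on small graphs. Below is a minimal Python sketch; it assumes the standard characterization of cut sets (S ∈ M(G) exactly when cG (S \ {s}) < cG (S) for every s ∈ S), and the function names are ours:

```python
from itertools import combinations

def n_components(vertices, edges):
    """Number of connected components of the graph induced on `vertices`."""
    vs = set(vertices)
    adj = {v: set() for v in vs}
    for a, b in edges:
        if a in vs and b in vs:
            adj[a].add(b)
            adj[b].add(a)
    seen, count = set(), 0
    for v in vs:
        if v not in seen:
            count += 1
            stack = [v]
            while stack:
                u = stack.pop()
                if u not in seen:
                    seen.add(u)
                    stack.extend(adj[u])
    return count

def cut_sets(vertices, edges):
    """All S with c_G(S \\ {s}) < c_G(S) for every s in S (the set M(G))."""
    def c(S):
        return n_components([v for v in vertices if v not in S], edges)
    return [set(S)
            for r in range(len(vertices))
            for S in combinations(vertices, r)
            if all(c(set(S) - {s}) < c(S) for s in S)]

def is_unmixed(vertices, edges):
    """J_G is unmixed iff c_G(S) = |S| + 1 for every cut set S."""
    def c(S):
        return n_components([v for v in vertices if v not in S], edges)
    return all(c(S) == len(S) + 1 for S in cut_sets(vertices, edges))

# Path 1-2-3: M(G) = {∅, {2}} and c({2}) = 2 = |{2}| + 1, so J_G is unmixed.
print(cut_sets([1, 2, 3], [(1, 2), (2, 3)]))
# 4-cycle: {1, 3} is a cut set with c({1, 3}) = 2 < 3, so J_G is not unmixed.
print(is_unmixed([1, 2, 3, 4], [(1, 2), (2, 3), (3, 4), (4, 1)]))
```

The check enumerates all vertex subsets, so it is exponential in |V (G)| and meant only for small examples; it does not replace the Cohen-Macaulayness statements, which require the primary decomposition.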
Acknowledgments
The authors acknowledge the extensive use of the software Macaulay2 [7] and Nauty [16]. They also
thank Giancarlo Rinaldo for pointing out an error in Remark 3.1 in a previous version of the manuscript.
References
[1] A. Banerjee, L. Núñez-Betancourt, Graph connectivity and binomial edge ideals, Proc. Amer. Math. Soc. 145
(2017), 487-499.
[2] B. Benedetti, M. Varbaro, On the dual graph of Cohen-Macaulay algebras, Int. Math. Res. Not. 17 (2015), 8085-8115.
[3] W. Bruns, U. Vetter, Determinantal Rings, Lecture Notes in Mathematics (2nd ed.), Vol. 1327 (1988), Springer,
Heidelberg.
[4] R. Diestel, Graph Theory, Graduate Texts in Mathematics (4th ed.), Vol. 173 (2010), Springer-Verlag, Heidelberg.
[5] V. Ene, J. Herzog, T. Hibi, Cohen-Macaulay binomial edge ideals, Nagoya Math. J. 204 (2011), 57-68.
[6] V. Ene, J. Herzog, T. Hibi, Koszul binomial edge ideals, in “Bridging Algebra, Geometry, and Topology”, Springer
Proc. Math. Stat. 96, Springer, Cham, (2014), 125-136.
[7] D. R. Grayson, M. E. Stillman, Macaulay2, a software system for research in Algebraic Geometry, available at
http://www.math.uiuc.edu/Macaulay2/.
[8] R. Hartshorne, Complete intersections and connectedness, Amer. J. Math. 84 (1962), 497-508.
[9] J. Herzog, T. Hibi, Distributive lattices, bipartite graphs and Alexander duality, J. Algebr. Comb. 22 (2005), 289-302.
[10] J. Herzog, T. Hibi, F. Hreinsdóttir, T. Kahle, J. Rauh, Binomial edge ideals and conditional independence
statements, Adv. Appl. Math. 45 (2010), 317-333.
[11] J. Herzog, A. Macchia, S. Saeedi Madani, V. Welker, On the ideal of orthogonal representations of a graph in
R2 , Adv. Appl. Math. 71 (2015), 146-173.
[12] T. Kahle, C. Sarmiento, T. Windisch, Parity binomial edge ideals, J. Algebr. Comb. 44 (2016), 1, 99-117.
[13] D. Kiani, S. Saeedi Madani, Some Cohen-Macaulay and unmixed binomial edge ideals, Comm. Alg. 43 (2015), 12,
5434-5453.
[14] D. Kiani, S. Saeedi Madani, The Castelnuovo-Mumford regularity of binomial edge ideals, J. Combin. Theory Ser. A
139 (2016), 80-86.
[15] M. Matsuda, S. Murai, Regularity bounds for binomial edge ideals, J. Commut. Alg. 5 (2013), 1, 141-149.
[16] B. D. McKay, A. Piperno, Practical Graph Isomorphism, II, J. Symbolic Computation 60 (2013), 94-112. The
software package Nauty is available at http://cs.anu.edu.au/∼bdm/nauty/.
[17] M. Ohtani, Graphs and ideals generated by some 2-minors, Comm. Algebra 39 (2011), 3, 905-917.
[18] A. Rauf, G. Rinaldo, Construction of Cohen-Macaulay binomial edge ideals, Comm. Algebra 42 (2014), 1, 238-252.
[19] G. Rinaldo, Cohen-Macaulay binomial edge ideals of small deviation, Bull. Math. Soc. Sci. Math. Roumanie 56 (104)
(2013), 4, 497-503.
[20] W. V. Vasconcelos, Arithmetic of Blowup Algebras, London Math. Soc., Lecture Note Series 195, Cambridge University Press, Cambridge, 1994.
Davide Bolognini, Institut für Mathematik, Albrechtstrasse 28a, 49076 Osnabrück, Germany
E-mail address: [email protected]
Antonio Macchia, Dipartimento di Matematica, Università degli Studi di Bari “Aldo Moro”, Via
Orabona 4, 70125 Bari, Italy
E-mail address: [email protected]
Francesco Strazzanti, Departamento de Álgebra, Facultad de Matemáticas, Universidad de
Sevilla, Avda. Reina Mercedes s/n, 41080 Sevilla, Spain
E-mail address: [email protected]
Personalized Machine Learning for Robot Perception
of Affect and Engagement in Autism Therapy
arXiv:1802.01186v1 [cs.RO] 4 Feb 2018
Ognjen (Oggi) Rudovic1∗ , Jaeryoung Lee2 , Miles Dai1 ,
Björn Schuller3 and Rosalind W. Picard1
1 MIT Media Lab, USA
2 Chubu University, Japan
3 Imperial College London, UK
∗ E-mail: [email protected].
Robots have great potential to facilitate future therapies for children on the
autism spectrum. However, existing robots lack the ability to automatically
perceive and respond to human affect, which is necessary for establishing
and maintaining engaging interactions. Moreover, their inference challenge
is made harder by the fact that many individuals with autism have atypical
and unusually diverse styles of expressing their affective-cognitive states. To
tackle the heterogeneity in behavioral cues of children with autism, we use the
latest advances in deep learning to formulate a personalized machine learning
(ML) framework for automatic perception of the children’s affective states and
engagement during robot-assisted autism therapy. The key to our approach is
a novel shift from the traditional ML paradigm; instead of using “one-size-fits-all” ML models, our personalized ML framework is optimized for each child
by leveraging relevant contextual information (demographics and behavioral
assessment scores) and individual characteristics of each child. We designed
and evaluated this framework using a dataset of multi-modal audio, video and
autonomic physiology data of 35 children with autism (age 3-13) and from 2 cultures (Asia and Europe), participating in a 25-minute child-robot interaction (∼ 500k datapoints). Our experiments confirm the feasibility of the robot
perception of affect and engagement, showing clear improvements due to the
model personalization. The proposed approach has potential to improve existing therapies for autism by offering more efficient monitoring and summarization of the therapy progress.
1 Introduction
The past decade has produced an extensive body of research on human-centered robot technologies capable of sensing and perceiving human behavioral cues, leading to more naturalistic
human-robot interaction (1, 2). However, virtually all existing robots are still limited in their
perception of human signals. Until a few years ago, the main focus of research on social robots
has been on their design (3). The next generation of these robots will have to be not only more
appealing to humans, but also more social-emotionally intelligent. Health care is one of the
areas in particular that can substantially benefit from the use of socially assistive robots, which
have the potential to facilitate and improve many aspects of clinical interventions (4, 5). The
most recent advances in machine learning (ML) (6) and, in particular, deep learning (7), have
paved the way for such technology.
Various terms for the use of social robots in clinical therapy settings have emerged, including Socially Assistive Robotics (8, 9), Robot-enhanced Therapy (10, 11), and Robot-augmented
Therapy (12). The main role of social robots in this context is to facilitate a therapy through
story-telling and games, helping the therapist deliver content in a more engaging and interactive manner. Here we focus on a particular application of social robots to therapy for children
with Autism Spectrum Conditions (ASC) (13). Recently, research on autism has received significant attention due to the increasing number of individuals on the spectrum (1 in 64, with
the prevalence of 4:1 males to females (14)). Children with ASC have persistent challenges
in social communication and interactions, as well as restricted repetitive patterns of behavior,
interests and/or activities (13). To improve their social skills, children with ASC undertake
various types of occupational therapies during which they rehearse social and emotional communication scripts with a therapist. Traditionally, the therapist encourages the child to engage
by means of motivational activities (e.g., using toys). More recently, social robots have been
used to this aim, as many children with ASC find them enjoyable and engaging, perhaps due
to their human-like, yet predictable and nonthreatening nature (15, 16). This, in turn, has been
shown to increase the learning opportunities for these children, for whom early intervention
may enhance long-term outcome (17, 18).
A typical robot-assisted autism therapy for teaching emotion expressions to children with
ASC proceeds as follows: a therapist uses images of facial and body expressions of basic emotions (e.g. sadness, happiness, anger, and fear) as shown by typically developing children.
Then, the robot shows its expressions of these emotions to the child, and the therapist asks
the child to recognize the emotion. This is followed by the mirroring stage, where the child is
encouraged to imitate the robot expressions. If successful, the therapist proceeds to the next
level, telling a story and asking the child to imagine what the robot would feel in a particular
situation. These steps are adopted from the Theory of Mind (ToM) concept (19), designed to
teach the perspective taking ("social imagination") – one of the key challenges for many children with ASC. Other therapy designs have also been applied in the field, including Applied
Behavioral Analysis (ABA) (20) and Pivotal Response Treatment (PRT) (21). However, using
humanoid and other types of robotic solutions as part of clinical therapy is still in the "experimental" stage (22). The progress, in part, has been impeded due to the inability of current robot
solutions to autonomously perceive, interpret, and naturally respond to the children’s behavioral
cues. Today, this has been accomplished by the "person behind the curtain", typically a therapist who controls the robot via a set of keyboard programmed behaviors and different types
of prompts such as the robot waving at or saying something to the child, designed to engage
the child (the so-called "Wizard of Oz (WoZ)" scenario (23)). This makes the interaction less
natural and potentially more distracting for the child and therapist (24). Thus, there is a need for
(semi)autonomous and data-driven robots (25) that can learn and recognize the child’s behavioral cues, and also adapt to them more smoothly (e.g., by changing the exercise, providing
feedback/prompts, etc.) (26). This is discussed in more detail in several pioneering works on the
use of robots in autism therapy (15, 27, 28), where particular attention has been placed on robot
appearance (29, 30), interaction strategy (31) and analysis of behavioral cues of individuals interacting with the robots (30), including their facial expressions (32), body movements (33),
autonomic physiology (34), and vocalizations (35, 36).
Automated analyses of children’s behavioral cues during robot-assisted therapy rely on
ML from sensory inputs capturing different behavioral modalities (37). Typically, these inputs
are transformed into feature representations, which are then used to train supervised ML models,
where human experts provide labels for target states (e.g., engagement levels) of the children
by examining each child’s (video) data. Then, the ML consists of selecting a model (e.g., a
Support Vector Machine (SVM) (6)), which learns a mathematical function to map the input
features onto the target labels. This model is then applied to new data, providing estimates of
target outputs. For instance, in the context of children with ASC, Zheng et al. (38) proposed
the imitation skill training architecture and used a rule-based finite state machine method to
recognize the children’s body gestures from skeleton data. Likewise, Sanghvi et al. (39) used
a dataset of affective postural expressions during chess games between children and the iCat
robot. The authors extracted the upper body silhouette, and trained a set of weak classifiers
for the expression classification. Kim et al. (40) estimated the emotional states of children
with ASC from audio-based data and assessed their social engagement during playing with the
robots, using the Interactive Emotional Dyadic Motion Capture (IEMOCAP) database for real-time emotion classification with SVMs.
to estimate engagement based on children’s head movements (41). More recently, Esteban
et al. (11) demonstrated the feasibility of robot-assisted autism therapy using the NAO robot,
Kinect cameras, and multimodal-sensory information, including gaze estimation, human action
recognition, facial expression recognition, and voice analysis to classify stereotypical behaviors
of children with ASC using SVMs.
In this work, we use the most recent advances in ML (7, 42) to design a deep learning
framework that can easily personalize the robot’s interpretation of the children’s affective states
and engagement to different cultures and individuals. This is motivated by our previous work
(43), where we found significant cultural and individual differences in the expression of affect
and engagement of the children with ASC. The analyzed data contain ∼ 500k samples from
a highly synchronized multi-modal (visual, audio and physiology) and cross-cultural dataset
of children with ASC interacting with a social robot NAO (43). In our current work, we use
these data to design and implement the first fully personalized deep learning framework that
can automatically estimate the child’s affective states and engagement.
Figure 1: Overview: Data from three modalities (audio, visual and autonomic physiology) are collected
using unobtrusive sensors embedded in the robot (audio-visual), and worn on the child’s wrist (heart
rate, electrodermal activity and temperature). The data are obtained from 35 children with ASC and
with different cultural backgrounds and diverse expressive abilities, while they are engaged in a realworld therapeutic scenario. The context for analysis of the target data is structured through three key
robot stages: (1) Sensing, (2) Perception, and (3) Interaction. The main focus of this study is on the
Perception step, where we demonstrate the feasibility of automatically inferring the child’s affective
states and engagement levels using the proposed personalized perception of affect deep network.
The workflow of the envisioned ML-robot-assisted autism therapy consists of three key
steps. The robot sensing of outward and inward behavioral cues of the child undertaking the
therapy is attained using both open-source and new tools we built for processing the sensory
data from video, audio, and bio-signals of the children. The main focus of this work, which is
built upon deep neural networks (7), is the robot perception step. It takes as input the processed
features and automatically estimates (continuous) levels of the child’s valence, arousal, and
engagement. These are then used to design and implement the child-robot interaction (Fig. 1).
2 Results
We performed four sets of experiments. These were carefully designed to evaluate the role of
the model personalization and the interpretability/utility of the proposed personalized framework. We also provide comparisons of the proposed PPA-net with alternative approaches, and
investigate the role of different behavioral modalities on robot perception of affect and engagement.
2.1 System design
Fig. 2 shows the proposed PPA-net that we designed and implemented to optimally handle: (i)
the multi-modal and missing nature of the children’s data (feature layer) (44, 45), (ii) highly heterogeneous children’s data (a famous adage says: “If you have met one person with autism,
you have met one person with autism.”) (context layer), and (iii) simultaneous and continuous estimation of the children’s affective dimensions: valence, arousal, and engagement — all
three being critical for evaluating the efficacy of the autism therapy (decision layer). For (i),
we perform the fusion of different modalities to obtain optimal input features for each child.
We used a special type of network based on auto-encoders (46) (Sec. 4). The learned feature
representations are further augmented by the expert knowledge within the context layer, quantified using the expert-assessed childhood autism rating scale (CARS) (47). The architecture of
this layer is designed based on the nesting of the children using their demographics (culture and
gender), followed by individual models for each child. Since the target outputs exhibit different
dependence structures for each child (Fig. 3), we used the notion of multi-task learning (48, 49)
to learn the child-specific decision layers.
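For intuition only, the shared-trunk/child-specific-head structure of the decision layer can be sketched in a few lines of numpy; every detail below (layer sizes, the tanh nonlinearity, random weights) is an illustrative assumption and not the actual PPA-net implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes; OUT = (valence, arousal, engagement).
D, H, OUT, N_CHILDREN = 16, 8, 3, 4

W_shared = rng.normal(size=(D, H))   # shared feature/context layers (one copy for all children)
heads = [rng.normal(size=(H, OUT))   # child-specific decision layers
         for _ in range(N_CHILDREN)]

def predict(x, child):
    """Forward pass: shared representation, then the head of the given child."""
    h = np.tanh(x @ W_shared)
    return np.tanh(h @ heads[child])  # continuous outputs in [-1, 1], matching the coding scale

x = rng.normal(size=D)                # a fused feature vector for one time step
print(predict(x, child=0).shape)      # -> (3,)
```

Training such a model amounts to fitting the shared parameters on all children jointly and each head on that child's own data, which is the multi-task view referenced above.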
Figure 2: Personalized Perception of Affect Network (PPA-net). The feature layer uses (supervised)
autoencoders to filter the noise in the features and reduce the feature size. At the intermediate level
(context layer), behavioural scores of the child’s mental, motor, and verbal ability (also quantified by
CARS) are used to adaptively select the optimal features for each child in the fusion part of the network.
Further personalization of the network to each child is achieved via the demographic variables: culture
and gender. Finally, the estimation of valence, arousal, and engagement is accomplished in the decision
layer using the child-specific network layers, which output continuous estimates of target states.
To train and evaluate the network, we used the dataset of children undergoing occupational
therapy for autism, led by experienced therapists, and assisted by a humanoid robot NAO (43).
The data come from 35 children: 17 from Japan (C1), and 18 from Serbia (C2), all of whom
had a prior diagnosis of autism. These data include: (i) video recordings of facial expressions and
head movements, body movements, pose and gestures, (ii) audio-recordings, and (iii) autonomic
physiology from the child: heart rate (HR), electrodermal activity (EDA), and body temperature
(T) – as measured on the non-dominant wrist of the child. We extracted various features (the
sensing step) from these modalities using state-of-the-art tools for video and audio processing
(OpenFace (50), OpenPose (51), and OpenSmile (52)) - see Appendix/C. We also developed
feature extraction/noise-cleaning tools for processing of physiology data. Fig. 3(A) summarizes
the results. Note that in 48% of the data, at least one of the target modalities was absent (e.g.,
when the child’s face was not visible, and/or when a child refused to wear the wristband).
To obtain the target labels needed to train the network, the dataset was coded in terms of
valence, arousal, and engagement on a continuous scale from −1 to +1 by five trained human
experts, while watching the audio-visual recordings of the sessions. The coders’ agreement was
measured using the intra-class correlation (ICC) (53), type (3,1). The ICC ranges from 0 to 1,
and is commonly used in behavioral sciences to assess the coders’ agreement. The average ICC
per output was: valence (0.53 ± 0.17), arousal (0.52 ± 0.14) and engagement (0.61 ± 0.14).
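The ICC(3,1) used here can be computed from a two-way ANOVA decomposition (Shrout & Fleiss conventions); a minimal numpy sketch, where the helper name `icc_3_1` and the example scores are ours, not from the paper:

```python
import numpy as np

def icc_3_1(ratings):
    """ICC(3,1): two-way mixed model, single-rater consistency.

    ratings: (n_targets, k_raters) array of scores.
    """
    X = np.asarray(ratings, dtype=float)
    n, k = X.shape
    grand = X.mean()
    target_means = X.mean(axis=1)   # per-segment means
    rater_means = X.mean(axis=0)    # per-coder means
    # Mean squares of the two-way ANOVA decomposition.
    bms = k * np.sum((target_means - grand) ** 2) / (n - 1)   # between targets
    sse = (np.sum((X - grand) ** 2)
           - k * np.sum((target_means - grand) ** 2)
           - n * np.sum((rater_means - grand) ** 2))
    ems = sse / ((n - 1) * (k - 1))                           # residual
    return (bms - ems) / (bms + (k - 1) * ems)

# Two coders that agree up to a constant offset are perfectly consistent:
scores = np.array([[1.0, 2.0], [2.0, 3.0], [3.0, 4.0]])
icc = icc_3_1(scores)  # → 1.0
```

Because ICC(3,1) measures consistency rather than absolute agreement, a constant offset between coders does not lower the score.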
The codings were pre-processed and averaged across the coders. These were then used as
the ground truth for training ML models, where the data sets of each child were separated in
disjoint training, validation, and testing data subsets (Sec. 4.5). A detailed summary of the data,
features, and coding criteria is provided in Appendix/B,C.
2.2 Effects of Model Personalization
The main premise of model personalization via the PPA-net is that disentangling different
sources of variation in behavioral modalities of the children with ASC is expected to improve
the individual estimation performance compared to the traditional "one-size-fits-all" approach.
Fig. 3(D) depicts the ICC scores computed at each level in the model hierarchy. Specifically,
the performance scores at the top node in the graph are obtained using the predicted outputs
for children from both cultures using the proposed personalized PPA-net and the group-level
perception of affect network (GPA-net). Overall, the strength of the model personalization
Figure 3: (A) Summary of the fraction of data present across the different modalities both individually
and concurrently. (B) The dependence patterns derived from the manual codings of the valence, arousal,
and engagement. Note the large differences in these
patterns at the culture and individual levels. (C)
Clustering of the children from C1&C2 using the t-SNE, an unsupervised dimensionality reduction
technique, applied to the auto-encoded features (Sec. 2.2). (D) ICC scores per child: C1 (17) and
C2 (18) for valence (V), arousal (A) and engagement (E) estimation. Note the effects of the model
personalization: the performance of the proposed personalized PPA-net (in black) improves at all three
levels (culture, gender, and individual) when compared to the GPA-net (in gray). At the individual level,
we depict the difference in their performance, ∆ICC.
can be seen in the performance improvements at culture, gender, and individual levels in all
(sub)groups of the children. A limitation can be seen in the adverse gender-level performance
by the PPA-net on the two females from C1: this is due to the lack of data at this branch in the
model hierarchy, which evidently led to the PPA-net overfitting the data of these two children
when fine-tuning their individual layers, a common bottleneck of ML algorithms when trained
on limited data (54). Since the layers of the GPA-net were tuned using data of all the children,
this resulted in more robust estimation of the network on these individuals.
The individual estimation performance by the two networks is shown at the bottom of
Fig. 3(D). The improvements in ICC due to the network personalization range from 5% to 20%
per child. We also note drops in the PPA-net performance on some children. This is common
in multi-task learning, where the gain in performance on some tasks (here "tasks" are children)
comes at the expense of performance on the others. We also ran the paired t-test with unequal
variances (α = 0.05), using the ICC scores from 10 repetitions of the random splits of the data
per child (see Sec. 4.5), and compared the two models for each child. The numbers of children
on which the personalized PPA-net significantly outperformed the GPA-net, in both cultures,
are: V = 24/35, A = 23/35, and E = 31/35. More specifically, within C1, we obtained:
V = 10/17, A = 10/17, E = 15/17. Likewise, for C2, we obtained: V = 14/18, A = 13/18,
E = 16/18. Finally, the personalized PPA-net performed significantly better on all three outputs
(V, A, E) on 6/17 children from C1, and 9/18 children from C2. Taken together, these results
demonstrate notable benefits of the model personalization for improving the robot perception of
affect and engagement, at each level in the model hierarchy.
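The per-child comparison uses a t-test with unequal variances (Welch's form); a small sketch of the test statistic, assuming two sets of per-split ICC scores (the scores below are made-up placeholders, not values from the paper):

```python
import math

def welch_t(a, b):
    """Welch's t statistic and degrees of freedom for unequal-variance samples."""
    na, nb = len(a), len(b)
    ma, mb = sum(a) / na, sum(b) / nb
    va = sum((x - ma) ** 2 for x in a) / (na - 1)   # sample variances
    vb = sum((x - mb) ** 2 for x in b) / (nb - 1)
    se2 = va / na + vb / nb
    t = (ma - mb) / math.sqrt(se2)
    # Welch–Satterthwaite approximation of the degrees of freedom.
    df = se2 ** 2 / ((va / na) ** 2 / (na - 1) + (vb / nb) ** 2 / (nb - 1))
    return t, df

# Hypothetical ICC scores of the two models over repeated data splits:
t, df = welch_t([0.5, 0.6, 0.7], [0.1, 0.2, 0.3])
```

The resulting t is then compared against the t-distribution with df degrees of freedom at α = 0.05; in practice a library routine (e.g., a two-sample t-test with `equal_var=False`) does the same computation.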
We also analyzed the auto-encoded features obtained after the fusion of target modalities
(Sec. 4). The original size of the auto-encoded features was 250 dimensions (D). Fig. 3(C)
shows the embeddings of these features into a 2D space obtained using the t-Distributed Stochastic
Neighbor Embedding (t-SNE) (55) – a popular ML technique for unsupervised dimensionality
reduction and visualization of high-dimensional data. Note the clustering of the children’s
data in the projected space (the child ID was not used), confirming the high heterogeneity in
behavioral cues of these children. Personalizing the PPA-net to each child allows it to accommodate individual differences at different levels in the model hierarchy, leading to overall better
performance compared to the GPA-net.
2.3 Interpretability and Utility
A barrier to the adoption of deep learning arises when interpretability is paramount. Understanding
the features that lead to a particular output builds trust with clinicians and therapists using the
system in their daily practice. To analyze the contribution of each behavioral modality, and the
features within, we used DeepLift (Deep Learning Important FeaTures) (56), an open-source method
for computing importance scores in a neural network. Fig. 4(A) shows the importance scores of
the input features from each modality and from CARS for estimation of engagement, obtained
by applying DeepLift to the PPA-net. We note that the body and face modalities are dominant
when the scores are computed for both cultures together. The prior information derived from
CARS also strongly influences the estimation of engagement. However, at the culture level,
DeepLift produces opposite scores for the body modality/CARS in the two cultures. This
indicates that the model was capable of disentangling the culture-level differences. By analysis
of CARS (total scores) in our previous work (43), we found statistically significant differences
in the scores for the two cultures (p < 0.05), which, in part, also explains the difference in their
contribution across the two cultures. Similar observations can be made at the gender-level, yet,
these are more difficult to interpret due to the imbalance in males vs. females.
The PPA-net utility can be demonstrated through visualization of several components of the
robot sensing and perception. Fig. 4 (B) depicts the estimated affect and engagement levels,
along with the key autonomic physiology signals of a child undergoing the therapy (currently,
Figure 4: (A) Interpretability can be enhanced by looking at the influence of the input features on the output target: for example, here the output target is engagement level. The relative
importance scores (y-axis) are shown for the face, body, audio, physiology, and CARS features (x-axis). These are obtained from the DeepLift (56) tool, which provides negative/positive
values when the input feature drives the output toward −1/ + 1. The most evident differences arise from body features, also indicating cultural differences in the two groups. Therapy
monitoring (B) and summarization (C), based on the robot’s sensing of behavioral cues (audio-visual and autonomic physiology), and perception of the current affective states and engagement
levels, of the child. In (B), we depict the engagement (E), arousal (A) and valence (V) levels of the child. The automatically estimated levels using the PPA-net are shown in blue, and the
ground-truth based on the human coding is shown in red. We also plot the corresponding signals measured from the child’s wrist: accelerometer readings (ACC) showing the movement
intensity r = √(x² + y² + z²) along 3 axes (x, y, z); blood-volume pulse (BVP) and electro-dermal activity (EDA). The bars in plot (B) summarize the therapy in terms of the average
levels of E, V and A (± SD) within each phase of the therapy: (1) pairing, (2) recognition, and (3) imitation (there is no phase (4) - storytelling - because the child left after phase (3)).
these results are obtained by an off-line analysis of the recorded video). We note that the PPA-net was able to accurately detect the changes in the child’s engagement levels (e.g., during the
disengagement segment), while providing estimates that are overall highly consistent with human coders. Since the PPA-net is personalized using data of each child, evidently, it learned
particular expressions of affect and engagement for the interacting child. Fig. 4(C) summarizes
the therapy in terms of average valence, arousal, and engagement levels (along with their variability) within each phase of the therapy. Compared to human coders, the PPA-net produces
these statistics accurately for engagement and arousal levels, while it overestimates the valence
levels. However, as more data of the target child become available, these can be improved by
re-training his/her individual layer.
2.4 Alternative Approaches
How much advantage does the new personalized deep learning approach obtain over more traditional ML? Table 1 (Appendix/A) shows the estimation results obtained by alternative approaches and evaluated using the same experimental protocol (Sec. 4). Here, we compare the
performance of the proposed PPA-net/GPA-net with traditional multi-layer perceptron (MLP)
deep networks with the same hierarchical structure but optimized using standard learning techniques (i.e., without sequential nesting of the layers).1 We also include the traditional ML
models: linear regression (LR) (6), support vector regression (SVR) (6), and gradient boosted
regression trees (GBRTs) (57). In the ML literature, LR is usually considered the baseline
model. SVR is an adaptation of the SVM models (6), used in state-of-the-art works on human-robot interaction (e.g., (11, 40)), for estimation of continuous outputs. On the other hand,
GBRTs are commonly used in clinical decision tasks due to their easy interpretation of input
features (58). For more details about training and evaluation of the models, see Appendix/A.
1 The network layers are optimized jointly in MLP-0, followed by fine-tuning of individual layers (MLP).
From the ICC scores, we note the benefits of the proposed deep learning strategy (PPA-net). The joint learning of all layers in the MLP results in a lack of discriminative power of the
network. Compared to unpersonalized models (GPA-net, MLP-0, LR, SVR, and GBRTs), there
is a gap in performance. While LR fails to account for highly-nonlinear dependencies in the
data, the non-linear kernel method (SVR) achieves it to some extent, but does not reach the full
performance attained by the PPA-net due to the absence of hierarchical structure. On the other
hand, GBRTs are capable of discovering a hierarchy in the features, yet, they lack a principled
way of adapting to each child. Also, the large variance in the performance of all the models is
due to the high heterogeneity in behavioral expressions of children with ASC. By ranking
the models based on the number of ‘winning’ tasks (TaskRank), the PPA-net outperforms the
compared models on the majority of the tasks (48%), followed by SVR (22%) and GBRT (13%).
2.5 Effects of Different Modalities
To assess the contribution of each modality for estimation of target outputs, we evaluated the
PPA-net using visual (face and body), audio and physiological features both independently and
together. Fig. 8 (Appendix/A) shows the average results for both cultures, and for the children
within each culture. As expected, the fusion approach outperforms the individual modalities
across all three outputs (valence, arousal, and engagement). Also, higher performance was
achieved on C1 than C2 with the multi-modal approach – confirming the complementary and
additive nature of these modalities (59). Furthermore, the body features outperform the other
individual modalities, followed by the face and physiology modality. The low performance
by the audio modality is attributed to a high level of background noise, which is difficult to
control in real-world settings (60). Also, while the physiology features are comparable to the
best performing individual modality (body) in C1, this is not the case in C2.
3 Discussion
The overall objective of this work was to demonstrate the feasibility of an automated system
for robot perception of affect and engagement during autism therapy. This is driven by the societal need for new technologies that can facilitate and improve existing therapies for a growing
number of children with ASC. Recent advances in ML and data collection, using unobtrusive
sensors such as cameras and microphones, and wearable technology for measurement of autonomic physiology, have paved the way for such technology (61); however, little progress has been
made so far (62). To this end, we introduced a novel personalized ML framework that can easily
adapt to a child’s affective states and engagement even across different cultures and individuals.
This framework builds upon state-of-the-art deep learning techniques (7, 42), which we used
to implement the proposed Personalized Perception of Affect Network (PPA-net). While deep
learning has shown great success in a variety of learning tasks (e.g., object and scene recognition (63,64) and sentiment analysis (65)), it has not been explored before in the context of robot
perception for use in autism therapy. One of the reasons is the previous lack of data needed to
take full advantage of deep learning. Using the cross-cultural and multi-modal dataset containing over 500k images of child-robot interactions during autism therapy (43), we were able to
successfully design the robot perception based on the PPA-net.
As shown in our experiments comparing the personalized PPA-net with the group-level
GPA-net and MLP (i.e., the traditional "one-size-fits-all" approaches), deep learning has had
great success in leveraging a vast amount of data. However, realizing the full potential of
our framework on the data of children with ASC requires the network to personalize for each
child. We showed that with the PPA-net, an average intra-class agreement (ICC) of 59% can
be achieved between the model predictions and human (manual) coding of children’s affect and
engagement levels, where the average agreement between the human coders was 55.3%. This
does not imply that humans are not better at estimating affect and engagement but rather that the
proposed framework provides a more consistent and less biased estimation approach. Compared
to the standard approach in the field to coding affect (valence and arousal) levels, in the most
recent and largest public dataset of human faces (Affect-Net (66)), the coders agreement was
60.7%, and the automatic prediction of valence and arousal (using CNN-AlexNet (67)) was
60.2% and 53.9%, respectively, in terms of Pearson correlation. Note, however, that these
results are obtained from face images of typical individuals, whereas coding and automatic
estimation of the same dimensions from children with ASC is a far more challenging task.
The model personalization is accomplished using three key ingredients that make it particularly suited for the task at hand and different from existing approaches. First, it uses a
novel learning algorithm that allows the deep network to take full advantage of data sharing
at each level in the model hierarchy (i.e., the culture, gender, and individual level). This is
achieved via the newly introduced network operators (learn, nest, and clone) and fine-tuning
strategies (Sec. 4), where the former are based on the notion of network nesting (68) and deeply-supervised nets (69). We showed that, overall, this approach improves the estimation of affect
and engagement at each level in the model hierarchy, obtaining statistically significant improvements on 15/35 children (across all three outputs) when compared to the GPA-net (Fig. 3(D)).
Second, previous deep models (e.g., (70, 71)) that focused on multi-task learning do not leverage contextual information such as demographics (culture and gender), nor account for the
expert knowledge. We also showed in our experiments on the network interpretability (Sec. 2.3)
that this is important for disentangling different sources of variance arising from the two cultures
and individuals. This, in turn, allows the PPA-net to focus on individual variation when learning
the network parameters for each child. Third, using the network layers as building blocks in
our framework, we efficiently personalized the network to the target context. Traditional ML
approaches such as SVMs, used in previous attempts to implement the robot perception, and an
ensemble of regression trees (6), do not offer this flexibility. By contrast, the PPA-net brings
together the interpretability, design flexibility and overall improved performance.
Another important aspect of robot perception is its ability to effectively handle multi-modal
nature of the data, especially in the presence of noisy and missing modalities. We showed in
Sec. 2.5 that the fusion of audio-visual and physiological cues contributes to increasing the network performance. While our experiments revealed that body and face modalities play a central
role in the estimation, the autonomic physiology is also an important facet of affect and engagement (72). This is the first time that both outward and inward expressions of affect and engagement were used together to facilitate the robot perception in autism therapy. We also found that
the autonomic physiology influences differently the output of the two cultures. Namely, in C1
the physiology modality alone achieved an average ICC of above 50%, whereas in C2 this score
was around 30%. This disparity may be attributed to cultural differences, as children from C2
were moving more during the interactions, which often caused faulty readings from the physiology sensors. Furthermore, the audio modality underperformed in our experiments, despite
the evidence in previous works that successfully used it in estimation of affect (73). There are
potentially two solutions to remedy this: use more advanced techniques for reduction of background noise and user diarization, and a richer set of audio descriptors (60). By analyzing the
feature importance, we found that CARS largely influences the estimation of engagement. This
suggests that, in addition to the data-driven approach, the expert knowledge is important for
informing the robot perception in the form of prior knowledge. Lastly, in this work, we adopted
a feature-level fusion of different modalities; however, more advanced approaches can be used
to personalize the feature fusion to each child (e.g., using a mixture-of-experts approach (74)).
One of the important questions we tried to address is how ML algorithms can be of practical
use for therapists and clinicians working with children with ASC. The potential utility of our
personalized ML framework within autism therapy is through the use of visualization of the
estimated affect and engagement levels, and the key autonomic physiology signals of a child
undertaking the therapy (Fig. 4(B)). We note at least two benefits of this: first, the obtained
scores can be used by the robot to automatically adapt its interaction with the child. This can
also assist a therapist to monitor in real time the target behavioral cues of the child, and to
modify the therapy “on the fly”. It should also inform the therapist about the idiosyncratic
behavioral patterns of the interacting child. Furthermore, it can assist the therapists in reading
the children’s inward behavioral cues, i.e., their autonomic physiology, which cannot easily be
read from outward cues (e.g., EDA as a proxy of the child’s internal arousal levels, the increase
of which, if not detected promptly, can lead to meltdowns in children with ASC). Second, as we
show in Fig. 4(C), the output of the robot perception can be used to summarize the therapy in
terms of average valence, arousal, and engagement levels (along with their variability) within
each phase of the therapy. This, in turn, would allow for a long-term monitoring of the children’s
progress, also signaling when the robot fails to accurately perceive the child’s signals. This can
be used to improve certain aspects of the child’s behavioral expressions by profiling the child
and designing strategies to optimize his/her engagement through a personalized therapy content.
We also note some limitations of the current robot perception that highlight opportunities
for future method enhancement. First, in the current structure of the proposed PPA-net, we assumed that the children are grouped solely based on their demographics. While the findings in Sec. 2.3
show that the body modality has the opposite influence between the two cultures on estimation
of engagement, thus justifying the current PPA-net architecture, other network structures are
also feasible. For example, an adaptive robot perception would adopt a hybrid approach where
prior knowledge (e.g. demographics) is combined with a data-driven approach to automatically
learn the network structure (75). Also, our current framework is static, while the data we used are
inherently dynamic (the sensed signals are temporally correlated). Incorporating the temporal
context within our framework can be accomplished at multiple levels: different network parameters can be learned for each phase of the therapy. To this end, more advanced models such as
recurrent neural networks (76), can be used in the individual layers. Furthermore, the network
generalizability not only to the previously seen children, as currently done by the PPA-net,
but also to new children is another important aspect of robot perception. Extending the current
framework so that it can be optimized for previously unseen children would additionally increase
its utility. Due to the hierarchical nature of the PPA-net, a simple way to currently achieve this is
by adding an individual layer for each new child, while re-using the other layers in the network.
There are several other aspects of this work that also need to be addressed in the future. Although we used a rich dataset of child-robot interactions to build the robot perception system,
this dataset contains a single therapy session per child. An ideal system would have a constant
access to the therapy data of a target child, allowing the robot to actively adapt its interpretations
of the child’s affect and engagement, and further personalize the PPA-net as the therapy progresses. For this, ML frameworks such as active learning (77) and reinforcement learning (78)
are a good fit. This would allow the robot to continuously adjust the network parameters using
new data, and also reduce the coding effort by only asking human coders to provide labels for
cases for which it is uncertain. Another constraint of the proposed robot perception solution is
that the video data come from a background camera/microphone. While this allows us to have a
more stable view for the robot sensing of the face-body modality, the view from the robot’s perspective would enable more naturalistic interactions. This is also known as active vision (79);
however, it poses a number of challenges including the camera stabilization and multi-view
adaptation (80). Finally, one of the important avenues for future research on robot perception
for autism is to focus on its utility and deployment within every-day autism therapies. Only in
this way can the robot perception and the learning of children with ASC be mutually enhanced.
4 Materials and Methods
4.1 Data representations
We used the feed-forward multi-layer neural network approach (81) to implement the proposed
deep learning architecture (Fig. 2). Each layer receives the output of the layer above as its
input, producing higher-level representations (82) of the features extracted from the behavioral
modalities of the children. We began with the GPA-net, where all layers are shared among
the children. The network personalization was then achieved (i) by replicating the layers to
construct the hierarchical architecture depicted in Fig. 2, and (ii) by applying the proposed fine-tuning strategies to optimize the network performance on each child. The last layers of the
network were then used to make individual estimations of affect and engagement.
4.2 Feature Fusion and Autoencoding
We applied the feature-level fusion to the face (x_f), body (x_b), audio (x_a), and physiology (x_p)
features of each child as: x = [x_f; x_b; x_a; x_p] ∈ R^{D_x×1}, where D_x = 396 is the overall
dimension of the input. The continuous labels for valence (y_v), arousal (y_a), and engagement
(y_e) for each child were stored as y = [y_v; y_a; y_e] ∈ R^{D_y×1}, with D_y = 3. Furthermore, the data
of each child were split into non-overlapping training, validation and test datasets (Sec. 4.5).
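The feature-level fusion is a plain concatenation of the per-modality feature vectors. A minimal sketch, assuming a hypothetical split of the per-modality sizes (the paper only fixes the total D_x = 396; the individual dimensions below are illustrative placeholders):

```python
import numpy as np

# Hypothetical per-modality dimensions summing to the stated D_x = 396.
x_face, x_body = np.zeros(180), np.zeros(120)
x_audio, x_physio = np.zeros(60), np.zeros(36)

x = np.concatenate([x_face, x_body, x_audio, x_physio])   # fused input vector
y = np.zeros(3)                                           # [valence; arousal; engagement]
```

Any missing modality is simply a block of this vector, which is why the autoencoding step below is used to smooth over absent or noisy segments.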
To reduce the adverse effects of partially-observed and noisy features in the input x (Fig. 8 (A)
- Appendix/A), we used an autoencoder (AE) (83) in the first layer of the PPA-net. The AE
transforms x to a hidden representation h_0 (with an encoder) through a deterministic mapping:
h_0 = f_{θ_0^(e)}(x) = W_0^(e) x + b_0^(e),    θ_0^(e) = {W_0^(e), b_0^(e)},    (1)
parametrized by θ_0^(e), where e designates the parameters on the encoder side. We used the linear
activation function (LaF), where the parameters θ_0^(e) = {W_0^(e), b_0^(e)} are a weight coefficient
matrix and a bias vector, respectively. This hidden representation is then mapped back to the
input, producing the decoded features:
z_0 = f_{θ_0^(d)}(h_0) = W_0^(d) h_0 + b_0^(d),    θ_0^(d) = {W_0^(d), b_0^(d)},    (2)
where d designates the parameters of the decoder, and W_0^(d) = W_0^(e)T are the tied weights
used for the inverse mapping of the encoded features (decoder). In this way, the input data
were transformed to a lower-dimensional and less-noisy representation (the ’encoding’). Since the
input data are multi-modal, the encoded subspace also integrates the correlations among the
modalities, rendering more robust features for learning of the subsequent layers in the network.
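A minimal numpy sketch of the linear encoder/decoder with tied weights (W_0^(d) = W_0^(e)T); the 396-input, 250-dimensional code follows the sizes reported in Sec. 4.3, while the random initialization is purely illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
d_x, d_h = 396, 250                      # input and code dimensions (Sec. 4.3)

W_e = rng.normal(0, 0.01, (d_h, d_x))    # encoder weights
b_e = np.zeros(d_h)                      # encoder bias
b_d = np.zeros(d_x)                      # decoder bias

def encode(x):
    """Linear activation (LaF) encoder: h_0 = W_e x + b_e."""
    return W_e @ x + b_e

def decode(h0):
    """Decoder with tied weights: z_0 = W_e^T h_0 + b_d."""
    return W_e.T @ h0 + b_d

x = rng.normal(size=d_x)
h0 = encode(x)   # 250-D code
z0 = decode(h0)  # 396-D reconstruction
```

Tying the decoder to the transpose of the encoder halves the number of AE parameters, which helps given the partially observed input.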
We augmented the encoding process by introducing a companion objective function (CoF)
for each hidden layer (69). The CoF acts as a regularizer on the network weights, enabling the
outputs of each layer to pass the most discriminative features to the next layer. Using the CoF,
the AE also reconstructs target outputs y_0 (in addition to z_0) as:
y_0 = f_{θ_0^(c)}(h_0) = W_0^(c) h_0 + b_0^(c),    θ_0^(c) = {W_0^(c), b_0^(c)}.    (3)
The AE parameters ω_0 = {θ_0^(e), θ_0^(d), θ_0^(c)} were then optimized over the training dataset to
minimize the mean-squared-error (MSE) loss (defined as α(a, b) = Σ_{i=1}^{d} ||a_i − b_i||^2) for both the
decoding (α_d) and output (α_c) estimates:

ω_0^* = arg min_{ω_0} α(x, y) = arg min_{ω_0} (1/N) Σ_{i=1}^{N} [(1 − λ) α_d(x^(i), z_0^(i)) + λ α_c(y_0^(i), y^(i))],    (4)
where N is the number of training datapoints from all the children. The parameter λ ∈ (0, 1)
was chosen to balance the network’s generative power (the feature decoding) and discriminative
power (the output estimation), and was optimized using validation data (in our case, the optimal
value was λ = 0.8). The learned f_{θ_0^(e)}(·) was applied to the input features x, and the resulting
code h_0 was then combined with the CARS (x_cars ∈ R^{15×1}) for each child: h_1 = [h_0; x_cars].
This new data representation was used as input to the subsequent layers of the network.
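The objective in Eq. (4) trades off reconstruction against output estimation with a single weight λ; a per-sample sketch of the combined loss in numpy (λ = 0.8 as selected on validation data; the toy vectors are ours):

```python
import numpy as np

def mse(a, b):
    """The α(·,·) loss of Sec. 4.2: sum of squared element-wise errors."""
    return np.sum((a - b) ** 2)

def combined_loss(x, z0, y0, y, lam=0.8):
    """(1 − λ)·α_d(x, z_0) + λ·α_c(y_0, y) for one datapoint (Eq. 4)."""
    return (1 - lam) * mse(x, z0) + lam * mse(y0, y)

loss = combined_loss(np.array([1.0, 2.0]), np.array([1.0, 1.0]),
                     np.array([0.5, 0.0, 0.0]), np.array([1.0, 0.0, 0.0]))
# (1 − 0.8)·1.0 + 0.8·0.25 = 0.4
```

Setting λ close to 1 favors discriminative power (output estimation), λ close to 0 favors generative power (feature decoding); the full objective averages this loss over the N training datapoints.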
Figure 5: The learning of the PPA-net. (A) The supervised-AE performs the feature smoothing by
dealing with missing values and noise in the input, while preserving the discriminative information in
the subspace h0 - constrained by the CoF0 . The learning operators in the PPA-net: (B) learn, (C) nest
and (D) clone, are used for the layer-wise supervised learning, learning of the subsequent vertical layers,
and horizontal expansion of the network, respectively. (E) The group level GPA-net is first learned by
sequentially increasing the network depth using learn & nest. The GPA-net is then used to initialize the
personalized PPA-net weights at the culture, gender, and individual level (using clone). (F) The network
personalization is then accomplished via the fine tuning steps I and II (Sec. 4).
4.3 Group-level Network
We first trained the GPA-net, where all network layers are shared among the children (Fig. 5
(E)). The weights of the GPA-net were also used to initialize the PPA-net, followed by the proposed fine-tuning strategies to personalize the network (Sec. 4.4). The former step is important
because each layer below the culture level in the PPA-net uses only a relevant subset of the
data (e.g., in C1, data of two females are present below the gender layer), resulting in less data
to train these layers. This, in turn, could easily lead to overfitting of the PPA-net, especially
of its child-specific layers, if only the data of a single child were used to learn their weights.
To this end, we employed a supervised layer-wise learning strategy, similar to that proposed in
recent deep learning works (68, 69). The central idea is to train the layers sequentially and in a
supervised fashion by optimizing two layers at a time: the target hidden layer and its CoF.
We defined two operators in our learning strategy - learn and nest (Fig. 5). The learn
operator is called when simultaneously learning the hidden and CoF layers. For the hidden
layers, we used the rectified linear unit (ReLU) (7), defined as: h_l = max(0, W_l h_{l−1} + b_l),
where l = 1, . . . , 4, and θ_l = {W_l, b_l}. ReLU is the most popular activation function that
provides a constant derivative, resulting in fast learning and preventing vanishing gradients in
deep neural networks (7). The AE output and CARS (h_1) were fed into the fusion (l = 1) layer,
followed by the culture (l = 2), gender (l = 3), and individual (l = 4, 5) layers, as depicted
in Fig. 5, where each CoF is a fully connected LaF with parameters θ_l^(c) = {W_l^(c), b_l^(c)}. The
optimal parameters of the l-th layer, ω_l = {θ_l, θ_l^(c)}, were found by minimizing the loss:

ω_l^* = arg min_{ω_l} α_c(h_l, y) = arg min_{ω_l} (1/N) Σ_{i=1}^{N} α_c(y_{l+1}^(i), y^(i)),    (5)
where α_c is the MSE loss (Sec. 4.2), computed between the output of the ReLU layer (h_{l+1})
passed through the LaF layer of the CoF (y_{l+1}), and the true outputs (y).
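Each learn step therefore trains a ReLU hidden layer together with its linear companion output (CoF). A forward-pass sketch in numpy, using the fusion-layer sizes reported later in this section (265 × 200 hidden, 200 × 3 CoF); the random weights stand in for the learned ω_l, and training would minimize α_c by gradient descent:

```python
import numpy as np

rng = np.random.default_rng(1)
d_in, d_hid, d_out = 265, 200, 3         # fusion-layer sizes from Sec. 4.3

W = rng.normal(0, 0.01, (d_hid, d_in)); b = np.zeros(d_hid)        # ReLU layer θ_l
W_c = rng.normal(0, 0.01, (d_out, d_hid)); b_c = np.zeros(d_out)   # CoF layer θ_l^(c)

def learn_forward(h_prev):
    """One learn step's forward pass: ReLU hidden layer + linear CoF estimate."""
    h = np.maximum(0.0, W @ h_prev + b)  # ReLU activations h_{l+1}
    y_hat = W_c @ h + b_c                # CoF estimate y_{l+1} of [V; A; E]
    return h, y_hat

h, y_hat = learn_forward(rng.normal(size=d_in))
```

The MSE between `y_hat` and the true `y` is exactly the α_c term of Eq. (5) for one datapoint.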
When training the next layer in the network, we used the nest operator (Fig.5) in a similar
fashion as in (68), to initialize the parameters as:
θ_{l+1} = {W_{l+1} ← I + ε, b_{l+1} ← 0},
θ_{l+1}^(c) = {W_{l+1}^(c) ← W_l^(c), b_{l+1}^(c) ← b_l^(c)},    (6)
where the weight matrix W_{l+1} of the ReLU was set to an identity matrix (I). To avoid the
network being trapped in a local minimum of the previous layer, we added low Gaussian
noise ε (ε_{i,j} = N(0, σ^2), σ = 0.01) to the elements of I. We set the parameters of the supervised
linear layer using the weights of the CoF above, which assures that the network achieves similar
performance after nesting of the new ReLU layer. Before we started training the nested layer,
we ‘froze’ all the layers above by setting the gradients of their weights to zero – a common
approach in a layer-wise training of deep models (84). This allows the network to learn the best
weights for the target layer (at this stage). The steps learn & nest were applied sequentially to
all subsequent layers in the network. Then, the fine-tuning of the network hidden layers and the
last CoF was done jointly. We initially set the number of epochs to 500, with early stopping, i.e.,
training until the error on a validation set reaches a clear minimum (82) (∼ 100 epochs).2
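The nest initialization of Eq. (6) can be sketched as follows: the new ReLU weight matrix starts as a perturbed identity, so at initialization the nested layer approximately passes its (nonnegative) input through, while the new CoF reuses the weights of the CoF above (a sketch with illustrative sizes; the helper name `nest` is ours):

```python
import numpy as np

rng = np.random.default_rng(2)

def nest(W_cof_prev, b_cof_prev, d_hid, sigma=0.01):
    """Initialize a nested ReLU layer (Eq. 6): W ← I + ε, b ← 0; CoF copied."""
    W_new = np.eye(d_hid) + rng.normal(0, sigma, (d_hid, d_hid))  # I + ε
    b_new = np.zeros(d_hid)
    return W_new, b_new, W_cof_prev.copy(), b_cof_prev.copy()

W_c, b_c = rng.normal(size=(3, 200)), np.zeros(3)   # CoF of the layer above
W, b, W_c2, b_c2 = nest(W_c, b_c, d_hid=200)

h = np.abs(rng.normal(size=200))          # nonnegative activations from above
h_next = np.maximum(0.0, W @ h + b)       # ≈ h at initialization
```

Because ReLU activations are nonnegative and W starts near the identity, the nested layer initially changes the representation only by the small noise term, so the network keeps roughly the performance it had before nesting; the layers above are then frozen (zero gradients) while the new layer is trained.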
The network parameters were learned using the standard backpropagation algorithm (7).
Briefly, this algorithm indicates how a model should change its parameters that are used to
compute the representation in each layer from the representation in the previous layer. The loss
of the AE layer and each pair of the ReLU/LaF(CoF) layers was minimized using the Adadelta
gradient descent algorithm with learning rate lr = 1, 200 epochs, and a batch size of 100. The
optimal network configuration had 396 × 250, and 250 × 3 hidden neurons (h) in the AE and
its CoF layers, respectively. Likewise, the size of the fusion ReLU was 265 (250 + 15) × 200,
and 200 × 200 for all subsequent ReLU layers. The size of their CoF layers was 200 × 3. We
implemented the PPA-net using the Keras API (85) with a Tensorflow backend (86), on a Dell
Precision workstation (T7910), with the support of two GPUs (NVIDIA GF GTX 1080 Ti).
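Adadelta, used here with lr = 1, adapts per-parameter step sizes from running averages of squared gradients and squared updates. A minimal single-parameter sketch of the standard update rule (not the authors' implementation):

```python
def adadelta_step(g, state, rho=0.95, eps=1e-6, lr=1.0):
    """One Adadelta update for a scalar parameter; 'state' carries the
    running averages of squared gradients (Eg2) and updates (Edx2)."""
    state["Eg2"] = rho * state["Eg2"] + (1 - rho) * g * g
    dx = -((state["Edx2"] + eps) ** 0.5 / (state["Eg2"] + eps) ** 0.5) * g
    state["Edx2"] = rho * state["Edx2"] + (1 - rho) * dx * dx
    return lr * dx

# toy problem: minimize f(w) = (w - 3)^2, with gradient 2(w - 3)
w, state = 0.0, {"Eg2": 0.0, "Edx2": 0.0}
for _ in range(2000):
    w += adadelta_step(2.0 * (w - 3.0), state)
```

The ratio of the two running RMS terms lets the step size grow or shrink per parameter without a hand-tuned learning rate, which is why lr can simply be left at 1.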
4.4 Network Personalization
To personalize the GPA-net, we devised a learning strategy that consists of three steps: the
network initialization followed by two fine-tuning steps. For the former, we introduced a new
operator, named clone, which widens the network to produce the architecture depicted in Fig. 2.
Specifically, the AE (l = 0) and fusion (l = 1) layers were configured as in the GPA-net (using
the same parameters). The clone operator was then applied to generate the culture, gender, and
² After this, we also tried fine-tuning the last layer only; however, this did not affect the network performance.
individual layers, the parameters of which were initialized as follows:

l = 2 : θl^(q) ← θl^0,    q = {C1, C2}
l = 3 : θl^(g) ← θl^0,    g = {male, female}
l = 4 : θl^(k) ← θl^0,    k = {1, .., K}
l = 5 : θl−1^(c,k) ← θl−1^(c,0),    k = {1, .., K}.    (7)
As part of the clone procedure, the culture and gender layers were shared among the children,
while the individual layers were child-specific.
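The clone operator of Eq. (7) can be sketched as a parameter-copying routine (the dict-based layout follows the layer indices in Eq. (7), but is otherwise a hypothetical sketch):

```python
import copy

def clone(group_params, cultures, genders, n_children):
    """Clone operator (Eq. 7), sketched: replicate the group-level
    (GPA-net) parameters of layers l = 2..5 into culture-, gender- and
    child-specific branches. 'group_params' maps layer index -> params."""
    return {
        "culture": {q: copy.deepcopy(group_params[2]) for q in cultures},
        "gender": {g: copy.deepcopy(group_params[3]) for g in genders},
        "child": {k: copy.deepcopy(group_params[4])
                  for k in range(1, n_children + 1)},
        "cof": {k: copy.deepcopy(group_params[5])
                for k in range(1, n_children + 1)},
    }

group = {l: {"W": [[0.1]], "b": [0.0]} for l in range(6)}   # toy parameters
net = clone(group, ["C1", "C2"], ["male", "female"], n_children=35)
```

Deep copies ensure that later child-specific fine-tuning of one branch cannot silently modify another, while the culture and gender branches remain shared objects across the children that use them.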
To adapt the network parameters to each child, we experimented with different fine-tuning
strategies. We report here a two-step fine-tuning strategy that performed the best. First, we
updated the network parameters along the path to a target child, while freezing the layers not
intersecting with that particular path. For instance, for child k and demographics {C1, female}, the following updates were made: ω*_{l=1:5,k} = {θ0, θ1, θ2^(C1), θ3^(female), θ4^(k), θ5^(c,k)}. Practically,
this was achieved by using a batch of 100 random samples of the target child to compute the
network gradients along that child-path. In this way, the network gradients were accumulated
across all the children, and then back-propagated (1 epoch). This was repeated for 50 epochs,
and the Stochastic Gradient Descent (SGD) algorithm (lr = 0.03) was used to update the
network parameters. At this step, SGD produced better parameters than Adadelta. Namely,
due to its adaptive lr, Adadelta quickly altered the initial network parameters, overfitting the
parameters of deeper layers, for the reasons mentioned above. This, in turn, diminished the
shared knowledge provided by the GPA-net. On the other hand, the SGD with the low and fixed
lr made small updates to the network parameters at each epoch, allowing the network to better
fit each child while preserving the shared knowledge. This was followed by the second fine-tuning step, where the child-specific layers (ReLU/LaF(CoF)) were further optimized. For this,
we used Adadelta (lr = 1) to tune the child-specific layers, ωk* = {θ3^(k), θ3^(c,k)}, one child at a time (200 epochs). Further details of these learning strategies are provided in Appendix A.
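The first fine-tuning step, computing gradients along each child's path, accumulating them for shared layers, and then applying one low-rate SGD update, can be sketched as follows (parameter names are hypothetical):

```python
def sgd_accumulated(params, per_child_grads, lr=0.03):
    """First fine-tuning step, sketched: per-child gradients touch only
    the layers on that child's path; shared layers accumulate gradients
    across children before a single SGD update with a low, fixed lr."""
    total = {name: 0.0 for name in params}
    for grads in per_child_grads:          # one gradient dict per child batch
        for name, g in grads.items():      # only layers on the child's path
            total[name] += g
    return {name: w - lr * total[name] for name, w in params.items()}

params = {"theta0": 1.0, "theta2_C1": 0.5, "theta2_C2": 0.5}
grads = [{"theta0": 0.2, "theta2_C1": 0.1},   # a child from culture C1
         {"theta0": 0.2, "theta2_C2": 0.1}]   # a child from culture C2
new_params = sgd_accumulated(params, grads)
```

The shared parameter theta0 receives both children's gradients, while each culture-specific layer receives only its own, matching the path-freezing scheme described above.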
4.5 Evaluation Procedure
We performed a random split of data of each child into three disjoint sets: we used 40% of a
child’s data as a training set, and 20% as the validation data to select the best model configuration. The remaining 40% were used as the test set to evaluate the model’s generalization to
previously unseen data. This protocol imitates a realistic scenario where a portion of a child’s
data (e.g., annotated by child therapists) is used to train and personalize the model to the child,
and the rest is used to estimate affective states and engagement from new data of that child. To
avoid any bias in the data selection, this process was repeated ten times. The input features were
z-normalized (zero mean, unit variance), and the model’s performance is reported in terms of
ICC (and MSE) computed from the model estimates and ground-truth labels (see Appendix).
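The per-child 40/20/40 split and the feature z-normalization can be sketched as follows (pure Python; the sample size and helper names are hypothetical):

```python
import random

def split_child_data(indices, seed):
    """Random 40/20/40 train/validation/test split of one child's samples."""
    rng = random.Random(seed)          # one seed per repetition (10 in total)
    idx = list(indices)
    rng.shuffle(idx)
    n_tr, n_va = int(0.4 * len(idx)), int(0.2 * len(idx))
    return idx[:n_tr], idx[n_tr:n_tr + n_va], idx[n_tr + n_va:]

def z_normalize(values):
    """Zero-mean, unit-variance scaling of a feature column."""
    mean = sum(values) / len(values)
    sd = (sum((v - mean) ** 2 for v in values) / len(values)) ** 0.5
    return [(v - mean) / (sd if sd > 0 else 1.0) for v in values]

train, val, test = split_child_data(range(100), seed=1)
z = z_normalize([1.0, 2.0, 3.0])
```

Repeating the split with ten different seeds gives the ten evaluation runs used to avoid selection bias.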
Acknowledgments
We thank the Serbian Autism Society, and the educational therapist Ms S. Babovic for her invaluable feedback during this study. We would also like to thank the Ethics Committee from
Japan (Chubu IRB), Serbia - MHI, and USA (MIT IRB), for allowing this research to be conducted. We also thank Ms Havannah Tran and Ms Jiayu Zhou for helping us to prepare the
figures in the paper, Dr Javier Hernandez, for his insights into the processing and analysis of
physiological data, Dr Manuel Cebrian for his advice on formatting the paper, and MIT undergraduate students: Mr John Busche, for his support in experiments for alternative approaches,
and Ms Sadhika Malladi, for her help with running the DeepLift code. Our special thanks go to
all the children and their parents who participated in the data collection - without them, this research would not be possible. Funding: This work has been supported by MEXT Grant-in-Aid
for Young Scientists B Grant No. 16763279, and Chubu University Grant I Grant No. 27IS04I
(Japan). The work of O. R. has been funded by EU HORIZON 2020 under grant agreement
no. 701236 (ENGAGEME) - Marie Skłodowska-Curie Individual Fellowship, and the work
of B.S. under grant agreement no. 688835 (DE-ENIGMA). Author contributions: O.R. and
R.P. conceived the personalized machine learning framework. O.R. derived the proposed deep
learning method. M.D. and O.R. implemented the method and conducted the experiments. J.L.
supported the data collection, data processing and analysis of the results. B.S. provided insights
into the method and audio-data processing. All authors contributed to the writing of the paper.
Competing interests: The authors declare that they have no competing interests.
Appendix
A Details on Model Training and Alternative Approaches
Table 1: The mean (±SD) of the ICC scores (in %) for estimation of the children’s valence, arousal,
and engagement. TaskRank quantifies the portion of tasks (35 children × 3 outputs = 105 in total) on
which the target model outperformed the compared models, including a standard deep multi-layer perceptron network with the last layers adapted to each child (MLP), a joint MLP (MLP-0), linear regression (LR),
support-vector regression (SVR) with a Gaussian kernel, and gradient-boosted regression trees (GBRT).
Models  | Valence | Arousal | Engagement | Average | TaskRank (in %)
PPA-net | 52±21   | 60±16   | 65±24      | 59±21   | 48.0 (1)
GPA-net | 47±28   | 57±15   | 60±25      | 54±24   | 10.5 (4)
MLP     | 47±18   | 54±15   | 59±22      | 53±20   | 3.40 (5)
MLP-0   | 43±22   | 52±15   | 57±23      | 51±20   | 2.90 (6)
LR      | 12±18   | 21±16   | 24±21      | 18±19   | 1.00 (7)
SVR     | 45±26   | 56±14   | 49±22      | 50±21   | 21.9 (2)
GBRT    | 47±21   | 51±15   | 49±22      | 49±20   | 12.4 (3)
Figure 6: Empirical Cumulative Distribution Function (CDF) computed from average estimation errors
for valence, arousal, and engagement levels, in terms of (A) ICC and (B) MSE. We show the performance of the three top-ranked models (based on TaskRank in Table 1). The individual performance
scores for 35 children are used to compute the CDFs in the plots. From the plots, we see that the improvements due to the network personalization are most pronounced for 40% < F (X) < 75% of the
children. On the other hand, the model personalization exhibits similar performance on the children
for whom the group-level models perform very well (0% < F (X) < 40%), or largely underperform
(75% < F (X) < 100%). This indicates that for the underperforming children, the individual expressions of affect and engagement vary largely across the children. Thus, more data of those children is
needed to achieve a more effective model personalization.
Figure 7: The networks’ learning: Mean Squared Errors (MSE) during each epoch in the network
optimization are shown for the personalized (PPA-net and MLP) and group-level (GPA-net) models, and
for training (tr) and validation (va) data. Note that the GPA-net learns faster and with a better local
minimum compared to the standard MLP. This is due to the former using a layer-wise supervised learning strategy. This is further enhanced by the fine-tuning steps in the PPA-net, achieving the lowest MSE during the model learning, which is due to its ability to adapt its parameters to each culture, gender and individual.
Figure 8: The contribution of visual (face and body), audio and physiology modalities in the estimation
of the valence, arousal, and engagement levels of the children using PPA-net. The fusion approach
(’ALL’) outperforms the individual modalities, evidencing the additive contribution of each modality to
predicting the target outputs. The large error bars reflect the high level of heterogeneity in the individual
performance of the network on each child, as expected for many children with ASC.
In Table 1, we compare the different methods described in Sec. 2.4 of the paper and detailed
below. In Fig. 6, we depict the error distributions of the top performing methods, highlighting
the regions in the error space where the proposed PPA-net is most effective (and otherwise).
Fig. 7 shows the convergence rates of the deep models evaluated in the paper, and in terms of
the learning steps and the loss minimization (MSE). Note that the proposed PPA-net is able to
fit the target children significantly better, while still outperforming the compared methods on
the previously unseen data of those children (Table 1). In traditional ML, where the goal is
to be able to generalize to previously unseen subjects, this could be considered as algorithmic bias (model overfitting). By contrast, in personalized ML, as proposed here, it is beneficial, as it allows the model to perform best on unseen data of the target subject to whom we aim to personalize the model. Fig. 8 depicts the contribution of each modality to the estimation
performance (Sec. 2.5). The bars in the graph show the mean (±SD) ICC performance for
each modality obtained by averaging it across the children. The PPA-net configuration used to
make predictions from each modality was the same as in the multi-modal scenario. However,
the size of the auto-encoded space varied in order to accommodate the size of the input features. Specifically, the optimal sizes of the encoded features per modality were 150 (face) and 50 (body), while the original feature sizes were used for audio (24) and physiology (30).
In what follows, we provide additional details on the training procedures for the alternative
methods used in our experiments.
• MLP-0/MLP: For the standard multi-layer perceptron (MLP-0) deep network, we used
the same architecture/layer types as in our GPA-net; however, the training of its layers was done in the traditional manner (’at once’) using the Adadelta algorithm. The number of epochs was set to 200, with early stopping on the validation set. The personalization of
this network (MLP) was accomplished by cloning the last layer in the network (decision
layer) and fine-tuning it to each child using SGD (lr = 0.03).
• LR: The standard Linear Regression (LR) (6) with L2 regularization on the design matrix. The regularization parameters were tuned on the validation data used in our experiments to obtain the best performance.
• SVR: Support Vector Regression (SVR) (6) is the standard kernel-based method used
for non-linear regression. It defines a kernel matrix computed by applying a pre-defined
kernel function to data pairs. We used the standard isotropic Radial Basis Function (RBF).
Due to the non-parametric nature of this method, it was computationally infeasible to use
all training data to form the kernel matrix. To circumvent this, we trained one SVR
per child (using all training data). Then, we selected support vectors (SV) – the most
discriminative examples – from each child (SVs=1000), and re-trained the model using
these SVs, thus 35k data points in total. To avoid the model overfitting, the penalty parameter C was selected on the validation data.
• GBRT: Gradient Boosted Regression Trees (GBRT) (57) is a generalization of boosting
to arbitrary differentiable loss functions. We set the number of basis functions (weak
learners) to 35, corresponding to the number of the children. The tree depth was varied from d = 3–10, and we found that d = 5 performed the best on the validation set. This configuration was used to produce the reported results. The main advantage of GBRT is that they naturally handle heterogeneous input data and have the ability to select the most discriminative features for the target task, also allowing for an easy interpretation. However,
like LR and SVR, their traditional version does not allow for an easy personalization to
each child, in contrast to the proposed PPA-net.
For the methods above, we used existing publicly available implementations. Specifically, for MLP-0/MLP we used the Keras API (85), and for the rest we used scikit-learn (87), a Python toolbox for ML.
B Data
We reported the results obtained on the dataset of children undergoing occupational therapy for
autism (43). The therapy was led by experienced child therapists, and assisted by a humanoid
robot NAO. The goal of the therapy was to teach the children to recognize and imitate emotive
behaviors (using the Theory of Mind concept (88)) as expressed by NAO robot. During the
Table 2: The summary of participants [taken from (43)]. The average CARS scores of the two groups
are statistically different (t(34) = −3.35, p = 0.002).
                     | C1 (Japan) | C2 (Serbia)
Age                  | 7.59±2.43  | 9.41±2.46
Age range            | 3–12       | 5–13
Gender (male:female) | 15:2       | 15:4
CARS                 | 31.8±7.1   | 40.3±8.2
therapy, the robot was driven by a "person behind the curtain" (i.e., the therapist), but the data were collected to enable the robot to have a future autonomous perception of the affective states of a child learner. The data include: (i) video recordings of facial expressions, head and
body movements, pose and gestures, (ii) autonomic physiology (heart rate (HR), electrodermal
activity (EDA) and body temperature (T)) from the children, as measured on their non-dominant
wrist, and (iii) audio-recordings (Fig. 1). The data come from 35 children, with different cultural
backgrounds. Namely, 17 children (16 males / 1 female) are from Japan (C1), and 19 children
(15 males / 4 females) are from Serbia (C2) (43). Note that in this paper we excluded the
data of one male child from C2 due to the low-quality recording. Each child participated in a 25-minute-long child-robot interaction. The children's ages varied from 3 to 13, and all the children had a prior diagnosis of autism (see Table 2). The protocol for the data acquisition was reviewed and
approved by relevant Institutional Review Boards (IRBs), and informed consent was obtained
in writing from the parents of the children. More details about the data, recording setting and
therapy stages (pairing, recognition, imitation and story-telling) can be found in (43).
C Feature Processing
The raw data of synchronized video, audio and autonomic physiology recordings were processed using state-of-the-art open-source tools. For the analysis of facial behavior, we used the
OpenFace toolkit (50). This toolkit is based on Conditional Local Neural Fields (CLNF) (89),
an ML model for the detection and tracking of 68 fiducial facial points, described as 2-dimensional
(2D) coordinates (x, y) in face images (Fig. 1). It also provides 3D estimates of head-pose
and eye-gaze direction (one for each eye), as well as the presence and intensity (on a 6 level
Likert scale) of 18 facial action units (AUs) (90). The latter are usually referred to as the judgment level descriptors of facial activity, in terms of activations of facial muscles. Most human
facial expressions can be described as a combination of these AUs and their intensities, and
they have been the focus of research on automated analysis of facial expressions (91). For
capturing the body movements, we used the OpenPose toolkit (51) for automated detection of
18-keypoint body pose locations, 21-keypoint hand estimation, and 70 fiducial facial landmarks
(all in 2D), along with their detection confidence (0-1). From this set, we used the body pose
and facial landmarks, and disregarded the hand tracking (due to frequent occlusions of the children’s hands). OpenPose is built upon recent advances in convolutional neural networks - CNNs
(specifically, the VGG-19 net (92)), and the part affinity fields for part association (51).
For audio signal processing, we used the openSmile toolkit (52) to automatically extract
acoustic low-level descriptors (LLDs) from the speech waveform on frame level. Specifically,
we used 24 LLDs (pitch, MFCC, LSP, etc.) provided by openSmile, which have already
been used effectively for cross-lingual automatic diagnosis of ASC from children’s voices (73).
These features were computed over sliding windows of length 100 ms with 10 ms shift, and
then aligned with the visual features using time-stamps stored during the data recording.
To measure the biosignals based on autonomic physiology (HR, EDA and T), we used the
commercially available E4 wrist-worn sensor (93, 94). This wristband provides real-time readings of blood volume pulse (BVP) and HR (64Hz), EDA via the measurement of skin conductance (4Hz), skin T (4Hz), and 3-axis accelerometer (ACC) data (32Hz). From these signals, we
also extracted additional commonly used hand-crafted features (34), as listed in Table 3. Note
that since HR is obtained from BVP, we used only the raw BVP. Again, these were temporally
aligned with the visual features using time-stamps stored during the data recording.
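Several of the hand-crafted statistics listed in Table 3 (first/second differences, their absolute values, mean and SD over a window) can be sketched as follows (numpy; the toy signal and function name are hypothetical, not the authors' exact code):

```python
import numpy as np

def diff_features(window):
    """A few of the Table-3-style statistics for a 1-D physiological
    signal window (sketch)."""
    x = np.asarray(window, dtype=float)
    d1 = np.diff(x)                            # 1st difference
    d2 = np.diff(x, n=2)                       # 2nd difference
    xn = (x - x.mean()) / (x.std() + 1e-12)    # z-normalized window
    return {
        "mean": x.mean(),
        "sd": x.std(),
        "mean_1st_diff": d1.mean(),
        "mean_abs_1st_diff": np.abs(d1).mean(),
        "mean_2nd_diff": d2.mean(),
        "mean_abs_norm_1st_diff": np.abs(np.diff(xn)).mean(),
    }

feats = diff_features([1.0, 2.0, 4.0, 7.0, 11.0])
```

In the actual pipeline, such statistics are computed per sliding window (e.g., 4–30 s for the physiological signals in Table 3) and then time-aligned with the visual features.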
Table 3: The summary of the features used from different data modalities.
Feature ID | Modality          | Description
1-209      | FACE (OpenPose)   | Facial landmarks: 70x2 (x, y) and their confidence level (c)
210-215    | FACE (OpenFace)   | Head pose: 3D location and 3D rotation
216-223    | FACE (OpenFace)   | Eye gaze: 2x3 - 3D eye-gaze direction vector in world coordinates for left and right eye + 2D eye-gaze direction in radians in world coordinates for both eyes
224-240    | FACE (OpenFace)   | Binary detection of 18 AUs: AU01, AU02, AU04, AU05, AU06, AU07, AU09, AU10, AU12, AU14, AU15, AU17, AU20, AU23, AU25, AU26, AU28, AU45
241-258    | FACE (OpenFace)   | Intensity estimation (0-5) of 17 AUs: AU01, AU02, AU04, AU05, AU06, AU07, AU09, AU10, AU12, AU14, AU15, AU17, AU20, AU23, AU25, AU26, AU45
259-312    | BODY (OpenPose)   | Body pose: 18x3 - the pose keypoints containing the body part locations (x, y) and detection confidence (c)
313-327    | BODY (E4 ACC)     | Accelerometer data: 3D raw signal (x, y, z), z-normalized vector sqrt(x² + y² + z²) (mean and SD within 5 sec window), mean, SD, max, min, 1st diff, abs value of 1st diff, abs value of normalized 1st diff, 2nd diff, abs value of 2nd diff, abs value of normalized 2nd diff, mean amplitude deviation (10 sec window)
328-338    | PHYSIOLOGY (EDA)  | Raw EDA and its: z-normalized value (30 sec window), mean, SD, max, min, integration, slope, number of peaks, amplitude, number of zero-crossings
339-347    | PHYSIOLOGY (HR)   | Raw HR signal and its: z-normalized value (4 sec window), mean, 1st diff, abs value of 1st diff, abs value of the normalized 1st diff, 2nd diff, abs value of 2nd diff, abs value of normalized 2nd diff
348-357    | PHYSIOLOGY (T)    | Raw T signal and its: z-normalized value (4 sec window), mean, 1st diff, abs value of 1st diff, abs value of the normalized 1st diff, 2nd diff, abs value of 2nd diff, abs value of normalized 2nd diff
358-381    | AUDIO (openSmile) | LLDs: RMS energy, Spectral flux, Spectral entropy, Spectral variance, Spectral skewness, Spectral kurtosis, Spectral slope, Harmonicity, MFCC 0, MFCC 1–10, MFCC 11–12, MFCC 13–14, Log Mel frequency band 0–7, LSP frequency 0–7, F0 (ACF based), F0 (SHS based), F0 envelope, Probability of voicing, Jitter, Shimmer, Logarithmic HNR, Sum of auditory spectrum (loudness), ZCR, Logarithmic HNR
382-396    | CARS              | 15 ratings (0-4): relating to people, emotional response, imitation, body use, object use, adaptation to change, listening response, taste-smell-touch, visual response, fear or nervous, verbal communication, activity level, nonverbal communication, level and consistency of intellectual response, general impression
The multi-modal learning has been achieved by consolidating these features to act as predictors of target affective states and engagement in our personalized affect perception deep
networks. From the OpenPose output, we used the face and body features with the detection
confidence over each feature set (face & body) above 30%, which we found to be a good threshold by visually inspecting the detection results. The final feature set was formed as follows: (i) visual: we used the facial landmarks from OpenPose, enhanced with the head pose, eye gaze and AUs, as provided by OpenFace; (ii) body: we merged the OpenPose body-pose features and the E4 ACC features encoding the hand movements; (iii) audio: the original feature set was kept; and (iv) physiology: the features derived from the E4 sensor, without the ACC features. Table 3 summarizes these features.
We also included an auxiliary feature set provided by the expert knowledge. Namely, the
children’s behavioral severity at the time of the interaction (after the recordings) was scored
on the CARS (47) by the therapists (Table 2). The CARS form is typically completed in less
than 30 minutes, and it asks about 15 areas of behavior defined by a unique rating system (0-4)
developed to assist in identifying individuals with ASC. The rating values given for the 15 areas
are summed to produce a total score for each child. CARS covers the three key behavioral
dimensions pertinent to autism: social-emotional, cognitive, and sensory, and based on the total
scores, the children fall into one of the following categories: (i) no autism (score below 30),
(ii) mild-to-moderate autism (score: 30–36.5), and (iii) moderate-to-severe autism (37–60). We
used this 15-D feature set (the CARS scores for each of the 15 areas) as a unique descriptor for
each child - encoding the expert knowledge about the children’s behavioral traits.
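The CARS scoring described above maps directly to a small helper (the thresholds come from the text; the function itself is our illustrative sketch):

```python
def cars_category(area_ratings):
    """Sum the 15 CARS area ratings (0-4 each) and map the total to the
    severity categories described in the text."""
    assert len(area_ratings) == 15
    total = sum(area_ratings)
    if total < 30:
        return total, "no autism"
    if total <= 36.5:
        return total, "mild-to-moderate autism"
    return total, "moderate-to-severe autism"

total, category = cars_category([2.5] * 15)   # a uniform toy rating profile
```

Note that the 15 individual area ratings, not the total or the category, form the 15-D expert-knowledge descriptor used as input to the network.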
D Coding
The dataset was labeled by human experts in terms of the two most commonly used affective dimensions (valence and arousal), and engagement, all rated on a continuous scale in the range from
−1 to +1. Specifically, five expert therapists (two from C1 and three from C2) coded the videos
independently while watching the audio-visual recordings of target interactions. As a measure
of the coders’ agreement, we used the intra-class correlation (ICC) score, type (3,1) (53). This
score measures the proportion of variance attributable to the objects of measurement, compared to the overall variance of the coders. The ICC is commonly used in behavioral
sciences to assess the agreement of judges. Unlike the well-known Pearson correlation (PC),
ICC penalizes the scale differences and offset between the coders, which makes it a more robust measure of coders’ agreement. The codings were aligned using the standard alignment
techniques: we applied time-shifting of ±2 seconds to each coder, and selected the shift which
produced the highest average inter-coder agreement. The ground truth labels that we used to
evaluate the ML models were then obtained by averaging the codings of 3/5 coders, who had the
highest agreement (based on the pair-wise ICC scores). We empirically found that, in this way, outlying codings can be significantly reduced. The obtained coding ("the gold standard") was
then used as the ground truth for training ML models for estimation of valence, arousal, and engagement levels during the child-robot interactions. Finally, note that in our previous work (43),
we used discrete annotations for the three target dimensions. Since these were coded per manually selected engagement episodes, for this work we re-annotated the data to obtain more fine-grained (i.e., continuous) estimates of the affect and engagement from the full dataset. The
description of the exemplary behavioral cues used during the coding process is given in Table 4.
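The ICC(3,1) agreement measure used above can be sketched from the standard two-way mean squares (a textbook formulation, not the authors' code; the toy ratings are hypothetical):

```python
import numpy as np

def icc_3_1(ratings):
    """ICC(3,1) from an (n_targets, k_coders) matrix of ratings."""
    Y = np.asarray(ratings, dtype=float)
    n, k = Y.shape
    mean_t = Y.mean(axis=1)                 # per-target (row) means
    mean_c = Y.mean(axis=0)                 # per-coder (column) means
    grand = Y.mean()
    bms = k * ((mean_t - grand) ** 2).sum() / (n - 1)   # between-targets MS
    resid = Y - mean_t[:, None] - mean_c[None, :] + grand
    ems = (resid ** 2).sum() / ((n - 1) * (k - 1))      # residual MS
    return float((bms - ems) / (bms + (k - 1) * ems))

# two coders rating four segments with small disagreements
icc = icc_3_1([[0.1, 0.12], [0.5, 0.48], [0.9, 0.95], [0.3, 0.33]])
```

The same function serves both for selecting the most-agreeing coders and for reporting model performance against the gold-standard labels.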
Table 4: The description of the behaviors and corresponding cues given to the coders as reference
points when coding target affective states and engagement levels. All three dimensions are coded on a
continuous scale based on the perceived intensity of the target dimensions.
Level         | Dimension  | Description
negative (-1) | Valence    | The child shows clear signs of experiencing unpleasant feelings (being unhappy, angry, visibly upset, showing dissatisfaction, frightened), dissatisfaction and disappointment (e.g., when NAO showed an expression that the child did not anticipate)
neutral (0)   | Valence    | The child seems alert and/or attentive, with no obvious signs of any emotion, pleasure or displeasure
positive (+1) | Valence    | The child shows signs of intense happiness (e.g., clapping hands), joy (in most cases followed with episodes of laughter), and delight (e.g., when NAO performed)
negative (-1) | Arousal    | The child seems very bored or uninterested (e.g., looking away, not showing interest in the interaction, sleepy, passively observing)
neutral (0)   | Arousal    | The child shows no obvious signs of a physical activity (face, head, hand, and/or bodily movements); seems calm, thinking, airdrawn
positive (+1) | Arousal    | The child performs an intense and/or sudden physical activity followed by constant (sometimes rapid) movements like hand clapping, touching face/head/knees, actively playing with the robot, wiggling in the chair (C2) or on the floor (C1), jumping, walking around the room, being highly excited, shouting
negative (-1) | Engagement | The child is not responding to the therapist and/or NAO's prompts at all and after the prompts, or walks away from NAO, looking for other objects in the room, ignoring the robot and the therapist
neutral (0)   | Engagement | The child seems indifferent to the interaction, looking somewhere else, not paying the full attention to the interaction; the therapist repeats the question and/or attempts the task a few times, until the child complies with the instructions
positive (+1) | Engagement | The child is paying full attention to the interaction, immediately responds to the questions of the therapist, reacting to NAO spontaneously, executing the tasks; the child seems very interested, with minimal or no incentive from the therapist to participate in the interaction
References
1. T. Kanda, H. Ishiguro, Human-robot interaction in social robotics (CRC Press, 2017).
2. T. Fong, I. Nourbakhsh, K. Dautenhahn, A survey of socially interactive robots, Robotics
and autonomous systems 42, 143–166 (2003).
3. E. S.-W. Kim, Robots for social skills therapy in autism: Evidence and designs toward
clinical utility, Ph.D. thesis, Yale University (2013).
4. G.-Z. Yang, et al., Medical robotics—regulatory, ethical, and legal considerations for increasing levels of autonomy (2017).
5. L. D. Riek, Healthcare robotics, Communications of the ACM 60, 68–78 (November 2017).
6. C. M. Bishop, Pattern recognition and machine learning (Springer, 2006).
7. Y. LeCun, Y. Bengio, G. Hinton, Deep learning, Nature 521, 436–444 (May 2015).
8. M. J. Matarić, Socially assistive robotics: Human augmentation versus automation, Science
Robotics 2 (March 2017).
9. A. Tapus, M. Mataric, B. Scassellati, Socially assistive robotics [Grand Challenges of
Robotics], IEEE Robotics & Automation Magazine 14, 35–42 (2007).
10. A. Peca, Robot enhanced therapy for children with autism disorders: Measuring ethical
acceptability, IEEE Technology and Society Magazine 35, 54–66 (June 2016).
11. P. G. Esteban, et al., How to build a supervised autonomous system for robot-enhanced therapy for children with autism spectrum disorder, Paladyn, Journal of Behavioral Robotics
8, 18–38 (April 2017).
12. D. Freeman, et al., Virtual reality in the assessment, understanding, and treatment of mental
health disorders, Psychol. Med. pp. 1–8 (2017).
13. A. P. Association, Diagnostic and statistical manual of mental disorders (DSM-5®) (American Psychiatric Pub, 2013).
14. D. L. Christensen, et al., Prevalence and characteristics of autism spectrum disorder among
4-year-old children in the autism and developmental disabilities monitoring network, Journal of Developmental & Behavioral Pediatrics 37, 1–8 (January 2016).
15. D. Feil-Seifer, M. Mataric, Robot-assisted therapy for children with autism spectrum disorders, Proc. of the 7th International Conference on Interaction Design and Children (2008),
pp. 49–52.
16. W. A. Bainbridge, J. W. Hart, E. S. Kim, B. Scassellati, The benefits of interactions with
physically present robots over video-displayed agents, Int. J. Soc. Robot. 3, 41–52 (January
2011).
17. M. Helt, et al., Can children with autism recover? if so, how?, Neuropsychology review 18,
339–366 (2008).
18. C. M. Corsello, Early intervention in autism, Infants & Young Children 18, 74–85 (April
2005).
19. S. Baron-Cohen, A. M. Leslie, U. Frith, Does the autistic child have a "theory of mind"?,
Cognition 21, 37–46 (October 1985).
20. S. Harker, Applied behavior analysis (ABA), Encyclopedia of Child Behavior and Development pp. 135–138 (2011).
21. R. L. Koegel, L. Kern Koegel, Pivotal Response Treatments for Autism: Communication,
Social, and Academic Development. (2006).
22. J. J. Diehl, L. M. Schmitt, M. Villano, C. R. Crowell, The clinical use of robots for individuals with Autism Spectrum Disorders: A critical review, Research in Autism Spectrum
Disorders 6, 249–262 (2012).
23. B. Scassellati, H. Admoni, M. Matarić, Robots for use in autism research, Annual review
of biomedical engineering 14, 275–294 (2012).
24. K. Dautenhahn, I. Werry, Towards interactive robots in autism therapy: Background, motivation and challenges, Pragmatics & Cognition 12, 1–35 (2004).
25. P. Liu, D. F. Glas, T. Kanda, H. Ishiguro, Data-driven hri: Learning social behaviors by
example from human–human interaction, IEEE Transactions on Robotics 32, 988–1008
(2016).
26. E. S. Kim, R. Paul, F. Shic, B. Scassellati, Bridging the research gap: Making hri useful to
individuals with autism, Journal of Human-Robot Interaction 1 (2012).
27. B. M. Scassellati, Foundations for a theory of mind for a humanoid robot, Ph.D. thesis,
Massachusetts Institute of Technology (2001).
28. K. Dautenhahn, I. Werry, Towards interactive robots in autism therapy: Background, motivation and challenges, Pragmatics & Cognition 12, 1–35 (2004).
29. C. L. Breazeal, Designing sociable robots (MIT press, 2004).
30. P. Pennisi, et al., Autism and social robotics: A systematic review, Autism Research 9,
165–183 (February 2016).
31. M. A. Goodrich, A. C. Schultz, Human-robot interaction: a survey, Foundations and trends
in human-computer interaction 1, 203–275 (February 2007).
32. S. M. Anzalone, S. Boucenna, S. Ivaldi, M. Chetouani, Evaluating the engagement with
social robots, International Journal of Social Robotics 7, 465–478 (August 2015).
33. M. B. Colton, et al., Toward therapist-in-the-loop assistive robotics for children with autism
and specific language impairment, Autism 24, 25 (2009).
34. J. Hernandez, I. Riobo, A. Rozga, G. D. Abowd, R. W. Picard, Using electrodermal activity
to recognize ease of engagement in children during social interactions, Proceedings of the
ACM International Joint Conference on Pervasive and Ubiquitous Computing (2014), pp.
307–317.
35. M. E. Hoque, Analysis of speech properties of neurotypicals and individuals diagnosed with
autism and down, Proceedings of the 10th International ACM SIGACCESS Conference on
Computers and Accessibility (2008), pp. 311–312.
36. A. Baird, et al., Automatic classification of autistic child vocalisations: A novel database
and results, Interspeech pp. 849–853 (2017).
37. T. Belpaeme, et al., Multimodal child-robot interaction: Building social bonds, Journal of
Human-Robot Interaction 1, 33–53 (December 2012).
38. Z. Zheng, et al., Robot-mediated imitation skill training for children with autism, IEEE
Transactions on Neural Systems and Rehabilitation Engineering 24, 682–691 (June 2016).
39. J. Sanghvi, et al., Automatic analysis of affective postures and body motion to detect
engagement with a game companion, The 6th ACM/IEEE International Conference on
Human-Robot Interaction (HRI) (2011), pp. 305–311.
42
40. J. C. Kim, P. Azzi, M. Jeon, A. M. Howard, C. H. Park, Audio-based emotion estimation
for interactive robotic therapy for children with autism spectrum disorder, The 14th International Conference on Ubiquitous Robots and Ambient Intelligence (URAI) (2017), pp.
39–44.
41. S. S. Rajagopalan, O. R. Murthy, R. Goecke, A. Rozga, Play with me – measuring a child’s
engagement in a social interaction, The 11th IEEE International Conference and Workshops
on Automatic Face and Gesture Recognition (2015), vol. 1, pp. 1–8.
42. M. I. Jordan, T. M. Mitchell, Machine learning: Trends, perspectives, and prospects, Science 349, 255–260 (Jul 2015).
43. O. Rudovic, J. Lee, L. Mascarell-Maricic, B. W. Schuller, R. W. Picard, Measuring engagement in robot-assisted autism therapy: a cross-cultural study, Frontiers in Robotics and AI
4, 36 (2017).
44. J. Ngiam, et al., Multimodal deep learning, Proceedings of the 28th International Conference on Machine Learning (ICML) (2011), pp. 689–696.
45. N. Jaques, S. Taylor, A. Sano, R. Picard, Multimodal autoencoder: A deep learning approach to filling in missing sensor data and enabling better mood prediction, International
Conference on Affective Computing and Intelligent Interaction (ACII), 2017 (2015).
46. Y. Bengio, L. Yao, G. Alain, P. Vincent, Generalized denoising auto-encoders as generative
models, Advances in Neural Information Processing Systems (2013), pp. 899–907.
47. E. Schopler, M. E. Van Bourgondien, G. J. Wellman, S. R. Love, The childhood autism
rating scale, (CARS2) (WPS Los Angeles, 2010).
48. Y. Zhang, Q. Yang, An overview of multi-task learning, National Science Review (2017).
43
49. N. Jaques, O. Rudovic, S. Taylor, A. Sano, R. Picard, Predicting tomorrow’s mood, health,
and stress level using personalized multitask learning and domain adaptation, IJCAI 2017
Workshop on Artificial Intelligence in Affective Computing (2017), pp. 17–33.
50. T. Baltrušaitis, P. Robinson, L.-P. Morency, Openface: an open source facial behavior analysis toolkit, IEEE Winter Conference on Applications of Computer Vision (2016), pp. 1–10.
51. Z. Cao, T. Simon, S.-E. Wei, Y. Sheikh, Realtime multi-person 2d pose estimation using
part affinity fields, IEEE Conference on Computer Vision and Pattern Recognition (2017).
52. F. Eyben, F. Weninger, F. Gross, B. Schuller, Recent developments in opensmile, the munich open-source multimedia feature extractor, Proceedings of the 21st ACM International
Conference on Multimedia (2013), pp. 835–838.
53. P. E. Shrout, J. L. Fleiss, Intraclass correlations: uses in assessing rater reliability, Psychol.
Bull. 86, 420 (March 1979).
54. H. Larochelle, Y. Bengio, J. Louradour, P. Lamblin, Exploring strategies for training deep
neural networks, J. Mach. Learn. Res. 10, 1–40 (January 2009).
55. L. v. d. Maaten, G. Hinton, Visualizing data using t-sne, J. Mach. Learn. Res. 9, 2579–2605
(November 2008).
56. A. Shrikumar, P. Greenside, A. Shcherbina, A. Kundaje, Not just a black box: Learning
important features through propagating activation differences, International Conference on
Computer Vision and Pattern Recognition (2016).
57. J. H. Friedman, Greedy function approximation: a gradient boosting machine, Ann. Stat.
pp. 1189–1232 (October 2001).
44
58. V. Podgorelec, P. Kokol, B. Stiglic, I. Rozman, Decision trees: an overview and their use in
medicine, Journal of medical systems 26, 445–463 (2002).
59. R. Picard, M. Goodwin, Developing innovative technology for future personalized autism
research and treatment, Autism Advocate 50, 32–39 (2008).
60. B. W. Schuller, Intelligent audio analysis (Springer, 2013).
61. E. Brynjolfsson, T. Mitchell, What can machine learning do? workforce implications, Science 358, 1530–1534 (2017).
62. M. R. Herbert, Treatment-guided research, Autism Advocate 50, 8–16 (2008).
63. A. Kendall, Y. Gal, R. Cipolla, Multi-task learning using uncertainty to weigh losses for
scene geometry and semantics, International Conference on Computer Vision and Pattern
Recognition (2017).
64. R. Salakhutdinov, J. B. Tenenbaum, A. Torralba, Learning with hierarchical-deep models,
IEEE transactions on pattern analysis and machine intelligence 35, 1958–1971 (August
2013).
65. W. Wang, S. J. Pan, D. Dahlmeier, X. Xiao, Recursive neural conditional random fields for
aspect-based sentiment analysis, Computation and Language (2016).
66. A. Mollahosseini, B. Hasani, M. H. Mahoor, Affectnet: A database for facial expression,
valence, and arousal computing in the wild, IEEE Transactions on Affective Computing PP,
1-1 (2017).
67. A. Krizhevsky, I. Sutskever, G. E. Hinton, Imagenet classification with deep convolutional
neural networks, Advances in neural information processing systems (2012), pp. 1097–
1105.
45
68. T. Chen, I. Goodfellow, J. Shlens, Net2net: Accelerating learning via knowledge transfer,
International Conference on Learning Representations (ICLR) (2016).
69. C.-Y. Lee, S. Xie, P. Gallagher, Z. Zhang, Z. Tu, Deeply-supervised nets, Artificial Intelligence and Statistics (2015), pp. 562–570.
70. S. Ruder, An overview of multi-task learning in deep neural networks, arXiv preprint
arXiv:1706.05098 (2017).
71. S. A. Taylor, N. Jaques, E. Nosakhare, A. Sano, R. Picard, Personalized multitask learning for predicting tomorrow’s mood, stress, and health, IEEE Transactions on Affective
Computing PP, 1-1 (2017).
72. R. El Kaliouby, R. Picard, S. Baron-Cohen, Affective computing and autism, Annals of the
New York Academy of Sciences 1093, 228–248 (December 2006).
73. M. Schmitt, E. Marchi, F. Ringeval, B. Schuller, Towards cross-lingual automatic diagnosis
of autism spectrum condition in children’s voices, Proceedings of the 12th Symposium on
Speech Communication (2016), pp. 1–5.
74. N. Shazeer, et al., Outrageously large neural networks: The sparsely-gated mixture-ofexperts layer, International Conference on Learning Representations (ICLR) (2017).
75. Y. Lu, et al., Fully-adaptive feature sharing in multi-task networks with applications in person attribute classification, The IEEE Conference on Computer Vision and Pattern Recognition (CVPR) (2017).
76. R. J. Williams, D. Zipser, A learning algorithm for continually running fully recurrent
neural networks, Neural computation 1, 270–280 (1989).
46
77. B. Settles, Active learning, Synthesis Lectures on Artificial Intelligence and Machine
Learning 6, 1–114 (2012).
78. V. Mnih, et al., Human-level control through deep reinforcement learning, Nature 518,
529–533 (2015).
79. S. Chen, Y. Li, N. M. Kwok, Active vision in robotic systems: A survey of recent developments, The International Journal of Robotics Research 30, 1343–1377 (2011).
80. H. I. Christensen, et al., Next generation robotics, A Computing Community Consortium
(CCC) (2016).
81. Y. Bengio, Learning deep architectures for ai, Foundations and trends in Machine Learning
2, 1–127 (November 2009).
82. Y. Bengio, A. Courville, P. Vincent, Representation learning: A review and new perspectives, IEEE Trans. Pattern Anal. Mach. Intell. 35, 1798–1828 (March 2013).
83. G. E. Hinton, R. R. Salakhutdinov, Reducing the dimensionality of data with neural networks, Science 313, 504–507 (July 2006).
84. Y. Bengio, P. Lamblin, D. Popovici, H. Larochelle, Greedy layer-wise training of deep
networks, Advances in neural information processing systems (2007), pp. 153–160.
85. F. Chollet, et al., Keras (2015).
86. M. Abadi, et al., Tensorflow: A system for large-scale machine learning, Proceedings of
the 12th USENIX Conference on Operating Systems Design and Implementation (USENIX
Association, 2016), pp. 265–283.
47
87. F. Pedregosa, et al., Scikit-learn: Machine learning in Python, Journal of Machine Learning
Research 12, 2825–2830 (2011).
88. J. A. Hadwin, P. Howlin, S. Baron-Cohen, Teaching Children with Autism to Mind-Read:
Workbook (John Wiley & Sons, 2015).
89. T. Baltrušaitis, P. Robinson, L.-P. Morency, 3d constrained local model for rigid and nonrigid facial tracking, IEEE Conference on Computer Vision and Pattern Recognition (2012),
pp. 2610–2617.
90. E. Paul, Facial Expressions (John Wiley & Sons, Ltd, 2005).
91. J. F. Cohn, F. De la Torre, Automated face analysis for affective computing, The Oxford
handbook of affective computing (2015).
92. K. Simonyan, A. Zisserman, Very deep convolutional networks for large-scale image recognition, International Conference on Computer Vision and Pattern Recognition (2014).
93. Empatica e4: https://www.empatica.com/en-eu/research/e4/ (2015).
94. J. Hernandez, D. J. McDuff, R. W. Picard, Bioinsights: extracting personal data from “still”
wearable motion sensors, The 12th IEEE International Conference on Wearable and Implantable Body Sensor Networks (2015), pp. 1–6.
48
| 2 |
Learning-aided Stochastic Network Optimization
with Imperfect State Prediction
arXiv:1705.05058v1 [math.OC] 15 May 2017
Longbo Huang∗ , Minghua Chen+ , Yunxin Liu†
∗[email protected], IIIS@Tsinghua University
[email protected], IE@CUHK
†[email protected], Microsoft Research Asia
Abstract—We investigate the problem of stochastic network optimization in the presence of imperfect state prediction and non-stationarity. Based on a novel distribution-accuracy curve prediction model, we develop the predictive learning-aided control (PLC) algorithm, which jointly utilizes historic and predicted network state information for decision making. PLC is an online algorithm that requires zero a priori system statistical information, and consists of three key components, namely sequential distribution estimation and change detection, dual learning, and online queue-based control.
Specifically, we show that PLC simultaneously achieves good long-term performance, short-term queue size reduction, accurate change detection, and fast algorithm convergence. In particular, for stationary networks, PLC achieves a near-optimal [O(ε), O(log²(1/ε))] utility-delay tradeoff. For non-stationary networks, PLC obtains an [O(ε), O(log²(1/ε) + min(ε^{c/2−1}, e_w/ε))] utility-backlog tradeoff for distributions that last Θ(max(ε^{−c}, e_w^{−2})/ε^{1+a}) time, where e_w is the prediction accuracy and a = Θ(1) > 0 is a constant (the Backpressure algorithm [1] requires an O(ε^{−2}) length for the same utility performance with a larger backlog). Moreover, PLC detects distribution change O(w) slots faster with high probability (w is the prediction size) and achieves an O(min(ε^{−1+c/2}, e_w/ε) + log²(1/ε)) convergence time, which is faster than Backpressure and other algorithms. Our results demonstrate that state prediction (even imperfect) can help (i) achieve faster detection and convergence, and (ii) obtain better utility-delay tradeoffs. They also quantify the benefits of prediction in four important performance metrics, i.e., utility (efficiency), delay (quality-of-service), detection (robustness), and convergence (adaptability), and provide new insight for joint prediction, learning and optimization in stochastic networks.
I. INTRODUCTION
Enabled by recent developments in sensing, monitoring,
and machine learning methods, utilizing prediction for performance improvement in networked systems has received a
growing attention in both industry and research. For instance,
recent research works [2], [3], and [4] investigate the benefits
of utilizing prediction in energy saving, job migration in cloud
computing, and video streaming in cellular networks. On the
industry side, various companies have implemented different
ways to take advantage of prediction, e.g., Amazon utilizes
prediction for better package delivery [5] and Facebook enables prefetching for faster webpage loading [6]. However, despite the continuing success in these attempts, most existing results in network control and analysis do not investigate the impact of prediction. Therefore, we still lack a thorough theoretical understanding about the value-of-prediction in stochastic network control. Fundamental questions regarding how prediction should be integrated in network algorithms, the ultimate prediction gains, and how prediction error impacts performance, remain largely unanswered.

(This paper will be presented in part at the 18th ACM International Symposium on Mobile Ad Hoc Networking and Computing (MobiHoc), India, July 2017.)
To contribute to developing a theoretical foundation for
utilizing prediction in networks, in this paper, we consider a
general constrained stochastic network optimization formulation, and aim to rigorously quantify the benefits of system state
prediction and the impact of prediction error. Specifically, we
are given a discrete-time stochastic network with a dynamic
state that evolves according to some potentially non-stationary
probability law. Under each system state, a control action is
chosen and implemented. The action generates traffic into network queues but also serves workload from them. The action
also results in a system utility (cost) due to service completion
(resource expenditure). The traffic, service, and cost are jointly
determined by the action and the system state. The objective
is to maximize the expected utility (or equivalently, minimize
the cost) subject to traffic/service constraints, given imperfect
system state prediction information.
This is a general framework that models various practical
scenarios, for instance, mobile networks, computer networks,
supply chains, and smart grids. However, understanding the
impact of prediction in this framework is challenging. First,
statistical information of network dynamics is often unknown
a priori. Hence, in order to achieve good performance, algorithms must be able to quickly learn certain sufficient
statistics of the dynamics, and make efficient use of prediction
while carefully handling prediction error. Second, system
states appear randomly in every time slot. Thus, algorithms
must perform well under such incremental realizations of
the randomness. Third, quantifying system service quality
often involves handling queueing in the system. As a result,
explicit connections between control actions and queues must
be established.
There has been a recent effort in developing algorithms
that can achieve good utility and delay performance for this
general problem without prediction in various settings, for
instance, wireless networks, [7], [8], [9], [10], processing
networks, [11], [12], cognitive radio, [13], and the smart grid,
[14], [15]. However, existing results mostly focus on networks
with stationary distributions. They either assume full system
statistical information beforehand, or rely on stochastic ap-
proximation techniques to avoid the need of such information.
Works [16] and [17] propose schemes to incorporate historic
system information into control, but they do not consider
prediction. Recent results in [18], [19], [20], [21] and [22]
consider problems with traffic demand prediction, and [23]
jointly considers demand and channel prediction. However,
they focus either on M/M/1-type models, or do not consider
queueing, or do not consider the impact of prediction error. In
a different line of work, [24], [25], [26] and [27] investigate
the benefit of prediction from the online algorithm design
perspective. Although the results provide novel understanding
about the effect of prediction, they do not apply to the
general constrained network optimization problems, where
action outcomes are general functions of time-varying network
states, queues evolve in a controlled manner, i.e., arrival and
departure rates depend on the control policy, and prediction
can contain error.
In this paper, we develop a novel control algorithm for
the general framework called predictive learning-aided control
(PLC). PLC is an online algorithm that consists of three
components, sequential distribution estimation and change
detection, dual learning, and online control (see Fig. 1).
Fig. 1. The PLC algorithm contains (i) a distribution estimator that utilizes
both historic and predicted information to simultaneously form a distribution
estimate and detect distribution change, (ii) a learning component that computes an empirical Lagrange multiplier based on the distribution estimate,
and (iii) a queue-based controller whose decision-making information is
augmented by the multiplier.
The distribution estimator conducts sequential statistical
comparisons based on prediction and historic network state
records. Doing so efficiently detects changes of the underlying
probability distribution and guides us in selecting the right
state samples to form distribution estimates. The estimated
distribution is then fed into the dual learning component to
compute an empirical multiplier of an underlying optimization
formulation. This multiplier is further incorporated into the
Backpressure (BP) network controller [1] to perform real-time network operation. Compared to the commonly adopted
receding-horizon-control approach (RHC), e.g., [28], PLC provides another way to utilize future state information, which
focuses on using the predicted distribution for guiding action
selection in the present slot and can be viewed as performing
steady-state control under the predicted future distribution.
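The three components above can be pictured as one control loop per time slot. The skeleton below is purely our own paraphrase of the described structure, not the authors' algorithm; all names and the toy entrywise-averaging estimator are placeholders:

```python
# Our own schematic of one PLC slot (estimate -> learn -> control);
# placeholder logic only, not the authors' implementation.
def estimate(history, prediction):
    """(i) Blend historic samples and predicted distributions into one
    estimate pi_hat (toy: entrywise average of all distributions)."""
    samples = history + prediction
    return [sum(col) / len(samples) for col in zip(*samples)]

def plc_step(history, prediction, q, learn, act):
    pi_hat = estimate(history, prediction)   # (i) distribution estimate
    gamma = learn(pi_hat)                    # (ii) dual learning: empirical multiplier
    return act(q, gamma)                     # (iii) queue-based control augmented by gamma
```

Here `learn` would compute an empirical Lagrange multiplier from the estimated distribution, and `act` would run a BP-style queue-based controller augmented by that multiplier.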
We summarize our main contributions as follows.
i. We propose a general state prediction model featured with
a distribution-accuracy curve. Our model captures key factors
of several existing prediction models, including window-based
[22], distribution-based [29], and filter-based [26] models.
ii. We propose a general constrained network control algorithm called predictive learning-aided control (PLC), which is an online algorithm that requires zero a priori system statistical information. PLC jointly performs sequential distribution estimation and change detection, dual learning, and queue-based online control.
iii. We show that for stationary networks, PLC achieves an [O(ε), O(log²(1/ε))] utility-delay tradeoff. For non-stationary networks, PLC obtains an [O(ε), O(log²(1/ε) + min(ε^{c/2−1}, e_w/ε))] utility-backlog tradeoff for distributions that last Θ(max(ε^{−c}, e_w^{−2})/ε^{1+a}) time, where e_w is the prediction accuracy, c ∈ (0, 1) and a > 0 is a Θ(1) constant (the Backpressure algorithm [1] requires an O(ε^{−2}) length for the same utility performance with a larger backlog).¹
iv. We show that for both stationary and non-stationary system dynamics, PLC detects distribution change O(w) slots (w is prediction window size) faster with high probability and achieves a fast O(min(ε^{−1+c/2}, e_w/ε) + log²(1/ε)) convergence time, which is faster than the O(ε^{−1+c/2} + ε^{−c}) time of the OLAC scheme [16], and the O(1/ε) time of Backpressure.
v. Our results show that state prediction (even imperfect) can help performance in two ways: (a) achieve faster detection, i.e., detect change w slots faster, and (b) obtain a better utility-delay tradeoff, i.e., reduce delay to O(e_w/ε + log²(1/ε)) for the same utility. They rigorously quantify the benefits of prediction in four important performance metrics, i.e., utility (efficiency), delay (quality-of-service), detection (robustness), and convergence (adaptability).
The rest of the paper is organized as follows. In Section II,
we discuss a few motivating examples in different application
scenarios. We set up the notations in Section III, and present
the problem formulation in Section IV. Background information is provided in Section V. Then, we present PLC in Section
VI, and prove its performance in Section VII. Simulation
results are presented in Section VIII, followed by conclusions
in Section IX. To facilitate reading, all the proofs are placed
in the appendices.
II. MOTIVATING EXAMPLES
In this section, we present a few interesting practical scenarios that fall into our general framework.
Matching in sharing platforms: Consider a Uber-like company that provides ride service to customers. At every time,
customer requests enter the system and available cars join
to provide service. Depending on the environment condition
(state), e.g., traffic condition or customer status, matching
customers to drivers can result in different user satisfaction,
and affect the revenue of the company (utility). The company
gets access to future customer demand and car availability, and
system condition information (prediction), e.g., through reservation or machine learning tools. The objective is to optimally
match customers to cars so that the utility is maximized, e.g.,
[30] and [31].
Energy optimization in mobile networks: Consider a base-station (BS) sending traffic to a set of mobile users. The channel conditions (state) between users and the BS are time-varying. Thus, the BS needs different amounts of power for packet transmission (cost) at different times. Due to higher layer application requirements, the BS is required to deliver packets to users at pre-specified rates. On the other hand, the BS can predict future user locations in some short period of time, from which it can estimate future channel conditions (prediction). The objective of the BS is to jointly optimize power allocation and scheduling among users, so as to minimize energy consumption, while meeting the rate requirements, e.g., [8], [13]. Other factors such as energy harvesting, e.g., [32], can also be incorporated in the formulation.

¹Note that when there is no prediction, i.e., w = 0 and e_w = ∞, we recover previous results of OLAC [16].
Resource allocation in cloud computing: Consider an
operator, e.g., a dispatcher, assigning computing jobs to servers
for processing. The job arrival process is time-varying (state),
and available processing capacities at servers are also dynamic
(state), e.g., due to background processing. Completing users’
job requests brings the operator reward (utility). The operator
may also have information regarding future job arrivals and
service capacities (prediction). The goal is to allocate resources
and balance the loads properly, so as to maximize system
utility. This example can be extended to capture other factors
such as rate scaling [33] and data locality constraints [34].
In these examples and related works, not only can the state
statistics be potentially non-stationary, but the system often
gets access to certain (possibly imperfect) future state information through various prediction techniques. These features
make the problems different from existing settings considered,
e.g., [8] and [15], and require different approaches for both
algorithm design and analysis.
III. NOTATIONS

R^n denotes the n-dimensional Euclidean space. R^n_+ (R^n_−) denotes the non-negative (non-positive) orthant. Bold symbols x = (x_1, ..., x_n) denote vectors in R^n. w.p.1 denotes "with probability 1." ‖·‖ denotes the Euclidean norm. For a sequence {y(t)}_{t=0}^{∞}, ȳ = lim_{t→∞} (1/t) Σ_{τ=0}^{t−1} E[y(τ)] denotes its average (when it exists). x ⪰ y means x_j ≥ y_j for all j. For distributions π^1 and π^2, ‖π^1 − π^2‖_tv = Σ_i |π^1_i − π^2_i| denotes the total variation distance.
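As a quick illustration of the total-variation convention used above (a toy sketch with our own function name; note that the definition here carries no 1/2 factor):

```python
def tv_distance(p, q):
    """||p - q||_tv = sum_i |p_i - q_i|, the convention defined above."""
    return sum(abs(a - b) for a, b in zip(p, q))

# Two distributions over M = 3 states that differ by 0.1 mass on two states.
d = tv_distance([0.5, 0.3, 0.2], [0.4, 0.4, 0.2])  # ≈ 0.2
```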
IV. SYSTEM MODEL

Consider a controller that operates a network with the goal of minimizing the time average cost, subject to the queue stability constraint. The network is assumed to operate in slotted time, i.e., t ∈ {0, 1, 2, ...}, and there are r ≥ 1 queues in the network.

A. Network state

In every slot t, we use S(t) to denote the current network state, which indicates the current network parameters, such as a vector of conditions for each network link, or a collection of other relevant information about the current network channels and arrivals. S(t) is independently distributed across time, and each realization is drawn from a state space of M distinct states denoted as S = {s_1, s_2, ..., s_M}.² We denote π_i(t) = Pr{S(t) = s_i} the probability of being in state s_i at time t and denote π(t) = (π_1(t), ..., π_M(t)) the state distribution at time t. The network controller can observe S(t) at the beginning of every slot t, but the π_i(t) probabilities are unknown. To simplify notations, we divide time into intervals that have the same distributions and denote {t_k, k = 0, 1, ...} the starting point of the k-th interval I_k, i.e., π(t) = π_k for all t ∈ I_k ≜ {t_k, ..., t_{k+1} − 1}. The length of I_k is denoted by d_k ≜ t_{k+1} − t_k.

²The independence assumption is made to facilitate presentation and understanding. The results in this paper can likely be generalized to systems where S(t) evolves according to general time-inhomogeneous Markovian dynamics.

B. State prediction

At every time slot, the operator gets access to a prediction module, e.g., a machine learning algorithm, which provides prediction of future network states. Different from recent works, e.g., [25], [26] and [35], which assume prediction models on individual states, we assume that the prediction module outputs a sequence of predicted distributions W_w(t) ≜ {π̂(t), π̂(t + 1), ..., π̂(t + w)}, where w + 1 is the prediction window size. Moreover, the prediction quality is characterized by a distribution-accuracy curve {e(0), ..., e(w)} as follows. For every 0 ≤ k ≤ w, π̂(t + k) satisfies:

‖π̂(t + k) − π(t + k)‖_tv ≤ e(k), ∀ k.   (1)

That is, the predicted distribution at time k has a total-variation error bounded by some e(k) ≥ 0.³ Note that e(k) = 0 for all 0 ≤ k ≤ w corresponds to a perfect predictor, in that it predicts the exact distribution in every slot. We assume the curve {e(0), ..., e(w)} is known to the operator and denote e_w ≜ (1/(w+1)) Σ_{k=0}^{w} e(k) the average prediction error.

Our prediction model (1) is general and captures key characteristics of several existing prediction models. For instance, it captures the exact demand statistics prediction model in [29], where the future demand distribution is known (e(k) = 0 for all 0 ≤ k ≤ w). It can also capture the window-based predictor model, e.g., [22], if each π̂(t + k) corresponds to the indicator value for the true state. Moreover, our model captures the error-convolution prediction model proposed in [35], [25] and [26], which captures features of the Wiener filter and Kalman filter. Specifically, under the convolution model, the predicted state Ŝ(t + k) at time t satisfies:⁴

‖Ŝ(t + k) − S(t + k)‖ = Σ_{s=t+1}^{t+k} g(t + k − s) a(s),   (2)

where g(s) is the impulse function that captures how error propagates over time in prediction, and a(s) is assumed to be a zero-mean i.i.d. random variable [25]. Thus, we can compute the corresponding e(k) once g(s) and a(s) are given.

³It makes sense to assume a deterministic upper bound of the difference here because we are dealing with distributions.
⁴In [25] and [26], the state space is a metric space.

C. The cost, traffic, and service

At each time t, after observing S(t) = s_i, the controller chooses an action x(t) from a set X_i, i.e., x(t) = x_i for some x_i ∈ X_i. The set X_i is called the feasible action set for network state s_i and is assumed to be time-invariant and compact for all s_i ∈ S. The cost, traffic, and service generated by the chosen action x(t) = x_i are as follows:
(a) The chosen action has an associated cost given by the cost function f(t) = f(s_i, x_i) : X_i ↦ R_+ (or X_i ↦ R_− in reward maximization problems).⁵
(b) The amount of traffic generated by the action to queue j is determined by the traffic function A_j(t) = A_j(s_i, x_i) : X_i ↦ R_+, in units of packets.
(c) The amount of service allocated to queue j is given by the rate function μ_j(t) = μ_j(s_i, x_i) : X_i ↦ R_+, in units of packets.
Here Aj (t) can include both exogenous arrivals from outside
the network to queue j, and endogenous arrivals from other
queues, i.e., transmitted packets from other queues to queue j.
We assume the functions −f (si , ·), µj (si , ·) and Aj (si , ·) are
time-invariant, their magnitudes are uniformly upper bounded
by some constant δmax ∈ (0, ∞) for all si , j, and they are
known to the operator. Note that this formulation is general
and models many network problems, e.g., [8], [15], and [36].
D. Problem formulation

Let q(t) = (q_1(t), ..., q_r(t))^T ∈ R^r_+, t = 0, 1, 2, ..., be the queue backlog vector process of the network, in units of packets. We assume the following queueing dynamics:

q_j(t + 1) = max[q_j(t) − μ_j(t) + A_j(t), 0], ∀ j,   (3)

and q(0) = 0. By using (3), we assume that when a queue does not have enough packets to send, null packets are transmitted, so that the number of packets entering q_j(t) is equal to A_j(t).
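Dynamics (3) amount to a one-line update per queue; a toy sketch (variable names are ours):

```python
def queue_update(q, mu, A):
    """One slot of dynamics (3): q_j(t+1) = max(q_j(t) - mu_j(t) + A_j(t), 0)."""
    return [max(qj - mj + aj, 0.0) for qj, mj, aj in zip(q, mu, A)]

q = [0.0, 0.0]                                    # q(0) = 0
q = queue_update(q, mu=[1.0, 2.0], A=[3.0, 1.0])  # -> [2.0, 0.0]
```

The second queue illustrates the null-packet convention: the service exceeds the backlog plus arrivals, so the backlog is truncated at zero.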
We adopt the following notion of queue stability [1]:

q_av ≜ lim sup_{t→∞} (1/t) Σ_{τ=0}^{t−1} Σ_{j=1}^{r} E[q_j(τ)] < ∞.   (4)
We use Π to denote an action-choosing policy, and use f_av^Π to denote its time average cost, i.e.,

f_av^Π ≜ lim sup_{t→∞} (1/t) Σ_{τ=0}^{t−1} E[f^Π(τ)],   (5)

where f^Π(τ) is the cost incurred at time τ under policy Π. We call an action-choosing policy feasible if at every time slot t it only chooses actions from the feasible action set X_i when S(t) = s_i. We then call a feasible action-choosing policy under which (4) holds a stable policy.

In every slot, the network controller observes the current network state and prediction, and chooses a control action, with the goal of minimizing the time average cost subject to network stability. This goal can be mathematically stated as:⁶

(P1) min : f_av^Π, s.t. (4).

In the following, we call (P1) the stochastic problem, and we use f_av^π to denote its optimal solution given a fixed distribution π. It can be seen that the examples in Section II can all be modeled by our stochastic problem framework.

⁵We use cost and utility interchangeably in this paper.
⁶When π(t) is time-varying, the optimal system utility needs to be defined carefully. We will specify it when discussing the corresponding results.

Throughout our paper, we make the following assumption.

Assumption 1. For every system distribution π_k, there exists a constant ε_k = Θ(1) > 0 such that for any valid state distribution π′ = (π′_1, ..., π′_M) with ‖π′ − π_k‖_tv ≤ ε_k, there exist a set of actions {x_z^{(s_i)}}, z = 1, 2, ..., ∞, i = 1, ..., M, with x_z^{(s_i)} ∈ X_i and variables ϑ_z^{(s_i)} ≥ 0 for all s_i and z with Σ_z ϑ_z^{(s_i)} = 1 for all s_i (possibly depending on π′), such that:

Σ_{s_i} π′_i Σ_z ϑ_z^{(s_i)} [A_j(s_i, x_z^{(s_i)}) − μ_j(s_i, x_z^{(s_i)})] ≤ −η_0, ∀ j,   (6)

where η_0 = Θ(1) > 0 is independent of π′. ♦

⁷Note that η_0 ≥ 0 is a necessary condition for network stability [1].

Assumption 1 corresponds to the "slack" condition commonly assumed in the literature with ε_k = 0, e.g., [36] and [37].⁷ With ε_k > 0, we assume that when two systems are relatively close to each other (in terms of π), they can both be stabilized by some (possibly different) randomized control policy that results in the same slack.

E. Discussion of the model

Two key differences between our model and previous ones include (i) π(t) itself can be time-varying and (ii) the operator gets access to a prediction window W_w(t) that contains imperfect prediction. These two extensions are important to the current network control literature. First, practical systems are often non-stationary, so system dynamics can have time-varying distributions, and it is important to have efficient algorithms that automatically adapt to the changing environment. Second, prediction has recently been made increasingly accurate in various contexts, e.g., user mobility in cellular networks and harvestable energy availability in wireless systems, by data collection and machine learning tools. Thus, it is critical to understand the fundamental benefits and limits of prediction, and its optimal usage.

V. THE DETERMINISTIC PROBLEM

For our later algorithm design and analysis, here we define the deterministic problem and its dual problem [38]. Specifically, the deterministic problem for a given distribution π is defined as follows [38]:

min :  V Σ_{s_i} π_i f(s_i, x^{(s_i)})   (7)
s.t.   Σ_{s_i} π_i [A_j(s_i, x^{(s_i)}) − μ_j(s_i, x^{(s_i)})] ≤ 0, ∀ j,
       x^{(s_i)} ∈ X_i, ∀ i = 1, 2, ..., M.

Here the minimization is taken over x ∈ ∏_i X_i, where x = (x^{(s_1)}, ..., x^{(s_M)})^T, and V ≥ 1 is a positive constant introduced for later analysis. The dual problem of (7) can be obtained as follows:

max : g(γ, π),  s.t. γ ⪰ 0,   (8)

where g(γ, π) is the dual function for problem (7) and is defined as:

g(γ, π) = inf_{x^{(s_i)} ∈ X_i} Σ_{s_i} π_i [ V f(s_i, x^{(s_i)}) + Σ_j γ_j ( A_j(s_i, x^{(s_i)}) − μ_j(s_i, x^{(s_i)}) ) ].   (9)
5
γ = (γ_1, ..., γ_r)^T is the Lagrange multiplier of (7). It is well known that g(γ, π) in (9) is concave in the vector γ for all γ ∈ R^r. Hence, the problem (8) can usually be solved efficiently, particularly when the cost functions and rate functions are separable over different network components [39]. We use γ*_π to denote the optimal multiplier corresponding to a given π and sometimes omit the subscript when it is clear. Denote by g*_π the optimal value of (8) under a fixed distribution π. It was shown in [40] that:

f_av^π = g*_π.   (10)

That is, g*_π characterizes the optimal time average cost of the stochastic problem. For our analysis, we make the following assumption on the g(γ, π_k) function.
Assumption 2. For every system distribution π_k, g(γ, π_k) has a unique optimal solution γ*_{π_k} ≠ 0 in R^r. ♦
Assumption 2 is also commonly assumed and holds for
many network utility optimization problems, e.g., [8] and [38].
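With finite action sets, the infimum in (9) decomposes state by state, so g(γ, π) can be evaluated directly and maximized over γ, e.g., on a grid. Below is a minimal sketch; the one-state, one-constraint instance at the bottom is a made-up example, not from the paper.

```python
def dual_g(gamma, pi, states, actions, f, A, mu, V=1.0):
    """Evaluate the dual function g(gamma, pi) in (9). Because the action
    sets are finite, the inf over x decomposes into a per-state minimum."""
    total = 0.0
    for i, s in enumerate(states):
        total += pi[i] * min(
            V * f(s, x) + sum(g_j * (A(j, s, x) - mu(j, s, x))
                              for j, g_j in enumerate(gamma))
            for x in actions[i]
        )
    return total

# Toy instance: one state, one queue, constant arrivals 0.5, service = action.
states, actions, pi = ["s1"], [[0.0, 1.0]], [1.0]
f = lambda s, x: x                       # cost = power used
A = lambda j, s, x: 0.5                  # constant arrival rate
mu = lambda j, s, x: x                   # service rate equals the action
V = 10.0

# Here g(gamma) = min(0.5*gamma, V - 0.5*gamma): concave, maximized at gamma = V.
best = max(range(0, 25),
           key=lambda g: dual_g([float(g)], pi, states, actions, f, A, mu, V))
print(best)  # 10
```

The grid search recovers the optimal multiplier γ* = V, matching the closed form of this toy dual, and illustrates why (8) is easy to solve when g is concave.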
VI. PREDICTIVE LEARNING-AIDED CONTROL

In this section, we present the predictive learning-aided control algorithm (PLC). PLC contains three main components:
a distribution estimator, a learning component, and an online
queue-based controller. Below, we first present the estimation
part. Then, we present the PLC algorithm.
A. Distribution estimation and change detection

Here we specify the distribution estimator. The idea is to first combine the prediction in Ww(t) with historic state information to form an average distribution, and then perform statistical comparisons for change detection. We call the module the average distribution estimate (ADE).

Specifically, ADE maintains two windows Wm(t) and Wd(t) to store network state samples, i.e.,

Wd(t) = {b_d^s(t), ..., b_d^e(t)},   (11)
Wm(t) = {b_m(t), ..., min[b_d^s(t), b_m(t) + T_l]}.   (12)

Here b_d^s(t) and b_m(t) mark the beginning slots of Wd(t) and Wm(t), respectively, and b_d^e(t) marks the end of Wd(t). Ideally, Wd(t) contains the most recent d samples (including the prediction) and Wm(t) contains T_l subsequent samples (where T_l is a pre-specified number). We denote W_m(t) = |Wm(t)| and W_d(t) = |Wd(t)|. Without loss of generality, we assume that d ≥ w + 1. This is a reasonable assumption, as we see later that d grows with our control parameter V while prediction power is often limited in practice. The idea of ADE is shown in Fig. 2.

We use π̂^d(t) and π̂^m(t) to denote the empirical distributions of Wd(t) and Wm(t), i.e.,8

π̂_i^d(t) = (1/d) [ Σ_{τ=(t+w−d)^+}^{t−1} 1_{[S(τ)=s_i]} + Σ_{τ∈Ww(t)} π̂_i(τ) ],
π̂_i^m(t) = (1/W_m(t)) Σ_{τ∈Wm(t)} 1_{[S(τ)=s_i]}.

That is, π̂^d(t) is the average of the empirical distribution of the "observed" samples in Wd(t) and the predicted distribution, whereas π̂^m(t) is the empirical distribution.

The formal procedure of ADE is as follows (parameters T_l, d, ε_d will be specified later).

Average Distribution Estimate (ADE(T_l, d, ε_d)): Initialize b_d^s(0) = 0, b_d^e(0) = t + w and b_m(0) = 0, i.e., Wd(t) = {0, ..., t + w} and Wm(t) = φ. At every time t, update b_d^s(t), b_d^e(t) and b_m(t) as follows:
(i) If W_m(t) ≥ d and ||π̂^d(t) − π̂^m(t)||_tv > ε_d, set b_m(t) = t + w + 1 and b_d^s(t) = b_d^e(t) = t + w + 1.
(ii) If W_m(t) = T_l and there exists k such that ||π̂(t + k) − π̂^m(t)||_tv > e(k) + 2M log(T_l)/√T_l, set b_m(t) = t + w + 1 and b_d^s(t) = b_d^e(t) = t + w + 1. Mark t + w + 1 a reset point.
(iii) Else if t ≤ b_d^s(t − 1), set b_m(t) = b_m(t − 1), b_d^s(t) = b_d^s(t − 1), and b_d^e(t) = b_d^e(t − 1).9
(iv) Else set b_m(t) = b_m(t − 1), b_d^s(t) = (t + w − d)^+ and b_d^e(t) = t + w.
Output an estimate at time t as follows:

π_a(t) = π̂^m(t) if W_m(t) ≥ T_l, and π_a(t) = (1/(w + 1)) Σ_{k=0}^{w} π̂(t + k) otherwise. ♦   (13)

8 Note that this is only one way to utilize the samples. Other methods such as EWMA can also be applied when appropriate.

Fig. 2. Evolution of Wm(t) and Wd(t). (Left) No change detected: Wd(t) advances by one slot and Wm(t) increases its size by one. (Right) Change detected: both windows set their start and end points to t + w + 1.
The intuition of ADE is that if the environment is changing
over time, we should rely on prediction for control. Else
if the environment is stationary, then one should use the
average distribution learned over time to combat the potential
prediction error that may affect performance. Tl is introduced
to ensure the accuracy of the empirical distribution and can
be regarded as the confidence-level given to the distribution
stationarity. A couple of technical remarks are in order. (a) The term 2M log(T_l)/√T_l compensates for the inevitable deviation of π̂^m(t) from the true value due to randomness. (b)
In Wm (t), we only use the first Tl historic samples. Doing so
avoids random oscillation in estimation and facilitates analysis.
Note that prediction is used in two ways in ADE. First, it is
used in step (i) to decide whether the empirical distributions
match (average prediction). Second, it is used to check whether
prediction is consistent with the history (individual prediction).
The reason for having this two-way utilization is to accommodate general prediction types. For example, suppose each
π̂(t + k) denotes the indicator for state S(t + k), e.g., as in the
look-ahead window model [22]. Then, step (ii) is loose since
e(k) is large, but step (i) will be useful. On the other hand,
when π̂(t + k) gets closer to the true distribution, both steps
will be useful.
9 This step is invoked after we set b_m(t_0) = b_d^s(t_0) = t_0 + w + 1 ≥ t for some time t_0, in which case the two windows remain unchanged until t is larger than t_0 + w + 1.
B. Predictive learning-aided control
We are now ready to present the PLC algorithm. Our algorithm is shown in Fig. 1, and the formal description is given below.

Predictive Learning-aided Control (PLC): At time t, do:
• (Estimation) Update π_a(t) with ADE(T_l, d, ε_d).
• (Learning) Solve the following empirical problem and compute the optimal Lagrange multiplier γ*(t), i.e.,

  max: g(γ, π_a(t)),  s.t. γ ⪰ 0.   (14)

If γ*(t) = ∞, set γ*(t) = V log(V) · 1. If W_m(t − 1) = T_l and π_a(t) ≠ π_a(t − 1), set q(t + w + 1) = 0.
• (Control) At every time slot t, observe the current network state S(t) and the backlog q(t). If S(t) = s_i, choose x^(s_i) ∈ X_i that solves the following:

  max: −V f(s_i, x) + Σ_{j=1}^{r} Q_j(t) [μ_j(s_i, x) − A_j(s_i, x)]   (15)
  s.t. x ∈ X_i,

where Q_j(t) ≜ q_j(t) + (γ_j*(t) − θ)^+. Then, update the queues according to (3) with Last-In-First-Out. ♦
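Once the augmented queues Q_j(t) are formed, the per-slot control step (15) is a finite maximization over X_i. A minimal sketch follows; the single-queue power-allocation instance in the usage is a made-up example in the spirit of Section VIII, not the paper's exact setup.

```python
import math

def plc_control(state, q, gamma_star, theta, actions, f, A, mu, V):
    """Solve (15): pick x in X_i maximizing
    -V f(s_i, x) + sum_j Q_j(t) [mu_j(s_i, x) - A_j(s_i, x)],
    where Q_j(t) = q_j(t) + (gamma*_j(t) - theta)^+."""
    Q = [qj + max(gj - theta, 0.0) for qj, gj in zip(q, gamma_star)]
    def weight(x):
        return (-V * f(state, x)
                + sum(Qj * (mu(j, state, x) - A(j, state, x))
                      for j, Qj in enumerate(Q)))
    return max(actions, key=weight)

# Toy single-queue instance: cost = power, service = log(1 + CH * P), CH = state.
f = lambda s, x: x
A = lambda j, s, x: 0.0
mu = lambda j, s, x: math.log(1 + s * x)
powers = [0, 1, 2]

print(plc_control(1, [10.0], [0.0], 0.0, powers, f, A, mu, V=1.0))  # 2
print(plc_control(1, [0.1], [0.0], 0.0, powers, f, A, mu, V=1.0))   # 0
```

A large backlog (or a large learned multiplier γ* entering through Q_j) pushes the controller toward aggressive service, while a small one makes it conserve power, which is the usual backpressure trade-off.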
For readers who are familiar with the Backpressure (BP)
algorithm, e.g., [1] and [41], the control component of PLC
is the BP algorithm with its queue vector augmented by the
empirical multiplier γ ∗ (t). Also note that packet dropping is
introduced to enable quick adaptation to new dynamics if there
is a distribution change. It occurs only when a long-lasting
distribution ends, which avoids dropping packets frequently
in a fast-changing environment.
We have the following remarks. (i) Prediction usage:
Prediction is explicitly incorporated into control by forming
an average distribution and converting the distribution estimate into a Lagrange multiplier. The intuition for having T_l = max(V^c, e_w^{−2}) is that when e_w is small, we should rely on prediction as much as possible, and only switch to learned statistics when they are sufficiently accurate. (ii) Connection
with RHC: It is interesting to see that when Wm (t) < Tl ,
PLC mimics the commonly adopted receding-horizon-control
method (RHC), e.g., [28]. The main difference is that, in
RHC, future states are predicted and are directly fed into a
predictive optimization formulation for computing the current
action. Under PLC, distribution prediction is combined with
historic state information to compute an empirical multiplier
for augmenting the controller. In this regard, PLC can be
viewed as exploring the benefits of statistics whenever it finds
the system stationary (and does so automatically). (iii) Parameter selection: The parameters in PLC can be conveniently chosen as follows. First, fix a detection error probability δ = V^{−log(V)}. Then, choose a small ε_d and a d that satisfies d ≥ 4 log(V)^2/ε_d^2 + w + 1. Finally, choose T_l = max(V^c, e_w^{−2}) and θ according to (17).
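The parameter choices in remark (iii) can be wrapped up as a small helper. This is a sketch that follows the stated formulas literally; the default values of c and ε_d are illustrative, not prescribed by the paper.

```python
import math

def plc_parameters(V, e_w, w, c=0.5, eps_d=0.1):
    """Parameter choices from remark (iii): delta = V^{-log V},
    d >= 4 log(V)^2 / eps_d^2 + w + 1, T_l = max(V^c, e_w^{-2}),
    and theta from (17)."""
    delta = V ** (-math.log(V))
    d = math.ceil(4 * math.log(V) ** 2 / eps_d ** 2 + w + 1)
    Tl = max(V ** c, e_w ** -2) if e_w > 0 else float('inf')
    theta = 2 * math.log(V) ** 2 * (1 + V / math.sqrt(Tl))
    return {'delta': delta, 'd': d, 'Tl': Tl, 'theta': theta}

print(plc_parameters(V=100, e_w=0.04, w=4)['Tl'])  # 625.0
```

Note how T_l is driven by the prediction error: for e_w = 0.04 the e_w^{−2} term (625) dominates V^c, and for perfect prediction (e_w = 0) the helper returns T_l = ∞, matching the special case discussed after Theorem 2.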
While recent works [16] and [17] also design learning-based algorithms that utilize historic information, they do not
consider prediction and do not provide insight on its benefits
and the impact of prediction error. Moreover, [16] focuses on
stationary systems and [17] adopts a frame-based scheme.
VII. PERFORMANCE ANALYSIS

This section presents the performance results of PLC. We focus on four metrics: detection efficiency, network utility, service delay, and algorithm convergence. These metrics are chosen to represent robustness, resource utilization efficiency, quality-of-service, and adaptability, respectively.
A. Detection and estimation
We first look at the detection and estimation part. The
following lemma summarizes the performance of ADE, which
is affected by the prediction accuracy as expected.
Lemma 1. Under ADE(T_l, d, ε_d), we have:
(a) Suppose at a time t, π(τ_1) = π_1 for all τ_1 ∈ Wd(t) and π(τ_2) = π_2 ≠ π_1 for all τ_2 ∈ Wm(t), and max_i |π_{1i} − π_{2i}| > 4(w + 1)e_w/d. Then, by choosing ε_d < ε_0 ≜ max_i |π_{1i} − π_{2i}|/2 − (w + 1)e_w/d and d > ln(4/δ) · 2/ε_d^2 + w + 1, if W_m(t) ≥ W_d(t) = d, with probability at least 1 − δ, b_m(t + 1) = t + w + 1 and Wm(t + 1) = φ, i.e., W_m(t + 1) = 0.
(b) Suppose π(t) = π for all t. Then, if W_m(t) ≥ W_d(t) = d, under ADE(T_l, d, ε_d) with d ≥ ln(4/δ) · 2/ε_d^2 + w + 1, with probability at least 1 − δ − (w + 1)M T_l^{−2 log(T_l)}, b_m(t + 1) = b_m(t). ♦
Proof. See Appendix A.
Lemma 1 shows that for a stationary system, i.e., π(t) = π,
Wm (t) will likely grow to a large value (Part (b)), in which
case π a (t) will stay close to π most of the time. If instead
Wm (t) and Wd (t) contain samples from different distributions, ADE will reset Wm (t) with high probability. Note that
since the first w + 1 slots are predicted, this means that PLC
detects changes O(w) slots faster compared to that without
prediction. The condition max |π1i − π2i | > 4(w + 1)ew /d
can be understood as follows. If we want to distinguish
two different distributions, we want the detection threshold
to be no more than half of the distribution distance. Now
with prediction, we want the potential prediction error to
be no more than half of the threshold, hence the factor 4.
Also note that the delay involved in detecting a distribution change is nearly order-optimal, in that it requires only d = O(1/min_i |π_{1i} − π_{2i}|^2) time, which is known to be necessary for distinguishing two distributions [42]. Moreover, d = O(ln(1/δ)) shows that a logarithmic window size is enough to ensure high detection accuracy.
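The O(1/gap^2) scaling of the detection window can be read off directly from the stated requirement d ≥ 2 ln(4/δ)/ε_d^2 + w + 1. The helper below is a sketch specialized to the perfect-prediction case, where the threshold ε_d is set to half the distribution gap.

```python
import math

def detection_window(gap, delta, w=0):
    """Samples needed to distinguish two distributions whose entries
    differ by `gap`, with detection error probability delta
    (perfect-prediction case e_w = 0, so eps_d = gap / 2)."""
    eps_d = gap / 2
    return math.ceil(2 * math.log(4 / delta) / eps_d ** 2 + w + 1)

d1 = detection_window(0.2, 0.005)
d2 = detection_window(0.1, 0.005)
print(d1, d2)  # halving the gap roughly quadruples the window
```

The logarithmic dependence on 1/δ is also visible: tightening δ by several orders of magnitude only adds a modest constant factor to the window.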
B. Utility and delay
In this section, we look at the utility and delay performance
of PLC. To state our results, we first define the following
structural property of the system.
Definition 1. A system is called polyhedral with parameter ρ > 0 under distribution π if the dual function g(γ, π) satisfies:

g(γ*, π) ≥ g(γ, π) + ρ ||γ*_π − γ||. ♦   (16)
The polyhedral property typically holds for practical systems, especially when action sets are finite (see [38] for more
discussions).
1) Stationary system: We first consider stationary systems, i.e., π(t) = π. Our theorem shows that PLC achieves the near-optimal utility-delay tradeoff for stationary networks. This
result is important, as any good adaptive algorithm must be
able to handle stationary settings well.
Theorem 1. Suppose π(t) = π, the system is polyhedral with ρ = Θ(1), e_w > 0, and q(0) = 0. Choose 0 < ε_d < ε_0 ≜ 2(w + 1)e_w/d, d = log(V)^3/ε_d^2, T_l = max(V^c, e_w^{−2}) for c ∈ (0, 1), and

θ = 2 log(V)^2 (1 + V/√T_l).   (17)

Then, with a sufficiently large V, PLC achieves the following:
(a) Utility: f_av^PLC = f_av^π + O(1/V).
(b) Delay: for all but an O(1/V) fraction of traffic, the average packet delay is D = O(log(V)^2).
(c) Dropping: the packet dropping rate is O(V^{−1}). ♦
Proof. See Appendix B.
Choosing ε = 1/V, we see that PLC achieves the near-optimal [O(ε), O(log(1/ε)^2)] utility-delay tradeoff. Moreover,
prediction enables PLC to also greatly reduce the queue size
(see Part (b) of Theorem 2). Our result is different from the
results in [20] and [22] for proactive service settings, where
delay vanishes as prediction power increases. This is because we only assume observability of future states but not pre-service, which highlights the difference between pre-service and pure prediction. Note that the performance of PLC does not depend heavily on ε_d in Theorem 1. The value ε_d is more crucial for non-stationary systems, where a low false-negative
rate is critical for performance. Also note that although packet
dropping can occur during operation, the fraction of packets
dropped is very small, and the resulting performance guarantee
cannot be obtained by simply dropping the same amount of
packets, in which case the delay will still be Θ(1/ε).
Although Theorem 1 has a similar form as those in [17] and
[16], the analysis is very different, in that (i) prediction error
must be taken into account, and (ii) PLC performs sequential
detection and decision-making.
2) Piecewise stationary system: We now turn to the non-stationary case and consider the scenario where π(t) changes
over time. In this case, we see that prediction is critical as
it significantly accelerates convergence and helps to achieve
good performance when each distribution only lasts for a
finite time. When the distribution can change arbitrarily, it is hard to even define optimality. Thus, we
consider the case when the system is piecewise stationary, i.e.,
each distribution lasts for a duration of time, and study how
the algorithm optimizes the performance for each distribution.
The following theorem summarizes the performance of PLC
in this case. In the theorem, we define D_k ≜ t_k + d − t*, where t* ≜ sup{t < t_k + d : t is a reset point}, i.e., the most recent ending time after having a cycle with size T_l (recall that reset points are marked in step (ii) of ADE and d ≥ w + 1).
Theorem 2. Suppose d_k ≥ 4d and the system is polyhedral with ρ = Θ(1) for all k. Also, suppose there exists ε_0* = Θ(1) > 0 such that ε_0* ≤ inf_{k,i} |π_{ki} − π_{(k−1)i}| and q(0) = 0. Choose ε_d < ε_0* in ADE, and choose d, θ and T_l as in Theorem 1. Fix any distribution π_k with length d_k = Θ(V^{1+a} T_l) for some a = Θ(1) > 0. Then, under PLC with a sufficiently large V, if Wm(t_k) only contains samples after t_{k−1}, we achieve the following with probability 1 − O(V^{−3 log(V)/4}):
(a) Utility: f_av^PLC = f_av^{π_k} + O(1/V) + O(D_k log(V)/(T_l V^{1+a})).
(b) Queueing: q_av = O((min(V^{1−c/2}, V e_w) + 1) log^2(V) + D_k + d).
(c) In particular, if d_{k−1} = Θ(T_l V^{a_1}) for a_1 = Θ(1) > 0 and Wm(t_{k−1}) only contains samples after t_{k−2}, then with probability 1 − O(V^{−2}), D_k = O(d), f_av^PLC = f_av^{π_k} + O(1/V) and q_av = O(min(V^{1−c/2}, V e_w) + log^2(V)). ♦
Proof. See Appendix C.
A few remarks are in order. (i) Theorem 2 shows that,
with an increasing prediction power, i.e., a smaller ew , it is
possible to simultaneously reduce network queue size and the
time it takes to achieve a desired average performance (even
if we do not execute actions ahead of time). The requirement
dk = Θ(V 1+a Tl ) can be strictly less than the O(V 2−c/2+a )
requirement for RLC in [17] and the O(V 2 ) requirement of
BP for achieving the same average utility. This implies that
PLC finds a good system operating point faster than previous
algorithms, a desirable feature for network algorithms. (ii) The
dependency on Dk here is necessary. This is because PLC
does not perform packet dropping if previous intervals do
not exceed length Tl . As a result, the accumulated backlog
can affect decision making in the current interval. Fortunately, the queues are shown to be small and do not heavily affect performance (also see simulations). (iii) To appreciate the
queueing result, note that BP (without learning) under the same
setting will result in an O(V ) queue size.
Compared to the analysis in [17], one complicating factor
in proving Theorem 2 is that ADE may not always throw
away samples from a previous interval. Instead, ADE ensures
that with high probability, only o(d) samples from a previous
interval will remain. This ensures high learning accuracy and
fast convergence of PLC. One interesting special case not covered in the last two theorems is when e_w = 0. In this case, prediction is perfect and T_l = ∞, and PLC always runs with π_a(t) = (1/(w + 1)) Σ_{k=0}^{w} π̂(t + k), which is the exact average distribution. For this case, we have the following result.
Theorem 3. Suppose e_w = 0 and q(0) = 0. Then, PLC achieves the following:
(a) Suppose π(t) = π and the system is polyhedral with ρ = Θ(1). Then, under the conditions of Theorem 1, PLC achieves the [O(ε), O(log(1/ε)^2)] utility-delay tradeoff.
(b) Suppose d_k ≥ d log^2(V) and the system is polyhedral with ρ = Θ(1) under each π_k. Under the conditions of Theorem 2, for an interval d_k ≥ V^{1+ε} for any ε > 0, PLC achieves that f_av^PLC = f_av^{π_k} + O(1/V) and E{q(t_k)} = O(log^4(V)). ♦
Proof. See Appendix D.
The intuition here is that since prediction is perfect, π a (t) =
π k during [tk + d, tk+1 − w]. Therefore, a better performance
can be achieved. The key challenge in this case is that PLC
does not perform any packet dropping. Thus, queues can build
up and one needs to show that the queues will be concentrating
around θ · 1 even when the distribution changes.
C. Convergence time
We now consider the algorithm convergence time, which
is an important evaluation metric and measures how long it
takes for an algorithm to reach its steady-state. While recent
works [17], [16], [43], and [44] also investigate algorithm
convergence time, they do not consider utilizing prediction
in learning and do not study the impact of prediction error.
To formally state our results, we adopt the following definition of convergence time from [16].
Definition 2. Let ζ > 0 be a given constant and let π be a system distribution. The ζ-convergence time of a control algorithm, denoted by T_ζ, is the time it takes for the effective queue vector Q(t) to get to within ζ distance of γ*_π, i.e.,

T_ζ ≜ inf{t : ||Q(t) − γ*_π|| ≤ ζ}. ♦   (18)

We have the following theorem. Recall that w ≤ d = Θ(log(V)^2).
Theorem 4. Assume all conditions in Theorem 2, except that π(t) = π_k for all t ≥ t_k. If e_w = 0, then under PLC,

E{T_G} = O(log^4(V)).   (19)

Else suppose e_w > 0. Under the conditions of Theorem 2, with probability 1 − O(1/(V T_l) + D_k/(V^2 T_l)),

E{T_G} = O(θ + T_l + D_k + w),   (20)
E{T_{G_1}} = O(d).   (21)

Here G = Θ(1) and G_1 = Θ(D_k + 2 log(V)^2 (1 + V e_w)), where D_k is defined in Theorem 2 as the most recent reset point prior to t_k. In particular, if d_{k−1} = Θ(T_l V^{a_1}) for some a_1 = Θ(1) > 0 and θ = O(log(V)^2), then with probability 1 − O(V^{−2}), D_k = O(d), and E{T_{G_1}} = O(log^2(V)). ♦
Proof. See Appendix E.
Here the assumption π(t) = π k for all t ≥ tk is made to
avoid the need for specifying the length of the intervals. It is
interesting to compare (19), (20) and (21) with the convergence
results in [16] and [17] without prediction, where it was shown
that the convergence time is O(V 1−c/2 log(V )2 + V c ), with
a minimum of O(V 2/3 ). Here although we may still need
O(V^{2/3}) time for getting into a G-neighborhood (depending on e_w), getting to the G_1-neighborhood can take only O(log^2(V)) time, which is much faster compared to previous
results, e.g., when ew = o(V −2 ) and Dk = O(w), we have
G1 = O(log2 (V )). This confirms our intuition that prediction accelerates algorithm convergence and demonstrates the
power of (even imperfect) prediction.
VIII. SIMULATION

In this section, we present simulation results of PLC in a two-queue system shown in Fig. 3. Though being simple, the system models various settings, e.g., a two-user downlink transmission problem in a mobile network, a CPU scheduling problem with two applications, or an inventory control system where two types of orders are being processed.

Fig. 3. A single-server two-queue system. Each queue receives random arrivals. The server can only serve one queue at a time.

A_j(t) denotes the number of arriving packets to queue j at time t. We assume A_j(t) is i.i.d., taking value 1 with probability p_j and 0 otherwise, and use p_1 = 0.3 and p_2 = 0.6. Thus, λ_1 = 0.3 and λ_2 = 0.6. Each queue has a time-varying channel condition. We denote CH_j(t) the channel condition of queue j at time t. We assume that CH_j(t) ∈ CH_j with CH_1 = {0, 1} and CH_2 = {1, 2}. The channel distributions are assumed to be uniform. At each time, the server determines the power allocation to each queue. We use P_j(t) to denote the power allocated to queue j at time t. Then, the instantaneous service rate queue j gets is given by:

μ_j(t) = log(1 + CH_j(t) P_j(t)).   (22)

We assume that P_j(t) ∈ P = {0, 1, 2} for j = 1, 2, and at each time only one queue can be served. The objective is to stabilize the system with minimum average power. It can be verified that Assumptions 1 and 2 both hold in this example.

We compare PLC with BP in two cases. The first case is a stationary system where the arrival distributions remain constant. The second case is a non-stationary case, where we change the arrival distributions during the simulation. In both cases we simulate the system for T = 5 × 10^4 slots. We simulate V ∈ {20, 50, 100, 150, 200, 300}. We set w + 1 = 5 and generate prediction error by adding uniform random noise to distributions with max value e(k) (specified below). We also use ε_d = 0.1, δ = 0.005 and d = 2 ln(4/δ)/ε_d^2 + w + 1. We also simplify the choice of θ and set it to θ = log(V)^2.

We first examine the long-term performance. Fig. 4 shows the utility-delay performance of PLC compared to BP in the stationary setting. We simulated two versions of PLC: one with e_w = 0 (PLC) and the other with e_w = 0.04 (PLC-e). From the plot, we see that both versions achieve a similar utility as BP, but guarantee a much smaller delay. The reason PLC-e has a better performance is due to packet dropping. We observe an average packet dropping rate of around 0.06. As noted before, the delay reduction of PLC cannot be achieved by simply dropping this amount of packets.

[Figure: average power (left) and average queue size (right) versus V, comparing PLC, BP, and PLC-e.]

Fig. 4. Utility and delay performance comparison between PLC and BP.
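For reference, the two-queue example above can be simulated in a few lines. The sketch below uses a plain channel-aware max-weight policy rather than full PLC (no learning or multiplier augmentation), and the horizon and policy details are illustrative.

```python
import math
import random

def simulate(policy, T=20000, seed=0):
    """Two-queue example of Section VIII: Bernoulli arrivals with
    p1 = 0.3, p2 = 0.6, channels uniform over CH1 = {0,1}, CH2 = {1,2},
    one queue served per slot at rate log(1 + CH * P), P in {0, 1, 2}."""
    rng = random.Random(seed)
    q = [0.0, 0.0]
    energy = 0.0
    for _ in range(T):
        ch = (rng.choice([0, 1]), rng.choice([1, 2]))
        j, P = policy(q, ch)                       # queue to serve, power level
        q[j] = max(q[j] - math.log(1 + ch[j] * P), 0.0)
        energy += P
        q[0] += rng.random() < 0.3                 # A_1(t)
        q[1] += rng.random() < 0.6                 # A_2(t)
    return energy / T, q

def max_weight(q, ch):
    """Serve the queue with the largest backlog-times-rate product at P = 2."""
    w = [q[j] * math.log(1 + ch[j] * 2) for j in range(2)]
    return (0 if w[0] >= w[1] else 1), 2

avg_power, final_q = simulate(max_weight)
print(avg_power, final_q)  # queues stay bounded under this policy
```

This confirms the arrival vector (0.3, 0.6) is supportable; PLC would additionally trade power for delay through the V parameter and the learned multiplier.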
Next, we take a look at the detection and convergence
performance of PLC. Fig. 5 shows the performance of PLC
with perfect prediction (ew = 0), PLC with prediction error
(ew = 0.04) and BP when the underlying distribution changes.
Specifically, we run the simulation for T = 5000 slots and
start with the arrival rates of p1 = 0.2 and p2 = 0.4. Then,
we change them to p1 = 0.3 and p2 = 0.6 at time T /2.
[Figure: queue traces over 5000 slots, showing Q(t) and the actual queues under PLC (e_w = 0), PLC (e_w = 0.04), and BP.]
Fig. 5. Convergence comparison between PLC and BP for queue 1 under
V = 100. Here PLC (ew = 0) is the perfect case and PLC (ew = 0.04)
contains prediction error. Both versions converge much faster compared to
BP.
We can see from the green and red curves that PLC quickly
adapts to the change and modifies the Lagrange multiplier
accordingly. By doing so, the actual queues under PLC (the
purple and the brown curves) remain largely unaffected. For
comparison, we see that BP takes a longer time to adapt to the
new distribution and results in a larger queue size. We also see
that during the 5000 slots, PLC (ew = 0.04) drops packets 3
times (zero for the first half), validating the results in Lemma 1
and Theorem 1. Moreover, after the distribution change, PLC
(ew = 0.04) quickly adapts to the new equilibrium, despite
having imperfect prediction. The fast convergence result also
validates our theorem about short term utility performance
under PLC. Indeed, if we look at slots during time 200 − 500,
and slots between 2500−3500, we see that when BP is learning
the target backlog, PLC already operates near the optimal
mode. This shows the benefits of prediction and learning in
stochastic network control.
IX. CONCLUSION

We investigate the problem of stochastic network optimization in the presence of imperfect state prediction and non-stationarity. Based on a novel distribution-accuracy curve prediction model, we develop the predictive learning-aided control (PLC) algorithm. PLC is an online algorithm that requires zero a priori system statistical information, and contains three main functionalities: sequential distribution estimation and change detection, dual learning, and online queue-based control. We show that PLC simultaneously achieves good long-term performance, short-term queue size reduction, accurate change detection, and fast algorithm convergence. Our results demonstrate that state prediction (even imperfect) can help improve performance, and quantify the benefits of prediction in four important metrics, i.e., utility (efficiency), delay (quality-of-service), detection (robustness), and convergence (adaptability). They provide new insight for joint prediction, learning and optimization in stochastic networks.
REFERENCES
[1] L. Georgiadis, M. J. Neely, and L. Tassiulas. Resource Allocation and
Cross-Layer Control in Wireless Networks. Foundations and Trends in
Networking Vol. 1, no. 1, pp. 1-144, 2006.
[2] Y. Chon, E. Talipov, H. Shin, and H. Cha. Mobility prediction-based
smartphone energy optimization for everyday location monitoring. ACM
Sensys, 2011.
[3] G. Ananthanarayanan, A. Ghodsi, S. Shenker, and I. Stoica. Effective
straggler mitigation: Attack of the clones. ACM NSDI, 2014.
[4] X. Zou, J. Erman, V. Gopalakrishnan, E. Halepovic, R. Jana, X. Jin, J. Rexford, and R. K. Sinha. Can accurate predictions improve video streaming in cellular networks? ACM HotMobile, 2015.
[5] TechCrunch. Amazon patents "anticipatory" shipping to start sending stuff before you've bought it. http://techcrunch.com/2014/01/18/amazon-pre-ships/, Jan 2014.
[6] Adweek. Facebook begins prefetching to improve mobile site speed.
http://www.adweek.com/socialtimes/prefetching/644281, Aug 2016.
[7] M. Gatzianas, L. Georgiadis, and L. Tassiulas. Control of wireless
networks with rechargeable batteries. IEEE Trans. on Wireless Communications, Vol. 9, No. 2, Feb. 2010.
[8] A. Eryilmaz and R. Srikant. Fair resource allocation in wireless networks
using queue-length-based scheduling and congestion control. IEEE/ACM
Trans. Netw., 15(6):1333–1344, 2007.
[9] B. Li and R. Srikant. Queue-proportional rate allocation with per-link
information in multihop networks. Proceedings of ACM Sigmetrics,
2015.
[10] B. Ji and Y. Sang. Throughput characterization of node-based scheduling in multihop wireless networks: A novel application of the Gallai-Edmonds structure theorem. Proceedings of ACM MobiHoc, 2016.
[11] H. Zhao, C. H. Xia, Z. Liu, and D. Towsley. A unified modeling
framework for distributed resource allocation of general fork and join
processing networks. Proc. of ACM Sigmetrics, 2010.
[12] L. Jiang and J. Walrand. Stable and utility-maximizing scheduling for
stochastic processing networks. Allerton Conference on Communication,
Control, and Computing, 2009.
[13] R. Urgaonkar and M. J. Neely. Opportunistic scheduling with reliability
guarantees in cognitive radio networks. IEEE Transactions on Mobile
Computing, 8(6):766–777, June 2009.
[14] H. Su and A. El Gamal. Modeling and analysis of the role of fastresponse energy storage in the smart grid. Proc. of Allerton, 2011.
[15] R. Urgaonkar, B. Urgaonkar, M. J. Neely, and A. Sivasubramaniam. Optimal power cost management using stored energy in data centers. Proceedings of ACM Sigmetrics, June 2011.
[16] L. Huang, X. Liu, and X. Hao. The power of online learning in stochastic
network optimization. Proceedings of ACM Sigmetrics, 2014.
[17] L. Huang. Receding learning-aided control in stochastic networks. IFIP
Performance, Oct 2015.
[18] J. Tadrous, A. Eryilmaz, and H. El Gamal. Proactive resource allocation: harnessing the diversity and multicast gains. IEEE Transactions on Information Theory, 2013.
[19] J. Spencer, M. Sudan, and K. Xu. Queueing with future information. ArXiv Technical Report arXiv:1211.0618, 2012.
[20] S. Zhang, L. Huang, M. Chen, and X. Liu. Proactive serving reduces
user delay exponentially. Proceedings of ACM Sigmetrics (Poster Paper),
2014.
[21] K. Xu. Necessity of future information in admission control. Operations
Research, 2015.
[22] L. Huang, S. Zhang, M. Chen, and X. Liu. When Backpressure meets
Predictive Scheduling. Proceedings of ACM MobiHoc, 2014.
[23] L. Muppirisetty, J. Tadrous, A. Eryilmaz, and H. Wymeersch. On
proactive caching with demand and channel uncertainties. Proceedings
of Allerton Conference, 2015.
[24] S. Zhao, X. Lin, and M. Chen. Peak-minimizing online EV charging: Price-of-uncertainty and algorithm robustification. Proceedings of IEEE INFOCOM, 2015.
[25] N. Chen, A. Agarwal, A. Wierman, S. Barman, and L. L. H. Andrew.
Online convex optimization using predictions. Proceedings of ACM
Sigmetrics, 2015.
[26] N. Chen, J. Comden, Z. Liu, A. Gandhi, and A. Wierman. Using
predictions in online optimization: Looking forward with an eye on the
past. Proceedings of ACM Sigmetrics, 2016.
[27] M. Hajiesmaili, C. Chau, M. Chen, and L. Huang. Online microgrid
energy generation scheduling revisited: The benefits of randomization
and interval prediction. Proceedings of ACM e-Energy, 2016.
[28] M. Lin, Z. Liu, A. Wierman, and L. L. H. Andrew. Online algorithms
for geographical load balancing. IEEE IGCC, 2012.
[29] J. Tadrous, A. Eryilmaz, and H. El Gamal. Pricing for demand
shaping and proactive download in smart data networks. The 2nd
IEEE International Workshop on Smart Data Pricing (SDP), INFOCOM,
2013.
[30] M. Qu, H. Zhu, J. Liu, G. Liu, and H. Xiong. A cost-effective
recommender system for taxi drivers. ACM KDD, 2014.
[31] L. Huang. The value-of-information in matching with queues. IEEE/ACM Trans. on Networking, to appear.
[32] C. Tapparello, O. Simeone, and M. Rossi. Dynamic compression-transmission for energy-harvesting multihop networks with correlated sources. IEEE/ACM Trans. on Networking, 2014.
[33] Y. Yao, L. Huang, A. Sharma, L. Golubchik, and M. J. Neely. Data
centers power reduction: A two time scale approach for delay tolerant
workloads. IEEE Transactions on Parallel and Distributed Systems
(TPDS), vol. 25, no. 1, pp. 200-211, Jan 2014.
[34] W. Wang, K. Zhu, L. Ying, J. Tan, and L. Zhang. Map task scheduling in MapReduce with data locality: Throughput and heavy-traffic optimality. IEEE/ACM Transactions on Networking, to appear.
[35] L. Gan, A. Wierman, U. Topcu, N. Chen, and S. Low. Real-time
deferrable load control: Handling the uncertainties of renewable generation. ACM e-Energy, 2013.
[36] L. Ying, S. Shakkottai, and A. Reddy. On combining shortest-path and
back-pressure routing over multihop wireless networks. Proceedings of
IEEE INFOCOM, April 2009.
[37] L. Bui, R. Srikant, and A. Stolyar. Novel architectures and algorithms for
delay reduction in back-pressure scheduling and routing. Proceedings
of IEEE INFOCOM Mini-Conference, April 2009.
[38] L. Huang and M. J. Neely. Delay reduction via Lagrange multipliers
in stochastic network optimization. IEEE Trans. on Automatic Control,
56(4):842–857, April 2011.
[39] D. P. Bertsekas, A. Nedic, and A. E. Ozdaglar. Convex Analysis and
Optimization. Boston: Athena Scientific, 2003.
[40] L. Huang and M. J. Neely.
Max-weight achieves the exact [O(1/V ), O(V )] utility-delay tradeoff under Markov dynamics.
arXiv:1008.0200v1, 2010.
[41] L. Huang and M. J. Neely. The optimality of two prices: Maximizing
revenue in a stochastic network. IEEE/ACM Transactions on Networking, 18(2):406–419, April 2010.
[42] T. Lai and H. Robbins. Asymptotically efficient adaptive allocation rules.
Advances in Applied Mathematics, 6, 4-22., 1985.
[43] M. J. Neely. Energy-aware wireless scheduling with near optimal backlog and convergence time tradeoffs. Proceedings of IEEE INFOCOM,
2016.
[44] J. Liu. Achieving low-delay and fast-convergence in stochastic network
optimization: A nesterovian approach. Proceedings of ACM Sigmetrics,
2016.
[45] A. Bifet and R. Gavaldà. Learning from time-changing data with adaptive windowing. SIAM International Conference on Data Mining, 2007.
[46] F. Chung and L. Lu. Concentration inequalities and martingale inequalities - a survey. Internet Math., 3 (2006-2007), 79–127.
[47] W. Hoeffding. Probability inequalities for sums of bounded random
variables. Journal of the American Statistical Association 58 (301):
13-30, 1963.
APPENDIX A - PROOF OF LEMMA 1
(Proof of Lemma 1) We prove the performance of ADE(T_l, d, ε_d) with an argument inspired by [45]. We will make use of the following concentration result.
Theorem 5. [46] Let X_1, ..., X_n be independent random variables with Pr{X_i = 1} = p_i and Pr{X_i = 0} = 1 − p_i. Consider X = Σ_{i=1}^n X_i with expectation E{X} = Σ_{i=1}^n p_i. Then, we have:

Pr{ X ≤ E{X} − m } ≤ e^{−m²/(2E{X})},   (23)
Pr{ X ≥ E{X} + m } ≤ e^{−m²/(2(E{X}+m/3))}. ♦   (24)
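As a quick numerical sanity check of the lower-tail bound (23) (a standalone sketch, not part of the proof; the parameters n, p, m below are arbitrary illustrative values), one can compare the bound against a Monte Carlo estimate for a Binomial sum:

```python
import math
import random

def chernoff_lower_tail_bound(mean, m):
    # Bound (23): Pr{X <= E{X} - m} <= exp(-m^2 / (2 E{X})).
    return math.exp(-m**2 / (2.0 * mean))

def empirical_lower_tail(n, p, m, trials=20000, seed=0):
    # Monte Carlo estimate of Pr{X <= n*p - m} for X ~ Binomial(n, p).
    rng = random.Random(seed)
    mean = n * p
    hits = 0
    for _ in range(trials):
        x = sum(1 for _ in range(n) if rng.random() < p)
        if x <= mean - m:
            hits += 1
    return hits / trials

n, p, m = 500, 0.3, 30
bound = chernoff_lower_tail_bound(n * p, m)
freq = empirical_lower_tail(n, p, m)
# The empirical frequency should not exceed the bound (up to Monte Carlo noise).
print(freq <= bound + 0.01)
```

The bound is loose but valid: here it evaluates to e^{−3} ≈ 0.05 while the true tail probability is an order of magnitude smaller.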
Proof. (Lemma 1) (Part (a)) In this case, it suffices to check condition (i) in ADE. Define

π̃_i^d(t) ≜ (1/d) ( Σ_{τ∈W_w(t)} 1_[S(τ)=s_i] + Σ_{τ=(t+w−d)^+}^{t−1} π_i(τ) ),

i.e., π̃_i^d(t) is defined with the true distributions in W_d(t). Denote ε_1 = (w+1)e_w/d; we see then that ‖π̃^d(t) − π̂^d(t)‖ ≤ ε_1. Thus, for any ε > 0, we have:

Pr{ ‖π̂^d(t) − π̂^m(t)‖_tv ≤ ε }
   ≤ Pr{ ‖π̃^d(t) − π̂^m(t)‖_tv ≤ ε + ε_1 }
   ≤ Pr{ |π̃_i^d(t) − π̂_i^m(t)| ≤ ε + ε_1 }.   (25)

Choose ε = (1/2) max_i |π_{1i} − π_{2i}| − 2ε_1 > 0 and let ε_0 = ε + ε_1.
Fix α ∈ (0, 1) and consider i ∈ arg max_i |π_{1i} − π_{2i}|. We have:

Pr{ |π̃_i^d(t) − π̂_i^m(t)| ≤ ε_0 }
   ≤ Pr{ {|π̃_i^d(t) − π_{1i}| ≥ αε_0} ∪ {|π̂_i^m(t) − π_{2i}| ≥ (1−α)ε_0} }
   ≤ Pr{ |π̃_i^d(t) − π_{1i}| ≥ αε_0 } + Pr{ |π̂_i^m(t) − π_{2i}| ≥ (1−α)ε_0 }.   (26)

Here the first inequality follows because if we have both {|π̃_i^d(t) − π_{1i}| < αε_0} and {|π̂_i^m(t) − π_{2i}| < (1−α)ε_0}, and |π̃_i^d(t) − π̂_i^m(t)| ≤ ε_0, we must have:

|π_{1i} − π_{2i}| ≤ |π̃_i^d(t) − π_{1i}| + |π̂_i^m(t) − π_{2i}| + |π̃_i^d(t) − π̂_i^m(t)| < 2ε_0 < |π_{1i} − π_{2i}|,

which contradicts the fact that i achieves max_i |π_{1i} − π_{2i}|.

Using (26) and the Hoeffding inequality [47], we first have:

Pr{ |π̂_i^m(t) − π_{2i}| ≥ (1−α)ε_0 } ≤ 2 exp(−2((1−α)ε_0)² W_m(t)).   (27)

For the first term in (26), we have:

Pr{ |π̃_i^d(t) − π_{1i}| ≥ αε_0 } ≤ 2 exp(−2(αε_0)² (W_d(t) − w − 1)).   (28)

Equating the above two probabilities and setting the sum equal to δ, we have α = √(W_m(t)/(W_d(t)−w−1)) / (1 + √(W_m(t)/(W_d(t)−w−1))), and

ε_0 = √(ln(4/δ)) · (1 + √((W_d(t)−w−1)/W_m(t))) / √(2(W_d(t)−w−1)).   (29)

In order to detect the different distributions, we can choose ε_d < ε_0, which on the other hand requires that:

ε_d ≤ √( ln(4/δ) · 1/(2(d−w−1)) ) <(*) ε_0   ⇒   d > ln(4/δ) · 2/ε_d² + w + 1.   (30)

Here (*) follows because W_d(t) = d ≤ W_m(t). This shows that whenever W_d(t) = d ≤ W_m(t) and the windows are loaded with non-coherent samples, the error will be detected with probability 1 − δ.
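The window-length requirement in (30) is easy to evaluate numerically. The sketch below assumes one consistent reading of the reconstructed constants in (30) (the factor 2/ε_d² and the additive w + 1); the concrete values of ε_d, δ, and w are illustrative only:

```python
import math

def min_detection_window(eps_d, delta, w):
    # Condition (30): d > 2 * ln(4/delta) / eps_d**2 + w + 1,
    # so the smallest admissible integer window length is the ceiling.
    return math.ceil(math.log(4.0 / delta) * 2.0 / eps_d**2 + w + 1)

# A tighter threshold (smaller eps_d) or a smaller failure probability delta
# both force a longer comparison window d.
print(min_detection_window(0.1, 0.05, w=5))
print(min_detection_window(0.05, 0.05, w=5))
```

This matches the qualitative message of the proof: the detection window grows as 1/ε_d² and only logarithmically in 1/δ.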
(Part (b)) Note that for any time t, the distribution will be declared changed if ‖π̂^d(t) − π̂^m(t)‖_tv > ε_d. Choose ε_d ≥ 2ε_1. Similar to the above, we have:

Pr{ ‖π̂^d(t) − π̂^m(t)‖_tv ≥ ε_d }   (31)
   ≤ Pr{ ‖π̃^d(t) − π̂^m(t)‖_tv ≥ ε_d − ε_1 }
   ≤ Pr{ ‖π̂^d(t) − π‖_tv ≥ αε_d/2 } + Pr{ ‖π̂^m(t) − π‖_tv ≥ (1−α)ε_d/2 }.

Using the same argument as in (26), (27) and (28), we get:

Pr{ ‖π̂^d(t) − π̂^m(t)‖_tv ≥ ε_d } ≤ δ.

This shows that step (i) declares a change with probability at most δ.
Next we show that step (ii) does not declare a distribution change with high probability. To do so, we first use Theorem 5 with m = 2log(T_l)√T_l to have that when W_m(t) ≥ T_l,

Pr{ |π̂_i^m(t) − π_i| > 2log(T_l)/√T_l } ≤ e^{−2log²(T_l)} = T_l^{−2log(T_l)}.

Using the union bound, we get

Pr{ ‖π̂^m(t) − π‖ > 2M log(T_l)/√T_l } ≤ M T_l^{−2log(T_l)}.   (32)

Thus, part (b) follows from the union bound over k.
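The two-window comparison that step (i) performs can be sketched as follows (an illustrative reimplementation in the spirit of ADE, not the paper's exact algorithm; the window contents and the threshold ε_d below are toy values):

```python
from collections import Counter

def empirical_dist(samples, states):
    # Empirical distribution of a sample window over a fixed state set.
    c = Counter(samples)
    n = max(len(samples), 1)
    return {s: c[s] / n for s in states}

def tv_distance(p, q):
    # Total variation distance between two distributions on the same support.
    return 0.5 * sum(abs(p[s] - q[s]) for s in p)

def declare_change(window_d, window_m, states, eps_d):
    # Step (i) of the detector: compare the short window W_d against the
    # long window W_m and declare a change when their TV distance exceeds eps_d.
    pd = empirical_dist(window_d, states)
    pm = empirical_dist(window_m, states)
    return tv_distance(pd, pm) > eps_d

states = [0, 1]
# Long window drawn from pi_1 = (0.9, 0.1); short window from pi_2 = (0.2, 0.8).
window_m = [0] * 90 + [1] * 10
window_d = [0] * 4 + [1] * 16
print(declare_change(window_d, window_m, states, eps_d=0.25))  # distinct distributions
print(declare_change(window_m, window_m, states, eps_d=0.25))  # identical windows
```

Part (a) above says the first call should fire once the gap exceeds the threshold, while part (b) says the second kind of call fires only with small probability.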
APPENDIX B - PROOF OF THEOREM 1

(Proof of Theorem 1) Here we prove the utility-delay performance of PLC for a stationary system. We sometimes omit the π when it is clear. For our analysis, define:

g_{s_i}(γ) = inf_{x^{(s_i)} ∈ X_i} [ V f(s_i, x^{(s_i)}) + Σ_j γ_j ( A_j(s_i, x^{(s_i)}) − μ_j(s_i, x^{(s_i)}) ) ],   (33)

to be the dual function when there is only a single state s_i. It is clear from equations (9) and (33) that:

g(γ) = Σ_i π_i g_{s_i}(γ).   (34)

We will also make use of the following results.

Lemma 2. [38] Suppose the conditions in Theorem 1 hold. Then, under PLC with Q(t) = q(t), there exist constants G, η_1 = Θ(1), i.e., both independent of V, such that whenever ‖q(t) − γ*‖ > G,

E_π{ ‖q(t+1) − γ*‖ | q(t) } ≤ ‖q(t) − γ*‖ − η_1. ♦   (35)

Lemma 3. [16] If ‖π^a(t) − π‖_tv ≤ ε and (6) holds for π^a(t), then γ*(t) satisfies:

‖γ*(t) − γ*‖ ≤ b_0 εV,   (36)

where b_0 = Θ(1). ♦

Lemma 4. [43] Suppose Z(t) is a real-valued random process with initial value z_0 that satisfies:
1) |Z(t+1) − Z(t)| ≤ Z_max, where Z_max > 0;
2) E{ Z(t+1) − Z(t) | Z(t) } ≤ z(t), where z(t) = Z_max when Z(t) < Z_u, and z(t) = −η with 0 ≤ η ≤ Z_max when Z(t) ≥ Z_u, for some constant Z_u.
Then, there exist constants r_z = Θ(1), 0 < ρ_z < 1, and D = (e^{r_z Z_max} − ρ_z) e^{r_z Z_u} / (1 − ρ_z), such that for every slot t,

E{ e^{r_z Z(t)} } ≤ D + (e^{r_z z_0} − D) ρ_z^t. ♦   (37)

Now we prove Theorem 1.

Proof. (Theorem 1) (Part (a) - Utility) Define a Lyapunov function L(t) ≜ (1/2) Σ_j q_j(t)². Then, define the one-slot Lyapunov drift ∆(t) ≜ E{ L(t+1) − L(t) | q(t) }. Using the queueing dynamic equation (3), we have:

∆(t) ≤ B − Σ_j q_j(t) E{ μ_j(t) − A_j(t) | q(t) }.   (38)

Here B ≜ r δ_max²/2, and the expectation is taken over π and the potential randomness in action selection. Adding to both sides the term V E{ f(t) | q(t) }, we first obtain:

∆(t) + V E{ f(t) | q(t) } ≤ B + Σ_j E{ V f(t) − q_j(t)[μ_j(t) − A_j(t)] | q(t) }.   (39)

Now add to both sides the term ∆_1(t) ≜ Σ_j E{ (γ_j*(t) − θ)^+ [μ_j(t) − A_j(t)] | q(t) }; we get:

∆(t) + V E{ f(t) | q(t) } + ∆_1(t)   (40)
   ≤ B + Σ_j E{ V f(t) + Q_j(t)[μ_j(t) − A_j(t)] | q(t) }
   = B + g(Q(t)) ≤ B + f_av^π.   (41)

Here the equality follows from the definition of g(γ) and (34), and the last inequality uses g(Q(t)) ≤ g_π* = f_av^π. Taking an expectation over q(t), carrying out a telescoping sum from t = 0 to t = T − 1, and dividing both sides by V T, we obtain:

(1/T) Σ_{t=0}^{T−1} E{ f(t) } ≤ f_av^π + B/V − (1/(V T)) Σ_{t=0}^{T−1} E{ ∆_1(t) }.   (42)

To prove the utility performance, it remains to show that the last term is O(1) in the limit, i.e.,

lim_{T→∞} (1/T) Σ_{t=0}^{T−1} Σ_j E{ (γ_j*(t) − θ)^+ [μ_j(t) − A_j(t)] } = O(1).   (43)

To prove (43), consider the system evolution over the timeline. From the detection algorithm, we see that the timeline is divided into intervals separated by reset points. Moreover, since W_m(t) and W_d(t) are restarted, and q(t) is reset at reset points, these intervals form renewal cycles with initial backlog q(t) = 0 (see Fig. 6).

Fig. 6. Timeline divided into intervals.

Label the cycles by {C_k, k = 0, 1, ...}. We thus have:

lim_{T→∞} (1/T) Σ_{t=0}^{T−1} Σ_j E{ (γ_j*(t) − θ)^+ [μ_j(t) − A_j(t)] }   (44)
   = E{ Σ_{t∈C_k} Σ_j (γ_j*(t) − θ)^+ [μ_j(t) − A_j(t)] } / E{ |C_k| } ≜ E{cost}/E{length}.

Below, we omit the index k and use d_m ≜ max_t { b_sd(t) − b_m(t) } to denote the size of C. Also, let c_0 be such that e_w = Θ(V^{−c_0/2}), and write T_l = V^{c_1} where c_1 ≜ max(c, c_0). Since e_w > 0, we have T_l < ∞.
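The drift condition of Lemma 4, which the cycle analysis below leans on, can be probed with a small simulation (an illustrative random walk with assumed parameters Z_u, η, Z_max; it is not the actual queue process):

```python
import random

def simulate_drift_walk(z_u=20.0, eta=0.5, z_max=2.0, steps=20000, seed=0):
    # A nonnegative random walk satisfying Lemma 4's conditions: increments
    # bounded by z_max, and negative expected drift (about -eta) once Z >= Z_u.
    rng = random.Random(seed)
    z = 0.0
    above = 0
    for _ in range(steps):
        if z >= z_u:
            step = rng.uniform(-z_max, z_max) - eta   # mean roughly -eta
            step = max(-z_max, min(z_max, step))      # keep increments bounded
        else:
            step = rng.uniform(-z_max, z_max)         # zero-mean below Z_u
        z = max(0.0, z + step)
        above += (z >= 2 * z_u)
    return above / steps

# Lemma 4 implies E{exp(r_z Z(t))} stays bounded, so long excursions far
# above Z_u should be rare: the fraction of time spent above 2*Z_u is tiny.
print(simulate_drift_walk() < 0.05)
```

The exponential tail bound (37) is exactly what makes the "fraction of time far above Z_u" vanish, which is how the delay argument later confines q(t) near θ̂.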
We first show that the probability of having a small d_m (w.r.t. T_l) is small. Denote by E_1 the event that d_m ≥ V^{2+c_1} and W_m(t) = T_l at t = T_l + d − w slots from the beginning of C, i.e., step (i) of ADE does not declare any change before W_m(t) = T_l occurs. Using Lemma 1, T_l ≥ V^c, and the fact that for a large V, d = log(V)³/ε_d² ≥ (2/ε_d²) ln(4/δ) + w + 1 for δ = V^{−log(V)}, we have that:

Pr{E_1^c} ≤ δT_l + V^{2+c_1} · (δ + (w+1)M T_l^{−2log(T_l)}) ≤ V^{−3}.   (45)

Here E_1^c denotes the complementary event of E_1. Therefore, with probability at least 1 − V^{−3}, d_m ≥ V^{2+c_1}, which implies that:

E{|C|} ≥ V^{2+c_1}/2.   (46)
Conditioning on E1 , we see that PLC will compute the empirical multiplier with statistics in Wm (t) after t = Tl + d − w
slots, and use it until a new change is declared (see Fig. 6).
Denote this period of time, i.e., after the multiplier is computed
until the cycle ends, by C 0 and its length by d0m (see Fig. 6).
We have d0m = dm − O(V c1 ) − w − 1 = Ω(V 2+c1 ) time (the
first V c1 slots are for learning and the last O(w + 1) slots are
not contained in both Wm (t) and Wd (t) due to ADE).
Denote another event E_2 ≜ { ‖π̂^m(t) − π‖ ≤ 4 log(V)/V^{c_1/2} }, where t is when W_m(t) = T_l. That is, the distribution π̂^m(t) is close to the true distribution for t ∈ C′ (note that π̂^m(t) remains constant during C′). Using Theorem 5, we have that:

Pr{E_2} ≥ 1 − M e^{−4 log²(V)}.   (47)

Thus,

Pr{E_2 | E_1} = Pr{E_2 ∩ E_1}/Pr{E_1} ≥ 1 − M e^{−4 log²(V)} − V^{−3} ≥ 1 − 2V^{−3}.   (48)

With E_1 and E_2, we now bound E{cost}, where cost ≜ Σ_{t∈C} Σ_j (γ_j*(t) − θ)^+ [μ_j(t) − A_j(t)]. First, when E_1^c takes place, we either have d_m ≤ V^{2+c_1}, denoted by E_1a^c, or d_m ≥ V^{2+c_1} but step (i) of ADE declares changes before W_m(t) = T_l, denoted by E_1b^c. Given E_1a^c, the cost is no more than V log(V) δ_max V^{2+c_1}. For E_1b^c, we first see that:

Pr{E_1b^c} = Pr{ d_m ≥ V^{2+c_1} and at least one change declared in the first T_l + d − w slots } ≤ δ(T_l + d − w) ≤ V^{−log(V)/2}.   (49)

Also, given E_1b^c, if we denote the first time a change is declared by T_1b, we have:

E{ |C| | E_1b^c } ≤ T_1b + E{ |C| | d_m ≥ V^{2+c_1} } − T_1b ≤ T_l + d + 2E{|C|} ≤ 3E{|C|}.   (50)

The first step follows because after the first declaration, the requirement for any additional declaration is removed, and the second step follows because T_1b ≤ T_l + d − w and E{ |C| | d_m ≥ V^{2+c_1} } − T_1b ≤ 2E{|C|}. Thus,

E{ cost | E_1^c } ≤ V log(V) δ_max · ( 3E{|C|}/V^{c_2 log(V)} + V^{2+c_1}/V^{log(V)/2} ).   (51)

Here we have used Lemma 1 in [38] and the learning step in PLC that ensures γ*(t) = O(V log(V)). On the other hand,

E{ cost | E_1, E_2^c } ≤ V log(V) δ_max E{ |C| | E_1, E_2^c }.   (52)

Let us now try to bound E{ |C| | E_1, E_2^c }. Define E_2a^c = {y ∈ (σ, 2σ]} and E_2b^c = {y > 2σ}. We have:

Pr{ ‖π^m − π^d(t)‖ > ε_d | E_1, E_2^c }   (53)
   = Pr{ ‖π^m − π^d(t)‖ > ε_d | E_1, E_2a^c } Pr{ E_2a^c | E_1, E_2^c } + Pr{ ‖π^m − π^d(t)‖ > ε_d | E_1, E_2b^c } Pr{ E_2b^c | E_1 }.

Now we try to relate Pr{ ‖π^m − π^d(t)‖ > ε_d | E_1, E_2a^c } to Pr{ ‖π^m − π^d(t)‖ > ε_d | E_1, E_2 }. Consider a π^d(t) such that ‖π^m − π^d(t)‖ > ε_d given E_1, E_2. We note that there exist i and j such that π_i^d(t) ≤ π_i and π_j^d(t) ≥ π_j. Then, we can always change π^d(t) to π̃^d(t) by having one more sample for j and one less sample for i (this can be ensured with high probability since d = O(log³(V))). Since σ = O(V^{−c_1/2}) and ε_d = O(1/log³(V)), we will have ‖π^m − π̃^d(t)‖ > ε_d given E_1, E_2a^c. Therefore,

Pr{ ‖π^m − π^d(t)‖ > ε_d | E_1, E_2a^c } ≥ P_0 ≜ c_0 Pr{ ‖π^m − π^d(t)‖ > ε_d | E_1, E_2 }.

Here c_0 = min_i π_i / max_j π_j. This shows that the probability of having a change declared under E_1, E_2a^c is more than a constant factor of that under E_1, E_2. As a result, using (53) and the fact that Pr{ E_2a^c | E_1, E_2^c } ≥ 1 − O(V^{−3}),

Pr{ ‖π^m − π^d(t)‖ > ε_d | E_1, E_2^c } ≥ P_1,

where P_1 = c_1 P_0 and c_1 ≥ c_0 (1 − O(V^{−3})). Thus,

E{ |C| | E_1, E_2^c } ≤ d/P_1.   (54)

This is obtained by considering only testing for changes at multiples of d slots. On the other hand, it can be shown that E{ |C| | E_1, E_2 } ≥ Θ(1/P_1). This is so since, conditioning on E_1, E_2, the samples in W_d(t) evolve according to a Markov chain, with each state being a sequence of d samples. Moreover, the total mass of the set of states resulting in ‖π^m − π^d(t)‖ > ε_d is P_0/c_0, and after V^{2+c_1} time the first W_d(t) is drawn with the steady-state probability (due to S(t) being i.i.d.). Thus, the Markov chain is in steady state from then on, showing that the time it takes to hit a violating state is Θ(1/P_1). Combining this with (54), we conclude that:

E{ |C| | E_1, E_2^c } ≤ d E{ |C| | E_1, E_2 } ≤ 2d E{|C|}.   (55)

The last inequality follows since Pr{E_1, E_2} ≥ 1 − 2V^{−3}.

Now consider the event E_1 ∩ E_2. Using the fact that T_l = Θ(V^{c_1}), Pr{E_2 ∩ E_1} ≥ 1 − O(V^{−3}), and using almost verbatim arguments as in the proofs of Lemmas 8 and 9 in [17], it can be shown that:10

E{ Σ_{t∈C′} [μ_j(t) − A_j(t)] | E_1, E_2 } ≤ E{ q_j^s − q_j^e | E_1, E_2 } + δ_max (1 + b_1 E{ |C| | E_1, E_2 }/V log V),   (56)

where b_1 = Θ(1), and q_j^s and q_j^e denote the beginning and ending sizes of queue j during C′, respectively.

We first bound E{q_j^s}. Conditioning on E_1, we see that there will be T_l + d − w time until W_m(t) = T_l. Thus, E{q_j^s} ≤ δ_max b_2 (V^{c_1} + d − w) for some b_2 = Θ(1).

10 The fact that the event holds with probability almost 1 enables an analysis similar to that without conditioning.
Combining (51), (52), and (56), we obtain:

E{cost} ≤ V log(V) δ_max · ( 3E{|C|}/V^{c_2 log(V)} + V^{2+c_1}/V^{log(V)/2} )
   + V log(V) δ_max E{ |C| | E_1, E_2^c } · M e^{−4 log²(V)}
   + (δ_max b_2 (V^{c_1} + d − w) + w + 1) δ_max V log(V)
   + V log(V) δ_max (1 + b_1 E{ |C| | E_1, E_2 }/V log V).   (57)

The term (w + 1) δ_max V log(V) accounts for the last w + 1 slots after a change detection. Combining (57) with (44), (46), and (55), we obtain (43).
(Part (b) - Delay) From the above, we see that the event E_1 ∩ E_2 happens with probability at least 1 − O(1/V³). Hence, we only need to show that most packets that arrive during the C′ intervals experience small delay, conditioning on E_1 ∩ E_2. Denote by t_s and t_e the beginning and ending slots of C′. Using (48) and Lemma 3, we get that with probability at least 1 − 2V^{−3},

‖γ*(t) − γ*‖ ≤ d_γ ≜ 4 b_0 V^{1−c_1/2} log(V).   (58)

Define

θ̂ ≜ γ* − (γ*(t) − θ)^+;   (59)

we see from Lemma 2 that whenever ‖q(t) − θ̂‖ > G, which is equivalent to ‖Q(t) − γ*‖ > G,

E{ ‖q(t+1) − θ̂‖ | q(t) } ≤ ‖q(t) − θ̂‖ − η,

for the same G = Θ(1) and η = Θ(1) < η_1 in Lemma 2.11 Using (58) and θ in (17), we see that θ̂ = Θ(d_γ log(V) + log(V)²). Therefore, using Theorem 4 in [16], if we assume that C′ never ends,

E{ T_G(q(t)) } ≤ b_3 d_q/η,   (60)

where b_3 = Θ(1), d_q = ‖θ̂ − q(t_s)‖, and T_G(q(t)) ≜ inf{ t − t_s : ‖q(t) − θ̂‖ ≤ G }. Note that this is after W_m(t) = T_l in PLC, which happens after T_c = d − w + T_l slots from the beginning of the interval. By the Markov inequality,

Pr{ T_G(q(t)) + T_c > (b_3 d_q/η + d − w + T_l) V } ≤ 1/V.   (61)

Denote E_3 ≜ { T_G(q(t)) + T_c(t) ≤ (b_3 d_q + d − w + T_l) V } and let t* be the first time after t_s that Y(t) ≜ ‖q(t) − θ̂‖ ≤ G. Following an argument almost identical to the proof of Theorem 1 in [38], we obtain that:

(νη/2) Σ_{t=t*}^{t_e} E{ e^{νY(t)} } ≤ (t_e − t*) e^{2ν√r δ_max} + e^{νY(t*)},   (62)

where ν ≜ η/(δ_max² + δ_max η/3) = Θ(1). Define b_4 ≜ 2 e^{2ν√r δ_max}/(νη) = Θ(1) and b_5 ≜ e^{νY(t*)} ≤ e^{νG} = Θ(1), and choose m = log(V)². We have from (62) that:

(1/(t_e − t_s)) Σ_{t=t_s}^{t_e} Pr{ Y(t) > G + m }   (63)
   ≤ (1/(t_e − t_s)) ( Σ_{t=t*}^{t_e} Pr{ Y(t) > G + m } + (t* − 1 − t_s) )
   ≤ (b_4 + b_5 (t_e − t*)) V^{−log(V)} + (t* − t_s)/(t_e − t_s)
   = O( ((b_3 d_q + d − w + T_l) V + 1)/V^{2+c_1} ) = O(1/V).

Thus, the above implies that, given the joint event E_1 ∩ E_2, which happens with probability 1 − O(1/V³), the fraction of packets that enter and depart from each q_j(t) when ‖q(t) − θ̂‖ ≤ G is given by (1 − O(1/V))(1 − O(1/V)), i.e., 1 − O(1/V). This means they enter and depart when q_j(t) ∈ [θ̂ − G − log(V)², θ̂ + G + log(V)²] (due to LIFO), which implies that their average delay in the queue is O(log(V)²).

11 This is due to conditioning on E_1 ∩ E_2.
(Part (c) - Dropping) First, conditioning on E_1a^c, which happens with probability V^{−log(V)/2}, we see that we drop at most O(V^{2+c_1}) packets in this case.

Now consider when E_1 takes place, and denote as above by t_s and t_e the starting and ending timeslots of a cycle. In this case, from the rules of ADE, we see that rule (ii) is inactive, since if it is satisfied at time T_l, it remains so because π̂^m(t) remains unchanged until the cycle ends. Hence, the only case when an interval ends is due to a violation of rule (i). Let us suppose the interval ends because at some time t_0, we have ‖π̂^m(t_0) − π̂^d(t_0)‖ > ε_d. We know then that PLC drops all packets at time t_0 + w + 1, i.e., q(t_0 + w + 1).

We now bound E{ q(t_0 + w + 1) }. To do so, consider the time t* = t_0 − 2d. We see then that q(t*) and all queue sizes before t* are independent of π̂^d(t_0). Also, Σ_j q_j(t_0 + w + 1) ≤ Σ_j q_j(t*) + r(2d + w + 1) δ_max.

Consider the time interval from when W_m(t) = T_l until t*, and consider two cases: (i) e_w = Ω(V^{−c/2}) and (ii) e_w = O(V^{−c/2}). In the first case, we see that T_l = V^c. Thus, q_j(t_s + T_l) ≤ δ_max V^c.

In the second case, since e_w = O(V^{−c/2}), T_l = e_w^{−2}. We have from Lemma 3 that before time T_l, the estimated multiplier satisfies ‖γ*(t) − γ*‖ ≤ V e_w = O(V^{1−c/2}). As a result, using the definition of θ̂ in (59) and denoting Z(t) = ‖(q(t) − θ̂)^+‖, we see that whenever Z(t) ≥ G, E{ Z(t+1) − Z(t) | Z(t) } ≤ −η. It can also be checked that the other conditions in Lemma 4 are satisfied by Z(t). Moreover, q(t_s) = 0 and Z(0) = 0. Thus,

E{ Z(T_l) } ≤ G + √r δ_max + O(1).   (64)

Thus, E{ q(t_s + T_l) } = O(V^{1−c_1/2}). Combining the two cases, we have E{ q(t_s + T_l) } = O(V^{1−c_1/2} + V^c) = O(V).

After t_s + T_l, the distribution π̂^m(t) is used to compute the multiplier. Since T_l = max(V^c, e_w^{−2}), we see that the argument above similarly holds. Thus, using Lemma 4, we see that E{ q(t*) } = O(V), which implies E{ q(t_0 + w + 1) } = O(V + d). Therefore, packets will be dropped no more often than every V^{2+c_1} slots, and at every such time we drop no more than O(V) packets on average.

Finally, consider given E_1b^c. Using (46) and (50), we note that conditioning on E_1b^c, the cycle lasts no more than 3E{|C|} on average, which means that the number of packets dropped is at most O(E{|C|}) every cycle on average. Moreover, using (49), we see that this happens with probability O(V^{−3}).

The result follows by combining the above cases.
APPENDIX C - PROOF OF THEOREM 2
(Proof of Theorem 2) We first have the following lemma to
show that if each dk ≥ 4d, then ADE keeps only o(d) samples
(timeslots) from the previous distribution in Wm (t) after
change detection. This step is important, as if Wm (t) contains
too many samples from a previous distribution interval, the
distribution estimation π̂ m (t) can be inaccurate and lead to
a high false-negative rate, which in turn affects performance
during Ik . The proof of the lemma is given at the end of this
section.
Lemma 5. Under the conditions of Theorem 2, with probability 1 − O(V^{−3log(V)/4}), only o(d) samples from I_{k−1} remain in W_m(t) ∪ W_d(t) for t ≥ t_k + d. ♦
We now prove Theorem 2.
Proof. (Theorem 2) We first have from Lemma 1 that with probability at least 1 − V^{−3log(V)/4} (taking δ = V^{−3log(V)/4}), a distribution change will be detected before t_k + d − w. Denote this event by E_4.

Fig. 7. Intervals in a non-stationary system.
(Part (a) - Utility) Using Lemma 5, we see that o(d) samples will remain in W_m(t). This implies that when V is large and W_m(t) = d, with probability 1 − O(V^{−log(V)/2}),

|π̂_i^m(t) − π_{ki}| ≤ ε_d/8, ∀ i,   (65)

where π̂^m(t) is the distribution in window W_m(t) (which can contain timeslots from the previous interval). This shows that the empirical distribution of W_m(t) is close to the true distribution even though it may contain samples from I_{k−1}. Thus, as W_m(t) increases, π̂^m(t) will only become closer to π^k, so that (65) holds whenever W_d(t) ⊂ I_k. Denote by E_5 the event in (65). Now, using an argument similar to the proof of Lemma 1, we can show that:

Pr{ ‖π̂^d(t) − π̂^m(t)‖_tv ≥ ε_d } ≤ V^{−log(V)/3}.

Hence, for each cycle C ⊂ I_k, if we denote by E_6 the event that ADE does not declare any distribution change in steps (i) and (ii) for V^{1+a} T_l log(V) slots, and E_2 before equation (47) holds, we see that

Pr{E_6} ≥ 1 − V^{−2}.   (66)

This implies that I_k most likely only contains one cycle C. Therefore, conditioning on E_4 ∩ E_5 ∩ E_6, which happens with probability 1 − O(V^{−2}) and implies that for cycle C′, q(t_s) = Θ(D_k + T_l + d − w), we have:

E{cost} ≤ r(D_k + T_l + d − w) b_2 δ_max V log(V) + V log(V) δ_max (1 + b_1 E{|C|}/V log V).

Applying the argument in the proof of Theorem 1, we see that (1/d_k) Σ_{t=0}^{d_k−1} E{∆_1(t)} = O(D_k log(V)/(T_l V^a)). Hence, the result follows.
(Part (b) - Queue) From the above, we see that at time t_k, q(t_k) = O(D_k). We also know that the current cycle C will start no later than t_k + d − w with probability 1 − O(V^{−3log(V)/4}), in which case q(t_s) = O(D_k + d − w). Since the system is polyhedral with ρ, using an argument similar to the proof for Part (c) of Theorem 1, if we define θ̃ = Θ((min(V^{1−c/2}, V e_w) + 1) log²(V) + D_k + d − w) and Z(t) = ‖(q(t) − θ̃)^+‖, then throughout t ∈ [t_s, t_{k+1} − 1],

E{ Z(t) } ≤ G + √r δ_max + O(1).   (67)

Therefore, Part (b) follows.
Here we provide the proof of Lemma 5.

Proof. (Lemma 5) Consider time t = t_k − w. We have the following cases.

(i) W_d(t) = d and W_m(t) < d. Since d_{k−1} ≥ 4d, we see that the change point t_k will be detected with probability at least 1 − δ at a time t′ ≤ t + d, because W_d(t′) will contain samples from π^k while W_m(t′) will contain samples from π^{k−1} (note that although this is conditioned on W_d(t) = d and W_m(t) < d, since at this point no statistical comparison will be assumed, it is independent of the realizations in the two windows). Moreover, all samples from I_{k−1} will be removed and will not remain in W_m(t) and W_d(t), while at most w + 1 samples from I_k will be discarded.

(ii) W_d(t) = d and W_m(t) ≥ d. In this case, if a change is declared, then we turn to case (iii). Otherwise, since the samples in W_m(t) are drawn from π^{k−1}, we have:

Pr{ ‖π̂^m(t) − π^{k−1}‖ ≤ ε_d/2 } ≥ 1 − V^{−3log(V)/4}.   (68)

Now suppose no change is detected until time t + d. Then W_m(t + d) ≥ d. Denote E_6 ≜ { ‖π̂^m(t) − π^{k−1}‖ ≤ ε_d/2 }. Conditioning on E_6 and using (68), we have:

Pr{ ‖π̂^m(t + d) − π^{k−1}‖ ≤ ε_d/2 | E_6 } ≥ 1 − 2V^{−3log(V)/4}.   (69)

The inequality follows since Pr{E_6} ≥ 1 − V^{−3log(V)/4}. Now W_d(t + d) contains only samples from π^k, in which case we similarly have:

Pr{ ‖π̂^d(t + d) − π^k‖ ≤ ε_d/2 } ≥ 1 − V^{−3log(V)/4}.   (70)

Since the events in (69) and (70) are independent, we conclude that with probability 1 − 3V^{−3log(V)/4}, a change will be declared before t_k, and all samples from I_{k−1} will be removed and will not remain in W_m(t) ∪ W_d(t).

(iii) W_d(t) < d. We argue that with high probability, at most o(d) samples can remain at time t_k + 2d − W_d(t). First, note that W_d(t) < d only occurs when a detection has been declared at a time t + w − d ≤ t′ ≤ t. Thus, if t + w − t′ = o(d), then we are done. Otherwise suppose t + w − t′ = αd for α = Θ(1). If these samples are not removed, then at time t′ + 2d, W_m(t′ + 2d) contains samples with mixed distribution π′ = απ^{k−1} + (1 − α)π^k and W_d(t′ + 2d) contains samples with distribution π^k ≠ π′. Similar to case (i), the condition W_d(t) < d is independent of the state realizations in the two windows. Using Lemma 1 (it can be checked that the conditions in the lemma are satisfied), we see that this will be detected by ADE with probability 1 − δ for a large V.

Combining all three cases completes the proof.
APPENDIX D - PROOF OF THEOREM 3

(Proof of Theorem 3) We prove Theorem 3 here.

Proof. (Part (a) - Stationary) The results follow from the fact that when e_w = 0, PLC is equivalent to OLAC in [16] with perfect statistics. Hence the results follow from Theorems 1 and 2 in [16].

(Part (b) - Non-Stationary) We first see that at time t, ADE detects a distribution change in time t + w through step (ii) with probability 1. Then, after time t_k + d − w, π^a(t) = π^k, and we see that whenever Z(t) ≜ ‖q(t) − θ‖ > G for θ = 2log²(V) and G = Θ(1),

E{ Z(t + 1) | q(t) } ≤ Z(t) − η.   (71)

Denote b_6 = (1/r_z) log( (e^{r_z √r δ_max} − ρ_z)/(1 − ρ_z) ). We want to show via induction that for all k,

E{ Σ_j q_j(t_k) } ≤ q_th ≜ 2r log²(V) + b_6 + 2G + d r δ_max.   (72)

First, it holds for time zero. Suppose it holds for interval I_k. We now show that it also holds for interval k + 1.

To do so, first we see that during time [t_k, t_k + d − w], there can be an increment of q_j(t), since π^a(t) during this interval is a mixed version of π^{k−1} and π^k. Thus,

E{ Σ_j q_j(t_k + d) } ≤ q'_th ≜ q_th + d r δ_max.   (73)

Using Lemma 4, we have:

E{ e^{r_z Z(t_{k+1} − d)} } ≤ (e^{r_z √r δ_max} − ρ_z)/(1 − ρ_z) · e^{r_z G} + (e^{r_z q'_th} − b_6 e^{r_z G}) ρ_z^{d_k − 2d}.

Using the definition of q_th and the fact that d_k ≥ d log²(V), we have that for a large V, (e^{r_z q'_th} − b_6 e^{r_z G}) ρ_z^{d_k − 2d} ≤ G. Thus,

E{ Z(t_{k+1} − d) } ≤ b_6 + 2G,   (74)

which implies E{ Σ_j q_j(t_{k+1} − d) } ≤ 2r log²(V) + b_6 + 2G. It thus follows that E{ Σ_j q_j(t_{k+1}) } ≤ q_th ≤ b_7 log⁴(V) for some b_7 = Θ(1).

Having established this result, using an argument similar to that in the proof of Theorem 2, we have:

E{cost} ≤ b_7 log⁴(V) · V log(V) + V log(V) δ_max (1 + b_1 E{|C|}/V log V).

Using d_k ≥ V^{1+a}, we see that Part (b) follows.

APPENDIX E - PROOF OF THEOREM 4

(Proof of Theorem 4) Here we prove the convergence results. We sometimes drop the subscript k when it is clear.

Proof. (Theorem 4) First, when e_w = 0, we see that for any interval I_k, for all time t ≥ t_k + d, π^a(t) = π^k, and γ*(t) = γ* − θ. Using Lemma 5 in [16] and the fact that d = O(log²(V)), we have:

E{T_G} = E{ E{T_G | q(t_k)} } =(*) E{ Θ(‖q(t_k) − θ‖) } =(**) Θ(log⁴(V)).

Here (*) follows from Lemma 5 in [16] and (**) follows from (72).

Consider the other case, e_w > 0. Using Lemma 5, we see that with probability at least 1 − V^{−3}, PLC detects the distribution change before time t_k + d. Recall the event E_1 that ADE does not declare a change in the first V^{2+c_1} slots from the proof of Theorem 1, where c_1 is such that T_l = V^{c_1}. Note that this implies {d_m ≥ V^{2+c_1}}. From (45), we know that:

Pr{E_1} ≥ 1 − V^{−3}.   (75)

Conditioning on E_1, the time it takes to achieve ‖Q(t) − γ*‖ ≤ G is no more than the sum of (i) the time it takes to reach W_m(t) = T_l, and (ii) the time it takes to go from the estimated multiplier γ*(t) − θ to γ*. Denote E_7(t) = { ‖π^m(t) − π‖_tv ≤ 2M log(T_l) T_l^{−1/2} }. When W_m(t) = T_l, we have

Pr{E_7(t)} ≥ 1 − O(M T_l^{−2log(T_l)}),   (76)

in which case ‖γ*(t) − γ*‖ = Θ(V log(V)/√T_l). As in the proof of Theorem 2, we see that when W_m(t) = T_l, q(t) = O(D_k + T_l + d), which implies that ‖Q(t) − γ*‖ = Θ((1 + V/√T_l) log²(V) + T_l + D_k + d). Using Lemma 5 in [16] again, we see that if ADE never declares a change,

E{T_G} = O(θ + T_l + D_k + d).   (77)

Using the Markov inequality, we see that:

Pr{ T_G ≥ V^{2+c_1} } ≤ O(V^{−1−c_1} + D_k V^{−2−c_1}).   (78)

Thus, with probability 1 − O(V^{−1−c_1} + D_k V^{−2−c_1}), convergence occurs before V^{2+c_1}. This proves (20).

To prove (21), define G_1 = Θ(D_k + 2log²(V)(1 + V e_w)). Then, we see from Lemma 5 that with probability 1 − O(V^{−3log(V)/4}), the distribution change will be detected at some t_0 ≤ t_k + d. At that time, we have ‖γ*(t) − γ*‖ = O(V e_w). Combining this with the fact that q(t_0) = O(D_k + d), we see that (21) follows. This completes the proof.
Non-negative submodular stochastic probing via stochastic
contention resolution schemes
Marek Adamczyk∗1
1 Department of Computer, Control, and Management Engineering, Sapienza University of Rome, Italy, [email protected].
September 4, 2015
arXiv:1508.07771v3 [] 3 Sep 2015
Abstract
In a stochastic probing problem we are given a universe E, where each element e ∈ E is
active independently with probability pe ∈ [0, 1], and only a probe of e can tell us whether it
is active or not. On this universe we execute a process that one by one probes elements —
if a probed element is active, then we have to include it in the solution, which we gradually
construct. Throughout the process we need to obey inner constraints on the set of elements
taken into the solution, and outer constraints on the set of all probed elements. The objective
is to maximize a function of successfully probed elements.
This abstract model was presented by Gupta and Nagarajan (IPCO’13), and provides
a unified view of a number of problems. Adamczyk, Sviridenko, Ward (STACS'14) gave a
better approximation for matroid environments and linear objectives. At the same time,
this method was easily extendable to settings where the objective function is monotone
submodular. However, the case of a non-negative submodular function could not be handled
by previous techniques.
In this paper we address this problem, and our results are twofold. First, we adapt
the notion of contention resolution schemes of Chekuri, Vondrák, Zenklusen (SICOMP’14)
to show that we can optimize non-negative submodular functions in this setting with a
constant factor loss with respect to the deterministic setting. Second, we show a new contention resolution scheme for transversal matroids, which yields better approximations in the
stochastic probing setting than the previously known tools. The rounding procedure underlying the scheme can be of independent interest — Bansal, Gupta, Li, Mestre, Nagarajan,
Rudra (Algorithmica’12) gave two seemingly different algorithms for stochastic matching
and stochastic k-set packing problems with two different analyses, but we show that our
single technique can be used to analyze both their algorithms.
∗ Supported by the ERC StG project PAAl no. 259515.

1 Introduction
Stochastic variants of optimization problems were considered already in 1950 [5, 10], but only
in recent years a significant attention was brought to approximation algorithms for stochastic
variants of combinatorial problems. In this paper we consider adaptive stochastic optimization
problems in the framework of Dean et al. [12] who presented a stochastic knapsack problem.
Since the work of Dean et al. a number of problems in this framework were introduced [9, 16,
17, 18, 2, 19, 11]. Gupta and Nagarajan [20] presented an abstract framework for a subclass
of adaptive stochastic problems giving a unified view for stochastic matching [9] and sequential
posted pricing [7]. Adamczyk et al. [1] generalized the framework by also considering monotone
submodular functions in the objective. In this paper we generalize the framework even further
by showing that also maximizing a non-negative submodular function can be considered in the
probing model. On the way we develop a randomized procedure for transversal matroids which
can be used to improve approximation for the k-set packing problem [4].
This paper enhances the iterative randomized rounding for points from matroid polytopes
that was presented in [1]. The analysis from [1] does not easily carry over when the objective
submodular function is non-monotone. To handle non-monotone objectives we make use
of contention resolution schemes introduced by Chekuri et al. [8]. Contention resolution schemes
in the context of stochastic probing already were used by Gupta and Nagarajan [20]. Recently,
Feldman et al. [14] presented online version of contention resolution schemes which, on top of
applications for online settings, yield good approximations for stochastic probing problem for
a broader set of constraints than before — most notably, for inner knapsack constraints and
deadlines.
Our paper fills the gaps between, and merges results from, essentially four different papers [1,
8, 20, 13]. That is why this paper comes with diverse contributions: we improve
the bound on the measured greedy algorithm of Feldman et al. [13]; adjust contention resolution
schemes to the stochastic probing setting in a way that makes submodular optimization possible;
use the iterative randomized rounding technique to develop contention resolution schemes; and, moreover,
we revisit the algorithms of Bansal et al. [4].
Below we present the necessary background.
The probing model
We describe the framework following [20]. We are given a universe E, where each element e ∈ E
is active with probability pe ∈ [0, 1] independently. The only way to find out if an element
is active is to probe it. We call a probe successful if the element turns out to be active. On
universe E we execute an algorithm that probes the elements one-by-one. If an element is active,
the algorithm is forced to add it to the current solution. In this way, the algorithm gradually
constructs a solution consisting of active elements.
Here, we consider the case in which we are given constraints on both the elements probed and
the elements included in the solution. Formally, suppose that we are given two independence
systems of downward-closed sets: an outer independence system (E, I^out), restricting the set
of elements probed by the algorithm, and an inner independence system (E, I^in), restricting
the set of elements taken by the algorithm. We denote by Q^t the set of elements probed in the
first t steps of the algorithm, and by S^t the subset of active elements of Q^t. Then S^t is the
partial solution constructed in the first t steps of the algorithm. We require that at each time
t, Q^t ∈ I^out and S^t ∈ I^in. Thus, at each time t, the element e that we probe must satisfy both
Q^{t−1} ∪ {e} ∈ I^out and S^{t−1} ∪ {e} ∈ I^in. The goal is to maximize the expected value E[f(S)], where
f : 2^E → R_{≥0} and S is the set of all successfully probed elements.
We shall denote such a stochastic probing problem by (E, p, I^in, I^out), with the function f
stated alongside when needed.
Submodular Optimization
A set function f : 2^E → R_{≥0} is submodular if for any two subsets S, T ⊆ E we have f(S ∪ T) +
f(S ∩ T) ≤ f(S) + f(T). Without loss of generality, we also assume that f(∅) = 0.
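As a quick illustration (ours, not from the paper), the defining inequality can be verified by brute force on a small ground set; a coverage function is a standard example of a submodular function.

```python
from itertools import combinations

def is_submodular(f, ground):
    """Brute-force check of f(S | T) + f(S & T) <= f(S) + f(T) for all S, T."""
    subsets = [frozenset(c) for r in range(len(ground) + 1)
               for c in combinations(sorted(ground), r)]
    return all(f(S | T) + f(S & T) <= f(S) + f(T) + 1e-9
               for S in subsets for T in subsets)

# Coverage function: f(S) = number of points covered by the sets chosen in S.
AREAS = {1: {"a", "b"}, 2: {"b", "c"}, 3: {"c", "d"}}

def coverage(S):
    return len(set().union(*(AREAS[e] for e in S))) if S else 0

assert coverage(frozenset()) == 0            # the normalization f(empty) = 0
assert is_submodular(coverage, AREAS)        # coverage is submodular
```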
The multilinear extension F : [0, 1]^E → R_{≥0} of f is defined, at a point y ∈ [0, 1]^E, by
F(y) = Σ_{A⊆E} f(A) · Π_{e∈A} y_e · Π_{e∉A} (1 − y_e).
Note that F (1A ) = f (A) for any set A ⊆ E, so F is an extension of f from discrete domain 2E
into a real domain [0, 1]E . The value F (y) can be interpreted as the expected value of f on a
random subset A ⊆ E that is constructed by taking each element e ∈ E with probability ye .
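To make the definition concrete, here is a small sketch (function names are ours, not the paper's) that computes F exactly by enumeration and also estimates it by sampling random sets R(y):

```python
import itertools
import random

def F_exact(f, y):
    """Multilinear extension: sum over all A of f(A) * P[R(y) = A]."""
    elems = list(y)
    total = 0.0
    for bits in itertools.product([0, 1], repeat=len(elems)):
        A = frozenset(e for e, b in zip(elems, bits) if b)
        prob = 1.0
        for e, b in zip(elems, bits):
            prob *= y[e] if b else 1.0 - y[e]
        total += f(A) * prob
    return total

def F_sampled(f, y, samples=20000, seed=0):
    """Estimate F(y) as the expected value of f on a random set that takes
    each element e independently with probability y[e]."""
    rng = random.Random(seed)
    acc = 0.0
    for _ in range(samples):
        A = frozenset(e for e, ye in y.items() if rng.random() < ye)
        acc += f(A)
    return acc / samples

f = lambda A: min(len(A), 2)              # a simple submodular function
y = {1: 0.5, 2: 0.5, 3: 0.5}
assert abs(F_exact(f, y) - 1.375) < 1e-9  # E[min(Bin(3, 1/2), 2)] = 11/8
assert F_exact(f, {1: 1.0, 2: 1.0, 3: 0.0}) == f({1, 2})  # F(1_A) = f(A)
```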
Contention Resolution Schemes
Consider a ground set of elements E and a down-closed family I ⊆ 2^E of its subsets — we
call (E, I) an independence system. Let P(I) be the convex hull of the characteristic vectors of
the sets in I. Given x ∈ P(I), we define R(x) to be a random set in which every element e ∈ E is
included independently with probability x_e; the set R(x) defined in this way is used frequently
throughout the paper.
Chekuri et al. [8] presented a framework of contention resolution schemes (CR schemes) that
allows one to maximize non-negative submodular functions under various constraints. The following
definition and theorem come from [8].
Definition 1
Let (E, I) be an independence system. For b, c ∈ [0, 1], a (b, c)-balanced CR scheme π for P(I) is a
randomized procedure that for every x ∈ b · P (I) and A ⊆ E, returns a random set πx (A) such
that:
1. always πx (A) ⊆ A ∩ supp (x) and πx (A) ∈ I,
2. P [e ∈ πx (A1 )] ≥ P [e ∈ πx (A2 )] whenever e ∈ A1 ⊆ A2 ,
3. for all e ∈ supp (x), P [ e ∈ πx (R (x))| e ∈ R (x)] ≥ c.
Theorem 1
Let (E, I) be an independence system. Let f : 2E → R be a non-negative submodular function
with multilinear relaxation F , and x be a point in b · P (I). Let π be a (b, c)-balanced CR scheme
for P(I), and let S = π_x(R(x)). If f is monotone, then E[f(S)] ≥ c · F(x). Furthermore,
there is a function η_f : 2^E → 2^E that depends on f and can be evaluated in linear time, such
that even for non-monotone f, E[f(η_f(S))] ≥ c · F(x).
The function η_f(S) represents a pruning operation that removes some elements from S. To
prune a set S with the pruning function η_f, an arbitrary ordering of the elements of E is fixed; for
simplicity of notation let E = {1, ..., |E|}, which gives a natural ordering. Starting with S^prun = ∅,
the final set S^prun = η_f(S) is constructed by going through all elements of E in the given order.
When considering an element e ∈ S, S^prun is replaced by S^prun + e if f(S^prun + e) − f(S^prun) ≥ 0.
Note that such a pruning operation is not possible to execute in the probing model, since
we commit to elements. We address this issue in Section 2, where we show how to perform
on-the-fly pruning.
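A minimal sketch of the pruning operation η_f just described (our own naming); elements of S are scanned in the fixed order and kept only when their marginal value is non-negative:

```python
def prune(f, S, order):
    """Pruning eta_f: scan elements in a fixed order and keep e in S only if
    it does not decrease the value of the partial pruned set."""
    S_prun = frozenset()
    for e in order:
        if e in S and f(S_prun | {e}) - f(S_prun) >= 0:
            S_prun = S_prun | {e}
    return S_prun

# A non-monotone submodular function: value peaks at |A| = 1 or 2, then drops.
def f(A):
    return len(A) * (3 - len(A))

assert prune(f, frozenset({1, 2, 3}), order=[1, 2, 3]) == frozenset({1, 2})
```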
Stochastic k-set packing
We are given n elements/columns, where each element e ∈ [n] has a random profit ve ∈ R+ , and
a random d-dimensional size Se ∈ {0, 1}d . The sizes are independent for different elements, but
ve can be correlated with Se , and the coordinates of Se also might be correlated between each
other. Additionally, for each element e, there is a set Ce ⊆ [d] of at most k coordinates such that
each size vector Se takes positive values only in these coordinates, i.e., Se ⊆ Ce with probability
1. We are also given a capacity vector b ∈ Z^d_+ into which elements must be packed. We say that
the outcomes of a column S_e are monotone if for any two possible realizations a, b ∈ {0, 1}^d of
S_e we have a ≤ b or b ≤ a coordinate-wise. A strategy probes columns one by one, obeying the
packing constraints, and its goal is to maximize the expected value of the taken columns. Bansal et al. [4]
gave a 2k-approximation algorithm for the problem, and a (k + 1)-approximation algorithm
under the assumption that the outcomes of the size vectors S_e are monotone. The technique from [1]
allows one to get a (k + 1)-approximation for the problem without the assumption of monotone
column outcomes.
1.1 Our contributions at a high level
There are two main contributions of the paper. The first is Theorem 2, which implies that non-negative submodular optimization in the probing model is possible whenever we are given CR schemes.
Our second contribution is improved insight into transversal matroids in the context of
stochastic probing, captured in Theorem 4, together with new analyses of the algorithms from [4]
for stochastic k-set packing and stochastic matching. It is based on the iterative randomized rounding
technique of [1].
1.1.1 Non-negative submodular optimization via CR schemes in the probing model
To obtain our results we extend the framework of CR schemes to the probing model, and we
define a stochastic contention resolution scheme (stoch-CR scheme). Define a polytope
P(I^in, I^out) = { x ∈ [0, 1]^E : x ∈ P(I^out), p · x ∈ P(I^in) }.
By act(S) we denote the subset of active elements of a set S. Note that the event e ∈ act(R(x))
means both that e ∈ R(x) and that e is active; this event has probability p_e x_e.
Definition 2
Let (E, p, I^in, I^out) be a stochastic probing problem. For b, c ∈ [0, 1], a (b, c)-balanced stoch-CR
scheme π̄ for the polytope P(I^in, I^out) is a probing strategy that, for every x ∈ b · P(I^in, I^out) and
A ⊆ E, obeys the outer constraints I^out, and whose returned random set π̄_x(A) satisfies the following:
1. π̄_x(A) consists only of active elements,
2. π̄_x(A) ⊆ A ∩ supp(x) and π̄_x(A) ∈ I^in,
3. P[e ∈ π̄_x(R(x)) | e ∈ act(R(x))] ≥ c,
4. P[e ∈ π̄_x(A_1)] ≥ P[e ∈ π̄_x(A_2)] whenever e ∈ A_1 ⊆ A_2.
In Section 2 we present a mathematical program that models our problem. Solving the
program to obtain x^+, and running the stoch-CR scheme π̄_{x^+} on R(x^+), constitute the algorithm
of the theorem below.
Theorem 2
Consider a stochastic probing problem (E, p, I^in, I^out) in which we need to maximize a
non-negative submodular function f : 2^E → R_{≥0}. If there exists a (b, c)-balanced stoch-CR scheme
π̄ for P(I^in, I^out), then there exists a probing strategy whose expected outcome is at least
c · (b · e^{−b} − o(1)) · E[f(OPT)].
To put this theorem into perspective: there exists a (b, ((1 − e^{−b})/b)^k)-balanced scheme for
the intersection of k matroids [8], and there exists a (b, (1 − b)^k)-balanced ordered scheme for
the intersection of k matroids [7, 14]. For example, when k_in = k_out = 1, Lemma 10 and the above
theorem, after plugging in an appropriate value of b, yield an approximation factor of 0.13. See
Section 2 for a discussion of other types of constraints.
1.1.2 Optimization for transversal matroids
Stochastic probing on intersections of transversal matroids We improve upon the approximation
described in the previous section under the assumption that the constraints are intersections of
transversal matroids. We do so by developing a new stoch-CR scheme. This scheme is direct in
the sense that we do not construct it by first applying a scheme for the outer and then for the
inner constraints, as in Lemma 10.
Lemma 3
There exists a (b, 1/(1 + b · (k_in + k_out)))-balanced stoch-CR scheme when the constraints are
intersections of k_in inner and k_out outer transversal matroids.
For k_in = k_out = 1 the above lemma together with Theorem 2 gives an approximation factor
of 0.15 — a modest improvement over 0.13, but it gets significantly better for larger values of
k_in + k_out. With k = k_in + k_out we plug in b = 2/(√(1 + 4k) + 1) and use Theorem 2 to conclude
the following theorem.
Theorem 4
For maximizing a non-negative submodular function in the probing model with k_in inner and k_out
outer transversal matroids there exists an algorithm with approximation ratio
1/(k + √(k + 1/4) + 1/2) = (1/k) · (1 − Θ(1/√k)),
where k = k_in + k_out.
There are further applications of the techniques used in Lemma 3.
Regular CR scheme for transversal matroids When k_in = 0 this scheme yields a
(b, 1/(1 + b · k))-balanced CR scheme for the deterministic setting. So far only a (b, (1 − b)^k)-balanced
ordered scheme and a (b, ((1 − e^{−b})/b)^k)-balanced scheme were known [8]; see Section 2 for the
definition of an ordered scheme. Our scheme can be seen as an improvement when one looks at
max_b (b · c) — the first one yields 1/(e(k + 1)), the second 2/(e(k + 1)) (for large k)¹, and we get
1/(k + 1).
Stochastic k-set packing and stochastic matching Here we want to argue that the
martingale-based analysis of the algorithm for transversal matroids can be of independent interest.
In the bipartite stochastic matching problem [4] the inner constraints define bipartite matchings
and the outer constraints define general b-matchings — both are intersections of 2 transversal
matroids. The stochastic k-set packing problem does not belong to our framework, but we have
already defined it in the previous subsection.
Bansal et al. [4] presented an algorithm for k-set packing and proved that it yields
a (k + 1)-approximation, but only under the assumption that the column outcomes are monotone. They also
presented a 3-approximation algorithm for the stochastic matching problem. Both algorithms
first choose a subset of elements: for k-set packing the elements are chosen independently,
and for stochastic matching the elements are chosen using the dependent rounding procedure of
¹ The precise value is smaller than 1/(k + 1) starting from k ≥ 4.
Gandhi et al. [15]. After that they probe elements in a random order. Bansal et al. [4] gave
two quite different analyses of the two algorithms, while we show how both algorithms can be
analyzed using our martingale-based technique for transversal matroids. First, we show that
their algorithm for set packing is in fact a (k + 1)-approximation even without the monotonicity
assumption (see Appendix F). Second, we show that our technique of analysis also provides
a proof of the 3-approximation of their algorithm for stochastic matching (see Appendix G).
2 Optimization via stoch-CR schemes
Mathematical program
Another extension of f, studied in [6], is given by:
f^+(y) = max { Σ_{A⊆E} α_A f(A) : Σ_{A⊆E} α_A ≤ 1, ∀_{j∈E} Σ_{A : j∈A} α_A ≤ y_j, ∀_{A⊆E} α_A ≥ 0 }.
Intuitively, the solution (α_A)_{A⊆E} above represents a distribution over 2^E that maximizes the
value E[f(A)] subject to the constraint that its marginals satisfy P[j ∈ A] ≤ y_j. The
value f^+(y) is then the value of E[f(A)] under this distribution, while F(y) is the value
of E[f(A)] under the particular distribution that places each element j in A independently.
This relaxation is important for our applications because the following mathematical programming
relaxation gives an upper bound on the expected value of the optimal feasible strategy for the
related stochastic probing problem:
maximize f^+(x · p) subject to x ∈ P(I^in, I^out).   (1)
Lemma 5
Let OPT be the optimal feasible strategy for the stochastic probing problem in our general setting,
and let x^+ be an optimal solution of (1). Then E[f(OPT)] ≤ f^+(x^+ · p).
The proof of the above lemma can be found in [1]; for the sake of completeness we also include
it in Appendix C.
However, the framework of Chekuri et al. [8] uses the multilinear relaxation F and not f^+. The
lemma below allows us to make the connection. Note that F(x) is exactly equal to E[f(R(x))],
i.e., it corresponds to sampling each element e ∈ E independently with probability x_e, while the
definition of f^+(x) involves the best possible, most likely correlated, distribution over subsets of E.
It follows immediately that for any point x we have f^+(x) ≥ F(x), and therefore the following
lemma states a stronger lower bound for the measured greedy algorithm of Feldman et al. [13].
Details are presented in the next paragraph.
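The inequality f^+(x) ≥ F(x) can be seen directly: the independent distribution α_A = P[R(x) = A] is feasible for the program defining f^+, and its objective value is exactly F(x). A small numerical check of this feasibility argument (illustrative only, our own naming):

```python
import itertools

def independent_alphas(x):
    """alpha_A = P[R(x) = A] under independent sampling with marginals x."""
    elems = list(x)
    alphas = {}
    for bits in itertools.product([0, 1], repeat=len(elems)):
        A = frozenset(e for e, b in zip(elems, bits) if b)
        p = 1.0
        for e, b in zip(elems, bits):
            p *= x[e] if b else 1.0 - x[e]
        alphas[A] = p
    return alphas

x = {1: 0.3, 2: 0.6, 3: 0.5}
alphas = independent_alphas(x)

# Feasibility for the program defining f+: total mass <= 1 and marginals <= x_j
# (here the marginals are met with equality).
assert abs(sum(alphas.values()) - 1.0) < 1e-9
for j in x:
    assert abs(sum(a for A, a in alphas.items() if j in A) - x[j]) < 1e-9

# Its objective value equals F(x), so the maximum f+(x) can only be larger.
f = lambda A: len(A) ** 0.5
F_x = sum(a * f(A) for A, a in alphas.items())
assert 0 < F_x <= f(frozenset(x))
```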
Lemma 6
Let b ∈ [0, 1], let f be a submodular function with multilinear extension F, and let P be any
downward-closed polytope. Then the solution x ∈ [0, 1]^E produced by the measured greedy algorithm
satisfies: 1) x ∈ b · P; 2) F(x) ≥ (b · e^{−b} − o(1)) · max_{y∈P} f^+(y).
Stronger bound for measured continuous greedy
We now briefly review the measured continuous greedy algorithm of Feldman et al. [13]. The
algorithm runs for δ^{−1} discrete time steps, where δ is a suitably chosen small constant. Let
y(t) be the algorithm's current fractional solution at time t. At time t, the algorithm selects the
vector I(t) ∈ P given by arg max_{x∈P} Σ_{e∈E} x_e · (F(y(t) ∨ 1_{{e}}) − F(y(t))) (where ∨ denotes
the element-wise maximum). Then it sets y_e(t + δ) = y_e(t) + δ · I_e(t) · (1 − y_e(t)) and continues to
time t + δ.
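For intuition, one discrete step of this update can be sketched as follows; to keep the sketch runnable we take P to be the box [0, 1]^E, where the arg max just selects every coordinate with positive weight (a simplifying assumption, not the paper's general setting):

```python
def measured_greedy_step(F, y, delta):
    """One step of measured continuous greedy over the box polytope [0, 1]^E.
    w_e = F(y with e set to 1) - F(y); over a box the maximizing direction
    I(t) simply takes every coordinate with positive weight."""
    w = {e: F({**y, e: 1.0}) - F(y) for e in y}
    I = {e: 1.0 if w[e] > 0 else 0.0 for e in y}
    return {e: y[e] + delta * I[e] * (1.0 - y[e]) for e in y}

# Multilinear extension of f(A) = min(|A|, 1) on two elements.
F = lambda y: y[1] + y[2] - y[1] * y[2]

delta, y = 0.1, {1: 0.0, 2: 0.0}
for _ in range(10):                      # run for delta^{-1} = 10 steps
    y = measured_greedy_step(F, y, delta)

# Here I_e(t) = 1 in every step, so y_e(T) = 1 - (1 - delta)^{T/delta},
# meeting the bound of Lemma 14 with equality.
assert all(abs(ye - (1 - 0.9 ** 10)) < 1e-9 for ye in y.values())
```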
The analysis of Feldman et al. shows that if, at every time step,
F(y(t + δ)) − F(y(t)) ≥ δ · (e^{−t} · f(OPT) − F(y(t))) − O(n³δ²) · f(OPT),   (2)
then we must have F(y(T)) ≥ (T · e^{−T} − o(1)) · f(OPT). We note that, in fact, this portion of
their analysis works even if f(OPT) is replaced by any constant value. Thus, in order to prove
our claim, it suffices to derive an analogue of (2) in which f(OPT) is replaced by f^+(x^+), where
x^+ = arg max_{y∈P} f^+(y). The remainder of the proof then follows as in [13].
Lemma 7 below contains the required analogue of (2); hence it implies Lemma 6. The proof of
the lemma is placed in Appendix A.
Lemma 7
For every time 0 ≤ t ≤ T,
F(y(t + δ)) − F(y(t)) ≥ δ · (e^{−t} · f^+(x^+) − F(y(t))) − O(n³δ²) · f^+(x^+).
Using stoch-CR schemes
The following lemma is implied by Theorem 1 — it follows simply from the fact that the set
act(R(x)) is distributed exactly as R(x · p).
Lemma 8
Let (E, p, I^in, I^out) be a probing problem. Let f : 2^E → R_{≥0} be a non-negative submodular
function with multilinear relaxation F, and let x be a point in b · P(I^in, I^out) for b ∈ [0, 1]. Let π̄_x
be a (b, c)-balanced stoch-CR scheme for P(I^in, I^out). Let π̄_x(R(x)) be the output of the scheme,
and let η_f(π̄_x(R(x))) be the pruned subset of π̄_x(R(x)). It holds that
E[f(η_f(π̄_x(R(x))))] ≥ c · F(x · p).
However, we cannot yet apply the above lemma to reason about a probing strategy, because
here the pruning η_f is done after the process. In the probing model we commit to an
element once we successfully probe it, and therefore we cannot perform the pruning operation after
the execution of a stoch-CR scheme. However, since a probing strategy inherently includes
elements one-by-one, we can naturally add to any stoch-CR scheme a pruning operation done
on the fly. The idea is to simulate the probes of elements that would be rejected by the pruning
criterion. To simulate a probe of e means not to probe e but to toss a coin with success probability
p_e — in case of success to behave afterwards as if e had indeed been taken into the solution, and
in case of failure to behave as if e had not been taken. During the execution of a stoch-CR scheme
π̄_x we construct two sets: S^prun consists of the elements successfully probed, and S^virt consists of
the elements whose simulation was successful. If in a step we want to probe an element e such that
f(S^prun + S^virt + e) − f(S^prun + S^virt) < 0, then we simulate the probe of e and, if successful, set
S^virt ← S^virt + e; otherwise we really probe e, and set S^prun ← S^prun + e if successful. We can
see that at any step it holds that S^prun = η_f(S^prun + S^virt). Also, the final random set
S^prun + S^virt is distributed exactly as π̄_x(R(x)). Hence, the outcome of such a probing strategy is
E[f(S^prun)] = E[f(η_f(S^prun + S^virt))] = E[f(η_f(π̄_x(R(x))))] ≥ c · F(x · p),
where the inequality comes from Lemma 8. Thus we have proven the following.
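A sketch of this simulation idea, with the scheme's own selection rule abstracted into a given probing order and with hypothetical helper names; a probe is a coin toss with success probability p[e]:

```python
import random

def probe_with_on_the_fly_pruning(f, order, p, rng):
    """Keep S_prun (committed elements) and S_virt (successful simulations).
    An element with negative marginal w.r.t. S_prun + S_virt is only
    simulated; otherwise it is really probed and committed on success."""
    S_prun, S_virt = frozenset(), frozenset()
    for e in order:                      # order produced by the stoch-CR scheme
        union = S_prun | S_virt
        if f(union | {e}) - f(union) < 0:
            if rng.random() < p[e]:      # simulated probe succeeded
                S_virt |= {e}
        elif rng.random() < p[e]:        # real probe succeeded: commit to e
            S_prun |= {e}
    return S_prun

# With p = 1 everywhere, every (real or simulated) probe succeeds, so the
# outcome is deterministic: the non-monotone f below rejects the third element.
f = lambda A: len(A) * (3 - len(A))
result = probe_with_on_the_fly_pruning(f, [1, 2, 3], {1: 1, 2: 1, 3: 1},
                                       random.Random(0))
assert result == frozenset({1, 2})
```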
Lemma 9
Let f, x, π̄_x be as in Lemma 8. There exists a probing strategy whose expected outcome is
E[f(η_f(π̄_x(R(x))))] ≥ c · F(x · p).
Now we can finish the proof of Theorem 2. Consider the following algorithm. First, use
Lemma 6 to find a point x* such that
F(x* · p) ≥ (b · e^{−b} − o(1)) · max_{x ∈ P(I^in, I^out)} f^+(x · p).
Second, run the probing strategy based on the stoch-CR scheme π̄_{x*} as described in Lemma 9.
Together with Lemma 5, this yields that the outcome of such a probing strategy is
E[f(η_f(π̄_{x*}(R(x*))))] ≥ c · F(x* · p) ≥ c · (b · e^{−b} − o(1)) · max_{x ∈ P(I^in, I^out)} f^+(x · p)
≥ c · (b · e^{−b} − o(1)) · E[f(OPT)].
In [8] an alternative approach to pruning was also used. They defined a strict contention
resolution scheme, in which the approximation guarantee P[e ∈ π_x(R(x)) | e ∈ R(x)] ≥ c holds with
equality rather than inequality. Since the pruning operation depends on the objective function,
dropping it allows the algorithm to be used to maximize many submodular functions
at the same time. In our stochastic setting we can also skip the pruning operation if we have a
strict scheme. No proof of this fact is needed, since we can directly use an appropriate analogue
of Lemma 8 (Theorem 4.1 from [1]).
Stochastic probing for various constraints
Gupta and Nagarajan [20] introduced the notion of an ordered CR scheme, for which there exists
a (possibly random) permutation σ of E such that for each A the set π_x(A) is the maximal
independent subset of I obtained by considering elements in the order of σ. Ordered schemes
are required to implement probing strategies because of the commitment to the elements. CR
schemes exist for various types of constraints, e.g., matroids, sparse packing integer programs,
a constant number of knapsacks, unsplittable flow on trees (UFP), and k-systems (including
intersections of k matroids and k-matchoids). Ordered schemes exist for k-systems and UFP on trees.
See Theorem 4 in [20] for a listing with exact parameters.
The following lemma is based on Lemma 1.6 from [8]. The proof basically carries over; the
only thing we have to do is to again incorporate the simulation of probes, as in the proof of Lemma 9.
The proof of Lemma 10 is placed in Appendix C. Theorem 3.4 from [20] yields a similar result, but it
would imply only a (b, c_out + c_in − 1)-balanced stoch-CR scheme. Thus the lemma below can be
considered a strengthening of Theorem 3.4 from [20], because an FKG inequality is used in
the proof instead of a union bound.
Lemma 10
Consider a probing problem (E, p, I^in, I^out). Suppose we have a (b, c_out)-balanced CR scheme
π^out for P(I^out), and a (b, c_in)-balanced ordered CR scheme π^in for P(I^in). Then there exists
a (b, c_out · c_in)-balanced stoch-CR scheme for P(I^in, I^out).
In light of the above lemma, one may question Definition 2 of a stoch-CR scheme — why do
we need to define it at all, if we can just use two separate classic CR schemes π^out, π^in? The
reason is that there may exist stoch-CR schemes that are not compositions of two deterministic
schemes, and that yield better approximations than the corresponding composed ones. Such a
stoch-CR scheme is presented in Section 3.
In a recent paper, Feldman et al. [14] presented a variant of online contention resolution
schemes. They enlarged the set of constraints that can be used in the stochastic probing problem
— previously it was not possible to incorporate inner knapsack constraints or deadlines
(element e can be taken only in the first d_e steps), even in weighted settings. Their results can be
extended to monotone submodular settings by making use of the stronger bound for the continuous
greedy algorithm [6] presented in [1]. The stronger bound for the measured greedy algorithm — which
works for non-monotone functions — that we give in this paper can also be used in [14], enhancing
their result with the possibility of handling non-monotone functions as well.
3 Stoch-CR scheme for intersection of transversal matroids
In this section we prove Lemma 3. We assume we have only one inner matroid M^in to keep the
presentation simple. Also, we shall present a (1, 1/2)-balanced CR scheme instead of a
(b, 1/(b + 1))-balanced one. In Appendix E we present the full scheme with arbitrary b, k_in, and k_out.
Our stoch-CR scheme is given on input a point x such that p · x ∈ P(M^in) and a set
A ⊆ E, and on output it returns the set π̄_x(A). The procedure is divided into two phases. The first,
the preprocessing phase, depends only on the point x. The second, the random selection phase,
depends on the set A ⊆ E and on the outcome of the preprocessing phase.
Matroid properties
Let M^in = (E, I^in). We know [21] that the convex hull of {1_A : A ∈ I^in}, i.e., of the
characteristic vectors of all independent sets of M^in, is the matroid polytope
P(M^in) = { z ∈ R^E_{≥0} : Σ_{e∈A} z_e ≤ r_{M^in}(A) for all A ⊆ E },
where r_{M^in} is the rank function of M^in. Thus for any x with p · x ∈ P(M^in) we can find in
polynomial time a representation p · x = Σ_{i=1}^m β_i · 1_{B_i}, where B_1, . . . , B_m ∈ I^in and
β_1, . . . , β_m are non-negative weights such that Σ_{i=1}^m β_i = 1. We shall call the sets
B_1, . . . , B_m the support of p · x in M^in, and denote it by B.
In the remainder of the section we assume that we know the bipartite graph representation
(E ∪ V, D), with D ⊆ E × V, of the matroid, described below. This assumption is quite natural
and common, see e.g. [3].
Definition 3
Consider a bipartite graph (E ∪ V, D) with D ⊆ E × V. Let I be the family of all subsets S of E
such that there exists a matching between S and V of size exactly |S|. Then M = (E, I) is a
matroid, called a transversal matroid.
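Independence in a transversal matroid amounts to finding a matching that saturates S; a compact augmenting-path sketch (our own encoding: adjacency lists, element to right-side vertices):

```python
def is_independent(S, adj):
    """S is independent iff there is a matching of size |S| between S and V
    in the bipartite graph given by adj (element -> right-side vertices)."""
    match = {}                           # right-side vertex -> matched element

    def augment(e, seen):
        for v in adj.get(e, ()):
            if v not in seen:
                seen.add(v)
                if v not in match or augment(match[v], seen):
                    match[v] = e
                    return True
        return False

    return all(augment(e, set()) for e in S)

# Three elements over two right-side vertices: the matroid has rank 2.
adj = {1: ["u"], 2: ["u", "v"], 3: ["v"]}
assert is_independent({1, 3}, adj)
assert is_independent({2, 3}, adj)
assert not is_independent({1, 2, 3}, adj)
```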
Preprocessing
In what follows we shall write superscripts indicating the time step of the process. We start by
finding the support B^0 of the vector p · x ∈ P(M^in), i.e., p · x = Σ_i β_i · 1_{B_i^0}. For every two
sets B, A ∈ B^0 we find a mapping φ^0[B, A] : B → A ∪ {⊥}, which we call a transversal mapping.
This mapping will satisfy three properties.
Property 1
For each a ∈ A there is at most one b ∈ B for which φ0 [B, A] (b) = a.
Property 2
For b ∈ B \ A, if φ0 [B, A] (b) = ⊥, then A + b ∈ I, otherwise A − φ0 [B, A] (b) + b ∈ I.
Note that, unlike in standard exchange properties of matroids, we do not require that
φ^0[B, A](b) = b if b ∈ A ∩ B. Property 3 will be presented in a moment. Once we find the
family φ^0 of transversal mappings, for each element e ∈ E we choose one set among
{B_i^0 : e ∈ B_i^0}, picking B_i^0 with probability β_i/(p_e x_e); since Σ_{B_i^0 : e∈B_i^0} β_i = p_e x_e
this is properly defined. Denote by c(e) the index of the chosen set, and call the set B^t_{c(e)}
e-critical, for any t (note that c(e) is fixed throughout the process). We concisely denote the
indices of the critical sets by C = (c(e))_{e∈E}. For each element e we define
Γ^0(e) = { f : f ≠ e ∧ φ^0[B^0_{c(f)}, B^0_{c(e)}](f) = e } — the blocking set of e. The
lemma below follows from Property 1, and its proof is in Appendix C.
Lemma 11
If p · x ∈ P(M^in), then for any element e it holds that
E_{C,R(x)}[ Σ_{f∈Γ^0(e)} p_f · χ[f ∈ R(x)] ] ≤ 1,
where the expectation is over R(x) and over the choice of critical sets C; here χ[E] is the 0–1
indicator of the random event E.
Algorithm 1 Stoch-CR scheme π̄_x(A)
1: find the support B^0 of p · x in M^in and the family φ^0; choose critical sets C
2: remove from A all e with x_e = 0; mark all e ∈ A as available
3: S ← ∅
4: while there are still available elements in A do
5:     pick an element e uniformly at random from A
6:     if e is available then
7:         probe e
8:         if the probe of e was successful then
9:             S ← S ∪ {e}
10:            for each set B_i^t of the support B^t do
11:                B_i^t ← B_i^t + e
12:        call e unavailable
13:    else simulate the probe of e
14:    if the probe or the simulation was successful then
15:        for each available element f ≠ e do
16:            if φ^t[B^t_{c(e)}, B^t_{c(f)}](e) = f then
17:                B^t_{c(f)} ← B^t_{c(f)} − f and call f unavailable
18:    compute the family φ^{t+1}
19:    for each i do B_i^{t+1} ← B_i^t
20:    t ← t + 1
21: return S as π̄_x(A)
Random selection procedure
The whole stoch-CR scheme is presented in Algorithm 1. During the algorithm we modify the sets
of the support B^t after each step, but we keep the weights β_i unchanged. We preserve the invariant
that each B_i^t from B^t is an independent set of the matroid M^in. At the end of the algorithm the
set π̄_x(A) is contained in every set B_i^t ∈ B^t. Hence, the final set π̄_x(A) is independent in every
matroid.
Now we define Property 3 of transversal mappings. Suppose that in the first step we update
the support B^0 according to the for loop in line 15, and we obtain B^1. A different support B^1
requires a different family of mappings φ^1, and so in step 2 the elements that can block e are
those of Γ^1(e). If it happens that Γ^1(e) ≠ Γ^0(e), then we cannot show the monotonicity property
of the stoch-CR scheme. However, we can require the transversal mappings to keep the blocking
sets Γ^t(e) unchanged as long as e is available. In Appendix B we show how to find such a
family of transversal mappings φ^0 and how to construct φ^{t+1} given φ^t.
Property 3
Let φt be a family of transversal mappings for B t . Suppose we update the support B t and obtain
B t+1 . Then we can find a family φt+1 of transversal mappings such that Γt (e) = Γt+1 (e) for
any element e that is still available after step t.
Analysis
First, an explanation. In line 5 we allow the algorithm to pick elements that have already been
probed, and we then simulate their probe. This guarantees that the probability that an available
element is blocked is the same in every step. Otherwise, again, we would not be able to guarantee
monotonicity.
In the analysis we deploy martingale theory, in particular Doob's Optional Stopping Theorem,
which states that if a martingale (Z^t)_{t≥0} and a stopping time τ are both "well-behaved", then
E[Z^τ] = E[Z^0]. In Appendix D we give the necessary definitions, the statement of Doob's
Stopping Theorem, and an explanation of why we may use this theorem.
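As a toy sanity check of the optional stopping principle (illustrative, not part of the proof): a ±1 random walk Z^t stopped at the first hit of {−1, +2}, truncated at horizon 6, is a bounded stopped martingale, so E[Z^τ] = E[Z^0] = 0. Exact enumeration confirms this:

```python
from itertools import product

def expected_stopped_value(horizon=6):
    """Enumerate all coin sequences of a +-1 walk, stop at -1 or +2 (or at
    the horizon), and average Z^tau exactly over the uniform distribution."""
    total = 0
    for flips in product([-1, 1], repeat=horizon):
        z = 0
        for step in flips:
            z += step
            if z in (-1, 2):
                break
        total += z
    return total / 2 ** horizon

assert expected_stopped_value() == 0.0   # Doob: E[Z^tau] = E[Z^0]
```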
The random process executed in the while loop depends on the critical sets chosen in the
preprocessing phase. Therefore, when we analyze the random process we condition on the choice
of critical sets C.
We say e is available if it is still possible to probe it, i.e., it is not yet blocked and it was not
yet probed. Define X_e = ⟦e ∈ A⟧; in Iverson notation ⟦false⟧ = 0 = 1 − ⟦true⟧.
Let Y^t_e, for t = 0, 1, ..., be a random variable indicating whether e is still available after step t.
Initially Y^0_e = X_e. Let P^t_e be a random variable indicating whether e was probed in one of the
steps 0, 1, ..., t; we have P^0_e = 0 for all e. The variable P^{t+1}_e − P^t_e indicates whether e was
probed at step t + 1.
Given the information F^t about the process up to step t, the probability of this event is
E[P^{t+1}_e − P^t_e | F^t, C] = Y^t_e / |A|,
because if element e is still available after step t (i.e., Y^t_e = 1), then with probability 1/|A| we
choose it in line 5, and otherwise (i.e., Y^t_e = 0) we cannot probe it.
The variable Y^t_e − Y^{t+1}_e indicates whether element e stopped being available at step t + 1,
i.e., we either have picked it in line 5 and probed it (with probability Y^t_e/|A|), or some f ∈ Γ^t(e)
has blocked e in line 17 (with probability (Y^t_e/|A|) · Σ_{f∈Γ^t(e)} p_f X_f). So in total
E[Y^t_e − Y^{t+1}_e | F^t, C] = (Y^t_e/|A|) · (1 + Σ_{f∈Γ^t(e)} p_f X_f),
and we can say that
E[(P^{t+1}_e − P^t_e) · (1 + Σ_{f∈Γ^t(e)} p_f X_f) − (Y^t_e − Y^{t+1}_e) | F^t, C] = 0.
This means that the sequence ((1 + Σ_{f∈Γ^t(e)} p_f X_f) · P^t_e + Y^t_e)_{t≥0} is a martingale. Let
τ = min{ t : Y^t_e = 0 } be the step in which element e became unavailable. It is clear that τ is a
stopping time (definition in Appendix D). Thus from Doob's Stopping Theorem we get that
E_τ[(1 + Σ_{f∈Γ^τ(e)} p_f X_f) · P^τ_e + Y^τ_e | C] = E[(1 + Σ_{f∈Γ^0(e)} p_f X_f) · P^0_e + Y^0_e | C],
and the right-hand side is equal to X_e, because P^0_e = 0 and Y^0_e = X_e. We have Y^τ_e = 0, and
the expression 1 + Σ_{f∈Γ^τ(e)} p_f X_f is in fact equal to 1 + Σ_{f∈Γ^0(e)} p_f X_f (by Property 3 of
the transversal mappings, as e was available before step τ), which depends solely on C and A. Hence
(1 + Σ_{f∈Γ^0(e)} p_f X_f) · E_τ[P^τ_e | C] = X_e.
Now just note that E_τ[P^τ_e | C] is exactly the probability that e is probed, so we conclude that
P[e ∈ π̄_x(A) | C] = p_e · E_τ[P^τ_e | C] = p_e X_e / (1 + Σ_{f∈Γ^0(e)} p_f X_f).
Monotonicity The set Γ^0(e) does not depend on A, but only on the vector p · x and on C, so for
A_1 ⊆ A_2 we have Σ_{f∈Γ^0(e)} p_f · ⟦f ∈ A_1⟧ ≤ Σ_{f∈Γ^0(e)} p_f · ⟦f ∈ A_2⟧.
Approximation guarantee In the identity
P[e ∈ π̄_x(A) | C] = p_e X_e / (1 + Σ_{f∈Γ^0(e)} p_f X_f)
we place the random set R(x) instead of A; now X_f = χ[f ∈ R(x)] is a random variable. Let us
condition on e ∈ R(x), take the expected value E_{C,R(x)}[· | e ∈ R(x)], and apply Jensen's
inequality to the convex function x ↦ 1/x to get:
P[e ∈ π̄_x(R(x)) | e ∈ R(x)] = E_{C,R(x)}[ P[e ∈ π̄_x(R(x)) | C, R(x)] | e ∈ R(x) ]
= E_{C,R(x)}[ p_e X_e / (1 + Σ_{f∈Γ^0(e)} p_f X_f) | e ∈ R(x) ]
≥ p_e / E_{C,R(x)}[ 1 + Σ_{f∈Γ^0(e)} p_f X_f | e ∈ R(x) ].
Since E_{C,R(x)}[ Σ_{f∈Γ^0(e)} p_f X_f | e ∈ R(x) ] ≤ 1 by Lemma 11, we conclude that
P[e ∈ π̄_x(R(x)) | e ∈ R(x)] ≥ p_e / 2,
and therefore P[e ∈ π̄_x(R(x)) | e ∈ act(R(x))] ≥ 1/2, which is exactly Property 3 from the
definition of a stoch-CR scheme.
Acknowledgments
I thank Justin Ward for his help in proving Lemma 7, and also for valuable suggestions that
helped to improve the presentation of the paper.
References
[1] Marek Adamczyk, Maxim Sviridenko, and Justin Ward. Submodular stochastic probing on
matroids. In STACS, 2014.
[2] Arash Asadpour, Hamid Nazerzadeh, and Amin Saberi. Stochastic submodular maximization. In WINE, pages 477–489, 2008.
[3] Moshe Babaioff, Nicole Immorlica, and Robert Kleinberg. Matroids, secretary problems,
and online mechanisms. In SODA, 2007.
[4] Nikhil Bansal, Anupam Gupta, Jian Li, Julián Mestre, Viswanath Nagarajan, and Atri
Rudra. When LP is the cure for your matching woes: Improved bounds for stochastic
matchings. Algorithmica, 63, 2012.
[5] E. M. L. Beale. Linear programming under uncertainty. Journal of the Royal Statistical
Society. Series B, 17:173–184, 1955.
[6] Gruia Calinescu, Chandra Chekuri, Martin Pál, and Jan Vondrák. Maximizing a submodular set function subject to a matroid constraint. SIAM J. Comput., 40, 2011.
[7] Shuchi Chawla, Jason D. Hartline, David L. Malec, and Balasubramanian Sivan. Multi-parameter mechanism design and sequential posted pricing. In STOC, 2010.
[8] Chandra Chekuri, Jan Vondrák, and Rico Zenklusen. Submodular function maximization via the multilinear relaxation and contention resolution schemes. SIAM J. Comput.,
43(6):1831–1879, 2014.
[9] Ning Chen, Nicole Immorlica, Anna R. Karlin, Mohammad Mahdian, and Atri Rudra.
Approximating matches made in heaven. In ICALP, 2009.
[10] G.B. Dantzig. Linear programming under uncertainty. Management Science, 1:197–206,
1955.
[11] Brian C. Dean, Michel X. Goemans, and Jan Vondrák. Adaptivity and approximation for
stochastic packing problems. In SODA, pages 395–404, 2005.
[12] Brian C. Dean, Michel X. Goemans, and Jan Vondrák. Approximating the stochastic
knapsack problem: The benefit of adaptivity. Math. Oper. Res., 33, 2008.
[13] Moran Feldman, Joseph Naor, and Roy Schwartz. A unified continuous greedy algorithm
for submodular maximization. In FOCS, 2011.
[14] Moran Feldman, Ola Svensson, and Rico Zenklusen. Online contention resolution schemes.
CoRR, abs/1508.00142, 2015.
[15] Rajiv Gandhi, Samir Khuller, Srinivasan Parthasarathy, and Aravind Srinivasan. Dependent rounding and its applications to approximation algorithms. J. ACM, 53, 2006.
[16] Michel X. Goemans and Jan Vondrák. Stochastic covering and adaptivity. In LATIN, pages
532–543, 2006.
[17] Sudipto Guha and Kamesh Munagala. Approximation algorithms for budgeted learning
problems. In STOC, pages 104–113, 2007.
[18] Sudipto Guha and Kamesh Munagala. Model-driven optimization using adaptive probes.
In SODA, pages 308–317, 2007.
[19] Anupam Gupta, Ravishankar Krishnaswamy, Marco Molinaro, and R. Ravi. Approximation
algorithms for correlated knapsacks and non-martingale bandits. In FOCS, pages 827–836,
2011.
[20] Anupam Gupta and Viswanath Nagarajan. A stochastic probing problem with applications.
In IPCO, 2013.
[21] A. Schrijver. Combinatorial Optimization - Polyhedra and Efficiency. 2003.
A  Stronger bound for measured continuous greedy
Recall that we need to prove Lemma 7. To do so, we shall require the following additional facts from the analysis of [13].
Lemma 12 (Lemma 3.3 in [13])
Consider two vectors x, x′ ∈ [0, 1]^E such that |x_e − x′_e| ≤ δ for every e ∈ E. Then
F(x′) − F(x) ≥ Σ_{e∈E} (x′_e − x_e) · ∂_e F(x) − O(n^3 δ^2) · f(OPT).
Lemma 13 (Lemma 3.5 in [13])
Consider a vector x ∈ [0, 1]^E. Assuming x_e ≤ a for every e ∈ E, then for every set S ⊆ E,
F(x ∨ 1_S) ≥ (1 − a) f(S).
Lemma 14 (Lemma 3.6 in [13])
For every time 0 ≤ t ≤ T and element e ∈ E, y_e(t) ≤ 1 − (1 − δ)^{t/δ} ≤ 1 − exp(−t) + O(δ).
Lemma 7. For every time 0 ≤ t ≤ T:
F(y(t + δ)) − F(y(t)) ≥ δ · (e^{−t} · f^+(x^+) − F(y(t))) − O(n^3 δ^2) · f^+(x^+).
Proof. Applying Lemma 12 to the solutions y(t + δ) and y(t), we have

F(y(t + δ)) − F(y(t))
  ≥ Σ_{e∈E} δ · I_e(t)(1 − y_e(t)) · ∂_e F(y(t)) − O(n^3 δ^2) · f(OPT)    (3)
  = Σ_{e∈E} δ · I_e(t)(1 − y_e(t)) · [F(y(t) ∨ 1_e) − F(y(t))] / (1 − y_e(t)) − O(n^3 δ^2) · f(OPT)
  = Σ_{e∈E} δ · I_e(t) · [F(y(t) ∨ 1_e) − F(y(t))] − O(n^3 δ^2) · f(OPT)
  ≥ Σ_{e∈E} δ · x^+_e · [F(y(t) ∨ 1_e) − F(y(t))] − O(n^3 δ^2) · f(OPT),    (4)

where the last inequality follows from our choice of I(t).
Moreover, we have f^+(x^+) = Σ_{A⊆E} α_A f(A) for some set of values α_A satisfying Σ_{A⊆E} α_A = 1 and Σ_{A⊆E: e∈A} α_A = x^+_e. Thus,

Σ_{e∈E} x^+_e · [F(y(t) ∨ 1_e) − F(y(t))] = Σ_{A⊆E} α_A Σ_{e∈A} [F(y(t) ∨ 1_e) − F(y(t))]
  ≥ Σ_{A⊆E} α_A · [F(y(t) ∨ 1_A) − F(y(t))]
  ≥ Σ_{A⊆E} α_A · ((e^{−t} − O(δ)) · f(A) − F(y(t)))
  = (e^{−t} − O(δ)) · f^+(x^+) − F(y(t)),

where the first inequality follows from the fact that F is concave in all positive directions, and the second from Lemmas 13 and 14. Combining this with inequality (4), and noting that f^+(x^+) ≥ f^+(OPT) = f(OPT), we finally obtain F(y(t + δ)) − F(y(t)) ≥ δ · (e^{−t} · f^+(x^+) − F(y(t))) − O(n^3 δ^2) · f^+(x^+).
B  Transversal mappings
Recall the definition of a transversal matroid.
Definition 4
Consider a bipartite graph (E ∪ V, Q), where Q ⊆ E × V. Let I be the family of all subsets S of E such that there exists an injection from S to V using only the edges of Q. Then M = (E, I) is a matroid, called a transversal matroid.
We assume we know the graph (E ∪ V, Q) of the matroid.
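To make the definition concrete, independence in a transversal matroid can be tested by computing a maximum bipartite matching. The following minimal Python sketch (an illustration only, not part of the scheme itself; the instance `nbrs` is hypothetical) checks independence with simple augmenting paths:

```python
def is_independent(S, neighbors):
    """Test independence in the transversal matroid of a bipartite
    graph: S is independent iff every element of S can be matched to
    a distinct neighboring vertex (Kuhn's augmenting-path matching)."""
    match = {}  # right-hand vertex -> element of S currently using it

    def augment(e, seen):
        for v in neighbors.get(e, ()):
            if v in seen:
                continue
            seen.add(v)
            # v is free, or its current user can be re-matched elsewhere
            if v not in match or augment(match[v], seen):
                match[v] = e
                return True
        return False

    return all(augment(e, set()) for e in sorted(S))

# Hypothetical instance: elements a, b, c over vertices 1, 2.
nbrs = {"a": [1], "b": [1, 2], "c": [2]}
print(is_independent({"a", "b"}, nbrs))       # True
print(is_independent({"a", "b", "c"}, nbrs))  # False (only 2 vertices)
```

Any maximum-matching routine would do here; augmenting paths keep the sketch self-contained.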
Let B^0 be the initial support and let A ∈ B^0 be an independent set. By the definition of a transversal matroid there exists an injection v^A : A → V; we shall say that a ∈ A is matched to v^A(a). There can be many such injections for a given set, but we initially fix one for every A ∈ B^0. Whenever a set of the support changes, we shall explicitly define a new injection. In fact, we only define a new match for the added elements; all other elements stay matched to the same vertex for as long as they are available.
For any two A, B ∈ B^0 we define the mapping φ^0[B, A] : B → A ∪ {⊥} as follows. Let v^A and v^B be the injections of A and B, respectively. For b ∈ B, if there exists an element a ∈ A such that v^A(a) = v^B(b), then we set φ^0[B, A](b) = a; if not, we set φ^0[B, A](b) = ⊥.
Let us verify that this definition satisfies the first two properties.
Property 1. For each a ∈ A there is at most one b ∈ B for which φ^0[B, A](b) = a.
This is trivially satisfied because there can be at most one b ∈ B that, according to v^B, is matched to v^A(a).
Property 2. For b ∈ B \ A, if φ^0[B, A](b) = ⊥, then A + b ∈ I; otherwise A − φ^0[B, A](b) + b ∈ I.
Suppose φ^0[B, A](b) = ⊥. This means that b is matched to a vertex v^B(b) to which no element a ∈ A is matched. Therefore, when we add the edge (b, v^B(b)) to the injection {(a, v^A(a)) : a ∈ A}, it remains a proper injection, since b ∉ A, and so A + b ∈ I. Now suppose φ^0[B, A](b) = a′ ≠ ⊥. Then the set A changes to A − a′ + b and the underlying injection is {(a, v^A(a)) : a ∈ A \ a′} ∪ {(b, v^B(b))}, which is a valid injection since b ∉ A; if so, then A − a′ + b is indeed an independent set. So Property 2 also holds.
Now let us move to the most technically demanding property.
Property 3. Let φ^t be a family of transversal mappings for the support B^t. Suppose we update the support B^t and obtain B^{t+1}. Then we can find a family φ^{t+1} of transversal mappings such that Γ^t(a) = Γ^{t+1}(a) for any element a that is still available after step t.
Suppose that in step t we have chosen element c, and we update the support B^t as described in the for loop of the algorithm in line 15. First of all assume that c ≠ a, otherwise a becomes unavailable and there is nothing to prove. Let C^t be the critical set of c and let v^{C^t}(c) be the vertex to which c is matched according to v^{C^t}. Consider a set B^t of the support B^t and let us describe how c affects B^{t+1} and the injection v^{B^{t+1}}.
Case 1, c ∈ B^t: If it is c from B^t that is matched to v^{C^t}(c), i.e., v^{B^t}(c) = v^{C^t}(c), then we do not have to change anything: we set B^{t+1} := B^t and v^{B^{t+1}} = v^{B^t}. If v^{B^t}(c) ≠ v^{C^t}(c), then let b_1 be such that v^{B^t}(b_1) = v^{C^t}(c). We remove b_1 from B^t, i.e., B^{t+1} := B^t \ b_1 (we do not have to add c to B^{t+1} because it is already there). For every b_3 ∈ B^{t+1} we set v^{B^{t+1}}(b_3) = v^{B^t}(b_3). See Figure 1.
[Figure 1: Illustration of Case 1, c ∈ B^t.]
t
t
Case 2, c ∈
/ B t : Let b1 be such that v B (b1 ) = v C (c). We remove b1 from B t and
t
t+1
add c instead, i.e., B t+1 = B t − b1 + c. The injection is defined as: v B (c) = v C (c), and
t
t+1
v B (b3 ) = v B (b3 ) for b3 ∈ B t \ b1 . See Figure 2.
Ct
c
t
t
b1
v B (b1) = v C (c)
b2
v B (b2)
t
b3
c
vB
t+1
b2
vB
t+1
vB
t+1
(c)
(b2)
b3
t
v B (b3)
Bt
B t+1
V
V
Bt
B t+1
Figure 2: Illustration of Case 2, c ∈
/ Bt
(b3)
t+1
Given sets At+1
, B t+1 with
corresponding injections v A ,v B
t+1
t+1
before, i.e., φt+1 B t+1 , At+1 (b) = a, if v A (a)n= v B (b). h
t+1
we define mapping φt+1 as
i
o
t+1
t+1
(b) = a is equal to
, Bc(a)
Now we need to show that the set Γt+1 (a) = b b 6= a ∧ φ Bc(b)
n
h
i
o
t , Bt
Γt (a) = b b 6= a ∧ φ Bc(b)
(b)
=
a
, if a is still available.
c(a)
Consider again sets At+1 , B t+1 and suppose that At is the critical set of a and that B t
t
t
is the critical set of b. Suppose that both a, b are matched to vab = v A (a) = v B (b), i.e.,
t
b ∈ Γt (a). If it happened that in step t element c removed a and b, i.e., v C (c) = vab , then
t
elements a, b are blocked and not available, so there is nothing to prove here. If v C (c) 6= v,
then from the reasoning in Case 1 and 2, we know that a and
b are still matched to vab , i.e.,
t+1
t+1
vab = v A (a) = v B (b). But if so, then φt+1 B t+1 , At+1 (b) = a, and b ∈ Γt+1 (a) still,
t
because At+1 , B t+1 remain critical sets of a, b. Conversely, if b is not matched to v A (a), i.e.,
t
t
/ Γt (a). But if c during the update does not block b, then b does not
v B (b) 6= v A (a), then b ∈
t+1
t+1
/ Γt+1 (a).
change its matched vertex so we still have v B (b) 6= v A (a), and still b ∈
Illustration is given in Figure 3.
[Figure 3: Illustration of the change in Γ. We have blocked b_1, and if b_1 ∈ Γ^t(a_1), then it does not matter anyway, because we have also blocked a_1. Element c was matched (w.r.t. B^t) to the same vertex as a_2, but B^t is not the critical set of c, so c ∉ Γ^t(a_2). Assume B^t is the critical set of b_3: we have φ^t[B^t, A^t](b_3) = a_3 and so b_3 ∈ Γ^t(a_3); after the update we still have φ^{t+1}[B^{t+1}, A^{t+1}](b_3) = a_3, so b_3 ∈ Γ^{t+1}(a_3). Element a_4 did not have any element b′ ∈ B^t in Γ^t(a_4), so it does not have any b′ ∈ B^{t+1} in Γ^{t+1}(a_4) either.]
C  Omitted proofs
Lemma 5. Let OPT be the optimal feasible strategy for the stochastic probing problem in our general setting. Then E[f(OPT)] ≤ f^+(x^+ · p).
Proof. We construct a feasible solution x of the following program
maximize { f^+(x · p) : x · p ∈ ∩_{j=1}^{k_in} P(M^in_j), x ∈ ∩_{j=1}^{k_out} P(M^out_j) },
by setting x_e = P[OPT probes e]. First, we show that this is indeed a feasible solution. Since OPT is a feasible strategy, the set of elements Q probed by any execution of OPT is always an independent set of each outer matroid M^out_j = (E, I^out_j), i.e., Q ∈ ∩_{j=1}^{k_out} I^out_j. Thus the vector E[1_Q] = x may be represented as a convex combination of vectors from {1_A : A ∈ ∩_{j=1}^{k_out} I^out_j}, and so x ∈ P(M^out_j) for every j ∈ 1, . . . , k_out. Analogously, the set of elements S that were successfully probed by OPT satisfies S ∈ ∩_{j=1}^{k_in} I^in_j for every possible execution of OPT. Hence, the vector E[1_S] = x · p may be represented as a convex combination of vectors from {1_A : A ∈ ∩_{j=1}^{k_in} I^in_j}, and so x · p ∈ P(M^in_j) for every j ∈ 1, . . . , k_in. The value f^+(x · p) gives the maximum value of E_{S∼D}[f(S)] over all distributions D satisfying P_{S∼D}[e ∈ S] ≤ x_e p_e. The solution S returned by OPT satisfies P[e ∈ S] = P[OPT probes e] · p_e = x_e p_e. Thus OPT defines one such distribution, and so we have E[f(OPT)] ≤ f^+(x · p) ≤ f^+(x^+ · p).
Lemma 11. If p · x ∈ P(M^in), then for any element e it holds that
E_{C,R(x)}[Σ_{f∈Γ^0(e)} p_f · χ[f ∈ R(x)]] ≤ 1,
where the expectation is over R(x) and the choice of critical sets C; here χ[E] is a 0-1 indicator of the random event E.
Proof. In what follows let us skip writing 0 in the superscripts of the bases B^0_i, the mappings φ^0, and the set Γ^0(e).
Let us condition for now on the critical set B_{c(e)} of element e. For f to belong to Γ(e) it has to be the case that φ[B_{c(f)}, B_{c(e)}](f) = e. Therefore
Σ_{f∈Γ(e)} p_f · χ[f ∈ R(x)] = Σ_{f∈E\{e}} p_f · χ[f ∈ R(x)] · Σ_{i: φ[B_i, B_{c(e)}](f)=e} χ[B_i is f-critical],
and by changing the order of summation it is equal to
Σ_i Σ_{f∈B_i\e: φ[B_i, B_{c(e)}](f)=e} p_f · χ[f ∈ R(x)] · χ[B_i is f-critical].
Consider f such that f ∈ B_i \ e and φ[B_i, B_{c(e)}](f) = e. Since χ[f ∈ R(x)] and c(f) (the index of the critical set of f) are independent, and E[χ[f ∈ R(x)]] = x_f and P[B_i is f-critical | B_{c(e)}] = P[i = c(f) | B_{c(e)}] = β_i/(p_f x_f), we get that
E[p_f · χ[f ∈ R(x)] · χ[B_i is f-critical] | B_{c(e)}] = p_f x_f · β_i/(p_f x_f) = β_i,
and hence
E[Σ_{f∈Γ(e)} p_f · χ[f ∈ R(x)] | B_{c(e)}] = Σ_i Σ_{f∈B_i\e: φ[B_i, B_{c(e)}](f)=e} β_i ≤ Σ_i β_i = 1,
where the inequality follows from the fact that for each B_i there can be at most one element f ∈ B_i such that φ[B_i, B_{c(e)}](f) = e.
Lemma 10. Consider a probing problem (E, p, I^in, I^out). Suppose we have a (b, c_out)-balanced CR scheme π^out for P(I^out), and a (b, c_in)-balanced ordered CR scheme π^in for P(I^in). Then there exists a (b, c_out · c_in)-balanced stoch-CR scheme for P(I^in, I^out).
Proof. First let us recall a lemma from [8].
Lemma (1.6 from [8]). Let I = ∩_i I_i and P_I = ∩_i P_{I_i}. Suppose each P_{I_i} has a monotone (b, c_i)-balanced CR scheme. Then P_I has a monotone (b, Π_i c_i)-balanced CR scheme defined as π_x(A) = ∩_i π^i_x(A) for A ⊆ N, x ∈ b · P_I.
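The intersection construction in the recalled lemma can be illustrated with a small sketch (the two toy schemes below are hypothetical and not balanced CR schemes in any formal sense; they only show how the combined scheme intersects the outputs of the individual ones):

```python
def intersect_schemes(schemes):
    """Combine contention resolution schemes by intersection, as in
    the recalled composition lemma: the combined scheme keeps an
    element of A only if every individual scheme keeps it."""
    def combined(A, x):
        out = set(A)
        for scheme in schemes:
            out &= scheme(A, x)
        return out
    return combined

# Two toy schemes: one keeps only the largest element, one drops element 3.
keep_max = lambda A, x: {max(A)} if A else set()
drop_three = lambda A, x: {e for e in A if e != 3}
pi = intersect_schemes([keep_max, drop_three])
print(pi({1, 2, 3}, None))  # set(): keep_max keeps {3}, drop_three drops 3
print(pi({1, 2}, None))     # {2}
```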
Suppose we have a CR scheme π^out for P(I^out) and an ordered CR scheme π^in for P(I^in). We would like to define the stochastic contention resolution scheme for P(I^in, I^out) just as in the lemma above, i.e., as π_x(A) = π^out_x(A) ∩ π^in_{p·x}(act(A)). However, we cannot simply run π^out_x on A, then π^in_{p·x} on A again, and take the intersection, because that does not constitute a feasible probing strategy. Once again, we need to make use of simulated probes to get a stoch-CR scheme whose output has the probability distribution of π^out_x(A) ∩ π^in_{p·x}(act(A)). How do we implement such a strategy? We first run π^out_x(A) on the set A. Later we use π^in_{p·x} to scan the elements of A in the order given by the definition of an ordered scheme. If π^in considers an element e such that e ∈ A \ π^out_x(A), then we simulate the probe of e; if e ∈ π^out_x(A), then we just probe it.
Therefore, the CR scheme π^in_{p·x} in fact works on the set act(π^out_x(A) + (A \ π^out_x(A))^virt), where (A \ π^out_x(A))^virt represents simulated probes of the elements in A \ π^out_x(A). Assuming e ∈ π^out_x(A), it is easy to see that the elements in act(A) and the elements in act(π^out_x(A) + (A \ π^out_x(A))^virt) have the same probability distribution. Therefore,
P[e ∈ π^in_{p·x}(act(A))] = P[e ∈ π^in_{p·x}(act(π^out_x(A) ∪ (A \ π^out_x(A))^virt)) | π^out_x(A), e ∈ π^out_x(A)],    (5)
and the RHS corresponds to the second phase of a feasible probing strategy. Thus we have:
P[e ∈ π_x(A)]
 = P[e ∈ π^out_x(A) ∩ π^in_{p·x}(π^out_x(A) ∪ (A \ π^out_x(A))^virt)]
 = E_{π^out_x}[χ[e ∈ π^out_x(A)]] · E_{π^in_{p·x}}[χ[e ∈ π^in_{p·x}(act(π^out_x(A) ∪ (A \ π^out_x(A))^virt))] | π^out_x(A), e ∈ π^out_x(A)].
Now just note that
E_{π^in_{p·x}}[χ[e ∈ π^in_{p·x}(π^out_x(A) ∪ (A \ π^out_x(A))^virt)] | π^out_x(A), e ∈ π^out_x(A)]
 = P[e ∈ π^in_{p·x}(act(π^out_x(A) ∪ (A \ π^out_x(A))^virt)) | π^out_x(A), e ∈ π^out_x(A)]
 = P[e ∈ π^in_{p·x}(act(A))],    (6)
which follows from line (5), and so we can simplify the previous expression to
P[e ∈ π_x(A)] = E_{π^out_x}[χ[e ∈ π^out_x(A)]] · P[e ∈ π^in_{p·x}(act(A))] = P[e ∈ π^out_x(A)] · P[e ∈ π^in_{p·x}(act(A))].
Now the analysis just follows the lines of Lemma 1.6 from [8]. We plug in R(x) for A, and apply the expectation over R(x) conditioned on e ∈ R(x) to get:
E_{R(x)}[P[e ∈ π_x(R(x))] | e ∈ R(x)] = E_{R(x)}[P[e ∈ π^out_x(R(x))] · P[e ∈ π^in_{p·x}(act(R(x)))] | e ∈ R(x)].
From the fact that both π^out, π^in are monotone, and act(R(x)) is an increasing function of R(x), we get that P[e ∈ π^out_x(R(x))] and also P[e ∈ π^in_{p·x}(act(R(x)))] are increasing functions of R(x). Thus applying the FKG inequality gives us that
E_{R(x)}[P[e ∈ π_x(R(x))] | e ∈ R(x)]
 = E_{R(x)}[P[e ∈ π^out_x(R(x))] · P[e ∈ π^in_{p·x}(act(R(x)))] | e ∈ R(x)]
 ≥ E_{R(x)}[P[e ∈ π^out_x(R(x))] | e ∈ R(x)] · E_{R(x)}[P[e ∈ π^in_{p·x}(act(R(x)))] | e ∈ R(x)]
 ≥ b · c_out · E_{R(x)}[P[e ∈ π^in_{p·x}(act(R(x)))] | e ∈ R(x)].
Now applying also the expectation over act(R(x)), we finally get
E_{R(x),act(R(x))}[P[e ∈ π_x(R(x))] | e ∈ R(x)] ≥ c_out · E_{R(x),act(R(x))}[P[e ∈ π^in_{p·x}(act(R(x)))] | e ∈ R(x)] ≥ p_e · c_out · c_in.    (7)
Also, directly from the equation P[e ∈ π_x(A)] = P[e ∈ π^out_x(A)] · P[e ∈ π^in_{p·x}(act(A))] we get the monotonicity of the stoch-CR scheme π_x, since both π^out_x and π^in_{p·x}(act(·)) are monotone.
D  Martingale Theory
Definition 5
Let (Ω, F, P) be a probability space, where Ω is a sample space, F is a σ-algebra on Ω, and P is a probability measure on (Ω, F). A sequence {F_t : t ≥ 0} is called a filtration if it is an increasing family of sub-σ-algebras of F: F_0 ⊆ F_1 ⊆ . . . ⊆ F.
Intuitively speaking, when considering a stochastic process, the σ-algebra F_t represents all information available to us right after making step t. In our case, the σ-algebra F_t contains all information about each randomly chosen element to probe, about the outcome of each probe, and about each support update for every matroid that happened before or at step t.
Definition 6
A process (Z_t)_{t≥0} is called a martingale if for every t ≥ 0 all of the following conditions hold:
1. the random variable Z_t is F_t-measurable,
2. E[|Z_t|] < ∞,
3. E[Z_{t+1} | F_t] = Z_t.
Definition 7
A random variable τ : Ω → {0, 1, . . .} is called a stopping time if {τ = t} ∈ F_t for every t ≥ 0.
Intuitively, τ represents the moment when an event happens. We have to be able to say whether it happened at step t given only the information from steps 0, 1, 2, . . . , t. In our case we define τ as the moment when an element becomes unavailable, i.e., when it is chosen to be probed or is blocked by other elements. It is clear that this is a stopping time according to the above definition.
Theorem 15 (Doob's Optional-Stopping Theorem)
Let (Z_t)_{t≥0} be a martingale. Let τ be a stopping time such that τ has finite expectation, i.e., E[τ] < ∞, and the conditional expectations of the absolute values of the martingale increments are bounded, i.e., there exists a constant c such that E[|Z_{t+1} − Z_t| | F_t] ≤ c for all t ≥ 0. Then E[Z_τ] = E[Z_0].
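A quick numerical illustration of the theorem (not from the paper): a symmetric ±1 random walk stopped on hitting 0 or an upper barrier is a martingale with increments bounded by 1 and E[τ] < ∞, so the stopped mean should equal the starting point:

```python
import random

def stopped_walk(start, upper, rng):
    """Symmetric +/-1 random walk (a martingale with |Z_{t+1}-Z_t| = 1)
    run until it hits 0 or `upper`; returns the stopped value Z_tau."""
    z = start
    while 0 < z < upper:
        z += rng.choice((-1, 1))
    return z

# Doob's theorem predicts E[Z_tau] = Z_0 = 3 (the walk hits 10 w.p. 3/10).
rng = random.Random(0)
samples = [stopped_walk(3, 10, rng) for _ in range(20000)]
print(sum(samples) / len(samples))  # empirical mean, close to 3
```

The same preservation-of-expectation argument is what extracts equation-level identities from the martingales used below.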
In our case E[τ] ≤ |A|: in every step we pick an element uniformly at random from A, so the expected time until e itself is picked to be probed is exactly |A|, and since e can become blocked by other elements even earlier, we have E[τ] ≤ |A|. Also, the martingale we use is ((1 + Σ_{f∈Γ^t(e)} p_f X_f) · P^t_e + Y^t_e)_{t≥0}, and since P^t_e, Y^t_e ∈ {0, 1}, for any t we have (1 + Σ_{f∈Γ^t(e)} p_f X_f) · P^t_e + Y^t_e ≤ |A| + 1, and so E[|Z_{t+1} − Z_t| | F_t] ≤ |A| + 1. Therefore, we can use Doob's Optional-Stopping Theorem in our analysis.
Algorithm 2 Stoch-CR scheme π̄_x(A) for inner matroids M^in_1, . . . , M^in_{k_in} and outer matroids M^out_1, . . . , M^out_{k_out}, and x ∈ b · P(I^in, I^out).
1: //Preprocessing:
2: find support B^0_in[j] of (1/b)·p·x ∈ P(M^in_j) for each M^in_j
3: find support B^0_out[j] of (1/b)·x ∈ P(M^out_j) for each M^out_j
4: find family φ^0_in[j] for every M^in_j; find family φ^0_out[j] for every M^out_j
5: choose critical sets C^in_j for each M^in_j and C^out_j for each M^out_j
6: //Random selection phase:
7: remove from A all e : x_e = 0; mark all e ∈ A as available; S ← ∅
8: while there are still available elements in A do
9:     pick an element e uniformly at random from A
10:    if e is available then
11:        probe e
12:        if probe of e successful then
13:            S ← S ∪ {e}
14:            for each matroid M^in_j do
15:                for each set B^t_i of support B^t_in[j] do
16:                    B^t_i ← B^t_i + e
17:        call e unavailable
18:    else simulate the probe of e
19:    if probe or simulation was successful then
20:        for each set B^t_i of support B^t_in[j] do
21:            f ← φ[B^t_{c^in_j(e)}, B^t_i](e)
22:            if f ≠ e then B^t_i ← B^t_i − f and call f unavailable
23:    for each matroid M^out_j do
24:        for each set B^t_i of support B^t_out[j] do
25:            B^t_i ← B^t_i + e
26:            f ← φ[B^t_{c^out_j(e)}, B^t_i](e)
27:            if f ≠ e then B^t_i ← B^t_i − f and call f unavailable
28:    compute families φ^{t+1}_in[j], φ^{t+1}_out[j]
29:    t ← t + 1
30: return S as π̄_x(A)
E  Full proof of Lemma 3
The full scheme is presented as Algorithm 2. Let us concisely denote by C all the chosen critical sets, i.e., C = (C^in_j)_{j∈[k_in]} × (C^out_j)_{j∈[k_out]}. The transversal mappings φ^0 are found in exactly the same manner as in the single-matroid version.
There are two main differences with respect to what was presented in the main body. First, since p · x ∈ b · P(M^in_j), we have (1/b)·p·x ∈ P(M^in_j), and the support B^0_in[j] is found by decomposing (1/b)·p·x. Thus when an element f chooses a critical set in matroid M^in_j, it chooses it with probability β^in[j]_i / ((1/b)·p_f x_f) = b · β^in[j]_i / (p_f x_f). This results in the following modification of Lemma 11. The proof is completely analogous, so we skip it.
Lemma 16
If (1/b)·p·x ∈ P(M^in_j), then for any element e it holds that
E_{C^in_j,R(x)}[Σ_{f∈Γ^0_in[j](e)} p_f · χ[f ∈ R(x)]] ≤ b,
where the expectation is over R(x) and the choice of critical sets C.
Now we also need to deal with the outer matroids. Again, the proof is completely analogous, so we skip it.
Lemma 17
If (1/b)·x ∈ P(M^out_j), then for any element e it holds that
E_{C^out_j,R(x)}[Σ_{f∈Γ^0_out[j](e)} χ[f ∈ R(x)]] ≤ b,
where the expectation is over R(x) and the choice of critical sets C.
Second, let Y^t_e for t = 0, 1, . . . be a random variable indicating whether e is still available after step t. Initially Y^0_e = X_e. Let P^t_e be a random variable indicating whether e was probed in one of the steps 0, 1, . . . , t. In step t + 1 element e can be blocked if, for some j ∈ [k_in], we pick an element f ∈ Γ^t_in[j](e) and successfully probe it (or successfully simulate its probe), or if we just pick an element f ∈ Γ^t_out[j](e) and probe f or simulate its probe, regardless of the outcome. Let Γ^t_out(e) = ∪_{j∈[k_out]} Γ^t_out[j](e) and let Γ^t_in(e) = ∪_{j∈[k_in]} Γ^t_in[j](e) \ Γ^t_out(e); this subtraction ensures that no element is counted twice, because if an element f belongs to both Γ^t_in[j](e) and Γ^t_out[j](e), then just probing f (or simulating its probe) automatically blocks e, regardless of the outcome of f's probe (simulation), and hence we cannot additionally account for the p_f X_f influence of f on e. Therefore, the probability that e stops being available at step t + 1 is equal to
E[Y^t_e − Y^{t+1}_e | F^t, C] = Y^t_e/|A| + (Y^t_e/|A|) · Σ_{f∈Γ^t_in(e)} p_f X_f + (Y^t_e/|A|) · Σ_{f∈Γ^t_out(e)} X_f.
As in the single matroid case, the probability of probing e is just equal to E[P^{t+1}_e − P^t_e | F^t, C] = Y^t_e/|A|. Thus the martingale we now use in the analysis is
((1 + Σ_{f∈Γ^t_in(e)} p_f X_f + Σ_{f∈Γ^t_out(e)} X_f) · P^t_e + Y^t_e)_{t≥0}.
Let τ = min{t | Y^t_e = 0} be the step in which element e became unavailable. It is clear that τ is a stopping time. Thus from Doob's stopping theorem we get that
E_τ[(1 + Σ_{f∈Γ^τ_in(e)} p_f X_f + Σ_{f∈Γ^τ_out(e)} X_f) · P^τ_e + Y^τ_e | C]
 = E[(1 + Σ_{f∈Γ^0_in(e)} p_f X_f + Σ_{f∈Γ^0_out(e)} X_f) · P^0_e + Y^0_e | C],
and as before, since the transversal mappings are controlled per matroid, we have that Γ^t_in(e) = Γ^0_in(e) and Γ^t_out(e) = Γ^0_out(e) for t ≤ τ. Thus
P[e ∈ π̄_x(A) | C] = p_e · E_τ[P^τ_e | C] = p_e X_e / (1 + Σ_{f∈Γ^0_in(e)} p_f X_f + Σ_{f∈Γ^0_out(e)} X_f).    (8)
Monotonicity follows as before. The approximation guarantee similarly follows from Jensen's inequality.
We take the random set R(x) instead of A; now X_f = χ[f ∈ R(x)] is a random variable. Let us condition on e ∈ R(x), take the expected value E_{C,R(x)}[· | e ∈ R(x)] on both sides of (8), and apply Jensen's inequality to the convex function x ↦ 1/x to get:
P[e ∈ π̄_x(R(x)) | e ∈ R(x)] = E_{C,R(x)}[P[e ∈ π̄_x(R(x)) | C, R(x)] | e ∈ R(x)]
 = E_{C,R(x)}[p_e X_e / (1 + Σ_{f∈Γ^0_in(e)} p_f X_f + Σ_{f∈Γ^0_out(e)} X_f) | e ∈ R(x)]
 ≥ p_e / E_{C,R(x)}[1 + Σ_{f∈Γ^0_in(e)} p_f X_f + Σ_{f∈Γ^0_out(e)} X_f | e ∈ R(x)].
out
in
Since
EC,R(x)
from Lemma 16, and
X
f ∈Γ0in[j] (e)
EC,R(x)
f ∈Γ0out[j] (e)
f ∈Γ0in (e)
X
≤ EC,R(x) 1 +
pf Xf e ∈ R (x) ≤ b
X
from Lemma 17, we conclude that
X
EC,R(x) 1 +
pf Xf +
X
X
f ∈Γ0out (e)
pf Xf +
j∈[k in ] f ∈Γ0in[j] (e)
≤1 +
X
j∈[k in ]
+
EC,R(x)
X
j∈[k out ]
X
EC,R(x)
≤ 1 + kin · b + kout · b.
Xf e ∈ R (x)
X
X
j∈[k out ] f ∈Γ0out[j] (e)
f ∈Γ0in[j] (e)
Xf e ∈ R (x) ≤ b
pf Xf e ∈ R (x)
X
f ∈Γ0out[j] (e)
Xf e ∈ R (x)
Xf e ∈ R (x)
And therefore
P[e ∈ π̄_x(R(x)) | e ∈ R(x)] ≥ p_e / (b(k_in + k_out) + 1),
which yields
P[e ∈ π̄_x(R(x)) | e ∈ act(R(x))] ≥ 1 / (b(k_in + k_out) + 1),
which is exactly Property 3 from the definition of a stoch-CR scheme. Lemma 3 follows.
F  Stochastic k-set packing
In this section we show a (k + 1)-approximation algorithm for Stochastic k-Set Packing.
We are given n elements/columns, where each item e ∈ E = [n] has a profit v_e ∈ R_+ and a random d-dimensional size S_e ∈ {0, 1}^d. The sizes are independent for different items. Additionally, for each item e, there is a set C_e of at most k coordinates such that each size vector S_e takes positive values only in these coordinates, i.e., S_e ⊆ C_e with probability 1. We are also given a capacity vector b ∈ Z^d_+ into which the items must be packed. We assume that v_e is a random variable that can be correlated with S_e. The coordinates of S_e might also be correlated with each other.
An important thing to notice is that in this setting, unlike in the previous ones, when we probe an element there is no success/failure outcome: the size S_e of the element materializes, and the reward v_e is simply drawn.
Let p^j_e = E[S_e(j)] be the expected size of the j-th coordinate of column e. The following LP models the problem; here U(c) denotes a uniform matroid of rank c.

max Σ_{e=1}^{n} E[v_e] · x_e    (LP-k-set)
s.t. p^j · x ∈ P(U(b_j))    ∀j ∈ [d]
     x_e ∈ [0, 1]    ∀e ∈ [n],

where, as usual, x_e stands for P[OPT probes column e]. We are going to present a probing strategy in which, for every element e, the probability that we probe e is at least x_e/(k + 1). From this the theorem will follow.
The algorithm is presented as Algorithm 3.
The constraint for row j is in fact given by a uniform matroid in which we can take at most b_j elements from the subset {e | j ∈ C_e} ⊆ E. Therefore, we can decompose p^j · x = Σ_l β^j_l · B^{j,0}_l. A uniform matroid is a transversal matroid, so we use the transversal mapping φ^0_j between the sets B^{j,0}; also let C = (C^j)_{j∈[d]} be the vector indicating the critical sets. We define, in the same way as we did in Lemma 3, the sets Γ^0_j(e) of blocking elements, i.e.,
Γ^0_j(e) = {f | f ≠ e ∧ φ^0_j[B^0_{c_j(f)}, B^0_{c_j(e)}](f) = e}.
As before, let us from now on condition on C. Let us analyze the impact of f ∈ Γ^t_j(e) on e. Element f ∈ Γ^t_j(e) blocks e when f is chosen and S_f(j) = 1. However, right now f can belong to Γ^t_j(e) for many j ∈ C_e. Therefore, if f is chosen in line 9 of Algorithm 3, then the probability that f blocks e is equal to P[∨_{j: f∈Γ^t_j(e)} (S_f(j) = 1) | C].
Let us now repeat the steps of Lemma 1. Let X_e = χ[e ∈ R(x)]. Let Y^t_e for t = 0, 1, . . . be a random variable indicating whether e is still available after step t. Initially Y^0_e = X_e. Let P^t_e be a random variable indicating whether e was probed in one of the steps 0, 1, . . . , t; we have P^0_e = 0 for all e.
Algorithm 3 Algorithm for stochastic k-set packing
1: //Preprocessing:
2: for each j ∈ [d] do
3:     find support B^0_j of p^j · x in P(U(b_j))
4:     find family φ^0_j
5:     choose critical sets C = (C^j)_{j∈[d]}
6: //Rounding:
7: let A ← R(x); mark all e ∈ A as available; S ← ∅
8: while there are still available elements in A do
9:     pick an element e uniformly at random from A
10:    if e is available then
11:        probe e
12:        S ← S + e
13:        for each j ∈ C_e such that S_e(j) = 1 do
14:            for each set B^{j,t}_i of support B^{j,t} do
15:                B^{j,t}_i ← B^{j,t}_i + e
16:        call e unavailable
17:    else simulate the probe of e
18:    for each j ∈ C_e such that S_e(j) = 1 (whether we probe or simulate) do
19:        for each set B^{j,t}_i of support B^{j,t} do
20:            f ← φ[B^{j,t}_{c_j(e)}, B^{j,t}_i](e)
21:            if f ≠ e then B^{j,t}_i ← B^{j,t}_i − f and call f unavailable
22:    compute the family φ^{t+1}_j
23:    for each i do B^{j,t+1}_i ← B^{j,t}_i
24:    t ← t + 1
25: return S
The variable P^{t+1}_e − P^t_e indicates whether e was probed at step t + 1. Given the information F^t about the process up to step t, the probability of this event is E[P^{t+1}_e − P^t_e | F^t, C] = Y^t_e/|A|, because if element e is still available after step t (i.e., Y^t_e = 1), then with probability 1/|A| we choose it in line 9, and otherwise (i.e., Y^t_e = 0) we cannot probe it.
The variable Y^t_e − Y^{t+1}_e indicates whether element e stopped being available at step t + 1. For this to happen we need to pick some f ∈ Γ^t_j(e) and the probe (or simulation) of f needs to result in a vector S_f with S_f(j) = 1. However, as already noted, there can be many j for which f ∈ Γ^t_j(e). Therefore, the probability that e stops being available at step t + 1 is equal to
E[Y^t_e − Y^{t+1}_e | F^t, C] = Y^t_e/|A| + (Y^t_e/|A|) · Σ_f X_f · P[∨_{j: f∈Γ^t_j(e)} (S_f(j) = 1) | C].
Here we stress the conditioning on C because the sets Γ^t_j are constructed given the choice of critical sets. Again we can reason that
((1 + Σ_f X_f · P[∨_{j: f∈Γ^t_j(e)} (S_f(j) = 1) | C]) · P^t_e + Y^t_e)_{t≥0}
is a martingale. Let τ = min{t | Y^t_e = 0} be the step in which element e became unavailable. It is clear that τ is a stopping time. Thus from Doob's Stopping Theorem we get that
E_τ[(1 + Σ_f X_f · P[∨_{j: f∈Γ^τ_j(e)} (S_f(j) = 1) | C]) · P^τ_e + Y^τ_e | C]
 = E[(1 + Σ_f X_f · P[∨_{j: f∈Γ^0_j(e)} (S_f(j) = 1) | C]) · P^0_e + Y^0_e | C] = X_e.
We argue again, using the properties of the transversal mappings φ^t_j, that Γ^τ_j(e) = Γ^0_j(e) for each j ∈ C_e, since e was available before step τ. And if so, then
E_τ[(1 + Σ_f X_f · P[∨_{j: f∈Γ^0_j(e)} (S_f(j) = 1) | C]) · P^τ_e + Y^τ_e | C] = X_e,
and since 1 + Σ_f X_f · P[∨_{j: f∈Γ^0_j(e)} (S_f(j) = 1) | C] is just a number depending on C, we can say that
(1 + Σ_f X_f · P[∨_{j: f∈Γ^0_j(e)} (S_f(j) = 1) | C]) · E_τ[P^τ_e | C] = X_e.
Note that E_τ[P^τ_e | C] = P[e is probed | C], and conclude that
P[e is probed | C] = X_e / (1 + Σ_f X_f · P[∨_{j: f∈Γ^0_j(e)} (S_f(j) = 1) | C]).
At this point we reason as follows:
Σ_f X_f · P[∨_{j: f∈Γ^0_j(e)} (S_f(j) = 1) | C]
 ≤ Σ_f X_f · Σ_{j: f∈Γ^0_j(e)} P[S_f(j) = 1 | C]
 = Σ_f X_f · Σ_{j: f∈Γ^0_j(e)} p^j_f
 = Σ_{j∈C_e} Σ_{f∈Γ^0_j(e)} p^j_f X_f,
where the inequality simply follows from the union bound. Then we have an identity since the event (S_f(j) = 1) is independent of C and its probability is just equal to p^j_f. Finally we just change the order of summation. Therefore we have shown that
P[e is probed | C] ≥ X_e / (1 + Σ_{j∈C_e} Σ_{f∈Γ^0_j(e)} p^j_f X_f).
We apply the expectation E_{C,R(x)}[· | e ∈ R(x)] to both sides and use Jensen's inequality to get
P[e is probed | e ∈ R(x)] ≥ 1 / (1 + E_{C,R(x)}[Σ_{j∈C_e} Σ_{f∈Γ^0_j(e)} p^j_f X_f | e ∈ R(x)]).
Now we can say that
E_{C,R(x)}[Σ_{j∈C_e} Σ_{f∈Γ^0_j(e)} p^j_f X_f | e ∈ R(x)] = Σ_{j∈C_e} E_{C,R(x)}[Σ_{f∈Γ^0_j(e)} p^j_f X_f | e ∈ R(x)] ≤ Σ_{j∈C_e} 1 = |C_e| ≤ k,
where the last inequality, E_{C,R(x)}[Σ_{f∈Γ^0_j(e)} p^j_f X_f | e ∈ R(x)] ≤ 1 for each j, comes from Lemma 11.
Hence
P[e is probed | e ∈ R(x)] ≥ 1/(1 + k),
which gives
P[e is probed] = P[e ∈ R(x)] · P[e is probed | e ∈ R(x)] ≥ x_e/(1 + k),
as desired.
G  Stochastic Matching and handling negative correlation
In the Stochastic Matching problem we are given an undirected graph G = (V, E). Each edge e ∈ E is assigned a probability p_e ∈ (0, 1] and a weight w_e > 0, and each node v ∈ V is assigned a patience t_v ∈ N_+. Each time an edge e is probed, it turns out to be present with probability p_e, in which case it is (irrevocably) included in the matching we gradually construct and it gives profit w_e. Therefore the inner constraints are given by the intersection of two partition matroids. We can probe at most t_u edges incident to a node u; these outer constraints can also be described by the intersection of two partition matroids. Our goal is to maximize the expected weight of the constructed matching.
Let us consider the bipartite case where V = A ∪ B and E ⊆ A × B. In this case Bansal et al. [4] provided an LP-based 3-approximation. We shall also obtain this approximation factor. Here we present a variant of our scheme from Lemma 3 that does not use transversal mappings, because in this case they are trivial. Moreover, previously we required the input of the stoch-CR scheme to be a set of elements sampled independently, i.e., R(x). Now we shall apply the ideas from Lemma 3, but we use them on a set of edges that are negatively correlated. Such a set is returned by the algorithm of Gandhi et al. [15]. Also, since the objective is linear, and not submodular, we do not have to take care of the monotonicity of the scheme, and therefore we do not draw edges with repetitions. In fact we just scan the rounded edges according to a random permutation, and therefore the algorithm below is the same as the one presented by Bansal et al. [4]. Thus what follows is an alternative analysis of the algorithm from [4].
Consider the following LP for the stochastic matching problem:

max Σ_e w_e p_e x_e    (9)
s.t. Σ_{e∈δ(v)} p_e x_e ≤ 1    ∀v ∈ V    (10)
     Σ_{e∈δ(v)} x_e ≤ t_v    ∀v ∈ V    (11)
     0 ≤ x_e ≤ 1    ∀e ∈ E.    (12)
Suppose that (x_e)_{e∈E} is the optimal solution to this LP. We round the solution with the dependent rounding of Gandhi et al. [15]; we call the algorithm GKPS. Let (X̂_e)_{e∈E} be the rounded solution, and denote Ê = {e ∈ E | X̂_e = 1}. From the definition of dependent rounding we know that:
1. (Marginal distribution) P[X̂_e = 1] = x_e;
2. (Degree preservation) For any v ∈ V it holds that Σ_{e∈δ(v)} X̂_e ≤ ⌈Σ_{e∈δ(v)} x_e⌉ ≤ t_v;
3. (Negative correlation) For any v ∈ V, any subset S ⊆ δ(v) of edges incident to v, and any b ∈ {0, 1}, it holds that P[∧_{e∈S} (X̂_e = b)] ≤ Π_{e∈S} P[X̂_e = b].
The negative correlation property and constraint (10) imply that

    E[ Σ_{f∈δ(e)} pf X̂f | e ∈ Ê ] ≤ E[ Σ_{f∈δ(e)} pf X̂f ] = Σ_{f∈δ(e)} pf xf ≤ 2 − 2 pe xe.    (13)
STOCHASTIC MATCHING AND HANDLING NEGATIVE CORRELATION
Algorithm 4 Algorithm for Stochastic Matching
 1: Solve the LP; let x be an optimal solution
 2: let X̂ ∈ {0,1}^E be a solution rounded using GKPS; let Ê = { e | X̂e = 1 }; call every e ∈ Ê safe
 3: while there are still safe elements in Ê do
 4:    pick an element e uniformly at random from the safe elements of Ê
 5:    probe e
 6:    if the probe is successful then
 7:       S ← S ∪ {e}
 8:       call every f ∈ Ê ∩ δ(e) blocked
 9: return S
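As a sanity check, the probe-and-block loop of Algorithm 4 can be simulated in Python (a toy sketch under assumed inputs: a rounded edge set Ê, success probabilities pe, and a fixed seed; the function name and data layout are illustrative):

```python
import random

def probe_and_match(E_hat, p, seed=0):
    """Toy simulation of Algorithm 4's probe loop: pick a safe edge uniformly
    at random, probe it; on success, add it to the matching and block every
    still-safe adjacent edge."""
    rng = random.Random(seed)
    safe = set(E_hat)
    S = []
    while safe:
        e = rng.choice(sorted(safe))   # uniform pick among safe edges
        safe.discard(e)                # each edge is probed at most once
        if rng.random() < p[e]:        # probe succeeds with probability p[e]
            S.append(e)
            # block every safe edge sharing an endpoint with e
            safe = {f for f in safe if not (set(f) & set(e))}
    return S

E_hat = [("a", "x"), ("a", "y"), ("b", "x"), ("b", "y")]
p = {e: 0.7 for e in E_hat}
S = probe_and_match(E_hat, p, seed=1)
# by construction S is a matching: no two chosen edges share an endpoint
```

The blocking step is what enforces the matching property regardless of the random probe outcomes.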
Given the solution (X̂e)_{e∈E} we execute the selection procedure presented in Algorithm 4. Because
of the degree preservation property we will not exceed the patience of any vertex. We say that
an edge e is safe if no other edge adjacent to e was already successfully probed; otherwise the edge
is blocked. Initially all edges are safe.

The expected outcome of our algorithm is

    Σ_e P[e probed] · pe · we = Σ_e P[X̂e = 1] · P[e probed | X̂e = 1] · pe · we
                              = Σ_e we pe xe · P[e probed | X̂e = 1],

and we shall show that P[e probed | X̂e = 1] ≥ 1/3 for any edge e, which will imply a 1/3-approximation
of the algorithm.
From now on let us condition on knowing the set of edges Ê and that X̂e = 1.
Consider a random variable Ye^t which indicates if edge e is still in the graph after step t. We
consider the variable Yf^t for any edge f ∈ E. Initially we have Yf^0 = 1 for any f ∈ Ê, and Yf^0 = 0
for f ∉ Ê. Let the variable Pe^t denote whether edge e was probed in one of the steps 0, 1, ..., t; we have Pe^0 = 0.
Let Σ_t be the number of edges that are left after t steps.

The variable Pe^{t+1} − Pe^t indicates whether edge e was probed in step t + 1. Given the information F^t about the process up to step
t, the probability of this event is

    E[ Pe^{t+1} − Pe^t | F^t, Ê, X̂e = 1 ] = Ye^t / Σ_t,

i.e., if edge e still exists in the graph after step t (i.e. Ye^t = 1), then the probability is 1/Σ_t, otherwise it is 0.

The variable Ye^t − Ye^{t+1} indicates whether edge e was removed from the graph (probed or blocked) in step t + 1. Given
F^t, the probability of this event is

    E[ Ye^t − Ye^{t+1} | F^t, Ê, X̂e = 1 ] = (Ye^t / Σ_t) · ( Σ_{f∈δ(e)} pf Yf^t + 1 ).
It is immediate to note that Yf^t ≤ X̂f for any edge f, and that Pe^{t+1} − Pe^t is always non-negative. Hence

    E[ ( Σ_{f∈δ(e)} pf X̂f + 1 ) · ( Pe^{t+1} − Pe^t ) − ( Ye^t − Ye^{t+1} ) | F^t, Ê, X̂e = 1 ]
    ≥ E[ ( Σ_{f∈δ(e)} pf Yf^t + 1 ) · ( Pe^{t+1} − Pe^t ) − ( Ye^t − Ye^{t+1} ) | F^t, Ê, X̂e = 1 ] = 0,

which means that the sequence

    ( ( Σ_{f∈δ(e)} pf X̂f + 1 ) · Pe^t − ( 1 − Ye^t ) )_{t≥0}

is a super-martingale.
Let τ = min{ t | Ye^t = 0 } be the step in which edge e was either blocked or probed. It is clear
that τ is a stopping time. Thus from Doob's Stopping Theorem — this time in the variant for
super-martingales, i.e., if E[ Z^{t+1} − Z^t | F^t ] ≥ 0, then E[Z^τ] ≥ E[Z^0] — we get that

    E_τ[ ( Σ_{f∈δ(e)} pf X̂f + 1 ) · Pe^τ − ( 1 − Ye^τ ) | Ê, X̂e = 1 ]
    ≥ E[ ( Σ_{f∈δ(e)} pf X̂f + 1 ) · Pe^0 − ( 1 − Ye^0 ) | Ê, X̂e = 1 ],

where the expectation above is over the random variable τ only. Since Pe^0 = 0, Ye^0 = 1 and Ye^τ = 0,
the above inequality implies that

    E_τ[ ( Σ_{f∈δ(e)} pf X̂f + 1 ) · Pe^τ | Ê, X̂e = 1 ] ≥ 1.
Since we condition all the time on Ê and X̂e = 1, we can write that

    ( Σ_{f∈δ(e)} pf X̂f + 1 ) · E_τ[ Pe^τ | Ê, X̂e = 1 ] ≥ 1.

Let us notice that E_τ[ Pe^τ | Ê, X̂e = 1 ] is exactly equal to P[ e probed | Ê, X̂e = 1 ]. Thus we can
write that

    P[ e probed | Ê, X̂e = 1 ] ≥ 1 / ( Σ_{f∈δ(e)} pf X̂f + 1 ).
Now we can apply to both sides of the above inequality the expectation over Ê, still conditioned
on X̂e = 1:

    P[ e probed | X̂e = 1 ] = E_Ê[ P[ e probed | Ê, X̂e = 1 ] | X̂e = 1 ]
                           ≥ E_Ê[ 1 / ( Σ_{f∈δ(e)} pf X̂f + 1 ) | X̂e = 1 ],

and from Jensen's inequality, and the fact that x ↦ 1/x is convex, we get that

    E_Ê[ 1 / ( Σ_{f∈δ(e)} pf X̂f + 1 ) | X̂e = 1 ] ≥ 1 / E_Ê[ Σ_{f∈δ(e)} pf X̂f + 1 | X̂e = 1 ].
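The Jensen step can be illustrated numerically (a toy Python check, not part of the proof: for any nonnegative sample standing in for Σ_f pf X̂f, the empirical mean of 1/(X+1) dominates 1/(mean(X)+1), by convexity applied to the empirical measure):

```python
import random

rng = random.Random(1)
# toy nonnegative random variable standing in for sum_f p_f * X_hat_f
xs = [rng.uniform(0.0, 2.0) for _ in range(10_000)]
lhs = sum(1.0 / (x + 1.0) for x in xs) / len(xs)   # E[1/(X+1)]
rhs = 1.0 / (sum(xs) / len(xs) + 1.0)              # 1/(E[X]+1)
assert lhs >= rhs   # Jensen: x -> 1/(x+1) is convex on x >= 0
```

The inequality is exact for any empirical distribution, so the assertion holds for every sample, not just in expectation.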
From inequality (13) we get

    E_Ê[ Σ_{f∈δ(e)} pf X̂f + 1 | X̂e = 1 ] ≤ 3 − 2 xe pe ≤ 3,

and we conclude that

    P[ e probed | X̂e = 1 ] ≥ 1/3.
Program Induction by Rationale Generation:
Learning to Solve and Explain Algebraic Word Problems
Wang Ling♠
Dani Yogatama♠
Chris Dyer♠
Phil Blunsom♠♦
♠DeepMind
♦University of Oxford
{lingwang,dyogatama,cdyer,pblunsom}@google.com
arXiv:1705.04146v3 [] 23 Oct 2017
Abstract
Solving algebraic word problems requires executing a series of arithmetic
operations—a program—to obtain a final
answer. However, since programs can
be arbitrarily complicated, inducing them
directly from question-answer pairs is a
formidable challenge. To make this task
more feasible, we solve these problems by
generating answer rationales, sequences
of natural language and human-readable
mathematical expressions that derive the
final answer through a series of small
steps. Although rationales do not explicitly specify programs, they provide a scaffolding for their structure via intermediate milestones. To evaluate our approach,
we have created a new 100,000-sample
dataset of questions, answers and rationales. Experimental results show that indirect supervision of program learning via
answer rationales is a promising strategy
for inducing arithmetic programs.
1 Introduction
Behaving intelligently often requires mathematical reasoning. Shopkeepers calculate change,
tax, and sale prices; agriculturists calculate the
proper amounts of fertilizers, pesticides, and water for their crops; and managers analyze productivity. Even determining whether you have enough
money to pay for a list of items requires applying
addition, multiplication, and comparison. Solving these tasks is challenging as it involves recognizing how goals, entities, and quantities in the
real-world map onto a mathematical formalization, computing the solution, and mapping the solution back onto the world. As a proxy for the
richness of the real world, a series of papers have
used natural language specifications of algebraic
word problems, and solved these by either learning to fill in templates that can be solved with
equation solvers (Hosseini et al., 2014; Kushman
et al., 2014) or inferring and modeling operation
sequences (programs) that lead to the final answer (Roy and Roth, 2015).
In this paper, we learn to solve algebraic word
problems by inducing and modeling programs that
generate not only the answer, but an answer rationale, a natural language explanation interspersed
with algebraic expressions justifying the overall
solution. Such rationales are what examiners require from students in order to demonstrate understanding of the problem solution; they play the
very same role in our task. Not only do natural
language rationales enhance model interpretability, but they provide a coarse guide to the structure
of the arithmetic programs that must be executed.
In fact the learner we propose (which relies on a
heuristic search; §4) fails to solve this task without modeling the rationales—the search space is
too unconstrained.
This work is thus related to models that can
explain or rationalize their decisions (Hendricks
et al., 2016; Harrison et al., 2017). However, the
use of rationales in this work is quite different
from the role they play in most prior work, where
interpretation models are trained to generate plausible sounding (but not necessarily accurate) posthoc descriptions of the decision making process
they used. In this work, the rationale is generated
as a latent variable that gives rise to the answer—it
is thus a more faithful representation of the steps
used in computing the answer.
This paper makes three contributions. First, we
have created a new dataset with more than 100,000
algebraic word problems that includes both answers and natural language answer rationales (§2).
Figure 1 illustrates three representative instances
Problem 1:
Question: Two trains running in opposite directions cross a
man standing on the platform in 27 seconds and 17 seconds
respectively and they cross each other in 23 seconds. The ratio
of their speeds is:
Options: A) 3/7 B) 3/2 C) 3/88 D) 3/8 E) 2/2
Rationale: Let the speeds of the two trains be x m/sec and y
m/sec respectively. Then, length of the first train = 27x meters,
and length of the second train = 17 y meters. (27x + 17y) / (x +
y) = 23 → 27x + 17y = 23x + 23y → 4x = 6y → x/y = 3/2.
Correct Option: B
Problem 2:
Question: From a pack of 52 cards, two cards are drawn together at random. What is the probability of both the cards
being kings?
Options: A) 2/1223 B) 1/122 C) 1/221 D) 3/1253 E) 2/153
Rationale: Let s be the sample space.
Then n(s) = 52C2 = 1326
E = event of getting 2 kings out of 4
n(E) = 4C2 = 6
P(E) = 6/1326 = 1/221
Answer is C
Correct Option: C
Problem 3:
Question: For which of the following does p(a)−p(b) = p(a−
b) for all values of a and b?
Options: A) p(x) = x² , B) p(x) = x/2, C) p(x) = x + 5, D)
p(x) = 2x1, E) p(x) = |x|
Rationale: To solve this easiest way is just put the value and
see that if it equals or not.
with option A. p(a) = a² and p(b) = b²
so L.H.S = a² − b²
and R.H.S = (a − b)² → a² + b² − 2ab.
so L.H.S not equal to R.H.S
with option B. p(a) = a/2 and p(b) = b/2
L.H.S = a/2 − b/2 → 1/2(a − b)
R.H.S = (a − b)/2
so L.H.S = R.H.S which is the correct answer.
answer:B
Correct Option: B
Figure 1: Examples of solved math problems.
from the dataset. Second, we propose a sequence
to sequence model that generates a sequence of instructions that, when executed, generates the rationale; only after this is the answer chosen (§3).
Since the target program is not given in the training data (most obviously, its specific form will depend on the operations that are supported by the
program interpreter), the third contribution is thus
a technique for inferring programs that generate a
rationale and, ultimately, the answer. Even constrained by a text rationale, the search space of
possible programs is quite large, and we employ
a heuristic search to find plausible next steps to
guide the search for programs (§4). Empirically,
we are able to show that state-of-the-art sequence
to sequence models are unable to perform above
chance on this task, but that our model doubles the
accuracy of the baseline (§6).
2 Dataset
We built a dataset1 with 100,000 problems with
the annotations shown in Figure 1. Each question
is decomposed in four parts, two inputs and two
outputs: the description of the problem, which we
will denote as the question, and the possible (multiple choice) answer options, denoted as options.
Our goal is to generate the description of the rationale used to reach the correct answer, denoted as
rationale and the correct option label. Problem
1 illustrates an example of an algebra problem,
which must be translated into an expression (i.e.,
(27x + 17y)/(x + y) = 23) and then the desired
quantity (x/y) solved for. Problem 2 is an example that could be solved by multi-step arithmetic
operations proposed in (Roy and Roth, 2015). Finally, Problem 3 describes a problem that is solved
by testing each of the options, which has not been
addressed in the past.
2.1 Construction
We first collect a set of 34,202 seed problems that
consist of multiple option math questions covering
a broad range of topics and difficulty levels. Examples of exams with such problems include the
GMAT (Graduate Management Admission Test)
and GRE (General Test). Many websites contain
example math questions in such exams, where the
answer is supported by a rationale.
Next, we turned to crowdsourcing to generate
new questions. We create a task where users are
presented with a set of 5 questions from our seed
dataset. Then, we ask the Turker to choose one
of the questions and write a similar question. We
also force the answers and rationale to differ from
the original question in order to avoid paraphrases
of the original question. Once again, we manually
check a subset of the jobs for each Turker for quality control. The types of questions generated using this method vary. Some Turkers propose small
changes in the values of the questions (e.g., changing the equality p(a) − p(b) = p(a − b) in Problem 3 to a different equality is a valid question, as
long as the rationale and options are rewritten to
reflect the change). We designate these as replica
problems, as the natural language used in the question and rationales tends to be only minimally altered. Others propose new problems in the same
topic where the generated questions tend to differ more radically from existing ones.

1 Available at https://github.com/deepmind/AQuA

                              Question    Rationale
    Training Examples                100,949
    Dev Examples                         250
    Test Examples                        250
    Numeric       Average Length      9.6        16.6
                  Vocab Size       21,009      14,745
    Non-Numeric   Average Length     67.8        89.1
                  Vocab Size       17,849      25,034
    All           Average Length     77.4       105.7
                  Vocab Size       38,858      39,779

Table 1: Descriptive statistics of our dataset.

[Figure 2 here: histogram of example counts by total length; y-axis frequency (0 to 3000), x-axis length (0 to 1000).]

Some Turkers also copy math problems available on the web,
and we define in the instructions that this is not
allowed, as it will generate multiple copies of the
same problem in the dataset if two or more Turkers
copy from the same resource. These Turkers can
be detected by checking the nearest neighbours
within the collected datasets as problems obtained
from online resources are frequently submitted by
more than one Turker. Using this method, we obtained 70,318 additional questions.
2.2 Statistics
Descriptive statistics of the dataset are shown in Table 1. In total, we collected 104,519 problems
(34,202 seed problems and 70,318 crowdsourced
problems). We removed 500 problems as heldout
set (250 for development and 250 for testing). As
replicas of the heldout problems may be present in
the training set, these were removed manually by
listing for each heldout instance the closest problems in the training set in terms of character-based
Levenshtein distance. After filtering, 100,949 problems remained in the training set.
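The character-based filtering step can be sketched as follows (a minimal Python sketch using the standard two-row edit-distance DP; the helper names and the distance threshold are illustrative, not the authors' exact procedure):

```python
def levenshtein(a: str, b: str) -> int:
    """Character-based edit distance via the standard two-row DP."""
    if len(a) < len(b):
        a, b = b, a
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, start=1):
        curr = [i]
        for j, cb in enumerate(b, start=1):
            curr.append(min(prev[j] + 1,                 # deletion
                            curr[j - 1] + 1,             # insertion
                            prev[j - 1] + (ca != cb)))   # substitution
        prev = curr
    return prev[-1]

def near_duplicates(heldout: str, training: list, max_dist: int = 20):
    """Flag training problems suspiciously close to a heldout instance."""
    return [t for t in training if levenshtein(heldout, t) <= max_dist]

assert levenshtein("kitten", "sitting") == 3
```

In practice the flagged nearest neighbours would then be inspected and removed manually, as the paper describes.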
We also show the average number of tokens (total number of tokens in the question, options and
rationale) and the vocabulary size of the questions
and rationales. Finally, we provide the same statistics exclusively for tokens that are numeric values
and tokens that are not.
Figure 2 shows the distribution of examples
based on the total number of tokens. We can see
that most examples consist of 30 to 500 tokens, but
there are also extremely long examples with more
than 1000 tokens in our dataset.
3 Model
Generating rationales for math problems is challenging as it requires models that learn to perform math operations at a finer granularity, as each step within the solution must be explained.
For instance, in Problem 1, the equation (27x +
17y)/(x + y) = 23 must be solved to obtain
the answer. In previous work (Kushman et al.,
2014), this could be done by feeding the equation
into an expression solver to obtain x/y = 3/2.
However, this would skip the intermediate steps
27x + 17y = 23x + 23y and 4x = 6y, which must
also be generated in our problem. We propose a
model that jointly learns to generate the text in the
rationale, and to perform the math operations required to solve the problem. This is done by generating a program, containing both instructions that
generate output and instructions that simply generate intermediate values used by following instructions.
3.1 Problem Definition
In traditional sequence to sequence models (Sutskever et al., 2014; Bahdanau et al.,
2014), the goal is to predict the output sequence
y = y1 , . . . , y|y| from the input sequence
x = x1 , . . . , x|x| , with lengths |y| and |x|.
In our particular problem, we are given the
problem and the set of options, and wish to predict the rationale and the correct option. We set x
as the sequence of words in the problem, concatenated with words in each of the options separated
by a special tag. Note that knowledge about the
possible options is required as some problems are
solved by the process of elimination or by testing
each of the options (e.g. Problem 3). We wish to
generate y, which is the sequence of words in the
rationale. We also append the correct option as the
last word in y, which is interpreted as the chosen
option. For example, y in Problem 1 is “Let the
. . . = 3/2 . hEORi B hEOSi”, whereas in Problem
2 it is “Let s be . . . Answer is C hEORi C hEOSi”,
where “hEOSi” is the end of sentence symbol and
"hEORi" is the end of rationale symbol.

      i    x            z                     v         r
      1    From         Id("Let")             Let       y1
      2    a            Id("s")               s         y2
      3    pack         Id("be")              be        y3
      4    of           Id("the")             the       y4
      5    52           Id("sample")          sample    y5
      6    cards        Id("space")           space     y6
      7    ,            Id(".")               .         y7
      8    two          Id("\n")              \n        y8
      9    cards        Id("Then")            Then      y9
     10    are          Id("n")               n         y10
     11    drawn        Id("(")               (         y11
     12    together     Id("s")               s         y12
     13    at           Id(")")               )         y13
     14    random       Id("=")               =         y14
     15    .            Str to Float(x5)      52        m1
     16    What         Float to Str(m1)      52        y15
     17    is           Id("C")               C         y16
     18    the          Id("2")               2         y17
     19    probability  Id("=")               =         y18
     20    of           Str to Float(y17)     2         m2
     21    both         Choose(m1, m2)        1326      m3
     22    cards        Float to Str(m3)      1326      y19
     23    being        Id("E")               E         y20
     24    kings        Id("=")               =         y21
     25    ?            Id("event")           event     y22
     26    <O>          Id("of")              of        y23
     27    A)           Id("getting")         getting   y24
     28    2/1223       Id("2")               2         y25
     29    <O>          Id("kings")           kings     y26
     30    B)           Id("out")             out       y27
     31    1/122        Id("of")              of        y28
     ...   ...          ...                   ...       ...
     |z|                Id("hEOSi")           hEOSi     y|y|

Table 2: Example of a program z that would generate the output y. In v, italics indicates string
types; bold indicates float types. Refer to §3.3 for description of variable names.
3.2 Generating Programs to Generate Rationales
We wish to generate a latent sequence of program
instructions, z = z1 , . . . , z|z| , with length |z|,
that will generate y when executed.
We express z as a program that can access x, y,
and the memory buffer m. Upon finishing execution we expect that the sequence of output tokens
to be placed in the output vector y.
Table 2 illustrates an example of a sequence of
instructions that would generate an excerpt from
Problem 2, where columns x, z, v, and r denote
the input sequence, the instruction sequence (program), the values of executing the instruction, and
where each value vi is written (i.e., either to the
output or to the memory). In this example, instructions from indexes 1 to 14 simply fill each position
with the observed output y1 , . . . , y14 with a string,
where the Id operation simply returns its parameter without applying any operation. As such, running this operation is analogous to generating a
word by sampling from a softmax over a vocabulary. However, instruction z15 reads the input word
x5 , 52, and applies the operation Str to Float,
which converts the word 52 into a floating point
number, and the same is done for instruction z20 ,
which reads a previously generated output word
y17 . Unlike instructions z1 , . . . , z14 , these operations write to the external memory m, which
stores intermediate values. A more sophisticated
instruction—which shows some of the power of
our model—is z21 = Choose(m1 , m2 ) → m3 , which evaluates the binomial coefficient (m1 choose m2) and stores the result in m3 .
This process repeats until the model generates the
end-of-sentence symbol. The last token of the program as said previously must generate the correct
option value, from “A” to “E”.
By training a model to generate instructions that
can manipulate existing tokens, the model benefits from the additional expressiveness needed
to solve math problems within the generation
process. In total we define 22 different operations, 13 of which are frequently used operations when solving math problems. These are:
Id, Add, Subtract, Multiply, Divide,
Power, Log, Sqrt, Sine, Cosine, Tangent,
Factorial, and Choose (number of combinations). We also provide 2 operations to convert between Radians and Degrees, as these
are needed for the sine, cosine and tangent operations. There are 6 operations that convert floating
point numbers into strings and vice-versa. These
include the Str to Float and Float to Str
operations described previously, as well as operations which convert between floating point numbers and fractions, since in many math problems
the answers are in the form “3/4”. For the same
reason, an operation to convert between a floating point number and number grouped in thousands is also used (e.g. 1000000 to “1,000,000”
or “1.000.000”). Finally, we define an operation (Check) that given the input string, searches
through the list of options and returns a string with
the option index in {“A”, “B”, “C”, “D”, “E”}. If
the input value does not match any of the options,
or more than one option contains that value, it cannot be applied. For instance, in Problem 2, once
the correct probability “1/221” is generated, by applying the check operation to this number we can
obtain the correct option "C".

Figure 3: Illustration of the generation process of a single instruction tuple at timestamp i. [network diagram: state hi feeds a softmax over operations oi; states ri and qi,j feed softmaxes for the result placement and for each argument ai,j via copy-input, copy-output, or execute paths, looping while j < argc(oi); the value vi and final state produce hi+1.]
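The Check operation described above can be sketched as follows (a toy Python sketch; the function name and option encoding are illustrative):

```python
def check(value, options):
    """Return the option letter whose text equals `value`, but only if
    exactly one option matches; otherwise the operation cannot be applied."""
    hits = [letter for letter, text in options.items() if text == value]
    return hits[0] if len(hits) == 1 else None

# options from Problem 2
opts = {"A": "2/1223", "B": "1/122", "C": "1/221", "D": "3/1253", "E": "2/153"}
assert check("1/221", opts) == "C"    # matches exactly one option
assert check("7/999", opts) is None   # no match: cannot be applied
```

Returning None for zero or multiple matches models the paper's rule that the instruction is simply inapplicable in those cases.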
3.3 Generating and Executing Instructions
In our model, programs consist of sequences of
instructions, z. We turn now to how we model
each zi , conditional on the text program specification, and the program’s history. The instruction
zi is a tuple consisting of an operation (oi ), an ordered sequence of its arguments (ai ), and a decision about where its results will be placed (ri ) (is
it appended in the output y or in a memory buffer
m?), and the result of applying the operation to its
arguments (vi ). That is, zi = (oi , ri , ai , vi ).
Formally, oi is an element of the pre-specified
set of operations O, which contains, for example
add, div, Str to Float, etc. The number of
arguments required by oi is given by argc(oi ), e.g.,
argc(add) = 2 and argc(log) = 1. The arguments are ai = ai,1 , . . . , ai,argc(oi ) . An instruction will generate a return value vi upon execution,
which will either be placed in the output y or hidden. This decision is controlled by ri . We define
the instruction probability as:
    p(oi , ai , ri , vi | z<i , x, y, m) = p(oi | z<i , x) × p(ri | z<i , x, oi )
                                           × Π_{j=1..argc(oi)} p(ai,j | z<i , x, oi , m, y)
                                           × [vi = apply(oi , a)],
where [p] evaluates to 1 if p is true and 0 otherwise,
and apply(f, x) evaluates the operation f with arguments x. Note that the apply function is not
learned, but pre-defined.
The network used to generate an instruction at
a given timestamp i is illustrated in Figure 3. We
first use the recurrent state hi to generate p(oi | z<i , x) = softmax_{oi∈O}(hi ), using a softmax over the
set of available operations O.
In order to predict ri , we generate a new hidden state ri , which is a function of the current program context hi , and an embedding of the current predicted operation, oi . As the output can
either be placed in the memory m or the output
y, we compute the probability p(ri = OUTPUT |
z <i , x, oi ) = σ(ri · wr + br ), where σ is the logistic sigmoid function. If ri = OUTPUT, vi is
appended to the output y; otherwise it is appended
to the memory m.
Once we generate ri , we must predict ai , the
argc(oi )-length sequence of arguments that operation oi requires. The jth argument ai,j can be
either generated from a softmax over the vocabulary, copied from the input vector x, or copied
from previously generated values in the output
y or memory m. This decision is modeled using a latent predictor network (Ling et al., 2016),
where the control over which method used to generate ai,j is governed by a latent variable qi,j ∈
{SOFTMAX, COPY- INPUT, COPY- OUTPUT}. Similar to when predicting ri , in order to make this
choice, we also generate a new hidden state for
each argument slot j, denoted by qi,j with an
LSTM, feeding the previous argument in at each
time step, and initializing it with ri and by reading
the predicted value of the output ri .
• If qi,j = SOFTMAX, ai,j is generated by sampling from a softmax over the vocabulary Y,
  p(ai,j | qi,j = SOFTMAX) = softmax_{ai,j∈Y}(qi,j ).
This corresponds to a case where a string is used
as argument (e.g. y1 =“Let”).
• If qi,j = COPY- INPUT, ai,j is obtained by copying an element from the input vector with a
pointer network (Vinyals et al., 2015) over input
words x1 , . . . , x|x| , represented by their encoder
LSTM state u1 , . . . , u|x| . As such, we compute
the probability distribution over input words as:
    p(ai,j | qi,j = COPY-INPUT) = softmax_{ai,j ∈ x1,...,x|x|} f(u_{ai,j}, qi,j )    (1)
Function f computes the affinity of each token xai,j and the current output context qi,j . A
common implementation of f , which we follow,
is to apply a linear projection from [uai,j ; qi,j ]
into a fixed size vector (where [u; v] is vector
concatenation), followed by a tanh and a linear
projection into a single value.
• If qi,j = COPY-OUTPUT, the model copies from either the output y or the memory m. This is
  equivalent to finding the instruction zi where the value was generated. Once again, we define a pointer network that points to the output
  instructions and define the distribution over previously generated instructions as:

    p(ai,j | qi,j = COPY-OUTPUT) = softmax_{ai,j ∈ z1,...,zi−1} f(h_{ai,j}, qi,j ).

  Here, the affinity is computed using the decoder state h_{ai,j} and the current state qi,j .

Finally, we embed the argument ai,j [2] and the state qi,j to generate the next state qi,j+1 . Once
all arguments for oi are generated, the operation is executed to obtain vi . Then, the embedding of
vi , the final state of the instruction qi,|ai| and the previous state hi are used to generate the state at
the next timestamp hi+1 .

[2] The embeddings of a given argument ai,j and the return value vi are obtained with a lookup table embedding and two flags indicating whether it is a string and whether it is a float. Furthermore, if the value is a float we also add its numeric value as a feature.

4 Inducing Programs while Learning

The set of instructions z that will generate y is unobserved. Thus, given x we optimize the marginal
probability function:

    p(y | x) = Σ_{z∈Z} p(y | z) p(z | x) = Σ_{z∈Z(y)} p(z | x),

where p(y | z) is the Kronecker delta function δ_{e(z),y}, which is 1 if the execution of z, denoted as
e(z), generates y and 0 otherwise. Thus, we can redefine p(y|x), the marginal over all programs Z,
as a marginal over programs that would generate y, defined as Z(y). As marginalizing over z ∈
Z(y) is intractable, we approximate the marginal by generating samples from our model. Denote
the set of samples that are generated by Ẑ(y). We maximize Σ_{z∈Ẑ(y)} p(z | x).

However, generating programs that generate y is not trivial, as randomly sampling from the RNN
distribution over instructions at each timestamp is unlikely to generate a sequence z ∈ Z(y). This is analogous to the question answering
work in Liang et al. (2016), where the query that generates the correct answer must be found during inference, and training proved to be difficult
without supervision. In Roy and Roth (2015) this problem is also addressed by adding prior knowledge to constrain the exponential space.

In our work, we leverage the fact that we are generating rationales, where there is a sense of
progression within the rationale. That is, we assume that the rationale solves the problem step by step. For instance, in Problem 2, the rationale first
describes the number of combinations of two cards in a deck of 52 cards, then describes the number
of combinations of two kings, and finally computes the probability of drawing two kings. Thus,
while generating the final answer without the rationale requires a long sequence of latent instructions, generating each of the tokens of the rationale
requires far fewer operations.

More formally, given the sequence z1 , . . . , zi−1 generated so far, and the possible values for zi
given by the network, denoted Zi , we wish to filter Zi to Zi (yk ), which denotes the set of possible options that contain at least one path capable of generating the next token at index k. Finding the set
Zi (yk ) is achieved by testing all combinations of instructions that are possible with at most one level
of indirection, and keeping those that can generate yk . This means that the model can only generate one intermediate value in memory (not including the operations that convert strings into floating
point values and vice-versa).
Decoding. During decoding we find the most
likely sequence of instructions z given x, which
can be performed with a stack-based decoder.
However, it is important to refer that each generated instruction zi = (oi , ri , ai,1 , . . . , ai,|ai | , vi )
must be executed to obtain vi . To avoid generating
unexecutable code—e.g., log(0)—each hypothesis
instruction is executed and removed if an error occurs. Finally, once the “hEORi” tag is generated,
we only allow instructions that would generate one
of the options "A" to "E", which
guarantees that one of the options is chosen.
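The execute-and-prune step during decoding can be sketched as follows (a toy Python sketch; the hypothesis list and the set of caught errors are illustrative):

```python
import math

def safe_apply(op, args):
    """Execute a hypothesis instruction; signal failure instead of crashing."""
    try:
        return op(*args)
    except (ValueError, ZeroDivisionError, OverflowError):
        return None

# hypothesis instructions proposed by the decoder (illustrative)
hypotheses = [(math.log, (0.0,)),    # log(0): unexecutable, pruned
              (math.log, (10.0,)),   # executes fine
              (math.sqrt, (-1.0,))]  # sqrt of a negative: pruned

survivors = [(op, args, v) for op, args in hypotheses
             if (v := safe_apply(op, args)) is not None]
# only the math.log(10.0) hypothesis survives
```

Pruning at execution time keeps the beam free of instructions whose values could never contribute to a valid rationale.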
5 Staged Back-propagation
As it is shown in Figure 2, math rationales with
more than 200 tokens are not uncommon, and with
additional intermediate instructions, the size of z can
easily exceed 400. This poses a practical challenge
for training the model.
For both the attention and copy mechanisms,
for each instruction zi , the model needs to compute the probability distribution between all the attendable units c conditioned on the previous state
hi−1 . For the attention model and input copy
mechanisms, c = x0,i−1 and for the output copy
mechanism c = z. These operations generally
involve an exponential number of matrix multiplications as the size of c and z grows. For instance, during the computation of the probabilities
for the input copy mechanism in Equation 1, the
affinity function f between the current context q
and a given input uk is generally implemented by
projecting u and q into a single vector followed
by a non-linearity, which is projected into a single affinity value. Thus, for each possible input
u, 3 matrix multiplications must be performed.
Furthermore, for RNN unrolling, parameters and
intermediate outputs for these operations must be
replicated for each timestamp. Thus, as z becomes
larger the attention and copy mechanisms quickly
become a memory bottleneck as the computation
graph becomes too large to fit on the GPU. In contrast, the sequence-to-sequence model proposed in
(Sutskever et al., 2014), does not suffer from these
issues as each timestamp is dependent only on the
previous state hi−1 .
To deal with this, we use a training method we
call staged back-propagation which saves memory by considering slices of K tokens in z, rather
than the full sequence. That is, to train on a minibatch where |z| = 300 with K = 100, we would
actually train on 3 mini-batches, where the first
batch would optimize for the first z 1:100 , the second for z 101:200 and the third for z 201:300 . The
advantage of this method is that memory intensive
operations, such as attention and the copy mechanism, only need to be unrolled for K steps, and K
can be adjusted so that the computation graph fits
in memory.
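The slicing arithmetic behind staged back-propagation can be sketched as follows (a minimal Python sketch; K and the sequence length are illustrative):

```python
def staged_slices(seq_len, K):
    """Split positions 0..seq_len into contiguous K-token slices. Memory-heavy
    ops (attention, copy) are unrolled per slice instead of over the full z."""
    return [(j, min(j + K, seq_len)) for j in range(0, seq_len, K)]

# |z| = 300 with K = 100 yields three mini-batches over
# z[0:100], z[100:200], z[200:300]
assert staged_slices(300, 100) == [(0, 100), (100, 200), (200, 300)]
```

K is then tuned so that the unrolled attention/copy graph for one slice fits in GPU memory.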
However, unlike truncated back-propagation for
language modeling, where context outside the
scope of K is ignored, sequence-to-sequence
models require global context. Thus, the sequence
of states h is still built for the whole sequence z.
Afterwards, we obtain a slice hj:j+K , and compute the attention vector.3 Finally, the prediction
of the instruction is conditioned on the LSTM state
and the attention vector.
6 Experiments
We apply our model to the task of generating rationales for solutions to math problems, evaluating it
on both the quality of the rationale and the ability
of the model to obtain correct answers.
6.1 Baselines
As baselines we use the attention-based sequence to sequence model proposed by Bahdanau
et al. (2014), along with augmentations that allow it to copy from the input (Ling et al., 2016) and
from the output (Merity et al., 2016).
6.2 Hyperparameters
We used a two-layer LSTM with a hidden size of
H = 200, and word embeddings with size 200.
The number of levels D that the graph G is expanded to during sampling is set to 5. Decoding is performed with a beam of 200. As for the vocabulary
of the softmax and embeddings, we keep the most
frequent 20,000 word types, and replace the rest of
the words with an unknown token. During training, the model only learns to predict a word as an unknown token when there is no other alternative to generate the word.
6.3 Evaluation Metrics
The evaluation of the rationales is performed with average sentence-level perplexity and BLEU-4 (Papineni et al., 2002). When a model cannot generate a token for perplexity computation, we predict the unknown token. This benefits the baselines
as they are less expressive. As the perplexity of
our model is dependent on the latent program that
is generated, we force decode our model to generate the rationale, while maximizing the probability
of the program. This is analogous to the method
used to obtain sample programs described in Section 4, but we choose the most likely instructions
at each timestamp instead of sampling. Finally,
the correctness of the answer is evaluated by computing the percentage of the questions, where the
chosen option matches the correct one.
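As a concrete reference for the perplexity metric above, a minimal sketch; the helper names are ours, and in the paper the per-token probabilities come from forced decoding under the most likely program:

```python
import math

def sentence_perplexity(token_probs):
    """Perplexity of one rationale: exp of the average negative
    log-probability the model assigns to each token."""
    nll = -sum(math.log(p) for p in token_probs) / len(token_probs)
    return math.exp(nll)

def average_perplexity(sentences):
    """Average sentence-level perplexity over a corpus of rationales."""
    return sum(sentence_perplexity(s) for s in sentences) / len(sentences)

# A model that assigns probability 0.5 to every token has perplexity 2.
print(sentence_perplexity([0.5, 0.5, 0.5]))  # 2.0
```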
6.4 Results

[Footnote 3: This modeling strategy is sometimes known as late fusion: as the attention vector is not used for state propagation, it is incorporated “later”.]

The test set results, evaluated on perplexity, BLEU, and accuracy, are presented in Table 3.
Model         Perplexity  BLEU   Accuracy
Seq2Seq          524.7     8.57    20.8
+Copy Input       46.8    21.3     20.4
+Copy Output      45.9    20.6     20.2
Our Model         28.5    27.2     36.4

Table 3: Results over the test set measured in Perplexity, BLEU and Accuracy.
Perplexity. In terms of perplexity, we observe
that the regular sequence to sequence model fares
poorly on this dataset, as the model requires
the generation of many values that tend to be
sparse. Adding an input copy mechanism greatly
improves the perplexity as it allows the generation process to use values that were mentioned
in the question. The output copying mechanism
improves perplexity slightly over the input copy
mechanism, as many values are repeated after their
first occurrence. For instance, in Problem 2, the
value “1326” is used twice, so even though the
model cannot generate it easily in the first occurrence, the second one can simply be generated by
copying the first one. We can observe that our
model yields significant improvements over the
baselines, demonstrating that the ability to generate new values by algebraic manipulation is essential in this task. An example of a program that is
inferred is shown in Figure 4. The graph was generated by finding the most likely program z that
generates y. Each node isolates a value in x, m,
or y, where arrows indicate an operation executed
with the outgoing nodes as arguments and incoming node as the return of the operation. For simplicity, operations that copy or convert values (e.g.
from string to float) were not included, but nodes
that were copied/converted share the same color.
Examples of tokens where our model obtains a perplexity reduction are the values “0.025”, “0.023”, “0.002” and finally the answer “E”, as these cannot be copied from the input or output.
BLEU. We observe that the regular sequence to
sequence model achieves a low BLEU score. In
fact, due to the high perplexities the model generates very short rationales, which frequently consist
of segments similar to “Answer should be D”, as
most rationales end with similar statements. By
applying the copy mechanism the BLEU score
improves substantially, as the model can define
the variables that are used in the rationale. Interestingly, the output copy mechanism adds no further improvement in the BLEU evaluation. This is because, during decoding, all values that can be copied from the output are values that could have been generated by the model either from the softmax or the input copy mechanism. As such, adding an output copying mechanism adds little to the expressiveness of the model during decoding.
Finally, our model can achieve the highest
BLEU score as it has the mechanism to generate
the intermediate and final values in the rationale.
Accuracy. In terms of accuracy, we see that
all baseline models obtain values close to chance
(20%), indicating that they are completely unable
to solve the problem. In contrast, we see that our
model can solve problems at a rate that is significantly higher than chance, demonstrating the value
of our program-driven approach, and its ability to
learn to generate programs.
In general, the problems we solve correctly correspond to simple problems that can be solved in
one or two operations. Examples include questions such as “Billy cut up each cake into 10 slices,
and ended up with 120 slices altogether. How
many cakes did she cut up? A) 9 B) 7 C) 12 D)
14 E) 16”, which can be solved in a single step. In
this case, our model predicts “120 / 10 = 12 cakes.
Answer is C” as the rationale, which is reasonable.
6.5 Discussion
While we show that our model can outperform existing models, correctly generating complex rationales such as those shown in Figure 1 is still an unsolved problem, as each additional step adds complexity both during inference and decoding. Yet, this is the first result showing
that it is possible to solve math problems in such
a manner, and we believe this modeling approach
and dataset will drive work on this problem.
7 Related Work
Extensive efforts have been made in the domain
of math problem solving (Hosseini et al., 2014;
Kushman et al., 2014; Roy and Roth, 2015), which
aim at obtaining the correct answer to a given math
problem. Other work has focused on learning to
map math expressions into formal languages (Roy
et al., 2016). We aim to generate natural language
rationales, where the bindings between variables
and the problem solving approach are mixed into
[Figure 4 here: the most likely latent program for a held-out question about the per-capsule cost difference between bottles R and T. Nodes isolate values in x, m, and y (e.g. 250, 6.25, 130, 2.99, 0.025, 0.023, 0.002, E); edges denote operations such as div(m1,m2), div(m4,m5), sub(m6,m3) and check(m7); the bottom row shows the generated rationale tokens.]

Figure 4: Illustration of the most likely latent program inferred by our algorithm to explain a held-out question-rationale pair.
a single generative model that attempts to solve the
problem while explaining the approach taken.
Our approach is strongly tied with the work
on sequence to sequence transduction using the
encoder-decoder paradigm (Sutskever et al., 2014;
Bahdanau et al., 2014; Kalchbrenner and Blunsom, 2013), and inherits ideas from the extensive
literature on semantic parsing (Jones et al., 2012;
Berant et al., 2013; Andreas et al., 2013; Quirk
et al., 2015; Liang et al., 2016; Neelakantan et al.,
2016) and program generation (Reed and de Freitas, 2016; Graves et al., 2016), namely, the usage
of an external memory, the application of different operators over values in the memory and the
copying of stored values into the output sequence.
Providing textual explanations for classification
decisions has begun to receive attention, as part of
increased interest in creating models whose decisions can be interpreted. Lei et al. (2016) jointly modeled both a classification decision and the selection of the most relevant subsection of a document for making that decision. Hendricks et al. (2016) generate textual explanations
for visual classification problems, but in contrast
to our model, they first generate an answer, and
then, conditional on the answer, generate an explanation. This effectively creates a post-hoc justification for a classification decision rather than
a program for deducing an answer. These papers,
like ours, have jointly modeled rationales and answer predictions; however, we are the first to use
rationales to guide program induction.
8 Conclusion
In this work, we addressed the problem of generating rationales for math problems, where the task is not only to obtain the correct answer to the problem, but also to generate a description of the method
used to solve the problem. To this end, we collect
100,000 question and rationale pairs, and propose
a model that can generate natural language and
perform arithmetic operations in the same decoding process. Experiments show that our method
outperforms existing neural models, in both the
fluency of the rationales that are generated and the
ability to solve the problem.
References
Jacob Andreas, Andreas Vlachos, and Stephen Clark.
2013. Semantic parsing as machine translation. In
Proc. of ACL.
Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2014. Neural machine translation by jointly
learning to align and translate. arXiv 1409.0473.
Jonathan Berant, Andrew Chou, Roy Frostig, and Percy
Liang. 2013. Semantic parsing on freebase from
question-answer pairs. In Proc. of EMNLP.
Alex Graves, Greg Wayne, Malcolm Reynolds, Tim Harley, Ivo Danihelka, Agnieszka Grabska-Barwińska, Sergio Gómez Colmenarejo, Edward Grefenstette, Tiago Ramalho, John Agapiou, Adrià Puigdomènech Badia, Karl Moritz Hermann, Yori Zwols, Georg Ostrovski, Adam Cain, Helen King, Christopher Summerfield, Phil Blunsom, Koray Kavukcuoglu, and Demis Hassabis. 2016. Hybrid computing using a neural network with dynamic external memory. Nature 538(7626):471–476.
Brent Harrison, Upol Ehsan, and Mark O. Riedl. 2017. Rationalization: A neural machine translation approach to generating natural language explanations. CoRR abs/1702.07826. http://arxiv.org/abs/1702.07826.
Lisa Anne Hendricks, Zeynep Akata, Marcus
Rohrbach, Jeff Donahue, Bernt Schiele, and Trevor
Darrell. 2016. Generating visual explanations. In
Proc. ECCV.
Mohammad Javad Hosseini, Hannaneh Hajishirzi,
Oren Etzioni, and Nate Kushman. 2014. Learning
to solve arithmetic word problems with verb categorization. In Proc. of EMNLP.
Bevan Keeley Jones, Mark Johnson, and Sharon Goldwater. 2012. Semantic parsing with bayesian tree
transducers. In Proc. of ACL.
Nal Kalchbrenner and Phil Blunsom. 2013. Recurrent
continuous translation models. In Proc. of EMNLP.
Nate Kushman, Yoav Artzi, Luke Zettlemoyer, and
Regina Barzilay. 2014. Learning to automatically
solve algebra word problems. In Proc. of ACL.
Tao Lei, Regina Barzilay, and Tommi Jaakkola. 2016. Rationalizing neural predictions. In Proc. of EMNLP.
Chen Liang, Jonathan Berant, Quoc Le, Kenneth D.
Forbus, and Ni Lao. 2016. Neural symbolic machines: Learning semantic parsers on freebase with
weak supervision. arXiv 1611.00020.
Wang Ling, Edward Grefenstette, Karl Moritz Hermann, Tomáš Kočiský, Andrew Senior, Fumin Wang, and Phil Blunsom. 2016. Latent predictor networks for code generation. In Proc. of ACL.
Stephen Merity, Caiming Xiong, James Bradbury, and
Richard Socher. 2016. Pointer sentinel mixture
models. arXiv 1609.07843.
Arvind Neelakantan, Quoc V. Le, and Ilya Sutskever. 2016. Neural programmer: Inducing latent programs with gradient descent. In Proc. of ICLR.
Kishore Papineni, Salim Roukos, Todd Ward, and Wei-Jing Zhu. 2002. BLEU: A method for automatic evaluation of machine translation. In Proc. of ACL.
Chris Quirk, Raymond Mooney, and Michel Galley.
2015. Language to code: Learning semantic parsers
for if-this-then-that recipes. In Proc. of ACL.
Scott E. Reed and Nando de Freitas. 2016. Neural
programmer-interpreters. In Proc. of ICLR.
Subhro Roy and Dan Roth. 2015. Solving general
arithmetic word problems. In Proc. of EMNLP.
Subhro Roy, Shyam Upadhyay, and Dan Roth. 2016.
Equation parsing: Mapping sentences to grounded
equations. In Proc. of EMNLP.
Ilya Sutskever, Oriol Vinyals, and Quoc V. Le. 2014.
Sequence to sequence learning with neural networks. arXiv 1409.3215.
Oriol Vinyals, Meire Fortunato, and Navdeep Jaitly.
2015. Pointer networks. In Proc. of NIPS.
Deep Recurrent Models of Pictionary-style Word Guessing

arXiv:1801.09356v1 [] 29 Jan 2018

Ravi Kiran Sarvadevabhatla, Member, IEEE, Shiv Surya, Trisha Mittal and R. Venkatesh Babu, Senior Member, IEEE
Abstract—The ability of intelligent agents to play games in human-like fashion is popularly considered a benchmark of progress in
Artificial Intelligence. Similarly, performance on multi-disciplinary tasks such as Visual Question Answering (VQA) is considered a marker
for gauging progress in Computer Vision. In our work, we bring games and VQA together. Specifically, we introduce the first
computational model aimed at Pictionary, the popular word-guessing social game. We first introduce Sketch-QA, an elementary version of
Visual Question Answering task. Styled after Pictionary, Sketch-QA uses incrementally accumulated sketch stroke sequences as visual
data. Notably, Sketch-QA involves asking a fixed question (“What object is being drawn?”) and gathering open-ended guess-words from
human guessers. We analyze the resulting dataset and present many interesting findings therein. To mimic Pictionary-style guessing, we
subsequently propose a deep neural model which generates guess-words in response to temporally evolving human-drawn sketches. Our
model even makes human-like mistakes while guessing, thus amplifying the human mimicry factor. We evaluate our model on the
large-scale guess-word dataset generated via Sketch-QA task and compare with various baselines. We also conduct a Visual Turing Test
to obtain human impressions of the guess-words generated by humans and our model. Experimental results demonstrate the promise of
our approach for Pictionary and similarly themed games.
Index Terms—Deep Learning, Pictionary, Games, Sketch, Visual Question Answering
1 INTRODUCTION
In the history of AI, computer-based modelling of human player games such as Backgammon, Chess and Go has been an important research area. The accomplishments of well-known game engines (e.g. TD-Gammon [1], DeepBlue [2], AlphaGo [3]) and their ability to mimic human-like game moves have been a well-accepted proxy for gauging progress in AI. Meanwhile, progress in visuo-lingual problems such as visual captioning [4], [5], [6] and visual question answering [7], [8], [9] is increasingly serving a similar purpose for the computer vision community. With these developments as
backdrop, we explore the popular social game Pictionary™.
The game of Pictionary brings together predominantly
the visual and linguistic modalities. The game uses a shuffled
deck of cards with guess-words printed on them. The
participants first group themselves into teams and each team
takes turns. For a given turn, a team’s member selects a card.
He/she then attempts to draw a sketch corresponding to the
word printed on the card in such a way that the team-mates
can guess the word correctly. The rules of the game forbid
any verbal communication between the drawer and teammates. Thus, the drawer conveys the intended guess-word
primarily via the sketching process.
Consider the scenario depicted in Figure 1. A group of
people are playing Pictionary. New to the game, a ‘social’
robot is watching people play. Passively, its sensors record
the strokes being drawn on the sketching board, guess-words
uttered by the drawer’s team members and finally, whether
the last guess is correct. Having observed multiple such game
rounds, the robot learns computational models which mimic
human guesses and enable it to participate in the game.
Contact: [email protected]

Fig. 1: We propose a deep recurrent model of Pictionary-style word guessing. Such models can enable social robots to participate in real-life game scenarios as shown above. Picture credit: Trisha Mittal.

As a step towards building such computational models, we first collect guess-word data via Sketch Question Answering (Sketch-QA), a novel, Pictionary-style guessing task. We employ a large-scale crowdsourced dataset of hand-drawn object sketches whose temporal stroke information is
available [10]. Starting with a blank canvas, we successively
add strokes of an object sketch and display this process to
human subjects (see Figure 2). Every time a stroke is added,
the subject provides a best-guess of the object being drawn. In
case existing strokes do not offer enough clues for a confident
guess, the subject requests the next stroke be drawn. After
the final stroke, the subject is informed of the object category.
Fig. 2: The time-line of a typical Sketch-QA guessing session: every time a stroke is added, the subject either inputs a best-guess word of the object being drawn (strokes #5, #10), or, in case existing strokes do not offer enough clues, he/she requests the next stroke be drawn. After the final stroke (#15), the subject is informed of the object’s ground-truth category.
Sketch-QA can be viewed as a rudimentary yet novel form of Visual Question Answering (VQA) [5], [7], [8], [9]. Our approach differs from existing VQA work in that [a] the visual content consists of sparsely detailed hand-drawn depictions [b] the visual content necessarily accumulates over time [c] at all times, we have the same question – “What is the object being drawn?” [d] the answers (guess-words) are open-ended (i.e. not 1-of-K choices) [e] for a while, until sufficient sketch strokes accumulate, there may not be ‘an answer’. Asking the same question might seem an oversimplification of VQA. However, other factors — extremely sparse visual detail, inaccuracies in object depiction arising from varying drawing skills of humans and open-ended nature of answers — pose unique challenges that need to be addressed in order to build viable computational models.

Concretely, we make the following contributions:
• We introduce a novel task called Sketch-QA to serve as a proxy for Pictionary (Section 2.2).
• Via Sketch-QA, we create a new crowdsourced dataset of paired guess-word and sketch-strokes, dubbed WORDGUESS-160, collected from 16,624 guess sequences of 1,108 subjects across 160 sketch object categories.
• We perform comparative analysis of human guessers and a machine-based sketch classifier via the task of sketch recognition (Section 4).
• We introduce a novel computational model for word guessing (Section 6). Using WORDGUESS-160 data, we analyze the performance of the model for Pictionary-style on-line guessing and conduct a Visual Turing Test to gather human assessments of generated guess-words (Section 7).

Please visit github.com/val-iisc/sketchguess for code and dataset related to this work. To begin with, we shall look at the procedural details involved in the creation of the WORDGUESS-160 dataset.

2 CREATING THE WORDGUESS-160 DATASET

2.1 Sketch object dataset

As a starting point, we use hand-sketched line drawings of single objects from the large-scale TU-Berlin sketch dataset [10]. This dataset contains 20,000 sketches uniformly spread across 250 object categories (i.e. 80 sketches per category). The sketches were obtained in a crowd-sourced manner by providing only the category name (e.g. “sheep”) to the sketchers. In this aspect, the dataset collection procedure used for the TU-Berlin dataset aligns with the draw-using-guess-word-only paradigm of Pictionary. For each sketch object, the temporal order in which the strokes were drawn is also available. A subsequent analysis of the TU-Berlin dataset by Schneider and Tuytelaars [11] led to the creation of a curated subset of sketches which were deemed visually less ambiguous by human subjects. For our experiments, we use this curated dataset containing 160 object categories with an average of 56 sketches per category.

2.2 Data collection methodology

To collect guess-word data for Sketch-QA, we used a web-accessible crowdsourcing portal. Registered participants were initially shown a screen displaying the first stroke of a randomly selected sketch object from a randomly chosen category (see Figure 2). A GUI menu with options ‘Yes’, ‘No’ was provided. If the participants felt more strokes were needed for guessing, they clicked the ‘No’ button, causing the next stroke to be added. On the other hand, clicking ‘Yes’ would allow them to type their current best guess of the object category. If they wished to retain their current guess, they would click ‘No’, causing the next stroke to be added. This act (clicking ‘No’) also propagates the most recently typed guess-word and associates it with the strokes accumulated so far. The participant was instructed to provide guesses as early as possible and as frequently as required. After the last stroke is added, the ground-truth category was revealed to the participant. Each participant was encouraged to guess a minimum of 125 object sketches. Overall, we obtained guess data from 1,108 participants.

Given the relatively unconstrained nature of guessing, we pre-process the guess-words to eliminate artifacts as described below.

2.3 Pre-processing
Incomplete Guesses: In some instances, subjects provided
guess attempts for initial strokes but entered blank guesses
subsequently. For these instances, we propagated the last
non-blank guess until the end of the stroke sequence.
Multi-word Guesses: In some cases, subjects provided
multi-word phrase-like guesses (e.g. “pot of gold at the end
of the rainbow” for a sketch depicting the object category
rainbow). Such guesses seem to be triggered by extraneous
elements depicted in addition to the target object. For these
Table 1, which shows the number of sequences eliciting x guesses (x = {1, 2, 3, > 4}).

Guesses       1      2     3     > 4
# Sequences   12520  2643  568   279

TABLE 1: The distribution of possible number of guesses and count of number of sequences which elicited them.
Fig. 3: In the above plot, x-axis denotes the number of unique
guesses. y-axis denotes the number of subjects who made
corresponding number of unique guesses.
instances, we used the HunPos tagger [12] to retain only the
noun word(s) in the phrase.
Misspelt Guesswords: To address incorrect spellings, we
used the Enchant spellcheck library [13] with its default
Words set augmented with the 160 object category names
from our base dataset [10] as the spellcheck dictionary.
Uppercase Guesses: In some cases, the guess-words
exhibit non-uniform case formatting (e.g. all uppercase or a
mix of both uppercase and lowercase letters). For uniformity,
we formatted all words to be in lowercase.
In addition, we manually checked all of the guess-word
data to remove unintelligible and inappropriate words. We
also removed sequences that did not contain any guesses.
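Two of the steps above (blank-guess propagation and case normalization) can be sketched as follows. This is a toy under stated assumptions: spell-checking (Enchant) and noun extraction (HunPos) are omitted since they rely on external tools, and the helper names are ours, not from the paper's pipeline:

```python
def propagate_guesses(guesses):
    """Fill blank guesses with the most recent non-blank guess.
    Leading blanks (before any guess was made) stay blank."""
    filled, last = [], ""
    for g in guesses:
        g = g.strip()
        if g:
            last = g
        filled.append(last)
    return filled

def normalize_case(guesses):
    """Format all guess-words to lowercase for uniformity."""
    return [g.lower() for g in guesses]

# One guess per displayed stroke; "" means the subject requested the next stroke.
seq = ["", "House", "", "TELEPHONE", ""]
print(normalize_case(propagate_guesses(seq)))
# ['', 'house', 'house', 'telephone', 'telephone']
```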
Thus, we finally obtain the GUESSWORD-160 dataset, comprising guess-words distributed across 16,624 guess sequences and 160 categories. It is important to note that the final or the intermediate guesses could be ‘wrong’, either
due to the quality of drawing or due to human error. We
deliberately do not filter out such guesses. This design choice
keeps our data realistic and ensures that our computational
model has the opportunity to characterize both the ‘success’
and ‘failure’ scenarios of Pictionary.
A video of a typical Sketch-QA session can be viewed at
https://www.youtube.com/watch?v=YU3entFwhV4.
In the next section, we shall present various interesting
facets of our W ORD G UESS -160 dataset.
3 GUESS SEQUENCE ANALYSIS
Given a sketch, how many guesses are typically provided
by subjects? To answer this, we examine the distribution of
unique guesses per sequence. As Figure 3 shows, the number of guesses has a large range. This is to be expected given the
large number of object categories we consider and associated
diversity in depictions. A large number of subjects provide a
single guess. This arises both from the inherent ambiguity of
the partially rendered sketches and the confidence subjects
place on their guess. This observation is also borne out by
Fig. 4: Here, x-axis denotes the categories. y-axis denotes
the number of sketches within the category with multiple
guesses. The categories are shown sorted by the number of
sketches which elicited multiple guesses.
We also examined the sequences which elicited multiple
guesses in terms of object categories they belong to. The
categories were sorted by the number of multi-guess sequences their sketches elicited. The top-10 and bottom-10
categories according to this criteria can be viewed in Figure
4. This perspective helps us understand which categories are
inherently ambiguous in terms of their stroke-level evolution
when usually drawn by humans.
Another interesting statistic is the distribution of first
guess location relative to length of the sequence. Figure
5 shows the distribution of first guess index locations as
a function of sequence length (normalized to 1). Thus, a
value closer to 1 implies that the first guess was made late
in the sketch sequence. Clearly, the guess location has a
large range across the object categories. The requirement to
accurately capture this range poses a considerable challenge
for computational models of human guessing.
To obtain a category-level perspective, we computed the
median first-guess location and corresponding deviation
of first guess location on a per-category basis and sorted
the categories by the median values. The resulting plot for
the top and bottom categories can be viewed in Figure
6. This perspective helps understand the level at which categories evolve to a recognizable iconic stroke
composition relative to the original, full-stroke reference
sketch. Thus, categories such as axe, envelope, ladder,
although seemingly simple, are depicted in a manner which
induces doubt in the guesser, consequently delaying the
induction of first guess. On the other hand, categories such as
cactus, strawberry, telephone tend to be drawn such
that the early, initial strokes capture the iconic nature of
either the underlying ground-truth category or an easily
recognizable object form different from ground-truth.
The above analysis focused mostly on the overall
sequence-level trends in the dataset. In the next section, we
focus on the last guess for each sketch stroke sequence. Since
Fig. 5: The distribution of first guess locations normalized ([0, 1]) over sequence lengths (y-axis) across categories (x-axis).
Hypernyms-Parent and Child (HY-PC): The ground-truth
and prediction have a parent-child (hypernym) relationship
in the WordNet graph.
Wu-Palmer Similarity (WUP) [15]: This calculates relatedness of two words using a graph-distance based method
applied to the corresponding WordNet synsets. If WUP
similarity between prediction and ground-truth is at least
0.9, we deem it a correct classification.
Fig. 6: Categories sorted by the median location of first guess.
the final guess is associated with the full sketch, it can be
considered the guesser’s prediction of the object underlying
the sketch. Such predictions can then be compared with
ground-truth labels originally provided with the sketch
dataset to determine ‘human guesser’ accuracy (Section 4.2).
Subsequently, we compare ‘human guesser’ accuracy with
that of a machine-based sketch object recognition classifier
and discuss trends therein (Section 5).
4 FINAL GUESS-WORD ANALYSIS
With GUESSWORD-160 data at hand, the first question that
naturally arises is: What is the “accuracy” of humans on the
final, full sketches (i.e. when all the original strokes have been
included)? For a machine-based classifier, this question has
a straightforward answer: Compute the fraction of sketches
whose predicted category label is exactly the same as ground-truth. However, given the open-ended nature of guess-words,
an ‘exact matching’ approach is not feasible. Even assuming
the presence of a universal dictionary, such an approach is
too brittle and restrictive. Therefore, we first define a series
of semantic similarity criteria which progressively relax the
correct classification criterion for the final sketches.
4.1 Matching criteria for correct classification
Exact Match (EM): The predicted guess-word is a literal
match (letter-for-letter) with the ground-truth category.
Subset (SUB): The predicted guess-word is a subset of ground-truth or vice-versa. This criterion lets us characterize certain multi-word guesses as correct (e.g. guess: pot of gold at the end of the rainbow, ground-truth: rainbow).
Synonyms (SYN): The predicted guess-word is a synonym
of ground-truth. For synonym determination, we use the
WordNet [14] synsets of prediction and ground-truth.
Hypernyms (HY): The one-level up parents (hypernyms) of
ground-truth and predicted guess-word are the same in the
hierarchy induced by WordNet graph.
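The logical-OR combination of such criteria can be sketched as below. This is a toy: the `SYNONYMS` set stands in for WordNet synset lookups (the paper uses WordNet [14] via its graph of synsets), and the predicate names are ours:

```python
def exact_match(pred, truth):
    """EM: letter-for-letter match with the ground-truth category."""
    return pred == truth

def subset_match(pred, truth):
    """SUB: one word is contained in the other's word list,
    e.g. guess 'pot of gold at the end of the rainbow' vs. truth 'rainbow'."""
    return truth in pred.split() or pred in truth.split()

# Stand-in for WordNet synonymy; a real implementation would query synsets.
SYNONYMS = {("revolver", "handgun"), ("couch", "sofa")}

def synonym_match(pred, truth):
    """SYN: prediction and ground-truth share a synonym relation."""
    return (pred, truth) in SYNONYMS or (truth, pred) in SYNONYMS

def is_correct(pred, truth, criteria):
    """Combine matching criteria in logical-OR fashion, as in Table 2."""
    return any(c(pred, truth) for c in criteria)

em_sub_syn = [exact_match, subset_match, synonym_match]
print(is_correct("rainbow", "rainbow", em_sub_syn))                                # True (EM)
print(is_correct("pot of gold at the end of the rainbow", "rainbow", em_sub_syn))  # True (SUB)
print(is_correct("handgun", "revolver", em_sub_syn))                               # True (SYN)
print(is_correct("dragon", "owl", em_sub_syn))                                     # False
```

Progressively relaxing the rule amounts to appending predicates (HY, HY-PC, a WUP threshold) to the `criteria` list.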
4.2 Classification Performance
To compute the average accuracy of human guesses, we progressively relax the ‘correct classification’ rule by combining
the matching criteria (Section 4.1) in a logical-OR fashion.
The average accuracy of human guesses can be viewed in
Table 2. The accuracy increases depending on the extent to
which each successive criterion relaxes the base ‘exact match’
rule. The large increase in accuracy for ‘EM | SUB’ (2nd
row of the table) shows the pitfall of naively using the exact
matching (1-hot label, fixed dictionary) rule.
At this stage, a new question arises: which of these criteria
best characterizes human-level accuracy? Ultimately, a ground-truth label is a consensus agreement among humans. To
obtain such consensus-driven ground-truth, we performed a
human agreement study. We displayed “correctly classified”
sketches (w.r.t a fixed criteria combination from Table 2)
along with their labels, to human subjects. Note that the
labelling chosen changes according to criteria combination.
(e.g. A sketch with ground-truth revolver could be shown
with the label firearm since such a prediction would
be considered correct under the ‘EM | SUB | SYN | HY’
combination). Also, the human subjects weren’t informed
about the usage of criteria combination for labelling. Instead,
they were told that the labellings were provided by other
humans. Each subject was asked to provide their assessment
of the labelling on a scale of −2 (‘Strongly Disagree with
labelling’) to 2 (‘Strongly Agree with labelling’). We randomly
chose 200 sketches correctly classified under each criteria
combination. For each sketch, we collected 5 agreement
ratings and computed the weighted average of the agreement
score. Finally, we computed the average of these weighted
scores. The ratings (Table 3) indicate that ‘EM | SUB | SYN’
is the criteria combination most agreed upon by human
subjects for characterizing human-level accuracy. Having
determined the criteria for a correct match, we can also
contrast human classification performance with a machine-based state-of-the-art sketch classifier.
Criteria Combination                 Accuracy
EM                                   67.27
EM | SUB                             75.49
EM | SUB | SYN                       77.97
EM | SUB | SYN | HY                  80.08
EM | SUB | SYN | HY | HY-PC          82.09
EM | SUB | SYN | HY | HY-PC | WUP    83.33

TABLE 2: Accuracy of human guesses for various matching criteria (Section 4.1). The | indicates that the matching criteria are combined in a logical-OR fashion to determine whether the predicted guess-word matches the ground-truth or not.
Criteria Combination                 Avg. rating
EM | SUB                             1.01
EM | SUB | SYN                       1.93
EM | SUB | SYN | HY                  0.95
EM | SUB | SYN | HY | HY-PC          1.1
EM | SUB | SYN | HY | HY-PC | WUP    0.21

TABLE 3: Quantifying the suitability of matching criteria combination for characterizing human-level sketch object recognition accuracy. The larger the human rating score, the more suitable the criteria. See Section 4.2 for details.
5 COMPARING HUMAN CLASSIFICATION PERFORMANCE WITH A MACHINE-BASED CLASSIFIER
We contrast the human-level performance (‘EM | SUB |
SYN’ criteria) with a state-of-the-art sketch classifier [16]. To
ensure fair comparison, we consider only the 1204 sketches
which overlap with the test set used to evaluate the machine
classifier. Table 5 summarizes the prediction combinations
(e.g. Human classification is correct, Machine classification is
incorrect) between the classifiers. While the results seem to
suggest that machine classifier ‘wins’ over human classifier,
the underlying reason is the open-ended nature of human
guesses and the closed-world setting in which the machine
classifier has been trained.
To determine whether the difference between the human and machine classifiers is statistically significant, we use Cohen's d test. Essentially, Cohen's d is an effect size used to indicate the standardised difference between two means and ranges between 0 and 1. Suppose, for a given category c, the mean accuracy w.r.t. the human classification criteria is µ_h^c and the corresponding variance is V_h^c. Similarly, let the corresponding quantities for the machine classifier be µ_m^c and V_m^c. Cohen's d for category c is calculated as:

    d_c = (µ_m^c − µ_h^c) / s    (1)

where s is the pooled standard deviation, defined as:

    s = √((V_m^c + V_h^c) / 2)    (2)
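A minimal numeric sketch (not the authors' code) of Eqs. (1)-(2):

```python
import math

# Cohen's d between machine and human per-category mean accuracies,
# using the pooled standard deviation of Eq. (2).

def cohens_d(mu_m, mu_h, var_m, var_h):
    s = math.sqrt((var_m + var_h) / 2.0)   # pooled standard deviation
    return (mu_m - mu_h) / s
```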
Machines outperform humans     Humans outperform machines
scorpion (0.84)                dragon (0.79)
rollerblades (0.82)            owl (0.75)
person walking (0.82)          mouse (0.72)
revolver (0.81)                horse (0.72)
sponge bob (0.81)              flower with stem (0.71)
rainbow (0.80)                 wine-bottle (0.65)
person sitting (0.79)          lightbulb (0.65)
sailboat (0.79)                snake (0.63)
suitcase (0.75)                leaf (0.63)
TABLE 4: Category-level performance of human and machine classifiers. The numbers alongside category names correspond to Cohen's d scores.
Prediction (Human)   Prediction (Machine)   Relative % of test data
✓                    ✗                      9.05
✗                    ✓                      20.43
✓                    ✓                      67.61
✗                    ✗                      2.91
TABLE 5: Comparing human and machine classifiers for the possible prediction combinations – ✓ indicates a correct and ✗ an incorrect prediction.
We calculated Cohen's d for all categories as indicated above and computed the average of the resulting scores. The average value is 0.57, which indicates significant differences between the classifiers according to the significance reference tables commonly used to interpret Cohen's d. In general, though, there are categories where one classifier outperforms the other. The list of the top-10 categories where one classifier outperforms the other (in terms of Cohen's d) is given in Table 4.

The distribution of correct human guess statistics on a per-category basis can be viewed in Figure 7. For each category, we calculate confidence intervals. These intervals inform us, at a given level of certainty, whether the true accuracy results will likely fall in the identified range. In particular, the Wilson score method of calculating confidence intervals, which we employ, assumes that the variable of interest (the number of successes) can be modeled as a binomial random variable. Given that the binomial distribution can be considered the sum of n Bernoulli trials, it is appropriate for our task, as a sketch is either classified correctly (success) or misclassified (failure).

Fig. 7: Distribution of correct predictions across categories, sorted by median category-level score. The x-axis shows categories and the y-axis the classification rate.

Some examples of misclassifications (and the ground-truth category labels) can be seen in Figure 8. Although the guesses and ground-truth categories are lexically distant, the guesses are sensible when conditioned on visual stroke data.

Fig. 8: Some examples of misclassifications: human guesses are shown in blue; ground-truth category labels are in pink.

6 COMPUTATIONAL MODELS

We now describe our computational model, designed to produce human-like guess-word sequences in an on-line manner. For model evaluation, we split the 16624 sequences in GUESSWORD-160 randomly into disjoint sets containing 60%, 25% and 15% of the data, which are used during the training, validation and testing phases respectively.

Data preparation: Suppose a sketch I is composed of N strokes. Let the cumulative stroke sequence of I be I = {S1, S2, . . . , SN}, i.e. SN = I (see Figure 2). Let the sequence of corresponding guess-words be GI = {g1, g2, . . . , gN}. The sketches are first resized to 224 × 224 and zero-centered. To ensure sufficient training data, we augment the sketch data and the associated guess-words. For sketches, each accumulated stroke sequence St ∈ I is first morphologically dilated (‘thickened’). Subsequent augmentations are obtained by applying a vertical flip and scaling (paired combinations of −7%, −3%, 3%, 7% scaling of the image side). We also augment guess-words by replacing each guess-word in GI with its plural form (e.g. pant is replaced by pants) and with synonyms wherever appropriate.

Data representation: The outputs of the penultimate fully-connected layer of CNNs fine-tuned on sketches are used to represent the sketch stroke sequence images. The guess-words are represented using pre-trained word-embeddings. Typically, a human-generated guess sequence contains two distinct phases. In the first phase, no guesses are provided by the subject since the accumulated strokes provide insufficient evidence. Therefore, many of the initial guesses (g1, g2, etc.) are empty and hence no corresponding embeddings exist. To tackle this, we map ‘no guess’ to a pre-defined non-word embedding (symbol “#”).

Model design strategy: Our model's objective is to map the cumulative stroke sequence I to a target guess-word sequence GI. Given our choice of data representation above, the model effectively needs to map the sequence of sketch features to a sequence of word-embeddings. To achieve this sequence-to-sequence mapping, we use a deep recurrent neural network (RNN) as the architectural template of choice (see Figure 9). For the sequential mapping process to be effective, we need discriminative sketch representations. This ensures that the RNN can focus on modelling crucial sequential aspects, such as when to initiate the word-guessing process and when to transition to a new guess-word once the guessing has begun (Section 6.2). To obtain discriminative sketch representations, we first train a CNN regressor to predict a guess-word embedding when an accumulated stroke image is presented (Section 6.1). It is important to note that we ignore the sequential nature of the training data in the process. Additionally, we omit the sequence elements corresponding to ‘no-guess’ during regressor training and evaluation. This frees the regressor from having to additionally model the complex many-to-one mapping between the strokes accumulated before the first guess and a ‘no-guess’.

6.1 Learning the CNN word-embedding regressor

To arrive at the final CNN regressor, we begin by fine-tuning a pre-trained photo object CNN. To minimize the impact of the drastic change in domain (photos to sketches) and task (classification to word-embedding regression), we undertake a series of successive fine-tuning steps, which we describe next.
Step-1: We fine-tune the VGG-16 object classification net [17]
using Sketchy [18], a large-scale sketch object dataset, for
125-way classification corresponding to the 125 categories
present in the dataset. Let us denote the resulting fine-tuned
net by M1 .
Step-2: M1 ’s weights are used to initialize a VGG-16 net
which is then fine-tuned for regressing word-embeddings
corresponding to the 125 category names of the Sketchy
dataset. Specifically, we use the 500-dimensional word-embeddings provided by the word2vec model trained on
1-billion Google News words [19]. Our choice is motivated
by the open-ended nature of guess-words in Sketch-QA
and the consequent need to capture semantic similarity
between ground-truth and guess-words rather than perform exact matching. For the loss function w.r.t. the predicted word embedding p and the ground-truth embedding g, we consider: [a] Mean Squared Loss: ||p − g||^2; [b] Cosine Loss [20]: 1 − cos(p, g) = 1 − (p^T g / ||p|| ||g||); [c] Hinge-rank Loss [21]: max[0, margin − p̂^T ĝ + p̂^T ĥ], where p̂, ĝ are length-normalized versions of p, g respectively and ĥ (≠ ĝ) corresponds to the normalized version of a randomly chosen category's word-embedding. The value of margin is set to 0.1; [d] Convex combination of Cosine Loss (CLoss) and Hinge-rank Loss (HLoss): CLoss + λ HLoss. The predicted embedding p is deemed a ‘correct’ match if the set of its k nearest word-embedding neighbors contains g. Overall, we
found the convex combination loss with λ = 1 (determined
via grid search) to provide the best performance. Let us
denote the resulting CNN regressor as M2 .
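The candidate losses can be sketched in a few lines of plain Python (an illustration, not the authors' implementation; embeddings are represented as plain lists):

```python
import math

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def norm(a):
    return math.sqrt(dot(a, a))

def cosine_loss(p, g):
    # 1 - cos(p, g)
    return 1.0 - dot(p, g) / (norm(p) * norm(g))

def hinge_rank_loss(p, g, h, margin=0.1):
    # max[0, margin - p̂ᵀĝ + p̂ᵀĥ] on length-normalized vectors;
    # h is a randomly chosen other category's embedding.
    p_hat = [x / norm(p) for x in p]
    g_hat = [x / norm(g) for x in g]
    h_hat = [x / norm(h) for x in h]
    return max(0.0, margin - dot(p_hat, g_hat) + dot(p_hat, h_hat))

def combined_loss(p, g, h, lam=1.0):
    # Combination used for M2: CLoss + λ·HLoss (λ = 1 in the paper)
    return cosine_loss(p, g) + lam * hinge_rank_loss(p, g, h)
```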
Step-3: M2 is now fine-tuned with randomly ordered sketches
from training data sequences and the corresponding word-embeddings. By repeating the grid search for the convex
combination loss, we found λ = 1 to once again provide the
best performance on the validation set. Note that in this case,
ĥ for Hinge-rank Loss corresponds to a word-embedding randomly selected from the entire word-embedding dictionary.
Let us denote the fine-tuned CNN regressor by M3 .
As mentioned earlier, we use the 4096-dimensional
output from fc7 layer of M3 as the representation for each
accumulated stroke image of sketch sequences.
Fig. 9: The architecture for our deep neural model of word guessing. The rectangular bars correspond to guess-word embeddings. M3 corresponds to the CNN regressor whose penultimate layer's outputs are used as input features to the LSTM model. “#” reflects our choice of modelling ‘no guess’ as a pre-defined non-word embedding. See Section 6 for details.
6.2 RNN training and evaluation
RNN Training: As with the CNN regressor, we configure
the RNN to predict word-embeddings. For preliminary
evaluation, we use only the portion of training sequences
corresponding to guess-words. For each time-step, we use the
same loss (convex combination of Cosine Loss and Hinge-rank Loss) determined to be best for the CNN regressor.
We use LSTM [22] as the specific RNN variant. For all the
experiments, we use Adagrad optimizer [23] (with a starting
learning rate of 0.01) and early-stopping as the criterion for
terminating optimization.
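The early-stopping criterion can be sketched as follows (a hypothetical helper, not the authors' code; the patience value is our assumption since the paper does not specify one):

```python
def should_stop(val_losses, patience=5):
    """Stop when validation loss has not improved for `patience` epochs."""
    if len(val_losses) <= patience:
        return False
    best_before = min(val_losses[:-patience])
    # No loss in the last `patience` epochs beat the earlier best.
    return min(val_losses[-patience:]) >= best_before
```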
Evaluation: We use the k-nearest neighbor criteria mentioned
above and examine performance for k = 1, 2, 3. To determine
the best configuration, we compute the proportion of ‘correct matches’ on the subsequence of validation sequences
containing guess-words. As a baseline, we also compute
the sequence-level scores for the CNN regressor M3 . We
average these per-sequence scores across the validation
sequences. The results show that the CNN regressor performs
reasonably well in spite of the overall complexity involved in
regressing guess-word embeddings (see first row of Table 6).
However, this performance is noticeably surpassed by LSTM
net, demonstrating the need to capture temporal context in
modelling guess-word transitions.
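The k-nearest-neighbour ‘correct match’ criterion used in this evaluation can be sketched as follows (illustrative, not the authors' code; the toy 2-D embeddings in the usage below are made up):

```python
import math

def euclidean(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def knn_correct(pred, truth_word, dictionary, k=3):
    # dictionary maps words to embeddings; the prediction counts as correct
    # if the ground-truth word is among the k nearest neighbours.
    ranked = sorted(dictionary, key=lambda w: euclidean(pred, dictionary[w]))
    return truth_word in ranked[:k]

def sequence_accuracy(preds, truths, dictionary, k=3):
    hits = sum(knn_correct(p, t, dictionary, k) for p, t in zip(preds, truths))
    return 100.0 * hits / len(preds)
```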
7 OVERALL RESULTS
For the final model, we merge the validation and training sets and re-train with the best architectural settings as determined by validation set performance (i.e. M3 as the feature extraction CNN, an LSTM with 512 hidden units as the RNN component, and the convex combination of Cosine Loss and Hinge-rank Loss as the optimization objective). We report performance on the test sequences.

LSTM    k = 1    k = 3    k = 5
–       52.77    63.02    66.40
128     54.13    63.11    66.25
256     55.03    63.79    66.40
512     55.35    64.03    66.81
TABLE 6: Sequence-level accuracies over the validation set are shown. In each sequence, only the portion with guess-words is considered for evaluation. The first row corresponds to the M3 CNN regressor. The first column shows the number of hidden units in the LSTM. The sequence-level accuracies with the k-nearest criteria applied to per-timestep guess predictions are shown for k = 1, 3, 5.
The full-sequence scenario is considerably challenging
since our model has the additional challenge of having
to accurately determine when the word-guessing phase
should begin. For this reason, we also design a two-phase
architecture as an alternate baseline. In this baseline, the first
phase predicts the most likely sequential location for ‘no
guess’-to-first-guess transition. Conditioned on this location,
the second phase predicts guess-word representations for
the rest of the sequence (see Figure 11). To retain focus, we only
report performance numbers for the two-phase baseline. For
a complete description of baseline architecture and related
ablative experiments, please refer to Appendix A.
As can be observed in Table 7, our proposed word-guess model outperforms the other baselines, including the two-phase baseline, by a significant margin. The reduction in long-range temporal contextual information, caused by splitting the original sequence into two disjoint sub-sequences, is possibly a reason for the lower performance of the two-phase baseline. Additionally, the need to integrate sequential information is once again highlighted by the inferior performance of the CNN-only baseline. We also wish to point out that 17% of the guesses in the test set are out-of-vocabulary words, i.e. guesses not present in the train or validation sets. In spite of this, our model achieves high sequence-level accuracy, thus making the case for open-ended word-guessing models.

Fig. 10: Examples of guesses generated by our model on test set sequences.

Architecture    k = 1    k = 3    k = 5
M3 (CNN)        43.61    51.54    54.18
Two-phase       46.33    52.08    54.46
Proposed        62.04    69.35    71.11
TABLE 7: Overall average sequence-level accuracies on the test set are shown for the guessing models (CNN-only baseline [first row], two-phase baseline [second] and our proposed model [third]).

Examples of guesses generated by our model on test set sketch sequences can be viewed in Figure 10.
Visual Turing Test: As a subjective assessment of our model,
we also conduct a Visual Turing Test. We randomly sample
K = 200 sequences from our test-set. For each of the
model predictions, we use the nearest word-embedding as
the corresponding guess. We construct two kinds of paired
sequences (si, hi) and (si, mi), where si corresponds to the i-th sketch stroke sequence (1 ≤ i ≤ K) and hi, mi correspond
to human and model generated guess sequences respectively.
We randomly display the stroke-and-guess-word paired
sequences to 20 human judges with 10 judges for each of the
two sequence types. Without revealing the origin of guesses
(human or machine), each judge is prompted “Who produced
these guesses?”.
The judges entered their ratings on a 5-point Likert
scale (‘Very likely a machine’, ‘Either is equally likely’, ‘Very likely a human’). To minimize selection bias, the scale
ordering is reversed for half the subjects [24]. For each
sequence i, 1 ≤ i ≤ K, we first compute the mode of the 10 ratings by guesser type: µ_i^H for human guesses and µ_i^M for model guesses. To determine the statistical significance of the ratings, we additionally analyze the K rating pairs ((µ_i^H, µ_i^M), 1 ≤ i ≤ K) using the non-parametric Wilcoxon Signed-Rank test [25].

Fig. 11: Architecture for the two-phase baseline. The first phase (blue dotted line) is used to predict the location of the transition to the word-guessing phase (output 1). Starting from the transition location, the second phase (red dotted line) sequentially outputs word-embedding predictions until the end of the stroke sequence.
When we study the distribution of ratings (Figure 12), the
human subject-based guesses from WORDGUESS-160 seem
to be clearly identified as such – the two most frequent rating
levels correspond to ‘human’. The non-trivial frequency
of ‘machine’ ratings reflects the ambiguity induced not
only by sketches and associated guesses, but also by the
possibility of machine being an equally viable generator. For
the model-generated guesses, many could be identified as
such, indicating the need for more sophisticated guessing
models. This is also evident from the Wilcoxon Signed-Rank test, which indicates a significant effect due to the
guesser type (p = 0.005682, Z = 2.765593). Interestingly, the
second-most preferred rating for model guesses is ‘human’,
indicating a degree of success for the proposed model.
8 RELATED WORK
Beyond its obvious entertainment value, Pictionary involves
a number of social [26], [27], collaborative [28], [29] and cognitive [30], [31] aspects which have been studied by researchers.
In an attempt to find neural correlates of creativity, Saggar et
al. [32] analyze fMRI data of participants instructed to draw
sketches of Pictionary ‘action’ words (e.g. “Salute”, “Snore”).
In our approach, we ask subjects to guess the word instead
of drawing the sketch for a given word. Also, our sketches
correspond to nouns (objects).
Human-elicited text-based responses to visual content,
particularly in game-like settings, have been explored for
object categorization [33], [34]. However, the visual content is
static and does not accumulate sequentially, unlike our case.
The work of Ullman et al. [35] on determining minimally
recognizable image configurations also bears mention. Our
approach is complementary to theirs in the sense that we
incrementally add stroke content (bottom-up) while they
incrementally reduce image content (top-down).
In recent times, deep architectures for sketch recognition [16], [36], [37] have found great success. However, these
models are trained to output a single, fixed label regardless of
the intra-category variation. In contrast, our model, trained
on actual human guesses, naturally exhibits human-like
variety in its responses (e.g. a sketch can be guessed as
‘aeroplane’ or ‘warplane’ based on the evolution of stroke-based appearance). Also, our model solves a much more
complex temporally-conditioned, multiple word-embedding
regression problem. Another important distinction is that
our dataset (W ORD G UESS -160) contains incorrect guesses
which usually arise due to ambiguity in sketched depictions.
Such ‘errors’ are normally considered undesirable, but we
deliberately include them in the training phase to enable
realistic mimicking. This in turn requires our model to
implicitly capture the subtle, fine-grained variations in sketch
quality – a situation not faced by existing approaches which
simply optimize for classification accuracy.
Our dataset collection procedure is similar to the one
employed by Johnson et al. [38] as part of their Pictionary-style game Stellasketch. However, we do not let the subject
choose the object category. Also, our subjects only provide
guesses for stroke sequences of existing sketches and not
for sketches being created in real-time. Unfortunately, the
Stellasketch dataset is not available publicly for further study.
It is also pertinent to compare our task and dataset
with QuickDraw, a large-scale sketch collection initiative by Google (https://github.com/googlecreativelab/quickdraw-dataset). The QuickDraw task generates a dataset
of object sketches. In contrast, our task Sketch-QA results in a dataset of human-generated guess words. In QuickDraw, a sketch is associated with a single, fixed category. In Sketch-QA, a sketch from an existing dataset is explicitly associated with a list of multiple guess words. In Sketch-QA, the freedom provided to human guessers enables
sketches to have arbitrarily fine-grained labels (e.g. ‘airplane’,
‘warplane’, ‘biplane’). However, QuickDraw’s label set is
fixed. Finally, our dataset (W ORD G UESS -160) captures a
rich sequence of guesses in response to accumulation of
sketch strokes. Therefore, it can be used to train human-like guessing models. QuickDraw’s dataset, lacking human
guesses, is not suited for this purpose.
Our computational model employs the Long Short Term
Memory (LSTM) [22] variant of Recurrent Neural Networks
(RNNs). LSTM-based frameworks have been utilized for
tasks involving temporally evolving content such as
video captioning [5], [39] and action recognition [40], [41],
[42]. Our model not only needs to produce human-like
guesses in response to temporally accumulated content,
but also has the additional challenge of determining how
long to ‘wait’ before initiating the guessing process. Once
the guessing phase begins, our model typically outputs
multiple answers. These per-time-step answers may even be
unrelated to each other. This paradigm is different from a
setup wherein a single answer constitutes the output. Also,
the output of the RNN in the aforementioned approaches is a softmax distribution over all the words of a fixed dictionary.
In contrast, we use a regression formulation wherein the
RNN outputs a word-embedding prediction at each timestep. This ensures scalability with increase in vocabulary and
better generalization since our model outputs predictions
in a constant-dimension vector space. [43] adopt a similar
regression formulation to obtain improved performance for
image annotation and action recognition.
Since our model aims to mimic human-like guessing
behavior, a subjective evaluation of generated guesses falls
within the ambit of a Visual Turing Test [44], [45], [46].
However, the free-form nature of guess-words and the
ambiguity arising from partial stroke information make our
task uniquely more challenging.
9 DISCUSSION AND CONCLUSION
We have introduced a novel guessing task called Sketch-QA to crowd-source Pictionary-style open-ended guesses
for object line sketches as they are drawn. The resulting
dataset, dubbed GUESSWORD-160, contains 16624 guess
sequences of 1108 subjects across 160 object categories.
We have also introduced a novel computational model
which produces open-ended guesses and analyzed its performance on the GUESSWORD-160 dataset for challenging on-line
Pictionary-style guessing tasks.
In addition to the computational model, our dataset
GUESSWORD-160 can serve researchers studying human perceptions of iconic object depictions. Since the guess-words are paired with object depictions, our data can also
aid graphic designers and civic planners in creation of
meaningful logos and public signage. This is especially
important since incorrectly perceived depictions often result
in inconvenience, mild amusement, or in extreme cases,
end up deemed offensive. Yet another potential application
domain is clinical healthcare. GUESSWORD-160 consists of
partially drawn objects and corresponding guesses across a
large number of categories. Such data could be useful for
neuropsychiatrists to characterize conditions such as visual
agnosia: a disorder in which subjects exhibit impaired object
recognition capabilities [47].
In future, we wish to also explore computational models
for optimal guessing, i.e. models which aim to guess the
sketch category as early and as correctly as possible. In the
futuristic context mentioned at the beginning (Figure 1), such
models would help the robot contribute as a productive
team-player by correctly guessing its team-member’s sketch
as early as possible. In our dataset, each stroke sequence was
shown only to a single subject and therefore, is associated
with a single corresponding sequence of guesses. This shortcoming is to be mitigated in future editions of Sketch-QA.
A promising approach for data collection would be to use
digital whiteboards, high-quality microphones and state-of-the-art speech recognition software to collect realistic paired
stroke-and-guess data from Pictionary games in home-like
settings [48]. It would also be worthwhile to consider Sketch-QA beyond object names (‘nouns’) and include additional lexical types (e.g. action-words and abstract phrases). We believe the resulting data, coupled with improved versions of our computational models, could make the scenario from Figure 1 a reality one day.

Fig. 12: Distribution of ratings for human and machine-generated guesses.

APPENDIX A
TWO-PHASE BASELINE MODEL

In this section, we present the architectural design and related evaluation experiments of the two-phase baseline originally mentioned in Section 7.

Typically, a guess sequence contains two distinct phases. In the first phase, no guesses are provided by the subject since the accumulated strokes provide insufficient evidence. At a later stage, the subject feels confident enough to provide the first guess. Thus, the location of this first guess (within the overall sequence) is the starting point for the second phase. The first phase (i.e. no guesses) offers no usable guess-words. Therefore, rather than tackling both phases within a single model, we adopt a divide-and-conquer approach. We design this baseline to first predict the phase-transition location (i.e. where the first guess occurs). Conditioned on this location, the model predicts guess-word representations for the rest of the sequence (see Figure 11).

In the two-phase model and the model described in the main paper, the guess-word generator is a common component. The guess-word generation model is already described in the main paper (Section 6). For the remainder of the section, we focus on the first phase of the two-phase baseline.

Consider a typical guess sequence GI. Suppose the first phase (‘no guesses’) corresponds to an initial sub-sequence of length k. The second phase then corresponds to the remainder sub-sequence of length (N − k). Denoting ‘no guess’ as 0 and a guess-word as 1, GI is transformed to a binary sequence BI = [(0, 0, . . . k times)(1, 1, . . . (N − k) times)]. Therefore, the objective for the Phase I model is to correctly predict the transition index, i.e. (k + 1).

A.1 Phase I model (Transition prediction)

Two possibilities exist for the Phase-I model. The first possibility is to train a CNN model using sequence members from I, BI pairs for binary (Guess/No Guess) classification and, during inference, repeatedly apply the CNN model on successive time-steps, stopping when the CNN model outputs 1 (indicating the beginning of the guessing phase). The second possibility is to train an RNN and, during inference, stop unrolling when a 1 is encountered. We describe the setup for the CNN model first.

A.1.1 CNN model
For the CNN model, we fine-tune the VGG-16 object classification model [17] using Sketchy [18], as in the proposed model. The fine-tuned model is used to initialize another VGG-16 model, but with a 256-dimensional bottleneck layer introduced after the fc7 layer. Let us denote this model as Q1.
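The CNN-based inference loop described above can be sketched as follows (a hypothetical illustration, not the authors' code; `binary_classifier` stands in for the fine-tuned Q1):

```python
def predict_transition(binary_classifier, stroke_images):
    # Scan accumulated-stroke images left to right; the first time-step
    # classified as 1 marks the start of the guessing phase (k + 1).
    for t, img in enumerate(stroke_images, start=1):
        if binary_classifier(img) == 1:
            return t
    return len(stroke_images) + 1   # no transition detected
```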
A.1.2 Sketch representation
As feature representations, we consider two possibilities: [a] Q1 is fine-tuned for 2-way classification (Guess/No Guess). The 256-dimensional output from the final fully-connected layer forms the feature representation. [b] The architecture in option [a] is modified by having 160-way class prediction as an additional, auxiliary task. This choice is motivated by the possibility of encoding category-specific transition location statistics within the 256-dimensional feature representation (see Figure 5). The two losses corresponding to the two outputs (2-way and 160-way classification) of the modified architecture are weighted equally during training.
Loss weighting for imbalanced label distributions: When training the feature extraction CNN (Q1) in Phase-I, we encounter imbalance in the distribution of no-guesses (0s) and guesses (1s). To mitigate this, we employ class-based loss weighting [49] for the binary classification task. Suppose the number of no-guess samples is n and the number of guess samples is g. Let f0 = n/(n+g), f1 = g/(n+g) and µ = (f0 + f1)/2. The weights for the classes are computed as w0 = µ/f0 and w1 = µ/f1. The binary cross-entropy loss is then computed as:

    L(P, G) = Σ_{x ∈ Xtrain} −w_x [g_x log(p_x) + (1 − g_x) log(1 − p_x)]    (3)

where g_x, p_x stand for the ground-truth and the prediction respectively, and w_x = w0 when x is a no-guess sample and w_x = w1 otherwise. For our data, w0 = 1.475 and w1 = 0.765, thus appropriately accounting for the relatively smaller number of no-guess samples in our training data.
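A small sketch (not the authors' code, with made-up sample counts) of this weighting scheme and the weighted loss of Eq. (3):

```python
import math

def class_weights(n, g):
    # n: no-guess samples, g: guess samples
    f0, f1 = n / (n + g), g / (n + g)     # class frequencies
    mu = (f0 + f1) / 2.0                  # = 0.5
    return mu / f0, mu / f1               # (w0, w1)

def weighted_bce(preds, labels, w0, w1, eps=1e-12):
    # Weighted binary cross-entropy, Eq. (3).
    total = 0.0
    for p, y in zip(preds, labels):
        w = w1 if y == 1 else w0          # pick the class weight
        total -= w * (y * math.log(p + eps) + (1 - y) * math.log(1 - p + eps))
    return total
```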
A similar procedure is also used for weighting losses
when the 160-way auxiliary classifier variant of Q1 is trained.
In this case, the weights are determined by the per-object category distribution of the training sequences. Experimentally,
Q1 with auxiliary task shows better performance – see first
two rows of Table 8.
A.1.3 LSTM setup
We use the 256-dimensional output of the Q1 -auxiliary CNN
as the per-timestep sketch representation fed to the LSTM
model. To capture the temporal evolution of the binary
sequences, we configure the LSTM to output a binary label
Bt ∈ {0, 1} for each timestep t. For the LSTM, we explored
CNN model   LSTM   Loss   w = 1    w = 3    w = 5
01          –      CCE    17.37    36.57    49.67
01-a        –      CCE    20.45    41.22    54.91
01-a        64     Seq    17.30    38.40    52.75
01-a        128    Seq    18.94    39.25    53.47
01-a        256    Seq    18.68    40.04    53.41
01-a        512    Seq    18.22    39.45    54.78
----------------------------------------------------
01-a        128    wSeq   19.20    41.48    55.64
01-a        128    mRnk   18.87    37.88    52.23
TABLE 8: The transition location prediction accuracies for various Phase I architectures are shown. 01 refers to the binary-output CNN model pre-trained for feature extraction. 01-a refers to the 01 CNN model with 160-way auxiliary classification. For the ‘Loss’ column, CCE = categorical cross-entropy, Seq = average sequence loss, wSeq = weighted sequence loss, mRnk = modified ranking loss. The results are shown for windows of width w centered on the ground-truth transition location. The rows below the dashed line correspond to test-set accuracies of the best CNN and LSTM configurations.
CNN model   LSTM   α    w = 1    w = 3    w = 5
01-a        128    5    19.00    41.55    55.44
01-a        128    7    19.20    41.48    54.85
01-a        128    10   18.48    40.10    54.06
TABLE 9: Weighted-loss performance for various values of α.
variations in number of hidden units (64, 128, 256, 512). The
weight matrices are initialized as orthogonal matrices with
a gain factor of 1.1 [50] and the forget gate bias is set to
1. For training the LSTMs, we use the average sequence
loss, computed as the average of the per-time-step binary
cross-entropy losses. The loss is regularized by a standard
L2 -weight norm weight-decay parameter (α = 0.0005). For
optimization, we use Adagrad with a learning rate of 5 × 10^−5
and the momentum term set to 0.9. The gradients are clipped
to 5.0 during training. For all LSTM experiments, we use a
mini-batch size of 1.
A.1.4 LSTM Loss function variants
The default sequence loss formulation treats all time-steps
of the sequence equally. Since we are interested in accurate
localization of transition point, we explored the following
modifications of the default loss for LSTM:
Transition weighted loss: To encourage correct prediction at the transition location, we explored a weighted version of the default sequence-level loss. Beginning at the transition location, the per-timestep losses on either side of the transition are weighted by an exponentially decaying factor e^(−α(1 − [t/(k+1)]^s)), where s = 1 for time-steps in [1, k] and s = −1 for time-steps in [k + 2, N]. Essentially, the loss at the transition location is weighted the most, while the losses for the other locations are downscaled by weights less than 1: the larger the distance from the transition location, the smaller the weight. We tried various values of α. The localization accuracy can be viewed in Table 9. Note that the weighted loss is added to the original sequence loss during actual training.
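The decaying weight profile can be sketched as follows (illustrative, not the authors' code):

```python
import math

def transition_weights(N, k, alpha=7.0):
    # Weight = exp(-alpha * (1 - (t/(k+1))^s)); peaks (= 1) at t = k + 1
    # and decays on both sides of the transition.
    weights = []
    for t in range(1, N + 1):
        s = 1 if t <= k + 1 else -1
        ratio = (t / (k + 1)) ** s
        weights.append(math.exp(-alpha * (1.0 - ratio)))
    return weights
```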
Modified ranking loss: We want the model to prevent occurrence of premature or multiple transitions. To incorporate
this notion, we use the ranking loss formulation proposed
by Ma et al. [42]. Let us denote the loss at time step t as L_c^t and the softmax score for the ground-truth label y_t as p^t_{y_t}. We shall refer to the latter as the detection score. In our case, for the Phase-I model, L_c^t corresponds to the binary cross-entropy loss. The overall loss at time step t is modified as:
L^t = λ_s L_c^t + λ_r L_r^t   (4)
We want the Phase-I model to produce monotonically
non-decreasing softmax values for no-guesses and guesses
as it progresses more into the sub-sequence. In other words,
if there is no transition at time t, i.e. yt = yt−1 , then we want
the current detection score to be no less than any previous
detection score. Therefore, for this situation, the ranking loss
is computed as:
L_r^t = max(0, p^∗_{y_t} − p^t_{y_t})   (5)

where

p^∗_{y_t} = max_{t′ ∈ [t_s, t−1]} p^{t′}_{y_t}   (6)

and t_s corresponds to time step 1 when y_t = 0 (No Guess) or t_s = t_p (the starting location of guessing).
If time-step t corresponds to a transition, i.e. y_t ≠ y_{t−1}, we want the detection score of the previous phase (‘No Guess’) to be as small as possible (ideally 0). Therefore, we compute the ranking loss as:

L_r^t = p^t_{y_{t−1}}   (7)
During training, we use a convex combination of the sequence loss and the ranking loss, with the loss weighting determined by grid search over (λ_s, λ_r) (see Table 10). From our experiments, we found the transition weighted loss to provide the best performance (Table 8).
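A direct transcription of Eqs. (5)-(7) might look as follows (an illustrative sketch, not the authors' implementation; `scores` and `labels` are hypothetical inputs):

```python
def ranking_loss(scores, labels):
    """Ranking loss in the style of Ma et al. [42], Eqs. (5)-(7).
    scores[t][c] is the softmax score of class c at step t;
    labels[t] is the ground-truth label y_t. Within a phase the
    detection score should be monotonically non-decreasing; at a
    transition the previous phase's score should be near 0."""
    total, phase_start = 0.0, 0
    for t in range(len(labels)):
        y = labels[t]
        if t > 0 and y != labels[t - 1]:
            # transition step: penalize the previous phase's score, Eq. (7)
            total += scores[t][labels[t - 1]]
            phase_start = t                    # t_s = t_p
        elif t > phase_start:
            # within a phase: hinge against the running max, Eqs. (5)-(6)
            p_star = max(scores[u][y] for u in range(phase_start, t))
            total += max(0.0, p_star - scores[t][y])
    return total
```

The loss is zero whenever detection scores rise monotonically within each phase and the old phase's score drops to zero at the transition.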
A.1.5 Evaluation
At inference time, the accumulated stroke sequence is
processed sequentially by Phase-I model until it outputs
a 1 which marks the beginning of Phase-II. Suppose the
predicted transition index is p and ground-truth index is g .
CNN model | LSTM | λs, λr   | Window width 1 | 3     | 5
01-a      | 128  | 0.5, 1.0 | 17.43          | 35.26 | 48.23
01-a      | 128  | 1, 1     | 18.41          | 39.45 | 53.08
01-a      | 128  | 1, 0.5   | 18.87          | 37.88 | 52.23
TABLE 10: Ranking loss performance for various weightings of sequence loss and rank loss.
Average sequence-level accuracy

P-I     | P-II    | k=1: P-II only | Full  | k=3: P-II only | Full  | k=5: P-II only | Full
01-a    | M3      | 54.06          | 46.35 | 57.05          | 43.61 | 62.04          | 46.33
Unified | Unified | 64.11          | 56.45 | 64.76          | 51.54 | 69.35          | 52.08
01-a    | R25     | 66.85          | 59.30 | 67.19          | 54.18 | 71.11          | 54.46
TABLE 11: Overall average sequence-level accuracy on the test set for the guessing models (CNNs-only baseline [first row], Unified [second], Two-Phased [third]). R25 corresponds to the best Phase-II LSTM model.
The prediction is deemed correct if p ∈ [g − δ, g + δ], where δ denotes the half-width of a window centered on g. For our experiments, we used δ ∈ {0, 1, 2}. The results (Table 8)
indicate that the Q1 -auxiliary CNN model outperforms the
best LSTM model by a very small margin. The addition of
weighted sequence loss to the default version plays a crucial
role in the latter (LSTM model) since the default version does
not explicitly optimize for the transition location. Overall, the
large variation in sequence lengths and transition locations
explains the low performance for exact (k = 1) localization.
Note, however, that the performance improves considerably
when just one to two nearby locations are considered for
evaluation (k = 3, 5).
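The windowed evaluation rule can be sketched as follows (illustrative code; the function names are ours):

```python
def localization_correct(p, g, delta):
    """Evaluation rule from A.1.5: the predicted transition index p is
    correct if it falls within a window of half-width delta around the
    ground-truth index g, i.e. p in [g - delta, g + delta]."""
    return g - delta <= p <= g + delta

def localization_accuracy(preds, gts, delta):
    """Fraction of sequences whose transition is correctly localized."""
    correct = sum(localization_correct(p, g, delta) for p, g in zip(preds, gts))
    return correct / len(preds)
```

With this notation, the window widths k = 1, 3, 5 in Tables 9-11 correspond to δ = 0, 1, 2.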
During inference, the location predicted by Phase-I model
is used as the starting point for Phase-II (word guessing). We
do not describe the Phase-II model since it is virtually identical in design to the model described in the main paper (Section
6).
A.2 Overall Results
To determine overall performance, we utilize the best architectural settings as determined by validation set performance.
We then merge validation and training sets, re-train the best
models and report their performance on the test set. As
the overall performance measure, we report two items on
the test set – [a] P-II: the fraction of correct matches with
respect to the subsequence corresponding to ground-truth
word guesses. In other words, we assume 100% accurate
localization during Phase I and perform Phase II inference
beginning from the ground-truth location of the first guess.
[b] Full: We use Phase-I model to determine transition
location. Note that depending on predicted location, it is
possible that we obtain word-embedding predictions when
the ground-truth at the corresponding time-step corresponds
to ‘no guess’. Regarding such predictions as mismatches, we
compute the fraction of correct matches for the full sequence.
As a baseline model (first row of Table 11), we use outputs
of the best performing per-frame CNNs from Phase I and
Phase II.
The results (Table 11) show that the Unified model outperforms the Two-Phased model by a significant margin. For the Phase-II model, the objective for the CNN (whose features are used as the sketch representation) and the LSTM are the same. This is not the case for the Phase-I model. The reduction in long-range temporal contextual information, caused by splitting the original sequence into two disjoint sub-sequences, is possibly another reason for the lower performance of the Two-Phased model.
REFERENCES
[1] G. Tesauro, “TD-gammon, a self-teaching backgammon program, achieves master-level play,” Neural Computation, vol. 6, no. 2, pp. 215–219, 1994.
[2] Deep Blue Versus Kasparov: The Significance for Artificial Intelligence. AAAI Press, 1997.
[3] D. Silver et al., “Mastering the game of Go with deep neural networks and tree search,” Nature, vol. 529, no. 7587, pp. 484–489, 2016.
[4] X. Chen and C. Lawrence Zitnick, “Mind’s eye: A recurrent visual representation for image caption generation,” in CVPR, 2015, pp. 2422–2431.
[5] S. Venugopalan, M. Rohrbach, J. Donahue, R. Mooney, T. Darrell, and K. Saenko, “Sequence to sequence – video to text,” in CVPR, 2015, pp. 4534–4542.
[6] K. Xu, J. Ba, R. Kiros, K. Cho, A. C. Courville, R. Salakhutdinov, R. S. Zemel, and Y. Bengio, “Show, attend and tell: Neural image caption generation with visual attention,” in ICML, vol. 14, 2015, pp. 77–81.
[7] S. Antol, A. Agrawal, J. Lu, M. Mitchell, D. Batra, C. Lawrence Zitnick, and D. Parikh, “VQA: Visual question answering,” in ICCV, 2015, pp. 2425–2433.
[8] H. Xu and K. Saenko, “Ask, attend and answer: Exploring question-guided spatial attention for visual question answering,” in ECCV. Springer, 2016, pp. 451–466.
[9] M. Ren, R. Kiros, and R. Zemel, “Exploring models and data for image question answering,” in NIPS, 2015, pp. 2953–2961.
[10] M. Eitz, J. Hays, and M. Alexa, “How do humans sketch objects?” ACM Trans. on Graphics, vol. 31, no. 4, p. 44, 2012.
[11] R. G. Schneider and T. Tuytelaars, “Sketch classification and classification-driven analysis using fisher vectors,” ACM Trans. Graph., vol. 33, no. 6, pp. 174:1–174:9, Nov. 2014.
[12] P. Halácsy, A. Kornai, and C. Oravecz, “HunPos: an open source trigram tagger,” in Proc. ACL on interactive poster and demonstration sessions, 2007, pp. 209–212.
[13] D. Lachowicz, “Enchant spellchecker library,” 2010.
[14] G. A. Miller, “WordNet: a lexical database for English,” Communications of the ACM, vol. 38, no. 11, pp. 39–41, 1995.
[15] Z. Wu and M. Palmer, “Verbs semantics and lexical selection,” in ACL. Association for Computational Linguistics, 1994, pp. 133–138.
[16] R. K. Sarvadevabhatla, J. Kundu, and V. B. Radhakrishnan, “Enabling my robot to play pictionary: Recurrent neural networks for
sketch recognition,” in ACMMM, 2016, pp. 247–251.
[17] K. Simonyan and A. Zisserman, “Very deep convolutional networks
for large-scale image recognition,” arXiv preprint arXiv:1409.1556,
2014.
[18] P. Sangkloy, N. Burnell, C. Ham, and J. Hays, “The sketchy database:
learning to retrieve badly drawn bunnies,” ACM Transactions on
Graphics (TOG), vol. 35, no. 4, p. 119, 2016.
[19] T. Mikolov, K. Chen, G. Corrado, and J. Dean, “Efficient estimation of word representations in vector space,” arXiv preprint
arXiv:1301.3781, 2013.
[20] T. Qin, X.-D. Zhang, M.-F. Tsai, D.-S. Wang, T.-Y. Liu, and H. Li,
“Query-level loss functions for information retrieval,” Information
Processing & Management, vol. 44, no. 2, pp. 838–855, 2008.
[21] A. Frome, G. S. Corrado, J. Shlens, S. Bengio, J. Dean, T. Mikolov
et al., “Devise: A deep visual-semantic embedding model,” in NIPS,
2013, pp. 2121–2129.
[22] S. Hochreiter and J. Schmidhuber, “Long short-term memory,”
Neural Computation, vol. 9, no. 8, pp. 1735–1780, 1997.
[23] J. Duchi, E. Hazan, and Y. Singer, “Adaptive subgradient methods
for online learning and stochastic optimization,” JMLR, vol. 12, no.
Jul, pp. 2121–2159, 2011.
[24] J. C. Chan, “Response-order effects in likert-type scales,” Educational
and Psychological Measurement, vol. 51, no. 3, pp. 531–540, 1991.
[25] F. Wilcoxon, “Individual comparisons by ranking methods,”
Biometrics Bulletin, vol. 1, no. 6, pp. 80–83, 1945. [Online]. Available:
http://www.jstor.org/stable/3001968
[26] T. B. Wortham, “Adapting common popular games to a human
factors/ergonomics course,” in Proc. Human Factors and Ergonomics
Soc. Annual Meeting, vol. 50. SAGE, 2006, pp. 2259–2263.
[27] F. Mäyrä, “The contextual game experience: On the socio-cultural
contexts for meaning in digital play,” in Proc. DIGRA, 2007, pp.
810–814.
[28] N. Fay, M. Arbib, and S. Garrod, “How to bootstrap a human
communication system,” Cognitive science, vol. 37, no. 7, pp. 1356–
1367, 2013.
[29] M. Groen, M. Ursu, S. Michalakopoulos, M. Falelakis, and E. Gasparis, “Improving video-mediated communication with orchestration,” Computers in Human Behavior, vol. 28, no. 5, pp. 1575 – 1579,
2012.
[30] D. M. Dake and B. Roberts, “The visual analysis of visual metaphor,”
1995.
[31] B. Kievit-Kylar and M. N. Jones, “The semantic pictionary project,”
in Proc. Annual Conf. Cog. Sci. Soc., 2011, pp. 2229–2234.
[32] M. Saggar et al., “Pictionary-based fMRI paradigm to study
the neural correlates of spontaneous improvisation and figural
creativity,” Nature (2005), 2015.
[33] L. Von Ahn and L. Dabbish, “Labeling images with a computer
game,” in SIGCHI. ACM, 2004, pp. 319–326.
[34] S. Branson, C. Wah, F. Schroff, B. Babenko, P. Welinder, P. Perona,
and S. Belongie, “Visual recognition with humans in the loop,”
in European Conference on Computer Vision. Springer, 2010, pp.
438–451.
[35] S. Ullman, L. Assif, E. Fetaya, and D. Harari, “Atoms of recognition
in human and computer vision,” PNAS, vol. 113, no. 10, pp. 2744–
2749, 2016.
[36] Q. Yu, Y. Yang, Y.-Z. Song, T. Xiang, and T. M. Hospedales, “Sketch-a-net that beats humans,” arXiv preprint arXiv:1501.07873, 2015.
[37] O. Seddati, S. Dupont, and S. Mahmoudi, “Deepsketch: deep
convolutional neural networks for sketch recognition and similarity
search,” in CBMI. IEEE, 2015, pp. 1–6.
[38] G. Johnson and E. Y.-L. Do, “Games for sketch data collection,” in
Proceedings of the 6th eurographics symposium on sketch-based interfaces
and modeling. ACM, 2009, pp. 117–123.
[39] J. Donahue, L. Anne Hendricks, S. Guadarrama, M. Rohrbach,
S. Venugopalan, K. Saenko, and T. Darrell, “Long-term recurrent
convolutional networks for visual recognition and description,” in
CVPR, 2015, pp. 2625–2634.
[40] S. Yeung, O. Russakovsky, N. Jin, M. Andriluka, G. Mori, and L. Fei-Fei, “Every moment counts: Dense detailed labeling of actions in
complex videos,” arXiv preprint arXiv:1507.05738, 2015.
[41] J. Y.-H. Ng, M. Hausknecht, S. Vijayanarasimhan, O. Vinyals,
R. Monga, and G. Toderici, “Beyond short snippets: Deep networks
for video classification,” in CVPR, 2015, pp. 4694–4702.
[42] S. Ma, L. Sigal, and S. Sclaroff, “Learning activity progression in
lstms for activity detection and early detection,” in CVPR, 2016, pp.
1942–1950.
[43] G. Lev, G. Sadeh, B. Klein, and L. Wolf, “Rnn fisher vectors for
action recognition and image annotation,” in ECCV. Springer,
2016, pp. 833–850.
[44] D. Geman, S. Geman, N. Hallonquist, and L. Younes, “Visual turing
test for computer vision systems,” PNAS, vol. 112, no. 12, pp.
3618–3623, 2015.
[45] M. Malinowski and M. Fritz, “Towards a visual turing challenge,”
arXiv preprint arXiv:1410.8027, 2014.
[46] H. Gao, J. Mao, J. Zhou, Z. Huang, L. Wang, and W. Xu, “Are you
talking to a machine? dataset and methods for multilingual image
question,” in NIPS, 2015, pp. 2296–2304.
[47] L. Baugh, L. Desanghere, and J. Marotta, “Agnosia,” in Encyclopedia
of Behavioral Neuroscience. Academic Press, Elsevier Science, 2010,
vol. 1, pp. 27–33.
[48] G. A. Sigurdsson, G. Varol, X. Wang, A. Farhadi, I. Laptev, and
A. Gupta, “Hollywood in homes: Crowdsourcing data collection
for activity understanding,” in ECCV, 2016.
[49] D. Eigen and R. Fergus, “Predicting depth, surface normals
and semantic labels with a common multi-scale convolutional
architecture,” in Proceedings of the IEEE International Conference on
Computer Vision, 2015, pp. 2650–2658.
[50] A. M. Saxe, J. L. McClelland, and S. Ganguli, “Exact solutions
to the nonlinear dynamics of learning in deep linear neural
networks,” CoRR, vol. abs/1312.6120, 2013. [Online]. Available:
http://arxiv.org/abs/1312.6120
arXiv:1707.06864v1 [] 21 Jul 2017
Interval structures for braid groups B(e, e, n)
Georges Neaime
Laboratoire de Mathématiques Nicolas Oresme
Université de Caen Normandie
[email protected]
March 18, 2018
Abstract
Complex braid groups are a generalization of Artin-Tits groups. The general
goal is to extend what is known for Artin-Tits groups to other complex braid
groups. We are interested in Garside structures that derive from intervals. Actually, we construct intervals in the complex reflection group G(e, e, n) which
gives rise to Garside groups. Some of these groups correspond to the complex
braid group B(e, e, n). For the other Garside groups that appear, we give some
of their properties in order to understand these new structures.
Contents

1 Introduction
2 The groups G(e, e, n) and B(e, e, n)
3 Reduced words in G(e, e, n)
4 Balanced elements of maximal length
5 Interval structures
6 About the interval structures

1 Introduction
A reflection is an element s of GLn(C) with n ≥ 1 such that ker(s − 1) is a hyperplane and s^2 = 1. Relaxing the last condition to s of finite order defines the notion of pseudo-reflection. Let W be a finite subgroup of GLn(C) and R be the set of reflections of
W . We say that W is a complex reflection group if W is generated by R. Since
every complex reflection group is a direct product of irreducible ones, we restrict our
work to irreducible complex reflection groups. These groups have been classified by
Shephard and Todd [17] in 1954. The classification consists of two families and 15
exceptions. For e, n ≥ 1, the first family is denoted by G(e, e, n) and defined as the
group of n × n matrices consisting of
• monomial matrices (each row and column has a unique nonzero entry),
• with all nonzero entries lying in µe , the group of e-th roots of unity, and
• for which the product of the nonzero entries is 1.
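These three conditions can be checked mechanically. The following sketch (ours, purely illustrative and not part of the paper) tests membership of a complex matrix in G(e, e, n):

```python
def is_in_Geen(M, e, tol=1e-9):
    """Check the three defining conditions of G(e, e, n) for a complex
    matrix M given as a list of rows: (1) M is monomial, (2) its nonzero
    entries are e-th roots of unity, (3) their product is 1."""
    n = len(M)
    rows = [[x for x in row if abs(x) > tol] for row in M]
    cols = [[M[i][j] for i in range(n) if abs(M[i][j]) > tol] for j in range(n)]
    if any(len(r) != 1 for r in rows) or any(len(c) != 1 for c in cols):
        return False                        # not a monomial matrix
    entries = [r[0] for r in rows]
    if any(abs(x ** e - 1) > tol for x in entries):
        return False                        # some entry is not in mu_e
    prod = 1
    for x in entries:
        prod *= x
    return abs(prod - 1) <= tol             # product of the entries is 1
```

For G(2e, e, n) one would instead accept entries in μ_{2e} and a product equal to 1 or −1.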
The second family is denoted by G(2e, e, n) and defined as the group of n × n matrices
consisting of monomial matrices, with all nonzero entries lying in µ2e , and for which
the product of the nonzero entries is 1 or −1. For the definition of the exceptional
groups, the reader may check [17].
For every complex reflection group W, there exist a corresponding hyperplane arrangement A = {Ker(s − 1) | s ∈ R} and hyperplane complement X = C^n \ ∪A. The pure braid group is defined as P := π1(X) and the braid group (or
complex braid group) as B := π1 (X/W ). Note that we have the short exact sequence:
1 −→ P −→ B −→ W −→ 1.
If W = G(de, e, n) with d = 1 or 2, then we denote by B(de, e, n) the associated
braid group. This construction of the braid group is also valid for finite complex
pseudo-reflection groups. However, using the classification of Shephard and Todd for
irreducible complex pseudo-reflection groups and case-by-case results of [3], we may
restrict our work to complex reflection groups as far as group-theoretic properties of
the braid group are concerned.
The previous definitions are a generalization of the well-known Coxeter and ArtinTits groups that we recall now. One way of defining a finite Coxeter group W is by
a presentation with
• a generating set S and
• relations:
– quadratic relations: s^2 = 1 for all s ∈ S, and
– braid relations: sts··· = tst··· (mst letters on each side) for s ≠ t ∈ S, where mst is the order of st in W.
The Artin-Tits group B(W) corresponding to W is the group of fractions of the monoid B+(W) defined by a presentation with generators a set S̃ in bijection with the set S, and with only the braid relations: s̃t̃s̃··· = t̃s̃t̃··· (mst letters on each side) for s̃ ≠ t̃ ∈ S̃.
The seminal example of these groups is when W = Sn , the symmetric group with
n ≥ 2. It has a presentation with generators s1 , s2 , · · · , sn−1 and relations:
• si^2 = 1 for 1 ≤ i ≤ n − 1,
• si si+1 si = si+1 si si+1 for 1 ≤ i ≤ n − 2, and
• si sj = sj si for |i − j| > 1.
The Artin-Tits group associated with Sn is the ‘classical’ braid group denoted by Bn .
The following diagram presentation encodes all the information about the generators
and relations of the presentation of Bn .
s̃1
s̃n−2 s̃n−1
s̃2
Figure 1: Diagram for the presentation of Bn .
The link with the first definitions is as follows. Consider W a real reflection group
meaning that W < GLn (R) < GLn (C). By a theorem of Coxeter, every real reflection group corresponds to a Coxeter group. Furthermore, by a theorem of Brieskorn,
the Artin-Tits group corresponding to W is the braid group π1 (X/W ) attached to W .
It is widely believed that complex braid groups share similar properties with Artin-Tits groups. One would like to extend what is known for Artin-Tits groups to other
and by Corran and Picantin [6] in 2009 that the complex braid group B(e, e, n) admits some Garside structures. We are interested in constructing Garside structures
for B(e, e, n) that derive from intervals in the associated complex reflection group
G(e, e, n). For instance, the complex braid group B(e, e, n) admits interval structures
that derive from the non-crossing partitions of type (e, e, n). See [1] for a detailed
study. The aim of this paper is to provide new interval structures for B(e, e, n). In
Section 3, we compute the length of all elements of G(e, e, n) over an appropriate
generating set. This allows us to construct intervals in the complex reflection group
G(e, e, n) and show that they are lattices which automatically gives rise to Garside
structures. This is done in Theorem 5.15. In the last section, we provide group presentations for these Garside structures which we denote by B (k) (e, e, n) for 1 ≤ k ≤ e − 1
and identify which of them correspond to B(e, e, n). For the other Garside structures
that appear, we give some of their properties in order to understand them. One of
the important results obtained is Theorem 6.14:
B (k) (e, e, n) is isomorphic to B(e, e, n) if and only if k ∧ e = 1.
In the remaining part of this section, we include the necessary preliminaries to
accurately describe these Garside structures.
1.1 Garside monoids and groups
In his PhD thesis, defended in 1965 [12], and in the article that followed [13], Garside
solved the Conjugacy Problem for the classical braid group Bn by introducing a submonoid Bn+ of Bn and an element ∆n of Bn+ that he called fundamental, and then
showing that there exists a normal form for every element in Bn . In the beginning of
the 1970’s, it was realized by Brieskorn and Saito [2] and Deligne [11] that Garside’s
results extend to all Artin-Tits groups. At the end of the 1990’s, after listing the
abstract properties of Bn+ and the fundamental element ∆n , Dehornoy and Paris [10]
defined the notion of Gaussian groups and Garside groups which leads, in “a natural,
but slowly emerging program” as stated in [8], to Garside theory. For a complete
study about Garside structures that is still far from complete, we refer the reader to
[8].
We start by defining Garside monoids and groups. Let M be a monoid. Under
some assumptions about M , more precisely the assumptions 1 and 2 of Definition 1.2,
one can define a partial order relation on M as follows.
Definition 1.1. Let f, g ∈ M. We say that f left-divides g, or simply f divides g when there is no confusion, written f ≼ g, if fg′ = g holds for some g′ ∈ M. Similarly, we say that f right-divides g, written f ≼r g, if g′f = g holds for some g′ ∈ M.
We are ready to define Garside monoids and groups.
Definition 1.2. We say that M is a Garside monoid if
1. M is cancellative, that is f g = f h =⇒ g = h and gf = hf =⇒ g = h for
f, g, h ∈ M ,
2. there exists λ : M −→ N s.t. λ(f g) ≥ λ(f ) + λ(g) and g 6= 1 =⇒ λ(g) 6= 0,
3. any two elements of M have a gcd and an lcm for ≼ and for ≼r, and
4. there exists an element ∆ ∈ M such that the set of its left divisors coincides with the set of its right divisors, generates M, and is finite.
The element ∆ is called a Garside element of M and the divisors of ∆ are called
the simples of the Garside structure.
Assumptions 1 and 3 of Definition 1.2 ensure that Ore’s conditions are satisfied.
Hence there exists a group of fractions of the monoid M in which it embeds. This
allows us to give the following definition.
Definition 1.3. A Garside group is the group of fractions of a Garside monoid.
Note that one of the important aspects of a Garside structure is the existence of
a normal form for all elements of the Garside group. Furthermore, many problems
like the Word and Conjugacy problems can be solved in Garside groups which makes
their study interesting.
1.2 Interval structures
Let G be a finite group generated by a finite set S. There is a way to construct
Garside structures from intervals in G.
We start by defining a partial order relation on G.
Definition 1.4. Let f, g ∈ G. We say that g is a divisor of f, or f is a multiple of g, and write g ≼ f, if f = gh with h ∈ G and ℓ(f) = ℓ(g) + ℓ(h), where ℓ(f) is the length over S of f ∈ G.
Definition 1.5. For w ∈ G, define a monoid M ([1, w]) by the presentation of monoid
with
• generating set P in bijection with the interval [1, w] := {f ∈ G | 1 ≼ f ≼ w} and
• relations: f g = h if f, g, h ∈ [1, w], fg = h, and f ≼ h, that is, ℓ(f) + ℓ(g) = ℓ(h).
Similarly, one can define the partial order relation on G given by
g ≼r f if and only if ℓ(fg−1) + ℓ(g) = ℓ(f),
then define the interval [1, w]r and the monoid M([1, w]r).
Definition 1.6. Let w be in G. We say that w is a balanced element of G if [1, w] =
[1, w]r .
We have the following theorem (see Section 10 of [15] for a proof).
Theorem 1.7. If w ∈ G is balanced and both posets ([1, w], ≼) and ([1, w]r, ≼r) are lattices, then M([1, w]) is a Garside monoid with Garside element w and with simples
[1, w].
The previous construction gives rise to an interval structure. The interval monoid
is M ([1, w]). When M ([1, w]) is a Garside monoid, its group of fractions exists and
is denoted by G(M ([1, w])). We call it the interval group. We will give a seminal
example of this structure. It shows that Artin-Tits groups admit interval structures.
Example 1.8. Let W be a finite Coxeter group
W = ⟨ S | s^2 = 1, sts··· = tst··· (mst letters on each side) for s ≠ t ∈ S ⟩.
Take G = W and w = w0, the longest element over S in W. We have [1, w0] = W.
Construct the interval monoid M ([1, w0 ]). We have M ([1, w0 ]) is the Artin-Tits
monoid B + (W ). Hence B + (W ) is generated by a copy W of W with f g = h if
f g = h and ℓ(f ) + ℓ(g) = ℓ(h); f, g, and h ∈ W . We have the following result.
Theorem 1.9. B + (W ) is a Garside monoid with Garside element w0 and with simples W .
2 The groups G(e, e, n) and B(e, e, n)
In this section, we recall some presentations by generators and relations of G(e, e, n)
and B(e, e, n) based on the results of [3] and [6].
2.1 Presentations
Recall that for e, n ≥ 1, G(e, e, n) is the group of n × n matrices consisting of monomial matrices, with all nonzero entries lying in µe , the e-th roots of unity, and for
which the product of the nonzero entries is 1. Note that this family includes three
families of finite Coxeter groups: G(1, 1, n) is the symmetric group, G(e, e, 2) is the
dihedral group, and G(2, 2, n) is type Dn .
Define a group by a presentation with generators and relations that can be described by the following diagram.
Figure 2: Diagram for the presentation of BMR of G(e, e, n) (the nodes t0 and t1 are joined by an edge labeled e, followed by the path s3, s4, . . . , sn−1, sn; each generator has order 2).
The generators of this presentation are t0 , t1 , s3 , s4 , · · · , sn−1 , sn and the relations
are:
• quadratic relations for all generators,
• the relations of the symmetric group for s3 , s4 , · · · , sn−1 ,
• the relation of the dihedral group I2(e) for t0 and t1: t0t1t0··· = t1t0t1··· (e letters on each side),
• s3 ti s3 = ti s3 ti for i = 0, 1, and
• sj ti = ti sj for i = 0, 1 and 4 ≤ j ≤ n.
It is shown in [3] that this group is isomorphic to the complex reflection group G(e, e, n) via

ti ↦ ti := ( 0 ζe^−i ; ζe^i 0 ) ⊕ In−2 for i = 0, 1, and sj ↦ sj := Ij−2 ⊕ ( 0 1 ; 1 0 ) ⊕ In−j for 3 ≤ j ≤ n,

where matrices are written row by row and Ik is the k × k identity matrix with 1 ≤ k ≤ n.
This presentation is called the presentation of BMR (Broué-Malle-Rouquier) of G(e, e, n).
By [3], removing the quadratic relations gives a presentation of the complex braid
group B(e, e, n) attached to G(e, e, n) with diagram as follows.
Figure 3: Diagram for the presentation of BMR of B(e, e, n) (the same diagram as Figure 2, on the generators t̃0, t̃1, s̃3, . . . , s̃n, without the order labels).
Note that the set of generators of this presentation is in bijection with
{t0 , t1 , s3 , s4 , · · · , sn }. This presentation is called the presentation of BMR of B(e, e, n).
2.2 Other presentations
Consider the presentation of BMR for the complex braid group B(e, e, n). For e ≥ 3
and n ≥ 3, it is shown in [5], p.122, that the monoid defined by this presentation
fails to embed in B(e, e, n). Thus, this presentation does not give rise to a Garside
structure for B(e, e, n). Considering the interest of the Garside groups given earlier,
it is legitimate to look for a Garside structure for B(e, e, n).
Let t̃i := t̃i−1 t̃i−2 t̃i−1^{−1} for 2 ≤ i ≤ e − 1. Consider the following diagram presentation (the kite).
Figure 4: Diagram for the presentation of CP of B(e, e, n) (the kite: the vertices t̃0, t̃1, . . . , t̃e−1 form a cycle attached to the path s̃3, s̃4, . . . , s̃n−1, s̃n).
The generators of this presentation are t̃0 , t̃1 , · · · , t̃e−1 , s̃3 , s̃4 , · · · , s̃n−1 , s̃n and
the relations are:
• the relations of the symmetric group for s̃3 , s̃4 , · · · , s̃n−1 ,
• the relation of the ‘dual’ dihedral group for t̃0 , t̃1 , · · · , t̃e−1 : t̃i t̃i−1 = t̃j t̃j−1 for
i, j ∈ Z/eZ,
• s̃3 t̃i s̃3 = t̃i s̃3 t̃i for 0 ≤ i ≤ e − 1, and
• s̃j t̃i = t̃i s̃j for 0 ≤ i ≤ e − 1 and 4 ≤ j ≤ n.
It is shown in [6] that the group defined by this presentation is isomorphic to
B(e, e, n). We call it the presentation of CP (Corran Picantin) of B(e, e, n). It is also
shown in [6] that:
Proposition 2.1. The presentation of CP gives rise to a Garside structure for
B(e, e, n) with
Garside element: ∆ = ∆2 ∆3 · · · ∆n, where ∆2 = t̃1 t̃0, ∆3 = s̃3 t̃1 t̃0 s̃3, . . . , ∆n = s̃n · · · s̃3 t̃1 t̃0 s̃3 · · · s̃n, and
simples: the elements of the form δ2 δ3 · · · δn where δi is a divisor of ∆i for 2 ≤ i ≤ n.
It is stated in [6] that if one adds the relations x^2 = 1 for all generators x of the
presentation of CP, one obtains a presentation of a group isomorphic to G(e, e, n). It
is called the presentation of CP of G(e, e, n) with diagram as follows.
Figure 5: Diagram for the presentation of CP of G(e, e, n) (the same kite as in Figure 4, on the generators t0, t1, . . . , te−1, s3, . . . , sn, each of order 2).
The generators of this presentation belong to the set X := {t0, t1, · · · , te−1, s3, · · · , sn}, and the diagram encodes the same relations as the kite diagram of Figure 4, with quadratic relations added for all the generators in X.
The isomorphism with G(e, e, n) is given by ti ↦ ti := ( 0 ζe^−i ; ζe^i 0 ) ⊕ In−2 for 0 ≤ i ≤ e − 1, and sj ↦ sj := Ij−2 ⊕ ( 0 1 ; 1 0 ) ⊕ In−j for 3 ≤ j ≤ n. Denote by X the set of matrices {t0, t1, · · · , te−1, s3, · · · , sn}.
In the next section, we compute the length over X of each element w in G(e, e, n)
by providing a minimal word representative over X of w.
3 Reduced words in G(e, e, n)
In this section, we represent each element of G(e, e, n) by a reduced word over X.
After some preparations, this is done in Proposition 3.10. Also, we characterize all
the elements of G(e, e, n) that are of maximal length over X.
Recall that an element w ∈ X∗ is called a word over X. We denote by ℓ(w) the length over X of the word w.
Definition 3.1. Let w be an element of G(e, e, n). We define ℓ(w) to be the minimal
word length ℓ(w) of a word w over X that represents w. A reduced expression of w
is any word representative of w of word length ℓ(w).
We introduce an algorithm that produces a word RE(w) over X for a given matrix
w in G(e, e, n). Later on, we prove that RE(w) is a reduced expression over X of w.
Input : w, a matrix in G(e, e, n).
Output: RE(w), a word over X.
Local variables: w′ , RE(w), i, U , c, k.
Initialisation: U := [1, ζe , ζe2 , ..., ζee−1 ], s2 := t0 , s2 := t0 ,
RE(w) = ε: the empty word, w′ := w.
for i from n down to 2 do
c := 1; k := 0;
while w′ [i, c] = 0 do
c := c + 1;
end
Then w′ [i, c] is the root of unity on row i;
while U[k + 1] ≠ w′[i, c] do
k := k + 1;
end
Then w′ [i, c] = ζek .
if k ≠ 0 then
w′ := w′ sc sc−1 · · · s3 s2 tk ; Then w′ [i, 2] = 1;
RE(w) := tk s2 s3 · · · sc RE(w);
c := 2;
end
w′ := w′ sc+1 · · · si ; Then w′ [i, i] = 1;
RE(w) := si · · · sc+1 RE(w);
end
Return RE(w);
Algorithm 1: A word over X corresponding to an element w ∈ G(e, e, n).
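Algorithm 1 admits a direct transcription into executable code. The sketch below is our own illustrative Python (not part of the paper): the matrix w is given as a list of rows of complex numbers, generators are returned as strings, and s2 plays the role of t0.

```python
import cmath

def RE(w, e, tol=1e-9):
    """Compute the word RE(w) of Algorithm 1 for a matrix w in G(e, e, n),
    returned as a list of generator names (with s2 = t0)."""
    U = [cmath.exp(2j * cmath.pi * k / e) for k in range(e)]  # e-th roots of unity
    n = len(w)
    wp = [list(row) for row in w]

    def mul_s(j):
        # wp := wp * s_j : swap coordinates j-1 and j (0-based columns j-2, j-1)
        for row in wp:
            row[j - 2], row[j - 1] = row[j - 1], row[j - 2]

    def mul_t(k):
        # wp := wp * t_k : new col 1 = zeta^k * col 2, new col 2 = zeta^{-k} * col 1
        for row in wp:
            row[0], row[1] = row[1] * U[k], row[0] * U[-k % e]

    word = []
    for i in range(n, 1, -1):
        c = 1
        while abs(wp[i - 1][c - 1]) < tol:     # locate the nonzero entry of row i
            c += 1
        k = min(range(e), key=lambda m: abs(U[m] - wp[i - 1][c - 1]))
        if k != 0:
            for j in range(c, 1, -1):          # wp := wp * s_c ... s_3 s_2 * t_k
                mul_s(j)
            mul_t(k)
            word = ["t%d" % k] + ["s%d" % j for j in range(2, c + 1)] + word
            c = 2                              # the entry 1 now sits in column 2
        for j in range(c + 1, i + 1):          # wp := wp * s_{c+1} ... s_i
            mul_s(j)
        word = ["s%d" % j for j in range(i, c, -1)] + word
    return word
```

On the matrix w of Example 3.2 this returns s2 s3 t1 s2 s4 s3 s2, i.e. t0 s3 t1 t0 s4 s3 t0, matching the example.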
Example 3.2. We apply Algorithm 1 to

w := ( 0 0 0 1 ; 0 ζ3^2 0 0 ; 0 0 ζ3 0 ; 1 0 0 0 ) ∈ G(3, 3, 4),

where the matrix is written row by row.
Step 1 (i = 4, k = 0, c = 1): w′ := w s2 s3 s4 = ( 0 0 1 0 ; ζ3^2 0 0 0 ; 0 ζ3 0 0 ; 0 0 0 1 ).
Step 2 (i = 3, k = 1, c = 2): w′ := w′ s2 = ( 0 0 1 0 ; 0 ζ3^2 0 0 ; ζ3 0 0 0 ; 0 0 0 1 ), then w′ := w′ t1 = ( 0 0 1 0 ; 1 0 0 0 ; 0 1 0 0 ; 0 0 0 1 ), then w′ := w′ s3 = ( 0 1 0 0 ; 1 0 0 0 ; 0 0 1 0 ; 0 0 0 1 ).
Step 3 (i = 2, k = 0, c = 1): w′ := w′ s2 = I4.
Hence RE(w) = s2 s3 t1 s2 s4 s3 s2 = t0 s3 t1 t0 s4 s3 t0.
Let wn := w ∈ G(e, e, n). For i from n to 2, the i-th step of Algorithm 1 transforms the block diagonal matrix ( wi 0 ; 0 In−i ) into a block diagonal matrix ( wi−1 0 ; 0 In−i+1 ) ∈ G(e, e, n), with w1 = I1. Actually, for 2 ≤ i ≤ n, there exists a unique c with 1 ≤ c ≤ n such that wi[i, c] ≠ 0. At each step i of Algorithm 1, if wi[i, c] = 1, we shift it into the diagonal position [i, i] by right multiplication by transpositions of the symmetric group Sn. If wi[i, c] ≠ 1, we shift it into the first column by right multiplication by transpositions, transform it into 1 by right multiplication by an element of {t0, t1, · · · , te−1}, and then shift the 1 obtained into the diagonal position [i, i]. The following lemma is straightforward.
Lemma 3.3. For 2 ≤ i ≤ n, the block wi−1 is obtained by
• removing the row i and the column c from wi , then by
• multiplying the first column of the new matrix by wi [i, c].
Example 3.4. Let w be as in Example 3.2. The block wn−1 is obtained by removing the row n and the column 1 from wn = w, which gives ( 0 0 1 ; ζ3^2 0 0 ; 0 ζ3 0 ), then by multiplying the first column of this matrix by 1. Check that the block wn−1 obtained in Example 3.2, after multiplying w by t0 s3 s4, is the same as the one obtained here. The same can be said for the other blocks wi with 2 ≤ i ≤ n − 1.
Definition 3.5. At each step i from n to 2,
• if wi[i, c] = ζe^k with k ≠ 0, we define REi(w) to be the word
  si · · · s3 tk t0 s3 · · · sc   if c ≥ 3,
  si · · · s3 tk t0   if c = 2,
  si · · · s3 tk   if c = 1, and
• if wi[i, c] = 1, we define REi(w) to be the word si · · · sc+1.
Remark that for 2 ≤ i ≤ n, the word REi (w) is either the empty word (when
wi [i, i] = 1) or a word that contains si necessarily but does not contain any of
si+1 , si+2 , · · · , sn .
Lemma 3.6. We have RE(w) = RE2 (w)RE3 (w) · · · REn (w).
Proof. The output RE(w) of Algorithm 1 is a concatenation of the words
RE2 (w), RE3 (w), · · · , and REn (w) obtained at each step i from n to 2 of Algorithm 1.
Example 3.7. If w is defined as in Example 3.2, we have
RE(w) = t0 · s3 t1 t0 · s4 s3 t0,
where RE2(w) = t0, RE3(w) = s3 t1 t0, and RE4(w) = s4 s3 t0.
Proposition 3.8. The word RE(w) given by Algorithm 1 is a word representative
over X of w ∈ G(e, e, n).
Proof. Algorithm 1 transforms the matrix w into In by multiplying it on the right
by elements of X and RE(w) is a concatenation (in reverse order) of elements of X
corresponding to the matrices of X used to transform w into In . Hence RE(w) is a
word representative over X of w ∈ G(e, e, n).
The following proposition will prepare us to prove that the output of Algorithm 1
is a reduced expression over X of a given element w ∈ G(e, e, n).
Proposition 3.9. Let w be an element of G(e, e, n). For all x ∈ X, we have
|ℓ(RE(xw)) − ℓ(RE(w))| = 1.
Proof. For 1 ≤ i ≤ n, there exists a unique ci such that w[i, ci ] ≠ 0. We denote w[i, ci ]
by ai .
Case 1: Suppose x = si for 3 ≤ i ≤ n.
Since si^2 w = w, we can assume without restriction that ci−1 < ci .
Set w′ := si w. Since the left multiplication by the matrix x exchanges the rows i − 1
and i of w and the other rows remain the same, by Definition 3.5 and Lemma 3.3, we
have:
REi+1 (xw)REi+2 (xw) · · · REn (xw) = REi+1 (w)REi+2 (w) · · · REn (w) and
RE2 (xw)RE3 (xw) · · · REi−2 (xw) = RE2 (w)RE3 (w) · · · REi−2 (w).
Then, in order to prove our property, we should compare ℓ1 := ℓ(REi−1 (w)REi (w))
and ℓ2 := ℓ(REi−1 (xw)REi (xw)).
Since ci−1 < ci , by Lemma 3.3, the rows i − 1 and i of the blocks wi and wi′ are of the
following form: wi has its entry bi−1 in position [i − 1, c] and its entry ai in position [i, c′ ],
while wi′ has ai in position [i − 1, c′ ] and bi−1 in position [i, c],
with c < c′ and where we write bi−1 instead of ai−1 since ai−1 is likely to change
when applying Algorithm 1 if ci−1 = 1, that is, when ai−1 lies on the first column of w.
We will discuss different cases depending on the values of ai and bi−1 .
• Suppose ai = 1.
– If bi−1 = 1,
we have REi (w) = si · · · sc′ +2 sc′ +1 and REi−1 (w) = si−1 · · · sc+2 sc+1 .
Furthermore, we have REi (xw) = si · · · sc+2 sc+1
and REi−1 (xw) = si−1 · · · sc′ +1 sc′ .
It follows that ℓ1 = ((i − 1) − (c + 1) + 1) + (i − (c′ + 1) + 1) = 2i − c − c′ − 1
and ℓ2 = ((i − 1) − c′ + 1) + (i − (c + 1) + 1) = 2i − c − c′ hence ℓ2 = ℓ1 + 1.
– If bi−1 = ζek with 1 ≤ k ≤ e − 1,
we have REi (w) = si · · · sc′ +2 sc′ +1 and REi−1 (w) = si−1 · · · s3 tk t0 s3 · · · sc .
Furthermore, we have REi (xw) = si · · · s3 tk t0 s3 · · · sc and REi−1 (xw) = si−1 · · · sc′ .
It follows that ℓ1 = (((i−1)−3+1)+2+(c−3+1))+(i−(c′ +1)+1) = 2i+
c−c′ −3 and ℓ2 = ((i−1)−c′ +1)+((i−3+1)+2+(c−3+1)) = 2i+c−c′ −2
hence ℓ2 = ℓ1 + 1.
It follows that if ai = 1, then ℓ(RE(si w)) = ℓ(RE(w)) + 1.   (a)
• Suppose now that ai = ζek with 1 ≤ k ≤ e − 1.
– If bi−1 = 1,
we have REi (w) = si · · · s3 tk t0 s3 · · · sc′ and REi−1 (w) = si−1 · · · sc+1 .
Also, we have REi (xw) = si · · · sc+1 and REi−1 (xw) = si−1 · · · s3 tk t0 s3 · · · sc′ −1 .
It follows that ℓ1 = ((i − 1) − (c + 1) + 1) + ((i − 3 + 1) + 2 + (c′ − 3 + 1)) =
2i − c + c′ − 3 and ℓ2 = (((i − 1) − 3 + 1) + 2 + ((c′ − 1) − 3 + 1)) + (i − (c + 1) + 1) =
2i − c + c′ − 4, hence ℓ2 = ℓ1 − 1.
– If bi−1 = ζe^k′ with 1 ≤ k′ ≤ e − 1,
we have REi (w) = si · · · s3 tk t0 s3 · · · sc′ and REi−1 (w) = si−1 · · · s3 tk′ t0 s3 · · · sc .
Also, we have REi (xw) = si · · · s3 tk′ t0 s3 · · · sc and REi−1 (xw) = si−1 · · · s3 tk t0 s3 · · · sc′ −1 .
It follows that ℓ1 = ((i−1)−3+1)+2+(c−3+1)+(i−3+1)+2+(c′−3+1) =
2i + c + c′ − 5 and ℓ2 = ((i − 1) − 3 + 1) + 2 + ((c′ − 1) − 3 + 1) + (i − 3 +
1) + 2 + (c − 3 + 1) = 2i + c + c′ − 6 hence ℓ2 = ℓ1 − 1.
It follows that if ai ≠ 1, then ℓ(RE(si w)) = ℓ(RE(w)) − 1.   (b)
Case 2: Suppose x = ti for 0 ≤ i ≤ e − 1.
Set w′ := ti w. By definition of the left multiplication by ti , we have that the last
n − 2 rows of w and w′ are the same. Hence, by Definition 3.5 and Lemma 3.3, we
have:
RE3 (xw)RE4 (xw) · · · REn (xw) = RE3 (w)RE4 (w) · · · REn (w). In order to prove our
property in this case, we should compare ℓ1 := ℓ(RE2 (w)) and ℓ2 := ℓ(RE2 (xw)).
• Consider the case where c1 < c2 .
Since c1 < c2 , by Lemma 3.3, the blocks w2 and w2′ are of the form:
w2 = [ b1  0  ]      and      w2′ = [ 0         ζe^−i a2 ]
     [ 0   a2 ]                     [ ζe^i b1       0    ]
with b1 instead of a1 since a1 is likely to change when applying Algorithm 1 if c1 = 1.
– Suppose a2 = 1,
we have b1 = 1 necessarily hence ℓ1 = 0. Since RE2 (xw) = ti , we have
ℓ2 = 1. It follows that when c1 < c2 , if a2 = 1, then ℓ(RE(ti w)) = ℓ(RE(w)) + 1.   (c)
– Suppose a2 = ζek with 1 ≤ k ≤ e − 1,
we have RE2 (w) = tk t0 . Thus ℓ1 = 2. We also have ℓ2 = 1 for any b1 . It
follows that when c1 < c2 , if a2 ≠ 1, then ℓ(RE(ti w)) = ℓ(RE(w)) − 1.   (d)
• Now, consider the case where c1 > c2 .
Since c1 > c2 , by Lemma 3.3, the blocks w2 and w2′ are of the form:
w2 = [ 0   a1 ]      and      w2′ = [ ζe^−i b2      0     ]
     [ b2  0  ]                     [    0      ζe^i a1   ]
with b2 instead of a2 since a2 is likely to change when applying Algorithm 1 if c2 = 1.
– Suppose a1 ≠ ζe^−i ,
we have ℓ1 = 1 necessarily for any b2 , and since ζe^i a1 ≠ 1, we have ℓ2 = 2.
Hence when c1 > c2 , if a1 ≠ ζe^−i , then ℓ(RE(ti w)) = ℓ(RE(w)) + 1.   (e)
– Suppose a1 = ζe^−i ,
we have ℓ1 = 1 and ℓ2 = 0 for any b2 . Hence when c1 > c2 , if a1 = ζe^−i , then ℓ(RE(ti w)) = ℓ(RE(w)) − 1.   (f)
This finishes our proof.
Proposition 3.10. Let w be an element of G(e, e, n). The word RE(w) is a reduced
expression over X of w.
Proof. We must prove that ℓ(w) = ℓ(RE(w)).
Let x1 x2 · · · xr be a reduced expression over X of w. Hence ℓ(w) = ℓ(x1 x2 · · · xr ) = r.
Since RE(w) is a word representative over X of w, we have ℓ(RE(w)) ≥ ℓ(x1 x2 · · · xr ) =
r. We prove that ℓ(RE(w)) ≤ r. Observe that we can write w as x1 x2 · · · xr where
x1 , x2 , · · · , xr are the matrices of G(e, e, n) corresponding to x1 , x2 , · · · , xr .
By Proposition 3.9, we have: ℓ(RE(w)) = ℓ(RE(x1 x2 · · · xr )) ≤ ℓ(RE(x2 x3 · · · xr )) +
1 ≤ ℓ(RE(x3 · · · xr )) + 2 ≤ · · · ≤ r. Hence ℓ(RE(w)) = r = ℓ(w) and we are done.
The following proposition is useful in the other sections. It summarizes the proof
of Proposition 3.9.
Proposition 3.11. Let w be an element of G(e, e, n). Denote by ai the unique
nonzero entry w[i, ci ] on the row i of w, where 1 ≤ i, ci ≤ n.
1. For 3 ≤ i ≤ n, we have:
(a) if ci−1 < ci , then ℓ(si w) = ℓ(w) − 1 if and only if ai ≠ 1;
(b) if ci−1 > ci , then ℓ(si w) = ℓ(w) − 1 if and only if ai−1 = 1.
2. If c1 < c2 , then for all 0 ≤ k ≤ e − 1, we have ℓ(tk w) = ℓ(w) − 1 if and only if a2 ≠ 1.
3. If c1 > c2 , then for all 0 ≤ k ≤ e − 1, we have ℓ(tk w) = ℓ(w) − 1 if and only if a1 = ζe^−k .
Proof. The claim 1(a) is deduced from (a) and (b), 2 is deduced from (c) and (d), and
3 is deduced from (e) and (f), where (a), (b), (c), (d), (e), and (f) are given in the
proof of Proposition 3.9. Since si^2 = 1, 1(b) can be deduced from 1(a).
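Since G(e, e, n) is finite, Proposition 3.11 can also be checked exhaustively for small parameters by computing the length function over X with a breadth-first search. The following sketch is an independent verification, not part of the paper, with the generator matrices inferred from the proofs above; it checks every element of G(3, 3, 3):

```python
from collections import deque
from math import factorial

e, n = 3, 3  # small enough to enumerate: |G(3,3,3)| = 3^2 * 3! = 54

def mul(M, N):
    return tuple((N[c][0], (k + N[c][1]) % e) for (c, k) in M)

def gen_t(i):
    return tuple([(1, -i % e), (0, i % e)] + [(r, 0) for r in range(2, n)])

def gen_s(j):
    p = list(range(n)); p[j - 2], p[j - 1] = p[j - 1], p[j - 2]
    return tuple((c, 0) for c in p)

X = [gen_t(i) for i in range(e)] + [gen_s(j) for j in range(3, n + 1)]
I = tuple((r, 0) for r in range(n))

# breadth-first search computes the length function over X
length = {I: 0}
queue = deque([I])
while queue:
    w = queue.popleft()
    for x in X:
        v = mul(x, w)
        if v not in length:
            length[v] = length[w] + 1
            queue.append(v)
assert len(length) == e ** (n - 1) * factorial(n)

# exhaustive check of Proposition 3.11
for w, lw in length.items():
    c = [row[0] for row in w]   # column of the nonzero entry of row i (0-indexed)
    a = [row[1] for row in w]   # its exponent: a_i = zeta^a[i]
    for k in range(e):          # claims 2 and 3
        drop = length[mul(gen_t(k), w)] == lw - 1
        assert drop == (a[1] != 0 if c[0] < c[1] else a[0] == -k % e)
    for i in range(3, n + 1):   # claims 1(a) and 1(b)
        drop = length[mul(gen_s(i), w)] == lw - 1
        assert drop == (a[i - 1] != 0 if c[i - 2] < c[i - 1] else a[i - 2] == 0)
```

Since every generator is an involution, left and right multiplication give the same length function, so the breadth-first search over left multiples suffices.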
The next proposition is about elements of G(e, e, n) that are of maximal length.
Proposition 3.12. The maximal length of an element of G(e, e, n) is n(n − 1). It
is realized for diagonal matrices w such that w[i, i] is an e-th root of unity different
from 1 for 2 ≤ i ≤ n. A minimal word representative of such an element is then of
the form tk2 t0 · s3 tk3 t0 s3 · · · sn · · · s3 tkn t0 s3 · · · sn with 1 ≤ k2 , · · · , kn ≤ e − 1, and
the number of elements that are of maximal length is (e − 1)^(n−1) .
Proof. By Algorithm 1, an element w in G(e, e, n) is of maximal length when wi [i, i] =
ζe^k for 2 ≤ i ≤ n and ζe^k ≠ 1. By Lemma 3.3, this condition is satisfied when w is a
diagonal matrix such that w[i, i] is an e-th root of unity different from 1 for 2 ≤ i ≤ n.
A minimal word representative given by Algorithm 1 for such an element is of the
form tk2 t0 · s3 tk3 t0 s3 · · · sn · · · s3 tkn t0 s3 · · · sn (1 ≤ k2 , · · · , kn ≤ e − 1), which is of
length 2 + 4 + · · · + 2(n − 1) = n(n − 1). The number of elements of this form is
(e − 1)^(n−1) .
Example 3.13. Consider λ := diag((ζe^−1)^(n−1) , ζe , · · · , ζe ) ∈ G(e, e, n). We have
RE(λ) = t1 t0 · s3 t1 t0 s3 · · · sn · · · s3 t1 t0 s3 · · · sn . Hence ℓ(λ) = n(n − 1), which is the
maximal length of an element of G(e, e, n).
Note that λ is the image in G(e, e, n) of Λ, the Garside element of the presentation
of Corran and Picantin of the complex braid group B(e, e, n).
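For small parameters, the census of Proposition 3.12 can be confirmed by brute force. The sketch below, an independent check in the same hypothetical matrix encoding as before (rows stored as (column, exponent) pairs, generator matrices inferred from the proofs), verifies in G(3, 3, 3) that the maximal length is n(n − 1) = 6, that exactly (e − 1)^(n−1) = 4 elements attain it, and that these are the expected diagonal matrices:

```python
from collections import deque

e, n = 3, 3

def mul(M, N):
    return tuple((N[c][0], (k + N[c][1]) % e) for (c, k) in M)

def gen_t(i):
    return tuple([(1, -i % e), (0, i % e)] + [(r, 0) for r in range(2, n)])

def gen_s(j):
    p = list(range(n)); p[j - 2], p[j - 1] = p[j - 1], p[j - 2]
    return tuple((c, 0) for c in p)

X = [gen_t(i) for i in range(e)] + [gen_s(j) for j in range(3, n + 1)]
I = tuple((r, 0) for r in range(n))
length = {I: 0}
queue = deque([I])
while queue:
    w = queue.popleft()
    for x in X:
        v = mul(x, w)
        if v not in length:
            length[v] = length[w] + 1
            queue.append(v)

maxlen = max(length.values())
assert maxlen == n * (n - 1)
tops = [w for w, l in length.items() if l == maxlen]
assert len(tops) == (e - 1) ** (n - 1)
for w in tops:
    assert all(w[i][0] == i for i in range(n))       # diagonal matrix
    assert all(w[i][1] != 0 for i in range(1, n))    # w[i, i] != 1 for 2 <= i <= n

# for e = 3, n = 3 we have zeta^-(n-1) = zeta, so lambda = diag(zeta, zeta, zeta)
lam = tuple((i, 1) for i in range(n))
assert length[lam] == n * (n - 1)
```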
Once the length function over X is understood, we can construct intervals in
G(e, e, n) and characterize the balanced elements that are of maximal length.
4  Balanced elements of maximal length
The aim of this section is to prove that the only balanced elements of G(e, e, n) that
are of maximal length are λk with 1 ≤ k ≤ e − 1, where λ was given in Section 3. This
is done by characterizing the intervals of the elements that are of maximal length.
We start by recalling the partial order relations on G(e, e, n).
Definition 4.1. Let w, w′ ∈ G(e, e, n). We say that w′ is a divisor of w, or w is
a multiple of w′ , and write w′ ≼ w, if w = w′ w′′ with w′′ ∈ G(e, e, n) and ℓ(w) =
ℓ(w′ ) + ℓ(w′′ ). This defines a partial order relation on G(e, e, n).
Similarly, we have another partial order relation on G(e, e, n).
Definition 4.2. Let w, w′ ∈ G(e, e, n). We say that w′ is a right divisor of w, or w
is a left multiple of w′ , and write w′ ≼r w, if there exists w′′ ∈ G(e, e, n) such that
w = w′′ w′ and ℓ(w) = ℓ(w′′ ) + ℓ(w′ ).
The following lemma is straightforward.
Lemma 4.3. Let w, w′ ∈ G(e, e, n) and let x1 x2 · · · xr be a reduced expression of
w′ over X. We have w′ ≼ w if and only if ℓ(xi xi−1 · · · x1 w) = ℓ(xi−1 · · · x1 w) − 1
for all 1 ≤ i ≤ r.
Proof. On the one hand, we have w′ w′′ = w with w′′ = xr xr−1 · · · x1 w, and the condition ℓ(xi xi−1 · · · x1 w) = ℓ(xi−1 · · · x1 w) − 1 for all 1 ≤ i ≤ r implies that ℓ(w′′ ) = ℓ(w) − r.
So we get ℓ(w′′ ) + ℓ(w′ ) = ℓ(w). Hence w′ ≼ w.
On the other hand, since x^2 = 1 for all x ∈ X, we have ℓ(xw) = ℓ(w) ± 1 for all
w ∈ G(e, e, n). If there exists i with 1 ≤ i ≤ r such that ℓ(xi xi−1 · · · x1 w) = ℓ(xi−1 · · · x1 w) + 1,
then ℓ(w′′ ) = ℓ(xr xr−1 · · · x1 w) > ℓ(w) − r. It follows that
ℓ(w′ ) + ℓ(w′′ ) > ℓ(w). Hence w′ ⋠ w.
Consider the homomorphism ¯ : X∗ −→ G(e, e, n) sending each letter x ∈ X to the
corresponding matrix x̄ := x of G(e, e, n). If RE(w) = x1 x2 · · · xr with w ∈ G(e, e, n)
and x1 , x2 , · · · , xr ∈ X, then the image of RE(w) under ¯ is x̄1 x̄2 · · · x̄r = w.
Definition 4.4. We define D to be the set
{ w ∈ G(e, e, n) s.t. REi (w) ≼ REi (λ) for 2 ≤ i ≤ n },
where the words REi (w) and REi (λ) are regarded as elements of G(e, e, n) via ¯.
Proposition 4.5. The set D consists of the elements w of G(e, e, n) such that for all
2 ≤ i ≤ n, REi (w) is one of the following words:
si · · · s3 t1 t0 s3 · · · si′   with 3 ≤ i′ ≤ i,
si · · · s3 t1 t0 ,
si · · · s3 tk                  with 0 ≤ k ≤ e − 1, and
si · · · si′                    with 3 ≤ i′ ≤ i.
Proof. We have REi (λ) = si · · · s3 t1 t0 s3 · · · si . Let w ∈ G(e, e, n). Note that REi (w)
is necessarily one of the words given in the first column of the following table. For
each REi (w), there exists a unique w′ ∈ G(e, e, n), with RE(w′ ) given in the second
column, such that REi (w)w′ = REi (λ) in G(e, e, n). In the last column, we compute
ℓ(REi (w)) + ℓ(w′ ), which is equal to ℓ(REi (λ)) = 2(i − 1) only for the first four cases.
The result follows immediately.
REi (w)                                                      RE(w′ )                              ℓ(REi (w)) + ℓ(w′ )
si · · · s3 t1 t0 s3 · · · si′ (3 ≤ i′ ≤ i)                   si′ +1 · · · si                      2(i − 1)
si · · · s3 t1 t0                                             s3 · · · si                          2(i − 1)
si · · · s3 tk (0 ≤ k ≤ e − 1)                                tk−1 s3 · · · si                     2(i − 1)
si · · · si′ (3 ≤ i′ ≤ i)                                     si′ −1 · · · s3 t1 t0 s3 · · · si    2(i − 1)
si · · · s3 tk t0 (2 ≤ k ≤ e − 1)                             t0 tk−1 s3 · · · si                  2i
si · · · s3 tk t0 s3 · · · si′ (2 ≤ k ≤ e − 1, 3 ≤ i′ ≤ i)    si′ · · · s3 t0 tk−1 s3 · · · si     2(i − 1) + 2(i′ − 1)

The next proposition characterizes the divisors of λ in G(e, e, n).

Proposition 4.6. The set D is equal to the interval [1, λ], where
[1, λ] = {w ∈ G(e, e, n) s.t. 1 ≼ w ≼ λ}.
Proof. Let w ∈ G(e, e, n). We have RE(w) = RE2 (w)RE3 (w) · · · REn (w). Let w ∈ X∗
be a word representative of w. Denote by ←w ∈ X∗ the word obtained by reading w
from right to left. For 3 ≤ i ≤ n, we denote by αi the element of G(e, e, n) represented
by the word ←REi−1 (w) · · · ←RE2 (w).
Suppose that w ∈ D. We apply Lemma 4.3 to prove that w ≼ λ. Fix 2 ≤ i ≤ n. By
Proposition 4.5, we have four different possibilities for REi (w).
First, consider the cases REi (w) = si · · · s3 t1 t0 s3 · · · si′ , si · · · s3 t1 t0 , or si · · · si′ with
3 ≤ i′ ≤ i. Hence ←REi (w) = si′ · · · s3 t0 t1 s3 · · · si , t0 t1 s3 · · · si , or si′ · · · si , respectively.
Note that the left multiplication of the matrix λ by αi produces permutations only
in the block consisting of the first i − 1 rows and the first i − 1 columns of λ. Since
λ[i, i] = ζe (≠ 1), by 1(a) of Proposition 3.11, the left multiplication of αi λ by
s3 · · · si decreases the length. Also, by 2 of Proposition 3.11, the left multiplication
of s3 · · · si αi λ by t1 decreases the length. Note that by these left multiplications,
λ[i, i] = ζe is shifted to the first row then transformed to ζe ζe^−1 = 1. Hence, by 1(b) of
Proposition 3.11, the left multiplication of t1 s3 · · · si αi λ by si′ · · · s3 t0 decreases the
length, as desired.
Suppose that REi (w) = si · · · s3 tk with 0 ≤ k ≤ e − 1. We have ←REi (w) = tk s3 · · · si .
Since λ[i, i] = ζe (≠ 1), by 1(a) of Proposition 3.11, the left multiplication of αi λ
by s3 · · · si decreases the length. By 2 of Proposition 3.11, the left multiplication of
s3 · · · si αi λ by tk also decreases the length. Hence, applying Lemma 4.3, we have
w ≼ λ.
Conversely, suppose that w ∉ D; we prove that w ⋠ λ. If RE(w) = x1 · · · xr ,
by Lemma 4.3, we show that there exists 1 ≤ i ≤ r such that ℓ(xi xi−1 · · · x1 λ) =
ℓ(xi−1 · · · x1 λ) + 1. Since w ∉ D, by Proposition 4.5, we may consider the first
REi (w) that appears in RE(w) = RE2 (w) · · · REn (w) such that REi (w) = si · · · s3 tk t0
or si · · · s3 tk t0 s3 · · · si′ with 2 ≤ k ≤ e − 1 and 3 ≤ i′ ≤ i. Thus, we have ←REi (w) =
t0 tk s3 · · · si or si′ · · · s3 t0 tk s3 · · · si , respectively. Since λ[i, i] = ζe (≠ 1), by 1(a) of
Proposition 3.11, the left multiplication of αi λ by s3 · · · si decreases the length. By
2 of Proposition 3.11, the left multiplication of s3 · · · si αi λ by tk also decreases the
length. Note that by these left multiplications, λ[i, i] = ζe is shifted to the first row
then transformed to ζe ζe^−k = ζe^(1−k) . Since 2 ≤ k ≤ e − 1, we have ζe^(1−k) ≠ 1. By 3 of
Proposition 3.11, it follows that the left multiplication of tk s3 · · · si αi λ by t0 increases
the length. Hence w ⋠ λ.
We want to recognize whether an element w ∈ G(e, e, n) is in the set D directly from its
matrix form. For this purpose, we start by describing a matrix form for each element
w ∈ G(e, e, n). Since w is a monomial matrix, there exist nonzero entries, which we
refer to as bullets, such that the upper left-hand side of w with respect to the bullets,
which we denote by Z(w), has all its entries zero. We denote the other side of the
matrix by Z ′ (w).
Example 4.7. Let
w = [ 0     0     0    ζ3^2  0  ]
    [ 0     0     0    0     ζ3 ]
    [ 0     0     ζ3   0     0  ]
    [ 1     0     0    0     0  ]
    [ 0     ζ3^2  0    0     0  ]  ∈ G(3, 3, 5).
The bullets are the entries w[1, 4] = ζ3^2 , w[3, 3] = ζ3 , and w[4, 1] = 1, and the
staircase path through them separates Z(w) from Z ′ (w).
Remark 4.8. Let w[i, c] be one of the bullets of w ∈ G(e, e, n). We have
w[i − 1, c] ∈ Z(w) and w[i, c − 1] ∈ Z(w).
Also the bullets are the only nonzero entries of w that satisfy this condition.
The following proposition gives a nice description of the divisors of λ in G(e, e, n).
Proposition 4.9. Let w ∈ G(e, e, n). We have that w ∈ D if and only if all nonzero
entries of Z ′ (w) are either 1 or ζe .
Proof. Let w ∈ D and let w[i, c] be a nonzero entry of Z ′ (w). Since w ∈ D, by
Proposition 4.5, we have REi (w) = si · · · s3 t1 t0 , si · · · s3 t1 t0 s3 · · · si′ , or si · · · si′ with
3 ≤ i′ ≤ i. By Lemma 3.3, we have wi [i, i′ ] = w[i, c] for 1 < i′ ≤ i. Hence, by
Definition 3.5, we have w[i, c] = 1 or ζe .
Conversely, suppose that every nonzero entry of Z ′ (w) is 1 or ζe , and let w[i, c] be a
nonzero entry of w. If w[i, c] ∈ Z ′ (w), then w[i, c] = 1 or ζe , and we have again
REi (w) = si · · · s3 t1 t0 , si · · · s3 t1 t0 s3 · · · si′ , or si · · · si′ for 3 ≤ i′ ≤ i. If w[i, c] is a
bullet of w, by Lemma 3.3, we have wi [i, 1] = ζe^k for some 0 ≤ k ≤ e − 1, in which
case REi (w) = si · · · s3 tk . Hence, by Proposition 4.5, we have w ∈ D.
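Propositions 4.6 and 4.9 together give a purely matrix-theoretic membership test for [1, λ]. The sketch below (an independent verification, not the authors' code) compares, in G(3, 3, 3), the interval computed from the length function with the set cut out by the bullet criterion; treating a row as a bullet row when its column is a new running minimum is an encoding of Z ′ (w) inferred from Example 4.7. It also checks the stability claim of Lemma 4.13:

```python
from collections import deque

e, n = 3, 3

def mul(M, N):
    return tuple((N[c][0], (k + N[c][1]) % e) for (c, k) in M)

def inv(M):
    rows = [None] * n
    for i, (c, k) in enumerate(M):
        rows[c] = (i, -k % e)
    return tuple(rows)

def gen_t(i):
    return tuple([(1, -i % e), (0, i % e)] + [(r, 0) for r in range(2, n)])

def gen_s(j):
    p = list(range(n)); p[j - 2], p[j - 1] = p[j - 1], p[j - 2]
    return tuple((c, 0) for c in p)

X = [gen_t(i) for i in range(e)] + [gen_s(j) for j in range(3, n + 1)]
I = tuple((r, 0) for r in range(n))
length = {I: 0}
queue = deque([I])
while queue:
    w = queue.popleft()
    for x in X:
        v = mul(x, w)
        if v not in length:
            length[v] = length[w] + 1
            queue.append(v)

lam = tuple((i, 1) for i in range(n))   # lambda = diag(zeta, zeta, zeta) in G(3,3,3)
L = length[lam]
left_divisors = {w for w in length if length[w] + length[mul(inv(w), lam)] == L}

def in_D(w):
    m = n + 1                     # running minimum of the (1-indexed) columns
    for (c, k) in w:
        if c + 1 < m:
            m = c + 1             # bullet row: its entry is unconstrained
        elif k not in (0, 1):     # non-bullet entry of Z'(w) must be 1 or zeta_e
            return False
    return True

assert left_divisors == {w for w in length if in_D(w)}
# Lemma 4.13: D is stable under w |-> w^-1 lambda
assert all(mul(inv(w), lam) in left_divisors for w in left_divisors)
```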
Example 4.10. Let
w = [ ζ3^2  0    0    0    ]
    [ 0     0    ζ3   0    ]
    [ 0     ζ3   0    0    ]
    [ 0     0    0    ζ3^2 ]  ∈ G(3, 3, 4).
Since Z ′ (w) contains ζ3^2 , it follows immediately that w ∉ D.
Our description of the interval [1, λ] allows us to prove easily that λ is balanced.
Let us recall the definition of a balanced element.
Definition 4.11. A balanced element in G(e, e, n) is an element w such that w′ ≼ w
holds precisely when w′ ≼r w.
The next lemma is obvious. It will be useful later.
Lemma 4.12. Let g be a balanced element and let w, w′ ∈ [1, g]. If w′ ≼ w, then
(w′ )−1 w ≼ g.
In order to prove that λ is balanced, we first check the following.
Lemma 4.13. If w ∈ D, we have w−1 λ ∈ D and λw−1 ∈ D.
Proof. We show that w−1 λ = tw̄ λ ∈ D and λw−1 = λ tw̄ ∈ D, where tw̄ is the complex
conjugate of the transpose tw of the matrix w. We use the matrix form of an element
of D. Actually, by Remark 4.8, the bullets of tw̄ correspond to the complex conjugates
of the bullets of w. Also, all nonzero entries of w−1 , apart from its bullets, are in
{1, ζe^−1 }. Multiplying w−1 by λ, we get that all nonzero entries of w−1 λ and of
λw−1 , apart from their bullets, are in {1, ζe }. Hence w−1 λ ∈ D and λw−1 ∈ D.
Example 4.14. We illustrate the idea of the proof of Lemma 4.13. Consider w ∈ D ⊂
G(3, 3, 5) and show that tw̄ λ ∈ D: transposing w, then conjugating each entry, and
finally multiplying on the right by λ yields a matrix whose nonzero entries, apart from
its bullets, all lie in {1, ζ3 }, so that tw̄ λ = w−1 λ ∈ D.
Proposition 4.15. We have that λ is balanced.
Proof. Suppose that w ≼ λ. We have w ∈ D, hence λw−1 ∈ D by Lemma 4.13. Hence
λ = (λw−1 )w satisfies ℓ(λw−1 ) + ℓ(w) = ℓ(λ), namely w ≼r λ. Conversely, suppose
that w ≼r λ. We have λ = w′ w with w′ ∈ G(e, e, n) and ℓ(w′ ) + ℓ(w) = ℓ(λ). It
follows that w′ ∈ D, hence (w′ )−1 λ ∈ D by Lemma 4.13. Since w = (w′ )−1 λ, we have
w ∈ D, namely w ≼ λ.
We state without proof a similar result for the powers of λ in G(e, e, n).
Definition 4.16. For 1 ≤ k ≤ e − 1, let Dk be the set of w ∈ G(e, e, n) such that all
nonzero entries of Z ′ (w) are either 1 or ζek .
Proposition 4.17. For 1 ≤ k ≤ e − 1, we have [1, λk ] = Dk and λk is balanced.

Example 4.18. Let
w = [ 0    0     0    1   0 ]
    [ 0    0     0    0   1 ]
    [ 0    0     ζ3   0   0 ]
    [ 1    0     0    0   0 ]
    [ 0    ζ3^2  0    0   0 ]  ∈ G(3, 3, 5).
We have w ∈ [1, λ2 ].
More generally, let w ∈ G(e, e, n) be an element of maximal length, namely, by
Proposition 3.12, a diagonal matrix such that for 2 ≤ i ≤ n, w[i, i] is an e-th root
of unity different from 1. As previously, we can prove that a divisor w′ of w satisfies
that for all 2 ≤ i ≤ n, if w′ [i, c] ≠ 0 is not a bullet of w′ , then w′ [i, c] = 1 or w[i, i].
Suppose that w is of maximal length and that w[i, i] ≠ w[j, j] with 2 ≤ i ≠ j ≤ n.
We have (i, j) ≼ w, where (i, j) is the transposition matrix. Hence w′ := (i, j)−1 w =
(i, j)w ≼r w. We have w′ [j, i] = w[i, i]. Thus w′ [j, i], which is not a bullet of w′ , is
different from 1 and from w[j, j], since it is equal to w[i, i]. Hence w′ ⋠ w. We thus get
the following.
Proposition 4.19. The balanced elements of G(e, e, n) that are of maximal length
over X are precisely λk where 1 ≤ k ≤ e − 1.
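Proposition 4.19 can likewise be confirmed by brute force for small parameters. The sketch below, an independent check in the same hypothetical matrix encoding as before, computes in G(3, 3, 3) the left- and right-divisor sets of every maximal-length element and verifies that the balanced ones are exactly λ and λ² :

```python
from collections import deque

e, n = 3, 3

def mul(M, N):
    return tuple((N[c][0], (k + N[c][1]) % e) for (c, k) in M)

def inv(M):
    rows = [None] * n
    for i, (c, k) in enumerate(M):
        rows[c] = (i, -k % e)
    return tuple(rows)

def gen_t(i):
    return tuple([(1, -i % e), (0, i % e)] + [(r, 0) for r in range(2, n)])

def gen_s(j):
    p = list(range(n)); p[j - 2], p[j - 1] = p[j - 1], p[j - 2]
    return tuple((c, 0) for c in p)

X = [gen_t(i) for i in range(e)] + [gen_s(j) for j in range(3, n + 1)]
I = tuple((r, 0) for r in range(n))
length = {I: 0}
queue = deque([I])
while queue:
    w = queue.popleft()
    for x in X:
        v = mul(x, w)
        if v not in length:
            length[v] = length[w] + 1
            queue.append(v)

def ldiv(g):   # left divisors of g
    return frozenset(w for w in length if length[w] + length[mul(inv(w), g)] == length[g])

def rdiv(g):   # right divisors of g
    return frozenset(w for w in length if length[mul(g, inv(w))] + length[w] == length[g])

lam = tuple((i, 1) for i in range(n))
balanced_tops = {g for g in length
                 if length[g] == n * (n - 1) and ldiv(g) == rdiv(g)}
assert balanced_tops == {lam, mul(lam, lam)}   # exactly lambda^k, k = 1, ..., e-1
```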
We are ready to study the interval structures associated with the intervals [1, λk ]
where 1 ≤ k ≤ e − 1.
5  Interval structures
In this section, we construct the monoid M ([1, λk ]) associated with each of the intervals [1, λk ] constructed in Section 4 with 1 ≤ k ≤ e − 1. By Proposition 4.19, λk
is balanced. Hence, by Theorem 1.7, in order to prove that M ([1, λk ]) is a Garside
monoid, it remains to show that both posets ([1, λk ], ≼) and ([1, λk ], ≼r ) are lattices.
This is stated in Corollary 5.13. The interval structure is given in Theorem 5.15.
Let 1 ≤ k ≤ e − 1 and let w ∈ [1, λk ]. For each 1 ≤ i ≤ n there exists a unique
ci such that w[i, ci ] ≠ 0. We denote w[i, ci ] by ai . We apply Lemma 4.3 to prove the
following lemmas. The reader is invited to write the matrix form of w to illustrate
each step of the proof.
Lemma 5.1. Let ti ≼ w where i ∈ Z/eZ.
• If c1 < c2 , then tk t0 ≼ w, and
• if c2 < c1 , then tj ⋠ w for all j with j ≠ i.
Hence if ti ≼ w and tj ≼ w with i, j ∈ Z/eZ and i ≠ j, then tk t0 ≼ w.
Proof. Suppose c1 < c2 .
Since w ∈ [1, λk ], by Proposition 4.9, a2 = 1 or ζe^k . Since ti ≼ w, we have ℓ(ti w) =
ℓ(w) − 1. Hence by 2 of Proposition 3.11, we get a2 ≠ 1. Hence a2 = ζe^k . Again by 2
of Proposition 3.11, since a2 ≠ 1, we have ℓ(tk w) = ℓ(w) − 1. Let w′ := tk w. We have
w′ [1, c2 ] = ζe^−k a2 = ζe^−k ζe^k = 1. Hence by 3 of Proposition 3.11, ℓ(t0 w′ ) = ℓ(w′ ) − 1.
It follows that tk t0 ≼ w.
Suppose c2 < c1 .
Since ti ≼ w, we have ℓ(ti w) = ℓ(w) − 1. Hence by 3 of Proposition 3.11, we have
a1 = ζe^−i . If there exists j ∈ Z/eZ with j ≠ i such that tj ≼ w, then ℓ(tj w) = ℓ(w) − 1.
Again by 3 of Proposition 3.11, we have a1 = ζe^−j . Thus i = j, which is not allowed.
The last statement of the lemma follows immediately.
Lemma 5.2. If ti ≼ w with i ∈ Z/eZ and s3 ≼ w, then s3 ti s3 = ti s3 ti ≼ w.
Proof. Set w′ := s3 w and w′′ := ti w.
Suppose c1 < c2 .
Since w ∈ [1, λk ], by Proposition 4.9, we have a2 = 1 or ζe^k . Since ti ≼ w, we have
ℓ(ti w) = ℓ(w) − 1. Thus, by 2 of Proposition 3.11, we get a2 ≠ 1. Hence a2 = ζe^k .
Suppose that c3 < c2 . Since s3 ≼ w, we have ℓ(s3 w) = ℓ(w) − 1. Hence by 1(b)
of Proposition 3.11, a2 = 1, which is not allowed. Then, assume c2 < c3 . Since
w ∈ [1, λk ], we have a3 = 1 or ζe^k . By 1(a) of Proposition 3.11, we have a3 ≠ 1.
Hence a3 = ζe^k . Now, we prove that s3 ti s3 ≼ w by applying Lemma 4.3. Indeed, since
a3 ≠ 1, by 2 of Proposition 3.11, ℓ(ti w′ ) = ℓ(w′ ) − 1, and since a2 ≠ 1, by 1(a) of
Proposition 3.11, we have ℓ(s3 w′′ ) = ℓ(w′′ ) − 1.
Suppose c2 < c1 .
Since ℓ(ti w) = ℓ(w) − 1, by 3 of Proposition 3.11, we have a1 = ζe^−i .
• Assume c2 < c3 .
Since ℓ(s3 w) = ℓ(w) − 1, by 1(a) of Proposition 3.11, we have a3 ≠ 1. We have
ℓ(ti w′ ) = ℓ(w′ ) − 1 in both cases c1 < c3 and c3 < c1 . Actually, if c1 < c3 , since
a3 ≠ 1, by 2 of Proposition 3.11, we have ℓ(ti w′ ) = ℓ(w′ ) − 1, and if c3 < c1 ,
since a1 = ζe^−i , by 3 of Proposition 3.11, ℓ(ti w′ ) = ℓ(w′ ) − 1. Now, since
ζe^i a1 = ζe^i ζe^−i = 1, by 1(b) of Proposition 3.11, we have ℓ(s3 w′′ ) = ℓ(w′′ ) − 1.
• Assume c3 < c2 .
Since a1 = ζe^−i , by 3 of Proposition 3.11, ℓ(ti w′ ) = ℓ(w′ ) − 1. Since ζe^i a1 = 1,
by 1(b) of Proposition 3.11, we have ℓ(s3 w′′ ) = ℓ(w′′ ) − 1.
Lemma 5.3. If ti ≼ w with i ∈ Z/eZ and sj ≼ w with 4 ≤ j ≤ n, then ti sj = sj ti ≼ w.
Proof. We distinguish four different cases: case 1: c1 < c2 and cj−1 < cj ; case 2:
c1 < c2 and cj < cj−1 ; case 3: c2 < c1 and cj−1 < cj ; and case 4: c2 < c1 and
cj < cj−1 . The proof is similar to the proofs of Lemmas 5.1 and 5.2, so we prove that
sj ti ≼ w only for the first case. Suppose that c1 < c2 and cj−1 < cj . Since ti ≼ w,
we have ℓ(ti w) = ℓ(w) − 1. Hence, by 2 of Proposition 3.11, we have a2 ≠ 1. Also,
since sj ≼ w, we have ℓ(sj w) = ℓ(w) − 1. Hence, by 1(a) of Proposition 3.11, we have
aj ≠ 1. Set w′ := sj w. Since a2 ≠ 1, then ℓ(ti w′ ) = ℓ(w′ ) − 1. Hence sj ti ≼ w.
The proof of the following lemma is similar to the proofs of Lemmas 5.2 and 5.3
and is left to the reader.
Lemma 5.4. If si ≼ w and si+1 ≼ w for 3 ≤ i ≤ n − 1, then si si+1 si = si+1 si si+1 ≼ w,
and if si ≼ w and sj ≼ w for 3 ≤ i, j ≤ n and |i − j| > 1, then si sj = sj si ≼ w.
The following proposition is a direct consequence of all the preceding lemmas.
Proposition 5.5. Let x, y ∈ X = {t0 , t1 , · · · , te−1 , s3 , · · · , sn }. The least common
multiple of x and y in ([1, λk ], ≼), denoted by x ∨ y, exists and is given by the following
identities:
• ti ∨ tj = tk t0 = ti ti−k = tj tj−k for i ≠ j ∈ Z/eZ,
• ti ∨ s3 = ti s3 ti = s3 ti s3 for i ∈ Z/eZ,
• ti ∨ sj = ti sj = sj ti for i ∈ Z/eZ and 4 ≤ j ≤ n,
• si ∨ si+1 = si si+1 si = si+1 si si+1 for 3 ≤ i ≤ n − 1, and
• si ∨ sj = si sj = sj si for 3 ≤ i ≠ j ≤ n and |i − j| > 1.
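The lcm identities of Proposition 5.5 are in particular equalities between matrices of G(e, e, n) (they reappear as the defining relations of B ⊕k (e, e, n) in Section 6). The sketch below checks them for sample parameters, with the generator matrices encoded as before, a convention inferred from the proofs of Section 3:

```python
from functools import reduce

e, n, k = 5, 5, 2  # sample parameters (any e >= 2, n >= 4, 1 <= k <= e-1)

def mul(M, N):
    return tuple((N[c][0], (x + N[c][1]) % e) for (c, x) in M)

def t(i):
    return tuple([(1, -i % e), (0, i % e)] + [(r, 0) for r in range(2, n)])

def s(j):
    p = list(range(n)); p[j - 2], p[j - 1] = p[j - 1], p[j - 2]
    return tuple((c, 0) for c in p)

def prod(*ms):
    return reduce(mul, ms)

for i in range(e):
    for j in range(e):
        # t_i t_{i-k} = t_j t_{j-k} ( = t_k t_0, the lcm of t_i and t_j )
        assert prod(t(i), t((i - k) % e)) == prod(t(j), t((j - k) % e)) == prod(t(k), t(0))
    assert prod(t(i), s(3), t(i)) == prod(s(3), t(i), s(3))     # t_i v s_3
    for j in range(4, n + 1):
        assert prod(t(i), s(j)) == prod(s(j), t(i))             # t_i v s_j
for i in range(3, n):
    assert prod(s(i), s(i + 1), s(i)) == prod(s(i + 1), s(i), s(i + 1))
for i in range(3, n + 1):
    for j in range(3, n + 1):
        if abs(i - j) > 1:
            assert prod(s(i), s(j)) == prod(s(j), s(i))
```

Of course this only shows that both sides of each identity represent the same element of G(e, e, n); that they are the lcms in ([1, λk ], ≼) is the content of Lemmas 5.1 to 5.4.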
We have a similar result for ([1, λk ], r ).
Proposition 5.6. Let x, y ∈ X. The least common multiple of x and y in ([1, λk ], ≼r ),
denoted by x ∨r y, exists and is equal to x ∨ y.
Proof. Define an antihomomorphism φ : G(e, e, n) −→ G(e, e, n) : ti 7−→ t−i , sj 7−→ sj
with i ∈ Z/eZ and 3 ≤ j ≤ n. It is obvious that φ2 is the identity. Also, if
w ∈ G(e, e, n), we have ℓ(φ(w)) = ℓ(w). Let x, y ∈ X and w ∈ [1, λk ]. We prove that
if x ≼r w and y ≼r w, then x ∨ y ≼r w. Actually, we have w = vx and w = v ′ y
with v, v ′ ∈ G(e, e, n). Thus, φ(w) = φ(x)φ(v) and φ(w) = φ(y)φ(v ′ ). Since φ respects the length over X in G(e, e, n), we have φ(x) ≼ φ(w) and φ(y) ≼ φ(w). Hence
φ(x) ∨ φ(y) ≼ φ(w). Moreover, one can check that φ(x) ∨ φ(y) = φ(x ∨ y) for all
x, y ∈ X. Thus, φ(x ∨ y) ≼ φ(w), that is, φ(w) = φ(x ∨ y)u for some u ∈ G(e, e, n).
We get w = φ(u)(x ∨ y) and, since φ respects the length function, we have x ∨ y ≼r w.
Note that Propositions 5.5 and 5.6 are important to prove that both posets
([1, λk ], ≼) and ([1, λk ], ≼r ) are lattices. Actually, they will make possible an induction proof of Proposition 5.12 below. For now, let us state some general properties
about lattices that will be useful in our proof. Let (S, ≼) be a finite poset.
Definition 5.7. Say that (S, ≼) is a meet-semilattice if and only if f ∧ g := gcd(f, g)
exists for any f, g ∈ S.
Equivalently, (S, ≼) is a meet-semilattice if and only if ⋀T exists for any finite
nonempty subset T of S.
Definition 5.8. Say that (S, ≼) is a join-semilattice if and only if f ∨ g := lcm(f, g)
exists for any f, g ∈ S.
Equivalently, (S, ≼) is a join-semilattice if and only if ⋁T exists for any finite
nonempty subset T of S.
Proposition 5.9. (S, ≼) is a meet-semilattice if and only if for any f, g ∈ S, either
f ∨ g exists, or f and g have no common multiples.
Proof. Let f, g ∈ S and suppose that f and g have at least one common multiple.
Let A := {h ∈ S | f ≼ h and g ≼ h} be the set of the common multiples of f and g.
Since S is finite, A is also finite. Since (S, ≼) is a meet-semilattice, ⋀A exists and
⋀A = lcm(f, g) = f ∨ g.
Conversely, let f, g ∈ S and let B := {h ∈ S | h ≼ f and h ≼ g} be the set of all
common divisors of f and g. Since all elements of B have common multiples, ⋁B
exists and we have ⋁B = gcd(f, g) = f ∧ g.
Definition 5.10. The poset (S, ≼) is a lattice if and only if it is both a meet- and a
join-semilattice.
The following lemma is a consequence of Proposition 5.9.
Lemma 5.11. If (S, ≼) is a meet-semilattice such that ⋁S exists, then (S, ≼) is a
lattice.
We will prove that ([1, λk ], ≼) is a meet-semilattice by applying Proposition 5.9.
On occasion, for 1 ≤ m ≤ ℓ(λk ) with ℓ(λk ) = n(n − 1), we introduce
([1, λk ])m := {w ∈ [1, λk ] s.t. ℓ(w) ≤ m}.
Proposition 5.12. Let 1 ≤ k ≤ e − 1. For 1 ≤ m ≤ n(n − 1) and u, v in ([1, λk ])m ,
either u ∨ v exists in ([1, λk ])m , or u and v do not have common multiples in ([1, λk ])m .
Proof. Let 1 ≤ m ≤ n(n − 1). We make a proof by induction on m. By Proposition
5.5, our claim holds for m = 1. Suppose m > 1. Assume that the claim holds for
all 1 ≤ m′ ≤ m − 1. We want to prove it for m′ = m. The proof is illustrated in
Figure 6 below. Let u, v be in ([1, λk ])m such that u and v have at least one common
multiple in ([1, λk ])m which we denote by w. Write u = xu1 and v = yv1 such that
x, y ∈ X and ℓ(u) = ℓ(u1 ) + 1, ℓ(v) = ℓ(v1 ) + 1. By Proposition 5.5, x ∨ y exists.
Since x ≼ w and y ≼ w, x ∨ y divides w. We can write x ∨ y = xy1 = yx1 with
ℓ(x ∨ y) = ℓ(x1 ) + 1 = ℓ(y1 ) + 1. By Lemma 4.12, we have x1 , v1 ∈ [1, λk ]. Also, we
have ℓ(x1 ) < m, ℓ(v1 ) < m and x1 , v1 have a common multiple in ([1, λk ])m−1 . Thus,
by the induction hypothesis, x1 ∨ v1 exists in ([1, λk ])m−1 . Similarly, y1 ∨ u1 exists in
([1, λk ])m−1 . Write x1 ∨v1 = v1 x2 = x1 v2 with ℓ(x1 ∨v1 ) = ℓ(v1 )+ℓ(x2 ) = ℓ(v2 )+ℓ(x1 )
and write y1 ∨ u1 = u1 y2 = y1 u2 with ℓ(y1 ∨ u1 ) = ℓ(y1 ) + ℓ(u2 ) = ℓ(u1 ) + ℓ(y2 ). By
Lemma 4.12, we have u2 , v2 ∈ [1, λk ]. Also, we have ℓ(u2 ) < m, ℓ(v2 ) < m and u2 , v2
have a common multiple in ([1, λk ])m−1 . Thus, by the induction hypothesis, u2 ∨ v2
exists in ([1, λk ])m−1 . Write u2 ∨ v2 = v2 u3 = u2 v3 with ℓ(u2 ∨ v2 ) = ℓ(v2 ) + ℓ(u3 ) =
ℓ(u2 ) + ℓ(v3 ). Since uy2 v3 = vx2 u3 is a common multiple of u and v that divides
every common multiple w of u and v, we deduce that u ∨ v = uy2 v3 = vx2 u3 and we
are done.
Figure 6: The proof of Proposition 5.12 (a diagram of the words x, y, x1 , y1 , u1 , v1 ,
x2 , y2 , u2 , v2 , u3 , v3 meeting at the common multiple w).
Similarly, applying Proposition 5.6, we obtain the same results for ([1, λk ], ≼r ).
We thus proved the following.
Corollary 5.13. Both posets ([1, λk ], ≼) and ([1, λk ], ≼r ) are lattices.
Proof. Applying Propositions 5.9 and 5.12, ([1, λk ], ≼) is a meet-semilattice. Also, by
definition of the interval [1, λk ], we have ⋁[1, λk ] = λk . Thus, applying Lemma 5.11,
([1, λk ], ≼) is a lattice. The same can be done for ([1, λk ], ≼r ).
We are ready to define the interval monoid M ([1, λk ]).
Definition 5.14. Let Dk be a set in bijection with Dk = [1, λk ] via
[1, λk ] −→ Dk : w 7−→ w.
We define the monoid M ([1, λk ]) by the monoid presentation with
• generating set: Dk (a copy of the interval [1, λk ]) and
• relations: w = w′ w′′ whenever w, w′ , w′′ ∈ [1, λk ], w = w′ w′′ , and ℓ(w) =
ℓ(w′ ) + ℓ(w′′ ).
We have that λk is balanced. Also, by Corollary 5.13, both posets ([1, λk ], ≼) and
([1, λk ], ≼r ) are lattices. Hence, by Theorem 1.7, we have:
Theorem 5.15. M ([1, λk ]) is an interval monoid with Garside element λk and with
simples Dk . Its group of fractions exists and is denoted by G(M ([1, λk ])).
These interval structures have been implemented in GAP3, as a contribution to the
CHEVIE package (see [16]). The next section is devoted to the study of these interval
structures.
6  About the interval structures
In this section, we provide a new presentation of the interval monoid M ([1, λk ]).
Furthermore, we prove that G(M ([1, λk ])) is isomorphic to the complex braid group
B(e, e, n) if and only if k ∧ e = 1. When k ∧ e ≠ 1, we describe these new structures
and show some of their properties.
6.1  Presentations
Our first aim is to prove that the interval monoid M ([1, λk ]) is isomorphic to the
monoid B ⊕k (e, e, n) defined as follows.
Definition 6.1. For 1 ≤ k ≤ e − 1, we define the monoid B ⊕k (e, e, n) by a monoid presentation with
• generating set: X̃ = {t̃0 , t̃1 , · · · , t̃e−1 , s̃3 , · · · , s̃n } and
• relations:
    s̃i s̃j s̃i = s̃j s̃i s̃j    for |i − j| = 1,
    s̃i s̃j = s̃j s̃i           for |i − j| > 1,
    s̃3 t̃i s̃3 = t̃i s̃3 t̃i    for i ∈ Z/eZ,
    s̃j t̃i = t̃i s̃j           for i ∈ Z/eZ and 4 ≤ j ≤ n, and
    t̃i t̃i−k = t̃j t̃j−k       for i, j ∈ Z/eZ.
Note that the monoid B ⊕1 (e, e, n) is the monoid B ⊕ (e, e, n) of Corran and Picantin introduced in Section 2.
The following result is similar to Matsumoto’s property in the case of real reflection
groups.
Proposition 6.2. There exists a map F : [1, λk ] −→ B ⊕k (e, e, n) defined by
F (w) = x̃1 x̃2 · · · x̃r whenever x1 x2 · · · xr is a reduced expression over X of w, where
x̃i ∈ X̃ for 1 ≤ i ≤ r.
Proof. Let w1 and w2 be in X∗ . We write w1 ∼B w2 if w2 is obtained from w1 by
applying only the relations of the presentation of B ⊕k (e, e, n), where we replace t̃i by
ti and s̃j by sj for all i ∈ Z/eZ and 3 ≤ j ≤ n.
Let w be in [1, λk ] and suppose that w1 and w2 are two reduced expressions over X
of w. We prove that w1 ∼B w2 by induction on ℓ(w1 ).
The result holds vacuously for ℓ(w1 ) = 0 and ℓ(w1 ) = 1. Suppose that ℓ(w1 ) > 1.
Write w1 = x1 w1′ and w2 = x2 w2′ , with x1 , x2 ∈ X.
If x1 = x2 , we have x1 w1′ = x2 w2′ in G(e, e, n), from which we get w1′ = w2′ . Then, by
the induction hypothesis, we have w1′ ∼B w2′ . Hence w1 ∼B w2 .
If x1 ≠ x2 , since x1 ≼ w and x2 ≼ w, we have x1 ∨ x2 ≼ w, where x1 ∨ x2 is the lcm
of x1 and x2 in ([1, λk ], ≼) given in Proposition 5.5. Write w = (x1 ∨ x2 )w′ . Also,
write x1 ∨ x2 = x1 v1 and x1 ∨ x2 = x2 v2 , where one can check that x1 v1 ∼B x2 v2 for
all possible cases for x1 and x2 . All the words x1 w1′ , x2 w2′ , x1 v1 w′ , and x2 v2 w′
represent w. In particular, x1 w1′ and x1 v1 w′ represent w. Hence w1′ = v1 w′ and, by
the induction hypothesis, we have w1′ ∼B v1 w′ . Thus, we have x1 w1′ ∼B x1 v1 w′ .
Similarly, since x2 w2′ and x2 v2 w′ represent w, we get x2 v2 w′ ∼B x2 w2′ . Since
x1 v1 ∼B x2 v2 , we have x1 v1 w′ ∼B x2 v2 w′ . We obtain:
w1 ∼B x1 w1′ ∼B x1 v1 w′ ∼B x2 v2 w′ ∼B x2 w2′ ∼B w2 .
Hence w1 ∼B w2 and we are done.
In the following proposition, we provide an alternative presentation of the interval monoid M ([1, λk ]) given in Definition 5.14.
Proposition 6.3. The monoid B ⊕k (e, e, n) is isomorphic to M ([1, λk ]).
Proof. Consider the map ρ : Dk −→ B ⊕k (e, e, n) : w ↦ F (w), where F is defined in Proposition 6.2. Let w = w′ w′′ be a defining relation of M ([1, λk ]). Since ℓ(w) = ℓ(w′ ) + ℓ(w′′ ), a reduced expression for w′ w′′ is obtained by concatenating reduced expressions for w′ and w′′ . It follows that F (w′ w′′ ) = F (w′ )F (w′′ ). We conclude that ρ has a unique extension to a monoid homomorphism M ([1, λk ]) −→ B ⊕k (e, e, n), which we denote by the same symbol.
Conversely, consider the map ρ′ : X̃ −→ M ([1, λk ]) : x̃ ↦ x. In order to prove that ρ′ extends to a unique monoid homomorphism B ⊕k (e, e, n) −→ M ([1, λk ]), we have to check that w1 = w2 in M ([1, λk ]) for any defining relation w̃1 = w̃2 of B ⊕k (e, e, n). Given a relation w̃1 = w̃2 = x̃1 x̃2 · · · x̃r of B ⊕k (e, e, n), we have w1 = w2 = x1 x2 · · · xr , a reduced word over X. On the other hand, applying repeatedly the defining relations in M ([1, λk ]) yields w = x1 x2 · · · xr whenever x1 x2 · · · xr is a reduced expression over X of w. Thus, we can conclude that w1 = w2 , as desired.
Hence we have defined two homomorphisms ρ : Dk −→ B ⊕k (e, e, n) and ρ′ : X̃ −→ M ([1, λk ]) such that ρ ◦ ρ′ = idB ⊕k (e,e,n) and ρ′ ◦ ρ = idM([1,λk ]) . It follows that B ⊕k (e, e, n) is isomorphic to M ([1, λk ]).
We deduce that B ⊕k (e, e, n) is a Garside monoid and we denote by B (k) (e, e, n)
its group of fractions.
Fix k such that 1 ≤ k ≤ e − 1. A diagram presentation for B (k) (e, e, n) is the same
as the diagram corresponding to the presentation of Corran and Picantin of B(e, e, n)
given in Figure 2.2, with a dashed edge between t̃i and t̃i−k and between t̃j and t̃j−k
for each relation of the form t̃i t̃i−k = t̃j t̃j−k , i, j ∈ Z/eZ. For example, the diagram
corresponding to B (2) (8, 8, 2) is as follows.
Figure 7: Diagram for the presentation of B (2) (8, 8, 2).
6.2 Identifying B(e, e, n)
Now, we want to check which of the monoids B ⊕k (e, e, n) are isomorphic to B ⊕ (e, e, n).
Assume there exists an isomorphism φ : B ⊕k (e, e, n) −→ B ⊕ (e, e, n) for a given k with
0 ≤ k ≤ e − 1. We start with the following lemma.
Lemma 6.4. The isomorphism φ fixes s̃3 , s̃4 , · · · , s̃n and permutes the t̃i where
i ∈ Z/eZ.
Proof. Let f be in X̃ ∗ . We have ℓ(f ) ≤ ℓ(φ(f )). Thus, we have ℓ(x̃) ≤ ℓ(φ(x̃)) for x̃ ∈ X̃. Also, ℓ(φ(x̃)) ≤ ℓ(φ−1 (φ(x̃))) = ℓ(x̃). Hence ℓ(x̃) = ℓ(φ(x̃)) = 1. It follows that φ(x̃) ∈ {t̃0 , t̃1 , · · · , t̃e−1 , s̃3 , · · · , s̃n }.
Furthermore, the only generator of B ⊕k (e, e, n) that commutes with all other generators except for one of them is s̃n . On the other hand, s̃n is the only generator of
B ⊕ (e, e, n) that satisfies the latter property. Hence φ(s̃n ) = s̃n . Next, s̃n−1 is the
only generator of B ⊕k (e, e, n) that does not commute with s̃n . The only generator
of B ⊕ (e, e, n) that does not commute with s̃n is also s̃n−1 . Hence φ(s̃n−1 ) = s̃n−1 .
Next, the only generator of B ⊕k (e, e, n) different from s̃n and that does not commute
with s̃n−1 is s̃n−2 . And so on, we get φ(s̃j ) = s̃j for 3 ≤ j ≤ n. It remains that
φ({t̃i | 0 ≤ i ≤ e − 1}) = {t̃i | 0 ≤ i ≤ e − 1}.
Proposition 6.5. The monoids B ⊕k (e, e, n) and B ⊕ (e, e, n) are isomorphic if and
only if k ∧ e = 1.
Proof. Assume there exists an isomorphism φ between the monoids B ⊕k (e, e, n) and B ⊕ (e, e, n). By Lemma 6.4, we have φ(s̃j ) = s̃j for 3 ≤ j ≤ n and φ({t̃i | 0 ≤ i ≤ e − 1}) = {t̃i | 0 ≤ i ≤ e − 1}. A diagram presentation Γk of B ⊕k (e, e, 2) for 1 ≤ k ≤ e − 1 can be viewed as the same diagram presentation of B (k) (e, e, 2) given earlier. The isomorphism φ implies that we have only one connected component in Γk or, in other words, a closed chain. Hence k is a generator of the cyclic group Z/eZ, that is, k satisfies the condition k ∧ e = 1.
Conversely, let 1 ≤ k ≤ e − 1 be such that k ∧ e = 1. We define a map φ : B ⊕ (e, e, n) −→ B ⊕k (e, e, n) by φ(s̃j ) = s̃j for 3 ≤ j ≤ n and φ(t̃0 ) = t̃k , φ(t̃1 ) = t̃2k , φ(t̃2 ) = t̃3k , · · · , φ(t̃e−1 ) = t̃ek . The map φ is a well-defined monoid homomorphism, which is both surjective (every generator of B ⊕k (e, e, n) is the image of a generator of B ⊕ (e, e, n)) and injective (as it is bijective on the relations). Hence φ defines an isomorphism of monoids.
When k ∧ e = 1, since B ⊕k (e, e, n) is isomorphic to B ⊕ (e, e, n), we have the
following.
Corollary 6.6. B (k) (e, e, n) is isomorphic to the complex braid group B(e, e, n) for
k ∧ e = 1.
The reason that the proof of Proposition 6.5 fails in the case k ∧ e ≠ 1 is that there is more than one connected component in Γk linking t̃0 , t̃1 , · · · , and t̃e−1 together, as we can see in Figure 7. Actually, it is easy to check that the number of connected components linking t̃0 , t̃1 , · · · , and t̃e−1 together is the number of cosets of the subgroup of Z/eZ generated by the class of k, which is equal to k ∧ e, and each of these cosets has e′ = e/(k ∧ e) elements. This will be useful in the next subsection.
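This coset count is easy to check computationally; here is a small sketch (our own helper, assuming nothing beyond elementary modular arithmetic):

```python
from math import gcd

def components(e, k):
    """Cosets of the subgroup <k> of Z/eZ; these index the connected chains
    linking the vertices t_0, ..., t_{e-1} in the diagram Gamma_k."""
    subgroup = {(i * k) % e for i in range(e)}
    seen, parts = set(), []
    for x in range(e):
        if x not in seen:
            coset = sorted((x + s) % e for s in subgroup)
            seen.update(coset)
            parts.append(coset)
    return parts
```

For B(2)(8, 8, 2) as in Figure 7, components(8, 2) yields the two chains {0, 2, 4, 6} and {1, 3, 5, 7}, i.e. k ∧ e = 2 components of e′ = 4 vertices each.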
6.3 New Garside groups
When k ∧ e ≠ 1, we describe B (k) (e, e, n) as an amalgamated product of k ∧ e copies
of the complex braid group B(e′ , e′ , n) with e′ = e/(e ∧ k), over a common subgroup
which is the Artin-Tits group B(2, 1, n − 1). This allows us to compute the center of
B (k) (e, e, n). Finally, using the Garside structure of B (k) (e, e, n), we compute its first
and second integral homology groups using the Dehornoy-Lafont complex [9] and the
method used in [4].
By an adaptation of the results of Crisp in [7] as in Lemma 5.2 of [4], we have the
following embedding. Let B := B(2, 1, n − 1) be the Artin-Tits group defined by the
following diagram presentation.
[Diagram: a linear chain on the nodes q1 , q2 , q3 , . . . , qn−2 , qn−1 , with a double edge between q1 and q2 .]
Proposition 6.7. The group B injects in B (k) (e, e, n).
Proof. Define a monoid homomorphism φ : B + −→ B ⊕k (e, e, n) : q1 ↦ t̃i t̃i−k , q2 ↦ s̃3 , · · · , qn−1 ↦ s̃n . It is easy to check that for all x, y ∈ {q1 , q2 , · · · , qn−1 }, we
have lcm(φ(x), φ(y)) = φ(lcm(x, y)). Hence by applying Lemma 5.2 of [4], B(2, 1, n − 1)
injects in B (k) (e, e, n).
We construct B (k) (e, e, n) as follows.
Proposition 6.8. Let B(1) := B(e′ , e′ , n) ∗B B(e′ , e′ , n) be the amalgamated product
of two copies of B(e′ , e′ , n) over B = B(2, 1, n − 1) with e′ = e/(e ∧ k). Define B(2) :=
B(e′ , e′ , n) ∗B (B(e′ , e′ , n) ∗B B(e′ , e′ , n)) and so on until defining B((e ∧ k) − 1).
We have B((e ∧ k) − 1) = B (k) (e, e, n).
Proof. Due to the presentation of B (k) (e, e, n) given in Definition 6.1 and to the
presentation of the amalgamated products (see Section 4.2 of [14]), one can deduce
that B((e ∧ k) − 1) is equal to B (k) (e, e, n).
Figure 8: The construction of B (k) (e, e, n).
Example 6.9. Consider the case of B (2) (6, 6, 3). It is an amalgamated product of
k ∧ e = 2 copies of B(e′ , e′ , 3) with e′ = e/(k ∧ e) = 3 over the Artin-Tits group
B(2, 1, 2). Consider the following diagram of this amalgamation.
The presentation of B(3, 3, 3) ∗ B(3, 3, 3) over B(2, 1, 2) is as follows:
• the generators are the union of the generators of the two copies of B(3, 3, 3),
• the relations are the union of the relations of the two copies of B(3, 3, 3) with
the additional relations s̃3 = s̃′3 and t̃2 t̃0 = t̃3 t̃1 .
This is exactly the presentation of B (2) (6, 6, 3) given in Definition 6.1.
[Diagram: two copies of the presentation diagram of B(3, 3, 3), on the generators t̃0 , t̃2 , s̃3 and t̃1 , t̃3 , s̃′3 , identified along s̃3 ∼ s̃′3 and t̃2 t̃0 = t̃3 t̃1 ; the result is the diagram of B (2) (6, 6, 3) on t̃0 , . . . , t̃5 .]
Proposition 6.10. The center of B (k) (e, e, n) is infinite cyclic isomorphic to Z.
Proof. By Corollary 4.5 of [14], which computes the center of an amalgamated product, the center of B (k) (e, e, n) is the intersection of the centers of B and B(e′ , e′ , n). Since the centers of B and B(e′ , e′ , n) are infinite cyclic [3], the center of B (k) (e, e, n) is infinite cyclic, isomorphic to Z.
Since the center of B(e, e, n) is also isomorphic to Z (see [3]), in order to distinguish these structures from the braid groups B(e, e, n), we compute their first and
second integral homology groups. We recall the Dehornoy-Lafont complex and follow
the method in [4] where the second integral homology group of B(e, e, n) is computed.
We order the elements of X̃ by considering s̃n < s̃n−1 < · · · < s̃3 < t̃0 < t̃1 < · · · < t̃e−1 . For f ∈ B ⊕k (e, e, n), denote by d(f ) the smallest element in X̃ which divides f on the right. An r-cell is an r-tuple [x1 , · · · , xr ] of elements in X̃ such that x1 < x2 < · · · < xr and xi = d(lcm(xi , xi+1 , · · · , xr )). The set Cr of r-chains is the free ZB ⊕k (e, e, n)-module with basis X̃r , the set of all r-cells, with the convention X̃0 = {[∅]}. We provide the definition of the differential ∂r : Cr −→ Cr−1 .
Definition 6.11. Let [α, A] be an (r + 1)-cell, with α ∈ X̃ and A an r-cell. Denote by α/A the unique element of B ⊕k (e, e, n) such that (α/A ) lcm(A) = lcm(α, A). Define the differential ∂r : Cr −→ Cr−1 recursively through two Z-module homomorphisms sr : Cr −→ Cr+1 and ur : Cr −→ Cr as follows:
∂r+1 [α, A] = α/A [A] − ur (α/A [A]),
with ur+1 = sr ◦ ∂r+1 , where u0 (f [∅]) = [∅] for all f ∈ B ⊕k (e, e, n), and
sr ([∅]) = 0, sr (x[A]) = 0 if α := d(x lcm(A)) coincides with the first coefficient in A, and otherwise sr (x[A]) = y[α, A] + sr (yur (α/A [A])) with x = yα/A .
We provide the final result of the computation of ∂1 , ∂2 , and ∂3 for all 1-, 2-, and 3-cells, respectively. For all x ∈ X̃, we have
∂1 [x] = (x − 1)[∅];
for all 1 ≤ i ≤ e − 1,
∂2 [t̃0 , t̃i ] = t̃i+k [t̃i ] − t̃k [t̃0 ] − [t̃k ] + [t̃i+k ];
for x, y ∈ X̃ with xyx = yxy,
∂2 [x, y] = (yx + 1 − x)[y] + (y − xy − 1)[x]; and
for x, y ∈ X̃ with xy = yx,
∂2 [x, y] = (x − 1)[y] − (y − 1)[x].
For j ≠ −k mod e, we have:
∂3 [s̃3 , t̃0 , t̃j ] = (s̃3 t̃k t̃0 s̃3 − t̃k t̃0 s̃3 + t̃j+2k s̃3 )[t̃0 , t̃j ] − t̃j+2k s̃3 t̃j+k [s̃3 , t̃j ] + (t̃j+2k −
s̃3 t̃j+2k )[s̃3 , t̃j+k ] + (s̃3 − t̃j+2k s̃3 − 1)[t̃0 , t̃j+k ] + (s̃3 t̃2k − t̃2k )[s̃3 , t̃k ] + (t̃2k s̃3 + 1 −
s̃3 )[t̃0 , t̃k ] + [s̃3 , t̃j+2k ] + t̃2k s̃3 t̃k [s̃3 , t̃0 ] − [s̃3 , t̃2k ] and
∂3 [s̃3 , t̃0 , t̃−k ] = (s̃3 t̃k t̃0 s̃3 − t̃k t̃0 s̃3 + t̃k s̃3 )[t̃0 , t̃−k ] − t̃k s̃3 t̃0 [s̃3 , t̃−k ] + (1 − t̃2k +
s̃3 t̃2k )[s̃3 , t̃k ] + (1 + t̃2k s̃3 − s̃3 )[t̃0 , t̃k ] + (t̃k − s̃3 t̃k + t̃2k s̃3 t̃k )[s̃3 , t̃0 ] − [s̃3 , t̃2k ].
Also, for 1 ≤ i ≤ e − 1 and 4 ≤ j ≤ n, we have:
∂3 [s̃j , t̃0 , t̃i ] = (s̃j − 1)[t̃0 , t̃i ] − t̃i+k [s̃j , t̃i ] + t̃k [s̃j , t̃0 ] − [s̃j , t̃i+k ] + [s̃j , t̃k ],
for x, y, z ∈ X̃ with xyx = yxy, xz = zx, and yzy = zyz,
∂3 [x, y, z] = (z + xyz − yz − 1)[x, y] − [x, z] + (xz − z − x + 1 − yxz)y[x, z] + (x − 1 − yx + zyx)[y, z],
for x, y, z ∈ X̃ with xyx = yxy, xz = zx, and yz = zy,
∂3 [x, y, z] = (1 − x + yx)[y, z] + (y − 1 − xy)[x, z] + (z − 1)[x, y],
for x, y, z ∈ X̃ with xy = yx, xz = zx, and yzy = zyz,
∂3 [x, y, z] = (1 + yz − z)[x, y] + (y − 1 − zy)[x, z] + (x − 1)[y, z], and
for x, y, z ∈ X̃ with xy = yx, xz = zx, and yz = zy,
∂3 [x, y, z] = (1 − y)[x, z] + (z − 1)[x, y] + (x − 1)[y, z].
Let dr = ∂r ⊗ZB ⊕k (e,e,n) Z : Cr ⊗ZB ⊕k (e,e,n) Z −→ Cr−1 ⊗ZB ⊕k (e,e,n) Z be the
differential with trivial coefficients. For example, for d2 , we have: for all 1 ≤ i ≤ e − 1,
d2 [t̃0 , t̃i ] = [t̃i ] − [t̃0 ] − [t̃k ] + [t̃i+k ],
for x, y ∈ X̃ with xyx = yxy,
d2 [x, y] = [y] − [x], and
for x, y ∈ X̃ with xy = yx,
d2 [x, y] = 0.
The same can be done for ∂3 .
Definition 6.12. Define the integral homology group of order r to be
Hr (B (k) (e, e, n), Z) = ker(dr )/Im(dr+1 ).
We are ready to compute the first and second integral homology groups of B (k) (e, e, n).
Using the presentation of B (k) (e, e, n) given in Definition 6.1, one can check that the abelianization of B (k) (e, e, n) is isomorphic to Z. Since H1 (B (k) (e, e, n), Z) is isomorphic to the abelianization of B (k) (e, e, n), we deduce that H1 (B (k) (e, e, n), Z) is isomorphic to Z. Since H1 (B(e, e, n), Z) is also isomorphic to Z (see [3]), the first integral homology group does not tell us whether these groups are isomorphic to some B(e, e, n) or not.
Recall that by [4], we have
• H2 (B(e, e, 3), Z) ≃ Z/eZ where e ≥ 2,
• H2 (B(e, e, 4), Z) ≃ Z/eZ × Z/2Z when e is odd and H2 (B(e, e, 4), Z) ≃ Z/eZ ×
(Z/2Z)2 when e is even, and
• H2 (B(e, e, n), Z) ≃ Z/eZ × Z/2Z when n ≥ 5 and e ≥ 2.
In order to compute H2 (B (k) (e, e, n), Z), we follow exactly the proof of Theorem 6.4
in [4]. Using the same notations as in [4], we set
vi = [t̃0 , t̃i ] + [s̃3 , t̃0 ] + [s̃3 , t̃k ] − [s̃3 , t̃i ] − [s̃3 , t̃i+k ] where 1 ≤ i ≤ e − 1,
and we also have
H2 (B (k) (e, e, n), Z) = (K1 /d3 (C1 )) ⊕ (K2 /d3 (C2 )).
We have d3 [s̃3 , t̃0 , t̃j ] = vj − vj+k + vk if j ≠ −k, and d3 [s̃3 , t̃0 , t̃−k ] = v−k + vk . Denote
ui = [s̃3 , t̃0 , t̃i ] for 1 ≤ i ≤ e − 1. We define a basis of C2 as follows. For each coset of
the subgroup of Z/eZ generated by the class of k, say {t̃x , t̃x+k , · · · , t̃x−k } such that
1 ≤ x ≤ e − 1, we define wx+ik = ux+ik + ux+(i+1)k + · · · + ux−k for 0 ≤ i ≤ e − 1,
and when x = 0, we define wik = uik + u(i+1)k + · · · + u−k for 1 ≤ i ≤ e − 1. In
the Z-basis (wk , w2k , · · · , w−k , w1 , w1+k , · · · , w1−k , · · · , wx , wx+k , · · · , wx−k , · · · ) and
(vk , v2k , · · · , v−k , v1 , v1+k , · · · , v1−k , · · · , vx , vx+k , · · · , vx−k , · · · ), d3 is in triangular
form, with (e ∧ k) − 1 diagonal coefficients equal to zero and all other diagonal coefficients equal to 1, except for one, which equals e′ = e/(e ∧ k). In this case,
we have H2 (B (k) (e, e, 3), Z) = Z(e∧k)−1 × Z/e′ Z. The rest of the proof is essentially
similar to the proof of Theorem 6.4 in [4]. When n = 4, we get [s̃4 , t̃i ] ≡ [s̃4 , t̃i+2k ]
for every i and since 2[s̃4 , t̃i ] ≡ 0 for every i, we get K2 /d3 (C2 ) ≃ (Z/2Z)c , where c is
the number of cosets of the subgroup of Z/eZ generated by the class of 2k.
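The n = 3 part of this computation can be verified mechanically: build the matrix of d3 on the basis v1 , . . . , ve−1 from the relations d3 [s̃3 , t̃0 , t̃j ] = vj − vj+k + vk (j ≠ −k) and d3 [s̃3 , t̃0 , t̃−k ] = v−k + vk , and read off the cokernel from its Smith normal form. The following Python sketch (our own code, not from the text) does this:

```python
def d3_matrix(e, k):
    """Matrix of d3 on the basis v_1, ..., v_{e-1}; column j is the image
    of [s3, t0, tj] (v-indices taken mod e; v_0 never occurs)."""
    cols = []
    for j in range(1, e):
        col = [0] * (e - 1)
        if j == (e - k) % e:                 # d3[s3, t0, t_{-k}] = v_{-k} + v_k
            col[j - 1] += 1
            col[k % e - 1] += 1
        else:                                # d3[s3, t0, t_j] = v_j - v_{j+k} + v_k
            col[j - 1] += 1
            col[(j + k) % e - 1] -= 1
            col[k % e - 1] += 1
        cols.append(col)
    return [[cols[c][r] for c in range(e - 1)] for r in range(e - 1)]

def snf_invariants(mat):
    """Nonzero invariant factors (in divisibility order) of an integer matrix."""
    A = [row[:] for row in mat]
    m, n = len(A), len(A[0])
    inv, t = [], 0
    while t < min(m, n):
        piv = next(((i, j) for i in range(t, m) for j in range(t, n) if A[i][j]), None)
        if piv is None:
            break
        i, j = piv
        A[t], A[i] = A[i], A[t]
        for row in A:
            row[t], row[j] = row[j], row[t]
        while True:
            for i in range(t + 1, m):        # clear column t by row operations
                if A[i][t]:
                    q = A[i][t] // A[t][t]
                    A[i] = [a - q * b for a, b in zip(A[i], A[t])]
                    if A[i][t]:              # a smaller remainder becomes the pivot
                        A[t], A[i] = A[i], A[t]
            if any(A[i][t] for i in range(t + 1, m)):
                continue
            for j in range(t + 1, n):        # clear row t by column operations
                if A[t][j]:
                    q = A[t][j] // A[t][t]
                    for row in A:
                        row[j] -= q * row[t]
                    if A[t][j]:
                        for row in A:
                            row[t], row[j] = row[j], row[t]
            if any(A[t][j] for j in range(t + 1, n)):
                continue
            bad = next(((i, j) for i in range(t + 1, m) for j in range(t + 1, n)
                        if A[i][j] % A[t][t]), None)
            if bad is None:
                break
            A[t] = [a + b for a, b in zip(A[t], A[bad[0]])]  # force divisibility
        inv.append(abs(A[t][t]))
        t += 1
    return inv
```

Each missing invariant factor contributes a free Z-summand to the cokernel. For example, for (e, k) = (6, 2) the 5 × 5 matrix has invariant factors 1, 1, 1, 3, i.e. cokernel Z × Z/3Z, matching Z^(e∧k)−1 × Z/e′ Z; for (e, k) = (5, 1) one recovers Z/5Z.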
In the following proposition, we provide the second integral homology group of
B (k) (e, e, n).
Proposition 6.13. Let n ≥ 3 and e ≥ 2.
• When n = 3, we have H2 (B (k) (e, e, 3), Z) ≃ Z(e∧k)−1 × Z/e′ Z,
• when n = 4, H2 (B (k) (e, e, 4), Z) ≃ Z(e∧k)−1 × Z/e′ Z × (Z/2Z)c , where c is the
number of cosets of the subgroup of Z/eZ generated by the class of 2k, and
• when n ≥ 5, H2 (B (k) (e, e, n), Z) ≃ Z(e∧k)−1 × Z/e′ Z × Z/2Z.
Comparing this result with H2 (B(e, e, n), Z), one can check that if k ∧ e ≠ 1, then
B (k) (e, e, n) is not isomorphic to a complex braid group of type B(d, d, n) with d ≥ 2.
Thus, we conclude with the following theorem.
Theorem 6.14. B (k) (e, e, n) is isomorphic to B(e, e, n) if and only if k ∧ e = 1.
Acknowledgements
I would like to thank my PhD supervisors, Eddy Godelle and Ivan Marin, for fruitful discussions. Also, I would like to thank all the members of the Laboratoire de Mathématiques Nicolas Oresme and the Laboratoire Amiénois de Mathématiques Fondamentales et Appliquées for providing me with the right environment to finish this work.
References
[1] D. Bessis and R. Corran. Non-crossing partitions of type (e, e, r). Adv. Math.,
202:1–49, 2006.
[2] E. Brieskorn and K. Saito. Artin-Gruppen und Coxeter-Gruppen. Invent. Math., 17:245–271, 1972.
[3] M. Broué, G. Malle, and R. Rouquier. Complex reflection groups, braid groups, Hecke algebras. J. Reine Angew. Math., 500:127–190, 1998.
[4] F. Callegaro and I. Marin. Homology computations for complex braid groups. J.
European Math. Soc., (16):103–164, 2014.
[5] R. Corran. On monoids related to braid groups. PhD thesis, University of Sydney,
May 2000.
[6] R. Corran and M. Picantin. A new Garside structure for braid groups of type (e, e, r). J. London Math. Soc., 84(3):689–711, 2011.
[7] J. Crisp. Injective maps between Artin groups. “Geometric Group Theory Down
Under” ed. J. Cossey et al., de Gruyter Verlag, pages 119–137, 1999.
[8] P. Dehornoy, F. Digne, E. Godelle, D. Krammer, and J. Michel. Foundations of
Garside Theory, volume 22. EMS Tracts in Mathematics, 2015.
[9] P. Dehornoy and Y. Lafont. Homology of Gaussian groups. Ann. Inst. Fourier, 53(2):1001–1052, 2003.
[10] P. Dehornoy and L. Paris. Gaussian groups and Garside groups, two generalisations of Artin groups. Proc. London Math. Soc., 79(3):569–604, 1999.
[11] P. Deligne. Les immeubles des groupes de tresses généralisés. Invent. Math.,
17:273–302, 1972.
[12] F.A. Garside. The theory of knots and associated problems. PhD thesis, Oxford
University, 1965.
[13] F.A. Garside. The braid group and other groups. Quart. J. Math. Oxford,
20:235–254, 1969.
[14] A. Karrass, W. Magnus, and D. Solitar. Combinatorial Group Theory. Reprint
of the Interscience Publishers, New York, 1976.
[15] J. Michel. Groupes finis de réflexion. University notes, Université Paris Diderot, 2004. http://webusers.imj-prg.fr/~jean.michel/papiers/cours2004.pdf.
[16] J. Michel and G. Neaime. Contribution to the CHEVIE package, cp.g; see http://webusers.imj-prg.fr/~jean.michel/gap3.
[17] G.C. Shephard and J.A. Todd. Finite unitary reflection groups. Canad. J. Math.,
6(2):274–304, 1954.
NON-GAUSSIAN QUASI-LIKELIHOOD ESTIMATION OF SDE DRIVEN BY
LOCALLY STABLE LÉVY PROCESS
arXiv:1608.06758v3 [] 8 Jun 2017
HIROKI MASUDA
Abstract. We address estimation of parametric coefficients of a pure-jump Lévy driven univariate
stochastic differential equation (SDE) model, which is observed at high frequency over a fixed time
period. It is known from the previous study [36] that adopting the conventional Gaussian quasi-maximum
likelihood estimator then leads to an inconsistent estimator. In this paper, under the assumption that
the driving Lévy process is locally stable, we extend the Gaussian framework into a non-Gaussian
counterpart, by introducing a novel quasi-likelihood function formally based on the small-time stable
approximation of the unknown transition density. The resulting estimator turns out to be asymptotically
mixed normally distributed without ergodicity and finite moments for a wide range of the driving pure-jump Lévy processes, showing much better theoretical performance compared with the Gaussian quasi-maximum likelihood estimator. Extensive simulations are carried out to show good estimation accuracy.
The case of large-time asymptotics under ergodicity is briefly mentioned as well, where we can deduce
an analogous asymptotic normality result.
1. Introduction
Stochastic differential equation (SDE) driven by a Lévy process is one of basic models to describe
time-varying physical and natural phenomena. There do exist many situations where non-Gaussianity
of distributions of data increments, or of a residual sequence whenever available, is significant in small-time, making diffusion type models observed at high frequency somewhat inappropriate to reflect reality;
see [3] as well as the enormous references therein, and also [16]. This non-Gaussianity may not be well
modeled even by a diffusion with compound-Poisson jumps as well since jump-time points are then rather
sparse compared with sampling frequency, so that most increments are approximately Gaussian except
for intervals containing jumps. SDE driven by a pure-jump Lévy process may then serve as a good natural
candidate model. For those models, however, a tailor-made estimation procedure seems to be far from
being well developed, which motivated our present study.
In this paper, we consider a solution to the univariate Markovian SDE
(1.1)
dXt = a(Xt , α)dt + c(Xt− , γ)dJt
defined on an underlying complete filtered probability space (Ω, F, (Ft )t∈R+ , P) with
Ft = σ(X0 ) ∨ σ(Js ; s ≤ t),
(1.2)
where:
• The initial random variable X0 is F0 -measurable;
• The driving noise J is a symmetric pure-jump (càdlàg) Lévy process independent of X0 ;
• The trend coefficient a : R × Θα → R and scale coefficient c : R × Θγ → R are assumed to be
known except for the p-dimensional parameter
θ := (α, γ) ∈ Θα × Θγ = Θ ⊂ Rp ,
with Θα ⊂ Rpα and Θγ ⊂ Rpγ being bounded convex domains.
Our objective here is estimation of θ, when the true value θ0 = (α0 , γ0 ) ∈ Θ does exist and the process X
is observed only at discrete but high-frequency time instants tnj = jhn , j = 0, 1, . . . , n, with nonrandom
sampling step size hn → 0 as n → ∞. We will mostly work under the bounded-domain asymptotics 1:
(1.3) Tn ≡ T, i.e. hn = T/n
Date: June 9, 2017.
Key words and phrases. Asymptotic mixed normality, high-frequency sampling, locally stable Lévy process, stable quasilikelihood function, stochastic differential equations.
1The equidistance assumption on the sampling times could be removed as soon as the ratios of min
j≤n (tj − tj−1 ) and
maxj≤n (tj − tj−1 ) are bounded in an appropriate order. This may be shown by the same line as in [36, pp.1604–1605].
for a fixed terminal sampling time T ∈ (0, ∞), that is, we observe not a complete path (Xt )t≤T but the
time-discretized step process
(1.4) Xt(n) := X⌊t/hn⌋hn
over the period [0, T ]; see Section 3.3 for the large-time asymptotics where Tn → ∞ under the ergodicity.
Due to the lack of a closed-form formula for the transition distribution, a feasible approach based
on the genuine likelihood function is rarely available. In this paper, we will introduce a non-Gaussian
quasi-likelihood function 2, which extends the prototype mentioned in [34] and [37], under the locally
(symmetric) β-stable property of J in the sense that
(1.5) L(h−1/β Jh ) ⇒ Sβ , h → 0,
where Sβ stands for the standard symmetric β-stable distribution corresponding to the characteristic
function
ϕ0 (u) := exp(−|u|β );
among others, we refer to [24], [47], [48] and [53] for comprehensive accounts of general stable distributions. It is known from [6, Proposition 1] that, as long as a linear scaling of Jh for h → 0
is concerned, the strictly stable distribution is the only possible asymptotic distribution. Many locally
stable Lévy processes with finite variance can exhibit large-time Gaussianity (i.e. central limit effect)
in addition to the small-time non-Gaussianity. In the main results, we will assume the locally stable
property (1.5) with a stronger mode (see Lemmas 2.2 and 2.5) and that the stability index β is known
with
β ∈ [1, 2).
It should be noted that the value β is also known as the Blumenthal-Getoor activity index defined by
β := inf{ b ≥ 0 : ∫|z|≤1 |z|b ν(dz) < ∞ },
which measures degree of J’s jump activity.
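As a quick illustration of this definition (our own toy computation, not from the paper): for a Lévy density behaving like |z|^{−1−β} near the origin, the integral ∫|z|≤1 |z|^b ν(dz) behaves like ∫δ^1 z^{b−1−β} dz as δ ↓ 0, which stays bounded exactly when b > β, so the infimum of admissible b is β:

```python
import math

def truncated_moment(b, beta, delta):
    """Closed form of the integral of z**(b - 1 - beta) over [delta, 1]
    (the small-jump part of the moment integral for a density ~ |z|**(-1-beta))."""
    p = b - beta
    if p == 0:
        return math.log(1 / delta)
    return (1 - delta ** p) / p

beta = 1.5
# b > beta: the truncated integrals stabilize as delta -> 0 (finite moment)
assert abs(truncated_moment(1.8, beta, 1e-12) - 1 / 0.3) < 0.01
# b < beta: they blow up, so such b are excluded from the infimum
assert truncated_moment(1.2, beta, 1e-8) > 10 * truncated_moment(1.2, beta, 1e-2)
```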
The proposed maximum quasi-likelihood estimator θ̂n = (α̂n , γ̂n ) has the property that
(1.6) ( √n hn^{1−1/β} (α̂n − α0 ), √n (γ̂n − γ0 ) )
is asymptotically mixed normally distributed under some conditions, which extends the previous works
[33] and [36] that adopt the Gaussian quasi-maximum likelihood estimator; we refer to [39, Section
2] for some formal comparisons. In particular, the convergence (1.6) clarifies that the activity index β
determines the rate of convergence of estimating the trend parameter α; note that
√n h^{1−1/β} = T^{1−1/β} n^{(2−β)/(2β)} → ∞
as n → ∞. It should be emphasized that this estimator can be much more efficient compared with
the Gaussian maximum quasi-likelihood estimator studied in [36]. Most notably, unlike the case of
diffusions, we can estimate not only the scale parameter γ but also the trend parameter α, with the
explicit asymptotic distribution in hand; see [15] for the related local asymptotic normality result. To
prove the asymptotic mixed normality, we will take a doubly approximate procedure based on the Euler-Maruyama scheme combined with the stable approximation of L(h−1/β Jh ) for h → 0. Our result provides
us with the first systematic methodology for estimating the possibly non-linear pure-jump Lévy driven
SDE (1.1) based on a non-Gaussian quasi-likelihood.
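As a numerical sanity check on the rate identity √n h^{1−1/β} = T^{1−1/β} n^{(2−β)/(2β)} under h = T/n (our own snippet; T and β are arbitrary illustrative values):

```python
import math

# illustrative values: fixed terminal time T, stability index beta in [1, 2)
T, beta = 2.0, 1.5

rates = []
for n in [10, 100, 1000, 10000]:
    h = T / n                                   # h_n = T/n as in (1.3)
    lhs = math.sqrt(n) * h ** (1 - 1 / beta)    # sqrt(n) * h_n^{1 - 1/beta}
    rhs = T ** (1 - 1 / beta) * n ** ((2 - beta) / (2 * beta))
    assert abs(lhs - rhs) < 1e-9 * rhs          # the two expressions agree
    rates.append(lhs)

# the drift rate diverges, but more slowly than sqrt(n)
assert all(a < b for a, b in zip(rates, rates[1:]))
assert rates[-1] < math.sqrt(10000)
```

This makes visible that for β < 2 the exponent (2 − β)/(2β) is positive, so the trend parameter is still consistently estimable, only at a slower rate than the scale parameter.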
Here are a couple of further remarks on our model.
(1) The model is semiparametric in the sense that we do not completely specify the Lévy measure
of L(J), while supposing the parametric coefficients; of course, the Lévy measure is an infinitedimensional parameter, so that β alone never determines the distribution L(J) in general. In
estimation of L(X), it would be desirable (whenever possible) to estimate the parameter θ with
leaving the remaining parameters contained in Lévy measure as much as unknown. The proposed
quasi-likelihood, termed as (non-Gaussian) stable quasi-likelihood, will provide us with a widely
applicable tool for this purpose.
2Non-Gaussian quasi-likelihoods have not received much attention compared with the popular Gaussian one. Among
others, we refer to the recent paper [12] for a certain non-Gaussian quasi-likelihood estimation of a possibly heavy-tailed
GARCH model, and also to [52] for self-weighted Laplace quasi-likelihood in a time series context.
(2) It is assumed from the very beginning that β < 2 so that J contains no Gaussian component.
Normally, the simultaneous presence of a non-degenerate diffusion part and a non-null jump part
makes the parametric-estimation problem much more complicated. The recent papers [25] and
[28] discussed usefulness of pure-jump models. Although they are especially concerned with financial context, pure-jump processes should be useful for model building in many other application
fields where non-Gaussianity of time-varying data is of primary importance. For example, econometrics, signal processing, population dynamics, hydrology, radiophysics, turbulence, biological
molecule movement, noise-contaminated biosignals, and so on; we refer to [3], [9], and [11] for
some recent related works.
(3) Finally, our model (1.1) may be formally seen as a continuous-time analogue to the discrete-time
model
Xj = a(Xj−1 , α) + c(Xj−1 , γ)εj ,  j = 1, . . . , n,
where the εj are i.i.d. random variables. By making use of the locally stable property (1.5), our
model setup enables us to formulate a flexible and unified estimation procedure, which cannot
be shared with the discrete-time counterpart. The bounded-domain asymptotics (1.3) makes it
possible to “localize” the event, sidestepping both stability (such as the ergodicity) and moment
condition on L(J1 ). Instead, in order to deduce the asymptotic mixed normality we need much
more than the (martingale) central limit theorem with Gaussian limit. Fortunately, we have
the very general tool to handle this, that is, Jacod’s characterization of conditionally Gaussian
martingales (see [13] and [20], and also Section 6.1), which in particular can deal with the SDE
(1.1) when J is a pure-jump Lévy process.
The following conventions and basic notations are used throughout this paper. We will largely suppress
the dependence on n from the notations tnj and hn . For any process Y ,
∆j Y = ∆nj Y := Ytj − Ytj−1
denotes the jth increment, and we write
gj−1 (v) = g(Xtj−1 , v)
for a function g having two components, such as aj−1 (α) = a(Xtj−1 , α). For a variable x = {xi }, we write
∂x = {∂/∂xi }i , ∂x2 = {∂2 /∂xi ∂xj }i,j , and so forth, omitting the subscript x when there is no confusion; given a function f = f (s1 , . . . , sk ) : S1 × · · · × Sk → Rm with Si ⊂ Rdi , we write ∂s1^{j1} · · · ∂sk^{jk} f for the array of partial derivatives of dimension m × (∏l=1..k dl^{jl} ). The characteristic function of a random variable ξ
is denoted by ϕξ . For any matrix M we let M ⊗2 := M M ⊤ (⊤ denotes the transpose). We use C for a generic positive constant which may vary at each appearance, and write an ≲ bn when an ≤ Cbn for every n large enough. Finally, the symbols →p and →L denote the convergences in P-probability and in distribution, respectively; all the asymptotics below will be taken for n → ∞ unless otherwise mentioned.
The paper is organized as follows. We first describe the basic model setup in Section 2. The main
results are presented in Section 3, followed by numerical experiments in Section 4. Section 5 presents the
proofs of the criteria for the key assumptions given in Section 2. Finally, Section 6 is devoted to proving
the main results.
2. Basic setup and assumptions
2.1. Locally stable Lévy process.
2.1.1. Definition and criteria. We denote by
g0,β (z) := cβ /|z|^{1+β} , z ≠ 0,
the Lévy density of Sβ , where
(2.1) cβ := { (2/β) Γ(1 − β) cos(βπ/2) }^{−1}
with c1 = limβ→1 cβ = π −1 ([48, Lemma 14.11]).
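Numerically, one can sanity-check this constant, written here in the closed form cβ = {(2/β) Γ(1 − β) cos(βπ/2)}^{−1} (the reconstruction consistent with the stated limit), against c1 = π^{−1} (our own snippet):

```python
import math

def c_beta(b):
    """Candidate closed form of (2.1): c_beta = {(2/b) * Gamma(1-b) * cos(b*pi/2)}^{-1},
    for b in (0, 1) or (1, 2); at b = 1 it extends by continuity to 1/pi."""
    return 1.0 / ((2.0 / b) * math.gamma(1.0 - b) * math.cos(b * math.pi / 2.0))

# the stated limit c_1 = 1/pi is recovered from both sides of b = 1
assert abs(c_beta(1 - 1e-6) - 1 / math.pi) < 1e-4
assert abs(c_beta(1 + 1e-6) - 1 / math.pi) < 1e-4
# positivity on both subintervals: Gamma(1-b) and cos(b*pi/2) change sign together at b = 1
assert c_beta(0.5) > 0 and c_beta(1.5) > 0
```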
4
HIROKI MASUDA
Assumption 2.1 (Driving noise structure).
(1) The Lévy process J has no drift, no Gaussian component, and a symmetric Lévy measure ν, so that
(2.2) ϕJt (u) = exp( t ∫ (cos(uz) − 1) ν(dz) ), u ∈ R.
u ∈ R.
The Lévy measure ν admits a Lebesgue density g of the form
g(z) = g0,β (z) {1 + ρ(z)} , z ≠ 0,
where ρ : R \ {0} → [−1, ∞) is a measurable symmetric function such that for some constants
δ > 0, cρ ≥ 0, and ρ̄ > 0,
|ρ(z)| ≤ cρ |z|δ , 0 < |z| ≤ ρ̄.
(2) Further, ρ is continuously differentiable in R \ {0} and the pair (cρ , β, δ) satisfies either
(a) cρ = 0, or
(b) cρ > 0 and δ > β with
|ρ(z)| + |z∂ρ(z)| ≤ cρ |z|δ , z ≠ 0.
The function ρ controls the degree of “Sβ -likeness” around the origin. Also, if in particular ρ is
bounded, then E(|J1 |q ) < ∞ for q ∈ (−1, β); see [48, Theorem 25.3].
We denote by φβ the density of Sβ , and by ϕh the characteristic function of h−1/β Jh :
ϕh (u) := {ϕJ1 (h−1/β u)}^h , u ∈ R.
Lemma 2.2 below, which will play an essential role in the proof of the main results, shows that Assumption
2.1 ensures not only the locally stable property (1.5) but also an L1 -local limit theorem with specific
convergence rate; it is partly a refinement of [32, Lemma 4.4].
Lemma 2.2.
(1) Let Assumption 2.1(1) hold with the function ρ being bounded.
(a) For every C ≥ 0 and s < 1,
∫(0,∞) (u−s ∨ uC )|ϕh (u) − ϕ0 (u)| du ≲ haν ,
where the constant aν ∈ (0, 1] is defined by aν = 1 if cρ = 0, and aν = (δ/β) ∧ 1 if cρ > 0.
In particular, the distribution L(h−1/β Jh ) for h ∈ (0, 1] admits a positive smooth Lebesgue
density, which we denote by fh , such that
(2.3) supy |fh (y) − φβ (y)| ≲ haν .
(b) For each κ ∈ (0, β), we have
∫ |y|κ |fh (y) − φβ (y)| dy → 0.
(2) Let Assumption 2.1 hold, and assume that there exists a constant K > 0 such that g(z) = 0
(equivalently, ρ(z) = −1) for every |z| > K. Then, for any r ∈ (0, 1] we have
(2.4) ∫ |fh (y) − φβ (y)| dy ≲ h^{β(1∧aν )/(β+r)} .
The additional assumptions on ρ in Lemma 2.2 will not be real restrictions, because in the proof of our main results we will truncate the support of ν in order to deal with possibly heavy-tailed J. The localization argument is allowed under the bounded-domain asymptotics (1.3); see Section 6.1. Since 1 ∧ aν > 1/2 under Assumption 2.1(2), it follows from (2.4) that under (1.3) we can pick a sufficiently small r > 0 to ensure nh^{2β(1∧aν )/(β+r)} → 0, which in turn implies that
(2.5) √n ∫ |fh (y) − φβ (y)| dy → 0.
As will be seen later, it is the convergences (2.3) and (2.5) that are essential for the proofs of the main
results. Assumption 2.1 (also Assumption 2.4 given below) serves as a set of practical sufficient conditions.
Remark 2.3. The conclusions of Lemma 2.2 may remain the same even when we add an independent jump component. For example, we may replace the original J by J⋆ := J + J♭ , where J♭ is a pure-jump Lévy process having a Lévy density g♭ ∈ C 1 (R \ {0}) such that g♭ ≡ 0 for |z| > K and that
(2.6) lim sup|z|→0 |z|^{1+β−δ} ( g♭ (z) + |z| |∂g♭ (z)| ) < ∞.
Then, the Lévy density of J⋆ equals
g⋆ (z) = g(z) + g♭ (z) = g0,β (z){ 1 + ρ(z) + cβ^{−1} |z|^{1+β} g♭ (z) },
and it can be checked that if the function ρ satisfies Assumption 2.1, then so does z ↦ ρ(z) + cβ^{−1} |z|^{1+β} g♭ (z). In particular, (2.6) is fulfilled if jumps of J♭ are of compound-Poisson type.
Assumption 2.1 is designed to give conditions only in terms of the Lévy density g. However, Assumption
2.1(2) excludes cases where ρ ∈ C 1 (R \ {0}) with |∂ρ(0+)| > 0 (hence δ = 1) and β ≤ 1; note that we do
not explicitly impose that β ∈ [1, 2) in Lemma 2.2 (and also in Lemma 2.5 below). It is possible to give
another set of conditions. Let
ψh (u) := log ϕh (u).
Assumption 2.4 (Driving noise structure). Assumption 2.1(1) holds with the function ρ being bounded, ψh ∈ C 1 (R \ {0}), and there exist constants cψ ≥ 0 and r ∈ [0, 1] and a function εψ (h) such that εψ (h) → 0 as h → 0 and that
(2.7) |∂u ψh (u)| ≲ (1/u) ∨ u^{cψ} , u > 0,
(2.8) ∫(0,∞) u^r ϕ0 (u) |∂u ψh (u) + βu^{β−1} | du ≤ εψ (h).
Lemma 2.5. Under Assumption 2.4, we have
∫ |fh(y) − φβ(y)| dy ≲ ( ψ̄(h) ∨ h^{aν} )^{β/(β+r)}
for the constant aν given in Lemma 2.2(1). In particular, (2.5) holds if
√n ( ψ̄(h) ∨ h^{aν} )^{β/(β+r)} → 0.
Lemmas 2.2(2) and 2.5 have no inclusion relation; their domains of applicability differ. Although we do not look at a truncated support of ν in Lemma 2.5, the tail of ν will need to be light enough; for details, see Theorem 3.5 given later.
2.1.2. Examples.
Example 2.6. Trivially, Assumption 2.1 is satisfied in the β-stable driven case (L(J1) = Sβ) and for the whole class of driving Lévy processes considered in [7] and [8], where cρ = 0 (equivalently ρ ≡ 0). See also Remark 3.4.
In the next two concrete examples, Assumption 2.4 is helpful for the verification of (2.5), while Assumption 2.1(2) may not be.
Example 2.7 (Symmetric tempered β-stable Lévy process with β ∈ [1, 2)). The symmetric exponentially tempered β-stable Lévy process, which we denote by TSβ(λ) for λ > 0, is defined through the Lévy density
z ↦ g0,β(z) exp(−λ|z|);
we refer to [26] and the references therein for details on general tempered stable distributions. When L(J1) = TSβ(λ), then L(h^{−1/β}Jh) = TSβ(λh^{1/β}) ⇒ Sβ as h → 0. Assumption 2.1(1) is satisfied with ρ(z) = exp(−λ|z|) − 1, hence δ = 1 for any β < 2, and aν = 1 ∧ (1/β). However, Assumption 2.1(2) requires β < 1, which conflicts with the case β ∈ [1, 2) of our interest here. We will instead verify Assumption 2.4 for β ∈ [1, 2); the function ψh is explicitly given by
ψh(u) = (λh/π) log( 1 + u²/(λ²h²) ) − (2u/π) arctan( u/(λh) )   (β = 1),
ψh(u) = 2cβ Γ(−β) { ( λ²h^{2/β} + u² )^{β/2} cos( β arctan( u/(λh^{1/β}) ) ) − λ^β h }   (β ∈ (1, 2)).
6
HIROKI MASUDA
First we consider β = 1, where ∂u ψh(u) = −(2/π) arctan( u/(λh) ). Using the estimate
(2.9)
sup_{y≥0} ( arctan y − π/2 ) / ( −1/y ) < ∞,
we have |∂u ψh(u) + 1| = (2/π) | arctan( u/(λh) ) − π/2 | ≲ h/u. This gives
∫_{(0,∞)} u^r ϕ0(u) |∂u ψh(u) + 1| du ≲ h ∫_{(0,∞)} u^{r−1} e^{−u} du ≲ h.
Lemma 2.5 ensures that √n ∫ |fh(y) − φβ(y)| dy ≲ ( nh^{2/(1+r)} )^{1/2} → 0, with r > 0 being small enough; even when Tn → ∞, it suffices for the last convergence to suppose that nh^{2−ε1} → 0 for some ε1 > 0.
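These bounds are easy to confirm numerically. Below is a minimal sketch in Python with NumPy (the paper's own numerics are in R), assuming the explicit β = 1 expression for ψh of Example 2.7; the values λ = 1, the h values, and all grid sizes are arbitrary illustration choices. It recovers fh by Fourier inversion of ϕh = e^{ψh}, checks that the L¹ distance to the Cauchy density φ1 shrinks with h, and verifies the derivative bound |∂u ψh(u) + 1| ≤ (2λ/π) h/u.

```python
import numpy as np

def psi_h(u, lam, h):
    # log-characteristic function of h^{-1} J_h for J ~ TS_1(lam) (beta = 1)
    a = np.abs(u)
    return (lam * h / np.pi) * np.log1p(a**2 / (lam * h) ** 2) \
        - (2.0 * a / np.pi) * np.arctan(a / (lam * h))

def f_h(y, lam, h, U=100.0, m=4001):
    # Fourier inversion f_h(y) = (1/pi) int_0^U cos(uy) e^{psi_h(u)} du (even integrand)
    u = np.linspace(0.0, U, m)
    phi = np.exp(psi_h(u, lam, h))
    return np.trapz(np.cos(np.outer(y, u)) * phi, u, axis=1) / np.pi

lam = 1.0
y = np.linspace(-30.0, 30.0, 1201)
cauchy = 1.0 / (np.pi * (1.0 + y**2))        # phi_1, the standard Cauchy density

def l1_dist(h):
    return np.trapz(np.abs(f_h(y, lam, h) - cauchy), y)

d_big, d_small = l1_dist(0.5), l1_dist(0.01)  # the L^1 distance shrinks as h -> 0

# |d/du psi_h(u) + 1| = (2/pi)(pi/2 - arctan(u/(lam*h))) <= (2*lam/pi) * h/u
h0 = 0.05
u0 = np.linspace(0.1, 50.0, 500)
gap = np.abs(1.0 - (2.0 / np.pi) * np.arctan(u0 / (lam * h0)))
```

The derivative bound is exact here because π/2 − arctan y = arctan(1/y) ≤ 1/y for y > 0, which is the content of (2.9).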
Next we consider β ∈ (1, 2) with r = 0; we may control r ≥ 0 independently of the case β = 1. Substituting the expression (2.1), we see that ∂u ψh(u) + βu^{β−1} equals the sum of three terms ∆h,k(u) (k = 1, 2, 3), where
∆h,1(u) := β { cos(βπ/2) }^{−1} [ cos(βπ/2) − cos( β arctan( u/(λh^{1/β}) ) ) ] u / ( λ²h^{2/β} + u² )^{1−β/2},
∆h,2(u) := β u^{β−1} { 1 − ( u²/(λ²h^{2/β} + u²) )^{1−β/2} },
and ∆h,3(u) satisfies |∆h,3(u)| ≲ h^{1/β} ( λ²h^{2/β} + u² )^{β/2−1} ≲ h^{1/β} u^{β−2}. By the mean-value theorem together with (2.9), we derive |∆h,1(u)| ≲ ( u / u^{2(1−β/2)} ) · ( h^{−1/β} u )^{−1} = h^{1/β} u^{β−2}. Hence, for β > 1,
∫_{(0,∞)} ϕ0(u) ( |∆h,1(u)| + |∆h,3(u)| ) du ≲ h^{1/β} ∫_{(0,∞)} e^{−u^β} u^{β−2} du ≲ h^{1/β}.
(0,∞)
Further, since ∆h,2 ≥ 0 and ϕ0 ≤ 1, we have
∫_{(0,∞)} ϕ0(u) |∆h,2(u)| du ≤ ∫_{(0,∞)} ∆h,2(u) du
= ∫_{(0,∞)} β ( u^{β−1} − u ( λ²h^{2/β} + u² )^{β/2−1} ) du
= − [ ( u² + λ²h^{2/β} )^{β/2} − ( u² )^{β/2} ]_{0+}^{∞}
= − [ ( βλ²h^{2/β}/2 ) ∫_0^1 ( u² + sλ²h^{2/β} )^{β/2−1} ds ]_{0+}^{∞}
= (β/2) λ^β h ∫_0^1 s^{β/2−1} ds ≲ h.
Combining these estimates yields that
∫_{(0,∞)} ϕ0(u) |∂u ψh(u) + βu^{β−1}| du ≲ h^{1∧(1/β)} = h^{1/β}.
Again Lemma 2.5 concludes that √n ∫ |fh(y) − φβ(y)| dy ≲ ( nh^{2/β} )^{1/2} → 0 if nh^{2/β} → 0, which is automatic under (1.3).
Example 2.8 (Symmetric generalized hyperbolic Lévy process). The symmetric generalized hyperbolic
distribution [4], denoted by GH(λ, η, ζ), is infinitely divisible with the characteristic function
u ↦ ( η²/(η² + u²) )^{λ/2} Kλ( ζ √(η² + u²) ) / Kλ(ηζ),
where Kλ denotes the modified Bessel function of the third kind with index λ ∈ R. In this example,
we will make extensive use of several exact and/or asymptotic properties of Kλ , without notice in most
places; we refer to [1, Chapter 9] for details. If L(J1 ) = GH(λ, η, ζ), then for each u ∈ R
E( e^{iu h^{−1} Jh} ) = ( (ηh)²/((ηh)² + u²) )^{λh/2} ( Kλ( (ζ/h) √((ηh)² + u²) ) / Kλ(ηζ) )^h → exp(−ζ|u|),  h → 0,
showing that J is locally Cauchy (for ζ = 1). In the sequel we set L(J1 ) = GH(λ, η, 1). By [46] we know
that the Lévy measure of GH(λ, η, 1) admits a density such that
z ↦ ( 1/(π|z|²) ) ( 1 + (π/2)(λ + 1/2) |z| + o(|z|) ),  |z| → 0.
Hence Assumption 2.1(2) fails to hold, since β = δ = 1 here (except for the case λ = −1/2, corresponding to a symmetric normal inverse Gaussian Lévy process [5]). We will observe that J instead meets Assumption 2.4 for any (λ, η) ∈ R × (0, ∞). In this case, E(|J1|^q) < ∞ for any q > 0.
Direct computations give
(2.10)
∂u ψh(u) + 1 = 1 − ( u/√((ηh)² + u²) ) ( Kλ+1/Kλ )( (1/h) √((ηh)² + u²) ).
The right-hand side is essentially bounded, hence in particular (2.7) holds.
We will verify (2.8) with r = 0. Given (2.10), we see that ∫_{(0,∞)} ϕ0(u) |∂u ψh(u) + 1| du ≤ I′h + I″h, where
I′h := ∫_{(0,∞)} ϕ0(u) ( 1 − u/√((ηh)² + u²) ) du,
I″h := ∫_{(0,∞)} ϕ0(u) ( u/√((ηh)² + u²) ) | ( Kλ+1/Kλ )( (1/h) √((ηh)² + u²) ) − 1 | du.
For I′h, we divide the domain of integration into (0, 1] and (1, ∞) and then derive the following estimates.
• The (0, 1]-part can be bounded by
∫_{(0,1]} ( 1 − u/√((ηh)² + u²) ) du = [ u − √((ηh)² + u²) ]_{0+}^{1} = 1 + ηh − √((ηh)² + 1) = 2ηh / ( 1 + ηh + √((ηh)² + 1) ) ≲ h.
• The (1, ∞)-part equals
∫_{(1,∞)} ϕ0(u) (ηh)² / ( √((ηh)² + u²) ( u + √((ηh)² + u²) ) ) du ≲ h² ∫_{(1,∞)} ϕ0(u) du ≲ h².
Hence we obtain I′h ≲ h. As for I″h, we first make the change of variables u = vh:
I″h = ∫_{(0,∞)} ϕ0(vh) ( vh/√(η² + v²) ) | ( Kλ+1/Kλ )( √(η² + v²) ) − 1 | dv.
Just like the case of I′h, we look at the (0, 1]-part and the (1, ∞)-part separately.
• Since we are supposing that η > 0, the (0, 1]-part trivially equals O(h).
• The (1, ∞)-part is somewhat more delicate. It can be written as
I″h,1 := ∫_{(1,∞)} ϕ0(vh) ( vh/√(η² + v²) ) | (λ + 1/2)/√(η² + v²) + σ(v) | dv,
where
σ(v) := ( Kλ+1/Kλ )( √(η² + v²) ) − 1 − (λ + 1/2)/√(η² + v²),
which satisfies the property sup_{v≥1} (η² + v²) |σ(v)| < ∞. We then observe that
I″h,1 ≤ h |λ + 1/2| ∫_{(1,∞)} ϕ0(vh) ( v/(η² + v²) ) dv + h ∫_{(1,∞)} ϕ0(vh) ( v/√(η² + v²) ) |σ(v)| dv
≲ h ( |λ + 1/2| ∫_{(1,∞)} ϕ0(vh) ( v/(η² + v²) ) dv + ∫_{(1,∞)} v^{−2} dv )
≲ h ( |λ + 1/2| ∫_{(1,∞)} ϕ0(vh) ( v/(η² + v²) ) dv + 1 ).
Further, using integration by parts and a change of variables we derive
∫_{(1,∞)} ϕ0(vh) ( v/(η² + v²) ) dv = −(1/2) e^{−h} log(η² + 1) + (h/2) ∫_{(1,∞)} e^{−vh} log(η² + v²) dv
≲ 1 + ∫_{(h,∞)} e^{−x} log( η² + (x/h)² ) dx ≲ 1 + log(1/h).
Summarizing the above computations, we conclude that
∫_{(0,∞)} ϕ0(u) |∂u ψh(u) + 1| du ≲ { h log(1/h)  (λ ≠ −1/2);  h  (λ = −1/2) },
verifying (2.8) with r = 0. By Lemma 2.5 we obtain (now aν = 1)
∫ |fh(y) − φβ(y)| dy ≲ h log(1/h) ≲ h^{a′}
for any a′ ∈ (0, 1).
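The Bessel-ratio expansion underlying σ(v), namely Kλ+1(x)/Kλ(x) = 1 + (λ + 1/2)/x + O(x^{−2}), can be spot-checked with SciPy; λ = 0.3 and the grid below are arbitrary choices for illustration:

```python
import numpy as np
from scipy.special import kv  # modified Bessel function of the third kind K_nu

lam = 0.3
x = np.linspace(2.0, 50.0, 200)
sigma = kv(lam + 1.0, x) / kv(lam, x) - 1.0 - (lam + 0.5) / x
worst = np.max(x**2 * np.abs(sigma))  # stays bounded, i.e. sigma(x) = O(x^{-2})
print(worst)
```

This mirrors the property sup_{v≥1} (η² + v²)|σ(v)| < ∞ used above.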
2.2. Locally stable stochastic differential equation. Now let us recall the underlying SDE model (1.1). Denote by Θ̄ the closure of Θ = Θα × Θγ.
Assumption 2.9 (Regularity of the coefficients).
(1) The functions a(·, α0) and c(·, γ0) are globally Lipschitz and of class C²(R), and c(x, γ) > 0 for every (x, γ).
(2) a(x, ·) ∈ C³(Θα) and c(x, ·) ∈ C³(Θγ) for each x ∈ R.
(3) sup_{θ∈Θ̄} ( max_{0≤k≤3} max_{0≤l≤2} ( |∂α^k ∂x^l a(x, α)| + |∂γ^k ∂x^l c(x, γ)| ) + c^{−1}(x, γ) ) ≲ 1 + |x|^C.
The standard theory (for example [23, III §2c.]) ensures that the SDE admits a unique strong solution
as a functional of X0 and the Poisson random measure driving J; in particular, each Xt is Ft -measurable.
Assumption 2.10 (Identifiability). The random functions t 7→ (a(Xt , α), c(Xt , γ)) and t 7→ (a(Xt , α0 ), c(Xt , γ0 ))
on [0, T ] a.s. coincide if and only if θ = θ0 .
3. Stable quasi-likelihood estimation
3.1. Heuristic for construction. To motivate our quasi-likelihood, we here present a formal heuristic argument. In what follows we abbreviate ∫_{tj−1}^{tj} as ∫_j. For the moment, we write Pθ for the image measure of X given by (1.1). In view of the Euler approximation under Pθ,
X_{tj} = X_{tj−1} + ∫_j a(Xs, α) ds + ∫_j c(Xs−, γ) dJs ≈ X_{tj−1} + a_{j−1}(α) h + c_{j−1}(γ) ∆jJ,
from which we may expect that
ε_j(θ) = ε_{n,j}(θ) := ( ∆jX − h a_{j−1}(α) ) / ( h^{1/β} c_{j−1}(γ) ) ≈ h^{−1/β} ∆jJ
in an appropriate sense. It follows from the locally stable property (1.5) that for each n the random variables ε_1(θ), . . . , ε_n(θ) will be approximately i.i.d. with common distribution Sβ.
Now assume that the process X admits a (time-homogeneous) transition Lebesgue density under Pθ, say ph(x, y; θ) dy = Pθ(Xh ∈ dy | X0 = x), and let E^θ_{j−1} denote the expectation operator under Pθ conditional on F_{tj−1}. Then, we may consider the following twofold approximation of the conditional distribution L(X_{tj} | X_{tj−1}):
ph(X_{tj−1}, X_{tj}; θ) = (1/2π) ∫ exp(−iuX_{tj}) E^θ_{j−1}{ exp(iuX_{tj}) } du
≈ (1/2π) ∫ exp(−iuX_{tj}) E^θ_{j−1}[ exp{ iu( X_{tj−1} + a_{j−1}(α)h + c_{j−1}(γ)∆jJ ) } ] du   (Euler approximation)
= (1/2π) ∫ exp{ −iu( ∆jX − a_{j−1}(α)h ) } ϕh( c_{j−1}(γ) h^{1/β} u ) du
= ( c_{j−1}(γ) h^{1/β} )^{−1} (1/2π) ∫ exp{ −iv ε_j(θ) } ϕh(v) dv
= ( c_{j−1}(γ) h^{1/β} )^{−1} fh( ε_j(θ) )
≈ ( c_{j−1}(γ) h^{1/β} )^{−1} φβ( ε_j(θ) )   (locally stable approximation).
This formal observation suggests to estimate θ0 by a maximizer of the random function
(3.1)
Hn(θ) := Σ_{j=1}^{n} log{ ( c_{j−1}(γ) h^{1/β} )^{−1} φβ( ε_j(θ) ) },
which we call the stable quasi-likelihood. We then define the stable quasi-maximum likelihood estimator (SQMLE) by any element θ̂n = (α̂n, γ̂n) such that
(3.2)
θ̂n ∈ argmax_{θ∈Θ̄} Hn(θ) = argmax_{θ∈Θ̄} Σ_{j=1}^{n} ( −log c_{j−1}(γ) + log φβ( ε_j(θ) ) ).
Since we are assuming that Θ̄ is compact, there always exists at least one such θ̂n. The heuristic argument for the SQMLE will be verified in Section 3.2. The SQMLE is the non-Gaussian-stable counterpart of the Gaussian quasi-likelihood previously studied by [27] and [36] for diffusions and Lévy driven SDEs, respectively.
Remark 3.1. It may happen, though very rarely, that the density fh of L(h−1/β Jh ) is explicit for each
h > 0. The normal-inverse Gaussian J [5], which we will use for simulations in Section 4.1, is such an
example. In that case, the approximation
ph(X_{tj−1}, X_{tj}; θ) ≈ ( c_{j−1}(γ) h^{1/β} )^{−1} fh( ε_j(θ) )
may result in a better quasi-likelihood since it precisely incorporates information of the driving noise.
Nevertheless, such an “exact L(h^{−1/β}Jh)” consideration obviously diminishes the target class of J considerably, and going in this direction entails individual case studies.
3.2. Main result: Asymptotic mixed normality of SQMLE. For F-measurable random variables µ = µ(ω) ∈ R^p and an a.s. nonnegative definite Σ = Σ(ω) ∈ R^p ⊗ R^p, we denote by MNp(µ, Σ) the p-dimensional mixed normal distribution corresponding to the characteristic function
v ↦ E[ exp( iµ · v − (1/2) v · Σv ) ].
That is to say, when Y ∼ MNp(µ, Σ), Y is defined on an extension of the original probability space (Ω, F, P) and is equivalent in distribution to µ + Σ^{1/2} Z for Z ∼ Np(0, Ip) independent of F, where Ip denotes the p-dimensional identity matrix. Such an (orthogonal) extension of the underlying probability space is always possible.
We introduce the two bounded smooth functions
gβ(y) := ∂y log φβ(y) = ( ∂φβ/φβ )(y),   kβ(y) := 1 + y gβ(y).
We see that ∫ gβ(y) φβ(y) dy = ∫ kβ(y) φβ(y) dy = 0, and that ∫ gβ(y) fh(y) dy = 0 as soon as ∫ |gβ(y)| fh(y) dy < ∞, because fh is symmetric. We also write
Cα(β) = ∫ gβ²(y) φβ(y) dy,   Cγ(β) = ∫ kβ²(y) φβ(y) dy,
ΣT,α(θ0) = (1/T) ∫_0^T {∂α a(Xt, α0)}^{⊗2} / c²(Xt, γ0) dt,   ΣT,γ(γ0) = (1/T) ∫_0^T {∂γ c(Xt, γ0)}^{⊗2} / c²(Xt, γ0) dt.
The asymptotic behavior of the SQMLE defined through (3.1) and (3.2) is given in the next theorem,
which is the main result of this paper.
Theorem 3.2. Suppose that Assumptions 2.1 with β ∈ [1, 2), 2.9, and 2.10 hold. Then we have
(3.3)
( √n h^{1−1/β} (α̂n − α0), √n (γ̂n − γ0) ) →^L MNp( 0, ΓT(θ0; β)^{−1} ),
where
ΓT(θ0; β) := diag( Cα(β) ΣT,α(θ0), Cγ(β) ΣT,γ(γ0) ).
In Section 3.3, we will deduce the large-time counterpart of Theorem 3.2 under ergodicity. In that case the asymptotic distribution is not mixed normal but normal, with the asymptotic covariance matrix taking a completely analogous form.
Below we list some immediate consequences of Theorem 3.2 and some related remarks worth mentioning.
(1) The asymptotic distribution of θ̂n is normal if both x ↦ ∂γ c(x, γ0)/c(x, γ0) and x ↦ ∂α a(x, α0)/c(x, γ0) are non-random, that is, constant in x (so that ΣT,α(θ0) and ΣT,γ(γ0) are deterministic); this is the case if X is a Lévy process.
(2) The estimators α̂n and γ̂n are asymptotically orthogonal, whereas not necessarily independent due to possible non-Gaussianity in the limit.
(3) For β ∈ (1, 2), we can rewrite (3.3) as (recall (1.3))
( n^{1/β−1/2} (α̂n − α0), √n (γ̂n − γ0) ) →^L MNp( 0, diag( T^{−2(1−1/β)} {Cα(β) ΣT,α(θ0)}^{−1}, {Cγ(β) ΣT,γ(γ0)}^{−1} ) ).
If fluctuation of X is virtually stable in the sense that both of the random time averages ΣT,α (θ0 )
and ΣT,γ (γ0 ) do not vary so much with the terminal sampling time T , then, due to the factor
“T −2(1−1/β) ”, the asymptotic covariance matrix of α̂n would tend to get smaller (resp. larger) in
magnitude for a larger (resp. smaller) T . This feature with respect to T is non-asymptotic.
(4) Of special interest is the locally Cauchy case (β = 1), where Hn is fully explicit:
Hn(θ) = −Σ_{j=1}^{n} ( log(πh) + log c_{j−1}(γ) + log( 1 + ε_j(θ)² ) ).
In this case,
( √n (α̂n − α0), √n (γ̂n − γ0) ) →^L MNp( 0, diag( ( (1/(2T)) ∫_0^T {∂α a(Xt, α0)}^{⊗2}/c(Xt, γ0)² dt )^{−1}, ( (1/(2T)) ∫_0^T {∂γ c(Xt, γ0)}^{⊗2}/c(Xt, γ0)² dt )^{−1} ) ).
This formally extends the classical √n-asymptotic normality of the maximum-likelihood estimator in the i.i.d. location-scale Cauchy model. The Cauchy quasi-likelihood has also been investigated in the robust-regression literature; see [41] and [42] for breakdown-point results in some relevant models. It would be interesting to study their SDE-model counterparts.
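To make item (4) concrete, the following sketch (in Python rather than the paper's R; the linear drift a(x, α) = αx, the constant scale c(x, γ) = γ, and all numeric values are toy choices, not taken from the paper) simulates an Euler path driven by standard Cauchy increments and maximizes the explicit Hn:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)
n, h = 2000, 0.002                 # bounded-domain setting: T = n*h = 4
alpha0, gamma0 = -1.0, 1.5         # toy true values

# Euler path of dX = alpha*X dt + gamma dJ with beta = 1: Delta_j J ~ h * standard Cauchy
x = np.empty(n + 1)
x[0] = 0.0
dJ = h * rng.standard_cauchy(n)
for j in range(n):
    x[j + 1] = x[j] + alpha0 * x[j] * h + gamma0 * dJ[j]

def neg_Hn(theta):
    # negative of the explicit locally Cauchy quasi-likelihood of item (4)
    alpha, gamma = theta
    if gamma <= 0.0:
        return np.inf
    eps = (np.diff(x) - alpha * x[:-1] * h) / (h * gamma)  # h^{1/beta} = h for beta = 1
    return np.sum(np.log(np.pi * h) + np.log(gamma) + np.log1p(eps**2))

fit = minimize(neg_Hn, x0=np.array([0.0, 1.0]), method="Nelder-Mead")
alpha_hat, gamma_hat = fit.x
print(alpha_hat, gamma_hat)
```

As a sanity check, Hn should be larger at the truth than at a grossly misspecified scale, and with n = 2000 both estimates should land near the true values.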
Remark 3.3. Based on the general criterion for stable convergence, we could deduce a slightly more general statement in which the SQMLE has a non-trivial asymptotic bias. In general, however, it is impossible to make an explicit bias correction in a unified manner without specific information on fh (hence on the Lévy measure ν). Even when we have a full parametric form of ν, it may contain a parameter which cannot be consistently estimated unless Tn → ∞; see [38] for specific examples.
Remark 3.4. The asymptotic efficiency in the sense of Hájek-Le Cam-Jeganathan is of primary theoretical importance (see [51]). Compared with the diffusion case studied in [14] and [15], asymptotic-efficiency phenomena for the Lévy driven SDE (1.1) under the sampling scheme (1.4) are less well understood. Nevertheless, for the classical local asymptotic normality property when X is a Lévy process, one can consult [38] for several explicit case studies, and [18] for general locally stable Lévy processes. Moreover, [7] and the recent preprint [8] proved the local asymptotic mixed normality property for the unknown parameters, especially when c(x, γ) is constant and the Lévy measure ν has a bounded support with a stable-like behavior near the origin. Importantly, the model settings of [7] and [8] are covered by ours (Example 2.6), so that the asymptotic efficiency of our SQMLE is then assured. In view of their result, and just like the fact that the Gaussian QMLE is asymptotically efficient for diffusions, it seems quite promising that the proposed SQMLE is asymptotically efficient for the general class of SDEs (1.1) driven by a locally β-stable Lévy process.
Here is a variant of Theorem 3.2.
Theorem 3.5. Suppose that Assumption 2.4 holds with β ∈ [1, 2) and
√n ( ψ̄(h) ∨ h^{aν} )^{β/(β+r)} → 0
for the constant aν given in Lemma 2.2(1). Suppose also that Assumptions 2.9 and 2.10 hold, and that
∫_{|z|>1} |z|^q ν(dz) < ∞
for every q > 0. Then we have (3.3).
To state a corollary to Theorems 3.2 and 3.5, we introduce the following statistics:
Σ̂T,α,n := (1/n) Σ_{j=1}^{n} {∂α a_{j−1}(α̂n)}^{⊗2} / c²_{j−1}(γ̂n),   Σ̂T,γ,n := (1/n) Σ_{j=1}^{n} {∂γ c_{j−1}(γ̂n)}^{⊗2} / c²_{j−1}(γ̂n).
It turns out in the proof that the normalized quasi-score ( ( √n h^{1−1/β} )^{−1} ∂α Hn(θ0), n^{−1/2} ∂γ Hn(θ0) ) F-stably converges in distribution (Section 6.4.2), from which the Studentization via the continuous-mapping theorem is straightforward:
Corollary 3.6. Under the assumptions of either Theorem 3.2 or Theorem 3.5, we have
(3.4)
( ( Cα(β) Σ̂T,α,n )^{1/2} √n h^{1−1/β} (α̂n − α0), ( Cγ(β) Σ̂T,γ,n )^{1/2} √n (γ̂n − γ0) ) →^L Np(0, Ip).
Table 1 summarizes the rates of convergence of the β-stable maximum quasi-likelihood estimators with
β ≤ 2, when the target SDE model is
(3.5)
dXt = a(Xt , α)dt + c(Xt− , γ)dZt
for a driving Lévy process Z with the correctly specified coefficient (a, c); again, note that the Gaussian
QMLE requires Tn → ∞, which is not necessary for the SQMLE with β < 2. We refer to [35] for a handy
statistic for testing the case (i) against the case (ii) based on the Gaussian QMLE.
Quasi-likelihood            Driving Lévy process Z           Rate for α̂n        Rate for γ̂n   Ref.
(i) Gauss                   Wiener process                   √(nhn)             √n            [27]
(ii) Gauss                  Lévy process with jumps          √(nhn)             √(nhn)        [33], [36]
(iii) Non-Gaussian stable   Locally β-stable Lévy process    √n hn^{1−1/β}      √n            this paper
Table 1. Comparison of the Gaussian (β = 2) and non-Gaussian stable (β ∈ [1, 2))
QMLE for the SDE (3.5), where the coefficient (a, c) is correctly specified: Case (iii) is
the contribution of this paper.
Remark 3.7. We have been focusing on β ≥ 1. For β ∈ (0, 1), direct use of the stable quasi-likelihood based on the mere Euler scheme would be inadequate and would spoil the proofs in Section 6, because the small-time variation of X due to the noise term is then dominated by that of the drift coefficient a(x, α). It would be necessary to take the drift structure into account more precisely, as in the trajectory-fitting estimator studied in [31].
3.3. Ergodic case under long-time asymptotics. In this section, instead of the bounded-domain asymptotics (1.3) we consider the sampling design
(3.6)
Tn → ∞ and √n h^{2−1/β} → 0,
which still implies that √n h^{1−1/β} → ∞ when β ∈ [1, 2); for example, it suffices to have Tn → ∞ and nh² → 0. Theorem 3.11 below shows that under the ergodicity of X the asymptotic normality of the SQMLE (3.2) holds. The logic behind the construction of the stable quasi-likelihood is completely the same as in Section 3.1.
We will adopt Assumption 2.4 for the structural assumptions on J, and impose Assumption 2.9 without
any change.
Assumption 3.8 (Stability).
(1) There exists a unique invariant measure π0 such that
(3.7)
(1/T) ∫_0^T g(Xt) dt →^p ∫ g(x) π0(dx),  T → ∞,
for every measurable function g of at most polynomial growth.
(2) sup_{t∈R+} E(|Xt|^q) < ∞ for every q > 0.
The property (3.7) follows from the convergence kPt (x, ·) − π0 (·)kT V → 0 as t → ∞ for each x ∈ R,
where Pt (x, dy) denotes the transition function of X under the true measure and kµkT V the total variation
norm of a signed measure µ. The next lemma, which directly follows from [36, Proposition 5.4], provides
a set of sufficient conditions for Assumption 3.8.
Lemma 3.9. Let X be given by (1.1) and suppose that ν({z ≠ 0 : |z| ≤ ε}) > 0 for every ε > 0. Further, assume the following conditions.
(1) Both a(·, α0) and c(·, γ0) are of class C¹(R) and globally Lipschitz, and c is bounded.
(2) c(x, γ0) ≠ 0 for every x.
(3) E(J1) = 0 and either one of the following conditions holds:
• E(|X0|^q) < ∞ and ∫_{|z|>1} |z|^q ν(dz) < ∞ for every q > 0, and
lim sup_{|x|→∞} a(x, α0)/x < 0;
• E(e^{q|X0|}) < ∞ and ∫_{|z|>1} e^{q|z|} ν(dz) < ∞ for some q > 0, and
lim sup_{|x|→∞} sgn(x) a(x, α0) < 0.
Then Assumption 3.8 holds.
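For instance, the drift a(x, α) = α1 x + α2/(1 + x²) used in the simulations of Section 4 with α1 < 0 satisfies the first drift condition, since a(x, α0)/x → α1 < 0. A two-line numerical check, with the hypothetical values α1 = −1 and α2 = 1:

```python
import numpy as np

a = lambda x: -1.0 * x + 1.0 / (1.0 + x**2)   # a(x, alpha) with alpha = (-1, 1)
x = np.concatenate([np.linspace(-100.0, -10.0, 200), np.linspace(10.0, 100.0, 200)])
worst_ratio = np.max(a(x) / x)                # limsup_{|x| -> inf} a(x)/x < 0
print(worst_ratio)
```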
We also need a variant of Assumption 2.10.
Assumption 3.10 (Model identifiability). The functions x ↦ (a(x, α), c(x, γ)) and x ↦ (a(x, α0), c(x, γ0)) coincide π0-a.e. if and only if θ = θ0.
Theorem 3.11. Suppose that Assumption 2.4 holds with
√n ( ψ̄(h) ∨ h^{aν} )^{β/(β+r)} → 0
for the constant aν given in Lemma 2.2(1). Suppose also that Assumptions 2.9, 3.8, and 3.10 hold. Then, under (3.6) we have
( √n h^{1−1/β} (α̂n − α0), √n (γ̂n − γ0) ) →^L Np( 0, diag( Vα(θ0; β)^{−1}, Vγ(θ0; β)^{−1} ) ),
where
Vα(θ0; β) := Cα(β) ∫ {∂α a(x, α0)}^{⊗2} / c(x, γ0)² π0(dx),
Vγ(θ0; β) := Cγ(β) ∫ {∂γ c(x, γ0)}^{⊗2} / c(x, γ0)² π0(dx).
The proof of Theorem 3.11 will be sketched in Section 6.7. Obviously, Studentization is possible just as in Corollary 3.6. Again we remark that Assumption 2.4 could be replaced with any other condition implying the convergences (2.3) and (2.5).
Remark 3.12. We have Cα (2) = 1 and Cγ (2) = 2, hence taking β = 2 in the expressions of Vα (θ0 )
and Vγ (θ0 ) formally results in the asymptotic Fisher information matrices for the diffusion case [27] (also
[50]).
4. Numerical experiments
For simulations, we use the nonlinear data-generating SDE
dXt = ( α1 Xt + α2/(1 + Xt²) ) dt + exp( γ1 cos(Xt) + γ2 sin(Xt) ) dJt,  X0 = 0,
with θ = (α1, α2, γ1, γ2) and J being either:
• the normal inverse Gaussian Lévy process (Example 2.8); or
• the 1.5-stable Lévy process (Example 2.6).
The setting is a special case of a(x, α) = α1 a1(x) + α2 a2(x) and c(x, γ) = exp{ γ1 c1(x) + γ2 c2(x) }, for which the asymptotic covariances of √n h^{1−1/β} (α̂k,n − αk,0) and √n (γ̂l,n − γl,0) are given by the inverses of
Cα(β) (1/T) ∫_0^T a_k²(Xt)/c²(Xt, γ0) dt  and  Cγ(β) (1/T) ∫_0^T c_l²(Xt) dt,
respectively.
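The corresponding plug-in Studentizing matrices of Corollary 3.6 then reduce to empirical averages of the basis functions. A minimal sketch, with synthetic inputs standing in for an observed path (the grid and γ values below are arbitrary):

```python
import numpy as np

def studentizers(x, gamma):
    # Sigma_hat_{T,alpha,n} and Sigma_hat_{T,gamma,n} for
    # a(x, alpha) = alpha1*x + alpha2/(1+x^2), c(x, gamma) = exp(g1*cos x + g2*sin x)
    c2 = np.exp(2.0 * (gamma[0] * np.cos(x) + gamma[1] * np.sin(x)))
    A = np.stack([x, 1.0 / (1.0 + x**2)])     # (a_1, a_2)(x)
    G = np.stack([np.cos(x), np.sin(x)])      # (c_1, c_2)(x) = (d_gamma c)/c
    Sig_alpha = (A / c2) @ A.T / x.size
    Sig_gamma = G @ G.T / x.size
    return Sig_alpha, Sig_gamma

Sa, Sg = studentizers(np.linspace(-3.0, 3.0, 500), (0.0, 0.0))
print(Sg)
```

Since cos² + sin² = 1, the trace of Σ̂T,γ,n equals 1 exactly for this model, which gives a handy sanity check.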
4.1. Normal inverse Gaussian driver. Let J be a normal inverse Gaussian (NIG) Lévy process such that
L(Jt) = NIG(η, 0, t, 0),
where η > 0 may be unknown. This is a special case of the generalized hyperbolic Lévy process considered in Example 2.8 with λ = −1/2. The numerical results below show that the SQMLE works effectively. We set η = 5 or 10; a bigger η leads to a lighter tail of ∆jJ, hence a seemingly more “diffusion-like” sample-path behavior. Also, we set the terminal time T = 1 or 5. For each pair (η, T), we proceed as follows.
• First we apply the Euler scheme to the true model with discretization step size ∆ := T/(3000 × 50).
• Then we thin the generated single path (Xk∆)_{k=0}^{3000×50} to pick up (Xjh)_{j=0}^{n} with h = ∆ × 50 × 6, ∆ × 50 × 3 and ∆ × 50 for n = 500, 1000 and 3000, respectively.
Here, the number “50” of generated points over each sub-period (tj−1, tj] reflects that X evolves virtually continuously in time, though this is not observable. We independently repeat the above procedure L = 1000 times to get 1000 independent estimates θ̂n = (α̂n, γ̂n), based on which boxplots and histograms for the Studentized versions are computed (Corollary 3.6). We used the function optim in R [44], and in each optimization for l = 1, . . . , L we generated independent uniform random numbers Unif(αk,0 − 10, αk,0 + 10) and Unif(γl,0 − 10, γl,0 + 10) as initial values for searching αk and γl, respectively.
We conduct the following two cases:
(i) we know a priori that α2,0 = γ2,0 = 0, and the estimation target is θ0 = (α1,0, γ1,0) = (−1, 1.5);
(ii) the estimation target is θ0 = (α1,0, α2,0, γ1,0, γ2,0) = (−1, 1, 1.5, 0.5).
From the obtained simulation results, we observed the following.
• Figures 1 and 2: case (i).
– The boxplots show a clear tendency that the estimation accuracy for each T gets better for larger n.
– The histograms show overall good standard normal approximations; the line in red is the target standard normal density. It is observed that the estimation performance of γ̂n gets worse as the nuisance parameter η increases from 5 to 10. In particular, for η = 10 we can see a downward bias of the Studentized γ̂n, although it disappears as n increases.
Overall, we see very good finite-sample performance of α̂n, while that of γ̂n may be affected to some extent by the value of (T, η). As in the estimation of the diffusion coefficient of a diffusion-type process, for better estimation of γ the value of T, equivalently of h, should not be too large.
• Figures 3, 4 and 5: case (ii).
– The general tendencies are the same as in the previous case: for each T, the estimation accuracy gets better for larger n, while the gain in accuracy for larger n is somewhat smaller compared with the previous case.
– The histograms show that, compared with the previous case, the Studentized estimators have heavier tails, and the asymptotic bias associated with γ̂n remains severe, especially for (T, η) = (5, 10) (Figure 5), unless n is large enough.
Figure 1. NIG-J example. Boxplots of 1000 independent estimates α̂n (green) and γ̂n
(blue) for n = 500, 1000, 3000; (T, η) = (1, 5) (upper left), (T, η) = (1, 10) (upper right),
(T, η) = (5, 5) (lower left), and (T, η) = (5, 10) (lower right).
Figure 2. NIG-J example. Histograms of 1000 independent Studentized estimates of α
(green) and γ (blue) for n = 500, 1000, 3000; (T, η) = (1, 5) (upper left 2 × 3 submatrix),
(T, η) = (1, 10) (upper right 2 × 3 submatrix), (T, η) = (5, 5) (lower left 2 × 3 submatrix),
and (T, η) = (5, 10) (lower right 2 × 3 submatrix).
Figure 3. NIG-J example. Boxplots of 1000 independent estimates α̂1,n (green), α̂2,n
(blue), γ̂1,n (pink) and γ̂2,n (red) for n = 500, 1000, 3000; (T, η) = (1, 5) (top), (T, η) =
(1, 10) (second from the top), (T, η) = (5, 5) (second from the bottom), and (T, η) =
(5, 10) (bottom).
Figure 4. NIG-J example. Histograms of 1000 independent Studentized estimates of
α1 (green), α2 (blue), γ1 (cream) and γ2 (red) for n = 500, 1000, 3000; (T, η) = (1, 5)
(left 4 × 3 submatrix) and (T, η) = (1, 10) (right 4 × 3 submatrix).
4.2. Genuine β-stable driver. Next we set L(J1) = Sβ with β = 1.5. Given a realization (x_{tj})_{j=0}^{n} of (X_{tj})_{j=0}^{n}, we have to repeatedly evaluate
(α, γ) ↦ Σ_{j=1}^{n} ( −log[ h^{1/β} c(x_{tj−1}, γ) ] + log φβ( ( x_{tj} − x_{tj−1} − a(x_{tj−1}, α) h ) / ( h^{1/β} c(x_{tj−1}, γ) ) ) ).
The stable density φβ is no longer explicit, but we can resort to numerical integration: here we used the function dstable in the R package stabledist. As in the previous example, we give simulation results for (pα, pγ) = (1, 1) and (2, 2), using uniformly distributed initial values for the optim search. In order to observe the effect of the terminal-time value T, we conduct the cases T = 5 and T = 10, for n = 100, 200, and 500. For Studentization, we used the values Cα(1.5) = 0.4281 and Cγ(1.5) = 0.9556 borrowed from [40, Table 6].
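In Python, scipy.stats.levy_stable can play the role of R's dstable here: with skewness 0 its standard parameterization has characteristic function exp(−|u|^β), which is exactly the φβ used throughout. A quick sketch (the evaluation points are arbitrary), including the known value φβ(0) = Γ(1 + 1/β)/π as a cross-check:

```python
import numpy as np
from math import gamma, pi
from scipy.stats import levy_stable

beta = 1.5
y = np.array([-2.0, -0.5, 0.0, 0.5, 2.0])
log_phi = levy_stable.logpdf(y, beta, 0.0)   # arguments: (y, alpha=1.5, skew=0)
vals = np.exp(log_phi)
# phi_beta(0) = Gamma(1 + 1/beta)/pi for a symmetric stable with cf exp(-|u|^beta)
print(vals, gamma(1.0 + 1.0 / beta) / pi)
```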
• Figures 6 and 7 show the boxplots and the histograms for pα = pγ = 1 with T = 5 and 10. As expected, we observe much better estimation accuracy compared with the previous NIG-driven case. The figures reveal that the estimation accuracy of α is overall better for larger T, while at the same time a larger h may lead to a more biased α̂n. Unlike the NIG-driven case, there is no severe bias in estimating γ. Somewhat surprisingly, the accuracy of the Studentization, especially for the scale parameters, may be good enough even for much smaller n compared with the NIG-driven case: the standard normality is well achieved even for n = 100.
• Figures 8 and 9 show the results for pα = pγ = 2 with T = 5 or 10. The observed tendencies, including those in comparison with the NIG-driven case, are almost analogous to the case pα = pγ = 1.
In sum, our stable quasi-likelihood works quite well, especially when J is standard 1.5-stable, although too small a T should be avoided for good estimation accuracy of α.
Figure 5. NIG-J example. Histograms of 1000 independent Studentized estimates of
α1 (green), α2 (blue), γ1 (cream) and γ2 (red) for n = 500, 1000, 3000; (T, η) = (5, 5)
(left 4 × 3 submatrix) and (T, η) = (5, 10) (right 4 × 3 submatrix).
Figure 6. S1.5 -J example. Boxplots of 1000 independent estimates α̂n (green) and γ̂n
(blue) for n = 100, 200, 500; T = 5 (left) and T = 10 (right).
5. Proofs of Lemmas 2.2 and 2.5
This section presents the proofs of the L1 -local limit theorems given in Section 2.1.
5.1. Proof of Lemma 2.2. (1) We begin with the proof of (a). By the expression (2.2) we have

ϕ_h(u) = exp( ∫ (cos(uz) − 1) h^{1+1/β} g(h^{1/β}z) dz )
  = ϕ_0(u) exp( ∫ (cos(uz) − 1) ρ(h^{1/β}z) g_{0,β}(z) dz ) =: ϕ_0(u) exp{χ_h(u)}.
Pick a small ε_ρ > 0 such that sup_{|y| ≤ ε_ρ} |ρ(y)| ≤ 1/2. We will make use of the following two different bounds for the function χ_h: on the one hand, we have

|χ_h(u)| ≤ ∫ (1 − cos(uz)) |ρ(h^{1/β}z)| g_{0,β}(z) dz
HIROKI MASUDA
[Figure 7 graphic: two 2 × 3 grids of SQMLE histogram panels (a, g; n = 100, 200, 500).]
Figure 7. S1.5 -J example. Histograms of 1000 independent Studentized estimates of α
(green) and γ (blue) for n = 100, 200, 500; T = 5 (left 2 × 3 submatrix) and T = 10
(right 2 × 3 submatrix).
  = ∫_{|z| ≤ ε_ρ h^{−1/β}} (1 − cos(uz)) g_{0,β}(z) |ρ(h^{1/β}z)| dz + ∫_{|z| > ε_ρ h^{−1/β}} (1 − cos(uz)) g_{0,β}(z) |ρ(h^{1/β}z)| dz
  ≤ (1/2) ∫ (1 − cos(uz)) g_{0,β}(z) dz + 4‖ρ‖_∞ ∫_{ε_ρ h^{−1/β}}^{∞} g_{0,β}(z) dz
  ≤ −(1/2) ∫ (cos(uz) − 1) g_{0,β}(z) dz + Ch = (1/2)|u|^β + Ch;
on the other hand,
∫_{|z| ≤ ε_ρ h^{−1/β}} (1 − cos(uz)) g_{0,β}(z) |ρ(h^{1/β}z)| dz
  ≲ c_ρ h^{δ/β} ( ∫_{|z| ≤ ε_ρ h^{−1/β}, |z| > 1} g_{0,β}(z) |z|^δ dz + ∫_{|z| ≤ ε_ρ h^{−1/β}, |z| ≤ 1} sin²(uz/2) g_{0,β}(z) |z|^δ dz )
  ≲ c_ρ h^{δ/β} ( ∫_1^{ε_ρ h^{−1/β}} z^{−1−β+δ} dz + u² ∫_0^1 z^{1−β+δ} dz )
  ≲ c_ρ h^{(δ/β)∧1} + u² h^{δ/β},
where we used the fact sup_y |sin(y)/y| < ∞ in the second step, so that

|χ_h(u)| ≲ c_ρ h^{(δ/β)∧1} + u² h^{δ/β} + h.
It follows from these estimates for χ_h with the mean-value theorem that for every s < 1 and C ≥ 0 we have

∫_{(0,∞)} (u^{−s} ∨ u^C) |ϕ_h(u) − ϕ_0(u)| du ≲ ∫_{(0,∞)} (u^{−s} ∨ u^C) ϕ_0(u) |exp{χ_h(u)} − 1| du
  ≤ ∫_{(0,∞)} (u^{−s} ∨ u^C) ϕ_0(u) sup_{0 ≤ v ≤ 1} exp(v χ_h(u)) |χ_h(u)| du
  ≲ ∫_{(0,∞)} (u^{−s} ∨ u^C) e^{−|u|^β/2} ( c_ρ h^{(δ/β)∧1} + u² h^{δ/β} + h ) du
  ≲ c_ρ h^{(δ/β)∧1} + h ≲ h^{a_ν}.
This proves the first half of (a). Since sup_{h∈(0,1]} ϕ_h(u) ≲ exp(−C|u|^β) from the above argument, the
existence of the positive smooth density fh follows from the same argument as in the proof of [32, Lemma
4.4(a)]. The latter half is a direct consequence of the Fourier inversion:
sup_y |f_h(y) − φ_β(y)| = sup_y (1/2π) | ∫ e^{−iuy} (ϕ_h(u) − ϕ_0(u)) du | ≲ ∫ |ϕ_h(u) − ϕ_0(u)| du ≲ h^{a_ν}.
This completes the proof of (a).
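The inversion step admits a quick numerical sanity check. The sketch below is an illustration only, not part of the proof: for β = 1 the limit φ_β is the standard Cauchy density, whose characteristic function is ϕ_0(u) = e^{−|u|}, so numerically inverting ϕ_0 should recover φ_1(y) = 1/(π(1 + y²)). The grid width and truncation range are arbitrary choices.

```python
import numpy as np

# Illustrative check of the Fourier-inversion step for beta = 1:
# phi_0(u) = exp(-|u|) is the characteristic function of the standard
# Cauchy density phi_1(y) = 1 / (pi * (1 + y^2)).
def inverted_density(y, umax=50.0, n=400_001):
    u = np.linspace(-umax, umax, n)
    du = u[1] - u[0]
    vals = np.exp(-1j * u * y) * np.exp(-np.abs(u))
    # (1 / 2 pi) * int e^{-iuy} phi_0(u) du, Riemann-sum approximation
    return float(vals.sum().real * du / (2.0 * np.pi))

for y in (0.0, 0.5, 2.0):
    exact = 1.0 / (np.pi * (1.0 + y * y))
    assert abs(inverted_density(y) - exact) < 1e-5
```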
Next we prove (b). Because of the boundedness of ρ, the Lévy density of L(h−1/β Jh ) is bounded by a
constant multiple of g0,β (z). Invoking [48, Theorem 25.3], we see that the tail of fh is bounded by that
Figure 8. S1.5 -J example. Boxplots of 1000 independent estimates α̂1,n (green), α̂2,n
(blue), γ̂1,n (pink) and γ̂2,n (red) for n = 100, 200, 500; T = 5 (top) and 10 (bottom).
of φβ uniformly in h ∈ (0, 1]: for each κ < β,
(5.1)   sup_{h∈(0,1]} sup_{M>0} M^{β−κ} ∫_{|y|>M} |y|^κ f_h(y) dy < ∞.
Then, for any positive sequence b_n ↑ ∞ the quantity ∫ |y|^κ |f_h(y) − φ_β(y)| dy is bounded by the sum of the two terms

∫_{|y| ≥ b_n} |y|^κ f_h(y) dy + ∫_{|y| ≥ b_n} |y|^κ φ_β(y) dy ≲ b_n^{κ−β} → 0

and

sup_y |f_h(y) − φ_β(y)| ∫_{|y| ≤ b_n} |y|^κ dy ≲ b_n^{1+κ} h^{a_ν}.

The convergence b_n^{1+κ} h^{a_ν} → 0 follows on taking any b_n = o(h^{−a_ν/(1+κ)}).
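As a sanity check on this rate bookkeeping (an illustration with arbitrarily chosen exponents, not part of the proof), one can verify numerically that an admissible choice such as b_n = h^{−a_ν/(2(1+κ))}, which is o(h^{−a_ν/(1+κ)}) as h → 0, drives both error terms to zero.

```python
import itertools

# Illustrative exponent check: with b_n = h**(-a/(2*(1+kappa))), which is
# o(h**(-a/(1+kappa))) as h -> 0, both error terms decrease and stay below 1.
def residual_terms(h, beta, kappa, a):
    bn = h ** (-a / (2.0 * (1.0 + kappa)))
    return bn ** (kappa - beta), bn ** (1.0 + kappa) * h ** a

for beta, kappa, a in itertools.product((1.2, 1.8), (0.0, 0.5), (0.3, 1.0)):
    t1_coarse, t2_coarse = residual_terms(1e-2, beta, kappa, a)
    t1_fine, t2_fine = residual_terms(1e-6, beta, kappa, a)
    assert t1_fine < t1_coarse < 1.0
    assert t2_fine < t2_coarse < 1.0
```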
(2) Again pick a positive real sequence bn → ∞. Then
(5.2)   ∫ |f_h(y) − φ_β(y)| dy ≲ ∫_{(b_n,∞)} |f_h(y) − φ_β(y)| dy + ∫_{(0,b_n]} |f_h(y) − φ_β(y)| dy =: δ′_n + δ″_n.
[Figure 9 graphic: two 4 × 3 grids of SQMLE histogram panels (a1, a2, g1, g2; n = 100, 200, 500).]
Figure 9. S1.5 -J example. Histograms of 1000 independent Studentized estimates of
α1 (green), α2 (blue), γ1 (cream) and γ2 (red) for n = 100, 200, 500; T = 5 (left 4 × 3
submatrix) and T = 10 (right 4 × 3 submatrix).
By (5.1) with κ = 0 we have

(5.3)   δ′_n ≲ b_n^{−β}.
Recalling that ψ_h(u) := log ϕ_h(u) and that we are assuming that g ≡ 0 on {|z| > K}, we have ∂_u ϕ_h(u) = ϕ_h(u) ∂_u ψ_h(u) for u > 0. Using the Fourier inversion, the integration by parts, and the fact sup_{y∈R} |sin y|/|y|^r < ∞ for any r ∈ [0,1], we can bound δ″_n as follows:
δ″_n ≲ ∫_{(0,b_n]} | ∫ e^{−iuy} (ϕ_h(u) − ϕ_0(u)) du | dy
  ≲ ∫_{(0,b_n]} | ∫_{(0,∞)} cos(uy) (ϕ_h(u) − ϕ_0(u)) du | dy
  ≲ ∫_{(0,b_n]} (1/y) | ∫_{(0,∞)} sin(uy) (∂_u ϕ_h(u) − ∂_u ϕ_0(u)) du | dy
  ≲ ∫_{(0,b_n]} y^{r−1} ∫_{(0,∞)} u^r |∂_u ϕ_h(u) − ∂_u ϕ_0(u)| du dy
  ≲ b_n^r ∫_{(0,∞)} u^r |∂_u ϕ_h(u) − ∂_u ϕ_0(u)| du
(5.4)   ≲ b_n^r ∫_{(0,∞)} u^r |ϕ_h(u) − ϕ_0(u)| |∂_u ψ_h(u)| du + b_n^r ∫_{(0,∞)} u^r ϕ_0(u) |∂_u ψ_h(u) + βu^{β−1}| du.
Suppose for a moment that

(5.5)   |∂_u ψ_h(u) + βu^{β−1}| ≲ h/u,   u > 0.

Then |∂_u ψ_h(u)| ≲ (1 + u^β)/u and it follows from (5.4) and the statement (1)(a) that

δ″_n ≲ b_n^r ∫_{(0,∞)} u^{r−1} (1 + u^β) |ϕ_h(u) − ϕ_0(u)| du + b_n^r h ∫_{(0,∞)} u^{r−1} ϕ_0(u) du
(5.6)     ≲ b_n^r h^{a_ν} + b_n^r h ≲ b_n^r h^{1∧a_ν}

if r ∈ (0,1]. By (5.3) and (5.6) we obtain

δ_n ≲ b_n^{−β} + b_n^r h^{1∧a_ν}.

Optimizing the upper bound with respect to b_n results in the choice b_n ∼ h^{−(1∧a_ν)/(β+r)}, with which we conclude (2.4). We note that introducing the parameter r > 0 is essential in the above estimates.
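The balancing behind this choice of b_n is easy to confirm numerically. The following sketch (illustrative parameter values only, not part of the proof) checks that b_n = h^{−(1∧a_ν)/(β+r)} equates the two competing terms, both then being of order h^{β(1∧a_ν)/(β+r)}.

```python
import itertools
import math

# Illustrative balancing check: with b_n = h**(-(min(1,a))/(beta+r)), the
# terms b_n**(-beta) and b_n**r * h**min(1,a) coincide, and both equal
# h**(beta*min(1,a)/(beta+r)).
def balanced_terms(h, beta, r, a):
    e = min(1.0, a)
    bn = h ** (-e / (beta + r))
    return bn ** (-beta), bn ** r * h ** e, h ** (beta * e / (beta + r))

for beta, r, a, h in itertools.product((1.0, 1.5, 1.9), (0.3, 1.0), (0.4, 2.0), (1e-2, 1e-5)):
    t1, t2, target = balanced_terms(h, beta, r, a)
    assert math.isclose(t1, t2, rel_tol=1e-9)
    assert math.isclose(t1, target, rel_tol=1e-9)
```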
It remains to prove (5.5). Partially differentiating with respect to u under the integral sign, we obtain
∂_u ψ_h(u) = −2c_β ∫_{(0,Kh^{−1/β}]} (sin(uy)/y^β) ρ(h^{1/β}y) dy − 2c_β ∫_{(0,Kh^{−1/β}]} (sin(uy)/y^β) dy =: R_h(u) + A_h(u).
It suffices to show that |R_h(u)| ≲ h/u and |A_h(u) + βu^{β−1}| ≲ h/u for u > 0. Write ξ_β(y) = y^{−β} ρ(y). We have R_h ≡ 0 if c_ρ = 0. In the case where c_ρ > 0, thanks to Assumption 2.1(2)(b), the change of variables and the integration by parts yield that
|R_h(u)| ≲ h^{1−1/β} | ∫_{(0,K]} sin(uh^{−1/β}x) ξ_β(x) dx |
  = h^{1−1/β} | ∫_{(0,K]} ∂_x( −cos(uh^{−1/β}x) / (uh^{−1/β}) ) ξ_β(x) dx |
  ≲ (h/u) ( 1 + |ξ_β(0+)| + ∫_{(0,K]} |∂_x ξ_β(x)| dx ) ≲ h/u.
Turning to A_h(u), we need the following specific identity from the Lebesgue integration theory [19]: for r > 0 and β ∈ (0,2), we have

(5.7)   ∫_{(0,r)} (sin x / x^β) dx − Γ(1−β) cos(βπ/2) = (1/Γ(β)) ∫_{(0,∞)} e^{−ry} y^{β−1} (cos r + y sin r) / (1 + y²) dy.
From the definition (2.1) and the property of the gamma function, we have the identity β/(2c_β) = Γ(1−β) cos(βπ/2).
Applying (5.7) together with the change of variables, we obtain
|A_h(u) + βu^{β−1}| = | −2c_β u^{β−1} ∫_{(0,uKh^{−1/β}]} (sin x / x^β) dx + βu^{β−1} |
  ≲ u^{β−1} | ∫_{(0,uKh^{−1/β})} (sin x / x^β) dx − Γ(1−β) cos(βπ/2) |
  ≲ u^{β−1} [ ∫_{(0,∞)} (e^{−ry} y^{β−1} / (1+y²)) dy + ∫_{(0,∞)} (e^{−ry} y^β / (1+y²)) dy ]_{r = uKh^{−1/β}}
  = u^{β−1} [ r^{−β} ∫_{(0,∞)} (e^{−x} x^{β−1} / (1+(x/r)²)) dx + r^{−β−1} ∫_{(0,∞)} (e^{−x} x^β / (1+(x/r)²)) dx ]_{r = uKh^{−1/β}}
  ≤ u^{β−1} r^{−β} [ ∫_{(0,∞)} e^{−x} x^{β−1} dx + ∫_{(0,∞)} (r e^{−x} x^β / (r²+x²)) dx ]_{r = uKh^{−1/β}}
  ≲ (h/u) ( 1 + sup_{r>0} ∫_{(0,∞)} (r e^{−x} x^β / (r²+x²)) dx ) ≲ h/u.
Here, in the last step we used that
∫_{(0,∞)} (r e^{−x} x^β / (r²+x²)) dx = [ arctan(x/r) e^{−x} x^β ]_{(0,∞)} − ∫_{(0,∞)} arctan(x/r) (βx^{β−1} − x^β) e^{−x} dx
  ≲ ∫_{(0,∞)} x^{β−1} e^{−x} dx + ∫_{(0,∞)} x^β e^{−x} dx < ∞
uniformly in r > 0. Thus we have obtained (5.5), completing the proof of the claim (2).
5.2. Proof of Lemma 2.5. It is enough to notice that combining Lemma 2.2(1), (5.2), (5.3) and (5.4)
leads to
∫ |f_h(y) − φ_β(y)| dy ≲ b_n^{−β} + (ψ(h) ∨ h^{a_ν}) b_n^r,

and that the upper bound is optimized (with respect to b_n) to be (ψ(h) ∨ h^{a_ν})^{β/(β+r)}.
6. Proofs of the main results
This section is devoted to proving Theorems 3.2 and 3.5, Corollary 3.6, and Theorem 3.11.
6.1. Localization: elimination of large jumps.
6.1.1. Preliminaries. Prior to the proofs, we need to introduce a localization of the underlying probability
space by eliminating possible large jumps of J, thus enabling us to proceed as if E(|J1 |q ) < ∞ for every
q > 0. The point here is that, since our main results are concerned with the weak properties over the
fixed period [0, T ], we may conveniently focus on a subset ΩK,T (∈ F) ⊂ Ω on which jumps of J are
bounded by a constant K: supω∈Ω, t≤T |∆Jt (ω)| ≤ K, the probability P(ΩK,T ) being arbitrarily close to
1 for K large enough. This simple yet very powerful “localization” device is standard in the context of
limit theory for statistics based on high-frequency data [21], and has been considered for quite general
semimartingale models; we refer to [22, Section 4.4.1] for a comprehensive account.
Here we take a direct and concise route, without resorting to the general localization result. Recall the Lévy–Khintchine representation (2.2), and let μ(dt,dz) denote the Poisson random measure having the intensity measure dt ⊗ ν(dz), and μ̃(dt,dz) its compensated version. Fix any K > 1. We have ∫_{1<|z|≤K} z ν(dz) = 0 since ν is assumed to be symmetric, and the Lévy–Itô decomposition of J takes the form
J_t = ∫_0^t ∫_{|z|≤K} z μ̃(ds,dz) + ∫_0^t ∫_{|z|>K} z μ(ds,dz) =: M_t^K + A_t^K,
where M^K is a purely discontinuous martingale and A^K is a compound-Poisson process independent of M^K. Note that the symmetry assumption of ν makes the parametric form of the drift coefficient unaffected by elimination of large jumps of J. Since sup_t |∆M_t^K| ≤ K, we have E(|M_t^K|^q) < ∞ for any t ∈ R_+ and q > 0.³ Further, the event
Ω_{K,T} := { μ( (0,T] × {z; |z| > K} ) = 0 } ∩ { |X_0| ≤ K } ⊂ Ω

has the probability exp{ −T ∫_{|z|>K} ν(dz) } P(|X_0| ≤ K), which gets arbitrarily close to 1 by picking a sufficiently large K. Denote by (X_t^K)_{t∈[0,T]} a solution process to the SDE
dX_t^K = a(X_t^K, α_0) dt + c(X_{t−}^K, γ_0) dM_t^K,

which obviously admits a strong solution as a functional of (X_0, M^K), and satisfies

X_t(ω) = X_t^K(ω)   for t ∈ [0,T] and ω ∈ Ω_{K,T}.
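The exponential form of this probability is just the chance that a Poisson process — here the arrivals of jumps exceeding K, with rate λ = ∫_{|z|>K} ν(dz) — produces no event on [0, T]. A small Monte Carlo sketch (with an arbitrary toy rate, not the paper's model) illustrates it:

```python
import math
import numpy as np

# Illustrative check: if large jumps arrive as a Poisson process with rate
# lam, the probability of none on [0, T] is exp(-lam * T) -- the first factor
# of P(Omega_{K,T}) above (the second is the independent P(|X_0| <= K)).
rng = np.random.default_rng(12345)
lam, T = 0.35, 2.0
counts = rng.poisson(lam * T, size=200_000)
empirical_no_jump = float(np.mean(counts == 0))
assert abs(empirical_no_jump - math.exp(-lam * T)) < 6e-3
```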
Before proceeding, we recall the notation of the stable convergence in law. Let

(Ω̃, F̃, P̃(dω,dω′)) := (Ω × Ω′, F ⊗ F′, P(dω) Q(ω,dω′))

be an extended probability space, with Q denoting a transition probability from (Ω, F) to (Ω′, F′). For any random variables G_n defined on (Ω, F, P) and G_∞ on (Ω̃, F̃, P̃), all taking their values in some metric space E, we say that G_n converges stably in law to G_∞, denoted by G_n →_{Ls} G_∞, if E{f(G_n) U} → Ẽ{f(G_∞) U} for every bounded F-measurable random variable U ∈ R and every bounded continuous function f : E → R. This mode of convergence entails that (G_n, H_n) →_L (G_∞, H_∞) for every random variables H_n and H_∞ such that H_n →_p H_∞. We refer to [20], [21], [22], [23, Chapters VIII.5c and IX.7], and [17] for comprehensive accounts of the stable convergence in law.
The stable central limit theorem is a special case, where the limit is a mixed normal distribution, and
plays an essential role in the proof of the asymptotic mixed normality of the SQMLE. In Section 6.4.1, we
will apply the general result due to Jacod [20], one of the crucial findings in which is the characterization
of conditionally Gaussian continuous-time martingales defined on an extended probability space; the
technique essentially dates back to [13]. In most of the other existing stable convergence results, the
nesting condition on the underlying triangular arrays of filtrations is assumed, whereas it fails to hold for
our model. The foremost point of Jacod’s result is that it does not require the nesting condition.
³ More precisely, if K = inf{a > 0; supp(ν) ⊂ {z; |z| ≤ a}}, then by [48, Theorem 26.1] we have E{exp(r|J_t| log|J_t|)} < ∞ for each t > 0 and r ∈ (0, 1/K). The moment estimate for Lévy processes in small time is interesting in its own right. Several authors have studied asymptotic behavior of the moment E{f(J_h)} as h → 0 for a suitable function f. We refer to the recent paper [29] as well as the references therein for recent developments on this subject.
6.1.2. Statements and consequences. We write D([0, T ]; R) for the space of càdlàg processes over [0, T ]
taking values in R.
Lemma 6.1. Let ζ_n(θ, x) : Θ × D([0,T]; R) → R, ξ_n(x) : D([0,T]; R) → R^k, and ξ_0(x, a) : D([0,T]; R) × R^p → R^k be measurable functions. Let η ∈ R^p be a random variable defined on (Ω′, F′, P′). Then we have the following.
(i) If sup_θ |ζ_n(θ, X^K)| →_p 0 for every K > 0 large enough, then sup_θ |ζ_n(θ, X)| →_p 0.
(ii) If ξ_n(X^K) →_{Ls} ξ_0(X^K, η) for every K > 0 large enough, then ξ_n(X) →_{Ls} ξ_0(X, η).
Proof. (i) Fix any ε > 0 and then pick a K > 0 so large that P(Ω^c_{K,T}) < ε. Then

limsup_n P( sup_θ |ζ_n(θ, X)| > ε ) ≤ ε + limsup_n P( sup_θ |ζ_n(θ, X^K)| > ε ) ≤ ε.

(ii) Fix any ε > 0 and continuous bounded function f, and then pick a K > 0 for which P(Ω^c_{K,T}) < ε/(2‖f‖_∞). Then, the term |E{f(ξ_n(X))} − Ẽ{f(ξ_0(X, η))}| is bounded by

|E{f(ξ_n(X)); Ω_{K,T}} − Ẽ{f(ξ_0(X, η)); Ω_{K,T} × Ω′}| + E{|f(ξ_n(X))|; Ω^c_{K,T}} + Ẽ{|f(ξ_0(X, η))|; Ω^c_{K,T} × Ω′}
  ≤ |E{f(ξ_n(X^K)); Ω_{K,T}} − Ẽ{f(ξ_0(X^K, η)); Ω_{K,T} × Ω′}| + 2‖f‖_∞ P(Ω^c_{K,T}),

hence limsup_n |E{f(ξ_n(X))} − Ẽ{f(ξ_0(X, η))}| ≤ ε.
The item (ii) will be used only for η ∈ Rp being a standard Gaussian random vector independent of F.
Based on Lemma 6.1, in order to prove Theorems 3.2 and 3.5 and Corollary 3.6, we may and do suppose that

(6.1)   ∃K > 0:  P( ∀t ∈ [0,T], |∆J_t| ≤ K ) = 1,

thereby focusing on X^K for arbitrarily large (but fixed) K > 0. The specific value of K does not matter in the proofs below. We keep writing X instead of X^K for notational simplicity.
For later use, we mention and recall some important consequences of either Assumption 2.1 with (6.1), or Assumption 2.4.
• Following the argument of [22, Section 2.1.5] together with Gronwall's inequality under the global Lipschitz condition of (a(·,α_0), c(·,γ_0)), we see that

(6.2)   E( sup_{t≤T} |X_t|^q ) ≤ C,   sup_{t∈[s,s+h]∩[0,T]} E( |X_t − X_s|^q | F_s ) ≲ h (1 + |X_s|^C)

for any q ≥ 2 and s ∈ [0,T]; in particular,

sup_{t∈[s,s+h]∩[0,T]} E( |X_t − X_s|^q ) = O(h).
• There exists a constant C_0 > 0 such that ∫_{|z|>y} ν(dz) ≲ y^{−β} for y ∈ (0, C_0], with which [30, Theorem 2(a) and (c)] gives

(6.3)   E( sup_{t≤h} |J_t|^κ ) ≲ h^{κ/β}

for each κ ∈ (0, β).
• The convergences (2.3) and (2.5) hold:

√n ∫ |f_h(y) − φ_β(y)| dy → 0,   ∫ |y|^κ |f_h(y) − φ_β(y)| dy → 0,   κ ∈ [0, β).
6.2. Preliminary asymptotics. Let us recall the notation ε_j(θ) = {h^{1/β} c_{j−1}(γ)}^{−1} (∆_j X − h a_{j−1}(α)). Throughout this section, we look at asymptotic behavior of the auxiliary random function

U_n(θ) := Σ_{j=1}^n π_{j−1}(θ) η(ε_j(θ)),

where π : R × Θ → R^k ⊗ R^m and η : R → R^m are measurable functions. This form of U_n(θ) will appear in common in the proofs of the consistency and asymptotic (mixed) normality of the SQMLE. The results in this section will be repeatedly used in the subsequent sections.
Let E^{j−1}(·) be a shorthand for E(·|F_{t_{j−1}}) and write U_n(θ) = U_{1,n}(θ) + U_{2,n}(θ), where

U_{1,n}(θ) := Σ_{j=1}^n π_{j−1}(θ) ( η(ε_j(θ)) − E^{j−1}{η(ε_j(θ))} ),
U_{2,n}(θ) := Σ_{j=1}^n π_{j−1}(θ) E^{j−1}{η(ε_j(θ))}.
Given doubly indexed random functions F_{nj}(θ) on Θ, a positive sequence (a_n), and a constant q > 0, we will write

F_{nj}(θ) = O*_{L^q}(a_n)   if   sup_n sup_{j≤n} E( sup_θ |a_n^{−1} F_{nj}(θ)|^q ) < ∞.
6.2.1. Uniform estimate of the martingale part U1,n .
Lemma 6.2. Suppose that:
(i) π ∈ C¹(R × Θ) and sup_θ {|π(x,θ)| + |∂_θ π(x,θ)|} ≲ 1 + |x|^C;
(ii) η ∈ C¹(R) and |η(y)| + |y||∂η(y)| ≲ 1 + log(1 + |y|).
Then, for every q > 0 we have U_{1,n}(θ) = O*_{L^q}(√n), hence in particular

(6.4)   sup_θ | (nh^{1−1/β})^{−1} U_{1,n}(θ) | = O_p( (√n h^{1−1/β})^{−1} ) = o_p(1).
Proof. Since we are assuming that the parameter space Θ is a bounded convex domain, the Sobolev inequality [2, p.415] is in force: for each q > p,

E( sup_θ |n^{−1/2} U_{1,n}(θ)|^q ) ≲ sup_θ E( |n^{−1/2} U_{1,n}(θ)|^q ) + sup_θ E( |n^{−1/2} ∂_θ U_{1,n}(θ)|^q ).

To complete the proof, it therefore suffices to show that both {n^{−1/2} U_{1,n}(θ)} and {n^{−1/2} ∂_θ U_{1,n}(θ)} are L^q-bounded for each θ and q > p. Fix any q > p ∨ 2 and θ in the rest of this proof.
Observe that for β ≥ 1 and r ∈ (0, β),

(6.6)   |ε_j(θ)|^r = | h^{−1/β} c_{j−1}^{−1}(γ) {∆_j X − h a_{j−1}(α)} |^r
          ≲ (1 + |X_{t_{j−1}}|^C) ( |h^{−1/β} ∆_j X|^r + h^{r(1−1/β)} (1 + |X_{t_{j−1}}|^C) )
          ≲ (1 + |X_{t_{j−1}}|^C) ( |h^{−1/β} ∆_j X|^r + 1 ).
Applying the estimate (6.3) together with the linear growth property of a(·, α_0), the Lipschitz property of c(·, γ_0), the estimate (6.2), and Burkholder's inequality for the stochastic integral with respect to J, we derive the chain of inequalities:

E^{j−1}( |h^{−1/β} ∆_j X|^r ) ≲ h^{r(1−1/β)} E^{j−1}{ ( (1/h) ∫_j |a(X_s, α_0)|² ds )^{r/2} }
    + h^{−r/β} E^{j−1}( | ∫_j (c(X_s, γ_0) − c_{j−1}(γ_0)) dJ_s |^r ) + (1 + |X_{t_{j−1}}|^C) E( |h^{−1/β} J_h|^r )
  ≲ (1 + h^{r(1−1/β)}) (1 + |X_{t_{j−1}}|^C) + h^{−r/β} ( ∫_j E^{j−1}( |X_s − X_{t_{j−1}}|² ) ds )^{r/2}
  ≲ (1 + h^{r(1−1/β)}) (1 + |X_{t_{j−1}}|^C) + h^{−r/β} { h² (1 + |X_{t_{j−1}}|^C) }^{r/2}
(6.7)   ≲ 1 + |X_{t_{j−1}}|^C.
Using (6.6) and (6.7) with the disintegration, we arrive at the estimate

E( (1 + |X_{t_{j−1}}|^C) |ε_j(θ)|^r ) ≲ 1 + sup_{t≤T} E(|X_t|^C) ≲ 1

valid for r ∈ (0, β). By means of the condition on η,
E( |χ_j(θ)|^q ) ≲ E( (1 + |X_{t_{j−1}}|^C) E^{j−1}{|η(ε_j(θ))|^q} )
  ≲ E( (1 + |X_{t_{j−1}}|^C) ( 1 + E^{j−1}[ {log(1 + |ε_j(θ)|)}^q ] ) )
  ≲ E( (1 + |X_{t_{j−1}}|^C) ( 1 + E^{j−1}{|ε_j(θ)|^r} ) )
  ≲ 1 + sup_{t≤T} E(|X_t|^C),

concluding that sup_{j≤n} E(|χ_j(θ)|^q) ≲ 1.
Next we note that

∂_α ε_j(θ) = −h^{1−1/β} ∂_α a_{j−1}(α) / c_{j−1}(γ),   ∂_γ ε_j(θ) = −( ∂_γ c_{j−1}(γ) / c_{j−1}(γ) ) ε_j(θ).
By (6.5), the components of ∂_θ χ_j(θ) consist of the terms

π^{(1)}_{j−1}(θ) ( η(ε_j(θ)) − E^{j−1}{η(ε_j(θ))} ),
π^{(2)}_{j−1}(θ) ( ∂η(ε_j(θ)) − E^{j−1}{∂η(ε_j(θ))} ),
π^{(3)}_{j−1}(θ) ( ε_j(θ) ∂η(ε_j(θ)) − E^{j−1}{ε_j(θ) ∂η(ε_j(θ))} )

for some π^{(i)}(x,θ), i = 1, 2, 3, all satisfying the conditions imposed on π(x,θ). Again taking the conditions on η into account, we can proceed as in the previous paragraph to obtain sup_{j≤n} E(|∂_θ χ_j(θ)|^q) ≲ 1. The proof is complete.
6.2.2. Uniform estimate of the predictable (compensator) part U2,n . Introduce the notation:
δ^0_j(γ) = ( c_{j−1}(γ_0) / c_{j−1}(γ) ) h^{−1/β} ∆_j J,   b(x,θ) = c^{−1}(x,γ) {a(x,α_0) − a(x,α)},
a^∆_{j−1}(s) = a(X_s, α_0) − a_{j−1}(α_0),   c^∆_{j−1}(s) = c(X_s, γ_0) − c_{j−1}(γ_0),
r_j(γ) = ( h^{−1/β} / c_{j−1}(γ) ) ∫_j a^∆_{j−1}(s) ds + ( h^{−1/β} / c_{j−1}(γ) ) ∫_j c^∆_{j−1}(s−) dJ_s.

Then

ε_j(θ) = δ^0_j(γ) + h^{1−1/β} b_{j−1}(θ) + r_j(γ).
Expanding η we have

(6.8)   U_{2,n}(θ) = U^0_{2,n}(θ) + U′_{2,n}(θ) + U″_{2,n}(θ),

where, with r_j(θ; η) := ∫_0^1 ∂η( δ^0_j(γ) + h^{1−1/β} b_{j−1}(θ) + s r_j(γ) ) ds and π′(x,θ) := π(x,θ) c^{−1}(x,γ),

U^0_{2,n}(θ) := Σ_{j=1}^n π_{j−1}(θ) E^{j−1}{ η( δ^0_j(γ) + h^{1−1/β} b_{j−1}(θ) ) },
U′_{2,n}(θ) := h^{−1/β} Σ_{j=1}^n π′_{j−1}(θ) E^{j−1}{ r_j(θ; η) ∫_j a^∆_{j−1}(s) ds },
U″_{2,n}(θ) := h^{−1/β} Σ_{j=1}^n π′_{j−1}(θ) E^{j−1}{ r_j(θ; η) ∫_j c^∆_{j−1}(s−) dJ_s }.
A uniform law of large numbers for (nh^{1−1/β})^{−1} U_{2,n}(θ) will be one of the key ingredients in the proofs. Lemma 6.3 below reveals that the terms U′_{2,n}(θ) and U″_{2,n}(θ) have no contribution in the limit; we will deal with the remaining term U^0_{2,n}(θ) in Section 6.2.3.
Let us recall Itô’s formula, which is valid for any C β -function4 ψ (see [22, Theorems 3.2.1b) and
3.2.2a)]): for t > s,
Z t
ψ(Xt ) = ψ(Xs ) +
∂ψ(Xu− )dXu
s
Z tZ
{ψ(Xu− + c(Xu− , γ0 )z) − ψ(Xu− ) − ∂ψ(Xu− )c(Xu− , γ0 )z} µ(du, dz).
+
s
Let A denote the formal infinitesimal generator of X:

Aψ(x) = ∂ψ(x) a(x, α_0) + ∫ { ψ(x + c(x, γ_0) z) − ψ(x) − ∂ψ(x) c(x, γ_0) z } ν(dz),

the second term in the right-hand side being assumed well-defined. Then

(6.9)   ψ(X_t) = ψ(X_s) + ∫_s^t Aψ(X_u) du + ∫_s^t ∫ { ψ(X_{u−} + c(X_{u−}, γ_0) z) − ψ(X_{u−}) } μ̃(du, dz).

Obviously, we have |Aψ(x)| ≲ 1 + |x|^C for ψ such that the derivatives ∂^k ψ for k ∈ {0, 1, 2} exist and have polynomial majorants.
Lemma 6.3. Suppose that:
(i) π ∈ C¹(R × Θ) and sup_θ {|π(x,θ)| + |∂_θ π(x,θ)|} ≲ 1 + |x|^C;
(ii) η ∈ C¹(R) with bounded first derivative.
Then we have U′_{2,n}(θ) = O*_{L^q}(nh^{2−1/β}) and U″_{2,n}(θ) = O*_{L^q}(nh^{2−1/β}) for every q > 0. In particular, we have sup_θ |n^{−1/2} U′_{2,n}(θ)| = o_p(1) and sup_θ |n^{−1/2} U″_{2,n}(θ)| = o_p(1).

Proof. In this proof, q denotes any positive real greater than or equal to 2. We begin with U′_{2,n}(θ). Applying (6.9) with ψ(x) = a(x, α_0) and then taking the conditional expectation, we get

(6.10)  | E^{j−1}{ ∫_j a^∆_{j−1}(s) ds } | = | ∫_j E^{j−1}{a^∆_{j−1}(s)} ds |
          ≤ ∫_j ∫_{t_{j−1}}^s E^{j−1}{ |Aa(X_u, α_0)| } du ds
          ≲ ∫_j ∫_{t_{j−1}}^s ( 1 + E^{j−1}(|X_u|^C) ) du ds
          ≲ ∫_j ∫_{t_{j−1}}^s ( 1 + |X_{t_{j−1}}|^C ) du ds = O*_{L^q}(h²).

Write m_j(θ; η) = r_j(θ; η) − E^{j−1}{r_j(θ; η)} and ã^∆_{j−1}(s) = a^∆_{j−1}(s) − E^{j−1}{a^∆_{j−1}(s)}. Using (6.10) and noting that r_j(θ; η) is essentially bounded, we get

U′_{2,n}(θ) = h^{−1/β} Σ_{j=1}^n π′_{j−1}(θ) E^{j−1}{ m_j(θ; η) ∫_j ã^∆_{j−1}(s) ds } + O*_{L^q}(nh^{2−1/β})
  = h^{−1/β} Σ_{j=1}^n π′_{j−1}(θ) ∫_j E^{j−1}{ m_j(θ; η) ã^∆_{j−1}(s) } ds + O*_{L^q}(nh^{2−1/β}).

By Jensen's inequality, the claim U′_{2,n}(θ) = O*_{L^q}(nh^{2−1/β}) follows if we show

(6.11)  sup_n sup_{j≤n} sup_{s∈[t_{j−1},t_j]} E( sup_θ | (1/h) E^{j−1}{ m_j(θ; η) ã^∆_{j−1}(s) } |^q ) ≲ 1.
⁴ In the case of β ∈ (1,2), this means that ψ is C¹ and the derivative ∂ψ is locally Hölder continuous with index β − [β].
By (6.9) we may express ã^∆_{j−1}(s) as

(6.12)  ã^∆_{j−1}(s) = ∫_{t_{j−1}}^s f(X_{t_{j−1}}, X_u) du + ∫_{t_{j−1}}^s ∫ g(X_{u−}, z) μ̃(du, dz),

where E^{j−1}{f(X_{t_{j−1}}, X_u)} = 0 with f(x, x′) being of at most polynomial growth in (x, x′), and where g(x, z) := a(x + z c(x, γ_0), α_0) − a(x, α_0). Hence, for (6.11) it suffices to prove

(6.13)  sup_n sup_{j≤n} sup_{s∈[t_{j−1},t_j]} E( sup_θ | (1/h) E^{j−1}{ m_j(θ; η) ∫_{t_{j−1}}^s ∫ g(X_{u−}, z) μ̃(du, dz) } |^q ) < ∞.
Let H_{j,t}(θ; η) := E{ m_j(θ; η) | F_t } for t ∈ [t_{j−1}, t_j]; then H_{j,t_j}(θ; η) = m_j(θ; η). Recall we are supposing (1.2): F_t = σ(X_0) ∨ σ(J_s; s ≤ t). Invoking the self-renewing property of a Lévy process [43, Theorem I.32], we see that {H_{j,t}(θ; η), F_{t_{j−1}} ∨ σ(J_t); t ∈ [t_{j−1}, t_j]} is an essentially bounded martingale. According to the martingale representation theorem [23, Theorem III.4.34], the process H_{j,t}(θ) can be represented as a stochastic integral of the form

(6.14)  H_{j,t}(θ; η) = ∫_{t_{j−1}}^t ∫ ξ_j(s, z; θ) μ̃(ds, dz),   t ∈ [t_{j−1}, t_j],

with a bounded predictable process s ↦ ξ_j(s, z; θ) such that

sup_n sup_{j≤n} sup_{s∈[t_{j−1},t_j]} sup_θ E^{j−1}( ∫ ξ_j²(s, z; θ) ν(dz) ) < ∞.
Now, we look at the quantity inside the absolute value sign |···| in the left-hand side of (6.13). By taking the conditioning with respect to F_s inside the sign "E^{j−1}", substituting the expression (6.14) with t = s, and then applying the integration-by-parts formula for martingales, it follows that the quantity equals

(1/h) E^{j−1}{ ∫_{t_{j−1}}^s ∫ ξ_j(u, z; θ) g(X_{u−}, z) ν(dz) du }.
By the regularity conditions on a(x, α_0) and c(x, γ_0) we have |g(x,z)| ≲ |z|(1 + |x|). It follows from this bound together with (6.2) and the Jensen and Cauchy–Schwarz inequalities that

E( sup_θ | (1/h) E^{j−1}{ ∫_{t_{j−1}}^s ∫ ξ_j(u, z; θ) g(X_{u−}, z) ν(dz) du } |^q )
  ≲ (1/h) ∫_{t_{j−1}}^s E( sup_θ E^{j−1}{ | ∫ ξ_j(u, z; θ) g(X_{u−}, z) ν(dz) |^q } ) du
  ≲ (1/h) ∫_{t_{j−1}}^s E( sup_θ E^{j−1}{ ( ∫ ξ_j²(u, z; θ) ν(dz) )^{q/2} } E^{j−1}{ ( ∫ g²(X_{u−}, z) ν(dz) )^{q/2} } ) du
  ≲ (1/h) ∫_{t_{j−1}}^s E( E^{j−1}( 1 + |X_u|^C ) ) du
  ≲ (1/h) ∫_{t_{j−1}}^s E( 1 + |X_{t_{j−1}}|^C ) du ≲ 1.
This proves (6.13), concluding that U′_{2,n}(θ) = O*_{L^q}(nh^{2−1/β}).
Next we consider U″_{2,n}(θ). Using the martingale representation for m_j(θ; η) as before, we have

U″_{2,n}(θ) = h^{−1/β} Σ_{j=1}^n π′_{j−1}(θ) E^{j−1}{ m_j(θ; η) ∫_j c^∆_{j−1}(s−) dJ_s }
    + h^{−1/β} Σ_{j=1}^n π′_{j−1}(θ) E^{j−1}{r_j(θ; η)} E^{j−1}{ ∫_j c^∆_{j−1}(s−) dJ_s }
  = h^{−1/β} Σ_{j=1}^n π′_{j−1}(θ) E^{j−1}{ ∆_j H_j(θ; η) ∫_j c^∆_{j−1}(s−) dJ_s }
  = h^{−1/β} Σ_{j=1}^n π′_{j−1}(θ) E^{j−1}{ ∫_j ∫ ξ_j(s, z; θ) μ̃(ds, dz) · ∫_j ∫ c^∆_{j−1}(s−) z μ̃(ds, dz) }
  = h^{−1/β} Σ_{j=1}^n π′_{j−1}(θ) E^{j−1}{ ∫_j ∫ ξ_j(s, z; θ) z c^∆_{j−1}(s) ν(dz) ds }.
As in the case of a^∆_{j−1}, we have |E^{j−1}{c^∆_{j−1}(s)}| ≤ ∫_{t_{j−1}}^s E^{j−1}{ |Ac(X_u, γ_0)| } du = O*_{L^q}(h). Hence
(6.15)  U″_{2,n}(θ) = h^{−1/β} Σ_{j=1}^n π′_{j−1}(θ) ∫_j E^{j−1}{ Ξ̃_{j,s}(θ) c̃^∆_{j−1}(s) } ds + O*_{L^q}(nh^{2−1/β}),

where Ξ̃_{j,s}(θ) := ∫ ξ_j(s,z;θ) z ν(dz) − E^{j−1}{ ∫ ξ_j(s,z;θ) z ν(dz) } for s ∈ [t_{j−1}, t_j] and c̃^∆_{j−1}(s) := c^∆_{j−1}(s) − E^{j−1}{c^∆_{j−1}(s)}. We have sup_{s∈[t_{j−1},t_j]} sup_θ E^{j−1}{ |Ξ̃_{j,s}(θ)|² } ≲ 1, and c̃^∆_{j−1}(s) admits a similar representation to (6.12). Now we once more apply the martingale representation theorem: for each j and s ∈ [t_{j−1}, t_j], the processes M′^j_u(θ) := E{ Ξ̃_{j,s}(θ) | F_u } and M″^j_u := E{ c̃^∆_{j−1}(s) | F_u } for u ∈ [t_{j−1}, s] are martingales with respect to the filtration {F_{t_{j−1}} ∨ σ(J_u) : u ∈ [t_{j−1}, s]}, hence there correspond predictable processes m′^j_u(z; θ) and m″^j_u(z) such that

M′^j_s(θ) = ∫_{t_{j−1}}^s ∫ m′^j_u(z; θ) μ̃(du, dz),   M″^j_s = ∫_{t_{j−1}}^s ∫ m″^j_u(z) μ̃(du, dz),

and that sup_θ E^{j−1}{ ∫ (m′^j_u(z; θ))² ν(dz) } ∨ E^{j−1}{ ∫ (m″^j_u(z))² ν(dz) } ≲ 1. Thus, using the integration by parts formula as before we can rewrite (6.15) as

U″_{2,n}(θ) = nh^{2−1/β} · (1/n) Σ_{j=1}^n π′_{j−1}(θ) (1/h²) ∫_j ∫_{t_{j−1}}^s E^{j−1}{ ∫ m′^j_u(z; θ) m″^j_u(z) ν(dz) } du ds + O*_{L^q}(nh^{2−1/β}).

We can apply the Cauchy–Schwarz inequality to conclude that the first term in the right-hand side is O*_{L^q}(nh^{2−1/β}), hence so is U″_{2,n}(θ).
Since √n h^{2−1/β} ≲ h^{3/2−1/β} → 0 for β ∈ [1,2), the last part of the lemma is trivial. The proof is thus complete.
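The closing rate comparison is plain exponent arithmetic, and a quick check (illustrative values only, not part of the proof) confirms it: with h = T/n one has √n·h^{2−1/β} = √T·h^{3/2−1/β}, and 3/2 − 1/β > 0 throughout β ∈ [1, 2).

```python
import math

# Illustrative check: sqrt(n) * h**(2 - 1/beta) = sqrt(T) * h**(1.5 - 1/beta)
# when h = T/n, and the exponent 1.5 - 1/beta is positive on [1, 2),
# so the quantity vanishes as n grows.
def closing_rate(n, T, beta):
    h = T / n
    return math.sqrt(n) * h ** (2.0 - 1.0 / beta), math.sqrt(T) * h ** (1.5 - 1.0 / beta)

for beta in (1.0, 1.5, 1.9):
    assert 1.5 - 1.0 / beta > 0
    lhs, rhs = closing_rate(10_000, 5.0, beta)
    assert math.isclose(lhs, rhs, rel_tol=1e-9)
    assert lhs < closing_rate(100, 5.0, beta)[0]  # decreases as n grows
```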
6.2.3. Uniform law of large numbers. Building on Lemmas 6.2 and 6.3, we now turn to the uniform law
of large numbers for Un (θ) = U1,n (θ) + U2,n (θ). First we note the following auxiliary result.
Lemma 6.4. For any measurable function f : R × Θ → R such that

sup_θ { |f(x,θ)| + |∂_x f(x,θ)| } ≲ 1 + |x|^C,

we have (h = T/n)

sup_θ sup_{t≤T} | (1/n) Σ_{j=1}^{[t/h]} f(X_{t_{j−1}}, θ) − (1/T) ∫_0^t f(X_s, θ) ds | →_p 0.
Proof. The target quantity can be bounded by

(1/T) sup_{t≤T} Σ_{j=1}^{[t/h]} ∫_j sup_θ |f(X_s, θ) − f_{j−1}(θ)| ds + (h/T) sup_{t≤T} sup_θ |f(X_t, θ)|
  ≲ (1/n) Σ_{j=1}^n (1/h) ∫_j (1 + |X_{t_{j−1}}| + |X_s|)^C |X_s − X_{t_{j−1}}| ds + (h/T) ( 1 + sup_{t≤T} |X_t|^C ).

By (6.2) the expectation of the upper bound tends to zero, hence the claim.
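The mechanism of Lemma 6.4 — a left-endpoint Riemann sum tracking the normalized time integral uniformly in t — can be illustrated on a deterministic toy path. The sketch below uses X_s = sin(s) and f(x, θ) = θx² (arbitrary choices, not the paper's model) and checks that the sup-gap shrinks as n grows.

```python
import numpy as np

# Illustrative demo of Lemma 6.4 with the deterministic "path" X_s = sin(s)
# and f(x, theta) = theta * x**2: the normalized left-endpoint Riemann sum
# approaches (1/T) * int_0^t f(X_s, theta) ds, uniformly in t <= T.
def sup_gap(n, T=5.0, theta=2.0):
    grid = np.linspace(0.0, T, n + 1)          # t_j = j * h with h = T / n
    f = theta * np.sin(grid) ** 2
    riemann = np.concatenate(([0.0], np.cumsum(f[:-1]))) / n
    # exact antiderivative: int_0^t sin(s)^2 ds = t/2 - sin(2t)/4
    exact = theta * (grid / 2.0 - np.sin(2.0 * grid) / 4.0) / T
    return float(np.max(np.abs(riemann - exact)))

assert sup_gap(10_000) < 1e-3
assert sup_gap(10_000) < sup_gap(100)
```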
Proposition 6.5. Assume that the conditions in Lemma 6.2 hold and that sup_θ |∂_x π(x, θ)| ≲ 1 + |x|^C.
(1) If β = 1, we have

sup_θ | (1/n) U_n(θ) − (1/T) ∫_0^T π(X_t, θ) ∫ η( (c(X_t, γ_0)/c(X_t, γ)) z + b(X_t, θ) ) φ_1(z) dz dt | = o_p(1).
(2) If β ∈ (1,2), we have

sup_θ | (1/n) U_n(θ) − (1/T) ∫_0^T π(X_t, θ) ∫ η( (c(X_t, γ_0)/c(X_t, γ)) z ) φ_β(z) dz dt | = o_p(1).

If further η is odd, then

sup_θ | (nh^{1−1/β})^{−1} U_n(θ) | = O_p(1).
Proof. By Lemmas 6.2 and 6.3 it suffices to only look at U^0_{2,n}(θ) (recall (6.8)); the assumptions in Lemma 6.3 are implied by those in Lemma 6.2. Let

Ū_{2,n}(θ) := (nh^{1−1/β})^{−1} U^0_{2,n}(θ) = (1/n) Σ_{j=1}^n π_{j−1}(θ) h^{−(1−1/β)} E^{j−1}{ η( δ^0_j(γ) + h^{1−1/β} b_{j−1}(θ) ) }.

(1) For β = 1, we can write Ū_{2,n}(θ) as the sum of (1/n) Σ_{j=1}^n f^1_{j−1}(θ) and (1/n) Σ_{j=1}^n f^2_{j−1}(θ), where

f^1(x, θ) := π(x, θ) ∫ η( (c(x, γ_0)/c(x, γ)) z + b(x, θ) ) φ_1(z) dz,
f^2(x, θ) := π(x, θ) ∫ η( (c(x, γ_0)/c(x, γ)) z + b(x, θ) ) { f_h(z) − φ_1(z) } dz.
Pick a κ ∈ (0, β) = (0, 1). Since |η(y)| ≲ 1 + |y|^κ,

sup_θ | η( (c(x, γ_0)/c(x, γ)) z + b(x, θ) ) | ≲ (1 + |x|^C)(1 + |z|^κ).
Hence we have the bounds:

(6.16)  sup_θ |f^1(x, θ)| ≲ (1 + |x|^C) ∫ (1 + |z|^κ) φ_1(z) dz ≲ 1 + |x|^C

and |f^2(x, θ)| ≲ (1 + |x|^C) ∫ (1 + |z|^κ) |f_h(z) − φ_1(z)| dz = (1 + |x|^C) o(1); in particular,

(6.17)  sup_θ | (1/n) Σ_{j=1}^n f^2_{j−1}(θ) | = o_p(1).
Under the conditions on η, simple manipulations lead to

sup_θ | ∂η( (c(x,γ_0)/c(x,γ)) z + b(x,θ) ) · ( ∂_x( c(x,γ_0)/c(x,γ) ) z + ∂_x b(x,θ) ) |
  ≲ (1 + |x|^C) ( ‖∂η‖_∞ + sup_θ | ( (c(x,γ_0)/c(x,γ)) z + b(x,θ) ) ∂η( (c(x,γ_0)/c(x,γ)) z + b(x,θ) ) | )
  ≲ (1 + |x|^C) ( 1 + ∫ (1 + |z|^κ) φ_1(z) dz ) ≲ 1 + |x|^C.

Consequently,

(6.18)  sup_θ |∂_x f^1(x, θ)| ≲ 1 + |x|^C.
The claim follows on applying Lemma 6.4 with (6.16), (6.17), and (6.18).
(2) For β ∈ (1,2), we have

h^{1−1/β} Ū_{2,n}(θ) = (1/n) Σ_{j=1}^n π_{j−1}(θ) E^{j−1}{ η( δ^0_j(γ) + h^{1−1/β} b_{j−1}(θ) ) }
(6.19)    = (1/n) Σ_{j=1}^n π_{j−1}(θ) ∫ η( (c_{j−1}(γ_0)/c_{j−1}(γ)) z ) f_h(z) dz
      + h^{1−1/β} (1/n) Σ_{j=1}^n π_{j−1}(θ) b_{j−1}(θ) ∫_0^1 E^{j−1}{ ∂η( δ^0_j(γ) + s h^{1−1/β} b_{j−1}(θ) ) } ds.
As with the case β = 1, the first term in the rightmost side of (6.19) turns out to be equal to (1/T) ∫_0^T π(X_t, θ) ∫ η( (c(X_t, γ_0)/c(X_t, γ)) z ) φ_β(z) dz dt + o_p(1) uniformly in θ. Moreover, by the boundedness of ∂η and the estimate |π_{j−1}(θ) b_{j−1}(θ)| ≲ 1 + |X_{t_{j−1}}|^C, the second term is O_p(h^{1−1/β}) = o_p(1) uniformly in θ. Hence Lemma 6.4 ends the proof of the first half.
Under the conditions in Lemma 6.2, it follows from (6.8) and Lemma 6.3 that

sup_θ | (nh^{1−1/β})^{−1} U_n(θ) − Ū_{2,n}(θ) | = O_p(h).

If η is odd, the symmetry of the density f_h implies that the first term in the rightmost side of (6.19) a.s. equals 0 for each γ. Then sup_θ |Ū_{2,n}(θ)| = O_p(1), hence the latter claim.
We will also need the next corollary.
Corollary 6.6. Assume that the conditions in Lemma 6.2 hold, let β ∈ (1,2), and let η : R → R be an odd function. Then, for every q > 0 we have

(nh^{1−1/β})^{−1} Σ_{j=1}^n π_{j−1}(θ) E^{j−1}{ η(ε_j(α_0, γ)) } = O*_{L^q}(h),

and also

(nh^{1−1/β})^{−1} Σ_{j=1}^n π_{j−1}(θ) η(ε_j(α_0, γ)) = O*_{L^q}( (√n h^{1−1/β})^{−1} ).
Proof. We have U^0_{2,n}(θ) ≡ 0 from the first identity in (6.19) and the fact b_{j−1}(α_0, γ) ≡ 0. This combined with (6.8) and Lemmas 6.2 and 6.3 ends the proof.
6.3. Proof of Theorem 3.2: consistency. We first prove the following more or less well-known fact.
Lemma 6.7 (Consistency under possible multi-scaling). Let K_1 ⊂ R^{p_1} and K_2 ⊂ R^{p_2} be compact sets, and let H_n : K_1 × K_2 → R be a random function of the form

H_n(u_1, u_2) = k_{1,n} H_{1,n}(u_1) + k_{2,n} H_{2,n}(u_1, u_2)

for some positive non-random sequences (k_{1,n}) and (k_{2,n}) and some continuous random functions H_{1,n} : K_1 → R and H_{2,n} : K_1 × K_2 → R. Let (u_{1,0}, u_{2,0}) ∈ K_1° × K_2° be a non-random vector. Assume the following conditions:
• k_{2,n} = o(k_{1,n});
• sup_{u_1} |H_{1,n}(u_1) − H_{1,0}(u_1)| →_p 0 and sup_{(u_1,u_2)} |H_{2,n}(u_1,u_2) − H_{2,0}(u_1,u_2)| →_p 0 for some continuous random functions H_{1,0} and H_{2,0};
• {u_{1,0}} = argmax H_{1,0} and {u_{2,0}} = argmax H_{2,0}(u_{1,0}, ·) a.s.
Then, for any (û_{1,n}, û_{2,n}) ∈ K_1 × K_2 such that H_n(û_{1,n}, û_{2,n}) ≥ sup H_n − o_p(k_{2,n}), we have (û_{1,n}, û_{2,n}) →_p (u_{1,0}, u_{2,0}).
Proof. The claim is a special case of [45, Theorem 1]. For convenience, we sketch the proof.
From the assumption we have $(H_{1,n}, H_{2,n}) \xrightarrow{L} (H_{1,0}, H_{2,0})$ in $C(K_1 \times K_2)$. Let
\[ H'_n(u_1) := k_{1,n}^{-1} H_n(u_1, \hat{u}_{2,n}) = H_{1,n}(u_1) + k_{2,n} k_{1,n}^{-1} H_{2,n}(u_1, \hat{u}_{2,n}). \]
The second term on the rightmost side is $o_p(1)$ uniformly in $u_1 \in K_1$, so that $H'_n(\cdot) \xrightarrow{L} H_{1,0}(\cdot)$ in $C(K_1)$, with the limit a.s. uniquely maximized at $u_{1,0}$. Since $H'_n(\hat{u}_{1,n}) \ge \sup_{u_1} k_{1,n}^{-1} H_n(u_1, \hat{u}_{2,n}) - o_p(k_{2,n} k_{1,n}^{-1}) = \sup H'_n - o_p(1)$, the argmax theorem (for example [51]) concludes that $\hat{u}_{1,n} \xrightarrow{p} u_{1,0}$. We can follow a similar line to deduce $\hat{u}_{2,n} \xrightarrow{p} u_{2,0}$, replacing $H'_n$ by
\[ H''_n(u_2) := k_{2,n}^{-1} \{H_n(\hat{u}_{1,n}, u_2) - H_n(\hat{u}_{1,n}, u_{2,0})\} = H_{2,n}(\hat{u}_{1,n}, u_2) - H_{2,n}(\hat{u}_{1,n}, u_{2,0}), \]
which has the continuous limit process $H_{2,0}(u_{1,0}, \cdot) - H_{2,0}(u_{1,0}, u_{2,0})$ in $C(K_2)$, uniquely maximized at $u_{2,0}$ a.s.
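The mechanism of Lemma 6.7 can be illustrated with a deterministic toy criterion (entirely our construction, not the paper's $H_n$): a fast component that identifies $u_1$ alone, plus a slower component of relative size $k_{2,n}/k_{1,n} \to 0$ that identifies $u_2$ but also perturbs the $u_1$-direction. The joint grid maximizer then drifts back to $(u_{1,0}, u_{2,0})$ as $n \to \infty$.

```python
import numpy as np

# Hypothetical two-rate criterion in the spirit of Lemma 6.7 (our toy choice):
# H1 identifies u1 at the fast rate k1n, while H2 identifies u2 at the slower
# rate k2n = o(k1n) and also biases the u1-direction at order k2n.
u10, u20 = 0.3, 0.7

def H_n(u1, u2, n):
    k1n, k2n = 1.0, 1.0 / np.sqrt(n)
    H1 = -(u1 - u10) ** 2
    H2 = -(u2 - u20) ** 2 + 0.5 * u1   # cross term perturbing u1
    return k1n * H1 + k2n * H2

grid = np.linspace(0.0, 1.0, 201)
U1, U2 = np.meshgrid(grid, grid, indexing="ij")

for n in (100, 10**8):
    vals = H_n(U1, U2, n)
    i, j = np.unravel_index(np.argmax(vals), vals.shape)
    best = (grid[i], grid[j])
    print(n, best)  # joint maximizer approaches (u10, u20) as n grows
```

For small $n$ the maximizer of the $u_1$-coordinate is shifted by an amount of order $k_{2,n}/k_{1,n}$; the shift vanishes in the limit, exactly as the lemma asserts.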
Returning to our model, we make a few remarks on Assumption 2.10. Recall the notation $b(x, \theta) = c^{-1}(x, \gamma)\{a(x, \alpha_0) - a(x, \alpha)\}$ and $g_\beta(y) = \partial_y \log \phi_\beta(y)$. For $\beta > 1$, we define the random functions $Y_{\beta,1}(\cdot) = Y_{\beta,1}(\cdot; \gamma_0) : \Theta_\gamma \to \mathbb{R}$ and $Y_{\beta,2}(\cdot) = Y_{\beta,2}(\cdot; \theta_0) : \Theta \to \mathbb{R}$ by
\[ Y_{\beta,1}(\gamma) = \frac{1}{T} \int_0^T \int \left\{ \log\left( \frac{c(X_t, \gamma_0)}{c(X_t, \gamma)}\, \phi_\beta\Big( \frac{c(X_t, \gamma_0)}{c(X_t, \gamma)}\, z \Big) \right) - \log \phi_\beta(z) \right\} \phi_\beta(z)\, dz\, dt, \tag{6.20} \]
\[ Y_{\beta,2}(\theta) = \frac{1}{2T} \int_0^T b^2(X_t, \theta) \int \partial g_\beta\Big( \frac{c(X_t, \gamma_0)}{c(X_t, \gamma)}\, z \Big)\, \phi_\beta(z)\, dz\, dt. \]
We also define $Y_1(\cdot) = Y_1(\cdot; \theta_0) : \Theta \to \mathbb{R}$ by
\[ Y_1(\theta) = \frac{1}{T} \int_0^T \int \left\{ \log\left( \frac{c(X_t, \gamma_0)}{c(X_t, \gamma)}\, \phi_1\Big( \frac{c(X_t, \gamma_0)}{c(X_t, \gamma)}\, z + b(X_t, \theta) \Big) \right) - \log \phi_1(z) \right\} \phi_1(z)\, dz\, dt. \]
ESTIMATION OF LOCALLY STABLE LÉVY DRIVEN SDE
31
These three functions are a.s. continuous in $\theta$. Since the function $z \mapsto c_1 \phi_\beta(c_1 z + c_2)$ defines a probability density for all constants $c_1 > 0$ and $c_2 \in \mathbb{R}$, Jensen's inequality (applied $\omega$-wise) implies that the $dt$-integrand in (6.20) is a.s. non-positive. The equality $Y_{\beta,1}(\gamma) = 0$ holds only when the $dt$-integrand is zero for $t \in [0, T]$ a.s., hence $\{\gamma_0\} = \operatorname{argmax} Y_{\beta,1}$ a.s. Similarly, $\{\theta_0\} = \operatorname{argmax} Y_1$ a.s. Moreover,
\[ Y_{\beta,2}(\alpha, \gamma_0) = \frac{1}{2T} \int_0^T b^2(X_t, (\alpha, \gamma_0)) \int \partial g_\beta(z) \phi_\beta(z)\, dz\, dt = -\frac{1}{2} \int \frac{\{\partial \phi_\beta(z)\}^2}{\phi_\beta(z)}\, dz \cdot \frac{1}{T} \int_0^T c^{-2}(X_t, \gamma_0)\{a(X_t, \alpha_0) - a(X_t, \alpha)\}^2\, dt \le 0, \]
where the maximum $0$ is attained if and only if $\alpha = \alpha_0$.
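The non-positivity of the $dt$-integrand in (6.20) is precisely the non-negativity of a Kullback-Leibler divergence between $\phi_\beta$ and its rescaling $z \mapsto r\,\phi_\beta(r z)$, with $r = c(X_t,\gamma_0)/c(X_t,\gamma)$. The numerical sanity check below uses the standard Gaussian as a stand-in for $\phi_\beta$ (an assumption purely for illustration, since the stable density has no closed form):

```python
import numpy as np

# Stand-in density for phi_beta: standard Gaussian (illustration only).
phi = lambda z: np.exp(-z**2 / 2) / np.sqrt(2 * np.pi)

z = np.linspace(-30.0, 30.0, 200001)
dz = z[1] - z[0]

vals = {}
for r in (0.5, 0.9, 1.0, 1.3, 2.0):  # r plays the role of c(X_t,g0)/c(X_t,g)
    # dt-integrand of (6.20): int [log(r*phi(r z)) - log(phi(z))] phi(z) dz,
    # i.e. minus the KL divergence of phi from the rescaled density; the
    # log-ratio is expanded in closed form to avoid underflow in the tails.
    integrand = (np.log(r) + 0.5 * z**2 * (1.0 - r**2)) * phi(z)
    vals[r] = integrand.sum() * dz
    print(r, vals[r])
```

The value is $0$ exactly at $r = 1$ (i.e. $\gamma = \gamma_0$) and strictly negative otherwise, matching the identifiability argument above.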
6.3.1. Case of $\beta = 1$. Let
\[ Y_{1,n}(\theta) := \frac{1}{n}\{H_n(\theta) - H_n(\theta_0)\} = \frac{1}{n} \sum_{j=1}^{n} \left\{ \log \frac{c_{j-1}(\gamma_0)}{c_{j-1}(\gamma)} + \log \phi_1(\epsilon_j(\theta)) - \log \phi_1(\epsilon_j(\theta_0)) \right\}. \]
Since $\{\theta_0\} = \operatorname{argmax} Y_1$ a.s., by means of Lemma 6.7 the consistency of $\hat{\theta}_n\ (\in \operatorname{argmax} Y_{1,n})$ is ensured by the uniform convergence $\sup_\theta |Y_{1,n}(\theta) - Y_1(\theta)| \xrightarrow{p} 0$. This follows from Lemma 6.4 and Proposition 6.5(1) with $\pi(x, \theta) \equiv 1$ and $\eta = \log \phi_1$.
6.3.2. Case of $\beta \in (1, 2)$. We have
\[ H_n(\theta) - H_n(\theta_0) = k_n Y_{\beta,1,n}(\gamma) + l_n Y_{\beta,2,n}(\alpha, \gamma), \]
where $k_n := n$, $l_n := nh^{2(1-1/\beta)}$, and
\[ Y_{\beta,1,n}(\gamma) := \frac{1}{n}\{H_n(\alpha_0, \gamma) - H_n(\alpha_0, \gamma_0)\}, \qquad Y_{\beta,2,n}(\alpha, \gamma) := \frac{1}{nh^{2(1-1/\beta)}}\{H_n(\alpha, \gamma) - H_n(\alpha_0, \gamma)\}. \]
By Lemma 6.7, it suffices to prove the uniform convergences:
\[ \sup_\gamma |Y_{\beta,1,n}(\gamma) - Y_{\beta,1}(\gamma)| \xrightarrow{p} 0, \tag{6.21} \]
\[ \sup_\theta |Y_{\beta,2,n}(\theta) - Y_{\beta,2}(\theta)| \xrightarrow{p} 0. \tag{6.22} \]
The proof of (6.21) is much the same as in the case of $\beta = 1$, hence we only prove (6.22). Observe that
\[ Y_{\beta,2,n}(\theta) = \frac{1}{nh^{2(1-1/\beta)}} \sum_{j=1}^{n} \big\{ \log \phi_\beta(\epsilon_j(\theta)) - \log \phi_\beta(\epsilon_j(\alpha_0, \gamma)) \big\} \]
\[ = \frac{1}{nh^{1-1/\beta}} \sum_{j=1}^{n} b_{j-1}(\theta) g_\beta(\epsilon_j(\alpha_0, \gamma)) + \frac{1}{2n} \sum_{j=1}^{n} b^2_{j-1}(\theta)\, \partial g_\beta(\epsilon_j(\alpha_0, \gamma)) + \frac{1}{2n} \sum_{j=1}^{n} b^2_{j-1}(\theta) \{\partial g_\beta(\tilde{\epsilon}_j(\theta)) - \partial g_\beta(\epsilon_j(\alpha_0, \gamma))\} \]
\[ =: Y^{0}_{\beta,2,n}(\theta) + Y'_{\beta,2,n}(\theta) + Y''_{\beta,2,n}(\theta), \]
where $\tilde{\epsilon}_j(\theta)$ is a random point on the segment connecting $\epsilon_j(\theta)$ and $\epsilon_j(\alpha_0, \gamma)$. Since $g_\beta$ is odd, by means of (6.4) and Corollary 6.6 we have $\sup_\theta |Y^{0}_{\beta,2,n}(\theta)| = o_p(1)$. We also get $\sup_\theta |Y''_{\beta,2,n}(\theta)| = o_p(1)$, by noting that $\sup_\theta |\tilde{\epsilon}_j(\theta) - \epsilon_j(\alpha_0, \gamma)| \le \sup_\theta |\epsilon_j(\theta) - \epsilon_j(\alpha_0, \gamma)| \lesssim (1 + |X_{t_{j-1}}|^C) h^{1-1/\beta} = O_p(h^{1-1/\beta}) = o_p(1)$. It remains to look at $Y'_{\beta,2,n}$. The function $g_\beta$ is bounded and smooth, and satisfies
\[ \sup_y |y^{k+1} \partial^k g_\beta(y)| < \infty \tag{6.23} \]
for each non-negative integer $k$. The convergence $\sup_\theta |Y'_{\beta,2,n}(\theta) - Y_{\beta,2}(\theta)| = o_p(1)$ now follows on applying Proposition 6.5(2) for $\pi(x, \theta) = \frac{1}{2} b^2(x, \theta)$ and $\eta = \partial g_\beta$, with the trivial modification that inside the function $\eta$ we have "$\epsilon_j(\alpha_0, \gamma)$" instead of "$\epsilon_j(\theta)$".
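Although (6.23) is stated for $\beta \in (1, 2)$, the case $\beta = 1$ is the one where the analogous boundedness can be checked by hand: $\phi_1$ is the standard Cauchy density, so $g_1(y) = -2y/(1+y^2)$ with $\sup_y |g_1(y)| = 1$ and $\sup_y |y\, g_1(y)| = 2$. A quick numerical confirmation (closed-form $g_1$, our grid):

```python
import numpy as np

# beta = 1: phi_1 is the standard Cauchy density 1/(pi*(1+y^2)), so
# g_1(y) = d/dy log phi_1(y) = -2y / (1 + y^2) in closed form.
y = np.linspace(-1e4, 1e4, 200_001)
g1 = -2.0 * y / (1.0 + y**2)

sup_g = np.abs(g1).max()        # attained at y = +/-1, value 1
sup_yg = np.abs(y * g1).max()   # 2y^2/(1+y^2) < 2, sup approached as |y| grows
print(sup_g, sup_yg)
```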
32
HIROKI MASUDA
6.4. Proof of Theorem 3.2: asymptotic mixed normality. We introduce the rate matrix
\[ D_n = \operatorname{diag}(D_{n,1}, \dots, D_{n,p}) := \operatorname{diag}\big( \sqrt{n}\, h^{1-1/\beta} I_{p_\alpha},\ \sqrt{n}\, I_{p_\gamma} \big) \in \mathbb{R}^p \otimes \mathbb{R}^p, \]
and then denote the normalized SQMLE by
\[ \hat{u}_n = \big( \sqrt{n}\, h^{1-1/\beta}(\hat{\alpha}_n - \alpha_0),\ \sqrt{n}\,(\hat{\gamma}_n - \gamma_0) \big) := D_n(\hat{\theta}_n - \theta_0). \]
The consistency allows us to focus on the event $\{\hat{\theta}_n \in \Theta\}$, on which we have $\partial_\theta H_n(\hat{\theta}_n) = 0$, so that the two-term Taylor expansion gives
\[ \big( -D_n^{-1} \partial_\theta^2 H_n(\theta_0) D_n^{-1} + \hat{r}_n \big) \hat{u}_n = D_n^{-1} \partial_\theta H_n(\theta_0), \tag{6.24} \]
where $\hat{r}_n = \{\hat{r}_n^{kl}\}_{k,l}$ is a bilinear form such that
\[ |\hat{r}_n| \lesssim \sum_{k,l,m=1}^{p} D_{n,k}^{-1} D_{n,l}^{-1} \sup_\theta |\partial_{\theta_k} \partial_{\theta_l} \partial_{\theta_m} H_n(\theta)|\, \big| \hat{\theta}_{n,m} - \theta_{0,m} \big|. \]
Here we wrote $\theta = (\theta_i)_{i=1}^p$, and similarly for $\theta_0$ and $\hat{\theta}_n$. Let
\[ \Delta_{n,T} := D_n^{-1} \partial_\theta H_n(\theta_0), \qquad \Gamma_{n,T} := -D_n^{-1} \partial_\theta^2 H_n(\theta_0) D_n^{-1}. \]
If we have
\[ (\Delta_{n,T}, \Gamma_{n,T}) \xrightarrow{L} (\Delta_T, \Gamma_T(\theta_0; \beta)), \quad \text{where } \Delta_T \sim MN_p(0, \Gamma_T(\theta_0; \beta)), \tag{6.25} \]
\[ D_{n,k}^{-1} D_{n,l}^{-1} \sup_\theta |\partial_{\theta_k} \partial_{\theta_l} \partial_{\theta_m} H_n(\theta)| = O_p(1), \qquad k, l, m \in \{1, \dots, p\}, \tag{6.26} \]
then $\hat{r}_n = o_p(1)$ and
\[ \hat{u}_n = \big( \Gamma_T(\theta_0; \beta) + o_p(1) \big)^{-1} \Delta_{n,T} = \Gamma_T^{-1}(\theta_0; \beta) \Delta_{n,T} + o_p(1) \xrightarrow{L} \Gamma_T^{-1}(\theta_0; \beta) \Delta_T \sim MN_p\big(0, \Gamma_T^{-1}(\theta_0; \beta)\big), \]
completing the proof; since $\Gamma_T(\theta_0; \beta)$ may be random, the appropriate mode of convergence to deduce (6.25) is the stable convergence (see Section 6.1). Thus, it suffices to prove (6.26) and
\[ \Delta_{n,T} \xrightarrow{L_s} \Delta_T \sim MN_p\big(0, \Gamma_T(\theta_0; \beta)\big), \tag{6.27} \]
\[ \Gamma_{n,T} \xrightarrow{p} \Gamma_T(\theta_0; \beta). \tag{6.28} \]
6.4.1. Proof of (6.26). We may and do set $p_\alpha = p_\gamma = 1$. Write $R(x, \theta)$ for a generic matrix-valued function on $\mathbb{R} \times \Theta$ such that $\sup_\theta |R(x, \theta)| \lesssim 1 + |x|^C$. By straightforward computations,
\[ \frac{1}{nh^{2(1-1/\beta)}} \partial_\alpha^3 H_n(\theta) = \frac{1}{nh^{1-1/\beta}} \sum_{j=1}^{n} R_{j-1}(\theta) g_\beta(\epsilon_j(\theta)) + \frac{1}{n} \sum_{j=1}^{n} \big\{ R_{j-1}(\theta)\, \partial g_\beta(\epsilon_j(\theta)) + h^{1-1/\beta} R_{j-1}(\theta)\, \partial^2 g_\beta(\epsilon_j(\theta)) \big\}, \]
\[ \frac{1}{nh^{2(1-1/\beta)}} \partial_\alpha^2 \partial_\gamma H_n(\theta) = \frac{1}{nh^{1-1/\beta}} \sum_{j=1}^{n} \big\{ R_{j-1}(\theta) g_\beta(\epsilon_j(\theta)) + R_{j-1}(\theta) \epsilon_j(\theta)\, \partial g_\beta(\epsilon_j(\theta)) \big\} + \frac{1}{n} \sum_{j=1}^{n} \big\{ R_{j-1}(\theta) \epsilon_j(\theta)\, \partial^2 g_\beta(\epsilon_j(\theta)) + R_{j-1}(\theta)\, \partial g_\beta(\epsilon_j(\theta)) \big\}, \]
\[ \frac{1}{n} \partial_\gamma^3 H_n(\theta) = \frac{1}{n} \sum_{j=1}^{n} \big\{ R_{j-1}(\theta) + R_{j-1}(\theta) \epsilon_j(\theta) g_\beta(\epsilon_j(\theta)) + R_{j-1}(\theta) \epsilon_j^2(\theta)\, \partial g_\beta(\epsilon_j(\theta)) + R_{j-1}(\theta) \epsilon_j^3(\theta)\, \partial^2 g_\beta(\epsilon_j(\theta)) \big\}, \]
\[ \frac{1}{nh^{1-1/\beta}} \partial_\alpha \partial_\gamma^2 H_n(\theta) = \frac{1}{n} \sum_{j=1}^{n} \big\{ R_{j-1}(\theta) g_\beta(\epsilon_j(\theta)) + R_{j-1}(\theta) \epsilon_j(\theta)\, \partial g_\beta(\epsilon_j(\theta)) + R_{j-1}(\theta) \epsilon_j^2(\theta)\, \partial^2 g_\beta(\epsilon_j(\theta)) \big\}. \]
By (6.23), all the terms having the factor "$1/n$" in front of the summation sign in the above right-hand sides are $O_p(1)$ uniformly in $\theta$. Since the functions $y \mapsto g_\beta(y)$ and $y \mapsto y\, \partial g_\beta(y)$ are odd, it follows from Proposition 6.5(2) that both
\[ \frac{1}{nh^{1-1/\beta}} \sum_{j=1}^{n} R_{j-1}(\theta) g_\beta(\epsilon_j(\theta)) = O_p(1), \qquad \frac{1}{nh^{1-1/\beta}} \sum_{j=1}^{n} R_{j-1}(\theta) \epsilon_j(\theta)\, \partial g_\beta(\epsilon_j(\theta)) = O_p(1) \]
hold uniformly in $\theta$. These observations are enough to conclude (6.26).
6.4.2. Proof of (6.27). Let $\epsilon_j := \epsilon_j(\theta_0)$ and observe that
\[ \Delta_{n,T} = \Big( \frac{1}{\sqrt{n}\, h^{1-1/\beta}} \partial_\alpha H_n(\theta_0),\ \frac{1}{\sqrt{n}} \partial_\gamma H_n(\theta_0) \Big) = \Big( -\frac{1}{\sqrt{n}} \sum_{j=1}^{n} \frac{\partial_\alpha a_{j-1}(\alpha_0)}{c_{j-1}(\gamma_0)} g_\beta(\epsilon_j),\ -\frac{1}{\sqrt{n}} \sum_{j=1}^{n} \frac{\partial_\gamma c_{j-1}(\gamma_0)}{c_{j-1}(\gamma_0)} \{1 + \epsilon_j g_\beta(\epsilon_j)\} \Big). \]
To apply Jacod's stable central limit theorem, we introduce the partial sum process in $D([0, T]; \mathbb{R}^p)$:
\[ \Delta_{n,t} := \Big( -\frac{1}{\sqrt{n}} \sum_{j=1}^{[t/h]} \frac{\partial_\alpha a_{j-1}(\alpha_0)}{c_{j-1}(\gamma_0)} g_\beta(\epsilon_j),\ -\frac{1}{\sqrt{n}} \sum_{j=1}^{[t/h]} \frac{\partial_\gamma c_{j-1}(\gamma_0)}{c_{j-1}(\gamma_0)} \{1 + \epsilon_j g_\beta(\epsilon_j)\} \Big), \qquad t \in [0, T]. \]
Let
\[ \pi_{j-1} = \pi_{j-1}(\theta_0) := \operatorname{diag}\Big( -\frac{\partial_\alpha a_{j-1}(\alpha_0)}{c_{j-1}(\gamma_0)},\ -\frac{\partial_\gamma c_{j-1}(\gamma_0)}{c_{j-1}(\gamma_0)} \Big) \in \mathbb{R}^p \otimes \mathbb{R}^2, \tag{6.29} \]
\[ \eta(y) := (g_\beta(y),\ 1 + y g_\beta(y)) = (g_\beta(y),\ k_\beta(y)) \in \mathbb{R}^2 \quad \text{(bounded)}, \tag{6.30} \]
so that $\Delta_{n,t} = n^{-1/2} \sum_{j=1}^{[t/h]} \pi_{j-1} \eta(\epsilon_j)$. Write $\Gamma_t(\theta_0; \beta)$ for $\Gamma_T(\theta_0; \beta)$ with the integral signs "$\int_0^T$" in their definitions replaced by "$\int_0^t$". Then, by means of [20, Theorem 3-2] (or [23, Theorem IX.7.28]), the stable convergence (6.27) is implied by the following conditions: for each $t \in [0, T]$ and for any bounded $(\mathcal{F}_t)$-martingale $M$,
\[ \sum_{j=1}^{[t/h]} E^{j-1}\Big\{ \Big| \frac{1}{\sqrt{n}} \pi_{j-1} \eta(\epsilon_j) \Big|^4 \Big\} \xrightarrow{p} 0, \tag{6.31} \]
\[ \frac{1}{n} \sum_{j=1}^{[t/h]} \pi_{j-1} \cdot E^{j-1}\Big\{ \big( \eta(\epsilon_j) - E^{j-1}\{\eta(\epsilon_j)\} \big)^{\otimes 2} \Big\}\, \pi_{j-1} \xrightarrow{p} \Gamma_t(\theta_0; \beta), \tag{6.32} \]
\[ \sup_{t \in [0, T]} \Big| \frac{1}{\sqrt{n}} \sum_{j=1}^{[t/h]} \pi_{j-1} E^{j-1}\{\eta(\epsilon_j)\} \Big| \xrightarrow{p} 0, \tag{6.33} \]
\[ \sum_{j=1}^{[t/h]} E^{j-1}\Big\{ \frac{1}{\sqrt{n}} \pi_{j-1} \eta(\epsilon_j) \Delta_j M \Big\} \xrightarrow{p} 0. \tag{6.34} \]
The Lyapunov condition (6.31) trivially holds since $\eta$ is bounded and $|\pi_{j-1}| \lesssim 1 + |X_{t_{j-1}}|^C$. For (6.32), arguing as in the proof of Lemma 6.3 with $\int \eta(z) \phi_\beta(z)\, dz = 0$ and noting that
\[ \Big| \int \eta(z)\{f_h(z) - \phi_\beta(z)\}\, dz \Big| \le \|\eta\|_\infty \int |f_h(z) - \phi_\beta(z)|\, dz = o(n^{-1/2}), \]
\[ \int g_\beta(y) k_\beta(y) \phi_\beta(y)\, dy = \int \{g_\beta(y) + y g_\beta^2(y)\} \phi_\beta(y)\, dy = 0, \]
we obtain for each $q > 0$
\[ E^{j-1}\{\eta(\epsilon_j)\} = \int \eta(z) f_h(z)\, dz + O^{\ast}_{L_q}(h^{2-1/\beta}) = \int \eta(z) \phi_\beta(z)\, dz + O^{\ast}_{L_q}(n^{-1/2}) = O^{\ast}_{L_q}(n^{-1/2}), \tag{6.35} \]
\[ E^{j-1}\{\eta^{\otimes 2}(\epsilon_j)\} = \int \eta^{\otimes 2}(z) f_h(z)\, dz + O^{\ast}_{L_q}(h^{2-1/\beta}) = \int \eta^{\otimes 2}(z) \phi_\beta(z)\, dz + O^{\ast}_{L_q}(n^{-1/2}) = \begin{pmatrix} C_\alpha(\beta) & 0 \\ \text{sym.} & C_\gamma(\beta) \end{pmatrix} + O^{\ast}_{L_q}(n^{-1/2}). \]
Then the left-hand side of (6.32) equals
\[ \frac{1}{n} \sum_{j=1}^{[t/h]} \pi_{j-1} \cdot \Big( \int \eta^{\otimes 2}(z) \phi_\beta(z)\, dz \Big)\, \pi_{j-1} + O_p(n^{-1/2}) = \frac{1}{n} \sum_{j=1}^{[t/h]} \pi_{j-1} \cdot \begin{pmatrix} C_\alpha(\beta) & 0 \\ 0 & C_\gamma(\beta) \end{pmatrix} \pi_{j-1} + O_p(n^{-1/2}). \]
By Lemma 6.4 the first term on the right-hand side converges in probability to $\Gamma_t(\theta_0; \beta)$, hence (6.32) is verified.
The convergence (6.33) follows on applying (6.35) and Lemma 6.4:
\[ \frac{1}{\sqrt{n}} \sum_{j=1}^{[t/h]} \pi_{j-1} E^{j-1}\{\eta(\epsilon_j)\} = \frac{1}{n} \sum_{j=1}^{[t/h]} \pi_{j-1} \Big( \sqrt{n} \int \eta(z) f_h(z)\, dz + O_p(\sqrt{n}\, h^{2-1/\beta}) \Big) = o_p(1) + O_p(h^{3/2-1/\beta}) = o_p(1), \]
all the order symbols above being uniformly valid in $t \in [0, T]$.
Finally we turn to (6.34). By means of the decomposition theorem for local martingales (see [23, Theorem I.4.18]), we may write $M = M^c + M^d$ for the continuous part $M^c$ and the associated purely discontinuous part $M^d$. Our underlying probability space supports no Wiener process, so that in view of the martingale representation theorem [23, Theorem III.4.34] for $M$, we may set $M^c = 0$; recall (1.2). To show (6.34) we will follow a way analogous to [49], with successive use of the general theory of martingale convergence.
It suffices to prove the claim when both $\pi$ and $\eta$ are real-valued. The jumps of $M$ over $[0, T]$ are a.s. bounded, and we have $M^n_t := \sum_{j=1}^{[t/h]} \Delta_j M \xrightarrow{a.s.} M_t = M^d_t$ in $D([0, T]; \mathbb{R})$. Let
\[ N^n_t := \sum_{j=1}^{[t/h]} \frac{1}{\sqrt{n}} \pi_{j-1} \eta(\epsilon_j). \]
For each $n$, $N^n$ is a local martingale with respect to $(\mathcal{F}_t)$, and (6.34) amounts to $\langle M^n, N^n \rangle_t \xrightarrow{p} 0$ for each $t \le T$. The angle-bracket process
\[ \langle N^n \rangle_t = \frac{1}{n} \sum_{j=1}^{[t/h]} \pi_{j-1}^2 E^{j-1}\{\eta^2(\epsilon_j)\} \]
is C-tight, that is, it is tight in $D([0, T]; \mathbb{R})$ and any weak limit process has a.s. continuous sample paths; this can be deduced as in the proof of (6.32). Hence, by [23, Theorem VI.4.13] the sequence $(N^n)$ is tight in $D([0, T]; \mathbb{R})$. Further, for every $\varepsilon > 0$, as in the case of (6.31) we have
\[ P\Big( \sup_{t \le T} |\Delta N^n_t| > \varepsilon \Big) = P\Big( \max_{j \le n} |\Delta_j N^n| > \varepsilon \Big) \le \sum_{j=1}^{n} P(|\Delta_j N^n| > \varepsilon) \lesssim \sum_{j=1}^{n} E\{|\Delta_j N^n|^4\} \to 0. \]
We conclude from [23, Theorem VI.3.26] that $(N^n)$ is C-tight.
Fix any subsequence $\{n'\} \subset \mathbb{N}$. By [23, Theorem VI.3.33] the process $H^n := (M^n, N^n)$ is tight in $D([0, T]; \mathbb{R}^2)$. Hence, by Prokhorov's theorem we can pick a further subsequence $\{n''\} \subset \{n'\}$ for which there exists a process $H = (M^d, N)$ with $N$ being continuous, such that $H^{n''} \xrightarrow{L} H$ along $\{n''\}$ in $D([0, T]; \mathbb{R}^2)$. By (6.2) we have
\[ \sup_n E\Big( \max_{j \le n} |\Delta_j N^n| \Big) \lesssim \sup_n \frac{1}{\sqrt{n}} E\Big( 1 + \sup_{t \le T} |X_t|^C \Big) < \infty, \]
hence it follows from [23, Corollary VI.6.30] that the sequence $(H^{n''})$ is predictably uniformly tight. In particular, $(H^{n''}, [H^{n''}]) \xrightarrow{L} (H, [H])$ with the off-diagonal component of the limit quadratic-variation process being $0$ a.s.: $[M, N] = \langle M^c, N^c \rangle + \sum_{s \le \cdot} (\Delta M_s)(\Delta N_s) = 0$ a.s. identically (see [23, Theorem I.4.52]). Therefore, given any $\{n'\} \subset \mathbb{N}$ we can find a further subsequence $\{n''\} \subset \{n'\}$ for which $[M^{n''}, N^{n''}] \xrightarrow{L} 0$. This concludes that
\[ [M^n, N^n]_t = \sum_{j=1}^{[t/h]} \frac{1}{\sqrt{n}} \pi_{j-1} \eta(\epsilon_j) \Delta_j M \xrightarrow{p} 0 \tag{6.36} \]
in $D([0, T]; \mathbb{R})$.
Since $[M^n, N^n] - \langle M^n, N^n \rangle$ is a martingale, we may write
\[ G^n_t := [M^n, N^n]_t - \langle M^n, N^n \rangle_t = \int_0^t \int \chi_n(s, z)\, \tilde{\mu}(ds, dz) \]
for some predictable process $\chi_n(s, z)$. Using the isometry property and the martingale property of the stochastic integral, we have $E\{(G^n_t)^2\} = E\{\int_0^t \int \chi_n(s, z)^2\, ds\, \nu(dz)\} = E\{\int_0^t \int \chi_n(s, z)^2\, \mu(ds, dz)\} = E\{\sum_{0 < s \le t} (\Delta G^n_s)^2\} = E\{\sum_{0 < s \le t} (\Delta M^n_s \Delta N^n_s)^2\}$. Since
\[ (\Delta N^n_s)^2 \le \max_{j \le n} (\Delta_j N^n)^2 \lesssim \frac{1}{n} \Big( 1 + \sup_{t \le T} |X_t|^C \Big) \]
and $\sum_{0 < s \le T} (\Delta M^n_s)^2$ is essentially bounded (for $M$ is bounded), we obtain
\[ \sup_{t \le T} E\{(G^n_t)^2\} \lesssim \frac{1}{n} E\Big( 1 + \sup_{t \le T} |X_t|^C \Big) \to 0, \]
which combined with (6.36) yields (6.34): $\langle M^n, N^n \rangle_t \xrightarrow{p} 0$. The proof of (6.27) is complete.
Remark 6.8. The setting (1.2) of the underlying filtration is not essential. Even when the underlying
probability space carries a Wiener process, we may still follow the martingale-representation argument
as in [49].
6.4.3. Proof of (6.28). The components of $\Gamma_{n,T}$ consist of
\[ -\frac{1}{nh^{2(1-1/\beta)}} \partial_\alpha^2 H_n(\theta_0) = \frac{1}{nh^{1-1/\beta}} \sum_{j=1}^{n} \frac{\partial_\alpha^2 a_{j-1}(\alpha_0)}{c_{j-1}(\gamma_0)} g_\beta(\epsilon_j) - \frac{1}{n} \sum_{j=1}^{n} \frac{\{\partial_\alpha a_{j-1}(\alpha_0)\}^{\otimes 2}}{c^2_{j-1}(\gamma_0)}\, \partial g_\beta(\epsilon_j), \tag{6.37} \]
\[ -\frac{1}{n} \partial_\gamma^2 H_n(\theta_0) = -\frac{1}{n} \sum_{j=1}^{n} \frac{\partial_\gamma^2 c_{j-1}(\gamma_0)}{c_{j-1}(\gamma_0)} \{1 + \epsilon_j g_\beta(\epsilon_j)\} - \frac{1}{n} \sum_{j=1}^{n} \frac{\{\partial_\gamma c_{j-1}(\gamma_0)\}^{\otimes 2}}{c^2_{j-1}(\gamma_0)} \big\{ 1 + 2\epsilon_j g_\beta(\epsilon_j) + \epsilon_j^2\, \partial g_\beta(\epsilon_j) \big\}, \tag{6.38} \]
\[ -\frac{1}{nh^{1-1/\beta}} \partial_\alpha \partial_\gamma H_n(\theta_0) = -\frac{1}{n} \sum_{j=1}^{n} \frac{\{\partial_\alpha a_{j-1}(\alpha_0)\} \otimes \{\partial_\gamma c_{j-1}(\gamma_0)\}}{c^2_{j-1}(\gamma_0)} \{g_\beta(\epsilon_j) + \epsilon_j\, \partial g_\beta(\epsilon_j)\}. \]
By (6.4), with Corollary 6.6 when $\beta \in (1, 2)$, the first term on the right-hand side of (6.37) is $o_p(1)$. Since $-\int \partial g_\beta(z) \phi_\beta(z)\, dz = \int g_\beta^2(z) \phi_\beta(z)\, dz = C_\alpha(\beta)$, by Proposition 6.5 we derive
\[ -\frac{1}{nh^{2(1-1/\beta)}} \partial_\alpha^2 H_n(\theta_0) = C_\alpha(\beta) \Sigma_{T,\alpha}(\theta_0) + o_p(1). \]
By Proposition 6.5 and $\int k_\beta(z) \phi_\beta(z)\, dz = 0$, the first term on the right-hand side of (6.38) is $o_p(1)$. As for the second term, noting that the function $l_\beta(z) := 1 + 2z g_\beta(z) + z^2 \partial g_\beta(z)$ satisfies $\int l_\beta(z) \phi_\beta(z)\, dz = -\int k_\beta^2(z) \phi_\beta(z)\, dz = -C_\gamma(\beta)$, we obtain
\[ -\frac{1}{n} \partial_\gamma^2 H_n(\theta_0) = -\frac{1}{n} \sum_{j=1}^{n} \frac{\{\partial_\gamma c_{j-1}(\gamma_0)\}^{\otimes 2}}{c^2_{j-1}(\gamma_0)} \int l_\beta(z) \phi_\beta(z)\, dz + o_p(1) = C_\gamma(\beta) \Sigma_{T,\gamma}(\gamma_0) + o_p(1). \]
Finally, since $\int \{g_\beta(z) + z\, \partial g_\beta(z)\} \phi_\beta(z)\, dz = 0$, Proposition 6.5 concludes that
\[ -\frac{1}{nh^{1-1/\beta}} \partial_\alpha \partial_\gamma H_n(\theta_0) = o_p(1), \]
completing the proof of (6.28).
6.5. Proof of Theorem 3.5. Under the condition $\int_{|z| > 1} |z|^q\, \nu(dz) < \infty$ for every $q > 0$, the moment estimates (6.2) and (6.3) are in force without truncating the support of $\nu$ (Section 6.1). Further, it follows from Lemmas 2.2(1) and 2.5 that we have both (2.3) and (2.5).
6.6. Proof of Corollary 3.6. The random mapping $\theta \mapsto (\Sigma_{T,\alpha}(\theta), \Sigma_{T,\gamma}(\gamma))$ is a.s. continuous, hence applying the uniform law of large numbers presented in Lemma 6.4 we can deduce the convergences $\hat{\Sigma}_{T,\alpha,n} \xrightarrow{p} \Sigma_{T,\alpha}(\theta_0)$ and $\hat{\Sigma}_{T,\gamma,n} \xrightarrow{p} \Sigma_{T,\gamma}(\gamma_0)$. Then it is straightforward to derive (3.4) from (6.24), (6.25), and (6.26).
6.7. Proof of Theorem 3.11. Most parts are essentially the same as in the proof of Theorem 3.5 (hence
as in Theorem 3.2). We only sketch a brief outline.
The convergences (2.3) and (2.5) are valid under the present assumptions. As in Theorem 3.5, the localization introduced in Section 6.1 is not necessary here since, under the moment boundedness $\sup_t E(|X_t|^q) < \infty$ for any $q > 0$ and the global Lipschitz property of $(a, c)$, we can deduce the large-time version of the latter inequality in (6.2) by the standard argument: for any $q \ge 2$ we have $E(|X_{t+h} - X_t|^q \mid \mathcal{F}_t) \lesssim h(1 + |X_t|^C)$, hence in particular
\[ \sup_{t \in \mathbb{R}_+} E(|X_{t+h} - X_t|^q) \lesssim h \sup_{s \in \mathbb{R}_+} E(1 + |X_s|^C) \lesssim h. \]
Obviously, (6.3) remains the same, and Lemmas 6.2 and 6.3 stay valid as well.
As for the uniform law of large numbers under $T_n \to \infty$, we have the following ergodic counterpart to Lemma 6.4:
Lemma 6.9. For any measurable function $f : \mathbb{R} \times \Theta \to \mathbb{R}$ such that
\[ \sup_\theta \{|f(x, \theta)| + |\partial_x f(x, \theta)|\} \lesssim 1 + |x|^C, \]
we have
\[ \sup_\theta \Big| \frac{1}{n} \sum_{j=1}^{n} f_{j-1}(\theta) - \int_{\mathbb{R}} f(x, \theta)\, \pi_0(dx) \Big| \xrightarrow{p} 0. \]
Proof. Write $\Delta^f_n(\theta) = n^{-1} \sum_{j=1}^{n} f_{j-1}(\theta) - \int f(x, \theta)\, \pi_0(dx)$. By (3.7) we have $\Delta^f_n(\theta) \xrightarrow{p} 0$ for each $\theta$, hence it suffices to show the tightness of $\{\sup_\theta |\partial_\theta \Delta^f_n(\theta)|\}_n$ in $\mathbb{R}$, which implies the tightness of $\{\Delta^f_n(\cdot)\}_n$ in $C(\Theta)$. But this is obvious since
\[ \sup_\theta |\partial_\theta \Delta^f_n(\theta)| \lesssim \frac{1}{n} \sum_{j=1}^{n} \sup_\theta |f_{j-1}(\theta) - \pi_0(f(\cdot, \theta))| \lesssim \frac{1}{n} \sum_{j=1}^{n} (1 + |X_{t_{j-1}}|^C). \]
Having Lemma 6.9 in hand, we can follow the contents of Sections 6.3, 6.4.1, and 6.4.3. The proof of the central limit theorem is much easier than in the mixed normal case, for we now have no need for looking at the step processes introduced in Section 6.4.2, nor for taking care of the asymptotic orthogonality condition (6.34). By means of the classical central limit theorem for martingale difference arrays [10], it suffices to show, with the same notation as in (6.29) and (6.30),
\[ \sum_{j=1}^{n} E^{j-1}\Big\{ \Big| \frac{1}{\sqrt{n}} \pi_{j-1} \eta(\epsilon_j) \Big|^4 \Big\} \xrightarrow{p} 0, \]
\[ \frac{1}{n} \sum_{j=1}^{n} \pi_{j-1} \cdot E^{j-1}\Big\{ \big( \eta(\epsilon_j) - E^{j-1}\{\eta(\epsilon_j)\} \big)^{\otimes 2} \Big\}\, \pi_{j-1} \xrightarrow{p} \operatorname{diag}\big( V_\alpha(\theta_0; \beta),\ V_\gamma(\theta_0; \beta) \big), \]
\[ \frac{1}{\sqrt{n}} \sum_{j=1}^{n} \pi_{j-1} E^{j-1}\{\eta(\epsilon_j)\} \xrightarrow{p} 0, \]
all of which can be deduced from the same arguments as in Section 6.4.2.
Acknowledgement. This work was partly supported by JSPS KAKENHI Grant Number 26400204 and
JST CREST Grant Number JPMJCR14D7, Japan.
References
[1] M. Abramowitz and I. A. Stegun, editors. Handbook of mathematical functions with formulas, graphs, and mathematical
tables. Dover Publications Inc., New York, 1992. Reprint of the 1972 edition.
[2] R. A. Adams. Some integral inequalities with applications to the imbedding of Sobolev spaces defined over irregular
domains. Trans. Amer. Math. Soc., 178:401–429, 1973.
[3] Y. Aı̈t-Sahalia and J. Jacod. High-Frequency Financial Econometrics. Princeton University Press, 2014.
[4] O. Barndorff-Nielsen. Exponentially decreasing distributions for the logarithm of particle size. Proc. Roy. Soc. Lond.,
A353:401–419, 1977.
[5] O. E. Barndorff-Nielsen. Processes of normal inverse Gaussian type. Finance Stoch., 2(1):41–68, 1998.
[6] J. Bertoin and R. A. Doney. Spitzer’s condition for random walks and Lévy processes. Ann. Inst. H. Poincaré Probab.
Statist., 33(2):167–178, 1997.
[7] E. Clément and A. Gloter. Local asymptotic mixed normality property for discretely observed stochastic differential
equations driven by stable Lévy processes. Stochastic Process. Appl., 125(6):2316–2352, 2015.
[8] E. Clément, A. Gloter, and H. Nguyen. LAMN property for the drift and volatility parameters of a SDE driven by a stable Lévy process. hal-01472749, 2017.
[9] T. Costa, G. Boccignone, F. Cauda, and M. Ferraro. The foraging brain: evidence of Lévy dynamics in brain networks.
PloS one, 11(9):e0161702, 2016.
[10] A. Dvoretzky. Asymptotic normality of sums of dependent random vectors. In Multivariate analysis, IV (Proc. Fourth
Internat. Sympos., Dayton, Ohio, 1975), pages 23–34. North-Holland, Amsterdam, 1977.
[11] J. Fageot, A. Amini, and M. Unser. On the continuity of characteristic functionals and sparse stochastic modeling. J.
Fourier Anal. Appl., 20(6):1179–1211, 2014.
[12] J. Fan, L. Qi, and D. Xiu. Quasi-maximum likelihood estimation of GARCH models with heavy-tailed likelihoods. J.
Bus. Econom. Statist., 32(2):178–191, 2014.
[13] V. Genon-Catalot and J. Jacod. On the estimation of the diffusion coefficient for multi-dimensional diffusion processes.
Ann. Inst. H. Poincaré Probab. Statist., 29(1):119–151, 1993.
[14] E. Gobet. Local asymptotic mixed normality property for elliptic diffusion: a Malliavin calculus approach. Bernoulli,
7(6):899–912, 2001.
[15] E. Gobet. LAN property for ergodic diffusions with discrete observations. Ann. Inst. H. Poincaré Probab. Statist.,
38(5):711–737, 2002.
[16] M. Grabchak and G. Samorodnitsky. Do financial returns have finite or infinite variance? A paradox and an explanation.
Quant. Finance, 10(8):883–893, 2010.
[17] E. Häusler and H. Luschgy. Stable convergence and stable limit theorems, volume 74 of Probability Theory and Stochastic
Modelling. Springer, Cham, 2015.
[18] D. Ivanenko, A. M. Kulik, and H. Masuda. Uniform LAN property of locally stable Lévy process observed at high frequency. ALEA Lat. Am. J. Probab. Math. Stat., 12(2):835–862, 2015.
[19] K. Iwata. Lebesgue Integration. Morikita Publishing Co., Ltd., 2015.
[20] J. Jacod. On continuous conditional Gaussian martingales and stable convergence in law. In Séminaire de Probabilités,
XXXI, volume 1655 of Lecture Notes in Math., pages 232–246. Springer, Berlin, 1997.
[21] J. Jacod. Statistics and high-frequency data. In Statistical methods for stochastic differential equations, volume 124 of
Monogr. Statist. Appl. Probab., pages 191–310. CRC Press, Boca Raton, FL, 2012.
[22] J. Jacod and P. Protter. Discretization of processes, volume 67 of Stochastic Modelling and Applied Probability.
Springer, Heidelberg, 2012.
[23] J. Jacod and A. N. Shiryaev. Limit theorems for stochastic processes. Springer-Verlag, Berlin, second edition, 2003.
[24] A. Janicki and A. Weron. Simulation and chaotic behavior of α-stable stochastic processes, volume 178 of Monographs
and Textbooks in Pure and Applied Mathematics. Marcel Dekker Inc., New York, 1994.
[25] B.-Y. Jing, X.-B. Kong, and Z. Liu. Modeling high-frequency financial data by pure jump processes. Ann. Statist.,
40(2):759–784, 2012.
[26] R. Kawai and H. Masuda. On simulation of tempered stable random variates. J. Comput. Appl. Math., 235(8):2873–
2887, 2011.
[27] M. Kessler. Estimation of an ergodic diffusion from discrete observations. Scand. J. Statist., 24(2):211–229, 1997.
[28] X.-B. Kong, Z. Liu, and B.-Y. Jing. Testing for pure-jump processes for high-frequency data. Ann. Statist., 43(2):847–
877, 2015.
[29] F. Kühn. Existence and estimates of moments for Lévy-type processes. Stochastic Process. Appl., To appear.
[30] H. Luschgy and G. Pagès. Moment estimates for Lévy processes. Electron. Commun. Probab., 13:422–434, 2008.
[31] H. Masuda. Simple estimators for parametric Markovian trend of ergodic processes based on sampled data. J. Japan
Statist. Soc., 35(2):147–170, 2005.
[32] H. Masuda. Approximate self-weighted LAD estimation of discretely observed ergodic Ornstein-Uhlenbeck processes.
Electron. J. Stat., 4:525–565, 2010.
[33] H. Masuda. Approximate quadratic estimating function for discretely observed Lévy driven SDEs with application to a noise normality test. RIMS Kôkyûroku, 1752:113–131, 2011.
[34] H. Masuda. On quasi-likelihood analyses for stochastic differential equations with jumps. In Int. Statistical Inst.: Proc.
58th World Statistical Congress, 2011, Dublin (Session IPS007), pages 83–91, 2011.
[35] H. Masuda. Asymptotics for functionals of self-normalized residuals of discretely observed stochastic processes. Stochastic Process. Appl., 123(7):2752–2778, 2013.
[36] H. Masuda. Convergence of Gaussian quasi-likelihood random fields for ergodic Lévy driven SDE observed at high
frequency. Ann. Statist., 41(3):1593–1641, 2013.
[37] H. Masuda. Estimating an ergodic process driven by non-Gaussian noise. J. Jpn. Stat. Soc. Jpn. Issue, 44(2):471–495,
2015.
[38] H. Masuda. Parametric estimation of Lévy processes. In Lévy matters. IV, volume 2128 of Lecture Notes in Math.,
pages 179–286. Springer, Cham, 2015.
[39] H. Masuda. Non-Gaussian quasi-likelihood estimation of SDE driven by locally stable Lévy process.
arXiv:1608.06758v2, 2016.
[40] M. Matsui and A. Takemura. Some improvements in numerical evaluation of symmetric stable density and its derivatives. Comm. Statist. Theory Methods, 35(1-3):149–172, 2006.
[41] I. Mizera and C. H. Müller. Breakdown points and variation exponents of robust M -estimators in linear models. Ann.
Statist., 27(4):1164–1177, 1999.
[42] I. Mizera and C. H. Müller. Breakdown points of Cauchy regression-scale estimators. Statist. Probab. Lett., 57(1):79–89,
2002.
[43] P. E. Protter. Stochastic integration and differential equations, volume 21 of Stochastic Modelling and Applied Probability. Springer-Verlag, Berlin, 2005. Second edition. Version 2.1, Corrected third printing.
[44] R Core Team. R: A Language and Environment for Statistical Computing. R Foundation for Statistical Computing,
Vienna, Austria, 2013. ISBN 3-900051-07-0.
[45] P. Radchenko. Mixed-rates asymptotics. Ann. Statist., 36(1):287–309, 2008.
[46] S. Raible. Lévy processes in finance: Theory, numerics, and empirical facts. PhD thesis, Universität Freiburg i. Br., 2000.
[47] G. Samorodnitsky and M. S. Taqqu. Stable non-Gaussian random processes. Stochastic Modeling. Chapman & Hall,
New York, 1994. Stochastic models with infinite variance.
[48] K.-i. Sato. Lévy processes and infinitely divisible distributions, volume 68 of Cambridge Studies in Advanced Mathematics. Cambridge University Press, Cambridge, 1999. Translated from the 1990 Japanese original, Revised by the
author.
[49] V. Todorov and G. Tauchen. Realized Laplace transforms for pure-jump semimartingales. Ann. Statist., 40(2):1233–
1262, 2012.
[50] M. Uchida and N. Yoshida. Adaptive estimation of an ergodic diffusion process based on sampled data. Stochastic
Process. Appl., 122(8):2885–2924, 2012.
[51] A. W. van der Vaart. Asymptotic statistics, volume 3 of Cambridge Series in Statistical and Probabilistic Mathematics.
Cambridge University Press, Cambridge, 1998.
[52] K. Zhu and S. Ling. Global self-weighted and local quasi-maximum exponential likelihood estimators for ARMA-GARCH/IGARCH models. Ann. Statist., 39(4):2131–2163, 2011.
[53] V. M. Zolotarev. One-dimensional stable distributions, volume 65 of Translations of Mathematical Monographs. American Mathematical Society, Providence, RI, 1986. Translated from the Russian by H. H. McFaden, Translation edited
by Ben Silver.
Faculty of Mathematics, Kyushu University, 744 Motooka Nishi-ku Fukuoka 819-0395, Japan
E-mail address: [email protected]
Consensus analysis of large-scale nonlinear homogeneous multi-agent
formations with polynomial dynamics
arXiv:1612.01375v1 [] 5 Dec 2016
Paolo Massioni, Gérard Scorletti
Abstract—Drawing inspiration from the theory of linear “decomposable systems”, we provide a method, based on linear matrix inequalities (LMIs), which makes it possible to prove the convergence (or consensus) of a set of interacting agents with polynomial dynamics. We also show that the use of a generalised version of the famous Kalman-Yakubovich-Popov lemma allows the development of an LMI test whose size does not depend on the number of agents. The method is validated experimentally on two academic examples.
Index Terms—Multi-agent systems, nonlinear systems, consensus, polynomial dynamics, sum of squares.
I. INTRODUCTION
Large-scale systems are an emerging topic in the systems and control community, which is devoting significant efforts to the development of analysis and control synthesis methods for them. This deep interest can clearly be seen from the large number of works published in the field in the last 40 years [1], [2], [3], [23], [8], [12], [18], [20], [14], [17].
One of the main objectives of these studies is the development and validation of “distributed control laws” for achieving a specified goal for a system of this kind. By “distributed control”, as opposed to “centralized control”, we mean a control action that is computed locally according to the physical spatial extension of the system, which is seen as an interconnection of simpler subsystems. One of the main problems of large-scale systems is the “curse of dimensionality” that goes with them, i.e. the analysis and synthesis problems related to dynamical systems grow with the size, and for systems of very high order such problems become computationally infeasible. In the literature, if we restrict to linear systems, we can find a few solutions [18], [8], [1] that can effectively overcome the curse of dimensionality for a class of systems with a certain regularity, namely for what we call “homogeneous systems”, which are made of the interconnection of a huge number of identical subunits (also sometimes called “agents”).
In this paper we focus on this same problem, more specifically on the stability analysis of large-scale dynamical systems, for the nonlinear case. Namely, we consider a formation or a multi-agent system made of a high number of identical units interacting with one another through a symmetric graph-like interconnection. Assuming that the dynamical equation of each subunit is described by a polynomial in the state vector, we can show that a linear matrix inequality (LMI) test can be devised in order to verify the relative stability of such a formation. We will also be able to formulate such a test in a form which does not strictly depend on the formation size, making it possible to check the stability of formations with virtually any number of agents, basically extending the analysis results of [18], [9] to the nonlinear (polynomial) case.
P. Massioni is with Laboratoire Ampère, UMR CNRS 5005, INSA de Lyon, Université de Lyon, F-69621 Villeurbanne, France [email protected]
G. Scorletti is with Laboratoire Ampère, UMR CNRS 5005, Ecole Centrale de Lyon, Université de Lyon, F-69134 Ecully, France [email protected]
II. PRELIMINARIES
A. Notation
We denote by N the set of natural numbers, by R the set
of real numbers and by Rn×m the set of real n × m matrices.
A⊤ indicates the transpose of a matrix A, In is the identity
matrix of size n, 0n×m is a matrix of zeros of size n × m
and 1n ∈ Rn a column vector that contains 1 in all of its
entries. The notation $A \succeq 0$ (resp. $A \preceq 0$) indicates that all the eigenvalues of the symmetric matrix $A$ are positive (resp. negative) or equal to zero, whereas $A \succ 0$ (resp. $A \prec 0$) indicates that all such eigenvalues are strictly positive (resp. negative). The symbol $\binom{n}{k}$ indicates the binomial coefficient, for which we have
\[ \binom{n}{k} = \frac{n!}{k!(n-k)!}. \]
The symbol ⊗ indicates the Kronecker product, for which we
remind the basic properties (A ⊗ B)(C ⊗ D) = (AC ⊗ BD),
(A ⊗ B)⊤ = (A⊤ ⊗ B ⊤ ), (A ⊗ B)−1 = (A−1 ⊗ B −1 ) (with
matrices of compatible sizes). We employ the symbol ∗ to
complete symmetric matrix expressions avoiding repetitions.
B. Agent dynamic
We consider a set of N ∈ N identical agents or subsystems
of order n, which interact with one another. Each agent, if
taken alone, is supposed to be described by a polynomial
dynamic, of the kind
ẋi = fd (xi ) = Aa χi
(1)
⊤
n
where i = 1, ..., N , xi = [xi,1 , xi,2 , ..., xi,n ] ∈ R is the
state of the ith agent, fd is a polynomial function of degree
d ∈ N, Aa ∈ Rn×ρ and χi ∈ Rρ is the vector containing all the
monomials in xi up to degree d (for example, if n = 2, d = 2,
then χi = [1, xi,1 , xi,2 , x2i,1 , xi,1 xi,2 , x2i,2 ]⊤ ). The value of ρ
is given by
n+d
ρ=
.
(2)
n
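Equation (2) is the standard count of monomials in $n$ variables of total degree at most $d$. The sketch below (helper names are ours) reproduces the example above: for $n = d = 2$ there are $\rho = 6$ monomials.

```python
from math import comb
from itertools import combinations_with_replacement

def rho(n, d):
    # number of monomials in n variables up to total degree d, eq. (2)
    return comb(n + d, n)

def monomials(n, d):
    # exponent vectors of all monomials in x_1..x_n of total degree <= d
    out = []
    for deg in range(d + 1):
        for combo in combinations_with_replacement(range(n), deg):
            e = [0] * n
            for v in combo:
                e[v] += 1
            out.append(tuple(e))
    return out

print(rho(2, 2), monomials(2, 2))  # rho = 6, matching chi_i in the example
```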
This approach is based on the sum of squares (SOS) literature [19], which basically allows the relaxation of polynomial problems into linear-algebraic ones. In this context, it is possible to express polynomials $p$ up to degree $2d$ as quadratic forms with respect to $\chi_i$, i.e. $p(x_i) = \chi_i^\top X \chi_i$, with $X = X^\top \in \mathbb{R}^{\rho \times \rho}$. This quadratic expression is not unique, due to the fact that different products of monomials in $\chi_i$ can yield the same result; for example, $x_{i,1}^2$ is either $x_{i,1}^2$ times $1$ or $x_{i,1}$ times $x_{i,1}$. This implies that there exist linearly independent slack matrices $Q_k = Q_k^\top \in \mathbb{R}^{\rho \times \rho}$, with $k = 1, \dots, \iota$, such that $\chi_i^\top Q_k \chi_i = 0$. The number of such matrices is [19]
\[ \iota = \frac{1}{2} \left[ \binom{d+n}{d}^2 + \binom{d+n}{d} \right] - \binom{n+2d}{2d}. \tag{3} \]
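Equation (3) can be read as the number of free entries of the symmetric $\rho \times \rho$ Gram matrix, $\rho(\rho+1)/2$, minus the number of distinct monomials of degree up to $2d$. A quick check of the formula (helper name is ours):

```python
from math import comb

def iota(n, d):
    # eq. (3): slack dimension of the Gram representation
    r = comb(d + n, d)                      # = rho of eq. (2)
    return (r * r + r) // 2 - comb(n + 2 * d, 2 * d)

# n = 2, d = 2: rho = 6, so 21 Gram entries minus comb(6, 4) = 15 monomials
print(iota(2, 2))  # -> 6
```

For $n = 1$, $d = 1$ the formula gives $\iota = 0$ (all products of $1$ and $x$ are distinct, so there is no slack), consistent with the counting interpretation.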
C. Formations
Moving from one single agent to a whole formation, we
employ a matrix P ∈ RN ×N to describe the interactions
among the agents. Basically P is a sparse matrix whose entries
in the ith row and j th column indicate whether the ith agent is
influenced by the state of the j th , according to the definition
that follows.
Definition 1 (Formation). We call a formation (of non-linear agents with polynomial dynamics) a dynamical system of order $nN$, with $n, N \in \mathbb{N}$, described by the following dynamical equation
\[ \dot{x} = (I_N \otimes A_a + P \otimes A_b) \chi \tag{4} \]
where $x = [x_1^\top, x_2^\top, \dots, x_N^\top]^\top \in \mathbb{R}^{nN}$, $\chi = [\chi_1^\top, \chi_2^\top, \dots, \chi_N^\top]^\top \in \mathbb{R}^{\rho N}$, $P \in \mathbb{R}^{N \times N}$ and $A_a, A_b \in \mathbb{R}^{n \times \rho}$.
This definition extends and adapts the definition of “decomposable systems” given in [18] to polynomial dynamics. In the linear case, a formation defined above boils down to the dynamical equation
\[ \dot{x} = (I_N \otimes A_a + P \otimes A_b) x. \tag{5} \]
In [18] it has been shown that if $P$ is diagonalisable, then this system (of order $nN$) is equivalent to a set of parameter-dependent linear systems of order $n$. This is obtained with the change of variables $x = (S \otimes I_n) \hat{x}$, where $\hat{x} = [\hat{x}_1^\top, \hat{x}_2^\top, \dots, \hat{x}_N^\top]^\top \in \mathbb{R}^{nN}$ and $S$ is the matrix diagonalising $P$, i.e. $S^{-1} P S = \Lambda$, with $\Lambda$ diagonal. This turns (5) into $\dot{\hat{x}} = (I_N \otimes A_a + \Lambda \otimes A_b) \hat{x}$, which is a block-diagonal system equivalent to the set
\[ \dot{\hat{x}}_i = (A_a + \lambda_i A_b) \hat{x}_i \tag{6} \]
for $i = 1, \dots, N$, with $\lambda_i$ the $i$th eigenvalue of $P$. This idea of decomposing a distributed system into a set of parameter-varying systems is very practical and it has inspired several works in the domain of consensus and distributed control [13], [26], [7], [4]. In this paper, to our knowledge, the idea is adapted to nonlinear dynamics for the first time.
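The passage from (5) to (6) can be checked numerically: the spectrum of $I_N \otimes A_a + P \otimes A_b$ is the union over $i$ of the spectra of $A_a + \lambda_i A_b$. A numpy sketch with random data (our construction, for illustration only):

```python
import numpy as np

rng = np.random.default_rng(0)
n, N = 2, 5
Aa = rng.standard_normal((n, n))
Ab = rng.standard_normal((n, n))

# symmetric pattern matrix P (hence diagonalisable with real spectrum)
M = rng.standard_normal((N, N))
P = M + M.T

big = np.kron(np.eye(N), Aa) + np.kron(P, Ab)
eig_big = np.linalg.eigvals(big)

# union of the spectra of the N parameter-dependent systems (6)
eig_dec = np.concatenate(
    [np.linalg.eigvals(Aa + lam * Ab) for lam in np.linalg.eigvalsh(P)])

# every eigenvalue of the decomposed systems appears in the big system
err = max(np.min(np.abs(eig_big - mu)) for mu in eig_dec)
print(err)
```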
D. Problem formulation
The goal of this paper is to prove convergence of the states of the agents under a given dynamics expressed as in (4). We do not require that each agent by itself converges to a point, but that all agents eventually converge to the same state, which could be either an equilibrium point or a trajectory. In order to do so, we first formulate an assumption on the pattern matrix P.
Assumption 2. The pattern matrix P ∈ R^{N×N} in (4) is symmetric and it has one and only one eigenvalue equal to 0, associated with the eigenvector 1N, i.e. P 1N = 0.
This assumption is very common in the literature; it basically ensures that the interconnection matrix is a (generalised) graph Laplacian of a symmetric connected graph [5]. Such matrices have real eigenvalues and eigenvectors. We can then formulate the problem on which this paper focuses.
Problem 3. We consider (4) with initial conditions x(0) ∈ R^{nN}. Prove that lim_{t→∞} ||xi − xj|| = 0, ∀i, j ∈ {1, ..., N}.
III. FORMATION LYAPUNOV FUNCTION
In order to be able to prove the convergence of all the agents
to the same trajectory, we define what we call a “formation
Lyapunov function”, which will have the property of tending
to zero when the agents are converging. We summarise these
notions in a definition and a lemma.
Definition 4 (Formation Lyapunov function candidate). We define as "formation Lyapunov function candidate" a function

    V(x) = x⊤( Σ_{i=1}^{l} P^i ⊗ Li )x = x⊤ L x    (7)

with Li = Li⊤ ∈ R^{n×n}, l ∈ N, l ≤ N. The reason for this special structure will be clear later on; in fact it allows the block-diagonalisation of the Lyapunov matrix in the same way as P can be diagonalised.
Lemma 5. Consider (4) and a formation Lyapunov function candidate V(x) = x⊤Lx as in (7). Let 1N⊥ ∈ R^{N×(N−1)} be the orthogonal complement of 1N, i.e. [1N 1N⊥] is full rank and 1N⊤ 1N⊥ = 0.

If (1N⊥ ⊗ In)⊤ L (1N⊥ ⊗ In) > 0, then we have that xi = xj ∀i, j ∈ {1, ..., N} if and only if V(x) = 0.
Proof: Necessity is almost obvious: if xi = xj ∀i, j ∈ {1, ..., N}, then x = 1N ⊗ xi; the fact that P 1N = 0 implies that V(x) = 0.

We prove sufficiency by contradiction, i.e. we suppose that there exist i and j for which xi ≠ xj and V(x) = 0. The complete state vector x must then have at least one component orthogonal to the columns of (1N ⊗ In), because the columns of (1N ⊗ In) span exactly the states in which all agent states are equal. So, based on the fact that (1N⊥ ⊗ In)⊤ L (1N⊥ ⊗ In) > 0, we get V(x) > 0, contradicting the hypothesis.
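Lemma 5 can be sanity-checked numerically. In the sketch below (our construction; l = 1 with L1 = In is an arbitrary choice that satisfies the positivity condition when P is a connected-graph Laplacian), V vanishes exactly on consensus states:

```python
import numpy as np

n, N = 2, 4
# P: path-graph Laplacian (satisfies Assumption 2).
P = np.array([[ 1, -1,  0,  0],
              [-1,  2, -1,  0],
              [ 0, -1,  2, -1],
              [ 0,  0, -1,  1]], dtype=float)

# Candidate (7) with l = 1 and L1 = I_n, so L = P kron I_n.
L = np.kron(P, np.eye(n))
V = lambda x: float(x @ L @ x)

# Orthogonal complement of 1_N: eigenvectors of P with nonzero eigenvalue.
lam, S = np.linalg.eigh(P)           # eigenvalues ascending, lam[0] = 0
one_perp = S[:, 1:]
M = np.kron(one_perp, np.eye(n)).T @ L @ np.kron(one_perp, np.eye(n))
assert np.all(np.linalg.eigvalsh(M) > 1e-9)   # positivity condition of Lemma 5

# Consensus state: V = 0.
xc = np.kron(np.ones(N), np.array([0.3, -1.2]))
assert abs(V(xc)) < 1e-12

# A non-consensus state: V > 0.
rng = np.random.default_rng(0)
x = rng.normal(size=n * N)
assert V(x) > 0
print("Lemma 5 verified on the example")
```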
IV. MAIN RESULT
We are now ready to formulate our main result. A preliminary lemma is given first, which allows diagonalising the
Lyapunov function in the same way as a linear system is
decomposed in [18].
Lemma 6. If Assumption 2 holds, then: 1) there exists a matrix S ∈ R^{N×N} such that S⊤S = SS⊤ = IN and S⊤PS = Λ, with Λ diagonal; moreover, 2) S⊤ 1N = T = [t1 t2 ... tN]⊤, with ti ∈ R, ti = 0 if λi = Λi,i ≠ 0.

Proof: The first part of the lemma follows from the fact that all symmetric matrices are diagonalisable by an orthonormal matrix S (i.e. S^{−1} = S⊤) [10]. For the second part, consider that, due to Assumption 2, 1N is an eigenvector of P with eigenvalue 0; the matrix S contains the normalised eigenvectors of P in its columns, and all of these eigenvectors are orthogonal to one another because S⊤S = IN. So each ti is the dot product between 1N and the i-th eigenvector, and it is non-zero if and only if λi = 0.
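The second claim of Lemma 6 is easy to check numerically (our sketch; a path-graph Laplacian on five nodes is an arbitrary P satisfying Assumption 2):

```python
import numpy as np

# P: path-graph Laplacian on N = 5 nodes (symmetric, single zero eigenvalue).
N = 5
P = 2 * np.eye(N) - np.eye(N, k=1) - np.eye(N, k=-1)
P[0, 0] = P[-1, -1] = 1.0

lam, S = np.linalg.eigh(P)          # S orthonormal, S.T @ P @ S = diag(lam)
assert np.allclose(S.T @ S, np.eye(N))
assert np.allclose(S.T @ P @ S, np.diag(lam))

# T = S^T 1_N must vanish in every entry whose eigenvalue is nonzero.
T = S.T @ np.ones(N)
for ti, li in zip(T, lam):
    if abs(li) > 1e-9:
        assert abs(ti) < 1e-9       # t_i = 0 whenever lambda_i != 0
print("Lemma 6 verified")
```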
Theorem 7. Consider (4) with given N, Aa, Ab and P satisfying Assumption 2; moreover, we order the eigenvalues of P so that the first eigenvalue is the one equal to zero, i.e. λ1 = 0. If for a chosen l ∈ N there exist τj ∈ R and matrices Lj = Lj⊤ ∈ R^{n×n} such that

    Σ_{j=1}^{l} λi^j Lj ≻ 0    (8)

    Π( Σ_{j=1}^{ι} τj Qj + Σ_{j=1}^{l} ( λi^j (Γ⊤ Lj Aa + Aa⊤ Lj Γ + Γ⊤ Lj Γ) + λi^{j+1} (Γ⊤ Lj Ab + Ab⊤ Lj Γ) ) )Π⊤ ⪯ 0    (9)

for i = 2, ..., N, where Γ = [0_{n,1} In 0_{n,ρ−n−2}] (i.e. Γχi = xi) and Π = [0_{ρ−1,1} I_{ρ−1}], then lim_{t→∞} ||xi − xj|| = 0, ∀i, j ∈ {1, ..., N}.
Proof: In order to ensure the convergence of the agents, we need the conditions stated in Lemma 5, namely that a function V(x) = x⊤Lx exists with (1N⊥ ⊗ In)⊤ L (1N⊥ ⊗ In) > 0, and that V̇(x) < 0 for V(x) > 0.

For the condition (1N⊥ ⊗ In)⊤ L (1N⊥ ⊗ In) > 0, consider that S contains a scaled version of 1N in its first column and 1N⊥ in the rest of the matrix. Knowing that L = (S ⊗ In)( Σ_{i=1}^{l} Λ^i ⊗ Li )(S⊤ ⊗ In) thanks to Lemma 6, this condition is equivalent to (8).
For what concerns the condition V̇(x) < 0 for V(x) > 0, it is satisfied if V̇(x) ≤ −ε x⊤Lx, i.e.

    χ⊤( QN + ΓN⊤ L (IN ⊗ Aa) + ΓN⊤ L (P ⊗ Ab) + (IN ⊗ Aa⊤) L ΓN + (P ⊗ Ab⊤) L ΓN + ε ΓN⊤ L ΓN )χ ≤ 0    (10)

where ΓN = (IN ⊗ Γ) (so ΓN χ = x) and QN = IN ⊗ Σ_{j=1}^{ι} τj Qj (for which, by definition of the Qj, χ⊤ QN χ = 0 for all values of the τj). By the fact that P = SΛS⊤ and IN = SS⊤ (Assumption 2 and Lemma 6), using the properties of the Kronecker product, (10) is equivalent to
    χ̂⊤( QN + ΓN⊤ L̂ (IN ⊗ Aa) + ΓN⊤ L̂ (Λ ⊗ Ab) + (IN ⊗ Aa⊤) L̂ ΓN + (Λ ⊗ Ab⊤) L̂ ΓN + ε ΓN⊤ L̂ ΓN )χ̂ ≤ 0    (11)

with L̂ = Σ_{j=1}^{l} Λ^j ⊗ Lj and χ̂ = (S⊤ ⊗ Iρ)χ. Notice in this last inequality that the term between χ̂⊤ and χ̂ is block-diagonal, as it is the sum of terms of the kind IN ⊗ X or Λ^i ⊗ X (i ∈ N). If we define χ̂i ∈ R^ρ such that χ̂ = [χ̂1⊤, χ̂2⊤, ..., χ̂N⊤]⊤, then (11) is equivalent to

    Σ_{i=1}^{N} χ̂i⊤( Σ_{j=1}^{ι} τj Qj + Σ_{j=1}^{l} λi^j (Γ⊤ Lj Aa + Aa⊤ Lj Γ) + Σ_{j=1}^{l} λi^{j+1} (Γ⊤ Lj Ab + Ab⊤ Lj Γ) + ε Σ_{j=1}^{l} λi^j (Γ⊤ Lj Γ) )χ̂i ≤ 0.    (12)
The term of the sum for i = 1 is always 0 (as we chose λ1 = 0), so it gives no contribution and can be discarded. Concerning the vectors χi, remember that they all contain 1 in their first entry, i.e. χi = [1 χ̃i⊤]⊤, χ̃i ∈ R^{ρ−1}. For each χ̂i, the first entry is by its definition the i-th entry of the vector e = S⊤1, which contains zeros in all of its entries but the first (due to Lemma 6). So for i = 2, ..., N, we have that χ̂i = Π⊤Πχ̂i. So (12) is equivalent to

    Σ_{i=2}^{N} χ̂i⊤ Π⊤ Π( Σ_{j=1}^{ι} τj Qj + Σ_{j=1}^{l} λi^j (Γ⊤ Lj Aa + Aa⊤ Lj Γ) + Σ_{j=1}^{l} λi^{j+1} (Γ⊤ Lj Ab + Ab⊤ Lj Γ) + ε Σ_{j=1}^{l} λi^j (Γ⊤ Lj Γ) )Π⊤ Π χ̂i ≤ 0.    (13)
The set of LMIs in (9) implies (13), which concludes the proof.

This theorem allows proving the convergence of N agents with two sets of N − 1 parameter-dependent LMIs, whose matrix sizes are respectively n (i.e. the order of each agent taken alone) and ρ − 1. This result is already interesting, as it avoids using LMIs scaling with nN, the global system order. In the next section, we explore whether it is possible to further reduce the computational complexity.
V. VARIATION ON THE MAIN RESULT
We explore the possibility of using a generalised version
of the famous Kalman-Yakubovich-Popov (KYP) lemma [22].
This lemma allows turning a parameter-dependent LMI into a
parameter-independent one.
A. The Kalman-Yakubovich-Popov lemma

The Kalman-Yakubovich-Popov lemma, or KYP [22], is a widely celebrated result for dynamical systems that allows turning frequency-dependent inequalities into frequency-independent ones, by exploiting a state-space formulation. It
turns out that such a result can be adapted and generalised to
inequalities depending on any scalar parameter. Namely, we
will use the following generalised version of the KYP.
Lemma 8 (Generalized KYP [6]). Consider

    M(ξ) = M0 + Σ_{i=1}^{η} ξi Mi,    (14)

with ξ ∈ R^η a vector of decision variables and Mi = Mi⊤ ∈ R^{nM×nM}, i = 1, ..., η. The quadratic constraint

    φ(θ)⊤ M(ξ) φ(θ) ≺ 0   for θ ∈ [θ_min, θ_max]    (15)

is verified if and only if there exist D = D⊤ ≻ 0 and G = −G⊤ such that

    [C̃ D̃]⊤ M(ξ) [C̃ D̃] + [I 0; Ã B̃]⊤ [ −2D , (θ_min+θ_max)D + G ; (θ_min+θ_max)D − G , −2 θ_min θ_max D ] [I 0; Ã B̃] ≺ 0    (16)

with Ã, B̃, C̃ and D̃ such that

    φ(θ) = D̃ + C̃ θI (I − Ã θI)^{−1} B̃ = θI ⋆ [ Ã B̃ ; C̃ D̃ ],    (17)

where the operator ⋆ implicitly defined above is known as the Redheffer product [27].

The lemma applies as well if the sign ≺ in (15) is replaced by ⪯: in this case replace ≺ with ⪯ in (16) as well.
B. Second main result

Let us define

    λ_max = max_{2≤i≤N} λi,   λ_min = min_{2≤i≤N} λi.    (18)

Then, for θ ∈ [λ_min, λ_max], the following set of LMIs

    Σ_{j=1}^{l} θ^j Lj ≻ 0    (19)

    Π( Σ_{j=1}^{ι} τj Qj + Σ_{j=1}^{l} ( θ^j (Γ⊤ Lj Aa + Aa⊤ Lj Γ + Γ⊤ Lj Γ) + θ^{j+1} (Γ⊤ Lj Ab + Ab⊤ Lj Γ) ) )Π⊤ ⪯ 0    (20)

"embeds" the set of LMIs in (8) and (9) (notice that we have moved from a discrete set of values to a continuous interval which includes them all). Subsequently, Lemma 8 can be used to turn the θ-dependent LMIs in (19) and (20) into parameter-independent ones. The dependence of the terms in (19) and (20) on θ (which is ultimately λi) is polynomial, so we need to define

    φν(θ) = [ θ^{ceil((l+1)/2)} Iν, θ^{ceil((l+1)/2)−1} Iν, ..., Iν ]⊤    (21)

which requires

    Ãν = Uν ⊗ In,   B̃ν = [0_{(ν−1)×1}; 1] ⊗ In,   C̃ν = [Iν; 0_{1×ν}] ⊗ In,   D̃ν = [0_{ν×1}; 1] ⊗ In,    (22)

where Uν ∈ R^{ν×ν} is a matrix containing 1's in the first upper diagonal and 0's elsewhere, and ν = n for (19) and ν = ρ − 1 for (20). We are now ready to formulate our second main result.
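The role of the shift matrix in this realisation can be checked numerically in the scalar case n = 1 (our sketch, not from the paper): with Ã a ν × ν upper-shift matrix and B̃, C̃, D̃ structured as in (22), the transfer expression of (17) produces exactly a stacked vector of monomials in θ, the highest power being set by the size of the shift matrix:

```python
import numpy as np

nu = 4                                   # size parameter of phi_nu
U = np.eye(nu, k=1)                      # 1's on the first upper diagonal
A = U                                    # A_nu with n = 1 (kron with I_1)
B = np.zeros((nu, 1)); B[-1, 0] = 1.0
C = np.vstack([np.eye(nu), np.zeros((1, nu))])
D = np.zeros((nu + 1, 1)); D[-1, 0] = 1.0

def phi(theta):
    # phi(theta) = D + C * theta * (I - A*theta)^{-1} * B, cf. (17);
    # U is nilpotent, so (I - theta*U) is invertible for every theta.
    return D + C @ (theta * np.linalg.solve(np.eye(nu) - theta * A, B))

for theta in (0.0, 0.5, -2.0, 3.0):
    expected = np.array([[theta ** k] for k in range(nu, -1, -1)])
    assert np.allclose(phi(theta), expected)
print("phi realisation verified")
```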
Theorem 9. Consider (4) with given N, Aa, Ab and P satisfying Assumption 2; excluding the first eigenvalue of P, which is equal to 0, we have that λ_min ≤ λi ≤ λ_max, with i = 2, ..., N. If for a chosen l ∈ N there exist τi ∈ R and matrices Li = Li⊤ ∈ R^{n×n}, and there exist Dνk, Gνk ∈ R^{νk×νk}, Dνk = Dνk⊤ ≻ 0 and Gνk = −Gνk⊤ such that

    [∗]⊤ Mk [C̃νk D̃νk] + [∗]⊤ [ −2Dνk , (λ_min+λ_max)Dνk + Gνk ; ∗ , −2 λ_min λ_max Dνk ] [ Iνk 0 ; Ãνk B̃νk ] ≺ 0    (23)

(where [∗] denotes the repeated factor and ∗ the entry deducible by symmetry) for k = 1, 2, with ν1 = n and ν2 = ρ − 1, where Γ = [0_{n,1} In 0_{n,ρ−n−2}] (i.e. Γχi = xi), Π = [0_{ρ−1,1} I_{ρ−1}], and Ãνk, B̃νk, C̃νk, D̃νk, φνk are defined in (22) and (21), with M1 and M2 implicitly defined by

    φν1(λi)⊤ M1 φν1(λi) = − Σ_{j=1}^{l} λi^j Lj,    (24)

    φν2(λi)⊤ M2 φν2(λi) = Π( Σ_{j=1}^{ι} τj Qj + Σ_{j=1}^{l} ( λi^j (Γ⊤ Lj Aa + Aa⊤ Lj Γ + Γ⊤ Lj Γ) + λi^{j+1} (Γ⊤ Lj Ab + Ab⊤ Lj Γ) ) )Π⊤,    (25)

then lim_{t→∞} ||xi − xj|| = 0, ∀i, j ∈ {1, ..., N}.

Proof: A direct application of Lemma 8 for M1 and M2 implies that the hypotheses of Theorem 7 are satisfied whenever the hypotheses here are.

With this second theorem, we replace the two sets of N − 1 LMIs of size n and ρ − 1 with only two LMIs of matrix size n·ceil((l+3)/2) and ρ·ceil((l+3)/2). This is an interesting result because the computational complexity no longer depends on N, i.e. the number of agents. On the other hand, the choice of a bigger l will improve the chances of solving the LMIs for high values of N.
VI. EXAMPLES
In order to provide a few challenging examples of application of the proposed method, we focus on a problem that is widely studied in the nonlinear dynamics community, namely the synchronisation of oscillators [21], [25]. The approach here is of course numerical and different from (or complementary to) the ones found in that literature, where the objective is usually to find a control law and then analytically prove stability. Our approach is simply to propose a control law and test numerically whether it makes the subsystems converge or not. We consider two famous examples of nonlinear systems, namely the Van der Pol oscillator [11] and the Lorenz attractor [16].
A. Van der Pol oscillator
We consider a system of N agents with equations

    ẋi = yi
    ẏi = μ(1 − xi²)yi − xi − c ri    (26)

for which n = 2, with ri = −(xi−1 + yi−1) + 2(xi + yi) − (xi+1 + yi+1) (where the index is to be considered modulo N, i.e. 0 → N, N + 1 → 1). The interconnection between the oscillators is given by the term c, a proportional feedback gain. This feedback law has just been guessed, and we use Theorem 9 to check whether it works. We have coded the related LMI problem in Matlab using Yalmip [15], choosing μ = 0.5, N = 10, l = 6 and c = 15 (all arbitrary values). By using SeDuMi [24] as solver, we managed to find a feasible solution, which yields a valid formation Lyapunov function. Figure 1 and Figure 2 show the evolution of the system during a simulation, with the individual states shown as well as the value of the Lyapunov function over time.
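The convergence that the LMI certificate proves can also be observed in a plain simulation. The sketch below is ours (forward-Euler integration, a seeded random initial condition and the standard Van der Pol nonlinearity μ(1 − x²)y are our choices); it uses the values μ = 0.5, N = 10, c = 15 from the text and measures the spread between the agents:

```python
import numpy as np

mu, N, c = 0.5, 10, 15.0
rng = np.random.default_rng(1)
x = rng.uniform(-2, 2, N)
y = rng.uniform(-2, 2, N)

def coupling(v):
    # r_i = -v_{i-1} + 2 v_i - v_{i+1}, indices modulo N (ring topology).
    return 2 * v - np.roll(v, 1) - np.roll(v, -1)

dt, steps = 1e-3, 10_000          # 10 s of simulated time
spread0 = np.ptp(x) + np.ptp(y)
for _ in range(steps):
    r = coupling(x + y)
    dx = y
    dy = mu * (1 - x**2) * y - x - c * r
    x, y = x + dt * dx, y + dt * dy

spread = np.ptp(x) + np.ptp(y)
assert np.isfinite(x).all() and np.isfinite(y).all()
assert spread < 0.05 * spread0    # the agents have (numerically) converged
print(f"spread: {spread0:.3f} -> {spread:.2e}")
```

This only illustrates the behaviour for one initial condition; the formation Lyapunov function obtained from the LMIs is what proves convergence for all of them.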
B. Lorenz attractor

We now consider a system of N agents with equations

    ẋi = σl(yi − xi) − c rx,i
    ẏi = xi(ρl − zi) − yi − c ry,i
    żi = xi yi − βl zi − c rz,i    (27)

with r•,i = −•i−1 + 2•i − •i+1 (again the index is taken modulo N). We set arbitrarily ρl = 28, σl = 10, βl = 8/3, N = 8, l = 6 and c = 50. This time we used Theorem 7, successfully obtaining a formation Lyapunov function. Figure 3, Figure 4 and Figure 5 again show the evolution of the system during a simulation, with the individual states and the value of the Lyapunov function. Notice that the Lorenz oscillator does not converge to a limit cycle but to a chaotic trajectory.
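As for the Van der Pol case, the synchronisation of the coupled Lorenz systems can be observed in simulation. The sketch below is ours (forward-Euler with a small step and a seeded random initial condition are arbitrary choices) and uses ρl = 28, σl = 10, βl = 8/3, N = 8, c = 50 from the text:

```python
import numpy as np

sig, rho, beta, N, c = 10.0, 28.0, 8.0 / 3.0, 8, 50.0
rng = np.random.default_rng(2)
X = rng.uniform(-5, 5, (3, N))      # rows: x, y, z of the 8 agents

def coupling(v):
    # r_i = -v_{i-1} + 2 v_i - v_{i+1}, indices modulo N (ring topology).
    return 2 * v - np.roll(v, 1) - np.roll(v, -1)

dt, steps = 5e-4, 8_000             # 4 s of simulated time
spread0 = sum(np.ptp(row) for row in X)
for _ in range(steps):
    x, y, z = X
    dX = np.vstack([
        sig * (y - x) - c * coupling(x),
        x * (rho - z) - y - c * coupling(y),
        x * y - beta * z - c * coupling(z),
    ])
    X = X + dt * dX

spread = sum(np.ptp(row) for row in X)
assert np.isfinite(X).all()
assert spread < 1e-3   # agents synchronise on a common (chaotic) trajectory
print(f"spread: {spread0:.3f} -> {spread:.2e}")
```

Note that the common trajectory the agents agree on is itself chaotic, in line with the remark above.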
[Fig. 1. Evolution of the state of the 10 coupled Van der Pol oscillators.]

[Fig. 2. Value of the formation Lyapunov function V for the coupled Van der Pol oscillators.]

[Fig. 3. Evolution of the state of the 8 coupled Lorenz systems.]

[Fig. 4. Tridimensional visualisation of the state of some of the coupled Lorenz systems of the example (the trajectories eventually converge to the consensus trajectory).]

[Fig. 5. Value of the formation Lyapunov function V for the coupled Lorenz systems.]

VII. CONCLUSION

We have introduced a new method for proving convergence or consensus of multi-agent systems with polynomial dynamics. This method is a generalisation of the analysis methods in [18] and it has proven effective in test cases featuring dynamical oscillators. Further research will investigate whether convex controller synthesis results can be obtained with a similar approach.
REFERENCES
[1] B. Bamieh, F. Paganini, and M.A. Dahleh. Distributed control of
spatially invariant systems. IEEE Trans. Aut. Control, 47(7):1091–1107,
2002.
[2] R. D’Andrea and G.E. Dullerud. Distributed control design for spatially
interconnected systems. IEEE Trans. Aut. Control, 48(9):1478–1495,
2003.
6
[3] E.J. Davison and W. Gesing. Sequential stability and optimization of
large scale decentralized systems. Automatica, 15(3):307–324, 1979.
[4] O. Demir and J. Lunze. A decomposition approach to decentralized
and distributed control of spatially interconnected systems. In 18th
IFAC World Congress, volume 44, pages 9109–9114, Milan, Italy, 2011.
Elsevier.
[5] R. Diestel. Graph Theory. Springer, 1996.
[6] M. Dinh, G. Scorletti, V. Fromion, and E. Magarotto. Parameter dependent H∞ control by finite dimensional LMI optimization: application
to trade-off dependent control. International Journal of Robust and
Nonlinear Control, 15(9):383–406, 2005.
[7] A. Eichler, C. Hoffmann, and H. Werner. Robust control of decomposable LPV systems. Automatica, 50(12):3239–3245, 2014.
[8] J.A. Fax and R.M. Murray. Information flow and cooperative control of
vehicle formations. IEEE Trans. Aut. Control, 49(9), 2004.
[9] R. Ghadami and B. Shafai. Decomposition-based distributed control for
continuous-time multi-agent systems. IEEE Transactions on Automatic
Control, 58(1):258–264, 2013.
[10] G.H. Golub and C.F. Van Loan. Matrix Computations. John Hopkins
University Press, 3rd edition, 1996.
[11] R. Grimshaw. Nonlinear ordinary differential equations, volume 2. CRC
Press, 1991.
[12] C. Langbort, R.S. Chandra, and R. D’Andrea. Distributed control design
for systems interconnected over an arbitrary graph. IEEE Trans. Aut.
Control, 49(9):1502–1519, 2004.
[13] Z. Li, Z. Duan, and G. Chen. On H∞ and H2 performance regions of
multi-agent systems. Automatica, 47(4):797–803, 2011.
[14] Z. Li, Z. Duan, G. Chen, and L. Huang. Consensus of multiagent systems
and synchronization of complex networks: A unified viewpoint. IEEE
Trans. Circuits Syst. Regul. Pap., 57(1):213–224, 2010.
[15] J. Löfberg. Yalmip: a toolbox for modeling and optimization in
MATLAB. In Proc. of the CACSD Conference, Taipei, Taiwan, 2004.
[16] E.N. Lorenz. Deterministic nonperiodic flow. Journal of the Atmospheric
Sciences, 20(2):130–141, 1963.
[17] P. Massioni. Distributed control for alpha-heterogeneous dynamically
coupled systems. Systems & Control Letters, 72:30–35, 2014.
[18] P. Massioni and M. Verhaegen. Distributed control of vehicle formations:
a decomposition approach. In Proc. of the 47th IEEE Conference on
Decision and Control, pages 2906–2912. IEEE, 2008.
[19] P.A. Parrilo. Semidefinite programming relaxations for semialgebraic
problems. Mathematical programming, 96(2):293–320, 2003.
[20] A.P. Popov and H. Werner. A robust control approach to formation
control. In Proc. of the 10th European Control Conference, Budapest,
Hungary, August 2009.
[21] M. Pourmahmood, S. Khanmohammadi, and G. Alizadeh. Synchronization of two different uncertain chaotic systems with unknown parameters
using a robust adaptive sliding mode controller. Communications in
Nonlinear Science and Numerical Simulation, 16(7):2853 – 2868, 2011.
[22] A. Rantzer. On the Kalman-Yakubovich-Popov lemma. Systems &
Control Letters, 28(1):7–10, 1996.
[23] G. Scorletti and G. Duc. An LMI approach to decentralized H∞ control.
International Journal of Control, 74(3):211–224, 2001.
[24] J.F. Sturm. Using SeDuMi 1.02, a MATLAB toolbox for optimization
over symmetric cones. Optimization Methods and Software, 11–12:625–
653, 1999.
[25] L. Torres, G. Besançon, D. Georges, and C. Verde. Exponential nonlinear
observer for parametric identification and synchronization of chaotic
systems. Mathematics and Computers in Simulation, 82(5):836–846,
2012.
[26] M. Zakwan, Z. Binnit-e-Rauf, and M. Ali. Polynomial based fixed-structure controller for decomposable systems. In 13th International
Bhurban Conference on Applied Sciences and Technology (IBCAST),
pages 140–144. IEEE, 2016.
[27] K. Zhou, J.C. Doyle, and K. Glover. Robust and optimal control,
volume 40. Prentice Hall New Jersey, 1996.
FIT BUT Technical Report Series
Kamil Dudka, Lukáš Holı́k, Petr Peringer,
Marek Trtı́k, and Tomáš Vojnar
Technical Report No. FIT-TR-2015-03
Faculty of Information Technology, Brno University of Technology
Last modified: October 28, 2015
arXiv:1510.07995v1 [] 27 Oct 2015
From Low-Level Pointers
to High-Level Containers
NOTE: This technical report contains an extended version of the VMCAI’16 paper
with the same name.
From Low-Level Pointers to High-Level Containers
Kamil Dudka1 , Lukáš Holı́k1 , Petr Peringer1 , Marek Trtı́k2 , and Tomáš Vojnar1
1
FIT, Brno University of Technology
2
LaBRI, Bordeaux
Abstract. We propose a method that transforms a C program manipulating containers using low-level pointer statements into an equivalent program where the containers are manipulated via calls of standard high-level container operations like push_back or pop_front. The input of our method is a C program annotated by a special form of shape invariants, which can be obtained from current automatic shape analysers after a slight modification. The resulting program, where the low-level pointer statements are summarized into high-level container operations, is more understandable and (among other possible benefits) better suited for program analysis since the burden of dealing with low-level pointer manipulations is removed. We have implemented our approach and successfully tested it through a number of experiments with list-based containers, including experiments with simplification of program analysis by separating shape analysis from the analysis of data-related properties.
1 Introduction
We present a novel method that recognizes low-level pointer implementations of operations over containers in C programs and transforms them into calls of standard high-level container operations, such as push_back, insert, or is_empty. Unlike the related works that we discuss below, our method is fully automated and yet it guarantees preservation of the original semantics. Transforming a program by our method (or even just the recognition of the pointer code implementing container operations, which is a part of our method) can be useful in many different ways, including simplification of program analysis by separating shape and data-related analyses (as we show later on in the paper), automatic parallelization [10], optimization of garbage collection [20], debugging and automatic bug finding [2], profiling and optimizations [17], general understanding of the code, improvement of various software engineering tasks [6], detection of abnormal data structure behaviour [11], or construction of program signatures [4].
We formalize the main concepts of our method instantiated for NULL-terminated
doubly-linked lists (DLLs). However, the concepts that we introduce can be generalized
(as we discuss towards the end of the paper) and used to handle code implementing other
kinds of containers, such as singly-linked lists, circular lists, or trees, as well.
We have implemented our method and successfully tested it through a number
of experiments with programs using challenging pointer operations. Our benchmarks
cover a large variety of program constructions implementing containers based on NULL-terminated DLLs. We have also conducted experiments showing that our method can
be instantiated to other kinds of containers, namely circular DLLs as well as DLLs
with head/tail pointers (our implementation is limited to various kinds of lists due to
typedef struct SNode {
  int x;
  struct SNode *f;
  struct SNode *b;
} Node;
#define NEW(T) (T*)malloc(sizeof(T))

 1  Node *h=0, *t=0;
 2  while (nondet()) {
 3    Node *p=NEW(Node);
 4    if (h==NULL)
 5      h=p;
 6    else
 7      t->f=p;
 8    p->f=NULL;
 9    p->x=nondet();
10    p->b=t;
11    t=p;
12  }
    ...
13  Node *p=h;
14  while (p!=NULL) {
15    p->x=0;
16    p=p->f;
17  }

(a)

list L;
while (nondet()) {
  Node *p=NEW(Node);
  p->x=nondet();
  L=push_back(L,p);
}
...
Node *p=front(L);
while (p!=NULL) {
  p->x=0;
  p=next(L,p);
}

(b)

[(c): a fragment of the CFG of the code in (a) for lines 1-12, annotated with SMG-based shape invariants; the graphical content is omitted here.]

Fig. 1. A running example. (a) A C code using low-level pointer manipulations. (b) The transformed pseudo-C++ code using container operations. (c) A part of the CFG of the low-level code from Part (a) corresponding to lines 1-12, annotated by shape invariants.
limitations of the shape analyser used). We further demonstrate that our method can
simplify verification of pointer programs by separating the issue of shape analysis from
that of verification of data-related properties. Namely, we first obtain shape invariants
from a specialised shape analyser (Predator [7] in our case), use them within our method to
transform the given pointer program into a container program, and then use a tool that
specialises in verification of data-related properties of container programs (for which
we use the J2BP tool [14,15]).
Overview of the proposed method. We demonstrate our method on a running example
given in Fig. 1(a). It shows a C program that creates a DLL of non-deterministically
chosen length on lines 2–12 and then iterates through all its elements on lines 13–
17. Fig. 1(b) shows the code transformed by our method. It is an equivalent C++-like
program where the low-level pointer operations are replaced by calls of container operations which they implement. Lines 4–8, 10, and 11 are identified as push back (i.e.,
insertion of an element at the end of the list), line 13 as setting an iterator to the first
element of a list, and line 16 as a shift of the iterator.
The core of our approach is recognition of low-level pointer implementations of
destructive container operations, i.e., those that change the shape of the memory, such
as push back in our example. In particular, we search for control paths along which
pieces of the memory evolve in a way corresponding to the effect of some destructive
container operations. This requires (1) a control-flow graph with edges annotated by
2
an (over)approximation of the effect of program statements on the memory (i.e., their
semantics restricted to the reachable program configurations) and (2) a specification of
the operational semantics of the container operations that are to be searched for.
We obtain an approximation of the effect of program statements by extending current methods of shape analysis. These analyses are capable of inferring a shape invariant
for every node of the control-flow graph (CFG). The shape invariants are based on using various abstract objects to represent concrete or summarized parts of memory. For
instance, tools based on separation logic [13] use points-to and inductive predicates;
TVLA [19] uses concrete and summary nodes; the graph-based formalism of [7] uses
regions and list-segments; and sub-automata represent list-like or tree-like structures
in [8]. In all these cases, it is easy to search for configurations of abstract objects that
may be seen as having a shape of a container (i.e., a list-like container, a tree-like container, etc.) within every possible computation. This indicates that the appropriate part
of memory may be used by the programmer to implement a container.
To confirm this hypothesis, one needs to check that this part of memory is actually manipulated as a container of the appropriate type across all statements that work
with it. Additional information about the dynamics of memory changes is needed. In
particular, we need to be able to track the lifecycle of each part of the memory through
the whole computation, to identify its abstract encodings in successive abstract configurations, and by comparing them, to infer how the piece of the memory is changing. We
therefore need the shape analyser to explicitly tell us which abstract objects of a successor configuration are created from which abstract objects of a predecessor configuration
or, in other words, which abstract objects in the predecessor configuration denote parts
of the memory intersecting with the denotation of an object in the successor configuration. We say that the former objects are transformed into the latter ones, and we call the
relationship a transformation relation. A transformation relation is normally not output
by shape analysers; however, tools such as Predator [7] (based on SMGs), Slayer [1]
(based on separation logic), or Forester [8] (based on automata) actually work with it at
least implicitly when applying abstract transformers. We only need them to output it.
The above concepts are illustrated in Fig. 1(c). It shows a part of the CFG of the
program from Fig. 1(a) with lines annotated by the shape invariant in the round boxes
on their right. The invariant is expressed in the form of so-called symbolic memory
graphs (SMGs), the abstract domain of the shape analyser Predator [7], simplified to
a bare minimum sufficient for exposing main concepts of our method. The basic abstract objects of SMGs are shown as the black circles in the figure. They represent
continuous memory regions allocated by a single allocation command. Every region
has a next selector, shown as the line on its right-top leading into its target region on
the right, and a prev selector, shown as a line on its left-bottom leading to the target region on the left. The ⊥ stands for the value NULL. Pairs of regions connected by a double link represent the second type of abstract object, so-called doubly-linked segments (DLSs).
They represent doubly-linked lists of an arbitrary length connecting the two regions.
The dashed envelope indicates a memory that has the shape of a container, namely
of a NULL-terminated doubly-linked list. The transformation relation between objects of
successive configurations is indicated by the dashed lines. Making the tool Predator [7]
output it was easy, and we believe that it would be easy for other tools as well.
Further, we propose a specification of the operational semantics of the container operations which has the same form as the discussed approximation of the operational semantics of the program. It consists of an input and an output symbolic configuration, with abstract objects related by a transformation relation. For example, Fig. 2 shows a specification of push_back as an operation which appends a region pointed to by a variable y to a doubly-linked list pointed to by x. The left pair specifies the case when the input DLL is empty, the right pair the case when it is not.

[Fig. 2. Specification of z = push_back(x, y).]

To find an implementation of thus specified push_back, semantic annotations of the CFG are searched for chains of the transformation relation matching the specification. That is, they start and end with configurations that include the input and the output of the push_back specification, respectively, and the composition of the transformation relation between these configurations matches the transformation relation specified.

In Fig. 1(c), one of the chains implementing push_back is shown as the sequence of greyed configurations. It matches the case of the non-empty input DLL on the right of Fig. 2. Destructive program statements within the chain implementing the found operation are candidates for replacement by a call of the container operation. In the figure, lines 7, 8, 10 are candidates for replacement by L=push_back(L,p). However, the replacement can be done only if a set of chains is found that together gives a consistent image of the use of containers in the whole program. In our example, it is important that on the left of the greyed chain there is another chain implementing the case of push_back for empty DLLs (matching the left part of the specification in Fig. 2).
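To make the notion of "memory in the shape of a NULL-terminated DLL" concrete, the following sketch (ours, much simpler than the SMG-based machinery of the paper) checks the property on a concrete heap, where each node has forward (f) and backward (b) selectors as in the running example:

```python
class Node:
    def __init__(self, x=0):
        self.x, self.f, self.b = x, None, None

def is_null_terminated_dll(head):
    """Check that the nodes reachable from head via f form a
    NULL-terminated doubly-linked list with consistent back links."""
    prev, cur, seen = None, head, set()
    while cur is not None:
        if id(cur) in seen:          # a cycle: not NULL-terminated
            return False
        seen.add(id(cur))
        if cur.b is not prev:        # back link must point to the predecessor
            return False
        prev, cur = cur, cur.f
    return True

# Build a 3-element DLL the same way as lines 2-12 of the running example.
h = t = None
for v in (1, 2, 3):
    p = Node(v)
    if h is None:
        h = p
    else:
        t.f = p
    p.b = t
    t = p

assert is_null_terminated_dll(h)
t.b = h          # corrupt one back link: no longer a well-formed DLL
assert not is_null_terminated_dll(h)
print("DLL shape check works")
```

A shape analyser establishes this property for all reachable configurations at once, over abstract objects rather than concrete nodes; the check above only illustrates what the dashed "container shape" envelope in Fig. 1(c) denotes.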
After identifying containers and destructive container operations as discussed above,
we search for implementations of non-destructive operations (like iterators or emptiness
tests). This leads to replacement of lines 13 and 16 in Fig. 1(a) by the initialization and
shift of the iterator shown on the same lines in Fig. 1(b). This step is much simpler,
and we will only sketch it in the paper. We then simplify the code using standard static
analysis. In the example, the fact that h and t become dead variables until line 13 leads
to removing lines 4, 5, 6, and 11.
Our method copes even with cases when the implementation of a container operation is interleaved with other program statements provided that they do not interfere
with the operation (which may happen, e.g., when a manipulation of several containers
is interleaved). Moreover, apart from the container operations, arbitrary low-level operations can be used over the memory structures linked in the containers provided they
do not touch the container linking fields.
Related work. Many dynamic analyses for the recognition of heap data structures have been proposed, such as, e.g., [9,12,16,17,22,4]. These approaches are typically based on observing program executions and matching the observed heap structures against a knowledge base of predefined structures. Various kinds of data structures can be recognised, including various kinds of lists, red-black trees, B-trees, etc. The main purposes of the recognition include reverse engineering, program understanding, and profiling. Nevertheless, these approaches do not strive to be so precise that the inferred information could be used for safe, fully automatic code replacement.
There exist static analyses with similar targets as the dynamic analyses above. Out
of them, the closest to us is probably the work [5]. Its authors also target transformation of a program with low-level operations into high-level ones. However, their aim is
program understanding (design recovery), not generation of an equivalent “executable”
program. Indeed, the result does not even have to be a program; it can be a natural language description. Heap operations are recognised on a purely syntactical level, using
a graph representation of the program on which predefined rewriting rules are applied.
Our work is also related to the entire field of shape analysis, which provides the
input for our method. Due to a lack of space, we cannot give a comprehensive overview
here (see, e.g., [7,8,1] for references). Nevertheless, let us note that there is a line of
works using separation-logic-based shape analysis for recognition of concurrently executable actions (e.g., [18,21]). However, recognizing such actions is a different task
than recognizing low-level implementation of high-level container usage.
In summary, to the best of our knowledge, our work is the first one which targets
automatic replacement of a low-level, pointer-manipulating code by a high-level one,
with guarantees of preserving the semantics.
2 Symbolic Memory Graphs with Containers
We now present an abstract domain of symbolic memory graphs (SMGs), originally
introduced in [7], which we use for describing shape invariants of the programs being
processed. SMGs are a graph-based formalism corresponding to a fragment of separation logic capable of describing classes of heaps with linked lists. We present their
simplified version restricted to dealing with doubly-linked lists, sufficient for formalising the main concepts of our method. Hence, nodes of our SMGs represent either
concrete memory regions allocated by a single allocation statement or doubly-linked
list segments (DLSs). DLSs arise by abstraction and represent sets of doubly-linked
sequences of regions of an arbitrary length. Edges of SMGs represent pointer links.
In [7], SMGs are used to implement a shape analysis within the generic framework
of abstract interpretation [3]. We use the output of this shape analysis, extended with
a transformation relation, which provides us with precise information about the dynamics of the memory changes, as a part of the input of our method (cf. Section 4). Further,
in Section 3, we propose a way how SMGs together with a transformation relation can
be used to specify the container operations to be recognized.
Before proceeding, we note that our use of SMGs can be replaced by the use of other domains common in the area of shape analysis (as mentioned already in the introduction
and further discussed in Section 7).
Symbolic memory graphs. We use > to explicitly denote undefined values of functions. We call a region any block of memory allocated as a whole (e.g., using a single
malloc() statement), and we denote by ⊥ the special null region. For a set A, we use
A⊥ , A> , and A⊥,> to denote the sets A ∪ {⊥}, A ∪ {>}, and A ∪ {⊥, >}, respectively.
Values stored in regions can be accessed through selectors (such as next or prev). To
simplify the presentation, we assume dealing with pointer and integer values only.
For the rest of the paper, we fix sets of pointer selectors S p , data selectors Sd ,
regions R, pointer variables V p , and container variables Vc (container variables do not appear in the input programs; they get introduced by our transformation procedure to
denote parts of memory which the program treats as containers). We assume all these
sets to be pairwise disjoint and disjoint with Z⊥,> . We use V to denote the set V p ∪ Vc
of all variables, and S to denote the set S p ∪ Sd of all selectors.
A doubly-linked list segment (DLS) is a pair (r, r0 ) ∈ R × R of regions that abstracts
a doubly-linked sequence of regions of an arbitrary length that is uninterrupted by any
external pointer pointing into the middle of the sequence and interconnects the front
region represented by r with the back region r0 . We use D to denote the set of all DLSs and assume that R ∩ D = ∅. Both regions and DLSs will be called objects.
To illustrate the above, the top-left part of Fig. 3 shows a memory layout with five regions (black circles), four of which form a NULL-terminated DLL. The bottom-left part of Fig. 3 shows a sequence of three doubly-linked regions abstracted into a DLS (depicted as a pair of regions linked via the “spring”). Note that we could also abstract all four doubly-linked regions into a single DLS.
Fig. 3. A DLL and an SDLL, a PC and an SPC.
We can now define a symbolic memory graph (SMG) formally. It is a triple G = (R, D, val) consisting of a set R ⊆ R of regions, a set D ⊆ (R × R) ∩ D of DLSs, and a map
val defining the pointer and data fields of regions in R. It assigns to every pointer selector
sel p ∈ S p a function val(sel p ) : R → R⊥,> that defines the successors of every region
r ∈ R. Further, it assigns to every data selector seld ∈ Sd a function val(seld ) : R → Z>
that defines the data values of every region r ∈ R. We will sometimes abuse the notation
and write simply sel(r) to denote val(sel)(r). An SMG G0 = (R0 , D0 , val0 ) is a sub-SMG
of G, denoted G0 ⊑ G, if R0 ⊆ R, D0 ⊆ D, and val0 (sel) ⊆ val(sel) for all sel ∈ S.
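For concreteness, the simplified SMG domain above can be mirrored in a few lines of Python. This is an illustrative encoding of ours, not the authors' implementation: an SMG is a dict with a region set, a set of DLSs (region pairs), and selector maps; the names BOT, TOP, sel, and is_sub_smg are assumptions.

```python
BOT, TOP = "bot", "top"  # stand-ins for the null region and the undefined value

def sel(smg, selector, r):
    """val(sel)(r); selectors left unspecified evaluate to the undefined value."""
    return smg["val"].get(selector, {}).get(r, TOP)

def is_sub_smg(g1, g2):
    """G1 is a sub-SMG of G2 iff R1 is a subset of R2, D1 of D2, and
    val1(sel) of val2(sel) for every selector."""
    return (g1["regions"] <= g2["regions"]
            and g1["dlss"] <= g2["dlss"]
            and all(sel(g2, s, r) == v
                    for s, m in g1["val"].items()
                    for r, v in m.items()))
```
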
Container shapes. We now proceed to defining a notion of container shapes that we will
be looking for in shape invariants produced by shape analysis and whose manipulation
through given container operations we will be trying to recognise. For simplicity, we
restrict ourselves to NULL-terminated DLLs. However, in our experimental section, we
present results for some other kinds of list-shaped containers too. Moreover, at the end
of the paper, we argue that a further generalization of our approach is possible. Namely,
we argue that the approach can work with other types of container shapes as well as on
top of other shape domains.
A symbolic doubly-linked list (SDLL) with a front region r and a back region r0 is
an SMG in the form of a sequence of regions possibly interleaved with DLSs, interconnected so that it represents a DLL. Formally, it is an SMG G = (R, D, val) where
R = {r1 , . . . , rn }, n ≥ 1, r1 = r, rn = r0 , and for each 1 ≤ i < n, either next(ri ) = ri+1
and prev(ri+1 ) = ri , or (ri , ri+1 ) ∈ D and next(ri ) = prev(ri+1 ) = >. An SDLL which is
NULL-terminated, i.e., with prev(r) = ⊥ and next(r0 ) = ⊥, is called a container shape
(CS). We write csh(G) to denote the set of all CSs G0 that are sub-SMGs of an SMG G.
The bottom-right part of Fig. 3 contains an SDLL connecting a DLS and a region. It is
NULL-terminated, hence a CS, which is indicated by the dashed envelope.
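The SDLL and CS conditions can be checked mechanically. The following is a hedged Python sketch over our own dict encoding of the simplified domain (regions, DLSs as region pairs, selector maps); the function names are assumptions, not the authors' code.

```python
BOT, TOP = "bot", "top"  # stand-ins for the null region and the undefined value

def sel(smg, selector, r):
    return smg["val"].get(selector, {}).get(r, TOP)

def is_sdll(smg, seq):
    """Does the region sequence seq form an SDLL with front seq[0] and back seq[-1]?
    Consecutive objects are either linked via next/prev or folded into a DLS."""
    if not seq:
        return False
    for a, b in zip(seq, seq[1:]):
        plain = sel(smg, "next", a) == b and sel(smg, "prev", b) == a
        folded = ((a, b) in smg["dlss"]
                  and sel(smg, "next", a) == TOP and sel(smg, "prev", b) == TOP)
        if not (plain or folded):
            return False
    return True

def is_container_shape(smg, seq):
    # a CS is a NULL-terminated SDLL: prev(front) and next(back) are null
    return (is_sdll(smg, seq)
            and sel(smg, "prev", seq[0]) == BOT
            and sel(smg, "next", seq[-1]) == BOT)
```
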
Symbolic program configurations. A symbolic program configuration (SPC) is a pair
(G, σ) where G = (R, D, val) is an SMG and σ : (V p → R⊥,> ) ∪ (Vc → csh(G)> ) is
a valuation of the variables. An SPC C0 = (G0 , σ0 ) is a sub-SPC of an SPC C = (G, σ),
denoted C0 ⊑ C, if G0 ⊑ G and σ ⊆ σ0 . The bottom-right part of Fig. 3 depicts an SPC
with pointer variables h and t positioned next to the regions σ(h) and σ(t) they evaluate
to. The figure further shows a variable L positioned next to the CS σ(L) it evaluates to.
The top-right part of Fig. 3 is a PC as it has no DLSs. Examples of other SPCs are shown
in the annotations of program locations in Fig. 1(c) (and also Fig. 5 in Appendix E).
Additional notation. For an SMG or an SPC X, we write reg(X) to denote the set of its
regions, and obj(X) to denote the set of all its objects (regions and DLSs). A (concrete)
memory graph (MG), program configuration (PC), or doubly-linked list (DLL) is an
SMG, SPC, or SDLL, respectively, whose set of DLSs is empty, i.e., no abstraction is
involved. A bijection π : R → R is called a region renaming. For an SMG, SPC, or
a variable valuation x, we denote by π(x) the structure arising from x by replacing each
occurrence of r ∈ R by π(r). A bijection λ : V → V is called a variable renaming, and
we define λ(x) analogously to π(x).
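Region renaming can be sketched as follows (illustrative Python over our dict encoding of SMGs; π is given as a dict, and selector values that are not regions, such as null, undefined, or data values, are kept unchanged):

```python
def rename_regions(smg, pi):
    """Apply a region renaming (a bijection given as a dict) to an SMG."""
    ren = lambda v: pi.get(v, v)  # non-region values fall through unchanged
    return {"regions": {pi[r] for r in smg["regions"]},
            "dlss": {(pi[a], pi[b]) for a, b in smg["dlss"]},
            "val": {s: {pi[r]: ren(v) for r, v in m.items()}
                    for s, m in smg["val"].items()}}
```
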
Abstraction and concretization. We now formalize the standard pair of abstraction and
concretization functions used in abstract interpretation for our domains of MGs and
SMGs. An SMG G is an abstraction of an MG g iff it can be obtained via the following
three steps: (i) Renaming regions of g by some region renaming π (making the semantics of an SMG closed under renaming). (ii) Removing some regions (which effectively
removes some constraint on a part of the memory, thus making its representation more
abstract). (iii) Folding some DLLs into DLSs (abstracting away some details of the
internal structure of the DLLs). In particular, a DLL l with a front region r and a back
region r0 may be folded into a DLS dl = (r, r0 ) by removing the inner regions of l (we say
that these regions get folded into dl ), removing the next-value of r and the prev-value of
r0 (unless r = r0 ), and adding dl into the set DG of DLSs of G.
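Step (iii), the folding of a DLL into a DLS, can be sketched as follows (illustrative Python over our dict encoding of SMGs, not the authors' code): the inner regions are dropped, the next-value of the front and the prev-value of the back are removed, and the pair is recorded as a DLS.

```python
def fold_dll(smg, seq):
    """Fold the DLL given by the region sequence seq (front..back) into a DLS."""
    front, back = seq[0], seq[-1]
    inner = set(seq[1:-1])
    # drop selector values of the folded inner regions
    val = {s: {r: v for r, v in m.items() if r not in inner}
           for s, m in smg["val"].items()}
    if front != back:
        val.get("next", {}).pop(front, None)  # remove the next-value of the front
        val.get("prev", {}).pop(back, None)   # remove the prev-value of the back
    return {"regions": smg["regions"] - inner,
            "dlss": smg["dlss"] | {(front, back)},
            "val": val}
```
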
Now, let g be a component of a PC (g, σ). The PC may be abstracted into an SPC
(G, σ0 ) by (a) forgetting values of some variables x ∈ dom(σ), i.e., setting σ0 (x) to >,
and (b) abstracting g into G by Steps (i–iii) above. Here, Step (i) is augmented by
redirecting every σ0 (x) to π(σ(x)), Step (ii) is allowed to remove only regions that are
neither in dom(σ0 ) nor in any CS that is in dom(σ0 ), and Step (iii) may fold a DLL into
a DLS only if none of its inner regions is in dom(σ0 ), redirecting values of container
variables from the original CSs to the ones that arise from it by folding.
The concretization of an SPC (SMG) X is then the set JXK of all PCs (MGs, resp.) that can be abstracted to X. When 𝒳 is a set of SPCs (SMGs), then J𝒳K = ⋃X∈𝒳 JXK.
The left part of Fig. 3 shows an abstraction of an MG (top) into an SMG (bottom).
Step (i) renames all regions in the MG into the regions of the SMG, Step (ii) is not
applied, and Step (iii) folds the three left-most regions of the DLL into a DLS. The arrows labelled repre show the so-called assignment of representing objects defined below.
3 Operations and their Specification
In this section, we introduce a notion of operations and propose their finite encoding in
the form of the so-called symbolic operations. Symbolic operations play a crucial role in
our approach since they are used to describe both inputs of our algorithm for
recognition of high-level container operations in low-level code. In particular, on one
hand, we assume a (slightly extended) shape analyser to provide us with a CFG of the
program being processed annotated by symbolic operations characterizing the effect of
the low-level pointer statements used in the program (as discussed in Section 4). On the
other hand, we assume the high-level container operations whose effect—implemented
by sequences of low-level pointer statements—is to be sought along the annotated CFG
to be also described as symbolic operations. This can either be done by the users of
the approach (as discussed at the end of this section), or a library of typical high-level
container operations can be pre-prepared.
Below, we concentrate in particular on destructive container operations, i.e., those
container operations which change the shape of the heap. Non-destructive container
operations are much easier to handle, and we discuss them at the end of Section 5.
Operations and Symbolic Operations. We define an operation as a binary relation δ on
PCs capturing which input configurations are changed to which output configurations
by executing the operation. The individual pairs u = (c, c0 ) ∈ δ relating one input and
one output configuration are called updates. Operations corresponding to pointer statements or container operations relate infinitely many different input and output configurations, hence they must be represented symbolically. We therefore define a symbolic
update as a triple U = (C, ;,C0 ) where C = (G, σ), C0 = (G0 , σ0 ) are SPCs, and ;
is a binary relation over objects (regions and DLSs) called transformation relation.
A symbolic operation is then simply a (finite) set ∆ of symbolic updates.
Symbolic updates will be used to search for implementation of destructive container
operations based on changes of the SMGs labelling the given CFG. To be able to do
this with enough precision, symbolic updates must describe the “destructive” effect
that the operation has on the shape of the memory (addition/removal of a region or
a change of a selector value). For this, we require the semantics of a symbolic update
to be transparent, meaning that every destructive change caused by the operation is
explicitly and unambiguously visible in the specification of the operation (i.e., it cannot,
e.g., happen in an invisible way somewhere inside a DLS). On the other hand, we are not
interested in how the code modifies data values of regions. The semantics of a symbolic
update thus admits their arbitrary changes.
Semantics of symbolic updates. To define the semantics of symbolic operations, we need to distinguish the abstract object (region or DLS) of an SPC C = (G, σ) that represents a region
r of a PC c = (g, σ0 ) ∈ JCK. Recall that G arises by abstracting g by Steps (i–iii). Let π
be the region renaming used in Step (i). We define the representing object repre(r) of r
in C as (1) the region π(r) if π(r) ∈ reg(G), (2) > if π(r) is removed in G by Step (ii),
and (3) the DLS d if π(r) is folded into d ∈ obj(G) in Step (iii). We use c ∈repre JCK to
denote that the function repre is an assignment of representing objects of C to regions
of c. The inverse repre−1 (o) gives the set of all regions of c that are represented by the
object o ∈ obj(C). Notice that the way in which g is abstracted to G by Steps (i–iii) is not
necessarily unique, hence the assignment repre is not unique either. The right part of
Fig. 3 shows an example of abstraction of a PC c (top) to an SPC C (bottom), with the
assignment of representing objects repre shown via the top-down arrows.
Using this notation, the semantics of a symbolic update U = (C, ;,C0 ) can be defined as the operation JUK which contains all updates u = (c, c0 ) such that:
1. c ∈repre JCK and c0 ∈repre0 JC0 K.
2. An object o ∈ obj(C) transforms into an object o0 ∈ obj(C0 ), i.e., o ; o0 , iff the
denotations repre−1 (o) and (repre0 )−1 (o0 ) share some concrete region, i.e., ∃r ∈
reg(c) ∩ reg(c0 ) : repre(r) = o ∧ repre0 (r) = o0 .
3. The semantics is transparent: (i) each selector change is explicit, i.e., if sel(r)
in c differs from sel(r) in c0 for a region r ∈ reg(c) ∩ reg(c0 ), then repre(r) ∈
reg(C) and repre0 (r) ∈ reg(C0 ) are regions such that sel(repre(r)) in C differs from
sel(repre0 (r)) in C0 ; (ii) every deallocation is explicit meaning that if a region r of
c is removed (i.e., it is not a region of c0 ), then repre(r) is a region (not a DLS) of
C; (iii) every allocation is explicit meaning that if a region r of c0 is added (i.e., it is
not a region of c), then repre0 (r) is a region of C0 .
The semantics of a symbolic operation ∆ is naturally defined as J∆K = ⋃U∈∆ JUK.
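Point 2 above determines the transformation relation from the two assignments of representing objects. A minimal sketch (ours, not the paper's implementation), with repre maps from concrete regions to abstract objects:

```python
def induced_transformation(repre_in, repre_out):
    """An object o transforms into o' iff some region shared by the input and
    output PC is represented by o on the input side and by o' on the output side."""
    shared = set(repre_in) & set(repre_out)
    return {(repre_in[r], repre_out[r]) for r in shared}
```

Note that a freshly allocated region appears only in the output map, so its representing object has no predecessor in the relation, matching the push back example discussed below.
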
An example of a symbolic update is shown e.g. in Fig. 1(c), on the right of the
program edge between locations 3 and 4. It consists of the right-most SPCs attached to
these locations (denote them as C = (G, σ) and C0 = (G0 , σ0 ), and their DLSs as d and d 0 ,
respectively) and the transformation relation between their objects denoted by the dotted
vertical lines. The allocation done between the considered locations does not touch the
DLSs, it only adds a new region pointed to by p. This is precisely expressed by the
symbolic update U = (C, ;,C0 ) where ; = {(σ(L), σ0 (L))}. The relation ; (the dotted
line between objects σ(L) and σ0 (L)) says that, for every update from JUK, denotations
of the two DLSs d and d 0 share regions (by Point 2 above). By Point 3, there are no
differences in pointer links between the DLLs encoded by the two DLSs; the DLLs
encoded by d and the ones encoded by d 0 must be identical up to values of data fields.
The only destructive change that appears in the update is the addition of the freshly
allocated region σ0 (p) that does not have a ;-predecessor (due to Point 3(iii) above).
User Specification of Destructive Container Operations. As stated already above, we
first concentrate on searching for implementations of user-specified destructive container operations in low-level code. In particular, we consider non-iterative container
operations, i.e., those that can be implemented as non-looping sequences of destructive
pointer updates, region allocations, and/or de-allocations.1
We require the considered destructive container operations to be operations δy=op(x)
that satisfy the following requirements: (1) Each δy=op(x) is deterministic, i.e., it is
a function. (2) The sets x = x1 , ..., xn ∈ Vn and y = y1 , ..., ym ∈ Vm , n, m ≥ 0, are the
input and output parameters of the operation so that for every update ((g, σ), (g0 , σ0 )) ∈
δy=op(x) , the input PC has dom(σ) = {x1 , . . . , xn } and the output PC has dom(σ0 ) =
{y1 , . . . , ym }. (3) Since we concentrate on destructive operations only, the operation
1 Hence, e.g., an implementation of a procedure inserting an element into a sorted list, which
includes the search for the element, will not be understood as a single destructive container
operation, but rather as a procedure that calls a container iterator in a loop until the right place
for the inserted element is found, and then calls a destructive container operation that inserts
the given region at a position passed to it as a parameter.
does not modify data values, i.e., δy=op(x) ⊆ δconst where δconst contains all updates that
do not change data values except for creating an unconstrained data value or destroying
a data value when creating or destroying some region, respectively.
Container operations δy=op(x) of the above form can be specified by a user as symbolic operations, i.e., sets of symbolic updates, ∆y=op(x) such that δy=op(x) = J∆y=op(x) K∩
δconst . Once constructed, such symbolic operations can form a reusable library.
For instance, the operation δz=push back(x,y) can be specified as a symbolic operation
∆z=push back(x,y) which inputs a CS referred to by variable x and a region pointed to by y
and outputs a CS referred to by z. This symbolic operation is depicted in Fig. 2. It consists
of two symbolic updates in which the user relates possible initial and final states of the
memory. The left one specifies the case when the input container is empty, the right one
the case when it is nonempty.
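For intuition, the destructive effect that such a specification captures can be replayed at the pointer level on a concrete DLL. The following Python sketch (our model, not the paper's formalism) mirrors the two cases of Fig. 2, with selector maps as dicts and BOT standing for the null value:

```python
BOT = "bot"  # stand-in for the null value

def push_back(val, front, back, y):
    """Pointer-level effect of z = push back(x, y) on a concrete NULL-terminated
    DLL with the given front/back regions; front is None encodes the empty
    container. Mutates the selector maps and returns the new (front, back)."""
    if front is None:          # left symbolic update: empty input container
        val["prev"][y] = BOT
        val["next"][y] = BOT
        return y, y
    val["next"][back] = y      # next-selector of the old back region := y
    val["prev"][y] = back      # prev-selector of y := old back region
    val["next"][y] = BOT       # next-selector of y := null
    return front, y
```
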
4 Annotated Control Flow Graphs
In this section, we describe the semantic annotations of a control-flow graph that our procedure for recognizing implementations of high-level container operations in low-level code operates on.
Control-flow graph. A control flow graph (CFG) is a tuple cfg = (L, E, `I , `F ) where
L is a finite set of (control) locations, E ⊆ L × Stmts × L is a set of edges labelled by
statements from the set Stmts defined below, `I is the initial location, and `F the final
location. For simplicity, we assume that any two locations `, `0 are connected by at most
one edge ⟨`, stmt, `0 ⟩.
The set of statements consists of pointer statements stmt p ∈ Stmts p , integer data
statements stmtd ∈ Stmtsd , container statements stmtc ∈ Stmtsc , and the skip statement
skip, i.e., Stmts = Stmts p ∪ Stmtsd ∪ Stmtsc ∪ {skip}. The container statements and
the skip statement do not appear in the input programs; they are generated by our transformation procedure. The statements from Stmts are generated by the following grammar (we present a simplified, minimalistic form to ease the presentation):
stmt p ::= p = (p | p->s | malloc() | ⊥) | p->s = p | free(p) | p == (p | ⊥) | p != (p | ⊥)
stmtd ::= p->d = (n | p->d) | p->d == p->d | p->d != p->d
stmtc ::= y = op(x)
Above, p ∈ V p , s ∈ S p , d ∈ Sd , n ∈ Z, and x, y ∈ V∗ .
For each stmt ∈ Stmts p ∪ Stmtsd , let δstmt be the operation encoding its standard
C semantics. For example, the operation δx=y->next contains all updates (c, c0 ) where
c = (g, σ) is a PC s.t. σ(y) 6= > and c0 is the same as c up to the variable x that is
assigned the next-successor of the region pointed to by y. For each considered container
statement stmt ∈ Stmtsc , the operation δstmt is to be specified by the user. Let p =
e1 , . . . , en where ei = ⟨`i−1 , stmti , `0i ⟩, 1 ≤ i ≤ n, be a sequence of edges of cfg. We call p
a control flow path if `0i = `i for each 1 ≤ i < n. The semantics JpK of p is the operation
δstmtn ◦ · · · ◦ δstmt1 .
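With operations modeled as sets of (input, output) pairs, the path semantics is plain relational composition; a small sketch (our illustration, with configurations as opaque values):

```python
from functools import reduce

def compose(d2, d1):
    """Relational composition: (c, c'') is in d2 after d1 iff some c' links them."""
    return {(c0, c2) for (c0, c1) in d1 for (c1b, c2) in d2 if c1 == c1b}

def path_semantics(deltas):
    """Semantics of a control flow path: compose the operations of its
    statements in order, delta_n after ... after delta_1."""
    return reduce(lambda acc, d: compose(d, acc), deltas[1:], deltas[0])
```
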
A state of a computation of a CFG cfg is a pair (l, c) where l is a location of cfg
and c is a PC. A computation of cfg is a sequence of states φ = (`0 , c0 ), (`1 , c1 ), . . .
of length |φ| ≤ ∞ where there is a (unique) edge ei = ⟨`i , stmti , `i+1 ⟩ ∈ E such that
(ci , ci+1 ) ∈ δstmti for each 0 ≤ i < |φ|. The path e0 , e1 , . . . is called the control path of φ.
Semantic annotations. A semantic annotation of a CFG cfg consists of a memory
invariant mem, a successor relation I, and a transformation relation .. The quadruple (cfg, mem, I, .) is then called an annotated control-flow graph (annotated CFG).
A memory invariant mem is a total map that assigns to every location ` of cfg a set
mem(`) of SPCs describing (an overapproximation of) the set of memory configurations reachable at the given location. For simplicity, we assume that sets of regions of
any two different SPCs in img(mem) are disjoint. The successor relation is a binary relation on SPCs. For an edge e = ⟨`, stmt, `0 ⟩ and SPCs C ∈ mem(`), C0 ∈ mem(`0 ), C I C0
indicates that PCs of JCK are transformed by executing stmt into PCs of JC0 K. The relation . is a transformation relation on objects of configurations in img(mem) relating
objects of C with objects of its I-successor C0 in order to express how the memory
changes by executing the edge e. The change is captured in the form of the symbolic
operation ∆e = {(C, .,C0 ) | (C,C0 ) ∈ I ∩ mem(`) × mem(`0 )}. For our analysis to be
sound, we require ∆e to overapproximate δstmt restricted to Jmem(`)K, i.e., J∆e K ⊇ δ`stmt
for δ`stmt = {(c, c0 ) ∈ δstmt | c ∈ Jmem(`)K}.
A symbolic trace of an annotated CFG is a possibly infinite sequence of SPCs Φ =
C0 ,C1 , . . . provided that Ci I Ci+1 for each 0 ≤ i < |Φ| ≤ ∞. Given a computation φ =
(`0 , c0 ), (`1 , c1 ), . . . of length |φ| = |Φ| such that ci ∈ JCi K for 0 ≤ i ≤ |φ|, we say that Φ
is a symbolic trace of computation φ.
A part of the annotated CFG of our running example from Fig. 1(a) is given in
Fig. 1(c), another part can be found in Fig. 5 in Appendix E. For each location `, the
set mem(`) of SPCs is depicted on the right of the location `. The relation . is depicted
by dotted lines between objects of SPCs attached to adjacent program locations. The
relation I is not shown as it can be almost completely inferred from .: Whenever objects
of two SPCs are related by ., the SPCs are related by I. The only exception is the I-chain of the left-most SPCs along the control path 1, 2, 3, 4 in Fig. 1(c).
5 Replacement of Low-Level Manipulation of Containers
With all the notions designed above, we are now ready to state our methods for identifying low-level implementations of container operations in an annotated CFG and for
replacing them by calls of high-level container operations. Apart from the very end of
the section, we concentrate on destructive container operations whose treatment turns
out to be significantly more complex. We assume that the destructive container operations to be sought and replaced are specified as sequences of destructive pointer updates,
region allocations, and/or de-allocations as discussed in the last paragraph of Sect. 3.
Given a specification of destructive container operations and an annotated CFG, our
algorithm needs to decide: (1) which low-level pointer operations to remove, (2) where
to insert calls of container operations that replace them and what are these operations,
and (3) where and how to assign the right values to the input parameters of the inserted
container operations. To do this, the algorithm performs the following steps.
The algorithm starts by identifying container shapes in the SPCs of the given annotated CFG. Subsequently, it looks for the so-called transformation chains of these
container shapes which capture their evolution along the annotated CFG. Each such
chain is a sequence of sub-SMGs that appear in the labels of a path of the given annotated CFG. In particular, transformation chains consisting of objects linked by the
transformation relation, meaning that the chain represents evolution of the same piece
of memory, and corresponding to some of the specified container operations are sought.
The algorithm then builds a so-called replacement recipe of a consistent set of transformation chains that interprets the same low-level code as the same high-level container operation for each possible run of the code. The recipe determines which code
can be replaced by which container operation and where exactly the container operation is to be inserted within the sequence of low-level statements implementing it. This
sequence can, moreover, be interleaved with some independent statements that are to
be preserved and put before or after the inserted call of a container operation.
The remaining step is then to find out how and where to assign the right values of
the input parameters of the inserted container operations. We do this by computing a so-called parameter assignment relation. We now describe the above steps in detail. For
the rest of Sect. 5, we fix an input annotated CFG cfg and assume that we have specified
a symbolic operation ∆stmt for every container statement stmt ∈ Stmtsc .
5.1 Transformation Chains
A transformation chain is a sequence of sub-SMGs that describes how a piece of memory evolves along a control path. We in particular look for such transformation chains
whose overall effect corresponds to the effect of some specified container operation.
Such transformation chains serve us as candidates for code replacement.
Let p = ⟨`0 , stmt1 , `1 ⟩, . . . , ⟨`n−1 , stmtn , `n ⟩ be a control flow path. A transformation
chain (or simply chain) with the control path p is a sequence τ = τ[0] · · · τ[n] of SMGs
such that, for each 0 ≤ i ≤ n, there is an SPC Ci = (Gi , σi ) ∈ mem(`i ) with τ[i] ⊑ Gi
and the relation Iτ = {(Ci−1 ,Ci ) | 1 ≤ i ≤ n} is a subset of I, i.e., Ci is the successor
of Ci−1 for each i. We will call the sequence C0 , . . . ,Cn the symbolic trace of τ, and we
let .iτ = . ∩ (obj(τ[i − 1]) × obj(τ[i])) for 1 ≤ i ≤ n denote the transformation relation
between the objects of the (i−1)-th and i-th SMGs of τ.
An example of a chain, denoted as τpb below, is the sequence of the six SMGs that
are a part of the SPCs highlighted in grey in Fig. 1(c). The relation Iτpb links the six
SPCs, and the relation .τpb consists of the pairs of objects connected by the dotted lines.
Let ∆ be a specification of a container operation. We say that a transformation chain
τ implements ∆ w.r.t. some input/output parameter valuations σ/σ0 iff JUτ K ⊆ J∆K for the
symbolic update Uτ = ((τ[0], σ), .nτ ◦ · · · ◦ .1τ , (τ[n], σ0 )). Intuitively, Uτ describes how
MGs in Jτ[0]K are transformed into MGs in Jτ[n]K along the chain. When put together
with the parameter valuations σ/σ0 , Uτ is required to be covered by ∆.
In our example, by taking the composition of relations .5τpb ◦ · · · ◦ .1τpb (relating objects from location 4 linked by dotted lines with objects at location 11), we see that the
chain τpb implements the symbolic operation ∆z=push back(x,y) from Fig. 2, namely, its
symbolic update on the right. The parameter valuations σ/σ0 can be constructed as L and
p correspond to x and y at location 4, respectively, and L corresponds to z at location 11.
Let τ be a chain implementing ∆ w.r.t. input/output parameter valuations σ/σ0 . We
define implementing edges of τ w.r.t. ∆, σ, and σ0 as the edges of the path p of τ that are
labelled by those destructive pointer updates, region allocations, and/or deallocations
that implement the update Uτ . Formally, the i-th edge ei of p, 1 ≤ i ≤ n, is an implementing edge of τ iff J((τ[i − 1], ∅), .iτ , (τ[i], ∅))K ∩ δconst is not an identity (the update does not talk about values of variables, hence the empty valuations).
For our example chain τpb , the edges (7,8), (8,9), and (10,11) are implementing.
Finding transformation chains in an annotated CFG. Let ∆stmt be one of the given symbolic specifications of the semantics of a destructive container statement stmt ∈ Stmtsc .
We now sketch our algorithm for identifying chains that implement ∆stmt . More details can be found in Appendix A. The algorithm is based on pre-computing sets Û of so-called atomic symbolic updates that must be performed to implement the effect of each symbolic update U ∈ ∆stmt . Each atomic symbolic update corresponds to one pointer statement that performs a destructive pointer update, a memory allocation, or a deallocation. The set Û can be computed by looking at the differences in the selector values of the input and output SPCs of U. The algorithm then searches through symbolic traces of the annotated CFG cfg and looks for sequences of sub-SMGs present in them and linked by the atomic symbolic updates from Û (in any permutation) or by identity (meaning that a statement irrelevant for stmt is performed).
atomic updates are found based on testing entailment between symbolic atomic updates
and symbolic updates annotating subsequent CFG locations. This amounts to checking
entailment of the two source and the two target SMGs of the updates using methods
of [7], augmented with testing that the transformation relation is respected. Soundness
of the procedure depends on the semantics of symbolic updates being sufficiently precise, which is achieved by transparency of their semantics.
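A much-simplified sketch of this search (ours, for illustration only): edge labels stand for matched atomic updates, with 'id' for statements irrelevant to the sought operation; the actual algorithm matches updates via SMG entailment rather than labels.

```python
def find_chain(trace, atomic):
    """Search a symbolic trace (a list of edge labels) for a chain performing
    all labels in `atomic` (in any permutation), possibly interleaved with
    identity steps 'id'. Returns (start, end) edge indices, or None."""
    n = len(trace)
    for start in range(n):
        if trace[start] not in atomic:
            continue                   # a chain must start with an implementing edge
        remaining = list(atomic)
        for i in range(start, n):
            step = trace[i]
            if step in remaining:
                remaining.remove(step)
                if not remaining:
                    return (start, i)  # ...and end with one
            elif step != "id":
                break                  # a foreign destructive step breaks the candidate
    return None
```
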
For example, for the container statement z=push back(x,y) and the symbolic update U corresponding to an insertion into a list of length one or more, Û will consist of (i) symbolic updates corresponding to the pointer statements assigning y to the next-selector of the back region of x, (ii) assigning the back region of x to the prev-selector of y, and (iii) assigning ⊥ to the next-selector of y. Namely, for the chain τpb in Fig. 1(c) and the definition of the operation ∆z=push back(x,y) in Fig. 2, the set Û consists of three symbolic updates: from location 7 to 8 by performing Point (i), then from location 8 to 9 by performing (iii), and from location 10 to 11 by performing (ii).
5.2 Replacement Locations
A replacement location of a transformation chain τ w.r.t. ∆, σ, and σ0 is one of the
locations on the control path p of τ where it is possible to insert a call of a procedure
implementing ∆ while preserving the semantics of the path. In order to formalize the
notion of replacement locations, we call the edges of p that are not implementing (i.e., they do not implement the operation, e.g., they modify data) and that precede or succeed the replacement location the prefix or suffix edges, and we denote by pp , ps , and pi the sequences of edges obtained from p by keeping only the prefix, suffix, and implementing edges, respectively. The
replacement location must then satisfy that Jpp · pi · ps K|Jmem(`0 )K = JpK|Jmem(`0 )K , where the notation δ|S stands for the operation δ restricted to updates with the source configurations from the set S. The prefix edges are chosen as those which read the state of the
container shape as it would be before the identified container operation, the suffix edges
as those which read its state after the operation. The rest of the non-implementing edges is split arbitrarily. If we do not find a splitting satisfying the above semantic condition, τ is discarded from further processing.
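The restriction operator and the semantic condition for a replacement location can be sketched over finite toy operations (our illustration; configurations are plain values and operations are sets of (input, output) pairs):

```python
from functools import reduce

def compose_all(deltas):
    """Relational composition delta_n after ... after delta_1."""
    step = lambda acc, d: {(a, c) for (a, b) in acc for (b2, c) in d if b == b2}
    return reduce(step, deltas)

def restrict(delta, S):
    """The restriction: keep only updates whose source configuration lies in S."""
    return {(a, b) for (a, b) in delta if a in S}

def reorder_ok(prefix, impl, suffix, original_order, S):
    """Check that reordering the path into prefix . implementing . suffix
    preserves the restricted semantics of the original edge order."""
    lhs = restrict(compose_all(prefix + impl + suffix), S)
    rhs = restrict(compose_all(original_order), S)
    return lhs == rhs
```
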
For our example chain τpb , the edges (4,7) and (9,10) can both be put into the prefix
since none of them saves values of pointers used in the operation (see Fig. 1(c)). The
edge (9,10) is thus shifted up in the CFG, and the suffix remains empty. Locations 8–11
can then be used as the replacement locations.
5.3 Replacement Recipes
A replacement recipe is a map ϒ that assigns to each chain τ of the annotated CFG cfg a quadruple ϒ(τ) = (∆τ , σin τ , σout τ , `τ ), called a replacement template, with the following meaning: ∆τ is a specification of a container operation that is to be inserted at the replacement location `τ as a replacement of the implementing edges of τ. Next, σin τ and σout τ are input/output parameter valuations that specify which parts of the memory should be passed to the inserted operation as its input parameters and which parts of the memory correspond to the values of the output parameters that the operation should return.
For our example chain τpb , a replacement template ϒ(τpb ) can be obtained, e.g., by taking ∆τpb = ∆z=push back(x,y) , `τpb = 11, σin τpb (x) = στpb [0] (L) denoting the CS in the gray SPC of loc. 4, σin τpb (y) = στpb [0] (p) denoting the right-most region of the gray SPC of loc. 4, and σout τpb (z) = στpb [5] (L) denoting the CS in the gray SPC of loc. 11.
We now give properties of replacement recipes that are sufficient for the CFG cfg′
generated by our code replacement procedure, presented in Sect. 5.5, to be semantically
equivalent to the original annotated CFG cfg.
Local consistency. A replacement recipe ϒ must be locally consistent, meaning that (i) every τ ∈ dom(ϒ) implements ∆τ w.r.t. σ^in_τ and σ^out_τ, and (ii) ℓτ is a replacement location of τ w.r.t. ∆τ, σ^in_τ, and σ^out_τ. Further, to enforce that τ is not longer than necessary, we require its control path to start and end with an implementing edge. Finally, implementing edges of the chain τ cannot modify selectors of any object that is part of a CS which is itself not at the input of the container operation.
Global consistency. Global consistency makes it safe to replace the code w.r.t. multiple
overlapping chains of a replacement recipe ϒ, i.e., the replacements defined by them do
not collide. A replacement recipe ϒ is globally consistent iff the following holds:
1. A location is a replacement location within all symbolic traces passing it or within none. Formally, for each maximal symbolic trace Φ passing the replacement location ℓτ of a chain τ ∈ dom(ϒ), there is a chain τ′ ∈ dom(ϒ) s.t. ℓτ′ = ℓτ and the symbolic trace of τ′ is a sub-sequence of Φ passing ℓτ.
2. An edge is an implementing edge within all symbolic traces passing it or within none. Formally, for each maximal symbolic trace Φ passing an implementing edge e of a chain τ ∈ dom(ϒ), there is a chain τ′ ∈ dom(ϒ) s.t. e is its implementing edge and the symbolic trace of τ′ is a sub-sequence of Φ passing e.
3. For any chains τ, τ′ ∈ dom(ϒ) that appear within the same symbolic trace, the following holds: (a) If τ, τ′ share an edge, then they share their replacement location, i.e., ℓτ = ℓτ′. (b) Moreover, if ℓτ = ℓτ′, then τ is an infix of τ′ or τ′ is an infix of τ. The latter condition is technical and simplifies the proof of correctness of our approach.
4. Chains τ, τ′ ∈ dom(ϒ) with the same replacement location ℓτ = ℓτ′ have the same operation, i.e., ∆τ = ∆τ′.
5. An edge is either implementing for every chain of dom(ϒ) going through that edge or for no chain in dom(ϒ) at all.
Notice that Points 1, 2, and 3 speak about symbolic traces. That is, they do not have to
hold along all control paths of the given CFG cfg but only those which appear within
computations starting from Jmem(ℓI)K.
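Two of the pairwise global-consistency conditions can be sketched as a simple check over a set of chains. In the Python sketch below, the chain representation (a set of implementing edges, a replacement location, and a container operation) is a simplification of the paper's definitions, and the concrete chains are illustrative.

```python
# Sketch of checking conditions 3(a) and 4 of global consistency over
# a set of candidate chains.
from itertools import combinations

def globally_consistent(chains):
    """chains: list of dicts with keys 'impl' (set of CFG edges),
    'loc' (replacement location) and 'op' (container operation)."""
    for a, b in combinations(chains, 2):
        shares_edge = bool(a['impl'] & b['impl'])
        # Condition 3(a): chains sharing an edge share their location.
        if shares_edge and a['loc'] != b['loc']:
            return False
        # Condition 4: same location implies same operation.
        if a['loc'] == b['loc'] and a['op'] != b['op']:
            return False
    return True

c1 = {'impl': {(8, 9), (10, 11)}, 'loc': 11, 'op': 'push_back'}
c2 = {'impl': {(10, 11)}, 'loc': 11, 'op': 'push_back'}
c3 = {'impl': {(10, 11)}, 'loc': 12, 'op': 'push_back'}
print(globally_consistent([c1, c2]))      # True
print(globally_consistent([c1, c2, c3]))  # False: c1, c3 share an edge
                                          # but not a location
```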
Connectedness. The final requirement is connectedness of a replacement recipe ϒ. It
reflects the fact that once some part of memory is to be viewed as a container, then
destructive operations on this part of memory are to be done by destructive container
operations only until the container is destroyed by a container destructor. Note that
this requirement concerns operations dealing with the linking fields only; the rest of the concerned objects can be manipulated by arbitrary low-level operations. Moreover, the
destructive pointer statements implementing destructive container operations can also
be interleaved with other independent pointer manipulations, which are handled as the
prefix/suffix edges of the appropriate chain.
Connectedness of ϒ is verified over the semantic annotations by checking that in
the .-future and past of every container (where a container is understood as a container
shape that was assigned a container variable in ϒ), the container is created, destroyed,
and its linking fields are modified by container operations only. A formal description
can be found in Appendix B.
Computing recipes. The algorithm for building a replacement recipe ϒ starts by looking for chains τ of the annotated CFG cfg that can be associated with replacement templates ϒ(τ) = (∆τ, σ^in_τ, σ^out_τ, ℓτ) s.t. local consistency holds. It uses the approach described in Sect. 5.1. It then tests global consistency of ϒ. All five sub-conditions can be checked straightforwardly based on their definitions. If ϒ is found not to be globally consistent, problematic chains are pruned from it until global consistency is achieved. Testing for connectedness is done by testing all Ics-paths leading forward from output parameters of chains and backward from input parameters of chains. Testing whether J(S, ., S′)K ∩ δconst or J(S′, ., S)K ∩ δconst is an identity, which is a part of the procedure, can be done easily due to the transparency of symbolic updates. Chains whose container parameters contradict connectedness are removed from ϒ. The pruning is iterated until ϒ is both globally consistent and connected.
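The iterated pruning can be sketched as a fixpoint loop. In the Python sketch below, the two violation predicates are placeholders for the paper's actual global-consistency and connectedness checks, and the chain names are illustrative.

```python
# Fixpoint pruning sketch: chains violating global consistency or
# connectedness are removed until both checks pass.

def prune_recipe(chains, violators_global, violators_connected):
    """violators_*: functions mapping a chain set to the subset of
    chains that break the respective property."""
    chains = set(chains)
    while True:
        bad = violators_global(chains) | violators_connected(chains)
        if not bad:
            return chains
        chains -= bad

# Toy predicates: chain 'b' breaks global consistency; once it is gone,
# 'c' loses its producing chain and breaks connectedness.
vg = lambda cs: {'b'} & cs
vc = lambda cs: {'c'} & cs if 'b' not in cs else set()
print(prune_recipe({'a', 'b', 'c'}, vg, vc))  # {'a'}
```

Removing a chain can invalidate others (as in the toy example), which is why the loop re-runs both checks after every pruning step.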
5.4 Parameter Assignment
To prevent conflicts of names of parameters of the inserted container operations, their
calls are inserted with fresh parameter names. Particularly, given a replacement recipe
ϒ, the replacement location ℓτ of every chain τ ∈ dom(ϒ) is assigned a variable renaming λℓτ that renames the input/output parameters of the symbolic operation ∆τ, specifying the destructive container operation implemented by τ, to fresh names.
parameters of the container operations do not appear in the original code, and so the
code replacement algorithm must insert assignments of the appropriate values to the
parameters of the operations prior to the inserted calls of these operations. For this,
we compute a parameter assignment relation ν containing pairs (ℓ, x := y) specifying which assignment x := y is to be inserted at which location ℓ. Intuitively, ν is constructed so that the input parameters of container operations take their values from the
output container parameters of the preceding container operations or, in case of pointer
variables, directly from the access paths (consisting of a pointer variable v or a selector value vs) that are used in the original program to access the concerned memory
regions. More details are given in Appendix C. Let us just note that if we fail to find
a parameter assignment, we remove some chains from ϒ and restart the search.
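The fresh renaming and the collection of the assignment relation ν can be sketched as follows. All names and data structures in this Python sketch (the fresh-name scheme, the source map) are illustrative, not the paper's notation.

```python
# Sketch of fresh parameter renaming and of collecting the parameter
# assignment relation nu.
import itertools

_fresh = itertools.count()

def rename_params(params):
    """Map each formal parameter of an inserted operation to a fresh name."""
    return {p: f'__cont_{p}_{next(_fresh)}' for p in params}

def assignments_for(loc, renaming, sources):
    """sources: for each formal parameter, the access path or preceding
    output parameter carrying its value. Returns (loc, (x, y)) pairs
    standing for 'insert x := y at loc'."""
    return [(loc, (renaming[p], src)) for p, src in sources.items()]

lam = rename_params(['x', 'y', 'z'])
nu = assignments_for(11, lam, {'x': 'L_out_prev', 'y': 'p'})
print(nu)
# [(11, ('__cont_x_0', 'L_out_prev')), (11, ('__cont_y_1', 'p'))]
```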
5.5 Code Replacement
The input of the replacement procedure is the annotated CFG cfg, a replacement recipe ϒ, a variable renaming λℓ for every replacement location ℓ of ϒ, and a parameter assignment relation ν. The procedure produces a modified CFG cfg′. It first removes all implementing edges of every chain τ ∈ dom(ϒ) and adds instead an edge with a call to λℓτ(∆τ) at ℓτ, and then adds an edge with the assignment x := y at ℓ for every pair (ℓ, x := y) ∈ ν. The edge removal is done simply by replacing the statement on the given edge by the skip statement whose semantics is identity. Given a statement stmt and a location ℓ, edge addition amounts to: (1) adding a fresh location ℓ•, (2) adding a new edge ⟨ℓ, stmt, ℓ•⟩, and (3) replacing every edge ⟨ℓ, stmt′, ℓ′⟩ by ⟨ℓ•, stmt′, ℓ′⟩. Intuitively, edge removal preserves all control paths going through the original edge, only the statement is now “skipped”, and edge addition inserts the given statement into all control paths containing the given location.
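The two CFG transformations can be sketched directly. In the Python sketch below, the CFG is a plain list of (source, statement, target) triples and the concrete statements are illustrative; the logic mirrors the skip-based removal and the three-step edge addition described above.

```python
# Sketch of the two CFG transformations used by the replacement procedure.

def remove_edge(cfg, edge):
    # "Removal" replaces the statement on the edge by skip (identity).
    return [(s, 'skip', d) if (s, st, d) == edge else (s, st, d)
            for s, st, d in cfg]

def add_edge(cfg, loc, stmt, fresh):
    # (1) fresh location, (2) new edge <loc, stmt, fresh>,
    # (3) redirect every edge leaving loc so that it leaves fresh instead.
    out = [(fresh if s == loc else s, st, d) for s, st, d in cfg]
    out.append((loc, stmt, fresh))
    return out

cfg = [(1, 'p->next = q', 2), (2, 'q->prev = p', 3)]
cfg = remove_edge(cfg, (1, 'p->next = q', 2))
cfg = remove_edge(cfg, (2, 'q->prev = p', 3))
cfg = add_edge(cfg, 2, 'push_back(L, p)', 20)
print(cfg)
# [(1, 'skip', 2), (20, 'skip', 3), (2, 'push_back(L, p)', 20)]
```

Note how every control path through location 2 now passes the inserted call before continuing, exactly because the outgoing edge of 2 was redirected to the fresh location.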
After replacing destructive container operations, we replace non-destructive container operations, including, in particular, usage of iterators to reference elements of
a list and to move along the list, initialisation of iterators (placing an iterator at a particular element of a list), and emptiness tests. With a replacement recipe ϒ and an assignment relation ν at hand, recognizing non-destructive operations in the annotated CFG
cfg is a much easier task than that of recognizing destructive operations. Actually, for
the above operations, the problem reduces to analysing annotations of one CFG edge at
a time. We refer the interested reader to Appendix E for more details.
Preservation of semantics. It can now be proved (cf. Appendix D) that under the assumption that the replacement recipe ϒ is locally and globally consistent and connected
and the parameter assignment relation ν is complete, our code replacement procedure
preserves the semantics. In particular, computations of the CFG cfg are surjectively
mapped to computations of the CFG cfg0 that are equivalent in the following sense.
They can be divided into the same number of segments that are in the computation of
cfg delimited by borders of the chains that it passes through. The two computations
agree on the final PCs of the respective segments. Note also that the transformation preserves memory safety errors—if they appear, the related containers will not be introduced due to violation of connectedness.
6 Implementation and Experimental Results
We have implemented our approach as an extension of the Predator shape analyser [7]
and tested it through a number of experiments. Our code and experiments are publicly available at http://www.fit.vutbr.cz/research/groups/verifit/tools/predator-adt.
The first part of our experiments concentrated on how our approach can deal with
various low-level implementations of list operations. We built a collection of 18 benchmark programs manipulating NULL-terminated DLLs via different implementations of
typical list operations, such as insertion, iteration, and removal. Moreover, we generated
further variants of these implementations by considering various legal permutations of
their statements. We also considered interleaving the pointer statements implementing
list operations with various other statements, e.g., accessing the data stored in the lists.2
Finally, we also considered two benchmarks with NULL-terminated Linux lists that heavily rely on pointer arithmetics. In all the benchmarks, our tool correctly recognised
list operations among other pointer-based code and gave us a complete recipe for code
transformation. On a standard desktop PC, the total run time on a benchmark was almost
always under 1s (with one exception at 2.5s), with negligible memory consumption.
Next, we successfully applied our tool to multiple case studies of creating, traversing, filtering, and searching lists taken from the benchmark suite of Slayer [1] (modified
to use doubly-linked instead of singly-linked lists). Using a slight extension of our prototype, we also successfully handled examples dealing with lists with head/tail pointers
as well as with circular lists. These examples illustrate that our approach can be generalized to other kinds of containers as discussed in Section 7. These examples are also
freely available at the link above. Moreover, in Appendix F, we present an example how
we deal with code where two container operations are interleaved.
Further, we concentrated on showing that our approach can be useful to simplify
program analysis by separating low-level pointer-related analysis from analysing other,
higher-level properties (like, e.g., sortedness or other data-related properties). To illustrate this, we used our approach to combine shape analysis implemented in Predator
with data-related analysis provided by the J2BP analyser [14]. J2BP analyses Java programs, and it is based on predicate abstraction extended to cope with containers.
We used 4 benchmarks for the evaluation. The first one builds an ordered list of
numerical data, inserts another data element into it, and finally checks sortedness of the
resulting list, yielding an assertion failure if this is not the case (such a test harness must
be used since J2BP expects a closed program and verifies absence of assertion failures).
The other benchmarks are similar in that they produce lists that should fulfill some
property, followed by code that checks whether the property is satisfied. The considered
properties are correctness of the length of a list, the fact that certain inserted values
appear in a certain order, and correctness of rewriting certain values in a list. We used
2 In practice, there would typically be many more such statements, seemingly increasing the size of the case studies, but such statements are not an issue for our method.
our tool to process the original C code. Next, we manually (but algorithmically) rewrote
the result into an equivalent Java program. Then, we ran J2BP to verify that no assertion
failures are possible in the obtained code, hence verifying the considered data-related
properties. For each benchmark, our tool was able to produce (within 1 sec.) a container
program for J2BP, and J2BP was able to complete the proof. At the same time, note that
neither Predator nor J2BP could perform the verification alone (Predator does not reason
about numerical data and J2BP does not handle pointer-linked dynamic data structures).
7 Possibilities of Generalizing the Approach
Our method is built around the idea of specifying operations using a pair of abstract configurations equipped with a transformation relation over their components. Although we
have presented all concepts for the simple abstract domain of SMGs restricted to NULL-terminated DLLs, the main idea can be used with abstract domains describing other
kinds of lists, trees, and other data structures too. We now highlight what is needed for
that. The abstract domain to be used must allow one to define a sufficiently fine-grained
assignment of representing objects, which is necessary to define symbolic updates with
transparent semantics. Moreover, one needs a shape analysis that computes annotations
of the CFG with a precise enough invariant, equipped with the transformation relation,
encoding pointer manipulations in a transparent way. However, most shape analyses do
actually work with such information internally when computing abstract post-images
(due to computing the effect of updates on concretized parts of the memory). We thus
believe that, instead of Predator, tools like, e.g., Slayer [1] or Forester [8] can be modified to output CFGs annotated in the needed way.
Other than that, given an annotated CFG, our algorithms searching for container operations depend mostly on an entailment procedure over symbolic updates (cf. Sec. 5.1,
App. A). Entailment of symbolic updates is, however, easy to obtain as an extension of
entailment over the abstract domain provided the entailment is able to identify which
parts of the symbolic shapes encode the same parts of the concrete configurations.
8 Conclusions and Future Work
We have presented and experimentally evaluated a method that can transform in a sound
and fully automated way a program manipulating NULL-terminated list containers via
low-level pointer operations to a high-level container program. Moreover, we argued
that our method is extensible beyond the considered list containers (as illustrated also
by our preliminary experiments with lists extended with additional pointers and circular
lists). A formalization of an extension of our approach to other kinds of containers,
a better implementation of our approach, as well as other extensions of our approach
(including, e.g., more sophisticated target code generation and recognition of iterative
container operations) are subject of our current and future work.
Acknowledgement. This work was supported by the Czech Science Foundation project
14-11384S.
References
1. J. Berdine, B. Cook, and S. Ishtiaq. SLAyer: Memory Safety for Systems-Level Code. In
Proc. of CAV’11, volume 6806 of LNCS, pages 178–183. Springer, 2011.
2. T. M. Chilimbi, M. D. Hill, and J. R. Larus. Cache-Conscious Structure Layout. In Proc. of
PLDI’99, pages 1–12. ACM, 1999.
3. P. Cousot and R. Cousot. Abstract Interpretation: A Unified Lattice Model for Static Analysis
of Programs by Construction or Approximation of Fixpoints. In Proc. of POPL’77, pages
238–252. ACM, 1977.
4. A. Cozzie, F. Stratton, H. Xue, and S. T. King. Digging for Data Structures. In Proc. of
USENIX’08, pages 255–266. USENIX Association, 2008.
5. R. Dekker and F. Ververs. Abstract Data Structure Recognition. In Proc. of KBSE’94, pages
133–140, 1994.
6. B. Demsky, M. D. Ernst, P. J. Guo, S. McCamant, J. H. Perkins, and M. C. Rinard. Inference
and Enforcement of Data Structure Consistency Specifications. In Proc. of ISSTA’06, pages
233–244. ACM, 2006.
7. K. Dudka, P. Peringer, and T. Vojnar. Byte-Precise Verification of Low-Level List Manipulation. In Proc. of SAS’13, volume 7935 of LNCS, pages 215–237. Springer, 2013.
8. P. Habermehl, L. Holı́k, A. Rogalewicz, J. Simácek, and T. Vojnar. Forest Automata for
Verification of Heap Manipulation. Formal Methods in System Design, 41(1):83–106, 2012.
9. I. Haller, A. Slowinska, and H. Bos. MemPick: High-Level Data Structure Detection in
C/C++ Binaries. In Proc. of WCRE’13, pages 32–41, 2013.
10. L. J. Hendren and A. Nicolau. Parallelizing Programs with Recursive Data Structures. IEEE
Trans. Parallel Distrib. Syst., 1(1):35–47, 1990.
11. M. Jump and K. S. McKinley. Dynamic Shape Analysis via Degree Metrics. In Proc. of
ISMM’09, pages 119–128. ACM, 2009.
12. C. Jung and N. Clark. DDT: Design and Evaluation of a Dynamic Program Analysis for
Optimizing Data Structure Usage. In Proc. of MICRO’09, pages 56–66. ACM, 2009.
13. P. O’Hearn, J. Reynolds, and H. Yang. Local Reasoning about Programs that Alter Data
Structures. In Proc. of CSL’01, volume 2142 of LNCS, pages 1–19. Springer, 2001.
14. P. Parı́zek. J2BP. http://plg.uwaterloo.ca/~pparizek/j2bp.
15. P. Parı́zek and O. Lhoták. Predicate Abstraction of Java Programs with Collections. In Proc.
of OOPSLA’12, pages 75–94. ACM, 2012.
16. S. Pheng and C. Verbrugge. Dynamic Data Structure Analysis for Java Programs. In Proc.
of ICPC’06, pages 191–201. IEEE Computer Society, 2006.
17. E. Raman and D. I. August. Recursive Data Structure Profiling. In Proc. of MSP’05, pages
5–14. ACM, 2005.
18. M. Raza, C. Calcagno, and P. Gardner. Automatic Parallelization with Separation Logic. In
Proc. of ESOP’09, pages 348–362. Springer, 2009.
19. S. Sagiv, T. W. Reps, and R. Wilhelm. Parametric Shape Analysis via 3-valued Logic.
TOPLAS, 24(3), 2002.
20. R. Shaham, E. K. Kolodner, and S. Sagiv. On the Effectiveness of GC in Java. In Proc. of
ISMM’00, pages 12–17. ACM, 2000.
21. V. Vafeiadis. RGSep Action Inference. In Proc. of VMCAI’10, pages 345–361. Springer,
2010.
22. D. H. White and G. Lüttgen. Identifying Dynamic Data Structures by Learning Evolving
Patterns in Memory. In Proc. of TACAS’13, volume 7795 of LNCS, pages 354–369. Springer,
2013.
A Finding transformation chains in an annotated CFG
In this appendix, we provide a detailed description of our algorithm that can identify
chains which implement—w.r.t. some input/output valuations—the symbolic operations
∆stmt of a destructive container statement stmt ∈ Stmtsc in a given annotated CFG. For that, as we have already said in Sect. 5.1, we pre-compute sets Û of the so-called atomic symbolic updates that must be performed to implement the effect of each symbolic update U ∈ ∆stmt. Each atomic symbolic update corresponds to one pointer statement which performs a destructive pointer update, a memory allocation, or a deallocation. The set Û can be computed by looking at the differences in the selector values of the input and output SPCs of U.
The algorithm identifying chains τ in the given annotated CFG iterates over all symbolic updates U = (C, ;, C′) ∈ ∆stmt where stmt ∈ Stmtsc is a destructive container statement. For each of them, it searches the annotated CFG for a location ℓ that is labelled by an SPC (G, σ) ∈ mem(ℓ) for which there is a sub-SMG H of G and a valuation σ′ such that J(H, σ′)K ⊆ JC′K. The latter test is carried out using an entailment procedure on the abstract heap domain used. If the test goes through, the first point of a new chain τ[0] = H and the input valuation σ = σ′ are constructed.
The rest of the chain τ is constructed as follows using the set Û. The algorithm investigates symbolic traces C = C0, C1, . . . starting from the location ℓ. At each Ci, i > 0, it attempts to identify τ[i] as a sub-SMG of Ci that satisfies the following property for some valuation σi: Objects of τ[i] are the .-successors of objects of τ[i − 1] in Ci, and for the symbolic update Ui = J(τ[i − 1], σi−1), ., (τ[i], σi)K, it either holds that JUiK ∩ δconst is an identity, or JUiK ⊆ JUaK for some atomic pointer change Ua. That is, τ[i − 1] either does not change (meaning that the implementation of the sought container operation is interleaved with some independent pointer statements), or it changes according to some of the atomic symbolic updates implementing the concerned container operation. In the latter case, the i-th edge of the control path of τ is an implementing edge of τ. We note that both of the tests can be carried out easily due to the transparency of the semantics of symbolic updates (cf. Section 3), which makes all pointer changes specified by U, Ui, and Ua “visible”.
If τ[i] and σi satisfying the required property are not found, the construction of τ along the chosen symbolic trace fails; otherwise the algorithm continues extending it. The chain τ is completed when every atomic symbolic update from Û is associated with some implementing edge. That means that all steps implementing U have appeared, no other modifications to τ[0] were done, and so τ[0] was modified precisely as specified by U. Finally, an output valuation σ′ is chosen so that τ implements ∆ w.r.t. σ and σ′.
Note that since one of the conditions for τ[i] is that its objects are images of objects of τ[i − 1], operations that allocate regions would never be detected because a newly allocated region cannot be the image of any object. The construction of chains implementing such operations is therefore done in the opposite direction, i.e., it starts from C′ and continues against ..
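The step-by-step extension of a chain along a trace can be caricatured as follows. In this Python sketch, updates are reduced to opaque labels, and the names ('identity', 'set_next', 'set_prev') are illustrative stand-ins for the identity test on JUiK ∩ δconst and for membership in Û.

```python
# Toy sketch of extending a chain along a symbolic trace: at each step the
# observed update either leaves the tracked sub-shape unchanged (an
# interleaved independent statement) or matches one of the pre-computed
# atomic updates of the container operation.
def extend_chain(trace_updates, atomic_updates):
    """Returns the indices of the implementing edges, or None if some
    step matches neither case or not all atomic updates appear."""
    impl, pending = [], set(atomic_updates)
    for i, u in enumerate(trace_updates):
        if u == 'identity':
            continue            # independent interleaved pointer statement
        if u in pending:
            pending.discard(u)  # one atomic step of the container operation
            impl.append(i)
        else:
            return None         # chain construction fails on this trace
        if not pending:
            return impl         # all atomic updates have appeared
    return None

print(extend_chain(['set_next', 'identity', 'set_prev'],
                   {'set_next', 'set_prev'}))  # [0, 2]
```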
B Connectedness
The below formalized requirement of connectedness of a replacement recipe ϒ reflects
the fact that once some part of memory is to be viewed as a container, then destructive operations on this part of memory are to be done by destructive container operations only, until the container is destroyed by a container destructor. Note that this
requirement concerns operations dealing with the linking fields only; the rest of the concerned objects can be manipulated by arbitrary low-level operations. Moreover, recall that destructive pointer statements implementing destructive container operations can be interleaved with other independent pointer manipulations, which are handled as the
prefix/suffix edges of the appropriate chain.
Let C, C′ ∈ dom(mem). The successor CS of a CS S of C is a CS S′ of C′ s.t. obj(S′) = obj(C′) ∩ .(obj(S)). The CS S is then called the predecessor CS of S′ in C, denoted S Ics S′. The successor set of a CS S is defined recursively as the smallest set succ(S) of CSs that contains S and each successor CS S″ of each CS S′ ∈ succ(S) that is not obtained by a statement that appears in the path of some chain in dom(ϒ). Symmetrically, for a CS S which is the input of some chain in dom(ϒ), we define its predecessor set as the smallest set predτ(S) of CSs that contains S and each predecessor S″ of each CS S′ ∈ predτ(S) that is not obtained by a statement that appears in the path of some chain in dom(ϒ).
Using the notions of successor and predecessor sets, ϒ is defined as connected iff it satisfies the following two symmetrical conditions: (1) For every CS S′ in the successor set of a CS S that is the output of some chain in dom(ϒ), S′ is either the input of some chain in dom(ϒ) or, for all successors S″ of S′, J(S′, ., S″)K ∩ δconst is an identity. (2) For every CS S′ in the predecessor set of a CS S that is the input of some chain in dom(ϒ), S′ is either the output of some chain in dom(ϒ) or, for all predecessors S″ of S′, J(S″, ., S′)K ∩ δconst is an identity. Intuitively, a CS must not be modified in between successive destructive container operations.
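The recursive successor-set computation underlying these conditions can be sketched as a graph traversal. The Python sketch below uses toy stand-ins for the paper's container shapes and the Ics relation; container-shape names and statement labels are illustrative.

```python
# Sketch of the successor-set computation: follow successor links between
# container shapes, stopping at links produced by statements that appear
# in the path of some chain.

def successor_set(start, succ_edges, chain_stmts):
    """succ_edges: map CS -> list of (successor CS, producing statement)."""
    seen, stack = {start}, [start]
    while stack:
        s = stack.pop()
        for s2, stmt in succ_edges.get(s, []):
            if stmt not in chain_stmts and s2 not in seen:
                seen.add(s2)
                stack.append(s2)
    return seen

# 'S1' is reached via an independent statement; 'S2' is cut off because it
# is produced by a chain statement (a recognised container operation).
edges = {'S0': [('S1', 'v = v->next')],
         'S1': [('S2', 'push_back')]}
print(successor_set('S0', edges, chain_stmts={'push_back'}))  # {'S0', 'S1'}
```

Connectedness then amounts to checking that every shape in such a set either feeds the next container operation or is only touched by identity updates on its linking fields.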
C Parameter Assignment
Let ϒ be a replacement recipe. Further, for the replacement location ℓ of every chain τ ∈ dom(ϒ), let λℓ be the variable renaming that renames the input/output parameters of the symbolic operation ∆τ, specifying the destructive container operation implemented by τ, to fresh names. Below, we describe a method that computes a parameter assignment relation ν containing pairs (ℓ, x := y) specifying which assignments are to be inserted at which location in order to appropriately initialize the renamed input/output parameters. As said already in Sect. 5.4, ν is constructed so that input container parameters take their values from variables that are outputs of preceding container operations.
In particular, for an input container parameter x and an output container parameter y (i.e., x, y ∈ Vc), a location where x can be identified with y is s.t. every element of pred(Sx) at the location is in succ(Sy) and it is not in succ(Sy′) of any other output parameter y′. Here Sv denotes the (unique) container shape that is the value of the container parameter defined in ϒ. The parameters must be properly assigned along every feasible path leading to ℓ. Therefore, for every replacement location ℓ and every input container parameter x of λℓ(∆ℓ), we search backwards from ℓ along each maximal Ics-path p leading to x. We look for a location ℓ on p where x can be identified with y. If we find it, we add (ℓ, x := y) to ν. If we do not find it on any of the paths, ν is incomplete. τ is then removed from dom(ϒ), and the domain of ϒ is pruned until it again becomes globally consistent. The computation of ν is then run anew. If the search succeeds for all paths, then we call ν complete.
Region parameters are handled in a simpler way, using the fact that the input regions are usually sources or targets of modified pointers (or they are deallocated within the operation). Therefore, in the statements of the original program that perform the corresponding update, they must be accessible by access paths (consisting of a pointer variable v or a selector value vs). These access paths can then be used to assign the appropriate value to the region parameters of the inserted operations. Let τ be a chain in dom(ϒ) with an input region parameter x ∈ Vp. For every control path p leading to ℓτ, taking into account that ℓτ can be shared with other chains τ′, we choose the location ℓ′ which is the farthest location from ℓτ on p s.t. it is a starting location of a chain τ′ ∈ dom(ϒ) with ℓτ′ = ℓτ. We then test whether there is an access path a (i.e., a variable y or a selector value ys) s.t. all SPCs of mem(ℓ′) have a region r which is the .-predecessor of the input region x of the chain τ′ and which is accessible by a. If so, we put to ν the pair (ℓ′, x := a). Otherwise, τ is removed from ϒ, and the whole process of pruning ϒ and computing the assignments is restarted.
D Correctness
We now argue that under the assumption that the replacement recipe ϒ is locally and
globally consistent and connected and the parameter assignment relation ν is complete,
our code replacement procedure preserves the semantics of the input annotated CFG
cfg. We show that computations of cfg and of the CFG cfg′ obtained by the replacement
procedure are equivalent in the sense that for every computation φ of one of the annotated CFGs, there is a computation in the other one, starting with the same PC, such that
both computations can be divided into the same number of segments, and at the end of
every segment, they arrive to the same PCs. The segments will be delimited by borders
of chains that the computation in cfg passes through.
Formally, the control path of every computation φ of cfg is a concatenation of segments s1 s2 · · · . There are two kinds of segments: (1) A chain segment is a control path
of a chain of ϒ, and it is maximal, i.e., it cannot be extended to either side in order to
obtain a control path of another chain of ϒ. (2) A padding segment is a segment between
two chain segments such that none of its infixes is a control path of a chain.
We define the image of an edge e = ⟨ℓ1, stmt1, ℓ2⟩ of cfg as the edge e′ = ⟨ℓ1, stmt′1, ℓ2⟩ of cfg′ provided no new edge f = ⟨ℓ2, stmt2, ℓ•2⟩ is inserted to cfg′. Otherwise, the image is the sequence of the edges e′ f. The image of a path p of cfg is then the sequence of images of edges of p, and p is called the origin of p′. Similarly to computations of cfg, every computation φ′ of cfg′ can be split into a sequence of chain and padding segments. They are defined analogously, the only difference is that instead of talking about control paths of chains, the definition talks about images of control paths of chains.
We say that a PC c = (g, σ) of a computation of cfg is equivalent to a PC c′ = (g′, σ′) of a computation of cfg′ iff g = g′ and σ ⊆ σ′. That is, the PCs are almost the same, but c′ can define more variables (to allow for equivalence even though c′ defines values of parameters of container operations). A computation φ of cfg is equivalent to a computation φ′ of cfg′ iff they start with the same PC, have the same number of
segments, and the configurations at the end of each segment are equivalent. This is
summarised in the following theorem.
Theorem 1. For every computation of cfg starting with (ℓI, c), c ∈ Jmem(ℓI)K, there is an equivalent computation of cfg′, and vice versa.
For simplicity, we assume that every branching in a CFG is deterministic (i.e., conditions on branches starting from the same location are mutually exclusive).
In order to prove Theorem 1, we first state and prove several lemmas.
Lemma 1. Let φ be a computation of an annotated CFG cfg over a control path p and
a symbolic trace Φ = C0 ,C1 , · · · . Then:
1. There is precisely one way of how to split it into segments.
2. If the i-th segment is a chain segment which spans from the j-th to the k-th location of p, then Cj, . . . , Ck is a symbolic trace of some chain τi of ϒ.
3. Every replacement point and implementing edge on p belongs to some τi.
Proof. Let φ = (ℓ0, c0), (ℓ1, c1), . . .. Let the i-th segment be a chain segment that spans from the j-th to the k-th location of φ. We start by Point 2 of the lemma.
Due to consistency, it holds that Cj, . . . , Ck is a symbolic trace of some chain τ ∈ dom(ϒ). Particularly, it can be argued as follows: The control path of a chain segment is by definition a control path of some chain τ of ϒ but there is no chain of ϒ with the control path that is an infix of p and larger than the one of τ. The control path of τ contains the replacement location of τ. Let it be ℓl, j ≤ l ≤ k, and let e and e′ be the extreme edges of the control path of τ. By local consistency, e and e′ are implementing. By Point 2 of global consistency, the extreme edges are also implementing edges of some chains τ′ and τ″ with the symbolic traces being infixes of Φ. By Point 3(a) of global consistency, since control paths of τ′ and τ″ overlap with that of τ, they share the replacement location with it, and hence, by Point 3(b) of global consistency, the control path of one of them is an infix of the control path of the other. The larger of the two thus spans from the j-th to the k-th location (not more since the control path of τ is maximal by definition). Hence, the longer of the two chains τ′, τ″ can be taken as τi.
Point 3 of the lemma is also implied by consistency. If there was a contradicting implementing edge or replacement location, Points 1 or 2 of global consistency, respectively, would imply that it belongs to a chain τ ∈ dom(ϒ) such that its symbolic trace is an infix of Φ. The control path of τ cannot be an infix of a padding segment since this would contradict the definition of a padding segment. Moreover, the control path of τ cannot overlap with the control path of any τi. By Point 3(a) of global consistency, it would share the replacement location with τi, hence by Point 3(b) and by the maximality of τi, τ would be an infix of τi, and by Point 5 of global consistency, its implementing edges would be implementing edges of τi.
Finally, Point 1 of the lemma is easy to establish using the above reasoning.
⊓⊔
Lemma 2. A computation of an annotated CFG cfg over a control path p is equivalent to a computation over the image of p of the CFG cfg′ obtained from cfg by the procedure for replacing destructive pointer operations.
Proof. Let φ = (ℓ0, c0), (ℓ1, c1), . . . be a computation of cfg. The equivalent computation φ′ of cfg′ can be constructed by induction on the number n of segments of φ.
As the case n = 0, take the case when φ consists of the initial state (ℓI, c0) only. The claim of the lemma holds since φ and φ′ are then the same. A computation φ with n + 1 segments arises by appending a segment to the prefix φn of φ that contains its first n segments, for which the claim holds by the induction hypothesis. That is, assuming that φn ends with (ℓ, c), there is a prefix φ′n of a computation of cfg′ which is equivalent to φn and ends with (ℓ′, c′), where c and c′ are equivalent, and the control path of φ′n is the image of the control path of φn.
Let the (n + 1)-th segment of φ be a padding one. Then the (n + 1)-th segment of the computation of cfg′ is an infix of a computation that starts with (ℓ′, c′) and continues along the image of the control path of the (n + 1)-th segment of φ. The paths are the same up to possibly inserted assignments of parameters of container operations in the control path of φ′n since, by Lemma 1, a padding segment contains neither replacement points nor implementing instructions. Assignments of the parameters of container operations do not influence the semantics of the path (modulo the values of the parameters) since the path does not contain any calls of operations and the parameters are not otherwise used. Hence the two configurations at the ends of the (n + 1)-th segments of φ and φ′ must be equivalent.
Let the (n + 1)-th segment of φ be a chain segment. Let Φ = C0, C1, . . . be a symbolic trace of φ. By Lemma 1, we know that the infix of Φ that corresponds to the (i + 1)-th segment is a symbolic trace of some τi of ϒ, that the control path of the segment does not contain any replacement points other than the one of τi, and that all implementing edges in it are implementing edges of τi. Let pi be the control path of τi (and of the (i + 1)-th segment). The image of the control path pi is hence exactly p′i = pp . estmt . ps, where stmt is the statement calling the container operation δstmt of τi with parameters renamed by λℓτi, and pp and ps are the prefix and suffix edges of τi. This means that, under the assumption that the parameters of the operation stmt are properly assigned at the moment of the call, p′i modifies the configuration c′ in the same way as pi modifies c, modulo the assignment of parameters, and hence the resulting configurations are equivalent. The parameters are properly assigned since ν was complete. ⊓⊔
Lemma 3. The control path of every computation of the annotated CFG cfg′ obtained by the procedure for replacing destructive pointer operations from a CFG cfg is an image of a control path of a computation in cfg.
Proof. By contradiction. Assume that the control path p′ of a computation φ′ of cfg′ is not an image of a control path of a computation in cfg. Take the longest prefix p″ of p′ such that there exists a computation φ of cfg with a prefix that has a control path p and p″ is the image of p. Then p″ is a sequence of n images of segments of p ended by an incomplete image of the (n + 1)-th segment of p. By Lemma 2, the configurations of φ and φ′ at the end of the n-th segment and its image, respectively, are equivalent. The computation φ′ then continues by steps within the image of the (n + 1)-th segment of φ until a point where its control path diverges from it. The (n + 1)-th segment of φ cannot be a padding segment since edges of padding segments and their images do not differ (up to assignments of parameters of container operations, which are irrelevant here). The (n + 1)-th segment of φ is thus a chain segment. By the equivalence of the semantics
of control paths of chains and their images (Section 5.1), φ′ must have had a choice to go through the image of the (n + 1)-th segment until reaching its ending location with a configuration equivalent to the one of φ. However, this means that the branching of cfg′ is not deterministic. Since the code replacement procedure cannot introduce nondeterminism, this contradicts the assumption that the branching of cfg is deterministic. ⊓⊔
The proof of Theorem 1 is now immediate.
Proof (Theorem 1). Straightforward by Lemma 2 and Lemma 3. ⊓⊔
E Replacement of Non-destructive Container Operations
We now discuss our way of handling non-destructive container operations, including, in particular, the use of iterators to reference elements of a list and to move along the list, the initialisation of iterators (placing an iterator at a particular element of a list), and
emptiness tests. With a replacement recipe ϒ and an assignment relation ν at hand, recognizing non-destructive operations in an annotated CFG is a much easier task than that
of recognizing destructive operations. Actually, for the above operations, the problem
reduces to analysing annotations of one CFG edge at a time.
We first present our handling of non-destructive container operations informally by
showing how we handle some of these operations in the running example of Sect. 1. For
convenience, the running example is repeated in Fig. 4³. Fig. 5 then shows the annotated
CFG corresponding to lines 13–17 of the running example. In the figure, some of the
CSs are assigned the container variable L. The assignment has been obtained as follows:
We took the assignment of the chain input/output parameters defined by templates of
ϒ, renamed it at individual locations using renamings λ` , and propagated the obtained
names of CSs along Ics .
Clearly, the pointer assignment on the edge ⟨13, p=h, 14⟩ can be replaced by the iterator initialisation p=front(L) because on line 13, the variable h either points to the front region of the CS L or, in case L is empty, equals ⊥. This is exactly what front(L) does: it returns a pointer to the front region of L, and if L is empty, it returns ⊥. The pointer statement of the edge ⟨16, p=p->f, 14⟩ can be replaced by the list iteration p=next(L,p) because in all SPCs at line 16, p points to an element of the CS L and next is the binding pointer of L.
In the following four paragraphs, we present detailed definitions of common and
frequently used non-destructive container operations.
Go-to-front. Let e be an edge of an analysed CFG labelled by either p′=p, p′=p->s, or p′->s′=p where p, p′ ∈ Vp and s, s′ ∈ Sp. This edge is a go-to-front element non-destructive container operation if there is a variable L ∈ Vc s.t. for each symbolic update (C, ·, ·) ∈ ∆e the following holds: If C does not contain any container shape,
³ Actually, Fig. 4(c) contains a somewhat more complex version of the annotated CFG than the one shown in Fig. 1(c). The reason is that Fig. 4(c) shows the actual output of Predator, whereas Fig. 1(c) has been slightly optimized for simplicity of the presentation. The optimization is based on the fact that a DLL consisting of one region is subsumed by the set of DLLs represented by a DLS; this subsumption is, however, not used by Predator.
[Fig. 4, parts (a) and (b); the annotated-CFG part (c) is graphical and its extracted text is not reproduced here. The code of parts (a) and (b) reads:]

(a) Low-level C code:

    typedef struct SNode {
        int x;
        struct SNode *f;
        struct SNode *b;
    } Node;
    #define NEW(T) (T*)malloc(sizeof(T))

     1  Node *h=0, *t=0;
     2  while (nondet()) {
     3      Node *p=NEW(Node);
     4      if (h==NULL)
     5          h=p;
     6      else
     7          t->f=p;
     8      p->f=NULL;
     9      p->x=nondet();
    10      p->b=t;
    11      t=p;
    12  }
        ...
    13  Node *p=h;
    14  while (p!=NULL) {
    15      p->x=0;
    16      p=p->f;
    17  }

(b) Transformed pseudo-C++ code using container operations:

    list L;
    while (nondet()) {
        Node *p=NEW(Node);
        p->x=nondet();
        L=push_back(L,p);
    }
    ...
    Node *p=front(L);
    while (p!=NULL) {
        p->x=0;
        p=next(L,p);
    }
Fig. 4. The running example from Sect. 1. (a) A C code using low-level pointer manipulations.
(b) The transformed pseudo-C++ code using container operations. (c) A part of the CFG of the
low-level code from Part (a) corresponding to lines 1-12, annotated by shape invariants (in the
form obtained from the Predator tool).
Fig. 5. The annotated CFG corresponding to lines 13–17 of the C code depicted in Fig. 4(a).
then either σC(p) = ⊥ or sC(σC(p)) = ⊥, according to whether the first or the second syntactical form of the edge implementing the go-to-front operation is used. Otherwise, σC(L) ≠ ⊤ and, according to the syntactical form used, either σC(p) or sC(σC(p)) is the front region of σC(L), respectively. We replace the label of each go-to-front edge, according to the syntactical form used, by p′=front(L), p′=front(L), or p′->s′=front(L), respectively.
Go-to-next. Let e be an edge of an analysed CFG labelled by p′=p->s where p, p′ ∈ Vp and s ∈ Sp. This edge is a go-to-next element non-destructive container operation if there is a variable L ∈ Vc s.t. for each symbolic update (C, ·, ·) ∈ ∆e it holds that σC(L) ≠ ⊤, s corresponds to the next selector, and σC(p) belongs to σC(L). We replace the label of each go-to-next edge by p′=next(L, p).
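As a toy illustration, the per-edge go-to-next check described above can be sketched as follows; the dictionary-based encoding of symbolic updates (sigma maps variables to regions, members lists the regions covered by a container shape) is a hypothetical simplification for illustration, not the paper's actual data structures.

```python
# TOP models the "unknown" value ⊤; an absent/None entry models ⊥.
TOP = object()

def is_go_to_next(edge_updates, L, p, s, next_selector):
    """An edge labelled p'=p->s qualifies as go-to-next if, in every
    symbolic update of the edge, the container shape of L is known
    (not TOP), s is L's next selector, and p points into L."""
    if s != next_selector:
        return False
    return all(u["sigma"].get(L, TOP) is not TOP
               and u["sigma"].get(p) in u["members"][L]
               for u in edge_updates)
```

If the check succeeds for some L, the edge label would be rewritten to p′=next(L, p).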
End-reached? Let e = ⟨ℓ, stmt, ·⟩ be an edge of an analysed CFG s.t. stmt is either p==(p′ | ⊥) or p!=(p′ | ⊥) where p, p′ ∈ Vp. Let p″ ∈ Vp be a fresh variable s.t. for each SPC C ∈ mem(ℓ), either σC(p″) = σC(p′) or σC(p″) = ⊥, according to the syntactical form used. The edge e is an end-reached non-destructive container query operation if there is a variable L ∈ Vc, two different variables x, y ∈ {p, p″}, and two SPCs C′, C″ ∈ mem(ℓ) s.t. σC′(L) ≠ ⊤, σC′(x) = ⊥, σC″(L) ≠ ⊤, σC″(x) ∉ {⊥, ⊤}, and for each symbolic update (C, ·, ·) ∈ ∆e the following holds: σC(y) = ⊥, and if C contains no container shape, then σC(x) = ⊥; otherwise, σC(L) ≠ ⊤ and σC(x) is either ⊥ or belongs to σC(L). We replace the label stmt of each end-reached edge by x==end(L) or x!=end(L), according to the syntactical form used, respectively.
Is-empty? Let e = ⟨ℓ, stmt, ·⟩ be an edge of an analysed CFG s.t. stmt is either of the form p==(p′ | ⊥) or p!=(p′ | ⊥) where p, p′ ∈ Vp. Let p″ ∈ Vp be a fresh variable s.t. for each SPC C ∈ mem(ℓ), either σC(p″) = σC(p′) or σC(p″) = ⊥, according to the syntactical form used. This edge is an is-empty non-destructive container query operation if there is a variable L ∈ Vc and an SPC C′ ∈ mem(ℓ) s.t. σC′(L) ≠ ⊤, and there are two different variables x, y ∈ {p, p″} s.t. for each symbolic update (C, ·, ·) ∈ ∆e the following holds: σC(y) = ⊥, and if C contains no container shape, then σC(x) = ⊥; otherwise, σC(L) ≠ ⊤ and σC(x) belongs to σC(L). We replace the label stmt of each is-empty edge by 0==is_empty(L) or 0!=is_empty(L), according to the syntactical form used, respectively.
We have omitted definitions of non-destructive container operations go-to-previous
and go-to-back since they are analogous to definitions of operations go-to-next and
go-to-front, respectively. There are more non-destructive container operations, such as
get-length. We have omitted them since we primarily focus on destructive operations in
this paper.
We now show how the definitions above apply to the edges of the annotated CFG cfg
corresponding to lines 13–17 of our running example from Fig. 4(a), which is depicted
in Fig. 5. Let us start with the edge ⟨13, p=h, 14⟩. Clearly, according to the syntax of the statement, the edge may only be a go-to-front edge. We can see in the figure that the leftmost SPC at location 13 does not contain any container shape and σ(h) = ⊥. In all the remaining SPCs at that location, a container variable L is defined and σ(h) always references the front region of σ(L). Therefore, we replace p=h by p=front(L).
Next, we consider the edge ⟨14, p!=⊥, 15⟩. According to the syntax of the statement, it can be either an is-empty or an end-reached edge. Since the variable p does not belong to the container shape referenced by L in the rightmost SPC at location 14, the edge cannot be an is-empty one. Nevertheless, it satisfies all properties of an end-reached edge. In particular, whenever L is present in an SPC, then p either references a region of L or it is ⊥, and both of these cases occur at that location (for two different SPCs). Moreover, the
[Figure 6: a CFG fragment with locations 1 to 5; its edges are labelled p->f=q, q->f->b=..., q->b=p, and q->f=..., and its states are annotated with container shapes X, Y, X′, Y′. The graphical layout is not recoverable from the extracted text.]
Fig. 6. A fraction of an annotated CFG depicting the interleaved operations push_back and pop_front applied on the CSs X and Y and the region q.
leftmost SPC at location 13 does not contain any container shape and σ(h) is indeed ⊥. Therefore, we replace p!=⊥ by p!=end(L). The discussion of the edge ⟨14, p==⊥, 17⟩ is similar to the last one, so we omit it.
The statement p->x=0 of the only outgoing edge from location 15 does not syntactically match any of our non-destructive container operations, so the label of this edge
remains unchanged.
Finally, the edge ⟨16, p=p->f, 14⟩ can be either a go-to-front or a go-to-next edge, according to the syntax of its statement. However, since p does not reference the front region of L in the rightmost SPC at location 16, the edge cannot be a go-to-front one. On the other hand, we can easily check that L is defined in each SPC attached to location 16, f represents the next selector of L, and p always references a region of L. So, this edge is a go-to-next edge, and we replace its label by p=next(L, p).
F An Example of an Interleaved Use of Container Operations
We now provide an example illustrating that our approach can handle even the case when a programmer interleaves the code implementing several container operations, in our case push_back and pop_front. The example is depicted in Fig. 6. The implementing edges of the push_back operation are (1, 2), (3, 4), and (4, 5). The edge (2, 3) is non-implementing because it operates on the pointer q->f->b of the region q->f, which is not included in the specification of the operation. The edge (2, 3) is, however, an implementing edge of the operation pop_front.
From the point of view of the push_back operation, one can see that the non-implementing edge (2, 3) can be safely moved before the edge (1, 2). From the point of view of the pop_front operation, this move is also possible since (1, 2) is non-implementing, i.e., it can be moved after the implementing edge (2, 3). Our algorithm considers this rearrangement of edges and so transforms this C code into the equivalent sequence of operations: (Y′, q) = pop_front(Y); X′ = push_back(X, q).
arXiv:1504.05158v1 [] 20 Apr 2015
Multi-swarm PSO algorithm for the Quadratic
Assignment Problem: a massive parallel
implementation on the OpenCL platform
Piotr Szwed and Wojciech Chmiel
AGH University of Science and Technology
Al. Mickiewicza 30
30 059 Kraków, Poland
{[email protected], [email protected]}
November 16, 2017
Abstract
This paper presents a multi-swarm PSO algorithm for the Quadratic
Assignment Problem (QAP) implemented on OpenCL platform. Our
work was motivated by results of time efficiency tests performed for a single-swarm algorithm implementation, which showed clearly that the benefits of a parallel execution platform can be fully exploited only if the processed population is large. The described algorithm can be executed in two modes:
with independent swarms or with migration. We discuss the algorithm construction and report results of tests performed on several
problem instances from the QAPLIB library. During the experiments the
algorithm was configured to process large populations. This allowed us
to collect statistical data related to values of goal function reached by
individual particles. We use them to demonstrate on two test cases that
although single particles seem to behave chaotically during the optimization process, when the whole population is analyzed, the probability that
a particle will select a near-optimal solution grows.
Keywords: QAP, PSO, OpenCL, GPU calculation, particle swarm optimization, multi-swarm, discrete optimization
1
Introduction
Quadratic Assignment Problem (QAP) [19, 5] is a well-known combinatorial problem that can be used as an optimization model in many areas [3, 23, 14]. QAP
may be formulated as follows: given a set of n facilities and n locations, the goal
is to find an assignment of facilities to unique locations that minimizes the sum
of flows between facilities multiplied by distances between their locations. As the
problem is NP-hard [30], it can be solved optimally only for small problem instances. For larger problems (n > 30), several heuristic algorithms were proposed [34, 24, 6].
One of the discussed methods [26, 21] is the Particle Swarm Optimization
(PSO). It attempts to find an optimal problem solution by moving a population
of particles in the search space. Each particle is characterized by two features: its position and velocity. Depending on the method variant, particles may exchange information on their positions and the reached values of the goal function [8].
In our recent work [33] we have developed a PSO algorithm for the Quadratic Assignment Problem on the OpenCL platform. The algorithm was capable of processing one swarm, in which particles shared information about the global best
solution to update their search directions. Following typical patterns for GPU
based calculations, the implementation was a combination of parallel tasks (kernels) executed on GPU orchestrated by sequential operations run on the host
(CPU). Such organization of computations involves inevitable overhead related
to data transfer between the host and the GPU device. The time efficiency
test reported in [33] showed clearly that the benefits of a parallel execution
platform can be fully exploited if the processed populations are large, e.g. if they comprise several hundreds or thousands of particles. For smaller populations the sequential algorithm implementation was superior both as regards the total swarm processing time and the time required to process one particle. This suggested a natural improvement of the previously developed algorithm: scaling it up
to high numbers of particles organized into several swarms.
In this paper we discuss a multi-swarm implementation of the PSO algorithm for the QAP problem on the OpenCL platform. The algorithm can be executed in two
modes: with independent swarms, each maintaining its best solution, or with
migration between swarms. We describe the algorithm construction and report tests performed on several problem instances from the QAPLIB
library [28]. Their results show advantages of massive parallel computing: the
obtained solutions are very close to optimal or best known for particular problem
instances.
The developed algorithm is not designed to exploit the problem specificity
(see for example [10]), nor is it intended to compete with supercomputer- or grid-based implementations providing exact solutions for the QAP [2].
On the contrary, we are targeting low-end GPU devices, which are present in
most laptops and workstations in everyday use, and accept near-optimal solutions.
During the tests the algorithm was configured to process large numbers of
particles (in the order of 10000). This allowed us to collect data related to
goal function values reached by individual particles and present such statistical measures as percentile ranks and probability mass functions for the whole
populations or selected swarms.
The paper is organized as follows: Section 2 discusses the QAP problem,
as well as the PSO method. It is followed by Section 3, which describes the
adaptation of PSO to the QAP and the parallel implementation on the OpenCL
platform. Experiments performed and their results are presented in Section 4.
Section 5 provides concluding remarks.
2 Related works
2.1 Quadratic Assignment Problem
Quadratic Assignment Problem was introduced by Koopmans and Beckman in
1957 as a mathematical model describing assignment of economic activities to
a set of locations [19].
Let V = {1, ..., n} be a set of locations (nodes) linked by n² arcs. Each arc linking a pair of nodes (k, l) is attributed with a non-negative weight dkl interpreted as a distance. Distances are usually presented in the form of an n × n distance matrix D = [dkl]. The next problem component is a set of facilities N = {1, ..., n} and an n × n non-negative flow matrix F = [fij], whose elements describe flows between pairs of facilities (i, j).
The problem goal is to find an assignment π : N → V that minimizes the total cost calculated as the sum of flows fij between pairs of facilities (i, j) multiplied by the distances dπ(i)π(j) between the pairs of locations (π(i), π(j)) to which they are assigned. The permutation π can be encoded as n² binary variables xki, where k = π(i), which gives the following problem statement:
min  Σ_{i=1}^{n} Σ_{j=1}^{n} Σ_{k=1}^{n} Σ_{l=1}^{n} fij dkl xki xlj        (1)
subject to:
    Σ_{i=1}^{n} xij = 1,  for 1 ≤ j ≤ n
    Σ_{j=1}^{n} xij = 1,  for 1 ≤ i ≤ n        (2)
    xij ∈ {0, 1}
The n × n matrix X = [xki] satisfying (2) is called a permutation matrix.
In most cases the matrices D and F are symmetric. Moreover, their diagonal elements are often equal to 0. Otherwise, the component fii dkk xki xki can be extracted as a linear part of the goal function, interpreted as an installation cost of the i-th facility at the k-th location.
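For concreteness, the objective defined by (1) can be evaluated directly from a permutation π (with π encoded as a tuple so that facility i is placed at location perm[i]); the helper names below are ours, not from the paper.

```python
import itertools

def qap_cost(F, D, perm):
    """Total cost sum_{i,j} F[i][j] * D[perm[i]][perm[j]] of placing
    facility i at location perm[i]."""
    n = len(perm)
    return sum(F[i][j] * D[perm[i]][perm[j]]
               for i in range(n) for j in range(n))

def brute_force(F, D):
    """Exact minimizer by enumeration; feasible only for tiny n."""
    n = len(F)
    return min(itertools.permutations(range(n)),
               key=lambda p: qap_cost(F, D, p))
```

Such exhaustive enumeration visits n! permutations, which is exactly why the heuristics discussed in this paper are needed beyond n ≈ 30.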
QAP models found application in various areas including transportation
[3], scheduling, electronics (wiring problem), distributed computing, statistical
data analysis (reconstruction of destroyed soundtracks), balancing of turbine
running [23], chemistry, genetics [29], creation of control panels, and manufacturing [14].
In 1976 Sahni and Gonzalez proved that the QAP is strongly NP-hard [30], by showing that a hypothetical polynomial-time algorithm for solving the QAP would imply the existence of a polynomial-time algorithm for an NP-complete decision problem: the Hamiltonian cycle.
In many research works the QAP is considered one of the most challenging optimization problems. This applies in particular to the problem instances gathered in a
publicly available and continuously updated QAPLIB library [28, 4]. A practical size limit for problems that can be solved with exact algorithms is about
n = 30 [16]. In many cases optimal solutions were found with a branch-and-bound algorithm requiring the high computational power offered by computational grids [2] or supercomputing clusters equipped with a few dozen processor cores and hundreds of gigabytes of memory [15]. On the other hand, in [10] a very successful approach exploiting the problem structure was reported, which allowed several hard problems from QAPLIB to be solved using very little resources.
A number of heuristic algorithms allowing one to find near-optimal solutions for the QAP were proposed. They include the Genetic Algorithm [1], various versions of Tabu search [34], Ant Colonies [32, 12] and the Bees algorithm [11]. Another method, discussed further below, is Particle Swarm Optimization [26, 21].
2.2 Particle Swarm Optimization
The classical PSO algorithm [8] is an optimization method defined for a continuous domain. During the optimization process a number of particles move through a search space and update their state at discrete time steps t = 1, 2, 3, . . . Each particle is characterized by a position x(t) and a velocity v(t). A particle remembers its best position reached so far, pL(t), and can use information about the best solution found by the swarm, pG(t).
The state equation for a particle is given by formula (3). Coefficients c1, c2, c3 ∈ [0, 1] are called respectively the inertia, cognition (or self recognition) and social factors, whereas r2, r3 are random numbers uniformly distributed in [0, 1]:

v(t + 1) = c1 · v(t) + c2 · r2(t) · (pL(t) − x(t)) + c3 · r3(t) · (pG(t) − x(t))
x(t + 1) = x(t) + v(t)        (3)
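A minimal sketch of one update step of (3) for a single particle in the continuous domain; the list-based vector representation and the function name are ours.

```python
import random

def pso_step(x, v, p_local, p_global, c1, c2, c3):
    """One synchronous application of formula (3): new velocity from
    inertia, cognition and social terms, then new position x + v."""
    r2, r3 = random.random(), random.random()
    v_new = [c1 * vi + c2 * r2 * (pl - xi) + c3 * r3 * (pg - xi)
             for vi, xi, pl, pg in zip(v, x, p_local, p_global)]
    x_new = [xi + vi for xi, vi in zip(x, v_new)]
    return x_new, v_new
```

With c2 = c3 = 0 the random terms vanish and the particle simply drifts along its damped previous velocity.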
An adaptation of the PSO method to a discrete domain necessitates giving an interpretation to the velocity concept, as well as defining equivalents of scalar
multiplication, subtraction and addition for arguments being solutions and velocities. Examples of such interpretations can be found in [7] for the TSP and
[26] for the QAP.
A PSO algorithm for solving the QAP using similar representations of the particle state was proposed by Liu et al. [21]. Although the approach presented there was inspiring, the paper gives very little information on the efficiency of the developed algorithm.
2.3 GPU based calculations
Recently many computationally demanding applications have been redesigned to
exploit the capabilities offered by massively parallel computing GPU platforms.
They include such tasks as: physically based simulations, signal processing, ray
tracing, geometric computing and data mining [27]. Several attempts have also been made to develop various population-based optimization algorithms on GPUs, including particle swarm optimization [36], ant colony optimization [35],
and genetic [22] and memetic algorithms [20]. The described implementations
benefit from capabilities offered by GPUs by processing whole populations by
fast GPU cores running in parallel.
3 Algorithm design and implementation
In this section we describe the algorithm design, in particular the adaptation
of Particle Swarm Optimization metaheuristic to the QAP problem, as well as
a specific algorithm implementation on the OpenCL platform. As it was stated in
Section 2.2, the PSO uses generic concepts of position x and velocity v that can
be mapped to a particular problem in various ways. Designing an algorithm for
a GPU platform requires decisions on how to divide it into parts that are either
executed sequentially at the host side or in parallel on the device.
3.1 PSO adaptation for the QAP problem
A state of a particle is a pair (X, V ). In the presented approach both are
n × n matrices, where n is the problem size. The permutation matrix X = [xij ]
encodes an assignment of facilities to locations. Its elements xij are equal to 1 if the j-th facility is assigned to the i-th location, and take value 0 otherwise.
A particle moves in the solution space following the direction given by the
velocity V . Elements vij have the following interpretation: if vij has high
positive value, then a procedure determining the next solution should favor an
assignment xij = 1. On the other hand, if vij ≤ 0, then xij = 0 should be
preferred.
The state of a particle reached in the t-th iteration will be denoted by
(X(t), V (t)). In each iteration it is updated according to formulas (4) and
(5).
V (t + 1) = Sv( c1 · V (t) + c2 · r2(t) · (P L(t) − X(t)) + c3 · r3(t) · (P G(t) − X(t)) )        (4)
X(t + 1) = Sx( X(t) + V (t) )        (5)
Coefficients r2 and r3 are random numbers from [0, 1] generated in each iteration for every particle separately. They are introduced to model a random
choice between movements in the previous direction (according to c1 – inertia), the best local solution (self recognition) or the global best solution (social
behavior).
All operators appearing in (4) and (5) are standard operators from linear
algebra. Instead of redefining them for a particular problem, see e.g. [7], we
propose to use aggregation functions Sv and Sx that allow to adapt the algorithm
to particular needs of a discrete problem.
The function Sv is used to assure that velocities have reasonable values.
Initially, we thought that unconstrained growth of velocity can be a problem,
therefore we have implemented a function, which restricts the elements of V to
an interval [−vmax, vmax]. This function is referred to as raw in Table 1. However, the experiments conducted showed that in the case of a small inertia factor, e.g. c1 = 0.5, after a few iterations all velocities tend to 0 and in consequence all particles converge to the best solution encountered earlier by the swarm. To avoid this effect, another function that additionally performs column normalization was proposed.
For each j-th column the sum of absolute values of the elements, nj = Σ_{i=1}^{n} |vij|, is calculated and then the following assignment is made: vij ← vij / nj.
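A sketch of the Sv aggregation combining the raw clamping with the described column normalization; the function name and the decision to leave an all-zero column untouched are our assumptions.

```python
def s_v(V, vmax, normalize=True):
    """Clamp every velocity to [-vmax, vmax] (the 'raw' variant); when
    normalize is set, rescale each column j by n_j = sum_i |v_ij|."""
    n = len(V)
    W = [[max(-vmax, min(vmax, V[i][j])) for j in range(n)] for i in range(n)]
    if normalize:
        for j in range(n):
            nj = sum(abs(W[i][j]) for i in range(n))
            if nj:  # an all-zero column stays as it is
                for i in range(n):
                    W[i][j] /= nj
    return W
```

After normalization the absolute values in each nonzero column sum to 1, which prevents all velocities from collapsing to 0 under a small inertia factor.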
According to formula (5) a new particle position X(t + 1) is obtained by
aggregating the previous state components: X(t) and V (t). As elements of a
matrix X(t) + V (t) may take values from [−vmax, vmax + 1], the Sx function is responsible for converting it into a valid permutation matrix satisfying (2), i.e. having exactly one 1 in each row and column. Actually, Sx is rather a procedure than a function, as it incorporates some elements of random choice.
Three variants of Sx procedures were implemented:
1. GlobalMax(X) – iteratively searches for xrc, a maximum element in the matrix X, sets it to 1 and clears the other elements in row r and column c.
2. PickColumn(X) – picks a column c from X, selects a maximum element xrc, replaces it by 1 and clears the other elements in row r and column c.
3. SecondTarget(X, Z, d) – similar to GlobalMax(X), however during the first d iterations it ignores elements xij such that zij = 1. (As the parameter Z, the solution X from the last iteration is used.)
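The GlobalMax variant can be sketched as follows; ties are resolved deterministically here by the iteration order, which is a simplification of our own.

```python
def global_max(X):
    """GlobalMax aggregation: repeatedly pick the maximum entry of X,
    set it to 1, and remove its row and column from further selection,
    yielding a permutation matrix."""
    n = len(X)
    Y = [[0] * n for _ in range(n)]
    rows, cols = set(range(n)), set(range(n))
    for _ in range(n):
        r, c = max(((i, j) for i in rows for j in cols),
                   key=lambda rc: X[rc[0]][rc[1]])
        Y[r][c] = 1
        rows.discard(r)
        cols.discard(c)
    return Y
```

Applied to the 3 × 3 example discussed in the text, this procedure returns the previous solution X again, illustrating how a particle can get stuck.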
In several experiments in which the GlobalMax aggregation procedure was applied, particles seemed to get stuck, even if their velocities were far from zero [33]. We reproduce this effect on a small 3 × 3 example:
X = [1 0 0; 0 0 1; 0 1 0],    V = [7 1 3; 0 4 5; 2 3 2],
X + V = [8 1 3; 0 4 6; 2 4 2],    Sx(X + V) = [1 0 0; 0 0 1; 0 1 0]
If GlobalMax(X(t) + V (t)) is used in the described case, then in subsequent iterations it will hold that X(t + 1) = X(t), until another particle is capable of changing the (P G(t) − X(t)) component of formula (4) for the velocity calculation.
A solution to this problem can be to move a particle in a secondary direction (target) by ignoring d < n elements that are already set to 1 in the solution X(t). This, depending on d, gives an opportunity to reach other solutions, hopefully yielding smaller goal function values. If the chosen elements are maximal in the remaining matrix, denoted here as X ⊕d V, they are still reasonable movement directions. For the discussed example, possible values of the X ⊕d V matrices for d = 1 and d = 2 are shown in (6). Elements of a new solution are marked with circles, whereas the upper index indicates the iteration in which the element was chosen.
in which the element was chosen.
82
0
1V =
2
8
3
0
V
=
2
2
X
X
X
8
0
2V =
23
1
4
43
1
4
41
1
41
4
3
61
2
32
6
2
or
2
3
6
2
(6)
It can be observed that for d = 1 the result is exactly the same as would be obtained from GlobalMax, however setting d = 2 allows a different solution to be reached. The pseudocode of the SecondTarget procedure is listed in Algorithm 1.
3.2 Migration
The intended goal of the migration mechanism is to improve the algorithm
exploration capabilities by exchanging information between swarms. Actually,
it is not a true migration, as particles do not move. Instead, we modify the stored P G[k] solution (the global best solution for the k-th swarm), replacing it by a randomly picked solution from a swarm that performed better (see Algorithm 2).
The newly set P G [k] value influences the velocity vector for all particles in
k-th swarm according to formula (4). It may happen that the goal function value corresponding to the assigned solution P G[k] is worse than the previous
one. It is accepted, as the migration is primarily designed to increase diversity
within swarms.
It should be mentioned that a naive approach consisting in copying best
P G values between the swarms would be incorrect. (Consider replacing line 5
of Algorithm 2 with: P G[sm−k−1] ← P G[sk].) In such a case, during an algorithm stagnation spanning several iterations, in the first iteration the best value P G[1] would be cloned, in the second two copies would be created, in the third four, and so on. Finally, after k iterations 2^k swarms would follow the same direction. In the first group of experiments reported in Section 4 we used up to 250 swarms. This means that after 8 iterations all swarms would be dominated by a single solution.
7
Algorithm 1 Aggregation procedure SecondTarget
Require: X = Z + V — new solution requiring normalization
Require: Z — previous solution
Require: depth — number of iterations in which, during selection of the maximum element, the algorithm ignores positions where the corresponding element of Z is equal to 1
1: procedure SecondTarget(X, Z, depth)
2:   R ← {1, . . . , n}
3:   C ← {1, . . . , n}
4:   for i in (1, n) do
5:     Calculate M, the set of maximum elements
6:     if i ≤ depth then
7:       Ignore elements x_ij such that z_ij = 1
8:       M ← {(r, c) : z_rc ≠ 1 ∧ ∀ i∈R, j∈C, z_ij≠1 : x_rc ≥ x_ij}
9:     else
10:      M ← {(r, c) : ∀ i∈R, j∈C : x_rc ≥ x_ij}
11:    end if
12:    Randomly select (r, c) from M
13:    R ← R \ {r}        ▷ Update the sets R and C
14:    C ← C \ {c}
15:    for i in (1, n) do
16:      x_ri ← 0         ▷ Clear r-th row
17:      x_ic ← 0         ▷ Clear c-th column
18:    end for
19:    x_rc ← 1           ▷ Assign 1 to the maximum element
20:  end for
21:  return X
22: end procedure
Algorithm 2 Migration procedure
Require: d — migration depth satisfying d < m/2, where m is the number of swarms
Require: P^G — table of best solutions for m swarms
Require: X — set of all solutions
1: procedure Migration(d, P^G, X)
2:   Sort swarms according to their P^G_k values into a sequence (s_1, s_2, . . . , s_{m−2}, s_{m−1})
3:   for k in (1, d) do
4:     Randomly choose a solution x_kj belonging to the swarm s_k
5:     Assign P^G[s_{m−k−1}] ← x_kj
6:     Update the best goal function value for the swarm m − k − 1
7:   end for
8: end procedure
3.3  OpenCL algorithm implementation
OpenCL [18] is a standard providing a common language, programming interfaces and hardware abstraction for heterogeneous platforms including GPUs, multicore CPUs, DSPs and FPGAs [31]. It makes it possible to accelerate computations by decomposing them into a set of parallel tasks (work items) operating on separate data.
A program on the OpenCL platform is decomposed into two parts: a sequential part executed by the CPU host and a parallel part executed by multicore devices. Functions executed on devices are called kernels. They are written in a language that is a variant of C with some restrictions related to keywords and datatypes. When loaded for the first time, the kernels are automatically translated into the instruction set of the target device. The whole process takes about 500 ms.
OpenCL supports 1D, 2D or 3D organization of data (arrays, matrices and volumes). Each data element is identified by 1 to 3 indices, e.g. d[i][j] for two-dimensional arrays. A work item is a scheduled kernel instance, which obtains a combination of data indices within the data range. To give an example, a 2D data array of size n × m should be processed by n · m kernel instances, which are assigned pairs of indices (i, j), 0 ≤ i < n and 0 ≤ j < m. Those indices are used to identify the data items assigned to kernels.
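The index-to-work-item mapping can be mimicked in sequential Python as follows (purely illustrative: on an OpenCL device the kernel instances below would execute in parallel):

```python
def dispatch_2d(n, m, kernel):
    """Sketch of a 2D NDRange dispatch: the runtime launches n * m work items,
    each kernel instance receiving its global id pair (i, j)."""
    for i in range(n):          # these iterations run in parallel on a device
        for j in range(m):
            kernel(i, j)

# each work item doubles the single data element assigned to it
data = [[1, 2, 3], [4, 5, 6]]

def double_kernel(i, j):
    data[i][j] *= 2

dispatch_2d(2, 3, double_kernel)
```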
Additionally, kernels can be organized into workgroups, e.g. corresponding to parts of a matrix, and can synchronize their operations within a group using the so-called local barrier mechanism. However, workgroups suffer from several platform restrictions related to the number of work items and the amount of accessible memory.
OpenCL uses three types of memory: global (exchanged between the host and the device), local (for a work group) and private (for a work item).
In our implementation we used the aparapi platform [17], which allows one to write
Figure 1: Functional blocks of OpenCL based algorithm implementation.
OpenCL programs directly in the Java language. The platform comprises two parts: an API and a runtime capable of converting Java bytecodes into OpenCL workloads. Hence, the host part of the program is executed on a Java virtual machine, while the kernels, originally written in Java, are executed on an OpenCL enabled device.
The basic functional blocks of the algorithm are presented in Fig. 1. Implemented kernels are marked with gray color. The code responsible for the generation of random particles is executed by the host. We have also decided to leave the code for updating best solutions at the host side; in practice, it comprises a number of native System.arraycopy() calls. The same applies to the migration procedure, which sorts swarm indexes in a table according to their P^G values and copies randomly picked entries.
Particle data comprise a number of matrices (see Fig. 2): X and Xnew — solutions, P^L — local best particle solution, and V — velocity. They are all stored in large contiguous tables shared by all particles. The appropriate table part belonging to a particle can be identified based on the particle id transferred to a kernel. Moreover, while updating velocity, the particles reference a table P^G indexed by the swarm id.
An important decision in OpenCL program design is the selection of data ranges.

Figure 2: Global variables used in the algorithm implementation

The memory layout in Fig. 2 suggests a 3D range, whose dimensions are: row, column and particle number. This can be applied to relatively simple velocity or goal function calculations. However, the proposed algorithms for S_x, see Algorithm 1, are far too complicated to be implemented as a simple parallel work item. Finally, we decided to use one dimension (particle id) for S_x and goal function calculation, and two dimensions (particle id, swarm id) for velocity kernels.
It should be mentioned that the requirements of parallel processing limit the applicability of object-oriented design on the host side. In particular, we avoid creating particles or swarms with their own memory and then copying small chunks between the host and the device. Instead, we use a flyweight design pattern [13]: if a particle abstraction is needed, a single object can be configured to see parts of the large global arrays X, V as its own memory and perform the required operations, e.g. initialization with random values.
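In Python terms, the idea can be sketched as follows (our own illustration; the actual implementation is written in Java on top of aparapi):

```python
import array

class ParticleView:
    """Flyweight sketch: one reusable object windows into a flat global table
    instead of each particle owning its own matrix.  Assumed layout: n * n
    values per particle, stored back to back."""
    def __init__(self, table, n):
        self.table, self.n, self.pid = table, n, 0
    def bind(self, pid):
        self.pid = pid                  # point the view at particle `pid`
        return self
    def get(self, i, j):
        return self.table[self.pid * self.n * self.n + i * self.n + j]
    def set(self, i, j, v):
        self.table[self.pid * self.n * self.n + i * self.n + j] = v

# one flat table for 4 particles with 3 x 3 matrices; a single view serves them all
X = array.array('d', [0.0] * (3 * 3 * 4))
view = ParticleView(X, 3)
view.bind(2).set(1, 1, 7.0)
```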
4  Experiments and results
In this section we report the results of the conducted experiments, which aimed at establishing the optimization performance of the implemented algorithm, as well as at collecting data related to its statistical properties.
4.1  Optimization results
The algorithm was tested on several problem instances from the QAPLIB [28], whose size ranged between 12 and 150. The results are gathered in Table 1 and Table 2. The selection of algorithm configuration parameters (the c1, c2 and c3 factors, as well as the kernels used) was based on previous results published in [33]. In all cases the second target S_x aggregation kernel was applied (see Algorithm 1), which in previous experiments proved the most successful.
During all tests reported in Table 1, apart from the last, the total numbers of particles were large: 10000-12500. For the last case only 2500 particles were used due to the 1 GB memory limit of the GPU device (an AMD Radeon HD 6750M card). In this case the consumed GPU memory was about 950 MB.
The results show that the algorithm is capable of finding solutions with goal function values close to the reference numbers listed in QAPLIB. The gap is between 0% and 6.4% for the biggest case, tai150b. We have repeated the tests for the tai60b problem to compare the implemented multi-swarm algorithm with the previous single-swarm version published in [33]. Gap values for the best results obtained with the single-swarm algorithm were around 7%-8%. For the multi-swarm implementation discussed here the gaps were between 0.64% and 2.03%.
The goal of the second group of experiments was to test the algorithm configured to employ large numbers of particles (50000-100000) on the well known esc32* problem instances from the QAPLIB. Although they were considered hard, all of them have recently been solved optimally with exact algorithms [25, 10]. The results are summarized in Table 2. We used the following parameters: c1 = 0.8, c2 = 0.5, c3 = 0.5; velocity kernel: normalized; S_x kernel: second target. During nearly all experiments optimal values of the goal functions were reached in one algorithm run. Only the problem esc32a proved difficult; therefore, for this case the number of particles, as well as the upper iteration limit, was increased to reach the optimal solution. Somewhat surprisingly, in all cases solutions differing from those listed in QAPLIB were obtained. Unfortunately, our algorithm was not prepared to collect sets of optimal solutions, so we are not able to provide detailed results on their numbers.
It can be seen that optimal solutions for the problem instances esc32c–h were found in relatively small numbers of iterations. In particular, for esc32e and esc32g, which are characterized by small values of the goal functions, optimal solutions were found during the initialization or in the first iteration.
The disadvantage of the presented algorithm is that it internally uses a matrix representation for solutions and velocities. In consequence the memory consumption is proportional to n^2, where n is the problem size. The same regards the time complexity, which for the goal function and S_x procedures can be estimated as O(n^3). This makes optimization of large problems time consuming (e.g. even 400 sec for one iteration of tai150b). However, for medium size problem instances the iteration times are much smaller, in spite of the large populations used. For the two runs of the algorithm on bur26a reported in Table 1, where during each iteration 12500 particles were processed, the average iteration time was equal to 1.73 sec. For 50000-100000 particles and problems of size n = 32 the average iteration time reported in Table 2 was less than 3.7 seconds.
4.2  Statistical results
An obvious benefit of massively parallel computations is the capability of processing large populations (see Table 2). Such an approach to optimization may somewhat resemble a brute-force attack: the solution space is randomly sampled millions of times to hit the best solution. No doubt such an approach can be more successful if combined with a correctly designed exploration mech-
Table 1: Results of tests for various instances from QAPLIB

No | Instance | Size | Number of swarms | Swarm size | Total particles | Inertia c1 | Self recognition c2 | Social factor c3 | Velocity kernel | Migration factor | Reached goal | Reference value | Gap | Iteration
1  | chr12a  | 12  | 200 | 50 | 10000 | 0.5 | 0.5 | 0.5 | Norm | 0   | 9 552         | 9 552         | 0.00% | 21
2  | bur26a  | 26  | 250 | 50 | 12500 | 0.8 | 0.5 | 0.5 | Raw  | 33% | 5 426 670     | 5 426 670     | 0.00% | 156
3  | bur26a  | 26  | 250 | 50 | 12500 | 0.8 | 0.5 | 0.5 | Raw  | 0   | 5 429 693     | 5 426 670     | 0.06% | 189
4  | lipa50a | 50  | 200 | 50 | 10000 | 0.5 | 0.5 | 0.5 | Norm | 0   | 62 794        | 62 093        | 1.13% | 1640
5  | tai60a  | 60  | 200 | 50 | 10000 | 0.8 | 0.5 | 0.5 | Norm | 0   | 7 539 614     | 7 205 962     | 4.63% | 817
6  | tai60a  | 60  | 300 | 50 | 15000 | 0.8 | 0.5 | 0.5 | Raw  | 0   | 7 426 672     | 7 205 962     | 3.06% | 917
7  | tai60b  | 60  | 200 | 50 | 10000 | 0.8 | 0.3 | 0.3 | Norm | 0   | 620 557 952   | 608 215 054   | 2.03% | 909
8  | tai60b  | 60  | 200 | 50 | 10000 | 0.5 | 0.5 | 0.5 | Norm | 0   | 617 825 984   | 608 215 054   | 1.58% | 1982
9  | tai60b  | 60  | 200 | 50 | 10000 | 0.8 | 0.3 | 0.3 | Norm | 33% | 612 078 720   | 608 215 054   | 0.64% | 2220
10 | tai60b  | 60  | 200 | 50 | 10000 | 0.8 | 0.3 | 0.3 | Norm | 33% | 614 088 768   | 608 215 054   | 0.97% | 1619
11 | tai64c  | 64  | 200 | 50 | 10000 | 0.8 | 0.5 | 0.5 | Norm | 0   | 1 856 396     | 1 855 928     | 0.03% | 228
12 | esc64a  | 64  | 200 | 50 | 10000 | 0.8 | 0.5 | 0.5 | Raw  | 0   | 116           | 116           | 0.00% | 71
13 | tai80a  | 80  | 200 | 50 | 10000 | 0.8 | 0.5 | 0.5 | Raw  | 0   | 14 038 392    | 13 499 184    | 3.99% | 1718
14 | tai80b  | 80  | 200 | 50 | 10000 | 0.8 | 0.5 | 0.5 | Raw  | 0   | 835 426 944   | 818 415 043   | 2.08% | 1509
15 | sko100a | 100 | 200 | 50 | 10000 | 0.8 | 0.5 | 0.5 | Raw  | 0   | 154 874       | 152 002       | 1.89% | 1877
16 | tai100b | 100 | 200 | 50 | 10000 | 0.8 | 0.5 | 0.5 | Raw  | 0   | 1 196 819 712 | 1 185 996 137 | 0.91% | 1980
17 | esc128  | 128 | 100 | 50 | 5000  | 0.8 | 0.5 | 0.5 | Raw  | 0   | 64            | 64            | 0.00% | 1875
18 | tai150b | 150 | 50  | 50 | 2500  | 0.8 | 0.5 | 0.5 | Raw  | 0   | 530 816 224   | 498 896 643   | 6.40% | 1894
Table 2: Results of tests for esc32* instances from QAPLIB (problem size n = 32). Reached optimal values are marked with asterisks.

Instance | Swarms | Particles | Total part. | Goal | Iter | Time/iter [ms]
esc32a | 50 | 1000 | 100000 | 138  | 412  | 3590.08
esc32a | 10 | 5000 | 100000 | 134  | 909  | 3636.76
esc32a | 50 | 2000 | 100000 | 130* | 2407 | 3653.88
esc32b | 50 | 1000 | 50000  | 168* | 684  | 3637.84
esc32c | 50 | 1000 | 50000  | 642* | 22   | 3695.19
esc32d | 50 | 1000 | 50000  | 400* | 75   | 3675.32
esc32e | 50 | 1000 | 50000  | 2*   | 0    | 3670.38
esc32g | 50 | 1000 | 50000  | 6*   | 1    | 3625.17
esc32h | 50 | 1000 | 50000  | 438* | 77   | 3625.17
anism that directs the random search process towards good or near-optimal solutions. In this section we analyze collected statistical data related to the algorithm execution to show that the optimization performance of the algorithm can be attributed not only to the large sizes of the processed population, but also to the implemented exploration mechanism.
The PSO algorithm can be considered a stochastic process controlled by the random variables r2(t) and r3(t) appearing in its state equation (3). Such an analysis for continuous problems was conducted in [9]. On the other hand, the observable algorithm outcomes, i.e. the values of the goal functions f(x_i(t)) for solutions x_i, i = 1, . . . , n, reached in consecutive time moments t ∈ {1, 2, 3, . . . }, can also be treated as random variables whose distributions change over time t. Our intuition is that a correctly designed algorithm should result in a nonstationary stochastic process {f(x_i(t)) : t ∈ T}, characterized by a growing probability that the next values of the goal functions in the analyzed population are closer to the optimal solution.
To demonstrate such behavior of the implemented algorithm, we have collected detailed information on goal function values during two optimization tasks for the problem instance bur26a reported in Table 1 (cases 2 and 3). For both of them the algorithm was configured to use 250 swarms comprising 50 particles. In case 2 the migration mechanism was applied and the optimal solution was found in iteration 156; in case 3 (without migration) a solution very close to optimal (gap 0.06%) was reached in iteration 189.
Fig. 3 shows values of the goal function for two selected particles during run 3. The plots show a typical QAP specificity. PSO and many other algorithms perform a local neighborhood search. For the QAP the neighborhood is characterized by great variations of goal function values. Although the mean values of the goal function decrease in the first twenty or thirty iterations, the particles behave randomly and nothing indicates that during subsequent iterations smaller values of goal functions would be reached more often.
In Fig. 4, percentile ranks (75%, 50%, 25% and 5%) for the two swarms which
Figure 3: Variations of goal function values for two particles exploring the solution space during the optimization process (bur26a problem instance)
reached the best values in cases 2 and 3 are presented. Although case 3 is characterized by less frequent changes of scores than case 2, this effect probably cannot be attributed to the applied migration. It should be mentioned that for a swarm comprising 50 particles, the 0.05 percentile corresponds to just two of them.
Figure 4: Two runs of bur26a optimization. Percentile ranks for 50 particles belonging to the most successful swarms: without migration (above) and with migration (below).
Collected percentile rank values for the whole population comprising 12500 particles are presented in Fig. 5. For both cases the plots are clearly separated. It can also be observed that solutions very close to optimal are practically reached between iterations 20 (37.3 sec) and 40 (72.4 sec). For the whole population the 0.05 percentile represents 625 particles. Starting with iteration 42 their score varies between 5.449048 · 10^6 and 5.432361 · 10^6, i.e. by about 0.3%.
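For reference, percentile ranks of this kind can be computed with a simple nearest-rank rule (a sketch; the exact percentile definition used for the figures is not specified in the text):

```python
def percentile_rank(values, p):
    """Nearest-rank percentile: a goal value below which (roughly) a fraction
    p of the population lies."""
    s = sorted(values)
    k = min(len(s) - 1, int(p * len(s)))
    return s[k]

pop = list(range(1, 101))       # a toy population of 100 goal values
```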
Figure 5: Two runs of bur26a optimization. Percentile ranks for all 12500 particles: without migration (above) and with migration (below).
Fig. 6 shows how the probability distribution (probability mass function, PMF) changed during the optimization process. In both cases the optimization process starts with a normal distribution with a mean value of about 594500. In the subsequent iterations the maximum of the PMF grows and moves towards smaller values of the goal function. There is no fundamental difference between the two cases; however, for case 2 (with migration) the maximal values of the PMF are higher. It can also be observed that in iteration 30 (completed in 56 seconds) the probability of hitting a good solution is quite high, more than 10%.
The interpretation of the PMF for the two most successful swarms that reached the best values in the discussed cases is not that obvious. For the case without migration (Fig. 7, above) there is a clear separation between the initial distribution and the distribution reached in the iteration which yielded the best result. In the second case (with migration) a number of particles were concentrated around local minima.
The presented data show the advantages of optimization performed on mas-
Figure 6: Probability mass functions for 12500 particles organized into 250 x 50 swarms during two runs: without migration (above) and with migration (below).
sive parallel processing platforms. Due to the high number of solutions analyzed simultaneously, an algorithm that does not exploit the problem structure can yield acceptable results in a relatively small number of iterations (and time). For the low-end GPU device that was used during the tests, good enough results were obtained after 56 seconds. It should be mentioned that for both presented cases the maximum number of iterations was set to 200. With 12500 particles, the ratio of potentially explored solutions to the whole solution space was equal to 200 · 12500/26! = 6.2 · 10^{−21}.
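This ratio can be verified directly (our own sanity check, not part of the paper):

```python
import math

# 200 iterations x 12500 particles against the 26! candidate permutations (bur26a)
explored = 200 * 12500
ratio = explored / math.factorial(26)
print(ratio)        # on the order of 6.2e-21
```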
5  Conclusions
In this paper we describe a multi-swarm PSO algorithm for solving the QAP, designed for the OpenCL platform. The algorithm is capable of processing in parallel a large number of particles organized into several swarms that either run independently or communicate using the migration mechanism. Several solutions related to particle state representation and particle movement were inspired by the work of Liu et al. [21]; however, they were refined here to provide better performance.
We tested the algorithm on several problem instances from the QAPLIB library, obtaining good results (small gaps between the reached solutions and the reference values). However, it seems that for problem instances of large sizes the selected
Figure 7: Probability mass functions for 50 particles belonging to the most successful swarms during two runs: without migration (above) and with migration (below). One point represents an upper bound for 5 particles.
representation of solutions in the form of permutation matrices hinders the potential benefits of parallel processing.
During the experiments the algorithm was configured to process large populations. This allowed us to collect statistical data related to the goal function values reached by individual particles. We used them to demonstrate on two cases that, although single particles seem to behave chaotically during the optimization process, when the whole population is analyzed, the probability that a particle will select a near-optimal solution grows. This growth is significant for a number of initial iterations; then its speed diminishes and finally reaches zero.
Statistical analysis of experimental data collected during the optimization process may help to tune the algorithm parameters, as well as to establish realistic limits on the expected improvement of goal functions. This in particular regards practical applications of optimization techniques in which recurring optimization problems appear, i.e. problems with similar size, complexity and structure. Such problems can be near-optimally solved in bounded time on massively parallel computation platforms even if low-end devices are used.
References
[1] Ahuja, R.K., Orlin, J.B., Tiwari, A.: A greedy genetic algorithm for the
quadratic assignment problem. Computers & Operations Research 27(10),
917–934 (2000)
[2] Anstreicher, K., Brixius, N., Goux, J.P., Linderoth, J.: Solving large
quadratic assignment problems on computational grids. Mathematical Programming 91(3), 563–588 (2002)
[3] Bermudez, R., Cole, M.H.: A genetic algorithm approach to door assignments in breakbulk terminals. Tech. Rep. MBTC-1102, Mack-Blackwell
Transportation Center, University of Arkansas, Fayetteville, Arkansas
(2001)
[4] Burkard, R.E., Karisch, S.E., Rendl, F.: QAPLIB - a Quadratic Assignment Problem library. Journal of Global Optimization 10(4), 391–403
(1997)
[5] Çela, E.: The quadratic assignment problem: theory and algorithms. Combinatorial Optimization, Springer, Boston (1998)
[6] Chmiel, W., Kadłuczka, P., Packanik, G.: Performance of swarm algorithms for permutation problems. Automatyka 15(2), 117–126 (2009)
[7] Clerc, M.: Discrete particle swarm optimization, illustrated by the traveling
salesman problem. In: New optimization techniques in engineering, pp.
219–239. Springer (2004)
[8] Eberhart, R., Kennedy, J.: A new optimizer using particle swarm theory.
In: Micro Machine and Human Science, 1995. MHS ’95., Proceedings of
the Sixth International Symposium on. pp. 39–43 (Oct 1995)
[9] Fernández Martínez, J., García Gonzalo, E.: The PSO family: deduction, stochastic analysis and comparison. Swarm Intelligence 3(4), 245–273
(2009)
[10] Fischetti, M., Monaci, M., Salvagnin, D.: Three ideas for the quadratic
assignment problem. Operations Research 60(4), 954–964 (2012)
[11] Fon, C.W., Wong, K.Y.: Investigating the performance of bees algorithm
in solving quadratic assignment problems. International Journal of Operational Research 9(3), 241–257 (2010)
[12] Gambardella, L.M., Taillard, E., Dorigo, M.: Ant colonies for the quadratic
assignment problem. Journal of the operational research society pp. 167–
176 (1999)
[13] Gamma, E., Helm, R., Johnson, R., Vlissides, J.: Design Patterns: Elements of Reusable Object-Oriented Software. Pearson Education (1994)
[14] Grötschel, M.: Discrete mathematics in manufacturing. In: Malley, R.E.O.
(ed.) ICIAM 1991: Proceedings of the Second International Conference on
Industrial and Applied Mathematics. pp. 119–145. SIAM (1991)
[15] Hahn, P., Roth, A., Saltzman, M., Guignard, M.: Memory-aware parallelized RLT3 for solving quadratic assignment problems. Optimization Online (2013), http://www.optimization-online.org/DB_HTML/2013/12/4144.html
[16] Hahn, P.M., Zhu, Y.R., Guignard, M., Smith, J.M.: Exact solution of
emerging quadratic assignment problems. International Transactions in Operational Research 17(5), 525–552 (2010)
[17] Howes, L., Munshi, A.: Aparapi - AMD. http://developer.amd.com/
tools-and-sdks/opencl-zone/aparapi/, online: last accessed: Jan 2015
[18] Howes, L., Munshi, A.: The OpenCL specification. https://www.khronos.
org/registry/cl/specs/opencl-2.0.pdf, online: last accessed: Jan
2015
[19] Koopmans, T.C., Beckmann, M.J.: Assignment problems and the location
of economic activities. Econometrica 25, 53–76 (1957)
[20] Krüger, F., Maitre, O., Jiménez, S., Baumes, L., Collet, P.: Generic local
search (memetic) algorithm on a single GPGPU chip. In: Tsutsui, S., Collet, P. (eds.) Massively Parallel Evolutionary Computation on GPGPUs,
pp. 63–81. Natural Computing Series, Springer Berlin Heidelberg (2013)
[21] Liu, H., Abraham, A., Zhang, J.: A particle swarm approach to quadratic
assignment problems. In: Saad, A., Dahal, K., Sarfraz, M., Roy, R. (eds.)
Soft Computing in Industrial Applications, Advances in Soft Computing,
vol. 39, pp. 213–222. Springer Berlin Heidelberg (2007)
[22] Maitre, O.: Genetic programming on GPGPU cards using EASEA. In:
Tsutsui, S., Collet, P. (eds.) Massively Parallel Evolutionary Computation
on GPGPUs, pp. 227–248. Natural Computing Series, Springer Berlin Heidelberg (2013)
[23] Mason, A., Rönnqvist, M.: Solution methods for the balancing of jet turbines. Computers & OR 24(2), 153–167 (1997)
[24] Misevicius, A.: An implementation of the iterated tabu search algorithm
for the quadratic assignment problem. OR Spectrum 34(3), 665–690 (2012)
[25] Nyberg, A., Westerlund, T.: A new exact discrete linear reformulation
of the quadratic assignment problem. European Journal of Operational
Research 220(2), 314–319 (2012)
[26] Onwubolu, G.C., Sharma, A.: Particle swarm optimization for the assignment of facilities to locations. In: New Optimization Techniques in Engineering, pp. 567–584. Springer (2004)
[27] Owens, J.D., Luebke, D., Govindaraju, N., Harris, M., Krüger, J., Lefohn,
A.E., Purcell, T.J.: A survey of general-purpose computation on graphics
hardware. In: Computer graphics forum. vol. 26, pp. 80–113. Wiley Online
Library (2007)
[28] Peter Hahn and Miguel Anjos: QAPLIB home page. http://anjos.mgi.
polymtl.ca/qaplib/, online: last accessed: Jan 2015
[29] Phillips, A.T., Rosen, J.B.: A quadratic assignment formulation of the molecular conformation problem. Journal of Global Optimization 4, 229–241 (1994)
[30] Sahni, S., Gonzalez, T.: P-complete approximation problems. J. ACM
23(3), 555–565 (1976)
[31] Stone, J.E., Gohara, D., Shi, G.: OpenCL: A parallel programming standard for heterogeneous computing systems. Computing in Science & Engineering 12(3), 66 (2010)
[32] Stützle, T., Dorigo, M.: ACO algorithms for the quadratic assignment problem. New ideas in optimization pp. 33–50 (1999)
[33] Szwed, P., Chmiel, W., Kadłuczka, P.: OpenCL implementation of PSO algorithm for the Quadratic Assignment Problem. In: Rutkowski, L., Korytkowski, M., Scherer, R., Tadeusiewicz, R., Zadeh, L.A., Zurada, J.M. (eds.) Artificial Intelligence and Soft Computing, Lecture Notes in Computer Science. Springer International Publishing (2015). Accepted for the ICAISC 2015 conference. http://home.agh.edu.pl/~pszwed/en/lib/exe/fetch.php?media=papers:draft-icaics-2015-pso-qap-opencl.pdf
[34] Taillard, E.D.: Comparison of iterative searches for the quadratic assignment problem. Location Science 3(2), 87 – 105 (1995)
[35] Tsutsui, S., Fujimoto, N.: ACO with tabu search on GPUs for fast solution
of the QAP. In: Tsutsui, S., Collet, P. (eds.) Massively Parallel Evolutionary Computation on GPGPUs, pp. 179–202. Natural Computing Series,
Springer Berlin Heidelberg (2013)
[36] Zhou, Y., Tan, Y.: GPU-based parallel particle swarm optimization. In:
Evolutionary Computation, 2009. CEC’09. IEEE Congress on. pp. 1493–
1500. IEEE (2009)
Integral Policy Iterations for Reinforcement Learning Problems in Continuous Time and Space ⋆
Jae Young Lee ∗ , Richard S. Sutton
arXiv:1705.03520v1 [] 9 May 2017
Department of Computing Science, University of Alberta, Edmonton, AB, Canada, T6G 2R3.
Abstract
Policy iteration (PI) is a recursive process of policy evaluation and improvement used to solve an optimal decision-making problem, e.g., a reinforcement learning (RL) or optimal control problem, and has served as a foundation for developing RL methods. Motivated by integral PI (IPI)
schemes in optimal control and RL methods in continuous time and space (CTS), this paper proposes on-policy IPI to solve the general
RL problem in CTS, with its environment modelled by an ordinary differential equation (ODE). In such continuous domain, we also
propose four off-policy IPI methods—two are the ideal PI forms that use advantage and Q-functions, respectively, and the other two
are natural extensions of the existing off-policy IPI schemes to our general RL framework. Compared to the IPI methods in optimal
control, the proposed IPI schemes can be applied to more general situations and do not require an initial stabilizing policy to run; they
are also strongly relevant to the RL algorithms in CTS such as advantage updating, Q-learning, and value-gradient based (VGB) greedy
policy improvement. Our on-policy IPI is basically model-based but can be made partially model-free; each off-policy method is also
either partially or completely model-free. The mathematical properties of the IPI methods—admissibility, monotone improvement, and
convergence towards the optimal solution—are all rigorously proven, together with the equivalence of on- and off-policy IPI. Finally, the
IPI methods are simulated with an inverted-pendulum model to support the theory and verify the performance.
Key words: policy iteration, reinforcement learning, optimization under uncertainties, continuous time and space, iterative schemes,
adaptive systems
1  Introduction
Policy iteration (PI) is a recursive process to solve an optimal decision-making/control problem by alternating between policy evaluation to obtain the value function with
respect to the current policy (a.k.a. the current control law in
control theory) and policy improvement to improve the policy by optimizing it using the obtained value function (Sutton and Barto, 2017; Lewis and Vrabie, 2009). PI was first
proposed by Howard (1960) in the stochastic environment
known as Markov decision process (MDP) and is strongly
relevant to reinforcement learning (RL) and approximate dynamic programming (ADP). PI has served as a fundamental
principle to develop RL and ADP methods especially when
the underlying environment is modelled or approximated by
an MDP in a discrete space. There are also model-free off? The authors gratefully acknowledge the support of Alberta
Innovates–Technology Futures, the Alberta Machine Intelligence
Institute, Google Deepmind, and the Natural Sciences and Engineering Research Council of Canada.
∗ Corresponding author. Tel.: +1 587 597 8677.
Email addresses: [email protected] (Jae Young
Lee ), [email protected] (Richard S. Sutton).
Preprint submitted to Automatica
policy PI methods using Q-functions and their extensions to
incremental RL algorithms (e.g., Lagoudakis and Parr, 2003;
Farahmand, Ghavamzadeh, Mannor, and Szepesvári, 2009;
Maei, Szepesvári, Bhatnagar, and Sutton, 2010). Here, off-policy PI is a class of PI methods whose policy evaluation is done while following a policy, termed the behavior policy, which is possibly different from the target policy to be evaluated; if the behavior and target policies are the same, it is called an on-policy method. When the MDP is finite, all the on- or off-policy PI methods converge towards the optimal solution in finite time. Another advantage is that, compared to backward-in-time dynamic programming, the forward-in-time computation of PI, like the other ADP methods (Powell, 2007), alleviates the problem known as the curse of dimensionality. In continuing tasks, a discount factor γ is normally introduced to PI and RL to suppress the future reward and thereby have a finite return. Sutton and Barto (2017) give a comprehensive overview of PI, ADP, and RL algorithms with their practical applications and recent success in the RL field.
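For a finite MDP, the PI scheme just outlined can be sketched as follows (our own illustration, not from the paper; the transition and reward layouts are assumptions):

```python
def policy_iteration(P, R, gamma=0.9, tol=1e-10):
    """Classic PI for a finite MDP.  P[a][s][t] is the probability of moving
    from state s to t under action a; R[a][s] is the expected reward."""
    nA, nS = len(P), len(P[0])
    pi, v = [0] * nS, [0.0] * nS
    while True:
        # policy evaluation: iterate the Bellman equation for the fixed policy
        while True:
            delta = 0.0
            for s in range(nS):
                new = R[pi[s]][s] + gamma * sum(
                    P[pi[s]][s][t] * v[t] for t in range(nS))
                delta = max(delta, abs(new - v[s]))
                v[s] = new
            if delta < tol:
                break
        # policy improvement: act greedily w.r.t. the evaluated value function
        new_pi = [max(range(nA), key=lambda a: R[a][s] + gamma * sum(
            P[a][s][t] * v[t] for t in range(nS))) for s in range(nS)]
        if new_pi == pi:
            return pi, v
        pi = new_pi
```

On a toy two-state chain where action 1 switches states and reward is earned only by staying in state 1, PI returns the policy "switch in state 0, stay in state 1" after two sweeps.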
11 May 2017
tal case γ = 1, but not the discounted case 0 < γ < 1. 1 The other restriction of those stability-based designs is that the initial policy needs to be asymptotically stabilizing to run the PI methods. This is quite contradictory for IPI methods, since they are partially or completely model-free, but it is hard or even impossible to find a stabilizing policy without knowing the dynamics. Besides, compared with the RL frameworks in CTS, e.g., those in (Doya, 2000; Mehta and Meyn, 2009; Frémaux, Sprekeler, and Gerstner, 2013), this stability-based approach rather restricts the class of the cost (or reward) and the dynamics. For instance, the dynamics was assumed to have at least one equilibrium state, 2 and the goal was always to (locally) stabilize the system in an optimal fashion to that equilibrium state, although there may also exist multiple isolated equilibrium states to be considered, or bifurcation; for such optimal stabilization, the cost was always crafted to be non-negative at all points and zero at the equilibrium.

1.1  PI and RL in Continuous Time and Space (CTS)
Different from the MDP in discrete space, the dynamics of
real physical world is usually modelled by (ordinary) differential equations (ODEs) inevitably in CTS. PI has been
also studied in such continuous domain mainly under the
framework of deterministic optimal control, where the optimal solution is characterized by the partial differential
Hamilton-Jacobi-Bellman equation (HJBE) which is however extremely difficult or hopeless to be solved analytically,
except in a very few special cases. PI in this field is often
referred to as successive approximation of the HJBE (to recursively solve it!), and the main difference among them lies
in their policy evaluation—the earlier versions of PI solve
the associated infinitesimal Bellman equation (a.k.a. Lyapunov or Hamiltonian equation) to obtain the value function
for the current policy (e.g., Leake and Liu, 1967; Kleinman, 1968; Saridis and Lee, 1979; Beard, Saridis, and Wen,
1997; Abu-Khalaf and Lewis, 2005, to name a few); Murray,
Cox, Lendaris, and Saeks (2002) proposed a PI algorithm
along with the trajectory-based policy evaluation that does
not rely on the system model and can be viewed as a deterministic Monte-Carlo policy evaluation (Sutton and Barto,
2017). Motivated by those two approaches above, Vrabie
and Lewis (2009) recently proposed a partially model-free
PI scheme called integral PI (IPI), which is more relevant
to RL/ADP in that the Bellman equation associated with
its policy evaluation is of a temporal difference form—see
(Lewis and Vrabie, 2009) for a comprehensive overview.
By partially model-free, it is meant in this paper that the
PI can be done without explicit use of some or any inputindependent part of the dynamics. IPI is then extended to
a series of completely or partially model-free off-policy IPI
methods (e.g., Lee, Park, and Choi, 2012, 2015; Luo, Wu,
Huang, and Liu, 2014; Modares, Lewis, and Jiang, 2016, to
name a few), with a hope to further extend them to incremental off-policy RL methods for adaptive optimal control
in CTS—see (Vamvoudakis, Vrabie, and Lewis, 2014) for an
incremental extension of the on-policy IPI method. The on/off-policy equivalence and the mathematical properties of
stability/admissibility/monotone-improvement of the generated policies and convergence towards the optimal solution
were also studied in the literatures above all regarding PI in
CTS.
Independently of the research on PI, several RL methods
have come to be proposed also in CTS. Advantage updating was proposed by Baird III (1993) and then reformulated
by Doya (2000) under the environment represented by an
ODE. Doya (2000) also extended TD(λ) to the CTS domain and then combined it with the two policy improvement methods—the continuous actor with its update rule
and the value-gradient based (VGB) greedy policy improvement; see also (Frémaux et al., 2013) for an extension of
Doya (2000)’s continuous actor-critic using spiking neural
networks. Mehta and Meyn (2009) defined the Hamiltonian
function as a Q-function and then proposed a Q-learning
method in CTS based on stochastic approximation. Unlike
in MDP, however, these RL methods in CTS and the related
action-dependent (AD) functions such as advantage and Qfunctions are barely relevant to the PI methods in CTS due
to the gap between optimal control and RL.
1.2
Contributions and Organizations
The main goal of this paper is to build up a theory on IPI in
a general RL framework when the time and the state-action
space are all continuous and the environment is modelled by
an ODE. As a result, a series of IPI methods are proposed
in the general RL framework with mathematical analysis.
This also provides the theoretical connection of IPI to the
aforementioned RL methods in CTS. The main contributions
of this paper can be summarized as follows.
On the other hand, the aforementioned PI methods in CTS
were all designed via Lyapunov’s stability theory (Haddad
and Chellaboina, 2008) to guarantee that the generated policies are all asymptotically stable and thereby yield finite returns (at least on a bounded region around an equilibrium
state), provided that so is the initial policy. There are two
main restrictions, however, for these stability-based works
to be extended to the general RL framework. One is the fact
that except in the LQR case (e.g., Modares et al., 2016), there
is no (or less if any) direct connection between stability and
discount factor γ in RL. This is why in the nonlinear cases,
the aforementioned PI methods in CTS only consider the to-
(1) Motivated by the work on IPI (Vrabie and Lewis, 2009; Lee et al., 2015) in the optimal control framework, we propose the corresponding on-policy IPI scheme in the general RL framework and then prove its mathematical properties of admissibility/monotone-improvement of the generated policies and the convergence towards the optimal solution (see Section 3). To establish them, we rigorously define the general RL problem in CTS with its environment formed by an ODE and then build up a theory regarding policy evaluation and improvement (see Section 2).

(2) Extending on-policy IPI in Section 3, we propose four off-policy IPI methods in CTS: two, named integral advantage PI (IAPI) and integral Q-PI (IQPI), are the ideal PI forms of advantage updating (Baird III, 1993; Doya, 2000) and Q-learning in CTS, and the other two, named integral explorized PI (IEPI) and integral C-PI (ICPI), are the natural extensions of the existing off-policy IPI methods (Lee et al., 2015) to our general RL problem. All of the off-policy methods are proven to generate effectively the same policies and value functions as those in on-policy IPI; they all satisfy the above mathematical properties of on-policy IPI. These are all shown in Section 4 with detailed discussions and comparisons.

1 In the RL community, the return or the RL problem (in a discrete space) is said to be discounted if 0 ≤ γ < 1 and total if γ = 1.
2 For an example of a dynamics with no equilibrium state, see (Haddad and Chellaboina, 2008, Example 2.2).
The proposed on-policy IPI is basically model-based but can be made partially model-free by slightly modifying its policy improvement (see Section 3.2). IEPI is also partially model-free; IAPI and IQPI are even completely model-free; so is ICPI, but it is only applicable under the special u-affine-and-concave (u-AC) setting shown in Section 3.3. Here, we emphasize that Doya (2000)'s VGB greedy policy improvement is also developed under this u-AC setting, and ICPI provides its model-free version. Finally, to support the theory and verify the performance, simulation results are provided in Section 5 for an inverted-pendulum model. As shown in the simulations and in all of the IPI algorithms in this paper, the initial policy is not required to be asymptotically stable to achieve the learning objective. Conclusions follow in Section 6. This theoretical work lies between the fields of optimal control and machine learning and also provides a unified framework of unconstrained and input-constrained formulations in both RL and optimal control (e.g., Doya, 2000; Abu-Khalaf and Lewis, 2005; Mehta and Meyn, 2009; Vrabie and Lewis, 2009; Lee et al., 2015 as special cases of our framework).

Notations and Terminologies. N, Z, and R are the sets of all natural numbers, integers, and real numbers, respectively; R̄ ≜ R ∪ {−∞, ∞} is the set of all extended real numbers; Z+ ≜ N ∪ {0} is the set of all nonnegative integers; R^{n×m} is the set of all n-by-m real matrices; Aᵀ and rank(A) denote the transpose and the rank of a matrix A ∈ R^{n×m}, respectively; Rⁿ ≜ R^{n×1} is the n-dimensional Euclidean space; span{x_j}_{j=1}^m is the linear subspace of Rⁿ spanned by the vectors x_1, ..., x_m ∈ Rⁿ; ‖x‖ denotes the Euclidean norm of x ∈ Rⁿ. A subset Ω of Rⁿ is said to be compact if it is closed and bounded; for any Lebesgue measurable subset E of Rⁿ, we denote the Lebesgue measure of E by |E|. A function f : D → Rᵐ on a domain D ⊆ Rⁿ is said to be C¹ if its first-order partial derivatives exist and are all continuous; ∇f : D → R^{m×n} denotes the gradient of f; Im f ⊆ Rᵐ is the image of f. The restriction of f to a subset Ω ⊆ D is denoted by f|Ω.

2 Preliminaries

In this paper, the state space X is given by X = Rⁿ, and the action space U ⊆ Rᵐ is an m-dimensional manifold in Rᵐ with (or without) boundary; t ≥ 0 denotes a given specific time instant. The environment considered in this paper is the deterministic one in CTS described by the following ODE:

    Ẋτ = f(Xτ, Uτ),   (1)

where τ ∈ [t, ∞) is the time variable; f : X × U → X is a continuous function; Xτ ∈ X denotes the state vector at time τ; the action trajectory U· : [t, ∞) → U is a right-continuous function over [t, ∞).

A (non-stationary) policy µ = µ(τ, x) refers to a function µ : [t, ∞) × X → U such that

(1) for each fixed τ ≥ t, µ(τ, ·) is continuous;
(2) for each fixed x ∈ X, µ(·, x) is right continuous;
(3) for each x ∈ X, the state trajectory X· generated under Uτ = µ(τ, Xτ) ∀τ ≥ t and the initial condition Xt = x is uniquely defined over the whole time interval [t, ∞).

It is said to be stationary if there is a function π : X → U such that µ(τ, x) = π(x) for all (τ, x) ∈ [t, ∞) × X, in which case we call π a (stationary) policy. In this paper, we use π to indicate a stationary policy, and µ to denote any non-stationary behavior policy; the latter will not appear until Section 4.

For notational efficiency and consistency with the classical RL framework, we will use the notation

    Eπ[Z | Xt = x]  (or its abbreviation Eπx[Z]),

which plays no stochastic role but just means the deterministic value Z when Xt = x and Uτ = π(Xτ) for all τ ≥ t. Using this notation, the state vector Xτ at time τ ≥ t generated under the initial condition Xt = x and the policy π is denoted by Eπ[Xτ | Xt = x] or simply Eπx[Xτ]. Throughout the paper, we also denote by ∆t > 0 the time difference and, for simplicity, write t′ ≜ t + ∆t for the next time and (Xt′, Ut′) for the next state and action at that time. For any differentiable function v : X → R, its time derivative v̇ : X × U → R is defined as

    v̇(Xt, Ut) ≜ lim_{∆t→0} [v(Xt′) − v(Xt)] / ∆t,

which is explicitly expressed by the chain rule as

    v̇(Xt, Ut) = ∇v(Xt) f(Xt, Ut),

where Xt ∈ X and Ut ∈ U are free variables.
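The chain-rule identity for v̇ can be checked numerically: the finite-difference slope of v along one tiny Euler step should match ∇v(x) f(x, u). A minimal sketch, where the scalar dynamics f and the test function v are illustrative choices, not from the paper:

```python
# Quick numerical check of v_dot(x, u) = grad_v(x) * f(x, u):
# compare the finite-difference slope of v along one tiny Euler step
# with the analytic chain-rule expression. The scalar dynamics f and
# the function v below are illustrative choices, not from the paper.
f = lambda x, u: -x + u          # scalar ODE: x_dot = f(x, u)
v = lambda x: x ** 2             # a differentiable test function
grad_v = lambda x: 2 * x         # its gradient

x, u, dt = 1.5, 0.3, 1e-6
x_next = x + f(x, u) * dt                 # one Euler step of the ODE
fd_slope = (v(x_next) - v(x)) / dt        # (v(X_t') - v(X_t)) / dt
analytic = grad_v(x) * f(x, u)            # grad_v(X_t) f(X_t, U_t)
assert abs(fd_slope - analytic) < 1e-4
```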
2.1 RL Problem in Continuous Time and Space

The RL problem considered in this paper is to find the best policy π∗ that maximizes the value function vπ : X → R defined as

    vπ(x) ≜ Eπ[Gt | Xt = x],   (2)

where Gt is the discounted or total return defined as

    Gt ≜ ∫_t^∞ γ^{τ−t} Rτ dτ,

with the immediate reward Rτ ≜ R(Xτ, Uτ) ∈ R at time τ and the discount factor γ ∈ (0, 1]. The reward R = R(x, u) here is a continuous upper-bounded function of x ∈ X and u ∈ U. 3 For any policy π, we also define the π-reward Rπ (and Rπτ) as Rπ(x) ≜ R(x, π(x)) and Rπτ ≜ Rπ(Xτ). In this paper, an admissible policy π is defined as follows.

Definition 1 A policy π (or its value function vπ) is said to be admissible, denoted by vπ ∈ Va, if

    |vπ(x)| < ∞ for all x ∈ X,

where Va ≜ {vπ : π is admissible} is the set of all admissible value functions vπ.

To make our RL problem feasible, this paper assumes that every vπ ∈ Va is C¹ and that there is an optimal admissible policy π∗ such that vπ ⪯ vπ∗ holds for any policy π. Here, v∗ ≜ vπ∗ is the optimal value function; the partial ordering v ⪯ w for any two functions v, w : X → R means v(x) ≤ w(x) for all x ∈ X. There may exist another optimal policy π∗′ than π∗, but the optimal value function is always the same by vπ∗′ ⪯ vπ∗ and vπ∗ ⪯ vπ∗′. In this paper, π∗ indicates any one of the optimal policies, and v∗ denotes the unique optimal value function for them. Also note that, in the discounted case γ ∈ (0, 1), v∗ is upper-bounded since so is R. Hence, for consistency with the discounted case, we assume only for the total case γ = 1 that v∗ is upper-bounded and

    ∀ admissible π : lim sup_{k→∞} lπ(x, k; v∗) ≤ 0,   (3)

where lπ(x, k; v) ≜ γ^{k∆t} Eπ[v(X_{t+k∆t}) | Xt = x]. Here, (3) is also true in the discounted case since lim_{k→∞} γ^{k∆t} = 0 and v∗ is upper-bounded.

Remark 1 Admissibility of π strongly depends not only on the policy π itself, but also on the choice of R and γ in vπ and the properties (e.g., boundedness and stability) of the system (1) and the state trajectory X· under π, as exemplified below.

(1) For γ ∈ (0, 1], if there exist α ∈ [0, γ⁻¹) and a function R̄ : X → [0, ∞) such that Rτ is bounded as

    ∀x ∈ X : Eπ[ |Rτ| | Xt = x ] ≤ α^{τ−t} R̄(x),   (4)

then vπ ∈ Va since, for k = −1/ln(αγ) and each x ∈ X, |vπ(x)| ≤ k R̄(x) < ∞. 4 For the total case γ = 1, the condition (4) implies that the immediate reward Rτ converges exponentially to 0 with rate α ∈ [0, 1).

(2) For γ ∈ (0, 1), if the reward R or Rπ is lower-bounded, or the trajectory Eπx[X·] is bounded for each x ∈ X, then vπ ∈ Va. This is because such boundedness gives the bound (4) for α = 1 and some constant R̄ > 0. 5 In this case, we have sup_{x∈X} |vπ(x)| ≤ k·R̄ < ∞, so vπ is also bounded. Especially, when R is lower-bounded, this property is independent of the policy π; in other words, vπ is bounded (and thus admissible) for any given policy π (see the simulation setting in Section 5 for a practical example of this bounded-R case).

Consider the Hamiltonian function h : X × U × R^{1×n} → R defined as

    h(x, u, p) ≜ R(x, u) + p f(x, u),   (5)

by which the time derivative of any differentiable function v : X → R can be represented as

    v̇(x, u) = h(x, u, ∇v(x)) − R(x, u)   (6)

for all (x, u) ∈ X × U. Substituting u = π(x) for any policy π into (6), we also have the following expression:

    Eπx[Rt + v̇(Xt, Ut)] = h(x, π(x), ∇v(x)) ∀x ∈ X,   (7)

which converts a time-domain expression (the left-hand side) into the Hamiltonian formula expressed in the state space (the right-hand side). In addition, essentially required in our theory is the following lemma, which also provides conversions of (in)equalities between the time domain and the state space.

Lemma 1 Let v : X → R be any differentiable function. Then, for any policy π,

    v(x) ≤ Eπ[ ∫_t^{t′} γ^{τ−t} Rτ dτ + γ^{∆t} v(Xt′) | Xt = x ]   (8)

for all x ∈ X and all ∆t > 0 iff

    −ln γ · v(x) ≤ h(x, π(x), ∇v(x)) ∀x ∈ X.   (9)

Moreover, the equalities in (8) and (9) are also necessary and sufficient. That is, the equality in (8) holds ∀x ∈ X and ∀∆t > 0 iff the equality in (9) is true ∀x ∈ X.

3 By the upper-boundedness of R, we only obviate the situation where the reward Rτ → ∞ as ‖Xτ‖ → ∞ and/or ‖Uτ‖ → ∞.
4 An example is the LQR case shown in Section 3.3.
5 R is already upper-bounded, so it is bounded iff lower-bounded; by continuity of R and π, R(X·, π(X·)) is bounded if so is X·.
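For a concrete instance of these definitions, consider an illustrative scalar problem (not from the paper) whose value function is available in closed form; the snippet below verifies that it satisfies the equality case of (9), i.e., −ln γ · v(x) = h(x, π(x), ∇v(x)).

```python
import math

# Numerical sanity check of Lemma 1 with equality: for x_dot = -x under
# the zero policy pi(x) = 0, reward R(x, u) = -x**2 and gamma in (0, 1),
# the return integrates in closed form to
#     v_pi(x) = -x**2 / (2 - ln(gamma)),
# and this v_pi satisfies -ln(gamma) * v(x) = h(x, pi(x), grad_v(x)).
# The system and reward are illustrative choices, not from the paper.
gamma = 0.9
f = lambda x, u: -x
R = lambda x, u: -x ** 2
v = lambda x: -x ** 2 / (2 - math.log(gamma))
grad_v = lambda x: -2 * x / (2 - math.log(gamma))
h = lambda x, u: R(x, u) + grad_v(x) * f(x, u)   # Hamiltonian (5) with p = grad_v(x)

for x in (-2.0, 0.5, 3.0):
    lhs = -math.log(gamma) * v(x)
    rhs = h(x, 0.0)
    assert abs(lhs - rhs) < 1e-9
```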
Proof. By standard calculus,

    d/dτ [γ^{τ−t} v(Xτ)] = γ^{τ−t} [ln γ · v(Xτ) + v̇(Xτ, Uτ)]

for any x ∈ X and any τ ≥ t. Hence, by (6) and (9), we have for any τ ≥ t that

    0 ≤ Eπx[ γ^{τ−t} · ( h(Xτ, π(Xτ), ∇v(Xτ)) + ln γ · v(Xτ) ) ]
      = Eπx[ γ^{τ−t} · ( Rτ + v̇(Xτ, Uτ) + ln γ · v(Xτ) ) ]
      = Eπ[ γ^{τ−t} Rτ + d/dτ (γ^{τ−t} v(Xτ)) | Xt = x ]  ∀x ∈ X,

and, for any x ∈ X and any ∆t > 0, integrating it from t to t′ (= t + ∆t) yields (8). One can also show that the equality in (9) for all x ∈ X implies the equality in (8) for all x ∈ X and ∆t > 0 by following the same procedure. Finally, the proof of the opposite direction can be done easily by following a procedure similar to the derivation of (11) from (10) below. 2

2.2 Bellman Equation with Boundary Condition

By the time-invariance property 6 of a stationary policy π and

    Gt = ∫_t^{t′} γ^{τ−t} Rτ dτ + γ^{∆t} Gt′,

we can see that vπ ∈ Va satisfies the Bellman equation

    vπ(x) = Eπ[ ∫_t^{t′} γ^{τ−t} Rτ dτ + γ^{∆t} vπ(Xt′) | Xt = x ]   (10)

for any x ∈ X and any ∆t > 0. Using (10), we obtain the boundary condition of vπ ∈ Va at τ = ∞.

6 vπ(x) = Eπ[Gt1 | Xt1 = x] = Eπ[Gt2 | Xt2 = x] ∀t1, t2 ≥ 0.

Proposition 1 Suppose that π is admissible. Then, ∀x ∈ X, lim_{τ→∞} γ^{τ−t} · Eπx[vπ(Xt+τ)] = 0.

Proof. Taking the limit ∆t → ∞ of (10) yields

    vπ(x) = lim_{∆t→∞} Eπx[ ∫_t^{t+∆t} γ^{τ−t} Rτ dτ + γ^{∆t} vπ(Xt′) ]
          = vπ(x) + lim_{∆t→∞} γ^{∆t} · Eπx[vπ(Xt′)],

which implies lim_{τ→∞} γ^{τ−t} · Eπx[vπ(Xt+τ)] = 0. 2

Corollary 1 Suppose that π is admissible. Then, for any x ∈ X and any ∆t > 0, lim_{k→∞} lπ(x, k; vπ) = 0.

For an admissible policy π, rearranging (10) as

    (1 − γ^{∆t})/∆t · vπ(x) = Eπ[ (1/∆t) ∫_t^{t′} γ^{τ−t} Rτ dτ + γ^{∆t} · (vπ(Xt′) − vπ(Xt))/∆t | Xt = x ],

taking the limit ∆t → 0, and using (7) yield the infinitesimal form

    −ln γ · vπ(x) = hπ(x, π(x)) = h(x, π(x), ∇vπ(x)) ∀x ∈ X,   (11)

where hπ : X × U → R is the Hamiltonian function for a given admissible policy π and is defined, with a slight abuse of notation, as

    hπ(x, u) ≜ h(x, u, ∇vπ(x)),   (12)

which is obviously continuous since so are the functions f, R, and ∇vπ (see also (5)). Both hπ(x, u) and h(x, u, ∇vπ(x)) will be used interchangeably in this paper for convenience to indicate the same Hamiltonian function for π; the Hamiltonian function for the optimal policy π∗ will also be denoted by h∗(x, u) or, equivalently, h(x, u, ∇v∗(x)).

The application of Lemma 1 shows that finding vπ satisfying (10) and finding vπ satisfying (11) are equivalent. In the following theorem, we state that the boundary condition (15), the counterpart of that in Corollary 1, is actually necessary and sufficient for a solution v of the Bellman equation (13) or (14) to be equal to the corresponding value function vπ.

Theorem 1 Let π be admissible and v : X → R be a function such that either of the following holds ∀x ∈ X:

(1) v satisfies the Bellman equation for some ∆t > 0:

    v(x) = Eπ[ ∫_t^{t′} γ^{τ−t} Rτ dτ + γ^{∆t} v(Xt′) | Xt = x ];   (13)

(2) v is differentiable and satisfies

    −ln γ · v(x) = h(x, π(x), ∇v(x)).   (14)

Then, lim_{k→∞} lπ(x, k; v) = v(x) − vπ(x) for each x ∈ X. Moreover, v = vπ over X if (and only if)

    ∀x ∈ X : lim_{k→∞} lπ(x, k; v) = 0.   (15)

Proof. Since π is admissible, Eπ[vπ(Xτ) | Xt = x] is finite for all τ ≥ t and x ∈ X. First, let ṽ(x) ≜ v(x) − vπ(x) and suppose v satisfies (13). Then, subtracting (10) from (13) yields ṽ(x) = γ^{∆t} · Eπx[ṽ(Xt′)], whose repetitive application to itself results in

    ṽ(x) = γ^{∆t} Eπx[ṽ(Xt+∆t)] = ··· = γ^{k∆t} Eπx[ṽ(Xt+k∆t)].
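The finite-horizon Bellman equation (10) can likewise be checked numerically: the running-reward integral over [t, t + ∆t] plus the discounted tail value should reproduce vπ(x). All modelling choices below are illustrative, not from the paper.

```python
import math
import numpy as np

# Check of the Bellman equation (10) on a finite window [t, t + dt] for an
# illustrative problem: x_dot = -x under the zero policy, R(x, u) = -x**2,
# gamma = 0.9, with value function v(x) = -x**2 / (2 - ln(gamma)). Along the
# closed-form trajectory X_tau = x * exp(-(tau - t)), the integral of
# gamma^(tau - t) * R_tau plus gamma^dt * v(X_{t+dt}) must equal v(x).
gamma, x, dt = 0.9, 1.7, 0.8
v = lambda z: -z ** 2 / (2 - math.log(gamma))

s = np.linspace(0.0, dt, 20001)            # grid of tau - t
X = x * np.exp(-s)                         # trajectory under pi(x) = 0
g = gamma ** s * (-X ** 2)                 # gamma^(tau - t) * R_tau
integral = float(np.sum((g[:-1] + g[1:]) * np.diff(s) / 2))  # trapezoid rule
rhs = integral + gamma ** dt * v(x * math.exp(-dt))
assert abs(rhs - v(x)) < 1e-6
```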
Therefore, by taking the limit k → ∞ and using Corollary 1, we obtain lim_{k→∞} lπ(x, k; v) = ṽ(x), which also proves ṽ = 0 under (15). The proof of the other case, for v satisfying (14) instead of (13), is direct by Lemma 1. Conversely, if v = vπ over X, then (15) is obviously true by Corollary 1. 2

Remark 2 (15) is always true for any γ ∈ (0, 1) and any bounded v. Hence, whenever vπ is bounded and 0 < γ < 1, e.g., the second case in Remark 1 including the simulation example in Section 5, any bounded function v satisfying the Bellman equation (13) or (14) is equal to vπ by Theorem 1.

2.3 Optimality Principle and Policy Improvement

Note that the optimal value function v∗ ∈ Va satisfies

    v∗(x) = max_π vπ(x)

and hence, by the principle of optimality, the following Bellman optimality equation:

    v∗(x) = max_π Eπx[ ∫_t^{t′} γ^{τ−t} Rτ dτ + γ^{∆t} v∗(Xt′) ]   (16)

for all x ∈ X, where max_π denotes the maximization over all stationary policies. Hence, by a procedure similar to the derivation of (11) from (10), we obtain from (16)

    −ln γ · v∗(x) = max_π h(x, π(x), ∇v∗(x)) ∀x ∈ X.

Here, the above maximization formula can be characterized as the following HJBE:

    −ln γ · v∗(x) = max_{u∈U} h(x, u, ∇v∗(x)), ∀x ∈ X,   (17)

and the optimal policy π∗ as

    π∗(x) ∈ arg max_{u∈U} h∗(x, u), ∀x ∈ X

under the following assumption. 7

Assumption 1 For any admissible policy π, there exists a policy π′ such that for all x ∈ X,

    π′(x) ∈ arg max_{u∈U} hπ(x, u).   (18)

7 If U is compact, then by continuity of hπ, arg max_{u∈U} hπ(x, u) in (18) is non-empty for any admissible π and any x ∈ X. This guarantees the existence of a function π′ satisfying (18). For the other cases, where the action space U is required to be convex, see Section 3.3 (specifically, (30) and (34)).

The following theorems support the argument.

Theorem 2 (Policy Improvement Theorem) Suppose π is admissible and Assumption 1 holds. Then, the policy π′ given by (18) is also admissible and satisfies vπ ⪯ vπ′ ⪯ v∗.

Proof. Since π is admissible, (18) in Assumption 1 and (11) imply that for any x ∈ X,

    hπ(x, π′(x)) ≥ hπ(x, π(x)) = −ln γ · vπ(x).

By Lemma 1, this is equivalent to

    vπ(x) ≤ Eπ′x[ ∫_t^{t′} γ^{τ−t} Rτ dτ + γ^{∆t} vπ(Xt′) ],

and, by repetitive application to itself,

    vπ(x) ≤ Eπ′x[ ∫_t^{t+k∆t} γ^{τ−t} Rτ dτ + γ^{k∆t} vπ(X_{t+k∆t}) ].

Let V∗ ∈ R be an upper bound of v∗. Then, in the limit k → ∞, we obtain for each x ∈ X

    vπ(x) ≤ vπ′(x) + lim sup_{k→∞} lπ′(x, k; vπ)
          ≤ vπ′(x) + lim sup_{k→∞} γ^{k∆t} · Eπ′x[v∗(X_{t+k∆t})]   (19)
          ≤ vπ′(x) + max{0, V∗},

from which, together with vπ ∈ Va, we can conclude that vπ′(x) for each x ∈ X has a lower bound; since it also has an upper bound as vπ′(x) ≤ v∗(x) ≤ V∗, π′ is admissible. Finally, (3) with the admissible policy π′ and (19) imply that for each x ∈ X,

    vπ(x) ≤ vπ′(x) + lim sup_{k→∞} lπ′(x, k; v∗) ≤ vπ′(x) ≤ v∗(x),

which completes the proof. 2

Corollary 2 Under Assumption 1, v∗ satisfies the HJBE (17).

Proof. Under Assumption 1, let π∗′ be a policy such that π∗′(x) ∈ arg max_{u∈U} h∗(x, u). Then, π∗′ is admissible and v∗ ⪯ vπ∗′ holds by Theorem 2; trivially, vπ∗′ ⪯ v∗. Hence, π∗′ is an optimal policy. Noting that any admissible π satisfies (11), we obtain from "(11) with π = π∗′ and vπ = v∗":

    −ln γ · v∗(x) = h(x, π∗′(x), ∇v∗(x)) = max_{u∈U} h(x, u, ∇v∗(x))

for all x ∈ X, which is exactly the HJBE (17). 2

For the uniqueness of the solution v∗ to the HJBE, we further assume throughout the paper that
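As a small sketch of the policy improvement step (18): in a u-affine, u-concave setting the arg max of the Hamiltonian has a closed form, which can be cross-checked by grid search. The dynamics, reward, and value gradient below are illustrative choices, not from the paper.

```python
import numpy as np

# Policy improvement (18) in a u-affine, u-concave example: with
# f(x, u) = -x + u and R(x, u) = -x**2 - u**2, the Hamiltonian
# h_pi(x, u) = R(x, u) + grad_v(x) * f(x, u) is strictly concave in u,
# so argmax_u has the closed form u* = grad_v(x) / 2 (set dh/du = 0).
# We compare the closed form against a brute-force grid search.
grad_v = lambda x: -1.2 * x                 # gradient of v(x) = -0.6 x^2 (illustrative)
h = lambda x, u: (-x ** 2 - u ** 2) + grad_v(x) * (-x + u)

x = 2.0
u_grid = np.linspace(-5.0, 5.0, 100001)
u_numeric = u_grid[np.argmax(h(x, u_grid))]
u_closed = grad_v(x) / 2.0
assert abs(u_numeric - u_closed) < 1e-3
```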
Assumption 2 There is one and only one element w∗ ∈ Va over Va that satisfies the HJBE:

    −ln γ · w∗(x) = max_{u∈U} h(x, u, ∇w∗(x)), ∀x ∈ X.

Corollary 3 Under Assumptions 1 and 2, v∗ = w∗. That is, v∗ is the unique solution to the HJBE (17) over Va.

Remark 3 The policy improvement equation (18) in Assumption 1 can be generalized and rewritten as

    π′(x) ∈ arg max_{u∈U} [ κ · hπ(x, u) + bπ(x) ]   (20)

for any constant κ > 0 and any function bπ : X → R. 8 Obviously, (18) is the special case of (20) with κ = 1 and bπ(x) = 0; policy improvement of our IPI methods in this paper can also be considered to be equal to "(20) with a special choice of κ and bπ" (as long as the associated functions are perfectly estimated in their policy evaluation).

8 This is obviously true since the modification term "bπ(x)" does not depend on u ∈ U and thus does not contribute to the maximization.

3 On-policy Integral Policy Iteration (IPI)

Now, we are ready to state our basic primary PI scheme, named on-policy integral policy iteration (IPI), which estimates the value function vπ only (in policy evaluation) based on the on-policy state trajectory X· generated under π during some finite time interval [t, t′]. The value function estimate obtained in policy evaluation is then utilized in the maximization process (policy improvement), yielding the next improved policy.

Algorithm 1a: On-policy IPI for the General Case (1)–(2)
1  Initialize: π0 : X → U, the initial admissible policy;
              ∆t > 0, the time difference;
2  i ← 0;
3  repeat
4      Policy Evaluation: given the policy πi, find the solution vi : X → R to the Bellman equation: for any x ∈ X,

           vi(x) = Eπi_x[ ∫_t^{t′} γ^{τ−t} Rτ dτ + γ^{∆t} vi(Xt′) ];   (21)

5      Policy Improvement: find a policy πi+1 such that

           πi+1(x) ∈ arg max_{u∈U} h(x, u, ∇vi(x)) ∀x ∈ X;   (22)

6      i ← i + 1;
7  until convergence is met.

Algorithm 1a describes the whole procedure of on-policy IPI: it starts with an initial admissible policy π0 (line 1) and performs policy evaluation and improvement until vi and/or πi converge (lines 4–7). In policy evaluation (line 4), the agent solves the Bellman equation (21) to find the value function vi = vπi for the current policy πi. Then, in policy improvement (line 5), the next policy πi+1 is obtained by maximizing the associated Hamiltonian function.

3.1 Admissibility, Monotone Improvement, and Convergence

As stated in Theorem 3 below, on-policy IPI guarantees the admissibility and monotone improvement of πi and the perfect value function estimation vi = vπi at each i-th iteration under Assumption 1 and the following boundary condition.

Assumption 3a For each i ∈ Z+, if πi is admissible, then lim_{k→∞} lπi(x, k; vi) = 0 for any x ∈ X.

Theorem 3 Let {πi}∞_{i=0} and {vi}∞_{i=0} be the sequences generated by Algorithm 1a under Assumptions 1 and 3a. Then,

(P1) ∀i ∈ Z+ : vi = vπi;
(P2) ∀i ∈ Z+ : πi+1 is admissible and satisfies

    πi+1(x) ∈ arg max_{u∈U} hπi(x, u);   (23)

(P3) the policy is monotonically improved, i.e.,

    vπ0 ⪯ vπ1 ⪯ ··· ⪯ vπi ⪯ vπi+1 ⪯ ··· ⪯ v∗.

Proof. π0 is admissible by the first line of Algorithm 1a. For any i ∈ Z+, suppose πi is admissible. Then, since vi satisfies Assumption 3a, vi = vπi by Theorem 1. Moreover, Theorem 2 under Assumption 1 shows that πi+1 is admissible and satisfies vπi ⪯ vπi+1 ⪯ v∗. Furthermore, (23) holds by (12), (22), and vi = vπi. Finally, the proof is completed by mathematical induction. 2

From Theorem 3, one can directly see that for any x ∈ X, the real sequence {vi(x) ∈ R}∞_{i=0} satisfies

    v0(x) ≤ ··· ≤ vi(x) ≤ vi+1(x) ≤ ··· ≤ v∗(x) < ∞,   (24)

implying pointwise convergence to some function v̂∗. Since vi (= vπi) is continuous by the C¹-assumption on every vπ ∈ Va (see Section 2.1), the convergence is uniform on any compact subset of X by Dini's theorem (Thomson, Bruckner, and Bruckner, 2001), provided that v̂∗ is continuous. This is summarized and refined in the following theorem.

Theorem 4 Under the same conditions as Theorem 3, there is a Lebesgue measurable, lower semicontinuous function v̂∗ defined as v̂∗(x) ≜ sup_{i∈Z+} vi(x) such that

(1) vi → v̂∗ pointwise on X;
(2) for any ε > 0 and any compact set Ω of X, there exists a compact subset E ⊆ Ω ⊂ X such that |Ω \ E| < ε, v̂∗|E is continuous, and vi → v̂∗ uniformly on E.

Moreover, if v̂∗ is continuous over X, then the convergence vi → v̂∗ is uniform on any compact subset of X.

Proof. By (24), the sequence {vi(x) ∈ R}∞_{i=0} for any fixed x ∈ X is monotonically increasing and upper-bounded by v∗(x) < ∞. Hence, vi(x) converges to v̂∗(x) by the monotone convergence theorem (Thomson et al., 2001), giving the pointwise convergence vi → v̂∗. Since each vi is continuous, v̂∗ is Lebesgue measurable and lower semicontinuous by its construction (Folland, 1999, Propositions 2.7 and 7.11c). Next, by Lusin's theorem (Loeb and Talvila, 2004), for any ε > 0 and any compact set Ω ⊂ X, there exists a compact subset E ⊆ Ω such that |Ω \ E| < ε and the restriction v̂∗|E is continuous. Hence, the monotone sequence vi converges to v̂∗ uniformly on E (and on any compact subset of X if v̂∗ is continuous over X) by Dini's theorem (Thomson et al., 2001). 2

Next, we prove the convergence vi → v∗ to the optimal solution v∗ using the PI operator T : Va → Va defined on the space Va of admissible value functions as

    T vπ ≜ vπ′

under Assumption 1, where π′ is the next admissible policy that satisfies (18) and is obtained by policy improvement with respect to the given value function vπ ∈ Va. Let its N-th recursion T^N be defined as T^N vπ ≜ T^{N−1}[T vπ] and T^0 vπ ≜ vπ. Then, any sequence {vi ∈ Va}∞_{i=0} generated by Algorithm 1a under Assumptions 1 and 3a satisfies

    T^N v0 = vN for any N ∈ N.

Lemma 2 Under Assumptions 1 and 2, the optimal value function v∗ is the unique fixed point of T^N for all N ∈ N.

Proof. See Appendix A. 2

To precisely state our convergence theorem, let Ω be any given compact subset and define the uniform pseudometric dΩ : Va × Va → [0, ∞) on Va as

    dΩ(v, w) ≜ sup_{x∈Ω} |v(x) − w(x)| for v, w ∈ Va.

Theorem 5 For the value function sequence {vi}∞_{i=0} generated by Algorithm 1a under Assumptions 1, 2, and 3a,

(C1) there exists a metric d : Va × Va → [0, ∞) such that T is a contraction (and thus continuous) under d and vi → v∗ in the metric d, i.e., lim_{i→∞} d(vi, v∗) = 0;
(C2) if v̂∗ ∈ Va and, for every compact subset Ω ⊂ X, T is continuous under dΩ, then vi → v∗ pointwise on X and uniformly on any compact subset of X.

Proof. By Lemma 2 and Bessaga (1959)'s converse of the Banach fixed point principle, there exists a metric d on Va such that (Va, d) is a complete metric space and T is a contraction (and thus continuous) under d. Moreover, by Lemma 2 and the Banach fixed point principle (e.g., Kirk and Sims, 2013, Theorem 2.2),

    ∀v0 ∈ Va : lim_{N→∞} vN = lim_{N→∞} T^N v0 = v∗ in the metric d.

To prove the second part, suppose that v̂∗ ∈ Va and that T is continuous under dΩ for every compact subset Ω ⊂ X. Then, since v̂∗ ∈ Va is C¹ by assumption and thereby continuous, vi converges to v̂∗ pointwise on X and uniformly on every compact Ω ⊂ X by Theorem 4; the latter implies vi → v̂∗ in dΩ. Therefore, in the uniform pseudometric dΩ,

    v̂∗ = lim_{i→∞} vi+1 = lim_{i→∞} T vi = T lim_{i→∞} vi = T v̂∗

by continuity of T under dΩ. That is, v̂∗|Ω = (T v̂∗)|Ω for every compact Ω ⊂ X. This implies v̂∗ = T v̂∗ and thus v̂∗ = v∗ by Lemma 2, which completes the proof. 2

3.2 Partially Model-Free Nature

Policy evaluation (21) can be done without using explicit knowledge of the system dynamics f(x, u) in (1): there is no explicit term of f in (21), and all of the necessary information on f is captured by the observable state trajectory X· during a finite time interval [t, t′]. Hence, IPI is model-free as long as so is its policy improvement (22), which unfortunately is not (see the definition (5) of h). Nevertheless, the policy improvement (22) can be modified to yield a partially model-free IPI. To see this, consider the decomposition (25) of the dynamics f below:

    f(x, u) = fd(x) + fc(x, u),   (25)

where fd : X → X is independent of u and called the drift dynamics, and fc : X × U → X is the corresponding input-coupling dynamics. 9 Substituting the definitions (5) and (12) into (20), choosing κ = 1 and bπ(x) = −∇vπ(x) fd(x), and replacing (π, vπ) with (πi, vi) then yield

    πi+1(x) ∈ arg max_{u∈U} [ R(x, u) + ∇vi(x) fc(x, u) ],   (26)

a partially model-free version of (22). In summary, the whole procedure of Algorithm 1a can be done even when the drift dynamics fd is completely unknown.

9 There are an infinite number of ways of choosing fd and fc; one typical choice is fd(x) = f(x, 0) and fc(x, u) = f(x, u) − fd(x).
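To illustrate Algorithm 1a end to end, here is a minimal sketch for a scalar discounted LQR-type problem, where both policy evaluation and policy improvement reduce to closed-form scalar updates; all constants are illustrative choices, not from the paper. The monotone-improvement property (P3) of Theorem 3 and the scalar HJBE fixed point are asserted along the way.

```python
import math

# Minimal sketch of on-policy IPI (Algorithm 1a) for a scalar discounted
# LQR-type problem: x_dot = a*x + b*u, R(x, u) = -c*x**2 - u**2, gamma in (0,1).
# With a linear policy pi_i(x) = K_i * x and v_i(x) = P_i * x**2, policy
# evaluation has the closed form P_i = (c + K_i**2) / (ln(gamma) + 2*(a + b*K_i))
# (valid while the denominator is negative), and policy improvement
# (maximizing the Hamiltonian over u) reduces to K_{i+1} = b * P_i.
a, b, c, gamma = 0.5, 1.0, 1.0, 0.9       # illustrative constants
lg = math.log(gamma)

K, P_prev = -1.0, -float("inf")            # K0: an initial admissible policy
for _ in range(60):
    P = (c + K ** 2) / (lg + 2 * (a + b * K))   # policy evaluation (21), closed form
    assert P >= P_prev - 1e-12                  # monotone improvement (P3)
    P_prev = P
    K = b * P                                   # policy improvement: u = b*P*x maximizes h

# The fixed point satisfies the scalar HJBE: b^2 P^2 + (2a + ln(gamma)) P - c = 0.
residual = b ** 2 * P ** 2 + (2 * a + lg) * P - c
assert abs(residual) < 1e-9
print(round(P, 4))  # prints -1.5428
```

The iterates increase monotonically toward the negative root of the scalar HJBE quadratic, mirroring the sequence (24); the Newton-like convergence of PI makes the loop settle to machine precision within a handful of iterations.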
3.3 Case Studies

The partially model-free policy improvement (26) can be even more simplified if:

(1) the system dynamics f(x, u) is affine in u, i.e.,

    f(x, u) = fd(x) + Fc(x)u,    (27)

    where Fc : X → R^{n×m} is a continuous matrix-valued function; it is (25) with fc(x, u) = Fc(x)u;
(2) the action space U ⊆ R^m is convex;    (28)
(3) the reward R(x, u) is strictly concave in u and given by

    R(x, u) = R0(x) − S(u),    (29)

    where R0 : X → R is a continuous upper-bounded function called the state reward, and S : U → R, named the action penalty, is a strictly convex C1 function whose restriction S|Uint on the interior Uint of U satisfies Im(∇S^T|Uint) = R^m.

In this case, solving the maximization in (26) (or (22)) is equivalent to finding the regular point u ∈ U such that

    −∇S(u) + ∇vi(x)Fc(x) = 0,

where the gradient ∇S of S is a strictly monotone mapping that is also bijective when its domain U is restricted to its interior Uint. Rearranging it with respect to u, we obtain the explicit closed-form expression of (26), also known as the VGB greedy policy (Doya, 2000):

    πi+1(x) = σ( Fc^T(x) ∇vi^T(x) ),    (30)

where σ : R^m → Uint is defined as σ ≐ (∇S^T|Uint)^{−1}, which is also strictly monotone, bijective, and continuous. Therefore, in the u-affine-and-concave (u-AC) case (27)–(29), 10 the complicated maximization process in (26) over the continuous action space can be obviated by directly calculating the next policy πi+1 by (30). Also note that under the u-AC setting (27)–(29), the VGB greedy policy π′ = σ ◦ (Fc^T ∇vπ^T) for an admissible π is the unique policy satisfying (18).

10 This includes the frameworks in (Doya, 2000; Abu-Khalaf and Lewis, 2005; Vrabie and Lewis, 2009) as special cases.

Remark 4 Whenever each j-th component Uτ,j of Uτ meets some physical limitation |Uτ,j| ≤ Umax,j for some threshold Umax,j ∈ (0, ∞], one can formulate the action space U in (28) as U = {Uτ ∈ R^m : |Uτ,j| ≤ Umax,j, 1 ≤ j ≤ m} and determine the action penalty S(u) in (29) as

    S(u) = lim_{v→u} ∫_0^v (s^T)^{−1}(w) · Γ dw    (31)

for a positive definite matrix Γ ∈ R^{m×m} and a continuous function s : R^m → Uint such that

(1) s is strictly monotone, odd, and bijective;
(2) S in (31) is finite at any point on the boundary ∂U. 11

This gives the closed-form expression σ(ξ) = s(Γ^{−1}ξ) of the function σ in (30) and includes the sigmoidal example in Section 5 as its special case. 12 Another well-known example is

    U = R^m (i.e., Umax,j = ∞, 1 ≤ j ≤ m) and s(u) = u/2,    (32)

in which case (31) becomes S(u) = u^T Γ u.

11 ∂U = {Uτ ∈ R^m : |Uτ,j| = Umax,j for some 1 ≤ j ≤ m}.
12 See also (Doya, 2000; Abu-Khalaf and Lewis, 2005).

The well-known special case of (27)–(29) is the following linear quadratic regulation (LQR): (31), (32), and

    fd(x) = Ax, Fc(x) = B, R0(x) = −‖Cx‖²,    (33)

where (A, B, C), with A ∈ R^{n×n}, B ∈ R^{n×m}, and C ∈ R^{p×n}, is stabilizable and detectable. In this LQR case, if the policy πi is linear, i.e., πi(x) = Ki x (Ki ∈ R^{m×n}), then its value function vπi, if finite, can be represented in a quadratic form vπi(x) = x^T Pπi x (Pπi ∈ R^{n×n}). Moreover, when vi is quadratically represented as vi(x) = x^T Pi x, (30) becomes

    πi+1(x) = Ki+1 x with Ki+1 = Γ^{−1} B^T Pi,    (34)

a linear policy again. This observation gives the policy evaluation and improvement in Algorithm 1b below. Moreover, whenever the given policy π is linear and (31)–(33) are all true, the process Zτ generated by

    Żτ = ( A + (ln γ / 2) I ) Zτ + B Uτ    (35)

yields the following value function expression in terms of Zτ, without the discount factor γ:

    vπ(x) = Ẽπ [ ∫_t^∞ R(Zτ, Uτ) dτ | Zt = x ],
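As a quick numerical check (ours, not the paper's), take the scalar sigmoidal saturation s(ξ) = Umax tanh(ξ/Umax) used in Section 5 with Γ = 1. The sketch below evaluates the action penalty (31) by midpoint quadrature and confirms that σ(ξ) = s(Γ^{−1}ξ) satisfies the stationarity condition ∇S(σ(ξ)) = ξ.

```python
import math

U_MAX, GAMMA_W = 5.0, 1.0           # torque limit and weight Γ (scalar case)

def s(xi):                          # sigmoidal saturation s : R -> (-U_MAX, U_MAX)
    return U_MAX * math.tanh(xi / U_MAX)

def s_inv(w):                       # its inverse, defined on the interior of U
    return U_MAX * math.atanh(w / U_MAX)

def S(u, steps=200000):             # action penalty (31): ∫_0^u s^{-1}(w) Γ dw
    h = u / steps
    return sum(s_inv((k + 0.5) * h) * GAMMA_W * h for k in range(steps))

def sigma(xi):                      # VGB greedy map σ(ξ) = s(Γ^{-1} ξ) from (30)
    return s(xi / GAMMA_W)

# Stationarity: ∇S(σ(ξ)) = Γ s^{-1}(σ(ξ)) should give back ξ itself.
xi = 2.0
residual = GAMMA_W * s_inv(sigma(xi)) - xi
```

The quadrature value of S can also be compared with the closed-form expression for this saturation given in Section 5.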
Algorithm 1b: On-policy IPI for the LQR Case (31)–(33)

Initialize: π0(x) = K0 x, the initial admissible policy;
            ∆t > 0, the time difference;
            i ← 0;
repeat (under the LQR setting (31), (32), and (33))
    Policy Evaluation: given the policy πi(x) = Ki x, find the solution vi(x) = x^T Pi x to the Bellman equation (21);
    Policy Improvement: Ki+1 = Γ^{−1} B^T Pi;
    i ← i + 1;
until convergence is met.
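Under exact policy evaluation, Algorithm 1b is a Kleinman-type iteration. The sketch below is our reconstruction (the Bellman equation (21) is not reproduced here; we replace it by the equivalent Lyapunov equation on the transformed system (35), with the paper's sign convention R ≤ 0 and Pi ≤ 0). On a scalar toy system it reproduces the monotone behavior P0 ≤ P1 ≤ · · · ≤ P∗ ≤ 0 stated in Theorem 6.

```python
import numpy as np

def solve_lyapunov(Acl, Q):
    """Solve Acl^T P + P Acl = Q via Kronecker products (row-major vec)."""
    n = Acl.shape[0]
    I = np.eye(n)
    M = np.kron(Acl.T, I) + np.kron(I, Acl.T)
    return np.linalg.solve(M, Q.reshape(-1)).reshape(n, n)

def ipi_lqr(A, B, C, Gamma, gamma, K0, iters=20):
    """Sketch of Algorithm 1b with exact evaluation (our reconstruction)."""
    n = A.shape[0]
    A_t = A + (np.log(gamma) / 2.0) * np.eye(n)   # transformed drift from (35)
    K, Ps = K0, []
    for _ in range(iters):
        Acl = A_t + B @ K                         # closed loop under pi_i(x) = K_i x
        Q = C.T @ C + K.T @ Gamma @ K
        P = solve_lyapunov(Acl, Q)                # policy evaluation: P_i <= 0
        Ps.append(P)
        K = np.linalg.solve(Gamma, B.T @ P)       # policy improvement (34)
    return K, Ps

# Scalar example with gamma = 1 (so A_t = A): A = -1 is stable, K0 = 0 admissible.
A = np.array([[-1.0]]); B = np.array([[1.0]])
C = np.array([[1.0]]); Gamma = np.array([[1.0]])
K, Ps = ipi_lqr(A, B, C, Gamma, 1.0, np.zeros((1, 1)))
```

For this scalar problem the fixed point is P∗ = −(√2 − 1), i.e., the negative of the standard ARE solution, consistent with the sign convention v∗ ≤ 0.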
where Ẽπ[Y | Zt = x] denotes the value of Y when Zt = x and Uτ = π(Zτ) ∀τ ≥ t. This transforms any discounted LQR problem into the total one (with its state Zτ in place of Xτ). Hence, the application of the standard LQR theory (Anderson and Moore, 1989) shows that

(1) v∗(x) = x^T P∗ x ≤ 0 for some P∗ ∈ R^{n×n}, with π∗(x) = K∗ x and K∗ ≐ Γ^{−1} B^T P∗;
(2) v∗ is the unique solution to the HJBE (17).

Furthermore, the application of the analytical result of IPI (Lee et al., 2014, Theorem 5 and Remark 4 with ℏ → ∞) gives the following statements.

Lemma 3 (Policy Improvement Theorem: the LQR Case) Let π be linear and admissible. Then, the linear policy π′ given by π′(x) = K′x and K′ = Γ^{−1} B^T Pπ under the LQR setting (31)–(33) is admissible and satisfies Pπ ≤ Pπ′ ≤ 0.

Theorem 6 Let {πi}∞_{i=0} and {vi}∞_{i=0} be the sequences generated by Algorithm 1b and parameterized as πi(x) = Ki x and vi(x) = x^T Pi x. Then,

(1) πi is admissible and Pi = Pπi for all i ∈ Z+;
(2) P0 ≤ P1 ≤ · · · ≤ Pi ≤ Pi+1 ≤ · · · ≤ P∗ ≤ 0;
(3) lim_{i→∞} Pi = P∗ and lim_{i→∞} Ki = K∗;
(4) the convergence Pi → P∗ is quadratic.

Remark 5 In the LQR case, Assumptions 1, 2, and 3a are all true—Assumptions 1 and 2 are trivially satisfied as shown above; Assumption 3a is also true by

    lim_{k→∞} lπi(x, k; vi) = lim_{k→∞} E^x_{πi}[ γ^{k∆t} · X^T_{t+k∆t} Pi X_{t+k∆t} ]
                            = lim_{k→∞} Ẽ^x_{πi}[ Z^T_{t+k∆t} Pi Z_{t+k∆t} ] = 0,

where we have used the equality Ẽ^x_π[Zτ] = E^x_π[ γ^{(τ−t)/2} Xτ ] and the fact that a linear policy π is admissible iff it stabilizes the Zτ-system (35) and thus satisfies lim_{τ→∞} Ẽ^x_π[Zτ] = 0 ∀x ∈ X (Lee et al., 2014, Section 2). To the best of the authors' knowledge, however, the corresponding theory does not exist for the nonlinear discounted case 'γ ∈ (0, 1).' See (Abu-Khalaf and Lewis, 2005; Vrabie and Lewis, 2009; Lee et al., 2015) for the u-AC case (27)–(29) with γ = 1 and R ≤ 0.

4 Extensions to Off-policy IPI Methods

In this section, we propose a series of completely/partially model-free off-policy IPI methods, which are effectively the same as on-policy IPI but use data generated by a behavior policy µ, rather than the target policy πi. For this, we first introduce the concept of an action-dependent (AD) policy.

Definition 2 For a non-empty subset U0 ⊆ U of the action space U, a function µ : [t, ∞) × X × U0 → U, denoted by µ(τ, x, u) for τ ≥ t and (x, u) ∈ X × U0, is said to be an AD policy over U0 (starting at time t) if:

(1) µ(t, x, u) = u for all x ∈ X and all u ∈ U0;
(2) for each fixed u ∈ U0, µ(·, ·, u) is a policy (starting at time t), which is possibly non-stationary.

An AD policy is actually a policy parameterized by u ∈ U0; the purpose of such a parameterization is to impose the condition Ut = u at the initial time t through the first property in Definition 2 so as to make it possible to explore the state-action space X × U (or its subset X × U0), rather than the state space X alone. The simplest form of an AD policy µ is

    µ(τ, x, u) = u for all τ ≥ t and all (x, u) ∈ X × U0,

used in Section 5; another useful important example is

    µ(τ, x, u) = π(x) + e(τ, x, u) (with U = R^m),

for a policy π and a probing signal e(τ, x, u) given by

    e(τ, x, u) = (u − π(x)) e^{−σ(τ−t)} + Σ_{j=1}^{N} Aj sin( ωj (τ − t) ),

where σ > 0 regulates the vanishing rate of the first term; Aj ∈ R^m and ωj ∈ R are the amplitude and the angular frequency of the j-th sin term in the summation, respectively.

In what follows, for an AD policy µ starting at time t ≥ 0, we will interchangeably use

    Eµ[Z | Xt = x, Ut = u]  and  E^{(x,u)}_µ[Z | t],    (36)

with a slight abuse of notation, to indicate the deterministic value Z when Xt = x and Uτ = µ(τ, Xτ, u) for all τ ≥ t. Using this notation, the state vector Xτ at time τ ≥ t that starts at time t and is generated under µ and the initial condition "Xt = x and Ut = u" is denoted by E^{(x,u)}_µ[Xτ | t]. For any non-AD policy µ, we also write Eµ[Z | Xt = x] and E^x_µ[Z | t], instead of (36), to indicate the value Z when Xt = x and Uτ = µ(τ, Xτ) ∀τ ≥ t, where the condition Ut = u is obviated.

Each off-policy IPI method in this paper is designed to generate the policies and value functions satisfying the same properties in Theorems 3 and 4 as those in on-policy IPI (Algorithm 1a), but they are generated using the off-policy trajectories of an (AD) behavior policy µ, rather than the target policy πi. Specifically, each off-policy method estimates vi and/or an (AD) function in policy evaluation using the off-policy state and action trajectories, and then employs the estimated function in policy improvement to find a next improved policy. Each off-policy IPI method will be described only with its policy evaluation and improvement steps, since the others are all the same as those in Algorithm 1a.
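For concreteness, the exponentially vanishing probing signal above can be sketched as follows (a toy instantiation with a made-up target policy, amplitudes, and frequencies). Property (1) of Definition 2, µ(t, x, u) = u, holds by construction since e(t, x, u) = u − π(x).

```python
import math

def make_ad_policy(pi, t0, sigma, amps, freqs):
    """Build an AD behavior policy mu(tau, x, u) = pi(x) + e(tau, x, u) with
    probing signal e(tau, x, u) = (u - pi(x)) e^{-sigma (tau - t0)}
                                  + sum_j A_j sin(omega_j (tau - t0))."""
    def mu(tau, x, u):
        e = (u - pi(x)) * math.exp(-sigma * (tau - t0))
        e += sum(A * math.sin(w * (tau - t0)) for A, w in zip(amps, freqs))
        return pi(x) + e
    return mu

pi = lambda x: -0.5 * x                      # hypothetical target policy (m = 1)
mu = make_ad_policy(pi, t0=0.0, sigma=2.0, amps=[0.3, 0.1], freqs=[1.0, 3.7])
```

At the initial time the AD policy returns exactly the queried action u; afterwards the first term decays and only the bounded sinusoidal excitation remains around π(x).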
4.1 Integral Advantage Policy Iteration (IAPI)

First, we consider a continuous function aπ : X × U → R called the advantage function for an admissible policy π (Baird III, 1993; Doya, 2000), which is defined as

    aπ(x, u) ≐ hπ(x, u) + ln γ · vπ(x)    (37)

and satisfies aπ(x, π(x)) = 0 by (11). Considering (20) in Remark 3 with bπ(x) = ln γ · vπ(x) and κ = 1, we can see that the maximization process (18) can be replaced by

    π′(x) ∈ arg max_{u∈U} aπ(x, u) ∀x ∈ X;    (38)

the optimal advantage function a∗ ≐ aπ∗ also characterizes the HJBE (17) and the optimal policy π∗ as

    max_{u∈U} a∗(x, u) = 0 and π∗(x) ∈ arg max_{u∈U} a∗(x, u).

These ideas give the model-free off-policy IPI named integral advantage policy iteration (IAPI), whose policy evaluation and improvement steps are shown in Algorithm 2, while the other parts are, as mentioned above, all the same as those in Algorithm 1a and thus omitted. Given πi, the agent tries to find/estimate in policy evaluation both vi(x) and ai(x, u) satisfying (40) and the off-policy Bellman equation (39) for an AD policy µ over the entire action space U. Here, vi and ai correspond to the value and the advantage functions with respect to the i-th admissible policy πi (see Theorem 7 in Section 4.5). Then, the next policy πi+1 is updated in policy improvement by using the advantage function ai(x, u) only. Notice that this IAPI provides the ideal PI form of advantage updating and the associated ideal Bellman equation—see (Baird III, 1993) for advantage updating and the approximate version of the Bellman equation (39).

Algorithm 2: Integral Advantage Policy Iteration (IAPI)

Policy Evaluation: given πi and an AD policy µ over U, find a C1 function vi : X → R and a continuous function ai : X × U → R such that

(1) for all (x, u) ∈ X × U,

    vi(x) = E^{(x,u)}_µ [ ∫_t^{t′} γ^{τ−t} Zτ dτ + γ^{∆t} vi(X_{t′}) | t ],    (39)

    where Zτ = Rτ − ai(Xτ, Uτ) + ai(Xτ, πi(Xτ));

(2) ai(x, πi(x)) = 0 for all x ∈ X;    (40)

Policy Improvement: find a policy πi+1 such that

    πi+1(x) ∈ arg max_{u∈U} ai(x, u) ∀x ∈ X;    (41)

4.2 Integral Q-Policy-Iteration (IQPI)

Our next model-free off-policy IPI, named integral Q-policy-iteration (IQPI), estimates and uses a general Q-function qπ : X × U → R defined as

    qπ(x, u) ≐ κ1 · ( vπ(x) + aπ(x, u)/κ2 )    (44)

for an admissible policy π, where κ1, κ2 ∈ R are any two nonzero real numbers that have the same sign, so that κ1/κ2 > 0 holds. 13 By its definition and the continuities of vπ and aπ, the Q-function qπ is also continuous over its whole domain X × U. Here, since both κ1 and κ2 are nonzero, the Q-function (44) loses no information on either vπ or aπ; thereby, qπ plays a similar role to the DT Q-function—on one hand, it holds the property

    κ1 · vπ(x) = qπ(x, π(x)),    (45)

and on the other, it replaces (18) and (38) with

    π′(x) ∈ arg max_{u∈U} qπ(x, u) ∀x ∈ X    (46)

by (20) for κ = κ1/κ2 > 0 and bπ(x) = κ1 (1 + ln γ/κ2) · vπ(x); the HJBE (17) and the optimal policy π∗ are also characterized by the optimal Q-function q∗ ≐ qπ∗ as

    κ1 · v∗(x) = max_{u∈U} q∗(x, u) and π∗(x) ∈ arg max_{u∈U} q∗(x, u).

13 Our general Q-function qπ includes the previously proposed Q-functions in CTS as special cases—Baird III (1993)'s Q-function (κ1 = 1, κ2 = 1/∆t); hπ for γ ∈ (0, 1) (κ1 = κ2 = −ln γ), and its generalization for γ ∈ (0, 1] (any κ1 = κ2 > 0), both recognized as Q-functions by Mehta and Meyn (2009).

Algorithm 3a shows the policy evaluation and improvement of IQPI—the former is derived by substituting (45) and κ1 aπ(x, u) = κ2 ( qπ(x, u) − qπ(x, π(x)) ) (obtained from (44) and (45)) into (39) in IAPI, and the latter directly from (46). At each iteration, while IAPI needs to find/estimate both vi and ai, IQPI just estimates and uses in its loop qi only. In addition, the constraint on the AD function such as (40) in IAPI does not appear in IQPI, making the algorithm simpler. As will be shown in Theorem 7 in Section 4.5, qi in Algorithm 3a corresponds to the Q-function qπi for the i-th admissible policy πi, and the policies {πi}∞_{i=0} generated by IQPI satisfy the same properties as those in on-policy IPI.

Algorithm 3a: Integral Q-Policy-Iteration (IQPI)

Policy Evaluation: given the current policy πi, a weighting factor β > 0, and an AD policy µ over U, find a continuous function qi : X × U → R such that for all (x, u) ∈ X × U:

    qi(x, πi(x)) = E^{(x,u)}_µ [ ∫_t^{t′} β^{τ−t} Zτ dτ + β^{∆t} qi(X_{t′}, πi(X_{t′})) | t ],    (42)

    where Zτ = κ1 Rτ − κ2 qi(Xτ, Uτ) + κ3 qi(Xτ, πi(Xτ)), with κ1 κ2 > 0 and κ3 ≐ κ2 − ln(γ^{−1} β);

Policy Improvement: find a policy πi+1 such that

    πi+1(x) ∈ arg max_{u∈U} qi(x, u) ∀x ∈ X;    (43)
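A small numeric sanity check (ours) of the scaling relations behind IQPI: with toy vπ and aπ on a grid (hypothetical functions chosen so that aπ(x, π(x)) = 0), the Q-function (44) satisfies (45), the identity κ1 aπ = κ2 ( qπ(x, u) − qπ(x, π(x)) ), and shares its maximizers with aπ, cf. (38) and (46).

```python
import numpy as np

k1, k2 = 2.0, 0.5                    # nonzero with the same sign, so k1/k2 > 0
pi = lambda x: 0.3 * x               # toy "admissible" policy (assumption)
v = lambda x: -x ** 2                # toy value function
a = lambda x, u: -(u - pi(x)) ** 2   # toy advantage with a(x, pi(x)) = 0

def q(x, u):                         # the general Q-function (44)
    return k1 * (v(x) + a(x, u) / k2)

x = 1.7
us = np.linspace(-3.0, 3.0, 601)
err_45 = abs(q(x, pi(x)) - k1 * v(x))                                 # property (45)
err_id = np.max(np.abs(k1 * a(x, us) - k2 * (q(x, us) - q(x, pi(x)))))
u_star = us[np.argmax(q(x, us))]     # maximizer of q coincides with that of a
```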
Notice that a simplification of IQPI is possible by setting

    κ ≐ ln(γ^{−1} β) = κ1 = κ2 and γ ≠ β,    (47)

in which case Zτ in IQPI is dramatically simplified to (49) shown in Algorithm 3b, the simplified IQPI, and the Q-function qπ in its definition (44) becomes

    qπ(x, u) = κ · vπ(x) + aπ(x, u).    (48)

In this case, the gain κ (≠ 0) of the integral is the scaling factor of vπ in the Q-function qπ, relative to aπ. As mentioned by Baird III (1993), a bad scaling between vπ and aπ in qπ, e.g., an extremely large |κ|, may result in significant performance degradation or extremely slow Q-learning.

Compared with the other off-policy IPI methods, the use of the weighting factor β ∈ (0, ∞) is one of the major distinguishing features of IQPI—β plays a similar role to the discount factor γ ∈ (0, 1] in the Bellman equation, but can be arbitrarily set in the algorithm; it can be equal to γ or not. In the special case (47), β should not be equal to γ since the log ratio ln(β/γ) of the two determines the non-zero scaling gain κ in (48) and (49). Since Algorithm 3b is a special case of IQPI (Algorithm 3a), it also has the same mathematical properties shown in Section 4.5.

Algorithm 3b: IQPI with the Simplified Setting (47)

Policy Evaluation: given the current policy πi, a weighting factor β > 0, and an AD policy µ over U, find a continuous function qi : X × U → R such that (42) holds for all (x, u) ∈ X × U and for Zτ given by

    Zτ = κ · ( Rτ − qi(Xτ, Uτ) );    (49)

Policy Improvement: find a policy πi+1 satisfying (43);

4.3 Integral Explorized Policy Iteration (IEPI)

The on-policy IPI (Algorithm 1a) can be easily generalized and extended to its off-policy version without introducing any AD function such as ai in IAPI and qi in IQPI. In this paper, we name it integral explorized policy iteration (IEPI), following the perspectives of Lee et al. (2012), and present its policy evaluation and improvement loop in Algorithm 4a. Similarly to on-policy IPI with its policy improvement (22) replaced by (26), IEPI is also partially model-free—the input-coupling dynamics fc has to be used in both policy evaluation and improvement, while the drift term fd is not when the system dynamics f is decomposed as (25).

Algorithm 4a: IEPI for the General Case (1)–(2)

Policy Evaluation: given the current policy πi and a (non-stationary) policy µ, find a C1 function vi : X → R such that for all x ∈ X,

    vi(x) = E^x_µ [ ∫_t^{t′} γ^{τ−t} Zτ dτ + γ^{∆t} vi(X_{t′}) | Xt = x ],    (50)

    where Zτ = Rτ^{πi} − ∇vi(Xτ) ( fc(τ) − fc^{πi}(τ) ),
    with fc(τ) ≐ fc(Xτ, Uτ) and fc^{πi}(τ) ≐ fc(Xτ, πi(Xτ));

Policy Improvement: find a policy πi+1 satisfying (26);

Note that the difference of IEPI from on-policy IPI lies in its Bellman equation (50)—it contains the compensating term "∇vi(Xτ)( fc(τ) − fc^{πi}(τ) )" that naturally emerges due to the difference between the behavior policy µ and the target one πi. For µ = πi, the compensating term becomes identically zero, in which case the Bellman equation (50) becomes (21) in on-policy IPI. For any given policy µ, IEPI in fact generates the same result {(vi, πi)}∞_{i=0} as its on-policy version (Algorithm 1a) under the same initial condition, as shown in Theorem 7 (and Remark 7) in Section 4.5.

In what follows, we are particularly interested in IEPI under the u-AC setting (27)–(29), shown in Algorithm 4b. In this case, the maximization process in the policy improvement is simplified to the update rule (30), also known as the VGB greedy policy (Doya, 2000). On the other hand, the compensating term in Zτ of the Bellman equation (50) is also simplified to "∇vi(Xτ)Fc(Xτ)ξτ^{πi}," which is linear in the difference ξτ^{πi} ≐ Uτ − πi(Xτ) at time τ and contains the function ∇vi · Fc, also shown in its policy improvement rule (30), in common. This observation brings our next off-policy IPI method, named integral C-policy-iteration (ICPI).

Algorithm 4b: IEPI in the u-AC Setting (27)–(29)

Policy Evaluation: given the current policy πi and a (non-stationary) policy µ, find a C1 function vi : X → R such that (50) holds for all x ∈ X and for Zτ given by

    Zτ = Rτ^{πi} − ∇vi(Xτ) Fc(Xτ) ξτ^{πi},

    where ξτ^{πi} ≐ Uτ − πi(Xτ) is the policy difference;

Policy Improvement: update the next policy πi+1 by (30);

4.4 Integral C-Policy-Iteration (ICPI)

In the u-AC setting (27)–(29), we now modify IEPI (Algorithm 4b) to make it model-free by employing a function cπ : X → R^m, defined for a given admissible policy π as

    cπ(x) ≐ Fc^T(x) ∇vπ^T(x),    (51)
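To make (51) concrete: for a quadratic value function vπ(x) = xᵀPx (our toy choice, with symmetric P), ∇vπ(x) = 2xᵀP, so the C-function can be computed either analytically or, knowing only how to evaluate vπ, by finite differences, as this sketch shows.

```python
import numpy as np

def c_pi(x, v, Fc, eps=1e-6):
    """C-function (51): c_pi(x) = Fc(x)^T grad v(x)^T, with the gradient of
    the scalar value function v approximated by central finite differences."""
    grad = np.array([(v(x + eps * e) - v(x - eps * e)) / (2 * eps)
                     for e in np.eye(x.size)])
    return Fc(x).T @ grad

P = np.array([[-2.0, 0.5], [0.5, -1.0]])            # toy symmetric P (v <= 0 nearby)
v = lambda x: x @ P @ x                             # quadratic value function
Fc = lambda x: np.array([[0.0], [-np.cos(x[0])]])   # pendulum-style coupling (m = 1)

x = np.array([0.3, -1.2])
c_num = c_pi(x, v, Fc)                              # finite-difference version
c_ana = Fc(x).T @ (2 * P @ x)                       # analytic gradient: 2 P x
```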
which is continuous by the continuity of Fc and ∇vπ. Here, the function cπ will appear in both policy evaluation and improvement in Common and contains the input-Coupling term Fc, so we call it the C-function for an admissible policy π. Indeed, when (27)–(29) are true, the next policy π′ satisfying (18) for an admissible policy π is explicitly given by

    π′(x) = (σ ◦ cπ)(x) = σ(cπ(x)).

In the same way, if ci(x) = Fc^T(x)∇vi^T(x) is true, then (30) in Algorithm 4b can be replaced by (53), and the compensating term ∇vi(Xτ)Fc(Xτ)ξτ^{πi} in policy evaluation of IEPI (Algorithm 4b) by ci^T(Xτ)ξτ^{πi}.

Motivated by the above idea, we propose integral C-policy-iteration (ICPI), whose policy evaluation and improvement are shown in Algorithm 5. In the former, the functions vπi and cπi for the given (admissible) policy πi are estimated by solving the associated off-policy Bellman equation for vi and ci, and then the next policy πi+1 is updated using ci in the latter. In fact, ICPI is a model-free extension of IEPI—while ICPI does not, IEPI obviously needs the knowledge of the input-coupling dynamics Fc to run. A model-free off-policy IPI so-named integral Q-learning 14 by Lee et al. (2012, 2015), which was derived from IEPI under the Lyapunov stability framework, also falls into the class of ICPI for the unconstrained total case (U = R^m and γ = 1).

14 The name 'integral Q-learning' does not imply that it is involved with our Q-function (44). Instead, its derivation was based on the value function with singularly-perturbed actions (Lee et al., 2012).

Algorithm 5: Integral C-Policy-Iteration (ICPI)

(under the u-AC setting (27)–(29))

Policy Evaluation: given πi and an AD policy µ over a finite subset U0 = {u0, u1, · · · , um} of U satisfying (54), find a C1 function vi : X → R and a continuous function ci : X → R^m such that (39) holds for each (x, u) ∈ X × U0 and for Zτ given by

    Zτ = Rτ^{πi} − ci^T(Xτ) ξτ^{πi},    (52)

    where ξτ^{πi} ≐ Uτ − πi(Xτ) is the policy difference;

Policy Improvement: update the next policy πi+1 by

    πi+1(x) = σ(ci(x)).    (53)

Compared with IAPI and IQPI, the advantages of ICPI (at the cost of restricting the RL problem to the u-AC one (27)–(29)) are as follows.

(1) As in IEPI, the complicated maximization in the policy improvement of IAPI and IQPI has been replaced by the simple update rule (53), which is a kind of model-free VGB greedy policy (Doya, 2000).
(2) By virtue of the fact that there is no AD function to be estimated in ICPI, as in IEPI, the exploration over the smaller space X × {uj}_{j=0}^m, rather than the entire state-action space X × U, is enough to obtain the desired result "vi = vπi and ci = cπi" in its policy evaluation (see Algorithm 5 and Theorem 7 in the next subsection). Here, the uj's are any vectors in U such that

    span{ uj − uj−1 }_{j=1}^m = R^m.    (54) 15

15 When U contains the zero vector, any linearly independent subset {uj}_{j=1}^m together with u0 = 0 is an example of such uj's in (54).

Remark 6 One might consider a general version of ICPI by replacing the term ∇vi(x)fc(x, u) in the general IEPI (Algorithm 4a) with an AD function, say c′i(x, u). In this case, however, it loses the merits of ICPI over IAPI and IQPI shown above. Furthermore, the solution (vi, c′i) of the associated Bellman equation is not uniquely determined—a pair of vi and any ĉi(x, u) ≐ c′i(x, u) + b(x), for a continuous function b(x), is also a solution to (39) for Zτ given by

    Zτ = Rτ^{πi} − ĉi(Xτ, Uτ) + ĉi(Xτ, πi(Xτ)).

4.5 Mathematical Properties of Off-policy IPI Methods

Now, we show that every off-policy IPI method is effectively the same as on-policy IPI, in the sense that the sequences {vi}∞_{i=0} and {πi}∞_{i=0} generated satisfy Theorems 3 and 4 under the same assumptions and are equal to those in on-policy IPI under the uniqueness of the next policy π′ in Assumption 1. In the case of IQPI, we let vi ≐ qi(·, πi(·))/κ1 and assume

Assumption 3b For each i ∈ Z+, if πi is admissible, then vi ≐ qi(·, πi(·))/κ1 is C1 and

    lim_{k→∞} lπi(x, k; vi) = 0 for all x ∈ X.

Theorem 7 Under Assumptions 1 and 3a (or 3b in IQPI), the sequences {πi}∞_{i=0} and {vi}∞_{i=0} generated by any off-policy IPI method (IAPI, IQPI, IEPI, or ICPI) satisfy the properties (P1)–(P3) in Theorem 3. Moreover,

    ai = aπi (IAPI), qi = qπi (IQPI), and ci = cπi (ICPI)

for all i ∈ Z+; if Assumption 2 also holds, then vi converges towards the optimal solution v∗, in the sense that {vi}∞_{i=0} satisfies the convergence properties (C1)–(C2) in Theorem 5.

Proof. See Appendix B. □

Remark 7 If the policy π′ satisfying (18) in Assumption 1 is unique for each admissible π, then Theorem 7 also shows that {πi}∞_{i=0} and {vi}∞_{i=0} generated by any off-policy IPI in this paper are even equivalent to those in on-policy IPI under the same initial π0. An example of this is the u-AC case (27)–(29), where the next policy πi+1 is always uniquely given by the VGB greedy policy (30) for given vi ∈ Va.
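The exploration-set condition (54) is easy to check numerically; the sketch below (with arbitrary example sets of ours) verifies that the increments uj − uj−1 span R^m, using u0 = 0 and a linearly independent set as suggested in the footnote.

```python
import numpy as np

def satisfies_span_condition(us):
    """Check (54): span{u_j - u_{j-1}, j = 1..m} = R^m for u_0, ..., u_m in R^m."""
    diffs = np.diff(np.asarray(us, dtype=float), axis=0)   # rows u_j - u_{j-1}
    m = diffs.shape[1]
    return np.linalg.matrix_rank(diffs) == m

# u_0 = 0 plus a linearly independent set {u_1, ..., u_m} (cf. footnote 15):
us_good = [[0.0, 0.0], [1.0, 0.0], [1.0, 1.0]]
us_bad = [[0.0, 0.0], [1.0, 0.0], [2.0, 0.0]]   # all increments collinear
```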
4.6 Summary and Discussions

The off-policy IPI methods presented in this section are compared and summarized in Table 1. As shown in Table 1, all of the off-policy IPI methods are model-free except IEPI, which needs the full knowledge of the input-coupling dynamics fc in (25) to run; here, ICPI is actually a model-free version of the u-AC IEPI (Algorithm 4b). While IAPI and IQPI explore the whole state-action space X × U to learn their respective functions (vπ, aπ) and qπ, IEPI and ICPI search only the significantly smaller spaces X and X × {uj}_{j=0}^m, respectively. This is due to the fact that IEPI and ICPI both learn no AD function such as aπ and qπ, as shown in the fourth column of Table 1. While IAPI and IQPI employ the reward Rτ, both IEPI and ICPI use the πi-reward Rτ^{πi} at each i-th iteration.

Table 1
Summary of the off-policy IPI methods (O: yes; X: no/none).

Name | Model-free | Rτ or Rτ^π | Functions involved | Search space | Algorithm No. | Constraint(s)
IAPI | O | Rτ     | vπ and aπ | X × U    | 2       | (40)
IQPI | O | Rτ     | qπ        | X × U    | 3a / 3b | X / (47)
IEPI | X | Rτ^π   | vπ        | X        | 4a / 4b | X / (27)–(29)
ICPI | O | Rτ^π   | vπ and cπ | X × {uj} | 5       | (27)–(29)

Table 1 also summarizes the constraint(s) on each algorithm. IAPI has the constraint (40) on ai and πi in the policy evaluation that reflects the equality aπi(x, πi(x)) = 0, similarly to advantage updating (Baird III, 1993; Doya, 2000). ICPI is designed under the u-AC setting (27)–(29), which gives:

(1) the uniqueness of the target solution (vi, ci) = (vπi, cπi) of the Bellman equation (39) for Zτ given by (52);
(2) the exploration of a smaller space X × {uj}_{j=0}^m, rather than the whole state-action space X × U;
(3) the simple update rule (53) in policy improvement, the model-free version of the VGB greedy policy (Doya, 2000), in place of the complicated maximization over U for each x ∈ X such as (41) and (43) in IAPI and IQPI.

The special IEPI scheme (Algorithm 4b) designed under the u-AC setting (27)–(29) also updates the next policy πi+1 via the simple policy improvement update rule (30) (a.k.a. the VGB greedy policy (Doya, 2000)), rather than performing the maximization (26). IQPI can be also simplified to Algorithm 3b under the different weighting (or discounting) by β (≠ γ) and the gain setting κ1 = κ2 = κ (κ ≐ ln(β/γ)) shown in (47). In this case, β ∈ (0, ∞) determines the gain of the integral in policy evaluation and scales vπ with respect to aπ in qπ (see (48) and (49)).

For any of the model-free methods, if Uτ = πi(Xτ), rather than Uτ = µ(τ, Xτ, u), then their AD parts, summarized in Table 2 and shown in their off-policy Bellman equations (or their Zτ's), become all zeros and thus no longer detectable—hence the need for a behavior policy µ different from the target policy πi to obtain or estimate the respective functions in such AD terms. In the case of IEPI, if Uτ = πi(Xτ) for all τ ∈ [t, t′], then it becomes equal to "on-policy IPI with its policy improvement (22) replaced by (26)."

Table 2
The AD parts in policy evaluation of the model-free methods.

Name | The AD parts
IAPI | ai(Xτ, Uτ) − ai(Xτ, πi(Xτ))
IQPI | κ2 ( qi(Xτ, Uτ) − qi(Xτ, πi(Xτ)) )
ICPI | ci^T(Xτ) ξτ^{πi}

5 Inverted-Pendulum Simulation Examples

To support the theory and verify the performance, we present the simulation results of the IPI methods applied to the 2nd-order inverted-pendulum model (n = 2 and m = 1):

    θ̈τ = −0.01 θ̇τ + 9.8 sin θτ − Uτ cos θτ,

where θτ, Uτ ∈ R are the angular position of, and the external torque input to, the pendulum at time τ, respectively, with the torque limit given by |Uτ| ≤ Umax for Umax = 5 [N·m]. Note that this model is exactly the same as that used by Doya (2000), except that the action Uτ, the torque input, is coupled with the term 'cos θτ' rather than the constant '1,' which makes our problem more realistic and challenging. Letting Xτ ≐ [θτ θ̇τ]^T, the inverted-pendulum model can be expressed as (1) and (27) with

    fd(x) = [ x2 ; 9.8 sin x1 − 0.01 x2 ]  and  Fc(x) = [ 0 ; −cos x1 ],

where x = [x1 x2]^T ∈ R^2. Here, our learning objective is to make the pendulum swing up and eventually settle down at the upright position θτ = 2πk for some k ∈ Z. The reward R to achieve such a goal under the limited torque was therefore set to (29) and (31) with U = [−Umax, Umax], Γ = 1, and the functions R0(x) and s(ξ) given by

    R0(x) = 10² cos x1 and s(ξ) = Umax tanh(ξ/Umax);
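A direct transcription of the simulation model above (our sketch; the numbers are those given in the text):

```python
import numpy as np

U_MAX = 5.0                      # torque limit [N·m]

def f_d(x):                      # drift dynamics f_d of the inverted pendulum
    theta, theta_dot = x
    return np.array([theta_dot, 9.8 * np.sin(theta) - 0.01 * theta_dot])

def F_c(x):                      # input-coupling term (here an R^2 column, m = 1)
    return np.array([0.0, -np.cos(x[0])])

def R0(x):                       # state reward: maximal at the upright position
    return 10.0 ** 2 * np.cos(x[0])

def s(xi):                       # sigmoidal saturation used in (30)/(31)
    return U_MAX * np.tanh(xi / U_MAX)

def f(x, u):                     # full u-affine dynamics (27): f_d(x) + F_c(x) u
    return f_d(x) + F_c(x) * u
```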
Fig. 1. (a) v̂10(x) (IEPI); (b) v̂10(x) (ICPI); (c) v̂10(x) (IAPI); (d) q̂10(x, π̂10(x))/κ (IQPI). The estimates of vi|i=10 (≈ v∗) obtained by the respective IPI methods over the region Ωx; the horizontal and vertical axes correspond to the values of the angular position x1 and the velocity x2 of the pendulum; v̂i, q̂i, and π̂i denote the estimates of vi, qi, and πi obtained by running each off-policy IPI. In the IQPI case, v̂i ≈ q̂i(·, π̂i(·))/κ holds by (45) as long as the associated functions are well-estimated.

the sigmoid function s with Γ = 1 then gives the following expressions of the functions σ(ξ) in (30) and S(u) in (29):

    σ(ξ) = Umax tanh(ξ/Umax),
    S(u) = (Umax²/2) · ln( u₊^{u₊} · u₋^{u₋} ),

where u± ≐ 1 ± u/Umax. Here, note that S(u) is finite for all u ∈ U and has its maximum at the end points u = ±Umax, as S(±Umax) = (Umax² ln 4)/2 ≈ 17.3287. The initial policy π0 was given by π0 = 0, and for its admissibility vπ0 ∈ Va, we set the discount factor as γ = 0.1, less than 1. This is a high-gain (state reward R0(x) = 10² cos x1) and low-discounting (γ = 0.1) scheme, which made it possible to achieve the learning objective merely after the first iteration.

Under the above u-AC framework, we simulated the four off-policy methods (Algorithms 2, 3b, 4b, and 5) with the parameters ∆t = 10 [ms] and β = 1. On-policy IPI in Section 3 is a special case µ = π of IEPI and is thus omitted. The behavior policy µ used in the simulations was µ = 0 for IEPI and µ(t, x, u) = u for the others; the next target policy πi+1 was given by πi+1(x) = σ(yi(x)), where yi(x) = Fc^T(x)∇vi^T(x) in IEPI and yi(x) = ci(x) in ICPI; in IAPI and IQPI, yi(x) is approximately equal to the output of a radial basis function network (RBFN) trained by policy improvement using ai and qi, respectively. The functions vi, ai, qi, and ci were all approximated by RBFNs as well. Instead of the whole spaces X and X × U, we considered their compact regions Ωx ≐ [−π, π] × [−6, 6] and Ωx × U in all of our simulations; since our inverted-pendulum system and the value function are 2π-periodic in the angular position x1, the state value x ∈ X was normalized to x̄ ∈ [−π, π] × R whenever input to the RBFNs. The details about the RBFNs and the implementation methods of the policy evaluation and improvement are shown in Appendix D. Every IPI method ran up to the 10th iteration.

Fig. 2. The state trajectories generated under the initial condition x0 = [1.1π 0]^T and the estimated policy π̂i of πi for i = 10, obtained by running each IPI method (IEPI, ICPI, IAPI, and IQPI): (a) Eπ̂10[θτ | X0 = x0], the trajectory of the angular position θτ; (b) Eπ̂10[θ̇τ | X0 = x0], the trajectory of the angular velocity θ̇τ.

Fig. 1 shows the estimated values of vπi(x) for x ∈ Ωx after the learning has been completed (at i = 10), where, after convergence, vπi may be considered to be an approximation of the optimal value function v∗. Although there are some small ripples in the case of IQPI, the final value function estimates shown in Fig. 1, generated by the different IPI methods (IEPI, ICPI, IAPI, and IQPI), are all consistent with each other. We also generated the state trajectories X· shown in Fig. 2 for the initial condition θ0 = (1 + ε0)π with ε0 = 0.1 and θ̇0 = 0, under the estimated policy π̂i of πi finally obtained at the last iteration (i = 10) of each IPI method. As shown in Fig. 2, all of the policies π̂10 obtained by the different IPI methods generate state trajectories that are almost consistent with each other—they all achieved the learning objective at around t = 4 [s], and the whole state trajectories generated are almost the same (or very close) to each other. Also note that the IPI methods achieved our learning objective without using an initial stabilizing policy, which is usually required in the optimal control setting under the total discounting γ = 1 (e.g., Abu-Khalaf and Lewis, 2005; Vrabie and Lewis, 2009; Lee et al., 2015).
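As a minimal reproduction aid (ours, not from the paper), an explicit-Euler rollout of the pendulum under zero torque shows why learning is needed: from θ0 = 1.1π the uncontrolled pendulum merely oscillates about the hanging position θ = π instead of reaching the upright position 2πk.

```python
import math

def rollout(theta0, theta_dot0, policy, T=8.0, dt=1e-3):
    """Explicit-Euler simulation of theta'' = -0.01 theta' + 9.8 sin(theta) - u cos(theta)."""
    theta, theta_dot, traj = theta0, theta_dot0, []
    for _ in range(int(T / dt)):
        u = policy(theta, theta_dot)
        theta_ddot = -0.01 * theta_dot + 9.8 * math.sin(theta) - u * math.cos(theta)
        theta += dt * theta_dot
        theta_dot += dt * theta_ddot
        traj.append(theta)
    return traj

traj = rollout(1.1 * math.pi, 0.0, policy=lambda th, thd: 0.0)  # zero-torque policy
max_dev = max(abs(th - math.pi) for th in traj)   # deviation from hanging position
```

Replacing the zero-torque policy with a learned π̂10 would instead drive the trajectory to the upright position, as in Fig. 2.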
Frémaux, N., Sprekeler, H., and Gerstner, W. Reinforcement learning using a continuous time actor-critic framework with spiking
neurons. PLoS Comput. Biol., 9(4):e1003024, 2013.
Haddad, W. M. and Chellaboina, V. Nonlinear dynamical systems
and control: a Lyapunov-based approach. Princeton University
Press, 2008.
Howard, R. A. Dynamic programming and Markov processes.
Tech. Press of MIT and John Wiley & Sons Inc., 1960.
Kirk, W. and Sims, B. Handbook of metric fixed point theory.
Springer Science & Business Media, 2013.
Kleinman, D. On an iterative technique for Riccati equation computations. IEEE Trans. Autom. Cont., 13(1):114–115, 1968.
Lagoudakis, M. G. and Parr, R. Least-squares policy iteration.
J. Mach. Learn. Res., 4(Dec):1107–1149, 2003.
Leake, R. J. and Liu, R.-W. Construction of suboptimal control
sequences. SIAM Journal on Control, 5(1):54–63, 1967.
Lee, J. Y., Park, J. B., and Choi, Y. H. Integral Q-learning
and explorized policy iteration for adaptive optimal control
of continuous-time linear systems. Automatica, 48(11):2850–
2859, 2012.
Lee, J. Y., Park, J. B., and Choi, Y. H. On integral generalized
policy iteration for continuous-time linear quadratic regulations.
Automatica, 50(2):475–489, 2014.
Lee, J. Y., Park, J. B., and Choi, Y. H. Integral reinforcement
learning for continuous-time input-affine nonlinear systems with
simultaneous invariant explorations. IEEE Trans. Neural Networks and Learning Systems, 26(5):916–932, 2015.
Lewis, F. L. and Vrabie, D. Reinforcement learning and adaptive
dynamic programming for feedback control. IEEE Circuits and
Systems Magazine, 9(3):32–50, 2009.
Loeb, P. A. and Talvila, E. Lusin’s Theorem and Bochner integration. Scientiae Mathematicae Japonicae, 10:55–62, 2004.
Luo, B., Wu, H.-N., Huang, T., and Liu, D. Data-based approximate policy iteration for affine nonlinear continuous-time optimal control design. Automatica, 50(12):3281–3290, 2014.
Maei, H. R., Szepesvári, C., Bhatnagar, S., and Sutton, R. S.
Toward off-policy learning control with function approximation.
In Proceedings of the 27th International Conference on Machine
Learning (ICML-10), pages 719–726, 2010.
Mehta, P. and Meyn, S. Q-learning and pontryagin’s minimum
principle. In Proc. IEEE Int. Conf. Decision and Control, held
jointly with the Chinese Control Conference (CDC/CCC), pages
3598–3605, 2009.
Modares, H., Lewis, F. L., and Jiang, Z.-P. Optimal outputfeedback control of unknown continuous-time linear systems
using off-policy reinforcement learning. IEEE Trans. Cybern.,
46(11):2401–2410, 2016.
Murray, J. J., Cox, C. J., Lendaris, G. G., and Saeks, R. Adaptive
dynamic programming. IEEE Trans. Syst. Man Cybern. Part
C-Appl. Rev., 32(2):140–153, 2002.
Powell, W. B. Approximate dynamic programming: solving the
curses of dimensionality. Wiley-Interscience, 2007.
Saridis, G. N. and Lee, C. S. G. An approximation theory of
optimal control for trainable manipulators. IEEE Trans. Syst.
Man Cybern., 9(3):152–159, 1979.
Sutton, R. S. and Barto, A. G. Reinforcement learning: an introduction. Second Edition in Progress, MIT Press, Cambridge,
MA (available at http://incompleteideas.net/sutton), 2017.
Thomson, B. S., Bruckner, J. B., and Bruckner, A. M. Elementary
real analysis. Prentice Hall, 2001.
Vamvoudakis, K. G., Vrabie, D., and Lewis, F. L. Online adaptive
algorithm for optimal control with integral reinforcement learning. Int. J. Robust and Nonlinear Control, 24(17):2686–2710,
2014.
Conclusions
In this paper, we proposed the on-policy IPI scheme and
four off-policy IPI methods (IAPI, IQPI, IEPI, and ICPI)
which solve the general RL problem formulated in CTS.
We proved their mathematical properties of admissibility,
monotone improvement, and convergence, together with the
equivalence of the on- and off-policy methods. It was shown
that on-policy IPI can be made partially model-free by modifying its policy improvement, and the off-policy methods are
partially model-free (IEPI), completely model-free (IAPI,
IQPI), or model-free but only implementable in the u-AC
setting (ICPI). The off-policy methods were discussed and
compared with each other as listed in Table 1. Numerical
simulations were performed with the 2nd-order invertedpendulum model to support the theory and verify the performance, and the results with all algorithms were consistent
and approximately equal to each other. Unlike the IPI methods in the stability-based framework, an initial stabilizing
policy is not required to run any of the proposed IPI methods. This work also provides the ideal PI forms of RL in
CTS such as advantage updating (IAPI), Q-learning in CTS
(IQPI), VGB greedy policy improvement (on-policy IPI and
IEPI under u-AC setting), and the model-free VGB greedy
policy improvement (ICPI). Though the proposed IPI methods are not online incremental RL algorithms, we believe
that this work provides the theoretical background and intuition to the (online incremental) RL methods to be developed in the future and developed so far in CTS.
References
Abu-Khalaf, M. and Lewis, F. L. Nearly optimal control laws
for nonlinear systems with saturating actuators using a neural
network HJB approach. Automatica, 41(5):779–791, 2005.
Aleksandar, B., Lever, G., and Barber, D. Nesterov’s accelerated
gradient and momentum as approximations to regularised update descent. arXiv preprint arXiv:1607.01981v2, 2016.
Anderson, B. and Moore, J. B. Optimal control: linear quadratic
methods. Prentice-Hall, Inc., 1989.
Baird III, L. C. Advantage updating. Technical report, DTIC
Document, 1993.
Beard, R. W., Saridis, G. N., and Wen, J. T. Galerkin approximations of the generalized Hamilton-Jacobi-Bellman equation.
Automatica, 33(12):2159–2177, 1997.
Bessaga, C. On the converse of banach “fixed-point principle”.
Colloquium Mathematicae, 7(1):41–43, 1959.
Doya, K. Reinforcement learning in continuous time and space.
Neural computation, 12(1):219–245, 2000.
Farahmand, A. M., Ghavamzadeh, M., Mannor, S., and Szepesvári,
C. Regularized policy iteration. In Advances in Neural Information Processing Systems, pages 441–448, 2009.
Folland, G. B. Real analysis: modern techniques and their applications. John Wiley & Sons, 1999.
Appendix

A  Proof of Lemma 2

In this proof, we first focus on the case N = 1 and then generalize the result. By Theorem 2, T v∗ is an admissible value function and satisfies v∗ ≤ T v∗, but T v∗ ≤ v∗ since v∗ is the optimal value function. Therefore, T v∗ = v∗ and v∗ is a fixed point of T.

Claim A.1  v∗ is the unique fixed point of T.

Proof. To show the uniqueness, suppose vπ ∈ Va is another fixed point of T and let π′ be the next policy obtained by policy improvement with respect to the fixed point vπ. Then, π′ is admissible by Theorem 2, and it is obvious that

    − ln γ · T vπ(x) = h(x, π′(x), ∇T vπ(x))    (x ∈ X),

by vπ′ = T vπ and (11) for the admissible policy π′. The substitution of T vπ = vπ into it results in

    − ln γ · vπ(x) = h(x, π′(x), ∇vπ(x)) = max_{u∈U} h(x, u, ∇vπ(x))

for all x ∈ X, the HJBE. Therefore, vπ = v∗ by Corollary 3 and Assumptions 1 and 2, a contradiction, implying that v∗ is the unique fixed point of T. □

Now, we generalize the result to the case with any N ∈ N. Since v∗ is the fixed point of T, we have

    T^N v∗ = T^{N−1}[T v∗] = T^{N−1} v∗ = · · · = T v∗ = v∗,

showing that v∗ is also a fixed point of T^N for any N ∈ N. To prove that v∗ is the unique fixed point of T^N for all N, suppose that there is some M ∈ N and v ∈ Va such that T^M v = v. Then, it implies T v = v since we have

    v ≤ T v ≤ T²v ≤ · · · ≤ T^M v = v

by the repetitive applications of Theorem 2. Therefore, we obtain v = v∗ by Claim A.1, which completes the proof. □

B  Proof of Theorem 7

For the proof, we employ the following lemma regarding the conversion from a time-integral to an algebraic equation. Its proof is given in Appendix C.

Lemma B.1  Let v : X → R and Z : X × U → R be any C¹ and continuous functions, respectively. If there exist ∆t > 0, a weighting factor β > 0, and an AD policy µ over a nonempty subset U0 ⊆ U such that for each (x, u) ∈ X × U0,

    v(x) = Eµ[ ∫_t^{t′} β^{τ−t} Zτ dτ + β^{∆t} v(X_{t′}) | Xt = x, Ut = u ],    (B.1)

where Zτ ≐ Z(Xτ, Uτ) for τ ≥ t, then

    − ln β · v(x) = Z(x, u) + ∇v(x)f(x, u)

holds for all (x, u) ∈ X × U0.

The application of Lemma B.1 to the Bellman equations of the off-policy IPI methods (IAPI, IQPI, IEPI, and ICPI) provides the following claim.

Claim B.1  If πi is admissible, then vi and πi+1 obtained by the i-th policy evaluation and improvement of any off-policy IPI method satisfy vi = vπi and (23). Moreover,

    ai = aπi (IAPI), qi = qπi (IQPI), and ci = cπi (ICPI).

Suppose that πi is admissible. Then, if Claim B.1 is true, then πi+1 in any off-policy IPI method satisfies (23) and hence Theorem 2 with Assumption 1 proves that πi+1 is also admissible and satisfies vπi ≤ vπi+1 ≤ vπ∗. Since π0 is admissible in the off-policy IPI methods, mathematical induction proves the first part of the theorem. Moreover, now that we have the properties (P1)–(P3), if Assumption 2 additionally holds, then we can easily prove the convergence properties (C1)–(C2) in Theorem 4 by following its proof.

Proof of Claim B.1.
(IAPI/IQPI) Applying Lemma B.1 with U0 = U to (39) in IAPI and to (42) in IQPI and then substituting the definition (5) of h(x, u, p) show that (vi, ai) in IAPI and (vi, qi) in IQPI (with vi ≐ qi(·, πi(·))/κ1) satisfy

    − ln γ · vi(x) = h(x, u, ∇vi(x)) − ai(x, u) + ai(x, πi(x)),    (B.2)

    − ln γ · vi(x) = h(x, u, ∇vi(x)) − (κ2/κ1)(qi(x, u) − κ1 vi(x))    (B.3)

for all (x, u) ∈ X × U, respectively. Furthermore, the substitutions of u = πi(x) into (B.2) and (B.3) yield

    − ln γ · vi(x) = h(x, πi(x), ∇vi(x))    ∀x ∈ X,    (B.4)

which implies vi = vπi by Theorem 1 and Assumption 3a. Next, substituting vi = vπi into (B.2) and (B.3) and then rearranging it with (12) (and (40) in the IAPI case) result in

    ai(x, u) = hπi(x, u) + ln γ · vπi(x),
    qi(x, u) = κ1 vπi(x) + ai(x, u)/κ2,
and hence we obtain ai = aπi and qi = qπi by the definitions (37) and (44). By this and the respective policy improvement of IAPI and IQPI, it is obvious that the next policy πi+1 in each algorithm satisfies

    ∀x ∈ X :  πi+1(x) ∈ arg max_{u∈U} aπi(x, u)  (IAPI);
              πi+1(x) ∈ arg max_{u∈U} qπi(x, u)  (IQPI).

Since they are equivalent to (20) with π = πi and some special choices of bπ and κ > 0,¹⁶ and (20) is equivalent to (18), πi+1 in both IAPI and IQPI satisfy (23) in (P2).

(IEPI) By (C.2) in Appendix C and (25), the Bellman equation (50) can be expressed as

    0 = Eµ[ ∫_t^{t′} γ^{τ−t} φi(Xτ) dτ | Xt = x ],

where φi : X → R is given by

    φi(x) ≐ R(x, πi(x)) + ln γ · vi(x) + ∇vi(x)f(x, πi(x)),

which is obviously continuous since so are all functions contained in it. Thus, the term γ^{τ−t} φi(Xτ) is integrable over [t, t′], and Claim C.1 in Appendix C with w(x, u) = φi(x) for all (x, u) ∈ X × U implies φi = 0, which results in (B.4) and hence vi = vπi by Theorem 1 and Assumption 3a. Since the policy improvement (26) in IEPI is equivalent to solving (22), it is equivalent to (23) by vi = vπi and (12).

(ICPI) Applying Lemma B.1 to the policy evaluation of ICPI and rearranging it using (5) and (27), we obtain for each (x, u) ∈ X × U0:

    − ln γ · vi(x) = R(x, πi(x)) − ci^T(x)(u − πi(x)) + ∇vi(x)f(x, u)
                   = h(x, πi(x), ∇vi(x)) + (u − πi(x))^T ψ(x),    (B.5)

where ψ(x) ≐ Fc^T(x)∇vi^T(x) − ci(x). Next, let x ∈ X be an arbitrary fixed value. Then, for each j ∈ {1, 2, · · · , m}, subtracting (B.5) for u = u_{j−1} from the same equation but for u = u_j yields 0 = (u_j − u_{j−1})ψ(x). This can be rewritten in the following matrix-vector form:

    (E_{1:m} − E_{0:m−1})ψ(x) = 0,    (B.6)

where E_{k:l} ≐ [u_k u_{k+1} · · · u_l] for 0 ≤ k ≤ l ≤ m is the m × (l − k + 1)-matrix constructed by the column vectors u_k, u_{k+1}, · · · , u_l. Since (54) implies that {u_j − u_{j−1}}_{j=1}^m is a basis of R^m, we have rank(E_{1:m} − E_{0:m−1}) = m and by (B.6), ψ(x) = 0. Since x ∈ X is arbitrary, ψ = 0.

Now that we have ψ(x) = 0 ∀x ∈ X, (B.5) becomes (B.4) and thus vi = vπi by Theorem 1 and Assumption 3a. This also implies ci = cπi by ψ = 0 and the definition of ψ. Moreover, by vi = vπi, the policy improvement (53) is equal to πi+1(x) = σ(cπi(x)) for all x ∈ X, which is the closed-form solution of (23) in the u-AC setting (27)–(29). □

C  Proof of Lemma B.1

The proof is done using the following claim.

Claim C.1  Let µ be a policy starting at t and w : X × U → R be a continuous function. If there exist ∆t > 0 and β > 0 such that for all x ∈ X,

    0 = Eµ[ ∫_t^{t′} β^{τ−t} w(Xτ, Uτ) dτ | Xt = x ],    (C.1)

then w(x, µ(t, x)) = 0 for all x ∈ X.

By the standard calculus, for any ∆t > 0 and β > 0,

    β^{τ−t} v(Xτ) |_t^{t′} = ∫_t^{t′} (d/dτ)[ β^{τ−t} v(Xτ) ] dτ
                           = ∫_t^{t′} β^{τ−t} [ ln β · v(Xτ) + v̇(Xτ, Uτ) ] dτ.    (C.2)

Hence, (B.1) can be rewritten for any (x, u) ∈ X × U0 as

    0 = Eµ[ ∫_t^{t′} β^{τ−t} w(Xτ, Uτ) dτ | Xt = x, Ut = u ],    (C.3)

where w(x, u) ≐ Z(x, u) + ln β · v(x) + ∇v(x)f(x, u). Here, w is continuous since so are v, ∇v, Z, and f, and thus the term β^{τ−t} w(Xτ, Uτ) is integrable over the compact time interval [t, t′]. Now, fix u ∈ U0 and consider the map µ(·, ·, u) obtained from µ by applying u at the initial time t. Then, one can see that

(1) µ(·, ·, u) is obviously a policy;
(2) the condition Ut = u in (C.3) is obviated for fixed u;
(3) µ(t, x, u) = u holds for all x ∈ X.

Hence, by Claim C.1, we obtain

    0 = w(x, µ(t, x, u)) = w(x, u) for all x ∈ X.

Since u ∈ U0 is arbitrary, we finally have w(x, u) = 0 for all (x, u) ∈ X × U0, which completes the proof.

Proof of Claim C.1. To prove the claim, let x0 ∈ X and x_k ≐ Eµ[X_{t′} | Xt = x_{k−1}] for k = 1, 2, 3, · · · . Then, (C.1) obviously holds for each x = x_k ∈ X (k ∈ Z+). Denote

    t0 ≐ t and t_k ≐ t + k∆t for any k ∈ N

and define µ̄_k : [t_k, ∞) × X → U for each k ∈ Z+ as

    µ̄_k(τ, x) ≐ µ(τ − t_k + t, x) for any τ ≥ t_k and any x ∈ X.

¹⁶ See the discussions right below (37) for IAPI and (46) for IQPI.
Then, obviously, µ̄_k is a policy that starts at time t_k. Moreover, since in our framework the non-stationarity, i.e., the explicit time-dependency, comes only from, if any, that of the applied policy, we obtain by the above process and (C.1) that

    0 = Eµ̄_k[ ∫_{t_k}^{t_{k+1}} β^{τ−t} w(Xτ, Uτ) dτ | X_{t_k} = x_k ]    (C.4)

for all k ∈ Z+. Next, construct µ̄ : [t, ∞) × X → U by

    µ̄(τ, x) ≐ µ̄_k(τ, x) for τ ∈ [t_k, t_{k+1}) and x ∈ X.

Then, for each fixed x ∈ X, µ̄(·, x) is right continuous since for all k ∈ Z+, so is µ̄_k(·, x) on each time interval [t_k, t_{k+1}). In a similar manner, for each fixed τ ∈ [t, ∞), µ̄(τ, ·) is continuous over X since for all k ∈ Z+, so is µ̄_k(τ, ·) for each fixed τ ∈ [t_k, t_{k+1}). Moreover, since µ̄_k is a policy starting at t_k, the state trajectory Eµ̄_k[X· | X_{t_k} = x_k] is uniquely defined over [t_k, t_{k+1}). Therefore, noting that x_k is represented (by definitions and the recursive relation) as

    x_k = Eµ̄_{k−1}[ X_{t_k} | X_{t_{k−1}} = x_{k−1} ] for any k ∈ N,

we conclude that the state trajectory Eµ̄[X· | Xt = x0] is also uniquely defined for each x0 ∈ X over [t, ∞), implying that µ̄ is a policy starting at time t. Finally, using the policy µ̄, we obtain from (C.4)

    0 = Σ_{k=0}^{∞} Eµ̄_k[ ∫_{t_k}^{t_{k+1}} β^{τ−t} w(Xτ, Uτ) dτ | X_{t_k} = x_k ]
      = Eµ̄[ ∫_t^{∞} β^{τ−t} w(Xτ, Uτ) dτ | Xt = x0 ] ≐ W(t; x0),

which implies W(t; x0) = 0 for all t ≥ 0. Since

    (∂/∂t) ∫_t^{∞} β^τ w(Xτ, Uτ) dτ = lim_{∆t→0} (1/∆t) ∫_t^{t′} β^τ w(Xτ, Uτ) dτ = w(Xt, Ut)

by (right) continuity of w, Xτ, and Uτ, we therefore obtain

    0 = ∂W(t; x0)/∂t = − ln β · W(t; x0) + β^{−t} · w(x0, µ̄(t, x0))

and thereby, w(x0, µ(t, x0)) = 0. Since x0 ∈ X is arbitrary, it implies w(x, µ(t, x)) = 0 for all x ∈ X. □

D  Inverted-Pendulum Simulation Methods

D.1  Linear Function Approximations by RBFNs

To describe the methods in a unified manner, we denote any network input by z and its corresponding input space by Z. They correspond to z = (x, u) and Z = X × U when the network is AD, and z = x and Z = X when it is not. In the simulations in Section 5, the functions vi, ai, qi, and ci are all approximated by RBFNs as shown below:

    vi(x) ≈ v̂(z; θi^v) ≐ φ^T(z̄)θi^v,      ai(x, u) ≈ â(z; θi^a) ≐ φ_AD^T(z̄)θi^a,
    ci(x) ≈ ĉ(z; θi^c) ≐ φ^T(z̄)θi^c,      qi(x, u) ≈ q̂(z; θi^q) ≐ φ_AD^T(z̄)θi^q,    (D.1)

where z̄ ∈ Z represents the input z ∈ Z to each network whose state-component x is normalized to x̄ ∈ [−π, π] × R by adding ±2πk to its first component x1 for some k ∈ Z+; θi^v, θi^c ∈ R^N and θi^a, θi^q ∈ R^M are the weight vectors of the networks; N, M ∈ N are the numbers of hidden neurons; the RBFs φ : Z → R^N with Z = X and φ_AD : Z → R^M with Z = X × U are defined as

    φ_j(x) = e^{−‖x−x_j‖²_{Σ1}}  and  φ_{AD,j}(z) = e^{−‖z−z_j‖²_{Σ2}}.

Here, φ_j and φ_{AD,j} are the j-th components of φ and φ_AD, respectively; ‖x‖_{Σ1} and ‖z‖_{Σ2} are weighted Euclidean norms defined as ‖x‖_{Σ1} ≐ (x^T Σ1 x)^{1/2} and ‖z‖_{Σ2} ≐ (z^T Σ2 z)^{1/2} for the diagonal matrices Σ1 ≐ diag{1, 0.5} and Σ2 ≐ diag{1, 0.5, 1}; x_j ∈ X for 1 ≤ j ≤ N and z_j ∈ X × U for 1 ≤ j ≤ M are the center points of RBFs that are uniformly distributed within the compact regions Ωx and Ωx × U, respectively. In all of the simulations, we choose N = 13² and M = 13³, so we have 13²-RBFs in v̂ and ĉ, and 13³-RBFs in â and q̂.

D.2  Policy Evaluation Methods

Under the approximation (D.1), the Bellman equations in Algorithms 2, 3b, 4b, and 5 can be expressed, with the approximation error ε : Z → R, as the following unified form:

    ψ^T(z) · θi = b(z) + ε(z),    (D.2)

where the parameter vector θi ∈ R^L to be estimated, with its dimension L, and the associated functions ψ : Z → R^{1×L} and b : Z → R are given in Table D.1 for each IPI method. In Table D.1, Iα(Z), Dα(v), and φ_AD^{πi} defined as

    Iα(Z) ≐ ∫_t^{t′} α^{τ−t} Z(X̄τ, Uτ) dτ,
    Dα(v) ≐ v(X̄t) − α^{∆t} v(X̄_{t′}),

and φ_AD^{πi} ≐ φ_AD(·, πi(·)) were used for simplicity, where X̄τ is the state value Xτ normalized to [−π, π] × R;¹⁷ we set β = 1 in our IQPI simulation.

¹⁷ Xτ is normalized to X̄τ whenever input to the RBFN(s). Other than that, the use of X̄τ instead of Xτ (or vice versa) does not affect the performance (e.g., R0(Xτ) = R0(X̄τ) in our setting).
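The Gaussian RBF features in (D.1) are straightforward to evaluate. Below is a minimal Python sketch of the weighted-norm features; the 5 × 5 center grid and the range of the second state component are illustrative stand-ins, not the paper's actual 13² grid over Ωx.

```python
import math

# Gaussian RBF features with a weighted Euclidean norm, as in (D.1):
#   phi_j(x) = exp(-||x - x_j||_Sigma^2),  ||x||_Sigma = (x^T Sigma x)^(1/2).
# The small center grid below stands in for the 13^2 centers used in the paper.

def rbf_features(x, centers, sigma_diag):
    """Return [phi_1(x), ..., phi_N(x)] for a diagonal weight matrix Sigma."""
    feats = []
    for c in centers:
        sq = sum(s * (xi - ci) ** 2 for xi, ci, s in zip(x, c, sigma_diag))
        feats.append(math.exp(-sq))
    return feats

# Illustrative 5x5 grid over [-pi, pi] x [-8, 8]; Sigma_1 = diag{1, 0.5}.
grid1 = [-math.pi + i * (2 * math.pi / 4) for i in range(5)]
grid2 = [-8.0 + i * 4.0 for i in range(5)]
centers = [(a, b) for a in grid1 for b in grid2]
sigma1 = (1.0, 0.5)

phi = rbf_features((0.1, -0.2), centers, sigma1)
# A linear-in-parameters network output is then phi^T theta, as for v_hat:
v_hat = sum(p * t for p, t in zip(phi, [0.0] * len(centers)))
```

Each feature is 1 at its own center and decays with the weighted distance, so the stacked features play the role of φ(z̄) in (D.1).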
Table D.1
θi, ψ, and b in the expression (D.2) for each Bellman equation

    Name (Alg. #)    θi               ψ(z)                                        b(z)
    IAPI (Alg. 2)    [θi^v; θi^a]     E_µ^z[ Dγ(φ) ; Iγ(φ_AD − φ_AD^{πi}) ]       E_µ^z Iγ(R)
    IQPI (Alg. 3b)   θi^q             E_µ^z[ Dβ(φ_AD^{πi}) + Iβ(φ_AD) ]           E_µ^z Iβ(R)
    IEPI (Alg. 4b)   θi^v             E_µ^z[ Dγ(φ) + Iγ(∇φ · Fc ξ·^{πi}) ]        E_µ^z Iγ(R^{πi})
    ICPI (Alg. 5)    [θi^v; θi^c]     E_µ^z[ Dγ(φ) ; Iγ(φ ξ·^{πi}) ]              E_µ^z Iγ(R^{πi})

In each i-th policy evaluation of each method, ψ(z) and b(z) in (D.2) were evaluated at the given data points z = z_{init,j} (j = 1, 2, · · · , L_init with L ≤ L_init) that are uniformly distributed over the respective compact regions Ωx × U (IAPI and IQPI), Ωx (IEPI), and Ωx × U0 with U0 = {−U_max, U_max} (ICPI). The trajectory X· over [t, t′] with each data point z_{init,j} used as its initial condition was generated using the 4th-order Runge-Kutta method with its time step ∆t/10 = 1 [ms], and the trapezoidal approximation

    Iα(Z) ≈ [ Z(X̄t, Ut) + α^{∆t} Z(X̄_{t′}, U_{t′}) ] / 2 · ∆t

was used in the evaluation of ψ and b. The number L_init of the data points z_{init,j} used in each IPI algorithm was 17³ (IAPI), 25 × 31 × 25 (IQPI), 17² (IEPI), and 17² × 2 (ICPI).

In the i-th policy evaluation of IAPI, we also evaluated the vectors ρ(x) ≐ [0; φ_AD(x, πi(x))] ∈ R^L at the grid points x_{grid,k} (k = 1, 2, · · · , L_grid with L_grid = 50²) uniformly distributed in Ωx, in order to take the constraint

    ρ^T(x)θi = ε_const(x)    (D.3)

obtained from ai(x, πi(x)) = 0 into consideration, where ε_const : X → R is the residual error. After evaluating ψ(·) and b(·) at all points z_{init,j} (and in addition to that, ρ(·) at all points x_{grid,k} in the IAPI case), the parameters θi in (D.2) were estimated using least squares as

    θ̂i = ( Σ_{j=1}^{L_init} ψ_j ψ_j^T )^{−1} Σ_{j=1}^{L_init} ψ_j b_j    (D.4)

in the case of IQPI, IEPI, and ICPI, where ψ_j ≐ ψ(z_{init,j}) and b_j ≐ b(z_{init,j}). This θ̂i minimizes the squared error J(θi) = ε²(z_{init,1}) + · · · + ε²(z_{init,L_init}). In IAPI, θi in (D.2) and (D.3) were estimated also in the least-squares sense as

    θ̂i = ( Σ_{j=1}^{L_init} ψ_j ψ_j^T + Σ_{k=1}^{L_grid} ρ_k ρ_k^T )^{−1} Σ_{j=1}^{L_init} ψ_j b_j,    (D.5)

where ρ_k ≐ ρ(x_{grid,k}); this θ̂i minimizes the squared error J_IAPI(θi) = J(θi) + (ε²_const(x_{grid,1}) + · · · + ε²_const(x_{grid,L_grid})).

D.3  Policy Improvement Methods

In each i-th policy improvement, the next policy πi+1 was obtained using the estimates θ̂i^v, θ̂i^c, θ̂i^a, or θ̂i^q obtained at the i-th policy evaluation by (D.4) or (D.5) depending on the algorithms. In all of the simulations, the next policy πi+1 was parameterized as πi+1(x) ≈ σ(ŷi(x)), where ŷi(x) was directly determined in IEPI and ICPI (Algorithms 4b and 5) as ŷi(x) = Fc^T(x)∇v̂^T(x; θ̂i^v) and ŷi(x) = ĉ(x; θ̂i^c), respectively. In IAPI and IQPI, ŷi(x) is the output of the RBFN we additionally introduced:

    ŷi(x) = φ^T(x̄)θ̂i^u

to perform the respective maximizations (41) and (43). Here, θ̂i^u ∈ R^N is updated by the mini-batch regularized update descent (RUD) (Aleksandar, Lever, and Barber, 2016) shown in Algorithm D.1, which is a variant of stochastic gradient descent, to perform such maximizations with improved convergence speed. In Algorithm D.1, J_imp(x; θ) is given by

    J_imp(x; θ) = q̂(x, u; θ̂i^q)|_{u=σ(φ^T(x̄)θ)} in IQPI;  â(x, u; θ̂i^a)|_{u=σ(φ^T(x̄)θ)} in IAPI;

the error tolerance 0 < δ ≪ 1 was set to δ = 0.01; the smoothing factor λ_j ∈ (0, 1) and the learning rate η_j > 0 were scheduled as λ_j = (1 − ϵ) · (1 − 10³ η_j) with ϵ = 10⁻³ and

    η_j = 10⁻³ for 1 ≤ j ≤ 30;  10⁻³/(j − 30) for j > 30.

Algorithm D.1: Mini-Batch RUD for Policy Improvement
1  Initialize: θ̂i^u = v = 0 ∈ R^N; 0 < δ ≪ 1 a small constant;
2  j ← 1;
3  repeat
4      Calculate Σ_{k=1}^{L_grid} ∂J_imp(x_{grid,k}; θ̂i^u)/∂θ̂i^u at all grid points {x_{grid,k}}_{k=1}^{L_grid};
5      v ← λ_j v + η_j Σ_{k=1}^{L_grid} ∂J_imp(x_{grid,k}; θ̂i^u)/∂θ̂i^u;
6      θ̂i^u ← θ̂i^u + v;
7      j ← j + 1;
   until ‖v‖ < δ;
8  return θ̂i^u;
Explicit linear kernels for packing problems∗
Valentin Garnero†
Christophe Paul†
Dimitrios M. Thilikos†‡
Ignasi Sau†
arXiv:1610.06131v1 [] 19 Oct 2016
Abstract
During the last years, several algorithmic meta-theorems have appeared (Bodlaender
et al. [FOCS 2009], Fomin et al. [SODA 2010], Kim et al. [ICALP 2013]) guaranteeing the existence of linear kernels on sparse graphs for problems satisfying some
generic conditions. The drawback of such general results is that it is usually not
clear how to derive from them constructive kernels with reasonably low explicit constants. To fill this gap, we recently presented [STACS 2014] a framework to obtain
explicit linear kernels for some families of problems whose solutions can be certified
by a subset of vertices. In this article we enhance our framework to deal with packing problems, that is, problems whose solutions can be certified by collections of
subgraphs of the input graph satisfying certain properties. F-Packing is a typical
example: for a family F of connected graphs that we assume to contain at least one
planar graph, the task is to decide whether a graph G contains k vertex-disjoint subgraphs such that each of them contains a graph in F as a minor. We provide explicit
linear kernels on sparse graphs for the following two orthogonal generalizations of
F-Packing: for an integer ` > 1, one aims at finding either minor-models that are
pairwise at distance at least ` in G (`-F-Packing), or such that each vertex in G
belongs to at most ` minor-models (F-Packing with `-Membership). Finally,
we also provide linear kernels for the versions of these problems where one wants to
pack subgraphs instead of minors.
Keywords: Parameterized complexity; linear kernels; packing problems; dynamic programming; protrusion replacement; graph minors.
1
Introduction
Motivation. A fundamental notion in parameterized complexity (see [10] for a recent
textbook) is that of kernelization, which asks for the existence of polynomial-time preprocessing algorithms producing equivalent instances whose size depends exclusively on
the parameter k. Finding kernels of size polynomial or linear in k (called linear kernels)
∗
Emails: Valentin Garnero: [email protected], Christophe Paul: [email protected], Ignasi Sau:
[email protected], Dimitrios M. Thilikos: [email protected].
†
AlGCo project-team, CNRS, LIRMM, Université de Montpellier, Montpellier, France.
‡
Department of Mathematics, National and Kapodistrian University of Athens, Athens, Greece.
1
is one of the major goals of this area. A pioneering work in this direction was the linear
kernel of Alber et al. [2] for Dominating Set on planar graphs, generalized by Guo and
Niedermeier [24] to a family of problems on planar graphs. Several algorithmic metatheorems on kernelization have appeared in the last years, starting with the result of
Bodlaender et al. [5] on graphs of bounded genus. It was followed-up by similar results
on larger sparse graph classes, such as graphs excluding a minor [20] or a topological
minor [26].
The above results guarantee the existence of linear kernels on sparse graph classes
for problems satisfying some generic conditions, but it is hard to derive from them
constructive kernels with explicit constants. We recently made in [22] a significant step
toward a fully constructive meta-kernelization theory on sparse graphs with explicit
constants. In a nutshell, the main idea is to substitute the algorithmic power of CMSO
logic that was used in [5, 20, 26] with that of dynamic programming (DP for short) on
graphs of bounded decomposability (i.e., bounded treewidth). We refer the reader to
the introduction of [22] for more details. Our approach provides a DP framework able
to construct linear kernels for families of problems on sparse graphs whose solutions can
be certified by a subset of vertices of the input graph, such as r-Dominating Set or
Planar-F-Deletion.
Our contribution. In this article we make one more step in the direction of a fully constructive meta-kernelization theory on sparse graphs, by enhancing the existing framework [22] in order to deal with packing problems. These are problems whose solutions
can be certified by collections of subgraphs of the input graph satisfying certain properties. We call these problems packing-certifiable, as opposed to vertex-certifiable ones.
For instance, deciding whether a graph G contains at least k vertex-disjoint cycles is a
typical packing-certifiable problem. This problem, called Cycle Packing, is FPT as it
is minor-closed, but it is unlikely to admit polynomial kernels on general graphs [6].
As an illustrative example, for a family of connected graphs F containing at least one
planar graph, we provide a linear kernel on sparse graphs for the F-Packing problem1 :
decide whether a graph G contains at least k vertex-disjoint subgraphs such that each of
them contains a graph in F as a minor, parameterized by k. We provide linear kernels
as well for the following two orthogonal generalizations of F-Packing: for an integer
` > 1, one aims at finding either minor-models that are pairwise at distance at least
` in G (`-F-Packing), or such that each vertex in G belongs to at most ` minor-models (F-Packing with `-Membership). While only the existence of linear kernels
for F-Packing was known [5], to the best of our knowledge no kernels were known for
`-F-Packing and F-Packing with `-Membership, except for `-F-Packing when F
1
We would like to clarify here that in our original conference submission of [22] we claimed, among
other results, a linear kernel for F-Packing on sparse graphs. Unfortunately, while preparing the
camera-ready version, we realized that there was a bug in one of the proofs and we had to remove that
result from the paper. It turned out that for fixing that bug, several new ideas and a generalization of
the original framework seemed to be necessary; this was the starting point of the results presented in
the current article.
2
consists only of a triangle and the maximum degree is also considered as a parameter [3].
We would like to note that the kernels for F-Packing and for F-Packing with `-Membership apply to minor-free graphs, while those for `-F-Packing for ` > 2 apply
to the smaller class of apex-minor-free graphs.
We also provide linear kernels for the versions of the above problems where one wants
to pack subgraphs instead of minors (as one could expect, the kernels for subgraphs
are considerably simpler than those for minors). We call the respective problems `-FSubgraph-Packing and F-Subgraph-Packing with `-Membership. While the first
problem can be seen as a broad generalization of `-Scattered Set (see for instance [5,
22]), the second one was recently defined by Fernau et al. [16], motivated by the problem
of discovering overlapping communities (see also [32, 33] for related problems about
detecting overlapping communities): the parameter ` bounds the number of communities
that a member of a network can belong to. More precisely, the goal is to find in a graph
G at least k subgraphs isomorphic to a member of F such that every vertex in V (G)
belongs to at most ` subgraphs. This type of overlap was also studied by Fellows et
al. [15] in the context of graph editing. Fernau et al. [16] proved, in particular, that the
F-Subgraph-Packing with `-Membership problem is NP-hard for all values of ` > 1
when F = {F } and F is an arbitrary connected graph with at least three vertices, but
polynomial-time solvable for smaller graphs. Note that F-Subgraph-Packing with `-Membership generalizes the F-Subgraph-Packing problem, which consists in finding
in a graph G at least k vertex-disjoint subgraphs isomorphic to a member of F. The
smallest kernel for the F-Subgraph-Packing problem [29] has size O(k^{r−1}), where
F = {F } and F is an arbitrary graph on r vertices. A list of references of kernels for
particular cases of the family F can be found in [16]. Concerning the kernelization of
F-Subgraph-Packing with `-Membership, Fernau et al. [16] provided a kernel on
general graphs with O((r+1)^r k^r) vertices, where r is the maximum number of vertices of
a graph in F. In this article we improve this result on graphs excluding a fixed graph as
a minor, by providing a linear kernel for F-Subgraph-Packing with `-Membership
when F is any family of (not necessarily planar) connected graphs.
Our techniques: vertex-certifiable vs. packing-certifiable problems. It appears
that packing-certifiable problems are intrinsically more involved than vertex-certifiable
ones. This fact is well-known when speaking about FPT-algorithms on graphs of bounded
treewidth [11, 28], but we need to be more precise with what we mean by being “more
involved” in our setting of obtaining kernels via DP on a tree decomposition of the input
graph. Loosely speaking, the framework that we presented in [22] and that we need
to redefine and extend here, can be summarized as follows. First of all, we propose a
general definition of a problem encoding for the tables of DP when solving parameterized
problems on graphs of bounded treewidth. Under this setting, we provide three general
conditions guaranteeing that such an encoding can yield a so-called protrusion replacer,
which in short is a procedure that replaces large “protrusions” (i.e., subgraphs with
small treewidth and small boundary) with “equivalent” subgraphs of constant size. Let
us be more concrete on these three conditions that such an encoding E needs to satisfy
3
in order to obtain an explicit linear kernel for a parameterized problem Π.
The first natural condition is that on a graph G without boundary, the optimal size
of the objects satisfying the constraints imposed by E coincides with the optimal size of
solutions of Π in G; in that case we say that E is a Π-encoder. On the other hand, we
need that when performing DP using the encoding E, we can use tables such that the
maximum difference among all the values that need to be stored is bounded by a function
g of the treewidth; in that case we say that E is g-confined. Finally, the third condition
requires that E is “suitable” for performing DP, in the sense that the tables at a given
node of a tree decomposition can be computed using only the information stored in the
tables of its children (as it is the case of practically all natural DP algorithms); in that
case we say that E is DP-friendly. These two latter properties exhibit some fundamental
differences when dealing with vertex-certifiable or packing-certifiable problems.
Indeed, as discussed in more detail in Section 3, with an encoding E we associate
a function f E that corresponds, roughly speaking, to the maximum size of a partial
solution that satisfies the constraints defined by E. In order for an encoder to be g-confined for some function g(t) of the treewidth t, for some vertex-certifiable problems
such as r-Scattered Set (see [22]) we need to “force” the confinement artificially,
in the sense that we directly discard the entries in the tables whose associated values
differ by more than g(t) from the maximum (or minimum) ones. Fortunately, we can
prove that an encoder with this modified function is still DP-friendly. However, this
is not the case for packing-certifiable problems such as F-Packing. Intuitively, the
difference lies on the fact that in a packing-certifiable problem, a solution of size k can
contain arbitrarily many vertices (for instance, if one wants to find k disjoint cycles in
an n-vertex graph with girth Ω(log n)) and so it can as well contain arbitrarily many
vertices from any subgraph corresponding to a rooted subtree of a tree decomposition
of the input graph G. This possibility prevents us from being able to prove that an
encoder is DP-friendly while still being g-confined for some function g, as in order to
fill in the entries of the tables at a given node, one may need to retrieve information
from the tables of other nodes different from its children. To circumvent this problem,
we introduce another criterion to discard the entries in the tables of an encoder: we
recursively discard the entries of the tables whose associated partial solutions induce
partial solutions at some lower node of the rooted tree decomposition that need to be
discarded. That is, if an entry of the table needs to be discarded at some node of a tree
decomposition, we propagate this information to all the other nodes.
Organization of the paper. Some basic preliminaries can be found in Section 2,
including graph minors, parameterized problems, (rooted) tree decompositions, boundaried graphs, the canonical equivalence relation ≡Π,t for a problem Π and an integer
t, FII, protrusions, and protrusion decompositions. The reader not familiar with the
background used in previous work on this topic may see [5, 20, 22, 26]. In Section 3 we
introduce the basic definitions of our framework and present an explicit protrusion replacer for packing-certifiable problems. Since many definitions and proofs in this section
are quite similar to the ones we presented in [22], for better readability we moved the
proofs of the results marked with ‘[?]’ to Appendix A. Before moving to the details of
each particular problem, in Section 4 we summarize the main ingredients that we use in
our applications. The next sections are devoted to showing how to apply our methodology to various families of problems. More precisely, we start in Section 5 with the
linear kernel for Connected-Planar-F-Packing. This problem is illustrative, as it
contains most of the technical ingredients of our approach, and will be generalized later
in the two orthogonal directions mentioned above. Namely, in Section 6 we deal with the
variant in which the minor-models are pairwise at distance at least ℓ, and in Section 7
with the version in which each vertex can belong to at most ℓ minor-models. In Section 8
we adapt the machinery developed for packing minors to packing subgraphs, considering both variants of the problem. For the sake of completeness, each of the considered
problems will be redefined in the corresponding section. Finally, Section 9 concludes the
article.
2 Preliminaries
In our article graphs are undirected, simple, and without loops. We use standard graph-theoretic notation; see for instance [13]. We denote by dG (v, w) the distance in G between
two vertices v and w and by dG (W1 , W2 ) = min{dG (w1 , w2 ) : w1 ∈ W1 , w2 ∈ W2 } the
distance between two sets of vertices W1 and W2 of G. Given S ⊆ V (G), we denote by
N (S) the set of vertices in V (G) \ S having at least one neighbor in S.
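As a small illustration (ours, not from the paper), the set distance dG (W1, W2) can be computed with a single multi-source BFS started from W1; the following Python sketch assumes graphs given as adjacency dictionaries:

```python
from collections import deque

def set_distance(adj, W1, W2):
    """d_G(W1, W2): minimum distance between a vertex of W1 and a vertex
    of W2, computed by one multi-source BFS started from all of W1."""
    W2 = set(W2)
    dist = {v: 0 for v in W1}
    queue = deque(W1)
    while queue:
        v = queue.popleft()
        if v in W2:           # first vertex of W2 reached is a closest one
            return dist[v]
        for u in adj[v]:
            if u not in dist:
                dist[u] = dist[v] + 1
                queue.append(u)
    return float("inf")       # W2 is not reachable from W1

# Path graph 0-1-2-3-4
adj = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3]}
print(set_distance(adj, {0, 1}, {3, 4}))  # 2
```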
Definition 1. A parameterized graph problem Π is called packing-certifiable if there
exists a language LΠ (called certifying language for Π) defined on pairs (G, S), where G
is a graph and S is a collection of subgraphs of G, such that (G, k) is a Yes-instance
of Π if and only if there exists a collection S of subgraphs of G with |S| ≥ k such that
(G, S) ∈ LΠ .
In the above definition, for the sake of generality we do not require the subgraphs in
the collection S to be pairwise distinct. Also, note that the subclass of packing-certifiable
problems where each subgraph in S is restricted to consist of a single vertex corresponds
to the class of vertex-certifiable problems defined in [22].
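To make Definition 1 concrete, consider Vertex-Disjoint Cycle Packing (our example, not taken from the paper): a certifying language accepts (G, S) precisely when S is a collection of pairwise vertex-disjoint cycles of G. A minimal Python membership test, with graphs as adjacency dictionaries and each subgraph in S given as an ordered vertex list:

```python
def disjoint_cycles_certificate(adj, S):
    """Membership test for a certifying language of Vertex-Disjoint Cycle
    Packing: accept (G, S) iff S is a collection of pairwise vertex-disjoint
    cycles of G (each cycle given as an ordered vertex list)."""
    used = set()
    for cycle in S:
        if len(cycle) < 3 or len(set(cycle)) != len(cycle):
            return False      # too short, or repeats a vertex
        if used & set(cycle):
            return False      # overlaps a previously chosen cycle
        used |= set(cycle)
        for i, v in enumerate(cycle):
            if cycle[(i + 1) % len(cycle)] not in adj[v]:
                return False  # consecutive vertices must be adjacent in G
    return True

# Two disjoint triangles
adj = {0: {1, 2}, 1: {0, 2}, 2: {0, 1}, 3: {4, 5}, 4: {3, 5}, 5: {3, 4}}
print(disjoint_cycles_certificate(adj, [[0, 1, 2], [3, 4, 5]]))  # True
```

With this language, (G, k) is a Yes-instance exactly when some accepted collection S has |S| ≥ k.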
For a class of graphs G, we denote by ΠG the problem Π where the instances are
restricted to contain graphs belonging to G. With a packing-certifiable problem we can
associate in a natural way an optimization function as follows.
Definition 2. Given a packing-certifiable parameterized problem Π, the maximization
function f Π : Γ∗ → N ∪ {−∞} is defined as

    f Π (G) = max{|S| : (G, S) ∈ LΠ }, if there exists such an S, and
    f Π (G) = −∞, otherwise.    (1)
Definition 3. A boundaried graph is a graph G with a set B ⊆ V (G) of distinguished
vertices and an injective labeling λG : B → N. The set B is called the boundary of G
5
and it is denoted by ∂(G). The set of labels is denoted by Λ(G) = {λG (v) : v ∈ ∂(G)}.
We say that a boundaried graph is a t-boundaried graph if Λ(G) ⊆ {1, . . . , t}.
We denote by Bt the set of all t-boundaried graphs.
Definition 4. Let G1 and G2 be two boundaried graphs. We denote by G1 ⊕ G2 the graph
obtained by taking the disjoint union of G1 and G2 and identifying vertices with
the same label in the boundaries of G1 and G2 . In G1 ⊕ G2 there is an edge between two
labeled vertices if there is an edge between them in G1 or in G2 .
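The gluing operation ⊕ is straightforward to implement once labels are used as the identification key. A minimal Python sketch (ours; representing a boundaried graph as a pair of an edge set and a labeling dictionary is an assumption made for illustration):

```python
def glue(G1, G2):
    """G1 (+) G2: disjoint union of two boundaried graphs in which boundary
    vertices carrying the same label are identified.  A boundaried graph is
    given as (edges, labeling), where labeling maps boundary vertices to labels."""
    def canon(G, tag):
        edges, lab = G
        # a boundary vertex becomes ('b', label); others are kept apart by tag
        f = lambda v: ('b', lab[v]) if v in lab else (tag, v)
        return {(f(u), f(v)) for u, v in edges} | {(f(v), f(u)) for u, v in edges}
    return canon(G1, 'L') | canon(G2, 'R')

# G1: edge a-b with b labeled 1; G2: edge c-d with c labeled 1
G1 = ({('a', 'b')}, {'b': 1})
G2 = ({('c', 'd')}, {'c': 1})
G = glue(G1, G2)
# the two label-1 vertices are identified into the single vertex ('b', 1)
print((('L', 'a'), ('b', 1)) in G and (('b', 1), ('R', 'd')) in G)  # True
```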
Given G = G1 ⊕ G2 and G′2 , we say that G′ = G1 ⊕ G′2 is the graph obtained from G
by replacing G2 with G′2 . The following notion was introduced by Bodlaender et al. [5].
Definition 5. Let Π be a parameterized problem and let t ∈ N. Given G1 , G2 ∈ Bt ,
we say that G1 ≡Π,t G2 if Λ(G1 ) = Λ(G2 ) and there exists a transposition constant
∆Π,t (G1 , G2 ) ∈ Z such that for every H ∈ Bt and every k ∈ Z, it holds that (G1 ⊕ H, k) ∈
Π if and only if (G2 ⊕ H, k + ∆Π,t (G1 , G2 )) ∈ Π.
Definition 6. A tree decomposition of a graph G is a pair (T, X = {Bx : x ∈ V (T )}),
where T is a tree, such that ∪x∈V (T ) Bx = V (G), for every edge {u, v} ∈ E(G) there
exists x ∈ V (T ) such that u, v ∈ Bx , and for every vertex u ∈ V (G) the set of nodes
{x ∈ V (T ) : u ∈ Bx } induces a subtree of T . The vertices of T are referred to as nodes
and the sets Bx are called bags.
A rooted tree decomposition (T, X , r) is a tree decomposition with a distinguished
node r selected as the root. A nice tree decomposition (T, X , r) (see [27]) is a rooted
tree decomposition where T is binary and for each node x with two children y, z it holds
Bx = By = Bz and for each node x with one child y it holds Bx = By ∪ {u} or
Bx = By \ {u} for some u ∈ V (G). The width of a tree decomposition is the size of a
largest bag minus one. The treewidth of a graph, denoted by tw(G), is the smallest width
of a tree decomposition of G. A treewidth-modulator of a graph G is a set X ⊆ V (G)
such that tw(G − X) ≤ t, for some fixed constant t.
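The three conditions of Definition 6 are easy to verify programmatically. The following Python sketch (our illustration, which assumes the given tree is itself valid) checks them and returns the width of the decomposition:

```python
def is_tree_decomposition(n, edges, tree_edges, bags):
    """Check the three conditions of Definition 6 for a graph on vertices
    0..n-1; bags maps each tree node to a set of graph vertices.
    Returns (valid, width)."""
    # 1. every vertex of G appears in some bag
    if set().union(*bags.values()) != set(range(n)):
        return False, None
    # 2. both endpoints of every edge of G share some bag
    for u, v in edges:
        if not any({u, v} <= B for B in bags.values()):
            return False, None
    # 3. for each vertex, the nodes whose bag contains it induce a subtree
    for v in range(n):
        nodes = {x for x, B in bags.items() if v in B}
        start = next(iter(nodes))
        seen, stack = {start}, [start]
        while stack:
            x = stack.pop()
            for a, b in tree_edges:
                y = b if a == x else a if b == x else None
                if y in nodes and y not in seen:
                    seen.add(y)
                    stack.append(y)
        if seen != nodes:
            return False, None
    return True, max(len(B) for B in bags.values()) - 1

# Path 0-1-2-3 with bags {0,1},{1,2},{2,3} on a path of three tree nodes
bags = {0: {0, 1}, 1: {1, 2}, 2: {2, 3}}
print(is_tree_decomposition(4, [(0, 1), (1, 2), (2, 3)], [(0, 1), (1, 2)], bags))  # (True, 1)
```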
Given a bag B (resp. a node x) of a rooted tree decomposition T , we denote by
GB (resp. Gx ), the subgraph induced by the vertices appearing in the subtree of T
rooted at the node corresponding to B (resp. the node x). We denote by Ft the set
of all t-boundaried graphs that have a rooted tree decomposition of width t − 1 with
all boundary vertices contained in the root-bag. Obviously Ft ⊆ Bt . (Note that graphs
can be viewed as 0-boundaried graphs, hence we use the same alphabet Γ for describing
graphs and boundaried graphs.)
Definition 7. Let t, α be positive integers. A t-protrusion Y of a graph G is an induced
subgraph of G with |∂(Y )| ≤ t and tw(Y ) ≤ t − 1, where ∂(Y ) is the set of vertices of
Y having neighbors in V (G) \ V (Y ). An (α, t)-protrusion decomposition of a graph G
is a partition P = Y0 ⊎ Y1 ⊎ · · · ⊎ Yℓ of V (G) such that for every 1 ≤ i ≤ ℓ, N (Yi ) ⊆ Y0 ,
max{ℓ, |Y0 |} ≤ α, and for every 1 ≤ i ≤ ℓ, Yi ∪ N (Yi ) is a t-protrusion of G. When (G, k)
is the input of a parameterized problem with parameter k, we say that an (α, t)-protrusion
decomposition of G is linear whenever α = O(k).
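As an illustration (ours, not from the paper), the non-treewidth conditions of Definition 7 can be checked mechanically. The following Python sketch verifies the partition and neighborhood conditions of an (α, t)-protrusion decomposition, leaving the treewidth bound aside:

```python
def check_partition_conditions(adj, Y0, parts, alpha):
    """Check the non-treewidth conditions of an (alpha, t)-protrusion
    decomposition: V(G) = Y0 + Y1 + ... + Yl is a partition, N(Yi) is
    contained in Y0 for every i >= 1, and max(l, |Y0|) <= alpha.  (The
    bound tw(Yi + N(Yi)) <= t - 1 would need a separate treewidth test.)"""
    blocks = [set(Y0)] + [set(Y) for Y in parts]
    if sum(len(B) for B in blocks) != len(adj) or set().union(*blocks) != set(adj):
        return False          # not a partition of V(G)
    for Y in blocks[1:]:
        if {u for v in Y for u in adj[v]} - Y - blocks[0]:
            return False      # some Yi has a neighbor outside Y0
    return max(len(parts), len(Y0)) <= alpha

# Star with center 0: Y0 = {0}, the leaves are singleton protrusions
adj = {0: {1, 2, 3}, 1: {0}, 2: {0}, 3: {0}}
print(check_partition_conditions(adj, [0], [[1], [2], [3]], 3))  # True
```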
We say that a rooted tree decomposition of a protrusion G (resp. a boundaried graph
G) is boundaried if the boundary ∂(G) is contained in the root bag. In the following
we always consider boundaried nice tree decompositions of width t − 1, which can be
computed in polynomial time for fixed t [4, 27].
3 A framework to replace protrusions for packing problems
In this section we restate and in many cases modify the definitions given in [22] in
order to deal with packing-certifiable problems; we will point out the differences. As
announced in the introduction, missing proofs can be found in Appendix A.
Encoders. In the following we extend the definition of an encoder given in [22, Definition
3.2] so that it is able to deal with packing-certifiable problems. The main difference is
that now the function f E is incorporated in the definition of an encoder, since as discussed
in the introduction we need to consider an additional scenario where the entries of the
table are discarded (technically, this is modeled by setting those entries to “−∞”) and
for this we will have to deal with the partial solutions particular to each problem. In the
applications of the next sections, we will call such functions that propagate the entries
to be discarded relevant. We also need to add a condition about the computability of the
function f E , so that encoders can indeed be used for performing dynamic programming.
Definition 8. An encoder is a triple E = (C E , LE , f E ) where
• C E is a function in 2^N → 2^{Υ∗} that maps a finite subset of integers I ⊆ N to a set C E (I)
of strings over some alphabet Υ. Each string R ∈ C E (I) is called an encoding. The
size of the encoder is the function sE : N → N defined as sE (t) := max{|C E (I)| :
I ⊆ {1, . . . , t}}, where |C E (I)| denotes the number of encodings in C E (I);
• LE is a computable language which accepts triples (G, S, R) ∈ Γ∗ × Σ∗ × Υ∗ , where G
is a boundaried graph, S is a collection of subgraphs of G, and R ∈ C E (Λ(G)) is an
encoding. If (G, S, R) ∈ LE , we say that S satisfies the encoding R in G; and
• f E is a computable function in Γ∗ × Υ∗ → N ∪ {−∞} that maps a boundaried graph
G and an encoding R ∈ C E (Λ(G)) to an integer or to −∞.
As it will become clear with the applications described in the next sections, an
encoder is a formalization of the tables used by an algorithm that solves a packing-certifiable problem Π by doing DP over a tree decomposition of the input graph. The
encodings in C E (I) correspond to the entries of the DP-tables of graphs with boundary
labeled by the set of integers I. The language LE identifies certificates which are partial
solutions satisfying the boundary conditions imposed by an encoding.
The following definition differs from [22, Definition 3.3] as now the function f E is
incorporated in the definition of an encoder E.
Definition 9. Let Π be a packing-certifiable problem. An encoder E is a Π-encoder
if C E (∅) is a singleton, denoted by {R∅ }, such that for any 0-boundaried graph G,
f E (G, R∅ ) = f Π (G).
The following definition allows us to control the number of possible distinct values
assigned to encodings and plays a similar role to FII or monotonicity in previous work [5,
20, 26].
Definition 10. An encoder E is g-confined if there exists a function g : N → N such that
for any t-boundaried graph G with Λ(G) = I it holds that either {R ∈ C E (I) : f E (G, R) ≠
−∞} = ∅ or max{f E (G, R) : f E (G, R) ≠ −∞} − min{f E (G, R) : f E (G, R) ≠ −∞} ≤ g(t).
For an encoder E and a function g, in the next sections we will denote the relevant
functions discussed before by f¯gE to distinguish them from other functions that we will
need.
Equivalence relations and representatives. We now define some equivalence relations on t-boundaried graphs.
Definition 11. Let E be an encoder, let G1 , G2 ∈ Bt , and let G be a class of graphs.
1. G1 ∼∗E,t G2 if Λ(G1 ) = Λ(G2 ) =: I and there exists an integer ∆E,t (G1 , G2 ) (depending on G1 , G2 ) such that for any encoding R ∈ C E (I) we have f E (G1 , R) =
f E (G2 , R) − ∆E,t (G1 , G2 ).
2. G1 ∼G,t G2 if either G1 ∉ G and G2 ∉ G, or G1 , G2 ∈ G and, for any H ∈ Bt ,
H ⊕ G1 ∈ G if and only if H ⊕ G2 ∈ G.
3. G1 ∼∗E,G,t G2 if G1 ∼∗E,t G2 and G1 ∼G,t G2 .
4. If we restrict the graphs G1 , G2 to be in Ft , then the corresponding equivalence
relations, which are a restriction of ∼∗E,t and ∼∗E,G,t , are denoted by ∼E,t and ∼E,G,t ,
respectively.
If for all encodings R, f E (G1 , R) = f E (G2 , R) = −∞, then we set ∆E,t (G1 , G2 ) := 0
(note that any fixed integer would satisfy the first condition in Definition 11). Following
the notation of Bodlaender et al. [5], the function ∆E,t is called the transposition function
for the equivalence relation ∼∗E,t . Note that we can use the restriction of ∆E,t to couples
of graphs in Ft to define the equivalence relation ∼E,t .
In the following we only consider classes of graphs whose membership can be expressed in Monadic Second Order (MSO) logic. Therefore, we know that the number of
equivalence classes of ∼G,t is finite [7], say at most rG,t , and we can state the following
lemma.
Lemma 1. [?] Let G be a class of graphs whose membership is expressible in MSO logic.
For any encoder E, any function g : N → N and any integer t ∈ N, if E is g-confined
then the equivalence relation ∼∗E,G,t has at most r(E, g, t, G) := (g(t) + 2)^{sE (t)} · 2^t · rG,t
equivalence classes. In particular, the equivalence relation ∼E,G,t has at most r(E, g, t, G)
equivalence classes as well.
Definition 12. An equivalence relation ∼∗E,G,t is DP-friendly if, for any graph G ∈ Bt
with ∂(G) = A and any two boundaried graphs H and GB with G = H ⊕ GB such
that GB has boundary B ⊆ V (G) with |B| ≤ t and A ∩ V (GB ) ⊆ B, the following
holds. Let G0 ∈ Bt with ∂(G0 ) = A be the graph obtained from G by replacing the
subgraph GB with some G0B ∈ Bt such that GB ∼∗E,G,t G0B . Then G ∼∗E,G,t G0 and
∆E,t (G, G0 ) = ∆E,t (GB , G0B ).
The following useful fact states that for proving that ∼∗E,G,t is DP-friendly, it suffices
to prove that G ∼∗E,t G0 instead of G ∼∗E,G,t G0 .
Fact 1. [?] Let G ∈ Bt with a separator B, let GB ∼E,G,t G0B , and let G0 ∈ Bt as in
Definition 12. If G ∼∗E,t G0 , then G ∼∗E,G,t G0 .
In order to perform a protrusion replacement that does not modify the behavior of
the graph with respect to a problem Π, we need the relation ∼∗E,t to be a refinement of
the canonical equivalence relation ≡Π,t .
Lemma 2. [?] Let Π be a packing-certifiable parameterized problem defined on a graph
class G, let E be an encoder, let g : N → N, and let G1 , G2 ∈ Bt . If E is a g-confined Π-encoder and ∼∗E,G,t is DP-friendly, then the fact that G1 ∼∗E,G,t G2 implies the following:
• G1 ≡Π,t G2 ; and
• ∆Π,t (G1 , G2 ) = ∆E,t (G1 , G2 ).
In particular, this holds when G1 , G2 ∈ Ft and G1 ∼E,G,t G2 .
Definition 13. Given an encoder E and an equivalence class C ⊆ Ft of ∼E,G,t , a graph
G ∈ C is a progressive representative of C if for any G′ ∈ C, it holds that ∆E,t (G, G′ ) ≤ 0.
Lemma 3. [?] Let G be a class of graphs whose membership is expressible in MSO logic.
For any encoder E, any function g : N → N, and any t ∈ N, if E is g-confined and ∼∗E,G,t
is DP-friendly, then any equivalence class of ∼E,G,t has a progressive representative of
size at most b(E, g, t, G) := 2^{r(E,g,t,G)+1} · t, where r(E, g, t, G) is the function defined in
Lemma 1.
An explicit protrusion replacement. The next lemma specifies conditions under
which, given an upper bound on the size of the representatives, a generic DP algorithm
can provide in linear time an explicit protrusion replacer.
Lemma 4. [?] Let G be a class of graphs, let E be an encoder, let g : N → N, and let
t ∈ N such that E is g-confined and ∼∗E,G,t is DP-friendly. Assume we are given an upper
bound b ≥ t on the size of a smallest progressive representative of any class of ∼E,G,t .
Given a t-protrusion Y inside some graph, we can compute a t-protrusion Y ′ of size at
most b such that Y ∼E,G,t Y ′ and ∆E,t (Y ′ , Y ) ≤ 0. Furthermore, such a protrusion can
be computed in time O(|Y |), where the hidden constant depends only on E, g, b, G, and t.
Let us now piece everything together to state the main result of [22] that we need to
reprove here for packing-certifiable problems. For issues of constructibility, we restrict
G to be the class of H-(topological)-minor-free graphs.
Theorem 1. [?] Let G be the class of graphs excluding some fixed graph H as a (topological) minor and let Π be a parameterized packing-certifiable problem defined on G. Let
E be an encoder, let g : N → N, and let t ∈ N such that E is a g-confined Π-encoder and
∼∗E,G,t is DP-friendly. Given an instance (G, k) of Π and a t-protrusion Y in G, we can
compute in time O(|Y |) an equivalent instance ((G − (Y − ∂(Y ))) ⊕ Y ′ , k ′ ), where Y ′ is
a t-protrusion with |Y ′ | ≤ b(E, g, t, G) and k ′ ≤ k, and where b(E, g, t, G) is the function
defined in Lemma 3.
Such a protrusion replacer can be used to obtain a kernel when, for instance, one is
able to provide a protrusion decomposition of the instance.
Corollary 1. [?] Let G be the class of graphs excluding some fixed graph H as a (topological) minor and let Π be a parameterized packing-certifiable problem defined on G. Let
E be an encoder, let g : N → N, and let t ∈ N such that E is a g-confined Π-encoder and
∼∗E,G,t is DP-friendly. Given an instance (G, k) of Π and an (αk, t)-protrusion decomposition of G, we can construct a linear kernel for Π of size at most (1 + b(E, g, t, G)) · α · k,
where b(E, g, t, G) is the function defined in Lemma 3.
4 Main ideas for the applications
In this section we sketch the main ingredients that we use in our applications for obtaining
the linear kernels, before going through the details for each problem in the next sections.
General methodology. The next theorem will be fundamental in the applications.
Theorem 2 (Kim et al. [26]). Let c, t be two positive integers, let H be an h-vertex graph,
let G be an n-vertex H-topological-minor-free graph, and let k be a positive integer. If
we are given a set X ⊆ V (G) with |X| ≤ c · k such that tw(G − X) ≤ t, then we can
compute in time O(n) an ((αH · t · c) · k, 2t + h)-protrusion decomposition of G, where
αH is a constant depending only on H, which is upper-bounded by 40h^2 · 2^{5h log h}.
A typical application of our framework for obtaining an explicit linear kernel for
a packing-certifiable problem Π on a graph class G is as follows. The first task is to
define an encoder E and to prove that for some function g : N → N, E is a g-confined
Π-encoder and ∼∗E,G,t is DP-friendly. The next ingredient is a polynomial-time algorithm
that, given an instance (G, k) of Π, either reports that (G, k) is a Yes-instance (or a
No-instance, depending on the problem), or finds a treewidth-modulator of G with size
O(k). The way to obtain this algorithm depends on each particular problem and in our
applications we will use a number of existing results in the literature in order to find it.
Once we have such a linear treewidth-modulator, we can use Theorem 2 to find a linear
protrusion decomposition of G. Finally, it just remains to apply Corollary 1 to obtain
an explicit linear kernel for Π on G; see Figure 1 for a schematic illustration.
Figure 1: Illustration of a typical application of the framework presented in this article.
Let us provide here some generic intuition about the additional criterion mentioned
in the introduction to discard the entries in the tables of an encoder. For an encoder
E = (C E , LE , f E ) and a function g : N → N, we need some notation in order to define the
relevant function f¯gE , which will be an appropriate modification of f E . Let G ∈ Bt with
boundary A and let RA be an encoding. We (recursively) define RA to be irrelevant for
f¯gE if there exists a certificate S such that (G, S, RA ) ∈ LE and |S| = f E (G, RA ) and
a separator B ⊆ V (G) with |B| ≤ t and B ≠ A, such that S induces an encoding RB
in the graph GB ∈ Bt with f¯gE (GB , RB ) = −∞. Here, by using the term “induces” we
implicitly assume that S defines an encoding RB in the graph GB ; this will be the case
in all the encoders used in our applications.
To define f¯gE , we will always use the following natural function f E , which for each
problem Π is meant to correspond to an extension to boundaried graphs of the maximization function f Π of Definition 2. For a graph G and an encoding R, this natural
function is defined as f E (G, R) = max{k : ∃S, |S| ≥ k, (G, S, R) ∈ LE }. Then we define
the function f¯gE as follows:

    f¯gE (G, RA ) = −∞, if f E (G, RA ) + g(t) < max{f E (G, R) : R ∈ C E (Λ(G))}
                        or if RA is irrelevant for f¯gE , and
    f¯gE (G, RA ) = f E (G, RA ), otherwise.
That is, we will use the modified encoder (C E , LE , f¯gE ). We need to guarantee that the
above function f¯gE is computable, as required in Definition 8. (The fact that the values
of the function f¯gE can be calculated is important, in particular, in the proof of Lemma 4,
since we need to be able to compute equivalence classes of the equivalence relation
∼E,G,t .) Indeed, from the definition it follows that an encoding RA defined at a node x
of a given tree decomposition is
irrelevant if and only if RA can be obtained by combining encodings corresponding to
the children of x, such that at least one of them is irrelevant. This latter property can be
easily computed recursively on a tree decomposition, by performing standard dynamic
programming. We will omit this computability issue in the applications, as the same
argument sketched here applies to all of them.
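The confinement-based criterion for discarding entries can be pictured as a single table-processing step. The following Python fragment (our illustration, with a hypothetical table mapping encodings to values) sets to −∞ every entry lying more than g(t) below the table maximum:

```python
def confine(table, g_t):
    """Confinement step keeping an encoder g-confined: every entry whose
    value lies more than g(t) below the table maximum is discarded, i.e.
    set to -infinity, mirroring the artificial confinement described for
    vertex-certifiable problems."""
    finite = [v for v in table.values() if v != float("-inf")]
    if not finite:
        return dict(table)
    best = max(finite)
    return {R: (v if v != float("-inf") and best - v <= g_t else float("-inf"))
            for R, v in table.items()}

table = {"R1": 10, "R2": 7, "R3": 2, "R4": float("-inf")}
print(confine(table, 4))  # {'R1': 10, 'R2': 7, 'R3': -inf, 'R4': -inf}
```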
In order to obtain the linear treewidth-modulators mentioned before, we will use
several results from [5, 19, 20], which in turn use the following two propositions. For
an integer r > 2, let Γr be the graph obtained from the (r × r)-grid by triangulating
internal faces such that all internal vertices become of degree 6, all non-corner external
vertices are of degree 4, and one corner of degree 2 is made adjacent to all vertices of
the external face (the corners are the vertices that in the underlying grid have degree
2). As an example, the graph Γ6 is shown in Figure 2.
Figure 2: The graph Γ6 .
Proposition 1 (Demaine and Hajiaghayi [12]). There is a function fm : N → N such
that for every h-vertex graph H and every positive integer r, every H-minor-free graph
with treewidth at least fm (h) · r, contains the (r × r)-grid as a minor.
Proposition 2 (Fomin et al. [17]). There is a function fc : N → N such that for
every h-vertex apex graph H and every positive integer r, every H-minor-free graph with
treewidth at least fc (h) · r, contains the graph Γr as a contraction.
The current best upper bound [25] for the function fm is fm (h) = 2^{O(h^2 log h)} and,
up to date, there is no explicit bound for the function fc . We would like to note that
this non-existence of explicit bounds for fc is an issue that concerns the graph class
of H-minor-free graphs and it is perfectly compatible with our objective of providing
explicit constants for particular problems defined on that graph class, which will depend
on the function fc .
Let us now provide a sketch of the main basic ingredients used in each of the applications.
Packing minors. Let F be a fixed finite set of graphs. In the F-Packing problem,
we are given a graph G and an integer parameter k and the question is whether G has
k vertex-disjoint subgraphs G1 , . . . , Gk , each containing some graph in F as a minor.
When all the graphs in F are connected and F contains at least one planar graph, we call
the problem Connected-Planar-F-Packing. The encoder uses the notion of rooted
packing introduced by Adler et al. [1], which we also used in [22] for Connected-Planar-F-Deletion. To obtain the treewidth-modulator, we use the Erdős-Pósa
property for graph minors [8, 14, 30]. More precisely, we use that on minor-free graphs,
as proved by Fomin et al. [21], if (G, k) is a No-instance of Connected-Planar-F-Packing, then (G, k ′ ) is a Yes-instance of Connected-Planar-F-Deletion for
k ′ = O(k). Finally, we use a result of Fomin et al. [20] that provides a polynomial-time
algorithm to find treewidth-modulators for Yes-instances of Connected-Planar-F-Deletion. The obtained constants involve, in particular, the currently best known
constant-factor approximation of treewidth on minor-free graphs.
Packing scattered minors. Let F be a fixed finite set of graphs and let ℓ be a positive
integer. In the ℓ-F-Packing problem, we are given a graph G and an integer parameter
k and the question is whether G has k subgraphs G1 , . . . , Gk pairwise at distance at
least ℓ, each containing some graph from F as a minor. The encoder for ℓ-F-Packing
is a combination of the encoder for F-Packing and the one for ℓ-Scattered Set
that we used in [22]. For obtaining the treewidth-modulator, unfortunately we cannot
proceed as for packing minors, as up to date no linear Erdős-Pósa property for packing
scattered planar minors is known; the best bound we are aware of is O(k√k), which is
not enough to obtain a linear kernel. To circumvent this problem, we use the following
trick: we (artificially) formulate ℓ-F-Packing as a vertex-certifiable problem and prove
that it fits the conditions required by the framework of Fomin et al. [20] to produce a
treewidth-modulator. (We would like to stress that this formulation of the problem as
a vertex-certifiable one is not enough to apply the results of [22], as one has to further
verify that the necessary properties of the encoder are satisfied, and this does not seem to be an
easy task at all.) Once we have it, we consider the original formulation of the problem
to define its encoder. As a drawback of resorting to the general results of [20], and due
to the fact that ℓ-F-Packing is contraction-bidimensional, we provide linear kernels for
the problem on the (smaller) class of apex-minor-free graphs.
Packing overlapping minors. Let F be a fixed finite set of graphs and let ℓ be
a positive integer. In the F-Packing with ℓ-Membership problem, we are given a
graph G and an integer parameter k and the question is whether G has k subgraphs
G1 , . . . , Gk such that each subgraph contains some graph from F as a minor, and each
vertex of G belongs to at most ℓ subgraphs. The encoder is an enhanced version of the
one for packing minors, in which we allow a vertex to belong simultaneously to several
minor-models. To obtain the treewidth-modulator, the situation is simpler than above,
thanks to the fact that a packing of models is in particular a packing of models with
ℓ-membership. This allows us to use the linear Erdős-Pósa property that we described
for packing minors and therefore to construct linear kernels on minor-free graphs.
Packing scattered and overlapping subgraphs. The definitions of the corresponding problems are similar to the ones above, just by replacing the minor by the subgraph
relation. The encoders are simplified versions of those that we defined for packing
scattered and overlapping minors, respectively. The idea for obtaining the treewidth-modulator is to apply a simple reduction rule that removes all vertices not belonging
to any of the copies of the subgraphs we are looking for. It can be easily proved that
if a reduced graph is a No-instance of the problem, then it is a Yes-instance of
ℓ′-Dominating Set, where ℓ′ is a function of the integer ℓ corresponding to the problem
and the largest diameter of a subgraph in the given family. We are now in a position to
use the machinery of [20] for ℓ′-Dominating Set and find a linear treewidth-modulator.
5 A linear kernel for Connected-Planar-F-Packing
Let F be a finite set of graphs. We define the F-Packing problem as follows.
F-Packing
Instance: A graph G and a non-negative integer k.
Parameter: The integer k.
Question: Does G have k vertex-disjoint subgraphs G1 , . . . , Gk , each containing some graph in F as a minor?
In order to build a protrusion decomposition for instances of the above problem,
we use a version of the Erdős-Pósa property (see Definition 16 and Theorem 3) that
establishes a linear relation between No-instances of F-Packing and Yes-instances of
F-Deletion, and then we apply tools of Bidimensionality theory on F-Deletion (see
Corollary 2). Hence, we also need to define the F-Deletion problem.
F-Deletion
Instance: A graph G and a non-negative integer k.
Parameter: The integer k.
Question: Does G have a set S ⊆ V (G) such that |S| ≤ k and G − S is H-minor-free for every H ∈ F?
When all the graphs in F are connected, the corresponding problems are called
Connected-F-Packing and Connected-F-Deletion, and when F contains at least
one planar graph, we call them Planar-F-Packing and Planar-F-Deletion, respectively. When both conditions are satisfied, the problems are called Connected-Planar-F-Packing and Connected-Planar-F-Deletion (the parameterized versions of these problems are respectively denoted by cFP, cFD, pFP, pFD, cpFP, and
cpFD).
In this section we present a linear kernel for Connected-Planar-F-Packing on
the family of graphs excluding a fixed graph H as a minor.
We need to define what kind of structure a certificate for F-Packing is. For an
arbitrary graph, a solution will consist of a packing of models as defined below. We also
recall the definition of model.
Definition 14. A model of a graph F in a graph G is a mapping Φ that assigns to
every vertex v ∈ V (F ) a non-empty connected subgraph Φ(v) of G, and to every edge
e ∈ E(F ) an edge Φ(e) ∈ E(G), such that:
• the graphs Φ(v) for v ∈ V (F ) are mutually vertex-disjoint and the edges Φ(e) for
e ∈ E(F ) are pairwise distinct;
• for {u, v} ∈ E(F ), Φ({u, v}) has one endpoint in V (Φ(u)) and the other in
V (Φ(v)).
We denote by Φ(F ) the subgraph of G obtained by the (disjoint) union of the subgraphs
Φ(v) for v ∈ V (F ) plus the edges Φ(e) for e ∈ E(F ).
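The conditions of Definition 14 can be checked directly when each branch set Φ(v) is given as a vertex set. The following Python sketch (our illustration) verifies disjointness, connectivity, and the existence of an edge of G realizing each edge of F; in a simple graph these realizing edges are automatically pairwise distinct:

```python
def is_minor_model(adjG, F_edges, phi):
    """Check that phi (mapping each vertex of F to its branch set, a set of
    vertices of G) yields a model of F in G: branch sets are non-empty,
    pairwise disjoint and connected in G, and every edge of F is realized
    by some edge of G between the two corresponding branch sets."""
    sets = list(phi.values())
    if any(not S for S in sets):
        return False
    if len(set().union(*sets)) != sum(len(S) for S in sets):
        return False          # branch sets are not pairwise disjoint
    for S in sets:            # each induced subgraph G[S] must be connected
        start = next(iter(S))
        seen, stack = {start}, [start]
        while stack:
            v = stack.pop()
            for u in adjG[v]:
                if u in S and u not in seen:
                    seen.add(u)
                    stack.append(u)
        if seen != set(S):
            return False
    return all(any(w in adjG[u] for u in phi[a] for w in phi[b])
               for a, b in F_edges)

# A triangle as a minor of the 5-cycle 0-1-2-3-4-0: contract {0,1} and {2,3}
adjG = {0: {1, 4}, 1: {0, 2}, 2: {1, 3}, 3: {2, 4}, 4: {3, 0}}
phi = {0: {0, 1}, 1: {2, 3}, 2: {4}}
print(is_minor_model(adjG, [(0, 1), (1, 2), (0, 2)], phi))  # True
```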
Definition 15. Given a set F of minors and a graph G, a packing of models S is a
set of vertex-disjoint models. That is, the graphs Φ(F ) for Φ ∈ S, F ∈ F are pairwise
vertex-disjoint.
5.1
A protrusion decomposition for an instance of F-Packing
In order to find a linear protrusion decomposition, we need some preliminaries.
Definition 16. A class of graphs F satisfies the Erdős-Pósa property [14] if there exists
a function f such that, for every integer k and every graph G, either G contains k vertex-disjoint subgraphs each isomorphic to a graph in F, or there is a set S ⊆ V (G) of at
most f (k) vertices such that G − S has no subgraph in F.
Given a connected graph F , let M(F ) be the class of graphs that can be contracted
to F . Robertson and Seymour [30] proved that M(F ) satisfies the Erdős-Pósa property
if and only if F is planar. A significant improvement on the function f (k) has been
recently provided by Chekuri and Chuzhoy [8]. When G belongs to a proper minor-closed family, Fomin et al. [21] proved that f can be taken to be linear for any planar
graph F . It is not difficult to see that these results also hold if instead of a connected
planar graph F , we consider a finite family F of connected graphs containing at least
one planar graph. This discussion can be summarized as follows, with a precise upper
bound on the desired linear constant.
Theorem 3 (Fomin et al. [21]). Let F be a finite family of connected graphs containing
at least one planar graph on r vertices, let H be an h-vertex graph, and let G be the class
of H-minor-free graphs. There exists a constant c such that if (G, k) ∉ cpFPG , then
(G, c · r · 2^{15h+8h log h} · k) ∈ cpFDG .
The next theorem provides a way to find a treewidth-modulator for an instance of
a problem verifying the so-called bidimensionality and separability properties restricted
to the class of (apex)-minor-free graphs. Loosely speaking, the algorithm consists in
building a tree decomposition of the instance, then finding a bag that separates the
instance in such a way that the solution is balanced, and finally finding recursively other
bags in the two new tree decompositions. In order to make the algorithm constructive,
we need to build a tree decomposition of the input graph whose width differs from the
optimal one by a constant factor. To this aim, we use a (polynomial) approximation
algorithm of treewidth on minor-free graphs, which is well-known to exist. Let us denote
by τH this approximation ratio. To the best of our knowledge there is no explicit
upper bound on this ratio, but one can be derived from the proofs of Demaine and
Hajiaghayi [12]. We note that any improvement on this constant will directly translate
to the size of our kernels. We also need to compute an initial solution of the problem
under consideration. Fortunately, for all our applications, there is an EPTAS on minor-free graphs [19]. By choosing the approximation ratio of the solution to be 2, we can
announce the following theorem adapted from Fomin et al. [20].
Theorem 4 (Fomin et al. [20]). For any real ε > 0 and any minor-bidimensional (resp. contraction-bidimensional) linear-separable problem Π on the class G of graphs that exclude a minor H (resp. an apex-minor H), there exists an integer t > 0 such that any graph G ∈ G has a treewidth-t-modulator of size at most ε · f_Π(G).
The impact of the tree decomposition approximation is hidden in the value of t,
and the impact of the solution approximation will be hidden in the “O” notation. The
parameters from the class of graphs or from the problem will affect the time complexity
of the algorithm, and not the size of our kernel. In our applications we state corollaries
of the above result (namely, Corollary 2 and Corollary 3) in which we choose ε = 1 and
we provide an explicit bound on the value of t.
We are in position to state the following corollary claiming that, given an instance of
Planar-F-Deletion, in polynomial time we can either find a treewidth-modulator or
report that it is a No-instance. This is a corollary of the result of Fomin et al. [20] stated
in Theorem 4, where ε is fixed to be 1. The bound on the treewidth is derived from the
proof of Theorem 4 in [20].
Corollary 2. Let F be a finite set of graphs containing at least one r-vertex planar graph F, let H be an h-vertex graph, and let G be the class of H-minor-free graphs. If (G, k′) ∈ pFD_G, then there exists a set X ⊆ V(G) such that |X| = k′ and tw(G − X) = O(r√r · τ_H^3 · f_m(h)^3). Moreover, given an instance (G, k) with |V(G)| = n, there is an algorithm running in time O(n^3) that either finds such a set X or correctly reports that (G, k) ∉ pFD_G.
Note that since in Theorem 4 the value of ε can be chosen arbitrarily, we can state
many variants of the above corollary. For instance, in our previous article [22], we used the particular case where |X| = O(r · f_m(h) · k′) and tw(G − X) = O(r · f_m(h)^2).
We are now able to construct a linear protrusion decomposition.
Lemma 5. Let F be a finite set of graphs containing at least one r-vertex planar graph F, let H be an h-vertex graph, and let G be the class of H-minor-free graphs. Let (G, k) be an instance of Connected-Planar-F-Packing. If (G, k) ∉ cpFP_G, then we can construct in polynomial time a linear protrusion decomposition of G.
Proof. Given an instance (G, k) of cpFP_G, we run the algorithm given by Corollary 2 for the Connected-Planar-F-Deletion problem with input (G, k′ = c · r · 2^{15h+8h log h} · k). If the algorithm is not able to find a treewidth-modulator X of size |X| = k′, then by Theorem 3 we can conclude that (G, k) ∈ cpFP_G. Otherwise, we use the set X as input to the algorithm given by Theorem 2, which outputs in linear time an ((α_H · t) · k′, 2t + h)-protrusion decomposition of G, where
• t = O(r√r · τ_H^3 · f_m(h)^3) is provided by Corollary 2 (the bound on the treewidth);
• k′ = O(r · 2^{O(h log h)} · k) is provided by Theorem 3 (the parameter of F-Deletion); and
• α_H = O(h^2 · 2^{O(h log h)}) is the constant provided by Theorem 2.
That is, we obtained an (O(h^2 · 2^{O(h log h)} · r^{5/2} · τ_H^3 · f_m(h)^3) · k, O(r√r · τ_H^3 · f_m(h)^3))-protrusion decomposition of G, as claimed.
5.2 An encoder for F-Packing
Our encoder EFP for F-Packing uses the notion of rooted packing [1], and is inspired
by results on the Cycle Packing problem [5].
Assume first for simplicity that F = {F } consists of a single connected graph F .
Following [1], we introduce a combinatorial object called rooted packing. These objects
are originally defined for branch decompositions, but can easily be translated to tree
decompositions. Loosely speaking, rooted packings capture how potential models of F
intersect the separator that the algorithm is processing. It is worth mentioning that the
notion of rooted packing is related to the notion of folio introduced by Robertson and
Seymour [31], but more suited to dynamic programming.
Definition 17. Let F be a connected graph. Given a set B of boundary vertices of the input graph G, we define a rooted packing of B as a quintuple (A, S*_F, S_F, ψ, χ), where
• S_F ⊆ S*_F are both subsets of V(F);
• A is a (possibly empty) collection of mutually disjoint non-empty subsets of B;
• ψ : A → S_F is a surjective mapping assigning vertices of S_F to the sets in A; and
• χ : S_F × S_F → {0, 1} is a binary symmetric function between pairs of vertices in S_F.
We also define a potential model of F in G matching with (A, S*_F, S_F, ψ, χ) as a partial mapping Φ that assigns to every vertex v ∈ S_F a non-empty subgraph Φ(v) ⊆ G such that {A ∈ A : ψ(A) = v} is the set of intersections of B with the connected components of Φ(v); to every vertex v ∈ S*_F \ S_F a non-empty connected subgraph Φ(v) ⊆ G; and to every edge e ∈ {e ∈ E(F) : χ(e) = 1 or e ∈ (S*_F \ S_F) × (S*_F \ S_F)} an edge Φ(e) ∈ E(G), such that Φ satisfies the two following conditions (as in Definition 14):
• the graphs Φ(v) for v ∈ V(F) are mutually vertex-disjoint and the edges Φ(e) for e ∈ E(F) are pairwise distinct; and
• for {u, v} ∈ E(F), Φ({u, v}) has one endpoint in V(Φ(u)) and the other in V(Φ(v)).
Figure 3: Example of a rooted packing (left) and a potential model matching with it
(right).
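Definition 17 packs several structural constraints into a single object. As a concrete aid, here is a small Python sketch of the quintuple together with a validity check; the class name, field names, and the encoding of vertices (strings for V(F), integers for the boundary) are our own illustration and are not taken from [1]:

```python
from dataclasses import dataclass
from typing import Dict, FrozenSet, Set, Tuple

@dataclass
class RootedPacking:
    """Sketch of the quintuple (A, S*_F, S_F, psi, chi) of Definition 17."""
    A: Set[FrozenSet[int]]            # disjoint non-empty subsets of B
    S_F_star: Set[str]                # S*_F, a subset of V(F)
    S_F: Set[str]                     # S_F, a subset of S*_F
    psi: Dict[FrozenSet[int], str]    # surjective map A -> S_F
    chi: Dict[Tuple[str, str], int]   # symmetric 0/1 function on S_F x S_F

    def is_valid(self, B: Set[int]) -> bool:
        sets = list(self.A)
        # the sets in A are non-empty subsets of B...
        if any(not s or not s <= B for s in sets):
            return False
        # ...and pairwise disjoint
        if any(sets[i] & sets[j]
               for i in range(len(sets)) for j in range(i + 1, len(sets))):
            return False
        # S_F is a subset of S*_F
        if not self.S_F <= self.S_F_star:
            return False
        # psi is defined exactly on A and is surjective onto S_F
        if set(self.psi) != self.A or set(self.psi.values()) != self.S_F:
            return False
        # chi is symmetric on S_F x S_F
        return all(self.chi.get((u, v)) == self.chi.get((v, u))
                   for u in self.S_F for v in self.S_F)
```

Note that several sets in A may be mapped by psi to the same vertex of S_F, exactly as in the definition (a vertex-model may meet the boundary in several components).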
See Figure 3 for a schematic illustration of the above definition. The intended meaning of a rooted packing (A, S*_F, S_F, ψ, χ) on a separator B is as follows. The packing A represents the intersection of the connected components of the potential model with B. The subsets S*_F, S_F ⊆ V(F) and the function χ indicate that we are looking in the graph G for a potential model of F[S*_F] containing the edges between vertices in S_F given by the function χ. Namely, the function χ captures which edges of F[S*_F] have been realized so far in the processed graph. Since we allow the vertex-models intersecting B to be disconnected, we need to keep track of their connected components. The subset S_F ⊆ S*_F tells us which vertex-models intersect B (in other words, S_F is the boundary of F[S*_F]), and the function ψ associates the sets in A with the vertices in S_F. We can think of ψ as a coloring that colors the subsets in A with colors given by the vertices in S_F. Note that several subsets in A can have the same color u ∈ S_F, which means that the vertex-model of u in G is not connected yet, but it may get connected in further steps of the dynamic programming. Again, see [1] for the details.
It is proved in [1] that rooted packings allow one to carry out dynamic programming in order to determine whether an input graph G contains a graph F as a minor. It is easy to see that the number of distinct rooted packings at a separator B is upper-bounded by f(t, F) := 2^{t log t} · r^t · 2^{r^2}, where t ≥ |B|. In particular, this proves that when G is the class of graphs excluding a fixed graph H on h vertices as a minor, then the index of the equivalence relation ∼_{G,t} is bounded by 2^{t log t} · h^t · 2^{h^2}.
The encodings generator C_{E_FP}. Let G ∈ B_t with boundary ∂(G) labeled with Λ(G). The function C_{E_FP} maps Λ(G) to a set C_{E_FP}(Λ(G)) of encodings. Each R ∈ C_{E_FP}(Λ(G)) is a set of at most |Λ(G)| rooted packings {(A_i, S*_{F_i}, S_{F_i}, ψ_i, χ_i) | F_i ∈ F}, where each such rooted packing encodes a potential model of a minor F_i ∈ F (multiple models of the same graph are allowed).
The language L_{E_FP}. For a packing of models S, we say that (G, S, R) belongs to the language L_{E_FP} (or that S is a packing of models satisfying R) if there is a packing of potential models matching with the rooted packings of R in G \ ⋃_{Φ∈S} Φ(F).
Note that we allow the entirely realized models of S to intersect ∂(G) arbitrarily, but they must not intersect the potential models imposed by R.
As mentioned in the introduction, the natural definition of the maximization function does not provide a confined encoder, hence we need to use the relevant function f̄_g^{E_FP}. In order to define this function we note that, given a separator B and a subgraph G_B, a (partial) solution naturally induces an encoding R_B ∈ C_{E_FP}(Λ(G_B)), where the rooted packings correspond to the intersection of models with B.
Formally, let G be a t-boundaried graph with boundary A and let S be a partial solution satisfying some R_A ∈ C_{E_FP}(Λ(G)). Let also P be the set of potential models matching with the rooted packings in R_A. Given a separator B in G, we define the induced encoding R_B = {(A_i, S*_{F_i}, S_{F_i}, ψ_i, χ_i) | Φ_i ∈ S ∪ P} ∈ C_{E_FP}(Λ(G_B)) such that for each (potential) model Φ_i ∈ S ∪ P of F_i ∈ F intersecting B,
• A_i contains the elements of the form B ∩ C, where C is a connected component of the graph induced by V(Φ_i(v)) ∩ V(G_B), with v ∈ V(F_i);
• ψ_i maps each element of A_i to its corresponding vertex in F_i; and
• S*_{F_i}, S_{F_i} correspond to the vertices of F_i whose vertex-models intersect G_B and B, respectively.
Clearly, the set of models of S entirely realized in GB is a partial solution satisfying
RB .
Provided with a formal definition of an induced encoding, and following the description given in Section 4, we can state the definition of an irrelevant encoding for our problem. Let G ∈ B_t with boundary A and let R_A be an encoding. An encoding R_A is irrelevant for f̄_g^{E_FP} if there exists a certificate S such that (G, S, R_A) ∈ L_{E_FP} and |S| = f^{E_FP}(G, R_A), and a separator B ⊆ V(G) with |B| ≤ t and B ≠ A, such that S induces (as defined above) an encoding R_B in the graph G_B ∈ B_t with f̄_g^{E_FP}(G_B, R_B) = −∞.
The function f̄_g^{E_FP}. Let G ∈ B_t with boundary A and let g(t) = t. We define the function f̄_g^{E_FP} as

f̄_g^{E_FP}(G, R_A) = −∞ if f^{E_FP}(G, R_A) + g(t) < max{f^{E_FP}(G, R) : R ∈ C_{E_FP}(Λ(G))} or if R_A is irrelevant for f̄_g^{E_FP}, and f̄_g^{E_FP}(G, R_A) = f^{E_FP}(G, R_A) otherwise.   (2)

In the above equation, f^{E_FP} is the natural maximization function associated with the encoder, that is, f^{E_FP}(G, R) is the maximal number of (entire) models in G which do not intersect potential models imposed by R. Formally,

f^{E_FP}(G, R) = max{k : ∃S, |S| ≥ k, (G, S, R) ∈ L_{E_FP}}.
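The truncation rule of Equation (2) is simple enough to state in code. The following Python sketch, with hypothetical names and ignoring the irrelevance condition, maps every encoding whose natural value lies more than g(t) below the best value to −∞:

```python
def truncate_to_confined(f_values, g_t):
    """Sketch of the truncation in Equation (2), irrelevance ignored.

    f_values: dict mapping each encoding R to its natural value f(G, R).
    Any value more than g_t below the maximum becomes -infinity, so the
    surviving (finite) values span an interval of length at most g_t."""
    best = max(f_values.values())
    return {R: (v if v + g_t >= best else float("-inf"))
            for R, v in f_values.items()}
```

This is exactly what makes the encoder g-confined: after truncation, all finite values differ from the maximum by at most g(t).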
The size of E_FP. Recall that f(t, F) := 2^{t log t} · r^t · 2^{r^2} is the number of rooted packings for a minor F of size r on a boundary of size t. If we let r := max_{F∈F} |V(F)| and J be any set of positive integers such that Σ_{j∈J} j ≤ t, by definition of E_FP, it holds that

s_{E_FP}(t) ≤ Σ_{j∈J} (2^{j log j} · r^j · 2^{r^2}) ≤ (Σ_{j∈J} 2^{t log t} · r^t) · 2^{r^2} ≤ t · 2^{t log t} · r^t · 2^{r^2}.   (3)

Note that an encoding can also be seen as the rooted packing of the disjoint union of at most t minors of F.
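For concreteness, the bound f(t, F) and the right-hand side of Equation (3) can be evaluated numerically; the function names below are ours, and the logarithm is taken base 2 as an assumption:

```python
import math

def rooted_packing_bound(t, r):
    """f(t, F) = 2^(t log t) * r^t * 2^(r^2): upper bound on the number of
    distinct rooted packings of a boundary of size t for an r-vertex minor F
    (log base 2 assumed)."""
    return 2 ** (t * math.log2(t)) * r ** t * 2 ** (r * r)

def encoder_size_bound(t, r):
    """Right-hand side of Equation (3): t * 2^(t log t) * r^t * 2^(r^2)."""
    return t * rooted_packing_bound(t, r)
```

Even for tiny parameters the bound grows very quickly, which is why it appears only inside the constant of the kernel size and never in the exponent of n.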
Fact 2. Let G ∈ B_t with boundary A, let Φ be a model (resp. a potential model matching with a rooted packing defined on A) of a graph F in G, let B be a separator of G, and let G_B ∈ B_t be as in Definition 12. Let (A, S*_F, S_F, ψ, χ) be the rooted packing induced by Φ (as defined above). Let G′_B ∈ B_t with boundary B and let G′ be the graph obtained by replacing G_B with G′_B. If G′_B has a potential model Φ′_B matching with (A, S*_F, S_F, ψ, χ), then G′ has a model (resp. a potential model) of F.
Proof. Let us build a model (resp. a potential model) Φ′ of F in G′. For every vertex v in V(F) \ S*_F, we set Φ′(v) = Φ(v). For every vertex v in S*_F \ S_F, we set Φ′(v) = Φ′_B(v). For every vertex v in S_F, we set Φ′(v) = Φ(v)[V(G) \ V(G_B)] ⊕ Φ′_B(v). As Φ(v) is connected and the connected components of Φ′_B(v) have the same boundaries as the ones of Φ(v)[V(G_B)] (by definition of rooted packing), it follows that Φ′(v) is connected. Note that Φ′(v) does not intersect Φ′(u), since Φ(v) and Φ′_B(v) do not intersect Φ′(u) for any u ∈ V(F).
For every edge e in (V(F) \ S*_F) × (V(F) \ S*_F) or such that χ(e) = 0 we set Φ′(e) = Φ(e). For every edge e in (S*_F \ S_F) × (S*_F \ S_F) or such that χ(e) = 1 we set Φ′(e) = Φ′_B(e). Since B is a separator in G, S_F is a separator in F and there is no edge in (V(F) \ S*_F) × (S*_F \ S_F). Since Φ and Φ′_B are (potential) models, the edges Φ′(e), e ∈ E(F), are pairwise distinct, and if e = {u, v}, then Φ′(e) has one endpoint in Φ′(u) and the other in Φ′(v).
Figure 4: Illustration of a protrusion replacement for F-Packing.
See Figure 4 for an illustration of the scenario described in the statement of Fact 2.
Lemma 6. The encoder E_FP is a g-confined cFP-encoder for g(t) = t. Furthermore, if G is an arbitrary class of graphs, then the equivalence relation ∼*_{E_FP,G,t} is DP-friendly.
Proof. Let us first show that the encoder E_FP is a cFP-encoder. Indeed, if G is a 0-boundaried graph, then C_{E_FP}(∅) consists of a single encoding R_∅ (an empty set of rooted packings), and by definition of L_{E_FP}, any S such that (G, S, R_∅) ∈ L_{E_FP} is a packing of models. According to Equation (2), there are two possible values for f̄_g^{E_FP}(G, R_∅): either f^{E_FP}(G, R_∅), which by definition equals f_Π(G), or −∞. Let S be a packing of models of size f_Π(G), and assume for contradiction that f̄_g^{E_FP}(G, R_∅) = −∞. Then, by a recursive argument we can assume that there is a separator B of size at most t and a subgraph G_B of G as in Definition 12, such that S induces R_B and f^{E_FP}(G_B, R_B) + t < max{f^{E_FP}(G, R) : R ∈ C_{E_FP}(I)}. Let M be the set of models of S entirely realized in G_B. We have |M| = f^{E_FP}(G_B, R_B), as otherwise S would not be maximal. Let M_B be the set of models intersecting B, so we have |M_B| ≤ t. Finally, let M_0 be a packing of models in G_B of size max{f^{E_FP}(G, R) : R ∈ C_{E_FP}(I)}. Clearly, (S \ (M ∪ M_B)) ∪ M_0 is a packing of models of size at most |S| (by optimality of S), which gives |M_0| ≤ |M| + t, a contradiction with the definition of f̄_g^{E_FP}. Hence f̄_g^{E_FP}(G, R_∅) = f_Π(G).
By definition of the function f̄_g^{E_FP}, the encoder E_FP is g-confined for g : t ↦ t.
It remains to prove that the equivalence relation ∼*_{E_FP,G,t} is DP-friendly for g(t) = t. Due to Fact 1, it suffices to prove that ∼*_{E_FP,t} is DP-friendly. Let G ∈ B_t with boundary A, let B be any separator of G, and let G_B be as in Definition 12. The subgraph G_B can be viewed as a t-boundaried graph with boundary B. We define H ∈ B_t to be the graph induced by V(G) \ (V(G_B) \ B), with boundary B (that is, we forget boundary A), labeled in the same way as G_B. Let G′_B ∈ B_t be such that G_B ∼*_{E_FP,t} G′_B and let G′ = H ⊕ G′_B, with boundary A. We have to prove that G ∼*_{E_FP,t} G′ and ∆_{E_FP,t}(G, G′) = ∆_{E_FP,t}(G_B, G′_B), that is, that f̄_g^{E_FP}(G, R_A) = f̄_g^{E_FP}(G′, R_A) + ∆_{E_FP,t}(G_B, G′_B) for all R_A ∈ C_{E_FP}(Λ(G)).
Let R_A be an encoding defined on A. Assume first that f̄_g^{E_FP}(G, R_A) ≠ −∞. Let S = M ∪ M_B ∪ M_H be a packing of models satisfying R_A with size f̄_g^{E_FP}(G, R_A) in G, with M being the set of models entirely contained in G_B, M_H the set of models entirely contained in V(H) \ B, and M_B the set of models intersecting B and H. Notice that M, M_B, M_H is a partition of S. Let P be the set of potential models matching with the rooted packings in R_A. Let also R_B ∈ C_{E_FP}(Λ(G_B)) be the encoding induced by S ∪ P.

Observe that f̄_g^{E_FP}(G_B, R_B) ≠ −∞, as otherwise, by definition of the relevant function f̄_g^{E_FP}, we would have that f̄_g^{E_FP}(G, R_A) = −∞. Also, by construction of R_B it holds that |M| = f̄_g^{E_FP}(G_B, R_B), as otherwise S would not be maximum. Let M′ be a packing of models of F in G′_B such that (G′_B, M′, R_B) ∈ L_{E_FP} and of maximum cardinality, that is, such that |M′| = f̄_g^{E_FP}(G′_B, R_B). Consider now the potential models matching with R_B. There are two types of such potential models. The first ones match with rooted packings defined by the intersection of models in S and B; we glue them with the potential models defined by H ∩ M_B to construct M′_B. The other ones match with rooted packings defined by the intersection of potential models in P and B; we glue them with the potential models defined by H ∩ P to construct P′. Observe that |M_B| = |M′_B|. As G_B ∼*_{E_FP,t} G′_B and f̄_g^{E_FP}(G_B, R_B) ≠ −∞, we have that |M′| = f̄_g^{E_FP}(G_B, R_B) + ∆_{E_FP,t}(G_B, G′_B), and therefore |M′ ∪ M′_B ∪ M_H| = f̄_g^{E_FP}(G_B, R_B) + ∆_{E_FP,t}(G_B, G′_B) + |M_B| + |M_H| = f̄_g^{E_FP}(G, R_A) + ∆_{E_FP,t}(G_B, G′_B).

By definition we have that M_H and M′ are packings of models. The set M′_B contains vertex-disjoint models by Fact 2. Note that models in M_H ∪ M′ are vertex-disjoint (because V(H) ∩ V(G_B) = ∅), models in M_H ∪ M′_B are vertex-disjoint (because the ones in M_H ∪ M_B are vertex-disjoint), and models in M′ ∪ M′_B are vertex-disjoint (because M′ satisfies R_B). Hence M_H ∪ M′ ∪ M′_B is a packing of models.

It remains to prove that M_H ∪ M′ ∪ M′_B satisfies R_A. The set P′ contains vertex-disjoint potential models by Fact 2. Models in P′ ∪ M′ are vertex-disjoint, as M′ satisfies R_B. Models in P′ ∪ M′_B are vertex-disjoint by definition of R_B. Finally, models in P′ ∪ M_H are vertex-disjoint since S satisfies R_A.

It follows that G′ has a packing of models satisfying R_A of size f̄_g^{E_FP}(G, R_A) + ∆_{E_FP,t}(G_B, G′_B), that is, G ∼*_{E_FP,t} G′ and ∆_{E_FP,t}(G, G′) = ∆_{E_FP,t}(G_B, G′_B).

Assume now that f̄_g^{E_FP}(G, R_A) = −∞. If f̄_g^{E_FP}(G′, R_A) ≠ −∞, then applying the same arguments as above we would have that f̄_g^{E_FP}(G, R_A) ≠ −∞, a contradiction.
5.3 A linear kernel for F-Packing
We are now ready to provide a linear kernel for Connected-Planar-F-Packing.
Theorem 5. Let F be a finite family of connected graphs containing at least one planar graph on r vertices, let H be an h-vertex graph, and let G be the class of H-minor-free graphs. Then cpFP_G admits a constructive linear kernel of size at most f(r, h) · k, where f is an explicit function depending only on r and h, defined in Equation (4).
Proof. By Lemma 5, given an instance (G, k) we can either conclude that (G, k) is a Yes-instance of cpFP_G, or build in linear time an ((α_H · t) · k′, 2t + h)-protrusion decomposition of G, where α_H, t, k′ are defined in the proof of Lemma 5.
We now consider the encoder E_FP defined in Subsection 5.2. By Lemma 6, E_FP is a g-confined cpFP_G-encoder and ∼*_{E_FP,G,t} is DP-friendly, where g(t) = t and G is the class of H-minor-free graphs. An upper bound on s_{E_FP}(t) is given in Equation (3). Therefore, we are in position to apply Corollary 1 and obtain a linear kernel for cpFP_G of size at most

(α_H · t) · (b(E_FP, g, t, G) + 1) · k′,   (4)

where
• b(E_FP, g, t, G) is the function defined in Lemma 3;
• t is the bound on the treewidth provided by Corollary 2;
• k′ is the parameter of F-Deletion provided by Theorem 3; and
• α_H is the constant provided by Theorem 2.
By using the recent results of Chekuri and Chuzhoy [9], it can be shown that the factor α_H = O(h^2 · 2^{O(h log h)}) in Theorem 3 can be replaced with h^{O(1)}. However, in this case this would not directly translate into an improvement of the size of the kernel given in Equation (4), as the term h^{O(1)} would be dominated by the term f_m(h) = 2^{O(h^2 log h)}.
6 Application to ℓ-F-Packing
We now consider the scattered version of the packing problem. Given a finite set of graphs F and a positive integer ℓ, the ℓ-F-Packing problem is defined as follows.

ℓ-F-Packing
Instance: A graph G and a non-negative integer k.
Parameter: The integer k.
Question: Does G have k subgraphs G_1, ..., G_k pairwise at distance at least ℓ, each containing some graph from F as a minor?
We again consider the version of the problem where all the graphs in F are connected and at least one is planar, called Connected-Planar-ℓ-F-Packing (cpℓFP).
We obtain a linear kernel for Connected-Planar-ℓ-F-Packing on the family of graphs excluding a fixed apex graph H as a minor. We use again the notions of model, packing of models, and rooted packing.
6.1 A protrusion decomposition for an instance of ℓ-F-Packing
In order to obtain a linear protrusion decomposition for ℓ-F-Packing, a natural idea could be to prove an Erdős-Pósa property at distance ℓ, generalizing the approach for F-Packing described in Section 5. Unfortunately, the best known Erdős-Pósa relation between a maximum ℓ-F-packing and a minimum ℓ-F-deletion set is not linear. Indeed, by following and extending the ideas of Giannopoulou [23, Theorem 8.7 in Section 8.4] for the special case of cycles, it is possible to derive a bound of O(k√k), which is superlinear, and therefore not enough for our purposes. Proving a linear bound for this Erdős-Pósa relation, or finding a counterexample, is an exciting topic for further research.
We will use another trick to obtain the decomposition: we will (artificially) consider the ℓ-F-Packing problem as a vertex-certifiable problem. Hence we propose the formulation described below, which is clearly equivalent to the previous one. Using such a formulation, a natural question is whether the ℓ-F-Packing problem can fit into the framework for vertex-certifiable problems [22]. However, finding an appropriate encoder for this formulation does not seem an easy task, and it is more convenient to describe the encoder for ℓ-F-Packing using the new framework designed for packing problems.
ℓ-F-Packing
Instance: A graph G and a non-negative integer k.
Parameter: The integer k.
Question: Does G have a set {v_1, ..., v_k} of k vertices such that every v_i belongs to a subgraph G_i of G with G_1, ..., G_k pairwise at distance at least ℓ and each containing some graph from F as a minor?
With such a formulation, we are in position to use some powerful results from Bidimensionality theory. It is not so difficult to see that the ℓ-F-Packing problem is contraction-bidimensional [20]. Then we can use Theorem 4 and obtain the following corollary. Again, the bound on the treewidth is derived from the proof of Theorem 4 in [20].
Corollary 3. Let F be a finite set of graphs containing at least one r-vertex planar graph F, let H be an h-vertex apex graph, and let G be the class of H-minor-free graphs. If (G, k) ∈ pℓFP_G, then there exists a set X ⊆ V(G) such that |X| = k and tw(G − X) = O((2r + ℓ)^{3/2} · τ_H^3 · f_c(h)^3). Moreover, given an instance (G, k) with |V(G)| = n, there is an algorithm running in time O(n^3) that either finds such a set X or correctly reports that (G, k) ∉ pℓFP_G.
We are now able to construct a linear protrusion decomposition.
Lemma 7. Let F be a finite set of graphs containing at least one r-vertex planar graph F, let H be an h-vertex apex graph, and let G be the class of H-minor-free graphs. Let (G, k) be an instance of Connected-Planar-ℓ-F-Packing. If (G, k) ∈ cpℓFP_G, then we can construct in polynomial time a linear protrusion decomposition of G.
Proof. Given an instance (G, k) of cpℓFP_G, we run the algorithm given by Corollary 3. If the algorithm is not able to find a treewidth-modulator X of size |X| = k, then we can conclude that (G, k) ∉ cpℓFP_G. Otherwise, we use the set X as input to the algorithm given by Theorem 2, which outputs in linear time an ((α_H · t) · k, 2t + h)-protrusion decomposition of G, where
• t = O((r + ℓ)^{3/2} · τ_H^3 · f_c(h)^3) is provided by Corollary 3; and
• α_H = O(h^2 · 2^{O(h log h)}) is the constant provided by Theorem 2.
This is an (O(h^2 · 2^{O(h log h)} · (r + ℓ)^{3/2} · τ_H^3 · f_c(h)^3) · k, O((r + ℓ)^{3/2} · τ_H^3 · f_c(h)^3))-protrusion decomposition of G.
6.2 An encoder for ℓ-F-Packing
Our encoder E_ℓFP for ℓ-F-Packing is a combination of the encoder for F-Packing and the one for ℓ-Scattered Set that we defined in [22].

The encodings generator C_{E_ℓFP}. Let G ∈ B_t with boundary ∂(G) labeled with Λ(G). The function C_{E_ℓFP} maps Λ(G) to a set C_{E_ℓFP}(Λ(G)) of encodings. Each R ∈ C_{E_ℓFP}(Λ(G)) is a pair (R_P, R_S), where
• R_P is a set of at most |Λ(G)| rooted packings {(A_i, S*_{F_i}, S_{F_i}, ψ_i, χ_i) | i ∈ Λ(G), F_i ∈ F}, where each such rooted packing encodes a potential model of a minor F_i ∈ F (that is, R_P is an encoding of F-Packing); and
• R_S maps each label j ∈ Λ(G) to a |Λ(G)|-tuple (d, d_i : i ∈ Λ(G), i ≠ j) ∈ [0, ℓ + 1]^{|Λ(G)|} (that is, R_S is an encoding of ℓ-Scattered Set). For simplicity, since each label in Λ(G) is uniquely associated with a vertex in ∂(G), we denote by R(v) the vector assigned by R_S to label λ(v).
The language L_{E_ℓFP}. For a packing of models S, we say that (G, S, R) belongs to the language L_{E_ℓFP} (or that S is a packing of models satisfying R) if
• the models are pairwise at distance at least ℓ, that is, for each Φ_1, Φ_2 ∈ S, models of F_1, F_2 ∈ F, respectively, d_G(V(Φ_1(F_1)), V(Φ_2(F_2))) ≥ ℓ;
• there is a packing of potential models matching with the rooted packings of R_P pairwise at distance at least ℓ and at distance at least ℓ from ⋃_{Φ∈S} Φ(F); and
• for any vertex v ∈ ∂(G), if (d, d_i) = R(v) then d_G(v, S ∪ P) ≥ d, and d_G(v, w) ≥ d_{λ(w)} for any w ∈ ∂(G).
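The first condition of the language (models pairwise at distance at least ℓ) can be tested directly by multi-source BFS. The sketch below is our own illustration on a plain adjacency-dict graph representation and is not part of the encoder itself:

```python
from collections import deque

def distances_from(adj, sources):
    """Multi-source BFS distances in an unweighted graph given as an
    adjacency dict {vertex: iterable of neighbours}."""
    dist = {v: 0 for v in sources}
    queue = deque(sources)
    while queue:
        u = queue.popleft()
        for w in adj[u]:
            if w not in dist:
                dist[w] = dist[u] + 1
                queue.append(w)
    return dist

def pairwise_scattered(adj, vertex_sets, ell):
    """True iff the given vertex sets (e.g. vertex sets of models) are
    pairwise at distance at least ell in the graph."""
    for i, s in enumerate(vertex_sets):
        dist = distances_from(adj, s)
        for other in vertex_sets[i + 1:]:
            # unreachable vertices are at infinite distance, which passes
            if any(dist.get(v, ell) < ell for v in other):
                return False
    return True
```

For example, on a path 1-2-3-4-5, the sets {1} and {5} are at distance 4, so they form an ℓ-scattered pair for every ℓ ≤ 4 but not for ℓ = 5.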
Similarly to F-Packing, we need the relevant version of the function f̄_g^{E_ℓFP}. Let G ∈ B_t with boundary A and let S be a partial solution satisfying some R_A ∈ C_{E_ℓFP}(Λ(G)). Let also P be the set of potential models matching with the rooted packings in R_A. Given a separator B in G, and G_B as in Definition 12, we define the induced encoding R_B = (R_P, R_S) as follows:
• R_P is defined by the intersection of B with models in S ∪ P (as for F-Packing); and
• R_S maps each v ∈ B to R(v) = (d_{G_B}(v, S ∪ P), d_{G_B}(v, w) : w ∈ B).
The set of models of S entirely realized in G_B is a partial solution satisfying R_B. The definition of an irrelevant encoding is as described in Section 4.

The function f̄_g^{E_ℓFP}. Let G ∈ B_t with boundary A and let g(t) = 2t. We define f̄_g^{E_ℓFP} as

f̄_g^{E_ℓFP}(G, R_A) = −∞ if f^{E_ℓFP}(G, R_A) + 2t < max{f^{E_ℓFP}(G, R) : R ∈ C_{E_ℓFP}(Λ(G))} or if R_A is irrelevant for f̄_g^{E_ℓFP}, and f̄_g^{E_ℓFP}(G, R_A) = f^{E_ℓFP}(G, R_A) otherwise.   (5)

In the above equation, f^{E_ℓFP} is the natural optimization function defined as

f^{E_ℓFP}(G, R) = max{k : ∃S, |S| ≥ k, (G, S, R) ∈ L_{E_ℓFP}}.   (6)
Size of E_ℓFP. Since C_{E_ℓFP}(I) = C_{E_FP}(I) × ([0, ℓ + 1]^t)^t, it holds that

s_{E_ℓFP}(t) ≤ s_{E_FP}(t) × (ℓ + 2)^{t^2}.   (7)
Lemma 8. The encoder E_ℓFP is a g-confined cℓFP-encoder for g(t) = 2t. Furthermore, if G is an arbitrary class of graphs, then the equivalence relation ∼*_{E_ℓFP,G,t} is DP-friendly.
Proof. We first prove that E_ℓFP is a cℓFP-encoder. Obviously, {(G, S) : (G, S, R_∅) ∈ L_{E_ℓFP}, R_∅ ∈ C_{E_ℓFP}(∅)} = L_Π. As in the proof of Lemma 6, in order to show that f^{E_ℓFP}(G, R_∅) ≠ −∞ we prove that the value computed by f̄_g^{E_ℓFP} has not been truncated. Let G, G_B and S, M, M_H, M_B, M_0 be as in the proof of Lemma 6, and let M_0* = M_0 \ {Φ(F) : Φ(F) ∩ N_{r/2}(B) ≠ ∅, F ∈ F} and M_H* = M_H \ {Φ(F) : Φ(F) ∩ N_{r/2}(B) ≠ ∅, F ∈ F}. Then M_0* ∪ M_H* is a scattered packing of size at least |S| − 2t.
The encoder E_ℓFP is g-confined for g : t ↦ 2t by definition of f̄_g^{E_ℓFP}.
Following the proof of Lemma 6 again, let G, G′ ∈ B_t with boundary A and let G_B, G′_B, H ∈ B_t with boundary B. We have to prove that f̄_g^{E_ℓFP}(G, R_A) = f̄_g^{E_ℓFP}(G′, R_A) + ∆_{E_ℓFP,t}(G_B, G′_B) for every R_A ∈ C_{E_ℓFP}(Λ(G)).
Let R_A be an encoding defined on A. Assume that f̄_g^{E_ℓFP}(G, R_A) ≠ −∞. Let S = M ∪ M_B ∪ M_H be a packing of models satisfying R_A with size f̄_g^{E_ℓFP}(G, R_A) in G, with M, M_B, M_H as in the proof of Lemma 6. Let also P be the set of potential models matching with R_A and let R_B ∈ C_{E_ℓFP}(Λ(G_B)) be the encoding induced by S ∪ P.

Observe that, by definition, f̄_g^{E_ℓFP}(G_B, R_B) ≠ −∞. Hence there is a packing M′ in G′_B of maximum cardinality and such that (G′_B, M′, R_B) ∈ L_{E_ℓFP}. As in the proof of Lemma 6, we can define M′_B to be the set of models obtained from the potential models defined by the intersection of models in M_B with H, glued to the ones in G′_B matching with R_B. We can also define P′ to be the set of potential models obtained from the potential models defined by the intersection of potential models in P with H, glued to the ones in G′_B matching with R_B. As G_B ∼*_{E_ℓFP,t} G′_B and following the argumentation in Lemma 6, we have that |M′ ∪ M′_B ∪ M_H| = f̄_g^{E_ℓFP}(G, R_A) + ∆_{E_ℓFP,t}(G_B, G′_B).

We already have that S′ = M_H ∪ M′ ∪ M′_B is a packing of models according to the proof of Lemma 6. It remains to prove that the (potential) models in S′ ∪ P′ are pairwise at distance at least ℓ. We follow the proof of [22, Lemma 6]. Let P be a shortest path between any two models in S′ ∪ P′. We subdivide P into maximal subpaths in G′_B and maximal subpaths in H. Clearly the length of a subpath in H does not change. Moreover, note that the length of a subpath in G′_B with extremities v, w ∈ B is at least d_{G_B}(v, w), by definition of R_B. Note also that the length of a subpath in G′_B with one extremity in a model and the other extremity v ∈ B is at least d_{G_B}(v, S), also by definition of R_B. Therefore, the distance between any two models is indeed at least ℓ.

It follows that G′ has a scattered packing of models satisfying R_A of size f̄_g^{E_ℓFP}(G, R_A) + ∆_{E_ℓFP,t}(G_B, G′_B), that is, G ∼*_{E_ℓFP,t} G′ and ∆_{E_ℓFP,t}(G, G′) = ∆_{E_ℓFP,t}(G_B, G′_B). The case where f̄_g^{E_ℓFP}(G, R_A) = −∞ is easily handled as in Lemma 6.
6.3 A linear kernel for ℓ-F-Packing
We are now ready to provide a linear kernel for Connected-Planar-ℓ-F-Packing.
Theorem 6. Let F be a finite family of connected graphs containing at least one planar graph on r vertices, let H be an h-vertex apex graph, and let G be the class of H-minor-free graphs. Then cpℓFP_G admits a constructive linear kernel of size at most f(r, h, ℓ) · k, where f is an explicit function depending only on r, h, and ℓ, defined in Equation (8).
Proof. By Lemma 7, given an instance (G, k) we can either report that (G, k) is a Yes-instance of cpℓFP_G, or build in linear time an ((α_H · t) · k, 2t + h)-protrusion decomposition of G, where α_H and t are defined in the proof of Lemma 7.
We now consider the encoder E_ℓFP defined in Subsection 6.2. By Lemma 8, E_ℓFP is a g-confined cpℓFP_G-encoder and ∼*_{E_ℓFP,G,t} is DP-friendly, where g(t) = 2t and G is the class of H-minor-free graphs. An upper bound on s_{E_ℓFP}(t) is given in Equation (7). Therefore, we are in position to apply Corollary 1 and obtain a linear kernel for cpℓFP_G of size at most

(α_H · t) · (b(E_ℓFP, g, t, G) + 1) · k,   (8)

where
• b(E_ℓFP, g, t, G) is the function defined in Lemma 3;
• t is the bound on the treewidth provided by Corollary 3; and
• α_H is the constant provided by Theorem 2.
7 Application to F-Packing with ℓ-Membership
Now we consider a generalization of the F-Packing problem that allows models to be close to each other (conversely to ℓ-F-Packing, which asks for scattered models). That is, we consider the version for minors of the F-Subgraph-Packing with ℓ-Membership problem defined in [16]. Let F be a finite set of graphs. For every integer ℓ ≥ 1, we define the F-Packing with ℓ-Membership problem as follows.
F-Packing with ℓ-Membership
Instance: A graph G and a non-negative integer k.
Parameter: The integer k.
Question: Does G have k subgraphs G_1, ..., G_k such that each subgraph contains some graph from F as a minor, and each vertex of G belongs to at most ℓ subgraphs?
We again consider the version of the problem where all the graphs in F are connected and at least one is planar, called Connected-Planar-F-Packing with ℓ-Membership (cpFPℓM).
We obtain a linear kernel for Connected-Planar-F-Packing with ℓ-Membership on the family of graphs excluding a fixed graph H as a minor. We use again the notions of model, packing of models, and rooted packing.
Now, for an arbitrary graph, a certificate for F-Packing with ℓ-Membership is a packing of models with ℓ-membership, defined as follows.

Definition 18. Given a set F of minors and a graph G, a packing of models with ℓ-membership S is a set of models such that each vertex of G belongs to at most ℓ models, that is, to at most ℓ subgraphs Φ(F) for Φ ∈ S, F ∈ F.
Note that the above definition is equivalent to saying that each vertex of G belongs to at most ℓ vertex-models, since the vertex-models of a model are vertex-disjoint.
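Checking the ℓ-membership condition of Definition 18 amounts to counting, for each vertex, how many model images contain it. A minimal sketch, where each model is represented simply by the vertex set of its image Φ(F):

```python
from collections import Counter

def is_ell_membership_packing(models, ell):
    """True iff each vertex occurs in at most ell of the given models.

    models: iterable of vertex sets, one per model image Phi(F).
    """
    counts = Counter(v for model in models for v in model)
    return all(c <= ell for c in counts.values())
```

In particular, a packing of (vertex-disjoint) models satisfies this check for every ℓ ≥ 1, which is the observation used in Lemma 9 below.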
7.1 A protrusion decomposition for an instance of F-Packing with ℓ-Membership
In order to find a linear protrusion decomposition, we use again the Erdős-Pósa property, as we did in Subsection 5.1. The construction of a linear protrusion decomposition becomes straightforward from the fact that a packing of models is in particular a packing of models with ℓ-membership for every integer ℓ ≥ 1.
Lemma 9. Let F be a finite set of graphs containing at least one r-vertex planar graph F , let H be an h-vertex graph, and let G be the class of H-minor-free graphs. Let (G, k) be an instance of cpFP`MG . If (G, k) ∉ cpFP`MG , then we can construct in polynomial time a linear protrusion decomposition of G.

Proof. It suffices to note that if S is a packing of models of size k, then it is in particular a packing of models with `-membership for every integer ` > 1. Hence, if (G, k) ∉ cpFP`MG then (G, k) ∉ cpFPG and we can apply Lemma 5.
7.2 An encoder for F-Packing with `-Membership
Our encoder EFP`M for F-Packing with `-Membership uses again the notion of rooted
packing, but now we allow the rooted packings to intersect.
The encodings generator C EFP`M . Let G ∈ Bt with boundary ∂(G) labeled with
Λ(G). The function C EFP`M maps Λ(G) to a set C EFP`M (Λ(G)) of encodings. Each R ∈
C EFP`M (Λ(G)) is a set of at most ` · |Λ(G)| rooted packings {(Ai , SF∗ i , SFi , φi , χi ) : Fi ∈ F},
where each such rooted packing encodes a potential model of a minor Fi ∈ F (multiple
models of the same graph are allowed).
The language LEFP`M . For a packing of models with `-membership S, we say that (G, S, R) belongs to the language LEFP`M (or that S is a packing of models with `-membership satisfying R) if there is a packing of potential models with `-membership matching with the rooted packings of R in G \ {u : u ∈ Φ1 (F1 ), . . . , u ∈ Φ` (F` ); Φi ∈ S, Fi ∈ F}, that is, such that each vertex belongs to at most ` models or potential models.
The function f̄g^EFP`M . Similarly to F-Packing, we need the relevant version of the function f̄g^EFP`M . The function f̄g^EFP`M is defined exactly as the one for F-Packing in Section 5 (in particular, the encoding induced by a partial solution is also the set of rooted packings defined by the intersection of the partial solution and the separator).
The size of EFP`M . Note that the encoder contains at most `t rooted packings on a boundary of size t. Hence, if we let r := maxF ∈F |V (F )| and let J be any set such that Σj∈J j ≤ `t and ∀j ∈ J, j ≤ t, by definition of EFP`M it holds that

sEFP`M (t) ≤ `t · 2^{t log t} · r^t · 2^{r^2}.
It just remains to prove that the relation ∼∗EFP`M,G,t is DP-friendly. Note that in the encoder, the only difference with respect to F-Packing is that rooted packings are now allowed to intersect. Namely, the constraint on the intersection is that each vertex belongs to at most ` models. This constraint can easily be verified locally, so no information has to be transmitted through the separator. Hence, the proof of the following lemma is exactly the same as the proof of Lemma 6, and we omit it.
Lemma 10. The encoder EFP`M is a g-confined cFP`M-encoder for g(t) = t. Furthermore, if G is an arbitrary class of graphs, then the equivalence relation ∼∗EFP`M,G,t is DP-friendly.
7.3 A linear kernel for F-Packing with `-Membership
We are now ready to provide a linear kernel for Connected-Planar F-Packing with
`-Membership.
Theorem 7. Let F be a finite family of connected graphs containing at least one planar
graph on r vertices, let H be an h-vertex graph, and let G be the class of H-minor-free
graphs. Then cpFP`M admits a constructive linear kernel of size at most f (r, h, `) · k,
where f is an explicit function depending only on r, h, and `.
The proof of the above theorem is exactly the same as the one of Theorem 6, the
only difference being in the size sEFP`M (t) of the encoder, and hence in the value of
b(EFP`M , g, t, G).
8 Application to F-Subgraph-Packing
In this section we apply our framework to problems where the objective is to pack subgraphs. The F-Subgraph-Packing problem consists in finding vertex-disjoint subgraphs (instead of minors) isomorphic to graphs in a given finite family F. Similarly to F-(Minor)-Packing, we study two more generalizations of the problem,
namely the `-F-Subgraph-Packing, asking for subgraphs at distance ` from each
other, and the F-Subgraph-Packing with `-Membership problem [16] that allows
vertices to belong to at most ` subgraphs. Let F be a finite set of graphs and let
` > 1 be an integer. The F-Subgraph-Packing, the `-F-Subgraph-Packing, and
the F-Subgraph-Packing with `-Membership problems are defined as follows.
F-Subgraph-Packing
Instance: A graph G and a non-negative integer k.
Parameter: The integer k.
Question: Does G have k vertex-disjoint subgraphs
G1 , . . . , Gk , each isomorphic to a graph in F?
`-F-Subgraph-Packing
Instance: A graph G and two non-negative integers k and `.
Parameter: The integer k.
Question: Does G have k subgraphs G1 , . . . , Gk pairwise at distance
at least ` and each isomorphic to a graph in F?
F-Subgraph-Packing with `-Membership
Instance: A graph G and two non-negative integers k and `.
Parameter: The integer k.
Question: Does G have k subgraphs G1 , . . . , Gk , each isomorphic to
a graph in F, and such that each vertex of G belongs
to at most ` subgraphs?
Again, for technical reasons, we consider the versions of the above problems where all the graphs in F are connected, called Connected F-Subgraph-Packing (cFSP), Connected `-F-Subgraph-Packing (c`FSP), and Connected F-Subgraph-Packing with `-Membership (cFSP`M), respectively. As in Section 5, connectivity is necessary to use the equivalent notion of rooted packings. Furthermore, in this section we
also need connectivity to build the protrusion decomposition, whereas the presence of a
planar graph in F is not mandatory anymore.
Similarly to F-Packing, we establish a relation between instances of F-Subgraph-Packing (and its variants) and instances of d-Dominating Set for an appropriate
value of d. Therefore we also define this problem. Note that here we do not use any
Erdős-Pósa property to establish this relation.
d-Dominating Set
Instance: A graph G and two non-negative integers k and d.
Parameter: The integer k.
Question: Is there a set D of vertices in G with size at most k,
such that for every vertex v ∈ V (G), Nd [v] ∩ D ≠ ∅?
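The closed d-neighborhood Nd [v] used in the question above can be computed by a breadth-first search truncated at depth d. The sketch below (our own illustration; function names and the adjacency-list representation are our choices) computes Nd [v] and verifies the d-domination condition on a small example.

```python
from collections import deque

def ball(adj, v, d):
    """Closed d-neighborhood N_d[v]: all vertices at distance <= d from v,
    computed by a depth-truncated BFS."""
    dist = {v: 0}
    queue = deque([v])
    while queue:
        u = queue.popleft()
        if dist[u] == d:
            continue  # do not expand beyond distance d
        for w in adj[u]:
            if w not in dist:
                dist[w] = dist[u] + 1
                queue.append(w)
    return set(dist)

def is_d_dominating(adj, D, d):
    """d-Dominating Set condition: N_d[v] meets D for every vertex v."""
    return all(ball(adj, v, d) & D for v in adj)

# A path 0-1-2-3-4: its middle vertex 2-dominates the whole graph,
# but does not 1-dominate it (the endpoints are at distance 2).
path = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3]}
print(is_d_dominating(path, {2}, 2))  # True
print(is_d_dominating(path, {2}, 1))  # False
```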
In this section we obtain a linear kernel for Connected F-Subgraph-Packing, Connected `-F-Subgraph-Packing, and Connected F-Subgraph-Packing with `-Membership
on the families of graphs excluding respectively a fixed graph, a fixed apex graph, and
a fixed graph, as a minor.
For these three problems, the structure of a solution will be respectively a packing
of subgraph models, a packing of subgraph models, and a packing of subgraph models with
`-membership. In order to define a packing of subgraph models, we need the definition
of a subgraph model of F in G, which is basically an isomorphism from a graph F to a
subgraph of G.
Definition 19. A subgraph model of a graph F in a graph G is a mapping Φ that assigns to every vertex v ∈ V (F ) a vertex Φ(v) ∈ V (G), such that
• the vertices Φ(v) for v ∈ V (F ) are distinct; and
• if {u, v} ∈ E(F ), then {Φ(u), Φ(v)} ∈ E(G).
We denote by Φ(F ) the subgraph of G with vertex set {Φ(v) : v ∈ V (F )} and edge
set {{Φ(u), Φ(v)} : {u, v} ∈ E(F )}, which is obviously isomorphic to F .
Definition 20. Let F be a set of subgraphs and let G be a graph. A packing of subgraph
models S is a set of vertex-disjoint subgraph models, that is, the graphs Φ(F ) for Φ ∈
S, F ∈ F are vertex-disjoint. A packing of subgraph models with `-membership S is
a set of subgraph models such that every vertex v ∈ V (G) is the image of at most `
mappings Φ ∈ S.
8.1 A protrusion decomposition for an instance of F-Subgraph-Packing
In order to find a linear protrusion decomposition, we first need a preprocessing reduction rule. This rule, which has also been used in previous work [5, 20], enables us to
establish a relation between instances of F-Subgraph-Packing (and its variants) and
d-Dominating Set. Then we will be able to apply Theorem 4 on d-Dominating Set
to find a linear treewidth-modulator that allows us to construct the decomposition.
Rule 1. Let v be a vertex of G that does not belong to any subgraph of G isomorphic to
a graph in F. Then remove v from G.
Note that Rule 1 can be applied in time O(nr ), where n is the size of G and r is the
maximum size of a graph in F. We call a graph reduced under Rule 1 if the rule cannot
be applied anymore on G.
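Rule 1 can be sketched directly by brute force, matching the O(n^r) bound mentioned above: for each vertex, try all injective mappings of each graph of F onto r vertices of G containing that vertex. The following Python sketch is our own illustration (names and the representation of graphs as adjacency dictionaries are our choices), not the paper's implementation.

```python
from itertools import combinations, permutations

def has_model_through(adj, F_edges, F_vertices, v):
    """Brute-force test (time O(n^r)): is there a subgraph of `adj`
    isomorphic to F containing vertex v?  F is given by its vertex list
    and edge list; `adj` maps each vertex of G to its neighbor set."""
    r = len(F_vertices)
    others = [u for u in adj if u != v]
    for rest in combinations(others, r - 1):
        for image in permutations((v,) + rest):
            phi = dict(zip(F_vertices, image))
            # check that phi maps every edge of F onto an edge of G
            if all(phi[b] in adj[phi[a]] for a, b in F_edges):
                return True
    return False

def reduce_rule_1(adj, families):
    """Rule 1: repeatedly delete vertices lying in no subgraph of G
    isomorphic to a graph of the family (each graph = (vertices, edges))."""
    adj = {u: set(nbrs) for u, nbrs in adj.items()}  # work on a copy
    changed = True
    while changed:
        changed = False
        for v in list(adj):
            if not any(has_model_through(adj, E, V, v) for V, E in families):
                for u in adj[v]:
                    adj[u].discard(v)
                del adj[v]
                changed = True
    return adj

# A triangle {0,1,2} with a pendant vertex 3; packing triangles (F = K3).
G = {0: {1, 2}, 1: {0, 2}, 2: {0, 1, 3}, 3: {2}}
K3 = ([0, 1, 2], [(0, 1), (1, 2), (0, 2)])
print(sorted(reduce_rule_1(G, [K3])))  # [0, 1, 2]: vertex 3 is removed
```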
The next proposition states a relation between an instance of `-F-Subgraph-Packing and d-Dominating Set. The relations with the two other problems are straightforward, as explained below.
Proposition 3. Let G be a graph reduced under Rule 1. If (G, k) is a No-instance of Connected `-F-Subgraph-Packing, then (G, k) is a Yes-instance of (2d + `)-Dominating Set, where d is the largest diameter of the graphs in F.
Proof. Let (G, k) be a No-instance of `-F-Subgraph-Packing and let d be the largest diameter of a graph in F. Let us choose any vertex v belonging to some subgraph model of F in G and remove Nd+` (v) from G. We repeat this operation until there is no subgraph model of F in G. We call D the set of chosen vertices. As (G, k) is a No-instance of `-F-Subgraph-Packing, |D| ≤ k, and as G is reduced under Rule 1, all vertices in V (G) \ Nd+` (D) belong to a (connected) subgraph model (which intersects Nd+` (D)); hence all vertices in V (G) \ Nd+` (D) are at distance at most 2d + ` from D. Therefore (G, k) is a Yes-instance of (2d + `)-Dominating Set.
Note that if (G, k) is a No-instance of F-Subgraph-Packing with `-Membership, then it is a No-instance of F-Subgraph-Packing (that is, of 1-F-Subgraph-Packing), and hence a No-instance of `-F-Subgraph-Packing for every integer ` > 1. According to Proposition 3, it follows that (G, k) is a Yes-instance of (2d + 1)-Dominating Set.
We now apply Theorem 4 in order to find a treewidth-modulator for a Yes-instance of (2d + 1)-Dominating Set. More precisely, we use the following corollary of Theorem 4.
Corollary 4. Let F be a finite set of connected graphs, let H be an h-vertex apex graph, and let G be the class of H-minor-free graphs. If (G, k) ∈ d-DSG , then there exists a set X ⊆ V (G) such that |X| = k and tw(G − X) = O(d√d · τH^3 · fc(h)^3). Moreover, given an instance (G, k) with |V (G)| = n, there is an algorithm running in time O(n^3) that either finds such a set X or correctly reports that (G, k) ∉ d-DSG .
We are now able to construct a linear protrusion decomposition.
Lemma 11. Let F be a finite set of connected graphs, let H be an h-vertex apex
graph, and let G be the class of H-minor-free graphs. Let (G, k) be an instance of
Connected-`-F-Subgraph-Packing (or of Connected-F-Subgraph-Packing, or
of Connected-F-Subgraph-Packing with `-Membership). If (G, k) ∉ cFSPG , then we can construct in polynomial time a linear protrusion decomposition of G.
Proof. Given an instance (G, k) of cFSPG , we run the algorithm given by Corollary 4 for the Connected (2d + `)-Dominating Set problem, where d is the largest diameter of the graphs in F. If the algorithm is not able to find a treewidth-modulator X of size |X| = k, then by Proposition 3 we can conclude that (G, k) ∈ c`FSPG (resp. (G, k) ∈ cFSPG and (G, k) ∈ cFSP`MG ). Otherwise, we use the set X as input to the algorithm given by Theorem 2, which outputs in linear time an ((αH · t) · k, 2t + h)-protrusion decomposition of G, where

• t = O((2d + `)^{3/2} · τH^3 · fc(h)^3) is provided by Corollary 4; and
• αH = O(h^2 · 2^{O(h log h)}) is the constant provided by Theorem 2.

This is an (h^2 · 2^{O(h log h)} · (2d + `)^{3/2} · τH^3 · fc(h)^3 · k, O((2d + `)^{3/2} · τH^3 · fc(h)^3))-protrusion decomposition of G.
8.2 An encoder for F-Subgraph-Packing
Our encoder EFSP for F-Subgraph-Packing uses a simplified version of rooted packings.
Definition 21. Let F be a connected graph and let G be a boundaried graph with boundary B. A rooted set of B is a quadruple (A, SF∗ , SF , ψ), where
• SF ⊆ SF∗ are both subsets of V (F );
• A is a non-empty subset of B; and
• ψ : A → SF is a bijective mapping assigning vertices of SF to the vertices in A.
We also define a potential subgraph model of F in G matching with (A, SF∗ , SF , ψ) as a partial mapping Φ that assigns to every vertex v ∈ SF a vertex Φ(v) ∈ A such that ψ(Φ(v)) = v, and to every vertex v ∈ SF∗ a vertex Φ(v) ∈ V (G) such that for all u, v ∈ SF∗ , if {u, v} ∈ E(F ) then {Φ(u), Φ(v)} ∈ E(G). Moreover, for every v ∈ SF∗ \ SF , it holds that Φ(v) ∈ V (G) \ B.
Intuitively, the rooted set is a simplification of the rooted packing defined in Section 5. The collection A of subsets of B is replaced with a subset A of B (since now the image
of a vertex v ∈ V (F ) is a vertex of G). The sets SF∗ , SF still describe the subgraph of F
which is realized in G and its vertices that lie in B. The function ψ plays the same role
as in rooted packings: it can be viewed as the inverse of the potential subgraph model
Φ restricted to B. Note that we do not need the function χ anymore because the edges
cannot appear later (because now the image of a vertex v ∈ V (F ) is a vertex, and we
are dealing with a tree decomposition).
The number of distinct rooted sets at a separator B is upper-bounded by f (t, F ) := 2^t · r^t · 2^{2r}, where t ≥ |B| and r = |V (F )|.
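This bound can be checked by exhaustive enumeration for small parameters. The sketch below (our own illustration) counts the quadruples (A, SF∗ , SF , ψ) of Definition 21 while ignoring the edge structure of F, which can only shrink the true count, and verifies that the result stays below 2^t · r^t · 2^{2r}.

```python
from itertools import combinations
from math import factorial

def count_rooted_sets(t, r):
    """Enumerate rooted sets (A, S*_F, S_F, psi) for a boundary B of size t
    and a graph F on r vertices: A is a non-empty subset of B, S_F is a
    subset of S*_F, both subsets of V(F), and psi is a bijection A -> S_F
    (so |S_F| = |A|)."""
    B, VF = range(t), range(r)
    total = 0
    for a in range(1, t + 1):                      # |A| = a >= 1
        for A in combinations(B, a):
            for s in range(a, r + 1):              # |S*_F| = s >= |S_F| = a
                for S_star in combinations(VF, s):
                    for S in combinations(S_star, a):
                        total += factorial(a)      # bijections psi: A -> S_F
    return total

t, r = 3, 3
bound = 2**t * r**t * 2**(2 * r)
print(count_rooted_sets(t, r), bound)  # prints "78 13824"
```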
Here, we only describe the encoder for F-Subgraph-Packing. Similarly to Section 6, the encoder for `-F-Subgraph-Packing is obtained by a combination of the encoder for F-Subgraph-Packing and the one for `-Scattered Set. As in Section 7, the encoder for F-Subgraph-Packing with `-Membership is obtained by allowing intersections in the rooted sets.
The encodings generator C EFSP . Let G ∈ Bt with boundary ∂(G) labeled with Λ(G).
The function C EFSP maps Λ(G) to a set C EFSP (Λ(G)) of encodings. Each R ∈ C EFSP (Λ(G))
is a set of at most |Λ(G)| rooted sets {(Ai , SF∗ i , SFi , ψi ) : Fi ∈ F}, where each such rooted set encodes a potential subgraph model of Fi ∈ F (multiple subgraph models of the same graph are allowed), and where the sets Ai are pairwise disjoint.
The language LEFSP . For a packing of subgraph models S, we say that (G, S, R) belongs to the language LEFSP (or that S is a packing of models satisfying R) if there is a packing of vertex-disjoint potential subgraph models matching with the rooted sets of R in G \ ⋃Φ∈S Φ(F ).
Note that we allow the entirely realized subgraph models of S to intersect ∂(G) arbitrarily, but they must not intersect potential subgraph models imposed by R.
As in the previous sections, we need to use the relevant function f̄g^EFSP . To this aim, we need to remark that, given a separator B and a subgraph GB , a (partial) solution naturally induces an encoding RB ∈ C EFSP (Λ(GB )) where the rooted sets correspond to the intersection of models with B.
Formally, let G be a t-boundaried graph with boundary A and let S be a partial
solution satisfying some RA ∈ C EFSP (Λ(G)). Let also P be the set of potential subgraph
models matching with the rooted set in RA . Given a separator B in G, we define the
induced encoding RB = {(Ai , SF∗ i , SFi , ψi ) : Φi ∈ S ∪ P} ∈ C EFSP (Λ(GB )) such that for
each (potential) subgraph model Φi ∈ S ∪ P of Fi ∈ F intersecting B,
• Ai contains the vertices of Φi (Fi ) that lie in B;
• ψi maps each vertex of Ai to its corresponding vertex in Fi ; and
• SF∗ i and SFi correspond to the vertices of Fi whose images by Φi belong to GB and B, respectively.
Clearly, the set of models of S entirely realized in GB is a partial solution satisfying RB .
The definition of an irrelevant encoding is the same as in Section 4.
The function f̄g^EFSP . Let G ∈ Bt with boundary A. We define the function f̄g^EFSP as

f̄g^EFSP (G, RA ) = −∞, if f EFSP (G, RA ) + t < max{f EFSP (G, R) : R ∈ C EFSP (Λ(G))} or if RA is irrelevant for f̄g^EFSP ;
f̄g^EFSP (G, RA ) = f EFSP (G, RA ), otherwise.
In the above equation, f EFSP is the natural maximization function, that is, f EFSP (G, R) is the maximum number of (entirely realized) subgraph models in G which do not intersect the potential subgraph models imposed by R. Formally,

f EFSP (G, R) = max{k : ∃S, |S| ≥ k, (G, S, R) ∈ LEFSP }.
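The confinement step in the definition of f̄g^EFSP admits a very short sketch: every encoding whose value lies more than g(t) = t below the maximum is mapped to −∞, so that at most t + 1 finite values survive. The Python fragment below is our own illustration (it omits the "irrelevant encoding" case of the definition, and the encoding names are hypothetical).

```python
NEG_INF = float("-inf")

def confine(f_values, t):
    """Map to -infinity every encoding whose f-value is more than t below
    the maximum over all encodings, as in the definition of f-bar_g with
    g(t) = t.  The 'irrelevant encoding' clause is omitted in this sketch."""
    best = max(f_values.values())
    return {R: (v if v + t >= best else NEG_INF)
            for R, v in f_values.items()}

# Hypothetical encodings and their f-values on some boundaried graph.
f_values = {"R0": 7, "R1": 6, "R2": 3}
print(confine(f_values, t=2))  # R2 drops to -inf, since 3 + 2 < 7
```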
The size of EFSP . Recall that f (t, F ) := 2^t · r^t · 2^{2r} is the number of rooted sets for a subgraph F of size r on a boundary of size t. Our encoder contains at most t vertex-disjoint rooted sets, for subgraphs of size at most r := maxF ∈F |V (F )|, such that the sum of their boundary sizes is at most t. Hence we can bound the size of the encoder as

sEFSP (t) ≤ Σj∈J 2^j · r^j · 2^{2r} ≤ t · 2^t · r^t · 2^{2r}.
Note that the encoder for `-F-Subgraph-Packing generates couples of encodings for F-Subgraph-Packing and `-Scattered Set, and therefore the size of the encoder can be bounded as

sE`FSP (t) ≤ sEFSP (t) · (` + 2)^{t^2}.
Finally, note that the encoder for F-Subgraph-Packing with `-Membership contains at most `t rooted sets on a boundary of size t, and thus the size of the encoder can be bounded as

sEFSP`M (t) ≤ `t · 2^t · r^t · 2^{2r}.
Similarly to Fact 2, the following fact claims that rooted sets allow us to glue and
unglue boundaried graphs, preserving the existence of subgraphs. We omit the proof as
it is very similar to the one of Fact 2.
Fact 3. Let G ∈ Bt with boundary A, let Φ be a subgraph model (resp. a potential
subgraph model matching with a rooted set defined on A) of a graph F in G, let B be
a separator of G, and let GB ∈ Bt be as in Definition 12. Let (A, SF∗ , SF , ψ) be the
rooted set induced by Φ (as defined above). Let G0B ∈ Bt with boundary B and let G0 be
the graph obtained by replacing GB with G0B . If G0B has a potential subgraph model Φ0B
matching with (A, SF∗ , SF , ψ), then G0 has a subgraph model (resp. a potential subgraph
model) of F .
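The gluing operation ⊕ underlying Fact 3 can be made concrete for labeled boundaried graphs: take the disjoint union and identify boundary vertices carrying the same label. The sketch below is a hypothetical rendering of our own (the pair representation (adjacency dictionary, label map) is our choice, not the paper's formalism).

```python
def glue(g1, g2):
    """Glue two boundaried graphs along equally-labeled boundary vertices.
    A boundaried graph is (adj, labels): adj maps vertices to neighbor
    sets, labels maps boundary labels to boundary vertices.  Non-boundary
    vertices are kept apart by tagging them with the graph's index."""
    def name(i, g, v):
        lab = {u: l for l, u in g[1].items()}  # vertex -> label
        return ("b", lab[v]) if v in lab else (i, v)

    adj = {}
    for i, g in ((1, g1), (2, g2)):
        for v, nbrs in g[0].items():
            adj.setdefault(name(i, g, v), set()).update(
                name(i, g, u) for u in nbrs)
    return adj

# Two one-edge graphs sharing a single boundary vertex labeled 1:
# gluing them yields a path a - (x=y) - b on 3 vertices.
g1 = ({"a": {"x"}, "x": {"a"}}, {1: "x"})
g2 = ({"y": {"b"}, "b": {"y"}}, {1: "y"})
H = glue(g1, g2)
print(len(H))  # 3
```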
We now have to prove that the encoders EFSP , E`FSP , and EFSP`M are confined and DP-friendly. The proofs are very similar to the proof of Lemma 6; the proofs for E`FSP and EFSP`M have to be adapted following Sections 6 and 7, respectively. This seems natural, as the encoder EFSP is defined with rooted sets, which are simplifications of rooted packings.

Lemma 12. The encoders EFSP , E`FSP , and EFSP`M are g-confined for g(t) = t, g(t) = 2t, and g(t) = t, respectively. They are respectively a cFSP-encoder, a c`FSP-encoder, and a cFSP`M-encoder. Furthermore, if G is an arbitrary class of graphs, then the equivalence relations ∼∗EFSP,G,t , ∼∗E`FSP,G,t , and ∼∗EFSP`M,G,t are DP-friendly.
8.3 A linear kernel for F-Subgraph-Packing
We are now ready to provide a linear kernel for Connected F-Subgraph-Packing,
Connected `-F-Subgraph-Packing, and Connected F-Subgraph-Packing with `-Membership.
Theorem 8. Let F be a finite family of connected graphs with diameter at most d, let H
be an h-vertex graph, and let G be the class of H-minor-free graphs. Then Connected F-Subgraph-Packing, Connected `-F-Subgraph-Packing, and Connected F-Subgraph-Packing with `-Membership admit constructive kernels of size O(k), where the constant hidden in the "O" notation depends on h, d, and `.
The proof is similar to the ones in the previous sections. Using the protrusion
decomposition given by Lemma 11 and the encoders described in Section 8.2, we have
all the material to apply Corollary 1. The size of the kernel differs from the previous
sections due to the size of the encoders and due to the bound on the treewidth of
protrusions given by Lemma 11.
To conclude, we would like to mention that Romero and López-Ortiz [32] introduced another problem allowing intersection of subgraph models, called F-(Subgraph)-Packing with `-Overlap. In this problem, also studied in [16, 33], a subgraph model can intersect any number of other models, but models are allowed to pairwise intersect on at most ` vertices. It is easier to perform dynamic programming on the membership version than on the overlap version, since the intersection constraint is local for the first one (just on vertices) but global for the second one (on pairs of models). However, we think that it is possible to define an encoder (with all the required properties) for F-(Subgraph)-Packing with `-Overlap using rooted sets and vectors of integers counting the overlaps (similarly to `-F-Subgraph-Packing). This would imply the existence of a linear kernel for the F-(Subgraph)-Packing with `-Overlap problem on sparse graphs. We leave it for further research.
9 Conclusions and further research
In this article we generalized the framework introduced in [22] to deal with packing-certifiable problems. Our main result can be seen as a meta-theorem, in the sense that as far as a particular problem satisfies the generic conditions stated in Corollary 1, an explicit linear kernel on the corresponding graph class follows. Nevertheless, in order to verify these generic conditions and, in particular, to verify that the equivalence relation associated with an encoder is DP-friendly, the proofs are usually quite technical and one first needs to get familiar with several definitions. We think that it may be possible to simplify the general methodology, thus improving its applicability.
Concerning the explicit bounds derived from our results, one natural direction is to reduce them as much as possible. These bounds depend on a number of intermediate results that we use along the way, and improving any of them would result in an improvement of the overall kernel sizes. It is worth insisting here that some of the bounds involve the (currently) non-explicit function fc defined in Proposition 2, which depends exclusively on the considered graph class (and not on each particular problem). In order to find explicit bounds for this function fc , we leave as future work using the linear-time deterministic protrusion replacer recently introduced by Fomin et al. [18], partially inspired by [22].
Acknowledgement. We would like to thank Archontia C. Giannopoulou for insightful
discussions about the Erdős-Pósa property for scattered planar minors.
References
[1] I. Adler, F. Dorn, F. V. Fomin, I. Sau, and D. M. Thilikos. Faster parameterized
algorithms for minor containment. Theoretical Computer Science, 412(50):7018–
7028, 2011.
[2] J. Alber, M. Fellows, and R. Niedermeier. Polynomial-Time Data Reduction for
Dominating Set. Journal of the ACM, 51(3):363–384, 2004.
[3] A. Atminas, M. Kaminski, and J.-F. Raymond. Scattered packings of cycles. Theoretical Computer Science, 647:33–42, 2016.
[4] H. L. Bodlaender. A linear-time algorithm for finding tree-decompositions of small
treewidth. SIAM Journal on Computing, 25(6):1305–1317, 1996.
[5] H. L. Bodlaender, F. V. Fomin, D. Lokshtanov, E. Penninkx, S. Saurabh, and
D. M. Thilikos. (Meta) Kernelization. In Proc. of the 50th IEEE Symposium on
Foundations of Computer Science (FOCS), pages 629–638. IEEE Computer Society,
2009.
[6] H. L. Bodlaender, S. Thomassé, and A. Yeo. Kernel bounds for disjoint cycles and
disjoint paths. Theoretical Computer Science, 412(35):4570–4578, 2011.
[7] J. R. Büchi. Weak second order arithmetic and finite automata. Zeitschrift für
mathematische Logik und Grundlagen der Mathematik, 6:66–92, 1960.
[8] C. Chekuri and J. Chuzhoy. Large-treewidth graph decompositions and applications. In Proc. of the 45th Symposium on the Theory of Computing (STOC), pages
291–300, 2013.
[9] C. Chekuri and J. Chuzhoy. Polynomial bounds for the grid-minor theorem. In
Proc. of the 46th ACM Symposium on the Theory of Computing (STOC), pages
60–69, 2014.
[10] M. Cygan, F. V. Fomin, L. Kowalik, D. Lokshtanov, D. Marx, M. Pilipczuk,
M. Pilipczuk, and S. Saurabh. Parameterized Algorithms. Springer, 2015.
[11] M. Cygan, J. Nederlof, M. Pilipczuk, M. Pilipczuk, J. M. M. van Rooij, and J. O.
Wojtaszczyk. Solving connectivity problems parameterized by treewidth in single
exponential time. In Proc. of the 52nd IEEE Symposium on Foundations of Computer Science (FOCS), pages 150–159. IEEE Computer Society, 2011.
[12] E. D. Demaine and M. Hajiaghayi. Linearity of grid minors in treewidth with
applications through bidimensionality. Combinatorica, 28(1):19–36, 2008.
[13] R. Diestel. Graph Theory, volume 173. Springer-Verlag, 4th edition, 2010.
[14] P. Erdős and L. Pósa. On independent circuits contained in a graph. Canadian
Journal of Mathematics, 17:347–352, 1965.
[15] M. R. Fellows, J. Guo, C. Komusiewicz, R. Niedermeier, and J. Uhlmann. Graph-based data clustering with overlaps. Discrete Optimization, 8(1):2–17, 2011.
[16] H. Fernau, A. López-Ortiz, and J. Romero. Kernelization algorithms for packing
problems allowing overlaps. In Proc. of the 12th Annual Conference on Theory
and Applications of Models of Computation, (TAMC), volume 9076 of LNCS, pages
415–427, 2015.
[17] F. V. Fomin, P. A. Golovach, and D. M. Thilikos. Contraction obstructions for
treewidth. Journal of Combinatorial Theory, Series B, 101(5):302–314, 2011.
[18] F. V. Fomin, D. Lokshtanov, N. Misra, M. S. Ramanujan, and S. Saurabh. Solving d-SAT via Backdoors to Small Treewidth. In Proc. of the 26th ACM-SIAM Symposium
on Discrete Algorithms (SODA), pages 630–641, 2015.
[19] F. V. Fomin, D. Lokshtanov, V. Raman, and S. Saurabh. Bidimensionality and EPTAS. In Proc. of the 22nd ACM-SIAM Symposium on Discrete Algorithms (SODA),
pages 748–759, 2011.
[20] F. V. Fomin, D. Lokshtanov, S. Saurabh, and D. M. Thilikos. Bidimensionality
and kernels. In Proc. of the 21st ACM-SIAM Symposium on Discrete Algorithms
(SODA), pages 503–510, 2010.
[21] F. V. Fomin, S. Saurabh, and D. M. Thilikos. Strengthening Erdős-Pósa property
for minor-closed graph classes. Journal of Graph Theory, 66(3):235–240, 2011.
[22] V. Garnero, C. Paul, I. Sau, and D. M. Thilikos. Explicit linear kernels via dynamic
programming. SIAM Journal on Discrete Mathematics, 29(4):1864–1894, 2015.
[23] A. Giannopoulou. Partial Orderings and Algorithms on Graphs. PhD thesis, Department of Mathematics, University of Athens, Greece, 2012.
[24] J. Guo and R. Niedermeier. Linear problem kernels for NP-hard problems on planar
graphs. In Proc. of the 34th International Colloquium on Automata, Languages and
Programming (ICALP), volume 4596 of LNCS, pages 375–386, 2007.
[25] K. Kawarabayashi and Y. Kobayashi. Linear min-max relation between the
treewidth of H-minor-free graphs and its largest grid. In Proc. of the 29th International Symposium on Theoretical Aspects of Computer Science (STACS), volume 14
of LIPIcs, pages 278–289, 2012.
[26] E. J. Kim, A. Langer, C. Paul, F. Reidl, P. Rossmanith, I. Sau, and S. Sikdar. Linear kernels and single-exponential algorithms via protrusion decompositions. ACM
Transactions on Algorithms, 12(2):21, 2016.
[27] T. Kloks. Treewidth. Computations and Approximations. Springer-Verlag LNCS,
1994.
[28] D. Lokshtanov, D. Marx, and S. Saurabh. Lower bounds based on the exponential
time hypothesis. Bulletin of the EATCS, 105:41–72, 2011.
[29] H. Moser. A problem kernelization for graph packing. In Proc. of the 35th International Conference on Current Trends in Theory and Practice of Computer Science
(SOFSEM), volume 5404 of LNCS, pages 401–412, 2009.
[30] N. Robertson and P. D. Seymour. Graph Minors. V. Excluding a Planar Graph.
Journal of Combinatorial Theory, Series B, 41(1):92–114, 1986.
[31] N. Robertson and P. D. Seymour. Graph Minors. XIII. The Disjoint Paths Problem.
Journal of Combinatorial Theory, Series B, 63(1):65–110, 1995.
[32] J. Romero and A. López-Ortiz. The G-packing with t-overlap problem. In Proc.
of the 8th International Workshop on Algorithms and Computation (WALCOM),
volume 8344 of LNCS, pages 114–124, 2014.
[33] J. Romero and A. López-Ortiz. A parameterized algorithm for packing overlapping
subgraphs. In Proc. of the 9th International Computer Science Symposium in Russia
(CSR), volume 8476 of LNCS, pages 325–336, 2014.
A Deferred proofs in Section 3

A.1 Proof of Lemma 1
Let us first show that the equivalence relation ∼∗EFSP,t has finite index. Let I ⊆ {1, . . . , t}. Since we assume that EFSP is g-confined, we have that for any G ∈ Bt with Λ(G) = I, the function f EFSP (G, · ) can take at most g(t) + 2 distinct values (g(t) + 1 finite values and possibly the value −∞). Therefore, it follows that the number of equivalence classes of ∼∗EFSP,t containing all graphs G ∈ Bt with Λ(G) = I is at most (g(t) + 2)^{|C EFSP (I)|}. As the number of subsets of {1, . . . , t} is 2^t, we deduce that the overall number of equivalence classes of ∼∗EFSP,t is at most (g(t) + 2)^{sEFSP (t)} · 2^t. Finally, since the equivalence relation ∼∗EFSP,G,t is the Cartesian product of the equivalence relations ∼∗EFSP,t and ∼G,t , the result follows from the fact that G can be expressed in MSO logic.
A.2 Proof of Fact 1

Let G = G− ⊕ GB and let G0 = G− ⊕ G0B . Assume that G ∼∗EFSP,t G0 . In order to deduce that G ∼∗EFSP,G,t G0 , it suffices to prove that G ∼G,t G0 . Let H ∈ Bt . We need to show that G ⊕ H ∈ G if and only if G0 ⊕ H ∈ G. We have that G ⊕ H = (GB ⊕ G− ) ⊕ H = GB ⊕ (G− ⊕ H), and similarly for G0 . Since GB ∼G,t G0B , it follows that G ⊕ H = GB ⊕ (G− ⊕ H) ∈ G if and only if G0B ⊕ (G− ⊕ H) = G0 ⊕ H ∈ G.
A.3 Proof of Lemma 2

Let EFSP = (C EFSP , LEFSP , f EFSP ) be a Π-encoder and let G1 , G2 ∈ Bt be such that G1 ∼EFSP,t G2 . We need to prove that for any H ∈ Bt and any integer k, (G1 ⊕ H, k) ∈ Π if and only if (G2 ⊕ H, k + ∆EFSP,t (G1 , G2 )) ∈ Π.

Suppose that (G1 ⊕ H, k) ∈ Π (by symmetry, the same arguments apply starting with G2 ). Since G1 ⊕ H is a 0-boundaried graph and EFSP is a Π-encoder, we have that

f EFSP (G1 ⊕ H, R∅ ) = f Π (G1 ⊕ H) ≥ k.    (9)

As ∼∗EFSP,G,t is DP-friendly and G1 ∼∗EFSP,G,t G2 , it follows that (G1 ⊕ H) ∼∗EFSP,G,t (G2 ⊕ H) and that ∆EFSP,t (G1 ⊕ H, G2 ⊕ H) = ∆EFSP,t (G1 , G2 ). Since G2 ⊕ H is also a 0-boundaried graph, the latter property and Equation (9) imply that

f EFSP (G2 ⊕ H, R∅ ) = f EFSP (G1 ⊕ H, R∅ ) + ∆EFSP,t (G1 , G2 ) ≥ k + ∆EFSP,t (G1 , G2 ).    (10)

Since EFSP is a Π-encoder, f Π (G2 ⊕ H) = f EFSP (G2 ⊕ H, R∅ ), and from Equation (10) it follows that (G2 ⊕ H, k + ∆EFSP,t (G1 , G2 )) ∈ Π.
A.4
Proof of Lemma 3
Let C be an arbitrary equivalence class of ∼EFSP ,G,t , and let G1 , G2 ∈ C. Let us first argue
that C contains some progressive representative. Since ∆EFSP ,t (G1 , G2 ) = f EFSP (G1 , R) −
40
f EFSP (G2 , R) for every encoding R such that f EFSP (G1 , R), f EFSP (G2 , R) 6= −∞, G ∈ C
is progressive if f EFSP (G, R) is minimal in f EFSP (C, R) = {f (G, R) : G ∈ C} for every
encoding R (including those for which the value is −∞). Since f EFSP (C, R) is a subset
of N ∪ {−∞}, it necessarily has a minimal element, hence there is a progressive representative in C (in other words, the order defined by G1 4 G2 if ∆EFSP ,t (G1 , G2 ) 6 0 is
well-founded).
Now let G ∈ G be a progressive representative of C with minimum number of vertices.
We claim that G has size at most 2r(E,g,t,G)+1 · t (we would like to stress that at this stage
we only need to care about the existence of such representative G, and not about how
to compute it). Let (T, X ) be a boundaried nice tree decomposition of G of width at
most t − 1 such that ∂(G) is contained in the root-bag (such a nice tree decomposition
exists by [27]).
We first claim that for any node x of T , the graph Gx is a progressive representative of its equivalence class with respect to ∼EFSP ,G,t , namely C0 . Indeed, assume
for contradiction that Gx is not progressive, and therefore we know that there exists
G0x ∈ C0 such that ∆EFSP ,t (G0x , Gx ) < 0. Let G0 be the graph obtained from G by replacing Gx with G0x . Since ∼∗E ,G,t is DP-friendly, it follows that G ∼EFSP ,G,t G0 and
FSP
that ∆EFSP ,t (G0 , G) = ∆EFSP ,t (G0x , Gx ) < 0, contradicting the fact that G is a progressive
representative of the equivalence class C.
We now claim that for any two nodes x, y ∈ V(T) lying on a path from the root
to a leaf of T, it holds that Gx ≁EFSP,G,t Gy. Indeed, assume for contradiction that
there are two nodes x, y ∈ V(T) lying on a path from the root to a leaf of T such that
Gx ∼EFSP,G,t Gy. Let C′ be the equivalence class of Gx and Gy with respect to ∼EFSP,G,t.
By the previous claim, it follows that both Gx and Gy are progressive representatives of
C′, and therefore it holds that ∆EFSP,t (Gy, Gx) = 0. Suppose without loss of generality
that Gy ⊊ Gx (that is, Gy is a strict subgraph of Gx), and let G′ be the graph obtained from G by replacing Gx with Gy. Again, since ∼*EFSP,G,t is DP-friendly, it follows
that G ∼EFSP,G,t G′ and that ∆EFSP,t (G′, G) = ∆EFSP,t (Gy, Gx) = 0. Therefore, G′ is a
progressive representative of C with |V(G′)| < |V(G)|, contradicting the minimality of
|V(G)|.
Finally, since for any two nodes x, y ∈ V(T) lying on a path from the root to a leaf of
T we have that Gx ≁EFSP,G,t Gy, it follows that the height of T is at most the number of
equivalence classes of ∼EFSP,G,t, which is at most r(EFSP, g, t, G) by Lemma 1. Since T is
a binary tree, we have that |V(T)| ≤ 2^{r(EFSP,g,t,G)+1} − 1. Finally, since |V(G)| ≤ |V(T)| · t,
it follows that |V(G)| ≤ 2^{r(EFSP,g,t,G)+1} · t, as we wanted to prove.
A.5
Proof of Lemma 4
Let EFSP = (C EFSP , LEFSP , f EFSP ) be the given encoder. We start by generating a repository
R containing all the graphs in Ft with at most b + 1 vertices. Such a set of graphs, as
well as a boundaried nice tree decomposition of width at most t − 1 of each of them,
can be clearly generated in time depending only on b and t. By assumption, the size of
a smallest progressive representative of any equivalence class of ∼EFSP ,G,t is at most b,
so R contains a progressive representative of any equivalence class of ∼EFSP ,G,t with at
most b vertices. We now partition the graphs in R into equivalence classes of ∼EFSP ,G,t
as follows: for each graph G ∈ R and each encoding R ∈ C EFSP (Λ(G)), as LEFSP and
f EFSP are computable, we can compute the value f EFSP (G, R) in time depending only
on EFSP , g, t, and b. Therefore, for any two graphs G1 , G2 ∈ R, we can decide in time
depending only on EFSP , g, t, b, and G whether G1 ∼EFSP ,G,t G2 , and if this is the case, we
can compute the transposition constant ∆EFSP ,t (G1 , G2 ) within the same running time.
Given a t-protrusion Y on n vertices with boundary ∂(Y ), we first compute a boundaried nice tree decomposition (T, X , r) of Y in time f (t) · n, by using the linear-time
algorithm of Bodlaender [4, 27]. Such a t-protrusion Y equipped with a tree decomposition can be naturally seen as a t-boundaried graph by assigning distinct labels from
{1, . . . , t} to the vertices in the root-bag. We can assume that Λ(Y ) = {1, . . . , t}. Note
that the labels can be transferred to the vertices in all the bags of (T, X , r), by performing a standard shifting procedure when a vertex is introduced or removed from the
nice tree decomposition [5]. Therefore, each node x ∈ V (T ) defines in a natural way
a t-protrusion Yx ⊆ Y with its associated boundaried nice tree decomposition, with all
the boundary vertices contained in the root bag. Let us now proceed to the description
of the replacement algorithm.
We process the bags of (T, X ) in a bottom-up way until we encounter the first node
x in V (T ) such that |V (Yx )| = b + 1 (note that as (T, X ) is a nice tree decomposition,
when processing the bags in a bottom-up way, at most one new vertex is introduced at
every step, and recall that b > t, hence such an x exists). We compute the equivalence
class C of Yx according to ∼EFSP ,G,t ; this corresponds to computing the set of encodings
C EFSP (Λ(Yx )) and the associated values of f EFSP (Yx , ·) that, by definition of an encoder,
can be calculated since f EFSP is a computable function. As |V (Yx )| = b + 1, the graph
Yx is contained in the repository R, so in constant time we can find in R a progressive
representative Y′x of C with at most b vertices and the corresponding transposition constant ∆EFSP,t (Y′x, Yx) ≤ 0 (the inequality holds because Y′x is progressive). Let Z be the
graph obtained from Y by replacing Yx with Y′x, so we have that |V(Z)| < |V(Y)| (note
that this replacement operation directly yields a boundaried nice tree decomposition of
width at most t − 1 of Z). Since ∼*EFSP,G,t is DP-friendly, it follows that Y ∼EFSP,G,t Z
and that ∆EFSP,t (Z, Y) = ∆EFSP,t (Y′x, Yx) ≤ 0.
We recursively apply this replacement procedure on the resulting graph until we eventually obtain a t-protrusion Y′ with at most b vertices such that Y ∼EFSP,G,t Y′. The
corresponding transposition constant ∆EFSP,t (Y′, Y) can easily be computed by summing up the transposition constants given by each of the performed replacements.
Since each of these replacements introduces a progressive representative, we have that
∆EFSP,t (Y′, Y) ≤ 0. As we can assume that the total number of nodes in a nice tree
decomposition of Y is O(n) [27, Lemma 13.1.2], the overall running time of the algorithm is O(n) (the constant hidden in the "O" notation depends exclusively on
EFSP, g, b, G, and t).
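The control flow of the replacement algorithm above can be sketched abstractly. In the sketch below, everything graph-theoretic is replaced by toy stand-ins — a "protrusion" is just a list of vertices processed bottom-up, and `find_representative` stands in for the repository lookup — so this is only an illustration of the loop's invariants, not of the actual data structures.

```python
def reduce_protrusion(vertices, find_representative, b):
    """Control-flow sketch of the replacement loop in the proof of Lemma 4.

    'vertices' stands in for the protrusion's vertices, visited bottom-up
    (a nice tree decomposition introduces at most one new vertex per step);
    'find_representative' stands in for the repository lookup, returning a
    progressive representative with at most b vertices together with its
    non-positive transposition constant.
    """
    total_delta = 0
    current = []
    for v in vertices:
        current.append(v)
        if len(current) == b + 1:  # first moment the processed part exceeds b
            rep, delta = find_representative(current)
            assert len(rep) <= b and delta <= 0  # rep is progressive
            total_delta += delta
            current = list(rep)    # splice the smaller representative back in
    return current, total_delta

# Toy repository lookup: drop the last vertex, charge a constant of -1.
result, delta = reduce_protrusion(list(range(10)), lambda g: (g[:-1], -1), 3)
```

With this toy lookup the 10-vertex input shrinks to at most b = 3 vertices and the accumulated constant is non-positive, mirroring the invariants |V(Y′)| ≤ b and ∆EFSP,t (Y′, Y) ≤ 0 maintained by the real algorithm.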
A.6
Proof of Theorem 1
By Lemma 1, the number of equivalence classes of the equivalence relation ∼EFSP ,G,t is
finite and by Lemma 3 the size of a smallest progressive representative of any equivalence
class of ∼EFSP ,G,t is at most b(EFSP , g, t, G). Therefore, we can apply Lemma 4 and deduce
that, in time O(|Y|), we can find a t-protrusion Y′ of size at most b(EFSP, g, t, G) such
that Y ∼EFSP,G,t Y′ and the corresponding transposition constant ∆EFSP,t (Y′, Y) with
∆EFSP,t (Y′, Y) ≤ 0. Since EFSP is a Π-encoder and ∼*EFSP,G,t is DP-friendly, it follows
from Lemma 2 that Y ≡Π Y′ and that ∆Π,t (Y′, Y) = ∆EFSP,t (Y′, Y) ≤ 0. Therefore, if
we set k′ := k + ∆Π,t (Y′, Y), it follows that (G, k) and ((G − (Y − ∂(Y))) ⊕ Y′, k′) are
indeed equivalent instances of Π with k′ ≤ k and |Y′| ≤ b(EFSP, g, t, G).
A.7
Proof of Corollary 1
For 1 ≤ i ≤ ℓ, where ℓ is the number of protrusions in the decomposition, we apply
the polynomial-time algorithm given by Theorem 1 to replace each t-protrusion Yi with
a graph Y′i of size at most b(EFSP, g, t, G) and to update the parameter accordingly.
In this way we obtain an equivalent instance (G′, k′) such that G′ ∈ G, k′ ≤ k and
|V(G′)| ≤ |Y0| + ℓ · b(EFSP, g, t, G) ≤ (1 + b(EFSP, g, t, G))α · k.
A Minimal Developmental Model Can Increase Evolvability in
Soft Robots
Sam Kriegman
Nick Cheney
Morphology, Evolution & Cognition Lab
University of Vermont
Burlington, VT, USA
[email protected]
Creative Machines Lab
Cornell University
Ithaca, NY, USA
arXiv:1706.07296v1 [] 22 Jun 2017
Francesco Corucci
Josh C. Bongard
The BioRobotics Institute
Scuola Superiore Sant’Anna
Pisa, Italy
Morphology, Evolution & Cognition Lab
University of Vermont
Burlington, VT, USA
ABSTRACT
Different subsystems of organisms adapt over many time scales,
such as rapid changes in the nervous system (learning), slower
morphological and neurological change over the lifetime of the organism (postnatal development), and change over many generations
(evolution). Much work has focused on instantiating learning or
evolution in robots, but relatively little on development. Although
many theories have been forwarded as to how development can aid
evolution, it is difficult to isolate each such proposed mechanism.
Thus, here we introduce a minimal yet embodied model of development: the body of the robot changes over its lifetime, yet growth
is not influenced by the environment. We show that even this
simple developmental model confers evolvability because it allows
evolution to sweep over a larger range of body plans than an equivalent non-developmental system, and subsequent heterochronic
mutations ‘lock in’ this body plan in more morphologically-static
descendants. Future work will involve gradually complexifying
the developmental model to determine when and how such added
complexity increases evolvability.
Many theories have been proposed as to how development can
confer evolvability. Selfish gene theory [8] suggests that prenatal
development from a single-celled egg is not a superfluous byproduct
of evolution, but is instead a critical process that ensures uniformity among genes contained within a single organism and in turn
their cooperation towards mutual reproduction. Developmental
plasticity, the ability of an organism to modify its form in response
to environmental conditions, is believed to play a crucial role in the
origin and diversification of novel traits [17]. Others have shown
that development can in effect ‘encode’, and thus avoid on a much
shorter time scale, constraints that would otherwise be encountered
and suffered by non-developmental systems [15].
Several models that specifically address development of embodied agents have been reported in the literature. For example, Eggenberger [11] demonstrated how shape could emerge during growth
in response to physical forces acting on the growing entity. Bongard
[3] adopted models of genetic regulatory networks to demonstrate
how evolution could shape the developmental trajectories of embodied agents. Later, it was shown how such development could
lead to a form of self-scaffolding that smoothed the fitness landscape and thus increased evolvability [2]. Miller [16] introduced a
developmental model that enabled growing organisms to regrow
structure removed by damage or other environmental stress.
In the spirit of Beer’s minimal cognition experiments [1], we
introduce here a minimal model of morphological development in
embodied agents (figure 2). This model strips away some aspects
of other developmental models, such as those that reorganize the
genotype to phenotype mapping [3, 11, 15] or allow the agent’s
environment to influence its development [14, 16]. We use soft
robots as our model agents since they provide many more degrees
of developmental freedom compared to rigid bodies, and can in
principle reduce human designer bias. Here, development is monotonic and irreversible, predetermined by genetic code without any
sensory feedback from the environment, and is thus ballistic in
nature rather than adaptive.
While biological development occurs along a time axis, it has
been implied in some developmental models that time provides only
an avenue for regularities to form across space, and that only the
resulting fixed form — its spatial patterns, repetition and symmetry
— are necessary for increasing evolvability. Compositional pattern
CCS CONCEPTS
•Computing methodologies → Mobile agents;
KEYWORDS
Morphogenesis; Heterochrony; Development; Artificial life; Evolutionary robotics; Soft robotics.
ACM Reference format:
Sam Kriegman, Nick Cheney, Francesco Corucci, and Josh C. Bongard. 2017.
A Minimal Developmental Model Can Increase Evolvability in Soft Robots.
In Proceedings of GECCO ’17, Berlin, Germany, July 15-19, 2017, 8 pages.
DOI: http://dx.doi.org/10.1145/3071178.3071296
GECCO ’17, Berlin, Germany
© 2017 ACM. 978-1-4503-4920-8/17/07. . . $15.00
DOI: http://dx.doi.org/10.1145/3071178.3071296
1 INTRODUCTION
GECCO ’17, July 15-19, 2017, Berlin, Germany
Kriegman et al.
Figure 1: The evolutionary history of an Evo-Devo robot. One of the five phylogenies is broken down into five ontogenies, one of which is in turn shown at five points in its actuation cycle. Voxel color indicates the remaining development: blue for shrinkage, red for growth, and green for no further change. This robot is featured in video at https://youtu.be/gXf2Chu4L9A.
producing networks (CPPNs, [19]) explicitly make this assumption
in their abstraction of development which collapses the time line
to a single point. While CPPNs have proven to be an invaluable
resource in evolutionary robotics [6], we argue here that discarding
time may in some cases reduce evolvability and that there exist
fundamental benefits of time itself in evolving systems.
In this paper, we examine two distinct ways by which ballistic
development can increase evolvability. First, we show how an ontogenetic time scale provides evolution with a simple mechanism
for inducing mutations with a range of magnitude of phenotypic
impact: mutations that occur early in the life time of an agent have
relatively large effects while those that occur later have smaller
effects. This is important since, according to Fisher’s geometric
model [12], the likelihood a mutation is beneficial is inversely proportional to its magnitude: Small mutations are less likely to break
an existing solution. Larger exploratory mutations, although less
likely to be beneficial on average, are more likely to provide an
occasional path out of local optima. Second, we posit that changing
ontogenies diversify targets for natural selection to act upon, and
that advantageous traits ‘discovered’ by the phenotype during this
change can become subject to heritable modification through the
‘Baldwin Effect’ [10].
Hinton and Nowlan [14] relied on this second effect when they
demonstrated how learning could guide evolution towards a solution to which no evolutionary path led. We consider a similar
hypothesis with embodied robots and ballistic development, rather
than a disembodied bitstring and random search. We demonstrate
how open-loop morphological development, without feedback from
the environment and without direct communication to the genotype, can similarly alter the search space in which evolution operates making search much easier. Hinton & Nowlan’s model of
learning was a type of environment-mediated development, in the
sense that developmental change stops when the ‘correct specification’ is found, and this information is then used to bias selection
towards individuals that find the solution more quickly. Our work
demonstrates that this explicit suppression of development is not
necessary; and that completely undirected morphological change
is enough to confer evolvability.
2 METHODS
1 https://github.com/skriegman/gecco-2017 contains the source code necessary for
reproducing our results.
All experiments1 were performed in the open-source soft-body
physics simulator Voxelyze, which is described in detail in Hiller
and Lipson [13].
We consider a locomotion task for soft robots composed of a
4 × 4 × 3 grid of voxels (see figure 1 for example). Each voxel within
and across robots is identical with one exception: its volume. At
any given time, a robot is completely specified by an array of resting
volumes, one for each of its 48 constituent voxels. If the resting volumes are static across time then a robot’s genotype is this array of
48 voxel volumes; however, because we enforce bilateral symmetry,
a genome of half that size is sufficient. On top of the deformation
imposed by the genome, each voxel is volumetrically actuated according to a global signal that varies sinusoidally in volume over
time (figure 2). The actuation is a linear contraction/expansion from
their baseline resting volume.
Under this type of rhythmic actuation, many asymmetrical mass
distributions will elicit locomotion to some extent. For instance, a
simple design, with larger voxels in its front half relative to those in
its back half, may be mobile when its voxels are actuated uniformly, although such a design would be rather inefficient since it most likely drags much of its body across the floor as it moves. More productive
designs are not so intuitive, even with this fixed controller.
An individual is evaluated for 8 seconds, or 32 actuation cycles.
The fitness was taken to be the distance, in the positive y direction,
the robot’s center of mass moved in 8 seconds, normalized by the
robot’s total volume. Thus, a robot with volume 48 that moves a
distance of 48 will have the same fitness — a fitness of one — as a
similarly shaped but smaller robot with volume 12 that moves a
distance of 12. Distance here is measured in units that correspond
to the length of a voxel with volume one. If, however, a robot rolls
over onto its top layer of voxels it is assigned a fitness of zero and
evaluation is terminated. This constraint prevents a rolling ball
morphology from dominating more interesting gaits.
We have now built up all of the necessary machinery for our first type of robot, which we shall call the Evo robot. Populations of these
robots can evolve: body plans change from generation to generation
(phylogeny); but they can not develop: body plans maintain a fixed
form, apart from actuation, while they behave within their lifetime
(ontogeny).
We consider a second type of robot, the Evo-Devo robot, which
inherits all of the properties of the Evo robot but has a special
ability: Evo-Devo robots can develop as well as evolve. These robots
are endowed with a minimally complex model of development in
which resting volumes change linearly in ontogeny. We call this
ballistic development to distinguish it from environment-mediated
development. Ballistic development is monotonic with a fixed rate
predetermined by a genetic program; its onset and termination
are constrained at birth and death, respectively; it is strictly linear,
without mid-course correction. The volume of the k th voxel in
an Evo-Devo robot changes linearly from a starting volume, vk 0 ,
to a final volume, vk1 , within the lifetime of a robot (figure 2).
Accordingly, the genotype of a robot that can develop is twice as
large as that of robots that cannot develop, since there are two
Figure 2: The voxel picture. The kth voxel in an Evo robot maintains a fixed resting volume, v_{k0}, throughout the robot's lifetime. Sinusoidal actuation is applied on top of the resting volume. In contrast, the kth voxel in an Evo-Devo robot changes linearly from a starting volume, v_{k0}, to a final volume, v_{k1}, over the robot's entire lifetime. Growth, the case when v_{k1} > v_{k0}, is pictured here, but shrinkage is also possible and occurs when v_{k1} < v_{k0}. When v_{k1} = v_{k0}, Evo-Devo voxels are behaviorally equivalent to Evo voxels. Voxels actuate at 4 Hz in our experiments (for 8 s, or 32 cycles); however, actuation is drawn here at a lower frequency to better convey the sinusoidal volumetric structure in time.
parameters (vk 0 and vk 1 ) that determine the volume of the k th
voxel at any particular time. It is important to note, however, that the space of possible morphologies (collections of resting volumes) is the same with and without development.
2.1 From gene to volume.
Like most animals, our robots are bilaterally symmetrical. We build
this constraint into our robots by constraining the 24 voxels on the
positive x side of the robot to be equal to their counterparts on the
other side of the y axis. Instead of 48 Evo genes, therefore, we end
up with 24.
A single Evo gene stores the resting length, sk , of the k th voxel,
which is cubed to obtain the resting volume, r k (t), at any time, t,
during the robot’s lifetime.
r_k(t) = s_k^3,   k = 1, 2, . . . , 24   (1)
The resting lengths may be any real value in the range [0.25, 1.75].
inclusive. Note that the resting volume of an Evo robot does not
depend on t, and is thus constant in ontogenetic time.
Volumetric actuation, a(t), with amplitude u and period w, takes the following general form in time:

a(t) = u · sin(2πt/w)   (2)
Actuation is limited to ±20% and cycles four times per second
(u = 0.20, w = 0.25 sec).
However, for smaller resting volumes, the actuation amplitude
is limited and approaches zero (no actuation) as the resting volume
goes to its lower bound, 0.25^3. This restriction is enforced to prevent
opposite sides of a voxel from penetrating each other, effectively
incurring negative volumes, which can lead to simulation instability.
This damping is applied only where s_k < 1 (shrinking voxels) and is accomplished through the following function:

d(s_k) = { 1,              if s_k ≥ 1
         { (4s_k − 1)/3,   if s_k < 1   (3)
Thus d(s_k) is zero when s_k = 0.25 and increases linearly for s_k ≤ 1. The true actuation, ã(t, s_k), is calculated by multiplying the
unrestricted actuation, a(t), by the limiting factor, d(sk ).
ã(t, s_k) = a(t) · d(s_k)   (4)
Actuation is then added to the resting volume to realize the
current volume, Vk (t), of the k th voxel of an Evo robot at time t.
V_k(t) = [s_k + ã(t, s_k)]^3   (5)
For Evo-Devo robots, a gene is a pair of voxel lengths (sk 0 , sk 1 )
corresponding to the k th voxel’s starting and final resting lengths,
respectively. Thus, for a voxel in an Evo-Devo robot, the resting
volume at time t ∈ [0, τ] is calculated as follows:

r_k(t) = [s_{k0} + (s_{k1} − s_{k0}) · t/τ]^3   (6)
where the difference between the starting and final lengths (s_{k1} − s_{k0}) determines the slope of linear development, which may be positive (growth)
or negative (shrinkage). The current volume of the k th voxel of an
Evo-Devo robot is then determined by the following.
V_k(t) = [r_k(t)^{1/3} + ã(t, r_k(t)^{1/3})]^3   (7)
Hence the starting resting volume, vk 0 , and final resting volume,
vk 1 , are the current volumes at t = 0 and t = τ , respectively.
v_{k0} = V_k(0) = s_{k0}^3,   v_{k1} = V_k(τ) = s_{k1}^3   (8)
Note that an Evo gene is a special case of an Evo-Devo gene where
sk 0 = sk 1 , or, equivalently, where vk0 = vk 1 .
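The volume model of eqs. (2)–(8) can be sketched in a few lines of Python. This is our own minimal rendering of the formulas, not the simulator's code; the constants come from the text (u = 0.20, w = 0.25 s, and a lifetime of τ = 8 s), and all function names are ours.

```python
import math

U, W_PERIOD, TAU = 0.20, 0.25, 8.0  # amplitude, actuation period (s), lifetime (s)

def damping(s):
    """Actuation limiter of eq. (3): zero at s = 0.25, one for s >= 1."""
    return 1.0 if s >= 1.0 else (4.0 * s - 1.0) / 3.0

def actuation(t, s):
    """Damped sinusoidal actuation, eqs. (2) and (4)."""
    return U * math.sin(2.0 * math.pi * t / W_PERIOD) * damping(s)

def evo_volume(t, s):
    """Current volume of an Evo voxel with resting length s, eq. (5)."""
    return (s + actuation(t, s)) ** 3

def evo_devo_resting_volume(t, s0, s1):
    """Ballistic developmental resting volume, eq. (6)."""
    return (s0 + (s1 - s0) * t / TAU) ** 3

def evo_devo_volume(t, s0, s1):
    """Current volume of an Evo-Devo voxel, eq. (7)."""
    s = evo_devo_resting_volume(t, s0, s1) ** (1.0 / 3.0)  # current resting length
    return (s + actuation(t, s)) ** 3
```

Setting s0 = s1 makes `evo_devo_volume` agree with `evo_volume` at every t, which is exactly the special case noted above.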
For convenience, let’s define the current total volume of the
robot across all 48 voxels as Q(t).
Q(t) = 2 Σ_{k=1}^{24} V_k(t)   (9)
We track the y position of the center of mass, y(t), as well as the
current total volume, Q(t), at n discrete intervals within the lifetime
of a robot. Fitness, F, is the sum of the distance traveled in each time interval, divided by the average volume in that interval:

F = 2 Σ_{t=1}^{n} [y(t) − y(t−1)] / [Q(t) + Q(t−1)]   (10)
We track y(t) and Q(t) 100 times per second. Since robots are
evaluated for eight seconds, n = 800.
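Eq. (10) is straightforward to compute from the sampled trajectories; a minimal sketch (our naming, not the paper's code):

```python
def fitness(y, q):
    """Volume-normalized fitness, eq. (10). y[t] is the center-of-mass y
    position and q[t] the total volume Q(t), sampled at t = 0, 1, ..., n."""
    return 2.0 * sum((y[t] - y[t - 1]) / (q[t] + q[t - 1])
                     for t in range(1, len(y)))
```

A robot of constant volume 48 that travels 48 units scores exactly 1, as does a constant-volume-12 robot that travels 12 units, matching the example given earlier.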
2.2 A direct encoding.
This paper differs from previous evolutionary robotics work that
used Voxelyze [4–7] in that we evolve the volumes of a fixed collection of voxels, rather than the presence/absence of voxels in
a bounding region. Another difference is that we do not employ
the CPPN-NEAT evolutionary algorithm [19], but instead use a
direct encoding with bilateral symmetry about the y axis. A comparison of encodings in our scenario is beyond the scope of this
paper. However we noticed that the range of evolved morphologies
here, under our particular settings, was much smaller than that of
previous work which used voxels as building blocks, and that it is
easier to reach extreme volumes for individual voxels using a direct
encoding.
Apart from the difference in encoding, this work is by and large
consistent with this previous work. We use the same physical environment as Cheney et al. [6]: a wide-open flat plain. The material
properties of our voxels are also consistent with the ‘muscle’ voxel
type from the palette in this work; although these voxels had a
fixed resting volume of one (sk = 1 for all k). Our developmental
mechanism is strongly based on Corucci et al. [7], which used
volumetric deformation in a closed-loop pointing task.
2.3 Evolutionary search.
We employ a standard evolutionary algorithm, Age-Fitness-Pareto
Optimization (AFPO, [18]), which uses the concept of Pareto dominance and an objective of age (in addition to fitness) intended to
promote diversity among candidate designs. For 30 runs, a population of 30 robots is evolved for 2000 generations. Every generation,
the population is first doubled by creating modified copies of each
individual in the population. Next, an additional random individual is injected into the population. Finally, selection reduces the
population down to its original size according to the two objectives
of fitness (maximized) and age (minimized).
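The selection step rests on Pareto dominance over these two objectives. The following is a minimal sketch of that comparison in our own names, not AFPO's [18] actual implementation:

```python
from collections import namedtuple

Robot = namedtuple("Robot", "fitness age")

def dominates(a, b):
    """a Pareto-dominates b: no worse on both objectives (fitness is
    maximized, age is minimized) and strictly better on at least one."""
    return (a.fitness >= b.fitness and a.age <= b.age
            and (a.fitness > b.fitness or a.age < b.age))

def pareto_front(population):
    """Individuals dominated by no one else; these survive selection first."""
    return [p for p in population if not any(dominates(q, p) for q in population)]
```

A young individual with low fitness is incomparable to an old, fit one, which is what lets newly injected random individuals survive alongside mature lineages.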
The same number of parent voxels are mutated, on average,
in both Evo and Evo-Devo children. Mutations follow a normal
distribution (σ = 0.75) and are applied by first choosing what
parameter types to mutate, and then choosing which voxels to
mutate. For Evo robots, we simply visit each voxel (on the positive
x side) of the parent and, with probability 0.5, mutate its single
parameter value. For Evo-Devo parents, we flip a coin for each
parameter to be mutated (if neither will be mutated, flip a final coin
to choose one or the other). This results in a 25% chance of mutating
both, and a 37.5% chance of mutating each of the two individual
parameters alone. Then we apply the same mutation process as
before in Evo robots: loop through each voxel of the parent and,
with probability 0.5, mutate the selected parameter(s).
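The mutation scheme just described can be sketched as follows. The clamping of resting lengths to [0.25, 1.75] is our assumption based on the range stated in Section 2.1, and the exact sampling details may differ from the released code:

```python
import random

LOW, HIGH = 0.25, 1.75  # assumed resting-length bounds (see Section 2.1)

def mutate_evo_devo(genome, sigma=0.75, p=0.5):
    """Sketch of the Evo-Devo mutation operator. genome is a list of
    (s_k0, s_k1) pairs, one per voxel on the positive-x side."""
    mutate0 = random.random() < 0.5       # mutate starting lengths?
    mutate1 = random.random() < 0.5       # mutate final lengths?
    if not (mutate0 or mutate1):          # neither chosen: flip a final coin
        mutate0 = random.random() < 0.5
        mutate1 = not mutate0
    child = []
    for s0, s1 in genome:
        if random.random() < p:           # each voxel mutated with probability p
            if mutate0:
                s0 = min(max(s0 + random.gauss(0.0, sigma), LOW), HIGH)
            if mutate1:
                s1 = min(max(s1 + random.gauss(0.0, sigma), LOW), HIGH)
        child.append((s0, s1))
    return child
```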
2.4 An artificially rugged landscape.
We did not fine-tune the mutation hyperparameters (scale and
probability), but intentionally chose a relatively high probability of
mutation in order to elicit a large mutational impact in an attempt
to render evolutionary search more difficult. This removes easy to
follow gradients in the search space — ‘compressing’ gentle slopes
into abrupt cliffs — which make ‘good designs’ more difficult to find.
Any one of these good solutions then, to a certain extent, becomes like Hinton & Nowlan's 'needle in a haystack' [14].
Note that there are other ways to enforce rugged fitness landscapes, and such landscapes are naturally occurring in many systems, though our particular task/environment is not one of them.
Future work should investigate these tasks and environments with
a fine-tuned mutation rate.
Figure 3: One thousand randomly generated robots for each group. The horizontal axes measure fitness: volume-normalized distance in the positive y direction. The best overall designs are the best Evo robots since they maintain their good form as they behave. However, most designs are immobile (mode at zero), and Evo-Devo robots are more likely to move (less mass around zero) since they explore a continuum of body plans rather than a single static guess.
3 RESULTS
In this section we present the results of our experiments2 and
indicate statistical significance under the Mann-Whitney U test
where applicable.
3.1 Random search.
To get a sense of the evolutionary search space, prior to optimization, we randomly generated one thousand robots from each group
(figure 3). The horizontal axes of figure 3 measure the fitness (equation 10) of our randomly generated designs. The top portion of this
figure plots the histogram of relative frequencies, using equal bin
sizes between groups. The mode is zero for both groups, meaning
that the majority of designs are immobile.
The best possibility here is to randomly guess a good Evo robot
since this good morphology is utilized for the full 32 actuation cycles.
This is why the best random designs are Evo robots. However, the
Evo-Devo distribution contains much less mass around zero than
the Evo distribution. It follows that it is more likely that an EvoDevo robot moves at all, if only temporarily, since this only requires
some interval of the many morphologies it sweeps over to be mobile.
Also note that while the total displacement may be lower in the
Evo-Devo case, since these robots ‘travel’ through a number of
different morphologies, they may pass through those which run
at a higher instantaneous speed (but spend less of their lifetime in
this morphology).
2 https://youtu.be/gXf2Chu4L9A
directs to a video overview of our experiments.
3.2 Evolution.
The results of the evolutionary algorithm are displayed in figure
4a. In the earliest generations, evolution is consistent with random
search and the best Evo robots start off slightly better than the
best Evo-Devo robots. However, the best Evo-Devo robots quickly
overtake the best Evo robots. At the end of optimization there is a
significant difference between Evo and Evo-Devo run champions
(U = 122, p < 0.001).
We also chose to reevaluate Evo-Devo robots with their development frozen at their median ontogenetic morphologies (figure 4b).
For each robot, we measure the robot’s fitness (equation 10) at this
midlife morphology with development frozen, for two seconds. Selection is completely blind to this frozen evaluation. It exists solely
for the purpose of post-evolution analysis, and serves primarily
as a sanity check to make sure Evo-Devo robots are not explicitly
utilizing their ability to grow/shrink to move faster.
Development appears to inhibit locomotion to some degree as
the best morphologies run slightly faster with development turned
off, particularly in earlier generations. A significant difference,
at the 0.001 level, between Evo robots and Evo-Devo robots with
development frozen at midlife, occurs after only 108 generations
compared to 255 generations with development enabled. Note
that the midlife morphology is not necessarily the top speed of
an Evo-Devo robot. In fact it is almost certainly not the optimal
ontogenetic form since the best body plan may occur at any point
in its continuous ontogeny, including the start and endpoints.
3.3 Closing the window.
Once an Evo-Devo robot identifies a good body plan in its ontogenetic sweep, its descendants can gain fitness by ‘suppressing’ development around the good plan through heterochronic mutations.
This can be accomplished by incrementally closing the developmental window, the interval (sk 0 , sk 1 ), for each voxel, around the good
morphology. In the limit, under a fixed environment, this process
ends with a descendant born with the good design from the start and devoid of any development at all (s_{k0} = s_{k1} for all voxels). This
phenomenon, best known as the Baldwin Effect, is instrumental in
evolution because natural selection is a hill-climbing process and
therefore blind to needles in a haystack, good designs (local optima)
to which no gradient of increased fitness leads. The developmental
sweep, however, alters the search space in which evolution operates,
surrounding the good design by a slope which natural selection
can climb [14].
To investigate the relationship between development and fitness,
we add up all of the voxel-level development windows to form an individual-level summary statistic, W. We define the total development window, W, as the sum of the absolute differences of starting
and final resting lengths across the robot’s 48 voxels.
W = Σ_{k=1}^{48} |s_{k1} − s_{k0}|   (11)
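Under the bilateral symmetry of Section 2.1, the 48-voxel sum in eq. (11) reduces to twice the sum over the 24 genome pairs; a one-line sketch (our naming):

```python
def development_window(genome):
    """Total development window W of eq. (11), for a genome of 24 mirrored
    (s_k0, s_k1) pairs; the factor 2 accounts for bilateral symmetry."""
    return 2.0 * sum(abs(s1 - s0) for s0, s1 in genome)
```

An Evo robot (or a fully canalized Evo-Devo robot) has W = 0 by construction.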
Overall there is a strong negative correlation between fitness,
F , and the total development window, W , in Evo-Devo robots
(figure 5). To achieve the highest fitness values a robot needs to
have narrow developmental windows at the voxel level. However,
this statistic doesn’t discriminate between open/closed windows
Figure 4: For thirty runs, a population of thirty robots is evolved for two thousand generations. (a) Best-of-generation fitness for Evo and Evo-Devo robots. (b) The same robots are reevaluated with development frozen at their midlife morphology. Means are plotted with 95% bootstrapped confidence intervals.
early/late in evolution. To show what sorts of development window/evolutionary time relationships eventually lead to highly fit
individuals, we grab the lineages of only the most fit individuals
at the end of evolutionary time (figure 6). In the most fit individuals, development windows tend to first increase slightly in
phylogeny before decreasing to their minimum, or close to it.
The age objective in AFPO lowers the selection pressure on younger
individuals which allows them to explore, through larger developmental windows, a larger portion of design space until someone in
the population discovers a locally optimal solution which creates a
new selection pressure for descendants with older genetic material
to ‘lock in’ or canalize this form with smaller developmental windows. These results further suggests that development itself is not
optimal, it is only helpful in that it can lead to better optima down
the road once the window is closed.
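The age-fitness Pareto dynamic described above follows AFPO [18]. A minimal sketch of one generation, assuming a dictionary-based individual representation; the `genome`, `mutate`, and `evaluate` details here are illustrative stand-ins, not the operators used in the experiments:

```python
import random

def dominates(a, b):
    """a dominates b if it is no worse on both objectives
    (higher fitness, lower age) and strictly better on at least one."""
    return (a['fitness'] >= b['fitness'] and a['age'] <= b['age']
            and (a['fitness'] > b['fitness'] or a['age'] < b['age']))

def afpo_step(population, evaluate, mutate, pop_size):
    """One generation of age-fitness Pareto optimization (sketch)."""
    # Everyone grows older; each parent contributes one mutated child.
    for ind in population:
        ind['age'] += 1
    children = [mutate(ind) for ind in population]
    # Inject one random individual with age 0 to keep exploring.
    newcomer = {'genome': random.random(), 'age': 0}
    pool = population + children + [newcomer]
    for ind in pool:
        ind['fitness'] = evaluate(ind['genome'])
    # Keep the non-dominated individuals, then refill to the target size.
    front = [a for a in pool
             if not any(dominates(b, a) for b in pool if b is not a)]
    while len(front) < pop_size:
        front.append(random.choice(pool))
    return front[:pop_size]
```

Young newcomers cannot be dominated by old, fit individuals on the age objective, which is what lowers the selection pressure on them and lets their lineages explore wider developmental windows.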
3.4 The effect of mutations.
In addition to the parameter-sweeping nature of its search, developmental time provides evolution with a simple mechanism for inducing mutations with a range of magnitudes of phenotypic impact. The overall mutation impact in our experiments is conveyed in figure 7 through 2D histograms of child and parent fitness. Recall that a child is created through mutation by each individual (parent) in the current population. These plots include the entire evolutionary history of all robots in every run. There are so few robots with negative fitness that the histograms need not
Figure 5: The relationship between the amount of development at the individual level (W) and fitness (F). The fastest individuals have small developmental windows surrounding a fast body plan.
extend into this region since they contain practically zero density
and would appear completely white.
The diagonal represents equal parent and child fitness, a behaviorally neutral mutation. Hexagons below the diagonal represent
detrimental mutations: lower child fitness relative to that of its parent. Hexagons above the diagonal represent beneficial mutations:
higher child fitness relative to that of its parent. Mutations are generally detrimental for both groups, particularly in later generations
once evolution has found a working solution. For Evo robots (figure 7a), most if not all of the mass in the marginal density of child
speed is concentrated around zero. This means that mutations to an
Evo robot are almost certain to break the existing parent solution,
rendering a previously mobile design immobile.
The majority of Evo-Devo children, however, are generally concentrated on, or just below, the diagonal in figure 7b. This general pattern holds even in later generations when evolution has found
working solutions with high fitness. It follows that mutations to
an Evo-Devo robot may be phenotypically smaller than mutations
to an Evo robot, even though they use the same mutation operator.
Furthermore, figure 7b displays a high frequency of mutations with a wide range of magnitudes of phenotypic impact: smaller, low-risk mutations, which are useful for refining mobile designs, as well as larger, higher-risk mutations, which occasionally provide the high reward of jumping into the neighborhood of a more fit local optimum at a range of distances in the fitness landscape.
Now let us define the impact of developmental mutations, M, as the relative difference in child (F_C) and parent (F_P) fitnesses, for
A Minimal Developmental Model Can Increase Evolvability in Soft Robots
prefer Evo robots (with a low mutation rate), since they reside in a smaller search space. But how low should the mutation rate be? It may in fact be difficult to know a priori which mutation rate is optimal. It is also important to recognize that while we use mutation rate here to artificially tune the ruggedness of the fitness landscape, in a naturally rugged landscape we presumably would not have direct access to such an easily tunable parameter to 'undo', or smooth out, the ruggedness.
Moreover, we know that there exist contexts in which developmental flexibility can permit the local speeding up of the basic, slow process of natural selection, thanks to the Baldwin Effect [9]. Our new data suggest that even open-loop morphological change increases the probability of randomly finding (and subsequently 'locking in') a mobile design (figure 3), and that this probability increases with the amount of change (figure 6) even though ballistic development and fitness are inversely correlated (figure 5). The staticity of Evo robots prevents this local speed-up, which can place them at a significant disadvantage in rugged fitness landscapes.
Figure 6: Closing the window. Total development window trajectories (in phylogeny) of the lineages of the most fit individuals in each run. Phylogenetic time goes from left to right: from the oldest ancestor (randomly created) to its most recent descendant, the current run champion.
positive fitnesses only:

$$M = \frac{F_C}{F_P} - 1; \qquad F_C, F_P > 0 \qquad (12)$$
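A minimal sketch of equation 12 (the function name is illustrative):

```python
def mutation_impact(child_fitness, parent_fitness):
    """Relative fitness change of a child with respect to its parent
    (equation 12); defined for positive fitnesses only."""
    if child_fitness <= 0 or parent_fitness <= 0:
        raise ValueError("M is defined for positive fitnesses only")
    return child_fitness / parent_fitness - 1.0

# A child 10% slower than its parent yields M = -0.1 (detrimental mutation);
# a child that exactly matches its parent yields M = 0 (neutral mutation).
```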
Then the average mutational impact for early-in-the-life mutations (any mutations that, at least in part, modify initial volumes) is M_0 = -0.29, while the average mutational impact for late-in-the-life mutations (those that modify final volumes) is M_1 = -0.10. Although both types of mutations are detrimental on average, later-in-life mutations are more beneficial (less detrimental) on average (p < 0.001). This makes sense in a task with dependent time steps, since a child created through a late-in-life mutation will at least start out with the same behavior as its parent and then slowly diverge over its life, whereas an early-in-life mutation creates a behavioral change at t = 0.
4 CONCLUSION
In this paper we introduced a minimal yet embodied model of
development in order to isolate the intrinsic effect of morphological change in ontogenetic time, without the confounding effects of environmental mediation. Even our simple developmental
model naturally provides a continuum in terms of the magnitude
of mutational phenotypic impact, from the very large (caused by
early-in-life developmental mutations) to the very small (caused
by late-in-life mutations). We predict that, because of this, such a
developmental system will be more evolvable than an equivalent
non-developmental system because the latter lacks this inherent
spectrum in the magnitude of mutational impacts.
We showed that even without any sensory feedback, open-loop
development can confer evolvability because it allows evolution to
sweep over a much larger range of body plans. Our results suggest
that widening the span of the developmental sweep increases the
likelihood of stumbling across locally optimal designs otherwise
invisible to natural selection, which automatically creates a new
selection pressure to canalize development around this good form.
This implies that species with completely blind developmental plasticity tend to evolve faster and more ‘clearsightedly’ than those
without it.
Future work will involve closing the developmental feedback
loop with as little additional machinery as possible to determine
when and how such added complexity increases evolvability.
3.5 The necessity of development.
In attempting to induce a needle-in-the-haystack fitness landscape,
as a proof of concept, we intentionally set the mutation rate and
scale fairly high. A low-resolution hyperparameter sweep (figure
8) indicates that the efficacy of ballistic development is indeed
dependent on the mutation rate: there is no significant difference
between Evo and Evo-Devo at either very low or very high rates.
Higher fitness values are obtained through smaller mutation rates,
which raises the question: Is development useful only in its ability
to decrease the phenotypic impact of mutations? If so we might
5 ACKNOWLEDGEMENTS
We would like to acknowledge financial support from NSF awards
PECASE-0953837 and INSPIRE-1344227 as well as the Army Research Office contract W911NF-16-1-0304. N. Cheney is supported
by NASA Space Technology Research Fellowship #NNX13AL37H.
F. Corucci is supported by grant agreement #604102 (Human Brain
Project) funded by the European Union Seventh Framework Programme (FP7/2007-2013). We also acknowledge computation provided by the Vermont Advanced Computing Core.
Figure 7: Mutation impact: child fitness by parent fitness (vol-normed distance). The diagonal represents a neutral mutation,
equivalent child and parent fitness. Hexagon bins below the diagonal represent detrimental mutations (child less fit than its
parent); bins above the diagonal represent beneficial mutations (child more fit than its parent).
Figure 8: A hyperparameter sweep of mutation rate: a probability dictating the average number of voxels mutated in an
individual robot.
REFERENCES
[1] Randall D Beer. 1996. Toward the evolution of dynamical neural networks for
minimally cognitive behavior. From animals to animats 4 (1996), 421–429.
[2] Josh C Bongard. 2011. Morphological change in machines accelerates the evolution of robust behavior. Proceedings of the National Academy of Sciences 108,
4 (2011), 1234–1239.
[3] Josh C Bongard and Rolf Pfeifer. 2001. Repeated Structure and Dissociation of
Genotypic and Phenotypic Complexity in Artificial Ontogeny. Proceedings of
The Genetic and Evolutionary Computation Conference (2001), 829–836.
[4] Nick Cheney, Josh C Bongard, and Hod Lipson. 2015. Evolving soft robots
in tight spaces. In Proceedings of the 2015 annual conference on Genetic and
Evolutionary Computation. ACM, 935–942.
[5] Nick Cheney, Jeff Clune, and Hod Lipson. 2014. Evolved electrophysiological
soft robots. In ALIFE, Vol. 14. 222–229.
[6] Nick Cheney, Robert MacCurdy, Jeff Clune, and Hod Lipson. 2013. Unshackling
evolution: evolving soft robots with multiple materials and a powerful generative encoding. In Proceedings of the 15th annual conference on Genetic and
evolutionary computation. ACM, 167–174.
[7] Francesco Corucci, Nick Cheney, Hod Lipson, Cecilia Laschi, and Josh C Bongard.
2016. Material properties affect evolution’s ability to exploit morphological
computation in growing soft-bodied creatures. In ALIFE 15. 234–241.
[8] Richard Dawkins. 1982. The extended phenotype: The long reach of the gene.
Oxford University Press.
[9] Daniel C Dennett. 2003. The Baldwin effect: A crane, not a skyhook. Evolution
and learning: The Baldwin effect reconsidered (2003), 60–79.
[10] Keith L Downing. 2004. Development and the Baldwin effect. Artificial Life 10,
1 (2004), 39–63.
[11] Peter Eggenberger. 1997. Evolving morphologies of simulated 3D organisms
based on differential gene expression. Procs. of the Fourth European Conf. on
Artificial Life (1997), 205–213.
[12] Ronald Aylmer Fisher. 1930. The genetical theory of natural selection. Oxford
University Press.
[13] Jonathan Hiller and Hod Lipson. 2014. Dynamic simulation of soft multimaterial
3d-printed objects. Soft Robotics 1, 1 (2014), 88–101.
[14] Geoffrey E Hinton and Steven J Nowlan. 1987. How learning can guide evolution.
Complex systems 1, 3 (1987), 495–502.
[15] Kostas Kouvaris, Jeff Clune, Loizos Kounios, Markus Brede, and Richard Watson.
2017. How evolution learns to generalise: Using the principles of learning theory
to understand the evolution of developmental organisation. PLoS Computational
Biology (2017), 1–41.
[16] Julian Francis Miller. 2004. Evolving a self-repairing, self-regulating, French
flag organism. In Genetic and Evolutionary Computation Conference. Springer,
129–139.
[17] Armin P Moczek, Sonia Sultan, Susan Foster, Cris Ledón-Rettig, Ian Dworkin,
H Fred Nijhout, Ehab Abouheif, and David W Pfennig. 2011. The role of developmental plasticity in evolutionary innovation. Proceedings of the Royal Society
of London B: Biological Sciences 278, 1719 (2011), 2705–2713.
[18] Michael Schmidt and Hod Lipson. 2011. Age-Fitness Pareto Optimization.
Springer New York, New York, NY, 129–146.
[19] Kenneth O Stanley. 2007. Compositional pattern producing networks: A novel
abstraction of development. Genetic Programming and Evolvable Machines 8,
2 (2007), 131–162.
BUILDING EFFECTIVE DEEP NEURAL NETWORK ARCHITECTURES ONE FEATURE AT A TIME

arXiv:1705.06778v2 [] 19 Oct 2017
Martin Mundt
Frankfurt Institute for Advanced Studies
Ruth-Moufang-Str. 1
60438 Frankfurt, Germany
[email protected]
Tobias Weis
Goethe-University Frankfurt
Theodor-W.-Adorno-Platz 1
60323 Frankfurt, Germany
[email protected]
Kishore Konda
INSOFE
Janardana Hills, Gachibowli
500032 Hyderabad, India
[email protected]
Visvanathan Ramesh
Frankfurt Institute for Advanced Studies
Ruth-Moufang-Str. 1
60438 Frankfurt, Germany
[email protected]
ABSTRACT
Successful training of convolutional neural networks is often associated with sufficiently deep architectures composed of high amounts of features. These networks typically rely on a variety of regularization and pruning techniques to converge to less redundant states. We introduce a novel bottom-up approach to expand representations in fixed-depth architectures. These architectures start from just a single feature per layer and greedily increase the width of individual layers to attain the effective representational capacity needed for a specific task. While network growth can rely on a family of metrics, we propose a computationally efficient version based on feature time evolution and demonstrate its potency in determining feature importance and a network's effective capacity. We demonstrate how automatically expanded architectures converge to similar topologies that benefit from fewer parameters or improved accuracy and exhibit systematic correspondence in representational complexity with the specified task. In contrast to conventional design patterns, with a typical monotonic increase in the amount of features with increased depth, we observe that CNNs perform better when there are more learnable parameters in intermediate layers, with falloffs towards earlier and later layers.
1 INTRODUCTION
Estimating and consequently adequately setting representational capacity in deep neural networks for any given task has been a long-standing challenge. Fundamental understanding still seems to be insufficient to rapidly decide on suitable network sizes and architecture topologies. While widely
adopted convolutional neural networks (CNNs) such as proposed by Krizhevsky et al. (2012); Simonyan & Zisserman (2015); He et al. (2016); Zagoruyko & Komodakis (2016) demonstrate high
accuracies on a variety of problems, the memory footprint and computational complexity vary.
An increasing amount of recent work is already providing valuable insights and proposing new
methodology to address these points. For instance, the authors of Baker et al. (2016) propose a
reinforcement learning based meta-learning approach to have an agent select potential CNN layers
in a greedy, yet iterative fashion. Other suggested architecture selection algorithms draw their inspiration from evolutionary synthesis concepts (Shafiee et al., 2016; Real et al., 2017). Although the
former methods are capable of evolving architectures that rival those crafted by human design, it is
currently only achievable at the cost of navigating large search spaces and hence excessive computation and time. As a trade-off in present deep neural network design processes it thus seems plausible
to consider layer types or depth of a network to be selected by an experienced engineer based on
prior knowledge and former research. A variety of techniques therefore focus on improving already
well established architectures. Procedures ranging from distillation of one network’s knowledge into
another (Hinton et al., 2014), compressing and encoding learned representations (Han et al., 2016),
pruning alongside potential retraining of networks (Han et al., 2015; 2017; Shrikumar et al., 2016;
Hao et al., 2017) and the employment of different regularization terms during training (He et al.,
2015; Kang et al., 2016; Rodriguez et al., 2017; Alvarez & Salzmann, 2016), are just a fraction of
recent efforts in pursuit of reducing representational complexity while attempting to retain accuracy.
Underlying mechanisms rely on a multitude of criteria such as activation magnitudes (Shrikumar
et al., 2016) and small weight values (Han et al., 2015) that are used as pruning metrics for either
single neurons or complete feature maps, in addition to further combination with regularization and
penalty terms.
Common to these approaches are the necessity of training networks with large parameter quantities to full convergence for maximum representational capacity, and the lack of early identification of insufficient capacity. In contrast, this work proposes a bottom-up approach with the following contributions:
• We introduce a computationally efficient, intuitive metric to evaluate feature importance at
any point of training a neural network. The measure is based on feature time evolution,
specifically the normalized cross-correlation of each feature with its initialization state.
• We propose a bottom-up greedy algorithm to automatically expand fixed-depth networks
that start with one feature per layer until adequate representational capacity is reached. We
base addition of features on our newly introduced metric due to its computationally efficient
nature, while in principle a family of similarly constructed metrics is imaginable.
• We revisit popular CNN architectures and compare them to automatically expanded networks. We show how our architectures systematically scale with the complexity of different datasets and either maintain their reference accuracy with fewer parameters or achieve better results through increased network capacity.
• We provide insights on how evolved network topologies differ from their reference counterparts, where conventional design commonly increases the amount of features monotonically with increasing network depth. We observe that expanded architectures exhibit increased feature counts at early to intermediate layers and then proceed to decrease in complexity.
2 BUILDING NEURAL NETWORKS BOTTOM-UP FEATURE BY FEATURE
While the choice and size of a deep neural network model indicate the representational capacity and thus determine which functions can be learned to improve training accuracy, training of neural networks is further complicated by the complex interplay of the choice of optimization algorithm and model regularization. Together, these factors define the effective capacity. This makes training of deep neural networks particularly challenging. One practical way of addressing this challenge is to boost model sizes at the cost of increased memory and computation times and then apply strong regularization to avoid over-fitting and minimize generalization error. However, this approach seems unnecessarily cumbersome and relies on the assumption that optimization difficulties are not encountered. We draw inspiration from this challenge and propose a bottom-up approach to increase capacity in neural networks, along with a new metric to gauge the effective capacity in the training of (deep) neural networks with stochastic gradient descent (SGD) algorithms.
2.1 NORMALIZED WEIGHT-TENSOR CROSS-CORRELATION AS A MEASURE FOR NEURAL NETWORK EFFECTIVE CAPACITY
In SGD the objective function J(Θ) is commonly equipped with a penalty on the parameters R(Θ), yielding a regularized objective function:

$$\hat{J}(\Theta) = J(\Theta) + \alpha R(\Theta). \qquad (1)$$
Here, α weights the contribution of the penalty. The regularization term R(Θ) is typically chosen as an L2-norm, coined weight decay, to decrease model capacity, or as an L1-norm to enforce sparsity. Methods like dropout (Srivastava et al., 2014) and batch normalization (Ioffe & Szegedy, 2015) are typically employed as further implicit regularizers.
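As a minimal illustration (names are ours, not the paper's), equation 1 with R(Θ) chosen as the squared L2 norm, the usual weight-decay convention, reads:

```python
def regularized_objective(loss, params, alpha):
    """Equation 1 with R(Theta) taken as the squared L2 norm of the
    parameters (weight decay). `loss` is J(Theta) on the current batch."""
    penalty = sum(w * w for w in params)
    return loss + alpha * penalty

# alpha = 0 recovers the unregularized objective J(Theta).
```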
In principle, our rationale is inspired by earlier works of Hao et al. (2017) who measure a complete
feature’s importance by taking the L1 -norm of the corresponding weight tensor instead of operating
on individual weight values. In the same spirit, we assign a single importance value to each feature based on its values. However, we do not use the weight magnitude directly and instead base our metric on the following hypothesis: while a feature's absolute magnitude or relative change between two subsequent points in time might not be an adequate measure of direct importance, the relative amount of change a feature experiences with respect to its original state provides an indicator for how many times and how much a feature is changed when presented with data. Intuitively, we
suggest that features that experience high structural changes must play a more vital role than any
feature that is initialized and does not deviate from its original states’ structure. There are two
potential reasons why a feature that has randomly been initialized does not change in structure: The
first being that its form is already initialized so well that it does not need to be altered and can serve
either as is or after some scalar rescaling or shift in order to contribute. The second possibility is that
too high representational capacity, the nature of the cost function, too large regularization or the type
of optimization algorithm prohibit the feature from being learned, ultimately rendering it obsolete.
As deep neural networks are commonly initialized using a distribution over a high-dimensional space, the first possibility seems unlikely (Goodfellow et al., 2016).
As one way of measuring the effective capacity at a given state of learning, we propose to monitor
the time evolution of the normalized cross-correlation for all weights with respect to their state
at initialization. For a convolutional neural network composed of layers l = 1, 2, . . . , L − 1 and
complementary weight-tensors Wfl l j l kl f l+1 with spatial dimensions j l × k l defining a mapping
from an input feature-space f l = 1, 2, ...F l onto the output feature space f l+1 = 1, 2, ...F l+1 that
serves as input to the next layer, we define the following metric:
h
i
P
l
l
l
l
W
−
W̄
◦
W
−
W̄
l
l
l
l
l
l
l+1
l+1
l
l
l
l+1
l+1
f ,j ,k
f j k f
,t0
f
,t0
f j k f
,t
f
,t
clf l+1 ,t = 1 −
(2)
l
l
Wf l j l kl ,t0
· Wf l j l kl ,t
2,f l+1
2,f l+1
which is a measure of self-resemblance. In this equation, $W^l_{f^l j^l k^l f^{l+1}, t}$ is the state of a layer's weight tensor at time $t$, with $t_0$ denoting the initial state after initialization. $\bar{W}^l_{f^{l+1}, t}$ is the mean taken over spatial and input-feature dimensions. $\circ$ depicts the Hadamard product, which we use in an extended fashion from matrices to tensors, where each dimension is multiplied element-wise. Similarly, the terms in the denominator are defined as the L2-norm of the weight tensor taken over said dimensions, thus resulting in a scalar value. The above equation can be defined analogously for multi-layer perceptrons by omitting the spatial dimensions.
The metric is easily interpretable: no structural change of a feature leads to a value of zero, and importance approaches unity the more a feature deviates in structure. The usage of normalized cross-correlation with the L2-norm in the denominator has the advantage of an inherent invariance to effects such as translations or rescalings of weights stemming from various regularization contributions. Therefore the contribution of the penalty term in equation 1 does not change the value of the metric if the gradient term vanishes. This is in contrast to the measure proposed by Hao et al. (2017), as absolute weight magnitudes are affected by rescaling, which makes it more difficult to interpret the metric in an absolute way and to find corresponding thresholds.
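As a concrete illustration, the metric can be computed per output feature from the current and initial weight tensors. The following NumPy sketch assumes weights of shape (F^{l+1}, F^l, j^l, k^l) and uses the textbook centered normalization in the denominator, which makes an unchanged or merely rescaled feature score exactly zero, matching the interpretation above; the paper's exact normalization convention may differ:

```python
import numpy as np

def self_resemblance(w_t0, w_t, eps=1e-12):
    """One value per output feature, in the spirit of equation 2:
    1 minus the normalized cross-correlation of each feature's weight
    tensor with its state at initialization t0. Inputs have shape
    (out_features, in_features, height, width)."""
    f_out = w_t0.shape[0]
    a = w_t0.reshape(f_out, -1)
    b = w_t.reshape(f_out, -1)
    # Center over input-feature and spatial dimensions (the W-bar terms).
    a = a - a.mean(axis=1, keepdims=True)
    b = b - b.mean(axis=1, keepdims=True)
    denom = np.linalg.norm(a, axis=1) * np.linalg.norm(b, axis=1) + eps
    return 1.0 - (a * b).sum(axis=1) / denom

rng = np.random.default_rng(0)
w0 = rng.standard_normal((4, 3, 3, 3))

# Unchanged or merely rescaled features register no structural change ...
print(self_resemblance(w0, w0).round(6))        # -> [0. 0. 0. 0.]
print(self_resemblance(w0, 2.0 * w0).round(6))  # -> [0. 0. 0. 0.]

# ... while a feature replaced by fresh noise scores far from zero.
w1 = w0.copy()
w1[0] = rng.standard_normal((3, 3, 3))
```

Since the computation only multiplies weight tensors with themselves, it parallelizes trivially across layers and features, as the paper notes.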
2.2 BOTTOM-UP CONSTRUCTION OF NEURAL NETWORK REPRESENTATIONAL CAPACITY
We propose a new method to converge to architectures that encapsulate necessary task complexity
without the necessity of training huge networks in the first place. Starting with one feature in each
layer, we expand our architecture as long as the effective capacity as estimated through our metric
is not met and all features experience structural change. In contrast to methods such as Baker et al.
(2016); Shafiee et al. (2016); Real et al. (2017) we do not consider flexible depth and treat the amount
of layers in a network as a prior based on the belief of hierarchical composition of the underlying
factors. Our method, shown in algorithm 1, can be summarized as follows:
1. For a given network arrangement in terms of function type, depth and a set of hyper-parameters: initialize each layer with one feature and proceed with (mini-batch) SGD.
2. After each update step, evaluate equation 2 independently per layer and increase feature dimensionality by F_exp (one or higher if a complexity prior exists) if all currently present features in the respective layer differ from their initial state by more than a constant ε.
3. Re-initialize all parameters if the architecture has expanded.
Algorithm 1 Greedy architecture feature expansion algorithm

Require: Set hyper-parameters: learning rate λ0, mini-batch size, maximum epoch t_end, ...
Require: Set expansion parameters: ε = 10^-6, F_exp = 1 (or higher)
 1: Initialize parameters: t = 1, F^l = 1 ∀ l = 1, 2, ..., L - 1, Θ, reset = false
 2: while t ≤ t_end do
 3:   for mini-batches in training set do
 4:     reset ← false
 5:     Compute gradient and perform update step
 6:     for l = 1 to L - 1 do
 7:       for i = f^{l+1} to F^{l+1} do        (in parallel)
 8:         Update c^l_{i,t} according to equation 2
 9:       end for
10:       if max(c^l_t) < 1 - ε then
11:         F^{l+1} ← F^{l+1} + F_exp
12:         reset ← true
13:       end if
14:     end for
15:     if reset == true then
16:       Re-initialize parameters Θ, t = 0, λ = λ0, ...
17:     end if
18:   end for
19:   t ← t + 1
20: end while
The constant ε is a numerical stability parameter that we set to a small value such as 10^-6, but it could in principle also be used as a constraint. We have decided to include the re-initialization in step 3 (lines 15-17) to avoid the pitfalls of falling into local minima1. Despite this sounding like a major detriment to our method, we show that networks nevertheless rapidly converge to a stable architectural solution that comes at less computational overhead than one might expect, and at the benefit of avoiding training of too large architectures. Naturally, at least one form of explicit or
implicit regularization has to be present in the learning process in order to prevent infinite expansion
of the architecture. We would like to emphasize that we have chosen the metric defined in equation
2 as a basis for the decision of when to expand an architecture, but in principle a family of similarly
constructed metrics is imaginable. We have chosen this particular metric because it does not directly
depend on gradient or higher-order term calculation and only requires multiplication of weights
with themselves. Thus, a major advantage is that computation of equation 2 can be parallelized
completely and therefore executed at less cost than a regular forward pass through the network.
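The control flow of algorithm 1 can be sketched independently of any particular framework. Here `build_net`, `train_step`, and `metric` are illustrative stubs, and the expansion test follows the prose description (widen a layer once every one of its features has structurally drifted from initialization by more than ε):

```python
def expand_architecture(build_net, train_step, metric, num_layers,
                        epsilon=1e-6, f_exp=1, max_epochs=5,
                        steps_per_epoch=20):
    """Greedy bottom-up expansion (sketch of algorithm 1): start with one
    feature per layer, widen any layer in which every feature has drifted
    from its initialization, and re-initialize whenever the net grows."""
    widths = [1] * num_layers
    net = build_net(widths)
    epoch = 0
    while epoch < max_epochs:
        for _ in range(steps_per_epoch):
            train_step(net)
            grew = False
            for l in range(num_layers):
                # metric returns one drift value in [0, 1] per feature
                # of layer l (in the spirit of equation 2).
                if min(metric(net, l)) > epsilon:
                    widths[l] += f_exp
                    grew = True
            if grew:
                net = build_net(widths)  # re-initialize all parameters
                epoch = 0                # and restart the training clock
        epoch += 1
    return widths
```

Training stops expanding once some feature in every layer remains close to its initialization, i.e. once regularization or the optimization dynamics leave capacity unused.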
3 REVISITING POPULAR ARCHITECTURES WITH ARCHITECTURE EXPANSION
We revisit some of the most established architectures: "GFCNN" (Goodfellow et al., 2013), "VGG-A & E" (Simonyan & Zisserman, 2015) and "Wide Residual Network: WRN" (Zagoruyko & Komodakis, 2016) (see appendix for architectural details), with batch normalization (Ioffe & Szegedy, 2015). We compare the number of learnable parameters and achieved accuracies with those obtained
through expanded architectures that started from a single feature in each layer. For each architecture
we include all-convolutional variants (Springenberg et al., 2015) that are similar to WRNs (minus
the skip-connections), where all pooling layers are replaced by convolutions with larger stride. All
fully-connected layers are furthermore replaced by a single convolution (affine, no activation function) that maps directly onto the space of classes. Even though the value of more complex type of
sub-sampling functions has already empirically been demonstrated (Lee et al., 2015), the amount of
features of the replaced layers has been constrained to match in dimensionality with the preceding
convolution layer. We would thus like to further extend and analyze the role of layers involving
sub-sampling by decoupling the dimensionality of these larger stride convolutional layers.
1 We have empirically observed promising results even without re-initialization, but deeper analysis of stability (e.g. expansion speed vs. training rate) and of the initialization of new features during training (according to the chosen scheme, or aligned with already learned representations?) is required.
Figure 1: Pruning of complete features for the GFCNN architecture trained on MNIST (top panel) and CIFAR100 (bottom panel). The top row shows sorted feature importance values for every layer according to three different metrics at the end of training. The bottom row illustrates the accuracy loss when removing feature by feature in ascending order of feature importance.
We consider these architectures to be among the best CNN architectures, as each of them has been chosen and tuned carefully through extensive hyper-parameter search. As we would like to demonstrate how representational capacity in our automatically constructed networks scales with increasing task difficulty, we perform experiments on the MNIST (LeCun et al., 1998) and CIFAR10 & 100 (Krizhevsky, 2009) datasets, which intuitively represent little to high classification challenge. We also show some preliminary experiments on the ImageNet (Russakovsky et al., 2015) dataset with "Alexnet" (Krizhevsky et al., 2012) to conceptually show that the algorithm is applicable to large-scale challenges. All training is closely inspired by the procedure specified in Zagoruyko & Komodakis (2016), with the main difference of avoiding heavy preprocessing. We preprocess all data using only trainset mean and standard deviation (see appendix for exact training parameters). Although we are in principle able to achieve higher results with different sets of hyper-parameters and preprocessing methods, we limit ourselves to this training methodology to provide a comprehensive comparison and avoid masking of our contribution. We train all architectures five times on each dataset using an Intel i7-6800K CPU (data loading) and a single NVIDIA Titan-X GPU. Code has been written in both Torch7 (Collobert et al., 2011) and PyTorch (http://pytorch.org/) and will be made publicly available.
3.1 THE TOP-DOWN PERSPECTIVE: FEATURE IMPORTANCE FOR PRUNING
We first provide a brief example for the use of equation 2 through the lens of pruning, to demonstrate that our metric adequately measures feature importance. We evaluate the contribution of the features by pruning the weight tensor feature by feature in ascending order of feature importance values and re-evaluating the remaining architecture. We compare our normalized cross-correlation metric (equation 2) to the L1 weight norm metric introduced by Hao et al. (2017) and to ranked mean activations evaluated
over an entire epoch. In figure 1 we show the pruning of a trained GFCNN, expecting that such
a network will be too large for the easier MNIST and too small for the difficult CIFAR100 task.
For all three metrics pruning any feature from the architecture trained on CIFAR100 immediately
results in loss of accuracy, whereas the architecture trained on MNIST can be pruned to a smaller
set of parameters by greedily dropping the next feature with the currently lowest feature importance
value. We notice how all three metrics perform comparably. However, in contrast to the other two
metrics, our normalized cross-correlation captures whether a feature is important on absolute scale.
For MNIST the curve is very close to zero, whereas the metric is close to unity for all CIFAR100
features. Ultimately this is the reason why our metric, as formulated in equation 2, is used in
algorithm 1: it does not require a difficult process to determine individual layer
threshold values. Nevertheless it is imaginable that similar metrics based on other tied quantities
(gradients, activations) can be formulated in analogous fashion.
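The greedy pruning procedure just described can be sketched in a few lines. The helper names (`importance`, `evaluate`) and the accuracy tolerance are illustrative assumptions, not the authors' implementation:

```python
def greedy_prune(features, importance, evaluate, max_accuracy_drop=0.001):
    """Greedily drop the feature with the lowest importance value and
    re-evaluate, stopping once accuracy degrades beyond a tolerance.

    `features` is a list of feature identifiers, `importance` maps each
    feature to its importance value (e.g. a normalized cross-correlation)
    and `evaluate` returns the accuracy of a given feature subset.
    """
    kept = sorted(features, key=importance.get)  # ascending importance
    baseline = evaluate(kept)
    while len(kept) > 1:
        candidate = kept[1:]  # drop the currently least important feature
        if baseline - evaluate(candidate) > max_accuracy_drop:
            break             # pruning now costs accuracy: stop
        kept = candidate
    return kept
```

The loop mirrors the experiment in figure 1: a network that is too large (MNIST) can be pruned far before accuracy drops, while a network that is too small (CIFAR100) loses accuracy on the first dropped feature.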
As our main contribution lies in the bottom-up widening of architectures, we do not go into a more detailed analysis and comparison of pruning strategies. We also remark that, in contrast to a bottom-up
approach to finding suitable architectures, pruning seems less desirable. It requires convergent training of a huge architecture with lots of regularization before complexity can be reduced; pruning
is not capable of adding complexity if representational capacity is lacking; pruning percentages are
difficult to interpret and compare (i.e. the pruning percentage is 0 if the architecture is adequate); a
majority of parameters are pruned only in the last "fully-connected" layers (Han et al., 2015); and
pruning strategies as suggested by Han et al. (2015; 2017); Shrikumar et al. (2016); Hao et al. (2017)
tend to require many cross-validation and consecutive fine-tuning steps. We thus continue with the
bottom-up perspective of expanding architectures from low to high representational capacity.
3.2 THE BOTTOM-UP PERSPECTIVE: EXPANDING ARCHITECTURES
We use the described training procedure in conjunction with algorithm 1 to expand representational
complexity by adding features to architectures that start with just one feature per layer, with the
following additional settings:
Architecture expansion settings and considerations: Our initial experiments added one feature
at a time, but large speed-ups can be introduced by means of adding stacks of features. Initially,
we avoided suppression of late re-initialization to analyze the possibility that rarely encountered
worst-case behavior of restarting on an almost completely trained architecture provides any benefit.
After some experimentation, our final experiments use a stability parameter that ends the network expansion
if half of the training has been stable (no further change in architecture), and add Fexp = 8 and
Fexp = 16 features per expansion step for the MNIST and CIFAR10 & 100 experiments respectively.
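In compressed form, such an expansion loop can be sketched as below. Algorithm 1 itself is not reproduced in this section, so the importance threshold, the helper signatures and the stability bookkeeping are simplifying assumptions:

```python
def expand_architecture(layer_widths, feature_importance, train,
                        f_exp=8, threshold=0.5, stable_epochs=30,
                        max_epochs=200):
    """Bottom-up expansion sketch in the spirit of algorithm 1.

    Starting from minimal layer widths, any layer whose features all look
    important (importance above `threshold`) is widened by `f_exp`
    features and training restarts (re-initialization).  Expansion ends
    once the topology has been stable for `stable_epochs` epochs.
    """
    stable = 0
    for epoch in range(max_epochs):
        train(layer_widths, epoch)
        expanded = False
        for i, width in enumerate(layer_widths):
            # widen a layer if none of its features look dispensable
            if min(feature_importance(i, f) for f in range(width)) > threshold:
                layer_widths[i] = width + f_exp
                expanded = True
        if expanded:
            stable = 0          # topology changed: reset stability counter
        else:
            stable += 1
            if stable >= stable_epochs:
                break           # topology converged
    return layer_widths
```

Note that layers expand independently of each other, matching the behavior visible in figure 2 where individual layers grow at different points in time.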
We show an exemplary architecture expansion of the GFCNN architecture’s layers for MNIST and
CIFAR100 datasets in figure 2 and the evolution of the overall amount of parameters for five different
experiments. We observe that layers expand independently at different points in time and more
features are allocated for CIFAR100 than for MNIST. When comparing the five different runs we
can identify that all architectures converge to a similar amount of network parameters, however
at different points in time. A good example to see this behavior is the solid (green) curve in the
MNIST example, where the architecture at first seems to converge to a state with a lower amount
of parameters and after some epochs of stability starts to expand (and re-initialize) again until it
ultimately converges similarly to the other experiments.
We continue to report results obtained for the different datasets and architectures in table 1. The
table illustrates the mean and standard deviation values for error, total amount of parameters and the
mean overall time taken for five runs of algorithm 1 (deviation can be fairly large due to the behavior
observed in figure 2). We make the following observations:
• Without any prior on layer widths, expanding architectures converge to states with at least
similar accuracies to the reference at a reduced amount of parameters, or to better accuracies
by allocating more representational capacity.
• For each architecture type there is a clear trend in network capacity that increases with
dataset complexity from MNIST to CIFAR10 to CIFAR100.²
² For the WRN CIFAR100 architecture the * signifies hardware memory limitations due to the arrangement
of architecture topology, and thus expansion is limited. This is because an increased amount of early-layer features
requires more memory in contrast to late layers, which is particularly intense for the coupled WRN architecture.
Figure 2: Exemplary GFCNN network expansion on MNIST and CIFAR100. Top panel shows one
architecture’s individual layer expansion; bottom panel shows the evolution of total parameters for
five runs. It is observable how different experiments converge to similar network capacity on slightly
different time-scales and how network capacity systematically varies with complexity of the dataset.
Table 1: Mean error and standard deviation and number of parameters (in millions) for architectures
trained five times using the original reference and automatically expanded architectures respectively.
For MNIST no data augmentation is applied. We use minor augmentation (flips, translations) for
CIFAR10 & 100. All-convolutional (all-conv) versions have been evaluated for each architecture
(except WRN where convolutions are stacked already). The * indicates hardware limitation.
[Table 1 data: error [%], params [M] and time [h] of the original and expanded GFCNN, VGG-A, VGG-E and WRN-28-10 networks (standard and all-conv variants, mean ± standard deviation over five runs) on MNIST, CIFAR10+ and CIFAR100+.]
• Even though we have introduced re-initialization of the architecture, the time taken by algorithm 1 is much less than one would invest when doing a manual, grid- or random-search.
• Shallow GFCNN architectures are able to gain accuracy by increasing layer width, although
there seems to be a natural limit to what width alone can do. This is in agreement with
observations pointed out in other works such as Ba & Caurana (2014); Urban et al. (2017).
• The large reference VGG-E (lower accuracy than VGG-A on CIFAR) and WRN-28-10
(complete overfit on MNIST) seem to run into optimization difficulties for these datasets.
However, the expanded alternative architectures clearly perform significantly better.
Figure 3: Mean and standard deviation of topologies as evolved from the expansion algorithm for
a VGG-E and VGG-E all-convolutional architecture run five times on MNIST, CIFAR10 and CIFAR100 datasets respectively. Top panels show the reference architecture, whereas bottom shows
automatically expanded architecture alternatives. Expanded architectures vary in capacity with
dataset complexity and topologically differ from their reference counterparts.
In general we observe that these benefits are due to the unconventional, yet consistently recurring, network
topologies of our expanded architectures. These topologies suggest that there is more to CNNs than
simply following the rule of thumb of increasing the number of features with increasing architectural depth. Before proceeding with more detail on these alternate architecture topologies, we want
to again emphasize that we do not report experiments containing extended methodology such as
excessive preprocessing, data augmentation, the oscillating learning rates proposed in Loshchilov
& Hutter (2017) or better sets of hyper-parameters for reasons of clarity, even though accuracies
rivaling state-of-the-art performances can be achieved in this way.
3.3 ALTERNATE FORMATION OF DEEP NEURAL NETWORK TOPOLOGIES
Almost all popular convolutional neural network architectures follow a design pattern of monotonically increasing feature amount with increasing network depth (LeCun et al., 1998; Goodfellow
et al., 2013; Simonyan & Zisserman, 2015; Springenberg et al., 2015; He et al., 2016; Zagoruyko
& Komodakis, 2016; Loshchilov & Hutter, 2017; Urban et al., 2017). For the results presented in
table 1 all automatically expanded network topologies present alternatives to this pattern. In figure 3, we illustrate exemplary mean topologies for a VGG-E and VGG-E all-convolutional network
as constructed by our expansion algorithm in five runs on the three datasets. Apart from noticing
the systematic variations in representational capacity with dataset difficulty, we furthermore find
topological convergence with small deviations from one training to another. We observe the highest feature dimensionality in early to intermediate layers, with generally decreasing dimensionality
towards the end of the network, differing from conventional CNN design patterns. Even if the expanded architectures sometimes do not deviate much from the reference parameter count, accuracy
seems to be improved through this topological re-arrangement. For architectures where pooling
has been replaced with larger stride convolutions we also observe that dimensionality of layers
with sub-sampling changes independently of the prior and following convolutional layers, suggesting that highly complex sub-sampling operations are learned. This is an extension of the
all-convolutional variant of Springenberg et al. (2015), where the introduced additional convolutional
layers were constrained to match the dimensionality of the previously present pooling operations.
If we view the deep neural network as being able to represent any function that is limited rather by
concepts of continuity and boundedness instead of a specific form of parameters, we can view the
minimization of the cost function as learning a functional mapping instead of merely adopting a set
of parameters (Goodfellow et al., 2016). We hypothesize that evolved network topologies containing higher feature amount in early to intermediate layers generally follow a process of first mapping
into higher dimensional space to effectively separate the data into many clusters. The network can
then more readily aggregate specific sets of features to form clusters distinguishing the class subsets.
Empirically this behavior finds confirmation in all our evolved network topologies that are visualized in the appendix. Similar formation of topologies, restricted by the dimensionality constraint of
the identity mappings, can be found in the trained residual networks.
While He et al. (2015) have shown that deep VGG-like architectures do not perform well, an interesting question for future research is whether plainly stacked architectures can perform similarly
to residual networks if the arrangement of feature dimensionality differs from the conventional
design of monotonic increase with depth.
3.4 AN OUTLOOK TO IMAGENET
We show two first experiments on the ImageNet dataset using an all-convolutional Alexnet to demonstrate
that our methodology can readily be applied at large scale. The results for the two runs can be found
in table 2 and the corresponding expanded architectures are visualized in the appendix. We observe that
the experiments follow the general pattern and again find that the topological rearrangement
of the architecture yields substantial benefits. In the future we would like to extend experimentation
to more promising ImageNet architectures such as deep VGG and residual networks. However,
these architectures already require 4-8 GPUs and large amounts of time in their baseline evaluation,
which is why we are presently not capable of evaluating these architectures and keep this section at
a brief proof-of-concept level.
Table 2: Two experiments with all-convolutional Alexnet on the large scale Imagenet dataset comparing the reference implementation with our expanded architecture.
                 Alexnet - 1               Alexnet - 2
                 original    expanded      original    expanded
top-1 error      43.73 %     37.84 %       43.73 %     38.47 %
top-5 error      20.11 %     15.88 %       20.11 %     16.33 %
params           35.24 M     34.76 M       35.24 M     32.98 M
time             27.99 h     134.21 h      27.99 h     118.73 h
4 CONCLUSION
In this work we have introduced a novel bottom-up algorithm that starts neural network architectures
with one feature per layer and widens them until a representational capacity suitable for the task
is achieved. For use in this framework we have presented a computationally efficient
and intuitive metric to gauge feature importance. The proposed algorithm is capable of expanding architectures that provide either a reduced amount of parameters or improved accuracies through a
higher amount of representations. This advantage seems to be gained through alternative network
topologies with respect to designs commonly applied in the current literature. Instead of increasing the
amount of features monotonically with increasing depth of the network, we empirically observe that
expanded neural network topologies have a high amount of representations in early to intermediate
layers.
Future work could include a re-evaluation of plainly stacked deep architectures with new insights
on network topologies. We have furthermore started to replace the currently present re-initialization
step in the proposed expansion algorithm by keeping learned filters. In principle this approach looks
promising, but needs further systematic analysis of new feature initialization with respect to the
already learned feature subset and an accompanying investigation of orthogonality to avoid falling into
local minima.
ACKNOWLEDGEMENTS
This work has received funding from the European Union's Horizon 2020 research and innovation
program under grant agreement No 687384. Kishora Konda and Tobias Weis received funding from
Continental Automotive GmbH. We would like to further thank Anjaneyalu Thippaiah for help with
execution of ImageNet experiments.
REFERENCES
Jose M. Alvarez and Mathieu Salzmann. Learning the Number of Neurons in Deep Networks. In NIPS, 2016.
Lei J. Ba and Rich Caurana. Do Deep Nets Really Need to be Deep? arXiv preprint arXiv:1312.6184, 2014.
Bowen Baker, Otkrist Gupta, Nikhil Naik, and Ramesh Raskar. Designing Neural Network Architectures using
Reinforcement Learning. arXiv preprint arXiv:1611.02167, 2016.
Ronan Collobert, Koray Kavukcuoglu, and Clément Farabet. Torch7: A matlab-like environment for machine
learning. BigLearn, NIPS Workshop, 2011.
Ian J. Goodfellow, David Warde-Farley, Mehdi Mirza, Aaron Courville, and Yoshua Bengio. Maxout Networks.
In ICML, 2013.
Ian J. Goodfellow, Yoshua Bengio, and Aaron Courville. Deep Learning. MIT Press, 2016.
Song Han, Jeff Pool, John Tran, and William J. Dally. Learning both Weights and Connections for Efficient
Neural Networks. In NIPS, 2015.
Song Han, Huizi Mao, and William J. Dally. Deep Compression - Compressing Deep Neural Networks with
Pruning, Trained Quantization and Huffman Coding. In ICLR, 2016.
Song Han, Huizi Mao, Enhao Gong, Shijian Tang, William J. Dally, Jeff Pool, John Tran, Bryan Catanzaro,
Sharan Narang, Erich Elsen, Peter Vajda, and Manohar Paluri. DSD: Dense-Sparse-Dense Training For
Deep Neural Networks. In ICLR, 2017.
Li Hao, Asim Kadav, Hanan Samet, Igor Durdanovic, and Hans Peter Graf. Pruning Filters For Efficient
Convnets. In ICLR, 2017.
Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Delving Deep into Rectifiers: Surpassing Human-Level Performance on ImageNet Classification. In ICCV, 2015.
Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep Residual Learning for Image Recognition. In
CVPR, 2016.
Geoffrey E. Hinton, Oriol Vinyals, and Jeff Dean. Distilling the Knowledge in a Neural Network. In NIPS
Deep Learning Workshop, 2014.
Sergey Ioffe and Christian Szegedy. Batch Normalization: Accelerating Deep Network Training by Reducing
Internal Covariate Shift. arXiv preprint arXiv:1502.03167, 2015.
Guoliang Kang, Jun Li, and Dacheng Tao. Shakeout: A New Regularized Deep Neural Network Training
Scheme. In AAAI, 2016.
Alex Krizhevsky. Learning Multiple Layers of Features from Tiny Images. Technical report, Toronto, 2009.
Alex Krizhevsky, Ilya Sutskever, and Geoffrey E Hinton. Imagenet classification with deep convolutional neural
networks. In Advances in Neural Information Processing Systems 25, pp. 1097–1105. Curran Associates,
Inc., 2012.
Yann LeCun, Léon Bottou, Yoshua Bengio, and Patrick Haffner. Gradient-based learning applied to document
recognition. Proceedings of the IEEE, 86(11), 1998.
Chen-Yu Lee, Patrick W. Gallagher, and Zhuowen Tu. Generalizing Pooling Functions in Convolutional Neural
Networks: Mixed, Gated, and Tree. 2015.
Ilya Loshchilov and Frank Hutter. SGDR: Stochastic Gradient Descent With Warm Restarts. In ICLR, 2017.
Esteban Real, Sherry Moore, Andrew Selle, Saurabh Saxena, Yutaka Leon Suematsu, Quoc Le, and Alex
Kurakin. Large-Scale Evolution of Image Classifiers. arXiv preprint arXiv:1703.01041, 2017.
Pau Rodriguez, Jordi González, Guillem Cucurull, Josep M. Gonfaus, and Xavier Roca. Regularizing CNNs
With Locally Constrained Decorrelations. In ICLR, 2017.
Olga Russakovsky, Jia Deng, Hao Su, Jonathan Krause, Sanjeev Satheesh, Sean Ma, Zhiheng Huang, Andrej
Karpathy, Aditya Khosla, Michael Bernstein, Alexander C. Berg, and Li Fei-Fei. ImageNet Large Scale
Visual Recognition Challenge. International Journal of Computer Vision (IJCV), 115(3):211–252, 2015.
Mohammad J. Shafiee, Akshaya Mishra, and Alexander Wong. EvoNet: Evolutionary Synthesis of Deep Neural
Networks. arXiv preprint arXiv:1606.04393, 2016.
Avanti Shrikumar, Peyton Greenside, Anna Shcherbina, and Anshul Kundaje. Not Just a Black Box: Interpretable Deep Learning by Propagating Activation Differences. In ICML, 2016.
Karen Simonyan and Andrew Zisserman. Very Deep Convolutional Networks for Large-Scale Image Recognition. In ICLR, 2015.
Jost T. Springenberg, Alexey Dosovitskiy, Thomas Brox, and Martin Riedmiller. Striving for Simplicity: The
All Convolutional Net. In ICLR, 2015.
Nitish Srivastava, Geoffrey E. Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. Dropout :
A Simple Way to Prevent Neural Networks from Overfitting. JMLR, 15, 2014.
Gregor Urban, Krzysztof J. Geras, Samira Ebrahimi Kahou, Ozlem Aslan, Shengjie Wang, Abdel-rahman
Mohamed, Matthai Philipose, Matthew Richardson, and Rich Caruana. Do Deep Convolutional Nets Really
Need To Be Deep And Convolutional? In ICLR, 2017.
Sergey Zagoruyko and Nikos Komodakis. Wide Residual Networks. In BMVC, 2016.
A APPENDIX
A.1 DATASETS
• MNIST (LeCun et al., 1998): 50000 train images of hand-drawn digits of spatial size 28 ×
28 belonging to one of 10 equally sampled classes.
• CIFAR10 & 100 (Krizhevsky, 2009): 50000 natural train images of spatial size 32 × 32
each containing one object belonging to one of 10/100 equally sampled classes.
• ImageNet (Russakovsky et al., 2015): Approximately 1.2 million training images of objects
belonging to one of 1000 classes. Classes are not equally sampled, with 732-1300 images
per class. The dataset contains 50,000 validation images, 50 per class. The scale of objects
and the size of the images vary.
A.2 TRAINING HYPER-PARAMETERS
All training is closely inspired by the procedure specified in Zagoruyko & Komodakis (2016) with
the main difference of avoiding heavy preprocessing. Independent of dataset, we preprocess all
data using only trainset mean and standard deviation. All training has been conducted using cross-entropy as a loss function and weight initialization following the normal distribution as proposed by
He et al. (2015). All architectures are trained with batch-normalization with a constant of 1 · 10−3 ,
a batch-size of 128, an L2 weight-decay of 5 · 10−4 , a momentum of 0.9 and Nesterov momentum.
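The optimizer settings above (momentum 0.9, Nesterov variant, L2 weight decay 5 · 10−4) imply the following per-parameter update rule. This is a plain-Python sketch mirroring common SGD implementations (e.g. Torch/PyTorch semantics), not code from the paper:

```python
def sgd_step(w, grad, velocity, lr=0.1, momentum=0.9,
             weight_decay=5e-4, nesterov=True):
    """One SGD update: L2 weight decay folded into the gradient,
    heavy-ball momentum, and optionally the Nesterov look-ahead."""
    g = grad + weight_decay * w          # L2 regularization term
    velocity = momentum * velocity + g   # accumulate momentum buffer
    update = g + momentum * velocity if nesterov else velocity
    return w - lr * update, velocity
```

Calling `sgd_step` once per parameter and batch reproduces the training dynamics described above up to the learning-rate schedule.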
Small datasets: We use initial learning rates of 0.1 and 0.005 for the CIFAR and MNIST datasets
respectively. We have rescaled MNIST images to 32 × 32 (CIFAR size) and repeat the image across
color channels in order to use architectures without modifications. CIFAR10 & 100 are trained
for 200 epochs and the learning rate is scheduled to be reduced by a factor of 5 every multiple of
60 epochs. MNIST is trained for 60 epochs and the learning rate is reduced by a factor of 5 once after
30 epochs. We augment the CIFAR10 & 100 training by introducing horizontal flips and small
translations of up to 4 pixels during training. No data augmentation has been applied to the MNIST
dataset.
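The CIFAR10 & 100 augmentation (horizontal flips, translations of up to 4 pixels) can be sketched as follows; zero-padding at the border and a flip probability of 0.5 are assumptions, as the paper does not state these details:

```python
import random

def augment(image, max_shift=4, p_flip=0.5):
    """Randomly flip an image horizontally and translate it by up to
    `max_shift` pixels, filling revealed pixels with zeros.
    `image` is a list of rows (single channel, for brevity)."""
    h, w = len(image), len(image[0])
    if random.random() < p_flip:
        image = [row[::-1] for row in image]          # horizontal flip
    dy = random.randint(-max_shift, max_shift)
    dx = random.randint(-max_shift, max_shift)
    shifted = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            if 0 <= y + dy < h and 0 <= x + dx < w:
                shifted[y][x] = image[y + dy][x + dx]
    return shifted
```

For MNIST, instead of augmenting, the 28 × 28 images are merely rescaled to 32 × 32 and repeated across the three color channels, as stated above.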
ImageNet: We use the single-crop technique where we rescale the image such that the shorter side
is equal to 224 and take a centered crop of spatial size 224 × 224. In contrast to Krizhevsky et al.
(2012) we limit preprocessing to subtraction and division of trainset mean and standard deviation
and do not include local response normalization layers. We randomly augment training data with
random horizontal flips. We set an initial learning rate of 0.1 and follow the learning rate schedule
proposed in Krizhevsky et al. (2012) that drops the learning rate by a factor of 0.1 every 30 epochs
and train for a total of 74 epochs.
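The single-crop evaluation and the step learning-rate schedule amount to the following small helpers (a sketch; the rounding behavior of the resize is an assumption):

```python
def resize_shorter_side(h, w, target=224):
    """(height, width) after rescaling so the shorter side equals
    `target`, as in the single-crop ImageNet evaluation."""
    scale = target / min(h, w)
    return round(h * scale), round(w * scale)

def center_crop_box(h, w, size=224):
    """Top-left and bottom-right corners of a centered size x size crop."""
    top, left = (h - size) // 2, (w - size) // 2
    return top, left, top + size, left + size

def learning_rate(epoch, base_lr=0.1, drop=0.1, every=30):
    """Step schedule: multiply the learning rate by `drop`
    every `every` epochs, as in Krizhevsky et al. (2012)."""
    return base_lr * drop ** (epoch // every)
```

For a 480 × 640 input this yields a 224 × 299 rescaled image and a centered 224 × 224 crop; the learning rate drops to 0.01 at epoch 30 and to 0.001 at epoch 60 within the 74-epoch budget.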
The amount of epochs for the expansion of architectures is larger due to the re-initialization. For
these architectures the mentioned amount of epochs corresponds to training during stable conditions,
i.e. no further expansion. The procedure is thus equivalent to training the converged architecture
from scratch.
A.3 ARCHITECTURES
GFCNN (Goodfellow et al., 2013) Three-convolutional-layer network with larger filters, followed by
two fully-connected layers, but without "maxout". The exact sequence of operations is:
1. Convolution 1: 8 × 8 × 128 with padding = 4 → batch-normalization → ReLU →
max-pooling 4 × 4 with stride = 2.
2. Convolution 2: 8 × 8 × 198 with padding = 3 → batch-normalization → ReLU →
max-pooling 4 × 4 with stride = 2.
3. Convolution 3: 5 × 5 × 198 with padding = 3 → batch-normalization → ReLU →
max-pooling 2 × 2 with stride = 2.
4. Fully-connected 1: 4 × 4 × 198 → 512 → batch-normalization → ReLU.
5. Fully-connected 2: 512 → classes.
Represents the family of rather shallow "deep" networks.
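The spatial sizes implied by this layer listing can be verified with standard convolution arithmetic. The sketch below (floor-mode pooling assumed, as in Torch) reproduces the 4 × 4 spatial input of fully-connected layer 1:

```python
def conv_out(size, kernel, stride=1, padding=0):
    """Spatial output size of a convolution or pooling layer
    (floor division, i.e. floor-mode pooling)."""
    return (size + 2 * padding - kernel) // stride + 1

# GFCNN on 32x32 inputs:
s = conv_out(32, 8, padding=4)   # conv 1 (8x8, pad 4)      -> 33
s = conv_out(s, 4, stride=2)     # max-pool 4x4, stride 2   -> 15
s = conv_out(s, 8, padding=3)    # conv 2 (8x8, pad 3)      -> 14
s = conv_out(s, 4, stride=2)     # max-pool 4x4, stride 2   -> 6
s = conv_out(s, 5, padding=3)    # conv 3 (5x5, pad 3)      -> 8
s = conv_out(s, 2, stride=2)     # max-pool 2x2, stride 2   -> 4
# s == 4 matches the 4 x 4 x 198 input of fully-connected layer 1
```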
VGG (Simonyan & Zisserman, 2015) "VGG-A" (8 convolutions) and "VGG-E" (16 convolutions) networks. Both architectures include three fully-connected layers. We set the number
of features in the MLP to 512 features per layer instead of 4096 because the last convolutional layer of these architectures already produces outputs of spatial size 1 × 1 (in contrast
to 7 × 7 on ImageNet) on small datasets. Batch normalization is used before the activation
functions. Examples of stacking convolutions that do not alter spatial dimensionality to
create deeper architectures.
WRN (Zagoruyko & Komodakis, 2016) Wide Residual Network architecture: We use a depth of
28 convolutional layers (each block completely coupled, no bottlenecks) and a width-factor
of 10 as reference. When we expand these networks, this implies an inherent coupling of
layer blocks due to dimensional consistency constraints with the outputs of identity mappings.
Alexnet (Krizhevsky et al., 2012) We use the all-convolutional variant, where we replace the first
fully-connected large 6 × 6 × 256 → 4096 layer with a convolution of corresponding
spatial filter size and 256 filters and drop all further fully-connected layers. The rationale
behind this decision is that previous experiments, our own pruning experiments and those
of Hao et al. (2017); Han et al. (2015), indicate that original fully-connected layers are
largely obsolete.
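The parameter saving from this replacement can be estimated with a quick weight count. The 6 × 6 spatial size and 256 channels are taken from the description above; biases are omitted for brevity:

```python
# Weights of the original dense layer: every input activation
# (6 x 6 x 256) connects to each of the 4096 output units.
fc_params = (6 * 6 * 256) * 4096      # 37,748,736 weights

# Weights of the replacement: a 6x6 convolution with 256 input
# channels and 256 filters.
conv_params = (6 * 6 * 256) * 256     # 2,359,296 weights
```

This factor-16 reduction in that single layer, together with dropping the remaining fully-connected layers, is consistent with the observation that the original dense layers are largely obsolete.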
A.4 AUTOMATICALLY EXPANDED ARCHITECTURE TOPOLOGIES
In addition to figure 3 we show mean evolved topologies including standard deviations for all architectures and datasets reported in tables 1 and 2. In figures 4 and 5 all shallow and VGG-A architectures
and their respective all-convolutional variants are shown. Figure 6 shows the constructed wide residual 28 layer network architectures where blocks of layers are coupled due to the identity mappings.
Figure 7 shows the two expanded Alexnet architectures as trained on ImageNet.
As explained in the main section, we see that all evolved architectures feature topologies with large
dimensionality in early to intermediate layers instead of in the highest layers of the architecture, as
usually present in conventional CNN design. For architectures where pooling has been replaced with
larger-stride convolutions we also observe that the dimensionality of layers with sub-sampling changes
independently of the prior and following convolutional layers, suggesting that highly complex pooling operations are learned. This is an extension of the all-convolutional variant of Springenberg et al. (2015), where the introduced additional convolutional layers were constrained to match the
dimensionality of the previously present pooling operations.
Figure 4: Mean and standard deviation of topologies as evolved from the expansion algorithm for
the shallow networks run five times on MNIST, CIFAR10 and CIFAR100 datasets respectively.
Figure 5: Mean and standard deviation of topologies as evolved from the expansion algorithm for
the VGG-A style networks run five times on MNIST, CIFAR10 and CIFAR100 datasets respectively.
Figure 6: Mean and standard deviation of topologies as evolved from the expansion algorithm for
the WRN-28 networks run five times on MNIST, CIFAR10 and CIFAR100 datasets respectively.
Note that the CIFAR100 architecture was limited in expansion by hardware.
Figure 7: Architecture topologies for the two all-convolutional Alexnets of table 2 as evolved from
the expansion algorithm on ImageNet.
Solving nonlinear circuits with pulsed excitation by multirate
partial differential equations
Andreas Pels1 , Johan Gyselinck2 , Ruth V. Sabariego3 , and Sebastian Schöps1
1 Graduate School of Computational Engineering and Institut für Theorie Elektromagnetischer Felder,
Technische Universität Darmstadt, Germany
2 BEAMS Department, Université libre de Bruxelles, Belgium
3 Department of Electrical Engineering, EnergyVille, KU Leuven, Belgium
In this paper the concept of Multirate Partial Differential Equations (MPDEs) is applied to obtain an efficient solution
for nonlinear low-frequency electrical circuits with pulsed excitation. The MPDEs are solved by a Galerkin approach and a
conventional time discretization. Nonlinearities are efficiently accounted for by neglecting the high-frequency components
(ripples) of the state variables and using only their envelope for the evaluation. It is shown that the impact of this
approximation on the solution becomes increasingly negligible for rising frequency and leads to significant performance
gains.
Index Terms—Finite element analysis, Nonlinear circuits, Numerical simulation, Multirate partial differential equations.
I. INTRODUCTION
MULTISCALE and multirate problems occur naturally in many applications in electrical engineering. Classical discretization schemes are often inefficient in these cases and it is preferable to address
the dynamics of each scale separately. In time domain
the multirate phenomenon is often characterized by the
fact that some solution components are active while the
majority is latent (e.g. behind a low-pass filter [1]) or by
problems with oscillatory solutions that are composed of
multiple frequencies. An example is depicted in Fig. 1; it
consists of a fast periodically varying ripple and a slowly
varying envelope.
Multirate Partial Differential Equations (MPDEs) have
been successfully applied in nonlinear high-frequency
applications with largely separated time scales [2], [3].
Various methods have been proposed for the numerical
solution of the MPDEs, e.g. harmonic balance [2], shooting methods, classical time stepping [4] or a combination
of both [5]. Instead of Fourier basis functions, one can
also use classical nodal basis functions or more sophisticated problem-specific basis functions such as, for example, the
pulse width modulation (PWM) basis functions [6], [7].
This contribution focuses on improving the efficiency
of the numerical solution of nonlinear MPDEs with
a high-frequency pulsed excitation. In contrast to prior
works, we propose neglecting the influence of high-frequency components on the nonlinearity. Consequently,
the nonlinear relation is evaluated using only the envelope
of the state variables.
Corresponding author: A. Pels (e-mail: [email protected]).
arXiv:1710.06278v1 [] 17 Oct 2017
Fig. 1. Solution of the buck converter shown in Fig. 2 at fs = 1000 Hz.
It consists of a slowly varying envelope and fast ripples. The switching
cycle of the pulsed input voltage is Ts = 1 ms, the duty cycle D = 0.7.
The paper is structured as follows: after this introduction, Section II presents the multirate formulation. Section III discusses the Galerkin approach in time domain
and the extraction of the envelope to evaluate nonlinear
behavior. Section IV gathers numerical results based on
the buck converter benchmark example and discusses the
accuracy and efficiency of the modified method. Finally,
conclusions are drawn.
II. MULTIRATE FORMULATION
Spatial discretization of low-frequency field formulations, e.g. electro- or magneto-quasi-statics [8], or of
network models of power converter circuits, e.g. the
buck converter in Fig. 2, leads to (nonlinear) systems of
ordinary or differential algebraic equations of the form

A(x(t)) d/dt x(t) + B(x(t)) x(t) = c(t)    (1)
c 2017 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media,
including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution
to servers or lists, or reuse of any copyrighted component of this work in other works.
[Figure 2: circuit schematic with input voltage vi, inductance L with series resistance RL carrying current iL, capacitance C with voltage vC, and load resistance R.]
Fig. 2. Simplified buck converter.
with an initial condition x(t0) = x0, where x(t) ∈ R^Ns is
the vector of Ns state variables, A(x), B(x) ∈ R^(Ns×Ns)
are matrices that may depend on the solution and c(t) ∈
R^Ns is the excitation vector.
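As a concrete illustration of the form (1), the following minimal sketch simulates a linear buck-converter-type LC filter with a pulsed input using backward Euler time stepping. All component values (L, C, R, RL) and excitation parameters are assumptions for illustration, not values taken from the paper.

```python
import numpy as np

# Hypothetical component values -- the paper does not list them for Fig. 2.
L, C, R, RL = 1e-3, 1e-4, 10.0, 0.1    # H, F, Ohm, Ohm
Ts, D, Vin = 1e-3, 0.7, 100.0          # switching cycle (s), duty cycle, input (V)

def vi(t):
    """Pulsed excitation: Vin during the first D*Ts of each cycle, else 0."""
    return Vin if (t / Ts) % 1.0 < D else 0.0

# State x = [iL, vC]; linear special case of (1): A dx/dt + B x = c(t).
A = np.diag([L, C])
B = np.array([[RL, 1.0], [-1.0, 1.0 / R]])

def simulate(t_end=10e-3, dt=1e-6):
    n = round(t_end / dt)
    x = np.zeros(2)
    M = np.linalg.inv(A / dt + B)      # backward Euler: (A/dt + B) x_new = c + A x / dt
    xs = np.empty((n, 2))
    for k in range(n):
        c = np.array([vi((k + 1) * dt), 0.0])
        x = M @ (c + A @ x / dt)
        xs[k] = x
    return xs

xs = simulate()
```

With a switching frequency near the filter resonance, the solution shows exactly the structure of Fig. 1: a slowly varying envelope with large fast ripples, which conventional time stepping must resolve with many small steps.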
The system of equations (1) is hereafter rewritten as
MPDEs [2], [3], [7] in terms of the two time scales t1
and t2
A(x̂) (∂x̂/∂t1 + ∂x̂/∂t2) + B(x̂) x̂(t1, t2) = ĉ(t1, t2),   (2)

where x̂(t1, t2) and ĉ(t1, t2) are the multivariate forms
of x(t) and c(t). If ĉ(t1, t2) fulfills the relation ĉ(t, t) =
c(t), the solution of the original problem can be extracted
from the solution x̂(t1, t2) of the MPDEs by x(t) = x̂(t, t) [2].
Let t1 denote the slow time scale and t2 the fast
time scale, which is furthermore assumed to be periodic.
Without limiting the generality of the approach, we
choose ĉ(t1, t2) := c(t2) such that the right-hand side
only depends on the fast time scale [7].
III. NUMERICAL METHOD
System (2) can be numerically solved either using a
Galerkin framework, shooting methods, classical time
stepping or a combination. Here, we propose to use a
variational setting, i.e. a Galerkin approach, for the fast
time scale and a conventional time stepping for the slowly
varying envelope.
A. Galerkin in time domain
We represent the solution by an expansion of Np + 1
suitable basis functions pk(τ(t2)) and coefficients
wj,k(t1). The approximated state variables x̂j(t1, t2) can
be written as the series

x̂j(t1, t2) = Σ_{k=0}^{Np} pk(τ(t2)) wj,k(t1),   (3)
with τ(t2) = t2/Ts mod 1, where Ts is the switching cycle
of the excitation and mod denotes the modulo operation.
Since we deal with pulsed right-hand-sides, we choose
the PWM basis functions of [7], although constructed for
linear problems [6].

[Figure 3: surface plot of the multirate solution over slow time t1 (ms) and fast time t2 (ms), with vC (V) on the vertical axis.]
Fig. 3. Illustration of the multirate solution for fs = 500 Hz. The
solution of the original system of differential equations is denoted in
black.

The zero-th basis function is constant
p0 (τ ) = 1 and the corresponding coefficient wj,0 defines
the envelope of the j-th solution component. The first
basis function is defined by

p1(τ) = √3 (2τ − D)/D            if 0 ≤ τ ≤ D,
p1(τ) = √3 (1 + D − 2τ)/(1 − D)  if D ≤ τ ≤ 1,
where D is a free parameter, which can be chosen
according to the duty cycle of the PWM, i.e. between
0 and 1. The basis functions of higher degree pk (τ ), 2 ≤
k ≤ Np are obtained recursively by integrating pk−1 (τ )
and orthonormalizing with respect to the L2 (0, 1) scalar
product, [6], [7].
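The recursion just described can be sketched numerically. The following hedged illustration (the duty-cycle value D = 0.7 and the discretization grid are assumptions) builds p0 and p1 from the formula above, and the higher-degree functions by cumulative integration followed by Gram-Schmidt orthonormalization with respect to a discrete L2(0, 1) scalar product:

```python
import numpy as np

D = 0.7                                   # duty-cycle parameter (assumed value)
tau = np.linspace(0.0, 1.0, 20001)

p0 = np.ones_like(tau)
p1 = np.where(tau <= D,
              np.sqrt(3) * (2 * tau - D) / D,
              np.sqrt(3) * (1 + D - 2 * tau) / (1 - D))

def inner(f, g):
    """L2(0,1) scalar product, approximated by the trapezoidal rule."""
    h = f * g
    return float(np.sum((h[1:] + h[:-1]) / 2 * np.diff(tau)))

def next_basis(basis):
    """Integrate the last function, then orthonormalize (Gram-Schmidt)."""
    b = basis[-1]
    q = np.concatenate(([0.0], np.cumsum((b[1:] + b[:-1]) / 2 * np.diff(tau))))
    for p in basis:
        q = q - inner(q, p) * p
    return q / np.sqrt(inner(q, q))

basis = [p0, p1]
for _ in range(3):                        # build p2, p3, p4, i.e. Np = 4
    basis.append(next_basis(basis))
```

One can check that p0 and p1 are already orthonormal analytically; the recursion preserves orthonormality by construction.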
Finally, a Galerkin approach is applied on the interval
of one period, which yields
∫_0^Ts [ A(x̂) (∂x̂/∂t1 + ∂x̂/∂t2) + B(x̂) x̂ − ĉ ] pl(τ(t2)) dt2 = 0,   (4)
for all l = 0, . . . , Np . The integration with respect to t2
leads to a system of differential (algebraic) equations in
t1 , which can be solved by conventional time integration.
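To make the structure of the resulting system concrete, here is a small numerical sketch of the Galerkin matrices I and Q and the Kronecker-product assembly used in Section III-B below (equations (9), (10), (12), (13)). The truncation Np = 1, Ts = 1 and the 2-state matrices A, B are assumed toy values, not taken from the paper:

```python
import numpy as np

Ts, D = 1.0, 0.7                          # assumed toy values
tau = np.linspace(0.0, 1.0, 20001)
P = np.vstack([np.ones_like(tau),         # p0
               np.where(tau <= D, np.sqrt(3) * (2 * tau - D) / D,
                        np.sqrt(3) * (1 + D - 2 * tau) / (1 - D))])  # p1

# Trapezoidal quadrature weights on [0, 1].
dtau = tau[1] - tau[0]
w = np.full(tau.size, dtau)
w[0] = w[-1] = dtau / 2

I_mat = Ts * (P * w) @ P.T                # eq. (9)
dP = np.gradient(P, tau, axis=1)
Q_mat = -(dP * w) @ P.T                   # eq. (10)

# Kronecker assembly for an assumed linear 2-state example (A, B constant,
# so the envelope extraction f(w) plays no role in this sketch).
A = np.diag([1e-3, 1e-4])
B = np.array([[0.1, 1.0], [-1.0, 0.1]])
calA = np.kron(A, I_mat)                       # eq. (12)
calB = np.kron(B, I_mat) + np.kron(A, Q_mat)   # eq. (13)
```

Because the PWM basis is orthonormal in L2(0, 1), I reduces to Ts times the identity, and the assembled matrices have the enlarged size Ns(Np + 1) = 4 noted at the end of Section III-B.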
B. Treatment of nonlinearity
During time integration of (4), the integrals have to
be evaluated every time step due to their nonlinear
dependency on the solution. This may lead to an unnecessary increase of the computational effort as the
ripple components are often small in comparison with the
magnitude of the envelope. Consequently, one can save
computational time by ignoring the ripple components
of the solution and only using its envelope w̄ for the
evaluation of the nonlinearity.
As mentioned before, the envelope is stored in the
vector of coefficients w(t1) given by

w = [w1,0, . . . , w1,Np, w2,0, . . . , wNs,Np]^T.   (5)

Let us abstractly define a function f, which extracts the
envelope w̄(t1) from w(t1), i.e.

w̄(t1) = f(w(t1)),   (6)

which is in the case of PWM basis functions the zero-th
components, i.e.

w̄ = [w1,0, w2,0, . . . , wNs,0]^T.   (7)

Therefore the matrices A, B only depend on t1 and
the evaluation of (4) simplifies significantly. Equation (4)
becomes

∫_0^Ts [ A(w̄) (∂x̂/∂t1 + ∂x̂/∂t2) + B(w̄) x̂ − ĉ ] pl(τ(t2)) dt2 = 0,   (8)

for all l = 0, . . . , Np, where the matrices A and B are
independent of t2. Introducing

I = Ts ∫_0^1 p(τ) p^T(τ) dτ,   (9)

Q = − ∫_0^1 (dp/dτ) p^T(τ) dτ,   (10)

in equation (8), we get

A(w(t1)) dw/dt1 + B(w(t1)) w(t1) = C(t1),   (11)

where the matrices are given by

A(w) = A(f(w)) ⊗ I,   (12)

B(w) = B(f(w)) ⊗ I + A(f(w)) ⊗ Q,   (13)

and ⊗ denotes the Kronecker product. The right-hand-side
vector is given by

C(t1) = ∫_0^Ts [ĉ1(t1, t2) p(τ(t2)); . . . ; ĉNs(t1, t2) p(τ(t2))] dt2.   (14)

Eventually, system (11) can be more efficiently solved by
conventional time discretization in the sense that larger
time steps can be used than for the original problem (1).
However, drawbacks are the approximation of the nonlinearity
and the increased size of the matrices, i.e. A(w),
B(w) ∈ R^(Ns(Np+1) × Ns(Np+1)) and C(t) ∈ R^(Ns(Np+1)),
which is a similar tradeoff as in harmonic balance [2].

[Figure 4: plot of inductance (mH) versus current iL (A).]
Fig. 4. Inductance of nonlinear coil versus current iL through the coil.

[Figure 5: log-log plot of error versus frequency fs (Hz) for the original and simplified approaches.]
Fig. 5. Error of the original and simplified MPDE approach (Np = 4)
versus frequency fs.

IV. NUMERICAL RESULTS
The numerical tests are performed on the simplified
buck converter model [6] using a nonlinear coil, whose
characteristic is shown in Fig. 4. The code is implemented
in GNU Octave [9]; for time integration the high-order
implicit Runge-Kutta method Radau5 from odepkg is
used, [10], [11]. As basis functions pk (τ ) we choose the
problem-specific PWM basis functions introduced earlier.
The reference solution to which all results are compared is calculated directly by solving (1) with a very
accurate time discretization (tol = 10^-12). The buck
converter is operated in the range of frequencies from
500 Hz to 100 kHz. To determine the accuracy of the
MPDE approach the relative L2 -error of the buck converter voltage
ε = ‖vC,ref − vC‖_L2(Ω) / ‖vC,ref‖_L2(Ω)   (15)
with respect to t ∈ Ω = [0, 10] ms is approximated by
numerical quadrature. This calculation is performed for
each frequency fs = 1/Ts.
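The error measure (15) is straightforward to approximate. A small sketch (the signals are made-up stand-ins, not the converter waveforms):

```python
import numpy as np

def l2_norm(f, t):
    """L2 norm over t, approximated by the trapezoidal rule."""
    h = f * f
    return np.sqrt(np.sum((h[1:] + h[:-1]) / 2 * np.diff(t)))

def rel_l2_error(v_ref, v, t):
    """Relative L2 error as in (15)."""
    return l2_norm(v_ref - v, t) / l2_norm(v_ref, t)

# Sanity check on assumed signals: a sine reference versus the same sine
# with a constant 0.1 offset on [0, 2*pi]; the exact value is 0.1 * sqrt(2).
t = np.linspace(0.0, 2 * np.pi, 100001)
err = rel_l2_error(np.sin(t), np.sin(t) + 0.1, t)
```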
The MPDE solution is expanded using Np = 4
basis functions and (11) is solved using a tolerance of
tol = 10−6 . The result is exemplary depicted in Fig. 3.
Fig. 5 shows the error of the approach without (denoted
as “original approach”) and with the simplified evaluation
of the nonlinearity (8) (denoted as “simplified approach”)
with respect to the frequency. Without the simplification,
the integrals in (4) are evaluated using Gauss-Kronrod
quadrature and lead to an accuracy of ε < 10^-4 for
frequencies fs > 10 kHz. As expected, the higher the
frequency, the higher the accuracy of the method since
the magnitude of the ripples in relation to the envelope

[Figure 6: log-log plot of solution time (s) versus frequency fs (Hz) for the original and simplified approaches.]
Fig. 6. Computational time for the original MPDE formulation and the
one with simplified evaluation of nonlinearity (Np = 4).
decreases. The simplified approach introduces an additional error due to the approximation of the integrals.
However, the accuracy for fs > 10 kHz, i.e., ε < 10^-3,
is still sufficient for most applications. Fig. 7 shows
the solutions of reference, simplified and original MPDE
approach versus time at a low frequency fs = 1000 Hz.
Here, the error committed in the simplified approach is
clearly distinguishable.
Table I shows the speedup (in terms of time for solving
the equation systems) of the simplified MPDE approach
compared to a conventional time discretization at the
same accuracy. For higher frequency, the conventional
time discretization of (1) becomes more and more inefficient as a higher number of ripples has to be resolved.
The MPDE approach on the contrary resolves the ripples
with the Galerkin approach so that the time discretization
resolves only the envelope. This leads to a solution time
almost independent of the frequency, i.e., approximately
1 s for the simplified and 100 s for the original approach,
see Fig. 6. The higher solution time of the original
approach is a result of the evaluation of the integrals in
each time step. Thus the speedup of the original approach
is much lower compared to the simplified approach.
V. CONCLUSION
The MPDE approach is applied to a nonlinear low-frequency example with pulsed excitation. The solution is
obtained by a Galerkin approach and time discretization.
To evaluate the nonlinearity the ripple components due to
the pulsed excitation are neglected and only the envelope
is used. The accuracy of the proposed method rises with
increasing excitation frequency and the method offers
a considerable speedup compared to conventional time
discretization with the same accuracy.
ACKNOWLEDGEMENT
This work is supported by the ‘Excellence Initiative’ of
German Federal and State Governments and the Graduate
School CE at TU Darmstadt and in part by the Walloon Region of Belgium (WBGreen FEDO, grant RW1217703).
[Figure 7: plot of output voltage vC (V) versus time (ms) for the original MPDE (Np = 4), the simplified MPDE (Np = 4) and the reference solution.]
Fig. 7. Comparison of solutions for fs = 1000 Hz.
TABLE I
SPEEDUP OF MPDE APPROACH (Np = 4) COMPARED TO CONVENTIONAL TIME DISCRETIZATION FOR DIFFERENT FREQUENCIES.

fs (kHz)    approx. speedup    approx. error
10          60                 8 · 10^-4
50          400                3 · 10^-5
100         1000               7 · 10^-6
REFERENCES
[1] S. Schöps, H. De Gersem, and A. Bartel, “A cosimulation
framework for multirate time-integration of field/circuit coupled
problems,” IEEE Trans. Magn., vol. 46, no. 8, pp. 3233–3236,
Jul. 2010.
[2] H. G. Brachtendorf, G. Welsch, R. Laur, and A. Bunse-Gerstner,
“Numerical steady state analysis of electronic circuits driven by
multi-tone signals,” Electr. Eng., vol. 79, no. 2, pp. 103–112, 1996.
[3] J. Roychowdhury, “Analyzing circuits with widely separated time
scales using numerical PDE methods,” IEEE Trans. Circ. Syst.
Fund. Theor. Appl., vol. 48, no. 5, pp. 578–594, May 2001.
[4] T. Mei, J. Roychowdhury, T. Coffey, S. Hutchinson, and D. Day,
“Robust, stable time-domain methods for solving MPDEs of
fast/slow systems,” IEEE Trans. Comput. Aided. Des. Integrated
Circ. Syst., vol. 24, no. 2, pp. 226–239, Feb. 2005.
[5] K. Bittner and H. G. Brachtendorf, “Adaptive multi-rate wavelet
method for circuit simulation,” Radioengineering, vol. 23, no. 1,
Apr. 2014.
[6] J. Gyselinck, C. Martis, and R. V. Sabariego, “Using dedicated
time-domain basis functions for the simulation of pulse-width-modulation controlled devices – application to the steady-state
regime of a buck converter,” in Electromotion 2013, Cluj-Napoca,
Romania, Oct. 2013.
[7] A. Pels, J. Gyselinck, R. V. Sabariego, and S. Schöps, “Multirate partial differential equations for the efficient simulation of
low-frequency problems with pulsed excitations,” 2017, arXiv
1707.01947.
[8] H. A. Haus and J. R. Melcher, Electromagnetic Fields and Energy.
Prentice-Hall, 1989.
[9] J. W. Eaton, D. Bateman, S. Hauberg, and R. Wehbring,
The GNU Octave 4.0 Reference Manual 1/2: Free Your
Numbers. Samurai Media Limited, Oct. 2015. [Online].
Available: http://www.gnu.org/software/octave/doc/interpreter
[10] E. Hairer and G. Wanner, “Stiff differential equations solved by
radau methods,” J. Comput. Appl. Math., vol. 111, no. 1–2, pp.
93–111, 1999.
[11] T. Treichl and J. Corno, ODEpkg – A package for solving
ordinary differential equations and more., 0th ed., GNU Octave,
2015. [Online]. Available: https://octave.sourceforge.io/odepkg/
How many units can a commutative ring have?
arXiv:1701.02341v1 [] 9 Jan 2017
Sunil K. Chebolu and Keir Lockridge
Abstract. László Fuchs posed the following problem in 1960, which remains open: classify
the abelian groups occurring as the group of all units in a commutative ring. In this note,
we provide an elementary solution to a simpler, related problem: find all cardinal numbers
occurring as the cardinality of the group of all units in a commutative ring. As a by-product,
we obtain a solution to Fuchs’ problem for the class of finite abelian p-groups when p is an
odd prime.
1. INTRODUCTION It is well known that a positive integer k is the order of the
multiplicative group of a finite field if and only if k is one less than a prime power.
The corresponding fact for finite commutative rings, however, is not as well known.
Our motivation for studying this—and the question raised in the title—stems from a
problem posed by László Fuchs in 1960: characterize the abelian groups that are the
group of units in a commutative ring (see [8]). Though Fuchs’ problem remains open, it
has been solved for various specialized classes of groups, where the ring is not assumed
to be commutative. Examples include cyclic groups ([10]), alternating, symmetric and
finite simple groups ([4, 5]), indecomposable abelian groups ([1]), and dihedral groups
([2]). In this note we consider a much weaker version of Fuchs’ problem, determining
only the possible cardinal numbers |R× |, where R is a commutative ring with group
of units R× .
In a previous note in the MONTHLY ([6]), Ditor showed that a finite group G of odd
order is the group of units of a ring if and only if G is isomorphic to a direct product
of cyclic groups Gi, where |Gi| = 2^ni − 1 for some positive integer ni. This implies
that an odd positive integer is the number of units in a ring if and only if it is of the
form ∏_{i=1}^{t} (2^ni − 1) for some positive integers n1, . . . , nt. As Ditor mentioned in his
paper, this theorem may be derived from the work of Eldridge in [7] in conjunction
with the Feit-Thompson theorem which says that every finite group of odd order is
solvable. However, the purpose of his note was to give an elementary proof of this
result using classical structure theory. Specifically, Ditor’s proof uses the following key
ingredients: Maschke’s theorem, which classifies (for finite groups) the group algebras
over a field that are semisimple rings; the Artin-Wedderburn theorem, which describes
the structure of semisimple rings; and Wedderburn’s little theorem, which states that
every finite division ring is a field.
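The set of realizable odd unit counts described above is easy to enumerate by computer. A quick sketch of the multiplicative closure (the bound N is arbitrary):

```python
# Odd unit counts are exactly the products of numbers of the form 2^n - 1
# (Ditor's theorem). Enumerate all of them up to an arbitrary bound N.
N = 10**4
mersenne_numbers = []
n = 2
while 2**n - 1 <= N:
    mersenne_numbers.append(2**n - 1)
    n += 1

representable = {1}        # empty product (equivalently, factors 2^1 - 1 = 1)
for m in mersenne_numbers:
    for k in sorted(representable):
        prod = k * m
        while prod <= N:
            representable.add(prod)
            prod *= m
```

For instance 9 = 3 · 3 and 21 = 3 · 7 are realizable, while 5, 11 and 13 are not.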
In this note we give another elementary proof of Ditor’s theorem for commutative
rings. We also extend the theorem to even numbers and infinite cardinals, providing a
complete answer to the question posed in the title; see Theorem 8. Our approach also
gives an elementary solution to Fuchs’ problem for finite abelian p-groups when p is
an odd prime; see Corollary 4.
2. FINITE CARDINALS We begin with two lemmas. For a prime p, let Fp denote
the field of p elements. Recall that any ring homomorphism φ : A −→ B maps units
to units, so φ induces a group homomorphism φ× : A× −→ B × .
Lemma 1. Let φ : A −→ B be a homomorphism of commutative rings. If the induced
group homomorphism φ× : A× −→ B× is surjective, then there is a quotient A′ of A
such that (A′)× ≅ B×.
Proof. By the first isomorphism theorem for rings, Im φ is isomorphic to a quotient
of A. It therefore suffices to prove that (Im φ)× = B × . Since ring homomorphisms
map units to units and φ× is surjective, we have B × ⊆ (Im φ)× . The reverse inclusion
(Im φ)× ⊆ B × holds since every unit in the subring Im φ must also be a unit in the
ambient ring B .
Lemma 2. Let V and W be finite fields of characteristic 2.
1. The tensor product V ⊗F2 W is isomorphic as a ring to a finite direct product
of finite fields of characteristic 2.
2. As F2 -vector spaces, dim(V ⊗F2 W ) = (dim V )(dim W ).
Proof. To prove (1), let K and L be finite fields of characteristic 2. By the primitive element theorem, we have L ≅ F2[x]/(f(x)), where f(x) is an irreducible polynomial
in F2[x]. This implies that

K ⊗_F2 L ≅ K[x]/(f(x)).
The irreducible factors of f(x) in K[x] are distinct since the extension L/F2 is separable. Now let f(x) = ∏_{i=1}^{t} fi(x) be the factorization of f(x) into its distinct irreducible factors in K[x]. We then have the following series of ring isomorphisms:

K ⊗_F2 L ≅ K[x]/(f(x)) ≅ K[x]/(∏_{i=1}^{t} fi(x)) ≅ ∏_{i=1}^{t} K[x]/(fi(x)),
where the last isomorphism follows from the Chinese remainder theorem for the ring
K[x]. Since each factor K[x]/(fi (x)) is a finite field of characteristic 2, we see that
K ⊗F2 L is isomorphic as a ring to a direct product of finite fields of characteristic 2,
as desired.
For (2), simply note that if {v1 , . . . , vk } is a basis for V and {w1 , . . . , wl } is a
basis for W , then {vi ⊗ wj | 1 ≤ i ≤ k, 1 ≤ j ≤ l} is a basis for V ⊗F2 W .
We may now classify the finite abelian groups of odd order that appear as the group
of units in a commutative ring.
Proposition 3. Let G be a finite abelian group of odd order. The group G is isomorphic
to the group of units in a commutative ring if and only if G is isomorphic to the group
of units in a finite direct product of finite fields of characteristic 2. In particular, an
odd positive integer k is the number of units in a commutative ring if and only if k is
of the form ∏_{i=1}^{t} (2^ni − 1) for some positive integers n1, . . . , nt.
Proof. The ‘if’ direction of the second statement follows from the fact that, for rings
A and B, (A × B)× ≅ A× × B×.
For the converse, since the trivial group is the group of units of F2 , let G be a
nontrivial finite abelian group of odd order and let R be a commutative ring with
group of units G. Since G has odd order, the unit −1 in R must have order 1. This
implies that R has characteristic 2.
Now let

G ≅ C_{p1^α1} × · · · × C_{pk^αk}
denote a decomposition of G as a direct product of cyclic groups of prime power order
(the primes pi are not necessarily distinct). Let gi denote a generator of the ith factor.
Define a ring S by

S = F2[x1, . . . , xk] / (x1^(p1^α1) − 1, . . . , xk^(pk^αk) − 1).
Since R is a commutative ring of characteristic 2, there is a natural ring homomorphism S −→ R sending xi to gi for all i. Since the gi ’s together generate G, this
map induces a surjection S × −→ R× , and hence by Lemma 1 there is a quotient of S
whose group of units is isomorphic to G.
Since any quotient of a finite direct product of fields is again a finite direct product
of fields (of possibly fewer factors), the proof will be complete if we can show that
S is isomorphic as a ring to a finite direct product of fields of characteristic 2. To see
this, observe that the map
F2[x1]/(x1^(p1^α1) − 1) × · · · × F2[xk]/(xk^(pk^αk) − 1) −→ S

sending a k-tuple to the product of its entries is surjective and F2-linear in each factor;
by the universal property of the tensor product, it induces a surjective ring homomorphism

F2[x1]/(x1^(p1^α1) − 1) ⊗_F2 · · · ⊗_F2 F2[xk]/(xk^(pk^αk) − 1) −→ S.   (†)

The dimension of the source of (†) as an F2-vector space is p1^α1 · · · pk^αk by Lemma 2
(2); this is also the dimension of the target (count monomials in the polynomial ring S ).
Consequently, the map (†) is an isomorphism of rings. The irreducible factors of each
polynomial xi^(pi^αi) − 1 are distinct since this polynomial has no roots in common with its
derivative (pi is odd). Therefore by the Chinese remainder theorem, each tensor factor
is a finite direct product of finite fields of characteristic 2. Since the tensor product
distributes over finite direct products, we may use Lemma 2 (1) to conclude that S is
ring isomorphic to a finite direct product of finite fields of characteristic 2.
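Proposition 3's construction can be checked by brute force for small cases. A sketch that counts the units of F2[x]/(x^m − 1), with polynomials encoded as bitmask integers and the values of m chosen arbitrarily:

```python
def mul(a, b, m):
    """Multiply two F2-polynomials (bitmask ints) modulo x^m - 1."""
    c = 0
    for i in range(m):
        if (a >> i) & 1:
            for j in range(m):
                if (b >> j) & 1:
                    c ^= 1 << ((i + j) % m)   # x^m wraps around to 1
    return c

def count_units(m):
    elems = range(1, 2**m)
    return sum(1 for a in elems if any(mul(a, b, m) == 1 for b in elems))

counts = {m: count_units(m) for m in (3, 5, 7)}
```

The counts match the field decompositions: x³ − 1 gives F2 × F4 (3 units), x⁵ − 1 gives F2 × F16 (15 units), and x⁷ − 1 gives F2 × F8 × F8 (49 units).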
For any odd prime p, we now characterize the finite abelian p-groups that are realizable as the group of units of a commutative ring. Recall that a finite p-group is a
finite group whose order is a power of p. An elementary abelian finite p-group is a
finite group that is isomorphic to a finite direct product of cyclic groups of order p.
Corollary 4. Let p be an odd prime. A finite abelian p-group G is the group of units
of a commutative ring if and only if G is an elementary abelian p-group and p is a
Mersenne prime.
Proof. The ‘if’ direction follows from the fact that if p = 2^n − 1 is a Mersenne prime,
then

(F_{p+1} × · · · × F_{p+1})× ≅ Cp × · · · × Cp.
For the other direction, let p be an odd prime and let G be a finite abelian p-group. If
G is the group of units of a commutative ring, then by Proposition 3, G ≅ T× where T
is a finite direct product of finite fields of characteristic 2. Consequently,

G ≅ C_{2^n1 − 1} × · · · × C_{2^nt − 1}.

Since each factor must be a p-group, for each i we have 2^ni − 1 = p^zi for some
positive integer zi. We claim that zi = 1 for all i. This follows from [3, 2.3], but since
the argument is short we include it here for convenience.

Assume to the contrary that zi > 1 for some i. Consider the equation p^zi + 1 =
2^ni. Since p > 1, we have ni ≥ 2 and hence p^zi ≡ −1 mod 4. This means p ≡ −1
mod 4 and zi is odd. Since zi > 1, we have a nontrivial factorization

2^ni = p^zi + 1 = (p + 1)(p^(zi−1) − p^(zi−2) + · · · − p + 1),

so p^(zi−1) − p^(zi−2) + · · · − p + 1 must be even. On the other hand, since zi and p are
both odd, working modulo 2 we obtain

0 ≡ p^(zi−1) − p^(zi−2) + · · · − p + 1 ≡ zi ≡ 1 mod 2,

a contradiction. Hence zi = 1 for all i, so p is Mersenne and G is an elementary
abelian p-group.
The above corollary does not hold for the Mersenne prime p = 2; for example, C4 =
F5×. As far as we know, Fuchs’ problem for finite abelian 2-groups remains open.
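The key step of the proof — that p^z + 1 being a power of two forces z = 1 — can also be checked by brute force over a small range (the bounds are arbitrary):

```python
def is_power_of_two(n):
    return n > 0 and n & (n - 1) == 0

# Search for odd primes p < 200 and exponents z with p^z + 1 a power of two;
# the only hits should be z = 1 with p a Mersenne prime.
primes = [p for p in range(3, 200, 2) if all(p % d for d in range(2, p))]
solutions = [(p, z) for p in primes for z in range(1, 40)
             if is_power_of_two(p**z + 1)]
```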
We next provide a simple example demonstrating that every even number is the
number of units in a commutative ring.
Proposition 5. Every even number is the number of units in a commutative ring.
Proof. Let m be a positive integer and consider the commutative ring

R_2m = Z[x]/(x², mx).

Every element in this ring can be uniquely represented by an element of the form
a + bx, where a is an arbitrary integer and 0 ≤ b ≤ m − 1. We will now show that
a + bx is a unit in this ring if and only if a is either 1 or −1; this implies the ring has
exactly 2m units. (In fact, it can be shown that R_2m× ≅ C2 × Cm.)

If a + bx is a unit in R_2m, then there exists an element a′ + b′x such that (a +
bx)(a′ + b′x) = 1 in R_2m. Since x² = 0 in R_2m, we must have aa′ = 1 in Z; i.e., a
is 1 or −1. Conversely, if a is 1 or −1, we see that (a + bx)(a − bx) = 1 in R_2m.
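The unit group just described can be verified directly from the multiplication rule (a + bx)(a′ + b′x) = aa′ + (ab′ + a′b)x. A small sketch with an arbitrary choice of m:

```python
# Units of R_2m = Z[x]/(x^2, mx): elements a + bx with a = +-1 and b mod m.
m = 6
units = [(a, b) for a in (1, -1) for b in range(m)]

def mul(u, v):
    """(a + bx)(a' + b'x) = aa' + (ab' + a'b)x, with x^2 = 0 and b taken mod m."""
    (a, b), (a2, b2) = u, v
    return (a * a2, (a * b2 + a2 * b) % m)

# Every unit has an inverse within the list of 2m candidates.
inverses = {u: next(v for v in units if mul(u, v) == (1, 0)) for u in units}
```

The inverse of ±1 + bx is ±1 − bx, exactly as in the proof.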
3. INFINITE CARDINALS Propositions 3 and 5 solve our problem for finite cardinals. For infinite cardinals, we will prove the following proposition.
Proposition 6. Every infinite cardinal is the number of units in a commutative ring.
Our proof relies mainly on the Cantor-Bernstein theorem:
Theorem 7 (Cantor-Bernstein). Let A and B be any two sets. If there exist injective mappings f : A −→ B and g : B −→ A, then there exists a bijective mapping
h : A −→ B. In other words, if |A| ≤ |B| and |B| ≤ |A|, then |A| = |B|.
We also use other standard facts from set theory which may be found in [9]. For example, we make frequent use of the fact that whenever α and β are infinite cardinals
with α ≤ β , then αβ ≤ β . Recall that ℵ0 denotes the cardinality of the set of natural
numbers.
Proof of Proposition 6. Let λ be an infinite cardinal and let S be a set whose cardinality is λ. Consider F2 (S), the field of rational functions in the elements of S . We claim
that |F2 (S)× | = λ. By the Cantor-Bernstein theorem, it suffices to prove that |S| ≤
|F2 (S)× | and |F2 (S)× | ≤ |S|. Since S ⊆ F2 (S)× , it is clear that |S| ≤ |F2 (S)× |.
For the reverse inequality, first observe that if A is a finite set, then |F2 [A]| = ℵ0 .
This follows by induction on the size of A, because F2 [x] is countable and R[x] is
countable whenever R is countable. We now have the following:
|F2(S)×| ≤ |F2(S)|
  ≤ |F2[S] × F2[S]|   (every rational function is a ratio of two polynomials)
  = |F2[S]|²
  = |F2[S]|
  ≤ Σ_{A⊂S, 1≤|A|<ℵ0} |F2[A]|
  = Σ_{A⊂S, 1≤|A|<ℵ0} ℵ0
  ≤ Σ_{i=1}^{∞} |S|^i ℵ0 = Σ_{i=1}^{∞} |S| ℵ0 = |S| ℵ0² = |S| ℵ0 = |S|.
Combining Propositions 3, 5, and 6, we obtain our main result.
Theorem 8. Let λ be a cardinal number. There exists a commutative ring R with
|R×| = λ if and only if λ is equal to
1. an odd number of the form ∏_{i=1}^{t} (2^ni − 1) for some positive integers n1, . . . , nt,
2. an even number, or
3. an infinite cardinal number.
ACKNOWLEDGMENTS. We would like to thank George Seelinger for simplifying our presentation of a
ring with an even number of units. We also would like to thank the anonymous referees for their comments.
REFERENCES
1. S. K. Chebolu, K. Lockridge, Fuchs’ problem for indecomposable abelian groups, J. Algebra 438 (2015)
325–336.
2. ———, Fuchs’ problem for dihedral groups, J. Pure Appl. Algebra 221 no. 2 (2017) 971–982.
3. ———, Fields with indecomposable multiplicative groups, Expo. Math. 34 (2016) 237–242.
4. C. Davis, T. Occhipinti, Which finite simple groups are unit groups? J. Pure Appl. Algebra 218 no. 4
(2014) 743–744.
5. ———, Which alternating and symmetric groups are unit groups? J. Algebra Appl. 13 no. 3 (2014).
6. S. Z. Ditor, On the group of units of a ring, Amer. Math. Monthly 78 (1971) 522–523.
7. K. E. Eldridge, On ring structures determined by groups, Proc. Amer. Math. Soc. 23 (1969) 472–477.
8. L. Fuchs, Abelian groups. International Series of Monographs on Pure and Applied Mathematics, Pergamon Press, New York-Oxford-London-Paris, 1960.
9. P. R. Halmos, Naive set theory. The University Series in Undergraduate Mathematics, D. Van Nostrand
Co., Princeton, NJ-Toronto-London-New York, 1960.
10. K. R. Pearson, J. E. Schneider, Rings with a cyclic group of units, J. Algebra 16 (1970) 243–251.
Department of Mathematics, Illinois State University, Normal, IL 61790, USA
[email protected]
Department of Mathematics, Gettysburg College, Gettysburg, PA 17325, USA
[email protected]
Efficiently decodable codes for the binary deletion channel ∗
arXiv:1705.01963v2 [] 26 Jul 2017
Venkatesan Guruswami†
Ray Li‡
Carnegie Mellon University
Pittsburgh, PA 15213
Abstract
In the random deletion channel, each bit is deleted independently with probability p. For the random
deletion channel, the existence of codes of rate (1 − p)/9, and thus bounded away from 0 for any p < 1,
has been known. We give an explicit construction with polynomial time encoding and deletion correction
algorithms with rate c 0 (1 − p) for an absolute constant c 0 > 0.
1 Introduction
We consider the problem of designing error-correcting codes for reliable and efficient communication on the
binary deletion channel. The binary deletion channel (BDC) deletes each transmitted bit independently with
probability p, for some p ∈ (0, 1) which we call the deletion probability. Crucially, the locations of the deleted
bits are not known at the decoder, who receives a subsequence of the original transmitted sequence. The
loss of synchronization in symbol locations makes the noise model of deletions challenging to cope with.
As one indication of this, we still do not know the channel capacity of the binary deletion channel. Quoting
from the first page of Mitzenmacher’s survey [17]: “Currently, we have no closed-form expression for the
capacity, nor do we have an efficient algorithmic means to numerically compute this capacity.” This is in
sharp contrast with the noise model of bit erasures, where each bit is independently replaced by a ’?’ with
probability p (the binary erasure channel (BEC)), or of bit errors, where each bit is flipped independently
with probability p (the binary symmetric channel (BSC)). The capacity of the BEC and BSC equal 1 − p and
1 − h(p) respectively, and we know codes of polynomial complexity with rate approaching the capacity in
each case.
The capacity of the binary deletion channel is clearly at most 1 − p, the capacity of the simpler binary
erasure channel. Diggavi and Grossglauser [3] establish that the capacity of the deletion channel for p ≤ 1/2
is at least 1 − h(p). Kalai, Mitzenmacher, and Sudan [11] proved this lower bound is tight as p → 0, and
Kanoria and Montanari [12] determined a series expansion that can be used to determine the capacity exactly.
Turning to large p, Rahmati and Duman [18] prove that the capacity is at most 0.4143(1 − p) for p ≥ 0.65.
Drinea and Mitzenmacher [4, 5] proved that the capacity of the BDC is at least (1 − p)/9, which is within a
constant factor of the upper bound. In particular, the capacity is positive for every p < 1, which is perhaps
surprising. The asymptotic behavior of the capacity of the BDC at both extremes of p → 0 and p → 1 is
thus known.
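The channel itself is trivial to simulate. A sketch (parameters arbitrary) confirming that the received word is a subsequence of the transmitted word, of expected length (1 − p)n:

```python
import random

def bdc(bits, p, rng):
    """Binary deletion channel: delete each bit independently with probability p."""
    return [b for b in bits if rng.random() >= p]

rng = random.Random(0)
n, p = 100_000, 0.3
sent = [rng.randrange(2) for _ in range(n)]
received = bdc(sent, p, rng)
```

Note that, unlike the erasure channel, the output carries no positional information: the decoder sees only the surviving bits in order.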
∗ Research supported in part by NSF grant CCF-1422045.
† Email: [email protected]
‡ Email: [email protected]
This work is concerned with constructive results for coding for the binary deletion channel. That is, we
seek codes that can be constructed, encoded, and decoded from deletions caused by the BDC, in polynomial
time. Recently, there has been good progress on codes for adversarial deletions, including constructive
results. Here the model is that the channel can delete an arbitrary subset of pn bits in the n-bit codeword. A
code capable of correcting pn worst-case deletions can clearly also correct deletions caused by a BDC with
deletion probability (p − ϵ) with high probability, so one can infer results for the BDC from some results
for worst-case deletions. For small p, Guruswami and Wang [9] constructed binary codes of rate 1 − O(√p)
to efficiently correct a p fraction worst-case deletions. So this also gives codes of rate approaching 1 for
the BDC when p → 0. For larger p, Kash et al. [13] proved that randomly chosen codes of small enough
rate R > 0 can correctly decode against pn adversarial deletions when p ≤ 0.17. Even non-constructively,
this remained the best achievability result in terms of correctable deletion fraction until the recent work of
Bukh, Guruswami, and Håstad [2] who constructed codes of positive rate efficiently decodable against pn
adversarial deletions for any p < √2 − 1. For adversarial deletions, it is impossible to correct a deletion
fraction of 1/2, whereas the capacity of the BDC is positive for all p < 1. So solving the problem for the
much harder worst-case deletions is not a viable approach to construct positive rate codes for the BDC for
p > 1/2.
To the best of our knowledge, explicit efficiently decodable code constructions were not available for the
binary deletion channel for arbitrary p < 1. We present such a construction in this work. Our rate is worse
than the (1 − p)/9 achieved non-constructively, but has asymptotically the same dependence on p for p → 1.
Theorem 1.1. Let p ∈ (0, 1). There is an explicit family of binary codes that (1) has rate (1 − p)/110, (2)
is constructible in polynomial time, (3) encodable in time O(N), and (4) decodable with high probability on
the binary deletion channel with deletion probability p in time O(N²). (Here N is the block length of the
code.)
1.1 Some other related work
One work that considers efficient recovery against random deletions is by Yazdi and Dolecek [20]. In their
setting, two parties Alice and Bob are connected by a two-way communication channel. Alice has a string X ,
Bob has string Y obtained by passing X through a binary deletion channel with deletion probability p ≪ 1,
and Bob must recover X . They produce a polynomial-time synchronization scheme that transmits a total of
O(pn log(1/p)) bits and allows Bob to recover X with probability exponentially approaching 1.
For other models of random synchronization errors, Kirsch and Drinea [14] prove information capacity
lower bounds for channels with i.i.d deletions and duplications. Fertonani et al. [6] prove capacity bounds
for binary channels with i.i.d insertions, deletions, and substitutions.
For deletion channels over non-binary alphabets, Rahmati and Duman [18] prove a capacity upper bound
of C₂(p) + (1 − p) log(|Σ|/2), where C₂(p) denotes the capacity of the binary deletion channel with deletion
probability p, when the alphabet size |Σ| is even. In particular, using the best known bound for C₂(p) of
C₂(p) ≤ 0.4143(1 − p), the upper bound is (1 − p)(log |Σ| − 0.5857).
In [8], the authors of this paper consider the model of oblivious deletions, which is in between the BDC
and adversarial deletions in power. Here, the channel can delete any pn bits of the codeword, but must do
so without knowledge of the codeword. In this model, they prove the existence of codes of positive rate for
correcting any fraction p < 1 of oblivious deletions.
1.2 Our construction approach
Our construction concatenates a high rate outer code over a large alphabet that is efficiently decodable against
a small fraction of adversarial insertions and deletions, with a good inner binary code. For the outer code, we
can use the recent construction of [10]. To construct the inner code, we first choose a binary code correcting
a small fraction of adversarial deletions. By concentration bounds, duplicating bits of a codeword in a
disciplined manner is effective against the random deletion channel, so we, for some constant B, duplicate
every bit of the binary code B/(1 −p) times. We further ensure our initial binary code has only runs of length
1 and 2 to maximize the effectiveness of duplication. We add small buffers of 0s between inner codewords
to facilitate decoding.
One might wonder whether it would be possible to use Drinea and Mitzenmacher’s existential result
[4, 5] of a (1 − p)/9 capacity lower bound as a black box inner code to achieve a better rate together with
efficient decodability. We discuss this approach in §3.2 and elaborate on what makes such a construction
difficult to implement.
2 Preliminaries
General Notation. Throughout the paper, log x refers to the base-2 logarithm.
We use interval notation [a, b] = {a, a + 1, . . . , b} to denote intervals of integers, and we use [a] = [1, a] =
{1, 2, . . . , a}.
Let Binomial(n, p) denote the Binomial distribution.
Words. A word is a sequence of symbols from some alphabet. We denote explicit words using angle
brackets, like ⟨01011⟩. We denote string concatenation of two words w and w′ with ww′. We denote
w^k = ww · · · w where there are k concatenated copies of w.
A subsequence of a word w is a word obtained by removing some (possibly none) of the symbols in w.
Let ∆i/d(w1, w2) denote the insertion/deletion distance between w1 and w2, i.e. the minimum number of
insertions and deletions needed to transform w1 into w2. By a lemma due to Levenshtein [15], this is equal
to |w1| + |w2| − 2 LCS(w1, w2), where LCS denotes the length of the longest common subsequence.
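As an illustration (ours, not from the paper; the function names are our own), Levenshtein's identity turns the insertion/deletion distance into a standard LCS dynamic program:

```python
def lcs_len(a, b):
    # Standard dynamic program for the length of the longest common subsequence.
    dp = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i, x in enumerate(a):
        for j, y in enumerate(b):
            dp[i + 1][j + 1] = dp[i][j] + 1 if x == y else max(dp[i][j + 1], dp[i + 1][j])
    return dp[len(a)][len(b)]

def indel_distance(a, b):
    # Levenshtein's identity: |a| + |b| - 2*LCS(a, b).
    return len(a) + len(b) - 2 * lcs_len(a, b)
```

For example, ⟨1100⟩ and ⟨0011⟩ have LCS length 2, so their insertion/deletion distance is 4.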
Define a run of a word w to be a maximal single-symbol subword. That is, a subword w ′ in w consisting
of a single symbol such that any longer subword containing w ′ has at least two different symbols. Note the
runs of a word partition the word. For example, 110001 has 3 runs: one run of 0s and two runs of 1s.
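The run decomposition can be sketched in a few lines (a helper of our own, not from the paper):

```python
import itertools

def runs(w):
    # Maximal single-symbol subwords, in order; together they partition the word.
    return ["".join(g) for _, g in itertools.groupby(w)]
```

On the example above, runs("110001") yields ["11", "000", "1"]: three runs, and concatenating them recovers the word.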
We say that c ∈ {0, 1}m and c ′ ∈ {0, 1}m are confusable under δm deletions if it is possible to apply δm
deletions to c and c ′ and obtain the same result. If δ is understood, we simply say c and c ′ are confusable.
Concentration Bounds. We use the following forms of Chernoff bound.
Lemma 2.1 (Chernoff). Let A1, . . . , An be i.i.d. random variables taking values in [0, 1]. Let A = Σᵢ₌₁ⁿ Aᵢ and δ ∈ [0, 1]. Then

Pr[A ≤ (1 − δ) E[A]] ≤ exp(−δ² E[A]/2).   (1)

Furthermore,

Pr[A ≥ (1 + δ) E[A]] ≤ (e^δ/(1 + δ)^{1+δ})^{E[A]}.   (2)

We also have the following corollary, whose proof is in Appendix A.
Lemma 2.2. Let 0 < α < β. Let A1, . . . , An be independent random variables taking values in [0, β] such
that, for all i, E[Aᵢ] ≤ α. For γ ∈ [α, 2α], we have

Pr[Σᵢ₌₁ⁿ Aᵢ ≥ nγ] ≤ exp(−(γ − α)² n/(3αβ)).   (3)
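As a quick sanity check (ours, not the paper's), the bound (3) can be compared against an exact binomial tail in the special case of Bernoulli(α) variables with β = 1:

```python
import math

def lemma22_bound(n, alpha, beta, gamma):
    # Right-hand side of inequality (3).
    return math.exp(-((gamma - alpha) ** 2) * n / (3 * alpha * beta))

def exact_binomial_tail(n, q, k):
    # Exact Pr[Binomial(n, q) >= k].
    return sum(math.comb(n, i) * q**i * (1 - q) ** (n - i) for i in range(k, n + 1))

# Bernoulli(0.1) summands, so beta = 1; gamma = 0.15 lies in [alpha, 2*alpha].
n, alpha, gamma = 200, 0.1, 0.15
tail = exact_binomial_tail(n, alpha, math.ceil(n * gamma))
bound = lemma22_bound(n, alpha, 1.0, gamma)
```

The exact tail is dominated by the bound, as the lemma promises.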
3 Efficiently decodable codes for random deletions with p approaching 1
3.1 Construction
We present a family of constant-rate codes that decode with high probability on a binary deletion channel
with deletion fraction p (BDCp). These codes have rate c₀(1 − p) for an absolute positive constant c₀, which
is within a constant factor of the upper bound of (1 − p), which holds even for the erasure channel. The best
known rate for the binary deletion channel, achieved non-constructively by Drinea and Mitzenmacher [4], is
(1 − p)/9.
The construction is based on the intuition that deterministic codes are better than random codes for the
deletion channel. Indeed, for adversarial deletions, length-n random codes correct at most 0.22n deletions
[13], while explicitly constructed codes can correct close to (√2 − 1)n deletions [2].
We begin by borrowing a result from [9].
Lemma 3.1 (Corollary of Lemma 2.3 of [9]). Let 0 < δ < 1/2. For every binary string c ∈ {0, 1}^m, there are
at most (m choose δm)² · 2/((1 − δ)m) strings c′ ∈ {0, 1}^m such that c and c′ are confusable under δm deletions.
The next lemma gives codes against a small fraction of adversarial deletions with an additional run-length
constraint on the codewords.
Lemma 3.2. Let δ > 0. There exists a length m binary code of rate R = 0.6942 − 2h(δ ) − O(log(δm)/m)
correcting a δ fraction of adversarial insertions and deletions such that each codeword contains only runs
of size 1 and 2. Furthermore this code is constructible in time Õ(2(0.6942+R)m ).
Proof. It is easy to show that the number of codewords with only runs of 1 and 2 is Fm, the mth Fibonacci
number, and it is well known that Fm = φ^{m(1+o(1))} ≈ 2^{0.6942m}, where φ is the golden ratio. Now we construct
the code by choosing it greedily. Each codeword is confusable with at most (m choose δm)² · 2/((1 − δ)m) other codewords,
so the number of codewords we can choose is at least

2^{0.6942m} / ((m choose δm)² · 2/((1 − δ)m)) = 2^{m(0.6942 − 2h(δ) − O(log(δm)/m))}.   (4)
(4)
We can find all words of length m whose run lengths are only 1 and 2 by recursion in time O(Fm) =
O(2^{0.6942m}). Running the greedy algorithm, we need to, for at most Fm · 2^{Rm} pairs of such words, determine
whether the pair is confusable (we only need to check confusability of a candidate word with words already
added to the code). Checking confusability of two words under adversarial deletions reduces to checking
whether the longest common subsequence is at least (1 − δ)m, which can be done in time O(m²). This gives
an overall runtime of O(m² · Fm · 2^{Rm}) = Õ(2^{(0.6942+R)m}).
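The greedy procedure in the proof can be sketched directly for toy parameters (a brute-force sketch of our own; it is only feasible for small m):

```python
import itertools

def lcs_len(a, b):
    # Longest common subsequence via dynamic programming.
    dp = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i, x in enumerate(a):
        for j, y in enumerate(b):
            dp[i + 1][j + 1] = dp[i][j] + 1 if x == y else max(dp[i][j + 1], dp[i + 1][j])
    return dp[len(a)][len(b)]

def runs_ok(w):
    # Only runs of length 1 and 2 are allowed.
    return all(sum(1 for _ in g) <= 2 for _, g in itertools.groupby(w))

def greedy_deletion_code(m, delta):
    # Two words are confusable under delta*m deletions iff their LCS is at
    # least (1 - delta)*m, so keep a candidate only if its LCS with every
    # already-chosen codeword is strictly smaller than that threshold.
    threshold = (1 - delta) * m
    code = []
    for bits in itertools.product("01", repeat=m):
        w = "".join(bits)
        if runs_ok(w) and all(lcs_len(w, c) < threshold for c in code):
            code.append(w)
    return code

code = greedy_deletion_code(8, 0.25)
```

Every pair of chosen codewords then has LCS below (1 − δ)m, i.e. the code corrects a δ fraction of adversarial deletions, and every codeword obeys the run-length constraint.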
Corollary 3.3. There exists a constant m₀* such that for all m ≥ m₀*, there exists a length m binary code of
rate Rin = 0.555 correcting a δin = 0.0083 fraction of adversarial insertions and deletions such that each
codeword contains runs of size 1 and 2 only and each codeword starts and ends with a 1. Furthermore this
code is constructible in time O(2^{1.25m}).
Our construction utilizes the following result as a black box for efficiently coding against an arbitrary
fraction of insertions and deletions with rate approaching capacity.
Theorem 3.4 (Theorem 1.1 of [10]). For any 0 ≤ δ < 1 and ϵ > 0, there exists a code C over an alphabet Σ
with |Σ| = poly(1/ϵ) that has block length n and rate 1 − δ − ϵ, and is efficiently decodable from δn insertions
and deletions. The code can be constructed in time poly(n), encoded in time O(n), and decoded in time O(n²).
We apply Theorem 3.4 for small δ , so we also could use the high rate binary code construction of [7] as
an outer code.
We now turn to our code construction for Theorem 1.1.
The code. Let B = 60, B* = 1.43̄ · B = 86, η = 1/1000, δout = 1/1000. Let m₀ = max(α log(1/δout)/η, m₀*),
where α is a sufficiently large constant and where m₀* is given by Corollary 3.3. Let ϵout > 0 be small enough
such that the alphabet Σ, given by Theorem 3.4 with ϵ = ϵout and δ = δout, satisfies |Σ| ≥ m₀, and let Cout
be the corresponding code.
Let Cin : Σ → {0, 1}^m be the code given by Corollary 3.3, and let Rin = 0.555, δin = 0.0083,
and m = (1/Rin) log |Σ| = O(log(1/ϵ)) be the rate, tolerable deletion fraction, and block length of the code,
respectively (Rin and δin are given by Corollary 3.3). Each codeword of Cin has runs of length 1 and 2 only,
and each codeword starts and ends with a 1. This code is constructed greedily.
Our code is a modified concatenated code. We encode our message as follows.
• Outer Code. First, encode the message into the outer code, Cout , to obtain a word c (out ) = σ1 . . . σn .
• Concatenation with Inner Code. Encode each outer codeword symbol σi ∈ Σ by the inner code Cin .
• Buffer. Insert a buffer of ηm 0s between adjacent inner codewords. Let the resulting word be c (cat ) .
Let ci(in) = Cin (σi ) denote the encoded inner codewords of c (cat ) .
• Duplication. After concatenating the codes and inserting the buffers, replace each character (including
characters in the buffers) with ⌈B/(1 − p)⌉ copies of itself to obtain a word of length N := Bnm/(1 − p).
Let the resulting word be c, and the corresponding inner codewords be {ci(dup)}i.
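The steps after the outer encoding can be sketched as follows (our own sketch; the toy parameters here are chosen so the strings stay short, whereas the construction uses η = 1/1000 and B = 60):

```python
import math

def encode_inner(inner_codewords, eta, B, p):
    # Buffer: insert eta*m zeros between adjacent inner codewords.
    m = len(inner_codewords[0])
    buffer = "0" * math.ceil(eta * m)
    c_cat = buffer.join(inner_codewords)
    # Duplication: replace every character (buffers included) with
    # ceil(B/(1-p)) copies of itself.
    dup = math.ceil(B / (1 - p))
    return "".join(ch * dup for ch in c_cat)

word = encode_inner(["101", "11"], eta=1/3, B=2, p=0.5)
```

Here the buffer is a single 0, so c(cat) = 101011, and every bit is repeated ⌈2/0.5⌉ = 4 times.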
Rate. The rate of the outer code is 1 − δout − ϵout, the rate of the inner code is Rin, and the buffer and
duplications multiply the rate by 1/(1 + η) and (1 − p)/B respectively. This gives a total rate that is slightly greater
than (1 − p)/110.
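The bookkeeping can be checked numerically (ϵout = 0.001 is an assumed illustrative value; the construction only requires it to be small enough):

```python
delta_out = 1 / 1000
eps_out = 0.001       # assumed small value, for illustration only
R_in = 0.555
eta = 1 / 1000
B = 60

# Rate as a multiple of (1 - p): outer rate, times inner rate, times the
# buffer factor 1/(1 + eta), times the duplication factor 1/B.
rate_factor = (1 - delta_out - eps_out) * R_in * (1 / (1 + eta)) * (1 / B)
```

The product works out to roughly 0.00922, i.e. slightly more than 1/110 of (1 − p).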
Notation. Let s denote the received word after the codeword c is passed through the deletion channel.
Note that (i) every bit of c can be identified with a bit in c (cat ) , and (ii) each bit in the received word s can
be identified with a bit in c. Thus, we can define relations f (dup) : c (cat ) → c, and f (del) : c → s (that is,
relations on the indices of the strings). These are not functions because some bits may be mapped to multiple
(for f (dup) ) or zero (for f (del) ) bits. Specifically, f (del) and f (dup) are the inverses of total functions. In this
way, composing these relations (i.e. composing their inverse functions) if necessary, we can speak about
the image and pre-image of bits or subwords of one of c (cat ) , c, and s under these relations. For example,
during the Duplication step of encoding, a bit ⟨bj⟩ of c(cat) is replaced with B/(1 − p) copies of itself, so the
corresponding string ⟨bj⟩^{B/(1−p)} in c forms the image of ⟨bj⟩ under f(dup), and conversely the pre-image of
the duplicated string ⟨bj⟩^{B/(1−p)} is that bit ⟨bj⟩.
Decoding algorithm.
• Decoding Buffer. First identify all runs of 0s in the received word with length at least Bηm/2. These
are our decoding buffers that divide the word into decoding windows, which we identify with subwords
of s.
• Deduplication. Divide each decoding window into runs. For each run, if it has strictly more than
B* copies of a bit, replace it with two copies of that bit; otherwise replace it with one copy. For
example, ⟨0⟩^{2B} gets replaced with ⟨00⟩ while ⟨0⟩^B gets replaced with ⟨0⟩. For each decoding window,
concatenate these runs of length 1 and 2 in their original order in the decoding window to produce a
deduplicated decoding window.
• Inner Decoding. For each deduplicated decoding window, decode an outer symbol σ ∈ Σ from
each decoding window by running the brute force deletion correction algorithm for Cin. That is, for
each deduplicated decoding window s*(in), find by brute force a codeword c*(in) in Cin such that
∆i/d(c*(in), s*(in)) ≤ δin m. If c*(in) is not unique or does not exist, do not decode an outer symbol σ from
this decoding window. Concatenate the decoded symbols σ in the order in which their corresponding
decoding windows appear in the received word s to obtain a word s(out).
• Outer Decoding. Decode the message m from s (out ) using the decoding algorithm of Cout in Theorem 3.4.
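The first two decoding steps above can be sketched as follows (helper names and toy thresholds are ours):

```python
import itertools

def split_on_buffers(s, min_buffer):
    # Runs of 0s of length >= min_buffer act as decoding buffers; everything
    # between consecutive buffers becomes a decoding window.
    windows, current = [], []
    for bit, group in itertools.groupby(s):
        run = "".join(group)
        if bit == "0" and len(run) >= min_buffer:
            windows.append("".join(current))
            current = []
        else:
            current.append(run)
    windows.append("".join(current))
    return [w for w in windows if w]  # drop empty windows at the ends

def deduplicate(window, b_star):
    # Runs of strictly more than b_star copies become two bits, shorter runs one.
    out = []
    for bit, group in itertools.groupby(window):
        run_length = sum(1 for _ in group)
        out.append(bit * (2 if run_length > b_star else 1))
    return "".join(out)
```

For instance, with a buffer threshold of 4, the word 11100000101 splits into the windows 111 and 101, and deduplicating 111001 with B* = 2 yields 1101.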
For purposes of analysis, label as si(dup) the decoding window whose pre-image under f(del) contains indices
in ci(dup). If this decoding window is not unique (that is, the image of ci(dup) contains bits in multiple decoding
windows), then assign si(dup) arbitrarily. Note this labeling may mean some decoding windows are unlabeled,
and also that some decoding windows may have multiple labels. In our analysis, we show both occurrences
are rare. For a decoding window si(dup), denote the result of si(dup) after Deduplication to be si(in).
The following diagram depicts the encoding and decoding steps. The pair ({ci(in)}i, c(cat)) indicates
that, at that step of encoding, we have produced the word c(cat), and the sequence {ci(in)}i are the “inner
codewords” of c(cat) (that is, the words in between what would be identified by the decoder as decoding
buffers). The pair ({ci(dup)}i, c) is used similarly.

m →(Cout)→ c(out) →(Cin, Buf)→ ({ci(in)}i, c(cat)) →(Dup)→ ({ci(dup)}i, c) →(BDC)→ s →(DeBuf)→ {si(dup)}i →(DeDup)→ {si(in)}i →(Decin)→ s(out) →(Decout)→ m
Runtime. The outer code is constructible in poly(n) time and the inner code is constructible in time
O(2^{1.25m}) = poly(1/ϵ), which is a constant, so the total construction time is poly(N).
Encoding in the outer code is linear time, each of the n inner encodings is constant time, and adding the
buffers and applying duplications each can be done in linear time. The overall encoding time is thus O(N).
The Buffer step of the decoding takes linear time. The Deduplication step of each inner codeword
takes constant time, so the entire step takes linear time. For each inner codeword, Inner Decoding takes
time O(m² 2^m) = poly(1/ϵ) by brute force search over the 2^m possible codewords: checking each of the 2^m
codewords is a longest common subsequence computation and thus takes time O(m²), giving a total decoding
time of O(m² 2^m) for each inner codeword. We need to run this inner decoding O(n) times, so the entire
Inner Decoding step takes linear time. The Outer Decoding step takes O(n²) time by Theorem 3.4. Thus the
total decoding time is O(N²).
Correctness. Note that, if an inner codeword is decoded incorrectly, then one of the following holds.
1. (Spurious Buffer) A spurious decoding buffer is identified in the corrupted codeword during the Buffer
step.
2. (Deleted Buffer) A decoding buffer neighboring the codeword is deleted.
3. (Inner Decoding Failure) Running the Deduplication and Inner Decoding steps on si(dup) computes the
inner codeword incorrectly.
We show that, with high probability, the number of occurrences of each of these events is small.
The last case is the most nontrivial, so we deal with it first, assuming the codeword contains no spurious
decoding buffers and the neighboring decoding buffers are not deleted. In particular, we consider an i such
that the pre-image of the decoding window si(dup) under f(del) only contains bits in ci(dup) (because no
buffer is deleted) and no bits in the image of ci(dup) appear in any other decoding window (because there is
no spurious buffer).
Recall that the inner code Cin can correct a δin = 0.0083 fraction of adversarial insertions and
deletions. Suppose an inner codeword ci(in) = r1 · · · rk ∈ Cin has k runs rj, each of length 1 or 2, so that
m/2 ≤ k ≤ m.
Definition 3.5. A subword of α identical bits in the received word s is
• type-0 if α = 0
• type-1 if α ∈ [1, B ∗ ]
• type-2 if α ∈ [B ∗ + 1, ∞).
By abuse of notation, we say that a length 1 or 2 run r j of the inner codeword ci(in) has type-t j if the image
of r j in s under f (del) ◦ f (dup) forms a type-t j subword.
Let t1, . . . , tk be the types of the runs r1, . . . , rk, respectively. The image of a run rj under f(del) ◦ f(dup)
has length distributed as Binomial(B|rj|/(1 − p), 1 − p). Let δ = 0.43̄ be such that B* = (1 + δ)B. By the
Chernoff bounds in Lemma 2.1, the probability that a run rj of length 1 is type-2 is

Pr_{Z∼Binomial(B/(1−p),1−p)}[Z > B*] < (e^δ/(1 + δ)^{1+δ})^B < 0.0071.   (5)
Similarly, the probability that a run rj of length 2 is type-1 is at most

Pr_{Z∼Binomial(2B/(1−p),1−p)}[Z ≤ B*] < e^{−((1−δ)/2)² B} < 0.0081.   (6)

The probability any run is type-0 is at most Pr_{Z∼Binomial(B/(1−p),1−p)}[Z = 0] < e^{−B} < 10^{−10}.
We have now established that, for runs rj in ci(in), the probability that the number of bits in the image of
rj in s under f(del) ◦ f(dup) is “incorrect” (between 1 and B* for length-2 runs, and greater than B* for length-1
runs) is at most 0.0081, which is less than δin. If the only kinds of errors in the Inner Decoding step
were runs of c of length 1 becoming runs of length 2 and runs of length 2 becoming runs of length 1, then we
would have that, by concentration bounds, with probability 1 − 2^{−Ω(m)}, the number of insertions and deletions needed
to transform si(in) back into ci(in) is at most δin m, in which case si(in) gets decoded to the correct outer symbol
using Cin.
However, we must also account for the fact that some runs r j of ci(in) may become deleted completely
after duplication and passing through the deletion channel. That is, the image of r j in s under f (del) ◦ f (dup)
is empty, or, in other words, r j is type-0. In this case the two neighboring runs r j−1 and r j+1 appear merged
together in the Deduplication step of decoding. For example, if a run of 1s was deleted completely after
duplication and deletion, its neighboring runs of 0s would be interpreted by the decoder as a single run.
Fortunately, as we saw, the probability that a run is type-0 is extremely small (< 10^{−10}), and we show each
type-0 run only increases ∆i/d(ci(in), si(in)) by a constant. We show this constant is at most 6.
To be precise, let Yj be a random variable that is 0 if |rj| = tj, 1 if {|rj|, tj} = {1, 2}, and 6 if tj = 0. We
claim that Σⱼ₌₁ᵏ Yj is an upper bound on ∆i/d(ci(in), si(in)). To see this, first note that if tj ≠ 0 for all j, then the
number of runs of ci(in) and si(in) are equal, so we can transform ci(in) into si(in) by adding a bit to each length-1
type-2 run of ci(in) and deleting a bit from each length-2 type-1 run of si(in).
Now, if some number ℓ of the tj are 0, then at most 2ℓ of the runs in ci(in) become merged with some
other run (or a neighboring decoding buffer) after duplication and deletion. Each set of consecutive runs
rj, rj+2, . . . , rj+2j′ that are merged after duplication and deletion gets replaced with 1 or 2 copies of the
corresponding bit. For example, if r1 = ⟨11⟩, r2 = ⟨0⟩, r3 = ⟨11⟩, and if after duplication and deletion, 2B
bits remain in the image of each of r1 and r3, and r2 is type-0, then the image of r1r2r3 under f(del) ◦ f(dup)
is ⟨1⟩^{4B}, which gets decoded as ⟨11⟩ in the Deduplication step because ⟨1⟩^{4B} is type-2. To account for the
type-0 runs in transforming ci(in) into si(in), we (i) delete at most two bits from each of the ℓ type-0 runs in ci(in)
and (ii) delete at most two bits for each of at most 2ℓ merged runs in ci(in). The total number of additional
insertions and deletions required to account for type-0 runs of c is thus at most 6ℓ, so we need at most 6
insertions and deletions to account for each type-0 run.
Our analysis also covers the case when some bits in the image of ci(in) under f(del) ◦ f(dup) are interpreted
as part of a decoding buffer. Recall that inner codewords start and end with a 1, so that r1 ∈ {⟨1⟩, ⟨11⟩} for
every inner codeword. If, for example, t1 = 0, that is, the image under f(del) ◦ f(dup) of the first run of 1s,
r1, is the empty string, then the bits of r2 are interpreted as part of the decoding buffer. In this case too, our
analysis tells us that the type-0 run r1 increases ∆i/d(ci(in), si(in)) by at most 6.
We conclude that Σⱼ₌₁ᵏ Yj is an upper bound for ∆i/d(ci(in), si(in)).
Note that if rj has length 1, then by (5) we have

E[Yj] = 1 · Pr[rj is type-2] + 6 · Pr[rj is type-0] < 1 · 0.0071 + 6 · 10^{−9} < 0.0082.   (7)

Similarly, if rj has length 2, then by (6) we have

E[Yj] = 1 · Pr[rj is type-1] + 6 · Pr[rj is type-0] < 1 · 0.0081 + 6 · 10^{−9} < 0.0082.   (8)

Thus E[Yj] < 0.0082 for all j. We know the word si(in) is decoded incorrectly (i.e. is not decoded as σi) in
the Inner Decoding step only if ∆i/d(ci(in), si(in)) > δin m. The Yj are independent, so Lemma 2.2 gives

Pr[si(in) decoded incorrectly] ≤ Pr[Y1 + Y2 + · · · + Yk ≥ δin m]
  ≤ Pr[Y1 + Y2 + · · · + Yk ≥ δin k]
  ≤ exp(−(δin − 0.0082)² k/(3 · 6 · δin))
  ≤ exp(−Ω(m)),   (9)
where the last inequality is given by k ≥ m/2. Since our m ≥ Ω(log(1/δout)) is sufficiently large, the
probability that si(in) is decoded incorrectly is at most δout/10. If we let Yj(i) denote the Yj corresponding to
inner codeword ci(in), the events Ei given by Σj Yj(i) ≥ δin m are independent. By concentration bounds on
the events Ei, we conclude the probability that there are at least δout n/9 incorrectly decoded inner codewords
that are not already affected by spurious buffers and neighboring deleted buffers is 2^{−Ω(n)}.
Our aim is to show that the number of spurious buffers, deleted buffers, and inner decoding failures is
small with high probability. So far, we have shown that, with high probability, assuming a codeword is not
already affected by spurious buffers and neighboring deleted buffers, the number of inner decoding failures
is small. We now turn to showing the number of spurious buffers is likely to be small.
A spurious buffer appears inside an inner codeword if many consecutive runs of 1s are type-0. A spurious
buffer requires at least one of the following: (i) a codeword contains a sequence of at least ηm/5 consecutive
type-0 runs of 1s, or (ii) a codeword contains a sequence of ℓ ≤ ηm/5 consecutive type-0 runs of 1s such that,
for the ℓ + 1 consecutive runs of 0s neighboring these type-0 runs of 1s, their image under f(del) ◦ f(dup) has
at least 0.5Bηm 0s. We show both happen with low probability within a codeword.
A set of ℓ consecutive type-0 runs of 1s occurs with probability at most 10^{−10ℓ}. Thus the probability
an inner codeword has a sequence of ηm/5 consecutive type-0 runs of 1s is at most m² · 10^{−10ηm/5} =
exp(−Ω(ηm)). Now assume that in an inner codeword, each set of consecutive type-0 runs of 1s has
size at most ηm/5. Each set of ℓ consecutive type-0 runs of 1s merges ℓ + 1 consecutive runs of 0s
in c, so that they appear as a single longer run in s. The sum of the lengths of these ℓ + 1 runs is
some number ℓ ∗ that is at most 2ℓ + 2. The number of bits in the image of these runs of ci(in) under
f (del) ◦ f (dup) is distributed as Binomial(ℓ ∗B/(1 − p), 1 − p). This has expectation ℓ ∗ B ≤ 0.41Bηm, so
by concentration bounds, the probability this run of s has length at least 0.5Bηm, i.e. is interpreted as
a decoding buffer, is at most exp(−Ω(ηm)). Hence, conditioned on each set of consecutive type-0 runs
of 1s having size at most ηm/5, the probability of having no spurious buffers in a codeword is at least
1 − exp(−Ω(ηm)). Thus the overall probability there are no spurious buffers in a given inner codeword is at
least (1 − exp(−Ω(ηm)))(1 − exp(−Ω(ηm))) = 1 − exp(−Ω(ηm)). Since each inner codeword contains at most
m candidate spurious buffers (one for each type-0 run of 1s), the expected number of spurious buffers in an
inner codeword is thus at most m · exp(−Ω(ηm)). By our choice of m ≥ Ω(log(1/δ out )/η), this is at most
δ out /10. The occurrence of conditions (i) and (ii) above are independent between buffers. The total number
of spurious buffers thus is bounded by the sum of n independent random variables each with expectation at
most δ out /10. By concentration bounds, the probability that there are at least δ out n/9 spurious buffers is
2−Ω(n) .
A deleted buffer occurs only when the image of the ηm 0s in a buffer under f (del) ◦ f (dup) is at most
Bηm/2. The number of such bits is distributed as Binomial(Bηm/(1 − p), 1 − p). Thus, each buffer is deleted
with probability exp(−Bηm) < δ out /10 by our choice of m ≥ Ω(log(1/δ out )/η). The events of a buffer
receiving too many deletions are independent across buffers. By concentration bounds, the probability that
there are at least δ out n/9 deleted buffers is thus 2−Ω(n) .
Each inner decoding failure, spurious buffer, and deleted buffer increases the distance ∆i/d(c(out), s(out))
by at most 3: each inner decoding failure causes up to 1 insertion and 1 deletion; each spurious buffer causes
up to 1 deletion and 2 insertions; and each deleted buffer causes up to 2 deletions and 1 insertion. Our
message is decoded incorrectly if ∆i/d(c(out), s(out)) > δout n. Thus, there is a decoding error in the outer code
only if at least one of (i) the number of incorrectly decoded inner codewords without spurious buffers or
neighboring deleted buffers, (ii) the number of spurious buffers, or (iii) the number of deleted buffers is at
least δout n/9. However, by the above arguments, each is at least δout n/9 with probability 2^{−Ω(n)}, so
there is a decoding error with probability 2^{−Ω(n)}. This concludes the proof of Theorem 1.1.
3.2 Possible Alternative Constructions
As mentioned in the introduction, Drinea and Mitzenmacher [4, 5] proved that the capacity of the BDCp
is at least (1 − p)/9. However, their proof is nonconstructive and they do not provide an efficient decoding
algorithm.
One might think it is possible to use Drinea and Mitzenmacher's construction as a black box. We could
follow the approach in this paper, concatenating an outer code given by [10] with the rate-(1 − p)/9 random-deletion-correcting code as a black box inner code. The complexity of Drinea and Mitzenmacher's
so-called jigsaw decoding is not apparent from [5]. However, the inner code has constant length, so
construction, encoding, and decoding would take constant time. Thus, the efficiency of the inner code would
not affect the asymptotic runtime.
The main issue with this approach is that, while the inner code can tolerate random deletions with
probability p, inner codeword bits are not deleted in the concatenated construction according to a BDCp ;
the 0 bits closer to the buffers between the inner codewords are deleted with higher probability because they
might be “merged” with a buffer. For example, if an inner codeword is ⟨101111⟩, then because the codeword
is surrounded by buffers of 0s, deleting the leftmost 1 effectively deletes two bits because the 0 is interpreted
as part of the buffer. While this may not be a significant issue because the distributions of deletions in this
deletion process and BDCp are quite similar, much more care would be needed to prove correctness.
Our construction does not run into this issue, because our transmitted codewords tend to have many 1s
on the ends of the inner codewords. In particular, each inner codeword of Cin has 1s on the ends, so after the
Duplication step each inner codeword has B/(1 − p) or 2B/(1 − p) 1s on the ends. The 1s on the boundary of
the inner codeword will all be deleted with probability ≈ exp(−B), which is small. Thus, in our construction,
it is far more unlikely that bits are merged with the neighboring decoding buffer, than if we were to use
a general inner code construction. Furthermore, we believe our construction based on bit duplication of a
worst-case deletion correcting code is conceptually simpler than appealing to an existential code.
As a remark, we presented a construction with rate (1 − p)/110, but using a randomized encoding we can
improve the constant from 1/110 to 1/60. We can modify our construction so that, during the Duplication
step of encoding, instead of replacing each bit of c(cat) with a fixed number B/(1 − p) of copies of itself, we
instead replace each bit independently with Poisson(B/(1 − p)) copies of itself. Then the image of a run rj under
duplication and deletion is distributed as Poisson(B), which is independent of p. Because we don't have a
dependence on p, we can tighten our bounds in (5) and (6). To obtain (1 − p)/60, we can take B = 28.12
and set B* = 40, where B* is the threshold after which runs are decoded as two bits instead of one bit in the
Deduplication step. The disadvantage of this approach is that we require our encoding to be randomized,
whereas the construction presented above uses deterministic encoding.
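The Poisson thinning fact used here can be checked by simulation (a sketch of our own; the Python standard library has no Poisson sampler, so we count exponential inter-arrival times):

```python
import random

random.seed(0)
B, p = 28.12, 0.9
lam = B / (1 - p)  # duplication rate before the channel

def poisson_sample(rate):
    # Number of Exp(rate) arrivals in [0, 1] is Poisson(rate); this avoids
    # the underflow of Knuth's product-of-uniforms method for large rates.
    count, t = 0, random.expovariate(rate)
    while t <= 1:
        count += 1
        t += random.expovariate(rate)
    return count

def surviving_copies():
    # Duplicate a bit Poisson(lam) times, then keep each copy w.p. 1 - p.
    return sum(1 for _ in range(poisson_sample(lam)) if random.random() > p)

samples = [surviving_copies() for _ in range(5000)]
mean = sum(samples) / len(samples)
var = sum((x - mean) ** 2 for x in samples) / len(samples)
```

By Poisson thinning, the surviving count per bit is Poisson(B): both the empirical mean and variance land near B = 28.12, independently of p.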
4 Future work and open questions
A lemma due to Levenshtein [15] states that a code C can decode against pn adversarial deletions if and
only if it can decode against pn adversarial insertions and deletions. While this does not automatically
preserve the efficiency of the decoding algorithms, all the recent efficient constructions of codes for worst-case deletions also extend to efficient constructions with similar parameters for recovering from insertions
and deletions [1, 7].
In the random error model, decoding deletions, insertions, and insertions and deletions are not the same.
Indeed, it is not even clear how to define random insertions. One could define insertions and deletions via
the Poisson repeat channel, where each bit is replaced with a Poisson-distributed number of copies of itself (see [4, 17]).
However, random insertions do not seem to share the similarity to random deletions that adversarial deletions
share with adversarial insertions; we can decode against arbitrarily large Poisson duplication rates, whereas
for codes of block length n we can decode against a maximum of n adversarial insertions or deletions [5].
Alternatively one can consider a model of random insertions and deletions where, for every bit, the bit is
deleted with a fixed probability p1 , a bit is inserted after it with a fixed probability p2 , or it is transmitted
unmodified with probability 1 − p1 − p2 [19]. One could also investigate settings involving memoryless
insertions, deletions, and substitutions [16].
There remain a number of open questions even concerning codes for deletions only. Here are a few
highlighted by this work.
1. Can we close the gap between √2 − 1 and 1/2 on the maximum correctable fraction of adversarial
deletions?
2. Can we construct efficiently decodable codes for the binary deletion channel with better rate, perhaps
reaching or beating the best known existential capacity lower bound of (1 − p)/9?
3. Can we construct efficient codes for the binary deletion channel with rate 1 − O(h(p)) for p → 0?
References
[1] Joshua Brakensiek, Venkatesan Guruswami, and Samuel Zbarsky. Efficient low-redundancy codes for
correcting multiple deletions. In Proceedings of the Twenty-Seventh Annual ACM-SIAM Symposium
on Discrete Algorithms, pages 1884–1892, 2016.
[2] Boris Bukh, Venkatesan Guruswami, and Johan Håstad. An improved bound on the fraction of
correctable deletions. IEEE Trans. Information Theory, 63(1):93–103, 2017.
[3] Suhas Diggavi and Matthias Grossglauser. On transmission over deletion channels. In Proceedings
of the 39th Annual Allerton Conference on Communication, Control, and Computing, pages 573–582,
2001.
[4] Eleni Drinea and Michael Mitzenmacher. On lower bounds for the capacity of deletion channels. IEEE
Transactions on Information Theory, 52(10):4648–4657, 2006.
[5] Eleni Drinea and Michael Mitzenmacher. Improved lower bounds for the capacity of i.i.d. deletion and
duplication channels. IEEE Trans. Information Theory, 53(8):2693–2714, 2007.
[6] Dario Fertonani, Tolga M. Duman, and M. Fatih Erden. Bounds on the capacity of channels with
insertions, deletions and substitutions. IEEE Trans. Communications, 59(1):2–6, 2011.
[7] Venkatesan Guruswami and Ray Li. Efficiently decodable insertion/deletion codes for high-noise and
high-rate regimes. In IEEE International Symposium on Information Theory, ISIT 2016, Barcelona,
Spain, July 10-15, 2016, pages 620–624, 2016.
[8] Venkatesan Guruswami and Ray Li. Coding against deletions in oblivious and online models.
arXiv:1612.06335, 2017. Manuscript.
[9] Venkatesan Guruswami and Carol Wang. Deletion codes in the high-noise and high-rate regimes.
IEEE Trans. Information Theory, 63(4):1961–1970, 2017.
[10] Bernhard Haeupler and Amirbehshad Shahrasbi. Synchronization strings: codes for insertions and
deletions approaching the singleton bound. In Proceedings of the 49th Annual ACM SIGACT Symposium
on Theory of Computing, STOC 2017, Montreal, QC, Canada, June 19-23, 2017, pages 33–46, 2017.
[11] Adam Kalai, Michael Mitzenmacher, and Madhu Sudan. Tight asymptotic bounds for the deletion
channel with small deletion probabilities. In IEEE International Symposium on Information Theory, ISIT 2010, June 13-18, 2010, Austin, Texas, USA, Proceedings, pages 997–1001, 2010. URL:
http://dx.doi.org/10.1109/ISIT.2010.5513746, doi:10.1109/ISIT.2010.5513746.
[12] Yashodhan Kanoria and Andrea Montanari. Optimal coding for the binary deletion channel with
small deletion probability. IEEE Trans. Information Theory, 59(10):6192–6219, 2013. URL:
http://dx.doi.org/10.1109/TIT.2013.2262020, doi:10.1109/TIT.2013.2262020.
[13] Ian Kash, Michael Mitzenmacher, Justin Thaler, and John Ullman. On the zero-error capacity threshold
for deletion channels. In Information Theory and Applications Workshop (ITA), pages 1–5, January
2011.
[14] Adam Kirsch and Eleni Drinea. Directly lower bounding the information capacity for channels
with i.i.d.deletions and duplications. IEEE Trans. Information Theory, 56(1):86–102, 2010. URL:
http://dx.doi.org/10.1109/TIT.2009.2034883, doi:10.1109/TIT.2009.2034883.
[15] Vladimir I. Levenshtein. Binary codes capable of correcting deletions, insertions, and reversals.
Dokl. Akad. Nauk, 163(4):845–848, 1965 (Russian). English translation in Soviet Physics Doklady,
10(8):707-710, 1966.
[16] Hugues Mercier, Vahid Tarokh, and Fabrice Labeau. Bounds on the capacity of discrete memoryless channels corrupted by synchronization and substitution errors. IEEE Trans. Information Theory, 58(7):4306–4330, 2012. URL: http://dx.doi.org/10.1109/TIT.2012.2191682,
doi:10.1109/TIT.2012.2191682.
[17] Michael Mitzenmacher. A survey of results for deletion channels and related synchronization channels.
Probability Surveys, 6:1–33, 2009.
[18] Mojtaba Rahmati and Tolga M. Duman. Upper bounds on the capacity of deletion channels
using channel fragmentation. IEEE Trans. Information Theory, 61(1):146–156, 2015. URL:
http://dx.doi.org/10.1109/TIT.2014.2368553, doi:10.1109/TIT.2014.2368553.
12
[19] Ramji Venkataramanan, Sekhar Tatikonda, and Kannan Ramchandran. Achievable rates for channels
with deletions and insertions. IEEE Trans. Information Theory, 59(11):6990–7013, 2013. URL:
http://dx.doi.org/10.1109/TIT.2013.2278181, doi:10.1109/TIT.2013.2278181.
[20] S. M. Sadegh Tabatabaei Yazdi and Lara Dolecek. A deterministic polynomial-time protocol for
synchronizing from deletions. IEEE Trans. Information Theory, 60(1):397–409, 2014. URL:
http://dx.doi.org/10.1109/TIT.2013.2279674, doi:10.1109/TIT.2013.2279674.
A    Proof of Lemma 2.2

Proof. For each i, we can find a random variable Bi such that Bi ≥ Ai always, Bi takes values in [0, β], and E[Bi] = α. Applying Lemma 2.1 gives

$$
\Pr\left[\sum_{i=1}^{n} A_i \ge n\gamma\right]
\le \Pr\left[\sum_{i=1}^{n} B_i \ge n\gamma\right]
= \Pr\left[\sum_{i=1}^{n} \frac{B_i}{\beta} \ge \left(1 + \frac{\gamma-\alpha}{\alpha}\right)\frac{n\alpha}{\beta}\right]
\le \exp\left(-\left(\frac{\gamma-\alpha}{\alpha}\right)^{2}\cdot\frac{n\alpha}{3\beta}\right)
= \exp\left(-\frac{(\gamma-\alpha)^{2}\,n}{3\alpha\beta}\right). \tag{10}
$$
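As a quick numeric sanity check of (10) (our own addition, not part of the proof; we instantiate Ai = Bi as β-scaled Bernoulli variables with mean α), the empirical tail probability stays below the stated bound:

```python
import math
import random

# Sanity check of bound (10): take A_i = beta * Bernoulli(alpha/beta),
# so A_i lies in [0, beta] with mean alpha. The empirical probability
# Pr[sum A_i >= n*gamma] should not exceed exp(-(gamma-alpha)^2 n / (3 alpha beta)).
random.seed(0)
alpha, beta, gamma, n, trials = 0.2, 1.0, 0.4, 100, 20000

bound = math.exp(-((gamma - alpha) ** 2) * n / (3 * alpha * beta))
hits = 0
for _ in range(trials):
    total = sum(beta if random.random() < alpha / beta else 0.0 for _ in range(n))
    if total >= n * gamma:
        hits += 1
empirical = hits / trials
print(empirical <= bound)  # True
```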
Parsing Expression Grammars Made Practical

arXiv:1509.02439v2 [] 17 Sep 2016

Nicolas Laurent∗
ICTEAM, Université catholique de Louvain, Belgium
[email protected]

Kim Mens
ICTEAM, Université catholique de Louvain, Belgium
[email protected]
Abstract

Parsing Expression Grammars (PEGs) define languages by specifying a recursive-descent parser that recognises them. The PEG formalism exhibits desirable properties, such as closure under composition, built-in disambiguation, unification of syntactic and lexical concerns, and closely matching programmer intuition. Unfortunately, state of the art PEG parsers struggle with left-recursive grammar rules, which are not supported by the original definition of the formalism and can lead to infinite recursion under naive implementations. Likewise, support for associativity and explicit precedence is spotty. To remedy these issues, we introduce Autumn, a general purpose PEG library that supports left-recursion, left and right associativity and precedence rules, and does so efficiently. Furthermore, we identify infix and postfix operators as a major source of inefficiency in left-recursive PEG parsers and show how to tackle this problem. We also explore the extensibility of the PEG paradigm by showing how one can easily introduce new parsing operators and how our parser accommodates custom memoization and error handling strategies. We compare our parser to both state of the art and battle-tested PEG and CFG parsers, such as Rats!, Parboiled and ANTLR.

Categories and Subject Descriptors D.3.4 [Programming Languages]: Parsing

Keywords parsing expression grammar, parsing, left-recursion, associativity, precedence

1. Introduction

Context Parsing is well studied in computer science. There is a long history of tools to assist programmers in this task. These include parser generators (like the venerable Yacc) and more recently parser combinator libraries [5].

Most of the work on parsing has been built on top of Chomsky's context-free grammars (CFGs). Ford's parsing expression grammars (PEGs) [3] are an alternative formalism exhibiting interesting properties. Whereas CFGs use non-deterministic choice between alternative constructs, PEGs use prioritized choice. This makes PEGs unambiguous by construction. This is only one of the manifestations of a broader philosophical difference between CFGs and PEGs. CFGs are generative: they describe a language, and the grammar itself can be used to enumerate the set of sentences belonging to that language. PEGs, on the other hand, are recognition-based: they describe a predicate indicating whether a sentence belongs to a language.

The recognition-based approach is a boon for programmers who have to find mistakes in a grammar. It also enables us to add new parsing operators, as we will see in section 4. These benefits are due to two PEG characteristics. (1) The parser implementing a PEG is generally close to the grammar, making reasoning about the parser's operations easier. This characteristic is shared with recursive-descent CFG parsers. (2) The single parse rule: attempting to match a parsing expression (i.e. a sub-PEG) at a given input position will always yield the same result (success or failure) and consume the same amount of input. This is not the case for CFGs. For example, with a PEG, the expression (a∗) will always greedily consume all the a's available, whereas a CFG could consume any number of them, depending on the grammar symbols that follow.
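The prioritized choice and greedy repetition just described can be made concrete with a minimal sketch (our own illustration, unrelated to Autumn's API), where a parsing expression is a function from an input position to a new position, or None on failure:

```python
# Minimal PEG semantics sketch (illustrative only, not Autumn's API).
# A parsing expression maps (text, pos) to a new position, or None on failure.

def lit(s):
    """Match a literal string."""
    def parse(text, pos):
        return pos + len(s) if text.startswith(s, pos) else None
    return parse

def seq(*ps):
    """Match sub-expressions one after the other; fail if any fails."""
    def parse(text, pos):
        for p in ps:
            pos = p(text, pos)
            if pos is None:
                return None
        return pos
    return parse

def choice(*ps):
    """Prioritized choice: try alternatives in order, commit to the first success."""
    def parse(text, pos):
        for p in ps:
            r = p(text, pos)
            if r is not None:
                return r
        return None
    return parse

def star(p):
    """Greedy repetition: consume as many matches as possible, never give any back."""
    def parse(text, pos):
        while True:
            r = p(text, pos)
            if r is None or r == pos:  # stop on failure or empty match
                return pos
            pos = r
    return parse

a_star = star(lit('a'))
print(a_star('aaab', 0))                # 3: all three a's consumed
print(seq(a_star, lit('a'))('aaa', 0))  # None: a* left nothing for the final 'a'
```

Because star never gives input back, seq(star(lit('a')), lit('a')) can never succeed; this is precisely the single-parse-rule behaviour, whereas the CFG reading of the same rule matches any non-empty run of a's.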
Challenges Yet, problems remain. First is the problem of left-recursion, an issue which PEGs share with recursive-descent CFG parsers. This is sometimes singled out as a reason why PEGs are frustrating to use [11]. Solutions that do support left-recursion do not always let the user choose the associativity of the parse tree for rules that are both left- and right-recursive, either because of technical limitations [1] or by conscious design [10]. We contend that users should be able to freely choose the associativity they desire.

Whitespace handling is another problem. Traditionally, PEG parsers do away with the distinction between lexing and parsing. This alleviates some issues with traditional lexing: different parts of the input can now use different lexing schemes, and structure is possible at the lexical level (e.g. nested comments) [3]. However, it means that whitespace handling might now pollute the grammar as well as the generated syntax tree. Finally, while PEGs make linear-time parsing possible with full memoization¹, there is a fine balance to be struck between backtracking and memoization [2]. Memoization can bring about runtime speedups at the cost of memory use. Sometimes, however, the runtime overhead of memoization nullifies any gains it might bring.

∗ Nicolas Laurent is a research fellow of the Belgian fund for scientific research (F.R.S.-FNRS).

Parsing Expression Grammars Made Practical    1    2016/9/20
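This trade-off can be seen in miniature below (our own sketch, not Autumn's implementation): since the single parse rule makes a result depend only on the (expression, position) pair, caching over that key is sound, and repeated backtracking attempts degenerate into table lookups, at the price of one stored entry per pair:

```python
# Memoization sketch: the single parse rule makes (expression, position) a
# sound cache key, so repeated backtracking attempts cost one lookup each.
calls = 0

def digit(text, pos):
    global calls
    calls += 1  # count how often real parsing work happens
    return pos + 1 if pos < len(text) and text[pos].isdigit() else None

memo = {}

def memo_digit(text, pos):
    key = ('digit', pos)
    if key not in memo:
        memo[key] = digit(text, pos)
    return memo[key]

for _ in range(3):  # three backtracking attempts at the same position
    memo_digit('7+1', 0)
print(calls)        # 1: only the first attempt did real work
```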
Other problems plague parsing tools of all denominations. While solutions exist, they rarely coexist in a single
tool. Error reporting tends to be poor, and is not able to exploit knowledge held by users about the structure of their
grammars. Syntax trees often consist of either a full parse
tree that closely follows the structure of the grammar, or
data structures built on the fly by user-supplied code (semantic actions). Both approaches are flawed: a full parse tree is
too noisy as it captures syntactic elements with no semantic meaning, while tangling grammatical constructs and semantic actions (i.e. code) produces bloated and hard-to-read
grammars. Generating trees from declarative grammar annotations is possible, but infrequent.
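The middle ground suggested above (declarative tree building, rather than full parse trees or semantic actions) can be sketched with a small capture-style combinator of our own; none of the names below come from an actual library:

```python
# Declarative tree building sketch: parsers return (new_pos, nodes) or None,
# and only 'capture' creates tree nodes, so semantic-free syntax (like the
# '+' literal below) leaves no trace in the resulting tree.
def lit(s):
    def parse(text, pos):
        return (pos + len(s), []) if text.startswith(s, pos) else None
    return parse

def seq(*ps):
    def parse(text, pos):
        nodes = []
        for p in ps:
            r = p(text, pos)
            if r is None:
                return None
            pos, ns = r
            nodes += ns
        return pos, nodes
    return parse

def capture(name, p):
    """Wrap the nodes produced by p into a single named node."""
    def parse(text, pos):
        r = p(text, pos)
        if r is None:
            return None
        end, children = r
        return end, [(name, text[pos:end], children)]
    return parse

one = capture('num', lit('1'))
two = capture('num', lit('2'))
add = capture('add', seq(one, lit('+'), two))
print(add('1+2', 0))
# (3, [('add', '1+2', [('num', '1', []), ('num', '2', [])])])
```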
Solution To tackle these issues, we introduce a new parsing library called Autumn. Autumn implements a generic PEG parser with selective memoization. It supports left-recursion (including indirect and hidden left-recursion) and both types of associativity. It also features a new construct called expression cluster. Expression clusters enable the aforementioned features to work faster in parts of grammars dedicated to postfix and infix expressions. Autumn also tackles, to some extent, the problems of whitespace handling, error reporting, and syntax tree creation. By alleviating real and significant pain points with PEG parsing, Autumn makes PEG parsing more practical.
Structure This paper starts by describing the issues that
occur when using left-recursion to define the syntax of infix and postfix binary operators (section 2). Next we will
describe our solutions to these issues (section 3). Then, we
show the freedom afforded by the PEG paradigm regarding
extensibility and lay down our understanding of how this
paradigm may be extended further (section 4). Finally, we
compare Autumn to other PEG and CFG parsers (section 5)
and review related work (section 6) before concluding. Because of space restrictions, we do not review the basics of
the PEG formalism, but refer to Ford’s original paper [3].
2. Problems Caused by Binary Operators

This section explains why infix and postfix binary² operators are a significant pain point in terms of expressivity, performance, and syntax tree construction. Even more so with PEGs, due to their backtracking nature and poor handling of left-recursion. These issues motivate many of the features supported by our parser library. Our running example is a minimalistic arithmetic language with addition, subtraction, multiplication and division operating on single-digit numbers. Table 1 shows four PEGs that recognise this language, albeit with different associativity. They all respect the usual arithmetic operator precedence. Grammars (a), (b) and (c) are classical PEGs, whose specificities we now examine. Grammar (d) exhibits our own expression clusters and represents our solution to the problems presented in this section.

No support for left-recursion. The recursive-descent nature of PEGs means that most PEG parsers cannot support all forms of left-recursion, including indirect and hidden left-recursion.³ Left-recursion is direct when a rule designates a sequence of which the first element is a recursive reference, or when a rule designates a choice which has such a sequence as alternate. The reason this type of recursion is singled out is that it is easy to transform into a non-left-recursive form. Left-recursion is hidden when it might or might not occur depending on another parsing expression. For instance, X = Y? X can result in hidden left-recursion because Y? might succeed consuming no input.

PEG parsers that do not support left-recursion can only handle grammars (a) and (b). These parsers are unable to produce a left-associative parse of the input. Some tools can handle direct left-recursive rules by rewriting them to the idiomatic (b) form and re-ordering their parse tree to simulate a left-associative parse [4, 9].

We argue it is necessary to support indirect and hidden left-recursion, so that the grammar author is able to organise his grammar as he sees fit. Autumn supports left-recursion natively, as will be described in section 3. Using expression clusters, the associativity for operators that are both left- and right-recursive can be selected explicitly by using the @left recur annotation (as shown in grammar (d)). Right-associativity is the default, so no annotations are required in that case.

Performance issues in non-memoizing parsers. Grammar (a) is parsed inefficiently by non-memoizing parsers. Consider a grammar for arithmetic with L levels of precedence and P operators per level. In our example, L = P = 2. This grammar will parse a single number in O((P + 1)^L) expression invocations (i.e. attempts to match an expression). For the Java language this adds up to thousands of invocations to parse a single number. The complexity is somewhat amortized for longer arithmetic expressions, but the cost remains prohibitively high. Memoizing all rules in the grammar makes the complexity O(P L), but this coarse-grained solution might slow things down because of the inherent memoization overhead: cache lookups can be expensive [2].

¹ In this context, memoization means caching the result of the invocation of a parsing expression at a given position.
² Binary should be understood broadly here: n-ary infix operators (such as the ternary conditional operator) can be modelled in terms of binary operators.
³ To the best of our knowledge, Autumn is the only available parser to support all forms of left-recursion with associativity selection.
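The blow-up can be observed directly with a toy recursive-descent rendition of grammar (a) (our own sketch, not Autumn code): with P = L = 2 and no memoization, the number rule N runs (P + 1)^L = 9 times on a one-character input.

```python
# Counting expression invocations when parsing one digit with the layered,
# right-associative grammar (a): E = S'+'E | S'-'E | S, S = N'*'S | N'/'S | N.
# Without memoization, N runs (P+1)^L times: 3^2 = 9 here.
counts = {'N': 0}

def N(t, p):
    counts['N'] += 1
    return p + 1 if p < len(t) and t[p].isdigit() else None

def lit(c):
    return lambda t, p: p + 1 if p < len(t) and t[p] == c else None

def S(t, p):
    for alt in ([N, lit('*'), S], [N, lit('/'), S], [N]):
        q = p
        for e in alt:
            q = e(t, q)
            if q is None:
                break
        if q is not None:
            return q
    return None

def E(t, p):
    for alt in ([S, lit('+'), E], [S, lit('-'), E], [S]):
        q = p
        for e in alt:
            q = e(t, q)
            if q is None:
                break
        if q is not None:
            return q
    return None

E('7', 0)
print(counts['N'])  # 9 invocations of N for a one-character input
```

Memoizing S over its input position would cut this back to 3 invocations of N, which is the O(P L) behaviour mentioned above.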
(a) Layered, right-associative:
    E = S ‘+’ E | S ‘−’ E | S
    S = N ‘∗’ S | N ‘/’ S | N
    N = [0 − 9]

(b) Idiomatic:
    E = S ( ‘+’ E) ∗ | S ( ‘−’ E) ∗ | S
    S = N ( ‘∗’ S) ∗ | N ( ‘/’ S) ∗ | S
    N = [0 − 9]

(c) Layered, left-associative:
    E = E ‘+’ S | E ‘−’ S | S
    S = S ‘∗’ N | S ‘/’ N | N
    N = [0 − 9]

(d) Autumn expression cluster:
    E = expr
        → E ‘+’ E    @+ @left recur
        → E ‘−’ E
        → E ‘∗’ E    @+ @left recur
        → E ‘/’ E
        → [0 − 9]    @+

Table 1: 4 PEGs describing a minimal arithmetic language. E stands for Expression, S for Summand and N for Number.

In contrast, parsing a number in grammar (b) is always O(P L). Nevertheless, grammar (a) still produces a meaningful parse if the operators are right-associative. Not so for grammar (b), which flattens the parse into a list of operands.

PEG parsers supporting left-recursion can use grammar (c), the layered, left-associative variant of grammar (a). Our own implementation of left-recursion requires breaking left-recursive cycles by marking at least one expression in the cycle as left-recursive. This can optionally be automated. If the rules are marked as left-recursive, using grammar (c) we will parse a single-digit number in O(P L). If, however, the cycle breaker elects to mark the sequence expressions corresponding to each operator (e.g. (E ‘+’ S)) as left-recursive, then the complexity is O((P + 1)^L).

Expression clusters (grammar (d)) do enable parsing in O(P L) without user intervention or full memoization. This is most welcome, since the algorithm we use to handle left-recursion does preclude memoization while parsing a left-recursive expression.

Implicit precedence. Grammars (a), (b) and (c) encode precedence by grouping the rules by precedence level: operators in S have higher precedence than those in E. We say such grammars are layered. We believe that these grammars are less legible than grammar (d), where precedence is explicit. In an expression cluster, precedence starts at 0; the @+ annotation increments the precedence for the alternate it follows, otherwise precedence remains the same. It is also easy to insert new operators in expression clusters: simply insert a new alternate. There is no need to modify any other parsing expression.⁴

⁴ We are talking about grammar evolution here, i.e. editing a grammar. Grammar composition is not yet supported by the library.

3. Implementation

This section gives an overview of the implementation of Autumn, and briefly explains how precedence and left-recursion handling are implemented.

3.1 Overview

Autumn is an open source parsing library written in Java, available online at http://github.com/norswap/autumn. The library's entry points take a PEG and some text to parse as input. A PEG can be thought of as a graph of parsing expressions. For instance, a sequence is a node that has edges towards all parsing expressions in the sequence. The PEG can be automatically generated from a grammar file, or built programmatically, in the fashion of parser combinators.

Similarly, parsing can be seen as traversing the parsing expression graph. The order and number of times the children of each node are visited is defined by the node's parsing expression type. For instance, a sequence will traverse all its children in order, until one fails; a choice will traverse all its children in order, until one succeeds. This behaviour is defined by how the class associated to the parsing expression type implements the parse method of the ParsingExpression interface. As such, each type of parsing expression has its own mini parsing algorithm.

3.2 Precedence

Implementing precedence is relatively straightforward. First, we store the current precedence in a global parsing state, initialized to 0 so that all nodes can be traversed. Next, we introduce a new type of parsing expression that records the precedence of another expression. A parsing expression of this type has the expression to which the precedence must apply as its only child. Its role is to check whether the precedence of that expression is lower than the current precedence, failing if so, and otherwise increasing the current precedence to that of the expression.

Using explicit precedence in PEGs has a notable pitfall. It precludes memoization over (expression, position) pairs, because the results become contingent on the precedence level at the time of invocation. As a workaround, we can disable memoization for parts of the grammar (the default), or we can memoize over (expression, position, precedence) triplets using a custom memoization strategy.

3.3 Left-Recursion and Associativity

To implement left-recursion, we build upon Seaton's work on the Katahdin language [8]. He proposes a scheme to handle left-recursion that can accommodate both left- and right-associativity. In Katahdin, left-recursion is strongly tied to precedence, much like in our own expression clusters. This is not a necessity, however, and we offer stand-alone forms of left-recursion and precedence in Autumn too.

Here also, the solution is to introduce a new type of parsing expression. This new parsing expression has a single child expression, indicating that this child expression should be treated as left-recursive. All recursive references must be made to the new left-recursive expression.

Algorithm 1 presents a simplified version of the parse method for left-recursive parsing expressions. The algorithm maintains two global data structures. First, a map from (position, expression) pairs to parse results. Second, a set of blocked parsing expressions, used to avoid right-recursion in left-associative parses. A parse result represents any data generated by invoking a parsing expression at an input position, including the syntax tree constructed and the amount of input consumed. We call the parse results held in our data structure seeds [1] because they represent temporary results that can "grow" in a bottom-up fashion. Note that our global data structures are "global" (in practice, scoped to the ongoing parse) so that they persist between (recursive) invocations of the algorithm. Other implementations of the parse method need not be concerned with them.

Let us first ignore left-associative expressions. When invoking a parsing expression at a given position, the algorithm starts by looking for a seed matching the pair, returning it if present. If not, it immediately adds a special seed that signals failure. We then parse the operand, update the seed, and repeat until the seed stops growing. The idea is simple: on the first go, all left-recursion is blocked by the failure seed, and the result is our base case. Each subsequent parse allows one additional left-recursion, until we have matched all the input that could be. For rules that are both left- and right-recursive, the first left-recursion will cause the right-recursion to kick in. Because of PEG's greedy nature, the right-recursion consumes the rest of the input that can be matched, leaving nothing for further left-recursions. The result is a right-associative parse.

Things are only slightly different in the left-associative case. Now the expression is blocked, so it cannot recurse, except in left position. Our loop still grows the seed, ensuring a left-associative parse.

The algorithm has a few pitfalls. First, it requires memoization to be disabled while the left-recursive expression is being parsed. Otherwise, we might memoize a temporary result. Second, for left-associative expressions, it blocks all non-left recursion while we only need to block right-recursion. To enable non-right recursion, our implementation includes an escape hatch operator that inhibits the blocked set while its operand is being parsed. This operator has to be inserted manually.
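The seed-growing loop is easy to demonstrate on the left-recursive rule E = E '-' N | N (a simplified rendition of ours, not Autumn's internals; since the right operand is N rather than E, no blocked set is needed to obtain a left-associative parse):

```python
# Seed-growing sketch for the left-recursive rule E = E '-' N | N.
# The seed maps a position to the best (tree, new_pos) result so far;
# the initial failure seed (None) blocks infinite left-recursion.
seeds = {}

def number(t, p):
    return (t[p], p + 1) if p < len(t) and t[p].isdigit() else None

def E(t, p):
    if p in seeds:            # recursive reference: answer with the seed
        return seeds[p]
    seeds[p] = None           # failure seed: first pass yields the base case
    current = None
    while True:
        r = operand(t, p)
        if r is not None and (current is None or r[1] > current[1]):
            current = r       # the seed grew: try to grow it once more
            seeds[p] = r
        else:
            break             # the seed stopped growing
    del seeds[p]
    return current

def operand(t, p):
    r = E(t, p)               # alternate 1: E '-' N
    if r is not None:
        tree, q = r
        if q < len(t) and t[q] == '-':
            n = number(t, q + 1)
            if n is not None:
                return ((tree, '-', n[0]), n[1])
    return number(t, p)       # alternate 2: N

print(E('1-2-3', 0))  # ((('1', '-', '2'), '-', '3'), 5): left-associative
```

Each loop iteration allows exactly one more left-recursion, so "1-2-3" grows from '1' to ('1' - '2') to (('1' - '2') - '3'), the left-associative reading.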
seeds = {}
blocked = []

parse(expr : left-recursive expression, position):
    if seeds[position][expr] exists then
        return seeds[position][expr]
    else if blocked contains expr then
        return failure
    current = failure
    seeds[position][expr] = failure
    if expr is left-associative then
        blocked.add(expr)
    repeat
        result = parse(expr.operand)
        if result consumed more input than current then
            current = result
            seeds[position][expr] = result
        else
            break
    remove seeds[position][expr]
    if expr is left-associative then
        blocked.remove(expr)
    return current

Algorithm 1: Left-recursion and associativity handling.

3.4 Expression Clusters

Expression clusters integrate left-recursion handling with precedence. As outlined in section 2, this results in a readable, easy-to-maintain and performant construct.

An expression cluster is a choice where each alternate must be annotated with a precedence (recall the @+ annotation from earlier), and can optionally be annotated with an associativity. Alternates can additionally be marked as left-associative, right-associativity being the default. All alternates at the same precedence level must share the same associativity, hence it needs to be mentioned only for the first alternate.

Like left-recursive and precedence expressions, expression clusters are a new kind of parsing expression. Algorithm 2 describes the parse method of expression clusters. The code presents a few key differences with respect to the regular left-recursion parsing algorithm. We now maintain a map from cluster expressions to their current precedence. We iterate over all the precedence groups in our cluster, in decreasing order of precedence. For each group, we verify that the group's precedence is not lower than the current precedence; if it is, we stop. Otherwise, the current precedence is updated to that of the group. We then iterate over the operators in the group, trying to grow our seed. After growing the seed, we retry all operators in the group from the beginning. Note that we can do away with the blocked set: left-associativity is handled via the precedence check. For left-associative groups, we increment the precedence by one, forbidding recursive entry in the group. Upon finishing the invocation, we remove the current precedence mapping only if the invocation was not recursive: if it was, another invocation is still making use of the precedence.

4. Customizing Parser Behaviour

4.1 Adding New Parsing Expression Types

The core idea of Autumn is to represent a PEG as a graph of parsing expressions implementing a uniform interface. By implementing the ParsingExpression interface, users can create new types of parsing expressions. Many of the features we will introduce in this section make use of this capability.

Restrictions The only restriction on custom parsing expressions is the single parse rule: invoking an expression at a given position should always yield the same changes to the parse state. Custom expressions should follow this rule, and ensure that they do not cause other expressions to violate it. This limits the use of global state to influence the behaviour
of sub-expressions. Respecting the rule makes memoization possible and eases reasoning about the grammar.

The rule is not very restrictive, but it does preclude the user from changing the way other expressions parse. This is exactly what our left-recursion and cluster operators do, by blocking recursion. We get away with this by blocking memoization when using left-recursion or precedence. There is a workaround: use a transformation pass to make modified copies of sub-expressions. Experimenting with it was not one of our priorities, as experience shows that super-linear parse times are rare. In practice, the fact that binary operators are exponential in the number of operators (while still linear in the input size) is a much bigger concern, which is adequately addressed by expression clusters.

Extending The Parse State To be practical, custom parsing expressions may need to define new parsing states, or to annotate other parsing expressions. We enable this by endowing parsing expressions, parsers and parse states with an extension object: essentially a fast map that can hold arbitrary data. There are also a few hooks into the library's internals. Our design objective was to allow most native operators to be re-implemented as custom expressions. Since many of our features are implemented as parsing expressions, the result is quite flexible.

seeds = {}
precedences = {}

parse(expr : cluster expression, position):
    if seeds[position][expr] exists then
        return seeds[position][expr]
    current = failure
    seeds[position][expr] = failure
    min precedence = precedences[expr] if defined, else 0
    loop: for group in expr.groups do
        if group.precedence < min precedence then
            break
        precedences[expr] = group.precedence + (group.left associative ? 1 : 0)
        for op in group.ops do
            result = parse(op)
            if result consumed more input than current then
                current = result
                seeds[position][expr] = result
                goto loop
    remove seeds[position][expr]
    if there is no other ongoing invocation of expr then
        remove precedences[expr]
    return current

Algorithm 2: Parsing with expression clusters.

4.2 Grammar Instrumentation

Our library includes facilities to transform the expression graph before starting the parse. Transformations are specified by implementing a simple visitor pattern interface. This can be used in conjunction with new parsing expression types to instrument grammars. In particular, we successfully used custom parsing expression types to trace the execution of the parser and print out debugging information.

We are currently developing a grammar debugger for Autumn and the same principle is used to support breakpoints: parsing expressions of interest are wrapped in a special parsing expression that checks whether the parse should proceed or pause while the user inspects the parse state.

Transforming expression graphs is integral to how Autumn works: we use such transformations to resolve recursive references and break left-recursive cycles in grammars built from grammar files.

4.3 Customizable Error Handling & Memoization

Whenever an expression fails, Autumn reports this fact to the configured error handler for the parse. The default error reporting strategy is to track and report the farthest error position, along with some contextual information.

Memoization is implemented as a custom parsing expression taking an expression to memoize as operand. Whenever the memoization expression is encountered, the current parse state is passed to the memoization strategy. The default strategy is to memoize over (expression, position) pairs. Custom strategies allow using memoization as a bounded cache, discriminating between expressions, or including additional parse state in the key.

4.4 Syntax Tree Construction

In Autumn, syntax trees do not mirror the structure of the grammar. Instead, an expression can be captured, meaning that a node with a user-supplied name will be added to the syntax tree whenever the expression succeeds. Nodes created while parsing the expression (via captures on sub-expressions) will become children of the new node. This effectively elides the syntax tree and even allows for some nifty tricks, such as flattening sub-trees or unifying multiple constructs with different syntax. The text matched by an expression can optionally be recorded. Captures are also implemented as a custom parsing expression type.

4.5 Whitespace Handling

The parser can be configured with a parsing expression to be used as whitespace. This whitespace specification is tied to token parsing expressions, whose foremost effect is to skip the whitespace that follows the text matched by their operand. A token also gives semantic meaning: it represents an indivisible syntactic unit. The error reporting strategy can use this information to good effect, for instance.

We mentioned earlier that we can record the text matched by an expression. If this expression references tokens, the text may contain undesirable trailing whitespace. To avoid this, we make Autumn keep track of the furthest non-whitespace position before the current position.

5. Evaluation

In Table 2, we measure the performance of parsing the source code of the Spring framework (∼ 34 MB of Java code) and producing matching parse trees. The measure-
ments were taken on a 2013 MacBook Pro with a 2.3 GHz Intel Core i7 processor, 4 GB of RAM allocated to the Java heap (Java 8, client VM), and an SSD drive. The Time (Single) column reports the median of 10 task runs in separate VMs. The Time (Iterated) column reports the median of 10 task runs inside a single VM, after discarding 10 warm-up runs. The reported times do not include the VM boot time, nor the time required to assemble the parser combinators (when applicable). For all reported times, the average is always within 0.5 s of the median. All files are read directly from disk. The Memory column reports the peak memory footprint, defined as the maximum heap size measured after a GC activation. The validity of the parse trees was verified by hand over a sampling of all Java syntactical features.

The evaluated tools are Autumn; Rats! [4], a state of the art packrat PEG parser generator with many optimizations; Parboiled, a popular Java/Scala PEG parser combinator library; Mouse [6], a minimalistic PEG parser generator that does not allow memoization; and, for comparison, ANTLR v4 [9], a popular and efficient state of the art CFG parser.

Results show that Autumn's performance is well within the order of magnitude of the fastest parsing tools. This is encouraging, given that we did not dedicate much effort to optimization yet. Many optimizations could be applied, including some of those used in Rats! [4]. Each parser was evaluated with a Java grammar supplied as part of its source distribution. For Autumn, we generated the Java grammar by automatically converting the one that was written for Mouse. We then extracted the expression syntax into a big expression cluster and added capture annotations. The new expression cluster made the grammar more readable and is responsible for a factor 3 speedup of the parse with Autumn (as compared to Autumn without expression clusters).

Parser               Time (Single)   Time (Iterated)   Memory
Autumn               13.17 s         12.66 s           6 154 KB
Mouse                101.43 s        99.93 s           45 952 KB
Parboiled            12.02 s         11.45 s           13 921 KB
Rats!                5.95 s          2.41 s            10 632 KB
ANTLR v4 (Java 7)    4.63 s          2.31 s            44 432 KB

Table 2: Performance comparison of Autumn to other PEG parsing tools as well as ANTLR. Measurements done over 34 MB of Java code.
6. Related Work
Feature-wise, some works have paved the way for full left-recursion and precedence handling. OMeta [12] is a tool
for pattern matching over arbitrary data types. It was the
first tool to implement left-recursion for PEGs, albeit allowing only right-associative parses. Katahdin [8] is a language
whose syntax and semantics are mutable at run-time. It pioneers some of the techniques we successfully deployed, but
is not a parsing tool per se. IronMeta is a port of OMeta to
C# that supports left-recursion using an algorithm developed
by Medeiros et al. [7]. This algorithm enables left-recursion,
associativity and precedence by compiling parsing expressions to byte code for a custom virtual machine. However,
IronMeta doesn’t support associativity handling.
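As a concrete illustration of the problem these systems tackle, here is a minimal sketch (plain Python, not Autumn's API; the grammar is hypothetical) of the seed-growing idea from Warth et al. [1]: a naive recursive-descent interpretation of the left-recursive rule `expr <- expr '-' num / num` would recurse forever before consuming any input, whereas growing a "seed" match iteratively terminates and yields the left-associative parse.

```python
# Minimal sketch of seed-growing evaluation of a left-recursive PEG rule:
#   expr <- expr '-' num / num
# A naive recursive-descent parser would call parse_expr() forever; instead
# we start from the non-left-recursive alternative and grow the match.

def parse_num(s, pos):
    # num <- [0-9] (single digit, for brevity)
    if pos < len(s) and s[pos].isdigit():
        return int(s[pos]), pos + 1
    return None

def parse_expr(s, pos):
    seed = parse_num(s, pos)              # base alternative: expr <- num
    if seed is None:
        return None
    while True:                           # grow: expr <- expr '-' num
        val, p = seed
        if p < len(s) and s[p] == '-':
            nxt = parse_num(s, p + 1)
            if nxt is not None:
                seed = (val - nxt[0], nxt[1])  # left-associative fold
                continue
        return seed
```

Note that the growth loop folds to the left, so "8-3-2" parses as (8-3)-2, which is exactly the associativity behavior the tools above must make controllable.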
7. Conclusion
Left-recursion, precedence and associativity are poorly supported by PEG parsers. Infix and postfix expressions also
cause performance issues in left-recursion-capable PEG
parsers. To solve these issues, we introduce Autumn, a
parsing library that handles left-recursion, associativity and
precedence in PEGs, and makes it efficient through a construct called expression cluster. Autumn’s performance is
on par with that of both state of the art and widely used
PEG parsers. Autumn is built with extensibility in mind, and
makes it easy to add custom parsing expressions, memoization strategies and error handlers. It offers lightweight solutions to ease syntax tree construction, whitespace handling
and grammar instrumentation. In conclusion, Autumn is a
practical parsing tool that alleviates significant pain points
felt in current PEG parsers and constitutes a concrete step
towards making PEG parsing practical.
Acknowledgments
We thank Olivier Bonaventure, Chris Seaton, the SLE reviewers and our shepherd Markus Völter for their advice.
References
[1] A. Warth et al. Packrat Parsers Can Support Left Recursion.
In PEPM, pages 103–110. ACM, 2008.
[2] R. Becket and Z. Somogyi. DCGs + Memoing = Packrat
Parsing but Is It Worth It? In PADL, LNCS 4902, pages
182–196. Springer, 2008.
[3] B. Ford. Parsing Expression Grammars: A Recognition-based
Syntactic Foundation. In POPL, pages 111–122. ACM, 2004.
[4] R. Grimm. Better Extensibility Through Modular Syntax. In
PLDI, pages 38–51. ACM, 2006.
[5] G. Hutton. Higher-order functions for parsing. J. Funct.
Program. 2, pages 323–343, 1992.
[6] R. R. Redziejowski. Mouse: From Parsing Expressions to
a practical parser. In CS&P 2, pages 514–525. Warsaw
University, 2009.
[7] S. Medeiros et al. Left Recursion in Parsing Expression
Grammars. SCP 96, pages 177–190, 2014.
[8] C. Seaton. A Programming Language Where the Syntax
and Semantics Are Mutable at Runtime. Master’s thesis,
University of Bristol, 2007.
[9] T. Parr et al. Adaptive LL(*) Parsing: The Power of Dynamic
Analysis. In OOPSLA, pages 579–598. ACM, 2014.
[10] L. Tratt. Direct left-recursive parsing expression grammars.
Technical Report EIS-10-01, Middlesex University, 2010.
[11] L. Tratt. Parsing: The solved problem that isn’t, 2011. URL
http://tratt.net/laurie/blog/entries/parsing_the_solved_pro
[12] A. Warth et al. OMeta: An Object-oriented Language for
Pattern Matching. In DLS, pages 11–19. ACM, 2007.
Sparse Fourier Transform in Any Constant Dimension with
Nearly-Optimal Sample Complexity in Sublinear Time
arXiv:1604.00845v1 [] 4 Apr 2016
Michael Kapralov
EPFL
April 5, 2016
Abstract
We consider the problem of computing a k-sparse approximation to the Fourier transform of a length N
signal. Our main result is a randomized algorithm for computing such an approximation (i.e. achieving the
ℓ2 /ℓ2 sparse recovery guarantees using Fourier measurements) using Od (k log N log log N ) samples of the
signal in time domain that runs in time Od (k logd+3 N ), where d ≥ 1 is the dimensionality of the Fourier
transform. The sample complexity matches the lower bound of Ω(k log(N/k)) for non-adaptive algorithms
due to [DIPW10] for any k ≤ N 1−δ for a constant δ > 0 up to an O(log log N ) factor. Prior to our work a
result with comparable sample complexity k log N logO(1) log N and sublinear runtime was known for the
Fourier transform on the line [IKP14], but for any dimension d ≥ 2 previously known techniques either
suffered from a poly(log N ) factor loss in sample complexity or required Ω(N ) runtime.
Contents
1 Introduction
2 Preliminaries
3 The algorithm and proof overview
4 Organization
5 Analysis of LocateSignal: main definitions and basic claims
6 Analysis of LocateSignal: bounding ℓ1 norm of undiscovered elements
  6.1 Bounding noise from heavy hitters
  6.2 Bounding effect of tail noise
  6.3 Putting it together
7 Analysis of ReduceL1Norm and SparseFFT
  7.1 Analysis of ReduceL1Norm
  7.2 Analysis of SNR reduction loop in SparseFFT
  7.3 Analysis of SparseFFT
8 ℓ∞/ℓ2 guarantees and constant SNR case
  8.1 ℓ∞/ℓ2 guarantees
  8.2 Recovery at constant SNR
9 Utilities
  9.1 Properties of EstimateValues
  9.2 Properties of HashToBins
  9.3 Lemmas on quantiles and the median estimator
10 Semi-equispaced Fourier Transform
11 Acknowledgements
A Omitted proofs
1 Introduction
The Discrete Fourier Transform (DFT) is a fundamental mathematical concept that allows one to represent a discrete
signal of length N as a linear combination of N pure harmonics, or frequencies. The development of a fast
algorithm for the Discrete Fourier Transform, known as the FFT (Fast Fourier Transform), in 1965 revolutionized digital
signal processing, earning FFT a place in the top 10 most important algorithms of the twentieth century [Cip00].
Fast Fourier Transform (FFT) computes the DFT of a length N signal in time O(N log N ), and finding a faster
algorithm for DFT is a major open problem in theoretical computer science. While FFT applies to general
signals, many of the applications of FFT (e.g. image and video compression schemes such as JPEG and MPEG)
rely on the fact that the Fourier spectrum of signals that arise in practice can often be approximated very well by
only a few of the top Fourier coefficients, i.e. practical signals are often (approximately) sparse in the Fourier
basis.
Besides applications in signal processing, the Fourier sparsity property of real-world signals plays an important role in medical imaging, where the cost of measuring a signal, i.e. sample complexity, is often a major
bottleneck. For example, an MRI machine effectively measures the Fourier transform of a signal x representing the object being scanned, and the reconstruction problem is exactly the problem of inverting the Fourier
transform x̂ of x approximately, given a set of measurements. Minimizing the sample complexity of acquiring a
signal using Fourier measurements thus translates directly to reduction in the time the patient spends in the MRI
machine [LDSP08] while a scan is being taken. In applications to Computed Tomography (CT) reduction in
measurement cost leads to reduction in the radiation dose that a patient receives [Sid11]. Because of this strong
practical motivation, the problem of computing a good approximation to the FFT of a Fourier sparse signal
fast and using few measurements in time domain has been the subject of much attention in several communities.
In the area of compressive sensing [Don06, CT06], where one studies the task of recovering (approximately)
sparse signals from linear measurements, Fourier measurements have been one of the key settings of interest. In
particular, the seminal work of [CT06, RV08] has shown that length N signals with at most k nonzero Fourier
coefficients can be recovered using only k logO(1) N samples in time domain. The recovery algorithms are
based on linear programming and run in time polynomial in N . A different line of research on the Sparse
Fourier Transform (Sparse FFT), initiated in the fields of computational complexity and learning theory, has
been focused on developing algorithms whose sample complexity and running time scale with the sparsity as
opposed to the length of the input signal. Many such algorithms have been proposed in the literature, including
[GL89, KM91, Man92, GGI+ 02, AGS03, GMS05, Iwe10, Aka10, HIKP12b, HIKP12a, BCG+ 12, HAKI12,
PR13, HKPV13, IKP14]. These works show that, for a wide range of signals, both the time complexity and the
number of signal samples taken can be significantly sub-linear in N , often of the form k logO(1) N .
In this paper we consider the problem of computing a sparse approximation to a signal x ∈ CN given access
to its Fourier transform x̂ ∈ C^N.1 The best known results obtained in both the compressive sensing literature and
sparse FFT literature on this problem are summarized in Fig. 1. We focus on algorithms that work for worst-case signals and recover k-sparse approximations satisfying the so-called ℓ2/ℓ2 approximation guarantee. In
this case, the goal of an algorithm is as follows: given m samples of the Fourier transform x̂ of a signal x, and
the sparsity parameter k, output x′ satisfying

||x − x′||_2 ≤ C · min_{k-sparse y} ||x − y||_2,    (1)
The algorithms are randomized2 and succeed with at least constant probability.
1 Note that the problem of reconstructing a signal from Fourier measurements is equivalent to the problem of computing the Fourier
transform of a signal x whose spectrum is approximately sparse, as the DFT and its inverse differ only by a conjugation.
2 Some of the algorithms [CT06, RV08, CGV12] can in fact be made deterministic, but at the cost of satisfying a somewhat weaker
ℓ2/ℓ1 guarantee.

Reference              Time                         Samples                     C               Dimension d > 1?
[CT06, RV08, CGV12]    N × m linear program         k log^O(1) N                O(1)            yes
[Bou14, HR16]          N × m linear program         O(k log^2(k) log(N))        (log N)^O(1)    yes
[CP10]                 N × m linear program         O(k log N)                  any             yes
[HIKP12a]              O(k log(N) log(N/k))         O(k log(N) log(N/k))        any             no
[IKP14]                k log^2(N) log^O(1) log N    k log(N) log^O(1) log N     any             no
[IK14]                 N log^O(1) N                 O(k log N)                  O(1)            yes
[DIPW10]               lower bound                  Ω(k log(N/k))

Figure 1: Bounds for the algorithms that recover k-sparse Fourier approximations. All algorithms produce an
output satisfying Equation 1 with probability of success that is at least constant. The fourth column specifies
constraints on the approximation factor C. For example, C = O(1) means that the algorithm can only handle
constant C as opposed to any C > 1. The last column specifies whether the sample complexity bounds are
unchanged, up to factors that depend on the dimension d only, for higher dimensional DFT.

Higher dimensional Fourier transform. While significant attention in the sublinear Sparse FFT literature
has been devoted to the basic case of the Fourier transform on the line (i.e. one-dimensional signals), the sparsest signals often occur in applications involving higher-dimensional DFTs. Although a reduction from DFT
on a two-dimensional grid with relatively prime side lengths p × q to a one-dimensional DFT of length pq
is possible [GMS05, Iwe12], the reduction does not apply to the most common case when the side lengths
of the grid are equal to the same power of two. It turns out that most sublinear Sparse FFT techniques developed for the one-dimensional DFT do not extend well to the higher dimensional setting, suffering from
at least a polylogarithmic loss in sample complexity. Specifically, the only prior sublinear time algorithm that
applies to general m × m grids is due to [GMS05], and has O(k log^c N) sample and time complexity for a
rather large value of c. If N is a power of 2, a two-dimensional adaptation of the [HIKP12a] algorithm (outlined in [GHI+ 13]) has roughly O(k log3 N ) time and sample complexity, and an adaptation of [IKP14] has
O(k log2 N (log log N )O(1) ) sample complexity. In general dimension d ≥ 1 both of these algorithms have
sample complexity Ω(k logd N ).
Thus, none of the results obtained so far was able to guarantee sparse recovery from high dimensional (any
d ≥ 2) Fourier measurements without suffering at least a polylogarithmic loss in sample complexity, while at
the same time achieving sublinear runtime.
Our results. In this paper we give an algorithm that achieves the ℓ2 /ℓ2 sparse recovery guarantees (1) with
d-dimensional Fourier measurements that uses Od (k log N log log N ) samples of the signal and has the running
time of Od (k logd+3 N ). This is the first sublinear time algorithm that comes within a poly(log log N ) factor
of the sample complexity lower bound of Ω(k log(N/k)) due to [DIPW10] for any dimension higher than one.
Sparse Fourier Transform overview. The overall outline of our algorithm follows the framework of [GMS05,
HIKP12a, IKP14, IK14], which adapt the methods of [CCFC02, GLPS10] from arbitrary linear measurements
to Fourier measurements. The idea is to take, multiple times, a set of B = O(k) linear measurements of the
form
X
ũj =
s i xi
i:h(i)=j
for random hash functions h : [N ] → [B] and random sign changes si with |si | = 1. This corresponds to
hashing to B buckets. With such ideal linear measurements, O(log(N/k)) hashes suffice for sparse recovery,
giving an O(k log(N/k)) sample complexity.
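A minimal sketch of this idealized bucketing (plain Python, illustrative only; the actual algorithm realizes these measurements via Fourier samples and filters, as described next): when a heavy coordinate is isolated in its bucket, s_i · u_{h(i)} recovers x_i up to the noise hashed into the same bucket.

```python
import random

# Idealized linear measurements: hash N coordinates into B buckets with
# random signs s_i, giving u_j = sum_{i : h(i) = j} s_i * x_i.
def measure(x, B, seed=0):
    rng = random.Random(seed)
    N = len(x)
    h = [rng.randrange(B) for _ in range(N)]     # hash function h : [N] -> [B]
    s = [rng.choice((-1, 1)) for _ in range(N)]  # random sign changes, |s_i| = 1
    u = [0.0] * B
    for i in range(N):
        u[h[i]] += s[i] * x[i]                   # bucket sums
    return u, h, s

# A coordinate that is alone among the nonzeros in its bucket is read off exactly:
x = [0.0] * 32
x[7] = 5.0                                       # single heavy coordinate
u, h, s = measure(x, B=8)
estimate = s[7] * u[h[7]]                        # equals x[7] here, since the
                                                 # rest of the signal is zero
```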
The sparse Fourier transform algorithms approximate ũ using linear combinations of Fourier samples.
Specifically, the coefficients of x are first permuted via a random affine permutation of the input space. Then the
coefficients are partitioned into buckets. This step uses the “filtering” process that approximately partitions the
range of x into intervals (or, in higher dimension, squares, or ℓ∞ balls) with N/B coefficients each, where each
interval corresponds to one bucket. Overall, this ensures that most of the large coefficients are “isolated”, i.e.,
are hashed to unique buckets, as well as that the contributions from the “tail” of the signal x to those buckets
are not much greater than the average (the tail of the signal is defined as Err_k(x) = min_{k-sparse y} ||x − y||_2). This
allows one to mimic the iterative recovery algorithm described for linear measurements above. However, there
are several difficulties in making this work using an optimal number of samples.
This enables the algorithm to identify the locations of the dominant coefficients and estimate their values,
producing a sparse estimate χ of x. To improve this estimate, we repeat the process on x − χ by subtracting
the influence of χ during hashing, thereby refining the constructed approximation of x. After a few iterations
of this refinement process the algorithm obtains a good sparse approximation χ of x.
A major hurdle in implementing this strategy is that any filter that has been constructed in the literature so far
is imprecise in that coefficients contribute (“leak”) to buckets other than the one they are technically mapped
into. This contribution, however, is limited and can be controlled by the quality of the filter. The details of
filter choice have played a crucial role in recent developments in Sparse FFT algorithms. For example, the
best known runtime for one-dimensional Sparse FFT, due to [HIKP12b], was obtained by constructing filters
that (almost) precisely mimic the ideal hash process, allowing for a very fast implementation of the process in
dimension one. The price to pay for the precision of the filter, however, is that each hashing becomes a logd N
factor more costly in terms of sample complexity and runtime than in the idealized case. At the other extreme,
the algorithm of [GMS05] uses much less precise filters, which only lead to a C d loss of sample complexity
in higher dimensions d, for a constant C > 0. Unfortunately, because of the imprecision of the filters the
iterative improvement process becomes quite noisy, requiring Ω(log N ) iterations of the refinement process
above. As [GMS05] use fresh randomness for each such iteration, this results in an Ω(log N ) factor loss in
sample complexity. The result of [IKP14] uses a hybrid strategy, effectively interpolating between [HIKP12b]
and [GMS05]. This gives the near optimal O(k log N logO(1) log N ) sample complexity in dimension one (i.e.
Fourier transform on the line), but still suffers from a logd−1 N loss in dimension d.
Techniques of [IK14]. The first algorithm to achieve optimal sample complexity was recently introduced
in [IK14]. The algorithm uses an approach inspired by [GMS05] (and hence uses ‘crude’ filters that do not
lose much in sample complexity), but introduces a key innovation enabling optimal sample complexity: the
algorithm does not use fresh hash functions in every repetition of the refinement process. Instead, O(log N )
hash functions are chosen at the beginning of the process, such that each large coefficient is isolated by most of
those functions with high probability. The same hash functions are then used throughout the execution of the
algorithm. As every hash function required a separate set of samples to construct the buckets, reusing the hash
functions makes sample complexity independent of the number of iterations, leading to the optimal bound.
While a natural idea, reusing hash functions creates a major difficulty: if the algorithm identified a nonexistent large coefficient (i.e. a false positive) by mistake and added it to χ, this coefficient would be present
in the difference vector x − χ (i.e. residual signal) and would need to be corrected later. As the spurious
coefficient depends on the measurements, the ‘isolation’ properties required for recovery need not hold for it
as its position is determined by the hash functions themselves, and the algorithm might not be able to correct
the mistake. This hurdle was overcome in [IK14] by ensuring that no large coefficients are created spuriously
throughout the execution process. This is a nontrivial property to achieve, as the hashing process is quite
noisy due to use of the ‘crude’ filters to reduce the number of samples (because the filters are quite simple,
the bucketing process suffers from substantial leakage). The solution was to recover the large coefficients
in decreasing order of their magnitude. Specifically, in each step, the algorithm recovered coefficients with
magnitude that exceeded a specific threshold (that decreases at an exponential rate). With this approach the
ℓ∞ norm of the residual signal decreases by a constant factor in every round, resulting in the even stronger
ℓ∞ /ℓ2 sparse recovery guarantees in the end. The price to pay for this strong guarantee was the need for a very
strong primitive for locating dominant elements in the residual signal: a primitive was needed that would make
mistakes with at most inverse polynomial probability. This was achieved by essentially brute-force decoding
over all potential elements in [N ]: the algorithm loops over all elements i ∈ [N ] and for each i tests, using the
O(log N ) measurements taken, whether i is a dominant element in the residual signal. This resulted in Ω(N )
runtime.
Our techniques. In this paper we show how to make the aforementioned algorithm run in sub-linear
time, at the price of a slightly increased sampling complexity of Od (k log N log log N ). To achieve a sublinear runtime, we need to replace the loop over all N coefficients by a location primitive (similar to that in
prior works) that identifies the position of any large coefficient that is isolated in a bucket in logO(1) N time
per bucket, i.e. without resorting to brute force enumeration over the domain of size N . Unfortunately, the
identification step alone increases the sampling complexity by O(log N ) per hash function, so unlike [IK14],
here we cannot repeat this process using O(log N ) hash functions to ensure that each large coefficient is isolated
by one of those functions. Instead, we can only afford O(log log N ) hash functions overall, which means that
a 1/log^O(1) N fraction of large coefficients will not be isolated in most hashings. This immediately precludes
the possibility of using the initial samples to achieve ℓ∞ norm reduction as in [IK14]. Another problem,
however, is that the weaker location primitive that we use may generate spurious coefficients at every step
of the recovery process. These spurious coefficients, together with the 1/log^O(1) N fraction of non-isolated
elements, contaminate the recovery process and essentially render the original samples useless after a small
number of refinement steps. To overcome these hurdles, instead of the ℓ∞ reduction process of [IK14] we
use a weaker invariant on the reduction of mass in the ‘heavy’ elements of the signal throughout our iterative
process. Specifically, instead of reduction of ℓ∞ norm of the residual as in [IK14] we give a procedure for
reducing the ℓ1 norm of the ‘head’ of the signal. To overcome the contamination coming from non-isolated as
well as spuriously created coefficients, we achieve ℓ1 norm reduction by alternating two procedures. The first
procedure uses the O(log log N ) hash functions to reduce the ℓ1 norm of ‘well-hashed’ elements in the signal,
and the second uses a simple sparse recovery primitive to reduce the ℓ∞ norm of offending coefficients when
the first procedure gets stuck. This can be viewed as a signal-to-noise ratio (SNR) reduction step similar in spirit
the one achieved in [IKP14]. The SNR reduction phase is insufficient for achieving the ℓ2 /ℓ2 sparse recovery
guarantee, and hence we need to run a cleanup phase at the end, when the signal to noise ratio is constant. It has
been observed before (in [IKP14]) that if the signal to noise ratio is constant, then recovery can be done using
standard techniques with optimal sample complexity. The crucial difference between [IKP14] and our setting
is, however, that we only have bounds on the ℓ1-SNR as opposed to the ℓ2-SNR in [IKP14]. It turns out, however, that
this is not a problem – we give a stronger analysis of the corresponding primitive from [IKP14], showing that
ℓ1 -SNR bound is sufficient.
Related work on continuous Sparse FFT. Recently [BCG+ 12] and [PS15] gave algorithms for the related
problem of computing Sparse FFT in the continuous setting. These results are not directly comparable to ours,
and suffer from a polylogarithmic inefficiency in sample complexity bounds.
2 Preliminaries
For a positive even integer a we will use the notation [a] = {−a/2, −a/2 + 1, . . . , −1, 0, 1, . . . , a/2 − 1}. We will
consider signals of length N = nd , where n is a power of 2 and d ≥ 1 is the dimension. We use the notation
ω = e2πi/n for the root of unity of order n. The d-dimensional forward and inverse Fourier transforms are
given by

x̂_j = (1/√N) Σ_{i∈[n]^d} ω^{i^T j} x_i    and    x_j = (1/√N) Σ_{i∈[n]^d} ω^{−i^T j} x̂_i,    (2)
respectively, where j ∈ [n]^d. We will denote the forward Fourier transform by F. Note that we use
the orthonormal version of the Fourier transform. We assume that the input signal has entries of polynomial
precision and range. Thus, we have ||x̂||2 = ||x||2 for all x ∈ CN (Parseval’s identity). Given access to samples
of x̂, we recover a signal z such that

||x − z||_2 ≤ (1 + ǫ) min_{k-sparse y} ||x − y||_2.
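The orthonormal convention in (2) and Parseval's identity can be checked numerically; a small NumPy sketch (NumPy's fftn omits the 1/√N factor and uses the conjugate exponent convention, neither of which affects norms):

```python
import numpy as np

# Numerical check of the orthonormal d-dimensional DFT: with a 1/sqrt(N)
# normalization the transform preserves the l2 norm (Parseval's identity)
# and composing with the inverse recovers the signal.
n, d = 8, 2
N = n ** d
rng = np.random.default_rng(0)
x = rng.standard_normal((n,) * d) + 1j * rng.standard_normal((n,) * d)

x_hat = np.fft.fftn(x) / np.sqrt(N)         # orthonormal forward transform
x_back = np.fft.ifftn(x_hat) * np.sqrt(N)   # orthonormal inverse transform

assert np.allclose(np.linalg.norm(x_hat), np.linalg.norm(x))  # ||x_hat||_2 = ||x||_2
assert np.allclose(x_back, x)                                 # inverse undoes forward
```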
We will use pseudorandom spectrum permutations, which we now define. We write Md×d for the set of
d × d matrices over Zn with odd determinant. For Σ ∈ Md×d , q ∈ [n]d and i ∈ [n]d let πΣ,q (i) = Σ(i − q)
mod n. Since Σ ∈ Md×d , this is a permutation. Our algorithm will use π to hash heavy hitters into B buckets,
where we will choose B ≈ k. We will often omit the subscript Σ, q and simply write π(i) when Σ, q is fixed
or clear from context. For i, j ∈ [n]d we let oi (j) = π(j) − (n/b)h(i) be the “offset” of j ∈ [n]d relative to
i ∈ [n]d (note that this definition is different from the one in [IK14]). We will always have B = bd , where b is
a power of 2.
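A small sketch (with illustrative values for Σ and q) confirming that π_{Σ,q}(i) = Σ(i − q) mod n is a bijection on [n]^d when det Σ is odd:

```python
import numpy as np

# pi_{Sigma,q}(i) = Sigma (i - q) mod n in dimension d. When det(Sigma) is
# odd, Sigma is invertible mod n (n a power of 2), so the map permutes [n]^d.
n, d = 4, 2
Sigma = np.array([[3, 2], [1, 1]])   # det = 1, which is odd
q = np.array([2, 3])

def pi(i):
    return tuple(Sigma.dot(np.array(i) - q) % n)

points = [(a, b) for a in range(n) for b in range(n)]
images = {pi(p) for p in points}
assert len(images) == n ** d         # no collisions: pi is a bijection on [n]^d
```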
Definition 2.1. Suppose that Σ^{−1} exists mod n. For a, q ∈ [n]^d we define the permutation P_{Σ,a,q} by (P_{Σ,a,q} x̂)_i = x̂_{Σ^T(i−a)} ω^{i^T Σq}.
Lemma 2.2. F^{−1}(P_{Σ,a,q} x̂)_{π_{Σ,q}(i)} = x_i ω^{a^T Σi}.
The proof is given in [IK14] and we do not repeat it here. Define
Err_k(x) = min_{k-sparse y} ||x − y||_2    and    µ^2 = Err_k^2(x)/k.    (3)
In this paper, we assume knowledge of µ (a constant factor upper bound on µ suffices). We also assume that
the signal to noise ratio is bounded by a polynomial, namely that R∗ := ||x||_∞/µ ≤ N^O(1). We use the
notation B^∞_r(x) to denote the ℓ∞ ball of radius r around x: B^∞_r(x) = {y ∈ [n]^d : ||x − y||_∞ ≤ r}, where
||x − y||_∞ = max_{s∈[d]} ||x_s − y_s||_◦, and ||x_s − y_s||_◦ is the circular distance on Z_n. We will also use the
notation f ≲ g to denote f = O(g). For a real number a we write |a|_+ to denote the positive part of a, i.e. |a|_+ = a if
a ≥ 0 and |a|_+ = 0 otherwise.
We will use the filter G, Ĝ constructed in [IK14]. The filter is defined by a parameter F ≥ 1 that governs
its decay properties. The filter satisfies supp Ĝ ⊆ [−F · b, F · b]^d.

Lemma 2.3 (Lemma 3.1 in [IK14]). One has (1) G_j ∈ [1/(2π)^{F·d}, 1] for all j ∈ [n]^d such that ||j||_∞ ≤ n/(2b);
(2) |G_j| ≤ (2/(1 + (b/n)||j||_∞))^F for all j ∈ [n]^d as long as b ≥ 3; and (3) G_j ∈ [0, 1] for all j as long as F is even.
Remark 2.4. Property (3) was not stated explicitly in Lemma 3.1 of [IK14], but follows directly from their
construction.
The properties above imply that most of the mass of the filter is concentrated in a square of side O(n/b),
approximating the “ideal” filter (whose value would be equal to 1 for entries within the square and equal to
0 outside of it). Note that for each i ∈ [n]^d one has |G_{o_i(i)}| ≥ 1/(2π)^{d·F}. We refer to the parameter F as the
sharpness of the filter. Our hash functions are not pairwise independent, but possess a property that still makes
hashing using our filters efficient:
Lemma 2.5 (Lemma 3.2 in [IK14]). Let i, j ∈ [n]d . Let Σ be uniformly random with odd determinant. Then
for all t ≥ 0 one has Pr[||Σ(i − j)||∞ ≤ t] ≤ 2(2t/n)d .
Pseudorandom spectrum permutations combined with a filter G give us the ability to ‘hash’ the elements
of the input signal into a number of buckets (denoted by B). We formalize this using the notion of a hashing.
A hashing is a tuple consisting of a pseudorandom spectrum permutation π, target number of buckets B and a
sharpness parameter F of our filter, denoted by H = (π, B, F ). Formally, H is a function that maps a signal x
to B signals, each corresponding to a hash bucket, allowing us to solve the k-sparse recovery problem on input
x by reducing it to 1-sparse recovery problems on the bucketed signals. We give the formal definition below.
Definition 2.6 (Hashing H = (π, B, F)). For a permutation π = (Σ, q), parameters b > 1, B = b^d and F,
a hashing H := (π, B, F) is a function mapping a signal x ∈ C^{[n]^d} to B signals H(x) = (u_s)_{s∈[b]^d}, where
u_s ∈ C^{[n]^d} for each s ∈ [b]^d, such that for each i ∈ [n]^d

u_{s,i} = Σ_{j∈[n]^d} G_{π(j)−(n/b)·s} x_j ω^{i^T Σj} ∈ C,
where G is a filter with B buckets and sharpness F constructed in Lemma 2.3.
For a hashing H = (π, B, F ), π = (Σ, q) we sometimes write PH,a , a ∈ [n]d to denote PΣ,a,q . We will
consider hashings of the input signal x, as well as the residual signal x − χ.

Definition 2.7 (Measurement m = m(x, H, a)). For a signal x ∈ C^{[n]^d}, a hashing H = (π, B, F) and a
parameter a ∈ [n]^d, a measurement m = m(x, H, a) ∈ C^{[b]^d} is the B-dimensional complex valued vector,
indexed by [b]^d, obtained by evaluating the hashing H(x) at a, i.e. for s ∈ [b]^d

m_s = Σ_{j∈[n]^d} G_{π(j)−(n/b)·s} x_j ω^{a^T Σj},
where G is a filter with B buckets and sharpness F constructed in Lemma 2.3.
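For intuition, the measurement of Definition 2.7 can be evaluated directly from its defining sum in O(N · B) time; the sketch below uses d = 1 and a toy boxcar filter standing in for the filter of Lemma 2.3, and shows a single spectral spike landing in the bucket its permuted location hashes to. (HashToBins computes the same vector far faster via FFTs.)

```python
import numpy as np

# Direct evaluation of m_s = sum_j G_{pi(j) - (n/b) s} x_j omega^{a Sigma j}
# (Definition 2.7 with d = 1); the filter here is a toy boxcar, not the
# filter of Lemma 2.3.
n, b = 8, 4                             # N = n signal length, B = b buckets
omega = np.exp(2j * np.pi / n)
Sigma, q, a = 3, 1, 2                   # Sigma is an odd scalar for d = 1
x = np.zeros(n, dtype=complex)
x[5] = 1.0                              # a single spectral spike

def G(j):                               # toy filter: 1 on a width-(n/b) box at 0
    jc = min(j % n, n - (j % n))        # circular distance to 0
    return 1.0 if jc <= n // (2 * b) else 0.0

def pi(j):                              # pi_{Sigma,q}(j) = Sigma (j - q) mod n
    return (Sigma * (j - q)) % n

m = np.array([sum(G(pi(j) - (n // b) * s) * x[j] * omega ** (a * Sigma * j)
                  for j in range(n))
              for s in range(b)])
# the spike at 5 is permuted to pi(5) = 4 and lands in bucket s = 2, |m_2| = 1
```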
Definition 2.8. For any x ∈ C^{[n]^d} and any hashing H = (π, B, F) define the vector µ^2_{H,·}(x) ∈ R^{[n]^d} by letting, for every i ∈ [n]^d,

µ^2_{H,i}(x) := |G^{−1}_{o_i(i)}|^2 · Σ_{j∈[n]^d \ {i}} |x_j|^2 |G_{o_i(j)}|^2.
We access the signal x in Fourier domain via the function HashToBins(x̂, χ, (H, a)), which evaluates
the hashing H of the residual signal x − χ at point a ∈ [n]^d, i.e. computes the measurement m(x, H, a) (the
computation is done with polynomial precision). One can view this function as “hashing” x into B bins by
convolving it with the filter G constructed above and subsampling appropriately. The pseudocode for this
function is given in section 9.2. In what follows we will use the following properties of HashToBins:
Lemma 2.9. There exists a constant C > 0 such that for any dimension d ≥ 1, any integer B ≥ 1, any
x, χ ∈ C^{[n]^d}, x′ := x − χ, if Σ ∈ M_{d×d}, a, q ∈ [n]^d are selected uniformly at random, the following conditions
hold.

Let π = (Σ, q), H = (π, B, F), where G is the filter with B buckets and sharpness F constructed in
Lemma 2.3, and let u = HashToBins(x̂, χ, (H, a)). Then if F ≥ 2d, F = Θ(d), for any i ∈ [n]^d

(1) For any H one has max_{a∈[n]^d} |G^{−1}_{o_i(i)} ω^{−a^T Σi} u_{h(i)} − x′_i| ≤ G^{−1}_{o_i(i)} · Σ_{j∈S\{i}} G_{o_i(j)} |x′_j|. Furthermore,
E_H[G^{−1}_{o_i(i)} · Σ_{j∈S\{i}} G_{o_i(j)} |x′_j|] ≤ (2π)^{d·F} · C^d ||x′||_1/B + N^{−Ω(c)};

(2) E_H[µ^2_{H,i}(x′)] ≤ (2π)^{2d·F} · C^d ||x′||_2^2/B.

Furthermore,

(3) for any hashing H, if a is chosen uniformly at random from [n]^d, one has

E_a[|G^{−1}_{o_i(i)} ω^{−a^T Σi} u_{h(i)} − x′_i|^2] ≤ µ^2_{H,i}(x′) + N^{−Ω(c)}.
Here c > 0 is an absolute constant that can be chosen arbitrarily large at the expense of a factor of cO(d) in
runtime.
The proof of Lemma 2.9 is given in Appendix A. We will need several definitions and lemmas from [IK14],
which we state here. We sometimes need slight modifications of the corresponding statements from [IK14],
in which case we provide proofs in Appendix A. Throughout this paper the main object of our analysis is a
properly defined set S ⊆ [n]d that contains the ’large’ coefficients of the input vector x. Below we state our
definitions and auxiliary lemmas without specifying the identity of this set, and then use specific instantiations
of S to analyze outer primitives such as ReduceL1Norm, ReduceInfNorm and RecoverAtConstantSNR. This is convenient because the analysis of all of these primitives can then use the same basic claims
about estimation and location primitives. The definition of S given in (4) above is the one we use for analyzing
ReduceL1Norm and the SNR reduction loop. Analysis of ReduceInfNorm (section 8.1) and RecoverAtConstantSNR (section 8.2) use different instantiations of S, but these are local to the corresponding
sections, and hence the definition in (4) is the best one to have in mind for the rest of this section.
First, we need the definition of an element i ∈ [n]^d being isolated under a hashing H = (π, B, F). Intuitively, an element i ∈ S is isolated under hashing H with respect to set S if not too many other elements of S are
hashed too close to i. Formally, we have
Definition 2.10 (Isolated element). Let H = (π, B, F ), where π = (Σ, q), Σ ∈ Md×d , q ∈ [n]d . We say that
an element i ∈ [n]d is isolated under hashing H at scale t if
|π(S \ {i}) ∩ B^∞_{(n/b)·h(i)}((n/b) · 2^t)| ≤ (2π)^{−d·F} · α^{d/2} 2^{(t+1)d} · 2^t.
We say that i is simply isolated under hashing H if it is isolated under H at all scales t ≥ 0.
The following lemma shows that any element i ∈ S is likely to be isolated under a random permutation π:
Lemma 2.11. For any integer k ≥ 1 and any S ⊆ [n]^d, |S| ≤ 2k, if B ≥ (2π)^{4d·F} · k/α^d for α ∈ (0, 1) smaller
than an absolute constant, F ≥ 2d, and a hashing H = (π, B, F) is chosen randomly (i.e. Σ ∈ M_{d×d}, q ∈ [n]^d
are chosen uniformly at random, and π = (Σ, q)), then each i ∈ [n]^d is isolated under permutation π with
probability at least 1 − (1/2)√α.
The proof of the lemma is very similar to Lemma 5.4 in [IK14] (the only difference is that the ℓ∞ ball is
centered at the point that i hashes to in Lemma 2.11, whereas it was centered at π(i) in Lemma 5.4 of [IK14])
and is given in Appendix A for completeness.
As every element i ∈ S is likely to be isolated under one random hashing, it is very likely to be isolated
under a large fraction of hashings H1 , . . . , Hrmax :
Lemma 2.12. For any integer k ≥ 1, and any S ⊆ [n]d , |S| ≤ 2k, if B ≥ (2π)^{4d·F} · k/α^d for α ∈ (0, 1) smaller than an absolute constant, F ≥ 2d, and Hr = (πr , B, F ), r = 1, . . . , rmax is a sequence of random hashings, then every i ∈ [n]d is isolated with respect to S under at least (1 − √α)rmax hashings Hr , r = 1, . . . , rmax , with probability at least 1 − 2^{−Ω(√α · rmax)} .
Proof. Follows by an application of Chernoff bounds and Lemma 2.11.
It is convenient for our location primitive (L OCATE S IGNAL, see Algorithm 1) to sample the signal at pairs
of locations chosen randomly (but in a correlated fashion). The two points are then combined into one in a
linear fashion. We now define notation for this common operation on pairs of numbers in [n]d . Note that we are
viewing pairs in [n]d × [n]d as vectors in dimension 2, and the ⋆ operation below is just the dot product over this
two dimensional space. However, since our input space is already endowed with a dot product (for i, j ∈ [n]d
we denote their dot product by iT j), having special notation here will help avoid confusion.
Operations on vectors in [n]d . For a pair of vectors (α1 , β1 ), (α2 , β2 ) ∈ [n]d × [n]d we let (α1 , β1 ) ⋆ (α2 , β2 )
denote the vector γ ∈ [n]d such that
γi = (α1 )i · (α2 )i + (β1 )i · (β2 )i for all i ∈ [d].
Note that for any a, b, c ∈ [n]d × [n]d one has a ⋆ b + a ⋆ c = a ⋆ (b + c), where addition for elements of
[n]d × [n]d is componentwise. We write 1 ∈ [n]d for the all ones vector in dimension d, and 0 ∈ [n]d for the
zero vector. For a set A ⊆ [n]d × [n]d and a vector (α, β) ∈ [n]d × [n]d we denote
A ⋆ (α, β) := {a ⋆ (α, β) : a ∈ A}.
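The ⋆ operation is elementary to implement. The sketch below is our illustrative code (the reduction mod n is our explicit assumption, since all indices live in [n]^d); it also exercises the distributivity identity a ⋆ b + a ⋆ c = a ⋆ (b + c):

```python
def star(p1, p2, n):
    """(a1, b1) * (a2, b2) -> gamma with gamma_i = (a1)_i (a2)_i + (b1)_i (b2)_i,
    reduced mod n so the result stays in [n]^d."""
    (a1, b1), (a2, b2) = p1, p2
    return tuple((u * v + s * t) % n for u, v, s, t in zip(a1, a2, b1, b2))
```

For example, with n = 8 and pairs a = ((1, 2), (3, 4)), b = ((5, 6), (7, 0)), one gets star(a, b, 8) == (2, 4), and adding componentwise commutes with ⋆ as claimed in the text.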
Definition 2.13 (Balanced set of points). For an integer ∆ ≥ 2 we say that a (multi)set Z ⊆ [n]d is ∆-balanced in coordinate s ∈ [1 : d] if for every r = 1, . . . , ∆ − 1 at least a 49/100 fraction of the elements of the set {ω_∆^{r·z_s}}_{z∈Z} belong to the left halfplane {u ∈ C : Re(u) ≤ 0} in the complex plane, where ω_∆ = e^{2πi/∆} is the ∆-th root of unity.
Note that if ∆ divides n, then for any fixed value of r the point ω_∆^{r·z_s} is uniformly distributed over the ∆′-th roots of unity, for some ∆′ between 2 and ∆, for every r = 1, . . . , ∆ − 1 when z_s is uniformly random in [n]. Thus for r ≠ 0 we expect at least half the points to lie in the halfplane {u ∈ C : Re(u) ≤ 0}. A set Z is balanced if it does not deviate from the expected behavior too much. The following claim is immediate via standard concentration bounds:
Claim 2.14. There exists a constant C > 0 such that for any ∆ a power of two with ∆ = log^{O(1)} n, and n a power of 2, the following holds if ∆ < n. If the elements of a (multi)set A ⊆ [n]d × [n]d of size C log log N are chosen uniformly at random with replacement from [n]d × [n]d , then with probability at least 1 − 1/log^4 N one has that for every s ∈ [1 : d] the set A ⋆ (0, e_s) is ∆-balanced in coordinate s.
Since we only use one value of ∆ in the paper (see line 8 in Algorithm 1), we will usually say that a set is
simply ‘balanced’ to denote the ∆-balanced property for this value of ∆.
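Definition 2.13 is straightforward to test numerically. The sketch below is our illustrative code (exponents are reduced mod ∆ using ω_∆^∆ = 1, which keeps the floating-point arithmetic stable):

```python
import cmath

def is_balanced(zs, delta, s=0):
    """Definition 2.13: zs (a multiset of points of [n]^d, given as tuples) is
    Delta-balanced in coordinate s if for every r = 1, ..., Delta - 1 at least
    a 49/100 fraction of the points omega_Delta^(r * z_s) have Re <= 0."""
    omega_delta = cmath.exp(2j * cmath.pi / delta)
    for r in range(1, delta):
        pts = [omega_delta ** ((r * z[s]) % delta) for z in zs]  # omega_Delta^Delta = 1
        if sum(1 for u in pts if u.real <= 0) < 0.49 * len(pts):
            return False
    return True
```

For instance, a set whose coordinate values only hit even residues mod 4 fails the test at r = 2, since all points ω_4^{2·z_s} collapse to 1.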
3 The algorithm and proof overview
In this section we state our algorithm and give an outline of the analysis. The formal proofs are then presented
in the rest of the paper (the organization of the rest of the paper is presented in section 4). Our algorithm
(Algorithm 2), at a high level, proceeds as follows.
Measuring x̂. The algorithm starts by taking measurements of the signal in lines 5-16. Note that the algorithm selects O(log log N ) hashings Hr = (πr , B, F ), r = 1, . . . , O(log log N ), where the πr are selected uniformly at random, and for each r selects a set Ar ⊆ [n]d × [n]d of size O(log log N ) that determines locations to access in the frequency domain. The signal x̂ is accessed via the function HashToBins (see Lemma 2.9 above for its properties). The function HashToBins accesses filtered versions of x̂ shifted by elements of a randomly selected set (the number of shifts is O(log N/ log log N )). These shifts are useful for locating 'heavy' elements from the output of HashToBins. Note that since each hashing takes O(B) = O(k) samples, the total sample complexity of the measurement step is O(k log N log log N ). This is the dominant contribution to sample complexity, but it is not the only one. The other contribution of O(k log N log log N ) comes from invocations of EstimateValues from our ℓ1-SNR reduction loop (see below). The loop goes over O(log R∗ ) = O(log N ) iterations, and in each iteration EstimateValues uses O(log log N ) fresh hash functions to keep the number of false positives and the estimation error small.
The location algorithm is Algorithm 1. Our main tool for bounding the performance of LocateSignal is Theorem 3.1, stated below. Theorem 3.1 applies to the following setting. Fix a set S ⊆ [n]d and a set of hashings H1 , . . . , Hrmax that encode signal measurement patterns, and let S∗ ⊆ S denote the set of elements
of S that are not isolated with respect to most of these hashings. Theorem 3.1 shows that for any signal x and
partially recovered signal χ, if L denotes the output list of an invocation of L OCATE S IGNAL on the pair (x, χ)
with measurements given by H1 , . . . , Hrmax and a set of random shifts, then the ℓ1 norm of elements of the
residual (x − χ)S that are not discovered by L OCATE S IGNAL can be bounded by a function of the amount of
ℓ1 mass of the residual that fell outside of the ‘good’ set S \ S ∗ , plus the ‘noise level’ µ ≥ ||x[n]d \S ||∞ times k.
If we think of applying Theorem 3.1 iteratively, we intuitively get that the fixed set of measurements given by hashings H1 , . . . , Hr allows us to always reduce the ℓ1 norm of the residual x′ = x − χ on the 'good' set S \ S∗ to about the amount of mass that is located outside of this good set (this is exactly how we use LocateSignal in our signal-to-noise ratio reduction loop below). In section 6 we prove
Theorem 3.1. For any constant C′ > 0 there exist absolute constants C1 , C2 , C3 > 0 such that for any x, χ ∈ C^N , x′ = x − χ, any integer k ≥ 1 and any S ⊆ [n]d such that ||x_{[n]d \ S}||∞ ≤ C′µ, where µ = ||x_{[n]d \ [k]}||2 / √k, the following conditions hold if ||x′||∞ /µ = N^{O(1)} .
Let πr = (Σr , qr ), r = 1, . . . , rmax denote permutations, and let Hr = (πr , B, F ), F ≥ 2d, F = Θ(d), where B ≥ (2π)^{4d·F} k/α^d for α ∈ (0, 1) smaller than a constant. Let S∗ ⊆ S denote the set of elements that are not isolated with respect to at least a √α fraction of hashings {Hr }. Then if additionally for every s ∈ [1 : d] the sets Ar ⋆ (1, es ) are balanced in coordinate s (as per Definition 2.13) for all r = 1, . . . , rmax , and rmax , cmax ≥ (C1 /√α) log log N , then
L := ∪_{r=1}^{rmax} LocateSignal(χ, k, {m(x̂, Hr , a ⋆ (1, w))}_{r=1,...,rmax , a∈Ar , w∈W})
satisfies
||x′_{S \ S∗ \ L}||1 ≤ (C2 α)^{d/2} ||x′_S||1 + C3^d (||χ_{[n]d \ S}||1 + ||x′_{S∗}||1 ) + 4µ|S|.
Reducing signal to noise ratio. Once the samples have been taken, the algorithm proceeds to the signal-to-noise (SNR) reduction loop (lines 17-23). The objective of this loop is to reduce the mass of the top (about k) elements in the residual signal to roughly the noise level µ · k (once this is done, we run a 'cleanup' primitive, referred to as RecoverAtConstantSNR, to complete the recovery process – see below). Specifically, we define the set S of 'head elements' in the original signal x as
S = {i ∈ [n]d : |xi | > µ},   (4)
where µ^2 = Err_k^2(x)/k is the average tail noise level. Note that we have |S| ≤ 2k. Indeed, if |S| > 2k, then more than k elements of S would belong to the tail, amounting to more than µ^2 · k = Err_k^2(x) tail mass. Ideally, we would like this loop to construct an approximation χ^{(T)} to x supported only on S such that ||(x − χ^{(T)})_S||1 = O(µk), i.e. the ℓ1-SNR of the residual signal on the set S of heavy elements is reduced to a constant. As some false positives will unfortunately occur throughout the execution of our algorithm due to the weaker sublinear-time location and estimation primitives that we use, our SNR reduction loop instead constructs an approximation χ^{(T)} to x with the somewhat weaker properties that
||(x − χ^{(T)})_S||1 + ||χ^{(T)}_{[n]d \ S}||1 = O(µk) and ||χ^{(T)}||0 ≪ k.   (5)
Thus, we reduce the ℓ1-SNR on the set S of 'head' elements to a constant, and at the same time do not introduce too many spurious coefficients (i.e. false positives) outside S; moreover, these coefficients do not contribute much ℓ1 mass. The SNR reduction loop itself consists of repeated alternating invocations of two primitives, namely ReduceL1Norm and ReduceInfNorm. Of these two the former can be viewed as performing most of the reduction, and ReduceInfNorm is naturally viewed as performing a 'cleanup' phase to fix inefficiencies of ReduceL1Norm that are due to the small number of hash functions (only O(log log N ) as opposed to
O(log N ) in [IK14]) that we are allowed to use, as well as some mistakes that our sublinear-runtime location and estimation primitives used in ReduceL1Norm might make.
Algorithm 1 Location primitive: given a set of measurements corresponding to a single hash function, returns a list of elements in [n]d , one per hash bucket
1: procedure LocateSignal(χ, H, {m(x̂, H, a ⋆ (1, w))}_{a∈A, w∈W})   ⊲ H = (π, B, F ), B = b^d
2:   Let x′ := x − χ. Compute {m(x̂′, H, a ⋆ (1, w))}_{a∈A, w∈W} using Corollary 10.2 and HashToBins.
3:   L ← ∅
4:   for j ∈ [b]^d do   ⊲ Loop over all hash buckets, indexed by j ∈ [b]^d
5:     f ← 0^d
6:     for s = 1 to d do   ⊲ Recovering each of d coordinates separately
7:       ∆ ← 2^{⌊(1/2) log2 log2 n⌋}
8:       for g = 1 to log_∆ n − 1 do
9:         w ← n∆^{−g} · e_s   ⊲ Note that w ∈ W
10:        if there exists a unique r ∈ [0 : ∆ − 1] such that
             | ω_∆^{−r·β_s} · ω^{−(n∆^{−g} f_s)·β_s} · m_j(x̂′, H, a ⋆ (1, w)) / m_j(x̂′, H, a ⋆ (1, 0)) − 1 | < 1/3
           for at least a 3/5 fraction of a = (α, β) ∈ A
           then f ← f + ∆^{g−1} · r · e_s else return FAIL
11:      end for
12:    end for
13:    L ← L ∪ {Σ^{−1} f}   ⊲ Add recovered element to output list
14:  end for
15:  return L
16: end procedure
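To make the digit-by-digit recovery in the inner loop of Algorithm 1 concrete, here is a noiseless simulation (our illustrative code, not the paper's: it assumes n is a power of ∆, odd values of β, and ξ = 1, i.e. no head or tail noise, and it runs all log_∆ n digit rounds so the full coordinate is recovered). It reconstructs a coordinate q_s of Σi base ∆ from the ideal measurement ratios ω^{n∆^{−g} β_s q_s}:

```python
import cmath

def decode_coordinate(qs, n, delta, betas):
    """Noiseless sketch of the per-coordinate loop of LocateSignal: recover qs
    digit by digit, base delta, from the phases omega^(n delta^-g beta qs) that
    the measurement ratios would equal in the absence of noise. betas plays
    the role of the beta components of a in A."""
    omega = cmath.exp(2j * cmath.pi / n)
    omega_delta = cmath.exp(2j * cmath.pi / delta)
    f = 0
    g = 1
    while delta ** g <= n:  # log_delta(n) digit rounds
        votes = []
        for beta in betas:
            ratio = omega ** ((n // delta ** g) * beta * qs)  # ideal measurement ratio
            # find a digit r with |omega_delta^(-r beta) omega^(-(n delta^-g f) beta) ratio - 1| < 1/3
            for r in range(delta):
                test = omega_delta ** (-r * beta) * omega ** (-(n // delta ** g) * f * beta) * ratio
                if abs(test - 1) < 1 / 3:
                    votes.append(r)
                    break
        r0 = max(set(votes), key=votes.count)  # majority vote over beta values
        f += delta ** (g - 1) * r0
        g += 1
    return f
```

With odd β the passing digit is unique, because β is then invertible mod ∆ when ∆ is a power of two; in the real algorithm the balancedness of A ⋆ (0, e_s) plays this role.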
ReduceL1Norm is presented as Algorithm 3 below. The algorithm performs O(log log N ) rounds of the following process: first, run LocateSignal on the current residual signal, then estimate the values of the elements that belong to the list L output by LocateSignal, and only keep those that are above a certain threshold (see the threshold (1/10000)·2^{−t} ν + 4µ in the call to EstimateValues in line 9 of Algorithm 3). This thresholding operation is crucial, and allows us to control the number of false positives. In fact, this is very similar to the approach of [IK14] of recovering elements starting from the largest. The only differences are that (a) our 'reliability threshold' is dictated by the ℓ1 norm of the residual rather than the ℓ∞ norm, as in [IK14], and (b) some false positives can still occur due to our weaker estimation primitives. Our main tool for formally stating the effect of ReduceL1Norm is Lemma 3.2 below. Intuitively, the lemma shows that ReduceL1Norm reduces the ℓ1 norm of the head elements of the input signal x − χ by a polylogarithmic factor, and does not introduce too many new spurious elements (false positives) in the process. The introduced spurious elements, if any, do not contribute much ℓ1 mass to the head of the signal. Formally, we show in section 7.1
Lemma 3.2. For any x ∈ C^N , any integer k ≥ 1, B ≥ (2π)^{4d·F} · k/α^d for α ∈ (0, 1] smaller than an absolute constant and F ≥ 2d, F = Θ(d), the following conditions hold for the set S := {i ∈ [n]d : |xi | > µ}, where µ^2 := ||x_{[n]d \ [k]}||2^2 /k. Suppose that ||x||∞ /µ = N^{O(1)} .
For any sequence of hashings Hr = (πr , B, F ), r = 1, . . . , rmax , if S∗ ⊆ S denotes the set of elements of S that are not isolated with respect to at least a √α fraction of the hashings Hr , r = 1, . . . , rmax , then for any χ ∈ C^{[n]^d} , x′ := x − χ, if ν ≥ (log^4 N )µ is a parameter such that
A. ||(x − χ)_S||1 ≤ (ν + 20µ)k;
B. ||χ_{[n]d \ S}||0 ≤ k/log^{19} N ;
C. ||(x − χ)_{S∗}||1 + ||χ_{[n]d \ S}||1 ≤ (ν/log^4 N )·k,
the following conditions hold.
If parameters rmax , cmax are chosen to be at least (C1 /√α) log log N , where C1 is the constant from Theorem 3.1, and measurements are taken as in Algorithm 2, then the output χ′ of the call
ReduceL1Norm(χ, k, {m(x̂, Hr , a ⋆ (1, w))}_{r=1,...,rmax , a∈Ar , w∈W} , 4µ(log^4 n)^{T−t} , µ)
satisfies
1. ||(x′ − χ′)_S||1 ≤ (1/log^4 N )·νk + 20µk   (ℓ1 norm of head elements is reduced by ≈ log^4 N factor)
2. ||(χ + χ′)_{[n]d \ S}||0 ≤ ||χ_{[n]d \ S}||0 + (1/log^{20} N )·k   (few spurious coefficients are introduced)
3. ||(x′ − χ′)_{S∗}||1 + ||(χ + χ′)_{[n]d \ S}||1 ≤ ||x′_{S∗}||1 + ||χ_{[n]d \ S}||1 + (1/log^{20} N )·νk   (ℓ1 norm of spurious coefficients does not grow fast)
with probability at least 1 − 1/log^2 N over the randomness used to take measurements m and by calls to EstimateValues. The number of samples used is bounded by 2^{O(d^2)} k(log log N )^2 , and the runtime is bounded by 2^{O(d^2)} k log^{d+2} N .
Equipped with Lemma 3.2 as well as its counterpart Lemma 8.1, which bounds the performance of ReduceInfNorm (see section 8.1), we are able to prove that the SNR reduction loop indeed achieves its goal, namely (5). Formally, we prove in section 7.2
Theorem 3.3. For any x ∈ C^N , any integer k ≥ 1, if µ^2 = Err_k^2(x)/k and R∗ ≥ ||x||∞ /µ = N^{O(1)} , the following conditions hold for the set S := {i ∈ [n]d : |xi | > µ} ⊆ [n]d .
Then the SNR reduction loop of Algorithm 2 (lines 19-25) returns χ^{(T)} such that
||(x − χ^{(T)})_S||1 ≲ µk   (ℓ1-SNR on head elements is constant)
||χ^{(T)}_{[n]d \ S}||1 ≲ µk   (spurious elements contribute little in ℓ1 norm)
||χ^{(T)}_{[n]d \ S}||0 ≲ k/log^{19} N   (small number of spurious elements have been introduced)
with probability at least 1 − 1/log N over the internal randomness used by Algorithm 2. The sample complexity is 2^{O(d^2)} k log N (log log N ). The runtime is bounded by 2^{O(d^2)} k log^{d+3} N .
Recovery at constant ℓ1-SNR. Once (5) has been achieved, we run the RecoverAtConstantSNR primitive (Algorithm 5) on the residual signal. Adding the correction χ′ that it outputs to the output χ^{(T)} of the SNR reduction loop gives the final output of the algorithm. We prove in section 8.2
Lemma 3.4. For any ǫ > 0, x̂, χ ∈ C^N , x′ = x − χ and any integer k ≥ 1, if ||x′_{[2k]}||1 ≤ O(||x_{[n]d \ [k]}||2 √k) and ||x′_{[n]d \ [2k]}||2^2 ≤ ||x_{[n]d \ [k]}||2^2 , the following conditions hold. If ||x||∞ /µ = N^{O(1)} , then the output χ′ of RecoverAtConstantSNR(x̂, χ, 2k, ǫ) satisfies
||x′ − χ′||2^2 ≤ (1 + O(ǫ))||x_{[n]d \ [k]}||2^2
with at least 99/100 probability over its internal randomness. The sample complexity is 2^{O(d^2)} (1/ǫ)k log N , and the runtime complexity is at most 2^{O(d^2)} (1/ǫ)k log^{d+1} N .
We give the intuition behind the proof here, as the argument is somewhat more delicate than the analysis of RecoverAtConstSNR in [IKP14], due to the ℓ1-SNR, rather than ℓ2-SNR, assumption. Specifically, if instead of ||(x − χ)_{[2k]}||1 ≤ O(µk) we had ||(x − χ)_{[2k]}||2^2 ≤ O(µ^2 k), then it would be essentially sufficient to note that after a single hashing into about k/(ǫα) buckets for a constant α ∈ (0, 1), every element i ∈ [2k] is recovered with probability at least 1 − O(ǫα), say, as it is enough to (on average) recover all but about an ǫ fraction of coefficients. This would not be sufficient here since we only have a bound on the ℓ1 norm of the residual, and hence some elements can contribute much more ℓ2 norm than others. However, we are able to show that the probability that an element x′_i of the residual signal is not recovered is bounded by O(αǫµ^2/|x′_i|^2 + αǫµ/|x′_i|), where the first term corresponds to the contribution of tail noise and the second corresponds to the head elements. This bound implies that the total expected ℓ2^2 mass in the elements that are not recovered is upper bounded by
Σ_{i∈[2k]} |x′_i|^2 · O(αǫµ^2/|x′_i|^2 + αǫµ/|x′_i|) ≤ O(ǫµ^2 k + ǫµ Σ_{i∈[2k]} |x′_i|) = O(ǫµ^2 k),
giving the result.
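This bound is easy to sanity-check numerically. The sketch below is our illustration (the constant hidden in the O(·) is taken to be 1, which is an assumption made purely for the example):

```python
def unrecovered_l2_mass(xs, mu, eps, alpha=1.0):
    """Expected l2^2 mass of unrecovered elements when each element x'_i fails
    to be recovered with probability min(1, alpha*eps*mu^2/|x'_i|^2 + alpha*eps*mu/|x'_i|).
    xs: magnitudes |x'_i| of the residual on the head set."""
    return sum(x ** 2 * min(1.0, alpha * eps * mu ** 2 / x ** 2 + alpha * eps * mu / x)
               for x in xs)
```

For any residual with Σ|x′_i| ≤ µk this stays below ǫµ^2·(2k) + ǫµ·Σ|x′_i| = O(ǫµ^2 k), matching the display above even when the ℓ2 mass is concentrated on a few large entries.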
Finally, putting the results above together, we prove in section 7.3
Theorem 3.5. For any ǫ > 0, x ∈ C^{[n]^d} and any integer k ≥ 1, if R∗ ≥ ||x||∞ /µ = N^{O(1)} , µ^2 = O(||x_{[n]d \ [k]}||2^2 /k) and α > 0 is smaller than an absolute constant, SparseFFT(x̂, k, ǫ, R∗ , µ) solves the ℓ2/ℓ2 sparse recovery problem using 2^{O(d^2)} (k log N log log N + (1/ǫ)k log N ) samples and 2^{O(d^2)} (1/ǫ)k log^{d+3} N time with at least 98/100 success probability.
4 Organization
The rest of the paper is organized as follows. In section 5 we set up notation necessary for the analysis of LocateSignal, and specifically for a proof of Theorem 3.1, as well as prove some basic claims. In section 6 we prove Theorem 3.1. In section 7 we prove performance guarantees for ReduceL1Norm (Lemma 3.2), then combine them with Lemma 8.1 to prove that the main loop in Algorithm 2 reduces the ℓ1 norm of the head elements. We then conclude with a proof of correctness for Algorithm 2. Section 8.1 is devoted to analyzing the ReduceInfNorm procedure, and section 8.2 is devoted to analyzing the RecoverAtConstantSNR procedure. Some useful lemmas are gathered in section 9, and section 10 describes the algorithm for the semi-equispaced Fourier transform that we use to update our samples with the residual signal. Appendix A contains
proofs omitted from the main body of the paper.
5 Analysis of LocateSignal: main definitions and basic claims
In this section we state our main signal location primitive, LocateSignal (Algorithm 1). Given a sequence of measurements {m(x̂, Hr , a ⋆ (1, w))}_{a∈Ar , w∈W} , r = 1, . . . , rmax , a signal x̂ ∈ C^{[n]^d} and a partially recovered signal χ ∈ C^{[n]^d} , LocateSignal outputs a list of locations L ⊆ [n]d that, as we show below in Theorem 3.1 (see section 6), contains the elements of x that contribute most of its ℓ1 mass. An important feature of LocateSignal is that it is an entirely deterministic procedure, giving recovery guarantees for any signal x and any partially recovered signal χ. As Theorem 3.1 shows, however, these guarantees are strongest when most of the mass of the residual x − χ resides on elements in [n]d that are isolated with respect to most hashings H1 , . . . , Hrmax used for measurements. This flexibility is crucial for our analysis, and is exactly what allows us to reuse measurements and thereby achieve near-optimal sample complexity.
In the rest of this section we first state Algorithm 1, and then derive a useful characterization of the elements i of the input signal (x − χ)i that are successfully located by LocateSignal. The main result of this section is Corollary 5.2. This comes down to bounding, for a given input signal x and partially recovered signal χ, the expected ℓ1 norm of the noise contributed to the process of locating heavy hitters in a call to LocateSignal(x̂, χ, H, {m(x̂, H, a ⋆ (1, w))}_{a∈A, w∈W}) by (a) the tail of the original signal x (tail noise e^{tail}) and (b)
Algorithm 2 SparseFFT(x̂, k, ǫ, R∗ , µ)
1: procedure SparseFFT(x̂, k, ǫ, R∗ , µ)
2:   χ^{(0)} ← 0   ⊲ in C^n
3:   T ← log_{log^4 N} R∗
4:   F ← 2d
5:   B ← (2π)^{4d·F} · k/α^d , α > 0 a sufficiently small constant
6:   rmax ← (C/√α) log log N , cmax ← (C/√α) log log N for a sufficiently large constant C > 0
7:   W ← {0^d}, ∆ ← 2^{⌊(1/2) log2 log2 n⌋}   ⊲ 0^d is the zero vector in dimension d
8:   for g = 1 to ⌈log_∆ n⌉ do
9:     W ← W ∪ ∪_{s=1}^{d} {n∆^{−g} · e_s}   ⊲ e_s is the unit vector in direction s
10:  end for
11:  G ← filter with B buckets and sharpness F , as per Lemma 2.3
12:  for r = 1 to rmax do   ⊲ Samples that will be used for location
13:    Choose Σr ∈ Md×d , qr ∈ [n]^d uniformly at random, let πr := (Σr , qr ) and let Hr := (πr , B, F )
14:    Let Ar ← C log log N elements of [n]^d × [n]^d sampled uniformly at random with replacement
15:    for w ∈ W do
16:      m(x̂, Hr , a ⋆ (1, w)) ← HashToBins(x̂, 0, (Hr , a ⋆ (1, w))) for all a ∈ Ar
17:    end for
18:  end for
19:  for t = 0, 1, . . . , T − 1 do
20:    χ′ ← ReduceL1Norm(χ^{(t)}, k, {m(x̂, Hr , a ⋆ (1, w))}_{r=1,...,rmax , a∈Ar , w∈W} , 4µ(log^4 n)^{T−t} , µ)
21:      ⊲ Reduce ℓ1 norm of dominant elements in the residual signal
22:    ν′ ← (log^4 N )(4µ(log^4 N )^{T−(t+1)} + 20µ)   ⊲ Threshold
23:    χ′′ ← ReduceInfNorm(x̂, χ^{(t)} + χ′, 4k/(log^4 N ), ν′, ν′)
24:      ⊲ Reduce ℓ∞ norm of spurious elements introduced by ReduceL1Norm
25:    χ^{(t+1)} ← χ^{(t)} + χ′ + χ′′
26:  end for
27:  χ′ ← RecoverAtConstantSNR(x̂, χ^{(T)}, 2k, ǫ)
28:  return χ^{(T)} + χ′
29: end procedure
Algorithm 3 ReduceL1Norm(x̂, χ, k, χ^{(t)}, k, {m(x̂, Hr , a ⋆ (1, w))}_{r=1,...,rmax , a∈Ar , w∈W} , ν, µ)
1: procedure ReduceL1Norm(x̂, χ, k, χ^{(t)}, k, {m(x̂, Hr , a ⋆ (1, w))}_{r=1,...,rmax , a∈Ar , w∈W} , ν, µ)
2:   χ^{(0)} ← 0   ⊲ in C^n
3:   B ← (2π)^{4d·F} · k/α^d
4:   for t = 0 to log2 (log^4 N ) do
5:     for r = 1 to rmax do
6:       Lr ← LocateSignal(χ + χ^{(t)}, k, {m(x̂, Hr , a ⋆ (1, w))}_{r=1,...,rmax , a∈Ar , w∈W})
7:     end for
8:     L ← ∪_{r=1}^{rmax} Lr
9:     χ′ ← EstimateValues(x̂, χ + χ^{(t)}, L, 4k, 1, (1/1000)·ν2^{−t} + 4µ, C(log log N + d^2 + log(B/k)))
10:    χ^{(t+1)} ← χ^{(t)} + χ′
11:  end for
12:  return χ + χ^{(T)}
13: end procedure
the heavy hitters and false positives (heavy hitter noise e^{head}). It is useful to note that unlike in [IK14], we cannot expect the tail of the signal to not change, but rather need to control this change.
In what follows we derive useful conditions under which an element i ∈ [n]d is identified by LocateSignal. Let S ⊆ [n]d be any set of size at most 2k, and let µ be such that ||x_{[n]d \ S}||∞ ≤ µ. Note that this fits the definition of S given in (4) (but other instantiations are possible, and will be used later in section 8.2).
Consider a call to
LocateSignal(χ, H, {m(x̂, H, a ⋆ (1, w))}_{a∈A, w∈W}).
For each a ∈ A and fixed w ∈ W we let z := a ⋆ (1, w) ∈ [n]d to simplify notation. The measurement vectors m := m(x̂′, H, z) computed in LocateSignal satisfy, for every i ∈ S (by Lemma 9.2),
m_{h(i)} = Σ_{j∈[n]d} G_{o_i(j)} x′_j ω^{z^T Σj} + ∆_{h(i),z} ,
where ∆ corresponds to polynomially small estimation noise due to approximate computation of the Fourier transform, and the filter G_{o_i(j)} is the filter corresponding to hashing H. In particular, for each hashing H and parameter a ∈ [n]d one has
G^{−1}_{o_i(i)} m_{h(i)} ω^{−z^T Σi} = x′_i + G^{−1}_{o_i(i)} Σ_{j∈[n]d \ {i}} G_{o_i(j)} x′_j ω^{z^T Σ(j−i)} + G^{−1}_{o_i(i)} ∆_{h(i),z} ω^{−z^T Σi} .
It is useful to represent the residual signal x′ as a sum of three terms: x′ = (x − χ)_S − χ_{[n]d \ S} + x_{[n]d \ S} , where the first term is the residual signal coming from the 'heavy' elements in S, the second corresponds to false positives, or spurious elements discovered and erroneously subtracted by the algorithm, and the third corresponds to the tail of the signal. Similarly, we bound the noise contributed by the first two (head elements and false positives) and the third (tail noise) parts of the residual signal to the location process separately. For each i ∈ S we write
G^{−1}_{o_i(i)} m_{h(i)} ω^{−z^T Σi} = x′_i
+ G^{−1}_{o_i(i)} · ( Σ_{j∈S\{i}} G_{o_i(j)} x′_j ω^{z^T Σ(j−i)} − Σ_{j∈[n]d \ S} G_{o_i(j)} χ_j ω^{z^T Σ(j−i)} )   (head elements and false positives)
+ G^{−1}_{o_i(i)} · Σ_{j∈[n]d \ S} G_{o_i(j)} x_j ω^{z^T Σ(j−i)}   (tail noise)
+ G^{−1}_{o_i(i)} · ∆_{h(i)} ω^{−z^T Σi} .   (6)
Noise from heavy hitters. The first term in (6) corresponds to noise from (x − χ)_{S\{i}} − χ_{[n]d \ (S\{i})} , i.e. noise from heavy hitters and false positives. For every i ∈ S and hashing H we let
e^{head}_i(H, x, χ) := G^{−1}_{o_i(i)} · Σ_{j∈S\{i}} G_{o_i(j)} |y_j| , where y = (x − χ)_S − χ_{[n]d \ S} .   (7)
We thus get that e^{head}_i(H, x, χ) upper bounds the absolute value of the first error term in (6). Note that G ≥ 0 by Lemma 2.3 as long as F is even, which is the setting that we are in. If e^{head}_i(H, x, χ) is large, LocateSignal may not be able to locate i using measurements of the residual signal x − χ taken with hashing H. However, the noise in other hashings may be smaller, allowing recovery. In order to reflect this fact we define, for a sequence of hashings H1 , . . . , Hr and a signal y ∈ C^{[n]^d} ,
e^{head}_i({Hr }, x, χ) := quant^{1/5}_r e^{head}_i(Hr , x, χ),   (8)
where for a list of reals u1 , . . . , us and a number f ∈ (0, 1) we let quant^f (u1 , . . . , us ) denote the ⌈f · s⌉-th largest element of u1 , . . . , us .
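In code, quant^f is a one-liner (our illustrative version):

```python
import math

def quant(f, us):
    """quant^f(u_1, ..., u_s): the ceil(f*s)-th largest element of the list."""
    return sorted(us, reverse=True)[math.ceil(f * len(us)) - 1]
```

So quant^{1/5} over the hashings returns a value exceeded by fewer than a 1/5 fraction of the entries; a small fraction of unusually noisy hashings therefore cannot inflate it.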
Tail noise. To capture the second term in (6) (corresponding to tail noise), we define, for any i ∈ S, z ∈ [n]d , w ∈ W, permutation π = (Σ, q) and hashing H = (π, B, F ),
e^{tail}_i(H, z, x) := G^{−1}_{o_i(i)} · | Σ_{j∈[n]d \ S} G_{o_i(j)} x_j ω^{z^T Σ(j−i)} | .   (9)
With this definition in place, e^{tail}_i(H, z, x) upper bounds the second term in (6). As our algorithm uses several values of a ∈ Ar ⊆ [n]d × [n]d to perform location, a more robust version of e^{tail}_i(H, z) will be useful. To that effect we let, for any Z ⊆ [n]d (we will later use Z = Ar ⋆ (1, w) for various w ∈ W),
e^{tail}_i(H, Z, x) := quant^{1/5}_{z∈Z} G^{−1}_{o_i(i)} · | Σ_{j∈[n]d \ S} G_{o_i(j)} x_j ω^{z^T Σ(j−i)} | .   (10)
Note that the algorithm first selects the sets Ar ⊆ [n]d × [n]d , and then accesses the signal at locations Ar ⋆ (1, w), w ∈ W.
The definition of e^{tail}_i(H, A ⋆ (1, w), x) for a fixed w ∈ W allows us to capture the amount of noise that our measurements that use H suffer from when locating a specific set of bits of Σi. Since the algorithm requires all w ∈ W to be not too noisy in order to succeed (see precondition 2 of Lemma 5.1), it is convenient to introduce notation that captures this. We define
e^{tail}_i(H, A, x) := 40µ_{H,i}(x) + Σ_{w∈W} | e^{tail}_i(H, A ⋆ (1, w), x) − 40µ_{H,i}(x) |_+ ,   (11)
where for any η ∈ R one has |η|_+ = η if η > 0 and |η|_+ = 0 otherwise.
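Definition (11) is a clipped sum. In code (our illustrative version, with µ_{H,i}(x) passed in as a plain number):

```python
def etail_combined(mu_Hi, etails_per_w):
    """e_i^tail(H, A, x) as in (11): 40*mu plus the positive parts of the
    per-shift excesses e_i^tail(H, A*(1,w), x) - 40*mu."""
    return 40.0 * mu_Hi + sum(max(0.0, e - 40.0 * mu_Hi) for e in etails_per_w)
```

Only shifts whose noise exceeds the 40µ baseline contribute, so a single noisy w cannot be averaged away by the quiet ones.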
The following definition is useful for bounding the norm of elements i ∈ S that are not discovered by several calls to LocateSignal on a sequence of hashings {Hr }. For a sequence of measurement patterns {Hr , Ar } we let
e^{tail}_i({Hr , Ar }, x) := quant^{1/5}_r e^{tail}_i(Hr , Ar , x).   (12)
Finally, for any S ⊆ [n]d we let
e^{head}_S(·) := Σ_{i∈S} e^{head}_i(·) and e^{tail}_S(·) := Σ_{i∈S} e^{tail}_i(·),
where · stands for any set of parameters as above.
Equipped with the definitions above, we now prove the following lemma, which yields sufficient conditions for the recovery of elements i ∈ S in LocateSignal in terms of e^{head} and e^{tail} .
Lemma 5.1. Let H = (π, B, F ) be a hashing, and let A ⊆ [n]d × [n]d . Then for every S ⊆ [n]d and for every x, χ ∈ C^{[n]^d} and x′ = x − χ, the following conditions hold. Let L denote the output of
LocateSignal(χ, H, {m(x̂, H, a ⋆ (1, w))}_{a∈A, w∈W}).
Then for any i ∈ S such that |x′_i| > N^{−Ω(c)} , if
1. e^{head}_i(H, x′) < |x′_i|/20;
2. e^{tail}_i(H, A ⋆ (1, w), x′) < |x′_i|/20 for all w ∈ W;
3. for every s ∈ [1 : d] the set A ⋆ (0, es ) is balanced in coordinate s (as per Definition 2.13),
then i ∈ L. The time taken by the invocation of LocateSignal is O(B · log^{d+1} N ).
Proof. We show that each coordinate s = 1, . . . , d of Σi is successfully recovered in LocateSignal. Let q = Σi for convenience. Fix s ∈ [1 : d]. We show by induction on g = 0, . . . , log_∆ n − 1 that after the g-th iteration of lines 6-10 of Algorithm 1 we have that f_s coincides with q_s on the bottom g · log2 ∆ bits, i.e. f_s − q_s = 0 mod ∆^g (note that we trivially have f_s < ∆^g after iteration g).
The base of the induction is trivial and is provided by g = 0. We now show the inductive step. Assume
by the inductive hypothesis that fs − qs = 0 mod ∆g−1 , so that qs = fs + ∆g−1 (r0 + ∆r1 + ∆2 r2 + . . .)
for some sequence r0 , r1 , . . ., 0 ≤ rj < ∆. Thus, (r0 , r1 , . . .) is the expansion of (qs − fs )/∆g−1 base ∆, and
r0 is the least significant digit. We now show that r0 is the unique value of r that satisfies the conditions of
lines 8-10 of Algorithm 1.
First, by (6) together with (7) and (9), one has for each a ∈ A and w ∈ W
| m_{h(i)}(x̂′, H, a ⋆ (1, w)) − G_{o_i(i)} x′_i ω^{(a⋆(1,w))^T q} | ≤ e^{head}_i(H, x, χ) + e^{tail}_i(H, a ⋆ (1, w), x) + N^{−Ω(c)} .
Since 0 ∈ W, we also have for all a ∈ A
| m_{h(i)}(x̂′, H, a ⋆ (1, 0)) − G_{o_i(i)} x′_i ω^{(a⋆(1,0))^T q} | ≤ e^{head}_i(H, x, χ) + e^{tail}_i(H, a ⋆ (1, 0), x) + N^{−Ω(c)} ,
where the N^{−Ω(c)} terms correspond to polynomially small error from the approximate computation of the Fourier transform via Lemma 10.2.
Let j := h(i). We will show that i is recovered from bucket j. The bounds above imply that
m_j(x̂′, H, a ⋆ (1, w)) / m_j(x̂′, H, a ⋆ (1, 0)) = ( x′_i ω^{(a⋆(1,w))^T q} + E′ ) / ( x′_i ω^{(a⋆(1,0))^T q} + E′′ )   (13)
for some E′, E′′ satisfying |E′| ≤ e^{head}_i(H, x, χ) + e^{tail}_i(H, a ⋆ (1, w), x) + N^{−Ω(c)} and |E′′| ≤ e^{head}_i(H, x, χ) + e^{tail}_i(H, a ⋆ (1, 0), x) + N^{−Ω(c)} . For all but a 1/5 fraction of a ∈ A we have by the definition of e^{tail} (see (10)) that both
e^{tail}_i(H, a ⋆ (1, w), x) ≤ e^{tail}_i(H, A ⋆ (1, w), x) ≤ |x′_i|/20   (14)
and
e^{tail}_i(H, a ⋆ (1, 0), x) ≤ e^{tail}_i(H, A ⋆ (1, 0), x) ≤ |x′_i|/20.   (15)
In particular, we can rewrite (13) as
m_j(x̂′, H, a ⋆ (1, w)) / m_j(x̂′, H, a ⋆ (1, 0)) = ( ω^{(a⋆(1,w))^T q} / ω^{(a⋆(1,0))^T q} ) · ξ = ω^{(a⋆(1,w))^T q − (a⋆(1,0))^T q} · ξ = ω^{(a⋆(0,w))^T q} · ξ,   (16)
where
ξ = ( 1 + ω^{−(a⋆(1,w))^T q} E′/x′_i ) / ( 1 + ω^{−(a⋆(1,0))^T q} E′′/x′_i ).
Let A∗ ⊆ A denote the set of values of a ∈ A that satisfy the bounds (14) and (15) above. We thus have for a ∈ A∗, combining (16) with assumptions 1-2 of the lemma, that
|E′|/|x′_i| ≤ 2/20 + N^{−Ω(c)} ≤ 1/8 and |E′′|/|x′_i| ≤ 2/20 + N^{−Ω(c)} ≤ 1/8   (17)
for sufficiently large N , where O(c) is the word precision of our semi-equispaced Fourier transform computation. Note that we used the assumption that |x′_i| ≥ N^{−Ω(c)} .
Writing a = (α, β) ∈ [n]d × [n]d , we have by (16) that m_j(x̂′, H, a ⋆ (1, w)) / m_j(x̂′, H, a ⋆ (1, 0)) = ω^{((α,β)⋆(0,w))^T q} · ξ, and since w^T q = n∆^{−g} q_s when w = n∆^{−g} e_s (as in line 8 of Algorithm 1), we get
m_j(x̂′, H, a ⋆ (1, w)) / m_j(x̂′, H, a ⋆ (1, 0)) = ω^{(a⋆(0,w))^T q} · ξ = ω^{n∆^{−g} β_s q_s} · ξ = ω^{n∆^{−g} β_s q_s} + ω^{n∆^{−g} β_s q_s}(ξ − 1).
We analyze the first term now, and will show later that the second term is small. Since q_s = f_s + ∆^{g−1}(r_0 + ∆r_1 + ∆^2 r_2 + . . .) by the inductive hypothesis, we have, substituting the first term above into the expression in line 10 of Algorithm 1,
ω_∆^{−r·β_s} · ω^{−n∆^{−g} f_s·β_s} · ω^{n∆^{−g} β_s q_s} = ω_∆^{−r·β_s} · ω^{n∆^{−g}(q_s − f_s)·β_s}
= ω_∆^{−r·β_s} · ω^{n∆^{−g}(∆^{g−1}(r_0 + ∆r_1 + ∆^2 r_2 + . . .))·β_s}
= ω_∆^{−r·β_s} · ω^{(n/∆)·(r_0 + ∆r_1 + ∆^2 r_2 + . . .)·β_s}
= ω_∆^{−r·β_s} · ω_∆^{r_0·β_s} = ω_∆^{(−r+r_0)·β_s} .
We used the fact that ω^{n/∆} = e^{2πi(n/∆)/n} = e^{2πi/∆} = ω_∆ and (ω_∆)^∆ = 1. Thus, we have
ω_∆^{−r·β_s} · ω^{−(n∆^{−g} f_s)·β_s} · m_j(x̂′, H, a ⋆ (1, w)) / m_j(x̂′, H, a ⋆ (1, 0)) = ω_∆^{(−r+r_0)·β_s} + ω_∆^{(−r+r_0)·β_s}(ξ − 1).   (18)
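The telescoping identity above can be checked numerically. The snippet below (with hypothetical concrete values of n, ∆, g, f_s and the digits r_0, r_1 chosen by us purely for illustration) verifies that the expression in line 10 of Algorithm 1 collapses to ω_∆^{(−r+r_0)·β_s} for every candidate digit r:

```python
import cmath

# Hypothetical concrete instance: n a power of 2, delta | n, digits r0, r1.
n, delta, g = 256, 4, 2
fs, r0, r1, betas = 3, 2, 1, [1, 3, 5]
qs = fs + delta ** (g - 1) * (r0 + delta * r1)  # q_s = f_s + Delta^{g-1}(r0 + Delta r1)
omega = cmath.exp(2j * cmath.pi / n)        # n-th root of unity
omega_d = cmath.exp(2j * cmath.pi / delta)  # Delta-th root of unity
for r in range(delta):
    for bs in betas:
        # left-hand side of (18) with xi = 1 (noiseless case)
        lhs = omega_d ** (-r * bs) * omega ** (-(n // delta ** g) * fs * bs) \
              * omega ** ((n // delta ** g) * bs * qs)
        rhs = omega_d ** ((r0 - r) * bs)
        assert abs(lhs - rhs) < 1e-9
```

In particular the product equals 1 exactly when r = r_0 (for β_s invertible mod ∆), which is what the threshold test in line 10 detects.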
(−r+r )·β
s
0
= 1, and it remains to note that
We now consider two cases. First suppose that r = r0 . Then ω∆
1+1/8
∗
by (17) we have |ξ − 1| ≤ 1−1/8 − 1 ≤ 2/7 < 1/3. Thus every a ∈ A passes the test in line 9 of Algorithm 1.
Since |A∗ | ≥ (4/5)|A| > (3/5)|A| by the argument above, we have that r0 passes the test in line 9. It remains
to show that r0 is the unique element in 0, . . . , ∆ − 1 that passes this test.
Now suppose that r 6= r0 . Then by the assumption that A ⋆ (0, es ) is balanced (assumption 3 of the lemma)
(−r+r0 )·βs
have negative real part. This means that for at least 49/100 of a ∈ A we
at least 49/100 fraction of ω∆
have using triangle inequality
i
h
(−r+r0 )·βs
(−r+r0 )·βs
(−r+r0 )·βs
(−r+r0 )·βs
(ξ − 1)
− 1 − ω∆
(ξ − 1) − 1 ≥ ω∆
+ ω∆
ω∆
≥ |i − 1| − 1/3
√
≥ 2 − 1/3 > 1/3,
and hence the condition in line 9 of Algorithm 1 is not satisfied for any r 6= r0 . This shows that location is
successful and completes the proof of correctness.
Runtime bounds follow by noting that L OCATE S IGNAL recovers d coordinates with log n bits per coordinate. Coordinates are recovered in batches of log ∆ bits, and the time taken is bounded by B · d(log∆ n)∆ ≤
B(log N )3/2 . Updating the measurements using semi-equispaced FFT takes B logd+1 N time.
We also get an immediate corollary of Lemma 5.1. The corollary is crucial to our proof of Theorem 3.1 (the main result about efficiency of LOCATESIGNAL) in the next section.
Corollary 5.2. For any integer $r_{\max} \ge 1$, for any sequence of $r_{\max}$ hashings $H_r = (\pi_r, B, R)$, $r \in [1 : r_{\max}]$, and evaluation points $A_r \subseteq [n]^d \times [n]^d$, for every $S \subseteq [n]^d$ and for every $x, \chi \in \mathbb{C}^{[n]^d}$, $x' := x - \chi$, the following conditions hold. If for each $r \in [1 : r_{\max}]$, $L_r \subseteq [n]^d$ denotes the output of LOCATESIGNAL$(\widehat{x}, \chi, H_r, \{m(\widehat{x}, H_r, a \star (1, w))\}_{a \in A_r, w \in \mathcal{W}})$, $L = \bigcup_{r=1}^{r_{\max}} L_r$, and the sets $A_r \star (0, w)$ are balanced for all $w \in \mathcal{W}$ and $r \in [1 : r_{\max}]$, then
$$||x'_{S \setminus L}||_1 \le 20||e^{head}_S(\{H_r\}, x, \chi)||_1 + 20||e^{tail}_S(\{H_r, A_r\}, x)||_1 + |S| \cdot N^{-\Omega(c)}. \qquad (*)$$
Furthermore, every element $i \in S$ such that
$$|x'_i| > 20\left(e^{head}_i(\{H_r\}, x, \chi) + e^{tail}_i(\{H_r, A_r\}, x)\right) + N^{-\Omega(c)} \qquad (**)$$
belongs to $L$.
Proof. Suppose that $i \in S$ fails to be located in any of the $r_{\max}$ calls, and $|x'_i| \ge N^{-\Omega(c)}$. By Lemma 5.1 and the assumption that $A_r \star (0, w)$ is balanced for all $w \in \mathcal{W}$ and $r \in [1 : r_{\max}]$, this means that for at least one half of the values $r \in [1 : r_{\max}]$ either (A) $e^{head}_i(H_r, x, \chi) \ge |x'_i|/20$ or (B) $e^{tail}_i(H_r, A_r \star (1, w), x) > |x'_i|/20$ for at least one $w \in \mathcal{W}$. We consider these two cases separately.

Case (A). In this case we have $e^{head}_i(H_r, x, \chi) \ge |x'_i|/20$ for at least one half of $r \in [1 : r_{\max}]$, so in particular $e^{head}_i(\{H_r\}, x, \chi) = \mathrm{quant}^{1/5}_r\, e^{head}_i(H_r, x, \chi) \ge |x'_i|/20$.

Case (B). Suppose that $e^{tail}_i(H_r, A_r \star (1, w), x) > |x'_i|/20$ for some $w = w(r) \in \mathcal{W}$ for at least one half of $r \in [1 : r_{\max}]$ (denote this set by $Q \subseteq [1 : r_{\max}]$). We then have
$$\begin{aligned}
e^{tail}_i(\{H_r, A_r\}, x) &= \mathrm{quant}^{1/5}_{r \in [1 : r_{\max}]}\, e^{tail}_i(H_r, A_r, x) \\
&= \mathrm{quant}^{1/5}_{r \in [1 : r_{\max}]} \left[ 40\mu_{H_r, i}(x) + \sum_{w \in \mathcal{W}} \left( e^{tail}_i(H_r, A_r \star (1, w), x) - 40\mu_{H_r, i}(x) \right)_+ \right] \\
&\ge \min_{r \in Q} \left( 40\mu_{H_r, i}(x) + \left( e^{tail}_i(H_r, A_r \star (1, w(r)), x) - 40\mu_{H_r, i}(x) \right)_+ \right) \\
&\ge \min_{r \in Q} e^{tail}_i(H_r, A_r \star (1, w(r)), x) \\
&\ge |x'_i|/20,
\end{aligned}$$
as required. This completes the proof of (*) as well as (**).
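The quantile aggregation used in both cases is simple to state in code. Below is a minimal sketch of a "top" $1/5$-quantile in the spirit of the $\mathrm{quant}^{1/5}$ operator; this is our own reading of the notation, and the function is illustrative, not the paper's code.

```python
import math

# quant_top(values): the ceil(r/5)-th largest of r values. A single corrupted
# repetition cannot move it, while if at least half the repetitions report an
# error >= T, the aggregate is >= T (the step used in Cases (A) and (B)).

def quant_top(values, q=1 / 5):
    s = sorted(values, reverse=True)
    k = max(1, math.ceil(q * len(s)))
    return s[k - 1]

# One huge outlier among 10 repetitions does not affect the aggregate:
assert quant_top([1] * 9 + [10**6]) == 1

# If at least half of the repetitions exceed a threshold, so does the aggregate:
assert quant_top([0] * 4 + [100] * 6) == 100
```

This robustness to a small fraction of bad repetitions is exactly why the proof only needs the bad events to occur for "at least one half" of the values of $r$.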
6 Analysis of LOCATESIGNAL: bounding the ℓ1 norm of undiscovered elements

The main result of this section is Theorem 3.1, which is our main tool for showing efficiency of LOCATESIGNAL. Theorem 3.1 applies to the following setting. Fix a set $S \subseteq [n]^d$ and a set of hashings $H_1, \ldots, H_{r_{\max}}$, and let $S^* \subseteq S$ denote the set of elements of $S$ that are not isolated with respect to most of these hashings $H_1, \ldots, H_{r_{\max}}$. Theorem 3.1 shows that for any signal $x$ and partially recovered signal $\chi$, if $L$ denotes the output list of an invocation of LOCATESIGNAL on the pair $(x, \chi)$ with hashings $H_1, \ldots, H_{r_{\max}}$, then the ℓ1 norm of the elements of the residual $(x - \chi)_S$ that are not discovered by LOCATESIGNAL can be bounded by a function of the amount of ℓ1 mass of the residual that fell outside of the 'good' set $S \setminus S^*$, plus the 'noise level' $\mu \ge ||x_{[n]^d \setminus S}||_\infty$ times $k$.

If we think of applying Theorem 3.1 iteratively, we intuitively get that the fixed set of measurements with hashings $\{H_r\}$ allows us to always reduce the ℓ1 norm of the residual $x' = x - \chi$ on the 'good' set $S \setminus S^*$ to about the amount of mass that is located outside of this good set.
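This iterative picture can be sanity-checked with a toy recurrence (our own illustration, not the paper's algorithm): if each round contracts the head residual by a constant factor and adds back the mass outside the good set, the residual settles at roughly that outside mass.

```python
# Toy recurrence for iterating a bound of the form
#   residual <- gamma * residual + outside,
# where gamma plays the role of a contraction factor like (C2*alpha)^{d/2} < 1
# and "outside" stands for the mass outside the good set plus the noise floor.

gamma, outside = 0.1, 2.0
residual = 1000.0
for _ in range(30):
    residual = gamma * residual + outside

# The iteration converges to the fixed point outside / (1 - gamma),
# i.e. a constant multiple of the "outside" mass, regardless of the start:
assert abs(residual - outside / (1 - gamma)) < 1e-9
```

The fixed point $\text{outside}/(1 - \gamma)$ is within a constant factor of the outside mass whenever $\gamma$ is bounded away from $1$, matching the informal statement above.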
Theorem 3.1. There exist absolute constants $C_1, C_2, C_3 > 0$ such that for any $x, \chi \in \mathbb{C}^N$ and residual signal $x' = x - \chi$ the following conditions hold. Let $S \subseteq [n]^d$, $|S| \le 2k$, be such that $||x_{[n]^d \setminus S}||_\infty \le \mu$. Suppose that $||x||_\infty/\mu \le N^{O(1)}$. Let $B \ge (2\pi)^{4d \cdot F} \cdot k/\alpha^d$. Let $S^* \subseteq S$ denote the set of elements that are not isolated with respect to at least a $\sqrt{\alpha}$ fraction of the hashings $\{H_r\}_{r=1}^{r_{\max}}$. Suppose that for every $s \in [1 : d]$ the sets $A_r \star (0, e_s)$ are balanced (as per Definition 2.13), $r = 1, \ldots, r_{\max}$, and the exponent $F$ of the filter $G$ is even and satisfies $F \ge 2d$. Let
$$L = \bigcup_{r=1}^{r_{\max}} \text{LOCATESIGNAL}\left( \chi, H_r, \{m(\widehat{x}, H_r, a \star (1, w))\}_{a \in A_r, w \in \mathcal{W}_r} \right).$$
Then if $r_{\max}, c_{\max} \ge (C_1/\sqrt{\alpha}) \log\log N$, one has
$$||x'_{S \setminus S^* \setminus L}||_1 \le (C_2 \alpha)^{d/2} ||x'_S||_1 + C_3^d (||\chi_{[n]^d \setminus S}||_1 + ||x'_{S^*}||_1) + 4\mu|S|.$$
As we will show later, Theorem 3.1 can be used to show that (assuming perfect estimation) invoking LOCATESIGNAL repeatedly allows one to reduce the ℓ1 norm of the head elements down to essentially
$$||x'_{S^*}||_1 + ||\chi_{[n]^d \setminus S}||_1,$$
i.e. the ℓ1 norm of the elements that are not well isolated, plus that of the new elements created by the process due to false positives in location. In what follows we derive bounds on $||e^{head}||_1$ (in Section 6.1) and $||e^{tail}||_1$ (in Section 6.2) that lead to a proof of Theorem 3.1.
6.1 Bounding noise from heavy hitters

We first derive bounds on the noise from heavy hitters that a single hashing $H$ results in, i.e. $e^{head}(H, x, \chi)$ (see Lemma 6.1), and then use these bounds to bound $e^{head}(\{H_r\}, x, \chi)$ (see Lemma 6.3). These bounds, together with upper bounds on the contribution of tail noise from the next section, then lead to a proof of Theorem 3.1.
Lemma 6.1. Let $x, \chi \in \mathbb{C}^N$, $x' = x - \chi$. Let $S \subseteq [n]^d$, $|S| \le 2k$, be such that $||x_{[n]^d \setminus S}||_\infty \le \mu$. Suppose that $||x||_\infty/\mu \le N^{O(1)}$. Let $B \ge (2\pi)^{4d \cdot F} \cdot k/\alpha^d$. Let $\pi = (\Sigma, q)$ be a permutation, and let $H = (\pi, B, F)$, $F \ge 2d$, be a hashing into $B$ buckets with filter $G$ of sharpness $F$. Let $S^*_H \subseteq S$ denote the set of elements $i \in S$ that are not isolated under $H$. Then one has, for $e^{head}$ defined with respect to $S$,
$$||e^{head}_{S \setminus S^*_H}(H, x, \chi)||_1 \le 2^{O(d)} \alpha^{d/2} ||x'_{S \setminus S^*_H}||_1 + (2\pi)^{d \cdot F} \cdot 2^{O(d)} (||x'_{S^*_H}||_1 + ||\chi_{[n]^d \setminus S}||_1).$$
Furthermore, if $\chi_{[n]^d \setminus S} = 0$ and $S^*_H = \emptyset$, then one has $||e^{head}_S(H, x, \chi)||_\infty \le 2^{O(d)} \alpha^{d/2} ||x'_S||_\infty$.
Proof. By (7), for $i \in S \setminus S^*_H$,
$$\begin{aligned}
e^{head}_i(H, x') &= |G^{-1}_{o_i(i)}| \cdot \sum_{j \in S \setminus S^*_H \setminus \{i\}} |G_{o_i(j)}| |x'_j| && \text{(isolated head elements)} \\
&\quad + |G^{-1}_{o_i(i)}| \cdot \left( \sum_{j \in S^*_H} |G_{o_i(j)}| |x'_j| + \sum_{j \in [n]^d \setminus S} |G_{o_i(j)}| |\chi_j| \right) && \text{(non-isolated head elements and false positives)} \\
&= |G^{-1}_{o_i(i)}| \cdot (A_1(i) + A_2(i)). && (19)
\end{aligned}$$
Let $A_1 := \sum_{i \in S \setminus S^*_H} A_1(i)$ and $A_2 := \sum_{i \in S \setminus S^*_H} A_2(i)$. We bound $A_1$ and $A_2$ separately.
Bounding $A_1$. We start with a convenient upper bound on $A_1$:
$$\begin{aligned}
A_1 &= \sum_{i \in S \setminus S^*_H} \; \sum_{j \in S \setminus S^*_H \setminus \{i\}} |G_{o_i(j)}| |x'_j| && \text{(recall that } o_i(j) = \pi(j) - (n/b)h(i)\text{)} \\
&= \sum_{t \ge 0} \sum_{i \in S \setminus S^*_H} \; \sum_{\substack{j \in S \setminus S^*_H \setminus \{i\} \text{ s.t.} \\ ||\pi(j) - \pi(i)||_\infty \in (n/b) \cdot [2^t - 1, 2^{t+1} - 1)}} |G_{o_i(j)}| |x'_j| && \text{(consider all scales } t \ge 0\text{)} \\
&\le \sum_{t \ge 0} \sum_{i \in S \setminus S^*_H} \left( \max_{||\pi(j) - \pi(i)||_\infty \ge (n/b) \cdot (2^t - 1)} G_{o_i(j)} \right) \cdot \sum_{\substack{j \in S \setminus S^*_H \setminus \{i\} \text{ s.t.} \\ ||\pi(j) - \pi(i)||_\infty \le (n/b) \cdot (2^{t+1} - 1)}} |x'_j| \\
&= \sum_{j \in S \setminus S^*_H} |x'_j| \cdot \sum_{t \ge 0} \left( \max_{||\pi(j) - \pi(i)||_\infty \ge (n/b) \cdot (2^t - 1)} G_{o_i(j)} \right) \cdot \left| \left\{ i \in S \setminus S^*_H \setminus \{j\} \text{ s.t. } ||\pi(j) - \pi(i)||_\infty \le (n/b) \cdot (2^{t+1} - 1) \right\} \right| && (20)
\end{aligned}$$
Note that in the first line we summed, over all isolated $i \in S \setminus S^*_H$, the contributions of all other $j \in S$ to the noise in their buckets. We need to bound the first line in terms of $||x'_{S \setminus S^*_H}||_1$. For that, we first classified all $j \in S \setminus S^*_H$ according to the $\ell_\infty$ distance from $i$ to $j$ (in the second line), then upper bounded the value of the filter $G_{o_i(j)}$ based on the distance $||\pi(i) - \pi(j)||_\infty$, and finally changed the order of summation to ensure that the outer summation is a weighted sum of absolute values of $x'_j$ over all $j \in S \setminus S^*_H$.³ In order to upper bound $A_1$ it now suffices to upper bound all factors multiplying $x'_j$ in the last line of the equation above. As we now show, a strong bound follows from the isolation properties of $i$.
We start by upper bounding $G$ using Lemma 2.3, (2). We first note that by the triangle inequality
$$||\pi(j) - (n/b)h(i)||_\infty \ge ||\pi(j) - \pi(i)||_\infty - ||\pi(i) - (n/b)h(i)||_\infty \ge (n/b)(2^t - 1) - (n/b) = (n/b)(2^t - 2).$$
The rhs is positive for all $t \ge 2$, and for such $t$ satisfies $2^t - 2 \ge 2^{t-2}$. We hence get for all $t \ge 2$
$$\max_{||\pi(j) - \pi(i)||_\infty \ge (n/b) \cdot (2^t - 1)} G_{o_i(j)} \le \left( \frac{2}{1 + 2^{t-2}} \right)^F \le 2^{-(t-3)F}. \qquad (21)$$
We also have the bound $||G||_\infty \le 1$ from Lemma 2.3, (3). It remains to bound the last term on the rhs of the last line in (20). We use the fact that for a pair $i, j$ such that $||\pi(j) - \pi(i)||_\infty \le (n/b) \cdot (2^{t+1} - 1)$ we have by the triangle inequality
$$||\pi(j) - (n/b)h(i)||_\infty \le ||\pi(j) - \pi(i)||_\infty + ||\pi(i) - (n/b)h(i)||_\infty \le (n/b)(2^{t+1} - 1) + (n/b) = (n/b)2^{t+1}.$$
Equipped with this bound, we now conclude that
$$\left| \left\{ i \in S \setminus S^*_H \setminus \{j\} \text{ s.t. } ||\pi(j) - \pi(i)||_\infty \le (n/b) \cdot (2^{t+1} - 1) \right\} \right| = \left| \pi(S \setminus \{i\}) \cap B^\infty_{(n/b)h(i)}((n/b) \cdot 2^{t+1}) \right| \le (2\pi)^{-d \cdot F} \cdot \alpha^{d/2} 2^{(t+2)d+1} \cdot 2^t, \qquad (22)$$
where we used the assumption that the elements $i \in S \setminus S^*_H$ are isolated (see Definition 2.10). We thus get for any $j \in S \setminus S^*_H$
$$\begin{aligned}
\eta_j &:= \sum_{t \ge 0} \left( \max_{||\pi(j) - \pi(i)||_\infty \ge (n/b) \cdot (2^t - 1)} G_{o_i(j)} \right) \cdot \left| \left\{ i \in S \setminus S^*_H \setminus \{j\} \text{ s.t. } ||\pi(j) - \pi(i)||_\infty \le (n/b) \cdot (2^{t+1} - 1) \right\} \right| \\
&\le \sum_{t \ge 0} \left( (2\pi)^{-d \cdot F} \cdot \alpha^{d/2} 2^{(t+2)d+1} \cdot 2^t \right) \min\{1, 2^{-(t-3)F}\} \\
&\le (2\pi)^{-d \cdot F} \cdot \alpha^{d/2} 2^{2d+1} \sum_{t \ge 0} 2^{t(d+1)} \cdot \min\{1, 2^{-(t-3)F}\}.
\end{aligned}$$
³ We note here that we started by summing over $i$ first and then over $j$, but switched the order of summation to the opposite in the last line. This is because the quantity $G_{o_i(j)}$, which determines the contribution of $j \in S$ to the estimation error of $i \in S$, is not symmetric in $i$ and $j$. Indeed, even though $G$ itself is symmetric around the origin, we have $o_i(j) = \pi(j) - (n/b)h(i) \ne o_j(i)$.
We now note that
$$\sum_{t \ge 0} 2^{t(d+1)} \cdot \min\{1, 2^{-(t-3)F}\} = 1 + 2^{d+1} + 2^{2(d+1)} + 2^{3(d+1)} \sum_{t \ge 3} 2^{(t-3)(d+1-F)} \le 1 + 2^{d+1} + 2^{2(d+1)} + 2^{3(d+1)+1} \le 2^{4(d+1)+1},$$
since $F \ge 2d$ by assumption of the lemma, and hence for all $j \in S \setminus S^*_H$ one has $\eta_j \le (2\pi)^{-d \cdot F} \cdot 2^{O(d)} \alpha^{d/2}$. Combining the estimates above, we now get
$$A_1 \le \sum_{j \in S \setminus S^*_H} |x'_j| \cdot \eta_j \le ||x'_S||_1 (2\pi)^{-d \cdot F} \cdot 2^{O(d)} \alpha^{d/2},$$
as required. The $\ell_\infty$ bound for the case when $\chi_{[n]^d \setminus S} = 0$ follows in a similar manner and is hence omitted.
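The geometric sum controlling $\eta_j$ can be checked numerically. A small sanity check of ours (note that convergence of the tail needs $F \ge d + 2$, which $F \ge 2d$ provides for $d \ge 2$; the truncation point is arbitrary since the terms decay geometrically once $t \ge 3$):

```python
from fractions import Fraction

def scale_sum(d, F, tmax=100):
    # sum_{t=0}^{tmax-1} 2^{t(d+1)} * min(1, 2^{-(t-3)F}), in exact arithmetic
    total = Fraction(0)
    for t in range(tmax):
        term = Fraction(2) ** (t * (d + 1))
        if t > 3:                       # min(...) = 1 for t <= 3
            term *= Fraction(2) ** (-(t - 3) * F)
        total += term
    return total

# Verify the claimed bound 2^{4(d+1)+1} for F = 2d:
for d in range(2, 7):
    assert scale_sum(d, 2 * d) <= 2 ** (4 * (d + 1) + 1)
```

Exact rational arithmetic avoids the float overflow that the huge early terms $2^{t(d+1)}$ would otherwise cause.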
We now turn to bounding $A_2$. The bound that we get here is weaker, since $\chi_{[n]^d \setminus S}$ is an adversarially placed signal and we do not have isolation properties with respect to it, resulting in a weaker bound on (the equivalent of) $\eta_j$ than we had for $j \in S \setminus S^*_H$. We let $y := x'_{S^*_H} - \chi_{[n]^d \setminus S}$ to simplify notation. We have, as in (20),
$$A_2 \le \sum_{j \in [n]^d} |y_j| \cdot \kappa_j,$$
where
$$\kappa_j = \sum_{t \ge 0} \left( \max_{||\pi(j) - \pi(i)||_\infty \ge (n/b) \cdot (2^t - 1)} G_{o_i(j)} \right) \cdot \left| \left\{ i \in S \setminus S^*_H \setminus \{j\} \text{ s.t. } ||\pi(j) - \pi(i)||_\infty \le (n/b) \cdot (2^{t+1} - 1) \right\} \right|.$$
The first term can be upper bounded as before. For the second term, we note that every pair of points $i_1, i_2 \in S \setminus S^*_H$ counted above satisfies, by the triangle inequality,
$$||\pi(i_1) - \pi(i_2)||_\infty \le ||\pi(i_1) - \pi(j)||_\infty + ||\pi(j) - \pi(i_2)||_\infty \le (n/b) \cdot (2^{t+2} - 2) \le (n/b) \cdot 2^{t+2}.$$
Since both $i_1$ and $i_2$ are isolated under $\pi$, this means that
$$\left| \left\{ i \in S \setminus S^*_H \setminus \{j\} \text{ s.t. } ||\pi(j) - \pi(i)||_\infty \le (n/b) \cdot (2^{t+1} - 1) \right\} \right| \le (2\pi)^{-d \cdot F} \cdot \alpha^{d/2} 2^{(t+3)d} \cdot 2^{t+2} + 1,$$
where we used the bound from Definition 2.10 for $i$, but counted the point $i$ itself (this is what makes the bound on $\kappa_j$ weaker than the bound on $\eta_j$). A similar calculation to the one above for $A_1$ now gives
$$\begin{aligned}
\kappa_j &= \sum_{t \ge 0} \left( \max_{||\pi(j) - \pi(i)||_\infty \ge (n/b) \cdot (2^t - 1)} G_{o_i(j)} \right) \cdot \left| \left\{ i \in S \setminus S^*_H \setminus \{j\} \text{ s.t. } ||\pi(j) - \pi(i)||_\infty \le (n/b) \cdot (2^{t+1} - 1) \right\} \right| \\
&\le \sum_{t \ge 0} \left( (2\pi)^{-d \cdot F} \cdot \alpha^{d/2} 2^{(t+3)d} \cdot 2^{t+2} + 1 \right) \min\{1, 2^{-(t-3)F}\} \\
&\le 2^{O(d)} \left( (2\pi)^{-d \cdot F} \cdot \alpha^{d/2} + 1 \right) = 2^{O(d)}.
\end{aligned}$$
We thus have
$$A_2 \le \sum_{j \in [n]^d} |y_j| \kappa_j \le 2^{O(d)} ||y||_1.$$
Plugging our bounds on $A_1$ and $A_2$ into (19), we get
$$e^{head}_i(H, x, \chi) \le |G^{-1}_{o_i(i)}| \cdot (A_1 + A_2) \le |G^{-1}_{o_i(i)}| \left( 2^{O(d)} (2\pi)^{-d \cdot F} \cdot \alpha^{d/2} ||x'_S||_1 + 2^{O(d)} ||y||_1 \right) \le 2^{O(d)} \alpha^{d/2} ||x'_S||_1 + (2\pi)^{d \cdot F} \cdot 2^{O(d)} ||y||_1,$$
as required.
Remark 6.2. The second bound of this lemma will be useful later in Section 8.1 for analyzing REDUCEINFNORM.
We now bound the final error induced by head elements, i.e. $e^{head}(\{H_r\}, x, \chi)$:

Lemma 6.3. Let $x, \chi \in \mathbb{C}^{[n]^d}$, $x' = x - \chi$. Let $S \subseteq [n]^d$, $|S| \le 2k$, be such that $||x_{[n]^d \setminus S}||_\infty \le \mu$. Suppose that $||x||_\infty/\mu \le N^{O(1)}$. Let $B \ge (2\pi)^{4d \cdot F} \cdot k/\alpha^d$. Let $\{\pi_r\}_{r=1}^{r_{\max}}$ be a set of permutations, and let $H_r = (\pi_r, B, F)$, $F \ge 2d$, be hashings into $B$ buckets with filter $G$ of sharpness $F$. Let $S^*$ denote the set of elements $i \in S$ that are not isolated under at least a $\sqrt{\alpha}$ fraction of the $H_r$. Then one has, for $e^{head}$ defined with respect to $S$,
$$||e^{head}_{S \setminus S^*}(\{H_r\}, x, \chi)||_1 \le 2^{O(d)} \alpha^{d/2} ||x'_S||_1 + (2\pi)^{d \cdot F} \cdot 2^{O(d)} ||\chi_{[n]^d \setminus S}||_1.$$
Furthermore, if $\chi_{[n]^d \setminus S} = 0$, then $||e^{head}_{S \setminus S^*}(\{H_r\}, x, \chi)||_\infty \le 2^{O(d)} \alpha^{d/2} ||x'_S||_\infty$.
Proof. Recall that by (8) one has, for each $i \in [n]^d$, $e^{head}_i(\{H_r\}, x, \chi) = \mathrm{quant}^{1/5}_{r \in [1 : r_{\max}]}\, e^{head}_i(H_r, x, \chi)$. This means that for each $i \in S \setminus S^*$ there exist at least $(1/5 - \sqrt{\alpha}) r_{\max}$ values of $r$ such that $i$ is isolated under $H_r$ and $e^{head}_i(H_r, x, \chi) \ge e^{head}_i(\{H_r\}, x, \chi)$, and hence
$$||e^{head}_{S \setminus S^*}(\{H_r\}, x, \chi)||_1 \le \frac{1}{(1/5 - \sqrt{\alpha}) r_{\max}} \sum_{r=1}^{r_{\max}} ||e^{head}_{S \setminus S^*_{H_r}}(H_r, x, \chi)||_1.$$
By Lemma 6.1 one has
$$||e^{head}_{S \setminus S^*_{H_r}}(H_r, x, \chi)||_1 \le 2^{O(d)} \alpha^{d/2} ||x'_S||_1 + (2\pi)^{d \cdot F} \cdot 2^{O(d)} ||\chi_{[n]^d \setminus S}||_1$$
for all $r$, implying that
$$||e^{head}_{S \setminus S^*}(\{H_r\}, x, \chi)||_1 \le \frac{1}{1/5 - \sqrt{\alpha}} \left( 2^{O(d)} \alpha^{d/2} ||x'_S||_1 + (2\pi)^{d \cdot F} \cdot 2^{O(d)} ||\chi_{[n]^d \setminus S}||_1 \right) \le 2^{O(d)} \alpha^{d/2} ||x'_S||_1 + (2\pi)^{d \cdot F} \cdot 2^{O(d)} ||\chi_{[n]^d \setminus S}||_1,$$
as required.

The proof of the second bound follows analogously using the $\ell_\infty$ bound from Lemma 6.1.
Remark 6.4. The second bound of this lemma will be useful later in Section 8.1 for analyzing REDUCEINFNORM.
6.2 Bounding the effect of tail noise

Lemma 6.5. For any constant $C' > 0$ there exists an absolute constant $C > 0$ such that for any $x \in \mathbb{C}^{[n]^d}$, any integer $k \ge 1$ and $S \subseteq [n]^d$ such that $||x_{[n]^d \setminus S}||_\infty \le C' ||x_{[n]^d \setminus [k]}||_2/\sqrt{k}$, for any integer $B \ge 1$ a power of $2^d$, the following conditions hold. If $(H, A)$ are random measurements as in Algorithm 2, $H = (\pi, B, F)$ satisfies $F \ge 2d$ and $||x_{[n]^d \setminus [k]}||_2 \ge N^{-\Omega(c)}$, where $O(c)$ is the word precision of our semi-equispaced Fourier transform computation, then for any $i \in [n]^d$ one has, for $e^{tail}$ defined with respect to $S$,
$$\mathbb{E}_{H,A}\left[ e^{tail}_i(H, A, x) \right] \le (2\pi)^{d \cdot F} \cdot C^d (40 + |\mathcal{W}| 2^{-\Omega(|A|)}) ||x_{[n]^d \setminus [k]}||_2/\sqrt{B}.$$
Proof. Recall that for any $H = (\pi, B, G)$, $a$, $w$ one has $(e^{tail}_i(H, a \star (1, w), x_{[n]^d \setminus [k]}))^2 = |u_i|^2$, where
$$u = \text{HASHTOBINS}(\widehat{x_{[n]^d \setminus S}}, 0, (H, a \star (1, w))).$$
Since the elements of $A$ are selected uniformly at random, we have for any $H$ and $w$ by Lemma 2.9, (3), since $a \star (1, w)$ is uniformly random in $[n]^d$, that
$$\mathbb{E}_a[(e^{tail}_i(H, a \star (1, w), x))^2] = \mathbb{E}_a\left[ |G^{-1}_{o_i(i)} \omega^{-(a \star (1, w))^T \Sigma i} u_{h(i)} - x_i|^2 \right] \le \mu^2_{H,i}(x) + N^{-\Omega(c)}, \qquad (23)$$
where $c > 0$ is the large constant that governs the precision of our Fourier transform computations. By Lemma 2.9, (2) applied to the pair $(\widehat{x_{[n]^d \setminus S}}, 0)$ there exists a constant $C > 0$ such that
$$\mathbb{E}_H[\mu^2_{H,i}] \le (2\pi)^{2d \cdot F} \cdot C^d ||x_{[n]^d \setminus S}||_2^2/B.$$
We would like to upper bound the rhs in terms of $||x_{[n]^d \setminus [k]}||_2^2$ (the tail energy), but this requires an argument since $S$ is not exactly the set of top $k$ elements of $x$. However, since $S$ contains the large coefficients of $x$, a bound is easy to obtain. Indeed, denoting the set of top $k$ coefficients of $x$ by $[k] \subseteq [n]^d$ as usual, we get
$$||x_{[n]^d \setminus S}||_2^2 \le ||x_{[n]^d \setminus (S \cup [k])}||_2^2 + ||x_{[k] \setminus S}||_2^2 \le ||x_{[n]^d \setminus [k]}||_2^2 + k \cdot ||x_{[k] \setminus S}||_\infty^2 \le ((C')^2 + 1) ||x_{[n]^d \setminus [k]}||_2^2.$$
Thus, we have
$$\mathbb{E}_H[\mu^2_{H,i}(x) + N^{-\Omega(c)}] \le (2\pi)^{2d \cdot F} \cdot ((C')^2 + 2) C^d ||x_{[n]^d \setminus [k]}||_2^2/B,$$
where we used the assumption that $||x_{[n]^d \setminus [k]}||_2 \ge N^{-\Omega(c)}$. We now get by Jensen's inequality
$$\mathbb{E}_H[\mu_{H,i}(x)] \le (2\pi)^{d \cdot F} \cdot (C'')^d ||x_{[n]^d \setminus [k]}||_2/\sqrt{B} \qquad (24)$$
for a constant $C'' > 0$.

Note that by (23), for each $i \in [n]^d$, hashing $H$, evaluation point $a$ and direction $w$ we have $\mathbb{E}_a[(e^{tail}_i(H, a \star (1, w), x))^2] \le (\mu_{H,i}(x))^2 + N^{-\Omega(c)}$. Applying Jensen's inequality, we hence get for any $H$ and $w \in \mathcal{W}$
$$\mathbb{E}_a[e^{tail}_i(H, a \star (1, w), x)] \le \mu_{H,i}(x). \qquad (25)$$
Applying Lemma 9.5 with $Y = e^{tail}_i(H, a \star (1, w), x)$ and $\gamma = 1/5$ (recall that the definition of $e^{tail}_i(H, z, x)$ involves a $1/5$-quantile over $A$) and using the previous bound, we get, for any fixed $H$ and $w \in \mathcal{W}$,
$$\mathbb{E}_A\left[ \left( e^{tail}_i(H, A \star (1, w), x) - 40 \cdot \mu_{H,i}(x) \right)_+ \right] \le \mu_{H,i}(x) \cdot 2^{-\Omega(|A|)}, \qquad (26)$$
and hence by a union bound over all $w \in \mathcal{W}$ we have
$$\mathbb{E}_A\left[ \sum_{w \in \mathcal{W}} \left( e^{tail}_i(H, A \star (1, w), x) - 40 \cdot \mu_{H,i}(x) \right)_+ \right] \le \mu_{H,i}(x) \cdot |\mathcal{W}| 2^{-\Omega(|A|)}.$$
Putting this together with (24), we get
$$\begin{aligned}
\mathbb{E}_{H,A}\left[ e^{tail}_i(H, A, x) \right] &= \mathbb{E}_H\left[ \mathbb{E}_A\left[ 40\mu_{H,i}(x) + \sum_{w \in \mathcal{W}} \left( e^{tail}_i(H, A \star (1, w), x) - 40 \cdot \mu_{H,i}(x) \right)_+ \right] \right] \\
&\le \mathbb{E}_H\left[ \mu_{H,i}(x)(40 + |\mathcal{W}| 2^{-\Omega(|A|)}) \right] \\
&\le (2\pi)^{d \cdot F} (C'')^d (40 + |\mathcal{W}| 2^{-\Omega(|A|)}) ||x_{[n]^d \setminus [k]}||_2/\sqrt{B},
\end{aligned}$$
as required.
Lemma 6.6. For any constant $C' > 0$ there exists an absolute constant $C > 0$ such that for any $x \in \mathbb{C}^{[n]^d}$, any integer $k \ge 1$ and $S \subseteq [n]^d$ such that $||x_{[n]^d \setminus S}||_\infty \le C' ||x_{[n]^d \setminus [k]}||_2/\sqrt{k}$, if $B \ge 1$, then the following conditions hold, for $e^{tail}$ defined with respect to $S$.

If hashings $H_r = (\pi_r, B, F)$, $F \ge 2d$, and sets $A_r$, $|A_r| \ge c_{\max}$, for $r = 1, \ldots, r_{\max}$ are chosen at random, then

(1) for every $i \in [n]^d$ one has
$$\mathbb{E}_{\{(H_r, A_r)\}}\left[ e^{tail}_i(\{H_r, A_r\}, x) \right] \le (2\pi)^{d \cdot F} C^d (40 + |\mathcal{W}| 2^{-\Omega(c_{\max})}) ||x_{[n]^d \setminus [k]}||_2/\sqrt{B};$$

(2) for every $i \in [n]^d$ one has
$$\Pr_{\{(H_r, A_r)\}}\left[ e^{tail}_i(\{H_r, A_r\}, x) > (2\pi)^{d \cdot F} C^d (40 + |\mathcal{W}| 2^{-\Omega(c_{\max})}) ||x_{[n]^d \setminus [k]}||_2/\sqrt{B} \right] = 2^{-\Omega(r_{\max})}$$
and
$$\mathbb{E}_{\{(H_r, A_r)\}}\left[ \left( e^{tail}_i(\{H_r, A_r\}, x) - (2\pi)^{d \cdot F} C^d (40 + |\mathcal{W}| 2^{-\Omega(c_{\max})}) ||x_{[n]^d \setminus [k]}||_2/\sqrt{B} \right)_+ \right] = 2^{-\Omega(r_{\max})} \cdot (2\pi)^{d \cdot F} C^d (40 + |\mathcal{W}| 2^{-\Omega(c_{\max})}) ||x_{[n]^d \setminus [k]}||_2/\sqrt{B}.$$

Proof. Follows by applying Lemma 9.5 with $Y = e^{tail}_i(H_r, A_r, x)$.
6.3 Putting it together

The bounds from the previous two sections yield a proof of Theorem 3.1, which we restate here for convenience of the reader:

Theorem 3.1. For any constant $C' > 0$ there exist absolute constants $C_1, C_2, C_3 > 0$ such that for any $x \in \mathbb{C}^{[n]^d}$, any integer $k \ge 1$ and any $S \subseteq [n]^d$ such that $||x_{[n]^d \setminus S}||_\infty \le C' \mu$, where $\mu = ||x_{[n]^d \setminus [k]}||_2/\sqrt{k}$, the following conditions hold.

Let $\pi_r = (\Sigma_r, q_r)$, $r = 1, \ldots, r_{\max}$, denote permutations, and let $H_r = (\pi_r, B, F)$, $F \ge 2d$, where $B \ge (2\pi)^{4d \cdot F} k/\alpha^d$ for $\alpha \in (0, 1)$ smaller than a constant. Let $S^* \subseteq S$ denote the set of elements that are not isolated with respect to at least a $\sqrt{\alpha}$ fraction of the hashings $\{H_r\}$. Then if $r_{\max}, c_{\max} \ge (C_1/\sqrt{\alpha}) \log\log N$, with probability at least $1 - 1/\log^2 N$ over the randomness of the measurements, for all $\chi \in \mathbb{C}^{[n]^d}$ such that $x' := x - \chi$ satisfies $||x'||_\infty/\mu \le N^{O(1)}$, the set
$$L := \bigcup_{r=1}^{r_{\max}} \text{LOCATESIGNAL}\left( \chi, k, \{m(\widehat{x}, H_r, a \star (1, w))\}_{r \in [1 : r_{\max}],\, a \in A_r,\, w \in \mathcal{W}} \right)$$
satisfies
$$||x'_{S \setminus S^* \setminus L}||_1 \le (C_2 \alpha)^{d/2} ||x'_S||_1 + C_3^d (||\chi_{[n]^d \setminus S}||_1 + ||x'_{S^*}||_1) + 4\mu|S|.$$
Proof. First note that with probability at least $1 - 1/(10 \log^2 N)$, for every $s \in [1 : d]$ the sets $A_r \star (0, e_s)$ are balanced (as per Definition 2.13) for all $r = 1, \ldots, r_{\max}$ and all $w \in \mathcal{W}$, by Claim 2.14.

By Corollary 5.2 applied with $S' = S \setminus S^*$ one has
$$||(x - \chi)_{(S \setminus S^*) \setminus L}||_1 \le 20 \cdot \left( ||e^{head}_{S \setminus S^*}(\{H_r\}, x')||_1 + ||e^{tail}_{S \setminus S^*}(\{H_r, A_r\}, x)||_1 \right) + ||x'||_\infty |S| \cdot N^{-\Omega(c)}.$$
We also have
$$||e^{head}_{S \setminus S^*}(\{H_r\}, x')||_1 \le 2^{O(d)} \alpha^{d/2} ||x'_S||_1 + (2\pi)^{d \cdot F} \cdot 2^{O(d)} ||\chi_{[n]^d \setminus S}||_1$$
by Lemma 6.3, and with probability at least $1 - 1/(10 \log^2 N)$
$$||e^{tail}_{S \setminus S^*}(\{H_r, A_r\}, x)||_1 \le (2\pi)^{d \cdot F} C^d (40 + |\mathcal{W}| 2^{-\Omega(c_{\max})}) ||x_{[n]^d \setminus [k]}||_2 |S|/\sqrt{B}$$
by Lemma 6.6. The rhs of the previous equation is bounded by $|S|\mu$ by the choice of $B$, as long as $\alpha$ is smaller than an absolute constant, as required. Putting these bounds together, using the fact that $|\mathcal{W}| \le \log N$ (so that $|\mathcal{W}| \cdot (2^{-\Omega(r_{\max})} + 2^{-\Omega(c_{\max})}) \le 1$), and taking a union bound over the failure events, we get the result.
7 Analysis of REDUCEL1NORM and SPARSEFFT

In this section we first give a correctness proof and runtime analysis for REDUCEL1NORM (Section 7.1), then analyze the SNR reduction loop in SPARSEFFT (Section 7.2), and finally prove correctness of SPARSEFFT and provide runtime bounds in Section 7.3.
7.1 Analysis of REDUCEL1NORM

The main result of this section is Lemma 3.2 (restated below). Intuitively, the lemma shows that REDUCEL1NORM reduces the ℓ1 norm of the head elements of the input signal $x - \chi$ by a polylogarithmic factor, and does not introduce too many new spurious elements (false positives) in the process. The introduced spurious elements, if any, do not contribute much ℓ1 mass to the head of the signal. Formally, we show
Lemma 3.2 (Restated). For any $x \in \mathbb{C}^N$, any integer $k \ge 1$, $B \ge (2\pi)^{4d \cdot F} \cdot k/\alpha^d$ for $\alpha \in (0, 1]$ smaller than an absolute constant and $F \ge 2d$, $F = \Theta(d)$, the following conditions hold for the set $S := \{i \in [n]^d : |x_i| > \mu\}$, where $\mu^2 \ge ||x_{[n]^d \setminus [k]}||_2^2/k$. Suppose that $||x||_\infty/\mu = N^{O(1)}$.

For any sequence of hashings $H_r = (\pi_r, B, F)$, $r = 1, \ldots, r_{\max}$, if $S^* \subseteq S$ denotes the set of elements of $S$ that are not isolated with respect to at least a $\sqrt{\alpha}$ fraction of the hashings $H_r$, $r = 1, \ldots, r_{\max}$, then for any $\chi \in \mathbb{C}^{[n]^d}$, $x' := x - \chi$, if $\nu \ge (\log^4 N)\mu$ is a parameter such that

A. $||(x - \chi)_S||_1 \le (\nu + 20\mu)k$;

B. $||\chi_{[n]^d \setminus S}||_0 \le \frac{1}{\log^{19} N} k$;

C. $||(x - \chi)_{S^*}||_1 + ||\chi_{[n]^d \setminus S}||_1 \le \frac{\nu}{\log^4 N} k$,

the following conditions hold.

If parameters $r_{\max}, c_{\max}$ are chosen to be at least $(C_1/\sqrt{\alpha}) \log\log N$, where $C_1$ is the constant from Theorem 3.1, and measurements are taken as in Algorithm 2, then the output $\chi'$ of the call
$$\text{REDUCEL1NORM}\left( \chi, k, \{m(\widehat{x}, H_r, a \star (1, w))\}_{r \in [1 : r_{\max}],\, a \in A_r,\, w \in \mathcal{W}}, 4\mu(\log^4 n)^{T-t}, \mu \right)$$
satisfies

1. $||(x' - \chi')_S||_1 \le \frac{1}{\log^4 N} \nu k + 20\mu k$ (the ℓ1 norm of head elements is reduced by a factor of about $\log^4 N$);

2. $||(\chi + \chi')_{[n]^d \setminus S}||_0 \le ||\chi_{[n]^d \setminus S}||_0 + \frac{1}{\log^{20} N} k$ (few spurious coefficients are introduced);

3. $||(x' - \chi')_{S^*}||_1 + ||(\chi + \chi')_{[n]^d \setminus S}||_1 \le ||x'_{S^*}||_1 + ||\chi_{[n]^d \setminus S}||_1 + \frac{1}{\log^{20} N} \nu k$ (the ℓ1 norm of spurious coefficients does not grow fast)

with probability at least $1 - 1/\log^2 N$ over the randomness used to take measurements $m$ and by calls to ESTIMATEVALUES. The number of samples used is bounded by $2^{O(d^2)} k (\log\log N)^2$, and the runtime is bounded by $2^{O(d^2)} k \log^{d+2} N$.
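The shape of the procedure that the lemma analyzes is a halving loop: in step $t$ the remaining head mass is roughly $2^{-t}\nu k$, a location/estimation round recovers most of it, and a pruning threshold of about $\frac{1}{1000}2^{-t}\nu + 4\mu$ keeps spurious coefficients out. A schematic sketch under these assumptions; the location and estimation steps are idealized stand-ins acting on a plain dictionary, not the paper's subroutines:

```python
# Schematic of the halving loop: locate candidates, estimate their residual
# values exactly (idealized), and only commit values above a shrinking
# pruning threshold so tail-level entries never enter the recovered signal.

def reduce_l1_norm(x, chi, nu, mu, T):
    chi = dict(chi)
    for t in range(T):
        residual = {i: x.get(i, 0) - chi.get(i, 0) for i in set(x) | set(chi)}
        located = [i for i, v in residual.items() if abs(v) > 0]
        threshold = nu * 2 ** (-t) / 1000 + 4 * mu       # pruning threshold
        for i in located:
            w = residual[i]                              # idealized estimate
            if abs(w) > threshold:
                chi[i] = chi.get(i, 0) + w
    return chi

x = {0: 50.0, 3: 9.0, 7: 0.5}
chi = reduce_l1_norm(x, {}, nu=40.0, mu=1.0, T=5)
# Everything above the final threshold is recovered; the tail-level entry at
# index 7 is never committed:
assert chi == {0: 50.0, 3: 9.0}
```

With real (noisy) location and estimation, the same control flow yields conclusions 1-3 of the lemma: the head residual halves each step while the threshold blocks most false positives.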
Before giving the proof of Lemma 3.2, we prove two simple supporting lemmas.
Lemma 7.1 (Few spurious elements are introduced in REDUCEL1NORM). For any $x \in \mathbb{C}^N$, any integer $k \ge 1$, $B \ge (2\pi)^{4d \cdot F} \cdot k/\alpha^d$ for $\alpha \in (0, 1]$ smaller than an absolute constant and $F \ge 2d$, $F = \Theta(d)$, the following conditions hold for the set $S := \{i \in [n]^d : |x_i| > \mu\}$, where $\mu^2 \ge ||x_{[n]^d \setminus [k]}||_2^2/k$.

For any sequence of hashings $H_r = (\pi_r, B, F)$, $r = 1, \ldots, r_{\max}$, if $S^* \subseteq S$ denotes the set of elements of $S$ that are not isolated with respect to at least a $\sqrt{\alpha}$ fraction of the hashings $H_r$, $r = 1, \ldots, r_{\max}$, then for any $\chi \in \mathbb{C}^{[n]^d}$, $x' := x - \chi$, the following conditions hold.

Consider the call
$$\text{REDUCEL1NORM}\left( \chi, k, \{m(\widehat{x}, H_r, a \star (1, w))\}_{r \in [1 : r_{\max}],\, a \in A_r,\, w \in \mathcal{W}}, 4\mu(\log^4 n)^{T-t}, \mu \right),$$
where we assume that measurements of $x$ are taken as in Algorithm 2. Denote, for each $t = 0, \ldots, \log_2(\log^4 N)$, the signal recovered by step $t$ in this call by $\chi^{(t)}$ (see Algorithm 3). There exists an absolute constant $C > 0$ such that if for a parameter $\nu \ge 2^t \mu$ at step $t$

A. $||(x' - \chi^{(t)})_S||_1 \le (2^{-t}\nu + 20\mu)k$;

B. $||(\chi + \chi^{(t)})_{[n]^d \setminus S}||_0 \le \frac{2}{\log^{19} N} k$;

C. $||(x' - \chi^{(t)})_{S^*}||_1 + ||(\chi + \chi^{(t)})_{[n]^d \setminus S}||_1 \le \frac{2\nu}{\log^4 N} k$,

then with probability at least $1 - (\log N)^{-3}$ over the randomness used in ESTIMATEVALUES at step $t$ one has
$$||(\chi + \chi^{(t+1)})_{[n]^d \setminus S}||_0 - ||(\chi + \chi^{(t)})_{[n]^d \setminus S}||_0 \le \frac{1}{\log^{21} N} k.$$
Proof. Recall that $L' \subseteq L$ is the list output by ESTIMATEVALUES. We let
$$L'' = \left\{ i \in L : |\chi'_i - x'_i| > \alpha^{1/2}(2^{-t}\nu + 20\mu) \right\}$$
denote the set of elements of $L$ that failed to be estimated to within an additive $\alpha^{1/2}(2^{-t}\nu + 20\mu)$ error term. For any element $i \in L$ we consider two cases, depending on whether $i \in L' \setminus L''$ or $i \in L''$.

Case 1: First suppose that $i \in L' \setminus L''$, i.e. $|x'_i - \chi'_i| < \alpha^{1/2}(2^{-t}\nu + 20\mu)$. Then, if $\alpha$ is smaller than an absolute constant, we have
$$|x'_i| > \frac{1}{1000}\nu 2^{-t} + 4\mu - \alpha^{1/2}(2^{-t}\nu + 20\mu) \ge 2\mu,$$
because only elements $i$ with $|\chi'_i| > \frac{1}{1000}\nu 2^{-t} + 4\mu$ are included in the set $L'$ in the call
$$\chi' \leftarrow \text{ESTIMATEVALUES}\left( x, \chi^{(t)}, L, k, \epsilon, C(\log\log N + d^2 + O(\log(B/k))), \frac{1}{1000}\nu 2^{-t} + 4\mu \right)$$
due to the pruning threshold of $\frac{1}{1000}\nu 2^{-t} + 4\mu$ passed to ESTIMATEVALUES in the last argument.

Since $||x_{[n]^d \setminus S}||_\infty \le \mu$ by definition of $S$, this means that either $i \in S$ or $i \in \mathrm{supp}\, \chi^{(t)}$. In both cases $i$ contributes at most $0$ to $||(\chi + \chi^{(t+1)})_{[n]^d \setminus S}||_0 - ||(\chi + \chi^{(t)})_{[n]^d \setminus S}||_0$.
Case 2: Now suppose that $i \in L''$, i.e. $|(x' - \chi')_i| \ge \alpha^{1/2}(2^{-t}\nu + 20\mu)$. In this case $i$ may contribute $1$ to $||(\chi + \chi^{(t+1)})_{[n]^d \setminus S}||_0 - ||(\chi + \chi^{(t)})_{[n]^d \setminus S}||_0$. However, the number of elements in $L''$ is small. To show this, we invoke Lemma 9.1 to obtain precision guarantees for the call to ESTIMATEVALUES on the pair $(x, \chi)$ and set of 'head elements' $S \cup \mathrm{supp}\, \chi$. Note that $|S| \le 2k$, as otherwise we would have $||x_{[n]^d \setminus [k]}||_2^2 > \mu^2 \cdot k$, a contradiction. Further, by assumption B of the lemma we have $||(\chi + \chi^{(t)})_{[n]^d \setminus S}||_0 \le k$, so $|S \cup \mathrm{supp}(\chi + \chi^{(t)})| \le 4k$. The ℓ1 norm of $x' - \chi^{(t)}$ on $S \cup \mathrm{supp}(\chi + \chi^{(t)})$ can be bounded as
$$\frac{||(x' - \chi^{(t)})_S||_1 + ||(x' - \chi^{(t)})_{\mathrm{supp}(\chi + \chi^{(t)}) \setminus S}||_1}{4k} \le \frac{||(x' - \chi^{(t)})_S||_1 + ||(\chi + \chi^{(t)})_{[n]^d \setminus S}||_1 + ||x_{[n]^d \setminus S}||_\infty \cdot |\mathrm{supp}(\chi + \chi^{(t)})|}{4k} \le \frac{(2^{-t}\nu + 20\mu)k + \frac{2\nu}{\log^4 N}k + \mu \cdot 4k}{4k} \le 2^{-t}\nu + 20\mu.$$
For the ℓ2 bound on the tail of the signal we have
$$\frac{||(x' - \chi^{(t)})_{[n]^d \setminus (S \cup \mathrm{supp}(\chi + \chi^{(t)}))}||_2^2}{4k} \le \frac{||x_{[n]^d \setminus S}||_2^2}{4k} \le \mu^2.$$
We thus have by Lemma 9.1, (1) for every $i \in L'$ that the estimate $w_i$ returned by ESTIMATEVALUES satisfies
$$\Pr[|w_i - x'_i| > \alpha^{1/2}(2^{-t}\nu + 20\mu)] < 2^{-\Omega(r_{\max})}.$$
Since $r_{\max}$ is chosen as $r_{\max} = C(\log\log N + d^2 + \log(B/k))$ for a sufficiently large absolute constant $C > 0$, we have
$$\Pr[|w_i - x'_i| > \alpha^{1/2}(2^{-t}\nu + 20\mu)] < 2^{-\Omega(r_{\max})} \le (k/B) \cdot (\log N)^{-25}.$$
This means that
$$\mathbb{E}[|L''|] \le |L| \cdot (k/B) \cdot (\log N)^{-25} \le (B \cdot r_{\max})(k/B) \cdot (\log N)^{-25} \le k(\log N)^{-24},$$
where the expectation is over the randomness used in ESTIMATEVALUES. We used the fact that $|L'| \le |L| \le B \cdot r_{\max}$ and the choice of $r_{\max}$ to derive the upper bound above. An application of Markov's inequality completes the proof.
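The closing Markov step is pure arithmetic; a quick check with concrete numbers (ours; base-2 logs and an illustrative $k$):

```python
# Markov: Pr[X > a] <= E[X]/a. With E|L''| <= k/(log N)^24 and the budget
# a = k/(log N)^21 from the lemma's conclusion, the failure probability is
# at most (log N)^-3, matching the stated success probability.

for logN in [32, 64, 128]:
    k = 1000.0
    expected = k / logN ** 24
    budget = k / logN ** 21
    markov_bound = expected / budget
    assert markov_bound <= logN ** -3 + 1e-18
```

The slack $(\log N)^{-3}$ per sub-event is what lets the proof of Lemma 3.2 later union-bound three such events per step and still stay within $3/\log^2 N$ overall.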
Lemma 7.2 (Spurious elements do not introduce significant ℓ1 error). For any $x \in \mathbb{C}^N$, any integer $k \ge 1$, $B \ge (2\pi)^{4d \cdot F} \cdot k/\alpha^d$ for $\alpha \in (0, 1]$ smaller than an absolute constant and $F \ge 2d$, $F = \Theta(d)$, the following conditions hold for the set $S := \{i \in [n]^d : |x_i| > \mu\}$, where $\mu^2 \ge ||x_{[n]^d \setminus [k]}||_2^2/k$.

For any sequence of hashings $H_r = (\pi_r, B, F)$, $r = 1, \ldots, r_{\max}$, if $S^* \subseteq S$ denotes the set of elements of $S$ that are not isolated with respect to at least a $\sqrt{\alpha}$ fraction of the hashings $H_r$, $r = 1, \ldots, r_{\max}$, then for any $\chi \in \mathbb{C}^{[n]^d}$, $x' := x - \chi$, the following conditions hold.

Consider the call
$$\text{REDUCEL1NORM}\left( \chi, k, \{m(\widehat{x}, H_r, a \star (1, w))\}_{r \in [1 : r_{\max}],\, a \in A_r,\, w \in \mathcal{W}}, 4\mu(\log^4 n)^{T-t}, \mu \right),$$
where we assume that measurements of $x$ are taken as in Algorithm 2. Denote, for each $t = 0, \ldots, \log_2(\log^4 N)$, the signal recovered by step $t$ in this call by $\chi^{(t)}$ (see Algorithm 3). There exists an absolute constant $C > 0$ such that if for a parameter $\nu \ge 2^t \mu$ at step $t$

A. $||(x' - \chi^{(t)})_S||_1 \le (2^{-t}\nu + 20\mu)k$;

B. $||(\chi + \chi^{(t)})_{[n]^d \setminus S}||_0 \le \frac{2}{\log^{19} N} k$;

C. $||(x' - \chi^{(t)})_{S^*}||_1 + ||(\chi + \chi^{(t)})_{[n]^d \setminus S}||_1 \le \frac{2\nu}{\log^4 N} k$,

then with probability at least $1 - (\log N)^{-3}$ over the randomness used in ESTIMATEVALUES at step $t$ one has
$$||(x' - \chi^{(t+1)})_{([n]^d \setminus S) \cup S^*}||_1 - ||(x' - \chi^{(t)})_{([n]^d \setminus S) \cup S^*}||_1 \le \frac{1}{\log^{21} N} k(\nu + \mu).$$
Proof. We let $Q := ([n]^d \setminus S) \cup S^*$ to simplify notation, and recall that $L' \subseteq L$ is the list output by ESTIMATEVALUES. We let
$$L'' = \left\{ i \in L : |\chi'_i - x'_i| > \alpha^{1/2}(2^{-t}\nu + 20\mu) \right\}$$
denote the set of elements of $L$ that failed to be estimated to within an additive $\alpha^{1/2}(2^{-t}\nu + 20\mu)$ error term. We write
$$||(x - \chi^{(t+1)})_Q||_1 = ||(x - \chi^{(t+1)})_{Q \setminus L'}||_1 + ||(x - \chi^{(t+1)})_{(Q \cap L') \setminus L''}||_1 + ||(x - \chi^{(t+1)})_{Q \cap L''}||_1. \qquad (27)$$
We first note that $\chi^{(t+1)}_i = \chi^{(t)}_i$ for all $i \notin L'$, and hence $||(x' - \chi^{(t+1)})_{Q \setminus L'}||_1 = ||(x' - \chi^{(t)})_{Q \setminus L'}||_1$.

Second, for $i \in (Q \cap L') \setminus L''$ (second term) one has $|x'_i - \chi^{(t+1)}_i| \le \sqrt{\alpha}(2^{-t}\nu + 20\mu)$. Since only elements $i \in L$ with $|\chi'_i| > \frac{1}{1000}\nu 2^{-t} + 4\mu$ are reported by the threshold setting in ESTIMATEVALUES, we have $|x'_i - \chi'_i| \le \sqrt{\alpha}(2^{-t}\nu + 20\mu) \le |x'_i|$ as long as $\alpha$ is smaller than a constant. We thus get that $||(x - \chi^{(t+1)})_{(Q \cap L') \setminus L''}||_1 \le ||(x - \chi^{(t)})_{(Q \cap L') \setminus L''}||_1$.
For the third term, we note that for each $i \in L$ the estimate $w_i$ computed in the call to ESTIMATEVALUES satisfies
$$\mathbb{E}\left[ \left( |w_i - x'_i| - \sqrt{\alpha}(2^{-t}\nu + 20\mu) \right)_+ \right] \le \sqrt{\alpha}(2^{-t}\nu + \mu)k 2^{-\Omega(r_{\max})} \qquad (28)$$
by Lemma 9.1, (2). Verification of the preconditions of that lemma is identical to Lemma 7.1 (note that the assumptions of this lemma and Lemma 7.1 are identical) and is hence omitted. Since $r_{\max} = C(\log\log N + \log(B/k))$, the rhs of (28) is bounded by $(\log N)^{-25} \sqrt{\alpha}(2^{-t}\nu + \mu)k$ as long as $C > 0$ is larger than an absolute constant. We thus have
$$||(x' - \chi^{(t+1)})_{S \cap L''}||_1 \le \sum_{i \in S \cap L''} \left[ \sqrt{\alpha}(2^{-t}\nu + 20\mu) + \left( |w_i - x'_i| - \sqrt{\alpha}(2^{-t}\nu + 20\mu) \right)_+ \right].$$
Combining (28) with the fact that by Lemma 9.1, (1) we have for every $i \in L$
$$\Pr\left[ |w_i - x'_i| > \sqrt{\alpha}(2^{-t}\nu + 20\mu) \right] \le 2^{-\Omega(r_{\max})} \le (k/B) \cdot (\log N)^{-25}$$
by our choice of $r_{\max}$, we get that
$$\mathbb{E}\left[ ||(x' - \chi^{(t+1)})_{S \cap L''}||_1 \right] \le 2\sqrt{\alpha}(2^{-t}\nu + 20\mu) \cdot |L| \cdot (k/B) \cdot (\log N)^{-25}.$$
An application of Markov's inequality then implies, if $\alpha$ is smaller than an absolute constant, that
$$\Pr\left[ ||(x' - \chi^{(t+1)})_{S \cap L''}||_1 > \frac{1}{\log^{21} N}(\nu + \mu)k \right] < 1/\log^3 N.$$
Substituting the bounds we just derived into (27), we get
$$||(x - \chi^{(t+1)})_Q||_1 \le ||(x - \chi^{(t)})_Q||_1 + \frac{1}{\log^{21} N}(\nu + \mu)k,$$
as required.
Equipped with the two lemmas above, we can now give a proof of Lemma 3.2.

Proof of Lemma 3.2: We prove the result by strong induction on $t = 0, \ldots, \log_2(\log^4 N)$. Specifically, we prove that there exist events $\mathcal{E}_t$, $t = 0, \ldots, \log_2(\log^4 N)$, such that (a) $\mathcal{E}_t$ depends on the randomness used in the call to ESTIMATEVALUES at step $t$ and satisfies $\Pr[\mathcal{E}_t \mid \mathcal{E}_0 \wedge \ldots \wedge \mathcal{E}_{t-1}] \ge 1 - 3/\log^2 N$, and (b) for all $t$, conditional on $\mathcal{E}_0 \wedge \mathcal{E}_1 \wedge \ldots \wedge \mathcal{E}_t$, one has

(1) $||(x' - \chi^{(t)})_{S \setminus S^*}||_1 \le (2^{-t}\nu + 20\mu)k$;

(2) $||(\chi + \chi^{(t)})_{[n]^d \setminus S}||_0 \le ||\chi_{[n]^d \setminus S}||_0 + \frac{t}{\log^{21} N} k$;

(3) $||(x' - \chi^{(t)})_{S^*}||_1 + ||(\chi + \chi^{(t)})_{[n]^d \setminus S}||_1 \le ||x'_{S^*}||_1 + ||\chi_{[n]^d \setminus S}||_1 + \frac{t}{\log^{21} N} \nu k$.
The base case is provided by $t = 0$ and is trivial since $\chi^{(0)} = 0$. We now give the inductive step.

We start by proving the inductive step for (2) and (3). We will use Lemma 7.1 and Lemma 7.2, and hence we start by verifying that their preconditions (which are identical for the two lemmas) are satisfied. Precondition A is satisfied directly by inductive hypothesis (1). Precondition B is satisfied since
$$||(\chi + \chi^{(t)})_{[n]^d \setminus S}||_0 \le ||\chi_{[n]^d \setminus S}||_0 + \frac{t}{\log^{21} N} k \le \frac{1}{\log^{19} N} k + \frac{\log_2(\log^4 N)}{\log^{21} N} k \le \frac{2}{\log^{19} N} k,$$
where we used assumption B of this lemma and inductive hypothesis (2). Precondition C is satisfied since
$$||(x' - \chi^{(t)})_{S^*}||_1 + ||(\chi + \chi^{(t)})_{[n]^d \setminus S}||_1 \le ||x'_{S^*}||_1 + ||\chi_{[n]^d \setminus S}||_1 + \frac{t}{\log^{21} N} \nu k \le \frac{\nu}{\log^4 N} k + \frac{t}{\log^{21} N} \nu k \le \frac{2\nu}{\log^4 N} k,$$
where we used assumption C of this lemma, inductive hypothesis (3), and the fact that $t \le \log_2(\log^4 N) \le \log N$ for sufficiently large $N$.
Proving (2). To prove the inductive step for (2), we use Lemma 7.1, which shows that with probability at least $1 - (\log N)^{-3}$ over the randomness used in ESTIMATEVALUES (denote the success event by $\mathcal{E}^1_t$) we have
$$||(\chi + \chi^{(t+1)})_{[n]^d \setminus S}||_0 - ||(\chi + \chi^{(t)})_{[n]^d \setminus S}||_0 \le \frac{1}{\log^{21} N} k,$$
so
$$||(\chi + \chi^{(t+1)})_{[n]^d \setminus S}||_0 \le ||(\chi + \chi^{(t)})_{[n]^d \setminus S}||_0 + \frac{1}{\log^{21} N} k \le ||\chi_{[n]^d \setminus S}||_0 + \frac{t+1}{\log^{21} N} k,$$
as required.
Proving (3). At the same time we have by Lemma 7.2 that with probability at least $1 - (\log N)^{-3}$ (denote the success event by $\mathcal{E}^2_t$)
$$||(x' - \chi^{(t+1)})_{([n]^d \setminus S) \cup S^*}||_1 - ||(x' - \chi^{(t)})_{([n]^d \setminus S) \cup S^*}||_1 \le \frac{1}{\log^{21} N} k\nu,$$
so by combining this with inductive hypothesis (3) we get
$$||(x' - \chi^{(t+1)})_{([n]^d \setminus S) \cup S^*}||_1 \le ||x'_{S^*}||_1 + ||\chi_{[n]^d \setminus S}||_1 + \frac{t+1}{\log^{21} N} \nu k \le ||x'_{S^*}||_1 + ||\chi_{[n]^d \setminus S}||_1 + \frac{1}{\log^{20} N} \nu k,$$
as required.
Proving (1). We let $L'' \subseteq L$ denote the set of elements in $L$ that fail to be estimated to within a small additive error. Specifically, we let
$$L'' = \left\{ i \in L : |\chi'_i - x'_i| > \alpha^{1/2}(2^{-t}\nu + 20\mu) \right\},$$
where $\chi'$ is the output of ESTIMATEVALUES in iteration $t$. We bound $||(x' - \chi^{(t+1)})_{S \setminus S^*}||_1$ by splitting this ℓ1 norm into four terms, depending on whether the corresponding elements were updated in iteration $t$ and whether they were well estimated. We have
$$\begin{aligned}
||(x' - \chi^{(t+1)})_{S \setminus S^*}||_1 &= ||(x' - (\chi^{(t)} + \chi'))_{S \setminus S^*}||_1 \\
&\le ||(x' - (\chi^{(t)} + \chi'))_{S \setminus (S^* \cup L)}||_1 + ||(x' - (\chi^{(t)} + \chi'))_{(S \cap L) \setminus L' \setminus L''}||_1 + ||(x' - (\chi^{(t)} + \chi'))_{(S \cap L') \setminus L''}||_1 \\
&\quad + ||(x' - (\chi^{(t)} + \chi'))_{L''}||_1 \\
&= ||(x' - \chi^{(t)})_{S \setminus (S^* \cup L)}||_1 + ||(x' - (\chi^{(t)} + \chi'))_{(S \cap L) \setminus L' \setminus L''}||_1 + ||(x' - (\chi^{(t)} + \chi'))_{(S \cap L') \setminus L''}||_1 \\
&\quad + ||(x' - (\chi^{(t)} + \chi'))_{(L \cap S) \cap L''}||_1 \\
&=: S_1 + S_2 + S_3 + S_4, \qquad (29)
\end{aligned}$$
where we used the fact that $\chi'_{S \setminus L} \equiv 0$ to go from the second line to the third. We now bound the four terms.
The second term (i.e. S_2) captures elements of S that were estimated precisely (and hence are not in L′′), but were not included in L′ as they did not pass the threshold test (were not estimated as larger than (1/1000)·2^{−t}ν + 4µ) in ESTIMATEVALUES. One thus has
||(x′ − (χ(t) + χ′))_{(S∩L)\L′\L′′}||_1 ≤ α^{1/2}·(2^{−t}ν + 20µ)·|(S∩L)\L′\L′′| + ((1/1000)·2^{−t}ν + 4µ)·|(S∩L)\L′\L′′|
≤ ((1/1000 + α^{1/2})·2^{−t}ν + (4 + 20α^{1/2})µ)·2k    (30)
since |S| ≤ 2k by assumption of the lemma.
The third term (i.e. S_3) captures elements of S that were reported by ESTIMATEVALUES (hence belong to L′) and were approximated well (hence do not belong to L′′). By definition of the set L′′ one has
||(x′ − (χ(t) + χ′))_{(S∩L′)\L′′}||_1 ≤ α^{1/2}·(2^{−t}ν + 20µ)·|(S∩L′)\L′′| ≤ 2α^{1/2}·(2^{−t}ν + 20µ)·k    (31)
since |S| ≤ 2k by assumption of the lemma.
For the fourth term (i.e. S_4) we have
||(x′ − (χ(t) + χ′))_{L′′}||_1 ≤ α^{1/2}·(2^{−t}ν + 20µ)·|L′′| + Σ_{i∈S} | |χ′_i − x′_i| − α^{1/2}·(2^{−t}ν + 20µ) |_+ .
By Lemma 9.1, (1) (invoked on the set S ∪ supp(χ + χ(t) + χ′)) we have E[|L′′|] ≤ B·2^{−Ω(r_max)}, and by Lemma 9.1, (2) for any i one has
E[ | |χ′_i − x′_i| − α^{1/2}·(2^{−t}ν + 20µ) |_+ ] ≤ |L|·α^{1/2}·(2^{−t}ν + 20µ)·2^{−Ω(r_max)}.
Since the parameter r_max in ESTIMATEVALUES is chosen to be at least C(log log N + d^2 + log(B/k)) for a sufficiently large constant C, and |L| = O(log N)·B, we have
E[ ||(x′ − (χ(t) + χ′))_{L′′}||_1 ] ≤ α^{1/2}·(2^{−t}ν + 20µ)·|L|·2^{−Ω(r_max)} ≤ (1/log^25 N)·(2^{−t}ν + 20µ)·k.
By Markov's inequality we thus have
||(x′ − (χ(t) + χ′))_{L′′}||_1 ≤ α^{1/2}·(2^{−t}ν + 20µ)·|L|·2^{−Ω(r_max)} ≤ (1/log^22 N)·(2^{−t}ν + 20µ)·k    (32)
with probability at least 1 − 1/log^3 N. Denote the success event by E_t^0.
Finally, in order to bound the first term (i.e. S_1), we invoke Theorem 3.1 to analyze the call to LOCATESIGNAL in the t-th iteration. We note that since r_max, c_max ≥ (C_1/√α)·log log N (where C_1 is the constant from Theorem 3.1) by assumption of the lemma, the preconditions of Theorem 3.1 are satisfied. By Theorem 3.1 together with (1) and (3) of the inductive hypothesis we have
||(x′ − χ(t))_{S\(S∗∪L)}||_1 ≤ (4C_2α)^{d/2}·||(x′ − χ(t))_{S\S∗}||_1 + (4C)^{d^2}·(||(χ + χ(t))_{[n]^d\S}||_1 + ||(x′ − χ(t))_{S∗}||_1) + 4µ|S|
≤ O((4C_2α)^{d/2})·(2^{−t}ν + 20µ)·k + (4C)^{d^2}·(2/log^20 N)·νk + 8µk
≤ (1/1000)·(2^{−t}ν + 20µ)·k + 8µk    (33)
if α is smaller than an absolute constant and N is sufficiently large.
Now substituting the bounds on S_1, S_2, S_3, S_4 provided by (33), (30), (31) and (32) into (29) we get
||(x′ − χ(t+1))_{S\S∗}||_1 ≤ (2/1000 + O(α^{1/2}))·2^{−t}νk + (16 + O(α^{1/2}))·µk ≤ 2^{−(t+1)}νk + 20µk
when α is a sufficiently small constant, as required. This proves the inductive step for (1) and completes the proof of the induction.
Let E_t = E_t^0 ∧ E_t^1 ∧ E_t^2 denote the success event for step t. By a union bound we have Pr[E_t] ≥ 1 − 3t/log^2 N, as required.
Sample complexity and runtime. It remains to bound the sample complexity and runtime. First note that REDUCEL1NORM only takes fresh samples in the calls to ESTIMATEVALUES that it issues. By Lemma 9.1 each such call uses 2^{O(d^2)}·k·(log log N) samples, amounting to 2^{O(d^2)}·k·(log log N)^2 samples over the O(log log N) iterations.
By Lemma 5.1 each call to LOCATESIGNAL takes O(B(log N)^{3/2}) time. Updating the measurements m(x̂, H_r, a ⋆ (1, w)), w ∈ W, takes
|W|·c_max·r_max·F^{O(d)}·B·log^{d+1} N·log log N = 2^{O(d^2)}·k·log^{d+2} N
time overall. The runtime complexity of the calls to ESTIMATEVALUES is 2^{O(d^2)}·k·log^{d+1} N·(log log N)^2 overall. Thus, the runtime is bounded by 2^{O(d^2)}·k·log^{d+2} N.
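The overall structure analyzed in this subsection — repeatedly locate heavy residual entries, estimate their values, and subtract the estimates — can be illustrated by a toy one-dimensional sketch in which exact oracles stand in for LOCATESIGNAL and ESTIMATEVALUES. The function below is a hypothetical simplification for intuition only, not the algorithm itself:

```python
import numpy as np

def snr_reduction_loop(x, k, T):
    """Toy SNR reduction loop: in each round, locate the k largest entries
    of the residual and subtract estimates of their values. Exact oracles
    stand in for LocateSignal and EstimateValues."""
    chi = np.zeros_like(x)
    for _ in range(T):
        residual = x - chi
        located = np.argsort(np.abs(residual))[-k:]  # "location" step
        chi[located] += residual[located]            # "estimate and subtract"
    return chi

rng = np.random.default_rng(0)
x = np.zeros(64)
heavy = rng.choice(64, size=4, replace=False)
x[heavy] = 100.0 * rng.normal(size=4)
x += 0.01 * rng.normal(size=64)  # small tail ("noise") component
chi = snr_reduction_loop(x, k=4, T=3)
print(np.max(np.abs(x - chi)) < 1.0)
```

With exact estimation the heavy entries are removed in one round; the real algorithm instead works with noisy estimates, which is why the analysis above tracks the geometric decay of the residual ℓ1 norm across rounds.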
7.2 Analysis of the SNR reduction loop in SPARSEFFT
In this section we prove
Theorem 3.3. For any x ∈ C^N, any integer k ≥ 1, if µ^2 ≥ Err_k^2(x)/k and R∗ ≥ ||x||_∞/µ = N^{O(1)}, the following conditions hold for the set S := {i ∈ [n]^d : |x_i| > µ} ⊆ [n]^d. The SNR reduction loop of Algorithm 2 (lines 19–25) returns χ(T) such that
||(x − χ(T))_S||_1 ≲ µk    (ℓ1-SNR on head elements is constant)
||χ(T)_{[n]^d\S}||_1 ≲ µk    (spurious elements contribute little in ℓ1 norm)
||χ(T)_{[n]^d\S}||_0 ≲ (1/log^19 N)·k    (a small number of spurious elements have been introduced)
with probability at least 1 − 1/log N over the internal randomness used by Algorithm 2. The sample complexity is 2^{O(d^2)}·k·log N·(log log N). The runtime is bounded by 2^{O(d^2)}·k·log^{d+3} N.
Proof. We start with correctness. We prove by induction that after the t-th iteration one has
(1) ||(x − χ(t))_S||_1 ≤ 4(log^4 N)^{T−t}·µk + 20µk;
(2) ||x − χ(t)||_∞ = O((log^4 N)^{T−(t−1)}·µ);
(3) ||χ(t)_{[n]^d\S}||_0 ≤ (t/log^20 N)·k.
The base case is provided by t = 0, where all claims are trivially true by definition of R∗. We now prove the inductive step. The main tool here is Lemma 3.2, so we start by verifying that its preconditions are satisfied.
First, |S∗| ≤ 2^{−Ω(r_max)}·|S| ≤ 2^{−Ω(r_max)}·k ≤ (1/log^19 N)·k with probability at least 1 − 2^{−Ω(r_max)} ≥ 1 − 1/log N, by Lemma 2.12 and the choice of r_max ≥ (C/√α)·log log N for a sufficiently large constant C > 0. Also, by Claim 2.14 we have that with probability at least 1 − 1/log^2 N, for every s ∈ [1 : d] the sets A_r ⋆ (0, e_s) are balanced (as per Definition 2.13 with ∆ = 2^{⌊(1/2)·log_2 log_2 n⌋}, as needed for Algorithm 1). Also note that by (2) of the inductive hypothesis one has ||x − χ(t)||_∞/µ = R∗·O(log N) = N^{O(1)}.
Assuming the inductive hypothesis (1)–(3), we now verify that the preconditions of Lemma 3.2 are satisfied with ν = 4(log^4 N)^{T−t}·µ. First, one has ||(x − χ(t))_S||_1 ≤ 4(log^4 N)^{T−t}·µk = νk, which satisfies precondition (A) of Lemma 3.2. We have
||(x − χ(t))_{S∗}||_1 + ||χ(t)_{[n]^d\S}||_1 ≤ ||x − χ(t)||_∞·(||(x − χ(t))_{S∗}||_0 + ||χ(t)_{[n]^d\S}||_0)
≤ O(log^4 N)·ν·((1/log^19 N)·k + (t/log^20 N)·k) ≤ (16/log^14 N)·νk    (34)
for sufficiently large N. Since the rhs is at most (16/log^14 N)·νk, precondition (C) of Lemma 3.2 is also satisfied. Precondition (B) of Lemma 3.2 is satisfied by inductive hypothesis (3) together with the fact that T = o(log R∗) = o(log N).
Thus, all preconditions of Lemma 3.2 are satisfied. Then by Lemma 3.2 with ν = 4(log^4 N)^{T−t}·µ one has, with probability at least 1 − 1/log^2 N,
1. ||(x′ − χ(t) − χ′)_S||_1 ≤ (1/log^4 N)·νk + 20µk;
2. ||(χ(t) + χ′)_{[n]^d\S}||_0 − ||χ(t)_{[n]^d\S}||_0 ≤ (1/log^20 N)·k;
3. ||(x′ − (χ(t) + χ′))_{S∗}||_1 + ||(χ(t) + χ′)_{[n]^d\S}||_1 ≤ ||(x′ − χ(t))_{S∗}||_1 + ||χ(t)_{[n]^d\S}||_1 + (1/log^20 N)·νk.
Combining 1 above with (34) proves (1) of the inductive step:
||(x − χ(t+1))_S||_1 = ||(x − χ(t) − χ′)_S||_1 ≤ (1/log^4 N)·νk + 20µk = (1/log^4 N)·4(log^4 N)^{T−t}·µk + 20µk = 4(log^4 N)^{T−(t+1)}·µk + 20µk.
Also, combining 2 above with the fact that ||χ(t)_{[n]^d\S}||_0 ≤ (t/log^20 N)·k yields ||χ(t+1)_{[n]^d\S}||_0 ≤ ((t+1)/log^20 N)·k, as required.
In order to prove the inductive step it remains to analyze the call to REDUCEINFNORM, for which we use Lemma 8.1 with parameter k̃ = 4k/log^4 N. We first verify that the preconditions of the lemma are satisfied. Let y := x − (χ + χ(t) + χ′) to simplify notation. We need to verify that
||y_{[k̃]}||_1/k̃ ≤ (log^4 N)·((1/log^4 N)·ν + 20µ)    (35)
and
||y_{[n]^d\[k̃]}||_2/√k̃ ≤ (log^4 N)·((1/log^4 N)·ν + 20µ),    (36)
where we denote k̃ := 4k/log^4 N for convenience. The first condition is easy to verify, as we now show.
Indeed, we have
||y_{[k̃]}||_1 ≤ ||y_S||_1 + ||y_{supp(χ(t)+χ′)\S}||_1 + ||x_{[n]^d\S}||_∞·k̃
≤ ||y_S||_1 + ||(χ(t) + χ′)_{[n]^d\S}||_1 + ||x_{supp(χ(t)+χ′)\S}||_∞·k̃ + ||x_{[n]^d\S}||_∞·k̃
≤ (1/log^4 N)·νk + 20µk + (1/log^4 N)·νk + 2µk̃ ≤ (2/log^4 N)·νk + 40µk,
where we used the triangle inequality to upper bound ||y_{supp(χ(t)+χ′)\S}||_1 by ||(χ(t) + χ′)_{[n]^d\S}||_1 + ||x_{supp(χ(t)+χ′)\S}||_∞·k̃ to go from the first line to the second. We thus have
||y_{[k̃]}||_1/k̃ ≤ ((2/log^4 N)·νk + 40µk)/(4k/log^4 N) ≤ (log^4 N)·((1/log^4 N)·ν + 20µ)
as required. This establishes (35).
To verify the second condition, we first let S̃ := S ∪ supp(χ + χ(t) + χ′) to simplify notation. We have
||y_{[n]^d\[k̃]}||_2^2 = ||y_{S̃\[k̃]}||_2^2 + ||y_{([n]^d\S̃)\[k̃]}||_2^2 ≤ ||y_{S̃\[k̃]}||_2^2 + µ^2 k,    (37)
where we used the fact that y_{[n]^d\S̃} = x_{[n]^d\S̃} and hence ||y_{([n]^d\S̃)\[k̃]}||_2^2 ≤ µ^2 k. We now note that ||y_{S̃\[k̃]}||_1 ≤ ||y_{S̃}||_1 ≤ 2((1/log^4 N)·νk + 20µk), and so it must be that ||y_{S̃\[k̃]}||_∞ ≤ 2((1/log^4 N)·ν + 20µ)·(k/k̃), as otherwise the top k̃ elements of y would contribute more than 2((1/log^4 N)·νk + 20µk) to ||y_{S̃}||_1, a contradiction. Under these constraints ||y_{S̃\[k̃]}||_2^2 is maximized when there are k̃ elements in y_{S̃\[k̃]}, all equal to the maximum possible value, i.e. ||y_{S̃\[k̃]}||_2^2 ≤ 4((1/log^4 N)·ν + 20µ)^2·(k/k̃)^2·k̃. Plugging this into (37), we get ||y_{[n]^d\[k̃]}||_2^2 ≤ 4((1/log^4 N)·ν + 20µ)^2·(k/k̃)^2·k̃ + µ^2 k. This implies that
||y_{[n]^d\[k̃]}||_2/√k̃ ≤ √( 4((1/log^4 N)·ν + 20µ)^2·(k/k̃)^2 + µ^2·(k/k̃) ) ≤ 2(k/k̃)·√( ((1/log^4 N)·ν + 20µ)^2 + µ^2 )
≤ 2·(((1/log^4 N)·ν + 20µ) + µ)·(k/k̃) ≤ (log^4 N)·((1/log^4 N)·ν + 20µ),
establishing (36).
Finally, also recall that ||y_{S̃\[k̃]}||_∞ ≤ 2((1/log^4 N)·ν + 20µ)·(k/k̃) ≤ (log^4 N)·((1/log^4 N)·ν + 20µ) and ||y_{[n]^d\S̃}||_∞ = ||x_{[n]^d\S}||_∞ ≤ µ.
We thus have that all preconditions of Lemma 8.1 are satisfied for the set of top k̃ elements of y, and hence its output satisfies
||x − (χ(t) + χ′ + χ′′)||_∞ = O(log^4 N)·((1/log^4 N)·ν + 20µ).
Putting these bounds together establishes (2), and completes the inductive step and the proof of correctness.
Finally, taking a union bound over all failure events (each call to ESTIMATEVALUES succeeds with probability at least 1 − 1/log^2 N, and with probability at least 1 − 1/log^2 N for all s ∈ [1 : d] the set A_r ⋆ (0, e_s) is balanced in coordinate s) and using the fact that T = o(log N) and each call to LOCATESIGNAL is deterministic, we get that the success probability of the SNR reduction loop is lower bounded by 1 − 1/log N.
Sample complexity and runtime. The sample complexity is bounded by the sample complexity of the calls to REDUCEL1NORM and REDUCEINFNORM inside the loop, times O(log N/log log N) for the number of iterations. The former is bounded by 2^{O(d^2)}·k·(log log N)^2 by Lemma 3.2, and the latter is bounded by 2^{O(d^2)}·k/log N by Lemma 8.1, amounting to at most 2^{O(d^2)}·k·log N·(log log N) samples overall. The runtime complexity is at most 2^{O(d^2)}·k·log^{d+3} N overall for the calls to REDUCEL1NORM and no more than 2^{O(d^2)}·k·log^{d+3} N overall for the calls to REDUCEINFNORM.
7.3 Analysis of SPARSEFFT
Theorem 3.5. For any ǫ > 0, x ∈ C^{[n]^d} and any integer k ≥ 1, if R∗ ≥ ||x||_∞/µ = poly(N), µ^2 ≥ ||x_{[n]^d\[k]}||_2^2/k, µ^2 = O(||x_{[n]^d\[k]}||_2^2/k) and α > 0 is smaller than an absolute constant, then SPARSEFFT(x̂, k, ǫ, R∗, µ) solves the ℓ2/ℓ2 sparse recovery problem using 2^{O(d^2)}·(k·log N·log log N + (1/ǫ)·k·log N) samples and 2^{O(d^2)}·(1/ǫ)·k·log^{d+3} N time with at least 98/100 success probability.
Proof. By Theorem 3.3 the set S := {i ∈ [n]^d : |x_i| > µ} satisfies
||(x − χ(T))_S||_1 ≲ µk,  ||χ(T)_{[n]^d\S}||_1 ≲ µk  and  ||χ(T)_{[n]^d\S}||_0 ≲ (1/log^19 N)·k
with probability at least 1 − 1/log N.
We now show that the signal x′ := x − χ(T) satisfies the preconditions of Lemma 3.4 with parameter k. Indeed, letting Q ⊆ [n]^d denote the top 2k coefficients of x′, we have
||x′_Q||_1 ≤ ||x′_{Q∩S}||_1 + ||χ(T)_{(Q\S)∩supp χ(T)}||_1 + |Q|·||x_{[n]^d\S}||_∞ ≤ O(µk).
Furthermore, since Q is the set of top 2k elements of x′, we have
||x′_{[n]^d\Q}||_2^2 ≤ ||x′_{[n]^d\(S∪supp χ(T))}||_2^2 ≤ ||x_{[n]^d\(S∪supp χ(T))}||_2^2 ≤ ||x_{[n]^d\S}||_2^2 ≤ µ^2·|S| + ||x_{[n]^d\[k]}||_2^2 = O(µ^2 k)
as required.
Thus, with at least 99/100 probability we have by Lemma 3.4 that
||x − χ(T) − χ′||_2 ≤ (1 + O(ǫ))·Err_k(x).
By a union bound with the 1/log N failure probability of the SNR reduction loop, we have that SPARSEFFT is correct with probability at least 98/100, as required.
It remains to bound the sample and runtime complexity. The number of samples needed to compute
m(x̂, H_r, a ⋆ (1, w)) ← HASHTOBINS(x̂, 0, (H_r, a ⋆ (1, w)))
for all a ∈ A_r, w ∈ W is bounded by 2^{O(d^2)}·k·log N·(log log N) by our choice of B = 2^{O(d^2)}·k, r_max = O(log log N), |W| = O(log N/log log N) and |A_r| = O(log log N), together with Lemma 9.2. This is asymptotically the same as the 2^{O(d^2)}·k·log N·(log log N) sample complexity of the ℓ1 norm reduction loop given by Theorem 3.3. The sample complexity of the call to RECOVERATCONSTANTSNR is at most 2^{O(d^2)}·(1/ǫ)·k·log N by Lemma 3.4, yielding the claimed bound.
The runtime of the SNR reduction loop is bounded by 2^{O(d^2)}·k·log^{d+3} N by Theorem 3.3, and the runtime of RECOVERATCONSTANTSNR is at most 2^{O(d^2)}·(1/ǫ)·k·log^{d+2} N by Lemma 3.4.
8 ℓ∞ /ℓ2 guarantees and constant SNR case
In this section we state and analyze our algorithm for obtaining ℓ∞/ℓ2 guarantees in Õ(k) time, as well as a procedure for recovery under the assumption of bounded ℓ1 norm of the heavy hitters (which is very similar to the RECOVERATCONSTSNR procedure used in [IKP14]).
8.1 ℓ∞ /ℓ2 guarantees
The algorithm is given as Algorithm 4.
Algorithm 4 REDUCEINFNORM(x̂, χ, k, ν, R∗, µ)
1: procedure REDUCEINFNORM(x̂, χ, k, ν, R∗, µ)
2:   χ(0) ← 0  ⊲ in C^n
3:   B ← (2π)^{4d·F}·k/α^d for a small constant α > 0
4:   T ← log_2 R∗
5:   r_max ← (C/√α)·log N for a sufficiently large constant C > 0
6:   W ← {0_d}, ∆ ← 2^{⌊(1/2)·log_2 log_2 n⌋}  ⊲ 0_d is the zero vector in dimension d
7:   for g = 1 to ⌈log_∆ n⌉ do
8:     W ← W ∪ ⋃_{s=1}^{d} {n·∆^{−g}·e_s}  ⊲ e_s is the unit vector in direction s
9:   end for
10:  G ← filter with B buckets and sharpness F, as per Lemma 2.3
11:  for r = 1 to r_max do  ⊲ Samples that will be used for location
12:    Choose Σ_r ∈ M_{d×d}, q_r ∈ [n]^d uniformly at random, let π_r := (Σ_r, q_r) and let H_r := (π_r, B, F)
13:    Let A_r ← C log log N elements of [n]^d × [n]^d sampled uniformly at random with replacement
14:    for w ∈ W do
15:      m(x̂, H_r, a ⋆ (1, w)) ← HASHTOBINS(x̂, 0, (H_r, a ⋆ (1, w))) for all a ∈ A_r
16:    end for
17:  end for
18:  for t = 0 to T − 1 do  ⊲ Locating elements of the residual that pass a threshold test
19:    for r = 1 to r_max do
20:      L_r ← LOCATESIGNAL(χ(t), k, {m(x̂, H_r, a ⋆ (1, w))}_{r∈[r_max], a∈A_r, w∈W})
21:    end for
22:    L ← ⋃_{r=1}^{r_max} L_r
23:    χ′ ← ESTIMATEVALUES(x̂, χ(t), L, k, 1, O(log n), 5(ν·2^{T−(t+1)} + µ), ∞)
24:    χ(t+1) ← χ(t) + χ′
25:  end for
26:  return χ(T)
27: end procedure
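The control flow of Algorithm 4 — T = log_2 R∗ rounds with a geometrically decreasing acceptance threshold 5(ν·2^{T−(t+1)} + µ) — can be illustrated with exact estimates standing in for the median-of-measurements ones. The sketch below is a hypothetical simplification (no hashing, no location error), meant only to show why the residual sup-norm shrinks to roughly the last threshold:

```python
import numpy as np

def reduce_inf_norm_toy(x, nu, mu, T):
    """Toy REDUCEINFNORM: round t subtracts every residual entry whose
    magnitude passes the geometrically decreasing threshold
    5 * (nu * 2**(T - (t + 1)) + mu). Estimates are exact here, whereas
    the real algorithm uses median-of-measurements estimates."""
    chi = np.zeros_like(x)
    for t in range(T):
        thresh = 5.0 * (nu * 2.0 ** (T - (t + 1)) + mu)
        residual = x - chi
        passing = np.abs(residual) >= thresh
        chi[passing] += residual[passing]
    return chi

x = np.array([100.0, -40.0, 7.0, 0.5, -0.2, 0.1])
chi = reduce_inf_norm_toy(x, nu=1.0, mu=0.1, T=7)
print(np.max(np.abs(x - chi)))  # residual sup-norm is now O(nu + mu)
```

Because the threshold halves each round, every entry of the residual eventually falls below an O(ν + µ) level, mirroring the 8(ν + µ) guarantee of Lemma 8.1 below.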
Lemma 8.1. For any x, χ ∈ C^n, x′ = x − χ, and any integer k ≥ 1, if the parameters ν and µ satisfy ν ≥ ||x′_{[k]}||_1/k and µ^2 ≥ ||x′_{[n]^d\[k]}||_2^2/k, then the following conditions hold. If S ⊆ [n]^d is the set of top k elements of x′ in terms of absolute value, and ||x′_{[n]^d\S}||_∞ ≤ ν, then the output χ ∈ C^{[n]^d} of a call to REDUCEINFNORM(x̂, χ, k, ν, R∗, µ) with probability at least 1 − N^{−10} over the randomness used in the call satisfies
||x′ − χ||_∞ ≤ 8(ν + µ) + O(1/N^c)    (all elements in S have been reduced to about ν + µ),
where the O(||x′||_∞/N^c) term corresponds to the polynomially small error in our computation of the semi-equispaced Fourier transform. Furthermore, we have χ_{[n]^d\S} ≡ 0. The number of samples used is bounded by 2^{O(d^2)}·k·log^3 N. The runtime is bounded by 2^{O(d^2)}·k·log^{d+3} N.
Proof. We prove by induction on t that with probability at least 1 − N^{−10} one has, for each t = 0, ..., T − 1:
(1) ||(x′ − χ(t))_S||_∞ ≤ 8(ν·2^{T−t} + µ);
(2) χ(t)_{[n]^d\S} ≡ 0;
(3) |(x′ − χ(t))_i| ≤ |x′_i| for all i ∈ [n]^d.
The base case t = 0 holds trivially. We now prove the inductive step. First, since r_max ≥ C log N for a constant C > 0, we have by Lemma 2.12 that each i ∈ S is isolated under at least a 1 − √α fraction of the hashings H_1, ..., H_{r_max} with probability at least 1 − 2^{−Ω(√α·r_max)} ≥ 1 − N^{−10}, as long as C is sufficiently large. This lets us invoke Lemma 6.3 with S∗ = ∅, and we use Lemma 6.3 to obtain bounds on the functions e^{head} and e^{tail} applied to our hashings {H_r} and the vector x′. Note that e^{head} and e^{tail} are defined in terms of a set S ⊆ [n]^d (this dependence is not made explicit to alleviate notation); we use S = [k], i.e. the set of top k elements of x′.
The inductive hypothesis together with the second part of Lemma 6.3 gives, for each i ∈ S,
||e^{head}_S({H_r}, x′, χ(t))||_∞ ≤ (Cα)^{d/2}·||(x′ − χ(t))_S||_∞.
To bound the effect of tail noise, we invoke the second part of Lemma 6.6, which states that if r_max = C log N for a sufficiently large constant C > 0, then ||e^{tail}_S({H_r, A_r}, x′)||_∞ = O(√α·µ).
These two facts together imply, by the second claim of Corollary 5.2, that each i ∈ S such that
|(x′ − χ(t))_i| ≥ 20√α·||(x′ − χ(t))_S||_∞ + 20√α·µ
is located. In particular, by the inductive hypothesis this means that every i ∈ S such that
|(x′ − χ(t))_i| ≥ 20√α·(ν·2^{T−t} + 2µ) + 4µ
is located and reported in the list L. This means that
||(x′ − χ(t))_{[n]^d\L}||_∞ ≤ 20√α·(ν·2^{T−t} + 2µ) + 4µ,
and hence it remains to show that each such element in L is properly estimated in the call to ESTIMATEVALUES, and that no elements outside of S are updated.
We first bound the estimation quality. Note that by part (3) of the inductive hypothesis together with Lemma 9.1, (1) one has, for each i ∈ L,
Pr[|χ′_i − (x′ − χ(t))_i| > √α·(ν + µ)] < 2^{−Ω(r_max)} < N^{−10},
as long as r_max ≥ C log N for a sufficiently large constant C > 0. This means that all elements in the list L are estimated up to an additive (ν + µ)/10 ≤ (ν·2^{T−t} + µ)/10 term as long as α is smaller than an absolute constant. Putting the bounds above together proves part (1) of the inductive step.
To prove parts (2) and (3) of the inductive step, we recall that the only elements i ∈ [n]^d that are updated are the ones that satisfy |χ′_i| ≥ 5(ν·2^{T−(t+1)} + µ). By the triangle inequality and the bound on the additive estimation error above,
|(x′ − χ(t))_i| ≥ 5(ν·2^{T−(t+1)} + µ) − (ν + µ)/10 > 4(ν·2^{T−(t+1)} + µ) ≥ 4(ν + µ).
Since |(x′ − χ(t))_i| ≤ |x′_i| by part (3) of the inductive hypothesis, only elements i ∈ [n]^d with |x′_i| ≥ 4(ν + µ) are updated, and those belong to S since ||x′_{[n]^d\S}||_∞ ≤ ν by assumption of the lemma. This proves part (2) of the inductive step. Part (3) of the inductive step follows since |(x′ − χ(t) − χ′)_i| ≤ (ν + µ)/10 by the additive error bounds above, while |(x′ − χ(t))_i| > 4(ν + µ). This completes the proof of the inductive step and the proof of correctness.
37
Sample complexity and runtime Since H ASH T O B INS uses B · F d samples by Lemma 9.2, the sample
complexity of location is bounded by
2
B · F d · rmax · cmax · |W| = 2O(d ) k log3 N.
Each call to E STIMATE VALUES uses B · F d · k · rmax samples, and there are O(log N ) such calls overall,
resulting in sample complexity of
2
B · F d · rmax · log N = 2O(d ) k log2 N.
2
Thus, the sample complexity is bounded by 2O(d ) k log3 N . The runtime bound follows analogously.
8.2 Recovery at constant SNR
The algorithm is given as Algorithm 5.
Algorithm 5 RECOVERATCONSTANTSNR(x̂, χ, k, ǫ)
1: procedure RECOVERATCONSTANTSNR(x̂, χ, k, ǫ)
2:   B ← (2π)^{4d·F}·k/(ǫα^d)
3:   Choose Σ ∈ M_{d×d}, q ∈ [n]^d uniformly at random, let π := (Σ, q) and let H := (π, B, F)
4:   Let A ← C log log N elements of [n]^d × [n]^d sampled uniformly at random with replacement
5:   W ← {0_d}, ∆ ← 2^{⌊(1/2)·log_2 log_2 n⌋}  ⊲ 0_d is the zero vector in dimension d
6:   for g = 1 to ⌈log_∆ n⌉ do
7:     W ← W ∪ ⋃_{s=1}^{d} {n·∆^{−g}·e_s}  ⊲ e_s is the unit vector in direction s
8:   end for
9:   for w ∈ W do
10:    m(x̂, H, a ⋆ (1, w)) ← HASHTOBINS(x̂, 0, (H, a ⋆ (1, w))) for all a ∈ A
11:  end for
12:  L ← LOCATESIGNAL(χ, k, {m(x̂, H, a ⋆ (1, w))}_{a∈A, w∈W})
13:  χ′ ← ESTIMATEVALUES(x̂, χ, L, 2k, ǫ, O(log N), 0)
14:  L′ ← top 4k elements of χ′
15:  return χ + χ′_{L′}
16: end procedure
Our analysis will use
Lemma 8.2 (Lemma 9.1 from [IKP14]). Let x, z ∈ Cn and k ≤ n. Let S contain the largest k terms of x, and
T contain the largest 2k terms of z. Then ||x − zT ||22 ≤ ||x − xS ||22 + 4||(x − z)S∪T ||22 .
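Lemma 8.2 is purely combinatorial, so it can be spot-checked numerically before it is used below. The sketch verifies the inequality on random real instances (a sanity check under these assumptions, not a proof):

```python
import numpy as np

# Spot-check Lemma 8.2: with S the top-k indices of x and T the top-2k
# indices of z, ||x - z_T||_2^2 <= ||x - x_S||_2^2 + 4*||(x - z)_{S∪T}||_2^2.
rng = np.random.default_rng(1)
n, k = 50, 5
for _ in range(200):
    x = rng.normal(size=n)
    z = rng.normal(size=n)
    S = set(np.argsort(np.abs(x))[-k:].tolist())
    T = set(np.argsort(np.abs(z))[-2 * k:].tolist())
    zT = np.where([i in T for i in range(n)], z, 0.0)  # z restricted to T
    xS = np.where([i in S for i in range(n)], x, 0.0)  # x restricted to S
    ST = sorted(S | T)
    lhs = np.sum((x - zT) ** 2)
    rhs = np.sum((x - xS) ** 2) + 4.0 * np.sum((x - z)[ST] ** 2)
    assert lhs <= rhs + 1e-9
print("ok")
```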
Lemma 3.4. For any ǫ > 0, x̂, χ ∈ C^N, x′ = x − χ, and any integer k ≥ 1, if ||x′_{[2k]}||_1 ≤ O(||x_{[n]^d\[k]}||_2·√k) and ||x′_{[n]^d\[2k]}||_2^2 ≤ ||x_{[n]^d\[k]}||_2^2, the following conditions hold. If ||x||_∞/µ = N^{O(1)}, then the output χ′ of RECOVERATCONSTANTSNR(x̂, χ, 2k, ǫ) satisfies
||x′ − χ′||_2^2 ≤ (1 + O(ǫ))·||x_{[n]^d\[k]}||_2^2
with at least 99/100 probability over its internal randomness. The sample complexity is 2^{O(d^2)}·(1/ǫ)·k·log N, and the runtime complexity is at most 2^{O(d^2)}·(1/ǫ)·k·log^{d+1} N.
Remark 8.3. We note that the error bound is in terms of the k-term approximation error of x, as opposed to the 2k-term approximation error of x′ = x − χ.
Proof. Let S denote the top 2k coefficients of x′. We first derive bounds on the probability that an element i ∈ S is not located. Recall that by Lemma 5.1, for any i ∈ S, if
1. e^{head}_i(H, x′, 0) < |x′_i|/20;
2. e^{tail}_i(H, A ⋆ (1, w), x′) < |x′_i|/20 for all w ∈ W;
3. for every s ∈ [1 : d] the set A ⋆ (0, e_s) is balanced (as per Definition 2.13),
then i ∈ L, i.e. i is successfully located in LOCATESIGNAL.
We now upper bound the probability that an element i ∈ S is not located. We let µ^2 := ||x_{[n]^d\[k]}||_2^2/k to simplify notation.
Contribution from head elements. We need to bound, for i ∈ S, the quantity
e^{head}_i(H, x′, 0) = G^{−1}_{o_i(i)}·Σ_{j∈S\{i}} G_{o_i(j)}·|x′_j|.
Recall that m(x̂, H, a ⋆ (1, w)) = HASHTOBINS(x̂, 0, (H, a ⋆ (1, w))), and let m := m(x̂, H, a ⋆ (1, w)) to simplify notation. By Lemma 2.9, (1) one has
E_H[ max_{a∈[n]^d} |G^{−1}_{o_i(i)}·ω^{−a^T Σi}·m_{h(i)} − (x′_S)_i| ] ≤ (2π)^{d·F}·C^d·||x′_S||_1/B + µ/N^2    (38)
for a constant C > 0. This yields
E_H[e^{head}_i(H, x′, 0)] ≤ (2π)^{d·F}·C^d·||x′_S||_1/B ≲ (2π)^{d·F}·C^d·µk/B ≲ α^d·C^d·ǫµ
by the choice of B in RECOVERATCONSTANTSNR. Now by Markov's inequality we have, for each i ∈ [n]^d,
Pr_H[e^{head}_i(H, x′, 0) > |x′_i|/20] ≲ α^d·C^d·ǫµ/|x′_i| ≲ αǫµ/|x′_i|    (39)
as long as α is smaller than a constant.
Contribution from tail elements. We restate the definitions of the e^{tail} variables here for the convenience of the reader (see (9), (10), (11) and (12)). We have
e^{tail}_i(H, z, x) := G^{−1}_{o_i(i)}·Σ_{j∈[n]^d\S} G_{o_i(j)}·x_j·ω^{z^T Σ(j−i)}.
For any Z ⊆ [n]^d we have
e^{tail}_i(H, Z, x) := quant^{1/5}_{z∈Z} G^{−1}_{o_i(i)}·Σ_{j∈[n]^d\S} G_{o_i(j)}·x_j·ω^{z^T Σ(j−i)}.
Note that the algorithm first selects sets A_r ⊆ [n]^d × [n]^d, and then accesses the signal at the locations given by A_r ⋆ (1, w), w ∈ W (after permuting the input space).
The definition of e^{tail}_i(H, A_r, x′) for a permutation π = (Σ, q) allows us to capture the amount of noise that our measurements for locating a specific set of bits of Σi suffer from. Since the algorithm requires all w ∈ W to be not too noisy in order to succeed (see preconditions 2 and 3 of Lemma 5.1), we have
e^{tail}_i(H, A, x′) = 40µ_{H,i}(x′) + Σ_{w∈W} | e^{tail}_i(H, A ⋆ (1, w), x′) − 40µ_{H,i}(x′) |_+ ,
where for any η ∈ R one has |η|_+ = η if η > 0 and |η|_+ = 0 otherwise.
For each i ∈ S we now define an error event E∗_i whose non-occurrence implies location of element i, and then show that for each i ∈ S one has
Pr_{H,A}[E∗_i] ≲ αǫµ^2/|x′_i|^2.    (40)
Once we have (40), together with (39) it allows us to prove the main result of the lemma. In what follows we concentrate on proving (40). Specifically, for each i ∈ S define
E∗_i = {(H, A) : ∃w ∈ W s.t. e^{tail}_i(H, A ⋆ (1, w), x′) > |x′_i|/20}.
Recall that e^{tail}_i(H, z, x′) = HASHTOBINS(x̂_{[n]^d\S}, χ_{[n]^d\S}, (H, z)) by definition of the measurements m. By Lemma 2.9, (3) one has, for a uniformly random z ∈ [n]^d, that E_z[|e^{tail}_i(H, z, x′)|^2] = µ^2_{H,i}(x′). By Lemma 2.9, (2), we have that
E_H[µ^2_{H,i}(x′)] ≤ (2π)^{2d·F}·C^d·||(x − χ)_{[n]^d\S}||_2^2/B + µ^2/N^2 ≤ αǫµ^2.    (41)
Thus by Markov's inequality
Pr_z[ |e^{tail}_i(H, z, x′)|^2 > (|x′_i|/20)^2 ] ≤ (µ_{H,i}(x′))^2/(|x′_i|/20)^2.
Combining this with Lemma 9.5, we get, for all τ ≤ (1/20)·(|x′_i|/20) and all w ∈ W,
Pr_A[ quant^{1/5}_{z∈A⋆(1,w)} e^{tail}_i(H, z, x′) > |x′_i|/20 | µ_{H,i}(x′) = τ ] < (4τ/(|x′_i|/20))^{Ω(|A|)}.    (42)
Equipped with the bounds above, we now bound Pr[E∗_i]. To that effect, for each τ > 0 let the event E_i(τ) be defined as E_i(τ) = {µ_{H,i}(x′) = τ}. Note that since we assume that we operate on O(log n) bit integers, µ_{H,i}(x′) takes on a finite number of values, and hence E_i(τ) is well-defined. It is convenient to bound Pr[E∗_i] as a sum of three terms:
Pr_{H,A}[E∗_i] ≤ Pr_{H,A}[ e^{tail}_i(H, A, x′) > |x′_i|/20 ∧ ⋃_{τ≤√(αǫ)·µ} E_i(τ) ]
+ ∫_{√(αǫ)·µ}^{(1/8)(|x′_i|/20)} Pr_{H,A}[ e^{tail}_i(H, A, x′) > |x′_i|/20 | E_i(τ) ]·Pr[E_i(τ)]dτ
+ ∫_{(1/8)(|x′_i|/20)}^{∞} Pr[E_i(τ)]dτ.
We now bound each of the three terms separately for i such that |x′_i|/20 ≥ 2√(αǫ)·µ_{H,i}(x′). This is sufficient for our purposes, as the other elements only contribute a small amount of ℓ2^2 mass.
1. By (42) and a union bound over W, the first term is bounded by
|W|·(√(αǫ)·µ/(|x′_i|/20))^{Ω(|A|)} ≤ αǫµ^2/|x′_i|^2·|W|·2^{−Ω(|A|)} ≤ αǫµ^2/|x′_i|^2,    (43)
since |A| ≥ C log log N for a sufficiently large constant C > 0 in RECOVERATCONSTANTSNR.
2. The second term, again by a union bound over W and using (42), is bounded by
∫_{√(αǫ)·µ}^{(1/8)(|x′_i|/20)} |W|·(4τ/(|x′_i|/20))^{Ω(|A|)}·Pr[E_i(τ)]dτ = ∫_{√(αǫ)·µ}^{(1/8)(|x′_i|/20)} |W|·(4τ/(|x′_i|/20))^{Ω(|A|)−2}·(4τ/(|x′_i|/20))^2·Pr[E_i(τ)]dτ.    (44)
Since |A| ≥ C log log N for a sufficiently large constant C > 0 and (4τ/(|x′_i|/20)) ≤ 1/2 over the whole range of τ by our assumption that |x′_i|/20 ≥ 2√(αǫ)·µ_{H,i}(x′), we have
|W|·(4τ/(|x′_i|/20))^{Ω(|A|)−2} ≤ |W|·(1/2)^{Ω(|A|)} = o(1)
for each τ ∈ [√(αǫ)·µ, (1/8)(|x′_i|/20)]. Thus, (44) is upper bounded by
∫_{√(αǫ)·µ}^{(1/8)(|x′_i|/20)} (4τ/(|x′_i|/20))^2·Pr[E_i(τ)]dτ ≲ (1/(|x′_i|/20)^2)·∫_{√(αǫ)·µ}^{(1/8)(|x′_i|/20)} τ^2·Pr[E_i(τ)]dτ ≤ αǫµ^2/(|x′_i|/20)^2,
since
∫_{√(αǫ)·µ}^{(1/8)(|x′_i|/20)} τ^2·Pr[E_i(τ)]dτ ≤ ∫_0^∞ τ^2·Pr[E_i(τ)]dτ = E_H[µ^2_{H,i}(x′)] = O(α)·ǫµ^2
by (41).
3. For the third term we have
∫_{(1/8)(|x′_i|/20)}^{∞} Pr[E_i(τ)]dτ = Pr[µ_{H,i}(x′) > (1/8)(|x′_i|/20)] ≲ αǫµ^2/|x′_i|^2
by Markov's inequality applied to (41).
Putting the three estimates together, we get Pr[E∗_i] = O(α)·ǫµ^2/|x′_i|^2. Together with (39) this yields, for i ∈ S,
Pr[i ∉ L] ≲ αǫµ^2/|x′_i|^2 + αǫµ/|x′_i|.
In particular,
E[ Σ_{i∈S} |x′_i|^2·1_{i∈S\L} ] ≤ Σ_{i∈S} |x′_i|^2·Pr[i ∉ L] ≲ Σ_{i∈S} |x′_i|^2·(αǫµ/|x′_i| + αǫµ^2/|x′_i|^2) ≲ αǫµ^2·k,
where we used the assumption of the lemma that ||x′_{[2k]}||_1 ≤ O(||x_{[n]^d\[k]}||_2·√k) and ||x′_{[n]^d\[2k]}||_2^2 ≤ ||x_{[n]^d\[k]}||_2^2 in the last line. By Markov's inequality we thus have Pr[||x′_{S\L}||_2^2 > ǫµ^2·k] < 1/10 as long as α is smaller than a constant.
We now upper bound ||x′ − χ′||_2^2. We apply Lemma 8.2 to the vectors x′ and χ′ with the sets S and L′ respectively, getting
||x′ − χ′_{L′}||_2^2 ≤ ||x′ − x′_S||_2^2 + 4||(x′ − χ′)_{S∪L′}||_2^2
≤ ||x_{[n]^d\[k]}||_2^2 + 4||(x′ − χ′)_{S\L}||_2^2 + 4||(x′ − χ′)_{S∩L}||_2^2
≤ ||x_{[n]^d\[k]}||_2^2 + 4ǫµ^2·k + 4ǫµ^2·|S| ≤ ||x_{[n]^d\[k]}||_2^2 + O(ǫµ^2·k),
where we used the fact that ||(x′ − χ′)_{S∩L}||_∞ ≤ √ǫ·µ with probability at least 1 − 1/N over the randomness used in ESTIMATEVALUES, by Lemma 9.1, (3). This completes the proof of correctness.
Sample complexity and runtime. The number of samples taken is bounded by 2^{O(d^2)}·(1/ǫ)·k·log N by Lemma 9.2 and the choice of B. The sample complexity of the call to ESTIMATEVALUES is at most 2^{O(d^2)}·(1/ǫ)·k·log N. The runtime is bounded by 2^{O(d^2)}·(1/ǫ)·k·log^{d+1} N·log log N for computing the measurements m(x̂, H, a ⋆ (1, w)) and 2^{O(d^2)}·(1/ǫ)·k·log^{d+1} N for estimation.
9 Utilities
9.1 Properties of ESTIMATEVALUES
In this section we describe the procedure ESTIMATEVALUES (see Algorithm 6), which, given access to a signal x in the frequency domain (i.e. given x̂), a partially recovered signal χ and a target list of locations L ⊆ [n]^d, estimates the values of the elements in L and outputs those elements that are above a threshold ν in absolute value. The SNR reduction loop uses the thresholding function of ESTIMATEVALUES and passes a nonzero threshold, while RECOVERATCONSTANTSNR uses ν = 0.
Algorithm 6 ESTIMATEVALUES(x̂, χ, L, k, ǫ, ν, r_max)
1: procedure ESTIMATEVALUES(x̂, χ, L, k, ǫ, ν, r_max)  ⊲ r_max controls estimate confidence
2:   B ← (2π)^{4d·F}·k/(ǫα^{2d})
3:   for r = 0 to r_max do
4:     Choose Σ_r ∈ M_{d×d}, q_r, z_r ∈ [n]^d uniformly at random
5:     Let π_r := (Σ_r, q_r), H_r := (π_r, B, F), F = 2d
6:     u_r ← HASHTOBINS(x̂, χ, (H_r, z_r))  ⊲ Using the semi-equispaced Fourier transform (Corollary 10.2)
7:   end for
8:   L′ ← ∅  ⊲ Initialize output list to empty
9:   for f ∈ L do
10:    for r = 0 to r_max do
11:      j ← h_r(f)
12:      w_f^r ← u_{r,j}·G^{−1}_{o_f(f)}·ω^{−z_r^T Σ_r f}  ⊲ Estimate x′_f from each measurement
13:    end for
14:    w_f ← median{w_f^r}_{r=1}^{r_max}  ⊲ The median is taken coordinatewise
15:    If |w_f| > ν then L′ ← L′ ∪ {f}
16:  end for
17:  return w_{L′}
18: end procedure
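The heart of Algorithm 6 is the coordinatewise median over r_max independent noisy measurements, followed by a threshold test. Below is a self-contained sketch of just this estimation step, with i.i.d. Gaussian noise as a hypothetical stand-in for the hashing noise (the function and its parameters are illustrative, not part of the paper's algorithm):

```python
import numpy as np

def estimate_values_toy(true_vals, L, nu, rmax, sigma, rng):
    """For each f in L, form rmax noisy complex measurements of true_vals[f],
    take the coordinatewise (real/imaginary) median as in Algorithm 6, and
    keep f only if the estimate exceeds the threshold nu in absolute value."""
    kept = {}
    for f in L:
        meas = true_vals[f] + sigma * (rng.normal(size=rmax)
                                       + 1j * rng.normal(size=rmax))
        w = np.median(meas.real) + 1j * np.median(meas.imag)
        if abs(w) > nu:
            kept[f] = w
    return kept

rng = np.random.default_rng(2)
true_vals = {0: 10.0 + 5.0j, 1: 0.05 + 0.0j, 2: -8.0j}
kept = estimate_values_toy(true_vals, L=[0, 1, 2], nu=1.0, rmax=25,
                           sigma=0.1, rng=rng)
print(sorted(kept))
```

The large entries pass the threshold while the small one is filtered out; Lemma 9.1 below quantifies the failure probability 2^{−Ω(r_max)} of exactly this median step in the hashing setting.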
Lemma 9.1 (ℓ1/ℓ2 bounds on estimation quality). For any ǫ ∈ (0, 1], any x, χ ∈ C^n with x′ = x − χ, any L ⊆ [n]^d, any integer k and any set S ⊆ [n]^d with |S| ≤ 2k, the following conditions hold. If ν ≥ ||(x − χ)_S||_1/k and µ^2 ≥ ||(x − χ)_{[n]^d\S}||_2^2/k, then the output w of ESTIMATEVALUES(x̂, χ, L, k, ǫ, ν, r_max) satisfies the following bounds if r_max is larger than an absolute constant. For each i ∈ L:
(1) Pr[ |w_i − x′_i| > √(ǫα)·(ν + µ) ] < 2^{−Ω(r_max)};
(2) E[ | |w_i − x′_i| − √(ǫα)·(ν + µ) |_+ ] ≤ √(ǫα)·(ν + µ)·2^{−Ω(r_max)};
(3) E[ | |w_i − x′_i|^2 − ǫα·(ν + µ)^2 |_+ ] ≤ 2^{−Ω(r_max)}·ǫ·(ν^2 + µ^2).
The sample complexity is bounded by (1/ǫ)·2^{O(d^2)}·k·r_max. The runtime is bounded by 2^{O(d^2)}·(1/ǫ)·k·log^{d+1} N·r_max.
Proof. We analyze the vector u_r ← HASHTOBINS(x̂, χ, (H_r, z_r)) using the approximate linearity of HASHTOBINS given by Lemma A.1 (see Appendix A). Writing x′ = x′_S + x′_{[n]^d\S}, we let
u^{head}_r := HASHTOBINS(x̂_S, χ_S, (H_r, z_r)) and u^{tail}_r := HASHTOBINS(x̂_{[n]^d\S}, χ_{[n]^d\S}, (H_r, z_r)).
We apply Lemma 2.9, (1) to the first vector, obtaining
E_{H_r,z_r}[ |G^{−1}_{o_i(i)}·ω^{−z_r^T Σi}·u^{head}_{h(i)} − (x′_S)_i| ] ≤ (2π)^{d·F}·C^d·||x′_S||_1/B + µ/N^2.    (45)
Similarly applying Lemma 2.9, (2) and (3) to u^{tail}, we get
E_{H_r,z_r}[ |G^{−1}_{o_i(i)}·ω^{−z_r^T Σi}·u^{tail}_{h_r(i)} − (x′_{[n]^d\S})_i|^2 ] ≤ (2π)^{2d·F}·C^d·||x′_{[n]^d\S}||_2^2/B,
which by Jensen's inequality implies
E_{H_r,z_r}[ |G^{−1}_{o_i(i)}·ω^{−z_r^T Σi}·u^{tail}_{h(i)} − ((x − χ)_{[n]^d\S})_i| ] ≤ (2π)^{d·F}·C^d·√(||x′_{[n]^d\S}||_2^2/B) ≤ (2π)^{d·F}·C^d·µ·√(k/B).    (46)
Putting (45) and (46) together and using Lemma A.1, we get
E_{H_r,z_r}[ |G^{−1}_{o_i(i)}·ω^{−z_r^T Σi}·u_{h(i)} − (x − χ)_i| ] ≤ (2π)^{d·F}·C^d·(||x′_S||_1/B + µ·√(k/B)).    (47)
We hence get, by Markov's inequality together with the choice B = (2π)^{4d·F}·k/(ǫα^{2d}) in ESTIMATEVALUES (see Algorithm 6),
Pr_{H_r,z_r}[ |G^{−1}_{o_i(i)}·ω^{−z_r^T Σi}·u_{h(i)} − (x − χ)_i| > (1/2)·√(ǫα)·(ν + µ) ] ≤ (Cα)^{d/2}.    (48)
The rhs is smaller than 1/10 as long as α is smaller than an absolute constant.
Since w_i is obtained by taking the median in the real and imaginary components, we get by Lemma 9.4
|w_i − x′_i| ≤ 2·median(|w_i^1 − x′_i|, ..., |w_i^{r_max} − x′_i|).
By (48) combined with Lemma 9.5 with γ = 1/10 we thus have
Pr_{{H_r,z_r}}[ |w_i − x′_i| > √(ǫα)·(ν + µ) ] < 2^{−Ω(r_max)}.
This establishes (1). (2) follows similarly by applying the first bound from Lemma 9.5 with γ = 1/2 to the random variables X_r = |w_i^r − x_i|, r = 1, ..., r_max, and Y = |w_i − x_i|. The third claim of the lemma follows analogously.
The sample and runtime bounds follow by Lemma 9.2 and Lemma 10.1 by the choice of parameters.
9.2 Properties of HASHTOBINS
Lemma 9.2. HASHTOBINS(x̂, χ, (H, a)) computes u such that for any i ∈ [n]^d,
u_{h(i)} = ∆_{h(i)} + Σ_j G_{o_i(j)}·(x − χ)_j·ω^{a^T Σj},
where G is the filter defined in Section 2, and ∆^2_{h(i)} ≤ ||χ||_2^2/((R∗)^2·N^{11}) is a negligible error term. It takes O(B·F^d) samples, and if ||χ||_0 ≲ B, it takes O(2^{O(d)}·B·log^d N) time.
Algorithm 7 Hashing using Fourier samples (analyzed in Lemma 9.2)
1: procedure HASHTOBINS(x̂, χ, (H, a))
2:   G ← filter with B buckets, F = 2d  ⊲ H = (π, B, F), π = (Σ, q)
3:   Compute y′ = Ĝ·P_{Σ,a,q}(x̂ − χ̂′), for some χ′ with ||χ̂ − χ̂′||_∞ < N^{−Ω(c)}  ⊲ c is a large constant
4:   Compute u_j = √N·F^{−1}(y′)_{(n/b)·j} for j ∈ [b]^d
5:   return u
6: end procedure
b so |S| . (2F )d · B and in fact S ⊂ B∞ 1/d (0).
Proof. Let S = supp(G),
F ·B
First, H ASH T O B INS computes
b · PΣ,a,q x\
b · PΣ,a,q x
b · PΣ,a,q χ\
\
y′ = G
− χ′ = G
−χ+G
− χ′ ,
for an approximation $\hat{\chi}'$ to $\hat{\chi}$. This is efficient because one can compute $(P_{\Sigma,a,q}\hat{x})_S$ with $O(|S|)$ time and samples, and $(P_{\Sigma,a,q}\hat{\chi}')_S$ is easily computed from $\hat{\chi}'_T$ for $T = \{\Sigma(j - b) : j \in S\}$. Since $T$ is an image of an $\ell_\infty$ ball under a linear transformation and $\chi$ is $B$-sparse, by Corollary 10.2, an approximation $\hat{\chi}'$ to $\hat{\chi}$ can be computed in $O(2^{O(d)} \cdot B \log^d N)$ time such that $|\hat{\chi}_i - \hat{\chi}'_i| < N^{-\Omega(c)}$ for all $i \in T$. Since $\|\hat{G}\|_1 \le \sqrt{N}\|\hat{G}\|_2 = \sqrt{N}\|G\|_2 \le N\|G\|_\infty \le N$ and $\hat{G}$ is 0 outside $S$, this implies that
$$\|\hat{G}\cdot P_{\Sigma,a,q}\widehat{(\chi-\chi')}\|_2 \le \|\hat{G}\|_1 \max_{i\in S}\bigl|(P_{\Sigma,a,q}\widehat{(\chi-\chi')})_i\bigr| = \|\hat{G}\|_1 \max_{i\in T}\bigl|\widehat{(\chi-\chi')}_i\bigr| \le N^{-\Omega(c)} \qquad (49)$$
as long as $c$ is larger than an absolute constant. Define $\Delta$ by $\hat{\Delta} = \sqrt{N}\,\hat{G}\cdot P_{\Sigma,a,q}\widehat{(\chi-\chi')}$. Then HashToBins computes $u \in \mathbb{C}^B$ such that for all $i$,
$$u_{h(i)} = \sqrt{N}\, F^{-1}(y')_{(n/b)\cdot h(i)} = \sqrt{N}\, F^{-1}(y)_{(n/b)\cdot h(i)} + \Delta_{(n/b)\cdot h(i)},$$
for $y = \hat{G}\cdot P_{\Sigma,a,q}\widehat{(x-\chi)}$. This computation takes $O(\|y'\|_0 + B\log B) \lesssim B\log(N)$ time. We have by the convolution theorem that
$$\begin{aligned}
u_{h(i)} &= \sqrt{N}\, F^{-1}\bigl(\hat{G}\cdot P_{\Sigma,a,q}\widehat{(x-\chi)}\bigr)_{(n/b)\cdot h(i)} + \Delta_{(n/b)\cdot h(i)}\\
&= \bigl(G * F(P_{\Sigma,a,q}(x-\chi))\bigr)_{(n/b)\cdot h(i)} + \Delta_{(n/b)\cdot h(i)}\\
&= \sum_{\pi(j)\in[N]} G_{(n/b)\cdot h(i)-\pi(j)}\,F(P_{\Sigma,a,q}(x-\chi))_{\pi(j)} + \Delta_{(n/b)\cdot h(i)}\\
&= \sum_{j\in[N]} G_{o_i(j)}\,(x-\chi)_j\,\omega^{a^T\Sigma j} + \Delta_{(n/b)\cdot h(i)}
\end{aligned}$$
where the last step is the definition of oi (j) and Lemma 2.2.
Finally, we note that
$$|\Delta_{(n/b)\cdot h(i)}| \le \|\Delta\|_2 = \|\hat{\Delta}\|_2 = \sqrt{N}\,\|\hat{G}\cdot P_{\Sigma,a,q}\widehat{(\chi-\chi')}\|_2 \le N^{-\Omega(c)},$$
where we used (49) in the last step. This completes the proof.
9.3 Lemmas on quantiles and the median estimator
In this section we prove several lemmas useful for analyzing the concentration properties of the median estimate.
We will use
Theorem 9.3 (Chernoff bound). Let $X_1, \ldots, X_n$ be independent 0/1 Bernoulli random variables with $\sum_{i=1}^n E[X_i] = \mu$. Then for any $\delta > 1$ one has $\Pr[\sum_{i=1}^n X_i > (1+\delta)\mu] < e^{-\delta\mu/3}$.
Lemma 9.4 (Error bounds for the median estimator). Let X1 , . . . , Xn ∈ C be independent random variables.
Let Y := median(X1 , . . . , Xn ), where the median is applied coordinatewise. Then for any a ∈ C one has
$$|Y - a| \le 2\,\mathrm{median}(|X_1 - a|, \ldots, |X_n - a|) = 2\sqrt{\mathrm{median}(|X_1 - a|^2, \ldots, |X_n - a|^2)}.$$
Proof. Let $i, j \in [n]$ be such that $Y = \mathrm{re}(X_i) + \mathbf{i}\cdot\mathrm{im}(X_j)$. Suppose that $\mathrm{re}(X_i) \ge \mathrm{re}(a)$ (the other case is analogous). Then since $\mathrm{re}(X_i)$ is the median in the list $(\mathrm{re}(X_1), \ldots, \mathrm{re}(X_n))$ by definition of $Y$, at least half of the $X_s$, $s = 1, \ldots, n$, satisfy $|\mathrm{re}(X_s) - \mathrm{re}(a)| \ge |\mathrm{re}(X_i) - \mathrm{re}(a)|$, and hence
$$|\mathrm{re}(X_i) - \mathrm{re}(a)| \le \mathrm{median}(|\mathrm{re}(X_1) - \mathrm{re}(a)|, \ldots, |\mathrm{re}(X_n) - \mathrm{re}(a)|). \qquad (50)$$
Since squaring a list of nonnegative numbers preserves the order, we also have
$$(\mathrm{re}(X_i) - \mathrm{re}(a))^2 \le \mathrm{median}((\mathrm{re}(X_1) - \mathrm{re}(a))^2, \ldots, (\mathrm{re}(X_n) - \mathrm{re}(a))^2). \qquad (51)$$
A similar argument holds for the imaginary part. Combining
$$|Y - a|^2 = (\mathrm{re}(a) - \mathrm{re}(X_i))^2 + (\mathrm{im}(a) - \mathrm{im}(X_j))^2$$
with (51) gives
$$|Y - a|^2 \le \mathrm{median}((\mathrm{re}(X_1) - \mathrm{re}(a))^2, \ldots, (\mathrm{re}(X_n) - \mathrm{re}(a))^2) + \mathrm{median}((\mathrm{im}(X_1) - \mathrm{im}(a))^2, \ldots, (\mathrm{im}(X_n) - \mathrm{im}(a))^2).$$
Noting that
$$|Y - a| = \bigl((\mathrm{re}(a) - \mathrm{re}(X_i))^2 + (\mathrm{im}(a) - \mathrm{im}(X_j))^2\bigr)^{1/2} \le |\mathrm{re}(a) - \mathrm{re}(X_i)| + |\mathrm{im}(a) - \mathrm{im}(X_j)|$$
and using (50), we also get
$$|Y - a| \le \mathrm{median}(|\mathrm{re}(X_1) - \mathrm{re}(a)|, \ldots, |\mathrm{re}(X_n) - \mathrm{re}(a)|) + \mathrm{median}(|\mathrm{im}(X_1) - \mathrm{im}(a)|, \ldots, |\mathrm{im}(X_n) - \mathrm{im}(a)|).$$
The results of the lemma follow by noting that $|\mathrm{re}(X) - \mathrm{re}(a)| \le |X - a|$ and $|\mathrm{im}(X) - \mathrm{im}(a)| \le |X - a|$.
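The first inequality of the lemma can be checked numerically; the sketch below (my own illustration, not from the paper; `complex_median` is a hypothetical name) takes the coordinatewise median exactly as in the lemma and tests it on random inputs with an odd number of samples:

```python
import random
import statistics

def complex_median(xs):
    """Coordinatewise median: median of real parts + i * median of imaginary parts."""
    return complex(statistics.median(z.real for z in xs),
                   statistics.median(z.imag for z in xs))

random.seed(0)
for _ in range(1000):
    n = 2 * random.randint(1, 10) + 1            # odd number of samples
    xs = [complex(random.uniform(-5, 5), random.uniform(-5, 5)) for _ in range(n)]
    a = complex(random.uniform(-5, 5), random.uniform(-5, 5))
    y = complex_median(xs)
    bound = 2 * statistics.median(abs(x - a) for x in xs)
    assert abs(y - a) <= bound + 1e-12           # first inequality of Lemma 9.4
```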
Lemma 9.5. Let $X_1, \ldots, X_n \ge 0$ be independent random variables with $E[X_i] \le \mu$ for each $i = 1, \ldots, n$. Then for any $\gamma \in (0,1)$, if $Y \le \mathrm{quant}_\gamma(X_1, \ldots, X_n)$, then
$$E[|Y - 4\mu/\gamma|_+] \le (\mu/\gamma)\cdot 2^{-\Omega(n)} \quad\text{and}\quad \Pr[Y \ge 4\mu/\gamma] \le 2^{-\Omega(n)}.$$
Proof. For any $t \ge 1$, by Markov's inequality $\Pr[X_i > t\mu/\gamma] \le \gamma/t$. Define indicator random variables $Z_i$ by letting $Z_i = 1$ if $X_i > t\mu/\gamma$ and $Z_i = 0$ otherwise. Note that $E[Z_i] \le \gamma/t$ for each $i$. Then since $Y$ is bounded above by the $\gamma n$-th largest of $\{X_i\}_{i=1}^n$, we have $\Pr[Y > t\mu/\gamma] \le \Pr[\sum_{i=1}^n Z_i \ge \gamma n]$. As $E[Z_i] \le \gamma/t$, this can only happen if the sum $\sum_{i=1}^n Z_i$ exceeds its expectation by a factor of at least $t$. We now apply Theorem 9.3 to the sequence $Z_i$, $i = 1, \ldots, n$. We have
$$\Pr\left[\sum_{i=1}^n Z_i \ge \gamma n\right] \le e^{-(t-1)\gamma n/3} \qquad (52)$$
by Theorem 9.3 invoked with $\delta = t - 1$. The assumptions of Theorem 9.3 are satisfied as long as $t > 2$. This proves the second claim, since we have $t = 4$ in that case.
For the first claim we have
$$\begin{aligned}
E[Y\cdot 1_{Y\ge 4\mu/\gamma}] &\le \int_4^\infty t\mu\cdot\Pr[Y \ge t\cdot\mu/\gamma]\,dt\\
&\le \int_4^\infty t\mu\, e^{-(t-1)n/3}\,dt \qquad\text{(by (52))}\\
&\le e^{-n/3}\int_4^\infty t\mu\, e^{-(t-2)n/3}\,dt\\
&= O(\mu\cdot e^{-n/3})
\end{aligned}$$
as required.
10 Semi-equispaced Fourier Transform
In this section we give an algorithm for computing the semi-equispaced Fourier transform, prove its correctness
and give runtime bounds.
Algorithm 8 Semi-equispaced Fourier Transform in $2^{O(d)} k\log^d N$ time
1: procedure SemiEquispacedFFT$(x, c)$    ⊲ $x \in \mathbb{C}^{[n]^d}$ is $k$-sparse
2:    Let $B \ge 2^d k$ be a power of $2^d$, $b = B^{1/d}$
3:    $G, \hat{G}' \leftarrow$ $d$-th tensor powers of the flat window functions of [HIKP12a], see below
4:    $y_i \leftarrow \frac{1}{\sqrt{N}}(x * G)_{i\cdot\frac{n}{2b}}$ for $i \in [2b]^d$
5:    $\hat{y} \leftarrow \mathrm{FFT}(y)$    ⊲ FFT on $[2b]^d$
6:    $\hat{x}'_i \leftarrow \hat{y}_i$ for $\|i\|_\infty \le b/2$
7:    return $\hat{x}'$
8: end procedure
We define filters $G, \hat{G}'$ as $d$-th tensor powers of the flat window functions of [HIKP12a], so that $G_i = 0$ for all $\|i\|_\infty \gtrsim c(n/b)\log N$, $\|\hat{G} - \hat{G}'\|_2 \le N^{-c}$,
$$\hat{G}'_i = \begin{cases} 1 & \text{if } \|i\|_\infty \le b/2,\\ 0 & \text{if } \|i\|_\infty > b, \end{cases}$$
and $\hat{G}'_i \in [0,1]$ everywhere.
The following is similar to results of [DR93, IKP14].
Lemma 10.1. Let $n$ be a power of two, $N = n^d$, $c \ge 2$ a constant. Let integer $B \ge 1$ be a power of $2^d$, $b = B^{1/d}$. For any $x \in \mathbb{C}^{[n]^d}$, Algorithm 8 computes $\hat{x}'_i$ for all $\|i\|_\infty \le b/2$ such that
$$|\hat{x}'_i - \hat{x}_i| \le \|x\|_2/N^c$$
in $c^{O(d)}\|x\|_0\log^d N + 2^{O(d)} B\log B$ time.
Proof. Define
$$z = \frac{1}{\sqrt{N}}\,x * G.$$
We have that $\hat{z}_i = \hat{x}_i\hat{G}_i$ for all $i \in [n]^d$. Furthermore, because subsampling and aliasing are dual under the Fourier transform, since $y_i = z_{i\cdot(n/2b)}$, $i \in [2b]^d$ is a subsampling of $z$, we have for $i$ such that $\|i\|_\infty \le b/2$ that
$$\begin{aligned}
\hat{x}'_i = \hat{y}_i &= \sum_{j\in[n/(2b)]^d} \hat{z}_{i+2b\cdot j}\\
&= \sum_{j\in[n/(2b)]^d} \hat{x}_{i+2b\cdot j}\,\hat{G}_{i+2b\cdot j}\\
&= \sum_{j\in[n/(2b)]^d} \hat{x}_{i+2b\cdot j}\,\hat{G}'_{i+2b\cdot j} + \sum_{j\in[n/(2b)]^d} \hat{x}_{i+2b\cdot j}\,\bigl(\hat{G}_{i+2b\cdot j} - \hat{G}'_{i+2b\cdot j}\bigr).
\end{aligned}$$
For the second term we have, using Cauchy-Schwarz,
$$\Bigl|\sum_{j\in[n/(2b)]^d} \hat{x}_{i+2b\cdot j}\,\bigl(\hat{G}_{i+2b\cdot j} - \hat{G}'_{i+2b\cdot j}\bigr)\Bigr| \le \|x\|_2\,\|\hat{G} - \hat{G}'\|_2 \le \|x\|_2/N^c.$$
For the first term we have
$$\sum_{j\in[n/(2b)]^d} \hat{x}_{i+2b\cdot j}\,\hat{G}'_{i+2b\cdot j} = \hat{x}_i\cdot\hat{G}'_{i+2b\cdot 0} = \hat{x}_i$$
for all $i \in [2b]^d$ such that $\|i\|_\infty \le b/2$, since for any $j \ne 0$ the argument of $\hat{G}'_{i+2b\cdot j}$ is larger than $b$ in $\ell_\infty$ norm, and hence $\hat{G}'_{i+2b\cdot j} = 0$ for all $j \ne 0$.
Putting these bounds together we get that
$$|\hat{x}'_i - \hat{x}_i| \le \|\hat{x}\|_2\,\|\hat{G} - \hat{G}'\|_2 \le \|x\|_2 N^{-c}$$
as desired.
The time complexity of computing the FFT of $y$ is $2^{O(d)} B\log B$. The vector $y$ can be constructed in time $c^{O(d)}\|x\|_0\log^d N$. This is because the support of $G_i$ is localized, so that each nonzero coordinate $i$ of $x$ only contributes to $c^{O(d)}\log^d N$ entries of $y$.
We will need the following simple generalization:
Corollary 10.2. Let $n$ be a power of two, $N = n^d$, $c \ge 2$ a constant, and $\Sigma \in M_{d\times d}$, $q \in [n]^d$. Let integer $B \ge 1$ be a power of $2^d$, $b = B^{1/d}$. Let $S = \{\Sigma(i-q) : i \in \mathbb{Z}^d, \|i\|_\infty \le b/2\}$. Then for any $x \in \mathbb{C}^{[n]^d}$ we can compute $\hat{x}'_i$ for all $i \in S$ such that
$$|\hat{x}'_i - \hat{x}_i| \le \|x\|_2/N^c$$
in $c^{O(d)}\|x\|_0\log^d N + 2^{O(d)} B\log B$ time.
Proof. Define $x^*_j = \omega^{j^T q}\,x_{\Sigma^{-T}j}$. Then for all $i \in [n]^d$,
$$\begin{aligned}
\hat{x}_{\Sigma(i-q)} &= \frac{1}{\sqrt{N}}\sum_{j\in[n]^d} \omega^{-j^T\Sigma(i-q)}\,x_j\\
&= \frac{1}{\sqrt{N}}\sum_{j\in[n]^d} \omega^{-j^T\Sigma i}\,\omega^{j^T\Sigma q}\,x_j\\
&= \frac{1}{\sqrt{N}}\sum_{j'=\Sigma^T j\in[n]^d} \omega^{-(j')^T i}\,\omega^{(j')^T q}\,x_{\Sigma^{-T}j'}\\
&= \frac{1}{\sqrt{N}}\sum_{j'\in[n]^d} \omega^{-(j')^T i}\,x^*_{j'}\\
&= \hat{x}^*_i.
\end{aligned}$$
We can access $x^*$ with $O(d^2)$ overhead, so by Lemma 10.1 we can approximate $\hat{x}_{\Sigma(i-q)} = \hat{x}^*_i$ for $\|i\|_\infty \le k$ in $c^{O(d)} k\log^d N$ time.
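In one dimension the change of variables above reads $\hat{x}_{\sigma(i-q)} = \hat{x}^*_i$ with $x^*_j = \omega^{jq}\,x_{\sigma^{-1}j \bmod n}$ for an odd (hence invertible mod $n$) multiplier $\sigma$. A small numerical sketch (my own illustration, using an unnormalized DFT):

```python
import cmath
import random

def dft(v):
    """Unnormalized DFT: v̂[k] = Σ_t v[t]·exp(-2πi·t·k/n)."""
    n = len(v)
    return [sum(v[t] * cmath.exp(-2j * cmath.pi * t * k / n) for t in range(n))
            for k in range(n)]

random.seed(2)
n, sigma, q = 8, 3, 2              # sigma invertible mod n: 3·3 = 9 ≡ 1 (mod 8)
sigma_inv = 3
w = cmath.exp(2j * cmath.pi / n)
x = [complex(random.random(), random.random()) for _ in range(n)]
xs = [w ** (j * q) * x[(sigma_inv * j) % n] for j in range(n)]  # x*_j = ω^{jq}·x_{σ⁻¹j}
xh, xsh = dft(x), dft(xs)
for i in range(n):
    assert abs(xsh[i] - xh[(sigma * (i - q)) % n]) < 1e-9       # x̂*_i = x̂_{σ(i−q)}
```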
11 Acknowledgements
The author would like to thank Piotr Indyk for many useful discussions at various stages of this work.
References
[AGS03]
A. Akavia, S. Goldwasser, and S. Safra. Proving hard-core predicates using list decoding. FOCS,
44:146–159, 2003.
[Aka10]
A. Akavia. Deterministic sparse Fourier approximation via fooling arithmetic progressions. COLT,
pages 381–393, 2010.
[BCG+ 12] P. Boufounos, V. Cevher, A. C. Gilbert, Y. Li, and M. J. Strauss. What’s the frequency, Kenneth?:
Sublinear Fourier sampling off the grid. RANDOM/APPROX, 2012.
[Bou14]
J. Bourgain. An improved estimate in the restricted isometry problem. GAFA, 2014.
[CCFC02] M. Charikar, K. Chen, and M. Farach-Colton. Finding frequent items in data streams. ICALP,
2002.
[CGV12]
M. Cheraghchi, V. Guruswami, and A. Velingker. Restricted isometry of Fourier matrices and list
decodability of random linear codes. SODA, 2012.
[Cip00]
B. A. Cipra. The Best of the 20th Century: Editors Name Top 10 Algorithms. SIAM News, 33,
2000.
[CP10]
E. Candes and Y. Plan. A probabilistic and ripless theory of compressed sensing. IEEE Transactions on Information Theory, 2010.
[CT06]
E. Candes and T. Tao. Near optimal signal recovery from random projections: Universal encoding
strategies. IEEE Trans. on Info.Theory, 2006.
[DIPW10] K. Do Ba, P. Indyk, E. Price, and D. Woodruff. Lower Bounds for Sparse Recovery. SODA, 2010.
[Don06]
D. Donoho. Compressed sensing. IEEE Transactions on Information Theory, 52(4):1289–1306,
2006.
[DR93]
A. Dutt and V. Rokhlin. Fast Fourier transforms for nonequispaced data. SIAM J. Sci. Comput., 14(6):1368–1393, November 1993.
[GGI+ 02] A. Gilbert, S. Guha, P. Indyk, M. Muthukrishnan, and M. Strauss. Near-optimal sparse Fourier
representations via sampling. STOC, 2002.
[GHI+ 13] B. Ghazi, H. Hassanieh, P. Indyk, D. Katabi, E. Price, and L. Shi. Sample-optimal average-case
sparse Fourier transform in two dimensions. Allerton, 2013.
[GL89]
O. Goldreich and L. Levin. A hard-core predicate for all one-way functions. STOC, pages 25–32,
1989.
[GLPS10] A. C. Gilbert, Y. Li, E. Porat, and M. J. Strauss. Approximate sparse recovery: optimizing time
and measurements. In STOC, pages 475–484, 2010.
[GMS05]
A. Gilbert, M. Muthukrishnan, and M. Strauss. Improved time bounds for near-optimal space
Fourier representations. SPIE Conference, Wavelets, 2005.
[HAKI12] H. Hassanieh, F. Adib, D. Katabi, and P. Indyk. Faster GPS via the Sparse Fourier Transform.
MOBICOM, 2012.
[HIKP12a] H. Hassanieh, P. Indyk, D. Katabi, and E. Price. Near-optimal algorithm for sparse Fourier transform. STOC, 2012.
[HIKP12b] H. Hassanieh, P. Indyk, D. Katabi, and E. Price. Simple and practical algorithm for sparse Fourier
transform. SODA, 2012.
[HKPV13] S. Heider, S. Kunis, D. Potts, and M. Veit. A sparse Prony FFT. SAMPTA, 2013.
[HR16]
I. Haviv and O. Regev. The restricted isometry property of subsampled fourier matrices. SODA,
2016.
[IK14]
P. Indyk and M. Kapralov. Sample-optimal Fourier sampling in any fixed dimension. FOCS, 2014.
[IKP14]
P. Indyk, M. Kapralov, and E. Price. (Nearly) sample-optimal sparse Fourier transform. SODA,
2014.
[Iwe10]
M. A. Iwen. Combinatorial sublinear-time Fourier algorithms. Foundations of Computational
Mathematics, 10:303–338, 2010.
[Iwe12]
M.A. Iwen. Improved approximation guarantees for sublinear-time Fourier algorithms. Applied
And Computational Harmonic Analysis, 2012.
[KM91]
E. Kushilevitz and Y. Mansour. Learning decision trees using the Fourier spectrum. STOC, 1991.
[LDSP08] M. Lustig, D.L. Donoho, J.M. Santos, and J.M. Pauly. Compressed sensing MRI. Signal Processing
Magazine, IEEE, 25(2):72–82, 2008.
[LWC12]
D. Lawlor, Y. Wang, and A. Christlieb. Adaptive sub-linear time Fourier algorithms. arXiv:1207.6368, 2012.
[Man92]
Y. Mansour. Randomized interpolation and approximation of sparse polynomials. ICALP, 1992.
[PR13]
S. Pawar and K. Ramchandran. Computing a $k$-sparse $n$-length Discrete Fourier Transform using at most $4k$ samples and $O(k\log k)$ complexity. ISIT, 2013.
[PS15]
E. Price and Z. Song. A robust sparse Fourier transform in the continuous setting. FOCS, 2015.
[RV08]
M. Rudelson and R. Vershynin. On sparse reconstruction from Fourier and Gaussian measurements. CPAM, 61(8):1025–1171, 2008.
[Sid11]
Emil Sidky. What does compressive sensing mean for X-ray CT and comparisons with its MRI
application. In Conference on Mathematics of Medical Imaging, 2011.
A Omitted proofs
Proof of Lemma 2.11: We start with
$$E_{\Sigma,q}\bigl[|\pi(S\setminus\{i\}) \cap B^\infty_{(n/b)\cdot h(i)}((n/b)\cdot 2^t)|\bigr] = \sum_{j\in S\setminus\{i\}} \Pr_{\Sigma,q}\bigl[\pi(j) \in B^\infty_{(n/b)\cdot h(i)}((n/b)\cdot 2^t)\bigr] \qquad (53)$$
Recall that by definition of $h(i)$ one has $\|(n/b)\cdot h(i) - \pi(i)\|_\infty \le (n/b)$, so by the triangle inequality
$$\|\pi(j) - \pi(i)\|_\infty \le \|\pi(j) - (n/b)h(i)\|_\infty + \|\pi(i) - (n/b)h(i)\|_\infty,$$
so
$$\begin{aligned}
E_{\Sigma,q}\bigl[|\pi(S\setminus\{i\}) \cap B^\infty_{(n/b)\cdot h(i)}((n/b)\cdot 2^t)|\bigr] &\le \sum_{j\in S\setminus\{i\}} \Pr_{\Sigma,q}\bigl[\pi(j) \in B^\infty_{\pi(i)}((n/b)\cdot(2^t+1))\bigr]\\
&\le \sum_{j\in S\setminus\{i\}} \Pr_{\Sigma,q}\bigl[\pi(j) \in B^\infty_{\pi(i)}((n/b)\cdot 2^{t+1})\bigr] \qquad (54)
\end{aligned}$$
Since $\pi_{\Sigma,q}(i) = \Sigma(i-q)$ for all $i \in [n]^d$, we have
$$\Pr_{\Sigma,q}\bigl[\pi(j) \in B^\infty_{\pi(i)}((n/b)\cdot 2^{t+1})\bigr] = \Pr_{\Sigma,q}\bigl[\|\Sigma(j-i)\|_\infty \le (n/b)\cdot 2^{t+1}\bigr] \le 2(2^{t+2}/b)^d,$$
where we used the fact that by Lemma 2.5, for any fixed $i$, $j \ne i$ and any radius $r \ge 0$,
$$\Pr_\Sigma\bigl[\|\Sigma(i-j)\|_\infty \le r\bigr] \le 2(2r/n)^d \qquad (55)$$
with $r = (n/b)\cdot 2^{t+1}$.
Putting this together with (54), we get
$$E_{\Sigma,q}\bigl[|\pi(S\setminus\{i\}) \cap B^\infty_{(n/b)\cdot h(i)}((n/b)\cdot 2^t)|\bigr] \le |S|\cdot 2(2^{t+2}/b)^d \le (|S|/B)\cdot 2^{(t+2)d+1} \le \frac{1}{4}(2\pi)^{-d\cdot F}\cdot 64^{-(d+F)}\,\alpha^d\, 2^{(t+2)d+1}.$$
4
Now by Markov’s inequality we have that i fails to be isolated at scale t with probability at most
i 1
h
t
−d·F
−(d+F ) d/2 (t+2)d+t+1
≤ 2−t αd/2 .
((n/b)
·
2
)|
>
(2π)
·
64
α
2
PrΣ,q |π(S \ {i}) ∩ B∞
π(i)
4
Taking the union bound over all $t \ge 0$, we get
$$\Pr_{\Sigma,q}[i\text{ is not isolated}] \le \sum_{t\ge 0}\frac{1}{4}\,2^{-t}\alpha^{d/2} \le \frac{1}{2}\alpha^{d/2} \le \frac{1}{2}\alpha^{1/2}$$
as required.
Before giving a proof of Lemma 2.9, we state the following lemma, which is immediate from Lemma 9.2:
Lemma A.1. Let $x, x^1, x^2, \chi, \chi^1, \chi^2 \in \mathbb{C}^N$, $x = x^1 + x^2$, $\chi = \chi^1 + \chi^2$. Let $\Sigma \in M_{d\times d}$, $q, a \in [n]^d$, $B = b^d$, $b \ge 2$ an integer. Let
$$\begin{aligned}
u &= \text{HashToBins}(\hat{x}, \chi, (H,a))\\
u^1 &= \text{HashToBins}(\hat{x}^1, \chi^1, (H,a))\\
u^2 &= \text{HashToBins}(\hat{x}^2, \chi^2, (H,a)).
\end{aligned}$$
Then for each $j \in [b]^d$ one has
$$\bigl|G^{-1}_{o_i(i)} u_j\,\omega^{-a^T\Sigma i} - (x-\chi)_i\bigr|^p \lesssim \bigl|G^{-1}_{o_i(i)} u^1_j\,\omega^{-a^T\Sigma i} - (x^1-\chi^1)_i\bigr|^p + \bigl|G^{-1}_{o_i(i)} u^2_j\,\omega^{-a^T\Sigma i} - (x^2-\chi^2)_i\bigr|^p + N^{-\Omega(c)}$$
for $p \in \{1, 2\}$, where $O(c)$ is the word precision of our semi-equispaced Fourier transform computations.
Proof of Lemma 2.9: By Lemma 2.5, for any fixed $i$ and $j$ and any $t \ge 0$,
$$\Pr_\Sigma\bigl[\|\Sigma(i-j)\|_\infty \le t\bigr] \le 2(2t/n)^d.$$
Per Lemma 9.2, HashToBins computes the vector $u \in \mathbb{C}^B$ given by
$$u_{h(i)} - \Delta_{h(i)} = \sum_{j\in[n]^d} G_{o_i(j)}\,x'_j\,\omega^{a^T\Sigma j} \qquad (56)$$
for some $\Delta$ with $\|\Delta\|_\infty^2 \le N^{-\Omega(c)}$. We define the vector $v \in \mathbb{C}^n$ by $v_{\Sigma j} = x'_j G_{o_i(j)}$, so that
$$u_{h(i)} - \Delta_{h(i)} = \sum_{j\in[n]^d} \omega^{a^T j}\, v_j = \sqrt{N}\,\hat{v}_a,$$
so
$$u_{h(i)} - \omega^{a^T\Sigma i}\,G_{o_i(i)}\,x'_i - \Delta_{h(i)} = \sqrt{N}\,\bigl(\widehat{v_{\{\Sigma i\}}}\bigr)_a,$$
where $v_{\{\Sigma i\}}$ denotes $v$ with the coordinate $\Sigma i$ zeroed out. We have by (56) and the fact that $(X+Y)^2 \le 2X^2 + 2Y^2$
$$\begin{aligned}
\bigl|G^{-1}_{o_i(i)}\,\omega^{-a^T\Sigma i}\,u_{h(i)} - x'_i\bigr|^2 &= G^{-2}_{o_i(i)}\,\bigl|u_{h(i)} - \omega^{a^T\Sigma i}\,G_{o_i(i)}\,x'_i\bigr|^2\\
&\le 2G^{-2}_{o_i(i)}\,\bigl|u_{h(i)} - \omega^{a^T\Sigma i}\,G_{o_i(i)}\,x'_i - \Delta_{h(i)}\bigr|^2 + 2G^{-2}_{o_i(i)}\Delta^2_{h(i)}\\
&= 2G^{-2}_{o_i(i)}\,\Bigl|\sum_{j\in[n]^d\setminus\{i\}} G_{o_i(j)}\,x'_j\,\omega^{a^T\Sigma j}\Bigr|^2 + 2G^{-2}_{o_i(i)}\Delta^2_{h(i)}
\end{aligned}$$
By Parseval's theorem, therefore, we have
$$\begin{aligned}
E_a\Bigl[\bigl|G^{-1}_{o_i(i)}\,\omega^{-a^T\Sigma i}\,u_{h(i)} - x'_i\bigr|^2\Bigr] &\le 2G^{-2}_{o_i(i)}\, E_a\Bigl[\Bigl|\sum_{j\in[n]^d\setminus\{i\}} G_{o_i(j)}\,x'_j\,\omega^{a^T\Sigma j}\Bigr|^2\Bigr] + 2E_a[\Delta^2_{h(i)}]\\
&= 2G^{-2}_{o_i(i)}\bigl(\|v_{\{\Sigma i\}}\|_2^2 + \Delta^2_{h(i)}\bigr)\\
&\lesssim N^{-\Omega(c)} + \sum_{j\in[n]^d\setminus\{i\}} |x'_j G_{o_i(j)}|^2\\
&\lesssim N^{-\Omega(c)} + \mu^2_{\Sigma,q}(i). \qquad (57)
\end{aligned}$$
We now prove (2). Recall that the filter $G$ approximates an ideal filter, which would be 1 inside $B^\infty_0(n/b)$ and 0 everywhere else. We use the bound on $G_{o_i(j)} = G_{\pi(i)-\pi(j)}$ in terms of $\|\pi(i)-\pi(j)\|_\infty$ from Lemma 2.3, (2). In order to leverage the bound, we partition $[n]^d = B^\infty_{(n/b)\cdot h(i)}(n/2)$ as
$$B^\infty_{(n/b)\cdot h(i)}(n/2) = B^\infty_{(n/b)\cdot h(i)}(n/b) \cup \bigcup_{t=1}^{\log_2(b/2)}\Bigl(B^\infty_{(n/b)\cdot h(i)}((n/b)2^t)\setminus B^\infty_{(n/b)\cdot h(i)}((n/b)2^{t-1})\Bigr).$$
For simplicity of notation, let $X_0 = B^\infty_{(n/b)\cdot h(i)}(n/b)$ and $X_t = B^\infty_{(n/b)\cdot h(i)}((n/b)\cdot 2^t)\setminus B^\infty_{(n/b)\cdot h(i)}((n/b)\cdot 2^{t-1})$ for $t \ge 1$. For each $t \ge 1$ we have by Lemma 2.3, (2)
$$\max_{\pi(l)\in X_t}|G_{o_i(l)}| \le \max_{\pi(l)\notin B^\infty_{(n/b)\cdot h(i)}((n/b)2^{t-1})}|G_{o_i(l)}| \le \Bigl(\frac{2}{1+2^{t-1}}\Bigr)^F.$$
Since the rhs is greater than 1 for $t \le 0$, we can use this bound for all $t \le \log_2(b/2)$. Further, by Lemma 2.5 we have for each $j \ne i$ and $t \ge 0$
$$\Pr_{\Sigma,q}[\pi(j) \in X_t] \le \Pr_{\Sigma,q}\bigl[\pi(j) \in B^\infty_{(n/b)\cdot h(i)}((n/b)\cdot 2^t)\bigr] \le 2(2^{t+1}/b)^d.$$
Putting these bounds together, we get
$$\begin{aligned}
E_{\Sigma,q}[\mu^2_{\Sigma,q}(i)] &= E_{\Sigma,q}\Bigl[\sum_{j\in[n]^d\setminus\{i\}} |x'_j G_{o_i(j)}|^2\Bigr]\\
&\le \sum_{j\in[n]^d\setminus\{i\}} |x'_j|^2 \cdot \sum_{t=0}^{\log_2(b/2)} \Pr_{\Sigma,q}[\pi(j)\in X_t]\cdot\max_{\pi(l)\in X_t}|G_{o_i(l)}|\\
&\le \sum_{j\in[n]^d\setminus\{i\}} |x'_j|^2 \cdot \sum_{t=0}^{\log_2(b/2)} 2(2^{t+1}/b)^d\cdot\Bigl(\frac{2}{1+2^{t-1}}\Bigr)^F\\
&\le \sum_{j\in[n]^d\setminus\{i\}} |x'_j|^2 \cdot \frac{2^F}{B}\sum_{t=0}^{+\infty} 2^{(t+1)d-F(t-1)}\\
&\le 2^{O(d)}\,\frac{\|x'\|_2^2}{B}
\end{aligned}$$
as long as $F \ge 2d$ and $F = \Theta(d)$. Recalling that $G^{-1}_{o_i(i)} \le (2\pi)^{d\cdot F}$ completes the proof of (2).
The proof of (1) is similar. We have
$$\begin{aligned}
E_{\Sigma,q}\Bigl[\max_{a\in[n]^d}\Bigl|\sum_{j\in[n]^d\setminus\{i\}} x'_j G_{o_i(j)}\,\omega^{a^T\Sigma j}\Bigr|\Bigr] &\le E_{\Sigma,q}\Bigl[\sum_{j\in[n]^d\setminus\{i\}} |x'_j G_{o_i(j)}|\Bigr] + |\Delta_{h(i)}|\\
&\le |\Delta_{h(i)}| + \sum_{j\in[n]^d\setminus\{i\}} |x'_j| \sum_{t=0}^{\log_2(b/2)} \Pr_{\Sigma,q}[\pi(j)\in X_t]\cdot\max_{\pi(l)\in X_t}|G_{o_i(l)}|\\
&\le |\Delta_{h(i)}| + \sum_{j\in[n]^d\setminus\{i\}} |x'_j| \sum_{t=0}^{\log_2(b/2)} 2(2^{t+1}/b)^d\cdot\Bigl(\frac{2}{1+2^{t-1}}\Bigr)^F\\
&\le |\Delta_{h(i)}| + \sum_{j\in[n]^d\setminus\{i\}} |x'_j| \cdot \frac{2^F}{B}\sum_{t=0}^{+\infty} 2^{(t+1)d-F(t-1)}\\
&\le |\Delta_{h(i)}| + 2^{O(d)}\,\frac{\|x'\|_1}{B}, \qquad (58)
\end{aligned}$$
where $\Delta_{h(i)} \lesssim N^{-\Omega(c)}$. Recalling that $G^{-1}_{o_i(i)} \le (2\pi)^{d\cdot F}$ and $R^* \le \|x\|_\infty/\mu$ completes the proof of (1).
No Tits alternative for cellular automata
arXiv:1709.00858v3 [math.DS] 11 Jan 2018
Ville Salo
[email protected]
January 12, 2018
Abstract
We show that the automorphism group of a one-dimensional full shift
(the group of reversible cellular automata) does not satisfy the Tits alternative. That is, we construct a finitely generated subgroup which is
not virtually solvable yet does not contain a free group on two generators.
We give constructions both in the two-sided case (acting group Z) and
the one-sided case (acting monoid N, alphabet size at least eight). Lack
of Tits alternative follows for several groups of symbolic (dynamical) origin: automorphism groups of two-sided one-dimensional uncountable sofic
shifts, automorphism groups of multidimensional subshifts of finite type
with positive entropy and dense minimal points, automorphism groups of
full shifts over non-periodic groups, and the mapping class groups of two-sided one-dimensional transitive SFTs. We also show that the classical
Tits alternative applies to one-dimensional (multi-track) reversible linear
cellular automata over a finite field.
1 Introduction
In [49] Jacques Tits proved that if F is a field (with no restrictions on characteristic), then a finitely generated subgroup of GL(n, F ) either contains a free
group on two generators or contains a solvable subgroup of finite index. We say
that a group G satisfies the Tits alternative if whenever H is a finitely generated
subgroup of G, either H is virtually solvable or H contains a free group with
two generators. Whether an infinite group satisfies the Tits alternative is one
of the natural first questions to ask.
The fact that GL(n, F ) satisfies the Tits alternative implies several things:
• The ‘Von Neumann conjecture’, that a group is amenable if and only if it
contains no nonabelian free subgroup, is true for linear groups.1
• Linear groups cannot have intermediate growth. Generally known as the
Milnor problem [41].
• Linear groups have no infinite finitely generated periodic2 subgroups. Generally known as the Burnside problem [14].
1 Mentioned open by Day in [23].
2 A group G is periodic, or torsion, if ∀g ∈ G : ∃n ≥ 1 : g^n = 1.
The first item is true because solvable groups are amenable. The second
is true by the theorem of Milnor [41] and Wolf [52], which states that if G is
finitely generated and solvable then either G is virtually nilpotent or G has
exponential growth rate. The third is true because free groups are not periodic,
and solvable groups cannot have finitely generated infinite periodic subgroups
because locally finite groups are closed under group extensions.
These three properties (or lack thereof) are of much interest in group theory,
since in each case whether groups can have these ‘pathological properties’ was
open for a long time. It seems that none of the three have been answered for
automorphism groups of full shifts (equivalently, groups of reversible cellular
automata). We show that the classical Tits alternative is not enough to solve
the three questions listed – it is not true. Concretely, we show that there is a
residually finite variant of A5 ≀ Z which does not satisfy the Tits alternative and
embeds in the automorphism group of a full shift.
A (two-sided) full shift is ΣZ where Σ is a finite alphabet, with dynamics
of Z given by the shift σ : ΣZ → ΣZ defined by σ(x)i = xi+1 . A subshift is a
topologically closed shift-invariant subsystem of a full shift. A special case is a
sofic shift, a subsystem of a full shift obtained by forbidding a regular language
of words, and SFTs (subshift of finite type) are obtained by forbidding a finite
language. An endomorphism of a subshift is a continuous self-map of it, which
commutes with the shift. The automorphism group of a subshift X, denoted
by Aut(X), is the group of endomorphisms having (left and right) inverses, and
the automorphism group of a full shift is also known as the group of reversible
cellular automata. See [38] for precise definitions, [34] for definitions in the
multidimensional case, and [16] for subshifts on general groups. All these notions
have one-sided versions where N is used in place of Z. In the case of one-sided
subshifts we will only discuss full shifts ΣN .
As far as symbolic dynamics goes, automorphism groups of subshifts are a
classical topic [33, 19], with lots of progress in the 80’s and 90’s [12, 13, 37, 10]
especially in sofic settings, but also in others [35, 27]. In the last few (half a
dozen) years there has been a lot of interest in these groups [43, 48, 44, 21, 18,
22, 26, 25, 20] especially in settings where the automorphism group is, for one
reason or another, more restricted. Also the full shift/sofic setting, which we
concentrate on in this paper, has been studied in recent works [28, 45, 46, 3].
The popular opinion is that the automorphism group of a full shift is a
complicated and intractable object. However, with the Tits alternative (or the
three questions listed above) in mind, looking at known (published) finitely
generated subgroups as purely abstract groups does not really support this
belief. As far as the author knows, all that is known about the set of subgroups of
Aut(ΣZ ) for a nontrivial alphabet Σ follow from the facts that it is independent
of the alphabet, contains the right-angled Artin groups (graph groups) and
is closed under direct and free products (‘cograph products’) and finite group
extensions. See [13, 37, 45]. The author does not know whether all groups
generated by these facts satisfy the Tits alternative,3 but believes that if finite
group extensions are replaced with just containment of finite groups, this does
follow from the results of [2].
Some of the known (families of) groups which satisfy the Tits alternative are
3 More precisely, I do not know how finite extensions play together with graph products,
when the operations are alternated.
hyperbolic groups [31], outer automorphism groups of free groups [5], finitely-generated Coxeter groups [42], and right-angled Artin groups and more generally groups obtained by graph products from other groups satisfying the Tits
alternative, under minor restrictions [2]. In particular, we obtain that the automorphism group of a full shift contains a finitely-generated subgroup which is
not embeddable in any such group.
Two particularly famous concrete examples of groups that do not satisfy
the Tits alternative are the Grigorchuk group [30] and Thompson’s group F
[15]. These groups also have many other interesting properties, so it would be
more interesting to embed them instead of inventing a new group for the task.
The Grigorchuk group can indeed be embedded in the automorphism group of
a subshift, by adapting the construction in [7], but the author does not know
whether it embeds in the automorphism group of a sofic shift. Thompson’s group
F embeds in the automorphism group of an SFT [47], but is not residually finite,
and thus does not embed in the automorphism group of a full shift. We mention
also that there are solvable groups of derived length three whose automorphism
groups do not satisfy the alternative [32].
The variant of A5 ≀ Z we describe is not a very complex group, and it is
plausible that some weaker variant of the Tits alternative holds in automorphism
groups of full shifts, and allows this type of groups in place of ‘virtually solvable
groups’. In particular, our group is elementarily amenable [17], and one could
ask whether every finitely generated subgroup of the automorphism group of
a mixing SFT is either elementarily amenable or contains a free nonabelian
subgroup. If this were the case, it would solve the Von Neumann, Milnor and
Burnside problems for automorphism groups of mixing SFTs.
The group we construct satisfies the law [g, h]30 , and is thus an example
of a residually finite group which satisfies a law, but does not satisfy the Tits
alternative. It turns out that such an example has been found previously [24,
Theorem 1], and we were delighted to find that indeed our example sits precisely
in the variety used in their theorem. However, our example is rather based on
an answer4 of Ian Agol on the mathoverflow website [36]. The idea behind
the embedding is based on Turing machines [3] in the two-sided case. In the
one-sided case we use a commutator trick applied to subshifts acting on finite
sets.
2 Results and corollaries
In the two-sided case, we obtain several results, all based on the same construction (Lemma 3) and the fact the automorphism group of the full shift embeds
quite universally into automorphism groups of subshifts.
In the statements, Σ and A are arbitrary finite alphabets.
Theorem 1. For any alphabet Σ, the group Aut(ΣZ ) of reversible cellular automata on the alphabet Σ does not satisfy the Tits alternative.
The following summarizes the (well-known) embeddings listed in Section 5,
and the corollaries for the Tits alternative.
4 The answer reads ‘A5 ≀ Z’.
Theorem 3. Let G be an infinite finitely-generated group, and X ⊂ ΣG a
subshift. Then we have Aut(AZ ) ≤ Aut(X), and thus Aut(X) does not satisfy
the Tits alternative, if one of the following holds:
• G = Z and X is an uncountable sofic shift,
• G = Zd and X is a subshift of finite type with positive entropy and dense
minimal points.
• G is not periodic and X is a nontrivial full shift.
The first embedding is from [37, 45], the second is from [34].
The mapping class group of a subshift is defined in [9]. Combining the embedding theorem of [37, 45] and results of [9] (and a short additional argument)
gives the following.
Theorem 2. If X ⊂ ΣZ is a nontrivial transitive SFT, then Aut(AZ ) ≤ MX ,
and thus MX does not satisfy the Tits alternative.
Automorphisms of two-sided full shifts also appear in some less obviously
symbolic dynamical context, in particular the rational group Q of [29] contains
the automorphism group of a full shift, [51] constructs its classifying space
(more generally those of mixing SFTs), implementing it as the fundamental
group of a simplicial complex built out of integer matrices, and [50] ‘realizes’5
automorphisms of full shifts in the centralizer of a particular homeomorphism
of a sphere of any dimension at least 5.
In the case of one-sided SFTs, there are nontrivial restrictions on finite subgroups, and the group is generated by elements of finite order. The automorphism group of ΣN is not finitely generated if |Σ| ≥ 3, while |Aut({0, 1}N )| = 2
[33, 11]. We prove that the Tits alternative also fails in the one-sided case.
Theorem 4. Let |Σ| ≥ 8. Then Aut(ΣN ) does not satisfy the Tits alternative.
Of course, this group embeds in Aut(ΣZ ), so Theorem 1 follows from Theorem 4. While Aut(ΣZ ) embeds in many situations where we have a symbolic
group action, Aut(ΣN ) embeds more generally in many monoid and (non-vertex-transitive) graph contexts, though unlike in the case of Z, we are not aware of
anything concrete to cite.
The group Aut(ΣN ) also arises in a less obviously symbolic context: It is
shown in [6] that if Sd denotes the set of centered (coefficient of xd−1 is zero)
monic polynomials p of degree d over the complex numbers, such that the filled-in Julia set (points whose iteration stays bounded) of p does not contain any
critical point, then π1 (Sd ) (the fundamental group of Sd as a subspace of Cd )
factors onto Aut({0, . . . , d − 1}N ). Unfortunately, the Tits alternative is not
closed under quotients (since free groups satisfy the Tits alternative trivially by
the Nielsen-Schreier theorem), so we do not directly get a corollary for π1 (Sd ).
5 While the precise statement in [50] is natural and interesting, it seems hard to get a nontrivial group-theoretic interpretation of this result just from the statement of the theorem: it does follow that the automorphism group of a full shift is a subquotient of the homeomorphism group of the sphere, but since this group contains free groups, so is any other countable group.
3 Residually finite wreath product
Grigorchuk group and Thompson’s group F are presumably particularly famous
examples of groups not satisfying the Tits alternative mostly because they are
particularly famous for other reasons, and happen not to satisfy the Tits alternative – one can construct such examples directly by group extensions: A5 ≀ Z
does not satisfy the alternative by a similar proof as that of Lemma 2.
The group A5 ≀ Z is not residually finite since A5 is non-abelian, and thus
cannot be embedded in the automorphism group of a full shift. Informally, there
is an obvious way to ‘embed’ it, but this embedding is not quite a homomorphism, because the relations only hold on some ‘well-formed’ configurations. In
this section, we describe the abstract group obtained through this ‘embedding’
– a kind of broken wreath product. Luckily for us, it turns out not to satisfy
the Tits alternative either.
Let N ⊂ ℕ be an infinite6 set, let G be a finite group and write G +N Z for the group generated by the elements of G and a new element τ, which act on ⋃_{n∈N} G^n by
a · (g1, g2, . . . , gn) = (ag1, g2, . . . , gn)
for a ∈ G and n ∈ N, and
τ · (g1, g2, . . . , gn) = (g2, g3, . . . , gn, g1).
More precisely, the formulas attach a bijection on ⋃_{n∈N} G^n to each a ∈ G and to τ (by the above formulas), and G +N Z is (up to isomorphism) the group of bijections they generate. This is a variant of the usual wreath product of G and Z, but G +N Z is obviously residually finite for any finite group G, since it is defined by its action on the finite sets G^n. Note that τ simply rotates (the coordinates of) G^n for n ∈ N, and generates a copy of Z.
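A toy sketch of these two generator actions (my own illustration, not from the paper, with G taken to be Z/6 written additively): the rotation has order n on G^n, which is why an infinite N makes the rotating generator generate a copy of Z.

```python
def rot(t):
    """Rotation generator: (g1, ..., gn) -> (g2, ..., gn, g1)."""
    return t[1:] + t[:1]

def mul(a, t, op):
    """a · (g1, ..., gn) = (a·g1, g2, ..., gn), for a group operation op."""
    return (op(a, t[0]),) + t[1:]

def add6(a, b):
    return (a + b) % 6            # G = Z/6, written additively

t = (0, 1, 2, 3)
s = t
for _ in range(len(t)):           # the rotation to the n-th power is the identity on G^n
    s = rot(s)
assert s == t
assert mul(5, t, add6) == (5, 1, 2, 3)
```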
A subquotient of a group is a factor of a subgroup.
Lemma 1. Let G be a finitely generated group which has A5^n as a subquotient for infinitely many n. Then G is not virtually solvable.
Proof. Suppose it is. Then there is an index k solvable subgroup K ≤ G. Let
G1 ≤ G be a subgroup. Then [G1 : K ∩ G1 ] ≤ k and K is solvable. Let
h : G1 → G2 be a surjective homomorphism. Then for K ′ = h(K ∩ G1 ), we
have [G2 : K ′ ] ≤ [G1 : K ∩ G1 ] ≤ k. Because K ′ is a subquotient of the solvable
group K, it is solvable, and we obtain that any subquotient of G has a solvable
subgroup of index at most k.
The group A5^n is now a subquotient of G for arbitrarily large (thus all) n, so A5^n has a solvable subgroup K′ of index at most k. We claim that if n is sufficiently large with respect to k, this is a contradiction. A simple argument for n ≥ k being enough is that if there is some coordinate i such that K′ projected to that coordinate is surjective, then the non-solvable group A5 is a quotient of K′, contradicting solvability of K′. If on the other hand in every coordinate the projection of K′ is a proper subgroup of A5, then there are at least n + 1 cosets of K′ in A5^n. (Actually there are exponentially many cosets in n.)
6 The definition works fine if N is a finite set, but then the group we define is finite, and thus satisfies the alternative.
Lemma 2. If N ⊂ ℕ is infinite, then the group A5 +N Z is not virtually solvable and does not contain a free nonabelian group.
Proof. First observe that φ(a) = 0 for a ∈ A5 and φ(τ) = 1 extends to a well-defined homomorphism φ : A5 +N Z → Z (where we use that N is infinite).
Suppose first that the free group F2 ≤ A5 +N Z is generated freely by g, h,
and N ⊂ ℕ is arbitrary. Clearly on any A5^n, [g, h] performs zero total rotation,
that is, φ([g, h]) = 0. Thus independently of n, we have [g, h]^30 · ~v = ~v for any
n and ~v ∈ A5^n, since the exponent of A5 is 30. This implies that g and h satisfy
a nontrivial relation, contradicting the assumption.7
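The step above uses that the exponent of A5 is 30; this is quick to confirm by enumerating element orders. A minimal sketch (all names are ours):

```python
from itertools import permutations
from math import lcm

# Even permutations of {0,...,4} form A5.
def parity(p):
    return sum(1 for i in range(5) for j in range(i + 1, 5) if p[i] > p[j]) % 2

A5 = [p for p in permutations(range(5)) if parity(p) == 0]
assert len(A5) == 60

def order(p):
    # Order of the permutation p under composition.
    e, q, k = tuple(range(5)), p, 1
    while q != e:
        q = tuple(p[q[i]] for i in range(5))  # p composed with q
        k += 1
    return k

# Element orders in A5 are 1, 2, 3, 5, so the exponent is lcm(2, 3, 5) = 30.
exponent = lcm(*(order(p) for p in A5))
assert exponent == 30
```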
To show that A5 +N Z is not virtually solvable, we need to show that it has
A5^n as a subquotient for arbitrarily large n. We show it is a subgroup of a factor
(thus also a factor of a subgroup). Pick n ∈ N. Then A5 +N Z acts on A5^n and
a moment's reflection shows that this induces a surjective homomorphism from
A5 +N Z to A5 ≀ Z/nZ. We have A5^n ≤ A5 ≀ Z/nZ.
Since A5 acts faithfully on {1, 2, 3, 4, 5}, it is easy to see that the following
group (again defined by its action) is isomorphic to A5 +N Z: let elements of
A5 and ⟲ act on ∪_{n∈N} {1, 2, 3, 4, 5}^n by

a · (m1, m2, . . . , mn) = (a(m1), m2, . . . , mn)

for a ∈ A5, and

⟲ · (m1, m2, . . . , mn) = (m2, m3, . . . , mn, m1).
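The two generator actions just described can be sketched directly; here a permutation is a hypothetical dict on {1, . . . , 5} and a tuple stands for an element of {1, 2, 3, 4, 5}^n (names are ours):

```python
# The generator a in A5 permutes the first coordinate; the generator (the
# rotation) cyclically shifts the tuple, as in the displayed formulas above.
def act_perm(a, v):
    # a: permutation of {1,...,5} as a dict; v: tuple over {1,...,5}.
    return (a[v[0]],) + v[1:]

def act_rot(v):
    return v[1:] + (v[0],)

three_cycle = {1: 2, 2: 3, 3: 1, 4: 4, 5: 5}  # the 3-cycle (1 2 3), an element of A5
v = (1, 2, 3, 4, 5)
w = v
for _ in range(5):
    w = act_rot(w)
assert w == v  # rotating a 5-tuple five times is the identity
assert act_perm(three_cycle, v) == (2, 2, 3, 4, 5)
```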
4 The construction in the two-sided case
If Σ is a finite alphabet, write Σ∗ for the set of words over Σ (including the
empty word).
Lemma 3. There exists an alphabet Σ such that we have A5 +2N Z ≤ Aut(ΣZ ).
Proof. Let A = {1, 2, 3, 4, 5} and choose Σ = {#} ∪ A2 . Before describing
the automorphism, we define an auxiliary faithful action of A5 +2N Z on finite
words. Think of a word w ∈ (A2 )n as two words u, v ∈ An on top of each
other, the topmost one defined by ui = (wi )1 and the second vi = (wi )2 , for
i = 1, 2, . . . , n. We use the notation w = [u/v] = ([u1/v1], [u2/v2], . . . , [un/vn]),
where [ui/vi] denotes ui stacked on top of vi. Define a bijection ψ : (A²)^n → A^{2n}
by ψ([u/v]) = u v^R, where v^R is the reversal of v, defined by a^R = a for a ∈ A
and (va)^R = a(v^R).
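The bijection ψ and its inverse are straightforward to implement; a small sketch with stacked pairs as Python tuples (the function names psi and psi_inv are ours):

```python
# psi maps a word of stacked pairs [u_i / v_i] to the word u followed by the
# reversal of v, as in the definition above.
def psi(pairs):
    u = [p[0] for p in pairs]
    v = [p[1] for p in pairs]
    return u + v[::-1]

def psi_inv(word):
    n = len(word) // 2
    u, vr = word[:n], word[n:]
    return list(zip(u, vr[::-1]))

pairs = [(1, 4), (2, 5), (3, 6)]     # tops 1,2,3 and bottoms 4,5,6
assert psi(pairs) == [1, 2, 3, 6, 5, 4]
assert psi_inv(psi(pairs)) == pairs  # round-trip recovers the pairs
```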
Now, conjugate the defining action of A5 +2N Z on A^{2n} to (A²)^n through ψ
to obtain the action

a · ([m1/m′1], [m2/m′2], . . . , [mn/m′n]) = ([a·m1/m′1], [m2/m′2], . . . , [mn/m′n])

for a ∈ A5, and for ⟲ the following counter-clockwise 'conveyor belt' rotation

⟲ · ([m1/m′1], [m2/m′2], . . . , [mn/m′n]) = ([m2/m1], [m3/m′1], [m4/m′2], . . . , [m′n/m′n−1]).
7 Indeed we have shown that the group satisfies a law, similarly as in [24, Theorem 1].
Now, we define our automorphisms. To a ∈ A5 we associate the automorphism
fa : ΣZ → ΣZ defined by fa(x)i = Fa(xi−1, xi), where Fa : Σ² → Σ is
defined by Fa(b, c) = c if b ≠ #, Fa(#, #) = #, and

Fa(#, [b/c]) = [a·b/c],

where a·b is the action of a ∈ A5 by permutation on b ∈ A. It is easy to see
that fa is an endomorphism of ΣZ, and xi = # ⇐⇒ fa(x)i = #.
To ⟲, we associate f⟲ : ΣZ → ΣZ defined by f⟲(x)i = F⟲(xi−1, xi, xi+1),
where F⟲ : Σ³ → Σ is defined by F⟲(a, #, b) = # for all a, b ∈ A², and

F⟲(#, [c/d], #) = [d/c],
F⟲(#, [c/d], [e/f]) = [e/c],
F⟲([a/b], [c/d], #) = [d/b],
F⟲([a/b], [c/d], [e/f]) = [e/b]

for all a, b, c, d, e, f ∈ A. It is easy to see that f⟲ is also an endomorphism of
ΣZ, and xi = # ⇐⇒ f⟲(x)i = #.
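One can check on small examples that the local rule for the rotation generator reproduces the conveyor-belt rotation between two #s. A self-contained sketch, assuming our reading of the local rule above; the helper names belt_rotate and F_rot are ours, pairs are tuples (top, bottom) and '#' is the separator:

```python
def belt_rotate(pairs):
    # Global counter-clockwise rotation: conjugate the cyclic shift on A^{2n}
    # through the bijection psi (word = tops followed by reversed bottoms).
    u = [p[0] for p in pairs]
    v = [p[1] for p in pairs]
    w = u + v[::-1]
    w = w[1:] + w[:1]                 # rotate the belt one step
    n = len(pairs)
    return list(zip(w[:n], w[n:][::-1]))

def F_rot(left, mid, right):
    # Local rule: new top comes from the right neighbour's top (or rises from
    # the own bottom at the right end); new bottom comes from the left
    # neighbour's bottom (or drops from the own top at the left end).
    if mid == '#':
        return '#'
    c, d = mid
    top = d if right == '#' else right[0]
    bot = c if left == '#' else left[1]
    return (top, bot)

pairs = [(1, 4), (2, 5), (3, 6)]
word = ['#'] + pairs + ['#']
out = [F_rot(word[i - 1], word[i], word[i + 1]) for i in range(1, len(word) - 1)]
assert out == belt_rotate(pairs)      # local rule matches the global rotation
```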
Now, let Y ⊂ ΣZ be the set of points x where both the left tail x(−∞,−1] and
the right tail x[0,∞) contain infinitely many #-symbols, and consider any point
x ∈ Y . Then x splits uniquely into an infinite concatenation of words
x = . . . w−2 #w−1 #w0 #w1 #w2 . . .
where wi ∈ (A²)∗ for all i ∈ Z. If f is either f⟲ or one of the fa for a ∈ A5,
then the decomposition of f(x) contains #s in the same positions as that of x,
in the sense that (up to shifting indices)
f (x) = . . . u−2 #u−1 #u0 #u1 #u2 . . .
where |ui | = |wi | for all i and the words begin in the same coordinates. Thus
f(x) ∈ Y. It is easy to see that between two #s, the mapping wi 7→ ui performed
by f is precisely the one defined above on words in (A²)∗ for the
corresponding generator of A5 +2N Z.
It follows that a 7→ fa|Y, ⟲ 7→ f⟲|Y extends uniquely to an embedding of
A5 +2N Z into the group of self-homeomorphisms of Y. Since Y is dense in ΣZ and
fa and f⟲ are endomorphisms of ΣZ, a 7→ fa, ⟲ 7→ f⟲ extends to an embedding
of A5 +2N Z into Aut(ΣZ).
5 Embedding results
In this section we list some embeddings from the literature. We start with
uncountable sofic shifts, where uncountable refers to the cardinality of the set
of points. Note that full shifts are uncountable sofic shifts (ΣN is uncountable,
and the empty language is regular).
The following is [45, Lemma 7].
Lemma 4. If X ⊂ ΣZ is an uncountable sofic shift, then Aut(AZ ) ≤ Aut(X)
for any alphabet A.
Proposition 1. If X is an uncountable sofic shift, then A5 +N Z ≤ Aut(X).
Proof. By Lemma 4, we have Aut(ΣZ ) ≤ Aut(X) where Σ is the alphabet of
Lemma 3. We have A5 +2N Z ≤ Aut(ΣZ ) by Lemma 3, so it is enough to check
that A5 +N Z ≤ A5 +2N Z. One can check that such an embedding is induced by
a 7→ a for a ∈ A5, and ⟲ 7→ ⟲².
As for countable sofic shifts, we do not have a characterization of the situations when the automorphism group satisfies the Tits alternative. However, in
that setting, there are stronger methods for studying the three embeddability
questions listed in the introduction and we refer to [47].
The following is [34, Theorem 3].
Lemma 5. If X ⊂ Σ^{Z^d} is an SFT with positive entropy and dense minimal
points, then Aut(AZ) ≤ Aut(X) for any alphabet A.
Below, minimal points are points whose orbit-closure is minimal as a dynamical system.
Theorem 3. Let G be an infinite finitely-generated group, and X ⊂ ΣG a
subshift. Then we have Aut(AZ ) ≤ Aut(X), and thus Aut(X) does not satisfy
the Tits alternative, if one of the following holds:
• G = Z, X is an uncountable sofic shift,
• G = Zd , X is an SFT with positive entropy and dense minimal points, or
• G is not periodic, X is a nontrivial full shift.
Proof. The first two were proved above. The third item is true, because if
Z ≤ G by n 7→ g n and X = AG for A ⊂ Σ, then if K is a set of left cosets
representatives for hgi then f ∈ Aut(AZ ) directly turns into an automorphism
of AG by fˆ(x)kgn = f (y)n where yi = xkgi and k ∈ K.
The first item generalizes to sofic H × Z-shifts where H is finite by [46],
and could presumably be generalized to virtually-cyclic groups with the same
idea. By symmetry-breaking arguments, we believe the third item generalizes to
SFTs with a nontrivial point of finite support, on any group which is not locally
finite, and also to cellular automata acting on sets of colorings of undirected
graphs, but this is beyond the scope of the present paper. A generalization of
the second item seems worth conjecturing more explicitly.
Conjecture 1. Let G be an amenable group which is not locally finite and X
a subshift of finite type with positive entropy and dense minimal points. Then
we have Aut(AZ ) ≤ Aut(X) for any finite alphabet A.
Next, we deal with the mapping class group. By definition, the group hσi
is contained in the center of Aut(X) for any subshift X, in particular this is a
normal subgroup.
Lemma 6. Let X be any uncountable sofic shift. Then Aut(AZ ) ≤ Aut(X)/hσi
for every finite alphabet A.
Proof. Let φ : Aut(AZ) → Aut(X) be the embedding given by Lemma 4. From
its proof in [45], it is easy to see that there exists an infinite subshift Y ≤ X
such that φ(f) fixes every point in Y for every f ∈ Aut(AZ); namely, the maps
φ(f) only act nontrivially at a bounded distance from an unbordered word w
which can be taken to be arbitrarily long.
We show that based on only this, φ is automatically also an embedding of
Aut(AZ ) into Aut(X)/hσi. Suppose not, and that φ(f ) ◦ σ k = φ(g) for some
f, g ∈ Aut(AZ ). Then in particular φ(f ) ◦ σ k (y) = φ(g)(y) =⇒ σ k (y) = y
for every y ∈ Y . If k 6= 0 this is a contradiction since Y is an infinite subshift.
If k = 0, then φ(f ) = φ(g) implies f = g since φ : Aut(AZ ) → Aut(X) is an
embedding.
The following is now a straightforward corollary of [9]. See [9] for the definition of the mapping class group MX of a subshift X (which is not needed in
the proof).
Lemma 7. Let X be any transitive SFT. Then Aut(AZ ) ≤ MX for every
alphabet A.
Proof. In [9, Theorem 5.6], it is shown in particular that if X is a transitive
SFT, then its mapping class group contains an isomorphic copy of Aut(X)/hσi,
which then contains a copy of Aut(AZ ) by the above lemma.
In [29, Corollary 5.5], it is shown that the automorphism group of every
full shift embeds in the group Q that they define, sometimes called the rational group. Thus we also obtain a new proof that Q does not satisfy the Tits
alternative.
6 One-sided automorphism groups
The automorphism group of the full shift {0, 1}N is isomorphic to Z/2Z. For large
enough alphabets, however, we show that Aut(ΣN ) does not satisfy the Tits
alternative. This gives another proof of Theorem 1.
The high-level idea of the proof is that we can associate to a subshift a
kind of action of it on a finite set, in a natural way. Mapping n 7→ (0^{n−1}1)^Z,
the action of the subshift generated by the image of N ⊂ ℕ corresponds to the
group A5 +N Z defined previously. It turns out that Lemma 2 generalizes to such
actions, and any infinite subshift can be used in place of this (almost) periodic
subshift. The generalization is based on a commutator trick from [4, 40, 1, 8, 3].
The trick to adapting the construction to cellular automata on N is to consider
‘actions of the trace of another cellular automaton’.
We only give the definitions in the special case of A5 and binary subshifts.
Let X ⊂ {0, 1}Z be a subshift and let Y = {1, 2, 3, 4, 5}. For each g ∈ A5 and
j ∈ {0, 1} define a bijection gj : Y × X → Y × X by gj(y, x) = (g · y, x) if
x0 = j and gj(y, x) = (y, x) if x0 ≠ j. Define a bijection ⟲ : Y × X → Y × X by
⟲(y, x) = (y, σ(x)). Denote the group generated by these maps by A5 +X Z.
Lemma 8. Let X ⊂ {0, 1}Z be infinite. Then the group A5 +X Z is not virtually
solvable and does not contain a free nonabelian group.
Proof. Observe that setting φ(g) = 0 for g ∈ A5 and φ(⟲) = 1 extends to a
well-defined homomorphism φ : A5 +X Z → Z (since X is infinite). Suppose first
that F2 ≤ A5 +X Z is generated freely by g, h. Then as in Lemma 2, we get
φ([g, h]) = 0, and again we have [g, h]^30 · (y, x) = (y, x).
We now show that A5^n is a subgroup of A5 +X Z for arbitrarily large n, and
the claim follows from Lemma 1.
Consider the cylinder sets [w]i = {x ∈ {0, 1}Z | x[i,i+|w|−1] = w} and for
g ∈ A5 , w ∈ {0, 1}∗ and i ∈ Z define
πg,i,w (y, x) = (gy, x) if x ∈ [w]i , and πg,i,w (y, x) = (y, x) otherwise.
We claim that πg,i,w ∈ A5 +X Z for all g, i, w. To see this, observe that by the
definition of how g0, g1 act on Y × X, and by conjugating with ⟲, we have
πg,i,w ∈ A5 +X Z for all g ∈ A5, i ∈ Z and w ∈ {0, 1}. We proceed inductively
on |w|: Let w = uv where u, v ∈ {0, 1}+ are nonempty words, and let a ∈ A5
be any commutator, that is, a = [b, c]. Then one sees easily that
[πb,i,u , πc,i+|u|,v ] = π[b,c],i,w .
Because A5 is perfect (that is, A5 = [A5 , A5 ]), we get that πg,i,w ∈ A5 +X Z for
every g ∈ A5 . This proves the claim.
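The perfectness of A5 invoked here is even stronger than needed: every element of A5 is a commutator, which can be confirmed by enumeration. A minimal sketch (names are ours):

```python
from itertools import permutations

# Even permutations of {0,...,4} form A5.
def parity(p):
    return sum(1 for i in range(5) for j in range(i + 1, 5) if p[i] > p[j]) % 2

A5 = {p for p in permutations(range(5)) if parity(p) == 0}

def mul(p, q):
    # Composition p after q.
    return tuple(p[q[i]] for i in range(5))

def inv(p):
    r = [0] * 5
    for i, pi in enumerate(p):
        r[pi] = i
    return tuple(r)

# The set of commutators [b, c] = b c b^{-1} c^{-1} is all of A5.
commutators = {mul(mul(b, c), mul(inv(b), inv(c))) for b in A5 for c in A5}
assert commutators == A5
```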
Now, let w be an unbordered word8 of length n [39] that occurs in some point
of X. Then the elements πg,i,w for 0 ≤ i < |w| generate a group isomorphic to
A5^n (because the fact that w is unbordered implies that the supports of their
actions are disjoint), and we conclude.
Theorem 4. Let |Σ| ≥ 8. Then Aut(ΣN ) does not satisfy the Tits alternative.
Proof. Let Σ = Y ⊔ A ⊔ B where A, B are finite sets, |A| ≥ 3 and |Y| = 5.
Let f : ΣN → ΣN be any reversible cellular automaton of infinite order such
that xi ∈ A ⇐⇒ f(x)i ∈ A and xi ∉ A =⇒ f(x)i = xi. One construction is
the following: for a, b ∈ A, a ≠ b, define Fa,b : Σ² → Σ by
the following: for a, b ∈ A, a 6= b, define Fa,b : Σ2 → Σ by
Fa,b(a, c) = b and Fa,b(b, c) = a for all c ∉ {a, b},
and Fa,b (c, d) = c otherwise. Define fa,b ∈ Aut(ΣN ) by fa,b (x)0 = Fa,b (x0 , x1 ).
This is an involution, so it is an automorphism. Now it is easy to check that for
any distinct elements a, b, c ∈ A, fa,b ◦ fb,c is of infinite order, by considering its
action on any point that begins with the word b^k a.
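That fa,b is an involution can be checked by simulating the one-sided radius-1 rule on finite words (each application loses one symbol at the right edge). A hedged sketch with plain integers, taking A = {1, 2, 3}; all names are ours:

```python
# Local rule F_{a,b} from the proof: swap a and b in the current cell when the
# right neighbour is outside {a, b}; otherwise leave the cell unchanged.
def F(a, b):
    def rule(x0, x1):
        if x1 not in (a, b):
            if x0 == a:
                return b
            if x0 == b:
                return a
        return x0
    return rule

def apply_ca(rule, word):
    # One-sided radius-1 CA on a finite word; the last symbol is undetermined.
    return [rule(word[i], word[i + 1]) for i in range(len(word) - 1)]

f12 = F(1, 2)
word = [1, 2, 2, 3, 1, 2, 1, 3, 2]
once = apply_ca(f12, word)
twice = apply_ca(f12, once)
# Membership in {a, b} is preserved, so applying the rule twice is the identity
# on the determined prefix:
assert twice == word[:len(twice)]
```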
The trace of f is the subshift of ΣZ consisting of the points y such that, for some
x ∈ ΣN, f^n(x)0 = yn for all n. Since f is of infinite order on ΣN and fixes all symbols
not in A, its trace Tf intersected with AZ is infinite. Let π : A → {0, 1} be a
not in A, its trace Tf intersected with AZ is infinite. Let π : A → {0, 1} be a
function, and extend it to π : AZ → {0, 1}Z in the obvious way. Pick π so that
the subshift X = π(Tf ∩ AZ ) is infinite. This is possible because if the indicator
function of some symbol gives a finite subshift, then this symbol appears with
a fixed period in every point, so if all indicator functions give finite subshifts,
every point can be described by a finite amount of data.
For each permutation p ∈ A5 and i ∈ {0, 1}, take the fp,i to be the automorphism defined by fp,i (x)0 = p(x0 ) if x0 ∈ Y , x1 ∈ A and π(x1 ) = i, and by
fp,i (x)0 = x0 otherwise.
8 Alternatively, one can use the Marker Lemma [38] for X.
One can now check that f and fp,i for p ∈ A5 , i ∈ {0, 1} together generate a
group isomorphic to A5 +X Z, and by the previous lemma, this group does not
satisfy the Tits alternative.
Question 1. For which one-dimensional transitive SFTs does the Tits alternative hold? Is it always false when the automorphism group is infinite?
Unlike in the two-sided case, automorphism groups of one-sided full shifts
do not in general embed into each other, due to restrictions on finite subgroups,
see [11].
7 Tits alternative for linear cellular automata
We show that linear cellular automata, with a suitable definition, satisfy the
assumptions of Tits’ theorem directly.
Let K be a finite field and let V be a finite-dimensional vector space over
K. Then the full shift V Z is also a vector space over K with cellwise addition
(x + y)i = xi + yi and cellwise scalar multiplication (a · x)i = a · xi . Consider the
semigroup LEnd(V Z) of endomorphisms f of the full shift V Z which are also
linear maps, i.e. f(x + y) = f(x) + f(y) and a · f(x) = f(a · x), and the group
LAut(V Z) ≤ LEnd(V Z) of those maps which are also automorphisms (in which
case the inverse is automatically linear as well).
The (formal) Laurent series ∑_{i∈Z} ci x^i, where ci ∈ K for all i and ci = 0 for
all small enough i, form a field that we denote by K((x)). By GL(n, K((x)))
all small enough i, form a field that we denote by K((x)). By GL(n, K((x)))
we denote the group of linear automorphisms of the vector space K((x))n over
K((x)). A Laurent polynomial is a Laurent series with finitely many nonzero
coefficients ci , and we write K[x, x−1 ] for this subring of K((x)). In addition
to being a vector space over K((x)), K((x))n is a module over K[x, x−1 ] and
a vector space over K (by simply restricting the scalar field).
Lemma 9. If V is an n-dimensional vector space over the finite field K, the
semigroup LEnd(V Z) is isomorphic to the semigroup Mn(K[x, x−1]) of n-by-n
matrices over the ring K[x, x−1]. The group LAut(V Z) corresponds to the
invertible matrices in this isomorphism.
Proof. We may assume V = K^n. Let X be the set of points x ∈ (K^n)Z such that
xi = 0^n for all but finitely many i. Note that this is a subspace of (K^n)Z for the
K-linear structure. Define a map φ : X → K[x, x−1]^n by φ(x)j = ∑_{i∈Z} (xi)j x^i,
and observe that this is an isomorphism of the vector spaces X and K[x, x−1]^n
over K.
Suppose first that f : (K n )Z → (K n )Z is a linear CA, i.e. K-linear, continuous and shift-commuting. Then we see that the images of ei ∈ (K n )Z defined
by ((ei )0 )j = δij (where δab = 1 if a = b, δab = 0 otherwise) determine f completely. If some f (ei ) has infinitely many nonzero coordinates, then it is easy
to see that f is not continuous at (0n )Z . One can then directly work out the
matrix representation for φ ◦ f |X ◦ φ−1 from the images f (ei ), observing that
applying the left shift σ corresponds to scalar multiplication by x−1 . Since X
is dense in (K n )Z , f is completely determined by f |X , so this representation as
a matrix semigroup is faithful.
By inverting the construction, every matrix arises from the local rule of some
cellular automaton. The claim about invertible matrices is straightforward.
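The correspondence of Lemma 9 can be illustrated in the simplest case K = F2, V = K (so n = 1): the linear CA f(x)_i = x_i + x_{i+1} corresponds to the Laurent polynomial 1 + x^{-1} (the left shift corresponding to x^{-1}), and squaring the polynomial over F2 predicts the support of f²(e0). A small sketch with finite-support configurations as dicts (names are ours):

```python
# One step of the linear CA f(x)_i = x_i + x_{i+1} over F_2, on a
# finite-support configuration stored as {position: 1}.
def step(conf):
    lo, hi = min(conf) - 1, max(conf) + 1
    out = {i: (conf.get(i, 0) + conf.get(i + 1, 0)) % 2 for i in range(lo, hi)}
    return {i: b for i, b in out.items() if b}

# f corresponds to 1 + x^(-1), so f(e_0) is supported on {0, -1} ...
assert step({0: 1}) == {-1: 1, 0: 1}
# ... and (1 + x^(-1))^2 = 1 + x^(-2) over F_2, so f^2(e_0) is supported on {0, -2}:
assert step(step({0: 1})) == {-2: 1, 0: 1}
```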
The following is obvious.
Lemma 10. The group of invertible matrices in Mn (K[x, x−1 ]) embeds in
GL(n, K((x))).
Combining the lemmas, we get that LAut(V Z) is a linear group over a finite-dimensional vector space. The result of [49] now directly applies.
Theorem 5. Let V be a finite-dimensional vector space over a finite field K.
Then the group LAut(V Z ) satisfies the Tits alternative.
The author does not know if the group of (algebraic) automorphisms of a
one-dimensional group shift always satisfies the Tits alternative.
Acknowledgements
The author thanks Ilkka Törmä for his comments on the paper, and especially
for suggesting a much simpler embedding in the two-sided case. The author
thanks Johan Kopra for pointing out several typos in the first public draft. The
last section arose from discussions with Pierre Guillon and Guillaume Theyssier.
The question of Tits alternative for cellular automata was proposed by Tom
Meyerovitch.
References
[1] Scott Aaronson, Daniel Grier, and Luke Schaeffer. The classification of
reversible bit operations. Electronic Colloquium on Computational Complexity, (66), 2015.
[2] Yago Antolı́n. Tits Alternatives for Graph Products, pages 1–5. Springer
International Publishing, Cham, 2014.
[3] Sebastián Barbieri, Jarkko Kari, and Ville Salo. The Group of Reversible
Turing Machines, pages 49–62. Springer International Publishing, Cham,
2016.
[4] David A. Barrington. Bounded-width polynomial-size branching programs recognize exactly those languages in NC1. Journal of Computer and System Sciences, 38(1):150–164, 1989.
[5] Mladen Bestvina, Mark Feighn, and Michael Handel. The Tits alternative
for out(Fn ) I: Dynamics of exponentially-growing automorphisms. Annals
of Mathematics, 151(2):517–623, 2000.
[6] Paul Blanchard, Robert L. Devaney, and Linda Keen. The dynamics of
complex polynomials and automorphisms of the shift. Inventiones mathematicae, 104(1):545–580, Dec 1991.
[7] Nicolás Matte Bon. Topological full groups of minimal subshifts with subgroups of intermediate growth. Journal of Modern Dynamics, 9(1):67–80, 2015.
[8] Tim Boykett, Jarkko Kari, and Ville Salo. Strongly Universal Reversible
Gate Sets, pages 239–254. Springer International Publishing, Cham, 2016.
[9] M. Boyle and S. Chuysurichay. The mapping class group of a shift of finite type. ArXiv e-prints, April 2017. Available at https://arxiv.org/abs/1704.03916.
[10] Mike Boyle and Ulf-Rainer Fiebig. The action of inert finite-order automorphisms on finite subsystems of the shift. Ergodic Theory and Dynamical
Systems, 11(03):413–425, 1991.
[11] Mike Boyle, John Franks, and Bruce Kitchens. Automorphisms of one-sided
subshifts of finite type. Ergodic Theory Dynam. Systems, 10(3):421–449,
1990.
[12] Mike Boyle and Wolfgang Krieger. Periodic points and automorphisms of
the shift. Transactions of the American Mathematical Society, 302(1):pp.
125–149, 1987.
[13] Mike Boyle, Douglas Lind, and Daniel Rudolph. The automorphism group
of a shift of finite type. Transactions of the American Mathematical Society,
306(1):pp. 71–114, 1988.
[14] William Burnside. On an unsettled question in the theory of discontinuous
groups. Quart. J. Pure Appl. Math, 33(2):230–238, 1902.
[15] James W. Cannon, William J. Floyd, and Walter R. Parry. Introductory notes on Richard Thompson’s groups. Enseignement Mathématique,
42:215–256, 1996.
[16] T. Ceccherini-Silberstein and M. Coornaert. Cellular Automata and
Groups. Springer Monographs in Mathematics. Springer-Verlag Berlin Heidelberg, 2010.
[17] Ching Chou. Elementary amenable groups. Illinois J. Math., 24(3):396–
407, 09 1980.
[18] E. Coven and R. Yassawi. Endomorphisms and automorphisms of minimal
symbolic systems with sublinear complexity. ArXiv e-prints, November
2014. Available at https://arxiv.org/abs/1412.0080.
[19] Ethan M Coven. Endomorphisms of substitution minimal sets. Probability
Theory and Related Fields, 20(2):129–133, 1971.
[20] V. Cyr, J. Franks, B. Kra, and S. Petite. Distortion and the automorphism
group of a shift. ArXiv e-prints, November 2016.
[21] V. Cyr and B. Kra. The automorphism group of a shift of subquadratic growth. ArXiv e-prints, March 2014. Available at https://arxiv.org/abs/1403.0238.
[22] V. Cyr and B. Kra. The automorphism group of a minimal shift of stretched
exponential growth. ArXiv e-prints, September 2015.
[23] Mahlon M. Day. Amenable semigroups. Illinois J. Math., 1(4):509–544, 12
1957.
[24] Yves de Cornulier and Avinoam Mann. Some Residually Finite Groups
Satisfying Laws, pages 45–50. Birkhäuser Basel, Basel, 2007.
[25] S. Donoso, F. Durand, A. Maass, and S. Petite. On automorphism groups
of Toeplitz subshifts. ArXiv e-prints, January 2017.
[26] Sebastian Donoso, Fabien Durand, Alejandro Maass, and Samuel Petite.
On automorphism groups of low complexity subshifts. Ergodic Theory and
Dynamical Systems, 36(01):64–95, 2016.
[27] Doris Fiebig and Ulf-Rainer Fiebig. The automorphism group of a coded
system. Transactions of the American Mathematical Society, 348(8):3173–
3191, 1996.
[28] J. Frisch, T. Schlank, and O. Tamuz. Normal amenable subgroups of the
automorphism group of the full shift. ArXiv e-prints, December 2015.
Available at https://arxiv.org/abs/1512.00587.
[29] Rostislav I Grigorchuk, Volodymyr V Nekrashevich, and Vitali I Sushchanskii. Automata, dynamical systems and groups. In Proc. Steklov Inst. Math,
volume 231, pages 128–203, 2000.
[30] R. I. Grigorčuk. On Burnside’s problem on periodic groups. Funktsional.
Anal. i Prilozhen., 14(1):53–54, 1980.
[31] Mikhael Gromov. Hyperbolic groups. In Essays in group theory, pages 75–263. Springer, 1987.
[32] Brian Hartley. A conjecture of Bachmuth and Mochizuki on automorphisms
of soluble groups. Canad. J. Math, 28:1302–1310, 1976.
[33] Gustav A. Hedlund. Endomorphisms and automorphisms of the shift dynamical system. Math. Systems Theory, 3:320–375, 1969.
[34] Michael Hochman. On the automorphism groups of multidimensional shifts
of finite type. Ergodic Theory Dynam. Systems, 30(3):809–840, 2010.
[35] B. Host and F. Parreau. Homomorphismes entre systèmes dynamiques
définis par substitutions. Ergodic Theory and Dynamical Systems, 9:469–
477, 8 1989.
[36] Ian Agol (https://mathoverflow.net/users/1345/ian agol). Which group does not satisfy the Tits alternative? MathOverflow. URL: https://mathoverflow.net/q/38775 (version: 2010-09-16).
[37] K. H. Kim and F. W. Roush. On the automorphism groups of subshifts.
Pure Mathematics and Applications, 1(4):203–230, 1990.
[38] Douglas Lind and Brian Marcus. An introduction to symbolic dynamics
and coding. Cambridge University Press, Cambridge, 1995.
[39] M. Lothaire. Algebraic combinatorics on words, volume 90 of Encyclopedia
of Mathematics and its Applications. Cambridge University Press, Cambridge, 2002.
[40] Hiroki Matui. Some remarks on topological full groups of cantor minimal
systems. International Journal of Mathematics, 17(02):231–251, 2006.
[41] John Milnor. Growth of finitely generated solvable groups. J. Differential
Geom., 2(4):447–449, 1968.
[42] Guennadi A. Noskov and Èrnest B. Vinberg. Strong Tits alternative for
subgroups of Coxeter groups. Journal of Lie Theory, 12(1):259–264, 2002.
[43] Jeanette Olli. Endomorphisms of sturmian systems and the discrete chair
substitution tiling system. Dynamical Systems, 33(9):4173–4186, 2013.
[44] Ville Salo. Toeplitz subshift whose automorphism group is not finitely generated. ArXiv e-prints, November 2014. Available at https://arxiv.org/abs/1411.3299.
[45] Ville Salo. A note on subgroups of automorphism groups of full shifts.
ArXiv e-prints, July 2015.
[46] Ville Salo. Transitive action on finite points of a full shift and a finitary
Ryan’s theorem. ArXiv e-prints, October 2016. Accepted in Ergodic Theory
and Dynamical Systems.
[47] Ville Salo and Michael Schraudner. Automorphism groups of subshifts
through group extensions. Preprint.
[48] Ville Salo and Ilkka Törmä. Block Maps between Primitive Uniform and
Pisot Substitutions. ArXiv e-prints, June 2013.
[49] Jacques Tits. Free subgroups in linear groups. Journal of Algebra, 20(2):250
– 270, 1972.
[50] J. B. Wagoner. Realizing symmetries of a subshift of finite type by homeomorphisms of spheres. Bull. Amer. Math. Soc. (N.S.), 14(2):301–303, 04
1986.
[51] J. B. Wagoner. Triangle identities and symmetries of a subshift of finite
type. Pacific J. Math., 144(1):181–205, 1990.
[52] Joseph A. Wolf. Growth of finitely generated solvable groups and curvature
of Riemanniann manifolds. J. Differential Geom., 2(4):421–446, 1968.
The Non-Uniform k-Center Problem
Deeparnab Chakrabarty, Prachi Goyal, Ravishankar Krishnaswamy
Microsoft Research, India
dechakr,t-prgoya,[email protected]
arXiv:1605.03692v2, 13 May 2016
Abstract
In this paper, we introduce and study the Non-Uniform k-Center (NUkC) problem. Given a finite
metric space (X, d) and a collection of balls of radii {r1 ≥ · · · ≥ rk }, the NUkC problem is to find a
placement of their centers on the metric space and find the minimum dilation α, such that the union of
balls of radius α · ri around the ith center covers all the points in X. This problem naturally arises as a
min-max vehicle routing problem with fleets of different speeds.
The NUkC problem generalizes the classic k-center problem when all the k radii are the same (which
can be assumed to be 1 after scaling). It also generalizes the k-center with outliers (kCwO for short)
problem when there are k balls of radius 1 and ℓ balls of radius 0. There are 2-approximation and
3-approximation algorithms known for these problems respectively; the former is best possible unless
P=NP and the latter remains unimproved for 15 years.
We first observe that no O(1)-approximation to the optimal dilation is possible unless P=NP,
implying that the NUkC problem is more non-trivial than the above two problems. Our main algorithmic
result is an (O(1), O(1))-bi-criteria approximation result: we give an O(1)-approximation to the optimal
dilation, however, we may open Θ(1) centers of each radii. Our techniques also allow us to prove a simple
(uni-criteria), optimal 2-approximation to the kCwO problem improving upon the long-standing 3-factor.
Our main technical contribution is a connection between the NUkC problem and the so-called
firefighter problems on trees which have been studied recently in the TCS community. We show NUkC is
as hard as the firefighter problem. While we don’t know if the converse is true, we are able to adapt ideas
from recent works [3, 1] in non-trivial ways to obtain our constant factor bi-criteria approximation.
1 Introduction
Source location and vehicle routing problems are extremely well studied [19, 23, 9] in operations research. Consider the following location+routing problem: we are given a set of k ambulances with speeds
s1 , s2 , . . . , sk respectively, and we have to find the depot locations for these vehicles in a metric space (X, d)
such that any point in the space can be served by some ambulance as fast as possible. If all speeds were the
same, then we would place the ambulances in locations S such that maxv∈X d(v, S) is minimized – this is
the famous k-center problem. Differing speeds, however, leads to non-uniformity, thus motivating the titular
problem we consider.
Definition 1.1 (The Non-Uniform k-Center Problem (NUkC)). The input to the problem is a metric space
(X, d) and a collection of k balls of radii {r1 ≥ r2 ≥ · · · ≥ rk }. The objective is to find a placement C ⊆ X
of the centers of these balls, so as to minimize the dilation parameter α such that the union of balls of radius
α · ri around the ith center covers all of X. Equivalently, we need to find centers {c1 , . . . , ck } to minimize
max_{v∈X} min_{i=1,...,k} d(v, ci)/ri.
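Definition 1.1 translates directly into a dilation computation for a candidate placement; a minimal sketch on a toy path metric, assuming positive radii (all names are ours):

```python
# Dilation of a placement: max over points of min over centers of d(v, c_i)/r_i.
# Assumes the metric d is given as a dict of dicts and all radii are positive.
def dilation(points, d, centers, radii):
    return max(min(d[v][c] / r for c, r in zip(centers, radii)) for v in points)

# Toy path metric on {0, 1, 2, 3} with unit edges:
pts = [0, 1, 2, 3]
d = {u: {v: abs(u - v) for v in pts} for u in pts}
# Two balls with radii 2 >= 1, centered at 1 and 3; every point is within
# half its ball's radius of some center:
assert dilation(pts, d, centers=[1, 3], radii=[2, 1]) == 0.5
```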
As mentioned above, when all ri ’s are the same (and equal to 1 by scaling), we get the k-center problem.
The k-center problem was originally studied by Gonzalez [10] and Hochbaum and Shmoys [13] as a clustering
problem of partitioning a metric space into different clusters to minimize maximum intra-cluster distances.
One issue (see Figure 1 for an illustration and refer to [11] for a more detailed explanation) with k-center
(and also k-median/means) as an objective function for clustering is that it favors clusters of similar sizes
with respect to cluster radii. However, in presence of qualitative information on the differing cluster sizes,
the non-uniform versions of the problem can arguably provide more nuanced solutions. One such extreme
special case was considered as the “clustering with outliers” problem [7] where some fixed number/fractions
of points in the metric space need not be covered by the clusters. In particular, Charikar et al [7] consider
(among many problems) the k-center with outlier problem (kCwO, for short) and show a 3-approximation
for this problem. It is easy to see that kCwO is a special case of the NUkC problem when there are k balls of
radius 1 and ℓ (the number of outliers) balls of radius 0.
Motivated by the aforementioned reasons (both from facility location as well as from clustering settings), we investigate the worst-case complexity of the NUkC problem. Gonzalez [10] and Hochbaum and
Shmoys [13] give 2-approximations for the k-center problem, and also show that no better factor is possible
unless P = NP. Charikar et al [7] give a 3-approximation for the kCwO problem, and this has been the best
factor known for 15 years. Given these algorithms, it is natural to wonder if a simple O(1)-approximation
exists for the NUkC problem. In fact, our first result shows a qualitative distinction between NUkC and these
problems: constant-approximations are impossible for NUkC unless P=NP.
Theorem 1.2. For any constant c ≥ 1, the NUkC problem does not admit a c-factor approximation unless
P = N P , even when the underlying metric is a tree metric.
The hardness result is by a reduction from the so-called resource minimization for fire containment
problem on trees (RMFC-T, in short), a variant of the firefighter problem. To circumvent the above hardness,
we give the following bi-criteria approximation algorithm which is the main result of the paper, and which
further highlights the connections with RMFC-T since our algorithms heavily rely on the recent algorithms
for RMFC-T [3, 1]. An (a, b)-factor bi-criteria algorithm for NUkC returns a solution which places at most
a balls of each type (thus in total it may use as many as a · k balls), and the dilation is at most b times the
optimum dilation for the instance which places exactly one ball of each type.
Theorem 1.3. There is an (O(1), O(1))-factor bi-criteria algorithm for the NUkC problem.
Figure 1: The left figure shows the dataset, the middle figure shows a traditional k-center clustering, and the
right figure depicts a non-uniform clustering
Furthermore, as we elucidate below, our techniques also give uni-criteria results when the number of
distinct radii is 2. In particular, we get a 2-approximation for the kCwO problem and a (1 + √5)-approximation
when there are only two distinct types of radii.
Theorem 1.4. There is a 2-approximation for the kCwO problem.
Theorem 1.5. There is a (1 + √5)-approximation for the NUkC problem when the number of distinct radii
is at most 2.
1.1 Discussion on Techniques
Our proofs of Theorems 1.2 and 1.3 show a strong connection between NUkC and the so-called resource
minimization for fire containment problem on trees (RMFC-T, in short). This connection is one of the main
findings of the paper, so we first formally define this problem.
Definition 1.6 (Resource Minimization for Fire Containment on Trees (RMFC-T)). Given a rooted tree T as
input, the goal is to select a collection of non-root nodes N from T such that (a) every root-leaf path has
at least one vertex from N, and (b) maxt |N ∩ Lt| is minimized, where Lt is the t-th layer of T, that is, the
vertices of T at exactly distance t from the root.
To understand the reason behind the name, consider a fire starting at the root spreading to neighboring
vertices each day; the RMFC-T problem minimizes the number of firefighters needed per day so as to prevent
the fire spreading to the leaves of T .
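On small instances, Definition 1.6 can be solved by brute force: enumerate vertex sets meeting every root-leaf path and minimize the maximum number of chosen vertices on any layer. A sketch on a hypothetical toy tree (all names are ours):

```python
from itertools import combinations

# Toy tree: root 0 with children 1, 2; vertex 1 has leaves 3, 4; vertex 2 has leaf 5.
children = {0: [1, 2], 1: [3, 4], 2: [5], 3: [], 4: [], 5: []}
parent = {c: p for p, cs in children.items() for c in cs}

def depth(v):
    return 0 if v == 0 else 1 + depth(parent[v])

def leaf_paths(v=0, cur=()):
    cur = cur + (v,)
    if not children[v]:
        yield cur
    for c in children[v]:
        yield from leaf_paths(c, cur)

nonroot = [v for v in children if v != 0]
maxdepth = max(depth(v) for v in nonroot)
best = None
for k in range(1, len(nonroot) + 1):
    for N in combinations(nonroot, k):
        # Feasible iff N meets every root-leaf path.
        if all(set(N) & set(p) for p in leaf_paths()):
            cost = max(sum(1 for v in N if depth(v) == t)
                       for t in range(1, maxdepth + 1))
            best = cost if best is None else min(best, cost)

# N = {1, 5} cuts all three root-leaf paths using one vertex per layer.
assert best == 1
```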
It is NP-hard to decide if the optimum of RMFC-T is 1 or not [8, 17]. Given any RMFC-T instance
and any c > 1, we construct an NUkC instance on a tree metric such that in the “yes” case there is always
a placement with dilation = 1 which covers the metric, while in the “no” case even a dilation of c doesn’t
help. Upon understanding our hardness construction, the inquisitive reader may wonder if the reduction
also works in the other direction, i.e., whether we can solve NUkC using a reduction to the RMFC-T problem. Unfortunately, we do not know whether this is true even for two types of radii. However, as we explain below, we can still use positive results for the RMFC-T problem to design good algorithms for the NUkC problem.
Indeed, we start off by considering the natural LP relaxation for the NUkC problem and describe an
LP-aware reduction of NUkC to RMFC-T. More precisely, given a feasible solution to the LP-relaxation for
the given NUkC instance, we describe a procedure to obtain an instance of RMFC-T defined by a tree T ,
with the following properties: (i) we can exhibit a feasible fractional solution for the LP relaxation of the
RMFC-T instance, and (ii) given any feasible integral solution to the RMFC-T instance, we can obtain a
feasible integral solution to the NUkC instance which dilates the radii by at most a constant factor. Therefore,
an LP-based ρ-approximation to RMFC-T would immediately imply (ρ, O(1))-bicriteria approximation
algorithms for NUkC. This already implies Theorem 1.4 and Theorem 1.5 since the corresponding RMFC-T
instances have no integrality gap. Also, using a result of Chalermsook and Chuzhoy [3], we directly obtain
an (O(log∗ n), O(1))-bicriteria approximation algorithm for NUkC.
Here we reach a technical bottleneck: Chalermsook and Chuzhoy [3] also show that the integrality
gap of the natural LP relaxation for RMFC-T is Ω(log∗ n). When combined with our hardness reduction in Theorem 1.2, this also implies an (Ω(log∗ n), c) integrality gap for any constant c > 1 for the natural LP relaxation for NUkC. That is, even if we allow a violation of c in the radius dilation, there is an Ω(log∗ n) integrality gap in terms of the violation in the number of balls opened of each type.
However, very recently, Adjiashvili, Baggio and Zenklusen [1] show an improved O(1)-approximation
for the RMFC-T problem. At a very high level, the main technique in [1] is the following. Given an
RMFC-T instance, they carefully and efficiently “guess” a subset of the optimum solution, such that the
natural LP-relaxation for covering the uncovered leaves has O(1)-integrality gap. However, this guessing
procedure crucially uses the tree structure of T in the RMFC-T problem. Unfortunately for us though, we get
the RMFC-T tree only after solving the LP for NUkC, which already has an Ω(log∗ n)-gap! Nevertheless,
inspired by the ideas in [1], we show that we can also efficiently preprocess an NUkC instance, “guessing”
the positions of a certain number of balls in an optimum solution, such that the standard LP-relaxation for
covering the uncovered points indeed has O(1)-gap. We can then invoke the LP-aware embedding reduction
to RMFC-T at this juncture to solve our problem. This is quite delicate, and is the most technically involved
part of the paper.
1.2 Related Work and Open Questions
The k-center problem [10, 13] and the k-center with outliers problem [7] are classic problems in approximation algorithms and clustering. These problems have also been investigated under various settings such as the
incremental model [5, 22], streaming model [4, 22], and more recently in the map-reduce model [14, 21].
Similarly, the k-median [6, 15, 20, 2] and k-means [15, 16, 12, 18] problems are also classic problems studied
extensively in approximation algorithms and clustering. The generalization of k-median to a routing+location
problem was also studied recently [9]. It would be interesting to explore the complexity of the non-uniform
versions of these problems. Another direction would be to explore if the new non-uniform model can be
useful in solving clustering problems arising in practice.
2 Hardness Reduction
In this section, we prove Theorem 1.2 based on the following NP-hardness [17] for RMFC-T.
Theorem 2.1 ([17]). Given a tree T whose leaves are at the same distance from the root, it is NP-hard to distinguish between the following two cases. YES: There is a solution to the RMFC-T instance of value 1. NO: All solutions to the RMFC-T instance have value at least 2.
Given an RMFC-T instance defined by tree T , we now describe the construction of our NUkC instance.
Let h be the height of the tree, and let Lt denote the vertices of the tree at distance exactly t from the root. So,
the leaves constitute Lh since all leaves are at the same distance from the root. The NUkC instance, I(T ), is
defined by the metric space (X, d), and a collection of balls. The points in our metric space will correspond to
the leaves of the tree, i.e., X = L_h. To define the metric, we assign a weight d(e) = (2c + 1)^{h−i+1} to each edge with one endpoint in L_i and the other in L_{i−1}; we then define d to be the shortest-path metric on X
induced by this weighted tree. Finally, we set k = h, and define the k radii r_1 ≥ r_2 ≥ · · · ≥ r_k iteratively as follows: define r_k := 0, and for k ≥ i > 1, set r_{i−1} := (2c + 1) · r_i + 2(2c + 1). This completes the NUkC instance. Before proceeding, we make a simple observation: for any two leaves u and u′ with lca v ∈ L_t, we have d(u, u′) = 2((2c + 1) + (2c + 1)^2 + · · · + (2c + 1)^{h−t}) = r_t. The following lemma proves Theorem 1.2.
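As a sanity check on this construction, the recurrence for the radii and the closed-form distance identity can be verified numerically (a small sketch; the helper names and the test values h = 6, c = 3 are ours):

```python
def radii(h, c):
    # r_h = 0 and r_{i-1} = (2c+1) * r_i + 2(2c+1), giving r_1 >= ... >= r_h
    r = {h: 0}
    for i in range(h, 1, -1):
        r[i - 1] = (2 * c + 1) * r[i] + 2 * (2 * c + 1)
    return r

def lca_distance(h, c, t):
    # d(u, u') for two leaves whose lca lies in level L_t: twice the sum of
    # the edge weights (2c+1)^1 + ... + (2c+1)^(h-t) along the two branches
    return 2 * sum((2 * c + 1) ** j for j in range(1, h - t + 1))

h, c = 6, 3
r = radii(h, c)
for t in range(1, h + 1):
    assert lca_distance(h, c, t) == r[t]   # d(u, u') = r_t for lca in L_t
```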
Lemma 2.2. If T is in the YES case of Theorem 2.1, then I(T) has optimum dilation = 2. If T is in the NO case of Theorem 2.1, then I(T) has optimum dilation ≥ 2c.
Proof. Suppose T is in the YES case, and there is a solution to RMFC-T which selects at most 1 node from each level L_t. If v ∈ L_t is selected, then select a center c_v arbitrarily among the leaves in the sub-tree rooted at v and open the ball of radius r_t around it. We now need to show that all points in X = L_h are covered by these balls. Let u be any leaf; there must be a vertex v in some level L_t on u's path to the root such that a ball of radius r_t is opened at c_v. However, d(u, c_v) ≤ d(u, v) + d(v, c_v) ≤ 2r_t, and so the ball of radius 2r_t around c_v covers u.
Now suppose T is in the NO case, and the NUkC instance has a solution with optimum dilation < 2c. We build a good solution N for the RMFC-T instance as follows: suppose the NUkC solution opens the ball of radius < 2c · r_t around center u. Let v be the vertex on the u-root path appearing in level L_t; we then pick this node in N. Observe two things: first, this ball covers all the leaves in the sub-tree rooted at v, since r_t ≥ d(u, u′) for any such leaf u′. Furthermore, since the NUkC solution has only one ball of each radius, we get that |N ∩ L_t| ≤ 1. Finally, since d(u, w) ≥ 2c · r_t for all leaves w not in the sub-tree rooted at v, the opened ball around u (of radius < 2c · r_t) doesn't contain any leaves other than those in the sub-tree rooted at v. Contrapositively, since all leaves w are covered by some ball, every leaf must lie in the sub-tree of some vertex picked in N. That is, N is a solution to RMFC-T with value 1, contradicting the NO case.
3 LP-aware reduction from NUkC to RMFC-T
For reasons which will be apparent soon, we consider instances I of NUkC counting multiplicities. That is, we consider an instance to be a collection of tuples (k_1, r_1), . . . , (k_h, r_h) to indicate that there are k_i balls of radius r_i. So ∑_{t=1}^{h} k_t = k. Intuitively, the reason we do this is that if two radii r_t and r_{t+1} are “close-by” then it makes sense to round up r_{t+1} to r_t and increase k_t, losing only a constant factor in the dilation.
LP-relaxation for NUkC. We now state the natural LP relaxation for a given NUkC instance I. For each point p ∈ X and radius type r_t, we have an indicator variable x_{p,t} ≥ 0 for whether we place a ball of radius r_t centered at p. By doing a binary search on the optimal dilation and scaling, we may assume that the optimum dilation is 1. Then the following linear program must be feasible. Below, we use B(q, r_t) to denote the set of points within distance r_t from q.
∀p ∈ X:   ∑_{t=1}^{h} ∑_{q ∈ B(p, r_t)} x_{q,t} ≥ 1        (NUkC LP)

∀t ∈ {1, . . . , h}:   ∑_{p ∈ X} x_{p,t} ≤ k_t
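For concreteness, the two constraint families of this LP can be checked mechanically for a candidate fractional solution (a sketch with helper names of our choosing; an actual run would obtain x from an off-the-shelf LP solver):

```python
def nukc_lp_feasible(points, dist, radii, budgets, x, tol=1e-9):
    """Check the NUkC LP constraints for a candidate fractional solution.
    x[p][t] is the amount of a radius-r_t ball centred at point p."""
    h = len(radii)
    # coverage constraints: every point is fractionally covered by >= 1
    for p in points:
        cov = sum(x[q][t] for t in range(h) for q in points
                  if dist(p, q) <= radii[t])
        if cov < 1 - tol:
            return False
    # budget constraints: at most k_t mass of radius type t
    for t in range(h):
        if sum(x[p][t] for p in points) > budgets[t] + tol:
            return False
    return True

# toy instance on a line: points 0..3, radii (2, 0), budgets (1, 2)
pts = [0, 1, 2, 3]
d = lambda a, b: abs(a - b)
x = {p: [0.0, 0.0] for p in pts}
x[1][0] = 1.0          # one radius-2 ball at point 1 covers all of 0..3
assert nukc_lp_feasible(pts, d, [2, 0], [1, 2], x)
x[1][0] = 0.4          # too little coverage: infeasible
assert not nukc_lp_feasible(pts, d, [2, 0], [1, 2], x)
```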
LP-relaxation for RMFC-T. Since we reduce fractional NUkC to fractional RMFC-T, we now state the natural LP relaxation for RMFC-T on a tree T of depth h + 1. In fact, we will work with the following budgeted version of RMFC-T (which is equivalent to the original RMFC-T problem — for a proof, see [1]): instead of minimizing the maximum number of “firefighters” at any level t (that is, |N ∩ L_t|, where N is the chosen solution), suppose we specify a budget limit of k_t on |N ∩ L_t|. The goal is to minimize the maximum dilation of these budgets. The following is then a natural LP relaxation for the budgeted RMFC-T problem on trees. Here L = L_h is the set of leaves, and L_t is the set of layer-t nodes. For a leaf node v, let P_v denote the vertex set of the unique leaf-root path, excluding the root.
min α

∀v ∈ L:   ∑_{u ∈ P_v} y_u ≥ 1        (RMFC-T LP)

∀t ∈ {1, . . . , h}:   ∑_{u ∈ L_t} y_u ≤ α · k_t
The LP-aware Reduction to Tree metrics. We now describe our main reduction algorithm, which takes
as input an NUkC instance I = {(X, d); (k1 , r1 ), . . . , (kh , rh )} and a feasible solution x to NUkC LP, and
returns a budgeted RMFC-T instance IT defined by a tree T along with budgets for each level, and a feasible
solution y to RMFC-T LP with dilation 1. The tree we construct will have height h + 1 and the budgeted
RMFC-T instance will have budgets precisely kt at level 1 ≤ t ≤ h, and the budget for the leaf level is 0.
For clarity, throughout this section we use the word points to denote elements of the metric space in I, and
the word vertices/nodes to denote the tree nodes in the RMFC-T instance that we construct. We build the
tree T in a bottom-up manner, where in each round, we pick a set of far-away representative points (the
distance scale increases as we move up the tree) and cluster all points to their nearest representative. This
is similar to a so-called clustering step in many known algorithms for facility location (see, e.g. [6]), but
whereas an arbitrary set of far-away representatives would suffice in the facility location algorithms, we need
to be careful in how we choose this set to make the overall algorithm work.
Formally, each vertex of the tree T is mapped to some point in X, and we denote the mapping of the vertices at level t by ψ_t : L_t → X. We will maintain that each ψ_t is injective, so ψ_t(u) ≠ ψ_t(v) for u ≠ v in L_t. Thus ψ_t^{−1} is well defined on the range of ψ_t.
The complete algorithm runs in rounds h + 1 to 2 building the tree one level per round. To begin with,
the ψh+1 mapping is an arbitrary bijective mapping between L := Lh+1 , the set of leaves of the tree, and the
points of X (so, in particular, |L| = |X|). We may assume it to be the identity bijection. In each round t, the range of the mappings becomes progressively smaller, that is,¹ ψ_t(L_t) ⊇ ψ_{t−1}(L_{t−1}). We call ψ_t(L_t) the winners at level t. We now describe round t. Let Cov_t(p) := ∑_{q ∈ B(p, r_t)} x_{q,t} denote the fractional amount the point p is covered by radius-r_t balls in the solution x. Also define Cov_{≥t}(p) := ∑_{s ≥ t} Cov_s(p), the fractional amount p is covered by balls of radius r_t or smaller. Let Cov_{h+1}(p) = 0 for all p.
Algorithm 1 Round t of the LP-aware Reduction.
Input: Level Lt , subtrees below Lt , the mappings ψs : Ls → X for all t ≤ s ≤ h.
Output: Level Lt−1 , the connections between Lt−1 and Lt , and the mapping ψt−1 .
Define A = ψ_t(L_t), the set of points that are winners at level t.
while A ≠ ∅ do
(a) Choose the point p ∈ A with minimum coverage Cov_{≥t}(p).
(b) Let N(p) := {q ∈ A : d(p, q) ≤ 2r_{t−1}} be the set of all points in A near p.
(c) Create a new tree vertex w ∈ L_{t−1} corresponding to p and set ψ_{t−1}(w) := p. Call p a winner at level t − 1, and each q ∈ N(p) ⊆ A a loser to p at this level.
(d) Create edges (w, v) for the tree vertices v ∈ ψ_t^{−1}(N(p)) associated with N(p) at level t.
(e) Set A ← A \ N(p).
(f) Set y_w = Cov_{t−1}(p).
end while
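One round of this reduction can be sketched in a few lines (an illustration under our own naming; the coverage values would come from the LP solution x):

```python
def reduction_round(winners, dist, r_prev, cov_geq):
    """One round of the LP-aware reduction (sketch).

    winners : points that won at level t, i.e. the set A = psi_t(L_t)
    r_prev  : the radius r_{t-1} setting the 2 * r_{t-1} clustering scale
    cov_geq : dict p -> Cov_{>=t}(p), coverage by balls of radius r_t or less
    Returns (winner, losers) pairs: the new winners at level t-1 together
    with the points clustered under each of them.
    """
    A = set(winners)
    parents = []
    while A:
        p = min(A, key=lambda q: cov_geq[q])              # step (a)
        Np = {q for q in A if dist(p, q) <= 2 * r_prev}   # step (b)
        parents.append((p, Np))                           # steps (c), (d)
        A -= Np                                           # remove losers
    return parents

# toy run on the line: winners 0, 1, 10 with r_{t-1} = 1
cov = {0: 0.2, 1: 0.9, 10: 0.5}
out = reduction_round([0, 1, 10], lambda a, b: abs(a - b), 1, cov)
assert [p for p, _ in out] == [0, 10]    # 1 loses to 0 (distance 1 <= 2)
assert out[0][1] == {0, 1}
```

Note that point 0 is processed first because it is the least covered, exactly as step (a) prescribes; any two surviving winners end up more than 2r_{t−1} apart, which is the disjointness used later in Claim 3.2.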
Finally, we add a root vertex and connect it to all vertices in L1 . This gives us the final tree T and a
solution y which assigns a value to all non-leaf, non-root vertices of the tree T . The following claim asserts
well-definedness of the algorithm.
Lemma 3.1. The solution y is a feasible solution to RMFC-T LP on IT with dilation 1.
Proof. The proof is via two claims for the two different sets of inequalities.
¹ We are using the notation ψ(X) := ⋃_{x ∈ X} ψ(x).
Claim 3.2. For all 1 ≤ t ≤ h, we have ∑_{w ∈ L_t} y_w ≤ k_t.
Proof. Fix t. Let W_t ⊆ X denote the winners at level t, that is, W_t = ψ_t(L_t). By definition of the algorithm, ∑_{w ∈ L_t} y_w = ∑_{p ∈ W_t} Cov_t(p). Now note that for any two points p, q ∈ W_t, we have B(p, r_t) ∩ B(q, r_t) = ∅. To see this, consider the first of the two points to be chosen from A in the (t + 1)-th round, when L_t was being formed. If this is p, then all points in the radius-2r_t ball around p are deleted from A, so d(p, q) > 2r_t. Since the balls are disjoint, the second inequality of NUkC LP implies ∑_{p ∈ W_t} ∑_{q ∈ B(p, r_t)} x_{q,t} ≤ k_t. The inner sum in the LHS is precisely Cov_t(p).
Claim 3.3. For any leaf node w ∈ L, we have ∑_{v ∈ P_w} y_v ≥ 1.
Proof. We start with an observation. Fix a level t and a winner point p ∈ W_t. Let u ∈ L_t be such that ψ_t(u) = p. Since W_t ⊆ W_{t+1} ⊆ · · · ⊆ W_h, there is a leaf v in the subtree rooted at u corresponding to p. Moreover, by the way we formed our tree edges in step (d), we have that ψ_s(w′) = p for all w′ on the (u, v)-path, and hence ∑_{w′ ∈ (u,v)-path} y_{w′} is precisely Cov_{≥t}(p).
Now, for contradiction, suppose there is some leaf, corresponding to say point p, such that its root-leaf path has total y-assignment less than 1. Among all such unsatisfied points p, pick the one that appears in a winning set W_t with t as small as possible.
By the preceding observation, the total y-assignment p receives on its path from level h to level t is exactly Cov_{≥t}(p). Moreover, suppose p loses to q at level t − 1, i.e., ψ_t^{−1}(p) is a child of ψ_{t−1}^{−1}(q). In particular, this means that q has also been a winner up to level t, and so the total y-assignment on q's path up to level t is also precisely Cov_{≥t}(q). Additionally, since ψ_{t−1}^{−1}(q) became the parent node for ψ_t^{−1}(p), we know that Cov_{≥t}(q) ≤ Cov_{≥t}(p) due to the way we choose winners in step (a) of the while loop. Finally, by our minimal choice of t, we know that q is fractionally satisfied by the y-solution. Therefore, there is a fractional assignment of at least 1 − Cov_{≥t}(q) on q's path from level t − 1 up to level 1. Putting these observations together, we get that the total fractional assignment on p's root-leaf path is at least Cov_{≥t}(p) + (1 − Cov_{≥t}(q)) ≥ 1, which is the desired contradiction.
The following lemma shows that any good integral solution to the RMFC-T instance IT can be converted
to a good integral solution for the NUkC instance I.
Lemma 3.4. Suppose there exists a feasible solution N to I_T such that for all 1 ≤ t ≤ h, |N ∩ L_t| ≤ αk_t. Then there is a solution to the NUkC instance I that opens, for each 1 ≤ t ≤ h, at most αk_t balls of radius at most 2r_{≥t}, where r_{≥t} := r_t + r_{t+1} + · · · + r_h.
Proof. Construct the NUkC solution as follows: for each level 1 ≤ t ≤ h and every vertex w ∈ N ∩ L_t, place a ball of radius 2r_{≥t} centered at ψ_t(w). We claim that every point in X is covered by some ball. Indeed, for any p ∈ X, look at the leaf v = ψ_{h+1}^{−1}(p), and let w ∈ N be a node on the root-leaf path; say w ∈ L_t. Now observe that d(p, ψ_t(w)) ≤ 2r_{≥t}; this is because for any edge (u′, v′) in the tree where u′ ∈ L_s is the parent of v′ ∈ L_{s+1}, we have d(ψ_s(u′), ψ_{s+1}(v′)) ≤ 2r_s, and summing these bounds along the path from w down to v gives at most 2(r_t + · · · + r_h) = 2r_{≥t}.
This completes the reduction, and we now prove a few results which follow easily from known results about
the firefighter problem.
Theorem 3.5. There is a polynomial time (O(log∗ n), 8)-bi-criteria algorithm for NUkC.
Proof. Given any instance I of NUkC, we first club the radii to the nearest power of 2 to get an instance I′ with radii (k_1, r_1), · · · , (k_h, r_h) such that an (a, b)-factor solution for I′ is an (a, 2b)-solution for I. Now, by scaling, we assume that the optimal dilation for I′ is 1; we let x be the feasible solution to the NUkC LP. Then, using Algorithm 1, we construct the tree I′_T and a feasible solution y to the RMFC-T LP. We can now use the following theorem of Chalermsook and Chuzhoy [3]: given any feasible solution to the RMFC-T LP, we can obtain a feasible set N covering all the leaves such that for all t, |N ∩ L_t| ≤ O(log∗ n) · k_t. Finally, we apply Lemma 3.4 to obtain an (O(log∗ n), 4) solution to I′ (since r_{≥t} ≤ 2r_t after the rounding).
Proof of Theorem 1.4 and Theorem 1.5. We use the following claim regarding the integrality gap of RMFC-T
LP for depth 2 trees.
Claim 3.6. When h = 2 and the k_t are integers, given any feasible fractional solution to RMFC-T LP, we can also find a feasible integral solution.
Proof. Given a feasible solution y to RMFC-T LP, we need to find a set N such that |N ∩ L_t| ≤ k_t for t = 1, 2. There must exist at least one vertex w ∈ L_1 such that y_w ∈ (0, 1), for otherwise the solution y is trivially integral. If only one vertex w ∈ L_1 is fractional, then since k_1 is an integer, we can raise this y_w to an integer as well. So at least two vertices w and w′ in L_1 are fractional. Now, without loss of generality, let us assume that |C(w)| ≥ |C(w′)|, where C(w) is the set of children of w. Now, for some small constant 0 < ε < 1, we do the following: y′_w := y_w + ε, y′_{w′} := y_{w′} − ε, y′_c := y_c − ε for all c ∈ C(w), and y′_c := y_c + ε for all c ∈ C(w′). Note that y(L_1) remains unchanged, y(L_2) can only decrease, and root-leaf paths still add up to at least 1. We repeat this until we rule out all fractional values.
To see the proof of Theorem 1.4, note that an instance of the k-center with outliers problem is an NUkC instance with tuples (k, 1), (ℓ, 0), that is, r_1 = 1 and r_2 = 0. We solve the LP relaxation and obtain the tree and a fractional RMFC-T solution. The above claim yields a feasible integral solution to RMFC-T since h = 2, and finally note that r_{≥1} = r_1 for kCwO, implying we get a 2-factor approximation.
The proof of Theorem 1.5 is similar. If r_1 < θr_2 where θ = (√5 + 1)/2, then we simply run k-center with k = k_1 + k_2. This gives a 2θ = (√5 + 1)-approximation. Otherwise, we apply Lemma 3.4 to get a 2(1 + 1/θ) = (√5 + 1)-approximation.
We end this section with a general theorem, which is an improvement over Lemma 3.4 in the case when
many of the radius types are close to each other, in which case r≥t could be much larger than rt . Indeed, the
natural way to overcome this would be to group the radius types into geometrically increasing values as we
did in the proof of Theorem 3.5. However, for some technical reasons we will not be able to bucket the radius
types in the following section, since we would instead be bucketing the number of balls of each radius type in
a geometric manner. Instead, we can easily modify Algorithm 1 to build the tree by focusing only on radius
types where the radii grow geometrically.
Theorem 3.7. Given an NUkC instance I = {M = (X, d), (k1 , r1 ), (k2 , r2 ), . . . , (kh , rh )} and an LP
solution x for NUkC LP, there is an efficient reduction which generates an RMFC-T instance IT and an LP
solution y to RMFC-T LP, such that the following holds:
(i) For any two tree vertices w ∈ L_t and v ∈ L_{t′}, where w is an ancestor of v (which means t ≤ t′), suppose p and q are the corresponding points in the metric space, i.e., p = ψ_t(w) and q = ψ_{t′}(v); then it holds that d(p, q) ≤ 8 · r_t.
(ii) Suppose there exists a feasible solution N to IT such that for all 1 ≤ t ≤ h, |N ∩ Lt | ≤ αkt . Then
there is a solution to the NUkC instance I that opens, for each 1 ≤ t ≤ h, at most αkt balls of radius
at most 8 · rt .
3.1 Proof of Theorem 3.7
Both the algorithm and the proof are very similar to the previous ones, and we now provide them for completeness. At a high level, the only difference occurs when we identify and propagate winners: instead of doing it for each
radius type, we identify barrier levels where the radius doubles, and perform the clustering step only at the
barrier levels. We now present the algorithm, which again proceeds in rounds h + 1, h, h − 1, . . . , 2, but
makes jumps whenever there are many clusters of similar radius type. To start with, define rh+1 = 0.
Algorithm 2 Round t of the Improved Reduction.
Input: Level Lt , subtrees below Lt , the mappings ψs : Ls → X for all t ≤ s ≤ h.
Output: Level Lt−1 , the connections between Lt−1 and Lt , and the mapping ψt−1 .
Let t′ = min{s : r_s ≤ 2r_{t−1}} be the type of the largest radius that is at most 2r_{t−1}.
Define A = ψ_t(L_t), the set of points that are winners at level t.
while A ≠ ∅ do
(a) Choose the point p ∈ A with minimum coverage Cov_{≥t}(p).
(b) Let N(p) := {q ∈ A : d(p, q) ≤ 2r_{t′}} denote all points in A within 2r_{t′} of p.
(c) Create new vertices w_{t−1}, . . . , w_{t′−1} in levels L_{t−1}, . . . , L_{t′−1} respectively, all corresponding to p, i.e., set ψ_i(w_i) := p for all t′ − 1 ≤ i ≤ t − 1. Connect each pair of these vertices in successive levels with edges. Call p a winner at levels t − 1, . . . , t′ − 1.
(d) Create edges (w_{t−1}, v) for the tree vertices v ∈ ψ_t^{−1}(N(p)) associated with N(p) at level t.
(f) Set A ← A \ N(p).
(g) Set y_{w_i} = Cov_i(p) for all t′ − 1 ≤ i ≤ t − 1.
end while
Jump to round t′ − 1 of the algorithm. Add t′ − 1 to the set of barrier levels.
Our proof proceeds almost identically to those of Lemmas 3.1 and 3.4, but now our tree has the additional property that for any two nodes u ∈ L_i and v ∈ L_{i′} where u is an ancestor of v, the distance between the corresponding points in the metric space, p = ψ_i(u) and q = ψ_{i′}(v), satisfies d(p, q) ≤ 8r_i; this property did not hold in the earlier reduction. This is easy to see because, as we traverse a tree path from u to v, each time we change winners the distance between the corresponding points in the metric space decreases geometrically. This proves property (i) of Theorem 3.7. With this in hand, the remaining proofs for the second property are almost identical to the ones in Section 3, and we sketch them below for completeness.
Lemma 3.8. The solution y is a feasible solution to RMFC-T LP on IT with dilation 1.
Proof. The proof is via two claims for the two different sets of inequalities.
Claim 3.9. For all 1 ≤ t ≤ h, we have ∑_{w ∈ L_t} y_w ≤ k_t.
Proof. Fix a barrier level t. Let W_t ⊆ X denote the winners at level t, that is, W_t = ψ_t(L_t). By definition of the algorithm, ∑_{w ∈ L_t} y_w = ∑_{p ∈ W_t} Cov_t(p). Now note that for any two points p, q ∈ W_t, we have B(p, r_t) ∩ B(q, r_t) = ∅. To see this, consider the first of the two points to be chosen from A in the round (corresponding to the previous barrier) when L_t was being formed. If this is p, then all points in the radius-2r_t ball around p are deleted from A. Since the balls are disjoint, the second inequality of NUkC LP implies ∑_{p ∈ W_t} ∑_{q ∈ B(p, r_t)} x_{q,t} ≤ k_t; the inner sum is by definition Cov_t(p). The same argument holds for all levels t between two consecutive barrier levels t_1 and t_2 with t_1 > t_2, as the winner set remains the same and the radius r_t is only smaller than the radius r_{t_2} at the barrier t_2.
Claim 3.10. For any leaf node w ∈ L, we have ∑_{v ∈ P_w} y_v ≥ 1.
Proof. This proof is identical to that of Claim 3.3, and we repeat it for completeness. Fix a level t and a winner point p ∈ W_t. Let u ∈ L_t be such that ψ_t(u) = p. Since W_t ⊆ W_{t+1} ⊆ · · · ⊆ W_h, there is a leaf v in the subtree rooted at u corresponding to p. Moreover, by the way we formed our tree edges in step (d), we have that ψ_s(w′) = p for all w′ on the (u, v)-path, and hence ∑_{w′ ∈ (u,v)-path} y_{w′} is precisely Cov_{≥t}(p).
Now, for contradiction, suppose there is some leaf, corresponding to say point p, such that its root-leaf path has total y-assignment less than 1. Among all such unsatisfied points p, pick the one that appears in a winning set W_t with t as small as possible.
By the preceding observation, the total y-assignment p receives on its path from level h to level t is exactly Cov_{≥t}(p). Moreover, suppose p loses to q at level t − 1, i.e., ψ_t^{−1}(p) is a child of ψ_{t−1}^{−1}(q). In particular, this means that q has also been a winner up to level t, and so the total y-assignment on q's path up to level t is also precisely Cov_{≥t}(q). Additionally, since ψ_{t−1}^{−1}(q) became the parent node for ψ_t^{−1}(p), we know that Cov_{≥t}(q) ≤ Cov_{≥t}(p) due to the way we choose winners in step (a) of the while loop. Finally, by our minimal choice of t, we know that q is fractionally satisfied by the y-solution. Therefore, there is a fractional assignment of at least 1 − Cov_{≥t}(q) on q's path from level t − 1 up to level 1. Putting these observations together, we get that the total fractional assignment on p's root-leaf path is at least Cov_{≥t}(p) + (1 − Cov_{≥t}(q)) ≥ 1, which is the desired contradiction.
Finally, the following lemma shows that any good integral solution to the RMFC-T instance IT can be
converted to a good integral solution for the NUkC instance I.
Lemma 3.11. Suppose there exists a feasible solution N to IT such that for all 1 ≤ t ≤ h, |N ∩ Lt | ≤ αkt .
Then there is a solution to the NUkC instance I that opens, for each 1 ≤ t ≤ h, at most αkt balls of radius
at most 8rt .
Proof. Construct the NUkC solution as follows: for each level 1 ≤ t ≤ h and every vertex w ∈ N ∩ L_t, place a ball of radius 8 · r_t centered at ψ_t(w). We claim that every point in X is covered by some ball. Indeed, for any p ∈ X, look at the leaf v = ψ_{h+1}^{−1}(p), and let w ∈ N be a node on the root-leaf path which covers it in the instance I_T; say w ∈ L_t. By property (i) of Theorem 3.7, the distance between ψ_t(w) and p is at most 8 · r_t, and hence the ball of radius 8 · r_t around ψ_t(w) covers p. The number of balls of radius type t is trivially at most αk_t.
4 Getting an (O(1), O(1))-approximation algorithm
In this section, we improve our approximation factor on the number of clusters from O(log∗ n) to O(1), while maintaining a constant approximation in the radius dilation. As mentioned in the introduction, this requires more ideas, since using NUkC LP one cannot get any factor better than an (O(log∗ n), O(1))-bicriteria approximation: any integrality gap for RMFC-T LP translates to an (Ω(log∗ n), Ω(1)) integrality gap for NUkC LP.
Our algorithm is heavily inspired by the recent paper of Adjiashvili et al. [1], who give an O(1)-approximation for the RMFC-T problem. In fact, the structure of our algorithm follows the same three “steps” of their algorithm. Given an RMFC-T instance, [1] first “compress” the input tree to get a new tree whose depth is bounded; secondly, [1] give a partial rounding result which saves “bottom heavy” leaves, that is, leaves which in the LP solution are covered by nodes from low levels; and finally, Adjiashvili et al. [1] give a clever partial enumeration algorithm for guessing the nodes from the top levels chosen by the optimum solution. We also proceed in these three steps, with the first two being very similar to the first two steps in [1].
However, the enumeration step requires new ideas for our problem. In particular, the enumeration procedure
in [1] crucially uses the tree structure of the firefighter instance, and the way our reduction generates the
tree for the RMFC-T instance is by using the optimal LP solution for the NUkC instance, which in itself
suffers from the Ω(log∗ n) integrality gap. Therefore, we need to devise a more sophisticated enumeration
scheme although the basic ideas are guided by those in [1]. Throughout this section, we do not optimize for
the constants.
4.1 Part I: Radii Reduction
In this part, we describe a preprocessing step which decreases the number of types of radii. This is similar to
Theorem 5 in [1].
Theorem 4.1. Let I be an instance of NUkC with radii {r_1, r_2, · · · , r_k}. Then we can efficiently construct a new instance Î with radii multiplicities (k_0, r̂_0), . . . , (k_L, r̂_L) and L = Θ(log k) such that:
(i) k_i := 2^i for all 0 ≤ i < L, and k_L ≤ 2^L.
(ii) If the NUkC instance I has a feasible solution, then there exists a feasible solution for Î.
(iii) Given an (α, β)-bicriteria solution to Î, we can efficiently obtain a (3α, β)-bicriteria solution to I.
Proof. For an instance I, we construct the compressed instance Î as follows. Partition the radii into Θ(log k) classes by defining barriers at r̂_i = r_{2^i} for 0 ≤ i ≤ ⌊log k⌋. Now, to create instance Î, we simply round up all the radii r_j with 2^i ≤ j < 2^{i+1} to the value r̂_i = r_{2^i}. Notice that the multiplicity of r̂_i is precisely 2^i (except maybe for the last bucket, where there might be fewer radii rounded up than the budget allows).

Property (i) holds by construction. Property (ii) follows from the way we rounded up the radii. Indeed, if the optimal solution for I opens a ball of radius r_j around a point p, then we can open a cluster of radius r̂_i around p, where i is such that 2^i ≤ j < 2^{i+1}. Clearly the number of clusters of radius r̂_i is at most 2^i, because OPT uses at most one cluster of each radius r_j.

For property (iii), suppose we have a solution Ŝ for Î which opens α · 2^i clusters of radius βr̂_i for all 0 ≤ i ≤ L. Construct a solution S for I as follows. For each 1 ≤ i ≤ L, let C_i denote the set of centers where Ŝ opens balls of radius βr̂_i. In the solution S, we also open balls at precisely these centers, with 2α balls of radius r_j for every 2^{i−1} ≤ j < 2^i. Since |C_i| ≤ α · 2^i, we can open a ball at every point in C_i; furthermore, since j < 2^i, we have r_j ≥ r̂_i, and so we cover whatever the balls from Ŝ covered.

Finally, we also open the α clusters (corresponding to i = 0) of radius βr_1 = βr̂_0 at the respective centers C_0 where Ŝ opens centers of radius r̂_0. Therefore, the total number of clusters of each radius type is at most 2α, with the exception of r_1, which may have 3α clusters.
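The bucketing in this proof can be sketched as follows (an illustrative helper with 1-indexed radii as in the text; the function name `compress_radii` is ours):

```python
import math

def compress_radii(radii):
    """Bucket a sorted list r_1 >= ... >= r_k into Theta(log k) classes:
    class i keeps the representative value r_{2^i} and absorbs all r_j
    with 2^i <= j < 2^{i+1}.  Returns (multiplicity k_i, value r_hat_i)."""
    k = len(radii)
    L = int(math.floor(math.log2(k)))
    out = []
    for i in range(L + 1):
        lo = 2 ** i                          # 1-indexed start of the class
        hi = min(2 ** (i + 1), k + 1)        # last bucket may be partial
        out.append((hi - lo, radii[lo - 1]))
    return out

rs = [100, 90, 80, 40, 30, 20, 10]           # k = 7 radii
assert compress_radii(rs) == [(1, 100), (2, 90), (4, 40)]
```

Note that every r_j is rounded up (never down) to its class representative, which is exactly why property (ii) holds, and the multiplicities 1, 2, 4, . . . match property (i).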
4.2 Part II: Satisfying Bottom Heavy Points
One main reason why the above height reduction step is useful is the following theorem from [1] for RMFC-T instances on trees; we provide a proof sketch for completeness.
Theorem 4.2 ([1]). Given a tree T of height h and a feasible solution y to (RMFC-T LP), we can find a
feasible integral solution N to RMFC-T such that for all 1 ≤ t ≤ h, |N ∩ Lt | ≤ kt + h.
Proof. Let y be a basic feasible solution of (RMFC-T LP). Call a vertex v of the tree loose if y_v > 0 and the sum of the y-mass on the vertices from v to the root (inclusive of v) is < 1. Let V_L be the set of loose vertices of the tree, and let V_I be the set of vertices with y_v = 1. Clearly N = V_L ∪ V_I is a feasible solution: every leaf-to-root path either contains an integral vertex or at least two fractional vertices, with the vertex closer to the root being loose. Next we claim that |V_L| ≤ h; this proves the theorem since |N ∩ L_t| ≤ |V_I ∩ L_t| + |V_L| ≤ k_t + |V_L|. The full proof can be found in Lemma 6 of [1]; here is a high-level sketch. There are |L| + h inequalities in (RMFC-T LP), and so the number of fractional variables is at most |L| + h. We may assume there are no vertices with y_v = 1. Now, in every leaf-to-root path there must be at least 2 fractional vertices, and the one closest to the leaf must be non-loose. If the closest fractional vertex to each leaf were unique, then that would account for |L| fractional non-loose vertices, implying that the number of loose vertices is at most h. This may not be true; however, if we look at a linearly independent set of tight inequalities, we can argue uniqueness, as a clash can be used to exhibit linear dependence between the tight constraints.
Theorem 4.3. Suppose we are given an NUkC instance Î with radii multiplicities (k_0, r̂_0), (k_1, r̂_1), . . . , (k_L, r̂_L) with budgets k_i = 2^i for radius type r̂_i, and an LP solution x to (NUkC LP) for Î. Let τ = log log L, and let X′ ⊆ X be the points covered mostly by small radii, that is, Cov_{≥τ}(p) ≥ 1/2 for every p ∈ X′. Then there is an efficient procedure round which opens at most O(k_t) balls of radius O(r̂_t) for τ ≤ t ≤ L, and covers all of X′.
Proof. The procedure round works as follows: we partition the points of X′ into two sets, one set XU in which the points receive at least 1/4 of their coverage from clusters of radius r̂i with i ∈ {log log L, log log L + 1, . . . , log L}, and another set XB in which the points receive at least 1/4 of their coverage from clusters of levels t ∈ {log L + 1, log L + 2, . . . , L}. More precisely, XU := {p ∈ X′ : Σ_{t=τ}^{log L} Covt(p) ≥ 1/4}, and XB := X′ \ XU.
Now consider the following LP solution to (NUkC LP) for Î restricted to XU: we scale x by a factor 4 and zero out x on radius types r̂i for i ∉ {log log L, . . . , log L}. By the definition of XU this is a feasible fractional solution; furthermore, the LP-reduction algorithm described in Section 3 will lead to a tree T of height ≤ log L and a fractional solution y for (RMFC-T LP) on T, where each ki ≥ 2^{log log L} = log L. Applying Theorem 4.2, we can find an integral solution N with at most O(ki) vertices at levels i ∈ {log log L, . . . , log L}. We can then translate this solution back to NUkC using Theorem 3.7 and find O(kt) clusters of radius O(r̂t) to cover all the points of XU. A similar argument, applied to the smaller radius types r̂t for t ∈ {log L, . . . , L}, covers the points in XB.
We now show how we can immediately also get a (very weakly) quasi-polynomial time O(1)-approximation for NUkC. Indeed, if we could enumerate the set of clusters of radii r̂t for 0 ≤ t < log log L, we could then explicitly solve an LP where all the uncovered points need to be fractionally covered only by clusters of radius type r̂t for t ≥ log log L. We can then round this solution using Theorem 4.3 to obtain the desired O(1)-approximation for the NUkC instance. Moreover, the time complexity of enumerating the optimal clusters of radii r̂t for 0 ≤ t < log log L is n^{O(log L)} = n^{O(log log k)}, since the number of clusters of radius at least r̂_{log log L} is at most O(2^{log log L}) = O(log L). Finally, there was nothing special in the proof of Theorem 4.3 about the choice of τ = log log L; we could set τ = log^{(q)} L, the q-th iterated logarithm of L, and obtain an O(q)-approximation. As a result, we get the following corollary. Note that this gives an alternate way to prove Theorem 3.5.
Corollary 4.4. For any q ≥ 1, there exists an (O(q), O(1))-factor bicriteria algorithm for NUkC which runs in n^{O(log^{(q)} k)} time.
4.3 Part III: Clever Enumeration of Large Radii Clusters
In this section, we show how to obtain the (O(1), O(1))-factor bi-criteria algorithm. At a high level, our algorithm tries to “guess” the centers² A of large radius, that is, r̂i for i ≤ τ := log log L = log log log k, which the optimum solution uses. However, this guessing is done in a cleverer way than in Corollary 4.4. In particular, given a guess which is consistent with the optimum solution (the initial “null set” guess is trivially consistent), our enumeration procedure generates a list of candidate additions to A of size at most 2^τ ≈ poly(log log k) (instead of n), one of which is a consistent enhancement of the guessed set A. This reduction in the number of candidates also requires us to maintain a guess D of points where the optimum solution doesn't open centers. Furthermore, we need to argue that the depth of recursion is also bounded by poly(log log k); this crucially uses the technology developed in Section 3. Altogether, the total time is at most (poly(log log k))^{poly(log log k)} = o(k) for large k.
We start with some definitions. Throughout, A and D represent sets of tuples of the form (p, t) where p ∈ X and t ∈ {0, 1, . . . , τ}. Given such a set A, we associate a partial solution SA which opens a ball of radius 22r̂t at the point p for each (p, t) ∈ A. For the sake of analysis, fix an optimum solution OPT. We say the set A is consistent with OPT if for all (p, t) ∈ A, there exists a unique q ∈ X such that OPT opens a ball of radius r̂t at q and d(p, q) ≤ 11r̂t. In particular, this implies that SA covers all points which this OPT-ball covers.
We say the set D is consistent with OPT if for all (q, t) ∈ D, OPT doesn't open a radius-r̂t ball at q (it may open a ball of a different radius at q, though). Given a pair of sets (A, D), we define the minLevel of each point p as follows:

minLevelA,D(p) := 1 + max{t : (q, t) ∈ D for all q ∈ B(p, r̂t)}.

If (A, D) is a consistent pair and minLevelA,D(p) = t, then in the OPT solution p is covered by a ball of radius r̂t or smaller.
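A literal reading of this definition can be sketched as follows (our own toy code; the metric, the radii, and the helper `ball` standing for B(p, r̂t) are illustrative stand-ins):

```python
def min_level(p, D, ball, tau):
    """1 + the largest t <= tau with (q, t) in D for every q in B(p, rhat_t);
    returns 0 if no such t exists."""
    best = -1
    for t in range(tau + 1):
        if all((q, t) in D for q in ball(p, t)):
            best = t
    return best + 1


# Toy metric on the line with two radius types (tau = 1).
points = [0.0, 1.0, 2.0]
rhat = [2.0, 1.0]                                    # decreasing radii
ball = lambda p, t: [q for q in points if abs(q - p) <= rhat[t]]
D = {(q, 0) for q in points}     # no radius-rhat_0 ball opened anywhere
print(min_level(1.0, D, ball, 1))                    # 1
```

With all level-0 openings forbidden, the point 1.0 can only be covered at level ≥ 1, which is exactly what minLevel reports.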
Next, we describe a nuanced LP-relaxation for NUkC. Fix a pair of sets (A, D) as described above. Let XG be the subset of points in X covered by the partial solution SA. Fix a subset Y ⊆ X \ XG of points. Define the following LP, denoted (LPNUkC(Y, A, D)):

Σ_{t=minLevel(p)}^{L} Σ_{q∈B(p,r̂t)} x_{q,t} ≥ 1    for all p ∈ Y,
Σ_{q∈Y} x_{q,t} ≤ k_t    for all t ∈ {1, . . . , L},
x_{p,t} = 1    for all (p, t) ∈ A.
The following claim encapsulates the utility of the above relaxation.
Claim 4.5. If (A, D) is consistent with OPT, then (LPNUkC (Y, A, D)) is feasible.
²Actually, we end up guessing centers “close” to the optimum centers, but for this introductory paragraph this intuition is adequate.
Proof. We describe a feasible solution to the above LP using OPT. Given OPT, define O to be the collection of pairs (q, t) where OPT opens a radius-r̂t ball at point q. Note that the number of tuples in O with second coordinate t is ≤ kt.
Since A is consistent with OPT, for every (p, t) ∈ A there exists a unique (q, t) ∈ O; remove all such tuples from O. Define x_{q,t} = 1 for all the remaining tuples. By the uniqueness property, we see that the second inequality of the LP is satisfied. We say a point p ∈ X is covered by (q, t) if p lies in the r̂t-radius ball around q. Since Y ⊆ X \ XG, and since the partial solution SA covers all points covered by the removed (q, t) tuples, we see that every point p ∈ Y is covered by some remaining (q, t) ∈ O. Since D is consistent with OPT, for every point p ∈ Y and t < minLevelA,D(p), if q ∈ B(p, r̂t) then (q, t) ∉ O. Therefore, the first inequality is also satisfied.
Finally, for convenience, we define a forbidden set F := {(p, i) : p ∈ X, 1 ≤ i ≤ τ} which, if added to D, disallows any large-radius balls from being placed anywhere.
Now we are ready to describe the enumeration procedure, Algorithm 3. We start with A and D empty, and thus vacuously consistent with OPT. The enumeration procedure ensures that, given a consistent pair (A, D), either it finds a good solution using LP rounding (Step 10), or it generates candidate additions to A or D (Steps 18–20), ensuring that one of them leads to a larger consistent pair.
Algorithm 3 Enum(A, D, γ)
1: Let XG = {p : ∃(q, i) ∈ A s.t. d(p, q) ≤ 22r̂i} denote the points covered by SA.
2: if there is no feasible solution to LPNUkC(X \ XG, A, D) then
3:    Abort.    // Claim 4.5 implies (A, D) is not consistent.
4: else
5:    Let x∗ be a feasible solution to LPNUkC(X \ XG, A, D).
6: end if
7: Let XB = {u ∈ X \ XG : Cov≥τ(u) ≥ 1/2} denote the bottom-heavy points in x∗.
8: Let SB be the solution implied by Theorem 4.3.    // This solution opens O(kt) balls of radius O(r̂t) for τ ≤ t ≤ L and covers all of XB.
9: Let XT = X \ (XG ∪ XB) denote the top-heavy points in x∗.
10: if LPNUkC(XT, A, F ∪ D) has a feasible solution xT then
11:    By definition of F, in xT we have Cov≥τ(u) = 1 for all u ∈ XT.
12:    Let ST be the solution implied by Theorem 4.3.    // This solution opens O(kt) balls of radius O(r̂t) for τ ≤ t ≤ L and covers all of XT.
13:    Output (SA ∪ SB ∪ ST).    // This is an (O(1), O(1))-approximation for the NUkC instance.
14: else
15:    for every level 0 ≤ t ≤ τ do
16:       Let Ct = {p ∈ XT : minLevelA,D(p) = t}, the set of points in XT with minLevel t.
17:       Use the LP-aware reduction from Section 3 with x∗ and the point set Ct to create tree Tt.
18:       for every winner p at level t in Tt do
19:          Enum(A ∪ {(p, t)}, D, γ − 1)
20:          Enum(A, D ∪ ∪_{p′∈B(p,11r̂t)} {(p′, t)}, γ − 1)
21:       end for
22:    end for
23: end if
Define γ0 := 4 log log k · log log log k. The algorithm is run as Enum(∅, ∅, γ0). The proof that we get a polynomial time (O(1), O(1))-bicriteria approximation algorithm follows from three lemmas. Lemma 4.6 shows that if Step 10 is true with a consistent pair (A, D), then the output in Step 13 is an (O(1), O(1))-approximation. Lemma 4.7 shows that Step 10 is indeed true for γ0 as set. Finally, Lemma 4.8 shows that with such a γ0, the algorithm runs in polynomial time.
Lemma 4.6. If (A, D) is a consistent pair such that Step 10 is true, then the solution returned is an (O(1), O(1))-approximation.

Proof. Since A is consistent with OPT, SA opens at most kt centers of radius ≤ 22r̂t for all 0 ≤ t ≤ τ. By design, SB and ST open at most O(kt) centers of radius ≤ O(r̂t) for τ ≤ t ≤ L.
Lemma 4.7. Enum(∅, ∅, γ0) finds a consistent pair (A, D) such that Step 10 is true.

Proof. For this we identify a particular execution path of the procedure Enum(A, D, γ) that at every point maintains a pair (A, D) consistent with OPT. At the beginning of the algorithm, A = ∅ and D = ∅, which is consistent with OPT.
Now consider a pair (A, D) that is consistent with OPT and assume that we are within the execution path Enum(A, D, γ). Let X \ XG be the points not covered by A and let x∗ be a solution to LPNUkC(X \ XG, A, D). If OPT covers all top-heavy points XT using only smaller radii, then LPNUkC(XT, A, F ∪ D) has a feasible solution, implying that Step 10 is true. So we may assume there exists at least one top-heavy point q ∈ XT that OPT covers using a ball of radius ≥ r̂τ around a center oq. In particular, minLevelA,D(q) ≤ τ. Let q ∈ Ct, so that q belongs to Tt for some 0 ≤ t ≤ τ. Let p ∈ Pt be the level-t winner in Tt such that q belongs to the sub-tree rooted at p in Tt; p may or may not be q. We now show that there is at least one recursive call where we make non-trivial progress on (A, D). Indeed, we do this in two cases:
Case (A): OPT opens a ball of radius r̂t at a point o such that d(o, p) ≤ 11r̂t. In this case, Step 19 maintains consistency. Furthermore, we can “charge” (p, t) uniquely to the point o with radius r̂t. To see this, assume for contradiction that before arriving at the recursive call where (p, t) is added to A, some other tuple (u, t) ∈ A′, in an earlier recursive call with (A′, D′) as parameters, charged to (o, t). Then by definition we know that d(u, o) ≤ 11r̂t, implying d(u, p) ≤ 22r̂t. Then p would be in XG in all subsequent iterations, contradicting that p ∈ XT currently.
Case (B): If OPT does not open a ball of radius r̂t at any point o with d(o, p) ≤ 11r̂t, then for all points p′ ∈ B(p, 11r̂t) we can add (p′, t) to D. In this case, we follow the recursive call in Step 20.
To sum up, we can always follow the recursive calls in the consistent direction; but how do we bound the depth of recursion? In case (A), the measure of progress is clear: we increase the size of |A|, and it can be argued (we do so below) that the maximum size of A is at most poly(log log k). Case (B) is subtler. We do increase the size of D, but D could grow as large as Θ(n). Before going to the formal proof, let us intuitively argue what “we learn” in case (B). Recall that q is covered in OPT by a ball around the center oq. Since minLevel(q) = t ≤ τ, by definition there is a point v ∈ B(q, r̂t) such that (v, t) ∉ D, and d(q, oq) ≤ r̂t. Together, we get d(v, oq) ≤ 2r̂t, that is, v ∈ B(oq, 2r̂t). Note also that since q lies in p's subtree in Tt, by construction of the trees, d(p, q) ≤ 8r̂t by property (i) of Theorem 3.7. Therefore, d(p, oq) ≤ 9r̂t, and in case (B), for all points u ∈ B(oq, 2r̂t) we put (u, t) in the set D in the next recursive call. This is “new information”, since for the current D we know that at least one point v ∈ B(oq, 2r̂t) had (v, t) ∉ D.
Formally, we define the following potential function. Let Oτ denote the centers in OPT around which balls of radius r̂j, j ≤ τ, have been opened. Given the set D, for 0 ≤ t ≤ τ and for all o ∈ Oτ, define the indicator variable Z^{(D)}_{o,t} which is 1 if for all points u ∈ B(o, 2r̂t) we have (u, t) ∈ D, and 0 otherwise.

Φ(A, D) := |A| + Σ_{o∈Oτ} Σ_{t=0}^{τ} Z^{(D)}_{o,t}

Note that Φ(∅, ∅) = 0. From the previous paragraph, we conclude that in both case (A) and case (B), the potential increases by at least 1.
Finally, for any consistent (A, D) we can upper bound Φ(A, D) as follows. Since A is consistent, |A| ≤ Σ_{t=0}^{τ} 2^t ≤ 2^{τ+1} = 2 log L = 2 log log k. The second term in Φ is at most 2^{τ+1} · τ = 2 log L · log log L. Thus, in at most 2 log log k · (1 + log log log k) ≤ γ0 steps we reach a consistent pair (A, D) with Step 10 true.
Lemma 4.8. Enum(∅, ∅, γ0) runs in polynomial time for large enough k.

Proof. Each single call of Enum clearly takes polynomial time, so we bound the number of recursive calls; indeed, this number will be o(k). We first bound the number of recursive calls in a single execution of Enum(A, D, γ). For a fixed pair (A, D), Algorithm 3 constructs trees T0, . . . , Tτ in Step 17, using the reduction algorithm from Section 3. Let Ltj represent the set of nodes at level j in the tree Tt. Then Pt = ψt(Ltt) represents the set of points that are winners at level t in Tt. Now, for any tree Tt, Algorithm 3 makes two recursive calls for each winner in Pt (Steps 19–20). Let PAD = ∪_{t=0}^{τ} Pt be the set of all the winners that the algorithm considers in a single call to Enum(A, D, γ). The total number of recursive calls in a single execution is therefore 2|PAD|. Now we claim that for a fixed pair (A, D), the total number of winners is bounded.
Claim 4.9. |PAD| ≤ 4 log log k · log log log k.

Proof. Consider a tree Tt and the corresponding set Pt as defined above. We use y^{(t)} to denote the RMFC-T LP solution given along with the tree Tt by the reduction algorithm in Section 3. Let p ∈ Pt be a winner at level t (and consequently, at level τ also), and suppose it is mapped to tree vertices w at level t and w′ at level τ. Then, by the way the tree was constructed and because p ∈ XT is top-heavy, we have Σ_{u ∈ [w,w′]-path} y^{(t)}_u ≥ 1/2 (refer to the proof of Claim 3.3 for more clarity). So each winner at level t has a path down to level τ with fractional coverage at least 1/2. But the total fractional coverage in the top part of the tree is at most the total budget, which is Σ_{t=1}^{τ} 2^t ≤ 2 log L ≤ 2 log log k. Therefore, |Pt| ≤ 4 log log k. Adding over all 1 ≤ t ≤ τ gives |PAD| ≤ 4 log log k · τ ≤ 4 log log k · log log log k.
Since the recursion depth is at most γ0, the total number of recursive calls made to Enum is loosely upper bounded by (poly(log log k))^{poly(log log k)} = o(k), thus completing the proof.
References
[1] D. Adjiashvili, A. Baggio, and R. Zenklusen. Firefighting on trees beyond integrality gaps. CoRR,
abs/1601.00271, 2016.
[2] J. Byrka, T. Pensyl, B. Rybicki, A. Srinivasan, and K. Trinh. An improved approximation for k-median,
and positive correlation in budgeted optimization. Proceedings, ACM-SIAM Symposium on Discrete
Algorithms (SODA), 2015.
[3] P. Chalermsook and J. Chuzhoy. Resource minimization for fire containment. Proceedings, ACM-SIAM
Symposium on Discrete Algorithms (SODA), 2010.
[4] M. Charikar, L. O'Callaghan, and R. Panigrahy. Better streaming algorithms for clustering problems.
ACM Symp. on Theory of Computing (STOC), 2003.
[5] M. Charikar, C. Chekuri, T. Feder, and R. Motwani. Incremental clustering and dynamic information
retrieval. ACM Symp. on Theory of Computing (STOC), 1997.
[6] M. Charikar, S. Guha, D. Shmoys, and E. Tardos. A constant-factor approximation algorithm for the
k-median problem. ACM Symp. on Theory of Computing (STOC), 1999.
[7] M. Charikar, S. Khuller, D. M. Mount, and G. Narasimhan. Algorithms for facility location problems
with outliers. Proceedings, ACM-SIAM Symposium on Discrete Algorithms (SODA), 2001.
[8] S. Finbow, A. King, G. MacGillivray, and R. Rizzi. The firefighter problem for graphs of maximum
degree three. Discrete Mathematics, 307(16):2094–2105, 2007.
[9] I. L. Goertz and V. Nagarajan. Locating depots for capacitated vehicle routing. Proceedings, International Workshop on Approximation Algorithms for Combinatorial Optimization Problems, 2011.
[10] T. F. Gonzalez. Clustering to minimize the maximum intercluster distance. Theoretical Computer
Science, 38:293–306, 1985.
[11] S. Guha, R. Rastogi, and K. Shim. CURE: An efficient clustering algorithm for large databases.
Proceedings of SIGMOD, 1998.
[12] S. Har-Peled and S. Mazumdar. Coresets for k-means and k-median clustering and their applications.
ACM Symp. on Theory of Computing (STOC), 2004.
[13] D. S. Hochbaum and D. B. Shmoys. A best possible heuristic for the k-center problem. Mathematics of
operations research, 10(2):180–184, 1985.
[14] S. Im and B. Moseley. Fast and better distributed mapreduce algorithms for k-center clustering.
Proceedings, ACM Symposium on Parallelism in Algorithms and Architectures, 2015.
[15] K. Jain and V. V. Vazirani. Approximation algorithms for metric facility location and k-median problems
using the primal-dual schema and Lagrangian relaxation. J. ACM, 48(2):274–296, 2001.
[16] T. Kanungo, D. M. Mount, N. S. Netanyahu, C. D. Piatko, R. Silverman, and A. Y. Wu. A local search
approximation algorithm for k-means clustering. 2002.
[17] A. King and G. MacGillivray. The firefighter problem for cubic graphs. Discrete Mathematics,
310(3):614–621, 2010.
[18] A. Kumar, Y. Sabharwal, and S. Sen. A simple linear time (1 + ε)-approximation algorithm for k-means
clustering in any dimensions. Proceedings, IEEE Symposium on Foundations of Computer Science
(FOCS), 2004.
[19] G. Laporte. Location routing problems. In B. L. Golden and A. A. Assad, editors, Vehicle Routing:
Methods and Studies, pages 163–198. 1998.
[20] S. Li and O. Svensson. Approximating k-median via pseudo-approximation. ACM Symp. on Theory of
Computing (STOC), 2013.
[21] G. Malkomes, M. J. Kusner, W. Chen, K. Q. Weinberger, and B. Moseley. Fast distributed k-center
clustering with outliers on massive data. Advances in Neur. Inf. Proc. Sys. (NIPS), 2015.
[22] R. McCutchen and S. Khuller. Streaming algorithms for k-center clustering with outliers and with
anonymity. Proceedings, International Workshop on Approximation Algorithms for Combinatorial
Optimization Problems, 2008.
[23] H. Min, V. Jayaraman, and R. Srivastava. Combined location-routing problems: A synthesis and future
research directions. European Journal of Operational Research, 108:1–15, 1998.
BAYESIAN INVERSE PROBLEMS WITH
NON-COMMUTING OPERATORS
arXiv:1801.09540v1 [] 29 Jan 2018
PETER MATHÉ
Abstract. The Bayesian approach to ill-posed operator equations in Hilbert space has recently gained attention. In this context, when the prior distribution is Gaussian, two operators play a significant role: the one which governs the operator equation, and the one which describes the prior covariance. Typically it is assumed that these operators commute. Here we extend this analysis to non-commuting operators, replacing the commutativity assumption by a link condition. We discuss its relation to the commuting case, and we indicate that this allows us to use interpolation-type results to obtain tight bounds for the contraction of the posterior Gaussian distribution towards the data-generating element.
1. Setup and Problem formulation

We shall consider the equation

(1)    y^δ = Kx + δξ,

where δ > 0 prescribes a base noise level, and K : X → Y is a compact linear operator between Hilbert spaces. The noise element ξ is a weak random element. If this random element ξ has covariance Σ, then we may pre-whiten Equation (1) to get

(2)    z^δ = Σ^{−1/2}Kx + δΣ^{−1/2}ξ,

which is now a linear inverse problem under Gaussian white noise.
In the Bayesian framework we choose a prior for x. Since this is assumed to be a tight and centered Gaussian measure N(0, (δ²/α)C0), it is equipped with a (scaled) covariance C0 which has a finite trace. As calculations show, the relevant operator in the analysis will then be B := Σ^{−1/2}KC0^{1/2}; we refer to [2] for details.
Therefore, we have (at least) two operators to consider, the prior covariance operator C0 as well as the operator H := B∗B. Both are non-negative compact self-adjoint operators in X. To simplify the analysis we shall assume that the operator C0 is injective.
Within the present, very basic Bayesian context much is known, and
we refer to the recent survey [2] and references therein. In particular we
know that the posterior is (tight) Gaussian, and we find the following
Date: January 30, 2018.
representation for the posterior mean and covariance for the model
from (2):
(3)    x^δ_α = C0^{1/2}(αI + H)^{−1}B∗z^δ    (posterior mean), and

(4)    C^δ_α = δ² C0^{1/2}(αI + H)^{−1}C0^{1/2}    (posterior covariance).
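In finite dimensions, the representations (3) and (4) can be checked against the elementary Gaussian conjugacy formulas. The following numpy sketch is ours (a toy discretization with random stand-in matrices, and Σ = I so that pre-whitening is trivial), not taken from [2]:

```python
import numpy as np

rng = np.random.default_rng(0)
n, delta, alpha = 5, 0.1, 0.05

K = rng.standard_normal((n, n))            # forward operator (Sigma = I)
A = rng.standard_normal((n, n))
C0 = A @ A.T + np.eye(n)                   # SPD prior covariance

# symmetric square root C0^{1/2} via eigendecomposition
w, V = np.linalg.eigh(C0)
C0_half = V @ np.diag(np.sqrt(w)) @ V.T

B = K @ C0_half                            # B = Sigma^{-1/2} K C0^{1/2}
H = B.T @ B
zdelta = rng.standard_normal(n)

inv = np.linalg.inv(alpha * np.eye(n) + H)
x_post = C0_half @ inv @ B.T @ zdelta              # posterior mean, (3)
C_post = delta**2 * C0_half @ inv @ C0_half        # posterior covariance, (4)

# Cross-check against the direct Bayes formulas for the model
# z = K x + delta*xi with prior N(0, (delta^2/alpha) C0):
M = np.linalg.inv(alpha * np.linalg.inv(C0) + K.T @ K)
assert np.allclose(C_post, delta**2 * M)
assert np.allclose(x_post, M @ K.T @ zdelta)
```

The agreement rests on the matrix identity C0^{1/2}(αI + H)^{−1}C0^{1/2} = (αC0^{−1} + K∗Σ^{−1}K)^{−1}, which holds for any square root of C0.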
In the study [2] the authors highlight that the (square of the) contraction of the posterior towards the element x∗ generating the data z^δ is driven by the squared posterior contraction (SPC), given as

(5)    SPC(α, δ) := E^{x∗} E^{z^δ}_α ‖x∗ − x‖²,    α, δ > 0,

where the outward expectation is taken with respect to the data-generating distribution, that is, the distribution generating z^δ when x∗ is given, and the inward expectation is taken with respect to the posterior distribution, given data z^δ and having chosen a parameter α.
Moreover, the SPC has a decomposition

(6)    SPC(α, δ) = b²_{x∗}(α) + V^δ(α) + tr C^δ_α,

with the squared bias b²_{x∗}(α) := ‖x∗ − E^{x∗}x^δ_α‖², the estimation variance V^δ(α) := E^{x∗}‖x^δ_α − E^{x∗}x^δ_α‖², and the posterior spread tr C^δ_α.
Proposition 1 ibid. asserts that the estimation variance V^δ(α) is always smaller than the posterior spread; thus we need to bound the bias and the posterior spread only. Therefore we recall the form of the bias from Lemma 1 ibid. as

(7)    b_{x∗}(α) = ‖C0^{1/2} s_α(H) C0^{−1/2} x∗‖,    α > 0,

where we abbreviate s_α(H) = α(α + H)^{−1}. Plainly, if C0 and H commute, then we have that

(8)    b_{x∗}(α) = ‖s_α(H) x∗‖.
Also, it is easily seen from the cyclic commutativity of the trace that

(9)    tr C^δ_α = δ² tr((α + H)^{−1} C0).
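The identity (9) is plain cyclicity of the trace; a quick numerical check with toy matrices (our own sketch, using the symmetric square root C0^{1/2}):

```python
import numpy as np

rng = np.random.default_rng(1)
n, alpha, delta = 4, 0.3, 0.1

A = rng.standard_normal((n, n))
C0 = A @ A.T + np.eye(n)                   # SPD prior covariance
w, V = np.linalg.eigh(C0)
S = V @ np.diag(np.sqrt(w)) @ V.T          # symmetric root C0^{1/2}
B = rng.standard_normal((n, n)) @ S        # stand-in for Sigma^{-1/2} K C0^{1/2}
H = B.T @ B

inv = np.linalg.inv(alpha * np.eye(n) + H)
lhs = np.trace(delta**2 * S @ inv @ S)     # tr C_alpha^delta as in (4)
rhs = delta**2 * np.trace(inv @ C0)        # right-hand side of (9)
assert np.isclose(lhs, rhs)
```

Since tr(S (αI + H)^{−1} S) = tr((αI + H)^{−1} S S) = tr((αI + H)^{−1} C0), the two quantities agree up to floating-point error.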
Here we aim at providing tight bounds, both for the bias and the posterior spread, for non-commuting operators C0 and H. For Bayesian inverse problems we are aware of only one study [1], and we shall return to more details later. Within the 'classical' theory of ill-posed problems this was met earlier. The typical situation arises when smoothness is measured in some Hilbert scale, say Xa, where a ∈ R describes smoothness properties, generated by some (unbounded) operator. In order to derive convergence rates for the reconstruction of the solution to the equation (1), the operator K is then linked to the scale by assuming that there is some µ > 0 such that

‖Kx‖_Y ≍ ‖x‖_{−µ},    x ∈ X_{−µ}.
This was first presented in [12] when considering regularization in Hilbert scales; see also [7, Cor. 8.22] for a similar condition. Subsequently, in particular when measuring smoothness in a more general sense in terms of variable Hilbert scales, such links were assumed in a more general context. A comprehensive study is [11], which shows how interpolation in variable Hilbert scales can be used in order to derive error bounds under such link conditions. The present study may be seen as an application of these techniques to Bayesian inverse problems.
We shall start in Section 2 by introducing the link condition, discussing its implications, and relating it to the commuting case. Then we shall use the decomposition of the SPC as in (6), and thus find bounds for the bias in Section 3, and bounds for the posterior spread in Section 4, respectively. We then summarize the results, giving bounds for the squared posterior contraction in Section 5, and we conclude with a discussion in Section 6.
2. Linking operators and scales of Hilbert spaces
In order to introduce the fundamental link condition we need some
notions and auxiliary calculus.
2.1. Link condition. We first recall the following concepts from [2, 4].
Definition 1 (index function). A function ψ : (0, ∞) → R+ is called
an index function if it is a continuous non-decreasing function with
ψ(0) = 0.
Definition 2 (partial ordering for index functions). Given two index
functions g, h we shall write g ≺ h if the function t 7→ h(t)/g(t) is an
index function (h tends to zero faster than g).
The link condition which we are going to introduce now will be based upon a partial ordering of self-adjoint operators, and we refer to [3] for a comprehensive account. Although the monograph formally treats matrices only, most of the results transfer to (compact bounded) operators in Hilbert space.
Definition 3 (partial ordering for self-adjoint operators). Let G and G′ be bounded self-adjoint operators in some Hilbert space X. We say that G ≤ G′ if for all x ∈ X the inequality ⟨Gx, x⟩ ≤ ⟨G′x, x⟩ holds true.
The following concept of 'concavity' is the extension of concavity from real functions to self-adjoint operators by functional calculus.

Definition 4. Let f : [0, a] → R+ be a continuous function. It is called operator concave if for any pair G, H ≥ 0 of self-adjoint operators with spectra in [0, a] we have

(10)    f((G + H)/2) ≥ (f(G) + f(H))/2.
We mention that an operator concave function must be operator monotone, i.e., if G ≤ H then we will have f(G) ≤ f(H); see [3, Thm. V.2.5]¹. The concept of operator concavity will be crucial for the interpolation below. However, the above partial ordering also has implications for the ranges of the operators, and this is comprised in
Theorem 1 (Douglas' Range Inclusion Theorem, see [6]). Consider operators S : Y → X and T : Z → X, acting between Hilbert spaces. The following assertions are equivalent.
(1) R(S) ⊂ R(T);
(2) there is a constant C such that SS∗ ≤ C²TT∗;
(3) there is a constant C such that ‖S∗x‖_Y ≤ C ‖T∗x‖_Z for all x ∈ X;
(4) there is a 'factor' R : Y → Z, ‖R‖ ≤ C, such that S = TR.

Of course, for self-adjoint operators S, T : X → X the norm estimate from (3) is again for S = S∗ and T = T∗. Also, if the operator T is injective then the composition T^{−1}S is a bounded operator and ‖T^{−1}S‖ = ‖R‖ ≤ C.
As it was stressed above, the governing operators C0 and H = B∗B are non-negative self-adjoint. As an immediate application, by considering B : X → Y and its self-adjoint companion B∗B : X → X we plainly have that ‖Bu‖_Y = ‖(B∗B)^{1/2}u‖_X, u ∈ X, such that R(B∗) = R((B∗B)^{1/2}) = R(H^{1/2}).
Before formally introducing the link assumption, we first make the standing assumption that the compound mapping Σ^{−1/2}K : X → Y is bounded. The link condition will provide us with a 'tuning index function' ψ such that the ranges of ψ(C0) and K∗Σ^{−1/2} coincide, and hence we shall assume that

‖ψ(C0)v‖_X ≍ ‖Σ^{−1/2}Kv‖_Y,    v ∈ X.

Using this with v := C0^{1/2}u, u ∈ X, we arrive at

‖Θ(C0)u‖_X ≍ ‖Σ^{−1/2}KC0^{1/2}u‖_Y = ‖H^{1/2}u‖_X,    u ∈ X,

where we introduced the function

(11)    Θ(t) = Θψ(t) := √t ψ(t),    t > 0,

and the operator H is given, as before, from B := Σ^{−1/2}KC0^{1/2} as H = B∗B. We observe that its square Θ² is strictly monotone, and it increases super-linearly. Below, its inverse will play an important role, and we stress that this will be a sub-linearly increasing index function, as is typically the case for power-type functions g(t) := t^q with 0 < q ≤ 1. So, we formally make the following
¹Formally, operator monotone functions are defined on [0, ∞). The asserted monotonicity can be seen from the proof of Theorem V.2.5 ibid.
Assumption 1 (link condition). There are an index function ψ and constants 0 < m ≤ 1 ≤ M < ∞ such that

(12)    m ‖ψ(C0)u‖ ≤ ‖Σ^{−1/2}Ku‖ ≤ M ‖ψ(C0)u‖,    u ∈ X.

Moreover, with the function Θ from (11) the related function

(13)    f0(s) := ((Θ²)^{−1}(s))^{1/2},    s > 0,

has an operator concave square f0².
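As an illustration (a power-type choice of ours, not required by Assumption 1), take ψ(t) = t^q with q > 0. Then

```latex
\psi(t) = t^{q}, \qquad
\Theta(t) = t^{1/2+q}, \qquad
\Theta^{2}(t) = t^{1+2q}, \qquad
\bigl(\Theta^{2}\bigr)^{-1}(s) = s^{1/(1+2q)}, \qquad
f_{0}^{2}(s) = s^{1/(1+2q)} .
```

Here f0² is indeed operator concave, since s ↦ s^r is operator concave for 0 < r ≤ 1 (Löwner–Heinz theorem).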
We first draw the following consequence. For this we let

(14)    ϕ0(t) := √t,    t > 0,

throughout this study.

Proposition 1. Under Assumption 1 we have that R(C0^{1/2}) = R(f0(H)). In particular the operator f0(H)ϕ0(C0)^{−1} is norm bounded by M.
Proof. Arguing as above, the inequalities in (12) have their counterpart for the function Θ as

(15)    m ‖Θ(C0)u‖ ≤ ‖H^{1/2}u‖ ≤ M ‖Θ(C0)u‖,    u ∈ X.

We can rewrite the left-hand side as

Θ²(C0) ≤ (1/m²) H.

Since f0² is assumed to be operator concave, and hence operator monotone, we conclude that

(16)    C0 ≤ f0²((1/m²) H) ≤ (1/m²) f0²(H),

where we used that 0 < m ≤ 1. Rewriting this in terms of a norm inequality shows that ‖C0^{1/2}u‖_X ≤ (1/m) ‖f0(H)u‖_X, u ∈ X, and hence R(C0^{1/2}) ⊆ R(f0(H)). The other inclusion is proven similarly by using the right-hand side of (15), and hence it is omitted. The norm boundedness is a consequence of Theorem 1, as was stressed after its formulation.
Basically, the above inequalities in (15) correspond to Assumption 3.1 (2 & 3) from [1] (see Section 6 ibid.) if the function Θ is a power function.

2.2. Linking commuting operators. In previous studies dealing with commuting operators, no functional dependence was assumed, except in the recent survey [2]. We start with the following technical assertion.
Lemma 1. Suppose that we have two commuting self-adjoint non-negative compact operators A, B in Hilbert space. If the operator A has only simple eigenvalues then there is a continuous function ψ : [0, ‖A‖] → R+, with lim_{u→0} ψ(u) = 0, such that ψ(A) = B.

Proof. Indeed, the pair A, B is commonly diagonalizable, and we may consider (infinite) diagonal matrices Ds and Dt, having the eigenvalues of A and B on the diagonals. Since all eigenvalues of A were assumed to be simple, we can assume that s1 > s2 > · · · > 0. The corresponding eigenvalues of B will not be ordered, in general. We consider the mappings s, t : N → R+ assigning s(j) := sj and t(j) := tj, respectively. The mapping s is injective (being strictly decreasing), such that we may consider the composition ψ̄ := t ◦ s^{−1} : sj ↦ tj. By linear interpolation this extends to a continuous mapping ψ : (0, ‖A‖] → R+. We need to show that ψ(u) → 0 as u → 0. Indeed, since the sequence tj, j = 1, 2, . . . , has zero as its only accumulation point, for every ε > 0 there is N ∈ N with tn ≤ ε for n ≥ N. Let δ := sN > 0. Then for n ≥ N we find that ψ(sn) = tn ≤ ε, and by linear interpolation this extends to the whole interval (0, δ].
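The construction in the proof is straightforward to emulate numerically; the following sketch (ours, with toy eigenvalue sequences) builds the piecewise-linear ψ with ψ(s_j) = t_j and ψ(0) = 0:

```python
import numpy as np

def psi_from_spectra(s, t, u):
    """s: strictly decreasing eigenvalues of A; t: matching eigenvalues of B.
    Evaluates the piecewise-linear interpolant psi with psi(s_j) = t_j,
    extended by psi(0) = 0, at the points u."""
    xs = np.concatenate(([0.0], np.asarray(s)[::-1]))  # np.interp needs increasing x
    ys = np.concatenate(([0.0], np.asarray(t)[::-1]))
    return np.interp(u, xs, ys)

s = [1.0, 0.5, 0.25]        # simple, strictly decreasing spectrum of A
t = [1.0, 0.25, 0.0625]     # eigenvalues of B (here t_j = s_j^2)
print(psi_from_spectra(s, t, [0.25, 0.5, 1.0]))
```

At the nodes s_j the interpolant reproduces t_j exactly, and ψ(u) → 0 as u → 0 by construction.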
Of course, we cannot conclude that the above function ψ is an index function. For this to hold, additional assumptions need to be made. Here we consider the situation as it was assumed in [8], cf. Assumption 3.1 ibid.
Proposition 2. Suppose that with respect to the common eigenbasis ej, j = 1, 2, . . . , the corresponding eigenvalues sj of the prior covariance, and tj of the operator Σ^{−1/2}K, obey some asymptotic behavior, say in the power-type case sj = j^{−(1+2a)} and m j^{−p} ≤ tj ≤ M j^{−p} for j ∈ N and parameters a, p > 0. For the (index) function ψ(t) = t^{p/(1+2a)}, t > 0, we have that

m ‖ψ(C0)u‖ ≤ ‖Σ^{−1/2}Ku‖ ≤ M ‖ψ(C0)u‖,    u ∈ X.

Proof. Let u = Σ_{j=1}^{∞} uj ej ∈ X be any element. Then we find

(17)    m² ‖ψ(C0)u‖² = m² Σ_{j=1}^{∞} ψ²(sj) uj² = m² Σ_{j=1}^{∞} j^{−2p} uj² ≤ Σ_{j=1}^{∞} tj² uj² = ‖Σ^{−1/2}Ku‖².

The other inequality is proven similarly, and we omit the proof.
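A numerical sanity check of the two-sided bound in Proposition 2 in the common eigenbasis (a toy discretization; the values of a, p, m, M are illustrative and the sketch is our own):

```python
import numpy as np

a, p, m, M = 0.5, 1.0, 0.8, 1.2
n = 50
j = np.arange(1, n + 1)
s = j ** (-(1 + 2 * a))                 # eigenvalues of C0
rng = np.random.default_rng(0)
t = j ** (-p) * rng.uniform(m, M, n)    # eigenvalues of Sigma^{-1/2} K
psi = s ** (p / (1 + 2 * a))            # psi(s_j) = j^{-p}

u = rng.standard_normal(n)              # coefficients of u in the eigenbasis
lhs = np.sqrt(np.sum(psi**2 * u**2))    # ||psi(C0) u||
mid = np.sqrt(np.sum(t**2 * u**2))      # ||Sigma^{-1/2} K u||
assert m * lhs <= mid <= M * lhs
```

The check mirrors (17) coordinate-wise: ψ(s_j) = j^{−p} by the choice of exponent, and t_j lies between m j^{−p} and M j^{−p} by construction.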
This shows that the present setup of a link condition extends previous studies restricted to commuting operators. The calculus using functional dependence instead of the asymptotic behavior of singular numbers seems simpler to handle. Therefore, in order to have a fair comparison, if C0 and H were commuting we would instead assume that Θ²(C0) = H. This should be kept in mind when comparing the subsequent bounds.

G :  XρG --Jρ--> XϕG --Jϕ--> X
      |S∗        |S∗        |S∗
      v          v          v
H :  XrH --Jr--> XfH --Jf--> X

Figure 1. The setup of interpolation. The position of XϕG between XρG and X is given by the function t → ϕ²((ρ²)^{−1}(t)), and f is determined in such a way that XfH has the appropriate position in the scale on the bottom.
2.3. Variable Hilbert scales and their interpolation. To formulate the result on interpolation we recall the concept of variable Hilbert scales. Given an injective positive self-adjoint operator G and some index function f, we equip R(f(G)) with the norm ‖x‖f = ‖w‖, where the element w is (uniquely) obtained from x = f(G)w. This makes (R(f(G)), ‖·‖f) a Hilbert space. Since this can be done for any index function f, we agree to denote the resulting spaces by XfG.
For our analysis we shall consider the scales generated by C0, i.e., XϕC0, and by H, hence the spaces XfH (we shall reserve Greek letters for index functions related to the scale XϕC0).
We shall use interpolation of operators in variable Hilbert scales, and we recall the fundamental result from [11].
Theorem 2 (Interpolation theorem, [11, Thm. 5]). Let G, H ≥ 0 be self-adjoint operators with spectra in [0, b] and [0, a], respectively. Furthermore, let ϕ, ρ and r be index functions (ρ strictly increasing) on the intervals [0, b] and [0, a], respectively, such that b ≥ ‖G‖ and ρ(b) ≥ r(a). Then the function

    f(t) := ϕ(ρ^{−1}(r(t))),   0 < t ≤ a,

is well defined. The following assertion holds true: If t ↦ ϕ²((ρ²)^{−1}(t)) is operator concave on [0, ρ²(b)] then

(18)    ‖Sx‖ ≤ C1 ‖x‖,   x ∈ X,

and

(19)    ‖ρ(G)Sx‖ ≤ C2 ‖r(H)x‖,   x ∈ X,

yield

(20)    ‖ϕ(G)Sx‖ ≤ max{C1, C2} ‖f(H)x‖,   x ∈ X.

We depict the interpolation setup in Figure 1.
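For power-type index functions all quantities in Theorem 2 are explicit. The following sketch (the exponents q, s, v are hypothetical choices, not from the text) computes the resulting function f and the exponent of the position function whose operator concavity is required:

```python
# phi(t) = t^q, rho(t) = t^s (strictly increasing), r(t) = t^v
q, s, v = 0.5, 1.0, 1.5

def f(t):
    # f(t) = phi(rho^{-1}(r(t))) = t^(q*v/s)
    return ((t ** v) ** (1.0 / s)) ** q

assert abs(f(0.3) - 0.3 ** (q * v / s)) < 1e-12

# position function: t -> phi^2((rho^2)^{-1}(t)) = t^(q/s);
# power functions t^e with 0 < e <= 1 are operator concave,
# so the precondition of Theorem 2 holds for this choice
position_exponent = q / s
assert 0 < position_exponent <= 1
```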
PETER MATHÉ
Remark 1. We mention the following important fact. The conditions on the operator S in Theorem 2 correspond to Figure 1 with S∗, the adjoint operator. We refer to [11, Cor. 2] for details.
It is important to notice that, in contrast to the commuting case, no link between the scales X^G_ϕ and X^H_f can be established whenever ϕ is beyond ρ (ρ ≺ ϕ), or f is beyond r (r ≺ f).

For the above interpolation it is crucial that the space X^G_ϕ is intermediate between X^G_ρ and X, i.e., we have continuous embeddings as in Figure 1. The position is described as follows.

Definition 5 (position of an intermediate Hilbert space). The position of X^G_ϕ between X^G_ρ and X is given by the function t ↦ ϕ²((ρ²)^{−1}(t)).
Remark 2. If we imagine that the spaces X^G_ρ and X^G_ϕ were Sobolev Hilbert spaces with smoothness parameters 0 < f ≤ r, then the position would be the quotient f/r ≤ 1. In the present context this corresponds to the power type function t ↦ t^{f/r}, which is concave, even operator concave, and we have seen in Theorem 2 that operator concavity is essential for establishing results on operator interpolation.
3. Bounding the bias
We shall use the interpolation result with G := C0 and H from
above, and for various index functions and operators S. We recall the
description of the bias in the decomposition (6) given in (7) as

    bx∗(α) = ‖C0^{1/2} sα(H) C0^{−1/2} x∗‖.
To proceed we assign smoothness to the data generating element x∗
relative to the covariance operator C0 .
Assumption 2 (source set). There is an index function ϕ such that

    x∗ ∈ Sϕ := {x : x = ϕ(C0)v, ‖v‖ ≤ 1}.
Using Proposition 1 we can bound, for x∗ as above, its bias bx∗(α) as

(21)    bx∗(α) ≤ (1/m) ‖f0(H) sα(H) ϕ0(C0)^{−1} ϕ(C0)‖
(22)           = (1/m) ‖sα(H) f0(H) ϕ0(C0)^{−1} ϕ(C0)‖.
Again, we emphasize that the intermediate operator f0 (H)ϕ0 (C0 )−1 is
norm bounded, such that the right hand side is finite.
In our subsequent analysis we shall distinguish three cases. These are
determined by the position of the given function ϕ within the Hilbert
scale given by C0 , and these cases are highlighted in Figure 2. We turn
to the detailed analysis of these cases.
regular: This case is obtained when ϕ0 (C0 )−1 ϕ(C0 ) is a bounded
operator, and hence when ϕ0 ≺ ϕ ≺ Θ.
low-order: When 1 ≺ ϕ ≺ ϕ0 we speak of the low-order case.
high-order: If ϕ is beyond the benchmark Θ (Θ ≺ ϕ) then we
call this the high-order case. As mentioned above, we need
additional assumptions to treat the latter.
Figure 2. Cases which are considered for interpolation
3.1. Regular case: ϕ0 ≺ ϕ ≺ Θ. In this case the operator C0^{−1/2}ϕ(C0) = ϕ0(C0)^{−1}ϕ(C0) is a bounded self-adjoint operator. The position of the Hilbert space X^{C0}_Θ ⊂ X^{C0}_{ϕ/ϕ0} ⊂ X is then given through

(23)    g²(t) := (ϕ/ϕ0)²((Θ²)^{−1}(t)),   t > 0.
Proposition 3. Suppose that ϕ0 ≺ ϕ ≺ Θ, and that the function g² is operator concave. Under Assumptions 1 & 2 we have that

    bx∗(α) ≤ (M/m) ‖sα(H) ϕ(f0²(H))‖.

Proof. Under the assumptions made we apply Theorem 2 with S = I, the identity operator, to see that

    ‖(ϕ/ϕ0)(C0)u‖ ≤ M ‖g(H)u‖,   u ∈ X.

By Theorem 1 this means that for any u, ‖u‖ ≤ 1, we find ū, ‖ū‖ ≤ M, with (ϕ/ϕ0)(C0)u = g(H)ū. Hence we can bound

    ‖sα(H) f0(H) (ϕ/ϕ0)(C0)u‖ = ‖sα(H) f0(H) g(H)ū‖ ≤ M ‖sα(H) f0(H) g(H)‖.

But, we check that

    f0(t) g(t) = f0(t) · ϕ(f0²(t))/f0(t) = ϕ(f0²(t)),   t > 0,

which gives

    ‖sα(H) f0(H) (ϕ/ϕ0)(C0)u‖ ≤ M ‖sα(H) ϕ(f0²(H))‖.

Since this holds for arbitrary u ∈ X, ‖u‖ ≤ 1, we conclude that

    bx∗(α) ≤ (1/m) ‖sα(H) f0(H) (ϕ/ϕ0)(C0)‖ ≤ (M/m) ‖sα(H) ϕ(f0²(H))‖,

and this completes the proof in the regular case.
3.2. Low-order case: 1 ≺ ϕ ≺ ϕ0. For the application of Theorem 2 in this case we recall the definition of the operator S := f0(H)ϕ0(C0)^{−1} : X → X.

Proposition 4. Suppose that Assumptions 1 & 2 hold, and that 1 ≺ ϕ ≺ ϕ0. Then the position of X^{C0}_ϕ between X^{C0}_{ϕ0} and X is given by the function ϕ². If ϕ² is operator concave then

    bx∗(α) ≤ (M/m) ‖sα(H) ϕ(f0²(H))‖.

Proof. We aim at applying Theorem 2 for the operator S∗. From Proposition 1 we know that ‖S∗ : X → X‖ ≤ M. But also we see that

    ‖ϕ0(C0)S∗u‖ = ‖ϕ0(C0)ϕ0(C0)^{−1}f0(H)u‖ = ‖f0(H)u‖,   u ∈ X.

The position of X^{C0}_ϕ between X^{C0}_{ϕ0} and X is given by

    ϕ²((ϕ0²)^{−1}(t)) = ϕ²(t),   t > 0,

and this was assumed to be operator concave. Thus Theorem 2 applies and yields ‖ϕ(C0)S∗u‖ ≤ M ‖g(H)u‖, u ∈ X, for the function g given by

    g(t) := ϕ(ϕ0^{−1}(f0(t))) = ϕ(f0²(t)),   t > 0.

By virtue of Theorem 1 we find that for every x = Sϕ(C0)w there is w̄, ‖w̄‖ ≤ M, with

    Sϕ(C0)w = g(H)w̄.

We therefore conclude that

    bx∗(α) ≤ (1/m) ‖sα(H)Sϕ(C0)‖ ≤ (M/m) ‖sα(H)g(H)‖ = (M/m) ‖sα(H) ϕ(f0²(H))‖.

The proof is complete.
3.3. High-order case: Θ ≺ ϕ. If we want to extend the results to smoothness beyond Θ then we need to assume a link condition at a later position than H^{1/2}. Therefore, we shall impose the following lifting condition.

Assumption 3 (lifting condition). There are some r > 1 and constants m ≤ 1 ≤ M such that

(24)    m^r ‖Θ^r(C0)u‖ ≤ ‖H^{r/2}u‖ ≤ M^r ‖Θ^r(C0)u‖,   u ∈ X.

This is actually stronger than the original link condition from Assumption 1. Indeed, since the Loewner–Heinz Inequality asserts that for r > 1 the function t ↦ t^{1/r} is operator monotone, see [3, Thm. V.1.9] or [7, Prop. 8.21], we have

Corollary 1. Assumption 3 yields Assumption 1.
Remark 3. We stress that in the case of commuting operators C0 and H, Assumptions 3 and 1 are equivalent, i.e., the link condition implies the lifting condition. This can be seen using the Gelfand–Naimark theorem, and we refer to [5, Prop. 8.1] for a similar assertion with detailed proof.
Again we shall deal with the smoothness ϕ0(C0)^{−1}ϕ(C0), and, as in the regular case, we ask for the position of the corresponding space between X^{C0}_{Θ^r} and X. This gives the following result, extending the cases of regular and low smoothness, however under an additional requirement on the link.
Proposition 5. Suppose that Assumptions 3 & 2 hold, and that ϕ/ϕ0 ≺ Θ^r. Assume that for g from (23) the function t ↦ g²(t^{1/r}) is operator concave. Then we have

    bx∗(α) ≤ (M/m) ‖sα(H) ϕ(f0²(H))‖.

Proof. The proof is similar to the regular case. The function t ↦ g²(t^{1/r}) is exactly the position of X^{C0}_{ϕ/ϕ0} between X^{C0}_{Θ^r} and X, which is given as

    g_r²(t) := (ϕ/ϕ0)²((Θ^{2r})^{−1}(t)).

It is readily checked that (Θ^{2r})^{−1}(t) = (Θ²)^{−1}(t^{1/r}). Therefore, we find that g_r²(t) = g²(t^{1/r}), and this is assumed to be operator concave, such that we can use the Interpolation Theorem 2, and we find that

    ‖(ϕ/ϕ0)(C0)u‖ ≤ M ‖f(H^{r/2})u‖,   u ∈ X,

where the function f is given as

    f(t) := (ϕ/ϕ0)((Θ^r)^{−1}(t)) = (ϕ/ϕ0)(Θ^{−1}(t^{1/r})),   t > 0.

This yields that f(H^{r/2}) = (ϕ/ϕ0)(Θ^{−1}(H^{1/2})). Now, as in the regular case, we arrive at

    bx∗(α) ≤ (M/m) ‖sα(H) f0(H) g(H)‖.

We have seen there that f0(t)g(t) = ϕ(f0²(t)), which completes the proof.
Remark 4. The following comment seems interesting. In the high-order case, the function g² will in general not be operator concave. However, the assumption made above says that, by re-scaling, this will eventually be operator concave if the scaling factor r is large enough, see the discussion at the end of this section. Of course this does not mean that the lifting Assumption 3 will hold automatically. This is still a non-trivial assumption.
3.4. Saturation. In all the above cases, the low-order, the regular and the high-order one, we were able to derive a bias bound as in Proposition 5, albeit under case specific assumptions. This bound cannot decay arbitrarily fast, and this is known as saturation in regularization theory, see again [2]. Indeed, the maximal decay rate, as α → 0, is linear, unless x∗ = 0, which is a result of the structure of the term sα(H). This maximal decay rate is achieved when ϕ(f0²(t)) ≍ t, which after rescaling means that ϕ(t) ≍ Θ²(t). Thus the maximal smoothness for which optimal decay of the bias can be achieved is given by the index function Θ². This yields the following important remark, specific for non-commuting operators.
Remark 5. Suppose that smoothness is given as in Assumption 2 with an index function ϕ, and that we find the function Θ as in (11). Within the range 0 ≺ ϕ ≺ Θ (low-order and regular cases) the link condition, Assumption 1, suffices to yield optimal order decay of the bias. However, within the range Θ ≺ ϕ ≺ Θ² the lifting, as given in Assumption 3, cannot be avoided. This effect cannot be seen for commuting operators C0 and H, because there the lifting is equivalent to the original link condition, as discussed in Remark 3. We also observe that, within the present context, the lifting to r = 2 would be enough due to the saturation at the function Θ².
We exemplify the above bounds for the bias for power type behavior, both of the smoothness in terms of ϕ(t) = t^β, and of the linking function ψ(t) = t^κ. This results in a function Θ²(t) = t^{1+2κ}, which has an operator concave inverse. Thus, this requirement in Assumption 1 is fulfilled whatever κ > 0 is found.

Then, the low-order case 0 ≺ ϕ ≺ ϕ0 covers the range 0 < β ≤ 1/2, and in this range the function ϕ²(t) = t^{2β} is operator concave, because 2β ≤ 1.
The regular case ϕ0 ≺ ϕ ≺ Θ covers the exponents 1/2 ≤ β ≤ 1/2 + κ. The operator concavity was assumed to hold for the function

    (ϕ/ϕ0)²((Θ²)^{−1}(t)) = t^{2(β−1/2)/(1+2κ)}.

Thus, the regular case covers the range 1/2 ≤ β ≤ 1/2 + κ.
Finally, it is seen similarly that the high-order case covers the range 1/2 ≤ β ≤ 1/2 + (r/2)(1 + 2κ), which for r = 2 already reaches beyond the saturation point.
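The case distinction just described is pure exponent arithmetic; the following sketch encodes it (the value of κ is an illustrative assumption):

```python
from fractions import Fraction

kappa = Fraction(1, 2)   # psi(t) = t^kappa, so Theta^2(t) = t^(1+2*kappa)

def case(beta):
    # which smoothness regime does phi(t) = t^beta fall into?
    if beta <= Fraction(1, 2):
        return "low-order"                 # 0 < beta <= 1/2
    if beta <= Fraction(1, 2) + kappa:
        return "regular"                   # 1/2 <= beta <= 1/2 + kappa
    return "high-order"

assert case(Fraction(1, 4)) == "low-order"
assert case(Fraction(3, 4)) == "regular"
assert case(Fraction(3, 2)) == "high-order"

# in the regular case the position function has exponent
# 2*(beta - 1/2)/(1 + 2*kappa), which stays within (0, 1]
beta = Fraction(3, 4)
expo = 2 * (beta - Fraction(1, 2)) / (1 + 2 * kappa)
assert 0 < expo <= 1
```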
4. Bounding the posterior spread
We recall the structure of the posterior spread from (9) as

    tr C_α^δ = δ² tr((α + H)^{−1} C0).
As can be seen, the noise level δ enters quadratically, and we aim at
finding the dependence upon the scaling parameter α. To this end the
following result proves useful.
Proposition 6. Under Assumption 1 we have that

    tr C_α^δ ≤ (δ²/m²) tr((α + H)^{−1} f0²(H)).
Proof. We start with the situation as given in (16). This order extends by multiplying with (α + H)^{−1/2} from both sides, such that we conclude that

    (α + H)^{−1/2} C0 (α + H)^{−1/2} ≤ (1/m²) (α + H)^{−1/2} f0²(H) (α + H)^{−1/2}.

Now we apply the Weyl Monotonicity Theorem, see e.g. [3, Cor. III.2.3], to see that this inequality applies to all singular numbers. But the operators on both sides are self-adjoint and positive, such that singular numbers and eigenvalues coincide. Thus we arrive at

    tr C_α^δ = δ² tr((α + H)^{−1} C0) = δ² tr((α + H)^{−1/2} C0 (α + H)^{−1/2})
             ≤ (δ²/m²) tr((α + H)^{−1/2} f0²(H) (α + H)^{−1/2})
             = (δ²/m²) tr((α + H)^{−1} f0²(H)),

where we used the cyclic commutativity of the trace. The proof is complete.
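The trace bound of Proposition 6 can be illustrated in a diagonal (hence commuting) toy model, where the link condition reduces to an entrywise inequality; all concrete choices below (the eigenvalue decay, the function f0² and the constants) are illustrative assumptions:

```python
import random

random.seed(1)
m, alpha = 0.9, 1e-3
n = 500

h = [j ** -3.0 for j in range(1, n + 1)]            # eigenvalues of H
f0sq = [t ** (1.0 / 3.0) for t in h]                 # f0^2 applied spectrally
# entrywise link condition: C0 <= (1/m^2) f0^2(H)
c = [random.uniform(0.0, 1.0) * fj / m ** 2 for fj in f0sq]

lhs = sum(cj / (alpha + hj) for cj, hj in zip(c, h))             # tr((a+H)^{-1}C0)
rhs = sum(fj / (alpha + hj) for fj, hj in zip(f0sq, h)) / m ** 2
assert lhs <= rhs   # the bound of Proposition 6 (without the factor delta^2)
```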
5. Bounding the squared posterior contraction
In the previous sections we derived bounds for both the bias and the posterior spread. In all the smoothness cases from Section 3 we arrived at a bound of the following form. If x∗ has smoothness with index function ϕ, and if the link condition holds with operator concave function f0² from (13), then it was shown in Propositions 3–5 that

(25)    bx∗(α) ≤ (M/m) ‖sα(H) ϕ(f0²(H))‖,   α > 0.

Also, the posterior spread was bounded in Proposition 6 as

    tr C_α^δ ≤ (δ²/m²) tr((α + H)^{−1} f0²(H)),   α > 0.

As was discussed in Remark 5, we shall confine ourselves to the case when ϕ ≺ Θ², i.e., before the saturation point. If this is the case then we can bound, by using that the function sα obeys sα(t)t ≤ α for t, α > 0, the bias by

    bx∗(α) ≤ (M/m) ϕ(f0²(α)),   α > 0.
A similar 'handy' explicit bound for the posterior spread can hardly be given. Under additional assumptions on the decay rate of the singular numbers more explicit bounds can be given. We refer to [10, Sect. 4], in particular Assumption 5 and Lemma 4.2 ibid., for details.
Overall we obtain the following result.
Theorem 3 (Bound for the SPC). Suppose that Assumptions 1 & 2 hold for index functions ϕ and f0. Under the assumptions of Propositions 3, 4 & 5, respectively, and if ϕ ≺ Θ², then

(26)    SPC(α, δ) ≤ (M²/m²) inf_{α>0} ( ϕ²(f0²(α)) + δ² tr((α + H)^{−1} f0²(H)) ),   α, δ > 0.
The above analysis is given in abstract terms of index functions, and it is worthwhile to give an example to compare this with known (and minimax) bounds for the commuting case. To this end we treat the case of a moderately ill-posed operator C0, a power type link, and Sobolev type smoothness, with parameters a, p > 0 and β > 0, as in the original studies [8, 2].
Example 1 (power type decay).
(1) For some a > 0 we have that s_j(C0) ≍ j^{−(1+2a)}, j = 1, 2, . . . .
(2) There is some p > 0 such that Θ²(t) ≍ t^{(1+2a+2p)/(1+2a)} as t → 0, and
(3) there is some R < ∞ such that Σ_{j=1}^∞ j^{2β} (x∗_j)² ≤ R², where the sequence x∗_j, j = 1, 2, . . ., denotes the coefficients of x∗ with respect to the eigenbasis of C0.
This gives for the compound operator H that

    s_j(H) ≍ s_j(Θ²(C0)) = Θ²(s_j(C0)) ≍ j^{−(1+2a+2p)},   j = 1, 2, . . . .

Notice furthermore that

    s_j(f0²(H)) = f0²(s_j(H)) ≍ j^{−(1+2a)},   j = 1, 2, . . . .
Then we can bound, and we omit the standard calculations, the posterior spread by using Proposition 6 as

    tr C_α^δ ≤ δ² Σ_{j=1}^∞ s_j(f0²(H)) / (α + s_j(H)) ≍ δ² α^{−(1+2p)/(1+2a+2p)}.
We turn to the description of the smoothness of x∗ in terms of an index function ϕ, thus rewriting the condition from Example 1(3). This yields that ϕ(t) = t^{β/(1+2a)}, see Section 4 of [2] for details. We see from Example 1(2) that saturation is at β = 1 + 2a + 2p.

Thus for 0 < β ≤ 1 + 2a + 2p we apply the bias bound from (25) to obtain a tight bound for the SPC. We balance the squared bias with the bound for the posterior spread as α^{2β/(1+2a+2p)} = δ² α^{−(1+2p)/(1+2a+2p)}.
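The exponent bookkeeping behind this balance can be verified with exact rational arithmetic; the sample values of a, p and β below are illustrative:

```python
from fractions import Fraction

a, p, beta = Fraction(1, 2), Fraction(1), Fraction(2)
D = 1 + 2 * a + 2 * p                      # the exponent 1 + 2a + 2p
x = 2 * D / (1 + 2 * beta + 2 * p)         # alpha* = delta^x = [delta^2]^(D/(1+2beta+2p))

sq_bias_expo = x * 2 * beta / D            # squared bias ~ alpha^(2beta/D)
spread_expo = 2 - x * (1 + 2 * p) / D      # spread ~ delta^2 * alpha^(-(1+2p)/D)

# both terms decay at the announced SPC rate [delta^2]^(2beta/(1+2beta+2p))
assert sq_bias_expo == spread_expo == 4 * beta / (1 + 2 * beta + 2 * p)
```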
This gives α∗ = [δ²]^{(1+2a+2p)/(1+2β+2p)}, and finally this results in a rate for the decay of the SPC as

    SPC(α∗(δ), δ) = O([δ²]^{2β/(1+2β+2p)})   as δ → 0,

if β ≤ 1 + 2a + 2p. Such a bound is well known for commuting operators, see [2, § 4.1] and the original study [8, Thm. 4.1].
Next we sketch the way to obtain bounds for the backwards heat
equation, with an exponentially ill-posed operator.
Example 2 (backwards heat equation, cf. [9]).
(1) For some a > 0 we have that s_j(C0) ≍ j^{−(1+2a)}, j = 1, 2, . . . .
(2) The linking function Θ² obeys Θ²(t) = e^{−2t^{−1/(1+2a)}}.
(3) There is some β > 0 such that ϕ(t) = t^{β/(1+2a)}, t > 0.

First, the smoothness assumption is as in the previous example. We observe that in this case always ϕ ≺ Θ, such that there is no saturation. For bounding the bias we see that f0²(t) = (½ log(1/t))^{−(1+2a)}, t < 1.
For β/(1 + 2a) ≤ 1/2 the position will thus be

    t ↦ ((½ log(1/t))^{−(1+2a)})^{2β/(1+2a)} = 4^β log^{−2β}(1/t),   as t → 0,

and this is operator concave for 0 < β ≤ 1/2. In the regular case, a similar calculation reveals that the function

    t ↦ 4^{β−1/2} log^{−2β+1}(1/t)

must be operator concave, which is true for 1/2 ≤ β ≤ 1.
So, in the range 0 < β ≤ 1 we find that

    bx∗(α) = O(log^{−β/2}(1/α))   as α → 0.

By standard calculations we bound the posterior spread as

    tr C_α^δ ≤ C δ² (1/α) log^{−a}(1/α),

for some constant C < ∞. Applying Theorem 3 we let α∗(δ) := δ² log^{β−a}(1/δ) and get the rate

    SPC(α∗(δ), δ) = O(log^{−β}(1/δ))   as δ → 0.
This corresponds to the contraction rate of the posterior as presented in [9] with δ ∼ n^{−1/2}. For details we refer to Section 4 of the survey [2]. However, while these results cover all β > 0, the non-commuting case will cover only the range 0 < β ≤ 1.
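The logarithmic balancing step can be illustrated numerically (a check of the rate bookkeeping, not a proof); the asymptotic expressions for the squared bias and the spread are taken from the displays above, with illustrative values of a and β:

```python
import math

a, beta = 0.5, 1.0
for delta in (1e-4, 1e-6, 1e-8):
    L = math.log(1.0 / delta)
    alpha = delta ** 2 * L ** (beta - a)            # the choice alpha*(delta)
    sq_bias = math.log(1.0 / alpha) ** -beta        # ~ squared bias
    spread = delta ** 2 / alpha * math.log(1.0 / alpha) ** -a
    target = L ** -beta                             # claimed SPC order log^{-beta}(1/delta)
    # both terms match the target up to bounded factors
    assert 0.1 < sq_bias / target < 10
    assert 0.1 < spread / target < 10
```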
6. Conclusion
We summarize the above findings, and we start with the bias bounds. In either of the three cases, if there is a valid link condition, if smoothness is given as in Assumption 2, and if the involved functions are operator concave, then the norm bx∗(α) from (7) can be bounded by

    bx∗(α) ≤ (M/m) ‖sα(H) ϕ(f0²(H))‖.

This seems to be the natural extension of the bias bound to the non-commuting context. In the commuting context we would get f0²(H) = C0, see § 2.2.
Under these premises the analysis from [2] can be extended to the
non-commuting situation. We stressed in Remark 5 that a lifting of
the original link condition is necessary in order to yield optimal order
bounds for the squared posterior contraction up to the saturation point.
The analysis from [1] covers, by different techniques, the regular case. In case of a power type function ψ, and hence of Θ, the requirements of operator concavity reduce to power type functions with power in the range (0, 1), and hence these are automatically fulfilled.
For the posterior spread we derived a similar extension to the non-commuting case in Proposition 6. There is no handy way to derive the exact increase of the spread as α → 0. Under additional assumptions on the regularity of the decay of the singular numbers of H this problem can be reduced to the effective dimension of the operator H, given as N_H(α) = tr((α + H)^{−1}H). We did not pursue this line here.
Instead we refer to the study [10].
Finally, we presented two examples exhibiting the obtained rates for
the SPC, both for moderately and severely ill-posed operators. More
examples, using functional dependence for commuting operators, are
given in the study [2].
References
1. Sergios Agapiou, Stig Larsson, and Andrew M. Stuart, Posterior contraction
rates for the Bayesian approach to linear ill-posed inverse problems, Stochastic
Process. Appl. 123 (2013), no. 10, 3828–3860. MR 3084161
2. Sergios Agapiou and Peter Mathé, Posterior contraction in Bayesian inverse
problems under Gaussian priors, New Trends in Parameter Identification for
Mathematical Models, Springer, 2018.
3. Rajendra Bhatia, Matrix analysis, Graduate Texts in Mathematics, vol. 169,
Springer-Verlag, New York, 1997. MR 1477662 (98i:15003)
4. G. Blanchard and P. Mathé, Discrepancy principle for statistical inverse problems with application to conjugate gradient iteration, Inverse Problems 28
(2012), no. 11, 115011, 23. MR 2992966
5. Albrecht Böttcher, Bernd Hofmann, Ulrich Tautenhahn, and Masahiro Yamamoto, Convergence rates for Tikhonov regularization from different kinds of
smoothness conditions, Appl. Anal. 85 (2006), no. 5, 555–578. MR 2213075
(2006k:65150)
BAYESIAN INVERSE PROBLEMS
17
6. R. G. Douglas, On majorization, factorization, and range inclusion of operators
on Hilbert space, Proc. Amer. Math. Soc. 17 (1966), 413–415. MR 0203464
7. Heinz W. Engl, Martin Hanke, and Andreas Neubauer, Regularization of inverse problems, Mathematics and its Applications, vol. 375, Kluwer Academic
Publishers Group, Dordrecht, 1996. MR 1408680 (97k:65145)
8. B. T. Knapik, A. W. van der Vaart, and J. H. van Zanten, Bayesian inverse
problems with Gaussian priors, Ann. Statist. 39 (2011), no. 5, 2626–2657.
MR 2906881
9. B. T. Knapik, A. W. van der Vaart, and J. H. van Zanten, Bayesian recovery of the initial condition for the heat equation, Comm. Statist. Theory Methods 42 (2013), no. 7, 1294–1313. MR 3031282
10. Kui Lin, Shuai Lu, and Peter Mathé, Oracle-type posterior contraction rates
in Bayesian inverse problems, Inverse Probl. Imaging 9 (2015), no. 3, 895–915.
MR 3406424
11. Peter Mathé and Ulrich Tautenhahn, Interpolation in variable Hilbert scales
with application to inverse problems, Inverse Problems 22 (2006), no. 6, 2271–
2297. MR 2277542 (2008c:47028)
12. F. Natterer, Error bounds for Tikhonov regularization in Hilbert scales, Applicable Anal. 18 (1984), no. 1-2, 29–37.
Weierstrass Institute, Mohrenstrasse 39, 10117 Berlin, Germany
E-mail address: [email protected]
10003
arXiv:1606.08410v1 [] 27 Jun 2016
Building Airflow Monitoring and Control using
Wireless Sensor Networks for Smart Grid
Application
Nacer Khalil (Computer Science Department, University of Houston)
Driss Benhaddou (Engineering Technology Department, University of Houston)
Abdelhak Bensaoula (Physics Department, University of Houston)
Michael Burriello (Facilities Management, University of Houston)
Raymond E. Cline, Jr. (Information and Logistics Technology Department, University of Houston)
Many of the rooms in every building are cooled without being used, which represents a loss. Some of the buildings are rarely used, while others are used very differently throughout the day (e.g. labs are at times empty and at other times full). This results in considerable inefficiency when maintaining different rooms equally.
Abstract—The electricity grid is crucial to our lives. Households and institutions count on it. In recent years, the sources of
energy have become less and less available and they are driving
the price of electricity higher and higher. It has been estimated
that 40% of power is spent in residential and institutional
buildings. Most of this power is absorbed by space cooling and
heating. In modern buildings, the HVAC (heating, ventilation,
and air conditioning) system is centralised and operated by
a department usually called the central plant. The central
plant produces chilled water and steam that is then consumed
by the building AHUs (Air Handling Units) to maintain the
buildings at a comfortable temperature. However, the heating
and cooling model does not take into account human occupancy.
The AHU within the building distributes air according to the
design parameters of the building ignoring the occupancy. As
a matter of fact, there is a potential for optimization lowering
consumption to utilize energy efficiently and also to be able to
adapt to the changing cost of energy in a micro-grid environment.
This system makes it possible to reduce the consumption when
needed minimizing impact on the consumer. In this study, we
will show, through a set of studies conducted at the University
of Houston, that there is a potential for energy conservation and
efficiency in both the buildings and the central plant. We also
present a strategy that can be undertaken to meet this goal. This
strategy, airflow monitoring and control, is tested in a software
simulation and the results are presented. This system enables the user to control and monitor the temperature in the individual rooms according to the local needs.
Index Terms—Airflow management, WSN, Zigbee, Energy
efficiency, CPS, micro-grid
The way the current air-handling units work is that they supply all rooms with the same quality of cooled air. This keeps them at the same temperature, but some of the air is removed even if it is cool enough to be kept. In addition, there is no way to customize the temperature according to one's needs. Usually, one cannot change a room's temperature without affecting the temperature of the other rooms on the same floor. This is a major drawback of the current air handling unit, in addition to the loss it creates by not accounting for the occupancy of the rooms.
The drawbacks of the current system are an important motivation for our team to build an overlying system that addresses them by increasing users' comfort and decreasing the energy loss. This system uses a Wireless Sensor Network (WSN) to sense the temperature and to control the quantity of air flowing into every room.
The main challenge was to design the airflow management system without replacing the current system. We propose a system that uses a WSN where one sensor node is placed in each room of the building and another one is placed in the air distribution system. This makes it possible to sense the temperature in real time and then choose, based on a model, the appropriate angle to open the damper through which air will flow into the room. The model will be presented, in addition to a simulation that was conducted to test the system's ability to serve the user and increase energy efficiency.
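A minimal sketch of the per-room control rule described above might look as follows; the function name, the linear "demand" model and all numeric constants are hypothetical placeholders, not the model used in the paper:

```python
def damper_angle(current_temp, target_temp, minutes_left,
                 min_angle=0.0, max_angle=90.0, gain=15.0):
    """Return a damper opening in degrees (purely illustrative model)."""
    if minutes_left <= 0:
        minutes_left = 1.0
    # open wider the larger the temperature gap and the tighter the deadline
    demand = gain * (current_temp - target_temp) / minutes_left
    return max(min_angle, min(max_angle, demand * 60.0))

# one sensing cycle: a warmer room with a tight deadline gets a wider opening;
# when occupancy adds heat, the next cycle re-computes the angle from the new reading
wide = damper_angle(current_temp=26.0, target_temp=22.0, minutes_left=10.0)
narrow = damper_angle(current_temp=23.0, target_temp=22.0, minutes_left=60.0)
assert wide > narrow
```

Because the angle is re-computed at every sensing cycle, a change in occupancy simply shows up as a changed `current_temp` at the next reading.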
I. INTRODUCTION
Energy scarcity is increasingly becoming an important challenge that nations are facing. In fact, 40% of energy is spent in buildings [1], and a significant amount is used for space cooling/heating. This is the case for the University of Houston, located in Houston, Texas, known for being a very warm and humid city. As a result, it spends a considerable amount of money to maintain a convenient temperature and humidity level. It counts on modern and highly efficient HVAC (heating, ventilation, and air conditioning) systems to maintain the buildings at a convenient temperature and humidity level.
The rest of the paper is organized as follows: Section II presents the background and related work. In Section III, the methodology is presented. In Section IV, the results of the simulation are presented and discussed. Section V concludes the paper and presents the future work.
II. BACKGROUND AND RELATED WORK
compression cycle (VCC) in an air-conditioning system, containing refrigerant which is used to provide cooling. Another paper reports an experimental investigation of a wraparound loop heat pipe heat exchanger used in energy efficient air handling units [13]; this is an experimental investigation of the thermal performance of an air-to-air heat exchanger which utilises heat pipe technology. While most of the research is in the area of HVAC (Heating, Ventilation and Air Conditioning), our model deals only with air flow.
Energy is one of the most important concerns of the world. As fossil fuels become less and less available, their price has increased significantly in the last decade, raising concerns for future generations' energy outlook. Electricity is mostly generated using coal, which has a huge negative impact on the environment [2].
In recent years, due to high electricity costs, the high growth rate of the electric grid and climate change, the century-old electric grid has seen a drastic shift in its internal design
and philosophy [3]. The smart grid is intended to support
Distributed Energy Generation (DER), two-way power and
communication flow between the producer and consumers of
power, and increased usage of renewable energies [4]. This
impacts the price of electricity, which becomes variable and changes very frequently [5]. The user can benefit by minimizing the usage when the price is high and shifting the necessary tasks to the times when the price is low. However,
the user cannot do this manually as it is very time consuming.
Smart buildings have the potential to automate the process in
addition to minimizing the losses in a building [6].
In an industrial environment, the buildings count on the
services delivered by the central plant for providing electric
power with high reliability for cooling and heating. The central
plant is an efficient way to manage power, because it uses the
economy of scale of larger machinery that is more efficient to
serve multiple buildings rather than smaller machinery that is
neither as efficient nor as reliable [7].
The buildings are the main consumers of electric power,
chilled water and steam. They use these three sources to power
the users’ machinery and provide them with a comfortable
space for living [8]. The buildings can be optimized to avoid
energy loss[9].
The buildings as well as the central plant can become more
efficient using more automation that takes into consideration
the electricity price, human activity and machines efficiency
profiles. In [10], they use sensing to monitor the building’s
consumption as well as the users’ occupancy and incorporate
this data into agents to estimate the occupants in the building,
predict them and adjust the building’s consumption to this
changing environment. In [11], sensory data is collected every
fifteen minutes in order to determine the consumption of the
building in the next hour.
In modern buildings, there are Air Handling Units that
consume chilled water and steam to adjust the environment
for the occupant. There are different types of Air Handling
Units, such as fan Coils, which are small and simple and
are commonly used in smaller buildings and commercial
applications. On the other hand, there are custom AHUs
available in any configuration that a user might require, which
are very common for institutional and industrial applications.
There are a number of research activities towards saving the
energy consumption in Buildings. One of those is an energy
efficient model predicting building temperature control [12].
This work focuses on the problem of controlling the vapor
III. DATA GATHERING AND ANALYSIS
In this section, we show the different studies that were
undertaken to investigate the potential for energy optimization
in the central plant and the buildings. The gathered data for
the central plant is a collection of data logs that monitor the
status of every machine of the central plant every two hours.
We used four months of data for this study.
A. Central plant optimization
The central plant is composed of a set of machinery that
provide chilled water and steam. These two systems are
independent as they do not share any machinery. However,
each of these systems is composed of a set of machines
that work hand in hand to provide the required service.
To develop the Airflow Management System using Sensor
Networks, different parameters had to be taken into account
to design the system which enables a user to control the
air temperature of the room. To understand the scenario, let
us consider the schematic in Figure 1. The main types of
machines that are used are: chillers, chilled water pumps,
condenser water pumps and cooling towers. For each type,
there are a number of machines consisting of different brands,
different ages and different usage hours. As a result, the
efficiency profile of every machine is different and thus affects
uniquely the efficiency of the whole system.
Fig. 1. University of Houston Central plant machines map
To evaluate the efficiency of the chiller system, we used the following measure called KFG, which stands for the number of kilowatt hours required to cool 10,000 GPM of chilled water by 1 degree Fahrenheit (F) in one hour. Figure 2 shows that the efficiency of the system varies throughout the day and therefore there is potential to investigate further to understand the driving factors of such variation in cost.
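As a concrete reading of the definition, KFG can be computed as follows; the exact normalization used in the plant logs is an assumption here:

```python
def kfg(kwh, gpm, delta_t_f, hours=1.0):
    """kWh needed to cool 10,000 GPM of chilled water by 1 F in one hour
    (illustrative normalization, not necessarily the plant's own)."""
    if gpm <= 0 or delta_t_f <= 0 or hours <= 0:
        raise ValueError("gpm, delta_t_f and hours must be positive")
    return kwh / ((gpm / 10000.0) * delta_t_f * hours)

# e.g. 5,000 kWh spent cooling 20,000 GPM by 10 F over one hour
assert kfg(5000.0, 20000.0, 10.0) == 250.0
```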
Fig. 2. Evolution of KFG by Hour
To do so, we defined two additional measures. The first variable, ∆T, refers to the temperature difference between the supply chilled water and the return chilled water, in degrees Fahrenheit. The second variable, NT, is equal to the AHU chilled water set point minus the return temperature of the chilled water. The set point for the AHU is equal to 55.1 F. This measure gives an indication of how much chilled water was consumed by the buildings, to estimate the overproduction of chilled water.
Fig. 4. Heat consumption in a building
Opening a door for a few seconds results in an important heat exchange between the building and the outside air. This opening of doors is caused by human activity. Therefore, there is a potential for energy minimization by modeling and predicting human activity and its interaction with the building. This model can be incorporated with an airflow monitoring and control system to minimize the losses caused by opening doors.
IV. S IMULATION
The goal of this simulation is to monitor and control the
airflow speed which is the main consumer of chilled water
and steam. By optimizing the airflow, we optimize the consumption of chilled water and steam, and this affects not only the buildings but also the central plant efficiency. The
following simulation is used to study the airflow system in a
typical room. It adjusts to the user’s requested temperature by
taking into account his/her request and therefore makes the
room more comfortable while consuming less energy.
A. Scenario
When the user feels that it is too warm for him/her, the
user requests the system to change target temperature and
specifies how long the system has to achieve it. By getting
these two parameters, the system adjusts the damper angle to
allow more/less air to flow to meet the target temperature by
the required time.
Fig. 3. Evolution of KFG as a factor of NT and ∆T
Figure 3 shows that KFG is affected by ∆T and NT. This means that the more heat is exchanged for the same quantity of water, the more efficient the system is. Also, the closer the chilled water return temperature is to the set point, the more efficient it is.
B. System requirements
The system has a set of requirements that drive its architecture and design, as follows:
• Achieve the target temperature within the required time
• Keep that temperature constant unless changed
• Account for changes in occupancy that would impact achieving the target within the allocated time
The model is designed to take into consideration that once
a quantity of air is determined, the system should always
recompute it when the occupancy of the room changes as more
people can enter or leave the room, adding or removing heat and therefore deviating the system from its objective.
The system therefore works as follows: Once the target
temperature and time are received from the user, the system
B. Building Optimization
The buildings are the main consumers of chilled water
and steam, they use them to cool/heat the air and to remove
humidity. There are many parameters that affect the production
of air. A study was conducted to estimate the variables that
affect the consumption. The chosen parameters are the state
of doors, human presence and machines used.
Figure 4 shows how these variables consume heat when the
outside temperature varies. It is clear that the doors’ states
represent the major consumers of heat in a building. The fact
within the required time. It is based on thermodynamic laws and most importantly the law of conservation of energy.
senses the temperature of the room, the temperature of the
air flowing to the room, computes the angle of the damper
using the model to compute the next minute’s partial objective
and adjusts the shed to that angle. After each time period,
the temperature is expected to be at a certain level, so the
system senses the temperature of the room to see if the partial
objective has been met. Afterwards, it recomputes the angle
using the model to take into consideration the error arising
from the previous partial objective.
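The per-minute partial objectives described above can be sketched as a simple temperature schedule; a linear interpolation is assumed here, since the text does not specify the exact shape of the schedule:

```python
def partial_objectives(k_start, k_target, minutes):
    """Evenly spaced per-minute temperature targets from k_start to k_target.

    After each minute the controller compares the sensed room temperature
    with the corresponding entry and recomputes the damper angle, absorbing
    the error left over from the previous partial objective.
    """
    step = (k_target - k_start) / minutes
    return [k_start + step * m for m in range(1, minutes + 1)]
```

For instance, cooling from 24 to 21 degrees in three minutes yields the schedule [23.0, 22.0, 21.0].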
V. RESULTS
The purpose of this section is to test whether the model
will make it possible to reach the goal of customizing the
temperature to the users’ needs and be able to increase both
user’s comfort and energy efficiency. To do so, a program
was built that simulates the change of temperature by adding
and removing people. It also simulates internal room heat
that changes with time (e.g. computers, heat from windows...).
Afterwards, the model was added to the program to see if it
can successfully meet the user’s target within the target time.
C. Model
The model relies on the following assumptions and parameters, some of which are set as constants in the system and do not need to be sensed, while others change and must be sensed at every time step in order for the system to achieve its objective:
1) Assumptions: To keep the model simple, some quantities are assumed constant. The pressure inside the room is assumed to be the same for every room and not to change over time. The model also assumes that the speed of the air entering the room equals the duct air speed vout; in reality, the direction of the airflow vector affects this speed. These simplifications keep the model tractable.
2) Parameters:
• Target temperature Ktarget : This is the input temperature
provided by the user. It is the target temperature to be
achieved by the system.
• Time ∆ttarget : This is the input time in minutes provided
by the user. It is the time required to meet the target
temperature.
• Output α: This is the angle of the damper that is computed by the model. It is what defines the quantity of air
flowing to the system.
• Inside temperature Kin : This is the inside room temperature. It is sensed by the sensor node within the room. It
is sensed every minute. It is also used to check the error
in meeting the partial target.
• Outside Air temperature Kout : This is the temperature of
the air in the duct and it is sensed every minute because
it can be changed by the air-handling unit and is taken into account to compute α.
• Outside air Speed vout : This is the speed of the air
flowing in the duct.
• Volume of the room: Vroom : This parameter represents
the volume of the room.
• Air density ρ = 1.225 kg/m³.
• Surface of damper l²: This is the surface of the damper that we change. The bigger the α, the bigger the surface, the more air flows into the room.
The model we have created for this study outputs the angle
α that is computed using the parameters above.
α = tan⁻¹( ρ · V_room · (K_target − K_in) / (∆t_target · v_out · l² · (K_out − K_in)) )
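The damper-angle formula, α = tan⁻¹(ρ·V_room·(K_target − K_in) / (∆t_target·v_out·l²·(K_out − K_in))), can be evaluated directly. In the sketch below, the duct air temperature (12 degrees C) and the damper side length (0.3 m) are assumed values, since they are sensed or fixed at installation time and not stated in the text:

```python
import math

def damper_angle(rho, v_room, k_target, k_in, k_out, dt_target, v_out, l):
    """Damper angle alpha (radians) per the model:
    alpha = atan(rho*V_room*(K_target - K_in) / (dt*v_out*l^2*(K_out - K_in))).

    rho: air density (kg/m^3); v_room: room volume (m^3);
    k_target, k_in, k_out: target, room and duct air temperatures;
    dt_target: time to reach the target (minutes); v_out: duct air
    speed (m/min); l: damper side length, surface l^2 (m).
    """
    num = rho * v_room * (k_target - k_in)
    den = dt_target * v_out * l ** 2 * (k_out - k_in)
    return math.atan(num / den)

# Simulation values from the text: 150 m^3 room, 100 m/min duct air speed,
# cooling 24 -> 21 degrees C within 30 minutes.
alpha = damper_angle(1.225, 150.0, 21.0, 24.0, 12.0, 30.0, 100.0, 0.3)
```

While cooling, both temperature differences are negative, so the ratio, and hence α, stays positive.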
A. Simulation
The simulation program is a Java SE [14] program that
depicts a room with users going in and out of the room and
controls the air damper to meet the target temperature within
the target time.
B. Data collection
The simulation is run assuming the following information.
The room size is 150 cubic meters. The air speed is 100 meters
per minute. The starting temperature is 24 degrees Celsius. The
user asks the system to achieve a temperature of 21 degrees
Celsius within 30 minutes. In addition, people go into the room
and exit randomly. This generates heat randomly. The system
takes into consideration such possible changes to adjust to in
order to meet the target.
Fig. 5. Java airflow simulation program
Figure 5 shows a screenshot of the Java simulation
application that simulates the room and shows the model in
action. The simulator outputs after every minute the current
temperature of the room and the angle of the damper that
will be set to meet the target given the time left. This data
was collected to evaluate the performance of the system.
C. Data interpretation
The data collected was used to compile Figure 6 which
shows how the angle is adjusted in order to meet the target
temperature. It also shows how the temperature changes as a
result of a change in the shed angle. Initially the temperature
is set to 24 Celsius and the user chooses 21 Celsius as target
This model computes the angle of the damper to ensure enough
air is flowing to the room to meet the target temperature
[3] H. Farhangi, “The path of the smart grid,” Power and Energy Magazine,
IEEE, vol. 8, no. 1, pp. 18–28, 2010.
[4] R. E. Brown, “Impact of smart grid on distribution system design,” in
Power and Energy Society General Meeting-Conversion and Delivery
of Electrical Energy in the 21st Century, 2008 IEEE. IEEE, 2008, pp.
1–4.
[5] A.-H. Mohsenian-Rad, V. W. Wong, J. Jatskevich, R. Schober, and
A. Leon-Garcia, “Autonomous demand-side management based on
game-theoretic energy consumption scheduling for the future smart
grid,” Smart Grid, IEEE Transactions on, vol. 1, no. 3, pp. 320–331,
2010.
[6] Z. M. Fadlullah, M. M. Fouda, N. Kato, A. Takeuchi, N. Iwasaki, and
Y. Nozaki, “Toward intelligent machine-to-machine communications in
smart grid,” Communications Magazine, IEEE, vol. 49, no. 4, pp. 60–65,
2011.
[7] W. Kirsner, “Chilled water plant design,” HEATING PIPING AND AIR
CONDITIONING-CHICAGO-, vol. 68, pp. 73–80, 1996.
[8] W. J. Fisk, “Health and productivity gains from better indoor environments and their relationship with building energy efficiency,” Annual
Review of Energy and the Environment, vol. 25, no. 1, pp. 537–566,
2000.
[9] N.-C. Yang and T.-H. Chen, “Assessment of loss factor approach to
energy loss evaluation for branch circuits or feeders of a dwelling unit
or building,” Energy and Buildings, vol. 48, pp. 91–96, 2012.
[10] S. Mamidi, Y.-H. Chang, and R. Maheswaran, “Improving building
energy efficiency with a network of sensing, learning and prediction
agents,” in Proceedings of the 11th International Conference on Autonomous Agents and Multiagent Systems-Volume 1.
International
Foundation for Autonomous Agents and Multiagent Systems, 2012, pp.
45–52.
[11] R. E. Edwards, J. New, and L. E. Parker, “Predicting future hourly
residential electrical consumption: A machine learning case study,”
Energy and Buildings, vol. 49, pp. 591–603, 2012.
[12] M. Wallace, R. McBride, S. Aumi, P. Mhaskar, J. House, and T. Salsbury, “Energy efficient model predictive building temperature control,”
Chemical Engineering Science, vol. 69, no. 1, pp. 45–58, 2012.
[13] H. Jouhara and R. Meskimmon, “Experimental investigation of
wraparound loop heat pipe heat exchanger used in energy efficient air
handling units,” Energy, vol. 35, no. 12, pp. 4592–4599, 2010.
[14] J. Gosling, B. Joy, G. Steele, G. Bracha, and A. Buckley, The
Java Language Specification, Java SE 7 edition, California, USA,
February 2012. [Online]. Available: http://docs.oracle.com/javase/specs/
jls/se7/jls7.pdf
Fig. 6. Temperature change and angle computation to meet target temperature
that should be achieved within 30 minutes. Figure 6 shows that
the target is achieved within the required time. As a result, the
user’s comfort is increased because the target is achieved.
VI. CONCLUSION AND FUTURE WORK
Our studies show that there is a potential for energy optimization in buildings and the central plant’s machinery. To
optimize the energy in the central plant, there is a need to
monitor the efficiency profile of every machine, how one’s
efficiency affects the others and how this efficiency profile
changes through time. Therefore, machine learning is an appropriate approach in this changing environment.
The buildings were shown to be subject to optimization that
will minimize the energy consumption and also increase the
users’ comfort. The airflow system simulation showed that
there is a potential for energy efficiency. As a future work, this
system will be tested in a real-world environment and serve
as an infrastructure for sensing and control. In addition, the
current model does not take into account real world parameters
such as pressure of the room, the effect of one room air needs
on the other rooms. These will be taken into consideration to
make the model more robust to these changes.
ACKNOWLEDGMENT
We would like to express our very great appreciation to
Demond Williams for his valuable recommendations as well
as his help in conducting this study. We would also like to
express deep gratitude to Tao LV, who has been part of this
project, who has helped in the understanding of the system. We
would also like to thank the Central plant operators who spent time with our group explaining the structure of the plant, as well as helping us conduct many experiments.
We would like to thank the Central Plant in the University
of Houston for giving us the chance to work with them and
carry our project with their valuable assistance. We would
like to also thank the students from College of Technology
in University of Houston who spent long hours contributing
to this project by helping digitize the data.
REFERENCES
[1] Department of Energy. (2013) Building energy data book. [Online]. Available:
http://buildingsdatabook.eren.doe.gov
[2] D. Adriano, A. Page, A. Elseewi, A. Chang, and I. Straughan, “Utilization and disposal of fly ash and other coal residues in terrestrial
ecosystems: A review,” Journal of Environmental Quality, vol. 9, no. 3,
pp. 333–344, 1980.
ACCEPTED BY IEEE JSTSP. COPYRIGHT MAY BE TRANSFERRED WITHOUT NOTICE, AFTER WHICH THIS VERSION MAY NO LONGER BE ACCESSIBLE.
Interleaved Training and Training-Based Transmission Design for Hybrid
Massive Antenna Downlink
arXiv:1710.06949v2 [] 11 Mar 2018
Cheng Zhang, Student Member, IEEE, Yindi Jing, Member, IEEE, Yongming Huang, Senior Member, IEEE,
Luxi Yang, Member, IEEE
Abstract—In this paper, we study the beam-based training
design jointly with the transmission design for hybrid massive
antenna single-user (SU) and multiple-user (MU) systems where
outage probability is adopted as the performance measure.
For SU systems, we propose an interleaved training design to
concatenate the feedback and training procedures, thus making
the training length adaptive to the channel realization. Exact
analytical expressions are derived for the average training length
and the outage probability of the proposed interleaved training.
For MU systems, we propose a joint design for the beam-based
interleaved training, beam assignment, and MU data transmissions. Two solutions for the beam assignment are provided with
different complexity-performance tradeoff. Analytical results and
simulations show that for both SU and MU systems, the proposed
joint training and transmission designs achieve the same outage
performance as the traditional full-training scheme but with
significant saving in the training overhead.
Index Terms—Hybrid massive antenna system, outage probability, beam training, beam assignment.
I. INTRODUCTION
Massive multiple-input-multiple-output (MIMO) is considered to be a promising technique to further increase the
spectrum efficiency of wireless systems, thanks to the high
spatial degrees-of-freedom it can provide [2]–[4]. However,
the conventional full-digital implementation where one full
radio-frequency (RF) chain is installed for each antenna tends
to be impractical due to its high hardware costs and power
consumptions [5], especially for systems targeted at the millimeter wave (mmWave) band [6]–[8]. Recently, enabled by
the cost-effective and low-complexity phase shifters, a hybrid
analog-digital structure has been applied for massive antenna
systems which can effectively reduce the hardware complexity
and costs via a combination of full-dimensional analog RF
processing and low-dimensional baseband processing [9], [10].
It has been shown that this hybrid structure incurs slight
performance loss compared with its full-digital counterpart [9],
when perfect channel state information (CSI) is available.
One crucial practical issue for massive MIMO downlink is
the acquisition of CSI at the base station (BS), especially when
no channel reciprocity can be exploited, e.g., systems with
This work was supported in part by the National Natural Science Foundation
of China under Grants 61720106003, in part by the Research Project of
Jiangsu Province under Grant BE2015156, in part by the Scientific Research
Foundation of Graduate School of Southeast University. Part of this work
has been accepted by IEEE International Conference on Acoustics, Speech
and Signal Processing (ICASSP) 2018 [1]. The guest editor coordinating the
review of this manuscript and approving it for publication was Dr. Christos
Masouros. (Corresponding author: Yongming Huang.)
C. Zhang, Y. Huang and L. Yang are with the National Mobile Communications Research Laboratory, Southeast University, Nanjing 210096, P. R.
China (email: zhangcheng1988, huangym, [email protected]).
Y. Jing is with the Department of Electrical and Computer Engineering, University of Alberta, Edmonton, Canada, T6G 1H9 (email:
[email protected]).
frequency-division-duplexing (FDD) [11]. Due to the massive
number of channel coefficients to be estimated [12], traditional
training and channel estimation schemes cause prohibitive
training overhead. Thus, new channel estimation methods have
been proposed for massive MIMO downlink by exploiting
channel statistics [13] or channel sparsity [14]. While these
apply to full-digital systems, they are less effective for the
hybrid structure. Due to the limited number of RF chains
and the phase-only control in the RF processing, it is more
challenging to acquire the channel statistics or align with
statistically dominant directions of the channels accurately and
transmit sensing pilots with high quality [15].
One popular method for the downlink training with hybrid
massive antenna BS is to combine the codebook [16] based
beam training with the traditional MIMO channel estimation
[10], [17], [18]. In this method, a finite codebook is used which
contains all possible analog precoders, called beams. Then the
channel estimation problem is transformed to the estimation
of the beam-domain effective channels. This can reduce the
dimension of the channel estimation problem if the number
of desired beams is limited. The remaining difficulty lies in
finding the desired beams or analog precoders with affordable
training overhead.
Several typical beam-based training schemes have been
proposed for hybrid massive MIMO downlink. For systems
with single RF chain at the BS, which can serve single user
(SU) only, one straightforward method is to exhaustively train
all possible beams in the codebook, then to find the best
beam for transmission. Another typical method is based on
hierarchical search [10], [17], where all possible wide beams
are trained first and the best is selected. Within this selected
best wide beam, the beams with narrower beamwidth are
trained and the best is chosen. The training procedure is
repeated until the optimal beam with acceptable beamwidth
is found. Generally speaking, the hierarchical search scheme
has lower training overhead than the scheme with exhaustive
search. However, since hierarchical search uses wide beams
first, its beam alignment quality is very sensitive to the prebeamforming signal-to-noise-ratio (SNR) [19]. Meanwhile, its
advantage of lower training overhead diminishes as the number
of channel paths increases or when applied to multiple-user
(MU) systems [10], [20].
Beam training procedure for hybrid massive antenna systems with multiple RF chains is analogous to that of the
single RF chain case. The users generally feed back channels
of multiple beams [21], and then the BS selects the best
beam combination, and constructs the corresponding analog
precoder and baseband precoder for data transmission. The
work in [22] studied the beam selection problems with respect to the capacity and signal-to-interference-plus-noise-ratio
(SINR) maximization. In [23], the tradeoff of transmission
beamwidth and training overhead was studied, where the
beamwidth selection and pair scheduling were jointly designed
to maximize network throughput. In [21], partial beam-based
training was proposed where only a subset of the beams are
trained and the sum-rate loss compared with full training was
studied. Another scheme proposed in literature is based on
tabu search [8] where several initial beams are first chosen and
the training procedure is only executed within the neighbors
obtained by changing the beam of only one RF chain and
fixing the others. The training stops when a local optimal beam
combination is found in terms of mutual information.
For existing beam-based training and corresponding transmission schemes, the basic idea is to obtain the complete
effective CSI for the full or selected partial beam codebook.
Based on the obtained effective CSI values, data transmission
designs are then proposed. In such schemes, the training design
and data transmission design are decoupled. For satisfactory
performance, the size of the training beam codebook needs to
increase linearly with the BS antenna number, leading to heavy
training overhead for massive MIMO systems. The decoupled
nature of existing schemes also imposes limitations on the
tradeoff between training overhead and performance. Further,
only the throughput or diversity gain has been considered
in existing work. In [24], an interleaved training scheme
was proposed for the downlink of SU full-digital massive
antenna systems with independent and identically distributed
(i.i.d.) channels. In this scheme, the BS trains its channels
sequentially and the estimated CSI or indicator is fed back
immediately after each training step. With each feedback, the
BS decides to train another channel or terminate the training
process based on whether an outage occurs. The scheme was
shown to achieve significant reduction in training overhead
with the same outage performance compared to traditional
schemes.
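The idea behind the interleaved scheme of [24] fits in a few lines. The sketch below is a schematic for a single RF chain with hypothetical per-beam (or per-antenna) gains and an outage threshold, not the paper's exact algorithm:

```python
def interleaved_training(beam_gains, threshold):
    """Train beams one at a time; after each step the user feeds back whether
    the best beam found so far already avoids an outage, and the BS stops
    training as soon as it does.  Returns (training_length, outage_flag)."""
    best = 0.0
    for t, gain in enumerate(beam_gains, start=1):
        best = max(best, gain)
        if best > threshold:        # target rate reachable: terminate training
            return t, False
    return len(beam_gains), True    # all beams trained, outage declared
```

With gains [0.1, 0.5, 0.2] and threshold 0.3 the training stops after two beams instead of the full sweep a traditional scheme would use, which is the source of the overhead saving.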
By considering the aforementioned limitations of existing
training schemes and exploiting the interleaved training idea,
in this paper, we study the beam-based training design jointly
with the data transmission design for hybrid massive antenna
systems with single user and multiple users. The outage probability is adopted as the performance measure. We consider
interleaved training designs that are dynamic and adaptive, in
the sense that the length of the training interval depends on the
channel realization and the termination of the training process
depends on previous training results, to achieve favorable
tradeoff between the outage performance and the training
overhead. Compared with the work in [24], our work is
different in the following aspects. First, [24] is on full-digital
massive MIMO systems with i.i.d. channels, whereas in this
work, we consider hybrid massive antenna systems and a more
general channel model that incorporates channel correlation
and limited scattering. Our work uses beam-based training
while in [24] the training is conducted for each antenna
sequentially. Second, given the above differences in system,
channel, and transceiver models, the theoretical derivations and
analytical results in our work are largely different. The numbers of resolvable channel paths and RF chains are two new
parameters in our work. They both make the derivations on the
average training length and the outage probability considerably
more complicated compared to the case in [24]. Our work also
provides more comprehensive analytical performance results
along with abundant new insights. Third, this work considers
both SU and MU systems. Especially for MU systems, new
design issues appear, such as the beam assignment and the
local CSI at each user. Although the same basic interleaved and
adaptive training idea is used, the implementation of this idea
for MU systems is far from an immediate application, and the algorithm differs substantially from the one for the SU case. The
distinct contributions of this work are summarized as follows.
• For SU massive antenna systems with arbitrary number of RF chains, we propose a beam-based interleaved training scheme and the corresponding joint data transmission design. The average training length and the outage probability of the proposed scheme are studied, where exact analytical expressions are derived.
• For MU massive antenna systems with arbitrary number of RF chains, we propose a joint beam-based interleaved training and data transmission design. Two beam assignment solutions, i.e., exhaustive search and max-min assignment, are proposed with different complexity-performance tradeoff. Compared to exhaustive search, the low-complexity max-min method induces negligible increment in the average training length and small degradation in the outage performance. Due to its advantage in complexity, the max-min method is more desirable for the proposed MU scheme.
• Analytical results and simulations show that for both SU and MU systems, the proposed training and joint transmission designs achieve the same outage probability as the traditional full-training scheme but with significant saving in the training overhead.
• Based on the analytical results and simulations, useful insights are obtained on the performance of several special but typical scenarios, e.g., small channel angle spread (AS) or limited scattering, and also on the effect of important system parameters, e.g., the BS antenna number, the RF chain number, the channel path number or AS, and the rate requirement, on the average training length and the outage performance.
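Of the two beam-assignment solutions, the max-min one can be illustrated with a greedy sketch over a user-by-beam table of effective channel gains. This is one plausible reading of a max-min assignment, not the paper's exact algorithm:

```python
def max_min_assignment(gains):
    """Greedy max-min sketch.  gains[u][b]: effective gain of user u on beam b.
    The user whose best remaining beam is weakest claims that beam first, so
    later choices cannot starve the worst-off user."""
    n_users = len(gains)
    assigned, used = {}, set()
    while len(assigned) < n_users:
        # each unassigned user's best still-available beam, as (gain, beam)
        best = {u: max((g, b) for b, g in enumerate(gains[u]) if b not in used)
                for u in range(n_users) if u not in assigned}
        worst_user = min(best, key=lambda u: best[u][0])
        gain, beam = best[worst_user]
        assigned[worst_user] = beam
        used.add(beam)
    return assigned
```

With gains [[0.9, 0.8], [0.85, 0.1]], user 1 claims beam 0 first (its only good option), leaving beam 1 for user 0; an unconstrained per-user choice would instead have both users compete for beam 0.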
Specifically, for the average training length of the proposed
SU scheme, our analysis reveals the following. 1) For channels
with limited scattering, the average training length is no
longer a constant as in the case of i.i.d. channels, but linearly
increases with respect to the BS antenna number. However, it
decreases linearly with increasing channel paths. Meanwhile,
fewer RF chains or higher rate requirement has negligible
effect on the average training length. 2) For channels with
non-negligible AS, the average training length is a constant
dependent on the AS, the RF chain number, and the rate requirement. The constant increases for higher rate requirement
while the increasing slope decreases with more RF chains.
Moreover, smaller AS (larger channel correlation) reduces the
increasing speed of the average training length with higher
rate requirement. For the outage probability of the proposed
scheme, the following major new insights are achieved. 1)
For channels with limited scattering and single RF chain, the
outage probability scales as the reciprocal of the BS antenna
number to the power of the channel path number. 2) For
channels with non-negligible AS and single RF chain, the
outage probability decreases exponentially with respect to the
BS antenna number. 3) More RF chains can further decrease
the outage probability for both kinds of channels.
Notation: In this paper, bold upper case letters and bold
lower case letters are used to denote matrices and vectors, respectively. For a matrix A, its conjugate transpose, transpose,
and trace are denoted by AH , AT and tr{A}, respectively.
For a vector a, its conjugate counterpart is a∗ . E[·] is the
mean operator and Pr[·] indicates the probability. The notation
a = O (b) means that a and b have the same scaling with
respect to a parameter given in the context. kak denotes the
2-norm of a Rand kAkF denotes the Frobenius norm of A.
x
Υ (s, x) = 0 ts−1 e−tRdt is the lower incomplete gamma
∞
function and Γ (s, x) = x ts−1 e−t dt is the upper incomplete
gamma function. X 2 (k) denotes the chi-squared distribution
with k degrees of freedom. CN (0, Σ) denotes the circularly
symmetric complex Gaussian distribution where the mean
vector is 0 and the covariance matrix is Σ.
II. SYSTEM MODEL AND PROBLEM STATEMENT
A. System Model
Consider the downlink of a single-cell1 massive antenna
system with hybrid beamforming structure. The BS employs
Nt ≫ 1 antennas with NRF ∈ [1, Nt ) RF chains and serves
U single-antenna users. Since for effective communications,
each user requires a distinct beam, U ≤ NRF is assumed. Let
H = [h1 , ..., hU ] ∈ CNt ×U be the downlink channel matrix.
1) Channel Model: We consider a typical uniform array of high dimension at the BS, e.g., a uniform linear array or a uniform planar array. The beamspace representation
of the channel matrix becomes a natural choice [15], [25]
where the antenna space and beamspace are related through
a spatial discrete Fourier transform (DFT). Denote the DFT
matrix as D ∈ CNt ×Nt where the ith column is di =
[1, e−j2π(i−1)/Nt , ..., e−j2π(i−1)(Nt −1)/Nt ]T , ∀i. Assume that
there are L ∈ [1, Nt ] distinguishable scatterers or paths [15]
in User u’s channel ∀u, and define the set of their direction
indices as Iu = {Iu,1 , ..., Iu,L }. The channel vector of User
u can be written as
hu = Dh̄u = [d1 , ..., dNt ][h̄u,1 , ..., h̄u,Nt ]T ,
(1)
where h̄u,i ∼ CN (0, 1/L) for i ∈ Iu and h̄u,i = 0 for i ∉ Iu. This channel model can be understood as an asymptotic
approximation of the geometric channel model in [26] which
has been widely used for the mmWave band [10], [27] with
discretized angle distribution of the channel paths.
Specifically, we assume that different users have independent path directions and gains, and the L-combination
(Iu,1 , · · · , Iu,L ) follows discrete uniform distribution with
1 This work only considers single-cell systems without inter-cell interference. The main reason for the simplification is to focus on interleaved training
design and fundamental performance behavior. The multi-cell case is left for
future work.
each element on [1, Nt ]. One justification is given as follows.
Generally, for the geometric channel model, the angles of
different paths are independent following uniform distribution
[10] and no two paths’ continuous angles are the same. The
beamspace representation equivalently divides the angle space
into Nt uniform sections [15]. When Nt is large enough,
no two paths are in the same section. As the variances of
h̄u,i , i∈Iu are set to be the same, the average power difference
among different paths is not embodied in this channel model.
Further, it is assumed that all users have the same L. When
L = Nt , our channel model becomes the i.i.d. one [24]. When
L = 1, it reduces to the single-path one [21].
While generally speaking, the number of distinguishable
channel paths L is arbitrary in our work, two typical scenarios
are of special interest, corresponding to different scaling with
respect to Nt . The first typical scaling for L is that it is a
constant with respect to Nt , i.e., L = O(1). This corresponds
to channels with extremely small AS where having more
antennas does not result in more distinguishable paths. One
application is the outdoor environment with a few dominant
clusters [15]. Another typical scaling is when L linearly
increases with Nt , i.e., L = cNt with c ∈ (0, 1] being a
constant. It corresponds to channels with non-negligible AS
where c is the value of the AS. Since the spatial resolution
increases linearly with Nt , it is reasonable to assume that
the number of distinguishable path increases linearly with
Nt . One application is the indoor environment with a large
amount of reflections [15]. Similarly, two typical scalings for
NRF are NRF = O(1) and NRF = c̄Nt with c̄ ∈ (0, 1]
being a constant. The former case is more interesting from
the perspective of low hardware costs.
2) Hybrid Precoding and Outage Probability: The hybrid
structure at the BS calls for an analog RF precoding followed
with a baseband precoding.
The analog precoder FRF ∈ CNt ×Ls is realized by phase
shifters for low hardware complexity, where Ls is the number
of beams used for the transmission and Ls ≤ NRF . All
elements of FRF have the same constant norm. Without loss
of generality, we assume k[FRF ]i,j k2 = 1/Nt , ∀i, j. The
codebook-based beamforming scheme is used in this work,
where columns of FRF are chosen from a codebook of
vectors FRF [10], [17]. Naturally, with the channel model in (1), the DFT codebook is used [28], where FRF = {d∗1/√Nt, · · · , d∗Nt/√Nt}. Each element in the codebook
With a given analog beamforming matrix FRF, the effective channel matrix for the baseband is HT FRF. More specifically, hTu d∗i /√Nt = √Nt h̄u,i is the effective channel of User u on Beam i. If h̄u,i ≠ 0, Beam i is a non-zero beam for User u.
The next to discuss is the baseband precoding and the outage
probability. In what follows, we consider the SU case and the
MU case separately due to their fundamental difference.
For the SU case (i.e., U = 1) where the BS chooses Ls beams for analog precoding, the transmitted signal vector is √P FRF fBB s and the transceiver equation can be written as

y = √P hT FRF fBB s + n = √P h̄T DFRF fBB s + n,   (2)
where P can be shown to be the short-term total transmit
power, h is the channel vector from the BS to the user,
fBB is the baseband beamformer, s denotes the data symbol with unit power, and n is the additive noise following
CN (0, 1). For a fixed h, with perfect effective channel vector,
i.e., h̄T DFRF , at the user side, the channel capacity is
log2 (1+P kh̄T DFRF fBB k2 ) bps/Hz. For a given transmission
rate Rth, an outage event occurs if

‖h̄T DFRF fBB‖^2 ≤ α ≜ (2^{Rth} − 1)/P,   (3)

where α is called the target normalized received SNR. Thus,
for random h, the outage probability is

out(FRF, fBB) = Pr(‖h̄T DFRF fBB‖^2 ≤ α).   (4)
If further the effective channel vector is perfectly known at the
BS, fBB can be designed to match the effective channel vector,
i.e., fBB = (h̄T DFRF )H /kh̄T DFRF k, which is optimal in
the sense of minimizing the outage probability. In this case,
the outage probability becomes Pr(kh̄T DFRF k2 ≤ α) which
is dependent on FRF .
If more than Ls non-zero beams are available, beam selection is needed. Define the set of indices of the available
non-zero beams as A = {a1, ..., aj}. The optimal beam selection is to find the strongest Ls beams within this set.
Order the magnitudes of the effective channels as
‖h̄s1‖ ≥ ‖h̄s2‖ ≥ ... ≥ ‖h̄sLs‖ ≥ · · · ≥ ‖h̄sj‖. The set of
indices of the selected beams is S = {s1, ..., sLs}. Thus the
beamforming matrices are:

FRF = [ds1^∗/√Nt, ..., dsLs^∗/√Nt],
fBB = [h̄s1, ..., h̄sLs]^H / √(‖h̄s1‖^2 + · · · + ‖h̄sLs‖^2).   (5)
The outage probability reduces to Pr( Σ_{i=1}^{Ls} ‖h̄si‖^2 ≤ α/Nt ).
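For concreteness, the beam-selection rule and the beamformers of (5) can be sketched as follows. This is an illustrative sketch only: the DFT codebook construction and the function name `su_beam_selection` are our own assumptions, not part of the system model.

```python
import numpy as np

def su_beam_selection(h_bar, Ls):
    """Illustrative sketch of (5): keep the Ls strongest non-zero beams and
    build the analog precoder F_RF and the matched baseband beamformer f_BB.
    h_bar: length-Nt vector of effective channels (zero on non-aligned beams)."""
    Nt = h_bar.size
    # Assumed unitary DFT codebook: column i is d_i / sqrt(Nt); F_RF uses d_i^*.
    D = np.fft.fft(np.eye(Nt)) / np.sqrt(Nt)
    order = np.argsort(np.abs(h_bar))[::-1]        # sort beams by magnitude
    S = np.sort(order[:Ls])                        # indices of the Ls selected beams
    F_RF = np.conj(D[:, S])                        # Nt x Ls analog precoder
    h_eff = h_bar[S]
    f_BB = np.conj(h_eff) / np.linalg.norm(h_eff)  # matched filter as in (5)
    return S, F_RF, f_BB
```

The matched f_BB has unit norm, so the received SNR is governed by the sum of the selected beam gains, as in the outage expression above.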
For the MU case, we assume that the BS uses U out of
the NRF RF chains to serve the U users, i.e., Ls = U . The
received signal vector at the users can be presented as

y = √P HT FRF FBB s + n,   (6)
where s ∈ C^{U×1} contains the U independent data symbols
satisfying E[ss^H] = (1/U)IU, and n ∼ CN(0, I) is the
additive noise vector. For the short-term power normalization
at the BS, we set tr{FRF FBB FBB^H FRF^H} = U.
Without loss of generality, assume that Beams n1, · · · , nU
are selected to serve Users 1, · · · , U, respectively. Thus
FRF = [dn1^∗/√Nt, ..., dnU^∗/√Nt]. The effective channel
matrix Ĥ = HT FRF is therefore

Ĥ = [h̄1, ..., h̄U]T DFRF = √Nt [ h̄1,n1 · · · h̄1,nU ; · · · ; h̄U,n1 · · · h̄U,nU ],   (7)

i.e., the (u, v) entry of Ĥ is √Nt h̄u,nv.
One of the most widely used baseband precodings is the zero-forcing (ZF) scheme [22], [27]: FBB = λĤ^H(ĤĤ^H)^{−1}, where

λ = √( U / ‖FRF Ĥ^H(ĤĤ^H)^{−1}‖F^2 ).   (8)
With ZF baseband precoding, the inter-user interference is cancelled
and the received SNRs of all users are the same:

SNRMU = (P/U)λ^2.   (9)
For a given target per-user transmission rate Rth, an outage
event occurs for User u if SNRMU ≤ (P/U)ᾱ, where

ᾱ ≜ U(2^{Rth} − 1)/P   (10)

is the target normalized per-user received SNR. The outage
probability for the system with U users is thus

out(FRF, FBB) = Pr(λ^2 ≤ ᾱ).   (11)
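The ZF step (8)-(9) and the outage test (10)-(11) can be sketched numerically as follows. The function names and the random test channel are illustrative assumptions, not part of the system model.

```python
import numpy as np

def zf_baseband(F_RF, H):
    """Sketch of (8): F_BB = λ Ĥ^H (Ĥ Ĥ^H)^{-1}, with λ enforcing the
    short-term power constraint tr{F_RF F_BB F_BB^H F_RF^H} = U.
    H: Nt x U channel matrix. Returns F_BB and λ² (so SNR_MU = (P/U)λ²)."""
    U = H.shape[1]
    H_eff = H.T @ F_RF                                    # effective channel Ĥ = H^T F_RF
    pinv = H_eff.conj().T @ np.linalg.inv(H_eff @ H_eff.conj().T)
    lam = np.sqrt(U / np.linalg.norm(F_RF @ pinv, 'fro') ** 2)
    return lam * pinv, lam ** 2

def is_outage(lam2, P, U, Rth):
    """Outage test of (10)-(11): ᾱ = U(2^Rth − 1)/P; outage iff λ² ≤ ᾱ."""
    return lam2 <= U * (2 ** Rth - 1) / P
```

By construction, Ĥ FBB = λ IU, so the inter-user interference is cancelled and all users see the common SNR (P/U)λ².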
B. Beam-Based Training and Existing Schemes
To implement hybrid precoding, including both the beam
selection/assignment and the baseband matching/ZF, CSI is
needed at the BS, thus downlink training and CSI feedback
must be conducted. Instead of all entries in H, for the hybrid
massive MIMO system under codebook-based beamforming,
the BS only needs the values of the effective channels √Nt h̄u,i's.
Thus beam-based training is a more economical choice than
traditional MIMO training [12]. In what follows, existing
beam-based training schemes are briefly reviewed.
For SU systems, the typical beam-based training scheme operates as follows. For each channel realization, the BS sequentially transmits along the Nt beams for the user to estimate the
corresponding effective channels. The effective channel values
are then sent back to the BS. Another type of beam-based
training scheme uses hierarchical search [10], which generally
has smaller training length. But this advantage diminishes as
the path number L increases or the pre-beamforming SNR
decreases, along with performance degradation [19]. Specifically, for SU systems with Nt BS antennas, NRF RF chains,
and L channel paths, the training length of the hierarchical
scheme is THT-SU = M L^2 logM(Nt/L), where M ≥ 2 is
the partition number for the beamwidth, e.g., M = 2 means
bisection [10, Section V-B-1]. Note that the training length
is independent of NRF since the ideal hierarchical codebook
is assumed. Meanwhile, the mechanism of the hierarchical
scheme needs L ≤ Nt/M < Nt/e^{0.5} ≈ Nt/1.65 for plausible
results. It can be shown that the training length increases with
L for the aforementioned range of L. As for the tabu search
based training [8], it is sensitive to the initialization step and
stops when a local optimal beam combination is found. Thus
it has worse outage performance.
For MU systems, beam-based training has been studied in
[21], [27], [28] with a procedure similar to the SU case reviewed
above since all users can conduct channel estimation at the
same time when the BS sends a pilot along one beam. But in
[21], a more general scheme was proposed where only Lt out
of Nt beams are selected for training. The value of Lt can
be used to leverage the tradeoff between the training overhead
and the performance. Larger Lt means longer training period,
less time for data transmission, and better transmission quality;
while smaller Lt leads to the opposite. For the special case of
Lt = Nt , the scheme becomes the full training case in [28].
It should be noted that while the training procedure for the
MU case is similar to that for the SU case, the effective CSI
feedback and the beam assignment at the BS are different.
Specifically, for an MU system with Lt beams being trained,
assume that there are ju non-zero beams among the trained
beams for User u, ∀u. Then User u feeds back to the BS the
effective channels of the ju non-zero beams along with their
indices. The BS then finds the beam assignment based on the
CSI feedback. The work in [22] considered the magnitude of
the path and the SINR, while the sum-rate maximization was
used in [21], [27].
C. Motivations of This Work
This work is on beam-based training design for SU and MU
massive MIMO systems with hybrid structure. The objective is to
propose beam-based training schemes and corresponding SU
and MU transmission schemes with reduced training length,
without sacrificing the performance compared with the full
training case. The training length, or the number of symbol
transmissions required in the training period, is a crucial
measure for training quality since it affects both the available
data transmission time and the beamforming/precoding gain
during transmission. In existing beam-based training schemes,
the training length is fixed regardless of the channel realization
and further the effective CSI feedback is separated from
the training procedure. Thus we refer to such designs as non-interleaved training (NIT). The combinations of NIT and
data transmission for SU and MU systems are referred to as
NIT-SU and NIT-MU transmission schemes, respectively. To
meet this objective, the idea of interleaved training is used: the
training length is adapted to each channel realization. Further, the effective CSI or indicator feedback
is concatenated with the pilot transmissions to monitor the
training status and guide the action of the next symbol period.
Naturally, for interleaved schemes, the training and the data
transmission need to be designed jointly.
In addition, in this work, outage probability is used as
the major performance measure, which differs from most
existing work, where the sum-rate [27] and SINR [22] are
used. While outage probability is a useful practical quality-of-service measure for wireless systems, its adoption in massive
MIMO is very limited [29] due to the challenging analysis. With outage probability as the performance measure
and the aim of reducing training length, in the proposed
interleaved schemes, the basic idea is to stop training when the
obtained effective CSI is enough to support the targeted rate
to avoid an outage. Other than training designs, quantitative
analysis will be conducted on the outage performance of
the proposed schemes for useful insights in hybrid massive
antenna system design.
III. INTERLEAVED TRAINING FOR SU SYSTEMS AND PERFORMANCE ANALYSIS
This section is on the SU system, where interleaved beam-based training and the corresponding SU transmission are
proposed. Further, outage probability performance is analyzed
as well as the average training length of the proposed scheme.
A. Proposed Interleaved Training and SU Transmission
Recall that the objective of interleaved training is to save
training time while still having the best outage probability
performance. Thus, instead of training all beams and finding
the best combination as in NIT, the training should stop right
after enough beams have been trained to avoid outage. Since
the set I of the L non-zero beams for the user is randomly and
uniformly distributed over the size-L subsets of {1, · · · , Nt}, and the channel
coefficients along the non-zero beams are i.i.d., the priorities of
the training for all beams are the same. Therefore, the natural
order is used for beam training, i.e., the BS trains from the
first beam to the Nt -th beam sequentially. The training stops
when the outage can be avoided based on the already trained
beams or no more beam is available for training.
Let B contain the indices of the non-zero beams that have
been trained. Let LB ≜ min(NRF, |B|) be the maximum number of non-zero beams that can be used for data transmission,
given the number of RF chains and the number of known non-zero beams. Let S contain the indices of the LB known non-zero beams with the largest norms. The proposed interleaved
training and the corresponding SU transmission scheme are
shown in Algorithm 1.
Algorithm 1 Proposed Interleaved Training and Corresponding SU Transmission (IT-SU) Scheme
1: B = ∅;
2: for i = 1, ..., Nt do
3:   The BS trains the ith beam; the user estimates h̄i;
4:   If ‖h̄i‖ > 0, B = B ∪ {i} and the user finds S, which contains the indices of the LB non-zero beams with the largest norms, then calculates Σ_{l∈S} ‖h̄l‖^2;
5:   if ‖h̄i‖ = 0 or Σ_{l∈S} ‖h̄l‖^2 ≤ α/Nt then
6:     The user feeds back "0"; continue;
7:   else
8:     The user feeds back h̄l, for all l ∈ S, along with their indices;
9:     The BS constructs FRF and fBB as in (5) and conducts data transmission; break;
10:  end if
11: end for
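The training loop of Algorithm 1 can be sketched as follows. The function name is illustrative, and the generation of the effective-channel vector is left to the caller, so the sketch is independent of the channel model details.

```python
import numpy as np

def it_su_training(h_bar, N_RF, alpha):
    """Sketch of Algorithm 1. h_bar: vector of effective channels h̄_1..h̄_Nt
    (zero on non-aligned beams). Returns the selected beam indices S, the
    training length, and whether an outage is declared."""
    Nt = h_bar.size
    B = []                                    # trained non-zero beam indices
    S = []
    for i in range(Nt):                       # BS trains beam i; user estimates h̄_i
        if abs(h_bar[i]) > 0:
            B.append(i)
        L_B = min(N_RF, len(B))               # beams usable given the RF chains
        S = sorted(B, key=lambda l: -abs(h_bar[l]))[:L_B]
        if sum(abs(h_bar[l]) ** 2 for l in S) > alpha / Nt:
            return S, i + 1, False            # feed back h̄_l, l ∈ S; transmit
        # otherwise the user feeds back "0" and training continues
    return S, Nt, True                        # all beams trained: outage
```

The early return is exactly the termination condition of Step 5: training stops as soon as the known beams can support the target SNR, which is the source of the training-length savings analyzed next.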
In the proposed scheme, at the ith training interval, where
i ≤ Nt, the BS sends a pilot for the user to estimate the ith
beam value h̄i (the scalar √Nt is omitted for brevity)2.
If it is a non-zero beam (i.e., ‖h̄i‖ > 0), the user compares
the received SNR provided by the LB strongest beams known
from the finished i training intervals with the target normalized
received SNR α to see if an outage event will happen given the
obtained CSI. If ‖h̄i‖ = 0 or Σ_{l∈S} ‖h̄l‖^2 ≤ α/Nt and i < Nt,
the already trained beams cannot provide a beam combination
to avoid outage. Thus the user feeds back the indicator "0"
to request the BS to continue training the next beam. For
the special case of i = Nt, all beams have been trained
2 In this work, the channel estimation is assumed to be error-free. For
massive antenna systems, the post-beamforming SNR during the training
phase is usually high, leading to small channel estimation error. Meanwhile,
this simplification has little effect on the structure of the proposed scheme,
although it may cause degradation in the outage performance.
and an outage is unavoidable with any beam combination.
If Σ_{l∈S} ‖h̄l‖^2 > α/Nt, enough beams have been trained
to avoid outage. Thus the user feeds back the LB non-zero
effective channels h̄l , l ∈ S along with their indices, and the
BS aligns the LB beams with FRF and matches the effective
channel vector with fBB as in (5) to conduct data transmission.
Since the training and feedback are interleaved in the proposed
scheme, we name it interleaved training based SU transmission
(IT-SU) scheme.
B. Average Training Length Analysis
This subsection studies the average training length of the
IT-SU scheme. Since for different channel realizations, the
number of beams being trained in our proposed IT-SU scheme
can vary due to the randomness in the path profile and path
gains, we study the average training length measured in the
number of training intervals per channel realization where the
average is over channel distribution.
Before showing the analytical result, we first discuss the
effect of NRF on the average training length. Intuitively, for
any given channel realization and at any step of the training
process, larger NRF means that the same or more beam
combinations are available based on the already trained beams.
Thus the same or larger received SNR can be provided, which
results in the same or an earlier termination of the training
period. Therefore, with other parameters fixed, the average
training length is a non-increasing function of NRF , i.e.,
larger NRF helps reduce the average training length. On the
other hand, since there are at most L non-zero paths in the
channel for each channel realization, having a larger NRF
than L cannot provide better beam combination for any given
already trained beams compared with that of when NRF = L.
Therefore, with other parameters fixed, the average training
length of the IT-SU scheme is a non-increasing function of
NRF for NRF ≤ L, and remains unchanged for NRF ≥ L.
The average training length for NRF = L is a lower bound
for a general NRF value and the average training length for
NRF = 1 is an upper bound.
With the above discussion, in the following analysis we
only consider the scenario of 1 ≤ NRF ≤ L. Define

ξ(i, j) ≜ \binom{i−1}{j} \binom{Nt−i}{L−j−1} / \binom{Nt}{L},   (12)

for i = 2, · · · , Nt − 1 and j < i, which is the probability of a
path being aligned by the ith beam and j paths being aligned
by the first i − 1 beams. Define

βj,l ≜ (−1)^l j! L^{NRF} / [(j − NRF − l)! (NRF − 1)! (NRF − 2)! l!],

for j = 0, · · · , L − 1 and l ≤ j − NRF, and

bi^{(1)} ≜ max{0, L − 1 − Nt + i},   bi^{(2)} ≜ min{i − 1, L − 1},

for i = 2, · · · , Nt − 1. The following theorem has been proved.

Theorem 1: For the hybrid SU massive antenna system with
Nt BS antennas, the L-path channel, NRF ≤ L RF chains,
and the target normalized received SNR α, the average training
length of the proposed IT-SU scheme is

TIT-SU = Nt − Σ_{i=1}^{Nt−1} (Nt − i) Pi,   (13)

where

P1 = (L/Nt) e^{−Lα/Nt},   (14)

and, for i = 2, ..., Nt − 1,

Pi = Σ_{j=bi^{(1)}}^{bi^{(2)}} ξ(i, j) e^{−Lα/Nt} (1 − e^{−Lα/Nt})^j,   if NRF = 1,

Pi = Σ_{j=bi^{(1)}}^{min(NRF−1, bi^{(2)})} ξ(i, j) (Lα/Nt)^j e^{−Lα/Nt}/j!
   + Σ_{j=max(NRF, bi^{(1)})}^{bi^{(2)}} ξ(i, j) (Pj^{(1)} + Pj^{(2)}),   if 1 < NRF ≤ L,   (15)

and Pj^{(1)} and Pj^{(2)} are shown in (16) and (17) at the top of
the next page.
Proof: See Appendix A.
Theorem 1 provides an analytical expression on the average
training length of the proposed IT-SU scheme. Other than
the two special functions Υ and Γ, it is in closed-form. The
two functions are well studied and their values can be easily
obtained. The Pi given in (14) and (15) is the probability
that the training length is i. Together, P1 , · · · , PNt form the
probability mass function (PMF) of the training length. From
(13), it can be easily concluded that the average training length
of the proposed scheme is always less than Nt since Pi ≥ 0,
∀i.
The result in Theorem 1 is general and applies for arbitrary
values of NRF , L, Nt . But due to the complicated format, it is
hard to see insightful behaviours of the average training length
directly. In what follows, several typical scenarios for massive
antenna systems are considered.
1) The Channel with Finite L, i.e., L = O(1): We first
consider the channels for the massive antenna system with
a finite number of paths. That is, L is a finite value while
Nt → ∞. The asymptotic result on the average training length
of the proposed IT-SU scheme is provided for both the special
case of single RF chain and the general case of multiple RF
chains.
Lemma 1: For the hybrid massive antenna system with
Nt ≫ 1 BS antennas, finite constant number of channel paths
L, and target normalized received SNR α, when the number
of RF chains is one, i.e., NRF = 1, or is the same as the
path number, i.e., NRF = L, the average training length of
the proposed IT-SU scheme can be written as follows:

TIT-SU = Nt/(L + 1) + O(1).   (18)

Proof: See Appendix B.
The result in Lemma 1 shows that for the two special
NRF values, the average training length of the IT-SU scheme
increases linearly with Nt, but the slope decreases linearly
with L, the number of channel paths. The traditional NIT-SU
Pj^{(1)} ≜ (NRF − 1) e^{−Lα/Nt} / (1 − NRF/L)^{NRF}
  × Σ_{l=0}^{j−NRF} Σ_{m=0}^{NRF−2} \binom{NRF−2}{m} (−1)^m βj,l / (l + 1)^{NRF}
  × [ (Υ(NRF − 1 − m, t) + Γ(NRF, t)) t^{m+1} ]_{t=0}^{t=Lα(l+1)/(Nt NRF)} / (m + 1),   (16)

Pj^{(2)} ≜ −e^{−Lα/Nt} / [ (NRF − 1)^2 (1 − NRF/L)^{NRF} ]
  × Σ_{l=0}^{j−NRF} Σ_{m=0}^{NRF−2} \binom{NRF−2}{m} βj,l / (l + 1)^{NRF}
  × Σ_{n=0}^{m} \binom{m}{n} ( −Lα(l+1)/(Nt NRF) )^n
    [ (Υ(NRF − 1 − m, t) + Γ(NRF − n, t)) t^{m−n+1} ]_{t=0}^{t=Lα(l+1)/(Nt NRF)} / (m − n + 1),   (17)

where Υ and Γ denote the lower and upper incomplete gamma functions, respectively.
scheme with full training has a fixed training length Nt . Thus
the proposed IT-SU scheme has significant saving in training
time as Nt is very large. For example, when NRF = L = 1,
we have TIT-SU ≈ Nt /2, meaning that the IT-SU scheme
reduces the average training length by half. It is noteworthy
that this gain in time is obtained with no cost in outage
probability performance (more details will be explained in
the next subsection). Moreover, the average training length
is independent of the threshold α.
From the discussion at the beginning of this subsection, we
know that for any value of NRF , the average training length is
lower bounded by its value for NRF = L and upper bounded
by its value for NRF = 1. Thus the analytical results for the
two special cases in Lemma 1 lead to the following corollary.
Corollary 1: For the hybrid massive antenna system with
Nt ≫ 1 BS antennas, finite constant number of channel paths
L, and target normalized received SNR α, the average training
length of the proposed IT-SU scheme can be written as (18)
for any number of RF chains NRF .
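As a numerical sanity check on Lemma 1 and Corollary 1, the average training length for NRF = 1 can be estimated by Monte Carlo. The Exp(mean 1/L) model for the non-zero beam gains is our own assumption, chosen to be consistent with the exponential form of (21); the function name is illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)

def avg_training_length(Nt, L, alpha, trials=2000):
    """Monte Carlo average training length of IT-SU with N_RF = 1: training
    stops at the first beam whose gain exceeds α/Nt (or after all Nt beams).
    Non-zero beam gains are modeled as i.i.d. Exp with mean 1/L (assumption)."""
    total = 0
    for _ in range(trials):
        g = np.zeros(Nt)
        beams = rng.choice(Nt, size=L, replace=False)   # random path profile I
        g[beams] = rng.exponential(1.0 / L, size=L)     # ‖h̄_i‖² on those beams
        hits = np.flatnonzero(g > alpha / Nt)
        total += (hits[0] + 1) if hits.size else Nt
    return total / trials
```

For Nt = 200, L = 3, and moderate α, the estimate lands near Nt/(L + 1) = 50, in line with the Nt/(L + 1) + O(1) behavior of (18).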
2) The Channel with Linearly Increasing L, i.e., L =
O(Nt ): The next typical scenario to consider is that the
number of channel paths has a linear scaling with the number
of BS antennas. That is, L = cNt while Nt → ∞ for
a constant but arbitrary c. For this case, due to the more
complicated form of Pi than that of the finite L case, simple
expression of the average training length is hard to obtain.
Two special cases are analyzed in what follows.
For the special case where NRF = 1 and c = 1, i.e., single
RF chain and i.i.d. channels, we have bi^{(1)} = bi^{(2)} = i − 1 for
i = 2, ..., Nt − 1. Thus, from (14) and (15),

Pi = e^{−α} (1 − e^{−α})^{i−1}
for i = 1, ..., Nt − 1. This is the same as the result of the
interleaved antenna selection scheme for full-digital massive
antenna systems with i.i.d. channels in [24]. The same asymptotic upper bound on the average training length for large Nt
can be obtained as:
TIT-SU = e^α (1 − (1 − e^{−α})^{Nt}) → e^α as Nt → ∞.
This bound is only dependent on the threshold α.
(17)
For another special case, where NRF = L and c = 1,
from (14) and (15) we have Pi = e^{−α} α^{i−1}/(i − 1)! for
i = 1, ..., Nt − 1. This is in accordance with that of the
interleaved Scheme D for full-digital massive antenna systems
with i.i.d. channels in [24, Theorem 2]. The corresponding
upper bound on the average training length is 1 + α, which
again is only dependent on the threshold α.
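The two c = 1 special cases above can be checked directly by averaging over the stated PMFs; the helper name is illustrative, and the residual mass beyond length Nt − 1 is assigned to length Nt as in Theorem 1.

```python
import math
import numpy as np

def avg_len_from_pmf(p_head, Nt):
    """Average training length for a PMF over lengths 1..Nt:
    p_head holds P_1..P_{Nt-1}; the remaining mass goes to P_{Nt}."""
    p = np.append(p_head, max(0.0, 1.0 - p_head.sum()))
    return float(np.dot(np.arange(1, Nt + 1), p))

Nt, alpha = 64, 2.0
i = np.arange(1, Nt)
# N_RF = 1, c = 1: P_i = e^{-α}(1 - e^{-α})^{i-1}  (geometric)
T_geo = avg_len_from_pmf(np.exp(-alpha) * (1 - np.exp(-alpha)) ** (i - 1), Nt)
# N_RF = L, c = 1: P_i = e^{-α} α^{i-1}/(i-1)!  (shifted Poisson)
T_poi = avg_len_from_pmf(
    np.array([math.exp(-alpha) * alpha ** (k - 1) / math.factorial(k - 1) for k in i]), Nt)
```

T_geo approaches the bound e^α and T_poi approaches 1 + α, and both are independent of Nt, consistent with the discussion above.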
For the general case of 0 < c < 1, the expression of Pi in
(15) can be used along with numerical evaluations for further
studies. Fig. 1 shows the average training length TIT-SU with
respect to Nt for different parameter values. The following
observations are obtained from the plot.
Fig. 1. Average training length of the IT-SU scheme versus the antenna number Nt for α = 4, 8, with curves for c = 0.1, 0.2 (NRF = 1 and NRF = L) and c = 1 (NRF = L).
• For any NRF value, TIT-SU asymptotically approaches a
constant upper bound that is independent of Nt.
• TIT-SU is an increasing function of the threshold α. Thus
the advantage of the proposed scheme over NIT-SU
degrades for larger α. The advantage degradation with
increasing α is most severe when c = 1 and NRF = 1,
but for larger NRF or smaller c, it is considerably slower.
• When NRF = L, TIT-SU is a decreasing function of c.
When NRF = 1, depending on the value of α, TIT-SU may
When NRF = 1, depending on the value of α, TIT-SU may
not be a monotonic function of c. This can be explained
by the two opposite effects of increasing c: the increase
in multi-path diversity and the decrease in average path
power. For NRF = L, there are enough beams to be
used to compensate for the second effect, thus larger c
tends to decrease TIT-SU via higher multi-path diversity.
For NRF = 1, the second effect is more dominant, thus
larger c tends to increase TIT-SU due to the path power
loss, especially for large threshold α.
C. Outage Performance Analysis
In this subsection, the outage probability of the proposed
IT-SU scheme is analyzed.
Theorem 2: For the hybrid SU massive antenna system with
Nt BS antennas, the L-path channel, NRF ≤ L RF chains, and
the target normalized received SNR α, the outage probability
of the IT-SU scheme is

out(IT-SU) = \binom{L}{NRF} [ Υ(NRF, αL/Nt) / (NRF − 1)!
  + Σ_{l=1}^{L−NRF} (−1)^{NRF+l−1} \binom{L−NRF}{l} (−NRF/l)^{NRF−1}
  × ( e^{−(1 + l/NRF) αL/Nt} / (−1 − l/NRF) − B(l) ) ],   (19)

where

B(l) ≜ Σ_{m=0}^{NRF−2} (1/m!) (−l/NRF)^m Υ(m + 1, αL/Nt) if NRF ≥ 2, and B(l) ≜ 0 otherwise.   (20)
Proof: See Appendix C.
Theorem 2 provides an analytical expression for the outage
probability of the proposed IT-SU scheme. The expression is
in closed form other than the special function Υ. Although
the effect of NRF on the outage performance of the IT-SU
scheme is implicit in (19), from its derivation we have
out(IT-SU) = Pr(x ≤ α/Nt), where x is the sum of the largest
NRF elements in {‖h̄i‖^2, i ∈ I}. Clearly, for any channel
realization, the value of x increases as NRF increases from
1 to L. Therefore, for any given finite α, larger NRF means
smaller outage probability.
1) Single RF Chain Analysis: To obtain further insights in
the effect of Nt , L and α on the outage performance, we
consider the special case with NRF = 1 in what follows.
Lemma 2: For the hybrid massive antenna system with Nt
BS antennas, the L-path channel, single RF chain, and the
target normalized received SNR α, the outage probability of
the IT-SU scheme is as follows:

out(IT-SU) = (1 − e^{−αL/Nt})^L.   (21)

Proof: See Appendix D.
It can be seen from (21) that for arbitrary values of α, P,
and arbitrary scaling of L with respect to Nt between constant
and linear, i.e., L = O(Nt^{rL}) for rL ∈ [0, 1], we have

lim_{Nt→∞} out(IT-SU) = 0.
This means that for the SU massive antenna system with single
RF chain, arbitrarily small outage probability can be obtained
for any desired data rate and any fixed power consumption P
as long as Nt is large enough. This shows the advantage of
having massive antenna array at the BS.
Specifically, for finite L, i.e., L = O(1), we have

out(IT-SU) = (αL/Nt)^L + O(Nt^{−(L+1)}),

meaning that the outage probability scales as O(Nt^{−L}) for
large Nt. For linearly increasing L, where L = cNt, we have
out(IT-SU) = (1 − e^{−αc})^{cNt},
meaning that the outage probability decreases exponentially
with respect to Nt . For i.i.d. channels where c = 1, the outage
probability of the IT-SU scheme reduces to (1 − e^{−α})^{Nt}. This
is the same as that of the antenna selection scheme in the
full-digital massive antenna systems with i.i.d. channels [24].
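The closed form (21) can be cross-checked by Monte Carlo. The Exp(mean 1/L) model for the non-zero beam gains below is our own assumption, matching the distribution implied by (21); the function names are illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)

def outage_mc(Nt, L, alpha, trials=20000):
    """Monte Carlo outage of IT-SU with N_RF = 1: an outage occurs iff even
    the strongest of the L non-zero beam gains stays below α/Nt."""
    gains = rng.exponential(1.0 / L, size=(trials, L))   # assumed ‖h̄_i‖² model
    return float(np.mean(gains.max(axis=1) <= alpha / Nt))

def outage_eq21(Nt, L, alpha):
    return (1.0 - np.exp(-alpha * L / Nt)) ** L          # closed form (21)
```

For example, with Nt = 16, L = 2, α = 8, both evaluate to roughly 0.4, and the Monte Carlo estimate converges to (21) as the number of trials grows.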
2) Multiple RF Chain Analysis: For the case of multiple
RF chains, where 1 < NRF ≤ L, since larger NRF results in
smaller outage probability, for both finite L and L = cNt,
it can be concluded that arbitrarily small outage probability
can also be achieved for an arbitrary data rate with any fixed
power consumption P when Nt is large enough.
3) Comparison with NIT Schemes: In Section III-B, the
proposed IT-SU scheme was compared with the NIT-SU
scheme with full training in terms of training length. Here,
we give the outage probability comparison. As utilized in
the proof of Theorem 2, for the IT-SU scheme, an outage
happens only when all beams have been trained and no beam
combination can satisfy the target SNR requirement. This is
the same as that of the NIT-SU scheme with full training, thus
the two schemes have the same outage performance.
Another possible non-interleaved scheme is to have partial
training with a fixed training length of Lt < Nt [21]. It can be
seen easily that as Lt decreases, the outage probability of the
partial non-interleaved scheme increases. Thus, the proposed
IT-SU scheme is superior in terms of outage probability
compared with the NIT-SU scheme with the same training
length. Numerical validation will be given in the simulation
section.
IV. INTERLEAVED TRAINING FOR MU TRANSMISSION
This section is on the more general and complicated MU
systems, where the joint beam-based interleaved training and
the corresponding MU transmission is proposed. Compared to
SU systems where the optimal transmission is the maximumratio combining of the best trained beams, the beam assignment problem is a challenging but crucial part of the MU transmission. In what follows, we first study the beam assignment
issue. Subsequently, the joint beam-based interleaved training
and MU transmission is proposed.
A. Feasible Beam Assignment and MU Beam Assignment
Methods
For the hybrid massive antenna BS to serve multiple users
with a fixed beam codebook, a typical idea is to assign a
beam to each user. But the beam assignment problem is far
from trivial and is a dominant factor of the performance. We
first introduce the definition of feasible beam assignment, then
propose MU beam assignment methods.
Definition 1: For the hybrid massive antenna downlink
serving U users with codebook-based beam transmission, a
beam assignment is an ordered U-tuple (n1, . . . , nU),
where ni is the index of the beam assigned for User i. A
feasible beam assignment is a beam assignment where the
resulting effective channel matrix as given in (7) has full rank.
In other words, a beam assignment is feasible if ZF baseband precoding can be conducted with no singularity; thus the received
SNR, denoted as SNRMU for the MU transmission, or λ
in (9) is non-zero. If an infeasible beam assignment is used
for the MU transmission, ZF baseband precoding cannot be
conducted. Even if other baseband precoding, e.g., regularized
ZF, is used, the received SINR will be very small due to
the high interference and outage occurs. Thus feasible beam
assignment is a necessary condition to avoid outage. Two cases
that can cause an infeasible beam assignment are 1) an assigned
beam in (n1, . . . , nU) is a zero-beam for its user, so that the effective
channel matrix has a row with all zeros, and 2) one beam is
assigned to more than one user, so that two identical columns
appear in the effective channel matrix. On the other hand,
depending on the effective channel values and interference
level, a feasible beam assignment may or may not be able
to avoid outage.
A straightforward and optimal beam assignment method is
the exhaustive search. Denote the set of known (e.g., already
trained) non-zero beam indices for User u as Bu. Define
B ≜ ∪_{u=1}^{U} Bu. By searching over all possible feasible beam
assignments over B and finding the one with the maximum λ,
the optimal beam assignment is obtained. The complexity of
the exhaustive search is, however, O(|B|^U), which is unaffordable for large |B| and/or U.
Thus for practical implementation, beam assignment methods with affordable complexity are needed. To serve this
purpose, we transform the SNR maximization problem for the
beam assignment to the problem of maximizing the minimum
effective channel gain among the users, i.e.,
arg max_{n1,··· ,nU ∈ B} min_u |h̄u,nu|.   (22)
Then by drawing lessons from the extended optimal relay
selection (ORS) method for MU relay networks in [30], the
following beam assignment algorithm is proposed. First, the
original ORS algorithm in [31] is used to maximize the
minimum effective channel gain. Suppose that the minimum
gain is with User i and Beam j. Then we delete User i and
Beam j and apply the original ORS scheme again to the
remaining users and beams. This procedure is repeated until
all users find their beams. It has been shown in [30], [31] that
this scheme not only achieves an optimal solution for (22),
but also achieves the unique optimal solution that maximizes
the uth minimum channel gain conditioned on the previous
1st to the (u − 1)th minimum channel gains for all u. Further,
the worst-case complexity of this scheme is O(U^2 |B|^2), much
less than that of the exhaustive search. This beam assignment
is referred to as the max-min assignment. With respect to
the outage probability performance, it is suboptimal. But the
method targets maximizing the diagonal elements of the
effective channel matrix, which in general is beneficial to ZF
transmission. Simulation results exhibited in Section V show
that its outage performance loss is small compared with the
exhaustive search especially for channels with small L.
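For small instances, the criterion (22) can be checked against a brute-force reference; note that this is not the low-complexity ORS-based max-min method of [30], [31], only an illustrative baseline with a hypothetical function name.

```python
import numpy as np
from itertools import permutations

def max_min_assignment(G):
    """Brute-force reference for criterion (22): assign one distinct beam per
    user to maximize the minimum effective-channel magnitude. G: U x Nb array
    of |h̄_{u,b}| over the known non-zero beams. Exponential in U, so suitable
    only for validating low-complexity methods on small instances."""
    U, Nb = G.shape
    best_val, best_perm = -1.0, None
    for perm in permutations(range(Nb), U):   # beam perm[u] serves user u
        m = min(G[u, perm[u]] for u in range(U))
        if m > best_val:
            best_val, best_perm = m, perm
    return list(best_perm), best_val
```

Comparing the max-min assignment produced by the ORS-based method against this reference on small (U, |B|) confirms optimality of the bottleneck value.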
B. Joint Beam-Based Interleaved Training and MU Transmission Design
Similar to the SU case, the main goal of the interleaved
training and joint MU data transmission scheme (referred to
as the IT-MU scheme) is to save training time while preserving
the outage probability performance. The fundamental idea is to
conduct the training of each beam sequentially and terminate
right after enough beams have been trained to avoid outage.
However, different from the SU case, a big challenge for the
MU case is that each user does not know the effective channels
of other users since user cooperation is not considered. Thus
the users are not able to decide whether to terminate the
training interval given an SNR threshold. Our solution is
to make the users feed back their acquired non-zero effective
channels during training and let the BS make the decision.
Other differences of the MU scheme from the SU one include
the initial training steps, the beam assignment problem, and
the termination condition for the training interval. These will
be studied in detail in the explanation of the scheme that
follows.
The detailed steps of the proposed IT-MU scheme are given
in Algorithm 2. At the beginning of this scheme, the first U
beams in the codebook are trained and every user estimates
the corresponding effective channels and constructs its set of
non-zero beam indices Bu . Then the non-zero beam values and
their indices are fed back to the BS (with this information, the
BS also knows Bu , ∀u). While for the SU case the beams are
trained one by one, U beams need to be trained initially for
the MU case since at least U beams are needed for a feasible
beam assignment.
After this initial training stage, if for any user, no non-zero
beam is found (in which case the user feeds back “0”) or
|B| < U where B is the union of non-zero beam indices of
all users, the training of the next beam starts. Otherwise, the
BS finds a beam assignment on B with either the exhaustive
search or the max-min method given in Section IV-A. If the
beam assignment is feasible and can avoid outage, training
terminates and data transmission starts with this beam assignment and the corresponding ZF baseband precoding as shown
in Section II-A2. Otherwise, the BS starts the training of the
next beam. When the new ith beam has been trained, each user
again estimates the corresponding effective channel. If it is a
zero-beam, the user feeds back “0”; otherwise, it feeds back
the effective channel value. If this ith beam is a zero-beam for
all users or any user still has no non-zero beam or the updated
ACCEPTED BY IEEE JSTSP. COPYRIGHT MAY BE TRANSFERRED WITHOUT NOTICE, AFTER WHICH THIS VERSION MAY NO LONGER BE ACCESSIBLE.
|B| is still less than U, the BS starts the training of the next
beam if an un-trained beam is available. Otherwise, the BS
finds a beam assignment on B with either the exhaustive search
or the max-min method. If the beam assignment is feasible and
can avoid outage, training terminates and transmission starts.
Otherwise, the BS starts the training of the next beam if an
un-trained beam is available. The procedure continues until a
beam assignment that can avoid outage is found or there is no
new beam for training.

Algorithm 2 The Joint Beam-Based Interleaved Training and MU Transmission (IT-MU) Scheme.
1: The BS trains the 1st to U-th beams. User u, ∀u, estimates the corresponding effective channels h̄_{u,1}, ..., h̄_{u,U} and constructs its set of non-zero beam indices B_u;
2: If |B_u| = 0, User u, ∀u, feeds back “0”. Otherwise, User u feeds back the non-zero effective channel values along with their beam indices;
3: if any user’s feedback is “0” or |B| < U then
4:    set os = 1 and goto Step 13;
5: else
6:    The BS finds a beam assignment on B;
7:    if the beam assignment is not feasible or the resulting received SNR is below the outage threshold then
8:       set os = 1 and goto Step 13;
9:    else
10:      set os = 0 and goto Step 27;
11:   end if
12: end if
13: for i = U + 1, ..., Nt do
14:   The BS trains the ith beam; User u, ∀u, estimates the corresponding effective channel h̄_{u,i};
15:   For all u, if ‖h̄_{u,i}‖ = 0, User u feeds back “0”; else User u feeds back the value h̄_{u,i} and lets B_u = B_u ∪ {i} and B = B ∪ {i};
16:   if all users’ feedbacks are “0” or |B_u| = 0 for any u or |B| < U then
17:      set os = 1 and continue;
18:   else
19:      The BS finds a beam assignment on B;
20:      if the beam assignment is not feasible or the resulting received SNR is below the outage threshold then
21:         set os = 1 and continue;
22:      else
23:         set os = 0 and goto Step 27;
24:      end if
25:   end if
26: end for
27: if os = 0 then
28:   The BS uses the found beam assignment to construct F_RF and the ZF F_BB for MU transmission;
29: end if
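The interleaved procedure just described can be sketched in a few lines of Python. This is a simplified illustration rather than the paper's implementation: the per-user received SNR under an assignment is approximated by the assigned beam's effective gain alone (the ZF cross-terms of the baseband precoder are ignored), and the function names (`feasible_assignment`, `it_mu_training`) are hypothetical.

```python
import itertools

def feasible_assignment(B_users, gains, threshold):
    """Exhaustive (max-min) search over assignments of distinct beams to
    users; returns the assignment maximizing the minimum approximate SNR
    if it clears the outage threshold, else None."""
    U = len(B_users)
    beams = sorted(set().union(*B_users))
    best, best_min = None, -1.0
    for combo in itertools.permutations(beams, U):
        # each user may only be served on one of its own non-zero beams
        if any(b not in B_users[u] for u, b in enumerate(combo)):
            continue
        m = min(gains[u][b] for u, b in enumerate(combo))
        if m > best_min:
            best_min, best = m, combo
    return best if (best is not None and best_min > threshold) else None

def it_mu_training(gains, U, Nt, threshold):
    """gains[u][i]: effective gain of beam i at user u (0 for a zero-beam).
    Returns (number of trained beams, assignment or None)."""
    B = [set() for _ in range(U)]
    for i in range(Nt):
        for u in range(U):              # users feed back non-zero beams
            if gains[u][i] > 0:
                B[u].add(i)
        trained = i + 1
        # need at least U trained beams, one non-zero beam per user, |B| >= U
        if trained < U or any(not Bu for Bu in B):
            continue
        if len(set().union(*B)) < U:
            continue
        a = feasible_assignment(B, gains, threshold)
        if a is not None:               # outage avoided: stop training
            return trained, a
    return Nt, None                     # all beams trained, outage unavoidable
```

For example, with two users whose only non-zero beams are beams 0 and 1 respectively, training terminates right after the initial U = 2 beams.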
C. Discussion on Average Training Length and Outage Performance
For the IT-MU scheme, the minimum possible training
length is U and the maximum possible training length is Nt .
Similar to the IT-SU scheme, it is reasonably expected that the
IT-MU scheme has a smaller average training length than the
NIT-MU scheme with full training. Meanwhile, since complete
effective CSI is available at the BS when necessary in the IT-MU
scheme, it achieves the same outage performance as the NIT-MU scheme with full training. Moreover,
the outage probability of the IT-MU scheme is smaller than
that of the NIT-MU scheme with partial training at the same
training length.
V. N UMERICAL R ESULTS AND D ISCUSSIONS
In this section, simulation results are shown to verify
the analytical results in this paper. Meanwhile, properties
of the proposed interleaved training and joint transmission
schemes are demonstrated. We also make comparison with
non-interleaved schemes.
In Fig. 2, the average training length of the IT-SU scheme in
Algorithm 1 is shown for NRF = 1 and α = 4, 8.³ First, it can
be seen that the derived average training length in Theorem
1 well matches the simulation. Second, for L = 1, 3, 6 and
α = 4, the average training length increases linearly with
Nt with slope about 0.50, 0.26 and 0.13, respectively. These
match the theoretical results in Lemma 1 where the slope is
1/(L + 1) = 0.5, 0.25, 0.14, respectively. Further, the dashed
line without marker is the line of TIT-SU = Nt /4, which is the
asymptotic average training length for L = 3 in (18). Third,
for L = cNt where c = 0.1, 0.2, the average training lengths
approach constants as Nt increases. While for c = 1, the
asymptotic constant upper bound is less explicit since Nt is
not large enough to reveal the asymptotic bound eα . When
c = 1 and α = 8, the average training length is almost the
same as the NIT-SU scheme with full training (dotted line) due
to the high SNR requirement and limited simulation range of
Nt . Finally, the average training length increases with α (much
less significant for finite L and small c) which is in accordance
with the comments on Lemma 1 and Fig. 1.
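The 1/(L + 1) slope can be checked with a small Monte Carlo experiment. The sketch below uses a simplified channel model assumed for illustration (the function names are hypothetical): the L non-zero beams are placed uniformly at random among the Nt codebook beams, each non-zero effective gain is drawn as Exp(L), and NRF = 1, so training stops at the first beam whose gain exceeds α/Nt.

```python
import random

def it_su_training_length(Nt, L, alpha, rng):
    # place the L non-zero beams uniformly at random among the Nt beams
    nonzero = set(rng.sample(range(Nt), L))
    for i in range(Nt):
        # with NRF = 1, training stops at the first non-zero beam whose
        # effective gain ~ Exp(L) clears the outage threshold alpha/Nt
        if i in nonzero and rng.expovariate(L) > alpha / Nt:
            return i + 1
    return Nt  # all beams trained

def average_training_length(Nt, L, alpha, trials=5000, seed=0):
    rng = random.Random(seed)
    return sum(it_su_training_length(Nt, L, alpha, rng)
               for _ in range(trials)) / trials
```

For Nt = 200, L = 3 and α = 4 the estimate comes out close to Nt/(L + 1) = 50, in line with the slopes read off Fig. 2.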
In Fig. 3, the average training length of the IT-SU scheme is
studied for NRF > 1 and α = 4, 8. Again, the results in Theorem 1 match the simulation tightly. For L = 3 and
NRF = 3, the average training length increases linearly with
Nt with ratio about 1/(L+1) = 0.25 for α = 4, 8. Meanwhile,
the average training length has negligible reduction compared
with that of NRF = 1 in Fig. 2, e.g., 34.5 for NRF = 1 and
33.8 for NRF = 3 with Nt = 110 and α = 8. These validate
Lemma 1. Further for L = cNt where c = 0.1, 1, the average
training lengths are upper bounded by different constants as
Nt grows. The effect of c, α and NRF on the upper bound
has been studied in Section III-B2 and is not repeated here.
Finally, the result for L = NRF = Nt , i.e., full-digital
massive MIMO with i.i.d. channels matches the theoretical
result 1 + α. Both Figs. 2 and 3 show that compared with
the NIT-SU scheme with full training (represented by dotted
lines), the IT-SU scheme achieves huge reduction in training
length with the same outage performance.
³ For P = 0 dB, α = 4, 8 mean that Rth = 2.3219, 3.1699 bps/Hz,
respectively, which represents the low-rate scenario. For P = 10 dB, α = 4, 8
mean that Rth = 5.3576, 6.3399 bps/Hz, respectively, which represents the
intermediate- to high-rate scenario.
Fig. 2. Average training length of the IT-SU scheme with NRF = 1 for α = 4 (left) and α = 8 (right). The dotted line shows the training length of the
NIT-SU scheme with full training.
Fig. 3. Average training length of the IT-SU scheme with NRF > 1 for α = 4 (left) and α = 8 (right). The dotted line shows the training length of the
NIT-SU scheme with full training.
In Fig. 4, comparison on the average training length is
shown for the proposed scheme, the non-interleaved scheme
with full training, and the hierarchical scheme with M = 3
where NRF = 1 and α = 8. Note that for the considered
setting, M = 3 results in the minimum training length and is
thus the most favourable for the hierarchical scheme. The figure
shows that 1) when L = 1, the hierarchical scheme has the
lowest average training length, 2) for L = 2, the IT-SU scheme
has lower training length than the hierarchical scheme when
Nt < 120 and larger training length when Nt ≥ 120, 3) for
L = 3, the IT-SU scheme has lower training length than the
hierarchical scheme for all simulated values of Nt in the range
[30, 180], 4) for L = 0.1Nt , the training length of the proposed
IT-SU scheme is lower and bounded by a constant while the
hierarchical scheme experiences fast increase in the training
length with larger Nt .
In Fig. 5, the outage performance of the IT-SU scheme is
compared with that of the NIT-SU scheme with full training
and partial training at the same training length (by setting Lt
to be the same as the average training length of the IT-SU
scheme). The cases of L = 1, 3, 0.1Nt, NRF = 1, 3, 0.1Nt
and α = 4 are studied. It can be seen that 1) the theoretical
outage probabilities of the IT-SU scheme in Theorem 2 match
the simulated values well; 2) the outage probability of the IT-SU scheme is the same as that of the NIT-SU scheme with
full training, and significantly lower than that of the NIT-SU
scheme with partial training; 3) the outage probability of the
IT-SU scheme diminishes fast as Nt grows; 4) by increasing
NRF from 1 to 3 for L = 3 or from 3 to 0.1Nt for L = 0.1Nt ,
the outage probability of the IT-SU scheme decreases. These
validate the discussions in Section III-C.
Fig. 4. Average training length comparison of the IT-SU scheme, the NIT-SU
scheme with full training, and the hierarchical scheme. NRF = 1, α = 8,
and M = 3.

Fig. 5. Comparison of outage probability between the IT-SU scheme and the
NIT-SU scheme.

In Fig. 6 we show the ergodic rate of the proposed IT-SU scheme with a slight modification⁴ and of the NIT-SU scheme
for α = 4, 8 and P = 10 dB. It can be seen that the IT-SU scheme has a lower ergodic rate than the NIT-SU
scheme with full training. Compared with the NIT-SU
scheme with the same average training length (partial training),
the IT-SU scheme achieves a higher ergodic rate. This is a
very positive result for the proposed IT-SU scheme, which is designed
with the outage performance goal. It shows that interleaved
and adaptive training design in general benefits the system
performance compared to non-interleaved training.

⁴ For the rate comparison, we change the condition in Line 5 of Algorithm
1 to “‖h̄_i‖ = 0 or Σ_{l∈S} ‖h̄_l‖² ≤ α/Nt and i < Nt”. The only difference
is for the case that all beams have been trained and an outage is still
unavoidable. Previously, no transmission was conducted since the outage is not
avoidable, while with the change, the user feeds back the channel coefficients
of the min(NRF, L) known non-zero beams with the largest norms and
the BS uses this information for hybrid beamforming. This change has no
effect on the outage performance of the proposed scheme but is sensible
when considering the rate performance.

Figs. 7 and 8 show the average training length and outage
performance of the IT-MU scheme in Algorithm 2, respectively,
where NRF = U = 3 and ᾱ = 6. Both the exhaustive
search and the max-min method are considered for the beam
assignment in the IT-MU scheme. It can be seen that for
L = 1, 3, the average training lengths of the IT-MU scheme
increase linearly with Nt, where the slopes are approximately 0.75 and 0.38, respectively. Compared with the NIT-MU scheme with full training, where the training length equals
Nt, the reduction in training length of the proposed scheme is
significant, and larger L results in bigger reduction. Second,
when L = 0.1Nt, the training length of the IT-MU scheme
approaches a constant, which equals 23.9, as Nt grows. The
IT-MU scheme has the same outage performance as the
NIT-MU scheme with full training, which is much better than
that of the NIT-MU scheme with the same training length
(partial training). Lastly, by replacing the exhaustive search
with the max-min method for beam assignment, the outage
performance of the IT-MU scheme has some small degradation
for L = 3, and the degradation diminishes for L = 1. When
L = 0.1Nt, the outage performance degradation due to the
sub-optimal beam assignment is more visible. On the other
hand, the increase in average training length due to the use of
this sub-optimal assignment method is negligible. Considering
the lower complexity of the max-min method, its application
in the IT-MU scheme is more desirable.

VI. CONCLUSIONS
For the hybrid massive antenna systems, we studied the
beam-based training and joint beamforming designs for SU
and MU transmissions with outage probability as the performance measure. For SU systems, via concatenating the
feedback with the training, an interleaved training scheme
was proposed whose training length is adaptive to channel
realizations. Then, exact analytical expressions were derived
for the average training length and outage probability of the
proposed scheme. For MU systems, we proposed a joint
interleaved training and transmission design, which contains
two new techniques compared to the single-user case: having
the BS control the training process due to the limited local CSI
at the users and feasible beam assignment methods. Analytical
and simulated results show that the proposed training and joint
transmission designs achieve the same performance as the traditional full-training scheme while saving the training overhead
significantly. Meanwhile, useful insights were obtained on
the training length and outage probability of typical network
scenarios and on the effect of important system parameters,
e.g., the BS antenna number, the RF chain number, the channel
path number or angle spread, and the rate requirement.
Many future directions can be envisioned on interleaved
training designs for hybrid massive MIMO systems. One possible extension is to consider systems with channel estimation
error and analyze how it affects the system performance. Another
practical issue is the feedback overhead. For both single-user
and multi-user systems with limited feedback capacity,
an important topic is the joint design of the interleaved training,
feedback, and transmission scheme. Interleaved training designs
and analysis for multi-cell and cooperative systems are
also meaningful future directions.

Fig. 6. Ergodic rate comparison of the IT-SU scheme and the NIT-SU scheme with full/partial training for α = 4 (left) and α = 8 (right) with P = 10 dB.

Fig. 7. Average training length of the IT-MU scheme for NRF = U = 3
and ᾱ = 6.

Fig. 8. Outage performance of the IT-MU scheme for NRF = U = 3 and
ᾱ = 6.
APPENDIX A
THE PROOF OF THEOREM 1

In calculating the average training length, the probability
that the training length is i (denoted as P_i) for i = 1, ..., Nt
is needed. Since Σ_{i=1}^{Nt} P_i = 1, it is sufficient to calculate
P_i, i = 1, ..., Nt − 1 only.

The training length is 1 when the 1st beam is a non-zero
beam and its effective channel gain is strong enough to avoid
outage. The probability that the 1st beam is non-zero is L/Nt and

Pr(‖h̄_1‖² > α/Nt) = ∫_{α/Nt}^{∞} L e^{−Lx} dx = e^{−αL/Nt}.

Thus (14) is obtained for P_1.
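The expression for P_1 — the first beam is non-zero with probability L/Nt, and its Exp(L)-distributed gain exceeds α/Nt with probability e^{−αL/Nt} — can be sanity-checked by a quick simulation. The function name below is hypothetical.

```python
import math
import random

def estimate_P1(Nt, L, alpha, trials=200000, seed=0):
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        if rng.random() < L / Nt:               # 1st beam is non-zero
            if rng.expovariate(L) > alpha / Nt:  # gain clears the threshold
                hits += 1
    return hits / trials

# closed form to compare against: P1 = (L/Nt) * exp(-alpha*L/Nt)
```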
The training length is i for i ∈ [2, Nt − 1] when 1) the ith
beam is a non-zero beam, 2) an outage cannot be avoided by
the previously trained beams, and 3) an outage can be avoided
with the newly discovered ith beam. To help the presentation,
denote this event as Event X. It can be partitioned into
sub-events with respect to different j for max{0, L − Nt −
1 + i} ≤ j ≤ min{L − 1, i − 1}, where for the jth sub-event,
there are j non-zero beams within the first i − 1 beams and
L − 1 − j non-zero beams within the beams from i + 1 to Nt.
The probability for the beam distribution of sub-event j thus
equals ξ(i, j) as defined in (12). Notice that b_i^(1) and b_i^(2)
are defined as the bounds of j. Further, given the beam
distribution of sub-event j, the probability of Event X can
be calculated by considering three cases as follows. To help
the proof, denote the indices of the already trained j non-zero
beams as n_1, ..., n_j and let z ≜ ‖h̄_i‖².
Case 1 is when j = 0. Event X happens when ‖h̄_i‖² > α/Nt, whose probability is e^{−αL/Nt}.

Case 2 is when 0 < j ≤ NRF − 1. Event X happens when x ≜ Σ_{l=1}^{j} ‖h̄_{n_l}‖² ≤ α/Nt and z > α/Nt − x. Since x ∼ (1/(2L))χ²(2j) and z ∼ (1/(2L))χ²(2),

Pr[X] = Pr(x ≤ α/Nt, z > α/Nt − x)
= ∫_0^{α/Nt} [L^j x^{j−1} e^{−Lx}/(j − 1)!] ( ∫_{α/Nt − x}^{∞} L e^{−Lz} dz ) dx
= [(Lα/Nt)^j / j!] e^{−Lα/Nt}.

Case 3 is when j > NRF − 1, where two sub-cases are considered. Case 3.1: If NRF = 1, Event X happens when x_l ≜ ‖h̄_{n_l}‖² ≤ α/Nt for all l = 1, ..., j and z > α/Nt. Since the x_l's and z are i.i.d. following (1/(2L))χ²(2), we have Pr[X] = e^{−αL/Nt} (1 − e^{−αL/Nt})^j. Case 3.2: If NRF > 1, order the already trained j non-zero beams such that ‖h̄_{s_1}‖² ≥ ··· ≥ ‖h̄_{s_j}‖², where s_1, ..., s_j ∈ {n_1, ..., n_j}. Event X happens when x′ ≜ Σ_{l=1}^{NRF−1} ‖h̄_{s_l}‖² ≤ α/Nt, y ≜ ‖h̄_{s_NRF}‖² ≤ α/Nt − x′, and z > α/Nt − x′. Notice that x′ and y are correlated but both are independent of z. Via utilizing the result on the joint distributions of partial sums of order statistics [32, Eq. (3.31)], the joint probability density function (PDF) of x′ and y can be given as

p(x′, y) = Σ_{l=0}^{j−NRF} β_{j,l} [x′ − (NRF − 1)y]^{NRF−2} e^{−L[x′ + (l+1)y]},  y ≥ 0, x′ ≥ (NRF − 1)y.

Consequently, for Case 3.2,

Pr[X] = Pr(x′ ≤ α/Nt, y ≤ α/Nt − x′, z > α/Nt − x′)
= ∫_0^{α/Nt} ∫_0^{min(x′/(NRF−1), α/Nt − x′)} ∫_{α/Nt − x′}^{∞} p(x′, y) L e^{−Lz} dz dy dx′
= P_j^(1) + P_j^(2),

where P_j^(1) is the integral for x′ ∈ [0, (α/Nt)(NRF − 1)/NRF], where min(x′/(NRF − 1), α/Nt − x′) = x′/(NRF − 1), and P_j^(2) is that for x′ ∈ [(α/Nt)(NRF − 1)/NRF, α/Nt], where min(x′/(NRF − 1), α/Nt − x′) = α/Nt − x′. Via utilizing (a + b)^n = Σ_{m=0}^{n} C(n, m) a^m b^{n−m}, the definitions of the lower and upper incomplete gamma functions, and the indefinite integral ∫ x^{b−1} Υ(s, x) dx = (1/b)[x^b Υ(s, x) + Γ(s + b, x)], P_j^(1) and P_j^(2) can be derived as (16) and (17), respectively.

Via the law of total probability and after some simple reorganizations based on the previous derivations, P_i, i ∈ [2, Nt − 1] in (15) can be obtained, which completes the proof.

APPENDIX B
THE PROOF OF LEMMA 1

When Nt ≫ 1 and L = O(1), for the special cases of NRF = 1 or L, the P_i values in (14) and (15) can be simplified, via long but straightforward calculations, to the following:

P_i = O(Nt^{−2}) for i > Nt + 1 − L, and
P_i = [C(Nt − i, L − 1)/C(Nt, L)] [1 + O(Nt^{−1}) + O(Nt^{−2})] for i ≤ Nt + 1 − L,

for i = 1, 2, ..., Nt − 1. Since C(Nt − i, L − 1)/C(Nt, L) has the same order as or a lower order than O(Nt^{−1}), ∀i, and Σ_{i=1}^{Nt−1} (Nt − i) O(Nt^{−2}) = O(1), from (13) we have

T_IT-SU = Nt − Σ_{i=1}^{Nt+1−L} x_i + O(1),   (23)

where x_i ≜ (Nt − i) C(Nt − i, L − 1)/C(Nt, L). We rewrite x_i as

x_i = L(Nt − i)(Nt − i)(Nt − i − 1) × ... × (Nt − i − L + 2) / [Nt × ... × (Nt − L + 1)]
= L Σ_{k=0}^{L} Σ_{n=0}^{L−k} C_{k,n}^(0) Nt^k i^n / [Nt × ... × (Nt − L + 1)],

where C_{k,n}^(0) is the polynomial coefficient of the term Nt^k i^n. Define Δ_i^(m) ≜ Δ_{i+1}^(m−1) − Δ_i^(m−1) for m = 1, ..., Nt − L, where Δ_i^(0) = x_i. Using the binomial formula, we have Δ_i^(m) = N(Δ_i^(m))/[Nt × ... × (Nt − L + 1)], where

N(Δ_i^(m)) ≜ L Σ_{k=0}^{L} Σ_{n=0}^{L−k} C_{k,n}^(0) Nt^k Σ_{i_1=1}^{n} C(n, i_1) Σ_{i_2=1}^{n−i_1} C(n − i_1, i_2) ··· Σ_{i_m=1}^{n−Σ_{j=1}^{m−1} i_j} C(n − Σ_{j=1}^{m−1} i_j, i_m) i^{n−Σ_{j=1}^{m} i_j}.

Since n ≤ L − k and i_j ≥ 1, j = 1, ..., m, we have C_{k,n}^(0) = 0 for k > L − m, i.e., n < m. Thus, the highest power of Nt in N(Δ_i^(m)) is L − m and its scalar coefficient is L C_{L−m,m}^(0) m!. This term corresponds to i_1 = ... = i_m = 1, which guarantees C_{L−m,m}^(0) ≠ 0. Further, Δ_i^(L−1) is an arithmetic progression. Then we have

Σ_{i=1}^{Nt−L+1} x_i = Σ_{i=1}^{Nt−L+1} x_1 + Σ_{i=1}^{Nt−L+1} Σ_{j_1=1}^{i−1} Δ_1^(1) + ... + Σ_{i=1}^{Nt−L+1} Σ_{j_1=1}^{i−1} ··· Σ_{j_{L−1}=1}^{j_{L−2}−1} (Δ_1^(L−1) + Σ_{j_L=1}^{j_{L−1}−1} Δ_1^(L)).

From Faulhaber's formula,

Σ_{k=1}^{n} k^p = [1/(p + 1)] Σ_{j=0}^{p} C(p + 1, j) B_j n^{p+1−j},

where B_j is the Bernoulli number, we have

Σ_{i=1}^{Nt−L+1} Σ_{j_1=1}^{i−1} ··· Σ_{j_m=1}^{j_{m−1}−1} Δ_1^(m) = [Δ_1^(m)/(m + 1)!] [Nt^{m+1} + O(Nt^m)]

for m ∈ [1, L] with j_0 = i. Since the denominator of x_1 is Nt^L + O(Nt^{L−1}) and the numerator of x_1 is L Nt^L + O(Nt^{L−1}), we have (Nt − L + 1) x_1 = L Nt + O(1). Consequently,

Σ_{i=1}^{Nt−L+1} x_i = L Nt + O(1) + Σ_{m=1}^{L} [Δ_1^(m)/(m + 1)!] [Nt^{m+1} + O(Nt^m)]
= L Nt + O(1) + Σ_{m=1}^{L} [L C_{L−m,m}^(0) m! Nt^{L−m} + O(Nt^{L−m−1})] [Nt^{m+1} + O(Nt^m)] / {[Nt^L + O(Nt^{L−1})] (m + 1)!}
= L Nt + L Nt Σ_{m=1}^{L} C_{L−m,m}^(0)/(m + 1) + O(1)
= L Nt + L Nt Σ_{m=1}^{L} C(L, m) (−1)^m/(m + 1) + O(1)
(a)= [L/(L + 1)] Nt + O(1),

where (a) follows from Σ_{m=1}^{L} C(L, m) (−1)^m/(m + 1) = −L/(L + 1).
From this result and (23), (18) can be easily obtained.
APPENDIX C
THE PROOF OF THEOREM 2

For a given channel realization with channel path indices
I = {I_1, ..., I_L}, with the IT-SU scheme, an outage happens
only when all Nt beams have been trained and the strongest
NRF beams among them cannot avoid an outage. Let S_Nt
be the set containing the indices of the NRF beams with
the strongest effective channel gains. From the results on the
partial sum of order statistics [32, Eq. 3.19], the PDF of
x ≜ Σ_{l∈S_Nt} ‖h̄_l‖² is

p(x) = [L!/((L − NRF)! NRF!)] e^{−Lx} { L^{NRF} x^{NRF−1}/(NRF − 1)!
+ L Σ_{l=1}^{L−NRF} (−1)^{NRF+l−1} [(L − NRF)!/((L − NRF − l)! l!)] (NRF/l)^{NRF−1} [e^{−lxL/NRF} − A(l, x)] },  x ≥ 0,   (24)

where A(l, x) ≜ Σ_{m=0}^{NRF−2} (1/m!) (−lxL/NRF)^m if NRF ≥ 2, and A(l, x) ≜ 0 otherwise. Thus

out(IT-SU) = Pr(x ≤ α/Nt),

which leads to (19) by using (24).
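As a sanity check for the NRF = 1 special case, where x is simply the largest of the L i.i.d. Exp(L) gains, the outage probability Pr(x ≤ α/Nt) can be estimated by simulation and compared with the closed form (1 − e^{−Lα/Nt})^L derived in Appendix D. The function name below is hypothetical.

```python
import math
import random

def outage_mc(Nt, L, alpha, trials=100000, seed=1):
    rng = random.Random(seed)
    th = alpha / Nt
    count = 0
    for _ in range(trials):
        # with NRF = 1 the scheme keeps the strongest of the L non-zero beams
        if max(rng.expovariate(L) for _ in range(L)) <= th:
            count += 1
    return count / trials

# closed form to compare against: (1 - exp(-L*alpha/Nt))**L
```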
APPENDIX D
THE PROOF OF LEMMA 2

Since Υ(1, Lα/Nt) = 1 − e^{−Lα/Nt} and NRF = 1, from (19) we have

out(IT-SU) = L Σ_{l=0}^{L−1} (−1)^l [(L − 1)!/((L − 1 − l)! l!)] [e^{−(1+l)Lα/Nt} − 1]/(−1 − l)
= Σ_{l=0}^{L−1} (−1)^{l+1} [L!/((L − (l + 1))! (l + 1)!)] [e^{−(1+l)Lα/Nt} − 1]
(a)= Σ_{t=0}^{L} (−1)^t [L!/((L − t)! t!)] [e^{−tLα/Nt} − 1]
(b)= (1 − e^{−Lα/Nt})^L,

where (a) and (b) follow from the variable substitution t = l + 1 (the t = 0 term equals zero) and the binomial formula (x + y)^n = Σ_{l=0}^{n} C(n, l) x^{n−l} y^l, respectively.
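The final binomial-sum manipulation can be verified numerically: extending the sum to t = 0 costs nothing (that term vanishes), and the alternating sum collapses to (1 − e^{−a})^L with a = Lα/Nt. A quick check, with a hypothetical function name:

```python
import math

def alternating_sum(L, a):
    # sum_{t=0}^{L} (-1)^t C(L,t) (e^{-t a} - 1); the t = 0 term is zero
    return sum((-1) ** t * math.comb(L, t) * (math.exp(-t * a) - 1.0)
               for t in range(L + 1))

# by the binomial theorem this equals (1 - e^{-a})**L
```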
R EFERENCES
[1] C. Zhang, Y. Jing, Y. Huang and L. Yang, “Performance of interleaved
training for single-user hybrid massive antenna downlink,” accepted by
2018 IEEE Int. Conf. Acoust., Speech and Signal Process. (ICASSP),
Calgary, Alberta, Canada, Apr. 2018.
[2] J. Hoydis, S. ten Brink and M. Debbah, “Massive MIMO in the UL/DL
of cellular networks: how many antennas do we need?,” IEEE J. Sel.
Areas Commun., vol. 31, pp. 160-171, Feb. 2013.
[3] C. Zhang, Y. Jing, Y. Huang and L. Yang, “Performance scaling law for
multicell multiuser massive MIMO,” IEEE Trans. Veh. Technol., vol. 66,
pp. 9890-9903, Nov. 2017.
[4] C. Masouros, J. Chen, K. Tong, M. Sellathurai and T. Ratnarajah,
“Towards massive-MIMO transmitters: On the effects of deploying
increasing antennas in fixed physical space,” 2013 Future Network &
Mobile Summit, Lisboa, 2013, pp. 1-10.
[5] T. E. Bogale, L. B. Le and X. Wang, “Hybrid analog-digital channel
estimation and beamforming: training-throughput tradeoff,” IEEE Trans.
Commun., vol. 63, pp. 5235-5249, Dec. 2015.
[6] X. Gao, L. Dai, S. Han, C. L. I and X. Wang, “Reliable beamspace
channel estimation for millimeter-wave massive MIMO systems with
lens antenna array,” IEEE Trans. Wireless Commun., vol. 16, pp. 6010-6021, Sept. 2017.
[7] M. Xiao, S. Mumtaz, Y. Huang, et al. “Millimeter wave communications
for future mobile networks,” IEEE J. Sel. Areas Commun., vol. 35, pp.
1909-1935, Sept. 2017.
[8] X. Gao, L. Dai, C. Yuen and Z. Wang, “Turbo-like beamforming based
on tabu search algorithm for millimeter-wave massive MIMO systems,”
IEEE Trans. Veh. Technol., vol. 65, pp. 5731-5737, July 2016.
[9] F. Sohrabi and W. Yu, “Hybrid digital and analog beamforming design
for large-scale antenna arrays,” IEEE J. Sel. Topics Signal Process., vol.
10, pp. 501-513, Apr. 2016.
[10] A. Alkhateeb, O. El Ayach, G. Leus and R. W. Heath Jr., “Channel
estimation and hybrid precoding for millimeter wave cellular systems,”
IEEE J. Sel. Topics Signal Process., vol. 8, pp. 831-846, Oct. 2014.
[11] C. Zhang, Y. Huang, Y. Jing, S. Jin and L. Yang, “Sum-rate analysis for
massive MIMO downlink with joint statistical beamforming and user
scheduling,” IEEE Trans. Wireless Commun., vol. 16, pp. 2181-2194,
Apr. 2017.
[12] M. Biguesh and A. B. Gershman, “Training-based MIMO channel
estimation: A study of estimator tradeoffs and optimal training signals,”
IEEE Trans. Signal Process., vol. 54, pp. 884-893, Mar. 2006.
[13] A. Adhikary, J. Nam, J. Y. Ahn and G. Caire, “Joint spatial division and
multiplexing—the large-scale array regime,” IEEE Trans. Inf. Theory,
vol. 59, pp. 6441-6463, Oct. 2013.
[14] X. Rao and V. K. N. Lau, “Distributed compressive CSIT estimation
and feedback for FDD multi-user massive MIMO systems,” IEEE Trans.
Signal Process., vol. 62, pp. 3261-3271, June 2014.
[15] R. W. Heath Jr., N. Gonzlez-Prelcic, S. Rangan, W. Roh and A. M.
Sayeed, “An overview of signal processing techniques for millimeter
wave MIMO systems,” IEEE J. Sel. Topics Signal Process., vol. 10, pp.
436-453, Apr. 2016.
[16] J. Zhang, Y. Huang, Q. Shi, J. Wang, and L. Yang, “Codebook design
for beam alignment in millimeter wave communication systems,” IEEE
Trans. Commun., vol. 65, pp. 4980-4995, 2017.
[17] S. Hur, T. Kim, D. J. Love, J. V. Krogmeier, T. A. Thomas, and A.
Ghosh, “Millimeter wave beamforming for wireless backhaul and access
in small cell networks,” IEEE Trans. Commun., vol. 61, pp. 4391-4403,
Oct. 2013.
[18] J. Wang, Z. Lan, C. Pyo, et al. “Beam codebook based beamforming
protocol for multi-Gbps millimeter-wave WPAN systems,” IEEE J. Sel.
Areas Commun., vol. 27, pp. 1390-1399, Oct. 2009.
[19] C. Liu, M. Li, S. V. Hanly, I. B. Collings and P. Whiting, “Millimeter
wave beam alignment: large deviations analysis and design insights,”
IEEE J. Sel. Areas Commun., vol. 35, pp. 1619-1631, July 2017.
[20] A. Alkhateeb, G. Leus, and R. W. Heath, “Compressed sensing based
multi-user millimeter wave systems: How many measurements are
needed?” IEEE Int. Conf. Acoust. Speech and Signal Process. (ICASSP),
South Brisbane, QLD, 2015, pp. 2909-2913.
[21] G. Lee, J. So and Y. Sung, “Impact of training on mmWave multiuser MIMO downlink,” IEEE Global Conf. Signal and Inf. Process.
(GlobalSIP), Washington, DC, 2016, pp. 753-757.
[22] P. V. Amadori and C. Masouros, “Low RF-complexity millimeter-wave
beamspace-MIMO systems by beam selection,” IEEE Trans. Commun.,
vol. 63, pp. 2212-2223, June 2015.
[23] H. Shokri-Ghadikolaei, L. Gkatzikis and C. Fischione, “Beam-searching
and transmission scheduling in millimeter wave communications,” IEEE
Int. Conf. Commun. (ICC), London, 2015, pp. 1292-1297.
[24] E. Koyuncu and H. Jafarkhani, “Interleaving training and limited feedback for point-to-point massive multiple-antenna systems,” IEEE Int.
Symp. Inf. Theory (ISIT), Hong Kong, 2015, pp. 1242-1246.
[25] J. Brady, N. Behdad, and A. M. Sayeed, “Beamspace MIMO for
millimeter-wave communications: System architecture, modeling, analysis and measurements,” IEEE Trans. Antennas Propag., vol. 61, pp.
3814-3827, July 2013.
[26] D. Tse and P. Viswanath, Fundamentals of wireless communication.
Cambridge University Press, 2007.
[27] A. Alkhateeb, G. Leus and R. W. Heath, “Limited feedback hybrid
precoding for multi-user millimeter wave systems,” IEEE Trans. Wireless
Commun., vol. 14, pp. 6481-6494, Nov. 2015.
[28] S. He, J. Wang, Y. Huang, B. Ottersten and W. Hong, “Codebook-based
hybrid precoding for millimeter wave multiuser systems,” IEEE Trans.
Signal Process., vol. 65, pp. 5289-5304, Oct. 2017.
[29] C. Feng, Y. Jing and S. Jin, “Interference and outage probability analysis
for massive MIMO downlink with MF precoding,” IEEE Signal Process.
Lett., vol. 23, pp. 366-370, Mar. 2016.
[30] S. Atapattu, Y. Jing, H. Jiang and C. Tellambura, “Relay selection and
performance analysis in multiple-user networks,” IEEE J. Sel. Areas
Commun., vol. 31, pp. 1517-1529, Aug. 2013.
[31] S. Sharma, Y. Shi, Y. T. Hou and S. Kompella, “An optimal algorithm
for relay node assignment in cooperative Ad Hoc networks,” IEEE/ACM
Trans. Netw., vol. 19, pp. 879-892, June 2011.
[32] H. C. Yang and M. S. Alouini, Order Statistics in Wireless Communications:
Diversity, Adaptation, and Scheduling in MIMO and OFDM Systems.
Cambridge University Press, 2011.
Yongming Huang received the B.S. and M.S. degrees from Nanjing University, Nanjing, China, in
2000 and 2003, respectively, and the Ph.D. degree
in electrical engineering from Southeast University,
Nanjing, China, in 2007. Since 2007, he has been
a Faculty Member with the School of Information Science and Engineering, Southeast University,
where he is currently a Full Professor. From 2008
to 2009, he visited the Signal Processing Laboratory,
School of Electrical Engineering, Royal Institute of
Technology, Stockholm, Sweden. He has authored
over 200 peer-reviewed papers, holds over 50 invention patents, and has submitted
over 10 technical contributions to the IEEE standards. His current research
interests include MIMO wireless communications, cooperative wireless communications, and millimeter wave wireless communications. He has served
as an Associate Editor for the IEEE TRANSACTIONS ON SIGNAL PROCESSING, IEEE WIRELESS COMMUNICATIONS LETTERS, EURASIP
Journal on Advances in Signal Processing, and EURASIP Journal on Wireless
Communications and Networking.
Cheng Zhang received the B.Eng. degree from
Sichuan University, Chengdu, China in June 2009,
and the M.Sc. degree from Institute No.206 of China
Arms Industry Group Corporation, Xian, China in
May 2012. He worked as a Radar signal processing
engineer at Institute No.206 of China Arms Industry
Group Corporation, Xian, China from June 2012 to
Aug. 2013. Since Mar. 2014, he has been working toward the Ph.D. degree at Southeast University,
Nanjing, China. From Nov. 2016 to Nov. 2017, he
was a visiting student with University of Alberta,
Edmonton, Canada. His current research interests include space-time signal
processing and application of learning algorithm in channel estimation and
transmission design for millimeter-wave massive MIMO communication systems.
Yindi Jing received the B.Eng. and M.Eng. degrees
from the University of Science and Technology of
China, in 1996 and 1999, respectively. She received
the M.Sc. degree and the Ph.D. in electrical engineering from California Institute of Technology,
Pasadena, CA, in 2000 and 2004, respectively. From
Oct. 2004 to Aug. 2005, she was a postdoctoral
scholar at the Department of Electrical Engineering
of California Institute of Technology. From Feb.
2006 to Jun. 2008, she was a postdoctoral scholar
at the Department of Electrical Engineering and
Computer Science of the University of California, Irvine. In 2008, she joined
the Electrical and Computer Engineering Department of the University of
Alberta, where she is currently an associate professor. She was an Associate
Editor for the IEEE Transactions on Wireless Communications 2011-2016
and currently serves as a Senior Area Editor for IEEE Signal Processing
Letters (since Oct. 2017) and a member of the IEEE Signal Processing Society
Signal Processing for Communications and Networking (SPCOM) Technical
Committee. Her research interests are in massive MIMO systems, cooperative
relay networks, training and channel estimation, robust detection, and fault
detection in power systems.
Luxi Yang received the M.S. and Ph.D. degree
in electrical engineering from the Southeast University, Nanjing, China, in 1990 and 1993, respectively. Since 1993, he has been with the School
of Information Science and Engineering, Southeast
University, where he is currently a full professor of
information systems and communications, and the
Director of Digital Signal Processing Division. His
current research interests include signal processing
for wireless communications, MIMO communications, cooperative relaying systems, and statistical
signal processing. He has authored or co-authored two published books and
more than 150 journal papers, and holds 30 patents. Prof. Yang received the
first- and second-class prizes of Science and Technology Progress Awards of
the State Education Ministry of China in 1998, 2002 and 2014. He is currently
a member of Signal Processing Committee of Chinese Institute of Electronics.
| 7 |
arXiv:1604.07104v1 [] 25 Apr 2016
The limit of finite sample breakdown point of
Tukey’s halfspace median for general data
Xiaohui Liua,b , Shihua Luoa,b , Yijun Zuoc
a
School of Statistics, Jiangxi University of Finance and Economics, Nanchang, Jiangxi 330013, China
b
Research Center of Applied Statistics, Jiangxi University of Finance and Economics, Nanchang,
Jiangxi 330013, China
c
Department of Statistics and Probability, Michigan State University, East Lansing, MI, 48823, USA
November 9, 2017
Summary
Under special conditions on data set and underlying distribution, the limit of finite sample
breakdown point of Tukey’s halfspace median ( 13 ) has been obtained in literature. In this paper,
we establish the result under weaker assumption imposed on underlying distribution (halfspace
symmetry) and on data set (not necessary in general position). The representation of Tukey’s
sample depth regions for data set not necessary in general position is also obtained, as a byproduct of our derivation.
Key words: Tukey’s halfspace median; Limit of finite sample breakdown point; Smooth condition;
Halfspace symmetry
2000 Mathematics Subject Classification Codes: 62F10; 62F40; 62F35
1
Introduction
To order multidimensional data, Tukey (1975) introduced the notion of halfspace depth. The
halfspace depth of a point x in Rd (d ≥ 1) is defined as
D(x , Fn ) =
inf
u ∈S d−1
Pn (u ⊤ X ≤ u ⊤ x ),
where S d−1 = {v ∈ Rd : kv k = 1} with k · k being the Euclidean distance, Fn denotes the
empirical distribution related to the random sample X n = {X1 , X2 , · · · , Xn } from X ∈ Rd , and
Pn is the corresponding empirical probability measure.
With this notion, a natural definition of multidimensional median is the point with maximum
halfspace depth, which is called Tukey’s halfspace median (HM ). To avoid the nonuniqueness,
HM (θ̂n ) is defined to be the average of all points lying in the median region M(X n ), i.e.,
θ̂n := T ∗ (X n ) = Ave {x : x ∈ M(X n )} ,
where M(X n ) = {x ∈ Rn : D(x , Fn ) = supz ∈Rd D(z , Fn )}, which is the inner-most region
among all τ -trimmed depth regions:
n
o
Dτ (X n ) = x ∈ Rd : D(x , Fn ) ≥ τ ,
for ∀τ ∈ (0, λ∗ ] with λ∗ = D(θ̂n , Fn ).
When d = 1, HM reduces to the ordinary univariate median, the latter has the most outstanding property, its best breakdown robustness. A nature question then is: will HM inherit
the best robustness of the univariate median?
Answers to this question have been given in the literature, e.g. Donoho and Gasko (1992),
Chen (1995) and Chen and Tyler (2002) and Adrover and Yohai (2002). The latter two obtained
the asymptotic breakdown point ( 13 ) under the maximum bias framework, whereas the former
1
two obtained the limit of finite sample breakdown point (as n → ∞) under the assumption of
absolute continuity and central or angular symmetry of underlying distribution.
Among many gauges of robustness of location estimators, finite sample breakdown point is
the most prevailing quantitative assessment. Formally, for a given sample X n of size n in Rd ,
the finite sample addition breakdown point of an location estimator T at X n is defined as:
m
n
m
n
n
: sup kT (X ∪ Y ) − T (X )k = ∞ ,
ε(T, X ) = min
1≤m≤n n + m Y m
where Y m denotes a data set of size m with arbitrary values, and X n ∪ Y m the contaminated
sample by adjoining Y m to X m .
Absolutely continuity guarantees the data set is in general position (no more than d sample
points lie on a (d − 1)-dimensional hyperplane (Mosler et al., 2009)) almost surely. In practice,
the data set X n is most likely not in general position. This is especially true when we are
considering the contaminated data set.
Unfortunately, most discussions in literature on finite sample breakdown point is under the
assumption of data set in general position. Dropping this unrealistic assumption is very much
desirable in the discussion. In this paper we achieve this. Furthermore, we also relax the
angular symmetry (Liu, 1988, 1990) assumption in Chen (1995) to a weaker version of symmetry:
halfspace symmetry (Zuo and Serfling, 2000). X ∈ Rd is halfspace symmetrical about θ0 if
P (X ∈ Hθ0 ) ≥ 1/2 for any halfspace Hθ0 containing θ0 . Minimum symmetry is required to
guarantee the uniqueness of underlying center θ in Rd .
Without the ‘in general position’ assumption, deriving the limit of finite sample breakdown
point of HM is quite challenging. We will consider this issue under the combination of halfspace
symmetry and a weak smooth condition (see Section 2 for details). Recently, Liu et al. (2015b)
have derived the exact finite sample breakdown point for fixed n. Their result nevertheless
depends on the assumption that X n is in general position and could not be directly utilized under
the current setting, because when the underlying F only satisfies the weak smooth condition,
the random sample X n generated from F may not be in general position in some scenarios.
Hence, we have to extend Liu et al. (2015b)’s results.
Our proofs in this paper heavily depend on the representation of halfspace median region
while the existing one in the literature is for the data set in general position. Hence, we have
to establish the representation of Tukey’s depth region (as the intersection of a finite set of
halfspaces) without in general position assumption, which is a byproduct of our proofs. We
2
only need X n to be of affine dimension d which is much weaker than the existing ones in
(Paindaveine and Šiman, 2011).
The rest paper is organized as follows. Section 2 presents a weak smooth condition and shows
it is weaker than the absolute continuity and the interconnection with other notions. Section
3 establishes the representation of Tukey’s sample depth regions without in-general-position
assumption. Section 4 derives the limiting breakdown point of HM. Concluding remarks end
the paper.
2
A weak smooth condition
In this section, we first present the definition of smooth condition (SC ), and then investigate
its relationship with some other conditions, i.e., absolute continuity and continuous support,
commonly assumed in the literature dealing with HM. The connection between SC and the
continuity of the population version of Tukey’s depth function D(x , F ) is also investigated.
Let P be the probability measure related to F . We say a probability distribution F in Rd of
a random vector X is smooth at x 0 ∈ Rd if P (X ∈ ∂H) = 0 for any halfspace H with x 0 on its
boundary ∂H. F is globally smooth over Rd if F is smooth at ∀x ∈ Rd .
Recall that a distribution F is absolutely continuous over Rd if for ∀ε > 0 there is a positive
number δ such that P (X ∈ A) < ε for all Borel sets A of Lebesgue measure less than δ. One
can easily show that absolute continuity implies global smoothness. Nevertheless, the vice versa
is false. The counterexample can be found in the following.
Counterexample. Let S1 = {x ∈ Rd : kx k ≤ 1}, S2 = {x ∈ Rd : kx k = 2}, and
Y = ηZ1 + (1 − η)Z2 , where η ∼ Bernoulli(0.5), and η, Z1 , Z2 are mutually independent. If Z1 ,
Z2 ∈ Rd are uniformly distributed over S1 and S2 , respectively, then it is easy to show that the
distribution of Y is not absolutely continuous, but smooth at ∀x ∈ Rd .
Furthermore, observe that a distribution F is said to have contiguous support if there is no
intersection of any two halfspaces with parallel boundaries that has nonempty interior but zero
probability and divides the support of F into two parts (see Kong and Zuo (2010)). We can
derive that if F has contiguous support it should be globally smooth, but once again the vice
versa is false. Counterexamples can easily be constructed by following a similar fashion to the
above one.
Global smoothness is a quite desirable sufficient condition on F if one desires the global
continuity of D(x , F ) as shown in the following lemma.
3
Lemma 1. If F is globally smooth, then D(x , F ) is globally continuous in x over Rd .
When F is globally smooth, we now show that if there ∃x 0 ∈ Rd such that
Proof.
lim D(x , F ) 6= D(x 0 , F ), then it will lead to a contradiction.
x →x 0
By noting lim D(x , F ) 6= D(x 0 , F ), we claim that there must exist a sequence {x k }∞
k=1
x →x 0
such that lim x k = x 0 but lim D(x k , F ) = d∗ 6= D(x 0 , F ). (If lim D(x k , F ) is divergent,
k→∞
k→∞
k→∞
by observing {D(x k , F )}∞
k=1 ⊂ [0, 1], we utilize one of its convergent subsequence instead.) For
simplicity, hereafter denote dk = D(x k , F ) for k = 0, 1, · · · , and assume d∗ < d0 if no confusion
arises.
Observe that S d−1 is compact. Hence, for each x k , there ∃u k ∈ S d−1 satisfying P (u ⊤
kX ≤
∞
d−1 is bounded, it should contain a convergent subsequence
u⊤
k x k ) = dk . Since {u k }k=1 ⊂ S
d0 −d∗
with
lim
u
=
u
.
For
this
u
and
∀ε
∈
0,
{u kl }∞
, there ∃δ0 > 0 such that
0
0
0
k
l
l=1
2
l→∞
P (X ∈ B(u 0 , δ0 )) < ε0
(1)
following from the global smoothness. Here B(u, c) = {z ∈ Rd : u ⊤ x 0 − c < u ⊤ z ≤ u ⊤ x 0 } for
∀u ∈ S d−1 and ∀c ∈ R1 .
⊤
⊤
⊤
On the other hand, lim P (u ⊤
kl X ≤ u kl x kl ) = d∗ < d0 ≤ P (u kl X ≤ u kl x 0 ). Using this and
l→∞
∞
the convergence of both {x kl }∞
l=1 and {u kl }l=1 , an element derivation leads to that: For δ0 given
above, there ∃M > 0 such that
⊤
⊤
P (X ∈ B(u kl , δ0 )) ≥ P (u ⊤
kl X ∈ (u kl x kl , u kl x 0 ]) >
d0 − d∗
> 0,
2
for ∀kl > M.
This clearly will contradict with (1), because B(u 0 , δ0 ) ∩ B(u kl , δ0 ) → B(u 0 , δ0 ) as u kl → u 0
when kl → ∞.
Lemma 1 indicates that, when F is globally smooth, D(x , F ) should be globally continuous
over Rd , but the vice versa is not clear. Fortunately, an equivalent relationship between the
smoothness of F and the continuity of D(x , F ) can be achieved at a special point as stated in
the following lemma.
Lemma 2. When F is halfspace symmetrical about θ0 , then the following statements are
equivalent:
(i) F is smooth at θ0 ;
(ii) D(x , F ) is continuous at θ0 with respect to x .
4
Proof. Similar to Lemma 1, one can show: (i) is true ⇒ (ii) is true. In the following, we
will show: (i) is false ⇒ (ii) is also false.
If (i) is false, we claim that there exists a halfspace H such that m0 := P (X ∈ ∂H) > 0.
Denote m+ = P (X ∈ H \ ∂H) and m− = P (X ∈ Hc ). Without confusion, assume that
m− ≤ m+ and the normal vector u 0 of ∂H points into the interior of H. Observe that D(x , F ) ≤
⊤
P (u ⊤
0 X ≤ u0 x ) ≤
1−m0
2
< 1/2 for ∀x ∈ Hc , i.e., the complementary of H. Hence, for any
c
sequence {x k }∞
k=1 ⊂ H such that lim x k = θ0 , we have lim sup D(x k , F ) ≤
k→∞
k→∞
1−m0
2 .
This in
turn implies that D(x , F ) is discontinuous at θ0 because D(θ0 , F ) ≥ 1/2 when F is halfspace
symmetrical about θ0 .
Summarily, relying on the discussions above, we obtain the following relationship schema
when F is halfspace symmetrical about θ0 .
=⇒
F is globally smooth
F is smooth at θ0
⇐
=⇒
⇒
Continuous support
⇔
Absolute continuity
D(x , F ) is globally continuous
⇒
D(x , F ) is continuous at θ0
Since the assumption that F is smooth at θ0 is quite general, we call it weak smooth condition
throughout this paper.
3
Representation of Tukey’s depth regions
To prove the main result, we need to know the representation of M(X n ). When X n is in
general position, this issue has been considered by Paindaveine and Šiman (2011). Nevertheless,
their result can not be directly applied to prove our main theorem, because when the underlying
distribution F only satisfies the weak smooth condition, the sample X n may not be in general
position. Hence, we have to solve this problem before proceeding further.
For convenience, we introduce the following notations. For ∀u ∈ S d−1 (d ≥ 2) and ∀τ ∈ (0, 1),
denote the (τ, u )-halfspace as
n
o
Hτ (u) = x ∈ Rd : u ⊤ x ≥ qτ (u )
with complementary Hτc (u) = {x ∈ Rd : u ⊤ x < qτ (u)} and boundary ∂Hτ (u) = {x ∈ Rd :
u ⊤ x = qτ (u )}, where qτ (u) = inf{t ∈ R1 : Fu n (t) ≥ τ }, and Fu n denotes the empirical
distribution of {u ⊤ X1 , u ⊤ X2 , · · · , u ⊤ Xn }. Obviously, u points into the interior of Hτ (u), and
5
(see e.g. Kong and Mizera (2012))
Dτ (X n ) =
\
Hτ (u ).
(2)
u∈S d−1
In the following, a halfspace Hτ (u ) is said to be τ -irrotatable if:
(a) nPn (X ∈ Hτc (u)) ≤ ⌈nτ ⌉ − 1, i.e., Hτ (u) cuts away at most ⌈nτ ⌉ − 1 sample points.
(b) ∂Hτ (u ) contains at least d sample points, and among them there exist d − 1 points, which
can determine a (d − 2)-dimensional hyperplane Vd−2 such that: it is possible to make
Hτ (u ) cutting away more than ⌈nτ ⌉ − 1 sample points only through deviating it around
Vd−2 by an arbitrary small scale.
Here ⌈·⌉ denotes the ceiling function, and Vd−2 is a singleton if d = 2. To gain more insight, we
provide a 2-dimensional example in Figure 1. In this example, X1 , X2 , X3 and X4 are clearly
not in general position, and H(u) is 1/2-irrotatable.
X
2
X
3
x
2
u
H(u)
X
X
1
4
x
1
Figure 1: Shown is an example of the τ -irrotatable halfspace. Observe that: (a) H(u) cuts
away no more than 1 sample point, i.e., X1 , and (b) ∂H(u) passes through at least 2 (= d)
sample points, i.e., X2 , X3 , X4 , and it is possible to make H(u) cutting away more than
⌈4 × 1/2⌉ − 1 = 1 sample points, i.e., X1 and X2 , through deviating it by an arbitrary small
scale around X3 . Hence, H(u) is 1/2-irrotatable.
Remarkably, if a τ1 -irrotatable halfspace cuts away strictly less than ⌈nτ1 ⌉ − 1 sample points,
it also should must be τ2 -irrotatable for some τ2 < τ1 .
This τ -irrotatable property is quite important for the following lemma, which further plays
a key role in the proof of Lemma 4.
6
Lemma 3. Suppose X n = {X1 , X2 , · · · , Xn } ⊂ Rd (d ≥ 2) is of affine dimension d. Then
for ∀τ ∈ (0, λ∗ ], we have
Dτ (X n ) =
m
\τ
Hτ (µi ),
i=1
where mτ denotes the number of all τ -irrotatable halfspaces Hτ (µi ).
Proof. By (2), Dτ (X n ) ⊂
T τ
Dτ (X n ) ⊃ m
i=1 Hτ (µi ).
Tmτ
i=1 Hτ (µi )
holds trivially. Hence, in the sequel we only prove:
Tmτ
such that x 0 ∈
/ Dτ (X n ), i.e., D(x 0 , Fn ) < τ , we now show that
T τ
this will lead to a contradiction. For simplicity, hereafter denote Vn (τ ) = m
i=1 Hτ (µi ).
If there ∃x 0 ∈
i=1 Hτ (µi )
Since Pn (·) takes values only on {0, 1/n, 2/n, · · · , n/n}, there ∃u 0 ∈ S d−1 such that
⊤
Pn (u ⊤
0 X ≤ u 0 x 0 ) = D(x 0 , Fn ).
(3)
Trivially, when X n is of affine dimension d, we have: Vn (τ ) ⊂ cov(X n ) for ∀τ ∈ (0, λ∗ ], where
cov(X n ) denotes the convex hull of X n . Hence, for x 0 ∈ Vn (τ ) and u 0 given in (3), there must
exist an integer k0 ∈ {1, 2, · · · , nλ∗ } and a permutation π0 := (i1 , i2 , · · · , in ) of (1, 2, · · · , n) such
that
⊤
⊤
⊤
⊤
⊤
u⊤
0 Xi1 ≤ u 0 Xi2 ≤ · · · ≤ u 0 Xik0 ≤ u 0 x 0 < u 0 Xik0 +1 ≤ · · · ≤ u 0 Xin .
(4)
Obviously, k0 /n < τ due to D(x 0 , Fn ) < τ , and hence k0 ≤ ⌈nτ ⌉ − 1.
Note that replacing u ∈ S d−1 with u ∈ Rd \ {0} does no harm to the definition of both
D(x , Fn ) and Dτ (X n ) (Liu and Zuo, 2014). Hence, in the sequel we pretend that the constraint
on u is u ∈ Rd \ {0} instead.
Denote C(π0 ) = {v ∈ Rd \ {0} : v ⊤ Xit ≤ v ⊤ Xik0 +1 for any 1 ≤ t ≤ k0 , and v ⊤ Xik0 +1 ≤
v ⊤ Xis for any k0 + 2 ≤ s ≤ n}. Obviously, u 0 ∈ C(π0 ), and C(π0 ) is a convex cone.
d
v
Let U := {νj }m
j=1 = {z ∈ R \ {0} : kz k = 1, z lies in a vertex of C(π0 )} with mv being U ’s
cardinal number. Clearly, mv < ∞ and ν1 , ν2 , · · · , νmτ are non-coplanar when X n is of affine
dimension d. By the construction of C(π0 ), each ν ∈ U determines a halfspace H(ν) such that:
(p1) ν is normal to ∂H(ν) and points into the interior of H(ν), (p2) H(ν) cuts away at most
⌈nτ ⌉ − 1 sample points, because Xik0 +1 , Xik0 +2 , · · · , Xin ∈ Hν , (p3) ∂H(ν) contains at least d
sample points, which are of affine dimension d − 1 due to ν is a vertex of C(π0 ).
7
⊤
⊤
⊤
For U , we claim that: there ∃v 0 ∈ U satisfying v ⊤
0 x 0 < v 0 Xik0 +1 . If not, νj x 0 ≥ νj Xik0 +1
for all j = 1, 2, · · · , mv . Hence,
⊤
⊤
mv
mv
X
X
ω j νj x 0 ≥
ωj νj Xik0 +1 ,
j=1
where
Pmv
j=1 ωj
j=1
= 1 with ωj ≥ 0 for all j = 1, 2, · · · , mv . This contradicts with (4) by noting
that C(π0 ) is convex and u 0 ∈ C(π0 ).
⊤
/ H(v 0 ). We have:
However, v ⊤
0 x 0 < v 0 Xik0 +1 implies x 0 ∈
S1. H(v 0 ) satisfies (b) given in Page 6: By (p1)-(p3), H(v 0 ) is τ -irrotatable, contradicting
with the definition of Vn (τ ).
S2. H(v 0 ) does not satisfy (b): Among all sample points contained by ∂H(v 0 ), there must
exist d − 1 points that determine a (d − 2)-dimensional hyperplane, around which we can
obtain a τ -irrotatable halfspace through rotating H(v 0 ).
(If not, there will be a contradiction: By (p2), there ∃Xj1 , Xj2 , · · · , Xjd ∈ ∂H(v 0 ),
d
which are of affine dimension d − 1. Denote W1 , W2 , · · · , Wd respectively as d−1
hy-
perplanes that passing through all (d − 2)-dimensional facets of the simplex formed by
Xj1 , Xj2 , · · · , Xjd . Then similar to Part (II) of the proof of Theorem 1 in Liu et al. (2015a),
it is easy to check that:
for ∀y ∈ Rd , y can not simultaneously lie in all W1 , W2 , · · · , Wd .
Without confusion, assume y ∈
/ W1 and Xj1 ∈ W1 . Observe that no τ -irrotatable halfspace is available through rotating H(v 0 ) around W1 . Hence, for ∀δ > 0,
c
c
)} < ⌈nτ ⌉ − 1,
max{nPn (X ∈ Hδ+
), nPn (X ∈ Hδ−
(5)
⊤
d
⊤
⊤
where Hδ+ = {z ∈ Rd : u ⊤
+ z ≥ u + Xj1 }, and Hδ− = {z ∈ R : u − z ≥ u − Xj1 } with
u + = v 0 + δu ∗ and u − = v 0 − δu ∗ , where u ∗ ∈ S d−1 is orthogonal to both v 0 and W1 .
c or y ∈ Hc for ∀δ > 0, we obtain D(y, F ) < (⌈nτ ⌉ − 1)/n ≤ τ .
Since either y ∈ Hδ+
n
δ−
This is impossible because Dn (τ ) is nonempty for ∀τ ∈ (0, λ∗ ].)
Furthermore, it is easy to show that: if there is a τ -irrotatable halfspace, say H1 , obtained
through rotating H(v 0 ) around one (d − 2)-dimensional hyperplane clockwise (without
confision), then there would be an another τ -irrotatable halfspace, say H2 , by rotating
H(v 0 ) anti-clockwise. By noting Hc (v 0 ) ⊂ H1c ∪ H2c , we can obtain either x 0 ∈ H1c or
x 0 ∈ H2c , which contradicts with the definition of Vn (τ ).
8
Hence, there is no such x 0 that x 0 ∈ Vn (τ ), but x 0 ∈
/ Dτ (X n ).
This completes the proof of this lemma.
Remark 1. It may have long been known in the statistical community that Tukey’s sample
depth regions may be polyhedral and have a finite number of facets. The detailed character of each facet of these regions is unknown, nevertheless. When X n is in general position,
Paindaveine and Šiman (2011) have shown that each hyperplane passing through a facet of
Dτ (X n ), for ∀τ ∈ (0, λ∗ ], contains exactly d and cuts away exactly ⌈nτ ⌉ − 1 sample points; see
Lemma 4.1 in Page 201 of Paindaveine and Šiman (2011) for details. Lemma 3 generalizes their
result by removing the ‘in general position’ assumption, and indicates that such hyperplanes
contain at least d and cuts away no more than ⌈nτ ⌉ − 1 sample points.
To facilitate the understanding, we provide an illustrative example in Figure 2. In this example, there are n = 4 observations, i.e., X1 , X2 , X3 , X4 , where X3 and X4 take the same value.
Clearly, they are not in general position and of affine dimension 2. Figures 2(a)-2(b) indicate
that {X1 , X3 , X4 } determines two 1/2-irrotatable halfspaces, i.e., H1/2 (u 1 ) and H1/2 (u 2 ), satisfying H1/2 (u 1 ) ∩ H1/2 (u 2 ) = L1 . Similarly, the intersection of the halfspaces determined by
{X2 , X3 , X4 } is L2 . Hence, the median region is {x : x = X3 }. From Figure 2(b) we can see
that ∂H1/2 (u2 ) contains 3 (6= 2) and H1/2 (u2 ) cuts away 0 (6= 1) sample points, which obviously
is not in agreement with the results of Paindaveine and Šiman (2011).
4
The limiting breakdown point of HM
In this section, we will derive the limit of the finite sample breakdown point of HM when
the underlying distribution satisfies only the weak smooth condition (such a limit is also called
asymptotic breakdown point in the literature, the latter notion is based on the maximum bias
notion though, see Hampel(1968)). Since HM reduces to the ordinary univariate median for
d = 1, whose breakdown point robustness has been well studied, we focus only on the scenario
of d ≥ 2 in the sequel.
9
y
X2
c
Hτ (u1), the complementary of Hτ(u1)
X
3
X1
X
4
H *(u )
τ
H (u )
τ
1
1
x
(a) Halfspace H1/2 (u1 ), which is 1/2-irrotatable because 4Pn (X ∈ Hcτ (u1 )) ≤
⌈4τ ⌉ − 1 but 4Pn (X ∈
/ H∗τ (u1 )) = 2 > ⌈4τ ⌉ − 1 for τ = 1/2.
X
2
x2
H *(u )
τ
2
Hτ(u2)
X3
X1
X4
c
Hτ (u2), the complementary of Hτ(u2), with τ=1/2
x1
(b) Halfspace H1/2 (u2 ), which is similarly 1/2-irrotatable because 4Pn (X ∈
Hcτ (u2 )) ≤ ⌈4τ ⌉ − 1 but 4Pn (X ∈
/ H∗τ (u2 )) = 2 > ⌈4τ ⌉ − 1 for τ = 1/2
X2
x
2
L2
X3
X
1
X
4
L1
x1
(c) The intersection of lines L1 and L2 .
Figure 2: Shown are examples of the τ -irrotatable halfspaces and related HM.
10
The key idea is to obtain simultaneously a lower and an upper bound of ε(T ∗ , X n ) for fixed
n, and then prove that they tend to the same value as n → ∞. When X n is of affine dimension
d, it is easy to obtain a lower bound, i.e.,
λ∗
1+λ∗ ,
for ε(T ∗ , X n ) by using a similar strategy to
Donoho and Gasko (1992) though. Finding a proper upper bound is not trivial, nevertheless.
To this end, we establish the following lemma, which provides a sharp upper bound with its
λ∗
1+λ∗
asymptotically. For simplicity, denoting by Au an arbitrary d×
.
(d−1) matrix of unit vectors such that (u .. A ) constitutes an orthonormal basis of Rd , we define
limit coinciding with that of
u
d−1 . Correspondingly,
⊤
⊤
the Au -projections of X n as Xnu = {A⊤
u X1 , Au X2 , · · · , Au Xn } for ∀u ∈ S
let θ̂nu = T ∗ (Xnu ), λ∗u = D(θ̂nu , Fu n ), and Fu n to be the empirical distribution related to Xnu .
Lemma 4. For a given data set X n of affine dimension d, the finite sample breakdown point
of Tukey’s halfspace median satisfies
ε(T ∗ , X n ) ≤
l
inf u∈S d−1 λ∗u
.
1 + inf u∈S d−1 λ∗u
y
K
u
x
V
v
x
x0
H
Figure 3: Shown is a 3-dimensional illustration. Once y ’s are putted on ℓ, all of their projections
onto V are x0 . Hence, for any x ∈
/ ℓ, its depth with respect to X n ∪ Y m would be no more
than that of x with respect to the projections of X n ∪ Y m , because all projections of the sample
points contained by K would lie in H. Here H denotes the optimal (d − 1)-dimensional optimal
halfspace of x, and K the d-dimensional halfspace whose projection is H.
Since the whole proof of this lemma is very long, we present it in two parts. For ∀u ∈ S d−1 ,
in Part (I), we first project X n onto a (d − 1)-dimensional space Vud−1 that is orthogonal to
u, and then show that there ∃x0 ∈ Vud−1 , which can lie in the inner of the complementary of
a (d − 1)-dimensional optimal halfspace of ∀x ∈ Vud−1 \ {x0 }. Here by optimal halfspace of x
11
we mean the halfspace realizing the depth at x with respect to Xnu . Denote the line passing
through x0 and parallel to u as ℓu . In Part (II), we will show that by putting nλ∗u repetitions
of y 0 at any position on ℓu but outside the convex hull of X n , i.e., ℓu \ cov(X n ), it is possible to
obtain that supx ∈cov(X ) D(x , X n ∪ Y m ) ≤ nλ∗u . Hence, inf u∈S d−1 nλ∗u repetitions of y 0 suffice
for breaking down T ∗ (X n ∪ Y m ). See Figure 3 for a 3-dimensional illustration.
Proof of Lemma 4. Trivially, it is easy to check that, for ∀u ∈ S d−1 , Xnu is of affine
dimension d − 1 if X n is of affine dimension d.
(I). In this part, we only prove that: When the affine dimension of M(Xnu ) is nonzero
for d > 2, there ∃x0 ∈ M(Xnu ) such that Ux ∩ Hx,x0 6= ∅ for ∀x ∈ M(Xnu ) \ {x0 }, where
⊤
d−2 : v⊤ x < v⊤ x }.
Ux = {v ∈ S d−2 : Pn (v⊤ (A⊤
0
u X) ≤ v x) = D(x, Fu n )}, and Hx,x0 = {v ∈ S
That is, x0 lies in the inner of the complementary of a (d − 1)-dimensional optimal halfspace
of ∀x ∈ M(Xnu ) \ {x0 }. The rest proof follows a similar fashion to Lemmas 2-3 of Liu et al.
(2015b).
By Lemma 3, M(Xnu ) is polyhedral. Similar to Theorem 2 of Liu et al. (2015a), we can
n
⊤
obtain that, if there is a sample point Xi such that A⊤
u Xi ∈ M(Xu ), then Au Xi should be
a vertex of M(Xnu ) based on the representation of M(Xnu ) obtained in Lemma 3. Let Vu be
the set of vertexes of M(Xnu ) such that, for ∀y ∈ Vu , there is an optimal halfspace Hy of y
n
⊤
satisfying Hy ∩ M(Xnu ) = {y}. Trivially, A⊤
u Xi ∈ Vu if Au Xi ∈ M(Xu ).
If there is point in Vu that can sever as x0 , then this statement holds already. Otherwise,
find a candidate point z0 by using the following iterative procedure and then show that z0 can be
used as x0 . For simplicity, hereafter denote Az = {x ∈ Rd−1 : Ux ∩ Hx,z 6= ∅} and Bz = {x ∈
Rd−1 : Ux ∩ Hx,z = ∅} for ∀z ∈ M(Xnu ). Obviously, Az ∪ Bz = Rd−1 , Az ∩ Bz = ∅, z ∈ Bz , and
Bz ⊂ M(Xnu ).
Let z1 = T ∗ (Xnu ). Clearly, Vu ∩ Bz1 = ∅. (In fact, Vu ∩ Bz = ∅ for any z ∈ M(Xnu ) \ Vu .)
If Bz1 = {z1 }, let x0 = z0 and this statement is already true. Otherwise, similar to Lemma
2 of Liu et al. (2015b), for ∀x ∈ Bz1 \ {z1 }, we obtain: (o1) u⊤ x ≥ u⊤ z1 for ∀u ∈ Uz1 , (o2)
Ux ⊂ Uz1 , and (o3) Bx ⊂ Bz1 \ {z1 }.
Denote
g(z1 ) =
sup
v∈Uz1 ,x∈Bz1 \{z1 }
v⊤ (x − z1 ).
Clearly, g(z1 ) > 0 by (o1)-(o3). Along the same line of Liu et al. (2015b), we can find a
∞
n
series {zi }∞
i=1 ⊂ M(Xu ), if there is no m > 1 such that Bzm = {zm }, satisfying that: {zi }i=1
12
contains a convergent subsequence {zik }∞
k=1 with lim zik = z0 and lim g(zik −1 ) = 0. Trivially,
k→∞
k→∞
z0 ∈ M(Xnu ) \ Vu . (If not, it is easy to obtain a contradiction.)
Now we proceed to prove Bz0 = {z0 }. First, we show
(F1):
z0 ∈ Bzj−1 \ {zj−1 } for ∀j ∈ {ik }∞
k=1 .
If not, there must ∃ũ ∈ Uz0 satisfying ũ⊤ z0 < ũ⊤ zj−1 . For this ũ ∈ Uz0 , let (i′1 , i′2 , · · · , i′n )
⊤
∗
be the permutation of (1, 2, · · · , n) such that: (a) ũ⊤ (A⊤
u Xi′s ) ≤ ũ z0 for 1 ≤ s ≤ k , and (b)
⊤
∗
∗
∗
ũ⊤ (A⊤
u Xi′t ) > ũ z0 for k + 1 ≤ t ≤ n, where k = nλu . Denote
1
⊤
⊤
⊤
ε0 = min ∗ min ũ ((Au Xi′t ) − z0 ), ũ (zj−1 − z0 ) .
k +1≤t≤n
2
∗
∗
∞
Since {zik }∞
k=1 is convergent, there must ∃j ∈ {ik }k=1 with j > j such that kzj ∗ − z0 k < ε0 .
∗
This, together with |ũ⊤ (zj ∗ − z0 )| ≤ kzj ∗ − z0 k, leads to ũ⊤ zj ∗ < ũ⊤ (A⊤
u Xi′t ) for k + 1 ≤ t ≤ n,
u
∗
⊤
∗
which further implies Pn (ũ⊤ (A⊤
u X) ≤ ũ zj ∗ ) ≤ λu . Next, by noting λu = D(zj ∗ , Fn ) ≤
⊤
Pn (ũ⊤ (A⊤
u X) ≤ ũ zj ∗ ), we obtain ũ ∈ Uzj ∗ ⊂ Uzj−1 . On the other hand, for ε0 , a similar
derivation leads to ũ⊤ zj ∗ < ũ⊤ zj−1 . This contradicts with zj ∗ ∈ Bzj−1 when j ∗ > j by (o1)(o2). Then, based on lim g(zik −1 ) = 0 and (F1), we can obtain Bz0 \ {z0 } = ∅ similar to
k→∞
Lemma 3 of Liu et al. (2015b). Hence, we may let x0 = z0 .
(II). By denoting ℓu = {z ∈ Rd : z = Au x0 + γu, ∀γ ∈ R1 } and using a similar method
to the first proof part of Theorem 1 in Liu et al. (2015b), we can obtain that, for an any given
y 0 ∈ cov(X n ) \ ℓu , it holds supx ∈cov(X n ) D(x , Fn+m ) ≤
nλ∗u
n+m ,
where Fn+m denotes the empirical
distribution related to X n ∪ Y m , and Y m contains m repetitions of y 0 .
Note that u is any given, and D(y 0 , Fn+m ) =
ε(T ∗ , X n ) ≤
m
n+m
≥
nλ∗u
n+m
when m ≤ nλ∗u . Hence
inf u ∈S d−1 λ∗u
inf u∈S d−1 nλ∗u
=
.
n + inf u∈S d−1 nλ∗u
1 + inf u ∈S d−1 λ∗u
This completes the proof.
Observe that the upper bound given in Lemma 4 involves the Au -projections. A nature
problem arises: whether the Au -projection of X is still halfspace symmetrically distributed?
The following lemma provides a positive answer to this question.
Lemma 5. Suppose X is halfspace symmetrical about θ0 ∈ Rd (d ≥ 2). Then for ∀u ∈ S d−1 ,
⊤
d−1 .
A⊤
u X is halfspace symmetrical about Au θ0 ∈ R
d−1 . Note
Proof. For ∀v ∈ S d−2 , the fact (Au v)⊤ (Au v) = v⊤ (A⊤
u Au )v = 1 implies Au v ∈ S
that
1
⊤
⊤
⊤
⊤
θ
)
=
P
(A
v)
X
≥
(A
v)
θ
X)
≥
v
(A
P v⊤ (A⊤
u
u
0 ≥ .
u 0
u
2
13
This completes the proof of this lemma.
u
Lemma 5 in fact obtains the population version, i.e., D(A⊤
u θ0 , Fu ), of D(θ̂n , Fu n ) for ∀u ∈
S d−1 , where Fu denotes the distribution of A⊤
u X.
we now are in the position to prove the following theorem.
i.i.d.
Theorem 1. Suppose that (C1) {X1 , X2 , · · · , Xn } ∼ F is of affine dimension d, (C2)
F is halfspace symmetric about point θ0 , and (C3) F is smooth at point θ0 . Then we have
a.s.
ε(T ∗ , X n ) −→ 13 ,
a.s.
as n → +∞, where −→ denotes the “almost sure convergence”.
Proof. Observe that
|D(θ̂n , Fn ) − D(θ0 , F )| ≤ sup |D(x , Fn ) − D(x , F )| + |D(θ̂n , F ) − D(θ0 , F )| .
|
{z
}
x ∈Rd
{z
}
|
E2
E1
Under Condition (C1), a direct use of Remark 2.5 in Page 1465 of Zuo (2003) leads to that
a.s.
sup |D(x , Fn ) − D(x , F )| −→ 0,
as n → +∞,
(6)
x ∈Rd
a.s.
holds with no restriction on F . Hence, E1 −→ 0.
For E2, from Lemma 2, we have that D(x , F ) is continuous at θ0 under Condition (C3). On
the other hand, since D(θ0 , F ) ≥ 1/2 > 0 under Condition (C2), an application of Lemma A.3
a.s.
a.s.
of Zuo (2003) leads to θ̂n −→ θ0 . These two facts together imply E2 −→ 0.
a.s.
a.s.
Based on E1 −→ 0 and E2 −→ 0, we in fact obtain
a.s.
D(θ̂n , Fn ) −→ D(θ0 , F ),
Relying on this and the lower bound
ε(T ∗ , X n ) ≥
λ∗
1+λ∗ ,
∀u ∈ S d−1 .
(7)
it is easy to show that
D(θ0 , F )
,
1 + D(θ0 , F )
almost surely.
d−1 . Hence,
By Lemma 5, Fu is also halfspace symmetrical and smooth at A⊤
u θ0 for ∀u ∈ S
a similar proof to (7) leads to
a.s.
D(θ̂nu , Fu n ) −→ D(A⊤
u θ0 , Fu ),
as n → ∞.
This, together with Lemma 4, and the theory of empirical processes (Pollard, 1984), leads to
ε(T ∗ , X n ) ≤
inf u∈S d−1 D(A⊤
u θ0 , Fu )
,
1 + inf u ∈S d−1 D(A⊤
u θ0 , Fu )
14
almost surely.
Next, by
D(A⊤
u θ0 , Fu )
=
=
inf
v∈S d−2
P v
inf
ū∈S d−1 , ū ⊥u
⊤
(A⊤
u X)
≤v
⊤
(A⊤
u θ0 )
P (ū ⊤ X ≤ ū ⊤ θ0 ),
where ū = Au v, we obtain
inf D(A⊤
u θ0 , Fu )
u∈S d−1
=
=
inf
inf
P (u ⊤ X ≤ u ⊤ θ0 )
u∈S d−1
u∈S d−1
inf
ū∈S d−1 ,ū ⊥u
P (ū X ≤ ū θ0 )
⊤
⊤
= D(θ0 , F ).
a.s.
This proves ε(T ∗ , X n ) −→
D(θ0 ,F )
1+D(θ0 ,F )
= 1/3, because D(θ0 , F ) =
This completes the proof of this theorem.
Remark 2
1
2
under Conditions (C2)-(C3).
It is worth noting that, both halfspace symmetry and weak smooth condition
assumptions in this paper can not be further relaxed if one wants to obtain exactly the limiting
breakdown point of HM. The former is the weakest assumotion to guarantee to have a unique
center. The latter is equivalent to the continuity of D(x , F ) at θ0 , which is necessary for deriving
the limit for both the lower and upper bound, while the upper bound given in Lemma 4 could
not be further improved for fixed n.
5
Concluding remarks
In this paper, we consider the limit of the finite sample breakdown point of HM under weaker assumptions on the underlying distribution and data set. Under such assumptions, the random observations may not be ‘in general position’. This causes additional difficulties in the derivation of the limiting result compared to the scenario of X^n being in general position. During our investigation, relationships between various smoothness conditions have been established, and the representation of the Tukey depth and median regions has also been obtained without imposing the ‘in general position’ assumption.
The Tukey halfspace depth idea has been extended beyond the location setting to many other settings (e.g., regression, functional data, etc.). We anticipate that our results here could also be extended to those settings.
Acknowledgements
The research of the first two authors is supported by National Natural Science Foundation of
China (Grant No.11461029, 61263014, 61563018), NSF of Jiangxi Province (No.20142BAB211014,
20143ACB21012, 20132BAB201011, 20151BAB211016), and the Key Science Fund Project of
Jiangxi provincial education department (No.GJJ150439, KJLD13033, KJLD14034).
References
Adrover, J., Yohai, V., 2002. Projection estimates of multivariate location. Annals of Statistics, 1760-1781.
Chen, Z., 1995. Robustness of the half-space median. Journal of Statistical Planning and Inference, 46(2), 175-181.
Chen, Z., Tyler, D.E., 2002. The influence function and maximum bias of Tukey’s median. Ann. Statist. 30,
1737-1759.
Donoho, D.L., 1982. Breakdown properties of multivariate location estimators. Ph.D. Qualifying Paper. Dept.
Statistics, Harvard University.
Donoho, D.L., Gasko, M., 1992. Breakdown properties of location estimates based on halfspace depth and projected outlyingness. Ann. Statist. 20, 1808-1827.
Donoho, D.L., Huber, P.J., 1983. The notion of breakdown point. In: Bickel, P.J., Doksum, K.A., Hodges Jr.,
J.L. (Eds.), A Festschrift for Erich L. Lehmann. Wadsworth, Belmont, CA, pp. 157-184.
Hampel, F.R. (1968). Contributions to the theory of robust estimation, Ph.D. Thesis, University of California,
Berkeley.
Kong, L., Zuo, Y. 2010. Smooth depth contours characterize the underlying distribution. J. Multivariate Anal.,
101, 2222-2226.
Kong, L., Mizera, I., 2012. Quantile tomography: Using quantiles with multivariate data. Statist. Sinica, 22,
1589-1610.
Liu, R. Y., 1988. On a notion of simplicial depth. Proc. Natl. Acad. Sci. USA. 85, 1732-1734.
Liu, R. Y., 1990. On a notion of data depth based on random simplices. Ann. Statist. 18, 191-219.
Liu, X.H., Zuo, Y.J., Wang, Z.Z., 2013. Exactly computing bivariate projection depth median and contours.
Comput. Statist. Data Anal. 60, 1-11.
Liu, X.H., Luo, S.H., Zuo, Y.J., 2015a. Some results on the computing of Tukey’s halfspace median.
arXiv:1604.05927, Mimeo.
Liu, X., Zuo, Y., 2014. Computing halfspace depth and regression depth. Communications in Statistics-Simulation
and Computation, 43, 969-985.
Liu, X., Zuo, Y., Wang, Q., 2015b. Finite sample breakdown point of Tukey’s halfspace median. Preprint on
arXiv. Mimeo.
Mosler, K., Lange, T., Bazovkin, P., 2009. Computing zonoid trimmed regions of dimension d > 2. Comput.
Statist. Data Anal. 53, 2500-2510.
Paindaveine, D., Šiman, M., 2011. On directional multiple-output quantile regression. J. Multivariate Anal. 102,
193-392.
Pollard, D., 1984. Convergence of stochastic processes. Springer, New York.
Tukey, J.W., 1975. Mathematics and the picturing of data. In Proceedings of the International Congress of Mathematicians, 523-531. Canad. Math. Congress, Montreal.
Zuo, Y., 2001. Some quantitative relationships between two types of finite sample breakdown point. Stat. Probab.
Lett. 51, 369-375.
Zuo, Y.J., 2003. Projection based depth functions and associated medians. Ann. Statist. 31, 1460-1490.
Zuo, Y.J., Serfling, R., 2000. On the performance of some robust nonparametric location measures relative to a
general notion of multivariate symmetry. Journal of Statistical Planning and Inference, 84(1), 55-79.
General Drift Analysis with Tail Bounds∗
Carsten Witt
DTU Compute
Technical University of Denmark
2800 Kgs. Lyngby
Denmark
[email protected]
arXiv:1307.2559v3 [] 2 Jun 2017
Per Kristian Lehre
School of Computer Science
University of Birmingham
Birmingham, B15 2TT
United Kingdom
[email protected]
June 5, 2017
Abstract
Drift analysis is one of the state-of-the-art techniques for the runtime analysis of
randomized search heuristics (RSHs) such as evolutionary algorithms (EAs), simulated
annealing etc. The vast majority of existing drift theorems yield bounds on the expected value of the hitting time for a target state, e. g., the set of optimal solutions,
without making additional statements on the distribution of this time. We address
this lack by providing a general drift theorem that includes bounds on the upper and
lower tail of the hitting time distribution. The new tail bounds are applied to prove
very precise sharp-concentration results on the running time of a simple EA on standard benchmark problems, including the class of general linear functions. Surprisingly,
the probability of deviating by an r-factor in lower order terms of the expected time
decreases exponentially with r on all these problems. The usefulness of the theorem
outside the theory of RSHs is demonstrated by deriving tail bounds on the number of
cycles in random permutations. All these results handle a position-dependent (variable)
drift that was not covered by previous drift theorems with tail bounds. Moreover, our
theorem can be specialized into virtually all existing drift theorems with drift towards
the target from the literature. Finally, user-friendly specializations of the general drift
theorem are given.
∗ A preliminary version of this paper appeared in the proceedings of ISAAC 2014 [27].
Contents

1 Introduction
2 Preliminaries
3 General Drift Theorem
4 Applications of the Tail Bounds
4.1 OneMax, Linear Functions and LeadingOnes
4.2 An Application to Probabilistic Recurrence Relations
5 Conclusions
A Existing Drift Theorems as Special Cases
A.1 Variable Drift and Fitness Levels
A.2 Multiplicative Drift
B Fitness Levels Lower and Upper Bounds as Special Cases
C Non-monotone Variable Drift
1 Introduction
Randomized search heuristics (RSHs) such as simulated annealing, evolutionary algorithms (EAs), ant colony optimization etc. are highly popular techniques in black-box optimization, i. e., the problem of optimizing a function with only oracle access to the function. These heuristics often imitate some natural process and are rarely designed with analysis in mind. Their extensive use of randomness, such as in the mutation operator, renders the underlying stochastic processes non-trivial. While the theory of RSHs is less developed than the theory of classical randomized algorithms, significant progress has been made in the last decade [3, 29, 20]. This theory has mainly focused on the optimization time, which is the random variable TA,f defined as the number of oracle accesses the heuristic A makes before the maximal argument of f is found. Most studies considered the expectation of TA,f; however, more information about the distribution of the optimization time is often needed. For example, the expectation can be deceiving when the runtime distribution has a high variance. Also, tail bounds can be helpful for other performance measures, such as fixed-budget computation, which seeks to estimate the approximation quality as a function of time [11].
Results on the runtime of RSHs were obtained after relevant analytical techniques were developed, some adopted from other fields, others developed specifically for RSHs. Drift analysis is a central method for analyzing the hitting time of stochastic processes and was introduced to the analysis of simulated annealing as early as 1988 [34]. Informally, it allows long-term properties of a discrete-time stochastic process (Xt)t∈N0 to be inferred from properties of the one-step change ∆t := Xt − Xt+1. In the context of EAs, one has
been particularly interested in the random variable Ta defined as the smallest t such that
Xt ≤ a. For example, if Xt represents the “distance” of the current solution in iteration t
to an optimum, then T0 is the optimization time.
Since its introduction to evolutionary computation by He and Yao in 2001 [18], drift
analysis has been widely used to analyze the optimization time of EAs. Many drift theorems
have been introduced, such as additive drift theorems [18], multiplicative drift [9, 12],
variable drift [21, 28, 33], and population drift [24]. Different assumptions and notation
used in these theorems make it hard to abstract out a unifying statement.
Drift analysis is also used outside the theory of RSHs, for example in queuing theory [5, 14]. The widespread use of these techniques in separate research fields has made it difficult to
get an overview of the drift theorems. Drift analysis is also related to other areas, such as
stochastic differential equations and stochastic difference relations.
Most drift theorems used in the theory of RSHs relate to the expectation of the hitting time Ta, and there are fewer results about the tails Pr(Ta > t) and Pr(Ta < t). From the simple observation that Pr(Ta > t) ≤ Pr(∑_{i=0}^{t} ∆i < X0 − a), the problem is reduced to bounding the deviation of a sum of random variables. If the ∆t were independent and identically distributed, then one would be in the familiar scenario of Chernoff/Hoeffding-like bounds. The stochastic processes originating from RSHs are rarely so simple; in particular, the ∆t are often dependent variables, and their distributions are not explicitly given. However, bounds of the form E(∆t | Xt) ≥ h(Xt) for some function h often hold. The drift is called variable when h is a non-constant function. The variable drift theorem provides bounds on the expectation of Ta given some conditions on h. However, there have been no general tail bounds from a variable drift condition. The only results in this direction seem to be the tail bounds for probabilistic recurrence relations from [22]; however, this scenario corresponds to the specific case of non-increasing Xt.
Our main contribution is a new, general drift theorem that provides sharp concentration
results for the hitting time of stochastic processes with variable drift, along with concrete
advice and examples of how to apply it. The theorem is used to bound the tails of the optimization time of the well-known (1+1) EA [13] on the benchmark problems OneMax
and LeadingOnes, as well as the class of linear functions, which is an intensively studied
problem in the area [37]. Surprisingly, the results show that the distribution is highly
concentrated around the expectation. The probability of deviating by an r-factor in lower
order terms decreases exponentially with r. In a different application outside the theory of
RSHs, we use drift analysis to analyze probabilistic recurrence relations and show that the
number of cycles in a random permutation of n elements is sharply concentrated around
the expectation ln n. As a secondary contribution, we prove that our general drift theorem
can be specialized into virtually all variants of drift theorems with drift towards the target
(in particular, variable, additive, and multiplicative drift) that have been scattered over the
literature on runtime analysis of RSHs. Unnecessary assumptions such as discrete or finite
search spaces will be removed from these theorems.
This paper is structured as follows. Section 2 introduces notation and basics of drift
analysis. Section 3 presents the general drift theorem with tail bounds and suggestions
for user-friendly corollaries. Section 4 applies the tail bounds from our theorem. Sharp-concentration results on the running time of the (1+1) EA on OneMax, LeadingOnes
and general linear functions are obtained. The application outside the theory of RSHs with
respect to random recurrence relations is described at the end of this section (Section 4.2).
In all these applications, the probability of deviating by an r-factor in lower order terms of
the expected time decreases exponentially with r. Appendix A demonstrates the generality
of the theorem by identifying a large number of drift theorems from the literature as special
cases. We finish with some conclusions.
2 Preliminaries
We analyze time-discrete stochastic processes represented by a sequence of non-negative
random variables (Xt )t∈N0 . For example, Xt could represent a certain distance value of an
RSH from an optimum. In particular, Xt might aggregate several different random variables
realized by an RSH at time t into a single one. In contrast to existing drift theorems, we do
not assume that the state space is discrete (e. g., all non-negative integers) or continuous
but only demand that it is bounded, non-negative and includes the “target” 0.
We adopt the convention that the process should pass below some threshold a ≥ 0
(“minimizes” its state) and define the first hitting time Ta := min{t | Xt ≤ a}. If the
actual process seeks to maximize its state, typically a straightforward mapping allows us
to stick to the convention of minimization. In an important special case, we are interested
in the hitting time T0 of target state 0; for example when a (1+1) EA, a very simple RSH,
is run on the well-known OneMax problem and we are interested in the first point of
time where the number of zero-bits becomes zero. Note that Ta is a stopping time and that
we assume that the stochastic process is adapted to some filtration (Ft )t∈N0 , such as its
natural filtration σ(X0 , . . . , Xt ).
The expected one-step change δt := E(Xt − Xt+1 | Ft ) for t ≥ 0 is called drift. Note that
δt in general is a random variable since the outcomes of X0 , . . . , Xt are random. Suppose we
manage to bound δt from below by some δ∗ > 0 for all possible outcomes of δt , where t < T .
Then we know that the process decreases its state (“progresses towards 0”) in expectation
by at least δ∗ in every step, and the additive drift theorem (see Theorem 1 below) will
provide a bound on T0 that only depends on X0 and δ∗ . In fact, the very natural-looking
result E(T0 | X0 ) ≤ X0 /δ∗ will be obtained. However, bounds on the drift might be more
complicated. For example, a bound on δt might depend on Xt or states at even earlier points
of time, e. g., if the progress decreases as the current state decreases. This is often the case
in applications to EAs. However, for such algorithms the whole “history” is rarely needed.
Simple EAs and other RSHs are Markov processes such that often δt = E(Xt − Xt+1 | Xt )
for an appropriate Xt .
With respect to Markov processes on discrete search spaces, drift conditions traditionally
use conditional expectations such as E(Xt − Xt+1 | Xt = i) and bound these for arbitrary
i > 0 instead of directly bounding the random variable E(Xt − Xt+1 | Xt ) on Xt > 0.
As pointed out, the drift δt in general is a random variable and should not be confused
with the “expected drift” E(δt ) = E(E(Xt − Xt+1 | Ft )), which rarely is available since it
averages over the whole history of the stochastic process. Drift is based on the inspection
of the progress from one step to another, taking into account every possible history. This
one-step inspection often makes it easy to come up with bounds on δt . Drift theorems could
also be formulated based on expected drift; however, this might be tedious to compute. See
[19] for one of the rare analyses of “expected drift”, which we will not get into in this paper.
We now present the first drift theorem for additive drift. It is based on [18], from which
we removed the unnecessary assumptions that the search space is discrete and the Markov
property. We only demand a bounded state space.
Theorem 1 (Additive Drift, following [18]). Let (Xt)t∈N0 be a stochastic process over a bounded state space S ⊆ R≥0. Then:

(i) If E(Xt − Xt+1 ; Xt > 0 | Ft) ≥ δu then E(T0 | F0) ≤ X0/δu.

(ii) If E(Xt − Xt+1 ; Xt > 0 | Ft) ≤ δℓ then E(T0 | F0) ≥ X0/δℓ.
Proof. This proof uses martingale theory and resembles the approach in [6, 25]. It is simpler
than the original proof by [18].
In the following, we show only the upper bound since the lower bound is proven symmetrically. Moreover, we assume δu > 0 since the statement is trivial otherwise. We define
the stopped process Yt = Xt∧T0 + (t ∧ T0 )δu , where t ∧ T0 = min{t, T0 }. We prove that Yt is
a super-martingale. More precisely, on the one hand, we get for t < T0 by induction that
E(Yt+1 | Ft ) = E(Xt+1 + (t + 1)δu | Ft )
≤ Xt − δu + (t + 1)δu = Yt ,
where the inequality uses the drift condition. On the other hand, for t ≥ T0 , E(Yt+1 | Ft ) =
Yt , which altogether proves E(Yt+1 | Ft ) ≤ Yt .
The aim is to apply the optional stopping theorem with respect to the super-martingale
Yt , t ∈ N0 and the stopping time T0 . If we can prove that E(T0 | F0 ) < ∞ then all
conditions of the optional stopping theorem are satisfied and we get
E(YT0 | F0 ) ≤ E(Y0 | F0 ) = X0
hence
E(XT0 + T0 δu | F0 ) ≤ X0 ,
which implies
E(T0 | F0 ) ≤ (X0 − E(XT0 | F0 ))/δu = (X0 − 0)/δu = X0 /δu .
Hence, we are left with the claim E(T0 | F0 ) < ∞.
To show the claim, we exploit the uniform drift bound E(Yt − Yt+1 ; Yt > 0 | Ft ) ≥ δu
and the fact that the state space is bounded and non-negative. Note that Yt < δu implies
Yt = 0 since otherwise we would contradict the drift bound. Moreover, since the state space
is bounded, there is some constant r > 1 depending on both the largest state and δu such
that E(Yt+1 ; Yt > 0 | Ft ) ≤ Yt /r. We will use the last inequality to prove that E(Yt | F0 )
converges exponentially to 0, from which we finally will derive our claim.
We first observe that E(Yt+1 | Ft ) = E(Yt+1 ; Yt > 0 | Ft ) + E(Yt+1 ; Yt = 0 | Ft ) =
E(Yt+1 ; Yt > 0 | Ft ) since Yt = 0 implies Yt+1 = 0. From E(Yt+1 ; Yt > 0 | Ft ) ≤ Yt /r
we then get
E(Yt+1 | Ft ) ≤ Yt /r
and therefore
E(Yt+2 | Ft ) = E(E(Yt+2 | Ft+1 ) | Ft ) ≤ E(Yt+1 | Ft )/r.
Applying the bound on E(Yt+1 | Ft ) once again, we get E(Yt+2 | Ft ) ≤ Yt /r 2 and inductively
E(Yt | F0 ) ≤ Y0 /r t for all t ∈ N.
To finally bound E(T0 | F0 ) from above, we will use the identity
    E(T0 | F0) = ∑_{t=1}^{∞} Pr(T0 ≥ t | F0) = ∑_{t=1}^{∞} (1 − Pr(Y_{t−1} = 0 | F0)) ≤ ∑_{t=0}^{∞} (1 − Pr(Yt = 0 | F0)).
Hence, we need to show that (1 − Pr(Yt = 0 | F0 )) converges to 0 sufficiently fast. To
this end, we use our observation Pr(Yt = 0) = 1 − Pr(Yt ≥ δu ) made above. By Markov’s
inequality, Pr(Yt ≥ δu ) ≤ E(Yt )/δu . Combining this with our bound on E(Yt ) from above,
we get
    Pr(Yt ≥ δu | F0) ≤ Y0/(δu r^t).
Since Y0 < ∞, δu > 0 and r > 1, we finally conclude
    E(T0 | F0) ≤ ∑_{t=0}^{∞} (1 − Pr(Yt = 0 | F0)) ≤ ∑_{t=0}^{∞} Y0/(δu r^t) < ∞,
as claimed.
Summing up, additive drift is concerned with the very simple scenario that there is a
progress of at least δu from all non-optimal states towards the target in (i) and a progress
of at most δℓ in (ii). Since the δ-values are independent of Xt , one has to use the worst-case
drift over all non-optimal Xt . This might lead to very bad bounds on the first hitting time,
which is why more general theorems (as mentioned in the introduction) were developed.
Interestingly, these more general theorems are often proved based on Theorem 1 using
an appropriate mapping (sometimes called Lyapunov function, potential function, distance
function or drift function) from the original state space to a new one. Informally, the
mapping “smoothes out” position-dependent drift into an (almost) position-independent
drift. We will use the same approach in the following.
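As a quick sanity check of the additive drift theorem, the following sketch (illustrative code, not part of the paper) simulates a walk whose drift is exactly δ = p in every non-zero state; the empirical mean hitting time then matches the additive-drift value X0/δ.

```python
import random

rng = random.Random(1)

def hitting_time(x0, p):
    """Walk on {0, 1, ..., x0} that decreases by 1 with probability p,
    so the drift E(X_t - X_{t+1} | X_t > 0) equals p in every step."""
    x, t = x0, 0
    while x > 0:
        if rng.random() < p:
            x -= 1
        t += 1
    return t

x0, p = 20, 0.25
avg = sum(hitting_time(x0, p) for _ in range(2000)) / 2000
print(avg, x0 / p)  # empirical mean vs. the additive-drift value X0/delta = 80
```

Here the drift bound is tight, so Theorem 1(i) and 1(ii) coincide and the bound is exact; with position-dependent drift only the inequality survives, which is what motivates the mapping g discussed above.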
3 General Drift Theorem
In this section, we present our general drift theorem. As pointed out in the introduction,
we strive for a very general statement, which is partly at the expense of simplicity. More
user-friendly specializations will be given later. Nevertheless, the underlying idea of the
complicated-looking general theorem is the same as in all drift theorems. We look into the
one-step drift E(Xt − Xt+1 | Ft ) and assume we have an (upper or lower) bound h(Xt ) on
the drift, which (possibly heavily) depends on Xt . Based on h, we define a new function
g (see Remark 2), with the aim of “smoothing out” the dependency, and the drift w. r. t.
g (formally, E(g(Xt ) − g(Xt+1 ) | Ft )) is analyzed. Statements (i) and (ii) of the following
Theorem 3 provide bounds on E(T0 ) based on the drift w. r. t. g. In fact, g can be defined in
a very similar way as in existing variable-drift theorems [21, 28, 33], such that Statements (i)
and (ii) can be understood as generalized variable drift theorems for upper and lower bounds
on the expected hitting time, respectively.
Statements (iii) and (iv) are concerned with tail bounds on the hitting time. Here
moment-generating functions (mgfs.) of the drift w. r. t. g come into play, formally
E(e−λ(g(Xt )−g(Xt+1 )) | Ft )
is bounded. Again for generality, bounds on the mgf. may depend on the point of time t,
as captured by the bounds βu (t) and βℓ (t). We will see an example in Section 4 where the
mapping g smoothes out the position-dependent drift into a (nearly) position-independent
and time-independent drift, while the mgf. of the drift w. r. t. g still heavily depends on the
current point of time t (and indirectly on the position expected at this time).
Our drift theorem generalizes virtually all existing drift theorems concerned with a drift
towards the target, including the variable drift theorems for upper [21, 33, 28] and lower
bounds [8] (see Theorem 16 and Theorem 18), a non-monotone variable drift theorem [15]
(see Theorem 22), and multiplicative drift theorems [9, 37, 7] (see Theorem 19 and Theorem 20). Our theorem also generalizes fitness-level theorems [36, 35] (see Theorem 17 and
Theorem 21), another well-known technique in the analysis of randomized search heuristics. These generalizations are shown in Appendix A. Note that we do not consider the
case of negative drift (drift away from the target) as studied in [30, 31] since this scenario
is handled with structurally different techniques.
Remark 2. If for some function h : [xmin, xmax] → R+ with 1/h(x) integrable on [xmin, xmax], either E(Xt − Xt+1 ; Xt ≥ xmin | Ft) ≥ h(Xt) or E(Xt − Xt+1 ; Xt ≥ xmin | Ft) ≤ h(Xt) holds, it is recommended to define the function g in Theorem 3 as

    g(x) := xmin/h(xmin) + ∫_{xmin}^{x} 1/h(y) dy

for x ≥ xmin and g(0) := 0. In proofs, we will instead often use the extension

    g(x) := x/h(xmin) + ∫_{xmin}^{x} 1/h(y) dy

for x ≥ 0.
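To make Remark 2 concrete, here is a small numerical sketch (our own illustration; the function name and quadrature scheme are not from the paper) that evaluates g by a trapezoidal rule. For multiplicative drift h(y) = y with xmin = 1 the definition gives g(x) = 1 + ln x, so g(e) should equal 2.

```python
import math

def g(x, h, xmin, n=10000):
    """Potential function from Remark 2: g(x) = xmin/h(xmin) plus the
    integral of 1/h(y) over [xmin, x], with g(0) = 0; the integral is
    approximated by a trapezoidal rule with n panels."""
    if x == 0:
        return 0.0
    step = (x - xmin) / n
    ys = [xmin + i * step for i in range(n + 1)]
    vals = [1.0 / h(y) for y in ys]
    integral = step * (sum(vals) - 0.5 * (vals[0] + vals[-1]))
    return xmin / h(xmin) + integral

h = lambda y: y           # multiplicative drift h(y) = y, xmin = 1
print(g(math.e, h, 1.0))  # closed form: 1 + ln(e) = 2
```

The same routine applies to any drift function h for which no closed-form antiderivative of 1/h is at hand.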
Theorem 3 (General Drift Theorem). Let (Xt)t∈N0 be a stochastic process, adapted to a filtration (Ft)t∈N0, over some state space S ⊆ {0} ∪ [xmin, xmax], where xmin ≥ 0. Let g : {0} ∪ [xmin, xmax] → R≥0 be any function such that g(0) = 0 and 0 < g(x) < ∞ for all x ∈ [xmin, xmax]. Let Ta = min{t | Xt ≤ a} for a ∈ {0} ∪ [xmin, xmax]. Then:

(i) If E(g(Xt) − g(Xt+1) ; Xt ≥ xmin | Ft) ≥ αu for some αu > 0 then E(T0 | X0) ≤ g(X0)/αu.

(ii) If E(g(Xt) − g(Xt+1) ; Xt ≥ xmin | Ft) ≤ αℓ for some αℓ > 0 then E(T0 | X0) ≥ g(X0)/αℓ.

(iii) If there exist λ > 0 and a function βu : N0 → R+ such that

    E(e^{−λ(g(Xt)−g(Xt+1))} ; Xt > a | Ft) ≤ βu(t)

then Pr(Ta > t | X0) < (∏_{r=0}^{t−1} βu(r)) · e^{λ(g(X0)−g(a))} for t > 0.

(iv) If there exist λ > 0 and a function βℓ : N0 → R+ such that

    E(e^{λ(g(Xt)−g(Xt+1))} ; Xt > a | Ft) ≤ βℓ(t)

then Pr(Ta < t | X0 > a) ≤ (∑_{s=1}^{t−1} ∏_{r=0}^{s−1} βℓ(r)) · e^{−λ(g(X0)−g(a))} for t > 0. If additionally the set of states S ∩ {x | x ≤ a} is absorbing, then Pr(Ta < t | X0 > a) ≤ (∏_{r=0}^{t−1} βℓ(r)) · e^{−λ(g(X0)−g(a))}.
Statement (ii) is also valid (but useless) if the expected hitting time is infinite. Appendix A studies specializations of these first two statements into existing variable and multiplicative drift theorems, which are mostly concerned with the expected hitting time.

Special cases of (iii) and (iv). If E(e^{−λ(g(Xt)−g(Xt+1))} ; Xt > a | Ft) ≤ βu for some time-independent βu, then Statement (iii) simplifies to Pr(Ta > t | X0) < βu^t · e^{λ(g(X0)−g(a))}; similarly for Statement (iv).
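The time-independent special case can be checked numerically. In the sketch below (our own illustration, not from the paper), g is the identity, a = 0, and the step ∆ = g(Xt) − g(Xt+1) is Bernoulli(1/2), so βu = E(e^{−λ∆}) = (1 + e^{−λ})/2 and the bound reads Pr(T0 > t) < βu^t · e^{λX0}; the simulated tail probability stays below it.

```python
import math
import random

rng = random.Random(7)

def hitting_time(x0):
    """Walk that decreases by 1 with probability 1/2 (g = identity)."""
    x, t = x0, 0
    while x > 0:
        if rng.random() < 0.5:
            x -= 1
        t += 1
    return t

x0, t0, lam = 5, 30, 1.0
beta_u = (1 + math.exp(-lam)) / 2          # E(e^{-lam * Delta}), Delta ~ Bernoulli(1/2)
bound = beta_u ** t0 * math.exp(lam * x0)  # Theorem 3(iii) with a = 0, g(a) = 0
runs = 100000
emp = sum(hitting_time(x0) > t0 for _ in range(runs)) / runs
print(emp, bound)  # empirical tail probability vs. the drift-theorem bound
```

As expected of a Chernoff-type argument, the bound is valid but not tight; its value lies in holding uniformly for all t with a product structure over the steps.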
On xmin . Some specializations of Theorem 3 require a “gap” in the state space between
optimal and non-optimal states, modelled by xmin > 0. One example is multiplicative drift,
see Theorem 19 in Section A.2. Another example is the process defined by X0 ∼ Unif[0, 1]
and Xt = 0 for t > 0. Its first hitting time of state 0 cannot be derived by drift arguments
since the lower bound on the drift towards the optimum within the interval [0, 1] has limit 0.
The proof of our main theorem is not too complicated. The tail bounds in (iii) and (iv)
are obtained by the exponential method (a generalized Chernoff bound), an idea that is also implicit in [17].
Proof of Theorem 3. Since g(Xt ) = 0 iff Xt = 0 and the image of g is bounded, the first
two items follow from the classical additive drift theorem (Theorem 1). To prove the third
one, we consider the stopped process that does not move after time Ta . We now use ideas
implicit in [17] and argue
Pr(Ta > t | X0 ) ≤ Pr(Xt > a | X0 ) = Pr(g(Xt ) > g(a) | X0 )
= Pr(eλg(Xt ) > eλg(a) | X0 ) < E(eλg(Xt )−λg(a) | X0 ),
where the first equality uses that g(x) is monotonically increasing, the second one that x ↦ e^x is a bijection, and the last inequality is Markov’s inequality. Now,
E(eλg(Xt ) | X0 ) = E(eλg(Xt−1 ) · E(e−λ(g(Xt−1 )−g(Xt )) | Ft−1 ) | X0 )
≤ E(eλg(Xt−1 ) | X0 ) · βu (t − 1)
using the prerequisite from the third item. Unfolding the remaining expectation inductively
(note that this does not assume independence of the differences g(Xr−1 ) − g(Xr )), we get
    E(e^{λg(Xt)} | X0) ≤ e^{λg(X0)} ∏_{r=0}^{t−1} βu(r),

altogether

    Pr(Ta > t | X0) < e^{λ(g(X0)−g(a))} ∏_{r=0}^{t−1} βu(r),

which proves the third item.
The fourth item is proved similarly to the third one. By a union bound,

    Pr(Ta < t | X0 > a) ≤ ∑_{s=1}^{t−1} Pr(g(Xs) ≤ g(a) | X0)

for t > 0. Note that g(Xt) < g(xmin) implies g(Xt) = g(0) = 0. Moreover,

    Pr(g(Xs) ≤ g(a) | X0) = Pr(e^{−λg(Xs)} ≥ e^{−λg(a)} | X0) ≤ E(e^{−λg(Xs)+λg(a)} | X0)

using again Markov’s inequality. By the prerequisites, we get

    E(e^{−λg(Xs)} | X0) ≤ e^{−λg(X0)} ∏_{r=0}^{s−1} βℓ(r).

Altogether,

    Pr(Ta < t) ≤ ∑_{s=1}^{t−1} e^{−λ(g(X0)−g(a))} ∏_{r=0}^{s−1} βℓ(r).

If furthermore S ∩ {x | x ≤ a} is absorbing, then the event Xt ≤ a is necessary for Ta < t. In this case,

    Pr(Ta < t | X0) ≤ Pr(g(Xt) ≤ g(a) | X0) ≤ e^{−λ(g(X0)−g(a))} ∏_{r=0}^{t−1} βℓ(r).
Given some assumptions on the “drift” function h that typically hold, Theorem 3 can
be simplified. The following corollary will be used to prove the multiplicative drift theorem
(Theorem 19).
Corollary 4. Let (Xt)t∈N0 be a stochastic process, adapted to a filtration (Ft)t∈N0, over some state space S ⊆ {0} ∪ [xmin, xmax], where xmin ≥ 0. Let h : [xmin, xmax] → R+ be a function such that 1/h(x) is integrable on [xmin, xmax] and h(x) is differentiable on [xmin, xmax]. Then the following statements hold for the first hitting time T := min{t | Xt = 0}.

(i) If E(Xt − Xt+1 ; Xt ≥ xmin | Ft) ≥ h(Xt) and (d/dx)h(x) ≥ 0, then E(T | X0) ≤ xmin/h(xmin) + ∫_{xmin}^{X0} 1/h(y) dy.

(ii) If E(Xt − Xt+1 ; Xt ≥ xmin | Ft) ≤ h(Xt) and (d/dx)h(x) ≤ 0, then E(T | X0) ≥ xmin/h(xmin) + ∫_{xmin}^{X0} 1/h(y) dy.

(iii) If E(Xt − Xt+1 ; Xt ≥ xmin | Ft) ≥ h(Xt) and (d/dx)h(x) ≥ λ for some λ > 0, then Pr(T > t | X0) < exp(−λ(t − xmin/h(xmin) − ∫_{xmin}^{X0} 1/h(y) dy)).

(iv) If E(Xt − Xt+1 ; Xt ≥ xmin | Ft) ≤ h(Xt) and (d/dx)h(x) ≤ −λ for some λ > 0, then Pr(T < t | X0 > 0) < ((e^{λt} − e^{λ})/(e^{λ} − 1)) · exp(−λ(xmin/h(xmin) + ∫_{xmin}^{X0} 1/h(y) dy)).
Proof. As in Remark 2, let g(x) := x/h(xmin) + ∫_{xmin}^{x} 1/h(y) dy for x ≥ 0. Note that for the second derivative we have g′′(x) = −h′(x)/h(x)^2.

For (i), it suffices to show that condition (i) of Theorem 3 is satisfied for αu := 1. From the assumption h′(x) ≥ 0, it follows that g′′(x) ≤ 0, hence g is a concave function. Jensen’s inequality therefore implies that

    E(g(Xt) − g(Xt+1) ; Xt ≥ xmin | Ft) ≥ g(Xt) − g(E(Xt+1 ; Xt ≥ xmin | Ft)) ≥ ∫_{Xt−h(Xt)}^{Xt} 1/h(y) dy ≥ (1/h(Xt)) · h(Xt) = 1,

where the last inequality holds because h is a non-decreasing function.

For (ii), it suffices to show that condition (ii) of Theorem 3 is satisfied for αℓ := 1. From the assumption h′(x) ≤ 0, it follows that g′′(x) ≥ 0, hence g is a convex function. Jensen’s inequality therefore implies that

    E(g(Xt) − g(Xt+1) ; Xt ≥ xmin | Ft) ≤ g(Xt) − g(E(Xt+1 ; Xt ≥ xmin | Ft)) ≤ ∫_{Xt−h(Xt)}^{Xt} 1/h(y) dy ≤ (1/h(Xt)) · h(Xt) = 1,

where the last inequality holds because h is a non-increasing function.

For (iii), it suffices to show that condition (iii) of Theorem 3 is satisfied for βu := e^{−λ}. Let f1(x) := e^{λg(x)} and note that f1′′(x) = (λe^{λg(x)}/h(x)^2) · (λ − h′(x)). Since h′(x) ≥ λ, it follows that f1′′(x) ≤ 0 and f1 is a concave function. By Jensen’s inequality, it holds that

    E(e^{−λ(g(Xt)−g(Xt+1))} ; Xt ≥ xmin | Ft) ≤ e^{−λr},

where

    r := g(Xt) − g(E(Xt+1 ; Xt ≥ xmin | Ft)) ≥ ∫_{Xt−h(Xt)}^{Xt} 1/h(y) dy ≥ (1/h(Xt)) · h(Xt) = 1,

where the last inequality holds because h is monotonically increasing.

For (iv), it suffices to show that condition (iv) of Theorem 3 is satisfied for βℓ := e^{λ}. Let f2(x) := e^{−λg(x)} and note that f2′′(x) = (λe^{−λg(x)}/h(x)^2) · (λ + h′(x)). Since h′(x) ≤ −λ, it follows that f2′′(x) ≤ 0 and f2 is a concave function. By Jensen’s inequality, it holds that

    E(e^{λ(g(Xt)−g(Xt+1))} ; Xt ≥ xmin | Ft) ≤ e^{λr},

where

    r := g(Xt) − g(E(Xt+1 ; Xt ≥ xmin | Ft)) ≤ ∫_{Xt−h(Xt)}^{Xt} 1/h(y) dy ≤ (1/h(Xt)) · h(Xt) = 1,

where the last inequality holds because h is monotonically decreasing.
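Corollary 4(i) can be sanity-checked on a process with multiplicative drift. In the sketch below (illustrative code, not from the paper), X decreases by 1 with probability X/n, so h(x) = x/n with xmin = 1, and the bound evaluates to n + n·ln X0; for X0 = n the true expectation is n·H_n, slightly below the bound.

```python
import math
import random

rng = random.Random(3)
n = 100

def hitting_time(x0):
    """X decreases by 1 with probability X/n: drift h(x) = x/n, xmin = 1."""
    x, t = x0, 0
    while x > 0:
        if rng.random() < x / n:
            x -= 1
        t += 1
    return t

runs = [hitting_time(n) for _ in range(500)]
avg = sum(runs) / len(runs)
# xmin/h(xmin) + integral of 1/h(y) = n/y over [1, X0], with X0 = n
bound = n + n * math.log(n)
print(avg, bound)  # roughly n*H_n ~ 519 vs. bound ~ 560 for n = 100
```

The gap between n·H_n and n + n·ln n is only the additive term n(1 − γ) + o(n), illustrating how sharp the variable-drift upper bound can be.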
Conditions (iii) and (iv) of Theorem 3 involve an mgf., which may be tedious to compute.
Inspired by [17] and [25], we show that bounds on the mgfs. follow from more user-friendly
conditions based on stochastic dominance between random variables, here denoted by ≺.
Theorem 5. Let (Xt)t∈N0 be a stochastic process, adapted to a filtration (Ft)t∈N0, over some state space S ⊆ {0} ∪ [xmin, xmax], where xmin ≥ 0. Let h : [xmin, xmax] → R+ be a function such that 1/h(x) is integrable on [xmin, xmax]. Suppose there exist a random variable Z and some λ > 0 such that |∫_{Xt+1}^{Xt} 1/h(max{x, xmin}) dx| ≺ Z for Xt ≥ xmin and E(e^{λZ}) = D for some D > 0. Then the following two statements hold for the first hitting time T := min{t | Xt = 0}.

(i) If E(Xt − Xt+1 ; Xt ≥ xmin | Ft) ≥ h(Xt) then for any δ > 0, η := min{λ, δλ^2/(D − 1 − λ)} and t > 0 it holds that

    Pr(T > t | X0) ≤ exp(η(xmin/h(xmin) + ∫_{xmin}^{X0} 1/h(x) dx − (1 − δ)t)).

(ii) If E(Xt − Xt+1 ; Xt ≥ xmin | Ft) ≤ h(Xt) then for any δ > 0, η := min{λ, δλ^2/(D − 1 − λ)} and t > 0 it holds that

    Pr(T < t | X0) ≤ exp(η((1 + δ)t − xmin/h(xmin) − ∫_{xmin}^{X0} 1/h(x) dx)) · 1/(η(1 + δ)).

If state 0 is absorbing then

    Pr(T < t | X0) ≤ exp(η((1 + δ)t − xmin/h(xmin) − ∫_{xmin}^{X0} 1/h(x) dx)).
Remark 6. Theorem 5 assumes a stochastic dominance of the kind

    |∫_{Xt+1}^{Xt} 1/h(max{x, xmin}) dx| ≺ Z.

This is implied by |Xt+1 − Xt|/h(xmin) ≺ Z.
Proof. As in Remark 2, we define g : {0} ∪ [0, xmax] → R≥0 by g(x) := x/h(xmin) + ∫_{xmin}^{x} 1/h(y) dy for x ≥ 0. Let ∆t := g(Xt) − g(Xt+1) and note that ∆t = ∫_{Xt+1}^{Xt} 1/h(max{x, xmin}) dx. To satisfy the third condition of Theorem 3, we note

    E(e^{−η∆t}) = 1 − ηE(∆t) + ∑_{k=2}^{∞} η^k E(∆t^k)/k!
    ≤ 1 − ηE(∆t) + η^2 ∑_{k=2}^{∞} η^{k−2} E(|∆t|^k)/k!
    ≤ 1 − ηE(∆t) + η^2 ∑_{k=2}^{∞} λ^{k−2} E(|∆t|^k)/k!
    ≤ 1 − η + (η^2/λ^2) · (E(e^{λZ}) − λE(Z) − 1),

where we have used E(∆t) ≥ 1 (proved in Theorem 3) and λ ≥ η. Since |∆t| ≺ Z, also E(Z) ≥ 1. Using E(e^{λZ}) = D and η ≤ δλ^2/(D − 1 − λ), we obtain

    E(e^{−η∆t}) ≤ 1 − η + δη = 1 − (1 − δ)η ≤ e^{−η(1−δ)}.

Setting βu := e^{−η(1−δ)} and using η as the λ of Theorem 3 proves the first statement.

For the second statement, analogous calculations prove

    E(e^{η∆t}) ≤ 1 + (1 + δ)η ≤ e^{η(1+δ)}.

We set βℓ := e^{η(1+δ)}, use η as the λ of Theorem 3.(iv) and note that

    (e^{λ(1+δ)t} − e^{λ(1+δ)})/(e^{λ(1+δ)} − 1) ≤ e^{λ(1+δ)t}/(λ(1 + δ)),

which was to be proven. If additionally an absorbing state 0 is assumed, the stronger upper bound follows from the corresponding statement in Theorem 3.(iv).
4 Applications of the Tail Bounds
So far we have mostly derived bounds on the expected first hitting time using Statements (i)
and (ii) of our general drift theorem. As our main contribution, we show that the general
drift theorem (Theorem 3), together with the function g defined explicitly in Remark 2
in terms of the one-step drift, constitute a very general and precise tool for analysis of
stochastic processes. In particular, it provides very sharp tail bounds on the running time
of randomized search heuristics which were not obtained before by drift analysis. It also
provides tail bounds on random recursions, such as those in analysis of random permutations
(see Section 4.2).
12
As already noted, virtually all existing drift theorems, including an existing result proving tail bounds with multiplicative drift, can be phrased as special cases of the general drift
theorem (see Appendix A). Recently, in [23] different tail bounds were proven for the scenario of additive drift using classical concentration inequalities such as Azuma-Hoeffding
bounds. These bounds are not directly comparable to the ones from our general drift
theorem; they are more specific but yield even stronger exponential bounds.
We first give sharp tail bounds on the optimization time of the (1+1) EA which maximizes pseudo-Boolean functions f : {0, 1}n → R. The optimization time is defined in the
canonical way at the smallest t such that xt is an optimum. We consider classical benchmark problems from the theory of RSHs. Despite their simplicity, their analysis has turned
out surprisingly difficult and research is still ongoing.
Algorithm 1 (1+1) Evolutionary Algorithm (EA)
Choose uniformly at random an initial bit string x0 ∈ {0, 1}n .
for t := 0 to ∞ do
Create x′ by flipping each bit in xt independently with probability 1/n (mutation).
xt+1 := x′ if f (x′ ) ≥ f (xt ), and xt+1 := xt otherwise (selection).
end for
4.1
OneMax, Linear Functions and LeadingOnes
A simple pseudo-Boolean function is given by OneMax(x1 , . . . , xn ) = x1 + · · · + xn . It is
included in the class of so-called linear functions f (x1 , . . . , xn ) = w1 xn + · · · + wn xn , where
wi ∈ R for 1 ≤ i ≤ n. We start by deriving very precise bounds on first the expected
optimization time of the (1+1) EA on OneMax and then prove tail bounds. The lower
bounds obtained will imply results for all linear functions. Note that in [8], already the
following result has been proved using variable drift analysis.
Theorems 3 and 5 in [8] The expected optimization time of the (1+1) EA on OneMax
is at most en ln n − c1 n + O(1) and at least en ln n − c2 n for certain constants c1 , c2 > 0.
The constant c2 is not made explicit in [8], whereas the constant c1 is stated as 0.369.
However, unfortunately this value is due to a typo in the very last line of the proof –
c1 should have been 0.1369 instead. We correct this mistake in a self-contained proof.
Furthermore, we improve the lower bound using variable drift. To this end, we use the
following bound on the drift.
Lemma 7. Let Xt denote the number of zeros of the current search point of the (1+1) EA
on OneMax. Then
n−x
1 n−x x
1
x
x
1−
≤ E(Xt − Xt+1 | Xt = x) ≤
1−
.
1+
2
n
n
n
(n − 1)
n
Proof. The lower bound considers the expected number of flipping zero-bits, assuming that
no one-bit flips. The upper bound is obtained in the proof of Lemma 6 in [8] and denoted
by S1 · S2 , but is not made explicit in the statement of the lemma.
13
Theorem 8. The expected optimization time of the (1+1) EA on OneMax is at most
en ln n − 0.1369n + O(1) and at least en ln n − 7.81791n − O(log n).
≤ X0 ≤ (1+ǫ)n
for an arbitrary
Proof. Note that with probability 1 − 2−Ω(n) we have (1−ǫ)n
2
2
constant ǫ > 0. Hereinafter, we assume this event to happen, which only adds an error
term of absolute value 2−Ω(n) · n log n = 2−Ω(n) to the expected optimization time.
In order to apply the variable drift theorem (more precisely, Theorem 16 for the upper
and Theorem 18 for the lower bound), we manipulate and estimate the expressions from
Lemma 7 to make them easy to integrate. To prove the upper bound on the optimization
time, we observe
1 n−x x
E(Xt − Xt+1 | Xt = x) ≥ 1 −
n
n
n−1
1
1 −x x
1
= 1−
· 1−
· · 1−
n
n
n
n
x x
1
=: hℓ (x).
≥ e−1+ n · · 1 −
n
n
Now, by the variable drift theorem, the optimization time T satisfies
!
Z (1+ǫ)n/2
Z (1+ǫ)n/2
x
1
1 −1
1
n
1− n
e
1−
+
dx ≤ en +
E(T | X0 ) ≤
·
hℓ (1)
hℓ (x)
x
n
1
1
1
(1+ǫ)n/2
≤ en − en [E1 (x/n)]1
1+O
,
n
R∞
e−t
t
dt denotes the exponential integral (for x > 0). The latter is
P
(−x)k
estimated using the series representation E1 (x) = − ln x−γ− ∞
k=1 kk! , with γ = 0.577 . . .
being the Euler-Mascheroni constant (see Equation 5.1.11 in [1]). We get for sufficiently
small ǫ that
where E1 (x) :=
x
(1+ǫ)n/2
− [E1 (x/n)]1
= E1 (1/n) − E1 ((1 + ǫ)/2) ≤ − ln(1/n) − γ + O(1/n) − 0.559774.
Altogether,
E(T | X0 ) ≤ en ln n + en(1 − 0.559774 − γ) + O(log n) ≤ en ln n − 0.1369n + O(log n)
which proves the upper bound.
For the lower bound on the optimization time, we need according to Theorem 18 a
monotone process (which is satisfied) and a function c bounding the progress towards the
optimum. We use c(x) = x − log x − 1. Since each bit flips with probability 1/n, we get
Pr(Xt+1 ≤ Xt − log(Xt ) − 1) ≤
Xt
log(Xt ) + 1
log(Xt )+1
log(Xt )+1
eXt
1
≤
.
n
n log(Xt ) + n
14
The last bound takes its maximum at Xt = 2 within the interval [2, . . . , n] and is O(n−2 )
then. For Xt = 1, we trivially have Xt+1 ≥ c(Xt ) = 0. Hence, by assuming Xt+1 ≥ c(Xt )
for all t = O(n log n), we only introduce an additive error of value O(log n).
Next the upper bound on the drift from Lemma 7 is manipulated. We get for some
sufficiently large constant c∗ > 0 that
n−x
x
x
1
·
1+
E(Xt − Xt+1 | Xt = x) ≤
1−
2
n
(n − 1)
n
n−x
2
x
1
+
x/(n
−
1)
x
+ x(n−x)
−1+ n
n2
≤e
· ·
n
1 + x/(n2 )
2x
c∗
x
≤ e−1+ n · · 1 +
=: h∗ (x),
n
n
where we used 1 + x ≤ ex twice. The drift theorem requires a function hu (x) such that
h∗ (x) ≤ hu (c(x)) = hu (x − log x − 1). Introducing the substitution y := y(x) := x − log x − 1
and its inverse function x(y), we choose hu (y) := h∗ (x(y)).
We obtain
!
Z (1−ǫ)n/2
1
1
1
+
dy
1−O
E(T | X0 ) ≥
∗
∗
h (x(1))
h (x(y))
n
1
!
Z x((1−ǫ)n/2)
1
1
1
1
+
1−
dx
1−O
≥
∗
∗
h (4)
h (x)
x
n
x(1)
!
Z (1−ǫ)n/2
2x
en
1
1
n
≥
e1− n ·
+
1−
dx
1−O
4
x
x
n
2
!
Z (1−ǫ)n/2
Z (1−ǫ)n/2
2x
en
1
n
n
1− 2x
1−
e n · dx −
e n · 2 dx
=
+
1−O
4
x
x
n
2
2
where the second inequality uses integration by substitution and x(1) = 4, the third one
x(y) ≤ y, and the last one partial integration.
With respect to the first integral in the last bound, the only difference compared to
2x
the upper bound is the 2 in the exponent of e−1+ n , such that we can proceed analogously
to the above and obtain −enE1 (2x/n) + C as anti-derivative. The anti-derivative of the
second integral is 2eE1 (2x/n) − e1−2x/n nx + C.
We obtain
i(1−ǫ)n/2
en h
1
1−2x/n n
E(T | X0 ) ≥
+ −(2e + en)E1 (2x/n) + e
1−O
4
x 2
n
Now, for sufficiently small ǫ,
(1−ǫ)n/2
− [E1 (2x/n)]2
≥ − ln(8/n) − γ − O(1/n) − 0.21939 ≥ ln n − 2.76048 − O(1/n)
and
h
e1−2x/n
en
n i(1−ǫ)n/2
≥ 1.9999 −
− O(1/n).
x 2
4
15
Altogether,
E(T | X0 ) ≥ en ln n − 7.81791n − O(log n)
as suggested.
Knowing the expected optimization time precisely, we now turn to our main new contribution, i. e., to derive sharp bounds. Note that the following upper concentration inequality
in Theorem 9 is not new but is already implicit in the work on multiplicative drift analysis
by [12]. In fact, a very similar upper bound is even available for all linear functions [37].
By contrast, the lower concentration inequality is a novel and non-trivial result.
Theorem 9. The optimization time of the (1+1) EA on OneMax is at least en ln n −
cn − ren, where c is a constant, with probability at least 1 − e−r/2 for any r ≥ 0. It is at
most en ln n + ren with probability at least 1 − e−r .
Proof of Theorem 4, upper tail. This tail can be easily derived from the multiplicative drift
theorem (Theorem 19). Let Xt denote the number of zeros at time t. By Lemma 7, we can
choose δ := 1/(en). Then the upper bound follows since X0 ≤ n and xmin = 1.
We only consider the lower tail. The aim is to prove it using Theorem 3.(iv), which
includes a bound on the moment-generating function of the drift of g. We first set up the
h (and thereby the g) used for our purposes. Obviously, xmin := 1.
Lemma 10. Consider the (1+1) EA on OneMax and let the random variable Xt denote
the current number of zeros at time t ≥ 0. Then h(x) := exp (−1 + 2⌈x⌉/n) · (⌈x⌉/n) ·
(1 + c∗ /n) , where c∗ > 0 is a sufficiently large constant, satisfies the condition E(Xt −Xt+1 |
Ri
Xt = i) ≤ h(i) for i ∈ {1, . . . , n}. Moreover, define g(i) := xmin /h(xmin ) + xmin 1/h(y) dy
P
P t
1−2Xt+1 /n ·
and ∆t := g(Xt ) − g(Xt+1 ). Then g(i) = ij=1 1/h(j) and ∆t ≤ X
j=Xt+1 +1 e
(n/j).
x
n−x x is an upper bound on
Proof. According to Lemma 7, h∗ (x) := ((1 − n1 )(1 + (n−1)
2)
n
the drift. We obtain h(x) ≥ h∗ (x) using the simple estimations exposed in the proof of
Theorem 8, lower bound part.
The representation of g(i) as a sum follows immediately from h due to the ceilings. The
2⌈x⌉
∗
bound on ∆t follows from h by estimating e−1+ n · 1 + cn ≥ e−1+2x/n .
The next lemma provides a bound on the mgf. of the drift of g, which will depend on
the current state. Later, the state will be estimated based on the current point of time,
leading to a time-dependent bound on the mgf. Note that we do not need the whole natural
filtration based on X0 , . . . , Xt but only Xt since we are dealing with a Markov chain.
Lemma 11. Let λ := 1/(en) and i ∈ {1, . . . , n}. Then E(eλ∆t | Xt = i) ≤ 1 + λ + 2λ/i +
o(λ/log n).
Proof. We distinguish between three major cases.
16
Case 1: i = 1. Then Xt+1 = 0, implying ∆t ≤ en, with probability (1/n)(1−1/n)n−1 =
(1/(en))(1 + 1/(n − 1)) and Xt+1 = i otherwise. We get
1
1
1
λ∆t
1
E(e
| Xt = i) ≤
·e + 1−
+O 2
en
en
n
e−1
1
λ
(e − 2)λ
≤ 1+
+O 2 ≤ 1+λ+
+o
.
en
n
i
ln n
Case 2: 2 ≤ i ≤ ln3 n. Let Y := i − Xt+1 and note that Pr(Y ≥ 2) ≤ (ln6 n)/n2 since
the probability of flipping a zero-bit is at most (ln3 n)/n. We further subdivide the case
according to whether Y ≥ 2 or not.
Case 2a: 2 ≤ i ≤ ln3 n and Y ≥ 2. The largest value of ∆t is taken when Y = i.
Using Lemma 10 and estimating the i-th Harmonic number, we have λ∆t ≤ (ln i) + 1 ≤
3(ln ln n) + 1. The contribution to the mgf. is bounded by
6
λ
ln n
λ∆t
3 ln ln n+1
=o
E(e
.
· 1 {Xt+1 ≤ i − 2} | Xt = i) ≤ e
·
n2
ln n
Case 2b: 2 ≤ i ≤ ln3 n and Y < 2. Then Xt+1 ≥ Xt − 1, which implies ∆t ≤
en(ln(Xt ) − ln(Xt+1 )). We obtain
i−Xt+1
E(eλ∆t · 1 {Xt+1 ≥ i − 1} | Xt = i) ≤ E(eln(i/Xt+1 ) ) ≤ E(eln(1+ i−1 ) )
Y
= E 1+
,
i−1
P
where the first inequality estimated ki=j+1 1i ≤ ln(k/j) and the second one used Xt+1 ≥
i
(1 + O((ln3 n)/n)) for i ≤ ln3 n. This implies
i − 1. From Lemma 7, we get E(Y ) ≤ en
3
ln n
i − Xt+1
i
E 1+
≤1+
1+O
i−1
en(i − 1)
n
3
1
λ
ln n
1
2λ
=1+
· 1+
+o
1+O
=1+λ+
,
en
i−1
n
i
ln n
using i/(i − 1) ≤ 2 in the last step. Adding the bounds from the two sub-cases proves the
lemma in Case 2.
ln n
≤ 1/(ln n)!. We further
Case 3: i > ln3 n. Note that Pr(Y ≥ ln n) ≤ lnnn n1
subdivide the case according to whether Y ≥ ln n or not.
Case 3a: i > ln3 n and Y ≥ ln n. Since ∆t ≤ en(ln n + 1), we get
λ
1
ln n+1
3
λ∆t
·e
=o
· 1 Xt+1 ≤ i − ln n | Xt = i) ≤
E(e
(ln n)!
ln n
Case 3b: i > ln3 n and Y < ln n. Then, using Lemma 10 and proceeding similarly as
in Case 2b,
E(eλ∆t · 1 {Xt+1 > i − ln n} | Xt = i)
≤ E(eλ exp(1−2(i−ln n)/n)·n ln(i/Xt+1 ) | Xt = i) = E
17
i − Xt+1
1+
Xt+1
exp((−2i+ln n)/n) !
.
Using i > ln3 n and Jensen’s inequality, the last expectation is at most
exp((−2i+ln n)/n)
exp((−2i+ln n)/n)
i − Xt+1
Y
1+E
≤ 1+E
Xt+1
i − ln n
exp((−2i+ln n)/n)
Y
≤ 1+E
,
i(1 − 1/ln2 n)
where the last inequality used again i > ln3 n. Since E(Y ) ≤ e−1+2i/n ni (1 + c∗ /n), we
conclude
!exp((−2i+ln n)/n)
2i/n
e
E(eλ∆t · 1 {Xt+1 > i − ln n} | Xt = i) ≤ 1 +
en(1 − 1/ln2 n)
λ
ln n
1
≤1+λ+o
,
1+O
≤ 1+
n2
ln n
en(1 − 1/ln2 n)
where we used (1 + ax)1/a ≤ 1 + x for x ≥ 0 and a ≥ 1. Adding up the bounds from the
two sub-cases, we have proved the lemma in Case 3.
Altogether,
2λ
λ
λ∆t
E(e
| Xt = i) ≤ 1 + λ +
+o
.
i
ln n
for all i ∈ {1, . . . , n}.
The bound on the mgf. of ∆t derived in Lemma 11 is particularly large for i = O(1), i. e.,
if the current state Xt is small. If Xt = O(1) held during the whole optimization process,
we could not prove the lower tail in Theorem 9 from the lemma. However, it is easy to see
that Xt = i only holds for an expected number of at most en/i steps. Hence, most of the
time the term 2λ/i is negligible, and the time-dependent βℓ (t)-term from Theorem 3.(iv)
comes into play. We make this precise in the following proof, where we iteratively bound
the probability of the process being at “small” states.
Proof of Theorem 9, lower tail. With overwhelming probability 1−2−Ω(n) , X0 ≥ (1−ǫ)n/2
for an arbitrarily small constant ǫ > 0, which we assume to happen. We consider phases in
the optimization process. Phase 1 starts with initialization and ends before the first step
√
ln n−1
where Xt < e 2 = n · e−1/2 . Phase i, where i > 1, follows Phase i − 1 and ends before
√
the first step where Xt < n · e−i/2 . Obviously, the optimum is not found before the end
of Phase ln(n); however, this does not tell us anything about the optimization time yet.
We say that Phase i is typical if it does not end before time eni−1. We will prove induci √
tively that the probability of one of the first i phases not being typical is at most c′ e 2 / n =
i−ln n
c′ e 2 for some constant c′ > 0. This implies the theorem since an optimization time of
at least en ln n − cn − ren is implied by the event that Phase ln n − ⌈r − c/e⌉ is typical,
−r+c/e+1
−r
2
= 1 − e 2 for c = e(2 ln c′ + 1).
which has probability at least 1 − c′ e
Fix some k > 1 and assume for the moment that all the first k − 1 phases are typical.
√
Then for 1 ≤ i ≤ k − 1, we have Xt ≥ ne−i/2 in Phase i, i. e., when en(i − 1) ≤ t ≤ eni − 1.
18
We analyze the event that additionally Phase k is typical, which subsumes the event Xt ≥
√ −k/2
ne
throughout Phase k. According to Lemma 11, we get in Phase i, where 1 ≤ i ≤ k,
i/2
√
√
+o( lnλn )
λ+ 2λe
λ∆t
n
| Xt ≤ 1 + λ + 2λei/2 / n + o(λ/ ln n) ≤ e
E e
The expression now depends on the time only, therefore for λ := 1/(en)
enk−1
Y
t=0
Pk
k/2
i/2 +enk·o
√
λenk+ 2λen
( lnλn ) ≤ ek+ 6en√n +o(1) ≤ ek+o(1) ,
i=1 e
n
E eλ∆t | X0 ≤ e
√
where we used that k ≤ ln n. From Theorem 3.(iv) for a = ne−k/2 and t = enk − 1 we
obtain
√ −k/2
))
.
Pr(Ta < t) ≤ ek+o(1)−λ(g(X0 )−g( ne
From the proof of of Theorem 8, the lower bound part, we already know that g(X0 ) ≥
en ln n − c′′ n for some constant c′′ > 0 (which is assumed large enough to subsume the
−O(log n) term). Moreover, g(x) ≤ en(ln x + 1) according to Lemma 10. We get
k−ln n+O(1)
√
2
Pr(Ta < t) ≤ ek+o(1)−ln n+O(1)−k/2+(ln n)/2 = e
= c′′′ ek/2 / n,
for some sufficiently large constant c′′′ > 0, which proves the bound on the probability of Phase k not being typical (without making statements about the earlier phases).
The
that all phases up to and including Phase k are typical is at least 1 −
Pk probability
√
√
′′′
i/2
( i=1 c e )/ n ≥ 1 − c′ ek/2 / n for a constant c′ > 0.
We now deduce a concentration inequality w. r. t. linear functions, essentially depending
on all variables, i. e., functions of the kind f (x1 , . . . , xn ) = w1 x1 + · · · + wn xn , where wi 6= 0.
This function class contains OneMax and has been studied intensely the last 15 years [37].
Theorem 12. The optimization time of the (1+1) EA on an arbitrary linear function with
non-zero weights is at least en ln n − cn − ren, where c is a constant, with probability at
least 1 − e−r/2 for any r ≥ 0. It is at most en ln n + (1 + r)en + O(1) with probability at
least 1 − e−r .
Proof. The upper tail is proved in Theorem 5.1 in [37]. The lower bound follows from the
lower tail in Theorem 9 in conjunction with the fact that the optimization time within the
class of linear functions is stochastically smallest for OneMax (Theorem 6.2 in [37]).
P Q
Finally, we consider LeadingOnes(x1 , . . . , xn ) := ni=1 ij=1 xj , another intensively
studied standard benchmark problem from the analysis of RSHs. Tail bounds on the optimization time of the (1+1) EA on LeadingOnes were derived in [11]. This result represents
a fundamentally new contribution, but suffers from the fact that it depends on a very specific structure and closed formula for the optimization time. Using a simplified version of
Theorem 3 (see Theorem 5), it is possible to prove similarly strong tail bounds without
needing this exact formula. As in [11], we are interested in a more general statement. Let
T (a) be the number of steps until a LeadingOnes-value of at least a is reached, where
0 ≤ a ≤ n. Let Xt := max{0, a − LeadingOnes(xt )} be the distance from the target a at
time t. Lemma 13 states the drift of (Xt )t∈N0 exactly, see also [11].
19
Lemma 13. For all i > 0, E(Xt − Xt+1 | Xt = i) = (2 − 2−n+a−i+1 )(1 − 1/n)a−i (1/n).
Proof. The leftmost zero-bit is at position a − i + 1. To increase the LeadingOnes-value
(it cannot decrease), it is necessary to flip this bit and not to flip the first a − i bits, which is
reflected by the last two terms in the lemma. The first term is due to the expected number
of free-rider bits (a sequence of previously random bits after the leftmost zero that happen
to be all 1 at the time of improvement). Note that there can be between 0 and n − a + i − 1
such bits. By the usual argumentation using a geometric distribution, the expected number
of free-riders in an improving step equals
n−a+i−1
X
k=0
min{n−a+i−1,k+1}
1
= 1 − 2−n+a−i+1 ,
k·
2
hence the expected progress in an improving step is 2 − 2−n+a−i+1 .
We can now supply the tail bounds, formulated as Statements (ii) and (iii) in the
following theorem. The first statement is an exact expression for the expected optimization
time, which has already been proved without drift analysis [11].
Theorem 14. Let T (a) the time for the (1+1) EA to reach a LeadingOnes-value of at
least a. Moreover, let r ≥ 0. Then
a
2
1
−1 .
(i) E(T (a)) = n 2−n 1 + n−1
−3/2
)
(ii) For 0 < a ≤ n − log n, with probability at least 1 − e−Ω(rn
a
1
n2
− 1 + r.
1+
T (a) ≤
2
n−1
−3/2
) − e−Ω(log
(iii) For log2 n − 1 ≤ a ≤ n, with probability at least 1 − e−Ω(rn
a
n2 − n
2 log2 n
1
T (a) ≥
−1−
1+
− r.
2
n−1
n
2
n)
Proof. The first statement is already contained in [11] and proved without drift analysis.
We now turn to the second statement. From Lemma 13, h(x) = (2−2/n)(1−1/n)a−x /n
is a lower bound on the drift E(Xt − Xt+1 | Xt = x) if x ≥ log n. To bound the change of
the g-function, we observe that h(x) ≥ 1/(en) for all x ≥ 1. This means that Xt − Xt+1 = k
implies g(Xt ) − g(Xt+1 ) ≤ enk. Moreover, to change the LeadingOnes-value by k, it is
necessary that
• the first zero-bit flips (which has probability 1/n)
• k − 1 free-riders occur.
20
The change does only get stochastically larger if we assume an infinite supply of free-riders.
Hence, g(Xt ) − g(Xt+1 ) is stochastically dominated by a random variable Z = enY , where
Y
• is 0 with probability 1 − 1/n and
• follows the geometric distribution with parameter 1/2 otherwise (where the support
is 1, 2, . . . ).
The mgf. of Y therefore equals
1
1/2
1
1
λY
E(e ) = 1 −
≤1+
,
e0 +
−λ
n
n e − (1 − 1/2)
n(1 − 2λ)
where we have used e−λ ≥ 1 − λ. For the mgf. of Z it follows
E(eλZ ) = E(eλenY ) ≤ 1 +
1
,
n(1 − 2enλ)
hence for λ := 1/(4en) we get D := E(eλZ ) = 1 + 2/n = 1 + 8eλ, which means D − 1 − λ =
(8e − 1)λ. We get
δλ2
δλ
δ
η :=
=
=
D−1−λ
8e − 1
4en(8e − 1)
(which is less than λ if δ ≤ 8e − 1) . Choosing δ := n−1/2 , we obtain η = Cn−3/2 for
C := 1/((8e − 1)(4e)).
R X0
1/h(x) dx + r)/(1 − δ) in the first statement of TheoWe set t := (xmin /h(xmin ) + xmin
rem 5. The integral within t can be bounded according to
Z X0
a
X
1
1
xmin
+
dx ≤
U :=
h(xmin )
(2 − 2/n)(1 − 1/n)a−i /n
xmin h(x)
i=1
a
1
1
n2
(1 + 1/(n − 1))a − 1
1
=
−1
+
=
·n·
1+
2 2n − 2
1/(n − 1)
2
n−1
Hence, using the theorem we get
Pr(T > t) = Pr(T > (U + r)/(1 − δ)) ≤ e−ηr ≤ e−Crn
−3/2
Since U ≤ en2 and 1/(1 − δ) ≤ 1 + 2δ = 1 + 2n−1/2 , we get
Pr(T ≥ U + 2en3/2 + 2r) ≤ e−Crn
−3/2
.
Using the upper bound on U derived above, we obtain
a
1
n2
−3/2 )
− 1 + r ≤ e−Ω(rn
1+
Pr T ≥
2
n−1
as suggested.
21
.
Finally, we prove the third statement of this theorem in a quite symmetrical way to the
second one. We can choose h(x) := 2(1 − 1/n)a−x /n as an upper bound on the drift E(Xt −
R X0
1/h(x) dx −
Xt+1 | Xt = x). The estimation of the E(eλZ ) still applies. We set t := ( xmin
2
r)/(1 − δ). Moreover, we assume X0 ≥ n − log n − 1, which happens with probability at
2
least 1 − e−Ω(log n) . Note that
2
a−log
Xn
1
1
L :=
dx ≥
2(1 − 1/n)a−i /n
xmin h(x)
i=1
a
log2 n !
1
1
n2 − n
1+
− 1+
=
2
n−1
n−1
a
log2 n
1
n2 − n
−1−
1+
,
≥
2
n−1
n
Z
X0
where the last inequality used ex ≤ 1 + 2x for x ≤ 1 and ex ≥ 1 + x for x ∈ R. The second
statement of Theorem 5 yields (since state 0 is absorbing)
Pr(T < t) = Pr(T < (L − r)/(1 + δ)) ≤ e−ηr ≤ e−Crn
Now, since
−3/2
.
L−r
≥ (L − r) − δ(L − r) ≥ L − r − en3/2 ,
1+δ
(using L ≤ en2 ), we get the third statement by analogous calculations as above.
4.2
An Application to Probabilistic Recurrence Relations
Drift analysis is not only useful in the theory of RSHs, but also in classical computer
science. Here, we study the probabilistic recurrence relation T (n) = a(n) + T (h(n)), where
n is the problem size, a(n) the amount of work at the current level of recursion, and h(n)
is a random variable, denoting the size of the problem at the next recursion level. The
asymptotic distribution (letting n → ∞) of the number of cycles is well studied [2], but
there are few results for finite n. Karp [22] studied this scenario using different probabilistic
techniques than ours. Assuming knowledge of E(h(n)), he proved upper tail bounds for
T (n), more precisely he analyzed the probability of T (n) exceeding the solution of the
“deterministic” process T (n) = a(n) + T (E(h(n))).
We pick up the example from [22, Section 2.4] on the number of cycles in a permutation π ∈ Sn drawn uniformly at random, where Sn denotes the set of all permutations of the n elements {1, . . . , n}. A cycle is a subsequence of indices i1 , . . . , iℓ such that
π(ij ) = i(j mod ℓ)+1 for 1 ≤ j ≤ ℓ. Each permutation partitions the elements into disjoint
cycles. The expected number of cycles in a random permutation is Hn = ln n + Θ(1). Moreover, it is easy to see that the length of the cycle containing any fixed element is uniform on
{1, . . . , n}. This gives rise to the probabilistic recurrence T (n) = 1+ T (h(n)) expressing the
random number of cyles, where h(n) is uniform on {0, . . . , n − 1} . As a result, [22] shows
that the number of cycles is larger than log2 (n+1)+a with probability at most 2−a+1 . Note
22
that the log2 (n), which results from the solution of the deterministic recurrence, is already
by a constant factor away from the expected value. Lower tail bounds are not obtained in
[22]. Using our drift theorem (Theorem 3), it however follows that the number of cycles is
sharply concentrated around its expectation.
Theorem 15. Let N be the number of cycles in a random permutation of n elements. Then
ǫ2
Pr(N < (1 − ǫ)(ln n)) ≤ e− 4 (1−o(1)) ln n
for any constant 0 < ǫ < 1. And for any constant ǫ > 0,
Pr(N ≥ (1 + ǫ)((ln n) + 1)) ≤ e−
min{ǫ,ǫ2 }
6
ln n
.
Proof. We regard the probabilistic recurrence as a stochastic process, where Xt , t ≥ 0,
denotes the number of elements not yet included in a cycle; X0 = n. As argued in [22], if
Xt = i then Xt+1 is uniform on {0, ..., i − 1}. Note that N equals the first hitting time for
Xt = 0, which is denoted by T0 in our notation. Obviously, N is stochastically larger than
Ta for any a > 0.
We now prove the lower tail using Theorem 3.(iv). We compute E(Xt+1 | Xt ) =
(Xt − 1)/2, which means E(Xt − Xt+1 | Xt ) ≥ X2t = |X2t | since only integral Xt can happen.
Therefore we choose h(x) = |x|/2 in the theorem. Letting xmin = 1, we obtain the drift
Ri
P
function g(i) = 2 + 1 2/⌈j⌉ dj = ij=1 2/⌈j⌉ for i ≥ 1 and g(0) = 0. We remark that other
choices of h, with 1/2 replaced by different constants, would lead to the essentially same
result.
For the drift theorem, we have to compute g(i) − g(Xt+1 ), given Xt = i, and to bound
the mgf. w. r. t. this difference. We get
(
2(ln(i) − ln(j)) for j = 1, ..., i − 1, each with prob. 1/i,
g(i) − g(Xt+1 ) ≤
2(ln(i) + 1)
with prob. 1/i
Let Xt = i. For λ > 0, we bound the mgf.
λ(g(i)−g(Xt+1 ))
E(e
i−1
i−1
j=1
j=1
1 X 2λ(ln i−ln j) 1 η η 1 η X −η
1
j ,
e
= e i + i
) ≤ · e2λ e2λ ln i +
i
i
i
i
where η = 2λ. Now assume η constant and η < 1. Then
Z i−1
λ(g(i)−g(Xt+1 ))
η−1 η
η−1
−η
E(e
)≤i e +i
j dj
1+
1
1
η−1 η
η−1
1−η
≤i e +i
(i − 1)
−1
1+
1−η
1
1
− iη−1 = iη−1 eη +
≤ iη−1 (eη + 1) +
1−η
1−η
η iη−1 + η
η
e
1−η =: β
≤e
= 1 + iη−1 eη +
1−η
23
η η−1
will turn out to be negligible (more precisely,
using 1 + x ≤ ex . The factor ee i
η−1
eO((ln n) ) ) for i ≥ ln n in the following, which is why we set a := ln n in Theorem 3.(iv).
From the theorem, P
we get Pr(Ta < t) ≤ β t e−λ(g(X0 )−g(a)) . We work with the lower
bound g(X0 ) − g(a) = nj=a 2/j ≥ 2(ln(n + 1) − ln(a + 1)), which yields
Pr(Ta < t) < β t e−λ(2(ln(n+1)−ln(a+1))) = β t e−η ln n+O(ln ln n)
η−1 )+ η t−η ln n+O(ln ln n)
1−η
= eO(t(ln n)
η
= eo(t)+O(ln ln n)+ 1−η t−η ln n
η
Now we concentrate on the difference d(ǫ) = 1−η
t − η ln n that is crucial for the order
of growth of the last exponent. We assume t := (1 − ǫ) ln n for some constant ǫ > 0 and set
η := ǫ/2 (implying ǫ < 2); hence λ = ǫ/4. We get
1−ǫ
ǫ
ǫ
ǫ2
ǫ/2
(1 − ǫ)(ln n) − (ln n) = (ln n)
− 1 ≤ − (ln n)
d(ǫ) =
1 − ǫ/2
2
2
1 − ǫ/2
4
Using the bound for d(ǫ) in the exponent and noting that ǫ > 0 is constant, give
ǫ2
Pr(Ta < (1 − ǫ) ln n) ≤ e− 4 (1−o(1)) ln n , which also bounds T0 the same way.
To prove the upper tail, we must set a := 0 in Theorem 3.(iii). Using the lower bound
on the difference of g-values derived above, we estimate for Xt = i and any λ > 0
−λ(g(i)−g(Xt+1 ))
E(e
i−1
i−1
j=0
j=0
1 X −λ(2(ln(i+1)−ln(j+1))) 1 X
e
=
)≤
i
i
j+1
i+1
η
,
where again η = 2λ. Hence, similarly to the estimations for the lower tail,
Z i
η
1
1
1 η+1
1
−λ(g(i)−g(Xt+1 ))
j η dj ≤ η+1
i
=
≤ e− η+1 =: β
E(e
) ≤ η+1
i
i
η+1
η+1
1
From the drift theorem, we get
ηt
ηt
Pr(T0 > t) ≤ β t eλ(g(X0 )−g(0)) ≤ e− η+1 eλ(2(ln(n)+1)) = e− η+1 +η(ln n+1) .
Setting t := (1 + ǫ)(ln n + 1) and η = ǫ/2, the exponent is no more than
−
η(1 + ǫ/2 + ǫ/2)(ln n + 1)
ǫ2
+ η(ln n + 1) ≤ −
(ln n + 1).
1 + ǫ/2
4 + 2ǫ
2
The last fraction is at most − ǫ6 if ǫ ≤ 1 and at most − 6ǫ otherwise (if ǫ > 1). Altogether
Pr(T0 > t | X0 = n) ≤ e−
min{ǫ2 ,ǫ}
(ln n+1)
6
.
For appropriate functions g(x), our drift theorem may provide sharp concentration
results for other probabilistic recurrences, such as the case a(n) > 1.
24
5
Conclusions
We have presented a new and versatile drift theorem with tail bounds. It can be understood as a general variable drift theorem and can be specialized into all existing variants
of variable, additive and multiplicative drift theorems we found in the literature as well
as the fitness-level technique. Moreover, it provides lower and upper tail bounds, which
were not available before in the context of variable drift. Tail bounds were used to prove
sharp concentration inequalities on the optimization time of the (1+1) EA on OneMax,
linear functions and LeadingOnes. Despite the highly random fashion this RSH operates,
its optimization time is highly concentrated up to lower order terms. The drift theorem
also leads to tail bounds on the number of cycles in random permutations. The proofs
illustrate how to use the tail bounds and we provide simplified (specialized) versions of
the corresponding statements. We believe that this research helps consolidate the area of
drift analysis. The general formulation of drift analysis increases our understanding of its
power and limitations. The tail bounds imply more practically useful statements on the
optimization time than the expected time. We expect further applications of our theorem,
also to classical randomized algorithms.
Acknowledgements.
This research received funding from the European Union Seventh Framework Programme
(FP7/2007-2013) under grant agreement no 618091 (SAGE) and from the Danish Council
for Independent Research (DFF-FNU) under grant no. 4002–00542.
References
[1] M. Abramowitz and I. A. Stegun. Handbook of Mathematical Functions with Formulas,
Graphs, and Mathematical Tables. National Bureau of Standards, 1964.
[2] R. Arratia and S. Tavare. The cycle structure of random permutations. The Annals
of Probability, 20(3):1567–1591, 1992.
[3] A. Auger and B. Doerr, editors. Theory of Randomized Search Heuristics: Foundations
and Recent Developments. World Scientific Publishing, 2011.
[4] W. Baritompa and M. A. Steel. Bounds on absorption times of directionally biased
random sequences. Random Structures and Algorithms, 9(3):279–293, 1996.
[5] E. G. Coffman, A. Feldmann, N. Kahale, and B. Poonen. Computing call admission capacities in linear networks. Probability in the Engineering and Informational Sciences,
13(04):387–406, 1999.
[6] D. Corus, D. Dang, A. V. Eremeev, and P. K. Lehre. Level-based analysis
of genetic algorithms and other search processes. CoRR, abs/1407.7663, 2014.
http://arxiv.org/abs/1407.7663.
25
[7] B. Doerr, C. Doerr, and T. Kötzing. The right mutation strength for multi-valued
decision variables. In Proc. of the Genetic and Evolutionary Computation Conference
(GECCO 2016), pages 1115–1122. ACM Press, 2016.
[8] B. Doerr, M. Fouz, and C. Witt. Sharp bounds by probability-generating functions
and variable drift. In Proc. of the Genetic and Evolutionary Computation Conference
(GECCO 2011), pages 2083–2090. ACM Press, 2011.
[9] B. Doerr and L. A. Goldberg. Adaptive drift analysis. Algorithmica, 65(1):224–250,
2013.
[10] B. Doerr, A. Hota, and T. Kötzing. Ants easily solve stochastic shortest path problems.
In Proc. of the Genetic and Evolutionary Computation Conference (GECCO 2012),
pages 17–24. ACM Press, 2012.
[11] B. Doerr, T. Jansen, C. Witt, and C. Zarges. A method to derive fixed budget results
from expected optimisation times. In Proc. of the Genetic and Evolutionary Computation Conference (GECCO 2013), pages 1581–1588. ACM Press, 2013.
[12] B. Doerr, D. Johannsen, and C. Winzen. Multiplicative drift analysis. Algorithmica,
64(4):673–697, 2012.
[13] S. Droste, T. Jansen, and I. Wegener. On the analysis of the (1+1) evolutionary
algorithm. Theoretical Computer Science, 276:51–81, 2002.
[14] A. Eryilmaz and R. Srikant. Asymptotically tight steady-state queue length bounds
implied by drift conditions. Queueing Systems: Theory and Applications, 72(3-4):311–
359, 2012.
[15] M. Feldmann and T. Kötzing. Optimizing expected path lengths with ant colony
optimization using fitness proportional update. In Proc. of Foundations of Genetic
Algorithms (FOGA 2013), pages 65–74. ACM Press, 2013.
[16] C. Gießen and C. Witt. Optimal mutation rates for the (1+λ) EA on onemax. In
Proc. of the Genetic and Evolutionary Computation Conference (GECCO 2016), pages
1147–1154. ACM Press, 2016.
[17] B. Hajek. Hitting and occupation time bounds implied by drift analysis with applications. Advances in Applied Probability, 14:502–525, 1982.
[18] J. He and X. Yao. Drift analysis and average time complexity of evolutionary algorithms. Artificial Intelligence, 127(1):57–85, 2001. Erratum in Artif. Intell. 140(1/2):
245-248 (2002).
[19] J. Jägersküpper. Combining markov-chain analysis and drift analysis - the (1+1)
evolutionary algorithm on linear functions reloaded. Algorithmica, 59(3):409–424, 2011.
[20] T. Jansen. Analyzing Evolutionary Algorithms - The Computer Science Perspective.
Natural Computing Series. Springer, 2013.
26
[21] D. Johannsen. Random combinatorial structures and randomized search heuristics.
PhD thesis, Universität des Saarlandes, Germany, 2010.
[22] R. M. Karp. Probabilistic recurrence relations. Journal of the ACM, 41(6):1136–1150,
1994.
[23] T. Kötzing. Concentration of first hitting times under additive drift. Algorithmica,
75(3):490–506, 2016.
[24] P. K. Lehre. Negative drift in populations. In Proc. of Parallel Problem Solving from
Nature (PPSN XI), volume 6238 of LNCS, pages 244–253. Springer, 2011.
[25] P. K. Lehre. Drift analysis (tutorial). In Companion to GECCO 2012, pages 1239–1258.
ACM Press, 2012.
[26] P. K. Lehre and C. Witt. Black-box search by unbiased variation. Algorithmica,
64(4):623–642, 2012.
[27] P. K. Lehre and C. Witt. Concentrated hitting times of randomized search heuristics
with variable drift. In Proceedings of the 25th International Symposium Algorithms
and Computation (ISAAC 2014), volume 8889 of Lecture Notes in Computer Science,
pages 686–697. Springer, 2014.
[28] B. Mitavskiy, J. E. Rowe, and C. Cannings. Theoretical analysis of local search strategies to optimize network communication subject to preserving the total number of
links. International Journal of Intelligent Computing and Cybernetics, 2(2):243–284,
2009.
[29] F. Neumann and C. Witt. Bioinspired Computation in Combinatorial Optimization –
Algorithms and Their Computational Complexity. Natural Computing Series. Springer,
2010.
[30] P. S. Oliveto and C. Witt. Simplified drift analysis for proving lower bounds in evolutionary computation. Algorithmica, 59(3):369–386, 2011.
[31] P. S. Oliveto and C. Witt. Erratum: Simplified drift analysis for proving lower bounds
in evolutionary computation. CoRR, abs/1211.7184, 2012. http://arxiv.org/abs/1211.7184.
[32] P. S. Oliveto and C. Witt. Improved time complexity analysis of the simple genetic
algorithm. Theoretical Computer Science, 605:21–41, 2015.
[33] J. E. Rowe and D. Sudholt. The choice of the offspring population size in the (1,λ) EA.
In Proc. of the Genetic and Evolutionary Computation Conference (GECCO 2012),
pages 1349–1356. ACM Press, 2012.
[34] G. H. Sasaki and B. Hajek. The time complexity of maximum matching by simulated
annealing. Journal of the ACM, 35:387–403, 1988.
[35] D. Sudholt. A new method for lower bounds on the running time of evolutionary
algorithms. IEEE Trans. on Evolutionary Computation, 17(3):418–435, 2013.
[36] I. Wegener. Theoretical aspects of evolutionary algorithms. In Proc. of the 28th
International Colloquium on Automata, Languages and Programming (ICALP 2001),
volume 2076 of LNCS, pages 64–78. Springer, 2001.
[37] C. Witt. Tight bounds on the optimization time of a randomized search heuristic
on linear functions. Combinatorics, Probability & Computing, 22(2):294–318, 2013.
Preliminary version in STACS 2012.
A
Existing Drift Theorems as Special Cases
In this appendix, we show that virtually all known variants of drift theorems with drift
towards the target can be derived from our general Theorem 3 with surprisingly short
proofs.
A.1
Variable Drift and Fitness Levels
A clean form of a variable drift theorem, generalizing previous formulations from [21] and
[28], was presented in [33]. We restate this theorem in our notation and carry out two
generalizations: we allow for a continuous state space instead of demanding a finite one
and do not fix xmin = 1.
Theorem 16 (Variable Drift, Upper Bound; following [33]). Let (Xt)t∈N0 be a stochastic
process over some state space S ⊆ {0} ∪ [xmin, xmax], where xmin > 0. Let h(x) : [xmin, xmax] →
R+ be a monotone increasing function such that 1/h(x) is integrable on [xmin, xmax] and
E(Xt − Xt+1 | Ft) ≥ h(Xt) if Xt ≥ xmin. Then it holds for the first hitting time
T := min{t | Xt = 0} that
\[ E(T \mid X_0) \le \frac{x_{\min}}{h(x_{\min})} + \int_{x_{\min}}^{X_0} \frac{1}{h(x)}\,dx. \]
Proof. Since h(x) is monotone increasing, 1/h(x) is decreasing and the function
g(x) := x_{\min}/h(x_{\min}) + \int_{x_{\min}}^{x} 1/h(y)\,dy, defined in Remark 2, is concave on [0, xmax]. By Jensen's inequality, we get
\[ E(g(X_t) - g(X_{t+1}) \mid F_t) \ge g(X_t) - g(E(X_{t+1} \mid F_t)) = \int_{E(X_{t+1} \mid F_t)}^{X_t} \frac{1}{h(x)}\,dx \ge \int_{X_t - h(X_t)}^{X_t} \frac{1}{h(x)}\,dx, \]
where the equality just expanded g(x). Using that 1/h(x) is decreasing, it follows
\[ \int_{X_t - h(X_t)}^{X_t} \frac{1}{h(x)}\,dx \ge \int_{X_t - h(X_t)}^{X_t} \frac{1}{h(X_t)}\,dy = \frac{h(X_t)}{h(X_t)} = 1. \]
Plugging in αu := 1 in Theorem 3 completes the proof.
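To make the bound of Theorem 16 concrete, the integral can be evaluated numerically. The sketch below is illustrative only and not part of the original analysis; the drift function h(x) = x/n is an assumed example reminiscent of randomized local search on OneMax.

```python
import math

def variable_drift_upper_bound(h, x0, xmin, steps=100_000):
    # xmin/h(xmin) + integral from xmin to x0 of 1/h(x) dx, via the
    # trapezoidal rule -- the upper bound of Theorem 16.
    if x0 <= xmin:
        return xmin / h(xmin)
    width = (x0 - xmin) / steps
    xs = [xmin + i * width for i in range(steps + 1)]
    integral = sum(width * (1 / h(a) + 1 / h(b)) / 2 for a, b in zip(xs, xs[1:]))
    return xmin / h(xmin) + integral

# Assumed example drift h(x) = x/n: the bound evaluates to n + n*ln(X0),
# matching the closed-form value of the integral.
n, x0 = 100, 100.0
print(variable_drift_upper_bound(lambda x: x / n, x0, xmin=1.0))
```

For h(x) = x/n the numerical value agrees with the closed form n(1 + ln X0) up to discretization error.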
In [33] it was also pointed out that variable drift theorems in discrete search spaces
look similar to bounds obtained from the fitness level technique (also called the method of
f -based partitions, first formulated in [36]). For the sake of completeness, we present the
classical upper bounds by fitness levels w. r. t. the (1+1) EA here and prove them by drift
analysis.
Theorem 17 (Classical Fitness Levels, following [36]). Consider the (1+1) EA maximizing
some function f and a partition of the search space into non-empty sets A1 , . . . , Am . Assume
that the sets form an f -based partition, i. e., for 1 ≤ i < j ≤ m and all x ∈ Ai , y ∈ Aj it
holds f(x) < f(y). Let pi be a lower bound on the probability that a search point in Ai is
mutated into a search point in \bigcup_{j=i+1}^{m} A_j. Then the expected hitting time of Am is at most
\sum_{i=1}^{m-1} \frac{1}{p_i}.
Proof. At each point of time, the (1+1) EA is in a unique fitness level. Let Yt be the current
fitness level at time t. We consider the process defined by Xt = m − Yt. By definition
of fitness levels and the (1+1) EA, Xt is non-increasing over time. Consider Xt = k for
1 ≤ k ≤ m − 1. With probability pm−k, the X-value decreases by at least 1. Consequently,
E(Xt − Xt+1 | Xt = k) ≥ pm−k. We define h(x) = p_{m-\lceil x \rceil}, xmin = 1 and xmax = m − 1
and obtain an integrable, monotone increasing function on [xmin, xmax]. Hence, the upper
bound on E(T | X0) from Theorem 16 becomes at most \frac{1}{p_1} + \sum_{i=1}^{m-2} \frac{1}{p_{m-i}}, which completes
the proof.
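For a concrete instance, the fitness-level sum is easy to evaluate. The snippet below uses the classical (1+1) EA on OneMax with the standard mutation-probability bound; the specific numbers are an assumed illustration, not taken from [36] verbatim.

```python
n = 50
# Level i: search points with exactly i one-bits (i < n). Level i is left
# when one of the n-i zero-bits flips and all other bits stay unchanged,
# hence p_i >= (n-i) * (1/n) * (1-1/n)**(n-1).
p = [(n - i) * (1 / n) * (1 - 1 / n) ** (n - 1) for i in range(n)]
upper_bound = sum(1 / p[i] for i in range(n))  # sum over all non-optimal levels
print(upper_bound)  # roughly e * n * H_n, i.e. O(n log n)
```

For n = 50 the sum is about 605, in line with the well-known O(n log n) bound for the (1+1) EA on OneMax.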
Recently, the fitness-level technique was considerably refined and supplemented by lower
bounds [35]. We can also identify these extensions as a special case of general drift. This
material is included in a subsection of its own, Appendix B. Similarly, we have moved a
treatment of variable drift theorems with non-monotone drift to Appendix C.
Finally, so far only two theorems dealing with upper bounds on variable drift and thus
lower bounds on the hitting time seem to have been published. The first one was derived
in [8]. Again, we present a variant without unnecessary assumptions; more precisely, we
allow continuous state spaces and use less restricted c(x) and h(x).
Theorem 18 (Variable Drift, Lower Bound; following [8]). Let (Xt)t∈N0 be a stochastic
process over some state space S ⊆ {0} ∪ [xmin, xmax], where xmin > 0. Suppose there exist
two functions c, h : [xmin, xmax] → R+ such that h(x) is monotone increasing and 1/h(x)
integrable on [xmin, xmax], and for all t ≥ 0,
(i) Xt+1 ≤ Xt,
(ii) Xt+1 ≥ c(Xt) for Xt ≥ xmin,
(iii) E(Xt − Xt+1 | Ft) ≤ h(c(Xt)) for Xt ≥ xmin.
Then it holds for the first hitting time T := min{t | Xt = 0} that
\[ E(T \mid X_0) \ge \frac{x_{\min}}{h(x_{\min})} + \int_{x_{\min}}^{X_0} \frac{1}{h(x)}\,dx. \]
Proof. Using the definition of g according to Remark 2, we compute the drift
\[ E(g(X_t) - g(X_{t+1}) \mid F_t) = E\!\left( \int_{X_{t+1}}^{X_t} \frac{1}{h(x)}\,dx \;\Big|\; F_t \right) \le E\!\left( \int_{X_{t+1}}^{X_t} \frac{1}{h(c(X_t))}\,dx \;\Big|\; F_t \right), \]
where we have used that Xt ≥ Xt+1 ≥ c(Xt) and that h(x) is monotone increasing. The
last integral equals
\[ \frac{X_t - E(X_{t+1} \mid F_t)}{h(c(X_t))} \le \frac{h(c(X_t))}{h(c(X_t))} = 1. \]
Plugging in αℓ := 1 in Theorem 3 completes the proof.
Very recently, Theorem 18 was relaxed in [16] by replacing the deterministic condition
Xt+1 ≥ c(Xt) by a probabilistic one. We note without proof that this generalization can
also be proved with Theorem 3.
A.2
Multiplicative Drift
We continue by showing that Theorem 3 can be specialized in order to re-obtain other
classical and recent variants of drift theorems. Of course, Theorem 3 is a generalization of
additive drift (Theorem 1), which interestingly was used to prove the general theorem itself.
The remaining important (in fact possibly the most important) strand of drift theorems is
therefore represented by so-called multiplicative drift, which we focus on in this subsection.
Roughly speaking, the underlying assumption is that the progress to the optimum is proportional to the distance (or can be bounded in this way). Early theorems covering this
scenario, without using the notion of drift, can be found in [4].
The following theorem is the strongest variant of the multiplicative drift theorem (originally introduced by [12]), which can be found in [9]. It was used to analyze RSHs on
combinatorial optimization problems and linear functions. Here we also need a tail bound
from our main theorem (more precisely, the third item in Theorem 3). Note that the multiplicative drift theorem requires xmin to be positive, i. e., a gap in the state space. Without
the gap, no finite first hitting time can be proved from the prerequisites of multiplicative
drift.
Theorem 19 (Multiplicative Drift, Upper Bound; following [9]). Let (Xt)t∈N0 be a stochastic process over some state space S ⊆ {0} ∪ [xmin, xmax], where xmin > 0. Suppose that there
exists some δ, where 0 < δ < 1, such that E(Xt − Xt+1 | Ft) ≥ δXt. Then the following
statements hold for the first hitting time T := min{t | Xt = 0}.
(i) E(T | X0) ≤ (ln(X0/xmin) + 1)/δ.
(ii) Pr(T ≥ (ln(X0/xmin) + r)/δ | X0) ≤ e^{−r} for all r > 0.
Proof. Choosing h(x) = δx, the process satisfies Condition (i) of Corollary 4, which implies
that
\[ E(T \mid X_0) \le \frac{x_{\min}}{\delta x_{\min}} + \int_{x_{\min}}^{X_0} \frac{1}{\delta y}\,dy = \frac{\ln(X_0/x_{\min}) + 1}{\delta}, \]
which proves the first item from the theorem. The process also satisfies Condition (iii) of
Corollary 4; however, this would result in the loss of a factor e in the tail bound. We argue
directly instead.
Using the notation from Theorem 3, we choose h(x) = δx and obtain E(Xt − Xt+1 |
Ft) ≥ h(Xt) by the prerequisite on multiplicative drift. Moreover, according to Remark 2
we define g(x) = x_{\min}/(\delta x_{\min}) + \int_{x_{\min}}^{x} 1/(\delta y)\,dy = 1/\delta + \ln(x/x_{\min})/\delta for x ≥ xmin. We set
a := 0 and consider
\[ E(e^{-\delta(g(X_t) - g(X_{t+1}))} \mid F_t; X_t \ge x_{\min}) = E(e^{\ln(X_{t+1}/x_{\min}) - \ln(X_t/x_{\min})} \mid F_t; X_t \ge x_{\min}) = E(X_{t+1}/X_t \mid F_t; X_t \ge x_{\min}) \le 1 - \delta. \]
Hence, we can choose βu(t) = 1 − δ for all Xt ≥ xmin and λ = δ in the third item of
Theorem 3 to obtain
\[ \Pr(T > t \mid X_0) < (1-\delta)^t \cdot e^{\delta(g(X_0) - g(x_{\min}))} \le e^{-\delta t + \ln(X_0/x_{\min})}. \]
Now the second item of Theorem 19 follows by choosing t := (ln(X0/xmin) + r)/δ.
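As a sanity check (a toy simulation of our own, not part of the proof), consider a process that jumps to a uniform value in (0, X_t) in each step; its multiplicative drift parameter is δ = 1/2, and the empirical hitting time indeed stays below the bound of item (i):

```python
import math, random

random.seed(0)

def hitting_time(x0, xmin=1.0):
    # X_{t+1} ~ Uniform(0, X_t); a state below xmin is identified with 0,
    # matching the state space {0} + [xmin, xmax]. The drift is
    # E(X_t - X_{t+1} | F_t) = X_t / 2, i.e. delta = 1/2.
    x, t = x0, 0
    while x >= xmin:
        x = random.uniform(0, x)
        t += 1
    return t

x0, delta, runs = 1000.0, 0.5, 20_000
empirical = sum(hitting_time(x0) for _ in range(runs)) / runs
bound = (math.log(x0 / 1.0) + 1) / delta  # item (i) of Theorem 19
print(empirical, bound)
```

For this process the empirical mean is roughly half the bound, showing that Theorem 19 is not tight for every process it covers.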
Compared to the upper bound, the following lower bound includes a condition on the
maximum step-wise progress and requires non-increasing sequences. It generalizes the version in [37] and its predecessor in [26] by not assuming xmin ≥ 1.
Theorem 20 (Multiplicative Drift, Lower Bound; following [37]). Let (Xt)t∈N0 be a
stochastic process over some state space S ⊆ {0} ∪ [xmin, xmax], where xmin > 0. Suppose that there exist β, δ, where 0 < β, δ ≤ 1, such that for all t ≥ 0
(i) Xt+1 ≤ Xt,
(ii) Pr(Xt − Xt+1 ≥ βXt) ≤ βδ/(1 + ln(Xt/xmin)),
(iii) E(Xt − Xt+1 | Ft) ≤ δXt.
Define the first hitting time T := min{t | Xt = 0}. Then
\[ E(T \mid X_0) \ge \frac{\ln(X_0/x_{\min}) + 1}{\delta} \cdot \frac{1-\beta}{1+\beta}. \]
Proof. Using the definition of g according to Remark 2, we compute the drift
\[ E(g(X_t) - g(X_{t+1}) \mid F_t) = E\!\left( \int_{X_{t+1}}^{X_t} \frac{1}{h(x)}\,dx \;\Big|\; F_t \right) \]
\[ \le E\!\left( \int_{X_{t+1}}^{X_t} \frac{1}{h(x)}\,dx \;;\; X_{t+1} \ge (1-\beta)X_t \;\Big|\; F_t \right) \cdot \Pr(X_{t+1} \ge (1-\beta)X_t) + g(X_t) \cdot \bigl(1 - \Pr(X_{t+1} \ge (1-\beta)X_t)\bigr), \]
where we used the law of total probability and g(Xt+1) ≥ 0. As in the proof of Theorem 19,
we have g(x) = (1 + ln(x/xmin))/δ. Plugging in h(x) = δx, using the bound on Pr(Xt+1 <
(1 − β)Xt) and Xt+1 ≤ Xt, the drift is further bounded by
\[ E\!\left( \int_{X_{t+1}}^{X_t} \frac{1}{\delta(1-\beta)X_t}\,dx \;\Big|\; F_t \right) + \frac{\beta\delta}{1 + \ln(X_t/x_{\min})} \cdot \frac{1 + \ln(X_t/x_{\min})}{\delta} \]
\[ = \frac{E(X_t - X_{t+1} \mid F_t)}{\delta(1-\beta)X_t} + \beta \le \frac{\delta X_t}{\delta(1-\beta)X_t} + \beta \le \frac{1+\beta}{1-\beta}. \]
Using αℓ = (1 + β)/(1 − β) and expanding g(X0), the proof is complete.
Very recently, it was shown in [7] that the monotonicity condition Xt+1 ≤ Xt in Theorem 20 can be dropped if item (iii) is replaced by E(s − Xt+1 · 1{Xt+1 ≤ s} | Ft) ≤ δs
for all s ≤ Xt. We note without proof that this strengthened theorem can also be obtained
from our general drift theorem.
B
Fitness Levels Lower and Upper Bounds as Special Cases
We pick up the consideration of fitness levels again and prove the following lower-bound
theorem due to Sudholt [35] by drift analysis. See Sudholt’s paper for possibly undefined
or unknown terms.
Theorem 21 (Theorem 3 in [35]). Consider an algorithm A and a partition of the search
space into non-empty sets A1, . . . , Am. For a mutation-based EA A we again say that A
is in Ai or on level i if the best individual created so far is in Ai. Let the probability of A
traversing from level i to level j in one step be at most u_i \cdot \gamma_{i,j}, where \sum_{j=i+1}^{m} \gamma_{i,j} = 1. Assume
that for all j > i and some 0 ≤ χ ≤ 1 it holds
\[ \gamma_{i,j} \ge \chi \sum_{k=j}^{m} \gamma_{i,k}. \tag{1} \]
Then the expected hitting time of Am is at least
\[ \sum_{i=1}^{m-1} \Pr(A \text{ starts in } A_i) \cdot \left( \frac{1}{u_i} + \chi \sum_{j=i+1}^{m-1} \frac{1}{u_j} \right) \ge \sum_{i=1}^{m-1} \Pr(A \text{ starts in } A_i) \cdot \chi \sum_{j=i}^{m-1} \frac{1}{u_j}. \]
Proof. Since χ ≤ 1, the second lower bound follows immediately from the first one, which
we prove in the following. To adopt the perspective of minimization, we say that A is on
distance level m − i if the best individual created so far is in Ai . Let Xt be the algorithm’s
distance level at time t. We define the drift function g mapping distance levels to nonnegative numbers (which then form a new stochastic process) by
\[ g(m-i) = \frac{1}{u_i} + \chi \sum_{j=i+1}^{m-1} \frac{1}{u_j} \]
for 1 ≤ i ≤ m − 1. Defining um := ∞, we extend the function to g(0) = 0. Our aim is to
prove that the drift
∆t (m − i) := E(g(m − i) − g(Xt+1 ) | Xt = m − i)
has expected value at most 1. Then the theorem follows immediately using additive drift
(Theorem 1) along with the law of total probability to condition on the starting level.
To analyze the drift, consider the case that the distance level decreases from m − i to
m − ℓ, where ℓ > i. We obtain
\[ g(m-i) - g(m-\ell) = \frac{1}{u_i} - \frac{1}{u_\ell} + \chi \sum_{j=i+1}^{\ell} \frac{1}{u_j}, \]
which by the law of total probability (and as the distance level cannot increase) implies
\[ \Delta_t(m-i) = \sum_{\ell=i+1}^{m} u_i \gamma_{i,\ell} \left( \frac{1}{u_i} - \frac{1}{u_\ell} + \chi \sum_{j=i+1}^{\ell} \frac{1}{u_j} \right) = 1 + u_i \sum_{\ell=i+1}^{m} \gamma_{i,\ell} \left( -\frac{1}{u_\ell} + \chi \sum_{j=i+1}^{\ell} \frac{1}{u_j} \right), \]
where the last equality used \sum_{\ell=i+1}^{m} \gamma_{i,\ell} = 1. If we can prove that
\[ \sum_{\ell=i+1}^{m} \gamma_{i,\ell} \, \chi \sum_{j=i+1}^{\ell} \frac{1}{u_j} \le \sum_{\ell=i+1}^{m} \gamma_{i,\ell} \cdot \frac{1}{u_\ell} \tag{2} \]
then ∆t (m − i) ≤ 1 follows and the proof is complete. To show this, observe that
\[ \sum_{\ell=i+1}^{m} \gamma_{i,\ell} \, \chi \sum_{j=i+1}^{\ell} \frac{1}{u_j} = \sum_{j=i+1}^{m} \frac{1}{u_j} \cdot \chi \sum_{\ell=j}^{m} \gamma_{i,\ell} \]
since the term 1/u_j appears for all terms \ell = j, \dots, m in the outer sum, each term weighted
by \gamma_{i,\ell}\,\chi. By (1), we have \chi \sum_{\ell=j}^{m} \gamma_{i,\ell} \le \gamma_{i,j}, and (2) follows.
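The two lower bounds of Theorem 21 are easy to evaluate numerically. The snippet below uses made-up values for the u_i, χ, and the start distribution, and also checks that the first bound dominates the second, as claimed in the proof:

```python
m = 6
u = {i: 0.1 * (i + 1) for i in range(1, m)}       # hypothetical upper bounds u_i
start = {i: 1.0 / (m - 1) for i in range(1, m)}   # uniform start distribution
chi = 0.5

# First bound: sum_i Pr(start in A_i) * (1/u_i + chi * sum_{j>i} 1/u_j)
first = sum(start[i] * (1 / u[i] + chi * sum(1 / u[j] for j in range(i + 1, m)))
            for i in range(1, m))
# Second bound: sum_i Pr(start in A_i) * chi * sum_{j>=i} 1/u_j
second = sum(start[i] * chi * sum(1 / u[j] for j in range(i, m))
             for i in range(1, m))
print(first, second)
assert first >= second  # holds since chi <= 1
```

The inner `range(i + 1, m)` and `range(i, m)` realize the index sets j = i+1, …, m−1 and j = i, …, m−1 of the theorem.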
We remark here without going into the details that also the refined upper bound by
fitness levels (Theorem 4 in [35]) can be proved using general drift.
C
Non-monotone Variable Drift
In many applications, a monotone increasing function h(x) bounds the drift from below.
For example, the expected progress towards the optimum of OneMax increases with the
distance of the current search point from the optimum. However, certain ant colony optimization algorithms do not have this property and exhibit a non-monotone drift [10]. To
handle this case, generalizations of the variable drift theorem have been developed that do
not require h(x) to be monotone. The most recent version of this theorem is presented in
[15]. Unfortunately, it turned out that the two generalizations suffer from a missing condition relating positive and negative drift to each other. Adding the condition and removing
an unnecessary assumption (more precisely, the continuity of h(x)), the theorem in [15] can
be corrected as follows. Note that this formulation is also used in [32] but proved with a
specific drift theorem instead of our general approach.
Theorem 22 (extending [15]). Let (Xt)t∈N0 be a stochastic process over some state space
S ⊆ {0} ∪ [xmin, xmax], where xmin > 0. Suppose there exist two functions h, d : [xmin, xmax] →
R+, where 1/h is integrable, and a constant c ≥ 1 such that for all t ≥ 0
1. E(Xt − Xt+1 ; Xt ≥ xmin | Ft) ≥ h(Xt),
2. \[ \frac{E((X_{t+1}-X_t)\cdot \mathbf{1}\{X_{t+1}>X_t\} \;;\; X_t \ge x_{\min} \mid F_t)}{E((X_t-X_{t+1})\cdot \mathbf{1}\{X_{t+1}<X_t\} \;;\; X_t \ge x_{\min} \mid F_t)} \le \frac{1}{2c^2}, \]
3. |Xt − Xt+1| ≤ d(Xt) if Xt ≥ xmin,
4. for all x, y ≥ xmin with |x − y| ≤ d(x), it holds h(min{x, y}) ≤ c · h(max{x, y}).
Then it holds for the first hitting time T := min{t | Xt = 0} that
\[ E(T \mid X_0) \le 2c \left( \frac{x_{\min}}{h(x_{\min})} + \int_{x_{\min}}^{X_0} \frac{1}{h(x)}\,dx \right). \]
It is worth noting that Theorem 16 is not necessarily a special case of Theorem 22.
Proof of Theorem 22. Using the definition of g according to Remark 2 and assuming Xt ≥
xmin, we compute the drift
\[ E(g(X_t) - g(X_{t+1}) \mid F_t) = E\!\left( \int_{X_{t+1}}^{X_t} \frac{1}{h(x)}\,dx \;\Big|\; F_t \right) \]
\[ = E\!\left( \int_{X_{t+1}}^{X_t} \frac{1}{h(x)}\,dx \cdot \mathbf{1}\{X_{t+1} < X_t\} \;\Big|\; F_t \right) - E\!\left( \int_{X_t}^{X_{t+1}} \frac{1}{h(x)}\,dx \cdot \mathbf{1}\{X_{t+1} > X_t\} \;\Big|\; F_t \right), \]
where equality holds since the integral is empty if Xt+1 = Xt. Item (4) from the prerequisites yields h(x) ≤ c h(Xt) if Xt − d(Xt) ≤ x < Xt and h(x) ≥ h(Xt)/c if Xt < x ≤
Xt + d(Xt). Using this and |Xt − Xt+1| ≤ d(Xt), the drift can be further bounded by
\[ E\!\left( \int_{X_{t+1}}^{X_t} \frac{1}{c\,h(X_t)}\,dx \cdot \mathbf{1}\{X_{t+1} < X_t\} \;\Big|\; F_t \right) - E\!\left( \int_{X_t}^{X_{t+1}} \frac{c}{h(X_t)}\,dx \cdot \mathbf{1}\{X_{t+1} > X_t\} \;\Big|\; F_t \right) \]
\[ \ge E\!\left( \int_{X_{t+1}}^{X_t} \frac{1}{2c\,h(X_t)}\,dx \cdot \mathbf{1}\{X_{t+1} < X_t\} \;\Big|\; F_t \right) = \frac{E((X_t - X_{t+1}) \cdot \mathbf{1}\{X_{t+1} < X_t\} \mid F_t)}{2c\,h(X_t)} \ge \frac{h(X_t)}{2c\,h(X_t)} = \frac{1}{2c}, \]
where the first inequality used Item (2) from the prerequisites and the last one Item (1).
Plugging in αu := 1/(2c) in Theorem 3 completes the proof.
Learning Pose Grammar to Encode Human Body Configuration for
3D Pose Estimation
Hao-Shu Fang1,2∗ , Yuanlu Xu1∗ , Wenguan Wang1,3∗ , Xiaobai Liu4 , Song-Chun Zhu1
1 Dept. Computer Science and Statistics, University of California, Los Angeles
2 Shanghai Jiao Tong University
3 Beijing Institute of Technology
4 Dept. Computer Science, San Diego State University
arXiv:1710.06513v6 [] 4 Jan 2018
[email protected], [email protected], [email protected]
[email protected], [email protected]
Abstract
In this paper, we propose a pose grammar to tackle the problem of 3D human pose estimation. Our model directly takes
2D pose as input and learns a generalized 2D-3D mapping
function. The proposed model consists of a base network
which efficiently captures pose-aligned features and a hierarchy of Bi-directional RNNs (BRNN) on the top to explicitly
incorporate a set of knowledge regarding human body configuration (i.e., kinematics, symmetry, motor coordination). The
proposed model thus enforces high-level constraints over human poses. In learning, we develop a pose sample simulator
to augment training samples in virtual camera views, which
further improves our model generalizability. We validate our
method on public 3D human pose benchmarks and propose a
new evaluation protocol working in a cross-view setting to verify the generalization capability of different methods. We empirically observe that most state-of-the-art methods encounter
difficulty under such setting while our method can well handle such challenges.
Figure 1: Illustration of human pose grammar, which expresses the knowledge of human body configuration. We consider three kinds of human body dependencies and relations in this paper, i.e., kinematics (red), symmetry (blue) and motor coordination (green). [Figure: a 3D human skeleton annotated with the kinematics, symmetry and motor coordination grammars over the estimated 3D human pose.]
1
Introduction
Estimating 3D human poses from a single-view RGB image has attracted growing interest in the past few years for
its wide applications in robotics, autonomous vehicles, intelligent drones etc. This is a challenging inverse task since
it aims to reconstruct 3D spaces from 2D data and the inherent ambiguity is further amplified by other factors, e.g.,
clothes, occlusions, background clutters. With the availability of large-scale pose datasets, e.g., Human3.6M (Ionescu
et al. 2014), deep learning based methods have obtained encouraging success. These methods can be roughly divided
into two categories: i) learning end-to-end networks that recover 2D input images to 3D poses directly, ii) extracting 2D
human poses from input images and then lifting 2D poses to
3D spaces.
∗ Hao-Shu Fang, Yuanlu Xu and Wenguan Wang contributed equally to this paper. This work is supported by ONR MURI Project N00014-16-1-2007, DARPA XAI Award N66001-17-2-4029, and NSF IIS 1423305, 1657600. Hao-Shu Fang and Wenguan Wang are visiting students. The correspondence author is Xiaobai Liu.
Copyright © 2018, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved.
There are some advantages to decoupling 3D human pose estimation into two stages. i) For 2D pose estimation, existing large-scale pose estimation datasets (Andriluka et al.
2014; Charles et al. 2016) have provided sufficient annotations; whereas pre-trained 2D pose estimators (Newell,
Yang, and Deng 2016) are also generalized and mature
enough to be deployed elsewhere. ii) For 2D to 3D reconstruction, infinite 2D-3D pose pairs can be generated by projecting each 3D pose into 2D poses under different camera
views. Recent works (Yasin et al. 2016; Martinez et al. 2017)
have shown that well-designed deep networks can achieve
state-of-the-art performance on Human3.6M dataset using
only 2D pose detections as system inputs.
However, despite their promising results, few previous
methods explored the problem of encoding domain-specific
knowledge into current deep learning based detectors.
In this paper, we develop a deep grammar network to explicitly encode a set of knowledge over human body dependencies and relations, as illustrated in Figure 1. This
knowledge explicitly expresses the composition process of
joint-part-pose, including kinematics, symmetry and motor
coordination, and serves as a knowledge base for reconstructing 3D poses. We ground this knowledge in a multi-level
RNN network which can be trained end-to-end with backpropagation. The composed hierarchical structure describes
composition, context and high-order relations among human
body parts.
Additionally, we empirically find that previous methods
are limited by their poor generalization capabilities when
performing cross-view pose estimation, i.e., being tested on
human images from unseen camera views. Notably, on the
Human3.6M dataset, the largest publicly available human
pose benchmark, we find that the performance of state-of-the-art methods heavily relies on the camera viewpoints. As
shown in Table 1, once we change the split of training and
testing sets, using 3 cameras for training and testing on the
fourth camera (new protocol #3), the performance of state-of-the-art methods drops dramatically and is much worse than
that of image-based deep learning methods. These empirical studies suggest that existing methods might over-fit to sparse
camera settings and bear poor generalization capabilities.
To handle the issue, we propose to augment the learning
process with more camera views, which explores a generalized mapping from 2D spaces to 3D spaces. More specifically, we develop a pose simulator to augment training samples with virtual camera views, which can further improve
system robustness. Our method is motivated by previous
works on learning by synthesis. Differently, we focus on the
sampling of 2D pose instances from a given 3D space, following basic geometry principles. In particular, we develop a
pose simulator to effectively generate training samples from
unseen camera views. These samples can greatly reduce the
risk of over-fitting and thus improve generalization capabilities of the developed pose estimation system.
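The idea can be roughly illustrated as follows. This is our own simplification, using a rotation about the vertical axis and an orthographic projection rather than the paper's full perspective simulator, and the joint coordinates are made up:

```python
import math, random

random.seed(42)

def virtual_view(joints_3d, theta):
    # Rotate the 3D pose about the vertical (y) axis by theta to mimic an
    # unseen camera, then drop the depth coordinate to obtain the matching
    # 2D pose (orthographic stand-in for perspective projection).
    c, s = math.cos(theta), math.sin(theta)
    rotated = [(c * x + s * z, y, -s * x + c * z) for (x, y, z) in joints_3d]
    return rotated, [(x, y) for (x, y, _) in rotated]

pose = [(0.0, 1.7, 3.0), (0.2, 1.4, 3.1)]   # two made-up joints (metres)
augmented = [virtual_view(pose, random.uniform(0, 2 * math.pi)) for _ in range(3)]
print(len(augmented))  # three extra (3D pose, 2D pose) training pairs
```

Each sampled angle yields a new 2D-3D pair from the same underlying 3D pose, which is the essence of the virtual-camera augmentation.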
We conduct exhaustive experiments on public human
pose benchmarks, e.g., Human3.6M, HumanEva, MPII, to
verify the generalization issues of existing methods, and
evaluate the proposed method for cross-view human pose
estimation. Results show that our method can significantly
reduce pose estimation errors and outperform the alternative
methods to a large extend.
Contributions. There are two major contributions of the
proposed framework: i) a deep grammar network that incorporates both the powerful encoding capabilities of deep neural
networks and the high-level dependencies and relations of the human body; ii) a data augmentation technique that improves
the generalization ability of current 2-step methods, allowing
them to catch up with or even outperform end-to-end image-based competitors.
2
Related Work
The proposed method is closely related to the following two
tracks in computer vision and artificial intelligence.
3D pose estimation. In literature, methods solving this
task can be roughly classified into two frameworks: i) directly learning 3D pose structures from 2D images, ii) a cascaded framework of first performing 2D pose estimation and
then reconstructing 3D pose from the estimated 2D joints.
Specifically, for the first framework, (Li and Chan 2014)
proposed a multi-task convolutional network that simultaneously learns pose regression and part detection. (Tekin et al.
2016a) first learned an auto-encoder that describes 3D pose
in high dimensional space then mapped the input image to
that space using CNN. (Pavlakos et al. 2017) represented
3D joints as points in a discretized 3D space and proposed
a coarse-to-fine approach for iterative refinement. (Zhou et
al. 2017) mixed 2D and 3D data and trained a unified
network with a two-stage cascaded structure. These methods
heavily rely on well-labeled image and 3D ground-truth
pairs, since they need to learn depth information from images.
To avoid this limitation, some work (Paul, Viola, and Darrell 2003; Jiang 2010; Yasin et al. 2016) tried to address this
problem in a two-step manner. For example, in (Yasin et
al. 2016), the authors proposed an exemplar-based method
to retrieve the nearest 3D pose in the 3D pose library using the estimated 2D pose. Recently, (Martinez et al. 2017)
proposed a network that directly regresses 3D keypoints
from 2D joint detections and achieves state-of-the-art performance. Our work takes a further step towards a unified
2D-to-3D reconstruction network that integrates the learning power of deep learning and the domain-specific knowledge represented by hierarchy grammar model. The proposed method would offer a deep insight into the rationale
behind this problem.
Grammar model. This track receives long-lasting endorsement due to its interpretability and effectiveness in
modeling diverse tasks (Liu et al. 2014; Xu et al. 2016;
2017). In (Han and Zhu 2009), the authors approached
the problem of image parsing using a stochastic grammar
model. After that, grammar models have been used in (Xu et
al. 2013; Xu, Ma, and Lin 2014) for 2D human body parsing. (Park, Nie, and Zhu 2015) proposed a phrase structure,
dependency and attribute grammar for 2D human body, representing decomposition and articulation of body parts. Notably, (Nie, Wei, and Zhu 2017) represented human body
as a set of simplified kinematic grammar and learn their
relations with LSTM. In this paper, our representation can
be analogized as a hierarchical attributed grammar model,
with similar hierarchical structures and BRNNs as probabilistic grammar. The difference lies in that our model is fully
recursive and without semantics in middle levels.
3
Representation
We represent the 2D human pose U as a set of NU joint
locations
U = {ui : i = 1, . . . , NU , ui ∈ R2 }.
(1)
Our task is to estimate the corresponding 3D human pose V
in the world reference frame. Suppose the 2D coordinate of
a joint ui is [xi , yi ] and the 3D coordinate vi is [Xi , Yi , Zi ],
we can describe the relation between 2D and 3D as a pinhole
image projection
\[ w_i \begin{bmatrix} x_i \\ y_i \\ 1 \end{bmatrix} = K \, [R \mid RT] \begin{bmatrix} X_i \\ Y_i \\ Z_i \\ 1 \end{bmatrix}, \quad K = \begin{bmatrix} \alpha_x & 0 & x_0 \\ 0 & \alpha_y & y_0 \\ 0 & 0 & 1 \end{bmatrix}, \quad T = \begin{bmatrix} T_x \\ T_y \\ T_z \end{bmatrix}, \tag{2} \]
where wi is the depth w.r.t. the camera reference frame, K
is the camera intrinsic parameter (e.g., focal length αx and
αy , principal point x0 and y0 ), R and T are camera extrinsic
parameters of rotation and translation, respectively. Note we
omit camera distortion for simplicity.
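A minimal numeric sketch of the projection in Eq. (2) follows; the camera parameters below are arbitrary illustrative values, not calibrated ones:

```python
def project(X, K, R, T):
    # world -> camera frame: R (X + T), i.e. the extrinsic matrix [R | RT]
    Xc = [sum(R[i][j] * (X[j] + T[j]) for j in range(3)) for i in range(3)]
    # camera -> homogeneous pixel coordinates via the intrinsics K
    x, y, w = [sum(K[i][j] * Xc[j] for j in range(3)) for i in range(3)]
    return (x / w, y / w)  # divide by the depth w_i

K = [[1000.0, 0.0, 500.0],   # focal lengths alpha_x = alpha_y = 1000 px,
     [0.0, 1000.0, 500.0],   # principal point (x_0, y_0) = (500, 500)
     [0.0, 0.0, 1.0]]
R = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]   # identity rotation
T = [0.0, 0.0, 0.0]                                        # zero translation
print(project([0.2, -0.1, 2.0], K, R, T))  # -> (600.0, 450.0)
```

A joint two metres in front of this camera and slightly off-axis lands near the principal point, as expected.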
Figure 2: The proposed deep grammar network. Our model consists of two major components: a base network constituted by
two basic blocks and a pose grammar network encoding human body dependencies and relations w.r.t. kinematics, symmetry
and motor coordination. Each grammar is represented as a Bi-directional RNN among certain joints. See text for detailed
explanations.
It involves two sub-problems in estimating 3D pose from
2D pose: i) calibrating camera parameters, and ii) estimating 3D human joint positions. Noticing that these two subproblems are entangled and cannot be solved without ambiguity, we propose a deep neural network to learn the generalized 2D→3D mapping V = f (U; θ), where f (·) is a
multi-to-multi mapping function, parameterized by θ.
3.1
Model Overview
Our model follows the line of directly estimating 3D human keypoints from 2D joint detections, which renders our
model highly applicable. More specifically, we extend various human pose grammars into a deep neural network, where a
basic 3D pose detection network is first used for extracting
pose-aligned features, and a hierarchy of RNNs is built on top for
encoding high-level 3D pose grammar and generating final,
reasonable 3D pose estimations. The above two networks work
in a cascaded way, resulting in a strong 3D pose estimator
that inherits the representation power of neural networks and
high-level knowledge of human body configuration.
3.2
Base 3D-Pose Network
To build a solid foundation for the high-level grammar
model, we first use a base network that captures both
2D and 3D pose-aligned features well. The base network is inspired by (Martinez et al. 2017), which has been demonstrated effective in encoding the information of 2D and 3D
poses. As illustrated in Figure 2, our base network consists
of two cascaded blocks. For each block, several linear (fully
connected) layers, interleaved with Batch Normalization,
Dropout layers, and ReLU activation, are stacked for efficiently mapping the 2D-pose features to higher-dimensions.
The input 2D pose detections U (obtained as ground truth
2D joint locations under known camera parameters, or from
other 2D pose detectors) are first projected into 1024-d features with a fully connected layer. Then the first block takes
these high-dimensional features as input and an extra linear
layer is applied at the end of it to obtain an explicit 3D pose
representation. In order to have a coherent understanding of
the full body in 3D space, we re-project the 3D estimation
into a 1024-dimension space and further feed it into the second block. With the initial 3D pose estimation from the first
block, the second block is able to reconstruct a more reasonable 3D pose. To make full use of the information of the initial
2D pose detections, we introduce residual connections (He
et al. 2016) between the two blocks. Such technique is able
to encourage the information flow and facilitate our training. Additionally, each block in our base network is able to
directly access to the gradients from the loss function (detailed in Sec.4), leading to an implicit deep supervision (Lee
et al. 2015). With the refined 3D pose estimated by the base
network, we again re-project it into 1024-d features. We
combine the 1024-d features from the 3D pose and the original 1024-d features of the 2D pose, which leads to a
powerful representation that carries well-aligned 3D-pose information and preserves the original 2D-pose information.
We then feed this feature into our 3D-pose grammar network.
3.3
3D-Pose Grammar Network
So far, our base network directly estimates the depth of each
joint from the 2D pose detections. However, the nature of the
human body, with the rich inherent structures involved in this
task, motivates us to reason about the 3D structure of the whole
person in a global manner. Here we extend Bi-directional
RNNs (BRNN) to model high-level knowledge of 3D human
pose grammar, working towards a more reasonable and powerful 3D pose estimator that is capable of satisfying human
anatomical and anthropomorphic constraints. Before going
deep into our grammar network, we first detail our grammar
formulations that reflect interpretable and high-level knowledge of human body configuration. Basically, given a human
body, we consider the following three types of grammar in
our network.
Kinematic grammar G^{kin} describes human body movements without considering forces (i.e., the red skeleton in
Figure 1). We define 5 kinematic grammars to represent the
constraints among kinematically connected joints:
\[ G^{kin}_{spine}: \text{head} \leftrightarrow \text{thorax} \leftrightarrow \text{spine} \leftrightarrow \text{hip}, \tag{3} \]
\[ G^{kin}_{l.arm}: \text{l.shoulder} \leftrightarrow \text{l.elbow} \leftrightarrow \text{l.wrist}, \tag{4} \]
\[ G^{kin}_{r.arm}: \text{r.shoulder} \leftrightarrow \text{r.elbow} \leftrightarrow \text{r.wrist}, \tag{5} \]
\[ G^{kin}_{l.leg}: \text{l.hip} \leftrightarrow \text{l.knee} \leftrightarrow \text{l.foot}, \tag{6} \]
\[ G^{kin}_{r.leg}: \text{r.hip} \leftrightarrow \text{r.knee} \leftrightarrow \text{r.foot}. \tag{7} \]
Kinematic grammar focuses on connected body parts and
works both forward and backward. Forward kinematics
takes the last joint in a kinematic chain into account, while
backward kinematics reversely influences a joint in the chain from the next joint.
Symmetry grammar G^sym measures the bilateral symmetry
of the human body (i.e., the blue skeleton in Figure 1), as the human
body can be divided into matching halves by drawing a line
down the center; the left and right sides are mirror images of
each other:

G^sym_arm : G^kin_l.arm ↔ G^kin_r.arm,    (8)
G^sym_leg : G^kin_l.leg ↔ G^kin_r.leg.    (9)
Motor coordination grammar G^crd represents movements of several limbs combined in a certain manner (i.e.,
the green skeleton in Figure 1). In this paper, we consider simplified motor coordination between human arms and legs. We
define 2 coordination grammars to represent constraints on
people's coordinated movements:

G^crd_l→r : G^kin_l.arm ↔ G^kin_r.leg,    (10)
G^crd_r→l : G^kin_r.arm ↔ G^kin_l.leg.    (11)
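The chains above are purely index structure over joints, so they can be encoded as plain data. The following is a minimal sketch (the joint names and dictionary layout are our own illustrative choices, not code from the paper):

```python
# Grammar definitions of Eqs. (3)-(11) encoded as ordered joint chains.
# Each kinematic chain is the joint sequence traversed by one bottom-layer
# BRNN; symmetry and coordination grammars pair two kinematic chains.
KINEMATIC = {
    "spine": ["head", "thorax", "spine", "hip"],        # Eq. (3)
    "l_arm": ["l_shoulder", "l_elbow", "l_wrist"],      # Eq. (4)
    "r_arm": ["r_shoulder", "r_elbow", "r_wrist"],      # Eq. (5)
    "l_leg": ["l_hip", "l_knee", "l_foot"],             # Eq. (6)
    "r_leg": ["r_hip", "r_knee", "r_foot"],             # Eq. (7)
}

SYMMETRY = {"arm": ("l_arm", "r_arm"),                  # Eq. (8)
            "leg": ("l_leg", "r_leg")}                  # Eq. (9)

COORDINATION = {"l2r": ("l_arm", "r_leg"),              # Eq. (10)
                "r2l": ("r_arm", "l_leg")}              # Eq. (11)

def top_layer_inputs(pair):
    """Joints whose bottom-layer estimates a top-layer BRNN node reads."""
    a, b = pair
    return KINEMATIC[a] + KINEMATIC[b]
```

For instance, `top_layer_inputs(SYMMETRY["arm"])` lists the six joints fed to the G^sym_arm node.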
The RNN naturally supports chain-like structure, which
provides a powerful tool for modeling our grammar formulations with deep learning. There are two states (forward/backward directions) encoded in a BRNN. At each time
step t, with the input feature a_t, the output y_t is determined
by considering the states of both directions, h_t^f and h_t^b:

y_t = φ(W_y^f h_t^f + W_y^b h_t^b + b_y),    (12)

where φ is the softmax function and the states h_t^f, h_t^b are
computed as:

h_t^f = tanh(W_h^f h_{t−1}^f + W_a^f a_t + b_h^f),
h_t^b = tanh(W_h^b h_{t+1}^b + W_a^b a_t + b_h^b).    (13)

As shown in Figure 2, we build a two-layer tree-like hierarchy of BRNNs for modeling our three grammars, where
each BRNN shares the same equation, Eqn. (12), and
the three grammars are represented by the edges between
BRNN nodes or implicitly encoded into the BRNN architecture.

For the bottom layer, five BRNNs are built for modeling the five relations defined in the kinematic grammar. More
specifically, they accept the pose-aligned features from our
base network as input and generate an estimation for a 3D joint
at each time step. The information is propagated forward/backward
efficiently over the two states of the BRNN; thus
the five kinematic relations are implicitly modeled by the
bi-directional chain structure of the corresponding BRNNs. Note
that we take advantage of the recurrent nature of RNNs to
capture our chain-like grammar, instead of using RNNs to
model the temporal dependency of sequential data.

For the top layer, four BRNN nodes are derived in total,
two for symmetry relations and two for motor coordination
dependencies. For the symmetry BRNN nodes, taking the G^sym_arm
node as an example, it takes the concatenated 3D joints (6 joints in total)
from the G^kin_l.arm and G^kin_r.arm BRNNs in the
bottom layer at all time steps as input, and produces estimations
for the six 3D joints taking their symmetry relations into account.
Similarly, a coordination node such as G^crd_l→r
leverages the estimations from the G^kin_l.arm and G^kin_r.leg BRNNs
and refines the 3D joint estimations according to the coordination grammar.

In this way, we inject three kinds of human pose grammar
into a tree-BRNN model, and the final 3D human joint estimations
are obtained by mean-pooling the results from all
the nodes in the grammar hierarchy.

Figure 3: Illustration of virtual camera simulation. The black
camera icons stand for real camera settings while the white
camera icons stand for simulated virtual camera settings.

4 Learning

Given a training set Ω:

Ω = {(Û_k, V̂_k) : k = 1, . . . , N_Ω},    (14)

where Û_k and V̂_k denote ground-truth 2D and 3D pose
pairs, we define the 2D-3D loss of learning the mapping
function f(U; θ) as

θ* = arg min_θ ℓ(Ω|θ) = arg min_θ Σ_{k=1}^{N_Ω} ||f(Û_k; θ) − V̂_k||².    (15)
The loss measures the Euclidean distance between the predicted
3D pose and the true 3D pose.
The entire learning process consists of two steps: i) learning the basic blocks in the base network with the 2D-3D loss; ii) attaching the pose grammar network on top of the trained base
network and fine-tuning the whole network in an end-to-end
manner.
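The bidirectional recurrence of Eqs. (12)-(13) can be sketched in plain NumPy. This is a minimal illustration with placeholder weight shapes; the actual model is a trained Keras network, and φ is the softmax function as stated in the text:

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def brnn_forward(a, Whf, Waf, bhf, Whb, Wab, bhb, Wyf, Wyb, by):
    """Run one BRNN over an input sequence a[0..T-1], following Eqs. (12)-(13).

    Forward states read the previous step, backward states read the next
    step; the output at each step fuses both directions through softmax.
    """
    T, h = len(a), bhf.size
    hf = np.zeros((T, h))
    hb = np.zeros((T, h))
    for t in range(T):                         # forward pass of Eq. (13)
        prev = hf[t - 1] if t > 0 else np.zeros(h)
        hf[t] = np.tanh(Whf @ prev + Waf @ a[t] + bhf)
    for t in reversed(range(T)):               # backward pass of Eq. (13)
        nxt = hb[t + 1] if t < T - 1 else np.zeros(h)
        hb[t] = np.tanh(Whb @ nxt + Wab @ a[t] + bhb)
    # Eq. (12): combine both directional states at every time step.
    return np.array([softmax(Wyf @ hf[t] + Wyb @ hb[t] + by) for t in range(T)])
```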
4.1 Pose Sample Simulator
Figure 4: Examples of learned 2D atomic poses in the probability distribution p(U|Û).

We conduct an empirical study on popular 3D pose estimation datasets (e.g., Human3.6M, HumanEva) and notice
that there is usually a limited number of cameras (4 on average) recording the human subject. This raises the doubt
of whether learning on such datasets can lead to a generalized
3D pose estimator applicable in other scenes with different camera positions. We believe that a data augmentation
process will help improve the model's performance and generalization ability. To this end, we propose a novel Pose Sample Simulator (PSS) to generate additional training samples.
The generation process consists of two steps: i) projecting
ground-truth 3D pose V̂ onto virtual camera planes to obtain
ground-truth 2D pose Û, ii) simulating 2D pose detections
U by sampling the conditional probability distribution p(U|Û).
In the first step, we first specify a series of virtual camera
calibrations. Namely, a virtual camera calibration is specified by quoting the intrinsic parameters K′ from other real cameras and simulating reasonable extrinsic parameters (i.e.,
camera locations T′ and orientations R′). As illustrated in
Figure 3, two white virtual camera calibrations are determined by the other two real cameras. Given a specified virtual camera, we can perform a perspective projection of a
ground-truth 3D pose V̂ onto the virtual camera plane and
obtain the corresponding ground-truth 2D pose Û.
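Step i) is a standard perspective projection. A minimal NumPy sketch follows; the function name and the calibration values in the usage example are our own illustrative choices, with K, R, T standing for the virtual intrinsics and extrinsics K′, R′, T′ from the text:

```python
import numpy as np

def project_to_virtual_camera(V, K, R, T):
    """Perspective-project 3D joints V (n x 3, world coordinates) onto the
    image plane of a virtual camera with intrinsics K (3 x 3), rotation R
    (3 x 3) and translation T (3,), i.e. camera coordinates X_c = R X_w + T.
    Returns the n x 2 ground-truth 2D pose U_hat.
    """
    Xc = V @ R.T + T                  # world -> camera coordinates
    uvw = Xc @ K.T                    # camera -> homogeneous pixel coordinates
    return uvw[:, :2] / uvw[:, 2:3]   # perspective divide
```

For example, with identity rotation and zero translation, a point on the optical axis projects to the principal point (c_x, c_y) of K.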
In the second step, we first model the conditional probability distribution p(U|Û) to mitigate the discrepancy between 2D pose detections U and the 2D pose ground-truth Û.
We assume that p(U|Û) follows a Gaussian mixture distribution, that is,

p(U|Û) = p(ε) = Σ_{j=1}^{N_G} ω_j N(ε; μ_j, Σ_j),    (16)

where ε = U − Û, N_G denotes the number of Gaussian
components, ω_j denotes the combination weight of the j-th
component, and N(ε; μ_j, Σ_j) denotes the j-th multivariate Gaussian distribution with mean μ_j and covariance Σ_j. As suggested in (Andriluka et al. 2014), we set N_G = 42. For efficiency, the covariance matrix Σ_j is assumed to be block-diagonal:

Σ_j = blockdiag(σ_{j,1}, . . . , σ_{j,i}, . . .),  σ_{j,i} ∈ R^{2×2},    (17)

where σ_{j,i} is the covariance matrix for joint u_i in the j-th multivariate Gaussian component. This constraint enforces independence among the joints u_i in the 2D pose U.
The probability distribution p(U|Û) can be efficiently
learned using an EM algorithm, with the E-step estimating the combination weights ω and the M-step updating the Gaussian parameters μ and Σ. We utilize K-means clustering to initialize the
parameters as a warm start. The learned mean µj of each
parameters as a warm start. The learned mean µj of each
Gaussian can be considered as an atomic pose representing
a group of similar 2D poses. We visualize some atomic poses
in Figure 4.
Given a 2D pose ground-truth Û, we sample p(U|Û) to
generate simulated detections U and use them to augment the
training set Ω. By doing so we mitigate the discrepancy between the training data and the testing data. The effectiveness of our proposed PSS is validated in Section 5.5.
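Step ii) amounts to drawing a perturbation ε from the learned mixture of Eq. (16) and adding it to Û. A sketch, where the mixture parameters stand in for the EM-learned ones:

```python
import numpy as np

def sample_detection(U_hat, weights, means, covs, rng):
    """Simulate a 2D detection U = U_hat + eps with eps drawn from the
    Gaussian mixture of Eq. (16); covs are block-diagonal per joint as in
    Eq. (17). U_hat is the flattened ground-truth 2D pose (2 * n_joints,).
    """
    j = rng.choice(len(weights), p=weights)           # pick a mixture component
    eps = rng.multivariate_normal(means[j], covs[j])  # draw the perturbation
    return U_hat + eps
```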
5 Experiments
In this section, we first introduce the datasets and settings for
evaluation, then report our results and comparisons with
state-of-the-art methods, and finally conduct an ablation
study on the components of our method.
5.1 Datasets
We evaluate our method quantitatively and qualitatively on
three popular 3D pose estimation datasets.
Human3.6M (Ionescu et al. 2014) is the current largest
dataset for human 3D pose estimation, which consists of 3.6
million 3D human poses and corresponding video frames
recorded from 4 different cameras. Cameras are located at
the front, back, left and right of the recorded subject, around 5 meters away and at a height of 1.5 meters. In this dataset,
there are 11 actors in total performing 15 different actions
(e.g., greeting, eating and walking). The 3D pose ground-truth is captured by a motion capture (Mocap) system and all
camera parameters (intrinsic and extrinsic parameters) are
provided.
HumanEva-I (Sigal, Balan, and Black 2010) is another
widely used dataset for human 3D pose estimation, which
is also collected in a controlled indoor environment using a
Mocap system. HumanEva-I dataset has fewer subjects and
actions, compared with Human3.6M dataset.
MPII (Andriluka et al. 2014) is a challenging benchmark
for 2D human pose estimation in the wild, containing a
large number of in-the-wild human images. We only validate our method on this dataset qualitatively since no 3D
pose ground-truth is provided.
5.2 Evaluation Protocols
For Human3.6M, the standard protocol is using all 4 camera views in subjects S1, S5, S6, S7 and S8 for training and
the same 4 camera views in subjects S9 and S11 for testing.
This standard protocol is called protocol #1. In some works,
the predictions are post-processed via a rigid transformation
before comparing to the ground-truth, which is referred to as
protocol #2.
In the above two protocols, the same 4 camera views are
used for both training and testing. This raises the question whether
[Table 1 body. Columns: Direct., Discuss, Eating, Greet, Phone, Photo, Pose, Purch., Sitting, SittingD., Smoke, Wait, WalkD., Walk, WalkT., Avg.
Protocol #1 rows: LinKDE (PAMI'16), Tekin et al. (ICCV'16), Du et al. (ECCV'16), Chen & Ramanan (Arxiv'16), Pavlakos et al. (CVPR'17), Bruce et al. (ICCV'17), Zhou et al. (ICCV'17), Martinez et al. (ICCV'17), Ours (Avg. 60.4).
Protocol #2 rows: Ramakrishna et al. (ECCV'12), Bogo et al. (ECCV'16), Moreno-Noguer (CVPR'17), Pavlakos et al. (CVPR'17), Bruce et al. (ICCV'17), Martinez et al. (ICCV'17), Ours (Avg. 45.7).
Protocol #3 rows: Pavlakos et al. (CVPR'17), Bruce et al. (ICCV'17), Zhou et al. (ICCV'17), Martinez et al. (ICCV'17), Ours (Avg. 72.8).
Per-action entries omitted.]
Table 1: Quantitative comparisons of Average Euclidean Distance (mm) between the estimated pose and the ground-truth on
Human3.6M under Protocol #1, Protocol #2 and Protocol #3. The best score is marked in bold.
or not the learned estimator over-fits to the training camera parameters. To validate the generalization ability of different
models, we propose a new protocol based on different camera-view partitions for training and testing. In our setting,
subjects S1, S5, S6, S7, and S8 in 3 camera views are used
for training while subjects S9 and S11 in the remaining camera
view are selected for testing (down-sampled to 10 fps). The
suggested protocol guarantees that not only subjects but also
camera views differ between training and testing, eliminating interference from subject appearance and camera parameters, respectively. We refer to our new protocol as protocol #3.
For HumanEva-I, we follow the previous protocol, evaluating on each action separately with all subjects. A rigid
transformation is performed before computing the mean reconstruction error.
5.3 Implementation Details
We implement our method using Keras with TensorFlow as
back-end. We first train our base network for 200 epochs. The
learning rate is set to 0.001 with exponential decay and the
batch size is set to 64 in the first step. Then we add the 3D-pose grammar network on top of the base network and fine-tune the whole network together. The learning rate is set to
10−5 during the second step to guarantee model stability in
the training phase. We adopt the Adam optimizer for both steps.
We perform 2D pose detections using a state-of-the-art
2D pose estimator (Newell, Yang, and Deng 2016). We fine-tuned the model on Human3.6M and use the pre-trained
model on HumanEva-I and MPII. Our deep grammar network is trained with 2D pose detections as inputs and 3D
pose ground-truth as outputs. For protocol #1 and protocol
#2, the data augmentation is omitted due to little improvement and tripled training time. For protocol #3, in addition
Methods                       | Walking S1  S2  S3   | Jogging S1  S2  S3   | Avg.
Simo-Serra et al. (CVPR'13)   | 65.1  48.6  73.5     | 74.2  46.6  32.2     | 56.7
Kostrikov et al. (BMVC'14)    | 44.0  30.9  41.7     | 57.2  35.0  33.3     | 40.3
Yasin et al. (CVPR'16)        | 35.8  32.4  41.6     | 46.6  41.4  35.4     | 38.9
Moreno-Noguer (CVPR'17)       | 19.7  13.0  24.9     | 39.7  20.0  21.0     | 26.9
Pavlakos et al. (CVPR'17)     | 22.3  19.5  29.7     | 28.9  21.9  23.8     | 24.3
Martinez et al. (ICCV'17)     | 19.7  17.4  46.8     | 26.9  18.2  18.6     | 24.6
Ours                          | 19.4  16.8  37.4     | 30.4  17.6  16.3     | 22.9
Table 2: Quantitative comparisons of the mean reconstruction error (mm) on HumanEva-I. The best score is marked
in bold.
to the original 3 camera views, we further augment the training set with 6 virtual camera views on the same horizontal
plane. The circle that is centered at the human subject and on which all cameras are located is evenly segmented into 12
sectors of 30 degrees each, and the 4 real cameras occupy
4 sectors. We generate training samples on 6 out of the 8 unoccupied sectors and leave the 2 closest to the testing camera
unused to avoid overfitting. The 2D poses generated from
the virtual camera views are augmented by our PSS. During
each epoch, we sample our learned distribution once
and generate a new batch of synthesized data.
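The sector layout can be made concrete with a small sketch; the assignment of real cameras to sectors 0/3/6/9 and the choice of sector 0 as the held-out test camera are illustrative assumptions, not stated in the text:

```python
# 12 sectors of 30 degrees around the subject; we assume the 4 real cameras
# sit in sectors 0, 3, 6 and 9, and that sector 0 holds the testing camera.
real = {0, 3, 6, 9}
test_cam = 0
free = [s for s in range(12) if s not in real]
# Drop the 2 free sectors adjacent to the testing camera to avoid overfitting.
excluded = {(test_cam - 1) % 12, (test_cam + 1) % 12}
virtual = [s for s in free if s not in excluded]
virtual_azimuths_deg = [30 * s for s in virtual]  # the 6 augmented view angles
```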
Empirically, one forward and backward pass takes 25 ms
on a Titan X GPU and a forward pass takes 10 ms only,
allowing us to train and test our network efficiently.
5.4 Results and Comparisons
Human3.6M. We evaluate our method under all three protocols. We compare our method with 10 state-of-the-art methods (Ionescu et al. 2014; Tekin et al. 2016b; Du et al. 2016;
Chen and Ramanan 2016; Sanzari, Ntouskos, and Pirri 2016;
Rogez and Schmid 2016; Bogo et al. 2016; Pavlakos et al.
2017; Nie, Wei, and Zhu 2017; Zhou et al. 2017; Martinez
et al. 2017) and report quantitative comparisons in Table 1.
From the results, our method obtains superior performance
over the competing methods under all protocols.

Figure 5: Qualitative results of our method on Human3.6M and MPII. We show the estimated 2D pose on the original image
and the estimated 3D pose from a novel view. Results on Human3.6M are drawn in the first row and results on MPII are drawn
in the second to fourth rows. Best viewed in color.
To verify our claims, we re-train three previous methods, which obtain top performance under protocol #1, with
protocol #3. The quantitative results are reported in Table 1. The performance of previous 2D-3D reconstruction models (Pavlakos et al. 2017;
Nie, Wei, and Zhu 2017; Zhou et al. 2017; Martinez et al.
2017) drops significantly (17%-41%), which demonstrates the blind spot of previous evaluation protocols and the over-fitting problem of those models.
Notably, our method greatly surpasses previous methods
(12 mm improvement over the second best) under cross-view
evaluation (i.e., protocol #3). Additionally, the large performance gap of (Martinez et al. 2017) between protocol #1 and
protocol #3 (62.9 mm vs. 84.9 mm) demonstrates that previous 2D-to-3D reconstruction networks easily over-fit to
camera views. Our consistent improvements over different settings demonstrate our superior performance and good generalization.
HumanEva-I. We compare our method with 6 state-ofthe-art methods (Simo-Serra et al. 2013; Kostrikov and Gall
2014; Yasin et al. 2016; Moreno-Noguer 2017; Pavlakos et
al. 2017; Martinez et al. 2017). The quantitative comparisons on HumanEva-I are reported in Table 2. As seen, our
results outperform previous methods across the vast majority of subjects and on average.
MPII. We visualize sampled results generated by our
method on MPII as well as Human3.6M in Figure 5. As
seen, our method is able to accurately predict 3D pose for
both indoor and in-the-wild images.
5.5 Ablation Studies
We study different components of our model on the Human3.6M dataset under protocol #3, as reported in Table 3.
Pose grammar. We first study the effectiveness of our
Component           | Variants                        | Error (mm) | Δ
Pose grammar        | Ours, full                      | 72.8       | –
                    | w/o. grammar                    | 75.1       | 2.3
                    | w. kinematics                   | 73.9       | 1.1
                    | w. kinematics+symmetry          | 73.2       | 0.4
PSS                 | w/o. extra 2D-3D pairs          | 82.6       | 9.8
                    | w. extra 2D-3D pairs, GT        | 76.7       | 3.9
                    | w. extra 2D-3D pairs, simple    | 78.0       | 5.2
PSS Generalization  | Bruce et al. (ICCV'17) w/o.     | 112.1      | –
                    | Bruce et al. (ICCV'17) w.       | 96.3       | 15.8
                    | Martinez et al. (ICCV'17) w/o.  | 84.9       | –
                    | Martinez et al. (ICCV'17) w.    | 76.0       | 8.9
Table 3: Ablation studies on different components in our
method. The evaluation is performed on Human3.6M under
Protocol #3. See text for detailed explanations.
grammar model, which encodes high-level grammar constraints into our network. First, we examine the performance
of our baseline by removing all three grammars from our
model; the error is 75.1 mm. Adding the kinematic grammar provides parent-child relations to body joints, reducing the error by 1.6% (75.1 mm → 73.9 mm). Adding the symmetry grammar on top yields a further error drop
(73.9 mm → 73.2 mm). After combining all three grammars
together, we reach a final error of 72.8 mm.
Pose Sample Simulator (PSS). Next we evaluate the influence of our 2D-pose sample simulator. Comparing the
results of only using the data from the original 3 camera views
in Human3.6M with the results of adding samples generated as ground-truth 2D-3D pairs from 6 extra camera views,
we see a 7% error drop (82.6 mm → 76.7 mm), showing that extra training data indeed improve the generalization
ability. Next, we compare our Pose Sample Simulator to a
simple baseline, i.e., generating samples by adding random
noise to each joint, say from an arbitrary Gaussian distribution or
white noise. Unsurprisingly, we observe a drop in performance, which is even worse than using the ground-truth 2D
pose. This suggests that the conditional distribution p(U|Û)
helps bridge the gap between detection results and ground-truth. Furthermore, we re-train the models proposed in (Nie,
Wei, and Zhu 2017; Martinez et al. 2017) to validate the
generalization of our PSS. The results also show a performance
boost for their methods, which confirms that the proposed PSS is
a general technique. Therefore, this ablative study validates the generalization as well as the effectiveness of our PSS.
6 Conclusion
In this paper, we propose a pose grammar model to encode
the mapping function of human pose from 2D to 3D. Our
method obtains superior performance over other state-of-the-art methods by explicitly encoding human body configuration with pose grammar and a generalized data augmentation technique. We will explore more interpretable and effective network architectures in the future.
References
Andriluka, M.; Pishchulin, L.; Gehler, P.; and Schiele, B. 2014. 2d
human pose estimation: New benchmark and state of the art analysis. In IEEE Conference on Computer Vision and Pattern Recognition.
Bogo, F.; Kanazawa, A.; Lassner, C.; Gehler, P.; Romero, J.; and
Black, M. J. 2016. Keep it smpl: Automatic estimation of 3d human
pose and shape from a single image. In European Conference on
Computer Vision.
Charles, J.; Pfister, T.; Magee, D.; Hogg, D.; and Zisserman, A.
2016. Personalizing human video pose estimation. In IEEE Conference on Computer Vision and Pattern Recognition.
Chen, C.-H., and Ramanan, D. 2016. 3d human pose estimation = 2d pose estimation + matching. arXiv preprint arXiv:1612.06524.
Du, Y.; Wong, Y.; Liu, Y.; Han, F.; Gui, Y.; Wang, Z.; Kankanhalli,
M.; and Geng, W. 2016. Marker-less 3d human motion capture
with monocular image sequence and height-maps. In European
Conference on Computer Vision.
Han, F., and Zhu, S.-C. 2009. Bottom-up/top-down image parsing
with attribute grammar. IEEE Transactions on Pattern Analysis
and Machine Intelligence 31(1):59–73.
He, K.; Zhang, X.; Ren, S.; and Sun, J. 2016. Deep residual learning for image recognition. In IEEE Conference on Computer Vision
and Pattern Recognition.
Ionescu, C.; Papava, D.; Olaru, V.; and Sminchisescu, C. 2014.
Human3.6M: Large scale datasets and predictive methods for 3d
human sensing in natural environments. IEEE Transactions on Pattern Analysis and Machine Intelligence 36(7):1325–1339.
Jiang, H. 2010. 3d human pose reconstruction using millions of
exemplars. In IEEE International Conference on Pattern Recognition.
Kostrikov, I., and Gall, J. 2014. Depth sweep regression forests for
estimating 3d human pose from images. In British Machine Vision
Conference.
Lee, C.-Y.; Xie, S.; Gallagher, P.; Zhang, Z.; and Tu, Z. 2015.
Deeply-supervised nets. In Artificial Intelligence and Statistics.
Li, S., and Chan, A. B. 2014. 3d human pose estimation from
monocular images with deep convolutional neural network. In
Asian Conference on Computer Vision.
Liu, T.; Chaudhuri, S.; Kim, V.; Huang, Q.; Mitra, N.; and
Funkhouser, T. 2014. Creating consistent scene graphs using a
probabilistic grammar. ACM Transactions on Graphics 33(6):1–
12.
Martinez, J.; Hossain, R.; Romero, J.; and Little, J. J. 2017. A
simple yet effective baseline for 3d human pose estimation. In
IEEE International Conference on Computer Vision.
Moreno-Noguer, F. 2017. 3d human pose estimation from a single
image via distance matrix regression. IEEE Conference on Computer Vision and Pattern Recognition.
Newell, A.; Yang, K.; and Deng, J. 2016. Stacked hourglass networks for human pose estimation. In European Conference on
Computer Vision.
Nie, B. X.; Wei, P.; and Zhu, S.-C. 2017. Monocular 3d human pose
estimation by predicting depth on joints. In IEEE International
Conference on Computer Vision.
Park, S.; Nie, X.; and Zhu, S.-C. 2015. Attributed and-or grammar for joint parsing of human pose, parts and attributes. IEEE
International Conference on Computer Vision.
Paul, G. S.; Viola, P.; and Darrell, T. 2003. Fast pose estimation
with parameter-sensitive hashing. In IEEE International Conference on Computer Vision.
Pavlakos, G.; Zhou, X.; Derpanis, K. G.; and Daniilidis, K. 2017.
Coarse-to-fine volumetric prediction for single-image 3D human
pose. In IEEE International Conference on Computer Vision.
Rogez, G., and Schmid, C. 2016. Mocap-guided data augmentation
for 3d pose estimation in the wild. In Annual Conference on Neural
Information Processing Systems.
Sanzari, M.; Ntouskos, V.; and Pirri, F. 2016. Bayesian image
based 3d pose estimation. In European Conference on Computer
Vision.
Sigal, L.; Balan, A. O.; and Black, M. J. 2010. Humaneva: Synchronized video and motion capture dataset and baseline algorithm
for evaluation of articulated human motion. International Journal
of Computer Vision 87(1):4–27.
Simo-Serra, E.; Quattoni, A.; Torras, C.; and Moreno-Noguer, F.
2013. A joint model for 2d and 3d pose estimation from a single image. In IEEE Conference on Computer Vision and Pattern
Recognition.
Tekin, B.; Katircioglu, I.; Salzmann, M.; Lepetit, V.; and Fua, P.
2016a. Structured prediction of 3d human pose with deep neural
networks. In British Machine Vision Conference.
Tekin, B.; Rozantsev, A.; Lepetit, V.; and Fua, P. 2016b. Direct
prediction of 3d body poses from motion compensated sequences.
In IEEE International Conference on Computer Vision.
Xu, Y.; Lin, L.; Zheng, W.-S.; and Liu, X. 2013. Human re-identification by matching compositional template with cluster
sampling. In IEEE International Conference on Computer Vision.
Xu, Y.; Liu, X.; Liu, Y.; and Zhu, S.-C. 2016. Multi-view people
tracking via hierarchical trajectory composition. In IEEE Conference on Computer Vision and Pattern Recognition.
Xu, Y.; Liu, X.; Qin, L.; and Zhu, S.-C. 2017. Multi-view people
tracking via hierarchical trajectory composition. In AAAI Conference on Artificial Intelligence.
Xu, Y.; Ma, B.; and Lin, R. H. L. 2014. Person search in a scene
by jointly modeling people commonness and person uniqueness.
In ACM Multimedia.
Yasin, H.; Iqbal, U.; Kruger, B.; Weber, A.; and Gall, J. 2016. A
dual-source approach for 3d pose estimation from a single image.
In IEEE Conference on Computer Vision and Pattern Recognition.
Zhou, X.; Huang, Q.; Sun, X.; Xue, X.; and Wei, Y. 2017. Towards 3d human pose estimation in the wild: a weakly-supervised
approach. In IEEE International Conference on Computer Vision.
Performance Analysis of Source Image Estimators in Blind Source Separation
Zbyněk Koldovský¹ and Francesco Nesta²

arXiv:1603.04179v3 [cs.SD] 22 May 2017

¹ Faculty of Mechatronics, Informatics, and Interdisciplinary Studies, Technical University of Liberec, Studentská 2, 461 17 Liberec, Czech Republic. E-mail: [email protected], fax: +420-485-353112, tel: +420-485-353534
² Conexant System, 1901 Main Street, Irvine, CA (USA). E-mail: [email protected]
Abstract
Blind methods often separate or identify signals or signal subspaces up to an unknown scaling factor.
Sometimes it is necessary to cope with the scaling ambiguity, which can be done through reconstructing signals as
they are received by sensors, because scales of the sensor responses (images) have known physical interpretations.
In this paper, we analyze two approaches that are widely used for computing the sensor responses, especially
in Frequency-Domain Independent Component Analysis. One approach is the least-squares projection, while
the other assumes a regular mixing matrix and computes its inverse. Both estimators are invariant to the
unknown scaling. Although frequently used, their differences have not yet been studied. A goal of this work is to
fill this gap. The estimators are compared through a theoretical study, perturbation analysis and simulations. We
point to the fact that the estimators are equivalent when the separated signal subspaces are orthogonal, and vice
versa. Two applications are shown, one of which demonstrates a case where the estimators yield substantially
different results.
Index Terms
Beamforming, Blind Source Separation, Independent Component Analysis, Principal Component Analysis, Independent
Vector Analysis
I. INTRODUCTION
The linear instantaneous complex-valued mixture model
x = Hs
or
X = HS
(1)
describes many situations where multichannel signals are observed, especially those considered in the field of
array processing [1] and Blind Source Separation (BSS) [2], [3]. The former equation is a vector-symbolic
description of the model while the latter equation describes a batch of data.
The vector x = [x1 , . . . , xd ]T represents d observed signals on sensors, s = [s1 , . . . , sr ]T represents original
signals, and H is a d × r complex-valued mixing matrix representing the linear mixing system. Upper-case
bold letters such as X and S will denote matrices whose columns contain concrete samples of the respective
signals; let the number of columns (samples) be N, where N ≫ d. We will focus on the regular case, when the
number of observed signals d is the same as that of the original signals r, but later in the article we will
also address an underdetermined case where r > d. From this point forward, let H be a d × d full-rank matrix.
Consider a situation where only a subset of the original signals is of primary interest (e.g., only one
particular source or the subspace spanned by some sources, a so-called multidimensional source). Without loss
of generality, let s be divided into two components [s1 ; s2 ] where the sub-vectors s1 and s2 have, respectively,
length m and d − m, 1 ≤ m < d. The former component will be referred to as target component, and the
latter as interference. Correspondingly, let H be divided as [H1 H2 ] where the sub-matrices H1 and H2 have
dimensions d × m and d × (d − m), respectively. Then (1) can be written as
x = H1 s1 + H2 s2.    (2)

The terms H1 s1 and H2 s2 correspond to the contributions of s1 and s2, respectively, to the mixture x, and
will be denoted as si ≜ Hi si, i ∈ {1, 2}. If, for example, s2 is not active, then x = s1, which is equal to the
observations of s1 on the sensors, that is, the sensor response or source image of s1.
In audio applications, s1 is often a scalar signal (m = 1) representing a point source located in the room,
and the model (1) describes linear mixing in the frequency domain for a particular frequency bin [4], [5], [6],
[7]. For m > 1, s1 can correspond to a subgroup of speakers [8]. In biomedical applications, s1 or s2 can
consist of components related to a target activity such as muscular artifacts in electroencephalogram (EEG) [9],
maternal or fetal electrocardiogram (ECG) [10], and so forth.
The problem of retrieving si from x is often solved with the aid of methods for Blind Source Separation. The
objective of BSS is to separate the original signals based purely on their general properties (e.g., independence,
sparsity or nonnegativity). In a general sense, BSS involves Principal Component Analysis (PCA), Independent
Component Analysis (ICA) [2], [11], Independent Vector Analysis (IVA) [12], [13], Nonnegative Matrix
Factorization [14], etc. Some methods separate all of the one-dimensional components of s [15], [16], extract
selected components only [17], or separate multidimensional components; see, e.g., [18], [19], [20], [21], [22].
The separation can also proceed in two steps where a steering vector/matrix (e.g., H1 ) is identified first, while the
signals are separated in the second step using an array processor such as the minimum variance distortionless
(MVDR) beamformer [23].
The blind separation or identification is often not unique. For example, the order and scaling factors of
the separated components are random and cannot be determined without additional assumptions. Throughout
this paper, we will always assume that the problem of the random order (known as the permutation problem)
has already been resolved, so the subspaces in (2) are correctly identified; for practical methods solving the
permutation problem, see, e.g., [5], [24], [25].
The reconstruction of the signal images is a way to cope with the scaling ambiguity [7]. The advantage
is that si can be retrieved without prior knowledge of the scale of si in (1) or in (2). The scale of si has a
clear physical interpretation (e.g., voltage), so the retrieval is highly practical. For example, in the Frequency
Domain ICA for audio source separation, the scaling ambiguity must be resolved within each frequency bin,
3
which is important for reconstructing the spectra of the separated signals in the time domain [5]. Some recent
BSS methods aim to consider the observed signals directly as the sum of images of original sources, by which
the scaling ambiguity is implicitly avoided and the number of unknown free parameters in the BSS model is
decreased [4], [26], [27]. This motivates the present study, because the way the signal images
(s^1 or s^2) are reconstructed is an important topic.
In this paper, we study two widely used methods to estimate the source images: one approach performs the
least-squares projection of a separated source onto the subspace spanned by X. The other approach assumes that a
blind estimate of a demixing transform is available and exploits its inverse to compute the sources' images. Both
estimators are invariant to the unknown scaling of the separated sources. The goal of this study is to compare these
estimators, which has not been done before, and to provide guidance on which estimator is advantageous
in which respects. We also show conditions under which the estimators are equivalent.
The following section introduces the estimators and points to their important properties and relations. Section
III contains a perturbation analysis that studies cases where the estimated demixing transform contains “small”
errors. Section IV studies properties of the least-squares estimator in underdetermined situations, that is, when
there are more original signals than the observed ones. Section V presents results of simulations, and, finally,
Section VI demonstrates two applications.
II. SOURCE IMAGE ESTIMATORS
Consider an exact demixing transform, a regular d × d matrix W, defined such that

WH = bdiag(Λ1, Λ2),  (3)

where Λ1 and Λ2 are arbitrary nonsingular matrices representing the random scaling factors, of dimensions
m × m and (d − m) × (d − m), respectively; bdiag(·) denotes a block-diagonal matrix with the arguments on
its block-diagonal. By applying W to x, the outputs are

y = Wx = WHs = [y1; y2] = [Λ1 s1; Λ2 s2].  (4)

The components y1 = Λ1 s1 and y2 = Λ2 s2 are separated in the sense that each is a mixture only of s1 and
s2, respectively.
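The separation property in (3)–(4) can be checked numerically. The following sketch (plain NumPy; real-valued data for brevity, whereas the paper allows complex signals, and all dimensions are illustrative) constructs a demixing matrix with the required block-diagonal product and verifies that each output depends on only one source group:

```python
import numpy as np

rng = np.random.default_rng(0)
d, m = 4, 2

# Random nonsingular mixing matrix H = [H1 H2] and stacked sources s = [s1; s2].
H = rng.standard_normal((d, d))
s = rng.standard_normal((d, 1000))

# A demixing W satisfying (3): W = bdiag(L1, L2) @ inv(H), so that
# W @ H = bdiag(L1, L2) with arbitrary nonsingular blocks L1, L2.
L1 = rng.standard_normal((m, m))
L2 = rng.standard_normal((d - m, d - m))
B = np.block([[L1, np.zeros((m, d - m))],
              [np.zeros((d - m, m)), L2]])
W = B @ np.linalg.inv(H)

x = H @ s                      # observed mixture (1)
y = W @ x                      # outputs (4)

# y1 is a mixture of s1 only (up to L1), and y2 of s2 only (up to L2):
assert np.allclose(y[:m], L1 @ s[:m])
assert np.allclose(y[m:], L2 @ s[m:])
```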
Let W1 and W2 be sub-matrices of W such that W = [W1; W2], and W1 contains the first m rows of
W, i.e., y1 = W1 x. From (3) it follows that W is demixing if and only if¹ W1 H2 = 0 and W2 H1 = 0.
Throughout the paper we will occasionally use the following assumptions. Consider an estimated demixing
matrix Ŵ.
A1(i) The assumption that Ŵi Hj = 0 where j ∈ {1, 2}, j ≠ i. Assuming an exact demixing transform thus
corresponds to A1(1) simultaneously with A1(2).
A2 The assumption of uncorrelatedness of s1 and s2 means that E[s1 s2^H] = 0.

¹A more general definition is that W is demixing if and only if W1 H2 s2 = 0 and W2 H1 s1 = 0. However, we will assume that the
mixing model (1) is determined, so cases where W1 H2 s2 = 0 while W1 H2 ≠ 0 and similar do not exist.
A3 The assumption of orthogonality of Y1 and Y2, that is,

Y1 Y2^H / N = 0,  (5)

in BSS also known as the orthogonal constraint [28], means that the sample-based estimate of E[y1 y2^H] is
exactly equal to zero.
In the determined case r = d and under A1(1) and A1(2), the condition (5) corresponds with

S1 S2^H / N = 0,  (6)

but not generally so when r > d. The latter condition could be seen as a stronger alternative to A2.
For example, the orthogonal constraint A3 is used by some ICA methods such as Symmetric or Deflation
FastICA [15]. There are several reasons for this. First, A2 is a necessary condition of independence of the
original signals, so A3 is a practical way to decrease the number of unknown parameters in ICA. Second, A3
helps to achieve global convergence (to find all independent components) and prevents algorithms from
finding the same component twice. Finally, in the model (2) with the A3 constraint, Ŵi is already determined
up to a scaling matrix when Ŵj is given, j ≠ i, and vice versa.
A. Reconstruction Using Inverse of Demixing Matrix

The estimator to retrieve s^i described here will be abbreviated as INV.
Definition 1 (INV): Let Ŵ = [Ŵ1; Ŵ2] denote a demixing matrix estimated by a BSS method, and let
Â be its inverse matrix, i.e., Â = Ŵ⁻¹. Let Â = [Â1 Â2] be divided in the same way as the system matrix
H. Then, the INV estimator is defined as

ŝ^i_INV = Âi Ŵi x  or  Ŝ^i_INV = Âi Ŵi X,  (7)

for i ∈ {1, 2}.
In particular, INV is popular in the frequency-domain ICA for audio source separation; see, e.g., [7], [6],
[33]. The following two propositions point to its important properties.
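Definition 1 translates into a few lines of NumPy. This is a sketch assuming real-valued data and a stacked demixing matrix `W_hat = [W1; W2]`, with `m` the dimension of the first component group:

```python
import numpy as np

def inv_estimator(W_hat, X, m, i):
    """INV estimator (7): S_i = A_i @ W_i @ X with A = inv(W_hat).

    W_hat : estimated d x d demixing matrix [W1; W2],
    X     : d x N data matrix,
    m     : dimension of the first component group,
    i     : 1 or 2, selecting the group whose image is estimated.
    """
    A_hat = np.linalg.inv(W_hat)
    rows = slice(0, m) if i == 1 else slice(m, W_hat.shape[0])
    return A_hat[:, rows] @ W_hat[rows, :] @ X
```

Note that the two image estimates always sum to X, since A_hat @ W_hat = I.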
Proposition 1 (consistency of INV): Consider an exact demixing transform W satisfying A1(1) and A1(2),
that is, satisfying (3); let A = [A1 A2] be its inverse matrix. For i ∈ {1, 2}, it holds that

ŝ^i_INV = Ai Wi x = s^i.  (8)

Proof: By (3) it holds that Ai = Hi Λi⁻¹. Then,

Ai Wi x = Ai Wi (H1 s1 + H2 s2)  (9)
        = Ai Wi Hi si  (10)
        = Hi Λi⁻¹ Λi si = Hi si = s^i.  (11)
Proposition 2 (scaling invariance of INV): Let Ŵ be an estimated demixing transform and Â = Ŵ⁻¹.
The INV estimator is invariant to the substitution Ŵ ← bdiag(Λ1, Λ2)Ŵ, where Λ1 and Λ2 are arbitrary
nonsingular matrices of dimensions m × m and (d − m) × (d − m), respectively.
Proof: The proof follows from the fact that [Λ1 Ŵ1; Λ2 Ŵ2]⁻¹ = [Â1 Λ1⁻¹  Â2 Λ2⁻¹].
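Proposition 2 is easy to confirm numerically: rescaling the two blocks of Ŵ by arbitrary nonsingular matrices leaves the INV output unchanged. A small self-contained check (illustrative dimensions, real-valued data):

```python
import numpy as np

rng = np.random.default_rng(1)
d, m = 5, 2
W_hat = rng.standard_normal((d, d))      # some estimated demixing matrix
X = rng.standard_normal((d, 200))

def s1_inv(W):                           # INV estimate (7) of the first image
    A = np.linalg.inv(W)
    return A[:, :m] @ W[:m, :] @ X

# Substitute W <- bdiag(L1, L2) @ W with random nonsingular blocks.
L = np.zeros((d, d))
L[:m, :m] = rng.standard_normal((m, m))
L[m:, m:] = rng.standard_normal((d - m, d - m))

assert np.allclose(s1_inv(W_hat), s1_inv(L @ W_hat))
```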
One advantage is that the transform Ai Wi is purely a function of W and does not explicitly depend on the
signals or on their statistics, e.g., on the sample-based covariance matrix. This makes the approach suitable for
real-time processing [29].
On the other hand, Ai Wi is a function of the whole W through the matrix inverse; it does not depend
solely on Wi , as one would expect when only si should be estimated. Formula (7) can thus be used only if
the whole demixing W is available. BSS methods extracting only selected components (e.g., one-unit FastICA
[15]) cannot be applied together with (7). Next, it follows that potential errors in the estimate of W2 can have
an adverse effect on the estimation of s1 . This is analyzed in Section III.
B. Least-squares reconstruction

Another approach to estimate s^i is to find an optimum projection of the separated components back to the
observed signals x in order to find their contribution to them. A straightforward way is to use the quadratic
criterion, that is, least squares, which gives two estimators that will be abbreviated by LS.
Definition 2 (LS): Let Ŵi denote an estimated part of a demixing matrix, i ∈ {1, 2}, yi = Ŵi x and
Yi = Ŵi X. The theoretical LS estimator of s^i is defined as

ŝ^i_LS = arg min_V E ‖x − V yi‖²  (12)
       = C Ŵi^H (Ŵi C Ŵi^H)⁻¹ Ŵi x,  (13)

where C = E[x x^H]. The practical LS estimator of S^i is defined as

Ŝ^i_LS = arg min_V ‖X − V Yi‖²_F  (14)
       = Ĉ Ŵi^H (Ŵi Ĉ Ŵi^H)⁻¹ Ŵi X,  (15)

where Ĉ = X X^H / N, and ‖·‖_F denotes the Frobenius norm.
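The practical estimator (15) is essentially one linear solve. A sketch for real-valued data, where the Hermitian transpose reduces to an ordinary transpose:

```python
import numpy as np

def ls_estimator(Wi_hat, X):
    """Practical LS estimator (15): C W_i^T (W_i C W_i^T)^{-1} W_i X,
    with C = X X^T / N the sample covariance (real-valued sketch)."""
    N = X.shape[1]
    C = X @ X.T / N
    G = Wi_hat @ C @ Wi_hat.T          # sample covariance of Y_i = W_i X
    return C @ Wi_hat.T @ np.linalg.solve(G, Wi_hat @ X)
```

A quick check of Proposition 3 below: replacing `Wi_hat` by `L @ Wi_hat` for any nonsingular `L` leaves the output unchanged, since the scaling cancels inside the inverted Gram matrix.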
Proposition 3 (scaling invariance of LS): The estimators (12) and (14) are invariant to a scaling transform
Ŵi ← Λi Ŵi where Λi is a nonsingular square matrix.
The proof of Proposition 3 is straightforward. It is worth pointing out that the LS estimators are purely
functions of Ŵi, as compared to INV. Also, they involve the covariance matrix or its sample-based estimate.
However, their consistency is not guaranteed under A1(i) even if the assumption holds for both i = 1, 2 as
assumed in Proposition 1. In fact, additional assumptions are needed, as is shown by the following proposition.
Proposition 4: Let Wi be a part of an exact demixing matrix, so A1(i) holds. Let Wi Hi = Λi be
nonsingular. Then, under A2 it holds that

ŝ^i_LS = C Wi^H (Wi C Wi^H)⁻¹ Wi x = s^i,  (16)

and under A3 it holds that

Ŝ^i_LS = Ĉ Wi^H (Wi Ĉ Wi^H)⁻¹ Wi X = S^i.  (17)

Proof: The proof will be given for (16) while the one for (17) is analogous.
According to (1) it holds that C = E[x x^H] = H Cs H^H where Cs = E[s s^H]. Under A2 it follows that Cs
has the block-diagonal structure Cs = bdiag(Cs1, Cs2) where Cs1 = E[s1 s1^H] and Cs2 = E[s2 s2^H] are regular
(because Cs is assumed to be regular). Without a loss of generality, let i = 1. Since W1 H = (Λ1 0),

ŝ^1_LS = C W1^H (W1 C W1^H)⁻¹ W1 x  (18)
       = H Cs H^H W1^H (W1 H Cs H^H W1^H)⁻¹ W1 H s  (19)
       = H1 Cs1 Λ1^H (Λ1 Cs1 Λ1^H)⁻¹ Λ1 s1  (20)
       = H1 s1 = s^1.  (21)
It is worth pointing out that LS involves a matrix inverse, namely, of Wi C Wi^H or of Wi Ĉ Wi^H. This
matrix (actually, the (sample) covariance of yi (Yi)) has a lower dimension than W and is more likely well
conditioned, so that the computation of its inverse is numerically stable.
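Proposition 4 can be checked numerically by building an exact demixing matrix and a block-diagonal source covariance, i.e., enforcing A1(1) and A2 exactly (real-valued sketch, illustrative dimensions):

```python
import numpy as np

rng = np.random.default_rng(2)
d, m = 4, 2

H = rng.standard_normal((d, d))        # mixing matrix, H = [H1 H2]
W = np.linalg.inv(H)
W1 = W[:m, :]                          # exact upper part: W1 @ H2 = 0 (A1(1))

# A2: uncorrelated groups => block-diagonal source covariance Cs.
A1b = rng.standard_normal((m, m))
A2b = rng.standard_normal((d - m, d - m))
Cs = np.zeros((d, d))
Cs[:m, :m] = A1b @ A1b.T + 0.1 * np.eye(m)       # SPD block Cs1
Cs[m:, m:] = A2b @ A2b.T + 0.1 * np.eye(d - m)   # SPD block Cs2
C = H @ Cs @ H.T                       # population covariance of x

s = rng.standard_normal((d, 1))
x = H @ s

# LS transform of (16) built from the exact W1 and the true C:
T = C @ W1.T @ np.linalg.solve(W1 @ C @ W1.T, W1)
assert np.allclose(T @ x, H[:, :m] @ s[:m])   # recovers the image s^1
```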
C. On the equivalence between INV and LS under the orthogonal constraint

Without a loss of generality, assume that Ŵ1 is given. Under the assumption A3, Ŵ2 is already determined
up to a scaling matrix through (5), so the whole Ŵ is actually available, and the INV estimator (7) can be
applied. The goal here is to verify that, in this case, INV coincides with LS.
Let B denote the unknown lower part of the entire demixing Ŵ = [Ŵ1; B]. Then,

Ŵ X = [Ŵ1 X; B X] = [Y1; Y2].  (22)

The condition (5) requires that

B Ĉ Ŵ1^H = 0,  (23)

which means that the rows of B are orthogonal to the columns of Ĉ Ŵ1^H. It can be verified that any B of the
form

B = Q(I − Ĉ Ŵ1^H (Ŵ1 Ĉ Ĉ Ŵ1^H)⁻¹ Ŵ1 Ĉ),  (24)

where Q is an arbitrary (d − m) × d matrix such that B has full row-rank, meets the condition (23).
Now, to apply (7), Â1 must be computed, which consists of the first m columns of Â = Ŵ⁻¹, so it satisfies

Ŵ1 Â1 = I,  (25)
B Â1 = Q(I − Ĉ Ŵ1^H (Ŵ1 Ĉ Ĉ Ŵ1^H)⁻¹ Ŵ1 Ĉ) Â1 = 0.  (26)

The latter equation is satisfied whenever Â1 = Ĉ Ŵ1^H R where R is an m × m matrix. To satisfy also (25),
R = (Ŵ1 Ĉ Ŵ1^H)⁻¹. Finally,

Ŝ^1_INV = Â1 Ŵ1 X = Ĉ Ŵ1^H (Ŵ1 Ĉ Ŵ1^H)⁻¹ Ŵ1 X,  (27)

which allows us to conclude this section by the following proposition.
Proposition 5: Let Ŵi be a part of an estimated demixing matrix, i ∈ {1, 2}, and let A3 be assumed. Then,
Ŝ^i_INV = Ŝ^i_LS.
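The construction (24)–(27) can be reproduced numerically: completing an arbitrary Ŵ1 by rows orthogonal to the columns of Ĉ Ŵ1^H enforces A3 and makes INV and LS coincide, as stated in Proposition 5. A real-valued sketch with illustrative dimensions:

```python
import numpy as np

rng = np.random.default_rng(3)
d, m, N = 5, 2, 100
X = rng.standard_normal((d, N))
W1 = rng.standard_normal((m, d))          # arbitrary estimated upper part
C = X @ X.T / N                           # sample covariance

# Complete the demixing matrix via (24): rows of B orthogonal to M = C W1^T.
M = C @ W1.T
B = rng.standard_normal((d - m, d)) @ (
    np.eye(d) - M @ np.linalg.solve(M.T @ M, M.T))
W = np.vstack([W1, B])

# The orthogonal constraint (5) holds: Y1 @ Y2^T / N = 0.
Y = W @ X
assert np.allclose(Y[:m] @ Y[m:].T / N, 0, atol=1e-10)

# INV (7) and LS (15) estimates of the first image coincide (Proposition 5).
S1_inv = np.linalg.inv(W)[:, :m] @ W1 @ X
S1_ls = C @ W1.T @ np.linalg.solve(W1 @ C @ W1.T, W1 @ X)
assert np.allclose(S1_inv, S1_ls)
```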
III. PERTURBATION ANALYSIS

Throughout this section, let W denote the exact demixing transform, that is, W = H⁻¹. We present an
analysis of the sensor response estimators (7) and (14) when W is known up to a small deviation². Let
V = W + Ξ be the available observation of W where Ξ is a "small" matrix; V1 will denote the sub-matrix of
V containing the first m rows; similarly Ξ = [Ξ1; Ξ2]; A = V⁻¹ and A1 contains the first m columns of A;
let also ∆C = Ĉ − C be "small" and of the same asymptotic order as Ξ (typically, ∆C = O_p(N^(−1/2)) where
O_p(·) is the stochastic order symbol).
Now, consider the transform matrices T1 = A1 V1 and T2 = Ĉ V1^H (V1 Ĉ V1^H)⁻¹ V1 estimating S^1 from
X, respectively. The analysis resides in the computation of their squared distances (in the Frobenius norm) from
the ideal transform, that is, from H1 W1. Using first-order expansions and neglecting higher-order terms, it is
derived in the Appendix that the following approximations hold:

‖H1 W1 − T1‖²_F ≈ ‖H Ξ H1 W1 − H1 Ξ1‖²_F,  (28)

‖H1 W1 − T2‖²_F ≈ ‖H1 (Ξ1 C W1^H + W1 C Ξ1^H) Cs1⁻¹ W1 + (H1 W1 − I) ∆C W1^H Cs1⁻¹ W1 − H1 Ξ1 − C Ξ1^H Cs1⁻¹ W1‖²_F.  (29)
To provide a deeper insight, we will analyze a particular case where H = W = I, Cs1 = σ1² I, and
Cs2 = σ2² I. Let the elements of Ξ all be independent random variables with zero mean such that the variance
of each element of Ξi is equal to λi². For further simplification, let the elements of ∆Cij, which denotes the
ij-th block of ∆C, i, j = 1, 2, be also independent random variables whose variance is equal to σi² σj² C. Then,
the expectation values of the right-hand sides of (28) and (29), respectively, are equal to

E[‖H1 W1 − T1‖²_F] ≈ (λ1² + λ2²) m(d − m),  (30)

E[‖H1 W1 − T2‖²_F] ≈ ((1 + σ2⁴/σ1⁴) λ1² + (σ2²/σ1²) C) m(d − m).  (31)
Comparing (30) and (31) shows the pros and cons of the estimators. The latter depends on σ2²/σ1², which
reflects the ratio between the power of s1 and that of s2. The expression (30) does not depend on this ratio
explicitly³. For simplicity, let us assume that σ2²/σ1² = 1.
Next, (31) depends on C while (30) is independent of it. Since C captures the covariance estimation error
in ∆C, it typically decreases with the length of data N. Usually, C has asymptotic order O(N^(−1/2)); see, e.g.,
Appendix A.B in [30]. For the sake of the analysis, we will assume that C = 0. Then, (31) changes to

E[‖H1 W1 − T2‖²_F] ≈ 2 λ1² m(d − m).  (32)

The expressions (30) and (32) point to the main difference of the estimators when H = W = I: while the
performance of INV depends on λ1² and λ2², that of LS depends purely on λ1². In the special case λ1² = λ2², the
performances coincide.
²The analysis of (12) follows from that of (14) when ∆C = 0.
³Typically, there is an implicit dependency of the estimation error Ξ on σ2²/σ1². Therefore, λ1² as well as λ2² are influenced by σ1² and σ2².
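A quick Monte Carlo experiment reproduces the first-order predictions (30) and (32) in the special case H = W = I, σ1² = σ2² = 1, C = 0 (real-valued sketch; the trial count and variances are illustrative):

```python
import numpy as np

rng = np.random.default_rng(4)
d, m, trials = 20, 5, 200
lam1, lam2 = 1e-2, 1e-3          # std. dev. of the entries of Xi1, Xi2

ideal = np.zeros((d, d))
ideal[:m, :m] = np.eye(m)        # the ideal transform H1 @ W1 when H = W = I
C = np.eye(d)                    # true covariance used by LS (so Delta_C = 0)

e_inv = 0.0
e_ls = 0.0
for _ in range(trials):
    Xi = np.vstack([lam1 * rng.standard_normal((m, d)),
                    lam2 * rng.standard_normal((d - m, d))])
    V = np.eye(d) + Xi                       # perturbed demixing matrix
    T1 = np.linalg.inv(V)[:, :m] @ V[:m]     # INV transform
    T2 = C @ V[:m].T @ np.linalg.solve(V[:m] @ C @ V[:m].T, V[:m])  # LS
    e_inv += np.linalg.norm(ideal - T1, 'fro')**2 / trials
    e_ls += np.linalg.norm(ideal - T2, 'fro')**2 / trials

ratio_inv = e_inv / ((lam1**2 + lam2**2) * m * (d - m))  # close to 1, cf. (30)
ratio_ls = e_ls / (2 * lam1**2 * m * (d - m))            # close to 1, cf. (32)
```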
Fig. 1. Comparison of theoretical and experimental mean square error of INV and LS when d = 20, m = 5, H = W = I, σ1² = σ2² = 1,
C = 0, N = 10⁵, λ2² = 10⁻⁴. (Curves: INV theory and experimental, LS theory and experimental, and LS experimental for N = 10⁶.)
To verify the theoretical expectations, we conducted a simple simulation where d = 20, m = 5, H = W = I,
σ1² = σ2² = 1, C = 0, N = 10⁵, and λ2² = 10⁻⁴. Ξ and S were drawn from a complex Gaussian distribution with
zero mean and unit variance. The squared errors of INV and LS, averaged over 100 trials for each value
of λ1, are compared, respectively, with the expressions (30) and (32) in Fig. 1.
The theoretical error of INV is in good agreement with the experimental one for all values of λ1. The
same holds for that of LS as long as λ1 ≥ 0.003. For smaller values of λ1, the experimental error of LS levels off
while the theoretical one keeps decreasing. This is caused by the influence of ∆C, which is fully neglected in (32)
and modeled through C in (31). The experimental error of LS evaluated for N = 10⁶ confirms that (31) is a
more accurate theoretical error of LS than (32).
In this example, INV outperforms LS when λ1 > λ2, and vice versa. However, these results are valid only
in the special case when H = W = I. Simulations in Section V consider general mixing matrices and thereby
compare the estimators in more realistic situations.
IV. NOISE EXTRACTION FROM UNDERDETERMINED MIXTURES

A. Mixture Model

Now we focus on a more realistic scenario that appears in most array processing problems. Let the mixture
be described as

x = H1 s1 + s2,  (33)

where H1 is a d × m matrix having full column rank, s1 is an m × 1 vector of target components, and s2 is a
d × 1 vector of noise signals. Note that, in this model, s2 is simultaneously equal to its image s^2.
The mixture model corresponds with (1), but H is equal to [H1 I_{d×d}] and has dimensions d × (m + d),
which makes the problem underdetermined (r = m + d).
In general, a linear transform that separates s1 or s2 from x does not exist, unless (33) is implicitly regular
(e.g., when Cs2 has rank d − m)⁴. From now on, we focus on the difficult case where, generally speaking,
neither s1 nor s2 can be separated.
B. Target Signal Cancelation and Noise Extraction

Since the separation of s1 is not possible, multichannel noise reduction systems follow an inverse approach:
the target components s1 are first linearly canceled from the mixture in order to estimate a reference of the
noise components s2. Second, a linear transform or adaptive filtering is used to subtract the noise from the
mixture as much as possible; see, e.g., [37], [38], [39], [40], [42], [43].
Specifically, the cancelation of the target component is achieved through a matrix W such that

W H1 = 0.  (34)

Since H1 has rank m, the maximum possible rank of W is d − m, which points to the fundamental limitation:
the maximum dimension of the subspace spanned by the extracted noise signals Wx = Ws2 is d − m.
Assume for now that any (d − m) × d matrix W having full row-rank has been identified (e.g., using BSS).
To estimate s2, LS can be used (INV cannot be applied in the underdetermined case), so

ŝ^2_LS = C W^H (W C W^H)⁻¹ W x,  (35)

or

Ŝ^2_LS = Ĉ W^H (W Ĉ W^H)⁻¹ W X.  (36)

Proposition 6: Let W be a (d − m) × d transform matrix having rank d − m and satisfying (34), and let
Q denote a d × (d − m) matrix. Under A2, ŝ^2_LS is a minimizer of

min_{ŝ = QWx} E ‖s2 − ŝ‖².  (37)

Assuming (6), Ŝ^2_LS is a minimizer of

min_{Ŝ = QWX} ‖S2 − Ŝ‖²_F.  (38)

⁴For example, model (33) is often studied under the assumption that at most d signals out of s1 and s2 are active at a given time instant;
see, e.g., [35], [36].
Proof: Under A2 it holds that

arg min_{ŝ=QWx} E ‖s2 − ŝ‖² = arg min_{ŝ=QWx} E ‖x − ŝ‖²,

since the two costs differ only by a term independent of Q. When (6) holds, then

arg min_{Ŝ=QWX} ‖S2 − Ŝ‖²_F = arg min_{Ŝ=QWX} ‖X − Ŝ‖²_F.

The statements of the proposition follow, respectively, by the definitions (12) and (14).
The latter proposition points to important limitations of LS that should be taken into account in the underdetermined
scenario. First, LS estimates a d-dimensional signal only from a (d − m)-dimensional signal subspace.
Second, (35) is optimal under A2 in the least-squares sense of (37). Third, (36) is optimal in the sense of (38)
only when (6) is valid, which is a much stronger assumption than A2.
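The cancelation-plus-reconstruction chain (34)–(36) can be sketched as follows. Here the blocking matrix is obtained from a QR decomposition of H1 (one convenient choice; a BSS method could be used instead), with real-valued data and illustrative dimensions:

```python
import numpy as np

rng = np.random.default_rng(5)
d, m, N = 6, 2, 5000

H1 = rng.standard_normal((d, m))           # target mixing, full column rank
S1 = rng.standard_normal((m, N))           # target sources
S2 = 0.1 * rng.standard_normal((d, N))     # noise; equal to its own image
X = H1 @ S1 + S2                           # underdetermined mixture (33)

# Blocking matrix: rows span the orthogonal complement of col(H1), so (34)
# holds and W @ X = W @ S2 is a (d - m)-dimensional noise reference.
Q, _ = np.linalg.qr(H1, mode='complete')
W = Q[:, m:].T
assert np.allclose(W @ H1, 0)

# LS reconstruction (36) of the noise image from the reference.
C = X @ X.T / N
S2_hat = C @ W.T @ np.linalg.solve(W @ C @ W.T, W @ X)

# Only a (d - m)-dimensional subspace is separable, so the relative error
# is bounded away from zero; X - S2_hat is nevertheless a de-noised mixture.
err = np.linalg.norm(S2 - S2_hat) / np.linalg.norm(S2)
```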
V. SIMULATIONS

This section is devoted to extensive Monte Carlo simulations where the signals and system parameters are
randomly generated. Real and complex parts of random numbers are always generated independently according
to the Gaussian law with zero mean and unit variance. Each trial of a simulation consists of the following steps.
1) The dimension parameters d and m are chosen.
2) N samples of the original components s1 and s2 are randomly generated according to the Gaussian law.
3) The mixing matrix H is generated, W = H⁻¹, X = HS, and Ĉ = X X^H / N.
4) The estimation of W is simulated by adding random perturbations to its blocks, that is, Ŵ1 = W1 + Ξ1
and Ŵ2 = W2 + Ξ2, where the elements of Ξ1 and Ξ2 have, respectively, variances λ1² and λ2²; Ŵ =
[Ŵ1; Ŵ2]⁵. Then, Ŵ1 and Ŵ2 are multiplied by random regular scaling matrices of corresponding
dimensions.
5) The accuracy of the INV and LS estimates of S^1 using Ŵ is evaluated through the normalized mean-squared
error defined as

NMSE_j = ‖H1 W1 − Tj‖²_F / ‖H1 W1‖²_F,  j = 1, 2.  (39)

The following subsection reports results of simulations assuming the determined model. The next subsection
considers the underdetermined model (33).
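One trial of steps 1)–5) can be sketched as follows (real-valued data for brevity; the dimensions and variances are illustrative):

```python
import numpy as np

rng = np.random.default_rng(6)
d, m, N = 5, 2, 10_000
lam1, lam2 = 1e-2, 1e-2

# Steps 2-3: sources, mixing, exact demixing, data, sample covariance.
S = rng.standard_normal((d, N))
H = rng.standard_normal((d, d))
W = np.linalg.inv(H)
X = H @ S
C_hat = X @ X.T / N

# Step 4: perturb the two blocks of W and rescale them randomly.
W1 = rng.standard_normal((m, m)) @ (W[:m] + lam1 * rng.standard_normal((m, d)))
W2 = rng.standard_normal((d - m, d - m)) @ (
    W[m:] + lam2 * rng.standard_normal((d - m, d)))
W_hat = np.vstack([W1, W2])

# Step 5: NMSE (39) of the INV and LS transforms for the first image.
T1 = np.linalg.inv(W_hat)[:, :m] @ W_hat[:m]                   # INV
T2 = C_hat @ W1.T @ np.linalg.solve(W1 @ C_hat @ W1.T, W1)     # LS
ideal = H[:, :m] @ W[:m]
nmse = [np.linalg.norm(ideal - T, 'fro')**2 / np.linalg.norm(ideal, 'fro')**2
        for T in (T1, T2)]
```

Note that both estimators are invariant to the random block scaling (Propositions 2 and 3), so step 4 does not bias the comparison.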
A. Determined model

1) Influence of the Estimation Errors in Ŵ: The experiment is done with d = 5, m = 2, and N = 10⁴; λ2²
is equal to one of four constants (10⁻¹, 10⁻², 10⁻³, and 10⁻⁴) while λ1² is varied. Each simulation is repeated
in 10⁵ trials. The average NMSE achieved by INV and LS is shown in Fig. 2.
The results of INV are highly influenced by λ2², which controls the perturbation of Ŵ2. For example, for
λ2² = 10⁻¹ and λ2² = 10⁻², INV fails in the sense that the achieved NMSE is above 0 dB. This happens even

⁵Note that Ŵ1 and Ŵ2 are not constrained to satisfy A3. Otherwise, the comparison of INV and LS would not make sense due to
Proposition 5.
11
20
15
10
mean NMSE [dB]
5
0
INV, λ22=10−1
−5
LS, λ2=10−1
−10
INV, λ2=10
−15
LS, λ22=10−2
−20
INV, λ22=10−3
2
2
LS, λ22=10−3
−25
INV, λ22=10−4
−30
−35
−40
Fig. 2.
−2
LS, λ22=10−4
−30
−20
−10
0
10⋅log10(λ21/λ22)
10
20
30
40
[dB]
NMSE averaged over 105 trials where d = 5, m = 2, N = 104 , λ22 is fixed, and λ21 is varied.
c 1 , is relatively “small”. For λ2 = 10−3 and λ2 = 10−4 , the NMSE
if λ21 , which controls the perturbation of W
2
2
of INV decreases with decreasing λ21 . However, the NMSE is lower bounded (does not improve as λ21 → 0).
c 2.
All these results point to the dependency of INV on W
The NMSE of LS depends purely on λ1². It is always improved with the decreasing value of λ1² (it is
only limited by the length of data, which influences the accuracy of the sample covariance matrix Ĉ). In this
experiment, LS is outperformed by INV only in a few cases, namely, when λ2² = 10⁻⁴ and λ1²/λ2² is higher than
−14 dB. INV thus appears to be beneficial compared to LS in situations where the whole Ŵ is a sufficiently
accurate estimate of W.
2) Varying Dimension: In this scenario, the target component s1 has dimension one, i.e., m = 1, while
the dimension of the mixture d is changed from 2 through 20; N = 10⁴. The variances λ1² and λ2² are fixed,
namely, λ2² = 10⁻³, and λ1² is chosen such that 10 log10(λ1²/λ2²) corresponds, respectively, to −10, 0, and 10 dB.
The NMSE averaged over 10⁵ trials is shown in Fig. 3.
The NMSE values of both methods increase with growing d. In the INV case, the NMSE grows
smoothly until it reaches a certain threshold value of d. The experiments show that this threshold depends on
λ1² and λ2². Above this threshold, the NMSE of INV grows abruptly. This points to a higher sensitivity of INV to
the estimation errors in Ŵ when the dimension of data is "high".
LS yields smooth and monotonic behavior of the NMSE for every d. It is outperformed by INV if both λ1² and
λ2² as well as the data dimension d are sufficiently small.

Fig. 3. NMSE averaged over 10⁵ trials as a function of d = 2, . . . , 20; here m = 1, λ2² = 10⁻³, and N = 10⁴.
3) Target Component Dimension: The dimension of the mixture d is now put equal to 20, while the dimension
of the target component m is varied from 1 through d − 1. Results for three different choices of λ1² and λ2²
are shown in Fig. 4. The scenario with λ1² = λ2² = 10⁻³ appears to be difficult for both methods as they
do not achieve NMSE below 0 dB. INV also fails when λ2² = 10⁻³ and λ1²/λ2² corresponds to −10 dB (i.e.,
λ1² = 10⁻⁴) for m ≤ 17. This is in accordance with the results of the previous example showing that INV
fails when λ1², λ2² and d are "too large". The example here reveals one more detail: INV can benefit from
smaller perturbations of the target component (λ1² = 10⁻⁴) even if λ2² is larger, but the target dimension must
be large enough with respect to d.
LS performs independently of λ2², which is confirmed by the cases plotted with solid and dashed lines
in Fig. 4: these lines coincide as both correspond to the same λ1² (although different λ2²). LS is outperformed
by INV when λ1² = λ2² = 10⁻⁴, which, again, occurs when the estimation error of the whole Ŵ is very small.
B. Underdetermined model
In the example of this subsection, we consider the underdetermined mixture model (33) where m = 1,
d = 2, . . . , 20, and N = 50, . . . , 105 . The goal is to examine the reconstruction of the noise components S2
through (36). H1 is randomly generated. Then, W is such that its rows form a basis of the (d−m)-dimensional
subspace that is orthogonal to H1 plus a random Gaussian perturbation matrix whose elements have the variance
values equal to λ1² = 10^k, k = −2, −3, . . . , −6, respectively. After applying (36), the evaluation is done using
the normalized mean square distance

NMSE_{S2} = ‖S2 − Ŝ2‖²_F / ‖S2‖²_F.  (40)

Fig. 4. NMSE averaged over 10⁵ trials where d = 20, N = 10⁴, and m = 1, . . . , 19.

Fig. 5. Average NMSE_{S2} as a function of d achieved by (35) in an experiment with the underdetermined model (33); m = 1; N = 10⁴.
MMSE denotes the NMSE achieved by the optimum minimum mean-squared error solution (41). The signals are generated as random
complex Gaussian i.i.d.

Fig. 6. Average NMSE_{S2} as a function of N achieved through (36), (38) and (14); d = 4; λ1² = 10^k, k = −3, −4, −5.
Owing to the statement of Proposition 6, it is worth comparing Ŝ2 with the exact solution of (38), which
will be abbreviated by LSopt, and with the minimum mean square error solution, marked as MMSE, defined
as the minimizer of

min_{Q ∈ C^{d×d}} ‖S2 − Q X‖²_F.  (41)

The latter gives the minimum achievable value of NMSE_{S2} by a linear estimator; cf. (38) and (14).
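For real-valued data, the minimizer of (41) has the closed form Q = S2 X^T (X X^T)⁻¹ (the oracle Wiener-type solution, since it uses the true S2); a minimal sketch with synthetic data and illustrative dimensions:

```python
import numpy as np

rng = np.random.default_rng(7)
d, m, N = 4, 1, 2000

H1 = rng.standard_normal((d, m))
S1 = rng.standard_normal((m, N))
S2 = rng.standard_normal((d, N))
X = H1 @ S1 + S2                  # underdetermined mixture (33)

# Oracle MMSE linear estimator (41): least-squares fit of S2 from X.
Q = S2 @ X.T @ np.linalg.inv(X @ X.T)
S2_mmse = Q @ X

nmse = np.linalg.norm(S2 - S2_mmse)**2 / np.linalg.norm(S2)**2
```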
The results averaged over 10³ independent trials are shown in Figures 5 and 6. Fig. 5 shows the results for
N = 10⁴. One observation here is that the NMSE_{S2} achieved through LS gets closer to that of MMSE as λ1²
approaches zero. Next, NMSE_{S2} improves with growing dimension d, but it appears that it stops improving at
a certain d and grows beyond this threshold value, which depends on λ1². For example, when λ1² = 10⁻⁴, the
NMSE_{S2} decays until d = 8 and grows for d ≥ 10.
Fig. 6 shows NMSE_{S2} as a function of N when d = 4. This detailed observation shows that LS approaches
LSopt as N grows and λ1² approaches zero, but does not achieve the performance of MMSE. This is a
fundamental limitation due to the dimension of the separable signal subspace, that is, d − m.
VI. PRACTICAL EXAMPLES
A. De-noising of Electrocardiogram

Fig. 7 shows two seconds of a recording from a three-channel electrocardiogram (ECG) of a Holter monitor,
which was sampled at 500 Hz. The recording is strongly interfered by a noise signal originating from the
Holter display. The fundamental frequency of the noise is about 37 Hz, and the noise contains several harmonics.
Since the noise is significantly stronger than the ECG components, Principal Component Analysis (PCA)
can be used to find a demixing transform that separates the noise from the mixture. Therefore, we take the
eigenvector corresponding to the highest eigenvalue of the covariance matrix of the recorded data (the principal
vector) as the separating transform. Then, the noise responses on the electrodes are computed using LS and
subtracted from the original noisy recording. This approach is computationally cheaper than doing the whole
PCA and then using INV. According to Proposition 5, both approaches give the same result, as PCA yields
components that are exactly orthogonal.
To compare, we repeated the same experiment using the vector obtained through Independent Component
Analysis (ICA). One-unit FastICA [15] with the tanh(·) nonlinearity was used to compute the vector separating
the noise component. To avoid the permutation ambiguity, the algorithm was initialized from [1 1 1], because
the noise appears to be uniformly distributed over the electrodes. Also here, the approach is faster than doing
the whole orthogonally-constrained ICA (e.g., using Symmetric FastICA) and using INV.
Figures 8 and 9 show the resulting signals where the estimated images of the noise component were removed,
respectively, through PCA and ICA. Both results show very efficient subtraction of the noise. A visual inspection
of the detail in Fig. 8 shows certain residual noise that does not appear in Fig. 9, so the separation through
ICA appears to be more accurate than by PCA. By combining the one-unit ICA algorithm with LS, the computational
complexity of the ICA solution is decreased.
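The PCA-plus-LS pipeline described above can be sketched on synthetic surrogate data (the actual Holter recording is not available here, so a 37 Hz interference and weak random "ECG" components are simulated; all amplitudes are illustrative):

```python
import numpy as np

rng = np.random.default_rng(8)
n, fs = 1000, 500.0
t = np.arange(n) / fs

# Surrogate recording: weak "ECG" components plus a strong 37 Hz
# interference shared (with different gains) by the three channels.
ecg = 0.1 * rng.standard_normal((3, n))
noise = np.outer([1.0, 0.9, 1.1], np.sin(2 * np.pi * 37.0 * t))
X = ecg + noise

# Principal eigenvector of the covariance as the separating transform.
C = X @ X.T / n
eigval, eigvec = np.linalg.eigh(C)
w1 = eigvec[:, -1:].T                 # 1 x 3 row, highest eigenvalue

# LS image (15) of the noise component and its subtraction.
noise_img = C @ w1.T @ np.linalg.solve(w1 @ C @ w1.T, w1 @ X)
cleaned = X - noise_img
```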
B. Blind Separation with Incomplete Demixing Transforms

Proposition 5 points to the fact that if BSS is based on a method that yields (almost) orthogonal estimates of
s1 and s2, INV and LS are principally not that different. The example of this section demonstrates a situation
where the estimates are significantly nonorthogonal, so INV and LS yield considerably different results.
Recently, a novel approach for the blind separation of convolutive mixtures of audio signals has been proposed
in [45], [47]. The idea resides in the application of IVA in the frequency domain on a constrained subset of
frequencies where the input (mixed) signals are active. This is in contrast with the conventional Frequency-Domain
IVA (ICA), which is applied in all frequencies. The motivation behind it is threefold: computational
savings, improved accuracy (especially in environments with sparse room impulse responses), and the prediction
of the complete demixing transform for the separation of future signals whose activity appears in other frequencies.
The method proceeds in three main steps. First, the subset S of p percent of the most active frequency bins
is selected. This can be done by estimating the power spectrum of the signal on a reference microphone
using the coefficients of its short-term Fourier transform. The frequency bins with the maximum average magnitude
of Fourier coefficients are selected. Second, an IVA method is applied that estimates the demixing matrices
within the subset of the selected frequencies. The subset of the matrices is referred to as the Incomplete Demixing
Transform (IDT). Third, the IDT is completed by a given method.

Fig. 7. A two-second sample of a three-channel electrocardiogram interfered by a noise signal originating from a Holter display.

Fig. 8. Cleaned data from Fig. 7 after the subtraction of noise responses that were estimated through the main principal component and
LS.

Fig. 9. Cleaned data from Fig. 7 after the subtraction of noise images that were estimated using the one-unit FastICA and LS.

Fig. 10. Improvement of signal-to-interference ratio as a function of p, i.e., of the percentage of selected active frequencies in S for the
estimation of the incomplete demixing transform. The evaluation was performed on speech signals.

Fig. 11. Improvement of signal-to-interference ratio as a function of p. The evaluation was performed on white noise signals that
uniformly excite the whole frequency range.
We consider the same experiment with two simultaneously speaking persons and two microphones as in [45].
The signals are 10 seconds long; the sampling frequency is 16 kHz. The signals are convolved with room
impulse responses (RIR) generated by a simulator and mixed together. The reflection order is set to 1 so that the
RIRs are significantly sparse (the results of this experiment with reflection order 10, when the RIRs are no longer
sparse, are available in [45]). Then, the signals are transformed into the short-term Fourier domain with a
window length of 1024 samples and a shift of 128. The convolution is, in the frequency domain, approximated by
the set of multiplicative models (1) with d = r = 2, where one model corresponds to one frequency bin; there
are 513 models in total.
As for the second step, the demixing matrices are estimated from the mixed signals using the natural gradient
algorithm for IVA [25] applied to the subset of models (1). To compare, "oracle" demixing matrices are derived
on S using the known responses of the speakers. This gives an IDT that is known only on the selected subset S.
The IDT is completed by two alternative methods. The first method, denoted as TDOA, utilizes known time-differences
of arrival of the signals. The unknown demixing matrices are such that their rows correspond to a
null beamformer steering a spatial null towards the unwanted speaker. The second approach, denoted as LASSO,
completes the IDT through finding the sparsest representations of incomplete relative transfer functions (RTF)
that are derived from the IDT.⁶ Let q denote an |S| × 1 vector that collects the coefficients of an incomplete
RTF; |S| is the number of elements in S. The completed RTF is obtained as the solution of [46]
arg min_h ‖h_S − q‖_2 + λ‖F^H h‖_1,   (42)

where λ > 0 controls the time-domain sparsity of the solution, F is the matrix of the DFT, the subscript (·)_S denotes a vector/matrix with selected elements/rows whose indices are in S, and ‖·‖_1 denotes the ℓ1-norm.
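To make the completion step concrete, the following is a small numerical sketch of solving a problem of the shape of (42) by iterative soft thresholding (ISTA). It is an illustration only: the squared data term, the step size, the regularization weight and the unitary-DFT convention are our assumptions, not the exact solver used in [46].

```python
import numpy as np

def complete_rtf(q, S, n, lam=0.01, n_iter=800):
    """ISTA sketch for (42): complete an RTF known only on bins S by
    promoting time-domain sparsity. All parameters are illustrative."""
    h = np.zeros(n, dtype=complex)
    h[S] = q                                  # initialize with the known bins
    for _ in range(n_iter):
        grad = np.zeros(n, dtype=complex)
        grad[S] = h[S] - q                    # gradient of 0.5*||h_S - q||^2
        z = h - grad                          # step size 1; data term is 1-Lipschitz
        t = np.fft.ifft(z) * np.sqrt(n)       # unitary IDFT: time-domain filter
        shrink = np.maximum(1 - lam / np.maximum(np.abs(t), 1e-12), 0)
        t *= shrink                           # complex soft thresholding
        h = np.fft.fft(t) / np.sqrt(n)        # back to the frequency domain
    return h
```

With enough known bins and a genuinely sparse impulse response, the completed bins improve markedly over simply leaving them at zero.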
Now, it is worth noting that the components separated by the demixing matrices after the completion can be significantly nonorthogonal. While the IVA applied within S aims to find independent (thus "almost" or fully orthogonal) components, the method for the IDT completion pays no regard to orthogonality.⁷
Figures 10 and 11 show the results of the experiment from [45], evaluated in terms of the Signal-to-Interference Ratio (SIR) improvement after separation, as a function of p (the percentage of frequencies in S). In Fig. 10, the evaluation is performed with the speech signals, while Fig. 11 shows the results achieved as if the sources were white Gaussian sequences. The purpose of the latter evaluation is to assess the completed IDT uniformly over the whole frequency range, i.e., also in frequencies that were not excited by the speech signals. Note that SIR must be evaluated after resolving the scaling ambiguity in each frequency [41]. This gives us the opportunity to apply either INV or LS.
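As a reminder of what this criterion measures, here is a minimal single-channel sketch of an SIR computation in the spirit of [41]; for simplicity it assumes the true target and interference components are orthogonal, so the scaling ambiguity is resolved by separate least-squares projections (this simplification is ours).

```python
import numpy as np

def sir_db(est, target, interference):
    """Signal-to-Interference Ratio of `est` in dB, assuming the true
    target and interference components are orthogonal (our simplification)."""
    # Least-squares coefficients of est on target and interference.
    a = np.vdot(target, est) / np.vdot(target, target)
    b = np.vdot(interference, est) / np.vdot(interference, interference)
    p_target = np.abs(a) ** 2 * np.sum(np.abs(target) ** 2)
    p_interf = np.abs(b) ** 2 * np.sum(np.abs(interference) ** 2)
    return 10 * np.log10(p_target / p_interf)
```

For example, an estimate composed of 2x the target plus 0.2x an orthogonal interference has an SIR of 20 dB regardless of the common scale.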
The results in Figures 10 and 11 point to significant differences between LS and INV in this evaluation. The results by LS appear to be less biased and more stable compared to those by INV, and can be interpreted in accord with the theory. In particular, LS shows that oracle+LASSO (oracle IDT completed by LASSO) outperforms oracle+TDOA for p between 35% and 80%. This makes sense, because LASSO can better exploit the sparsity of the RIRs generated in this experiment. The results by INV do not reveal this important fact. Next, LS shows in Fig. 10 that IVA+LASSO can improve the separation of the speech signals when p < 100%. The evaluation on white noise in Fig. 11 shows that the loss of SIR is not substantial until p drops below 30%. The latter conclusion cannot be drawn from the results by INV.
VII. CONCLUSIONS
We have analyzed and compared two estimators of sensor images (responses) of sources that were separated
from a multichannel mixture up to an unknown scaling factor: INV and LS. Simulations and perturbation analysis
have shown pros and cons of the methods, which can be summarized into the following recommendations.
• LS is more practical in the sense that the whole mixing matrix need not be identified for its use, which is useful especially in underdetermined scenarios.
• The advantage of INV resides in its independence of the (estimated) covariance matrix.
• INV could be beneficial compared to LS when used with non-orthogonal BSS algorithms, i.e., those not applying the orthogonal constraint. However, both the target as well as the interference subspaces must be estimated with sufficient accuracy.
⁶ As pointed out in [45], LASSO could be seen as a generalization of TDOA, because impulse responses corresponding to null beamformers are pure-delay filters, which are perfectly sparse.
⁷ The orthogonal constraint cannot be imposed within the frequencies outside of the set S, because signals are not (or poorly) active there.
Both approaches have been shown to be equivalent under the orthogonal constraint, so the differences in their
accuracies are less significant when BSS yields signal components that are (almost) orthogonal (e.g. PCA, ICA,
IVA). By contrast, the differences between the reconstructed images of nonorthogonal components can be large,
as demonstrated in the example of Section VI-B.
APPENDIX: ASYMPTOTIC EXPANSIONS
Computation of (28)
Let E contain the first m columns of the d × d identity matrix. It follows that

HE = H1,   AE = A1,   E^H W = W1,   E^H V = V1.
To derive an approximate expression for A, we will use the first-order expansion

A = V^{-1} = (W + Ξ)^{-1} = (I + HΞ)^{-1} H   (43)
  ≈ (I − HΞ)H = H − HΞH.   (44)
Now we apply this approximation and neglect terms of higher than the first order:

‖H1W1 − A1V1‖_F^2 = ‖H1W1 − AEE^H V‖_F^2
 ≈ ‖H1W1 − (H − HΞH)EE^H (W + Ξ)‖_F^2
 = ‖H1W1 − (H1 − HΞH1)(W1 + Ξ1)‖_F^2
 ≈ ‖HΞH1W1 − H1Ξ1‖_F^2.   (45)
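The first-order expansion (43)-(44) underlying this step can be sanity-checked numerically; in the sketch below (dimensions, seed and perturbation sizes are arbitrary choices), shrinking the perturbation tenfold shrinks the residual of H − HΞH roughly a hundredfold, confirming that the neglected terms are of second order.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 4
W = rng.standard_normal((d, d)) + d * np.eye(d)   # well-conditioned demixing matrix
H = np.linalg.inv(W)
Xi0 = rng.standard_normal((d, d))                 # fixed perturbation direction

def residual(eps):
    """Norm of the error of the first-order expansion (44) for Xi = eps * Xi0."""
    Xi = eps * Xi0
    exact = np.linalg.inv(W + Xi)                 # A = V^{-1} = (W + Xi)^{-1}
    approx = H - H @ Xi @ H                       # first-order expansion (44)
    return np.linalg.norm(exact - approx)

r1, r2 = residual(1e-2), residual(1e-3)           # residual is O(eps^2)
```

The ratio r2/r1 is close to 1/100, as expected for a second-order residual.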
Computation of (29)

We start with the first approximation

‖H1W1 − Ĉ V1^H (V1 Ĉ V1^H)^{-1} V1‖_F^2
 = ‖H1W1 − (C + ΔC)(W1^H + Ξ1^H) · ((W1 + Ξ1)(C + ΔC)(W1^H + Ξ1^H))^{-1} (W1 + Ξ1)‖_F^2
 ≈ ‖H1W1 − (C + ΔC)(W1^H + Ξ1^H) · (W1 C W1^H + W1 ΔC W1^H + Ξ1 C W1^H + W1 C Ξ1^H)^{-1} (W1 + Ξ1)‖_F^2.   (46)
Since W is now the exact inverse of H, it holds that W1 C W1^H = Cs1. By neglecting higher than first-order terms and by applying the first-order expansion of the matrix inverse inside the expression,

‖H1W1 − (C + ΔC)(W1^H + Ξ1^H) · (I + Cs1^{-1} Ξ1 C W1^H + Cs1^{-1} W1 ΔC W1^H + Cs1^{-1} W1 C Ξ1^H)^{-1} Cs1^{-1} (W1 + Ξ1)‖_F^2
 ≈ ‖H1W1 − (C W1^H + C Ξ1^H + ΔC W1^H) · (I − Cs1^{-1} Ξ1 C W1^H − Cs1^{-1} W1 ΔC W1^H − Cs1^{-1} W1 C Ξ1^H) Cs1^{-1} (W1 + Ξ1)‖_F^2.   (47)

Since

C W1^H Cs1^{-1} = H bdiag(Cs1, Cs2) H^H W1^H Cs1^{-1} = H1,   (48)

the zero-order term in (47) vanishes. By neglecting higher than the first-order terms, (29) follows.
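Identity (48) only uses the facts that W is the exact inverse of H and that the source covariance is block diagonal; a small real-valued numerical check (the dimensions d = 5, m = 2 and the SPD construction are arbitrary choices of ours) is:

```python
import numpy as np

rng = np.random.default_rng(1)
d, m = 5, 2
H = rng.standard_normal((d, d))
W = np.linalg.inv(H)
A1 = rng.standard_normal((m, m))
Cs1 = A1 @ A1.T + m * np.eye(m)               # SPD target-source covariance
A2 = rng.standard_normal((d - m, d - m))
Cs2 = A2 @ A2.T + (d - m) * np.eye(d - m)     # SPD interference covariance
Cbd = np.zeros((d, d))
Cbd[:m, :m], Cbd[m:, m:] = Cs1, Cs2
C = H @ Cbd @ H.T                             # C = H bdiag(Cs1, Cs2) H^H, real case
W1 = W[:m, :]                                 # E^H W
H1 = H[:, :m]                                 # H E
lhs = C @ W1.T @ np.linalg.inv(Cs1)           # should equal H1 by (48)
```

Since H^T W1^T = E, the product collapses to H1 Cs1 Cs1^{-1} = H1 exactly.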
ACKNOWLEDGMENTS
This work was supported by The Czech Science Foundation through Project No. 14-11898S and partly by
California Community Foundation through Project No. DA-15-114599.
We thank BTL Medical Technologies CZ for providing us the three-channel ECG recording.
REFERENCES
[1] H. L. Van Trees, Optimum Array Processing: Part IV of Detection, Estimation, and Modulation Theory, John Wiley & Sons, Inc.,
2002.
[2] P. Comon and C. Jutten, Handbook of Blind Source Separation: Independent Component Analysis and Applications, Academic Press,
2010.
[3] J.-F. Cardoso, “Blind signal separation: statistical principles”, Proceedings of the IEEE, vol. 90, n. 8, pp. 2009-2026, October 1998.
[4] N. Q. K. Duong, E. Vincent, and R. Gribonval, Underdetermined reverberant audio source separation using a full-rank spatial
covariance model, IEEE Transactions on Audio, Speech, and Language Processing, vol. 18, no. 7, pp. 1830-1840, Sept 2010.
[5] H. Sawada, R. Mukai, S. Araki, and S. Makino, “A robust and precise method for solving the permutation problem of frequency-domain
blind source separation,” IEEE Trans. Speech Audio Processing, vol. 12, no. 5, pp. 530-538, 2004.
[6] S. Ukai, T. Takatani, H. Saruwatari, K. Shikano, R. Mukai, H. Sawada, "Multistage SIMO-based Blind Source Separation Combining Frequency-Domain ICA and Time-Domain ICA," IEICE Trans. Fundam., E88-A, no. 3, pp. 642–650, 2005.
[7] K. Matsuoka and S. Nakashima, “Minimal distortion principle for blind source separation,” Proceedings of 3rd International
Conference on Independent Component Analysis and Blind Source Separation (ICA ’01), pp. 722-727, San Diego, Calif, USA,
Dec. 2001.
[8] S. Markovich, S. Gannot and I. Cohen, Multichannel Eigenspace Beamforming in a Reverberant Noisy Environment with Multiple
Interfering Speech Signals,” IEEE Transactions on Audio, Speech and Language Processing, vol. 17, no. 6, pp. 1071–1086, Aug.
2009.
[9] S. Sanei and J. A. Chambers, EEG Signal Processing, Wiley, July 2007.
[10] L. De Lathauwer, B. De Moor, J. Vandewalle, “Fetal Electrocardiogram Extraction by Blind Source Subspace Separation”, IEEE
Transactions on Biomedical Engineering, Special Topic Section on Advances in Statistical Signal Processing for Biomedicine, vol.
47, no. 5, pp. 567–572, May 2000.
[11] A. Hyvärinen, J. Karhunen, and E. Oja, Independent Component Analysis, Wiley-Interscience, New York, 2001.
[12] T. Kim, I. Lee; T.-W. Lee, “Independent Vector Analysis: Definition and Algorithms,” The Fortieth Asilomar Conference on Signals,
Systems and Computers, pp. 1393–1396, 2006.
[13] M. Anderson, G. Fu, R. Phlypo, T. Adali, “Independent Vector Analysis: Identification Conditions and Performance Bounds,” IEEE
Transactions on Signal Processing, vol. 62, no. 17, pp. 4399–4410, 2014.
[14] D. D. Lee and H. S. Seung, “Learning the parts of objects by non-negative matrix factorization,” Nature, vol. 401, pp. 788–791, Oct.
1999.
[15] A. Hyvärinen, “Fast and Robust Fixed-Point Algorithms for Independent Component Analysis”, IEEE Transactions on Neural
Networks, vol. 10, no. 3, pp. 626–634, 1999.
[16] J.-F. Cardoso and A. Souloumiac, “Blind Beamforming from non-Gaussian Signals,” IEE Proc.-F, vol. 140, no. 6, pp. 362-370, Dec.
1993.
[17] S. A. Cruces-Alvarez, A. Cichocki, and S. Amari, “From Blind Signal Extraction to Blind Instantaneous Signal Separation: Criteria,
Algorithms, and Stability,” IEEE Transactions on Neural Networks, vol. 15, no. 4, July 2004.
[18] D. Lahat, J.-F. Cardoso, and H. Messer, “Blind Separation of Multidimensional Components via Subspace Decomposition: Performance
Analysis,” IEEE Transactions on Signal Processing, vol. 62, no. 11, pp. 2894–2905, June 2014.
[19] A. Hyvärinen and U. Köster, “FastISA: A fast fixed-point algorithm for independent subspace analysis,” In Proc. European Symposium
on Artificial Neural Networks, Bruges, Belgium, 2006.
[20] J. Cardoso, “Multidimensional independent component analysis,” in Proceedings of the 1998 IEEE International Conference on
Acoustics, Speech and Signal Processing, vol. 4, pp. 1941–1944, 12-15 May 1998.
[21] D. Lahat, J.-F. Cardoso, and H. Messer, “Second-order multidimensional ICA: performance analysis,” IEEE Transactions on Signal
Processing, vol. 60, no. 9, pp. 4598–4610, Sept. 2012.
[22] Z. Koldovský and P. Tichavský, “Time-domain blind separation of audio sources on the basis of a complete ICA decomposition of
an observation space”, IEEE Trans. on Speech, Audio and Language Processing, vol. 19, no. 2, pp. 406–416, Feb. 2011.
[23] J. Capon, “High-resolution frequency-wavenumber spectrum analysis,” Proc. of IEEE, vol. 57, no. 8, pp. 1408–1418, Aug. 1969.
[24] P. Tichavský and Z. Koldovský, “Optimal Pairing of Signal Components Separated by Blind Techniques”, IEEE Signal Processing
Letters, vol. 11, no. 2, pp. 119–122, 2004.
[25] T. Kim, H. T. Attias, S.-Y. Lee, T-W. Lee, “Blind Source Separation Exploiting Higher-Order Frequency Dependencies,” IEEE
Transactions on Audio, Speech, and Language Processing, vol. 15, no. 1, Jan. 2007.
[26] J.-F. Cardoso, M. Le Jeune, J. Delabrouille, M. Betoule and G. Patanchon, "Component Separation With Flexible Models – Application to Multichannel Astrophysical Observations," IEEE Journal of Selected Topics in Signal Processing, vol. 2, no. 5, pp. 735–746, Oct. 2008.
[27] The Planck collaboration, "Planck 2013 results. XII. Diffuse component separation," Astronomy and Astrophysics, vol. 571, A12, Nov. 2014.
[28] J.-F. Cardoso, “On the performance of orthogonal source separation algorithms”, Proc. EUSIPCO, pp. 776-779, Edinburgh, September
1994.
[29] K. Matsuoka, “Elimination of filtering indeterminacy in blind source separation,” Neurocomputing, vol. 71, pp. 2113-2126, 2008.
[30] P. Tichavský, Z. Koldovský, and E. Oja, “Performance Analysis of the FastICA Algorithm and Cramér-Rao Bounds for Linear
Independent Component Analysis”, IEEE Trans. on Signal Processing, Vol. 54, No. 4, April 2006.
[31] S. Makino, Te-Won Lee, and H. Sawada, Blind Speech Separation, Springer, Sept. 2007.
[32] L. Parra and C. Spence, "Convolutive Blind Separation of Non-Stationary Sources," IEEE Trans. on Speech and Audio Processing, vol. 8, no. 3, pp. 320–327, May 2000.
[33] F. Nesta and M. Matassoni, “Blind source extraction for robust speech recognition in multisource noisy environments,” Journal
Computer Speech and Language, vol. 27, no. 3, pp. 730–725, May 2013.
[34] J. Benesty, J. Chen, and E. Habets, Speech Enhancement in the STFT Domain, Springer Briefs in Electrical and Computer Engineering,
2011.
[35] F. Abrard, Y. Deville, “A time-frequency blind signal separation method applicable to underdetermined mixtures of dependent sources,”
Signal Processing, vol. 85, issue 7, pp. 1389–1403, July 2005.
[36] T-W. Lee, M. S. Lewicki, M. Girolami, T. J. Sejnowski, “Blind Source Separation of More Sources Than Mixtures Using Overcomplete
Representations,” IEEE Signal Processing Letters, vol. 6, no. 4, 1999.
[37] L. Griffiths and C. Jim, “An alternative approach to linearly constrained adaptive beamforming,” IEEE Trans. Antennas Propag., vol.
30, no. 1, pp. 27–34, Jan. 1982.
[38] J. Even, C. Ishi, H. Saruwatari, and N. Hagita, “Close speaker cancellation for suppression of non-stationary background noise
for hands-free speech interface,” Proc. of the 11th Annual Conference of the International Speech Communication Association
(Interspeech 2010), pp. 977-980, Makuhari, Chiba, Japan, September 26-30, 2010.
[39] O. Hoshuyama, A. Sugiyama, A. Hirano, “A robust adaptive beamformer for microphone arrays with a blocking matrix using
constrained adaptive filters,” IEEE Transactions on Signal Processing, vol. 47, no. 10, pp. 2677–2684, Oct. 1999.
[40] Y. Takahashi, T. Takatani, K. Osako, H. Saruwatari, K. Shikano, “Blind Spatial Subtraction Array for Speech Enhancement in Noisy
Environment,” IEEE Transactions on Audio, Speech, and Language Processing, vol. 17, no. 4, pp. 650–664, May 2009.
[41] Vincent, E., Gribonval, R., and C. Févotte, “Performance Measurement in Blind Audio Source Separation,” IEEE Trans. on Speech
and Audio Processing, Vol 14, No. 4, pp. 1462–1469, July 2006.
[42] S. Gannot, D. Burshtein, and E. Weinstein, “Signal enhancement using beamforming and nonstationarity with applications to speech,”
IEEE Trans. on Signal Processing, vol. 49, no. 8, pp. 1614–1626, Aug. 2001.
[43] S. Gannot and I. Cohen, “Speech Enhancement Based on the General Transfer Function GSC and Postfiltering,” IEEE Trans. on
Speech and Audio Processing, vol. 12, No. 6, pp. 561–571, Nov. 2004.
[44] Z. Koldovský, J. Málek, and S. Gannot, “Spatial Source Subtraction Based on Incomplete Measurements of Relative Transfer Function,”
IEEE/ACM Trans. on Speech, Audio and Language Processing, vol. 23, no. 8, pp. 1335–1347, Aug. 2015.
[45] Z. Koldovský, F. Nesta, P. Tichavský, and N. Ono, “Frequency-Domain Blind Speech Separation Using Incomplete De-Mixing
Transform,” The 24th European Signal Processing Conference (EUSIPCO 2016), pp. 1663–1667, Budapest, Hungary, Sept. 2016.
[46] R. Tibshirani, “Regression shrinkage and selection via the lasso,” J. Royal. Statist. Soc. B., vol. 58, pp. 267–288, 1996.
[47] J. Janský, Z. Koldovský and N. Ono, "A Computationally Cheaper Method for Blind Speech Separation Based On AuxIVA and Incomplete Demixing Transform," The 15th International Workshop on Acoustic Signal Enhancement (IWAENC), Xi'an, China, Sept. 2016.
Zbyněk Koldovský (M’04-SM’15) was born in Jablonec nad Nisou, Czech Republic, in 1979. He received the M.S.
degree and Ph.D. degree in mathematical modeling from Faculty of Nuclear Sciences and Physical Engineering
at the Czech Technical University in Prague in 2002 and 2006, respectively. He was also with the Institute of
Information Theory and Automation of the Academy of Sciences of the Czech Republic from 2002 to 2016.
Currently, he is an associate professor at the Institute of Information Technology and Electronics, Technical University of Liberec, and the leader of the Acoustic Signal Analysis and Processing (A.S.A.P.) Group. He is the Vice-dean for Science, Research and Doctoral Studies at the Faculty of Mechatronics, Informatics and Interdisciplinary Studies. His main research interests are focused on audio signal processing, blind source separation, independent component analysis, and
sparse representations.
Zbyněk Koldovský has served as a general co-chair of the 12th Conference on Latent Variable Analysis and Signal Separation (LVA/ICA
2015) in Liberec, Czech Republic.
Francesco Nesta received the Laurea degree in computer engineering from Politecnico di Bari, Bari, Italy, in
September 2005 and the Ph.D. degree in information and communication technology from University of Trento,
Trento, Italy, in April 2010, with research on blind source separation and localization in adverse environments.
He has been conducting his research at Bruno Kessler Foundation IRST, Povo di Trento, from 2006 to 2012. He
was a Visiting Researcher from September 2008 to April 2009 with the Center for Signal and Image Processing
Department, Georgia Institute of Technology, Atlanta. His major interests include statistical signal processing,
blind source separation, speech enhancement, adaptive filtering, acoustic echo cancellation, semi-blind source
separation and multiple acoustic source localization. He is currently working at Conexant System, Irvine (CA, USA) on the development
of audio enhancement algorithms for far-field applications.
Dr. Nesta serves as a reviewer for several journals such as the IEEE Transactions on Audio, Speech, and Language Processing, Elsevier Signal Processing Journal, Elsevier Computer Speech and Language, and for several conferences and workshops in the field of acoustic signal processing. He has served as Co-Chair in the third community-based Signal Separation Evaluation Campaign (SiSEC 2011)
and as organizer of the 2nd CHIME challenge.
arXiv:1705.07492v1 [] 21 May 2017
Parallel and in-process compilation of individuals
for genetic programming on GPU
Hakan Ayral
[email protected]
Songül Albayrak
[email protected]
April 2017
Abstract
Three approaches to implement genetic programming on GPU hardware are compilation, interpretation and direct generation of machine
code. The compiled approach is known to have a prohibitive overhead
compared to the other two.
This paper investigates methods to accelerate compilation of individuals for genetic programming on GPU hardware. We apply in-process compilation to minimize the compilation overhead at each generation; and we
investigate ways to parallelize in-process compilation. In-process compilation doesn’t lend itself to trivial parallelization with threads; we propose a
multiprocess parallelization using memory sharing and operating system interprocess communication primitives. With parallelized compilation we
achieve further reductions on compilation overhead. Another contribution
of this work is the code framework we built in C# for the experiments.
The framework makes it possible to build arbitrary grammatical genetic
programming experiments that run on GPU with minimal extra coding
effort, and is available as open source.
Introduction
Genetic programming is an evolutionary computation technique, where the objective is to find a program (i.e. a simple expression, a sequence of statements,
or a full-scale function) that satisfies a behavioral specification expressed as test
cases along with expected results. Grammatical genetic programming is a subfield of genetic programming, where the search space is restricted to a language
defined as a BNF grammar, thus ensuring all individuals to be syntactically
valid.
Processing power provided by graphics processing units (GPUs) makes them an attractive platform for evolutionary computation, due to the inherently parallelizable nature of the latter. The first genetic programming implementations shown to run on GPUs were [2] and [5].
Just like in the CPU case, genetic programming on GPU requires the code
represented by individuals to be rendered to an executable form; this can be
achieved by compilation to an executable binary object, by conversion to an
intermediate representation of a custom interpreter developed to run on GPU,
or by directly generating machine-code for the GPU architecture. Compilation
of individuals’ codes for GPU is known to have a prohibitive overhead that is
hard to offset with the gains from the GPU acceleration.
The compiled approach for genetic programming on GPU is especially important for grammatical genetic programming; the representations of individuals for linear and cartesian genetic programming are inherently suitable for simple interpreters and circuit simulators implementable on a GPU. On the other hand, grammatical genetic programming aims to make higher-level constructs and structures representable, using individuals that represent strings of tokens belonging to a language defined by a grammar; unfortunately, executing such a representation sooner or later requires some form of compilation or complex interpretation.
In this paper we first present three benchmark problems we implemented
to measure compilation times with. We use grammatical genetic programming
for the experiments, therefore we define the benchmark problems with their
grammars, test cases and fitness functions.
Then we set a baseline by measuring the compilation time of individuals for
those three problems, using the conventional CUDA compiler Nvcc. Afterwards
we measure the speedup obtained by the in-process compilation using the same
benchmark problem setups. We proceed by presenting the obstacles encountered
on parallelization of in-process compilation. Finally we propose a parallelization
scheme for in-process compilation, and measure the extra speedup achieved.
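Our framework is written in C#; purely as a language-neutral illustration of the multiprocess idea, the sketch below farms the compilation of individuals' sources out to a pool of worker processes. The checksum stands in for a real in-process compiler call (e.g. NVRTC), and all names here are illustrative, not part of the framework.

```python
import zlib
from multiprocessing import Pool

def compile_kernel(src: str) -> int:
    # Stand-in for an in-process compiler invocation (e.g. NVRTC); here we
    # only checksum the source to simulate a per-individual compile result.
    return zlib.crc32(src.encode())

def compile_population(sources, workers=2):
    # In-process compilers are often not safely callable from multiple
    # threads, so parallelize across worker processes instead.
    with Pool(workers) as pool:
        return pool.map(compile_kernel, sources)
```

Each worker holds its own copy of the compiler state, which is precisely why the process-based route works where naive threading does not.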
Prior Work
[6] deals with the compilation overhead of individuals for genetic programming
on GPU using CUDA. The article proposes a distributed compilation scheme where a cluster of around 16 computers compiles different individuals in parallel, and states the need for a large number of fitness cases to offset the compilation overhead. It correctly predicts that this mismatch will get worse with the increasing number of cores on GPUs, but also states that "a large number of classic benchmark GP problems fit into this category". Based on figure 5 of the article it can
be computed that for a population size of 256, authors required 25 ms/individual
in total.¹

[9] presents the first use of grammatical genetic programming on the GPU, applied to a string matching problem to improve gzip compression, with a grammar constructed from fragments of an existing string matching CUDA code. Based on figure 11 of the accompanying technical report [10], a population of 1000 individuals (10 kernels of 100 individuals each) takes around 50 seconds to compile using nvcc from the CUDA v2.3 SDK, which puts the average compilation time at approximately 50 ms/individual.

¹ This number includes network traffic, XO, mutation and processing time on GPU, in addition to compilation times. In our case the difference between compilation time and total time has constantly been at sub-millisecond level per population on all problems; thus for comparison purposes the compile times we present can also be taken as total time with an error margin of 1 ms/pop. size.
In [11] an overview of genetic programming on GPU hardware is provided,
along with a brief presentation and comparison of compiled and interpreted
approaches. As part of the comparison it underlines the trade-off between the
speed of compiled code versus the overhead of compilation, and states that
the command-line CUDA compiler was especially slow, which is why the interpreted approach is usually preferred.
[15] investigates the acceleration of grammatical evolution by use of GPUs, considering the performance impact of different design decisions like thread/block granularity, different types of memory on GPU, and host-device memory transactions. As part of the article, compilation to PTX form and loading to GPU with JIT compilation at the driver level is compared with directly compiling to a cubin object and loading to GPU without further JIT compilation. A kernel containing 90 individuals takes 540 ms to compile to CUBIN with sub-millisecond upload time to GPU, versus 450 ms for compilation to PTX plus 80 ms for JIT compilation and upload to GPU using the nvcc compiler from the CUDA v3.2 SDK. Thus the PTX+JIT case, which is the faster of the two, achieves an average compilation time of 5.88 ms/individual.
[12] proposes an approach for improving compilation times of individuals for
genetic programming on GPU, where common statements on similar locations
are aligned as much as possible across individuals. After alignment individuals
with overlaps are merged to common kernels such that aligned statements become a single statement, and diverging statements are enclosed with conditionals
to make them part of the code path only if the value of the individual ID parameter matches an individual having those divergent statements. The authors state that
in exchange for faster compilation times, they get slightly slower GPU runtime
with merged kernels as all individuals need to evaluate every condition at the
entry of each divergent code block coming from different individuals. In results it is stated that for individuals with 300 instructions, compile time is 347
ms/individual if it’s unaligned, and 72 ms/individual if it’s aligned (time for
alignment itself not included) with nvcc compiler from CUDA v3.2 SDK.
[3] provides a comparison of compilation, interpretation and direct generation of machine code methods for genetic programming on GPUs. Five benchmark problems consisting of Mexican Hat and Salutowicz regressions, Mackey-Glass time series forecast, Sobel Filter and 20-bit Multiplexer are used to measure the comparative speed of the three mentioned methods. It is stated that
compilation method uses nvcc compiler from CUDA V5.5 SDK. Compilation
time breakdown is only provided for Mexican Hat regression benchmark on Table 6, where it is stated that total nvcc compilation time took 135,027 seconds
and total JIT compilation took 106,458 seconds. Table 5 states that Mexican
Hat problem uses 400K generations and a population size of 36. Therefore we can say that an average compilation time of (135,027 + 106,458) s/(36 × 400,000) = 16.76 ms/individual is achieved.
Implemented Problems for Measurement
We implemented three problems as benchmarks to compare compilation speed. They consist of a general program synthesis problem, Keijzer-6 as a regression problem [8], and 5-bit Multiplier as a multi-output boolean problem. The latter two are included in the "Alternatives to blacklisted problems" table in [16].
We use grammatical genetic programming as our representation and phenotype production method; therefore all problems are defined with a BNF grammar that defines a search space of syntactically valid programs, along with some
test cases and a fitness function specific to the problem. For all three problems,
a genotype, which is a list of (initially random) integers, derives to a phenotype, which is a valid CUDA C expression or code block in the form of a list of statements. All individuals are appended and prepended with some initialization and finalization code which serves to set up the input state and write the output to GPU memory afterwards. See the Appendix for the BNF grammars and the code used to surround the individuals.
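To illustrate the genotype-to-phenotype mapping (a standard grammatical-evolution scheme; the toy grammar below is ours, not one of the appendix grammars), each integer of the genotype selects, modulo the number of alternatives, a production for the leftmost nonterminal:

```python
# Toy BNF-style grammar: each nonterminal maps to a list of alternatives.
GRAMMAR = {
    "<expr>": [["<expr>", "+", "<expr>"], ["<expr>", "*", "<expr>"], ["<var>"]],
    "<var>": [["x"], ["y"]],
}

def derive(genotype, symbol="<expr>", max_depth=10):
    """Map a list of integers (genotype) to a token string (phenotype)."""
    codons = list(genotype)

    def expand(sym, depth):
        if sym not in GRAMMAR:
            return [sym]                      # terminal token
        if depth >= max_depth or not codons:
            choice = GRAMMAR[sym][-1]         # fall back to a terminal-leaning rule
        else:
            choice = GRAMMAR[sym][codons.pop(0) % len(GRAMMAR[sym])]
        out = []
        for s in choice:
            out.extend(expand(s, depth + 1))
        return out

    return " ".join(expand(symbol, 0))
```

For instance, the genotype [0, 2, 2] picks the addition rule, then derives "x" and "y" for the two operands, giving the phenotype "x + y".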
Search Problem
Search Problem is designed to evolve a function which can identify whether a
search value is present in an integer list, and return its position if present or
return -1 otherwise.
We first proposed this problem as a general program synthesis benchmark
in [1]. The grammar for the problem is inspired by [14]; we designed it to be
a subproblem of the more general integer sort problem case along with some
others. It also bears some similarity to problems presented in [7], based on the generality of its use case combined with the simplicity of its implementation.
Test cases consist of unordered lists of random integers in the range [0, 50],
and list lengths vary between 3 and 20. Test cases are randomly generated
but half of them are ensured to contain the value searched and the others ensured not to contain it. We employed a binary fitness function, which returns 1 if the returned result is correct (position of the searched value, or -1 if it is not present in the list) and 0 otherwise; hence the fitness of an individual is the sum of its fitnesses over all test cases, which the evolutionary engine tries to maximize.
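A host-side reference of this binary fitness function (our sketch, not the framework's C# code; `program` stands in for a compiled individual) is simply:

```python
def search_fitness(program, test_cases):
    """Binary fitness: 1 per test case for which the evolved program returns
    the (first) index of the searched value, or -1 when it is absent.
    `program` is any callable (lst, needle) -> int."""
    score = 0
    for lst, needle in test_cases:
        expected = lst.index(needle) if needle in lst else -1
        score += 1 if program(lst, needle) == expected else 0
    return score
```

A perfect individual scores the number of test cases; a degenerate one that always returns -1 still scores on the half of the cases built not to contain the value, which is why both kinds of cases are generated.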
Keijzer-6
The Keijzer-6 function, introduced in [8], is the function K6(x) = Σ_{n=1}^{x} 1/n, which maps a single integer parameter to the partial sum of the harmonic series with the number of terms indicated by its parameter. Regression of the Keijzer-6 function is one of the recommended alternatives to replace simpler symbolic regression problems like the quartic polynomial [16].
For this problem we used a modified version of the grammar given in [13] and [4], with the only modification being an increase of the constant and variable token ratio as the expression nesting gets deeper. We used the root mean squared error as the fitness function, which is the accepted practice for this problem.
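For reference, the target function and the RMSE fitness can be sketched as follows (`candidate` stands in for a compiled individual; the choice of test inputs is illustrative):

```python
import math

def keijzer6(x: int) -> float:
    # Partial sum of the harmonic series: K6(x) = sum_{n=1}^{x} 1/n.
    return sum(1.0 / n for n in range(1, x + 1))

def rmse_fitness(candidate, xs):
    # Root mean squared error against K6 over the test inputs xs
    # (lower is better).
    return math.sqrt(sum((candidate(x) - keijzer6(x)) ** 2 for x in xs) / len(xs))
```

An individual that exactly reproduces K6 attains the optimal fitness of 0.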
5-bit multiplier
5-bit multiplier problem consists of finding a boolean relation that takes 10
binary inputs to 10 binary outputs, where two groups of 5 inputs each represent
an integer up to 2^5 − 1 in binary, and the output represents a single integer up to 2^10 − 1, such that the output is the multiplication of the two input numbers. This
problem is generally attacked as 10 independent binary regression problems,
with each bit of the output is separately evolved as a circuit or boolean function.
It is easy to show that the number of n-bit input, m-bit output binary relations is 2^(m·2^n), which grows super-exponentially. The multiple-output multiplier is the recommended alternative to the Multiplexer and Parity problems in [16].
We transfer input to and output from the GPU with bits packed as a single 32-bit integer; hence there is a code preamble before the first individual to unpack the input
bits, and a post-amble after each individual to pack the 10 bits computed by
evolved expressions as an integer.
The fitness function for the 5-bit multiplier computes the number of bits that differ between the individual's response and the correct answer, by computing the popcount of the XOR of the two.
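The bit packing and the XOR/popcount fitness can be sketched on the host side as follows (an illustration of ours, in the same spirit as the CUDA pre- and post-ambles; the bit layout is an assumption):

```python
def pack(a: int, b: int) -> int:
    # Pack two 5-bit operands into one integer: bits 0-4 hold a, bits 5-9 hold b.
    return (a & 0b11111) | ((b & 0b11111) << 5)

def unpack(packed: int):
    return packed & 0b11111, (packed >> 5) & 0b11111

def error_bits(response: int, a: int, b: int) -> int:
    # Number of wrong output bits: popcount of response XOR the true
    # 10-bit product a*b.
    diff = (response ^ (a * b)) & 0b1111111111
    return bin(diff).count("1")
```

Summing `error_bits` over all 1024 operand pairs gives the error an individual's output bit circuit contributes; 0 means a perfect multiplier bit.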
Development and Experiment Setup
Hardware Platform
All experiments have been conducted on a dual Xeon E5-2670 (8 physical 16
logical cores per CPU, 32 cores in total) platform running at 2.6Ghz equipped
with 60GB RAM, along with dual SSD storage and four NVidia GRID K520
GPUs. Each GPU itself consists of 1536 cores spread through 8 multiprocessors
running at 800 MHz, along with 4 GB GDDR5 RAM,² and is able to sustain 2
teraflops of single precision operations (in total 6144 cores and 16GB GDDR5
VRAM which can theoretically sustain 8 teraflops single precision computation
assuming no other bottlenecks). GPUs are accessed for computation through
NVidia CUDA v8 API and libraries, running on top of Windows Server 2012
R2 operating system.
Development Environment
Codes related to grammar generation, parsing, derivation, genetic programming, evolution, fitness computation and GPU access has been implemented
in C#, using managedCuda 3 for CUDA API bindings and NVRTC interface,
along with CUDAfy.NET 4 for interfacing to NVCC command line compiler.
The grammars for the problems has been prepared such that the languages defined are valid subsets of CUDA C language specialized towards the respective
problems.
² See validation of the hardware used in the experiment: http://www.techpowerup.com/gpuz/details/7u5xd/
³ https://kunzmi.github.io/managedCuda/
⁴ https://cudafy.codeplex.com/
Experiment Parameters
We ran each experiment with population sizes starting from 20 individual per
population, going up to 300 with increments of 20. As the subject of interest is
compilation times and not fitness, we measured the following three parameters
to evaluate compilation speed:
(i) ptx : Cuda source code to Ptx compilation time per individual
(ii) jit : Ptx to Cubin object compilation time per individual
(iii) other : All remaining operations a GP cycle requires (i.e compiled individuals running on GPU, downloading produced results, computing fitness
values, evolutionary selection, cross over, mutation, etc.)
The value of other was measured to be at sub-millisecond level in all experiments, for all problems and all population sizes; therefore it does not appear on the plots. For all practical purposes, ptx + jit can be considered the total time cost of a complete cycle for a generation, with an error margin of 1 ms / pop. size.
Each data point on the plots corresponds to the average of one of those measurements for the corresponding (population size, measurement type, experiment) triple. Each average is computed over the measurement values obtained for the first 10 generations of 15 different populations of the given size (thus effectively the compile times of 150 generations are averaged). The reason for not using 150 generations of a single population directly is that a population gains bias towards a certain type of individuals after a certain number of generations, and stops representing the inherent unbiased distribution of the grammar.
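The averaging scheme described above can be sketched as follows (an illustrative Python reconstruction, not the authors' actual measurement code; all names are ours):

```python
from statistics import mean

def average_compile_time(per_population_times):
    """Average a compile-time measurement over the first 10 generations
    of 15 independently initialized populations (150 values in total)."""
    assert len(per_population_times) == 15          # populations per size
    samples = [t for pop in per_population_times for t in pop[:10]]
    assert len(samples) == 150                      # generations averaged
    return mean(samples)

# 15 populations x 10 generations of (synthetic) 7.6 ms measurements:
data = [[7.6] * 10 for _ in range(15)]
print(average_compile_time(data))  # 7.6
```

Averaging fresh populations rather than 150 generations of one run avoids the bias noted above: every measured generation is drawn from a population still close to the grammar's unbiased distribution.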
The number of test cases used depends on the nature of the problem; on the other hand, as each test case is run as a GPU thread, it is desirable that the number of test cases is a multiple of 32 for any problem, as the finest granularity for task scheduling on modern GPUs is a group of 32 threads called a warp. For a number of test cases that is not a multiple of 32, the GPU transparently rounds the number up to the nearest multiple of 32 and allocates cores accordingly, with some threads from the last warp working on cores with output disabled. The numbers of test cases we used during the experiments were 32 for the Search Problem, 64 for the regression of the Keijzer-6 function and 1024 (= 2^(5+5)) for the 5-bit Binary Multiplier Problem. For all experiments both the mutation and crossover rates were set to 0.7; these rates do not affect the compilation times.
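The warp rounding described above amounts to a ceiling division; a minimal sketch (helper names are ours):

```python
WARP_SIZE = 32  # finest scheduling granularity on current NVidia GPUs

def threads_allocated(num_test_cases, warp_size=WARP_SIZE):
    """Threads the GPU actually allocates when each test case is one thread:
    non-multiples of the warp size are transparently rounded up."""
    warps = -(-num_test_cases // warp_size)  # ceiling division
    return warps * warp_size

# The experiment's test-case counts are exact multiples of 32, so no lane is wasted:
for n in (32, 64, 1024):
    assert threads_allocated(n) == n
# A non-multiple would waste lanes in the last warp:
print(threads_allocated(50))  # 64 (14 lanes disabled)
```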
Experiment Results
Conventional Compilation as Baseline
NVCC is the default compiler of the CUDA platform; it is distributed as a command line application. In addition to compiling CUDA C source code, it performs tasks such as separating source code into host code and device code, calling
Figure 1: Nvcc compilation times by population size. [(a) Per individual compile time, y-axis: time (ms); (b) Total compile time, y-axis: time (sec); x-axis: population size (#individuals); series: Search Problem, Keijzer-6 Regression, 5-bit Multiplier.]
the underlying host compiler (GCC or the Visual C compiler) for the host part of the source code, and linking the compiled host and device object files.
Fig. 1(a) shows that compilation times level out at 11.2 ms/individual for the Search Problem, at 7.62 ms/individual for Keijzer-6 regression, and at 17.2 ms/individual for the 5-bit multiplier problem. It can be seen in Fig. 1(b) that, even though it is not obvious, the total compilation time does not increase linearly, which is most observable on the trace of the 5-bit multiplier problem. As Nvcc runs as a separate process, it is not possible to measure the distribution of compilation time between source to PTX, PTX to CUBIN, and all other setup work (i.e. process launch overhead, disk I/O); therefore it is not possible to pinpoint the source of the nonlinearity in the total compilation time.
The need for successive invocations of the Nvcc application, and the fact that all data transfers are handled through disk files, are the main drawbacks of using Nvcc in a real-time5 context, which is the case in genetic programming. Even though the repeated creation and teardown of the NVCC process most probably guarantees that the application stays in the disk cache, this still prevents it from staying cached in the processor's L1/L2 caches.
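This per-invocation overhead is easy to observe for any out-of-process tool; the sketch below uses the Python interpreter as a stand-in for the nvcc executable (an illustration of the measurement only, not the actual experiment code, which invoked nvcc through CUDAfy.NET):

```python
import subprocess
import sys
import time

def timed_spawn(argv):
    """Launch a fresh process, wait for it to exit, return elapsed seconds."""
    start = time.perf_counter()
    subprocess.run(argv, check=True, capture_output=True)
    return time.perf_counter() - start

# Even a do-nothing "compile" pays full process creation and teardown cost
# on every invocation; that cost is the floor of the out-of-process approach.
costs = [timed_spawn([sys.executable, "-c", "pass"]) for _ in range(3)]
print(all(c > 0 for c in costs))  # True
```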
In-process Compilation
NVRTC is a runtime compilation library for CUDA C; it was first released as part of v7 of the CUDA platform in 2015. NVRTC accepts CUDA source code and compiles it to PTX in-memory. The PTX string generated by NVRTC can be further compiled to a device dependent CUBIN object file and loaded with the CUDA Driver API, still without persisting it to a disk file. This provides optimizations and performance not possible with off-line static compilation.
Without NVRTC, for each compilation a separate process needs to be spawned to execute nvcc at runtime, which has a significant overhead; NVRTC addresses these issues by providing a library interface that eliminates the overhead of spawning separate processes and the extra disk I/O.

5: not as in hard real time, but as prolonged, successive and throughput sensitive use
Figure 2: In-process and out of process compilation times by population size, for the Search Problem. [(a) Per individual: compile time per individual (ms); (b) Total: population compile time (sec); x-axis: population size (#individuals); series: out of process compilation, in-process compilation.]
Figure 3: In-process and out of process compilation times by population size, for Keijzer-6 Regression. [(a) Per individual: compile time per individual (ms); (b) Total: population compile time (sec); x-axis: population size (#individuals); series: out of process compilation, in-process compilation.]
In Figures 2, 3 and 4 it can be seen that in-process compilation of individuals not only provides reduced compilation times for all problems at all population sizes, it also allows reaching the asymptotically optimal per-individual compilation time with much smaller populations. The fastest compilation times achieved with in-process compilation are 4.14 ms/individual for Keijzer-6 regression (at 300 individuals per population), 10.88 ms/individual for the 5-bit multiplier problem (at 100 individuals per population6), and 6.89 ms/individual for the Search Problem (at 280 individuals per population7). The total compilation time speedups are
6: compilation speed at 300 individuals per population is 13.29 ms/individual
7: compilation speed at 300 individuals per population is 7.76 ms/individual
Figure 4: In-process and out of process compilation times by population size, for the 5-bit Multiplier. [(a) Per individual: compile time per individual (ms); (b) Total: population compile time (sec); x-axis: population size (#individuals); series: out of process compilation, in-process compilation.]
measured to be in the order of 261% to 176% for the Keijzer-6 regression problem, 288% to 124% for the 5-bit multiplier problem, and 272% to 143% for the Search Problem, depending on population size (see Fig. 5).
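Such speedup percentages are simple ratios of the measured total compile times; for example, using the Search Problem totals at 300 individuals reported in Table 1 (a sketch, not the authors' aggregation code):

```python
def speedup(baseline_total, improved_total):
    """Speedup ratio of an improved method over a baseline, from total times."""
    return baseline_total / improved_total

# Search Problem totals (sec) at 300 individuals: Nvcc 3.36 vs. in-process 2.33.
print(round(speedup(3.36, 2.33), 2))  # 1.44
```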
Figure 5: Compile time speedup ratios between conventional and in-process compilation by problem. [x-axis: population size (#individuals); y-axis: speed up ratio; series: Search Problem, K6 Regression, 5-bit Multiplier speedup ratios.]
Parallelizing In-process Compilation
Infeasibility of parallelization with threads
A first approach to parallelizing in-process compilation that comes to mind is to partition the individuals and spawn multiple threads, each compiling one partition in parallel through the NVRTC library. Unfortunately it turns out that the NVRTC library is not designed for multi-threaded use; we noticed that when multiple compilation calls are made from different threads at the same time, the execution is automatically serialized.
The stack trace in Fig. 6 shows nvrtc64_80.dll calling the OS kernel's EnterCriticalSection function to block for exclusive execution of a code block, and getting unblocked by another thread, which also runs a block from the same library, 853 ms later via the release of the related lock. The pattern of green blocks on the three threads in addition to the main thread in Fig. 6 shows that the calls are perfectly serialized one after another, despite being made at the same time, as hinted by the red synchronization blocks preceding them.
Figure 6: NVRTC library serializes calls from multiple threads
Although NVRTC compiles CUDA source to PTX with a single call, the presence of a compiler options setup function which affects the following compilation call, and the use of critical sections at function entries, show that this is apparently a stateful API. Furthermore, unlike the design of the CUDA APIs, the mentioned state is most likely not stored in thread local storage (TLS) but on the private heap of the dynamically loaded library, making it impossible for us to trivially parallelize this closed source library using threads, as moving the kept state to TLS would require source level modifications.
Parallelization with daemon processes
Therefore, as a second approach, we implemented a daemon process which stays resident. It is launched from the command line with a unique ID as a command line parameter to allow multiple instances. As many instances of the daemon are launched as the wanted level of parallelism, and each instance identifies itself with the ID received as parameter. Each launched process registers two named synchronization events with the operating system, for signaling the state transitions of a simple state machine consisting of the {starting, available, processing} states, which represent the state of that instance. The main process also keeps a copy of this state machine for each instance to track the states of the daemons. Thus both processes (main and daemon) keep a consistent view of the mirrored state machine by monitoring the named events, which allows state transitions to be performed in lock step. State transitions can be initiated by both processes: specifically, (starting → available) and (processing → available) are triggered by the daemon, and (available → processing) is triggered by the main process.
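The mirrored state machine and its legal transitions can be sketched as follows (a Python illustration of the protocol described above; the class and names are ours):

```python
# Legal transitions of the mirrored daemon state machine, and which side
# (daemon or main process) is allowed to trigger each one.
TRANSITIONS = {
    ("starting", "available"): "daemon",
    ("processing", "available"): "daemon",
    ("available", "processing"): "main",
}

class DaemonStateMachine:
    def __init__(self):
        self.state = "starting"

    def transition(self, new_state):
        initiator = TRANSITIONS.get((self.state, new_state))
        if initiator is None:
            raise ValueError(f"illegal transition: {self.state} -> {new_state}")
        self.state = new_state
        return initiator  # which process signalled the named event

sm = DaemonStateMachine()
print(sm.transition("available"))   # daemon (startup complete)
print(sm.transition("processing"))  # main (new batch dispatched)
print(sm.transition("available"))   # daemon (batch compiled)
```

Note that once a daemon leaves starting it can never re-enter it: no transition back to starting exists in the table.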
Main Process: create synchronization events %ID%+"1" and %ID%+"2"
Main Process → OS: launch process with command line parameter %ID%
Main Process: wait for event %ID%+"1"
OS: create process
Compilation Daemon: create named memory map "%MMAP%+%UID%"
Compilation Daemon: create view to memory map
Compilation Daemon: open synchronization events %UID%+"1" and %UID%+"2"
Compilation Daemon: signal event %UID%+"1"
Compilation Daemon: wait for event %UID%+"2"
Main Process: unblock as event %UID%+"1" is signaled

Figure 7: Sequence diagram for the creation of a compilation daemon process and the related interprocess communication primitives
The communication between the main process and the compilation daemons is handled via shared views of memory maps. Each daemon registers a named memory map and creates a memory view, onto which the main process also creates a view after the daemon signals the state transition from starting to available (see Fig. 7). CUDA source is passed through this shared memory, and the compiled device dependent CUBIN object file is returned through the same. To signal the state transition (starting → available), the daemon process signals the first event and starts waiting for the second event at the same time. Once a daemon leaves the starting state, it never returns to it.
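The shared-memory handoff can be illustrated with Python's `multiprocessing.shared_memory` standing in for the Windows memory-mapped views (a single-process round trip for brevity; the variable names are ours):

```python
from multiprocessing import shared_memory

# One round trip: "main" writes CUDA source into a named block, a second
# handle (playing the daemon) opens the same name and reads it back.
source = b"__global__ void createdFunc(int *OUTPUT) { }"

writer = shared_memory.SharedMemory(create=True, size=4096)
writer.buf[:len(source)] = source                      # main process side

reader = shared_memory.SharedMemory(name=writer.name)  # daemon side
echoed = bytes(reader.buf[:len(source)])
print(echoed == source)  # True

reader.close()
writer.close()
writer.unlink()
```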
When the main process generates a new population to be compiled, it partitions the individuals in a balanced way, such that the difference in the number of individuals between any pair of partitions is never more than one. Once the individuals are partitioned, the generated CUDA code for each partition is passed to the daemon processes. Each daemon waits in the blocked state until the main process wakes that specific daemon for a new batch of source to compile by signaling the second named event of that process (see Fig. 8). The main process signals all daemons asynchronously to start compiling, then starts waiting for the completion of the daemon processes' work. To prevent the UI thread of the main process from getting blocked too, the main process maintains a separate thread for each daemon process it communicates with; therefore, while waiting for the daemon processes to finish their jobs, only those threads of the main process are blocked. The main process signaling the second event, and the daemon process unblocking as a result, corresponds to the state transition (available → processing).
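The balanced partitioning constraint (partition sizes differing by at most one) can be sketched as follows (the helper name is ours):

```python
def partition_balanced(individuals, num_daemons):
    """Split individuals so that partition sizes differ by at most one."""
    base, extra = divmod(len(individuals), num_daemons)
    parts, start = [], 0
    for i in range(num_daemons):
        size = base + (1 if i < extra else 0)
        parts.append(individuals[start:start + size])
        start += size
    return parts

sizes = [len(p) for p in partition_balanced(list(range(300)), 8)]
print(sizes)                         # [38, 38, 38, 38, 37, 37, 37, 37]
print(max(sizes) - min(sizes) <= 1)  # True
```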
Main Process: write CUDA code to shared memory
Main Process: signal event %ID%+"2"
Main Process: wait for event %ID%+"1"
Compilation Daemon: unblock as event %ID%+"2" is signaled
Compilation Daemon: read CUDA code from shared memory; compile CUDA code to PTX with NVRTC; compile PTX to CUBIN with Driver API; write CUBIN object to shared memory
Compilation Daemon: signal event %ID%+"1"
Compilation Daemon: wait for event %ID%+"2"
Main Process: unblock as event %ID%+"1" is signaled
Main Process: read CUBIN object from shared memory

Figure 8: Sequence diagram for compilation on a daemon process and the related interprocess communication
When a daemon process arrives at the processing state, it reads the CUDA source code from the shared view of the memory map related to its ID, and compiles the code using the NVRTC library.
Once a daemon finishes compiling and writes the CUBIN object to shared memory, it signals the first event to unblock the related thread in the main process, and starts to wait for the second event once again. This signaling/blocking pair corresponds to the state transition (processing → available).
Cost of Parallelization
The parallelization approach we propose is virtually overhead free when compared to a hypothetical parallelization scenario using threads. As the daemon processes are already resident and waiting in memory along with the loaded NVRTC library, the overhead of both parallelization approaches is limited to the time cost of memory moves from/to shared memory and synchronization by named events8. The only difference between the two is that in a context switch between threads of the same process the processor keeps the Translation Lookaside Buffer (TLB), while in a context switch to another process the TLB is flushed as the processor transitions to a new virtual address space; we conjecture that the impact would be negligible.

8: on the Windows operating system named events are the fastest IPC primitive, upon which all others (i.e. mutex, semaphore) are implemented
Table 1: Compilation Times by Compilation Method for the Search Problem with 300 individuals

Compilation   Per individual   Total      Speedup vs   Speedup vs
method                                    Nvcc         in-process
Nvcc          11.20 ms         3.36 sec   1.00         —
In-process     7.76 ms         2.33 sec   1.44         1.00
2 daemons      3.81 ms         1.14 sec   2.93         2.04
4 daemons      2.53 ms         0.76 sec   4.41         3.07
6 daemons      2.23 ms         0.67 sec   5.01         3.48
8 daemons      2.13 ms         0.64 sec   5.26         3.65
Table 2: Compilation Times by Compilation Method for Keijzer-6 Regression with 300 individuals

Compilation   Per individual   Total      Speedup vs   Speedup vs
method                                    Nvcc         in-process
Nvcc           7.63 ms         2.29 sec   1.00         —
In-process     4.14 ms         1.24 sec   1.83         1.00
2 daemons      2.92 ms         0.88 sec   2.60         1.42
4 daemons      2.45 ms         0.73 sec   3.10         1.69
6 daemons      2.20 ms         0.66 sec   3.45         1.88
8 daemons      2.25 ms         0.67 sec   3.37         1.84
Regarding the memory cost, all modern operating systems recognize when an executable binary or shared library gets loaded multiple times; the OS keeps a single copy of the related memory pages in physical memory, and separately maps those into the virtual address space of each process using them. This not only saves physical RAM, but also allows better spatial locality for the L2/L3 processor caches. Hence the memory consumption of multiple instances of our daemon processes, each loading the NVRTC library (nvrtc64_80.dll is almost 15MB) into its own address space, is almost the same as the consumption of a single instance.
Speedup Achieved with Parallel Compilation
At the end of each batch of experiments the main application dumps the collected raw measurements to a file. We imported this data into Matlab, filtered it by experiment and measurement type, and aggregated the values for each population size to produce Tables 1–3 and to create Figures 9–14.
It can be seen that parallelized in-process compilation of genetic programming individuals is faster for all problems and population sizes when compared to in-process compilation without parallelization; furthermore, in-process com-
Table 3: Compilation Times by Compilation Method for the 5-bit Multiplier Problem with 300 individuals

Compilation   Per individual   Total      Speedup vs   Speedup vs
method                                    Nvcc         in-process
Nvcc          17.20 ms         5.16 sec   1.00         —
In-process    13.29 ms         3.99 sec   1.24         1.00
2 daemons      6.15 ms         1.85 sec   2.69         2.16
4 daemons      3.23 ms         0.97 sec   5.12         4.12
6 daemons      2.42 ms         0.73 sec   6.82         5.49
8 daemons      2.17 ms         0.65 sec   7.60         6.11
pilation without parallelization itself was shown to be faster than regular command line nvcc compilation in the previous section.
Parallel compilation brought the per-individual compilation time down to 2.17 ms/individual for the 5-bit Multiplier, to 2.20 ms/individual for Keijzer-6 regression and to 2.13 ms/individual for the Search Problem; these are almost an order of magnitude faster than previously published results. We also measured compilation speedups of ×3.45 for the regression problem, ×5.26 for the search problem, and ×7.60 for the multiplication problem, when compared to the latest Nvcc v8 compiler, without requiring any code modification and without any runtime performance penalty.
Notice that our experiment platform consisted of dual Xeon E5-2670 processors running at 2.6Ghz; for compute bound tasks an increase in processor frequency translates almost directly into a performance improvement at an equal rate9. Therefore we can conjecture that, to be able to compile a population of 300 individuals at sub-millisecond durations, the required processor frequency is around 2.6 × 2.13 = 5.54Ghz10, which is currently available.
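The frequency extrapolation above follows directly from the inverse-scaling assumption (a sketch; the function name is ours):

```python
def required_frequency_ghz(current_ghz, current_ms, target_ms=1.0):
    """Clock frequency needed to reach target_ms per individual, assuming
    compile time scales inversely with frequency (all else being equal)."""
    return current_ghz * current_ms / target_ms

# 2.13 ms/individual measured at 2.6 GHz (Search Problem, 8 daemons):
print(round(required_frequency_ghz(2.6, 2.13), 2))  # 5.54
```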
Conclusion
In this paper we present a new method to accelerate the compilation of genetic programming individuals, in order to keep the compiled approach a viable option for genetic programming on GPU.
By using an in-process GPU compiler, we replaced disk file based data transfers to/from the compiler with memory accesses, and we mitigated the overhead of repeated launches and teardowns of the command line compiler. We also investigated ways to parallelize this method of compilation, and identified that the in-process compilation function automatically serializes concurrent calls from different threads. We implemented a daemon process that can have multiple
9: assuming all other things being equal
10: once again, under the assumption of all other things being equal; 2.13 ms is the compilation time of the Search Problem with 8 daemons
running instances and service another application requesting CUDA code compilation. The daemon processes use the same in-process compilation method and communicate through the operating system's Inter Process Communication primitives.
We measured compilation times just above 2.1 ms/individual for all three benchmark problems, and observed compilation speedups ranging from ×3.45 to ×7.60 depending on the problem, when compared to repeated command line compilation with the latest Nvcc v8 compiler.
All data and source code of the software presented in this paper are available at https://github.com/hayral/Parallel-and-in-process-compilation-of-individuals-for-genetic-programming-on-GPU
Acknowledgments
Dedicated to the memory of Professor Ahmet Coşkun Sönmez.
The first author was partially supported by Turkcell Academy.
Appendix
Search Problem
Grammar Listing
<expr>        ::= <expr2> <bi-op> <expr2> | <expr2>
<expr2>       ::= <int> | <var-read> | <var-indexed>
<var-read>    ::= tmp | i | OUTPUT | SEARCH
<var-indexed> ::= INPUT[<var-read> % LENINPUT]
<var-write>   ::= tmp | OUTPUT
<bi-op>       ::= + | -
<int>         ::= 1 | 2 | (-1)
<statement>   ::= <assignment> | <if> | <loop>
<statement2>  ::= <assignment> | <if2>
<statement3>  ::= <assignment>
<loop>        ::= for(i=0; i #lesser# LENINPUT; i++){<c-block2>}
<if>          ::= if(<cond-expr>){<c-block2>}
<if2>         ::= if(<cond-expr>){<c-block3>}
<cond-expr>   ::= <expr> <comp-op> <expr>
<comp-op>     ::= #lesser# | #greater# | == | !=
<assignment>  ::= <var-write> = <expr>;
<c-block>     ::= <statements>
<c-block2>    ::= <statements2>
<c-block3>    ::= <statements3>
<statements>  ::= <statement> | <statement><statement> | <statement><statement><statement>
<statements2> ::= <statement2> | <statement2><statement2> | <statement2><statement2><statement2>
<statements3> ::= <statement3> | <statement3><statement3> | <statement3><statement3><statement3>

Listing 1: Grammar for Search Problem
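To illustrate how individuals are derived from such a grammar, here is a minimal random-derivation sketch over a fragment of Listing 1 (the dict encoding and helper are ours, not the authors' C# generator):

```python
import random

# Fragment of the Search Problem grammar as a dict: nonterminals map to
# lists of productions; anything not in the dict is a terminal.
GRAMMAR = {
    "<expr>":     [["<expr2>", " ", "<bi-op>", " ", "<expr2>"], ["<expr2>"]],
    "<expr2>":    [["<int>"], ["<var-read>"]],
    "<var-read>": [["tmp"], ["i"], ["OUTPUT"], ["SEARCH"]],
    "<bi-op>":    [["+"], ["-"]],
    "<int>":      [["1"], ["2"], ["(-1)"]],
}

def derive(symbol, rng):
    """Expand a symbol to a terminal string by random production choices."""
    if symbol not in GRAMMAR:
        return symbol
    return "".join(derive(s, rng) for s in rng.choice(GRAMMAR[symbol]))

print(derive("<expr>", random.Random(7)))
```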
Code Preamble for Whole Population
__constant__ int _INPUT[NUMTESTCASE][MAX_TESTCASE_LEN];
__constant__ int _LENINPUT[NUMTESTCASE];
__constant__ int _SEARCH[NUMTESTCASE];
__constant__ int _CORRECTANSWER[NUMTESTCASE];

__global__ void createdFunc(int *_OUTPUT)
{
    int *INPUT   = _INPUT[threadIdx.x];
    int LENINPUT = _LENINPUT[threadIdx.x];
    int SEARCH   = _SEARCH[threadIdx.x];
    int i;
    int tmp;
    int OUTPUT;

Listing 2: Code preamble for whole population on Search Problem
Keijzer-6 Regression Problem
Grammar Listing
<e>  ::= <e2> + <e2> | <e2> - <e2> | <e2> * <e2> | <e2> / <e2>
       | sqrtf(fabsf(<e2>)) | sinf(<e2>) | tanhf(<e2>)
       | expf(<e2>) | logf(fabsf(<e2>)+1)
       | x | x | x | x
       | <c><c>.<c><c> | <c><c>.<c><c> | <c><c>.<c><c> | <c><c>.<c><c>
<e2> ::= <e3> + <e3> | <e3> - <e3> | <e3> * <e3> | <e3> / <e3>
       | sqrtf(fabsf(<e3>)) | sinf(<e3>) | tanhf(<e3>)
       | expf(<e3>) | logf(fabsf(<e3>)+1)
       | x | x | x | x | x | x
       | <c><c>.<c><c> | <c><c>.<c><c> | <c><c>.<c><c>
       | <c><c>.<c><c> | <c><c>.<c><c> | <c><c>.<c><c>
<e3> ::= <e3> + <e3> | <e3> - <e3> | <e3> * <e3> | <e3> / <e3>
       | sqrtf(fabsf(<e3>)) | sinf(<e3>) | tanhf(<e3>)
       | expf(<e3>) | logf(fabsf(<e3>)+1)
       | x | x | x | x | x | x | x | x
       | <c><c>.<c><c> | <c><c>.<c><c> | <c><c>.<c><c>
       | <c><c>.<c><c> | <c><c>.<c><c> | <c><c>.<c><c>
       | <c><c>.<c><c> | <c><c>.<c><c>
<c>  ::= 0 | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9

Listing 3: Grammar for Keijzer-6 Regression
5-bit Multiplier Problem
Grammar Listing
<start> ::= o0=<expr>; o1=<expr>; o2=<expr>; o3=<expr>; o4=<expr>;
            o5=<expr>; o6=<expr>; o7=<expr>; o8=<expr>; o9=<expr>;
<expr>  ::= (<expr2> <bi-op> <expr2>) | <var> | (~<var>)
<expr2> ::= (<expr2> <bi-op> <expr2>) | <var> | (~<var>)
          | <var> | (~<var>)
<var>   ::= a0 | a1 | a2 | a3 | a4 | b0 | b1 | b2 | b3 | b4
<bi-op> ::= & | #or#

Listing 4: Grammar for 5-bit Multiplier Problem
Code Preamble for each Individual
__global__ void createdFunc0(int *OUTPUT)
{
    int tid = blockIdx.x * blockDim.x + threadIdx.x;

    int a0 = tid & 0x1;
    int a1 = (tid & 0x2) >> 1;
    int a2 = (tid & 0x4) >> 2;
    int a3 = (tid & 0x8) >> 3;
    int a4 = (tid & 0x10) >> 4;
    int b0 = (tid & 0x20) >> 5;
    int b1 = (tid & 0x40) >> 6;
    int b2 = (tid & 0x80) >> 7;
    int b3 = (tid & 0x100) >> 8;
    int b4 = (tid & 0x200) >> 9;

    int o0, o1, o2, o3, o4, o5, o6, o7, o8, o9;

Listing 5: Code preamble for 5-bit Multiplier Problem
Compilation Time and Speedup Ratio Plots
Figure 9: Nvcc compilation times for Search Problem by number of servicing resident processes. [(a) Per individual compile time (ms); (b) Total compile time (sec); x-axis: population size (#individuals); series: inprocess, 2/4/6/8 service processes.]
Figure 10: Parallelization speedup on Search problem. [(a) Speedup against conventional compilation; (b) Speedup against in-process compilation; x-axis: population size (#individuals); y-axis: speed up ratio; series: inprocess, 2/4/6/8 service processes.]
Figure 11: Nvcc compilation times for Keijzer-6 regression by number of servicing resident processes. [(a) Per individual compile time (ms); (b) Total compile time (sec); x-axis: population size (#individuals); series: inprocess, 2/4/6/8 service processes.]
Figure 12: Parallelization speedup on Keijzer-6 regression. [(a) Speedup against conventional compilation; (b) Speedup against in-process compilation; x-axis: population size (#individuals); y-axis: speed up ratio; series: inprocess, 2/4/6/8 service processes.]
Figure 13: Nvcc compilation times for 5-bit Multiplier by number of servicing resident processes. [(a) Per individual compile time (ms); (b) Total compile time (sec); x-axis: population size (#individuals); series: inprocess, 2/4/6/8 service processes.]
Figure 14: Parallelization speedup on 5-Bit multiplier. [(a) Speedup against conventional compilation; (b) Speedup against in-process compilation; x-axis: population size (#individuals); y-axis: speed up ratio; series: inprocess, 2/4/6/8 service processes.]
Fractional order statistic approximation for
nonparametric conditional quantile inference
arXiv:1609.09035v1 [] 28 Sep 2016
Matt Goldman∗
David M. Kaplan∗
September 29, 2016
Abstract
Using and extending fractional order statistic theory, we characterize the O(n^{-1}) coverage probability error of the previously proposed confidence intervals for population quantiles using L-statistics as endpoints in Hutson (1999). We derive an analytic expression for the n^{-1} term, which may be used to calibrate the nominal coverage level to get O(n^{-3/2}[log(n)]^3) coverage error. Asymptotic power is shown to be optimal. Using kernel smoothing, we propose a related method for nonparametric inference on conditional quantiles. This new method compares favorably with asymptotic normality and bootstrap methods in theory and in simulations. Code is provided for both unconditional and conditional inference.
JEL classification: C21
Keywords: Dirichlet, high-order accuracy, inference-optimal bandwidth, kernel
smoothing.
© 2016 by the authors. This manuscript version is made available under the CC-BY-NC-ND 4.0 license: http://creativecommons.org/licenses/by-nc-nd/4.0/
∗
Goldman: Microsoft, [email protected]. Kaplan (corresponding author): Department of Economics, University of Missouri, [email protected]. We thank the co-editor (Oliver Linton), associate
editor, referees, and Yixiao Sun for helpful comments and references, and Patrik Guggenberger and Andres
Santos for feedback improving the clarity of presentation. Thanks also to Brendan Beare, Karen Messer, and
active audience members at seminars and conferences. Thanks to Ruixuan Liu for providing and discussing
code from Fan and Liu (2016). This paper was previously circulated as parts of “IDEAL quantile inference
via interpolated duals of exact analytic L-statistics” and “IDEAL inference on conditional quantiles.”
1 Introduction
Quantiles contain information about a distribution’s shape. Complementing the mean, they
capture heterogeneity, inequality, and other measures of economic interest. Nonparametric
conditional quantile models further allow arbitrary heterogeneity across regressor values.
This paper concerns nonparametric inference on quantiles and conditional quantiles. In
particular, we characterize the high-order accuracy of both Hutson’s (1999) L-statistic-based
confidence intervals (CIs) and our new conditional quantile CIs.
Conditional quantiles appear across diverse topics because they are fundamental statistical objects. Such topics include wages (Buchinsky, 1994; Chamberlain, 1994; Hogg, 1975),
infant birthweight (Abrevaya, 2001), demand for alcohol (Manning, Blumberg, and Moulton,
1995), and Engel curves (Alan, Crossley, Grootendorst, and Veall, 2005; Deaton, 1997, pp.
81–82), which we examine in our empirical application.
We formally derive the coverage probability error (CPE) of the CIs from Hutson (1999), as
well as asymptotic power of the corresponding hypothesis tests. Hutson (1999) had proposed
CIs for quantiles using L-statistics (interpolating between order statistics) as endpoints and
found they performed well, but formal proofs were lacking. Using the analytic n^{-1} term we derive in the CPE, we provide a new calibration to achieve O(n^{-3/2}[log(n)]^3) CPE, analogous to the Ho and Lee (2005a) analytic calibration of the CIs in Beran and Hall (1993).
The theoretical results we develop contribute to the fractional order statistic literature
and provide the basis for inference on other objects of interest explored in Goldman and
Kaplan (2016b) and Kaplan (2014). In particular, Theorem 2 tightly links the distributions
of L-statistics from the observed and ‘ideal’ (unobserved) fractional order statistic processes.
Additionally, Lemma 7 provides Dirichlet PDF and PDF derivative approximations.
High-order accuracy is important for small samples (e.g., for experiments) as well as
nonparametric analysis with small local sample sizes. For example, if n = 1024 and there
are five binary regressors, then the smallest local sample size cannot exceed 1024/2^5 = 32.
For nonparametric conditional quantile inference, we apply the unconditional method
to a local sample (similar to local constant kernel regression), smoothing over continuous
covariates and also allowing discrete covariates. CPE is minimized by balancing the CPE of
our unconditional method and the CPE from bias due to smoothing. We derive the optimal
CPE and bandwidth rates, as well as a plug-in bandwidth when there is a single continuous
covariate.
Our L-statistic method has theoretical and computational advantages over methods based
on normality or an unsmoothed bootstrap. The theoretical bottleneck for our approach is
the need to use a uniform kernel. Nonetheless, even if normality or bootstrap methods
assume an infinitely differentiable conditional quantile function (and hypothetically fit an
infinite-degree local polynomial), our CPE is still of smaller order with one or two continuous
covariates. Our method also computes more quickly than existing methods (of reasonable
accuracy), handling even more challenging tasks in 10–15 seconds instead of minutes.
Recent complementary work of Fan and Liu (2016) also concerns a “direct method” of
nonparametric inference on conditional quantiles. They use a limiting Gaussian process to
derive first-order accuracy in a general setting, whereas we use the finite-sample Dirichlet
process to achieve high-order accuracy in an iid setting. Fan and Liu (2016) also provide
uniform (over X) confidence bands. We suggest a confidence band from interpolating a
growing number of joint CIs (as in Horowitz and Lee (2012)), although it will take additional
work to rigorously justify. A different, ad hoc confidence band described in Section 6 generally
outperformed others in our simulations.
If applied to a local constant estimator with a uniform kernel and the same bandwidth,
the Fan and Liu (2016) approach is less accurate than ours due to the normal (instead of
beta) reference distribution and integer (instead of interpolated) order statistics in their CI
in equation (6). However, with other estimators like local polynomials or that in Donald,
Hsu, and Barrett (2012), the Fan and Liu (2016) method is not necessarily less accurate.
One limitation of our approach is that it cannot incorporate these other estimators, whereas
Assumption GI(iii) in Fan and Liu (2016) includes any estimator that weakly converges (over
a range of quantiles) to a Gaussian process with a particular structure. We compare further
in our simulations. One open question is whether using our beta reference and interpolation
can improve accuracy for the general Fan and Liu (2016) method beyond the local constant
estimator with a uniform kernel; our Lemma 3 shows this at least retains first-order accuracy.
The order statistic approach to quantile inference uses the idea of the probability integral
transform, which dates back to R. A. Fisher (1932), Karl Pearson (1933), and Neyman (1937). For continuous X_i that are iid ∼ F(·), the F(X_i) are iid ∼ Unif(0, 1). Each order statistic from such an
iid uniform sample has a known beta distribution for any sample size n. We show that
the L-statistic linearly interpolating consecutive order statistics also follows an approximate
beta distribution, with only O(n^{-1}) error in CDF. Although O(n^{-1}) is an asymptotic claim,
the CPE of the CI using the L-statistic endpoint is bounded between the CPEs of the CIs
using the two order statistics comprising the L-statistic, where one such CPE is too small
and one is too big, for any sample size. This is an advantage over methods more sensitive
to asymptotic approximation error.
Many other approaches to one-sample quantile inference have been explored. With Edgeworth expansions, Hall and Sheather (1988) and Kaplan (2015) obtain two-sided O(n^{-2/3}) CPE. With bootstrap, smoothing is necessary for high-order accuracy. This increases the computational burden and requires good bandwidth selection in practice.¹ See Ho and Lee
(2005b, §1) for a review of bootstrap methods. Smoothed empirical likelihood (Chen and
Hall, 1993) also achieves nice theoretical properties, but with the same caveats.
Other order statistic-based CIs dating back to Thompson (1936) are surveyed in David
and Nagaraja (2003, §7.1). Most closely related to Hutson (1999) is Beran and Hall (1993).
Like Hutson (1999), Beran and Hall (1993) linearly interpolate order statistics for CI endpoints, but with an interpolation weight based on the binomial distribution. Although their
proofs use expansions of the Rényi (1953) representation instead of fractional order statistic
¹ For example, while achieving the impressive two-sided CPE of O(n^{-3/2}), Polansky and Schucany (1997, p. 833) admit, “If this method is to be of any practical value, a better bandwidth estimation technique will certainly be required.”
theory, their n^{-1} CPE term is identical to that for Hutson (1999) other than the different
weight. Prior work (e.g., Bickel, 1967; Shorack, 1972) has established asymptotic normality
of L-statistics and convergence of the sample quantile process to a Gaussian limit process,
but without such high-order accuracy.
The most apparent difference between the two-sided CIs of Beran and Hall (1993) and
Hutson (1999) is that the former are symmetric in the order statistic index, whereas the
latter are equal-tailed. This allows Hutson (1999) to be computed further into the tails.
Additionally, our framework can be extended to CIs for interquantile ranges and two-sample
quantile differences (Goldman and Kaplan, 2016b), which has not been done in the Rényi
representation framework.
For nonparametric conditional quantile inference, in addition to the aforementioned Fan
and Liu (2016) approach, Chaudhuri (1991) derives the pointwise asymptotic normal distribution of a local polynomial estimator. Qu and Yoon (2015) propose modified local linear
estimators of the conditional quantile process that converge weakly to a Gaussian process,
and they suggest using a type of bias correction that strictly enlarges a CI to deal with the
first-order effect of asymptotic bias when using the MSE-optimal bandwidth rate.
Section 2 contains our theoretical results on fractional order statistic approximation,
which are applied to unconditional quantile inference in Section 3. Section 4 concerns our
new conditional quantile inference method. An empirical application and simulation results
are in Sections 5 and 6, respectively. Proof sketches are collected in Appendix A, while
the supplemental appendix contains full proofs. The supplemental appendix also contains
details of the plug-in bandwidth calculations, as well as additional empirical and simulation
results.
Notationally, φ(·) and Φ(·) are respectively the standard normal PDF and CDF; ≐ should be read as “is equal to, up to smaller-order terms”; ≍ as “has exact (asymptotic) rate/order of”; and A_n = O(B_n) as usual. Acronyms used are those for cumulative distribution function (CDF), confidence interval (CI), coverage probability (CP), coverage probability error (CPE),
(CDF), confidence interval (CI), coverage probability (CP), coverage probability error (CPE),
and probability density function (PDF).
2 Fractional order statistic theory
In this section, we introduce notation and present our core theoretical results linking unobserved ‘ideal’ fractional L-statistics with their observed counterparts.
Given an iid sample {X_i}_{i=1}^n of draws from a continuous CDF denoted² F(·), interest is
in Q(p) ≡ F −1 (p) for some p ∈ (0, 1), where Q(·) is the quantile function. For u ∈ (0, 1), the
sample L-statistic commonly associated with Q(u) is
Q̂_X^L(u) ≡ (1 − ε)X_{n:k} + ε X_{n:k+1},   k ≡ ⌊u(n + 1)⌋,   ε ≡ u(n + 1) − k,   (1)
where ⌊·⌋ is the floor function, ε is the interpolation weight, and X_{n:k} denotes the kth order statistic (i.e., kth smallest sample value). While Q(u) is latent and nonrandom, Q̂_X^L(u) is a random variable, and Q̂_X^L(·) is a stochastic process, observed for arguments in [1/(n + 1), n/(n + 1)].
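The interpolation in (1) can be sketched in a few lines; this is a minimal illustration, not the authors' released code, and the function name and error handling are our own:

```python
import math

def l_statistic(x_sorted, u):
    # Q-hat^L_X(u) from eq. (1): interpolate the consecutive order statistics
    # X_{n:k} and X_{n:k+1}, with k = floor(u(n+1)) and weight eps = u(n+1) - k.
    n = len(x_sorted)
    k = math.floor(u * (n + 1))
    eps = u * (n + 1) - k
    if k < 1 or k > n or (k == n and eps > 0):
        raise ValueError("u outside the observable range [1/(n+1), n/(n+1)]")
    if eps == 0:
        return x_sorted[k - 1]          # u in Xi_n: no interpolation needed
    return (1 - eps) * x_sorted[k - 1] + eps * x_sorted[k]
```

For example, with n = 4 and u = 0.5, u(n + 1) = 2.5, so the value is the midpoint of the second and third order statistics.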
Let Ξ_n ≡ {k/(n + 1)}_{k=1}^n denote the set of quantiles corresponding to the observed order statistics. If u ∈ Ξ_n, then no interpolation is necessary and Q̂_X^L(u) = X_{n:k}. As detailed in Section 3, application of the probability integral transform yields exact coverage probability of a CI endpoint X_{n:k} for Q(p): P(X_{n:k} < F^{-1}(p)) = P(U_{n:k} < p), where U_{n:k} ≡ F(X_{n:k}) ∼ β(k, n + 1 − k) is equal in distribution to the kth order statistic from iid U_i ∼ Unif(0, 1), i = 1, . . . , n (Wilks, 1962, 8.7.4). However, we also care about u ∉ Ξ_n, in which case k is fractional. To better handle such fractional order statistics, we will present a tight link between the marginal distributions of the stochastic process Q̂_X^L(·) and those of the analogous
‘ideal’ (I) process
Q̃IX (·)
≡F
−1
2
Q̃IU (·)
,
(2)
F will often be used with a random variable subscript to denote the CDF of that particular random
variable. If no subscript is present, then F (·) refers to the CDF of X. Similarly for the PDF f (·).
where Q̃_U^I(·) is the ideal (I) uniform (U) fractional order “statistic” process. We use a tilde in Q̃_X^I(·) and Q̃_U^I(·) instead of the hat as in Q̂_X^L(·) to emphasize that the former are unobserved (hence not true statistics), whereas the latter is computable from the sample data.
This Q̃IU (·) in (2) is a Dirichlet process (Ferguson, 1973; Stigler, 1977) on the unit interval
with index measure ν([0, t]) = (n + 1)t. Its univariate marginals are
Q̃_U^I(u) = U_{n:(n+1)u} ∼ β((n + 1)u, (n + 1)(1 − u)).   (3)
The marginal distribution of (Q̃_U^I(u_1), Q̃_U^I(u_2) − Q̃_U^I(u_1), . . . , Q̃_U^I(u_k) − Q̃_U^I(u_{k−1})) for u_1 < · · · < u_k is Dirichlet with parameters (u_1(n + 1), (u_2 − u_1)(n + 1), . . . , (u_k − u_{k−1})(n + 1)).
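The beta marginal in (3) is easy to check by simulation; the sketch below (our own, standard-library only) draws Q̃_U^I(u) as a ratio of gamma variates, a textbook construction of the beta distribution:

```python
import random

def ideal_fractional_os(n, u, rng):
    # One draw of Q~^I_U(u) ~ Beta((n+1)u, (n+1)(1-u)) from (3),
    # built as G_a / (G_a + G_b) with independent gamma variates.
    a, b = (n + 1) * u, (n + 1) * (1 - u)
    ga = rng.gammavariate(a, 1.0)
    gb = rng.gammavariate(b, 1.0)
    return ga / (ga + gb)

rng = random.Random(0)
n, u = 11, 0.65
draws = [ideal_fractional_os(n, u, rng) for _ in range(20000)]
mean = sum(draws) / len(draws)   # the Beta((n+1)u, (n+1)(1-u)) mean is exactly u
```

The simulated mean should sit very close to u, consistent with the exact beta marginal.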
For all u ∈ Ξn , Q̃IX (u) coincides with Q̂LX (u); they differ only in their interpolation
between these points. Proposition 1 shows Q̃IX (·) and Q̂LX (·) to be closely linked in probability.
Proposition 1. For any fixed δ > 0 and m > 0, define U^δ ≡ {u ∈ (0, 1) | ∀ t ∈ (u − m, u + m), f(F^{-1}(t)) ≥ δ} and U_n^δ ≡ U^δ ∩ [1/(n + 1), n/(n + 1)]; then, sup_{u ∈ U_n^δ} |Q̃_X^I(u) − Q̂_X^L(u)| = O_p(n^{-1} log(n)).
Although Proposition 1 motivates approximating the distribution of Q̂_X^L(u) by that of Q̃_X^I(u), it is not relevant to high-order accuracy. In fact, its result is achieved by any interpolation between X_{n:k} and X_{n:k+1}, not just Q̂_X^L(u); in contrast, the high-order accuracy we establish in Theorem 4 is only possible with precise interpolations like Q̂_X^L(u).
Next, we consider marginal distributions of fixed dimension J. We also consider the
Gaussian approximation to the sampling distribution of fractional order statistics. It is well
known that the centered and scaled empirical process for standard uniform random variables
converges to a Brownian bridge. For standard Brownian bridge process B(·), we index by
u ∈ (0, 1) the additional stochastic processes
Q̃_U^B(u) ≡ u + n^{-1/2} B(u)   and   Q̃_X^B(u) ≡ F^{-1}(Q̃_U^B(u)).
The vector Q̃_U^I(u) has an ordered Dirichlet distribution (i.e., the spacings between consecutive Q̃_U^I(u_j) follow a joint Dirichlet distribution), while Q̃_U^B(u) is multivariate Gaussian.
Lemma 7 in the appendix shows the close relationship between multivariate Dirichlet and
Gaussian PDFs and PDF derivatives.
Theorem 2 shows the close distributional link among linear combinations of ideal, interpolated, and Gaussian-approximated fractional order statistics. Specifically, for arbitrary weight vector ψ ∈ R^J, we (distributionally) approximate
L^L ≡ Σ_{j=1}^J ψ_j Q̂_X^L(u_j)   by   L^I ≡ Σ_{j=1}^J ψ_j Q̃_X^I(u_j),   (4)
or alternatively by L^B ≡ Σ_{j=1}^J ψ_j Q̃_X^B(u_j).
Our assumptions for this section are now presented, followed by the main theoretical
result. Assumption A2 ensures that the first three derivatives of the quantile function are
uniformly bounded in neighborhoods of the quantiles, uj , which helps bound remainder terms
in the proofs. We use bold for vectors and underline for matrices.
Assumption A1. Sampling is iid: X_i ∼ F, i = 1, . . . , n.
Assumption A2. For each quantile u_j, the PDF f(·) (corresponding to CDF F(·) in A1) satisfies (i) f(F^{-1}(u_j)) > 0; (ii) f′′(·) is continuous in some neighborhood of F^{-1}(u_j), i.e., f ∈ C²(U_δ(F^{-1}(u_j))) with U_δ(x) denoting some δ-neighborhood of point x ∈ R.
Theorem 2. Define V as the J × J matrix with row i, column j entries V_{i,j} = min{u_i, u_j} − u_i u_j. Let A be the J × J matrix with main diagonal entries A_{j,j} = f(F^{-1}(u_j)) and zeros elsewhere, and let
V_ψ ≡ ψ′A^{-1}V A^{-1}ψ,   X_0 ≡ Σ_{j=1}^J ψ_j F^{-1}(u_j).
Let Assumption A1 hold, and let A2 hold at ū. Given the definitions in (1), (2), and (4), the following results hold uniformly over u = ū + o(1).
(i) For a given constant K,
P(L^L < X_0 + n^{-1/2}K) − P(L^I < X_0 + n^{-1/2}K)
  = [K exp{−K²/(2V_ψ)} / √(2πV_ψ³)] [Σ_{j=1}^J ψ_j² ε_j(1 − ε_j) / f(F^{-1}(u_j))²] n^{-1} + O(n^{-3/2}[log(n)]³),
where the remainder is uniform over all K.
(ii) Uniformly over K,
sup_{K∈R} |P(L^L < X_0 + n^{-1/2}K) − P(L^I < X_0 + n^{-1/2}K)|
  = [e^{-1/2} / √(2πV_ψ²)] [Σ_{j=1}^J ψ_j² ε_j(1 − ε_j) / f(F^{-1}(u_j))²] n^{-1} + O(n^{-3/2}[log(n)]³),
sup_{K∈R} |P(L^L < X_0 + n^{-1/2}K) − P(L^B < X_0 + n^{-1/2}K)| = O(n^{-1/2}[log(n)]³).
3 Quantile inference: unconditional
For inference on Q(p), we continue to maintain A1 and A2. For p ∈ (0, 1) and confidence level 1 − α, define u^h(α) and u^l(α) to solve
α = P(Q̃_U^I(u^h(α)) < p),   α = P(Q̃_U^I(u^l(α)) > p),   (5)
with Q̃_U^I(u) ∼ β((n + 1)u, (n + 1)(1 − u)) from (3), parallel to (7) and (8) in Hutson (1999). One-sided CI endpoints for Q(p) are Q̂_X^L(u^h) or Q̂_X^L(u^l). Two-sided CIs replace α with α/2 in (5) and use both endpoints. This use of α/2 yields the equal-tailed property; more generally, tα and (1 − t)α can be used for t ∈ (0, 1).
Figure 1 visualizes an example. The beta distribution’s mean is u^h (or u^l). Decreasing u^h increases the probability mass in the shaded region below u, while increasing u^h decreases the shaded region, and vice-versa for u^l. Solving (5) is a simple numerical search problem.
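The numerical search can be sketched with only the standard library; the Simpson-rule beta CDF and the bisection below are our own simplifications (production code would call a library's regularized incomplete beta function), and they rely on the CDF at p being strictly decreasing in the index u:

```python
import math

def beta_cdf(x, a, b, m=1000):
    # Regularized incomplete beta via Simpson's rule on the Beta(a, b) PDF.
    # Accurate here because the integrand is smooth on [0, x] for the
    # (a, b) values visited below.
    lognorm = math.lgamma(a + b) - math.lgamma(a) - math.lgamma(b)
    def pdf(t):
        if t <= 0.0 or t >= 1.0:
            return 0.0
        return math.exp(lognorm + (a - 1) * math.log(t) + (b - 1) * math.log1p(-t))
    h = x / m
    odd = sum(pdf((2 * i - 1) * h) for i in range(1, m // 2 + 1))
    even = sum(pdf(2 * i * h) for i in range(1, m // 2))
    return (pdf(0.0) + pdf(x) + 4 * odd + 2 * even) * h / 3

def endpoint_index(p, alpha, n, upper=True):
    # Solve (5): find u with P(Beta((n+1)u, (n+1)(1-u)) < p) = alpha
    # (upper endpoint u^h) or = 1 - alpha (lower endpoint u^l).
    target = alpha if upper else 1 - alpha
    lo, hi = (p, 1 - 1e-9) if upper else (1e-9, p)
    for _ in range(48):
        mid = 0.5 * (lo + hi)
        if beta_cdf(p, (n + 1) * mid, (n + 1) * (1 - mid)) > target:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

For n = 11, p = 0.65, α = 0.1, this search returns endpoint indices matching the u^l ≈ 0.42 and u^h ≈ 0.84 displayed in Figure 1.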
Lemma 3 shows the CI endpoint indices converge to p at an n^{-1/2} rate and may be approximated using quantiles of a normal distribution.

Figure 1: Example of one-sided CI endpoint determination, n = 11, p = 0.65, α = 0.1. Left: u^l makes the shaded region’s area P(Q̃_U^I(u^l) > p) = α, giving u^l = 0.42. Right: similarly, u^h solves P(Q̃_U^I(u^h) < p) = α, giving u^h = 0.84.

Lemma 3. Let z_{1−α} denote the (1 − α)-quantile of a standard normal distribution, z_{1−α} ≡ Φ^{-1}(1 − α). From the definitions in (5), the values u^l(α) and u^h(α) can be approximated as
u^l(α) = p − n^{-1/2} z_{1−α} √(p(1 − p)) − [(2p − 1)/(6n)](z_{1−α}² + 2) + O(n^{-3/2}),
u^h(α) = p + n^{-1/2} z_{1−α} √(p(1 − p)) − [(2p − 1)/(6n)](z_{1−α}² + 2) + O(n^{-3/2}).
For the lower one-sided CI, using (5), the 1 − α CI from Hutson (1999) is
(−∞, Q̂_X^L(u^h(α))).   (6)
Coverage probability is
P{Q(p) ∈ (−∞, Q̂_X^L(u^h(α)))} = P(Q̂_X^L(u^h(α)) > Q(p))
  = P(Q̃_X^I(u^h(α)) > Q(p)) + [ε_h(1 − ε_h) z_{1−α} exp{−z_{1−α}²/2} / (√(2π) u^h(α)(1 − u^h(α)))] n^{-1} + O(n^{-3/2}[log(n)]³)   (by Thm 2)
  = 1 − α + [ε_h(1 − ε_h) z_{1−α} φ(z_{1−α}) / (p(1 − p))] n^{-1} + O(n^{-3/2}[log(n)]³),
where φ(·) is the standard normal PDF and the n^{-1} term is non-negative. Similar to the Ho and Lee (2005a) calibration, we can remove the analytic n^{-1} term with the calibrated CI
(−∞, Q̂_X^L(u^h(α + ε_h(1 − ε_h) z_{1−α} φ(z_{1−α}) n^{-1} / (p(1 − p))))),   (7)
which has CPE of order O(n^{-3/2}[log(n)]³). We follow convention and define CPE ≡ CP −
(1 − α), where CP is the actual coverage probability and 1 − α the desired confidence level.
By parallel argument, Hutson’s (1999) uncalibrated upper one-sided and two-sided CIs also have O(n^{-1}) CPE, or O(n^{-3/2}[log(n)]³) with calibration. For the upper one-sided case, again using (5), the 1 − α Hutson CI and our calibrated CI are respectively given by
(Q̂_X^L(u^l(α)), ∞)   and   (Q̂_X^L(u^l(α + ε_ℓ(1 − ε_ℓ) z_{1−α} φ(z_{1−α}) n^{-1} / (p(1 − p)))), ∞),   (8)
and for equal-tailed two-sided CIs,
(Q̂_X^L(u^l(α/2)), Q̂_X^L(u^h(α/2)))   (9)
and
(Q̂_X^L(u^l(α/2 + ε_ℓ(1 − ε_ℓ) z_{1−α/2} φ(z_{1−α/2}) n^{-1} / (p(1 − p)))),
 Q̂_X^L(u^h(α/2 + ε_h(1 − ε_h) z_{1−α/2} φ(z_{1−α/2}) n^{-1} / (p(1 − p))))).   (10)
Without calibration, in all cases the n^{-1} CPE term is non-negative (indicating over-coverage).
For relatively extreme quantiles p (given n), the L-statistic method cannot be computed
because the (n + 1)th (or zeroth) order statistic is needed. In such cases, our code uses the
Edgeworth expansion-based CI in Kaplan (2015). Alternatively, if bounds on X are known a
priori, they may be used in place of these “missing” order statistics to generate conservative
CIs. Regardless, as n → ∞, the range of computable quantiles approaches (0, 1).
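As an illustration, a two-sided equal-tailed interval in the spirit of (9) can be assembled from a sorted sample. The sketch below is ours: for brevity it substitutes the Lemma 3 normal approximations for the exact beta-based u^l(α/2) and u^h(α/2) of (5), which introduces only an O(n^{-3/2}) endpoint-index error, and it assumes both indices fall in the observable range:

```python
import math
from statistics import NormalDist

def approx_hutson_ci(sample, p, alpha):
    # Equal-tailed two-sided CI as in (9), with endpoint indices from the
    # Lemma 3 expansions: u^{l,h} = p -/+ n^{-1/2} z sqrt(p(1-p))
    #                              - (2p-1)(z^2+2)/(6n),  z = z_{1-alpha/2}.
    x = sorted(sample)
    n = len(x)
    z = NormalDist().inv_cdf(1 - alpha / 2)
    a = z * math.sqrt(p * (1 - p) / n)
    b = (2 * p - 1) * (z * z + 2) / (6 * n)
    def interp(u):
        # L-statistic endpoint, eq. (1); assumes u in [1/(n+1), n/(n+1)).
        k = math.floor(u * (n + 1))
        eps = u * (n + 1) - k
        return (1 - eps) * x[k - 1] + eps * x[k]
    return interp(p - a - b), interp(p + a - b)
```

For the sample 1, . . . , 99 with p = 0.5 and α = 0.1, the endpoint indices are roughly 0.417 and 0.583, so the interval brackets the sample median.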
The hypothesis tests corresponding to all the foregoing CIs achieve optimal asymptotic
power against local alternatives. The sample quantile is a semiparametric efficient estimator,
so it suffices to show that power is asymptotically first-order equivalent to that of the test
based on asymptotic normality. Theorem 4 collects all of our results on coverage and power.
Theorem 4. Let z_α denote the α-quantile of the standard normal distribution, and let ε_h = (n + 1)u^h(α) − ⌊(n + 1)u^h(α)⌋ and ε_ℓ = (n + 1)u^l(α) − ⌊(n + 1)u^l(α)⌋. Let Assumption A1 hold, and let A2 hold at p. Then, we have the following.
(i) The one-sided lower and upper CIs in (6) and (8) have coverage probability
1 − α + [ε(1 − ε) z_{1−α} φ(z_{1−α}) / (p(1 − p))] n^{-1} + O(n^{-3/2}[log(n)]³),
with ε = ε_h for the former and ε = ε_ℓ for the latter.
(ii) The equal-tailed, two-sided CI in (9) has coverage probability
1 − α + [(ε_h(1 − ε_h) + ε_ℓ(1 − ε_ℓ)) z_{1−α/2} φ(z_{1−α/2}) / (p(1 − p))] n^{-1} + O(n^{-3/2}[log(n)]³).
(iii) The calibrated one-sided lower, one-sided upper, and two-sided equal-tailed CIs given in (7), (8), and (10), respectively, have O(n^{-3/2}[log(n)]³) CPE.
(iv) The asymptotic probabilities of excluding D_n = Q(p) + κn^{-1/2} from lower one-sided (l), upper one-sided (u), and equal-tailed two-sided (t) CIs (i.e., asymptotic power of the corresponding hypothesis tests) are
P_n^l(D_n) → Φ(z_α + S),   P_n^u(D_n) → Φ(z_α − S),   P_n^t(D_n) → Φ(z_{α/2} + S) + Φ(z_{α/2} − S),
where S ≡ κ f(F^{-1}(p)) / √(p(1 − p)).
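The limiting power expressions in Theorem 4(iv) are straightforward to evaluate numerically; the helper below is a direct transcription (function and argument names are ours):

```python
import math
from statistics import NormalDist

def asymptotic_power(kappa, p, f_at_quantile, alpha):
    # Theorem 4(iv): power against D_n = Q(p) + kappa * n^{-1/2}, with
    # S = kappa * f(F^{-1}(p)) / sqrt(p(1 - p)).
    N = NormalDist()
    S = kappa * f_at_quantile / math.sqrt(p * (1 - p))
    z_a, z_a2 = N.inv_cdf(alpha), N.inv_cdf(alpha / 2)
    return {"lower": N.cdf(z_a + S),
            "upper": N.cdf(z_a - S),
            "two_sided": N.cdf(z_a2 + S) + N.cdf(z_a2 - S)}
```

At κ = 0 each expression reduces to the size α, and power increases in |κ| on the appropriate side.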
The equal-tailed property of our two-sided CIs is a type of median-unbiasedness. If (L̂, Ĥ) is a CI for scalar θ, then an equal-tailed CI is “unbiased” under loss function L(θ, L̂, Ĥ) = max{0, θ − Ĥ, L̂ − θ}, as defined in (5) of Lehmann (1951). This median-unbiased property may be desirable (e.g., Andrews and Guggenberger, 2014, footnote 11), although it is different from the usual “unbiasedness” where a CI is the inversion of an unbiased test. More generally,
in (9), we could replace ul (α/2) and uh (α/2) by ul (tα) and uh ((1−t)α) for t ∈ [0, 1]. Different
t may achieve different optimal properties, which we leave to future work.
4 Quantile inference: conditional

4.1 Setup and bias
Let QY |X (u; x) be the conditional u-quantile function of scalar outcome Y given conditioning
vector X ∈ X ⊂ Rd , evaluated at X = x. The object of interest is QY |X (p; x0 ), for p ∈ (0, 1)
and interior point x0 . The sample {Yi , Xi }ni=1 is drawn iid. Without loss of generality, let
x0 = 0.
If X is discrete so that P(X = 0) > 0, we can take the subsample with X_i = 0 and compute a CI from the corresponding Y_i values, using the method in Section 3. Even with dependence like strong mixing among the X_i, CPE is the same O(n^{-1}) from Theorem 4 as long as the subsample’s Y_i are independent draws from the same Q_{Y|X}(·; 0) and N_n ≍ n almost surely.
If X is continuous, then P(X_i = 0) = 0, so observations with X_i ≠ 0 must be included.
If X contains mixed continuous and discrete components, then we can apply our method for
continuous X to each subsample corresponding to each unique value of the discrete subvector
of X. The asymptotic rates are unaffected by the presence of discrete variables (although
the finite-sample consequences may deserve more attention), so we focus on the case where
all components of X are continuous.
We now present definitions and assumptions, continuing the normalization x0 = 0.
Definition 1 (local smoothness). Following Chaudhuri (1991, pp. 762–3): if, in a neighborhood of the origin, function g(·) is continuously differentiable through order k, and its kth
derivatives are uniformly Hölder continuous with exponent γ ∈ (0, 1], then g(·) has “local
smoothness” of degree s = k + γ.
Assumption A3. Sampling of (Yi , Xi0 )0 is iid, for continuous scalar Yi and continuous vector
Xi ∈ X ⊆ Rd . The point of interest X = 0 is in the interior of X , and the quantile of interest
is p ∈ (0, 1).
Assumption A4. The marginal density of X, denoted fX (·), satisfies 0 < fX (0) < ∞ and
has local smoothness sX = kX + γX > 0.
Assumption A5. For all u in a neighborhood of p, Q_{Y|X}(u; ·) (as a function of the second argument) has local smoothness³ s_Q = k_Q + γ_Q > 0.
Assumption A6. As n → ∞, the bandwidth satisfies (i) h → 0, (i′) h^{b+d/2}√n → 0 with b ≡ min{s_Q, s_X + 1, 2}, (ii) nh^d/[log(n)]² → ∞.
Assumption A7. For all u in a neighborhood of p and all x in a neighborhood of the origin, f_{Y|X}(Q_{Y|X}(u; x); x) is uniformly bounded away from zero.
Assumption A8. For all y in a neighborhood of QY |X (p; 0) and all x in a neighborhood
of the origin, fY |X (y; x) has a second derivative in its first argument (y) that is uniformly
bounded and continuous in y, having local smoothness sY = kY + γY > 2.
Definition 2 refers to a window whose size depends on h: C_h = [−h, h] if d = 1, or more generally a hypercube as in Chaudhuri (1991, p. 763): letting ‖·‖_∞ denote the L^∞-norm,
C_h ≡ {x : x ∈ R^d, ‖x‖_∞ ≤ h},   N_n ≡ #{Y_i : X_i ∈ C_h, 1 ≤ i ≤ n}.   (11)
Definition 2 (local sample). Using Ch and Nn defined in (11), the “local sample” consists
of Yi values from observations with Xi ∈ Ch ⊂ Rd , and the “local sample size” is Nn .
Additionally, let the local quantile function QY |X (p; Ch ) be the p-quantile of Y given X ∈ Ch ,
satisfying p = P Y < QY |X (p; Ch ) | X ∈ Ch ; similarly define the local CDF FY |X (·; Ch ),
local PDF fY |X (·; Ch ), and derivatives thereof.
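The local-sample extraction of Definition 2 amounts to a sup-norm window filter; a minimal sketch (names are ours) is:

```python
def local_sample(y, x, x0, h):
    # Keep Y_i whose X_i lies in the hypercube C_h of (11), i.e. whose
    # sup-norm distance from x0 is at most h; N_n is the local sample size.
    kept = [yi for yi, xi in zip(y, x)
            if max(abs(c - c0) for c, c0 in zip(xi, x0)) <= h]
    return kept, len(kept)
```

The unconditional method of Section 3 is then applied to the returned Y_i values.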
Given fixed values of n and h, Assumption A3 implies that the Y_i in the local sample are independent and identically distributed,⁴ which is needed to apply Theorem 4. However, they do not have the quantile function of interest, Q_{Y|X}(·; 0), but rather the biased Q_{Y|X}(·; C_h). This is like drawing a global (any X_i) iid sample of wages, Y_i, and restricting
³ Our s_Q corresponds to variable p in Chaudhuri (1991); Bhattacharya and Gangopadhyay (1990) use s_Q = 2 and d = 1.
⁴ This may be the case asymptotically even with substantial dependence, although we do not explore this point. For example, Polonik and Yao (2002, p. 237) write, “Only the observations with X_t in a small neighbourhood of x are effectively used. . . [which] are not necessarily close with each other in the time space. Indeed, they could be regarded as asymptotically independent under appropriate conditions such as strong mixing. . . .”
it to observations in Japan (X ∈ C_h) when our interest is only in Tokyo (X = 0): our restricted Y_i constitute an iid sample from Japan, but the p-quantile wage in Japan may differ from that in Tokyo. Assumptions A4–A6(i) and A8 are necessary for the calculation of this bias, Q_{Y|X}(p; C_h) − Q_{Y|X}(p; 0), in Lemma 5. Assumptions A6(ii) and A7 (and A3) ensure N_n → ∞ almost surely. Assumptions A7 and A8 are conditional versions of Assumptions A2(i) and A2(ii), respectively. Their uniformity ensures uniformity of the remainder term in Theorem 4, accounting for the fact that the local sample’s distribution, F_{Y|X}(·; C_h), changes with n (through h and C_h).
From A6(i), asymptotically C_h is entirely contained within the neighborhoods implicit in A4, A5, and A8. This in turn allows us to examine only a local neighborhood around p (e.g., as in A5) since the CI endpoints converge to the true value at an N_n^{-1/2} rate.
The X_i being iid helps guarantee that N_n is almost surely of order nh^d. The h^d comes from the volume of C_h. Larger h lowers CPE via N_n but raises CPE via bias. This tradeoff determines the optimal rate at which h → 0 as n → ∞. Using Theorem 4 and additional results on CPE from bias below, we determine the optimal value of h.
Definition 3 (steps to compute CI for QY |X (p; 0)). First, Ch and Nn are calculated as
in Definition 2. Second, using the Yi from observations with Xi ∈ Ch , a p-quantile CI is
constructed as in Hutson (1999). If additional discrete conditioning variables exist, then
repeat separately for each combination of discrete conditioning values. This procedure may
be repeated for any number of x0 . For the bandwidth, we recommend the formulas in Section
4.3.
The bias characterized in Lemma 5 is the difference between these two population conditional quantiles.
Lemma 5. Define b as in A6 and let B_h ≡ Q_{Y|X}(p; C_h) − Q_{Y|X}(p; 0). If Assumptions A4, A5, A6(i), A7, and A8 hold, then the bias is of order |B_h| = O(h^b). Defining
ξ_p ≡ Q_{Y|X}(p; 0),   F_{Y|X}^{(0,1)}(ξ_p; 0) ≡ ∂F_{Y|X}(ξ_p; x)/∂x |_{x=0},   F_{Y|X}^{(0,2)}(ξ_p; 0) ≡ ∂²F_{Y|X}(ξ_p; x)/∂x² |_{x=0},
with d = 1, k_X ≥ 1, and k_Q ≥ 2, the bias is
B_h = −h² [f_X(0) F_{Y|X}^{(0,2)}(ξ_p; 0) + 2f_X′(0) F_{Y|X}^{(0,1)}(ξ_p; 0)] / [6 f_X(0) f_{Y|X}(ξ_p; 0)] + o(h²).   (12)
Equation (12) is the same as in Bhattacharya and Gangopadhyay (1990), who derive it
using different arguments.
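For d = 1, the leading term of (12) is a one-line computation once its ingredients are supplied; the inputs below are illustrative placeholders (estimated or hypothetical densities and derivatives), not estimators from the paper:

```python
def bias_leading_term(h, fX0, dfX0, fY_at_xi, F01, F02):
    # Leading h^2 term of B_h in (12):
    # -h^2 [f_X(0) F^{(0,2)} + 2 f_X'(0) F^{(0,1)}] / [6 f_X(0) f_{Y|X}(xi_p; 0)]
    return -h**2 * (fX0 * F02 + 2 * dfX0 * F01) / (6 * fX0 * fY_at_xi)
```

Note the quadratic dependence on h: halving the bandwidth cuts the leading bias by a factor of four.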
4.2 Optimal CPE order
The CPE-optimal bandwidth minimizes the sum of the two dominant high-order CPE terms. It must be small enough to control the O(h^b + N_n h^{2b}) (two-sided) CPE from bias, but large enough to control the O(N_n^{-1}) CPE from applying the unconditional L-statistic method. The following theorem summarizes optimal bandwidth and CPE results.
Theorem 6. Let Assumptions A3–A8 hold. The following results are for the method in Definition 3. For a one-sided CI, the bandwidth h* minimizing CPE has rate h* ≍ n^{-3/(2b+3d)}, corresponding to CPE of order O(n^{-2b/(2b+3d)}). For a two-sided CI, the optimal bandwidth rate is h* ≍ n^{-1/(b+d)}, and the optimal CPE is O(n^{-b/(b+d)}). Using the calibration in Section 3, if p = 1/2, then the nearly (up to log(n)) CPE-optimal two-sided bandwidth rate is h* ≍ n^{-5/(4b+5d)}, yielding CPE of order O(n^{-6b/(4b+5d)}[log(n)]³); if p ≠ 1/2, then h* ≍ n^{-3/(b+3d)} and CPE is O(n^{-3b/(2b+6d)}[log(n)]³). The nearly CPE-optimal calibrated one-sided bandwidth rate is h* ≍ n^{-2/(b+2d)}, yielding CPE of order O(n^{-3b/(2b+4d)}[log(n)]³).
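The uncalibrated rate exponents in Theorem 6 can be tabulated directly (the helper name is ours):

```python
def cpe_exponents(b, d):
    # Theorem 6, uncalibrated rates: one-sided h* ~ n^{-3/(2b+3d)} with CPE
    # O(n^{-2b/(2b+3d)}); two-sided h* ~ n^{-1/(b+d)} with CPE O(n^{-b/(b+d)}).
    return {"one_sided": {"h": -3 / (2 * b + 3 * d), "cpe": -2 * b / (2 * b + 3 * d)},
            "two_sided": {"h": -1 / (b + d),          "cpe": -b / (b + d)}}
```

With b = 2 (the cap from the uniform kernel) and d = 1, the two-sided CPE exponent is −2/3, matching the comparison discussed below.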
As detailed in the supplemental appendix, Theorem 6 implies that for the most common
values of dimension d and most plausible values of smoothness sQ , even our uncalibrated
method is more accurate than inference based on asymptotic normality with a local polynomial estimator. The same comparisons apply to basic bootstraps, which claim no refinement
over asymptotic normality; in this (quantile) case, even Studentization does not improve
theoretical CPE without the added complications of smoothed or m-out-of-n bootstraps.
The only opportunity for normality to yield smaller CPE is to greatly reduce bias by
using a very large local polynomial if sQ is large; our approach implicitly uses a uniform
kernel, so bias reduction beyond O(h2 ) is impossible. Nonetheless, our method has smaller
CPE when d = 1 or d = 2 even if sQ = ∞, and in other cases the necessary local polynomial
degree may be prohibitively large given common sample sizes.
Figure 2: Two-sided CPE comparison between new (“L-stat”) method and the local polynomial asymptotic normality method based on Chaudhuri (1991). Left: with s_Q = 2 and s_X = 1, writing CPE as n^κ, comparison of κ for different methods and different values of d. Right: required smoothness s_Q for the local polynomial normality-based CPE to match that of L-stat, as well as the corresponding number of terms in the local polynomial, for different d.
Figure 2 (left panel) shows that if s_Q = 2 and s_X = 1, then the optimal CPE from asymptotic normality is always larger (worse) than our method’s CPE. As shown in the supplement, CPE with normality is nearly O(n^{-2/(4+2d)}). With d = 1, this is O(n^{-1/3}), much larger than our two-sided O(n^{-2/3}). With d = 2, O(n^{-1/4}) is larger than our O(n^{-1/2}). It remains larger for all d since the bias is the same for both methods while the unconditional L-statistic inference is more accurate than normality.
Figure 2 (right panel) shows the required amount of smoothness and local polynomial degree for asymptotic normality to match our method’s CPE. For the most common cases of d = 1 and d = 2, two-sided CPE with normality is larger even with infinite smoothness and a hypothetical infinite-degree polynomial. With d = 3, to match our CPE, normality needs s_Q ≥ 12 and a local polynomial of degree k_Q ≥ 11. Since interaction terms are required, an 11th-degree polynomial has Σ_{T=d−1}^{k_Q+d−1} (T choose d−1) = 364 terms, which requires a large N_n (and yet larger n). As d → ∞, the required number of terms in the local polynomial only grows larger and may be prohibitive in realistic finite samples.
4.3 Plug-in bandwidth
We propose a feasible bandwidth value with the CPE-optimal rate. To avoid recursive dependence on ε (the interpolation weight), we fix its value. This does not achieve the theoretical optimum, but it remains close even in small samples and seems to work well in practice. The
CPE-optimal bandwidth value derivation is shown for d = 1 in the supplemental appendix; a
plug-in version is implemented in our code. For reference, the plug-in bandwidth expressions
are collected here. The α-quantile of N(0, 1) is again denoted z_α. We let B̂_h denote the estimator of bias term B_h; f̂_X the estimator of f_X(x_0); f̂_X′ the estimator of f_X′(x_0); F̂_{Y|X}^{(0,1)} the estimator of F_{Y|X}^{(0,1)}(ξ_p; x_0); and F̂_{Y|X}^{(0,2)} the estimator of F_{Y|X}^{(0,2)}(ξ_p; x_0), with notation from Lemma 5.
When d = 1, the following are our CPE-optimal plug-in bandwidths.
• For one-sided inference, let
ĥ_{+−} = n^{-3/7} ( z_{1−α} / { 3 [p(1 − p) f̂_X]^{1/2} [f̂_X F̂_{Y|X}^{(0,2)} + 2 f̂_X′ F̂_{Y|X}^{(0,1)}] } )^{2/7},   (13)
ĥ_{++} = −0.770 ĥ_{+−}.   (14)
For lower one-sided inference, ĥ_{+−} should be used if B̂_h < 0, and ĥ_{++} otherwise. For upper one-sided inference, ĥ_{++} should be used if B̂_h < 0, and ĥ_{+−} otherwise.
• For two-sided inference with general p ∈ (0, 1),
ĥ = n^{-1/3} ( [ (B̂_h/|B̂_h|)(1 − 2p) + √((1 − 2p)² + 4) ] / { 2 [f̂_X F̂_{Y|X}^{(0,2)} + 2 f̂_X′ F̂_{Y|X}^{(0,1)}] } )^{1/3},   (15)
which simplifies to ĥ = n^{-1/3} [f̂_X F̂_{Y|X}^{(0,2)} + 2 f̂_X′ F̂_{Y|X}^{(0,1)}]^{-1/3} with p = 0.5.
While we suggest the CPE-optimal bandwidths for moderate n, we recommend shifting toward a larger bandwidth as n → ∞. Once CPE is small over a range of bandwidths, a larger
bandwidth in that range is preferable since it yields shorter CIs. As an initial suggestion, we
use a coefficient of max{1, n/1000}^{5/60} that keeps the CPE-optimal bandwidth for n ≤ 1000
and then moves toward an n^{−1/20} under-smoothing of the MSE-optimal bandwidth rate, as in
Fan and Liu (2016, p. 205).
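The two-sided rule in (15) together with the large-n adjustment factor can be sketched as follows. This is a minimal sketch, not the implementation from our code: the inputs `fX_hat`, `fXp_hat`, `F01_hat`, `F02_hat`, and `Bh_sign` are hypothetical plug-in values standing in for the estimators defined above.

```python
import math

def h_two_sided(n, p, Bh_sign, fX_hat, fXp_hat, F01_hat, F02_hat):
    """CPE-optimal two-sided plug-in bandwidth following equation (15).
    Bh_sign is sign(B_h hat); the other arguments are plug-in estimates."""
    num = Bh_sign * (1 - 2 * p) + math.sqrt((1 - 2 * p) ** 2 + 4)
    den = 2 * (fX_hat * F02_hat + 2 * fXp_hat * F01_hat)
    return n ** (-1 / 3) * (num / den) ** (1 / 3)

def adjusted_bandwidth(h_cpe, n):
    """Shift toward a larger bandwidth for n > 1000, per the text."""
    return h_cpe * max(1, n / 1000) ** (5 / 60)

# At p = 0.5 the sign term drops out and (15) simplifies as stated in the text.
h = h_two_sided(n=400, p=0.5, Bh_sign=1,
                fX_hat=1.0, fXp_hat=0.2, F01_hat=0.5, F02_hat=0.3)
```

Note that for n ≤ 1000 the adjustment factor is exactly 1, so the CPE-optimal value is used unchanged.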
5 Empirical application
We present an application of our L-statistic inference to Engel (1857) curves. Code is
available from the latter author’s website, and the data are publicly available.
Banks, Blundell, and Lewbel (1997) argue that a linear Engel curve is sufficient for
certain categories of expenditure, while adding a quadratic term suffices for others. Their
Figure 1 shows nonparametrically estimated mean Engel curves (budget share W against
log total expenditure ln(X)) with 95% pointwise CIs at the deciles of the total expenditure
distribution, using a subsample of 1980–1982 U.K. Family Expenditure Survey (FES) data.
We present a similar examination, but for quantile Engel curves in the 2001–2012 U.K.
Living Costs and Food Surveys (Office for National Statistics and Department for Environment, Food and Rural Affairs, 2012), which is a successor to the FES. We examine the same
four categories as in the original analysis: food; fuel, light, and power (“fuel”); clothing and
footwear (“clothing”); and alcohol. We use the subsample of households with one adult male
and one adult female (and possibly children) living in London or the South East, leaving
8,528 observations. Expenditure amounts are adjusted to 2012 nominal values using annual
CPI data.5
Table 1 shows unconditional L-statistic CIs for various quantiles of the budget share distributions for the four expenditure categories. (Due to the large sample size, calibrated CIs are identical at the precision shown.) These capture some population features, but the conditional quantiles are of more interest.

5 http://www.ons.gov.uk/ons/datasets-and-tables/data-selector.html?cdid=D7BT&dataset=mm23&table-id=1.1

Table 1: L-statistic 99% CIs for various unconditional quantiles (p) of the budget share distribution, for different categories of expenditure described in the text.

Category    p = 0.5             p = 0.75            p = 0.9
food        (0.1532, 0.1580)    (0.2095, 0.2170)    (0.2724, 0.2818)
fuel        (0.0275, 0.0289)    (0.0447, 0.0470)    (0.0692, 0.0741)
clothing    (0.0135, 0.0152)    (0.0362, 0.0397)    (0.0697, 0.0761)
alcohol     (0.0194, 0.0226)    (0.0548, 0.0603)    (0.1012, 0.1111)
Figure 3 is comparable to Figure 1 of Banks et al. (1997) but with 90% joint (over the nine
expenditure levels) CIs instead of 95% pointwise CIs, alongside quadratic quantile regression
estimates. (To get joint CIs, we simply use the Bonferroni adjustment and compute 1 − α/9
pointwise CIs.) Joint CIs are more intuitive for assessing the shape of a function since they
jointly cover all corresponding points on the true curve with 90% probability, rather than
any given single point. The CIs are interpolated only for visual convenience. Although
some of the joint CI shapes do not look quadratic at first glance, the only cases where the
quadratic fit lies outside one of the intervals are for alcohol at the conditional median and
clothing at the conditional upper quartile, and neither is a radical departure. With a 90%
confidence level and 12 confidence sets, we would not be surprised if one or two did not cover
the true quantile Engel curve completely. Importantly, the CIs are relatively precise, too;
the linear fit is rejected in 8 of 12 cases. Altogether, this evidence suggests that the benefits
of a quadratic (but not linear) approximation may outweigh the cost of approximation error.
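The Bonferroni construction used for these joint CIs is simple: to cover J points jointly with probability at least 1 − α, compute each pointwise CI at level 1 − α/J. A minimal sketch (function name ours):

```python
def bonferroni_pointwise_level(alpha, n_points):
    """Pointwise confidence level such that n_points CIs, each at this level,
    jointly cover all true values with probability at least 1 - alpha."""
    return 1 - alpha / n_points

# Nine expenditure levels at 90% joint coverage, as in Figure 3:
level = bonferroni_pointwise_level(0.10, 9)  # each pointwise CI at level 1 - 0.10/9
```

Bonferroni is conservative in general, but it requires no knowledge of the dependence between the nine CIs.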
The supplemental appendix includes a similar figure but with a nonparametric (instead
of quadratic) conditional quantile estimate along with joint CIs from Fan and Liu (2016).
6 Simulation study
Code for our methods and simulations is available on the latter author’s website.
Figure 3: Joint (over the nine expenditure levels) 90% confidence intervals for quantile Engel curves: food (top left), fuel (top right), clothing (bottom left), and alcohol (bottom right). [Figure: budget share against log expenditure, with 0.50-, 0.75-, and 0.90-quantile curves and joint CIs for each category.]
6.1 Unconditional simulations
We compare two-sided unconditional CIs from the following methods: “L-stat” from Section
3, originally in Hutson (1999); “BH” from Beran and Hall (1993); “Norm” using the sample
quantile’s asymptotic normality and kernel-estimated variance; “K15” from Kaplan (2015);
and “BStsym,” a symmetric Studentized bootstrap (99 draws) with bootstrapped variance
(100 draws).6
Overall, L-stat and BH have the most accurate coverage probability (CP), avoiding undercoverage while maintaining shorter length than other methods achieving at least 95% CP.
Near the median, L-stat and BH are nearly identical. Away from the median, L-stat is closer
to equal-tailed and often shorter than BH. Farther into the tails, L-stat can be computed
where BH cannot.
Table 2: CP and median CI length, 1 − α = 0.95; n, p, and distributions of Xi (F) shown in table; 10,000 replications. "Too high" is the proportion of simulation draws in which the lower endpoint was above the true F^{-1}(p), and "too low" is the proportion when the upper endpoint was below F^{-1}(p).

n   p    F            Method   CP     Too low  Too high  Length
25  0.5  Normal       L-stat   0.953  0.022    0.025     0.99
25  0.5  Normal       BH       0.955  0.021    0.024     1.00
25  0.5  Normal       Norm     0.942  0.028    0.030     1.02
25  0.5  Normal       K15      0.971  0.014    0.015     1.19
25  0.5  Normal       BStsym   0.942  0.028    0.030     1.13
25  0.5  Uniform      L-stat   0.953  0.022    0.025     0.37
25  0.5  Uniform      BH       0.954  0.021    0.025     0.37
25  0.5  Uniform      Norm     0.908  0.046    0.046     0.35
25  0.5  Uniform      K15      0.963  0.018    0.020     0.44
25  0.5  Uniform      BStsym   0.937  0.031    0.032     0.45
25  0.5  Exponential  L-stat   0.953  0.024    0.023     0.79
25  0.5  Exponential  BH       0.954  0.024    0.022     0.80
25  0.5  Exponential  Norm     0.924  0.056    0.020     0.75
25  0.5  Exponential  K15      0.968  0.022    0.010     0.96
25  0.5  Exponential  BStsym   0.941  0.039    0.020     0.93
Table 2 shows nearly exact CP for both L-stat and BH when n = 25 and p = 0.5. "Norm" can be slightly shorter, but it under-covers. The bootstrap has only slight under-coverage, and K15 none, but their CIs are longer than L-stat's. Additional results are in the supplemental appendix, but the qualitative points are the same.

6 Other bootstraps were consistently worse in terms of coverage: (asymmetric) Studentized bootstrap, and percentile bootstrap with and without symmetry.
Table 3: CP and median CI length, as in Table 2.

n   p      F        Method   CP     Too low  Too high  Length
99  0.037  Normal   L-stat   0.951  0.023    0.026     1.02
99  0.037  Normal   BH       NA     NA       NA        NA
99  0.037  Normal   Norm     0.925  0.016    0.059     0.83
99  0.037  Normal   K15      0.970  0.009    0.021     1.55
99  0.037  Normal   BStsym   0.950  0.020    0.030     1.20
99  0.037  Cauchy   L-stat   0.950  0.022    0.028     39.37
99  0.037  Cauchy   BH       NA     NA       NA        NA
99  0.037  Cauchy   Norm     0.784  0.082    0.134     18.90
99  0.037  Cauchy   K15      0.957  0.002    0.041     36.55
99  0.037  Cauchy   BStsym   0.961  0.002    0.037     48.77
99  0.037  Uniform  L-stat   0.951  0.024    0.026     0.07
99  0.037  Uniform  BH       NA     NA       NA        NA
99  0.037  Uniform  Norm     0.990  0.000    0.010     0.12
99  0.037  Uniform  K15      0.963  0.028    0.009     0.11
99  0.037  Uniform  BStsym   0.924  0.053    0.022     0.08
Table 3 shows a case in the lower tail with n = 99 where BH cannot be computed (because
it needs the zeroth order statistic). Even then, L-stat’s CP remains almost exact, and it is
closest to equal-tailed. “Norm” under-covers for two F (severely for Cauchy) and is almost
twice as long as L-stat for the third. BStsym has less under-coverage, and K15 none, but
both are generally longer than L-stat. Again, additional results are in the supplemental
appendix, with similar patterns.
The supplemental appendix contains additional simulation results for p ≠ 0.5 but where
BH is still computable. L-stat and BH both attain 95% CP, but L-stat is much closer to
equal-tailed and is shorter. The supplemental appendix also has results illustrating the effect
of calibration.
Table 4 isolates the effects of using the beta distribution rather than the normal approximation, as well as the effects of interpolation. Method "Normal" uses the normal approximation to determine $u_h$ and $u_l$ but still interpolates, while "Norm/floor" uses the normal approximation with no interpolation as in equations (5) and (6) of Fan and Liu (2016, Ex. 2.1).
Table 4: CP and median CI length, n = 19, Yi iid N(0, 1), 1 − α = 0.90, 1,000 replications, various p. In parentheses following each CP are the probabilities of being too low and too high, as in Table 2. Methods are described in the text.

Method      CP p = 0.15          CP p = 0.25          CP p = 0.5           Length p = 0.15  p = 0.25  p = 0.5
L-stat      0.905 (0.048,0.047)  0.901 (0.050,0.049)  0.898 (0.052,0.050)  1.20             1.03      0.93
Normal      NA (NA,NA)           0.926 (0.062,0.012)  0.912 (0.045,0.043)  NA               1.22      1.00
Norm/floor  NA (NA,NA)           0.913 (0.083,0.004)  0.876 (0.087,0.037)  NA               1.47      0.91
Table 4 shows several advantages of L-stat. First, for p = 0.15, Normal and Norm/floor
cannot even be computed (hence “NA”) because they require the zeroth order statistic, which
does not exist, whereas L-stat is computable and has nearly exact CP (0.905). Second, with
p = 0.25 and p = 0.5, the normal approximation (Normal) makes the CI needlessly longer
than L-stat’s CI. Third, additionally not interpolating (Norm/floor) makes the CI even longer
for p = 0.25 but leads to under-coverage for p = 0.5. Fourth, whereas the L-stat CIs are
almost exactly equal-tailed, the normal-based CIs are far from equal-tailed at p = 0.25,
where Norm/floor is essentially a one-sided CI.
6.2 Conditional simulations
For conditional quantile inference, we compare our L-statistic method (“L-stat”) with a
variety of others. Implementation details may be seen in the supplemental appendix and
available code. The first other method (“rqss”) is from the popular quantreg package in R
(Koenker, 2012). The second (“boot”) is a local cubic method following Chaudhuri (1991)
but with bootstrapped standard errors; the bandwidth is L-stat's multiplied by n^{1/12} to get
the local cubic CPE-optimal rate. The third (“QYg”) uses the asymptotic normality of a local
linear estimator with a Gaussian kernel, using results and ideas from Qu and Yoon (2015),
although they are more concerned with uniform (over quantiles) inference; they suggest using
the MSE-optimal bandwidth (Corollary 1) and a particular type of bias correction (Remark
7). The fourth (“FLb”) is from Section 3.1 in Fan and Liu (2016), based on a symmetrized kNN estimator using a bisquare kernel; we use the code from their simulations.7 Interestingly,
although in principle they are just slightly undersmoothing the MSE-optimal bandwidth,
their bandwidth is very close to the CPE-optimal bandwidth for the sample sizes considered.
We now write x0 as the point of interest, instead of x0 = 0; we also take d = 1, b = 2,
and focus on two-sided inference, both pointwise (single x0 ) and joint (over multiple x0 ).
Joint CIs for all methods are computed using the Bonferroni approach. Uniform bands are
also examined, with L-stat, QYg, and boot relying on the adjusted critical value from the
Hotelling (1939) tube computations in plot.rqss. Each simulation has 1,000 replications
unless otherwise noted.
Figure 4 uses Model 1 from Fan and Liu (2016, p. 205): $Y_i = 2.5 + \sin(2X_i) + 2\exp(-16X_i^2) + 0.5\varepsilon_i$, with $X_i \sim N(0,1)$ iid, $\varepsilon_i \sim N(0,1)$ iid, $X_i \perp\!\!\!\perp \varepsilon_i$, n = 500, p = 0.5. The "Direct" method in their Table 1 is our FLb. All methods have good pointwise CP (top left). L-stat has the best pointwise power (top right).
Figure 4 (bottom left) shows power curves of the hypothesis tests corresponding to the
joint (over x0 ∈ {0, 0.75, 1.5}) CIs, varying H0 while maintaining the same DGP. The deviations of QY |X (p; x0 ) shown on the horizontal axis are the same at each x0 ; zero deviation
implies H0 is true, in which case the rejection probability is the type I error rate. All methods
have good type I error rates: L-stat’s is 6.2%, and other methods’ are below the nominal 5%.
L-stat has significantly better power, an advantage of 20–40% at the larger deviations. The
bottom right graph in Figure 4 is similar, but based on uniform confidence bands evaluated
at 231 different x0 . Only L-stat has nearly exact type I error rate and good power.
7 Graciously provided to us. The code differs somewhat from the description in their text, most notably by an additional factor of 0.4 in the bandwidth.
Figure 4: Results from DGP in Model 1 of Fan and Liu (2016), n = 500, p = 0.5. Top left: pointwise CP at x0 ∈ {0, 0.75, 1.5}, interpolated for visual ease. Top right: pointwise power at the same x0 against deviations of ±0.1. Bottom left: joint power curves. Bottom right: uniform power curves. [Figure: CP and rejection-probability curves for L-stat, rqss, boot, QYg, and FLb.]
Next, we use the simulation setup of the rqss vignette in Koenker (2012), which in turn came in part from Ruppert, Wand, and Carroll (2003, §17.5.1). Here, n = 400, p = 0.5, d = 1, α = 0.05, and
$$X_i \overset{iid}{\sim} \mathrm{Unif}(0,1), \qquad Y_i = \sqrt{X_i(1-X_i)}\,\sin\!\left(\frac{2\pi(1 + 2^{-7/5})}{X_i + 2^{-7/5}}\right) + \sigma(X_i)U_i, \tag{16}$$
where the $U_i$ are iid N(0, 1), $t_3$, Cauchy, or centered $\chi^2_3$, and σ(X) = 0.2 or σ(X) = 0.2(1 + X).
The conditional median function is graphed in the supplemental appendix. Although the
function as a whole is not a common shape in economics (with multiple local maxima and
minima), it provides insight into different types of functions at different points. For pointwise
and joint CIs, we consider 47 equispaced points, x0 = 0.04, 0.06, . . . , 0.96; uniform confidence
bands are evaluated at 231 equispaced values of x0 .
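A single draw from the DGP in (16) can be sketched in a few lines. This is a minimal sketch with homoskedastic Gaussian errors; the function name and seed are ours, and the other error distributions from the text would replace the `gauss` draw.

```python
import math
import random

def draw_sample_dgp16(n, sigma0=0.2, seed=0):
    """One sample of (X, Y) pairs from equation (16) with sigma(x) = sigma0
    and U ~ N(0, 1)."""
    rng = random.Random(seed)
    c = 2 ** (-7 / 5)
    xs, ys = [], []
    for _ in range(n):
        x = rng.random()  # X ~ Unif(0, 1)
        m = math.sqrt(x * (1 - x)) * math.sin(2 * math.pi * (1 + c) / (x + c))
        xs.append(x)
        ys.append(m + sigma0 * rng.gauss(0, 1))
    return xs, ys

xs, ys = draw_sample_dgp16(n=400)
```

The mean function oscillates increasingly fast as x → 0, which is why the smallest x0 values are the most challenging for all methods below.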
Figure 5’s first two columns show that across all eight DGPs (four error distributions,
homoskedastic or heteroskedastic), L-stat has consistently accurate pointwise CP. At the
most challenging points (smallest x0 ), L-stat can under-cover by around five percentage
points. Otherwise, CP is near 1 − α for all x0 in all DGPs.
In contrast, with the exception of boot, the other methods can have significant undercoverage. As seen in the first two columns of Figure 5, rqss has under-coverage (as low as
50–60% CP) for x0 closer to zero. QYg has under-coverage with the $\chi^2_3$ and (especially) Cauchy. FLb has good CP except with the Cauchy, where CP can dip below 70%.
Figure 5’s third column shows the joint power curves. The horizontal axis of the graphs
indicates the deviation of H0 from the true values. For example, letting ξp,j be the true
conditional quantiles at the j = 1, . . . , 47 values of x0 (say, xj ), −0.1 deviation refers to
H0 : {QY |X (p; xj ) = ξp,j − 0.1 for j = 1, . . . , 47} (which is false), and zero deviation means
H0 is true. Our method’s type I error rate is close to α under all four Ui distributions (5.7%,
5.8%, 7.3%, 6.3%). In contrast, other methods show size distortion under Cauchy and/or $\chi^2_3$ $U_i$; among them, boot is closest but still has 10.3% type I error rate with the $\chi^2_3$. Next-best is rqss; size distortion for FLb and QYg is more serious. L-stat also has the steepest joint
is rqss; size distortion for FLb and QYg is more serious. L-stat also has the steepest joint
power curves among all methods. Beyond steepness, they are also the most robust to the underlying distribution. L-stat's type I error rate is near 5% for all four distributions. In contrast, boot ranges from only 1.2% for the Cauchy, leading to worse power, up to 10.3% for the $\chi^2_3$.

Figure 5: Pointwise CP (first two columns) and joint power curves (third column), 1 − α = 0.95, n = 400, p = 0.5, DGP in (16). Distributions of Ui are, top row to bottom row: N(0, 1), t3, Cauchy, and centered $\chi^2_3$. Columns 1 & 3: σ(x) = 0.2; Column 2: σ(x) = (0.2)(1 + x). [Figure: CP and rejection-probability curves for L-stat, rqss, boot, QYg, and FLb.]
The supplemental appendix shows a comparison of hypothesis tests based on uniform
confidence bands. The results are similar to the joint power curves, but with slightly higher
rejection rates all around.
Figure 6: Pointwise power (described in text), 1 − α = 0.95, n = 400, p = 0.5, DGP from (16), σ(x) = 0.2. The Ui are N(0, 1) (top left), t3 (top right), Cauchy (bottom left), and centered $\chi^2_3$ (bottom right). [Figure: rejection probabilities against deviations of ±0.1 for L-stat, rqss, boot, QYg, and FLb.]
Figure 6 shows pointwise power. Specifically, for a given x0 , this is the proportion of
simulation draws in which QY |X (p; x0 ) − 0.1 is excluded from the CI, averaged with the
corresponding proportion for QY |X (p; x0 ) + 0.1. L-stat generally has the best power among
methods with correct CP (per first column of Figure 5).
The supplemental appendix contains results for p = 0.25, where L-stat continues to
perform well. One additional advantage is that L-stat’s joint test is nearly unbiased, whereas
the other joint tests are all biased.
The supplemental appendix also shows the computational advantage of our method. For
example, with n = 10^5 and 100 different x0, L-stat takes only 10 seconds, whereas the local
cubic bootstrap takes 141 seconds; rqss is even slower.
Overall, the simulation results show the new L-stat method to be fast and accurate.
Besides L-stat, the only method to avoid serious under-coverage is the local cubic with
bootstrapped standard errors, perhaps due to its reliance on our newly proposed CPE-optimal bandwidth. However, L-stat consistently has better power, greater robustness across
different conditional distributions, and less bias of its joint hypothesis tests.
7 Conclusion
We derive a uniform $O(n^{-1})$ difference between the linearly interpolated and ideal fractional order statistic distributions. We generalize this to L-statistics to help justify quantile inference procedures. In particular, this translates to $O(n^{-1})$ CPE for the quantile CIs proposed by Hutson (1999), which we improve to $O\bigl(n^{-3/2}[\log(n)]^3\bigr)$ via calibration. We extend these
results to a nonparametric conditional quantile model, with both theoretical and Monte
Carlo success. The derivation of an optimal bandwidth value (not just rate) and a fast
approximation thereof are important practical advantages.
Our results can be extended to other objects of interest, such as interquantile ranges and
two-sample quantile differences (Goldman and Kaplan, 2016b), quantile marginal effects
(Kaplan, 2014), and entire distributions (Goldman and Kaplan, 2016a).
In ongoing work, we consider the connection with Bayesian bootstrap quantile inference,
which may be a way to "relax" the iid assumption. Other future work may improve finite-sample performance, e.g., by smoothing over discrete covariates (Li and Racine, 2007).
References
Abrevaya, J. (2001). The effects of demographics and maternal behavior on the distribution
of birth outcomes. Empirical Economics 26 (1), 247–257.
Alan, S., T. F. Crossley, P. Grootendorst, and M. R. Veall (2005). Distributional effects
of ‘general population’ prescription drug programs in Canada. Canadian Journal of Economics 38 (1), 128–148.
Andrews, D. W. K. and P. Guggenberger (2014). A conditional-heteroskedasticity-robust
confidence interval for the autoregressive parameter. Review of Economics and Statistics 96 (2), 376–381.
Banks, J., R. Blundell, and A. Lewbel (1997). Quadratic Engel curves and consumer demand.
Review of Economics and Statistics 79 (4), 527–539.
Beran, R. and P. Hall (1993). Interpolated nonparametric prediction intervals and confidence
intervals. Journal of the Royal Statistical Society: Series B (Statistical Methodology) 55 (3),
643–652.
Bhattacharya, P. K. and A. K. Gangopadhyay (1990). Kernel and nearest-neighbor estimation of a conditional quantile. Annals of Statistics 18 (3), 1400–1415.
Bickel, P. J. (1967). Some contributions to the theory of order statistics. In Proceedings
of the Fifth Berkeley Symposium on Mathematical Statistics and Probability, Volume 1:
Statistics. The Regents of the University of California.
Buchinsky, M. (1994). Changes in the U.S. wage structure 1963–1987: Application of quantile
regression. Econometrica 62 (2), 405–458.
Chamberlain, G. (1994). Quantile regression, censoring, and the structure of wages. In
Advances in Econometrics: Sixth World Congress, Volume 2, pp. 171–209.
Chaudhuri, P. (1991). Nonparametric estimates of regression quantiles and their local Bahadur representation. Annals of Statistics 19 (2), 760–777.
Chen, S. X. and P. Hall (1993). Smoothed empirical likelihood confidence intervals for
quantiles. Annals of Statistics 21 (3), 1166–1181.
DasGupta, A. (2000). Best constants in Chebyshev inequalities with various applications.
Metrika 51 (3), 185–200.
David, H. A. and H. N. Nagaraja (2003). Order Statistics (3rd ed.). New York: Wiley.
Deaton, A. (1997). The analysis of household surveys: a microeconometric approach to
development policy. Baltimore: The Johns Hopkins University Press.
Donald, S. G., Y.-C. Hsu, and G. F. Barrett (2012). Incorporating covariates in the measurement of welfare and inequality: methods and applications. The Econometrics Journal 15 (1), C1–C30.
Engel, E. (1857). Die productions- und consumtionsverhältnisse des königreichs sachsen. Zeitschrift des Statistischen Bureaus des Königlich Sächsischen, Ministerium des
Inneren 8–9, 1–54.
Fan, X., I. Grama, and Q. Liu (2012). Hoeffding’s inequality for supermartingales. Stochastic
Processes and their Applications 122 (10), 3545–3559.
Fan, Y. and R. Liu (2016). A direct approach to inference in nonparametric and semiparametric quantile models. Journal of Econometrics 191 (1), 196–216.
Ferguson, T. S. (1973). A Bayesian analysis of some nonparametric problems. Annals of
Statistics 1 (2), 209–230.
Fisher, R. A. (1932). Statistical Methods for Research Workers (4th ed.). Edinburgh: Oliver
and Boyd.
Goldman, M. and D. M. Kaplan (2016a). Evenly sensitive KS-type inference on distributions.
Working paper, available at http://faculty.missouri.edu/~kaplandm.
Goldman, M. and D. M. Kaplan (2016b). Nonparametric inference on conditional quantile
differences, linear combinations, and vectors, using L-statistics. Working paper, available
at http://faculty.missouri.edu/~kaplandm.
Hall, P. and S. J. Sheather (1988). On the distribution of a Studentized quantile. Journal
of the Royal Statistical Society: Series B (Statistical Methodology) 50 (3), 381–391.
Ho, Y. H. S. and S. M. S. Lee (2005a). Calibrated interpolated confidence intervals for
population quantiles. Biometrika 92 (1), 234–241.
Ho, Y. H. S. and S. M. S. Lee (2005b). Iterated smoothed bootstrap confidence intervals for
population quantiles. Annals of Statistics 33 (1), 437–462.
Hogg, R. (1975). Estimates of percentile regression lines using salary data. Journal of the
American Statistical Association 70 (349), 56–59.
Horowitz, J. L. and S. Lee (2012). Uniform confidence bands for functions estimated nonparametrically with instrumental variables. Journal of Econometrics 168 (2), 175–188.
Hotelling, H. (1939). Tubes and spheres in n-space and a class of statistical problems.
American Journal of Mathematics 61, 440–460.
Hutson, A. D. (1999). Calculating nonparametric confidence intervals for quantiles using
fractional order statistics. Journal of Applied Statistics 26 (3), 343–353.
Jones, M. C. (2002). On fractional uniform order statistics. Statistics & Probability Letters 58 (1), 93–96.
Kaplan, D. M. (2014). Nonparametric inference on quantile marginal effects. Working paper,
available at http://faculty.missouri.edu/~kaplandm.
Kaplan, D. M. (2015). Improved quantile inference via fixed-smoothing asymptotics and
Edgeworth expansion. Journal of Econometrics 185 (1), 20–32.
Kaplan, D. M. and Y. Sun (2016). Smoothed estimating equations for instrumental variables
quantile regression. Econometric Theory XX (XX), XX–XX. Forthcoming.
Koenker, R. (2012). quantreg: Quantile Regression. R package version 4.81.
Kumaraswamy, P. (1980). A generalized probability density function for double-bounded
random processes. Journal of Hydrology 46 (1–2), 79–88.
Lehmann, E. L. (1951). A general concept of unbiasedness. Annals of Mathematical Statistics 22 (4), 587–592.
Li, Q. and J. S. Racine (2007). Nonparametric econometrics: Theory and practice. Princeton
University Press.
Manning, W., L. Blumberg, and L. Moulton (1995). The demand for alcohol: the differential
response to price. Journal of Health Economics 14 (2), 123–148.
Muir, T. (1960). A Treatise on the Theory of Determinants. Dover Publications.
Neyman, J. (1937). "Smooth test" for goodness of fit. Skandinavisk Aktuarietidskrift 20 (3–4), 149–199.
Office for National Statistics and Department for Environment, Food and Rural Affairs
(2012). Living Costs and Food Survey. 2nd Edition. Colchester, Essex: UK Data Archive.
http://dx.doi.org/10.5255/UKDA-SN-7472-2.
Pearson, K. (1933). On a method of determining whether a sample of size n supposed to have
been drawn from a parent population having a known probability integral has probably
been drawn at random. Biometrika 25, 379–410.
Peizer, D. B. and J. W. Pratt (1968). A normal approximation for binomial, F , beta, and
other common, related tail probabilities, I. Journal of the American Statistical Association 63 (324), 1416–1456.
Polansky, A. M. and W. R. Schucany (1997). Kernel smoothing to improve bootstrap confidence intervals. Journal of the Royal Statistical Society: Series B (Statistical Methodology) 59 (4), 821–838.
Polonik, W. and Q. Yao (2002). Set-indexed conditional empirical and quantile processes
based on dependent data. Journal of Multivariate Analysis 80 (2), 234–255.
Pratt, J. W. (1968). A normal approximation for binomial, F , beta, and other common,
related tail probabilities, II. Journal of the American Statistical Association 63 (324),
1457–1483.
Qu, Z. and J. Yoon (2015). Nonparametric estimation and inference on conditional quantile
processes. Journal of Econometrics 185 (1), 1–19.
Rényi, A. (1953). On the theory of order statistics. Acta Mathematica Hungarica 4 (3),
191–231.
Robbins, H. (1955). A remark on Stirling’s formula. The American Mathematical
Monthly 62 (1), 26–29.
Ruppert, D., M. P. Wand, and R. J. Carroll (2003). Semiparametric Regression. Cambridge
Series in Statistical and Probabilistic Mathematics. Cambridge University Press.
Shorack, G. R. (1972). Convergence of quantile and spacings processes with applications.
Annals of Mathematical Statistics 43 (5), 1400–1411.
Shorack, G. R. and J. A. Wellner (1986). Empirical Processes with Applications to Statistics.
New York: John Wiley & Sons.
Stigler, S. M. (1977). Fractional order statistics, with applications. Journal of the American
Statistical Association 72 (359), 544–550.
Thompson, W. R. (1936). On confidence ranges for the median and other expectation distributions for populations of unknown distribution form. Annals of Mathematical Statistics 7 (3), 122–128.
Wilks, S. S. (1962). Mathematical Statistics. New York: Wiley.
A Proof sketches and additional lemmas
The following are only sketches of proofs. The full proofs, with additional intermediate steps
and explanations, may be found in the supplemental appendix.
Sketch of proof of Proposition 1

For any u, let $k = \lfloor (n+1)u \rfloor$ and $\epsilon = (n+1)u - k \in [0,1)$. If $\epsilon = 0$, then the objects $\tilde Q^I_X(u)$, $\hat Q^L_X(u)$, and $F^{-1}\bigl(\tilde Q^L_U(u)\bigr)$ are identical and equal to $X_{n:k}$. Otherwise, each lies in between $X_{n:k}$ and $X_{n:k+1}$ due to monotonicity of the quantile function and $k/(n+1) \le u < (k+1)/(n+1)$:
$$X_{n:k} = \tilde Q^I_X(k/(n+1)) \le \tilde Q^I_X(u) \le \tilde Q^I_X((k+1)/(n+1)) = X_{n:k+1},$$
$$X_{n:k} \le \hat Q^L_X(u) = (1-\epsilon)X_{n:k} + \epsilon X_{n:k+1} \le X_{n:k+1},$$
$$X_{n:k} = F^{-1}\bigl(\tilde Q^L_U(k/(n+1))\bigr) \le F^{-1}\bigl(\tilde Q^L_U(u)\bigr) \le F^{-1}\bigl(\tilde Q^L_U((k+1)/(n+1))\bigr) = X_{n:k+1}.$$
Thus, differences between the processes can be bounded by the maximum (over k) spacing $X_{n:k+1} - X_{n:k}$. Using the assumption that the density is uniformly bounded away from zero over the interval of interest (and applying a maximal inequality from Bickel (1967, eqn. (3.7))), this in turn can be bounded by a maximum of uniform order statistic spacings $U_{n:k+1} - U_{n:k}$. The marginal distribution $(U_{n:k+1} - U_{n:k}) \sim \beta(1, n)$ can then be used to bound the probability as needed.
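The linearly interpolated estimator $\hat Q^L_X(u)$ used above can be sketched in a few lines; a minimal sketch (function name ours), assuming $1 \le k \le n-1$ whenever $\epsilon > 0$ so that both order statistics exist:

```python
def interp_quantile(sample, u):
    """Linearly interpolated fractional order statistic estimate Q^L_X(u):
    with k = floor((n+1)u) and eps = (n+1)u - k, returns
    (1 - eps) * X_{n:k} + eps * X_{n:k+1}."""
    xs = sorted(sample)
    n = len(xs)
    g = (n + 1) * u
    k = int(g)        # k = floor((n+1)u)
    eps = g - k       # interpolation weight in [0, 1)
    if eps == 0:
        return xs[k - 1]          # X_{n:k} (1-indexed order statistic)
    return (1 - eps) * xs[k - 1] + eps * xs[k]

# Example: with n = 9 and u = 0.25, (n+1)u = 2.5, so the estimate lies
# halfway between the 2nd and 3rd order statistics.
q = interp_quantile(range(1, 10), 0.25)  # 2.5
```

When $(n+1)u$ is an integer, no interpolation occurs and a single order statistic is returned, matching the $\epsilon = 0$ case in the proof sketch.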
Lemma for PDF approximation
P
Lemma 7. Let ∆k be a positive (J + 1)-vector of natural numbers such that J+1
j=1 ∆kj =
Pj
n + 1, minj {∆kj } → ∞, and minj {n − ∆kj } → ∞, and define kj ≡ i=1 ∆ki and k ≡
(k1 , . . . , kJ )0 . Let X ≡ (X1 , . . . , XJ )0 be the random J-vector such that
∆X ≡ (X1 , X2 − X1 , . . . , 1 − XJ )0 ∼ Dirichlet(∆k).
Take any sequence an that satisfies conditions a) an → ∞, b) an n−1 [max{∆kj }]1/2 → 0, and
c) a3n [min{∆kj }]−1/2 → 0. Define Condition ?(an ) as satisfied by vector x if and only if
n
o
−1/2
max n∆kj |∆xj − ∆kj /n| ≤ an .
Condition ?(an )
j
Let kvk∞ ≡ maxj∈{1,...,k} |vj | denote the maximum norm of vector v = (v1 , . . . , vk )0 .
(i) Condition ?(an ) implies
n
o
−1/2
max n∆kj |∆xj − ∆kj /(n + 1)| = O(an ),
j
n
o
−1/2
max n∆kj |∆xj − (∆kj − 1)/(n − J)| = O(an ),
j
(17)
(18)
where ∆kj /(n + 1) and (∆kj − 1)/(n − J) are respectively the mean and mode of ∆Xj .
(ii) At any point of evaluation ∆x satisfying Condition ⋆(an), the log Dirichlet PDF of ∆X may be uniformly approximated as
log f∆X(∆x) = D − [(n − J)²/2] Σ_{j=1}^{J+1} [∆xj − (∆kj − 1)/(n − J)]²/(∆kj − 1) + Rn,
D ≡ (J/2) log(n/2π) + (1/2) Σ_{j=1}^{J+1} log[n/(∆kj − 1)],
and Rn = O(an³‖∆k^{−1/2}‖∞) uniformly (over ∆x). We also have the uniform (over ∆x) approximations
∂ log[f∆X(∆x)]/∂∆xj = (n − J) − [(n − J)²/(∆kj − 1)][∆xj − (∆kj − 1)/(n − J)] + O(an² n‖∆k⁻¹‖∞),
∂ log[f∆X(∆x)]/∂(∆kj/n) = −∂ log[f∆X(∆x)]/∂∆xj + O(an² n‖∆k⁻¹‖∞).
(iii) Uniformly over all x ∈ R^J satisfying Condition ⋆(an),
log[fX(x)] = D − (1/2)(x − k/(n + 1))′H(x − k/(n + 1)) + O(an³‖∆k^{−1/2}‖∞),
∂ log[fX(x)]/∂x = −H(x − k/(n + 1)) + O(an² n‖∆k⁻¹‖∞),
∂ log[fX(x)]/∂(k/(n + 1)) = H(x − k/(n + 1)) + O(an² n‖∆k⁻¹‖∞),
where the constant D is the same as in part (ii), and the J × J matrix H has nonzero elements only on the diagonal, H_{j,j} = n²(∆kj⁻¹ + ∆k_{j+1}⁻¹), and one off the diagonal, H_{j,j+1} = H_{j+1,j} = −n²∆k_{j+1}⁻¹. The covariance matrix for x, V/n ≡ H⁻¹, has row i, column j elements
V_{i,j} = min(ki, kj)(n + 1 − max(ki, kj))/[n(n + 1)],        (19)
connecting the above with the conventional asymptotic normality results for sample quantiles. That is,
fX(x) = φ_{V/n}(x − k/(n + 1))[1 + O(an³‖∆k^{−1/2}‖∞)],        (20)
∂fX(x)/∂x = (∂/∂x)φ_{V/n}(x − k/(n + 1)) + O(an⁴ n^{J/2} n‖∆k⁻¹‖∞).        (21)
(iv) For the Dirichlet-distributed ∆X, Condition ⋆(an) is violated with only exponentially decaying (in n) probability: 1 − P(⋆(an)) = O(an⁻¹ e^{−an²/2}).
(v) If instead there are asymptotically fixed components of the parameter vector, the largest of which is ∆kj = M < ∞, then with M = 1, 1 − P(⋆(an)) ≤ e^{−an−1}. With M ≥ 2, for any η > 0, 1 − P(⋆(an)) = o(an⁻¹ exp{−an√M(1/2 − η)}).
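Parts (iii) and (iv) can be sanity-checked by simulation. The sketch below (an illustration under arbitrarily chosen n, J, and k — not code from the paper) draws Dirichlet spacings, verifies that Condition ⋆(an) with an = 2 log n is essentially never violated, and compares the empirical covariance of X with formula (19) for V/n:

```python
import numpy as np

rng = np.random.default_rng(1)
n, J = 400, 2
k = np.array([100, 250])                          # k_1 < k_2, illustrative
dk = np.diff(np.concatenate([[0], k, [n + 1]]))   # Delta k, sums to n + 1

reps = 40_000
dX = rng.dirichlet(dk, size=reps)                 # spacings Delta X
X = np.cumsum(dX, axis=1)[:, :J]                  # X_j = partial sums of spacings

# Condition *(a_n): max_j n dk_j^{-1/2} |dX_j - dk_j/n| <= a_n, with a_n = 2 log n
an = 2 * np.log(n)
stat = np.max(n * np.abs(dX - dk / n) / np.sqrt(dk), axis=1)
print("violation rate:", np.mean(stat > an))      # essentially zero

# Covariance formula (19): V_ij = min(k_i,k_j)(n+1-max(k_i,k_j))/[n(n+1)], Cov(X) = V/n
K1, K2 = np.meshgrid(k, k, indexing="ij")
V_over_n = np.minimum(K1, K2) * (n + 1 - np.maximum(K1, K2)) / (n * (n + 1)) / n
print(np.round(np.cov(X.T) / V_over_n, 2))        # elementwise ratios near 1
```

The ratios differ from 1 by O(1/n) terms, consistent with the remainder orders in the lemma.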
Sketch of proof of Lemma 7
The proof of part (i) uses the triangle inequality and the fact that the mean and mode differ
from ∆kj /n by O(1/n).
For part (ii), since ∆X ∼ Dirichlet(∆k), for any ∆x that sums to one,
log(f∆X(∆x)) = log(Γ(n + 1)) + Σ_{j=1}^{J+1} [(∆kj − 1) log(∆xj) − log(Γ(∆kj))].        (22)
Applying Stirling-type bounds in Robbins (1955) to the gamma functions,
log(f∆X(∆x)) = (J/2) log(n/(2π)) + (1/2) Σ_{j=1}^{J+1} log[n/(∆kj − 1)] + Σ_{j=1}^{J+1} (∆kj − 1) log[n∆xj/(∆kj − 1)] − J + O(‖∆k⁻¹‖∞),        (23)
where the first two terms form the constant D from the statement of the lemma and the remaining ∆x-dependent terms define h(∆x).
We then expand h(·) around the Dirichlet mode, ∆x0. The cross partials are zero, the first derivative terms sum to zero, and the fourth derivative is smaller-order uniformly over ∆x satisfying Condition ⋆(an):
h(∆x) = h(∆x0) + Σ_{j=1}^{J+1} hj(∆x0)(∆xj − ∆x0j) + (1/2) Σ_{j=1}^{J+1} hj,j(∆x0)(∆xj − ∆x0j)²
+ (1/6) Σ_{j=1}^{J+1} hj,j,j(∆x0)(∆xj − ∆x0j)³ + (1/24) Σ_{j=1}^{J+1} hj,j,j,j(∆x̃)(∆xj − ∆x0j)⁴,        (24)
where the first-derivative term is R1n = O(n⁻¹), the third-derivative term is R2n = O(an³‖∆k^{−1/2}‖∞), the fourth-derivative term is R3n = O(an⁴‖∆k⁻¹‖∞), and the quadratic term expands to the form in the statement of the lemma.
The derivative with respect to ∆x is computed by expanding hj(∆x) = (∆kj − 1)∆xj⁻¹ around the mode, and then simplifying with Condition ⋆(an) and the fact that Σ_{j=1}^{J+1} (∆xj − ∆x0,j) = 1 − 1 = 0. The derivative with respect to ∆kj/n is computed from an expansion of h(∆x) around the mode, reusing many results from the original computation of the PDF.
For part (iii), the results are intuitive given part (ii), so we defer to the supplemental
appendix. It is helpful that the transformation from the values Xj to spacings ∆Xj is
unimodular.
For part (iv), we use Boole’s inequality along with the beta tail probability bounds from
DasGupta (2000) and our beta PDF approximation from Lemma 7(ii).
For part (v), since ∆kj = M < ∞ is a fixed natural number, we can write ∆Xj = Σ_{i=1}^{M} δi, where each δi is a spacing between consecutive uniform order statistics. The marginal distribution of each δi is β(1, n). Using the corresponding CDF formula and Boole's inequality
leads to the result. The other bound may be derived using equation (6) in Inequality 11.1.1
in Shorack and Wellner (1986, p. 440), as seen in the supplemental appendix, but for our
quantile inference application we only use the first result since it is better for M = 1.
For violations of Condition ⋆(an) in the other direction, the probability is zero for large enough n since P(∆Xj < 0) = 0 and M^{1/2} − an < 0 for large enough n.
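For M = 1 the claimed bound can be checked directly, since a single uniform spacing has survival function P(δ > t) = (1 − t)^n. The following sketch reflects our reading of Condition ⋆ with ∆kj = 1 (n and an are illustrative) and confirms the e^{−an−1} bound:

```python
import numpy as np

n = 10_000
an = 2 * np.log(n)                 # the choice of a_n used with Lemma 8

# With Delta k_j = 1, Condition *(a_n) fails iff the spacing exceeds (1 + a_n)/n.
# A single uniform spacing is Beta(1, n): P(spacing > t) = (1 - t)^n.
t = (1 + an) / n
exact = (1 - t) ** n
bound = np.exp(-an - 1)            # the e^{-a_n - 1} bound from part (v)
print(exact <= bound, exact, bound)
```

The inequality follows from log(1 − x) ≤ −x, which is exactly what the code exhibits numerically.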
Lemma for proving Theorem 2

First, we introduce notation. From earlier, u0 ≡ 0 and uJ+1 ≡ 1. For all j, kj ≡ ⌊(n + 1)uj⌋ and εj ≡ (n + 1)uj − kj. Let ∆k denote the (J + 1)-vector such that ∆kj = kj − kj−1, let ψ = (ψ1, . . . , ψJ)′ be the fixed weight vector from (4), and
Yj ≡ Un:kj ∼ β(kj, n + 1 − kj),        ∆Y ≡ (Y1, Y2 − Y1, . . . , 1 − YJ) ∼ Dirichlet(∆k),
Λj ≡ Un:kj+1 − Un:kj ∼ β(1, n),        Vj ≡ √n[F⁻¹(Yj) − F⁻¹(uj)],
Zj ≡ √n(Yj − uj),
X ≡ Σ_{j=1}^{J} ψj F⁻¹(Yj),        X0 ≡ Σ_{j=1}^{J} ψj F⁻¹(uj),        (25)
W ≡ √n(X − X0) = ψ′V,
Wε,Λ ≡ W + n^{1/2} Σ_{j=1}^{J} εj ψj Λj [Q′(uj) + Q″(uj)(Yj − uj)],
where the preceding variables are all understood to vary with n.
Let φΣ(·) be the PDF of a mean-zero multivariate normal distribution with covariance Σ.
Lemma 8. Let Assumption A2 hold at ū, and let each element of Y and Λ satisfy Condition ⋆(an) (as defined in Lemma 7) with an = 2 log(n). The following results hold uniformly over any u = ū + o(1).
(i) Let C be a J-vector of random interpolation coefficients as defined in Jones (2002): each Cj ∼ β(εj, 1 − εj), and they are mutually independent and independent of all other random variables. Then,
n^{1/2}(L^L − X0) − Wε,Λ = O(n^{−3/2}[log(n)]³),        (26)
n^{1/2}(L^I − X0) − W_{C,Λ} = O(n^{−3/2}[log(n)]³).
(ii) Define V as the J × J matrix with row i, column j elements V_{i,j} = min{ui, uj}(1 − max{ui, uj}), and define A = diag{f(F⁻¹(u))}, i.e., A_{i,j} = f(F⁻¹(ui)) if i = j and zero if i ≠ j. Define Vψ ≡ ψ′A⁻¹VA⁻¹ψ ∈ R. For any realization λ of Λ = (Λ1, . . . , ΛJ) satisfying Condition ⋆(2 log(n)),
sup_{w: ⋆(2 log(n)) holds} | f_{Wε,Λ|Λ}(w | λ)/φ_{Vψ}(w) − 1 | = O(n^{−1/2}[log(n)]³),
sup_{w: ⋆(2 log(n)) holds} | ∂f_{Wε,Λ|Λ}(w | λ)/∂w − ∂φ_{Vψ}(w)/∂w | = O(n^{−1/2}[log(n)]^{3+J}),
where the notation φ_{Vψ}(·) denotes the PDF of a normal random variable with mean zero and variance Vψ. For any value ε̃ ∈ [0, 1)^J, uniformly over K satisfying Condition ⋆(an),
∂²F_{Wε,Λ|Λ}(K | λ)/∂εj² |_{ε=ε̃} = [∂φ_{Vψ}(w)/∂w]|_{w=K} nψj²[Q′(uj)]²λj² + O(n^{−3/2}[log(n)]^{5+J}).
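For the J = 1, ψ = 1 case relevant to the quantile application, the normal approximation f_W ≈ φ_{Vψ} in part (ii) can be visualized by simulation. A hedged sketch (standard normal F; n and u chosen arbitrarily; uses SciPy):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
n, u = 1000, 0.30
k = int(np.floor((n + 1) * u))

# J = 1, psi = 1: Y = U_{n:k} ~ Beta(k, n+1-k), W = sqrt(n)[F^{-1}(Y) - F^{-1}(u)]
Y = rng.beta(k, n + 1 - k, size=200_000)
W = np.sqrt(n) * (stats.norm.ppf(Y) - stats.norm.ppf(u))

# V_psi = psi' A^{-1} V A^{-1} psi reduces to u(1-u)/f(F^{-1}(u))^2 for scalar u
Vpsi = u * (1 - u) / stats.norm.pdf(stats.norm.ppf(u)) ** 2
print(round(W.std(), 3), round(np.sqrt(Vpsi), 3))   # close for large n
```

The simulated standard deviation of W matches √Vψ up to the O(n^{−1/2}) terms the lemma quantifies.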
Sketch of proof of Lemma 8

For part (i), with a Taylor expansion, the object L^L may be rewritten as
L^L = X0 + n^{−1/2} Wε,Λ + Σ_{j=1}^{J} ψj εj (ν^L_{j,1} + ν^L_{j,2}),
ν^L_{j,1} ≡ [Q‴(ũj)/2][Yj − uj]²Λj,        ν^L_{j,2} ≡ [Q″(ỹj)/2]Λj²,        (27)
where, for all j, ỹj ∈ (Yj, Yj + Λj) and ũj is between uj and Yj. The remainder is O(n^{−3/2}[log(n)]³) by applying Condition ⋆(an), noting that A2 uniformly bounds the quantile function derivatives for large enough n under Condition ⋆(an). The argument for L^I is essentially the same.
For part (ii), since Λ contains finite spacings, we cannot apply Lemma 7(iii). Instead,
we use the result from 8.7.5 in Wilks (1962, p. 238),
(Λ1 , . . . , ΛJ , 1 − Λ1 − · · · − ΛJ ) ∼ Dirichlet(1, . . . , 1, n + 1 − J).
Directly approximating the corresponding PDF yields
log fΛ(λ) = J log(n) − n Σ_{j=1}^{J} λj + O(n⁻¹ log(n)).        (28)
For the joint density of {Y, Λ}, define T ≡ (∆Y1, Λ1, ∆Y2 − Λ1, . . . , ΛJ, ∆YJ+1 − ΛJ)′ = T(Y′, Λ′)′, where det(T) = 1 can be shown. Now
T ∼ Dirichlet(∆k1, 1, ∆k2 − 1, . . . , 1, ∆kJ+1 − 1).
Using the formula for the PDF of a transformed vector and plugging in the Dirichlet PDF formula for T yields the joint log PDF of Y and Λ. Combining this with the marginal log PDF of Λ in (28) yields the log conditional PDF.
For the PDF of W, we can use the formula for the PDF of a transformed random vector and then expand around the uj. The transformation from Z ≡ Q(Y) to V ≡ √n[Q(Y) − Q(u)] (from (25)) is straightforward centering and √n-scaling. The last transformation is from V to W = ψ′V, as defined in (25). For the special case J = 1, as in our quantile inference application, this step is trivial since W = V. Altogether, up to this point,
f_{W|Λ}(w | λ) = φ_{Vψ}(w)[1 + O(n^{−1/2}[log(n)]³)],        (29)
and it remains to account for the difference between W and Wε,Λ.
Altogether, it can be shown that the PDF of W conditional on Condition ⋆(an) and Λ is
f_{W|⋆(an),Λ}(w | λ) = ∫···∫_{⋆(an)} f_{V|⋆(an),Λ}(v1, . . . , vJ−1, [w − ψ1v1 − ··· − ψ_{J−1}v_{J−1}]/ψJ | λ) dv1 ··· dv_{J−1}.
To transition to Wε,Λ, define η = √n Σ_{j=1}^{J} εj ψj Λj [Q′(uj) + Q″(uj)(Yj − uj)], so Wε,Λ = W + η. Conditional on W = w, Y1 = y1, . . . , YJ−1 = yJ−1, the value of YJ is fully determined. Additionally conditioning on Λ = λ, the value of η is fully determined. Along with the implicit function theorem, this can be used to derive the final normal approximation.
Results for the PDF derivative follow the same sequence of transformations; details are
left to the supplemental appendix.
For the last result in this part of the lemma, in addition to Condition ⋆(2 log(n)) and A2, we use the law of iterated expectations for CDFs.
Sketch of proof of Theorem 2
For part (i), we start by restricting attention to cases where the largest of the J spacings between relevant uniform order statistics, U_{n:⌊(n+1)uj⌋+1} − U_{n:⌊(n+1)uj⌋}, and the largest difference between the U_{n:⌊(n+1)uj⌋} and uj satisfy Condition ⋆(2 log(n)) as in Lemma 7. By Lemma 7(iv,v), the error from this restriction is smaller-order. We then use the representation of ideal uniform fractional order statistics from Jones (2002), which is equal in distribution to the linearly interpolated form but with random interpolation weights Cj ∼ β(εj, 1 − εj) instead of fixed εj, where each Cj is independent of every other random variable we have. The leading term in the error is due to Var(Cj), and by plugging in other calculations from Lemma 8, we see that it is uniformly O(n⁻¹) and can be calculated analytically.
For part (ii), the first result comes from the FOC
0 = ∂/∂K [ (K exp{−K²/(2Vψ)}/√(2πVψ³)) n⁻¹ Σ_{j=1}^{J} ψj² εj(1 − εj)/[f(F⁻¹(uj))]² ],
whose solution K = √Vψ is plugged into the expression in Theorem 2(i).
The additional result for L^B in part (ii) follows from the Dirichlet PDF approximation in Lemma 8(ii).
Sketch of proof of Lemma 3
The results are based on the Cornish–Fisher-type expansion from Pratt (1968) and Peizer
and Pratt (1968), solving for the high-order constants.
Sketch of proof of Theorem 4

For CP, let Rn = O(n^{−3/2}[log(n)]³) be the remainder from Theorem 2(i). For a lower one-sided CI, J = 1 and
u^h(α) = p + O(n^{−1/2}),        ε^h = (n + 1)u^h(α) − ⌊(n + 1)u^h(α)⌋,
Vψ = u^h(α)(1 − u^h(α))/f(F⁻¹(u^h(α)))²,        X0 = F⁻¹(u^h(α)),
K = n^{1/2}[F⁻¹(p) − F⁻¹(u^h(α))] = −z_{1−α}√(u^h(α)(1 − u^h(α)))/f(F⁻¹(u^h(α))) + O(n^{−1/2})
  = −z_{1−α}√Vψ + O(n^{−1/2}),
where the first and last lines use Lemma 3, and the last line uses Assumption A2. Then, the rate of coverage probability error is
P(Q̂LX(u^h(α)) < Q(p)) = P(Q̂LX(u^h(α)) < X0 + n^{−1/2}K)
  = P(Q̂IX(u^h(α)) < X0 + n^{−1/2}K) + n⁻¹ (K exp{−K²/(2Vψ)}/√(2πVψ³)) ε^h(1 − ε^h)/[f(F⁻¹(u^h(α)))]² + Rn
  = α − n⁻¹ z_{1−α} [ε^h(1 − ε^h)/(p(1 − p))] φ(z_{1−α}) + O(n^{−3/2}) + Rn,        (30)
where f(F⁻¹(u^h(α))) is uniformly (for large enough n) bounded away from zero by A2 since u^h(α) = p + O(n^{−1/2}) → p. The argument for the lower endpoint is similar.
Two-sided CP comes directly from the two one-sided results, replacing α with α/2. For
the yet-higher-order calibration, the results follow from plugging in the proposed α̃.
The results for power are derived using the normal approximation Q̃BX(u) of Q̂LX(u), along with a first-order Taylor approximation and arguments that the remainder terms are negligible.
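For intuition about these coverage calculations (though not the paper's calibrated u^h(α)), one can simulate the classical order-statistic upper confidence bound, whose CP is governed by the same binomial/beta computations; the interpolation studied here removes most of the discreteness gap visible below:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
n, p, alpha = 99, 0.5, 0.05
q_true = stats.norm.ppf(p)

# Upper bound X_{(r)}: coverage P(X_{(r)} >= Q(p)) = P(Bin(n, p) <= r - 1),
# so take r as the smallest integer making that probability >= 1 - alpha.
r = int(stats.binom.ppf(1 - alpha, n, p)) + 1

reps = 20_000
x = np.sort(rng.standard_normal((reps, n)), axis=1)
coverage = np.mean(x[:, r - 1] >= q_true)
print(round(coverage, 3), 1 - alpha)   # at least 0.95, above it due to discreteness
```

The overshoot above 1 − α is exactly the CPE that interpolated (fractional) order statistics shrink to O(n⁻¹).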
Sketch of proof of Lemma 5
Since the result is similar to other kernel bias results, and since the special case of d = 1 and
b = 2 is already given in Bhattacharya and Gangopadhyay (1990), we leave the proof to the
supplemental appendix and provide only a very brief sketch here. The approach is to start
from the definitions of Q_{Y|X}(p; Ch) and Q_{Y|X}(p; x),
p = ∫_{Ch} { ∫_{−∞}^{Q_{Y|X}(p;Ch)} f_{Y|X}(y; x) dy } f_{X|Ch}(x) dx,
p = ∫_{−∞}^{Q_{Y|X}(p;x)} f_{Y|X}(y; x) dy,
so
0 = ∫_{Ch} { ∫_{Q_{Y|X}(p;x)}^{Q_{Y|X}(p;Ch)} f_{Y|X}(y; x) dy } f_{X|Ch}(x) dx.
After a change of variables to w = x/h, an expansion around w = 0 is taken, and the bias
can be isolated. If b = 2, kQ ≥ 2, and kX ≥ 1, then a second-order expansion is justified;
otherwise, the smoothness determines the order of both the expansion and the remainder.
Sketch of proof of Theorem 6
As in Chaudhuri (1991), we consider a deterministic bandwidth sequence, leaving treatment of a random (data-dependent) bandwidth to future work. Whereas n is a deterministic sequence, Nn is random, but Nn ≍ nh^d a.s. as shown in Chaudhuri (1991). Another difference with the unconditional case is that the local sample's distribution, F_{Y|X}(·; Ch), changes with n (through h). The uniformity of the remainder term in Theorem 4 relies on the properties of the PDF in Assumption A2. In the conditional case, we show that these properties hold uniformly over the PDFs f_{Y|X}(·; Ch) as h → 0, for which we rely on A4, A5, A7, and A8.
In the lower one-sided case, let Q̂L_{Y|Ch}(uh) be the Hutson (1999) upper endpoint, with notation analogous to Section 2, with uh = uh(α). The CP of the lower one-sided CI is
P(Q_{Y|X}(p; 0) < Q̂L_{Y|Ch}(uh)) = 1 − α + CPE_U + CPE_Bias,        (31)
where CPE_U is CPE due to the unconditional method and CPE_Bias comes from the bias:
CPE_U ≡ P(Q_{Y|X}(p; Ch) < Q̂L_{Y|Ch}(uh)) − (1 − α) = O(Nn⁻¹),
CPE_Bias ≡ P(Q_{Y|X}(p; 0) < Q̂L_{Y|Ch}(uh)) − P(Q_{Y|X}(p; Ch) < Q̂L_{Y|Ch}(uh)).
Using Lemmas 5 and 8, or alternatively Theorem 2, one can show CPE_Bias = O(Nn^{1/2} h^b). Then, one can solve for the h that equates the orders of CPE_U and CPE_Bias, i.e., so that Nn⁻¹ ≍ Nn^{1/2} h^b, using Nn ≍ nh^d.
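The bandwidth-rate algebra in this last step is mechanical; a small sketch with illustrative orders b = 2 (second-order bias) and d = 1 (scalar conditioning variable), which are assumptions for this example only:

```python
from fractions import Fraction

b, d = 2, 1   # illustrative: bias of order h^b, covariate dimension d

# N ~ n h^d; equating CPE_U = O(N^{-1}) with CPE_Bias = O(N^{1/2} h^b)
# gives N^{-3/2} = h^b, i.e. (n h^d)^{-3/2} = h^b, so h = n^{-3/(2b+3d)}.
h_exp = Fraction(-3, 2 * b + 3 * d)
print("h ~ n^(%s)" % h_exp)          # h ~ n^(-3/7)

# Resulting CPE order: N^{-1} = (n h^d)^{-1} = n^{-2b/(2b+3d)}
cpe_exp = Fraction(-2 * b, 2 * b + 3 * d)
print("CPE ~ n^(%s)" % cpe_exp)      # CPE ~ n^(-4/7)
```

Plugging in other (b, d) pairs reproduces the usual trade-off: higher-order bias or lower dimension permits a larger local sample and faster CPE decay.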
With two-sided inference, the lower and upper endpoints have opposite bias effects. For
the median, the dominant terms of these effects cancel completely. For other quantiles,
there is a partial, order-reducing cancellation. The calculations, which use Theorem 2, are
extensive and thus left to the supplemental appendix. Ultimately, it can be shown that two-sided CP is 1 − α plus terms of O(Nn⁻¹), O(Bh), and O(Bh² Nn), in addition to smaller-order
remainders. With the new CPE terms, one can again solve for the h that sets the orders
equal.
arXiv:1607.04470v2 [math.OA] 22 Jun 2017
TRACIAL STABILITY FOR C ∗ -ALGEBRAS
DON HADWIN AND TATIANA SHULMAN
Abstract. We consider tracial stability, which requires that tuples of elements of a C*-algebra with a trace that nearly satisfy a relation are close to
tuples that actually satisfy the relation. Here both "near" and "close" are
in terms of the associated 2-norm from the trace, e.g. the Hilbert-Schmidt
norm for matrices. Precise definitions are stated in terms of liftings from tracial ultraproducts of C*-algebras. We completely characterize matricial tracial
stability for nuclear C*-algebras in terms of certain approximation properties
for traces. For non-nuclear C ∗ -algebras we find new obstructions for stability
by relating it to Voiculescu’s free entropy dimension. We show that the class
of C*-algebras that are stable with respect to tracial norms on real-rank-zero
C*-algebras is closed under tensoring with commutative C*-algebras. We show
that C(X) is tracially stable with respect to tracial norms on all C ∗ -algebras
if and only if X is approximately path-connected.
Contents
Introduction
1. Preliminaries
2. Tracial stability
3. Matricial tracial stability
3.1. Embeddable traces
3.2. Matricial tracial stability – necessary conditions
3.3. Matricial tracial stability for tracially nuclear C*-algebras
4. Matricial tracial stability and free entropy dimension
5. C ∗ -tracial stability
References
1
5
6
14
14
16
19
21
25
32
Introduction
The notion of stability is an old one. For a given equation p(x1, . . . , xn) = 0 of noncommutative variables x1, . . . , xn one can ask if it is "stable", meaning that for any ε > 0 there is a δ > 0 such that if B is a C*-algebra with b1, . . . , bn ∈ B and ‖p(b1, . . . , bn)‖ < δ, then there exist c1, . . . , cn ∈ B such that p(c1, . . . , cn) = 0 and ‖ck − bk‖ < ε for 1 ≤ k ≤ n.
In other words, if some tuple is close to satisfying the equation, it is near to
something that does satisfy the equation.
2000 Mathematics Subject Classification. Primary 46Lxx; Secondary 20Fxx.
Key words and phrases. tracial ultraproduct, tracially stable, tracial norms, almost commuting
matrices.
"Stability under small perturbations" questions depend very much on the norm
we consider and the class of C*-algebras B we allow.
A folklore "stability" result is related to projections. If x = x* and ‖x − x²‖ < ε < 1/4, with the norm being the usual operator norm, then there is a projection p ∈ C*(x) with ‖p − x‖ < √ε. There are easily proved similar results for isometries (1 − x*x = 0) and unitaries ((1 − x*x)² + (1 − xx*)² = 0) using the polar decomposition.
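The projection statement is easy to test numerically: build a self-adjoint matrix with spectrum near {0, 1} and take the spectral projection onto eigenvalues above 1/2 (a sketch with our own arbitrary test data, using the operator norm):

```python
import numpy as np

rng = np.random.default_rng(4)
d = 6
# Random unitary conjugation of eigenvalues clustered near 0 and 1
U = np.linalg.qr(rng.standard_normal((d, d)) + 1j * rng.standard_normal((d, d)))[0]
lam = np.array([0.02, -0.01, 0.03, 0.98, 1.01, 0.97])
x = U @ np.diag(lam) @ U.conj().T            # self-adjoint, "almost a projection"

eps = np.linalg.norm(x - x @ x, 2)           # operator norm of x - x^2
assert eps < 0.25

# Spectral projection onto eigenvalues > 1/2 (it lies in C*(x))
w, V = np.linalg.eigh(x)
p = V @ np.diag((w > 0.5).astype(float)) @ V.conj().T
print(np.linalg.norm(p - x, 2), np.sqrt(eps))   # first value is smaller
```

Here ‖p − x‖ is just the largest distance of an eigenvalue of x from {0, 1}, which the inequality |λ − λ²| < ε forces to be below √ε.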
For the property of being normal, x*x − xx* = 0, the famous question of stability
for finite matrices was asked by Halmos ([17]). He asked whether an almost normal
contractive matrix is necessarily close to a normal contractive matrix. This is
considered independently of the matrix size and ”almost” and ”close” are meant
with respect to the operator norm. This question was answered positively by Lin’s
famous theorem [20] (see also [7] and [19]).
However the result does not hold when matrices are replaced by operators. A classical example is the sequence {Sn} of weighted unilateral shifts with weights
1/n, 2/n, . . . , (n − 1)/n, 1, 1, . . . .
Each Sn is a compact perturbation of the unweighted shift and Fredholm index arguments show that the distance from Sn to the normal operators is exactly 1, but
lim_{n→∞} ‖Sn*Sn − SnSn*‖ = 0.
In other words, the relation
‖x‖ ≤ 1,        xx* − x*x = 0
is stable with respect to the class of matrix algebras but is not stable with respect to the class of all C*-algebras. Thus "stability" questions depend on the class of C*-algebras you are considering.
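The commutator limit for these weighted shifts can be computed exactly, since Sn*Sn − SnSn* is diagonal on the standard basis. A short sketch (note that a finite truncation would instead pick up a corner entry of size about 1 — precisely the index obstruction keeping Sn at distance 1 from the normals):

```python
import numpy as np

def commutator_norm(n, tail=10):
    # S_n has subdiagonal weights w_k = min(k/n, 1); on the standard basis,
    # S*S - SS* is diagonal with entries w_k^2 - w_{k-1}^2 (w_0 = 0), so the
    # operator norm is read off the entries without truncating the matrix.
    k = np.arange(1, n + tail + 1)
    w = np.minimum(k / n, 1.0)
    diffs = w**2 - np.concatenate([[0.0], w[:-1] ** 2])
    return np.abs(diffs).max()

for n in [10, 100, 1000]:
    print(n, commutator_norm(n))   # equals (2n-1)/n^2, which tends to 0
```

The maximal diagonal entry is (2n − 1)/n², confirming ‖Sn*Sn − SnSn*‖ → 0.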
The property of being stable with respect to the class of all C ∗ -algebras and the
operator norm is called weak semiprojectivity ([3]). An excellent exposition of weak
semiprojectivity can be found in Loring’s book ([21]).
Although almost normal operators need not be close to normal, using the remarkable distance formula of Kachkovsky and Safarov in [19], the first author and Ye Zhang [15] proved that there is a constant C such that, for every Hilbert-space operator T, the distance from T ⊕ T ⊕ ··· to the normal operators is at most C‖T*T − TT*‖^{1/2}.
Another famous Halmos stability question ([17]) asks whether two almost commuting unitary matrices are necessarily close to two exactly commuting unitary
matrices. It was answered by Voiculescu in the negative [26] (see [5] for a short
proof). However if the operator norm is replaced by Hilbert-Schmidt norm, then
things change dramatically. In the Hilbert-Schmidt norm almost commuting unitary matrices turn out to be close to commuting ones, and almost commuting
self-adjoint ones to be close to commuting self-adjoint ones as was shown in [11]
by the first author and Weihua Li. Several quantitative results (estimating δ(ε)) for almost commuting k-tuples of self-adjoint, unitary, and normal matrices with respect to this norm have been obtained in [8], [24] and [6]. Much more generally,
it was proved in [11] that any polynomial equation of commuting normal variables
is stable with respect to the tracial norms on diffuse von Neumann algebras.
However nothing is known for polynomial relations in non-commuting variables.
In this paper we initiate a study of stability of non-commutative polynomial relations, which we translate into lifting problems for noncommutative C ∗ -algebras.
We consider C*-algebras B that have a tracial state ρ, and we measure "almost" and "close" in terms of the 2-(semi)norm on B given by
‖x‖₂ = ρ(x*x)^{1/2}.
Thus we will address a "Hilbert-Schmidt" type of stability that we call tracial stability.
The original ε-δ-definition of norm stability can be reformulated in terms of approximate liftings from ultraproducts Π^α_{i∈I} Ai of C*-algebras Ai. Similarly tracial stability can be reformulated in terms of approximate liftings from tracial ultraproducts Π^α_{i∈I} (Ai, ρi) of tracial C*-algebras (Ai, ρi). These ideas are made precise in section 2.
Suppose C is a class of unital C*-algebras that is closed under isomorphisms.
We say that a separable unital C*-algebra A is C-tracially stable if every unital
∗-homomorphism from A into a tracial ultraproduct of C ∗ -algebras from the class
C is approximately liftable.
If A is the universal C ∗ -algebra of a relation p, this definition is equivalent to
the ǫ − δ definition above with the norm being a tracial norm and C ∗ -algebras B
being from the class C.
We will be interested here in matricial tracial stability, II1 -factor tracial stability,
W ∗ -factor-tracial stability, RR0-tracial stability (that is when C is the class of real
rank zero C ∗ -algebras), and C ∗ -tracial stability (that is when C is the class of all
C ∗ -algebras).
All previous results ([11], [8], [24] and [6]) on matrices which almost commute
w.r.t. the Hilbert-Schmidt norm can be reformulated as matricial tracial stability
of separable commutative C ∗ -algebras.
In fact, the first results related to RR0-tracial stability appeared in [22] and [23], where some stability results were proved for projections almost commuting with matrix units; this was applied to deduce the tracial Rokhlin property for an automorphism of a C*-algebra from the Rokhlin property of the corresponding automorphism of an associated von Neumann algebra.
In section 2 of this paper we extend substantially all the previous results about
matrices almost commuting with respect to the Hilbert-Schmidt norm.
Theorem 2.7. Suppose the class C ⊆ RR0 is closed under taking direct sums and
unital corners. If B is separable unital and C-tracially stable, and if X is a compact
metric space, then B ⊗ C (X) is C-tracially stable. In particular every separable
unital commutative C*-algebra is C-tracially stable.
Of course, a natural obstruction for a C*-algebra to be tracially stable can be simply a lack of ∗-homomorphisms. If a C*-algebra has enough almost homomorphisms to matrix algebras to separate points, but not enough actual ∗-homomorphisms to matrix algebras to separate points, then of course it is not matricially tracially stable. However, it turns out not to be the only obstruction.
We show that a certain approximation property for traces has to hold for a C*-algebra to be matricially tracially stable. For nuclear C*-algebras (even tracially nuclear, see definition in section 3) this property is also sufficient.
Theorem 3.10. Suppose A is a separable tracially nuclear C*-algebra with at least one tracial state. The following are equivalent:
(1) A is matricially tracially stable;
(2) for every tracial state τ on A, there is a positive integer n0 and, for each n ≥ n0, there is a unital ∗-homomorphism ρn : A → Mn(C) such that, for every a ∈ A,
τ(a) = lim_{n→∞} τn(ρn(a))
(here τn is the usual tracial state on Mn(C));
(3) A is W*-factor tracially stable.
These conditions are stronger than merely having a separating family of ∗-homomorphisms to matrix algebras. Namely:
Example 3.11. There exists a residually finite-dimensional (RFD) nuclear C*-algebra which has finite-dimensional irreducible representations of all dimensions but is not matricially tracially stable.
However a Type I C ∗ -algebra is W ∗ -factor tracially stable (in particular, matricially tracially stable) when it has sufficiently many matrix representations (for
example, to have a 1-dimensional representation is enough):
Corollary 3.9. Suppose A is a type I separable unital C*-algebra such that for all
but finitely many positive integers n it has a unital n-dimensional representation.
Then A is W ∗ -factor tracially stable. In particular, for any type I C ∗ -algebra A,
A ⊕ C is W ∗ -factor tracially stable.
In section 4 we find a close relationship between matricial tracial stability and
Voiculescu’s free entropy dimension δ0 . The following result shows that a matricially
tracially stable algebra may be forced to have a lot of non-unitarily equivalent
representations of some given dimension. Below by Rep(A, k) / ⋍ we denote the
set of all unital ∗-homomorphisms from A into Mk (C) modulo unitary equivalence.
Theorem 4.2. Suppose A = C*(x1, . . . , xn) is matricially tracially stable and τ is an embeddable tracial state on A such that 1 < δ0(x1, . . . , xn). Then
lim sup_{k→∞} log Card(Rep(A, k)/⋍)/k² = ∞.
This allows us to show that for non-(tracially) nuclear C ∗ -algebras there arise
new obstructions, other than in the (tracially) nuclear case, for being matricially
tracially stable.
Theorem 4.6. There exists an RFD C*-algebra which has the approximation property from Theorem 3.10 but which is not matricially tracially stable.
In the last section we consider C ∗ -tracial stability. In contrast to RR0-tracial
stability, not all commutative C ∗ -algebras have this property. The main result
of section 5 is a characterization of C ∗ -tracial stability for separable commutative
C ∗ -algebras. For that we introduce approximately path-connected spaces. We say
that a topological space X is approximately path-connected if, for any finitely many points x1, . . . , xn, one can find points x′1, . . . , x′n arbitrarily close to them that can be connected by a continuous path.
Theorem 5.3. Suppose X is a compact metric space. The following are equivalent:
(1) C (X) is C ∗ -tracially stable.
(2) X is approximately path-connected.
As a final remark, it is interesting how things are reversed when norm stability is replaced by C*-tracial stability. For example, being a projection is norm stable, but not C*-tracially stable. Indeed, if {fn} is any sequence of functions in C[0, 1] with trace ρ(f) = ∫₀¹ f(x) dx, 0 ≤ fn ≤ 1, and fn(x) → χ_{[0,1/2]}(x) a.e., then ‖fn − fn²‖₂ → 0, but since 0 and 1 are the only projections, the fn's are not ‖·‖₂-close to a projection in C[0, 1]. On the other hand, for C*-tracial stability, the answers to the problems of normality, commuting pairs of unitaries, and commuting triples of selfadjoint operators are all affirmative, in contrast to norm stability.
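The failure of projection stability for C[0, 1] in ‖·‖₂ is easy to see numerically with ramp functions approximating χ_{[0,1/2]} (an illustration with our own choice of fm):

```python
import numpy as np

x = np.linspace(0, 1, 200_001)
for m in [10, 100, 1000]:
    f = np.clip(m * (0.5 - x) + 0.5, 0.0, 1.0)    # continuous ramp of width 1/m
    norm2 = np.sqrt(np.mean((f - f**2) ** 2))     # ||f - f^2||_2 under rho(f) = int_0^1 f
    print(m, round(norm2, 4))                     # -> 0 like m^{-1/2}

# ...yet f stays ||.||_2-far from the only projections 0 and 1:
dist = min(np.sqrt(np.mean(f**2)), np.sqrt(np.mean((1 - f) ** 2)))
print(round(dist, 3))                             # about 1/sqrt(2)
```

So ‖f − f²‖₂ can be made arbitrarily small while the ‖·‖₂-distance to every projection stays bounded away from zero.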
Acknowledgements. The first author gratefully acknowledges a Collaboration
Grant from the Simons Foundation. The research of the second-named author was
supported by the Polish National Science Centre grant under the contract number DEC- 2012/06/A/ST1/00256 and from the Eric Nordgren Research Fellowship
Fund at the University of New Hampshire.
1. Preliminaries
1.1. Ultraproducts.
If a unital C*-algebra B has a tracial state ρ, we denote the 2-norm (seminorm) given by ρ by ‖·‖₂ = ‖·‖₂,ρ, defined by
‖b‖₂ = ρ(b*b)^{1/2}.
We also denote the GNS representation for the state ρ by πρ .
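For B = Mn(C) with its unique tracial state τn = tr/n, this 2-norm is just the Hilbert–Schmidt norm scaled by 1/√n — a two-line illustration:

```python
import numpy as np

def norm2(b):
    # ||b||_2 = tau_n(b* b)^{1/2} with tau_n the normalized trace tr/n on M_n(C)
    n = b.shape[0]
    return np.sqrt(np.trace(b.conj().T @ b).real / n)

b = np.diag([1.0, 0.0, 0.0, 0.0])     # rank-one projection in M_4
print(norm2(b))                        # 1/2, since tau_4(b) = 1/4
print(norm2(np.eye(4)))                # 1 for the identity
```

This is the norm in which the almost-commuting-matrix results quoted below measure "almost" and "close".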
Suppose I is an infinite set and α is an ultrafilter on I. We say α is nontrivial if there is a sequence {En} in α such that ∩n En = ∅. Suppose α is a nontrivial ultrafilter on a set I and, for each i ∈ I, suppose Ai is a unital C*-algebra with a tracial state ρi. The tracial ultraproduct Π^α_{i∈I}(Ai, ρi) is the C*-product Π_{i∈I} Ai modulo the ideal Jα of all elements {ai} in Π_{i∈I} Ai for which
lim_{i→α} ‖ai‖²₂,ρi = lim_{i→α} ρi(ai*ai) = 0.
We denote the coset of an element {ai} in Π_{i∈I} Ai by {ai}α.
Tracial ultraproducts for factor von Neumann algebras were first introduced by S. Sakai [25], who proved that a tracial ultraproduct of finite factor von Neumann algebras is a finite factor. More recently, it was shown in [11] that a tracial ultraproduct Π^α_{i∈I}(Ai, ρi) of C*-algebras is always a von Neumann algebra with a faithful normal tracial state ρα defined by
ρα({ai}α) = lim_{i→α} ρi(ai).
6
DON HADWIN AND TATIANA SHULMAN
If there is no confusion, we will denote it just by ρ.
1.2. Background theorem.
The next theorem is a key tool for some of our results. It says that two tuples that are both close (in moments) to the same tuple in a hyperfinite von Neumann algebra are nearly unitarily equivalent to each other.
Theorem 1.1 ([16]). Suppose A = W*(x1, . . . , xs) is a hyperfinite von Neumann algebra with a faithful normal tracial state ρ. For every ε > 0 there is a δ > 0 and an N ∈ N such that, for every unital C*-algebra B with a factor tracial state τ and a1, . . . , as, b1, . . . , bs ∈ B, if, for every ∗-monomial m(t1, . . . , ts) with degree at most N,
|τ(m(a1, . . . , as)) − ρ(m(x1, . . . , xs))| < δ,
|τ(m(b1, . . . , bs)) − ρ(m(x1, . . . , xs))| < δ,
then there is a unitary element u ∈ B such that
Σ_{k=1}^{s} ‖u ak u* − bk‖₂ < ε.
The following lemma is very well known.
Lemma 1.2. If γ is a ∗-homomorphism from A to a von Neumann algebra M with a faithful normal trace τ, then γ(A)″ ≅ π_{τ◦γ}(A)″.
2. Tracial stability
Suppose A is a unital C*-algebra and π : A → Π^α_{i∈I}(Ai, ρi). We say that π is approximately liftable if there is a set E ∈ α and for every i ∈ E there is a unital ∗-homomorphism πi : A → Ai such that, for every a ∈ A,
π(a) = {πi(a)}α,
where we arbitrarily define πi(a) = 0 when i ∉ E. It actually makes no difference how we define πi(a) when i ∉ E, since the equivalence class {πi(a)}α does not change.
Suppose C is a class of unital C*-algebras that is closed under isomorphisms. We say that a separable unital C*-algebra A is C-tracially stable if every unital ∗-homomorphism from A into any tracial ultraproduct Π^α_{i∈I}(Ai, ρi), with Ai ∈ C and ρi any trace on Ai, is approximately liftable.
Thus A is C*-tracially stable if every unital ∗-homomorphism from A into any
tracial ultraproduct is approximately liftable.
We say that A is matricially tracially stable if every unital ∗-homomorphism from
A into an ultraproduct of full matrix algebras Mn (C) is approximately liftable.
We say that A is finite-dimensionally tracially stable if every unital ∗-homomorphism
from A into a tracial ultraproduct of finite-dimensional C*-algebras is approximately liftable.
We say that A is W*-tracially stable if every unital ∗-homomorphism from A
into a tracial ultraproduct of von Neumann algebras is approximately liftable.
We say that A is W*-factor tracially stable if every unital ∗-homomorphism from
A into a tracial ultraproduct of factor von Neumann algebras is approximately
liftable.
TRACIALLY STABLE
7
Remark 2.1. One can also define tracial stability in the non-unital category. For instance, we can define the non-unital version of matricial tracial stability by saying that a separable C*-algebra A is matricially tracially stable if every ∗-homomorphism from A into an ultraproduct of full matrix algebras Mn(C) is liftable. It is easy to show that A is matricially tracially stable in the non-unital category iff Ã is matricially tracially stable in the unital category. Here Ã = A⁺ when A is non-unital and Ã = A ⊕ C when A is unital. Thus the non-unital version can be easily obtained from the unital one.
To see how this ultraproduct formulation represents the ε-δ definition of tracial stability mentioned in the introduction, suppose A is the universal unital C*-algebra generated by contractions x1, . . . , xs subject to a relation p(x1, . . . , xs) = 0. Suppose π : A → Π^α_{i∈I}(Ai, ρi) is a unital ∗-homomorphism such that, for each 1 ≤ k ≤ s,
π(xk) = {xk(i)}α.
We then have
0 = ‖π(p(x1, . . . , xs))‖₂,ρα = ‖p(π(x1), . . . , π(xs))‖₂,ρα = lim_{i→α} ‖p(x1(i), . . . , xs(i))‖₂,ρi.
Since α is a nontrivial ultrafilter on I, there is a decreasing sequence E1 ⊃ E2 ⊃ ··· in α such that ∩_{k∈N} Ek = ∅. First suppose A is C*-tracially stable. Then, for each positive integer m there is a number δm > 0 such that, when
‖p(x1(i), . . . , xs(i))‖₂,ρi < δm,
there is a unital ∗-homomorphism γm,i : A → Ai such that
max_{1≤k≤s} ‖xk(i) − γm,i(xk)‖₂,ρi < 1/m.
Since lim_{i→α} ‖p(x1(i), . . . , xs(i))‖₂,ρi = 0, we can find a decreasing sequence {An} in α with An ⊂ En such that, for every i ∈ An,
‖p(x1(i), . . . , xs(i))‖₂,ρi ≤ δn.
For i ∈ An\An+1 we define πi = γn,i. We then have that {πi}_{i∈A1} is an approximate lifting of π.
On the other hand, if A is not C*-tracially stable, then there is an ε > 0 such that, for every positive integer n, there is a tracial unital C*-algebra (An, ρn) and x1(n), . . . , xs(n) such that
‖p(x1(n), . . . , xs(n))‖2,ρn < 1/n,
but for every unital ∗-homomorphism γ : A → An,
max_{1≤k≤s} ‖xk(n) − γ(xk)‖2,ρn ≥ ε.
If we let α be any free ultrafilter on N, we have that the map
π(xk) = {xk(n)}α
extends to a unital ∗-homomorphism into ∏^α_{n∈N}(An, ρn) that is not approximately liftable.
8
DON HADWIN AND TATIANA SHULMAN
The following result shows that pointwise ‖·‖2-limits of approximately liftable representations are approximately liftable.
Lemma 2.2. Suppose A = C∗({b1, b2, . . .}), {(Ai, ρi) : i ∈ I} is a family of tracial C*-algebras, α is a nontrivial ultrafilter on I, and π : A → ∏^α_{i∈I}(Ai, ρi) is a unital ∗-homomorphism such that, for each k ∈ N,
π(bk) = {bk(i)}α.
The following are equivalent:
(1) π is approximately liftable.
(2) For every ε > 0 and every N ∈ N, there is a set E ∈ α and, for every i ∈ E, a unital ∗-homomorphism πi : A → Ai such that, for 1 ≤ k ≤ N and every i ∈ E,
‖πi(bk) − bk(i)‖2,ρi < ε.
Proof. Obviously (1) implies (2). We need to prove the opposite implication. Since α is nontrivial, there is a decreasing sequence {Bn} of elements of α such that ∩_{n=1}^∞ Bn = ∅. For each n ∈ N, let N = n and ε = 1/n, and let En ∈ α and, for each i ∈ En, choose a unital ∗-homomorphism πn,i : A → Ai such that
‖πn,i(bk) − bk(i)‖2,ρi < 1/n
for 1 ≤ k ≤ n. We define Fn = ∩_{k=1}^n (Bk ∩ Ek), and for i ∈ Fn\Fn+1 we define πi = πn,i. Since ∩_{n=1}^∞ Fn = ∅, the πi's are defined for each i ∈ ∪_{n=1}^∞ Fn = F1 ∈ α. Clearly,
{πi(bk)}α = {bk(i)}α = π(bk)
for k = 1, 2, . . .. Since the set S of all a ∈ A such that {πi(a)}α = π(a) is a unital C*-algebra, we see that S = A and that π is approximately liftable.
Although we consider all tracial ultraproducts, when the algebra A is separable we need only consider ultraproducts over N with respect to a single nontrivial ultrafilter.
Lemma 2.3. Suppose A = C∗(x1, x2, . . .) is a separable unital C*-algebra, C is a class of unital C*-algebras closed under isomorphism, and α is a nontrivial ultrafilter on N. The following are equivalent:
(1) A is C-tracially stable.
(2) If {(Bn, γn)} is a sequence of tracial C*-algebras in C and π : A → ∏^α_{n∈N}(Bn, γn) is a unital ∗-homomorphism, then π is approximately liftable.
(3) For every ε > 0, for every positive integer s, and for every tracial state ρ on A, there is a positive integer N such that if B ∈ C, γ is a tracial state on B, and b1, . . . , bN ∈ B satisfy ‖bk‖ ≤ 1 + ‖xk‖ for 1 ≤ k ≤ N and
|ρ(m(x1, . . . , xN)) − γ(m(b1, . . . , bN))| < 1/N
for all ∗-monomials m(t1, . . . , tN) with degree at most N, then there is a unital ∗-homomorphism π : A → B such that
Σ_{k=1}^s ‖π(xk) − bk‖2,γ < ε.
Proof. (1) ⇒ (2). This is obvious.
(2) ⇒ (3). Assume (3) is false. Then there is an ε > 0, an integer s ∈ N, and a tracial state ρ on A and, for each N ∈ N, there is a BN ∈ C with a tracial state γN and {bN,1, bN,2, . . . , bN,N} ⊂ BN such that
‖bN,k‖ ≤ ‖xk‖ + 1
for 1 ≤ k ≤ N < ∞ and
|ρ(m(x1, . . . , xN)) − γN(m(bN,1, . . . , bN,N))| < 1/N
for all ∗-monomials m(t1, . . . , tN) with degree at most N, but such that, for each N, there is no unital ∗-homomorphism π : A → BN with
Σ_{k=1}^s ‖π(xk) − bN,k‖2,γN < ε.
For each 1 ≤ N < k let bN,k = 0 ∈ BN. For 1 ≤ k < ∞, let
bk = {bN,k}α ∈ ∏^α_{N∈N}(BN, γN).
Let γ be the limit trace on ∏^α_{N∈N}(BN, γN). It follows that for every ∗-monomial m(t1, . . . , tk) we have
(2.1) ρ(m(x1, . . . , xk)) = γ(m(b1, . . . , bk)).
Define a unital ∗-homomorphism π : A → ∏^α_{N∈N}(BN, γN) by
π(xi) = bi.
Since γ is faithful, it follows from (2.1) that π is well defined and ρ = γ ◦ π. By (2), π is approximately liftable, so there is an E ∈ α and, for each N ∈ E, a unital ∗-homomorphism πN : A → BN such that, for 1 ≤ k < ∞,
π(xk) = {πN(xk)}α.
It follows that
lim_{N→α} Σ_{k=1}^s ‖bN,k − πN(xk)‖2,γN = 0,
which means that for some N ∈ N,
Σ_{k=1}^s ‖bN,k − πN(xk)‖2,γN < ε.
This contradiction implies that (3) must be true.
(3) ⇒ (1). Suppose (3) is true and {(Bi, γi) : i ∈ I} is a collection of tracial unital C*-algebras with each Bi in C, suppose β is a nontrivial ultrafilter on I, and suppose π : A → ∏^β_{i∈I}(Bi, γi) is a unital ∗-homomorphism. Let γ be the limit trace along β and define ρ = γ ◦ π. For k ∈ N, letting ε = 1/k and s = k, we can choose Nk as in (3). Since β is nontrivial there is a decreasing sequence {Ej} in β whose intersection is empty. For each k ∈ N write
π(xk) = {bi,k}β
with ‖bi,k‖ ≤ ‖xk‖ for every i ∈ I. It follows, for every ∗-monomial m(t1, . . . , tk), that
lim_{i→β} γi(m(bi,1, . . . , bi,k)) = ρ(m(x1, . . . , xk)).
Thus the set Fk consisting of all i ∈ I such that
|ρ(m(x1, . . . , xNk)) − γi(m(bi,1, . . . , bi,Nk))| < 1/Nk
for all ∗-monomials m with degree at most Nk must be in β. For each k ∈ N let
Wk = Ek ∩ F1 ∩ · · · ∩ Fk.
For each i ∈ Wk\Wk+1 there is a unital ∗-homomorphism πi : A → Bi such that
Σ_{j=1}^k ‖πi(xj) − bi,j‖2,γi < 1/k.
It is clear for every k ∈ N that π(xk) = {πi(xk)}β. Since {a ∈ A : π(a) = {πi(a)}β} is a unital C*-subalgebra of A containing all x1, x2, . . ., we conclude that A = {a ∈ A : π(a) = {πi(a)}β} and thus π is approximately liftable.
It is clear that C*-tracially stable implies W*-tracially stable, which implies W*-factor tracially stable, which implies matricially tracially stable. Here are some slightly more subtle relationships.
Lemma 2.4. Suppose A is a separable unital C*-algebra. Then A is matricially tracially stable if and only if every unital ∗-homomorphism from A into a tracial ultraproduct of hyperfinite factors is approximately liftable. Also, A is finite-dimensionally tracially stable if and only if every unital ∗-homomorphism from A into a tracial ultraproduct of hyperfinite von Neumann algebras is approximately liftable.
Proof. We prove the first statement; the second follows in a similar fashion. The "if" part is clear since Mn(C) is a hyperfinite factor. Suppose A is matricially tracially stable, and suppose {(Mi, τi) : i ∈ I} is a family with each Mi a hyperfinite factor and τi the unique normal faithful tracial state on Mi. Suppose also that α is a nontrivial ultrafilter on I and π : A → ∏^α_{i∈I}(Mi, τi) is a unital ∗-homomorphism. If E = {i ∈ I : dim Mi < ∞} ∈ α, then ∏^α_{i∈I}(Mi, τi) = ∏^α_{i∈E}(Mi, τi) is a tracial ultraproduct of matrix algebras and π is approximately liftable. If I\E ∈ α, we can assume that each Mi is a hyperfinite II1-factor. Since α is nontrivial, there is a decreasing sequence {En} in α such that E1 = I and ∩_{n=1}^∞ En = ∅. Suppose {an} is a dense sequence in A and, for each n ∈ N, write
π(an) = {an(i)}α
with ‖an(i)‖ ≤ ‖an‖ for all n and i. If n ∈ N and i ∈ En\En+1, we can find a unital subalgebra Bi ⊂ Mi such that Bi is isomorphic to Mki(C) for some ki ∈ N and such that there is a sequence {bm(i)} in Bi with ‖bm(i)‖ ≤ ‖am‖ for each m and
‖am(i) − bm(i)‖2 ≤ 1/n
for 1 ≤ m ≤ n. It follows that
π(an) = {an(i)}α = {bn(i)}α ∈ ∏^α_{i∈I}(Bi, τi).
Hence π maps A into ∏^α_{i∈I}(Bi, τi). Since A is matricially tracially stable, π is approximately liftable.
Recall that a C∗-algebra has real rank zero (RR0) if each of its self-adjoint elements can be approximated by self-adjoint elements with finite spectrum.
Theorem 2.5. Suppose every algebra in the class C is RR0. Then every separable
unital commutative C*-algebra is C-tracially stable.
Proof. In [14] the authors proved that if J is a norm-closed two-sided ideal of a real rank zero C*-algebra M such that M/J is an AW*-algebra, then, for any commutative separable unital C*-algebra A, any unital ∗-homomorphism π : A → M/J lifts to a unital ∗-homomorphism ρ : A → M. Since a direct product of real rank zero C*-algebras has real rank zero and a tracial ultraproduct of C∗-algebras is a von Neumann algebra, the statement follows.
Proposition 2.6. Suppose C is a class such that C ⊕ B ∈ C whenever B ∈ C. Every
C-tracially stable separable unital C ∗ -algebra A with at least one representation
into a tracial ultraproduct of C ∗ -algebras in C must have a one-dimensional unital
representation.
Proof. Suppose π : A → ∏^α_{i∈I}(Bi, γi) is a unital ∗-homomorphism, where each Bi ∈ C and α is a nontrivial ultrafilter on I. Let Di = C ⊕ Bi for each i ∈ I. Since α is nontrivial, there is a decreasing sequence I = E1 ⊇ E2 ⊇ · · · in α whose intersection is ∅. For i ∈ En\En+1 define a trace ρi on Di by
ρi(λ ⊕ B) = (1/n)λ + (1 − 1/n)γi(B).
Then
∏^α_{i∈I}(Di, ρi) = ∏^α_{i∈I}(Bi, γi).
The rest is easy.
Theorem 2.7. Suppose the class C ⊆ RR0 is closed under taking direct sums and
unital corners. If B is separable unital and C-tracially stable, and if X is a compact
metric space, then B ⊗ C (X) is C-tracially stable.
Proof. Suppose {(An, ρn)} is a sequence of C∗-algebras in the class C, α is a nontrivial ultrafilter on N, and f : B ⊗ C(X) → ∏^α_{n∈N}(An, ρn) is a unital ∗-homomorphism. By Lemma 2.2, it is enough, for any x1, . . . , xk ∈ B ⊗ C(X) and ε > 0, to find unital ∗-homomorphisms f̃n : B ⊗ C(X) → An such that
‖{f̃n(xj)}α − f(xj)‖2,ρ < 3ε,  j ≤ k.
There are b_i^{(j)} ∈ B, φ_i^{(j)} ∈ C(X), and N1, . . . , Nk such that
(2.2) ‖xj − Σ_{i=1}^{Nj} b_i^{(j)} ⊗ φ_i^{(j)}‖ ≤ ε,  j ≤ k.
Let
M = max_{1≤i≤Nj, 1≤j≤k} ‖f(b_i^{(j)} ⊗ 1)‖,  N = max_{j≤k} Nj.
Since f(1 ⊗ C(X)) is commutative, it is isomorphic to some C(Ω). We can choose a disjoint collection {E1, . . . , Em} of Borel subsets of Ω whose union is Ω, and points ω1 ∈ E1, . . . , ωm ∈ Em, such that
(2.3) ‖f(1 ⊗ φ_i^{(j)}) − Σ_{l=1}^m f(1 ⊗ φ_i^{(j)})(ωl) χ_{El}‖2,ρ < ε/(NM)
for j ≤ k and i ≤ Nj. Since pairwise orthogonal projections generate a commutative C∗-algebra, by Theorem 2.5, for each χ_{El}, l ≤ m, we can find projections Pl,n ∈ An such that χ_{El} = {Pl,n}α and P1,n, . . . , Pm,n are pairwise orthogonal for each n. Since the subalgebra f(1 ⊗ C(X)) is central in f(B ⊗ C(X)), each χ_{El} commutes with f(B ⊗ C(X)). Hence
f(B ⊗ C(X)) ⊆ ∏^α_{n∈N}(Σ_{i=1}^m Pi,n An Pi,n, ρn) = ⊕_{i=1}^m ∏^α_{n∈N}(Pi,n An Pi,n, ρn).
Since B is C-tracially stable, we can find unital ∗-homomorphisms ψn,i : B → Pi,n An Pi,n such that
(2.4) {ψn,i(b)}α = pi ◦ f(b ⊗ 1)
for any b ∈ B. Here pi is the projection onto the i-th summand in ⊕_{i=1}^m ∏^α_{n∈N}(Pi,n An Pi,n, ρn). Hence for ψn = ⊕_{i=1}^m ψn,i : B → Σ_{i=1}^m Pi,n An Pi,n, we have
(2.5) {ψn(b)}α = f(b ⊗ 1)
for any b ∈ B. Define a ∗-homomorphism δn : C(X) → Σ_{i=1}^m Pi,n An Pi,n by
(2.6) δn(φ) = Σ_{i=1}^m f(1 ⊗ φ)(ωi) Pi,n.
Define f˜n : B ⊗ C(X) → An by
f˜n (b ⊗ φ) = ψn (b)δn (φ).
Since δn(C(X)) and ψn(B) commute, f̃n is a ∗-homomorphism. By (2.2), (2.3), (2.5) and (2.6), for each j ≤ k we have
(2.7)
‖{f̃n(xj)}α − f(xj)‖2,ρ
≤ 2ε + ‖{f̃n(Σ_{i=1}^{Nj} b_i^{(j)} ⊗ φ_i^{(j)})}α − f(Σ_{i=1}^{Nj} b_i^{(j)} ⊗ φ_i^{(j)})‖2,ρ
= 2ε + ‖Σ_{i=1}^{Nj} {ψn(b_i^{(j)})}α {δn(φ_i^{(j)})}α − Σ_{i=1}^{Nj} f(b_i^{(j)} ⊗ 1) f(1 ⊗ φ_i^{(j)})‖2,ρ
= 2ε + ‖Σ_{i=1}^{Nj} f(b_i^{(j)} ⊗ 1) (Σ_{l=1}^m f(1 ⊗ φ_i^{(j)})(ωl) χ_{El} − f(1 ⊗ φ_i^{(j)}))‖2,ρ
≤ 2ε + Nj M · ε/(NM) ≤ 3ε.
Proposition 2.8. Suppose the class C ⊆ RR0 is closed under taking finite direct
sums. Then the class of C-tracially stable C*-algebras is closed under taking finite
direct sums.
Proof. We prove it for direct sums with two summands; the general case is similar. Let A, B be C-tracially stable and let π : A ⊕ B → ∏^α_{n∈N}(Dn, ρn) be a unital ∗-homomorphism with all Dn ∈ C. Let
p = π(1A), q = π(1B).
Then p + q = 1. By Theorem 2.5 we can find projections Pn, Qn ∈ Dn such that
{Pn}α = p, {Qn}α = q, Pn + Qn = 1Dn.
Then
π(A) ⊂ ∏^α_{n∈N}(Pn Dn Pn, ρn),  π(B) ⊂ ∏^α_{n∈N}(Qn Dn Qn, ρn).
Since A and B are C-tracially stable, there are unital ∗-homomorphisms φ : A → ∏ Pn Dn Pn and ψ : B → ∏ Qn Dn Qn such that
{φ(a)n}α = π(a), {ψ(b)n}α = π(b)
for all a ∈ A, b ∈ B. Then π̃, defined by π̃(a, b) = φ(a) + ψ(b), is an approximate lift of π.
The following proposition is obvious.
Proposition 2.9. The class of C-tracially stable C*-algebras is closed under finite
free products in the unital category for any C.
Remark. The preceding proposition cannot be extended to countable free products in the unital category. For example, Bn = Mn(C) ⊕ Mn+1(C) is matricially tracially stable for each positive integer n by Lemma 3.5 and Corollary 3.9 in the next section. Let α be a nontrivial ultrafilter containing the set {k! : k ∈ N}. Then, for each n ∈ N, Bn embeds unitally into ∏^α_{k∈N}(Mk, τk), and hence there is a unital ∗-homomorphism from the unital free product ∗_{n∈N} Bn into ∏^α_{k∈N}(Mk, τk). However, ∗_{n∈N} Bn has no finite-dimensional representations, hence ∗_{n∈N} Bn is not matricially tracially stable. On the other hand, if An is a separable C-tracially stable C∗-algebra for each n ∈ N, then the unitization of the free product of the An's in the non-unital category is C-tracially stable.
3. Matricial tracial stability
Notation: τn denotes the usual (normalized) trace on Mn(C). The tracial ultraproduct is denoted ∏^α_{n∈N}(Mn(C), τn).
3.1. Embeddable traces. Definition. Suppose A is a unital separable C*-algebra and τ is a tracial state on A. We say τ is embeddable if there is a nontrivial ultrafilter α on N and a unital ∗-homomorphism
π : A → ∏^α_{n∈N}(Mn(C), τn)
such that τα ◦ π = τ.
If Connes’ embedding theorem holds, then every tracial state is embeddable.
A tracial state τ is finite-dimensional if there is a finite-dimensional C*-algebra B with a tracial state τB and a unital ∗-homomorphism π : A → B such that τB ◦ π = τ. A tracial state τ is called matricial if there is a positive integer n and a unital ∗-homomorphism π : A → Mn(C) such that τ = τn ◦ π. We say that a matricial tracial state is a factor matricial state if the π above can be chosen to be surjective.
Lemma 3.1. A C∗-algebra A = C∗(D) with a tracial state τ admits a trace-preserving ∗-homomorphism into an ultraproduct of matrix algebras if and only if, for every ε > 0, every finite tuple (a1, ..., an) of elements of D, and every finite set F of ∗-monomials, there is a positive integer k and a tuple (A1, ..., An) of k × k matrices such that
(1) ‖Aj‖ ≤ ‖aj‖ + 1 for 1 ≤ j ≤ n,
(2) |τ(m(a1, ..., an)) − τk(m(A1, ..., An))| < ε for every m ∈ F.
Proof. The "only if" part is obvious. For the other direction, let Λ be the set of all triples λ = (ελ, Nλ, Eλ) with ελ > 0, Nλ a positive integer, and Eλ a finite subset of D, partially ordered coordinatewise by (≥, ≤, ⊆). By the hypothesis, for each λ ∈ Λ there is a positive integer kλ and a function fλ : D → M_{kλ} with fλ(1) = 1 such that:
(i) ‖fλ(a)‖ ≤ ‖a‖ + 1 for every a ∈ D;
(ii) for all monomials m with degree at most Nλ and all a1, . . . , an ∈ Eλ,
|τ(m(a1, ..., an)) − τ_{kλ}(m(fλ(a1), ..., fλ(an)))| < ελ.
(We can define fλ(a) = 0 on D\Eλ.) Define F : D → ∏_{λ∈Λ}(M_{kλ}, τ_{kλ}) by F(a)(λ) = fλ(a). Let α be an ultrafilter on Λ containing {λ : λ ≥ λ0} for each λ0 ∈ Λ. Define
ρ : D → ∏^α_{λ∈Λ}(M_{kλ}, τ_{kλ})
by ρ(a) = [F(a)]α. Let A0 denote the unital ∗-algebra generated by D. It follows from the definition of the fλ's that ρ extends to a unital ∗-homomorphism π0 : A0 → ∏^α_{λ∈Λ}(M_{kλ}, τ_{kλ}) with τα ◦ π0 = τ. We can now uniquely extend π0, by continuity, to a unital ∗-homomorphism π on A such that τ = τα ◦ π.
Corollary 3.2. If a separable C∗-algebra with a tracial state admits a trace-preserving ∗-homomorphism into ∏^α_{n∈N}(Mn, τn) for some nontrivial ultrafilter α, then it admits a trace-preserving ∗-homomorphism into ∏^β_{n∈N}(Mn, τn) for any nontrivial ultrafilter β.
Proof. Suppose A = C∗(a1, a2, . . .) with a tracial state τ, and suppose ρ : A → ∏^α_{t∈N}(Mt, τt) is a unital ∗-homomorphism such that τα ◦ ρ = τ. We can assume ‖an‖ = 1 for all n, and we can write ρ(an) = {bn,k}α with ‖bn,k‖ ≤ ‖an‖ = 1 for all k ≥ 1. For each positive integer s there is a positive integer ks such that, for every ∗-monomial m with degree at most s,
|τ(m(a1, . . . , as)) − τ_{ks}(m(b_{1,ks}, . . . , b_{s,ks}))| < 1/(2s).
Suppose t > 2sks. Dividing t by ks we get t = ksq + r with 0 ≤ r < ks. We define cj = b_{j,ks}^{(q)} ⊕ 0r (where b^{(q)} is a direct sum of q copies of b and 0r is an r × r zero matrix). A simple computation gives, for any monomial m of degree at most s,
τt(m(c1, . . . , cs)) = τ_{ks}(m(b_{1,ks}, . . . , b_{s,ks}))(1 − r/t).
Since r/t < 1/(2s) and |τ_{ks}(m(b_{1,ks}, . . . , b_{s,ks}))| ≤ 1, we see that
|τ(m(a1, . . . , as)) − τt(m(c1, . . . , cs))| < 1/s.
Hence we get such representatives (c1, . . . , cs) for all t > 2sks. From this we see that there is a function F : {a1, a2, . . .} → ∏_{t∈N}(Mt, τt) such that, writing F(ak) = (dk,1, dk,2, . . .), we have
lim_{t→∞} τt(m(d1,t, d2,t, . . .)) = τ(m(a1, a2, . . .))
for any ∗-monomial m. Hence, for any free ultrafilter β,
lim_{t→β} τt(m(d1,t, d2,t, . . .)) = τ(m(a1, a2, . . .)).
Thus the map π0 : {a1, a2, . . .} → ∏^β_{t∈N}(Mt, τt) defined by
π0(ak) = [F(ak)]β
extends to a unital ∗-homomorphism π : A → ∏^β_{t∈N}(Mt, τt) such that τβ ◦ π = τ.
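The padding step in this proof, replacing each b_{j,ks} by q direct-sum copies plus an r × r zero block, rescales the normalized trace of any constant-free ∗-monomial exactly by the factor (1 − r/t), because the monomial of a block-diagonal tuple is again block-diagonal and the zero block contributes nothing to the trace. A minimal numerical sketch (Python/NumPy; the matrices and the monomial are illustrative stand-ins, not objects from the proof):

```python
import numpy as np

rng = np.random.default_rng(0)
ks, q, r = 3, 4, 2          # block size, number of copies, zero padding
t = ks * q + r              # total matrix size: t = ks*q + r

# two random ks x ks contractions standing in for b_{1,ks}, b_{2,ks}
b = [rng.standard_normal((ks, ks)) for _ in range(2)]
b = [m / np.linalg.norm(m, 2) for m in b]

def pad(m):
    """c = m^{(q)} (+) 0_r : q direct-sum copies of m plus an r x r zero block."""
    out = np.zeros((t, t))
    pos = 0
    for blk in [m] * q + [np.zeros((r, r))]:
        d = blk.shape[0]
        out[pos:pos + d, pos:pos + d] = blk
        pos += d
    return out

c = [pad(m) for m in b]

def tau(m):
    """Normalized trace tau_n = (1/n) Tr on M_n."""
    return np.trace(m) / m.shape[0]

# a sample constant-free *-monomial m(x1, x2) = x1 x2* x1
mon = lambda x1, x2: x1 @ x2.conj().T @ x1

lhs = tau(mon(c[0], c[1]))
rhs = tau(mon(b[0], b[1])) * (1 - r / t)
assert np.isclose(lhs, rhs)   # tau_t(m(c)) = tau_ks(m(b)) (1 - r/t)
```

The assertion holds up to floating-point error for any choice of the blocks, which is exactly the computation invoked in the proof.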
It follows easily from Lemma 3.1 that the set of embeddable tracial states on A is weak*-compact. It is not hard to show that it is also convex and contains the finite-dimensional tracial states. The set of finite-dimensional tracial states is the convex hull of the set of factor matricial states. Moreover, the set of finite-dimensional tracial states is contained in the weak*-closure of the set of matricial tracial states.
3.2. Matricial tracial stability – necessary conditions. Let A be a unital C∗-algebra and let Jet(A) be the largest ideal of A that is annihilated by every embeddable trace on A. If A has no embeddable traces, then A/Jet = {0}. More generally, the embeddable tracial states of A/Jet separate the points of A/Jet and, if A is separable, A/Jet always has a faithful embeddable tracial state. Clearly, A is matricially tracially stable iff A/Jet is matricially tracially stable. C∗-algebras without any embeddable tracial states are automatically matricially tracially stable.
Theorem 3.3. Suppose A is a separable unital matricially tracially stable C∗-algebra with at least one embeddable trace. Then
(1) for all but finitely many positive integers n there is a unital representation πn : A → Mn(C);
(2) the set of embeddable tracial states on A is the weak*-closed convex hull of the set of matricial tracial states.
Proof. (1). Suppose, via contradiction, that the set E of positive integers n for which there is no n-dimensional representation of A is infinite. Let α be a nontrivial ultrafilter on N with E ∈ α, and let τ be the limit trace on ∏^α_{n∈N}(Mn(C), τn). If ρ is an embeddable trace on A, by Corollary 3.2 there is a representation π : A → ∏^α_{n∈N}(Mn(C), τn) such that τ ◦ π = ρ. Since A is matricially tracially stable, π is approximately liftable, so A has n-dimensional representations for all n in some set belonging to α; hence N\E ∈ α, which is a contradiction.
(2). If ρ is an embeddable trace, α is a nontrivial ultrafilter on N, and π : A → ∏^α_{n∈N}(Mn(C), τn) is a representation such that τ ◦ π = ρ, then the matricial tracial stability of A implies there is a set F ∈ α and, for each n ∈ F, a representation πn : A → Mn(C) such that ρ is the weak*-limit along α of the finite-dimensional traces τn ◦ πn.
Corollary 3.4. Suppose A is separable and matricially tracially stable. Then
A/Jet is RFD. If the collection of embeddable traces on A separates points of A,
then A is RFD.
The set of positive integers n for which there is an n-dimensional representation of A is an additive semigroup (take direct sums) generated by the k ∈ N for which there is a k-dimensional irreducible representation. The following simple lemma, which yields a reformulation of the first condition in Theorem 3.3, should be well known.
Lemma 3.5. Suppose n1 < n2 < · · · < ns are positive integers. The following are equivalent:
(1) GCD(n1, . . . , ns) = 1.
(2) There is a positive integer N such that every integer n ≥ N can be written as
n = Σ_{k=1}^s ak nk
with a1, . . . , as nonnegative integers.
Proof. (2) ⇒ (1). If we write N = Σ_{k=1}^s ak nk and N + 1 = Σ_{k=1}^s bk nk, then
Σ_{k=1}^s (bk − ak) nk = 1,
which implies (1).
(1) ⇒ (2). If GCD(n1, . . . , ns) = 1, there are integers t1, . . . , ts such that
t1 n1 + · · · + ts ns = 1.
Let m = 1 + n1 max(|t1|, . . . , |ts|) and let N = mn1 + · · · + mns. Suppose n ≥ N. Using the division algorithm we can find integers q ≥ 0 and r with 0 ≤ r < n1 such that
n − N = n1 q + r.
Thus
n = mn1 + · · · + mns + qn1 + r(t1 n1 + · · · + ts ns) = (m + rt1 + q) n1 + Σ_{j=2}^s (m + rtj) nj.
However, m + rtj ≥ m − n1|tj| ≥ 1 for 1 ≤ j ≤ s.
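Lemma 3.5 is the classical numerical-semigroup fact behind the Frobenius coin problem: once the generators have GCD 1, only finitely many integers fail to be nonnegative combinations. A brute-force check with illustrative generators (Python; the generators 4, 7, 9 are chosen only for the demonstration):

```python
from math import gcd
from functools import lru_cache, reduce

GENS = (4, 7, 9)                 # illustrative generators with gcd 1
assert reduce(gcd, GENS) == 1

@lru_cache(maxsize=None)
def representable(n):
    """True if n = a*4 + b*7 + c*9 with nonnegative integers a, b, c."""
    if n == 0:
        return True
    return any(n >= g and representable(n - g) for g in GENS)

reachable = [representable(n) for n in range(200)]
non_rep = [n for n in range(200) if not reachable[n]]
print(non_rep)          # [1, 2, 3, 5, 6, 10]

N = max(non_rep) + 1    # every n >= N is representable (here N = 11)
assert all(reachable[n] for n in range(N, 200))
```

Once GENS[0] consecutive integers are representable, every larger integer is too (add copies of the smallest generator), which is why the search over a finite range suffices here.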
Corollary 3.6. A unital C*-algebra satisfies condition (1) in Theorem 3.3 if and only if the set {n ∈ N : A has an irreducible n-dimensional representation} has greatest common divisor equal to 1.
Lemma 3.7. A separable unital C∗-algebra A satisfies statements (1) and (2) in Theorem 3.3 if and only if, for every embeddable tracial state τ on A, there is a positive integer n0 and, for each n ≥ n0, a unital ∗-homomorphism ρn : A → Mn(C) such that, for every a ∈ A,
τ(a) = lim_{n→∞} τn(ρn(a)).
Proof. The "if" part is clear. Suppose statements (1) and (2) in Theorem 3.3 are true. Let {a1, a2, . . .} be a norm-dense subset of the unit ball of A. Let τ be an embeddable tracial state on A, and suppose N is a positive integer. Then there is a convex combination σ, with rational coefficients, of matricial states, and hence even of factor matricial states, such that
|τ(aj) − σ(aj)| < 1/(2N)
for 1 ≤ j ≤ N. Thus there exist a positive integer m, positive integers s1, . . . , sν with s1 + · · · + sν = m, and positive integers t1, . . . , tν with surjective representations πi : A → Mti(C) such that
σ = Σ_{i=1}^ν (si/m) τti ◦ πi.
Let M ∈ N be such that (si/ti)M is an integer for all i ≤ ν. If ρ^{(k)} denotes a direct sum of k copies of a representation ρ, let
π = ⊕_i πi^{((si/ti)M)} : A → M_{mM}(C).
It is easy to check that σ = τ_{mM} ◦ π. Since by assumption statement (1) in Theorem 3.3 holds, there is an n0 such that for any n ≥ n0 there is a unital representation of A into Mn(C). Since for a positive integer q we have σ = τ_{qmM} ◦ π^{(q)}, we can assume that mM > n0. Suppose n > 2mM. We can write n = amM + b with a ≥ 2 and 0 ≤ b < mM. Since n0 ≤ mM + b, there is a unital representation ρ : A → M_{mM+b}(C). Let
πN,n = π^{(a−1)} ⊕ ρ : A → Mn(C).
Then
τn ◦ πN,n(ak) = [(a − 1)mM (τ_{mM} ◦ π)(ak) + (mM + b)(τ_{mM+b} ◦ ρ)(ak)]/n
= ((a − 1)mM/n) σ(ak) + ((mM + b)/n)(τ_{mM+b} ◦ ρ)(ak)
= (1 − (mM + b)/n) σ(ak) + ((mM + b)/n)(τ_{mM+b} ◦ ρ)(ak).
There is a positive integer kN such that if n ≥ kN, then (mM + b)/n < 1/(4N). Then for all n ≥ kN,
|τn ◦ πN,n(ak) − σ(ak)| < 1/(2N)
for 1 ≤ k ≤ N, and hence
|τ(ak) − τn ◦ πN,n(ak)| < 1/N
for 1 ≤ k ≤ N. We can easily arrange k1 < k2 < · · · and define
ρn = πN,n if kN ≤ n < kN+1.
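The dilution step above rests on a single piece of arithmetic: the normalized trace of a block-diagonal representation is the corresponding convex combination of the block traces, with weights proportional to the block sizes. A quick numerical sanity check (Python/NumPy; random matrices stand in for π(ak) and ρ(ak)):

```python
import numpy as np

rng = np.random.default_rng(1)
d, e, a = 5, 7, 4                 # (a-1) blocks of size d, one block of size e
n = (a - 1) * d + e               # total dimension

P = rng.standard_normal((d, d))   # stands in for pi(a_k)
R = rng.standard_normal((e, e))   # stands in for rho(a_k)

# build the block-diagonal matrix pi^{(a-1)} (+) rho
M = np.zeros((n, n))
pos = 0
for blk in [P] * (a - 1) + [R]:
    s = blk.shape[0]
    M[pos:pos + s, pos:pos + s] = blk
    pos += s

tau = lambda m: np.trace(m) / m.shape[0]   # normalized trace

# tau_n(pi^{(a-1)} (+) rho) = ((a-1)d/n) tau_d(pi) + (e/n) tau_e(rho)
assert np.isclose(tau(M), ((a - 1) * d / n) * tau(P) + (e / n) * tau(R))
```

With d = mM, e = mM + b and n = amM + b this is exactly the convex combination appearing in the proof, and (mM + b)/n → 0 as n grows, which is what makes τn ◦ πN,n converge to σ.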
3.3. Matricial tracial stability for tracially nuclear C*-algebras. Recall that a unital C∗-algebra A is nuclear if and only if, for every Hilbert space H and every unital ∗-homomorphism π : A → B(H), the von Neumann algebra π(A)′′ is hyperfinite. The algebra A is tracially nuclear [11] if, for every tracial state τ on A, the algebra πτ(A)′′ is hyperfinite, where πτ is the GNS representation for τ. It is easy to show that if A is tracially nuclear, then every trace on A is embeddable.
Theorem 3.8. Suppose A is a unital separable tracially nuclear C*-algebra with at least one tracial state. The following are equivalent:
(1) A is matricially tracially stable.
(2) A satisfies conditions (1) and (2) in Theorem 3.3.
(3) A is W*-factor tracially stable.
Proof. (1) ⇒ (2). This follows from Theorem 3.3.
(2) ⇒ (3). Suppose A = C∗(x1, x2, . . .) with all the xi contractions, suppose α is a free ultrafilter on N, and suppose σ : A → ∏^α_{n∈N}(Mn, ρn), where each (Mn, ρn) is a factor von Neumann algebra with trace ρn. Denote by ρα the limit trace on ∏^α_{n∈N}(Mn, ρn). Since ρα is a faithful trace on ∏^α_{n∈N}(Mn, ρn) and since A is tracially nuclear, it follows that σ(A)′′ is hyperfinite.
Since α is an ultrafilter, either {n ∈ N : dim Mn < ∞} ∈ α or {n ∈ N : dim Mn = ∞} ∈ α. We first consider the latter case, in which we can assume that every Mn is a II1-factor. It then follows from [11] that, for each n ∈ N, there is a unital ∗-homomorphism γn : σ(A)′′ → Mn such that, for every b ∈ σ(A)′′,
lim_{n→α} ρn(γn(b)) = ρα(b) and b = {γn(b)}α.
Hence {γn ◦ σ}_{n∈N} is an approximate lifting of σ.
Next suppose E := {n ∈ N : dim Mn < ∞} ∈ α, and Mn = M_{kn}(C) and ρn = τ_{kn} for each n ∈ E. By assumption and Lemma 3.7, there is an n0 ∈ N and, for every n ≥ n0, a unital ∗-homomorphism πn : A → Mn(C) such that, for every a ∈ A,
ρα(σ(a)) = lim_{n→∞} τn(πn(a)).
For each k ∈ N, write σ(xk) = {xk(n)}α. For any s ∈ N and any ∗-monomial m(t1, . . . , ts) we have
lim_{n→α} ρn(m(x1(n), . . . , xs(n))) = ρα(m(σ(x1), . . . , σ(xs)))
and
lim_{n→∞} τn(m(πn(x1), . . . , πn(xs))) = ρα(m(σ(x1), . . . , σ(xs))).
Since σ(A)′′ is hyperfinite, it follows from Connes' theorem that, for each s ∈ N, σ(C∗(x1, . . . , xs))′′ ⊂ σ(A)′′ is also hyperfinite. It now follows from Theorem 1.1 that there is a sequence of unitaries Un ∈ M_{kn}(C) such that, for every s ∈ N,
lim_{n→α} ‖xs(n) − Un∗ πn(xs) Un‖2 = 0.
Hence {Un∗ πn (·) Un } is an approximate lifting of σ.
3) ⇒ 1) is obvious.
Recall that a C∗-algebra A is called GCR (or type I) if, for every irreducible representation π : A → B(H), π(A) contains all the compact operators on H.
Corollary 3.9. Suppose A is a separable GCR unital C ∗ -algebra satisfying condition (1) in Theorem 3.3. Then A is W ∗ -factor tracially stable.
Proof. The extreme points of the set of tracial states are the factor tracial states ([12]). A factor representation of a GCR C∗-algebra must yield a factor von Neumann algebra of type I, which must be isomorphic to some B(H). If it has a trace, then H must be finite-dimensional. Thus the factor tracial states are finite-dimensional, and the Krein-Milman theorem gives that all tracial states are in the weak*-closed convex hull of the finite-dimensional states. Thus condition (2) in Theorem 3.3 holds.
The next statement follows from Theorem 3.8, Theorem 3.3 and Lemma 3.7.
Theorem 3.10. Suppose A is a separable tracially nuclear C*-algebra with at least one tracial state. The following are equivalent:
(1) A is matricially tracially stable.
(2) For every tracial state τ on A, there is a positive integer n0 and, for each n ≥ n0, a unital ∗-homomorphism ρn : A → Mn(C) such that, for every a ∈ A,
τ(a) = lim_{n→∞} τn(ρn(a))
(here τn is the usual tracial state on Mn(C)).
(3) A is W*-factor tracially stable.
Below we give an example of an RFD nuclear C∗-algebra which has finite-dimensional irreducible representations of all dimensions but is not matricially tracially stable.
Example 3.11. Suppose 0 < θ1 < θ2 < 1 are irrational and {1, θ1, θ2} is linearly independent over Q. Let λk = e^{2πiθk} for k = 1, 2. Let Aθk be the irrational rotation algebra generated by unitaries Uk, Vk satisfying UkVk = λkVkUk. We know that each Aθk is simple and nuclear and has a unique tracial state ρk. Let U = U1 ⊕ U2 and V = V1 ⊕ V2. Since UVU∗V∗ = λ1 ⊕ λ2, we see that 1 ⊕ 0 ∈ C∗(U, V), so
C∗(U, V) = Aθ1 ⊕ Aθ2.
By the linear independence assumption, there is an increasing sequence {nk} of positive integers such that
(3.1) λ1^{nk} → 1 and λ2^{nk} → 1
as k → ∞. We can also assume that
(3.2) n1 = 2 and n2 = 3.
For each positive integer n and each λ ∈ C, let {e1, . . . , en} be the standard orthonormal basis for C^n, and let Un,λ and Vn be the matrices defined by
Un,λ ej = λ^{j−1} ej
and
Vn ej = ej+1 for 1 ≤ j < n,  Vn en = e1.
It follows from (3.1) that
‖U_{nk,λs} V_{nk} − λs V_{nk} U_{nk,λs}‖ → 0
as k → ∞ for s = 1, 2. The simplicity of each Aθs implies that, for every ∗-polynomial p and s = 1, 2, we have
(3.3) ‖p(U_{nk,λs}, V_{nk}) − p(Us, Vs)‖ → 0.
The uniqueness of the trace on each Aθs implies that, for every ∗-polynomial p and s = 1, 2, we have
lim_α τ_{nk}(p(U_{nk,λs}, V_{nk})) = ρs(p(Us, Vs))
for any nontrivial ultrafilter α, and hence
(3.4) τ_{nk}(p(U_{nk,λs}, V_{nk})) → ρs(p(Us, Vs))
as k → ∞. Let
Û = Σ⊕_{k∈N} (U_{nk,λ1}^{(k−1)} ⊕ U_{nk,λ2}) and V̂ = Σ⊕_{k∈N} (V_{nk}^{(k−1)} ⊕ V_{nk}),
and let
A = C∗(Û, V̂) + K, where K = Σ⊕_{k∈N} M_{knk}(C).
It follows that K is nuclear, and by (3.3) A/K is isomorphic to C∗(U, V) = Aθ1 ⊕ Aθ2, which is also nuclear. Hence A is nuclear. Moreover, the only finite-dimensional irreducible representations of A are the coordinate representations πk for k ∈ N.
However, by (3.4), for every ∗-polynomial p, we have
lim_{k→∞} τ_{knk}(πk(p(Û, V̂))) = lim_{k→∞} [((k−1)/k) τ_{nk}(p(U_{nk,λ1}, V_{nk})) + (1/k) τ_{nk}(p(U_{nk,λ2}, V_{nk}))]
= lim_{k→∞} τ_{nk}(p(U_{nk,λ1}, V_{nk})) = ρ1(p(U1, V1)).
Hence the trace ρ on A that annihilates K and sends p(Û, V̂) to ρ2(p(U2, V2)) is embeddable and cannot be approximated by finite-dimensional tracial states. Thus A is nuclear and RFD and, by (3.2) and Corollary 3.6, it satisfies condition (1) in Theorem 3.3, but it does not satisfy condition (2); hence it is not matricially tracially stable.
4. Matricial tracial stability and free entropy dimension
There is a close relationship between matricial stability and D. Voiculescu's free entropy dimension [27], [28] (via the free orbit dimension of Hadwin-Shen [13]). Voiculescu defined his free entropy dimension δ0 ([27], [28]) and applied it to show the existence of a II1 factor von Neumann algebra without a Cartan MASA, solving a longstanding problem.
Suppose A is a unital C*-algebra with a tracial state τ. Suppose x1, . . . , xn are elements of A, ε > 0, R > max_{1≤j≤n} ‖xj‖, and N, k ∈ N. Voiculescu [27] defines
ΓR,τ(x1, . . . , xn; N, k, ε) to be the set of all n-tuples (A1, . . . , An) of matrices in Mk(C) with norm at most R such that
|τ(m(x1, . . . , xn)) − τk(m(A1, . . . , An))| < ε
for all ∗-monomials m(t1, . . . , tn) with degree at most N.
Connes' embedding problem is equivalent to the assertion that, for every unital C*-algebra B with a tracial state τ, every n-tuple (x1, . . . , xn) of selfadjoint contractions in B, every N, every R > max_{1≤j≤n} ‖xj‖, and every ε > 0, there is a positive integer k such that ΓR,τ(x1, . . . , xn; N, k, ε) ≠ ∅.
Let Mk(C)^n = {(A1, . . . , An) : A1, . . . , An ∈ Mk(C)} and define ‖·‖2 on Mk(C)^n by
‖(A1, . . . , An)‖2² = Σ_{j=1}^n τk(Aj∗ Aj).
If A = (A1, . . . , An) ∈ Mk(C)^n and U is unitary, we define
U∗AU = (U∗A1U, . . . , U∗AnU).
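The ‖·‖2 norm just defined is the normalized Hilbert-Schmidt norm in each coordinate, and it is invariant under the simultaneous conjugation U∗AU, which is what makes the ω-orbit balls below well behaved. A short check (Python/NumPy; a random orthogonal matrix from a QR decomposition serves as the unitary):

```python
import numpy as np

rng = np.random.default_rng(2)
k, n = 6, 3

def norm2(tup):
    """||(A_1,...,A_n)||_2 with ||.||_2^2 = sum_j tau_k(A_j* A_j)."""
    return sum(np.trace(A.conj().T @ A).real / k for A in tup) ** 0.5

A = tuple(rng.standard_normal((k, k)) for _ in range(n))
Q, _ = np.linalg.qr(rng.standard_normal((k, k)))     # Q is orthogonal (real unitary)
UAU = tuple(Q.conj().T @ Aj @ Q for Aj in A)

# simultaneous unitary conjugation preserves ||.||_2
assert np.isclose(norm2(UAU), norm2(A))
```

The invariance follows coordinatewise from the trace identity τk((U∗AU)∗(U∗AU)) = τk(A∗A).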
If ω > 0, we define the ω-orbit ball of A = (A1, . . . , An), denoted by U(A1, . . . , An; ω), to be the set of all B = (B1, . . . , Bn) ∈ Mk(C)^n such that there is a unitary U ∈ Mk(C) with
‖U∗AU − B‖2 < ω.
If E ⊂ Mk(C)^n, we define the ω-orbit covering number of E, denoted by ν(E, ω), to be the smallest number of ω-orbit balls that cover E.
Let A = C∗(x1, . . . , xn) and, for each positive integer k, let Rep(A, k)/≃ denote the set of all unital ∗-homomorphisms from A into Mk(C) modulo unitary equivalence. If π, ρ ∈ Rep(A, k) have corresponding classes [π], [ρ] ∈ Rep(A, k)/≃, we define a metric by
dk([π], [ρ]) = min_U ‖(π(x1), . . . , π(xn)) − (U∗ρ(x1)U, . . . , U∗ρ(xn)U)‖2,
where U ranges over all k × k unitary matrices. For each 0 < ω < 1, we define νdk(Rep(A, k)/≃, ω) to be the minimal number of dk-balls of radius ω it takes to cover Rep(A, k)/≃.
Lemma 4.1. Suppose A =C*(x1 , . . . , xn ) is matricially tracially stable and τ is an
embeddable tracial state on A. Let R > max1≤j≤n kxj k. For each 0 < ω < 1 there
exists an mω ∈ N such that, for all integers k, N ≥ mω and every 0 < ε < 1/mω
Card (Rep (A, k) / ⋍) ≥ νdk (Rep (A, k) / ⋍, ω/4) ≥ ν (ΓR,τ (x1 , . . . , xn ; N, k, ε) , ω) .
Proof. Let 0 < ω < 1. It is easy to deduce from the ε–δ definition of matricial tracial stability that there is a positive integer mω such that, for every k ∈ N, every N ≥ mω and every 0 < ε < 1/mω, we have for each B = (b1, . . . , bn) ∈ ΓR,τ(x1, . . . , xn; N, k, ε) a representation πB ∈ Rep(A, k) such that
‖B − (πB(x1), . . . , πB(xn))‖2 < ω/4.
Now suppose N ≥ mω and 0 < ε < 1/mω . It follows from the definition of
s = ν (ΓR,τ (x1 , . . . , xn ; N, k, ε) , ω) that there is a collection
{Bj ∈ ΓR,τ (x1 , . . . , xn ; N, k, ε) : 1 ≤ j ≤ s}
TRACIALLY STABLE
so that, for any k × k unitary U and 1 ≤ i ≠ j ≤ s,
‖U∗BiU − Bj‖2 ≥ ω.
It easily follows that, for 1 ≤ i ≠ j ≤ s,
dk([πBi], [πBj]) ≥ ω/2.
Hence every dk-ball with radius ω/4 contains at most one of the [πBj], 1 ≤ j ≤ s. Hence,
Card(Rep(A, k)/⋍) ≥ νdk(Rep(A, k)/⋍, ω/4) ≥ ν(ΓR,τ(x1, . . . , xn; N, k, ε), ω).
In [13] the first named author and J. Shen defined the free-orbit dimension K1(x1, . . . , xn; τ). First let
K(x1, . . . , xn; τ, ω) = inf_{ε,N} lim sup_{k→∞} [log ν(ΓR,τ(x1, . . . , xn; N, k, ε), ω)] / [k² |log ω|],
and let
K1(x1, . . . , xn; τ) = lim sup_{ω→0+} K(x1, . . . , xn; τ, ω).
If A = C*(x1, . . . , xn) is matricially tracially stable and τ is an embeddable tracial state on A, then by Lemma 4.1
(1/|log ω|) lim sup_{k→∞} [log Card(Rep(A, k)/⋍)] / k² ≥ K(x1, . . . , xn; τ, ω).
It follows that if K1(x1, . . . , xn; τ) > 0, then lim sup_{k→∞} [log Card(Rep(A, k)/⋍)] / k² = ∞.
If δ0 (x1 , . . . , xn ; τ ) denotes D. Voiculescu’s free entropy dimension, we know from
[28] that
δ0 (x1 , . . . , xn ; τ ) ≤ 1 + K1 (x1 , . . . , xn ; τ ) .
This gives the following result, which shows that a matricially tracially stable algebra may be forced to have a lot of representations of each large finite dimension.
Theorem 4.2. Suppose A = C*(x1, . . . , xn) is matricially tracially stable and τ is an embeddable tracial state on A such that 1 < δ0(x1, . . . , xn; τ). Then
lim sup_{k→∞} [log Card(Rep(A, k)/⋍)] / k² = ∞.
Below, using this theorem, we give an example showing that for C∗-algebras that are not tracially nuclear, conditions 1) and 2) in Theorem 3.3 are not sufficient for matricial tracial stability.
Lemma 4.3. The set {(U, V ) ∈ Un × Un : C ∗ (U, V ) = Mn (C)} is norm dense in
Un × Un .
Proof. Suppose (U, V) ∈ Un × Un. Perturb U by an arbitrarily small amount so that it is diagonal with no repeated eigenvalues with respect to some orthonormal basis {e1, . . . , en}, and perturb V by an arbitrarily small amount so that its eigenvalues are not repeated and one eigenvector has the form ∑_{k=1}^n λk ek with λk ≠ 0 for 1 ≤ k ≤ n. Any operator commuting with U is then diagonal, and if it also commutes with V it must send this eigenvector of V to a scalar multiple of itself; since all λk are nonzero, its diagonal entries coincide. Hence the commutant of {U, V} is trivial and C*(U, V) = Mn(C).
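As a quick numeric illustration of this perturbation argument (the helper `commutant_dimension` and the setup below are ours, not from the paper): for small n one can verify that a diagonal unitary with distinct eigenvalues together with a generic second unitary has trivial commutant, which for a pair of matrices acting on Cⁿ is equivalent to C∗(U, V) = Mn(C).

```python
import numpy as np

def commutant_dimension(mats, n, tol=1e-8):
    """Dimension of {X : XM = MX for all M in mats}, computed as the kernel
    dimension of the linear map X -> (XM - MX)_M on n x n matrices."""
    cols = []
    for i in range(n):
        for j in range(n):
            E = np.zeros((n, n), dtype=complex)
            E[i, j] = 1.0
            cols.append(np.concatenate([(E @ M - M @ E).ravel() for M in mats]))
    A = np.array(cols).T
    s = np.linalg.svd(A, compute_uv=False)
    return int(np.sum(s < tol))

rng = np.random.default_rng(0)
n = 4
# U: diagonal unitary with distinct eigenvalues
U = np.diag(np.exp(2j * np.pi * rng.random(n)))
# V: a random unitary, obtained as the Q factor of a complex Gaussian matrix
Q, _ = np.linalg.qr(rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n)))
V = Q

print(commutant_dimension([U], n))     # commutant of U alone: all diagonals, dimension n
print(commutant_dimension([U, V], n))  # generic pair: scalars only, dimension 1
```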
The following lemma states that every irreducible representation of (π1 ⊕ π2 ) (A)
must either factor through π1 or through π2 .
Lemma 4.4. If π = π1 ⊕ π2 is a direct sum of unital representations of a unital
C*-algebra A and ρ is an irreducible representation of A such that ker π ⊂ ker ρ,
then either ker π1 ⊂ ker ρ or ker π2 ⊂ ker ρ.
Proof. Suppose ρ : A → B(H) is irreducible. Assume, via contradiction, that there are ai ∈ ker πi with ai ∉ ker ρ for i = 1, 2. Then a1Aa2 ⊆ ker π ⊆ ker ρ, so ρ(a1)ρ(A)ρ(a2) = {0}, and since the weak operator closure of ρ(A) is B(H), we have ρ(a1)B(H)ρ(a2) = {0}. But ρ(a1) ≠ 0 ≠ ρ(a2), which is a contradiction.
Lemma 4.5. (Hoover [18]) If π = π1 ⊕ π2 is a direct sum of unital representations
of a unital C*-algebra A, then π (A) = π1 (A) ⊕ π2 (A) if and only if there is no
irreducible representation ρ of A such that ker π1 ⊂ ker ρ and ker π2 ⊂ ker ρ.
We will give a proof of it for the reader’s convenience.
Proof. Let J1 = {b ∈ π1(A) : b ⊕ 0 ∈ π(A)} and J2 = {b ∈ π2(A) : 0 ⊕ b ∈ π(A)}. Clearly, π(A) = π1(A) ⊕ π2(A) if and only if 1 ∈ J1, if and only if 1 ∈ J2, if and only if 1 ⊕ 1 ∈ J (where J = J1 ⊕ J2). Hence, if π(A) ≠ π1(A) ⊕ π2(A), then J is a proper ideal of π(A) and Jk is a proper ideal of πk(A) for k = 1, 2. Moreover, if π(a) = π1(a) ⊕ π2(a), we have: π(a) ∈ J if and only if π1(a) ∈ J1, if and only if π2(a) ∈ J2. Thus the algebras π(A)/J, π1(A)/J1 and π2(A)/J2 are isomorphic to a common unital C*-algebra D, and there are surjective homomorphisms ρ : π(A) → D and ρk : πk(A) → D such that ρ ◦ π = ρ1 ◦ π1 = ρ2 ◦ π2. If we choose an irreducible representation γ of D and replace ρ, ρ1, ρ2 with γ ◦ ρ, γ ◦ ρ1, γ ◦ ρ2, we get an irreducible representation of A whose kernel contains both ker π1 and ker π2.
Theorem 4.6. There exists a C ∗ -algebra which satisfies the conditions 1) and 2)
of Theorem 3.3 but is not matricially tracially stable.
Proof. Let Cr∗ (F2 ) be the reduced free group C*-algebra. Then Cr∗ (F2 ) has a
unique tracial state τ and two canonical unitary generators U, V . D. Voiculescu
[27] proved that
δ0 (U, V ; τ ) = 2.
It was also shown by U. Haagerup and S. Thorbjørnsen [9] that there are sequences {Uk}, {Vk} of unitary matrices with Uk, Vk ∈ Mk(C) for each k ∈ N such that, for every ∗-polynomial p(x, y),
(4.1) lim_{k→∞} ‖p(Uk, Vk)‖ = ‖p(U, V)‖.
It follows from the uniqueness of the trace on Cr∗(F2) that, for every ∗-polynomial p(x, y),
(4.2) lim_{k→∞} τk(p(Uk, Vk)) = τ(p(U, V)).
By Lemma 4.3 we can assume that each pair (Uk , Vk ) is irreducible, i.e., C*(Uk , Vk ) =
Mk (C). Let U∞ = U1 ⊕U2 ⊕· · · and V∞ = V1 ⊕V2 ⊕· · · , and let A = C ∗ (U∞ , V∞ ).
Clearly A is an RFD C*-algebra. Moreover, for each k ∈ N, A has, up to unitary
equivalence, exactly one irreducible representation πk of dimension k, namely the
one with πk (U∞ ) = Uk and πk (V∞ ) = Vk . Since each k-dimensional representation
of A is unitarily equivalent to a direct sum of at most k irreducible representations of dimension at most k, it follows that
Card(Rep(A, k)/⋍) ≤ k^k.
Thus
lim sup_{k→∞} [log Card(Rep(A, k)/⋍)] / k² ≤ lim sup_{k→∞} (log k)/k = 0.
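The arithmetic behind the last limit is elementary: under the stated bound Card(Rep(A, k)/⋍) ≤ k^k one has log Card(Rep(A, k)/⋍)/k² ≤ (k log k)/k² = (log k)/k. A two-line numeric check of the decay (purely illustrative, ours):

```python
import math

# Under Card(Rep(A, k)/≃) ≤ k^k, we get log Card / k² ≤ (log k)/k, which tends to 0.
for k in (10, 100, 10_000, 1_000_000):
    print(k, math.log(k) / k)
```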
However, there is a unital ∗-homomorphism π : A → Cr∗(F2) such that π(U∞) = U and π(V∞) = V. Thus τ ◦ π is an embeddable tracial state on A, and since
δ0(U∞, V∞; τ ◦ π) = δ0(U, V; τ) = 2,
it follows from Theorem 4.2 that A is not matricially tracially stable.
Claim 1: πn does not factor through ∑⊕_{0≤k≠n<∞} πk.
Proof: Assume, via contradiction, that the claim is false. Since (Un, Vn) is an irreducible pair for n ∈ N, it follows that πn does not factor through ∑⊕_{0≤k<N, k≠n} πk for each positive integer N. Hence, by Lemma 4.4, it follows that πn factors through ∑⊕_{N≤k≠n<∞} πk for each positive integer N. However, since by (4.1), for every a ∈ A, we have
‖πk(a)‖ → ‖π(a)‖,
we see that, for every a ∈ A,
‖πn(a)‖ ≤ ‖π(a)‖.
This means that πn factors through π, which contradicts the fact that Cr∗(F2) has no finite-dimensional representations. Thus Claim 1 must be true.
Claim 2: J = ∑⊕_{k=1}^∞ Mk(C) ⊂ A.
Proof: It is sufficient to show that, for each n, the sequence (ak)_k with ak = Idk if k = n and ak = 0 otherwise belongs to A. Since idA = ⊕πk, the claim follows from Claim 1 and Lemma 4.5.
Clearly A/J is isomorphic to Cr∗(F2). Thus any factor representation of A must either be finite-dimensional or factor through the representation π. Since the extreme tracial states are the factor tracial states [12], we see that the extreme tracial states on A are
{τ ◦ π} ∪ {τk ◦ πk : k ∈ N}.
By (4.2) we see that both conditions in Theorem 3.3 are satisfied, but A is not matricially tracially stable.
5. C ∗ -tracial stability
The following lemma must be very well known; we give a proof for lack of a convenient reference.
Lemma 5.1. Suppose (X, d) is a compact metric space and τ is a state on C (X).
Let σ be the state on C[0,1] given by
σ(f) = ∫_0^1 f(t) dt.
Then, for any non-trivial ultrafilter α on N there is a unital ∗-homomorphism π : C(X) → ∏^α_{n∈N} (C[0,1], σ) such that σα ◦ π = τ.
Proof. We know from [11] that ∏^α_{n∈N} (C[0,1], σ) is a von Neumann algebra and that
∏^α_{n∈N} (C[0,1], σ) = ∏^α_{n∈N} (L∞[0,1], σ).
We know that there is a Borel probability measure µ on X such that, for every f ∈ C(X),
τ(f) = ∫_X f dµ.
Choose a dense subset {f1, f2, . . .} of C(X). For each n ∈ N we can find a Borel partition {En,1, . . . , En,kn} of X so that each En,j has sufficiently small diameter, and points xn,j ∈ En,j for 1 ≤ j ≤ kn, such that, for 1 ≤ m ≤ n and for every x ∈ X,
|fm(x) − ∑_{j=1}^{kn} fm(xn,j) χ_{En,j}(x)| ≤ 1/n.
For each n ∈ N we can partition [0,1] into intervals {In,j : 1 ≤ j ≤ kn} so that
σ(χ_{In,j}) = µ(En,j).
For each n ∈ N we define a unital ∗-homomorphism πn : C(X) → L∞[0,1] by
πn(f) = ∑_{j=1}^{kn} f(xn,j) χ_{In,j}.
We define π : C(X) → ∏^α_{n∈N} (L∞[0,1], σ) by
π(f) = {πn(f)}α.
It is clear, for m ∈ N, that
σα(π(fm)) = lim_{n→α} σ(πn(fm)) = lim_{n→α} ∑_{j=1}^{kn} fm(xn,j) µ(En,j) = τ(fm).
Since {f1, f2, . . .} is dense in C(X), we see that τ = σα ◦ π.
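A toy numeric version of the construction in the proof, with X = [0, 1] and a concrete density for µ (the function names and the particular density are ours): partition X into pieces E_j with sample points x_j, give the interval I_j length µ(E_j), and observe that σ(πn(f)) = ∑_j f(x_j) µ(E_j) approximates τ(f) = ∫_X f dµ as the partition refines.

```python
import math
import numpy as np

def mu_interval(a, b):
    # mu(E_j) for the probability measure on X = [0, 1] with density (1 + x)/1.5
    return ((b - a) + (b**2 - a**2) / 2.0) / 1.5

def quantized_state_value(f, k):
    """sigma(pi_n(f)) = sum_j f(x_j) mu(E_j): partition X = [0, 1] into k pieces
    E_j with sample points x_j, and give the interval I_j length mu(E_j)."""
    edges = np.linspace(0.0, 1.0, k + 1)
    xs = (edges[:-1] + edges[1:]) / 2.0       # sample points x_j in E_j
    mu = mu_interval(edges[:-1], edges[1:])   # lengths assigned to the intervals I_j
    return float(np.sum(f(xs) * mu))

f = lambda x: np.cos(3.0 * x)
# tau(f) = ∫_0^1 cos(3x) (1 + x)/1.5 dx, computed in closed form
exact = (2.0 / 3.0) * (2.0 * math.sin(3.0) / 3.0 + (math.cos(3.0) - 1.0) / 9.0)
for k in (4, 16, 256):
    print(k, abs(quantized_state_value(f, k) - exact))  # shrinks as the partition refines
```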
We say that a topological space X is approximately path-connected if, for each collection {V1, . . . , Vs} of nonempty open subsets of X there is a continuous function γ : [0, 1] → X such that
γ((0, 1)) ∩ Vk ≠ ∅
for 1 ≤ k ≤ s.
Equivalently, traversing the above path γ back and forth a finite number of times,
given 0 < a1 < b1 < · · · < as < bs < 1 we can find a path γ such that
γ ((ak , bk )) ⊂ Vk
for 1 ≤ k ≤ s. Indeed, we first find a path γ1 and 0 < c1 < d1 < · · · < cs < ds < 1
such that
γ1 ((ck , dk )) ⊂ Vk
for 1 ≤ k ≤ s, and then compose γ1 with a homeomorphism on [0, 1] sending ak to
ck and bk to dk for 1 ≤ k ≤ s.
TRACIALLY STABLE
27
To prove that X is approximately path-connected when X is Hausdorff, it clearly suffices to treat the case in which {V1, . . . , Vs} is disjoint. (Choose xk ∈ Vk for each k and choose an open set Wk ⊆ Vk with xk ∈ Wk such that xk = xj ⇒ Wj = Wk and such that the Wk, listed without repetitions, are disjoint; then work with {Wk : 1 ≤ k ≤ s}.)
The following facts are elementary:
(1) Every approximately path-connected space is connected.
(2) A continuous image of an approximately path-connected space is approximately path-connected.
(3) A cartesian product of approximately path-connected spaces is approximately path-connected.
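For completeness, here is a short argument for fact (1), written out as a proof sketch (our addition; the original states the facts without proof):

```latex
\begin{proof}[Sketch of (1)]
Suppose $X = V_1 \sqcup V_2$ with $V_1, V_2$ nonempty, open and disjoint.
The definition applied to $\{V_1, V_2\}$ gives a continuous
$\gamma \colon [0,1] \to X$ with $\gamma((0,1)) \cap V_i \neq \emptyset$ for $i = 1, 2$.
Then $\gamma([0,1]) = (\gamma([0,1]) \cap V_1) \sqcup (\gamma([0,1]) \cap V_2)$
splits the connected set $\gamma([0,1])$ into two nonempty relatively open pieces,
a contradiction.
\end{proof}
```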
An interesting example of a compact approximately path-connected metric space in R² is
A = {(x, sin(1/x)) : 0 < x ≤ 1} ∪ {(0, y) : −1 ≤ y ≤ 1}.
Note that A ∪ B with B = {(x, 0) : −1 ≤ x ≤ 0} is not approximately path-connected. In particular, A and B are approximately path-connected and A ∩ B ≠ ∅, but A ∪ B is not approximately path-connected.
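To see concretely why A is approximately path-connected (a numeric illustration of ours, not from the paper): points on the oscillating arm {(x, sin(1/x)) : 0 < x ≤ 1} come arbitrarily close to any point (0, y0) of the limit segment, so a path running along the arm down to small x meets any given neighborhood of the segment.

```python
import math

def point_on_curve_near(y0, n):
    """A point (x, sin(1/x)) on the oscillating arm with sin(1/x) = y0,
    obtained from 1/x = arcsin(y0) + 2*pi*n, so x -> 0 as n grows and the
    point approaches (0, y0) on the limit segment."""
    theta = math.asin(y0) + 2.0 * math.pi * n
    x = 1.0 / theta
    return x, math.sin(1.0 / x)

for n in (1, 10, 1000):
    x, y = point_on_curve_near(0.5, n)
    print(n, x, abs(y - 0.5))   # distance to (0, 0.5) is essentially x, shrinking to 0
```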
For compact Hausdorff spaces we have a characterization of approximate path-connectedness which will later be used to characterize C∗-tracial stability for commutative C∗-algebras.
Theorem 5.2. Suppose X is a compact Hausdorff space. The following are equivalent:
(1) X is approximately path-connected.
(2) If A is a unital C*-algebra, B is an AW*-algebra, η : A → B is a surjective unital ∗-homomorphism, and π : C(X) → B is a unital ∗-homomorphism, then there is a net {πλ} of unital ∗-homomorphisms from C(X) to A such that, for every f ∈ C(X),
(η ◦ πλ)(f) → π(f)
in the weak topology on B.
(3) If A is a unital C*-algebra, B is a W*-algebra, η : A → B is a surjective unital ∗-homomorphism, and π : C(X) → B is a unital ∗-homomorphism, then there is a net {πλ} of unital ∗-homomorphisms from C(X) to A such that, for every f ∈ C(X),
(η ◦ πλ)(f) → π(f)
in the ultra*-strong topology on B.
(4) For every regular Borel probability measure ν on X, there is a net {ρω} of unital ∗-homomorphisms from C(X) into C[0,1] such that, for every f ∈ C(X),
∫_X f dν = lim_ω ∫_0^1 ρω(f)(x) dx.
Proof. (1) ⇒ (2). Suppose X is approximately path-connected and suppose A, B, η, and π are as in (2). Let Λ be the collection of all tuples of the form λ = (Sλ, Eλ, ελ) where Sλ is a finite set of states on B, Eλ is a finite subset of C(X) consisting of functions from X to [0,1], and ελ > 0. Suppose λ ∈ Λ. Clearly, π(C(X)) is an abelian selfadjoint
C*-subalgebra of B, and therefore there is a maximal abelian C*-subalgebra M of B
such that π (C (X)) ⊆ M. Since B is an AW*-algebra, there is a commuting family
P of projections in B such that M = C ∗ (P). Since C ∗ (Eλ ) is a separable unital
C*-subalgebra of C (X), the maximal ideal space of C ∗ (Eλ ) is a compact metric
space Xλ and there is a surjective continuous function ζλ : X → Xλ . It follows that
Xλ is approximately path-connected. Note that if we identify C ∗ (Eλ ) with C (Xλ )
we can view a function f ∈ C ∗ (Eλ ) as both a function on X and on Xλ and we
have f = f ◦ ζλ . Now π (C ∗ (Eλ )) = π (C (Xλ )) is a separable subalgebra of C ∗ (P)
so there is a countable subset Pλ ⊆ P such that π(C(Xλ)) ⊆ C∗(Pλ). It follows from von Neumann’s argument that there is a wλ = wλ∗ such that σ(wλ) is a totally disconnected subset of (0, 1) and C∗(wλ) = C∗(Pλ). Let ψλ = ∑_{ψ∈Sλ} ψ and let nλ = ♯(Sλ). Then there is a measure µλ on σ(wλ) such that, for every h ∈ C(σ(wλ)), we have
∫_{σ(wλ)} h dµλ = ψλ(h(wλ)).
For each f ∈ Eλ there is a function hf ∈ C(σ(wλ)) such that π(f) = hf(wλ), and there is a function ĥf ∈ C[0,1] with 0 ≤ ĥf ≤ 1 such that ĥf|σ(wλ) = hf. We can view µλ as a Borel measure on [0,1] by defining µλ([0,1]\σ(wλ)) = 0. Clearly, µλ([0,1]) = ψλ(1) = nλ. Since {ĥf : f ∈ Eλ} is equicontinuous and [0,1]\σ(wλ) is dense in [0,1], we can find 0 < a1 < b1 < a2 < b2 < · · · < as < bs < 1 and r1, . . . , rs ∈ σ(wλ) such that, for every f ∈ Eλ and 1 ≤ j ≤ s,
|ĥf(t) − ĥf(rj)| < ελ/4nλ when t ∈ (aj, bj),
and such that
µλ([0,1] \ ∪_{j=1}^s (aj, bj)) < ελ/4.
We know ĥf(rj) = hf(rj) for all f ∈ Eλ and 1 ≤ j ≤ s. Since π maps C(Xλ) into C(σ(wλ)), there is a yj ∈ Xλ for 1 ≤ j ≤ s such that, for every f ∈ Eλ, f(yj) = hf(rj). On the other hand, there is an xj ∈ X for 1 ≤ j ≤ s such that ζλ(xj) = yj. Hence, we have
|ĥf(t) − f(xj)| < ελ/4nλ when t ∈ (aj, bj).
We next choose an open set Vj ⊆ X with xj ∈ Vj such that, for every f ∈ Eλ and every x ∈ Vj, we have
|f(x) − f(xj)| < ελ/4nλ.
We now use the fact that X is approximately path-connected to find a continuous function γλ : [0,1] → X such that, for 1 ≤ j ≤ s,
γλ((aj, bj)) ⊆ Vj.
We have, for each f ∈ Eλ, each 1 ≤ j ≤ s, and each t ∈ (aj, bj),
|ĥf(t) − f ◦ γλ(t)| < ελ/2nλ,
and for t ∈ [0,1] \ ∪_{j=1}^s (aj, bj),
|ĥf(t) − f ◦ γλ(t)| ≤ 2.
Hence, for every f ∈ Eλ,
|ψλ(π(f)) − ψλ((f ◦ γλ)(wλ))| ≤ ∫ |ĥf − (f ◦ γλ)| dµλ < (ελ/2nλ) µλ([0,1]) + 2 µλ([0,1] \ ∪_{j=1}^s (aj, bj)) < ελ.
Since ψλ = ∑_{ψ∈Sλ} ψ, the measure µλ is a sum of probability measures, one corresponding to each ψ ∈ Sλ. We therefore have, for every f ∈ Eλ and every ψ ∈ Sλ,
|ψ(π(f)) − ψ((f ◦ γλ)(wλ))| ≤ ∫ |ĥf − (f ◦ γλ)| dµλ < ελ.
We can choose vλ ∈ A with 0 ≤ vλ ≤ 1 such that η (vλ ) = wλ . We define a unital
∗-homomorphism πλ : C (X) → A by πλ (f ) = (f ◦ γλ ) (vλ ). Hence, for every
f ∈ C (X),
(η ◦ πλ ) (f ) = η ((f ◦ γλ ) (vλ )) = (f ◦ γλ ) (wλ ) .
Hence, for every f ∈ Eλ and every ψ ∈ Sλ we have
|ψ ((η ◦ πλ ) (f )) − ψ (π (f ))| < ελ .
It follows, for every f ∈ C (X) with 0 ≤ f ≤ 1 and every state ψ on B, that
lim ψ ((η ◦ πλ ) (f )) = ψ (π (f )) .
λ
Since every g ∈ C (X) is a linear combination of f ’s with 0 ≤ f ≤ 1 and every
continuous linear functional on B is a linear combination of states, we see, for every
f ∈ C (X), that (η ◦ πλ ) (f ) → π (f ) in the weak topology on B.
(2) ⇒ (3). Since every W*-algebra is an AW*-algebra, we can find a net {πλ} as in (2). Thus, for every f ∈ C(X), (η ◦ πλ)(f) → π(f) and
(η ◦ πλ)(f)∗(η ◦ πλ)(f) = (η ◦ πλ)(f∗f) → π(f∗f) = π(f)∗π(f)
ultraweakly. Hence we have (η ◦ πλ)(f) → π(f) in the ultra*-strong topology on B.
(3) ⇒ (4). By Lemma 5.1 there exists a unital ∗-homomorphism π : C(X) → ∏^α_{n∈N} (C[0,1], σ) such that
(5.1) σα ◦ π(f) = ∫_X f dν
for each f ∈ C(X). Here the state σ on C[0,1] is given by σ(g) = ∫_0^1 g dx. Let η : ∏ C[0,1] → ∏^α_{n∈N} (C[0,1], σ) be the canonical surjection. By 3) there is a net {πλ} of unital ∗-homomorphisms from C(X) to ∏ C[0,1] such that, for every f ∈ C(X),
(η ◦ πλ)(f) → π(f)
ultra*-strongly. By Lemma 2.2 there exist unital ∗-homomorphisms ρn : C(X) → C[0,1] such that
π(f) = η((ρn(f))_{n∈N})
for each f ∈ C(X). By (5.1),
∫_X f dν = lim_α ∫_0^1 ρn(f) dx.
The sequence ∫_0^1 ρn(f) dx contains a subnet ∫_0^1 ρnω(f) dx which is an ultranet. This ultranet has to converge to ∫_X f dν. Indeed, if lim_α tn = t and (tnω) is an ultranet, then lim_ω tnω = t. (Proof: for any ε > 0 the set {nω : |tnω − t| < ε} is infinite, otherwise {nω : |tnω − t| < ε} ∉ α and we would have ∅ = {nω : |tnω − t| ≥ ε} ∩ {n : |tn − t| < ε} ∈ α. Hence t is an accumulation point of {tnω} and, since {tnω} is an ultranet, t is its limit.) Thus we have
∫_X f dν = lim_ω ∫_0^1 ρnω(f) dx.
(4) ⇒ (1). Assume (4) is true. Suppose V1, V2, . . . , Vs are nonempty open subsets of X. There is no harm in assuming {V1, . . . , Vs} is disjoint. For each 1 ≤ j ≤ s we can choose xj ∈ Vj and a continuous function hj : X → [0,1] such that hj(xj) = 1 and hj|X\Vj = 0. Let µ = (1/s) ∑_{j=1}^s δxj. Then µ is a probability measure with ∫_X hj dµ = 1/s. It follows from (4) that there is a unital ∗-homomorphism ρ : C(X) → C[0,1] such that
∫_0^1 ρ(hj)(x) dx ≠ 0
for 1 ≤ j ≤ s. However, there must be a continuous map γ : [0,1] → X such that ρ(f) = f ◦ γ for every f ∈ C(X). For each 1 ≤ j ≤ s, 0 ≠ ∫_0^1 (hj ◦ γ)(x) dx implies that there is a tj ∈ (0,1) such that hj(γ(tj)) ≠ 0. Thus, by the definition of hj, we have γ(tj) ∈ Vj for 1 ≤ j ≤ s. Therefore, X is approximately path-connected.
Remark. In statement (2) in Theorem 5.2, if we view B ⊆ B(H) in its universal representation (that is, the direct sum of all irreducible representations), then the weak operator topology on B is the weak (and the ultraweak) topology on B. Thus if, for every f ∈ C(X),
(η ◦ πλ)(f) → π(f) and [(η ◦ πλ)(f)]∗[(η ◦ πλ)(f)] = (η ◦ πλ)(f∗f) → π(f∗f) = π(f)∗π(f)
weakly, then, for each f ∈ C(X),
(η ◦ πλ)(f) → π(f)
in the ∗-strong operator topology on B(H).
As a corollary we obtain a characterization of when a separable commutative
C ∗ -algebra is C ∗ -tracially stable.
Theorem 5.3. Suppose (X, d) is a compact metric space. The following are equivalent:
(1) C(X) is C∗-tracially stable.
(2) X is approximately path-connected.
(3) For every state τ on C(X) there is a sequence πn : C(X) → C[0,1] such that, for every f ∈ C(X),
τ(f) = lim_{n→∞} ∫_0^1 πn(f)(x) dx.
Proof. 2) ⇔ 3) by the equivalence of statements 1) and 4) in Theorem 5.2 and the separability of C(X).
3) ⇒ 1): Let φ : C(X) → ∏^α_{i∈I} (Ai, ρi) be a unital ∗-homomorphism. By [11], ∏^α_{i∈I} (Ai, ρi) is a von Neumann algebra, and by the equivalence 3) ⇔ 4) in Theorem 5.2, φ is a ∗-strong pointwise limit of liftable ∗-homomorphisms from C(X) to ∏^α_{i∈I} (Ai, ρi). By Lemma 2.2, φ is liftable.
1) ⇒ 3): By Lemma 5.1 there is a unital ∗-homomorphism π : C(X) → ∏^α_{n∈N} (C[0,1], σ) such that σα ◦ π = τ. By 1) we can lift it and obtain a sequence πn : C(X) → C[0,1] such that, for every f ∈ C(X),
τ(f) = lim_α ∫_0^1 πn(f)(x) dx.
Taking a subnet which is an ultranet, we obtain (by the same arguments as in the proof of the implication 3) ⇒ 4) in Theorem 5.2) that for any f ∈ C(X),
τ(f) = lim_ω ∫_0^1 πnω(f)(x) dx.
Since C(X) is separable, we can pass to a subsequence.
Corollary 5.4. Suppose A is a unital commutative C ∗ -tracially stable C*-algebra
and A0 is a unital C*-subalgebra of A. Then A0 is C ∗ -tracially stable.
Proof. We can assume A = C (X) with X an approximately path-connected compact metric space. Also we can write A0 = C (Y ), and, since A0 embeds into
A, there is a continuous surjective map ϕ : X → Y. Thus Y is approximately
path-connected, which implies A0 is C*-tracially stable.
At this point there is little else we can say about C*-tracially stable algebras,
except that they do not have projections when there is a faithful embeddable tracial
state.
Theorem 5.5. Suppose A is a separable C*-tracially stable unital C*-algebra.
Then A/Jet has no nontrivial projections.
Proof. Without loss of generality we can assume Jet = {0}. Then A has a faithful embeddable tracial state σ, and there is a tracial embedding π of (A, σ) into ∏^α_{n∈N} (Cr∗(F2), τ) for some non-trivial ultrafilter α on N. Indeed, Cr∗(F2) has a unique trace τ and it is a subalgebra of the factor von Neumann algebra LF2 such that LF2 = W∗(Cr∗(F2)). It follows from the Kaplansky density theorem that the ‖·‖2-closure of the unit ball of Cr∗(F2) is the unit ball of LF2, which implies that
∏^α (Cr∗(F2), τ) = ∏^α (LF2, τ).
Since LF2 contains Mn(C) for each n ∈ N, ∏^α (Mn(C), τn) embeds into ∏^α (Cr∗(F2), τ) = ∏^α (LF2, τ).
Now since A is tracially stable, there is a sequence {πn} of unital ∗-homomorphisms from A into Cr∗(F2) such that, for every a ∈ A,
σ(a) = lim_{n→α} τ(πn(a)).
Suppose P is a projection in A. Since Cr∗(F2) contains no non-trivial projections, for each n ∈ N we have πn(P) = 0 or πn(P) = 1. Since α is an ultrafilter, eventually πn(P) = 0 along α or eventually πn(P) = 1 along α. Thus π(P) = {πn(P)}α is either 0 or 1. Since π is an embedding, P is either 0 or 1.
Remark. As was pointed out in [11], the tracial ultraproducts remain unchanged
when you replace the 2-norm by a p-norm (1 ≤ p < ∞). Therefore all results in
this paper remain valid if 2-norms are replaced by p-norms.
References
[1] N. P. Brown and N. Ozawa, C∗-algebras and finite-dimensional approximations, Graduate Studies in Mathematics, vol. 88, American Mathematical Society, Providence, RI, 2008.
[2] K. Davidson, C*-algebras by example, Fields Institute Monograph 6, AMS, 1996.
[3] S. Eilers and T. Loring, Computing contingencies for stable relations, Internat. J. Math.,
10(3) (1999), 301 – 326.
[4] S. Eilers, T. Loring, and G. Pedersen, Stability of anticommutation relations: an application of noncommutative CW complexes, J. Reine Angew. Math. 499 (1998), 101 – 143.
[5] R. Exel and T. Loring, Almost commuting unitary matrices, Proc. Amer. Math. Soc. 106
(1989), 913 – 915.
[6] N. Filonov and I. Kachkovskiy, A Hilbert–Schmidt analog of Huaxin Lin's theorem, arXiv:1008.4002.
[7] P. Friis and M. Rørdam, Almost commuting matrices - a short proof of Huaxin Lin's theorem, J. Reine Angew. Math. 479 (1996), 121.
[8] L. Glebsky, Almost commuting matrices with respect to normalized Hilbert-Schmidt norm,
arXiv:1002.3082
[9] U. Haagerup and S. Thorbjørnsen, A new application of random matrices: Ext(Cr∗(F2)) is not a group, Annals of Mathematics 162 (2005), 711 – 775.
[10] D. Hadwin, A lifting characterization of RFD C*-algebras, Math. Scand. 115 (2014), no. 1,
85 - 95.
[11] D. Hadwin and W. Li, A note on approximate liftings. Oper. Matrices 3 (2009), no. 1, 125 143.
[12] D. Hadwin and X. Ma, A note on free products. Oper. Matrices 2 (2008), no. 1, 53 – 65.
[13] D. Hadwin and J. Shen, Free orbit dimension of finite von Neumann algebras, J. Funct. Anal. 249 (2007), 75 – 91.
[14] D. Hadwin and T. Shulman, Variations of projectivity for selfadjoint operator algebras, in
preparation.
[15] D. Hadwin and Y. Zhang, The distance of operators of infinite multiplicity to the normals and nonseparable BDF theory, Science China (Chinese edition), 2016; English version: arXiv:1403.6228.
[16] D. Hadwin, Free entropy and approximate equivalence in von Neumann algebras, Operator
algebras and operator theory (Shanghai, 1997), 111–131, Contemp. Math., 228, Amer. Math.
Soc., Providence, RI, 1998.
[17] P. R. Halmos, Some unsolved problems of unknown depth about operators on Hilbert space,
Proc. Roy. Soc. Edinburgh Sect. A 76 (1976/77), no. 1, 67 - 76.
[18] Thomas B. Hoover, unpublished manuscript, 1975.
[19] I. Kachkovskiy and Yu. Safarov, Distance to normal elements in C*-algebras of real rank zero, J. Amer. Math. Soc., 2015.
[20] Huaxin Lin, Almost commuting self-adjoint matrices and applications, Fields Inst. Commun. 13 (1995), 193.
[21] T. A. Loring, Lifting solutions to perturbing problems in C*-algebras, Fields Institute Monographs, 8, American Mathematical Society, Providence, RI, 1997.
[22] H. Osaka and N. C. Phillips, Furstenberg transformations on irrational rotation algebras,
Ergodic Theory and Dynamical Systems (2006), 26: 1623-1651.
[23] S. Echterhoff, W. Lück, N. C. Phillips, and S. Walters, The structure of crossed products of irrational rotation algebras by finite subgroups of SL2(Z), Journal für die reine und angewandte Mathematik (Crelles Journal), Issue 639 (2010), 173 – 221.
[24] N. Filonov and Y. Safarov, On the relation between an operator and its self-commutator, J. Funct. Anal. 260 (2011), no. 10, 2902 – 2932.
[25] S. Sakai, The theory of W*-algebras, Lecture Notes, Yale University, 1962.
[26] D. Voiculescu, Asymptotically commuting finite rank unitary operators without commuting
approximants, Acta Sci. Math. 45 (1983), 429 - 431.
[27] D. Voiculescu, The analogues of entropy and of Fisher’s information measure in free probability theory II, Invent. Math., 118 (1994), 411-440.
[28] D. Voiculescu, The analogues of entropy and of Fisher’s information measure in free probability theory III: The absence of Cartan subalgebras, Geom. Funct. Anal. 6 (1996), 172–199.
University of New Hampshire
E-mail address: [email protected]
Institute of Mathematics of the Polish Academy of Sciences, Poland
E-mail address: [email protected]
arXiv:1802.07109v1 [math.LO] 20 Feb 2018
THE DERIVED SUBGROUP OF LINEAR AND
SIMPLY-CONNECTED O-MINIMAL GROUPS
ELÍAS BARO
Abstract. We show that the derived subgroup of a linear definable
group in an o-minimal structure is also definable, extending the semialgebraic case proved in [16]. We also show the definability of the derived
subgroup in case that the group is simply-connected.
1. Introduction
Let R = hR, <, +, ·, · · · i be an o-minimal expansion of a real closed field
R. Algebraic groups over R are clearly definable in R; on the other hand, if
G is a group definable in R and R = R then G has a Lie group structure (see
Preliminaries). In fact, the behaviour of o-minimal groups rests in between
algebraic groups and Lie groups. The definability of the derived subgroup
is a good example of this dichotomy.
As it is well-known, the derived subgroup of an irreducible algebraic group
is an irreducible algebraic subgroup. In the context of Lie groups, the derived
subgroup is a virtual subgroup (i.e., the image of a Lie homomorphism).
However, there are examples of Lie groups –even solvable– whose derived
subgroup is not closed [14, Ex.1.4.4]. In two important situations it is closed:
either if the Lie group is linear or it is simply-connected. In both cases the
proof relies on Lie’s third fundamental theorem and therefore it cannot be
reproduced in the o-minimal setting.
A. Conversano [7, §1] exhibited an example of an o-minimal group G whose derived subgroup G′ is not definable (remarkably, the example is semialgebraic over R). Thus, the situation concerning the derived subgroup of o-minimal groups could seem closer to Lie groups than to algebraic ones. However, Conversano's example is a central extension of a simple
group and therefore it is not solvable. Surprisingly, in [3] we proved that if
G is a solvable connected o-minimal group then G′ is definable. Moreover,
the commutator width of G is bounded by dim(G). Recall that the derived
(or commutator) subgroup of G is
G′ = ∪_{n∈N} [G, G]n,
Date: February 21, 2018.
1991 Mathematics Subject Classification. 03C64.
Key words and phrases. o-minimal group, simply-connected, commutator.
Research partially supported by the program MTM2017-82105-P.
where [G, G]n denotes the definable set of at most n products of commutators. The commutator width is the smallest n ∈ N such that G′ = [G, G]n
in case it exists.
In this paper we prove that if G is a connected o-minimal group and G is
either linear (Theorem 3.1) or simply-connected (Theorem 4.8) then G′ is
definable.
In Section 3 we address the linear case. A. Pillay already showed in [16]
that if G is semialgebraic and linear then G′ is semialgebraic. He avoids the
use of Lie’s third fundamental theorem by considering the Zariski closure.
We will combine the result of Pillay with the definability of the derived
subgroup in the solvable case established in [3]. Furthermore, we will also
provide a bound of the commutator width.
In Section 4 we make use of the developments of o-minimal homotopy in [1]
to show that a normal connected definable subgroup of a simply-connected
definable group is simply-connected (Proposition 4.3). This allows us to
make induction arguments, so we can apply the strategy used in [3] to reduce
the problem to a minimal configuration: a central extension of a semisimple
group (Proposition 4.6).
Finally, in Section 5 we apply our results to prove an o-minimal version
of a classical result by A. Malcev (Theorem 4.6) concerning the existence of
cross-section of projection maps of quotients of simply-connected Lie groups.
2. Preliminaries
We fix an o-minimal expansion R of a real closed field R. Henceforth
definable means definable in the structure R possibly with parameters. Let
G be a definable group in R, we refer to [15] for the basics on o-minimal
groups.
For any fixed p ∈ N, the group G is a topological group with a definable
C p -manifold structure compatible with the group operation. Any definable
subgroup of G is closed and a C p -submanifold of G. Since R expands a field,
we have elimination of imaginaries and therefore the quotients of definable
groups by definable subgroups are again definable. A definable group G is
linear if G ≤ GL(n, R) for some n ∈ N.
Any definable subset X of G is the disjoint union of finitely many connected definable components, i.e. definable subsets which cannot be written as the union of two proper open definable subsets. In particular, the
connected component Go of G which contains the identity is a –normal–
subgroup of finite index (it is the smallest one with that property). We say
that G is connected if Go = G. Moreover, the group G has the descending
chain condition on definable subgroups (dcc for short): any strictly descending chain of definable subgroups –which must be closed in the topology– of
G is finite.
In [17] the authors define the Lie algebra of G similarly as in the classical
case. We define the tangent space Te (G) as the set of all equivalence classes
2
of definable C 1 -curves σ : (−1, 1) → G with σ(0) = e, where two curves are
equivalent if they are tangent at 0. We denote by σ̄ the equivalence class
of σ and we endow Te (G) with the natural vector space structure as in the
classical case. Given a local C 3 -chart ϕ : U → Rn around e ∈ U ⊆ G,
ϕ(e) = 0, we can identify Te (G) with Rn via the isomorphism sending σ
to (ϕ ◦ σ)′ (0). Moreover, since ϕ(ey) = ϕ(y) and ϕ(xe) = ϕ(x), using the
Taylor expansion we get that
ϕ(xy) = ϕ(x) + ϕ(y) + α(ϕ(x), ϕ(y)) + · · ·
where α is a bilinear vector-valued form and dots stands for elements of
order greater than 2. The transposition of x and y yields
ϕ(yx) = ϕ(y) + ϕ(x) + α(ϕ(y), ϕ(x)) + · · ·
and therefore we get that
(1) ϕ([x, y]) = ϕ(x−1y−1xy) = γ(ϕ(x), ϕ(y)) + · · ·
where γ(ϕ(x), ϕ(y)) = α(ϕ(x), ϕ(y)) − α(ϕ(y), ϕ(x)) and the dots stand for terms of order greater than 2. It turns out that Te(G) with the bracket operation [X, Y] := γ(X, Y) is the Lie algebra of G, denoted by Lie(G), and it does not depend on the chart ϕ chosen.
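For G = GL(2, R) with the chart ϕ(g) = g − I one has ϕ(xy) = ϕ(x) + ϕ(y) + ϕ(x)ϕ(y), so α(u, v) = uv and γ(u, v) = uv − vu is the matrix commutator. A quick numeric check (ours, purely illustrative) that ϕ([x, y]) agrees with γ(ϕ(x), ϕ(y)) up to terms of order greater than 2:

```python
import numpy as np

# For G = GL(2, R) with chart phi(g) = g - I, expansion (1) gives
# phi([x, y]) = UV - VU + (terms of order > 2) for x = I + tU, y = I + tV.
I = np.eye(2)
U = np.array([[0.0, 1.0], [0.0, 0.0]])
V = np.array([[0.0, 0.0], [1.0, 0.0]])

def third_order_error(t):
    x, y = I + t * U, I + t * V
    commutator = np.linalg.inv(x) @ np.linalg.inv(y) @ x @ y   # group commutator [x, y]
    bracket = (t * U) @ (t * V) - (t * V) @ (t * U)            # gamma(phi(x), phi(y))
    return np.linalg.norm(commutator - I - bracket)

for t in (1e-1, 1e-2, 1e-3):
    print(t, third_order_error(t))   # decays like t^3
```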
Many basic results from the Lie theory have an o-minimal analogue. For
example, if H is a definable subgroup of G then Lie(H) is a Lie subalgebra
of Lie(G). Furthermore:
Fact 2.1. [17, Claim 2.20] If G1 and G2 are two connected definable subgroups of a definable group G with the same Lie algebra then G1 = G2 .
In some sense, locally definable groups play the role of virtual Lie groups. A locally definable group [8] is a subset G = ⋃_{n∈N} X_n of R^ℓ which is a countable union of increasing definable subsets X_n whose group operation restricted to X_n × X_n has image contained in some X_m, m ∈ N, and is a definable map. A homomorphism f : G → H of locally definable groups G = ⋃_{n∈N} X_n and H = ⋃_{m∈N} Y_m is a locally definable homomorphism if for each n there is m such that f(X_n) ⊆ Y_m and the restriction f↾X_n is definable.
As before, G has a locally definable C^p-manifold structure, a submanifold of GL(n, R) in case the group G is linear. We say that a subset Y ⊆ G is compatible if Y ∩ X_n is definable for each n ∈ N. Any compatible subset Y is a disjoint union of its countably many compatible connected components, and we say that G is connected if it is equal to the connected component G^o of the identity.
Fact 2.2. Let G be a connected definable group and let g be its Lie algebra.
Then G′ is a connected locally definable subgroup such that [g, g] ⊆ Lie(G′ ).
Proof. Since G × G → G : (x, y) 7→ [x, y] = x^{-1}y^{-1}xy is continuous, we get that [G, G]_1 is a connected definable subset of G. Therefore [G, G]_n = [G, G] · · · [G, G] (n times) is connected and definable for each n ∈ N. In particular, G′ = ⋃_{n∈N} [G, G]_n is locally definable because [G, G]_n [G, G]_n = [G, G]_{2n} for each n ∈ N. On the other hand, [G′]^o ∩ [G, G]_n is an open and closed definable subset of [G, G]_n and therefore [G, G]_n ⊆ [G′]^o, so that [G′]^o = G′.
Next, let us see that [g, g] ⊆ Lie(G′); we follow [14, §4]. It suffices to show that given X, Y ∈ g then [X, Y] ∈ Lie(G′). Let x, y : (−1, 1) → G be definable C^1-curves such that x′(0) = X and y′(0) = Y. Then

   c(t) := [x(√t), y(√t)]            if t ∈ [0, 1),
   c(t) := [x(√|t|), y(√|t|)]^{-1}   if t ∈ (−1, 0],

is a definable C^1-curve such that c′(0) = [X, Y]. Indeed, by equation (1) above,
   ϕ([x(√t), y(√t)]) = γ(x̄(√t), ȳ(√t)) + · · ·

where ¯ denotes the image under the chart ϕ (so x̄ = ϕ ◦ x and ȳ = ϕ ◦ y). Thus,

   lim_{t→0+} (1/t) ϕ([x(√t), y(√t)]) = lim_{t→0+} (1/t) γ(x̄(√t), ȳ(√t))
                                      = lim_{t→0+} γ(x̄(√t)/√t, ȳ(√t)/√t)
                                      = γ(X, Y) = [X, Y].
On the other hand,

   lim_{t→0−} (1/t) ϕ([x(√−t), y(√−t)]) = lim_{t→0−} (1/t) γ(x̄(√−t), ȳ(√−t))
                                        = lim_{t→0−} −γ(x̄(√−t)/√−t, ȳ(√−t)/√−t)
                                        = −γ(X, Y) = −[X, Y]

and therefore lim_{t→0−} (1/t) ϕ([x(√|t|), y(√|t|)]^{-1}) = [X, Y], as required.
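The limit computation above can be checked numerically in a concrete matrix group. The following sketch (our illustration, with hypothetical helper names; not part of the paper) verifies for 2 × 2 matrices that the group-commutator curve c(t) = [exp(√t X), exp(√t Y)] satisfies (c(t) − I)/t → XY − YX as t → 0⁺:

```python
# Numerical sanity check (our illustration, not from the paper): for a matrix
# group, the curve c(t) = [x(sqrt(t)), y(sqrt(t))] with x(s) = exp(sX) and
# y(s) = exp(sY) satisfies c(t) = I + t[X, Y] + o(t), so c'(0) = XY - YX.

def mat_mul(a, b):
    n = len(a)
    return [[sum(a[i][k] * b[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def mat_scale(a, c):
    return [[c * x for x in row] for row in a]

def identity(n):
    return [[1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]

def expm(a, terms=25):
    # Truncated Taylor series exp(a) = sum_k a^k / k!
    n = len(a)
    result, power, fact = identity(n), identity(n), 1.0
    for k in range(1, terms):
        power = mat_mul(power, a)
        fact *= k
        result = [[result[i][j] + power[i][j] / fact for j in range(n)]
                  for i in range(n)]
    return result

def inv2(a):
    # Inverse of a 2x2 matrix via the adjugate formula
    det = a[0][0] * a[1][1] - a[0][1] * a[1][0]
    return [[a[1][1] / det, -a[0][1] / det],
            [-a[1][0] / det, a[0][0] / det]]

X = [[0.0, 1.0], [0.0, 0.0]]
Y = [[0.0, 0.0], [1.0, 0.0]]
bracket = [[1.0, 0.0], [0.0, -1.0]]      # XY - YX

t = 1e-6
s = t ** 0.5
x, y = expm(mat_scale(X, s)), expm(mat_scale(Y, s))
c = mat_mul(mat_mul(inv2(x), inv2(y)), mat_mul(x, y))  # group commutator
approx = [[(c[i][j] - (1.0 if i == j else 0.0)) / t for j in range(2)]
          for i in range(2)]
err = max(abs(approx[i][j] - bracket[i][j]) for i in range(2) for j in range(2))
print(err < 1e-2)  # the difference quotient is close to [X, Y]
```

The difference quotient agrees with the bracket up to an error of order √t, matching the first-order expansion used in the proof.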
A Lie subalgebra of gl(n, R) is said to be algebraic if it is the Lie algebra
of an algebraic subgroup of GL(n, R). Given a Lie subalgebra g of gl(n, R),
a(g) denotes the minimal algebraic Lie subalgebra of gl(n, R) containing g.
We recall that if g is a subalgebra of gl(n, R) then [g, g] = [a(g), a(g)] is
algebraic (see [14, Ch.3, §3]).
If G is a semialgebraic subgroup of GL(n, R) then G and its Zariski closure Ḡ in GL(n, R) have the same dimension. This is a crucial aspect in the proof of the following result (the bound on the commutator width can be deduced from the proof there, noting that [G, G]_n^{-1} = [G, G]_n for each n ∈ N):

Fact 2.3. [16, Cor.3.3] Let G be a semialgebraic subgroup of GL(n, R). Then G′ is semialgebraic and its commutator width is bounded by dim(G).
It is no longer true in general that if G is a linear o-minimal group then dim(G) = dim(Ḡ). Nevertheless, in the proof of Theorem 3.1 we will make use of the fact that G and Ḡ still have a strong relation, precisely via Ḡ′ (as pointed out in [19]). We recall that by [19, Lem.2.4] the Zariski Lie algebra of an algebraic subgroup of GL(n, R) and its o-minimal Lie algebra canonically coincide. Moreover:
Fact 2.4. [2, Prop.3.9] Let G be a definable subgroup of GL(n, R) and Ḡ its Zariski closure. Then Lie(Ḡ) = a(Lie(G)). Furthermore, if G is connected then Ḡ is irreducible and G is normal in Ḡ.
The solvable radical R(G) of a definable group G is the maximal normal
solvable connected definable subgroup of G. We say that G is semisimple if
R(G) is trivial. A non-abelian definable group G is definably simple if there
is no infinite normal definable subgroup of G. We recall a basic result on semisimple groups; the symbol ≃ denotes definable isomorphism.
Fact 2.5. [11] Let G be a connected definable group. Then G is semisimple
if and only if its Lie algebra g is semisimple. In this case, we have that
G′ = G and there are definably simple subgroups C1 , . . . , Cℓ such that G ≃
C1 × · · · × Cℓ .
We finish with some topological remarks on o-minimal groups. Given a connected definable group G in the o-minimal structure R, we define the o-minimal n-homotopy group π_n^R(G) as in the classical case via definable maps and definable homotopies pointed at the identity element [1, §4]. We say that G is simply-connected if π_1^R(G) = 1.
If R1 is an elementary extension of R and G(R1) is the realization of G in R1, then π_n^{R1}(G(R1)) and π_n^R(G) are canonically isomorphic, so henceforth we shall omit the superscript and write π_n(G). Indeed, this can be deduced from the following stronger result, which will be crucial in our work:
Fact 2.6. [1, Thm.3.1, Cor.4.4] Let X ⊆ Rn and Y ⊆ Rm be connected
semialgebraic sets defined over Q. Then every continuous map f : X → Y
definable in R is definably homotopic to a semialgebraic map g : X → Y
defined over Q. If g1 , g2 : X → Y are two continuous semialgebraic maps defined over Q which are definably homotopic, then they are semialgebraically
homotopic over Q.
In particular, the o-minimal n-homotopy group πn (X) is canonically isomorphic to the classical homotopy group πn (X(R)) of the realization of X
in the real numbers.
Similarly, we can define the o-minimal n-homotopy group of a locally
definable group G and again we have invariance under elementary extensions.
As it happens with Lie groups, the fundamental group interacts with covering maps: an onto locally definable homomorphism p : G → H is a locally definable covering if there is a family of open definable subsets {U_j}_{j∈J} of H = ⋃_{n∈N} H_n whose union is H, each H_n is contained in the union of finitely many U_j, and each p^{-1}(U_j) is a disjoint union of open definable subsets of G, each of which is mapped homeomorphically onto U_j.
Fact 2.7. Let G and H be connected locally definable groups with H simply-connected, and let f : G → H be a locally definable surjective homomorphism. If dim(ker(f)) = 0 then f is an isomorphism.
Proof. Since dim(ker(f )) = 0, the map f is a locally definable covering [8,
Thm.3.6]. Therefore, by [8, Prop.3.4 and 3.12],
ker(f ) = π1 (H)/f∗ (π1 (G)),
where f∗ : π1 (G) → π1 (H) : [γ] 7→ [f ◦ γ]. Since π1 (H) = 1 we get that
ker(f ) = 1, as desired.
3. Linear groups
If G is a connected linear Lie subgroup of GL(n, R) then G′ is a closed
subgroup. Indeed, if g denotes the Lie algebra of G then we have that G′
is a connected virtual subgroup of G whose Lie algebra is [g, g]. On the
other hand, [g, g] is the Lie algebra of an algebraic subgroup H of GL(n, R).
Therefore, G′ and [H ′ ]o have the same Lie algebra, so they are equal (see
Ch.1 §2 and Ch.4 §1 in [14]). It follows that G′ is closed in G.
We cannot adapt the above argument to prove that if G is a linear o-minimal group then G′ is a definable subgroup. Though G′ is a connected locally definable subgroup of G, it is not true that locally definable subgroups of G are uniquely determined by their Lie algebra. For example, the group R and its subgroup of finite elements Fin(R) have the same Lie algebra.
Theorem 3.1. Let G ≤ GL(n, R) be a connected definable group in R.
Then G′ is semialgebraic and connected. Moreover, the commutator width
of G is bounded by dim(G) + dim(G′ ) − dim(G′′ ).
Proof. We argue by induction on the dimension, the initial step is trivial.
So assume that dim(G) > 0.
Let g be the Lie algebra of G. By Fact 2.4 the Zariski closure H := Ḡ ≤ GL(n, R) of G is an irreducible algebraic subgroup of GL(n, R) whose Lie algebra h := Lie(H) equals a(g). The derived subgroup H′ of H is also an irreducible algebraic group with Lie(H′) = [a(g), a(g)] = [g, g]. Denote G1 := H^o and G2 := [H′]^o, which are connected semialgebraic subgroups of GL(n, R). Since Lie(G2) = [g, g] ⊆ g it follows from Fact 2.1 that

G2 ⊴ G ⊴ G1.
We prove that G′ equals the connected semialgebraic group G2 . By Fact
2.3 the groups G′1 and G′2 are both semialgebraic and connected. Thus, the
quotient G1 /G2 is abelian since G′1 = [G1 , G1 ] = [H o , H o ] E [H, H]o = G2 .
In particular G/G2 is abelian, so that G′ ≤ G2 .
On the other hand, consider the connected definable group G/G′2 and note that it is not necessarily linear. However, it is solvable. Indeed, we
already showed above that G′1 E G2 , so that [G1 /G′2 ]′ = G′1 /G′2 E G2 /G′2 is
abelian. Then [G/G′2 ]′ E [G1 /G′2 ]′ is abelian and therefore G/G′2 is solvable,
as desired. Thus, by [3, Thm.3.1] we deduce that [G/G′2 ]′ = G′ /G′2 is
definable and connected, and the commutator width of G/G′2 is bounded by
dim(G/G′2 ). In particular, G′ is definable and connected.
Finally, by Fact 2.2 we have that
Lie(G2 ) = Lie([H ′ ]o ) = Lie(H ′ ) = [g, g] ⊆ Lie(G′ )
and therefore G′ = G2 by Fact 2.1. Note that G′′ = G′2 and thus by the above
we have that the commutator width of G/G′′ is bounded by dim(G/G′′ ) =
dim(G) − dim(G′′ ). On the other hand, by Fact 2.3 the commutator width
of G′ = G2 is bounded by dim(G′ ). Hence, the commutator width of G is
bounded by dim(G) − dim(G′′ ) + dim(G′ ), as required.
We complete the linear case by considering non-connected definable linear
groups:
Corollary 3.2. Let G be a linear group definable in an o-minimal structure, and A and B be two definable subgroups which normalize each other.
Then the subgroup [A, B] is definable and [A, B]o = [Ao , B][A, B o ]. Furthermore, any element of [A, B]o can be expressed as the product of at most
dim([A, B]o ) commutators from [Ao , B] or [A, B o ] whenever Ao or B o is
solvable.
Proof. It suffices to prove that H := Ao B o satisfies condition (∗) of [3,
Thm.3.1]. That is, if K is a normal definable subgroup of H such that H/K
is the central extension of a definable simple group then (H/K)′ = H ′ K/K is
definable. Since H is linear and connected, the latter follows from Theorem
3.1.
4. Simply-connected definable groups
A. Malcev proved the existence of cross-sections for quotients of simply-connected Lie groups by normal closed subgroups. This is a key result that, for example, allows one to study central extensions of simply-connected Lie groups via analytic sections [12].
Fact 4.1. [13] Let G be a simply-connected Lie group and let H ⊴ G be a closed connected subgroup. Let π : G → G/H be the natural homomorphism. Then there exists an analytic mapping σ : G/H → G such that π ◦ σ = id.
Note that with the above notation, the map G → (G/H) × H : x 7→ (π(x), x^{-1}σ(π(x))) is a homeomorphism and therefore both H and G/H are simply-connected.
We are interested in an o-minimal version of this consequence because it will allow us to make arguments by induction. However, the proof in [13] goes through the 1-1 correspondence between Lie algebras and simply-connected Lie groups, which is not available in the o-minimal context.
We follow another approach. E. Cartan proved in [6] that any connected
Lie group has trivial second homotopy group. His proof again goes through
Lie’s third fundamental theorem. W. Browder [5] later gave an alternative
proof (using just homological methods) which is also valid for H-spaces
with finitely generated homology. Recall that a topological space X is an
H-space if there exists a continuous map f : X × X → X and an element
e ∈ X such that both f (−, e) and f (e, −) are homotopic to the identity map
id : X → X.
Lemma 4.2. Let G be a connected definable group. Then π2 (G) = 0.
Proof. By the Triangulation theorem we can assume that there is a finite
simplicial complex K with vertices over Q such that G = |K|(R), where
|K|(R) denotes the realization of K in R. Moreover, we can assume that
the identity of G is one of the vertices.
By Fact 2.6, the group operation on |K|(R) is definably homotopic to
a continuous semialgebraic map f : |K|(R) × |K|(R) → |K|(R) which
is defined over Q. Furthermore, both f (−, e) and f (e, −) are clearly definably homotopic to the identity map id, so again by Fact 2.6 both are
also semialgebraically homotopic to the id over Q. Thus, we can consider f R : |K|(R) × |K|(R) → |K|(R), the realization of K and f over
the real numbers. The polyhedron |K|(R) with the map f R is an H-space.
Moreover, since K is a finite simplicial complex the homology groups of
|K|(R) are clearly finitely generated. Thus, by [5, Thm.6.11] we have that
π2 (|K|(R)) = 0 and in particular π2 (G) = 0, as required.
A continuous definable map p : E → B is a definable fibration if p has the
homotopy lifting property with respect to all definable sets, i.e. for every
definable set X, for every definable homotopy H : X × I → B and for every
definable map g : X → E such that p ◦ g = H(−, 0) there is a definable
homotopy H1 : X × I → E such that p ◦ H1 = H and H1 (−, 0) = g(−).
With the above lemma and the fact that the projection maps onto quotients of definable groups are definable fibrations we get:
Proposition 4.3. Let G be a connected definable group, and let H be a normal
connected definable subgroup of G. Then G is simply-connected if and only
if both H and G/H are simply-connected.
Proof. By [4, Cor.2.4] the projection map G → G/H is a definable fibration.
Therefore, by [1, Thm.4.9], for each n ≥ 2, the o-minimal homotopy groups
πn (G, H) and πn (G/H) are isomorphic. In particular, we have the following
long exact sequence via the o-minimal homotopy sequence of the pair (G, H),
see [1, §4],
π2 (G/H) → π1 (H) → π1 (G) → π1 (G/H) → 0.
Since by Lemma 4.2 we have that π2 (G/H) = 0, we obtain the exact sequence
0 → π1 (H) → π1 (G) → π1 (G/H) → 0.
Therefore, π1 (G) = 0 if and only if both π1 (H) = 0 and π1 (G/H) = 0, as
required.
Once we have that normal connected definable subgroups and their quotients are also simply-connected, we will be able to make induction arguments. For example, we have the following consequence, analogous to the classical one:
Corollary 4.4. Let G be a connected solvable definable group. Then the
following are equivalent:
(1) G is torsion-free.
(2) G is definably diffeomorphic to Rdim(G) .
(3) G is simply-connected.
In particular, if G is simply-connected then any connected definable subgroup
is simply-connected.
Proof. 1) implies 2) follows from [20, Cor.5.7] and 2) implies 3) is obvious.
Let us prove by induction on the dimension that 3) implies 1), the initial
case is obvious. Since G is solvable, by [3, Thm.4.1] we have that G′ is
a normal connected definable proper subgroup of G. If G′ 6= 1 then by
Proposition 4.3 and by the induction hypothesis we get that both G′ and
G/G′ are torsion-free. In particular, G is also torsion-free.
Thus, we can assume that G is abelian. We consider the definable homomorphism f_n : G → G : g 7→ g^n for each n ∈ N. Note that ker(f_n) < G by [21, Prop.6.1]. If 1 < ker(f_n)^o < G for some n ∈ N, then again by Proposition 4.3 and by induction we have that both ker(f_n)^o and G/ker(f_n)^o are torsion-free. In particular, G is also torsion-free. Hence, we can assume that ker(f_n)^o = 1 for all n ∈ N. Thus, by Fact 2.7, we also have that ker(f_n) = 1
for all n ∈ N, as required.
Finally, suppose that G is simply-connected, and let H be a connected
definable subgroup of G. By the above equivalences, we have that G is
torsion-free, so that H is also a connected torsion-free solvable definable
group. Thus, H is simply-connected, as required.
In order to prove the definability of the commutator subgroup of simply-connected groups we will be concerned with the following configuration: a definable central extension of a semisimple definable group. Such extensions were studied in depth in [11]:
Fact 4.5. [11, Cor.5.3] Let G be a connected central extension of a semisimple definable group. Then for each n, the set Z(G) ∩ [G, G]n is finite.
Proposition 4.6. Let G be a simply-connected definable group such that
R(G) = Z(G)o . Then G′ is definable and simply-connected.
Proof. By Fact 2.2 the derived subgroup G′ of G is a connected locally
definable group of G and the projection map
π ↾G′ : G′ → G/Z(G)o
is clearly a locally definable homomorphism. Since R(G) = Z(G)o , the connected definable group G/Z(G)o is semisimple and therefore [G/Z(G)o ]′ =
G/Z(G)o by Fact 2.5, so that π ↾G′ is surjective.
Moreover, by Fact 4.5 the compatible subgroup ker(π ↾G′ ) = G′ ∩ Z(G)o
of G′ has dimension 0. Therefore, since G/Z(G)o is simply-connected by
Proposition 4.3, it follows from Fact 2.7 that π ↾G′ is a definable isomorphism, as required.
We already have all the ingredients to prove the definability of the derived
subgroup of simply-connected definable groups.
Fact 4.7. [3, Cor.4.3] Let G be a definable group and let A, B be normal
connected definable subgroups of G with [A, B] ≤ Z(B) or [A, B] ≤ Z(A).
Then [A, B] is definable and connected.
Proof. For the sake of the presentation, we include a proof in the case that [A, B] ≤ Z(B); the other one is similar. For any a ∈ A and b1, b2 ∈ B we have [a, b1 b2] = [a, b2][a, b1]^{b2} = [a, b2][a, b1] = [a, b1][a, b2]. Thus, the set [a, B]_1 is a group, which is also definable and connected since it is the image of the continuous map B → B : b 7→ [a, b]. In particular, for any a1, . . . , aℓ ∈ A we have that [a1, B]_1 · · · [aℓ, B]_1 is a connected definable subgroup of B. Therefore [A, B] equals any such finite product of maximal dimension, so it is definable and connected.
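For completeness, the commutator identity used at the start of the proof can be verified directly from the convention [a, b] = a^{-1}b^{-1}ab (our verification, not spelled out in the source):

```latex
[a, b_1 b_2]
  = a^{-1} b_2^{-1} b_1^{-1} a \, b_1 b_2
  = \bigl(a^{-1} b_2^{-1} a \, b_2\bigr)\bigl(b_2^{-1} \, a^{-1} b_1^{-1} a \, b_1 \, b_2\bigr)
  = [a, b_2]\,[a, b_1]^{b_2},
```

and [a, b_1]^{b_2} = [a, b_1] precisely because [A, B] ≤ Z(B).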
Theorem 4.8. Let G be a simply-connected group definable in an o-minimal
structure, A and B be two normal connected definable subgroups of G. Then
[A, B] is a normal connected definable subgroup of G.
Proof. Let G be a potential counterexample to our statement of minimal
dimension. Note that dim(G) > 2 because otherwise G is abelian. Let A
and B be two normal connected definable subgroups of G for which [A, B]
is either non-definable or definable but non-connected, and with
d := min(dim(A), dim(B)) ≥ 1
minimal. By Proposition 4.3 the normal definable subgroup AB of G is
simply-connected and therefore G = AB. Note that since A and B are
normal in G we have that [A, B] E A ∩ B.
Claim 1. There is no normal connected definable subgroup C of G contained in A ∩ B with dim(C) < d and C ⊈ Z(A) ∩ Z(B).
Proof. Suppose there exists such a subgroup C, and say it does not centralize
B. Thus, [C, B] is a non-trivial normal connected definable subgroup of G
because dim(C) < d.
Notice that [C, B] is normal in G and therefore by Proposition 4.3 we have that G/[C, B] is simply-connected. Denote by ¯ the quotient by [C, B]. Since G was the minimal counterexample, we get that [Ā, B̄] is definable and connected. But clearly [Ā, B̄] = [A, B]¯ = [A, B]/[C, B] since [C, B] ≤ [A, B], and it follows that [A, B] is definable and connected, a contradiction.
Claim 2. We may assume A = A ∩ B ⊈ Z(A) ∩ Z(B).
Proof. If (A ∩ B) ≤ Z(A) ∩ Z(B) then [A, B] ≤ Z(A) ∩ Z(B). By Fact 4.7 we obtain that [A, B] is definable and connected, a contradiction. Hence, we have that A ∩ B ⊈ Z(A) ∩ Z(B).
On the other hand, by Claim 1 we get that dim((A ∩ B)^o) = dim(A ∩ B) = d.
Since A and B are definably connected it follows that A ∩ B equals A or B,
say A.
In particular, we are now in the situation in which A E B = G.
Claim 3. The subgroups A and B are equal.
Proof. Suppose that dim(A) < dim(B) = dim(G). Then by minimality
of our counterexample we have that A′ = [A, A] is a connected definable
subgroup of A.
Since A′ is characteristic in A, and A is normal in B, we get that A′ is
normal in B. Thus, we can work in B/A′. We denote by ¯ the quotient by A′. Note that Ā is abelian. Then [Ā, B̄] ≤ Ā = Z(Ā) and Fact 4.7 gives that [Ā, B̄] is connected and definable. Since A′ ≤ [A, B], we deduce the definability and connectedness of [A, B], a contradiction.
All in all, we are in the following situation:
G is a simply-connected definable group for which G′ is either non-definable
or definable but non-connected, and such that any proper normal connected
definable subgroup C of G is central in G.
The group is non-solvable, otherwise by [3] we would have that G′ is definable and connected. Since R(G) is a proper connected definable subgroup
of G, we get that R(G) ≤ Z(G) and therefore R(G) = Z(G)o . Then, by
Proposition 4.6 it follows that G′ is definable and connected, a contradiction.
Natural examples of simply-connected o-minimal groups appear in the
literature: the spin groups or the examples in [19, §1] of solvable o-minimal
groups which are not semialgebraic. However, we would like to stress that
simply-connectedness emerges canonically in the context of locally definable
groups. Indeed, every o-minimal group has a (simply-connected) universal
cover which is a locally definable group. Hence, it seems natural to ask:
Question 4.9. Let G be a locally definable group which is the universal cover of a connected o-minimal group. Is G′ a compatible subgroup of G?
Remark 4.10. We would like to finish this section by pointing out that we have recently noticed that part of the results in [3] can be generalized to an abstract model-theoretic context. Let G be a group interpretable in a structure M. Henceforth, definability refers to M^eq. We suppose that to each definable set in Cartesian powers of G there is attached a dimension in N, denoted by dim and satisfying the following axioms:
(Definability) If f is a definable function between two definable sets A and
B, then for every m in N the set {b ∈ B | dim(f −1 (b)) = m} is a definable
subset of B.
(Additivity) If f is a surjective definable function between two definable sets
A and B, whose fibers have constant dimension m in N, then dim(A) =
dim(B) + m.
(Finite sets) A definable set A is finite if and only if dim(A) = 0.
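Two standard consequences worth recording (our remarks, not stated in the text): Additivity applied to a definable bijection, all of whose fibers are singletons of dimension 0, shows that dim is invariant under definable bijections; then Additivity applied to the projection A × B → A, whose fibers {a} × B are in definable bijection with B and hence of constant dimension dim(B), gives the product formula:

```latex
\dim(A) = \dim(B) \quad \text{whenever there is a definable bijection } A \to B,
\qquad
\dim(A \times B) = \dim(A) + \dim(B).
```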
We also assume that G satisfies the dcc. In particular, G has a smallest definable subgroup G^o of finite index, namely the intersection of all of them. Then:
Let G be a solvable group equipped with a dimension and with dcc. Let
A and B be two connected definable subgroups of G which normalize each
other. Then the subgroup [A, B] is definable and connected.
Indeed, applying the reductions in the proof of [3, Thm.6.1], i.e. Claims 1, 2 and 3 in the proof of Theorem 4.8 above, it suffices to handle the
following problem: given a solvable group G equipped with a dimension and
with dcc and such that any proper normal connected definable subgroup
is central, its derived subgroup G′ is definable and connected. Now, if G
is abelian then G′ is trivial and we are done. If G is not abelian, then
an argument in [18, Thm.2.12] shows that there exists a proper normal
connected definable subgroup C of G such that G/C is abelian. This ends
the proof since G′ ≤ C ≤ Z(G) and thus G′ is definable by the corresponding
version of Fact 4.7.
We recall the argument in [18, Thm.2.12]. Take C a proper normal connected subgroup of G of maximal dimension. Suppose that H := G/C is not abelian. Then we prove that there exists a proper normal definable subgroup C̃ of G such that C is a subgroup of C̃ of finite index and G/C̃ is abelian. This yields a contradiction because (G/C)′ would be finite and therefore by [3, Fact 3.1] we would obtain that G/C is abelian (just consider for each g ∈ G the action by conjugation of G on the finite set g^G). Since H is solvable there exists n ∈ N, n > 1, such that

1 = H^(n) < H^(n−1) < · · · < H

where H^(k) := [H^(k−1)]′ for each k ∈ N. Let m ∈ N, m > 1, be minimal with H^(m) finite. Let C̃ be the normal definable subgroup of G such that C̃/C = H^(m) and consider H1 := G/C̃ ≃ H/H^(m). Since E := Z(C_{H1}(H1^(m−1))) is an infinite abelian normal definable subgroup of H1, by maximality of dim(C) we get that E = H1, so that H1 is abelian, as required.
5. Malcev’s cross-section
As an application of our previous results, we prove an o-minimal version of
Fact 4.1. We need first to study Levi decompositions in the simply-connected
case. The following lemma follows from [7], for the sake of completeness we
provide a proof which becomes somewhat easier in our particular setting.
Lemma 5.1. Let G be a simply-connected definable group. Then there exists
a semisimple simply-connected definable subgroup S of G such that G =
R(G)S and R(G) ∩ S = 1.
Proof. It is enough to prove that there is a connected semisimple definable subgroup S such that G = R(G)S and R(G) ∩ S is finite. Indeed, in that case the quotient
G/R(G) = R(G)S/R(G) ≃ S/(R(G) ∩ S)
is simply-connected by Proposition 4.3. Then, by Fact 2.7, the finite normal
subgroup R(G) ∩ S of S must be trivial. In particular, S ≃ G/R(G) is
simply-connected, as required.
Suppose first that G ≤ GL(n, R) is linear. Let g = r + s be a Levi
decomposition of the Lie algebra g of G, where r denotes the radical of
g. We note that Lie(R(G)) = r. Indeed, since R(G) is solvable, its Lie
algebra Lie(R(G)) is solvable [2, Lem.3.7] and therefore Lie(R(G)) ⊆ r.
In particular, since G/R(G) is semisimple, it follows from Fact 2.5 that
g/Lie(R(G)) is semisimple and so Lie(R(G)) = r. On the other hand, since
s = [s, s] = [a(s), a(s)] is algebraic, there is an algebraic subgroup S1 of GL(n, R) whose Lie algebra is s. Therefore S := S1^o is a connected semisimple definable
subgroup of G such that G = R(G)S and R(G) ∩ S is finite, as desired.
Now, suppose that G is almost-linear, i.e. there is a finite normal (central)
subgroup N of G such that G/N is linear. Let π : G → G/N be the canonical
projection, and let R1 := R(G)N/N be the radical of G/N . By the above
there exists a connected semisimple definable subgroup S1 of G/N such that
G/N = R1 S1 and R1 ∩ S1 = 1. Then for the connected semisimple definable
subgroup S := π −1 (S1 )o of G we have that G = R(G)S and R(G) ∩ S is
finite, as required.
For the general case, recall that the kernel of the definable homomorphism
Ad : G → Aut(g) is Z(G), and so the definable group G/Z(G)^o is almost-linear and simply-connected by Proposition 4.3. Denote by π : G → G/Z(G)^o
the canonical projection and let R1 := R(G/Z(G)o ) = R(G)/Z(G)o . Let S1
be a simply-connected semisimple definable subgroup of G/Z(G)o such that
G/Z(G)o = R1 S1 and R1 ∩ S1 = 1.
Consider the connected definable subgroup B := π −1 (S1 ) of G. By Proposition 4.3, since both S1 = B/Z(G)o and Z(G)o are simply-connected, we
have that B is simply-connected. In particular, S := [B, B] is also definable and simply-connected by Theorem 4.8. Let us show that Z(G)o ∩ S
is finite. Indeed, consider an ℵ1 -saturated elementary extension R1 of R,
as well as the realizations G(R1 ), Z(G)o (R1 ) = Z(G(R1 ))o , S1 (R1 ) and
B(R1 ) = π(R1 )−1 (S1 (R1 )) in R1 of the definable groups G, Z(G)o , S1 and
B respectively. Being semisimple and/or simply-connected is preserved under elementary extensions. Therefore, the simply-connected definable group
B(R1 ) is a central extension of the semisimple group S1 (R1 ) and so
Z(G(R1 ))o ∩ [B(R1 ), B(R1 )]n ⊆ Z(B(R1 ))o ∩ [B(R1 ), B(R1 )]n
is finite by Fact 4.5. In particular, since [B(R1), B(R1)] is definable by Theorem 4.8, by saturation there exists n0 ∈ N such that
Z(G(R1 ))o ∩ [B(R1 ), B(R1 )]n0 = Z(G(R1 ))o ∩ [B(R1 ), B(R1 )]n
for all n ≥ n0 . We deduce that
Z(G)o ∩ [B, B]n0 = Z(G)o ∩ [B, B]n
for all n ≥ n0 and therefore Z(G)o ∩ S = Z(G)o ∩ [B, B]n0 is finite.
On the other hand, by Fact 2.5 we have that S1′ = S1 . Thus, π(S) = S1
and
S1 ≃ SZ(G)o /Z(G)o ≃ S/(Z(G)o ∩ S).
Since S1 is simply-connected, it follows from Fact 2.7 that Z(G)o ∩S is trivial.
In particular, S is a simply-connected semisimple definable subgroup of G.
Moreover, we clearly have that G = R(G)S and (R(G) ∩ S)o E R(S) = 1,
as required.
In [20] the authors show that if G is a connected definable group and H
is a contractible normal definable subgroup of G then there is a continuous
section of the projection map G → G/H. The other classical result concerning the existence of cross-sections is, in the o-minimal setting, an easy
consequence of the results in [4]:
Lemma 5.2. Let G be a connected definable group, let H be a normal connected definable subgroup of G, and let π : G → G/H be the natural projection. If G/H is contractible then there exists a continuous definable section σ : G/H → G such that π ◦ σ = id.
Proof. By [4, Cor.2.4] the projection map π : G → G/H is a definable fibration. Since G/H is contractible, there exists a continuous definable map F : G/H × [0, 1] → G/H such that F(−, 0) = id and F(−, 1) is the constant function c : G/H → G/H : ḡ 7→ 1̄. Consider the lifting c̃ : G/H → G : ḡ 7→ 1 of c. By the homotopy lifting property of the projection π with respect to all definable sets, there is a continuous definable map F̃ : G/H × [0, 1] → G such that π ◦ F̃ = F and F̃(−, 1) = c̃. In particular, the continuous definable map σ := F̃(−, 0) : G/H → G satisfies π ◦ σ = id, as desired.
We already have all the ingredients to prove the existence of cross-sections
in the simply-connected case:
Theorem 5.3. Let G be a simply-connected definable group and let H ⊴ G be a connected definable subgroup. Let π : G → G/H be the natural homomorphism. Then there exists a continuous definable section σ : G/H → G such that π ◦ σ = id.
Proof. We prove it by induction on dim(G). The initial case dim(G) = 0
is obvious, so we assume that dim(G) ≥ 1 and the statement holds for
all simply-connected definable groups of dimension less than dim(G). Let
H E G be a connected definable subgroup and π : G → G/H the canonical
projection. Note that both H and G/H are simply-connected by Proposition
4.3.
Claim. If there are proper connected definable subgroups A1 and B1 of G/H such that A1 B1 = G/H and A1 ∩ B1 = 1 then there exists a continuous definable section σ : G/H → G.
Proof. The map
φ : A1 × B1 → G/H : (a, b) 7→ ab
is clearly a definable homeomorphism. In particular, since π1 (A1 × B1 ) =
π1 (A1 ) × π1 (B1 ) by [10, Lem.2.2] and φ∗ : π1 (G/H) → π1 (A1 × B1 ) is an
isomorphism, it follows that both A1 and B1 are simply-connected. Moreover, by Proposition 4.3 the proper definable subgroups A := π −1 (A1 ) and
B := π −1 (B1 ) of G are simply-connected.
By induction there are continuous definable sections σA : A1 → A and σB : B1 → B
such that π ◦ σA = id and π ◦ σB = id. Consider the continuous definable
maps
σA×B : A1 × B1 → A × B : (x, y) 7→ (σA (x), σB (y))
and
ψ : A × B → G : (x, y) 7→ xy.
Then
σ := ψ ◦ σA×B ◦ φ−1 : G/H → G
is a continuous definable map which satisfies π ◦ σ = id, as desired.
By Lemma 5.1 there exists a semisimple simply-connected definable subgroup S1 of G/H such that G/H = R1 S1 and R1 ∩ S1 = 1, where R1 = R(G/H). Thus,
either G/H = R1 or G/H = S1 by the Claim above. If G/H = R1 then
by Corollary 4.4 and Lemma 5.2 there exists a continuous definable section
σ : G/H → G, so we can assume that G/H = S1 is semisimple. Moreover,
again by the Claim and Fact 2.5 we can assume that G/H is definably
simple.
Next, suppose that R(H) is not trivial. Since H is normal in G, and
R(H) is characteristic in H, we get that R(H) is normal in G. Thus, by
Proposition 4.3 the connected definable group G/R(H) is simply-connected.
By induction there is a continuous definable section σ0 : G/H → G/R(H)
of the projection map G/R(H) → G/H. On the other hand, since R(H)
is solvable and simply-connected, by Corollary 4.4 the group R(H) is contractible. Thus, by [20, Thm.5.1] we also have a continuous definable section
σ1 : G/R(H) → G of the projection G → G/R(H). In particular, σ := σ1 ◦σ0
is the desired section of π : G → G/H.
Finally, since we can assume that H is semisimple and G/H is definably simple, it follows that G is semisimple. In particular, there are definably simple normal definable subgroups C1 , . . . , Cℓ of G such that G ≃
C1 × · · · × Cℓ . Since G/H is definably simple, we can assume that π(C1 ) =
G/H. Moreover, ker(π) ∩ C1 is a finite normal subgroup of C1 and therefore
π ↾C1 : C1 → G/H is a definable isomorphism by Fact 2.7. The inverse of
π ↾C1 gives the required continuous definable section σ : G/H → G.
References
[1] E. Baro and M. Otero. On o-minimal homotopy groups. Q. J. Math. 61 (2010), no.
3, 275–289.
[2] E. Baro, A. Berarducci and M. Otero. Cartan subgroups and regular points of o-minimal
groups. arXiv: https://arxiv.org/abs/1707.02738.
[3] E. Baro, E. Jaligot and M. Otero. Commutators in groups definable in o-minimal
structures. Proc. Amer. Math. Soc. 140 (2012), no. 10, 3629–3643.
[4] A. Berarducci, M. Mamino and M. Otero. Higher homotopy of groups definable in
o-minimal structures. Israel J. Math. 180 (2010), 143–161.
[5] W. Browder. Torsion in H-spaces. Annals of Mathematics, Vol. 74, No. 1, July, 1961.
[6] E. Cartan. La topologie des groupes de Lie. Hermann, Paris (1936).
[7] A. Conversano and A. Pillay. On Levi subgroups and the Levi decomposition for
groups definable in o-minimal structures. Fund. Math. 222 (2013), no. 1, 49–62.
[8] M. Edmundo. Covers of groups definable in o-minimal structures. Illinois J. Math. 49
(2005), no. 1, 99–120.
[9] M. Edmundo. Erratum to: "Covers of groups definable in o-minimal structures"
[Illinois J. Math. 49 (2005), no. 1, 99–120]. Illinois J. Math. 51 (2007), no. 3, 1037–
1038.
[10] M. Edmundo and M. Otero. Definably compact abelian groups. J. Math. Log. 4
(2004), no. 2, 163–180.
[11] E. Hrushovski, Y. Peterzil and A. Pillay. On central extensions and definably compact
groups in o-minimal structures. J. Algebra 327 (2011), 71–106.
[12] G. Hochschild. Group extensions of Lie groups. Ann. of Math. (2) 54, (1951). 96–109.
[13] A. Malcev. On the simple connectedness of invariant subgroups of Lie groups. C. R.
Acad. Sci. URSS (N.S.) 34, (1942). 10–13.
[14] A. Onishchik and E.B. Vinberg. Lie groups and algebraic groups. Springer-Verlag,
Berlin (1990), pp. 328. ISBN:3-540-50614-4.
[15] M. Otero. A survey on groups definable in o-minimal structures. Model theory with
applications to algebra and analysis. Vol. 2, 177–206, London Math. Soc. Lecture
Note Ser., 350, Cambridge Univ. Press, Cambridge, 2008.
[16] A. Pillay. An application of model theory to real and p-adic algebraic groups. J.
Algebra, 126 (1989), no.1, 139–146.
[17] Y. Peterzil, A. Pillay and S. Starchenko. Definably simple groups in o-minimal structures. Trans. Amer. Math. Soc. 352 (2000), no. 10, 4397–4419.
[18] Y. Peterzil, A. Pillay and S. Starchenko. Simple algebraic and semialgebraic groups
over real closed fields. Trans. Amer. Math. Soc. 352 (2000), no. 10, 4421–4450.
[19] Y. Peterzil and S. Starchenko. Linear groups definable in o-minimal structures. J.
Algebra 247 (2002), no. 1, 1–23.
[20] Y. Peterzil and S. Starchenko. On torsion-free groups in o-minimal structures. Illinois
J. Math. 49 (2005), no. 4, 1299–1321.
[21] A.W. Strzebonski. Euler characteristic in semialgebraic and other o-minimal groups.
J. Pure Appl. Algebra 96 (1994), no. 2, 173–201.
Departamento de Álgebra; Facultad de Matemáticas; Universidad Complutense de Madrid; Plaza de Ciencias, 3; Ciudad Universitaria; 28040 Madrid;
Spain
E-mail address: [email protected]
Derivative-based Global Sensitivity Measures
and Their Link with Sobol’ Sensitivity Indices
arXiv:1605.07830v1 [] 25 May 2016
Sergei Kucherenko and Shugfang Song
Abstract The variance-based method of Sobol’ sensitivity indices is very popular
among practitioners due to its efficiency and ease of interpretation. However,
for high-dimensional models the direct application of this method can be very time-consuming and prohibitively expensive to use. One of the alternative global sensitivity
analysis methods, known as the method of derivative-based global sensitivity
measures (DGSM), has recently become popular among practitioners. It has a link
with the Morris screening method and Sobol’ sensitivity indices. DGSM are very
easy to implement and evaluate numerically. The computational time required for
numerical evaluation of DGSM is generally much lower than that for estimation of
Sobol’ sensitivity indices. We present a survey of recent advances in DGSM and
new results concerning lower and upper bounds on the values of the Sobol’ total
sensitivity indices $S_i^{tot}$. Using these bounds it is possible in most cases to get a good
practical estimation of the values of $S_i^{tot}$. Several examples are used to illustrate an
application of DGSM.
Keywords: Global sensitivity analysis; Monte Carlo methods; Quasi Monte Carlo
methods; Derivative based global measures; Morris method; Sobol sensitivity indices
1 Introduction
Global sensitivity analysis (GSA) is the study of how the uncertainty in the model
output is apportioned to the uncertainty in model inputs [9], [14]. GSA can provide valuable information regarding the dependence of the model output on its input
parameters. The variance-based method of global sensitivity indices developed by
Sobol’ [11] became very popular among practitioners due to its efficiency and ease of interpretation.

Sergei Kucherenko · Shugfang Song
Imperial College London, London, SW7 2AZ, UK
e-mail: [email protected], e-mail: [email protected]

There are two types of Sobol’ sensitivity indices: the main
effect indices, which estimate the individual contribution of each input parameter to
the output variance, and the total sensitivity indices, which measure the total contribution of a single input factor or a group of inputs [3]. The total sensitivity indices
are used to identify non-important variables which can then be fixed at their nominal values to reduce model complexity [9]. For high-dimensional models the direct
application of variance-based GSA measures can be extremely time-consuming and
impractical.
A number of alternative SA techniques have been proposed. In this paper we
present derivative based global sensitivity measures (DGSM) and their link with
Sobol’ sensitivity indices. DGSM are based on averaging local derivatives using
Monte Carlo or Quasi Monte Carlo sampling methods. These measures were briefly
introduced by Sobol’ and Gershman in [12]. Kucherenko et al [6] introduced some
other derivative-based global sensitivity measures (DGSM) and coined the acronym
DGSM. They showed that the computational cost of numerical evaluation of DGSM
can be much lower than that for estimation of Sobol’ sensitivity indices which later
was confirmed in other works [5]. DGSM can be seen as a generalization and formalization of the Morris importance measure also known as elementary effects [8].
Sobol’ and Kucherenko [15] proved theoretically that there is a link between DGSM
and the Sobol’ total sensitivity index $S_i^{tot}$ for the same input. They showed that
DGSM can be used as an upper bound on the total sensitivity index $S_i^{tot}$. They also
introduced modified DGSM which can be used for both a single input and groups of
inputs [16]. Such measures can be applied for problems with a high number of input
variables to reduce the computational time. Lamboni et al [7] extended results of
Sobol’ and Kucherenko for models with input variables belonging to the class of
Boltzmann probability measures.
The numerical efficiency of the DGSM method can be improved by using automatic differentiation for the calculation of DGSM, as was shown in [5]. However, the number of required function evaluations still remains proportional to
the number of inputs. This dependence can be greatly reduced using an approach
based on algorithmic differentiation in the adjoint or reverse mode [1]. It allows estimating all derivatives at a cost of at most 4–6 times that of evaluating the original
function [4].
This paper is organised as follows: Section 2 presents Sobol’ global sensitivity
indices. DGSM and lower and upper bounds on total Sobol’ sensitivity indices for
uniformly distributed variables and random variables are presented in Sections 3 and
4, respectively. In Section 5 we consider test cases which illustrate an application of
DGSM and their links with total Sobol’ sensitivity indices. Finally, conclusions are
presented in Section 6.
Page: 2
job: Kucherenko_mcqmc
macro: svmult.cls
date/time: 26-May-2016/0:33
Title Suppressed Due to Excessive Length
2 Sobol’ global sensitivity indices
The method of global sensitivity indices developed by Sobol’ is based on the ANOVA
decomposition [11]. Consider a square integrable function $f(x)$ defined in the unit
hypercube $H^d = [0,1]^d$. The decomposition of $f(x)$,

$$f(x) = f_0 + \sum_{i=1}^{d} f_i(x_i) + \sum_{i=1}^{d}\sum_{j>i} f_{ij}(x_i, x_j) + \cdots + f_{12\cdots d}(x_1, \ldots, x_d), \qquad (1)$$

where $f_0 = \int_{H^d} f(x)\,dx$, is called ANOVA if the conditions

$$\int_{H^d} f_{i_1 \ldots i_s}\, dx_{i_k} = 0 \qquad (2)$$

are satisfied for all different groups of indices $i_1, \ldots, i_s$ such that $1 \le i_1 < i_2 < \cdots < i_s \le d$. These conditions guarantee that all terms in (1) are mutually orthogonal with
respect to integration.
The variances of the terms in the ANOVA decomposition add up to the total
variance:

$$D = \int_{H^d} f^2(x)\,dx - f_0^2 = \sum_{s=1}^{d} \sum_{i_1 < \cdots < i_s} D_{i_1 \ldots i_s},$$

where $D_{i_1 \ldots i_s} = \int_{H^d} f_{i_1 \ldots i_s}^2(x_{i_1}, \ldots, x_{i_s})\, dx_{i_1} \cdots dx_{i_s}$ are called partial variances.

Total partial variances account for the total influence of the factor $x_i$:

$$D_i^{tot} = \sum_{\langle i \rangle} D_{i_1 \ldots i_s},$$

where the sum $\sum_{\langle i \rangle}$ is extended over all different groups of indices $i_1, \ldots, i_s$ satisfying the condition $1 \le i_1 < i_2 < \cdots < i_s \le d$, $1 \le s \le d$, where one of the indices is equal
to $i$. The corresponding total sensitivity index is defined as

$$S_i^{tot} = D_i^{tot} / D.$$
Denote by $u_i(x)$ the sum of all terms in the ANOVA decomposition (1) that depend on
$x_i$:

$$u_i(x) = f_i(x_i) + \sum_{j=1,\, j \ne i}^{d} f_{ij}(x_i, x_j) + \cdots + f_{12 \cdots d}(x_1, \ldots, x_d).$$

From the definition of the ANOVA decomposition it follows that

$$\int_{H^d} u_i(x)\, dx = 0. \qquad (3)$$
The total partial variance $D_i^{tot}$ can be computed as

$$D_i^{tot} = \int_{H^d} u_i^2(x)\, dx = \int_{H^d} u_i^2(x_i, z)\, dx_i\, dz.$$

Denote by $z = (x_1, \ldots, x_{i-1}, x_{i+1}, \ldots, x_d)$ the vector of all variables but $x_i$; then $x \equiv (x_i, z)$ and $f(x) \equiv f(x_i, z)$. The ANOVA decomposition of $f(x)$ in (1) can be presented in the following form:

$$f(x) = u_i(x_i, z) + v(z),$$

where $v(z)$ is the sum of terms independent of $x_i$. Because of (2) and (3) it is easy to
show that $v(z) = \int_{H^d} f(x)\, dx_i$. Hence

$$u_i(x_i, z) = f(x) - \int_{H^d} f(x)\, dx_i. \qquad (4)$$
Then the total sensitivity index $S_i^{tot}$ is equal to

$$S_i^{tot} = \frac{\int_{H^d} u_i^2(x)\, dx}{D}. \qquad (5)$$

We note that in the case of independent random variables all definitions of the
ANOVA decomposition remain correct, but all derivations should be considered
in the probabilistic sense, as shown in [14] and presented in Section 4.
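As a concrete illustration of definition (5), $S_i^{tot}$ can be estimated with a standard Jansen-type Monte Carlo estimator, which averages squared differences between pairs of points that differ only in the coordinate $x_i$. The sketch below is our own illustration, not code from the paper; the test function, sample size and estimator choice are arbitrary.

```python
import numpy as np

def total_indices(f, d, N=2**14, rng=None):
    """Estimate all Sobol' total indices with the Jansen estimator:
    S_i^tot ~ (1/(2N)) * sum_k (f(x_k) - f(x_k with x_i resampled))^2 / D."""
    rng = np.random.default_rng(rng)
    A = rng.random((N, d))      # base sample in H^d
    B = rng.random((N, d))      # independent sample used to resample x_i
    fA = f(A)
    D = fA.var()                # estimate of the total variance
    S_tot = np.empty(d)
    for i in range(d):
        ABi = A.copy()
        ABi[:, i] = B[:, i]     # replace only the i-th coordinate
        S_tot[i] = 0.5 * np.mean((fA - f(ABi)) ** 2) / D
    return S_tot

# Linear test function f = x1 + 2*x2: D = 5/12, S^tot = (0.2, 0.8) analytically
f = lambda x: x[:, 0] + 2.0 * x[:, 1]
S = total_indices(f, d=2, rng=0)   # close to [0.2, 0.8]
```

The cost is N(d+1) model runs, matching the count quoted in Section 3.3.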
3 DGSM for uniformly distributed variables

Consider a continuously differentiable function $f(x)$ defined in the unit hypercube
$H^d = [0,1]^d$ such that $\partial f / \partial x_i \in L^2$.

Theorem 1. Assume that $c \le \partial f / \partial x_i \le C$. Then

$$\frac{c^2}{12 D} \le S_i^{tot} \le \frac{C^2}{12 D}. \qquad (6)$$
The proof is presented in [15].
The Morris importance measure, also known as elementary effects, originally defined as finite differences averaged over a finite set of random points [8], was generalized in [6]:

$$\mu_i = \int_{H^d} \frac{\partial f(x)}{\partial x_i}\, dx. \qquad (7)$$
Kucherenko et al [6] also introduced a new DGSM measure:

$$\nu_i = \int_{H^d} \left( \frac{\partial f(x)}{\partial x_i} \right)^2 dx. \qquad (8)$$
In this paper we define two new DGSM measures:

$$w_i^{(m)} = \int_{H^d} x_i^m \frac{\partial f(x)}{\partial x_i}\, dx, \qquad (9)$$

where $m$ is a constant, $m > 0$, and

$$\varsigma_i = \frac{1}{2} \int_{H^d} x_i (1 - x_i) \left( \frac{\partial f(x)}{\partial x_i} \right)^2 dx. \qquad (10)$$

We note that $\nu_i$ is in fact the mean value of $\left( \partial f / \partial x_i \right)^2$. We also note that

$$\frac{\partial f}{\partial x_i} = \frac{\partial u_i}{\partial x_i}. \qquad (11)$$
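All three measures can be estimated by averaging finite-difference derivatives over MC or QMC points. The sketch below is our own illustration under stated assumptions (central differences with a small step; a smooth test function that can be evaluated slightly outside $[0,1]$); it is not code from the paper.

```python
import numpy as np

def dgsm(f, d, N=2**13, h=1e-5, m=1.0, rng=None):
    """Monte Carlo estimates of nu_i (8), w_i^(m) (9) and sigma_i (10)
    using central finite differences for the partial derivatives."""
    rng = np.random.default_rng(rng)
    x = rng.random((N, d))
    nu, w, sig = np.empty(d), np.empty(d), np.empty(d)
    for i in range(d):
        xp, xm = x.copy(), x.copy()
        xp[:, i] += h
        xm[:, i] -= h
        g = (f(xp) - f(xm)) / (2 * h)   # ~ df/dx_i at the sampled points
        nu[i] = np.mean(g ** 2)                                   # eq. (8)
        w[i] = np.mean(x[:, i] ** m * g)                          # eq. (9)
        sig[i] = 0.5 * np.mean(x[:, i] * (1 - x[:, i]) * g ** 2)  # eq. (10)
    return nu, w, sig

# For f = x1^2: nu_1 = 4/3, w_1^(1) = 2/3, sigma_1 = 1/10 analytically
nu, w, sig = dgsm(lambda x: x[:, 0] ** 2, d=1, rng=0)
```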
3.1 Lower bounds on $S_i^{tot}$

Theorem 2. There exists the following lower bound between DGSM (8) and the
Sobol’ total sensitivity index:

$$\frac{\left( \int_{H^d} \left[ f(1, z) - f(0, z) \right] \left[ f(1, z) + f(0, z) - 2 f(x) \right] dx \right)^2}{4 \nu_i D} < S_i^{tot}. \qquad (12)$$
Proof. Consider the integral

$$\int_{H^d} u_i(x) \frac{\partial u_i(x)}{\partial x_i}\, dx. \qquad (13)$$

Applying the Cauchy–Schwarz inequality we obtain the following result:

$$\left( \int_{H^d} u_i(x) \frac{\partial u_i(x)}{\partial x_i}\, dx \right)^2 \le \int_{H^d} u_i^2(x)\, dx \cdot \int_{H^d} \left( \frac{\partial u_i(x)}{\partial x_i} \right)^2 dx. \qquad (14)$$

It is easy to prove that the left and right parts of this inequality cannot be equal.
Indeed, for them to be equal the functions $u_i(x)$ and $\partial u_i(x) / \partial x_i$ would have to be linearly dependent.
For simplicity consider a one-dimensional case: $x \in [0, 1]$. Let us assume that

$$\frac{\partial u(x)}{\partial x} = A u(x),$$

where $A$ is a constant. The general solution to this equation is $u(x) = B \exp(A x)$, where
$B$ is a constant. It is easy to see that this solution is not consistent with condition (3),
which should be imposed on the function $u(x)$.
The integral $\int_{H^d} u_i(x) \frac{\partial u_i(x)}{\partial x_i}\, dx$ can be transformed as

$$\begin{aligned}
\int_{H^d} u_i(x) \frac{\partial u_i(x)}{\partial x_i}\, dx
&= \frac{1}{2} \int_{H^d} \frac{\partial u_i^2(x)}{\partial x_i}\, dx \\
&= \frac{1}{2} \int_{H^{d-1}} \left[ u_i^2(1, z) - u_i^2(0, z) \right] dz \\
&= \frac{1}{2} \int_{H^{d-1}} \left( u_i(1, z) - u_i(0, z) \right) \left( u_i(1, z) + u_i(0, z) \right) dz \\
&= \frac{1}{2} \int_{H^{d-1}} \left( f(1, z) - f(0, z) \right) \left( f(1, z) + f(0, z) - 2 v(z) \right) dz. \qquad (15)
\end{aligned}$$
All terms in the last integrand are independent of $x_i$; hence we can replace integration with respect to $dz$ by integration with respect to $dx$ and substitute $f(x)$ for
$v(z)$ in the integrand, owing to condition (3). Then (15) can be presented as

$$\int_{H^d} u_i(x) \frac{\partial u_i(x)}{\partial x_i}\, dx = \frac{1}{2} \int_{H^d} \left[ f(1, z) - f(0, z) \right] \left[ f(1, z) + f(0, z) - 2 f(x) \right] dx. \qquad (16)$$

From (11), $\partial u_i(x)/\partial x_i = \partial f(x)/\partial x_i$; hence the right-hand side of (14) can be written as
$\nu_i D_i^{tot}$. Finally, dividing (14) by $\nu_i D$ and using (16), we obtain the lower bound (12).
⊓⊔
We call

$$\frac{\left( \int_{H^d} \left[ f(1, z) - f(0, z) \right] \left[ f(1, z) + f(0, z) - 2 f(x) \right] dx \right)^2}{4 \nu_i D}$$

the lower bound number one (LB1).
Theorem 3. There exists the following lower bound between DGSM (9) and the
Sobol’ total sensitivity index:

$$\frac{(2m+1) \left[ \int_{H^d} \left( f(1, z) - f(x) \right) dx - w_i^{(m+1)} \right]^2}{(m+1)^2 D} < S_i^{tot}. \qquad (17)$$
Proof. Consider the integral

$$\int_{H^d} x_i^m u_i(x)\, dx. \qquad (18)$$

Applying the Cauchy–Schwarz inequality we obtain the following result:

$$\left( \int_{H^d} x_i^m u_i(x)\, dx \right)^2 \le \int_{H^d} x_i^{2m}\, dx \cdot \int_{H^d} u_i^2(x)\, dx. \qquad (19)$$

It is easy to see that equality in (19) cannot be attained. For this to happen
the functions $u_i(x)$ and $x_i^m$ would have to be linearly dependent. For simplicity consider a one-dimensional case: $x \in [0, 1]$. Let us assume

$$u(x) = A x^m,$$

where $A \ne 0$ is a constant. This solution does not satisfy condition (3), which should
be imposed on the function $u(x)$.
Further we use the following transformation:

$$\int_{H^d} \frac{\partial \left( x_i^{m+1} u_i(x) \right)}{\partial x_i}\, dx = (m+1) \int_{H^d} x_i^m u_i(x)\, dx + \int_{H^d} x_i^{m+1} \frac{\partial u_i(x)}{\partial x_i}\, dx$$

to present integral (18) in the form

$$\begin{aligned}
\int_{H^d} x_i^m u_i(x)\, dx
&= \frac{1}{m+1} \left[ \int_{H^d} \frac{\partial \left( x_i^{m+1} u_i(x) \right)}{\partial x_i}\, dx - \int_{H^d} x_i^{m+1} \frac{\partial u_i(x)}{\partial x_i}\, dx \right] \\
&= \frac{1}{m+1} \left[ \int_{H^{d-1}} u_i(1, z)\, dz - \int_{H^d} x_i^{m+1} \frac{\partial u_i(x)}{\partial x_i}\, dx \right] \\
&= \frac{1}{m+1} \left[ \int_{H^d} \left( f(1, z) - f(x) \right) dx - \int_{H^d} x_i^{m+1} \frac{\partial u_i(x)}{\partial x_i}\, dx \right]. \qquad (20)
\end{aligned}$$

We notice that

$$\int_{H^d} x_i^{2m}\, dx = \frac{1}{2m+1}. \qquad (21)$$

Using (20) and (21) and dividing (19) by $D$ we obtain (17). ⊓⊔
This second lower bound on $S_i^{tot}$ we denote by $\gamma(m)$:

$$\gamma(m) = \frac{(2m+1) \left[ \int_{H^d} \left( f(1, z) - f(x) \right) dx - w_i^{(m+1)} \right]^2}{(m+1)^2 D} < S_i^{tot}. \qquad (22)$$

In fact, this is a set of lower bounds depending on the parameter $m$. We are interested
in the value of $m$ at which $\gamma(m)$ attains its maximum. Further we use a star to denote
such a value of $m$: $m^* = \arg\max(\gamma(m))$, and call

$$\gamma^*(m^*) = \frac{(2m^*+1) \left[ \int_{H^d} \left( f(1, z) - f(x) \right) dx - w_i^{(m^*+1)} \right]^2}{(m^*+1)^2 D} \qquad (23)$$

the lower bound number two (LB2).

We define the maximum lower bound LB* as

$$LB^* = \max(LB1, LB2). \qquad (24)$$

We note that both lower and upper bounds can be estimated by a set of derivative-based measures:

$$\Upsilon_i = \{ \nu_i, w_i^{(m)} \}, \quad m > 0. \qquad (25)$$
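Since (22) is valid for every m > 0, LB2 can be approximated by evaluating γ(m) on a grid of m values computed from the same sampled derivatives. The sketch below is our own illustration; the grid range, finite-difference step and test function are arbitrary choices, not prescribed by the paper.

```python
import numpy as np

def lb2(f, d, i, N=2**13, h=1e-5, rng=None, m_grid=None):
    """Approximate LB2 = max_m gamma(m) of (22)-(23) for input i by a grid
    search over m, with all integrals estimated by Monte Carlo."""
    rng = np.random.default_rng(rng)
    m_grid = np.linspace(0.1, 20.0, 200) if m_grid is None else m_grid
    x = rng.random((N, d))
    x1 = x.copy()
    x1[:, i] = 1.0                      # the points (1, z)
    fx, f1 = f(x), f(x1)
    D = fx.var()
    A = np.mean(f1 - fx)                # ~ integral of (f(1,z) - f(x)) dx
    xp, xm = x.copy(), x.copy()
    xp[:, i] += h
    xm[:, i] -= h
    g = (f(xp) - f(xm)) / (2 * h)       # ~ df/dx_i
    best = 0.0
    for m in m_grid:
        w = np.mean(x[:, i] ** (m + 1) * g)   # w_i^(m+1), eq. (9)
        best = max(best, (2 * m + 1) * (A - w) ** 2 / ((m + 1) ** 2 * D))
    return best

# For f = x1 + 2*x2, LB2 for x1 should be near 0.0401/D ~ 0.096 (about 0.48*S_1^tot)
val = lb2(lambda x: x[:, 0] + 2.0 * x[:, 1], d=2, i=0, rng=0)
```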
3.2 Upper bounds on $S_i^{tot}$

Theorem 4.

$$S_i^{tot} \le \frac{\nu_i}{\pi^2 D}. \qquad (26)$$
The proof of this Theorem is given in [15].
Consider the set of values $\nu_1, \ldots, \nu_d$, $1 \le i \le d$. One can expect that smaller $\nu_i$
correspond to less influential variables $x_i$.
We further call (26) the upper bound number one (UB1).
Theorem 5.

$$S_i^{tot} \le \frac{\varsigma_i}{D}, \qquad (27)$$

where $\varsigma_i$ is given by (10).

Proof. We use the following inequality [2]:

$$0 \le \int_0^1 u^2\, dx - \left( \int_0^1 u\, dx \right)^2 \le \frac{1}{2} \int_0^1 x (1 - x)\, u'^2\, dx. \qquad (28)$$

The inequality is reduced to an equality only if $u$ is constant. Assume that $u$ satisfies
condition (3); then $\int_0^1 u\, dx = 0$, and from (28) we obtain (27). ⊓⊔
Further we call $\varsigma_i / D$ the upper bound number two (UB2). We note that $\frac{1}{2} x_i (1 - x_i)$
for $0 \le x_i \le 1$ is bounded: $0 \le \frac{1}{2} x_i (1 - x_i) \le \frac{1}{8}$. Therefore, $0 \le \varsigma_i \le \frac{1}{8} \nu_i$.
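The two upper bounds can be evaluated from the same sampled derivatives as ν_i and ς_i. The following sketch (our own illustration, not from the paper) checks them against a Monte Carlo estimate of $S_i^{tot}$ for a linear test function, for which UB2 is exact and UB1 overestimates by the factor 12/π²:

```python
import numpy as np

def bounds_demo(f, d, N=2**13, h=1e-5, rng=None):
    """Return (S_i^tot estimate, UB1, UB2) per input, with UB1 = nu_i/(pi^2 D)
    from (26) and UB2 = sigma_i/D from (27)."""
    rng = np.random.default_rng(rng)
    x = rng.random((N, d))
    xB = rng.random((N, d))
    fx = f(x)
    D = fx.var()
    out = []
    for i in range(d):
        xp, xm = x.copy(), x.copy()
        xp[:, i] += h
        xm[:, i] -= h
        g = (f(xp) - f(xm)) / (2 * h)               # ~ df/dx_i
        nu = np.mean(g ** 2)
        sig = 0.5 * np.mean(x[:, i] * (1 - x[:, i]) * g ** 2)
        xi = x.copy()
        xi[:, i] = xB[:, i]
        s_tot = 0.5 * np.mean((fx - f(xi)) ** 2) / D  # Jansen estimator
        out.append((s_tot, nu / (np.pi ** 2 * D), sig / D))
    return out

res = bounds_demo(lambda x: x[:, 0] + 2.0 * x[:, 1], d=2, rng=0)
```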
3.3 Computational costs
All DGSM can be computed using the same set of partial derivatives $\partial f(x)/\partial x_i$, $i = 1, \ldots, d$. Evaluation of $\partial f(x)/\partial x_i$ can be done analytically for explicitly given, easily differentiable functions, or numerically.
In the case of straightforward numerical estimation of all partial derivatives and
computation of integrals using MC or QMC methods, the number of required function evaluations for a set of all input variables is equal to $N(d+1)$, where $N$ is the number of sampled points. Computing LB1 also requires the values $f(0, z)$, $f(1, z)$, while
computing LB2 requires only the values $f(1, z)$. In total, numerical computation of
LB* for all input variables would require $N_F^{LB^*} = N(d+1) + 2Nd = N(3d+1)$ function evaluations. Computation of all upper bounds requires $N_F^{UB} = N(d+1)$ function
evaluations. We recall that the number of function evaluations required for computation of $S_i^{tot}$ is $N_F^S = N(d+1)$ [10]. The number of sampled points $N$ needed to
achieve numerical convergence can be different for DGSM and $S_i^{tot}$. It is generally
lower for DGSM. The numerical efficiency of the DGSM method can be
significantly increased by using algorithmic differentiation in the adjoint (reverse)
mode [1]. This approach allows estimating all derivatives at a cost of at most 6 times
that of evaluating the original function $f(x)$ [4]. However, as mentioned above,
lower bounds also require computation of $f(0, z)$, $f(1, z)$, so $N_F^{LB^*}$ would only be
reduced to $N_F^{LB^*} = 6N + 2Nd = N(2d + 6)$, while $N_F^{UB}$ would be equal to $6N$.
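The evaluation counts above are easy to verify empirically by wrapping the model with a call counter; the sketch below (our own illustration) confirms the N(d+1) total for a forward-difference derivative pass.

```python
import numpy as np

def count_dgsm_evals(d, N):
    """Count model evaluations in a forward-difference DGSM pass:
    N base points plus N shifted points per input, i.e. N*(d+1)."""
    calls = 0
    def f(x):
        nonlocal calls
        calls += x.shape[0]          # one evaluation per sampled point
        return x.sum(axis=1)
    x = np.random.default_rng(0).random((N, d))
    fx = f(x)                        # N evaluations
    h = 1e-6
    for i in range(d):
        xp = x.copy()
        xp[:, i] += h
        _ = (f(xp) - fx) / h         # N further evaluations per input
    return calls

assert count_dgsm_evals(d=8, N=128) == 128 * (8 + 1)
```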
4 DGSM for random variables

Consider a function $f(x_1, \ldots, x_d)$, where $x_1, \ldots, x_d$ are independent random variables
with distribution functions $F_1(x_1), \ldots, F_d(x_d)$. Thus the point $x = (x_1, \ldots, x_d)$ is defined in the Euclidean space $R^d$ and its measure is $dF_1(x_1) \cdots dF_d(x_d)$.

The following DGSM was introduced in [15]:

$$\nu_i = \int_{R^d} \left( \frac{\partial f(x)}{\partial x_i} \right)^2 dF(x). \qquad (29)$$

We introduce a new measure:

$$w_i = \int_{R^d} \frac{\partial f(x)}{\partial x_i}\, dF(x). \qquad (30)$$
4.1 The lower bounds on $S_i^{tot}$ for normal variables

Assume that $x_i$ is normally distributed with finite variance $\sigma_i^2$ and mean value $\mu_i$.

Theorem 6.

$$\frac{\sigma_i^2 w_i^2}{D} \le S_i^{tot}. \qquad (31)$$

Proof. Consider the integral $\int_{R^d} x_i u_i(x)\, dF(x)$. Applying the Cauchy–Schwarz inequality we obtain

$$\left( \int_{R^d} x_i u_i(x)\, dF(x) \right)^2 \le \int_{R^d} x_i^2\, dF(x) \cdot \int_{R^d} u_i^2(x)\, dF(x). \qquad (32)$$

Equality in (32) can be attained if the functions $u_i(x)$ and $x_i$ are linearly dependent. For
simplicity consider a one-dimensional case. Let us assume

$$u(x) = A (x - \mu),$$

where $A \ne 0$ is a constant. This solution satisfies condition (3) for a normally distributed variable $x$ with mean value $\mu$: $\int_{R^d} u(x)\, dF(x) = 0$.

For normally distributed variables the following equality is true [2]:

$$\int_{R^d} x_i u_i(x)\, dF(x) = \int_{R^d} x_i^2\, dF(x) \cdot \int_{R^d} \frac{\partial u_i(x)}{\partial x_i}\, dF(x). \qquad (33)$$

By definition $\int_{R^d} x_i^2\, dF(x) = \sigma_i^2$. Using (32) and (33) and dividing the resulting
inequality by $D$ we obtain the lower bound (31). ⊓⊔
4.2 The upper bounds on $S_i^{tot}$ for normal variables

The following Theorem 7 is a generalization of Theorem 1.

Theorem 7. Assume that $c \le \partial f / \partial x_i \le C$. Then

$$\frac{\sigma_i^2 c^2}{D} \le S_i^{tot} \le \frac{\sigma_i^2 C^2}{D}. \qquad (34)$$

The constant factor $\sigma_i^2$ cannot be improved.

Theorem 8.

$$S_i^{tot} \le \frac{\sigma_i^2}{D}\, \nu_i. \qquad (35)$$

The constant factor $\sigma_i^2$ cannot be reduced.

Proofs are presented in [15].
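For a linear Gaussian model f(x) = a₁x₁ + a₂x₂ the derivative ∂f/∂x_i = a_i is constant, so w_i = a_i, ν_i = a_i², and both (31) and (35) hold with equality. The sketch below is our own illustration (coefficients and variances chosen arbitrarily) and verifies this numerically with finite-difference derivatives:

```python
import numpy as np

rng = np.random.default_rng(0)
a = np.array([1.0, 3.0])            # model coefficients
sigma = np.array([2.0, 0.5])        # standard deviations of x1, x2
N, h = 2 ** 14, 1e-5
x = rng.normal(0.0, sigma, size=(N, 2))
f = lambda x: x @ a
D = f(x).var()                      # total variance estimate

results = []
for i in range(2):
    xp, xm = x.copy(), x.copy()
    xp[:, i] += h
    xm[:, i] -= h
    g = (f(xp) - f(xm)) / (2 * h)   # equals a_i up to rounding
    lb = sigma[i] ** 2 * g.mean() ** 2 / D        # lower bound (31)
    ub = sigma[i] ** 2 * (g ** 2).mean() / D      # upper bound (35)
    s_tot = a[i] ** 2 * sigma[i] ** 2 / D         # analytic S_i^tot (same D)
    results.append((lb, s_tot, ub))
```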
5 Test cases
In this section we present the results of analytical and numerical estimation of $S_i$,
$S_i^{tot}$, LB1, LB2 and UB1, UB2. The analytical values for DGSM and $S_i^{tot}$ were calculated and compared with numerical results. For test case 2 we present convergence
plots in the form of the root mean square error (RMSE) versus the number of sampled
points $N$. To reduce the scatter in the error estimation the values of the RMSE were
averaged over $K = 25$ independent runs:

$$\varepsilon_i = \left( \frac{1}{K} \sum_{k=1}^{K} \left( \frac{I_{i,k}^* - I_0}{I_0} \right)^2 \right)^{1/2}.$$
Here $I_{i,k}^*$ is the numerically computed value of $S_i^{tot}$, LB1, LB2 or UB1, UB2, and $I_0$ is
the corresponding analytical value of $S_i^{tot}$, LB1, LB2 or UB1, UB2. The RMSE can
be approximated by a trend line $c N^{-\alpha}$. Values of $(-\alpha)$ are given in brackets on the
plots. QMC integration based on Sobol’ sequences was used in all numerical tests.
Example 1. Consider a function linear with respect to $x_i$:

$$f(x) = a(z) x_i + b(z).$$

For this function $S_i = S_i^{tot}$, $D_i^{tot} = \frac{1}{12} \int_{H^{d-1}} a^2(z)\, dz$, $\nu_i = \int_{H^{d-1}} a^2(z)\, dz$,

$$LB1 = \frac{\left( \int_{H^d} \left( a^2(z) - 2 a^2(z) x_i \right) dz\, dx_i \right)^2}{4 D \int_{H^{d-1}} a^2(z)\, dz} = 0$$

and

$$\gamma(m) = \frac{(2m+1)\, m^2 \left( \int_{H^{d-1}} a(z)\, dz \right)^2}{4 (m+2)^2 (m+1)^2 D}.$$

The maximum value of $\gamma(m)$ is attained at $m^* = 3.745$, when $\gamma^*(m^*) = \frac{0.0401}{D} \left( \int_{H^{d-1}} a(z)\, dz \right)^2$. The lower and upper bounds are $LB^* \approx 0.48\, S_i^{tot}$, $UB1 \approx 1.22\, S_i^{tot}$, $UB2 = \frac{1}{12 D} \int_{H^{d-1}} a^2(z)\, dz = S_i^{tot}$. For this test function UB2 < UB1.
Example 2. Consider the so-called g-function, which is often used in GSA for
illustration purposes:

$$f(x) = \prod_{i=1}^{d} g_i, \qquad g_i = \frac{|4 x_i - 2| + a_i}{1 + a_i},$$

where the $a_i$ ($i = 1, \ldots, d$) are constants. It is easy to see that for this
function $f_i(x_i) = g_i - 1$, $u_i(x) = (g_i - 1) \prod_{j=1,\, j \ne i}^{d} g_j$ and as a result LB1 = 0. The total
variance is

$$D = -1 + \prod_{j=1}^{d} \left( 1 + \frac{1/3}{(1 + a_j)^2} \right).$$

The analytical values of $S_i$, $S_i^{tot}$ and LB2 are given in Table 1.
Table 1 The analytical expressions for $S_i$, $S_i^{tot}$ and LB2 for the g-function:

$$S_i = \frac{1/3}{(1 + a_i)^2 D}, \qquad
S_i^{tot} = \frac{1/3}{(1 + a_i)^2 D} \prod_{j=1,\, j \ne i}^{d} \left( 1 + \frac{1/3}{(1 + a_j)^2} \right), \qquad
\gamma(m) = \frac{(2m+1) \left( 1 - \frac{4 \left( 1 - (1/2)^{m+1} \right)}{m+2} \right)^2}{(1 + a_i)^2 (m+1)^2 D}.$$
By solving the equation $\frac{d\gamma(m)}{dm} = 0$, we find that $m^* = 9.64$ and $\gamma(m^*) = \frac{0.0772}{(1+a_i)^2 D}$. It is
interesting to note that $m^*$ does not depend on $a_i$, $i = 1, 2, \ldots, d$, or on $d$. In the extreme
cases: if $a_i \to \infty$ for all $i$, then $\frac{\gamma(m^*)}{S_i^{tot}} \to 0.257$ and $\frac{S_i}{S_i^{tot}} \to 1$, while if $a_i \to 0$ for all $i$, then $\frac{\gamma(m^*)}{S_i^{tot}} \to \frac{0.257}{(4/3)^{d-1}}$ and $\frac{S_i}{S_i^{tot}} \to \frac{1}{(4/3)^{d-1}}$. The analytical expressions for $S_i^{tot}$, UB1 and UB2 are given
in Table 2.
Table 2 The analytical expressions for $S_i^{tot}$, UB1 and UB2 for the g-function:

$$S_i^{tot} = \frac{1/3}{(1 + a_i)^2 D} \prod_{j=1,\, j \ne i}^{d} \left( 1 + \frac{1/3}{(1 + a_j)^2} \right), \qquad
UB1 = \frac{16}{(1 + a_i)^2 \pi^2 D} \prod_{j=1,\, j \ne i}^{d} \left( 1 + \frac{1/3}{(1 + a_j)^2} \right), \qquad
UB2 = \frac{4}{3 (1 + a_i)^2 D} \prod_{j=1,\, j \ne i}^{d} \left( 1 + \frac{1/3}{(1 + a_j)^2} \right).$$

For this test function $\frac{UB1}{S_i^{tot}} = \frac{48}{\pi^2}$ and $\frac{UB2}{S_i^{tot}} = 4$, hence $\frac{UB2}{UB1} = \frac{\pi^2}{12} < 1$. Values of $S_i$,
$S_i^{tot}$, UB and LB2 for the case a = [0, 1, 4.5, 9, 99, 99, 99, 99], d = 8 are given in Table
3 and shown in Figure 1. We can conclude that for this test function the knowledge
of LB2 and UB1, UB2 allows one to rank correctly all the variables in the order of their
importance.
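The closed-form expressions in Tables 1 and 2 are straightforward to evaluate; the sketch below (our own code, not the authors') reproduces the Table 3 values for a = [0, 1, 4.5, 9, 99, 99, 99, 99]:

```python
import numpy as np

def gfunction_measures(a):
    """Analytic S_i, S_i^tot, UB1 and UB2 for the g-function (Tables 1 and 2)."""
    a = np.asarray(a, dtype=float)
    c = (1.0 / 3.0) / (1.0 + a) ** 2            # 1/3 / (1+a_i)^2
    P = np.prod(1.0 + c)
    D = P - 1.0                                  # total variance
    prod_except = P / (1.0 + c)                  # product over j != i
    S = c / D
    S_tot = c * prod_except / D
    UB1 = 16.0 * prod_except / ((1.0 + a) ** 2 * np.pi ** 2 * D)
    UB2 = 4.0 * prod_except / (3.0 * (1.0 + a) ** 2 * D)
    return S, S_tot, UB1, UB2

S, S_tot, UB1, UB2 = gfunction_measures([0, 1, 4.5, 9, 99, 99, 99, 99])
# First column of Table 3: S_1 ~ 0.716, S_1^tot ~ 0.788, UB1 ~ 3.828, UB2 ~ 3.149
```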
Fig. 1 Values of $S_i$, $S_i^{tot}$, LB2 and UB1 for all input variables. Example 2, a = [0, 1, 4.5, 9, 99, 99, 99, 99], d = 8.
Fig. 2 presents the RMSE of numerical estimations of $S_i^{tot}$, UB1 and LB2. For an
individual input, LB2 has the highest convergence rate, followed by $S_i^{tot}$ and then UB1,
in terms of the number of sampled points. However, we recall that computation of
all indices requires $N_F^{LB^*} = N(3d+1)$ function evaluations for LB, while for $S_i^{tot}$
this number is $N_F^S = N(d+1)$ and for UB it is also $N_F^{UB} = N(d+1)$.
Example 3. The Hartmann function

$$f(x) = - \sum_{i=1}^{4} c_i \exp \left[ - \sum_{j=1}^{n} \alpha_{ij} \left( x_j - p_{ij} \right)^2 \right], \qquad x_i \in [0, 1].$$

For this test case the relationship between the values LB1, LB2 and $S_i$ varies
with the change of input (Table 4, Figure 3): for variables $x_2$ and $x_6$, LB1 > $S_i$ > LB2,
while for all other variables LB1 < LB2 < $S_i$. LB* is much smaller than $S_i^{tot}$ for all
inputs. Values of $m^*$ also vary with the change of input. For all variables but variable
2, UB1 > UB2.
Table 3 Values of LB*, $S_i$, $S_i^{tot}$, UB1 and UB2. Example 2, a = [0, 1, 4.5, 9, 99, 99, 99, 99], d = 8.

            x1        x2        x3        x4        x5...x8
LB*         0.166     0.0416    0.00549   0.00166   0.000017
S_i         0.716     0.179     0.0237    0.00720   0.0000716
S_i^tot     0.788     0.242     0.0343    0.0105    0.000105
UB1         3.828     1.178     0.167     0.0509    0.000501
UB2         3.149     0.969     0.137     0.0418    0.00042
Fig. 2 RMSE of $S_i^{tot}$, UB1 and LB2 versus the number of sampled points. Example 2, a = [0, 1, 4.5, 9, 99, 99, 99, 99], d = 8. Variable 1 (a), variable 3 (b) and variable 5 (c). The fitted convergence exponents $(-\alpha)$ shown in the legends lie between about $-1.13$ and $-0.84$.
Fig. 3 Values of $S_i$, $S_i^{tot}$, UB1, LB1 and LB2 for all input variables. Example 3.
Table 4 Values of $m^*$, LB1, LB2, LB*, $S_i$, $S_i^{tot}$, UB1 and UB2 for all input variables. Example 3.

            x1        x2        x3        x4        x5        x6
LB1         0.0044    0.0080    0.0009    0.0029    0.0014    0.0357
LB2         0.0515    0.0013    0.0011    0.0418    0.0390    0.0009
m*          4.6       10.2      17.0      5.5       3.6       19.9
LB*         0.0515    0.0080    0.0011    0.0418    0.0390    0.0357
S_i         0.115     0.00699   0.00715   0.0888    0.109     0.0139
S_i^tot     0.344     0.398     0.0515    0.381     0.297     0.482
UB1         1.089     0.540     0.196     1.088     1.073     1.046
UB2         1.051     0.550     0.150     0.959     0.932     0.899
6 Conclusions
We can conclude that using lower and upper bounds based on DGSM it is possible in
most cases to get a good practical estimation of the values of $S_i^{tot}$ at a fraction of the
CPU cost of estimating $S_i^{tot}$ directly. Small values of the upper bounds imply small values of
$S_i^{tot}$. DGSM can be used for fixing unimportant variables and subsequent model reduction. For a linear function and a product function, DGSM can give the same variable
ranking as $S_i^{tot}$. In the general case variable ranking can be different for DGSM and
variance-based methods. Upper and lower bounds can be estimated using MC/QMC
integration methods using the same set of partial derivative values. Partial derivatives can be efficiently estimated using algorithmic differentiation in the reverse
(adjoint) mode.
We note that all bounds should be computed with sufficient accuracy. Standard
techniques for monitoring convergence and accuracy of MC/QMC estimates should
be applied to avoid erroneous results.
Acknowledgements The authors would like to thank Prof. I. Sobol’ for his invaluable contributions
to this work. The authors also gratefully acknowledge the financial support of the EPSRC grant
EP/H03126X/1.
References
1. A. Griewank and A. Walther. Evaluating derivatives: Principles and techniques of algorithmic differentiation. SIAM Philadelphia, PA, 2008.
2. G.H. Hardy, J.E. Littlewood and G. Polya. Inequalities. Cambridge University Press, Second
edition, 1973.
3. T. Homma and A. Saltelli. Importance measures in global sensitivity analysis of model output.
Reliability Engineering and System Safety, 52(1):1–17, 1996.
4. K. Jansen, H. Leovey, A. Nube, A. Griewank. and M. Mueller-Preussker. A first look at
quasi-Monte Carlo for lattice field theory problems. Comput. Phys. Commun., 185:948–959,
2014.
5. A. Kiparissides, S. Kucherenko, A. Mantalaris and E.N. Pistikopoulos. Global sensitivity
analysis challenges in biological systems modeling. J. Ind. Eng. Chem. Res., 48(15):7168–
7180, 2009.
6. S. Kucherenko, M. Rodriguez-Fernandez, C. Pantelides and N. Shah. Monte Carlo evaluation
of derivative based global sensitivity measures. Reliability Engineering and System Safety,
94(7):1135–1148, 2009.
7. M. Lamboni, B. Iooss, A.L. Popelin and F. Gamboa. Derivative based global sensitivity
measures: general links with Sobol’s indices and numerical tests. Math. Comput. Simulat.,
87:45–54, 2013.
8. M.D. Morris. Factorial sampling plans for preliminary computational experiments. Technometrics, 33:161–174, 1991.
9. A. Saltelli, M. Ratto, T. Andres, F. Campolongo, J. Cariboni, D. Gatelli, M. Saisana and S.
Tarantola. Global sensitivity analysis: The Primer. Wiley, New York, 2008.
10. A. Saltelli, P. Annoni, I. Azzini, F. Campolongo, M. Ratto and S. Tarantola. Variance based
sensitivity analysis of model output: Design and estimator for the total sensitivity index. Comput. Phys. Commun., 181(2):259–270, 2010.
11. I.M. Sobol’. Sensitivity estimates for nonlinear mathematical models. Matem. Modelirovanie,
2: 112-118, 1990 (in Russian). English translation: Math. Modelling and Comput. Experiment,
1(4):407–414, 1993.
12. I.M. Sobol’ and A. Gershman. On alternative global sensitivity estimators. In Proc. SAMO,
Belgirate, pages 40–42, 1995.
13. I.M. Sobol’. Global sensitivity indices for nonlinear mathematical models and their Monte
Carlo estimates. Math. Comput. Simulat. 55(1-3):271–280, 2001.
14. I.M. Sobol’ and S. Kucherenko. Global sensitivity indices for nonlinear mathematical models.
Review. Wilmott Magazine, 1:56–61, 2005.
15. I.M. Sobol’ and S. Kucherenko. Derivative based global sensitivity measures and their link
with global sensitivity indices. Math. Comput. Simulat., 79(10):3009–3017, 2009.
16. I.M. Sobol’ and S. Kucherenko. A new derivative based importance criterion for groups
of variables and its link with the global sensitivity indices. Comput. Phys. Commun.,
181(7):1212–1217, 2010.
Mondshein Sequences (a.k.a. (2, 1)-Orders)
arXiv:1311.0750v4 [] 19 Aug 2016
Jens M. Schmidt
Institute of Mathematics
TU Ilmenau∗
Abstract
Canonical orderings [STOC’88, FOCS’92] have been used as a key tool in graph drawing,
graph encoding and visibility representations for the last decades. We study a far-reaching
generalization of canonical orderings to non-planar graphs that was published by Lee Mondshein
in a PhD-thesis at M.I.T. as early as 1971.
Mondshein proposed to order the vertices of a graph in a sequence such that, for any i, the
vertices from 1 to i induce essentially a 2-connected graph while the remaining vertices from
i + 1 to n induce a connected graph. Mondshein’s sequence generalizes canonical orderings
and became later and independently known under the name non-separating ear decomposition.
Surprisingly, this fundamental link between canonical orderings and non-separating ear decomposition has not been established before. Currently, the fastest known algorithm for computing
a Mondshein sequence achieves a running time of O(nm); the main open problem in Mondshein’s
and follow-up work is to improve this running time to subquadratic time.
After putting Mondshein’s work into context, we present an algorithm that computes a
Mondshein sequence in optimal time and space O(m). This improves the previous best running
time by a factor of n. We illustrate the impact of this result by deducing linear-time algorithms for five other problems, for four of which the previous best running times have been
quadratic. In particular, we show how to
– compute three independent spanning trees in a 3-connected graph in time O(m), improving
a result of Cheriyan and Maheshwari [J. Algorithms 9(4)],
– improve the preprocessing time from O(n2 ) to O(m) for the output-sensitive data structure
by Di Battista, Tamassia and Vismara [Algorithmica 23(4)] that reports three internally
disjoint paths between any given vertex pair,
– derive a very simple O(n)-time planarity test once a Mondshein sequence has been computed,
– compute a nested family of contractible subgraphs of 3-connected graphs in time O(m),
– compute a 3-partition in time O(m), while the previous best running time is O(n²) due to
Suzuki et al. [IPSJ 31(5)].
1 Introduction
Canonical orderings are a fundamental tool used in graph drawing, graph encoding and visibility
representations; we refer to [2] for a wealth of applications. For maximal planar graphs, canonical
orderings were introduced by de Fraysseix, Pach and Pollack [9, 10] in 1988. Kant then generalized
canonical orderings to 3-connected planar graphs [23, 24]. In polyhedral combinatorics, canonical
∗ This research was partly done at Max Planck Institute for Informatics, Saarbrücken. An extended abstract of this paper has been published at ICALP’14.
orders are in addition related to shellings of (dual) convex 3-dimensional polytopes [41]; however,
such shellings are often, as in the Bruggesser-Mani theorem, dependent on the geometry of the
polytope. A combinatorial generalization to arbitrary planar graphs was given by Chiang, Lin and
Lu [7].
Surprisingly, the concept of canonical orderings can be traced back much further, namely to
a long-forgotten PhD-thesis at M.I.T. by Lee F. Mondshein [29] in 1971. In fact, Mondshein
proposed a sequence that generalizes canonical orderings to non-planar graphs, hence making them
applicable to arbitrary 3-connected graphs. Mondshein’s sequence was, independently and in a
different notation, found later by Cheriyan and Maheshwari [6] under the name non-separating
ear decompositions and is sometimes also called (2,1)-order (e.g., see [5]). In addition, Mondshein
sequences provide a generalization of Schnyder’s famous woods to non-planar 3-connected graphs.
One key contribution of this paper is to establish the above fundamental link between canonical
orderings and non-separating ear decompositions in detail.
Computationally, it is an intriguing question how fast a Mondshein sequence can be computed.
Mondshein himself gave an involved algorithm with running time O(m²). Cheriyan showed that
it is possible to achieve a running time of O(nm) by using a theorem of Tutte that proves the
existence of non-separating cycles in 3-connected graphs [35]. Both works state as main open
problem, whether it is possible to compute a Mondshein sequence in subquadratic time (see [29, p.
1.2] and [6, p. 532]).
We present the first algorithm that computes a Mondshein sequence in optimal time and space
O(m), hence solving the above 40-year-old problem. The interest in such a computational result
stems from the fact that 3-connected graphs play a crucial role in algorithmic graph theory. We
illustrate this in five applications by giving linear-time algorithms. For four of them, the previous
best running times have been quadratic.
We start by giving an overview of Mondshein’s work and its connection to canonical orderings
and non-separating ear decompositions in Section 3. Section 4 explains the linear-time algorithm
and proves its main technical lemma, the Path Replacement Lemma. Section 5 covers five applications of our linear-time algorithm.
2 Preliminaries
We use standard graph-theoretic terminology and assume that all graphs are simple.
Definition 1 ([26, 39]). An ear decomposition of a graph G = (V, E) is a sequence (P0 , P1 , . . . , Pk )
of subgraphs of G that partition E such that P0 is a cycle and every Pi , 1 ≤ i ≤ k, is a path that
intersects P0 ∪ · · · ∪ Pi−1 in exactly its endpoints. Each Pi is called an ear. An ear is short if it is
an edge and long otherwise.
According to Whitney [39], every ear decomposition has exactly m − n + 1 ears and G has an ear decomposition if and only if G is 2-connected. For any i, let Gi := P0 ∪ · · · ∪ Pi and Vi := V − V(Gi). We write G̅i to denote the graph induced by Vi. Note that G̅i does not necessarily contain all edges in E − E(Gi); in particular, there may be short ears in E − E(Gi) that have both endpoints in Gi.
For a path P and two vertices x and y in P , let P [x, y] be the subpath in P from x to y. A
path with endpoints v and w is called a vw-path. A vertex x in a vw-path P is an inner vertex of P if x ∉ {v, w}. For convenience, every vertex in a cycle is called an inner vertex of that cycle. For an ear P, let inner(P) be the set of its inner vertices. The inner vertex sets of the ears in an
ear decomposition of G play a special role, as they partition V . Every vertex of G is contained in
exactly one long ear as inner vertex. This gives readily the following characterization of Vi .
Observation 2. For every i, Vi is the union of the inner vertices of all long ears Pj with j > i.
We will compare vertices and edges of G by their first occurrence in a fixed ear decomposition.
Definition 3. Let D = (P0 , P1 , . . . , Pm−n ) be an ear decomposition of G. For an edge e ∈ G, let
birthD (e) be the index i such that Pi contains e. For a vertex v ∈ G, let birthD (v) be the minimal
i such that Pi contains v (thus, PbirthD (v) is the ear containing v as an inner vertex). Whenever D
is clear from the context, we will omit D.
Clearly, for every vertex v, the ear Pbirth(v) is long, as it contains v as an inner vertex.
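To make Definition 3 concrete, the birth values can be computed in a single pass over a fixed ear decomposition. The sketch below is ours, not the paper's; the list-based ear encoding (each ear as a vertex list, with P0 repeating its first vertex at the end) is an assumption.

```python
# Our sketch (not from the paper): each ear is a list of vertices; P0 is a
# cycle and is listed with its first vertex repeated at the end.

def birth_values(ears):
    """Return (birth of each vertex, birth of each edge), as in Definition 3."""
    birth_vertex, birth_edge = {}, {}
    for i, ear in enumerate(ears):
        for v in ear:
            birth_vertex.setdefault(v, i)        # minimal i such that Pi contains v
        for x, y in zip(ear, ear[1:]):
            birth_edge[frozenset((x, y))] = i    # the unique ear containing xy
    return birth_vertex, birth_edge

# K4 on {1,2,3,4}: P0 = cycle 1-2-3, P1 = path 2-4-1, P2 = short ear 3-4.
ears = [[1, 2, 3, 1], [2, 4, 1], [3, 4]]
bv, be = birth_values(ears)
assert bv[4] == 1 and be[frozenset((3, 4))] == 2
```

Note that the ear Pbirth(v) of every vertex v is long, matching the remark above.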
3 Generalizing Canonical Orderings
Although canonical orderings of (maximal or 3-connected) planar graphs are traditionally defined
as vertex partitions, we will define them as special ear decompositions. This will allow for an easy
comparison of canonical orderings to the more general Mondshein sequences, which extend them
to non-planar graphs. We assume that the input graphs are 3-connected and, when talking about
canonical orderings, planar. It is well-known that maximal planar graphs (which were considered
in [9] in this setting) form a subclass of 3-connected graphs, apart from the triangle-graph.
Definition 4. An ear decomposition is non-separating if, for every long ear Pi except the last one, every inner vertex of Pi has a neighbor in G̅i.
The name non-separating refers to the following helpful property.
Lemma 5. In a non-separating ear decomposition D, G̅i is connected for every i.
Proof. For all i satisfying G̅i = ∅ the claim is true, in particular if i is at least the index of the last long ear. Otherwise, i is such that the inner vertex set A of the last long ear in D is contained in G̅i. Consider any vertex x in G̅i. In order to show connectedness, we exhibit a path from x to A in G̅i. If x ∈ A, we just take the path of length zero. Otherwise, the vertex x has a neighbor in G̅birth(x), since D is non-separating. According to Observation 2, this neighbor is an inner vertex of some ear Pj with j > birth(x). Applying induction on j gives the desired path to A.
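Definition 4 can be verified mechanically on small instances. The following checker is our own sketch (the list-based ear encoding and the adjacency-set dictionary for G are assumptions); it tests exactly the condition of Definition 4 via the birth values.

```python
# Our sketch (assumed encodings): ears as vertex lists (P0 repeats its first
# vertex at the end), adj as an adjacency-set dictionary of the whole graph G.

def is_non_separating(ears, adj):
    """Check Definition 4: every inner vertex of each long ear Pi, except the
    last long one, must have a neighbor that is born strictly later."""
    birth = {}
    for i, ear in enumerate(ears):
        for v in ear:
            birth.setdefault(v, i)

    def inner(ear):
        return set(ear) if ear[0] == ear[-1] else set(ear[1:-1])

    long_idx = [i for i, ear in enumerate(ears) if inner(ear)]
    for i in long_idx[:-1]:                  # every long ear except the last one
        for v in inner(ears[i]):
            # by Observation 2, a later-born neighbor is exactly a neighbor
            # in the graph induced by Vi, which is what Lemma 5 exploits
            if not any(birth[w] > i for w in adj[v]):
                return False
    return True

adj = {1: {2, 3, 4}, 2: {1, 3, 4}, 3: {1, 2, 4}, 4: {1, 2, 3}}  # K4
ears = [[1, 2, 3, 1], [2, 4, 1], [3, 4]]
assert is_non_separating(ears, adj)
```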
A plane graph is a graph that is embedded into the plane. In particular, a plane graph has a
fixed outer face. We define canonical orderings as follows.
Definition 6 (canonical ordering). Let G be a 3-connected plane graph and let rt and ru be edges
of its outer face. A canonical ordering through rt and avoiding u is an ear decomposition D of G
such that
1. rt ∈ P0 ,
2. Pbirth(u) is the last long ear, contains u as its only inner vertex and does not contain ru, and
3. D is non-separating.
The fact that D is non-separating plays a key role for both canonical orderings and their generalization to non-planar graphs. E.g., Lemma 5 implies that the plane graph G can be constructed
from P0 by successively inserting the ears of D to only one dedicated face of the current embedding,
a routine that is heavily applied in graph drawing and embedding problems. Put simply, the second
condition forces u to be “added last” in D. Further motivations are given by 3-connectivity: If we did not restrict u to be the only vertex in Pbirth(u), other vertices in the same ear could have degree two, as the non-separateness does not imply any later neighbors for the last ear. The condition ru ∉ Pbirth(u) ensures that u has degree at least three in G (which is necessary for 3-connectivity)
and will also lead to the existence of a third independent spanning tree (see Application 1 in
Section 5).
We note that forcing one edge rt in P0 is optimal in the sense that two edges rz and rt cannot
be forced: Let W be a sufficiently large wheel graph with center vertex r and rim vertices t and z
such that t and z are not adjacent. Then a canonical ordering with rt, rz ∈ P0 and avoiding u does
not exist, as any inner vertex on the rim-path from t to z not containing u has no larger neighbor
with respect to birth, and thus violates the non-separateness.
The original definition of canonical orderings by Kant [24] states the following additional properties.
Lemma 7 (further properties). For every 0 ≤ i ≤ m − n in a canonical ordering,
4. the outer face Ci of the plane subgraph Gi ⊆ G is a (simple) cycle that contains rt,
5. Gi is 2-connected and every separation pair of Gi has both its vertices in Ci , and
6. for i > 0, the neighbors of inner(Pi ) in Gi−1 are contained consecutively in Ci−1 .
Further, the existence of a canonical ordering implies the existence of one that additionally satisfies the following property:
7. if |inner(Pi)| ≥ 2, each inner vertex of Pi has degree two in G − Vi.
Properties 4–6 can be easily deduced from Definition 6 as follows: Every Gi is a 2-connected
plane subgraph of G, as Gi has an ear decomposition. According to [34, Corollary 1.3], all faces of
a 2-connected plane graph form cycles. Thus, every Ci is a cycle and Property 4 follows directly
from the fact that rt is assumed to be in the fixed outer face of G. Property 5 is implied by the
3-connectivity of G and Property 4. Property 6 follows from Property 4, the fact that every inner
vertex of Pi must be outside Ci−1 (in G) and the Jordan Curve Theorem.
For the sake of completeness, we show how Property 7 is derived. Although it is not directly
implied by Definition 6 (in that sense our definition is more general), the following lemma shows
that we can always find a canonical ordering satisfying it.
Lemma 8. Every canonical ordering can be transformed to a canonical ordering satisfying Property 7.7 in linear time.
Proof. First, consider any ear Pi ≠ P0 with |inner(Pi)| ≥ 2 such that an inner vertex x of Pi has a
neighbor y in G − Vi that is different from its predecessor and successor in Pi . Then Pbirth(xy) = xy
and birth(xy) > i. If y is in Pi , let Z be the path obtained from Pi by replacing Pi [x, y] ⊆ Pi
with xy; we call this latter operation short-cutting. We replace Pi with the two ears Z and Pi [x, y]
in that order and delete Pbirth(xy) = xy. This preserves Properties 1–3 (note that u ∉ Pi, as
|inner(Pi )| ≥ 2) and therefore the canonical ordering. If y is not in Pi , let Z1 be a shortest path
in Pi from an endpoint of Pi to x and let Z2 be the path in Pi from x to the remaining endpoint.
Replace Pi with the two ears Z1 ∪ xy and Z2 in that order and delete Pbirth(xy) . This preserves
Properties 1–3.
Now consider a vertex x ∈ P0 that does not have degree 2 in G − V0, i.e. x has a non-consecutive neighbor y in P0 in the graph vertex-induced by V(P0). If x ∈ {r, t}, we replace P0 with the shortest cycle C in P0 ∪ xy that contains r, t and y, delete Pbirth(xy) = xy and add the remaining path from x to y in P0 − E(C) as a new ear directly after C. This clearly preserves Properties 1–3. If x ∉ {r, t}, we can shortcut P0 in a similar way. The above operations can be computed in linear total time.
Our definition of canonical orderings uses planarity only in one place: tr ∪ ru is assumed to be
part of the outer face of G. Note that the essential part of this assumption is that tr ∪ ru is part of
some face of G, as we can always choose an embedding for G having this face as outer face. Hence,
there is a natural generalization of canonical orderings to non-planar graphs G: We merely require
rt and ru to be edges of G! The following ear-based definition is similar to the one given in [6] but
does not need additional degree-constraints.
Definition 9 ([29, 6]). Let G be a graph with edges rt and ru. A Mondshein sequence through rt
and avoiding u (see Figure 1) is an ear decomposition D of G such that
1. rt ∈ P0 ,
2. Pbirth(u) is the last long ear, contains u as its only inner vertex and does not contain ru, and
3. D is non-separating.
This definition is in fact equivalent to the one Mondshein used in 1971 to define a (2,1)-sequence
[29, Def. 2.2.1], but which he gave in the notation of a special vertex ordering. This vertex ordering
actually refines the partial order inner(P0 ), . . . , inner(Pm−n ) by enforcing an order on the inner
vertices of each path according to their occurrence on that path (in any direction). The statement
that canonical orderings can be extended to non-planar graphs can also be found in [14, p.113],
however, no further explanation is given.
Figure 1: A Mondshein sequence of a non-planar 3-connected graph.
Note that Definition 9 implies u ∉ P0, as P0 ≠ Pbirth(u), since Pbirth(u) contains only one inner
vertex. As a direct consequence of this and the fact that D is non-separating, G must have minimum
degree at least 3 in order to have a Mondshein sequence. Mondshein proved that every 3-connected
graph has a Mondshein sequence. In fact, the converse is also true.
Theorem 10. [6, 40] Let rt and ru be edges of G. Then G is 3-connected if and only if G has a
Mondshein sequence through rt and avoiding u.
We state two additional facts about Mondshein sequences. For the first, let G be planar. Clearly,
every canonical ordering of an embedding of G is also a Mondshein sequence. Conversely, let D
be a Mondshein sequence of G through rt and avoiding u. Then Theorem 10 implies that G is
3-connected. If G has an embedding in which tr ∪ ru is contained in a face, we can choose this
face as outer face and get an embedding of G for which D is a canonical ordering. This embedding
must be unique, as Whitney proved that any 3-connected planar graph has a unique embedding
(up to flipping) [38]. Otherwise, there is no embedding of G such that tr ∪ ru is contained in some
face. Since the faces of a 3-connected planar graph are precisely its non-separating cycles [35], we
conclude the following observation.
Observation 11. For a planar graph G and edges tr and ru, the following statements are equivalent:
• There is a planar embedding of G whose outer face contains tr ∪ ru, and D is a canonical
ordering of this (unique) embedding through rt and avoiding u.
• D is a Mondshein sequence through rt and avoiding u, and tr ∪ ru is contained in a non-separating cycle of G.
For the second fact, let a chord of an ear Pi be an edge in G that joins two non-adjacent vertices
of Pi . Note that the definition of a Mondshein sequence allows chords for every Pi . Once having
a Mondshein sequence, one can aim for a slightly stronger structure. Let a Mondshein sequence
be induced if P0 is induced in G and every ear Pi ≠ P0 has no chord, except possibly the one
joining the endpoints of Pi . It has been shown [6] that every Mondshein sequence can be made
induced. The following lemma shows the somewhat stronger statement that we can always expect
Mondshein sequences to satisfy Property 7.7. In fact, its proof is precisely the same as the one for
Lemma 8, since none of its arguments uses planarity.
Lemma 12. Every Mondshein sequence can be transformed to a Mondshein sequence D satisfying
Property 7.7 in linear time. In particular, D is induced.
4 Computing a Mondshein Sequence
Mondshein gave an involved algorithm [29] that computes his sequence in time O(m²). Independently, Cheriyan and Maheshwari gave an algorithm that runs in time O(nm) and which is based on
a theorem of Tutte. At the heart of our linear-time algorithm is the following classical construction
sequence for 3-connected graphs due to Barnette and Grünbaum [3] and Tutte [36, Thms. 12.64
and 12.65].
Definition 13. The following operations on simple graphs are BG-operations (see Figure 2).
(a) vertex-vertex-addition: Add an edge between two distinct non-adjacent vertices
(b) edge-vertex-addition: Subdivide an edge ab, a ≠ b, with a vertex v and add the edge vw for a vertex w ∉ {a, b}
(c) edge-edge-addition: Subdivide two distinct edges (the edges may intersect in one vertex) with
vertices v and w, respectively, and add the edge vw
[Figure panels: (a) vertex-vertex-addition, (b) edge-vertex-addition, (c) edge-edge-addition]
Figure 2: BG-operations
Theorem 14 ([3, 36]). A graph is 3-connected if and only if it can be constructed from K4 using
BG-operations.
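The three BG-operations translate directly into elementary adjacency-set manipulations. The sketch below is ours, not from the paper (the vertex names and the particular operation sequence are illustrative only); in the spirit of Theorem 14 it grows a 3-connected graph from K4.

```python
# Our sketch of the three BG-operations on an adjacency-set representation.

def k4():
    return {v: {u for u in range(4) if u != v} for v in range(4)}

def vertex_vertex_addition(adj, v, w):
    assert v != w and w not in adj[v]      # distinct and non-adjacent
    adj[v].add(w); adj[w].add(v)

def subdivide(adj, a, b, v):
    # replace the edge ab by the path a-v-b
    adj[a].remove(b); adj[b].remove(a)
    adj[v] = {a, b}; adj[a].add(v); adj[b].add(v)

def edge_vertex_addition(adj, a, b, v, w):
    assert w not in (a, b)
    subdivide(adj, a, b, v)
    vertex_vertex_addition(adj, v, w)

def edge_edge_addition(adj, a, b, c, d, v, w):
    subdivide(adj, a, b, v)
    subdivide(adj, c, d, w)
    vertex_vertex_addition(adj, v, w)

# Start from K4 and apply one operation of each kind.
adj = k4()
edge_vertex_addition(adj, 0, 1, 4, 2)      # subdivide 0-1 with 4, add 4-2
vertex_vertex_addition(adj, 0, 1)          # 0 and 1 are now non-adjacent
edge_edge_addition(adj, 0, 1, 2, 3, 5, 6)  # subdivide 0-1 and 2-3, add 5-6
# minimum degree 3 is preserved, as 3-connectivity requires
assert all(len(neighbors) >= 3 for neighbors in adj.values())
```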
Hence, applying a BG-operation on a 3-connected graph preserves both simplicity and 3-connectivity. Let a BG-sequence of a 3-connected graph G be a sequence of BG-operations that
constructs G from K4 . It has been shown that such a BG-sequence can be computed efficiently.
Theorem 15 ([31, Thms. 6.(2) and 52]). A BG-sequence of a 3-connected graph can be computed
in time O(m).
The outline of our algorithm is as follows. Assume we want a Mondshein sequence of G through
rt and avoiding u. We will first compute a suitable BG-sequence of G using Theorem 15 and start
with a Mondshein sequence of its first graph, the K4 . The crucial part is then a careful analysis that
a Mondshein sequence of a 3-connected graph can be modified to one of G′, where G′ is obtained
from the former by applying a BG-operation.
In more detail, we need a special BG-sequence to harness the dynamics of the vertices r, t and
u throughout the BG-sequence. A BG-sequence is determined by an (arbitrary) DFS-tree and two
fixed incident edges of its root. We choose a DFS-tree with root r and fix the edges rt and ru.
This way the initial K4 will contain the vertex r and r will never be relabeled [30, Section 5].
However, t and u are not necessarily vertices of the K4. This is a problem, as we have to specify an edge rt̄ and a vertex ū of K4 which the Mondshein sequence of K4 goes through and avoids, respectively, for induction purposes. Fortunately, the relation between the graphs in a BG-sequence and subdivisions of these graphs in G [30, Section 4] gives us such replacement vertices for t and u efficiently: We find vertices t̄ and ū of the initial K4 such that the following labeling process ends with the input graph G in which t̄ = t and ū = u: For every BG-operation of the BG-sequence from K4 to G that subdivides the edge rt̄ or rū, we label the subdividing vertex with t̄ or ū, respectively (the old vertex t̄ or ū is then given a different label). As desired, the final t̄ and ū upon completion of the BG-sequence will be t and u. We refer to [30, Section 4] for details on
how to efficiently compute such a labeling scheme.
For the K4, it is easy to compute a Mondshein sequence through rt̄ and avoiding ū efficiently.
We iteratively proceed to a Mondshein sequence of the next graph in the sequence. The following
modifications and their computational analysis are the main technical contribution of this paper
and depend on the various positions in the sequence in which the vertices and edges that are
involved in the BG-operation can occur.
Note that any short ear xy in a Mondshein sequence can be moved to an arbitrary position of
the sequence without destroying the Mondshein property, as long as both x and y are created at
an earlier position. Thus, the essential information of a Mondshein sequence is its order on long
ears. We will prove that there is always a modification that is local in the sense that the only long
ears that are modified are the ones containing a vertex that is involved in the BG-operation.
Lemma 16 (Path Replacement Lemma). Let G be a 3-connected graph with edges rt and ru and let D = (P0, P1, . . . , Pm−n) be a Mondshein sequence of G through rt and avoiding u. Let G′ be obtained from G by applying a BG-operation Γ and let rt′ and ru′ be the edges of G′ that correspond to rt and ru in G. Then a Mondshein sequence D′ of G′ through rt′ and avoiding u′ can be computed from D using only constantly many (amortized) constant-time modifications.
We split the proof into three parts. First, we state two preprocessing routines leg() and belly()
on D that will reduce the number of subsequent cases considerably. Second, we show how to modify
D to D′ using these routines and, third, we discuss computational issues.
From now on, let vw be the edge that was added by Γ such that v subdivides ab ∈ E(G) and w subdivides cd ∈ E(G) (if applicable). Thus, the vertex u′ in G′ is either u, v or w, and likewise t′ in G′ is either t, v or w. By symmetry, we assume w.l.o.g. that birth(a) ≤ birth(b), birth(c) ≤ birth(d) and birth(d) ≤ birth(b). Recall that {a, b} may intersect {c, d} in at most one vertex. If not stated otherwise, the birth-operator always refers to D in this section.
We need some notation for describing the modifications. Suppose Pi is an ear containing an
inner vertex z. If an orientation of Pi is given, let Pi [, z] be the prefix of Pi ending at z in this
orientation and let Pi [z, ] be the suffix of Pi starting at z. Occasionally, the orientation does not
matter; if none is given, an arbitrary orientation can be taken. For paths A and B that end and
start at a unique common vertex, let A + B be the concatenation of A and B. Similarly, for disjoint
paths A and B such that exactly one endpoint x of A is a neighbor of exactly one endpoint y of
B, let A + B be the path A ∪ xy ∪ B.
Of legs and bellies: We describe two preprocessing routines. These will be used on D in the
next section to ensure that ab ∈ Pbirth(b) and cd ∈ Pbirth(d) (up to some special cases). Let an edge xy ∉ Pbirth(y) be a leg of Pbirth(y) if xy ≠ ru and birth(x) < birth(y). For each such leg, Pbirth(y) is
a long ear, xy is a short ear, and x is either not contained in Pbirth(y) or an endpoint of Pbirth(y) (see
Figures 3a and 3b). In the first case, if y is not the only inner vertex of Pbirth(y) , orient Pbirth(y) such
that the successor of y is also an inner vertex of Pbirth(y) ; this will preserve the non-separateness at
y for some later cases. In the latter case, orient Pbirth(y) toward x.
[Figure panels: (a) A leg xy with x ∉ Pbirth(y) and the result of Operation leg(x, y) (dashed lines). (b) A leg xy with x ∈ Pbirth(y) and the result of Operation leg(x, y). (c) A belly xy with birth(y) > 0 and the result of Operation belly(x, y). (d) A belly xy with birth(y) = 0 and the result of Operation belly(x, y).]
Figure 3
A leg xy of Pbirth(y) has the feature that it may be incorporated into Pbirth(y) such that the resulting sequence is still a Mondshein sequence: Let leg(x, y) be the operation that deletes the short ear xy in the sequence D and replaces the long ear Pbirth(y) by the two ears Pbirth(y)[, y] + x and Pbirth(y)[y, ] in that order. We prove that the resulting sequence D̄ is a Mondshein sequence. Clearly, D̄ is an ear decomposition. In addition, we still have rt ∈ P0, as P0 did not change due to birth(y) > birth(x) ≥ 0. Since every inner vertex of the two new ears is also an inner vertex of Pbirth(y), it has a neighbor in some larger ear (with respect to birth) in D̄; thus D̄ is non-separating by Definition 4. Since xy ≠ ru, the last long ear in D̄ does not contain ru. The last long ear in D̄ may be different from the one in D if y = u, but since the replacement does not introduce any new inner vertex, it will still contain the same vertex u as only inner vertex. Hence, D̄ is a Mondshein sequence through rt and avoiding u by Definition 9.
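The effect of leg(x, y) on a list-of-ears representation can be sketched as follows. This is our own illustration, not the paper's code; it ignores the orientation rules discussed above and assumes the same list-based ear encoding as before.

```python
def apply_leg(ears, x, y):
    """Sketch of leg(x, y): delete the short ear xy and replace the long ear
    containing y as an inner vertex by its prefix up to y extended by x,
    followed by its suffix starting at y (orientation rules omitted)."""
    birth = {}
    for i, ear in enumerate(ears):
        for v in ear:
            birth.setdefault(v, i)
    assert birth[x] < birth[y]              # xy must be a leg of P_birth(y)
    old = ears[birth[y]]
    j = old.index(y)
    first, second = old[:j + 1] + [x], old[j:]
    out = [e for e in ears if set(e) != {x, y}]   # delete the short ear xy
    k = out.index(old)
    out[k:k + 1] = [first, second]          # insert the two new ears in order
    return out

# P0 = cycle 1-2-3, P1 = path 2-4-5-1, P2 = short ear 3-4 (a leg of P1)
ears = [[1, 2, 3, 1], [2, 4, 5, 1], [3, 4]]
assert apply_leg(ears, 3, 4) == [[1, 2, 3, 1], [2, 4, 3], [4, 5, 1]]
```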
Let an edge xy of G be a belly of Pbirth(y) if birth(x) = birth(y) ≠ birth(xy). Then Pbirth(y)
contains both x and y as inner vertices, but does not contain xy; hence xy is a short ear (see
Figures 3c and 3d).
For a belly xy, we can again find a Mondshein sequence that ensures xy ∈ Pbirth(y). First, consider the case birth(y) > 0, in which we orient Pbirth(y) from y to x. For this case, let belly(x, y) be the operation that deletes the short ear xy in the sequence D and replaces the long ear Pbirth(y) by the two long ears Pbirth(y)[, y] + Pbirth(y)[x, ] and Pbirth(y)[y, x] in that order (see Figure 3c). For the same reasons as before, the resulting sequence D̄ is an ear decomposition and non-separating. Since Pbirth(y) contains two inner vertices, we have birth(y) ≠ birth(u), and it follows that the last long ear in D̄ is exactly the last long ear of D. In addition, rt ∈ P0, as P0 did not change due to birth(y) > birth(x) ≥ 0. Hence, D̄ is a Mondshein sequence through rt and avoiding u.
Now consider the case birth(y) = 0. The vertices x and y cut P0 into two distinct paths A and
B having endpoints x and y; let A be the one containing rt. Let belly(x, y) be the operation that
deletes the short ear xy in D and replaces P0 by the two long ears A ∪ xy and B in that order
(see Figure 3d). This preserves the property that P0 is a cycle containing rt and thus also yields a Mondshein sequence through rt and avoiding u. Note that both operations leg() and belly() leave the vertices
u, r and t unchanged.
Modifying D to D′: We use the operations leg() and belly() for a preprocessing on the edges ab and cd that are subdivided by Γ (if applicable). Suppose first that ru ∉ {ab, cd}; we will solve the remaining case ru ∈ {ab, cd} later. Assume birth(ab) ≠ birth(b) and recall that birth(a) ≤ birth(b). If birth(a) < birth(b), ab is a leg of Pbirth(b) and we apply the operation leg(a, b). Otherwise, birth(a) = birth(b) and we apply the operation belly(a, b). In both cases, this leaves a Mondshein sequence in which birth(ab) = birth(b), i.e. ab is contained in the long ear Pbirth(b).
Similarly, if birth(cd) ≠ birth(d), we want to apply either leg(c, d) or belly(c, d) to obtain birth(cd) = birth(d). However, doing this without any restrictions may result in losing birth(ab) = birth(b), e.g. when cd is a belly of Pbirth(b). Thus, we apply leg(c, d) or belly(c, d) only if birth(d) < birth(b), as then d is no inner vertex of Pbirth(b). Since birth(d) ≤ birth(b), we therefore have birth(d) ∈ {birth(b), birth(cd)}. Subdivide the edge ab in G and Pbirth(ab) with v and likewise subdivide cd with w if applicable for Γ. Call the resulting sequence D̄; D̄ satisfies birth(v) = birth(b) and birth(d) ∈ {birth(b), birth(w)}. We obtain the desired Mondshein sequence D′ through rt′ and avoiding u′ from D̄ by distinguishing the following cases (see Figure 4).
(1) Γ is a vertex-vertex-addition
Obtain D′ from D̄ by adding the new short ear vw to the end of D̄. This way v and w exist when vw is born.
(2) Γ is an edge-vertex-addition ⊲ birth(v) = birth(b)
(a) birth(w) > birth(b) ⊲ w ∉ Gbirth(b)
Obtain D′ from D̄ by adding the new ear vw to the end of D̄. Since birth(w) > birth(b), v has a larger neighbor with respect to birth.
(b) birth(w) < birth(b)
Then wv ≠ ru′, as otherwise we would have w = r and v = u′ and thus ab = ru, which contradicts our assumption. Hence, wv is a leg of Pbirth(v). We apply leg(w, v). By the orientation assigned to Pbirth(v), this ensures that v has a larger neighbor with respect to birth (e.g., b).
(c) birth(w) = birth(b)
Then wv ∉ Pbirth(v), since v is adjacent to only a and b in Pbirth(v) and w ∉ {a, b} for edge-vertex-additions. Thus, birth(w) = birth(v) ≠ birth(wv) and hence wv is a belly of Pbirth(v). We apply belly(w, v). By the orientation assigned to Pbirth(v), this ensures that v has a larger neighbor.
(3) Γ is an edge-edge-addition ⊲ birth(v) = birth(b) and birth(d) ∈ {birth(b), birth(w)}
(a) birth(d) < birth(b) ⊲ d ∈ Gbirth(b)−1 and birth(b) > 0
Then birth(c) ≤ birth(d) = birth(w) < birth(b) = birth(v). We further have vw ≠ ru′, as otherwise we would have w = r and v = u′ and thus r ∈ {a, b}, which contradicts r = w. Hence, wv is a leg of Pbirth(b). Obtain D′ from D̄ by applying leg(w, v).
(b) birth(d) = birth(b) = birth(w) ⊲ d, w ∈ inner(Pbirth(b))
Then vw is a belly of Pbirth(b). Obtain D′ from D̄ by applying belly(v, w).
(c) birth(d) = birth(b) ≠ birth(w) and birth(c) = birth(b) ⊲ c, d ∈ inner(Pbirth(b)) ∌ w
Then birth(w) > birth(b) and thus Pbirth(w) = cw ∪ wd. Let Z be a shortest path in Pbirth(b) that contains c, d and v, but not the edge rt′ (the latter is only relevant for birth(b) = 0). Let z be the inner vertex of Z that is contained in {c, d, v}. At least one of the two paths Z[, z] and Z[z, ], say Z[z, ], contains an inner vertex, as otherwise Γ would not be a BG-operation. Obtain D′ from D̄ by deleting Pbirth(w), replacing the path Z in Pbirth(b) with the two edges connecting w to the endpoints of Z, and adding the two new ears Z[, z] + w and Z[z, ] directly afterward in that order. Clearly, rt′ ∈ P0 in D′.
(d) birth(d) = birth(b) ≠ birth(w) and birth(c) ≠ birth(b) ⊲ d ∈ inner(Pbirth(b)) ∌ c, w
Then birth(c) < birth(d) < birth(w) and hence birth(b) > 0 and Pbirth(w) = cw ∪ wd. One of the paths Pbirth(b)[, v] and Pbirth(b)[v, ], say Pbirth(b)[v, ], contains d as an inner vertex. Obtain D′ from D̄ by replacing Pbirth(b) with the two ears Pbirth(b)[, v] + w + c and Pbirth(b)[v, ] in that order and replacing Pbirth(w) with the short ear wd. If birth(b) ≠ birth(u), it follows directly that u′ = u and thus that D′ avoids u′ = u. Otherwise birth(b) = birth(u), which implies u = b = d and c ≠ r, since we assumed cd ≠ ru. Thus, in this case D′ avoids u′ = u = b as well.
[Figure panels: illustrations of Cases (1), (2a)–(2c) and (3a)–(3d)]
Figure 4: Cases when modifying D to D′. Black vertices are endpoints of ears that are contained in Gbirth(b). The dashed paths depict (parts of) the ears in D′.
In all these cases, we obtain a Mondshein sequence D′ through rt′ and avoiding u′ as desired.
Now consider the remaining case ru ∈ {ab, cd}. If birth(d) = birth(b) (for an edge-edge-addition), we have b = d = u and can w.l.o.g. assume ru = ab. Otherwise, birth(d) < birth(b) and it follows directly that we have in all cases, even for edge-vertex-additions, r = a and u = b. If cd is a short ear, we move cd to the position in D directly after Pbirth(d); this preserves a Mondshein sequence. As before, subdivide ab and cd with v and w.
Let Γ be an edge-vertex-addition. Then u′ = v and hence birth(w) < birth(u) < birth(v). Obtain D′ from D by replacing Pbirth(v) with the long ear uv ∪ vw and adding the short ear av = ru′ directly afterward. Then D′ avoids u′.
Let Γ be an edge-edge-addition and suppose first that birth(w) ≠ birth(u). Then u′ = v and birth(w) < birth(v) > birth(u). Obtain D′ from D by replacing Pbirth(v) with the long ear uv ∪ vw and adding the short ear av = ru′ directly afterward. Then D′ avoids u′. Now suppose that birth(w) = birth(u). Then b = d = u, u′ = v and birth(u) = birth(w) < birth(v). Obtain D′ from D by replacing Pbirth(v) with the long ear uv ∪ vw and adding the short ear av = ru′ directly afterward. Hence, in all cases, we obtain a Mondshein sequence D′ through rt′ and avoiding u′.
Computational Complexity: For proving the Path Replacement Lemma 16, it remains to show
that each modification can be computed in amortized constant time. Note that ears may become
arbitrarily long in the path replacement process and therefore may contain up to Θ(n) vertices.
Moreover, we have to maintain the birth-values of all vertices that are involved in future BG-operations in order to compute which of the subcases in Cases (1)–(3) applies. Thus, we cannot use
the standard approach of storing the ears of D explicitly by using doubly-linked lists, as then the
birth-values of linearly many vertices may change for every modification.
Instead, we will represent the ears as the sets of a data structure for set splitting, which maintains disjoint sets online under an intermixed sequence of find and split operations. Gabow and
Tarjan [15] discovered the first data structure for set splitting with linear space and constant amortized time per operation. Their and our model of computation is the standard unit-cost word-RAM.
Imai and Asano [20] enhanced this data structure to an incremental variant, which additionally
supports adding single elements to certain sets in constant amortized time. In both results, all sets
are restricted to be intervals of some total order. To represent the Mondshein sequence D in the
path replacement process, we will use the following more general data structure due to Djidjev [12,
Section 3.2], which does not have that requirement but still supports the add-operation.
The data structure maintains a collection P of edge-disjoint paths under the following operations:
new_path(x,y): Creates a new path that consists of the edge xy. The edge xy must not be in any
other path of P .
find(e): Returns the integer-label of the path containing the edge e.
split(xy): Splits the path containing the edge xy into the two subpaths from x to one endpoint
and from y to the other endpoint of that path.
sub(x,e): Modify the path containing e by subdividing e with the vertex x.
replace(x,y,e): Neither x nor y may be an endpoint of the path Z containing e. Cut Z into the
subpath from x to y and the path that consists of the two remaining subpaths of Z joined by
the new edge xy.
add(x,yz): The vertex y must be an endpoint of the path Z containing the edge yz and x is either
a new vertex or not in Z. Add the new edge xy to Z.
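For concreteness, the interface above can be sketched as a naive Python class that stores each path explicitly as a vertex list. All names are illustrative, edges are passed as endpoint pairs, and every operation here costs O(path length) rather than the amortized O(1) of the actual data structure; only the interface is the point of this sketch.

```python
class PathCollection:
    """Naive stand-in for the path data structure described above."""

    def __init__(self):
        self.paths = {}          # integer label -> list of vertices
        self._next = 0

    def _store(self, vertices):
        lbl, self._next = self._next, self._next + 1
        self.paths[lbl] = vertices
        return lbl

    def _locate(self, x, y):
        # Return (label, i) such that the edge xy lies between
        # positions i and i + 1 of the path with that label.
        for lbl, p in self.paths.items():
            for i in range(len(p) - 1):
                if {p[i], p[i + 1]} == {x, y}:
                    return lbl, i
        raise KeyError((x, y))

    def new_path(self, x, y):
        return self._store([x, y])

    def find(self, x, y):
        return self._locate(x, y)[0]

    def split(self, x, y):
        # Remove the edge xy; one remaining subpath ends at x, the other at y.
        lbl, i = self._locate(x, y)
        p = self.paths.pop(lbl)
        return self._store(p[:i + 1]), self._store(p[i + 1:])

    def sub(self, x, e):
        # Subdivide the edge e with the new vertex x.
        lbl, i = self._locate(*e)
        self.paths[lbl].insert(i + 1, x)

    def replace(self, x, y, e):
        # Cut the path containing e into the subpath from x to y and the
        # two remaining subpaths joined by the new edge xy (precondition:
        # neither x nor y is an endpoint of that path).
        lbl, _ = self._locate(*e)
        p = self.paths.pop(lbl)
        ix, iy = sorted((p.index(x), p.index(y)))
        return self._store(p[ix:iy + 1]), self._store(p[:ix + 1] + p[iy:])

    def add(self, x, yz):
        # Extend the path containing the edge yz at its endpoint y
        # by the new edge xy.
        y, z = yz
        lbl, _ = self._locate(y, z)
        p = self.paths[lbl]
        if p[0] == y:
            p.insert(0, x)
        elif p[-1] == y:
            p.append(x)
        else:
            raise ValueError("y must be an endpoint of its path")
```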
Note that all ears are not only edge-disjoint but also internally disjoint. Djidjev proved that
each of the above operations can be computed in amortized constant time [12, Theorem 1]. We
will only represent long ears in this data structure; the remaining short ears do not contain any
essential birth-value information and can therefore be maintained simply as edges. As the data
structure can only store paths, we need to clarify how the unique cycle P0 in D can be maintained:
We store P0 as paths, namely as the two paths in P0 with endpoints r and t. For every ear different
from P0 , we store its two endpoints at its find()-label. These endpoints can therefore be accessed
and updated in constant time.
Now we initialize the data structure with the Mondshein sequence of K4 in constant time using
the above operations. Every modification of the Cases (1)–(3) and ru ∈ {ab, cd} can then be realized
with a constant number of operations of the data structure, and hence in amortized constant time.
Additionally, we need to maintain the order of ears in D. The incremental list order-maintenance
problem is to maintain a total order subject to the operations of (i) inserting an element after a
given element and (ii) comparing two distinct given elements by returning the one that is smaller
in the order. Bender et al. [4] showed a simple solution with amortized constant time per operation
(which holds even if, additionally, deletions of elements are supported); we will call this the order
data structure. It is easy to see that the Path Replacement Lemma inserts in every step at most
two new ears directly after Pbirth(b) and at most one new short ear at the end of D. Hence, we can
maintain the order of ears in D by applying the order data structure to the find()-labels of ears;
this costs amortized constant time per step.
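The interface of the order data structure can be sketched as follows; this naive list-based stand-in costs O(n) per operation, whereas the structure of Bender et al. achieves amortized constant time. Names are illustrative.

```python
class OrderMaintenance:
    """Naive list-based order-maintenance structure: supports inserting
    an element after a given one and comparing two elements.  Each
    operation costs O(n) here; the real structure is amortized O(1).
    """

    def __init__(self, first):
        self.order = [first]

    def insert_after(self, anchor, element):
        self.order.insert(self.order.index(anchor) + 1, element)

    def smaller(self, a, b):
        # Return whichever of the two distinct elements comes earlier.
        return a if self.order.index(a) < self.order.index(b) else b
```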
For deciding which of the subcases in (1)–(3) and ru ∈ {ab, cd} applies, we additionally need to
maintain the birth-values of the vertices and edges in D. In fact, it suffices to support the queries
“birth(x) < birth(y)” and “birth(x) = birth(y)”, where x and y may be arbitrary edges or vertices
in D. If x and y are edges, both queries can be computed in constant amortized time by comparing
the labels find(x) and find(y) in the order data structure. In order to allow birth-queries on
vertices, we will store pointers at every vertex x to the two edges e1 and e2 that are incident to x
in Pbirth(x) . The desired query involving birth(x) can then be computed by comparing find(e1 ) with the label of the other argument in the order data structure.
For any new vertex x that is added to D, we can find e1 and e2 in constant time, as these are in
{av, vb, cw, wd, vw}. Since Pbirth(x) may change over time, we have to update e1 and e2 after each
step. The only situation in which Pbirth(x) may lose e1 or e2 (but not both) is a split or replace
operation on Pbirth(x) at x (the split operation must be followed by an add operation on x, as x is
always an inner vertex of some ear). This cuts Pbirth(x) into two paths, each of which contains exactly
one edge in {e1 , e2 }. Checking find(e1 ) = find(e2 ) recognizes this case efficiently. Depending on
the particular case, we compute a new consistent pair {e′1 , e′2 } that differs from {e1 , e2 } in exactly
one edge. This allows us to check the desired comparisons in amortized constant time.
We conclude that D′ can be computed from D in amortized constant time; this proves the Path
Replacement Lemma. Thus, we deduce the following theorem.
Theorem 17. Given edges rt and ru of a 3-connected graph G, a Mondshein sequence D of G
through rt and avoiding u can be computed in time O(m).
The above algorithm is certifying in the sense of [27]: First, check in linear time that D is an
ear decomposition of G. Second, check the side constraints on the first and last ear. Third, check
in linear time that D is non-separating by testing that every ear satisfies Definition 4.
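The third check can be sketched as follows, assuming that Definition 4 requires, for every prefix Gi , that the remaining graph G − V (Gi ) is connected (the exact definition is not restated here). The sketch runs one BFS per ear and is therefore quadratic, not linear-time; all names are illustrative.

```python
from collections import deque

def is_nonseparating(n, edges, ears):
    """Check that, after adding each ear prefix, the graph induced on
    the remaining (unborn) vertices is connected.  `ears` lists the
    ears of D as vertex sequences, P0 first; an empty remainder is
    treated as trivially connected.
    """
    adj = {v: set() for v in range(n)}
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)

    born = set()
    for ear in ears[:-1]:            # all proper prefixes of D
        born |= set(ear)
        rest = set(range(n)) - born
        if not rest:
            continue
        # BFS inside the remaining graph G - V(G_i)
        start = next(iter(rest))
        seen, queue = {start}, deque([start])
        while queue:
            x = queue.popleft()
            for y in adj[x]:
                if y in rest and y not in seen:
                    seen.add(y)
                    queue.append(y)
        if seen != rest:
            return False
    return True
```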
5 Applications
Application 1: Independent Spanning Trees
Let k spanning trees of a graph be independent if they all have the same root vertex r and, for
every vertex x ≠ r, the paths from x to r in the k spanning trees are internally disjoint (i.e.,
vertex-disjoint except for their endpoints; see Figure 5). The following conjecture from 1988 due
to Itai and Rodeh [21] has received considerable attention in graph theory throughout the past
decades.
Conjecture (Independent Spanning Tree Conjecture [21]). Every k-connected graph contains k
independent spanning trees.
Figure 5: Three independent spanning trees in the graph of Figure 1, which were computed from
its Mondshein sequence (vertex numbers depict a consistent tr-numbering).
The conjecture has been proven for k ≤ 2 [21], k = 3 [6, 40] and k = 4 [8], with running times
O(m), O(n²) and O(n³), respectively, for computing the corresponding independent spanning trees.
For every k ≥ 5, the conjecture is open. For planar graphs, the conjecture has been proven by
Huck [19].
We show how to compute three independent spanning trees in linear time, using an idea of [6].
This improves the previous best quadratic running time. It may seem tempting to compute the
spanning trees directly and without using a Mondshein sequence, e.g. by local replacements in an
induction over BG-operations or inverse contractions. However, without additional restrictions this
is bound to fail, as shown in Figure 6.
Figure 6: A 3-connected graph G (some edges are not drawn). G is obtained from the 3-connected
graph G′ := (G − v) ∪ xy by performing a BG-operation (or inverse contraction) that adds the
vertex v (with added edge vy). Two of the three independent spanning trees of G′ are given, rooted
at r (thick edges). However, not both of them can be extended to v.
Compute a Mondshein sequence through rt and avoiding u, as described in Theorem 17. Choose
r as the common root vertex of the three spanning trees and let x ≠ r be an arbitrary vertex.
First, we show how to obtain two internally disjoint paths from x to r that are both contained
in the subgraph Gbirth(x) . A tr-numbering < is a total order v1 < · · · < vn of the vertices of a graph
such that t = v1 , r = vn , and every other vertex has both a higher-numbered and a lower-numbered
neighbor. Let a tr-numbering < be consistent [6] to a Mondshein sequence if < is a tr-numbering
for every graph Gi , 0 ≤ i ≤ m − n. We can compute a consistent tr-numbering < in linear time as
follows: Let <0 be the total order on V (P0 ) from t to r; then <0 is a consistent tr-numbering of G0 .
We maintain <i−1 in the order data structure of [4] (see the computational complexity paragraph).
Now we add iteratively the next ear Pi and obtain <i from <i−1 by ordering the new inner vertices
of Pi from the smaller to the larger endpoint of Pi in <i−1 (such that inner(Pi ) is between these
endpoints in <i ). This takes amortized time proportional to the length of Pi and, hence, gives a
total linear running time.
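The construction of a consistent tr-numbering can be written as follows, with a plain Python list standing in for the order data structure (so this sketch is not linear-time). Ears are given as vertex sequences, P0 first and listed from t to r; the function name is illustrative.

```python
def consistent_tr_numbering(ears):
    """Build a consistent tr-numbering by inserting the inner vertices
    of each ear between its endpoints, ordered from the smaller toward
    the larger endpoint in the current order.
    """
    order = list(ears[0])              # <_0: the total order on V(P0)
    for ear in ears[1:]:
        inner = ear[1:-1]
        if not inner:
            continue                   # short ears add no new vertices
        i, j = order.index(ear[0]), order.index(ear[-1])
        if i > j:                      # orient from smaller to larger
            i, j = j, i
            inner = inner[::-1]
        # place the inner vertices directly after the smaller endpoint
        order[i + 1:i + 1] = inner
    return {v: k + 1 for k, v in enumerate(order)}
```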
According to <, every vertex x ≠ r has a higher-numbered neighbor in Gbirth(x) and every
vertex x ∉ {r, t} a lower-numbered neighbor in Gbirth(x) . Fixing arbitrary such neighbors, the first
two spanning trees T1 and T2 then consist of the edges to the chosen higher neighbors, and of the
edge tr together with the edges to the chosen lower neighbors, respectively. Clearly, T1 and T2 are independent due
to the numbering used.
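The construction of T1 and T2 can be sketched as follows. For brevity, this sketch picks any higher- or lower-numbered neighbor rather than restricting the choice to Gbirth(x) as the text requires, so it illustrates only the numbering argument; all names are illustrative.

```python
def two_spanning_trees(adj, num, r, t):
    """Build T1 and T2 as parent maps rooted at r, given an adjacency
    dict and a tr-numbering `num` (t smallest, r largest).  Following
    T1 parents strictly increases the number, following T2 parents
    strictly decreases it until t, whose parent is r via the edge tr.
    """
    t1, t2 = {}, {t: r}                    # T2 contains the edge tr
    for x in adj:
        if x != r:
            t1[x] = max(adj[x], key=num.get)   # some higher neighbor
        if x not in (r, t):
            t2[x] = min(adj[x], key=num.get)   # some lower neighbor
    return t1, t2
```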
We construct the third independent spanning tree T3 . As a Mondshein sequence is non-separating, every vertex x ∉ {r, u} has an incident edge with an endpoint in G − Gbirth(x) (as seen
before, iterating this argument gives a path to u in G − Gbirth(x) ). Let T3 consist of arbitrary such incident edges and of the edge ru. Since Gbirth(x) and G − Gbirth(x) are vertex-disjoint, T3 is independent
from T1 and T2 .
Remark. We remark that the three independent spanning trees constructed this way satisfy the
following additional condition: Due to the fact that T2 and T3 are extended to r by one single edge,
each edge incident to r is contained in at most one of T1 , T2 , T3 . In particular, no edge of G is
contained in all three independent trees, which is a fact that cannot be derived from the definition
of independent spanning trees (an edge that is incident to r may be contained in all three trees).
Application 2: Output-Sensitive Reporting of Disjoint Paths
Given two vertices x and y of an arbitrary graph, a k-path query reports k internally disjoint paths
between x and y or outputs that these do not exist. Di Battista, Tamassia and Vismara [11] give
data structures that answer k-path queries for k ≤ 3. A key feature of these data structures is
that every k-path query has an output-sensitive running time, i.e., a running time of O(ℓ) if the
total length of the reported paths is ℓ (and running time O(1) if the paths do not exist). The
preprocessing time of these data structures is O(m) for k ≤ 2, but O(n²) for k = 3.
For k = 3, Di Battista et al. show how the input graph can be restricted to be 3-connected using
a standard decomposition. For every 3-connected graph we can compute a Mondshein sequence,
which allows us to compute three independent spanning trees T1 –T3 in a linear preprocessing time,
as shown in Application 1. If x or y is the root r of T1 –T3 , this gives a straightforward output-sensitive data structure that answers 3-path queries: we just store T1 –T3 and extract one path from
each tree per query.
In order to extend these queries to k-path queries between arbitrary vertices x and y, [11] gives
a case distinction that shows that the desired paths can be found efficiently in the union of the
six paths in T1 –T3 that join either x with r or y with r. This case distinction can be used for the
desired output-sensitive reporting in time O(ℓ) without changing the preprocessing. We conclude
that the preprocessing time of O(n²) for allowing k-path queries with k ≤ 3 in arbitrary graphs
can be improved to O(n + m).
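For the special case where one query endpoint is the common root r, the output-sensitive query can be sketched by walking up the three trees, stored as parent maps; the function name is illustrative.

```python
def three_paths_to_root(trees, x, r):
    """Answer a 3-path query from x to the common root r: walk up each
    of the three spanning trees, given as parent maps rooted at r.
    The work is proportional to the total length of the reported
    paths, i.e. the query is output-sensitive.
    """
    paths = []
    for parent in trees:
        path = [x]
        while path[-1] != r:
            path.append(parent[path[-1]])
        paths.append(path)
    return paths
```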
Application 3: Planarity Testing
We give a conceptually very simple planarity test based on Mondshein’s sequence for any 3-connected graph G in time O(n). The 3-connectivity requirement is not crucial, as the planarity of
G can be reduced to the planarity of all 3-connected components of G, which in turn are computed
as a side-product from the computation of the BG-sequence [28, Appendix 2]. Alternatively, one
could also use standard algorithms [18, 16] for reducing G to be 3-connected.
If m > 3n − 6, G is not planar due to Euler’s formula and we reject the instance, so let
m ≤ 3n − 6. Let rt be an edge of G. We will find an embedding whose outer face is left of rt,
unless G is non-planar. Due to Whitney [38], this embedding is unique. In light of Observation 11,
we need to pick an edge ru ≠ rt such that tr ∪ ru is in a non-separating cycle. We can easily find
such an edge by computing a Mondshein sequence through rt and avoiding some vertex u′ ∉ {r, t},
and then taking the edge that is incident to r in P0 − rt (alternatively, any linear-time algorithm
that computes a non-separating cycle containing rt like the one in [6] can be used).
Now we compute a Mondshein sequence D through rt and avoiding u that satisfies Property 7.7
in time O(n). If G is planar, Observation 11 ensures that D is a canonical ordering of our fixed
embedding; in particular, the last vertex u and the edge rt will be embedded in the outer face.
Due to Property 7.7, P0 has no chords and every short ear xy satisfies birth(x) ≠ birth(y). For
the embedding process, we rearrange the order of short ears in D such that all short ears xy with
birth(x) < birth(y) are direct successors of the long ear Pbirth(y) (this can be done in linear time
using bucket sort).
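The rearrangement of short ears can be sketched as a single bucket pass; here `birth` is assumed to be given as a function from vertices to long-ear indices, and all names are illustrative.

```python
def rearrange_short_ears(long_ears, short_ears, birth):
    """Reorder D so that every short ear xy with birth(x) < birth(y)
    directly succeeds the long ear P_birth(y).  `long_ears` is the
    ordered list of long-ear indices and `short_ears` a list of edges
    (x, y); one bucket pass, linear in the number of ears.
    """
    buckets = {i: [] for i in long_ears}
    for x, y in short_ears:
        later = y if birth(y) > birth(x) else x
        buckets[birth(later)].append((x, y))
    result = []
    for i in long_ears:
        result.append(i)
        result.extend(buckets[i])
    return result
```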
We start with a planar embedding M0 of P0 . Step by step, we attempt to augment Mi with the
next long ear Pj in D as well as all short ears directly succeeding Pj in order to construct a planar
embedding Mj of Gj .
Once the current embedding Mi contains u, we have added all edges of G and are done. Otherwise, u is contained in G − Gi , according to Definition 6.2. Then G − Gi contains a path from each inner
vertex of Pj to u, according to Lemma 5. Since u is contained in the outer face of the unique
embedding of G, adding the long ear Pj to Mi can preserve planarity only when it is embedded
into the outer face f of Mi . Thus, we only have to check that both endpoints of Pj are contained
in f (this is easy to test by maintaining the vertices of the outer face). For the same reason, the
short ears directly succeeding Pj can preserve planarity only if the set S of their endpoints in Gi is
contained in f . Note that, if there is at least one such short ear, Pj has precisely one inner vertex
v due to Property 7.7 and all short ears directly succeeding Pj have v as endpoint.
Thus, if the endpoints of Pj and S are contained in f , we embed Pj and the short ears into f in
the only possible way, i.e. as a path or as one new vertex v with the short ears and the two edges of
Pj as incident edges. Otherwise, we output “not planar”. If desired, a Kuratowski-subdivision can
then be easily extracted in time O(n), as shown in [32, Lemma 5] (the extraction is even simpler,
as we do not make use of adding “claws”).
Application 4: Contractible Subgraphs in 3-Connected Graphs
A connected subgraph H of a 3-connected graph G is called contractible if contracting H to a single
vertex generates a 3-connected graph. It is easy to show that a connected subgraph H is contractible
if and only if G − V (H) is 2-connected. While many structural results about contractible subgraphs
are known in graph theory, we are not aware of any non-trivial result that computes them.
Using a Mondshein sequence, we can identify a nested family of m − n contractible induced
subgraphs in linear time, namely the subgraphs G − Gi for every 0 ≤ i < m − n. Clearly, these
subgraphs are contractible, as the remaining graph Gi is 2-connected due to Lemma 7.5. Moreover, for each i > 0,
G − Gi is an induced subgraph of the induced subgraph G − Gi−1 . In particular, every subgraph G − Gi contains u, since
Vm−n−1 = {u} due to Definition 9.2.
Application 5: The k-Partitioning Problem
Given vertices a1 , . . . , ak of a graph G and natural numbers n1 , . . . , nk with n1 + · · · + nk = n, we
want to find a partition of V into sets A1 , . . . , Ak with ai ∈ Ai and |Ai | = ni for every i such that
every set Ai induces a connected graph in G. We call this a k-partition.
If the conditions ai ∈ Ai are ignored, the problem becomes NP-hard even for k = 2 and bipartite
input graph G [13]; although often stated otherwise, this does not seem to imply an NP-hardness
proof for the k-partitioning problem directly. If the input graph is k-connected, however, Győri [17]
and Lovász [25] proved that there is always a k-partition. Thus, let G be k-connected. If k = 2,
the k-partitioning problem is easy to solve: If G does not contain the edge a1 a2 , add this edge to
G. Compute an a1 a2 -numbering a1 = v1 , v2 , . . . , vn = a2 and observe that, for any vertex vi (in
particular for vn1 ), the graphs induced by {v1 , . . . , vi } and by {vi+1 , . . . , vn } are connected. For
every k ≥ 4, the k-partitioning problem on a k-connected input graph is not even known to be in
P (although its decision variant is), so we will focus on the 3-partitioning problem of a 3-connected
input graph.
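The 2-partition obtained from an a1 a2 -numbering can be sketched in one line: the first n1 vertices of the numbering form A1 and the rest form A2 . The function name is illustrative.

```python
def two_partition(numbering, n1):
    """2-partition from an a1a2-numbering: a1 = v1 and a2 = vn, so the
    first n1 vertices (containing a1) and the remaining vertices
    (containing a2) are returned as the two classes.
    """
    return set(numbering[:n1]), set(numbering[n1:])
```

Both classes induce connected subgraphs because, in an a1 a2 -numbering, every vertex other than a1 and a2 has both a lower- and a higher-numbered neighbor.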
This problem can be solved in quadratic time [33] and, if the graph is additionally planar,
even in linear time [22]. As suggested in [37, 1], the problem (as well as a related extension) can
be solved with the aid of a non-separating ear decomposition. For planar graphs, by Observation 11 it thus suffices to compute just a canonical ordering, which simplifies previous algorithms
considerably.
More generally, we get the first O(m) time algorithm for arbitrary 3-connected graphs as follows.
Consider a Mondshein sequence through a1 a2 and avoiding a3 (if the edges a1 a2 and a1 a3 do not
exist in G, we add them in advance). If Gi contains exactly n1 + n2 vertices for some i, we set
A3 := V (G) \ V (Gi ) and compute A1 and A2 by solving the 2-partitioning problem on Gi in linear time using an
a1 a2 -numbering, as described above. Otherwise, let Pi be the first ear such that |V (Gi )| > n1 + n2 .
We partition inner(Pi ) into the vertex sets B1 , B3 and B2 (designated to be part of A1 , A3 and
A2 , respectively) of three consecutive paths in Pi − a1 a2 such that |B3 | = n3 − |Vi |, where Vi denotes the set V (G) \ V (Gi ) of vertices outside Gi . In particular,
0 < |B3 | < |inner(Pi )|. Let l := |B1 | + |B2 |; then there are l + 1 choices for B3 . For any such
choice, setting A3 := B3 ∪ Vi satisfies the claim for A3 , as A3 contains a3 , has cardinality n3 and is
connected, as a Mondshein sequence is non-separating.
We specify how to compute B1 ; this determines the sets B3 and B2 . If i = 0, choose B1 as the
path in P0 − a1 a2 that starts at a1 and consists of n1 vertices. The desired 2-partition of G − A3
is then given by A1 := B1 and A2 := B2 . If i > 0, we aim for a coloring of Gi−1 into blue and red
vertices such that A1 consists of B1 and the blue vertices, and A2 consists of B2 and the red vertices.
In order to make A1 connected, we have to prevent that both endpoints of Pi are colored red
as long as |B1 | > 0. Clearly, |B1 | < n1 , as a1 has to be in A1 ; similarly, |B2 | < n2 , which implies
|B1 | > l − n2 . Hence, the valid choices for |B1 | are between max{0, l − n2 + 1} and min{l, n1 − 1}.
For every max{0, l − n2 + 1} ≤ |B1 | ≤ min{l, n1 − 1}, we compute a 2-partition of Gi−1 into
n1 − |B1 | blue and n2 − |B2 | red vertices. The first 2-partition for |B1 | = max{0, l − n2 + 1} can be
computed in linear time using an a1 a2 -numbering as described above. For each increase of |B1 | by
one, we can construct the new 2-partition in constant time from the old one, as exactly one blue
vertex is recolored red. If the coloring of one of these choices for |B1 | colors the endpoints x and
y of Pi differently, we choose B1 as the path in Pi next to the blue endpoint that consists of |B1 |
vertices. Then A1 and A2 as stated above give a 3-partition.
Otherwise, x and y always have the same color. Moreover, this color is identical, say red by
symmetry, for every computed choice of |B1 |, since only one vertex is recolored per increase of |B1 |.
Consider the smallest choice |B1 | := max{0, l − n2 + 1}. As x and y are red, n2 − |B2 | ≥ 2, which
implies |B1 | > l − n2 + 1. Hence, |B1 | = 0 and we choose B1 := ∅. Then A1 and A2 as stated above
give the desired 3-partition.
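The scan over the choices for |B1 | can be sketched as follows, under the assumption (as in the a1 a2 -numbering 2-partition above) that the blue class is always a prefix of the numbering, so that each increase of |B1 | recolors exactly one vertex; all names are illustrative.

```python
def find_split(numbering, blue_sizes, x, y):
    """Scan the 2-partitions of G_{i-1} for increasing |B1| and return
    the first choice for which the ear endpoints x and y receive
    different colors (None if their colors always agree).

    Each entry of `blue_sizes` is a pair (|B1|, number of blue
    vertices); blue vertices are assumed to form a prefix of
    `numbering`, which shrinks by one vertex per step.
    """
    pos = {v: i for i, v in enumerate(numbering)}
    for b1, blue in blue_sizes:
        if (pos[x] < blue) != (pos[y] < blue):
            return b1
    return None
```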
Acknowledgments. I wish to thank Joseph Cheriyan for valuable hints, the anonymous person
who drew my attention to Lee F. Mondshein’s work, David R. Wood for suggesting the graph
in Figure 6, and the anonymous reviewers that gave me very valuable feedback, which led to a
reduction of the number of cases in the Path Replacement Lemma.
References
[1] T. Awal and M. S. Rahman. A linear algorithm for resource tripartitioning triconnected planar
graphs. INFOCOMP Journal of Computer Science, 9(2):39–48, 2010.
[2] M. Badent, U. Brandes, and S. Cornelsen. More canonical ordering. Journal of Graph Algorithms and Applications, 15(1):97–126, 2011.
[3] D. W. Barnette and B. Grünbaum. On Steinitz’s theorem concerning convex 3-polytopes and
on some properties of planar graphs. In Many Facets of Graph Theory, pages 27–40, 1969.
[4] M. A. Bender, R. Cole, E. D. Demaine, M. Farach-Colton, and J. Zito. Two simplified algorithms for maintaining order in a list. In Proceedings of the 10th European Symposium on
Algorithms (ESA’02), pages 152–164, 2002.
[5] T. Biedl and M. Derka. The (3,1)-ordering for 4-connected planar triangulations. Journal of
Graph Algorithms and Applications, 20(2):347–362, 2016.
[6] J. Cheriyan and S. N. Maheshwari. Finding nonseparating induced cycles and independent
spanning trees in 3-connected graphs. Journal of Algorithms, 9(4):507–537, 1988.
[7] Y.-T. Chiang, C.-C. Lin, and H.-I. Lu. Orderly spanning trees with applications. SIAM Journal
on Computing, 34(4):924–945, 2005.
[8] S. Curran, O. Lee, and X. Yu. Finding four independent trees. SIAM Journal on Computing,
35(5):1023–1058, 2006.
[9] H. de Fraysseix, J. Pach, and R. Pollack. Small sets supporting fary embeddings of planar
graphs. In Proceedings of the 20th Annual ACM Symposium on Theory of Computing (STOC
’88), pages 426–433, 1988.
[10] H. de Fraysseix, J. Pach, and R. Pollack. How to draw a planar graph on a grid. Combinatorica,
10(1):41–51, 1990.
[11] G. Di Battista, R. Tamassia, and L. Vismara. Output-sensitive reporting of disjoint paths.
Algorithmica, 23(4):302–340, 1999.
[12] H. N. Djidjev. A linear-time algorithm for finding a maximal planar subgraph. SIAM J.
Discrete Math., 20(2):444–462, 2006.
[13] M. E. Dyer and A. M. Frieze. On the complexity of partitioning graphs into connected subgraphs. Discrete Applied Mathematics, 10:139–153, 1985.
[14] H. de Fraysseix and P. O. de Mendez. Regular orientations, arboricity, and augmentation. In
Proceedings of the DIMACS International Workshop on Graph Drawing 1994, volume LNCS
894, pages 111–118, 1995.
[15] H. N. Gabow and R. E. Tarjan. A linear-time algorithm for a special case of disjoint set union.
Journal of Computer and System Sciences, 30(2):209–221, 1985.
[16] C. Gutwenger and P. Mutzel. A linear time implementation of SPQR-trees. In Proceedings of
the 8th International Symposium on Graph Drawing (GD’00), pages 77–90, 2001.
[17] E. Győri. Partition conditions and vertex-connectivity of graphs. Combinatorica, 1(3):263–273,
1981.
[18] J. E. Hopcroft and R. E. Tarjan. Dividing a graph into triconnected components. SIAM
Journal on Computing, 2(3):135–158, 1973.
[19] A. Huck. Independent trees in planar graphs. Graphs and Combinatorics, 15(1):29–77, 1999.
[20] H. Imai and T. Asano. Dynamic orthogonal segment intersection search. Journal of Algorithms,
8(1):1–18, 1987.
[21] A. Itai and M. Rodeh. The multi-tree approach to reliability in distributed networks. Information and Computation, 79:43–59, 1988.
[22] L. Jou, H. Suzuki, and T. Nishizeki. A linear algorithm for finding a non-separating ear
decomposition of triconnected planar graphs. Technical report, Information Processing Society
of Japan, AL40-3, 1994.
[23] G. Kant. Drawing planar graphs using the lmc-ordering. In Proceedings of the 33rd Annual
Symposium on Foundations of Computer Science (FOCS’92), pages 101–110, 1992.
[24] G. Kant. Drawing planar graphs using the canonical ordering. Algorithmica, 16(1):4–32, 1996.
[25] L. Lovász. A homology theory for spanning trees of a graph. Acta Mathematica Hungarica,
30(3-4):241–251, 1977.
[26] L. Lovász. Computing ears and branchings in parallel. In Proceedings of the 26th Annual
Symposium on Foundations of Computer Science (FOCS’85), pages 464–467, 1985.
[27] R. M. McConnell, K. Mehlhorn, S. Näher, and P. Schweitzer. Certifying algorithms. Computer
Science Review, 5(2):119–161, 2011.
[28] K. Mehlhorn, A. Neumann, and J. M. Schmidt. Certifying 3-edge-connectivity. Algorithmica,
to appear 2016.
[29] L. F. Mondshein. Combinatorial Ordering and the Geometric Embedding of Graphs. PhD
thesis, M.I.T. Lincoln Laboratory / Harvard University, 1971. Technical Report available at
www.dtic.mil/cgi-bin/GetTRDoc?AD=AD0732882.
[30] J. M. Schmidt. Construction sequences and certifying 3-connectedness. In Proceedings of
the 27th Symposium on Theoretical Aspects of Computer Science (STACS’10), pages 633–644,
2010.
[31] J. M. Schmidt. Contractions, removals and certifying 3-connectivity in linear time. SIAM
Journal on Computing, 42(2):494–535, 2013.
[32] J. M. Schmidt. A planarity test via construction sequences. In 38th International Symposium
on Mathematical Foundations of Computer Science (MFCS’13), pages 765–776, 2013.
[33] H. Suzuki, N. Takahashi, T. Nishizeki, H. Miyano, and S. Ueno. An algorithm for tripartitioning
3-connected graphs. Information Processing Society of Japan (IPSJ), 31(5):584–592, 1990. (In
Japanese).
[34] C. Thomassen. Kuratowski’s theorem. Journal of Graph Theory, 5(3):225–241, 1981.
[35] W. T. Tutte. How to draw a graph. Proceedings of the London Mathematical Society, 13:743–
767, 1963.
[36] W. T. Tutte. Connectivity in graphs. In Mathematical Expositions, volume 15. University of
Toronto Press, 1966.
[37] K. Wada and K. Kawaguchi. Efficient algorithms for tripartitioning triconnected graphs and
3-edge-connected graphs. In 19th International Workshop on Graph-Theoretic Concepts in
Computer Science (WG’93), pages 132–143, 1993.
[38] H. Whitney. Congruent graphs and the connectivity of graphs. American Journal of Mathematics, 54(1):150–168, 1932.
[39] H. Whitney. Non-separable and planar graphs. Transactions of the American Mathematical
Society, 34(1):339–362, 1932.
[40] A. Zehavi and A. Itai. Three tree-paths. Journal of Graph Theory, 13(2):175–188, 1989.
[41] G. M. Ziegler. Lectures on Polytopes. Springer, 2nd edition, 1998.
In Defense of the Triplet Loss for Person Re-Identification
Alexander Hermans∗, Lucas Beyer∗ and Bastian Leibe
Visual Computing Institute
RWTH Aachen University
arXiv:1703.07737v4 [] 21 Nov 2017
[email protected]
Abstract
In the past few years, the field of computer vision has
gone through a revolution fueled mainly by the advent
of large datasets and the adoption of deep convolutional
neural networks for end-to-end learning. The person re-identification subfield is no exception to this. Unfortunately,
a prevailing belief in the community seems to be that the
triplet loss is inferior to using surrogate losses (classification, verification) followed by a separate metric learning step. We show that, for models trained from scratch as
well as pretrained ones, using a variant of the triplet loss to
perform end-to-end deep metric learning outperforms most
other published methods by a large margin.
1. Introduction
In recent years, person re-identification (ReID) has attracted significant attention in the computer vision community. Especially with the rise of deep learning, many new
approaches have been proposed to achieve this task [40, 8,
42, 31, 39, 10, 52, 4, 46, 20, 54, 35]1 . In many aspects
person ReID is similar to image retrieval, where significant
progress has been made and where deep learning has recently introduced a lot of changes. One prominent example
in the recent literature is FaceNet [29], a convolutional neural network (CNN) used to learn an embedding for faces.
The key component of FaceNet is to use the triplet loss, as
introduced by Weinberger and Saul [41], for training the
CNN as an embedding function. The triplet loss optimizes
the embedding space such that data points with the same
identity are closer to each other than those with different
identities. A visualization of such an embedding is shown
in Figure 1.
Several approaches for person ReID have already used
some variant of the triplet loss to train their models [17,
9, 28, 8, 40, 31, 33, 26, 6, 25], with moderate success. The
Figure 1: A small crop of the Barnes-Hut t-SNE [38] of
our learned embeddings for the Market-1501 test-set. The
triplet loss learns semantically meaningful features.
most successful recent person ReID approaches argue that
a classification loss, possibly combined with a verification
loss, is superior for the task [6, 51, 10, 52, 22]. Typically,
these approaches train a deep CNN using one or multiple
of these surrogate losses and subsequently use a part of the
network as a feature extractor, combining it with a metric
learning approach to generate final embeddings. Both of
these losses have their problems, though. The classification
loss necessitates a growing number of learnable parameters
as the number of identities increases, most of which will
be discarded after training. On the other hand, many of the
networks trained with a verification loss have to be used
in a cross-image representation mode, only answering the
question “How similar are these two images?”. This makes
∗ Equal contribution. Ordering determined by a last minute coin flip.
1 A nice overview of the field is given by a recent survey paper [51].
using them for any other task, such as clustering or retrieval,
prohibitively expensive, as each probe has to go through the
network paired up with every gallery image.
In this paper we show that, contrary to current opinion, a plain CNN with a triplet loss can outperform current
state-of-the-art approaches on the CUHK03 [21], Market-1501 [50] and MARS [49] datasets. The triplet loss allows
us to perform end-to-end learning between the input image
and the desired embedding space. This means we directly
optimize the network for the final task, which renders an additional metric learning step obsolete. Instead, we can simply compare persons by computing the Euclidean distance
of their embeddings.
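As a sketch of this retrieval step, the following function ranks a gallery of embeddings by their Euclidean distance to a probe embedding (embeddings as plain lists of floats; names illustrative).

```python
def rank_gallery(probe, gallery):
    """Return gallery indices sorted by Euclidean distance to the
    probe embedding, as used for retrieval once the network has been
    trained with the triplet loss.
    """
    def dist(a, b):
        return sum((ai - bi) ** 2 for ai, bi in zip(a, b)) ** 0.5

    return sorted(range(len(gallery)), key=lambda i: dist(probe, gallery[i]))
```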
A possible reason for the unpopularity of the triplet loss
is that, when applied naïvely, it will indeed often produce
disappointing results. An essential part of learning using
the triplet loss is the mining of hard triplets, as otherwise
training will quickly stagnate. However, mining such hard
triplets is time consuming and it is unclear what defines
“good” hard triplets [29, 31]. Even worse, selecting too
hard triplets too often makes the training unstable. We show
how this problem can be alleviated, resulting in both faster
training and better performance. We systematically analyze
the design space of triplet losses, and evaluate which one
works best for person ReID. While doing so, we place two
previously proposed variants [9, 32] into this design space
and discuss them in more detail in Section 2. Specifically,
we find that the best performing version has not been used
before. Furthermore, we also show that a margin-less formulation performs slightly better, while removing one hyperparameter.
Another clear trend seems to be the use of pretrained
models such as GoogleNet [36] or ResNet-50 [14]. Indeed, pretrained models often obtain great scores for person
ReID [10, 52], while ever fewer top-performing approaches
use networks trained from scratch [21, 1, 8, 42, 31, 39, 4].
Some authors even argue that training from scratch is
bad [10]. However, using pretrained networks also leads to
a design lock-in, and does not allow for the exploration of
new deep learning advances or different architectures. We
show that, when following best practices in deep learning,
networks trained from scratch can perform competitively
for person ReID. Furthermore, we do not rely on network
components specifically tailored towards person ReID, but
train a plain feed-forward CNN, unlike many other approaches that train from scratch [21, 1, 39, 42, 34, 20, 48].
Indeed, our networks using pretrained weights obtain the
best results, but our far smaller architecture obtains respectable scores, providing a viable alternative for applications where person ReID needs to be performed on
resource-constrained hardware, such as embedded devices.
In summary, our contribution is twofold: First, we introduce variants of the classic triplet loss that render mining of hard triplets unnecessary, and we systematically evaluate these variants. Second, we show how, contrary
to the prevailing opinion, using a triplet loss and no special
layers, we achieve state-of-the-art results both with a pretrained CNN and with a model trained from scratch. This
highlights that a well designed triplet loss has a significant
impact on the result, on par with other architectural novelties, hopefully enabling other researchers to gain the full potential of the previously often dismissed triplet loss.
2. Learning Metric Embeddings, the Triplet Loss, and the Importance of Mining
The goal of metric embedding learning is to learn a function $f_\theta(x): \mathbb{R}^F \to \mathbb{R}^D$ which maps semantically similar points from the data manifold in $\mathbb{R}^F$ onto metrically close points in $\mathbb{R}^D$. Analogously, $f_\theta$ should map semantically different points in $\mathbb{R}^F$ onto metrically distant points in $\mathbb{R}^D$. The function $f_\theta$ is parametrized by $\theta$ and can be anything ranging from a linear transform [41, 23, 45, 28] to complex non-linear mappings usually represented by deep neural networks [9, 8, 10]. Let $D(x, y): \mathbb{R}^D \times \mathbb{R}^D \to \mathbb{R}$ be a metric function measuring distances in the embedding space. For clarity we use the shortcut notation $D_{i,j} = D(f_\theta(x_i), f_\theta(x_j))$, where we omit the indirect dependence of $D_{i,j}$ on the parameters $\theta$. As is common practice, all loss terms are divided by the number of summands in a batch; we omit this term in the following equations for conciseness.
Weinberger and Saul [41] explore this topic with the explicit goal of performing k-nearest neighbor classification in
the learned embedding space and propose the “Large Margin Nearest Neighbor loss” for optimizing fθ :
$$L_\mathrm{LMNN}(\theta) = (1 - \mu)\, L_\mathrm{pull}(\theta) + \mu\, L_\mathrm{push}(\theta), \qquad (1)$$

which consists of a pull-term, pulling data points $i$ towards their target neighbors $T(i)$ from the same class, and a push-term, pushing data points from a different class further away:

$$L_\mathrm{pull}(\theta) = \sum_{i,\; j \in T(i)} D_{i,j}, \qquad (2)$$

$$L_\mathrm{push}(\theta) = \sum_{\substack{a,n \\ y_a \neq y_n}} \big[ m + D_{a,T(a)} - D_{a,n} \big]_+. \qquad (3)$$
Because the motivation was nearest-neighbor classification,
allowing disparate clusters of the same class was an explicit
goal, achieved by choosing fixed target neighbors at the onset of training. Since this property is harmful for retrieval
tasks such as face and person ReID, FaceNet [29] proposed
a modification of LLMNN (θ) called the “Triplet loss”:
$$L_\mathrm{tri}(\theta) = \sum_{\substack{a,p,n \\ y_a = y_p \neq y_n}} \big[ m + D_{a,p} - D_{a,n} \big]_+. \qquad (4)$$
This loss makes sure that, given an anchor point $x_a$, the projection of a positive point $x_p$ belonging to the same class (person) $y_a$ is closer to the anchor's projection than that of a negative point belonging to another class $y_n$, by at least a margin $m$. If this loss is optimized over the whole dataset for long enough, eventually all possible pairs $(x_a, x_p)$ will be seen and be pulled together, making the pull-term redundant. The advantage of this formulation is that, while eventually all points of the same class will form a single cluster, they are not required to collapse to a single point; they merely need to be closer to each other than to any point from a different class.

A major caveat of the triplet loss, though, is that as the dataset gets larger, the possible number of triplets grows cubically, rendering a long enough training impractical. To make matters worse, $f_\theta$ relatively quickly learns to correctly map most trivial triplets, rendering a large fraction of all triplets uninformative. Thus mining hard triplets becomes crucial for learning. Intuitively, being told over and over again that people with differently colored clothes are different persons does not teach one anything, whereas seeing similarly-looking but different people (hard negatives), or pictures of the same person in wildly different poses (hard positives) dramatically helps understanding the concept of “same person”. On the other hand, being shown only the hardest triplets would select outliers in the data disproportionately often and make $f_\theta$ unable to learn “normal” associations, as will be shown in Table 1. Examples of typical hard positives, hard negatives, and outliers are shown in the Supplementary Material. Hence it is common to only mine moderate negatives [29] and/or moderate positives [31]. Regardless of which type of mining is being done, it is a separate step from training and adds considerable overhead, as it requires embedding a large fraction of the data with the most recent $f_\theta$ and computing all pairwise distances between those data points.

In a classical implementation, once a certain set of $B$ triplets has been chosen, their images are stacked into a batch of size $3B$, for which the $3B$ embeddings are computed, which are in turn used to create $B$ terms contributing to the loss. Given the fact that there are up to $6B^2 - 4B$ possible combinations of these $3B$ images that are valid triplets, using only $B$ of them seems wasteful. With this realization, we propose an organizational modification to the classic way of using the triplet loss: the core idea is to form batches by randomly sampling $P$ classes (person identities), and then randomly sampling $K$ images of each class (person), thus resulting in a batch of $PK$ images.² Now, for each sample $a$ in the batch, we can select the hardest positive and the hardest negative samples within the batch when forming the triplets for computing the loss, which we call Batch Hard:

$$L_\mathrm{BH}(\theta; X) = \overbrace{\sum_{i=1}^{P} \sum_{a=1}^{K}}^{\text{all anchors}} \Big[ m + \overbrace{\max_{p=1 \dots K} D\big(f_\theta(x^i_a), f_\theta(x^i_p)\big)}^{\text{hardest positive}} - \underbrace{\min_{\substack{j=1 \dots P,\; n=1 \dots K \\ j \neq i}} D\big(f_\theta(x^i_a), f_\theta(x^j_n)\big)}_{\text{hardest negative}} \Big]_+ \qquad (5)$$
which is defined for a mini-batch $X$, and where a data point $x^i_j$ corresponds to the $j$-th image of the $i$-th person in the batch.

This results in $PK$ terms contributing to the loss, a threefold³ increase over the traditional formulation. Additionally, the selected triplets can be considered moderate triplets, since they are the hardest within a small subset of the data, which is exactly what is best for learning with the triplet loss.
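The batch-hard selection of Eq. (5) is straightforward to implement on top of a pairwise distance matrix. The following NumPy sketch is our own illustration, not the authors' code; the function name and the `soft` flag (which applies the soft-margin softplus variant discussed later in this section) are our inventions:

```python
import numpy as np

def batch_hard_loss(emb, labels, margin=0.2, soft=False):
    """Batch-hard triplet loss (Eq. 5) for a PK batch.

    emb: (N, D) float array of embeddings; labels: (N,) identity labels.
    Illustrative sketch only, not the paper's published code.
    """
    # Pairwise (non-squared) Euclidean distances between all embeddings.
    diff = emb[:, None, :] - emb[None, :, :]
    dist = np.sqrt((diff ** 2).sum(-1) + 1e-12)

    same = labels[:, None] == labels[None, :]
    # Hardest positive per anchor: farthest same-identity sample
    # (the anchor's own zero distance never wins the max when K > 1).
    hardest_pos = np.where(same, dist, -np.inf).max(axis=1)
    # Hardest negative per anchor: closest other-identity sample.
    hardest_neg = np.where(~same, dist, np.inf).min(axis=1)

    x = hardest_pos - hardest_neg
    if soft:
        # Soft margin: softplus ln(1 + e^x), computed stably via log1p.
        per_anchor = np.maximum(x, 0) + np.log1p(np.exp(-np.abs(x)))
    else:
        per_anchor = np.maximum(margin + x, 0.0)
    return per_anchor.mean()
```

Note the use of the actual (non-squared) Euclidean distance, which matches the stability observation made in the Distance Measure discussion below.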
This new formulation of sampling a batch immediately suggests another alternative, that is to simply use all possible $PK(PK - K)(K - 1)$ combinations of triplets, which corresponds to the strategy chosen in [9] and which we call Batch All:
$$L_\mathrm{BA}(\theta; X) = \overbrace{\sum_{i=1}^{P} \sum_{a=1}^{K}}^{\text{all anchors}} \; \overbrace{\sum_{\substack{p=1 \\ p \neq a}}^{K}}^{\text{all pos.}} \; \overbrace{\sum_{\substack{j=1 \\ j \neq i}}^{P} \sum_{n=1}^{K}}^{\text{all negatives}} \Big[ m + d^{\,i,a,p}_{\,j,a,n} \Big]_+, \qquad (6)$$

$$d^{\,i,a,p}_{\,j,a,n} = D\big(f_\theta(x^i_a), f_\theta(x^i_p)\big) - D\big(f_\theta(x^i_a), f_\theta(x^j_n)\big).$$
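The $PK(PK - K)(K - 1)$ triplet count used above can be sanity-checked by brute-force enumeration over a small batch; the helper below is our own illustration:

```python
from itertools import product

def count_valid_triplets(P, K):
    """Brute-force count of valid (anchor, positive, negative) index
    triplets in a batch of P identities with K images each."""
    labels = [p for p in range(P) for _ in range(K)]
    n = len(labels)
    return sum(
        1
        for a, p, q in product(range(n), repeat=3)
        if a != p and labels[a] == labels[p] and labels[a] != labels[q]
    )
```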
At this point, it is important to note that both LBH and
LBA still exactly correspond to the standard triplet loss
in the limit of infinite training. Both the max and min
functions are continuous and differentiable almost everywhere, meaning they can be used in a model trained by
stochastic (sub-)gradient descent without concern. In fact,
they are already widely available in popular deep-learning
frameworks for the implementation of max-pooling and the
ReLU [11] non-linearity.
Most similar to our batch hard and batch all losses is the Lifted Embedding loss [32], which fills the batch with triplets but considers all but the anchor-positive pair as negatives:

$$L_\mathrm{L}(\theta; X) = \sum_{(a,p) \in X} \Big[ D_{a,p} + \log \sum_{\substack{n \in X \\ n \neq a,\, n \neq p}} \big( e^{m - D_{a,n}} + e^{m - D_{p,n}} \big) \Big]_+.$$

² In all experiments we choose $B$, $P$, and $K$ in such a way that $3B$ is close to $PK$, e.g. $3 \cdot 42 \approx 32 \cdot 4$.
³ Because $PK \approx 3B$, see footnote 2.
While [32] motivates a “hard”-margin loss similar to $L_\mathrm{BH}$ and $L_\mathrm{BA}$, they end up optimizing the smooth bound of it given in the above equation. Additionally, traditional $3B$ batches are considered, thus using all possible negatives, but only one positive pair per triplet. This leads us to propose a generalization of the Lifted Embedding loss based on $PK$ batches which considers all anchor-positive pairs as follows:
$$L_\mathrm{LG}(\theta; X) = \overbrace{\sum_{i=1}^{P} \sum_{a=1}^{K}}^{\text{all anchors}} \Big[ \log \overbrace{\sum_{\substack{p=1 \\ p \neq a}}^{K}}^{\text{all positives}} e^{D(f_\theta(x^i_a),\, f_\theta(x^i_p))} + \log \underbrace{\sum_{\substack{j=1 \\ j \neq i}}^{P} \sum_{n=1}^{K}}_{\text{all negatives}} e^{m - D(f_\theta(x^i_a),\, f_\theta(x^j_n))} \Big]_+ \qquad (7)$$

Distance Measure. Throughout this section, we have referred to $D(a, b)$ as the distance function between $a$ and $b$ in the embedding space. In most related works, the squared Euclidean distance $D(f_\theta(x_i), f_\theta(x_j)) = \lVert f_\theta(x_i) - f_\theta(x_j) \rVert_2^2$ is used as metric, although nothing in the above loss definitions precludes using any other (sub-)differentiable distance measure. While we do not have a side-by-side comparison, we noticed during initial experiments that using the squared Euclidean distance made the optimization more prone to collapsing, whereas using the actual (non-squared) Euclidean distance was more stable. We hence used the Euclidean distance throughout all our experiments presented in this paper. In addition, squaring the Euclidean distance makes the margin parameter less interpretable, as it does not represent an absolute distance anymore.

Note that when forcing the embedding's norm to one, using the squared Euclidean distance corresponds to using the cosine-similarity, up to a constant factor of two. We did not use a normalizing layer in any of our final experiments. For one, it does not dramatically regularize the network by reducing the available embedding space: the space spanned by all $D$-dimensional vectors of fixed norm is still a $(D-1)$-dimensional volume. Worse, an output-normalization layer can actually hide problems in the training, such as slowly collapsing or exploding embeddings.

Soft-margin. The role of the hinge function $[m + \bullet]_+$ is to avoid correcting “already correct” triplets. But in person ReID, it can be beneficial to pull together samples from the same class as much as possible [45, 8], especially when working on tracklets such as in MARS [49]. For this purpose, it is possible to replace the hinge function by a smooth approximation using the softplus function: $\ln(1 + \exp(\bullet))$, for which numerically stable implementations are commonly available as log1p. The softplus function has similar behavior to the hinge, but it decays exponentially instead of having a hard cut-off; we hence refer to it as the soft-margin formulation.

Summary. In summary, the novel contributions proposed in this paper are the batch hard loss and its soft margin version. In the following section we evaluate them experimentally and show that, for ReID, they achieve superior performance compared to both the traditional triplet loss and the previously published variants of it [9, 32].

3. Experiments
Our experimental evaluation is split up into three main
parts. The first section evaluates different variations of the
triplet loss, including some hyper-parameters, and identifies the setting that works best for person ReID. This evaluation is performed on a train/validation split we create
based on the MARS training set. The second section shows
the performance we can attain based on the selected variant of the triplet loss. We show state-of-the-art results on
the CUHK03, Market-1501 and MARS test sets, based on
a pretrained network and a network trained from scratch.
Finally, the third section discusses advantages of training
models from scratch with respect to real-world use cases.
3.1. Datasets
We focus on the Market-1501 [50] and MARS [49]
datasets, the two largest person ReID datasets currently
available. The Market-1501 dataset contains bounding
boxes from a person detector which have been selected
based on their intersection-over-union overlap with manually annotated bounding boxes. It contains 32 668 images
of 1501 persons, split into train/test sets of 12 936/19 732
images as defined by [50]. The dataset uses both single- and multi-query evaluation; we report numbers for both.
The MARS dataset originates from the same raw data as
the Market-1501 dataset; however, a significant difference
is that the MARS dataset does not have any manually annotated bounding boxes, reducing the annotation overhead.
MARS consists of “tracklets” which have been grouped into
person IDs manually. It contains 1 191 003 images split
into train/test sets of 509 914/681 089 images, as defined
by [49]. Here, person ReID is no longer performed on a
frame-to-frame level, but instead on a tracklet-to-tracklet
level, where feature embeddings are pooled across a tracklet, thus it is inherently a multi-query setup.
We use the standard evaluation metrics for both datasets,
namely the mean average precision score (mAP) and the
cumulative matching curve (CMC) at rank-1 and rank-5. To
compute these scores we use the evaluation code provided
by [55].
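For reference, both metrics can be computed per query as in the minimal sketch below. This shows only the standard definitions (the actual evaluation code of [55] additionally handles cross-camera matching rules); the reported mAP is the mean of the per-query AP values:

```python
import numpy as np

def average_precision(ranked_matches):
    """AP for a single query: `ranked_matches` is a boolean list over the
    gallery, sorted by increasing embedding distance to the query."""
    hits, precisions = 0, []
    for rank, is_match in enumerate(ranked_matches, start=1):
        if is_match:
            hits += 1
            precisions.append(hits / rank)
    return float(np.mean(precisions)) if precisions else 0.0

def cmc_at_k(ranked_matches, k):
    """CMC at rank k: 1 if any correct match occurs in the top-k results."""
    return int(any(ranked_matches[:k]))
```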
Additionally, we show results on the CUHK03 [21]
dataset for our pretrained network, using the single shot
setup and average over the provided 20 train/test splits.
3.2. Training
Unless specifically noted otherwise, we use the same
training procedure across all experiments and on all
datasets. We performed all our experiments using the
Theano [5] framework, code is available at redacted.
We use the Adam optimizer [18] with the default hyperparameter values ($\epsilon = 10^{-3}$, $\beta_1 = 0.9$, $\beta_2 = 0.999$) for
most experiments. During initial experiments on our own
MARS validation split (see Sec. 3.4), we ran multiple experiments for a very long time and monitored the loss and
mAP curves. With this information, we decided to fix the
following exponentially decaying training schedule, which
does not disadvantage any setup, for all experiments presented in this paper:
$$\epsilon(t) = \begin{cases} \epsilon_0 & \text{if } t \le t_0 \\ \epsilon_0 \cdot 0.001^{\frac{t - t_0}{t_1 - t_0}} & \text{if } t_0 \le t \le t_1 \end{cases} \qquad (8)$$

with $\epsilon_0 = 10^{-3}$, $t_0 = 15\,000$, and $t_1 = 25\,000$, stopping training when reaching $t_1$. We also set $\beta_1 = 0.5$ when entering the decay schedule at $t_0$, as is common practice [2].
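Eq. (8) translates directly into code; this small helper (the function name is ours) reproduces the schedule with the constants given above:

```python
def learning_rate(t, eps0=1e-3, t0=15_000, t1=25_000):
    """Exponentially decaying learning rate schedule of Eq. (8):
    constant until t0, then decayed by a total factor of 0.001 by t1."""
    if t <= t0:
        return eps0
    return eps0 * 0.001 ** ((t - t0) / (t1 - t0))
```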
In the Supplementary Material, we provide a detailed
discussion of various interesting effects we regularly observed during training, providing hands-on guidance for
other researchers.
3.3. Network Architectures
For our main results we use two different architectures,
one based on a pretrained network and one which we train
from scratch.
Pretrained. We use the ResNet-50 architecture and the
weights provided by He et al. [14]. We discard the last layer
and add two fully connected layers for our task. The first
has 1024 units, followed by batch normalization [16] and
ReLU [11], the second goes down to 128 units, our final
embedding dimension. Trained with our batch hard triplet
loss, we call this model TriNet. Due to the size of this network (25.74 M parameters), we had to limit our batch size
to 72, containing P = 18 persons with K = 4 images each.
For these pretrained experiments, $\epsilon_0 = 10^{-3}$ proved to be too high, causing the models to diverge within a few iterations. We thus reduced $\epsilon_0$ to $3 \cdot 10^{-4}$, which worked fine on
all datasets.
Trained from Scratch. To show that training from scratch
does not necessarily result in poor performance, we also designed a network called LuNet which we train from scratch.
LuNet follows the style of ResNet-v2, but uses leaky ReLU nonlinearities, multiple 3 × 3 max-poolings with stride 2 instead of strided convolutions, and omits the final average-pooling of feature maps in favor of a channel-reducing final res-block. An in-depth description of the architecture
is given in the Supplementary Material. As the network is
much more lightweight (5.00 M parameters) than its pretrained sibling, we sample batches of size 128, containing
P = 32 persons with K = 4 images each.
3.4. Triplet Loss
Our initial experiments test the different variants of
triplet training that we discussed in Sec. 2. In order not to
perform model-selection on the test set, we randomly sample a validation set of 150 persons from the MARS training
set, leaving the remaining 475 persons for training. In order
to make this exploration tractable, we run all of these experiments using the smaller LuNet trained from scratch on images downscaled by a factor of two. Since our goal here is
to explore triplet loss formulations, as opposed to reaching
top performance, we do not perform any data augmentation
in these experiments.
Table 1 shows the resulting mAP and rank-1 scores for
the different formulations at multiple margin values, and
with a soft-margin where applicable. Consistent with results reported in several recent papers [10, 9, 6], the vanilla
triplet loss with randomly sampled triplets performs poorly.
When performing simple offline hard-mining (OHM), the
scores sometimes increase dramatically, but the training
also fails to learn useful embeddings for multiple margin
values. This problem is well-known [29, 31] and has been
discussed in Sec. 2. While the idea of learning embeddings using triplets is theoretically pleasing, this practical finickiness, coupled with the considerable increase in
training time due to non-parallelizable offline mining (from
7 h to 20 h in our experiments), makes learning with vanilla
triplets rather unattractive.
Considering the long training times, it is nice to see that
all proposed triplet re-formulations perform similarly to or
better than the best OHM run. The key observation is that
the (semi) hard-mining happens within the batch and thus
comes at almost no additional runtime cost.
Perhaps surprisingly, the batch hard variant (Eq. 5) consistently outperforms the batch all variant (Eq. 6) previously
used by several authors [9, 40]. We suspect this is due to the
fact that in the latter, many of the possible triplets in a batch contribute a loss of zero, essentially “washing out” the few useful contributing terms during averaging. To test this hypothesis, we also
ran experiments where we only average the non-zero loss
terms (marked by 6= 0 in Table 1); this performs much better
in the batch all case. Another interpretation of this modification is that it dynamically increases the weight of triplets which remain active as they get fewer.

| Loss (mAP / rank-1)     | margin 0.1    | margin 0.2    | margin 0.5    | margin 1.0    | soft margin   |
| Triplet (Ltri)          | 40.80 / 59.23 | 41.71 / 60.78 | 43.51 / 60.87 | 43.61 / 61.63 | 48.40 / 66.37 |
| Triplet (Ltri) + OHM    | 16.6* / 36.6* | 61.40 / 82.95 | 32.0* / 57.1* | 41.45 / 59.42 | 46.63 / 65.43 |
| Batch hard (LBH)        | 65.09 / 83.51 | 65.27 / 84.55 | 65.12 / 83.39 | 63.78 / 82.48 | 65.77 / 84.69 |
| Batch hard (LBH≠0)      | 63.10 / 83.04 | 64.19 / 83.42 | 63.71 / 82.29 | 64.06 / 84.50 | 61.04 / 80.65 |
| Batch all (LBA)         | 59.43 / 79.24 | 60.48 / 79.99 | 60.30 / 79.52 | 62.08 / 80.55 | –             |
| Batch all (LBA≠0)       | 63.29 / 83.65 | 64.31 / 83.37 | 64.41 / 83.98 | 64.06 / 82.90 | –             |
| Lifted 3-pos. (LLG)     | 64.00 / 82.71 | 63.87 / 82.86 | 63.61 / 84.55 | 64.02 / 84.17 | –             |
| Lifted 1-pos. (LL) [32] | 61.95 / 81.35 | 63.68 / 81.73 | 63.01 / 82.48 | 62.28 / 82.34 | –             |

Table 1: LuNet scores on our MARS validation split. Each cell shows mAP / rank-1. The best performing loss at a given margin is bold, the best margin for a given loss is italic, and the overall best combination is highlighted in green. A * denotes runs trapped in a bad local optimum.
The lifted triplet loss LL as introduced by [32] performs
competitively, but is slightly worse than most other formulations. As can be seen in the table, our generalization to
multiple positives (Eq. 7), which makes it more similar to
the batch all variant of the triplet, improves upon it overall.
The best score was obtained by the soft-margin variation
of the batch hard loss. We use this loss in all our further
triplet experiments. To clarify, here we merely seek the best
triplet loss variation for person ReID, but do not claim that
this variant works best across all fields. For other tasks such
as image retrieval or clustering, additional experiments will
have to be performed.
3.5. Performance Evaluation
Here, we present the main experiments of this paper.
We perform all following experiments using the batch hard
variant LBH of the triplet loss and the soft margin, since this
setup performed best during the exploratory phase.
Batch Generation and Augmentation. Since our batch
hard triplet loss requires slightly different mini-batches, we
sample random P K-style batches by first randomly sampling P person identities uniformly without replacement.
For each person, we then sample K images, without replacement whenever possible, otherwise replicating images
as necessary.
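The PK sampling just described can be sketched as follows; this is a hypothetical helper of our own, since the paper does not publish this exact routine:

```python
import random

def sample_pk_batch(images_by_person, P, K, rng=random):
    """Sample P identities and then K images per identity.

    images_by_person: dict mapping person id -> list of image paths.
    Identities and images are drawn without replacement; images are
    replicated when a person has fewer than K images available.
    """
    pids = rng.sample(sorted(images_by_person), P)
    batch = []
    for pid in pids:
        imgs = images_by_person[pid]
        if len(imgs) >= K:
            chosen = rng.sample(imgs, K)
        else:
            chosen = list(imgs) + rng.choices(imgs, k=K - len(imgs))
        batch.extend((pid, img) for img in chosen)
    return batch
```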
We follow common practice by using random crops and
random horizontal flips during training [19, 1]. Specifically, we resize all images of size $H \times W$ to $1\frac{1}{8}(H \times W)$,
of which we take random crops of size H × W , keeping
their aspect ratio intact. For all pretrained networks we
set H = 256, W = 128 on Market-1501 and MARS and
H = 256, W = 96 on CUHK03, whereas for the networks
trained from scratch we set H = 128, W = 64.
We apply test-time augmentation in all our experiments.
Following [19], we deterministically average over the em-
beddings from five crops and their flips. This typically gives
an improvement of 3% in the mAP score; a more detailed
analysis can be found in the Supplementary Material.
Combination of Embeddings. For test-time augmentation, multi-query evaluation, and tracklet-based evaluation,
the embeddings of multiple images need to be combined.
While the learned clusters have no reason to be Gaussian,
their convex hull is trained to only contain positive points.
Thus, a convex combination of multiple embeddings cannot
get closer to a negative embedding than any of the original
ones, which is not the case for a non-convex combination
such as max-pooling. For this reason, we suggest combining triplet-based embeddings by using their mean. For example, combining tracklet-embeddings using max-pooling
led to an 11.4% point decrease in mAP on MARS.
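A toy illustration of this argument, with made-up 2-D embeddings of our own choosing: averaging keeps the combined embedding inside the convex hull of a person's embeddings, while element-wise max-pooling can produce a point outside that hull which lands right next to a negative:

```python
import numpy as np

# Two embeddings of the same person, and a nearby negative embedding.
e1 = np.array([0.0, 1.0])
e2 = np.array([1.0, 0.0])
neg = np.array([1.2, 1.2])

mean_emb = (e1 + e2) / 2        # convex combination (mean pooling)
max_emb = np.maximum(e1, e2)    # element-wise max pooling -> (1.0, 1.0)

def dist(a, b):
    # Euclidean distance in the embedding space.
    return float(np.linalg.norm(a - b))
```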
Comparison to State-of-the-Art. Tables 2 and 3 compare
our results to a set of related, top performing approaches on
Market-1501 and MARS, and CUHK03, respectively. We
do not include approaches which are orthogonal to ours and
could be integrated in a straightforward manner, such as
various re-ranking schemes, data augmentation, and regularization [3, 43, 54, 7, 47, 53, 56]. These are included in
more exhaustive tables in the Supplementary Material. The
different approaches are categorized into three major types:
Identification models (I) that are trained to classify person
IDs, Verification models (V) that learn whether an image
pair represents the same person, and methods such as ours
that directly learn an Embedding (E).
We present results for both our pretrained network
(TriNet) and the network we trained from scratch (LuNet).
As can clearly be seen, TriNet outperforms all current methods. Especially striking is the jump from 41.5% mAP, obtained by another ResNet-50 model trained with triplets
(DTL [10]), to our 69.14% mAP score in Table 2. Since
Geng et al. [10] do not discuss all details of their training procedure when using the triplet loss, we can only speculate about the reasons for the large performance gap.
| Method                     | Type | Market-1501 SQ (mAP / r-1 / r-5) | Market-1501 MQ (mAP / r-1 / r-5) | MARS (mAP / r-1 / r-5) |
| TriNet                     | E    | 69.14 / 84.92 / 94.21 | 76.42 / 90.53 / 96.29 | 67.70 / 79.80 / 91.36 |
| LuNet                      | E    | 60.71 / 81.38 / 92.34 | 69.07 / 87.11 / 95.16 | 60.48 / 75.56 / 89.70 |
| IDE (R) + ML ours          | I    | 58.06 / 78.50 / 91.18 | 67.48 / 85.45 / 94.12 | 57.42 / 72.42 / 86.21 |
| LOMO + Null Space [45]     | E    | 29.87 / 55.43 / –     | 46.03 / 71.56 / –     | –                     |
| Gated siamese CNN [39]     | V    | 39.55 / 65.88 / –     | 48.45 / 76.04 / –     | –                     |
| CAN [25]                   | E    | 35.9 / 60.3 / –       | 47.9 / 72.1 / –       | –                     |
| JLML [22]                  | I    | 65.5 / 85.1 / –       | 74.5 / 89.7 / –       | –                     |
| ResNet 50 (I+V)† [52]      | I+V  | 59.87 / 79.51 / 90.91 | 70.33 / 85.84 / 94.54 | –                     |
| DTL† [10]                  | E    | 41.5 / 63.3 / –       | 49.7 / 72.4 / –       | –                     |
| DTL† [10]                  | I+V  | 65.5 / 83.7 / –       | 73.8 / 89.6 / –       | –                     |
| APR (R, 751)† [24]         | I    | 64.67 / 84.29 / 93.20 | –                     | –                     |
| Latent Parts (Fusion) [20] | I    | 57.53 / 80.31 / –     | 66.70 / 86.79 / –     | 56.05 / 71.77 / 86.57 |
| IDE (R) + ML [55]          | I    | 49.05 / 73.60 / –     | –                     | 55.12 / 70.51 / –     |
| spatial temporal RNN [57]  | E    | –                     | –                     | 50.7 / 70.6 / 90.0    |
| CNN + Video† [46]          | I    | –                     | –                     | – / 55.5 / 70.2       |
| TriNet (Re-ranked)         | E    | 81.07 / 86.67 / 93.38 | 87.18 / 91.75 / 95.78 | 77.43 / 81.21 / 90.76 |
| LuNet (Re-ranked)          | E    | 75.62 / 84.59 / 91.89 | 82.61 / 89.31 / 94.48 | 73.68 / 78.48 / 88.74 |
| IDE (R) + ML ours (Re-ra.) | I    | 71.38 / 81.62 / 89.88 | 79.78 / 86.79 / 92.96 | 69.50 / 74.39 / 85.86 |
| IDE (R) + ML (Re-ra.) [55] | I    | 63.63 / 77.11 / –     | –                     | 68.45 / 73.94 / –     |

Table 2: Scores on both the Market-1501 and MARS datasets. Each cell shows mAP / rank-1 / rank-5. The top and middle contain our scores and those of the current state-of-the-art respectively. The bottom contains several methods with re-ranking [55]. The different types represent the optimization criteria, where I stands for identification, V for verification and E for embedding. All our scores include test-time augmentation. The best scores are bold. †: Concurrent work only published on arXiv.
Our LuNet model, which we train from scratch, also performs very competitively, matching or outperforming most
other baselines. While it does not quite reach the performance of our pretrained model, our results clearly show that
with proper training, the flexibility of training models from
scratch (see Sec. 3.6) should not be discarded.
To show that the actual performance boost is indeed
gained by the triplet loss and not by other design choices,
we train a ResNet-50 model with a classification loss. This
model is very similar to the one used in [55] and we thus
refer to it as “IDE (R) ours”, for which we also apply a
metric learning step (XQDA [23]). Unfortunately, especially difficult images caused frequent spikes in the loss,
which ended up harming the optimization using Adam. After unsuccessfully trying lower learning rates and clipping
extreme loss values, we resorted to Adadelta [44], another
competitive optimization algorithm which did not exhibit
these problems. While we combine embeddings through average pooling for our triplet-based models, we found max-pooling and normalization to work better for the classification baseline, consistent with results reported in [49]. As
Table 2 shows, the performance of the resulting model “IDE
(R) ours” is still on par with similar models in the literature. However, the large gap between the identification-based model and our TriNet clearly demonstrates the advantages of using a triplet loss.
In line with the general trend in the vision community, all
deep learning methods outperform shallow methods using
hand-crafted features. While Table 2 only shows [45] as a
non-deep learning method, to the best of our knowledge all
others perform worse.
We also evaluated how our models fare when combined
with a recent re-ranking approach by Zhong et al. [55]. This
approach can be applied on top of any ranking methods
and uses information from nearest neighbors in the gallery
to improve the ranking result. As Table 2 shows, our approaches go well with this method and show similar improvements to those obtained by Zhong et al. [55].
Finally, we evaluate our models on Market-1501 with the
provided 500k additional distractor images. The full experiment is described in the Supplementary Material. Even with
these additional distractors, our triplet-based model outperforms a classification one by 8.4% mAP.
All of these results show that triplet loss embeddings are
indeed a valuable tool for person ReID and we expect them
to significantly change the way how research will progress
in this field.
| Method                     | Type | Labeled (r-1 / r-5) | Detected (r-1 / r-5) |
| TriNet                     | E    | 89.63 / 99.01       | 87.58 / 98.17        |
| Gated siamese CNN [39]     | V    | –                   | 61.8 / 86.7          |
| LOMO + Null Space [45]     | E    | 62.55 / 90.05       | 54.70 / 84.75        |
| CAN [25]                   | E    | 77.6 / 95.2         | 69.2 / 88.5          |
| Latent Parts (Fusion) [20] | I    | 74.21 / 94.33       | 67.99 / 91.04        |
| Spindle Net* [48]          | I    | 88.5 / 97.8         | –                    |
| JLML [22]                  | I    | 83.2 / 98.0         | 80.6 / 96.9          |
| DTL† [10]                  | I+V  | 85.4 / –            | 84.1 / 97.1          |
| ResNet 50 (I+V)† [52]      | I+V  | –                   | 83.4 / –             |

Table 3: Scores on CUHK03 for TriNet and a set of recent top performing methods. Each cell shows rank-1 / rank-5. The best scores are highlighted in bold. †: Concurrent work only published on arXiv. *: The method was trained on several additional datasets.
| Input size | TriNet (mAP / rank-1 / rank-5) | LuNet (mAP / rank-1 / rank-5) |
| 256 × 128  | 69.14 / 84.92 / 94.21          | –                             |
| 128 × 64   | 62.52 / 79.45 / 91.06          | 60.71 / 81.38 / 92.34         |
| 64 × 32    | 47.42 / 68.08 / 85.84          | 57.18 / 78.21 / 90.94         |

Table 4: The effect of input size on mAP and CMC scores.
3.6. To Pretrain or not to Pretrain?
As mentioned before, many methods for person ReID
rely on pretrained networks, following a general trend in the
computer vision community. Indeed, these models lead to
impressive results, as we also confirmed in this paper with
our TriNet model. However, pretrained networks reduce the
flexibility to try out new advances in deep learning or to
make task-specific changes in a network. Our LuNet model
clearly suggests that it is also possible to train models from
scratch and obtain competitive scores.
In particular, an interesting direction for ReID could be
the usage of additional input channels such as depth information, readily available from cheap consumer hardware.
However it is unclear how to best integrate such input data
into a pretrained network in a proper way.
Furthermore, the typical pretrained networks are designed with accuracy in mind and do not focus on the memory footprint or the runtime of a method. Both are important factors for real-world robotic scenarios, where typically
power consumption is a constraint and only less powerful
hardware can be considered [12, 37]. When designing a
network from scratch, one can directly take this into consideration and create networks with a smaller memory footprint and faster evaluation times.
In principle, our pretrained model can easily be sped up
by using half or quarter size input images, since the global
average pooling in the ResNet will still produce an output
vector of the same shape. This, however, goes hand in hand
with the question of how to best adapt a pretrained network
to a new task with different image sizes. The typical way of
leveraging pretrained networks is to simply stretch images
to the fixed expected input size used to train the network,
typically 224 × 224 pixels. We used 256 × 128 instead
in order to preserve the aspect ratio of the original image.
However, for the Market-1501 dataset, this meant we had
to upscale the images, while if we do not confine ourselves
to pretrained networks we can simply adjust our architecture to the dataset size, as we did in the LuNet architecture.
However, we hypothesize that a pretrained network has an
“intrinsic scale,” for which the learned filters work properly and thus simply using smaller input images will result
in suboptimal performance. To show this, we retrain our
TriNet with 128×64 and 64×32 images. As Table 4 clearly
shows, the performance drops rapidly. At the original image
scale, our LuNet model can almost match the mAP score of
TriNet and already outperforms it when considering CMC
scores. At an even smaller image size, LuNet significantly
outperforms the pretrained model. Since the LuNet performance only drops by about 3%, the small images still
hold enough data to perform ReID, but the rather rigid pretrained weights can no longer adjust to such a data change.
This shows that pretrained models are not a solution for arbitrary tasks, especially when one wants to train lightweight
models for small images.
4. Discussion
We are not the first to use the triplet loss for person ReID.
Ding et al. [9] and Wang et al. [40] use a batch generation and loss formulation which is very similar to our batch
all formulation. Wang et al. [40] further combine it with a
pairwise verification loss. However, in the batch all case, it
was important for us to average only over the active triplets
(LBA6=0 ), which they do not mention. This, in combination
with their rather small networks, might explain their relatively low scores. Cheng et al. [8] propose an “improved
triplet loss” by introducing another pull term into the loss,
penalizing large distances between positive images. This
formulation is in fact very similar to the original one by
Weinberger and Saul [41]. We briefly experimented with
a pull term, but the additional weighting hyper-parameter
was not trivial to optimize and it did not improve our results. Several authors suggest learning attributes and ReID
jointly [17, 33, 24, 30], some of which integrate this into
their embedding dimensions. This is an interesting research
direction orthogonal to our work. Several other authors also
defined losses over triplets of images [28, 31, 26]; however, they use losses different from the triplet loss we defend in this paper, possibly explaining their lower scores.
Finally, FaceNet [29] uses a huge batch with moderate mining, which can only be done on the CPU, whereas we advocate hard mining in a small batch, which has a similar effect
to moderate mining in a large batch, while fitting on a GPU
and thus making training significantly more affordable.
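For concreteness, the batch-all formulation and the average over only the active triplets (L_BA≠0) discussed above can be sketched as follows. This is an illustrative NumPy mock-up with our own function names, not the implementation used in the paper:

```python
import numpy as np

def pairwise_dist(emb):
    # Euclidean distance matrix between all embeddings in the batch.
    diff = emb[:, None, :] - emb[None, :, :]
    return np.sqrt((diff ** 2).sum(-1) + 1e-12)

def batch_all_triplet_loss(emb, labels, margin=0.2, only_active=True):
    # Enumerate all valid (anchor, positive, negative) triplets in the batch
    # and collect their hinge losses max(margin + d_ap - d_an, 0).
    D = pairwise_dist(emb)
    n = len(labels)
    losses = []
    for a in range(n):
        for p in range(n):
            if p == a or labels[p] != labels[a]:
                continue
            for neg in range(n):
                if labels[neg] == labels[a]:
                    continue
                losses.append(max(margin + D[a, p] - D[a, neg], 0.0))
    losses = np.array(losses)
    if only_active:
        # L_BA≠0: average only over the triplets with non-zero loss.
        active = losses[losses > 1e-5]
        return float(active.mean()) if len(active) else 0.0
    # Plain batch-all: average over every triplet, active or not.
    return float(losses.mean()) if len(losses) else 0.0
```

Averaging over all triplets dilutes the signal once most triplets become easy, which is exactly the effect the active-triplet average avoids.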
5. Conclusion
In this paper we have shown that, contrary to the prevailing belief, the triplet loss is an excellent tool for person
re-identification. We propose a variant that no longer requires offline hard negative mining at almost no additional
cost. Combined with a pretrained network, we set the new
state-of-the-art on three of the major ReID datasets. Furthermore, we show that training networks from scratch can
lead to very competitive scores. We hope that in future work
the ReID community will build on top of our results and
shift more towards end-to-end learning.
Acknowledgments.
This work was funded, in part,
by ERC Starting Grant project CV-SUPER (ERC-2012-StG-307432) and the EU project STRANDS (ICT-2011-600623). We would also like to thank the authors of the
Market-1501 and MARS datasets.
References
[1] E. Ahmed, M. Jones, and T. K. Marks. An Improved
Deep Learning Architecture for Person Re-Identification. In
CVPR, pages 3908–3916, 2015. 2, 6
[2] J. L. Ba, J. R. Kiros, and G. E. Hinton. Layer Normalization.
arXiv preprint arXiv:1607.06450, 2016. 5
[3] S. Bai, X. Bai, and Q. Tian. Scalable person re-identification
on supervised smoothed manifold. CVPR, 2017. 6, 15, 16
[4] I. B. Barbosa, M. Cristani, B. Caputo, A. Rognhaugen, and
T. Theoharis. Looking Beyond Appearances: Synthetic
Training Data for Deep CNNs in Re-identification. arXiv
preprint arXiv:1701.03153, 2017. 1, 2
[5] F. Bastien, P. Lamblin, R. Pascanu, J. Bergstra, I. J. Goodfellow, A. Bergeron, N. Bouchard, and Y. Bengio. Theano:
new features and speed improvements. In NIPS W., 2012. 5
[6] W. Chen, X. Chen, J. Zhang, and K. Huang. A Multi-task
Deep Network for Person Re-identification. AAAI, 2017. 1,
5
[7] Y. Chen, X. Zhu, and S. Gong. Person Re-Identification
by Deep Learning Multi-Scale Representations. In ICCV W.
on Cross-domain Human Identification, 2016. 6, 15, 16
[8] D. Cheng, Y. Gong, S. Zhou, J. Wang, and N. Zheng. Person
Re-Identification by Multi-Channel Parts-Based CNN with
Improved Triplet Loss Function. In CVPR, 2016. 1, 2, 4, 8
[9] S. Ding, L. Lin, G. Wang, and H. Chao. Deep feature learning with relative distance comparison for person
re-identification. Pattern Recognition, 48(10):2993–3003,
2015. 1, 2, 3, 4, 5, 8
[10] M. Geng, Y. Wang, T. Xiang, and Y. Tian. Deep Transfer Learning for Person Re-identification. arXiv preprint
arXiv:1611.05244, 2016. 1, 2, 5, 6, 7, 8, 15, 16
[11] X. Glorot, A. Bordes, and Y. Bengio. Deep Sparse Rectifier
Neural Networks. In AISTATS, 2011. 3, 5, 16
[12] N. Hawes, C. Burbridge, F. Jovan, L. Kunze, B. Lacerda,
L. Mudrová, J. Young, J. Wyatt, D. Hebesberger, T. Körtner,
et al. The STRANDS Project: Long-Term Autonomy in Everyday Environments. RAM, 24(3):146–156, 2017. 8
[13] K. He, X. Zhang, S. Ren, and J. Sun. Delving Deep Into Rectifiers: Surpassing Human-Level Performance on ImageNet
Classification. In ICCV, pages 1026–1034, 2015. 16
[14] K. He, X. Zhang, S. Ren, and J. Sun. Deep Residual Learning
for Image Recognition. In CVPR, 2016. 2, 5, 11
[15] K. He, X. Zhang, S. Ren, and J. Sun. Identity Mappings in
Deep Residual Networks. In ECCV, 2016. 16
[16] S. Ioffe and C. Szegedy. Batch normalization: Accelerating
deep network training by reducing internal covariate shift. In
ICML, 2015. 5, 16
[17] S. Khamis, C.-H. Kuo, V. K. Singh, V. D. Shet, and L. S.
Davis. Joint Learning for Attribute-Consistent Person Re-Identification. In ECCV, 2014. 1, 8
[18] D. P. Kingma and J. Ba. Adam: A Method for Stochastic
Optimization. In ICLR, 2015. 5
[19] A. Krizhevsky, I. Sutskever, and G. E. Hinton. ImageNet
Classification with Deep Convolutional Neural Networks. In
NIPS, pages 1097–1105, 2012. 6, 11
[20] D. Li, X. Chen, Z. Zhang, and K. Huang. Learning Deep
Context-aware Features over Body and Latent Parts for Person Re-identification. In CVPR, 2017. 1, 2, 7, 8, 15, 16
[21] W. Li, R. Zhao, T. Xiao, and X. Wang. DeepReID: Deep
Filter Pairing Neural Network for Person Re-Identification.
In CVPR, 2014. 2, 5
[22] W. Li, X. Zhu, and S. Gong. Person Re-Identification by
Deep Joint Learning of Multi-Loss Classification. In IJCAI,
2017. 1, 7, 8, 15, 16
[23] S. Liao, Y. Hu, X. Zhu, and S. Z. Li. Person Re-identification
by Local Maximal Occurrence Representation and Metric
Learning. In CVPR, 2015. 2, 7
[24] Y. Lin, L. Zheng, Z. Zheng, Y. Wu, and Y. Yang. Improving person re-identification by attribute and identity learning.
arXiv preprint arXiv:1703.07220, 2017. 7, 8, 15
[25] H. Liu, J. Feng, M. Qi, J. Jiang, and S. Yan. End-to-End Comparative Attention Networks for Person Re-identification. Trans. Image Proc., 26(7):3492–3506, 2017.
1, 7, 8, 15, 16
[26] J. Liu, Z.-J. Zha, Q. Tian, D. Liu, T. Yao, Q. Ling, and T. Mei.
Multi-Scale Triplet CNN for Person Re-Identification. In
ACMMM, 2016. 1, 8
[27] A. L. Maas, A. Y. Hannun, and A. Y. Ng. Rectifier Nonlinearities Improve Neural Network Acoustic Models. In ICML,
2013. 16
[28] S. Paisitkriangkrai, C. Shen, and A. van den Hengel. Learning to rank in person re-identification with metric ensembles.
In CVPR, 2015. 1, 2, 8
[29] F. Schroff, D. Kalenichenko, and J. Philbin. FaceNet: A
Unified Embedding for Face Recognition and Clustering. In
CVPR, 2015. 1, 2, 3, 5, 8
[30] A. Schumann and R. Stiefelhagen. Person Re-Identification
by Deep Learning Attribute-Complementary Information. In
CVPR Workshops, 2017. 8
[31] H. Shi, Y. Yang, X. Zhu, S. Liao, Z. Lei, W. Zheng, and S. Z.
Li. Embedding Deep Metric for Person Re-identification: A
Study Against Large Variations. In ECCV, 2016. 1, 2, 3, 5,
8
[32] H. O. Song, Y. Xiang, S. Jegelka, and S. Savarese. Deep
Metric Learning via Lifted Structured Feature Embedding.
In CVPR, 2016. 2, 3, 4, 6
[33] C. Su, S. Zhang, J. Xing, W. Gao, and Q. Tian. Deep Attributes Driven Multi-camera Person Re-identification. In
ECCV, 2016. 1, 8
[34] A. Subramaniam, M. Chatterjee, and A. Mittal. Deep
Neural Networks with Inexact Matching for Person Re-Identification. In NIPS, pages 2667–2675, 2016. 2
[35] Y. Sun, L. Zheng, W. Deng, and S. Wang. SVDNet for Pedestrian Retrieval. ICCV, 2017. 1, 15, 16
[36] C. Szegedy, W. Liu, Y. Jia, P. Sermanet, S. Reed,
D. Anguelov, D. Erhan, V. Vanhoucke, and A. Rabinovich.
Going deeper with convolutions. In CVPR, 2015. 2
[37] R. Triebel, K. Arras, R. Alami, L. Beyer, S. Breuers,
R. Chatila, M. Chetouani, D. Cremers, V. Evers, M. Fiore,
et al. SPENCER: A Socially Aware Service Robot for Passenger Guidance and Help in Busy Airports. In Field and
Service Robotics, pages 607–622. Springer, 2016. 8
[38] L. Van Der Maaten. Accelerating t-SNE using Tree-Based
Algorithms. JMLR, 15(1):3221–3245, 2014. 1, 17
[39] R. R. Varior, M. Haloi, and G. Wang. Gated Siamese
Convolutional Neural Network Architecture for Human Re-Identification. In ECCV, 2016. 1, 2, 7, 8, 15, 16
[40] F. Wang, W. Zuo, L. Lin, D. Zhang, and L. Zhang. Joint
Learning of Single-image and Cross-image Representations
for Person Re-identification. In CVPR, 2016. 1, 5, 8
[41] K. Q. Weinberger and L. K. Saul. Distance Metric Learning
for Large Margin Nearest Neighbor Classification. JMLR,
10:207–244, 2009. 1, 2, 8
[42] T. Xiao, H. Li, W. Ouyang, and X. Wang. Learning Deep
Feature Representations with Domain Guided Dropout for
Person Re-identification. In CVPR, 2016. 1, 2, 16
[43] R. Yu, Z. Zhou, S. Bai, and X. Bai. Divide and Fuse: A
Re-ranking Approach for Person Re-identification. BMVC,
2017. 6
[44] M. D. Zeiler. ADADELTA: An Adaptive Learning Rate
Method. arXiv preprint arXiv:1212.5701, 2012. 7
[45] L. Zhang, T. Xiang, and S. Gong. Learning a Discriminative
Null Space for Person Re-identification. In CVPR, 2016. 2,
4, 7, 8, 15, 16
[46] W. Zhang, S. Hu, and K. Liu. Learning Compact Appearance
Representation for Video-based Person Re-Identification.
arXiv preprint arXiv:1702.06294, 2017. 1, 7, 15
[47] Y. Zhang, T. Xiang, T. M. Hospedales, and H. Lu. Deep
Mutual Learning. arXiv preprint arXiv:1706.00384, 2017.
6, 15
[48] H. Zhao, M. Tian, S. Sun, J. Shao, J. Yan, S. Yi, X. Wang,
and X. Tang. Spindle Net: Person Re-identification with Human Body Region Guided Feature Decomposition and Fusion. In CVPR, 2017. 2, 8, 15, 16
[49] L. Zheng, Z. Bie, Y. Sun, J. Wang, C. Su, S. Wang, and
Q. Tian. MARS: A Video Benchmark for Large-Scale Person Re-Identification. In ECCV, 2016. 2, 4, 7, 15
[50] L. Zheng, L. Shen, L. Tian, S. Wang, J. Wang, and Q. Tian.
Scalable Person Re-Identification: A Benchmark. In ICCV,
2015. 2, 4
[51] L. Zheng, Y. Yang, and A. G. Hauptmann. Person Re-identification: Past, Present and Future. arXiv preprint
arXiv:1610.02984, 2016. 1
[52] Z. Zheng, L. Zheng, and Y. Yang. A Discriminatively
Learned CNN Embedding for Person Re-identification.
arXiv preprint arXiv:1611.05666, 2016. 1, 2, 7, 8, 11, 12,
15, 16
[53] Z. Zheng, L. Zheng, and Y. Yang. Pedestrian Alignment
Network for Large-scale Person Re-identification. arXiv
preprint arXiv:1707.00408, 2017. 6, 15
[54] Z. Zheng, L. Zheng, and Y. Yang. Unlabeled Samples Generated by GAN Improve the Person Re-identification Baseline
in vitro. ICCV, 2017. 1, 6, 15, 16
[55] Z. Zhong, L. Zheng, D. Cao, and S. Li. Re-ranking Person
Re-identification with k-reciprocal Encoding. CVPR, 2017.
5, 7, 15
[56] Z. Zhong, L. Zheng, G. Kang, S. Li, and Y. Yang. Random Erasing Data Augmentation. arXiv preprint
arXiv:1708.04896, 2017. 6
[57] Z. Zhou, Y. Huang, W. Wang, L. Wang, and T. Tan. See the
Forest for the Trees: Joint Spatial and Temporal Recurrent
Neural Networks for Video-based Person Re-identification.
In CVPR, 2017. 7, 15
Supplementary Material
A. Test-time Augmentation
As is good practice in the deep learning community [19,
14], we perform test-time augmentation. From each image,
we deterministically create five crops of size H × W : four
corner crops and one center crop, as well as a horizontally
flipped copy of each. The embeddings of all these ten images are then averaged, resulting in the final embedding for
a person. Table 5 shows how five possible settings affect our
scores on the Market-1501 dataset. As expected, the worst
option is to scale the original images to fit the network input
(first line), as this shows the network an image type it has
never seen before. This option is directly followed by not
using any test-time augmentation, i.e. just using the central
crop. Simply flipping the center crop and averaging the two
resulting embeddings already gives a big boost while only
being twice as expensive. The four additional corner crops
seem to be less effective while being more expensive,
but using both augmentations together gives the best results.
For the networks trained with a triplet loss, we gain about
3% mAP, while the network trained for identification only
gains about 2% mAP. A possible explanation is that the feature space we learn with the triplet loss could be more suited
to averaging than that of a classification network.
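The five-crop-plus-flip scheme described above can be sketched as follows. This is an illustrative NumPy mock-up with our own names, where `embed_fn` stands in for the trained network, not the authors' code:

```python
import numpy as np

def ten_crop(img, crop_h, crop_w):
    # img: H×W×C array. Returns four corner crops, one center crop,
    # and a horizontally flipped copy of each (10 crops total).
    H, W = img.shape[:2]
    tops = [0, 0, H - crop_h, H - crop_h, (H - crop_h) // 2]
    lefts = [0, W - crop_w, 0, W - crop_w, (W - crop_w) // 2]
    crops = [img[t:t + crop_h, l:l + crop_w] for t, l in zip(tops, lefts)]
    crops += [c[:, ::-1] for c in crops]  # horizontal flips, same order
    return crops

def augmented_embedding(img, crop_h, crop_w, embed_fn):
    # Average the embeddings of all ten crops into the final embedding.
    embs = [embed_fn(c) for c in ten_crop(img, crop_h, crop_w)]
    return np.mean(embs, axis=0)
```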
                    TriNet           LuNet            IDE (R) Ours
                    mAP    rank-1    mAP    rank-1    mAP    rank-1
original            65.48  82.51     55.61  76.72     56.29  77.55
center              66.29  84.06     57.08  77.94     56.26  77.08
center + flip       68.32  84.47     59.31  79.78     57.80  78.71
5 crops             67.86  84.83     59.00  79.42     56.73  77.29
5 crops + flip      69.14  84.92     60.71  81.38     58.06  78.50

Table 5: The effect of test-time augmentation on Market-1501.
B. Hard Positives, Hard Negatives and Outliers
Figure 2 shows several outliers in the Market-1501 and
MARS datasets. Some issues in MARS are caused by
the tracker-based annotations where bounding boxes sometimes span two persons and the tracker partially focuses on
the wrong person. Additionally, some annotation mistakes
can be found in both datasets; while some are obvious, some
others are indeed very hard to spot!
In Figure 3 we show some of the most difficult queries
along with their top-3 retrieved images (containing hard
negatives), as well as their two hardest positives. While
some mistakes are easy to spot by a human, others are indeed not trivial, such as the first row in Figure 3.
Figure 2: Some outliers in the MARS (top two rows) and
Market-1501 (bottom row) datasets. The first row shows
high image overlap between tracklets of two persons. The
second row shows a very hard example where a person was
wrongly matched across tracklets. The last row shows a
simple annotation mistake.
C. Experiments with Distractors
On top of the normal gallery set, the Market-1501 dataset
provides an additional 500k distractors recorded at another
time. In order to evaluate how such distractors affect performance, we randomly sample an increasing number of
distractors and add them to the original gallery set. Here
we compare to the results from Zheng et al. [52]. Both
our models show a similar behavior to that of their ResNet50 baseline. Surprisingly, our LuNet model starts out with
a slightly better mAP score than the baseline and ends up
just below it, while consistently being better when considering the rank-1 score. This might indeed suggest that the
inductive bias from pretraining helps during generalization
to large amounts of unseen data. Nevertheless, all models
seem to suffer under the increasing gallery set in a similar
manner, albeit none of them fails badly. Especially the fact
that in 74.70% of all single-image queries the first image
out of 519 732 gallery images is correctly retrieved is an
impressive result.
For reproducibility of the 500k distractor plot (Fig. 4),
Table 6 lists the values of the plot.
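The sampling protocol described above can be sketched as follows. This is an illustrative NumPy mock-up with hypothetical names, using squared Euclidean distances and a rank-1-only evaluation, not the authors' evaluation code:

```python
import numpy as np

def distractor_curve(query, gallery, g_labels, q_labels, distractors, sizes, seed=0):
    # For each target size, add randomly sampled distractor embeddings to the
    # gallery (distractors match no query identity) and recompute rank-1.
    rng = np.random.default_rng(seed)
    scores = []
    for k in sizes:
        idx = rng.choice(len(distractors), size=k, replace=False)
        G = np.vstack([gallery, distractors[idx]])
        # Distractors get a label (-1) that matches no query identity.
        L = np.concatenate([g_labels, -np.ones(k, dtype=int)])
        d = ((query[:, None, :] - G[None, :, :]) ** 2).sum(-1)
        rank1 = (L[d.argmin(1)] == q_labels).mean()
        scores.append(float(rank1))
    return scores
```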
Figure 3: Some of the hardest queries. The leftmost column shows the query image, followed by the top 3 retrieved
images and the two ground truth matches with the highest distance to the query images, i.e. the hardest positives.
Correctly retrieved images have a green border, mistakes
(i.e. hard negatives) have a red border.

Figure 4: 500k distractor set results. Solid lines represent
the mAP score, dashed lines the rank-1 score. See Supplementary Material for values.
                 TriNet           LuNet            Res50 (I+V) [52]
Gallery size     mAP    rank-1    mAP    rank-1    mAP    rank-1
19 732           69.14  84.92     60.71  81.38     59.87  79.51
119 732          61.93  79.69     52.73  75.65     52.28  73.78
219 732          58.74  77.88     49.44  73.40     49.11  71.50
319 732          56.58  76.34     47.17  71.85     NaN    NaN
419 732          54.97  75.50     45.57  70.52     NaN    NaN
519 732          53.63  74.70     44.26  69.74     45.24  68.26

Table 6: Values for the 500k distractor plot.
D. Notes on Network Training
Here we present and discuss several training logs that
display interesting behavior, as noted in the main paper.
This serves as practical advice on what to monitor for practitioners who choose to use a triplet-based loss in their training.
A typical training usually proceeds as follows: initially,
all embeddings are pulled together towards their center of
gravity. When they come close to each other, they will
“pass” each other to join “their” clusters and, once this
cross-over has happened, training mostly consists of pushing the clusters further apart and fine-tuning them. The collapsing of training happens when the margin is too large
and the initial spread is too small, such that the embeddings
get stuck when trying to pass each other.
Most importantly, if any type of hard-triplet mining is used, a stagnating loss curve by no means indicates
stagnating progress. As the network learns to solve some
hard cases, it will be presented with other hard cases and
hence still keep a high loss. We recommend observing the
fraction of active triplets in a batch, as well as the norms of
the embeddings and all pairwise distances.
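The quantities recommended above can be computed from a mini-batch in a few lines. The following NumPy sketch is ours and illustrative, not the monitoring code used for the figures:

```python
import numpy as np

def batch_stats(per_triplet_losses, embeddings, eps=1e-5):
    # Fraction of active (non-zero loss) triplets in the mini-batch.
    losses = np.asarray(per_triplet_losses)
    active_frac = float((losses > eps).mean())
    # Percentiles of embedding norms: do they shrink towards 0 or keep growing?
    norms = np.linalg.norm(embeddings, axis=1)
    norm_pct = np.percentile(norms, [0, 5, 50, 95, 100])
    # Percentiles over all pairwise distances within the batch.
    diff = embeddings[:, None, :] - embeddings[None, :, :]
    d = np.sqrt((diff ** 2).sum(-1))
    iu = np.triu_indices(len(embeddings), k=1)
    dist_pct = np.percentile(d[iu], [0, 5, 50, 95, 100])
    return active_frac, norm_pct, dist_pct
```

A stagnating loss together with a dropping active-triplet fraction and growing pairwise distances indicates healthy progress rather than a stuck network.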
Figures 5, 6, 7, and 8 all show different training logs.
Note that while they all share the x-axis since the number of
updates was kept the same throughout the experiments, the
y-axes vary for clearer visualization. First, the topmost plot
in each Subfigure (a) (“Value of criterion/penalty”) shows
all per-triplet values of the optimization criterion (the triplet
loss) encountered in each mini-batch. This is shown again
in the plot below it on the left (“loss”), with an overlaid
light-blue line representing the batch-mean criterion value,
and an overlaid dark-blue line representing the batch’s 5-percentile. To the right of it, the “% non-zero losses in
batch” shows how many entries in the mini-batch had non-zero loss; values are computed up to a precision of 10^−5,
which explains how it can be below 100% in the soft-margin
case. The single final vertical line present in some plots
should be ignored as a plotting-artifact.
(a) Training-log of the loss and active triplet count.
(a) Training-log of the loss and active triplet count.
(b) Training-log of the embeddings in the minibatch.
(b) Training-log of the embeddings in the minibatch.
Figure 5: Training-log of LuNet on Market1501 using the
batch hard triplet loss with margin 0.2. The embeddings
stay bounded, as expected from a triplet formulation, and
there is a lot of progress even when the loss stays seemingly
flat.
Figure 6: Training-log of LuNet on Market1501 using the
batch hard triplet loss with soft margin. The embeddings
keep moving apart as even the loss shows a steady downward trend.
Second, each Subfigure (b) (blue plots), monitors statistics about the embeddings computed during training. Different lines show 0, 5, 50, 95, and 100-percentiles within a
mini-batch, thus visualizing the distribution of values. The
top-left plot, “2-norm of embeddings”, shows the norms of
the embeddings, thus visualizing whether the embedding space shrinks towards 0 or expands. The top-right plot,
“%tiles value of embedding entries” shows these same
statistics over the individual numeric entries in the embedding vectors. The only use we found for this plot is noticing when embeddings collapse to all-zeros vs. some other
value. Finally, the bottom-left plot, “Distance between embeddings”, is the most revealing, as it shows the same percentiles over all pairwise distances between the embeddings
within a mini-batch. Due to a bug, the x-axis is unfortunately mislabeled in some cases.
Let us now start by looking at the logs of two very successful runs: the LuNet training from scratch on Market-1501 with the batch hard loss with margin 0.2 and in the
soft-margin formulation, see Fig. 5 and Fig. 6, respectively.
The first observation is that, although they reach similar
final scores, they learn significantly different embeddings.
Looking at the embedding distances and norms, it is clear
that the soft-margin formulation keeps pushing the embeddings apart, whereas the 0.2-margin keeps the norm of embeddings and their distances bounded. The effect of exponentially decaying the learning-rate is also clearly visible: starting at 15 000 updates, both the loss as well as the
number of non-zero entries in a mini-batch start to strongly
decrease again, before finally converging from 20 000 to
25 000 updates.
A network getting stuck only happened to us with too
weak network architectures or when using offline hard mining (OHM); the latter can be seen in Fig. 8.
(a) Training-log of the loss and active triplet count.
(a) Training-log of the loss and active triplet count.
(b) Training-log of the embeddings in the minibatch.
(b) Training-log of the embeddings in the minibatch.
Figure 7: Training-log of a very small network on Market1501 using the batch hard triplet loss with soft margin. The
difficult “packed” phase is clearly visible.
Figure 8: Training-log of LuNet on MARS when using offline hard mining (OHM) with margin 0.1. This is one of the
runs that collapsed and never got past the difficult phase.
Next, let us turn to Fig. 7, which shows the training-logs of a very small net (not further specified in this paper).
We can clearly see that the network first pulls all embeddings towards their center of gravity, as evidenced by the
quickly decreasing embedding norms, entries, and distances in the first few hundred updates (more visible
when zooming in on a computer). Once they are all close
to each other, the network really struggles to make them
all “pass each other” to reach “their” clusters. As soon as
this difficult phase is overcome, the embeddings are spread
around the space to quickly decrease the loss. This is where
the training becomes “fragile” and prone to collapsing: if
the embeddings never pass each other, training gets stuck.
This behavior can also be observed in Figures 5 and 6, although to a much lesser extent, as the network is powerful
enough to quickly overcome this difficult phase.

E. Extended Comparison Tables

We show extended versions of the two state-of-the-art
comparison tables in the main paper. We add additional
methods that were left out due to space reasons, or because
the approaches are orthogonal to ours. The latter could be
integrated with our approach in a straightforward manner.
Market-1501 and MARS results are shown in Table 7 and
CUHK03 results in Table 8.
                             Market-1501 SQ         Market-1501 MQ         MARS
                     Type    mAP   rank-1 rank-5    mAP   rank-1 rank-5    mAP   rank-1 rank-5
TriNet                E      69.14 84.92  94.21     76.42 90.53  96.29     67.70 79.80  91.36
LuNet                 E      60.71 81.38  92.34     69.07 87.11  95.16     60.48 75.56  89.70
IDE (R) + ML ours     I      58.06 78.50  91.18     67.48 85.45  94.12     57.42 72.42  86.21

Compared methods (per-cell scores not recoverable from the extracted text): LOMO + Null Space [45], Gated siamese CNN [39], Spindle Net [48], CAN [25], SSM [3], JLML [22], SVDNet [35], CNN + DCGAN [54], DPFL [7], ResNet 50 (I+V)† [52], MobileNet+DML† [47], DTL† [10], DTL† [10], APR (R, 751)† [24], PAN† [53], IDE (C) + ML [49], Latent Parts (Fusion) [20], IDE (R) + ML [55], Spatial-Temporal RNN [57], CNN + Video† [46].

TriNet (Re-ranked)            E    81.07 86.67  93.38    87.18 91.75  95.78    77.43 81.21  90.76
LuNet (Re-ranked)             E    75.62 84.59  91.89    82.61 89.31  94.48    73.68 78.48  88.74
IDE (R) + ML ours (Re-ra.)    I    71.38 81.62  89.88    79.78 86.79  92.96    69.50 74.39  85.86
IDE (R) + ML (Re-ra.) [55]    I    63.63 77.11  -        -     -      -        68.45 73.94  -
PAN (Re-ra.)† [53]            I    76.65 85.78  -        83.79 89.79  -        -     -      -

Table 7: Scores on both the Market-1501 and MARS datasets. The top and middle contain our scores and those of the
current state-of-the-art respectively. The bottom contains several methods with re-ranking [55]. The different types represent
the optimization criteria, where I stands for identification, V for verification and E for embedding. All our scores include
test-time augmentation. The best scores are bold. †: Concurrent work only published on arXiv.
                 Labeled           Detected
         Type    r-1     r-5       r-1     r-5
TriNet    E      89.63   99.01     87.58   98.17

Compared methods (per-cell scores not recoverable from the extracted text): Gated siamese CNN [39] (V), DGD* [42] (I), LOMO + Null Space [45] (E), SSM [3] (E), CAN [25] (I), Latent Parts (Fusion) [20] (I), Spindle Net* [48] (I), JLML [22] (I), SVDNet [35] (I), CNN + DCGAN [54] (I), DPFL [7] (I+V), DTL† [10] (I+V), ResNet 50 (I+V)† [52].

Table 8: Scores on CUHK03 for TriNet and a set of recent
top performing methods. The best scores are highlighted in
bold. †: Concurrent work only published on arXiv. *: The
method was trained on several additional datasets.

F. LuNet's Architecture

The details of the LuNet architecture for training from
scratch can be seen in Table 9. The input image has three
channels and spatial dimensions 128×64. Most Res-blocks
are of the “bottleneck” type [15], meaning for given numbers n1, n2, n3 in the table, they consist of a 1×1 convolution from the number of input channels n1 to the number
of intermediate channels n2, followed by a 3×3 convolution
keeping the number of channels constant, and finally another
1×1 convolution going from n2 channels to n3 channels.
Only the last Res-block, whose exact filter sizes are given
in the table, is an exception to this. All ReLUs, including
those in Res-blocks, are leaky [27] by a factor of 0.3; although we do not have side-by-side experiments comparing
the benefits, we expect them to be minor. All convolutional
weights are initialized following He et al. [13], whereas
we initialized the final Linear layers following Glorot et
al. [11]. Batch-normalization [16] is essential to train such
a network, and makes the exact initialization less important.

Layer         Size
Conv          128×7×7×3
Res-block     128, 32, 128
MaxPool       pool 3×3, stride (2×2), padding (1×1)
Res-block     128, 32, 128
Res-block     128, 32, 128
Res-block     128, 64, 256
MaxPool       pool 3×3, stride (2×2), padding (1×1)
Res-block     256, 64, 256
Res-block     256, 64, 256
MaxPool       pool 3×3, stride (2×2), padding (1×1)
Res-block     256, 64, 256
Res-block     256, 64, 256
Res-block     256, 128, 512
MaxPool       pool 3×3, stride (2×2), padding (1×1)
Res-block     512, 128, 512
Res-block     512, 128, 512
MaxPool       pool 3×3, stride (2×2), padding (1×1)
Res-block     512×(3×3×512), 128×(3×3×512)
Linear        1024×512
Batch-Norm    512
ReLU
Linear        512×128

Table 9: The architecture of LuNet.
Figure 9: Barnes-Hut t-SNE [38] of our learned embeddings for the Market-1501 test-set. Best viewed when zoomed-in.
G. Full t-SNE Visualization
Figure 9 shows the full Barnes-Hut t-SNE visualization from which the teaser image (Fig. 1 in the paper) was
cropped. We used a subset of 6000 images from the Market1501 test-set and a perplexity of 5000 for this visualization.
Journal of
Bioinformatics and Comparative Genomics
Research
Open Access
AIS-INMACA: A Novel Integrated MACA Based Clonal Classifier for Protein Coding and Promoter Region Prediction
Pokkuluri Kiran Sree1, Inampudi Ramesh Babu2 and SSSN Usha Devi N3
1 Research Scholar, Department of CSE, JNTU Hyderabad, India
2 Professor, Department of CSE, ANU, Guntur, India
3 Assistant Professor, Department of CSE, University College of Engineering, JNTUK, India
*Corresponding author: Pokkuluri Kiran Sree, Department of CSE, JNTU Hyderabad, India; Email: [email protected]
Received Date: January 20, 2013 Accepted Date: February 10, 2014 Published Date: February 28, 2014
Citation: Pokkuluri Kiran Sree, et al. (2014) AIS-INMACA: A Novel Integrated MACA Based Clonal Classifier for
Protein Coding and Promoter Region Prediction. J Bioinfo Comp Genom 1: 1-7.
Abstract
Most of the problems in bioinformatics are now challenges in computing. This paper aims at building a classifier based
on Multiple Attractor Cellular Automata (MACA) which uses fuzzy logic. It is strengthened with an Artificial Immune System technique (AIS), the clonal algorithm, for identifying protein coding and promoter regions in a given DNA sequence. The
proposed classifier, named AIS-INMACA, introduces a novel concept of combining CA with an artificial immune system to
produce a better classifier which can address major problems in bioinformatics. This will be the first integrated algorithm
which can predict both promoter and protein coding regions. To obtain good fitness rules, the basic concept of the clonal selection algorithm was used. The proposed classifier can handle DNA sequences of lengths 54, 108, 162, 252, and 354. This classifier
gives the exact boundaries of both protein coding and promoter regions with an average accuracy of 89.6%. It was tested
with 97,000 data components which were taken from Fickett & Tung, MPromDb, and other sequences from a renowned
medical university. This proposed classifier can handle huge data sets and can find protein coding and promoter regions even in
mixed and overlapped DNA sequences. This work also aims at identifying the logical connection between the major problems in
bioinformatics and tries to obtain a common framework for addressing major problems in bioinformatics like protein
structure prediction, RNA structure prediction, predicting the splicing pattern of any primary transcript and analysis of
information content in DNA, RNA, and protein sequences and structures. This work will attract more researchers towards the application of CA as a potential pattern classifier to many important problems in bioinformatics.

Keywords: MACA (Multiple Attractor Cellular Automata); CA (Cellular Automata); AIS (Artificial Immune System); Clonal Algorithm (CLA); AIS-INMACA (Artificial Immune System-Integrated Multiple Attractor Cellular Automata).
Introduction
Cellular automata
Pattern classification encompasses the development of a model that is trained to solve a given problem with the help of
some examples, each of which is characterized by a number of features. We use a class of Cellular
Automata (CA) [1,2] to develop the proposed classifier. Cellular automata consist of a grid of cells with a finite number of
states. Cellular Automata (CA) is a computing model which provides a good platform for performing complex computations with the available local information.

A CA is defined as a four-tuple <G, Z, N, F>
where
G -> Grid (set of cells)
Z -> Set of possible cell states
N -> Set which describes a cell's neighborhood
F -> Transition function (rules of the automaton)

A cellular automaton consists of a number of cells structured in the form of a grid. The transition of a cell may
depend on its own state and on the states of its neighboring cells.
Equation 1 states that for the i-th cell to make a transition, it depends on its own state, its left neighbor and its right
neighbor:

qi(t + 1) = f(qi−1(t), qi(t), qi+1(t))    (Equation 1)

©2013 The Authors. Published by the JScholar under the terms of the Creative Commons Attribution License http://creativecommons.org/licenses/by/3.0/, which permits unrestricted use, provided the original author and source are credited.

JScholar Publishers
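As a concrete illustration of Equation 1, consider an elementary (one-dimensional, two-state) cellular automaton in which the transition function f is encoded by a Wolfram rule number. This sketch is ours and is not part of the AIS-INMACA implementation:

```python
def ca_step(state, rule=90):
    # One synchronous update of an elementary cellular automaton.
    # Each cell's next state f(q_{i-1}, q_i, q_{i+1}) is looked up from
    # the bits of the Wolfram rule number; boundaries are periodic.
    n = len(state)
    nxt = []
    for i in range(n):
        left, me, right = state[i - 1], state[i], state[(i + 1) % n]
        neighborhood = (left << 2) | (me << 1) | right  # value in 0..7
        nxt.append((rule >> neighborhood) & 1)
    return nxt
```

For rule 90, the lookup reduces to the XOR of the two neighbors, so a single active cell spreads into the familiar Sierpinski-like pattern over repeated steps.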
J Bioinfo Comp Genom 2014 | Vol 1: 103
Problems in Bioinformatics
Bioinformatics can be characterized as a collection of statistical, mathematical and computational methods for analyzing
biological sequences like DNA, RNA and amino acids. It deals
with the design and development of computer-based technology that supports biological processing. Bioinformatics tools
are aimed at performing a lot of functions like data integration,
collection, analysis, mining, management, simulation, visualization and statistics.
The central dogma of molecular biology, shown in Figure 1,
was initially articulated by Francis Crick [3] in 1958. It deals
with the detailed transfer of sequential information. It
states that information cannot be transferred back from protein to either
protein or nucleic acid: once information gets into protein, it cannot
flow back to nucleic acid. This dogma is a framework for
understanding the transfer of sequence information between
the sequence-carrying biopolymers in living organisms.
There are three significant classes of such biopolymers: DNA,
RNA and protein.

There are nine possible direct transfers of information that can happen between these. The dogma classifies these nine transfers
into three groups (normal, special, and never happens).
The normal flow of biological information is: DNA is copied
to DNA (DNA replication), DNA information is copied into
RNA (transcription), and proteins are synthesized using the
protein coding regions that exist in DNA/RNA (translation).
Protein Coding Region Identification
DNA is an important component of a cell, and genes are found in specific portions of DNA, which contain the information as explicit sequences of bases (A, G, C, T). These explicit sequences of nucleotides carry the instructions to build proteins. But the regions carrying these instructions, called protein coding regions, occupy very little space in a DNA sequence. The identification of protein coding regions plays a vital role in understanding genes. We can extract a lot of information, such as which gene causes a disease, whether it is inherited from the father or the mother, and how one cell controls another cell.
Promoter Region Identification
DNA is a very important component of a cell, located in the nucleus, and it contains a lot of information. For a DNA sequence to be transcribed into RNA, which copies the required information, a promoter is needed; so the promoter plays a vital role in DNA transcription. It is defined as "the sequence in the region of the upstream of the transcriptional start site (TSS)". If we identify the promoter region, we can extract information regarding gene expression patterns, cell specificity and development. Some of the genetic diseases associated with variations in promoters are asthma, beta thalassemia and Rubinstein-Taybi syndrome.
Literature Survey
Eric E. Snyder et al. [4,5] developed a computer program, GeneParser, which identifies and verifies protein coding genes in genomic DNA sequences.
This program scores all subintervals in a sequence for content statistics characteristic of introns and exons, and for sites that mark their boundaries. Jonathan H. Badger et al. [6] proposed a protein coding region identification tool named CRITICA, which uses comparative analysis. In the comparative segment of the analysis, regions of DNA are aligned with known sequences from DNA databases; if the translation of the aligned sequences shows greater amino acid conservation than expected for the observed nucleotide identity, this is interpreted as evidence for coding.
David J. States [7] proposed a computer program called BLASTX, which was previously shown to be effective in identifying and assigning putative function to likely protein coding regions by detecting significant similarity between a conceptually translated nucleotide query sequence and members of a protein sequence database.
Steven Salzberg et al. [8] used a decision tree algorithm for locating protein coding regions. Genes in eukaryotic DNA span hundreds or thousands of base pairs, while the regions of those genes that code for proteins may occupy just a small percentage of the sequence. Eric E. Snyder et al. [4] used dynamic programming with neural networks to address the protein coding region problem: Dynamic Programming (DP) is applied to the problem of exactly identifying internal exons and introns in genomic DNA sequences. Suprakash Datta et al. [9] used a DFT-based gene prediction method for addressing this problem. The authors provided a theoretical account of the three-base periodicity property observed in protein coding regions of genomic DNA, and proposed new classification criteria based on traditional frequency approaches to coding regions. Jesus P. Mena-Chalco et al. [10] used the modified Gabor-wavelet transform for addressing this issue. In this connection, numerous model-free coding-DNA techniques based on the occurrence of particular patterns of nucleotides in coding regions have been proposed. However, these techniques have not been entirely suitable because of their dependence on an empirically predefined window length needed [11] for a local analysis of a DNA region. Many authors [10,12-16] have applied CA in bioinformatics.
Rakesh Mishra et al. [17] worked on the search for and use of promoter regions. The search for a promoter element by RNA polymerase over an extremely large DNA base sequence is thought to be the slowest, rate-determining step in the regulation of the transcription process. A few direct investigations, described here, have attempted to provide mechanistic accounts of this promoter search [18]. Christoph Dieterich et al. [19] made an extensive study of promoter regions. Their automated annotation of promoter regions combines information of two sorts: first, it recognizes cross-species conservation within upstream regions of orthologous genes; pairwise as well as multiple alignment comparisons are computed. Vetriselvi Rangannan et al. [20] analyzed various predicted structural properties of promoter regions in prokaryotic as well as eukaryotic genomes, which had earlier been shown to share several common characteristics, such as lower stability, higher curvature and less bendability, when compared with their neighboring regions. Jih-Wei Hung [21] developed an effective prediction algorithm that can increase the recognition (power = 1 - false negative) of promoters. The authors introduce two strategies that use machine power to compute all possible patterns that may characterize promoters. Some other authors [22-26] have worked on promoter identification and succeeded to some extent.
AIS-INMACA (Artificial Immune System Integrated Multiple Attractor Cellular Automata)
Multiple Attractor Cellular Automata, used in this paper, is a special class of fuzzy cellular automata introduced thirty years ago. It uses fuzzy logic [27,28] to handle real-valued attributes. The development of the Multiple Attractor Cellular Automata is administered by an AIS technique, a clonal algorithm with the underlying theory of survival of the fittest gene.
Artificial Immune System (AIS) is a novel computational intelligence technique with features like distributed computing, fault/error tolerance, dynamic learning, adaptation to the framework, self-monitoring, non-uniformity and several other features of natural immune systems. AIS takes its motivation from the standard immune system of the body to propose novel computing tools for addressing many problems in wide domain areas. Some features of AIS that can be mapped to the bioinformatics framework are chosen and used here to strengthen the proposed CA classifier.
This paper introduces the integration of AIS with CA, the first of its kind, to produce a better classifier that can address major problems in bioinformatics. This proposed classifier, named the artificial immune system based multiple attractor cellular automata classifier (AIS-INMACA), uses the basic framework of Cellular Automata (CA) and features of AIS like self-monitoring and non-uniformity, and is potential, versatile and robust. This is the basic motivation of the entire research.
The objectives of the AIS-based evolution of MACA [13,14,28] are:
1. To improve the understanding of the ways CA performs computations.
2. To figure out how CA may be evolved to perform a particular computational job.
3. To follow how evolution maps complex global behavior onto locally interconnected cells on a grid.
4. To extract the innate classification potential of CA and use the clonal algorithm for producing better rules with higher fitness.
Design of AIS-INMACA
The proposed AIS-based CA classifier uses fuzzy logic and can address major problems in bioinformatics like protein coding
Figure 1: Central Dogma
Figure 2: Design of AIS-INMACA
Neighbourhood           111  110  101  100  011  010  001  000  Rule
Next State (Rule 51)     0    0    1    1    0    0    1    1    51
Next State (Rule 254)    1    1    1    1    1    1    1    0   254

Table 1: Transition Function
SNO  Rule Number  General Representation
1    1            ¬(qi-1 + qi + qi+1)
2    3            ¬(qi-1 + qi)
3    17           ¬(qi + qi+1)
4    5            ¬(qi-1 + qi+1)
5    51           ¬qi
6    15           ¬qi-1
7    85           ¬qi+1
8    255          1

Table 2: Complemented Rules
SNO  Rule Number  General Representation
1    254          qi-1 + qi + qi+1
2    252          qi-1 + qi
3    238          qi + qi+1
4    250          qi-1 + qi+1
5    204          qi
6    240          qi-1
7    170          qi+1
8    0            0

Table 3: Non-complemented Rules
region identification and promoter region prediction. Even though some scientists have proposed different algorithms, all of these are specific to the problem; none of them has worked towards proposing a common classifier whose framework can be useful for addressing many problems in bioinformatics.
The general design of AIS-INMACA is indicated in Figure 2. The input to the AIS-INMACA algorithm and its variations is DNA sequences and amino acid sequences. The input processing unit processes sequences three at a time, as a three-neighborhood cellular automaton is considered for processing DNA sequences. The rule generator transforms the complemented (Table 2) and non-complemented (Table 3) rules into matrix form, so that the rules can be applied to the corresponding sequence positions easily. The AIS-INMACA basins are calculated as per the proposed algorithm, and an inverter tree named AIS multiple attractor cellular automata is formed, which can predict the class of the input after all iterations.
Algorithm 3.1 is used for creating the AIS-INMACA tree. This tree dissipates the DNA sequences into the respective leaves of the tree. If a sequence falls into two or more class labels, the algorithm recursively partitions [29] the set so that every sequence fits into one of the leaves. Every leaf has a class. Algorithm 3.2 is used for obtaining the class as well as the required transition function. The best-fitness rules, with a score of more than 0.5, are considered. Algorithm 3.3 [7] uses Algorithms 3.1 and 3.2 for predicting the protein and promoter coding regions.
Rules of AIS-INMACA
The decimal equivalent of the next state function is defined as the rule number of the CA cell, as introduced by Wolfram [2]. In a 2-state 3-neighborhood CA there are 256 distinct next state functions; among these 256 rules, rule 51 and rule 254 are represented by the following equations:

Rule 51:  qi(t + 1) = ¬qi(t)                          (1)
Rule 254: qi(t + 1) = qi-1(t) + qi(t) + qi+1(t)       (2)
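The rule-number encoding can be checked mechanically. The sketch below (Python, ours) converts a Boolean next-state function into its Wolfram rule number, confirming that rule 51 is the complement of qi and rule 254 is the OR of the neighborhood, consistent with equations (1)-(2) and with Tables 2 and 3.

```python
def wolfram_number(f):
    """Decimal rule number of a next-state function f(q_prev, q, q_next).

    Neighborhood (1,1,1) contributes the most significant bit and
    (0,0,0) the least, following Wolfram's convention.
    """
    bits = 0
    for idx in range(8):
        prev, cur, nxt = (idx >> 2) & 1, (idx >> 1) & 1, idx & 1
        bits |= f(prev, cur, nxt) << idx
    return bits

# Equation (1): rule 51 is the complement of the cell's own state.
assert wolfram_number(lambda p, q, n: 1 - q) == 51
# Equation (2): rule 254 is the OR of the three neighborhood states.
assert wolfram_number(lambda p, q, n: p | q | n) == 254
```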
Input: Training set S = {S1, S2, ..., SK}, maximum generation Gmax.
Output: Dependency matrix T, F, and class information.
begin
Step 1: Generate 200 new chromosomes for the Initial Population (IP).
Step 2: Initialize the generation counter GC = 0; Present Population (PP) ← IP.
Step 3: Compute the fitness Ft of each chromosome of PP according to the fitness equation.
Step 4: Store T, F, and the corresponding class information for which the fitness value Ft > 0.5.
Step 5: If the number of chromosomes with fitness more than 0.5 is 50, then go to Step 11.
Step 6: Rank the chromosomes in order of fitness.
Step 6a: Clone the chromosomes.
Step 7: Increment the generation counter GC.
Step 8: If GC > Gmax then go to Step 11.
Step 9: Form the New Population (NP) by selection, cloning and mutation.
Step 10: PP ← NP; go to Step 3.
Step 11: Store T, F, and the corresponding class information for which the fitness value is maximum.
Step 11a: Output the class, T, F.
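The clonal loop above might be sketched as follows. Everything here (population size, clone count, mutation rate, the toy bit-counting fitness) is illustrative and not taken from the paper; real chromosomes would encode candidate CA rule vectors.

```python
import random

def clonal_search(fitness, n_bits=8, pop_size=20, n_clones=5,
                  mut_rate=0.1, gmax=50, target=0.5, seed=0):
    """Illustrative clonal-selection loop in the spirit of Algorithm 3.2.

    Chromosomes are bit strings; `fitness` maps a chromosome to [0, 1].
    All names and parameters are assumptions for this sketch.
    """
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]
    best = max(pop, key=fitness)
    for _ in range(gmax):
        if fitness(best) > target:           # good-enough rule found (cf. Step 5)
            break
        pop.sort(key=fitness, reverse=True)  # rank by fitness (Step 6)
        clones = []
        for chrom in pop[:pop_size // n_clones]:   # clone the fittest (Step 6a)
            for _ in range(n_clones):
                # mutate each bit with probability mut_rate (Step 9)
                clones.append([b ^ (rng.random() < mut_rate) for b in chrom])
        pop = clones[:pop_size]              # next population (Step 10)
        best = max(pop + [best], key=fitness)
    return best

# Toy fitness: fraction of 1-bits; the loop drives it above the target.
best = clonal_search(lambda c: sum(c) / len(c))
```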
Algorithm 3.3:
1. Use the AIS-INMACA tree construction Algorithm 3.1.
2. Use the AIS-INMACA evolution Algorithm 3.2.
3. Trace the corresponding attractor.
4. Travel back from the attractor to the starting node.
5. Identify the start codon.
6. Identify the stop codon.
7. Report the boundaries of the protein coding region.
8. From the first codon of the sequence to the start codon, search for TAATAA.
9. Report the promoter boundary located upstream.
Table 1 shows the transition function for equations (1) and (2). Tables 2 and 3 show the complemented and non-complemented rules.
Algorithm 3.1:
Input: Training set S = {S1, S2, ..., Sx} with P classes
Output: AIS-INMACA tree
Partition(S, P)
1. Generate an AIS-INMACA with x attractor basins (two-neighborhood CA).
2. Distribute the training set into the x attractor basins (nodes).
3. Evaluate the patterns distributed in each attractor basin.
4. If all the patterns S' covered by an attractor basin belong to only one class, label the attractor basin (leaf node) with that class.
5. If the patterns S' of an attractor basin belong to more than one class, call Partition(S', P').
6. Stop.
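The recursive partitioning of Algorithm 3.1 can be sketched as below. A real MACA assigns a sequence to a basin by iterating the CA to an attractor; here a stand-in `basin` function (hashing one character position into two basins) is used purely for illustration, and the data set is invented.

```python
def build_tree(sequences, depth=0):
    """Recursively split (sequence, label) pairs into attractor basins
    until every leaf holds a single class (cf. Algorithm 3.1).

    The basin assignment below is a toy stand-in for tracing a CA to
    its attractor; two basins mimic a two-neighborhood CA.
    """
    def basin(seq):
        return ord(seq[depth % len(seq)]) % 2

    labels = {lab for _, lab in sequences}
    if len(labels) <= 1:                        # Step 4: pure leaf, label it
        return labels.pop() if labels else None
    groups = {0: [], 1: []}                     # Step 2: distribute into basins
    for item in sequences:
        groups[basin(item[0])].append(item)
    if any(not g for g in groups.values()):     # cannot split further
        return sorted(labels)[0]
    return {b: build_tree(g, depth + 1)         # Step 5: recurse on mixed basins
            for b, g in groups.items()}

data = [("ACGT", "coding"), ("TTTT", "noncoding"), ("AAAA", "coding")]
tree = build_tree(data)
```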
Algorithm 3.2 (Partial CLA Algorithm)
Figure 3: Interface
Experimental Results & Discussions
Experiments were conducted using the Fickett and Tung data [30] for predicting the protein coding regions. All 21 measures reported in [30] were considered for developing the classifier. Promoters were tested and trained with the MPromDb data sets [31]. Figure 3 shows the interface developed. Figure 4 is the training interface with rules, sequences and real values. Figure 5 shows the testing interface. Table 4 shows the execution time for predicting both protein and promoter regions, which is very promising. Table 5 shows the number of data sets handled by AIS-INMACA. Figures 6 and 7 show the prediction accuracy for each task, which is the important output of our work. Figure 8 gives the prediction of exons
Table 5: Number of Data Sets Used
Figure 4: Training Interface
Figure 5: Testing Interface
Figure 6: Predictive Accuracy for Protein Coding Regions
Figure 7: Predictive Accuracy for Promoter Regions
Size of Data Set    Prediction Time of Integrated Algorithm
5000                1064
6000                1389
10000               2002
20000               2545

Table 4: Execution time for prediction of both protein and promoter regions
Figure 8: Exons Prediction
Figure 9: Exons Boundaries Reporting
Figure 10: Predictive Accuracy for Promoter Regions
from the given input graphically. Figure 9 gives the boundary locations of the exons. Figure 10 gives the promoter region prediction and boundary reporting.
Conclusion
We have successfully developed a logical classifier designed with MACA and strengthened with the AIS technique. The accuracy of the AIS-INMACA classifier is considerably higher than that of the existing algorithms: 84% for protein coding region prediction and 90% for promoter region prediction. The proposed classifier can handle large data sets and sequences of various lengths. This is the first integrated algorithm to process DNA sequences of length 252,354. This novel classifier framework can be used to address many problems like protein structure prediction, RNA structure prediction, predicting the splicing pattern of any primary transcript, and the analysis of the information content in DNA, RNA and protein sequences and structures, and many more. We have successfully developed a framework with this classifier that will lay the foundation for future application of CA in a number of bioinformatics applications.
References
1) John Von Neumann (1996) The Theory of Self-Reproducing Automata.
Burks AW (Eds)University of Illinois Press, Urbana and London.
2) Wolfram, Stephen (1984) Universality and complexity in cellular automata.
Physica D: Nonlinear Phenomena 1: 1-35.
3) Francis Crick (1970) Central dogma of molecular biology. Nature 227: 561-563.
4) Snyder Eric E, Gary D Stormo (1995) Identification of protein coding regions in genomic DNA. Journal of molecular biology 1: 1-18.
5) Snyder EE, Stormo GD (1993) Identification of coding regions in genomic
DNA sequences: an application of dynamic programming and neural networks. Nucleic Acids Res 11: 607–613.
6) Jonathan Badger H, Gary J Olsen (1999) CRITICA: coding region identification tool invoking comparative analysis. Molecular Biology and Evolution,
16: 512-524.
7) Warren Gish, David States J (1993) Identification of protein coding regions
by database similarity search. Nature genetics 3: 266-272.
8) Steven Salzberg (1995) Locating protein coding regions in human DNA
using a decision tree algorithm. Journal of Computational Biology 3: 473-485.
9) Suprakash Datta and Amir Asif (2005) A fast DFT based gene prediction
algorithm for identification of protein coding regions. In Acoustics, Speech,
and Signal Processing Proceedings.(ICASSP’05) IEEE 5: 653-656.
10) Jesus P. Mena-Chalco, Helaine Carrer, Yossi Zana, Roberto M. Cesar Jr.
(2008) Identification of protein coding regions using the modified Gaborwavelet transform. Computational Biology and Bioinformatics IEEE/ACM
Transactions 2: 198-207.
11) Jinfeng Liu, Julian Gough, BurkhardRost (2006) Distinguishing proteincoding from non-coding RNAs through support vector machines. PLoS genetics 2: e29.
12) Changchuan Yin, Stephen S-T Yau (2007) Prediction of protein coding
regions by the 3-base periodicity analysis of a DNA sequence. J Theor Biol 4:
687-694.
13) Pokkuluri Kiran Sree, Ramesh Babu I (2009) Investigating an Artificial
Immune System to Strengthen the Protein Structure Prediction and Protein
Coding Region Identification using Cellular Automata Classifier. Int J Bioinform Res Appl 5: 647-662.
14) Pokkuluri Kiran Sree, Ramesh Babu I (2014) PSMACA: An Automated Protein Structure Prediction using MACA (Multiple Attractor Cellular Automata). Journal of Bioinformatics and Intelligent Control (JBIC) 3: 211-215.
15) Pokkuluri Kiran Sree , Ramesh Babu I (2013) An extensive report on Cellular Automata based Artificial Immune System for strengthening Automated
Protein Prediction. Advances in Biomedical Engineering Research (ABER) 1:
45-51.
16) Pokkuluri Kiran Sree , Ramesh Babu I (2008) A Novel Protein Coding
Region Identifying Tool using Cellular Automata Classifier with Trust-Region
Method and Parallel Scan Algorithm (NPCRITCACA). International Journal
of Biotechnology & Biochemistry (IJBB) 4: 177-189.
17) Rakesh Mishra K, Jozsef Mihaly, Stéphane Barges, Annick Spierer, et al. (2001) The iab-7 polycomb response element maps to a nucleosome-free region of chromatin and requires both GAGA and pleiohomeotic for silencing activity. Molecular and cellular biology 4: 1311-1318.
18) Krishna Kumar K, Kaneshige J and Satyadas A (2002) Challenging Aerospace Problems for Intelligent Systems. Proceedings of the von Karman Lecture series on Intelligent Systems for Aeronautics, Belgium.
19) Christoph Dieterich, Steffen Grossmann, Andrea Tanzer, Stefan Ropcke,
Peter F Arndt, et al. (2005) Comparative promoter region analysis powered by
CORG. BMC genomics 1: 24.
20) Vetriselvi Rangannan and Manju Bansal (2007) Identification and annotation of promoter regions in microbial genome sequences on the basis of DNA
stability. Journal of biosciences 5: 851-862.
21) Jih-Wei Huang (2003) Promoter Prediction in DNA Sequences. PhD diss
National Sun Yat-sen University.
22) Horwitz M S and Lawrence A Loeb (1986) Promoters selected from random DNA sequences. Proceedings of the National Academy of Sciences 19:
7405-7409.
23) Pokkuluri Kiran Sree, Ramesh Babu I (2013) HMACA: Towards proposing Cellular Automata based tool for protein coding, promoter region identification and protein structure prediction. Int J Eng Res Appl 1: 26-31.
24) Pokkuluri Kiran Sree , Ramesh Babu I (2013) Multiple Attractor Cellular
Automata (MACA) for Addressing Major Problems in Bioinformatics. Review
of Bioinformatics and Biometrics (RBB) 3: 70-76.
25) Pokkuluri Kiran Sree, I.Ramesh Babu (2013) AIS-PRMACA: Artificial
Immune System based Multiple Attractor Cellular Automata for Strengthening PRMACA, Promoter Region Identification. SIJ Transactions on Computer
Science Engineering & its Applications (CSEA), The Standard International
Journals (The SIJ) 4: 124-127.
26) Maji P, Chaudhuri PP (2004) Fuzzy Cellular Automata for Modeling Pattern Classifier IEICE.
27) Maji P, Chaudhuri PP (2004) FMACA: A Fuzzy Cellular Automata Based
Pattern Classifier. Proceedings of 9th International Conference on DatabaseSystems, Korea 2973: 494–505.
28) Flocchini P, Geurts F, Mingarelli A, and Santoro N (2000) Convergence
and Aperiodicity in Fuzzy Cellular Automata: Revisiting Rule 90 Physica D.
29) Pokkuluri Kiran Sree, I.Ramesh Babu (2008) Identification of Protein
Coding Regions in Genomic DNA Using Unsupervised FMACA Based Pattern Classifier. in International Journal of Computer Science & Network Security 8: 1738-7906.
30) Fickett JW, Chang-Shung Tung (1992) Assessment of protein coding
measures. Nucleic Acids Res 24: 6441-6450.
31) Ravi Gupta, Anirban Bhattacharyya, Francisco J Agosto-Perez, Priyankara
Wickramasinghe, Ramana V. Davuluri. (2011) MPromDb update 2010: an integrated resource for annotation and visualization of mammalian gene promoters and ChIP-seq experimental data. Nucleic acids research 39: 1.
A Heuristic Algorithm for optimizing Page
Selection Instructions
Qing’an Li1, Yanxiang He1,2, Yong Chen1, Wei Wu1, Wenwen Xu3
1. Computer School, Wuhan University, Wuhan 430072, China
2. State Key Laboratory of Software Engineering, Wuhan University, Wuhan 430072, China
3. Institute of Computing Technology, Chinese Academy of Sciences
Abstract - Page switching is a technique that increases the memory in microcontrollers without extending the address buses. This technique is widely used in the design of 8-bit MCUs. In this paper, we present an algorithm to reduce the overhead of page switching. To pursue small code size, we place the emphasis on the allocation of functions into suitable pages with a heuristic algorithm, and thereby on the cost-effective placement of page selection instructions. Our experimental results showed that the optimization achieved a reduction in code size of 13.2 percent.

Keywords - compiler optimization, page selection, function partitioning

I. INTRODUCTION

In many kinds of RISC-based MCUs, such as the PIC16F7X [1] family of Microchip Technology Inc., the No. 1 8-bit microcontroller manufacturer, the length of instructions is greatly limited. To support large programs, a special register is designed to store the high part of the code's address. As a result, the program memory layout seems to be multi-paged, with each page of a fixed size. The page size is determined by the bits of an instruction used to store the code address. For example, if a function call instruction is 14 bits wide, among which 11 bits can be used to indicate the code address, the size of each page is 2K * 14 bits. So, to support program memory as large as 8K, this special register is required to provide at least 2 bits to indicate the high part of the code address, sometimes called the page number. A multi-paged program memory with a page size of 2K is illustrated in Figure 1.

Figure 1. Program Memory Maps
With this kind of multi-paged program
memory layout, before the control flow is
transferred from the present instruction to
another far away, extra operations on this
special register are necessary. The instructions
related to these operations, called page
selection instructions in this paper, inevitably
induce extra overhead in code size. Code size
is critical for the programs running in
embedded systems, since smaller code size
often means less consumption of ROM as well
as less energy, and thus more competitiveness
for IC manufacturers.
This paper presents an algorithm to
optimize these page selection instructions by
cost-effective allocation of functions and
placement of page selection instructions. This
paper is organized as follows. Our algorithm is
discussed in detail in Section II. Our
experimental results are shown in Section III.
Related works are reviewed in Section IV.
Then, a conclusion is drawn in Section V.
Finally, Section VI lists the references for this
paper.
II. BACKGROUND
A. Definitions
The following definitions are helpful for
understanding this paper.
1) Page
Page is a logical concept rather than a physical one. As stated above, the page size is determined by the bits in an instruction that indicate the code address. In the PIC16F7X family, only 11 bits are used to indicate the address, so a page here is a space of sequential addresses whose start is divisible by 2K (2^11).
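The page arithmetic implied by this definition is simple to state in code. The sketch below (ours, not from the paper) uses the PIC16F7X figures quoted above: 11 address bits per instruction, hence 2K-word pages.

```python
PAGE_BITS = 11              # 11 address bits per instruction (PIC16F7X)
PAGE_SIZE = 1 << PAGE_BITS  # 2K words per page

def page_of(addr):
    """Page number holding code address `addr`: the high-order address
    bits that must be loaded into the PSR before a far goto/call."""
    return addr >> PAGE_BITS

def page_start(page):
    """Start address of a page; always divisible by 2K."""
    return page << PAGE_BITS

# An 8K program memory spans 4 pages:
assert page_of(0x07FF) == 0      # last word of page 0
assert page_of(0x0800) == 1      # first word of page 1
assert page_start(3) == 3 * 2048
```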
2) Page Selection Register (PSR)
In this paper, the special register used to
indicate the page number, is called page
selection register (which is named PCLATH in
the PIC family).
3) Page Selection Instruction (PSI):
The instructions designed to switch the
page number are called page selection
instructions. That is, a PSI can write into the
PSR directly.
4) Value of PSR
PSR is an 8-bit register, but only two or three of its bits are used to indicate the page number. In this paper, however, the value of PSR is defined to be the page number indicated by PSR.
5) Basic Block
This is commonly used in the compiler
optimization terminology [2]. Briefly, a basic
block is a sequence of instructions in which
flow of control can only enter from its
beginning and leave at its end.
6) Page Transparent Instruction (PTI)
Generally, only the jump operation and
function call operation trigger the loading of
value of PSR into PC. The instructions related
to these operations are called page
nontransparent instructions (PNTI); others are
page transparent. In this paper, only two kinds
of instructions are page nontransparent.
a) goto
Before such an instruction executes, the value of PSR is required to be the same as the number of the page holding the current function.
b) call
Before a function call instruction executes, the value of PSR is required to be the same as the number of the page holding the callee function. After this instruction, PSR may be changed by PSIs in the callee function.
7) Page Transparent Block (PTB)
If a block includes any PSI, the block is
called a page nontransparent block (PNTB);
otherwise, the block is page transparent.
8) FuncPage
This function, mapping functions to page numbers, indicates the number of the page holding a certain function. For a function f, FuncPage(f) is the number of the page holding f.
B. The Motivations
The motivation of our algorithm comes from the following observations. Firstly, the value of PSR at any point is always related to one function or more¹, since PSR indicates the page holding a related function. Besides, at any point immediately before a PNTI, a PSI is needed only when the current value of PSR is not the same as what this PNTI requires. It follows that if the functions are located carefully, the chance that the current value of PSR is just what the PNTI requires can be increased. An example is illustrated in Figure 2: if the current value of PSR is FuncPage(f), and the value required is FuncPage(g), then by placing f and g into the same page, no PSI is needed. So the goal of the optimization is to minimize the PSIs to the greatest extent, and a feasible way towards this goal is to place related functions in the same page as far as we can. The more savings we could obtain by allocating two functions into one page, the more related they are considered. For example, a caller function is related to its first callee function, since allocating them into the same page could save the PSIs before the call instruction.

Figure 2. Optimizing the PSI by function allocation

¹ There is a need to make clear that the value of PSR at some point may be related to more than one function, when more than one path can reach this point or the value is affected by a function call instruction.
III. THE ALGORITHM
An algorithm is presented in this paper devoted to reducing the number of PSIs, with careful allocation of related functions into the same page. After this allocation, a cost-effective placement of PSIs is obtained. The placement of PSIs will not be described in detail, since once the function allocation is finished, the value of PSR required before each PNTI is determined, and what is left is to insert PSIs before a PNTI whenever the current value of PSR differs from the number of the required page. Therefore, our algorithm consists of only the following three steps.
The first step is to find the functions to which the value of PSR is related at every point of the analyzed code. For example, before a call instruction, the callee function is related to the value of PSR; after this call instruction, either the last function invoked (directly or indirectly) by the callee function, or the callee function itself, is related to the value of PSR. This step is called the analysis process. Then, we build a weighted function relation graph (FRG). In this FRG, once it is found that placing two functions into the same page could result in savings of PSIs, the weight value of the edge between the two functions is increased. This step is called the building process. At last, given the number of pages, we place the functions of the analyzed code into the right pages by partitioning the weighted FRG. If we allocate the functions carefully, the number of PSIs can be reduced to the greatest extent. This step is called the partitioning process.
A. The analysis process
In this process, we use data flow analysis [2] to calculate the set of functions related to the value of PSR at every point of the code. Firstly, we use the equations depicted in Figure 3 to calculate whether and how a block affects the value of PSR. Both Gen and Kill are sets of functions of the analyzed code. The function RetVop is depicted in Figure 5. It is obvious that only a PNTB has the potential to affect the value of PSR.

Gen(b)  = RetVop(i),  if b is a PNTB and i is the last PNTI of b
Gen(b)  = { },        if b is a PTB
Kill(b) = { f | f is a function and f ∉ RetVop(i) },  if b is a PNTB and i is the last PNTI of b
Kill(b) = { },        if b is a PTB

Figure 3. Equations for calculating Gen and Kill sets
Then, we use the equations in Figure 4 to iteratively determine which block or blocks affect the value of PSR at the entry and exit points of each block. As with Gen and Kill, both In and Out are sets of functions. Before the iteration begins, the In and Out sets of every block are initialized to be empty.

In(bi)  = ∪_{bj ∈ Preds(bi)} Out(bj)
Out(bi) = Gen(bi) ∪ (In(bi) − Kill(bi))

Figure 4. Equations for calculating In and Out sets
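The iteration over the In/Out equations can be sketched as a standard forward data-flow fixed point. The code below (Python, ours) is illustrative; the three-block CFG at the bottom is a made-up example, not one from the paper.

```python
def solve_psr_sets(blocks, preds, gen, kill):
    """Fixed-point iteration for the In/Out equations of Figure 4.

    blocks: list of block ids; preds[b]: predecessor ids of b;
    gen[b], kill[b]: the Gen/Kill sets of Figure 3. Out follows the
    forward data-flow form Out = Gen U (In - Kill).
    """
    In = {b: set() for b in blocks}
    Out = {b: set() for b in blocks}
    changed = True
    while changed:  # iterate until no In/Out set changes
        changed = False
        for b in blocks:
            new_in = set().union(*[Out[p] for p in preds[b]])
            new_out = gen[b] | (new_in - kill[b])
            if new_in != In[b] or new_out != Out[b]:
                In[b], Out[b] = new_in, new_out
                changed = True
    return In, Out

# Made-up CFG: b0 ends in "call f" (a PNTB generating {f}), b1 is page
# transparent, and b2 joins both, so at b2 the PSR value may relate to f.
blocks = ["b0", "b1", "b2"]
preds = {"b0": [], "b1": [], "b2": ["b0", "b1"]}
gen = {"b0": {"f"}, "b1": set(), "b2": set()}
kill = {"b0": {"g"}, "b1": set(), "b2": set()}
In, Out = solve_psr_sets(blocks, preds, gen, kill)
```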
Now, it is easy to calculate the set of functions related to the value of PSR at any point of the program by scanning each basic block only once. For a point immediately after instruction i, marked as pi, the algorithm depicted in Figure 5 calculates the value of PSR at this point, marked as VOP(pi). In this figure, Out(f.psb) means the Out set for the pseudo block of function f. A pseudo block is not a real block of code: after we construct the control flow graph (CFG), we add a pseudo block into the CFG such that it is connected to every block with no successor in the CFG, and becomes their common successor. Therefore, this pseudo block is the unique exit of the function.
func GetVop(pi):
    if pi is the entry point of a block b:
        VOP(pi) = In(b)
    else if i is a PTI:
        VOP(pi) = VOP(pi-1)
    else:
        VOP(pi) = RetVop(i)

func RetVop(i):
    if i is a "goto" instruction:
        return {the current function}
    else if i is a "call f" instruction:
        return Out(f.psb)

Figure 5. Calculating the value of PSR at any point
It’s noteworthy that there seems to be a
cyclic dependency in this algorithm, since
RetVop depends on Out, Out depends on Gen,
and Gen depends on RetVop. There are two
explanations. In the first place, even though
there are cyclic function callings in the
program, there is a fix point for the
inter-procedural flow analysis [2]. In the
second place, if there are no cyclic function
callings in the program, a more efficient
algorithm could be obtained by processing the
callee function before the caller function. A
topological sorting could do it well.
B. The building process
With the analysis from the first step, it is easy to build the weighted function relation graph. First, we build a complete graph, with each node representing a function, and initialize the weight of every edge to zero. Then, by scanning all the PNTIs in the program once, the weights are updated with the algorithm depicted in Figure 6. After this step, we assume that the weight of an edge between two functions approximates the cost savings we would gain by placing these two functions in the same page.
C. The partitioning process
All that remains is to partition the FRG. With the assumption stated in the building process, the sum of the weights of all edges in the FRG is the total cost savings. Therefore, the page selection optimization problem can be restated as follows:
1) Any function may be placed into only one page.
2) The number of pages to be used is specified by the MCU.
3) The total size of the functions placed in one page must never exceed the size of the page.
4) If two functions are placed into the same page, the weight of the edge between them can be reset to zero.
5) The goal is to find a partition that satisfies the four conditions above and minimizes the sum of the weights of all edges in the resulting graph.
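Under the assumption stated above, the objective in rule 5 is easy to evaluate for any candidate partition. A minimal Python sketch (the graph and partition encodings here are invented):

```python
def remaining_weight(weights, partition):
    """Total weight of edges whose two functions land in different pages.

    weights: maps frozenset({f, g}) -> edge weight
    partition: maps function -> page index
    """
    total = 0
    for edge, w in weights.items():
        f, g = tuple(edge)
        if partition[f] != partition[g]:   # rule 4: same-page edges cost nothing
            total += w
    return total

weights = {frozenset({"a", "b"}): 5,
           frozenset({"b", "c"}): 3,
           frozenset({"a", "c"}): 2}
cost = remaining_weight(weights, {"a": 0, "b": 0, "c": 1})
```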
The partitioning problem has been proved to be NP-hard, so no polynomial-time optimal algorithm is known. This paper presents a greedy algorithm, described in Figure 7. In this algorithm, the allocation is guided by dynamically updated statistics: we always try to allocate to the current page the function whose allocation would save the most PSIs according to these statistics (i.e., remove the most weight from the current graph).
D. Complexity analysis of the algorithm
For the first step, the Gen and Kill sets can be calculated by scanning the whole program once. Although iteration is needed to calculate the In and Out sets, the number of iterations is bounded by a small constant factor [2]. So the time cost is linear in the code size S.
For the second step, most instructions are PNTIs and, in the worst case, VOP(i) includes all the functions. If the number of
1  func AddWeight i, f
2    // Graph is the complete graph of function relations
3    // i is the currently analyzed instruction, f is the currently analyzed function
4    // PreValue is a predefined value for calculating the weight value
5    if i is a "goto" instruction
6      if VOP(i-1) is not equal to {f}
7        forall pairs of functions (g,h) in VOP(i-1) ∪ {f}
8          add PreValue/|VOP(i-1)| to Graph(g,h)
9    else if i is a "call e" instruction
10     if VOP(i-1) is not equal to {e}
11       forall pairs of functions (g,h) in VOP(i-1) ∪ {e}
12         add PreValue/|VOP(i-1)| to Graph(g,h)
Figure 6. Code for calculating weight value
1  func partition
2    // Pages is a list of pages; their number is specified by the MCU
3    // Funcs is a list of all the functions; Graph is the FRG
4    // Weight is a map from functions and pages to weight values, indicating the cost savings
5    initialize the size of each of the pages to 2k
6    sort Funcs by function size in descending order
7    get the first function f from Funcs
9    get a page p from Pages
10   if p.size > f.size
11     remove f from Funcs
12     p.size <- p.size - f.size
13     forall g from Funcs
14       Weight[p,g] <- Weight[p,g] + Graph[f,g]
15   else print "not enough memory error"
17   while Funcs is not empty
18     get the function f from Funcs with the greatest Weight[p,f]
19     calculate the number n of PNTIs in f and estimate the size f.size of f
20     if p.size > f.size
21       remove f from Funcs
22       p.size <- p.size - f.size
23       p.size <- p.size + n
24       forall g from Funcs
25         Weight[p,g] <- Weight[p,g] + Graph[f,g]  // complementing the cost savings
26     else if a page p' with p'.size > f.size can be found in Pages
27       p <- p'
28     else print "not enough memory error"
Figure 7. Code for function partitioning
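The greedy strategy can be sketched in Python. This is a simplification, not the paper's exact Figure 7: the PSI-size rebate of line 23 is omitted, and all names and encodings are invented:

```python
def greedy_partition(funcs, sizes, graph, n_pages, page_size):
    """Greedy sketch: seed each page with the largest remaining function,
    then repeatedly pull in the function with the most edge weight into it.

    sizes maps f -> size; graph maps frozenset({f, g}) -> weight.
    Returns a dict mapping each placed function to a page index."""
    remaining = sorted(funcs, key=lambda f: sizes[f], reverse=True)
    placement = {}
    for p in range(n_pages):
        free = page_size
        # seed the page with the largest remaining function that fits
        seed = next((f for f in remaining if sizes[f] <= free), None)
        if seed is None:
            continue
        current = [seed]
        remaining.remove(seed)
        free -= sizes[seed]
        placement[seed] = p
        while True:
            # pick the fitting function with the most weight into this page
            best, best_w = None, 0
            for f in remaining:
                if sizes[f] > free:
                    continue
                w = sum(graph.get(frozenset({f, g}), 0) for g in current)
                if w > best_w:
                    best, best_w = f, w
            if best is None:             # no positive-affinity function fits
                break
            current.append(best)
            remaining.remove(best)
            free -= sizes[best]
            placement[best] = p
    return placement

sizes = {"a": 3, "b": 2, "c": 2, "d": 1}
graph = {frozenset({"a", "b"}): 10, frozenset({"c", "d"}): 8,
         frozenset({"a", "c"}): 1}
placement = greedy_partition(["a", "b", "c", "d"], sizes, graph,
                             n_pages=2, page_size=5)
```

On this toy instance the heavy pairs (a,b) and (c,d) end up co-located, so no cross-page weight remains.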
functions is NOF, then the code in the 8th line and the 12th line of Figure 6 executes at most NOF^2 times. Therefore, the time cost is linear in S*NOF^2.
For the third step, the time cost is dominated by the code in the 25th line of Figure 7. If the number of pages is NOP, and Weight and Graph are implemented with random-access data structures, the time cost is linear in NOP*NOF^2.
Since S is commonly greater than NOP, the time cost of this algorithm is linear in S*NOF^2. The space cost is dominated by the implementation of Weight and Graph, which are bounded by NOP*NOF and NOF*NOF, respectively.
IV. EXPERIMENTAL RESULTS
We implemented the algorithm stated above in a cross-compiler framework, HICC. HICC compiles source code written in C into target code executable on the HR6P family of microcontrollers, each member of which is typically a RISC-based Harvard architecture with instruction sizes of 14, 15, or 16 bits and an 8-bit data bus. With a special register for the higher part of the code address, also called PSR, these MCUs can support program memory of at most 64K*16 bits.
Our experiments were conducted on the HR6P90H MCU [4], which provides 8K or 16K * 15 bits of program memory. The benchmark suite comprises 21 programs from industrial applications in embedded systems, such as those for electric stoves, electric bicycles and washing machines. The results illustrated in Figure 8 show that the code size shrank to 86.8 percent on average.
Figure 8. Code compiled with optimization and without optimization (bar chart; x-axis: 21 sample codes; y-axis: code size in words, 0 to 12000; series: "No page selection opti" and "With page selection opti")
V. RELATED WORKS
To our knowledge, there is little research in the literature on optimizing page selection instructions, although the problem has existed for a long time. However, much work has been done on optimizing bank selection instructions, a problem very close to that of optimizing page selection instructions. Bernhard et al. [5] formulated the problem of optimizing bank selection instructions for multiple goals as a form of Partitioned Boolean Quadratic Programming (PBQP). However, they assumed that the variables had already been assigned to specified banks. In the work by Yuan Mengting et al. [6], variable partitioning was performed on a part of the memory, namely the shared memory, which is highly architecture dependent. Liu Tiantian et al. [7] claimed to have integrated variable partitioning into the optimization of bank selection instructions. But, through the analysis of code patterns, they placed the emphasis on the positions for inserting bank selection instructions rather than on the variable partitioning. Many other works [8] on variable partitioning focus on DSP processors, where parallelism and energy consumption attract the most attention. There is also research on improving the overall throughput of MPSoC architectures by variable partitioning.
There are some differences between optimizing bank selection and page selection. The construction of the function relation graph is not as simple as the construction of the variable access graph, since the former generally needs inter-procedural analysis. Besides, the placement of a function can dynamically change the size of the function itself, since some PSIs are optimized away. This makes it more difficult to estimate the remaining space of a page.
VI. CONCLUSIONS AND FUTURE WORK
In this paper, we present an algorithm to optimize page selection instructions by function partitioning. Our experiments showed that it achieves a substantial improvement in code size. However, there is still much work to be done to improve this algorithm. The following directions may be worth considering.
• Analyze the code patterns in more detail, as inspired by [7]. The assumption mentioned above is rough; the weight values should be estimated more precisely to guide the partitioning process. A probabilistic algorithm might work well for estimating the weight values.
• The algorithm presented in this paper for the partitioning process may be somewhat naive. We could try to design a more complex but more effective one; a clustering algorithm may be worth a try.
VII. REFERENCES
[1] Microchip Technology Inc. PIC16F7X Data Sheet, 2002.
[2] Alfred V. Aho, Monica S. Lam, Ravi Sethi, and Jeffrey D. Ullman. Compilers: Principles, Techniques, & Tools (second edition). Pearson Education, 2007.
[3] Andrew W. Appel and Jens Palsberg. Modern Compiler Implementation in Java. Cambridge University Press, 2002.
[4] Shanghai Haier Integrated Circuit Co., Ltd. HR6P90/90H Data Sheet, 2008.
[5] Bernhard Scholz, Bernd Burgstaller, and Jingling Xue. Minimizing Bank Selection Instructions for Partitioned Memory Architectures. Proceedings of the 2006 International Conference on Compilers, Architecture and Synthesis for Embedded Systems, pages 201-211, 2006.
[6] Yuan Mengting, Wu Guoqing, and Yu Chao. Optimizing Bank Selection Instructions by Using Shared Memory. The 2008 International Conference on Embedded Software and Systems (ICESS 2008).
[7] Liu Tiantian, Minming Li, and Chun Jason Xue. Joint Variable Partitioning and Bank Selection Instruction Optimization on Embedded Systems with Multiple Memory Banks. Proceedings of the Asia and South Pacific Design Automation Conference (ASP-DAC 2010), 2010.
[8] Rainer Leupers and Daniel Kotte. Variable Partitioning for Dual Memory Bank DSPs. ICASSP, 2001.
[9] Zhong Wang and Xiaobo Sharon Hu. Power Aware Variable Partitioning and Instruction Scheduling for Multiple Memory Banks. Proceedings of the Design, Automation and Test in Europe Conference and Exhibition (DATE'04), 2004.
[10] Qingfeng Zhuge, Edwin Hsing-Mean Sha, Bin Xiao, and Chantana Chantrapornchai. Efficient Variable Partitioning and Scheduling for DSP Processors With Multiple Memory Modules. IEEE Transactions on Signal Processing, Vol. 52, No. 4, 2004.
G1-smooth splines on quad meshes with 4-split
macro-patch elements
arXiv:1703.06717v1 [] 20 Mar 2017
Ahmed Blidia a Bernard Mourrain a Nelly Villamizar b
a UCA, Inria Sophia Antipolis Méditerranée, AROMATH, Sophia Antipolis, France
b Swansea University, Swansea, UK
Abstract
We analyze the space of differentiable functions on a quad-mesh M, which are composed of 4-split spline macro-patch elements on each quadrangular face. We describe explicit transition maps across shared edges that satisfy conditions which ensure that the space of differentiable functions is ample on a quad-mesh of arbitrary topology. These transition maps define a finite dimensional vector space of G1 spline functions of bi-degree ≤ (k, k) on each quadrangular face of M. We determine the dimension of this space of G1 spline functions for k big enough and provide explicit constructions of basis functions attached respectively to vertices, edges and faces. This construction requires the analysis of the module of syzygies of univariate b-spline functions with b-spline function coefficients. New results on their generators and dimensions are provided. Examples of bases of G1 splines of small degree for simple topological surfaces are detailed and illustrated by parametric surface constructions.
Key words: geometrically continuous splines, dimension and bases of spline spaces,
gluing data, polygonal patches, surfaces of arbitrary topology
1 Introduction
Quadrangular b-spline surfaces are ubiquitous in geometric modeling. They
are represented as tensor products of univariate b-spline functions. Many of
Email addresses: [email protected] (Ahmed Blidia),
[email protected] (Bernard Mourrain),
[email protected] (Nelly Villamizar).
the properties of univariate b-splines extend naturally to this tensor representation. They are well suited to describe parts of shapes with features organized along two different directions, as is often the case for manufactured objects. However, the complete description of a shape by tensor product b-spline patches may require intersecting and trimming them, resulting in a geometric model which is inaccurate or difficult to manipulate or deform.
To circumvent these difficulties, one can consider geometric models composed
of quadrangular patches, glued together in a smooth way along their common
boundary. Continuity constraints on the tangent planes (or on higher osculating spaces) are imposed along the shared edges. In this way, smooth surfaces
can be generated from quadrilateral meshes by gluing several simple parametric surfaces. By specifying the topology of a quad(rilateral) mesh M and the
geometric continuity along the shared edges via transition maps, we obtain a
(vector) space of smooth b-spline functions on M.
Our objective is to analyze this set of smooth b-spline functions on a quad
mesh M of arbitrary topology. In particular, we want to determine the dimension and a basis of the space of smooth functions composed of tensor product
b-spline functions of bounded degree. By determining bases of these spaces,
we can represent all the smooth parametric surfaces which satisfy the geometric continuity conditions on M. Any such surface is described by its control
points in this basis, which are the coefficients in the basis of the differentiable
functions used in the parametrization.
The construction of basis functions of a spline space has several applications.
For visualization purposes, smooth deformations of these models can be obtained simply by changing their coefficients in the basis, while keeping satisfied
the regularity constraints along the edges of the quad mesh. Fitting problems
for constructing smooth models that approximate point sets or satisfy geometric constraints can be transformed into least-squares problems on the coefficient vector of a parametric model and solved by standard linear algebra
tools. Knowing a basis of the space of smooth spline functions of bounded
degree on a quad mesh can also be useful in Isogeometric Analysis. In this
methodology, the basis functions are used to describe the geometry and to approximate the solutions of partial differential equations on the geometry. The
explicit knowledge of a basis allows to apply Galerkin type methods, which
project the solution onto the space spanned by the basis functions.
In the last decades, several works have been focusing on the construction of G1
surfaces, including [CC78], [Pet95], [Loo94], [Rei95], [Pra97], [YZ04], [GHQ06],
[HWW+ 06], [FP08], [HBC08], [PF10], [BGN14], [BH14]. Some of these constructions use tensor product b-spline elements. In [Pet95], biquartic b-spline
elements are used on a quad mesh obtained by middle point refinement of a
general mesh. These elements involve 25 control coefficients. In [Rei95], biquadratic b-spline elements are used on a semi-regular quad mesh obtained by three levels of mid-point refinements. These correspond to 16-split macro-patches, which involve 81 control points. In [Pet00], bicubic b-spline elements
with 3 nodes, corresponding to a 16-split of the parameter domain are used.
The macro-patch elements involve 169 control coefficients. In [LCB07], biquintic polynomial elements are used for solving a fitting problem. They involve 36
control coefficients. Normal vectors are extracted from the data of the fitting
problem to specify the G1 constraints. In [SWY04] biquintic 5-split b-spline elements are involved. They are represented by 100 control coefficients or more.
In [FP08], bicubic 9-split b-spline elements are involved. They are represented
by 100 control coefficients. In [PF10], it is shown that bicubic G1 splines with
linear transition maps requires at least a 9-split of the parameter domains. In
[HBC08], bicubic 4-split macro-patch elements are used. They are represented
by 36 control coefficients. The construction does not apply for general quad
meshes. In [BH14], biquartic 4-split macro-elements are used. They involve 81
control coefficients. The construction applies for general quad meshes and is
used to solve the interpolation problem of boundary curves. In these constructions, symmetric geometric continuity constraints are used at the vertices of
the mesh.
Much less work has been developed on the dimension analysis. In [MVV16],
a dimension formula and explicit basis constructions are given for polynomial patches of degree ≥ 4 over a mesh with triangular or quadrangular cells. In [BM14], a similar result is obtained for the space of G1 splines of bi-degree ≥ (4, 4) for rectangular decompositions of planar domains. The construction of
basis functions for spaces of C 1 geometrically continuous functions restricted
to two-patch domains, has been considered in [KVJB15]. In [CST16], the approximation properties of the aforementioned spaces are explored, including
constructions over multi-patch geometries motivated by applications in isogeometric analysis.
In this paper we analyze the space of G1 splines on a general quad mesh M,
with 4-split macro-patch elements of bounded bi-degree. We describe explicit
transition maps across shared edges, that satisfy conditions which ensure that
the space of differentiable functions is ample on the quad mesh M of arbitrary topology. These transition maps define a finite dimensional vector space
of G1 b-spline functions of bi-degree ≤ (k, k) on each quadrangular face of M.
We determine the dimension of this space for k big enough and provide explicit constructions of basis functions attached respectively to vertices, edges
and faces. This construction requires the analysis of the module of syzygies
of univariate b-spline functions with b-spline coefficients. New results on their
generators and dimensions are provided. This yields a new construction of
smooth splines on quad meshes of arbitrary topology, with macro-patch elements of low degree.
3
Examples of bases of G1 splines of small degree for simple topological surfaces
are detailed and illustrated by parametric surface constructions.
The techniques developed in this paper for the study of geometrically smooth
splines rely on the analysis of the syzygies of the gluing data, similarly to
the approach used in [MVV16] for polynomial patches. However an important difference is that we consider here syzygies of spline functions with spline
coefficients. The classical properties of syzygy modules over the ring of polynomials used in [MVV16] do not apply to spline functions. New results on syzygy
modules over the ring of piecewise polynomial functions (with one node and
prescribed regularity) are presented in Sections 4.1, 4.2. In particular, Proposition 4.8 describes a family of gluing data with additional degrees of freedom,
providing new possibilities to construct ample spaces of spline functions in
low degree. The necessary notation and constraints from the case of polynomial patches are used in the course of the paper. But the properties of the Taylor maps exploited in [MVV16] do not carry over directly to our context. In our setting, the construction of the spline space requires extending the results on these Taylor
maps to the context of macro-patches. Sections 4.3, 4.4, 4.5 present an alternative analysis adapted to our needs. Exploiting these properties, vertex basis
functions and face basis functions can then be constructed in the same way
as in the polynomial case.
The paper is organized as follows. The next section recalls the notions of topological surface M, differentiable functions on M and smooth spline functions
on M. In Section 3, we detail the constraints on the transition maps to have
an ample space of differentiable functions and provide explicit constructions.
In Sections 4, 5 and 6, we analyze the space of smooth spline functions around an edge, a vertex and a face, respectively, and describe basis functions attached
to these elements. In Section 7, we give the dimension formula for the space
of spline functions of bi-degree ≤ (k, k) over a general quad mesh M and
describe a basis. Finally, in Section 7, we give examples of such smooth spline
spaces.
2 Definitions and basic properties
In this section, we define and describe the objects we need to analyze the
spline spaces on a quad mesh.
2.1 Topological surface
Definition 2.1 A topological surface M is given by
Fig. 1. Given an edge τ of a topological surface M that is shared by two polygons σ0 , σ1 ∈ M, we associate a different coordinate system to each of these two faces and consider τ as the pair of edges τ0 and τ1 in σ0 and σ1 , respectively.
• a collection M2 of polygons (also called faces of M) in the plane that are
pairwise disjoint,
• a collection of homeomorphisms φi,j : τi 7→ τj between polygonal edges from
different polygons σi and σj of M2 ,
where a polygonal edge can be glued with at most one other polygonal edge, and
it cannot be glued with itself. The shared edges (resp. the points of the shared
edges) are identified with their image by the corresponding homeomorphism.
The collection of edges (resp. vertices) is denoted M1 (resp. M0 ).
For a vertex γ ∈ M0 , we denote by Mγ the submesh of M composed of the
faces which are adjacent to γ. For an edge τ ∈ M1 , we denote by Mτ the
submesh of M composed of the faces which are adjacent to the interior of τ .
Definition 2.2 (Gluing data) For a topological surface M, a gluing structure associated to M consists of the following:
• for each edge τ ∈ M1 of a cell σ, an open set Uτ,σ of R2 containing τ ;
• for each edge τ ∈ M1 shared by two polygons σi , σj ∈ M2 , a C 1 -diffeomorphism
called the transition map φσj ,σi : Uτ,σi → Uτ,σj between the open sets Uτ,σi
and Uτ,σj , together with its inverse map φσi ,σj ;
Let τ be an edge shared by two polygons σ0 , σ1 ∈ M2 , τ = τ0 in σ0 , τ = τ1
in σ1 respectively and let γ = (γ0 , γ1 ) be a vertex of τ corresponding to γ0 in
σ0 and to γ1 in σ1 . We denote by τ1′ (resp. τ0′ ) the second edge of σ1 (resp. σ0 )
through γ1 (resp. γ0 ). We associate to σ1 and σ0 two coordinate systems (u1 , v1 )
and (u0 , v0 ) such that γ1 = (0, 0), τ1 = {(u1 , 0), u1 ∈ [0, 1]}, τ1′ = {(0, v1 ), v1 ∈
[0, 1]} and γ0 = (0, 0), τ0 = {(0, v0 ), v0 ∈ [0, 1]}, τ0′ = {(u0 , 0), u0 ∈ [0, 1]}, see
Figure 1. Using the Taylor expansion at (0, 0), a transition map from Uτ,σ1 to
Uτ,σ0 is then of the form
φσ0 ,σ1 : (u1 , v1 ) −→ (u0 , v0 ) = ( v1 bτ,γ (u1 ) + v1² ρ1 (u1 , v1 ), u1 + v1 aτ,γ (u1 ) + v1² ρ2 (u1 , v1 ) )    (1)
where aτ,γ (u1 ), bτ,γ (u1 ), ρ1 (u1 , v1 ), ρ2 (u1 , v1 ) are C 1 functions. We will refer to
it as the canonical form of the transition map φ0,1 at γ along τ . The functions
[aτ,γ , bτ,γ ] are called the gluing data at γ along τ on σ1 .
Definition 2.3 An edge τ ∈ M which contains the vertex γ ∈ M is called a
crossing edge at γ if aτ,γ (0) = 0 where [aτ,γ , bτ,γ ] is the gluing data at γ along
τ . We define cτ (γ) = 1 if τ is a crossing edge at γ and cτ (γ) = 0 otherwise.
By convention, cτ (γ) = 0 for a boundary edge. If γ ∈ M0 is an interior vertex
where all adjacent edges are crossing edges at γ, then it is called a crossing
vertex. Similarly, we define c+ (γ) = 1 if γ is a crossing vertex and c+ (γ) = 0
otherwise.
2.2 Differentiable functions
We now define the differentiable functions on M and the spline functions on
M.
Definition 2.4 (Differentiable functions) A differentiable function on a
topological surface M is a collection f = (fσ )σ∈M2 of differentiable functions
such that for each two faces σ0 and σ1 sharing an edge τ with φ0,1 as transition
map, the two functions fσ1 and fσ0 ◦ φ0,1 have the same Taylor expansion of
order 1. The function fσ is called the restriction of f on the face σ.
This leads to the following two relations for each u1 ∈ [0, 1]:
f1 (u1 , 0) = f0 (0, u1 )    (2)
∂f1 /∂v1 (u1 , 0) = bτ,γ (u1 ) ∂f0 /∂u0 (0, u1 ) + aτ,γ (u1 ) ∂f0 /∂v0 (0, u1 )    (3)
where f1 = fσ1 , f0 = fσ0 are the restrictions of f on the faces σ1 , σ0 .
For r ∈ N, let U r = S r ([0, 1/2, 1]) be the space of piecewise univariate polynomial functions (or splines) on the subdivision [0, 1/2, 1], which are of class C r . We denote by Ukr the spline functions in U r whose polynomial pieces are of degree ≤ k. We denote by R[u] the ring of polynomials in one variable u, with coefficients in R.
Let Rr (σ) be the space of spline functions of regularity r in each parameter
over the 4-split subdivision of the quadrangle σ (see Figure 2), that is, the
tensor product of U r with itself.
For k ∈ N, the space of b-spline functions of degree ≤ k in each variable, that is of bi-degree ≤ (k, k), is denoted Rrk (σ). A function fσ ∈ Rrk (σ) is represented
Fig. 2. 4-split of the parameter domain
in the b-spline basis of σ as
fσ := Σ_{0≤i,j≤m} cσi,j (fσ ) Ni (uσ ) Nj (vσ ),
where cσi,j (fσ ) ∈ R and N0 , . . . , Nm are the classical b-spline basis functions of Ukr with m = 2k − r. The dimension of Rrk (σ) is (m + 1)² = (2k − r + 1)².
The geometrically continuous spline functions on M are the differentiable functions f on M, where each component fσ on a face σ ∈ M2 is in Rr (σ). We denote this spline space by S 1,r (M). The set of splines f ∈ S 1,r (M) with fσ ∈ Rrk (σ) is denoted Sk1,r (M).
2.3 Taylor maps
An important tool that we are going to use intensively is the Taylor map
associated to a vertex or to an edge of M. For each face σ the space of spline
functions over a subdivision onto 4 parts as in the figure above will be denoted
Rr (σ). Let γ ∈ M0 be a vertex on a face σ ∈ M2 belonging to two edges τ, τ ′ ∈ M1 of σ. We define the ring of γ on σ by Rσ (γ) = R(σ)/(ℓτ ², ℓτ ′ ²), where (ℓτ ², ℓτ ′ ²) is the ideal generated by the squares of ℓτ and ℓτ ′ , and ℓτ (u, v) = 0 and ℓτ ′ (u, v) = 0 are respectively the equations of τ and τ ′ in Rr (σ) = S r .
The Taylor expansion at γ on σ is the map
Tγσ : f ∈ Rr (σ) ↦ f mod (ℓτ ², ℓτ ′ ²) in Rσ (γ).
Choosing an adapted basis of Rσ (γ), one can define Tγσ by
Tγσ (f ) = [ f (γ), ∂u f (γ), ∂v f (γ), ∂u ∂v f (γ) ].
The map Tγσ can also be defined in another basis of Rσ (γ) in terms of the b-spline coefficients by
Tγσ (f ) = [ cσ0,0 (f ), cσ1,0 (f ), cσ0,1 (f ), cσ1,1 (f ) ]
where c0,0 , c1,0 , c0,1 , c1,1 are the first b-spline coefficients associated to f on σ at γ = (0, 0).
We define the Taylor map Tγ on all the faces σ that contain γ,
Tγ : f = (fσ ) ∈ ⊕σ Rr (σ) → (Tγσ (fσ )) ∈ ⊕σ⊃γ Rσ (γ).
Similarly, we define T as the Taylor map at all the vertices on all the faces of
M.
If τ ∈ M1 is the edge of the face σ(uσ , vσ ) ∈ M2 associated to vσ = 0, we
define the restriction along τ on σ as
Dτσ : Rrk (σ) → Rrk (σ)
f = Σ_{0≤i,j≤m} cσi,j (f ) Ni (uσ ) Nj (vσ ) ↦ Σ_{0≤i≤m, 0≤j≤1} cσi,j (f ) Ni (uσ ) Nj (vσ ).
The restrictions along the edges vσ = 1, uσ = 0, uσ = 1 are defined similarly
by symmetry. By convention if τ is not an edge of σ, Dτσ = 0.
For a face σ ∈ M2 , we define the restriction along the edges of σ as
Dσ : Rrk (σ) → Rrk (σ)
f = Σ_{0≤i,j≤m} cσi,j (f ) Ni (uσ ) Nj (vσ ) ↦ Σ_{i≤1, or i≥m−1, or j≤1, or j≥m−1} cσi,j (f ) Ni (uσ ) Nj (vσ ).
The edge restriction map along all edges of M is given by
D : f = (fσ ) ∈ ⊕σ Rrk (σ) → (Dσ (fσ )) ∈ ⊕σ Rrk (σ).
3 Transition maps
The spline space on the mesh M is constructed using the transition maps associated to the edges shared by pairs of polygons in M. The transition map across an edge τ is given by formula (1), where a(u) = a(u)/c(u), b(u) = b(u)/c(u) and [a(u), b(u), c(u)] is a triple of functions, called gluing data. In the following, the transition maps will be defined from spline functions in Ulr , of class C r and degree l, with nodes 0, 1/2, 1 for the gluing data. We assume that the dimension of Ulr is at least 4, that is, 2l + 1 − r ≥ 4 and r ≥ 0, so that l ≥ (3 + r)/2, which implies that l ≥ 2.
We denote by d0 (u), d1 (u) ∈ Ulr two spline functions such that d0 (0) = 1, d0 (1) = 0, d1 (0) = 0, d1 (1) = 1 and d0′ (0) = d0′ (1) = d1′ (0) = d1′ (1) = 0. We can take, for instance,
d0 (u) = N0 (u) + N1 (u),  d1 (u) = Nm−1 (u) + Nm (u)    (4)
where m = 2l − r. For l = 2, r = 1, these functions are
d0 (u) = { 1 − 2u²  for 0 ≤ u ≤ 1/2 ;  2(1 − u)²  for 1/2 ≤ u ≤ 1 },
d1 (u) = { 2u²  for 0 ≤ u ≤ 1/2 ;  1 − 2(1 − u)²  for 1/2 ≤ u ≤ 1 }.
For l = 2, r = 0, these functions are
d0 (u) = { 1 − 4u²  for 0 ≤ u ≤ 1/2 ;  0  for 1/2 ≤ u ≤ 1 },
d1 (u) = { 0  for 0 ≤ u ≤ 1/2 ;  1 − 4(1 − u)²  for 1/2 ≤ u ≤ 1 }.
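As a numerical sanity check (not part of the paper), the C¹ pair for l = 2, r = 1 can be evaluated in Python and its Hermite end-point conditions verified by finite differences:

```python
def d0(u):
    # l = 2, r = 1: piecewise quadratic with d0(0) = 1, d0(1) = 0
    return 1 - 2*u*u if u <= 0.5 else 2*(1 - u)**2

def d1(u):
    # companion blend with d1(0) = 0, d1(1) = 1
    return 2*u*u if u <= 0.5 else 1 - 2*(1 - u)**2

def deriv(f, u, h=1e-6):
    # central finite difference, good enough for a sanity check
    return (f(u + h) - f(u - h)) / (2 * h)

end_values = (d0(0.0), d0(1.0), d1(0.0), d1(1.0))   # (1, 0, 0, 1)
end_derivs = (deriv(d0, 0.0), deriv(d0, 1.0),
              deriv(d1, 0.0), deriv(d1, 1.0))       # all close to 0
```

Both pieces also agree in value and first derivative at the node u = 1/2, as required for class C¹.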
To ensure that the space of spline functions is sufficiently ample (i.e., it contains enough regular functions, see [MVV16, Definition 2.5]), we impose compatibility conditions.
First around an interior vertex γ ∈ M0 , which is common to faces σ1 , . . . , σF
glued cyclically around γ, along the edges τi = σi ∩ σi−1 for i = 2, . . . , F + 1
(with σF +1 = σ1 ), we impose the condition: Jγ (φ1,2 ) ◦ · · · ◦ Jγ (φF,F +1 ) = I2
where Jγ is the jet or Taylor expansion of order 1 at γ. It translates into the
following condition (see [MVV16]):
Condition 3.1 If γ ∈ M0 is an interior vertex and belongs to the faces
σ1 , . . . , σF that are glued cyclically around γ, then the gluing data [ai , bi ] at γ
on the edges τi between σi−1 and σi satisfies
∏_{i=1}^{F} ( 0 1 ; bi (0) ai (0) ) = ( 1 0 ; 0 1 ),    (5)
where each 2×2 matrix is written row by row. This gives algebraic restrictions on the values ai (0), bi (0).
In addition to Condition 3.1, we also consider the following condition around
a crossing vertex:
Condition 3.2 If the vertex γ is a crossing vertex with 4 edges τ1 , . . . , τ4 , the
gluing data [ai , bi ], i = 1, . . . , 4, on these edges at γ satisfy
a1′ (0) + b4′ (0)/b4 (0) = −b1 (0) ( a3′ (0) + b2′ (0)/b2 (0) ),    (6)
a2′ (0) + b1′ (0)/b1 (0) = −b2 (0) ( a4′ (0) + b3′ (0)/b3 (0) ).    (7)
Let us notice that we can write the previous conditions on the gluing data
(which in our setting is given by spline functions) as in [MVV16] since they
depend on the value of the functions defining the gluing data and not on the
particular type of functions. The conditions (6) and (7) were introduced in
[MVV16] in the context of gluing data defined from polynomial functions,
they generalize the conditions of [PF10], where bi (0) = −1. The conditions
come from the relations between the derivatives and the cross-derivatives of
the face functions across the edges at a crossing vertex.
An additional condition of topological nature is also considered in [MVV16].
It ensures that the glued faces around a vertex γ are equivalent to sectors
around a point in the plane, via the reparameterization maps. We will not
need it hereafter.
To define transition maps which satisfy these conditions, we first compute the
values of the transition functions aτ , bτ of an edge τ at its end points and then
interpolate the values:
(1) For all the vertices γ ∈ M0 and for all the edges τ1 , . . . , τF of M1 that
contain γ, choose vectors u1 , . . . , uF ∈ R2 such that the cones in R2
generated by ui , ui+1 form a fan in R2 and such that the union of these
cones is R2 when γ is an interior vertex. The vector ui is associated to the
edge τi , so that the sectors ui−1 , ui and ui , ui+1 define the gluing across
the edge τi at γ.
The transition map φi−1,i at γ = (0, 0) on the edge τi is constructed as:
J(0,0) (φi−1,i )t = S ◦ [ui , ui+1 ]−1 ◦ [ui−1 , ui ] ◦ S = ( 0 bi (0) ; 1 ai (0) ),
where S = ( 0 1 ; 1 0 ) (matrices written row by row) and [ui , uj ] is the matrix whose columns are the vectors ui
Fig. 3. The edge τ = (γ, γ ′ ) is associated to the vectors u0 and u1 at the points γ and γ ′ , respectively.
and uj , and |ui , uj | is the determinant of the vectors ui , uj . Thus,
ai (0) = |ui+1 , ui−1 | / |ui+1 , ui | ,   bi (0) = − |ui , ui−1 | / |ui+1 , ui | ,    (8)
so that ui−1 = ai (0) ui + bi (0) ui+1 . This implies that Condition 3.1 is satisfied.
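As a small numerical illustration (not part of the paper), formula (8) and the cyclic product of Condition 3.1 can be checked in Python for a concrete fan of vectors; the fan chosen here (the coordinate axes) is an invented example:

```python
def det(p, q):
    # determinant |p, q| of two vectors in R^2
    return p[0] * q[1] - p[1] * q[0]

def gluing_values(u_prev, u, u_next):
    # formula (8): the unique scalars a, b with u_prev = a*u + b*u_next
    d = det(u_next, u)
    return det(u_next, u_prev) / d, -det(u, u_prev) / d

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

# An invented fan of 4 vectors around an interior vertex: the coordinate axes.
fan = [(1.0, 0.0), (0.0, 1.0), (-1.0, 0.0), (0.0, -1.0)]
P = [[1.0, 0.0], [0.0, 1.0]]
for i in range(len(fan)):
    a, b = gluing_values(fan[i - 1], fan[i], fan[(i + 1) % len(fan)])
    # each factor is the matrix ( 0 1 ; b a ) appearing in Condition 3.1
    P = matmul(P, [[0.0, 1.0], [b, a]])
# Condition 3.1: the cyclic product P of these matrices is the identity.
```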
(2) For all the shared edges τ ∈ M1 , we define the functions aτ = aτ /cτ , bτ = bτ /cτ on the edges τ by interpolation as follows. Assume that the edge τ is associated to the vectors u0 and u1 , respectively at the end points γ and γ ′ corresponding to the parameters u = 0 and u = 1. Let us− , us+ ∈ R2 , s = 0, 1, be the vectors which define respectively the previous and next sectors adjacent to us at the points γ and γ ′ , see Figure 3. We define the gluing data so that it interpolates the corresponding values (8) at u = 0 and u = 1 as:
aτ (u) = |u0+ , u0− | d0 (u) + |u1+ , u1− | d1 (u)
bτ (u) = − |u0 , u0− | d0 (u) − |u1 , u1− | d1 (u)    (9)
cτ (u) = |u0+ , u0 | d0 (u) + |u1+ , u1 | d1 (u)
where d0 (u), d1 (u) are two Hermite interpolation functions at u = 0 and u = 1.
Since the derivatives of aτ , bτ , cτ vanish at u = 0 and u = 1, the
conditions (6) and (7) are automatically satisfied at an end point if it is
a crossing vertex.
Another possible construction, with a constant denominator c_τ(u) = 1,
is:
$$a_\tau(u) = \frac{|u_{0+}, u_{0-}|}{|u_{0+}, u_0|}\, d_0(u) + \frac{|u_{1+}, u_{1-}|}{|u_{1+}, u_1|}\, d_1(u)$$
$$b_\tau(u) = -\frac{|u_0, u_{0-}|}{|u_{0+}, u_0|}\, d_0(u) - \frac{|u_1, u_{1-}|}{|u_{1+}, u_1|}\, d_1(u) \qquad (10)$$
$$c_\tau(u) = 1$$
The construction (10) specializes to the symmetric gluing used for instance in
[Hah89, §8.2], [HBC08], [BH14]:
$$a_\tau = 2\cos\Big(\frac{2\pi}{n_0}\Big)\, d_0(u) - 2\cos\Big(\frac{2\pi}{n_1}\Big)\, d_1(u), \qquad b_\tau = -1, \qquad c_\tau = 1, \qquad (11)$$
where n_0 (resp. n_1) is the number of edges at the vertex γ_0 (resp. γ_1). It
corresponds to a symmetric gluing, where the angle between two consecutive edges
at γ_i is 2π/n_i.
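A minimal sketch of the symmetric gluing (11), assuming the standard cubic Hermite blends d_0(u) = (1−u)²(1+2u) and d_1(u) = u²(3−2u) (the text only requires Hermite interpolation functions at u = 0 and u = 1, so this particular choice is an assumption):

```python
import math

def d0(u): return (1 - u) ** 2 * (1 + 2 * u)  # d0(0)=1, d0(1)=0, d0'(0)=d0'(1)=0
def d1(u): return u ** 2 * (3 - 2 * u)        # d1(0)=0, d1(1)=1, d1'(0)=d1'(1)=0

def a_tau(u, n0, n1):
    """Symmetric gluing (11); b_tau = -1 and c_tau = 1 are constant."""
    return 2 * math.cos(2 * math.pi / n0) * d0(u) - 2 * math.cos(2 * math.pi / n1) * d1(u)
```

At u = 0 this gives 2 cos(2π/n_0), which is exactly the value prescribed by (8) for a symmetric fan of n_0 edges; at u = 1 it gives −2 cos(2π/n_1).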
4 Splines along an edge
The space S^{1,r}_k(M) of splines over the mesh M can be split into three
linearly independent components E_k, H_k, F_k (see Section 7), attached respectively to the vertices, edges and faces. The objective of this section is to give a
dimension formula for the component E_k(τ) attached to the edge τ and an
explicit basis, where τ is an interior edge, shared by two faces σ_1, σ_2 ∈ M_2.
We denote by Mτ the sub-mesh of M composed of the two faces σ_1, σ_2.
An important step is to analyse the space Syz^{r,r,r}_k(a, b, c) of syzygies over the
base ring U^r. The relation of this space with E_k(τ) and a basis of Syz^{r,r,r}_k(a, b, c)
are presented in Sections 4.1 and 4.2.
Next, in Section 4.3, we study the effect on E_k(τ) of the Taylor map at the
two end points of τ, and we determine when they can be separated by the
Taylor map.
Section 4.4 shows how to decompose the space S^{1,r}_k for the simple mesh
Mτ, using the Taylor maps at the end points of τ. The same technique will
be used to decompose the space S^{1,r}_k(M) for a general mesh M.
4.1 Relation with Syzygies
Given spline functions a, b, c ∈ U^s_l defining the gluing data across the edge
τ ∈ M, and (f_1, f_2) ∈ S^1_k(Mτ), from (3) we have that
$$A(u_1)\, a(u_1) + B(u_1)\, b(u_1) + C(u_1)\, c(u_1) = 0$$
where
$$A(u_1) = \frac{\partial f_2}{\partial v_2}(0, u_1) \in U^0_{k-1}, \qquad B(u_1) = \frac{\partial f_2}{\partial u_2}(0, u_1) \in U^1_k, \qquad C(u_1) = -\frac{\partial f_1}{\partial v_1}(u_1, 0) \in U^1_k.$$
These are the conditions imposed by the transition map across τ. According
to such data, if the topological surface Mτ consists of two faces with one
transition map along the shared edge τ, then any differentiable spline function
f = (f_1, f_2) over Mτ of bi-degree ≤ (k, k) is given by the formulas
$$f_1(u_1, v_1) = \big(N_1(v_1) + N_0(v_1)\big)\Big(a_0 + \int_0^{u_1} A(t)\,dt\Big) - \frac{1}{2k}\, N_1(v_1)\, C(u_1) + E_1(u_1, v_1) \qquad (12)$$
$$f_2(u_2, v_2) = \big(N_1(u_2) + N_0(u_2)\big)\Big(a_0 + \int_0^{v_2} A(t)\,dt\Big) + \frac{1}{2k}\, N_1(u_2)\, B(v_2) + E_2(u_2, v_2), \qquad (13)$$
since N_0(0) = 1, N_1(0) = 0, N_0′(0) = −2k, and N_1′(0) = 2k.
Here a_0 ∈ R, the functions E_i ∈ ker D_τ^{σ_i} for i = 1, 2, and A, B, C are spline
functions of degree at most k − 1, k, k and class C^0, C^1, C^1, respectively.
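The structure of (12)-(13) can be tested numerically. The sketch below assumes concrete basis functions N_0(t) = (1−2t)^k and N_1(t) = 2kt(1−2t)^{k−1} (any choice with the stated endpoint values would do), together with an illustrative syzygy (A, B, C) = (1, −u, 0) of the data (a, b, c) = (u², u, 1); finite differences then confirm that the transversal derivatives of f_1, f_2 along the edge satisfy A a + B b + C c = 0:

```python
import numpy as np

k = 4
# Assumed basis functions; only N0(0)=1, N1(0)=0, N0'(0)=-2k, N1'(0)=2k matter.
def N0(t): return (1 - 2 * t) ** k
def N1(t): return 2 * k * t * (1 - 2 * t) ** (k - 1)

# Illustrative gluing data and a syzygy: A*a + B*b + C*c = 0
a = lambda u: u ** 2
b = lambda u: u
c = lambda u: 1.0
A = lambda u: 1.0
B = lambda u: -u
C = lambda u: 0.0

a0 = 1.0
intA = lambda u: u  # primitive of A vanishing at 0

def f1(u1, v1):  # formula (12), with E1 = 0
    return (N1(v1) + N0(v1)) * (a0 + intA(u1)) - N1(v1) * C(u1) / (2 * k)

def f2(u2, v2):  # formula (13), with E2 = 0
    return N0(u2) * (a0 + intA(v2)) + N1(u2) * (a0 + intA(v2) + B(v2) / (2 * k))

def dd(f, x, h=1e-6):  # central finite difference
    return (f(x + h) - f(x - h)) / (2 * h)

for u in np.linspace(0.0, 0.4, 5):
    Au = dd(lambda t: f2(0.0, t), u)    # d f2 / d v2 at (0, u)
    Bu = dd(lambda t: f2(t, u), 0.0)    # d f2 / d u2 at (0, u)
    Cu = -dd(lambda t: f1(u, t), 0.0)   # -d f1 / d v1 at (u, 0)
    assert abs(a(u) * Au + b(u) * Bu + c(u) * Cu) < 1e-5
```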
For r_1, r_2, r_3, k ∈ N and a, b, c ∈ U^s_l, we denote
$$Syz^{r_1,r_2,r_3}_k(a, b, c) = \big\{(A, B, C) \in U^{r_1}_{k-1} \times U^{r_2}_k \times U^{r_3}_k \;\big|\; A\,a + B\,b + C\,c = 0\big\}.$$
We denote this vector space simply by Syz^{r_1,r_2,r_3}_k when a, b, c are implicitly
given.
By (12) and (13), the splines in S^1_k(Mτ) with a support along the edge τ are
in the image of the map
$$\Theta_\tau : \mathbb{R} \times Syz^{0,1,1}_k \to S^r_k(M_\tau)$$
$$(a_0, (A, B, C)) \mapsto \bigg( \Big(a_0 + \int_0^{u_1} A(t)\,dt\Big) N_0(v_1) + \Big(a_0 + \int_0^{u_1} A(t)\,dt - \frac{1}{2k} C(u_1)\Big) N_1(v_1),$$
$$\qquad N_0(u_2)\Big(a_0 + \int_0^{v_2} A(t)\,dt\Big) + N_1(u_2)\Big(a_0 + \int_0^{v_2} A(t)\,dt + \frac{1}{2k} B(v_2)\Big) \bigg). \qquad (14)$$
The classical results on the module of syzygies over polynomial rings described in
[MVV16] (see Proposition 4.3 therein) will be used in order to prove
the corresponding statements in the context of syzygies of spline functions.
First, we recall the notation and results concerning the polynomial case. Let
a, b, c be polynomials in R = R[u] such that gcd(a, c) = gcd(b, c) = 1;
then Z = Syz(a, b, c) is the R-module defined by Syz(a, b, c) = {(A, B, C) ∈
R[u]^3 : A a + B b + C c = 0}. The degree of an element of Syz(a, b, c) is defined
as deg(A, B, C) = max{deg(A), deg(B), deg(C)}, and we are interested in
studying the subspace Z_k ⊂ Syz(a, b, c) of elements of degree less than or
equal to k − 1. Let us denote n = max{deg(a), deg(b), deg(c)}, and set
e = 0 if min(n + 1 − deg(a), n − deg(b), n − deg(c)) = 0, and e = 1 otherwise.
Lemma 4.1 Using the notation above we have:
• Z is a free R[u]-module of rank 2.
• If µ and ν are the degrees of the two free generators of Syz(a, b, c), with µ
minimal, then µ + ν = n.
• dim Z_k = (k − µ + 1)_+ + (k − n + µ + e)_+, where t_+ = max(0, t) for any
t ∈ Z.
A basis of minimal degree corresponds to what is called a µ-basis in the
literature.
The proof of Lemma 4.1 can be found in [MVV16].
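Lemma 4.1 can be checked by brute force on a concrete example. The sketch below uses the illustrative data a = u², b = u, c = 1, for which µ = ν = 1, n = 2, e = 1, so the formula predicts dim Z_k = 2k; the code computes dim Z_k as the nullity of the linear map (A, B, C) ↦ Aa + Bb + Cc on coefficient vectors:

```python
import numpy as np

def poly_mult_matrix(p, deg_in, deg_out):
    """Matrix of q -> p*q, from coefficients of deg <= deg_in to deg <= deg_out."""
    M = np.zeros((deg_out + 1, deg_in + 1))
    for i, coef in enumerate(p):
        for j in range(deg_in + 1):
            if i + j <= deg_out:
                M[i + j, j] = coef
    return M

def syz_dim_bruteforce(a, b, c, k):
    """Nullity of (A, B, C) -> A*a + B*b + C*c with deg A <= k-1, deg B, C <= k."""
    n = max(len(a), len(b), len(c)) - 1
    M = np.hstack([poly_mult_matrix(a, k - 1, n + k),
                   poly_mult_matrix(b, k, n + k),
                   poly_mult_matrix(c, k, n + k)])
    return M.shape[1] - np.linalg.matrix_rank(M)

# a = u^2, b = u, c = 1 (coefficients in increasing degree): dim Z_k = 2k
for k in (2, 3, 5):
    assert syz_dim_bruteforce([0, 0, 1], [0, 1], [1], k) == 2 * k
```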
In the following we state the analogue of Lemma 4.1 in the context of syzygies
of spline functions. We consider Syz^{r,r,r}_k as defined above; it is the set of spline
functions (A, B, C) ∈ U^r_{k−1} × U^r_k × U^r_k such that A a + B b + C c = 0. An element
of Syz^{r,r,r}_k is a triple of pairs of polynomials ((A_1, A_2), (B_1, B_2), (C_1, C_2)). Let
R = R[u], R_k = {p ∈ R | deg(p) ≤ k}, Q^r = R/((2u − 1)^{r+1}) and Q^r_k =
R_k/((2u − 1)^{r+1}).
The elements f = (f_1, f_2) of U^{r+1}_k are pairs of polynomials f_1, f_2 ∈ R_k such
that f_1 − f_2 ≡ 0 mod (2u − 1)^{r+1}. Let a = (a_1, a_2), b = (b_1, b_2), c = (c_1, c_2) ∈
U^r with gcd(a_1, c_1) = gcd(a_2, c_2) = gcd(b_1, c_1) = gcd(b_2, c_2) = 1. We consider
the following sequence:
$$0 \longrightarrow Syz^{r,r,r}_k \longrightarrow Syz_{1,k} \times Syz_{2,k} \stackrel{\varphi}{\longrightarrow} Q^r_{k-1} \times Q^r_k \times Q^r_k \stackrel{\psi}{\longrightarrow} Q^r_{n_1+k} \longrightarrow 0 \qquad (15)$$
where Syz_{1,k} = Syz_k(a_1, b_1, c_1), Syz_{2,k} = Syz_k(a_2, b_2, c_2), and
• ψ(f, g, h) = a_1 f + b_1 g + c_1 h,
• φ(A, B, C) = (A_1 − A_2, B_1 − B_2, C_1 − C_2) (mod (2u − 1)^{r+1}).
Lemma 4.2 The sequence (15) is exact for k ≥ n_1 + r, where n_1 = max{deg(a_1),
deg(b_1), deg(c_1)}.
Proof. Since b_1, c_1 are coprime, the map (f, g, h) ∈ R_{k−1} × R_k × R_k ↦
a_1 f + b_1 g + c_1 h ∈ R_{n_1+k} is surjective for k ≥ n_1 − 1. The map ψ, obtained by
working modulo (2u − 1)^{r+1}, remains surjective.
We have to prove that ker(ψ) = Im(φ). If (A, B, C) ∈ Syz_1 × Syz_2, then
ψ ∘ φ(A, B, C) = (A_1 a_1 + B_1 b_1 + C_1 c_1) − (A_2 a_1 + B_2 b_1 + C_2 c_1) = −(A_2 a_1 +
B_2 b_1 + C_2 c_1). Because a, b, c ∈ U^r, we have a_1 ≡ a_2 (mod (2u − 1)^{r+1}), b_1 ≡ b_2
(mod (2u − 1)^{r+1}) and c_1 ≡ c_2 (mod (2u − 1)^{r+1}), so that
$$\psi \circ \varphi(A, B, C) \equiv -(A_2 a_2 + B_2 b_2 + C_2 c_2) \equiv 0 \pmod{(2u - 1)^{r+1}}.$$
This implies that Im(φ) ⊂ ker(ψ).
Conversely, if ψ(f, g, h) = 0 with deg(f) ≤ r, deg(g) ≤ r, deg(h) ≤ r, then
f a_1 + g b_1 + h c_1 = d (2u − 1)^{r+1} for some polynomial d ∈ R of degree ≤ n_1 − 1.
Since gcd(b_1, c_1) = 1, there exist p, q ∈ R_{n_1−1} such that d = p b_1 + q c_1; we
deduce that
$$(2u - 1)^{r+1} d = (2u - 1)^{r+1}(p\, b_1 + q\, c_1) = f a_1 + g b_1 + h c_1,$$
with deg((2u − 1)^{r+1} p) ≤ n_1 + r. This yields
$$f a_1 + \big(g - p(2u - 1)^{r+1}\big)\, b_1 + \big(h - (2u - 1)^{r+1} q\big)\, c_1 = 0. \qquad (16)$$
Since k ≥ n_1 + r, this implies that ((f, 0), (g − (2u − 1)^{r+1} p, 0), (h − (2u −
1)^{r+1} q, 0)) ∈ Syz_{1,k} × Syz_{2,k}, and its image by φ is (f, g, h). This shows that
ker(ψ) ⊂ Im(φ) and implies the equality of the two vector spaces.
By construction, the kernel of φ consists of the pairs ((A_1, B_1, C_1), (A_2, B_2, C_2))
in Syz_{1,k} × Syz_{2,k} such that A_1 − A_2 ≡ B_1 − B_2 ≡ C_1 − C_2 ≡ 0 (mod (2u −
1)^{r+1}), that is, the set Syz^{r,r,r}_k of triples (A, B, C) ∈ U^r_{k−1} × U^r_k × U^r_k such that
A a + B b + C c = 0.
This shows that the sequence (15) is exact. □
We deduce the dimension formula:
Proposition 4.3 Let (p_1, q_1) (resp. (p_2, q_2)) be a basis of Syz_1 (resp. Syz_2) of
minimal degree (µ_1, ν_1) (resp. (µ_2, ν_2)), and e_1, e_2 defined as above for (a_1, b_1, c_1)
and (a_2, b_2, c_2). For k ≥ min(n_1, n_2) + r,
$$\dim(Syz^{r,r,r}_k) = (k - \mu_1 + 1)_+ + (k - n_1 + \mu_1 + e_1)_+ + (k - \mu_2 + 1)_+ + (k - n_2 + \mu_2 + e_2)_+ - \min(r + 1, k) - (r + 1).$$
This dimension is denoted d_τ(k, r).
Proof. By symmetry, we may assume that n_1 = min(n_1, n_2). For k ≥ n_1 + r,
the sequence (15) is exact and we have
$$\dim Syz^{r,r,r}_k = \dim Syz_{1,k} + \dim Syz_{2,k} - \dim Q^r_{k-1} - 2 \dim Q^r_k + \dim Q^r_{n_1+k}.$$
We have dim Q^r_{k−1} = min(r + 1, k) and dim Q^r_k = dim Q^r_{n_1+k} = r + 1, since
k ≥ n_1 + r. This leads to the formula, using Lemma 4.1. □
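The dimension formula of Proposition 4.3 is straightforward to evaluate; in the sketch below the values of µ_i, n_i, e_i are placeholders for data computed from an actual µ-basis:

```python
def pos(t):
    return max(0, t)

def d_tau(k, r, mu1, n1, e1, mu2, n2, e2):
    """d_tau(k, r) of Proposition 4.3 (valid for k >= min(n1, n2) + r)."""
    return (pos(k - mu1 + 1) + pos(k - n1 + mu1 + e1)
            + pos(k - mu2 + 1) + pos(k - n2 + mu2 + e2)
            - min(r + 1, k) - (r + 1))
```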
4.2 Basis of the syzygy module
The diagram (15) allows us to construct a basis for the space of syzygies Syz^{r,r,r}_k
associated to the gluing data a, b, c ∈ U^r. In the rest of this section we
show how to construct such a basis.
Lemma 4.4 Assume that k ≥ n_1 + r. Using the notation of Proposition 4.3,
we have the following assertions:
• For any p_2 ∈ Syz_{2,k}, there exists p_1 ∈ Syz_{1,k} such that (p_1, p_2) ∈ ker(φ).
• There exist t, s ∈ N such that if G = {(p_1 (2u−1)^i, 0) : 0 ≤ i ≤ t} ∪ {(q_1 (2u−
1)^j, 0) : 0 ≤ j ≤ s}, then φ(G) is a basis of the vector space ker(ψ).
• ker(φ) ⊕ ⟨G⟩ = Syz_{1,k} × Syz_{2,k}.
Proof. Let p_2 = (A_2, B_2, C_2) ∈ Syz_{2,k}. As φ((0, p_2)) = (f, g, h) is in ker(ψ)
(since ψ ∘ φ = 0), we can construct p_1 ∈ Syz_{1,k} such that φ((p_1, 0)) = φ((0, p_2)),
as we did in the proof of Lemma 4.2 for (f, g, h) ∈ ker(ψ) using relation
(16). This gives an element of the form (p_1, 0) ∈ Syz_{1,k} × {0}, and finally
(p_1, p_2) ∈ ker(φ); this proves the first point.
The second point follows from the fact that φ(Syz_{1,k} × {0}) = ker(ψ) (since
by Lemma 4.2 the sequence (15) is exact) and that {(p_1 (2u − 1)^i, 0) : i ≤
k − µ_1} ∪ {(q_1 (2u − 1)^j, 0) : j ≤ k − ν_1} is a basis of Syz_{1,k} × {0} as a vector
space; thus the image of this basis is a generating set for ker(ψ). Since ker(ψ) is an
R-module, it has a basis as described in the second point of this lemma.
The third point is a direct consequence of the second one. □
Considering the map in (15), the first point of the lemma has an intuitive
meaning: any function defined on a part of Mτ that satisfies the gluing
conditions imposed by a_1, b_1, c_1 can be extended to a function over Mτ that
satisfies the gluing conditions a, b, c. The third point allows us to define the
projection π^r_1 of an element on ker(φ) along ⟨G⟩.
Let (p̃_2, p_2), (q̃_2, q_2) be the projections of (0, p_2) and (0, q_2) by π^r_1, respectively. We denote:
• Z^r_1 = {(0, (2u − 1)^i p_2) : r + 1 ≤ i ≤ k − µ_2}
• Z^r_2 = {(0, (2u − 1)^i q_2) : r + 1 ≤ i ≤ k − ν_2}
• Z^r_3 = {((2u − 1)^i q_1, 0) : r + 1 ≤ i ≤ k − ν_1}
• Z^r_4 = {((2u − 1)^i p_1, 0) : r + 1 ≤ i ≤ k − µ_1}
• Z^r_5 = {(2u − 1)^i (p̃_2, p_2) : 0 ≤ i ≤ r}
• Z^r_6 = {(2u − 1)^i (q̃_2, q_2) : 0 ≤ i ≤ r}
• Z^r = Z^r_1 ∪ Z^r_2 ∪ Z^r_3 ∪ Z^r_4 ∪ Z^r_5 ∪ Z^r_6
Proposition 4.5 Using the notation above we have the following:
• The set Z^r is a basis of the vector space Syz^{r,r,r}_k.
• The set Y = {(0, (2u − 1)^{r+1} p_2), (0, (2u − 1)^{r+1} q_2), (p̃_2, p_2), (q̃_2, q_2), ((2u −
1)^{r+1} q_1, 0), ((2u − 1)^{r+1} p_1, 0)} is a generating set of the R-module Syz^{r,r,r}.
Proof. The cardinality of Z^r is equal to the dimension of Syz^{r,r,r}_k, so we have to
prove that it is a linearly independent set. Let a = (a_i), b = (b_i), c = (c_i), d = (d_i), e = (e_i),
f = (f_i), for i ∈ {0, . . . , k}, be sets of coefficients. Suppose that:
$$0 = \sum_{i=0}^{r} a_i (2u - 1)^i (\tilde p_2, p_2) + \sum_{i=0}^{r} b_i (2u - 1)^i (\tilde q_2, q_2) + \sum_{i=0}^{k-r-\nu_1} c_i \big((2u - 1)^{i+r+1} q_1, 0\big)$$
$$\qquad + \sum_{i=0}^{k-r-\mu_1} e_i \big((2u - 1)^{i+r+1} p_1, 0\big) + \sum_{i=0}^{k-r-\mu_2} d_i \big(0, (2u - 1)^{r+i+1} p_2\big) + \sum_{i=0}^{k-r-\nu_2} f_i \big(0, (2u - 1)^{i+r+1} q_2\big).$$
Then we have the following equations:
$$0 = \sum_{i=0}^{r} a_i (2u - 1)^i \tilde p_2 + \sum_{i=0}^{r} b_i (2u - 1)^i \tilde q_2 + \sum_{i=0}^{k-r-\nu_1} c_i (2u - 1)^{r+1+i} q_1 + \sum_{i=0}^{k-r-\mu_1} e_i (2u - 1)^{r+1+i} p_1 \qquad (17)$$
$$0 = \sum_{i=0}^{r} a_i (2u - 1)^i p_2 + \sum_{i=0}^{r} b_i (2u - 1)^i q_2 + \sum_{i=0}^{k-r-\mu_2} d_i (2u - 1)^{r+1+i} p_2 + \sum_{i=0}^{k-r-\nu_2} f_i (2u - 1)^{r+1+i} q_2 \qquad (18)$$
We know that p_2 and q_2 are free generators of Syz_2; by (18) this means that all
the coefficients a_i, b_i, d_i, f_i appearing in the equation are zero. Substituting
into equation (17), we get in the same way that the other coefficients c_i, e_i
are zero, so the set is linearly independent. Finally, since the set Y does not change
when k changes, Y generates Syz^{r,r,r}. □
We have similar results if we proceed in a symmetric way, exchanging the roles
of the first and second polynomial components of the spline functions. The
corresponding basis of Syz^{r,r,r}_k is denoted Z'^r and the generating set of the
R-module is
$$Y' = \big\{\big(0, (2u - 1)^{r+1} p_2\big), \big(0, (2u - 1)^{r+1} q_2\big), (p_1, \tilde p_1), (q_1, \tilde q_1), \big((2u - 1)^{r} q_1, 0\big), \big((2u - 1)^{r} p_1, 0\big)\big\}.$$
It remains to compute the dimension and a basis for Syz^{r−1,r,r}_k; we deduce
them from those of Syz^{r−1,r−1,r−1}_k and Syz^{r,r,r}_k, and they will depend on the
gluing data, as we explain in the following.
Proposition 4.6
• If a(1/2) ≠ 0 then Syz^{r,r,r}_k = Syz^{r−1,r,r}_k; otherwise we have that dim(Syz^{r−1,r,r}_k) =
dim(Syz^{r,r,r}_k) + 1.
• In the second case, an element of Syz^{r−1,r,r}_k \ Syz^{r,r,r}_k is of the form α(2u −
1)^r (0, p_2) + β(2u − 1)^r (0, q_2), with α, β ∈ R.
For the proof of this proposition we need the following lemma, which can be
proven in exactly the same way as Proposition 4.5 above.
Lemma 4.7 The set Z̃^{r−1} = Z'^r ∪ {(2u − 1)^r (0, p_2), (2u − 1)^r (0, q_2)} is a basis
of Syz^{r−1,r−1,r−1}_k.
Proof.[Proof of Proposition 4.6.] We denote p_i = (p_i^1, p_i^2, p_i^3) and q_i = (q_i^1, q_i^2, q_i^3),
where the p_i^j and q_i^j are polynomials. Suppose that there exists (A, B, C) ∈ Syz^{r−1,r,r}_k \
Syz^{r,r,r}_k; then by the previous lemma we can choose (A, B, C) = α(2u −
1)^r (0, p_2) + β(2u − 1)^r (0, q_2) with α, β ∈ R, that is:
$$A = \alpha\big(0, (2u - 1)^r p_2^1\big) + \beta\big(0, (2u - 1)^r q_2^1\big)$$
$$B = \alpha\big(0, (2u - 1)^r p_2^2\big) + \beta\big(0, (2u - 1)^r q_2^2\big)$$
$$C = \alpha\big(0, (2u - 1)^r p_2^3\big) + \beta\big(0, (2u - 1)^r q_2^3\big)$$
But since B, C ∈ U^r, we deduce that
$$(2u - 1)^{r+1} \text{ divides } B_2 - B_1 = (2u - 1)^r (\alpha p_2^2 + \beta q_2^2)$$
$$(2u - 1)^{r+1} \text{ divides } C_2 - C_1 = (2u - 1)^r (\alpha p_2^3 + \beta q_2^3)$$
This means that
$$\alpha p_2^2(\tfrac12) + \beta q_2^2(\tfrac12) = 0, \qquad \alpha p_2^3(\tfrac12) + \beta q_2^3(\tfrac12) = 0.$$
As the determinant of this system is exactly p_2^2(1/2) q_2^3(1/2) − p_2^3(1/2) q_2^2(1/2) = a(1/2),
we deduce the two points of the proposition. □
Lemma 4.7 implies the following proposition:
Proposition 4.8 The dimension of Syz^{r−1,r,r}_k is d̃_τ(k, r) = d_τ(k, r) + δ_τ, with
δ_τ = 1 if a(1/2) = 0 and δ_τ = 0 otherwise.
4.3 Separation of vertices
We analyze now the separability of the spline functions on an edge, that is,
when the Taylor maps at the vertices separate the spline functions.
Let f = (f_1, f_2) ∈ R(σ_1) ⊕ R(σ_2) be of the form f_i(u_i, v_i) = p_i + q_i u_i + q̃_i v_i +
s_i u_i v_i + r_i u_i² + r̃_i v_i² + · · ·. Then
$$T_\gamma(f) = [p_1, q_1, \tilde q_1, s_1, p_2, q_2, \tilde q_2, s_2].$$
If f = (f_1, f_2) ∈ S^r_k(Mτ), then taking the Taylor expansion of the gluing
condition (3) centered at u_1 = 0 yields
$$\tilde q_1 + s_1 u_1 = \big(a(0) + a'(0) u_1 + \cdots\big)\big(\tilde q_2 + 2 \tilde r_2 u_1 + \cdots\big) + \big(b(0) + b'(0) u_1 + \cdots\big)\big(q_2 + s_2 u_1 + \cdots\big). \qquad (19)$$
Combining (19) with (2) yields
$$p_1 = p_2, \qquad q_1 = \tilde q_2, \qquad r_1 = \tilde r_2,$$
$$\tilde q_1 = a(0)\, \tilde q_2 + b(0)\, q_2,$$
$$s_1 = 2\, a(0)\, \tilde r_2 + b(0)\, s_2 + a'(0)\, \tilde q_2 + b'(0)\, q_2.$$
Let H(γ) be the linear space spanned by the vectors [p_1, q_1, q̃_1, s_1, p_2, q_2, q̃_2, s_2]
which are solutions of these equations.
If a(0) ≠ 0, it is a space of dimension 5; otherwise its dimension is 4. Thus
dim H(γ) = 5 − c_τ(γ).
In the next proposition we use the notation of the previous section.
Proposition 4.9 For k ≥ ν_1 + 1 we have T_γ(S^{1,r}_k(Mτ)) = H(γ). In particular,
dim(T_γ(S^{1,r}_k(Mτ))) = 5 − c_τ(γ).
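Before the proof, the claim dim H(γ) = 5 − c_τ(γ) can be verified numerically: treating (p_2, q_2, q̃_2, s_2, r̃_2) as free parameters and applying the C¹ relations above, the span of the resulting vectors has rank 5 when a(0) ≠ 0 and rank 4 otherwise. The gluing values passed below are arbitrary test data:

```python
import numpy as np

def H_dim(a0, b0, da0, db0):
    """Rank of H(gamma) as (p2, q2, qt2, s2, rt2) range over a basis of R^5;
    a0 = a(0), b0 = b(0), da0 = a'(0), db0 = b'(0)."""
    rows = []
    for p2, q2, qt2, s2, rt2 in np.eye(5):
        p1, q1 = p2, qt2
        qt1 = a0 * qt2 + b0 * q2
        s1 = 2 * a0 * rt2 + b0 * s2 + da0 * qt2 + db0 * q2
        rows.append([p1, q1, qt1, s1, p2, q2, qt2, s2])
    return np.linalg.matrix_rank(np.array(rows))
```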
Proof. By construction we have T_γ(S^{1,r}_k(Mτ)) ⊂ H(γ). Let us prove that
they have the same dimension. If (A, B, C) ∈ Syz^{r,r,r}_k with A = (A_1, A_2), B =
(B_1, B_2), C = (C_1, C_2), then (A_1, B_1, C_1) is an element of the R-module spanned
by p_1 = (p_1^1, p_1^2, p_1^3) and q_1 = (q_1^1, q_1^2, q_1^3), i.e. (A, B, C) = a_1 ((1 − 2u)^{r+1} p_1, 0) +
P (p_1, p̃_1) + Q (q_1, q̃_1). Let f = (f_1, f_2) = Θ_τ(a_0, (A, B, C)) (see (14)); then it is
easy to see that:
$$T_\gamma(f) = \begin{bmatrix} f_1(\gamma) \\ \partial_{u_1} f_1(\gamma) \\ \partial_{u_2} f_2(\gamma) \\ -\partial_{v_1} f_1(\gamma) \\ \partial_{u_2}\partial_{v_2} f_2(\gamma) \\ -\partial_{u_1}\partial_{v_1} f_1(\gamma) \end{bmatrix} = \begin{bmatrix} 1 & 0 & 0 & 0 & 0 & 0 \\ 0 & p_1^1(0) & p_1^1(0) & q_1^1(0) & 0 & 0 \\ 0 & p_1^2(0) & p_1^2(0) & q_1^2(0) & 0 & 0 \\ 0 & p_1^3(0) & p_1^3(0) & q_1^3(0) & 0 & 0 \\ 0 & p_1^{2\,\prime}(0) - 2(r+1) p_1^2(0) & p_1^{2\,\prime}(0) & q_1^{2\,\prime}(0) & p_1^2(0) & q_1^2(0) \\ 0 & p_1^{3\,\prime}(0) - 2(r+1) p_1^3(0) & p_1^{3\,\prime}(0) & q_1^{3\,\prime}(0) & p_1^3(0) & q_1^3(0) \end{bmatrix} \begin{bmatrix} a_0 \\ a_1 \\ P(0) \\ Q(0) \\ P'(0) \\ Q'(0) \end{bmatrix} \qquad (20)$$
The second column of the matrix is linearly dependent on the third and fifth
columns. Using the same argument as in the proof of [MVV16, Proposition
4.7] on the first and last four columns of this matrix, we prove that its rank
is 5 − c_τ(γ). By taking P, Q ∈ R_1 of degree ≤ 1, which implies that k ≥
max(deg(P p_1), deg(Q q_1)) = ν_1 + 1, the vector [a_0, P(0), Q(0), P'(0), Q'(0)]
can take all the values of R^5, and we have T_γ(S^{1,r}_k(Mτ)) = H(γ). This ends
the proof. □
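The rank argument can be replayed numerically: with generic values for the p_1^j(0), q_1^j(0) and their derivatives (arbitrary test data below), the matrix of (20) has rank 5, and its second column equals the third column minus 2(r+1) times the fifth:

```python
import numpy as np

def M_gamma(r, p, dp, q, dq):
    """Matrix of (20); p = (p1^1(0), p1^2(0), p1^3(0)), dp the derivatives,
    and similarly q, dq (illustrative values, not actual mu-basis data)."""
    p1, p2, p3 = p; dp1, dp2, dp3 = dp
    q1, q2, q3 = q; dq1, dq2, dq3 = dq
    return np.array([
        [1, 0, 0, 0, 0, 0],
        [0, p1, p1, q1, 0, 0],
        [0, p2, p2, q2, 0, 0],
        [0, p3, p3, q3, 0, 0],
        [0, dp2 - 2 * (r + 1) * p2, dp2, dq2, p2, q2],
        [0, dp3 - 2 * (r + 1) * p3, dp3, dq3, p3, q3],
    ])

M = M_gamma(1, (1, 2, 3), (1, 1, 1), (1, 1, 2), (0, 1, 0))
```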
We consider now the separability of the Taylor map at the two end points
γ, γ′.
Proposition 4.10 Assume that k ≥ max(ν_1 + 2, ν_2 + 2, µ_1 + r + 1, µ_2 + r + 1).
Then T_{γ,γ′}(S^{1,r}_k(Mτ)) = (H(γ), H(γ′)) and dim T_{γ,γ′}(S^{1,r}_k(Mτ)) = 10 − c_τ(γ) −
c_τ(γ′).
Proof. The inclusion T_{γ,γ′}(S^{1,r}_k(Mτ)) ⊆ (H(γ), H(γ′)) is clear by construction.
For the converse, we show that the image of T_{γ,γ′} ∘ Θ_τ contains (H(γ), 0);
then by symmetry (0, H(γ′)) is also in the image of T_{γ,γ′} ∘ Θ_τ.
Let f = (f_1, f_2) = Θ_τ(a_0, (A, B, C)) ∈ S^{1,r}_k(Mτ) with (A, B, C) = a_1 ((1 −
2u)^{r+1} p_1, 0) + P (p_1, p̃_1) + Q (q_1, q̃_1) and P, Q ∈ U^r_2. The image of f by T_γ is of
the form (20). The image of f by T_{γ′} is of the form
$$T_{\gamma'}(f) = \begin{bmatrix} f_1(\gamma') \\ \partial_{u_1} f_1(\gamma') \\ \partial_{u_2} f_2(\gamma') \\ -\partial_{v_1} f_1(\gamma') \\ \partial_{u_2}\partial_{v_2} f_2(\gamma') \\ -\partial_{u_1}\partial_{v_1} f_1(\gamma') \end{bmatrix} = \begin{bmatrix} 1 & t_1 & 0 & 0 & 0 & 0 \\ 0 & 0 & \tilde p_1^1(1) & \tilde q_1^1(1) & 0 & 0 \\ 0 & 0 & \tilde p_1^2(1) & \tilde q_1^2(1) & 0 & 0 \\ 0 & 0 & \tilde p_1^3(1) & \tilde q_1^3(1) & 0 & 0 \\ 0 & 0 & \tilde p_1^{2\,\prime}(1) & \tilde q_1^{2\,\prime}(1) & \tilde p_1^2(1) & \tilde q_1^2(1) \\ 0 & 0 & \tilde p_1^{3\,\prime}(1) & \tilde q_1^{3\,\prime}(1) & \tilde p_1^3(1) & \tilde q_1^3(1) \end{bmatrix} \begin{bmatrix} a_0 \\ a_1 \\ P(1) \\ Q(1) \\ P'(1) \\ Q'(1) \end{bmatrix} + \begin{bmatrix} L_1(P) + L_2(Q) \\ 0 \\ 0 \\ 0 \\ 0 \\ 0 \end{bmatrix}$$
with $t_1 = \int_0^1 (1 - 2u)^{r+1} p_1^1 \, du$, $L_1(P) = \int_0^1 P\, \tilde p_1^1 \, du$, $L_2(Q) = \int_0^1 Q\, \tilde q_1^1 \, du$.
By choosing P(1) = P'(1) = Q(1) = Q'(1) = 0 and a_0 + t_1 a_1 = 0, we have an
element in the kernel of this matrix. By choosing a_0, P(0), P'(0), Q(0), Q'(0)
and a_1 such that a_0 + t_1 a_1 + L_1(P) + L_2(Q) = 0, we can find a solution to
the system (20) for any f ∈ S_k(Mτ). Therefore, constructing spline coefficients P, Q ∈ U^r_2 which interpolate prescribed values and derivatives at 0 and 1,
we can construct spline functions f ∈ S_k(Mτ) such that T_γ(f) spans H(γ)
and T_{γ′}(f) = 0. The degree of the spline is k ≥ max(ν_1 + 2, µ_1 + r + 1). By
symmetry, for k ≥ max(ν_2 + 2, µ_2 + r + 1), we have (0, H(γ′)) ⊂ T_{γ,γ′}(S^1_k(Mτ)),
which concludes the proof. □
Definition 4.11 The separability s(τ) of the edge τ is the minimal k such
that T_{γ,γ′}(S^{1,r}_k(Mτ)) = (T_γ(S^{1,r}_k(Mτ)), T_{γ′}(S^{1,r}_k(Mτ))).
The previous proposition shows that s(τ) ≤ max(ν_1 + 2, ν_2 + 2, µ_1 + r + 1, µ_2 + r + 1).
4.4 Decompositions and dimension
Let τ ∈ M_1 be an interior edge shared by the cells σ_0, σ_1 ∈ M_2. The Taylor
map along the edge τ of Mτ is
$$D_\tau : R_k(\sigma_0) \oplus R_k(\sigma_1) \to R_k(\sigma_0) \oplus R_k(\sigma_1), \qquad (f_0, f_1) \mapsto \big(D_\tau^{\sigma_0}(f_0), D_\tau^{\sigma_1}(f_1)\big).$$
Its image is the set of splines of R^r_k(σ_0) ⊕ R^r_k(σ_1) with support along τ. The kernel is the set of splines of R^r_k(σ_0) ⊕ R^r_k(σ_1) with vanishing b-spline coefficients
along the edge τ. The elements of ker(D_τ) are smooth splines in S^r_k(Mτ). Let
W_k(τ) = D_τ(S^r_k(Mτ)). It is the set of splines in S^r_k(Mτ) with a support along
τ. As D_τ is a projector, we have the decomposition
$$S^r_k(M_\tau) = \ker(D_\tau) \oplus W_k(\tau). \qquad (21)$$
From the relations (12) and (13), we deduce that W_k(τ) = Im Θ_τ. Since Θ_τ is
injective, dim(W_k(τ)) = dim Syz^{r,r,r}_{k−1} + 1 = d_τ(k, r) + 1, and W_k(τ) ≠ {0}
when k ≥ µ_1 and k ≥ µ_2 (Lemma 4.1).
The map T_{γ,γ′} defined in Section 2.3 induces the exact sequence
$$0 \to K_k(\tau) \to S^{1,r}_k(M_\tau) \stackrel{T_{\gamma,\gamma'}}{\longrightarrow} H(\tau) \to 0 \qquad (22)$$
where K_k(τ) = ker(T_{γ,γ′}) and H(τ) = T_{γ,γ′}(S^{1,r}_k(Mτ)).
Definition 4.12 For an interior edge τ ∈ M°_1, let E_k(τ) = ker(T_{γ,γ′}) ∩
W_k(τ) = ker(T_{γ,γ′}) ∩ Im D_τ be the set of splines in S^r_k(Mτ) with their support
along τ and with vanishing Taylor expansions at γ and γ′. For a boundary
edge τ′ = (γ, γ′), which belongs to a face σ, we also define E_k(τ′) as the set
of elements of R^r_k(σ) with their support along τ′ and with vanishing Taylor
expansions at γ and γ′.
Notice that the elements of E_k(τ) have their support along τ and that their
Taylor expansions at γ and γ′ vanish. Therefore, their Taylor expansions along
all (boundary) edges of Mτ distinct from τ also vanish.
As ker(Dτ ) ⊂ Kk (τ ), we have the decomposition
Kk (τ ) = ker(Dτ ) ⊕ Ek (τ ).
(23)
We deduce the following result
Lemma 4.13 For an interior edge τ ∈ M°_1 and for k ≥ s(τ), the dimension
of E_k(τ) is
$$\dim E_k(\tau) = \tilde d_\tau(k, r) - 9 + c_\tau(\gamma) + c_\tau(\gamma').$$
Proof. From the relations (21), (22) and (23), we have
$$\dim E_k(\tau) = \dim K_k(\tau) - \dim \ker(D_\tau) = \dim S^{1,r}_k(M_\tau) - \dim H(\tau) - \dim S^{1,r}_k(M_\tau) + \dim W_k(\tau) = \dim W_k(\tau) - \dim H(\tau),$$
which gives the formula using Proposition 4.10. □
Remark 4.14 When τ is a boundary edge, which belongs to the face σ ∈ M2 ,
we have Skr (Mτ ) = Rrk (σ) and dim Ek (τ ) = 2(m + 1) − 8 = 4k − 2r − 6.
4.5 Basis functions associated to an edge
Suppose that B^r_k = {β_i}_{i=0,...,l}, with l = dim Syz^{r−1,r,r}_{k−1} and β_i = (β_i^1, β_i^2, β_i^3), is a
basis of Syz^{r−1,r,r}_{k−1}. We know also that E_k = {f = Θ_τ(a_0, (A, B, C)) : T_{γ,γ′}(f) =
0, (A, B, C) ∈ Syz^{r−1,r,r}_{k−1}}, and we have:
$$T_{\gamma,\gamma'}(f) = \begin{bmatrix} T_\gamma \\ T_{\gamma'} \end{bmatrix} = \begin{bmatrix} a_0,\; A(0),\; -C(0),\; -C'(0),\; a_0,\; B(0),\; A(0),\; B'(0) \\ a_0 + \int_0^1 A(u)\,du,\; A(1),\; -C(1),\; C'(1),\; a_0 + \int_0^1 A(u)\,du,\; B(1),\; A(1),\; B'(1) \end{bmatrix}$$
Suppose that $(A, B, C) = \big(\sum_i b_i \beta_i^1, \sum_i b_i \beta_i^2, \sum_i b_i \beta_i^3\big)$ with $b_i \in \mathbb{R}$; then $T_{\gamma,\gamma'}(f) = 0$ is equivalent to the system:
$$a_0 = 0, \qquad \sum_i b_i \beta_i^1(0) = 0, \qquad \sum_i b_i \beta_i^2(0) = 0, \qquad \sum_i b_i \beta_i^3(0) = 0,$$
$$\sum_i b_i \beta_i^{2\,\prime}(0) = 0, \qquad \sum_i b_i \beta_i^{3\,\prime}(0) = 0, \qquad \sum_i b_i \int_0^1 \beta_i^1(t)\,dt = -a_0, \qquad (24)$$
$$\sum_i b_i \beta_i^1(1) = 0, \qquad \sum_i b_i \beta_i^2(1) = 0, \qquad \sum_i b_i \beta_i^3(1) = 0, \qquad \sum_i b_i \beta_i^{2\,\prime}(1) = 0, \qquad \sum_i b_i \beta_i^{3\,\prime}(1) = 0.$$
The system (24) directly depends on the gluing data (1) along the edge via
equations (12) and (13), see Section 4.1 above. An explicit solution requires
the computation of a basis for the syzygy module, which is constructed in
Section 4.2. The image by Θτ (defined in (14)) of a basis of the solutions of
this system yields a basis of Ek .
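A sketch of how the linear system (24) can be assembled from a candidate basis {β_i} of the syzygy space (each β_i given as three coefficient lists in the monomial basis; taking a_0 = 0, the nullspace of the resulting matrix gives the coefficients b_i, and applying Θ_τ yields the basis of E_k). The sample triple below is hypothetical test data chosen so that all conditions are satisfied:

```python
import numpy as np
from numpy.polynomial import polynomial as P

def condition_matrix(betas):
    """Rows of system (24) in the unknowns b_i, taking a_0 = 0.
    Each beta in betas is a triple of coefficient lists (increasing degree)."""
    ev = lambda p, x: P.polyval(x, p)               # value at x
    dv = lambda p, x: P.polyval(x, P.polyder(p))    # derivative at x
    integ = lambda p: P.polyval(1.0, P.polyint(p))  # integral over [0, 1]
    conds = [
        lambda b1, b2, b3: ev(b1, 0.0),  lambda b1, b2, b3: ev(b2, 0.0),
        lambda b1, b2, b3: ev(b3, 0.0),  lambda b1, b2, b3: dv(b2, 0.0),
        lambda b1, b2, b3: dv(b3, 0.0),  lambda b1, b2, b3: integ(b1),
        lambda b1, b2, b3: ev(b1, 1.0),  lambda b1, b2, b3: ev(b2, 1.0),
        lambda b1, b2, b3: ev(b3, 1.0),  lambda b1, b2, b3: dv(b2, 1.0),
        lambda b1, b2, b3: dv(b3, 1.0),
    ]
    return np.array([[cond(*beta) for beta in betas] for cond in conds])

# Hypothetical triple: (u(1-u)(1-2u), u^2(1-u)^2, u^2(1-u)^2); it satisfies
# every condition, so the nullspace is one-dimensional.
beta = ([0, 1, -3, 2], [0, 0, 1, -2, 1], [0, 0, 1, -2, 1])
M = condition_matrix([beta])
```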
5 Splines around a vertex
In this section, we analyse the spline functions attached to a vertex, that is,
the spline functions whose Taylor expansions along the edges around the vertex
vanish. We analyse the image of this space under the Taylor map at the vertex,
and construct a set of linearly independent spline functions whose images span
the image of the Taylor map. These form the set of basis functions attached
to the vertex.
Let us consider a topological surface Mγ composed of quadrilateral faces
σ_1, . . . , σ_{F(γ)} sharing a single vertex γ, and such that the faces σ_i and σ_{i−1}
have a common edge τ_i = (γ, δ_i), for i = 2, . . . , F(γ). If γ is an interior vertex,
then we identify the indices modulo F(γ) and τ_1 is the common edge of σ_{F(γ)}
and σ_1, see Fig. 4.
The gluing data attached to the edge τ_i will be denoted by a_i = a_{τ_i}/c_{τ_i},
b_i = b_{τ_i}/c_{τ_i}. By a change of coordinates we may assume that γ is at the origin
(0, 0) and the edge τ_i is on the line v_i = 0, where (u_{i−1}, v_{i−1}) and (u_i, v_i)
are the coordinate systems associated to σ_{i−1} and σ_i, respectively. Then the
Fig. 4. Topological surface Mγ composed of F(γ) = 5 quadrilateral faces glued
around the vertex γ.
transition map at γ across τ_i from σ_i to σ_{i−1} is given by
$$\varphi_{\tau_i} : (u_i, v_i) \mapsto \big(v_i\, b_i(u_i),\; u_i + v_i\, a_i(u_i)\big);$$
following the notation in (1), we have φ_{τ_i} = φ_{i−1,i}.
The restriction along the boundary edges of Mγ is defined by
$$D_\gamma : \bigoplus_{i=1}^{F(\gamma)} R(\sigma_i) \to \bigoplus_{\substack{\tau \in \partial M_\gamma \\ \tau \not\ni \gamma}} R_{\sigma_i}(\tau), \qquad (f_i)_{i=1}^{F(\gamma)} \mapsto \big(D_\tau^{\sigma_i}(f_i)\big)_{\tau \not\ni \gamma}$$
where D_τ^{σ_i} is the Taylor expansion along τ on σ_i, see Section 2.3.
Let V_k(γ) be the set of spline functions of degree ≤ k on Mγ that vanish at
the first order derivatives along the boundary edges:
$$V_k(\gamma) = \ker D_\gamma \cap S^1_k(M_\gamma). \qquad (25)$$
The gluing data and the differentiability conditions in (3) lead to conditions
on the coefficients of the Taylor expansion of f_i, namely
$$f_i(u_i, v_i) = p + q_i u_i + q_{i+1} v_i + s_i u_i v_i + r_i u_i^2 + r_{i+1} v_i^2 + \cdots \qquad (26)$$
with p, q_i, s_i, r_i ∈ R, and for i = 2, . . . , F the following two conditions are
satisfied:
$$q_{i+1} = a_i(0)\, q_i + b_i(0)\, q_{i-1} \qquad (27)$$
$$s_i = 2\, a_i(0)\, r_i + b_i(0)\, s_{i-1} + a_i'(0)\, q_i + b_i'(0)\, q_{i-1}. \qquad (28)$$
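Conditions (27)-(28) propagate the first-order data around the vertex. A small sketch, using hypothetical gluing values for a symmetric valence-4 crossing vertex (a_i(0) = 0, b_i(0) = −1, vanishing derivatives):

```python
def propagate(q1, q2, s1, r, glue):
    """Compute q_{i+1} and s_i for i = 2..F from (27)-(28).
    glue[i-2] = (a_i(0), b_i(0), a_i'(0), b_i'(0)); r[i-2] = r_i."""
    q = [q1, q2]
    s = [s1]
    for (a, b, da, db), ri in zip(glue, r):
        qi, qim1 = q[-1], q[-2]
        q.append(a * qi + b * qim1)                             # (27)
        s.append(2 * a * ri + b * s[-1] + da * qi + db * qim1)  # (28)
    return q, s
```

For the symmetric crossing data the sequence of q's closes up around the vertex (q_5 = q_1), consistent with the compatibility discussed below.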
Let H(γ) be the space spanned by the vectors h = [p, q_1, . . . , q_{F(γ)}, s_1, . . . , s_{F(γ)}]
such that p, q_1, . . . , q_{F(γ)}, s_1, . . . , s_{F(γ)}, r_1, . . . , r_{F(γ)} ∈ R give a solution of (27)
and (28). The following result was proved in [MVV16, Proposition 5.1] in the
case of polynomial splines.
Proposition 5.1 For a topological surface Mγ consisting of F(γ) quadrangles
glued around an interior vertex γ,
$$\dim H(\gamma) = 3 + F(\gamma) - \sum_{\tau \ni \gamma} c_\tau(\gamma) + c_+(\gamma),$$
where c_τ(γ), c_+(γ) are as in Definition 2.3.
Since the vectors in H(γ) only depend on the Taylor expansion of f at γ, and
f can be seen as a polynomial spline in a neighborhood of γ, the proof
of Proposition 5.1 follows the same argument as the one in [MVV16].
Proposition 5.2 For a topological surface Mγ as before, if s(τ_i) denotes the
separability of the edge τ_i as in Definition 4.11, then
$$T_\gamma(V_k(\gamma)) = H(\gamma)$$
for every k ≥ max{s(τ_i) : i = 1, . . . , F(γ)}.
Proof. By definition (see (25)), the elements of V_k(γ) satisfy the conditions
(27) and (28) on the Taylor expansion of f, so T_γ(V_k(γ)) ⊆ H(γ).
Let us consider a vector h = [p, q_1, . . . , q_{F(γ)}, s_1, . . . , s_{F(γ)}] ∈ H(γ); we need
to prove that this vector is in the image T_γ(V_k(γ)). In fact, by Proposition
4.10 applied to τ_i = [γ, δ_i], there exists (f_i^{τ_i}, f_{i−1}^{τ_i}) ∈ S^{1,r}_k(M_{τ_i}) such that
T_γ(f_i^{τ_i}, f_{i−1}^{τ_i}) = [p, q_i, q_{i+1}, s_i, p, q_{i−1}, q_i, s_{i−1}] and T_{δ_i}(f_i^{τ_i}, f_{i−1}^{τ_i}) = 0 for k ≥ s(τ_i),
for i = 2, . . . , F. Let us notice that in such case, T_γ^{σ_i}(f_i^{τ_i}) = T_γ^{σ_i}(f_i^{τ_{i+1}}).
Thus, it follows that there exists g_i ∈ R_k(σ_i) such that T_{τ_i}^{σ_i}(g_i) = f_i^{τ_i} and
T_{τ_{i+1}}^{σ_i}(g_i) = f_i^{τ_{i+1}}. The spline g_i is constructed by taking the coefficients of
f_i^{τ_i} and f_i^{τ_{i+1}} in R_{σ_i}(τ_i) and R_{σ_i}(τ_{i+1}), respectively (see Section 2.3). Since
T_{δ_i}^{σ_i}(f_i^{τ_i}) = T_{δ_i}^{σ_i}(g_i) = 0 and T_{δ_{i+1}}^{σ_i}(f_i^{τ_{i+1}}) = T_{δ_{i+1}}^{σ_i}(g_i) = 0, then T_τ^{σ_i}(g_i) = 0 for
every edge τ ∈ σ_i such that γ ∉ τ. Let g = [g_1, g_2, . . . , g_{F(γ)}], where g_i ∈ R_k(σ_i)
is as previously constructed. Then g and its first derivatives vanish on the
edges in ∂Mγ, and g satisfies the gluing conditions along all the interior edges
τ_i of Mγ, i.e. g ∈ S^1_k(Mγ) ∩ ker D_γ. Hence g ∈ V_k(γ), and by construction
T_γ(g) = h. □
Given a topological surface M, let T be the Taylor map at all the vertices of
M, as defined in Section 2.3. We have the following exact sequence
$$0 \to K_k(M) \to S^1_k(M) \stackrel{T}{\longrightarrow} H_k(M) \to 0 \qquad (29)$$
where H_k(M) = T(S^1_k(M)) and K_k(M) = ker T ∩ S^1_k(M). Let us define
s^* = max{s(τ) : τ ∈ M_1}. From Proposition 4.10, we know that s^* ≤ 2 +
max{ν_i^τ : i = 1, 2 and τ ∈ M_1} + min(3, r), where (µ_i^τ, ν_i^τ) for i = 1, 2 are
the degrees of the generators of Syz_1 and Syz_2, respectively, with µ_i^τ ≤ ν_i^τ.
Proposition 5.3 Let F(γ) and H(γ) be as defined above for each vertex γ ∈
M_0. Then for every k ≥ s^* we have T(S^1_k(M)) = ∏_γ H(γ) and
$$\dim T(S^1_k(M)) = \sum_{\gamma \in M_0} \big(F(\gamma) + 3\big) - \sum_{\gamma \in M_0} \sum_{\tau \ni \gamma} c_\tau(\gamma) + \sum_{\gamma \in M_0} c_+(\gamma).$$
Proof. The statement follows directly by applying Propositions 5.2 and 5.1 to
each vertex γ ∈ M_0, with Mγ the sub-mesh of M which consists of the quadrangles of M containing the vertex γ. □
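The count in Proposition 5.3 depends only on combinatorial data per vertex; a minimal helper (the vertex data in the test is hypothetical):

```python
def taylor_image_dim(vertices):
    """dim T(S^1_k(M)) from Proposition 5.3.
    vertices: one tuple (F, c_tau_list, c_plus) per vertex of M_0."""
    return sum(F + 3 - sum(c_tau) + c_plus for F, c_tau, c_plus in vertices)
```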
5.1 Basis functions associated to a vertex
Given a topological surface M, for each vertex γ ∈ M_0, let us consider the
sub-mesh Mγ consisting of all the faces σ ∈ M such that γ ∈ σ; as before,
we denote the number of such faces by F(γ). From Proposition 5.3 we know
the dimension of T(S^1_k(M)) for k ≥ s^*. In the following, we construct a set
of linearly independent splines B_0 ⊆ S^1_k(M) such that span{T(f) : f ∈ B_0} =
T(S^1_k(M)).
Let us take a vertex γ ∈ M_0 and consider the b-spline representation of the
elements f_σ ∈ R_k(σ) for σ ∈ Mγ. We construct a set B_0(γ) ⊂ S^1_k(Mγ) of
linearly independent spline functions as follows:
• First we add one basis function f attached to the value at γ, such that
T_γ^σ(f_σ)(γ) = 1 for every σ ∈ Mγ. Let us notice that if we define g_σ =
Σ_{0≤i,j≤1} N_i(u_σ) N_j(v_σ) for every σ ∈ Mγ, and g on Mγ such that g|_σ = g_σ,
then g(γ) = 1. We lift g to a spline f on Mγ such that f is in the image of
the map Θ_τ defined in (14), for every τ ∈ M_1 attached to γ.
• We add two basis functions g, h supported on Mγ and attached to the
first derivatives at γ. Namely, let us consider g_{σ_1} = (1/2k)(N_0(u_{σ_1}) +
N_1(u_{σ_1})) N_1(v_{σ_1}) and h_{σ_1} = (1/2k) N_1(u_{σ_1})(N_0(v_{σ_1}) + N_1(v_{σ_1})). The conditions (27) and (28) allow us to find g_{σ_i} and h_{σ_i}, for i = 2, . . . , F(γ), from
g_{σ_1} and h_{σ_1}, respectively. Thus, we define g and h on Mγ by taking g|_σ = g_σ
and h|_σ = h_σ. Since g and h by construction satisfy the gluing conditions
(2) and (3) along the edges, they are splines in S^1_k(Mγ), in the image of
Θ_τ for every interior edge τ ∈ Mγ.
• For each edge τ_i, i = 1, . . . , F(γ), let us define the function g_{σ_i} =
c_{1,1}^{σ_i}(g_{σ_i}) N_1(u_{σ_i}) N_1(v_{σ_i}), where c_{1,1}^{σ_i}(g_{σ_i}) = 1/4k² if τ_i is not a crossing edge,
and equal to zero otherwise. Then, for every fixed edge τ_i ∈ Mγ attached to
γ, we construct a spline g on Mγ such that g|_{σ_i} = g_{σ_i}, and g|_{σ_j} for j ≠ i are
determined by g_{σ_i} and the gluing data at γ, according to (27) and (28). The
previous construction produces F(γ) − Σ_{τ∋γ} c_τ(γ) (non-zero) spline functions. These splines, by construction, are in the image of Θ_τ (14) along all
the edges τ ∈ M_1 attached to γ.
• If γ is a crossing vertex, by definition all the edges attached to γ are crossing
edges. In this case, we define g_{σ_1} = (1/4k²) N_1(u_{σ_1}) N_1(v_{σ_1}), and determine
g_{σ_i} for i = 2, . . . , F(γ) using the gluing data at γ and conditions (27) and
(28). Defining g on Mγ by g|_{σ_i} = g_{σ_i}, we obtain a spline in S^1_k(Mγ).
Let us notice that if τ_i is a crossing edge then, following the notation in the
Taylor expansion of g_i(u_i, v_i) in (26), the coefficient s_i = ∂_{u_{σ_i}} ∂_{v_{σ_i}} g_i(u_i, v_i)|_γ
becomes dependent on s_{i−1}, q_i and q_{i−1}, and therefore there is no additional
basis function associated to the edge τ_i.
Applying the previous construction to every γ ∈ M0 , we obtain a collection
of splines B0 (γ) ⊆ Sk1 (Mγ ) for each γ ∈ M0 . We lift the splines f ∈ Sk1 (Mγ )
to functions on M by defining fσ = 0 for every σ ∈
/ Mγ . To simplify the
exposition, we abuse the notation, and will also call f the lifted spline on M,
and B0 (γ) the collection of those splines.
Definition 5.4 For a topological surface M, let B_0 ⊆ S^1_k(M) be the set of
linearly independent functions defined by
$$B_0 = \bigcup_{\gamma \in M_0} B_0(\gamma), \qquad (30)$$
where B_0(γ) ⊆ S^1_k(Mγ), for each vertex γ ∈ M.
By construction, the collections of splines B_0(γ), for each vertex γ ∈ M_0,
and hence B_0, are linearly independent. Moreover, the number of elements of B_0
coincides with the dimension of H_k(M); hence B_0 is a basis of a complement
of ker T in S^1_k(M), i.e. of the component of the spline space on which the
Taylor map T in (29) is nonzero.
6 Splines on a face
Let Fk (M) be the spline functions in Skr (M) with vanishing Taylor expansion
along all the edges of M, that is, Fk (M) = Skr (M) ∩ ker D.
An element f is in F_k(M) if and only if c_{i,j}^σ(f) = 0 for i ≤ 1 or i ≥ m − 1,
j ≤ 1 or j ≥ m − 1, for all σ ∈ M_2.
Let F_k(σ) be the set of elements f ∈ F_k(M) with c_{i,j}^{σ'}(f) = 0 for 0 ≤ i, j ≤ m
and σ' ≠ σ.
• The dimension of F_k(σ) is (2k − r − 3)²_+.
• A basis of F_k(σ) is {N_i(u_σ) N_j(v_σ) : 1 < i, j < m − 1}.
We easily check that Fk (M) = ⊕σ Fk (σ), which implies the following result:
Lemma 6.1 The dimension of F_k(M) is (2k − r − 3)²_+ F_2, where F_2 is the
number of (quadrangular) faces of M.
Basis functions associated to a face. The set B_2 of basis functions
associated to the faces is obtained by taking the union of the bases of F_k(σ) for
all faces σ ∈ M_2, that is,
$$B_2 := \{N_i(u_\sigma) N_j(v_\sigma) : 1 < i, j < m - 1, \; \sigma \in M_2\}. \qquad (31)$$
7
(31)
Dimension and basis of Splines on M
We now have all the ingredients to determine the dimension of Sk1,r (M) and
a basis.
Theorem 7.1 Let s∗ = max{s(τ ) | τ ∈ M1 }. Then, for k ≥ s∗ ,

dim Sk1,r (M) = (2k − r − 3)² F2 + Σ_{τ ∈ M1} d̃τ (k, r) + 4F2 − 9F1 + 3F0 + F+ ,

where
• d̃τ (k, r) is the dimension of the syzygies of the gluing data along τ in degree ≤ k,
• F2 is the number of rectangular faces,
• F1 is the number of edges,
• F0 (resp. F+ ) is the number of vertices (resp. crossing vertices).
Proof. By construction, Kk (M) = Sk1,r (M) ∩ ker T is the set of splines in
Sk1,r (M) whose Taylor expansion at all the vertices vanishes, and Hk (M) is the
image of Sk1,r (M) under the Taylor map T . Thus we have the following exact
sequence:

0 → Kk (M) → Sk1,r (M) −T→ Hk (M) → 0.    (32)
By construction, Ek (M) is the set of splines in Kk (M) with a support along
the edges of M, so that D(Kk (M)) = Ek (M). The kernel of D : ⊕σ Rk (M) →
⊕σ Rk (M) is Fk (M). As Fk (M) ⊂ Kk (M), we have the exact sequence

0 → Fk (M) → Kk (M) −D→ Ek (M) → 0.    (33)
From the exact sequences (32) and (33), we have

dim Sk1,r (M) = dim Hk (M) + dim Kk (M)
             = dim Hk (M) + dim Ek (M) + dim Fk (M).

We deduce the dimension formula using Lemma 4.13, Proposition 5.1 and
Lemma 6.1, as in [MVV16, proof of Theorem 6.3]. □
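Once the syzygy dimensions d̃τ (k, r) are known, the count of Theorem 7.1 is purely combinatorial. A minimal sketch of the bookkeeping (the mesh counts and the d̃τ values below are hypothetical inputs, not derived from actual gluing data):

```python
def dim_g1_spline_space(k, r, F2, F1, F0, F_plus, d_tau):
    """Evaluate the dimension formula of Theorem 7.1.

    F2, F1, F0, F_plus: numbers of faces, edges, vertices, crossing vertices.
    d_tau: one syzygy dimension per edge (here: placeholder values).
    """
    face_term = (2 * k - r - 3) ** 2 * F2
    edge_term = sum(d_tau)
    combinatorial_term = 4 * F2 - 9 * F1 + 3 * F0 + F_plus
    return face_term + edge_term + combinatorial_term

# Hypothetical mesh data: 3 faces, 9 edges, 7 vertices, no crossing vertex.
print(dim_g1_spline_space(k=4, r=1, F2=3, F1=9, F0=7, F_plus=0,
                          d_tau=[2, 2, 1]))  # -> 5
```

The three summands mirror the exact sequences above: the face term is dim Fk (M), the edge term accounts for Ek (M), and the remaining combinatorial term comes from the Taylor-map image Hk (M).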
Basis of Sk1,r (M). A basis of Sk1,r (M) is obtained by taking
• the basis B0 of Vk (M) attached to the vertices of M and defined in (30),
• the basis B1 of Ek (M) attached to the edges of M and defined in (24),
• the basis B2 of Fk (M) attached to the faces of M and defined in (31).
8  Examples
To illustrate the construction, we detail an example of a simple mesh, where
a point of valence 3 is connected to a crossing point. The construction can be
extended to points of arbitrary valencies, in a more complex mesh.
We consider the mesh M composed of 3 rectangles σ1 , σ2 , σ3 glued around an
interior vertex γ, along the 3 interior edges τ1 , τ2 , τ3 . There are 6 boundary
edges and 6 boundary vertices δ1 , δ2 , δ3 , ε1 , ε2 , ε3 . We use the symmetric gluing
corresponding to the angle 2π/3 at γ and π/2 at δ1 , δ2 , δ3 .

Fig. 5. Smooth corner: 3 rectangles σ1 , σ2 , σ3 glued around the interior vertex γ
along the edges τ1 , τ2 , τ3 , with boundary vertices δ1 , δ2 , δ3 and ε1 , ε2 , ε3 .
We choose the gluing data [a, b, c] along an edge τi given by Formula (10):
a(u) = d0 (u)
b(u) = −d0 (u) − d1 (u)
c(u) = d0 (u) + d1 (u)
where d0 = Ñ0 (u) + Ñ1 (u), d1 = Ñ2 (u) + Ñ3 (u) + Ñ4 (u) for the b-spline basis
Ñ0 , . . . , Ñ5 of U20 and where u = 0 corresponds to γ. This gives
a(u) = −1 + 4u²  for 0 ≤ u ≤ 1/2,    a(u) = 0  for 1/2 ≤ u ≤ 1,    b(u) = −1,    c(u) = 1.
The degrees of the µ-bases of the different components are respectively µ1 =
0, ν1 = 2, µ2 = 0, ν2 = 0. Thus the separability is reached from degree
k ≥ 4.
We are going to analyze the spline space S41,1 (M) for specific gluing data. An
element f ∈ S41,1 (M) is represented on each cell σi (i = 1, 2, 3) by a tensor
product b-spline of class C 1 with 8 × 8 b-spline coefficients:
fk := Σ_{0 ≤ i,j ≤ 7} cki,j (f ) Ni,j (uk , vk ),
where Ni,j (u, v) = Ni (u)Nj (v) and {N0 (u), . . . , N7 (u)} is the basis of Uk1 . We
describe an element f ∈ S41,1 (M) as a triple of b-spline functions
"
#
X
06i,j67
c1i,j Ni,j
,
c2i,j Ni,j
X
06i,j67
,
X
c3i,j Ni,j
.
06i,j67
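Each component of such a triple is an ordinary tensor-product b-spline, so evaluating it reduces to a univariate Cox–de Boor recursion in each parameter. A generic sketch (the quadratic knot vector below is an arbitrary illustrative choice, not the actual basis of Uk1):

```python
def bspline_basis(i, p, knots, u):
    """Cox-de Boor recursion for the i-th degree-p b-spline basis function."""
    if p == 0:
        return 1.0 if knots[i] <= u < knots[i + 1] else 0.0
    left = right = 0.0
    if knots[i + p] != knots[i]:
        left = (u - knots[i]) / (knots[i + p] - knots[i]) \
            * bspline_basis(i, p - 1, knots, u)
    if knots[i + p + 1] != knots[i + 1]:
        right = (knots[i + p + 1] - u) / (knots[i + p + 1] - knots[i + 1]) \
            * bspline_basis(i + 1, p - 1, knots, u)
    return left + right

def eval_patch(coeffs, p, knots, u, v):
    """f(u, v) = sum_ij c_ij N_i(u) N_j(v) on one quadrangular face."""
    n = len(knots) - p - 1
    return sum(coeffs[i][j]
               * bspline_basis(i, p, knots, u)
               * bspline_basis(j, p, knots, v)
               for i in range(n) for j in range(n))

# Quadratic basis with one interior knot; 4 basis functions per direction.
knots = [0.0, 0.0, 0.0, 0.5, 1.0, 1.0, 1.0]
ones = [[1.0] * 4 for _ in range(4)]
print(eval_patch(ones, 2, knots, 0.3, 0.8))  # ~ 1.0 (partition of unity)
```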
The separability is reached at degree 4 and we have the following basis elements, described by a triple of functions which are decomposed in the b-spline
bases of each face:
• The number of basis functions attached to γ is 6 = 1 + 2 + 3.
– The basis function associated to the value at γ is
[ (1/3) N0,0 + N0,2 + N0,3 + N0,4 + 2 N1,3 + 2 N1,4 + N2,0 + N3,0 + N4,0 ,
(1/3) N0,0 + N2,0 + N3,0 + N4,0 + 3 N0,1 + (31/3) N0,2 + 17 N0,3 + 17 N0,4 + 14 N1,2 + 34 N1,3 + 34 N1,4 ,
(1/3) N0,0 + 3 N1,0 + (31/3) N2,0 + 17 N3,0 + 17 N4,0 + N0,2 + N0,3 + N0,4 + 2 N1,3 + 2 N1,4 ].
– The two basis functions associated to the derivatives at γ are
[ N0,1 + (10/3) N0,2 + (16/3) N0,3 + (16/3) N0,4 + (14/3) N1,2 + (32/3) N1,3 + (32/3) N1,4 ,
N1,0 + (10/3) N2,0 + (16/3) N3,0 + (16/3) N4,0 ,
− N0,1 − (10/3) N0,2 − (16/3) N0,3 − (16/3) N0,4 − (14/3) N1,2 − (32/3) N1,3 − (32/3) N1,4 − N1,0 − (10/3) N2,0 − (16/3) N3,0 − (16/3) N4,0 ],
[ N1,0 + (10/3) N2,0 + (16/3) N3,0 + (16/3) N4,0 ,
− N0,1 − (10/3) N0,2 − (16/3) N0,3 − (16/3) N0,4 − (14/3) N1,2 − (32/3) N1,3 − (32/3) N1,4 ,
− N1,0 − (10/3) N2,0 − (16/3) N3,0 − (16/3) N4,0 + N0,1 + (10/3) N0,2 + (16/3) N0,3 + (16/3) N0,4 + (14/3) N1,2 + (32/3) N1,3 + (32/3) N1,4 ].
– The three basis functions associated to the cross derivatives at γ are
[ − (4/3) N0,2 − (8/3) N0,3 − (8/3) N0,4 + N1,1 − (4/3) N1,2 − (16/3) N1,3 − (16/3) N1,4 ,
− (4/3) N2,0 − (8/3) N3,0 − (8/3) N4,0 , 0 ],
[ − (4/3) N2,0 − (8/3) N3,0 − (8/3) N4,0 ,
− (4/3) N0,2 − (8/3) N0,3 − (8/3) N0,4 − (4/3) N1,2 − (16/3) N1,3 − (16/3) N1,4 ,
− (4/3) N2,0 − (8/3) N3,0 − (8/3) N4,0 + N1,1 − (4/3) N0,2 − (8/3) N0,3 − (8/3) N0,4 − (4/3) N1,2 − (16/3) N1,3 − (16/3) N1,4 ],
[ − (4/3) N2,0 − (8/3) N3,0 − (8/3) N4,0 , 0 ,
− (4/3) N0,2 − (8/3) N0,3 − (8/3) N0,4 − (4/3) N1,2 − (16/3) N1,3 − (16/3) N1,4 ].
• There are 4 = 1 + 2 + 2 − 1 basis functions attached to δ1 :
[ N0,7 , N7,0 + 2 N7,1 , 0 ], [ N0,6 , N6,0 + 2 N6,1 , 0 ], [ N1,7 , − N7,1 , 0 ], [ N1,6 , − N6,1 , 0 ].
The basis functions associated to the other boundary points δ2 , δ3 are obtained
by cyclic permutation.
• There are 5 = 14 − 5 − 4 basis functions attached to the edge τ1 :
[ − N1,2 , N2,1 , 0 ], [ − N1,3 , N3,1 , 0 ], [ − N1,4 , N4,1 , 0 ], [ − N1,5 , N5,1 , 0 ], [ N0,5 + 2 N1,5 , N5,0 , 0 ].
The basis functions associated to the other edges τ2 , τ3 are obtained by cyclic
permutation.
• For the remaining boundary points, boundary edges and faces, we have the
following 36 × 3 basis functions
[ Ni,j , 0 , 0 ], [ 0 , Ni,j , 0 ], [ 0 , 0 , Ni,j ],    for 2 ≤ i, j ≤ 7.
The dimension of the space S41,1 (M) is 6 + 3 × (4 + 5 + 36) = 141.
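The total 141 just aggregates the per-vertex, per-edge and per-face counts listed above; the bookkeeping can be checked mechanically:

```python
# Basis functions of S_4^{1,1}(M) for the mesh of Figure 5:
at_gamma  = 1 + 2 + 3        # value, derivative and cross-derivative functions at gamma
per_delta = 1 + 2 + 2 - 1    # functions attached to each boundary vertex delta_i
per_tau   = 14 - 5 - 4       # functions attached to each interior edge tau_i
per_face  = 6 * 6            # [N_{i,j}, 0, 0]-type functions, 2 <= i, j <= 7
total = at_gamma + 3 * (per_delta + per_tau + per_face)
print(total)  # -> 141
```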
A similar construction applies for an edge of a general mesh connecting an
interior vertex γ of any valency ≠ 4 to another vertex γ′. If γ′ is a crossing
vertex, the numbers of basis functions attached to the vertices and the edge
do not change. If γ′ is not a crossing vertex, the number of basis functions
attached to the non-crossing vertex γ′ becomes 5 and there are 4 basis
functions attached to the edge. In the case where the edge connects two crossing
vertices, there are 4 basis functions attached to each crossing vertex and 8
basis functions attached to the edge.
The gluing data used in this construction require degree 4 for the
separability. For the mesh of Figure 5, it is possible to use linear gluing data
and bi-cubic b-spline patches. The dimension of bi-cubic G1 splines with the
linear gluing data is 72. Depending on the topology of the mesh and the choice
of the gluing data, it is possible to use low-degree b-spline patches for the
construction of G1 splines. In Figure 6, examples of G1 bicubic spline
surfaces are shown, for meshes with valencies at most 3, 4 and 6. The G1
surface is obtained by least-square projection of a G0 spline onto the space
of G1 splines.
Fig. 6. Examples of bi-cubic G1 surfaces
Concluding remarks
We have studied the set of smooth b-spline functions defined on quadrilateral
meshes of arbitrary topology, with 4-split macro-patch elements. Our study
has focused on determining the dimension of the space of geometrically
continuous G1 splines of bounded degree. We have provided a construction for
the basis of the space composed of tensor product b-spline functions. We have
also illustrated our results with examples concerning parametric surface
construction for simple topological surfaces. Further extensions include the
explicit construction of transition maps which ensure that the
differentiability conditions are fulfilled; the study of spline spaces with
different macro-patch elements leading to a lower degree of the basis
functions; the analysis of the numerical conditioning of the representation of
the G1 splines in the chosen basis; and the use of these basis functions for
approximation, in particular in fitting problems and in iso-geometric analysis.
Acknowledgements: This work is partially supported by the Marie
Skłodowska-Curie Innovative Training Network ARCADES (grant agreement No
675789) from the European Union’s Horizon 2020 research and innovation programme.
References
[BGN14]
Carolina Vittoria Beccari, Daniel E. Gonsor, and Marian Neamtu.
RAGS: Rational geometric splines for surfaces of arbitrary topology.
Computer Aided Geometric Design, 31(2):97–110, 2014.
[BH14]
Georges-Pierre Bonneau and Stefanie Hahmann.
Flexible G1
interpolation of quad meshes. Graphical Models, 76(6):669–681, 2014.
[BM14]
Michel Bercovier and Tanya Matskewich. Smooth Bézier Surfaces over
Arbitrary Quadrilateral Meshes. Preprint available at arXiv:1412.1125,
2014.
[CC78]
Edwin Catmull and Jim Clark. Recursively generated B-spline surfaces
on arbitrary topological meshes. Computer-Aided Design, 10(6):350–
355, 1978.
[CST16]
Annabelle Collin, Giancarlo Sangalli, and Thomas Takacs. Analysis-suitable
G1 multi-patch parametrizations for C 1 isogeometric spaces.
Comput. Aided Geom. Design, 47:93–113, 2016.
[FP08]
Jianhua Fan and Jörg Peters. On smooth bicubic surfaces from quad
meshes. In International Symposium on Visual Computing, pages 87–96.
Springer, 2008.
[GHQ06]
Xianfeng Gu, Ying He, and Hong Qin. Manifold splines. Graphical
Models, 68(3):237–254, 2006.
[Hah89]
Jörg M. Hahn. Geometric continuous patch complexes. Computer Aided
Geometric Design, 6(1):55–67, 1989.
[HBC08]
Stefanie Hahmann, Georges-Pierre Bonneau, and Baptiste Caramiaux.
Bicubic G1 interpolation of irregular quad meshes using a 4-split. In
International Conference on Geometric Modeling and Processing, pages
17–32. Springer, 2008.
[HWW+ 06] Ying He, Kexiang Wang, Hongyu Wang, Xianfeng Gu, and Hong Qin.
Manifold T-spline. In International Conference on Geometric Modeling
and Processing, pages 409–422. Springer, 2006.
[KVJB15] Mario Kapl, Vito Vitrih, Bert Jüttler, and Katharina Birner.
Isogeometric analysis with geometrically continuous functions on
two-patch geometries. Computers & Mathematics with Applications,
70(7):1518–1538, 2015.
[LCB07]
Hongwei Lin, Wei Chen, and Hujun Bao. Adaptive patch-based mesh
fitting for reverse engineering. Computer-Aided Design, 39(12):1134–
1142, 2007.
[Loo94]
Charles Loop. Smooth spline surfaces over irregular meshes. In
Proceedings of the 21st Annual Conference on Computer Graphics and
Interactive Techniques, pages 303–310. ACM, 1994.
[MVV16]
Bernard Mourrain, Raimundas Vidunas, and Nelly Villamizar.
Dimension and bases for geometrically continuous splines on surfaces
of arbitrary topology. Computer Aided Geometric Design, 45:108–133,
2016.
[Pet95]
Jörg Peters.
Biquartic C 1 -surface splines over irregular meshes.
Computer-Aided Design, 27(12):895–903, 1995.
[Pet00]
Jörg Peters. Patching Catmull-Clark meshes. In Proceedings of the 27th
Annual Conference on Computer Graphics and Interactive Techniques,
SIGGRAPH ’00, pages 255–258, New York, NY, USA, 2000. ACM
Press/Addison-Wesley Publishing Co.
[PF10]
Jörg Peters and Jianhua Fan. On the complexity of smooth spline
surfaces from quad meshes.
Computer Aided Geometric Design,
27(1):96–105, 2010.
[Pra97]
Hartmut Prautzsch. Freeform splines. Computer Aided Geometric
Design, 14(3):201–206, 1997.
[Rei95]
Ulrich Reif. Biquadratic G-spline surfaces. Computer Aided Geometric
Design, 12(2):193–205, 1995.
[SWY04]
Xiquan Shi, Tianjun Wang, and Piqiang Yu. A practical construction
of G1 smooth biquintic B-spline surfaces over arbitrary topology.
Computer-Aided Design, 36(5):413–424, 2004.
[YZ04]
Lexing Ying and Denis Zorin. A simple manifold-based construction
of surfaces of arbitrary smoothness. In SIGGRAPH’04, pages 271–275.
ACM Press, 2004.
Visualizations for an Explainable Planning Agent
Tathagata Chakraborti1 and Kshitij P. Fadnis2 and Kartik Talamadupula2 and Mishal Dholakia2
Biplav Srivastava2 and Jeffrey O. Kephart2 and Rachel K. E. Bellamy2
arXiv:1709.04517v2 [] 8 Feb 2018

1 Computer Science Department, Arizona State University, Tempe, AZ 85281 USA
tchakra2 @ asu.edu
2 IBM T. J. Watson Research Center, Yorktown Heights, NY 10598 USA
{ kpfadnis, krtalamad, mdholak, biplavs, kephart, rachel } @ us.ibm.com
Abstract
In this paper, we report on the visualization capabilities of an Explainable AI Planning (XAIP) agent
that can support human in the loop decision making. Imposing transparency and explainability requirements on such agents is especially important
in order to establish trust and common ground with
the end-to-end automated planning system. Visualizing the agent’s internal decision making processes is a crucial step towards achieving this. This
may include externalizing the “brain” of the agent
– starting from its sensory inputs, to progressively
higher order decisions made by it in order to drive
its planning components. We also show how the
planner can bootstrap on the latest techniques in
explainable planning to cast plan visualization as a
plan explanation problem, and thus provide concise
model based visualization of its plans. We demonstrate these functionalities in the context of the automated planning components of a smart assistant
in an instrumented meeting space.
Introduction
Advancements in the fields of speech, language, and search
have led to ubiquitous personalized assistants like the Amazon Echo, Google Home, Apple Siri, etc. Even though
these assistants have mastered a narrow category of interaction in specific domains, they mostly operate in passive
mode – i.e. they merely respond via a set of predefined
scripts, most of which are written to specification. In order to evolve towards truly smart assistants, the need for
(pro)active collaboration and decision support capabilities is
paramount. Automated planning offers a promising alternative
to this drudgery of repetitive and scripted interaction. The use
of planners allows automated assistants to be imbued with
the complementary capabilities of being nimble and proactive on the one hand, while still allowing specific knowledge to be coded in the form of domain models. Additionally, planning algorithms have long excelled [Myers, 1996;
Sengupta et al., 2017] in the presence of humans in the loop
for complex collaborative decision making tasks.
eXplainable AI Planning (XAIP) While planners have always adapted to accept various kinds of inputs from humans,
only recently has there been a concerted effort on the other
side of the problem: making the outputs of the planning process more palatable to human decision makers. The paradigm
of eXplainable AI Planning (XAIP) [Fox et al., 2017] has become a central theme around which much of this research has
coalesced. In this paradigm, emphasis is laid on the qualities of trust, interaction, and transparency that an AI system
is endowed with. The key contributions to explainability are
the resolution of critical exploratory questions – why did the
system do something a particular way, why did it not do some
other thing, why was its decision optimal, and why the evolving world may force the system to replan.
Role of Visualization in XAIP One of the keys towards
achieving an XAIP agent is visualization. The planning community has recently made a concerted effort to support the
visualization of key components of the end-to-end planning
process: from the modeling of domains [Bryce et al., 2017];
to assisting with plan management [Izygon et al., 2008]; and
beyond [Sengupta et al., 2017; Benton et al., 2017]. For an
end-to-end planning system, this becomes even more challenging since the system’s state is determined by information
at different levels of abstraction which are being coalesced in
the course of decision making. A recent workshop [Freedman
and Frank, 2017] outlines these challenges in a call to arms
to the community on the topic of visualization and XAIP.
Contribution It is in this spirit that we present a set of visualization capabilities for an XAIP agent that assists with human in the loop decision making tasks: specifically in the case
of this paper, assistance in an instrumented meeting space.
We introduce the end-to-end planning agent, Mr.Jones,
and the visualizations that we endow it with. We then provide
fielded demonstrations of the visualizations, and describe the
details that lie under the hood of these capabilities.
Introducing Mr.Jones
First, we introduce Mr.Jones, situated in the CEL – the
Cognitive Environments Laboratory – at IBM’s T.J. Watson
Research Center. Mr.Jones is designed to embody the key
properties of a proactive assistant while fulfilling the properties desired of an XAIP agent.
Figure 1: Architecture diagram illustrating the building blocks of Mr.Jones – the two main components Engage and Orchestrate situate
the agent proactively in a decision support setting with human decision makers in the loop. The top right inset shows the different roles of
Mr.Jones as a smart room orchestrator and meeting facilitator. The bottom right inset illustrates the flow of control in Mr.Jones – each
service runs in parallel and asynchronously to maintain anytime response of all the individual components.
Mr.Jones: An end-to-end planning system
We divide the responsibilities of Mr.Jones into two processes – Engage, where plan recognition techniques are used
to identify the task in progress; and Orchestrate, which involves active participation in the decision-making process via
real-time plan generation, visualization, and monitoring.
ENGAGE This consists of Mr.Jones monitoring various
inputs from the world in order to situate itself in the context
of the group interaction. First, the assistant gathers various
inputs like speech transcripts, live images, and the positions
of people within a meeting space; these inputs are fed into
a higher level symbolic reasoning component. Using this,
the assistant can (1) requisition resources and services that
may be required to support the most likely tasks based on
its recognition; (2) visualize the decision process – this can
depict both the agent’s own internal recognition algorithm,
and an external, task-dependent process; and (3) summarize
the group decision-making process.
ORCHESTRATE This process is the decision support assistant’s contribution to the group’s collaboration. This can
be done using standard planning techniques, and can fall
under the aegis of one of four actions as shown in Figure 1. These actions, some of which are discussed in more
detail in [Sengupta et al., 2017], are: (1) execute, where
the assistant performs an action or a series of actions related to the task at hand; (2) critique, where the assistant
offers recommendations on the actions currently in the collaborative decision sequence; (3) suggest, where the assistant suggests new decisions and actions that can be discussed collaboratively; and (4) explain, where the assistant
explains its rationale for adding or suggesting a particular
decision. The Orchestrate process thus provides the “support” part of the decision support assistant. The Engage
and Orchestrate processes can be seen as somewhat parallel to the interpretation and steering processes defined in
the crowdsourcing scenarios of [Talamadupula et al., 2013;
Manikonda et al., 2017]. The difference in these new scenarios is that the humans are the final decision makers, with the
assistant merely supporting the decision making.
Architecture Design & Key Components
The central component – the Orchestrator1 – regulates the
flow of information and control flow across the modules that
manage the various functionalities of the CEL; this is shown
in Figure 1. These modules are mostly asynchronous in
nature and may be: (1) services2 processing sensory information from various input devices across different modalities like audio (microphone arrays), video (PTZ cameras /
Kinect), motion sensors (Myo / Vive) and so on; (2) services
handling the different services of CEL; and (3) services that
attach to the Mr.Jones module. The Orchestrator is responsible for keeping track of the current state of the system as
well as coordinating actuation either in the belief/knowledge
space, or in the actual physical space.
Knowledge Acquisition / Learning The knowledge contained in the system comes from two sources – (1) the developers and/or users of the service; and (2) the system’s own
memory; as illustrated in Figure 1. One significant barrier towards the adoption of higher level reasoning capabilities into
such systems has been the lack of familiarity of developers
and end users with the inner working of these technologies.
With this in mind we provide an XML-based modeling interface – i.e. a “system config” – where users can easily configure new environments. This information in turn enables automatic generation of the files that are internally required by
the reasoning engines. Thus system specific information is
bootstrapped into the service specifications written by expert
1 Not to be confused with the term Orchestrate from the previous section, used
to describe the phase of active participation.
2 Built on top of the Watson Conversation and Visual Recognition services on
IBM Cloud and other IBM internal services.
developers, and this composite knowledge can be seamlessly
transferred across task domains and physical configurations.
The granularity of the information encoded in the models depends on the task at hand – for example, during the Engage
phase, the system uses much higher level information (e.g.
identities of agents in the room, their locations, speech intents, etc.) than during the Orchestrate phase, where more
detailed knowledge is needed. This enables the system to
reason at different levels of abstraction independently, thus
significantly improving the scalability as well as robustness
of the recognition engine.
Plan Recognition The system employs the probabilistic
goal / plan recognition algorithm from [Ramirez and Geffner,
2010] to compute its beliefs over possible tasks. The algorithm casts the plan recognition problem as a planning problem by compiling away observations to the form of actions in
a new planning problem. The solution to this new problem
enforces the execution of these observation-actions in the observed order. This explains the reasoning process behind
the belief distribution in terms of the possible plans that the
agent envisioned (as seen in Figure 2).
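The belief over tasks that this produces can be sketched with the Boltzmann posterior over cost differences used in [Ramirez and Geffner, 2010]-style recognition; the task names and plan costs below are hypothetical placeholders, not the CEL models:

```python
import math

def goal_posterior(costs, beta=1.0, prior=None):
    """P(G | O) from cost(G, O) (cheapest plan complying with the observations)
    and cost(G) (unconstrained optimal plan), Ramirez-Geffner style.
    costs: {goal: (cost_with_obs, cost_optimal)}."""
    goals = list(costs)
    prior = prior or {g: 1.0 / len(goals) for g in goals}
    weights = {g: prior[g] * math.exp(-beta * (costs[g][0] - costs[g][1]))
               for g in goals}
    z = sum(weights.values())
    return {g: w / z for g, w in weights.items()}

# Hypothetical tasks: complying with the observations adds no cost to 'tour',
# but a cost of 2 to 'm&a', so 'tour' gets the higher belief.
post = goal_posterior({'tour': (6, 6), 'm&a': (8, 6)})
print(max(post, key=post.get))  # -> tour
```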
Plan Generation The FAST-DOWNWARD planner
[Helmert, 2006] provides a suite of solutions to the forward planning problem. The planner is also required
internally by the Recognition Module when using the compilation from [Ramirez and Geffner, 2010], or in general to
drive some of the orchestration processes. The planner reuses
the compilation from the Recognition Module to compute
plans that preserve the current (observed) context.
Visualizations in Mr.Jones
The CEL is a smart environment, equipped with various sensors and actuators to facilitate group decision making. Automated planning techniques, as explained above, are the core
component of the decision support capabilities in this setting.
However, the ability to plan is rendered insufficient if the
agent cannot communicate that information effectively to the
humans in the loop. Dialog as a means of interfacing with the
human decision makers often becomes clumsy due to the difficulty of representing information in natural language, and/or
the time taken to communicate. Instead, we aim to build visual mediums of communication between the planner and the
humans for the following key purposes –
- Trust & Transparency - Externalizing the various pathways involved in the decision support process is essential to establish trust between the humans and the machine, as well as to increase situational awareness of the
agents. It allows the humans to be cognizant of the internal state of the assistant, and to infer decision rationale,
thereby reducing their cognitive burden.
- Summarization of Minutes - The summarization process
is a representation of the beliefs of the agent with regard to what is going on in its space over the course of
an activity. Since the agent already needs to keep track
of this information in order to make its decisions effectively, we can replay or sample from it to generate an
automated visual summary of (the agent’s belief of) the
proceedings in the room.
Figure 2: Snapshot of the mind of Mr.Jones externalizing different stages of its cognitive processes.
- Decision Making Process - Finally, and perhaps most
importantly, the decision making process itself needs efficient interfacing with the humans – this can involve a
range of things from showing alternative solutions to a
task, to justifying the reasoning behind different suggestions. This is crucial in a mixed initiative planning setting [Horvitz, 1999; Horvitz, 2007] to allow for human
participation in the planning process, as well as for the
planner’s participation in the humans’ decision making
process.
Mind of Mr.Jones
First, we will describe the externalization of the “mind” of
Mr.Jones – i.e. the various processes that feed the different capabilities of the agent. A snapshot of the interface is
presented in Figure 2. The interface itself consists of five
widgets. The largest widget on the top shows the various usecases that the CEL is currently set up to support. In the current
CEL setup, there are nine such usecases. The widget represents the probability distribution that indicates the confidence
of Mr.Jones in the respective task being the one currently
being collaborated on, along with a button for the provenance
of each such belief. The information used as provenance
is generated directly from the plans used internally by the
recognition module [Ramirez and Geffner, 2010] and justifies why, given its model of the underlying planning problems, these tasks look likely in terms of plans that achieve
those tasks. Model based algorithms are especially useful in providing explanations like this [Sohrabi et al., 2011;
Fox et al., 2017]. The system is adept at handling uncertainty
in its inputs (it is interesting to note that in coming up with
an explanatory plan it has announced likely assignments to
unknown agents in its space). In Figure 2, Mr.Jones has
placed the maximum confidence in the tour usecase.
Below the largest widget is a set of four widgets, each
of which give users a peek into an internal component of
Mr.Jones. The first widget, on the top left, presents a wordcloud representation of Mr.Jones’s belief in each of the
tasks; the size of the word representing that task corresponds
to the probability associated with that task. The second widget, on the top right, shows the agents that are recognized as
being in the environment currently – this information is used
by the system to determine what kind of task is more likely.
This information is obtained from four independent camera
feeds that give Mr.Jones an omnispective view of the environment; this information is represented via snapshots (sampled at 10-20 Hz) in the third widget, on the bottom left. In
the current example, Mr.Jones has recognized the agents
named (anonymized) “XXX” and “YYY” in the scenario.
Finally, the fourth widget, on the bottom right, represents a
wordcloud based summarization of the audio transcript of the
environment. This transcript provides a succinct representation of the things that have been said in the environment in
the recent past via the audio channels. Note that this widget
is merely a summarization of the full transcript, which is fed
into the IBM Watson Conversation service to generate observations for the plan recognition module. The interface thus
provides a (constantly updating) snapshot of the various sensory and cognitive organs associated with Mr.Jones – the
eyes, ears, and mind of the CEL. This snapshot is also organized at increasing levels of abstraction –
[1] Raw Inputs – These show the camera feeds and voice
capture (speech to text outputs) as received by the system. These help in externalizing what information the
system is working with at any point of time and can
be used, for example, in debugging at the input level if
the system makes a mistake or in determining whether
it is receiving enough information to make the right
decisions. It is especially useful for an agent like
Mr.Jones, which is not embodied in a single robot
or interface but is part of the environment as a whole.
As a result of this, users may find it difficult to attribute
specific events and outcomes to the agent.
[2] Lower level reasoning – The next layer deals with the
first stage of reasoning over these raw inputs – What are
the topics being talked about? Who are the agents in
the room? Where are they situated? This helps an user
identify what knowledge is being extracted from the input layer and fed into the reasoning engines. It increases
the situational awareness of agents by visually summarizing the contents of the scene at any point of time.
[3] Higher level reasoning – Finally, the top layer uses information extracted at the lower levels to reason about
abstract tasks in the scene. It visualizes the outcome
of the plan recognition process, along with the provenance of the information extracted from the lower levels (agents in the scene, their positions, speech intents,
etc.). This layer puts into context the agent’s current understanding of the processes in the scene.
Demonstration 1 We now demonstrate how the Engage
process evolves as agents interact in the CEL. The demonstration begins with two humans discussing the CEL envi-
ronment, followed by one agent describing a projection of the
Mind of Mr.Jones on the screen. The other agent then discusses how a Mergers and Acquisitions (M&A) task [Kephart
and Lenchner, 2015] is carried out. A video of this demonstration can be accessed at https://www.youtube.com/
watch?v=ZEHxCKodEGs. The video contains a window
that demonstrates the evolution of the Mr.Jones interface
through the duration of the interaction. This window illustrates how Mr.Jones’s beliefs evolve dynamically in response to interactions in real-time.
Demonstration 2 After a particular interaction is complete
Mr.Jones can automatically compile a summarization (or
minutes) of the meeting by sampling from the visualization of
its beliefs. An anonymized video of a typical summary can be
accessed at https://youtu.be/AvNRgsvuVOo. This
kind of visual summary provides a powerful alternative to established meeting summarization tools like text-based minutes. The visual summary can also be used to extract abstract
insights about this one meeting, or a set of similar meetings
together and allows for agents that may have missed the meeting to catch up on the proceedings. Whilst merely sampling
the visualization at discrete time-intervals serves as a powerful tool towards automated summary generation, we anticipate the use of more sophisticated visualization [Dörk et
al., 2010] and summarization [Shaw, 2017; Kim et al., 2015;
Kim and Shah, 2016] techniques in the future.
Model-Based Plan Visualization : Fresco
We start by describing the planning domain that is used in the
rest of this section, followed by a description of Fresco’s
different capabilities in terms of top-K plan visualization and
model-based plan visualization. We conclude by describing
the implementation details on the back-end.
The Collective Decision Domain We use a variant of the
Mergers and Acquisitions (M&A) task called Collective Decision (CD). The CD domain models the process of gathering
input from decision makers in a smart room, and the orchestration of comparing alternatives, eliciting preferences,
and finally ranking of the possible options.
Top-K Visualization
Most of the automated planning technology and literature
considers the problem of generating a single plan. Recently,
however, the paradigm of Top-K planning [Riabov et al.,
2014] has gained traction. Top-K plans are particularly useful in domains where producing and deliberating on multiple
alternative plans that go from the same fixed initial state and
the same fixed goal is important. Many decision support scenarios, including the one described above, are of this nature.
Moreover, Top-K plans can also help in realizing unspecified
user preferences, which may be very hard to model explicitly. By presenting the user(s) with multiple alternatives, an
implicit preference elicitation can instead be performed. The
Fresco interface supports visualization of the K top plans
for a given problem instance and domain model, as shown in
Figure 3a. In order to generate the Top-K plans, we use an
experimental Top-K planner [Anonymous, 2017] that is built
on top of Fast Downward [Helmert, 2006].
(a) Top-K plan visualization showing alternative plans for a given problem.
(b) Action Descriptions
Figure 3: Visualization of plans in Fresco showing top-K alternative solutions (K=3) for a given planing problem (left) and on-demand
visualization of each action in the plan (zoomed-in; right) in terms of causal links consumed and produced by it.
Figure 4: Visualization as a process of explanation – minimized view of conditions relevant to a plan. Blue, green and red nodes indicate
preconditions, add and delete effects respectively. The conditions which are not necessary causes for this plan (i.e. the plan is still optimal in
a domain without these conditions) are grayed out in the visualization (11 out of a total 30).
Model-based Plan Visualization
The requirements for visualization of plans can have different semantics depending on the task at hand – e.g. showing the search process that produced the plan, and the decisions taken (among possible alternative solutions) and tradeoffs made (by the underlying heuristics) in that process; or
revealing the underlying domain or knowledge base that engendered the plan. The former involves visualizing the how
of plan synthesis, while the latter focuses on the why, and
is model-based and algorithm independent. Visualizing the
how is useful to the developer of the system during debugging, but serves little purpose for the end user who would
rather be told the rationale behind the plan: why is this
plan better than others, what individual actions contribute
to the plan, what information is getting consumed at each
step, and so on. Unfortunately, much of the visualization
work in the planning community has been confined to depicting the search process alone [Thayer, 2010; Thayer, 2012;
Magnaguagno et al., 2017]. Fresco, on the other hand, aims
to focus on the why of a plan’s genesis, in the interests of establishing common ground with human decision-makers. At
first glance, this might seem like an easy problem – we could
just show what the preconditions and effects are for each action along with the causal links in the plan. However, even
for moderately sized domains, this turns into a clumsy and
cluttered approach very soon, given the large number of conditions to be displayed. In the following, we will describe
how Fresco handles this problem of overload.
Visualization as a Process of Explanation We begin by
noting that the process of visualization can in fact be seen
as a process of explanation. In model-based visualization, as
described above, the system is essentially trying to explain to
the viewer the salient parts of its knowledge that contributed
to this plan. In doing so, it is externalizing what each action is
contributing to the plan, as well as outlining why this action
is better than other possible alternatives.
Explanations in Multi-Model Planning Recent work has
shown [Chakraborti et al., 2017] how an agent can explain
its plans to the user when there are differences in the models
(of the same planning problem) of the planner and the user,
which may render an optimal plan in the planner’s model
sub-optimal or even invalid – and hence unexplainable – in the
user’s mental model. An explanation in this setting constitutes a model update to the human such that the plan (that
is optimal to the planner) in question also becomes optimal
in the user’s updated mental model. This is referred to as a
model reconciliation process (MRP). The smallest such explanation is called a minimally complete explanation (MCE).
Model-based Plan Visualization ≡ Model Reconciliation
with Empty Model As we mentioned previously, exposing the entire model to the user is likely to lead to cognitive
overload and lack of situational awareness due to the amount
of information that is not relevant to the plan in question. We
want to minimize the clutter in the visualization and yet maintain all relevant information pertaining to the plan. We do this
by launching an instantiation of the model reconciliation process with the planner’s model and an empty model as inputs.
An empty model is a copy of the given model where actions
do not have any conditions and the initial state is empty (the
goal is still preserved). Following from the above discussion,
the output of this process is then the minimal set of conditions
in the original model that ensure optimality of the given plan.
In the visualization, the rest of the conditions from the domain are grayed out. [Chakraborti et al., 2017] showed how
this can lead to a significant pruning of conditions that do not
contribute to the generation of a particular plan. An instance
of this process on the CD domain is illustrated in Figure 4.
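The "empty model" construction described above can be sketched as follows. This is a hypothetical illustration: the dict-based model format and the example action are invented for exposition and are not the actual Fresco/MMP data structures.

```python
# Hypothetical sketch of the "empty model" that seeds model reconciliation.
# The model representation (sets of propositions, dict of actions) is
# illustrative only, not the Fresco/MMP internal format.

def make_empty_model(model):
    """Copy a planning model, stripping all action conditions and the
    initial state while preserving the goal."""
    return {
        "init": set(),                           # initial state emptied
        "goal": set(model["goal"]),              # goal is preserved
        "actions": {
            name: {"pre": set(), "add": set(), "del": set()}
            for name in model["actions"]         # conditions stripped
        },
    }
```

Running model reconciliation between the original model and this empty copy would then return the minimal set of conditions needed to keep the given plan optimal; the remaining conditions can be grayed out in the visualization.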
(a) Architecture diagram of Fresco.
(b) Software stack
Figure 5: Illustration of the flow of control (left) in Fresco between the plan generator (FD), explanation generator (MMP), and plan
validator (VAL) with the visualization modules. The MMP code base is in the process of being fully integrated into Fresco, and it is
currently run as a stand-alone component. The software stack (right) shows the infrastructure supporting Fresco in the backend.
Note that the above may not be the only way to minimize
information being displayed. There might be different kinds
of information that the user cares about, depending on their
preferences. This is also highlighted by the fact that an MCE
is not unique for a given problem. These preferences can be
learned in the course of interactions.
Architecture of Fresco
The architecture of Fresco, shown in Figure 5a, includes
several core modules such as the parser, planner, resolver, and
visualizer. These modules are all connected in a feed-forward
fashion. The parser module is responsible for converting domain models and problem instances into python objects, and
for validating them using VAL [Howey et al., 2004]. Those
objects are then passed on to the planner module, which relies on Fast-Downward (FD) and the Multi-Model Planner
(MMP) [Chakraborti et al., 2017] to generate a plan along
with its explanation. The resolver module consumes the
plan, the explanation, and the domain information to not only
ground the plan, but also to remove any preconditions, add, or
delete effects that are deemed irrelevant by the MMP module.
Finally, the visualizer module takes the plan from the resolver
module as an input, and builds graphics that can be rendered
within any well-known web browser. Our focus in designing the architecture was on making it functionally modular
and configurable, as shown in Figure 5b. While the first three
modules described above are implemented using Python, the
visualizer module is implemented using Javascript and the
D3 graphics library. Our application stack uses REST protocols to communicate between the visualizer module and
the rest of the architecture. We also accounted for scalability and reliability concerns by containerizing the application
with Kubernetes, in addition to building individual containers / virtual machines for third party services like VAL,
Fast-Downward, and MMP.
Work in Progress
While we presented the novel notion of explanation as visualization in the context of AI planning systems in this paper via
the implemention of the Mr.Jones assistant, there is much
work yet to be done to embed this as a central research topic
in the community. We conclude the paper with a brief outline
of future work as it relates to the visualization capabilities of
Mr.Jones and other systems like it.
Visualization for Model Acquisition Model acquisition is
arguably the biggest bottleneck in the widespread adoption
of automated planning technologies. Our own work with
Mr.Jones is not immune to this problem. Although we
have enabled an XML-based modeling interface, the next iteration of making this easily consumable for non-experts involves two steps: first, we impose a (possibly graphical) interface on top of the XML structure to obtain information in a
structured manner. We can then provide visualizations such
as those described in [Bryce et al., 2017] in order to help with
iterative acquisition and refinement of the planning model.
Tooling Integration Eventually, our vision – not restricted
to any one planning tool or technology – is to integrate the
capabilities of Fresco into a domain-independent planning
tool such as planning.domains [Muise, 2016], which
will enable the use of these visualization components across
various application domains. planning.domains realizes the long-awaited planner-as-a-service paradigm for end
users, but is yet to incorporate any visualization techniques
for the user. Model-based visualization from Fresco, complemented with search visualizations from emerging techniques like WebPlanner [Magnaguagno et al., 2017], can
be a powerful addition to the service.
Acknowledgements A significant part of this work was initiated and completed while Tathagata Chakraborti was an intern at IBM’s T. J. Watson Research Center during the summer of 2017. The continuation of his work at ASU is supported by an IBM Ph.D. Fellowship.
References
[Anonymous, 2017] Anonymous. Anonymous for double blind review. 2017.
[Benton et al., 2017] J. Benton, David Smith, John
Kaneshige, and Leslie Keely. CHAP-E: A plan execution assistant for pilots. In Proceedings of the Workshop
on User Interfaces and Scheduling and Planning, UISP
2017, pages 1–7, Pittsburgh, Pennsylvania, USA, 2017.
[Bryce et al., 2017] Daniel Bryce, Pete Bonasso, Khalid
Adil, Scott Bell, and David Kortenkamp. In-situ domain
modeling with fact routes. In Proceedings of the Workshop
on User Interfaces and Scheduling and Planning, UISP
2017, pages 15–22, Pittsburgh, Pennsylvania, USA, 2017.
[Chakraborti et al., 2017] Tathagata Chakraborti, Sarath
Sreedharan, Yu Zhang, and Subbarao Kambhampati. Plan
explanations as model reconciliation: Moving beyond
explanation as soliloquy. In IJCAI, 2017.
[Dörk et al., 2010] M. Dörk, D. Gruen, C. Williamson, and
S. Carpendale. A Visual Backchannel for Large-Scale
Events. IEEE Transactions on Visualization and Computer
Graphics, 2010.
[Fox et al., 2017] Maria Fox, Derek Long, and Daniele Magazzeni. Explainable Planning. In First IJCAI Workshop on
Explainable AI (XAI), 2017.
[Freedman and Frank, 2017] Richard G. Freedman and
Jeremy D. Frank, editors. Proceedings of the First Workshop on User Interfaces and Scheduling and Planning.
AAAI, 2017.
[Helmert, 2006] Malte Helmert. The fast downward planning system. Journal of Artificial Intelligence Research,
26:191–246, 2006.
[Horvitz, 1999] Eric Horvitz. Principles of mixed-initiative
user interfaces. In Proceedings of the SIGCHI conference
on Human Factors in Computing Systems, pages 159–166.
ACM, 1999.
[Horvitz, 2007] Eric J Horvitz. Reflections on challenges
and promises of mixed-initiative interaction. AI Magazine,
28(2):3, 2007.
[Howey et al., 2004] Richard Howey, Derek Long, and
Maria Fox. Val: Automatic plan validation, continuous
effects and mixed initiative planning using pddl. In Tools
with Artificial Intelligence, 2004. ICTAI 2004. 16th IEEE
International Conference on, pages 294–301. IEEE, 2004.
[Izygon et al., 2008] Michel Izygon, David Kortenkamp,
and Arthur Molin. A procedure integrated development
environment for future spacecraft and habitats. In Space
Technology and Applications International Forum, 2008.
[Kephart and Lenchner, 2015] Jeffrey O Kephart and
Jonathan Lenchner. A symbiotic cognitive computing
perspective on autonomic computing. In Autonomic
Computing (ICAC), 2015 IEEE International Conference
on, pages 109–114, 2015.
[Kim and Shah, 2016] Joseph Kim and Julie A Shah. Improving team’s consistency of understanding in meetings. IEEE Transactions on Human-Machine Systems,
46(5):625–637, 2016.
[Kim et al., 2015] Been Kim, Caleb M Chacha, and Julie A
Shah. Inferring team task plans from human meetings:
A generative modeling approach with logic-based prior.
Journal of Artificial Intelligence Research, 2015.
[Magnaguagno et al., 2017] Maurício C Magnaguagno, Ramon Fraga Pereira, Martin D Móre, and Felipe Meneguzzi.
Web planner: A tool to develop classical planning domains
and visualize heuristic state-space search. ICAPS 2017
User Interfaces for Scheduling & Planning (UISP) Workshop, 2017.
[Manikonda et al., 2017] Lydia Manikonda, Tathagata Chakraborti, Kartik Talamadupula, and Subbarao Kambhampati. Herding the crowd: Using automated planning for better crowdsourced planning. Journal of Human Computation, 2017.
[Muise, 2016] Christian Muise. Planning.Domains. In The
26th International Conference on Automated Planning
and Scheduling - Demonstrations, 2016.
[Myers, 1996] Karen L Myers. Advisable planning systems.
Advanced Planning Technology, pages 206–209, 1996.
[Ramirez and Geffner, 2010] M Ramirez and H Geffner.
Probabilistic plan recognition using off-the-shelf classical
planners. In AAAI, 2010.
[Riabov et al., 2014] Anton Riabov, Shirin Sohrabi, and Octavian Udrea. New algorithms for the top-k planning problem. In Proceedings of the Scheduling and Planning Applications woRKshop (SPARK) at the 24th International Conference on Automated Planning and Scheduling (ICAPS),
pages 10–16, 2014.
[Sengupta et al., 2017] Sailik Sengupta, Tathagata Chakraborti, Sarath Sreedharan, and Subbarao Kambhampati. RADAR - A Proactive Decision Support System for Human-in-the-Loop Planning. In AAAI Fall Symposium on Human-Agent Groups, 2017.
[Shaw, 2017] Darren Shaw. How Wimbledon is using IBM
Watson AI to power highlights, analytics and enriched fan
experiences. https://goo.gl/r6z3uL, 2017.
[Sohrabi et al., 2011] Shirin Sohrabi, Jorge A Baier, and
Sheila A McIlraith. Preferred explanations: Theory and
generation via planning. In AAAI, 2011.
[Talamadupula et al., 2013] Kartik Talamadupula, Subbarao Kambhampati, Yuheng Hu, Tuan Nguyen, and
Hankz Hankui Zhuo. Herding the crowd: Automated
planning for crowdsourced planning. In HCOMP, 2013.
[Thayer, 2010] Jordan Thayer. Search Visualizations. https://www.youtube.com/user/TheSuboptimalGuy, 2010.
[Thayer, 2012] Jordan Tyler Thayer. Heuristic search under
time and quality bounds. Ph. D. Dissertation, University
of New Hampshire, 2012.
A Cross Entropy based Stochastic Approximation
Algorithm for Reinforcement Learning with Linear
Function Approximation
Ajin George Joseph
arXiv:1609.09449v1 [] 29 Sep 2016
Department of Computer Science and Automation, Indian Institute of Science, Bangalore, India, [email protected]
Shalabh Bhatnagar
Department of Computer Science and Automation, Indian Institute of Science, Bangalore, India, [email protected]
In this paper, we provide a new algorithm for the problem of prediction in Reinforcement Learning, i.e., estimating the
Value Function of a Markov Reward Process (MRP) using the linear function approximation architecture, with memory
and computation costs scaling quadratically in the size of the feature set. The algorithm is a multi-timescale variant of
the very popular Cross Entropy (CE) method, which is a model based search method to find the global optimum of a real-valued function. This is the first time a model based search method is used for the prediction problem. The application of
CE to a stochastic setting is a completely unexplored domain. A proof of convergence using the ODE method is provided.
The theoretical results are supplemented with experimental comparisons. The algorithm achieves good performance fairly
consistently on many RL benchmark problems. This demonstrates the competitiveness of our algorithm against least
squares and other state-of-the-art algorithms in terms of computational efficiency, accuracy and stability.
Key words: Reinforcement Learning, Cross Entropy, Markov Reward Process, Stochastic Approximation, ODE
Method, Mean Square Projected Bellman Error, Off Policy Prediction
1. Introduction and Preliminaries
In this paper, we follow the Reinforcement Learning (RL) framework as described in [1, 2, 3]. The basic
structure in this setting is the discrete time Markov Decision Process (MDP) which is a 4-tuple (S, A, R,
P), where S denotes the set of states and A is the set of actions. R : S × A × S → R is the reward function
where R(s, a, s0 ) represents the reward obtained in state s after taking action a and transitioning to s0 .
Without loss of generality, we assume that the reward function is bounded, i.e., |R(., ., .)| ≤ Rmax < ∞.
P : S × A × S → [0, 1] is the transition probability kernel, where P(s, a, s0 ) = P(s0 |s, a) is the probability of
next state being s0 conditioned on the fact that the current state is s and action taken is a. We assume that
the state and action spaces are finite with |S| = n and |A| = b. A stationary policy π : S → A is a function
from states to actions, where π(s) is the action taken in state s. A given policy π along with the transition
kernel P determines the state dynamics of the system. For a given policy π, the system behaves as a Markov
Reward Process (MRP) with transition matrix Pπ (s, s0 ) = P(s, π(s), s0 ). The policy can also be stochastic
in order to incorporate exploration. In that case, for a given s ∈ S, π(.|s) is a probability distribution over
the action space A.
For a given policy π, the system evolves at each discrete time step and this process can be captured as
a sequence of triplets (st , rt , s0t ), t ≥ 0, where st is the random variable which represents the current state
at time t, s0t is the transitioned state from st and rt = R(st , π(st ), s0t ) is the reward associated with the
transition. In this paper, we are concerned with the problem of prediction, i.e., estimating the long-run $\gamma$-discounted cost $V^{\pi} \in \mathbb{R}^{S}$ (also referred to as the Value function) corresponding to the given policy $\pi$. Here,
given s ∈ S, we let
$$V^{\pi}(s) \triangleq \mathbb{E}\bigg[\sum_{t=0}^{\infty}\gamma^{t} r_{t} \,\bigg|\, s_{0} = s\bigg], \qquad (1)$$
where γ ∈ [0, 1) is a constant called the discount factor and E[·] is the expectation over sample trajectories
of states obtained in turn from Pπ when starting from the initial state s. V π is represented as a vector in
R|S| . V π satisfies the well known Bellman equation under policy π, given by
$$V^{\pi} = R^{\pi} + \gamma P^{\pi} V^{\pi} \triangleq T^{\pi} V^{\pi}, \qquad (2)$$
where $R^{\pi} \triangleq (R^{\pi}(s), s \in S)^{\top}$ with $R^{\pi}(s) = \mathbb{E}[r_t \mid s_t = s]$, $V^{\pi} \triangleq (V^{\pi}(s), s \in S)^{\top}$ and $T^{\pi}V^{\pi} \triangleq ((T^{\pi}V^{\pi})(s), s \in S)^{\top}$, respectively. Here $T^{\pi}$ is called the Bellman operator. If the model information, i.e., $P^{\pi}$ and $R^{\pi}$, is available, then we can obtain the value function $V^{\pi}$ by analytically solving the linear system $V^{\pi} = (I - \gamma P^{\pi})^{-1}R^{\pi}$.
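The closed-form solution above can be checked numerically on a tiny hand-made MRP; the transition matrix and rewards below are illustrative, not from the paper.

```python
import numpy as np

def value_function(P_pi, R_pi, gamma):
    """Solve the Bellman equation V = R + gamma * P V exactly,
    i.e., V = (I - gamma * P)^{-1} R."""
    n = P_pi.shape[0]
    return np.linalg.solve(np.eye(n) - gamma * P_pi, R_pi)

# A made-up 2-state MRP
P = np.array([[0.5, 0.5],
              [0.2, 0.8]])
R = np.array([1.0, 0.0])       # expected one-step reward per state
V = value_function(P, R, gamma=0.9)
```

The returned `V` satisfies the Bellman equation (2) up to numerical precision, which is a convenient sanity check when the model is known.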
However, in this paper, we follow the usual RL framework, where we assume that the model is inaccessible; only a sample trajectory $\{(s_t, r_t, s'_t)\}_{t=1}^{\infty}$ is available, where at each instant $t$, the state $s_t$ of the triplet
(st , rt , s0t ) is sampled using an arbitrary distribution ν over S, while the next state s0t is sampled using
Pπ (st , .) and rt is the immediate reward for the transition. The value function V π has to be estimated here
from the given sample trajectory.
To make the problem even more arduous, the number of states $n$ may be large in many practical applications, for example in games such as chess and backgammon. Such combinatorial blow-ups exemplify the underlying problem with value function estimation, commonly referred to as the curse of dimensionality. In this case, the value function is unrealizable due to both storage and computational limitations. Consequently, one has to resort to approximate solution methods, where we sacrifice precision for computational tractability. A common approach in this context is the function approximation method [1], where we approximate the value function of unobserved states using available training data.
In the linear function approximation technique, a linear architecture consisting of a set of $k$ $n$-dimensional feature vectors $\{\phi_i \in \mathbb{R}^{S}\}$, $1 \le i \le k$, with $1 \le k \ll n$, is chosen a priori. For a state $s \in S$, we define

$$\phi(s) \triangleq \begin{pmatrix}\phi_{1}(s)\\ \phi_{2}(s)\\ \vdots\\ \phi_{k}(s)\end{pmatrix} \in \mathbb{R}^{k \times 1}, \qquad \Phi \triangleq \begin{pmatrix}\phi(s_{1})^{\top}\\ \phi(s_{2})^{\top}\\ \vdots\\ \phi(s_{n})^{\top}\end{pmatrix} \in \mathbb{R}^{n \times k}, \qquad (3)$$
where the vector φ(·) is called the feature vector, while Φ is called the feature matrix.
Primarily, the task in linear function approximation is to find a weight vector z ∈ Rk such that the predicted value function Φz ≈ V π . Given Φ, the best approximation of V π is its projection on to the subspace
{Φz |z ∈ Rk } (column space of Φ) with respect to an arbitrary norm. Typically, one uses the weighted
norm $\|\cdot\|_{\nu}$, where $\nu(\cdot)$ is an arbitrary distribution over $S$. The norm $\|\cdot\|_{\nu}$ and its associated linear projection operator $\Pi_{\nu}$ are defined as

$$\|V\|_{\nu}^{2} = \sum_{i=1}^{n} V(i)^{2}\,\nu(i), \qquad \Pi_{\nu} = \Phi(\Phi^{\top} D_{\nu} \Phi)^{-1}\Phi^{\top} D_{\nu}, \qquad (4)$$

where $D_{\nu}$ is the diagonal matrix with $D^{\nu}_{ii} = \nu(i)$, $i = 1, \ldots, n$. So a familiar objective in most approximation algorithms is to find a vector $z^{*} \in \mathbb{R}^{k}$ such that $\Phi z^{*} \approx \Pi_{\nu} V^{\pi}$.
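The projection operator in equation (4) is easy to realize in code. The features and distribution below are made up for illustration.

```python
import numpy as np

def projection_matrix(Phi, nu):
    """Pi_nu = Phi (Phi^T D_nu Phi)^{-1} Phi^T D_nu, the linear
    projection onto the column space of Phi in the nu-weighted norm."""
    D = np.diag(nu)
    return Phi @ np.linalg.solve(Phi.T @ D @ Phi, Phi.T @ D)

Phi = np.array([[1.0, 0.0],
                [1.0, 1.0],
                [1.0, 2.0]])        # n = 3 states, k = 2 features
nu = np.array([0.2, 0.3, 0.5])      # sampling distribution over states
Pi = projection_matrix(Phi, nu)
```

As expected for a projection, `Pi` is idempotent and leaves vectors already in the column space of `Phi` unchanged.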
Also it is important to note that the efficacy of the learning method depends on both the features φi and
the parameter z [4]. Most commonly used features include Radial Basis Functions (RBF), Polynomials,
Fourier Basis Functions [5], Cerebellar Model Articulation Controller (CMAC) [6] etc. In this paper, we
assume that a carefully chosen set of features is available a priori.
The existing algorithms can be broadly classified as (i) Linear methods which include Temporal Difference (TD) [7], Gradient Temporal Difference (GTD [8], GTD2 [9], TDC [9]) and Residual Gradient (RG)
[10] schemes, whose computational complexities are linear in k and hence are good for large values of
k and (ii) Second order methods which include Least Squares Temporal Difference (LSTD) [11, 12] and
Least Squares Policy Evaluation (LSPE) [13] whose computational complexities are quadratic in k and are
useful for moderate values of k. Second order methods, albeit computationally expensive, are seen to be
more data efficient than others except in the case when trajectories are very small [14].
Eligibility traces [7] can be integrated into most of these algorithms to improve the convergence rate.
Eligibility trace is a mechanism to accelerate learning by blending temporal difference methods with Monte
Carlo simulation (averaging the values) and weighted using a geometric distribution with parameter λ ∈
[0, 1). The algorithms with eligibility traces are named with (λ) appended, for example TD(λ), LSTD(λ)
etc. In this paper, we do not consider the treatment of eligibility traces.
Sutton’s TD(λ) algorithm with function approximation [7] is one of the fundamental algorithms in RL.
TD(λ) is an online, incremental algorithm, where at each discrete time t, the weight vectors are adjusted to
better approximate the target value function. The simplest case of the one-step TD learning, i.e. λ = 0, starts
with an initial vector z0 and the learning continues at each discrete time instant t where a new prediction
vector zt+1 is obtained using the recursion,
$$z_{t+1} = z_t + \alpha_{t+1}\,\delta_t(z_t)\,\phi(s_t).$$

In the above, $\alpha_t$ is the learning rate, which satisfies $\sum_{t}\alpha_t = \infty$ and $\sum_{t}\alpha_t^{2} < \infty$, and $\delta_t(z) \triangleq r_t + \gamma z^{\top}\phi(s'_t) - z^{\top}\phi(s_t)$ is called the Temporal Difference (TD) error. In on-policy cases, where the Markov chain is ergodic
and the sampling distribution ν is the stationary distribution of the Markov Chain, then with αt satisfying
the above conditions and with Φ being a full rank matrix, the convergence of TD(0) is guaranteed [15].
But in off-policy cases, i.e., where the sampling distribution ν is not the stationary distribution of the chain,
TD(0) is shown to diverge [10].
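One step of the TD(0) recursion above is a few lines of code. The sanity check uses a single-state MRP with a self-loop, reward 1 and discount 0.5, whose true value 1/(1 − 0.5) = 2 makes the TD error vanish, so the weight is a fixed point of the update.

```python
import numpy as np

def td0_step(z, phi_s, phi_next, r, gamma, alpha):
    """One TD(0) update: z <- z + alpha * delta * phi(s)."""
    delta = r + gamma * (phi_next @ z) - phi_s @ z   # TD error
    return z + alpha * delta * phi_s

phi = np.ones(1)                 # single feature, single state
z = np.array([2.0])              # true value-function weight
z_new = td0_step(z, phi, phi, r=1.0, gamma=0.5, alpha=0.1)
```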
By applying stochastic approximation theory, the limit point $z^{*}_{TD}$ of TD(0) is seen to satisfy

$$0 = \mathbb{E}\left[\delta_t(z)\phi(s_t)\right] = b - Az, \qquad (5)$$
where $A = \mathbb{E}\left[\phi(s_t)(\phi(s_t) - \gamma\phi(s'_t))^{\top}\right]$ and $b = \mathbb{E}\left[r_t\phi(s_t)\right]$. This gives rise to the Least Squares Temporal Difference (LSTD) algorithm [11, 12], which at each iteration $t$ provides estimates $A_t$ of the matrix $A$ and $b_t$ of the vector $b$; upon termination of the algorithm at time $T$, the approximation vector is evaluated as $z_T = (A_T)^{-1}b_T$.
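A batch version of this LSTD estimate can be sketched as follows; the tiny trajectory reuses the single self-looping state with reward 1 and discount 0.5, whose value is 2.

```python
import numpy as np

def lstd(samples, gamma):
    """Estimate A = E[phi (phi - gamma phi')^T] and b = E[r phi] from
    (phi_s, r, phi_next) triplets, then solve A z = b."""
    k = len(samples[0][0])
    A, b = np.zeros((k, k)), np.zeros(k)
    for phi_s, r, phi_next in samples:
        A += np.outer(phi_s, phi_s - gamma * phi_next)
        b += r * phi_s
    return np.linalg.solve(A, b)

samples = [(np.ones(1), 1.0, np.ones(1))] * 10   # illustrative trajectory
z = lstd(samples, gamma=0.5)
```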
Least Squares Policy Evaluation (LSPE) [13] is a multi-stage algorithm: in the first stage, it obtains $u_{t+1} = \arg\min_{u}\|\Phi z_t - T^{\pi}\Phi u\|_{\nu}^{2}$ using the least squares method. In the subsequent stage, it minimizes the fixed-point error using the recursion $z_{t+1} = z_t + \alpha_{t+1}(u_{t+1} - z_t)$.
Van Roy and Tsitsiklis [15] gave a different characterization of the limit point $z^{*}_{TD}$ of TD(0), as the fixed point of the projected Bellman operator $\Pi_{\nu}T^{\pi}$:

$$\Phi z = \Pi_{\nu}T^{\pi}\Phi z. \qquad (6)$$
This characterization yields a new error function, the Mean Squared Projected Bellman Error (MSPBE)
defined as
$$\mathrm{MSPBE}(z) \triangleq \|\Phi z - \Pi_{\nu}T^{\pi}\Phi z\|_{\nu}^{2}, \qquad z \in \mathbb{R}^{k}. \qquad (7)$$
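When the model of a small MRP is available, MSPBE can be computed directly from definition (7), which is useful for testing the learning algorithms discussed here. All numbers below are illustrative; in the learning setting these expectations must be estimated from samples.

```python
import numpy as np

def mspbe(z, Phi, R_pi, P_pi, nu, gamma):
    """MSPBE(z) = || Phi z - Pi_nu T^pi Phi z ||_nu^2, computed from
    model quantities (transition matrix P_pi, reward vector R_pi)."""
    D = np.diag(nu)
    Pi = Phi @ np.linalg.solve(Phi.T @ D @ Phi, Phi.T @ D)  # projection
    bellman = R_pi + gamma * P_pi @ (Phi @ z)               # T^pi Phi z
    err = Phi @ z - Pi @ bellman
    return float(err @ D @ err)

# Made-up 2-state MRP with perfect features (Phi = identity)
P = np.array([[0.0, 1.0], [1.0, 0.0]])
R = np.array([1.0, 0.0])
gamma, nu = 0.9, np.array([0.5, 0.5])
V = np.linalg.solve(np.eye(2) - gamma * P, R)   # true value function
```

With perfect features, the true value function attains MSPBE zero, while any other vector does not.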
In [9, 8], this objective function is leveraged to derive novel $\Theta(k)$ algorithms like GTD, TDC and GTD2.
GTD2 is a multi-timescale algorithm given by the following recursions:
$$z_{t+1} = z_t + \alpha_{t+1}\,\big(\phi(s_t) - \gamma\phi(s'_t)\big)\,\big(\phi(s_t)^{\top}v_t\big), \qquad (8)$$
$$v_{t+1} = v_t + \beta_{t+1}\,\big(\delta_t(z_t) - \phi(s_t)^{\top}v_t\big)\,\phi(s_t). \qquad (9)$$

The learning rates $\alpha_t$ and $\beta_t$ satisfy $\sum_{t}\alpha_t = \infty$, $\sum_{t}\alpha_t^{2} < \infty$ and $\beta_t = \eta\alpha_t$, where $\eta > 0$.
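One joint GTD2 update on the slow iterate z and the fast iterate v, following recursions (8)-(9), looks as follows. At the TD fixed point with v = 0 the update is stationary, which the single-state example (reward 1, discount 0.5, value 2) verifies.

```python
import numpy as np

def gtd2_step(z, v, phi_s, phi_next, r, gamma, alpha, beta):
    """One GTD2 update; alpha and beta are the two timescales."""
    delta = r + gamma * (phi_next @ z) - phi_s @ z
    z_new = z + alpha * (phi_s - gamma * phi_next) * (phi_s @ v)
    v_new = v + beta * (delta - phi_s @ v) * phi_s
    return z_new, v_new

phi = np.ones(1)
z, v = np.array([2.0]), np.zeros(1)    # fixed point for r=1, gamma=0.5
z_new, v_new = gtd2_step(z, v, phi, phi, r=1.0, gamma=0.5,
                         alpha=0.1, beta=0.2)
```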
Another pertinent error function is the Mean Square Bellman Residue (MSBR), which is defined as

$$\mathrm{MSBR}(z) \triangleq \mathbb{E}\big[\big(\mathbb{E}[\delta_t(z) \mid s_t]\big)^{2}\big], \qquad z \in \mathbb{R}^{k}. \qquad (10)$$
MSBR is a measure of how closely the prediction vector represents the solution to the Bellman equation.
| Algorithm | Complexity | Error | Eligibility Trace |
| LSTD      | Θ(k³)      | MSPBE | Yes |
| TD        | Θ(k)       | MSPBE | Yes |
| LSPE      | Θ(k³)      | MSPBE | Yes |
| GTD       | Θ(k)       | MSPBE | -   |
| GTD2      | Θ(k)       | MSPBE | -   |
| RG        | Θ(k)       | MSBR  | Yes |

Table 1: Comparison of the state-of-the-art function approximation RL algorithms.
The Residual Gradient (RG) algorithm [10] minimizes the error function MSBR directly using stochastic gradient search. RG, however, requires double sampling, i.e., generating two independent samples $s'_t$ and $s''_t$ of the next state when in the current state $s_t$. The recursion is given by

$$z_{t+1} = z_t + \alpha_{t+1}\left(r_t + \gamma z_t^{\top}\phi'_t - z_t^{\top}\phi_t\right)\left(\phi_t - \gamma\phi''_t\right), \qquad (11)$$

where $\phi_t \triangleq \phi(s_t)$, $\phi'_t \triangleq \phi(s'_t)$ and $\phi''_t \triangleq \phi(s''_t)$. Even though the RG algorithm guarantees convergence, due to its large variance, the convergence rate is small.
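One RG update following recursion (11) can be sketched as below; note the double sampling, with `phi_next1` and `phi_next2` coming from two independent draws of the next state. The single-state check reuses the value-2 fixed point from the earlier examples.

```python
import numpy as np

def rg_step(z, phi_s, phi_next1, phi_next2, r, gamma, alpha):
    """One Residual Gradient update with double sampling."""
    delta = r + gamma * (phi_next1 @ z) - phi_s @ z
    return z + alpha * delta * (phi_s - gamma * phi_next2)

phi = np.ones(1)   # single state: both next-state samples coincide
z_new = rg_step(np.array([2.0]), phi, phi, phi,
                r=1.0, gamma=0.5, alpha=0.1)
```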
If the feature set, i.e., the set of columns of the feature matrix $\Phi$, is linearly independent, then both the error functions MSBR and MSPBE are strongly convex. However, their respective minima are related depending on whether the feature set is perfect or not. A feature set is perfect if $V^{\pi} \in \{\Phi z \mid z \in \mathbb{R}^{k}\}$. If the feature set is perfect, then the respective minima of MSBR and MSPBE are the same; in the imperfect case, they differ. A relationship between MSBR and MSPBE can easily be established as follows:

$$\mathrm{MSBR}(z) = \mathrm{MSPBE}(z) + \|T^{\pi}\Phi z - \Pi_{\nu}T^{\pi}\Phi z\|^{2}, \qquad z \in \mathbb{R}^{k}. \qquad (12)$$
A vivid depiction of the relationship is shown in Figure 1.
Another relevant error objective is the Mean Square Error (MSE), which is the square of the $\nu$-weighted distance from $V^{\pi}$ and is defined as

$$\mathrm{MSE}(z) \triangleq \|V^{\pi} - \Phi z\|_{\nu}^{2}, \qquad z \in \mathbb{R}^{k}. \qquad (13)$$
Figure 1: Diagram depicting the relationship between the error functions MSPBE and MSBR.
In [16] and [17], the relationship between MSE and MSBR is provided. It is found that, for a given $\nu$ with $\nu(s) > 0$, $\forall s \in S$,

$$\sqrt{\mathrm{MSE}(z)} \;\le\; \frac{\sqrt{C(\nu)}}{1-\gamma}\,\sqrt{\mathrm{MSBR}(z)}, \qquad (14)$$

where $C(\nu) = \max_{s,s'}\frac{P(s,s')}{\nu(s)}$.
Another bound of considerable importance is the bound on the MSE of the limit point $z^{*}_{TD}$ of the TD(0) algorithm, provided in [15]. It is found that

$$\sqrt{\mathrm{MSE}(z^{*}_{TD})} \;\le\; \frac{1}{\sqrt{1-\gamma^{2}}}\,\sqrt{\mathrm{MSE}(z^{\nu})}, \qquad (15)$$

where $z^{\nu} \in \mathbb{R}^{k}$ satisfies $\Phi z^{\nu} = \Pi_{\nu}V^{\pi}$ and $\gamma$ is the discount factor. Table 1 provides a list of important TD
based algorithms along with the associated error objectives. The algorithm complexities are also shown in
the table.
Put succinctly, when linear function approximation is applied in an RL setting, the main task can be cast
as an optimization problem whose objective function is one of the aforementioned error functions. Typically, almost all the state-of-the-art algorithms employ gradient search technique to solve the minimization
problem. In this paper, we apply a gradient-free technique called the Cross Entropy (CE) method instead
to find the minimum. By ‘gradient-free’, we mean the algorithm does not incorporate information on the
gradient of the objective function, rather uses the function values themselves. Cross Entropy method is commonly subsumed within the general class of Model based search methods [18]. Other methods in this class
are Model Reference Adaptive Search (MRAS) [19], Gradient-based Adaptive Stochastic Search for Simulation Optimization (GASSO) [20], Ant Colony Optimization (ACO) [21] and Estimation of Distribution
Algorithms (EDAs) [22]. Model based search methods have been applied to the control problem in [23]
and to basis adaptation in [24], but this is the first time such a procedure has been applied to the prediction
problem. However, due to certain limitations in the original CE method, it cannot be directly applied to the
RL setting. In this paper, we have proposed a method to workaround these limitations of the CE method,
thereby making it a good choice for the RL setting. Note that any of the aforementioned error functions can
be employed, but in this paper, we attempt to minimize MSPBE as it offers the best approximation with less
bias to the projection Πν V π for a given policy π, using a single sample trajectory.
Our Contributions The Cross Entropy (CE) method [25, 26] is a model based search algorithm to find
the global maximum of a given real valued objective function. In this paper, we propose for the first time,
an adaptation of this method to the problem of parameter tuning in order to find the best estimates of the
value function V π for a given policy π under the linear function approximation architecture. We propose a
multi-timescale stochastic approximation algorithm which minimizes the MSPBE. The algorithm possesses
the following attractive features:
1. No restriction on the feature set.
2. The computational complexity is quadratic in the number of features (this is a significant improvement
compared to the cubic complexity of the least squares algorithms).
3. It is competitive with least squares and other state-of-the-art algorithms in terms of accuracy.
4. It is online with incremental updates.
5. It gives guaranteed convergence to the global minimum of the MSPBE.
A noteworthy observation is that since MSPBE is a strongly convex function [14], local and global minima
overlap and the fact that CE method finds the global minima as opposed to local minima, unlike gradient
search, is not really essential. Nonetheless, in the case of non-linear function approximators, the convexity
property does not hold in general and so there may exist multiple local minima in the objective and the
gradient search schemes would get stuck in local optima unlike CE based search. We have not explored
the non-linear case in this paper. However, our approach can be viewed as a significant first step towards
efficiently using model based search for policy evaluation in the RL setting.
2. Proposed Algorithm: SCE-MSPBEM
We present in this section our algorithm SCE-MSPBEM, acronym for Stochastic Cross Entropy-Mean
Squared Projected Bellman Error Minimization that minimizes the Mean Squared Projected Bellman Error
(MSPBE) by incorporating a multi-timescale stochastic approximation variant of the Cross Entropy (CE)
method.
2.1. Summary of Notation:
Let Ik×k and 0k×k be the identity matrix and the zero matrix with dimensions k × k respectively. Let fθ (·) be
the probability density function (pdf ) parametrized by θ and Pθ be its induced probability measure. Let Eθ [·]
be the expectation w.r.t. the probability distribution fθ (·). We define the (1 − ρ)-quantile of a real-valued
function H(·) w.r.t. the probability distribution fθ (·) as follows:
$$\gamma_{\rho}^{H}(\theta) \triangleq \sup\{l \in \mathbb{R} : P_{\theta}(H(\mathbf{x}) \ge l) \ge \rho\}. \qquad (16)$$
Also $\lceil a \rceil$ denotes the smallest integer greater than $a$. For $A \subset \mathbb{R}^{m}$, let $I_{A}$ represent the indicator function,
i.e., IA (x) is 1 when x ∈ A and is 0 otherwise. We denote by Z+ the set of non-negative integers. Also
we denote by R+ the set of non-negative real numbers. Thus, 0 is an element of both Z+ and R+ . In this
section, x represents a random variable and x a deterministic variable.
2.2. Background: The CE Method
To better understand our algorithm, we briefly explicate the original CE method first.
2.2.1. Objective of CE The Cross Entropy (CE) method [25, 27, 26] solves problems of the following
form:
Find x* ∈ arg max_{x∈X⊂R^m} H(x),

where H(·) is a multi-modal real-valued function and X is called the solution space.
The goal of the CE method is to find an optimal “model” or probability distribution over the solution space
X which concentrates on the global maxima of H(·). The CE method adopts an iterative procedure where
at each iteration t, a search is conducted on a space of parametrized probability distributions {fθ | θ ∈ Θ} over X, where Θ is the parameter space, to find a distribution parameter θt which reduces the Kullback-Leibler (KL) distance from the optimal model. The most commonly used class here is the exponential family of distributions.
Exponential Family of Distributions: These are denoted as C ≜ {fθ(x) = h(x) e^{θ^⊤Γ(x) − K(θ)} | θ ∈ Θ ⊂ R^d}, where h: R^m → R, Γ: R^m → R^d and K: R^d → R. By rearranging the parameters, we can show that the Gaussian distribution with mean vector µ and covariance matrix Σ belongs to C. In this case,

fθ(x) = (1/√((2π)^m |Σ|)) e^{−(x−µ)^⊤ Σ^{−1} (x−µ)/2},    (17)

and so one may let h(x) = 1/√((2π)^m), Γ(x) = (x, xx^⊤)^⊤ and θ = (Σ^{−1}µ, −(1/2)Σ^{−1})^⊤.
~ Assumption (A1): The parameter space Θ is compact.
2.2.2. CE Method (Ideal Version) The CE method aims to find a sequence of model parameters
{θt }t∈Z+ , where θt ∈ Θ and an increasing sequence of thresholds {γt }t∈Z+ where γt ∈ R, with the property
that the support of the model identified using θt , i.e., {x|fθt (x) 6= 0} is contained in the region {x|H(x) ≥
γt }. By assigning greater weight to higher values of H at each iteration, the expected behaviour of the
probability distribution sequence should improve. The most common choice for γt+1 is γρ^H(θt), the (1 − ρ)-quantile of H(x) w.r.t. the probability distribution fθt(·), where ρ ∈ (0, 1) is set a priori for the algorithm.
We take the Gaussian distribution as the preferred choice for fθ(·) in this paper. In this case, the model parameter is θ = (µ, Σ)^⊤, where µ ∈ R^m is the mean vector and Σ ∈ R^{m×m} is the covariance matrix.
The CE algorithm is an iterative procedure which starts with an initial value θ0 = (µ0 , Σ0 )> of the mean
vector and the covariance matrix tuple and at each iteration t, a new parameter θt+1 = (µt+1 , Σt+1 )> is
derived from the previous value θt as follows:
θt+1 = arg max_{θ∈Θ} Eθt[ S(H(x)) I_{H(x)≥γt+1} log fθ(x) ],    (18)
where S is a positive and strictly monotone function.
By equating the gradient w.r.t. θ of the objective function in (18) to 0 and using (17) for fθ(·), we obtain

µt+1 = Eθt[g1(H(x), x, γt+1)] / Eθt[g0(H(x), γt+1)] ≜ Υ1(H(·), θt, γt+1),    (19)

Σt+1 = Eθt[g2(H(x), x, γt+1, µt+1)] / Eθt[g0(H(x), γt+1)] ≜ Υ2(H(·), θt, γt+1, µt+1),    (20)

where

g0(H(x), γ) ≜ S(H(x)) I_{H(x)≥γ},    (21)
g1(H(x), x, γ) ≜ S(H(x)) I_{H(x)≥γ} x,    (22)
g2(H(x), x, γ, µ) ≜ S(H(x)) I_{H(x)≥γ} (x − µ)(x − µ)^⊤.    (23)
R EMARK 1. The function S(·) in (18) is positive and strictly monotone and is used to account for the cases
when the objective function H(x) takes negative values for some x. One common choice is S(x) = exp(rx)
where r ∈ R is chosen appropriately.
2.2.3. CE Method (Monte-Carlo Version) It is hard in general to evaluate Eθt[·] and γt, so the stochastic counterparts of the equations (19) and (20) are used instead in the CE algorithm. This gives rise to the Monte-Carlo version of the CE method. In this stochastic version, the algorithm generates a sequence {θ̄t = (µ̄t, Σ̄t)^⊤}_{t∈Z+}, where at each iteration t, Nt samples Λt = {x1, x2, . . . , xNt} are picked using the distribution fθ̄t and the estimate of γt+1 is obtained as γ̄t+1 = H_{(⌈(1−ρ)Nt⌉)}, where H_{(i)} is the i-th order statistic of {H(xi)}_{i=1}^{Nt}. The estimate θ̄t+1 = (µ̄t+1, Σ̄t+1)^⊤ of the model parameters θt+1 = (µt+1, Σt+1)^⊤ is obtained as
µ̄t+1 = [ (1/Nt) Σ_{i=1}^{Nt} g1(H(xi), xi, γ̄t+1) ] / [ (1/Nt) Σ_{i=1}^{Nt} g0(H(xi), γ̄t+1) ] ≜ Ῡ1(H(·), θ̄t, γ̄t+1),    (24)

Σ̄t+1 = [ (1/Nt) Σ_{i=1}^{Nt} g2(H(xi), xi, γ̄t+1, µ̄t+1) ] / [ (1/Nt) Σ_{i=1}^{Nt} g0(H(xi), γ̄t+1) ] ≜ Ῡ2(H(·), θ̄t, γ̄t+1, µ̄t+1).    (25)
An observation allocation rule {Nt ∈ Z+ }t∈Z+ is used to determine the sample size. The Monte-Carlo
version of the CE method is described in Algorithm 1.
Algorithm 1: The Monte-Carlo CE Algorithm
Step 0: Choose an initial p.d.f. fθ̄0(·) on X, where θ̄0 = (µ̄0, Σ̄0)^⊤, and fix an ε > 0.
Step 1: [Sampling Candidate Solutions] Randomly sample Nt independent and identically distributed solutions Λt = {x1, . . . , xNt} using fθ̄t(·).
Step 2: [Threshold Evaluation] Calculate the sample (1 − ρ)-quantile γ̄t+1 = H_{(⌈(1−ρ)Nt⌉)}, where H_{(i)} is the i-th order statistic of the sequence {H(xi)}_{i=1}^{Nt}.
Step 3: [Threshold Comparison]
  if γ̄t+1 ≥ γ̄t* + ε then γ̄*_{t+1} = γ̄t+1, else γ̄*_{t+1} = γ̄t*.
Step 4: [Model Parameter Update]
  θ̄t+1 = (µ̄t+1, Σ̄t+1)^⊤ = ( Ῡ1(H(·), θ̄t, γ̄*_{t+1}), Ῡ2(H(·), θ̄t, γ̄*_{t+1}, µ̄t+1) )^⊤.
Step 5: If the stopping rule is satisfied, then return θ̄t+1; else set t := t + 1 and go to Step 1.
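To make the iteration concrete, the following is a minimal Python sketch of Algorithm 1 (our illustration, not the paper's code). It uses a diagonal Gaussian model, takes S(h) ≡ 1, and maximizes the hypothetical test objective H(x) = −‖x − 2‖², whose global maximizer is x = 2; all constants are our choices:

```python
import numpy as np

def monte_carlo_ce(H, dim, rho=0.1, n_samples=500, n_iters=60, eps=0.0, seed=0):
    """Minimal sketch of Algorithm 1 for maximizing H over R^dim."""
    rng = np.random.default_rng(seed)
    # Step 0: initial model theta_0 = (mu_0, Sigma_0), here a diagonal Gaussian
    mu, sigma = np.zeros(dim), 5.0 * np.ones(dim)
    gamma_star = -np.inf
    for _ in range(n_iters):
        # Step 1: sample N_t i.i.d. candidate solutions from f_theta_t
        x = rng.normal(mu, sigma, size=(n_samples, dim))
        h = np.array([H(xi) for xi in x])
        # Step 2: sample (1 - rho)-quantile via the order statistic
        gamma = np.quantile(h, 1.0 - rho)
        # Step 3: threshold comparison
        if gamma >= gamma_star + eps:
            gamma_star = gamma
        # Step 4: model update, i.e. (24)-(25) with S(h) = 1 and diagonal Sigma
        elite = x[h >= gamma_star]
        if len(elite) > 0:
            mu = elite.mean(axis=0)
            sigma = elite.std(axis=0) + 1e-3   # small floor avoids degeneracy
    return mu

mu_opt = monte_carlo_ce(lambda v: -float(np.sum((v - 2.0) ** 2)), dim=2)
```

With these (arbitrary) settings the model concentrates near the maximizer; a full implementation would use the complete covariance update (25) and a principled observation allocation rule for Nt.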
R EMARK 2. The CE method is also applied in stochastic settings for which the objective function is given
by H(x) = Ey [G(x, y)], where y ∈ Y and Ey [·] is the expectation w.r.t. a probability distribution on Y .
Since the objective function is expressed in terms of expectation, it might be hard in some scenarios to
obtain the true values of the objective function. In such cases, estimates of the objective function are used
instead. The CE method is shown to have global convergence properties in such cases too.
2.2.4. Limitations of the CE Method A significant limitation of the CE method is its dependence on
the sample size Nt used in Step 1 of Algorithm 1. One does not know a priori the best value for the sample
size Nt. Higher values of Nt, while resulting in higher accuracy, also require more computational resources.
One often needs to apply brute force in order to obtain a good choice of Nt . Also as m, the dimension of
the solution space, takes large values, more samples are required for better accuracy, making Nt large as
well. This makes finding the ith-order statistic H(i) in Step 2 harder. Note that the order statistic H(i) is
obtained by sorting the list {H(x1 ), H(x2 ), . . . H(xNt )}. The computational effort required in that case is
O(Nt log Nt ) which in most cases is inadmissible. The other major bottleneck is the space required to store
the samples Λt . In situations when m and Nt are large, the storage requirement is a major concern.
The CE method is also offline in nature. This means that the function values {H(x1 ), . . . , H(xNt )} of the
sample set Λt = {x1 , . . . , xNt } should be available before the model parameters can be updated in Step 4 of
Algorithm 1. So when applied in the prediction problem of approximating the value function V π for a given
policy π using the linear architecture defined in (3) by minimizing the error function MSPBE(·), we require
the estimates of {MSPBE(x1 ), . . . , MSPBE(xNt )}. This means that a sufficiently long traversal along
the given sample trajectory has to be conducted to obtain the estimates before initiating the CE method.
This does not make the CE method amenable to online implementations in RL, where the value function
estimations are performed in real-time after each observation.
In this paper, we resolve all these shortcomings of the CE method by recasting it in the stochastic approximation framework, replacing the sample averaging operations in equations (24) and (25) with a bootstrapping approach in which we continuously improve the estimates based on past observations.
Each successive estimate is obtained as a function of the previous estimate and a noise term. We replace
the (1 − ρ)-quantile estimation using the order statistic method in Step 2 of Algorithm 1 with a stochastic
recursion which serves the same purpose, but more efficiently. The model parameter update in Step 4 is
also replaced with a stochastic recursion. We also bring in additional modifications to the CE method to
adapt to a Markov Reward Process (MRP) framework and thus obtain an online version of CE where the
computational requirements are quadratic in the size of the feature set for each observation. To fit the online
nature, we have developed an expression for the objective function MSPBE, where we are able to separate
its deterministic and non-deterministic components. This separation is critical since, in the original expression of MSPBE, the solution vector is entangled with the expectation terms, which makes it unrealizable directly. The
separation further helps to develop a stochastic recursion for estimating MSPBE. Finally, in this paper, we
provide a proof of convergence of our algorithm using an ODE based analysis.
2.3. Proposed Algorithm (SCE-MSPBEM)
Notation: In this section, z represents a random variable and z a deterministic variable.
SCE-MSPBEM is an algorithm to approximate the value function V π (for a given policy π) with linear
function approximation, where the optimization is performed using a multi-timescale stochastic approximation variant of the CE algorithm. Since the CE method is a maximization algorithm, the objective function
in the optimization problem here is the negative of MSPBE. Thus,
z* = arg min_{z∈Z⊂R^k} MSPBE(z) = arg max_{z∈Z⊂R^k} J(z),    (26)

where J ≜ −MSPBE. Here Z is the solution space, i.e., the space of parameter values of the function approximator. We also define J* ≜ J(z*).
R EMARK 3. Since ∃z ∈ Z such that Φz = Πν T π Φz, the value of J ∗ is 0.
~ Assumption (A2): The solution space Z is compact, i.e., it is closed and bounded.
In [9] a compact expression for MSPBE is given as follows:

MSPBE(z) = (Φ^⊤Dν(TπVz − Vz))^⊤ (Φ^⊤DνΦ)^{−1} (Φ^⊤Dν(TπVz − Vz)).    (27)

Using the fact that Vz = Φz, the expression Φ^⊤Dν(TπVz − Vz) can be rewritten as

Φ^⊤Dν(TπVz − Vz) = E[E[φt(rt + γz^⊤φ′t − z^⊤φt) | st]]
 = E[E[φt rt | st]] + E[E[φt(γφ′t − φt)^⊤ | st]] z,    (28)

where φt ≜ φ(st) and φ′t ≜ φ(s′t). Also,

Φ^⊤DνΦ = E[φt φt^⊤].    (29)

Putting it all together, we get

MSPBE(z) = (E[E[φt rt | st]] + E[E[φt(γφ′t − φt)^⊤ | st]] z)^⊤ (E[φt φt^⊤])^{−1} (E[E[φt rt | st]] + E[E[φt(γφ′t − φt)^⊤ | st]] z)    (30)
 = (ω∗^(0) + ω∗^(1) z)^⊤ ω∗^(2) (ω∗^(0) + ω∗^(1) z),    (31)

where ω∗^(0) ≜ E[E[φt rt | st]], ω∗^(1) ≜ E[E[φt(γφ′t − φt)^⊤ | st]] and ω∗^(2) ≜ (E[φt φt^⊤])^{−1}.
This is a quadratic function in z. Note that, in the above expression, the parameter vector z and the stochastic components involving E[·] are decoupled. Hence the stochastic components can be estimated independently of the parameter vector z.
The goal of this paper is to adapt the CE method to an MRP setting in an online fashion, where we solve the prediction problem, which is a continuous stochastic optimization problem. The important tool we employ here to achieve this is the stochastic approximation framework. Here we take a slight digression to explain stochastic approximation algorithms.
Stochastic approximation algorithms [28, 29, 30] are a natural way of utilizing prior information. They do so by discounted averaging of the prior information and are usually expressed as recursive equations of the following form:
zj+1 = zj + αj+1 ∆zj+1 ,
(32)
where ∆zj+1 = q(zj ) + bj + Mj+1 is the increment term, q(·) is a Lipschitz continuous function, bj is the
bias term with bj → 0 and {Mj } is a martingale difference noise sequence, i.e., Mj is Fj -measurable and
integrable and E[Mj+1 |Fj ] = 0, ∀j. Here {Fj }j∈Z+ is a filtration, where the σ-field Fj = σ(zi , Mi , 1 ≤ i ≤
j, z0 ). The learning rate αj satisfies Σj αj = ∞, Σj αj2 < ∞.
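As a minimal illustration (ours, not from the paper), the recursion (32) with q(z) = E[x] − z and martingale noise M_{j+1} = x_j − E[x] estimates the mean of a distribution from a stream of samples:

```python
import numpy as np

rng = np.random.default_rng(0)
z = 0.0
for j in range(1, 200001):
    x = rng.normal(3.0, 1.0)   # noisy observation with E[x] = 3
    alpha = 1.0 / j            # sum(alpha_j) = inf, sum(alpha_j^2) < inf
    # increment Delta z = q(z) + M, with q(z) = E[x] - z (Lipschitz) and
    # martingale difference noise M = x - E[x]
    z = z + alpha * (x - z)
```

Here z_j is exactly the running average of the observations, and the associated ODE ż(t) = 3 − z(t) has the globally asymptotically stable equilibrium 3, which the iterates track.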
We have the following well known result from [28] regarding the limiting behaviour of the stochastic
recursion (32):
T HEOREM 1. Assume supj kzj k < ∞, E[kMj+1 k2 |Fj ] ≤ K(1 + kzj k2 ), ∀j and q(·) is Lipschitz continuous. Then the iterates zj converge almost surely to the compact connected internally chain transitive
invariant set of the ODE:
ż(t) = q(z(t)), t ∈ R+ .
(33)
Put succinctly, the above theorem establishes an equivalence between the asymptotic behaviour of the iterates {zj} and the deterministic ODE (33). In many practical cases, the ODE has a unique globally asymptotically stable equilibrium at some point z*. It then follows from the theorem that zj → z* a.s. However, in some cases, the ODE can have multiple isolated stable equilibria. In such cases, the convergence of {zj} to one of these equilibria is guaranteed, but the limit point depends on the noise and the initial value.
A relevant extension of the stochastic approximation algorithms is the multi-timescale variant. Here there are multiple stochastic recursions of the kind (32), each with a possibly different learning rate. The learning rate defines the timescale of the particular recursion, so different learning rates imply different timescales. If the increment terms are well behaved and the learning rates properly related (as defined in Chapter 6 of [28]), then the chain of recursions exhibits a well-defined asymptotic behaviour. See Chapter 6 of [28] for more details.
Now digression aside, note that the important stochastic variables of the ideal CE method are H, γk , µk ,
Σk and θk . Here, the objective function H = J . In our approach, we track these variables independently
using stochastic recursions of the kind (32). Thus we model our algorithm as a multi-timescale stochastic
approximation algorithm which tracks the ideal CE method. Note that each stochastic recursion is uniquely identified by its increment term, its initial value and its learning rate. We now consider these recursions in detail.
1. Tracking the Objective Function J(·): Recall that the goal of the paper is to develop an online and incremental prediction algorithm. This implies that the algorithm has to learn from a given sample trajectory using an incremental approach with a single traversal of the trajectory. The algorithm SCE-MSPBEM operates online on a single sample trajectory {(st, rt, s′t)}_{t=0}^∞, where st ∼ ν(·), s′t ∼ P^π(st, ·) and rt = R(st, π(st), s′t).
~ Assumption (A3): For the given trajectory {(st, rt, s′t)}_{t=0}^∞, let φt, φ′t, and rt have uniformly bounded second moments. Also, E[φt φt^⊤] is non-singular.
In the expression (31) for the objective function J(·), we have isolated the stochastic and deterministic parts. The stochastic part can be identified by the tuple ω∗ ≜ (ω∗^(0), ω∗^(1), ω∗^(2))^⊤. So if we can find a way to track ω∗, then we can track J(·). This is the line of thought we follow here. In our algorithm, we track ω∗ using the time-dependent variable ωt ≜ (ωt^(0), ωt^(1), ωt^(2))^⊤, where ωt^(0) ∈ R^k, ωt^(1) ∈ R^{k×k} and ωt^(2) ∈ R^{k×k}. Here ωt^(i) independently tracks ω∗^(i), 0 ≤ i ≤ 2. Note that tracking implies lim_{t→∞} ωt^(i) = ω∗^(i), 0 ≤ i ≤ 2. The increment term ∆ωt ≜ (∆ωt^(0), ∆ωt^(1), ∆ωt^(2))^⊤ used for this recursion is defined as follows:

∆ωt^(0) = rt φt − ωt^(0),
∆ωt^(1) = φt(γφ′t − φt)^⊤ − ωt^(1),    (34)
∆ωt^(2) = I_{k×k} − φt φt^⊤ ωt^(2),
where φt ≜ φ(st) and φ′t ≜ φ(s′t). Now we define a new function J̄(ωt, z) ≜ −(ωt^(0) + ωt^(1) z)^⊤ ωt^(2) (ωt^(0) + ωt^(1) z). Note that this is the same expression as (31), up to sign (recall J = −MSPBE), except for ωt replacing ω∗. Since ωt tracks ω∗, it is easily verifiable that J̄(ωt, z) indeed tracks J(z) for a given z ∈ Z.
The stochastic recursions which track ω∗ and the objective function J(·) are defined in (41) and (42) respectively. A rigorous analysis of the above stochastic recursion is provided in Lemma 2. There we also find that the initial value ω0 is irrelevant.
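A minimal sketch of the tracking recursion ωt+1 = ωt + αt+1 ∆ωt+1 with the increments (34) (our illustration; the feature/reward stream is synthetic, with φt, φ′t ∼ N(0, I) so that E[φt φt^⊤] = I):

```python
import numpy as np

rng = np.random.default_rng(2)
k, gamma = 2, 0.9
w0 = np.zeros(k)         # tracks omega*(0) = E[r_t phi_t]
w1 = np.zeros((k, k))    # tracks omega*(1) = E[phi_t (gamma phi'_t - phi_t)^T]
w2 = np.zeros((k, k))    # tracks omega*(2) = (E[phi_t phi_t^T])^{-1}

for t in range(1, 100001):
    # hypothetical transition sample (phi_t, r_t, phi'_t)
    phi, phi_p, r = rng.normal(size=k), rng.normal(size=k), rng.normal()
    a = 1.0 / t                                      # step size alpha_{t+1}
    w0 += a * (r * phi - w0)                         # increments (34)
    w1 += a * (np.outer(phi, gamma * phi_p - phi) - w1)
    w2 += a * (np.eye(k) - np.outer(phi, phi) @ w2)

def J_bar(z):
    v = w0 + w1 @ z
    return -float(v @ w2 @ v)                        # cf. (42)
```

For this synthetic stream the limits are ω∗^(1) = −I and ω∗^(2) = I, and J̄(ωt, z) ≤ 0 since w2 becomes positive definite.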
2. Tracking γρ(J, θ): Here we are faced with two difficulties: (i) the true objective function J is unavailable and (ii) we have to find a stochastic recursion which tracks γρ(J, θ) for a given distribution parameter θ. To solve (i) we use what is available, i.e. J̄(ωt, ·), which is the best available estimate of the true function J(·) at time t. In other words, we bootstrap. To address (ii) we make use of the following lemma from [31]. The lemma provides a characterization of the (1 − ρ)-quantile of a given real-valued function H w.r.t. a given probability distribution function fθ.

LEMMA 1. The (1 − ρ)-quantile of a bounded real-valued function H(·) with H(x) ∈ [Hl, Hu] w.r.t. the probability density function fθ(·) can be reformulated as the optimization problem

γρ(H, θ) = arg min_{y∈[Hl,Hu]} Eθ[ψ(H(x), y)],    (35)

where x ∼ fθ(·), ψ(H(x), y) = (1 − ρ)(H(x) − y)I_{H(x)≥y} + ρ(y − H(x))I_{H(x)≤y} and Eθ[·] is the expectation w.r.t. the p.d.f. fθ(·).
In this paper, we employ the time-dependent variable γt to track γρ(J, ·). The increment used in the recursion is the subdifferential ∇y ψ: as follows from its definition in the above lemma, ψ is non-differentiable, but its subdifferential exists, and we utilize it to solve the optimization problem (35) defined in Lemma 1. Here, we define an increment function (as opposed to an increment term) as follows:

∆γt+1(z) = −(1 − ρ) I_{J̄(ωt,z)≥γt} + ρ I_{J̄(ωt,z)≤γt}.    (36)
The stochastic recursion which tracks γρ(J, ·) is given in (43). A deeper analysis of the recursion (43) is provided in Lemma 3, where we also find that the initial value γ0 is irrelevant.
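The following sketch (ours, with a synthetic stream of function values) runs the recursion (43) with the increment (36) against fixed standard-normal draws standing in for J̄(ωt, z_{t+1}); the iterate settles near the point γ with P(h ≥ γ) = ρ, i.e. the (1 − ρ)-quantile:

```python
import numpy as np

rng = np.random.default_rng(3)
rho, gamma_t = 0.1, 0.0

for t in range(1, 300001):
    # stand-in for J_bar(omega_t, z_{t+1}) with z_{t+1} ~ f_theta; here the
    # function values are simply N(0, 1) draws
    h = rng.normal()
    beta = 1.0 / t ** 0.6    # faster-timescale step size
    # increment function (36)
    delta = -(1.0 - rho) * float(h >= gamma_t) + rho * float(h <= gamma_t)
    gamma_t = gamma_t - beta * delta        # recursion (43)
```

At equilibrium the mean increment vanishes, i.e. (1 − ρ)P(h ≥ γ) = ρP(h ≤ γ), which gives P(h ≥ γ) = ρ; for ρ = 0.1 and N(0, 1) this is γ ≈ 1.2816.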
3. Tracking Υ1 and Υ2: In the ideal CE method, for a given θt, note that Υ1(θt, . . . ) and Υ2(θt, . . . ) form the subsequent model parameter θt+1. In our algorithm, we completely avoid the sample averaging technique employed in the Monte-Carlo version. Instead, we follow stochastic approximation recursions to track the above quantities. Two time-dependent variables ξt^(0) and ξt^(1) are employed to track Υ1 and Υ2 respectively. The increment functions used by their respective recursions are defined as follows:

∆ξt+1^(0)(z) = g1(J̄(ωt, z), z, γt) − ξt^(0) g0(J̄(ωt, z), γt),    (37)
∆ξt+1^(1)(z) = g2(J̄(ωt, z), z, γt, ξt^(0)) − ξt^(1) g0(J̄(ωt, z), γt).    (38)

The recursive equations which track Υ1 and Υ2 are defined in (44) and (45) respectively. The analysis of these recursions is provided in Lemma 4. In this case also, the initial values are irrelevant.
4. Model Parameter Update: In the ideal version of CE, note that given θt, we have θt+1 = (Υ1(H(·), θt, . . . ), Υ2(H(·), θt, . . . ))^⊤. This is a discrete change from θt to θt+1. In our algorithm, however, we adopt a smooth update of the model parameters, whose recursion is defined in equation (48). We prove in Theorem 2 that this approach indeed provides an optimal solution to the optimization problem defined in (26).
5. Learning Rates and Timescales: The algorithm uses two learning rates αt and βt which are deterministic, positive, non-increasing and satisfy the following conditions:

Σ_{t=1}^∞ αt = Σ_{t=1}^∞ βt = ∞,    Σ_{t=1}^∞ (αt^2 + βt^2) < ∞,    lim_{t→∞} αt/βt = 0.    (39)
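As a concrete example (our choice for illustration, not one prescribed by the algorithm), the polynomial schedules

```latex
\alpha_t = \frac{1}{t}, \qquad \beta_t = \frac{1}{t^{0.6}}, \qquad t \ge 1,
```

satisfy (39): Σ_t 1/t = ∞ and Σ_t t^{−0.6} = ∞ (exponents ≤ 1), Σ_t (t^{−2} + t^{−1.2}) < ∞ (exponents > 1), and αt/βt = t^{−0.4} → 0.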
In a multi-timescale stochastic approximation setting, it is important to understand the difference between timescale and learning rate. The timescale of a stochastic recursion is defined by its learning rate (also referred to as step-size). Note that from the conditions imposed on the learning rates αt and βt in (39), we have αt/βt → 0, so αt decays to 0 faster than βt. Hence the timescale obtained from βt, t ≥ 0 is faster than the one obtained from αt. In a multi-timescale scenario, the recursions controlled by the faster-decaying step-sizes evolve more slowly than those controlled by the slower-decaying step-sizes, because the learning rates weight the increments, i.e., they control the amount of change that occurs to the variables when the update is executed. So the faster-timescale recursions converge faster than their slower counterparts. In fact, when observed from a faster-timescale recursion, the slower-timescale recursions appear almost stationary. This attribute of multi-timescale recursions is very important in the analysis of the algorithm: when studying the asymptotic behaviour of a particular stochastic recursion, we can consider the variables of other recursions on slower timescales to be constant. In our algorithm, the recursions of ωt and θt proceed along the slowest timescale, so the updates of ωt appear quasi-static when viewed from the timescale on which the recursions governed by βt proceed. The recursions of γt, ξt^(0) and ξt^(1) proceed along the faster timescale and hence have a faster convergence rate. The stable behaviour of the algorithm is attributed to the timescale differences obeyed by the various recursions.
6. Sample Requirement: The streamlined nature inherent in stochastic approximation algorithms demands only a single sample per iteration. In fact, we use two samples, zt+1 (generated in (40)) and z^p_{t+1} (generated in (46); its discussion is deferred for the time being). This is a remarkable improvement, apart from the fact that the algorithm is now online and incremental, in the sense that whenever a new state transition (st, rt, s′t) is revealed, the algorithm learns from it by evolving the variables involved and directing the model parameter θt towards the degenerate distribution concentrated on the optimum point z*.
7. Mixture Distribution: In the algorithm, we use a mixture distribution f̂θt to generate the sample zt+1, where f̂θt = (1 − λ)fθt + λfθ0 with mixing weight λ ∈ (0, 1). The initial distribution parameter θ0 is chosen such that the density function fθ0 is strictly positive at every point of the solution space, i.e., fθ0(z) > 0, ∀z ∈ Z. The mixture approach facilitates exploration of the solution space and prevents the iterates from getting stranded in suboptimal solutions.
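A sketch of the mixture sampling step (ours; scalar Gaussian case with hypothetical parameter values):

```python
import numpy as np

rng = np.random.default_rng(4)
lam = 0.1   # mixing weight lambda

def sample_mixture(mu_t, sigma_t, mu_0, sigma_0, n):
    """Draw n samples from f_hat = (1 - lam) f_theta_t + lam f_theta_0."""
    from_base = rng.random(n) < lam          # with prob. lambda use f_theta_0
    return np.where(from_base,
                    rng.normal(mu_0, sigma_0, n),
                    rng.normal(mu_t, sigma_t, n))

# even if f_theta_t has nearly collapsed around 5, a lambda-fraction of the
# samples still explores the solution space via the broad f_theta_0
z = sample_mixture(mu_t=5.0, sigma_t=0.01, mu_0=0.0, sigma_0=10.0, n=100000)
```

Roughly a λ-fraction of the draws comes from the broad initial distribution, which is what prevents the iterates from getting trapped near a collapsed model.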
The SCE-MSPBEM algorithm is formally presented in Algorithm 2.
Algorithm 2: SCE-MSPBEM
Data: αt, βt, ct ∈ (0, 1), ct → 0, ε1, λ, ρ ∈ (0, 1), S(·): R → R+;
Initialization: γ0 = 0, γ0^p = −∞, θ0 = (µ0, Σ0)^⊤, T0 = 0, ξ0^(0) = 0_{k×1}, ξ0^(1) = 0_{k×k}, ω0^(0) = 0_{k×1}, ω0^(1) = 0_{k×k}, ω0^(2) = 0_{k×k}, θ^p = NULL;
foreach (st, rt, s′t) of the trajectory do
  f̂θt = (1 − λ)fθt + λfθ0;  zt+1 ∼ f̂θt(·);    (40)
  • [Objective Function Evaluation]
    ωt+1 = ωt + αt+1 ∆ωt+1;    (41)
    J̄(ωt, zt+1) = −(ωt^(0) + ωt^(1) zt+1)^⊤ ωt^(2) (ωt^(0) + ωt^(1) zt+1);    (42)
  • [Threshold Evaluation]
    γt+1 = γt − βt+1 ∆γt+1(zt+1);    (43)
  • [Tracking µt+1 and Σt+1 of (19) and (20)]
    ξt+1^(0) = ξt^(0) + βt+1 ∆ξt+1^(0)(zt+1);    (44)
    ξt+1^(1) = ξt^(1) + βt+1 ∆ξt+1^(1)(zt+1);    (45)
  if θ^p ≠ NULL then
    z^p_{t+1} ∼ f̂θ^p(·) ≜ λfθ0 + (1 − λ)fθ^p;  γ^p_{t+1} = γ^p_t − βt+1 ∆γt+1(z^p_{t+1});    (46)
  • [Threshold Comparison]
    Tt+1 = Tt + c ( I_{γt+1 > γ^p_{t+1}} − I_{γt+1 ≤ γ^p_{t+1}} − Tt );    (47)
  • [Updating Model Parameter]
  if Tt+1 > ε1 then
    θ^p = θt;
    θt+1 = θt + αt+1 ( (ξt^(0), ξt^(1))^⊤ − θt );    (48)
    γ^p_{t+1} = γt;    (49)
    Tt = 0;  c = ct;
  else
    γ^p_{t+1} = γ^p_t;  θt+1 = θt;
  t := t + 1;
Figure 2: Flowchart representation of the algorithm SCE-MSPBEM.
REMARK 4. In practice, different stopping criteria can be used. For instance, (a) t reaches an a priori fixed limit, (b) the computational resources are exhausted, or (c) the variable Tt is unable to cross the ε1 threshold for an a priori fixed number of iterations.
The pictorial depiction of the algorithm SCE-MSPBEM is shown in Figure 2.
It is important to note that the model parameter θt is not updated at each t. Rather, it is updated every time Tt exceeds ε1, where 0 < ε1 < 1. So the update of θt only happens along a subsequence {t_(n)}_{n∈Z+} of {t}_{t∈Z+}. Between t = t_(n) and t = t_(n+1), the variable γt estimates the quantity γρ(J̄_{ωt}, θ̂_{t_(n)}). The threshold γt^p is also updated during the ε1 crossover in (49). Thus γ^p_{t_(n)} is the estimate of the (1 − ρ)-quantile w.r.t. f̂_{θ_{t_(n−1)}}. Thus Tt in recursion (47) is an elegant trick to ensure that the estimates γt eventually become greater than the prior threshold γ^p_{t_(n)}, i.e., γt > γ^p_{t_(n)} for all but finitely many t. A timeline map of the algorithm is shown in Figure 3.
It can be verified as follows that the random variable Tt belongs to (−1, 1), ∀t > 0. We state it as a
proposition here.
P ROPOSITION 1. For any T0 ∈ (0, 1), Tt in (47) belongs to (−1, 1), ∀t > 0.
Proof: Assume T0 ∈ (0, 1). Now equation (47) can be rearranged as

Tt+1 = (1 − c) Tt + c ( I_{γt+1 > γ^p_{t+1}} − I_{γt+1 ≤ γ^p_{t+1}} ),
Figure 3: Timeline graph of the algorithm SCE-MSPBEM.
where c ∈ (0, 1). In the worst case, either I_{γt+1 > γ^p_{t+1}} = 1, ∀t, or I_{γt+1 ≤ γ^p_{t+1}} = 1, ∀t. Since the two events {γt+1 > γ^p_{t+1}} and {γt+1 ≤ γ^p_{t+1}} are mutually exclusive, we first consider the former event, I_{γt+1 > γ^p_{t+1}} = 1, ∀t. In this case,

lim_{t→∞} Tt = lim_{t→∞} [ c + c(1 − c) + c(1 − c)^2 + · · · + c(1 − c)^{t−1} + (1 − c)^t T0 ]
 = lim_{t→∞} [ c(1 − (1 − c)^t)/c + T0 (1 − c)^t ] = lim_{t→∞} [ (1 − (1 − c)^t) + T0 (1 − c)^t ] = 1.    (∵ c ∈ (0, 1))

Similarly, for the latter event I_{γt+1 ≤ γ^p_{t+1}} = 1, ∀t, we can prove that lim_{t→∞} Tt = −1.
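The worst-case argument can be checked numerically (our sketch): iterating the rearranged recursion with the indicator difference fixed at +1 drives Tt geometrically towards 1 without ever reaching it.

```python
# worst-case run of recursion (47): the indicator difference equals +1 at
# every step, so T_{t+1} = (1 - c) T_t + c
c, T = 0.05, 0.3        # c in (0, 1), T_0 in (0, 1)
history = []
for _ in range(500):
    T = (1.0 - c) * T + c * 1.0
    history.append(T)
# T increases geometrically towards 1 but stays strictly inside (-1, 1)
```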
REMARK 5. The recursion in equation (46) is not addressed in the discussion above. The update of γt^p in equation (49) happens along a subsequence {t_(n)}_{n≥0}, so γ^p_{t_(n)} is the estimate of γρ(J̄_{ω_{t_(n)}}, θ_{t_(n−1)}), where J̄_{ω_{t_(n)}}(·) = J̄(ω_{t_(n)}, ·). However, at time t_(n) < t ≤ t_(n+1), γt^p is compared with γt in equation (47), and γt is derived from a better estimate of J̄(ωt, ·). Equation (46) ensures that γt^p is updated using the latest estimate of J̄(ωt, ·). The variable θ^p holds the model parameter θ_{t_(n−1)}, and the update of γt^p in (46) is performed using the sample z^p_{t+1} drawn from f̂θ^p(·).
3. Convergence Analysis
For analyzing the asymptotic behaviour of the algorithm, we apply the ODE-based analysis from [29, 32, 28], where an ODE whose asymptotic behaviour is eventually tracked by the stochastic system is identified. The long-run behaviour of the equivalent ODE is studied and it is argued that the algorithm asymptotically converges almost surely to the set of stable fixed points of the ODE. We define the filtration {Ft}_{t∈Z+}, where the σ-field Ft = σ(ωi, γi, γi^p, ξi^(0), ξi^(1), θi, 0 ≤ i ≤ t; zi, 1 ≤ i ≤ t; si, ri, s′i, 0 ≤ i < t), t ∈ Z+.
It is worth mentioning that the recursion (41) is independent of other recursions and hence can be analysed
independently. For the recursion (41) we have the following result.
LEMMA 2. Let the step-size sequences αt and βt, t ∈ Z+, satisfy (39). For the sample trajectory {(st, rt, s′t)}_{t=0}^∞, we let Assumption (A3) hold and let ν be the sampling distribution. Then, for a given z ∈ Z, the iterates ωt in equation (41) satisfy, with probability one,

lim_{t→∞} (ωt^(0) + ωt^(1) z) = ω∗^(0) + ω∗^(1) z,    lim_{t→∞} ωt^(2) = ω∗^(2)    and    lim_{t→∞} J̄(ωt, z) = J(z),

where J̄(ωt, z) is defined in equation (42), J(z) is defined in equation (26), Φ is defined in equation (3) and Dν is defined in equation (4) respectively.
Proof: By rearranging the equations in (41), for t ∈ Z+, we get

ωt+1^(0) = ωt^(0) + αt+1 ( Mt+1^(0,0) + h^(0,0)(ωt^(0)) ),    (50)

where Mt+1^(0,0) = rt φt − E[rt φt] and h^(0,0)(x) = E[rt φt] − x. Similarly,

ωt+1^(1) = ωt^(1) + αt+1 ( Mt+1^(0,1) + h^(0,1)(ωt^(1)) ),    (51)

where Mt+1^(0,1) = φt(γφ′t − φt)^⊤ − E[φt(γφ′t − φt)^⊤] and h^(0,1)(x) = E[φt(γφ′t − φt)^⊤] − x. Finally,

ωt+1^(2) = ωt^(2) + αt+1 ( Mt+1^(0,2) + h^(0,2)(ωt^(2)) ),    (52)

where Mt+1^(0,2) = E[φt φt^⊤] ωt^(2) − φt φt^⊤ ωt^(2) and h^(0,2)(x) = I_{k×k} − E[φt φt^⊤] x.

It is easy to verify that h^(0,i), 0 ≤ i ≤ 2, are Lipschitz continuous and that {Mt+1^(0,i)}_{t∈Z+}, 0 ≤ i ≤ 2, are martingale difference noise terms, i.e., for each i, Mt^(0,i) is Ft-measurable, integrable and E[Mt+1^(0,i) | Ft] = 0, t ∈ Z+, 0 ≤ i ≤ 2.
Since φt and rt have uniformly bounded second moments, the noise terms {Mt+1^(0,0)}_{t∈Z+} have uniformly bounded second moments as well and hence ∃K_{0,0} > 0 s.t.

E[ ‖Mt+1^(0,0)‖^2 | Ft ] ≤ K_{0,0}(1 + ‖ωt^(0)‖^2), t ≥ 0.
Also, h_c^(0,0)(x) ≜ h^(0,0)(cx)/c = (E[rt φt] − cx)/c = E[rt φt]/c − x, so h_∞^(0,0)(x) = lim_{c→∞} h_c^(0,0)(x) = −x. Since the ODE ẋ(t) = h_∞^(0,0)(x) is globally asymptotically stable to the origin, we obtain that the iterates {ωt^(0)}_{t∈Z+} are almost surely stable, i.e., sup_t ‖ωt^(0)‖ < ∞ a.s., from Theorem 7, Chapter 3 of [28]. Similarly we can show that sup_t ‖ωt^(1)‖ < ∞ a.s.
Since φt and φ′t have uniformly bounded second moments, the second moments of {Mt+1^(0,1)}_{t∈Z+} are uniformly bounded and therefore ∃K_{0,1} > 0 s.t.

E[ ‖Mt+1^(0,1)‖^2 | Ft ] ≤ K_{0,1}(1 + ‖ωt^(1)‖^2), t ≥ 0.

Now define

h_c^(0,2)(x) ≜ h^(0,2)(cx)/c = (I_{k×k} − E[φt φt^⊤] cx)/c = I_{k×k}/c − E[φt φt^⊤] x.

Hence h_∞^(0,2)(x) = lim_{c→∞} h_c^(0,2)(x) = −E[φt φt^⊤] x. The ∞-system ODE given by ẋ(t) = h_∞^(0,2)(x) is also globally asymptotically stable to the origin, since E[φt φt^⊤] is positive definite (as it is non-singular and positive semi-definite). So sup_t ‖ωt^(2)‖ < ∞ a.s. from Theorem 7, Chapter 3 of [28].
Since φt has uniformly bounded second moments, ∃K_{0,2} > 0 s.t.

E[ ‖Mt+1^(0,2)‖^2 | Ft ] ≤ K_{0,2}(1 + ‖ωt^(2)‖^2), t ≥ 0.
Now consider the following system of ODEs associated with (50)-(52):

d/dt ω^(0)(t) = E[rt φt] − ω^(0)(t),    t ∈ R+,    (53)
d/dt ω^(1)(t) = E[φt(γφ′t − φt)^⊤] − ω^(1)(t),    t ∈ R+,    (54)
d/dt ω^(2)(t) = I_{k×k} − E[φt φt^⊤] ω^(2)(t),    t ∈ R+.    (55)

For the ODE (53), the point E[rt φt] is a globally asymptotically stable equilibrium. Similarly, for the ODE (54), the point E[φt(γφ′t − φt)^⊤] is a globally asymptotically stable equilibrium. For the ODE (55), since E[φt φt^⊤] is non-negative definite and non-singular (from the assumptions of the lemma), the ODE (55) is globally asymptotically stable to the point (E[φt φt^⊤])^{−1}.
It can now be shown from Theorem 2, Chapter 2 of [28] that the asymptotic properties of the recursions (50), (51), (52) and of their associated ODEs (53), (54), (55) are similar, and hence lim_{t→∞} ωt^(0) = E[rt φt] a.s., lim_{t→∞} ωt^(1) = E[φt(γφ′t − φt)^⊤] a.s. and lim_{t→∞} ωt^(2) = (E[φt φt^⊤])^{−1} a.s. So, for any z ∈ R^k, using (28), we have lim_{t→∞} (ωt^(0) + ωt^(1) z) = Φ^⊤Dν(TπVz − Vz) a.s. Also, from (29), we have lim_{t→∞} ωt^(2) = (Φ^⊤DνΦ)^{−1} a.s.
Putting all the above together, we get lim_{t→∞} J̄(ωt, z) = J̄(ω∗, z) = J(z) a.s.
As mentioned before, the update of θt only happens along a subsequence {t(n) }n∈Z+ of {t}t∈Z+ . So
between t = t(n) and t = t(n+1) , θt is constant. The lemma and the theorems that follow in this paper depend
on the timescale difference in the step-size schedules {αt }t≥0 and {βt }t≥0 . The timescale differences allow
the different recursions to learn at different rates. The step-size sequence {βt}_{t≥0} decays to 0 at a slower rate than {αt}_{t≥0}, and hence the increments in the recursions (43), (44) and (45), which are controlled by βt, are larger; these recursions therefore converge faster than the recursions (41), (42) and (48), which are controlled by αt. So the relative evolution of the variable on the slower timescale αt, i.e., ωt, is indeed slow, and in fact ωt can be considered constant when viewed from the faster timescale βt; see Chapter 6 of [28] for a succinct description of multi-timescale stochastic approximation algorithms.
Assumption (A4): The iterate sequence γ_t in equation (43) satisfies sup_t |γ_t| < ∞ a.s.
R EMARK 6. The assumption (A4) is a technical requirement to prove convergence. In practice, one may
replace (43) by its ‘projected version’ whereby the iterates are projected back to an a priori chosen compact
convex set if they stray outside of this set.
Notation: We denote by E_θ̂[·] the expectation w.r.t. the mixture pdf f̂_θ and by P_θ̂ its induced probability measure. Also, γ_ρ(·, θ̂) represents the (1 − ρ)-quantile w.r.t. the mixture pdf f̂_θ.
The recursion (43) moves on a faster timescale as compared to the recursion (41) of ωt and the recursion
(48) of θt . Hence, on the timescale of the recursion (43), one may consider ωt and θt to be fixed. For
recursion (43) we have the following result:
LEMMA 3. Let ω_t ≡ ω, θ_t ≡ θ and let J̄_ω(·) ≜ J̄(ω, ·). Then the iterates γ_t, t ∈ Z+, in equation (43) satisfy γ_t → γ_ρ(J̄_ω, θ̂) as t → ∞ with probability one.
Proof: For easy reference, we rewrite the recursion (43):

γ_{t+1} = γ_t − β_{t+1} Δγ_{t+1}(z_{t+1}).  (56)

Substituting the expression for Δγ_{t+1} in (56) with ω_t = ω and θ_t = θ, we get

γ_{t+1} = γ_t − β_{t+1} (−(1 − ρ) I_{J̄(ω,z_{t+1}) ≥ γ_t} + ρ I_{J̄(ω,z_{t+1}) ≤ γ_t}), where z_{t+1} ∼ f̂_θ(·).  (57)
The above equation can equivalently be viewed as
γ_{t+1} − γ_t ∈ −β_{t+1} ∇_y ψ(J̄_ω(z_{t+1}), γ_t),

where ∇_y ψ is the sub-differential of ψ(x, y) w.r.t. y (with ψ as defined in Lemma 1). ∇_y ψ is a set-valued map, defined as follows:

∇_y ψ(J̄_ω(z), y) = {−(1 − ρ) I_{J̄(ω,z) ≥ y} + ρ I_{J̄(ω,z) ≤ y}}, for y ≠ J̄(ω, z),
∇_y ψ(J̄_ω(z), y) = [ρ_1, ρ_2], for y = J̄(ω, z),  (58)

where ρ_1 = min{1 − ρ, ρ} and ρ_2 = max{1 − ρ, ρ}.
Rearranging the terms in equation (57), we get

γ_{t+1} = γ_t + β_{t+1} (M_{t+1}^{(1,0)} − E_θ̂[Δγ_{t+1}(z_{t+1})]),  (59)

where M_{t+1}^{(1,0)} ≜ E_θ̂[Δγ_{t+1}(z_{t+1})] − Δγ_{t+1}(z_{t+1}) with z_{t+1} ∼ f̂_θ(·).
It is easy to verify that E_θ̂[Δγ_{t+1}(z_{t+1})] = ∇_y E_θ̂[ψ(J̄_ω(z_{t+1}), y)]. For brevity, define h^{(1,0)}(γ) ≜ −∇_y E_θ̂[ψ(J̄_ω(z_{t+1}), γ)].

The set-valued map h^{(1,0)} : R → {subsets of R} satisfies the following properties:
1. For each y ∈ R, h^{(1,0)}(y) is convex and compact.
2. For each y ∈ R, sup_{y′ ∈ h^{(1,0)}(y)} |y′| < K_{1,0}(1 + |y|), for some 0 < K_{1,0} < ∞.
3. h^{(1,0)} is upper semi-continuous.
The noise term M_t^{(1,0)} satisfies the following properties:
1. M_t^{(1,0)} is F_t-measurable and integrable, ∀t > 0.
2. {M_t^{(1,0)}}_{t≥0} is a martingale difference noise sequence, i.e., E[M_{t+1}^{(1,0)} | F_t] = 0 a.s.
3. E[‖M_{t+1}^{(1,0)}‖² | F_t] ≤ K_{1,1}(1 + ‖γ_t‖² + ‖ω_t‖²), for some 0 < K_{1,1} < ∞. This follows directly from the fact that Δγ_{t+1}(z_{t+1}) has finite first and second order moments.
Therefore, by the almost sure boundedness of the sequence {γ_t} in assumption (A4) and by Lemma 1, Chapter 2 in [28], we can claim that the stochastic sequence {γ_t} asymptotically tracks the differential inclusion

d/dt γ(t) ∈ −E_θ̂[∇_y ψ(J̄_ω(z), γ(t))] = −∇_y E_θ̂[ψ(J̄_ω(z), γ(t))] = h^{(1,0)}(γ(t)).  (60)
The interchange of ∇_y and E_θ̂[·] in the above equation is guaranteed by the dominated convergence theorem.
Now we prove the stability of the above differential inclusion. Note that by Lemma 1 of [31], γ* ≜ γ_ρ(J̄_ω, θ̂) is a root of the map h^{(1,0)}(γ) and hence a fixed point of the flow induced by the above differential inclusion. Now define V(γ) ≜ E_θ̂[ψ(J̄_ω(z), γ)] − E_θ̂[ψ(J̄_ω(z), γ*)]. It is easy to verify that V is continuously differentiable. Also, by Lemma 1 of [31], E_θ̂[ψ(J̄_ω(z), γ)] is a convex function of γ with γ* as its global minimum. Hence V(γ) > 0, ∀γ ∈ R \ {γ*}; further, V(γ*) = 0 and V(γ) → ∞ as |γ| → ∞. So V(·) is a Lyapunov function. Also note that ⟨∇V(γ), h^{(1,0)}(γ)⟩ ≤ 0. So γ* is the global attractor of the differential inclusion defined in (60). Thus, by Theorem 2 of Chapter 2 in [28], the iterates γ_t converge almost surely to γ* = γ_ρ(J̄_ω, θ̂). ∎
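The behaviour established in Lemma 3 can be checked on a toy case. Below is a minimal self-contained sketch of ours of the quantile-tracking recursion (57), with a standard normal sample standing in for J̄(ω, z_{t+1}); the limit is then the (1 − ρ)-quantile of N(0, 1):

```python
import numpy as np
from statistics import NormalDist

# Sketch (ours): recursion (57) with X ~ N(0,1) in place of J_bar(omega, z),
# rho = 0.1 and beta_t = t^{-0.6}; the iterate tracks the (1-rho)-quantile.
rng = np.random.default_rng(1)
rho = 0.1
gamma = 0.0
for t in range(1, 200001):
    x = rng.standard_normal()
    beta = 1.0 / t ** 0.6
    # -(1-rho) I{x >= gamma} + rho I{x <= gamma}, exactly as in (57)
    grad = -(1.0 - rho) * (x >= gamma) + rho * (x <= gamma)
    gamma -= beta * grad

true_q = NormalDist().inv_cdf(1.0 - rho)   # (1-rho)-quantile of N(0,1)
assert abs(gamma - true_q) < 0.15
```

The equilibrium condition (1 − ρ)P(X ≥ γ) = ρP(X ≤ γ) gives P(X ≥ γ) = ρ, i.e., the (1 − ρ)-quantile, matching the Lemma.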
The recursions (44) and (45) move on a faster timescale as compared to the recursion (41) of ωt and the
recursion (48) of θ̄t . Hence, viewed from the timescale of the recursions (44) and (45), one may consider ωt
and θt to be fixed. For the recursions (44) and (45), we have the following result:
LEMMA 4. Assume ω_t ≡ ω, θ_t ≡ θ and let J̄_ω(·) ≜ J̄(ω, ·). Then, almost surely,

(i) lim_{t→∞} ξ_t^(0) = ξ_*^(0) = E_θ̂[g_1(J̄_ω(z), z, γ_ρ(J̄_ω, θ̂))] / E_θ̂[g_0(J̄_ω(z), γ_ρ(J̄_ω, θ̂))],

(ii) lim_{t→∞} ξ_t^(1) = ξ_*^(1) = E_θ̂[g_2(J̄_ω(z), z, γ_ρ(J̄_ω, θ̂), ξ_*^(0))] / E_θ̂[g_0(J̄_ω(z), γ_ρ(J̄_ω, θ̂))],

where E_θ̂[·] is the expectation w.r.t. the pdf f̂_θ(·) and z ∼ f̂_θ(·);

(iii) if γ_ρ(J̄_ω, θ̂) > γ_ρ(J̄_ω, θ̂^p), then T_t, t ∈ Z+, in equation (47) satisfy lim_{t→∞} T_t = 1 a.s.
Proof: (i) First, we recall equation (44):

ξ_{t+1}^(0) = ξ_t^(0) + β_{t+1} (g_1(J̄(ω_t, z_{t+1}), z_{t+1}, γ_t) − ξ_t^(0) g_0(J̄(ω_t, z_{t+1}), γ_t)).  (61)
Note that the above recursion of ξ_t^(0) depends on γ_t, but not the other way around. This implies that we can replace γ_t by its limit point γ_ρ(J̄_ω, θ̂) plus a bias term which goes to zero as t → ∞; we denote the decaying bias term by o(1). Further, using the assumption that ω_t = ω, θ_t = θ, from equation (61) we get

ξ_{t+1}^(0) = ξ_t^(0) + β_{t+1} (h^{(2,0)}(ξ_t^(0)) + M_{t+1}^{(2,0)} + o(1)),  (62)
where

h^{(2,0)}(x) ≜ −E[x g_0(J̄_ω(z_{t+1}), γ_ρ(J̄_ω, θ̂)) | F_t] + E[g_1(J̄_ω(z_{t+1}), z_{t+1}, γ_ρ(J̄_ω, θ̂)) | F_t],  (63)

M_{t+1}^{(2,0)} ≜ g_1(J̄_ω(z_{t+1}), z_{t+1}, γ_ρ(J̄_ω, θ̂)) − E[g_1(J̄_ω(z_{t+1}), z_{t+1}, γ_ρ(J̄_ω, θ̂)) | F_t] − ξ_t^(0) g_0(J̄_ω(z_{t+1}), γ_ρ(J̄_ω, θ̂)) + E[ξ_t^(0) g_0(J̄_ω(z_{t+1}), γ_ρ(J̄_ω, θ̂)) | F_t], and z_{t+1} ∼ f̂_θ(·).
Since z_{t+1} is independent of the σ-field F_t, the function h^{(2,0)}(·) in equation (63) can be rewritten as

h^{(2,0)}(x) = −E_θ̂[x g_0(J̄_ω(z), γ_ρ(J̄_ω, θ̂))] + E_θ̂[g_1(J̄_ω(z), z, γ_ρ(J̄_ω, θ̂))], where z ∼ f̂_θ(·).
It is easy to verify that {M_t^{(2,0)}}_{t∈Z+} is a martingale difference sequence, i.e., M_t^{(2,0)} is F_t-measurable, integrable and E[M_{t+1}^{(2,0)} | F_t] = 0 a.s., ∀t ∈ Z+. It is also easy to verify that h^{(2,0)}(·) is Lipschitz continuous. Also, since S(·) is bounded above and f̂_θ(·) has finite first and second moments, we have, almost surely,

E[‖M_{t+1}^{(2,0)}‖² | F_t] ≤ K_{2,0}(1 + ‖ξ_t^(0)‖²), ∀t ≥ 0, for some 0 < K_{2,0} < ∞.
Now consider the ODE

d/dt ξ^(0)(t) = h^{(2,0)}(ξ^(0)(t)).  (64)
We may rewrite the above ODE as

d/dt ξ^(0)(t) = A ξ^(0)(t) + b^(0),

where A is a diagonal matrix with A_ii = −E_θ̂[g_0(J̄_ω(z), γ_ρ(J̄_ω, θ̂))], 0 ≤ i < k, and b^(0) = E_θ̂[g_1(J̄_ω(z), z, γ_ρ(J̄_ω, θ̂))]. Now consider the ODE in the ∞-system

d/dt ξ^(0)(t) = lim_{η→∞} h^{(2,0)}(η ξ^(0)(t)) / η = A ξ^(0)(t).

Since the matrix A has the same value for all the diagonal elements, A has only one eigenvalue, λ(A) = −E_θ̂[g_0(J̄_ω(z), γ_ρ(J̄_ω, θ̂))], with multiplicity k. Also observe that λ(A) < 0. Hence the ∞-system is globally asymptotically stable to the origin. Using Theorem 7, Chapter 3 of [28], the iterates {ξ_t^(0)}_{t∈Z+} are stable a.s., i.e., sup_{t∈Z+} ‖ξ_t^(0)‖ < ∞ a.s.

Again, since the eigenvalues λ(A) of A are negative and identical, the point −A^{−1} b^(0) is a globally asymptotically stable equilibrium of the ODE (64). By Corollary 4, Chapter 2 of [28], we can conclude that

lim_{t→∞} ξ_t^(0) = −A^{−1} b^(0) = E_θ̂[g_1(J̄_ω(z), z, γ_ρ(J̄_ω, θ̂))] / E_θ̂[g_0(J̄_ω(z), γ_ρ(J̄_ω, θ̂))] a.s.
(ii) We first recall the matrix recursion (45):

ξ_{t+1}^(1) = ξ_t^(1) + β_{t+1} (g_2(J̄(ω_t, z_{t+1}), z_{t+1}, γ_t, ξ_t^(0)) − ξ_t^(1) g_0(J̄(ω_t, z_{t+1}), γ_t)).  (65)
As in the earlier proof, we assume ω_t = ω and θ_t = θ. Note that ξ_t^(1), ξ_t^(0) and γ_t are on the same timescale. However, the recursion of γ_t proceeds independently and, in particular, does not depend on ξ_t^(0) and ξ_t^(1). Also, there is a unilateral coupling of ξ_t^(1) on ξ_t^(0) and γ_t, but not the other way around. Hence, while analyzing (65), one may replace γ_t and ξ_t^(0) with their limit points γ_ρ(J̄_ω, θ̂) and ξ_*^(0) respectively, plus a decaying bias term o(1). Considering all the above observations, we rewrite equation (65) as
ξ_{t+1}^(1) = ξ_t^(1) + β_{t+1} (h^{(2,1)}(ξ_t^(1)) + M_{t+1}^{(2,1)} + o(1)),  (66)

where

h^{(2,1)}(x) ≜ E[g_2(J̄_ω(z_{t+1}), z_{t+1}, γ_ρ(J̄_ω, θ̂), ξ_*^(0)) | F_t] − E[x g_0(J̄_ω(z_{t+1}), γ_ρ(J̄_ω, θ̂)) | F_t],  (67)

M_{t+1}^{(2,1)} ≜ E[ξ_t^(1) g_0(J̄_ω(z_{t+1}), γ_ρ(J̄_ω, θ̂)) | F_t] − ξ_t^(1) g_0(J̄_ω(z_{t+1}), γ_ρ(J̄_ω, θ̂)) − E[g_2(J̄_ω(z_{t+1}), z_{t+1}, γ_ρ(J̄_ω, θ̂), ξ_*^(0)) | F_t] + g_2(J̄_ω(z_{t+1}), z_{t+1}, γ_ρ(J̄_ω, θ̂), ξ_*^(0)),  (68)

where z_{t+1} ∼ f̂_θ(·).
Since z_{t+1} is independent of the σ-field F_t, the function h^{(2,1)}(·) in equation (67) can be rewritten as

h^{(2,1)}(x) = E_θ̂[g_2(J̄_ω(z), z, γ_ρ(J̄_ω, θ̂), ξ_*^(0))] − E_θ̂[x g_0(J̄_ω(z), γ_ρ(J̄_ω, θ̂))], where z ∼ f̂_θ(·).  (69)
It is not difficult to verify that {M_{t+1}^{(2,1)}}_{t∈Z+} is a martingale difference noise sequence and that h^{(2,1)}(·) is Lipschitz continuous. Also, since S(·) is bounded and f̂_θ(·) has finite first and second moments, we get

E[‖M_{t+1}^{(2,1)}‖² | F_t] ≤ K_{2,1}(1 + ‖ξ_t^(1)‖²), ∀t ∈ Z+, for some 0 < K_{2,1} < ∞.
Now consider the ODE given by

d/dt ξ^(1)(t) = h^{(2,1)}(ξ^(1)(t)), t ∈ R+.  (70)
By rewriting the above equation we get

d/dt ξ^(1)(t) = A ξ^(1)(t) + b^(1), t ∈ R+,

where A is a diagonal matrix as before, i.e., A_ii = −E_θ̂[g_0(J̄_ω(z), γ_ρ(J̄_ω, θ̂))], ∀i, 0 ≤ i < k, and b^(1) = E_θ̂[g_2(J̄_ω(z), z, γ_ρ(J̄_ω, θ̂), ξ_*^(0))]. Now consider the ODE in the ∞-system

d/dt ξ^(1)(t) = lim_{η→∞} h^{(2,1)}(η ξ^(1)(t)) / η = A ξ^(1)(t).

Again, the eigenvalue λ(A) = −E_θ̂[g_0(J̄_ω(z), γ_ρ(J̄_ω, θ̂))] of A is negative and of multiplicity k, hence the origin is the unique globally asymptotically stable equilibrium of the ∞-system. Therefore, the iterates {ξ_t^(1)}_{t∈Z+} are almost surely stable, i.e., sup_{t∈Z+} ‖ξ_t^(1)‖ < ∞ a.s.; see Theorem 7, Chapter 3 of [28].

Again, since the eigenvalues λ(A) of A are negative and identical, the point −A^{−1} b^(1) is a globally asymptotically stable equilibrium of the ODE (70). By Corollary 4, Chapter 2 of [28], it follows that

lim_{t→∞} ξ_t^(1) = −A^{−1} b^(1) = E_θ̂[g_2(J̄_ω(z), z, γ_ρ(J̄_ω, θ̂), ξ_*^(0))] / E_θ̂[g_0(J̄_ω(z), γ_ρ(J̄_ω, θ̂))] a.s.
(iii) Here also we assume ω_t ≡ ω. Then γ_t in recursion (43) and γ_t^p in recursion (46) converge to γ_ρ(J̄_ω, θ̂) and γ_ρ(J̄_ω, θ̂^p) respectively. So if γ_ρ(J̄_ω, θ̂) > γ_ρ(J̄_ω, θ̂^p), then γ_t > γ_t^p eventually, i.e., γ_t > γ_t^p for all but finitely many t. So, almost surely, T_t in equation (47) converges to E[I_{γ_t > γ_t^p} − I_{γ_t ≤ γ_t^p}] = P{γ_t > γ_t^p} − P{γ_t ≤ γ_t^p} = 1 − 0 = 1. ∎
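The ratio-of-expectations limits in Lemma 4(i)-(ii) arise from recursions of the form (61). A toy self-contained sketch of ours, with synthetic scalar functions in place of the paper's g_0 and g_1:

```python
import numpy as np

# Sketch (ours): xi_{t+1} = xi_t + beta_{t+1} (g1(Z) - xi_t g0(Z)), whose limit,
# as in Lemma 4(i), is E[g1(Z)] / E[g0(Z)]. Here Z ~ U(0,1) and the hypothetical
# choices g0 = 1 + Z, g1 = Z^2 give the limit (1/3) / (3/2) = 2/9.
rng = np.random.default_rng(2)
xi = 0.0
for t in range(1, 100001):
    z = rng.uniform()
    g0, g1 = 1.0 + z, z * z
    beta = 1.0 / t ** 0.7
    xi += beta * (g1 - xi * g0)

assert abs(xi - 2.0 / 9.0) < 0.01
```

The equilibrium of the averaged dynamics satisfies E[g1] − ξE[g0] = 0, which is exactly the ratio appearing in the lemma.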
Notation: For the subsequence {t_(n)}_{n>0} of {t}_{t≥0}, we denote t_(n)^− ≜ t_(n) − 1 for n > 0.
As mentioned earlier, θ̄_t is updated only along a subsequence {t_(n)}_{n≥0} of {t}_{t≥0}, with t_(0) = 0, as follows:

θ̄_{t_(n+1)} = θ̄_{t_(n)} + α_{t_(n+1)} ((ξ^(0)_{t_(n+1)^−}, ξ^(1)_{t_(n+1)^−})⊤ − θ̄_{t_(n)}).  (71)
Now define Ψ(ω, θ) = (Ψ_1(ω, θ), Ψ_2(ω, θ))⊤, where

Ψ_1(ω, θ) ≜ E_θ̂[g_1(J̄_ω(z), z, γ_ρ(J̄_ω, θ̂))] / E_θ̂[g_0(J̄_ω(z), γ_ρ(J̄_ω, θ̂))],  (72)

Ψ_2(ω, θ) ≜ E_θ̂[g_2(J̄_ω(z), z, γ_ρ(J̄_ω, θ̂), Ψ_1(ω, θ))] / E_θ̂[g_0(J̄_ω(z), γ_ρ(J̄_ω, θ̂))].  (73)
We now state our main theorem. The theorem states that the model sequence {θt } generated by Algorithm
2 converges to θ∗ = (z ∗ , 0k×k )> , which is the degenerate distribution concentrated at z ∗ .
THEOREM 2. Let S(z) = exp(rz), r ∈ R+. Let ρ ∈ (0, 1), λ ∈ (0, 1) and λ > ρ. Let θ_0 = (μ_0, q I_{k×k})⊤, where q ∈ R+. Let the step-size sequences α_t, β_t, t ∈ Z+, satisfy (39), and let c_t → 0. Suppose {θ_t = (μ_t, Σ_t)⊤}_{t∈Z+} is the sequence generated by Algorithm 2 and assume θ_t ∈ int(Θ), ∀t ∈ Z+. Also, let the assumptions (A1), (A2), (A3) and (A4) hold. Further, assume that there exists a continuously differentiable function V : Θ → R+ s.t. ∇V⊤(θ) Ψ(ω_*, θ) < 0, ∀θ ∈ Θ \ {θ*}, and ∇V⊤(θ*) Ψ(ω_*, θ*) = 0. Then there exist q* ∈ R+, r* ∈ R+ and ρ* ∈ (0, 1) s.t. ∀q > q*, ∀r > r* and ∀ρ < ρ*,

lim_{t→∞} J̄(ω_t, μ_t) = J* and lim_{t→∞} θ_t = θ* = (z*, 0_{k×k})⊤ almost surely,

where J* and z* are defined in (26). Further, since J = −MSPBE, the algorithm SCE-MSPBEM converges to the global minimum of MSPBE a.s.
Proof: Rewriting equation (48) along the subsequence {t_(n)}_{n∈Z+}, we have, for n ∈ Z+,

θ_{t_(n+1)} = θ_{t_(n)} + α_{t_(n+1)} ((ξ^(0)_{t_(n+1)^−}, ξ^(1)_{t_(n+1)^−})⊤ − θ_{t_(n)}).  (74)
The iterates θ_{t_(n)} are stable, i.e., sup_n ‖θ_{t_(n)}‖ < ∞ a.s.; this follows directly from the assumptions that θ_{t_(n)} ∈ int(Θ) and that Θ is a compact set.
Rearranging equation (74), we get, for n ∈ Z+,

θ_{t_(n+1)} = θ_{t_(n)} + α_{t_(n+1)} (Ψ(ω_*, θ_{t_(n)}) + o(1)).  (75)

This follows from the fact that, for t_(n) < t ≤ t_(n+1), the random variables ξ_t^(0) and ξ_t^(1) estimate the quantities Ψ_1(ω_{t_(n)}, θ_{t_(n)}) and Ψ_2(ω_{t_(n)}, θ_{t_(n)}) respectively. Since c_t → 0, the estimation error decays to 0; hence the o(1) term.
The limit points of the above recursion are the roots of Ψ. Hence, equating Ψ_1(ω_*, θ) to 0_{k×1}, we get

μ = E_θ̂[g_1(J(z), z, γ_ρ(J, θ̂))] / E_θ̂[g_0(J(z), γ_ρ(J, θ̂))].  (76)
Equating Ψ_2(ω_*, θ) to O (= 0_{k×k}), we get

E_θ̂[g_2(J(z), z, γ_ρ(J, θ̂), μ)] / E_θ̂[g_0(J(z), γ_ρ(J, θ̂))] − Σ = O.  (77)
For brevity, we define

γ_ρ^*(θ) ≜ γ_ρ(J, θ), ĝ_0(z, θ) ≜ g_0(J(z), γ_ρ^*(θ)) and L(θ) ≜ E_θ[ĝ_0(z, θ)].  (78)
Substituting the expression for μ from (76) in (77) and after further simplification, we get

(1/L(θ̂)) E_θ̂[ĝ_0(z, θ̂) zz⊤] − μμ⊤ − Σ = O.

Since Σ = E_θ[zz⊤] − μμ⊤, the above equation implies

(1/L(θ̂)) E_θ̂[ĝ_0(z, θ̂) zz⊤] − E_θ[zz⊤] = O

⟹_1 E_θ̂[ĝ_0(z, θ̂) zz⊤] − L(θ̂) E_θ[zz⊤] = O

⟹_2 (1 − λ) E_θ[ĝ_0(z, θ̂) zz⊤] + λ E_{θ_0}[ĝ_0(z, θ̂) zz⊤] − L(θ̂) E_θ[zz⊤] = O

⟹_3 (1 − λ) E_θ[ĝ_0(z, θ̂) zz⊤] + λ E_{θ_0}[ĝ_0(z, θ̂) zz⊤] − (1 − λ) E_θ[ĝ_0(z, θ̂)] E_θ[zz⊤] − λ E_{θ_0}[ĝ_0(z, θ̂)] E_θ[zz⊤] = O

⟹_4 (1 − λ) E_θ[(ĝ_0(z, θ̂) − E_θ[ĝ_0(z, θ̂)]) zz⊤] + λ E_{θ_0}[(ĝ_0(z, θ̂) − E_{θ_0}[ĝ_0(z, θ̂)]) zz⊤] − λ (E_θ[zz⊤] − E_{θ_0}[zz⊤]) E_{θ_0}[ĝ_0(z, θ̂)] = O

⟹_5 (1 − λ) Σ² E_θ[∇²_z ĝ_0(z, θ̂)] + λ Σ_0² E_{θ_0}[∇²_z ĝ_0(z, θ̂)] + λ (E_θ[zz⊤] − E_{θ_0}[zz⊤]) E_{θ_0}[ĝ_0(z, θ̂)] = O

⟹_6 (1 − λ) Σ² E_θ[∇²_z ĝ_0(z, θ̂)] + λ q² E_{θ_0}[∇²_z ĝ_0(z, θ̂)] + λ (E_θ[zz⊤] − E_{θ_0}[zz⊤]) E_{θ_0}[ĝ_0(z, θ̂)] = O

⟹_8 (1 − λ) Σ² E_θ[S(J(z)) G_r(z) I_{J(z) ≥ γ_ρ^*(θ̂)}] + λ q² E_{θ_0}[S(J(z)) G_r(z) I_{J(z) ≥ γ_ρ^*(θ̂)}] + λ (E_θ[zz⊤] − E_{θ_0}[zz⊤]) E_{θ_0}[ĝ_0(z, θ̂)] = O,  (79)

where G_r(z) ≜ r²∇J(z)∇J(z)⊤ + r∇²J(z). Note that ⟹_5 follows from the "integration by parts" rule for the multivariate Gaussian and ⟹_8 follows from the assumption S(z) = exp(rz). Note that for each z ∈ Z, G_r(z) ∈ R^{k×k}; hence we denote G_r(z) as (G_r^{ij}(z))_{i=1,j=1}^{i=k,j=k}. For brevity, we also define

F^{r,ρ}(z, θ) ≜ S(J(z)) G_r(z) I_{J(z) ≥ γ_ρ^*(θ)},  (80)

where F^{r,ρ}(z, θ) ∈ R^{k×k}, which is also denoted (F_{ij}^{r,ρ}(z))_{i=1,j=1}^{i=k,j=k}.
Hence equation (79) becomes

(1 − λ) Σ² E_θ[F^{r,ρ}(z, θ̂)] + λ q² E_{θ_0}[F^{r,ρ}(z, θ̂)] + λ (E_θ[zz⊤] − E_{θ_0}[zz⊤]) E_{θ_0}[ĝ_0(z, θ̂)] = O.  (81)
Note that (∇_i J)² ≥ 0. Hence we can find an r* ∈ R+ s.t. G_r^{ii}(z) > 0, ∀r > r*, 1 ≤ i ≤ k, ∀z ∈ Z. This further implies that E_θ[F_{ii}^{r,ρ}(z, θ̂)] > 0, ∀θ ∈ Θ. Also, since Z is compact and J is continuous, we have J(z) > B_1 > −∞, ∀z ∈ Z. Hence we obtain the following bound:

(E_θ[z_i²] − E_{θ_0}[z_i²]) E_{θ_0}[ĝ_0(z, θ̂)] > ρ K_3 S(B_1), where 0 < K_3 < ∞.  (82)
Now, from (81) and (82), we can find ρ* ∈ (0, 1) and q* ∈ R s.t. ∀ρ < ρ*, ∀q > q*, ∀r > r*, we have

(1 − λ) Σ² E_θ[F_{ii}^{r,ρ}(z, θ̂)] + λ q² E_{θ_0}[F_{ii}^{r,ρ}(z, θ̂)] + λ (E_θ[z_i²] − E_{θ_0}[z_i²]) E_{θ_0}[ĝ_0(z, θ̂)] > 0.  (83)
This contradicts equation (81) for such a choice of ρ, q and r. This implies that each of the terms in equation (81) is 0, i.e.,

Σ² E_θ[F^{r,ρ}(z, θ̂)] = O,  (84)

q² E_{θ_0}[F^{r,ρ}(z, θ̂)] = O,  (85)

and

(E_θ[zz⊤] − E_{θ_0}[zz⊤]) E_{θ_0}[ĝ_0(z, θ̂)] = O.  (86)
It is easy to verify that Σ = O simultaneously satisfies (84), (85) and (86). Moreover, Σ = O is the only solution for r > r*, since we have already established that, ∀r > r*, 1 ≤ i ≤ k, ∀z ∈ Z, E_θ[F_{ii}^{r,ρ}(z, θ̂)] > 0, ∀θ ∈ Θ. This proves that, for any z ∈ Z, the degenerate distribution concentrated on z, given by θ_z = (z, 0_{k×k})⊤, is a potential limit point of the recursion. From (85), we have

E_{θ_0}[F^{r,ρ}(z, θ̂)] = O ⟹ γ_ρ^*(θ̂) = J(z*).
Claim A: The only degenerate distribution which satisfies the above condition is θ* = (z*, 0_{k×k})⊤.

The claim can be verified as follows: if there exists z′ (∈ Z), z′ ≠ z*, s.t. γ_ρ^*(θ̂_{z′}) = J(z*) (where θ̂_{z′} represents the mixture distribution f̂_{θ_{z′}}), then from the definition of γ_ρ^*(·) in (16) and (78), we can find an increasing sequence {l_i}, where l_i > J(z′), s.t. the following property is satisfied:

lim_{i→∞} l_i = J(z*) and P_θ̂(J(z) ≥ l_i) ≥ ρ.  (87)
But P_{θ̂_{z′}}(J(z) ≥ l_i) = (1 − λ) P_{θ_{z′}}(J(z) ≥ l_i) + λ P_{θ_0}(J(z) ≥ l_i) and P_{θ_{z′}}(J(z) ≥ l_i) = 0, ∀i. Therefore, from (87) we get

P_{θ̂_{z′}}(J(z) ≥ l_i) ≥ ρ
⟹ (1 − λ) P_{θ_{z′}}(J(z) ≥ l_i) + λ P_{θ_0}(J(z) ≥ l_i) ≥ ρ
⟹ λ P_{θ_0}(J(z) ≥ l_i) ≥ ρ
⟹ P_{θ_0}(J(z) ≥ l_i) ≥ ρ/λ, where ρ/λ < 1.

Recall that l_i → J(z*). Thus, by the continuity of probability measures, we get

0 = P_{θ_0}(J(z) ≥ J(z*)) = lim_{i→∞} P_{θ_0}(J(z) ≥ l_i) ≥ ρ/λ,
which is a contradiction. This proves Claim A. The only remaining task is to prove that θ* is a stable attractor; this follows easily from the assumption regarding the existence of the Lyapunov function V in the statement of the theorem. ∎
3.1. Computational Complexity
The computational load of this algorithm is Θ(k²) per iteration, which comes from (41). Least squares algorithms like LSTD and LSPE also require Θ(k²) per iteration. However, LSTD requires an extra operation of inverting the k × k matrix A_T, which requires an additional computational effort of Θ(k³). (Note that LSPE also requires a k × k matrix inversion.) This makes the overall complexity of LSTD and LSPE Θ(k³). Further, in some cases the matrix A_T may not be invertible; the pseudo-inverse of A_T then needs to be computed in LSTD and LSPE, which is computationally even more expensive. Our algorithm does not require such an inversion procedure. Also, even though the complexity of first order temporal difference algorithms such as TD(λ) and GTD2 is Θ(k), the approximations they produced in the experiments we conducted turned out to be inferior to ours and also showed a slower rate of convergence than our algorithm.
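The complexity claim can be made concrete with a schematic LSTD-style accumulation (a sketch of ours, not the exact recursions of this paper): the per-sample work is a rank-one update, Θ(k²), while the final solve that LSTD-type methods additionally need is Θ(k³):

```python
import numpy as np

# Per-sample Theta(k^2) accumulation versus the extra Theta(k^3) solve.
rng = np.random.default_rng(3)
k, gamma = 4, 0.9
A = 1e-3 * np.eye(k)          # small ridge so A is invertible in this sketch
b = np.zeros(k)
for _ in range(500):
    phi = rng.standard_normal(k)        # current feature vector
    phi_next = rng.standard_normal(k)   # next-state feature vector
    r = rng.standard_normal()           # reward sample
    A += np.outer(phi, phi - gamma * phi_next)   # rank-1 update: Theta(k^2)
    b += r * phi                                 # Theta(k)
theta = np.linalg.solve(A, b)           # the Theta(k^3) step LSTD-type methods need
assert np.allclose(A @ theta, b)
```

An algorithm that avoids the solve keeps only the Θ(k²) per-step cost, which is the point of the comparison above.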
Another noteworthy characteristic exhibited by our algorithm is stability. Recall that the convergence of TD(0) is guaranteed by the requirements that the Markov chain of P^π be ergodic and that the sampling distribution ν be its stationary distribution. The classic example of Baird's 7-star [10] violates those restrictions, and hence TD(0) is seen to diverge there. Our algorithm does not impose such restrictions and shows stable behaviour even in non-ergodic off-policy cases such as Baird's example.
4. Experimental Results
We present here a numerical comparison of SCE-MSPBEM with various state-of-the-art algorithms in the literature on some benchmark Reinforcement Learning problems. In each of the experiments, a random trajectory {(s_t, r_t, s′_t)}_{t=0}^∞ is chosen and all the algorithms are updated using it. Each s_t in {(s_t, r_t, s′_t), t ≥ 0} is sampled using an arbitrary distribution ν over S. The algorithms are run on multiple trajectories and the average of the results obtained is plotted. The x-axis in the plots is t/1000, where t is the iteration number.
In each case, the learning rates αt , βt are chosen so that the condition (39) is satisfied. The function S(·) is
chosen as S(x) = exp (rx), where r ∈ R is chosen appropriately.
SCE-MSPBEM was tested on the following benchmark problems:
1. Linearized Cart-Pole Balancing [14]
2. 5-Link Actuated Pendulum Balancing [14]
3. Baird’s 7-Star MDP [10]
4. 10-state Ring MDP [33]
5. Large state space and action space with Radial Basis Functions
6. Large state space and action space with Fourier Basis Functions [5]
4.1. Experiment 1: Linearized Cart-Pole Balancing [14]
Setup: A pole with mass m and length l is connected to a cart of mass M . It can rotate 360◦ and the cart
is free to move in either direction within the bounds of a linear track.
Goal: To balance the pole upright and the cart at the centre of the track.
State space: The 4-tuple [x, ẋ, ψ, ψ̇] where ψ is the angle of the pendulum w.r.t. the vertical axis, ψ̇ is the
angular velocity, x the relative cart position from the centre of the track and ẋ is its velocity.
Control space: The controller applies a horizontal force a on the cart parallel to the track. The stochastic
policy used in this setting corresponds to π(a|s) = N (a|β1> s, σ12 ).
System dynamics: The dynamical equations of the system are given by

ψ̈ = (−3mlψ̇² sin ψ cos ψ + (6M + m)g sin ψ − 6(a − bψ̇) cos ψ) / (4l(M + m) − 3ml cos ψ),  (88)

ẍ = (−2mlψ̇² sin ψ + 3mg sin ψ cos ψ + 4a − 4bψ̇) / (4(M + m) − 3m cos ψ).  (89)
By making further assumptions on the initial conditions, the system dynamics can be approximated accurately by the linear system

x_{t+1} = x_t + Δt ẋ_t,
ẋ_{t+1} = ẋ_t + Δt (3mgψ_t + 4a − 4bψ̇_t)/(4M − m) + z,
ψ_{t+1} = ψ_t + Δt ψ̇_t,
ψ̇_{t+1} = ψ̇_t + Δt (3(M + m)ψ_t − 3a + 3bψ̇_t)/(4Ml − ml),  (90)

where Δt is the integration time step, i.e., the time difference between two transitions, and z is a Gaussian noise on the velocity of the cart with standard deviation σ_2.
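A stepping routine for a discretised linear system of the form (90) can be sketched as follows (ours; the A and B entries below are hypothetical placeholders, to be filled in from the coefficients in (90) involving m, M, l, g and b):

```python
import numpy as np

# Generic Euler step s_{t+1} = s_t + dt (A s_t + B a) + noise, matching the
# structure of (90). A and B here are hypothetical 2-state placeholders, with
# Gaussian noise added only on the velocity component, as in (90).
dt = 0.1
A = np.array([[0.0, 1.0],
              [1.5, -0.1]])
B = np.array([0.0, 0.8])
sigma2 = 0.01
rng = np.random.default_rng(4)

def step(s, a):
    noise = np.array([0.0, sigma2 * rng.standard_normal()])
    return s + dt * (A @ s + B * a) + noise

s = np.array([0.05, 0.0])
for _ in range(10):
    a = -2.0 * s[0] - 0.5 * s[1]   # a simple stabilising linear feedback
    s = step(s, a)
assert s.shape == (2,) and np.all(np.isfinite(s))
```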
Reward function: R(s, a) = R(ψ, ψ̇, x, ẋ, a) = −100ψ² − x² − (1/10)a².
Feature vectors: φ(s ∈ R⁴) = (1, s₁², s₂², ..., s₁s₂, s₁s₃, ..., s₃s₄)⊤ ∈ R¹¹.
Evaluation policy: The policy evaluated in the experiment is the optimal policy π*(a|s) = N(a|β₁*⊤ s, σ₁*²). The parameters β₁* and σ₁* are computed using dynamic programming. The feature set chosen above is a perfect feature set, i.e., V^{π*} ∈ {Φz | z ∈ R^k}.

Figure 4: The Cart-Pole System. The goal is to keep the pole in the upright position and the cart at the center of the track by pushing the cart with a force a either to the left or the right. The system is parameterized by the position x of the cart, the angle of the pole ψ, the velocity ẋ and the angular velocity ψ̇.

The table of the various parameter values we used in our experiment is given below.
Gravitational acceleration (g): 9.8 m/s²
Mass of the pole (m): 0.5 kg
Mass of the cart (M): 0.5 kg
Length of the pole (l): 0.6 m
Friction coefficient (b): 0.1 N(ms)⁻¹
Integration time step (Δt): 0.1 s
Standard deviation of z (σ₂): 0.01
Discount factor (γ): 0.95
α_t: t^{−1.0}
β_t: t^{−0.6}
c_t: 0.01
ε₁: 0.95
The results of the experiments are shown in Figure 5.
4.2. Experiment 2: 5-Link Actuated Pendulum Balancing [14]
Setup: 5 independent poles each with mass m and length l with the top pole being a pendulum connected
using 5 rotational joints.
Goal: To keep all the poles in the upright position by applying independent torques at each joint.
State space: The state s = (q, q̇)> ∈ R10 where q = (ψ1 , ψ2 , ψ3 , ψ4 , ψ5 ) ∈ R5 and q̇ = (ψ̇1 , ψ̇2 , ψ̇3 , ψ̇4 , ψ̇5 ) ∈
R5 where ψi is the angle of the pole i w.r.t. the vertical axis and ψ̇i is the angular velocity.
Control space: The action a = (a₁, a₂, ..., a₅)⊤ ∈ R⁵, where aᵢ is the torque applied to joint i. The stochastic policy used in this setting corresponds to π(a|s) = N(a|β₁⊤ s, σ₁²).

Figure 5: The Cart-Pole setting. The evolutionary trajectories of the variables ‖Σ̄_t‖_F (where ‖·‖_F is the Frobenius norm), γ̄_t*, T_t and √MSPBE(μ̄_t). Note that both γ̄_t* and √MSPBE(μ̄_t) converge to 0 as t → ∞, while ‖Σ̄_t‖_F also converges to 0. This implies that the model θ̄_t = (μ̄_t, Σ̄_t)⊤ converges to the degenerate distribution concentrated on z*. The evolutionary track of T_t shows that T_t does not cross the ε₁ = 0.95 line after the model θ̄_t = (μ̄_t, Σ̄_t)⊤ reaches a close neighbourhood of its limit.
System dynamics: The approximate linear system dynamics is given by

[q_{t+1}; q̇_{t+1}] = [[I, Δt I]; [−Δt M⁻¹U, I]] [q_t; q̇_t] + Δt [0; M⁻¹] (a + z),  (91)

where Δt is the integration time step, i.e., the time difference between two transitions, M is the mass matrix in the upright position, with M_ij = l²(6 − max(i, j))m, and U is a diagonal matrix with U_ii = −gl(6 − i)m. Each component of z is a Gaussian noise.

Figure 6: 3-link actuated pendulum setting. Each rotational joint i, 1 ≤ i ≤ 3, is actuated by a torque aᵢ. The system is parameterized by the angle ψᵢ against the vertical direction and the angular velocity ψ̇ᵢ. The goal is to balance the pole in the upright direction, i.e., all ψᵢ should be as close to 0 as possible.
Reward function: R(q, q̇, a) = −q⊤q.
Feature vectors: φ(s ∈ R¹⁰) = (1, s₁², s₂², ..., s₁s₂, s₁s₃, ..., s₉s₁₀)⊤ ∈ R⁴⁶.
Evaluation policy: The policy evaluated in the experiment is the optimal policy π*(a|s) = N(a|β₁*⊤ s, σ₁*²). The parameters β₁* and σ₁* are computed using dynamic programming. The feature set chosen above is a perfect feature set, i.e., V^{π*} ∈ {Φz | z ∈ R^k}.
The table of the various parameter values we used in our experiment is given below. Note that we have used constant step-sizes in this experiment.

Gravitational acceleration (g): 9.8 m/s²
Mass of the pole (m): 1.0 kg
Length of the pole (l): 1.0 m
Integration time step (Δt): 0.1 s
Discount factor (γ): 0.95
α_t: 0.001
β_t: 0.05
c_t: 0.05
ε₁: 0.95
The results of the experiment are shown in Figure 7.
Figure 7: 5-link actuated pendulum setting. The respective trajectories of √MSPBE and √MSE generated by the TD(0), LSTD(0) and SCE-MSPBEM algorithms are plotted: (a) √MSPBE(μ̄_t); (b) √MSE(μ̄_t). The graph on the left is for √MSPBE, while the one on the right is for √MSE. Note that √MSE also converges to 0, since the feature set is perfect.
4.3. Experiment 3: Baird’s 7-Star MDP [10]
Our algorithm was also tested on Baird's star problem [10] with |S| = 7, |A| = 2 and k = 8. We let ν be the uniform distribution over S; the feature matrix Φ and the transition matrix Pπ are given by

Φ = [1 2 0 0 0 0 0 0
     1 0 2 0 0 0 0 0
     1 0 0 2 0 0 0 0
     1 0 0 0 2 0 0 0
     1 0 0 0 0 2 0 0
     1 0 0 0 0 0 2 0
     2 0 0 0 0 0 0 1],

Pπ = [0 0 0 0 0 0 1
      0 0 0 0 0 0 1
      0 0 0 0 0 0 1
      0 0 0 0 0 0 1
      0 0 0 0 0 0 1
      0 0 0 0 0 0 1
      0 0 0 0 0 0 1].
The reward function is given by R(s, s′) = 0, ∀s, s′ ∈ S. The Markov chain in this case is not ergodic, and hence this is an off-policy setting. This is a classic example where TD(0) is seen to diverge [10]. The performance comparison of the algorithms GTD2, TD(0) and LSTD(0) with SCE-MSPBEM is shown in Figure 9. The performance metric used here is the √MSE(·) of the prediction vector returned by the corresponding algorithm at time t. The algorithm parameters for the problem are given below:

α_t: 0.001
β_t: 0.05
c_t: 0.01
ε₁: 0.8

Figure 8: Baird's 7-star MDP.
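The quantities above can be assembled directly (a sketch of ours, based on the matrices as written here):

```python
import numpy as np

# Baird's 7-star: |S| = 7, k = 8, every state jumps deterministically to state 7.
n, k = 7, 8
Phi = np.zeros((n, k))
for s in range(6):
    Phi[s, 0] = 1.0
    Phi[s, s + 1] = 2.0         # states 1..6: features e_1 + 2 e_{s+1}
Phi[6, 0] = 2.0
Phi[6, 7] = 1.0                 # state 7: 2 e_1 + e_8
P = np.zeros((n, n))
P[:, 6] = 1.0                   # every state moves to state 7: non-ergodic chain

assert np.allclose(P.sum(axis=1), 1.0)
assert np.linalg.matrix_rank(Phi) == 7   # 7 states represented by 8 features
```

The deterministic jump to state 7 is what makes the chain non-ergodic under the uniform sampling distribution ν, which is the source of TD(0)'s divergence here.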
A careful analysis in [34] has shown that when the discount factor γ ≤ 0.88, with an appropriate learning rate, TD(0) converges. Nonetheless, the same paper also shows that for discount factor γ = 0.9, TD(0) diverges for all values of the learning rate. This is explicitly demonstrated in Figure 9. However, our algorithm SCE-MSPBEM converges in both cases, which demonstrates its stable behaviour.
The algorithms were also compared on the same Baird's 7-star, but with a different feature matrix Φ₁:

Φ₁ = [1 2 0 0 0 0 1 0
      1 0 2 0 0 0 0 0
      1 0 0 2 0 0 0 0
      1 0 0 0 2 0 0 0
      1 0 0 0 0 0 0 2
      1 0 0 0 0 0 0 3
      2 0 0 0 0 0 0 1].

In this case, the reward function is given by R(s, s′) = 2.0, ∀s, s′ ∈ S. Note that Φ₁ gives an imperfect feature set. The algorithm parameter values used are the same as earlier. The results are shown in Figure 10. In this case also, TD(0) diverges. However, SCE-MSPBEM exhibits good stable behaviour.
Figure 9: Baird's 7-star MDP with perfect feature set; √MSE vs. time for SCE-MSPBEM, LSTD(0), TD(0), GTD2 and RG: (a) discount factor γ = 0.1; (b) discount factor γ = 0.9. For γ = 0.1, all the algorithms show almost the same rate of convergence; the initial jump of SCE-MSPBEM is due to the fact that the initial value is far from the limit. For γ = 0.9, TD(0) does not converge and GTD2 is slower. However, SCE-MSPBEM exhibits good convergence behaviour.
Experiment 4: 10-State Ring MDP [33]

Next, we studied the performance comparisons of the algorithms on a 10-ring MDP with |S| = 10 and k = 8. We let ν be the uniform distribution over S. The transition matrix Pπ and the feature matrix Φ are
Figure 10: Baird's 7-star MDP with imperfect feature set; (a) √MSE and (b) √MSPBE vs. time for SCE-MSPBEM, LSTD(0), TD(0), GTD2 and RG. Here the discount factor γ = 0.99. In this case, the TD(0) method diverges. However, the √MSE of SCE-MSPBEM and LSTD(0) converge to the same limit point (= 103.0), while RG converges to a different limit (= 1.6919). This is because the feature set is imperfect and also because RG minimizes MSBR, while SCE-MSPBEM and LSTD minimize MSPBE. To verify this fact, note that in (b), √MSPBE(μ̄_t) of SCE-MSPBEM converges to 0.
given by

Pπ = [0 1 0 0 0 0 0 0 0 0
      0 0 1 0 0 0 0 0 0 0
      0 0 0 1 0 0 0 0 0 0
      0 0 0 0 1 0 0 0 0 0
      0 0 0 0 0 1 0 0 0 0
      0 0 0 0 0 0 1 0 0 0
      0 0 0 0 0 0 0 1 0 0
      0 0 0 0 0 0 0 0 1 0
      0 0 0 0 0 0 0 0 0 1
      1 0 0 0 0 0 0 0 0 0],

Φ = [1 0 0 0 0 0 0 0
     0 1 0 0 0 0 0 0
     0 0 1 0 0 0 0 0
     0 0 0 1 0 0 0 0
     0 0 0 0 1 0 0 0
     0 0 0 0 0 1 0 0
     0 0 0 0 0 0 1 0
     0 0 0 0 0 0 0 1
     0 0 0 0 0 0 0 1
     0 0 0 0 0 1 0 0].

Figure 11: 10-Ring MDP.
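The ring transition matrix above is simply the cyclic shift, which can be written in one line (a sketch of ours):

```python
import numpy as np

# P(s, s+1) = 1 for s = 1..9 and P(10, 1) = 1: a cyclic permutation matrix.
n = 10
P = np.roll(np.eye(n), 1, axis=1)   # shift the identity's columns right by one

assert P[0, 1] == 1.0 and P[n - 1, 0] == 1.0   # wrap-around edge 10 -> 1
assert np.allclose(P.sum(axis=1), 1.0)
```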
The reward function is R(s, s′) = 1.0, ∀s, s′ ∈ S. The performance comparisons of the algorithms GTD2, TD(0) and LSTD(0) with SCE-MSPBEM are shown in Figure 12. The performance metric used here is the √MSE(·) of the prediction vector returned by the corresponding algorithm at time t.
The algorithm parameters for the problem are as follows:

α_t: 0.001
β_t: 0.05
c_t: 0.075
ε₁: 0.85

Figure 12: 10-Ring MDP with perfect feature set; √MSE vs. time for SCE-MSPBEM, LSTD(0), TD(0), GTD2 and RG: (a) discount factor γ = 0.99; (b) discount factor γ = 0.1. For γ = 0.1, all the algorithms exhibit almost the same rate of convergence. For γ = 0.99, SCE-MSPBEM converges faster than TD(0), GTD2 and RG.
4.4. Experiment 5: Large State Space Random MDP with Radial Basis Functions and
Fourier Basis
These experiments were designed by us. Here, tests were performed by varying the feature set to prove that
the algorithm is not dependent on any particular feature set. Two types of feature sets are used here: Fourier
Basis Functions and Radial Basis Functions (RBF).
Figure 13 shows the performance comparisons when Fourier basis functions are used for the features {φ_i}_{i=1}^k, where

φ_i(s) = 1, if i = 1;  φ_i(s) = cos((i + 1)πs / 2), if i is odd;  φ_i(s) = sin(iπs / 2), if i is even.  (92)
Figure 14 shows the performance comparisons when RBFs are used instead for the features {φ_i}_{i=1}^k, where

φ_i(s) = e^{−(s − m_i)² / (2.0 v_i²)},  (93)

with m_i and v_i fixed a priori.
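The two feature constructions (92) and (93) can be written down directly; the code below is our reading of them (the index convention in the Fourier case is our reconstruction from the text):

```python
import numpy as np

def fourier_features(s, k):
    # Our reading of (92): phi_1 = 1, odd i use cos((i+1) pi s / 2),
    # even i use sin(i pi s / 2).
    phi = np.empty(k)
    for i in range(1, k + 1):
        if i == 1:
            phi[i - 1] = 1.0
        elif i % 2 == 1:
            phi[i - 1] = np.cos((i + 1) * np.pi * s / 2.0)
        else:
            phi[i - 1] = np.sin(i * np.pi * s / 2.0)
    return phi

def rbf_features(s, m, v):
    # (93): Gaussian bumps with centres m_i and widths v_i fixed a priori.
    return np.exp(-((s - m) ** 2) / (2.0 * v ** 2))

phi = fourier_features(0.5, 5)
assert phi[0] == 1.0 and phi.shape == (5,)
centres = np.array([10.0, 30.0, 50.0])
assert abs(rbf_features(10.0, centres, 10.0)[0] - 1.0) < 1e-12
```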
In both the cases, the reward function is given by

R(s, s′) = G(s)G(s′) (1 / (1.0 + s′)^{0.25}), ∀s, s′ ∈ S,  (94)

where the vector G ∈ (0, 1)^{|S|} is initialized for the algorithm with G(s) ∼ U(0, 1), ∀s ∈ S.
Also in both the cases, the transition probability matrix Pπ is generated as follows:

Pπ(s, s′) = C(|S|, s′) b(s)^{s′} (1.0 − b(s))^{|S|−s′}, ∀s, s′ ∈ S,  (95)

where C(|S|, s′) is the binomial coefficient and the vector b ∈ (0, 1)^{|S|} is initialized for the algorithm with b(s) ∼ U(0, 1), ∀s ∈ S. It is easy to verify that the Markov chain defined by Pπ is ergodic in nature.
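A row of (95) is a binomial(|S|, b(s)) profile over the successor states. The sketch below is ours; note that since s′ ranges over 1..|S| rather than 0..|S|, the raw row sums fall slightly short of 1, so we normalise each row:

```python
import numpy as np
from math import comb

def transition_row(n, p):
    # Binomial profile C(n, s') p^{s'} (1-p)^{n-s'} over s' = 1..n, as in (95),
    # normalised so that the row is a proper probability distribution.
    row = np.array([comb(n, s1) * p ** s1 * (1.0 - p) ** (n - s1)
                    for s1 in range(1, n + 1)])
    return row / row.sum()

rng = np.random.default_rng(5)
n = 20
P = np.vstack([transition_row(n, rng.uniform(0.05, 0.95)) for _ in range(n)])

assert np.allclose(P.sum(axis=1), 1.0)
assert P.min() > 0.0   # every entry positive, so the chain is ergodic
```

All entries being strictly positive makes the chain irreducible and aperiodic, which is the ergodicity claimed above.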
In the case of RBF, we have set |S| = 1000, |A| = 200, k = 50, mi = 10 + 20(i − 1) and vi = 10, while for
Fourier Basis Functions, |S| = 1000, |A| = 200, k = 50. In both the cases, the distribution ν is the stationary
distribution of the Markov Chain. The simulation is run sufficiently long to ensure that the chain achieves
its steady state behaviour, i.e., the states appear with the stationary distribution.
The algorithm parameters for the problem are as follows (both RBF and Fourier basis): αt = 0.001, βt = 0.05, ct = 0.075, ε1 = 0.85.
Also note that when the Fourier basis is used, the discount factor is γ = 0.9, and for RBFs, γ = 0.01. SCE-MSPBEM exhibits good convergence behaviour in both cases, which shows that SCE-MSPBEM does not depend on the discount factor γ. This is important because in [35], the performance of TD methods is shown to depend on the discount factor γ.
To get a measure of how well our algorithm performs on much larger problems, we applied it to a large MDP where |S| = 2^15, |A| = 50, k = 100 and γ = 0.9. The reward function R and the transition probability
Figure 13: Fourier basis function. Here, |S| = 1000, |A| = 200, k = 50 and γ = 0.9. [Plot of √MSE against time for SCE-MSPBEM, LSTD(0), LSPE(0), TD(0), GTD2 and RG.] In this case, SCE-MSPBEM shows good convergence behaviour.
Figure 14: Radial basis function. Here, |S| = 1000, |A| = 200, k = 50 and γ = 0.01. [Plot of √MSE against time for SCE-MSPBEM, LSTD(0), TD(0), GTD2 and LSPE(0).] In this case, SCE-MSPBEM converges to the same limit point as the other algorithms.
matrix Pπ are generated using the equations (94) and (95) respectively. RBFs are used as the features in this
case. Since the MDP is huge, the algorithms were run on Amazon cloud servers. The true value function V^π was computed, and the √MSE values of the prediction vectors generated by the different algorithms were compared. The performance results are shown in Table 2.
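The √MSE comparison above can be computed with a small helper. As a sketch, we weight squared errors by the sampling distribution ν (uniform weights recover the plain mean squared error); the function name is ours:

```python
import math

def root_mse(v_pred, v_true, nu):
    """Weighted root mean squared error between a predicted and the true value function:
    sqrt(sum_s nu(s) * (v_pred(s) - v_true(s))^2)."""
    return math.sqrt(sum(w * (p - t) ** 2 for p, t, w in zip(v_pred, v_true, nu)))
```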
Table 2: Performance comparison of various algorithms with large state space. Here |S| = 2^15, |A| = 50, k = 100, and γ = 0.9. RBF is used as the feature set; the feature set is imperfect. The entries in the table correspond to the √MSE values obtained from the respective algorithms on 7 different random MDPs. While the entries of SCE-MSPBEM, LSTD(0) and RLSTD(0) appear to be similar, they actually differed in decimal digits that are not shown here for lack of space.

Ex#  SCE-MSPBEM   LSTD(0)      TD(0)          RLSTD(0)     LSPE(0)        GTD2
1    23.3393204   23.3393204   24.581849219   23.3393204   23.354929410   24.93208571
2    23.1428622   23.1428622   24.372033722   23.1428622   23.178814326   24.75593565
3    23.3327844   23.3327848   24.537556372   23.3327848   23.446585398   24.88119648
4    22.9786909   22.9786909   24.194543862   22.9786909   22.987520761   24.53206023
5    22.9502660   22.9502660   24.203561613   22.9502660   22.965571900   24.55473382
6    23.0609354   23.0609354   24.253239213   23.0609354   23.084399716   24.60783237
7    23.2280270   23.2280270   24.481937450   23.2280270   23.244345617   24.83529005
5. Conclusion and Future Work
We proposed, for the first time, an application of the Cross Entropy (CE) method to the problem of prediction in Reinforcement Learning (RL) under the linear function approximation architecture. This task
is accomplished by remodelling the original CE algorithm as a multi-timescale stochastic approximation
algorithm and using it to minimize the Mean Squared Projected Bellman Error (MSPBE). The proof of
convergence to the optimum value using the ODE method is also provided. The theoretical analysis is supplemented by an extensive experimental evaluation that corroborates our claims. Experimental comparisons with state-of-the-art algorithms show that our algorithm is superior in accuracy while remaining competitive in computational efficiency and rate of convergence.
The algorithm can also be extended to non-linear function approximation settings. In [36], a variant of the TD(0)
algorithm is developed and applied in the non-linear function approximation setting, where the convergence
to the local optima is proven. But we believe our approach can converge to the global optimum in the
non-linear case because of the application of a CE-based approach, thus providing a better approximation
to the value function. The algorithm can also be extended to the off-policy case [8, 9], where the sample
trajectories are developed using a behaviour policy which is different from the target policy whose value
function is approximated. This can be achieved by appropriately integrating a weighting ratio [37] in the
recursions. TD learning methods are shown to be divergent in the off-policy setting [10]. So it will be
interesting to see how our algorithm behaves in such a setting. Another future work includes extending
this optimization technique to the control problem to obtain an optimum policy. This can be achieved by
parametrizing the policy space and using the optimization technique in SCE-MSPBEM to search in this
parameter space.
References
[1] R. S. Sutton and A. G. Barto, Introduction to reinforcement learning. MIT Press, New York, USA, 1998.
[2] D. J. White, “A survey of applications of Markov decision processes,” Journal of the Operational Research Society, pp. 1073–
1096, 1993.
[3] D. P. Bertsekas, Dynamic programming and optimal control, vol. 2. Athena Scientific Belmont, USA, 2013.
[4] M. G. Lagoudakis and R. Parr, “Least-squares policy iteration,” The Journal of Machine Learning Research, vol. 4, pp. 1107–
1149, 2003.
[5] G. Konidaris, S. Osentoski, and P. S. Thomas, “Value Function Approximation in Reinforcement Learning Using the Fourier
Basis.,” in AAAI, 2011.
[6] M. Eldracher, A. Staller, and R. Pompl, Function approximation with continuous valued activation functions in CMAC. Citeseer, 1994.
[7] R. S. Sutton, “Learning to predict by the methods of temporal differences,” Machine learning, vol. 3, no. 1, pp. 9–44, 1988.
[8] R. S. Sutton, H. R. Maei, and C. Szepesvári, “A convergent o(n) temporal-difference algorithm for off-policy learning with
linear function approximation,” in Advances in neural information processing systems, pp. 1609–1616, 2009.
[9] R. S. Sutton, H. R. Maei, D. Precup, S. Bhatnagar, D. Silver, C. Szepesvári, and E. Wiewiora, “Fast gradient-descent methods for temporal-difference learning with linear function approximation,” in Proceedings of the 26th Annual International
Conference on Machine Learning, pp. 993–1000, ACM, 2009.
[10] L. Baird, “Residual algorithms: Reinforcement learning with function approximation,” in Proceedings of the twelfth international conference on machine learning, pp. 30–37, 1995.
[11] S. J. Bradtke and A. G. Barto, “Linear least-squares algorithms for temporal difference learning,” Machine Learning, vol. 22,
no. 1-3, pp. 33–57, 1996.
[12] J. A. Boyan, “Technical update: Least-squares temporal difference learning,” Machine Learning, vol. 49, no. 2-3, pp. 233–246,
2002.
[13] A. Nedić and D. P. Bertsekas, “Least squares policy evaluation algorithms with linear function approximation,” Discrete Event
Dynamic Systems, vol. 13, no. 1-2, pp. 79–110, 2003.
[14] C. Dann, G. Neumann, and J. Peters, “Policy evaluation with temporal differences: A survey and comparison,” The Journal
of Machine Learning Research, vol. 15, no. 1, pp. 809–883, 2014.
[15] J. N. Tsitsiklis and B. Van Roy, “An analysis of temporal-difference learning with function approximation,” Automatic Control,
IEEE Transactions on, vol. 42, no. 5, pp. 674–690, 1997.
[16] R. J. Williams and L. C. Baird, “Tight performance bounds on greedy policies based on imperfect value functions,” tech. rep.,
Citeseer, 1993.
[17] B. Scherrer, “Should one compute the Temporal Difference fix point or minimize the Bellman Residual? the unified oblique
projection view,” in 27th International Conference on Machine Learning-ICML 2010, 2010.
[18] M. Zlochin, M. Birattari, N. Meuleau, and M. Dorigo, “Model-based search for combinatorial optimization: A critical survey,”
Annals of Operations Research, vol. 131, no. 1-4, pp. 373–395, 2004.
[19] J. Hu, M. C. Fu, and S. I. Marcus, “A model reference adaptive search method for global optimization,” Operations Research,
vol. 55, no. 3, pp. 549–568, 2007.
[20] E. Zhou, S. Bhatnagar, and X. Chen, “Simulation optimization via gradient-based stochastic search,” in Simulation Conference
(WSC), 2014 Winter, pp. 3869–3879, IEEE, 2014.
[21] M. Dorigo and L. M. Gambardella, “Ant colony system: a cooperative learning approach to the traveling salesman problem,”
Evolutionary Computation, IEEE Transactions on, vol. 1, no. 1, pp. 53–66, 1997.
[22] H. Mühlenbein and G. Paass, “From recombination of genes to the estimation of distributions I. Binary parameters,” in Parallel Problem Solving from Nature - PPSN IV, pp. 178–187, Springer, 1996.
[23] J. Hu, M. C. Fu, and S. I. Marcus, “A model reference adaptive search method for stochastic global optimization,” Communications in Information & Systems, vol. 8, no. 3, pp. 245–276, 2008.
[24] I. Menache, S. Mannor, and N. Shimkin, “Basis function adaptation in temporal difference reinforcement learning,” Annals of
Operations Research, vol. 134, no. 1, pp. 215–238, 2005.
[25] R. Y. Rubinstein and D. P. Kroese, The cross-entropy method: a unified approach to combinatorial optimization, Monte-Carlo
simulation and machine learning. Springer Science & Business Media, 2013.
[26] P.-T. De Boer, D. P. Kroese, S. Mannor, and R. Y. Rubinstein, “A tutorial on the cross-entropy method,” Annals of operations
research, vol. 134, no. 1, pp. 19–67, 2005.
[27] J. Hu and P. Hu, “On the performance of the cross-entropy method,” in Simulation Conference (WSC), Proceedings of the
2009 Winter, pp. 459–468, IEEE, 2009.
[28] V. S. Borkar, “Stochastic approximation: A dynamical systems viewpoint,” Cambridge University Press, 2008.
[29] H. J. Kushner and D. S. Clark, Stochastic approximation for constrained and unconstrained systems. Springer Verlag, New
York, 1978.
[30] H. Robbins and S. Monro, “A stochastic approximation method,” The Annals of Mathematical Statistics, pp. 400–407, 1951.
[31] T. Homem-de Mello, “A study on the cross-entropy method for rare-event probability estimation,” INFORMS Journal on
Computing, vol. 19, no. 3, pp. 381–394, 2007.
[32] C. Kubrusly and J. Gravier, “Stochastic approximation algorithms and applications,” in 1973 IEEE Conference on Decision
and Control including the 12th Symposium on Adaptive Processes, no. 12, pp. 763–766, 1973.
[33] B. Kveton, M. Hauskrecht, and C. Guestrin, “Solving factored MDPs with hybrid state and action variables,” J. Artif. Intell. Res. (JAIR), vol. 27, pp. 153–201, 2006.
[34] R. Schoknecht and A. Merke, “Convergent combinations of reinforcement learning with linear function approximation,” in
Advances in Neural Information Processing Systems, pp. 1579–1586, 2002.
[35] R. Schoknecht and A. Merke, “TD(0) converges provably faster than the residual gradient algorithm,” in ICML, pp. 680–687,
2003.
[36] H. R. Maei, C. Szepesvári, S. Bhatnagar, D. Precup, D. Silver, and R. S. Sutton, “Convergent temporal-difference learning
with arbitrary smooth function approximation,” in Advances in Neural Information Processing Systems, pp. 1204–1212, 2009.
[37] P. W. Glynn and D. L. Iglehart, “Importance sampling for stochastic simulations,” Management Science, vol. 35, no. 11,
pp. 1367–1392, 1989.
Larger is Better: The Effect of Learning Rates
Enjoyed by Stochastic Optimization with
Progressive Variance Reduction
Fanhua Shang
arXiv:1704.04966v1 [cs.LG] 17 Apr 2017
Department of Computer Science and Engineering, The Chinese University of Hong Kong
[email protected]
April 18, 2017
Abstract
In this paper, we propose a simple variant of the original stochastic variance reduced gradient (SVRG) [1], which we hereafter refer to as the variance reduced stochastic gradient descent (VR-SGD). Different from the choices of the snapshot point and starting point in SVRG and its proximal variant, Prox-SVRG [2], the two vectors of each epoch in VR-SGD are set to the average and last iterate of the previous epoch, respectively. This setting allows us to use much larger
learning rates or step sizes than SVRG, e.g., 3/(7L) for VR-SGD vs. 1/(10L) for SVRG, and also makes our convergence
analysis more challenging. In fact, a larger learning rate enjoyed by VR-SGD means that the variance of its stochastic
gradient estimator asymptotically approaches zero more rapidly. Unlike common stochastic methods such as SVRG and
proximal stochastic methods such as Prox-SVRG, we design two different update rules for smooth and non-smooth objective
functions, respectively. In other words, VR-SGD can tackle non-smooth and/or non-strongly convex problems directly without
using any reduction techniques such as quadratic regularizers. Moreover, we analyze the convergence properties of VR-SGD
for strongly convex problems, which show that VR-SGD attains a linear convergence rate. We also provide the convergence
guarantees of VR-SGD for non-strongly convex problems. Experimental results show that the performance of VR-SGD is
significantly better than its counterparts, SVRG and Prox-SVRG, and it is also much better than the best known stochastic
method, Katyusha [3].
Index Terms
Stochastic optimization, stochastic gradient descent (SGD), proximal stochastic gradient, variance reduction, iterate
averaging, snapshot and starting points, strongly convex and non-strongly convex, smooth and non-smooth
• All the codes of VR-SGD and some related variance reduced stochastic methods can be downloaded from the author’s website: https://sites.google.com/site/fanhua217/publications.
1 INTRODUCTION
In this paper, we focus on the following composite convex optimization problem:
    min_{x∈R^d} F(x) := (1/n) Σ_{i=1}^{n} f_i(x) + g(x)        (1)
where f(x) := (1/n) Σ_{i=1}^{n} f_i(x), the f_i(x) : R^d → R, i = 1, . . . , n, are smooth convex functions, and g(x) is a relatively simple
(but possibly non-differentiable) convex function (referred to as a regularizer). The formulation (1) arises in many places
in machine learning, signal processing, data science, statistics and operations research, such as regularized empirical risk
minimization (ERM). For instance, one popular choice of the component function fi (·) in binary classification problems is
the logistic loss, i.e., fi (x) = log(1 + exp(−bi aTi x)), where {(a1 , b1 ), . . . , (an , bn )} is a collection of training examples, and
bi ∈ {±1}. Some popular choices for the regularizer include the `2 -norm regularizer (i.e., g(x) = (λ1 /2)kxk2 ), the `1 -norm
regularizer (i.e., g(x) = λ2 kxk1 ), and the elastic-net regularizer (i.e., g(x) = (λ1 /2)kxk2 +λ2 kxk1 ), where λ1 ≥ 0 and λ2 ≥ 0
are two regularization parameters. Other applications include deep neural networks [1], [4], [5],
[6], group Lasso [7], [8], sparse learning and coding [9], [10], phase retrieval [11], matrix completion [12], [13], conditional
random fields [14], eigenvector computation [15], [16] such as principal component analysis (PCA) and singular value
decomposition (SVD), generalized eigen-decomposition and canonical correlation analysis (CCA) [17].
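For concreteness, the regularized ERM objective (1) with the logistic loss and the elastic-net regularizer described above can be evaluated as follows (a minimal sketch; the function and variable names are ours):

```python
import math

def objective(x, A, b, lam1, lam2):
    """F(x) = (1/n) sum_i log(1 + exp(-b_i <a_i, x>)) + (lam1/2)||x||^2 + lam2*||x||_1."""
    n = len(A)
    f = sum(math.log(1.0 + math.exp(-b[i] * sum(a * w for a, w in zip(A[i], x))))
            for i in range(n)) / n
    g = 0.5 * lam1 * sum(w * w for w in x) + lam2 * sum(abs(w) for w in x)
    return f + g
```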
1.1 Stochastic Gradient Descent
In this paper, we are especially interested in developing efficient algorithms to solve regularized ERM problems involving a
large sum of n component functions. The standard and effective method for solving Problem (1) is the (proximal) gradient
descent (GD) method, including accelerated proximal gradient (APG) [18], [19], [20], [21]. For smooth objective functions,
the update rule of GD is
"
xk+1 = xk − ηk
n
1X
∇fi (xk ) + ∇g(xk )
n i=1
#
(2)
for k = 1, 2, . . ., where ηk > 0 is commonly referred to as the step-size in optimization or the learning rate in machine
learning. When the regularizer g(·) is non-smooth, e.g., the `1 -norm regularizer, we need to introduce the following proximal
operator into (2),
    x_{k+1} = Prox_{η_k, g}(y_k) := arg min_{x∈R^d} { (1/(2η_k)) ‖x − y_k‖² + g(x) }        (3)

where y_k = x_k − (η_k/n) Σ_{i=1}^{n} ∇f_i(x_k).
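One full proximal gradient step, i.e., the gradient step of (2) followed by the proximal operator of (3), can be sketched for the ℓ1 regularizer g(x) = λ‖x‖1, whose proximal operator is elementwise soft-thresholding (an illustration under that choice of g; names are ours):

```python
def soft_threshold(z, t):
    """Proximal operator of t*||.||_1, applied elementwise."""
    return [(abs(v) - t) * (1.0 if v > 0 else -1.0) if abs(v) > t else 0.0 for v in z]

def proximal_gd_step(x, grads, eta, lam):
    """y_k = x_k - (eta/n) * sum_i grad f_i(x_k), then x_{k+1} = prox_{eta,g}(y_k)
    with g = lam*||.||_1, following Eqs. (2)-(3). `grads` holds the per-component
    gradient vectors evaluated at x_k."""
    n, d = len(grads), len(x)
    full_grad = [sum(g[j] for g in grads) / n for j in range(d)]
    y = [x[j] - eta * full_grad[j] for j in range(d)]
    return soft_threshold(y, eta * lam)
```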
The GD methods mentioned above have been proven to achieve linear convergence
for strongly convex problems, and APG attains the optimal convergence rate of O(1/T²) for non-strongly convex problems,
where T denotes the number of iterations. However, the per-iteration cost of all the batch (or deterministic) methods is
O(nd), which is expensive.
Instead of evaluating the full gradient of f (·) at each iteration, an effective alternative is the stochastic (or incremental)
gradient descent (SGD) method [22]. SGD only evaluates the gradient of a single component function at each iteration, thus
it has much lower per-iteration cost, O(d), and has been successfully applied to many large-scale learning problems [4],
[23], [24], [25]. The update rule of SGD is formulated as follows:
    x_{k+1} = x_k − η_k [∇f_{i_k}(x_k) + ∇g(x_k)]        (4)
where η_k ∝ 1/k, and the index i_k is chosen uniformly at random from {1, . . . , n}. Although ∇f_{i_k}(x_k) is an unbiased estimator of ∇f(x_k), i.e., E[∇f_{i_k}(x_k)] = ∇f(x_k), the variance of the stochastic gradient estimator ∇f_{i_k}(x_k) may be large due to the variance of random sampling [1]. Thus, stochastic gradient estimators are also called “noisy gradients”, and the step size must be gradually reduced, leading to slow convergence. In particular, even under the strongly convex condition, standard SGD attains only a sub-linear convergence rate of O(1/T) [26], [27].
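A single SGD update (4) might look like the following sketch (smooth g assumed; the gradient callables and names are ours):

```python
import random

def sgd_step(x, grad_fi, grad_g, n, k, eta0, rng):
    """x_{k+1} = x_k - eta_k * [grad f_{i_k}(x_k) + grad g(x_k)], with eta_k = eta0/k
    and i_k drawn uniformly from {0, ..., n-1}."""
    eta = eta0 / k
    i = rng.randrange(n)
    gi = grad_fi(i, x)
    gg = grad_g(x)
    return [x[j] - eta * (gi[j] + gg[j]) for j in range(len(x))]
```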
1.2 Accelerated SGD
Recently, many SGD methods with variance reduction techniques were proposed, such as stochastic average gradient
(SAG) [28], stochastic variance reduced gradient (SVRG) [1], stochastic dual coordinate ascent (SDCA) [29], SAGA [30],
stochastic primal-dual coordinate (SPDC) [31], and their proximal variants, such as Prox-SAG [32], Prox-SVRG [2] and
Prox-SDCA [33]. All these accelerated SGD methods can use a constant step size η instead of diminishing step sizes for
SGD, and they fall into the following three categories: primal methods such as SVRG and SAGA, dual methods such as SDCA, and primal-dual methods such as SPDC. In essence, many of the primal methods use the full gradient at the snapshot x̃ or average gradients to progressively reduce the variance of stochastic gradient estimators, as do the dual and primal-dual methods, which has led to a revolution in the area of first-order methods [34]. Thus, they are also known as hybrid gradient descent methods [35] or semi-stochastic gradient descent methods [36]. In particular, under the strongly convex condition, the accelerated SGD methods enjoy linear convergence rates and an overall complexity of O((n + L/µ) log(1/ε)) to obtain an ε-suboptimal solution, where each fi(·) is L-smooth and g(·) is µ-strongly convex. This complexity bound shows that they converge significantly faster than deterministic APG methods, whose complexity is O(n√(L/µ) log(1/ε)) [36].
SVRG [1] and its proximal variant, Prox-SVRG [2], are particularly attractive because of their low storage requirement
compared with other stochastic methods such as SAG, SAGA and SDCA, which require storing all the gradients of the n component functions fi(·) or all dual variables. At the beginning of each epoch in SVRG, the full gradient ∇f(x̃) is computed at the snapshot point x̃, which is updated periodically. The update rule for the smooth optimization problem (1) is given by

    ∇̃f_{i_k}(x_k) = ∇f_{i_k}(x_k) − ∇f_{i_k}(x̃) + ∇f(x̃),        (5a)
    x_{k+1} = x_k − η [∇̃f_{i_k}(x_k) + ∇g(x_k)].        (5b)

When g(·) ≡ 0, the update rule in (5b) becomes the original one in [1], i.e., x_{k+1} = x_k − η ∇̃f_{i_k}(x_k). It is not hard to verify that the variance of the SVRG estimator ∇̃f_{i_k}(x_k), i.e., E‖∇̃f_{i_k}(x_k) − ∇f(x_k)‖², can be much smaller than that of the SGD estimator ∇f_{i_k}(x_k), i.e., E‖∇f_{i_k}(x_k) − ∇f(x_k)‖². However, for non-strongly convex problems, the accelerated SGD methods mentioned above converge much slower than batch APG methods such as FISTA [21], namely, O(1/T) vs. O(1/T²).
More recently, many acceleration techniques were proposed to further speed up the variance-reduced stochastic methods mentioned above. These techniques mainly include Nesterov’s acceleration technique used in [24], [37], [38], [39], [40], reducing the number of gradient calculations in the early iterations [34], [41], [42], the projection-free property of the conditional gradient method (also known as the Frank-Wolfe algorithm [43]) as in [44], the stochastic sufficient decrease technique [45], and the momentum acceleration trick in [3], [34], [46]. More specifically, [40] proposed an accelerating Catalyst framework and achieved the complexity of O((n + √(nL/µ)) log(L/µ) log(1/ε)) for strongly convex problems. [3] and [46] proved that their accelerated methods can attain the best known complexity of O(n log(1/ε) + √(nL/ε)) for non-strongly convex problems. This overall complexity matches the theoretical upper bound provided in [47]. Katyusha [3] and point-SAGA [48] achieve the best-known complexity of O((n + √(nL/µ)) log(1/ε)) for strongly convex problems, which is identical to the upper complexity bound in [47]. That is, Katyusha is the best known stochastic optimization method for both strongly convex and non-strongly convex problems. Its proximal gradient update rules are formulated as follows:
    x_{k+1} = w1 y_k + w2 x̃ + (1 − w1 − w2) z_k,        (6a)
    y_{k+1} = arg min_{y∈R^d} { (1/(2η)) ‖y − y_k‖² + yᵀ ∇̃f_{i_k}(x_{k+1}) + g(y) },        (6b)
    z_{k+1} = arg min_{z∈R^d} { (3L/2) ‖z − x_{k+1}‖² + zᵀ ∇̃f_{i_k}(x_{k+1}) + g(z) }        (6c)
where w1 , w2 ∈ [0, 1] are two momentum parameters. To eliminate the need for parameter tuning, η is set to 1/(3w1 L),
and w2 is fixed to 0.5 in [3]. Unfortunately, most of the accelerated methods mentioned above, including Katyusha, require
at least two auxiliary variables and two momentum parameters, which lead to complicated algorithm design and high
per-iteration complexity [34].
1.3 Our Contributions
From the above discussion, one can see that most of the accelerated stochastic variance reduction methods such as [3], [34],
[37], [42], [44], [45] and applications such as [9], [10], [13], [15], [16], [49] are based on the stochastic variance reduced gradient
(SVRG) method [1]. Thus, any key improvement on SVRG is very important for the research of stochastic optimization.
In this paper, we propose a simple variant of the original SVRG [1], which is referred to as the variance reduced stochastic
gradient descent (VR-SGD). The snapshot point and starting point of each epoch in VR-SGD are set to the average and last
iterate of the previous epoch, respectively. Different from the settings of SVRG and Prox-SVRG [2] (i.e., the last iterate for
the two points of the former, while the average of the previous epoch for those of the latter), the two points in VR-SGD are
different, which makes our convergence analysis more challenging than SVRG and Prox-SVRG. Our empirical results show
that the performance of VR-SGD is significantly better than its counterparts, SVRG and Prox-SVRG. Impressively, VR-SGD
with a sufficiently large learning rate performs much better than the best known stochastic method, Katyusha [3]. The main
contributions of this paper are summarized below.
• The snapshot point and starting point of VR-SGD are set to two different vectors. That is, for all epochs, except the first one, x̃^s = (1/m) Σ_{k=1}^{m} x_k^s (denoted by Option I) or x̃^s = (1/(m−1)) Σ_{k=1}^{m−1} x_k^s (denoted by Option II), and x_0^{s+1} = x_m^s. In particular, we find that the setting of VR-SGD allows us to take much larger learning rates or step sizes than SVRG, e.g., 3/(7L) vs. 1/(10L), and thus significantly speeds up the convergence of SVRG and Prox-SVRG in practice. Moreover, VR-SGD has an advantage over SVRG in terms of robustness of learning rate selection.

• Different from proximal stochastic gradient methods, e.g., Prox-SVRG and Katyusha, which have a unified update rule for the two cases of smooth and non-smooth objectives (see Section 2.2 for details), VR-SGD employs two different update rules for the two cases, respectively, as in (12) and (13) below. Empirical results show that gradient update rules as in (12) for smooth optimization problems are better choices than proximal update formulas as in (10).

• Finally, we theoretically analyze the convergence properties of VR-SGD with Option I or Option II for strongly convex problems, which show that VR-SGD attains a linear convergence rate. We also give the convergence guarantees of VR-SGD with Option I or Option II for non-strongly convex objective functions.
2 PRELIMINARY AND RELATED WORK
Throughout this paper, we use ‖·‖ to denote the ℓ2-norm (also known as the standard Euclidean norm), and ‖·‖1 is the ℓ1-norm, i.e., ‖x‖1 = Σ_{i=1}^{d} |x_i|. ∇f(·) denotes the full gradient of f(·) if it is differentiable, or ∂f(·) the subgradient if f(·) is only Lipschitz continuous. For each epoch s ∈ [S] and inner iteration k ∈ {0, 1, . . . , m−1}, i_k^s ∈ [n] is the randomly chosen index. We mostly focus on the case of Problem (1) when each component function fi(·) is L-smooth¹, and F(·) is µ-strongly convex. The two common assumptions are defined as follows.
2.1 Basic Assumptions
Assumption 1 (Smoothness). Each convex function fi (·) is L-smooth, that is, there exists a constant L > 0 such that for all
x, y ∈ Rd ,
    ‖∇fi(x) − ∇fi(y)‖ ≤ L‖x − y‖.        (7)
Assumption 2 (Strong Convexity). The convex function F(x) is µ-strongly convex, i.e., there exists a constant µ > 0 such that for all x, y ∈ R^d,

    F(y) ≥ F(x) + ⟨∇F(x), y − x⟩ + (µ/2) ‖x − y‖².        (8)

Note that when the regularizer g(·) is non-smooth, the inequality in (8) needs to be revised by simply replacing the gradient ∇F(x) in (8) with an arbitrary sub-gradient of F(·) at x. In contrast, for a non-strongly convex or general convex function, the inequality in (8) can always be satisfied with µ = 0.
1. Actually, we can extend all the theoretical results in this paper from the case in which the gradients of all component functions have the same Lipschitz constant L to the more general case in which some fi(·) have different degrees of smoothness.
Algorithm 1 SVRG (Option I) and Prox-SVRG (Option II)
Input: The number of epochs S, the number of iterations m per epoch, and step size η.
Initialize: x̃⁰.
1:  for s = 1, 2, . . . , S do
2:      x_0^s = x̃^{s−1};    % Initiate the variable x_0^s
3:      µ̃^s = (1/n) Σ_{i=1}^{n} ∇f_i(x̃^{s−1});    % Compute the full gradient
4:      for k = 0, 1, . . . , m − 1 do
5:          Pick i_k^s uniformly at random from [n];
6:          ∇̃f_{i_k^s}(x_k^s) = ∇f_{i_k^s}(x_k^s) − ∇f_{i_k^s}(x̃^{s−1}) + µ̃^s;    % The stochastic gradient estimator
7:          Option I: x_{k+1}^s = x_k^s − η [∇̃f_{i_k^s}(x_k^s) + ∇g(x_k^s)],    % Smooth case of g(·)
                or x_{k+1}^s = Prox_{η, g}(x_k^s − η ∇̃f_{i_k^s}(x_k^s));    % Non-smooth case of g(·)
8:          Option II: x_{k+1}^s = arg min_{y∈R^d} { g(y) + yᵀ ∇̃f_{i_k^s}(x_k^s) + (1/(2η)) ‖y − x_k^s‖² };    % Proximal update
9:      end for
10:     Option I: x̃^s = x_m^s;    % Last iterate for snapshot x̃
11:     Option II: x̃^s = (1/m) Σ_{k=1}^{m} x_k^s;    % Iterate averaging for snapshot x̃
12: end for
Output: x̃^S
2.2 Related Work
To speed up standard and proximal SGD methods, many variance reduced stochastic methods [28], [29], [30], [35] have
been proposed for some special cases of Problem (1). In the case when each fi (x) is L-smooth, f (x) is µ-strongly convex,
and g(x) ≡ 0, Roux et al. [28] proposed a stochastic average gradient (SAG) method, which attains a linear convergence rate.
However, SAG needs to store all gradients, as do other incremental aggregated gradient methods such as SAGA [30], so that O(nd) storage is required for general problems [41]. Similarly, SDCA [29] requires storage of all dual variables [1], which scales as O(n). In contrast, SVRG [1], as well as its proximal variant, Prox-SVRG [2], has a similar convergence rate to SAG and SDCA but without the memory requirements of all gradients and dual variables. In particular, the SVRG
estimator in (5a) (independently introduced in [1], [35]) may be the most popular choice for stochastic gradient estimators.
Besides, other stochastic gradient estimators include the SAGA estimator in [30] and the stochastic recursive gradient
estimator in [50]. Although the original SVRG in [1] only has convergence guarantees for a special case of Problem (1),
when each fi (x) is L-smooth, f (x) is µ-strongly convex, and g(x) ≡ 0, one can extend SVRG to the proximal setting by
introducing the proximal operator in (3), as shown in Line 7 of Algorithm 1. In other words, when g(·) is non-smooth, the
update rule of SVRG becomes
    x_{k+1}^s = arg min_{x∈R^d} { (1/(2η)) ‖x − [x_k^s − η ∇̃f_{i_k^s}(x_k^s)]‖² + g(x) }.        (9)
Some researchers [8], [51], [52] have incorporated variance reduction techniques into ADMM for minimizing convex composite objective functions subject to an equality constraint. [15], [16], [17] applied efficient stochastic solvers to compute
leading eigenvectors of a symmetric matrix or generalized eigenvectors of two symmetric matrices. The first such method
is VR-PCA by Shamir [15], and the convergence properties of the VR-PCA algorithm for such a non-convex problem are
also provided. Garber et al. [16] analyzed the convergence rate of SVRG when f (·) is a convex function that is a sum
of non-convex component functions. Moreover, [6] and [53] proved that SVRG and SAGA with minor modifications can
converge asymptotically to a stationary point of non-convex finite-sum problems. Some distributed variants [54], [55] of
accelerated SGD methods have also been proposed.
An important class of stochastic methods is the proximal stochastic gradient (Prox-SG) method, such as Prox-SVRG [2],
SAGA [30], and Katyusha [3]. Different from standard variance reduction SGD methods such as SVRG, which have a
stochastic gradient update as in (5b), the Prox-SG method has a unified update rule for both smooth and non-smooth cases of g(·). For instance, the update rule of Prox-SVRG [2] is formulated as follows:

    x_{k+1}^s = arg min_{y∈R^d} { g(y) + yᵀ ∇̃f_{i_k^s}(x_k^s) + (1/(2η)) ‖y − x_k^s‖² }.        (10)

For the sake of completeness, the details of Prox-SVRG [2] are shown in Algorithm 1 with Option II. When g(·) is the widely used ℓ2-norm regularizer, i.e., g(·) = (λ1/2)‖·‖², the proximal update formula in (10) becomes

    x_{k+1}^s = (1/(1 + λ1η)) [x_k^s − η ∇̃f_{i_k^s}(x_k^s)].        (11)
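The closed form (11) can be sanity-checked against the definition (10): for g = (λ1/2)‖·‖² the objective in (10) is smooth, so its derivative must vanish at the minimizer. A quick numerical sketch (here v stands for the point x_k^s − η∇̃f, and the names are ours):

```python
def prox_l2(v, eta, lam1):
    """Closed-form proximal update (11) for g = (lam1/2)*||.||^2."""
    return [vj / (1.0 + lam1 * eta) for vj in v]

# Per coordinate, the objective in (10) reduces to (1/(2*eta))*(y - v)^2 + (lam1/2)*y^2;
# its derivative (y - v)/eta + lam1*y should vanish at the prox point.
eta, lam1, v = 0.5, 0.1, [2.0]
y = prox_l2(v, eta, lam1)[0]
residual = (y - v[0]) / eta + lam1 * y   # ~0 up to floating point
```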
3 VARIANCE-REDUCED STOCHASTIC GRADIENT DESCENT
In this section, we propose an efficient variance reduced stochastic gradient descent (VR-SGD) method with iterate averaging.
Different from the choices of the snapshot and starting points in SVRG [1] and Prox-SVRG [2], the two vectors of each
epoch in VR-SGD are set to the average and last iterate of the previous epoch, respectively. Unlike common stochastic
gradient methods such as SVRG and proximal stochastic gradient methods such as Prox-SVRG, we design two different
update rules for smooth and non-smooth objective functions, respectively.
3.1 Iterate Averaging
Like SVRG and Katyusha, VR-SGD is also divided into $S$ epochs, and each epoch consists of $m$ stochastic gradient steps, where $m$ is usually chosen to be $\Theta(n)$ as in [1], [2], [3]. Within each epoch, we need to compute the full gradient $\nabla f(\tilde{x}^s)$ at the snapshot $\tilde{x}^s$ and use it to define the variance reduced stochastic gradient estimator $\widetilde{\nabla} f_{i_k}(x_k^s)$ as in [1]. Unlike SVRG, whose snapshot point is set to the last iterate of the previous epoch, the snapshot $\tilde{x}^s$ of each epoch in VR-SGD is set to the average of the previous epoch, e.g., $\tilde{x}^s = \frac{1}{m-1}\sum_{k=1}^{m-1} x_k^s$ in Option I of Algorithm 2, which leads to better robustness to gradient noise², as also suggested in [34], [45], [59]. In fact, the choice of Option II in Algorithm 2, i.e., $\tilde{x}^s = \frac{1}{m}\sum_{k=1}^{m} x_k^s$, also works well in practice, as shown in Fig. 1. Therefore, we provide convergence guarantees for our algorithms (including Algorithm 2) with both Option I and Option II in the next section. In particular, we find that one of the effects of the choice of Option I or Option II in Algorithm 2 is to allow much larger learning rates or step sizes than SVRG in practice, e.g., $3/(7L)$ for VR-SGD vs. $1/(10L)$ for SVRG (see Fig. 2 and Section 5.3 for details). This is the main reason why VR-SGD converges significantly faster than SVRG. Indeed, the larger learning rate enjoyed by VR-SGD means that the variance of its stochastic gradient estimator goes asymptotically to zero faster.
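The difference between the two averaging options and the starting-point rule can be sketched as follows (the iterates below are illustrative stand-ins, not actual optimization iterates):

```python
import numpy as np

# Hypothetical iterates of one epoch with m = 5 inner steps: x_0, x_1, ..., x_m.
m = 5
iterates = [np.array([float(k)]) for k in range(m + 1)]

# Option I: snapshot is the average of x_1, ..., x_{m-1}.
snapshot_I = sum(iterates[1:m]) / (m - 1)

# Option II: snapshot is the average of x_1, ..., x_m.
snapshot_II = sum(iterates[1:m + 1]) / m

# The starting point of the next epoch is always the last iterate x_m.
start_next = iterates[m]
```

Here Option I averages the iterates 1 through 4 (mean 2.5) while Option II also includes the last iterate (mean 3.0), so the two snapshots differ even though the next epoch always starts from $x_m^s$.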
Unlike Prox-SVRG [2], whose starting point is initialized to the average of the previous epoch, the starting point $x_0^{s+1}$ of each epoch in VR-SGD is set to the last iterate $x_m^s$ of the previous epoch. That is, the last iterate of the previous epoch becomes the new starting point in VR-SGD, while the two choices of Prox-SVRG are completely different, thereby leading to relatively slow convergence in general. It is clear that both the starting point and the snapshot point of each epoch in the original SVRG [1] are set to the last iterate of the previous epoch³, while the two points of Prox-SVRG [2] are set to the average of the previous epoch (as also suggested in [1]). Different from the settings in SVRG and Prox-SVRG, the starting and snapshot points in VR-SGD are set to the two different vectors mentioned above, which makes the convergence analysis of VR-SGD more challenging than that of SVRG and Prox-SVRG, as shown in Section 4.
2. It should be emphasized that the noise introduced by random sampling is inevitable, and generally slows down the convergence speed in a sense. However, SGD and its variants are probably the most used optimization algorithms for deep learning [56]. In particular, [57] has shown that by adding noise at each step, noisy gradient descent can escape saddle points efficiently and converge to a local minimum of non-convex optimization problems, such as those arising from deep neural networks [58].
3. Note that the theoretical convergence of the original SVRG [1] relies on its Option II, i.e., both $\tilde{x}^s$ and $x_0^{s+1}$ are set to $x_k^s$, where $k$ is randomly chosen from $\{1, 2, \ldots, m\}$. However, the empirical results in [1] suggest that Option I is a better choice than its Option II, and the convergence guarantee of SVRG with Option I for strongly convex objective functions is provided in [60].
Algorithm 2 VR-SGD for strongly convex objectives
Input: the number of epochs $S$, the number of iterations $m$ per epoch, and the step size $\eta$.
Initialize: $x_0^1 = \tilde{x}^0$.
1: for $s = 1, 2, \ldots, S$ do
2:   $\tilde{\mu}^s = \frac{1}{n}\sum_{i=1}^{n} \nabla f_i(\tilde{x}^{s-1})$;  % Compute the full gradient
3:   for $k = 0, 1, \ldots, m-1$ do
4:     Pick $i_k^s$ uniformly at random from $[n]$;
5:     $\widetilde{\nabla} f_{i_k^s}(x_k^s) = \nabla f_{i_k^s}(x_k^s) - \nabla f_{i_k^s}(\tilde{x}^{s-1}) + \tilde{\mu}^s$;  % The stochastic gradient estimator
6:     $x_{k+1}^s = x_k^s - \eta\,[\widetilde{\nabla} f_{i_k^s}(x_k^s) + \nabla g(x_k^s)]$,  % Smooth case of $g(\cdot)$
       or $x_{k+1}^s = \mathrm{Prox}_{\eta,\, g}\big(x_k^s - \eta\,\widetilde{\nabla} f_{i_k^s}(x_k^s)\big)$;  % Non-smooth case of $g(\cdot)$
7:   end for
8:   Option I: $\tilde{x}^s = \frac{1}{m-1}\sum_{k=1}^{m-1} x_k^s$;  % Iterate averaging for snapshot $\tilde{x}$
9:   Option II: $\tilde{x}^s = \frac{1}{m}\sum_{k=1}^{m} x_k^s$;  % Iterate averaging for snapshot $\tilde{x}$
10:  $x_0^{s+1} = x_m^s$;  % Initiate $x_0^{s+1}$ for the next epoch
11: end for
Output: $\hat{x}^S = \tilde{x}^S$ if $F(\tilde{x}^S) \le F\big(\frac{1}{S}\sum_{s=1}^{S}\tilde{x}^s\big)$, and $\hat{x}^S = \frac{1}{S}\sum_{s=1}^{S}\tilde{x}^s$ otherwise.

[Fig. 1 here: (a) Logistic regression; (b) Ridge regression.]
Fig. 1. Comparison of VR-SGD with Option I (denoted by VR-SGD-I) and Option II (denoted by VR-SGD-II) for solving $\ell_2$-norm (i.e., $(\lambda/2)\|\cdot\|^2$) regularized logistic regression and ridge regression problems on the Covtype data set. In each plot, the vertical axis shows the objective value minus the minimum, and the horizontal axis is the number of effective passes. Note that the blue lines stand for the results where $\lambda = 10^{-4}$, while the red lines correspond to the results where $\lambda = 10^{-5}$ (best viewed in colors).
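To make the structure of Algorithm 2 concrete, the following is a minimal NumPy sketch of one VR-SGD epoch for ridge regression (smooth update rule, Option I averaging). The problem instance, step size, and epoch counts are illustrative choices for this sketch, not the paper's experimental setup.

```python
import numpy as np

def vr_sgd_epoch(x0, X, y, lam, eta, m, snapshot, rng):
    """One epoch of VR-SGD (smooth case, Option I) for ridge regression:
    f_i(x) = 0.5*(X[i] @ x - y[i])**2, g(x) = 0.5*lam*||x||^2.
    A minimal sketch, not the authors' reference implementation."""
    n = len(y)
    # Full gradient of the smooth part at the snapshot point.
    mu = X.T @ (X @ snapshot - y) / n
    x, inner = x0.copy(), []
    for _ in range(m):
        i = rng.integers(n)
        # Variance-reduced stochastic gradient estimator (SVRG estimator).
        v = X[i] * (X[i] @ x - y[i]) - X[i] * (X[i] @ snapshot - y[i]) + mu
        x = x - eta * (v + lam * x)      # smooth update rule (12)
        inner.append(x.copy())
    new_snapshot = sum(inner[:-1]) / (m - 1)   # Option I averaging
    return x, new_snapshot   # x becomes the next epoch's starting point

rng = np.random.default_rng(0)
X = rng.standard_normal((50, 5))
y = rng.standard_normal(50)
x = np.zeros(5)
snap = x.copy()
for _ in range(30):
    x, snap = vr_sgd_epoch(x, X, y, lam=0.1, eta=0.02, m=100, snapshot=snap, rng=rng)
```

Note that the last inner iterate is returned as the next starting point while the snapshot is the Option I average, mirroring steps 8 and 10 of Algorithm 2.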
3.2 The Algorithm for Strongly Convex Objectives
In this part, we propose an efficient VR-SGD algorithm to solve strongly convex objective functions, as outlined in Algorithm 2. It is well known that the original SVRG [1] only works for the case of smooth and strongly convex objective functions. However, in many machine learning applications, e.g., elastic net regularized logistic regression, the strongly convex objective function $F(x)$ is non-smooth. To solve this class of problems, the proximal variant of SVRG, Prox-SVRG [2], was subsequently proposed. Unlike SVRG and Prox-SVRG, VR-SGD can not only solve smooth objective functions, but also directly tackle non-smooth ones. That is, when the regularizer $g(x)$ is smooth, e.g., the $\ell_2$-norm regularizer, the update rule of VR-SGD is
$$x_{k+1}^s = x_k^s - \eta\,\big[\widetilde{\nabla} f_{i_k^s}(x_k^s) + \nabla g(x_k^s)\big]. \quad (12)$$
[Fig. 2 here: (a) Logistic regression: $\lambda = 10^{-4}$ (left) and $\lambda = 10^{-5}$ (right); (b) Ridge regression: $\lambda = 10^{-4}$ (left) and $\lambda = 10^{-5}$ (right).]
Fig. 2. Comparison of SVRG [1] and VR-SGD with different learning rates for solving $\ell_2$-norm (i.e., $(\lambda/2)\|\cdot\|^2$) regularized logistic regression and ridge regression problems on the Covtype data set. In each plot, the vertical axis shows the objective value minus the minimum, and the horizontal axis is the number of effective passes. Note that the blue lines stand for the results of SVRG with different learning rates, while the red lines correspond to the results of VR-SGD with different learning rates (best viewed in colors).
When $g(x)$ is non-smooth, e.g., the $\ell_1$-norm regularizer, the update rule of VR-SGD becomes
$$x_{k+1}^s = \mathrm{Prox}_{\eta,\, g}\big(x_k^s - \eta\,\widetilde{\nabla} f_{i_k^s}(x_k^s)\big). \quad (13)$$
Different from the proximal stochastic gradient methods such as Prox-SVRG [2], all of which have a unified update rule
as in (10) for both the smooth and non-smooth cases of g(·), VR-SGD has two different update rules for the two cases, as
stated in (12) and (13). This leads to the following advantage over the Prox-SG methods: the stochastic gradient update
rule in (12) usually outperforms the proximal stochastic gradient update rule in (11), as well as the two classes of update
rules for Katyusha [3] (see Section 5.4 for details).
Fig. 2 demonstrates that VR-SGD has a significant advantage over SVRG in terms of robustness of learning rate selection.
That is, VR-SGD yields good performance within the range of the learning rate between 0.1/L and 0.4/L, whereas the
performance of SVRG is very sensitive to the selection of learning rates. Thus, VR-SGD is convenient to apply in various
real-world problems of large-scale machine learning. In fact, VR-SGD can use much larger learning rates than SVRG for
logistic regression problems in practice, e.g., 6/(5L) for VR-SGD vs. 1/(10L) for SVRG, as shown in Fig. 2(a).
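For illustration, with the $\ell_1$-norm regularizer $g(x) = \lambda\|x\|_1$ the proximal update in (13) reduces to soft-thresholding. A minimal sketch (the vectors and constants below are illustrative values, and `v` merely stands in for the variance-reduced gradient estimator):

```python
import numpy as np

def prox_l1(z, step, lam):
    """Proximal operator of g(x) = lam * ||x||_1 with step size `step`:
    argmin_x (1/(2*step)) * ||x - z||^2 + lam * ||x||_1  (soft-thresholding)."""
    return np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)

# Non-smooth update rule (13): a proximal step after the gradient step.
x = np.array([0.5, -0.2, 1.0])
v = np.array([2.0, -3.0, 0.1])   # stand-in for the VR gradient estimator
eta, lam = 0.1, 1.0
x_next = prox_l1(x - eta * v, eta, lam)
```

Here the gradient step gives $[0.3, 0.1, 0.99]$ and soft-thresholding by $\eta\lambda = 0.1$ yields $[0.2, 0.0, 0.89]$; the second coordinate is set exactly to zero, which is the sparsity-inducing effect of the $\ell_1$ prox.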
3.3 The Algorithm for Non-Strongly Convex Objectives
Although many variance reduced stochastic methods have been proposed, most of them, including SVRG and Prox-SVRG, only have convergence guarantees for the case of Problem (1) in which the objective function $F(x)$ is strongly convex. However, $F(x)$ may be non-strongly convex in many machine learning applications, such as Lasso and $\ell_1$-norm regularized logistic regression. As suggested in [3], [61], this class of problems can be transformed into strongly convex ones by adding a proximal term $(\tau/2)\|x - x_0^s\|^2$, which can be efficiently solved by Algorithm 2. However, this reduction technique may degrade the performance of the involved algorithms both in theory and in practice [42]. Thus, we present an efficient VR-SGD algorithm for directly solving the non-strongly convex problem (1), as outlined in Algorithm 3.
The main difference between Algorithm 2 and Algorithm 3 is the setting of the learning rate. Similar to Algorithm 2, the learning rate $\eta_s$ of Algorithm 3 can be fixed to a constant. Inspired by existing accelerated stochastic algorithms [3], [34], the learning rate $\eta_s$ in Algorithm 3 can also be gradually increased, which in principle leads to faster convergence (see Section 5.5 for details). Different from existing stochastic methods such as Katyusha [3], the update rule of the learning rate $\eta_s$ for Algorithm 3 is defined as follows: $\eta_0 = c$, where $c > 0$ is an initial learning rate, and for any $s \ge 1$,
$$\eta_s = \eta_0 / \max\{\alpha,\, 2/(s+1)\}, \quad (14)$$
where $0 < \alpha \le 1$ is a given constant, e.g., $\alpha = 0.2$.
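The schedule in (14) can be sketched as follows; with $\alpha = 0.2$ the rate grows from $\eta_0$ and is capped at $\eta_0/\alpha = 5\eta_0$ once $2/(s+1) \le \alpha$ (the values below are illustrative):

```python
def lr_schedule(eta0, s, alpha=0.2):
    """Learning rate of epoch s >= 1 as in Eq. (14):
    eta_s = eta0 / max(alpha, 2/(s+1)).
    The rate increases with s until 2/(s+1) <= alpha, then stays at eta0/alpha."""
    return eta0 / max(alpha, 2.0 / (s + 1))

# With eta0 = 0.1 and alpha = 0.2: epochs 1, 3, 9, 100.
rates = [lr_schedule(0.1, s) for s in (1, 3, 9, 100)]
```

For $s = 1$ the max is $1$, so $\eta_1 = \eta_0$; for $s = 3$ it is $0.5$; and from $s = 9$ onward the cap $\alpha = 0.2$ is active, so $\eta_s = 5\eta_0$.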
3.4 Complexity Analysis
From Algorithms 2 and 3, we can see that the per-iteration cost of VR-SGD is dominated by the computation of $\nabla f_{i_k^s}(x_k^s)$, $\nabla f_{i_k^s}(\tilde{x}^{s-1})$, and $\nabla g(x_k^s)$ or the proximal update in (13). Thus, the complexity is $O(d)$, as low as that of SVRG [1] and Prox-SVRG [2]. In fact, for some ERM problems, we can save the intermediate gradients $\nabla f_i(\tilde{x}^{s-1})$ in the computation of $\tilde{\mu}^s$, which generally requires $O(n)$ additional storage. As a result, each epoch only requires $(n + m)$ component gradient evaluations. In addition, for extremely sparse data, we can introduce the lazy update tricks in [36], [62], [63] to our algorithms, and perform the update steps in (12) and (13) only for the non-zero dimensions of each example, rather than all dimensions. In other words, the per-iteration complexity of VR-SGD can be improved from $O(d)$ to $O(d')$, where $d' \le d$ is the sparsity of the feature vectors. Moreover, VR-SGD has a much lower per-iteration complexity than existing accelerated stochastic variance reduction methods such as Katyusha [3], which have at least two more update rules for additional variables, as shown in (6a)-(6c).
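The gradient-caching trick mentioned above can be sketched for least-squares losses, where caching one residual per example during the full-gradient pass suffices to reuse $\nabla f_i(\tilde{x}^{s-1})$ in the inner loop (a sketch under illustrative data, not the paper's implementation):

```python
import numpy as np

def full_gradient_with_cache(X, y, x_snap):
    """Compute the full gradient at the snapshot and cache the per-example
    residuals, so inner iterations reuse X[i] * residuals[i] instead of
    recomputing grad f_i(snapshot). For least-squares losses this needs only
    O(n) extra storage (one scalar per example)."""
    residuals = X @ x_snap - y            # cache: determines each grad f_i
    mu = X.T @ residuals / len(y)         # full gradient at the snapshot
    return mu, residuals

rng = np.random.default_rng(1)
X = rng.standard_normal((20, 3))
y = rng.standard_normal(20)
snap = rng.standard_normal(3)
mu, res = full_gradient_with_cache(X, y, snap)
i = 7
cached_grad_i = X[i] * res[i]             # reused in every inner step
```

With this cache, each epoch costs $n$ component gradients for the full pass plus $m$ fresh gradients at the current iterates, i.e., $(n+m)$ evaluations in total.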
3.5 Extensions of VR-SGD
It has been shown in [36], [37] that mini-batching can effectively decrease the variance of stochastic gradient estimates. In this part, we first extend the proposed VR-SGD method to the mini-batch setting, together with its convergence results below. Here, we denote by $b$ the mini-batch size and by $I_k^s \subset [n]$ the random index set selected for each outer iteration $s \in [S]$ and inner iteration $k \in \{0, 1, \ldots, m-1\}$. The variance reduced stochastic gradient estimator in (5a) becomes
$$\widetilde{\nabla} f_{I_k^s}(x_k^s) = \frac{1}{b}\sum_{i \in I_k^s}\big[\nabla f_i(x_k^s) - \nabla f_i(\tilde{x}^{s-1})\big] + \nabla f(\tilde{x}^{s-1}),$$
where $I_k^s \subset [n]$ is a mini-batch of size $b$. If some component functions are non-smooth, we can use the proximal operator oracle [61] or Nesterov's smoothing [64] and homotopy smoothing [65] techniques to smoothen them, and thereby obtain smoothed approximations of the functions $f_i(x)$. In addition, we can directly extend our algorithms to the non-smooth setting as in [34], e.g., Algorithm 3 in [34].
Considering that each component function $f_i(x)$ may have different degrees of smoothness, picking the random index $i_k^s$ from a non-uniform distribution is a much better choice than the commonly used uniform random sampling [66], [67], just as without-replacement sampling is preferable to with-replacement sampling [27]. This can be done using the same techniques as in [2], [3], i.e., the sampling probabilities for all $f_i(x)$ are proportional to their Lipschitz constants, i.e., $p_i = L_i / \sum_{j=1}^{n} L_j$.
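As a sanity check, the mini-batch estimator above is unbiased: averaging it over all singleton batches recovers the full gradient at the current iterate. A minimal sketch with stand-in gradient vectors (the arrays below are illustrative, not gradients of a real model):

```python
import numpy as np

def minibatch_vr_estimator(grads_x, grads_snap, full_grad_snap, batch):
    """Mini-batch variance-reduced estimator:
    (1/b) * sum_{i in batch} [grad f_i(x) - grad f_i(snapshot)]
        + full gradient at the snapshot."""
    b = len(batch)
    diff = sum(grads_x[i] - grads_snap[i] for i in batch) / b
    return diff + full_grad_snap

rng = np.random.default_rng(2)
n, d = 10, 4
gx = rng.standard_normal((n, d))      # stand-ins for grad f_i(x)
gs = rng.standard_normal((n, d))      # stand-ins for grad f_i(snapshot)
mu = gs.mean(axis=0)                  # full gradient at the snapshot

# Averaging the estimator over all singleton "batches" gives the full
# gradient at x, i.e., the estimator is unbiased.
avg = sum(minibatch_vr_estimator(gx, gs, mu, [i]) for i in range(n)) / n
```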
Algorithm 3 VR-SGD for non-strongly convex objectives
Input: the number of epochs $S$, and the number of iterations $m$ per epoch.
Initialize: $x_0^1 = \tilde{x}^0$, $\alpha > 0$ and $\eta_0$.
1: for $s = 1, 2, \ldots, S$ do
2:   $\tilde{\mu}^s = \frac{1}{n}\sum_{i=1}^{n} \nabla f_i(\tilde{x}^{s-1})$;  % Compute the full gradient
3:   $\eta_s = \eta_0 / \max\{\alpha,\, 2/(s+1)\}$;  % Compute the step size
4:   for $k = 0, 1, \ldots, m-1$ do
5:     Pick $i_k^s$ uniformly at random from $[n]$;
6:     $\widetilde{\nabla} f_{i_k^s}(x_k^s) = \nabla f_{i_k^s}(x_k^s) - \nabla f_{i_k^s}(\tilde{x}^{s-1}) + \tilde{\mu}^s$;  % The stochastic gradient estimator
7:     $x_{k+1}^s = x_k^s - \eta_s\,[\widetilde{\nabla} f_{i_k^s}(x_k^s) + \nabla g(x_k^s)]$,  % Smooth case of $g(\cdot)$
       or $x_{k+1}^s = \mathrm{Prox}_{\eta_s,\, g}\big(x_k^s - \eta_s\,\widetilde{\nabla} f_{i_k^s}(x_k^s)\big)$;  % Non-smooth case of $g(\cdot)$
8:   end for
9:   Option I: $\tilde{x}^s = \frac{1}{m-1}\sum_{k=1}^{m-1} x_k^s$;  % Iterate averaging for snapshot $\tilde{x}$
10:  Option II: $\tilde{x}^s = \frac{1}{m}\sum_{k=1}^{m} x_k^s$;  % Iterate averaging for snapshot $\tilde{x}$
11:  $x_0^{s+1} = x_m^s$;  % Initiate $x_0^{s+1}$ for the next epoch
12: end for
Output: $\hat{x}^S = \tilde{x}^S$ if $F(\tilde{x}^S) \le F\big(\frac{1}{S}\sum_{s=1}^{S}\tilde{x}^s\big)$, and $\hat{x}^S = \frac{1}{S}\sum_{s=1}^{S}\tilde{x}^s$ otherwise.
Moreover, our VR-SGD method can also be combined with other acceleration techniques proposed for SVRG. For instance, the epoch length of VR-SGD can be automatically determined by the techniques in [42], [68] instead of being fixed. We can reduce the number of gradient calculations in early iterations as in [34], [41], [42], which leads to faster convergence in general. We can also introduce Nesterov's acceleration technique as in [24], [37], [38], [39], [40] and the momentum acceleration trick as in [3], [34] to further improve the performance of VR-SGD.
4 Convergence Analysis
In this section, we provide the convergence guarantees of VR-SGD for solving both smooth and non-smooth general convex problems. We also extend these results to the mini-batch setting. Moreover, we analyze the convergence properties of VR-SGD for solving both smooth and non-smooth strongly convex objective functions. We first introduce the following lemma, which is useful in our analysis.
Lemma 1 (3-point property, [69]). Let $\hat{z}$ be the optimal solution of the following problem:
$$\min_{z \in \mathbb{R}^d}\; \frac{\tau}{2}\|z - z_0\|^2 + r(z),$$
where $r(z)$ is a convex function (but possibly non-differentiable), and $\tau \ge 0$. Then for any $z \in \mathbb{R}^d$, the following inequality holds:
$$r(\hat{z}) + \frac{\tau}{2}\|\hat{z} - z_0\|^2 \le r(z) + \frac{\tau}{2}\|z - z_0\|^2 - \frac{\tau}{2}\|z - \hat{z}\|^2.$$

4.1 Convergence Properties for Non-strongly Convex Problems
In this part, we analyze the convergence properties of VR-SGD for solving the more general non-strongly convex problems.
Considering that the two proposed algorithms (i.e., Algorithms 2 and 3) have two different update rules for the smooth
and non-smooth problems, we give the convergence guarantees of VR-SGD for the two cases as follows.
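Lemma 1 can also be checked numerically in one dimension with $r(z) = |z|$, where the minimizer $\hat{z}$ of $(\tau/2)(z - z_0)^2 + |z|$ is given by soft-thresholding of $z_0$ (an illustrative check with arbitrary values of $\tau$ and $z_0$, not part of the proof):

```python
import numpy as np

# 3-point property (Lemma 1) with r(z) = |z| in 1-D.
tau, z0 = 2.0, 1.5
z_hat = np.sign(z0) * max(abs(z0) - 1.0 / tau, 0.0)   # soft-thresholding: 1.0

def lhs_rhs(z):
    lhs = abs(z_hat) + tau / 2 * (z_hat - z0) ** 2
    rhs = abs(z) + tau / 2 * (z - z0) ** 2 - tau / 2 * (z - z_hat) ** 2
    return lhs, rhs

# The inequality lhs <= rhs holds for every z.
checks = [lhs_rhs(z) for z in np.linspace(-3, 3, 61)]
```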
4.1.1 Smooth Objectives
We first give the convergence guarantee for Problem (1) when the objective function $F(x)$ is smooth and non-strongly convex. In order to simplify the analysis, we denote $F(x)$ by $f(x)$, that is, $f_i(x) := f_i(x) + g(x)$ for all $i = 1, 2, \ldots, n$.
Lemma 2 (Variance bound of smooth objectives). Let $x^*$ be the optimal solution of Problem (1). Suppose Assumption 1 holds, and let $\widetilde{\nabla} f_{i_k^s}(x_k^s) := \nabla f_{i_k^s}(x_k^s) - \nabla f_{i_k^s}(\tilde{x}^{s-1}) + \nabla f(\tilde{x}^{s-1})$. Then the following inequality holds:
$$\mathbb{E}\big[\|\widetilde{\nabla} f_{i_k^s}(x_k^s) - \nabla f(x_k^s)\|^2\big] \le 4L\big[f(x_k^s) - f(x^*) + f(\tilde{x}^{s-1}) - f(x^*)\big].$$
The proof of this lemma is included in APPENDIX A. Lemma 2 provides the upper bound on the expected variance of the variance reduced gradient estimator in (5a), i.e., the SVRG estimator independently introduced in [1], [35]. For Algorithm 3 with Option I and a fixed learning rate, we give the following key result for our analysis.
Lemma 3 (Option I and smooth objectives). Let $\beta = 1/(L\eta)$. If each $f_i(\cdot)$ is convex and $L$-smooth, then the following inequality holds for all $s = 1, 2, \ldots, S$:
$$\begin{aligned}
&\Big(1 - \frac{2}{\beta-1}\Big)\mathbb{E}[f(\tilde{x}^s) - f(x^*)] + \frac{1}{m-1}\mathbb{E}[f(x_m^s) - f(x^*)] \\
&\le \frac{2m}{(\beta-1)(m-1)}\mathbb{E}\big[f(\tilde{x}^{s-1}) - f(x^*)\big] + \frac{2}{(\beta-1)(m-1)}\mathbb{E}[f(x_0^s) - f(x^*)] + \frac{L\beta}{2(m-1)}\mathbb{E}\big[\|x^* - x_0^s\|^2 - \|x^* - x_m^s\|^2\big].
\end{aligned}$$
Proof. In order to simplify notation, the stochastic gradient estimator is defined as $v_k^s := \nabla f_{i_k^s}(x_k^s) - \nabla f_{i_k^s}(\tilde{x}^{s-1}) + \nabla f(\tilde{x}^{s-1})$. Since each component function $f_i(x)$ is $L$-smooth, the gradient of the average function $f(x)$ is also $L$-smooth, i.e., for all $x, y \in \mathbb{R}^d$,
$$\|\nabla f(x) - \nabla f(y)\| \le L\|x - y\|,$$
whose equivalent form is
$$f(y) \le f(x) + \langle \nabla f(x),\, y - x\rangle + \frac{L}{2}\|y - x\|^2.$$
Applying the above smoothness inequality, we have
$$\begin{aligned}
f(x_{k+1}^s) &\le f(x_k^s) + \langle \nabla f(x_k^s),\, x_{k+1}^s - x_k^s\rangle + \frac{L}{2}\|x_{k+1}^s - x_k^s\|^2 \\
&= f(x_k^s) + \langle \nabla f(x_k^s),\, x_{k+1}^s - x_k^s\rangle + \frac{L\beta}{2}\|x_{k+1}^s - x_k^s\|^2 - \frac{L(\beta-1)}{2}\|x_{k+1}^s - x_k^s\|^2 \\
&= f(x_k^s) + \langle v_k^s,\, x_{k+1}^s - x_k^s\rangle + \frac{L\beta}{2}\|x_{k+1}^s - x_k^s\|^2 + \langle \nabla f(x_k^s) - v_k^s,\, x_{k+1}^s - x_k^s\rangle - \frac{L(\beta-1)}{2}\|x_{k+1}^s - x_k^s\|^2,
\end{aligned} \quad (15)$$
where $\beta = 1/(L\eta) > 3$ is a constant. Using Lemma 2, we then get
$$\begin{aligned}
&\mathbb{E}\Big[\langle \nabla f(x_k^s) - v_k^s,\, x_{k+1}^s - x_k^s\rangle - \frac{L(\beta-1)}{2}\|x_{k+1}^s - x_k^s\|^2\Big] \\
&\le \mathbb{E}\Big[\frac{1}{2L(\beta-1)}\|\nabla f(x_k^s) - v_k^s\|^2 + \frac{L(\beta-1)}{2}\|x_{k+1}^s - x_k^s\|^2 - \frac{L(\beta-1)}{2}\|x_{k+1}^s - x_k^s\|^2\Big] \\
&\le \frac{2}{\beta-1}\big[f(x_k^s) - f(x^*) + f(\tilde{x}^{s-1}) - f(x^*)\big],
\end{aligned} \quad (16)$$
where the first inequality holds due to the Young's inequality (i.e., $y^T z \le \|y\|^2/(2\gamma) + \gamma\|z\|^2/2$ for all $\gamma > 0$ and $y, z \in \mathbb{R}^d$), and the second inequality follows from Lemma 2.
Substituting the inequality in (16) into the inequality in (15), and taking the expectation with respect to the random choice $i_k^s$, we have
$$\begin{aligned}
\mathbb{E}[f(x_{k+1}^s)] &\le \mathbb{E}[f(x_k^s)] + \mathbb{E}\Big[\langle v_k^s,\, x_{k+1}^s - x_k^s\rangle + \frac{L\beta}{2}\|x_{k+1}^s - x_k^s\|^2\Big] + \frac{2}{\beta-1}\big[f(x_k^s) - f(x^*) + f(\tilde{x}^{s-1}) - f(x^*)\big] \\
&\le \mathbb{E}[f(x_k^s)] + \mathbb{E}\Big[\langle v_k^s,\, x^* - x_k^s\rangle + \frac{L\beta}{2}\big(\|x^* - x_k^s\|^2 - \|x^* - x_{k+1}^s\|^2\big)\Big] + \frac{2}{\beta-1}\big[f(x_k^s) - f(x^*) + f(\tilde{x}^{s-1}) - f(x^*)\big] \\
&\le \mathbb{E}[f(x_k^s)] + \mathbb{E}[\langle \nabla f(x_k^s),\, x^* - x_k^s\rangle] + \frac{L\beta}{2}\mathbb{E}\big[\|x^* - x_k^s\|^2 - \|x^* - x_{k+1}^s\|^2\big] + \frac{2}{\beta-1}\big[f(x_k^s) - f(x^*) + f(\tilde{x}^{s-1}) - f(x^*)\big] \\
&\le f(x^*) + \frac{L\beta}{2}\mathbb{E}\big[\|x^* - x_k^s\|^2 - \|x^* - x_{k+1}^s\|^2\big] + \frac{2}{\beta-1}\big[f(x_k^s) - f(x^*) + f(\tilde{x}^{s-1}) - f(x^*)\big].
\end{aligned}$$
Here, the first inequality holds due to the inequalities in (15) and (16); the second inequality follows from Lemma 1 with $\hat{z} = x_{k+1}^s$, $z = x^*$, $z_0 = x_k^s$, $\tau = L\beta = 1/\eta$, and $r(z) := \langle v_k^s,\, z - x_k^s\rangle$; the third inequality holds due to the fact that $\mathbb{E}[v_k^s] = \nabla f(x_k^s)$; and the last inequality follows from the convexity of the smooth function $f(\cdot)$, i.e., $f(x_k^s) + \langle \nabla f(x_k^s),\, x^* - x_k^s\rangle \le f(x^*)$. The above inequality can be rewritten as follows:
$$\mathbb{E}[f(x_{k+1}^s)] - f(x^*) \le \frac{2}{\beta-1}\big[f(x_k^s) - f(x^*) + f(\tilde{x}^{s-1}) - f(x^*)\big] + \frac{L\beta}{2}\mathbb{E}\big[\|x^* - x_k^s\|^2 - \|x^* - x_{k+1}^s\|^2\big].$$
Summing the above inequality over $k = 0, 1, \ldots, m-1$, we get
$$\sum_{k=1}^{m}\mathbb{E}[f(x_k^s) - f(x^*)] \le \frac{2}{\beta-1}\sum_{k=1}^{m}\big[f(x_{k-1}^s) - f(x^*) + f(\tilde{x}^{s-1}) - f(x^*)\big] + \frac{L\beta}{2}\sum_{k=1}^{m}\mathbb{E}\big[\|x^* - x_{k-1}^s\|^2 - \|x^* - x_k^s\|^2\big]. \quad (17)$$
Due to the setting of $\tilde{x}^s = \frac{1}{m-1}\sum_{k=1}^{m-1} x_k^s$ in Option I, and the convexity of $f(\cdot)$, we have
$$f(\tilde{x}^s) \le \frac{1}{m-1}\sum_{k=1}^{m-1} f(x_k^s). \quad (18)$$
The left- and right-hand sides of the inequality in (17) can be rewritten as follows:
$$\sum_{k=1}^{m}\mathbb{E}[f(x_k^s) - f(x^*)] = \sum_{k=1}^{m-1}\mathbb{E}[f(x_k^s) - f(x^*)] + \mathbb{E}[f(x_m^s) - f(x^*)],$$
$$\begin{aligned}
&\frac{2}{\beta-1}\sum_{k=1}^{m}\big[f(x_{k-1}^s) - f(x^*) + f(\tilde{x}^{s-1}) - f(x^*)\big] + \frac{L\beta}{2}\sum_{k=1}^{m}\mathbb{E}\big[\|x^* - x_{k-1}^s\|^2 - \|x^* - x_k^s\|^2\big] \\
&= \frac{2}{\beta-1}\sum_{k=1}^{m-1}[f(x_k^s) - f(x^*)] + \frac{2}{\beta-1}\big\{f(x_0^s) - f(x^*) + m[f(\tilde{x}^{s-1}) - f(x^*)]\big\} + \frac{L\beta}{2}\mathbb{E}\big[\|x^* - x_0^s\|^2 - \|x^* - x_m^s\|^2\big].
\end{aligned}$$
Subtracting $\frac{2}{\beta-1}\sum_{k=1}^{m-1}[f(x_k^s) - f(x^*)]$ from both sides of the inequality in (17), we obtain
$$\Big(1 - \frac{2}{\beta-1}\Big)\sum_{k=1}^{m-1}\mathbb{E}[f(x_k^s) - f(x^*)] + \mathbb{E}[f(x_m^s) - f(x^*)] \le \frac{2}{\beta-1}\mathbb{E}\big\{f(x_0^s) - f(x^*) + m[f(\tilde{x}^{s-1}) - f(x^*)]\big\} + \frac{L\beta}{2}\mathbb{E}\big[\|x^* - x_0^s\|^2 - \|x^* - x_m^s\|^2\big].$$
Applying the inequality in (18), we have
$$\begin{aligned}
&\Big(1 - \frac{2}{\beta-1}\Big)(m-1)\,\mathbb{E}[f(\tilde{x}^s) - f(x^*)] + \mathbb{E}[f(x_m^s) - f(x^*)] \\
&\le \Big(1 - \frac{2}{\beta-1}\Big)\sum_{k=1}^{m-1}\mathbb{E}[f(x_k^s) - f(x^*)] + \mathbb{E}[f(x_m^s) - f(x^*)] \\
&\le \frac{2}{\beta-1}\mathbb{E}\big\{f(x_0^s) - f(x^*) + m[f(\tilde{x}^{s-1}) - f(x^*)]\big\} + \frac{L\beta}{2}\mathbb{E}\big[\|x^* - x_0^s\|^2 - \|x^* - x_m^s\|^2\big].
\end{aligned}$$
Dividing both sides of the above inequality by $(m-1)$, we arrive at
$$\begin{aligned}
&\Big(1 - \frac{2}{\beta-1}\Big)\mathbb{E}[f(\tilde{x}^s) - f(x^*)] + \frac{1}{m-1}\mathbb{E}[f(x_m^s) - f(x^*)] \\
&\le \frac{2m}{(\beta-1)(m-1)}\mathbb{E}\big[f(\tilde{x}^{s-1}) - f(x^*)\big] + \frac{2}{(\beta-1)(m-1)}\mathbb{E}[f(x_0^s) - f(x^*)] + \frac{L\beta}{2(m-1)}\mathbb{E}\big[\|x^* - x_0^s\|^2 - \|x^* - x_m^s\|^2\big].
\end{aligned}$$
This completes the proof.
The first main result is the following theorem, which provides the convergence guarantee of VR-SGD with Option I for solving smooth and general convex minimization problems.
Theorem 1 (Option I and smooth objectives). Suppose Assumption 1 holds. Then the following inequality holds:
$$\mathbb{E}\big[f(\hat{x}^S) - f(x^*)\big] \le \frac{2(m+1)}{[(\beta-1)(m-1) - 4m + 2]S}\,[f(\tilde{x}^0) - f(x^*)] + \frac{\beta(\beta-1)L}{2[(\beta-1)(m-1) - 4m + 2]S}\,\|\tilde{x}^0 - x^*\|^2,$$
where $\hat{x}^S = \tilde{x}^S$ if $f(\tilde{x}^S) \le f(\bar{x}^S)$, and $\bar{x}^S = \frac{1}{S}\sum_{s=1}^{S}\tilde{x}^s$. Otherwise, $\hat{x}^S = \bar{x}^S$.
Proof. Since $2/(\beta-1) < 1$, it is easy to verify that
$$\frac{2}{(\beta-1)(m-1)}\big\{\mathbb{E}[f(x_m^s)] - f(x^*)\big\} \le \frac{1}{m-1}\big\{\mathbb{E}[f(x_m^s)] - f(x^*)\big\}. \quad (19)$$
Applying the above inequality and Lemma 3, we have
$$\begin{aligned}
&\Big(1 - \frac{2}{\beta-1}\Big)\mathbb{E}[f(\tilde{x}^s) - f(x^*)] + \frac{2}{(\beta-1)(m-1)}\mathbb{E}[f(x_m^s) - f(x^*)] \\
&\le \Big(1 - \frac{2}{\beta-1}\Big)\mathbb{E}[f(\tilde{x}^s) - f(x^*)] + \frac{1}{m-1}\mathbb{E}[f(x_m^s) - f(x^*)] \\
&\le \frac{2m}{(\beta-1)(m-1)}\mathbb{E}\big[f(\tilde{x}^{s-1}) - f(x^*)\big] + \frac{2}{(\beta-1)(m-1)}\mathbb{E}[f(x_0^s) - f(x^*)] + \frac{L\beta}{2(m-1)}\mathbb{E}\big[\|x^* - x_0^s\|^2 - \|x^* - x_m^s\|^2\big].
\end{aligned}$$
Summing the above inequality over $s = 1, 2, \ldots, S$, taking expectation with respect to the history of the random variables $i_k^s$, and using the setting of $x_0^{s+1} = x_m^s$, we obtain
$$\begin{aligned}
&\sum_{s=1}^{S}\Big(1 - \frac{2}{\beta-1}\Big)\mathbb{E}[f(\tilde{x}^s) - f(x^*)] \\
&\le \frac{2}{(\beta-1)(m-1)}\sum_{s=1}^{S}\mathbb{E}\big[f(x_0^s) - f(x^*) - (f(x_m^s) - f(x^*))\big] + \frac{2m}{(\beta-1)(m-1)}\sum_{s=1}^{S}\mathbb{E}[f(\tilde{x}^{s-1}) - f(x^*)] \\
&\quad + \frac{L\beta}{2(m-1)}\sum_{s=1}^{S}\mathbb{E}\big[\|x^* - x_0^s\|^2 - \|x^* - x_m^s\|^2\big].
\end{aligned}$$
Subtracting $\frac{2m}{(\beta-1)(m-1)}\sum_{s=1}^{S-1}[f(\tilde{x}^s) - f(x^*)]$ from both sides of the above inequality, we have
$$\begin{aligned}
&\frac{2m}{(\beta-1)(m-1)}\mathbb{E}[f(\tilde{x}^S) - f(x^*)] + \Big(1 - \frac{4}{\beta-1} - \frac{2}{(\beta-1)(m-1)}\Big)\sum_{s=1}^{S}\mathbb{E}[f(\tilde{x}^s) - f(x^*)] \\
&\le \frac{2}{(\beta-1)(m-1)}\mathbb{E}\big[f(x_0^1) - f(x^*) - (f(x_m^S) - f(x^*))\big] + \frac{2m}{(\beta-1)(m-1)}\mathbb{E}[f(\tilde{x}^0) - f(x^*)] + \frac{L\beta}{2(m-1)}\mathbb{E}\big[\|x^* - x_0^1\|^2 - \|x^* - x_m^S\|^2\big].
\end{aligned}$$
Dividing both sides of the above inequality by $S$, and using the setting of $\tilde{x}^0 = x_0^1$, we arrive at
$$\begin{aligned}
&\frac{1}{S}\Big(1 - \frac{4}{\beta-1} - \frac{2}{(\beta-1)(m-1)}\Big)\sum_{s=1}^{S}\mathbb{E}[f(\tilde{x}^s) - f(x^*)] \\
&\le \frac{2m}{(\beta-1)(m-1)S}\mathbb{E}[f(\tilde{x}^S) - f(x^*)] + \frac{1}{S}\Big(1 - \frac{4}{\beta-1} - \frac{2}{(\beta-1)(m-1)}\Big)\sum_{s=1}^{S}\mathbb{E}[f(\tilde{x}^s) - f(x^*)] \\
&\le \frac{2}{(\beta-1)(m-1)S}\,[f(x_0^1) - f(x^*)] + \frac{2m}{(\beta-1)(m-1)S}\,[f(\tilde{x}^0) - f(x^*)] + \frac{L\beta}{2(m-1)S}\,\|x^* - x_0^1\|^2 \\
&= \frac{2(m+1)}{(\beta-1)(m-1)S}\,[f(\tilde{x}^0) - f(x^*)] + \frac{L\beta}{2(m-1)S}\,\|\tilde{x}^0 - x^*\|^2,
\end{aligned}$$
where the first inequality holds due to the fact that $f(\tilde{x}^S) - f(x^*) \ge 0$; the second inequality holds due to the facts that $f(x_m^S) - f(x^*) \ge 0$ and $\|x^* - x_m^S\|^2 \ge 0$; and the last equality follows from the setting of $\tilde{x}^0 = x_0^1$.
Due to the definition of $\bar{x}^S$ and the convexity of $f(\cdot)$, we have $f(\bar{x}^S) \le \frac{1}{S}\sum_{s=1}^{S} f(\tilde{x}^s)$, and therefore the above inequality becomes:
$$\Big(1 - \frac{4}{\beta-1} - \frac{2}{(\beta-1)(m-1)}\Big)\mathbb{E}\big[f(\bar{x}^S) - f(x^*)\big] \le \frac{2(m+1)}{(\beta-1)(m-1)S}\,[f(\tilde{x}^0) - f(x^*)] + \frac{L\beta}{2(m-1)S}\,\|\tilde{x}^0 - x^*\|^2.$$
Dividing both sides of the above inequality by $c_1 = 1 - \frac{4}{\beta-1} - \frac{2}{(\beta-1)(m-1)} > 0$, we have
$$\mathbb{E}\big[f(\bar{x}^S) - f(x^*)\big] \le \frac{2(m+1)}{c_1(\beta-1)(m-1)S}\,[f(\tilde{x}^0) - f(x^*)] + \frac{L\beta}{2c_1(m-1)S}\,\|\tilde{x}^0 - x^*\|^2. \quad (20)$$
Due to the setting for the output of Algorithm 3, $\hat{x}^S = \tilde{x}^S$ if $f(\tilde{x}^S) \le f(\bar{x}^S)$. Then
$$\mathbb{E}\big[f(\hat{x}^S) - f(x^*)\big] \le \mathbb{E}\big[f(\bar{x}^S) - f(x^*)\big] \le \frac{2(m+1)}{c_1(\beta-1)(m-1)S}\,[f(\tilde{x}^0) - f(x^*)] + \frac{L\beta}{2c_1(m-1)S}\,\|\tilde{x}^0 - x^*\|^2.$$
Alternatively, when $f(\tilde{x}^S) \ge f(\bar{x}^S)$, let $\hat{x}^S = \bar{x}^S$, and the above inequality still holds. Noting that $c_1(\beta-1)(m-1) = (\beta-1)(m-1) - 4m + 2$, this completes the proof.
From Lemma 3, Theorem 1, and their proofs, one can see that our convergence analysis is very different from those of
existing stochastic methods, such as SVRG [1] and Prox-SVRG [2]. For Algorithm 3 with Option II and a fixed learning rate,
we give the following key lemma for our convergence analysis.
Lemma 4 (Option II and smooth objectives). If each $f_i(\cdot)$ is convex and $L$-smooth, then the following inequality holds for all $s = 1, 2, \ldots, S$:
$$\begin{aligned}
&\Big(1 - \frac{2}{\beta-1}\Big)\mathbb{E}[f(\tilde{x}^s) - f(x^*)] + \frac{2}{(\beta-1)m}\mathbb{E}[f(x_m^s) - f(x^*)] \\
&\le \frac{2}{\beta-1}\mathbb{E}\big[f(\tilde{x}^{s-1}) - f(x^*)\big] + \frac{2}{(\beta-1)m}\mathbb{E}[f(x_0^s) - f(x^*)] + \frac{L\beta}{2m}\mathbb{E}\big[\|x^* - x_0^s\|^2 - \|x^* - x_m^s\|^2\big].
\end{aligned}$$
This lemma is a slight generalization of Lemma 3, and we give the proof in APPENDIX B for completeness.
Theorem 2 (Option II and smooth objectives). If each $f_i(\cdot)$ is convex and $L$-smooth, then the following inequality holds:
$$\mathbb{E}\big[f(\hat{x}^S) - f(x^*)\big] \le \frac{2(m+1)}{mS(\beta-5)}\,[f(\tilde{x}^0) - f(x^*)] + \frac{L\beta(\beta-1)}{2mS(\beta-5)}\,\|\tilde{x}^0 - x^*\|^2,$$
where $\hat{x}^S = \tilde{x}^S$ if $f(\tilde{x}^S) \le f(\bar{x}^S)$, and $\bar{x}^S = \frac{1}{S}\sum_{s=1}^{S}\tilde{x}^s$. Otherwise, $\hat{x}^S = \bar{x}^S$.
The proof of this theorem can be found in APPENDIX C. Clearly, Theorems 1 and 2 show that VR-SGD with Option I or Option II attains a sub-linear convergence rate for smooth general convex objective functions. This means that VR-SGD is guaranteed to have a convergence rate similar to that of other variance reduced stochastic methods such as SAGA [30], and a slower theoretical rate than accelerated methods such as Katyusha [3]. Nevertheless, VR-SGD usually converges much faster than the best known stochastic method, Katyusha [3], in practice (see Section 5.3 for details).
4.1.2 Non-Smooth Objectives
Next, we provide the convergence guarantee for Problem (1) when the objective function $F(x)$ is non-smooth (i.e., the regularizer $g(x)$ is non-smooth) and non-strongly convex. We first give the following lemma.
Lemma 5 (Variance bound of non-smooth objectives). Let $x^*$ be the optimal solution of Problem (1). If each $f_i(\cdot)$ is convex and $L$-smooth, and $\widetilde{\nabla} f_{i_k^s}(x_k^s) := \nabla f_{i_k^s}(x_k^s) - \nabla f_{i_k^s}(\tilde{x}^{s-1}) + \nabla f(\tilde{x}^{s-1})$, then the following inequality holds:
$$\mathbb{E}\big[\|\widetilde{\nabla} f_{i_k^s}(x_k^s) - \nabla f(x_k^s)\|^2\big] \le 4L\big[F(x_k^s) - F(x^*) + F(\tilde{x}^{s-1}) - F(x^*)\big].$$
This lemma can be viewed as a generalization of Lemma 2, and is essentially identical to Corollary 3.5 in [2] and Lemma 1 in [45], and hence its proof is omitted. For Algorithm 3 with Option II and a fixed learning rate, we give the following results.
Lemma 6 (Option II and non-smooth objectives). If each $f_i(\cdot)$ is convex and $L$-smooth, then the following inequality holds for all $s = 1, 2, \ldots, S$:
$$\begin{aligned}
&\Big(1 - \frac{2}{\beta-1}\Big)\mathbb{E}[F(\tilde{x}^s) - F(x^*)] + \frac{2}{(\beta-1)m}\mathbb{E}[F(x_m^s) - F(x^*)] \\
&\le \frac{2}{\beta-1}\mathbb{E}\big[F(\tilde{x}^{s-1}) - F(x^*)\big] + \frac{2}{(\beta-1)m}\mathbb{E}[F(x_0^s) - F(x^*)] + \frac{L\beta}{2m}\mathbb{E}\big[\|x^* - x_0^s\|^2 - \|x^* - x_m^s\|^2\big].
\end{aligned} \quad (21)$$
The proof of this lemma can be found in APPENDIX D. Using the above lemma, we give the following convergence result for VR-SGD.
Theorem 3 (Option II and non-smooth objectives). Suppose Assumption 1 holds. Then the following inequality holds:
$$\mathbb{E}\big[F(\hat{x}^S) - F(x^*)\big] \le \frac{2(m+1)}{(\beta-5)mS}\,[F(\tilde{x}^0) - F(x^*)] + \frac{\beta(\beta-1)L}{2(\beta-5)mS}\,\|\tilde{x}^0 - x^*\|^2.$$
The proof of this theorem is included in APPENDIX E. Similarly, for Algorithm 3 with Option I and a fixed learning
rate, we give the following results.
Corollary 1 (Option I and non-smooth objectives). If each $f_i(\cdot)$ is convex and $L$-smooth, then the following inequality holds for all $s = 1, 2, \ldots, S$:
$$\begin{aligned}
&\Big(1 - \frac{2}{\beta-1}\Big)\mathbb{E}[F(\tilde{x}^s) - F(x^*)] + \frac{1}{m-1}\mathbb{E}[F(x_m^s) - F(x^*)] \\
&\le \frac{2m}{(\beta-1)(m-1)}\mathbb{E}\big[F(\tilde{x}^{s-1}) - F(x^*)\big] + \frac{2}{(\beta-1)(m-1)}\mathbb{E}[F(x_0^s) - F(x^*)] + \frac{L\beta}{2(m-1)}\mathbb{E}\big[\|x^* - x_0^s\|^2 - \|x^* - x_m^s\|^2\big].
\end{aligned}$$
Corollary 2 (Option I and non-smooth objectives). Suppose Assumption 1 holds. Then the following inequality holds:
$$\mathbb{E}\big[F(\hat{x}^S) - F(x^*)\big] \le \frac{2(m+1)}{[(\beta-1)(m-1) - 4m + 2]S}\,[F(\tilde{x}^0) - F(x^*)] + \frac{\beta(\beta-1)L}{2[(\beta-1)(m-1) - 4m + 2]S}\,\|\tilde{x}^0 - x^*\|^2,$$
where $\hat{x}^S = \tilde{x}^S$ if $F(\tilde{x}^S) \le F(\bar{x}^S)$, and $\bar{x}^S = \frac{1}{S}\sum_{s=1}^{S}\tilde{x}^s$. Otherwise, $\hat{x}^S = \bar{x}^S$.
Corollaries 1 and 2 can be viewed as generalizations of Lemma 3 and Theorem 1, respectively, and hence their proofs are omitted. Obviously, Theorem 3 and Corollary 2 show that VR-SGD with Option I or Option II attains a sub-linear convergence rate for non-smooth general convex objective functions.
4.1.3 Mini-Batch Settings
e is (xs ) in Lemma 5 is extended to the mini-batch
The upper bound on the variance of the stochastic gradient estimator ∇f
k
k
setting as follows.
e I s (xs ) := 1
Lemma 7 (Variance bound of mini-batch). If each fi (·) is convex and L-smooth, and ∇f
k
b
k
P
i∈Iks
es−1 ) +
∇fi (xsk ) − ∇fi (x
es−1 ), then the following inequality holds
∇f (x
h
i
e I s (xs ) − ∇f (xs )k2 ≤ 4Lδ(b)[F (xs ) − F (x∗ ) + F (x
es−1 ) − F (x∗ )],
E k∇f
k
k
k
k
where δ(b) = (n−b)/[(n−1)b].
It is not hard to verify that 0 ≤ δ(b) ≤ 1. This lemma is essentially identical to Theorem 4 in [36], and hence its proof
is omitted. Based on the variance upper bound in Lemma 7, we further analyze the convergence properties of VR-SGD for
the mini-batch setting. Lemma 6 is first extended to the mini-batch setting as follows.
Lemma 8 (Mini-batch). Using the same notation as in Lemma 7, we have
$$\begin{aligned}
&\Big(1 - \frac{2\delta(b)}{\beta-1}\Big)\big\{\mathbb{E}[F(\tilde{x}^s)] - F(x^*)\big\} + \frac{2\delta(b)}{(\beta-1)m}\big\{\mathbb{E}[F(x_m^s)] - F(x^*)\big\} \\
&\le \frac{2\delta(b)}{\beta-1}\big[F(\tilde{x}^{s-1}) - F(x^*)\big] + \frac{2\delta(b)}{(\beta-1)m}\big[F(x_0^s) - F(x^*)\big] + \frac{L\beta}{2m}\mathbb{E}\big[\|x^* - x_0^s\|^2 - \|x^* - x_m^s\|^2\big].
\end{aligned}$$
Proof. In order to simplify notation, the mini-batch stochastic gradient estimator is defined as
$$v_k^s := \frac{1}{b}\sum_{i \in I_k^s}\big[\nabla f_i(x_k^s) - \nabla f_i(\tilde{x}^{s-1})\big] + \nabla f(\tilde{x}^{s-1}).$$
Applying the smoothness inequality as in the proof of Lemma 3, we have
$$\begin{aligned}
F(x_{k+1}^s) &\le g(x_{k+1}^s) + f(x_k^s) + \langle \nabla f(x_k^s),\, x_{k+1}^s - x_k^s\rangle + \frac{L\beta}{2}\|x_{k+1}^s - x_k^s\|^2 - \frac{L(\beta-1)}{2}\|x_{k+1}^s - x_k^s\|^2 \\
&= g(x_{k+1}^s) + f(x_k^s) + \langle v_k^s,\, x_{k+1}^s - x_k^s\rangle + \frac{L\beta}{2}\|x_{k+1}^s - x_k^s\|^2 + \langle \nabla f(x_k^s) - v_k^s,\, x_{k+1}^s - x_k^s\rangle - \frac{L(\beta-1)}{2}\|x_{k+1}^s - x_k^s\|^2.
\end{aligned} \quad (22)$$
Using Lemma 7, we then obtain
$$\begin{aligned}
&\mathbb{E}\Big[\langle \nabla f(x_k^s) - v_k^s,\, x_{k+1}^s - x_k^s\rangle - \frac{L(\beta-1)}{2}\|x_{k+1}^s - x_k^s\|^2\Big] \\
&\le \mathbb{E}\Big[\frac{1}{2L(\beta-1)}\|\nabla f(x_k^s) - v_k^s\|^2 + \frac{L(\beta-1)}{2}\|x_{k+1}^s - x_k^s\|^2 - \frac{L(\beta-1)}{2}\|x_{k+1}^s - x_k^s\|^2\Big] \\
&\le \frac{2\delta(b)}{\beta-1}\big[F(x_k^s) - F(x^*) + F(\tilde{x}^{s-1}) - F(x^*)\big],
\end{aligned} \quad (23)$$
where the first inequality holds due to the Young's inequality, and the second inequality follows from Lemma 7. Substituting the inequality (23) into the inequality (22), and taking the expectation over the random mini-batch set $I_k^s$, we have
$$\begin{aligned}
\mathbb{E}[F(x_{k+1}^s)] &\le \mathbb{E}[g(x_{k+1}^s)] + \mathbb{E}[f(x_k^s)] + \mathbb{E}\Big[\langle v_k^s,\, x_{k+1}^s - x_k^s\rangle + \frac{L\beta}{2}\|x_{k+1}^s - x_k^s\|^2\Big] + \frac{2\delta(b)}{\beta-1}\big[F(x_k^s) - F(x^*) + F(\tilde{x}^{s-1}) - F(x^*)\big] \\
&\le g(x^*) + \mathbb{E}[f(x_k^s)] + \mathbb{E}\Big[\langle v_k^s,\, x^* - x_k^s\rangle + \frac{L\beta}{2}\big(\|x^* - x_k^s\|^2 - \|x^* - x_{k+1}^s\|^2\big)\Big] + \frac{2\delta(b)}{\beta-1}\big[F(x_k^s) - F(x^*) + F(\tilde{x}^{s-1}) - F(x^*)\big] \\
&\le g(x^*) + f(x^*) + \mathbb{E}\Big[\frac{L\beta}{2}\big(\|x^* - x_k^s\|^2 - \|x^* - x_{k+1}^s\|^2\big)\Big] + \frac{2\delta(b)}{\beta-1}\big[F(x_k^s) - F(x^*) + F(\tilde{x}^{s-1}) - F(x^*)\big] \\
&= F(x^*) + \frac{L\beta}{2}\mathbb{E}\big[\|x^* - x_k^s\|^2 - \|x^* - x_{k+1}^s\|^2\big] + \frac{2\delta(b)}{\beta-1}\big[F(x_k^s) - F(x^*) + F(\tilde{x}^{s-1}) - F(x^*)\big],
\end{aligned}$$
where the second inequality holds from Lemma 1. The above inequality can be rewritten as follows:
$$\mathbb{E}\big[F(x_{k+1}^s) - F(x^*)\big] \le \frac{2\delta(b)}{\beta-1}\big[F(x_k^s) - F(x^*) + F(\tilde{x}^{s-1}) - F(x^*)\big] + \frac{L\beta}{2}\mathbb{E}\big[\|x^* - x_k^s\|^2 - \|x^* - x_{k+1}^s\|^2\big]. \quad (24)$$
Summing the above inequality over $k = 0, 1, \ldots, m-1$, we get
$$\sum_{k=0}^{m-1}\big\{\mathbb{E}[F(x_{k+1}^s)] - F(x^*)\big\} \le \frac{2\delta(b)}{\beta-1}\sum_{k=0}^{m-1}\big[F(x_k^s) - F(x^*) + F(\tilde{x}^{s-1}) - F(x^*)\big] + \frac{L\beta}{2}\sum_{k=0}^{m-1}\mathbb{E}\big[\|x^* - x_k^s\|^2 - \|x^* - x_{k+1}^s\|^2\big].$$
Since $\tilde{x}^s = \frac{1}{m}\sum_{k=1}^{m} x_k^s$, we have $F(\tilde{x}^s) \le \frac{1}{m}\sum_{k=1}^{m} F(x_k^s)$, and
$$\begin{aligned}
&\Big(1 - \frac{2\delta(b)}{\beta-1}\Big)\big\{\mathbb{E}[F(\tilde{x}^s)] - F(x^*)\big\} + \frac{2\delta(b)}{(\beta-1)m}\big\{\mathbb{E}[F(x_m^s)] - F(x^*)\big\} \\
&\le \frac{2\delta(b)}{\beta-1}\big[F(\tilde{x}^{s-1}) - F(x^*)\big] + \frac{2\delta(b)}{(\beta-1)m}\big[F(x_0^s) - F(x^*)\big] + \frac{L\beta}{2m}\mathbb{E}\big[\|x^* - x_0^s\|^2 - \|x^* - x_m^s\|^2\big].
\end{aligned}$$
This completes the proof.
Similar to Lemma 8, we can also extend Lemma 3 and Corollary 1 to the mini-batch setting.
Theorem 4 (Mini-batch). If each $f_i(\cdot)$ is convex and $L$-smooth, then the following inequality holds:
$$\mathbb{E}\big[F(\hat{x}^S) - F(x^*)\big] \le \frac{2\delta(b)(m+1)}{(\beta-1-4\delta(b))mS}\,\mathbb{E}\big[F(\tilde{x}^0) - F(x^*)\big] + \frac{L\beta(\beta-1)}{2(\beta-1-4\delta(b))mS}\,\mathbb{E}\big[\|x^* - \tilde{x}^0\|^2\big]. \quad (25)$$
Proof. Using Lemma 8 and dropping the non-negative term involving $F(x_m^s) - F(x^*)$ into the right-hand side, we have
$$\Big(1 - \frac{2\delta(b)}{\beta-1}\Big)\mathbb{E}[F(\tilde{x}^s) - F(x^*)] \le \frac{2\delta(b)}{\beta-1}\mathbb{E}\big[F(\tilde{x}^{s-1}) - F(x^*)\big] + \frac{2\delta(b)}{(\beta-1)m}\mathbb{E}\big[F(x_0^s) - F(x_m^s)\big] + \frac{L\beta}{2m}\mathbb{E}\big[\|x^* - x_0^s\|^2 - \|x^* - x_m^s\|^2\big].$$
Summing the above inequality over $s = 1, 2, \ldots, S$, taking expectation over the whole history of $I_k^s$, and using $x_0^{s+1} = x_m^s$, we obtain
$$\sum_{s=1}^{S}\Big(1 - \frac{2\delta(b)}{\beta-1}\Big)\mathbb{E}[F(\tilde{x}^s) - F(x^*)] \le \frac{2\delta(b)}{\beta-1}\sum_{s=1}^{S}\mathbb{E}\big[F(\tilde{x}^{s-1}) - F(x^*)\big] + \frac{2\delta(b)}{(\beta-1)m}\mathbb{E}\big[F(x_0^1) - F(x_m^S)\big] + \frac{L\beta}{2m}\mathbb{E}\big[\|x^* - x_0^1\|^2 - \|x^* - x_m^S\|^2\big].$$
Subtracting $\frac{2\delta(b)}{\beta-1}\sum_{s=1}^{S-1}\mathbb{E}[F(\tilde{x}^s) - F(x^*)]$ from both sides of the above inequality, we have
$$\begin{aligned}
&\frac{2\delta(b)}{\beta-1}\mathbb{E}\big[F(\tilde{x}^S) - F(x^*)\big] + \Big(1 - \frac{4\delta(b)}{\beta-1}\Big)\sum_{s=1}^{S}\mathbb{E}[F(\tilde{x}^s) - F(x^*)] \\
&\le \frac{2\delta(b)}{\beta-1}\big[F(\tilde{x}^0) - F(x^*)\big] + \frac{2\delta(b)}{(\beta-1)m}\mathbb{E}\big[F(x_0^1) - F(x_m^S)\big] + \frac{L\beta}{2m}\mathbb{E}\big[\|x^* - x_0^1\|^2 - \|x^* - x_m^S\|^2\big].
\end{aligned}$$
Dividing both sides of the above inequality by $S$ and using $F(\bar{x}^S) \le \frac{1}{S}\sum_{s=1}^{S} F(\tilde{x}^s)$ (by the convexity of $F(\cdot)$, with $\bar{x}^S = \frac{1}{S}\sum_{s=1}^{S}\tilde{x}^s$), we arrive at
$$\begin{aligned}
&\frac{2\delta(b)}{(\beta-1)S}\mathbb{E}\big[F(\tilde{x}^S) - F(x^*)\big] + \Big(1 - \frac{4\delta(b)}{\beta-1}\Big)\mathbb{E}\big[F(\bar{x}^S) - F(x^*)\big] \\
&\le \frac{2\delta(b)}{(\beta-1)S}\big[F(\tilde{x}^0) - F(x^*)\big] + \frac{2\delta(b)}{(\beta-1)mS}\mathbb{E}\big[F(x_0^1) - F(x_m^S)\big] + \frac{L\beta}{2mS}\mathbb{E}\big[\|x^* - x_0^1\|^2 - \|x^* - x_m^S\|^2\big].
\end{aligned}$$
Subtracting $\frac{2\delta(b)}{(\beta-1)S}\mathbb{E}[F(\tilde{x}^S) - F(x^*)]$ from both sides of the above inequality, we have
$$\Big(1 - \frac{4\delta(b)}{\beta-1}\Big)\mathbb{E}\big[F(\bar{x}^S) - F(x^*)\big] \le \frac{2\delta(b)}{(\beta-1)S}\mathbb{E}\big[F(\tilde{x}^0) - F(\tilde{x}^S)\big] + \frac{2\delta(b)}{(\beta-1)mS}\mathbb{E}\big[F(x_0^1) - F(x_m^S)\big] + \frac{L\beta}{2mS}\mathbb{E}\big[\|x^* - x_0^1\|^2 - \|x^* - x_m^S\|^2\big].$$
Dividing both sides of the above inequality by $\big(1 - \frac{4\delta(b)}{\beta-1}\big) > 0$, we arrive at
$$\begin{aligned}
\mathbb{E}[F(\bar{x}^S)] - F(x^*) &\le \frac{2\delta(b)}{(\beta-1-4\delta(b))S}\mathbb{E}\big[F(\tilde{x}^0) - F(\tilde{x}^S)\big] + \frac{2\delta(b)}{(\beta-1-4\delta(b))mS}\mathbb{E}\big[F(\tilde{x}^0) - F(x_m^S)\big] + \frac{L\beta(\beta-1)}{2(\beta-1-4\delta(b))mS}\mathbb{E}\big[\|x^* - \tilde{x}^0\|^2\big] \\
&\le \frac{2\delta(b)}{(\beta-1-4\delta(b))S}\mathbb{E}\big[F(\tilde{x}^0) - F(x^*)\big] + \frac{2\delta(b)}{(\beta-1-4\delta(b))mS}\mathbb{E}\big[F(\tilde{x}^0) - F(x^*)\big] + \frac{L\beta(\beta-1)}{2(\beta-1-4\delta(b))mS}\mathbb{E}\big[\|x^* - \tilde{x}^0\|^2\big] \\
&= \frac{2\delta(b)(m+1)}{(\beta-1-4\delta(b))mS}\mathbb{E}\big[F(\tilde{x}^0) - F(x^*)\big] + \frac{L\beta(\beta-1)}{2(\beta-1-4\delta(b))mS}\mathbb{E}\big[\|x^* - \tilde{x}^0\|^2\big],
\end{aligned}$$
where the first inequality uses $\tilde{x}^0 = x_0^1$ and $\|x^* - x_m^S\|^2 \ge 0$, and the second inequality uses $F(\tilde{x}^S) \ge F(x^*)$ and $F(x_m^S) \ge F(x^*)$.
When $F(\tilde{x}^S) \le F(\bar{x}^S)$, then $\hat{x}^S = \tilde{x}^S$, and
$$\mathbb{E}\big[F(\hat{x}^S) - F(x^*)\big] \le \frac{2\delta(b)(m+1)}{(\beta-1-4\delta(b))mS}\mathbb{E}\big[F(\tilde{x}^0) - F(x^*)\big] + \frac{L\beta(\beta-1)}{2(\beta-1-4\delta(b))mS}\mathbb{E}\big[\|x^* - \tilde{x}^0\|^2\big].$$
Alternatively, if $F(\tilde{x}^S) \ge F(\bar{x}^S)$, then $\hat{x}^S = \bar{x}^S$, and the above inequality still holds. This completes the proof.
From Theorem 4, one can see that when b = n (i.e., the batch setting), we have δ(n) = 0, and the first term on the
right-hand side of (25) vanishes. In other words, our VR-SGD method degenerates to the deterministic method with the
convergence rate of O(1/T ). Furthermore, when b = 1, we have δ(1) = 1, and then Theorem 4 degenerates to Theorem 3.
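The degeneration behavior described above can be checked numerically. The sketch below is ours: the paper does not restate a closed form for δ(b) at this point, so we assume the standard mini-batch variance factor δ(b) = (n − b)/(b(n − 1)), which matches the endpoints δ(n) = 0 and δ(1) = 1 used in the text.

```python
def delta(b, n):
    # Assumed mini-batch variance factor (an assumption, not taken from the
    # paper): standard in mini-batch variance-reduction analyses, and it
    # satisfies delta(n) = 0 and delta(1) = 1 as stated in the text.
    return (n - b) / (b * (n - 1))

def leading_coeff(b, n, m, beta, S):
    # Coefficient of E[F(x~0) - F(x*)] in the bound (25):
    # 2*delta(b)*(m+1) / ((beta - 1 - 4*delta(b)) * m * S)
    d = delta(b, n)
    return 2 * d * (m + 1) / ((beta - 1 - 4 * d) * m * S)

n, m, beta, S = 1000, 2000, 10.0, 50
print(delta(n, n))   # batch setting b = n: the first term of (25) vanishes
print(delta(1, n))   # b = 1 recovers the factor 1, as in Theorem 3
print(leading_coeff(64, n, m, beta, S) > 0)
```

With this assumed δ(b), the first term of (25) vanishes at b = n and grows monotonically as the mini-batch size shrinks.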
4.2
Convergence Properties for Strongly Convex Problems
In this part, we analyze the convergence properties of VR-SGD (i.e., Algorithm 2) for solving the strongly convex objective
function (1). According to the above analysis, one can see that if F (·) is convex, and each convex component function fi (·)
is L-smooth, both Algorithms 2 and 3 converge to the optimal solution. In the following, we provide stronger convergence
rate guarantees for VR-SGD under the strongly convex condition. We first give the following assumption.
Assumption 3. For all $s=1,2,\ldots,S$, the following inequality holds

$$\mathbb{E}[F(x^s_0)-F(x^*)]\le C\,\mathbb{E}\bigl[F(\tilde{x}^{s-1})-F(x^*)\bigr],$$

where $C>0$ is a constant.^4
Similar to Algorithm 3, Algorithm 2 also has two different update rules for the two cases of smooth and non-smooth
objective functions. We first give the following convergence result for Algorithm 2 with Option I.
4. This assumption shows the relationship between the gaps between the function values at the starting and snapshot points of each epoch and the optimal value of the objective function. In fact, both $\tilde{x}^s$ and $x^s_m$ (i.e., $x^{s+1}_0$) converge to $x^*$, and thus $C$ is far less than $m$, i.e., $C\ll m$.
Theorem 5 (Option I). Suppose Assumptions 1, 2, and 3 hold, and $m$ is sufficiently large so that

$$\rho_I:=\frac{2L\eta(m+C)}{(m-1)(1-3L\eta)}+\frac{C(1-L\eta)}{\mu\eta(m-1)(1-3L\eta)}<1.$$

Then Algorithm 2 with Option I has the following geometric convergence in expectation:

$$\mathbb{E}[F(\tilde{x}^s)-F(x^*)]\le\rho_I^s\,\bigl[F(\tilde{x}^0)-F(x^*)\bigr].$$
Proof. Since each $f_i(\cdot)$ is convex and L-smooth, Corollary 1 holds, which then implies

$$\begin{aligned}
&\Bigl(1-\frac{2}{\beta-1}\Bigr)\mathbb{E}[F(\tilde{x}^s)-F(x^*)]+\frac{1}{m-1}\mathbb{E}\bigl[F(x^{s+1}_0)-F(x^*)\bigr]+\frac{L\beta}{2(m-1)}\mathbb{E}\|x^{s+1}_0-x^*\|^2\\
&\le\frac{2m}{(m-1)(\beta-1)}\mathbb{E}\bigl[F(\tilde{x}^{s-1})-F(x^*)\bigr]+\frac{2}{(m-1)(\beta-1)}\mathbb{E}[F(x^s_0)-F(x^*)]+\frac{L\beta}{2(m-1)}\mathbb{E}\|x^s_0-x^*\|^2.
\end{aligned} \qquad (26)$$

Due to the strong convexity of $F(\cdot)$, we have $\|x^s_0-x^*\|^2\le(2/\mu)[F(x^s_0)-F(x^*)]$. Then the inequality in (26) can be rewritten as follows (dropping the nonnegative terms involving $x^{s+1}_0$ on the left-hand side):

$$\begin{aligned}
\Bigl(1-\frac{2}{\beta-1}\Bigr)\mathbb{E}[F(\tilde{x}^s)-F(x^*)]
&\le\frac{2m}{(m-1)(\beta-1)}\mathbb{E}\bigl[F(\tilde{x}^{s-1})-F(x^*)\bigr]+\Bigl(\frac{2}{(m-1)(\beta-1)}+\frac{L\beta}{\mu(m-1)}\Bigr)\mathbb{E}[F(x^s_0)-F(x^*)]\\
&\le\frac{2m}{(m-1)(\beta-1)}\mathbb{E}\bigl[F(\tilde{x}^{s-1})-F(x^*)\bigr]+\Bigl(\frac{2C}{(m-1)(\beta-1)}+\frac{CL\beta}{\mu(m-1)}\Bigr)\mathbb{E}\bigl[F(\tilde{x}^{s-1})-F(x^*)\bigr]\\
&=\Bigl(\frac{2(m+C)}{(m-1)(\beta-1)}+\frac{CL\beta}{\mu(m-1)}\Bigr)\mathbb{E}\bigl[F(\tilde{x}^{s-1})-F(x^*)\bigr],
\end{aligned}$$

where the first inequality holds due to the fact that $\|x^s_0-x^*\|^2\le(2/\mu)[F(x^s_0)-F(x^*)]$, and the second inequality follows from Assumption 3. Dividing both sides of the above inequality by $[1-2/(\beta-1)]>0$ (that is, $\beta$ is required to be larger than 3) and using the definition of $\beta=1/(L\eta)$, we arrive at

$$\begin{aligned}
\mathbb{E}[F(\tilde{x}^s)-F(x^*)]
&\le\Bigl(\frac{2(m+C)}{(m-1)(\beta-3)}+\frac{CL\beta(\beta-1)}{\mu(m-1)(\beta-3)}\Bigr)\mathbb{E}\bigl[F(\tilde{x}^{s-1})-F(x^*)\bigr]\\
&=\Bigl(\frac{2(m+C)L\eta}{(m-1)(1-3L\eta)}+\frac{C(1-L\eta)}{\mu\eta(m-1)(1-3L\eta)}\Bigr)\mathbb{E}\bigl[F(\tilde{x}^{s-1})-F(x^*)\bigr].
\end{aligned}$$

This completes the proof.
Although the learning rate η in Theorem 5 needs to be less than 1/(3L), we can use much larger learning rates in
practice, e.g., η = 3/(7L). Similar to Theorem 5, we give the following convergence result for Algorithm 2 with Option II.
Theorem 6 (Option II). Suppose Assumptions 1, 2, and 3 hold, and $m$ is sufficiently large so that

$$\rho_{II}:=\frac{2L\eta(m+C)}{m(1-3L\eta)}+\frac{C(1-L\eta)}{m\mu\eta(1-3L\eta)}<1.$$

Then Algorithm 2 with Option II has the following geometric convergence in expectation:

$$\mathbb{E}[F(\tilde{x}^s)-F(x^*)]\le\rho_{II}^s\,\bigl[F(\tilde{x}^0)-F(x^*)\bigr].$$
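To make the conditions of Theorems 5 and 6 concrete, the following sketch (with illustrative values of L, μ, m, C, and η chosen by us, not taken from the paper) evaluates the contraction factors ρ_I and ρ_II and checks that they are below one:

```python
def rho_I(L, mu, m, C, eta):
    # Contraction factor of Theorem 5 (Option I):
    # 2*L*eta*(m+C)/((m-1)*(1-3*L*eta)) + C*(1-L*eta)/(mu*eta*(m-1)*(1-3*L*eta))
    a = 2 * L * eta * (m + C) / ((m - 1) * (1 - 3 * L * eta))
    b = C * (1 - L * eta) / (mu * eta * (m - 1) * (1 - 3 * L * eta))
    return a + b

def rho_II(L, mu, m, C, eta):
    # Contraction factor of Theorem 6 (Option II): same shape, with m replacing m-1.
    a = 2 * L * eta * (m + C) / (m * (1 - 3 * L * eta))
    b = C * (1 - L * eta) / (m * mu * eta * (1 - 3 * L * eta))
    return a + b

L, mu, C = 1.0, 1e-3, 1.0        # illustrative values only
n = 10_000
m, eta = 2 * n, 0.1 / L          # m = 2n as in the experiments; eta < 1/(3L)
print(rho_I(L, mu, m, C, eta), rho_II(L, mu, m, C, eta))   # both below 1
```

Both factors require η < 1/(3L) and m large enough relative to 1/(μη); with these sample values the contraction is roughly 0.93 per epoch.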
The proof of Theorem 6 can be found in APPENDIX F. In addition, we can also provide the linear convergence
guarantees of Algorithm 2 with Option I or Option II for solving smooth strongly convex objective functions. From all
the results, one can see that VR-SGD attains a linear convergence rate for both smooth and non-smooth strongly convex
minimization problems.
5
EXPERIMENTAL RESULTS
In this section, we evaluate the performance of our VR-SGD method for solving various ERM problems, such as logistic
regression, Lasso, and ridge regression, and compare its performance with several related stochastic variance reduced
methods, including SVRG [1], Prox-SVRG [2], and Katyusha [3]. All the codes of VR-SGD and related methods can be
downloaded from the author’s website5 .
5. https://sites.google.com/site/fanhua217/publications
[Figure 3 originally appears here: eight plots of F(x^s) − F(x*) (log scale) versus gradient evaluations / n, comparing Options I, II, and III on (a) Adult with λ = 10−5 (left) and λ = 10−6 (right); (b) Protein with λ = 10−5 (left) and λ = 10−6 (right); (c) Covtype with λ = 10−5 (left) and λ = 10−6 (right); (d) Sido0 with λ = 10−4 (left) and λ = 10−5 (right).]
Fig. 3. Comparison of Options I, II, and III for solving ridge regression problems with the regularizer (λ/2)k · k2 . In each plot, the vertical axis shows
the objective value minus the minimum, and the horizontal axis is the number of effective passes.
5.1
Experimental Setup
We used four publicly available data sets in the experiments: Adult (also called a9a), Covtype, Protein, and Sido0, as listed
in Table 1. Note that each example in these data sets was normalized to unit length as in [2], [34], which
leads to the same upper bound on the Lipschitz constants Li , i.e., L = Li for all i = 1, . . . , n. As suggested in [1], [2], [3], the epoch
length is set to m = 2n for the stochastic variance reduced methods, SVRG [1], Prox-SVRG [2], and Katyusha [3], as well as
our VR-SGD method. Then the only parameter we have to tune by hand is the step size (or learning rate), η . Since Katyusha
has a much higher per-iteration complexity than SVRG, Prox-SVRG, and VR-SGD, we compare their performance in terms
of both the number of effective passes and running time (seconds), where computing a single full gradient or evaluating
n component gradients is considered as one effective pass over the data. For fair comparison, we implemented SVRG,
Prox-SVRG, Katyusha, and VR-SGD in C++ with a Matlab interface, and performed all the experiments on a PC with an
Intel i5-2400 CPU and 16GB RAM. In addition, we do not compare with other stochastic algorithms such as SAGA [30] and
Catalyst [40], as they have been shown to be comparable or inferior to Katyusha [3].
TABLE 1
Summary of data sets used for our experiments.

Data sets | Sizes n | Dimensions d | Sparsity
--------- | ------- | ------------ | --------
Adult     | 32,562  | 123          | 11.28%
Protein   | 145,751 | 74           | 99.21%
Covtype   | 581,012 | 54           | 22.12%
Sido0     | 12,678  | 4,932        | 9.84%
TABLE 2
The three choices of snapshot and starting points for stochastic optimization.

Option I:   $\tilde{x}^{s+1}=x^s_m$ and $x^{s+1}_0=x^s_m$
Option II:  $\tilde{x}^{s+1}=\frac{1}{m}\sum_{k=1}^{m}x^s_k$ and $x^{s+1}_0=\frac{1}{m}\sum_{k=1}^{m}x^s_k$
Option III: $\tilde{x}^{s+1}=\frac{1}{m-1}\sum_{k=1}^{m-1}x^s_k$ and $x^{s+1}_0=x^s_m$
[Figure 4 originally appears here: eight plots of F(x^s) − F(x*) (log scale) versus gradient evaluations / n, comparing Options I, II, and III on (a) Adult with λ = 10−4 (left) and λ = 10−5 (right); (b) Protein with λ = 10−4 (left) and λ = 10−5 (right); (c) Covtype with λ = 10−4 (left) and λ = 10−5 (right); (d) Sido0 with λ = 10−4 (left) and λ = 10−5 (right).]
Fig. 4. Comparison of Options I, II, and III for solving Lasso problems with the regularizer λkxk1 . In each plot, the vertical axis shows the objective
value minus the minimum, and the horizontal axis is the number of effective passes.
5.2
Different Choices for Snapshot and Starting Points
In the practical implementation of SVRG [1], both the snapshot point $\tilde{x}^s$ and the starting point $x^{s+1}_0$ of each epoch are set to the last iterate $x^s_m$ of the previous epoch (i.e., Option I in Algorithm 1), while the two vectors in [2] (also suggested in [1]) are set to the average point of the previous epoch, $\frac{1}{m}\sum_{k=1}^{m}x^s_k$ (i.e., Option II in Algorithm 1). In contrast, $\tilde{x}^s$ and $x^{s+1}_0$ in this paper are set to $\frac{1}{m-1}\sum_{k=1}^{m-1}x^s_k$ and $x^s_m$ (denoted by Option III, i.e., Option I in Algorithms 2 and 3), respectively.^6
Note that Johnson and Zhang [1] first presented the choices of Options I and II, while the setting of Option III is suggested
in this paper. In the following, we compare the performance of the three choices (i.e., Options I, II and III in Table 2) for
snapshot and starting points for solving ridge regression and Lasso problems, as shown in Figs. 3 and 4. Except for the
three different settings for snapshot and starting points, we use the update rules in (12) and (13) for ridge regression and
Lasso problems, respectively.
From all the results shown in Figs. 3 and 4, we can see that our algorithms with Option III (i.e., Algorithms 2 and 3 with
their Option I) consistently converge much faster than SVRG with the choices of Options I and II for both strongly convex
and non-strongly convex cases. This indicates that the setting of Option III suggested in this paper is a better choice than
Options I and II for stochastic optimization.
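To make the three settings in Table 2 concrete, the following sketch (ours; an illustrative ridge-regression instance, not the authors' C++ implementation) runs one variance-reduced epoch and returns the next snapshot and starting points according to Options I, II, and III:

```python
import numpy as np

def vr_epoch(A, b, x0, x_snap, eta, m, option, lam=1e-3, seed=0):
    """One variance-reduced epoch for ridge regression with
    f_i(x) = 0.5*(a_i^T x - b_i)^2 + 0.5*lam*||x||^2 (illustrative objective).
    Returns (next snapshot x_tilde, next starting point x_0) per Options I-III."""
    rng = np.random.default_rng(seed)
    n = A.shape[0]
    mu = A.T @ (A @ x_snap - b) / n + lam * x_snap        # full gradient at snapshot
    g = lambda i, z: A[i] * (A[i] @ z - b[i]) + lam * z   # component gradient
    x, iters = x0.copy(), []
    for _ in range(m):
        i = rng.integers(n)
        x = x - eta * (g(i, x) - g(i, x_snap) + mu)       # variance-reduced step
        iters.append(x.copy())
    if option == "I":    # SVRG practical variant: both points are the last iterate
        return iters[-1], iters[-1]
    if option == "II":   # Prox-SVRG style: both points are the epoch average
        avg = np.mean(iters, axis=0)
        return avg, avg
    # Option III (this paper): average of the first m-1 iterates, last iterate
    return np.mean(iters[:-1], axis=0), iters[-1]

rng = np.random.default_rng(1)
A = rng.standard_normal((200, 10))
A /= np.linalg.norm(A, axis=1, keepdims=True)   # unit-length rows, as in Section 5.1
b = A @ np.ones(10)
snap, x = np.zeros(10), np.zeros(10)
for s in range(10):
    snap, x = vr_epoch(A, b, x, snap, eta=0.1, m=2000, option="III", seed=s)
dist = float(np.linalg.norm(x - np.ones(10)))
print(dist)   # shrinks toward the (slightly regularized) solution
```

The three branches differ only in how the epoch's iterates are recombined, which is exactly the distinction between Options I, II, and III studied in this subsection.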
5.3
Logistic Regression
In this part, we focus on the following generalized logistic regression problems for binary classification,
$$\min_{x\in\mathbb{R}^d}\ \frac{1}{n}\sum_{i=1}^{n}\log\bigl(1+\exp(-b_i a_i^{T}x)\bigr)+\frac{\lambda_1}{2}\|x\|^2+\lambda_2\|x\|_1, \qquad (27)$$

where $\{(a_i, b_i)\}$ is a set of training examples, and $\lambda_1,\lambda_2\ge 0$ are the regularization parameters. Note that when $\lambda_2>0$, $f_i(x)=\log(1+\exp(-b_i a_i^{T}x))+(\lambda_1/2)\|x\|^2$. The formulation (27) includes the $\ell_2$-norm (i.e., $\lambda_2=0$), $\ell_1$-norm (i.e., $\lambda_1=0$), and elastic net (i.e., $\lambda_1\neq 0$ and $\lambda_2\neq 0$) regularized logistic regression problems. Figs. 5, 6 and 7 show how the objective gap, i.e., $F(x^s)-F(x^*)$, decreases for the $\ell_2$-norm, $\ell_1$-norm, and elastic net regularized logistic regression problems, respectively.
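As a concrete companion to (27), the sketch below (ours, not the authors' implementation) evaluates the elastic-net regularized logistic objective, the gradient of its smooth part, and the soft-thresholding operator that serves as the proximal map of the ℓ1 term; a few plain proximal-gradient steps illustrate that the objective decreases.

```python
import numpy as np

def logistic_obj(x, A, b, lam1, lam2):
    # F(x) = (1/n) sum log(1 + exp(-b_i a_i^T x)) + (lam1/2)||x||^2 + lam2*||x||_1, cf. (27)
    z = -b * (A @ x)
    return float(np.mean(np.logaddexp(0.0, z)) + 0.5 * lam1 * x @ x + lam2 * np.abs(x).sum())

def smooth_grad(x, A, b, lam1):
    # Gradient of the smooth part; with lam2 > 0 the l2 term is folded into
    # each f_i, matching fi(x) = log(1+exp(-b_i a_i^T x)) + (lam1/2)||x||^2.
    z = -b * (A @ x)
    s = 1.0 / (1.0 + np.exp(-z))          # sigmoid(-b_i a_i^T x)
    return A.T @ (-b * s) / len(b) + lam1 * x

def soft_threshold(v, t):
    # Proximal operator of t*||.||_1, handling the non-smooth l1 term.
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

rng = np.random.default_rng(0)
A = rng.standard_normal((50, 5)); b = np.sign(rng.standard_normal(50))
x = np.zeros(5)
for _ in range(200):                       # plain proximal gradient, for illustration
    x = soft_threshold(x - 0.5 * smooth_grad(x, A, b, 1e-2), 0.5 * 1e-3)
print(logistic_obj(x, A, b, 1e-2, 1e-3) < logistic_obj(np.zeros(5), A, b, 1e-2, 1e-3))
```

Setting lam2 = 0 gives the ℓ2-regularized (strongly convex) case; lam1 = 0 gives the ℓ1-regularized (non-strongly convex) case.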
From all the results, we make the following observations.
6. As Options I and II in Algorithms 2 and 3 achieve very similar performance, we only report the results of Algorithms 2 and 3 with Option I.
• When the regularization parameters λ1 and λ2 are relatively large, e.g., λ1 = 10−4 or λ2 = 10−4, Prox-SVRG usually converges faster than SVRG for both strongly convex (e.g., ℓ2-norm regularized logistic regression) and non-strongly convex (e.g., ℓ1-norm regularized logistic regression) cases, as shown in Figs. 5(a)–5(c) and Figs. 6(a)–6(b). On the contrary, SVRG often outperforms Prox-SVRG when the regularization parameters are relatively small, e.g., λ1 = 10−6 or λ2 = 10−6, as observed in [45]. The main reason is that they have different initialization settings, i.e., $\tilde{x}^s=x^s_m$ and $x^{s+1}_0=x^s_m$ for SVRG vs. $\tilde{x}^s=\frac{1}{m}\sum_{k=1}^{m}x^s_k$ and $x^{s+1}_0=\frac{1}{m}\sum_{k=1}^{m}x^s_k$ for Prox-SVRG.
• Katyusha converges much faster than SVRG and Prox-SVRG in the cases when the regularization parameters are relatively small, e.g., λ1 = 10−6, whereas it often achieves similar or inferior performance when the regularization parameters are relatively large, e.g., λ1 = 10−4, as shown in Figs. 5(a)–5(d) and Figs. 6(a)–6(b). Note that we implemented the original algorithms with Option I in [3] for Katyusha. In other words, Katyusha is an accelerated proximal stochastic gradient method. Obviously, the above observation matches the convergence properties of Katyusha provided in [3], that is, only if $m\mu/L\le 3/4$, Katyusha attains the best known overall complexity of $O\bigl((n+\sqrt{nL/\mu})\log(1/\epsilon)\bigr)$ for strongly convex problems.
• Our VR-SGD method consistently converges much faster than SVRG and Prox-SVRG, especially when the regularization parameters are relatively small, e.g., λ1 = 10−6 or λ2 = 10−6, as shown in Figs. 5(i)–5(l) and Figs. 6(e)–6(f). The main reason is that VR-SGD can use much larger learning rates than SVRG (e.g., 6/(5L) for VR-SGD vs. 1/(10L) for SVRG), which leads to faster convergence. This further verifies that the setting of both snapshot and starting points in our algorithms (i.e., Algorithms 2 and 3) is a better choice than Options I and II in Algorithm 1.
• In particular, VR-SGD generally outperforms the best-known stochastic method, Katyusha, in terms of the number of passes through the data, especially when the regularization parameters are relatively large, e.g., 10−4 and 10−5, as shown in Figs. 5(a)–5(h) and Figs. 6(a)–6(d). Since VR-SGD has a much lower per-iteration complexity than Katyusha, VR-SGD has a more obvious advantage over Katyusha in terms of running time. From the algorithms of Katyusha proposed in [3], one can see that the learning rate of Katyusha is at most set to 1/(3L). The learning rate used in VR-SGD is comparable to that of Katyusha, which may be the main reason why the performance of VR-SGD is much better than that of Katyusha. This also implies that an algorithm that enjoys larger learning rates can yield better performance.
5.4
Common Stochastic Gradient and Prox-SG Updates
In this part, we compare the original Katyusha algorithm, i.e., Algorithm 1 in [3], with the slightly modified Katyusha
algorithm (denoted by Katyusha-I). In Katyusha-I, only the following two update rules for smooth objective functions are
used to replace the original proximal stochastic gradient update rules in (6b) and (6c).
$$y^s_{k+1}=y^s_k-\eta\bigl[\widetilde{\nabla} f_{i_k}(x^s_{k+1})+\nabla g(x^s_{k+1})\bigr],$$
$$z^s_{k+1}=x^s_{k+1}-\bigl[\widetilde{\nabla} f_{i_k}(x^s_{k+1})+\nabla g(x^s_{k+1})\bigr]/(3L). \qquad (28)$$
Similarly, we also implement the proximal versions7 for the original SVRG (also called SVRG-I) and the proposed VR-SGD
(denoted by VR-SGD-I) methods, and denote their proximal variants by SVRG-II and VR-SGD-II, respectively. Here, the
original Katyusha method is denoted by Katyusha-II.
Figs. 8 and 9 show the performance of Katyusha-I and Katyusha-II for solving ridge regression problems on the two
popular data sets: Adult and Covtype. We also report the results of SVRG, VR-SGD, and their proximal variants. It is clear
that Katyusha-I usually performs better than Katyusha-II (i.e., the original proximal stochastic method, Katyusha [3]), and
converges significantly faster for the case when the regularization parameter is 10−4 . This seems to be the main reason
why Katyusha has inferior performance when the regularization parameter is relatively large, as shown in Section 5.3. In
7. Here, the proximal variant of SVRG is different from Prox-SVRG [2], and their main difference is in the choices of both the snapshot point and the starting point. That is, the two vectors of the former are set to the last iterate $x^s_m$, while those of Prox-SVRG are set to the average point of the previous epoch, i.e., $\frac{1}{m}\sum_{k=1}^{m}x^s_k$.
contrast, VR-SGD and its proximal variant have similar performance, and the former slightly outperforms the latter in most
cases, as well as SVRG vs. its proximal variant. All this suggests that stochastic gradient update rules as in (12) and (28)
are better choices than proximal stochastic gradient update rules as in (10), (6b) and (6c) for smooth objective functions. We
also believe that our new insight can help us to design accelerated stochastic optimization methods. Both Katyusha-I and
Katyusha-II usually outperform SVRG and its proximal variant, especially when the regularization parameter is relatively
small, e.g., λ = 10−6 , as shown in Figs. 8(d) and 9(d). Unfortunately, both Katyusha-I and Katyusha-II cannot solve the
convex objectives without any regularization term (i.e., λ = 0, as shown in Figs. 8(f) and 9(f)) or with too large and small
regularization parameters, e.g., 10−3 and 10−7 , as shown in Figs. 9(a) and 8(e).
Moreover, it can be seen that both VR-SGD and its proximal variant achieve much better performance than the other
methods in most cases, and are also comparable to Katyusha-I and Katyusha-II in the remaining cases. This further verifies
that VR-SGD is suitable for various large-scale machine learning.
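The contrast drawn in this subsection, between a plain stochastic gradient step on the whole smooth objective and a proximal stochastic gradient step, can be isolated in one dimension for a smooth ℓ2 regularizer g(x) = (λ/2)x². The sketch below is ours and stands in for the general update rules; v plays the role of the (variance-reduced) stochastic gradient of f.

```python
def sg_step(x, v, eta, lam):
    # Stochastic gradient step on the whole smooth objective f(x) + (lam/2)*x^2.
    return x - eta * (v + lam * x)

def prox_sg_step(x, v, eta, lam):
    # Proximal stochastic gradient step: gradient step on f only, then the
    # prox of eta*(lam/2)*x^2, which is the scaling 1/(1 + eta*lam).
    return (x - eta * v) / (1.0 + eta * lam)

x, v, eta, lam = 2.0, 1.0, 0.1, 0.5
a, b = sg_step(x, v, eta, lam), prox_sg_step(x, v, eta, lam)
print(a, b)   # close but not identical: 1.8 vs roughly 1.81
```

Both updates agree to first order in η, but they are not the same map, which is why the smooth and composite cases are handled by different update rules in this paper.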
5.5
Fixed and Varied Learning Rates
Finally, we compare the performance of Algorithm 3 with fixed and varied learning rates for solving `1 -norm regularized
logistic regression and Lasso problems, as shown in Fig. 10. Note that the learning rates in Algorithm 3 are varied according
to the update formula in (14). We can observe that Algorithm 3 with varied step sizes performs very similarly to Algorithm 3
with fixed step-sizes in most cases. In the remaining cases, the former slightly outperforms the latter.
6
CONCLUSIONS
In this paper, we proposed a simple variant of the original SVRG [1], called variance reduced stochastic gradient descent
(VR-SGD). Unlike the choices of the snapshot point and starting point in SVRG and its proximal variant, Prox-SVRG [2],
the two points of each epoch in VR-SGD are set to the average and last iterate of the previous epoch, respectively. This
setting allows us to use much larger learning rates than SVRG, e.g., 3/(7L) for VR-SGD vs. 1/(10L) for SVRG, and also
makes VR-SGD more robust in terms of learning rate selection. Different from existing proximal stochastic methods such as
Prox-SVRG [2] and Katyusha [3], we designed two different update rules for smooth and non-smooth objective functions,
respectively, which makes our VR-SGD method suitable for non-smooth and/or non-strongly convex problems without
using any reduction techniques as in [3], [61]. In contrast, SVRG and Prox-SVRG cannot directly solve non-strongly convex
objectives [42]. Furthermore, our empirical results showed that for smooth problems stochastic gradient update rules as in
(12) are better choices than proximal stochastic gradient update formulas as in (10).
On the practical side, the choices of the snapshot and starting points make VR-SGD significantly faster than its
counterparts, SVRG and Prox-SVRG. On the theoretical side, the setting also makes our convergence analysis more
challenging. We analyzed the convergence properties of VR-SGD for strongly convex objective functions, which show
that VR-SGD attains a linear convergence rate. Moreover, we also provided the convergence guarantees of VR-SGD for
non-strongly convex problems, which show that VR-SGD achieves a sub-linear convergence rate. All these results imply
that VR-SGD is guaranteed to have a similar convergence rate with other variance reduced stochastic methods such as
SVRG and Prox-SVRG, and a slower theoretical rate than accelerated methods such as Katyusha [3]. Nevertheless, various
experimental results show that VR-SGD significantly outperforms SVRG and Prox-SVRG, and is still much better than
the best known stochastic method, Katyusha [3].
APPENDIX A: PROOF OF LEMMA 2
Before proving Lemma 2, we first give and prove the following lemma.
Lemma 9. Suppose each $f_i(\cdot)$ is L-smooth, and let $x^*$ be the optimal solution of Problem (1) when $g(x)\equiv 0$. Then we have

$$\mathbb{E}\bigl[\|\nabla f_i(x)-\nabla f_i(x^*)\|^2\bigr]\le 2L\,[f(x)-f(x^*)].$$
Proof. Following Theorem 2.1.5 in [19] and Lemma 3.4 in [2], we have

$$\|\nabla f_i(x)-\nabla f_i(x^*)\|^2\le 2L\,\bigl[f_i(x)-f_i(x^*)-\langle\nabla f_i(x^*),\,x-x^*\rangle\bigr].$$

Summing the above inequality over $i=1,\ldots,n$, we obtain

$$\mathbb{E}\bigl[\|\nabla f_i(x)-\nabla f_i(x^*)\|^2\bigr]=\frac{1}{n}\sum_{i=1}^{n}\|\nabla f_i(x)-\nabla f_i(x^*)\|^2\le 2L\,\bigl[f(x)-f(x^*)-\langle\nabla f(x^*),\,x-x^*\rangle\bigr].$$

By the optimality of $x^*$, i.e., $x^*=\arg\min_x f(x)$, we have $\nabla f(x^*)=0$. Then

$$\mathbb{E}\bigl[\|\nabla f_i(x)-\nabla f_i(x^*)\|^2\bigr]\le 2L\,\bigl[f(x)-f(x^*)-\langle\nabla f(x^*),\,x-x^*\rangle\bigr]=2L\,[f(x)-f(x^*)].$$
Proof of Lemma 2:

Proof.

$$\begin{aligned}
\mathbb{E}\bigl[\|\widetilde{\nabla} f_{i^s_k}(x^s_k)-\nabla f(x^s_k)\|^2\bigr]
&=\mathbb{E}\bigl[\|\nabla f_{i^s_k}(x^s_k)-\nabla f_{i^s_k}(\tilde{x}^{s-1})+\nabla f(\tilde{x}^{s-1})-\nabla f(x^s_k)\|^2\bigr]\\
&=\mathbb{E}\bigl[\|\nabla f_{i^s_k}(x^s_k)-\nabla f_{i^s_k}(\tilde{x}^{s-1})\|^2\bigr]-\|\nabla f(x^s_k)-\nabla f(\tilde{x}^{s-1})\|^2\\
&\le\mathbb{E}\bigl[\|\nabla f_{i^s_k}(x^s_k)-\nabla f_{i^s_k}(\tilde{x}^{s-1})\|^2\bigr]\\
&\le 2\,\mathbb{E}\bigl[\|\nabla f_{i^s_k}(x^s_k)-\nabla f_{i^s_k}(x^*)\|^2\bigr]+2\,\mathbb{E}\bigl[\|\nabla f_{i^s_k}(\tilde{x}^{s-1})-\nabla f_{i^s_k}(x^*)\|^2\bigr]\\
&\le 4L\,\bigl[f(x^s_k)-f(x^*)+f(\tilde{x}^{s-1})-f(x^*)\bigr],
\end{aligned}$$

where the second equality holds due to the fact that $\mathbb{E}[\|x-\mathbb{E}[x]\|^2]=\mathbb{E}[\|x\|^2]-\|\mathbb{E}[x]\|^2$; the second inequality holds due to the fact that $\|a-b\|^2\le 2(\|a\|^2+\|b\|^2)$; and the last inequality follows from Lemma 9.
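The two facts proved in this appendix can be checked numerically. The sketch below (ours) uses an illustrative least-squares instance: it verifies the bound of Lemma 9 and the unbiasedness of the variance-reduced gradient estimator that drives Lemma 2.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 100, 4
A = rng.standard_normal((n, d)); b = rng.standard_normal(n)

# f_i(x) = 0.5*(a_i^T x - b_i)^2: each f_i is L-smooth with L = max_i ||a_i||^2.
L = float(max(np.sum(A * A, axis=1)))
f = lambda x: 0.5 * np.mean((A @ x - b) ** 2)
grads = lambda x: A * (A @ x - b)[:, None]             # all n component gradients

x_star = np.linalg.lstsq(A, b, rcond=None)[0]          # minimizer of f (g = 0)
x = rng.standard_normal(d)

# Lemma 9: E_i ||grad f_i(x) - grad f_i(x*)||^2 <= 2L [f(x) - f(x*)]
lhs = float(np.mean(np.sum((grads(x) - grads(x_star)) ** 2, axis=1)))
print(lhs <= 2 * L * (f(x) - f(x_star)))

# The estimator of Lemma 2 is unbiased: averaging v_i over i gives grad f(x).
snap = rng.standard_normal(d)
v = grads(x) - grads(snap) + np.mean(grads(snap), axis=0)
print(np.allclose(np.mean(v, axis=0), np.mean(grads(x), axis=0)))
```

For quadratic components the Lemma 9 bound holds exactly as a worst-case constant, and the unbiasedness identity holds by construction of the estimator.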
APPENDIX B: PROOF OF LEMMA 5

Proof. For convenience, the stochastic gradient estimator is defined as $v^s_k:=\nabla f_{i^s_k}(x^s_k)-\nabla f_{i^s_k}(\tilde{x}^{s-1})+\nabla f(\tilde{x}^{s-1})$. Since each component function $f_i(x)$ is L-smooth, the gradient of the average function $f(x)$ is also L-smooth, i.e., for all $x,y\in\mathbb{R}^d$,

$$\|\nabla f(x)-\nabla f(y)\|\le L\|x-y\|,$$

whose equivalent form is

$$f(y)\le f(x)+\langle\nabla f(x),\,y-x\rangle+\frac{L}{2}\|y-x\|^2.$$

Using the above smoothness inequality, we have

$$\begin{aligned}
f(x^s_{k+1})&\le f(x^s_k)+\langle\nabla f(x^s_k),\,x^s_{k+1}-x^s_k\rangle+\frac{L}{2}\|x^s_{k+1}-x^s_k\|^2\\
&=f(x^s_k)+\langle\nabla f(x^s_k),\,x^s_{k+1}-x^s_k\rangle+\frac{L\beta}{2}\|x^s_{k+1}-x^s_k\|^2-\frac{L(\beta-1)}{2}\|x^s_{k+1}-x^s_k\|^2\\
&=f(x^s_k)+\langle v^s_k,\,x^s_{k+1}-x^s_k\rangle+\frac{L\beta}{2}\|x^s_{k+1}-x^s_k\|^2+\langle\nabla f(x^s_k)-v^s_k,\,x^s_{k+1}-x^s_k\rangle-\frac{L(\beta-1)}{2}\|x^s_{k+1}-x^s_k\|^2.
\end{aligned} \qquad (29)$$

Using Lemma 2, we then get

$$\begin{aligned}
&\mathbb{E}\Bigl[\langle\nabla f(x^s_k)-v^s_k,\,x^s_{k+1}-x^s_k\rangle-\frac{L(\beta-1)}{2}\|x^s_{k+1}-x^s_k\|^2\Bigr]\\
&\le\mathbb{E}\Bigl[\frac{1}{2L(\beta-1)}\|\nabla f(x^s_k)-v^s_k\|^2+\frac{L(\beta-1)}{2}\|x^s_{k+1}-x^s_k\|^2-\frac{L(\beta-1)}{2}\|x^s_{k+1}-x^s_k\|^2\Bigr]\\
&\le\frac{2}{\beta-1}\bigl[f(x^s_k)-f(x^*)+f(\tilde{x}^{s-1})-f(x^*)\bigr],
\end{aligned} \qquad (30)$$

where the first inequality holds due to Young's inequality (i.e., $y^Tz\le\|y\|^2/(2\theta)+\theta\|z\|^2/2$ for all $\theta>0$ and $y,z\in\mathbb{R}^d$), and the second inequality follows from Lemma 2.

Taking the expectation over the random choice of $i^s_k$ and substituting the inequality in (30) into the inequality in (29), we have

$$\begin{aligned}
\mathbb{E}[f(x^s_{k+1})]
&\le\mathbb{E}[f(x^s_k)]+\mathbb{E}\Bigl[\langle v^s_k,\,x^s_{k+1}-x^s_k\rangle+\frac{L\beta}{2}\|x^s_{k+1}-x^s_k\|^2\Bigr]+\frac{2}{\beta-1}\bigl[f(x^s_k)-f(x^*)+f(\tilde{x}^{s-1})-f(x^*)\bigr]\\
&\le\mathbb{E}[f(x^s_k)]+\mathbb{E}\Bigl[\langle v^s_k,\,x^*-x^s_k\rangle+\frac{L\beta}{2}\bigl(\|x^*-x^s_k\|^2-\|x^*-x^s_{k+1}\|^2\bigr)\Bigr]+\frac{2}{\beta-1}\bigl[f(x^s_k)-f(x^*)+f(\tilde{x}^{s-1})-f(x^*)\bigr]\\
&=\mathbb{E}[f(x^s_k)]+\langle\nabla f(x^s_k),\,x^*-x^s_k\rangle+\frac{L\beta}{2}\mathbb{E}\bigl[\|x^*-x^s_k\|^2-\|x^*-x^s_{k+1}\|^2\bigr]+\frac{2}{\beta-1}\bigl[f(x^s_k)-f(x^*)+f(\tilde{x}^{s-1})-f(x^*)\bigr]\\
&\le f(x^*)+\frac{L\beta}{2}\mathbb{E}\bigl[\|x^*-x^s_k\|^2-\|x^*-x^s_{k+1}\|^2\bigr]+\frac{2}{\beta-1}\bigl[f(x^s_k)-f(x^*)+f(\tilde{x}^{s-1})-f(x^*)\bigr],
\end{aligned}$$

where the second inequality follows from Lemma 1 with $\hat{z}=x^s_{k+1}$, $z=x^*$, $z_0=x^s_k$, $\tau=L\beta=1/\eta$, and $r(z):=\langle v^s_k,\,z-x^s_k\rangle$; the equality holds due to the fact that $\mathbb{E}[v^s_k]=\nabla f(x^s_k)$; and the last inequality follows from the convexity of $f(\cdot)$, i.e., $f(x^s_k)+\langle\nabla f(x^s_k),\,x^*-x^s_k\rangle\le f(x^*)$. The above inequality can be rewritten as follows:

$$\mathbb{E}[f(x^s_{k+1})]-f(x^*)\le\frac{2}{\beta-1}\bigl[f(x^s_k)-f(x^*)+f(\tilde{x}^{s-1})-f(x^*)\bigr]+\frac{L\beta}{2}\mathbb{E}\bigl[\|x^*-x^s_k\|^2-\|x^*-x^s_{k+1}\|^2\bigr].$$

Summing the above inequality over $k=0,1,\ldots,m-1$, we obtain

$$\sum_{k=0}^{m-1}\bigl\{\mathbb{E}[f(x^s_{k+1})]-f(x^*)\bigr\}\le\frac{2}{\beta-1}\sum_{k=0}^{m-1}\bigl[f(x^s_k)-f(x^*)+f(\tilde{x}^{s-1})-f(x^*)\bigr]+\frac{L\beta}{2}\mathbb{E}\bigl[\|x^*-x^s_0\|^2-\|x^*-x^s_m\|^2\bigr].$$

Then

$$\Bigl(1-\frac{2}{\beta-1}\Bigr)\sum_{k=1}^{m}\bigl\{\mathbb{E}[f(x^s_k)]-f(x^*)\bigr\}+\frac{2}{\beta-1}\sum_{k=1}^{m}\bigl\{\mathbb{E}[f(x^s_k)]-f(x^*)\bigr\}\le\frac{2}{\beta-1}\sum_{k=1}^{m}\bigl[f(x^s_{k-1})-f(x^*)+f(\tilde{x}^{s-1})-f(x^*)\bigr]+\frac{L\beta}{2}\mathbb{E}\bigl[\|x^*-x^s_0\|^2-\|x^*-x^s_m\|^2\bigr]. \qquad (31)$$

Due to the setting of $\tilde{x}^s=\frac{1}{m}\sum_{k=1}^{m}x^s_k$ in Option II, and the convexity of $f(\cdot)$, then

$$f(\tilde{x}^s)\le\frac{1}{m}\sum_{k=1}^{m}f(x^s_k).$$

Using the above inequality, the inequality in (31) becomes

$$m\Bigl(1-\frac{2}{\beta-1}\Bigr)\mathbb{E}[f(\tilde{x}^s)-f(x^*)]+\frac{2}{\beta-1}\sum_{k=1}^{m}\bigl\{\mathbb{E}[f(x^s_k)]-f(x^*)\bigr\}\le\frac{2}{\beta-1}\sum_{k=1}^{m}\bigl[f(x^s_{k-1})-f(x^*)\bigr]+\frac{2m}{\beta-1}\bigl[f(\tilde{x}^{s-1})-f(x^*)\bigr]+\frac{L\beta}{2}\mathbb{E}\bigl[\|x^*-x^s_0\|^2-\|x^*-x^s_m\|^2\bigr].$$

Dividing both sides of the above inequality by $m$ and subtracting $\frac{2}{(\beta-1)m}\sum_{k=1}^{m-1}[f(x^s_k)-f(x^*)]$ from both sides, we arrive at

$$\Bigl(1-\frac{2}{\beta-1}\Bigr)\mathbb{E}[f(\tilde{x}^s)-f(x^*)]+\frac{2}{(\beta-1)m}\mathbb{E}[f(x^s_m)-f(x^*)]\le\frac{2}{\beta-1}\mathbb{E}\bigl[f(\tilde{x}^{s-1})-f(x^*)\bigr]+\frac{2}{(\beta-1)m}\mathbb{E}[f(x^s_0)-f(x^*)]+\frac{L\beta}{2m}\mathbb{E}\bigl[\|x^*-x^s_0\|^2-\|x^*-x^s_m\|^2\bigr].$$

This completes the proof.
APPENDIX C: PROOF OF THEOREM 2

Proof. Using Lemma 5, we have

$$\Bigl(1-\frac{2}{\beta-1}\Bigr)\mathbb{E}[f(\tilde{x}^s)-f(x^*)]+\frac{2}{(\beta-1)m}\mathbb{E}[f(x^s_m)-f(x^*)]\le\frac{2}{(\beta-1)m}\mathbb{E}[f(x^s_0)-f(x^*)]+\frac{2}{\beta-1}\mathbb{E}\bigl[f(\tilde{x}^{s-1})-f(x^*)\bigr]+\frac{L\beta}{2m}\mathbb{E}\bigl[\|x^*-x^s_0\|^2-\|x^*-x^s_m\|^2\bigr].$$

Summing the above inequality over $s=1,2,\ldots,S$, taking expectation with respect to the history of random variables $i^s_k$, and using the setting of $x^{s+1}_0=x^s_m$ in Algorithm 3, we arrive at

$$\sum_{s=1}^{S}\Bigl(1-\frac{2}{\beta-1}\Bigr)\mathbb{E}[f(\tilde{x}^s)-f(x^*)]\le\frac{2}{(\beta-1)m}\sum_{s=1}^{S}\mathbb{E}\bigl\{f(x^s_0)-f(x^*)-[f(x^s_m)-f(x^*)]\bigr\}+\frac{2}{\beta-1}\sum_{s=1}^{S}\mathbb{E}\bigl[f(\tilde{x}^{s-1})-f(x^*)\bigr]+\frac{L\beta}{2m}\sum_{s=1}^{S}\mathbb{E}\bigl[\|x^*-x^s_0\|^2-\|x^*-x^s_m\|^2\bigr].$$

Subtracting $\frac{2}{\beta-1}\sum_{s=1}^{S}[f(\tilde{x}^s)-f(x^*)]$ from both sides of the above inequality, we obtain

$$\sum_{s=1}^{S}\Bigl(1-\frac{4}{\beta-1}\Bigr)\mathbb{E}[f(\tilde{x}^s)-f(x^*)]\le\frac{2}{(\beta-1)m}\mathbb{E}\bigl\{f(x^1_0)-f(x^*)-[f(x^S_m)-f(x^*)]\bigr\}+\frac{2}{\beta-1}\mathbb{E}\bigl[f(\tilde{x}^0)-f(\tilde{x}^S)\bigr]+\frac{L\beta}{2m}\mathbb{E}\bigl[\|x^*-x^1_0\|^2-\|x^*-x^S_m\|^2\bigr].$$

It is not hard to verify that $\mathbb{E}[f(\tilde{x}^0)-f(\tilde{x}^S)]\le f(\tilde{x}^0)-f(x^*)$. Dividing both sides of the above inequality by $S$, and using the choice $\tilde{x}^0=x^1_0$, we have

$$\begin{aligned}
\Bigl(1-\frac{4}{\beta-1}\Bigr)\frac{1}{S}\sum_{s=1}^{S}\mathbb{E}[f(\tilde{x}^s)-f(x^*)]
&\le\frac{2}{(\beta-1)mS}\mathbb{E}\bigl\{f(x^1_0)-f(x^*)-[f(x^S_m)-f(x^*)]\bigr\}+\frac{2}{(\beta-1)S}[f(\tilde{x}^0)-f(x^*)]+\frac{L\beta}{2mS}\|x^*-x^1_0\|^2 \qquad (32)\\
&\le\frac{2}{(\beta-1)mS}\bigl[f(\tilde{x}^0)-f(x^*)\bigr]+\frac{2}{(\beta-1)S}[f(\tilde{x}^0)-f(x^*)]+\frac{L\beta}{2mS}\|x^*-\tilde{x}^0\|^2\\
&=\frac{2(m+1)}{(\beta-1)mS}[f(\tilde{x}^0)-f(x^*)]+\frac{L\beta}{2mS}\|\tilde{x}^0-x^*\|^2,
\end{aligned}$$

where the first inequality holds due to the facts that $\mathbb{E}[f(\tilde{x}^0)-f(\tilde{x}^S)]\le f(\tilde{x}^0)-f(x^*)$ and $\mathbb{E}\|x^*-x^S_m\|^2\ge 0$, and the last inequality uses the facts that $\mathbb{E}[f(x^S_m)-f(x^*)]\ge 0$ and $\tilde{x}^0=x^1_0$.

Since $x^S=\frac{1}{S}\sum_{s=1}^{S}\tilde{x}^s$, and using the convexity of $f(\cdot)$, we have $f(x^S)\le\frac{1}{S}\sum_{s=1}^{S}f(\tilde{x}^s)$, and therefore the inequality in (32) becomes

$$\Bigl(1-\frac{4}{\beta-1}\Bigr)\mathbb{E}\bigl[f(x^S)-f(x^*)\bigr]\le\Bigl(1-\frac{4}{\beta-1}\Bigr)\frac{1}{S}\sum_{s=1}^{S}\mathbb{E}[f(\tilde{x}^s)-f(x^*)]\le\frac{2(m+1)}{(\beta-1)mS}[f(\tilde{x}^0)-f(x^*)]+\frac{L\beta}{2mS}\|\tilde{x}^0-x^*\|^2.$$

Dividing both sides of the above inequality by $\bigl(1-\frac{4}{\beta-1}\bigr)>0$ (i.e., $\eta<1/(5L)$), we arrive at

$$\mathbb{E}\bigl[f(x^S)-f(x^*)\bigr]\le\frac{2(m+1)}{mS(\beta-5)}[f(\tilde{x}^0)-f(x^*)]+\frac{L\beta(\beta-1)}{2mS(\beta-5)}\|\tilde{x}^0-x^*\|^2.$$

If $f(\tilde{x}^S)\le f(x^S)$, then $\hat{x}^S=\tilde{x}^S$, and

$$\mathbb{E}\bigl[f(\hat{x}^S)-f(x^*)\bigr]\le\mathbb{E}\bigl[f(x^S)-f(x^*)\bigr]\le\frac{2(m+1)}{mS(\beta-5)}[f(\tilde{x}^0)-f(x^*)]+\frac{L\beta(\beta-1)}{2mS(\beta-5)}\|\tilde{x}^0-x^*\|^2.$$

Alternatively, if $f(\tilde{x}^S)\ge f(x^S)$, then $\hat{x}^S=x^S$, and the above inequality still holds.
This completes the proof.
APPENDIX D: PROOF OF LEMMA 6

Proof. Since the average function $f(x)$ is L-smooth, then for all $x,y\in\mathbb{R}^d$,

$$f(y)\le f(x)+\langle\nabla f(x),\,y-x\rangle+\frac{L}{2}\|y-x\|^2,$$

which then implies

$$f(x^s_{k+1})\le f(x^s_k)+\langle\nabla f(x^s_k),\,x^s_{k+1}-x^s_k\rangle+\frac{L}{2}\|x^s_{k+1}-x^s_k\|^2.$$

Using the above inequality, we have

$$\begin{aligned}
F(x^s_{k+1})&=f(x^s_{k+1})+g(x^s_{k+1})\\
&\le f(x^s_k)+g(x^s_{k+1})+\langle\nabla f(x^s_k),\,x^s_{k+1}-x^s_k\rangle+\frac{L\beta}{2}\|x^s_{k+1}-x^s_k\|^2-\frac{L(\beta-1)}{2}\|x^s_{k+1}-x^s_k\|^2\\
&=f(x^s_k)+g(x^s_{k+1})+\langle v^s_k,\,x^s_{k+1}-x^s_k\rangle+\frac{L\beta}{2}\|x^s_{k+1}-x^s_k\|^2+\langle\nabla f(x^s_k)-v^s_k,\,x^s_{k+1}-x^s_k\rangle-\frac{L(\beta-1)}{2}\|x^s_{k+1}-x^s_k\|^2.
\end{aligned} \qquad (33)$$

According to Lemma 5, we then obtain

$$\begin{aligned}
&\mathbb{E}\Bigl[\langle\nabla f(x^s_k)-v^s_k,\,x^s_{k+1}-x^s_k\rangle-\frac{L(\beta-1)}{2}\|x^s_{k+1}-x^s_k\|^2\Bigr]\\
&\le\mathbb{E}\Bigl[\frac{1}{2L(\beta-1)}\|\nabla f(x^s_k)-v^s_k\|^2+\frac{L(\beta-1)}{2}\|x^s_{k+1}-x^s_k\|^2-\frac{L(\beta-1)}{2}\|x^s_{k+1}-x^s_k\|^2\Bigr]\\
&\le\frac{2}{\beta-1}\bigl[F(x^s_k)-F(x^*)+F(\tilde{x}^{s-1})-F(x^*)\bigr],
\end{aligned} \qquad (34)$$

where the first inequality holds due to Young's inequality, and the second inequality follows from Lemma 5. Substituting the inequality (34) into the inequality (33), and taking the expectation over the random choice of $i^s_k$, we arrive at

$$\begin{aligned}
\mathbb{E}[F(x^s_{k+1})]
&\le\mathbb{E}[f(x^s_k)]+\mathbb{E}[g(x^s_{k+1})]+\mathbb{E}\Bigl[\langle v^s_k,\,x^s_{k+1}-x^s_k\rangle+\frac{L\beta}{2}\|x^s_{k+1}-x^s_k\|^2\Bigr]+\frac{2}{\beta-1}\bigl[F(x^s_k)-F(x^*)+F(\tilde{x}^{s-1})-F(x^*)\bigr]\\
&\le\mathbb{E}[f(x^s_k)]+g(x^*)+\mathbb{E}\Bigl[\langle v^s_k,\,x^*-x^s_k\rangle+\frac{L\beta}{2}\bigl(\|x^*-x^s_k\|^2-\|x^*-x^s_{k+1}\|^2\bigr)\Bigr]+\frac{2}{\beta-1}\bigl[F(x^s_k)-F(x^*)+F(\tilde{x}^{s-1})-F(x^*)\bigr]\\
&\le f(x^*)+g(x^*)+\frac{L\beta}{2}\mathbb{E}\bigl[\|x^*-x^s_k\|^2-\|x^*-x^s_{k+1}\|^2\bigr]+\frac{2}{\beta-1}\bigl[F(x^s_k)-F(x^*)+F(\tilde{x}^{s-1})-F(x^*)\bigr]\\
&=F(x^*)+\frac{L\beta}{2}\mathbb{E}\bigl[\|x^*-x^s_k\|^2-\|x^*-x^s_{k+1}\|^2\bigr]+\frac{2}{\beta-1}\bigl[F(x^s_k)-F(x^*)+F(\tilde{x}^{s-1})-F(x^*)\bigr],
\end{aligned}$$

where the first inequality holds due to the inequalities (33) and (34); the second inequality follows from Lemma 1 with $\hat{z}=x^s_{k+1}$, $z=x^*$, $z_0=x^s_k$, $\tau=L\beta=1/\eta$, and $r(z):=\langle v^s_k,\,z-x^s_k\rangle+g(z)$; and the third inequality holds due to the fact that $\mathbb{E}[v^s_k]=\nabla f(x^s_k)$ and the convexity of $f(\cdot)$, i.e., $f(x^s_k)+\langle\nabla f(x^s_k),\,x^*-x^s_k\rangle\le f(x^*)$. Then the above inequality is rewritten as follows:

$$\mathbb{E}[F(x^s_{k+1})]-F(x^*)\le\frac{2}{\beta-1}\bigl[F(x^s_k)-F(x^*)+F(\tilde{x}^{s-1})-F(x^*)\bigr]+\frac{L\beta}{2}\mathbb{E}\bigl[\|x^*-x^s_k\|^2-\|x^*-x^s_{k+1}\|^2\bigr]. \qquad (35)$$

Summing the above inequality over $k=0,1,\ldots,m-1$ and taking expectation over the whole history, we have

$$\sum_{k=0}^{m-1}\bigl\{\mathbb{E}[F(x^s_{k+1})]-F(x^*)\bigr\}\le\frac{2}{\beta-1}\sum_{k=0}^{m-1}\bigl[F(x^s_k)-F(x^*)+F(\tilde{x}^{s-1})-F(x^*)\bigr]+\frac{L\beta}{2}\mathbb{E}\bigl[\|x^*-x^s_0\|^2-\|x^*-x^s_m\|^2\bigr].$$

Subtracting $\frac{2}{\beta-1}\sum_{k=0}^{m-2}\mathbb{E}[F(x^s_{k+1})-F(x^*)]$ from both sides of the above inequality, we obtain

$$\Bigl(1-\frac{2}{\beta-1}\Bigr)\sum_{k=1}^{m}\mathbb{E}\bigl[F(x^s_k)-F(x^*)\bigr]+\frac{2}{\beta-1}\mathbb{E}[F(x^s_m)-F(x^*)]\le\frac{2}{\beta-1}\mathbb{E}[F(x^s_0)-F(x^*)]+\frac{2m}{\beta-1}\mathbb{E}\bigl[F(\tilde{x}^{s-1})-F(x^*)\bigr]+\frac{L\beta}{2}\mathbb{E}\bigl[\|x^*-x^s_0\|^2-\|x^*-x^s_m\|^2\bigr].$$

Due to the settings of $\tilde{x}^s=\frac{1}{m}\sum_{k=1}^{m}x^s_k$ and $x^{s+1}_0=x^s_m$, and the convexity of the objective function $F(\cdot)$, we have $F(\tilde{x}^s)\le\frac{1}{m}\sum_{k=1}^{m}F(x^s_k)$, and

$$m\Bigl(1-\frac{2}{\beta-1}\Bigr)\mathbb{E}[F(\tilde{x}^s)-F(x^*)]+\frac{2}{\beta-1}\mathbb{E}[F(x^s_m)-F(x^*)]\le\frac{2}{\beta-1}\mathbb{E}[F(x^s_0)-F(x^*)]+\frac{2m}{\beta-1}\mathbb{E}\bigl[F(\tilde{x}^{s-1})-F(x^*)\bigr]+\frac{L\beta}{2}\mathbb{E}\bigl[\|x^*-x^s_0\|^2-\|x^*-x^s_m\|^2\bigr].$$

Dividing both sides of the above inequality by $m$, we arrive at

$$\Bigl(1-\frac{2}{\beta-1}\Bigr)\mathbb{E}[F(\tilde{x}^s)-F(x^*)]+\frac{2}{(\beta-1)m}\mathbb{E}[F(x^s_m)-F(x^*)]\le\frac{2}{(\beta-1)m}\mathbb{E}[F(x^s_0)-F(x^*)]+\frac{2}{\beta-1}\mathbb{E}\bigl[F(\tilde{x}^{s-1})-F(x^*)\bigr]+\frac{L\beta}{2m}\mathbb{E}\bigl[\|x^*-x^s_0\|^2-\|x^*-x^s_m\|^2\bigr].$$

This completes the proof.
APPENDIX E: PROOF OF THEOREM 3
Proof. Summing the inequality in (21) over $s=1,2,\ldots,S$, and taking expectation with respect to the history of $i^{s}_{k}$, we have
$$\sum_{s=1}^{S}\Big(1-\frac{2}{\beta-1}\Big)\mathbb{E}\big[F(\tilde{x}^{s})-F(x^{*})\big]+\sum_{s=1}^{S}\frac{2}{(\beta-1)m}\,\mathbb{E}\big[F(x^{s}_{m})-F(x^{*})\big]\le\sum_{s=1}^{S}\frac{2}{(\beta-1)m}\,\mathbb{E}\big[F(x^{s}_{0})-F(x^{*})\big]+\frac{2}{\beta-1}\sum_{s=1}^{S}\mathbb{E}\big[F(\tilde{x}^{s-1})-F(x^{*})\big]+\frac{L\beta}{2m}\sum_{s=1}^{S}\mathbb{E}\big[\|x^{*}-x^{s}_{0}\|^{2}-\|x^{*}-x^{s}_{m}\|^{2}\big].$$
Subtracting $\sum_{s=1}^{S}\frac{2}{(\beta-1)m}\mathbb{E}[F(x^{s}_{m})-F(x^{*})]+\frac{2}{\beta-1}\sum_{s=1}^{S}\mathbb{E}[F(\tilde{x}^{s})-F(x^{*})]$ from both sides of the above inequality, and using the setting of $x^{s+1}_{0}=x^{s}_{m}$, we arrive at
$$\Big(1-\frac{4}{\beta-1}\Big)\sum_{s=1}^{S}\mathbb{E}\big[F(\tilde{x}^{s})-F(x^{*})\big]\le\frac{2}{(\beta-1)m}\,\mathbb{E}\big[F(x^{1}_{0})-F(x^{S}_{m})\big]+\frac{2}{\beta-1}\,\mathbb{E}\big[F(\tilde{x}^{0})-F(\tilde{x}^{S})\big]+\frac{L\beta}{2m}\,\mathbb{E}\big[\|x^{*}-x^{1}_{0}\|^{2}-\|x^{*}-x^{S}_{m}\|^{2}\big].$$
It is easy to verify that $\mathbb{E}[F(\tilde{x}^{0})-F(\tilde{x}^{S})]\le F(\tilde{x}^{0})-F(x^{*})$. Dividing both sides of the above inequality by $S$, and using the choice $\tilde{x}^{0}=x^{1}_{0}$, we obtain
$$\Big(1-\frac{4}{\beta-1}\Big)\frac{1}{S}\sum_{s=1}^{S}\mathbb{E}\big[F(\tilde{x}^{s})-F(x^{*})\big]\le\frac{2}{(\beta-1)mS}\,\mathbb{E}\big[F(x^{1}_{0})-F(x^{S}_{m})\big]+\frac{2}{(\beta-1)S}\,\mathbb{E}\big[F(\tilde{x}^{0})-F(\tilde{x}^{S})\big]+\frac{L\beta}{2mS}\,\|x^{*}-x^{1}_{0}\|^{2}$$
$$\le\frac{2}{(\beta-1)mS}\big[F(\tilde{x}^{0})-F(x^{*})\big]+\frac{2}{(\beta-1)S}\big[F(\tilde{x}^{0})-F(x^{*})\big]+\frac{L\beta}{2mS}\,\|x^{*}-\tilde{x}^{0}\|^{2}$$
$$=\frac{2(m+1)}{(\beta-1)mS}\big[F(\tilde{x}^{0})-F(x^{*})\big]+\frac{L\beta}{2mS}\,\|\tilde{x}^{0}-x^{*}\|^{2},\qquad(36)$$
where the first inequality uses the fact that $\|x^{*}-x^{S}_{m}\|^{2}\ge 0$; and the last inequality holds due to the facts that $\mathbb{E}[F(x^{1}_{0})-F(x^{S}_{m})]\le F(x^{1}_{0})-F(x^{*})$, $\mathbb{E}[F(\tilde{x}^{0})-F(\tilde{x}^{S})]\le F(\tilde{x}^{0})-F(x^{*})$, and $\tilde{x}^{0}=x^{1}_{0}$.
Using the definition of $\bar{x}^{S}=\frac{1}{S}\sum_{s=1}^{S}\tilde{x}^{s}$ and the convexity of the objective function $F(\cdot)$, we have $F(\bar{x}^{S})\le\frac{1}{S}\sum_{s=1}^{S}F(\tilde{x}^{s})$, and therefore we can rewrite the above inequality in (36) as
$$\Big(1-\frac{4}{\beta-1}\Big)\mathbb{E}\big[F(\bar{x}^{S})-F(x^{*})\big]\le\Big(1-\frac{4}{\beta-1}\Big)\frac{1}{S}\sum_{s=1}^{S}\mathbb{E}\big[F(\tilde{x}^{s})-F(x^{*})\big]\le\frac{2(m+1)}{(\beta-1)mS}\big[F(\tilde{x}^{0})-F(x^{*})\big]+\frac{L\beta}{2mS}\,\|\tilde{x}^{0}-x^{*}\|^{2}.$$
Dividing both sides of the above inequality by $\big(1-\frac{4}{\beta-1}\big)>0$, we have
$$\mathbb{E}\big[F(\bar{x}^{S})-F(x^{*})\big]\le\frac{2(m+1)}{(\beta-5)mS}\big[F(\tilde{x}^{0})-F(x^{*})\big]+\frac{\beta(\beta-1)L}{2(\beta-5)mS}\,\|\tilde{x}^{0}-x^{*}\|^{2}.$$
When $F(\tilde{x}^{S})\le F(\bar{x}^{S})$, then $\widehat{x}^{S}=\tilde{x}^{S}$, and
$$\mathbb{E}\big[F(\widehat{x}^{S})-F(x^{*})\big]\le\frac{2(m+1)}{(\beta-5)mS}\big[F(\tilde{x}^{0})-F(x^{*})\big]+\frac{\beta(\beta-1)L}{2(\beta-5)mS}\,\|\tilde{x}^{0}-x^{*}\|^{2}.$$
Alternatively, if $F(\tilde{x}^{S})\ge F(\bar{x}^{S})$, then $\widehat{x}^{S}=\bar{x}^{S}$, and the above inequality still holds.
This completes the proof.
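Both terms on the right-hand side of the final bound above are proportional to $1/S$, which is the sublinear $O(1/S)$ rate claimed by the theorem. A tiny numerical sketch with hypothetical constants (none taken from the paper's experiments) makes this concrete:

```python
# Hypothetical constants (not from the paper's experiments); beta > 5 is
# needed for the divisor (beta - 5) to be positive.
L_smooth, beta, m = 1.0, 10.0, 50
delta_F, dist_sq = 1.0, 1.0   # stand-ins for F(x~0) - F(x*) and ||x~0 - x*||^2

def bound(S):
    # Right-hand side of the final bound: both terms scale as 1/S
    t1 = 2.0 * (m + 1) / ((beta - 5.0) * m * S) * delta_F
    t2 = beta * (beta - 1.0) * L_smooth / (2.0 * (beta - 5.0) * m * S) * dist_sq
    return t1 + t2

# Doubling the number of outer iterations S halves the guaranteed gap
assert abs(bound(200) - bound(100) / 2.0) < 1e-12
print(bound(100), bound(200))
```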
APPENDIX F: PROOF OF THEOREM 6
Proof. Since each $f_{i}(\cdot)$ is convex and $L$-smooth, then we have
$$\Big(1-\frac{2}{\beta-1}\Big)\mathbb{E}\big[F(\tilde{x}^{s})-F(x^{*})\big]+\frac{2}{m(\beta-1)}\,\mathbb{E}\big[F(x^{s+1}_{0})-F(x^{*})\big]+\frac{L\beta}{2m}\,\mathbb{E}\big[\|x^{s+1}_{0}-x^{*}\|^{2}\big]$$
$$\le\frac{2}{\beta-1}\,\mathbb{E}\big[F(\tilde{x}^{s-1})-F(x^{*})\big]+\frac{2}{m(\beta-1)}\,\mathbb{E}\big[F(x^{s}_{0})-F(x^{*})\big]+\frac{L\beta}{2m}\,\mathbb{E}\big[\|x^{s}_{0}-x^{*}\|^{2}\big]$$
$$\le\frac{2}{\beta-1}\,\mathbb{E}\big[F(\tilde{x}^{s-1})-F(x^{*})\big]+\Big(\frac{2}{m(\beta-1)}+\frac{L\beta}{m\mu}\Big)\mathbb{E}\big[F(x^{s}_{0})-F(x^{*})\big]$$
$$\le\Big(\frac{2(m+C)}{m(\beta-1)}+\frac{CL\beta}{m\mu}\Big)\mathbb{E}\big[F(\tilde{x}^{s-1})-F(x^{*})\big],$$
where the first inequality follows from Lemma 6; the second inequality holds due to the fact that $\|x^{s}_{0}-x^{*}\|^{2}\le(2/\mu)\,[F(x^{s}_{0})-F(x^{*})]$; and the last inequality follows from Assumption 3.
Since the last two terms on the left-hand side are nonnegative, and due to the definition of $\beta=1/(L\eta)$, the above inequality is rewritten as follows:
$$\frac{1-3L\eta}{1-L\eta}\,\mathbb{E}\big[F(\tilde{x}^{s})-F(x^{*})\big]\le\Big(\frac{2L\eta(m+C)}{m(1-L\eta)}+\frac{C}{m\mu\eta}\Big)\mathbb{E}\big[F(\tilde{x}^{s-1})-F(x^{*})\big].$$
Dividing both sides of the above inequality by $(1-3L\eta)/(1-L\eta)>0$, we arrive at
$$\mathbb{E}\big[F(\tilde{x}^{s})-F(x^{*})\big]\le\Big(\frac{2L\eta(m+C)}{m(1-3L\eta)}+\frac{C(1-L\eta)}{m\mu\eta(1-3L\eta)}\Big)\mathbb{E}\big[F(\tilde{x}^{s-1})-F(x^{*})\big].$$
This completes the proof.
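The last inequality above gives a per-stage contraction of $\mathbb{E}[F(\tilde{x}^{s})-F(x^{*})]$, i.e., linear convergence, whenever the combined factor is below one. A quick check with hypothetical constants (illustrative values, not taken from the paper):

```python
# Hypothetical problem constants (not from the paper): smoothness L,
# strong-convexity mu, learning rate eta, epoch length m, and the
# constant C of Assumption 3.
L_smooth, mu, eta, m, C = 1.0, 0.1, 0.1, 1000, 1.0

# rho is the factor multiplying E[F(x~^{s-1}) - F(x*)] in the last inequality
rho = (2.0 * L_smooth * eta * (m + C) / (m * (1.0 - 3.0 * L_smooth * eta))
       + C * (1.0 - L_smooth * eta) / (m * mu * eta * (1.0 - 3.0 * L_smooth * eta)))

# Linear (geometric) convergence requires rho < 1 for the chosen parameters
assert 0.0 < rho < 1.0
print(rho)
```

Note that a large enough epoch length m is needed here: with these constants the second term of rho scales like C/(m·mu·eta), so too small an m pushes rho above one.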
REFERENCES
[1] R. Johnson and T. Zhang, “Accelerating stochastic gradient descent using predictive variance reduction,” in Proc. Adv. Neural Inf. Process. Syst., 2013, pp. 315–323.
[2] L. Xiao and T. Zhang, “A proximal stochastic gradient method with progressive variance reduction,” SIAM J. Optim., vol. 24, no. 4, pp. 2057–2075, 2014.
[3] Z. Allen-Zhu, “Katyusha: The first direct acceleration of stochastic gradient methods,” in Proc. 49th ACM Symp. Theory Comput., 2017.
[4] A. Krizhevsky, I. Sutskever, and G. E. Hinton, “ImageNet classification with deep convolutional neural networks,” in Proc. Adv. Neural Inf. Process. Syst., 2012, pp. 1097–1105.
[5] I. Sutskever, J. Martens, G. Dahl, and G. Hinton, “On the importance of initialization and momentum in deep learning,” in Proc. 30th Int. Conf. Mach. Learn., 2013, pp. 1139–1147.
[6] Z. Allen-Zhu and E. Hazan, “Variance reduction for faster non-convex optimization,” in Proc. 33rd Int. Conf. Mach. Learn., 2016, pp. 699–707.
[7] H. Ouyang, N. He, L. Q. Tran, and A. Gray, “Stochastic alternating direction method of multipliers,” in Proc. 30th Int. Conf. Mach. Learn., 2013, pp. 80–88.
[8] Y. Liu, F. Shang, and J. Cheng, “Accelerated variance reduced stochastic ADMM,” in Proc. 31st AAAI Conf. Artif. Intell., 2017, pp. 2287–2293.
[9] C. Qu, Y. Li, and H. Xu, “Linear convergence of SVRG in statistical estimation,” arXiv:1611.01957v2, 2017.
[10] C. Paquette, H. Lin, D. Drusvyatskiy, J. Mairal, and Z. Harchaoui, “Catalyst acceleration for gradient-based non-convex optimization,”
arXiv:1703.10993, 2017.
[11] J. Duchi and F. Ruan, “Stochastic methods for composite optimization problems,” arXiv:1703.08570, 2017.
[12] B. Recht and C. Ré, “Parallel stochastic gradient algorithms for large-scale matrix completion,” Math. Prog. Comp., vol. 5, pp. 201–226, 2013.
[13] X. Zhang, L. Wang, and Q. Gu, “Stochastic variance-reduced gradient descent for low-rank matrix recovery from linear measurements,”
arXiv:1701.00481v2, 2017.
[14] M. Schmidt, R. Babanezhad, M. Ahmed, A. Defazio, A. Clifton, and A. Sarkar, “Non-uniform stochastic average gradient method for training
conditional random fields,” in Proc. 18th Int. Conf. Artif. Intell. Statist., 2015, pp. 819–828.
[15] O. Shamir, “A stochastic PCA and SVD algorithm with an exponential convergence rate,” in Proc. 32nd Int. Conf. Mach. Learn., 2015, pp.
144–152.
[16] D. Garber, E. Hazan, C. Jin, S. M. Kakade, C. Musco, P. Netrapalli, and A. Sidford, “Faster eigenvector computation via shift-and-invert
preconditioning,” in Proc. 33rd Int. Conf. Mach. Learn., 2016, pp. 2626–2634.
[17] Z. Allen-Zhu and Y. Li, “Doubly accelerated methods for faster CCA and generalized eigendecomposition,” arXiv:1607.06017v2, 2017.
[18] Y. Nesterov, “A method of solving a convex programming problem with convergence rate O(1/k2 ),” Soviet Math. Doklady, vol. 27, pp.
372–376, 1983.
[19] ——, Introductory Lectures on Convex Optimization: A Basic Course. Boston: Kluwer Academic Publ., 2004.
[20] P. Tseng, “On accelerated proximal gradient methods for convex-concave optimization,” Technical report, University of Washington, 2008.
[21] A. Beck and M. Teboulle, “A fast iterative shrinkage-thresholding algorithm for linear inverse problems,” SIAM J. Imaging Sci., vol. 2, no. 1,
pp. 183–202, 2009.
[22] H. Robbins and S. Monro, “A stochastic approximation method,” Ann. Math. Statist., vol. 22, no. 3, pp. 400–407, 1951.
[23] T. Zhang, “Solving large scale linear prediction problems using stochastic gradient descent algorithms,” in Proc. 21st Int. Conf. Mach. Learn.,
2004, pp. 919–926.
[24] C. Hu, J. T. Kwok, and W. Pan, “Accelerated gradient methods for stochastic optimization and online learning,” in Proc. Adv. Neural Inf.
Process. Syst., 2009, pp. 781–789.
[25] S. Bubeck, “Convex optimization: Algorithms and complexity,” Found. Trends Mach. Learn., vol. 8, pp. 231–358, 2015.
[26] A. Rakhlin, O. Shamir, and K. Sridharan, “Making gradient descent optimal for strongly convex stochastic optimization,” in Proc. 29th Int.
Conf. Mach. Learn., 2012, pp. 449–456.
[27] O. Shamir and T. Zhang, “Stochastic gradient descent for non-smooth optimization: Convergence results and optimal averaging schemes,”
in Proc. 30th Int. Conf. Mach. Learn., 2013, pp. 71–79.
[28] N. L. Roux, M. Schmidt, and F. Bach, “A stochastic gradient method with an exponential convergence rate for finite training sets,” in Proc.
Adv. Neural Inf. Process. Syst., 2012, pp. 2672–2680.
[29] S. Shalev-Shwartz and T. Zhang, “Stochastic dual coordinate ascent methods for regularized loss minimization,” J. Mach. Learn. Res., vol. 14,
pp. 567–599, 2013.
[30] A. Defazio, F. Bach, and S. Lacoste-Julien, “SAGA: A fast incremental gradient method with support for non-strongly convex composite
objectives,” in Proc. Adv. Neural Inf. Process. Syst., 2014, pp. 1646–1654.
[31] Y. Zhang and L. Xiao, “Stochastic primal-dual coordinate method for regularized empirical risk minimization,” in Proc. 32nd Int. Conf. Mach.
Learn., 2015, pp. 353–361.
[32] M. Schmidt, N. L. Roux, and F. Bach, “Minimizing finite sums with the stochastic average gradient,” INRIA, Paris, Tech. Rep., 2013.
[33] S. Shalev-Shwartz and T. Zhang, “Accelerated proximal stochastic dual coordinate ascent for regularized loss minimization,” Math. Program.,
vol. 155, pp. 105–145, 2016.
[34] F. Shang, Y. Liu, J. Cheng, and J. Zhuo, “Fast stochastic variance reduced gradient method with momentum acceleration for machine
learning,” arXiv:1703.07948, 2017.
[35] L. Zhang, M. Mahdavi, and R. Jin, “Linear convergence with condition number independent access of full gradients,” in Proc. Adv. Neural
Inf. Process. Syst., 2013, pp. 980–988.
[36] J. Konečný, J. Liu, P. Richtárik, and M. Takáč, “Mini-batch semi-stochastic gradient descent in the proximal setting,” IEEE J. Sel. Top. Sign. Proces., vol. 10, no. 2, pp. 242–255, 2016.
[37] A. Nitanda, “Stochastic proximal gradient descent with acceleration techniques,” in Proc. Adv. Neural Inf. Process. Syst., 2014, pp. 1574–1582.
[38] G. Lan and Y. Zhou, “An optimal randomized incremental gradient method,” arXiv:1507.02000v3, 2015.
[39] R. Frostig, R. Ge, S. M. Kakade, and A. Sidford, “Un-regularizing: approximate proximal point and faster stochastic algorithms for empirical
risk minimization,” in Proc. 32nd Int. Conf. Mach. Learn., 2015, pp. 2540–2548.
[40] H. Lin, J. Mairal, and Z. Harchaoui, “A universal catalyst for first-order optimization,” in Proc. Adv. Neural Inf. Process. Syst., 2015, pp.
3366–3374.
[41] R. Babanezhad, M. O. Ahmed, A. Virani, M. Schmidt, J. Konecny, and S. Sallinen, “Stop wasting my gradients: Practical SVRG,” in Proc. Adv.
Neural Inf. Process. Syst., 2015, pp. 2242–2250.
[42] Z. Allen-Zhu and Y. Yuan, “Improved SVRG for non-strongly-convex or sum-of-non-convex objectives,” in Proc. 33rd Int. Conf. Mach. Learn.,
2016, pp. 1080–1089.
[43] M. Frank and P. Wolfe, “An algorithm for quadratic programming,” Naval Res. Logist. Quart., vol. 3, pp. 95–110, 1956.
[44] E. Hazan and H. Luo, “Variance-reduced and projection-free stochastic optimization,” in Proc. 33rd Int. Conf. Mach. Learn., 2016, pp. 1263–1271.
[45] F. Shang, Y. Liu, J. Cheng, K. W. Ng, and Y. Yoshida, “Variance reduced stochastic gradient descent with sufficient decrease,” arXiv:1703.06807,
2017.
[46] L. Hien, C. Lu, H. Xu, and J. Feng, “Accelerated stochastic mirror descent algorithms for composite non-strongly convex optimization,”
arXiv:1605.06892v2, 2016.
[47] B. Woodworth and N. Srebro, “Tight complexity bounds for optimizing composite objectives,” in Proc. Adv. Neural Inf. Process. Syst., 2016,
pp. 3639–3647.
[48] A. Defazio, “A simple practical accelerated method for finite sums,” in Proc. Adv. Neural Inf. Process. Syst., 2016, pp. 676–684.
[49] X. Li, T. Zhao, R. Arora, H. Liu, and J. Haupt, “Nonconvex sparse learning via stochastic optimization with progressive variance reduction,”
in Proc. 33rd Int. Conf. Mach. Learn., 2016, pp. 917–925.
[50] L. Nguyen, J. Liu, K. Scheinberg, and M. Takáč, “SARAH: A novel method for machine learning problems using stochastic recursive
gradient,” arXiv:1703.00102v1, 2017.
[51] L. W. Zhong and J. T. Kwok, “Fast stochastic alternating direction method of multipliers,” in Proc. 31st Int. Conf. Mach. Learn., 2014, pp. 46–54.
[52] S. Zheng and J. T. Kwok, “Fast-and-light stochastic ADMM,” in Proc. 25th Int. Joint Conf. Artif. Intell., 2016, pp. 2407–2613.
[53] S. J. Reddi, S. Sra, B. Poczos, and A. Smola, “Proximal stochastic methods for nonsmooth nonconvex finite-sum optimization,” in Proc. Adv.
Neural Inf. Process. Syst., 2016, pp. 1145–1153.
[54] S. Reddi, A. Hefny, S. Sra, B. Poczos, and A. Smola, “On variance reduction in stochastic gradient descent and its asynchronous variants,” in
Proc. Adv. Neural Inf. Process. Syst., 2015, pp. 2629–2637.
[55] J. D. Lee, Q. Lin, T. Ma, and T. Yang, “Distributed stochastic variance reduced gradient methods and a lower bound for communication
complexity,” arXiv:1507.07595v2, 2016.
[56] Y. Bengio, “Learning deep architectures for AI,” Found. Trends Mach. Learn., vol. 2, no. 1, pp. 1–127, 2009.
[57] R. Ge, F. Huang, C. Jin, and Y. Yuan, “Escaping from saddle points online stochastic gradient for tensor decomposition,” in Proc. 28th Conf.
Learn. Theory, 2015, pp. 797–842.
[58] A. Neelakantan, L. Vilnis, Q. V. Le, I. Sutskever, L. Kaiser, K. Kurach, and J. Martens, “Adding gradient noise improves learning for very
deep networks,” arXiv:1511.06807, 2015.
[59] N. Flammarion and F. Bach, “From averaging to acceleration, there is only a step-size,” in Proc. 28th Conf. Learn. Theory, 2015, pp. 658–695.
[60] C. Tan, S. Ma, Y. Dai, and Y. Qian, “Barzilai-Borwein step size for stochastic gradient descent,” in Proc. Adv. Neural Inf. Process. Syst., 2016,
pp. 685–693.
[61] Z. Allen-Zhu and E. Hazan, “Optimal black-box reductions between optimization objectives,” in Proc. Adv. Neural Inf. Process. Syst., 2016, pp.
1606–1614.
[62] B. Carpenter, “Lazy sparse stochastic gradient descent for regularized multinomial logistic regression,” Tech. Rep., 2008.
[63] J. Langford, L. Li, and T. Zhang, “Sparse online learning via truncated gradient,” J. Mach. Learn. Res., vol. 10, pp. 777–801, 2009.
[64] Y. Nesterov, “Smooth minimization of non-smooth functions,” Math. Program., vol. 103, pp. 127–152, 2005.
[65] Y. Xu, Y. Yan, Q. Lin, and T. Yang, “Homotopy smoothing for non-smooth problems with lower complexity than O(1/ε),” in Proc. Adv. Neural Inf. Process. Syst., 2016, pp. 1208–1216.
[66] P. Zhao and T. Zhang, “Stochastic optimization with importance sampling for regularized loss minimization,” in Proc. 32nd Int. Conf. Mach.
Learn., 2015, pp. 1–9.
[67] D. Needell, N. Srebro, and R. Ward, “Stochastic gradient descent, weighted sampling, and the randomized Kaczmarz algorithm,” Math.
Program., vol. 155, pp. 549–573, 2016.
[68] J. Konečný and P. Richtárik, “Semi-stochastic gradient descent methods,” ArXiv Preprint: 1312.1666v2, 2015.
[69] G. Lan, “An optimal method for stochastic composite optimization,” Math. Program., vol. 133, pp. 365–397, 2012.
[Fig. 5 plots. Panels: (a) Adult: λ1 = 10−4; (b) Protein: λ1 = 10−4; (c) Covtype: λ1 = 10−4; (d) Sido0: λ1 = 5×10−3; (e) Adult: λ1 = 10−5; (f) Protein: λ1 = 10−5; (g) Covtype: λ1 = 10−5; (h) Sido0: λ1 = 10−4; (i) Adult: λ1 = 10−6; (j) Protein: λ1 = 10−6; (k) Covtype: λ1 = 10−6; (l) Sido0: λ1 = 10−5. Curves: SVRG, Prox-SVRG, Katyusha, VR-SGD. Axes: F(x^s) − F(x*) vs. gradient evaluations / n (top) and running time in seconds (bottom).]
Fig. 5. Comparison of SVRG [1], Prox-SVRG [2], Katyusha [3], and VR-SGD for solving ℓ2-norm regularized logistic regression problems (i.e., λ2 = 0). In each plot, the vertical axis shows the objective value minus the minimum, and the horizontal axis is the number of effective passes (top) or running time (bottom).
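For context, the SVRG baseline [1] in these plots corrects each stochastic gradient with a full gradient computed at a per-epoch snapshot. A compact sketch for an ℓ2-regularized logistic regression objective follows; the synthetic data, step size, epoch length, and last-iterate snapshot rule are illustrative choices, not the paper's exact experimental setup:

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, lam = 200, 10, 1e-4          # illustrative problem sizes
A = rng.normal(size=(n, d))
b = np.where(A @ rng.normal(size=d) + 0.1 * rng.normal(size=n) > 0, 1.0, -1.0)

def full_obj(x):
    # F(x) = (1/n) sum_i log(1 + exp(-b_i a_i^T x)) + (lam/2) ||x||^2
    return float(np.mean(np.log1p(np.exp(-b * (A @ x)))) + 0.5 * lam * x @ x)

def grad_i(x, i):
    # Gradient of one component f_i (regularizer folded into every f_i)
    z = -b[i] * (A[i] @ x)
    return -b[i] * A[i] / (1.0 + np.exp(-z)) + lam * x

eta, m = 0.05, 500                  # illustrative step size and epoch length
x_tilde = np.zeros(d)
for s in range(10):                 # outer epochs
    mu_full = np.mean([grad_i(x_tilde, i) for i in range(n)], axis=0)
    x = x_tilde.copy()
    for _ in range(m):              # inner loop: variance-reduced updates
        i = rng.integers(n)
        v = grad_i(x, i) - grad_i(x_tilde, i) + mu_full
        x = x - eta * v
    x_tilde = x                     # take the last iterate as the new snapshot

assert full_obj(x_tilde) < full_obj(np.zeros(d))
print(full_obj(x_tilde))
```

The corrected direction v is an unbiased estimate of the full gradient whose variance shrinks as both x and the snapshot approach the optimum, which is what produces the straight (linear-rate) curves on these semi-log plots.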
[Fig. 6 plots. Panels (settings × four data-set columns): (a) λ2 = 10−4; (b) λ2 = 10−3; (c) λ2 = 10−5; (d) λ2 = 10−4; (e) λ2 = 10−6; (f) λ2 = 10−5. Curves: SVRG, Prox-SVRG, Katyusha, VR-SGD. Axes: F(x^s) − F(x*) vs. gradient evaluations / n (top) and running time in seconds (bottom).]
Fig. 6. Comparison of SVRG, Prox-SVRG [2], Katyusha [3], and VR-SGD for ℓ1-norm regularized logistic regression problems (i.e., λ1 = 0) on the four data sets: Adult (the first column), Protein (the second column), Covtype (the third column), and Sido0 (the last column). In each plot, the vertical axis shows the objective value minus the minimum, and the horizontal axis is the number of effective passes (top) or running time (bottom).
[Fig. 7 plots. Panels: (a) λ1 = 10−5 and λ2 = 10−4; (b) λ1 = 10−4 and λ2 = 10−5; (c) λ1 = 10−5 and λ2 = 10−5; (d) λ1 = 10−5 and λ2 = 10−4; (e) λ1 = 10−6 and λ2 = 10−5; (f) λ1 = 10−5 and λ2 = 10−5. Curves: SVRG, Prox-SVRG, Katyusha, VR-SGD. Axes: F(x^s) − F(x*) vs. gradient evaluations / n (top) and running time in seconds (bottom).]
Fig. 7. Comparison of SVRG, Prox-SVRG [2], Katyusha [3], and VR-SGD for solving elastic net regularized logistic regression problems on the four data sets: Adult (the first column), Protein (the second column), Covtype (the third column), and Sido0 (the last column). In each plot, the vertical axis shows the objective value minus the minimum, and the horizontal axis is the number of effective passes (top) or running time (bottom).
[Fig. 8 plots. Panels: (a) λ = 10−3; (b) λ = 10−4; (c) λ = 10−5; (d) λ = 10−6; (e) λ = 10−7; (f) λ = 0. Curves: SVRG-I, SVRG-II, Katyusha-I, Katyusha-II (where applicable), VR-SGD-I, VR-SGD-II. Axes: F(x^s) − F(x*) vs. gradient evaluations / n (top) and running time in seconds (bottom).]
Fig. 8. Comparison of SVRG [1], Katyusha [3], VR-SGD and their proximal versions for solving ridge regression problems with different regularization
parameters on the Adult data set. In each plot, the vertical axis shows the objective value minus the minimum, and the horizontal axis is the number
of effective passes (top) or running time (bottom).
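For ridge regression problems of this kind, the reference optimum F(x*) needed for the vertical axis of these plots can be computed in closed form from the normal equations. A small sketch with synthetic data (sizes and λ are illustrative, not the experimental settings used here):

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, lam = 100, 5, 1e-3            # illustrative sizes and regularization
A = rng.normal(size=(n, d))
b = rng.normal(size=n)

# Minimizer of F(x) = (1/2n)||Ax - b||^2 + (lam/2)||x||^2 via normal equations
x_star = np.linalg.solve(A.T @ A / n + lam * np.eye(d), A.T @ b / n)

# First-order optimality: the full gradient vanishes at x_star
grad = A.T @ (A @ x_star - b) / n + lam * x_star
assert np.linalg.norm(grad) < 1e-8
print(x_star)
```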
[Fig. 9 plots. Panels: (a) λ = 10−3; (b) λ = 10−4; (c) λ = 10−5; (d) λ = 10−6; (e) λ = 10−7; (f) λ = 0. Curves: SVRG-I, SVRG-II, Katyusha-I, Katyusha-II (where applicable), VR-SGD-I, VR-SGD-II. Axes: F(x^s) − F(x*) vs. gradient evaluations / n (top) and running time in seconds (bottom).]
Fig. 9. Comparison of SVRG [1], Katyusha [3], VR-SGD and their proximal versions for solving ridge regression problems with different regularization
parameters on the Covtype data set. In each plot, the vertical axis shows the objective value minus the minimum, and the horizontal axis is the
number of effective passes (top) or running time (bottom).
[Fig. 10 plots. Panels: (a) ℓ1-norm regularized logistic regression: Adult (left) and Covtype (right); (b) Lasso: Adult (left) and Covtype (right). Curves: fixed learning rate vs. varying learning rate. Axes: F(x^s) − F(x*) vs. gradient evaluations / n.]
Fig. 10. Comparison of Algorithm 3 with fixed and varying learning rates for solving ℓ1-norm (i.e., λ‖x‖1) regularized logistic regression and Lasso problems with λ = 10−4 (blue lines) and λ = 10−5 (red lines, best viewed in colors). Note that the regularization parameter is set to 10−3 and 10−4 for solving Lasso problems on the Covtype data set.
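The ℓ1-regularized problems of Fig. 10 are typically handled with a proximal step whose closed form is elementwise soft-thresholding, prox_{τ‖·‖1}(z) = sign(z)·max(|z| − τ, 0). A minimal sketch (values are illustrative):

```python
import numpy as np

def soft_threshold(z, tau):
    # Proximal operator of tau * ||.||_1, applied elementwise
    return np.sign(z) * np.maximum(np.abs(z) - tau, 0.0)

z = np.array([0.3, -0.05, 1.2, -0.8, 0.0])
eta, lam = 0.1, 1.0          # step size and regularization weight
x = soft_threshold(z, eta * lam)
print(x)   # entries below the threshold are zeroed; the rest shrink by tau
```

With a fixed learning rate the threshold eta * lam stays constant across inner iterations, while a varying learning rate changes it step by step, which is exactly the difference the two curve families in Fig. 10 compare.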
Fast and Flexible Successive-Cancellation List
Decoders for Polar Codes
arXiv:1703.08208v2 [] 29 Aug 2017
Seyyed Ali Hashemi, Student Member, IEEE, Carlo Condo, Warren J. Gross, Senior Member, IEEE
Abstract—Polar codes have gained a significant amount of attention during the past few years and have been selected as a coding scheme for the next generation of mobile broadband standard. Among decoding schemes, successive-cancellation list (SCL) decoding provides a reasonable trade-off between the error-correction performance and hardware implementation complexity when used to decode polar codes, at the cost of limited throughput. The simplified SCL (SSCL) and its extension SSCL-SPC increase the speed of decoding by removing redundant calculations when encountering particular information and frozen bit patterns (rate one and single parity-check codes), while keeping the error-correction performance unaltered. In this paper, we improve SSCL and SSCL-SPC by proving that the list size imposes a specific number of path splits required to decode rate one and single parity-check codes. Thus, the number of splits can be limited while guaranteeing exactly the same error-correction performance as if the paths were forked at each bit estimation. We call the new decoding algorithms Fast-SSCL and Fast-SSCL-SPC. Moreover, we show that the number of path forks in a practical application can be tuned to achieve desirable speed, while keeping the error-correction performance almost unchanged. Hardware architectures implementing both algorithms are then described and implemented: it is shown that our design can achieve 1.86 Gb/s throughput, higher than the best state-of-the-art decoders.
Index Terms—polar codes, successive-cancellation decoding,
list decoding, hardware implementation.
I. I NTRODUCTION
Polar codes are the first family of error-correcting
codes with provable capacity-achieving property and a
low-complexity encoding and decoding process [2]. The
successive-cancellation (SC) decoding is a low-complexity
algorithm with which polar codes can achieve the capacity of a
memoryless channel. However, there are two main drawbacks
associated with SC. First, SC requires the decoding process
to advance bit by bit. This results in high latency and low
throughput when implemented in hardware [3]. Second, polar
codes decoded with SC only achieve the channel capacity
when the code length tends toward infinity. For practical
polar codes of moderate length, SC falls short of providing
a reasonable error-correction performance.
The first issue is a result of the serial nature of SC. In
order to address this issue, the recursive structure of polar
code construction and the location of information and parity
(frozen) bits were utilized in [4], [5] to identify constituent polar codes.
This work has been published in part in the IEEE Wireless Communications and Networking Conference Workshops (WCNCW), 2017 [1]. S. A. Hashemi, C. Condo, and W. J. Gross are with the Department of Electrical and Computer Engineering, McGill University, Montréal, Québec, Canada. e-mail: [email protected], [email protected], [email protected].
In particular, rate zero (Rate-0) codes with all
frozen bits, rate one (Rate-1) codes with all information bits,
repetition (Rep) codes with a single information bit in the most
reliable position, and single parity-check (SPC) codes with a
single frozen bit in the least reliable position, were shown to
be capable of being decoded in parallel with low-complexity
decoding algorithms. This in turn increased the throughput and
reduced the latency significantly. Moreover, the simplifications
in [4], [5] did not introduce any error-correction performance
degradation with respect to conventional SC.
The second issue stems from the fact that SC is suboptimal
with respect to maximum-likelihood (ML) decoding. The
decoding of each bit is only dependent on the bits already
decoded. SC is unable to use the information about the bits
that are not decoded yet. In order to address this issue,
SC list (SCL) decoding advances by estimating each bit as
either 0 or 1. Therefore, the number of candidate codewords
doubles at each bit estimation step. In order to limit the
exponential increase in the number of candidates, only L
candidate codewords are allowed to survive by employing a
path metric (PM) [6]. The PMs were sorted and the L best
candidates were kept for further processing. It should be noted
that SCL was previously used to decode Reed-Muller codes
[7]. SCL reduces the gap between SC and ML and it was
shown that when a cyclic redundancy check (CRC) code is
concatenated with polar codes, SCL can make polar codes
outperform the state-of-the-art codes to the extent that polar
codes have been chosen to be adopted in the next generation
of mobile broadband standard [8].
The good error-correction performance of SCL comes at
the cost of higher latency, lower throughput, and higher area
occupation than SC when implemented on hardware [9]. It
was identified in [10] that using the log-likelihood ratio (LLR)
values results in a SCL decoder which is more area-efficient
than the conventional SCL decoder with log-likelihood (LL)
values. In order to reduce the latency and increase the throughput associated with SCL, several attempts have been made to
reduce the number of required decoding time steps as defined
in [2]. It should be noted that different time steps might entail
different operations (e.g. a bit estimation or an LLR value
update), and might thus last a different number of clock cycles.
A group of M bits were allowed to be decoded together in [11],
[12]. [13] proposed a high throughput architecture based on
a tree-pruning scheme and further extended it to a multimode
decoder in [14]. The throughput increase in [13] is based
on code-based parameters which could degrade the error-correction performance significantly. Based on the idea in [5],
a fast list decoder architecture for software implementation
was proposed in [15] which was able to decode constituent
codes in a polar code in parallel. This resulted in fewer time
steps to finish the decoding process. However, the SCL
decoder in [15] is based on an empirical approach to decode
constituent Rate-1 and SPC codes and cannot guarantee the
same error-correction performance as the conventional SCL
decoder. Moreover, all the decoders in [13]–[15] require a large
sorter to select the surviving candidate codewords. Since the
sorter in the hardware implementation of SCL decoders has
a long and dominant critical path which is dependent on the
number of its inputs [10], increasing the number of PMs results
in a longer critical path and a lower operating frequency.
Based on the idea of list sphere decoding in [16], a
simplified SCL (SSCL) was proposed in [17] which identified
and avoided the redundant calculations in SCL. Therefore, it
required fewer time steps than SCL to decode
a polar code. The advantage of SSCL is that it not only
guarantees the error-correction performance preservation, but
also it uses the same sorter as in the conventional SCL
algorithm. To further increase the throughput and reduce the
latency of SSCL, the matrix reordering idea in [18] was used
to develop the SSCL-SPC decoder in [19]. While SSCL-SPC
uses the same sorter as in the conventional SCL, it provides
an exact reformulation for L = 2 and its approximations bring
negligible error-correction performance loss with respect to
SSCL.
While SSCL and SSCL-SPC are algorithms that can work
with any list size, they fail to address the redundant path
splitting associated with a specific list size. In this paper, we
first prove that there is a specific number of path splitting
required for decoding the constituent codes in SSCL and
SSCL-SPC for every list size to guarantee the error-correction
performance preservation. Any path splitting after that number
is redundant and any path splitting before that number cannot provably preserve the error-correction performance. Since
these decoders require fewer time steps than SSCL
and SSCL-SPC, we name them Fast-SSCL and Fast-SSCL-SPC, respectively. We further show that in practical polar
codes, we can achieve similar error-correction performance to
SSCL and SSCL-SPC with even fewer path forks.
Therefore, we can optimize Fast-SSCL and Fast-SSCL-SPC
for speed. We propose hardware architectures to implement
both new algorithms: implementation results yield the highest
throughput in the state-of-the-art with comparable area occupation.
This paper is an extension to our work in [1] in which
the Fast-SSCL algorithm was proposed. Here, we propose the
Fast-SSCL-SPC algorithm and prove that its error-correction
performance is identical to that of SSCL-SPC. We further
propose speed-up techniques for Fast-SSCL and Fast-SSCL-SPC which incur almost no error-correction performance loss.
Finally, we propose hardware architectures implementing the
aforementioned algorithms and show the effectiveness of the
proposed techniques by comparing our designs with the state
of the art.
The remainder of this paper is organized as follows: Section II provides a background on polar codes and their decoding
algorithms. Section III introduces the proposed Fast-SSCL
Fig. 1: SC decoding on a binary tree for P(8, 4) and {u0, u1, u2, u4} ∈ F.
and Fast-SSCL-SPC algorithms and their speed optimization
technique. A decoder architecture is proposed in Section IV
and the implementation results are provided in Section V.
Finally, Section VI draws the main conclusions of the paper.
II. P RELIMINARIES
A. Polar Codes
A polar code of length N with K information bits is
represented by P(N, K) and can be constructed recursively
with two polar codes of length N/2. The encoding process
can be denoted as a matrix multiplication as x = uG N ,
where u = {u0, u1, . . . , u N −1 } is the sequence of input bits,
x = {x0, x1, . . . , x N −1 } is the sequence of coded bits, and
G N = B N G ⊗n is the generator matrix created by the product
of B N which is the bit-reversal permutation matrix, and G ⊗n
which is the n-th Kronecker product of the polarizing matrix
G = [ 1 0 ; 1 1 ].
The encoding process involves the determination of the K
bit-channels with the best channel characteristics and assigning
the information bits to them. The remaining N −K bit-channels
are set to a value known at the decoder side. They are
thus called frozen bits, and their indices form the set F. Since the value of these bits
does not have an impact on the error-correction performance
of polar codes on a symmetric channel, they are usually set
to 0. The codeword x is then modulated and sent through the
channel. In this paper, we consider binary phase-shift keying
(BPSK) modulation which maps {0, 1} to {+1, −1}.
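The encoding x = uG_N described above can be sketched in a few lines of Python (an illustrative sketch using the butterfly form of the transform; the bit-reversal permutation B_N is omitted for brevity, and the function name is ours, not the paper's):

```python
def polar_encode(u):
    """Multiply the bit vector u (length N = 2^n) by G^{(x)n}, the n-th
    Kronecker power of G = [[1, 0], [1, 1]], using the in-place butterfly
    form of the transform. Bit-reversal permutation is omitted."""
    x = list(u)
    n = len(x)
    step = 1
    while step < n:
        for i in range(0, n, 2 * step):
            for j in range(i, i + step):
                x[j] ^= x[j + step]  # mod-2 sum on the upper branch
        step *= 2
    return x

# P(8, 4) with frozen set F = {0, 1, 2, 4}, frozen bits set to 0:
u = [0, 0, 0, 1, 0, 1, 0, 1]
x = polar_encode(u)
```

For N = 2 this reduces to x = (u0 XOR u1, u1), i.e. one row-multiplication by G.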
B. Successive-Cancellation Decoding
The SC decoding process can be represented as a binary tree
search as shown in Fig. 1 for P(8, 4). Thanks to the recursive
construction of polar codes, at each stage s of the tree, each
node can be interpreted as a polar code of length Ns = 2^s.
Two kinds of messages are passed between the nodes, namely,
soft LLR values α = {α0, α1, . . . , α Ns −1 } which are passed
from parent to child nodes, and the hard bit estimates β =
{β0, β1, . . . , β Ns −1 } which are passed from child nodes to the
parent node.
The Ns/2 elements of the left child node αl = {αl_0, αl_1, . . . , αl_{Ns/2−1}} and of the right child node αr = {αr_0, αr_1, . . . , αr_{Ns/2−1}} can be computed as [2]

αl_i = 2 arctanh( tanh(αi/2) tanh(α_{i+Ns/2}/2) ),    (1)
αr_i = α_{i+Ns/2} + (1 − 2 βl_i) αi,    (2)
D. Simplified Successive-Cancellation List Decoding
whereas the Ns values of β are calculated by means of the left and right child node messages βl = {βl_0, βl_1, . . . , βl_{Ns/2−1}} and βr = {βr_0, βr_1, . . . , βr_{Ns/2−1}} as [2]

βi = βl_i ⊕ βr_i, if i < Ns/2,
βi = βr_{i−Ns/2}, otherwise,    (3)

where ⊕ is the bitwise XOR operation. At leaf nodes, the i-th bit ûi can be estimated as

ûi = 0, if i ∈ F or αi ≥ 0,
ûi = 1, otherwise.    (4)

Equation (1) can be reformulated in a more hardware-friendly (HWF) version that was first proposed in [3]:

αl_i = sgn(αi) sgn(α_{i+Ns/2}) min(|αi|, |α_{i+Ns/2}|).    (5)
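The updates (1)–(5) can be sketched as follows (illustrative Python operating on scalar LLRs and short lists; the function names are ours):

```python
def f_hwf(a, b):
    """Hardware-friendly left-branch update, Eq. (5): min-sum rule."""
    sign = 1.0 if (a >= 0) == (b >= 0) else -1.0
    return sign * min(abs(a), abs(b))

def g(a, b, beta_l):
    """Right-branch update, Eq. (2): alpha_r = b + (1 - 2*beta_l) * a,
    where a and b are the two parent LLRs and beta_l is the left
    partial sum."""
    return b + (1 - 2 * beta_l) * a

def combine(beta_l, beta_r):
    """Partial-sum combination, Eq. (3): XOR the halves, then append
    the right half."""
    return [bl ^ br for bl, br in zip(beta_l, beta_r)] + list(beta_r)
```

For instance, f_hwf(2.0, -3.0) returns -2.0: the signs disagree, so the magnitude is the smaller of the two.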
C. Successive-Cancellation List Decoding
The error-correction performance of SC when applied to
codes with short to moderate length can be improved by the
use of SCL-based decoding. The SCL algorithm estimates a
bit considering both its possible values 0 and 1. At every estimation, the number of codeword candidates (paths) doubles: in
order to limit the increase in the complexity of this algorithm,
only a set of L codeword candidates is memorized at all times.
Thus, after every estimation, half of the paths are discarded.
To this purpose, a PM is associated to each path and updated
at every new estimation: it can be considered a cost function,
and the L paths with the lowest PMs are allowed to survive.
In the LLR-based SCL [10], the PM can be computed as

PM_{i,l} = Σ_{j=0}^{i} ln( 1 + e^{−(1 − 2 û_{j,l}) α_{j,l}} ),    (6)

where l is the path index and û_{j,l} is the estimate of bit j at path l. A HWF version of Equation (6) has been proposed in [10]:

PM_{−1,l} = 0,
PM_{i,l} = PM_{i−1,l} + |α_{i,l}|, if û_{i,l} ≠ (1/2)(1 − sgn(α_{i,l})),
PM_{i,l} = PM_{i−1,l}, otherwise,    (7)

which can be rewritten as

PM_{i,l} = (1/2) Σ_{j=0}^{i} [ sgn(α_{j,l}) α_{j,l} − (1 − 2 û_{j,l}) α_{j,l} ].    (8)
In case the hardware does not introduce bottlenecks and
both (2) and (5) can be computed in a single time step, the
number of time steps required to decode a code of length N
with K information bits in SCL is [10]
TSCL (N, K) = 2N + K − 2.
(9)
1) SSCL Decoding: The SSCL algorithm in [17] provides
efficient decoders for Rate-0, Rep, and Rate-1 nodes in SCL
without traversing the decoding tree while guaranteeing the
error-correction performance preservation. For example in
Fig. 1, the black circles represent Rate-1 nodes, the white circles represent Rate-0 nodes, and the white triangles represent
Rep nodes. The pruned decoding tree of SSCL for the example
in Fig. 1 is shown in Fig. 2a which consists of two Rep nodes
and a Rate-1 node.
Let us consider that the vectors αl and ηl = 1 − 2βl are
relative to the top of a node in the decoding tree. Rate-0 nodes
can be decoded as
PM_{Ns−1,l} = Σ_{i=0}^{Ns−1} ln( 1 + e^{−α_{i,l}} ),    Exact, (10a)
PM_{Ns−1,l} = (1/2) Σ_{i=0}^{Ns−1} [ sgn(α_{i,l}) α_{i,l} − α_{i,l} ],    HWF. (10b)

Rep nodes can be decoded as

PM_{Ns−1,l} = Σ_{i=0}^{Ns−1} ln( 1 + e^{−η_{Ns−1,l} α_{i,l}} ),    Exact, (11a)
PM_{Ns−1,l} = (1/2) Σ_{i=0}^{Ns−1} [ sgn(α_{i,l}) α_{i,l} − η_{Ns−1,l} α_{i,l} ],    HWF, (11b)

where η_{Ns−1,l} represents the bit estimate of the information bit in the Rep node. Finally, Rate-1 nodes can be decoded as

PM_{Ns−1,l} = Σ_{i=0}^{Ns−1} ln( 1 + e^{−η_{i,l} α_{i,l}} ),    Exact, (12a)
PM_{Ns−1,l} = (1/2) Σ_{i=0}^{Ns−1} [ sgn(α_{i,l}) α_{i,l} − η_{i,l} α_{i,l} ],    HWF. (12b)
It was shown in [19] that the time step requirements of
Rate-0, Rep, and Rate-1 nodes of length Ns in SSCL decoding
can be represented as
TSSCLRate-0 (Ns, 0) = 1,
(13)
TSSCLRep (Ns, 1) = 2,
(14)
TSSCLRate-1 (Ns, Ns ) = Ns .
(15)
While the SSCL algorithm reduces the number of required
time steps to decode Rate-1 nodes by almost a factor of three,
it fails to address the effect of list size on the maximum
number of required path forks. In Section III, we prove that
the number of required time steps to decode Rate-1 nodes
depends on the list size and that the new Fast-SSCL algorithm
is faster than both SCL and SSCL without incurring any error-correction performance degradation.
2) SSCL-SPC Decoding: In [19], a low-complexity approach was proposed to decode SPC nodes which resulted
in exact reformulations for L = 2 and its approximations for
other list sizes brought negligible error-correction performance
degradation. The pruned tree of SSCL-SPC for the same
example as in Fig. 1 is shown in Fig. 2b which consists of a
Rep node and a SPC node. The idea is to decode the frozen
III. FAST-SSCL DECODING

Fig. 2: (a) SSCL, and (b) SSCL-SPC decoding tree for P(8, 4) and {u0, u1, u2, u4} ∈ F.
bit in SPC nodes in the first step of the decoding process. In
order to do that, the PM calculations in the HWF formulation
were carried out by only finding the LLR value of the least
reliable bit and using the LLR values at the top of the polar
code tree in the SCL decoding algorithm for the rest of the
bits.
The least reliable bit in an SPC node of length Ns is found
as
i_min = arg min_{0≤i<Ns} |αi|,    (16)

and the parity of the hard decisions is derived as

γ = ⊕_{i=0}^{Ns−1} (1/2)(1 − sgn(αi)).    (17)
To satisfy the even-parity constraint, γ is found for each path
based on (17). The PMs are then initialized as
PM_0 = PM_{−1} + |α_{i_min}|, if γ = 1,
PM_0 = PM_{−1}, otherwise.    (18)
In this way, the least reliable bit which corresponds to the
even-parity constraint is decoded first. For bits other than the
least reliable bit, the PM is updated as
PM_i = PM_{i−1} + |α_i| + (1 − 2γ)|α_{i_min}|, if η_i ≠ sgn(α_i),
PM_i = PM_{i−1}, otherwise.    (19)
Finally, when all the bits are estimated, the least reliable bit
is set to preserve the even-parity constraint as
β_{i_min} = ⊕_{i=0, i≠i_min}^{Ns−1} β_i.    (20)
In [19], the time step requirements of SPC nodes of length
Ns in SSCL-SPC decoding were shown to be
TSSCL-SPCSPC (Ns, Ns − 1) = Ns + 1,
(21)
which consists of one time step for (18), Ns − 1 time steps for
(19), and one time step for (20).
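For a single surviving path, steps (16)–(20) amount to hard-deciding every bit and flipping the least reliable one if the parity check fails; a minimal single-path sketch (list management, path metrics, and sorting are omitted; names are ours):

```python
def spc_decode_path(alpha):
    """Single-path sketch of the SPC steps (16)-(20): hard-decide every
    bit from its LLR sign, then flip the least reliable bit if the
    even-parity constraint is violated."""
    beta = [0 if a >= 0 else 1 for a in alpha]
    i_min = min(range(len(alpha)), key=lambda i: abs(alpha[i]))  # Eq. (16)
    gamma = 0
    for b in beta:
        gamma ^= b  # parity of the hard decisions, Eq. (17)
    if gamma == 1:
        beta[i_min] ^= 1  # restore even parity, equivalent to Eq. (20)
    return beta
```

With alpha = [1.0, -0.5, 2.0, 3.0] the hard decisions are [0, 1, 0, 0], the parity is odd, and the least reliable bit (index 1) is flipped, yielding the even-parity word [0, 0, 0, 0].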
The SSCL-SPC algorithm reduces the number of required
time steps to decode SPC nodes by almost a factor of three,
but as in the case of Rate-1 nodes, it fails to address the effect
of list size on the maximum number of required path forks. In
Section III, we prove that the number of required time steps to
decode SPC nodes depends on the list size and that the new
Fast-SSCL-SPC algorithm is faster than SSCL-SPC without
incurring any error-correction performance degradation.
In this section, we propose a fast decoding approach for
Rate-1 nodes and use it to develop Fast-SSCL. We further
propose a fast decoding approach for SPC nodes in SSCL-SPC
and use it to develop Fast-SSCL-SPC. To this end, we provide
the exact number of path forks in Rate-1 and SPC nodes
to guarantee error-correction performance preservation. Any
path splitting after that number is redundant and any path splitting before that number cannot guarantee the error-correction performance preservation. We further show that in
practical applications, this number can be reduced with almost
no error-correction performance loss. We use this phenomenon
to optimize Fast-SSCL and Fast-SSCL-SPC for speed.
A. Guaranteed Error-Correction Performance Preservation
The fast Rate-1 and SPC decoders can be summarized by
the following theorems.
Theorem 1. In SSCL decoding with list size L, the number
of path splitting in a Rate-1 node of length Ns required to get
the exact same results as the conventional SSCL decoder is
min (L − 1, Ns ) .
(22)
The proposed technique results in TFast-SSCLRate-1 (Ns, Ns ) =
min (L − 1, Ns ) which improves the required number of time
steps to decode Rate-1 nodes when L − 1 < Ns. Every bit after
the (L − 1)-th can be obtained through hard decision on the LLR
as
β_{i,l} = 0, if α_{i,l} ≥ 0,
β_{i,l} = 1, otherwise,    (23)
without the need for path splitting. On the other hand, in case
min (L − 1, Ns ) = Ns , all bits of the node need to be estimated
and the decoding automatically reverts to the process described
in [17]. The proof of the theorem is nevertheless valid for both
L − 1 < Ns and L − 1 ≥ Ns and is provided in [1].
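Under the assumption, supported by the discussion in this section, that the least reliable bits are the ones that require forking, the resulting Rate-1 schedule can be sketched as follows (illustrative Python; metric bookkeeping and the actual path duplication are omitted, and the function name is ours):

```python
def rate1_split_schedule(alpha, L):
    """Return (split, hard): the min(L-1, Ns) least reliable bit indices
    of a Rate-1 node, which require path splitting per Theorem 1, and a
    map from every remaining index to its hard decision as in Eq. (23)."""
    Ns = len(alpha)
    k = min(L - 1, Ns)
    order = sorted(range(Ns), key=lambda i: abs(alpha[i]))  # least reliable first
    split, rest = order[:k], order[k:]
    return split, {i: (0 if alpha[i] >= 0 else 1) for i in rest}
```

With L = 2 only the single least reliable bit is forked, matching Remark 1; with L − 1 ≥ Ns every bit is forked and the schedule degenerates to the SSCL behavior.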
The proposed theorem remains valid also for the HWF
formulation that can be written as
PM_{i,l} = PM_{i−1,l} + |α_{i,l}|, if η_{i,l} ≠ sgn(α_{i,l}),
PM_{i,l} = PM_{i−1,l}, otherwise.    (24)
The proof of the theorem in the HWF formulation case is also
presented in [1].
The result of Theorem 1 provides an exact number of path
forks in Rate-1 nodes for each list size in SCL decoding in
order to guarantee error-correction performance preservation.
The Rate-1 node decoder of [15] empirically states that
two path forks are required to preserve the error-correction
performance. The following remarks are the direct results of
Theorem 1.
Remark 1. The Rate-1 node decoder of [15] for L = 2 is
redundant.
Theorem 1 states that for a Rate-1 node of length Ns when
L = 2, the number of path splitting is min(L − 1, Ns ) = 1.
Therefore, there is no need to split the path after the least
reliable bit is estimated; the second path fork of [15] for L = 2 is thus redundant.
LLR values are more reliable and less likely to incur path
splitting. However, whether path splitting must occur or not
depends on the list size L. The proposed Rate-1 node decoder
is used in Fast-SSCL and Fast-SSCL-SPC algorithms and the
proposed SPC node decoder is used in Fast-SSCL-SPC, while
the decoders for Rate-0 and Rep nodes remain similar to those
used in SSCL [17] such that
Fig. 3: FER and BER performance comparison of SSCL [17]
and the empirical method of [15] for P(1024, 860) when L =
128. The CRC length is 32.
Remark 2. The Rate-1 node decoder of [15] falls short in
preserving the error-correction performance for higher rates
and larger list sizes.
TFast-SSCLRate-0 (Ns, 0) = TFast-SSCL-SPCRate-0 (Ns, 0) = 1,
(26)
TFast-SSCLRep (Ns, 1) = TFast-SSCL-SPCRep (Ns, 1) = 2.
(27)
It should be noted that the number of path forks is directly
related to the number of time steps required in the decoding
process [10]. Therefore, when L < Ns , the time step requirement of SPC nodes based on Theorem 2 is two time steps
more than the time step requirement of Rate-1 nodes as in
Theorem 1. However, if SPC nodes are not taken into account
as in Fast-SSCL decoding, the polar code tree needs to be
traversed to find Rep nodes and Rate-1 nodes as shown in
Fig. 2a. For a SPC node of length Ns , this will result in
additional time step requirements as
TFast-SSCLSPC (Ns, Ns − 1) = 2 log2 Ns − 2 + TFast-SSCLRep (2, 1) + Σ_{i=1}^{log2 Ns − 1} TFast-SSCLRate-1 (2^i, 2^i).
For codes of higher rates, the number of Rate-1 nodes of
larger length increases [19]. Therefore, when the list size is
also large, min(L − 1, Ns) ≫ 2. The gap between the empirical
method of [15] and the result of Theorem 1 can introduce
significant error-correction performance loss. Fig. 3 provides
the frame error rate (FER) and bit error rate (BER) of decoding
a P(1024, 860) code with SSCL of [17] and the empirical
method of [15] when the list size is 128. It can be seen that
the error-correction performance loss reaches 0.25 dB at FER
of 10−5 . In Section III-B, we show that the number of path
forks can be tuned for each list size to find a good tradeoff between the error-correction performance and the speed of
decoding.
Theorem 2. In SSCL-SPC decoding with list size L, the
number of path forks in a SPC node of length Ns required
to get the exact same results as the conventional SSCL-SPC
decoder is
min (L, Ns ) .
(25)
Following the time step calculation of SSCL-SPC, the proposed technique in Theorem 2 results in
TFast-SSCL-SPCSPC (Ns, Ns − 1) = min (L, Ns ) + 1 which improves
the required number of time steps to decode SPC nodes when
L < Ns . Every bit after the L-th can be obtained through hard
decision on the LLR as in (23) without the need for path
splitting. In case min (L, Ns ) = Ns , the paths need to be split
for all bits of the node and the decoding automatically reverts
to the process described in [19]. The proof of the theorem is
nevertheless valid for both L < Ns and L ≥ Ns . We defer the
proof to Appendix A.
The effectiveness of hard decision decoding after the
min(L − 1, Ns )-th bit in Rate-1 nodes and the min(L, Ns )-th bit
in SPC nodes is due to the fact that the bits with high absolute
For example, for a SPC node of length 64, Fast-SSCL with
L = 4 results in TFast-SSCLSPC (64, 63) = 26, while Fast-SSCL-SPC with L = 4 results in TFast-SSCL-SPCSPC (64, 63) = 5. Table I
summarizes the number of time steps required to decode each
node with different decoding algorithms.
In practical polar codes, there are many instances where
L − 1 < Ns for Rate-1 nodes and using the Fast-SSCL
algorithm can significantly reduce the number of required
decoding time steps with respect to SSCL. Similarly, there are
many instances where L < Ns for SPC nodes and using the
Fast-SSCL-SPC algorithm can significantly reduce the number
of required decoding time steps with respect to SSCL-SPC.
Fig. 4 shows the savings in time step requirements of a polar
code with three different rates. It should be noted that as the
rate increases, the number of Rate-1 and SPC nodes increases.
This consequently results in more savings by going from SSCL
(SSCL-SPC) to Fast-SSCL (Fast-SSCL-SPC).
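The per-node time-step formulas quoted above can be tabulated with a small helper (an illustrative sketch, with names of our choosing; it reproduces the Ns = 64, L = 4 example from the text):

```python
import math

def spc_time_steps(Ns, L, algo):
    """Time steps to decode an SPC node of length Ns (a power of 2),
    following the formulas quoted in the text for each algorithm."""
    n = int(math.log2(Ns))
    if algo == "SSCL-SPC":
        return Ns + 1                      # Eq. (21)
    if algo == "Fast-SSCL-SPC":
        return min(L, Ns) + 1              # Theorem 2
    if algo == "Fast-SSCL":
        # SPC node not recognized: traverse down to a Rep node of
        # length 2 and the Rate-1 children of lengths 2, 4, ..., Ns/2.
        return (2 * n - 2 + 2
                + sum(min(L - 1, 2 ** i) for i in range(1, n)))
    raise ValueError(algo)

# Example from the text: an SPC node of length 64 with L = 4.
assert spc_time_steps(64, 4, "Fast-SSCL") == 26
assert spc_time_steps(64, 4, "Fast-SSCL-SPC") == 5
```

The gap between 26 and 5 time steps illustrates why recognizing SPC nodes pays off as soon as L < Ns.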
B. Speed Optimization
The analysis in Section III-A provides exact reformulations
of SSCL and SSCL-SPC decoders without introducing any
error-correction performance loss. However, in practical polar
codes, there are fewer required path forks for Fast-SSCL
and Fast-SSCL-SPC in order to match the error-correction
performance of SSCL and SSCL-SPC, respectively.
Without loss of generality, let us consider L − 1 < Ns for
Rate-1 nodes and L < Ns for SPC nodes such that Fast-SSCL and Fast-SSCL-SPC result in higher decoding speeds
than SSCL and SSCL-SPC, respectively. Let SRate-1 be the number of path forks in a Rate-1 node of length Ns, and SSPC be the number of path forks in a SPC node of length Ns, where SRate-1 ≤ L − 1 and SSPC ≤ L. It
TABLE I: Time-Step Requirements of Decoding Different Nodes of Length Ns with List Size L.

Algorithm     | Rate-0  | Rep     | Rate-1         | SPC
SCL           | 2Ns − 2 | 2Ns − 1 | 3Ns − 2        | 3Ns − 3
SSCL          | 1       | 2       | Ns             | Ns + 2 log2 Ns − 2
SSCL-SPC      | 1       | 2       | Ns             | Ns + 1
Fast-SSCL     | 1       | 2       | min(L − 1, Ns) | 2 log2 Ns + Σ_{i=1}^{log2 Ns − 1} min(L − 1, 2^i)
Fast-SSCL-SPC | 1       | 2       | min(L − 1, Ns) | min(L, Ns) + 1
Fig. 4: Time-step requirements of SSCL, SSCL-SPC, Fast-SSCL, and Fast-SSCL-SPC decoding of (a) P(1024, 256), (b)
P(1024, 512), and (c) P(1024, 768).
should be noted that SRate-1 = L − 1 and SSPC = L result
in optimal number of path forks as presented in Theorem 1
and Theorem 2, respectively. The smaller the values of SRate-1
and SSPC, the faster the decoders of Fast-SSCL and Fast-SSCL-SPC. Similar to (22) and (25), the new number of
required path forks for Rate-1 and SPC nodes can be stated
as min(SRate-1, Ns ) and min(SSPC, Ns ), respectively.
values of SRate-1 = 3 and SSPC = 4 as shown in Fig. 8. As
illustrated in Fig. 9 for L = 8, the selection of SRate-1 = 2 and
SSPC = 4 provides similar FER and BER performance as the
optimal values of SRate-1 = 7 and SSPC = 8.
The definition of the parameters SRate-1 and SSPC provides a
trade-off between error-correction performance and speed of
Fast-SSCL and Fast-SSCL-SPC. Let us consider CRC-aided
Fast-SSCL decoding of P(1024, 512) with CRC length 16.
Fig. 5 shows that for L = 2, choosing SRate-1 = 0 results
in significant FER and BER error-correction performance
degradation. Therefore, when L = 2, the optimal value of
SRate-1 = 1 is used for Fast-SSCL. The optimal value of SRate-1
for L = 4 is 3. However, as shown in Fig. 6, SRate-1 = 1
results in almost the same FER and BER performance as the
optimal value of SRate-1 = 3. For L = 8, the selection of
SRate-1 = 1 results in ∼0.1 dB of error-correction performance
degradation at FER = 10−5 as shown in Fig. 7. However,
selecting SRate-1 = 2 removes the error-correction performance
gap to the optimal value of SRate-1 = 7. In the case of CRC-aided Fast-SSCL-SPC decoding of P(1024, 512) with 16 bits
of CRC, selecting SRate-1 = 1 and SSPC = 3 for L = 4 results
in almost the same FER and BER performance as the optimal
To evaluate the impact of the proposed techniques on a
practical case, a SCL-based polar code decoder architecture
implementing Fast-SSCL and Fast-SSCL-SPC has been designed. Its basic structure is inspired by the decoders presented
in [9], [19], and it is portrayed in Fig. 10. The decoding flow
follows the one portrayed in Section II-C for a list size L. This
means that the majority of the datapath and of the memory
are replicated L times, and work concurrently on different
candidate codewords and the associated LLR values.
Starting from the tree root, the tree is descended by recursively computing (5) and (2) on left and right branches
respectively at each tree stage s, with a left-first rule. The
computations are performed by L sets of P processing elements (PEs), where each set can be considered a standalone
SC decoder, and P is a power of 2. In case 2^s > 2P, (5) and (2)
require 2^s/(2P) time steps to be completed, while otherwise
needing a single time step. The updated LLR values are stored
in dedicated memories.
IV. DECODER ARCHITECTURE
Fig. 5: FER and BER performance comparison of Fast-SSCL
decoding of P(1024, 512) for L = 2 and different values of
SRate-1 . The CRC length is 16.
Fig. 7: FER and BER performance comparison of Fast-SSCL
decoding of P(1024, 512) for L = 8 and different values of
SRate-1 . The CRC length is 16.
Fig. 6: FER and BER performance comparison of Fast-SSCL
decoding of P(1024, 512) for L = 4 and different values of
SRate-1 . The CRC length is 16.
The internal structure of PEs is shown in Fig. 11. Each
PE receives as input two LLR values, outputting one. The
computations for both (5) and (2) are performed concurrently,
and the output is selected according to i_s, which represents the s-th bit of the index i, where 0 ≤ i < N. The index i is represented with smax = log2 N bits and identifies the next leaf node to be estimated; it can be composed by observing the path from the root node to the leaf node. From stage smax
down to 0, for every left branch we set the corresponding bit
of i to 0, and to 1 for every right branch.
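The index composition rule can be sketched as follows (illustrative Python; the path is given root-to-leaf, with 0 for a left branch and 1 for a right branch, and the function name is ours):

```python
def leaf_index(path):
    """Compose the leaf index i from a root-to-leaf branch sequence:
    each left branch contributes a 0 bit and each right branch a 1 bit,
    most significant (top stage) first."""
    i = 0
    for branch in path:
        i = (i << 1) | branch
    return i

# In the N = 8 tree of Fig. 1, left-left-right reaches leaf u1:
assert leaf_index([0, 0, 1]) == 1
```

Equivalently, i is simply the binary number spelled out by the branch directions from the root.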
When a leaf node is reached, the controller checks Node
Sequence, identifying the leaf node as an information bit or a
frozen bit. In case of a frozen bit, the paths are not split, and
the bit is estimated only as 0. All the L path memories are
updated with the same bit value, as are the LLR memories and
Fig. 8: FER and BER performance comparison of Fast-SSCL-SPC decoding of P(1024, 512) for L = 4 and different values of SRate-1 and SSPC. The CRC length is 16.
the β memories. On the other hand, in case of an information
bit, both 0 and 1 are considered. The paths are duplicated
and the PMs are calculated for the 2L candidates according to
(8). They are subsequently filtered through the sorter module,
designed for minimum latency. Every PM is compared to every
other in parallel: dedicated control logic uses the resulting
signals to return the values of the PMs of the surviving paths
and the newly estimated bits they are associated with. The
latter are used to update the LLR memories, the β memories
and the path memories, while also being sent to the CRC
calculation module to update the remainder.
All memories in the decoder are implemented as registers:
this allows the LLR and β values to be read, updated by the
Fig. 9: FER and BER performance comparison of Fast-SSCL-SPC decoding of P(1024, 512) for L = 8 and different values of SRate-1 and SSPC. The CRC length is 16.

Fig. 10: Decoder architecture.

Fig. 11: PE architecture.

Fig. 12: Memory architecture.
PEs, and written back in a single clock cycle. At the same time,
the paths are either updated, or split and updated (depending
on the constituent code), and the new PMs computed. In the
following clock cycle, in case the paths were split, the PMs
are sorted, paths are discarded and the CRC value updated.
In case paths were not split, the PMs are not sorted, and the
CRC update occurs in parallel with the following operation.
A. Memory Structure
The decoding flow described above relies on a number of
memories that are shown in Fig. 12. The channel memory
stores the N LLR values received from the channel at the
beginning of the decoding process. Each LLR value is quantized with QLLR bits, and represented with sign and magnitude.
The high and low stage memories store the intermediate α
computed in (5) and (2). The high stage memory is used
to store LLR values related to stages with nodes of size
greater than P. The number of PEs determines the number
of concurrent (5) or (2) that can be performed: for a node
in stage s, where 2^s > 2P, a total of 2^s/(2P) time steps are needed to descend to the lower tree level. The depth of the high stage memory is thus Σ_{j=log2 P+1}^{smax − 1} 2^j/P = N/P − 2,
while its width is QLLR × P. On the other hand, the low stage
memory stores the LLR values for stages where 2^s ≤ 2P: the
width of this memory is QLLR , while its depth is defined as
Σ_{j=0}^{log2 P−1} P/2^j = 2P − 2. Both high and low stage memory
words are reused by nodes belonging to the same stage s,
since once a subtree has been completely decoded, its LLR
values are not needed anymore. While high and low stage
memories are different for each path, the channel LLR values
are shared among the L datapaths. Table II summarizes the
memory read and write accesses for the aforementioned LLR
memories. When 2^s = 2P, 2P LLR values are read from the
high stage memory, and the P resulting LLR values are written
in the low stage memory. The channel memory is read at smax
only.
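The two closed-form memory depths can be checked numerically (a quick sketch under the stated assumptions that N and P are powers of 2 with P < N; function names are ours):

```python
import math

def high_stage_depth(N, P):
    """Sum of 2^j / P for j = log2(P)+1 .. log2(N)-1, which the text
    states equals N/P - 2 (words of width Q_LLR * P)."""
    s_max = int(math.log2(N))
    return sum(2 ** j // P for j in range(int(math.log2(P)) + 1, s_max))

def low_stage_depth(P):
    """Sum of P / 2^j for j = 0 .. log2(P)-1, stated to equal 2P - 2
    (words of width Q_LLR)."""
    return sum(P // 2 ** j for j in range(int(math.log2(P))))

# E.g. N = 1024, P = 64: depths 1024/64 - 2 = 14 and 2*64 - 2 = 126.
assert high_stage_depth(1024, 64) == 1024 // 64 - 2
assert low_stage_depth(64) == 2 * 64 - 2
```

Both identities follow from the geometric sums Σ 2^j and Σ 2^{-j}, which is why the depths scale linearly in N/P and P, respectively.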
Each of the L candidate codewords is stored in one of the
N-bit path memories, updated after every bit estimation. The β
memories hold the β values for each stage from 0 to smax − 1,
for a total of N − 1 bits each. Each time a bit is estimated,
all the β values it contributes to are concurrently updated.
When the decoding of the left half of the SC decoding tree has
been completed, the β memories are reused for the right half.
Finally, the PM memories store the L PM values computed in
(8).

TABLE II: LLR Memory Access.

Stage                   READ         WRITE
s = smax                Channel      High Stage
log2 P + 1 < s < smax   High Stage   High Stage
s = log2 P + 1          High Stage   Low Stage
s < log2 P + 1          Low Stage    Low Stage

TABLE III: Node Sequence Input Information for SSCL and SSCL-SPC.

Node Type   Node Stage   Node Size   Frozen
RATE0       s            2^s         1
RATE1       s            2^s         0
REP1        s            2^s − 1     1
REP2        s            1           0
DESCEND     Next Node    Next Node   Next Node
LEAF        0            1           0/1
SPC1        s            1           1
SPC2        s            2^s − 1     0
SPC3        s            2^s         0

TABLE IV: Node Sequence Input Information for Fast-SSCL and Fast-SSCL-SPC.

Node Type   Node Stage   Node Size                  Frozen
RATE0       s            2^s                        1
RATE1-1     s            min(SRate-1, 2^s)          0
RATE1-2     s            2^s − min(SRate-1, 2^s)    0
REP1        s            2^s − 1                    1
REP2        s            1                          0
DESCEND     Next Node    Next Node                  Next Node
LEAF        0            1                          0/1
SPC1        s            1                          1
SPC2-1      s            min(SSPC, 2^s)             0
SPC2-2      s            2^s − min(SSPC, 2^s) − 1   0
SPC3        s            2^s                        0
B. Special Nodes
The decoding flow and memory structure described before
implement the standard SCL decoding algorithm. The SSCL,
SSCL-SPC and the proposed Fast-SSCL and Fast-SSCL-SPC
algorithms demand modifications in the datapath to accommodate the simplified computations for Rate-0, Rate-1, Rep and
SPC nodes.
As with standard SCL, the pattern of frozen and information
bits is known a priori given a polar code structure; the same
can be said for special nodes. In the modified architecture,
the Node Sequence input in the controller (see Fig. 10) is not
limited to the frozen/information bit pattern, but it includes
the type of encountered nodes, their size and the tree stage in
which they are encountered. Table III summarizes the content
of Node Sequence depending on the type of node for SSCL
and SSCL-SPC, while in case of Fast-SSCL and Fast-SSCL-SPC
Node Sequence is detailed in Table IV. The node stage
allows the decoder to stop the tree exploration at the right level,
and the node type identifies the operations to be performed.
Each of the four node types is represented with one or more
decoding phases, each of which involves a certain number of
codeword bits, identified by the node size parameter. Finally,
the frozen bit parameter identifies a bit or set of bits as frozen
or not. To limit the decoder complexity, the maximum node
stage for special nodes is limited to s = log2 P, thus the
maximum node size is P. If the code structure identifies special
nodes with node size larger than P, they are considered as
composed of a set of P-size special nodes.
• Rate-0 nodes are identified in the Node Sequence with a
single decoding phase. No path splitting occurs, and all
the 2^s node bits are set to 0. The PM update requires a
single time step, as discussed in [19].
• Rate-1 nodes are composed of a single phase in both
SSCL and SSCL-SPC, in which paths are split 2^s times.
In case of Fast-SSCL and Fast-SSCL-SPC, each Rate-1
node is divided into two phases. The first takes care of the
min(SRate-1, 2^s) path forks, requiring as many time steps,
while the second sets the remaining 2^s − min(SRate-1, 2^s)
bits according to (23) and updates the PM according to
(24). This second phase takes a single time step.
• Rep nodes are identified by two phases in the Node
Sequence, the first of which takes care of the 2^s − 1
frozen bits similarly to Rate-0 nodes, and the second
estimates the single information bit. Each of these two
phases lasts a single time step.
• SPC nodes are split in three phases in the original SSCL-SPC
formulation. The first phase takes care of the frozen
bit, and computes both (16) and (17), initializing the PM
as in (18) in a single time step. The extraction of the least reliable
bit in (16) is performed through a comparison tree that
carries over both the index and the value of the LLR.
The second phase estimates the 2^s − 1 information bits,
splitting the path as many times in as many time steps.
During this phase, each time a bit is estimated, it is
XORed with the previous β values: this operation is
used to compute (20). The update of β_imin is finally
performed in the third phase, which takes a single time step.
Moving to Fast-SSCL-SPC, the second SPC phase is split
in two, similarly to what happens to the Rate-1 node.
• Descend is not an actual node type: it is inserted for
one clock cycle in Node Sequence for control purposes
after every special node. The node size and stage associated with this label are those of the following node. The
Descend node type is used by the controller module.
• Leaf nodes identify all nodes that can be found at s = 0,
for which the standard SCL algorithm applies.
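As a rough illustration of the Rate-1 decoding phases described above, the resulting time-step counts can be sketched as follows (assuming one clock cycle per path fork and one for the final hard-decision phase; this is not a model of the full pipeline):

```python
# Sketch: decoding time steps for a Rate-1 node of size 2^s.

def rate1_steps_sscl(s):
    # SSCL/SSCL-SPC: one path fork (and time step) per bit.
    return 2**s

def rate1_steps_fast(s, s_rate1):
    # Fast-SSCL: min(S_Rate-1, 2^s) forks, plus one step to set the
    # remaining bits by hard decision when any remain.
    forks = min(s_rate1, 2**s)
    rest = 2**s - forks
    return forks + (1 if rest > 0 else 0)

# Node of 64 bits, L = 4 (S_Rate-1 = L - 1 = 3):
print(rate1_steps_sscl(6))     # 64
print(rate1_steps_fast(6, 3))  # 4
```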
The decoding of special nodes requires a few major changes
in the decoder architecture.
• Path Memory: each path memory is an array of N
registers, granting concurrent access to all bits with a 1-bit
granularity. In SCL, the path update is based on the
Fig. 13: Path memory access architecture for Fast-SSCL-SPC.
combination of a write enable signal, the codeword bit
index i that acts as a memory address, and the value of
the estimated bit after the PMs have been sorted and the
surviving paths identified. Fig. 13 shows the path memory
access architecture for Fast-SSCL-SPC. Unlike SCL, the
path memory is not always updated with the estimated
bit û. Thus, the SCL datapath is bypassed according to
the node type. When Node Sequence identifies RATE0,
REP1 and SPC1 nodes that consider frozen bits, the path
memory is updated with 0 values. The estimated bit û is
chosen as input for RATE1-1, REP2, SPC2-2 and LEAF
nodes, where the path is split. RATE1-2 and SPC2-2
nodes estimate the bits through hard decision on the LLR
values, while in the SPC3 case the update considers the
result of (20). At the same time, whenever the estimated
bits are more than one, the corresponding bits in the path
memory must be concurrently updated. Thus, the address
becomes a range of addresses for RATE0, RATE1-2,
REP1 and SPC2-2.
• β Memory: the update of this memory depends on the
value of the estimated bit. In order to limit the latency
cost of these computations, concurrently to the estimation
of û, the updated values of all the bits of the β memory
are computed assuming both û = 0 and û = 1. The actual
value of û is used as a selection signal to decide on
the two alternatives. The β memory in SCL, unlike the
path memory, already foresees the concurrent update of
multiple entries that are selected based on the bit index
i. Given an estimated leaf node, the β values of all the
stages that it affects are updated: in fact, since as shown
in (3) the update of β values is at most a series of XORs,
it is possible to distribute this operation in time. The same
can be said of multi-bit (3) updates. To implement Fast-SSCL-SPC,
the β update selection logic must be modified
to foresee the special nodes, similarly to what is portrayed in
Fig. 13 for the path memory. For RATE0, REP1, and
SPC1, the û = 0 update is always selected. RATE1-1,
REP2, SPC2-1 and LEAF nodes maintain the standard
SCL selection based on the actual value of û. The update
for the SPC3 case is based on β_imin. For RATE1-2 and SPC2-2,
the selection is based on the XORed sign bits of the
LLR values read from the memory.
• PM Calculation: this operation is performed, in the
original SCL architecture and for leaf nodes in general,
according to (8). The paths and associated PMs are split
and sorted every time an information bit is estimated,
while PMs are updated without sorting when frozen bits
are encountered. While the sorting architecture remains
the same, the implementation of the proposed algorithm
requires a different PM update structure for each special
node. Unlike with leaf nodes, the LLR values needed
for the PM update in special nodes are not the output
of PEs, and are read directly from the LLR memories.
Additional bypass logic is thus needed. For RATE0 and
REP1, (10b) and (11b) require a summation over up to P
values, while SPC1 nodes need to perform the minimum
α search (16): these operations are tackled through adder
and comparator trees. RATE1-1, REP2 and SPC2-1 PM
updates are handled similarly to the leaf node case, since
a single bit at a time is being estimated. RATE1-2, SPC22 and SPC3 do not require any PM to be updated.
• CRC Calculation: the standard SCL architecture foresees
the estimation of a single bit at a time. Thus, the CRC is
computed sequentially. However, Rate-0 and Rep nodes
in SSCL and SSCL-SPC estimate up to P and P − 1
bits concurrently. Thus, for the CRC operation not to
become a latency bottleneck, the CRC calculation must be
parallelized by updating the remainder. Following the idea
presented in [20], it is possible to allow for variable input
sizes with a high degree of resource sharing and limited
overall complexity. The circuit is further simplified by the
fact that both Rate-0 and Rep nodes guarantee that the
estimated bit values are all 0. Fig. 14 shows the modified
CRC calculation module in case P = 64, where NCRC
represents the number of concurrently estimated bits: the
estimated bit can be different from 0 only in case of leaf
nodes and s = 1 Rep nodes, for which a single bit is
estimated in any case.
The Fast-SSCL and Fast-SSCL-SPC architectures follow
the same idea, but require additional logic. RATE1-2 and
SPC2-2 nodes introduce new degrees of parallelism, as
up to P − SRate-1 and P − SSPC bits are updated at the
same time. Moreover, it is not possible to assume that
these bits are 0 as with RATE0 and REP1. The value of
the estimated bit must be taken into account, leading to
increased complexity.
• Controller: this module in the SCL architecture is tasked
with the generation of memory write enables, the update
of the codeword bit index i and the stage tracker s,
along with the LLR memory selection signals according
to Table II and path enable and duplication signals. It
implements a finite state machine that identifies the status
of the decoding process. The introduction of special nodes
Fig. 14: CRC architecture for SSCL and SSCL-SPC.
demands that most of the control signal generation logic
is modified. Of particular importance is the fact that, in
the SCL architecture, the update of i is bound to having
reached a leaf node, i.e. s = 0. In Fast-SSCL-SPC, it is
instead linked to s being equal to the special node stage.
The index i is moreover incremented by the number of
bits estimated in a single time step, depending on the
type of node. Memory write enables are also bound to
having reached the special node stage, and not only to
s = 0.
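The remainder-update idea behind the parallel CRC can be sketched in software with a toy polynomial (a hypothetical x^4 + x + 1, not the decoder's actual CRC): a block update is functionally equivalent to folding the serial update, and for all-zero (frozen) input blocks it reduces to a fixed linear map of the current remainder, which is why the Rate-0/Rep CRC trees stay small:

```python
# Sketch: bit-serial CRC update and its block (multi-bit) equivalent.
POLY, WIDTH = 0b10011, 4  # toy polynomial x^4 + x + 1 (illustrative only)

def crc_update_bit(rem, bit):
    rem = (rem << 1) | bit
    if rem & (1 << WIDTH):
        rem ^= POLY
    return rem

def crc_update_block(rem, bits):
    # In hardware this is one combinational XOR tree per supported N_CRC;
    # functionally it equals N_CRC serial single-bit updates.
    for b in bits:
        rem = crc_update_bit(rem, b)
    return rem

# For all-zero input blocks the update is linear in the remainder alone:
zeros = [0] * 4
print(crc_update_block(3 ^ 5, zeros) ==
      crc_update_block(3, zeros) ^ crc_update_block(5, zeros))  # True
```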
V. RESULTS
A. Hardware Implementation
The architecture designed in Section IV has been described
in the VHDL language and synthesized in TSMC 65 nm
CMOS technology. Implementation results are provided in
Table V for different decoders: along with the Fast-SSCL and
Fast-SSCL-SPC described in this work, the SCL, SSCL and
SSCL-SPC decoders proposed in [19] are presented as well.
Each decoder has been synthesized with three list sizes (L =
2, 4, 8), while the Fast-SSCL and Fast-SSCL-SPC architectures
have been synthesized considering different combinations
of SRate-1 and SSPC , as portrayed in Section III-B. Quantization
values are the same used in [19], i.e. 6 bits for LLR values
and 8 bits for PMs, with two fractional bits each. All memory
elements have been implemented through registers and the
area results include both net area and cell area. The reported
throughput is the coded throughput.
All Fast-SSCL and Fast-SSCL-SPC decoders, regardless of the value
of SRate-1 and SSPC, show a substantial increase in area occupation
with respect to SSCL and SSCL-SPC. There are three main
contributing factors to the additional area overhead:
• In SSCL and SSCL-SPC, the CRC computation needs to
be parallelized, since in Rep and Rate-0 nodes multiple
bits are updated at the same time. However, the bit
value is known at design time, since they are frozen
bits. This, along with the fact that 0 is neutral in the
XOR operations required by CRC calculation, limits the
additional area overhead. On the contrary, in
Fast-SSCL and Fast-SSCL-SPC, Rate-1 and SPC nodes
update multiple bits within the same time step (SPC2-2
and RATE1-2 stages). In these cases, however, they are
information bits, whose values cannot be known at design
time: the resulting parallel CRC tree is substantially wider
and deeper than the ones for Rate-0 and Rep nodes.
Moreover, with increasing number of CRC trees, the
selection logic becomes more cumbersome.
• A similar situation is encountered for the β memory
update signal. As described in the previous section, the
β memory update values are computed assuming both
estimated values, and the actual value of û is used as a
selection signal. In SSCL and SSCL-SPC the multiple-bit update does not constitute a problem since all the
estimated bits are 0 and the β memory content does not
need to be changed. On the contrary, in Fast-SSCL and
Fast-SSCL-SPC, the value of the estimated information
bits might change the content of the β memory. Moreover,
since β is computed as (3), the update of β bits depends
on previous bits as well as the newly estimated ones.
Thus, an XOR tree is necessary to compute the right
selection signal for every information bit estimated in
SPC2-2 and RATE1-2 stages.
• The aforementioned modifications considerably lengthen
the system critical path. In case of large code length, small
list size, or large P, the critical path starts in the controller
module, in particular in the high stage memory addressing
logic, goes through the multiplexing structure that routes
LLR values to the PEs, and ends after the PM update. In
case of large list sizes or short code length, the critical
path passes through the PM sorting and path selection
logic, and through the parallel CRC computation. Thus,
pipeline registers have been inserted to lower the impact
of the critical path, at the cost of additional area occupation.
Fast-SSCL and Fast-SSCL-SPC implementations show consistent throughput improvements with respect to previously
proposed architectures. The gain is lower than what is shown
to be theoretically achievable in Fig. 4. This is due to the
aforementioned pipeline stages, which increase the number of
steps needed to complete the decoding of component codes.
B. Comparison with Previous Works
The Fast-SSCL-SPC hardware implementation presented in
this paper for P(1024, 512) and P = 64 is compared with
the state-of-the-art architectures in [11]–[14], [19] and the
results are provided in Table VI. The architectures presented
in [12]–[14] were synthesized based on 90 nm technology: for
a fair comparison, their results have been converted to 65 nm
technology using a factor of 90/65 for the frequency and a
factor of (65/90)^2 for the area. The synthesis results in [11]
were carried out in 65 nm technology but reported in 90 nm
technology. Therefore, a reverse conversion was applied to
convert the results back to 65 nm technology.

TABLE V: TSMC 65 nm Implementation Results for P(1024, 512) and P = 64.

Implementation   L   SRate-1   SSPC   Area [mm2]   Frequency [MHz]   Throughput [Mb/s]
SCL              2   -         -      0.599        1031              389
SCL              4   -         -      0.998        961               363
SCL              8   -         -      2.686        722               272
SSCL             2   -         -      0.643        1031              1108
SSCL             4   -         -      1.192        961               1033
SSCL             8   -         -      2.958        722               776
SSCL-SPC         2   -         -      0.684        1031              1229
SSCL-SPC         4   -         -      1.223        961               1146
SSCL-SPC         8   -         -      3.110        722               861
Fast-SSCL        2   1         -      0.871        885               1579
Fast-SSCL        4   1         -      1.536        840               1499
Fast-SSCL        4   3         -      1.511        840               1446
Fast-SSCL        8   2         -      3.622        722               1053
Fast-SSCL        8   7         -      3.588        722               827
Fast-SSCL-SPC    2   1         2      1.048        885               1861
Fast-SSCL-SPC    4   1         3      1.822        840               1608
Fast-SSCL-SPC    4   3         4      1.797        840               1338
Fast-SSCL-SPC    8   2         4      3.975        722               1198
Fast-SSCL-SPC    8   7         8      3.902        722               959
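The technology-scaling conversion described above amounts to the following (a sketch; the sample numbers are illustrative, not taken from the tables):

```python
# Sketch: scaling 90 nm synthesis results to 65 nm for fair comparison.
def scale_90nm_to_65nm(area_mm2, freq_mhz):
    # Area scales with the square of the feature-size ratio,
    # frequency with its inverse.
    return area_mm2 * (65.0 / 90.0) ** 2, freq_mhz * 90.0 / 65.0

area65, freq65 = scale_90nm_to_65nm(1.40, 400.0)  # illustrative values
print(round(area65, 3), round(freq65, 1))
```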
The architecture in this paper shows 72% higher throughput
and 42% lower latency with respect to the multibit decision
SCL decoder architecture of [11] for L = 4. However, the
area occupation of [11] is smaller, leading to a higher area
efficiency than the design in this paper.
The symbol-decision SCL decoder architecture of [12]
shows lower area occupation than the design in this paper
for L = 4 but it comes at the cost of lower throughput and
higher latency. Our decoder architecture achieves 192% higher
throughput and 66% lower latency than [12], which results in
17% higher area efficiency.
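The percentages quoted in this comparison follow directly from the Table VI figures (a quick check; throughput in Mb/s, latency in µs, area efficiency in Mb/s/mm2):

```python
# Sketch: relative gains over [12] at L = 4, recomputed from Table VI.
def pct_higher(ours, theirs):
    return round(100.0 * (ours / theirs - 1.0))

def pct_lower(ours, theirs):
    return round(100.0 * (1.0 - ours / theirs))

print(pct_higher(1608, 551))   # 192 (% higher throughput)
print(pct_lower(0.64, 1.86))   # 66 (% lower latency)
print(pct_higher(883, 755))    # 17 (% higher area efficiency)
```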
The high throughput SCL decoder architecture of [13] for
L = 2 requires lower area occupation than our design but it
comes at the expense of lower throughput and higher latency.
Moreover, the design in [13] relies on parameters that need
to be tuned for each code, and it is shown in [13] that a
change of code can result in more than 0.2 dB error-correction
performance loss. For L = 4, our decoder not only achieves
higher throughput and lower latency than [13], but also it
occupies a smaller area. This in turn yields a 12% increase
in the area efficiency in comparison with [13].
The multimode SCL decoder in [14] relies on a higher
number of PEs than our design: nevertheless, it yields lower
throughput and higher latency than the architecture proposed
in this paper for L = 4. It should be noted that [14] is based on
the design presented in [13], whose code-specific parameters
may lead to substantial error-correction performance degradation. On the contrary, the design in this paper is targeted for
speed and flexibility and can be used to decode any polar code
of any length.
Compared to our previous work [19], that has the same
degree of flexibility of the proposed design, this decoder
achieves 51% higher throughput and 34% lower latency for
L = 2, and 40% higher throughput and 28% lower latency for
L = 4. However, the higher area occupation of the new design
yields lower area efficiencies than [19] for L = {2, 4}. For
L = 8, the proposed design has 39% higher throughput and
29% lower latency than [19], which results in 9% increase in
area efficiency. The reason is that for L = 8, the sorter is quite
large and falls on the critical path. Consequently, the maximum
achievable frequency for the proposed design is limited by the
sorter and not by Rate-1 and SPC nodes as opposed to the
L = {2, 4} case. This results in the same maximum achievable
frequency for both designs, hence, higher throughput and area
efficiency.
Fig. 15 plots the area occupation against the decoding
latency for all the decoders considered in Table VI. For each
value of L, the designs proposed in this work have the shortest
latency, shown by their leftmost position on the graph.
VI. CONCLUSION
In this work, we have proven that the list size in polar
decoders sets a limit to the useful number of path forks in
Rate-1 and SPC nodes. We thus propose the Fast-SSCL and
Fast-SSCL-SPC polar code decoding algorithms that, depending
on L and the number of performed path forks, can reduce
the number of required time steps by more than 75% at no
error-correction performance cost. Hardware architectures for
the proposed algorithms have been described and implemented
in CMOS 65 nm technology. They have a very high degree
of flexibility and can decode any polar code, regardless of its
rate. The proposed decoder is the fastest SCL-based decoder
in the literature: sized for N = 1024 and L = 2, it yields a 1.861
Gb/s throughput with an area occupation of 1.048 mm2. The
same design, sized for L = 4 and L = 8, leads to throughputs
of 1.608 Gb/s and 1.198 Gb/s, and areas of 1.822 mm2 and
3.975 mm2, respectively.

TABLE VI: Comparison with State-of-the-Art Decoders.

                             This work            [11]   [12]†  [13]†         [14]†  [19]
L                            2      4      8      4      4      2      4      4      2      4      8
P                            64     64     64     64     64     64     64     256    64     64     64
Area [mm2]                   1.048  1.822  3.975  0.62   0.73   1.03   2.00   0.99   0.68   1.22   3.11
Frequency [MHz]              885    840    722    498    692    586    558    566    1031   961    722
Throughput [Mb/s]            1861   1608   1198   935    551    1844   1578   1515   1229   1146   861
Latency [µs]                 0.55   0.64   0.85   1.10   1.86   0.57   0.66   0.69   0.83   0.89   1.19
Area Efficiency [Mb/s/mm2]   1776   883    301    1508   755    1790   789    1530   1807   939    277

† The results are originally based on TSMC 90 nm technology and are scaled to TSMC 65 nm technology.

Fig. 15: Comparison with state-of-the-art decoders.

APPENDIX A
PROOF OF THEOREM 2

Proof. In order to prove Theorem 2, we note that the first
step is to initialize the PMs based on (18). Therefore, the
least reliable bit needs to be estimated first. For the bits other
than the least reliable bit, the PMs are updated based on (19).
However, the term (1 − 2γ)|αimin| is constant for all the bit
estimations in the same path. Therefore, we can define a new
set of Ns − 1 LLR values as

αim = αi + sgn(αi)(1 − 2γ)|αimin|,    (28)

for i ≠ imin and 0 ≤ im < Ns − 1, which results in

|αim| = |αi| + (1 − 2γ)|αimin|.    (29)

The problem is now reduced to a Rate-1 node of length Ns − 1
which, with the result of Theorem 1, can be decoded by
considering only min(L − 1, Ns − 1) path splittings. Adding the
bit estimation for imin, SPC nodes can be decoded by splitting
paths min(L, Ns) times while guaranteeing the same results as
in SSCL-SPC. Theorem 2 is consequently proven.

REFERENCES
[1] S. A. Hashemi, C. Condo, and W. J. Gross, "Fast simplified successive-cancellation list decoding of polar codes," in IEEE Wireless Commun.
and Netw. Conf., March 2017, pp. 1–6.
[2] E. Arıkan, "Channel polarization: A method for constructing capacity-achieving codes for symmetric binary-input memoryless channels," IEEE
Trans. Inf. Theory, vol. 55, no. 7, pp. 3051–3073, July 2009.
[3] C. Leroux, A. Raymond, G. Sarkis, and W. Gross, “A semi-parallel
successive-cancellation decoder for polar codes,” IEEE Trans. Signal
Process., vol. 61, no. 2, pp. 289–299, January 2013.
[4] A. Alamdar-Yazdi and F. R. Kschischang, "A simplified successive-cancellation decoder for polar codes," IEEE Commun. Lett., vol. 15,
no. 12, pp. 1378–1380, December 2011.
[5] G. Sarkis, P. Giard, A. Vardy, C. Thibeault, and W. Gross, “Fast polar
decoders: Algorithm and implementation,” IEEE J. Sel. Areas Commun.,
vol. 32, no. 5, pp. 946–957, May 2014.
[6] I. Tal and A. Vardy, “List decoding of polar codes,” IEEE Trans. Inf.
Theory, vol. 61, no. 5, pp. 2213–2226, May 2015.
[7] I. Dumer and K. Shabunov, “Near-optimum decoding for subcodes of
Reed-Muller codes,” in IEEE Int. Symp. on Inform. Theory, 2001, p.
329.
[8] “Final report of 3GPP TSG RAN WG1 #87 v1.0.0,”
http://www.3gpp.org/ftp/tsg ran/WG1 RL1/TSGR1 87/Report/Final
Minutes report RAN1%2387 v100.zip, Reno, USA, November 2016.
[9] A. Balatsoukas-Stimming, A. J. Raymond, W. J. Gross, and A. Burg,
“Hardware architecture for list successive cancellation decoding of polar
codes,” IEEE Trans. Circuits Syst. II, vol. 61, no. 8, pp. 609–613, August
2014.
[10] A. Balatsoukas-Stimming, M. Bastani Parizi, and A. Burg, “LLR-based
successive cancellation list decoding of polar codes,” IEEE Trans. Signal
Process., vol. 63, no. 19, pp. 5165–5179, October 2015.
[11] B. Yuan and K. K. Parhi, “LLR-based successive-cancellation list
decoder for polar codes with multibit decision,” IEEE Trans. Circuits
Syst. II, vol. 64, no. 1, pp. 21–25, January 2017.
[12] C. Xiong, J. Lin, and Z. Yan, “Symbol-decision successive cancellation
list decoder for polar codes,” IEEE Trans. Signal Process., vol. 64, no. 3,
pp. 675–687, February 2016.
[13] J. Lin, C. Xiong, and Z. Yan, “A high throughput list decoder architecture
for polar codes,” IEEE Trans. VLSI Syst., vol. 24, no. 6, pp. 2378–2391,
June 2016.
[14] C. Xiong, J. Lin, and Z. Yan, “A multimode area-efficient SCL polar
decoder,” IEEE Trans. VLSI Syst., vol. 24, no. 12, pp. 3499–3512,
December 2016.
[15] G. Sarkis, P. Giard, A. Vardy, C. Thibeault, and W. J. Gross, “Fast list
decoders for polar codes,” IEEE J. Sel. Areas Commun., vol. 34, no. 2,
pp. 318–328, February 2016.
[16] S. A. Hashemi, C. Condo, and W. J. Gross, “List sphere decoding
of polar codes,” in Asilomar Conf. on Signals, Syst. and Comput.,
November 2015, pp. 1346–1350.
[17] S. A. Hashemi, C. Condo, and W. J. Gross, "Simplified successive-cancellation list decoding of polar codes," in IEEE Int. Symp. on Inform.
Theory, July 2016, pp. 815–819.
[18] S. A. Hashemi, C. Condo, and W. J. Gross, “Matrix reordering for
efficient list sphere decoding of polar codes,” in IEEE Int. Symp. on
Circuits and Syst., May 2016, pp. 1730–1733.
14
[19] S. A. Hashemi, C. Condo, and W. J. Gross, “A fast polar code list
decoder architecture based on sphere decoding,” IEEE Trans. Circuits
Syst. I, vol. 63, no. 12, pp. 2368–2380, December 2016.
[20] C. Condo, M. Martina, G. Piccinini, and G. Masera, “Variable parallelism cyclic redundancy check circuit for 3GPP-LTE/LTE-Advanced,”
IEEE Signal Process. Lett., vol. 21, no. 11, pp. 1380–1384, November
2014.
Seyyed Ali Hashemi was born in Qaemshahr, Iran.
He received the B.Sc. degree in electrical engineering from Sharif University of Technology, Tehran,
Iran, in 2009 and the M.Sc. degree in electrical
and computer engineering from the University of
Alberta, Edmonton, AB, Canada, in 2011. He is
currently working toward the Ph.D. degree in electrical and computer engineering at McGill University,
Montréal, QC, Canada. He was the recipient of
a Best Student Paper Award at the 2016 IEEE
International Symposium on Circuits and Systems
(ISCAS 2016). His research interests include error-correcting codes, hardware architecture optimization, and VLSI implementation of digital signal
processing systems.
Carlo Condo received the M.Sc. degree in electrical and computer engineering from Politecnico
di Torino, Italy, and the University of Illinois at
Chicago, IL, USA, in 2010. He received the Ph.D.
degree in electronics and telecommunications engineering from Politecnico di Torino and Telecom
Bretagne, France, in 2014. Since 2015, he has been
a postdoctoral fellow at the ISIP Laboratory, McGill
University. His Ph.D. thesis was awarded a mention
of merit as one of the five best of 2013/2014 by
the GE association, and he has been the recipient of
two conference best paper awards (SPACOMM 2013 and ISCAS 2016). His
research is focused on channel coding, design and implementation of encoder
and decoder architectures, and digital signal processing.
Warren J. Gross (SM’10) received the B.A.Sc.
degree in electrical engineering from the University
of Waterloo, Waterloo, ON, Canada, in 1996, and
the M.A.Sc. and Ph.D. degrees from the University
of Toronto, Toronto, ON, Canada, in 1999 and 2003,
respectively. Currently, he is Professor and Associate
Chair (Academic Affairs) with the Department of
Electrical and Computer Engineering, McGill University, Montréal, QC, Canada. His research interests
are in the design and implementation of signal processing systems and custom computer architectures.
Dr. Gross served as Chair of the IEEE Signal Processing Society Technical
Committee on Design and Implementation of Signal Processing Systems. He
has served as General Co-Chair of IEEE GlobalSIP 2017 and IEEE SiPS
2017, and as Technical Program Co-Chair of SiPS 2012. He has also served
as organizer for the Workshop on Polar Coding in Wireless Communications
at WCNC 2017, the Symposium on Data Flow Algorithms and Architecture
for Signal Processing Systems (GlobalSIP 2014) and the IEEE ICC 2012
Workshop on Emerging Data Storage Technologies. Dr. Gross served as
Associate Editor for the IEEE Transactions on Signal Processing and currently
is a Senior Area Editor. Dr. Gross is a Senior Member of the IEEE and a
licensed Professional Engineer in the Province of Ontario.
Parallel Ordered Sets Using Join
arXiv:1602.02120v4 [] 12 Nov 2016
Guy Blelloch
Carnegie Mellon University
[email protected]
Daniel Ferizovic
Karlsruhe Institute of Technology
[email protected]
Yihan Sun
Carnegie Mellon University
[email protected]
November 15, 2016
Abstract
Ordered sets (and maps when data is associated with each key) are one of the most important and useful data types.
The set-set functions union, intersection and difference are particularly useful in certain applications. Brown and
Tarjan first described an algorithm for these functions, based on 2-3 trees, that meets the optimal Θ(m log(n/m + 1))
time bounds in the comparison model (n and m ≤ n are the input sizes). Later Adams showed very elegant algorithms
for the functions, and others, based on weight-balanced trees. They only require a single function that is specific to the
balancing scheme—a function that joins two balanced trees—and hence can be applied to other balancing schemes.
Furthermore the algorithms are naturally parallel. However, in the twenty-four years since, no one has shown that the
algorithms are work efficient (or optimal), sequential or parallel, and even for the original weight-balanced trees.
In this paper we show that Adams’ algorithms are both work efficient and highly parallel (polylog depth) across
four different balancing schemes—AVL trees, red-black trees, weight balanced trees and treaps. To do this we need
careful, but simple, algorithms for JOIN that maintain certain invariants, and our proof is (mostly) generic across the
schemes.
To understand how the algorithms perform in practice we have also implemented them (all code except JOIN is
generic across the balancing schemes). Interestingly the implementations on all four balancing schemes and three set
functions perform similarly in time and speedup (more than 45x on 64 cores). We also compare the performance of
our implementation to other existing libraries and algorithms including the standard template library (STL) implementation of red-black trees, the multicore standard template library (MCSTL), and a recent parallel implementation
based on weight-balanced trees. Our implementations are not as fast as the best of these on fully overlapping keys
(but comparable), but better than all on keys with a skewed overlap (two Gaussians with different means).
1 Introduction
Ordered sets and ordered maps (sets with data associated with each key) are two of the most important data types
used in modern programming. Most programming languages either have them built in as basic types (e.g. python) or
supply them as standard libraries (C++, C# Java, Scala, Haskell, ML). These implementations are based on some form
of balanced tree (or tree-like) data structure and, at minimum, support lookup, insertion, and deletion in logarithmic
time. Most also support set-set functions such as union, intersection, and difference. These functions are particularly
useful when using parallel machines since they can support parallel bulk updates. In this paper we are interested in
simple and efficient parallel algorithms for such set-set functions.
The lower bound for comparison-based algorithms for union, intersection
and difference for inputs of size n and m ≤ n and output an ordered structure¹ is
log2 ((m+n) choose n) = Θ(m log(n/m + 1)). Brown and Tarjan first described a
sequential algorithm for merging that asymptotically matches these bounds [11]. It can be adapted for union, intersection
and difference with the same bounds. The bound is interesting since it shows that implementing insertion with union,
or deletion with difference, is asymptotically efficient (O(log n) time), as is taking the union of two equal sized sets
(O(n) time). However, the Brown and Tarjan algorithm is complicated, and completely sequential.
insert(T, k) =
  (TL, m, TR) = split(T, k);
  join(TL, k, TR)

delete(T, k) =
  (TL, m, TR) = split(T, k);
  join2(TL, TR)

union(T1, T2) =
  if T1 = Leaf then T2
  else if T2 = Leaf then T1
  else (L2, k2, R2) = expose(T2);
    (L1, b, R1) = split(T1, k2);
    TL = union(L1, L2) || TR = union(R1, R2);
    join(TL, k2, TR)

split(T, k) =
  if T = Leaf then (Leaf, false, Leaf)
  else (L, m, R) = expose(T);
    if k = m then (L, true, R)
    else if k < m then
      (LL, b, LR) = split(L, k);
      (LL, b, join(LR, m, R))
    else (RL, b, RR) = split(R, k);
      (join(L, m, RL), b, RR)

intersect(T1, T2) =
  if T1 = Leaf then Leaf
  else if T2 = Leaf then Leaf
  else (L2, k2, R2) = expose(T2);
    (L1, b, R1) = split(T1, k2);
    TL = intersect(L1, L2) || TR = intersect(R1, R2);
    if b = true then join(TL, k2, TR)
    else join2(TL, TR)

splitLast(T) =
  (L, k, R) = expose(T);
  if R = Leaf then (L, k)
  else (T', k') = splitLast(R);
    (join(L, k, T'), k')

difference(T1, T2) =
  if T1 = Leaf then Leaf
  else if T2 = Leaf then T1
  else (L2, k2, R2) = expose(T2);
    (L1, b, R1) = split(T1, k2);
    TL = difference(L1, L2) || TR = difference(R1, R2);
    join2(TL, TR)

join2(TL, TR) =
  if TL = Leaf then TR
  else (TL', k) = splitLast(TL);
    join(TL', k, TR)
Figure 1: Implementing U NION, I NTERSECT, D IFFERENCE, I NSERT, D ELETE, S PLIT, and J OIN 2 with just J OIN.
E XPOSE returns the left tree, key, and right tree of a node. The || notation indicates the recursive calls can run in
parallel. These are slight variants of the algorithms described by Adams [1], although he did not consider parallelism.
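To make the framework concrete, the following runnable Python sketch (ours, not the paper's implementation) instantiates the algorithms of Figure 1 with plain unbalanced BST nodes: here join simply builds a node, so only the control flow is illustrated, not the cost bounds. Swapping in a balance-aware join is the only per-tree-type change.

```python
# Figure 1 instantiated with plain, unbalanced BST nodes. join simply
# builds a node; a balancing JOIN would be substituted per tree type.

class Node:
    def __init__(self, left, key, right):
        self.left, self.key, self.right = left, key, right

def expose(t):
    return t.left, t.key, t.right

def join(l, k, r):            # stand-in for a balance-aware JOIN
    return Node(l, k, r)

def split(t, k):
    if t is None:
        return None, False, None
    l, m, r = expose(t)
    if k == m:
        return l, True, r
    if k < m:
        ll, b, lr = split(l, k)
        return ll, b, join(lr, m, r)
    rl, b, rr = split(r, k)
    return join(l, m, rl), b, rr

def union(t1, t2):
    if t1 is None:
        return t2
    if t2 is None:
        return t1
    l2, k2, r2 = expose(t2)
    l1, _, r1 = split(t1, k2)
    # the two recursive calls are independent and could run in parallel
    return join(union(l1, l2), k2, union(r1, r2))

def insert(t, k):             # insert via split + join, as in Figure 1
    l, _, r = split(t, k)
    return join(l, k, r)

def keys(t):                  # in-order traversal
    return [] if t is None else keys(t.left) + [t.key] + keys(t.right)

def build(xs):
    t = None
    for x in xs:
        t = insert(t, x)
    return t

a = build([1, 4, 7, 9])
b = build([2, 4, 8])
print(keys(union(a, b)))      # [1, 2, 4, 7, 8, 9]
```

Note that union, insert, and split never inspect balance information; only join would.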
Adams later described very elegant algorithms for union, intersection, and difference, as well as other functions
using weight-balanced trees, based on a single function, J OIN [1, 2] (see Figure 1). The algorithms are naturally
parallel. The J OIN(L, k, R) function takes a key k and two ordered sets L and R such that L < k < R and returns the
union of the keys [28, 26]. J OIN can be used to implement J OIN 2(L, R), which does not take the key in the middle,
and S PLIT(T, k), which splits a tree at a key k returning the two pieces and a flag indicating if k is in T (See Section
4). With these three functions, union, intersection, and difference (as well as insertion, deletion and other functions)
¹ By “ordered structure” we mean any data structure that can output elements in sorted order without any comparisons.
are almost trivial. Because of this, at least three libraries use Adams’ algorithms for their implementations of ordered
sets and tables (Haskell [19], MIT/GNU Scheme, and SML).
JOIN can be implemented on a variety of different balanced tree schemes. Sleator and Tarjan describe an algorithm
for JOIN based on splay trees which runs in amortized logarithmic time [26]. Tarjan describes a version for red-black
trees that runs in worst-case logarithmic time [28]. Adams describes a version based on weight-balanced trees [1].²
Adams’ algorithms were proposed in an international competition for the Standard ML community, whose task was
implementing “sets of integers”. Prizes were awarded in two categories: fastest algorithm, and most elegant yet
still efficient program. Adams won the elegance award, while his algorithm was as fast as the fastest program for very
large sets, and faster for smaller sets. Adams’ algorithms actually show that in principle all balance criteria for
search trees can be captured by a single function JOIN, although he only considered weight-balanced trees.
Surprisingly, however, there have been almost no results on bounding the work (time) of Adams’ algorithms, either in
general or on specific trees. Adams informally argues that his algorithms take O(n + m) work for weight-balanced
trees, but that is a very loose bound. Blelloch and Miller later showed that similar algorithms for treaps [6] are optimal for
work (i.e., Θ(m log(n/m + 1))), and also parallel. Their algorithms, however, are specific to treaps. The problem with
bounding the work of Adams’ algorithms is that just bounding the time of SPLIT, JOIN and JOIN2 with logarithmic
costs is not sufficient.³ One needs additional properties of the trees.
The contribution of this paper is to give the first work-optimal bounds for Adams’ algorithms, and not only for
weight-balanced trees: we bound the work and depth of Adams’ algorithms (union, intersection and difference)
for four different balancing schemes: AVL trees, red-black trees, weight-balanced trees and treaps. We analyze exactly
the algorithms in Figure 1, and the bounds hold when either input tree is larger. We show that with appropriate (and
simple) implementations of J OIN for each of the four balancing schemes, we achieve asymptotically optimal bounds
on work. Furthermore the algorithms have O(log n log m) depth, and hence are highly parallel. To prove the bounds
on work we show that our implementations of J OIN satisfy certain conditions based on a rank we define for each tree
type. In particular the cost of J OIN must be proportional to the difference in ranks of two trees, and the rank of the
result of a join must be at most one more than the maximum rank of the two arguments.
In addition to the theoretical analysis of the algorithms, we implemented parallel versions of all of the algorithms
on all four tree types and describe a set of experiments. Our implementation is generic in the sense that we use
common code for the algorithms in Figure 1, and only wrote specialized code for each tree type for the J OIN function.
Our implementations of J OIN are as described in this paper. We compare performance across a variety of parameters.
We compare across the tree types, and interestingly all four balance criteria have very similar performance. We
measure the speedup on up to 64 cores and achieve close to a 46-fold speedup. We compare to other implementations,
including the set implementation in the C++ Standard Template library (STL) for sequential performance, and parallel
weight-balanced B-trees (WBB-trees) [12] and the multi-core standard template library (MCSTL) [13] for parallel
performance. We also compare for different data distributions. The conclusion from the experiments is that although
not always as fast as WBB-trees [12] on uniform distributions, the generic code is quite competitive, and on keys
with a skewed overlap (two Gaussians with different means), our implementation is much better than all the other
baselines.
Related Work Parallel set operations on two ordered sets have been well studied, but each previous algorithm only
works on one type of balanced tree. Paul, Vishkin, and Wagener studied bulk insertion and deletion on 2-3 trees in the
PRAM model [24]. Park and Park showed similar results for red-black trees [23]. These algorithms are not based
on JOIN. Katajainen [16] claimed an algorithm with O(m log(n/m + 1)) work and O(log n) depth using 2-3 trees, but
it was shown to contain some bugs so the bounds do not hold [6]. Akhremtsev and Sanders [4] (unpublished) recently
fixed that and proposed an algorithm based on (a, b)-trees with optimal work and O(log n) depth. Their algorithm
only works for (a, b)-trees, and they only gave an algorithm for UNION. Besides, our algorithms are much simpler and
easier to implement than theirs. Blelloch and Miller showed an algorithm similar to Adams’ (and to ours)
on treaps with optimal work and O(log n) depth on an EREW PRAM with scan operation using pipelining (implying
O(log n log m) depth on a plain EREW PRAM, and O(log∗ m log n) depth on a plain CRCW PRAM). The pipelining
² Adams’ version had some bugs in maintaining the balance, but these were later fixed [14, 27].
³ Bounding the cost of JOIN, SPLIT, and JOIN2 by the logarithm of the smaller tree is probably sufficient, but implementing a data structure with such bounds is very much more complicated.
is quite complicated. Our focus in this paper is on showing that very simple algorithms are work efficient and have
polylogarithmic depth, rather than on optimizing the depth.
Many researchers have considered concurrent implementations of balanced search trees (e.g., [17, 18, 10, 21]).
None of these is work efficient for U NION since it is necessary to insert one tree into the other taking work at least
O(m log n). Furthermore the overhead of concurrency is likely to be very high.
2 Preliminaries
A binary tree is either a Leaf, or a node consisting of a left binary tree TL , a value (or key) v, and a right binary
tree TR , and denoted Node(TL , v, TR ). The size of a binary tree, or |T |, is 0 for a Leaf and |TL | + |TR | + 1 for a
Node(TL , v, TR ). The weight of a binary tree, or w(T ), is one more than its size (i.e., the number of leaves in the
tree). The height of a binary tree, or h(T ), is 0 for a Leaf, and max(h(TL ), h(TR )) + 1 for a Node(TL , v, TR ).
Parent, child, ancestor and descendant are defined as usual (ancestor and descendant are inclusive of the node itself).
The left spine of a binary tree is the path of nodes from the root to a leaf always following the left tree, and the right
spine the path to a leaf following the right tree. The in-order values of a binary tree are the sequence of values returned
by an in-order traversal of the tree.
A balancing scheme for binary trees is an invariant (or set of invariants) that is true for every node of a tree, and
is for the purpose of keeping the tree nearly balanced. In this paper we consider four balancing schemes that ensure
the height of every tree of size n is bounded by O(log n). For each balancing scheme we define the rank of a tree, or
r(T ).
AVL trees [3] have the invariant that for every Node(TL, v, TR), the heights of TL and TR differ by at most one.
This property implies that any AVL tree of size n has height at most log_φ(n + 1), where φ = (1 + √5)/2 is the golden ratio.
For AVL trees r(T) = h(T) − 1.
Red-black (RB) trees [5] associate a color with every node and maintain two invariants: (the red rule) no red
node has a red child, and (the black rule) the number of black nodes on every path from the root down to a leaf is
equal. Unlike some other presentations, we do not require that the root of a tree is black. Our proof of the work
bounds requires allowing a red root. We define the black height of a node T , denoted ĥ(T ) to be the number of black
nodes on a downward path from the node to a leaf (inclusive of the node). Any RB tree of size n has height at most
2 log2 (n + 1). In RB trees r(T ) = 2(ĥ(T ) − 1) if T is black and r(T ) = 2ĥ(T ) − 1 if T is red.
Weight-balanced (WB) trees with parameter α (also called BB[α] trees) [22] maintain for every T = Node(TL, v, TR)
the invariant α ≤ w(TL)/w(T) ≤ 1 − α. We say two weight-balanced trees T1 and T2 have like weights if Node(T1, v, T2) is
weight balanced. Any α weight-balanced tree of size n has height at most log_{1/(1−α)} n. For 2/11 < α ≤ 1 − 1/√2, insertion
and deletion can be implemented on weight-balanced trees using just single and double rotations [22, 7]. We require
the same condition for our implementation of JOIN, and in particular use α = 0.29 in experiments. For WB trees
r(T) = ⌈log2(w(T))⌉ − 1.
Treaps [25] associate a uniformly random priority with every node and maintain the invariant that the priority
at each node is no less than the priority of its two children. Any treap of size n has height O(log n) with high
probability (w.h.p.).⁴ For treaps r(T) = ⌈log2(w(T))⌉ − 1.
For all the four balancing schemes r(T ) = Θ(log(|T | + 1)). The notation we use for binary trees is summarized
in Table 1.
A Binary Search Tree (BST) is a binary tree in which each value is a key taken from a total order, and for which
the in-order values are sorted. A balanced BST is a BST maintained with a balancing scheme, and is an efficient way
to represent ordered sets.
Our algorithms are based on nested parallelism with nested fork-join constructs and no other synchronization or
communication among parallel tasks.5 All algorithms are deterministic. We use work (W ) and span (S) to analyze
asymptotic costs, where the work is the total number of operations and span is the critical path. We use the simple
composition rules W (e1 || e2 ) = W (e1 ) + W (e2 ) + 1 and S(e1 || e2 ) = max(S(e1 ), S(e2 )) + 1. For sequential
⁴ Here w.h.p. means height O(c log n) with probability at least 1 − 1/n^c (c is a constant).
⁵ This does not preclude using our algorithms in a concurrent setting.
Notation      Description
|T|           The size of tree T
h(T)          The height of tree T
ĥ(T)          The black height of an RB tree T
r(T)          The rank of tree T
w(T)          The weight of tree T (i.e., |T| + 1)
p(T)          The parent of node T
k(T)          The value (or key) of node T
L(T)          The left child of node T
R(T)          The right child of node T
expose(T)     (L(T), k(T), R(T))

Table 1: Summary of notation.
computation both work and span compose with addition. Any computation with W work and S span will run in time
T < W/P + S assuming a PRAM (random access shared memory) with P processors and a greedy scheduler [9, 8].
3 The JOIN Function
Here we describe algorithms for J OIN for the four balancing schemes we defined in Section 2. The function J OIN(TL , k, TR )
takes two binary trees TL and TR , and a value k, and returns a new binary tree for which the in-order values are a
concatenation of the in-order values of TL , then k, and then the in-order values of TR .
As mentioned in the introduction and shown in Section 4, J OIN fully captures what is required to rebalance a tree
and can be used as the only function that knows about and maintains the balance invariants. For AVL, RB and WB
trees we show that J OIN takes work that is proportional to the difference in rank of the two trees. For treaps the work
depends on the priority of k. All versions of J OIN are sequential so the span is equal to the work. Due to space
limitations, we describe the algorithms, state the theorems for all balancing schemes, but only show a proof outline for
AVL trees.
1  joinRight(TL, k, TR) =
2    (l, k′, c) = expose(TL);
3    if h(c) ≤ h(TR) + 1 then
4      T′ = Node(c, k, TR);
5      if h(T′) ≤ h(l) + 1 then Node(l, k′, T′)
6      else rotateLeft(Node(l, k′, rotateRight(T′)))
7    else
8      T′ = joinRight(c, k, TR);
9      T′′ = Node(l, k′, T′);
10     if h(T′) ≤ h(l) + 1 then T′′
11     else rotateLeft(T′′)

12 join(TL, k, TR) =
13   if h(TL) > h(TR) + 1 then joinRight(TL, k, TR)
14   else if h(TR) > h(TL) + 1 then joinLeft(TL, k, TR)
15   else Node(TL, k, TR)
Figure 2: AVL J OIN algorithm.
AVL trees. Pseudocode for AVL J OIN is given in Figure 2 and illustrated in Figure 6. Every node stores its own
height so that h(·) takes constant time. If the two trees TL and TR differ by height at most one, J OIN can simply
create a new Node(TL , k, TR ). However if they differ by more than one then rebalancing is required. Suppose that
h(TL ) > h(TR ) + 1 (the other case is symmetric). The idea is to follow the right spine of TL until a node c for which
h(c) ≤ h(TR ) + 1 is found (line 3). At this point a new Node(c, k, TR ) is created to replace c (line 4). Since either
1  joinRightRB(TL, k, TR) =
2    if r(TL) = ⌊r(TR)/2⌋ × 2 then
3      Node(TL, ⟨k, red⟩, TR)
4    else
5      (L′, ⟨k′, c′⟩, R′) = expose(TL);
6      T′ = Node(L′, ⟨k′, c′⟩, joinRightRB(R′, k, TR));
7      if (c′ = black) and (c(R(T′)) = c(R(R(T′))) = red) then
8        c(R(R(T′))) = black;
9        rotateLeft(T′)
10     else T′

11 joinRB(TL, k, TR) =
12   if ⌊r(TL)/2⌋ > ⌊r(TR)/2⌋ then
13     T′ = joinRightRB(TL, k, TR);
14     if (c(T′) = red) and (c(R(T′)) = red) then
15       Node(L(T′), ⟨k(T′), black⟩, R(T′))
16     else T′
17   else if ⌊r(TR)/2⌋ > ⌊r(TL)/2⌋ then
18     T′ = joinLeftRB(TL, k, TR);
19     if (c(T′) = red) and (c(L(T′)) = red) then
20       Node(L(T′), ⟨k(T′), black⟩, R(T′))
21     else T′
22   else if (c(TL) = black) and (c(TR) = black) then
23     Node(TL, ⟨k, red⟩, TR)
24   else Node(TL, ⟨k, black⟩, TR)
Figure 3: RB J OIN algorithm.
1  joinRightWB(TL, k, TR) =
2    (l, k′, c) = expose(TL);
3    if balance(|TL|, |TR|) then Node(TL, k, TR)
4    else
5      T′ = joinRightWB(c, k, TR);
6      (l1, k1, r1) = expose(T′);
7      if like(|l|, |T′|) then Node(l, k′, T′)
8      else if like(|l|, |l1|) and like(|l| + |l1|, |r1|) then
9        rotateLeft(Node(l, k′, T′))
10     else rotateLeft(Node(l, k′, rotateRight(T′)))

11 joinWB(TL, k, TR) =
12   if heavy(TL, TR) then joinRightWB(TL, k, TR)
13   else if heavy(TR, TL) then joinLeftWB(TL, k, TR)
14   else Node(TL, k, TR)
Figure 4: WB J OIN algorithm.
1  joinTreap(TL, k, TR) =
2    if prior(k, k1) and prior(k, k2) then Node(TL, k, TR)
3    else (l1, k1, r1) = expose(TL);
4      (l2, k2, r2) = expose(TR);
5      if prior(k1, k2) then
6        Node(l1, k1, joinTreap(r1, k, TR))
7      else Node(joinTreap(TL, k, l2), k2, r2)
Figure 5: Treap J OIN algorithm.
h(c) = h(TR ) or h(c) = h(TR ) + 1, the new node satisfies the AVL invariant, and its height is one greater than c. The
increase in height can increase the height of its ancestors, possibly invalidating the AVL invariant of those nodes. This
can be fixed either with a double rotation if invalid at the parent (line 6) or a single left rotation if invalid higher in the
tree (line 11), in both cases restoring the height for any further ancestor nodes. The algorithm will therefore require at
most two rotations.
Figure 6: An example of JOIN on AVL trees (h(TL) > h(TR) + 1). We first follow the right spine of TL until a
subtree of height at most h(TR) + 1 is found (i.e., T2 rooted at c). Then a new Node(c, k, TR) is created, replacing
c (Step 1). If h(T1) = h and h(T2) = h + 1, the node p will no longer satisfy the AVL invariant. A double rotation
(Step 2) restores both balance and its original height.
Lemma 1. For two AVL trees TL and TR , the AVL J OIN algorithm works correctly, runs with O(|h(TL ) − h(TR )|)
work, and returns a tree satisfying the AVL invariant with height at most 1 + max(h(TL ), h(TR )).
Proof outline. Since the algorithm only visits nodes on the path from the root to c, and only requires at most two
rotations, it does work proportional to the path length. The path length is no more than the difference in height of the
two trees since the height of each consecutive node along the right spine of TL differs by at least one. Along with the
case when h(TR ) > h(TL ) + 1, which is symmetric, this gives the stated work bounds. The resulting tree satisfies the
AVL invariants since rotations are used to restore the invariant (details left out). The height of any node can increase
by at most one, so the height of the whole tree can increase by at most one.
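To make the algorithm fully concrete, here is a direct Python transcription of Figure 2 (a sketch: the Node class, the cached-height helper h, and the rotation helpers are our naming, and joinLeft is written out as the mirror of joinRight). The checks at the end verify the AVL invariant and the in-order key sequence on small examples.

```python
# AVL JOIN as in Figure 2. Nodes cache their height, so h() is O(1),
# and all operations build new nodes (no mutation).

class Node:
    def __init__(self, l, k, r):
        self.l, self.k, self.r = l, k, r
        self.h = 1 + max(h(l), h(r))

def h(t):
    return 0 if t is None else t.h

def rot_left(t):              # t's right child becomes the new root
    x = t.r
    return Node(Node(t.l, t.k, x.l), x.k, x.r)

def rot_right(t):             # t's left child becomes the new root
    x = t.l
    return Node(x.l, x.k, Node(x.r, t.k, t.r))

def join_right(tl, k, tr):    # case h(tl) > h(tr) + 1
    l, k2, c = tl.l, tl.k, tl.r
    if h(c) <= h(tr) + 1:     # found the spine node to replace
        t1 = Node(c, k, tr)
        if h(t1) <= h(l) + 1:
            return Node(l, k2, t1)
        return rot_left(Node(l, k2, rot_right(t1)))   # double rotation
    t1 = join_right(c, k, tr)
    t2 = Node(l, k2, t1)
    return t2 if h(t1) <= h(l) + 1 else rot_left(t2)  # single rotation

def join_left(tl, k, tr):     # mirror case h(tr) > h(tl) + 1
    c, k2, r = tr.l, tr.k, tr.r
    if h(c) <= h(tl) + 1:
        t1 = Node(tl, k, c)
        if h(t1) <= h(r) + 1:
            return Node(t1, k2, r)
        return rot_right(Node(rot_left(t1), k2, r))
    t1 = join_left(tl, k, c)
    t2 = Node(t1, k2, r)
    return t2 if h(t1) <= h(r) + 1 else rot_right(t2)

def join(tl, k, tr):
    if h(tl) > h(tr) + 1:
        return join_right(tl, k, tr)
    if h(tr) > h(tl) + 1:
        return join_left(tl, k, tr)
    return Node(tl, k, tr)

def avl_ok(t):                # check the AVL invariant everywhere
    return t is None or (abs(h(t.l) - h(t.r)) <= 1
                         and avl_ok(t.l) and avl_ok(t.r))

def keys(t):
    return [] if t is None else keys(t.l) + [t.k] + keys(t.r)

big = None
for x in range(20):           # build an AVL tree of 0..19 by joins
    big = join(big, x, None)
t = join(big, 25, Node(None, 30, None))       # much taller left tree
t2 = join(Node(None, -5, None), -2, big)      # much taller right tree
assert avl_ok(t) and keys(t) == list(range(20)) + [25, 30]
assert avl_ok(t2) and keys(t2) == [-5, -2] + list(range(20))
```

Since nodes are never mutated, subtrees can be shared between inputs and results, which is also why reusing big above is safe.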
Red-black Trees. Tarjan describes how to implement the J OIN function for red-black trees [28]. Here we describe
a variant that does not assume the roots are black (this is to bound the increase in rank by U NION). The pseudocode
is given in Figure 3. We store at every node its black height ĥ(·). The first case is when ĥ(TR ) = ĥ(TL ). Then if
both k(TR ) and k(TL ) are black, we create red Node(TL , k, TR ), otherwise we create black Node(TL , k, TR ). The
second case is when ĥ(TR ) < ĥ(TL ) = ĥ (the third case is symmetric). Similarly to AVL trees, J OIN follows the right
spine of TL until it finds a black node c for which ĥ(c) = ĥ(TR ). It then creates a new red Node(c, k, TR ) to replace
c. Since both c and TR have the same height, the only invariant that can be violated is the red rule on the root of TR ,
the new node, and its parent, which can all be red. In the worst case we may have three red nodes in a row. This is
fixed by a single left rotation: if a black node v has R(v) and R(R(v)) both red, we turn R(R(v)) black and perform
a single left rotation on v. The update is illustrated in Figure 7. The rotation, however, can again violate the red rule
between the root of the rotated subtree and its parent, requiring another rotation. The double-red issue might proceed up
to the root of TL. If the original root of TL is red, the algorithm may end up with a red root with a red child, in which
case the root will be turned black, turning the rank of TL from 2ĥ − 1 to 2ĥ. If the original root of TL is black, the algorithm
may end up with a red root with two black children, turning the rank of TL from 2ĥ − 2 to 2ĥ − 1. In both cases the
rank of the resulting tree is at most 1 + r(TL).
Lemma 2. For two RB trees TL and TR , the RB J OIN algorithm works correctly, runs with O(|r(TL ) − r(TR )|) work,
and returns a tree satisfying the red-black invariants and with rank at most 1 + max(r(TL ), r(TR )).
The proof is similar to that of Lemma 1.
Weight-Balanced Trees. We store the weight of each subtree at every node. The algorithm for joining two weight-balanced trees is similar to that for AVL trees and RB trees. The pseudocode is shown in Figure 4. The like function
in the code returns true if the two input tree sizes are balanced, and false otherwise. If TL and TR have like weights
the algorithm returns a new Node(TL , k, TR ). Suppose |TR | ≤ |TL |, the algorithm follows the right branch of TL
until it reaches a node c with like weight to TR . It then creates a new Node(c, k, TR ) replacing c. The new node will
have weight greater than c and therefore could imbalance the weight of c’s ancestors. This can be fixed with a single
or double rotation (as shown in Figure 8) at each node assuming α is within the bounds given in Section 2.
Figure 7: An example of J OIN on red-black trees (ĥ = ĥ(TL ) > ĥ(TR )). We follow the right spine of TL until we
find a black node with the same black height as TR (i.e., c). Then a new red Node(c, k, TR ) is created, replacing c
(Step 1). The only invariant that can be violated is when either c’s previous parent p or TR ’s root d is red. If so, a left
rotation is performed at some black node. Step 2 shows the rebalance when p is red. The black height of the rotated
subtree (now rooted at p) is the same as before (h + 1), but the parent of p might be red, requiring another rotation. If
the red-rule violation propagates to the root, the root is either colored red, or rotated left (Step 3).
Lemma 3. For two α weight-balanced trees TL and TR and α ≤ 1 − 1/√2 ≈ 0.29, the weight-balanced JOIN algorithm
works correctly, runs with O(|log(w(TL)/w(TR))|) work, and returns a tree satisfying the α weight-balance invariant
and with rank at most 1 + max(r(TL), r(TR)).
The proof is shown in the Appendix.
Treaps. The treap JOIN algorithm (as in Figure 5) first picks the key with the highest priority among k, k(TL) and
k(TR) as the root. If k is the root then we can return Node(TL, k, TR). Otherwise, WLOG, assume k(TL) has
a higher priority. In this case k(TL) will be the root of the result, L(TL) will be the left tree, and R(TL), k and TR
will form the right tree. Thus JOIN recursively calls itself on R(TL), k and TR and uses the result as k(TL)’s right child.
When k(TR ) has a higher priority the case is symmetric. The cost of J OIN is therefore the depth of the key k in the
resulting tree (each recursive call pushes it down one level). In treaps the shape of the result tree, and hence the depth
of k, depend only on the keys and priorities and not the history. Specifically, if a key has the tth highest priority among
the keys, then its expected depth in a treap is O(log t) (also w.h.p.). If it is the highest priority, for example, then it
remains at the root.
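The recursion above can be sketched in a few lines of runnable Python (ours: names are invented, priorities come from a seeded PRNG, and the figure's prior(a, b) becomes a comparison of stored priorities).

```python
# Treap JOIN as in Figure 5: the key with the highest priority among
# k, k(TL) and k(TR) becomes the root.

import random

class Node:
    def __init__(self, l, k, r, p):
        self.l, self.k, self.r, self.p = l, k, r, p

def join(tl, k, kp, tr):      # kp is the priority paired with key k
    if (tl is None or tl.p < kp) and (tr is None or tr.p < kp):
        return Node(tl, k, tr, kp)     # k wins: it becomes the root
    if tr is None or (tl is not None and tl.p > tr.p):
        # k(TL) wins: recurse into TL's right subtree
        return Node(tl.l, tl.k, join(tl.r, k, kp, tr), tl.p)
    # k(TR) wins: recurse into TR's left subtree
    return Node(join(tl, k, kp, tr.l), tr.k, tr.r, tr.p)

def heap_ok(t):               # parent priority >= child priorities
    if t is None:
        return True
    for c in (t.l, t.r):
        if c is not None and c.p > t.p:
            return False
    return heap_ok(t.l) and heap_ok(t.r)

def keys(t):
    return [] if t is None else keys(t.l) + [t.k] + keys(t.r)

rng = random.Random(1)
t = None
for x in range(10):           # repeatedly join the next key on the right
    t = join(t, x, rng.random(), None)
assert heap_ok(t) and keys(t) == list(range(10))
```

Each recursive call pushes k down one level, matching the statement that the cost of JOIN is the depth of k in the result.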
Lemma 4. For two treaps TL and TR , if the priority of k is the t-th highest among all keys in TL ∪ {k} ∪ TR , the treap
J OIN algorithm works correctly, runs with O(log t + 1) work in expectation and w.h.p., and returns a tree satisfying
the treap invariant with rank at most 1 + max(r(TL ), r(TR )).
From the above lemmas we can get the following fact for J OIN.
Theorem 1. For AVL, RB and WB trees J OIN(TL , k, TR ) does O(|r(TL ) − r(TR )|) work. For treaps J OIN does
O(log t) work in expectation if k has the t-th highest priority among all keys. For AVL, RB, WB trees and treaps, J OIN
returns a tree T for which the rank satisfies max(r(TL ), r(TR )) ≤ r(T ) ≤ 1 + max(r(TL ), r(TR )).
Figure 8: An illustration of single and double rotations possibly needed to rebalance weight-balanced trees. In the
figure the subtree rooted at u has become heavier due to joining in TL and its parent v now violates the balance
invariant.
4 Other Functions Using JOIN
In this section, we describe algorithms for various functions that use just JOIN. The algorithms are generic across
balancing schemes. The pseudocode for the algorithms in this section is shown in Figure 1.
Split. For a BST T and key k, SPLIT(T, k) returns a triple (TL, b, TR), where TL (respectively TR) is a tree containing all keys
in T that are less than (greater than) k, and b is a flag indicating whether k ∈ T. The algorithm first searches for k in T,
splitting the tree along the path into three parts: keys to the left of the path, k itself (if it exists), and keys to the right.
Then, by applying JOIN, the algorithm merges all the subtrees on the left side (using keys on the path as intermediate
nodes) from bottom to top to form TL, and merges the right parts to form TR. Figure 9 gives an example.
Figure 9: An example of SPLIT in a BST with key 42. We first search for 42 in the tree and split the tree along the
search path, then use JOIN to combine the subtrees on the left and on the right respectively, bottom-up.
Theorem 2. The work of S PLIT(T, k) is O(log |T |) for all balancing schemes described in Section 3 (w.h.p. for
treaps). The two resulting trees TL and TR will have rank at most r(T ).
Proof. We only consider the work of joining all subtrees on the left side; the other side is symmetric. Suppose we have
l subtrees on the left side, denoted from bottom to top as T1, T2, . . . , Tl. We have that r(T1) ≤ r(T2) ≤ · · · ≤ r(Tl).
As stated above, we consecutively join T1 and T2 returning T2′, then join T2′ with T3 returning T3′, and so forth, until
all trees are merged. The overall work of SPLIT is the sum of the cost of the l − 1 JOIN operations.
For AVL trees, red-black trees and weight-balanced trees, recall from Theorem 1 that r(Ti′) ≤ r(Ti) + 1, so
r(Ti′) ≤ r(Ti) + 1 ≤ r(Ti+1) + 1. Also by Theorem 1, the work of a single JOIN is O(|r(Ti+1) − r(Ti′)|).
The overall complexity is Σ_i |r(T_{i+1}) − r(T_i′)| ≤ Σ_i (r(T_{i+1}) − r(T_i′) + 2) = O(r(T)) = O(log |T|).
For treaps, each JOIN uses the key with the highest priority, since that key is always on an upper level. Hence by
Lemma 4, the complexity of each JOIN is O(1) and the work of SPLIT is at most O(log |T|) w.h.p.
Also notice that when producing the final results TL and TR, the last step is a JOIN of two trees, the larger of
which is a subtree of the original T. Thus the two trees being joined have rank at most r(T) − 1, and by Theorem 1
both r(TL) and r(TR) are at most r(T).
Join2. JOIN2(TL, TR) returns a binary tree whose in-order values are the concatenation of the in-order values
of the binary trees TL and TR (the same as JOIN but without the middle key). For BSTs, all keys in TL have to be less
than all keys in TR. JOIN2 first finds the last element k of TL (by following the right spine) and, on the way back to the root,
joins the subtrees along the path, much as SPLIT would when splitting TL at k. We denote the result of dropping k from TL as TL′.
Then JOIN(TL′, k, TR) is the result of JOIN2. Unlike JOIN, the work of JOIN2 is proportional to the rank of both trees
since both SPLIT and JOIN take at most logarithmic work.

Theorem 3. The work of JOIN2(TL, TR) is O(r(TL) + r(TR)) for all balancing schemes described in Section 3
(bounds are w.h.p. for treaps).
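The splitLast-then-JOIN structure can be sketched as follows (ours; balance is again ignored, so join simply builds a node and the logarithmic bounds do not apply to this toy version).

```python
# JOIN2 as in Figure 1: remove the last key of TL with splitLast, then
# JOIN the pieces around it.

class Node:
    def __init__(self, l, k, r):
        self.l, self.k, self.r = l, k, r

def join(l, k, r):
    return Node(l, k, r)      # stand-in for a balancing JOIN

def split_last(t):            # drop and return the rightmost key
    if t.r is None:
        return t.l, t.k
    rest, last = split_last(t.r)
    return join(t.l, t.k, rest), last

def join2(tl, tr):
    if tl is None:
        return tr
    rest, last = split_last(tl)
    return join(rest, last, tr)

def keys(t):
    return [] if t is None else keys(t.l) + [t.k] + keys(t.r)

left = Node(Node(None, 1, None), 2, Node(None, 3, None))
right = Node(None, 5, Node(None, 8, None))
assert keys(join2(left, right)) == [1, 2, 3, 5, 8]
```

With a balancing join plugged in, split_last costs one pass down the right spine and join2 adds one more JOIN, giving the O(r(TL) + r(TR)) bound of Theorem 3.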
Union, Intersect and Difference. UNION(T1, T2) takes two BSTs and returns a BST that contains the union of all
keys. The algorithm uses a classic divide-and-conquer strategy, which is parallel. At each level of recursion, T1 is split
by k(T2), breaking T1 into three parts: one with all keys smaller than k(T2) (denoted as L1), one in the middle containing
either the single key equal to k(T2) (when k(T2) ∈ T1) or nothing (when k(T2) ∉ T1), and a third with all keys larger than
k(T2) (denoted as R1). Then two recursive calls to UNION are made in parallel: one unions L(T2) with L1, returning
TL, and the other unions R(T2) with R1, returning TR. Finally the algorithm returns JOIN(TL, k(T2), TR), which
is valid since k(T2) is greater than all keys in TL and less than all keys in TR.
The functions INTERSECT(T1, T2) and DIFFERENCE(T1, T2) take the intersection and difference of the keys in
their sets, respectively. The algorithms are similar to UNION in that they use one tree to split the other. However, the
method for joining differs. For INTERSECT, JOIN2 is used instead of JOIN when the key k(T2) is not found in T1.
For DIFFERENCE, JOIN2 is always used because k(T2) must be excluded from the result tree. The base
cases are also different in the obvious way.
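The following runnable sketch (ours, with the same unbalanced stand-in for join as before) fills in these differences and checks the results against Python's built-in sets.

```python
# INTERSECT and DIFFERENCE in the style of Figure 1, over plain BST
# nodes; join ignores balance, so only the control flow is shown.

class Node:
    def __init__(self, l, k, r):
        self.l, self.k, self.r = l, k, r

def join(l, k, r):
    return Node(l, k, r)

def split_last(t):
    if t.r is None:
        return t.l, t.k
    rest, last = split_last(t.r)
    return join(t.l, t.k, rest), last

def join2(tl, tr):
    if tl is None:
        return tr
    rest, last = split_last(tl)
    return join(rest, last, tr)

def split(t, k):
    if t is None:
        return None, False, None
    if k == t.k:
        return t.l, True, t.r
    if k < t.k:
        ll, b, lr = split(t.l, k)
        return ll, b, join(lr, t.k, t.r)
    rl, b, rr = split(t.r, k)
    return join(t.l, t.k, rl), b, rr

def intersect(t1, t2):
    if t1 is None or t2 is None:
        return None
    l1, b, r1 = split(t1, t2.k)
    tl, tr = intersect(l1, t2.l), intersect(r1, t2.r)
    return join(tl, t2.k, tr) if b else join2(tl, tr)

def difference(t1, t2):       # keys in t1 that are not in t2
    if t1 is None:
        return None
    if t2 is None:
        return t1
    l1, _, r1 = split(t1, t2.k)
    return join2(difference(l1, t2.l), difference(r1, t2.r))

def build(xs):
    t = None
    for x in xs:
        l, _, r = split(t, x)
        t = join(l, x, r)
    return t

def keys(t):
    return [] if t is None else keys(t.l) + [t.k] + keys(t.r)

a, b = {1, 3, 5, 7, 9, 12}, {3, 4, 5, 12, 20}
ta, tb = build(a), build(b)
assert keys(intersect(ta, tb)) == sorted(a & b)
assert keys(difference(ta, tb)) == sorted(a - b)
```

Since nodes are never mutated, ta and tb can be reused across calls.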
Theorem 4 (Main Theorem). For all four balancing schemes mentioned in Section 3, the work and span of the
algorithms (as shown in Figure 1) for UNION, INTERSECT or DIFFERENCE on two balanced BSTs of sizes m and n
(n ≥ m) are O(m log(n/m + 1)) and O(log n log m), respectively (the bounds are in expectation for treaps).
A generic proof of Theorem 4, working for all four balancing schemes, will be shown in the next section.
The work bound for these algorithms is optimal in the comparison-based model. In particular, considering all possible
interleavings, the minimum number of comparisons required to distinguish them is log2 ((m+n) choose n) = Θ(m log(n/m + 1))
[15].
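As a quick numerical sanity check (ours, not from the paper), one can compare log2 C(m+n, n) against the simple form m log2(n/m + 1) for fixed m and growing n; the ratio stays within a small constant factor, as the Θ-bound predicts.

```python
# Compare log2 of the binomial coefficient with m * log2(n/m + 1).

import math

def log2_binom(a, b):         # log2 of (a choose b) via lgamma
    ln = math.lgamma(a + 1) - math.lgamma(b + 1) - math.lgamma(a - b + 1)
    return ln / math.log(2)

m = 100
for n in (10**3, 10**4, 10**5):
    exact = log2_binom(m + n, m)
    approx = m * math.log2(n / m + 1)
    assert 0.5 < exact / approx < 2.0   # within a constant factor
```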
Other Functions. Many other functions can be implemented with J OIN. Pseudocode for I NSERT and D ELETE was
given in Figure 1. For a tree of size n they both take O(log n) work.
5 The Proof of the Main Theorem
In this section we prove Theorem 4 for all four balancing schemes (AVL trees, RB trees, WB trees and treaps) and all
three set algorithms (UNION, INTERSECT, DIFFERENCE) from Figure 1. For this purpose we make two observations.
The first is that all the work for the algorithms can be accounted for within a constant factor by considering just the
Notation           Description
Tp                 The pivot tree
Td                 The decomposed tree
n                  max(|Tp|, |Td|)
m                  min(|Tp|, |Td|)
Tp(v), v ∈ Tp      The subtree rooted at v in Tp
Td(v), v ∈ Tp      The tree from Td that v splits⁶
si                 The number of nodes in layer i
vkj                The j-th node on layer k in Tp
d(v)               The number of nodes attached to a layer root v in a treap

Table 2: Descriptions of notations used in the proof.
work done by the SPLITs and the JOINs (or JOIN2s), which we refer to as split work and join work, respectively. This
is because the work done between each split and join is constant. The second observation is that the split work is
identical among the three set algorithms. This is because the control flow of the three algorithms is the same on the
way down the recursion when doing SPLITs; the algorithms only differ in what they do at the base case and on the
way up the recursion when they join.
Given these two observations, we prove the bounds on work by first showing that the join work is asymptotically
at most as large as the split work (by showing that this is true at every node of the recursion for all three algorithms),
and then showing that the split work for U NION (and hence the others) satisfies our claimed bounds.
We start with some notation, which is summarized in Table 2. In the three algorithms the first tree (T1 ) is split
by the keys in the second tree (T2 ). We therefore call the first tree the decomposed tree and the second the pivot tree,
denoted as Td and Tp respectively. The tree that is returned is denoted as Tr . Since our proof works for either tree
being larger, we use m = min(|Tp |, |Td |) and n = max(|Tp |, |Td |). We denote the subtree rooted at v ∈ Tp as Tp (v),
and the tree of keys from Td that v splits as Td(v) (i.e., SPLIT(v, Td(v)) is called at some point in the algorithm). For
v ∈ Tp, we refer to |Td(v)| as its splitting size.
Figure 10 (a) illustrates the pivot tree with the splitting size annotated on each node. Since SPLIT has logarithmic
work, we have

  split work = O( Σ_{v ∈ Tp} (log |Td(v)| + 1) ),
which we analyze in Theorem 6. We first, however, show that the join work is bounded by the split work. We use the
following Lemma, which is proven in the appendix.
Lemma 5. For Tr =U NION(Tp , Td ) on AVL, RB or WB trees, if r(Tp ) > r(Td ) then r(Tr ) ≤ r(Tp ) + r(Td ).
Theorem 5. For each function call to UNION, INTERSECT or DIFFERENCE on trees Tp(v) and Td(v), the work to do
the JOIN (or JOIN2) is asymptotically no more than the work to do the SPLIT.
Proof. For INTERSECT or DIFFERENCE, the cost of JOIN (or JOIN2) is O(log(|Tr| + 1)). Notice that DIFFERENCE returns the keys in Td \ Tp. Thus for both INTERSECT and DIFFERENCE we have Tr ⊆ Td. The join work is
O(log(|Tr| + 1)), which is no more than O(log(|Td| + 1)) (the split work).
For UNION, if r(Tp) ≤ r(Td), the JOIN will cost O(r(Td)), which is no more than the split work.
Consider r(Tp) > r(Td) for AVL, RB or WB trees. The ranks of L(Tp) and R(Tp), which are used in the recursive
calls, are at least r(Tp) − 1. Using Lemma 5, the ranks of the two trees returned by the two recursive calls will be at
least r(Tp) − 1 and at most r(Tp) + r(Td), and so differ by at most O(r(Td)) = O(log |Td| + 1). Thus the join cost
is O(log |Td| + 1), which is no more than the split work.
6 The nodes in Td(v) form a subset of Td, but not necessarily a subtree. See details later.
Figure 10: An illustration of the splitting tree and layers. The tree in (a) is Tp; the dashed circles are the exterior nodes.
The numbers on the nodes are the sizes of the trees from Td to be split by that node, i.e., the "splitting size" |Td(v)|. In
(b) is an illustration of layers on an AVL tree.
Consider r(Tp) > r(Td) for treaps. If r(Tp) > r(Td), then |Tp| ≥ |Td|. The root of Tp has the highest priority
among all |Tp| keys, so in expectation it takes at most the (|Tp| + |Td|)/|Tp| ≤ 2-th place among all the |Td| + |Tp| keys. From
Lemma 4 we know that the cost in expectation is E[log t] + 1 ≤ log E[t] + 1 ≤ log 2 + 1, which is a constant.
This implies the total join work is asymptotically bounded by the split work.
We now analyze the split work. We do this by layering the pivot tree, starting at the leaves and going to the root,
such that nodes in a layer are not ancestors of each other. We define layers based on the ranks and denote the size
of layer i as si . We show that si shrinks geometrically, which helps us prove our bound on the split work. For AVL
and RB trees, we group the nodes with rank i in layer i. For WB trees and treaps, we put a node v in layer i iff v has
rank i and v’s parent has rank strictly greater than i. Figure 10 (b) shows an example of the layers of an AVL tree.
Definition 1. In a BST, a set of nodes V is called a disjoint set if and only if for any two nodes v1 , v2 in V , v1 is not
the ancestor of v2 .
Lemma 6. For any disjoint set V ⊆ Tp, Σ_{v∈V} |Td(v)| ≤ |Td|.
The proof of this lemma is straightforward.
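The accounting behind Lemma 6 can be simulated directly. The sketch below is our own construction (not the paper's code): the pivot tree is taken to be the balanced BST over a sorted list `pivot`, each node splits the slice of `td` routed to it, and the recorded splitting sizes on any one level of the pivot tree sum to at most |Td|, since the nodes of a level form a disjoint set.

```python
# Simulation of splitting sizes |Td(v)|: every level of the pivot tree is a
# disjoint set, so the splitting sizes on one level sum to at most |Td|.
import bisect
from collections import defaultdict

def record_split_sizes(pivot, td, depth=0, sizes=None):
    """pivot, td: sorted lists of keys.  Returns {depth: [|Td(v)|, ...]}."""
    if sizes is None:
        sizes = defaultdict(list)
    if not pivot:
        return sizes
    mid = len(pivot) // 2              # root of this pivot subtree
    sizes[depth].append(len(td))       # this node splits len(td) keys
    cut = bisect.bisect_left(td, pivot[mid])
    record_split_sizes(pivot[:mid], td[:cut], depth + 1, sizes)
    record_split_sizes(pivot[mid + 1:], td[cut:], depth + 1, sizes)
    return sizes

pivot = list(range(0, 64, 2))          # |Tp| = 32
td = list(range(1, 100, 3))            # |Td| = 33
levels = record_split_sizes(pivot, td)
for sz in levels.values():
    assert sum(sz) <= len(td)          # Lemma 6: each level splits <= |Td| keys
```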
Lemma 7. For an AVL, RB, WB tree or a treap of size N, each layer is a disjoint set, and si ≤ N/c^⌊i/2⌋ holds for some
constant c > 1.
Proof. For AVL, RB, WB trees and treaps, a layer is obviously a disjoint set: a node and its ancestor cannot lie in the
same layer.
For AVL trees, consider a node in layer 2: it must have at least two descendants in layer 0. Thus s0 ≥ 2s2. Since
an AVL tree with its leaves removed is still an AVL tree, we have si ≥ 2si+2. Since s0 and s1 are no more than N,
we get that si < N/2^⌊i/2⌋.
For RB trees, the number of black nodes in layer 2i is more than twice and less than
four times as many as in layer 2(i + 1), i.e., s_{2i} ≥ 2s_{2i+2}. Also, the number of red nodes in layer 2i + 1 is no more
than the number of black nodes in layer 2i. Since s0 and s1 are no more than N, si < N/2^⌊i/2⌋.
For WB trees and treaps, the rank is defined as ⌈log2(w(T))⌉ − 1, which means that a node in layer i has weight
at least 2^i + 1. Thus si ≤ (N + 1)/(2^i + 1) ≤ N/2^i.
Not all nodes in a WB tree or a treap are assigned to a layer. We call a node a layer root if it is in a layer. We attach
each node u in the tree to the layer root that is u’s ancestor and has the same rank as u. We denote d(v) as the number
of descendants attached to a layer root v.
Lemma 8. For WB trees and treaps, if v is a layer root, d(v) is less than a constant (in expectation for treaps).
Furthermore, the random variables d(v) for all layer roots in a treap are i.i.d. (See the proof in the Appendix.)
By applying Lemmas 7 and 8 we bound the split work. In the following proof, we denote by v_k^j the j-th node in
layer k.

Theorem 6. The split work in UNION, INTERSECT and DIFFERENCE on two trees of sizes m and n is O(m log(n/m + 1)).
Proof. The total work of SPLIT is the sum of the logs of all the splitting sizes on the pivot tree, O( Σ_{v∈Tp} log(|Td(v)| + 1) ).
Denote by l the number of layers in the tree. Also, notice that in the pivot tree, in each layer there are at most |Td|
nodes with |Td(v_k^j)| > 0. Since those nodes with splitting sizes of 0 will not cost any work, we can assume si ≤ |Td|.
We calculate the dominant term Σ_{v∈Tp} log(|Td(v)| + 1) in the complexity by summing the work across layers:
  Σ_{k=0}^{l} Σ_{j=1}^{s_k} log(|Td(v_k^j)| + 1)
    ≤ Σ_{k=0}^{l} s_k log( (Σ_j (|Td(v_k^j)| + 1)) / s_k )
    = Σ_{k=0}^{l} s_k log( |Td|/s_k + 1 )

We split it into two cases. If |Td| ≥ |Tp|, then |Td|/s_k always dominates 1, and we have:

  Σ_{k=0}^{l} s_k log( |Td|/s_k + 1 )
    = Σ_{k=0}^{l} s_k log( n/s_k + 1 )                                       (1)
    ≤ Σ_{k=0}^{l} (m/c^⌊k/2⌋) log( n/(m/c^⌊k/2⌋) + 1 )                       (2)
    ≤ 2 Σ_{k=0}^{l/2} (m/c^k) log( n/(m/c^k) + 1 )                           (3)
    ≤ 2 Σ_{k=0}^{l/2} (m/c^k) log(n/m + 1) + 2 Σ_{k=0}^{l/2} (m/c^k) k log c
    = O(m log(n/m)) + O(m)
    = O(m log(n/m + 1))
If |Td| < |Tp|, then |Td|/s_k can be less than 1 when k is small, so the sum should be divided into two parts. Also note
that we only sum over the nodes with splitting size larger than 0. Even though there could be more than |Td| nodes in
one layer of Tp, only |Td| of them count. Thus we assume s_k ≤ |Td|, and we have:
  Σ_{k=0}^{l} s_k log( |Td|/s_k + 1 )
    = Σ_{k=0}^{l} s_k log( m/s_k + 1 )                                       (4)
    ≤ Σ_{k=0}^{2 log_c(n/m)} m log(1 + 1)
      + Σ_{k=2 log_c(n/m)}^{l} (n/c^⌊k/2⌋) log( m/(n/c^⌊k/2⌋) + 1 )          (5)
    ≤ O(m log(n/m)) + 2 Σ_{k'=0}^{(l − 2 log_c(n/m))/2} (m/c^{k'}) k' log c  (6)
    = O(m log(n/m)) + O(m)
    = O(m log(n/m + 1))
From (1) to (2) and (4) to (5) we apply Lemma 7 and the fact that f(x) = x log(n/x + 1) is monotonically increasing
when x ≤ n.
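The monotonicity fact used above is easy to spot-check numerically. The following is our own sketch (a sampled check, not a proof): it evaluates f(x) = x log₂(n/x + 1) at increasing sample points up to n and asserts the values are strictly increasing.

```python
# Spot-check: f(x) = x * log2(n/x + 1) is monotonically increasing for 0 < x <= n.
import math

def f(x, n):
    return x * math.log2(n / x + 1)

n = 10_000
xs = [1, 2, 5, 10, 100, 1000, 5000, n]
values = [f(x, n) for x in xs]
assert all(a < b for a, b in zip(values, values[1:]))
```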
For WB trees and treaps, the calculation above only includes the log of the splitting size on layer roots. We need to
further prove that the total sum of the logs of all splitting sizes is still O(m log(n/m + 1)). Applying Lemma 8, the expectation
is less than:

  E[ 2 Σ_{k=0}^{l} Σ_{j=1}^{s_k} d(v_k^j) log(|Td(v_k^j)| + 1) ]
    = E[d(v_k^j)] × 2 Σ_{k=0}^{l} Σ_{j=1}^{s_k} log(|Td(v_k^j)| + 1)
    = O(m log(n/m + 1))
For WB trees d(v_k^j) is no more than a constant, so we arrive at the same bound.
To conclude, the split work on all four balancing schemes for all three functions is O(m log(n/m + 1)).
Theorem 7. The total work of UNION, INTERSECT or DIFFERENCE for all four balancing schemes on two trees of
sizes m and n (n ≥ m) is O(m log(n/m + 1)).
This follows directly from Theorems 5 and 6.
Theorem 8. The span of UNION, INTERSECT or DIFFERENCE on all four balancing schemes is O(log n log m),
where n and m are the sizes of the two trees.
Proof. For the span of these algorithms, we denote by D(h1, h2) the span of UNION, INTERSECT or DIFFERENCE on
two trees of heights h1 and h2. According to Theorem 5, the work (and hence the span) of SPLIT and JOIN is O(log |Td|) =
O(h(Td)). We have:
D(h(Tp ), h(Td )) ≤ D(h(Tp ) − 1, h(Td )) + 2h(Td )
Thus D(h(Tp ), h(Td )) ≤ 2h(Tp )h(Td ) = O(log n log m).
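The unrolling step can be verified mechanically. This is our own tiny check, taking the base case of the recurrence to be 0 for simplicity (the O(1) base term only adds a lower-order amount):

```python
# Sanity check: unrolling D(hp, hd) <= D(hp - 1, hd) + 2*hd with D(0, hd) = 0
# gives exactly D(hp, hd) = 2 * hp * hd, matching the claimed bound.
def D(hp, hd):
    return 0 if hp == 0 else D(hp - 1, hd) + 2 * hd

for hp in range(1, 20):
    for hd in range(1, 20):
        assert D(hp, hd) == 2 * hp * hd
```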
Combining Theorems 7 and 8, we obtain Theorem 4.
6 Experiments
To evaluate the performance of our algorithms we performed several experiments across the four balancing schemes
using different set functions, while varying the core count and tree sizes. We also compare the performance of our
implementation to other existing libraries and algorithms.
Experiment setups and baseline algorithms For the experiments we use a 64-core machine with 4 x AMD Opteron(tm)
Processor 6278 (16 cores, 2.4GHz, 1600MHz bus and 16MB L3 cache). Our code was compiled using the g++ 4.8
compiler with the Cilk Plus extensions for nested parallelism. The only compilation flag we used was the -O2 optimization flag. In all our experiments we use keys of the double data type. The size of each node is about 40 bytes,
including the two child pointers, the key, the balance information, the size of the subtree, and a reference count. We
generate multiple sets varying in size from 10^4 to 10^8. Depending on the experiment the keys are drawn either from
a uniform or a Gaussian distribution. We use µ and σ to denote the mean and the standard deviation of the Gaussian
distribution. Throughout this section n and m represent the two input sizes for functions with two input sets (n ≥ m).
We test our algorithm by comparing it to other available implementations. This includes the sequential version
of the set functions defined in the C++ Standard Template Library (STL) [20] and STL’s std::set (implemented
by RB tree). The STL supports the set operations set union, set intersection, and set difference on
any container class. Using an std::vector these algorithms achieve a runtime of O(m + n). Since the STL does
not offer any parallel version of these functions we could only use it for sequential experiments. To see how well
our algorithm performs in a parallel setting, we compare it to parallel WBB-trees [12] and the MCSTL library [13].
WBB-trees, as well as the MCSTL, offer support for bulk insertions and deletions. They process the bulk updates
differently. The MCSTL first splits the main tree among p processors, based on the bulk sequence, and then inserts
the chunks dynamically into each subtree. The WBB-tree recursively inserts the bulk in parallel into the main tree. To
deal with heavily skewed sequences they use partial tree reconstruction for fixing imbalances, which takes constant
amortized time. The WBB-tree has a more cache aware layout, leading to a better cache utilization compared to both
the MCSTL and our implementation. To make a comparison with these implementations we use their bulk insertions,
which can also be viewed as an union of two sets. However notice that WBB-trees take the bulk in the form of a sorted
sequence, which gives them an advantage due to the faster access to the one array than to a tree, and far better cache
performance (8 keys per cache line as opposed to 1).
Comparing the balancing schemes and functions To compare the four balancing schemes we choose U NION as
the representative operation. Other operations would lead to similar results since all operations except J OIN are generic
across the trees. We compare the schemes across different thread counts and different sizes.
Figure 11 (a) shows the runtime of U NION for various tree sizes and all four balancing schemes across 64 cores.
The times are very similar among the balancing schemes—they differ by no more than 10%.
Figure 11 (b) shows the speedup curves for UNION on varying core numbers with two inputs of size 10^8. All
balancing schemes achieve a speedup of about 45 on 64 cores, and about 30 on 32 cores. The less-than-linear speedup
beyond 32 cores is not due to lack of parallelism, since when we ran the same experiments on significantly smaller
input (and hence less parallelism) we get very similar curves (not shown). Instead it seems to be due to saturation of
the memory bandwidth.
We use the AVL tree as the representative tree to compare different functions. Figure 11 (c) compares the UNION,
INTERSECT and DIFFERENCE functions. The size of the larger tree is fixed (10^8), while the size of the smaller tree
varies from 10^4 to 10^8. As the plot indicates, the three functions have very similar performance.
The experiments are a good indication of the performance of different balancing schemes and different functions,
while controlling other factors. The conclusion is that all schemes perform almost equally on all the set functions. It
is not surprising that all balancing schemes achieve similar performance because the dominant cost is in cache misses
along the paths in the tree, and all schemes keep the trees reasonably balanced. The AVL tree is always slightly faster
than the other trees and this is likely due to the fact that they maintain a slightly stricter balance than the other trees,
and hence the paths that need to be traversed are slightly shorter. For different set functions the performance is also as
expected given the similarity of the code.
Figure 11: (a) Times for UNION as a function of size (n = 10^8) for different BBSTs; (b) speedup of UNION for
different BBSTs; (c) times for various operations on AVL trees as a function of size (n = 10^8); (d) comparing STL's
set union with our UNION; (e, f, g, h) comparing our UNION to other parallel search trees; (e, h) input keys are
uniformly distributed doubles in the range of [0, 1]; (f, g) input keys follow a normal distribution of doubles; the
mean of the main tree is always µ1 = 0, while the mean of the bulk is µ2 = 1. Figure (f) uses a standard deviation of
σ = 0.25, while Figure (g) shows the performance across different standard deviations.
Given the result that the four balancing schemes do not have a big difference in timing and speedup, nor do the
three set functions, in the following experiments we use the AVL tree along with U NION to make comparisons with
other implementations.
Comparing to sequential implementations The STL supports set union on any sorted container class, including
sets based on red-black trees, and sorted vectors (arrays). The STL set union merges the two sorted containers by
moving from left to right on the two inputs, comparing the current values, and writing the lesser to the end of the output.
For two inputs of sizes n and m, m ≤ n, it takes O(m + n) time on std::vectors, and O((n + m) log(n + m)) time
on std::set (each insertion at the end of the output red-black tree takes O(log(n + m)) time). In the case of ordered
sets we can do better by inserting elements from the smaller set into the larger, leading to a time of O(m log(n + m)).
This is also what we do in our experiments. For vectors we stick with the available set union implementation.
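The two sequential strategies contrasted above can be sketched side by side. This is our own Python illustration (the paper's experiments use C++/STL): a linear merge, O(m + n), versus m binary searches into the larger sorted structure. Here the larger structure is a plain list, so each `insert` still pays an O(n) shift; a balanced tree would make it O(log n) and realize the O(m log(n + m)) bound.

```python
# Two sequential union strategies over sorted key lists.
import bisect

def union_by_merge(a, b):
    """Linear merge of two sorted key lists, dropping duplicates: O(m + n)."""
    out, i, j = [], 0, 0
    while i < len(a) and j < len(b):
        if a[i] < b[j]:
            out.append(a[i]); i += 1
        elif a[i] > b[j]:
            out.append(b[j]); j += 1
        else:
            out.append(a[i]); i += 1; j += 1
    return out + a[i:] + b[j:]

def union_by_insertion(larger, smaller):
    """Insert each of the m smaller keys into the larger list: m binary searches."""
    out = list(larger)
    for k in smaller:
        pos = bisect.bisect_left(out, k)
        if pos == len(out) or out[pos] != k:
            out.insert(pos, k)   # O(n) shift in a list; O(log n) in a BST
    return out

a, b = list(range(0, 1000, 2)), [3, 4, 501]
assert union_by_merge(a, b) == union_by_insertion(a, b) == sorted(set(a) | set(b))
```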
Figure 11 (d) gives a comparison of times for UNION. For equal lengths our implementation is about a factor of 3
faster than the set variant (red-black trees), and about 8 times slower than the vector variant. This is not surprising since
we are asymptotically faster than their red-black tree implementation, while their array-based implementation just reads
and writes the values, one by one, from flat arrays, and therefore has much less overhead and far fewer cache misses.
For taking the union of smaller and larger inputs, our UNION is orders of magnitude faster than either STL version.
This is because their theoretical work bounds (O(m + n) and O(m log(m + n))) are worse than our O(m log(n/m + 1)),
which is optimal in the comparison model.
Comparing to parallel implementations on various input distributions We compare our implementations to other
parallel search trees, such as the WBB-trees, as described in [12], and parallel RB trees from the MCSTL [13]. We
test the performance on different input distributions.
In Figure 11 (e) we show the result of UNION on uniformly distributed doubles in the range of [0, 1] across 64
cores. We set the input size to n = m = 10^i, for i from 4 to 8. The three implementations have similar performance
when n = m = 10^4. As the input size increases, MCSTL shows much worse performance than the other two because
of the lack of parallelism (Figure 11 (h) is a good indication), and the WBB-tree implementation is slightly better
than ours. For the same reason that STL vectors outperform STL sets (implemented with RB trees) and our sequential
implementation, the WBB-trees take the bulk as a sorted array, which has much less overhead to access and much
better cache locality. Also their tree layout is more cache efficient and the overall height is lower since they store
multiple keys per node.
Figure 11 (f) shows the result of a Gaussian distribution with doubles, also on all 64 cores with set sizes of
n = m = 10^i for i = 4 through 8. The distributions of the two sets have means at 0 and 1 respectively, and both
have a standard deviation of 0.25, meaning that the data in the two sets have less overlap compared to a uniform
distribution (as in (e)). In this case our code achieves better performance than the other two implementations. For our
algorithms less overlap in the data means more parts of the trees will be untouched, and therefore less nodes will be
operated on. This in turn leads to less processing time.
We also do an in-depth study on how the overlap of the data sets affects the performance of each algorithm. We
generate two sets of size n = m = 10^8, each from a Gaussian distribution. The distributions of the two sets have
means at 0 and 1 respectively, and both have an equal standard deviation varying in {1, 1/2, 1/4, 1/8, 1/16}. The
different standard deviations are to control the overlap of the two sets, and ideally less overlap should simplify the
problem. Figure 11 (g) shows the result of the three parallel implementations on a Gaussian distribution with different
standard deviations. From the figure we can see that MCSTL and WBB-tree are not affected by different standard
deviations, while our join-based union takes advantage of less overlapping and achieves a much better performance
when σ is small. This is not surprising since when the two sets overlap less, e.g., are totally disjoint, our UNION
degenerates to a simple JOIN, which costs only O(log n) work. This behavior is consistent with the "adaptive"
property (not always the worst case) in [?]. This indicates that our algorithm is the only one among the three parallel
implementations that can detect and take advantage of less overlap in the data, and hence it performs much better
when the two input sets overlap less.
We also compare the parallelism of these implementations. In Figure 11 (h) we show their performance across 64
cores. The inputs are both of size 10^8, and generated from a uniform distribution of doubles. It is easy to see that
MCSTL does not achieve good parallelism beyond 16 cores, which explains why the MCSTL always performs the
worst on 64 cores in all settings. As we mentioned earlier, the WBB-tree is slightly faster than our code, but when
it comes to all 64 cores, both algorithms have similar performance. This indicates that our algorithm achieves better
parallelism.
To conclude, in terms of parallel performance, our code and WBB-trees are always much better than the MCSTL
because of MCSTL’s lack of parallelism. WBB-trees achieve a slightly better performance than ours on uniformly
distributed data, but it does not improve when the two sets are less overlapped. Thus our code is much better than
the other two implementations on less overlapped data, while still achieving a similar performance with the other
algorithms when the two sets are more intermixed with each other.
7 Conclusions
In this paper, we study ordered sets implemented with balanced binary search trees. We show for the first time that a
very simple "classroom-ready" set of algorithms due to Adams is indeed work optimal when used with four different
balancing schemes (AVL, RB, WB trees and treaps), and also highly parallel. The only tree-specific algorithm that
is necessary is the JOIN, and even the JOINs are quite simple, as simple as INSERT or DELETE. It is not
sufficient to give a time bound to JOIN and base the analysis on it; indeed, if this were the case it would have been done
years ago. Instead our approach defines the notion of a rank (differently for different trees) and shows invariants on
the rank. It is important that the cost of JOIN is proportional to the difference in ranks. It is also important that when
joining two trees the resulting rank is only a constant bigger than the larger rank of the inputs. This ensures that when
joins are used in a recursive tree, as in UNION, the ranks of the results in a pair of recursive calls do not differ much
on the two sides. This then ensures that the set functions are efficient.
We also test the performance of our algorithms. Our experiments show that our sequential algorithm is about 3x
faster for union on two maps of size 10^8 compared to the STL red-black tree implementation. In parallel settings our
code is much better than the two baseline algorithms (MCSTL and WBB-tree) on less overlapped data, while still
achieving similar performance to the WBB-tree when the two sets are more intermixed. Our code also achieves a 45x
speedup on 64 cores.
References
[1] S. Adams. Implementing sets efficiently in a functional language. Technical Report CSTR 92-10, University of
Southampton, 1992.
[2] S. Adams. Efficient sets—a balancing act. Journal of functional programming, 3(04):553–561, 1993.
[3] G. Adelson-Velsky and E. M. Landis. An algorithm for the organization of information. Proc. of the USSR
Academy of Sciences, 145:263–266, 1962. In Russian, English translation by Myron J. Ricci in Soviet Doklady,
3:1259-1263, 1962.
[4] Y. Akhremtsev and P. Sanders. Fast parallel operations on search trees(unpublished), 2016.
[5] R. Bayer. Symmetric binary b-trees: Data structure and maintenance algorithms. Acta Informatica, 1:290–306,
1972.
[6] G. E. Blelloch and M. Reid-Miller. Fast set operations using treaps. In Proc. ACM Symposium on Parallel
Algorithms and Architectures (SPAA), pages 16–26, 1998.
[7] N. Blum and K. Mehlhorn. On the average number of rebalancing operations in weight-balanced trees. Theoretical Computer Science, 11(3):303–320, 1980.
[8] R. D. Blumofe and C. E. Leiserson. Space-efficient scheduling of multithreaded computations. SIAM J. on
Computing, 27(1):202–229, 1998.
[9] R. P. Brent. The parallel evaluation of general arithmetic expressions. Journal of the ACM, 21(2):201–206, Apr.
1974.
[10] N. G. Bronson, J. Casper, H. Chafi, and K. Olukotun. A practical concurrent binary search tree. In Proc. ACM
SIGPLAN Symp. on Principles and Practice of Parallel Programming (PPoPP), pages 257–268, 2010.
[11] M. R. Brown and R. E. Tarjan. A fast merging algorithm. Journal of the ACM (JACM), 26(2):211–226, 1979.
[12] S. Erb, M. Kobitzsch, and P. Sanders. Parallel bi-objective shortest paths using weight-balanced b-trees with bulk
updates. In Experimental Algorithms, pages 111–122. Springer, 2014.
[13] L. Frias and J. Singler. Parallelization of bulk operations for STL dictionaries. In Euro-Par 2007 Workshops:
Parallel Processing, HPPC 2007, UNICORE Summit 2007, and VHPC 2007, pages 49–58, 2007.
[14] Y. Hirai and K. Yamamoto. Balancing weight-balanced trees. Journal of Functional Programming, 21(03):287–
307, 2011.
[15] F. K. Hwang and S. Lin. A simple algorithm for merging two disjoint linearly ordered sets. SIAM J. on Computing, 1(1):31–39, 1972.
[16] J. Katajainen. Efficient parallel algorithms for manipulating sorted sets. In Proceedings of the 17th Annual
Computer Science Conference. University of Canterbury, 1994.
[17] H. T. Kung and P. L. Lehman. Concurrent manipulation of binary search trees. ACM Trans. Database Syst.,
5(3):354–382, 1980.
[18] K. S. Larsen. AVL trees with relaxed balance. J. Comput. Syst. Sci., 61(3):508–522, 2000.
[19] S. Marlow et al. Haskell 2010 language report. Available online at http://www.haskell.org/ (May 2011), 2010.
[20] D. R. Musser, G. J. Derge, and A. Saini. STL tutorial and reference guide: C++ programming with the standard
template library. Addison-Wesley Professional, 2009.
[21] A. Natarajan and N. Mittal. Fast concurrent lock-free binary search trees. In Proc. ACM SIGPLAN Symp. on
Principles and Practice of Parallel Programming (PPoPP), pages 317–328, 2014.
[22] J. Nievergelt and E. M. Reingold. Binary search trees of bounded balance. SIAM J. Comput., 2(1):33–43, 1973.
[23] H. Park and K. Park. Parallel algorithms for red–black trees. Theoretical Computer Science, 262(1):415–435,
2001.
[24] W. J. Paul, U. Vishkin, and H. Wagener. Parallel dictionaries in 2-3 trees. In Proc. Intl. Colloq. on Automata,
Languages and Programming (ICALP), pages 597–609, 1983.
[25] R. Seidel and C. R. Aragon. Randomized search trees. Algorithmica, 16:464–497, 1996.
[26] D. D. Sleator and R. E. Tarjan. Self-adjusting binary search trees. Journal of the ACM (JACM), 32(3):652–686,
1985.
[27] M. Straka. Adams’ trees revisited. In Trends in Functional Programming, pages 130–145. Springer, 2012.
[28] R. E. Tarjan. Data Structures and Network Algorithms. Society for Industrial and Applied Mathematics, Philadelphia, PA, USA, 1983.
A Proofs for Some Lemmas

A.1 Proof for Lemma 8
Proof. One observation for WB trees and treaps is that all nodes attached to a single layer root form a chain. This is
true because if two children of one node v were both in layer i, the weight of v would be more than 2^{i+1}, meaning that v
should be in layer i + 1.
For a layer root v in a WB tree on layer k, w(v) is at most 2^{k+1}. Considering the balance invariant that its child
has weight at most (1 − α)w(v), the weight of the t-th generation of its descendants is no more than 2^{k+1}(1 − α)^t.
This means that after t* = log_{1/(1−α)} 2 generations, the weight decreases to less than 2^k. Thus d(v) ≤ log_{1/(1−α)} 2,
which is a constant.
For treaps, consider a layer root v on layer k that has weight N ∈ [2^k, 2^{k+1}). The probability that d(v) ≥ 2 is
equal to the probability that one of its grandchildren has weight at least 2^k. This probability P is:
[Figure 12 here: (0) the current state, in which the subtree rooted at u and all of its subtrees are balanced; (1) the
result of the single rotation; (2) the result of the double rotation. The four subtrees are labeled A, B, C and D from
left to right.]

Figure 12: An illustration of the two kinds of outcomes of rotation after joining two weight-balanced trees. After we
append the smaller tree to the larger one and rebalance from that point upwards, we reach the case in (0), where u has
been balanced, and the smaller tree is part of it. Now we are balancing v, and the two options are shown in (1) and
(2). At least one of the two rotations will rebalance v.
  P = (1/2^k) Σ_{i=2^k+1}^{N} (i − 2^k)/i          (7)
    ≤ (1/2^k) Σ_{i=2^k+1}^{2^{k+1}} (i − 2^k)/i    (8)
    ≈ 1 − ln 2                                     (9)

We denote 1 − ln 2 by pc. Similarly, the probability that d(v) ≥ 4 is less than pc^2, and the probabilities shrink
geometrically as d(v) increases. Thus the expected value of d(v) is a constant.
Since treaps come from a random permutation, all the d(v) are i.i.d.
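The approximation in (7)-(9) is easy to verify numerically. The following is our own check: it evaluates the sum in (8) for a moderately large k and confirms it is close to 1 − ln 2 ≈ 0.307.

```python
# Numeric check of (8)-(9): (1/2^k) * sum_{i=2^k+1}^{2^(k+1)} (i - 2^k)/i
# approaches 1 - ln 2 as k grows.
import math

def prob_upper_bound(k):
    lo = 2 ** k
    return sum((i - lo) / i for i in range(lo + 1, 2 * lo + 1)) / lo

assert abs(prob_upper_bound(12) - (1 - math.log(2))) < 1e-3
```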
A.2 Proof for Lemma 5
Proof. We are trying to show that for Tr =U NION(Tp , Td ) on AVL, RB or WB trees, if r(Tp ) > r(Td ) then r(Tr ) ≤
r(Tp ) + r(Td ).
For AVL and RB trees we use induction on r(Tp ) + r(Td ). When r(Td ) + r(Tp ) = 1 the conclusion is trivial. If
r = r(Tp ) > r(Td ), Tp will be split into two subtrees, with rank at most r(Tp )−1 since we remove the root. Td will be
split into two trees with height at most r(Td ) (Theorem 2). Using the inductive hypothesis, the two recursive calls will
return two trees of height at most r(Tp ) − 1 + r(Td ). The result of the final J OIN is therefore at most r(Tp ) + r(Td ).
For WB trees, |Tr| ≤ |Tp| + |Td| ≤ 2|Tp|. Thus r(Tr) ≤ r(Tp) + 1 ≤ r(Tp) + r(Td).
A.3 Proof for Lemma 3

Proof. Recall that in a weight-balanced tree, for a certain node, neither of its children is β times larger than the other
one, where β = 1/α − 1. When α ≤ 1 − 1/√2, we have β ≥ 1 + √2.
WLOG, we prove the case when |TL| < |TR|, where TL is inserted along the left branch of TR. Then we rebalance
the tree from the point of key k and go upwards. As shown in Figure 12 (0), suppose the rebalancing has been processed
up to u (so that we can use induction). Thus the subtree rooted at u is balanced, and TL is part of it. We name the four trees
from left to right A, B, C and D, and the numbers of nodes in them a, b, c and d. From the balance condition we know
that A is balanced with B + C, and B is balanced with C, i.e.:

  (1/β)(b + c) ≤ a ≤ β(b + c)    (10)
  (1/β) b ≤ c ≤ β b              (11)
We claim that at least one of the following two operations will rebalance the tree rooted at v in Figure 12 (0):
Op. (1). Single rotation: a right rotation at u and v (as shown in Figure 12 (1));
Op. (2). Double rotation: a left rotation followed by a right rotation (as shown in Figure 12 (2)).
Also, notice that the imbalance is caused by the insertion of a subtree at the leftmost branch. Suppose the size
of the smaller tree is x, and the size of the original left child of v is y. Note that in the process of JOIN, TL is not
concatenated at v; instead, it goes down to deeper nodes. Also, note that the original subtree of size y is weight
balanced with D. This means we have:

  x < (1/β)(d + y)
  (1/β) d ≤ y ≤ β d
  x + y = a + b + c

From the above three inequalities we get x < (1/β)d + d, thus:

  a + b + c = x + y < (1 + β + 1/β) d    (12)

Since an imbalance occurs, we have:

  a + b + c > β d    (13)

We discuss the following 3 cases:

Case 1. B + C is weight balanced with D, i.e.,

  (1/β)(b + c) ≤ d ≤ β(b + c)    (14)

In this case, we apply a right rotation. The new tree rooted at u is now balanced, and A is naturally balanced.
Then we discuss two subcases:

Case 1.1. βa ≥ b + c + d.
Notice that b + c ≥ (1/β)a, meaning that b + c + d > (1/β)a. Then in this case, A is balanced with B + C + D,
and B + C is balanced with D. Thus just one right rotation will rebalance the tree rooted at u (Figure 12 (1)).
Case 1.2. βa < b + c + d.
In this case, we claim that a double rotation as shown in Figure 12 (2) will rebalance the tree. Now
we need to prove the balance of all the subtree pairs: A with B, C with D, and A + B with C + D.
First notice that when βa < b + c + d, from (13) we can get:
βd < a + b + c < (1/β)(b + c + d) + b + c
⇒ (β − 1/β)d < (1/β + 1)(b + c)
⇒ (β − 1)d < b + c    (15)
Considering (14), we have (β − 1)d < b + c ≤ βd. Noticing that b and c satisfy (11), we have:
b > (1/(β + 1))(b + c) > ((β − 1)/(β + 1))d    (16)
c > (1/(β + 1))(b + c) > ((β − 1)/(β + 1))d    (17)
Also note that when β > 1 + √2 ≈ 2.414, we have:
(β + 1)/(β − 1) < β    (18)
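The constant 1 + √2 is exactly where (β + 1)/(β − 1) crosses β; a quick numeric check of this threshold (our own verification, not part of the proof):

```python
import math

threshold = 1 + math.sqrt(2)     # ≈ 2.414, root of beta**2 - 2*beta - 1

def lhs(beta):
    """Left-hand side of inequality (18)."""
    return (beta + 1) / (beta - 1)

# Just above the threshold, (18) holds ...
assert lhs(threshold + 1e-6) < threshold + 1e-6
# ... and just below it, (18) fails.
assert lhs(threshold - 1e-6) > threshold - 1e-6
```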
We discuss the following three conditions of subtrees’ balance:
I. Prove A is weight balanced with B.
i. Prove b ≤ βa.
Since βa ≥ b + c (applying (10)), we have b ≤ βa.
ii. Prove a ≤ βb.
In the case when βa < b + c + d we have:
a < (1/β)(b + c + d)
  < (1/β)(b + βb + ((β + 1)/(β − 1))b)    (applying (11), (16))
  = ((β + 1)/(β − 1))b
  < βb    (applying (18))
II. Prove C is weight balanced with D.
i. Prove c ≤ βd.
Since b + c ≤ βd (applying (14)), we have c ≤ βd.
ii. Prove d ≤ βc.
From (17), we have:
d < ((β + 1)/(β − 1))c < βc    (applying (18))
III. Prove A + B is weight balanced with C + D.
i. Prove a + b ≤ β(c + d).
From (15), (11) and (18) we have:
d < (1/(β − 1))(b + c) ≤ (1/(β − 1))(βc + c) = ((β + 1)/(β − 1))c < βc
⇒ (1/β)d < c
⇒ (1 + 1/β)d < (1 + β)c
⇒ (1 + 1/β + β)d < β(c + d) + c
⇒ a + b + c < (1 + 1/β + β)d < β(c + d) + c    (applying (12))
⇒ a + b < β(c + d)
ii. Prove c + d ≤ β(a + b).
When β > 2, we have β/(β − 1) < β. Thus applying (15) and (10) we have:
d < (1/(β − 1))(b + c) ≤ (β/(β − 1))a < βa
Also we have c ≤ βb (applying (11)). Thus c + d < β(a + b).
Case 2. B + C is too light to be balanced with D, i.e.,
β(b + c) < d    (19)
In this case, we have a ≤ β(b + c) < d (applying (10) and (19)), which means that a + b + c < d + (1/β)d < βd
when β > (1 + √5)/2 ≈ 1.618. This contradicts the condition that A + B + C is too heavy for D (a + b + c >
βd). Thus this case is impossible.
Case 3. B + C is too heavy to be balanced with D, i.e.,
b + c > βd    (20)
⇒ a ≥ (1/β)(b + c) > d    (21)
In this case, we apply the double rotation.
We need to prove the following balance conditions:
I. Prove A is weight balanced with B.
i. Prove b < βa.
Since βa ≥ b + c (applying (10)), we have b < βa.
ii. Prove a < βb.
Suppose c = kb, where 1/β < k < β. Since b + c > βd, we have:
d < ((1 + k)/β)b    (22)
From the above inequalities, we have:
a + b + c = a + b + kb < (1 + β + 1/β)d    (applying (12))
  < (1 + β + 1/β) × ((1 + k)/β)b    (applying (22))
⇒ a < ((1 + β + 1/β)/β − 1)(1 + k)b
    = ((β + 1)/β²)(1 + k)b
    < ((β + 1)²/β²)b
When β > (7/9)((√837 + 47)/54)^(−1/3) + ((√837 + 47)/54)^(1/3) + 1/3 ≈ 2.1479, we have (β + 1)²/β² < β. Hence a < βb.
II. Prove C is weight balanced with D.
i. Prove d ≤ βc.
When β > (1 + √5)/2 ≈ 1.618, we have β > 1 + 1/β. Assume to the contrary that c < (1/β)d; then b ≤ βc < d, thus:
b + c < (1 + 1/β)d < βd
which contradicts (20), namely that B + C is too heavy to be balanced with D.
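The unwieldy closed-form constant above is simply the real root of β³ = (β + 1)² (equivalently β³ − β² − 2β − 1 = 0, solved by Cardano's formula); both the closed form and the claimed value ≈ 2.1479 can be verified numerically (our own check):

```python
# Closed-form real root of beta**3 - beta**2 - 2*beta - 1 = 0.
u = ((837 ** 0.5 + 47) / 54) ** (1 / 3)
root = (7 / 9) / u + u + 1 / 3

# The closed form satisfies beta**3 == (beta + 1)**2 ...
assert abs(root ** 3 - (root + 1) ** 2) < 1e-9
# ... and matches the numeric threshold quoted in the proof.
assert abs(root - 2.1479) < 1e-3
```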
ii. Prove c < βd.
Plugging (21) into (12) we get b + c < (β + 1/β)d. Recalling that β > 1, we have:
(1/β)c + c ≤ b + c < (β + 1/β)d
⇒ c < ((β² + 1)/(β + 1))d < βd
III. Prove A + B is weight balanced with C + D.
i. Prove c + d ≤ β(a + b).
From (11) we have c < βb, and also d < a ≤ βa (applying (21)); thus c + d < β(a + b).
ii. Prove a + b ≤ β(c + d).
Recall that when β > (1 + √5)/2 ≈ 1.618, we have β > 1 + 1/β. Applying (20) and (11) we have:
d < (1/β)(b + c) ≤ c + (1/β)c < βc
⇒ (1/β)d < c
⇒ (1 + 1/β)d < (1 + β)c
⇒ (1 + 1/β + β)d < β(c + d) + c
⇒ a + b + c < (1 + 1/β + β)d < β(c + d) + c    (applying (12))
⇒ a + b < β(c + d)
Taking all the three conclusions into consideration, after either a single rotation or a double rotation, the new subtree
will be rebalanced. Then by induction we can prove Lemma 3.
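The whole case analysis can also be stress-tested by brute force (our own verification harness, not part of the proof, using β = 3 as an example value above all the thresholds derived in the cases): enumerate every small integer size tuple satisfying the preconditions (10), (11), the imbalance (13), and the structural bound (12) guaranteed by the join, and check that at least one of the two rotations restores weight balance.

```python
def balanced(x, y, beta):
    return y <= beta * x and x <= beta * y

beta = 3.0
single = double = 0
for a in range(1, 30):
    for b in range(1, 12):
        for c in range(1, 12):
            for d in range(1, 12):
                # Preconditions (10) and (11).
                if not (balanced(a, b + c, beta) and balanced(b, c, beta)):
                    continue
                # Imbalance (13) and the join-structure bound (12).
                if not a + b + c > beta * d:
                    continue
                if not a + b + c < (1 + beta + 1 / beta) * d:
                    continue
                # Single rotation leaves A facing B+C+D and B+C facing D.
                ok_single = (balanced(a, b + c + d, beta)
                             and balanced(b + c, d, beta))
                # Double rotation pairs A with B, C with D, A+B with C+D.
                ok_double = (balanced(a, b, beta) and balanced(c, d, beta)
                             and balanced(a + b, c + d, beta))
                assert ok_single or ok_double   # the claim of the lemma
                single += ok_single
                double += ok_double
print(single, "fixed by single rotation;", double, "fixed by double rotation")
```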
Decentralized Collision-Free Control of Multiple Robots in 2D and 3D Spaces

arXiv:1709.05843v1 [cs.RO] 18 Sep 2017

by Xiaotian Yang, 2017
Abstract
Decentralized control of robots has attracted huge research interest. However, some
of the research used unrealistic assumptions without collision avoidance. This report
focuses on collision-free control of multiple robots in both complete coverage and
search tasks in 2D and 3D areas which are arbitrary and unknown. All algorithms are
decentralized, as robots have limited abilities, and they are mathematically proved.
The report starts with the grid selection in the two tasks. Grid patterns simplify
the representation of the area, and robots only need to move in straight lines between
neighboring vertices. For 100% complete 2D coverage, the equilateral triangular
grid is proposed. For complete coverage ignoring the boundary effect, the grid
with the fewest vertices is calculated in every situation for both 2D and 3D areas.
The second part is on complete coverage in 2D and 3D areas. A decentralized
collision-free algorithm with the above selected grid is presented, driving robots to
sections which are furthest from the reference point. The area can be static or
expanding, and the algorithm is simulated in MATLAB.
Thirdly, three grid-based decentralized random algorithms with collision avoidance are provided to search targets in 2D or 3D areas. The number of targets can be
known or unknown. In the first algorithm, robots choose vacant neighbors randomly
with priority on unvisited ones, while the second one adds a repulsive force to
disperse robots if they are close. In the third algorithm, if surrounded by visited
vertices, the robot will use the breadth-first search algorithm to go to one of the
nearest unvisited vertices via the grid. The second search algorithm is verified on
Pioneer 3-DX robots. The general way to generate the formula to estimate the
search time is demonstrated. Algorithms are compared with five other algorithms
in MATLAB to show their effectiveness.
Contents

Abstract
List of Figures
List of Tables
1 Introduction
  1.1 Multiple Robots
  1.2 Centralized vs. Decentralized Algorithms
  1.3 Complete Coverage and Search
  1.4 Mapping
  1.5 Collision-Free Navigation
  1.6 Contribution Highlights
  1.7 Report Outline
2 Grid Selection for Coverage in 2D and 3D Areas
  2.1 Grids for 2D Areas
    2.1.1 Definitions and Assumptions
    2.1.2 The Strict Requirement
    2.1.3 The Loose Requirement
    2.1.4 The Choice for the Pioneer 3-DX Robot
  2.2 Grids for 3D Areas
    2.2.1 Definitions and Assumptions
    2.2.2 The Loose Requirement
    2.2.3 The Choice for the Bluefin-21 Robot
  2.3 Summary
3 Optimal Collision-Free Self-Deployment for Complete Coverage
  3.1 Complete Coverage in 2D
    3.1.1 Problem Statement
    3.1.2 Algorithm
    3.1.3 Simulation Results
    3.1.4 Section Summary
  3.2 Complete Coverage in 3D
    3.2.1 Problem Statement
    3.2.2 Algorithm
    3.2.3 Simulation Results
    3.2.4 Section Summary
  3.3 Summary
4 A Collision-Free Random Search Algorithm
  4.1 2D Tasks
    4.1.1 Problem Statement
    4.1.2 Procedure and Algorithm
    4.1.3 Stop Strategies
    4.1.4 Broken Robots and Reliability
    4.1.5 Simulation Results
    4.1.6 Section Summary
  4.2 3D Tasks
    4.2.1 Problem Statement
    4.2.2 Procedure and Algorithm
    4.2.3 Simulation Results
    4.2.4 Section Summary
  4.3 Summary
5 A Collision-Free Random Search with the Repulsive Force
  5.1 2D Tasks
    5.1.1 Problem Statement
    5.1.2 Procedure and Algorithm
    5.1.3 Simulations with the Strict Requirement
    5.1.4 Comparison with Other Algorithms
    5.1.5 Robot Test
    5.1.6 Factors in Choosing a Grid under the Loose Requirement
    5.1.7 Section Summary
  5.2 3D Tasks
    5.2.1 Problem Statement
    5.2.2 Procedure and Algorithm
    5.2.3 Simulation Results
    5.2.4 Section Summary
  5.3 Summary
6 A Collision-Free Random Search with the Breadth-First Search Algorithm
  6.1 2D Tasks
    6.1.1 Problem Statement
    6.1.2 Procedure and Algorithm
    6.1.3 Simulations with the Strict Requirement
    6.1.4 Comparison with Other Algorithms
    6.1.5 Section Summary
  6.2 3D Tasks
    6.2.1 Problem Statement
    6.2.2 Procedure and Algorithm
    6.2.3 Simulation Results
    6.2.4 Section Summary
  6.3 Summary
7 Conclusion
  7.1 Future Work
Appendix
A Acronyms
List of Figures

2.1 Three regular tessellation patterns
2.2 The illustration for a, rsafe and rso
2.3 rso, rc and the selection order in search
2.4 Curvature example for the equilateral triangular grid
2.5 Example of the curvature in the S grid
2.6 A narrow path
2.7 The Pioneer 3-DX robot
2.8 Layout of 8 sonars in the front
2.9 Range of the laser SICK-200
2.10 Four uniform honeycombs
2.11 a, rsafe, rso, rc and the order of selection in search
2.12 Bluefin-21
2.13 Passage width for a 2D area
3.1 The multilevel rotary parking system
3.2 The launcher for multiple UAVs in the LOCUST project
3.3 The flow chart for the 2D complete coverage
3.4 Judge obstacles
3.5 The task area and the initial setting
3.6 Positions of robots when k = 30, |Va| = 256, Ni = 3
3.7 Positions of robots when k = 60, |Va| = 256, Ni = 3
3.8 Positions of robots when k = 86, |Va| = 256, Ni = 3
3.9 An example of the narrow path with k = 3, |Va| = 6, Ni = 3
3.10 The task area and the initial setting
3.11 Positions of robots when k = 15, |Va| = 129, Ni = 3
3.12 Positions of robots when k = 30, |Va| = 129, Ni = 3
3.13 Positions of robots when k = 44, |Va| = 129, Ni = 3
3.14 An example of the narrow path with k = 3, |Va| = 6, Ni = 3
4.1 Initial settings of the map
4.2 The flow chart for situation I
4.3 The flow chart for situation II
4.4 The route of robot 1 in a simulation of 3 robots
4.5 The route of robot 2 in a simulation of 3 robots
4.6 The route of robot 3 in a simulation of 3 robots
4.7 Combined routes in a simulation of 3 robots
4.8 Time and steps
4.9 3D area and initial settings
4.10 The route of robot 1 in a simulation of 3 robots
4.11 The route of robot 2 in a simulation of 3 robots
4.12 The route of robot 3 in a simulation of 3 robots
4.13 The combined routes of 3 robots
4.14 Time and steps for situation I
4.15 Time and steps for situation II
5.1 Route of robot 1 in a simulation of three robots
5.2 Route of robot 2 in a simulation of three robots
5.3 Route of robot 3 in a simulation of three robots
5.4 Combined routes in a simulation of three robots
5.5 Choice in the first step of the simulation of three robots
5.6 Initial setting of the map
5.7 The order and initial positions of robots
5.8 Search time for 1-13 robots with 1, 3, 5, 7 and 9 targets
5.9 Search time for 9 targets with 1-13 robots in 5 areas
5.10 Time ratio based on 100 vertices for 1-13 robots in five areas
5.11 Search 9 targets in a 900 vertices area with different allocations in situation I
5.12 Search 9 targets in a 900 vertices area with different allocations in situation II
5.13 The order of allocation for 13 robots in a curve
5.14 Search time for 1 target using 6 algorithms
5.15 Search time for 1 target using algorithms R and RW
5.16 Search time for 9 targets using 6 algorithms
5.17 Search time for 9 targets using algorithms R and RW
5.18 The test area
5.19 The average result for 10 tests with 1-3 robots
5.20 The example route for one robot with legends (13 loops)
5.21 Example routes for two robots (10 loops)
5.22 Example routes for three robots (6 loops)
5.23 Nvt for a T grid: star-4 green-5 purple-6 red-7
5.24 Nvt for an S grid: purple-4 yellow-5 green-6 star-7
5.25 Nvt for an H grid: star-2 red-3 green-4 yellow-5 purple-6
5.26 A T grid with no obstacles
5.27 Time for a T grid with Nvt = 4 and Nvt = 6
5.28 Time for three grids with one target
5.29 Time for three grids with nine targets
5.30 The route of robot 1 in a simulation with 3 robots
5.31 The route of robot 2 in a simulation with 3 robots
5.32 The route of robot 3 in a simulation with 3 robots
5.33 The combined routes in a simulation with 3 robots
5.34 Time for four algorithms with 9 targets
6.1 The route of robot 1 in a simulation with 3 robots
6.2 The route of robot 2 in a simulation with 3 robots
6.3 The route of robot 3 in a simulation with 3 robots
6.4 The combined routes in a simulation with 3 robots
6.5 All algorithms except R for 1-13 robots
6.6 Time for 3 search algorithms
6.7 Search time for search algorithms 2 and 3
6.8 The route of robot 1 in a simulation with 3 robots
6.9 The route of robot 2 in a simulation with 3 robots
6.10 The route of robot 3 in a simulation with 3 robots
6.11 The combined routes in a simulation with 3 robots
6.12 Compare 4 algorithms in situation I
6.13 Compare 4 algorithms in situation II
List of Tables

2.1 Wpass and rst for each grid
2.2 Wpass and the area occupied by each polygon
2.3 rst for each 3D grid
2.4 The area occupied by each cell
3.1 Parameters of robots and simulation
3.2 Steps from calculations and simulations with |Va| = 256
3.3 Steps from calculations and simulations for different areas with Ni = 3
3.4 Parameters of robots and simulation
3.5 Steps from calculations and simulations with |Va| = 129
3.6 Steps from calculations and simulations for different areas with Ni = 3
4.1 Wpass and rst for each grid
4.2 The ratio of calculation time to total time
5.1 Results of RU for 1-13 robots in a 900 vertices area with 1 target (RU in Figure 5.14)
5.2 Results of S2 for 1-13 robots in a 900 vertices area with 9 targets (5.1 & 5.2 in Figure 5.16)
5.3 Parameters of robots in the experiment
5.4 Search time for algorithms LFP and RU with 9 targets
6.1 Search time for algorithms R and RU in situation II
Chapter 1
Introduction
The recent improvements of technologies and algorithms in robotics enable robots to
be widely used in various applications. The civilian usages include vacuum cleaning at home [217, 229, 203, 72], manufacturing and material handling in factories
[50, 107], fire detection in the forest [108] and warehouse [30], pollution estimation in
the sea [47, 83] and the air [88], and data harvest [65]. There are also some military
applications such as the border patrol [121, 119, 94], the deployment of robots for
intruder detection [97, 132, 224, 226, 218, 57, 151, 18], intelligence, surveillance and
reconnaissance missions [1, 157], mine clearance [134, 31, 32, 127, 104, 105, 70, 21],
boundary monitoring [191], and terrorist threat detection in ports [212]. Most
of the references above are about search, coverage and path planning tasks. Among the
employed robots, autonomous mobile robots, including Unmanned Underwater
Vehicles (UUVs), Unmanned Ground Vehicles (UGVs), and Unmanned Aerial Vehicles (UAVs), have attracted the most research interest as they can be programmed
to execute certain tasks without further human control. Sometimes, a
team of multiple robots is used in a task as it has certain advantages compared to a
single robot. When a team of autonomous robots is employed, the way that robots
coordinate to perform cooperative tasks becomes the main research topic. The algorithms can be divided into decentralized methods and centralized methods. Another
issue raised by a team of robots is collision avoidance. However, many algorithms
ignored this problem or used unrealistic assumptions and thus cannot be applied to real
robots. Therefore, to ensure safe navigation through the task, standard robot models
or actual robots need to be considered as the basis of the assumptions. The task area
can be two-dimensional (2D) or three-dimensional (3D), so a suitable method to
represent the 2D or 3D environments, as well as the obstacles inside, should be carefully
considered.
This report studies the problems of complete coverage and search using multiple
robots. The task area is arbitrary and unknown, and robots move without collisions
based on a grid pattern. The best grids under different situations are discussed. One
decentralized algorithm for the complete coverage task and three decentralized random
algorithms for search tasks are provided. The algorithms are initially designed for
2D areas and can be used in 3D areas with easy modifications. The convergence of
each algorithm is proved, and all algorithms are verified by extensive simulations
and comparisons with other methods. Algorithms are designed based on datasheets
of real robots, and one search algorithm is applied to Pioneer 3-DX robots to show
its effectiveness.
The sections below are the literature review on the related topics in this report.

1.1 Multiple Robots
When finishing some tasks using a single robot, the robot usually needs to have
strong abilities with advanced sensors and complex algorithms. One popular kind of ground robot in research is the quadruped robot, such as BigDog designed
by Boston Dynamics [155] and the Mountainous Bionic Quadruped Robot from China
North Industries Corporation [160]. These robots are designed for rough terrain
with different obstacles, which may be too steep, muddy or snowy for common
wheeled robots. They are also stable with heavy loads and under external interference
such as pushes from different directions. Correspondingly, those robots need various
sensors for the environments, complicated computing and power systems, advanced
actuators and sophisticated dynamic controllers. For UAVs, the Parrot AR.Drone
has been widely used in research and education [91]. In [162], a trajectory tracking
and 3D positioning controller was designed for this drone, using its onboard video
camera to build the map of the task area. A single-robot path planning algorithm
for complete coverage using this robot was developed in [52]. A famous UUV, used in
the search for Malaysia Airlines Flight MH370, is the Bluefin-21 robot [21]. Some other robots used in
research can be found in the survey paper [96].
However, there are many tasks which cannot be done by a single robot. For
time-sensitive work, such as search and rescue after a disaster or the detection
of dangerous leaking biological, chemical, and radioactive materials, a single robot
may be unable to finish the task on time. For a coverage or data collection
task in a vast area, one robot alone would not be able to have such a large sensing
range, and a larger sensing range means more expensive equipment must be used.
Sometimes, a task is designed for multiple members, such as a soccer game or
a target-tracking mission with multiple targets moving in different directions, where
one robot is not enough to follow each of them. Thus, the cooperation of multiple
robots is required. The recent developments in micro-electro-mechanical systems,
integrated circuits and wireless communication, as well as the swift progress in algorithm
innovation, also attract researchers to focus on the control problem of robot
networks. As cheaper robots [5, 81, 152, 72] with simple functions are available [32]
to finish a task by cooperation, an expensive robot with advanced capabilities may
be unnecessary.
A system with multiple robots may also be called a Wireless Mobile
Sensor Network (WMSN) in some papers such as [128, 138, 99, 41, 42, 39, 184, 170,
164, 40]. In this report, both terms refer to a group of autonomous mobile robots
which carry sensors to detect targets or cover an area. The energy of each robot
comes from its battery. Robots can also build a local network to communicate with
neighbors to share information. Based on the detected or received information,
a robot moves dynamically to finish a task under the control algorithm. As a
considerable number of robots can be used, each of them should have a reasonable
price, a carry-on battery, limited sensing ranges for obstacles and targets, and a
limited communication range.
The robots in a team do not always need to be homogeneous. They can be
heterogeneous, as in the algorithms and experiments in [183, 76, 35, 110]. Due to
the complexity of the tasks, especially search and rescue, and the limited abilities
of each robot, members of a multi-robot system can have different functions. For
example, the HELIOS team with five robots was proposed by Guarnieri et al. for
the search and rescue task [66]. In the system, only three robots have laser range
finders and cameras to create the virtual 3D map of the task area. The other two
robots with manipulators are used to execute special tasks including handling
objects and opening doors. Sugiyama et al. [189, 190] developed the chain network
in a rescue system with multiple robots. The system consists of a base station,
some search robots and some relay robots. Different robots have their own behavior
algorithms to form the chain network. The method of classification and the
behavior algorithm are based on the forwarding table of each robot constructed for ad-hoc networking. Then the robots recognized as relay robots act as information
bridges, and frontier robots search the area. Luo et al. [27] exploited a team
with both ground and aerial vehicles for search and rescue tasks. The environment
mapping was executed by the ground vehicle. The search and localization were
finished simultaneously by the micro aerial vehicle with a vertical camera and a
horizontal camera. Another two micro ground vehicles, equipped with color sensors,
sonar, and compasses, were used as the backup team. [35] proposed a hierarchical
system consisting of managers with advantages in computation and workers with
limited abilities. This system allows the processing resources to be shared and the global
environment to be divided into local sections for the workers. Worker robots can be
separated into planners, explorers or robots with both jobs based on their abilities, so that they
only deal with their assigned tasks. Algorithms in this report only require the robots
to satisfy certain assumptions about their sensing ranges, the communication range,
and the maximum swing radius, but otherwise the robots can be different. In search tasks, all
robots do the same task without further actions on the found targets, so there
is no need to separate robots into distinct roles.
Multiple robots have many advantages compared to a single robot [219, 161, 59,
152, 214, 11, 12]. Firstly, a system with multiple robots is more efficient, as discussed
above, because the robots can share information with neighbors and thus cooperate to learn the environment faster and better. Multiple cooperative robots can be
separated spatially and temporally; therefore, they can fulfill complex, decomposable tasks efficiently, such as floor cleaning and surveillance. Correspondingly, this
may save the total consumed energy. Secondly, such a system is reliable, as more robots provide
more redundancy than a single robot. If one robot stops working, other robots can
finish the task if they use an independent algorithm. Through communication, the
robot failure may even be detected. Thirdly, the system is robust to errors, because
the information of one robot can be checked or rectified using information from other
robots. Also, the system can be flexible to different tasks in different areas, perhaps
by reassigning the task of each robot. Finally, a robot team can be scaled with
various numbers of members to satisfy the task requirements, but the expense of
the robots needs to be considered.
When using a group of robots, many algorithms designed for a single robot are no longer
applicable, and it is challenging to design the algorithms in different respects. The later
sections will discuss them one by one.
1.2 Centralized vs. Decentralized Algorithms
In robot teamwork, communication and control methods are significant and
complex topics. The algorithms for multiple robots can be divided into centralized algorithms and decentralized algorithms. In centralized control tasks such as
[100, 15, 16], the communication center or the leader robot collects and distributes
the information to all the others, which makes it relatively easy to control the team and to design algorithms. However, the leader robot then needs a larger memory and stronger communication
ability, which is expensive. Moreover, on this occasion, global information such
as Global Positioning System (GPS) data may be needed for synchronization
and localization. This method also requires all robots to have good communication abilities. Those demands may be unrealistic on some occasions, such
as in caves, under water, or in areas without control, such as drones in an enemy's
airspace. In another type of centralized control method, the leader-follower algorithm [45, 75, 67, 68], robots need to keep a very restrictive type of communication
topology [48, 53]; namely, they need to stay in the communication range of each
other. In the rescue network of [189], communication chains must continuously
exist from the base station through the relay path to the distant rescue robots
to reconnoiter disaster areas, and those chains must be transformed into a suitable
topology if the target area of exploration changes. Some other papers assumed
that the communication graph is minimally rigid [92, 209] or time-invariant and
connected [122]. Thus, these methods are used in formation control or in achieving some consensus variables, such as the same speed or the same direction, but they cannot work
independently in a large area. A fatal disadvantage of the centralized scheme is
that if the leader robot breaks down, the whole system stops working. Therefore,
centralized control and communication algorithms have limited applications, which
leads researchers to decentralized methods.
Decentralized control of multiple robots is an attractive and challenging area [138, 6, 137, 82, 196, 125] that provides the system with benefits in robustness, flexibility, reliability, and scalability according to Zhu [232]. Specifically, this problem concerns a group of dynamically decoupled robots, that is, each robot is independent and is not directly affected by the others according to Savkin [163].
Research in this area is largely inspired by the animal aggregation problem which
belongs to the ecology and evolution branches of biology. It is believed by Shaw [180], Okubo [139], Flierl [58] et al. that simple local coordination and control laws are exploited to achieve the highly intelligent aggregation behavior of a group
of bees, a school of fish and a swarm of bacteria. Those laws may be used in formation
control. As described in [193], efficient animal foraging strategies can be applied to coverage and search problems. Evolution through natural selection led to highly efficient and near-optimal foraging strategies even under physical and biological constraints. Examples include the method lobsters use to localize and track odor plumes and the chemotaxis mechanism E. coli bacteria use to respond to nutrient concentration gradients. Similarly, robots only
need a local communication range and small sensing ranges which are achievable
for many cheap robots. Also, less communication is necessary compared to the centralized method, in which gathering information from and sending tasks to each terminal takes much time. Moreover, in some steps of a decentralized algorithm,
different members of the team can execute their missions in parallel to improve
the efficiency of the work. Both centralized and decentralized communications are
used in this report to improve the efficiency of the search or coverage task as well
as to save the total energy. The centralized communication is only used to deliver
the significant information such as the positions of targets or the positions of the
failed robots, much like the use of a flare gun to deliver a distress signal in real rescue scenarios. Decentralized control and communication form the main part of the algorithms in this report to save time and money, thus making the algorithms more applicable. Initially, the information about the environment is
unknown. Then a robot will gradually record its sensed map and exchange the map
with neighbor robots that it can communicate with. Thus, the robots only use local
information to make the decision and will have different neighbors in later steps
[82, 125, 11, 12, 42, 170, 176, 77].
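As a minimal illustration of this local map exchange (the function name and data layout are illustrative assumptions, not the report's actual implementation), the sketch below merges the visited-vertex sets of robots that lie within each other's communication range:

```python
import math

def exchange_maps(positions, maps, comm_range):
    """One round of local map exchange: every pair of robots within
    comm_range merges their sets of visited vertices."""
    n = len(positions)
    merged = [set(m) for m in maps]
    for i in range(n):
        for j in range(i + 1, n):
            if math.dist(positions[i], positions[j]) <= comm_range:
                union = merged[i] | merged[j]
                merged[i], merged[j] = union, set(union)
    return merged

# Robots 0 and 1 are neighbors; robot 2 is out of range and keeps its own map.
positions = [(0.0, 0.0), (1.0, 0.0), (10.0, 0.0)]
maps = [{"a"}, {"b"}, {"c"}]
result = exchange_maps(positions, maps, comm_range=2.0)
```

In later steps the positions change, so the neighbor pairs, and therefore the flow of map information, change as well.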
In the control engineering literature, ideal communication is usually assumed. However, research in telecommunications has discussed non-ideal communication, including transmission delay due to limited bandwidth, propagation delay caused by long transmission distances, and information loss caused by interference, especially in a battlefield where severe interference is generated by the enemy. [140] considered a multi-agent system with
communication time-delays. It used switching topology and undirected networks.
Then a direct connection between the robustness margin to time-delays and the maximum eigenvalue of the network topology is provided. In [68] with the leader-follower
structure, the state of the leader not only kept changing but also might not be measurable. Thus, a state estimation method to obtain the leader's velocity was proposed. In systems with communication constraints, such as limited capacity of communication channels, estimation of the necessary information is needed using methods of Kalman state estimation in [167, 102, 171, 172, 146, 143, 147, 115, 169].
Other hardware limitations may include the insufficient memory for a large global
map. [34] solved this problem by describing the global map with local maps, which were explored by a two-tiered A* algorithm. This algorithm can be executed entirely on robots with limited memory. As the energy of mobile robots is
limited [98], [99] researched the area covered during the cover procedure instead of
the final result. It considered coverage rate at a specific time instant or during a
time interval. It also showed how long a location is covered and uncovered. Then
a relation between the detection time for a randomly located target and the hardware parameters was given to help a user decide the number of robots to use. Ideal
communication is assumed in this work and propagation delay would not happen as
local communication is fast enough to ignore that delay. The problem caused by the
limited bandwidth is also solved by designing a suitable mapping method that minimizes the information needed to describe the area. The memory is considered sufficient, and the search time with different parameters is discussed.
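One simple way to minimize the information exchanged for a grid map — a sketch of the general idea, not the specific encoding used in this report — is to pack the visitation state of each grid vertex into a bit array before transmission:

```python
def pack_states(visited_flags):
    """Pack a list of booleans (one per grid vertex) into bytes:
    eight vertices per transmitted byte instead of one value each."""
    n = len(visited_flags)
    out = bytearray((n + 7) // 8)
    for i, flag in enumerate(visited_flags):
        if flag:
            out[i // 8] |= 1 << (i % 8)
    return bytes(out)

def unpack_states(data, n):
    """Recover the n boolean visitation flags from the packed bytes."""
    return [bool(data[i // 8] >> (i % 8) & 1) for i in range(n)]

flags = [True, False, False, True, True, False, False, False, True]
packed = pack_states(flags)  # 2 bytes instead of 9 separate values
```

Because only coordinates and visitation states are needed for a topological map, such a packed representation keeps each exchanged message small even for large grids.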
1.3 Complete Coverage and Search
Popular tasks for cooperation of multiple robots include coverage control, search
and rescue, formation control [176, 177, 201, 6, 154], flocking [196, 197, 198, 159],
and consensus [57, 74, 140]. This report focuses on the complete coverage tasks,
and search and rescue tasks. Coverage problems have been defined differently by various researchers [17, 215, 26, 230] according to their research interests, and the definitions thus lack generality.
In general, coverage means going through a target area using the sensors on the robots while meeting requirements such as consuming the least time, following the shortest path, or leaving the least uncovered area. As defined by Gage [62],
there are three types of coverage problem namely, barrier coverage [15, 16, 41, 40],
sweep coverage [178, 40, 43] and blanket coverage [170, 42, 168, 224, 226]. Another
two coverage types are dynamic coverage [99] and search. In barrier coverage, the robots are arranged as a barrier and detect invaders when they pass through it. Sweep coverage can be thought of as a moving barrier covering a particular area, maximizing the detections in a time slot while minimizing the missed detections per area. Blanket coverage is the most difficult one. It aims
at deploying robots in the whole area so that each point of the area is covered by
at least one robot at any time [170] in which the selected deployment locations are
designed to minimize the uncovered area and the deployment procedure should be
as short as possible to reach blanket coverage early. In three-dimensional areas the term blanket is not suitable; the same problem, for both types of areas, is generally called complete coverage or complete sensing coverage.
For complete coverage, there are two types of self-deployment strategies based
on the survey [228] which are the static deployment and the dynamic deployment.
In static deployment, robots move before the network starts to operate, and the deployment result, such as the coverage rate or the connectivity of the network, is assessed under the fixed network structure, or the assessment is independent of
the state of the network. For the dynamic one, the condition of the network may
change such as the load balance of the nodes leading to the repositioning of the
mobile robotic sensors [179]. The complete coverage in this report firstly considered
100% coverage between nodes for a fixed area as the static deployment. Then the
dynamic self-deployment in an extending area, such as the demining area in the
battlefield which could be enlarged, is considered by a simple modification of the
static method. Some deployment methods placed all robotic sensors manually [84]; however, the cost is high if the area is large, so the applications are usually in
an indoor area as in [93, 148, 24]. Moreover, in some applications in a dangerous
or unknown environment, such as demining, human actions are impossible. Thus,
in some papers, mobile sensors were randomly scattered from helicopters, clustered
bombs or grenade launchers with assumptions of the distribution pattern [228] such
as the uniform distribution in [195], the simple diffusion, and the R random in [80].
Dynamic coverage in [99] is similar to a search task. They both describe the
coverage problem with continuously moving sensors to find targets. Movements of
sensors are required because, in a vast area, blanket coverage is not realistic as
it needs a significant number of robots, which is expensive. Therefore, an algorithm should be designed that uses only a selected number of robots to dynamically cover the whole unknown area. Thus, a section of the area will be covered sometimes
and be uncovered in other time slots. However, the dynamic coverage ensures that
each point is visited at least once through the whole search procedure. Note that
in dynamic coverage or search, mobile sensors do not need to move together in a
formation as in sweep coverage. Note that a search task may also be called an area exploration task [126, 190, 17, 25, 35, 76, 108] in mapping-related topics or a foraging strategy in the animal literature [185, 149].
In both complete coverage and search problems, some papers used the assumption about the boundary effect [221, 224, 3, 4]. The assumption claims that the
length, width (and height if it is a 3D area) of the search area is much larger than
the sensing range so that the gaps between the sensed area and the boundary are
small enough to be ignored so targets would not appear in those gaps [3]. However,
some other papers ignored this assumption but still claimed they reached the complete coverage or a complete search [223, 132, 131, 128, 129, 130, 133, 12, 10, 14, 11].
Those papers can only be correct under additional assumptions about the shape of the boundary and the passage width, for example, that the boundary is simple and smooth enough to allow the equilateral triangle grid to stay inside the border. In this report, complete coverage without exceeding the boundary is achieved either by making some assumptions about the shape of the area [225, 222, 226] or by employing the assumption of the boundary effect [221, 224].
Current research on coverage and search has mainly addressed 2D areas, which have relatively simple geographical features, and applies to flat ground and the surface of the water. However, there are a considerable number of applications in 3D
terrains, such as detection of the location of the forest fire [30], underwater search
and rescue [110], oceanographic data collection [211], ocean sampling, ocean resource
exploration, tactical surveillance [2, 150, 157] and hurricane tracking and warning
[44, 157]. The problems in the 3D area bring challenges in many aspects [55]. The
area of interest in 3D spaces, such as underwater area or airspace, is usually huge
and the telecommunication and sensing ranges need to be improved from circular
ranges to spherical ranges which may be solved by attaching more sensors on robots
or using better sensors. The mobility of robots needs to have more Degrees of
Freedom (DOF), which requires entirely different mechanical structure as well as
advanced controllers and sensors. Thus, tasks in 3D spaces require more expensive
hardware on autonomous robots with higher mobility [210]. For the connectivity
and coverage in algorithms, some comprehensively researched tasks in 2D areas are
still waiting to be proved in 3D areas which will be discussed in detail in the next
section. The complex system and task in 3D search and complete coverage lead to
extra computational complexity and larger memory space needed. Some algorithms
in 2D may not apply to 3D tasks through simple extensions or generalizations [79]. However, several works have researched the coverage [216, 145, 213, 3, 128, 224,
111, 129, 133, 55, 79, 150], search [132, 131] and trajectory tracking [162] problems
in 3D areas [130]. Those algorithms can be used in the flight in a 3D terrain
[162, 52, 188, 208] and the exploration in underwater environment for rescue tasks
[110, 16, 182, 194, 213, 3]. Inspired by the search for MH370 in the underwater
environment, this report develops algorithms in a 2D area first and modifies them
to be suitable to be applied on a 3D area.
In complete coverage and search, the ranges of sensors are critical parameters.
There are various assumptions for the sensing ability of sensors [207]. Most papers
[223, 132, 131, 128, 129, 130, 133, 12, 10, 14, 11, 221, 224] assumed a fixed
circular sensing range in 2D tasks and a spherical range in 3D tasks. However, some
papers used a directional sensing range especially for some surveillance tasks using
cameras [55, 101, 56, 199]. References [153, 36] considered sensors with a flexible sensing range, so that the combination of different ranges may reach the maximum coverage rate and save energy. [123] examined the adjustment of both the
direction and the range. [9] discussed the situation that sensors are positioned with
only approximate accuracy because of practical difficulties and provided the method
to set (near-)optimal radius or errors allowed in the placement. Boukerche and Fei
[22] used irregular simple polygon assumption for the sensing range and solved the
problem in recognizing the completely covered sensors and finding holes. In this
report, all the sensors for both targets and obstacles are ideal and omnidirectional.
1.4 Mapping
Geometric maps [20] and topological maps [215, 106] are two kinds of methods for
robots to describe the environment. Although the geometric map is more accurate
containing detailed information of every point of the area, it needs a significant
amount of memory and considerable time for data processing, especially for data
fusion which deals with a huge amount of data collected from other robots. This kind
of work can be seen in navigation using Simultaneous Localization And Mapping
(SLAM) such as [20, 89, 95]. In contrast, a topological map only records certain points of the area, which is more efficient and practical. To reduce the memory
load, simplify the control law and save energy in calculations, a topological map
should be used.
A grid map is a kind of topological map. There are three common ways in the literature to generate a grid map. The first is to generate a Voronoi tessellation [213, 184, 23, 124, 49]. According to Aurenhammer [7] and Wang [207], a Voronoi diagram with N sensors s1 to sN in an area can be defined as a partition of the plane into N subareas with one sensor in each. Within a subarea, the distance from a point to that subarea's sensor is smaller than its distance to any other sensor. If the subareas of two sensors share a common border, the two sensors are called neighbor sensors. Voronoi cells are usually used
in sensor deployment to reach a complete coverage. However, with initial random
deployment, the mobile sensors may not be able to communicate with others due
to a limited communication range. Another cell division can be seen in [33] which
used the square grid map. The robots go from the center of one cell to another cell
and visited cells are remembered. This square grid representation is usually used in
the graph search and the tree search algorithms see, e.g., [126]. The third way to
generate the coverage grid is based on uniform tessellation in 2D area and uniform
honeycomb in 3D space.
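The Voronoi neighbor relation described above can be illustrated with a small brute-force sketch (a discretized approximation for illustration, not an exact geometric construction): each sample point is labeled with its nearest sensor, and two sensors are neighbors if adjacent samples carry different labels.

```python
import math

def voronoi_neighbors(sensors, xmax, ymax, step=0.1):
    """Approximate Voronoi neighbor pairs: label a dense grid of sample
    points with the index of the nearest sensor; sensors whose cells
    touch (adjacent samples with different labels) are neighbors."""
    nx, ny = int(xmax / step), int(ymax / step)
    def nearest(p):
        return min(range(len(sensors)), key=lambda k: math.dist(p, sensors[k]))
    label = [[nearest((i * step, j * step)) for j in range(ny)] for i in range(nx)]
    pairs = set()
    for i in range(nx):
        for j in range(ny):
            for di, dj in ((1, 0), (0, 1)):
                if i + di < nx and j + dj < ny:
                    a, b = label[i][j], label[i + di][j + dj]
                    if a != b:
                        pairs.add((min(a, b), max(a, b)))
    return pairs

# Three collinear sensors: the cell of sensor 1 separates sensors 0 and 2.
sensors = [(1.0, 2.0), (5.0, 2.0), (9.0, 2.0)]
pairs = voronoi_neighbors(sensors, 10.0, 4.0)
```

An exact construction would use computational-geometry routines instead, but the labeling view matches the definition given by Aurenhammer and Wang directly.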
The third method is used in this report and is thus discussed in detail. Using this
approach, robots will move between the vertices of the 2D grid or the center of the 3D
cells in each step to search the area or reach the complete coverage. Therefore, only
the coordinates and visitation states of the vertices need to be included in the map
which is simpler than the grid generated by a Voronoi partition. A uniform division is used because each robot has the same ability, so each cell occupied in complete coverage or visited in search needs to be identical to maximize the available sensing range. It also simplifies the algorithm design, as each step has a similar pattern. To
cover a 2D area with the grid, a topological map can be made using an equilateral
triangular (T) grid, a square (S) grid or a regular hexagonal (H) grid. [87] proposed
that the T grid pattern is asymptotically optimal as it needs the fewest sensors to
achieve the complete coverage of an arbitrary unknown area. With collisions allowed
and the assumption that robots have no volume, Baranzadeh [11, 10, 14, 12] used the T grid, which contains the fewest vertices when the communication range equals √3 times the sensing range. However, in the simulations of those papers, that relation
was not always used. In fact, when considering collision avoidance, the volume of the
robot and the obstacle sensing range, the T grid could not always contain the fewest
vertices in different conditions and could only guarantee the complete coverage with
the assumption of the boundary effect [222, 225].
Unlike the proved result in 2D coverage, there is no proven conclusion showing which grid pattern uses the smallest number of sensors to cover a 3D space
completely. Coverage related works in 3D include Kelvin’s conjecture [200] for finding the space-filling polyhedron which has the largest isoperimetric quotient and
Kepler's conjecture [69] for sphere packing in a cuboid area [213]. Based on the cells proposed in these two conjectures and considering uniform honeycombs, truncated octahedral (TO) cells, rhombic dodecahedral (RD) cells, hexagonal prismatic (HP) cells, and cubic (C) cells could be used, and the honeycomb with the minimum number
of cells under different relations between the communication range and the target
sensing range is given in [3, 4]. Then [131, 132, 128, 129, 130, 133] chose the TO
grid based on their common assumption of the large ratio of the communication
range to the target sensing range. However, [129] did not provide the ratio, and all these papers ignored collisions. In this report, to guarantee a strict 100% 2D complete
coverage of the task area, only the T grid could be used with assumptions about
the passage, the relations between different parameters and the curvature of the
boundaries based on [222, 225]. When a complete coverage is not required and the
assumption of the boundary effect is used, the way to choose the suitable grid under
different situations for both 2D and 3D areas is discussed. In simulations, the grid
pattern is chosen based on the parameters of the potential test robots so that the
simulation results can be applied and verified directly in experiments.
When designing the grid map, there are some papers considering the connectivity
of the network [64, 213, 3, 4, 8, 231]. A mobile sensor may need more than one communication neighbor in case some sensors fail; the extra communication neighbors provide redundancy in the network and improve its reliability.
However, some papers only considered the general space-filling problem with the
assumption of boundary effect but with no obstacles [3, 4, 8]. The area could be a
convex polygon in 2D areas or a convex polyhedron in 3D spaces [64, 231] as the
assumption of passage width and the volume of robots are not given. [213] limited
the shape of the area further to a cuboid based on the sphere packing problem.
So, in a real task area with an irregular shape and obstacles inside, their methods for multi-connectivity are invalid. In this report, a realistic unknown area is considered, and each sensor only needs at least one communication neighbor.
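The weaker requirement adopted here — every sensor keeps at least one communication neighbor — is easy to verify from positions alone; a minimal sketch (illustrative helper, not this report's code):

```python
import math

def all_have_neighbor(positions, comm_range):
    """Return True when every sensor can communicate with at least one
    other sensor, i.e. no node of the network is isolated."""
    for i, p in enumerate(positions):
        if not any(j != i and math.dist(p, q) <= comm_range
                   for j, q in enumerate(positions)):
            return False
    return True

linked = [(0, 0), (1, 0), (2, 0)]      # a chain: everyone has a neighbor
isolated = [(0, 0), (1, 0), (9, 9)]    # the last sensor is out of range
```

Checking this single-neighbor condition requires no assumption on the global shape of the area, unlike the convex-region multi-connectivity results cited above.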
There are also some algorithms without a map, especially algorithms inspired by animals [71, 205, 227, 135, 19]. Animal foragers need to explore the area by only
detecting the randomly located objects in the limited vicinity using the random
walk. The best statistical strategy of optimizing the random search depends on the
probability distribution of the flight length given by [29, 204, 149, 135, 19]. When
the targets are sparsely and randomly distributed, the optimum strategy is the Lévy
flight (LF) random walk which can be seen in marine predators, fruit flies, and honey
bees [136]. It uses the Lévy distribution to generate the step length and the normal
distribution to generate the rotation angle so that robots have fewer chances to go
back to previously visited sites [193, 38, 186, 63, 85, 33, 28]. Thus, robots could
move freely and detect all points in the task area. This scheme means the Lévy
flight algorithm does not need a map to guarantee a complete search of the area
so that exchanging information on the visited areas is also unnecessary and only
the positions of visited targets need to be communicated. This algorithm, on the one hand, largely decreases the demand for memory but, on the other hand, wastes some communication ability. If the number of targets is unknown, the Lévy flight algorithm cannot be used, because with no map recorded the robots will not know when the area is completely searched. References [132, 131] considered a
combination of the Lévy flight algorithm and the TO grid for both static target and
moving target. In those two papers, the step lengths were some fixed distances from
one vertex to any other vertex, and the directions were the corresponding direction
from the current vertex to the destination vertex. The combined algorithm decreased the search time compared to each single algorithm. However, it only considered the situation in which the number of targets is known and did not consider obstacle avoidance, so the search area was simple. It also did not specify the sensing range for the boundary, so how to reject a long step that would exceed the boundary is unknown. This report will compare the proposed algorithm
with animal inspired algorithms in [193, 192] and always consider collision avoidance.
The algorithm works both when the number of targets is given and when it is unknown.
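A Lévy flight step generator of the kind described above can be sketched as follows (a common power-law approximation of the Lévy distribution; the exponent, scale, and turn variance are illustrative assumptions, not the parameters of [132, 131]):

```python
import math
import random

def levy_step(pos, heading, alpha=1.5, min_step=1.0, turn_sigma=0.5):
    """One Lévy flight move: a heavy-tailed step length drawn from a
    power law (a common approximation of the Lévy distribution) and a
    new heading drawn from a normal distribution around the old one."""
    u = 1.0 - random.random()                 # u in (0, 1], avoids division by zero
    length = min_step * u ** (-1.0 / alpha)   # mostly short steps, occasional long ones
    heading += random.gauss(0.0, turn_sigma)  # normally distributed turn angle
    x = pos[0] + length * math.cos(heading)
    y = pos[1] + length * math.sin(heading)
    return (x, y), heading, length

random.seed(0)
path = [(0.0, 0.0)]
heading = 0.0
for _ in range(100):
    nxt, heading, length = levy_step(path[-1], heading)
    path.append(nxt)
```

The rare long steps are what keep the walker from revisiting old sites; in a grid-combined variant the continuous step would be snapped to a vertex, and steps that leave the boundary would have to be rejected, which is exactly the unspecified detail noted above.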
Some current algorithms in coverage problems were heuristic [33, 123, 37] but
there were still several papers with grid maps having rigorous proofs [43, 39]. [170]
achieved 100% coverage of an unknown 2D area with a mathematical proof, but
it exceeded the boundary of the area, so it has only limited applications, such as using a UAV in open airspace to cover the ground or the sea surface. If there
are solid walls or coastlines and ground robots are used, this algorithm could not
work. If the boundary of the area is a borderline in a military application, exceeding
the boundary is also forbidden. Although [11] used a T grid inside the boundary,
the routes of robots still exceeded the boundary which can be seen in its simulation
figures. The reason for this problem is that the author treated the assumption that all accessible vertices are connected as equivalent to the assumption that accessible vertices can be reached by moving along the shortest path; however, there may be no vertices to go through on that path. Even if there were vertices in the shortest passage, they
might be too close to the boundary considering the volume of the robots. Thus,
those vertices should be regarded as blocked, so robots should choose a longer
path. If two vertices are reached from two unconnected edges, the section between
the two vertices may still be undetectable because of the short sensing range which
is only set to be greater than zero in [11]. This report proves the convergence of all
the proposed algorithms without asking robots to go beyond the boundary.
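The detour behavior argued for here — treating vertices too close to the boundary as blocked and accepting a longer path — can be sketched with a breadth-first search over the accessible vertices only (an illustrative grid and blocking rule, not this report's algorithm):

```python
from collections import deque

def grid_path(start, goal, blocked, width, height):
    """Shortest 4-neighbor path on a grid that never enters blocked
    vertices (e.g. those too close to the boundary for the robot's
    volume); returns None when no detour exists."""
    frontier = deque([start])
    parent = {start: None}
    while frontier:
        cur = frontier.popleft()
        if cur == goal:
            path = []
            while cur is not None:
                path.append(cur)
                cur = parent[cur]
            return path[::-1]
        x, y = cur
        for nxt in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if (0 <= nxt[0] < width and 0 <= nxt[1] < height
                    and nxt not in blocked and nxt not in parent):
                parent[nxt] = cur
                frontier.append(nxt)
    return None

# A wall of blocked vertices forces a detour instead of the direct route.
blocked = {(1, 0), (1, 1)}
path = grid_path((0, 0), (2, 0), blocked, width=3, height=3)
```

Because the search expands only through accessible vertices, the returned route can never leave the boundary, at the cost of being longer than the straight-line shortest path.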
1.5 Collision-Free Navigation
For a multi-robot system, an increased number of robots leads to a higher chance that robots encounter each other, and some algorithms for single-robot navigation are no longer applicable. Therefore, collision avoidance is a significant and attractive research topic in multi-robot systems.
Navigation strategies can be categorized into global methods and local methods.
The global one is off-line path planning. Given the global information about the
environment and obstacles as a priori, the robots will build a model of the scenario
[166] and find an optimal traceable path to solve the problem and avoid collisions
[206]. If a path is known, an appropriate velocity profile will be planned for collision avoidance [144]. This problem can be treated as resource allocation, so corresponding algorithms can be utilized. Although this method is robust to deadlock situations, its disadvantages are clear: the environment must be known and structured so that the algorithm can segment it into separate pieces [77]. These disadvantages lead to a centralized formulation of the problem that cannot be conveniently modified into a decentralized approach. [142] proposed an algorithm in a known area. It used ring-shaped local potential fields to
have a path-following control of a nonholonomic robot. In some situations, a part
of the information is given. For instance, the edge of the area in [220] was known
and in [128, 133], the equation to describe the covered area was given. [46] provided
semantic information to assist the exploration. In many path planning and navigation problems like [51], only limited knowledge was given, namely the direction
by installing radio beacons at the target so that the robot can track the beacon to
plan the path.
For the local or reactive method, algorithms are based on sensed and shared
local information. The drawback of some algorithms with this approach is that
they are not based on the kinematic model with realistic constraints of the robot.
Instead, they used heuristic methods. Without justification and mathematical proof, some methods may fail in special circumstances. For example, the control algorithm in [61] would work for two or more robots only under the condition that the robots had a constant speed. If the acceleration of a holonomic robot was limited, a
potential field approach could support the control of three robots at most based on
[73]. To control an unlimited number of robots with no collision, [187] used robots
with unbounded velocity and [202] assigned the velocity of the robot as the control
input. In reactive methods, the artificial potential field algorithm is widely used
[18, 182, 54, 109, 158, 142, 73] as it generates a safe path to the target with little
computational effort. It generates a gradient field between robots and targets to
direct the movement of robots [181]. In that field, goals like the targets generate
the attractive force and act like valleys in the field. Obstacles and boundaries
generate the repulsive force, acting like peaks in the field. So, the superposition of
the two fields will drive robots to go downhill towards targets while moving away
from obstacles. One disadvantage of the algorithm is that robots can be trapped in
local minima which results in the failure of the navigation task. Therefore, [193, 192]
only used the repulsive force between robots to disperse them to move in different
directions, which is also utilized in this report based on [222, 225, 221].
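A minimal sketch of this repulsion-only dispersal (the decay law, influence radius, and gain are illustrative assumptions, not the exact control law of [193, 192] or of this report):

```python
import math

def repulsive_move(me, others, influence=3.0, gain=0.5):
    """Move one robot one step along the sum of repulsive forces from
    nearby robots; each force points away from a neighbor, decays with
    distance, and vanishes beyond the influence radius."""
    fx = fy = 0.0
    for ox, oy in others:
        dx, dy = me[0] - ox, me[1] - oy
        d = math.hypot(dx, dy)
        if 0.0 < d < influence:
            w = gain * (1.0 / d - 1.0 / influence) / d  # strength fades to zero at the radius
            fx += w * dx
            fy += w * dy
    return (me[0] + fx, me[1] + fy)

# Two close robots push this one away; it ends up farther from both.
me = (0.0, 0.0)
others = [(1.0, 0.0), (0.0, 1.0)]
new = repulsive_move(me, others)
```

With no attractive term there is no goal to be trapped short of, which is why dropping the attractive field avoids the local-minimum failure described above.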
The examples in the above two paragraphs all considered static obstacles. More methods in this category include the dynamic window [60], the lane curvature [90] and the tangent graph based navigation [166]. The case of dynamic obstacles, including moving and deforming obstacles, is tough to solve. Thus, many methods put constraints on the moving velocity of obstacles or the changing speed of their shapes. The obstacles were usually modeled as rigid bodies with a selected shape. Moreover, in
[166], the characteristic point of a robot together with its global geometry is calculated, and the velocity of the obstacles is obtained. A local method with range-only
measurements in [121] allowed the shape of obstacles to vary with time, but it placed a strict limitation on the robots' velocity. Inspired by a binary interaction
model, a reactive approach with integrated sensed environment presentation was
illustrated in [166]. This method did not require approximation of the shapes of
obstacles and velocity information of obstacles. It would find a path between obstacles without separating obstacles. In the deployment and complete coverage part
of this report, local navigation is needed. With a grid pattern, collision avoidance
is easier. In the beginning, robots only know their initial positions are within the
search area without other knowledge about the area. However, while moving, the
robots are generating their maps about the visited areas so that the global method
could also be used.
A large percentage of research in coverage, search, formation, and consensus did
not consider the problem of collision avoidance such as [11, 170, 132, 131, 128, 129,
130, 133]. Most of them used simulations to verify the algorithms, and a robot is always assumed to be a point particle without volume. [170, 11] assumed that if more than one robot chose the same vertex to go to, those robots would stay around the vertex, which was treated as a parking lot. However, they did not show how to choose a parking space. Even with a parking-space choosing mechanism, as the number of robots increases, more robots would choose the same vertex, so the car park model does not work. Therefore, in this report, robots do not choose simultaneously, and the communication range is designed to be large enough that the choice is known by all robots that may select the same vertex in this step. Although there is more communication cost, the robots can
avoid collisions while choosing the next step instead of during moving. In [10, 14],
robots could move different distances in one step, which could result in collisions while moving. However, the velocity of the movement was not stated, and the strategy for collision avoidance was not clearly described. Also, the proposed algorithm in [14] did not match the results of its experiment. Therefore, in this report, robots
always move with the unit step length to avoid this problem.
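The sequential, collision-free choice described above can be sketched as follows (a simplified reservation scheme with a hypothetical preference ordering per robot):

```python
def choose_next_vertices(robots, candidates):
    """Robots pick their next grid vertex one after another; a vertex
    already reserved earlier in the sequence is skipped, so no two
    robots ever move to the same vertex in one step."""
    reserved = set()
    choices = {}
    for robot in robots:                  # fixed sequence, e.g. by robot id
        for vertex in candidates[robot]:  # preference order per robot
            if vertex not in reserved:
                reserved.add(vertex)
                choices[robot] = vertex
                break
        else:
            choices[robot] = None         # stay put: all candidates taken
    return choices

candidates = {
    "r1": [(1, 0), (0, 1)],
    "r2": [(1, 0), (1, 1)],   # first choice clashes with r1
    "r3": [(1, 0)],           # only choice already taken -> waits this step
}
choices = choose_next_vertices(["r1", "r2", "r3"], candidates)
```

Because the conflict is resolved while choosing, before anyone moves, unit-length synchronized steps are then enough to keep the motion itself collision-free.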
1.6 Contribution Highlights
The main contributions of the report can be described as follows:
1. One decentralized algorithm for the complete coverage is proposed for multiple
mobile robotic sensors to deploy themselves in both 2D and 3D areas. The
detailed descriptions of the coverage algorithms are presented.
2. The report proposes three decentralized random algorithms for the search task
in 2D or 3D areas. They are improved step by step.
3. The task areas for the four algorithms are arbitrary and unknown, with obstacles
inside. For complete coverage in a 2D area, assumptions about a large target
sensing range and the curvature of boundaries are given. For other problems,
if the boundary effect is ignorable, these two assumptions are not needed.
4. All algorithms utilize grid patterns to cover the area and robots move between
vertices of a grid. The methods to choose the best grid, namely, the grid with
the fewest vertices in a 2D area and the fewest cells in a 3D area,
are proposed considering the relation between different parameters. In some
previous algorithms, the sensing range for obstacles was not considered.
5. All algorithms consider collision avoidance between robots and between robots
and boundaries. Also, realistic constraints of robots including the volume are
set based on potential test robots. Robots choose the next step in a sequence,
and they are always synchronized.
6. Simulations in MATLAB are used to demonstrate the performance of the
algorithms. Another complete coverage algorithm and five other algorithms
for search tasks are simulated in the same areas to show the effectiveness of the
proposed algorithms.
7. The convergence of all algorithms is mathematically proved, which distinguishes
them from many heuristic algorithms.
8. The algorithms are suitable for both 2D and 3D areas. Only slight
modifications based on the grid structures are needed, so it is easy to adapt
them to the other dimension.
9. For the three search algorithms, the number of targets can be known or unknown.
10. Using the simulation results of the second search algorithm, a way to derive
the mathematical relation between the search time, the size of the area and the
number of robots is given. The effects of other parameters are also discussed.
11. Experiments for the second search algorithm are conducted on Pioneer 3-DX
robots to verify its performance in real situations and to help discover design
errors of the algorithm.
1.7 Report Outline
The remainder of the report is organized as follows:
Chapter 2 describes the method to choose the best grid pattern to cover the area
for the four algorithms, considering the related parameters of the robots. It introduces
the basic definitions about the task area, with assumptions for complete coverage
and for coverage ignoring the boundary effect. The grid patterns for the potential
test robots, the Pioneer 3-DX in 2D and the Bluefin-21 in 3D, are chosen. Chapter 3
provides a decentralized complete coverage algorithm for the self-deployment
of multiple robots in both 2D and 3D areas without collisions. The convergence
of the algorithm is proved, with simulations to show its efficiency compared to
another algorithm. Then three distributed search algorithms in 2D and 3D areas
are presented in Chapters 4, 5 and 6 respectively. Rigorous mathematical proofs
of the convergence of these algorithms are provided, and they are all simulated in
MATLAB with comparisons to five other algorithms to show their effectiveness. In
Chapter 4, a decentralized random algorithm for the search task is provided, using
the visitation states of the neighbor vertices. The second search algorithm, in Chapter
5, adds a potential field method on top of the first one to disperse the robots and
decrease repeated routes. In a 2D area, some factors affecting the search time are
analyzed, and a way to find the relation between the search time and other parameters
is given. Also, an experiment on Pioneer 3-DX robots is used to check the performance
of the algorithm. Chapter 6 adds a breadth-first search mechanism to the algorithm of
Chapter 4. It helps robots to jump out of the visited area by looking for the nearest
unvisited vertex in their local maps. The three proposed search algorithms are also compared
with each other to analyze their own merits and demerits. Chapter 7 concludes the
report and provides some possible future research topics to improve or to extend the
presented work.
Chapter 2
Grid Selection for Coverage in 2D and 3D Areas
For complete coverage of an unknown area, mobile sensors are put into the area
and need to move to some places which form a grid to maximize the coverage of
the team. For search problems, robots move around the area until all targets are
found, or the entire area is detected. To ensure the whole area is sensed, robots also
need to have a map. Moreover, to cover or search the area easily, a grid pattern
can be used as a topological representation of the area to decrease the information
needed during the movements of the two tasks. Then, in a 2D space, a robot can move
in a straight line from a vertex of the grid to one of the nearest neighbor vertices, and
in a 3D area, robots can move in a straight line from the center of a polyhedron to one
of the closest centers of another polyhedron. As the robot has a sensing range for the
target, after visiting all the vertices, the whole area is searched if the distance
between two cells of the grid is set reasonably. Similarly, in the complete coverage
task, after the mobile sensors are deployed to those nodes, the whole area is covered.
With a grid pattern, robots only need to record the positions of the vertices in a
2D area or the centers of the polyhedrons in a 3D area to build their maps. Thus, this
topological method also decreases the memory and calculation load compared to
the geometrical method.
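As a small illustration of this topological bookkeeping, a robot's map can be reduced to a dictionary of grid coordinates. This is an illustrative sketch only; the class and method names are not from the report:

```python
# Illustrative sketch (not the report's implementation): a topological map
# that stores only grid vertices (2D) or cell centers (3D), keyed by integer
# grid coordinates, instead of a full geometric map.
class GridMap:
    def __init__(self):
        self.vertices = {}  # coordinate tuple -> "visited" / "unvisited"

    def add_vertex(self, coord, state="unvisited"):
        self.vertices[coord] = state

    def mark_visited(self, coord):
        self.vertices[coord] = "visited"

    def unvisited(self):
        # vertices the robot still has to visit
        return [c for c, s in self.vertices.items() if s == "unvisited"]
```

Compared to a geometric occupancy map, only one entry per grid vertex has to be stored, which is the memory saving described above.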
Looking for a grid to cover an area is the same as the tessellation problem for
a plane and the honeycomb problem for a 3D space. As the robots in the system have
the same sensing ranges and the same communication range, regular tessellations and
space-filling honeycombs are considered to further simplify the design of the control
rule. Different grids have different numbers of vertices. If the same control algorithm
is applied, it is intuitive that the search time will be affected by the number of
vertices. When robots select the next vertex, they need to ensure that there are no
obstacles on the path to that vertex, so the obstacle sensing range is considered in
grid selection. In the selected grid, robots also need to form a sensing coverage of the
whole area and communicate with neighbor robots to exchange information, so the
target sensing range and the communication range should also be considered.
This chapter finds the grid with the minimum number of vertices to cover the
area considering different coverage requirements and all kinds of relations between
parameters about ranges of the robots. The selection criteria are applicable for both
complete coverage and search tasks. Section 2.1 considers the problem in a 2D area
while Section 2.2 considers the selection criteria in a 3D area. A summary of the grid
(a) An equilateral triangular grid (b) A square grid (c) A regular hexagonal grid
Figure 2.1: Three regular tessellation patterns
selection is shown in Section 2.3.
2.1 Grids for 2D Areas
This section illustrates that only the T grid can provide 100% complete coverage. It
also proposes a general method to select the grid with the fewest vertices in a 2D
area, if areas near the boundary can be ignored, by considering the communication
range, the target sensing range and the obstacle sensing range of the robots. Then the
rule is applied to a Pioneer 3-DX robot to see which grid is suitable for it in a search
task.
The tessellation of a plane surface is tiling it with one or more geometric shapes.
Each of the shapes is called a tile. When the same tile is used for the whole
tessellation, three regular tilings are possible: the square, the equilateral
triangle, and the hexagon, as seen in Figure 2.1. If only the circular sensing
range for targets is considered, this problem is the same as allocating base stations
to get the maximum coverage area in telecommunication. Then, based on [156], the T
grid should have the fewest vertices. However, in this report, the communication
range and the obstacle sensing range are also discussed. Thus, the result in [156]
may no longer apply. Nevertheless, [11, 170, 12, 10, 14] still used the T grid to
design their algorithms and, based on this pattern, found the relation between the
target sensing range and the communication range. The sensing range for obstacles
was set equal to the side length of the tile. However, in real applications, the search
or coverage algorithms may be applied to different robots. Thus, the problem needs
to be approached the other way around, namely, choosing the grid based on the
parameters of the robots. There are two kinds of requirements for selecting the grid.
One is strict as in [170], and the other is loose, as can be seen in [11, 3, 4, 131].
Next, the assumptions for the different ranges are given, followed by the discussion
subject to these two requirements.
2.1.1 Definitions and Assumptions
This section follows the style of the definitions in [225]. Examples to
illustrate the relations between ranges are given using a T grid.
Assumption 2.1.1. The search area A ⊂ R2 is a bounded, connected and Lebesgue
measurable set. Robots know nothing about the area initially.
There are m robots labeled rob1 to robm in A. For the basic tiling shape, the
side length is denoted a, and in each step robots can only move the distance a
along a side, from one vertex of the grid to one of the nearest neighbor
vertices.
Assumption 2.1.2. In the 2D area, all the ranges for robots are circular.
Robots have a swing radius rrob. Let the maximum possible error, containing
odometry errors and measurement errors, be e. Then a safety radius rsafe can be
defined to avoid collisions.
Definition 2.1.1. To avoid collisions with other objects before moving, a safe distance
rsafe should include both rrob and e (see Figure 2.2). So rsafe ≥ rrob + e.
Thus, vertices which are within rsafe of the boundaries or obstacles
should not be visited. In Figure 2.2, vertex2 is within rsafe of the obstacle, so it
cannot be reached. This means the obstacles around vertex2 need to be detected
by robi so that it will not choose to move there. Thus, sensors on robots should have
a large enough sensing range rso for obstacles. So rso should satisfy the following
assumption, as can be seen in Figure 2.2 and Figure 2.3.
Assumption 2.1.3. rso ≥ a + rsafe.
The vertices which are within rso of robi are called sensing neighbors of robi.
Robots are equipped with wireless communication tools with range rc. The
robots within rc of robi are called communication neighbors of robi. The
communication between robots is temporary, as in an ad-hoc network, so each robot
can be considered equal, with a local communication range. To avoid making the same
choice as other robots in the same step, robots should choose in a certain order and
communicate their choice to neighbor robots which are one or two vertices away. A
possible order for search tasks, given in Figure 2.3, is that robots on the right side
and the top of the current robot have higher priorities. This order will be explained
in detail in Chapter 4. Considering the error e, rc should satisfy Assumption 2.1.4.
Assumption 2.1.4. rc ≥ 2a + e.
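The neighbor definitions and the two range assumptions above can be sketched as follows; the helper names and the numeric values in the test call are illustrative, not from the report:

```python
import math

def dist(p, q):
    # Euclidean distance between two 2D points
    return math.hypot(p[0] - q[0], p[1] - q[1])

# Sensing neighbors: vertices within r_so of the robot; communication
# neighbors: other robots within r_c. Illustrative helper, not from the report.
def neighbors(robot, vertices, robots, r_so, r_c):
    sensing = [v for v in vertices if dist(robot, v) <= r_so]
    comm = [r for r in robots if r != robot and dist(robot, r) <= r_c]
    return sensing, comm

# Maximum admissible step length implied by Assumption 2.1.3
# (r_so >= a + r_safe) and Assumption 2.1.4 (r_c >= 2a + e).
def max_step(r_so, r_safe, r_c, e):
    return min(r_so - r_safe, (r_c - e) / 2)
```

For example, with r_so = 5, r_safe = 1, r_c = 92 and e = 0, the step length is bounded by min(4, 46) = 4.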
Now, the length of a step can be shown as an inequality:

a ≤ min(rso − rsafe, (rc − e)/2)    (2.1)

2.1.2 The Strict Requirement
This requirement demands that robots completely search through the grid. Only a T
grid can be used under this circumstance, with Assumptions 2.1.5, 2.1.6 and 2.1.7
below.
Figure 2.2: The illustration for a, rsafe and rso
Figure 2.3: rso, rc and the selection order in search
Figure 2.4: Curvature example for the equilateral triangular grid
Assumption 2.1.5. rst ≥ a + rsafe.
This assumption on the sensing range for targets guarantees that areas around
inaccessible vertices can still be detected. For example, in Figure 2.4, the black
circles have a radius of a + rsafe and the red line is the boundary. It can be seen
that vertex v4 is less than rsafe away from the boundary, so it cannot be visited. But
Assumption 2.1.5 guarantees that the area D near v4 can be detected by robots at
vertices v1, v2 and v3.
Assumption 2.1.6. The curvature of the concave part of the boundary should be
smaller than or equal to 1/rst .
Without this assumption, some parts of the searched area may not be detected,
such as section A in Figure 2.4. In that figure, circles enclose all the sensed area,
and sections A, B and C have boundary segments with curvatures greater than 1/rst. It
can be seen clearly that section A cannot be detected or visited, although sections B
and C happen to be detectable. The reason that the S grid and the H grid are not used
is that they admit no such limitation that guarantees a complete coverage. For
the S grid, Figure 2.5 demonstrates that a curvature condition would be useless, as
there is no vertex near the intersection point p of the sensing ranges. Thus, a curve
with any curvature could go beyond p and leave a section undetected, like section A.
The H grid has the same problem.
Assumption 2.1.7. Let Wpass represent the minimum width of the passages between obstacles or between obstacles and boundaries. Then Wpass ≥ a + 2rsafe.
This assumption guarantees that there is always at least one accessible vertex in
the path. Otherwise, if the passage is not large enough, as in Figure 2.6, robots at
Figure 2.5: Example of the curvature in the S grid
va cannot reach vb, since a robot can only move a distance a in one step but vertices
vc, vd, ve and vf are inaccessible, leaving the section in between undetectable.
2.1.3 The Loose Requirement
This requirement only demands that all vertices are sensed, because it relies on an
assumption about the boundary effect, which can be seen in [221, 224, 3, 4].
Assumption 2.1.8. The search area is usually very large, so a is much smaller
than the length and the width of area A. Therefore, the boundary effect is negligible.
This assumption means that there are no targets in the area near the boundary,
such as the slashed area under point p in Figure 2.5, so robots do not need to
completely sense that area, and Assumption 2.1.6 is not needed. Thus, all three
kinds of grids can be discussed. Then Assumptions 2.1.5 and 2.1.7 for rst and Wpass
are made differently based on the structure of each grid pattern. Here, Wpass is still
used to give the minimum passage width, so a passage can be traversed by a robot.
But the purpose of rst is now to ensure that all the sections between accessible
vertices are completely sensed, which is weaker than the requirement that all sections
are completely sensed under the strict requirement. Table 2.1 shows the relation
between rst, a, rsafe and rrob, and the value of Wpass, represented by a and rsafe, for
each grid pattern. The subscript of a denotes the name of the corresponding grid.
To use the least number of vertices in the search area, for the same a, one vertex
needs to have the fewest neighbor vertices so that the polygon (tile) occupied by each
Figure 2.6: A narrow path
Shape | rst                          | Wpass
T     | aT ≤ √3(rst − rsafe + rrob)  | aT + 2rsafe
S     | aS ≤ √2(rst − rsafe + rrob)  | √2 aS + 2rsafe
H     | aH ≤ rst − rsafe + rrob      | 2aH + 2rsafe

Table 2.1: Wpass and rst for each grid
vertex has the maximum area. Also, the maximum a should be used. [8] found the
grid pattern which results in the minimum number of vertices using different values of
rc/rst for a static coverage problem. This report considers three parameters in
choosing the grid pattern, namely rc, rso and rst, which differ based on the devices
used. So a discussion covering these three parameters in general situations is
provided, based on the maximum a from the primary results in Inequality (2.1) and in
Table 2.1. In this subsection, the result from Inequality (2.1) is labeled a1 and the
result from Table 2.1 is labeled a2. To simplify the discussion and illustrate only
the procedure of the calculation, e and rsafe, which depend on the selected equipment,
are ignored, while collision avoidance is still considered.
So the primary results to maximize a simplify to a1 = min(rso, rc/2), a2T = √3 rst, a2S = √2 rst and a2H = rst. Then the
maximum a can be written as a = min(a1, a2). If a1 > a2, the area is represented
by rst, which is the same for the three grid patterns, as it is a fixed parameter of
the chosen robot. Otherwise, a = a1 and the area is represented by a1, which is also
the same for the three grids. Table 2.2 shows the area S occupied by each
vertex and Wpass under different conditions. S and Wpass carry subscripts indicating
the type of grid. The entries of the table have three colors representing the ranking
of the values, with red for the largest value, orange for the medium value and cyan
for the smallest one. According to this table, if a1 ≥ 1.52rst, the T grid should be
used to have the fewest vertices. If 1.52rst ≥ a1 ≥ 1.14rst, the S grid should be
used. An H grid should be used when 1.14rst ≥ a1. Although a larger a leads to fewer
vertices and less search time, it increases Wpass and thus shrinks the scope of
applicability.
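The "area per vertex" entries of Table 2.2 can be reproduced with a short sketch. As in the text, e and rsafe are ignored: a1 = min(rso, rc/2), and the per-grid cap a2 is the simplified Table 2.1 bound (a2T = √3 rst, a2S = √2 rst, a2H = rst). The function name is illustrative:

```python
import math

# Area of the plane associated with one vertex of each lattice (cf. Table 2.2):
# T: sqrt(3)/2 * a^2 (~0.87 a^2), S: a^2, H: 3*sqrt(3)/4 * a^2 (~1.30 a^2).
# Illustrative sketch, not the report's implementation.
def area_per_vertex(grid, a1, r_st):
    cap = {"T": math.sqrt(3) * r_st,
           "S": math.sqrt(2) * r_st,
           "H": r_st}[grid]
    a = min(a1, cap)  # maximum usable side length for this grid
    factor = {"T": math.sqrt(3) / 2,
              "S": 1.0,
              "H": 3 * math.sqrt(3) / 4}[grid]
    return factor * a * a
```

For instance, for a1 ≥ 1.73 rst the T entry caps at roughly 2.60 rst², matching the first column of Table 2.2.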
2.1.4 The Choice for the Pioneer 3-DX Robot
The Pioneer 3-DX robot in Figure 2.7 is widely used in research. The second search
algorithm in this report also uses it to verify performance. Therefore, this robot
is introduced here, and a suitable grid pattern for it is selected. Based on the
above discussion, for complete coverage a T grid must be used, so only the grid
pattern under the loose requirement needs to be chosen based on the parameters
of this robot. The robot has 16 sonars around its body, shown as the yellow circle in
the figure. The layout of the 8 front sonars can be seen in Figure 2.8. There
are another 8 sonars at the symmetrical rear side of the robot. Thus, the robot can
sense the environment through 360 degrees. The sonar is used to sense obstacles, with
0.17m < rso ≤ 5m. The target is sensed by the laser, which is the cyan part in Figure
2.7. The laser model is the SICK-200, which has rst = 32m over a range of 180 degrees,
as illustrated in Figure 2.9. To detect 360 degrees, the robot needs to turn
around after the detection of one side is finished. For communication, the robot uses
a Cisco Aironet 2.4 GHz Articulated Dipole Antenna AIR-ANT4941 with rc = 91.4m
at 1 Mbps, typically in an indoor environment. In the calculations, the equality
conditions of the inequalities are used. Therefore, the maximum a1 = 5m, as
a1 ≤ min(5, 91.4/2). As 1.14rst > a1, the H grid should be used. Based on Table 2.2,
this belongs to the last (7th) situation. From Table 2.1, a2H = 32m. So the side
length of the H grid is a = min(a1, a2H) = 5m. Then Wpass = 10m.
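The calculation in this subsection can be sketched in code. The threshold constants come from Table 2.2, and e and rsafe are ignored as above; the function name is illustrative, not from the report:

```python
import math

# Simplified 2D grid selection under the loose requirement (Table 2.2
# thresholds, e and r_safe ignored). Illustrative sketch only.
def select_grid_2d(r_so, r_c, r_st):
    a1 = min(r_so, r_c / 2)              # from Inequality (2.1)
    if a1 >= 1.52 * r_st:
        grid, a2 = "T", math.sqrt(3) * r_st
    elif a1 >= 1.14 * r_st:
        grid, a2 = "S", math.sqrt(2) * r_st
    else:
        grid, a2 = "H", r_st
    return grid, min(a1, a2)

# Pioneer 3-DX values: r_so = 5 m, r_c = 91.4 m, r_st = 32 m.
grid, a = select_grid_2d(5.0, 91.4, 32.0)
print(grid, a)  # H grid with side length a = 5.0 m, as derived above
```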
Range  | a1 ≥ 1.73rst | 1.73rst ≥ a1 ≥ 1.52rst | 1.52rst ≥ a1 ≥ 1.41rst | 1.41rst ≥ a1 ≥ 1.23rst
ST     | 2.60rst²     | 0.87a1²                | 0.87a1²                | 0.87a1²
SS     | 2rst²        | 2rst²                  | 2rst²                  | a1²
SH     | 1.30rst²     | 1.30rst²               | 1.30rst²               | 1.30rst²
WpassT | 1.73rst      | a1                     | a1                     | a1
WpassS | 2rst         | 2rst                   | 2rst                   | 1.41a1
WpassH | 2rst         | 2rst                   | 2rst                   | 2rst

Range  | 1.23rst ≥ a1 ≥ 1.14rst | 1.14rst ≥ a1 ≥ rst | rst ≥ a1
ST     | 0.87a1²                | 0.87a1²            | 0.87a1²
SS     | a1²                    | a1²                | a1²
SH     | 1.30rst²               | 1.30rst²           | 1.30a1²
WpassT | a1                     | a1                 | a1
WpassS | 1.41a1                 | 1.41a1             | 1.41a1
WpassH | 2rst                   | 2rst               | 2a1

Table 2.2: Wpass and the area occupied by each polygon
Figure 2.7: The Pioneer 3-DX robot
Figure 2.8: Layout of 8 sonars in the front
Figure 2.9: Range of the laser SICK-200
2.2 Grids for 3D Areas
This section proposes a general method to select the grid pattern that covers a 3D
area with the fewest vertices. Different relations between the communication range,
the target sensing range and the obstacle sensing range are discussed in a table.
Then the suitable grid for the UUV Bluefin-21 is chosen using that table.
Tessellation of a 3D area is called a honeycomb, which fills a 3D space with
cells. As discussed for the 2D area, identical cells are used to simplify the
representation of the area and the design of the control algorithm. The honeycomb
case is more complicated than tessellation, as there are no proven results on which
grid pattern uses the fewest cells to fill a space when the same sensing range is
employed in all grid patterns. However, based on similar problems such as Kepler's
conjecture and Kelvin's conjecture [3, 4], possible polyhedrons include the C grid,
the HP grid, the RD grid and the TO grid, as seen in Figure 2.10. [3, 4] also
discussed which grid should be chosen under different values of rc/rst. Then [129]
chose the TO grid without giving the communication range. However, rc ≥ 1.549rst
(that is, 2√3/√5 rst) is enough to connect all vertices, and when
1.587rst ≥ rc ≥ 1.549rst, an HP grid should be used. Then [131, 132, 128, 130, 133]
considered the situation rc ≥ 1.789rst (that is, 4/√5 rst). However, as collision
avoidance was not discussed in detail, these papers did not use rso. Moreover, they
considered the question of designing the best grid pattern to cover the area and then
setting the ranges based on this grid. But, in reality, the search or coverage
algorithms should be applicable to different robots carrying devices with various
ranges. So this chapter starts from the range-related parameters of the robots to
find the corresponding suitable grid pattern. Different from the 2D problem, this
section only discusses grid selection under the loose requirement.
(a) cubic
(b) hexagonal prismatic
(c) rhombic dodecahedral
(d) truncated octahedral
Figure 2.10: Four uniform honeycombs
2.2.1 Definitions and Assumptions
This section follows the style of the definitions in [221]. Examples to
illustrate the relations between ranges are given in a C grid.
Assumption 2.2.1. A three-dimensional area A ⊂ R3 is a bounded, connected and
Lebesgue measurable set.
There are m robots labeled rob1 to robm in A.
Definition 2.2.1. If there are edges with different lengths in the grid and robots can
visit all vertices through various sets of edge lengths, then the maximum length
of each set is compared with the others, and the smallest one is chosen as the edge
length a. Robots then move the distance a in a straight line from one vertex to an
accessible neighbor vertex in one step.
3D grids based on a uniform honeycomb are used, and the centers of the polyhedrons
are the vertices of the grid. The grid is built up by connecting each vertex
to its nearest surrounding neighbor vertices.
Assumption 2.2.2. All sensors and communication equipment on the robots have
spherical ranges. Any range described by a radius refers to a spherical radius.
Let the swing radius of the robots be rrob. The total error of the robots is
represented as e and the safety radius is rsafe. We then have a definition similar to
Definition 2.1.1 and two assumptions for the obstacle sensing range rso and
the communication range rc similar to Assumptions 2.1.3 and 2.1.4. The only
difference is that the ranges in these definitions are spherical instead of circular.
Shape | rst
C     | aC ≤ 2(rst − rsafe + rrob)/√3
HP    | aHP ≤ √2(rst − rsafe + rrob)
RD    | aRD ≤ √2(rst − rsafe + rrob)
TO    | aTO ≤ 2√15(rst − rsafe + rrob)/5

Table 2.3: rst for each 3D grid
Definition 2.2.2. rsafe ≥ rrob + e, so that collisions with other objects can be
avoided by path planning before a move (see Figure 2.11).
Assumption 2.2.3. rso ≥ a + rsafe.
Assumption 2.2.4. rc ≥ 2a + e.
So, the length of a step is

a ≤ min(rso − rsafe, (rc − e)/2).    (2.2)
Vertices which are within rso of robi are called sensing neighbors of robi. Robots
within rc of robi are called communication neighbors of robi. Communication in
the 3D space also uses an ad-hoc network with an omnidirectional antenna to get
the spherical coverage.
2.2.2 The Loose Requirement
Under the loose requirement, the area between vertices must be fully covered, while
the area between the outer sensing spheres and the boundaries can be ignored, as the
following assumption is used.
Assumption 2.2.5. The search task for a team of robots usually has a vast area,
so a is much smaller than the length, the width and the height of area A. Therefore,
the boundary effect is negligible [3, 4].
When selecting the type of polyhedron, the number of polyhedrons needed to fill the
3D space has to be minimized, so the maximum a needs to be used. For the same
a, one vertex needs to have the fewest sensing neighbors to allow each polyhedron to
have the maximum volume. For the HP cell, there are two kinds of edges connecting
vertices and robots must move along both to visit all vertices, so a equals the
longer one. For a TO cell, there are also two kinds of edges, but robots can visit all
vertices through the shorter ones only, so a equals the shorter one to maximize the
volume of the polyhedron. The relation between rst and a is listed in Table 2.3.
In the 3D space, the three parameters of the robots, namely rc, rso and rst, are
still considered in the discussion to find the grid with the fewest vertices in
different situations. Thus, the results in [3, 4], which considered only rc and rst
for a similar
Figure 2.11: a, rsafe, rso, rc and the order of selection in search.
Range | a1 ≤ 1.1547rst | 1.1547rst ≤ a1 ≤ 1.2599rst | 1.2599rst ≤ a1 ≤ 1.2961rst | 1.2961rst ≤ a1 ≤ 1.4142rst
VC    | a1³            | 1.1547rst³                 | 1.1547rst³                 | 1.1547rst³
VHP   | 0.7071a1³      | 0.7071a1³                  | 0.7071a1³                  | 0.7071a1³
VRD   | 0.7071a1³      | 0.7071a1³                  | 0.7071a1³                  | 0.7071a1³
VTO   | 0.7698a1³      | 0.7698a1³                  | 0.7698a1³                  | 0.7698a1³

Range | 1.4142rst ≤ a1 ≤ 1.5431rst | 1.5431rst ≤ a1 ≤ 1.5492rst | a1 ≥ 1.5492rst
VC    | 1.1547rst³                 | 1.1547rst³                 | 1.1547rst³
VHP   | 1.4142rst³                 | 1.4142rst³                 | 1.4142rst³
VRD   | 1.4142rst³                 | 1.4142rst³                 | 1.4142rst³
VTO   | 0.7698a1³                  | 0.7698a1³                  | 1.5492rst³

Table 2.4: The volume occupied by each cell
problem cannot be used. The discussion in this section uses the maximum a from
the primary results in Section 2.2.1 and in Table 2.3, represented here as a1
and a2. Similarly, e and rsafe are ignored to simplify the discussion and to
illustrate only the procedure of the calculation; these two parameters are affected by
the accuracy of the equipment and the disturbances in the environment. However,
collision avoidance is still considered in the algorithm design. The primary results
to maximize a are a1 = min(rso, rc/2), a2C = 2(rst − rsafe + rrob)/√3,
a2HP = √2(rst − rsafe + rrob), a2RD = √2(rst − rsafe + rrob) and
a2TO = 2√15(rst − rsafe + rrob)/5.
Thus, the maximum a = min(a1, a2). To represent the volume occupied by each
polyhedral cell, the same method as in 2D is used. So when a1 > a2, the volume is
represented by rst, which is a fixed parameter of the selected robot. Otherwise,
a = a1 and the volume is represented by a1, which is calculated from the other fixed
parameters rso and rc. Therefore, the values in the table are comparable across the
four grid patterns. Table 2.4 shows the volume V, with a subscript describing the
shape of the polyhedron, occupied by each vertex. Colored values are again used
in the table to show the comparison result, with red for the largest value, orange
for the medium value and cyan for the smallest one. The table shows that the C
grid should be chosen when a1 ≤ 1.2599rst and the HP or RD grid should be
employed when 1.4142rst ≤ a1 ≤ 1.5431rst. The TO grid is used when
1.2599rst ≤ a1 ≤ 1.4142rst and when a1 ≥ 1.5431rst, to have the fewest vertices.
Figure 2.12: Bluefin-21[21]
2.2.3 The Choice for the Bluefin-21 Robot
The Bluefin-21 is a UUV (see Figure 2.12) which carries multiple sensors and payloads
for tasks such as search, salvage and mine countermeasures [21]. It became famous
during the search for MH370, which also inspired the research in this report. In the
search task, the sensor for the underwater locator beacon has a target sensing range
of at least rst = 1km in normal conditions [86]. The sensor for obstacles is the
EdgeTech 2200-M side scan sonar. It has different sensing ranges at different
frequencies and resolutions. To guarantee that obstacle avoidance is successful, the
minimum value rso = 75m is used to ensure accuracy. For local communication, there is
no information in the datasheet about the equipment used or the range over which it
can communicate, so the parameter of the 2D robot is used, which is a Wi-Fi
communication with rc = 91.4m. In the calculations, the equality conditions of the
inequalities are used. Thus, based on Inequality (2.2), the maximum a1 =
min(75, 91.4/2) = 45.7m. As a1 < 1.1547rst, the first column of Table 2.4 is used
and the C grid is chosen. Then a = min(a1, a2) = 45.7m and the corresponding
assumption on the passage width Wpass is given.
Assumption 2.2.6. All passages between obstacles or between the obstacles and
borders have a cross-section which can include a circle with a radius of
√3 a + 2rsafe. This can be seen in Figure 2.13.
So the theoretical Wpass without rsafe is 79.1547m. This guarantees that there
is always at least one accessible vertex in the cross-section, as illustrated in
Figure 2.13. In the figure, a robot moving from a blue vertex (x) to a red vertex
(+) must go through a black vertex (o), as the robot can only move a distance a in
one step. Although it is obvious that decreasing a will expand the scope of
applicability, the number of vertices will increase, so the search time will also
increase.
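The Bluefin-21 calculation can be sketched in the same way as the 2D case. The threshold constants come from Table 2.4, with e and rsafe ignored; the function name is illustrative, not from the report:

```python
import math

# Simplified 3D grid selection (Table 2.4 thresholds, e and r_safe ignored).
# Illustrative sketch only.
def select_grid_3d(r_so, r_c, r_st):
    a1 = min(r_so, r_c / 2)              # from Inequality (2.2)
    if a1 <= 1.2599 * r_st:
        grid, a2 = "C", 1.1547 * r_st
    elif a1 <= 1.4142 * r_st:
        grid, a2 = "TO", 1.5492 * r_st
    elif a1 <= 1.5431 * r_st:
        grid, a2 = "HP/RD", 1.4142 * r_st
    else:
        grid, a2 = "TO", 1.5492 * r_st
    return grid, min(a1, a2)

# Bluefin-21 values: r_so = 75 m, r_c = 91.4 m, r_st = 1000 m.
grid, a = select_grid_3d(75.0, 91.4, 1000.0)
print(grid, round(a, 1))           # C grid, a = 45.7 m, as derived above
print(round(math.sqrt(3) * a, 4))  # ~79.1547 m: theoretical Wpass without r_safe
```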
2.3 Summary
In summary, this chapter proposed that a grid pattern should be used both as the set
of final locations to deploy the wireless mobile sensor network and as the
representation of the area, for both search and complete coverage tasks. Three regular tessellations
Figure 2.13: Passage width for a 2D area
in a 2D field and four uniform honeycombs in a 3D area were considered. Then
Table 2.2 for 2D and Table 2.4 for 3D were provided as the selection criteria
for the different relations between the communication range, the target sensing range
and the obstacle sensing range. Typical robots, the Pioneer 3-DX in 2D and the
Bluefin-21 in 3D, were then used to exemplify the selection procedure. Based on the
results, the later simulations and experiments will mostly use the T grid in 2D tasks
and the C grid in 3D tasks.
Chapter 3
Optimal Collision-Free Self-Deployment for Complete Coverage
Complete coverage is an attractive topic for multi-robot systems. It means that each
point of the task area is detected by at least one robot, so the team covers the
area completely for surveillance or data collection. In this task, communication
between the robots is usually required, so each robot should be connected to at least
one other robot to form a network. Thus, any incident in the area or any intruder
moving into the area will be known to the whole network, which can then take further
action. The mobile robotic sensor network in this chapter is an example of a
networked control system with limited communication [112, 113, 117, 116]. As the
area to cover is unknown or may be dangerous, robots cannot be deployed manually and
thus need to be released from a known entrance to the area. The deployment to the
final locations should be as fast as possible, so this chapter derives an optimal
algorithm that drives the robots while avoiding collisions.
Self-deployment algorithms for 2D and 3D areas can be found in [170] and [128]. In
their deployment phases, all robots applied the decentralized random (DR) algorithm
at the same time, which cannot cover the area completely, because the algorithm only
works when there is a sequence of choices and robots inform their neighbors of their
selections, which, however, was not mentioned in [170, 128]. Thus, the absorbing
state in their proofs may not be reached, and the algorithm may be invalid. The DR
algorithm has the disadvantage that robots need to convert received coordinates to
their own coordinate systems, which leads to a large calculation load. To allow a
comparison with the DR algorithm, robots in this chapter also start from certain
vertices of a shared grid in some possible ways. Then, an optimal navigation
algorithm is designed to use the fewest steps for coverage.
This chapter presents a novel distributed algorithm for deploying multiple robots
in an unknown area to achieve complete coverage. The task area can be two-dimensional
or three-dimensional, and it is bounded and arbitrary, with obstacles and certain
assumptions. The robots, with limited sensing ranges and a limited communication
range, move in sequence via the grid patterns chosen in Chapter 2 to reach complete
coverage. The algorithm sets a reference point and, in each step, drives the furthest
robot first to move to one of the most distant neighbor vertices, with no collisions.
Finally, every point of the area can be detected by at least one robot, and the
robots achieve this using the fewest steps. If the task area is enlarged later, the
algorithm can easily be modified to fit this requirement. The convergence of the
algorithm is proved with probability one. Simulations are used to compare the
algorithm with a valid modification of the DR algorithm in different areas to show
the effectiveness and scalability of the algorithm.
The rest of the chapter is divided into two sections, for the 2D problem and the
3D problem respectively. Each section starts with the problem statement, followed
by the description of the algorithm; then MATLAB simulations and a comparison with
another algorithm are shown, and finally a summary is provided.
3.1 Complete Coverage in 2D
This section provides a decentralized collision-free algorithm to deploy multiple
robotic sensors in a task area A with obstacles to achieve 100% complete coverage
using the T grid pattern from Chapter 2. Thus, intruders entering the covered
area can be detected. This section adopts the definitions and assumptions of
[11, 225, 168]. The assumptions and definitions about the area, the ranges, and
the curvature under the strict requirement were stated in Section 2.1.1, so they
are used directly without further explanation. The original work of this section
comes from [225].
3.1.1 Problem Statement
Assumption 3.1.1. There are a finite number of obstacles O1 , O2 , . . ., Ol , and
the obstacles Oi are non-overlapping, closed, bounded and linearly connected for any
i > 0. The obstacles are static, arbitrary, and unknown to the robots a priori.
Definition 3.1.1. Let O := ∪i Oi for all i > 0. Then Ac := A \ O represents the
area that needs to be covered. Area Ac can be static or expanding.
In each move, a robot translates in a straight line from a vertex vi of the grid
to a selected vertex vj , one of its nearest neighbors; the distance moved is
therefore the side length a of the basic grid pattern. In Ac , the set of accessible
vertices is denoted by Va . If |•| denotes the number of elements of a set, then
|Va | is the number of accessible vertices. m robots with the same sensing and
movement abilities are released from Ni known initial vertices to cover Ac . For
the optimal deployment, let No denote the number of steps needed.
For the complete coverage, the number of robots prepared needs to satisfy
Assumption 3.1.2. m ≥ |Va |.
The set of sensing neighbors at discrete time k is denoted Ns,i (k), and its jth
element is at position ns,i,j (k). Nc,i (k) denotes the set of communication
neighbors at time k. As all robots are equal, an ad-hoc network is used for
communication between robi and the robots in Nc,i (k). These local networks are
temporary and are rebuilt when robots arrive at new vertices. In future tests on
real robots, Wi-Fi networks in ad-hoc mode will be used with Transmission Control
Protocol (TCP) socket communication.
At time k, the chosen position of robi is pi (k). A reference position pr is selected
from the initial positions. Some definitions about distances are given below.
Definition 3.1.2. The distance from pi (k) to pr is denoted d(pi (k), pr ). Similarly,
the distance from an accessible neighbor vertex ns,i,j (k) to pr is d(ns,i,j (k), pr ),
and the maximum over all ns,i,j (k) is denoted max(d(Ns,i (k), pr )), attained by
Nmax sensing neighbors. A vertex chosen uniformly at random among those Nmax
neighbors is denoted c.
Assumption 3.1.3. Wpass ≥ a + 2rsafe .
3.1.2 Algorithm
The algorithm includes two parts. The first part allocates the robots to a common
T grid with a common coordinate system. In the second part, a decentralized
collision-free algorithm deploys the robots using the grid.
3.1.2.1 Initial Allocation
Before deployment, the methods in [128, 168] can only build a common grid with
small errors in certain situations, and one vertex may hold more than one robot,
i.e. collisions are allowed. To compare with that algorithm and design a practical
initialization with a common coordinate system, this chapter manually selects a
few vertices of a grid near the known entrance of the area as initial positions
for the robots and uses one of them as pr . The initial positions and pr are placed
where robots have more choices in the first step, based on the limited knowledge
of the area, so that robots can leave the initial vertices earlier without blocking
each other. To allocate multiple robots to the same point in different steps, the
multi-level rotary parking system from [141] shown in Figure 3.1 could be used:
when a robot moves away from a vertex, the empty parking place rotates up and
another robot is rotated down to the same start position. Another possible method
is to release robots from rocket launchers such as the tubes employed in the
LOCUST project for the Coyote unmanned aircraft system [1] in Figure 3.2. As the
relative positions of the tubes are fixed, the position of the robot in one tube
can be set as the reference so that all robots share the same coordinate system
and the same grid.
3.1.2.2 Distributed Algorithm
The distributed deployment algorithm is designed based on the search algorithms in
[223, 225, 222, 221]. The flow chart of the algorithm is shown in Figure 3.3, where
the abbreviation ‘comm’ is short for communication. Steps 6 and 7 only apply when
the area is static; if the area will be extended, the algorithm only includes the
loop from Step 1 to Step 5. In both situations, robots sense intruders continuously
and broadcast the positions of intruders when they are found. However, for a static
area, intruder detection may in some applications only be required after the
deployment phase stops in Step 7.
1. Comm1, decide sequence: A robot calculates d(pi (k), pr ) at the beginning
of each loop and exchanges d(pi (k), pr ) and pi (k) with its communication neighbors.
Figure 3.1: The multilevel rotary parking system
Figure 3.2: The launcher for multiple UAVs in the LOCUST project
Figure 3.3: The flow chart for the 2D complete coverage
Then, the one with the larger d(pi (k), pr ) has higher priority. If more than one
robot has the same d(pi (k), pr ), another parameter must be used to decide the
selection sequence. As only addresses, such as IP addresses, differ between robots,
this chapter lets the robot with the lower IP address choose with higher priority;
if the robot with the higher address chose first instead, the result would be
the same.
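The Step 1 priority rule can be sketched in code. A minimal sketch, assuming positions are 2D tuples and IP addresses are strings; the function and variable names are illustrative, not from the thesis:

```python
# Sketch of the Step 1 priority rule; names are illustrative.
import math

def dist(p, q):
    """Euclidean distance between two 2D points."""
    return math.hypot(p[0] - q[0], p[1] - q[1])

def choosing_order(robots, p_r):
    """robots: list of (ip, position); return IPs in choosing order.

    A larger d(p_i(k), p_r) gives higher priority; ties are broken by
    the lower IP address, as described in Step 1.
    """
    return [ip for ip, pos in sorted(robots, key=lambda r: (-dist(r[1], p_r), r[0]))]
```

Sorting by the negated distance first and the address second reproduces both the distance rule and the tie-break in a single pass.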
2. Find robot obstacles: The positions of robots are known, so robots within
rso of robi are treated as obstacles, and the occupied vertices are inaccessible
to robi . The Ni release vertices are always considered obstacles, so robots will
not choose them as the next step and new robots can be added to the area.
3. Wait, sense obstacles: A robot needs to wait for the choices of the robots
with higher priorities, because the chosen positions can be treated as detected
points in the next step and may also be obstacles for robi . If all robots are
connected, the last one in the choosing sequence has to wait for every other robot,
and the processing time for the whole selection process should be set to the value
in this worst case: it is the longest time, and using it, all robots can be
synchronized in each loop.
In its turn, robi detects the objects within rso but ignores the sections occupied
by existing neighbor robots. The way to judge whether an object p is an obstacle
is illustrated in Figure 3.4. The general rule is that if any sensed object lies
on the way to a vertex ns,i,j (k) in Ns,i (k), that object is treated as an obstacle,
so vertex ns,i,j (k) cannot be reached and is removed from Ns,i (k).
In Figure 3.4, rnew is the distance between pi (k) and the left arc, where
rnew = √(a² + rsafe²).
dpr represents the distance from the sensed object p to pi (k), the position of
robi . dpn denotes the distance between p and ns,i,j (k), and dpe denotes the
distance from p to the nearest edge of the grid. Let θpr be the angle between the
facing direction of robi and the segment connecting pi (k) and p. Due to the safe
distance rsafe , when a robot goes straight to ns,i,j (k), it needs a corresponding
safe region, the light brown part in the figure. When an object is detected, dpr
and θpr are known, so the coordinates of that object can be calculated; thus dpn
and dpe can be determined.
Figure 3.4: Judge obstacles
If (dpr < rnew and dpe < rsafe ) or dpn < rsafe , point p is an obstacle. If no
objects are sensed, the robot still waits for the same amount of processing time,
so all robots stay synchronized in this step.
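The judgment rule above reduces to a small predicate; a sketch with the variable names of the text (dpr, dpe, dpn, a, rsafe):

```python
# Sketch of the obstacle test from Figure 3.4; variable names follow the text.
import math

def is_obstacle(d_pr, d_pe, d_pn, a, r_safe):
    """Return True if the sensed object p blocks the move to n_s,i,j(k).

    r_new = sqrt(a^2 + r_safe^2) is the radius of the left arc in Figure 3.4.
    """
    r_new = math.sqrt(a ** 2 + r_safe ** 2)
    return (d_pr < r_new and d_pe < r_safe) or d_pn < r_safe
```

With the 2D simulation values a = 1 and rsafe = 0.35, rnew ≈ 1.06, so an object one metre away and 0.3 m from the grid edge is classified as an obstacle.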
4. Choose comm2: robi calculates max(d(Ns,i (k), pr )) over its sensing neighbors
and randomly chooses one vertex from the Nmax neighbor vertices that attain
max(d(Ns,i (k), pr )). If there are no accessible vertices for robi to visit, namely
Nmax = 0, robi stays at its current position. Mathematically, it can be written as:
pi (k + 1) = c with prob. 1/Nmax , if Nmax ≠ 0;  pi (k + 1) = pi (k), if Nmax = 0.    (3.1)
The choice is then sent to the robots with lower priorities. For a static area,
a robot also records and sends the IP addresses of the robots that will move in
the current step. After communicating, the robot still waits until the time allotted
for choosing has passed. Since robots have no knowledge of the number of robots
active at time k, the total time for making a choice is set to the selection-and-communication
time of one robot multiplied by the total number of robots prepared for deployment.
In this way, the robots stay synchronized.
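Rule (3.1) can be sketched as follows; `math.dist` needs Python 3.8+, and the names are ours, not the thesis's:

```python
# Minimal sketch of the choice rule (3.1); names are illustrative.
import math
import random

def next_position(p_i, accessible_neighbors, p_r, rng=random):
    """Choose p_i(k+1): a uniform pick among the neighbours farthest from p_r."""
    if not accessible_neighbors:                       # N_max = 0: stay put
        return p_i
    d_max = max(math.dist(q, p_r) for q in accessible_neighbors)
    farthest = [q for q in accessible_neighbors
                if math.isclose(math.dist(q, p_r), d_max)]
    return rng.choice(farthest)                        # prob. 1/N_max each
```

Passing the random source as an argument makes the uniform tie-break reproducible in tests.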
5. Move: All robots move together, and those that do not move wait for the movement
time to stay synchronized. A movement consists of rotating the head of the robot
to aim at the target vertex and then translating to it. All robots should move in
the same pattern at the same time, and the motion should match realistic constraints.
Each movement step consists of two phases: phase I is a pure rotation and phase II
a pure translation. The rotation aligns the robot with the target vertex; in
particular, if robi was already facing its next vertex before phase I, it does not
rotate but waits for the rotation time. Then robi goes straight along the selected
edge. Many robots use the nonholonomic model with standard hard constraints on
velocity and acceleration. This work also considers the constraints of real robots,
so both phases use trapezoidal velocity profiles based on the parameters of the
area and of the test robots. In this section on 2D areas, Pioneer 3-DX robots are
used, and the parameters are given with the simulations (Section 3.1.3).
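The per-phase timing under a trapezoidal velocity profile can be sketched as below. The acceleration limit is an assumed parameter, since the thesis only reports the resulting times (about 4.5 s of translation plus 1.25 s of rotation for a = 1 m and vmax = 0.3 m/s):

```python
# Time to cover a distance under a symmetric trapezoidal velocity profile.
# The acceleration limit is an assumed parameter, not a Pioneer 3-DX datum.
import math

def trapezoid_time(distance, v_max, accel):
    """Return travel time: ramp up, optional cruise at v_max, ramp down."""
    d_ramp = v_max ** 2 / accel                # distance consumed by accel + decel
    if distance >= d_ramp:                     # full trapezoid with a cruise phase
        return distance / v_max + v_max / accel
    return 2.0 * math.sqrt(distance / accel)   # triangular: v_max never reached
```

The same formula applies to the rotation phase with angular distance, speed, and acceleration substituted.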
6. Do all stay?: Based on the choosing sequence, the robot at pr is the last to
choose its next vertex. If no IP addresses of robots moving to new vertices are
received, and the choice of the robot at pr is also to stay at its current
position, namely pi (k + 1) = pi (k) for all robots, then the self-deployment stops.
Otherwise, all robots execute Step 5.
7. Stop: The stop procedure depends on the application. In this chapter, the robots
are designed to detect intruders, so the robot at pr broadcasts the stop signal;
all robots then stop moving and sensing obstacles but keep the sensors for target
intruders on. If an intruder is found, robots broadcast its position, and other
corresponding actions can be taken. In particular, under the strict requirement,
if the sensing range for targets is adjustable, the sensors that cannot sense the
boundaries can decrease their sensing range to the minimum value that meets the
loose requirement, namely rst = aT /√3 + rsafe − rrob . Thus, the energy of those
sensors is saved, and they can monitor the area for a longer time.
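The reduced sensing range is a one-line computation; with the 2D simulation values from Table 3.1 (aT = 1, rsafe = 0.35, rrob = 0.26) it evaluates to about 0.67:

```python
# r_st = a_T / sqrt(3) + r_safe - r_rob: the minimum target-sensing range
# that still meets the loose requirement (Step 7).
import math

def reduced_sensing_range(a_t, r_safe, r_rob):
    return a_t / math.sqrt(3) + r_safe - r_rob
```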
Remark 3.1.1. Broadcasting is used only for a few vital pieces of information,
including the stop signal and the positions of intruders, so it does not consume
too much energy. If only local communication can be used, the stop signal can still
be propagated to all robots in the time slot for comm1, and receivers perform all
the actions of Step 7 directly, except broadcasting the stop signal.
By applying the above algorithm, robots further from pr have higher priority and
go to the furthest accessible vertices, leaving space near pr for the newly added
robots. If there are enough available choices for the robots, Ni robots can be
added in each loop, and an optimal deployment with No steps is reached. Thus, the
equation for No can be written as:
No = ceil(|Va |/Ni ).    (3.2)
In this formula, ceil() is the ceiling function, since the division may not yield
an integer. A fractional step means that not all robots need to move in that step,
but it must still be completed as one full step.
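Formula (3.2) reproduces the No row of Table 3.2 for the 20*20 area with |Va| = 256:

```python
# N_o = ceil(|V_a| / N_i): the optimal number of steps when N_i robots
# are added per loop (formula 3.2).
import math

def optimal_steps(num_vertices, n_i):
    return math.ceil(num_vertices / n_i)
```

For Ni = 3, 5, 7 this gives 86, 52 and 37 steps, matching the No values reported in Table 3.2.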
Theorem 3.1.1. For a static area, suppose all assumptions in Sections 2.1.1 and
3.1.1 hold for the loose requirement or the strict requirement, and Algorithm 3.1
with the judgment step is used. Then with probability 1 there exists a time k0 > 0
such that for any vertex vj ∈ Va , j = 1, 2, . . . , |Va |, the relationship
vj = pi (k) holds for some i = 1, 2, . . . , m and all k ≥ k0 .
Proof. The proposed Algorithm 3.1 together with the judgment method in Step 6 forms
an absorbing Markov chain with many transient states and absorbing states. The
transient states are those in which some robot has an accessible sensing neighbor
to choose; the absorbing states are those in which no robot has a sensing neighbor
to go to and all robots stay at their current vertices forever, namely states that
cannot be left. The proposed algorithm continuously adds robots to the area to
occupy every vertex, which makes an absorbing state reachable from any initial
state for any Ni . This implies that an absorbing state is reached with probability 1,
which completes the proof of Theorem 3.1.1.
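The absorbing-chain argument can be checked on a toy example: for an absorbing Markov chain with transition blocks Q (transient to transient) and R (transient to absorbing), the rows of B = (I − Q)⁻¹R sum to 1, i.e. absorption occurs with probability 1 from every transient state. A sketch for two transient states (the matrices below are illustrative, not the deployment chain itself):

```python
# Absorption probabilities of a 2-transient-state absorbing Markov chain,
# via the fundamental matrix N = (I - Q)^(-1) and B = N R.
def absorption_probs(Q, R):
    a, b = 1.0 - Q[0][0], -Q[0][1]
    c, d = -Q[1][0], 1.0 - Q[1][1]
    det = a * d - b * c                              # det(I - Q), nonzero here
    n = [[d / det, -b / det], [-c / det, a / det]]   # N = (I - Q)^(-1)
    return [n[0][0] * R[0][0] + n[0][1] * R[1][0],
            n[1][0] * R[0][0] + n[1][1] * R[1][0]]
```

As long as each transient state has a positive probability path to an absorbing state, both returned probabilities equal 1, which is the property the proof relies on.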
P | rrob  | rsafe | rso  | rst  | a
R | 0.26m | 0.35m | 5m   | 32m  | 4.65m
S | 0.26  | 0.35  | 1.35 | 1.35 | 1

P | rc    | vmax   | ωmax   | tmove | Wpass
R | 91.4m | 1.5m/s | 300◦/s | 5.75s | 5.35m
S | 2.09  | 0.3/s  | 300    | 5.75s | 1.7
Table 3.1: Parameters of robots and simulation
3.1.3 Simulation Results
The proposed algorithm for complete coverage is simulated in MATLAB 2016a in an
area with 20*20 vertices and |Va | = 256, shown in Figure 3.5. Ni = 3, 5, 7 are
chosen to verify the results, and the known initial vertices for releasing robots
are at the bottom right corner. The numbers in the figure give the order in which
vertices are added, and vertex 1 is the reference vertex at pr . In this arrangement,
each robot has vacant vertices to go to without blocking the others in the first loop.
The parameters in the simulation are based on the parameters of the Pioneer 3-DX
robot in Figure 2.7 and on the assumptions satisfying the strict requirement.
Table 3.1 lists the simulation parameters, designed from the datasheet of the
Pioneer 3-DX. Rows P, R, and S give the parameter names, the parameters of the
robots, and the parameters used in simulations respectively. The underlined column
names refer to values known for the robots; rsafe and tmove need to be estimated
from experiments. Then a and Wpass can be calculated in turn from the assumptions
in Section 2.1.1 and rsafe , using the equality conditions of the inequalities. In
experiments, the robots drift at most 0.08m to the left or right after moving 8
meters forward or backward, so e is set to 0.09m and rsafe = 0.35m. In an experiment
with a = 1m and vmax = 0.3m/s, the moving time is roughly 5.75 seconds, comprising
4.5s for the translation and 1.25s for the rotation using a trapezoidal velocity
profile and the maximum rotation speed ωmax = 300◦/s of the robot. In the simulations,
rrob and ωmax are taken from the robot, while a, vmax and tmove are taken from the
experiment. Although areas of different sizes are simulated, for simplicity the
error in simulation is always set to e = 0.09, so rsafe = 0.35; the other values
are calculated from the assumptions in Section 2.1.1.
An example run for Ni = 3 is shown in Figures 3.6, 3.7 and 3.8 for k = 30, k = 60
and k = 86 respectively; the self-deployment finished in 86 loops. The figures show
that robots tended to go to vertices far from pr and left space for newly added
robots, which made the deployment fast. In Figure 3.8, the circles represent rst
for each robot, and every point of the task area Ac is covered at the end.
The proposed algorithm is compared to the modified version of the DR algorithm
Figure 3.5: The task area and the initial setting
in [170, 131]. In the modification, the IP addresses of the robots are used as order
numbers to set the choosing sequence: among robi and its communication neighbors,
the robot with the lowest address chooses first, and each robot informs its
communication neighbors with lower priorities of its choice in the loop.
Table 3.2 shows No calculated with formula 3.2, the average of 100 simulations of
the proposed algorithm, and the average of 100 simulations of the DR algorithm,
together with the ratio of the steps of the proposed algorithm to those of the DR
algorithm. The proposed algorithm is always better than the DR algorithm, and its
advantage grows with Ni . Compared to No , the simulated step count of the proposed
algorithm equals it for Ni = 3; for Ni = 5 or 7, the simulated counts are slightly
larger, but by less than one step. This means that in some loops fewer than Ni
robots were added, because fewer than Ni robots found vacant vertices to go to.
One cause is narrow vacant areas, as in the example of Figure 3.9 with Ni = 3 and
|Va | = 6: formula 3.2 gives No = 2, but the narrow path only allows one robot to
be added in each loop, so the robots need four loops to finish the complete coverage.
Another cause is that robi may choose an ns,i,j (k) with d(ns,i,j (k), pr ) smaller
than d(pi (k), pr ) when the other robots within rso of robi have higher priorities
and have already chosen their next steps. Then pi (k) becomes vacant at step k + 1,
but since the movement of robi is towards the reference point, it does not free a
vacant vertex near the initial vertices for adding a new robot.
To test the scalability of the algorithm, areas with the same shape but different
sizes are used with Ni = 3: areas with 10*10 vertices and 30*30 vertices, with
|Va | = 57 and |Va | = 598 respectively. The average results of the two
Figure 3.6: Positions of robots when k = 30, |Va | = 256, Ni = 3
Figure 3.7: Positions of robots when k = 60, |Va | = 256, Ni = 3
Figure 3.8: Positions of robots when k = 86, |Va | = 256, Ni = 3
Figure 3.9: An example of the narrow path with k = 3, |Va | = 6, Ni = 3
Type of Results \ Ni | 3      | 5      | 7
No                   | 86     | 52     | 37
Proposed algorithm   | 86     | 52.06  | 37.96
DR algorithm         | 310.39 | 273.74 | 254.92
Proposed/DR          | 27.7%  | 19.0%  | 14.9%
Table 3.2: Steps from calculations and simulations with |Va | = 256
Type of Results \ Size of the Area | 10*10 | 20*20  | 30*30
No                                 | 19    | 86     | 200
Proposed algorithm                 | 23.96 | 86     | 200
DR algorithm                       | 70.62 | 310.39 | 766.71
Proposed/DR                        | 36.0% | 27.7%  | 26.1%
Table 3.3: Steps from calculations and simulations for different areas with Ni = 3
algorithms, the ratio between them, and No are shown in Table 3.3. The results
demonstrate that the proposed algorithm is scalable and effective, and its
advantage is clearer in larger areas. The step counts for the 20*20 and 30*30
areas are optimal, but the count for the 10*10 area exceeds No because the shrunken
area has more narrow paths, as discussed for Figure 3.9.
3.1.4 Section Summary
This section provided a decentralized algorithm for multiple robots to build a
robotic sensor network that completely covers an unknown task area without
collisions, using an equilateral triangular grid pattern. The convergence of the
proposed algorithm was mathematically proved with probability 1. The results of
the algorithm were very close to the least number of steps, and equal to it in
some situations, so the algorithm is optimal under the limitations of the area.
Simulation results demonstrated that the proposed algorithm is effective compared
to another decentralized algorithm and that it is scalable to areas of different sizes.
3.2 Complete Coverage in 3D
This section solves the complete coverage problem in a 3D area using a decentralized
collision-free algorithm with which the robots form a coverage network. Based on
Chapter 2, the C grid is used in this section for the Bluefin-21 robot. The
assumptions and definitions for the area and the ranges from Section 2.2.1 are
used directly. The others are similar to Section 3.1.1, except that the ranges are
spherical and the obstacles are three-dimensional, following [224]. These assumptions
and definitions are nevertheless stated here to keep the explanation and the proof
clear, and they will be used in later chapters.
3.2.1 Problem Statement
Assumption 3.2.1. Area A contains a finite number of three-dimensional obstacles
O1 , O2 , . . ., Ol , and all obstacles Oi are non-overlapping, closed, bounded and
linearly connected for any i > 0. The obstacles are unknown to the robots.
Definition 3.2.1. Let O := ∪i Oi for all i > 0. Then the 3D area that needs to be
covered can be written as Ac := A \ O. Area Ac can be either static or expanding.
In each step, a robot moves a distance a from one vertex vi to an accessible
nearest-neighbor vertex vj along an edge of the grid. The number of steps needed
in the optimal situation for complete coverage is denoted No . The set of accessible
vertices in Ac is denoted by Va , and the number of its members is |Va |.
Assumption 3.2.2. To occupy every vertex in Ac , m ≥ |Va |. Robots are fed into
the area from Ni initial vertices near the boundary.
The set of sensing neighbors of robi at discrete time k (step k) is denoted as
Ns,i (k), in which the position of the jth member is ns,i,j (k). The set of
communication neighbors of robi at time k is denoted as Nc,i (k). The communication
in 3D also employs an ad-hoc network, which is rebuilt after each robot reaches
its next vertex.
The position of robi at time k is pi (k). A reference position pr , belonging to
the Ni initial positions, is set. Then some definitions about distances are given.
Definition 3.2.2. The distance from pi (k) to pr is represented by d(pi (k), pr ),
and the distance from ns,i,j (k) to pr is d(ns,i,j (k), pr ). The maximum
d(ns,i,j (k), pr ) is denoted by max(d(Ns,i (k), pr )), attained by Nmax neighbor
vertices. The position chosen among those Nmax vertices is represented by c.
To allow robots to go through each passage to reach each vertex, the assumption
for passage width Wpass is needed.
Assumption 3.2.3. Wpass ≥ a√3 + 2rsafe .
3.2.2 Algorithm
The algorithm starts with the method for setting the initial locations of the
robots, followed by the decentralized self-deployment algorithm with collision
avoidance that forms the complete coverage network for detecting intruders.
3.2.2.1 Initial Allocation
The initialization in a 3D area is similar to that in a 2D area. Ni nearby vertices
of a grid near the known entrance of the area are selected as the initial positions
for the robots, and one of them is set as pr . The chosen points should allow robots
to leave the initial positions without blocking each other, and it is better to
give each robot more choices. In practice, the methods of Figures 3.1 and 3.2 can
be used. For marine applications, Ni ships or submarines could carry the robots
and stop at the selected places to launch the UUVs.
3.2.2.2 Distributed Algorithm
The 3D algorithm is the same as in a 2D area, so the flowchart in Figure 3.3 still
applies, and Algorithm 3.1 and Equation 3.2 are used here as well:
pi (k + 1) = c with prob. 1/Nmax , if Nmax ≠ 0;  pi (k + 1) = pi (k), if Nmax = 0.    (3.3)

No = ceil(|Va |/Ni ).    (3.4)
To apply them to the 3D area, all quantities must be interpreted in three dimensions,
such as the distances used in obstacle detection and in setting the priority. The
2D algorithm then handles the 3D task successfully.
3.2.3 Simulation Results
The proposed algorithm is first simulated in MATLAB 2016a for a static area with
7*7*7 vertices and |Va | = 129, shown in Figure 3.10. In the simulation, Ni = 3,
5, 7, 9, 11 and 13 are used, with the initial vertices at the bottom layer of the
task area. In the figure, the numbers indicate the order in which vertices are
added, and the position of vertex 1 is selected as pr . In this setting, all robots
have accessible sensing neighbors to go to in the first loop.
The parameters in the simulation are based on the Bluefin-21 underwater vehicle,
which was used in the search for MH370; details can be found in [21]. The data for
the Bluefin-21 and for the simulation are shown in Table 3.4, where P, R, and S
stand for the parameter names, the parameters of the robot, and the parameters of
the simulations. Underlined parameter names indicate robot parameters taken from
the datasheet. As ωmax is not given in the datasheet, tmove cannot be estimated.
The length of the robot is 4.93m and its diameter 0.53m, so rrob = 4.93m is set,
and rsafe is calculated from e. According to the datasheet, the real-time accuracy
is ≤ 0.1% of the distance traveled, so rsafe can be set using the distance traveled
in simulations; a and Wpass are then calculated from rsafe and the assumptions in
Section 2.2.1. In simulations, rrob uses the robot's datum directly, and a = 15 is
set to avoid collisions while the robots are moving. To find rsafe of the robot,
the maximum travel distance in the simulations of this section is used, namely
155.885, the body diagonal of the 8*8*8-vertex area used in the comparison. The
corresponding e = 0.156 and rsafe = 5.086, from which the other simulation
parameters are calculated; hence rsafe = 5.086m is also set among the robot
parameters. The rotation speed of this UUV is unknown, and no Bluefin-21 robot is
available in the author's lab, so the simulation results will not be compared to
experiments, and guessing an angular speed and a total
Figure 3.10: The task area and the initial setting
move time is meaningless. In the simulations, tmove is therefore still set to 5.75s,
with 4.5s for translation and 1.25s for rotation, to show the effect of the
algorithm, although this is smaller than the actual value.
An example of the procedure with Ni = 3 is shown in Figures 3.11, 3.12 and 3.13
for k = 15, k = 30 and k = 44 respectively; the deployment finished in 44 steps.
The figures show that the robots occupy the sections far from pr first, so the
vertices near the initial positions stay vacant for adding new robots, and the
self-deployment result is close to optimal. In Figure 3.13, the light grey spheres
show rst for each vertex. The result shows that only the corners near the boundaries
of the area are not covered; by Assumption 2.2.5 those corners are ignored, so
complete coverage of the area is achieved.
The modified DR algorithm of [128] is compared with the proposed algorithm. To make
that algorithm work in a practical setting, the IP address of each robot, which is
unique, is used as the order of choice: among robi and its neighbors in Nc,i (k),
the one with the lower IP address chooses earlier, and robi informs the robots in
Nc,i (k) with lower priorities of its choice. Thus, communication neighbors do not
choose the same vertex as robi in the same step.
The calculated No and the average simulation results of 300 tests for both the
proposed algorithm and the DR algorithm are shown in Table 3.5. It also shows
P | rrob  | rsafe  | rso    | rst    | a
R | 4.93m | 5.086m | 75m    | 1km    | 69.9140m
S | 4.93  | 5.086  | 20.086 | 20.086 | 15

P | rc     | vmax     | ωmax | tmove | Wpass
R | 91.4m  | 2.315m/s | N/A  | N/A   | 131.2666m
S | 30.156 | N/A      | N/A  | 5.75s | 36.1528
Table 3.4: Parameters of robots and simulation
Figure 3.11: Positions of robots when k = 15, |Va | = 129, Ni = 3
Figure 3.12: Positions of robots when k = 30, |Va | = 129, Ni = 3
the ratios of the steps of the proposed algorithm to those of the DR algorithm.
The numbers of steps of the proposed algorithm are less than 1/3 of those of the
DR algorithm, and the advantage is clearer for larger Ni . The results of the
proposed algorithm exceed No by less than two steps, so they are very close to
the optimal numbers of steps. However, this also means that in some steps fewer
than Ni robots were added, because fewer than Ni robots could find vacant vertices
to move to. One possible reason is narrow sections of the task area. An example is
shown in Figure 3.14, where Ni = 3, |Va | = 6 and vertex 1 is at pr . The area to
be covered consists of two cylinders, and the balls show rsafe around each vertex,
making it clear that vertices 2 and 3 are blocked by boundaries and only vertex 1
can feed new robots into the section of the thin cylinder. So, by rule 3.1, robots
need four loops to finish the complete coverage, whereas formula 3.2 gives No = 2.
Another reason for the non-optimal result is that robi may choose an ns,i,j (k)
with d(ns,i,j (k), pr ) smaller than d(pi (k), pr ), meaning that robi moves toward
pr . If other robots at sensing neighbors of robi had higher priorities, they had
already finished their choices, so pi (k) could not be chosen in this step and was
left vacant at step k + 1. Thus, the choice of robi did not produce an accessible
vertex near the initial vertices for feeding in a new robot.
To test the scalability of the proposed algorithm, areas with the same shape but
different sizes are used with Ni = 3: areas with 6*6*6 vertices and 8*8*8 vertices,
with |Va | = 62 and |Va | = 210 respectively. The average results of 300 simulations
of both algorithms, their ratios, and No are given in Table 3.6, which shows that
the proposed algorithm is always better than the DR algorithm.
Figure 3.13: Positions of robots when k = 44, |Va | = 129, Ni = 3
Type of Results \ Ni | 3      | 5     | 7     | 9     | 11    | 13
No                   | 43     | 26    | 19    | 15    | 12    | 10
Proposed algorithm   | 44.27  | 27.84 | 20.29 | 16.57 | 13.53 | 11.66
DR algorithm         | 139.65 | 93.04 | 68.36 | 62.65 | 53.91 | 46.26
Proposed/DR          | 31.7%  | 29.9% | 29.7% | 26.5% | 25.1% | 25.2%
Table 3.5: Steps from calculations and simulations with |Va | = 129
Figure 3.14: An example of the narrow path with k = 3, |Va | = 6, Ni = 3
Type of Results \ Size of the Area | 6*6*6 | 7*7*7  | 8*8*8
No                                 | 21    | 43     | 70
Proposed algorithm                 | 23.70 | 44.27  | 70.98
DR algorithm                       | 66.03 | 139.65 | 248.24
Proposed/DR                        | 35.9% | 31.7%  | 28.6%
Table 3.6: Steps from calculations and simulations for different areas with Ni = 3
As the area expands, the ratio decreases, so the advantage is clearer than in a
small area. The difference between the proposed algorithm and No also decreases
because there are fewer narrow areas. These simulations demonstrate that the
algorithm is scalable and effective.
3.2.4 Section Summary
This section provided a decentralized self-deployment algorithm for multiple mobile
robotic sensors to completely cover a 3D area. The area is arbitrary and unknown,
with obstacles, and the coverage algorithm uses a grid pattern to require the
fewest robots and avoid collisions. The section proved that the algorithm converges
with probability 1 and that the results of the complete coverage algorithm are
very close to the fewest number of steps. Compared with another decentralized
algorithm under the same initial conditions, the proposed algorithm is effective
and scalable.
3.3
Summary
A decentralized complete coverage algorithm for mobile robotic sensor networks
was proposed for both 2D and 3D areas. The task area is arbitrary and unknown, with
obstacles, and robots deploy themselves on a grid without any collisions. In 2D
areas, 100% coverage was reached under the strict requirement, and in 3D areas,
each grid point in the task area was occupied by a mobile sensor. Comprehensive
simulations of the algorithm for areas with different sizes and different initial points
were run based on the parameters of real robots. The simulation results illustrate
that the number of steps in the algorithm is very close to the ideal value and that this
algorithm performs much better than the DR algorithm. This chapter also proved the
convergence of the algorithm with probability 1. The algorithm is applicable in both
2D and 3D areas, and only slight changes in the dimension of some values are needed.
Chapter 4
A Collision-Free Random Search
Algorithm
Chapters 4, 5 and 6 provide three decentralized collision-free algorithms for search
tasks in an unknown area by multiple robots. They all use grid patterns as discussed
in Chapter 2, and all the targets are static. The problem statement and the
algorithm in this chapter will therefore be explained and discussed in detail. Although
some parts are the same as those in the complete coverage task, including them here
keeps this chapter self-contained and makes the discussion easier to follow.
Chapters 5 and 6 will refer to this chapter, describe the problem statement and
algorithm only briefly, and focus on the differences.
According to Chapter 3, in a large task area, monitoring the whole area requires
a significant number of robots, which is costly. Therefore, with a limited number of
robots, they need to move around to gradually detect the entire area or to find some
targets, which is the search task. There are two types of search tasks in an
unknown environment. If the number of targets is known to the robots a priori,
they only need to find all the targets to finish the work. This case is called
situation I and arises, for example, in the search for a certain number of people lost
in a forest. Otherwise, the number of targets is unknown; this case is called situation
II, and robots need to detect every corner of the area to achieve the goal, as in mine
countermeasures on a battlefield.
This chapter proposes a grid-based decentralized random algorithm for search
tasks in an unknown area. The performance of the algorithm will be analyzed using
simulations. The proposed algorithm is suitable for both 2D and 3D areas. The main
differences are the grid pattern and the dimension of some parameters as discussed
in Chapter 2. The problem in a 2D area is discussed in Section 4.1, and the
task in 3D space in Section 4.2.
4.1
2D Tasks
This section aims to design a collision-free decentralized algorithm for a group of
robots to search for targets in an unknown area with static obstacles by visiting the
vertices of a grid pattern.
Figure 4.1: Initial settings of the map
4.1.1
Problem Statement
The searched area in this section is a two-dimensional area A ⊂ R2 with a limited
number of obstacles O1 , O2 , . . . , Ol . There are m robots, labeled rob1 to robm , in
this area, as shown in Figure 4.1. A known T grid pattern with side
length a is used, and robots can only move along edges connecting neighboring
vertices in each step.
Assumption 4.1.1. The area A is a bounded, connected and Lebesgue measurable
set. The obstacles Oi are non-overlapping, closed, bounded and linearly connected
sets for any i > 0. Both are unknown to the robots.
Definition 4.1.1. Let O := ∪i>0 Oi . Then Ad := A \ O represents the
area that needs to be detected.
There are n static targets t1 to tn , marked as red stars in Figure 4.1.
Let T be the set of target coordinates and Tki the set of targets known by
robi . The symbol |•| in this report denotes the number of elements in the set •. Then
|T | denotes the number of targets in the search area and |Tki | the number
of targets known by robot robi . There are two kinds of target search problems:
in situation I, |T | is known, and in situation II, no information about |T | is given.
Assumption 4.1.2. The initial condition is that all robots use the same T grid
pattern and the same coordinate system. They start from different vertices near the
boundary which can be seen as the only known entrance of the area.
Although [131, 132, 129, 130, 11, 12, 10, 14] provided a consensus algorithm to
achieve this assumption, it needs infinite time and allows collisions between robots, so
the algorithm is impractical. If consensus is only approximately reached in a short
period, the sensing range and the communication range should be designed large
enough to ignore small errors between coordinate systems, but this was not addressed
in those works. In MATLAB simulations, robots are manually set to start from
vertices. In robot tests, all robots could be released from the same vertex but
configured to move to different nearby vertices at the beginning. Another possible
solution could be to use one robot and its coordinate system as a standard, and to
put each other robot close to a distinct vertex near the standard. After sensing and
communicating with the standard robot, directly or indirectly, the other robots will
know their own positions and the corresponding closest vertices. Thus, they can go
to those vertices without any collision.
Assumption 4.1.3. All ranges and radii in 2D search problems are circular.
The swing radius of robots is rrob . To avoid collisions with other objects, such as the
boundary and other robots, errors need to be considered. Let the maximum possible
error, containing odometry errors and measurement errors, be e. Then a safety radius
rsafe , which is larger than rrob , can be defined for collision avoidance as follows.
Definition 4.1.2. To avoid collisions with other objects before moving, a safety radius
rsafe should include both rrob and e (see Figure 2.2). So rsafe ≥ rrob + e.
Assumption 4.1.4. As the T grid is used, all passages Wpass between obstacles or
between an obstacle and a boundary are wider than a + 2 ∗ rsaf e based on Table 2.1.
This guarantees that there is always an approachable vertex in the path and
robots can go through the passage with no collision. Obviously, decreasing a
expands the scope of applicability; however, the number of vertices increases, so
the search time also increases. To search with collision avoidance, robots carry
sensors to detect the environment, including both targets and obstacles.
Assumption 4.1.5. The sensing radius for obstacles is rso where rso ≥ a + rsaf e .
Therefore, if a neighbor vertex of a robot is less than rsafe away from the
boundary or an obstacle, rso is large enough to detect this, and the robot will not visit
that vertex, avoiding collision. rso can be seen in Figures 2.2 and 2.3. The vertices
within rso of robi are called the sensing neighbors of robi . Let Va represent the
set of all accessible vertices. Then let Ns,i(k) represent the sensing
neighbor set that contains the closest vertices ns,i,j(k) around robi at discrete time
k, where j ∈ Z+ is the index of the neighbor vertices. In Ns,i(k), let the set of visited
vertices of robi be Vv,i(k) and the corresponding set of unvisited vertices be Vu,i(k).
So 0 ≤ |Vv,i(k)| ≤ 6 and 0 ≤ |Vu,i(k)| ≤ 6. Then let the choice from Vv,i(k) be
cv and the choice from Vu,i(k) be cu .
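These sets can be illustrated with a small Python sketch, assuming the T grid is a triangular lattice whose six neighbor vertices lie at 60-degree intervals around a vertex; all function and variable names here are hypothetical, not from the thesis:

```python
import math

# Hypothetical sketch: the six T-grid neighbors of a vertex p = (x, y)
# with side length a, one every 60 degrees.
def sensing_neighbors(p, a):
    return [(p[0] + a * math.cos(math.radians(60 * j)),
             p[1] + a * math.sin(math.radians(60 * j))) for j in range(6)]

# Partition N_s,i(k) into visited V_v,i(k) and unvisited V_u,i(k),
# dropping vertices blocked by obstacles or occupied by other robots.
def partition_neighbors(p, a, visited, blocked):
    n_s = [v for v in sensing_neighbors(p, a) if v not in blocked]
    v_v = [v for v in n_s if v in visited]
    v_u = [v for v in n_s if v not in visited]
    return v_v, v_u
```

Both returned lists have at most 6 elements, matching 0 ≤ |Vv,i(k)| ≤ 6 and 0 ≤ |Vu,i(k)| ≤ 6 above.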
Apart from obstacle sensing, robots also need a sensor to detect targets, with
range rst . Under the strict requirement, it is similar to rso . An
assumption on rst is needed to guarantee that the areas around inaccessible vertices
can still be detected.
Assumption 4.1.6. rst ≥ a + rsaf e .
Under the strict requirement, each point should be detected and a T grid is needed;
the following curvature assumption guarantees this.
Assumption 4.1.7. The curvature of the concave part of the boundary should be
smaller than or equal to 1/rst .
Under the loose requirement, the above two assumptions are not needed: rst
should be chosen based on Table 2.1 rather than on Assumption 4.1.6, and an
assumption on the boundary effect, stated below, replaces Assumption 4.1.7 to
ensure a complete search.
Assumption 4.1.8. The search task for a team of robots usually has a vast area.
So, a is much smaller than the length, the width and the height of area A. Therefore,
the boundary effect is negligible [3, 4].
The communication range of robots is denoted rc . Robots within rc of
robi are called the communication neighbors of robi . As the range is limited, the
communication between robots is temporary, as in an ad-hoc network, so each robot
is considered equal, with local communication ability. To avoid choosing the same
vertex as other robots in the same step, robots choose in sequence using the
proposed random algorithm and tell their choices to the communication neighbors that
might select the same vertex; those neighbors then treat that vertex as an obstacle.
The way to set the sequence of choices is described in Section 4.1.2. Based on the T
grid, those robots are one or two vertices away from robi . Therefore, rc should satisfy
Assumption 4.1.9, with the error e considered.
Assumption 4.1.9. rc ≥ 2 ∗ a + e (see Figure 2.3).
Let Nc,i(k) denote the set of communication neighbors around robi that it must
communicate with. According to Figure 2.3, max(|Nc,i(k)|) = 18.
4.1.2
Procedure and Algorithm
The algorithm directs the robots to explore the area gradually, vertex by vertex. The
whole process can be seen in Figure 4.2 for situation I and in Figure 4.3 for situation
II. The stop strategy and its timing differ between the two situations. The
description of the algorithm is based on situation I; the extra steps for situation II
mainly concern the detection states of the vertices. From the previous section, a vertex
is said to be detected under two conditions. If a vertex has obstacles within rsafe , the
detection state of this vertex and the surrounding area is judged by robots at its
sensing neighbors. If the vertex is accessible, the vertex and the surrounding area are
said to be detected after it is visited. These steps are underlined in Figure 4.3 and will
be specially noted where necessary.
4.1.2.1
Preparation for the Choice
Initially, robots exchange their positions with robots within rc . Based on the relative
positions, the recognized communication neighbors of robi form the set Nc,i(k).
Recognized robots staying at sensing neighbors are treated as obstacles by
robi , and their occupied vertices are removed from Ns,i(k). Then the sequence of
choices can be decided, to avoid different robots choosing the same next vertex.
For robi at pi(k) and its communication neighbor robj at pj(k), if pjx > pix | (pjy >
piy & pjx = pix), robj has the higher priority to make its choice. Figure 2.3 visualizes
this relation: robots with higher priorities are at the vertices on the right side of the
green curve, so robots at vertices on the left choose later than robi . In this way,
robots at half of the neighbor vertices choose earlier than robi , and there is no
contradiction when judging relative orders from the views of different robots.

Figure 4.2: The flow chart for situation I

Figure 4.3: The flow chart for situation II
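The priority rule above can be sketched in a few lines of Python (the function name is hypothetical; grid coordinates are exact, so equality comparison is safe):

```python
# Minimal sketch of the 2D priority rule: rob_j at p_j chooses before
# rob_i at p_i iff p_jx > p_ix, or (p_jy > p_iy and p_jx = p_ix).
def has_higher_priority(p_j, p_i):
    return p_j[0] > p_i[0] or (p_j[1] > p_i[1] and p_j[0] == p_i[0])
```

This is a lexicographic order on (x, y): for any two distinct vertices exactly one direction holds, which is why different robots never disagree about the choosing sequence.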
Subsequently, robots sense the local area within rst to detect targets. As positions
of other robots are known, their occupied zone will not be detected. In situation
I, when a target is detected, the information will be broadcast to all robots. The
broadcast is used as the positions of targets are the key results to find for all robots,
and it only happens |T | times, so this will not consume too much energy. After
this, all robots will go to stop strategy. If robots are not ready to stop or no targets
are sensed, robots will go back to the regular loop on the left side of the figure.
In situation II, the information of targets will only be broadcast after the stop is
confirmed. Robots only need to move to the next step of the flowchart regardless of
the sensing results.
Next, each robot shares its explored map, which is updated based on the positions
received in the first step, so robots can update their maps again. In situation II,
whether the sensing neighbors of the vertices have been detected or not is also sent
and received, to update the detection states of all vertices.
4.1.2.2
Wait, Choose, and Communicate
robi needs to wait for the choices of the robots with higher priorities, because the
positions of those choices can be treated as detected points in the next step and
may also become obstacle areas for robi . If all robots are connected, the last one in the
choosing sequence needs to wait for every other robot, and the processing time for
the whole selection process should be set to the value in this worst case, because all
the robots must be synchronized in each loop. When it is the turn of robi , it
detects the local area but excludes the sectors occupied by existing neighbor robots.
The way to judge whether an object p is an obstacle or not is illustrated in Figure 3.4
in Section 3.1.2. If there is any sensed object on the way to the vertex ns,i,j(k) in
Ns,i(k), that object is treated as an obstacle, so the vertex ns,i,j(k) cannot be reached
and is removed from Ns,i(k). In situation II, that vertex is also recorded as detected.
If no objects are sensed in these two situations, the robot still waits for the same
amount of processing time, so all robots are synchronized in this step.
Choices are made using the following random decentralized algorithm:

pi(k + 1) = cu with probability 1/|Vu,i(k)|, if |Vu,i(k)| ≠ 0;
            cv with probability 1/|Vv,i(k)|, if |Vu,i(k)| = 0 & |Vv,i(k)| ≠ 0;    (4.1)
            pi(k), if |Ns,i(k)| = 0.
This algorithm means that when there are unvisited sensing neighbors, robi chooses
one of them at random. If there are only visited sensing neighbors around, robi
randomly chooses one of them. If there are no sensing neighbors to go to,
robi stays at the current position pi(k).
The choice of robi is also added to its map and sent to the robots in Nc,i(k). A
receiver robj with lower priority needs to remove pi(k + 1) from Ns,j(k) if it was in
that set. In situation II, the detection states of vertices should also be updated after
making a choice and after receiving a choice.
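Rule (4.1) can be sketched as follows (hypothetical function name; v_u and v_v stand for Vu,i(k) and Vv,i(k)):

```python
import random

# Minimal sketch of choice rule (4.1): pick a random unvisited sensing
# neighbor if any exist, else a random visited one, else stay at p_i(k).
def choose_next(p_i, v_u, v_v):
    if v_u:                      # |V_u,i(k)| != 0
        return random.choice(list(v_u))
    if v_v:                      # |V_u,i(k)| = 0 and |V_v,i(k)| != 0
        return random.choice(list(v_v))
    return p_i                   # |N_s,i(k)| = 0: no sensing neighbors left
```

random.choice draws uniformly, giving exactly the probabilities 1/|Vu,i(k)| and 1/|Vv,i(k)| of rule (4.1).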
In contrast to the random choice, there is also an algorithm with a deterministic
choice, namely taking the first available option. However, if the first choice is
defined based on the heading angle of the potential movement, robots tend to move
in the same direction instead of searching all directions evenly, which may lead to
more repeated visits and thus a longer search time.
4.1.2.3
Move
All robots should move in the same pattern at the same time, and the way they move
should match realistic constraints.
Each movement step consists of two phases: phase I is a pure rotation, and
phase II is a pure translation. The rotation aims the heading of the robot at
the target vertex. In particular, if robi was already facing its selected vertex before
phase I, it does not turn but waits for the rotation time. Then robi goes straight along
the chosen edge. Robots have standard hard constraints on velocity and
acceleration, so both phases use trapezoidal velocity profiles based on the parameters
of the area and the test robots. In this section, parameters of the Pioneer 3-DX are
used in simulation, as in Table 3.1.
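The duration of such a phase follows from the trapezoidal profile. The following sketch is illustrative only (not the thesis' exact timing model; the velocity and acceleration limits would come from Table 3.1):

```python
import math

# Time to travel distance d under a trapezoidal velocity profile with
# limits v_max and a_max. Accelerating to v_max and braking back each
# cover v_max^2 / (2 * a_max), so a cruise phase exists iff d >= v_max^2 / a_max.
def trapezoid_time(d, v_max, a_max):
    d_accel = v_max ** 2 / a_max
    if d >= d_accel:                      # trapezoid: cruise phase exists
        return d / v_max + v_max / a_max
    return 2.0 * math.sqrt(d / a_max)     # triangle: v_max never reached
```

The same formula applies to the rotation phase with the angle as d and angular limits in place of v_max and a_max.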
4.1.3
Stop Strategies
For situation I, a robot judges the stop condition after exchanging target information:
robi should stop at time k when the number of targets it has found equals the
number of targets in the area. The knowledge of |Tki| comes from its own detection
and from the information received from other robots. Namely, for any robot,

pi(k + 1) = pi(k), if |Tki| = |T |    (4.2)
In situation II, the judgment of the stop condition happens in step 6 of the flowchart.
For each visited vertex, the detection states of its sensing neighbors need to be
checked. If no visited vertex has an undetected sensing neighbor, the robot
should stop. Namely,

pi(k + 1) = pi(k), if ∀vi ∈ Va , |Vu,i(k)| = 0    (4.3)
The aims of the tasks can vary, so robots may have different missions after
they stop moving. However, this chapter focuses on search tasks only, so looking for
the coordinates of the targets is the only requirement. In situation I, when the
judgment condition of robi is satisfied, all robots stop the whole procedure, as they
have the same information about targets, which is broadcast every time. However, in
situation II, if robi satisfies the stop condition, it needs to broadcast its knowledge of
the targets and a stop signal to inform all other robots. Then all robots broadcast the
positions of the targets they know and terminate the search.
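The two stop tests (4.2) and (4.3) can be sketched as follows (a minimal sketch with hypothetical names, not the thesis implementation):

```python
# known_targets stands for T_k^i and n_targets for |T| (situation I);
# visited and detected are vertex sets and neighbors_of returns the
# sensing neighbors of a vertex (situation II).
def stop_situation_i(known_targets, n_targets):
    return len(known_targets) == n_targets                      # rule (4.2)

def stop_situation_ii(visited, detected, neighbors_of):
    # Rule (4.3): no visited vertex may still have an undetected neighbor.
    return all(n in detected for v in visited for n in neighbors_of(v))
```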
4.1.4
Broken Robots and Reliability
Waiting for information before choosing means that the failure of robots with higher
priorities affects those with lower priorities and thus breaks the loop of the flowchart.
To improve the reliability of the algorithm, the failure of robots needs to be detected,
and a solution to tackle it is provided as follows. If robi has not received any
information from robots with higher priorities by the (n − 1)-th waiting loop, at
least one robot is not working. Therefore, robi broadcasts that the robots which
should have sent their choices but did not have failed, and claims that robi itself is
still working. If both the failure and working states of one robot are received in this
loop, that robot is judged as working. Then, the robots which sent information
stay at their current positions and wait until the time slot for movement has passed.
In the next loop, robots do not wait for the failed robots but simply consider them as
obstacles. This scheme detects failures after the first step in the flowchart and before
the movement; however, the detection does not cover faults in communication.
With this broken-robot detection scheme, the decentralized algorithm is more
robust than centralized algorithms, and robots will keep working even when only
one working robot is left. Under this circumstance, the system could only fail in the
extreme condition where broken robots block all paths to a target, so that the
target cannot be detected.
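The detection rule above can be sketched as follows; the function and set names are hypothetical, not from the thesis:

```python
# After the waiting loops, every higher-priority robot that sent nothing is
# reported as failed, unless a "still working" claim for it was also received
# in the same loop.
def judge_failed(higher_priority, received_from, working_claims):
    silent = set(higher_priority) - set(received_from)
    return silent - set(working_claims)  # treat these as obstacles next loop
```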
Theorem 4.1.1. Suppose that all assumptions hold and that the random decentralized
algorithm (4.1) with the related judgment strategy (4.2) or (4.3) is used. Then, for
any number of robots, there is a time k0 > 0 such that all targets are detected with
probability 1.
Proof. The algorithm pairs (4.1) & (4.2) or (4.1) & (4.3) form an absorbing Markov
chain which includes both transient states and absorbing states. The transient states
are the steps in which robots visit the approachable vertices of the grid but do not
stop. Absorbing states are the steps in which all robots stop at their vertices. Applying
this algorithm, robots tend to go to unvisited neighbor vertices. If all sensing
neighbors of a robot are visited before the targets are found, the robot chooses a
random sensing neighbor, continues the regular loop, and only stops when all targets
are found in situation I, or when all approachable vertices are visited and all
inapproachable vertices are detected from their sensing neighbors in situation II.
For (4.1) & (4.2), the number of transient states decreases until all robots know
|Tki| = |T |. For (4.1) & (4.3), absorbing states are reached when |Vu,i(k)| = 0 for all
accessible vertices. Based on Assumption 4.1.4, the absorbing states can be reached
from any initial state, namely with probability 1.
This completes the proof of Theorem 4.1.1.
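The absorbing-chain argument can be illustrated numerically. For an absorbing Markov chain with transient-to-transient block Q, the fundamental matrix N = (I − Q)^(−1) exists exactly when absorption happens with probability 1, and the row sums of N give the expected number of steps before absorption. A toy two-state sketch with hypothetical transition probabilities (not taken from the thesis):

```python
# Toy absorbing Markov chain: states {t0, t1, absorb}. Q is the 2x2
# transient-to-transient block; expected steps to absorption are the row
# sums of N = (I - Q)^(-1), computed with the closed-form 2x2 inverse.
Q = [[0.5, 0.3],
     [0.2, 0.4]]

a, b = 1 - Q[0][0], -Q[0][1]          # first row of I - Q
c, d = -Q[1][0], 1 - Q[1][1]          # second row of I - Q
det = a * d - b * c
N = [[d / det, -b / det],
     [-c / det, a / det]]             # fundamental matrix

expected_steps = [sum(row) for row in N]  # expected steps from t0 and t1
```

Here det ≠ 0 because every transient state reaches absorption with positive probability, mirroring the role of Assumption 4.1.4 in the proof.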
4.1.5
Simulation Results
Simulations of the situation where the number of targets is known, namely situation
I, are used to verify the algorithm. The loose requirement is applied to the area,
and the T grid is used for the robots to move through. In the simulations there is one
target randomly placed in the area A, and several robots search for it (see
Figure 4.1). The aim is to find the target position in A via the T grid
pattern using the proposed algorithm. Based on this aim, when the target is found,
the corresponding robot stops and shares this as global information. The search
procedure should always be collision-free. The bold black line made up of points
in the map displays the boundary of area A and the obstacles. The distances between
neighboring boundary points are set small enough, according to Assumption 4.1.4, so
that no robot can pass the boundary. Parameters in the simulation are designed based
on the Pioneer 3-DX robot. Notice that the move time is roughly estimated as 5.75 s
based on Table 3.1. To test all the cases in algorithm (4.1), at least seven robots
are needed. So simulations with one to seven robots, searching for the same target and
starting from similar places, are run to check the relation between the number of choice
steps, the search time and the number of robots. To make the routes clear and to
demonstrate collision avoidance, the target is placed far from the starting points, with
obstacles in between. In the test with seven robots, one robot is enclosed by the other
robots, as shown in Figure 4.1. Then all six sensing neighbors are seen as obstacles by
the middle one, which is the third case of algorithm (4.1). In later loops, the first
and second cases can occur.

Figure 4.4: The route of robot 1 in a simulation of 3 robots
Figures 4.4-4.7 show the routes of a simulation with 3 robots. Figures 4.4-4.6
display the route of each robot, with arrows illustrating the directions of the
movements. Figure 4.7 is the combination of the three routes, in which repeated
sections of routes can be found, due to distributed decisions made when communication
is not available. By analyzing the route of each robot, which is recorded at each step,
no collisions between robots are found, and the algorithm functions well.
The results in Table 4.1 and the corresponding Figure 4.8 are the average results of
100 tests for 1 to 7 robots. The row 'Code' shows the time for the calculations in the
algorithm. The row 'Total' includes both the calculation time and the movement
time. The table demonstrates that if more robots work together, the number of search
steps decreases significantly at the beginning and more gently later. However, both
the number of robots n and the choosing time increase, so the waiting time in each
step, which is n times the time to 'choose', also increases. It can be seen clearly
that the moving time is much larger than the algorithm time, so the trend of
the total time is similar to that of the number of steps, namely descending sharply from
1 to 5 robots and decreasing gradually from 5 to 7 robots. It can be predicted that,
with more robots in this search area, the total time may stay stable and then rise
moderately, as the increasing algorithm time takes up a larger proportion of the
total time.

Figure 4.5: The route of robot 2 in a simulation of 3 robots

Figure 4.6: The route of robot 3 in a simulation of 3 robots

Figure 4.7: Combined routes in a simulation of 3 robots

No.        1      2      3      4      5      6      7
Steps      2638   1572   1116   931    795    730    661
Code(s)    22.0   27.0   29.0   30.3   33.7   37.2   39.0
Total(s)   13211  7885   5607   4687   4009   3688   3345

Table 4.1: Steps, code time and total time for 1 to 7 robots

Figure 4.8: Time and Steps
In this algorithm, the number of robots is flexible, and the algorithm is scalable.
The search time is also affected by the initial positions of the robots and the shape
of the area. As that information is unknown in the problem discussed here,
the suitable range for the number of robots can only be estimated from simulations
with a general area once a rough size is given. Even for a fixed map, the optimal
number of robots should be decided by comparing the benefit of the decline in
time with the cost of adding robots.
4.1.6
Section Summary
This section presented a decentralized random algorithm for multiple robots to
search for static targets in an unknown 2D area. The algorithm sets a sequence for
making choices in order to avoid collisions and uses assumptions based on realistic
conditions. A T grid pattern and the loose requirement on the area were used as an
example. Robots started with a preset common T grid pattern and the same
coordinate system. Then they used the proved algorithm with local information to
choose future steps, even if some robots failed during the calculation. Simulation
results for 1-7 robots showed the flexibility and scalability of the algorithm.
4.2
3D Tasks
This section describes a search task for multiple robots in a 3D area, using the
decentralized random algorithm proposed for the 2D situation. The main differences
between the two settings are the dimension of the area and of the parameters, so the
assumptions and definitions are given directly without repeated explanation. The
performance of the algorithm will also be analyzed using simulations in MATLAB.
As stated in Chapter 2, the data of Bluefin-21 robots will be utilized, and the C grid
will be employed in the simulations.
4.2.1
Problem Statement
The problem in this section concerns a three-dimensional area A ⊂ R3 with a limited
number of obstacles O1 , O2 , . . . , Ol , which need to satisfy the following assumption:
Assumption 4.2.1. Area A is a bounded, connected and Lebesgue measurable set.
The obstacles Oi ⊂ R3 are non-overlapping, closed, bounded and linearly connected
sets for any i > 0. They are both static and unknown to robots.
Definition 4.2.1. Let O := ∪i>0 Oi . Then Ad := A \ O is the area that
needs to be detected.
There are m robots, labelled rob1 to robm , to search for n static targets t1 to tn .
The coordinate of robi is pi(k) at time k. Let T be the set of target coordinates
and Tki the set of coordinates of targets known by robi .
Assumption 4.2.2. Initially, all robots have the same grid with the same coordinate
system and are allocated at different vertices along the border of the search area. In
the simulation, robots are manually set at vertices of the C grid.
Assumption 4.2.3. All sensors and communication equipment on the robots have spherical ranges. The other radii mentioned are also spherical.
Let the swing radius of robots be rrob and the safety radius to avoid collisions be
rsaf e . The total error is e. Then
Definition 4.2.2. rsaf e ≥ rrob + e (see Figure 2.11).
The C grid has a side length of a. According to Chapter 2, the passage width
Wpass satisfies:

Assumption 4.2.4. Wpass ≥ √3·a + 2rsafe (see Figure 2.13).
The initial state of the simulations is demonstrated in Figure 4.9, in which robots
start from vertices near the boundary of the area.

Figure 4.9: 3D area and initial settings

The colored net of curves is the border of the search area, and the two spheres inside
are the obstacles. The vertices of the net are the discrete boundary points, which are
close enough together that robots cannot pass the border. The passages between the
spheres and the border are designed to satisfy Assumption 4.2.4. Black dots are the
vertices of the cubic grid, and 13 robots are marked with colored shapes inside the
black curve. Red stars are the targets t1 to t9 , evenly distributed in the space,
because the compared algorithm, Lévy flight, works best for sparsely and randomly
distributed targets.
To avoid collisions with both obstacles and the border during movements, the
sensing range for obstacles rso satisfies:
Assumption 4.2.5. rso ≥ a + rsaf e (see Figure 2.11).
If there is any obstacle on the way from robi to one of its sensing neighbors, that
neighbor will not be visited. Let Va represent the set of all accessible vertices, and let
the set of sensing neighbors Ns,i(k) contain all the closest vertices ns,i,j(k) around
robi at discrete time k, where j ∈ Z+ is the index of a neighbor vertex. In Ns,i(k),
let the set of visited vertices of robi be Vv,i(k) and the set of unvisited vertices be
Vu,i(k). Then the choice of robi from Vv,i(k) is represented by cv and the choice from
Vu,i(k) by cu . To avoid collisions resulting from different robots making the same
choice, robots choose their next steps in a certain order, as described in Section 4.2.2,
and at least need to tell their choices to those robots which are one or two vertices
away. Thus, the communication range rc satisfies:
Assumption 4.2.6. rc ≥ 2 ∗ a + e (see Figure 2.11).
Robots within the range rc of robi are called communication neighbors of
robi . The communication uses ad-hoc networks, and all robots have equal status.
As the network is temporary, it is rebuilt at the beginning of each loop. Let
Nc,i(k) denote the set of communication neighbors of robi that it must communicate
with.
With the concepts above, a further explanation of Figure 2.11 can be given.
The figure demonstrates some parameters and the priority of making choices, using
a cubic grid centered at robi . The small sphere shows rso , with 6 magenta asterisks
(*) as sensing neighbors. The larger sphere illustrates rc , with 26 blue plus signs (+)
and the 6 magenta asterisks inside it. Then max(|Nc,i(k)|) = 32. Half of them (16
vertices), marked with black circles (o), have higher priorities than robi in making
choices.
Assumption 4.2.7. The search task for a team of robots usually involves a huge area,
so a is much smaller than the length, the width and the height of area A. Therefore,
the boundary effect is negligible, meaning that there are no targets between the
boundary and the frontiers of the target sensing spheres.
Assumption 4.2.8. rst ≥ 2a/√3 + rsafe − rrob based on Table 2.3.
The aim of this section is to design a collision-free decentralized algorithm for
robots to search for targets in a 3D area satisfying the above assumptions, using a
cubic grid.
4.2.2
Procedure and Algorithm
The general procedure for the 3D task is similar to the 2D version, so it uses the
same rule (4.1) for making choices,

pi(k + 1) = cu with probability 1/|Vu,i(k)|, if |Vu,i(k)| ≠ 0;
            cv with probability 1/|Vv,i(k)|, if |Vu,i(k)| = 0 & |Vv,i(k)| ≠ 0;    (4.4)
            pi(k), if |Ns,i(k)| = 0,

and the same stop conditions (4.2) and (4.3):
In situation I: pi(k + 1) = pi(k), if |Tki| = |T |    (4.5)

In situation II: pi(k + 1) = pi(k), if ∀vi ∈ Va , |Vu,i(k)| = 0    (4.6)

Figure 4.10: The route of robot 1 in a simulation of 3 robots
But the way to set the selection sequence is different. The second step of the flow
charts in Figures 4.2 and 4.3 sets the order of choices as illustrated in Figure 2.11. Let
robj at pj be a communication neighbor of robi . Then if pjx > pix | (pjy > piy &
pjx ≥ pix) | (pjz > piz & pjy ≥ piy & pjx ≥ pix), robj has the higher priority. Thus,
in the figure, the 16 vertices in the black circles, which is half of max(|Nc,i(k)|), have
higher priorities than the middle robot robi . This results in no contradiction between
the views of different robots.
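The 3D priority rule can be sketched directly from the condition above (hypothetical function name; grid coordinates are exact):

```python
# rob_j at p_j chooses before rob_i at p_i iff
#   p_jx > p_ix, or (p_jy > p_iy and p_jx >= p_ix),
#   or (p_jz > p_iz and p_jy >= p_iy and p_jx >= p_ix).
def has_higher_priority_3d(p_j, p_i):
    return (p_j[0] > p_i[0]
            or (p_j[1] > p_i[1] and p_j[0] >= p_i[0])
            or (p_j[2] > p_i[2] and p_j[1] >= p_i[1] and p_j[0] >= p_i[0]))
```

Enumerating the 32 communication-neighbor offsets of the cubic grid (the 26 cube neighbors plus the 6 vertices two steps away along the axes), exactly 16 satisfy the rule, matching the half marked with black circles in Figure 2.11.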
4.2.3
Simulation Results
The simulation of the algorithm is run in MATLAB 2016a. To clearly illustrate the
details of the robots' routes, the 8*8*8 grid in Figure 4.9 is used, with three
robots marked in red, blue and green respectively. The routes for situation I
with 9 targets are shown in Figures 4.10, 4.11 and 4.12 for each robot,
and Figure 4.13 is the combination of the three figures. The routes are drawn
with arrows to show the directions of movement, and the route of each robot has
a distinct color. The result indicates that each target was within the target sensing
range rst of a visited vertex and, based on the routes, that there was no collision
between robots.
Then, the algorithm is tested in a larger area with 12*12*12 vertices in the grid, using 1 to 13 robots that again start at the bottom layer of the area near point (1,1,1). Both situation I with Algorithm 4.2 and situation II with Algorithm 4.3 are considered, and each simulation is run 500 times to obtain the average search time. Figure 4.14 shows the results for situation I and Figure 4.15 shows the results for situation II. Table 4.2 shows the ratio of calculation time to the total
Figure 4.11: The route of robot 2 in a simulation of 3 robots
Figure 4.12: The route of robot 3 in a simulation of 3 robots
Figure 4.13: The combined routes of 3 robots
No.  Situation I  Situation II
1    0.28%        0.79%
2    0.31%        1.34%
3    0.34%        1.56%
4    0.37%        1.90%
5    0.41%        2.27%
6    0.49%        2.40%
7    0.49%        2.99%
8    0.63%        3.42%
9    0.56%        3.54%
10   0.62%        4.17%
11   0.66%        4.67%
12   0.74%        4.69%
13   0.76%        5.67%

Table 4.2: The ratio of calculation time to total time
time for the two situations.
The two figures illustrate that, as the number of robots increases, the search time and the number of search steps decrease sharply at first and then more slowly. The time spent in the wait step, however, increases, since it equals the time for one robot's choice multiplied by the number of robots. Still, the algorithm time accounts for only a small percentage (≤ 5.7%) of the total time, so the trend of the total time follows the trend of the number of steps needed. As this percentage is increasing, it may need to be considered when there is a significant number of robots. Comparing the two figures, the time in situation II is more than four times that of situation I. Thus, for time-sensitive tasks, it is better to spend some time to determine the number of targets beforehand, if possible.
The simulation shows that the algorithm is flexible with respect to the number of robots and scalable to areas of various sizes. However, other factors, such as the initial locations of the robots, the number of robots and the shape of the area, are unknown in the problem discussed here. So it is still hard to predict the time of a search task, even for a fixed area. To choose a suitable number of robots, the user needs to seek a balance between the search time and the cost of the robots.
4.2.4
Section Summary
This section presented a decentralized search algorithm for a robot team in a 3D area. A cubic grid is used to simplify the algorithm and, together with a selection order, to avoid collisions. In the algorithm, robots use local information and a random rule to explore the area, with a reliable failure detection method. The performance of the algorithm is simulated for both situation I and situation II with 1 to 13 robots. The results show that the algorithm is flexible and scalable.
4.3
Summary
This chapter proposed the first decentralized random algorithm for search tasks using multiple mobile robots. The area, which satisfies the assumptions, is unknown, with
Figure 4.14: Time and steps for situation I
Figure 4.15: Time and steps for situation II
unknown arbitrary static obstacles and targets. Robots can move along a grid pattern without collisions. They start with a preset common grid pattern and the same coordinate system. With the assumed sensing ranges and communication range, robots use the proven algorithm to choose future steps in order and move without collisions until the task is finished, even if some robots fail. All assumptions are based on realistic conditions, such as the volume and physical constraints of the robots. A rigorous mathematical proof of the convergence of the algorithm is provided. Simulation results show that, for the same task, more robots use fewer steps and less time. However, the time may stop decreasing when the number of robots increases beyond a certain value. In conclusion, the algorithm is scalable, flexible, and reliable. It is suitable for both 2D and 3D areas and can be transferred between the two dimensions easily.
Chapter 5
A Collision-Free Random Search
with the Repulsive Force
This chapter proposes the second grid-based decentralized algorithm without collisions. Compared to the first algorithm in Chapter 4, this algorithm still uses the random decision rule, but it adds the repulsive force of the artificial potential field method, which improves performance. The artificial potential field method is used in many search and complete coverage problems, such as [18, 182, 54, 109, 158, 142, 73], as it is easy to apply.

In this chapter, the problem formulation and algorithm description are simplified, referring back to Chapter 4; only the differences are emphasized. Similar works with grid algorithms, such as [131, 132, 129, 130, 11, 12, 10, 14], will be compared. However, the algorithms in those papers did not consider collision avoidance well or simply ignored it. Other random algorithms are mainly inspired by animals, such as random walk or Lévy flight related algorithms [193, 38, 186, 63, 85, 33, 28]; they will also be compared. The factors related to search time will be discussed in depth for 2D areas. Also, experiments with Pioneer 3-DX robots are used to verify the algorithm in a 2D area.
5.1
2D Tasks
This section proposes the second collision-free random search algorithm, which uses the potential field method. A way to find the relation between the search time, the number of robots and the size of the area is demonstrated. Other factors related to search time are also discussed to provide a deeper understanding of the algorithm. Unlike Chapter 4, which only analyzed the performance of the algorithm itself, the algorithm in this section is compared to many other algorithms to show its effectiveness. Then an experiment using Pioneer 3-DX robots is conducted to check the performance of the algorithm in practice.
5.1.1
Problem Statement
In this section, the task area is a two-dimensional area A with a limited number of static obstacles O1, O2, ..., Ol. There are m robots, labelled rob1 to robm, searching for static targets t1 to tn. An arbitrary known grid pattern with a side length of a is used for robots to move in each step, with the following definitions and assumptions. All
these settings can be seen in Figure 5.6. Further explanations were given in Chapter 4.
Assumption 5.1.1. The area A is a bounded, connected and Lebesgue measurable set. The obstacles Oi are non-overlapping, closed, bounded and linearly connected sets for any i > 0. Both the area and the obstacles are unknown to the robots.
Definition 5.1.1. Let O := ∪i Oi for all i > 0. Then Ad := A \ O represents the area that needs to be detected.
Assumption 5.1.2. The initial condition is that all robots use the same grid pattern
and the same coordinate system. They start from different vertices near the boundary
which can be seen as the only known entrance of the area.
Assumption 5.1.3. All the ranges and radius in 2D search problems are circular.
Definition 5.1.2. To avoid collisions with other things before a move, a safe distance rsaf e should include both rrob and e (see Figure 2.2). So rsaf e ≥ rrob + e.
Assumption 5.1.4. If the T grid is used, all passages Wpass between obstacles or
between an obstacle and a boundary are wider than a + 2 ∗ rsaf e . Other Wpass for
other grid patterns can be seen in Table 2.1.
Assumption 5.1.5. The sensing radius for obstacles is rso where rso ≥ a + rsaf e .
Assumption 5.1.6. rc ≥ 2 ∗ a + e (see Figure 2.3).
Va is the set of all accessible vertices. Ns,i(k) represents the set of sensing neighbors around robi. Within Ns,i(k), Vv,i(k) and Vu,i(k) denote the sets of visited and unvisited vertices respectively, and the choices from these two sets are cv and cu respectively.
If the strict requirement is used,
Assumption 5.1.7. The curvature of the concave part of the boundary should be
smaller than or equal to 1/rst .
Assumption 5.1.8. rst ≥ a + rsaf e .
If the loose requirement is used, the above two assumptions are not needed and the following assumptions are required instead.
Assumption 5.1.9. The search area is usually vast. So, the side length a is much
smaller than the length and the width of area A. Therefore, the boundary effect is
negligible.
Assumption 5.1.10. rst ≥ a/√3 + rsafe − rrob.
The following definition concerns the variables used in calculating the repulsive force, which is new in this chapter.
Definition 5.1.3. In loop k, pi,ave(k) is defined as the average of the positions chosen by robots that chose earlier and the positions of communication neighbors that have not chosen their next step yet. Then dcu,pi,ave(k) is the distance from pi,ave(k) to one unvisited vertex and dcv,pi,ave(k) is the distance from pi,ave(k) to a visited one. The maximum values of these two distances are denoted max(dcu,pi,ave(k)) and max(dcv,pi,ave(k)) respectively. The corresponding sets of vertices that realize these maximum distances are denoted Vumax,i(k) and Vvmax,i(k). The chosen position of robi at time k + 1 is pi(k + 1).
5.1.2
Procedure and Algorithm
The algorithm directs the robots to explore the area gradually, vertex by vertex. The general procedure of the second search algorithm is the same as that of the first, and the flow charts in Figures 4.2 and 4.3 can still be used. The key difference is how the next step is chosen.
Choices are made using the following random decentralized algorithm with repulsive force, which improves on the algorithm of Chapter 4. When a robot has no neighboring robots within rso, rule 5.1 from Chapter 4 is applied: when there are vacant unvisited vertices, robi chooses one of them randomly; if only visited sensing neighbors are available, robi randomly chooses one of the visited vertices; and if there is no vacant sensing neighbor, robi stays at its current position pi(k).
pi(k + 1) = { cu with prob. 1/|Vu,i(k)|,   if |Vu,i(k)| ≠ 0,
            { cv with prob. 1/|Vv,i(k)|,   if (|Vu,i(k)| = 0) & (|Vv,i(k)| ≠ 0),      (5.1)
            { pi(k),                       if |Ns,i(k)| = 0.
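Rule (5.1) can be sketched as follows for one robot; the function and argument names are illustrative assumptions, not taken from the simulation code.

```python
import random

def choose_next(p_i, unvisited, visited):
    """One robot's choice under rule (5.1).

    unvisited -- list of vacant unvisited sensing neighbours (V_u,i(k))
    visited   -- list of vacant visited sensing neighbours (V_v,i(k))
    p_i       -- current position, returned when no neighbour is vacant
    """
    if unvisited:                       # |V_u,i(k)| != 0
        return random.choice(unvisited)
    if visited:                         # |V_u,i(k)| = 0 and |V_v,i(k)| != 0
        return random.choice(visited)
    return p_i                          # |N_s,i(k)| = 0: stay put
```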
When robots have neighboring robots within rso, they apply rule 5.2, which uses the repulsive force of the artificial potential field. After receiving the choices of the previous robots at time k, robi calculates pi,ave(k). Then it randomly chooses one vertex from those that result in max(dcu,pi,ave(k)) if there are unvisited vertices to go to, or from those that result in max(dcv,pi,ave(k)) if only visited vertices are available. Under this rule, robots near each other try to move apart. So, if many unvisited vertices lie within rc of the robots at time k, the robots may have more choices at time k + 1 using rule 5.2 than using rule 5.1. If all surrounding vertices are visited, the robots try to leave the visited section in different directions simultaneously, repeating fewer vertices than under rule 5.1; thus, they may find unvisited sections earlier. Together, these two rules ensure that every place in Ad is searched.
pi(k + 1) = { cu with prob. 1/|Vumax,i(k)|,   if |Vu,i(k)| ≠ 0,
            { cv with prob. 1/|Vvmax,i(k)|,   if (|Vu,i(k)| = 0) & (|Vv,i(k)| ≠ 0),      (5.2)
            { pi(k),                          if |Ns,i(k)| = 0.
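Rule (5.2) can be sketched as below, under the assumption that candidate vertices and p_i,ave(k) are given as coordinate tuples (names illustrative):

```python
import math
import random

def choose_next_repulsive(p_i, p_ave, unvisited, visited):
    """One robot's choice under rule (5.2): among the candidate set
    (unvisited first, else visited), keep only the vertices farthest
    from p_ave -- i.e. V_umax,i(k) or V_vmax,i(k) -- and pick one
    of them uniformly at random."""
    candidates = unvisited if unvisited else visited
    if not candidates:
        return p_i                              # |N_s,i(k)| = 0: stay put
    dist = {v: math.dist(v, p_ave) for v in candidates}
    d_max = max(dist.values())
    farthest = [v for v in candidates if dist[v] >= d_max - 1e-9]
    return random.choice(farthest)
```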
The stop strategy for this search algorithm is the same as that in Chapter 4. So,
in situation I:
pi (k + 1) = pi (k), if |Tki | = |T |.
(5.3)
In situation II:
pi (k + 1) = pi (k), if ∀vi ∈ Va , |Vu,i (k)| = 0.
(5.4)
Theorem 5.1.1. Suppose that all assumptions hold and the random decentralized
algorithms 5.1 & 5.2 with a related judgment strategy 5.3 or 5.4 are used. Then for
any number of robots, there is a discrete time k0 > 0 such that all targets are found
or all vertices are detected with probability 1.
Proof. Rules 5.1 & 5.2 with a corresponding judgment method form an absorbing Markov chain, which includes both transient states and absorbing states. The transient states are the reachable vertices of the grid that robots visit but do
not stop at. The absorbing states are the vertices where the robots stop. Applying rules 5.1 & 5.2, robots tend to go to unvisited neighbor vertices. If all sensing neighbors are visited before the targets are found, a robot randomly chooses an accessible neighbor, continues to the next step and only stops when all targets are found or all vertices are detected. For 5.3, an absorbing state is reached once all robots know |Tki| = |T|. For 5.4, an absorbing state is reached once |Vu,i(k)| = 0 for every vertex. As robots stop moving once the judgment conditions are satisfied, the absorbing states are impossible to leave. The assumption about Wpass guarantees that all accessible vertices can be visited, so an absorbing state can be reached from any initial state, namely with probability 1. This completes the proof of Theorem 5.1.1.
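The absorbing-chain argument can be illustrated with a single-robot sketch on a small obstacle-free grid (an illustration of the proof idea, not the thesis simulation): with the unvisited-first random rule, full coverage — the absorbing event — is reached after finitely many steps.

```python
import random

def explore(width, height, start=(0, 0), seed=1, max_steps=100_000):
    """Move one robot with the unvisited-first random rule on a
    4-neighbour grid until every vertex is visited; return the
    number of steps taken (finite with probability 1)."""
    rng = random.Random(seed)
    visited = {start}
    p = start
    steps = 0
    while len(visited) < width * height:
        x, y = p
        nbrs = [(x + dx, y + dy)
                for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1))
                if 0 <= x + dx < width and 0 <= y + dy < height]
        unvis = [v for v in nbrs if v not in visited]
        p = rng.choice(unvis if unvis else nbrs)  # unvisited first, else random
        visited.add(p)
        steps += 1
        assert steps < max_steps, "absorbing state not reached"
    return steps

steps = explore(6, 6)   # terminates: all 36 vertices visited
```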
5.1.3
Simulations with the Strict Requirement
Simulations with rules 5.1 & 5.2 for both situations I and II are run to verify the algorithm. For situation I, one to nine targets, evenly distributed in the area, are tested.
For situation I, Figures 5.1, 5.2, 5.3 and 5.4 show an example of searching for 9 targets with 3 robots. There are 900 vertices (30*30) in total in the graph. The boundaries of the search area and the obstacles are illustrated by the bold black lines made up of discrete points. The distances between those points are designed to be small enough that robots cannot pass through the boundaries. Table 3.1 shows the parameters of the simulation, which are designed based on the datasheet of the Pioneer 3-DX robot. Each robot came within one vertex of each target, which means every target was found.
Based on the recorded routes, there were no collisions between robots. Figure 5.5 displays the initial step (k = 1) of the above example. Orange circles are vertices that cannot be accessed, as they are closer than rsafe to the boundary points. Based on the method of deciding the selection order in Section 4.1.2.1, the robot labeled by the blue square was the first to choose, the red one was the second, and the green one was the last. Then, based on the first condition of rule 5.2, the blue robot chose the unvisited vertex at the top left corner, which is the only unvisited vertex among its six sensing neighbors. The vertex with target t3 can be detected as an obstacle, so it does not need to be visited. Similarly, the red robot chose the vertex at its top left corner based on the first condition of 5.2. For the green robot, there were two accessible unvisited neighbors, so it calculated p3,ave(k), shown as the orange square, and chose the vertex that results in max(dcu,pi,ave(k)), namely the vertex on the left side.
Search time may be affected by many factors, including the number of robots, the size of the area, the number of targets, the initial allocation of the robots and Wpass. Each reported result is the average of 150 simulations. In order to exercise every case in rules 5.1 & 5.2, at least seven robots are needed, so that one robot can be blocked by six neighbors, which is the third condition of rule 5.2 and can be seen in Figure 5.6. The six neighbors fall under the first and second conditions of rule 5.2, and the other conditions occur in later loops. In the simulations, 1 to 13 robots were used, and the order of allocating robots is shown in Figure 5.7.

Initially, simulations for one to thirteen robots with one, three, five, seven and nine targets in an area of 574.82 m² (900 vertices) are run to check the relation
Figure 5.1: Route of robot 1 in a simulation of three robots
Figure 5.2: Route of robot 2 in a simulation of three robots
Figure 5.3: Route of robot 3 in a simulation of three robots
Figure 5.4: Combined routes in a simulation of three robots
Figure 5.5: Choice in the first step of the simulation of three robots
Figure 5.6: Initial setting of the map
Figure 5.7: The order and initial positions of robots
between the search time and the number of robots, as well as between the search time and the number of targets (see Figure 5.8). Here, the 900 vertices include all the vertices in the graph, and this representation makes it easy to identify the relation between different areas. The size of the area counts the search area only and excludes the parts occupied by obstacles. The area and obstacles have random shapes with both concave and convex parts, and they satisfy the assumptions for the strict requirement. Thus, the result in Figure 5.8 can be thought of as a general result for areas with obstacles. For each curve in Figure 5.8, as the number of robots rises, the search time falls quickly for 1 to 6 robots and slowly for 7 to 13 robots. The shape of each curve is similar to an inverse proportional function and can generally be written as the power function
time = b · m^c + d      (5.5)
where b, c and d are the unknown parameters and m is the number of robots. As the search time for nine targets is compared with others in the next few paragraphs, the function for that curve is estimated using the least squares method with four significant digits for accuracy, namely

time = 1.295 × 10^4 · m^(−1.209) + 420.2.      (5.6)
This function is the red dotted line with asterisks in Figure 5.8. When searching for different numbers of targets with the same number of robots, the search time increases with the number of targets. However, the time differences between five, seven and nine targets are smaller than those between one, three and five targets, as robots need to visit most parts of the area to find all the targets when there are five, seven or nine of them. This is because targets t2, t3, t4 and t5 occupy the four corners of the area.
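The power-law fit of (5.5) can be reproduced with a simple least-squares procedure (a sketch: the thesis used MATLAB, and the grid of candidate exponents below is an assumption — each candidate c reduces the fit to a linear least-squares problem in b and d):

```python
import numpy as np

def fit_power(m, t, c_grid=np.linspace(-2.0, -0.5, 301)):
    """Least-squares fit of time = b * m**c + d by scanning exponents."""
    m, t = np.asarray(m, float), np.asarray(t, float)
    best = None
    for c in c_grid:
        A = np.column_stack([m ** c, np.ones_like(m)])  # columns for b and d
        sol, *_ = np.linalg.lstsq(A, t, rcond=None)
        err = float(np.sum((A @ sol - t) ** 2))
        if best is None or err < best[0]:
            best = (err, sol[0], c, sol[1])
    return best[1:]                                     # (b, c, d)

m = np.arange(1, 14)                     # 1 to 13 robots
t = 1.295e4 * m ** -1.209 + 420.2        # synthetic data shaped like (5.6)
b, c, d = fit_power(m, t)
```

On this synthetic data the recovered exponent lands on the grid point nearest −1.209, illustrating how a curve like (5.6) is obtained from the simulation results.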
Figure 5.8: Search time for 1-13 robots with 1, 3, 5, 7 and 9 targets
Secondly, 1 to 13 robots with 9 targets in areas of 63.87 m² (100 vertices), 255.48 m² (400 vertices), 574.82 m² (900 vertices), 1021.91 m² (1600 vertices) and 1596.73 m² (2500 vertices) are tested to discover the relation between the search time, the number of robots (Figure 5.9) and the size of the area (Figure 5.10). It should be noted that these five areas are expanded or shrunk versions of the 900-vertex area, so they have the same shape and the area ratio equals the vertex ratio, 1:4:9:16:25. All robots start from the bottom right corner near target t3, so the starting points are controlled to have minimal effect on the result. The curvatures are initially designed to satisfy the requirement in the 900-vertex area, so the curvatures in the expanded areas also meet the requirement. Nonetheless, for the shrunk area, some curved parts do not meet the requirement, so the grids are slightly moved to guarantee a complete search. The result in Figure
5.9 agrees with Figure 5.8. It also illustrates that, as the size of the area increases, the search time increases as well. If the search time for 100 vertices is taken as 1 for each number of robots, the ratio of the search time in the different areas to the search time in the 100-vertex area is shown in Figure 5.10. There is an approximately linear relation between the search time and the area. The gradients of the lines differ, and there is no fixed relation between the gradient and the number of targets, so the average trend of the 13 lines is calculated using the least squares method with four significant digits:

ratio = 0.02328 · area − 1.024.      (5.7)
This equation is the red dotted line in Figure 5.10. The base of this equation, namely the time function for 100 vertices, is

time = 1401 · m^(−1.146) − 10.35.      (5.8)
It is the red dotted line in Figure 5.9. So, the general time function should be the
Figure 5.9: Search time for 9 targets with 1-13 robots in 5 areas
product of ratio (5.7) and time (5.8):

timetotal = 32.62 · m^(−1.146) · area − 1435 · m^(−1.146) − 0.2410 · area + 10.60      (5.9)
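A quick consistency check (a sketch, not thesis code) confirms that the expanded formula (5.9) matches the product of (5.7) and (5.8) up to the four-digit rounding of the coefficients:

```python
def ratio(area):            # equation (5.7): area-to-100-vertex time ratio
    return 0.02328 * area - 1.024

def time_100(m):            # equation (5.8): time in the 100-vertex area
    return 1401 * m ** -1.146 - 10.35

def time_total(m, area):    # equation (5.9): expanded product of (5.7) and (5.8)
    return (32.62 * m ** -1.146 * area - 1435 * m ** -1.146
            - 0.2410 * area + 10.60)

# The expanded form agrees with ratio(area) * time_100(m) up to rounding:
gap = abs(time_total(5, 574.82) - ratio(574.82) * time_100(5))
```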
Thirdly, the search time of two kinds of allocation for 1 to 13 robots is compared in the 900-vertex area. Robots are allocated in a circle (Figure 5.5) and in a curve (Figure 5.13) respectively. The results are displayed in Figures 5.11 and 5.12 for situations I and II. In Figure 5.13, robots still start from the bottom right corner of the search area, and every robot is within the obstacle sensing range of at least one other robot, to ensure that robots can communicate with each other and the operator does not need to move far to deploy all robots. In both Figures 5.11 and 5.12, the search times for 1-4 robots are close, since the allocations for 1 to 3 robots are similar, in a line near target 3, in both situations. For 5-13 robots, the search time for robots allocated in a circle is longer than for robots in a line. The reason is that robots block each other more easily when allocated in a circle. Thus, some robots, for example rob1 and rob2, need to choose a visited neighbor vertex even in the first step. In the next few steps, a robot may also repeat others' choices, which wastes resources. The deployment in a line decreases the chance of repeating vertices in the first few steps. It can be predicted that, if the distances between robots are set larger but still less than the communication range, robots will repeat fewer vertices in the first few steps, as they can not only communicate to obtain the explored map from others but also have more unvisited vertices around. Thus, robots could spend less time on the search task.
The widths of the passages in the area also affect the search time. If narrow passages with widths near Wpass are used to connect subareas, robots in one subarea have a lower chance of moving through the passage and leaving that subarea. Thus, a longer search time is needed compared to a search area with wide passages, with the other three factors controlled to be the same.
Figure 5.10: Time ratio based on 100 vertices for 1-13 robots in five areas
Figure 5.11: Search 9 targets in 900 vertices area with different allocations in situation I
Figure 5.12: Search 9 targets in 900 vertices area with different allocations in situation II
Figure 5.13: The order of allocation for 13 robots in a curve
In the proposed algorithm, the number of robots is flexible and the algorithm is scalable. As the area is unknown, the search time function may be estimated using the least squares method on simulation results with all available information, initializing the robots in a line with a long practical distance between them. Even for a known area, the optimal number of robots should still be decided by comparing the benefit of the reduced time with the cost of adding more robots.
5.1.4
Comparison with Other Algorithms
The proposed algorithm is also compared with other decentralized random search algorithms from [12, 193] in situation I. For [12], there are three methods using the T grid: the random choice (R) algorithm, the random choice with unvisited vertices first (RU) algorithm and the random choice with path to the nearest unvisited vertex (RUN) algorithm. However, in RUN, robots may not move straight along the edges of triangles, which creates problems in path planning and collision avoidance: for example, how to use the repulsive force to plan the path when there are obstacles and other robots in the way, and when and where to stop if the time for the movement step runs out before reaching the destination. Since [12] does not provide solutions for those problems, this algorithm could not be simulated, so RUN is not compared with the proposed
algorithm. In [193], the random walk (RW) algorithm, the LF algorithm and the Lévy flight with potential field (LFP) algorithm are discussed; all of them are compared with the proposed algorithm. In those algorithms, a normal distribution is needed to generate turning angles. The parameters of the normal distribution are set as µ = 0 and σ = 1 and the angle range is (−π, π]. The step length of the RW algorithm is also generated by a normal distribution, with µ = a/2, σ = 1 and a maximum step length a. The Lévy distribution is utilized to generate the lengths of movements. Based on [38], it needs two independent random variables, f and g, generated by the standard normal distribution above, to get the intermediate result h, where
h = f / |g|^(1/α).      (5.10)
Then the sum of h with an appropriate normalization,

zn = (1/n^(1/α)) · Σ_{k=1}^{n} hk,      (5.11)
converges to the Lévy distribution for large n; usually n = 100, so n = 100 is used here. The constant α is a parameter of the Lévy distribution that ranges from 0 to 2 and is set to 1.5 in the simulation. When α = 2, the distribution becomes Gaussian. The repulsive force is needed to disperse robots in LFP; it has the following formula according to [40]:
Frep = −∇Urep(q) = { krep (1/ρ(q) − 1/ρ0) (1/ρ²(q)) (q − qobstacle)/ρ(q),   if ρ(q) ≤ ρ0,
                   { 0,                                                     if ρ(q) > ρ0.      (5.12)
The repulsive force Frep is used as the weight of the velocity in a certain direction.
Urep (q) is a differentiable repulsive field function. krep is the scale factor. Without
loss of generality, it is chosen as krep = 1. ρ0 is the maximum distance at which the force applies and ρ(q) is the minimal distance from a robot to its nearest obstacle. q = (x, y) represents the coordinates of the robot and qobstacle those of the obstacle.
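The Lévy step generation of (5.10)-(5.11) and the repulsive force (5.12) can be sketched as below; the parameter values (alpha, rho0) and function names are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def levy_step(alpha=1.5, n=100):
    """Lévy-distributed step length via (5.10)-(5.11):
    h = f / |g|**(1/alpha) for standard-normal f and g,
    then sum n draws and normalise by n**(1/alpha)."""
    f = rng.standard_normal(n)
    g = rng.standard_normal(n)
    h = f / np.abs(g) ** (1.0 / alpha)
    return h.sum() / n ** (1.0 / alpha)

def repulsive_force(q, q_obstacle, rho0=2.0, k_rep=1.0):
    """Repulsive force (5.12): nonzero only within distance rho0 of the
    obstacle, pointing from the obstacle towards the robot."""
    q, q_obstacle = np.asarray(q, float), np.asarray(q_obstacle, float)
    rho = np.linalg.norm(q - q_obstacle)
    if rho > rho0:
        return np.zeros_like(q)
    return k_rep * (1/rho - 1/rho0) * (1/rho**2) * (q - q_obstacle) / rho
```

The force magnitude grows without bound as the robot approaches the obstacle and vanishes smoothly at rho0, which is what disperses nearby robots in LFP.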
Algorithms R, RU, RW and LF do not provide any solution for collisions between robots. The strategy applied in the simulation is to allow collisions to happen and count their number. It is assumed that one extra loop is needed to finish the search for each collision. In fact, if consecutive collisions happen, more than one extra loop may be required.
To compare all methods in the same environment, all the algorithms are simulated in the area of Figure 5.6 with one target (Figures 5.14 and 5.15) and nine targets (Figures 5.16 and 5.17) separately. The results in the four figures are averages over at least 100 simulations. Nevertheless, [12] contains contradictory descriptions of the method for transmitting target information, and [193] used targets as stations to store and transmit information; these differ from the method proposed here. So, to control all the factors to be the same, equation 5.3 for the stop judgment, ad-hoc communication with range rc, and broadcasting for transmitting the positions of targets are applied in all the algorithms. The algorithm R is much slower than the others. To make the small time differences between the other methods visible, Figures 5.14 and 5.16 compare all algorithms except R for one and nine targets separately. The time for R is displayed only in two additional figures, namely Figure 5.15 with one target and Figure 5.17 with nine targets, together with the second slowest algorithm RU, to show how slow it is. [193] claimed that the LF and LFP algorithms are advantageous when targets are sparsely and randomly distributed. Therefore, the nine evenly separated targets in the area allow those two algorithms to fully unfold their potential, which makes the comparison more convincing.
In the line charts, the first algorithm from Chapter 4 is represented as S1 and the second search algorithm from this chapter is labeled S2. The four line charts show that the proposed algorithm S2 is always the best under the strict requirement. The general ranking of search time from the graphs is S2 < S1 < LFP < RU < LF < RW < R. For all the algorithms, as the number of robots increases, the search time decreases significantly at first but more gently later, because more time is spent on repeating visited vertices and on collision avoidance instead of exploring new vertices. For R and RU, although the number of loops decreases, the number of collisions grows, and the time spent dealing with collisions accounts for a larger proportion of the total time. This leads to a slower decrease of the search time, or even an increase, as for RU in Figure 5.14 and R in Figure 5.15. The details of time and collisions for RU in Figure 5.14 are shown in Table 5.1. In that table, all values are averages for one simulation; the row 'Collisions' shows the number of collisions and the row 'tc' the time spent dealing with collisions. The 'Total time' includes tc, the algorithm time and the movement time. In RW, LF and LFP, when there are more robots, robots have a higher chance of encountering others. Thus, they turn more frequently without fully executing the translation part of the algorithm, so the efficiency of the algorithm declines, resulting in a more moderate decrease of the search time. As to the proposed algorithm in Table 5.2, 'ta' represents the average time used by the algorithm in a simulation, and 'ta/loop' is ta for each loop. To avoid collisions, the waiting time increases with the number of robots, because the wait time in each loop equals the number of robots multiplied by the time for choosing.
Figure 5.14: Search time for 1 target using 6 algorithms
Figure 5.15: Search time for 1 target using algorithms R and RW
Figure 5.16: Search time for 9 targets using 6 algorithms
Figure 5.17: Search time for 9 targets using algorithms R and RW
No.  Loop  Collisions  tc (s)  Total time (s)
1    303   0.00        0.00    1748
2    143   1.32        7.56    833
3    110   3.74        21.5    655
4    81.1  7.67        44.1    512
5    74.8  10.5        60.2    492
6    63.6  14.8        84.9    452
7    56.3  21.4        123     448
8    53.4  21.1        121     429
9    49.1  26.7        153     437
10   48.2  28.7        165     443
11   43.3  32.0        184     434
12   42.9  36.9        212     476
13   42.3  42.1        242     499

Table 5.1: Results of RU for 1-13 robots in the 900-vertex area with 1 target (RU in Figure 5.14)
Therefore, ta/loop goes up as the number of robots increases. However, ta constitutes only a small proportion of the total time, even in the simulation with 13 robots (0.5396%), so the trend of the total time is similar to the trend of the number of loops. If there are hundreds of robots in real applications, however, this proportion will increase to a level that needs to be considered.
5.1.5
Robot Test
Rules 5.1 & 5.2 in situation I are verified with one to three Pioneer 3-DX robots (Figure 2.7) in a 429mm*866mm (maximum width * maximum length) area, as seen in Figure 5.18. The robot has 16 sonars arranged to cover 360 degrees for objects with a height of 15.5cm-21.5cm; Figure 2.8 displays half of them. The robot also has a SICK-200 laser that scans 180 degrees in front (see Figure 2.9) for objects higher than 21.5cm. Based on this difference in heights, the test environment is set up as in Figure 5.18. The three large triangular boxes are obstacles. Obstacles and boundaries are set low enough to be detectable by the sonars only. A tall brown triangular box at the left side of the far end is set as the target and can be detected by both the sonars and the laser. Robots use Wi-Fi in ad-hoc mode to communicate with each other using TCP.
Parameters of the robots used in the test are given in Table 5.3. The safety distance dsafe is calculated based on the odometry errors of the robots. Based on measurement, if a robot moves straight forward for 8m, it may shift at most 0.08m to the left or right. As the moving distance from the bottom to the top of Figure 5.18 is less than 8m, dsafe = 0.09m is large enough for collision avoidance even if there are errors.
No.  ta (s)  ta/loop (s)  Total time (s)
1    36.62   0.0161       13117
2    17.71   0.0171       5989
3    12.48   0.0182       3965
4    10.88   0.0192       2857
5    8.00    0.0207       2279
6    6.92    0.0220       1864
7    6.95    0.0231       1589
8    6.08    0.0243       1449
9    5.83    0.0270       1337
10   5.50    0.0270       1204
11   5.71    0.0284       1154
12   5.53    0.0302       1076
13   5.59    0.0312       993

Table 5.2: Results of S2 for 1-13 robots in the 900-vertex area with 9 targets (rules 5.1 & 5.2 in Figure 5.16)
Figure 5.18: The test area
P            R       S
rrob (m)     0.26    0.26
rsafe (m)    0.35    0.35
rso (m)      5       1.44
rst (m)      32      1.35
a (m)        4.65    1
rc (m)       91.4    2.09
vmax (m/s)   1.5     0.3
ωmax (◦/s)   300     300
tmove (s)    5.75    10
Wpass (m)    5.35    1.7
Table 5.3: Parameters of the robots and of the experiment
Figure 5.19: The average result for 10 tests with 1-3 robots
Note that rso in the test differs from the simulations, as the sonars used have a minimum target sensing range, min(rso), of around 0.17m. Objects within this range are reported as being infinitely far from the robot. Therefore, rso is instead calculated from the equality case of the inequality rso ≥ a + min(rso) + rsafe.
The test is run ten times for each number of robots, and the results are illustrated in Figure 5.19. Example routes of the experiments with one to three robots are displayed in Figures 5.20-5.22. Figure 5.20 also has legends labelling the corresponding items. The search time in the robot tests agrees with the simulation results, namely a descending trend that gradually flattens. The experiment shows that, by applying Algorithms 5.1 & 5.2 in situation I, the target can be found successfully by 1 to 3 robots without any collision.
Figure 5.20: The example route for one robot with legends (13 loops)
Figure 5.21: Example routes for two robots (10 loops)
Figure 5.22: Example routes for three robots (6 loops)
5.1.6
Factors in Choosing a Grid under the Loose Requirement
Definition 5.1.4. Let Nvt represent the number of vertices within rst of one target. Let Pt represent the proportion of the number of vertices within rst of all targets to the total number of vertices.
Under the loose requirement, different grid patterns with different numbers of vertices can be used. The number of vertices is a significant factor for search time. However, fewer vertices do not always result in a shorter search time. Three other possible factors for choosing a grid pattern are Pt, the structure of the grid, and Wpass. The simulations and analyses in this section use situation I and the seventh condition in Table 2.2, which satisfies the robot parameters in Table 5.3.
Examples of different Nvt are shown in Figures 5.23, 5.24 and 5.25 for a T grid, an S grid and an H grid, using different colors. In the examples, a = 1 and rst ≥ a + rsafe are set so that a target can be detected from different vertices and the setting works for all three grids. The cyan hexagon, square and triangle show the sections occupied by the vertices in the middle of the areas. The circles that intersect the cyan areas show all the related rst centered at nearby vertices. Labels with different colors then represent different Nvt values. As the graphs are symmetrical, only parts of each area are marked. The effect of Pt becomes evident when there are many targets in a relatively small area. The area in Figure 5.26 is used with no obstacles, to avoid the effect of Wpass. The average search time of 500 simulations using 1 to 13 robots on a T grid with Nvt = 4 and Nvt = 6 is presented in Figure 5.27. The corresponding Pt values are around 4.5% and 6.7% respectively. Figure 5.27 shows that when Pt is larger, the search time is smaller. To minimize the effect of Pt when discussing the other factors, this factor should be similar for the three kinds of grids. The ratio between the numbers of vertices of the T grid, the S grid and the H grid is 1.5:1.3:1, so the corresponding integer Nvt values in the following simulations are set to 6, 5 and 4, which gives a Pt ratio of 1.5:1.25:1. Note that these Nvt values are also close to the average values for the three grids under the selected condition, so the setting is practical.
The structure of the grid affects the search time by affecting the possible routes to a target. This can be understood by looking at the shortest path as an example. In the current situation and condition, all robots use a1 as the side length; an H grid can then be seen as a part of a T grid. Therefore the shortest path via an H grid is equal to or longer than that via the corresponding T grid. However, the average path length, which determines the mean time, cannot be calculated directly, so simulations are used to see the effect. Figures 5.28 and 5.29 show the average time of 500 simulations for searching one target in the middle, or nine separated targets, in the area of Figure 5.26 with one to 13 robots on the three grids. There are no obstacles in the area, so the effect of Wpass can be ignored. Based on Table 2.2, a vertex in the H grid covers the largest area, so the H grid has the fewest vertices. However, in Figure 5.28, the S grid has the shortest time for three to five robots, and the H grid has the shortest time for the other numbers of robots. In Figure 5.29, the S grid has the shortest time. Therefore, the fewest vertices do not guarantee the shortest search time, and the structure of the grid affects the time.
(a) Ranges and neighbors
(b) Color label
Figure 5.23: Nvt for a T grid: star-4 green-5 purple-6 red-7
(a) Ranges and neighbors
(b) Color label
Figure 5.24: Nvt for an S grid: purple-4 yellow-5 green-6 star-7
(a) Ranges and neighbors
(b) Color label
Figure 5.25: Nvt for an H grid: star-2 red-3 green-4 yellow-5 purple-6
Figure 5.26: A T grid with no obstacles
Figure 5.27: Time for a T grid with Nvt=4 and Nvt=6
Figure 5.28: Time for three grids with one target
Figure 5.29: Time for three grids with nine targets
The third factor, Wpass, affects the search time by affecting the chances to enter or leave an area, as discussed in Section 5.1.3. For the same Wpass, the opportunity for each grid is distinct. For example, if all three grids use a as the side length with a passage Wpass = 2a1 + 2rsafe, the largest in Table 2.1, the width can fit at least three vertices for a T grid but only two for an S grid or an H grid. However, the corresponding ratio of the numbers of vertices in the three grids is 1.5:1.3:1. Thus, robots using an S grid may have less chance to enter or leave a search section through this passage. So Wpass needs to be considered in the simulation design.
From the above discussion, it is clear that when choosing a grid under the loose requirement, the number of vertices is important but not the only determinant. The average Nvt needs to be calculated based on the conditions in Table 2.2. The given Wpass, or the minimum Wpass that satisfies all three grids, needs to be found. Then simulations with these Nvt and Wpass should be run to compare the search times of the three grids and find the best one.
5.1.7
Section Summary
This section presented a random decentralized algorithm with a repulsive force for multiple robots to search for targets in 2D areas through a grid pattern without any collision. The repulsive force helps robots move away from each other when they meet, reducing repetition. The convergence of the algorithm was proved, and simulations showed that the proposed algorithm is the best compared with six other algorithms. An example of how to estimate the relation between the search time, the number of robots and the size of the area was demonstrated for nine targets, to help decide a suitable number of robots. A test on Pioneer 3-DX robots also verified the algorithm. When only detecting all the vertices is required, simulations are needed to find the best grid, considering the number of vertices, the structure of the grid pattern, a suitable Wpass, and Pt.
5.2
3D Tasks
This section presents a decentralized random algorithm for multiple robots to search for targets in an unknown 3D area with obstacles via a cubic lattice. The algorithm is the same as the one in 2D. It considers realistic physical constraints and conditions on all parameters, as well as collision avoidance, so that it can be applied to real robots. Robots choose vertices in sequence and move between vertices along the edges of the lattice based on local information. The convergence of the algorithm is proved, and its effectiveness is shown in comparison with three other algorithms.
5.2.1
Problem Statement
The problem is the same as that in Chapter 4, so the definitions and assumptions below are stated without detailed description.
In the search task, a three-dimensional area A ⊂ R3 with a limited number of obstacles O1 to Ol is used. m robots, labelled rob1 to robm, are employed to search for n static targets t1 to tn through a cubic grid with side length a. For robi, its coordinate at time k is pi(k). Let T be the set of coordinates of the targets and Tki the set of target coordinates known by robi. The initial setting is shown in Figure 4.9.
Assumption 5.2.1. Area A is a bounded, connected and Lebesgue measurable set. The obstacles Oi ⊂ R3 are non-overlapping, closed, bounded and linearly connected sets for any i > 0. They are static and unknown to the robots.
Definition 5.2.1. Let O := ∪i>0 Oi. Then Ad := A \ O is the area that needs to be detected.
Assumption 5.2.2. Initially, all robots have the same grid with the same coordinate system and are allocated to different vertices along the border of the search area. In simulations, robots are manually placed at vertices of the C grid.
Assumption 5.2.3. All sensors and communication equipment that the robots carry have spherical ranges. The other radii mentioned are also spherical.
Definition 5.2.2. Robots have a swing radius rrob and the total error is e. Then a safety radius to avoid collisions is rsafe ≥ rrob + e (see Figure 2.11).
Assumption 5.2.4. Wpass ≥ a√3 + 2rsafe (see Figure 2.13).
Assumption 5.2.5. rso ≥ a + rsafe (see Figure 2.11).
Assumption 5.2.6. rc ≥ 2a + e (see Figure 2.11).
Assumption 5.2.7. The task area is vast, so a is much smaller than the length, the width and the height of area A. Thus, there are no targets between the boundary and the frontiers of the target sensing range, which means the boundary effect can be ignored.
Assumption 5.2.8. rst ≥ 2a/√3 + rsafe − rrob, based on Table 2.3.
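The range assumptions above can be checked mechanically before a deployment. The following is a minimal sketch of such a check; the function name, the dictionary packaging and the example values are illustrative assumptions, not from the thesis:

```python
import math

def check_3d_ranges(a, r_rob, e, w_pass, r_so, r_c, r_st):
    """Check the range assumptions 5.2.4-5.2.8 for the cubic (C) grid.

    Returns a dict mapping each assumption to True/False.
    (Illustrative helper; names and packaging are not from the thesis.)
    """
    r_safe = r_rob + e  # minimum choice allowed by Definition 5.2.2
    return {
        "5.2.4 (passage width)":  w_pass >= a * math.sqrt(3) + 2 * r_safe,
        "5.2.5 (obstacle range)": r_so >= a + r_safe,
        "5.2.6 (comm range)":     r_c >= 2 * a + e,
        "5.2.8 (target range)":   r_st >= 2 * a / math.sqrt(3) + r_safe - r_rob,
    }

# Example with made-up values: a = 1 m, r_rob = 0.26 m, e = 0.09 m
checks = check_3d_ranges(a=1.0, r_rob=0.26, e=0.09,
                         w_pass=2.5, r_so=1.4, r_c=2.1, r_st=1.3)
print(checks)
```

Any False entry indicates a parameter set for which the convergence argument below would not apply.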
Va is the set of all accessible vertices. Ns,i(k) represents the set of sensing neighbors around robi. Within Ns,i(k), Vv,i(k) and Vu,i(k) denote the sets of visited and unvisited vertices, and the choices from these two sets are cv and cu respectively. A set of new definitions for the distances and vertices used in this chapter is stated as follows:
Definition 5.2.3. Let pi,ave(k) be the average position of the choices made by robots that chose earlier and the current positions of communication neighbors that have not chosen yet. Let dc,pi,ave(k) be the set of distances from pi,ave(k) to the possible choices; dcu,pi,ave(k) is the subset of distances to unvisited vertices and dcv,pi,ave(k) the subset of distances to visited ones. Then max(dcu,pi,ave(k)) and max(dcv,pi,ave(k)) are the maximum values in the two sets. The corresponding sets of vertices that attain these maximum distances are denoted Vumax,i(k) and Vvmax,i(k).
5.2.2
Procedure and Algorithm
The general procedure of the algorithm is the same as the 2D algorithm, so the same flow charts in Figures 4.2 and 4.3 apply for situations I and II. However, the way to set the sequence differs from the 2D case; readers can refer to Section 4.2.2 for details. Compared to the first search algorithm, the improvement of this algorithm lies in how local information is used to make choices. When there are no neighbor robots within rso, Algorithm 5.13 is used; when there are neighbor robots within rso, a robot applies Algorithm 5.14 to make its choice.
pi(k + 1) = cu with prob. 1/|Vu,i(k)|,   if |Vu,i(k)| ≠ 0,
            cv with prob. 1/|Vv,i(k)|,   if (|Vu,i(k)| = 0) & (|Vv,i(k)| ≠ 0),        (5.13)
            pi(k),                       if |Ns,i(k)| = 0

pi(k + 1) = cu with prob. 1/|Vumax,i(k)|,  if |Vu,i(k)| ≠ 0,
            cv with prob. 1/|Vvmax,i(k)|,  if (|Vu,i(k)| = 0) & (|Vv,i(k)| ≠ 0),      (5.14)
            pi(k),                         if |Ns,i(k)| = 0
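The two rules can be sketched in code. This is a minimal illustration, not the thesis implementation: the function name, the list-based sets and the single-function packaging are assumptions, and the uniform draw uses Python's standard `random.choice`:

```python
import random

def choose_next(p_i, v_unvisited, v_visited,
                v_unvisited_max=None, v_visited_max=None):
    """Sketch of choice rules (5.13)/(5.14).

    With no neighbor robots in r_so, pass only v_unvisited / v_visited
    (rule 5.13). With neighbor robots in r_so, also pass the subsets
    farthest from p_ave, i.e. Vumax/Vvmax (rule 5.14). Names are
    illustrative, not from the thesis.
    """
    if v_unvisited:
        pool = v_unvisited_max if v_unvisited_max else v_unvisited
        return random.choice(pool)  # uniform over the candidate pool
    if v_visited:
        pool = v_visited_max if v_visited_max else v_visited
        return random.choice(pool)
    return p_i                      # no accessible neighbors: stay put

# Rule 5.13: no neighbor robots, two unvisited candidates
print(choose_next((0, 0, 0), [(1, 0, 0), (0, 1, 0)], []))
```

Passing the `*_max` subsets switches the same function from rule 5.13 to rule 5.14, mirroring how the two rules differ only in the candidate pool.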
Rules 5.15 and 5.16 are employed to judge the stop time.
In situation I:  pi(k + 1) = pi(k), if |Tki| = |T|.                    (5.15)
In situation II: pi(k + 1) = pi(k), if ∀vi ∈ Va, |Vu,i(k)| = 0.        (5.16)
Theorem 5.2.1. Suppose that all assumptions hold and that the proposed decentralized random algorithm, namely Algorithms 5.13 and 5.14 with the related judgment strategy 5.15 or 5.16, is used. Then, for any number of robots, with probability 1 there exists a time k0 > 0 such that all targets or all vertices are detected.
Proof. Algorithms 5.13 and 5.14 with the corresponding judgment methods form an absorbing Markov chain including both transient and absorbing states. The transient states are the approachable vertices of the C grid that robots visit but do not stop at forever. Absorbing states are the vertices where the robots finally stop. Applying the proposed algorithm, robots start from some initial transient states and move between transient states with the probabilities given by the algorithm. A robot goes to an unvisited sensing neighbor if there is one, with the help of the repulsive force. If all sensing neighbors of a robot are visited, the robot randomly chooses an accessible neighbor, again with the repulsive force. If no accessible neighbor vertices are available, it stays at its current vertex. For 5.15, robots stop at an absorbing state once all robots know |Tki| = |T|. For 5.16, absorbing states are reached when |Vu,i(k)| = 0 for all accessible vertices. Based on Assumption 5.2.4, absorbing states can be reached from any initial states, namely with probability 1. This completes the proof of Theorem 5.2.1.
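The absorption argument can be illustrated numerically. The toy simulation below, a simplified single-robot setting with no obstacles and illustrative names (not the thesis code), shows a walk that prefers unvisited neighbors reaching the absorbing condition of rule 5.16 on a small cubic lattice:

```python
import random

def random_sweep(n=4, seed=0, max_steps=100000):
    """Toy illustration of the absorbing-Markov-chain argument:
    a single robot on an n*n*n cubic lattice prefers unvisited
    neighbors and otherwise moves uniformly at random. With
    probability 1 it eventually visits every vertex, i.e. reaches
    the absorbing condition of rule 5.16. Simplified single-robot,
    obstacle-free setting; names are illustrative."""
    rng = random.Random(seed)
    pos = (0, 0, 0)
    visited = {pos}
    for _ in range(max_steps):
        if len(visited) == n ** 3:
            return True                       # absorbing state reached
        x, y, z = pos
        nbrs = [(x + dx, y + dy, z + dz)
                for dx, dy, dz in [(1, 0, 0), (-1, 0, 0), (0, 1, 0),
                                   (0, -1, 0), (0, 0, 1), (0, 0, -1)]
                if 0 <= x + dx < n and 0 <= y + dy < n and 0 <= z + dz < n]
        unvisited = [v for v in nbrs if v not in visited]
        pos = rng.choice(unvisited if unvisited else nbrs)  # rule 5.13 spirit
        visited.add(pos)
    return False

print(random_sweep())
```

On this 4*4*4 lattice the sweep completes in far fewer steps than the cap, matching the claim that absorption occurs with probability 1.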
5.2.3
Simulation Results
A search task for situation I is simulated in MATLAB 2016a. The parameters are set based on Table 3.4. To display the details of the proposed algorithm, a small area of 8*8*8 vertices with 9 targets is searched by 3 robots using control laws 5.13, 5.14 & 5.15, as shown in Figures 5.30-5.33. The routes are illustrated with arrows showing the directions of movement, and the path of each robot has its own color. By analyzing the route of each robot, no collisions were found, and each target was within rst of a visited vertex.
Figure 5.30: The route of robot 1 in a simulation with 3 robots
Figure 5.31: The route of robot 2 in a simulation with 3 robots
Figure 5.32: The route of robot 3 in a simulation with 3 robots
Figure 5.33: The combined routes in a simulation with 3 robots
Figure 5.34: Time for four algorithms with 9 targets
To verify the algorithm further, the search times of different algorithms are compared, namely the LFP algorithm in [193], the RU algorithm in [170], rules 4.4 & 4.5 (S1 in the line charts) from Chapter 4, and rules 5.13, 5.14 & 5.15 (S2 in the line charts) from this chapter. LFP and RU are chosen as they performed better than related algorithms in the 2D simulations. RU allows collisions, so it is assumed that each collision costs one extra movement time to resolve, although longer may be needed for consecutive collisions. According to Assumption 5.2.7, targets are allocated in the detectable area as in Figure 4.9. The movement time in the simulation is 5.75s in total, including 4.5s for a translation and 1.25s for a rotation, as in Table 3.4.
The simulation for searching nine separated targets in a larger area (12*12*12 vertices) with 1 to 13 robots was run 500 times to obtain the average search time. Algorithm LFP is always much slower than the others, as no previously built map is used. So Table 5.4 shows the times for LFP and RU, while the algorithms other than LFP are drawn in Figure 5.34 to keep the graphs clear. The figure demonstrates that the times of the three algorithms for one robot are similar, as they coincide in that case. For more robots, the time for S2 is always smaller than for S1, as robots are dispersed to different parts of the area to find the distributed targets. With 1 to 6 robots, algorithm RU is the fastest because robots can move without considering collisions, which gives more choices in each step to search different areas, and its rule is the simplest, which saves calculation time. Also, its disadvantage of needing extra time to resolve collisions is not severe, as the number of collisions is small. With more than six robots, algorithm S2 is the fastest, as the time saved by collision avoidance outweighs the increased waiting time before choosing. So the proposed algorithm is the best among those with a collision avoidance scheme, and with a large number of robots it is even better than algorithm RU, which does not avoid collisions between robots. In all four algorithms, as the number of robots increases, the search time decreases sharply at first and then more slowly. It can be predicted that if the number of robots keeps increasing, the search time of the proposed algorithm may reach a stable period, or even increase at the end, as the algorithm time in the step 'wait' will grow and take up a larger proportion of the total time.
Therefore, the optimal number of robots should be decided by comparing the benefit of decreased search time against the cost of additional robots. The search time is also affected by unknown factors, including the initial positions of the robots and the shape and size of the area. If the size of the area is given, simulations like those in this thesis can be used to estimate a rough range for the number of robots, according to the limit on the search time in a general situation.
5.2.4
Section Summary
This section proposed a random decentralized search algorithm for multiple robots in a 3D region, based on the 2D algorithm. All robots move simultaneously on a C lattice to avoid collisions. The repulsive force helps robots reduce repeated coverage. Comparing different algorithms in simulations, the proposed algorithm is always the best among those with collision avoidance, and the best of all the algorithms when there are more than a certain number of robots.
No.   RU     LFP
 1    5290   9624
 2    2701   5063
 3    1855   3389
 4    1402   2688
 5    1111   2004
 6     952   1771
 7     855   1476
 8     748   1352
 9     692   1193
10     646   1176
11     611   1005
12     571    903
13     552    874
Table 5.4: Search time for algorithms LFP and RU with 9 targets
5.3
Summary
This chapter proposed the second search algorithm for a group of mobile robots. Robots move via the selected grid pattern, selecting vertices in sequence and avoiding collisions in an unknown area. Compared to the first algorithm, a repulsive force is added to disperse robots when they are neighbors within the obstacle sensing range, so that they tend to leave in different directions and repeat less area. The algorithm is suitable for both 2D and 3D areas, with differences in the selection sequence and the dimension of the parameters. A rigorous mathematical proof of the convergence of the algorithm with probability 1 was provided. For the 2D area, this chapter illustrated how to find the relation between the search time and other parameters. Under the loose requirement, it suggested using simulations to find the best grid considering different factors. Then, an experiment on three Pioneer 3-DX robots was conducted to verify the algorithm. In both 2D and 3D areas, the proposed algorithms were compared with other grid-based algorithms and Lévy-flight-related algorithms to show their effectiveness. In conclusion, the algorithm is scalable, robust, reliable and effective.
Chapter 6
A Collision-Free Random Search with the Breadth-First Search Algorithm
The third grid-based collision-free search algorithm is proposed in this chapter. The algorithm is decentralized and uses a grid pattern in a 2D or 3D unknown area. Robots choose their next steps with a random algorithm when there are unvisited neighbors, and use the breadth-first search algorithm when they are surrounded by visited vertices. As the algorithm in this chapter solves the same problem as the task in Chapter 4, the problem statement and the description of the algorithm will be similar but simpler, and only the differing parts are emphasized.
In the two previous search algorithms, there is a considerable number of repeated vertices during the search. So this chapter focuses on how to use the known information to escape the visited area. Thus, a graph search algorithm, namely breadth-first search, is added to find one of the nearest unvisited vertices and the shortest way to it. The shortest way in this chapter means the route with the fewest steps from the current vertex to the target vertex. To check the effect of this algorithm, simulations are carried out against 7 other algorithms to show its effectiveness. Then the advantages and disadvantages of the three proposed algorithms are discussed.
Similar algorithms that include a method to decrease repeated search are found in [14] and [132, 131]. In [14], a grid pattern is also used for the robots to move through. Robots use the explored map to find the nearest vertices, where 'nearest' means the minimum Euclidean distance, represented by de. However, with unknown obstacles in the area, a robot may need to travel a longer actual distance ds to reach the target point, especially when the robot and the target vertex lie on the two sides of the narrow part of a long obstacle. With this method, robots may move without following the edges of the grid, so a dynamic collision avoidance method must be provided. However, [14] did not detail its collision avoidance method and only stated that it used the embedded repulsive-force method of the Pioneer 3-DX robots. It also did not specify what a robot should do when it is not on a vertex, although its algorithm and range assumptions rely on the implicit assumption that robots are on vertices. As a maximum speed is set, steps of different lengths take different times. If all robots waited for the longest movement time, estimated and designed for the robot that leaves the visited area and travels furthest, the other robots, which only move a, would waste their time waiting. If robots are not synchronized, the communication time points in the algorithm are not synchronized either. If blocking-mode communication were used to solve the synchronization problem, robots would have to wait for each other in the communication step, which can also be seen as wasted time. In the experiment of [14], the result did not agree with the algorithm: one robot moved five steps of one unit step length each, while another robot moved six steps with one step of two unit step lengths. With blocking-mode communication, these two robots should both reach the sixth vertex, as they chose at the same time. Without blocking mode, the one with five steps should reach the last vertex, as it chose earlier. Neither agrees with the reported experiment, so the experiment may not be genuine. Also, the experiment did not show a situation that would clarify the ambiguous collision avoidance algorithm. In [132, 131], a Lévy flight algorithm is used to achieve less repetition than the RW algorithm. However, the Lévy flight algorithm only ensures that a single robot repeats less than with the RW algorithm; different robots may still revisit points visited by others, because the robots are independent and do not communicate information about the visited area, as their choices are not based on it. Another disadvantage of [132, 131] is that they consider no collision avoidance and the task area has no obstacles, so the algorithm is not practical and cannot be used on real robots.
The explored area of a robot is a known subarea of the task area, so graph search algorithms are considered in this chapter. Besides breadth-first search, there are other related graph search algorithms, such as the A* algorithm [34, 203] and greedy best-first search. However, these two algorithms suit the situation of finding a path to a known target position. In that situation, greedy best-first search estimates the distance to the target and only needs to explore a small area to find a path, while breadth-first search computes the distance to the start point evenly in each direction, which results in a large explored area but the shortest path. The A* algorithm combines the estimates of the two algorithms above, so it has the advantages of both, namely a smaller explored area and the shortest path. However, in the problem discussed in this chapter, there may be many unvisited vertices around the visited area, and the target vertex should be the one reachable in the fewest steps (ds), which is unknown in advance. If the A* algorithm were used, it would need to be executed for each unvisited vertex at the boundary of the explored area, possibly re-exploring the same parts, and all paths would then have to be compared to find the one with the smallest ds. Breadth-first search finds the paths to all locations without repeated exploration, explores different directions equally, and generates the route after the exploration. So it requires less calculation and should be used to ensure that the found path has the smallest ds.
6.1
2D Tasks
This part proposes the third decentralized random search algorithm. It adds breadth-first search to the first search algorithm to help robots jump out of the detected zone. The algorithm is compared with five algorithms from other authors and the two search algorithms from the previous chapters to show the advantage of the proposed algorithm. The convergence of the algorithm is also proved.
6.1.1
Problem Statement
This section uses a two-dimensional area A with a limited number of static obstacles O1, O2, ..., Ol. Here, the strict requirement is considered, and the T grid is used. There are m robots, labelled rob1 to robm, to search for n static targets t1 to tn. An arbitrary known T grid pattern with side length a is used for the robots to move on in each step, with the following definitions and assumptions. The initial settings can be seen in Figure 5.6. The repulsive force is no longer needed, so the related definitions are omitted. Further explanation of the definitions and assumptions below can be found in Chapter 4.
Assumption 6.1.1. The area A is a bounded, connected and Lebesgue measurable set. The obstacles Oi are non-overlapping, closed, bounded and linearly connected sets for any i > 0. Robots have no knowledge of them before exploration.
Definition 6.1.1. Let O := ∪i>0 Oi. Then Ad := A \ O represents the area that needs to be detected.
Assumption 6.1.2. The initial condition is that all robots use the same T grid
pattern and the same coordinate system. They start from different vertices near the
boundary which can be seen as the only known entrance of the area.
Assumption 6.1.3. All the ranges and radius in 2D search problems are circular.
Definition 6.1.2. To avoid collisions before moving, a safety distance rsafe should include both rrob and e (see Figure 2.2), so rsafe ≥ rrob + e.
Assumption 6.1.4. All passages Wpass between obstacles, or between an obstacle and the boundary, are wider than a + 2rsafe. The Wpass values for other grid patterns can be seen in Table 2.1.
Assumption 6.1.5. The sensing radius for obstacles is rso, where rso ≥ a + rsafe.
Assumption 6.1.6. rst ≥ a + rsafe.
Assumption 6.1.7. rc ≥ 2a + e (see Figure 2.3).
Assumption 6.1.8. The curvature of the concave part of the boundary should be
smaller than or equal to 1/rst .
Va is the set of all accessible vertices. Ns,i(k) represents the set of sensing neighbors around robi. Within Ns,i(k), Vv,i(k) and Vu,i(k) denote the sets of visited and unvisited vertices, and the choices from these two sets are cv and cu respectively.
For the breadth-first search algorithm, the shortest route to the nearest unvisited vertex is generated.
Definition 6.1.3. The route is represented as a set of vertices Vr,i(k) for robi. Then |Vr,i(k)| is the total number of steps in the route. In particular, if the route has not been fully generated, |Vr,i(k)| = 0. Let vr,i(j) be the jth step in the route and cr,i the choice from the route Vr,i(k). The number of steps left in the route is then |Vr,i(k)| − j.
6.1.2
Procedure and Algorithm
The proposed algorithm drives robots through the area via a grid pattern. When the breadth-first rule is not applied, the general progress of this search algorithm is the same as the first search algorithm and follows the flow charts in Figures 4.2 and 4.3. However, there are also some differences.
The way to use the breadth-first search for the choice step of robi is described
as follows:
1. Update map: In the turn of robi , it updates its map based on received information
which include the choice from others if robi is not the first to choose in this loop.
2. check obstacles: robi check obstacles in the rso and calculates |Vu,i (k)|.
3. check |Vu,i (k)|: Is there any vacant unvisited neighbor?
3.1 |Vu,i (k)| =
6 0 and select pi (k +1): If |Vu,i (k)| is not 0, there are unvisited vertices
so robi choose one from Vu,i (k) randomly.
3.2 |Vu,i (k)|=0: check |Vv,i (k)|
3.2.1 |Vv,i (k)| = 0 and stay: stay at current position pi (k) as all neighbor vertices
are blocked.
3.2.2 |Vv,i (k)| 6= 0 and set flag: There are no unvisited neighbors and robi is
surrounded by visited neighbors. Thus, the flag bit for whether the route
map is generated or not is set. In the next step, robi will have higher
priority than others with the flag bit being zero. If multiple robots within
rc set their flag bits, they will judge the priority between them according
to the original method which is base on relative positions.
4. Check the old route: robi checks whether there is any previously generated route
map to follow.
4.1 No routes: robi will find a path using the breadth-first search algorithm
4.2 A route is being calculated: robi will continue calculation use the updated map
if it is updated.
4.3 A full route found: robi will check whether the selected vertex to go is still
unvisited or not based on the explored area.
4.2.1 Unvisited: If it is not visited, keep using the route and choose the next
vertex cr,i in the route map.
4.2.2 Visited: robi will redesign a route based on the explored area.
5. The loop for checking visitation states: robi checks the visitation states of the
sensing neighbors of current sensing neighbors. Each checked vertex is saved in a
list with its father node information and the number of steps from pi (k) to it.
5.1 No unvisited vertices: continue the loop and check one step length further until
an unvisited vertex is found. If a vertex is found more than once, only the first
will be recorded as the corresponding path has fewer steps from pi (k).
5.2 Found an unvisited vertex: break the check loop. Then following the father
vertices back, the shortest route Vr,i (k) from pi (k) to the unvisited vertex is
generated.
110
6.1 2D Tasks
6. Check the time for the selection: As the map may be large, the search may cost
a long time. The time for the selection step needs to be checked
6.1 ≤ time for normal selection: robi chooses the next step cr,i based on the route
Vr,i(k).
6.2 > time for normal selection: robi stays at its current position pi(k) and continues
the calculation in the time slot for the movement. In the next loop, it stops the
calculation and behaves normally according to the flow chart until it is robi's turn.
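The route search of step 5 can be sketched as follows (a minimal Python sketch; the `neighbors` callback and the set names are hypothetical stand-ins for the robot's explored map, not the thesis's implementation):

```python
from collections import deque

def shortest_route_to_unvisited(start, neighbors, visited):
    """Breadth-first search from `start` over the explored map.

    Returns the shortest route (list of vertices, excluding `start`)
    to the nearest unvisited vertex, or [] if every reachable vertex
    is already visited (|Vr,i(k)| = 0 in the text's notation).
    `neighbors(v)` yields the sensing neighbors of vertex v.
    """
    parent = {start: None}          # father-node information (step 5)
    queue = deque([start])
    while queue:
        v = queue.popleft()
        for w in neighbors(v):
            if w in parent:         # found before: keep the first, shorter path
                continue
            parent[w] = v
            if w not in visited:    # step 5.2: nearest unvisited vertex found
                route = []
                while w != start:   # follow the father vertices back
                    route.append(w)
                    w = parent[w]
                route.reverse()
                return route
            queue.append(w)         # step 5.1: check one step length further
    return []
```

Because BFS expands the map in rings of increasing step count, the first unvisited vertex encountered is necessarily at the minimum number of steps from pi(k).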
The steps above in red generate a choice. Mathematically, this can be
summarized as:
pi(k + 1) =
    cu, with prob. 1/|Vu,i(k)|,  if |Vu,i(k)| ≠ 0,
    cr,i,                        if (|Vu,i(k)| = 0) & (|Vv,i(k)| ≠ 0) & (|Vr,i(k)| ≠ 0),     (6.1)
    pi(k),                       if (|Ns,i(k)| = 0) | (|Vr,i(k)| = 0).
The stop strategy for this choosing rule is the same as that in Chapter 4.
In situation I: pi(k + 1) = pi(k), if |Tki| = |T|.     (6.2)
In situation II: pi(k + 1) = pi(k), if ∀vi ∈ Va, |Vu,i(k)| = 0.     (6.3)
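Rule (6.1) itself can be mirrored by a short sketch (hypothetical parameter names transcribing the sets above; the random choice is uniform over the unvisited neighbors):

```python
import random

def next_position(p_i, V_u, V_v, V_r):
    """One step of rule (6.1).

    V_u, V_v: sets of unvisited/visited sensing neighbors; if both are
    empty, there are no sensing neighbors at all (|Ns,i(k)| = 0).
    V_r: the BFS route as an ordered list ([] if no route is available).
    """
    if V_u:                                # |Vu,i(k)| != 0
        return random.choice(sorted(V_u))  # c_u, with prob. 1/|Vu,i(k)|
    if V_v and V_r:                        # surrounded by visited vertices
        return V_r[0]                      # c_r,i: first step of the route
    return p_i                             # no sensing neighbors or no route: stay
```

The `sorted` call only fixes an iteration order over the set; the choice among unvisited neighbors remains uniform.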
Theorem 6.1.1. Suppose that all the above assumptions hold and that search algorithm 6.1 and the judgment strategy 6.2 or 6.3 are used for situation I or situation
II, respectively. Then, for any number of robots, there exists a time k0 > 0 such that
all targets are detected with probability 1.
Proof. Algorithm 6.1 with a judgment method forms an absorbing Markov
chain, which includes both transient states and absorbing states. The transient states
are all the approachable vertices of the grid that robots visit but at which they do not stop forever;
the absorbing states are the vertices at which the robots stop forever. Applying Algorithm
6.1, robots select unvisited neighbor vertices to go to. If all sensing neighbors are
visited before the targets are found, robots look for the shortest route to the nearest
unvisited vertex and only stop when all targets are found or all vertices are detected.
For 6.2, the absorbing state is reached when robots know |Tki| = |T|. For 6.3,
absorbing states are reached when |Vu,i(k)| = 0 for every vertex. The assumption about
Wpass guarantees that all the accessible vertices can be visited, so absorbing states can
be reached from any initial state, namely with probability 1. This completes the
proof of Theorem 6.1.1.
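The absorption argument can be checked empirically on a toy model (a sketch under simplified assumptions: one robot, no obstacles, and a prefer-unvisited walk instead of the full BFS escape): a walker that always prefers unvisited neighbors covers a small connected grid in finite time for every seed tried.

```python
import random

def steps_to_cover(n, seed):
    """Random walk on an n-by-n grid that prefers unvisited neighbors;
    returns the number of steps until every vertex has been visited
    (i.e., until the absorbing state of situation II is reached)."""
    random.seed(seed)
    pos, visited, steps = (0, 0), {(0, 0)}, 0
    while len(visited) < n * n:
        x, y = pos
        nbrs = [(x + dx, y + dy) for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1))
                if 0 <= x + dx < n and 0 <= y + dy < n]
        unvisited = [v for v in nbrs if v not in visited]
        pos = random.choice(unvisited or nbrs)  # prefer unvisited neighbors
        visited.add(pos)
        steps += 1
    return steps
```

The loop always terminates because the grid is connected, which is exactly the role the Wpass assumption plays in the proof.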
6.1.3 Simulations with the Strict Requirement
Situation I of the proposed algorithm is simulated to verify the algorithm. For
situation I, one to nine targets, evenly allocated in the area, are tested,
and the initial setting is the same as in Figure 5.6.
In situation I, an example of the routes of three robots searching for 9 targets
is shown in Figures 6.1-6.4. There are 30*30 vertices in total in the graph. The
three robots start at the bottom right corner, which is the only known entrance for
allocating the robots, and the nine targets are labeled as red stars. The boundaries
Figure 6.1: The route of robot 1 in a simulation with 3 robots
and obstacles are shown by the bold black lines of discrete dots, which obey
Assumption 6.1.4. The parameters used in the simulation are from Table 3.1, based on
the parameters of the Pioneer 3-DX robot. The figures demonstrate that each target
is within rst of a visited vertex, so the search task is finished successfully. Based
on the recorded traces (Figures 6.1-6.3), the robots have no collisions with each other, so
it is a collision-free algorithm. Also, the routes in Figure 6.4 have less repetition
than Figure 5.4 in Chapter 5 and Figure 4.7 in Chapter 4. The route of
each robot occupies a larger percentage of the area than before, which is a benefit
of the breadth-first search algorithm.
6.1.4 Comparison with Other Algorithms
In this section, the third search algorithm is compared with the previous two search
algorithms. It is also compared with the R and RU algorithms in [12], which use
the grid pattern but do not consider collision avoidance explicitly. The algorithms in
[193], including RW, LF, and LFP, are also compared because they are also random
algorithms for unknown areas with collision avoidance rules. The RUN algorithm in
[12] is not compared, as discussed in the previous chapter. Algorithms LF and LFP
use a Lévy distribution to generate the step length and a uniform distribution to generate
the moving direction. The way to generate the Lévy distribution can be seen in Section
5.1.4 of Chapter 5.
There is no collision avoidance scheme in R, RU, RW, and LF for collisions
between robots. In the simulation, the total number of collisions is counted, and
one more loop is added to the total number of loops for each collision. In reality,
one step may not be enough, considering that one vertex may be chosen by robots from
all its sensing neighbors. So this is a relatively reasonable way to calculate the total
Figure 6.2: The route of robot 2 in a simulation with 3 robots
Figure 6.3: The route of robot 3 in a simulation with 3 robots
Figure 6.4: The combined routes in a simulation with 3 robots
Figure 6.5: All algorithms except R for 1-13 robots
search time.
To compare all methods in the same environment, all eight algorithms are
simulated in the 900-vertex area with nine targets. The results in the figures are
the averages of at least 100 simulations. Different communication methods are
used in [12] and [193], but all the factors need to be the same. So equation 6.2 is
used as the stop judgment, the ad-hoc mode with rc is used for communication, and
the broadcasting mode is chosen for sending the positions of found targets in all the
algorithms. [193] claimed that the LF and LFP algorithms would perform better when
targets are sparsely and randomly distributed. Therefore, the nine evenly separated
targets are used. Thus, the proposed algorithm is compared to the optimal performance of these two algorithms, which makes the effectiveness of the proposed algorithm
more convincing. The two slowest algorithms, R and RW, were moved to a separate
graph, Figure 5.17 in Chapter 5. So here we draw the other seven algorithms in
Figure 6.5 to see the effect of the proposed algorithm, and compare the three search
algorithms in Figure 6.6 to show the small differences.
Figures 6.5 and 5.17 illustrate that the proposed algorithm 6.1 in situation
I with judgment rule 6.2 is always the best under the strict requirement. If the
algorithms in Chapters 4, 5 and 6 are labeled S1, S2, and S3, the general ranking
of the search time of the eight algorithms is S3<S2<S1<LFP<RU<LF<RW<R. Generally speaking, as the number of robots increased, the search time of all algorithms
decreased drastically at the beginning but more slowly later, as more time was spent on
Figure 6.6: Time for 3 search algorithms
other things instead of exploring new vertices only. Specifically, in Algorithms S2
and S1, robots need to spend time visiting repeated vertices. For the RU and
RW algorithms, robots have to deal with extra steps caused by collision avoidance.
Algorithm S3 requires robots to spend time finding the route to leave the area
of repeated vertices. In the LF, RW, and LFP algorithms, robots meet
each other more frequently, which breaks the generated route and creates a new
move to avoid collisions.
Figure 6.6 demonstrates that the second search algorithm only improved the efficiency
in a very limited range, because robots do not meet each other frequently,
especially in a larger area, so the potential field algorithm did not have enough
chances to show its effect. However, the third proposed algorithm largely decreased
the search time, to less than half of that of the second search algorithm, due to the breadth-first
search algorithm reducing repeated visitation. So the proposed algorithm
is effective in situation I. In situation II, only the second search algorithm is compared
with the third search algorithm, in Figure 6.7, as Algorithm S2 is the best of
the seven algorithms to compare. It is clear that Algorithm S3 decreased the search
time significantly and is the best search algorithm.
Among the three search algorithms, the first is the simplest, with calculations such as
set operations and generating random numbers. The robot only needs to keep a map
of the visited area, so this method is suitable for robots with limited calculation speed
and memory. However, it takes the longest time. The second algorithm adds
distance calculations on top of the first. Nevertheless, it only adds some calculation
of the Euclidean distance when using the repulsive force. The repulsive force only
works when robots are close to each other, so the improvement is small. The third
algorithm needs a considerable amount of calculation, especially when the known
map is huge. However, a time-consuming calculation will not affect the normal
Figure 6.7: Search time for search algorithm 2 and 3
calculation of other robots, so it will not change the waiting time relative to Algorithm
S1. Moreover, a relatively larger waiting time allows more robots to finish path
planning and move earlier. This algorithm outperforms all the other algorithms and should
be used on robots with high computation ability and large memory.
6.1.5 Section Summary
This section presented a decentralized random search algorithm with the breadth-first search algorithm for unknown 2D areas. The movements of the robots are synchronized and based on a grid pattern. When robots have unvisited sensing neighbors
to visit, they choose one of them randomly, as in the first search algorithm. However, if the surrounding vertices have all been searched, the robot uses the breadth-first
search algorithm to find the shortest route to the nearest unvisited area using its
explored map of the area. It is proved that the algorithm converges with probability
1, and the proposed algorithm was compared with seven other related algorithms in
simulations. The results showed that the proposed algorithm is always the best and
can decrease the search time greatly.
6.2 3D Tasks
This section presents a random search algorithm in 3D with the breadth-first search
algorithm. The control algorithm is decentralized, based only on local information,
and robots search the area with obstacles via a cubic lattice. The algorithm is the
same as the one in 2D. It considers collision avoidance as well as the real physical
constraints and conditions of all parameters. Therefore, it is a practical algorithm
for real robots. Robots choose vertices in sequence and move between vertices along
the edges of the lattice based on local information. The convergence of the algorithm is
proved, and the effectiveness is shown in the comparison with four other algorithms
in both situations I and II.
6.2.1 Problem Statement
The problem is the same as that in Chapter 4, so the definitions and assumptions
below are stated without detailed descriptions.
In the search task, a three-dimensional area A ⊂ R3 with a limited number of
obstacles O1 to Ol is used. m robots, labeled rob1 to robm, are used to search for
n static targets t1 to tn through a cubic grid with side length a. pi(k) is the
coordinate of robi at time k. Let T be the set of coordinates of the targets and Tki be
the set of coordinates of the targets known by robi. The initial setting is shown in
Figure 4.9.
Assumption 6.2.1. Area A is a bounded, connected and Lebesgue measurable set.
The obstacles Oi ⊂ R3 are non-overlapping, closed, bounded and linearly connected
sets for any i > 0. They are both static and unknown to robots.
Definition 6.2.1. Let O := ∪ Oi for all i > 0. Then Ad := A \ O is the area that
needs to be detected.
Assumption 6.2.2. Initially, all robots have the same grid with the same coordinate
system and are allocated at different vertices along the border of the search area. In
the simulation, robots are manually set at some vertices of the C grid.
Assumption 6.2.3. All sensors and communication equipment that robots carry
have spherical ranges. The other radii mentioned are also spherical.
Definition 6.2.2. Robots have a swing radius rrob and a total error e. Then a safety
radius to avoid collision is rsaf e ≥ rrob + e (see Figure 2.11).
Assumption 6.2.4. Wpass ≥ a√3 + 2rsafe (see Figure 2.13).
Assumption 6.2.5. rso ≥ a + rsaf e (see Figure 2.11).
Assumption 6.2.6. rc ≥ 2a + e (see Figure 2.11).
Assumption 6.2.7. rst ≥ 2a/√3 + rsafe − rrob based on Table 2.3.
Assumption 6.2.8. The task area is vast, so a is much smaller than the length,
the width and the height of area A. Thus, there are no targets between the boundary
and the frontiers of the target sensing range, which means the boundary effect can be
ignored.
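The inequalities in Definition 6.2.2 and Assumptions 6.2.4 to 6.2.7 can be collected into a small parameter checker (a sketch; the variable names are hypothetical transcriptions of the symbols above, not part of the thesis):

```python
import math

def check_grid_parameters(a, r_rob, e, r_safe, W_pass, r_so, r_c, r_st):
    """Return the list of violated assumptions (empty if the
    parameter set is consistent with the cubic-grid requirements)."""
    violated = []
    if not r_safe >= r_rob + e:                            # Definition 6.2.2
        violated.append("r_safe >= r_rob + e")
    if not W_pass >= a * math.sqrt(3) + 2 * r_safe:        # Assumption 6.2.4
        violated.append("W_pass >= a*sqrt(3) + 2*r_safe")
    if not r_so >= a + r_safe:                             # Assumption 6.2.5
        violated.append("r_so >= a + r_safe")
    if not r_c >= 2 * a + e:                               # Assumption 6.2.6
        violated.append("r_c >= 2a + e")
    if not r_st >= 2 * a / math.sqrt(3) + r_safe - r_rob:  # Assumption 6.2.7
        violated.append("r_st >= 2a/sqrt(3) + r_safe - r_rob")
    return violated
```

Such a check is useful when choosing the side length a for a given robot, since a that is too large violates 6.2.6 and 6.2.7 first.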
Va is the set of all the accessible vertices. Ns,i(k) represents the set of sensing
neighbors around robi. Within Ns,i(k), Vv,i(k) and Vu,i(k) denote the sets of visited
and unvisited vertices, and the choices from these two sets are cv and cu
respectively.
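On the cubic lattice, the sensing neighbors of a vertex are its six face-adjacent lattice points restricted to the accessible set, and they split into the visited and unvisited sets (a sketch with hypothetical function names; the thesis's grid bookkeeping may differ):

```python
def sensing_neighbors_3d(p, a, accessible):
    """Six face-adjacent vertices of p = (x, y, z) on a cubic grid of
    side a, restricted to the accessible set Va (obstacle-free)."""
    x, y, z = p
    offsets = [(a, 0, 0), (-a, 0, 0), (0, a, 0), (0, -a, 0), (0, 0, a), (0, 0, -a)]
    return [(x + dx, y + dy, z + dz) for dx, dy, dz in offsets
            if (x + dx, y + dy, z + dz) in accessible]

def split_by_visitation(neighbors, visited):
    """Partition Ns,i(k) into Vv,i(k) (visited) and Vu,i(k) (unvisited)."""
    V_v = [v for v in neighbors if v in visited]
    V_u = [v for v in neighbors if v not in visited]
    return V_v, V_u
```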
For the breadth-first search algorithm, the following definitions are used for generating the shortest route to the nearest unvisited vertex.
Definition 6.2.3. The route is represented as a set of vertices Vr,i for robi . Then
|Vr,i (k)| is the total number of steps in that route. Specifically, if the route has not
been fully generated, |Vr,i (k)| = 0. Let vr,i (j) be the jth step in the route and cr,i
be the choice from the route Vr,i (k). The number of steps left in the route is then
|Vr,i (k)| − j.
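Definition 6.2.3 can be mirrored by a small bookkeeping structure (a sketch with hypothetical names):

```python
class Route:
    """Route Vr,i as an ordered list of vertices; j is the index of the
    current step, so the steps left are |Vr,i(k)| - j (Definition 6.2.3)."""
    def __init__(self, vertices):
        self.vertices = list(vertices)
        self.j = 0

    def __len__(self):            # |Vr,i(k)|; 0 if not fully generated
        return len(self.vertices)

    def next_choice(self):        # c_r,i: next vertex to move to
        c = self.vertices[self.j]
        self.j += 1
        return c

    def steps_left(self):         # |Vr,i(k)| - j
        return len(self.vertices) - self.j
```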
6.2.2 Procedure and Algorithm
The general procedure of the algorithm is the same as that of the 2D algorithm. The
differences are the dimension of the parameters and the way of deciding the sequence of
choices, which is in Section 4.2.2. So the same flow charts 4.2 and 4.3 apply
to situations I and II. Compared to the first search algorithm, this algorithm helps
robots escape from a visited area by looking for the route with the minimum number of steps
to the nearest unvisited vertices. The way the breadth-first search algorithm
is used can be seen in Section 6.1.2. The random algorithm with the breadth-first search
algorithm can be summarized as
pi(k + 1) =
    cu, with prob. 1/|Vu,i(k)|,  if |Vu,i(k)| ≠ 0,
    cr,i,                        if (|Vu,i(k)| = 0) & (|Vv,i(k)| ≠ 0) & (|Vr,i(k)| ≠ 0),     (6.4)
    pi(k),                       if (|Ns,i(k)| = 0) | (|Vr,i(k)| = 0).
The stop strategy for this choosing rule is the same as that in Chapter 4.
In situation I: pi(k + 1) = pi(k), if |Tki| = |T|.     (6.5)
In situation II: pi(k + 1) = pi(k), if ∀vi ∈ Va, |Vu,i(k)| = 0.     (6.6)
Theorem 6.2.1. Suppose that all the above assumptions hold and that search algorithm
6.4 and the judgment strategy 6.5 or 6.6 are used for situation I or situation II, respectively. Then,
for any number of robots, there exists a time k0 > 0 such that all targets are detected
with probability 1.
Proof. Algorithm 6.4 with a stop strategy forms an absorbing Markov chain, which
includes both transient states and absorbing states. The transient states are all the
accessible vertices of the grid that robots visit but at which they do not stop; the absorbing states are
the vertices at which the robots stop. Applying Algorithm 6.4, robots select unvisited
neighbor vertices to go to. If all sensing neighbors are visited before the targets are found,
robots look for the shortest route to the nearest unvisited vertex and only stop
when all targets are found or all vertices are detected. For 6.5, absorbing states are
reached when all robots know |Tki| = |T|. For 6.6, absorbing states are reached
when |Vu,i(k)| = 0 for every vertex. The assumption about Wpass guarantees that all
the accessible vertices can be visited, so absorbing states can be reached from any
initial state, namely with probability 1. This completes the proof of Theorem 6.2.1.
6.2.3 Simulation Results
A search task for situation I is simulated in MATLAB 2016a. According to
Assumption 6.2.8, targets are allocated within rst of the accessible vertices, as in
Figure 4.9. The parameters in Table 3.4 are used, so the time in the movement part of
the simulation is set to 5.75 s in total, including 4.5 s for the translation and 1.25 s
for the rotation. To check the performance of escaping the searched area via routes,
a small area of 8*8*8 vertices with nine targets is searched by three robots
using the control law, Algorithm 6.4, and the stop strategy, Algorithm 6.5, as shown
in Figures 6.8 to 6.11. All routes use arrows to show the directions of movements,
Figure 6.8: The route of robot 1 in a simulation with 3 robots
and the route of each robot has a different color. The figures for the individual robots
show that no route has many repeated parts, and in the combined routes
of the three robots, they tended to search their own areas without too much overlap.
This effect is more evident if these four figures are compared with Figures 5.30-5.33
in Chapter 5. By analyzing the route of each robot, no collisions were found, and
each target was within rst of a visited vertex. So the search task is finished with less
repetition than with the second search algorithm.
In situation I, the search algorithms in Chapters 4 and 5, namely Algorithm 4.4 and
Algorithms 5.13 & 5.14, are compared. The other algorithms include the best algorithm in [193],
which is the LFP algorithm, and the RU algorithm in [170]. Collision avoidance is
not stated clearly in RU, so it is assumed that it accepts collisions and that one collision
needs one more movement to resolve, although a longer time may be needed if there
are consecutive collisions or if multiple robots choose the same vertex from different
directions.
Simulations for searching nine separated targets in a larger area (12*12*12 vertices) by 1 to 13 robots are run 500 times to get the average search time. Based
on the simulation in a 3D area in Chapter 5, the search time of the LFP algorithm
is always much larger than the others because no map can be used to assist the
search. As the proposed algorithm should be faster than the other algorithms, LFP is
compared with RU only, in Table 5.4, as a reference. So the other three algorithms are
compared with the third search algorithm in Figure 6.12 to make the line graph
clear. As shown in the figure, the proposed algorithm was always the best of all these
five algorithms. Although the RU algorithm outperformed the other two search algorithms for 1 to 6 robots, because of its simplicity, its wide range of choices and the few
collisions that happened, it still had repeated routes, so it was slower than the proposed
algorithm in this chapter. However, compared to the simulation results in a 2D
Figure 6.9: The route of robot 2 in a simulation with 3 robots
Figure 6.10: The route of robot 3 in a simulation with 3 robots
Figure 6.11: The combined routes in a simulation with 3 robots
Figure 6.12: Compare 4 algorithms in situation I
area, the advantage of the proposed algorithm was not as clear. The possible reason is that, although the 3D area has a larger number of vertices than the 2D area
(12*12*12 vs. 30*30), the fewest number of steps needed to go across the body
diagonal of the cube is (12-1)*3=33 steps, whereas for the 2D area using a T
grid, the number of steps for going through the diagonal is (30-1)+(30-1)/2=43.5
steps, which is larger than that of the C grid. For the breadth-first search algorithm,
this means the total breadth of the 3D grid is smaller, so the effect of escaping from a
large visited area is less significant.
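The step counts quoted above can be reproduced directly (a sketch of the arithmetic, with n vertices per side):

```python
def c_grid_diagonal_steps(n):
    """Fewest steps across the body diagonal of an n*n*n cubic (C) grid:
    (n-1) lattice intervals along each of the three axes."""
    return (n - 1) * 3

def t_grid_diagonal_steps(n):
    """Steps across the diagonal of an n*n triangular (T) grid,
    as computed in the text: (n-1) + (n-1)/2."""
    return (n - 1) + (n - 1) / 2
```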
For situation II, robots need to have a map to know when all parts of the area have been
searched. Therefore, algorithms RW, LF, and LFP, which do not need a map, cannot
be used. So algorithms R and RU are compared with the three search
algorithms proposed in this report. In the results, the search time of algorithm R is too long,
so the two slow algorithms are shown in Table 6.1. The algorithms except algorithm
R are displayed in Figure 6.13. The table and figure show that the third search
algorithm was the best, and its search time was less than half of that of the second-best
search algorithm. Unlike situation I, situation II needed more steps. Thus, there
were more chances to utilize the breadth-first search algorithm to boost the search
speed.
The general trend of all the algorithms is that the search time had a dramatic
Figure 6.13: Compare 4 algorithms in situation II
No.   R       RU          No.   R       RU
1     79447   24036       8     12459   3562
2     47067   13584       9     11321   3094
3     32892   8689        10    10957   2825
4     24373   6445        11    10748   2694
5     19840   5606        12    8492    2547
6     16840   4334        13    8791    2182
7     13227   4032
Table 6.1: Search time for algorithm R and RU in situation II
drop followed by a gentle descent as the number of robots increased. It can be
predicted that, if the number of robots increases further, the search time of the
proposed algorithm may have a stable period or even a moderate rise, because the
time in the 'wait' step will increase and may take up a larger proportion
of the total time. Also, the number of steps will decrease more slowly. Therefore, when
choosing a suitable number of robots in applications, the balance between the
merit of the decreased search time and the increased expense on robots should be
considered. The search time is also affected by unknown factors, including the initial
positions of the robots and the shape and the size of the area. If the size of the area is
given, simulations like those in this report can be used to estimate a rough
range for the number of robots in a general situation.
6.2.4 Section Summary
This section proposed a grid-based decentralized algorithm for search tasks in 3D
areas without collisions. The algorithm uses a combination of random choice and
the breadth-first search algorithm to decrease the amount of repeated visitation.
The convergence of the algorithm is proved, and it is compared to all the related
algorithms in both situation I and situation II. The results showed that the proposed
algorithm is always the best in the different situations.
6.3 Summary
This chapter proposed the third search algorithm for mobile robots, which is suitable
for both 2D and 3D areas with small differences in the dimension of the parameters and
the setting of the selection sequence. When a robot has unvisited neighbors to visit,
it uses the random algorithm to select an unvisited one, as in the first and second
search algorithms. When the robot is in a visited section, the breadth-first search
algorithm is employed to find the nearest unvisited vertex in terms of the number of
steps. Then a path is planned for the robot to go to the unvisited vertex, thus reducing
the repetition and the search time. Robots move via a grid pattern to select vertices in
sequence and avoid collisions in the unknown area. In both 2D and 3D areas, the
proposed algorithm was compared with all the other related algorithms, and the comparison showed that
the proposed algorithm in this chapter is the best in the different situations.
Chapter 7
Conclusion
This report studied the problems of complete coverage and search by a team of
robots with collision avoidance. The task area in these two problems is arbitrary
and unknown, with obstacles inside. One optimal decentralized complete coverage
algorithm and three decentralized random-based search algorithms were proposed.
All the algorithms are suitable for both 2D and 3D areas and utilize a grid pattern
to simplify the representation of the area and the calculations. In each step, a robot
moves between vertices along the edges of the grid, with limitations on the sensing ranges
and the communication range. Robots are always within the task area and do not
need to go through the boundary. All robots are synchronized, so robot failures
can be detected. With the sequence, the grid pattern and communication, collisions
are avoided while robots are making choices. Chapter 2 illustrated how to select the
grid with the minimum number of vertices to cover an area based on the parameters
of the robots. Tables with selection criteria were provided, with examples of real robots.
Using the selected grid, multiple mobile robotic sensors can employ the optimal
self-deployment algorithm in Chapter 3 to achieve complete coverage of a static or
expanding area. If the robots are fed into certain points of the area continuously
when the vertices are vacant, the algorithm, which drives robots to the furthest available vertices, ensures that the least number of feeding steps is needed. Then, Chapters
4, 5 and 6 proposed three decentralized random algorithms for search tasks. In the
search problem, the number of targets can be known or unknown, and a sequence of
selection is given based on the relative positions of the robots. The first search algorithm
chooses a random unvisited neighbor to go to if there is any. Otherwise, it randomly
selects a visited vertex. If no accessible neighbors are available, it stays still in
that step. In the second algorithm, a repulsive force is applied to robots which
are close to each other to disperse them to different areas. Thus, there
are fewer repeated vertices. In the third algorithm, the repetition is further reduced
by adding the breadth-first search algorithm when a robot is surrounded by visited
vertices. The shortest route is planned to the nearest unvisited vertices based on
the explored map of the robot.
In Chapter 5, the second search algorithm was further discussed, with an example
of generating the formula for the search time based on the size of the area and the number
of robots. For the coverage with the loose requirement, the factors related to the search
time were discussed based on simulations and analyses. The convergence of each
algorithm was proved with probability one, and the algorithms were simulated in MATLAB
to discover their features and compared to other related algorithms to show their
effectiveness. Experiments on Pioneer 3-DX robots were used to test the second
search algorithm, which verified the algorithm.
7.1 Future Work
1. In the initialization of the algorithms, robots need to be manually set to the vertices of grids, or need a reference point to build their coordinate system and
grid. Based on some consensus problems without collision avoidance [11, 170],
future work should consider finding an autonomous collision-free method for
multiple robots to form the same coordinate system.
2. The breadth-first search algorithm in Chapter 6 uses a considerable amount of
time, memory and energy. As robots need to be cheap, with limited resources,
other graph search algorithms should be developed.
3. The obstacles in this report are static, which limits the usage of the proposed algorithms. So another research direction could be developing a collision-avoidance
search algorithm with moving or deforming obstacles [121, 78, 173, 118, 114,
174, 175]. Then the algorithm would not be limited to the 'wait and choose'
scheme.
4. This report is restricted to the case of static targets. However, when the
targets are moving, the proposed algorithms may not be the best. So search
problems for moving targets, like those in [132, 131], need to be considered.
5. The 3D algorithms in this report are not tested on real robots. Therefore,
experiments on UUVs or UAVs are required to verify the algorithms.
6. Formation control of multiple robots is also a related interesting area. A
possible topic can be adding collision avoidance to the formation control in 3D
areas based on [176, 177, 13].
7. With formation control algorithms, the sweep coverage or patrolling in a region
with changing width could be considered as in [43, 111, 178].
8. The odometry error and other errors should be discussed further. As the simulation and experiment environments were not large enough and the odometry
error is related to the distance traveled, the current error did not affect the results seriously. However, in applications in a vast area, it may be a problem.
A possible direction may be adding buoyage for some UUV applications.
9. In this report, all robots have the same ability. However, in real search and
rescue tasks, a robot team consists of robots with different functions. Thus,
cooperation between heterogeneous robots will be a realistic research topic
[183, 76, 35, 110].
10. The communication in the proposed algorithms is ideal; however, in real environments such as a battlefield, there may be interference and limitations on
the bandwidth. Thus, communication delay or data loss could be considered,
with estimation of certain information [172, 146, 143].
11. The movement of the robots in the algorithms is based on the ability of the
Pioneer 3-DX robot. However, some robots may not have the capability to
turn through a particular angle while staying at the same position. Therefore, the
non-holonomic models in [103, 120, 165] and the dynamics of vehicles in a 3D
area (see, e.g., [48]) should be researched.
Bibliography
[1] N. Adde. Locust swarm. Sea Power, 58(4):51–53, 2015.
[2] K. Akkaya and A. Newell. Self-deployment of sensors for maximized coverage in underwater acoustic sensor networks. Computer Communications,
32(7):1233–1244, 2009.
[3] S. Alam and Z. J. Haas. Coverage and connectivity in three-dimensional underwater sensor networks. Wireless Communications and Mobile Computing,
8(8):995–1009, 2008.
[4] S. N. Alam and Z. J. Haas. Coverage and connectivity in three-dimensional
networks with random node deployment. Ad Hoc Networks, 34:157–169, 2015.
[5] A. I. N. Alshbatat. Behavior-based approach for the detection of land mines
utilizing off the shelf low cost autonomous robot. IAES International Journal
of Robotics and Automation, 2(3):83, 2013.
[6] B. D. Anderson, C. Yu, B. Fidan, and J. M. Hendrickx. Control and information architectures for formations. In IEEE International Conference on
Control Applications, pages 1127–1138. IEEE, 2006.
[7] F. Aurenhammer. Voronoi diagrams: a survey of a fundamental geometric data
structure. ACM Computing Surveys (CSUR), 23(3):345–405, 1991.
[8] X. Bai, S. Kumar, D. Xuan, Z. Yun, and T. H. Lai. Deploying wireless sensors
to achieve both coverage and connectivity. In Proceedings of the 7th ACM
international symposium on Mobile ad hoc networking and computing, pages
131–142. ACM, 2006.
[9] A. Bar-Noy, T. Brown, M. P. Johnson, and O. Liu. Cheap or flexible sensor
coverage. In DCOSS, pages 245–258. Springer, 2009.
[10] A. Baranzadeh and A. V. Savkin. A distributed algorithm for grid-based search
by a multi-robot system. In 2015 10th Asian Control Conference (ASCC),
pages 1–6, 2015.
[11] A. Baranzadeh. A decentralized control algorithm for target search by a multirobot team. In Australasian Conference on Robotics and Automation, 2013.
[12] A. Baranzadeh. Decentralized autonomous navigation strategies for multirobot search and rescue. arXiv preprint arXiv:1605.04368, 2016.
[13] A. Baranzadeh and V. Nazarzehi. A decentralized formation building algorithm with obstacle avoidance for multi-robot systems. In Robotics and
Biomimetics (ROBIO), 2015 IEEE International Conference on, pages 2513–
2518. IEEE, 2015.
[14] A. Baranzadeh and A. V. Savkin. A distributed control algorithm for area
search by a multi-robot team. Robotica, 35(6):1452–1472, 2017.
[15] S. Barr, B. Liu, and J. Wang. Underwater sensor barriers with auction algorithms. In Computer Communications and Networks, 2009. ICCCN 2009.
Proceedings of 18th Internatonal Conference on, pages 1–6. IEEE, 2009.
[16] S. Barr, J. Wang, and B. Liu. An efficient method for constructing underwater
sensor barriers. Journal of Communications, 6(5):370–383, 2011.
[17] M. A. Batalin and G. Sukhatme. The analysis of an efficient algorithm for
robot coverage and exploration based on sensor network deployment. In Proceedings of the IEEE International Conference on Robotics and Automation,
pages 3478–3485. IEEE, 2005.
[18] J. L. Baxter, E. Burke, J. M. Garibaldi, and M. Norman. Multi-robot search
and rescue: A potential field based approach, pages 9–16. Springer, 2007.
[19] S. Benhamou. How many animals really do the Lévy walk? Ecology,
88(8):1962–1969, 2007.
[20] C. Berger. Toward rich geometric map for slam: online detection of planes in
2D lidar. Journal of Automation Mobile Robotics and Intelligent Systems, 7,
2013.
[21] Bluefin Robotics. Maritime-Bluefin-21-UUV-01-3. https://gdmissionsystems.com/-/media/General-Dynamics/Bluefin/PDF/Maritime-Bluefin-21-UUV-01-3.ashx?la=en&hash=B1073240AF49EDBF95FE624F97AA6C16F7863472, 2017. [Online; accessed 20-August-2017].
[22] A. Boukerche and X. Fei. A coverage-preserving scheme for wireless sensor
network with irregular sensing range. Ad Hoc Networks, 5(8):1303–1316, 2007.
[23] A. Breitenmoser and A. Martinoli. On combining multi-robot coverage and
reciprocal collision avoidance, pages 49–64. Springer Japan, Tokyo, 2016.
[24] A. Brooks, A. Makarenko, T. Kaupp, S. Williams, and H. Durrant-Whyte.
Implementation of an indoor active sensor network. Experimental Robotics
IX, pages 397–406, 2006.
[25] W. Burgard, M. Moors, D. Fox, R. Simmons, and S. Thrun. Collaborative
multi-robot exploration. In Proceedings of the IEEE International Conference
on Robotics and Automation, volume 1, pages 476–481. IEEE, 2000.
[26] Z. J. Butler. Distributed coverage of rectilinear environments. Thesis, Robotics
Institute, 2000.
[27] L. Cai, A. P. Espinosa, D. Pranantha, and A. D. Gloria. Multi-robot search
and rescue team. In 2011 IEEE International Symposium on Safety, Security,
and Rescue Robotics, pages 296–301, 2011.
[28] Y. Cai and S. X. Yang. An improved PSO-based approach with dynamic parameter tuning for cooperative multi-robot target searching in complex unknown
environments. International Journal of Control, 86(10):1720–1732, 2013.
[29] D. Calitoiu. New search algorithm for randomly located objects: A noncooperative agent based approach. In 2009 IEEE Symposium on Computational Intelligence for Security and Defense Applications, pages 1–6, 2009.
[30] D. W. Casbeer, R. W. Beard, T. W. McLain, S.-M. Li, and R. K. Mehra. Forest
fire monitoring with multiple small UAVs. In American Control Conference,
2005. Proceedings of the 2005, pages 3530–3535. IEEE, 2005.
[31] R. Cassinis, G. Bianco, A. Cavagnini, and P. Ransenigo. Strategies for navigation of robot swarms to be used in landmines detection. In Advanced Mobile
Robots, 1999. (Eurobot ’99) 1999 Third European Workshop on, pages 211–
218, 1999.
[32] R. Cassinis, G. Bianco, A. Cavagnini, and P. Ransenigo. Landmines detection
methods using swarms of simple robots. In Proceedings of the 6th International
Conference on Intelligent Autonomous Systems, pages 212–218, 2000.
[33] Ö. Çayırpunar, V. Gazi, B. Tavlı, U. Witkowski, and J. Penders. Experimental study on the effects of communication on cooperative search in
complex environments. In Y. Baudoin and M. Habib, editors, Using robots in
hazardous environments, pages 535–557. Woodhead Publishing, Oxford, December 2010.
[34] P. Chand and D. A. Carnegie. A two-tiered global path planning strategy for
limited memory mobile robots. Robotics and Autonomous Systems, 60(2):309–
321, 2012.
[35] P. Chand and D. A. Carnegie. Mapping and exploration in a hierarchical
heterogeneous multi-robot system using limited capability robots. Robotics
and autonomous Systems, 61(6):565–579, 2013.
[36] C. T. Chang, C. Y. Chang, S. Zhao, J. C. Chen, and T. L. Wang. SRA: A sensing radius adaptation mechanism for maximizing network lifetime in WSNs.
IEEE Transactions on Vehicular Technology, 65(12):9817–9833, Dec 2016.
[37] C. Y. Chang, C. T. Chang, Y. C. Chen, and H. R. Chang. Obstacle-resistant
deployment algorithms for wireless sensor networks. IEEE Transactions on
Vehicular Technology, 58(6):2925–2941, 2009.
[38] L. Chang-Yong and Y. Xin. Evolutionary programming using mutations based
on the Lévy probability distribution. IEEE Transactions on Evolutionary
Computation, 8(1):1–13, 2004.
[39] T. M. Cheng and A. V. Savkin. A distributed self-deployment algorithm for the
coverage of mobile wireless sensor networks. IEEE Communications Letters,
13(11):877–879, 2009.
[40] T. M. Cheng and A. V. Savkin. Decentralized control for mobile robotic sensor
network self-deployment: Barrier and sweep coverage problems. Robotica,
29(2):283–294, 2011.
[41] T. M. Cheng and A. V. Savkin. Self-deployment of mobile robotic sensor
networks for multilevel barrier coverage. Robotica, 30(04):661–669, 2012.
[42] T. M. Cheng and A. V. Savkin. Decentralized control of mobile sensor networks
for asymptotically optimal blanket coverage between two boundaries. IEEE
Transactions on Industrial Informatics, 9(1):365–376, 2013.
[43] T. M. Cheng, A. V. Savkin, and F. Javed. Decentralized control of a group
of mobile robots for deployment in sweep coverage. Robotics and Autonomous
Systems, 59(7):497–507, 2011.
[44] M. ChinnaDurai, S. MohanKumar, and S. Sharmila. Underwater wireless
sensor networks. Compusoft, 4(7):1899, 2015.
[45] K. Choi, S. Yoo, J. Park, and Y. Choi. Adaptive formation control in absence
of leader’s velocity information. IET Control Theory & Applications, 4(4):521–
528, 2010.
[46] R. Cipolleschi, M. Giusto, A. Q. Li, and F. Amigoni. Semantically-informed
coordinated multirobot exploration of relevant areas in search and rescue settings. In Mobile Robots (ECMR), 2013 European Conference on, pages 216–
221. IEEE, 2013.
[47] J. Clark and R. Fierro. Cooperative hybrid control of robotic sensors for
perimeter detection and tracking. In American Control Conference, 2005.
Proceedings of the 2005, pages 3500–3505. IEEE, 2005.
[48] L. Consolini, F. Morbidi, D. Prattichizzo, and M. Tosques. On a class of hierarchical formations of unicycles and their internal dynamics. IEEE Transactions
on Automatic Control, 57(4):845–859, 2012.
[49] J. Cortés and F. Bullo. Coordination and geometric optimization via distributed
dynamical systems. SIAM Journal on Control and Optimization, 44(5):1543–
1574, 2005.
[50] D. Culler and J. Long. A prototype smart materials warehouse application
implemented using custom mobile robots and open source vision technology
developed using EmguCV. Procedia Manufacturing, 5:1092–1106, 2016.
[51] A. Datta and S. Soundaralakshmi. On-line path planning in an unknown
polygonal environment. Information Sciences, 164(1):89–111, 2004.
[52] D. Dedousis and V. Kalogeraki. Complete coverage path planning for arbitrary number of unmanned aerial vehicles. In Proceedings of the 9th ACM
International Conference on PErvasive Technologies Related to Assistive Environments, page 5, Corfu, Island, Greece, 2016. ACM.
[53] M. Defoort, T. Floquet, A. Kokosy, and W. Perruquetti. Sliding-mode formation control for cooperative autonomous mobile robots. IEEE Transactions
on Industrial Electronics, 55(11):3944–3953, 2008.
[54] B. Donald, P. Xavier, J. Canny, and J. Reif. Kinodynamic motion planning.
Journal of the ACM (JACM), 40(5):1048–1066, 1993.
[55] H. Dong, K. Zhang, and L. Zhu. An algorithm of 3D directional sensor network coverage enhancing based on artificial fish-swarm optimization. In The
2012 International Workshop on Microwave and Millimeter Wave Circuits and
System Technology, pages 1–4, 2012.
[56] X. Du, H. Li, and W. Li. A new coverage algorithm for directional movement
wireless sensor networks. In 2015 4th International Conference on Computer
Science and Network Technology (ICCSNT), volume 01, pages 1194–1199,
2015.
[57] A. Fagiolini, M. Pellinacci, G. Valenti, G. Dini, and A. Bicchi. Consensus-based distributed intrusion detection for multi-robot systems. In 2008
IEEE International Conference on Robotics and Automation, pages 120–127,
Pasadena, CA, USA, 2008.
[58] G. Flierl, D. Grünbaum, S. Levin, and D. Olson. From individuals to aggregations: the interplay between behavior and physics. Journal of Theoretical
Biology, 196(4):397–454, 1999.
[59] D. Fox, W. Burgard, H. Kruppa, and S. Thrun. A probabilistic approach
to collaborative multi-robot localization. Autonomous Robots, 8(3):325–344,
2000.
[60] D. Fox, W. Burgard, and S. Thrun. The dynamic window approach to collision
avoidance. IEEE Robotics & Automation Magazine, 4(1):23–33, 1997.
[61] A. Fujimori, M. Teramoto, P. N. Nikiforuk, and M. M. Gupta. Cooperative
collision avoidance between multiple mobile robots. Journal of Robotic Systems, 17(7):347–363, 2000.
[62] D. W. Gage. Command control for many-robot systems. Report, DTIC Document, 1992.
[63] M. Ghaemi, Z. Zabihinpour, and Y. Asgari. Computer simulation study of
the Lévy flight process. Physica A: Statistical Mechanics and its Applications,
388(8):1509–1514, 2009.
[64] A. Ghosh and S. K. Das. Coverage and connectivity issues in wireless sensor
networks. Mobile, Wireless, and Sensor Networks: Technology, Applications,
and Future Directions, pages 221–256, 2006.
[65] Y. Gu, D. Bozdağ, R. W. Brewer, and E. Ekici. Data harvesting with mobile
elements in wireless sensor networks. Computer Networks, 50(17):3449–3465,
2006.
[66] M. Guarnieri, R. Kurazume, H. Masuda, T. Inoh, K. Takita, P. Debenest, R.
Hodoshima, E. Fukushima, and S. Hirose. Helios system: A team of tracked
robots for special urban search and rescue operations. In 2009 IEEE/RSJ
International Conference on Intelligent Robots and Systems, pages 2795–2800,
2009.
[67] J. Guo, Z. Lin, M. Cao, and G. Yan. Adaptive leader-follower formation control
for autonomous mobile robots. In American Control Conference (ACC), 2010,
pages 6822–6827. IEEE, 2010.
[68] T. C. Hales. A proof of the Kepler conjecture. Annals of Mathematics, 162(3):1065–1185, 2005.
[69] T. C. Hales. A proof of the Kepler conjecture. Annals of Mathematics, 162(3):1065–1185, 2005.
[70] I. A. Hameed. Motion planning for autonomous landmine detection and clearance robots. In Recent Advances in Robotics and Sensor Technology for Humanitarian Demining and Counter-IEDs (RST), International Workshop on,
pages 1–5. IEEE, 2016.
[71] J. M. Hereford and M. A. Siebold. Bio-inspired search strategies for robot
swarms. In Swarm Robotics from Biology to Robotics. InTech, 2010.
[72] J. Hess, M. Beinhofer, and W. Burgard. A probabilistic approach to high-confidence cleaning guarantees for low-cost cleaning robots. In 2014 IEEE
International Conference on Robotics and Automation (ICRA), pages 5600–
5605, Hong Kong, China, 2014.
[73] G. M. Hoffmann and C. J. Tomlin. Decentralized cooperative collision avoidance for acceleration constrained vehicles. In 47th IEEE Conference on Decision and Control, pages 4357–4363. IEEE, 2008.
[74] Y. Hong, J. Hu, and L. Gao. Tracking control for multi-agent consensus with
an active leader and variable topology. Automatica, 42(7):1177–1182, 2006.
[75] S. P. Hou and C. C. Cheah. Dynamic compound shape control of robot swarm.
IET Control Theory & Applications, 6(3):454–460, 2012.
[76] A. Howard, L. E. Parker, and G. S. Sukhatme. Experiments with a large
heterogeneous mobile robot team: Exploration, mapping, deployment and
detection. The International Journal of Robotics Research, 25(5-6):431–447,
2006.
[77] M. Hoy, A. S. Matveev, and A. V. Savkin. Collision free cooperative navigation
of multiple wheeled robots in unknown cluttered environments. Robotics and
Autonomous Systems, 60(10):1253–1266, 2012.
[78] M. Hoy, A. S. Matveev, and A. V. Savkin. Algorithms for collision-free navigation of mobile robots in complex cluttered environments: a survey. Robotica,
33(03):463–497, 2015.
[79] C. F. Huang, Y. C. Tseng, and L. C. Lo. The coverage problem in three-dimensional wireless sensor networks. Journal of Interconnection Networks,
8(03):209–227, 2007.
[80] M. Ishizuka and M. Aida. Performance study of node placement in sensor
networks. In Distributed Computing Systems Workshops, 2004. Proceedings.
24th International Conference on, pages 598–603. IEEE, 2004.
[81] A. Ismail, M. Elmogy, and H. ElBakry. Landmines detection using low-cost
multisensory mobile robot. Journal of Convergence Information Technology,
10(6):51, 2015.
[82] A. Jadbabaie, J. Lin, and A. S. Morse. Coordination of groups of mobile
autonomous agents using nearest neighbor rules. IEEE Transactions on Automatic Control, 48(6):988–1001, 2003.
[83] P. Jalalkamali. Distributed Tracking and Information-Driven Control for Mobile Sensor Networks. Dartmouth College, 2013.
[84] P. Jeongyeup, K. Chintalapudi, R. Govindan, J. Caffrey, and S. Masri. A
wireless sensor network for structural health monitoring: Performance and
experience. In The Second IEEE Workshop on Embedded Networked Sensors,
2005. EmNetS-II., pages 1–10, 2005.
[85] M. Keeter, D. Moore, R. Muller, E. Nieters, J. Flenner, S. E. Martonosi, A. L.
Bertozzi, A. G. Percus, and R. Lévy. Cooperative search with autonomous
vehicles in a 3D aquatic testbed. In American Control Conference (ACC),
2012, pages 3154–3160. IEEE, 2012.
[86] N. Kelland. Deep-water black box retrieval. Hydro International, 13(9), 2009.
[87] R. Kershner. The number of circles covering a set. American Journal of
Mathematics, pages 665–671, 1939.
[88] K. K. Khedo, R. Perseedoss, and A. Mungur. A wireless sensor network air
pollution monitoring system. arXiv preprint arXiv:1005.1737, 2010.
[89] A. Kim and R. M. Eustice. Active visual SLAM for robotic area coverage: theory and experiment. The International Journal of Robotics Research, 34(4-5):457–475, 2015.
[90] N. Y. Ko and R. G. Simmons. The lane-curvature method for local obstacle avoidance. In Proceedings of the IEEE/RSJ International Conference on
Intelligent Robots and Systems, volume 3, pages 1615–1621. IEEE, 1998.
[91] T. Krajník, V. Vonásek, D. Fišer, and J. Faigl. AR-Drone as a Platform for
Robotic Research and Education, pages 172–186. Springer Berlin Heidelberg,
Berlin, Heidelberg, 2011.
[92] L. Krick, M. E. Broucke, and B. A. Francis. Stabilisation of infinitesimally
rigid formations of multi-robot networks. International Journal of Control,
82(3):423–439, 2009.
[93] L. Krishnamurthy, R. Adler, P. Buonadonna, J. Chhabra, M. Flanigan, N.
Kushalnagar, L. Nachman, and M. Yarvis. Design and deployment of industrial sensor networks: experiences from a semiconductor plant and the north
sea. In Proceedings of the 3rd international conference on Embedded networked
sensor systems, pages 64–75. ACM, 2005.
[94] S. Kumar, N. H. Thach, and E. Suzuki. Understanding the behaviour
of reactive robots in a patrol task by analysing their trajectories. In
IEEE/WIC/ACM International Conference on Web Intelligence and Intelligent Agent Technology (WI-IAT), volume 2, pages 56–63, Toronto, ON,
Canada, 2010. IEEE.
[95] K. Lenac, A. Kitanov, I. Maurović, M. Dakulović, and I. Petrović. Fast active
SLAM for accurate and complete coverage mapping of unknown environments,
pages 415–428. Springer, 2016.
[96] J. Li. The benefit of being physically present: A survey of experimental works
comparing copresent robots, telepresent robots and virtual agents. International Journal of Human-Computer Studies, 77:23–37, 2015.
[97] X. Liang and Y. Xiao. Studying bio-inspired coalition formation of robots for
detecting intrusions using game theory. IEEE Transactions on Systems, Man,
and Cybernetics, Part B (Cybernetics), 40(3):683–693, 2010.
[98] K. W. Lin, M. H. Hsieh, and V. S. Tseng. A novel prediction-based strategy
for object tracking in sensor networks by mining seamless temporal movement
patterns. Expert Systems with Applications, 37(4):2799–2807, 2010.
[99] B. Liu, O. Dousse, P. Nain, and D. Towsley. Dynamic coverage of mobile
sensor networks. IEEE Transactions on Parallel and Distributed Systems,
24(2):301–311, 2013.
[100] X. Liu, P. Zhang, and G. Du. Hybrid adaptive impedance-leader-follower
control for multi-arm coordination manipulators. Industrial Robot: An International Journal, 43(1):112–120, 2016.
[101] Z. Lu, T. Pitchford, W. Li, and W. Wu. On the maximum directional target
coverage problem in wireless sensor networks. In Mobile Ad-hoc and Sensor
Networks (MSN), 2014 10th International Conference on, pages 74–79. IEEE,
2014.
[102] V. Malyavej and A. V. Savkin. The problem of optimal robust Kalman state
estimation via limited capacity digital communication channels. Systems &
Control Letters, 54(3):283–292, 2005.
[103] I. R. Manchester and A. V. Savkin. Circular-navigation-guidance law for precision missile/target engagements. Journal of Guidance, Control, and Dynamics, 29(2):314–320, 2006.
[104] O. Manolov, B. Iske, S. Noykov, J. Klahold, G. Georgiev, U. Witkowski, and U.
Rückert. GARD - an intelligent system for distributed exploration of landmine
fields simulated by a team of khepera robots. In Proceedings of the International Conference Automatics and Informatics03, volume 1, pages 199–202,
Sofia, Bulgaria, 6 - 8 October 2003.
[105] T. Manzoor, A. Munawar, and A. Muhammad. Visual servoing of a sensor
arm for mine detection robot marwa. In Robotics; Proceedings of ROBOTIK
2012; 7th German Conference on, pages 1–6. VDE, 2012.
[106] D. Marinakis and G. Dudek. Pure topological mapping in mobile robotics.
IEEE Transactions on Robotics, 26(6):1051–1064, 2010.
[107] A. Marjovi and L. Marques. Multi-robot olfactory search in structured environments. Robotics and Autonomous Systems, 59(11):867–881, 2011.
[108] A. Marjovi, J. G. Nunes, L. Marques, and A. de Almeida. Multi-robot exploration and fire searching. In Intelligent Robots and Systems, 2009. IROS
2009. IEEE/RSJ International Conference on, pages 1929–1934. IEEE, 2009.
[109] A. A. Masoud. A harmonic potential approach for simultaneous planning and
control of a generic uav platform. Journal of Intelligent & Robotic Systems,
65(1):153–173, 2012.
[110] A. Matos, A. Martins, A. Dias, B. Ferreira, J. M. Almeida, H. Ferreira, G.
Amaral, A. Figueiredo, R. Almeida, and F. Silva. Multiple robot operations
for maritime search and rescue in euRathlon 2015 competition. In OCEANS
2016 - Shanghai, pages 1–7, 2016.
[111] A. S. Matveev, K. S. Ovchinnikov, and A. V. Savkin. Reactive navigation of
a nonholonomic mobile robot for autonomous sweep coverage of a surface embedded in a 3D workspace. In 2016 35th Chinese Control Conference (CCC),
pages 8914–8919, 2016.
[112] A. S. Matveev and A. V. Savkin. The problem of state estimation via asynchronous communication channels with irregular transmission times. IEEE
Transactions on Automatic Control, 48(4):670–676, April 2003.
[113] A. S. Matveev and A. V. Savkin. The problem of LQG optimal control
via a limited capacity communication channel. Systems and Control Letters,
53(1):51–64, 2004.
[114] A. S. Matveev, M. C. Hoy, and A. V. Savkin. A globally converging algorithm for reactive robot navigation among moving and deforming obstacles.
Automatica, 54(Supplement C):292–304, 2015.
[115] A. S. Matveev and A. V. Savkin. Qualitative theory of hybrid dynamical systems. Control engineering Birkhäuser. Birkhäuser, Boston, 2000.
[116] A. S. Matveev and A. V. Savkin. An analogue of Shannon information theory for detection and stabilization via noisy discrete communication channels.
SIAM Journal on Control and Optimization, 46(4):1323–1367, 2007.
[117] A. S. Matveev and A. V. Savkin. Estimation and Control over Communication
Networks. Boston : Birkhäuser Boston, 2009.
[118] A. S. Matveev, A. V. Savkin, M. Hoy, and C. Wang, editors. Safe Robot
Navigation Among Moving and Steady Obstacles. Elsevier, January 2015.
[119] A. S. Matveev, H. Teimoori, and A. V. Savkin. A method for guidance and
control of an autonomous vehicle in problems of border patrolling and obstacle
avoidance. Automatica, 47(3):515–524, 2011.
[120] A. S. Matveev, H. Teimoori, and A. V. Savkin. Navigation of a unicycle-like
mobile robot for environmental extremum seeking. Automatica, 47(1):85–91,
2011.
[121] A. S. Matveev, C. Wang, and A. V. Savkin. Real-time navigation of mobile
robots in problems of border patrolling and avoiding collisions with moving
and deforming obstacles. Robotics and Autonomous Systems, 60(6):769–788,
2012.
[122] H. Mehrjerdi, J. Ghommam, and M. Saad. Nonlinear coordination control for
a group of mobile robots using a virtual structure. Mechatronics, 21(7):1147–
1155, 2011.
[123] H. Mohamadi, S. Salleh, and M. N. Razali. Heuristic methods to maximize
network lifetime in directional sensor networks with adjustable sensing ranges.
Journal of Network and Computer Applications, 46:26–35, 2014.
[124] F. Mohseni, A. Doustmohammadi, and M. B. Menhaj. Distributed model
predictive coverage control for decoupled mobile robots. Robotica, 35(4):922–
941, 2015.
[125] L. Moreau. Leaderless coordination via bidirectional and unidirectional time-dependent communication. In Proceedings of the 42nd IEEE Conference on
Decision and Control, volume 3, pages 3070–3075. IEEE, 2003.
[126] S. C. Nagavarapu, L. Vachhani, and A. Sinha. Multi-robot graph exploration
and map building with collision avoidance: A decentralized approach. Journal
of Intelligent & Robotic Systems, 83(3-4):503–523, 2016.
[127] H. Najjaran and A. A. Goldenberg. Landmine detection using an autonomous
terrain-scanning robot. Industrial Robot: An International Journal, 32(3):240–
247, 2005.
[128] V. Nazarzehi and A. V. Savkin. Decentralized control of mobile three-dimensional sensor networks for complete coverage self-deployment and forming specific shapes. In 2015 IEEE Conference on Control Applications (CCA),
pages 127–132, 2015.
[129] V. Nazarzehi, A. V. Savkin, and A. Baranzadeh. Distributed 3D dynamic
search coverage for mobile wireless sensor networks. IEEE Communications
Letters, 19(4):633–636, 2015.
[130] V. Nazarzehi. Decentralized control of three-dimensional mobile robotic sensor
networks. arXiv preprint arXiv:1606.00122, 2016.
[131] V. Nazarzehi and A. Baranzadeh. A decentralized grid-based random search
algorithm for locating targets in three dimensional environments by a mobile
robotic network. In Australasian Conference on Robotics and Automation.
ACRA2015, 2015.
[132] V. Nazarzehi and A. Baranzadeh. A distributed bio-inspired algorithm for
search of moving targets in three dimensional spaces. In 2015 IEEE International Conference on Robotics and Biomimetics (ROBIO), pages 2507–2512,
Zhuhai, China, 2015. IEEE.
[133] V. Nazarzehi and A. V. Savkin. Distributed self-deployment of mobile wireless
3D robotic sensor networks for complete sensing coverage and forming specific
shapes. Robotica, pages 1–18, 2017.
[134] J. D. Nicoud and M. K. Habib. The pemex-b autonomous demining robot:
perception and navigation strategies. In Proceedings of the 1995 IEEE/RSJ
International Conference on Intelligent Robots and Systems 95.’Human Robot
Interaction and Cooperative Robots, volume 1, pages 419–424, Pittsburgh,
USA, 1995. IEEE.
[135] S. G. Nurzaman, Y. Matsumoto, Y. Nakamura, S. Koizumi, and H. Ishiguro.
Yuragi-based adaptive searching behavior in mobile robot: From bacterial
chemotaxis to Lévy walk. In Robotics and Biomimetics, 2008. ROBIO 2008.
IEEE International Conference on, pages 806–811. IEEE, 2008.
[136] S. G. Nurzaman, Y. Matsumoto, Y. Nakamura, S. Koizumi, and H. Ishiguro.
Biologically inspired adaptive mobile robot search with and without gradient
sensing. In Intelligent Robots and Systems, 2009. IROS 2009. IEEE/RSJ
International Conference on, pages 142–147. IEEE, 2009.
[137] P. Ögren, M. Egerstedt, and X. Hu. A control Lyapunov function approach
to multi-agent coordination. In Proceedings of the 40th IEEE Conference on
Decision and Control, volume 2, pages 1150–1155. IEEE, 2001.
[138] P. Ögren, E. Fiorelli, and N. E. Leonard. Cooperative control of mobile sensor
networks: Adaptive gradient climbing in a distributed environment. IEEE
Transactions on Automatic Control, 49(8):1292–1302, 2004.
[139] A. Okubo. Dynamical aspects of animal grouping: swarms, schools, flocks,
and herds. Advances in biophysics, 22:1–94, 1986.
[140] R. Olfati-Saber and R. M. Murray. Consensus problems in networks of agents
with switching topology and time-delays. IEEE Transactions on Automatic
Control, 49(9):1520–1533, 2004.
[141] Parklayer. Multilevel car parking products. http://www.parklayer.com/
multilevel-car-parking-products.html#!prettyPhoto, 2017. [Online;
accessed 20-August-2017].
[142] K. Pathak and S. K. Agrawal. Planning and control of a nonholonomic unicycle
using ring shaped local potential fields. In American Control Conference, 2004.
Proceedings of the 2004, volume 3, pages 2368–2373. IEEE, 2004.
[143] P. N. Pathirana, N. Bulusu, A. V. Savkin, and S. Jha. Node localization using
mobile robots in delay-tolerant sensor networks. IEEE transactions on Mobile
Computing, 4(3):285–296, 2005.
[144] J. Peng and S. Akella. Coordinating multiple robots with kinodynamic constraints along specified paths. The International Journal of Robotics Research,
24(4):295–310, 2005.
[145] H. I. A. Perez-Imaz, P. A. F. Rezeck, D. G. Macharet, and M. F. M. Campos. Multi-robot 3D coverage path planning for first responders teams. In
2016 IEEE International Conference on Automation Science and Engineering
(CASE), pages 1374–1379, 2016.
[146] I. R. Petersen and A. V. Savkin. Robust Kalman filtering for signals and
systems with large uncertainties. Springer Science & Business Media, 1999.
[147] I. Petersen, V. Ugrinovskii, and A. Savkin. Robust control design using H∞
methods. Communications and control engineering series. Springer, London,
2000.
[148] V. A. Petrushin, G. Wei, O. Shakil, D. Roqueiro, and V. Gershman. Multiplesensor indoor surveillance system. In Computer and Robot Vision, 2006. The
3rd Canadian Conference on, pages 40–40. IEEE, 2006.
[149] M. Plank and A. James. Optimal foraging: Lévy pattern or process? Journal
of The Royal Society Interface, 5(26):1077–1086, 2008.
[150] D. Pompili, T. Melodia, and I. F. Akyildiz. Three-dimensional and two-dimensional deployment analysis for underwater acoustic sensor networks. Ad
Hoc Networks, 7(4):778–790, 2009.
[151] D. Portugal and R. Rocha. A Survey on Multi-robot Patrolling Algorithms,
pages 139–146. Springer Berlin Heidelberg, Berlin, Heidelberg, 2011.
[152] A. Prorok, A. Bahr, and A. Martinoli. Low-cost collaborative localization for
large-scale multi-robot systems. In 2012 IEEE International Conference on
Robotics and Automation, pages 4236–4241, Saint Paul, MN, USA, 2012.
[153] Y. Qu and S. V. Georgakopoulos. A distributed area coverage algorithm for
maintenance of randomly distributed sensors with adjustable sensing range. In
Global Communications Conference (GLOBECOM), 2013 IEEE, pages 286–
291. IEEE, 2013.
[154] R. L. Raffard, C. J. Tomlin, and S. P. Boyd. Distributed optimization for
cooperative agents: Application to formation flight. In 43rd IEEE Conference
on Decision and Control, volume 3, pages 2453–2459. IEEE, 2004.
[155] M. Raibert, K. Blankespoor, G. Nelson, and R. Playter. BigDog, the rough-terrain quadruped robot. IFAC Proceedings Volumes, 41(2):10822–10825,
2008.
[156] T. S. Rappaport. Wireless communications : principles and practice. Upper
Saddle River, N.J. : Prentice Hall PTR, Upper Saddle River, N.J., 2nd ed.
edition, 2002.
[157] Raytheon. COYOTE UAS. http://www.raytheon.com/capabilities/
products/coyote/, 2017. [Online; accessed 20-August-2017].
[158] J. Ren, K. A. McIsaac, and R. V. Patel. Modified Newton's method applied
to potential field-based navigation for nonholonomic robots in dynamic environments. Robotica, 26(1):117–127, 2008.
[159] C. W. Reynolds. Flocks, herds and schools: A distributed behavioral model.
ACM SIGGRAPH Computer Graphics, 21(4):25–34, 1987.
[160] X. Rong, Y. Li, J. Ruan, and B. Li. Design and simulation for a hydraulic
actuated quadruped robot. Journal of Mechanical Science and Technology,
26(4):1171–1177, 2012.
[161] S. I. Roumeliotis and G. A. Bekey. Distributed multirobot localization. IEEE
Transactions on Robotics and Automation, 18(5):781–795, 2002.
[162] L. V. Santana, A. S. Brandao, M. Sarcinelli-Filho, and R. Carelli. A trajectory tracking and 3D positioning controller for the AR.Drone quadrotor.
In Unmanned Aircraft Systems (ICUAS), 2014 International Conference on,
pages 756–767. IEEE, 2014.
[163] A. V. Savkin. Coordinated collective motion of groups of autonomous mobile
robots: analysis of Vicsek’s model. IEEE Transactions on Automatic Control,
49(6):981–982, 2004.
[164] A. V. Savkin and F. Javed. A method for decentralized self-deployment of
a mobile sensor network with given regular geometric patterns. In 2011 Seventh International Conference on Intelligent Sensors, Sensor Networks and
Information Processing, pages 371–376, 2011.
[165] A. V. Savkin and H. Li. Collision free navigation of a non-holonomic ground
robot for search and building maps of unknown areas with obstacles. In 2016
35th Chinese Control Conference (CCC), pages 5409–5414, 2016.
[166] A. V. Savkin and C. Wang. A simple real-time algorithm for safe navigation
of a non-holonomic robot in complex unknown environments with moving
obstacles. In 2014 European Control Conference (ECC), pages 1875–1880,
June 2014.
[167] A. V. Savkin. Analysis and synthesis of networked control systems: Topological entropy, observability, robustness and optimal control. Automatica,
42(1):51–62, 2006.
[168] A. V. Savkin, T. M. Cheng, Z. Xi, F. Javed, A. S. Matveev, and H. Nguyen.
Decentralized coverage control problems for mobile robotic sensor and actuator
networks. John Wiley & Sons, 2015.
[169] A. V. Savkin and R. J. Evans. Hybrid dynamical systems: controller and
sensor switching problems. Control engineering Birkhäuser. Springer Science
& Business Media, Boston, 2002.
[170] A. V. Savkin, F. Javed, and A. S. Matveev. Optimal distributed blanket coverage self-deployment of mobile wireless sensor networks. IEEE Communications
Letters, 16(6):949–951, 2012.
[171] A. V. Savkin and I. R. Petersen. Model validation for robust control of uncertain systems with an integral quadratic constraint. Automatica, 32(4):603–606,
1996.
[172] A. V. Savkin and I. R. Petersen. Robust state estimation and model validation
for discrete-time uncertain systems with a deterministic description of noise
and uncertainty. Automatica, 34(2):271–274, 1998.
[173] A. V. Savkin and C. Wang. A simple biologically inspired algorithm for
collision-free navigation of a unicycle-like robot in dynamic environments with
moving obstacles. Robotica, 31(6):993–1001, 2013.
[174] A. V. Savkin and C. Wang. Seeking a path through the crowd: Robot navigation in unknown dynamic environments with moving obstacles based on an
integrated environment representation. Robotics and Autonomous Systems,
62(10):1568–1580, 2014.
[175] A. V. Savkin and C. Wang. A framework for safe assisted navigation of semiautonomous vehicles among moving and steady obstacles. Robotica, 35(5):981–
1005, 2016.
[176] A. V. Savkin, C. Wang, A. Baranzadeh, Z. Xi, and H. T. Nguyen. A method
for decentralized formation building for unicycle-like mobile robots. In 9th
Asian Control Conference (ASCC), pages 1–5. IEEE, 2013.
[177] A. V. Savkin, C. Wang, A. Baranzadeh, Z. Xi, and H. T. Nguyen. Distributed
formation building algorithms for groups of wheeled mobile robots. Robotics
and Autonomous Systems, 75, Part B:463–474, 2016.
[178] A. A. Semakova, K. S. Ovchinnikov, and A. S. Matveev. Self-deployment
of mobile robotic networks: an algorithm for decentralized sweep boundary
coverage. Robotica, 35(9):1816–1844, 2017.
[179] V. Sharma, R. Patel, H. Bhadauria, and D. Prasad. Deployment schemes in
wireless sensor network to achieve blanket coverage in large-scale open area:
A review. Egyptian Informatics Journal, 17(1):45–56, 2016.
[180] E. Shaw. The development of schooling in fishes. II. Physiological Zoology,
pages 263–272, 1961.
[181] R. Siegwart, I. R. Nourbakhsh, and D. Scaramuzza. Introduction to autonomous mobile robots. MIT press, 2011.
[182] T. R. Smith, H. Hanßmann, and N. E. Leonard. Orientation control of multiple
underwater vehicles with symmetry-breaking potentials. In Proceedings of the
40th IEEE Conference on Decision and Control, volume 5, pages 4598–4603.
IEEE, 2001.
[183] C. Song, L. Liu, G. Feng, and S. Xu. Coverage control for heterogeneous
mobile sensor networks on a circle. Automatica, 63:349–358, 2016.
[184] Y. Song, B. Wang, Z. Shi, K. R. Pattipati, and S. Gupta. Distributed algorithms for energy-efficient even self-deployment in mobile sensor networks.
IEEE Transactions on Mobile Computing, 13(5):1035–1047, 2014.
[185] D. W. Stephens and J. R. Krebs. Foraging theory. Princeton University Press,
1986.
[186] T. Stevens and T. H. Chung. Autonomous search and counter-targeting using Lévy search models. In Robotics and Automation (ICRA), 2013 IEEE
International Conference on, pages 3953–3960. IEEE, 2013.
[187] D. M. Stipanovi, P. F. Hokayem, M. W. Spong, and D. D. iljak. Cooperative avoidance control for multiagent systems. Journal of Dynamic Systems,
Measurement, and Control, 129(5):699–707, 2007.
[188] T. Stirling, S. Wischmann, and D. Floreano. Energy-efficient indoor search by
swarms of simulated flying robots without global information. Swarm Intelligence, 4(2):117–143, 2010.
[189] H. Sugiyama, T. Tsujioka, and M. Murata. Autonomous chain network formation by multi-robot rescue system with ad hoc networking. In 2010 IEEE
Safety Security and Rescue Robotics, pages 1–6, 2010.
[190] H. Sugiyama, T. Tsujioka, and M. Murata. Real-time exploration of a multirobot rescue system in disaster areas. Advanced Robotics, 27(17):1313–1323,
2013.
[191] S. Susca, F. Bullo, and S. Martinez. Monitoring environmental boundaries
with a robotic sensor network. IEEE Transactions on Control Systems Technology, 16(2):288–296, 2008.
[192] D. Sutantyo, P. Levi, C. Mslinger, and M. Read. Collective-adaptive Lévy
flight for underwater multi-robot exploration. In Mechatronics and Automation (ICMA), 2013 IEEE International Conference on, pages 456–462. IEEE,
2013.
[193] D. K. Sutantyo, S. Kernbach, P. Levi, and V. A. Nepomnyashchikh. Multirobot searching algorithm using Lévy flight and artificial potential field. In
Safety Security and Rescue Robotics (SSRR), 2010 IEEE International Workshop on, pages 1–6. IEEE, 2010.
[194] Y. C. Tan. Synthesis of a controller for swarming robots performing underwater mine countermeasures. Technical report, Naval Academy Annapolis Md,
2004.
143
BIBLIOGRAPHY
[195] J. Tang, B. Hao, and A. Sen. Relay node placement in large scale wireless
sensor networks. Computer communications, 29(4):490–501, 2006.
[196] H. G. Tanner, A. Jadbabaie, and G. J. Pappas. Stability of flocking motion.
Report, University of Pennsylvania, 2003.
[197] H. G. Tanner, A. Jadbabaie, and G. J. Pappas. Stable flocking of mobile
agents part i: dynamic topology. In Proceedings of the 42nd IEEE Conference
on Decision and Control, volume 2, pages 2016–2021. IEEE, 2003.
[198] H. G. Tanner, A. Jadbabaie, and G. J. Pappas. Stable flocking of mobile
agents, part i: Fixed topology. In Proceedings of the 42nd IEEE Conference
on Decision and Control, volume 2, pages 2010–2015. IEEE, 2003.
[199] D. Tao, S. Tang, and L. Liu. Constrained artificial fish-swarm based area
coverage optimization algorithm for directional sensor networks. In 2013 IEEE
10th International Conference on Mobile Ad-Hoc and Sensor Systems, pages
304–309, 2013.
[200] W. S. Thomson. On the division of space with minimum partitional area. Acta
mathematica, 11(1-4):121–134, 1887.
[201] M. Vaccarini and S. Longhi. Formation control of marine veihicles via real-time
networked decentralized mpc. In 17th Mediterranean Conference on Control
and Automation, pages 428–433. IEEE, 2009.
[202] J. Van Den Berg, S. J. Guy, M. Lin, and D. Manocha. Reciprocal n-body
collision avoidance, pages 3–19. Springer, 2011.
[203] H. H. Viet, V. H. Dang, M. N. U. Laskar, and T. Chung. BA*: an online complete coverage algorithm for cleaning robots. Applied Intelligence,
39(2):217–235, 2013.
[204] G. M. Viswanathan, S. V. Buldyrev, S. Havlin, M. Da Luz, E. Raposo,
and H. E. Stanley. Optimizing the success of random searches. Nature,
401(6756):911–914, 1999.
[205] G. Viswanathan, F. Bartumeus, S. V. Buldyrev, J. Catalan, U. Fulco, S.
Havlin, M. Da Luz, M. Lyra, E. Raposo, and H. E. Stanley. Lévy flight
random searches in biological phenomena. Physica A: Statistical Mechanics
and Its Applications, 314(1):208–213, 2002.
[206] I. Škrjanc and G. Klančar. Optimal cooperative collision avoidance between
multiple robots based on bernstein-bd́zier curves. Robotics and Autonomous
systems, 58(1):1–9, 2010.
[207] B. Wang. Coverage problems in sensor networks: A survey. ACM Computing
Surveys (CSUR), 43(4):32, 2011.
[208] C. Wang, A. V. Savkin, and M. Garratt. Collision free navigation of flying
robots among moving obstacles. In 2016 35th Chinese Control Conference
(CCC), pages 4545–4549, Chengdu, China, 2016.
144
BIBLIOGRAPHY
[209] Q. Wang and Y. P. Tian. Minimally rigid formations control for multiple
nonholonomic mobile agents. In Proceedings of the 31st Chinese Control Conference, pages 6171–6176, 2012.
[210] Y. Wang, Y. Liu, and Z. Guo. Three-dimensional ocean sensor networks: A
survey. Journal of Ocean University of China (English Edition), 11(4):436–
450, 2012.
[211] Y. Wang, R. Tan, G. Xing, X. Tan, J. Wang, and R. Zhou. Spatiotemporal
aquatic field reconstruction using robotic sensor swarm. In Real-Time Systems
Symposium (RTSS), 2012 IEEE 33rd, pages 205–214. IEEE, 2012.
[212] Y. Wang, X. Wang, B. Xie, D. Wang, and D. P. Agrawal. Intrusion detection in
homogeneous and heterogeneous wireless sensor networks. IEEE transactions
on mobile computing, 7(6):698–711, 2008.
[213] Z. Wang and B. Wang. A novel node sinking algorithm for 3D coverage and
connectivity in underwater sensor networks. Ad Hoc Networks, 56:43–55, 2016.
[214] J. Wawerla and R. T. Vaughan. A fast and frugal method for team-task allocation in a multi-robot transportation system. In Robotics and Automation
(ICRA), 2010 IEEE International Conference on, pages 1432–1437, Anchorage, AK, USA, 2010. IEEE.
[215] S. C. Wong and B. A. MacDonald. A topological coverage algorithm for mobile
robots. In Proceedings of the 2003 IEEE/RSJ International Conference on
Intelligent Robots and Systems, volume 2, pages 1685–1690. IEEE, 2003.
[216] Y. Xiang, Z. Xuan, M. Tang, J. Zhang, and M. Sun. 3D space detection and
coverage of wireless sensor network based on spatial correlation. Journal of
Network and Computer Applications, 61:93–101, 2016.
[217] M. A. Yakoubi and M. T. Laskri. The path planning of cleaner robot for
coverage region using genetic algorithms. Journal of Innovation in Digital
Ecosystems, 3(1):37–43, 2016.
[218] M. Yamashita, H. Umemoto, I. Suzuki, and T. Kameda. Searching for mobile
intruders in a polygonal region by a group of mobile searchers. Algorithmica,
31(2):208–236, 2001.
[219] Z. Yan, N. Jouandeau, and A. A. Cherif. A survey and analysis of multi-robot
coordination. International Journal of Advanced Robotic Systems, 10(12):399,
2013.
[220] B. Yang, Y. Ding, Y. Jin, and K. Hao. Self-organized swarm robot for target search and trapping inspired by bacterial chemotaxis. Robotics and Autonomous Systems, 72:83–92, 2015.
[221] X. Yang. A decentralized collision-free search algorithm in 3D area for multiple
robots. In 2017 36th Chinese Control Conference (CCC), pages 8107–8112,
2017.
145
BIBLIOGRAPHY
[222] X. Yang. Grid based 2D navigation by a decentralized robot system with
collison avoidance. In 2017 36th Chinese Control Conference (CCC), pages
8473–8478, 2017.
[223] X. Yang. A decentralized algorithm for collision free navigation of multiple
robots in search tasks. In 2016 35th Chinese Control Conference (CCC), pages
8096–8101, 2016.
[224] X. Yang. A collision-free self-deployment of mobile robotic sensors for threedimensional distributed blanket coverage control. accepted, 2017.
[225] X. Yang. A decentralized algorithm for multiple robots in collision-free search
tasks. submitted, 2017.
[226] X. Yang. Optimal distributed self-deployment of multiple mobile robotic sensors with collision avoidance for blanket coverage. submitted, 2017.
[227] X. S. Yang and S. Deb. Cuckoo search via Lévy flights. In Nature & Biologically
Inspired Computing, 2009. NaBIC 2009. World Congress on, pages 210–214.
IEEE, 2009.
[228] M. Younis and K. Akkaya. Strategies and techniques for node placement in
wireless sensor networks: A survey. Ad Hoc Networks, 6(4):621–655, 2008.
[229] K. Zheng, G. Chen, G. Cui, Y. Chen, F. Wu, and X. Chen. Performance
Metrics for Coverage of Cleaning Robots with MoCap System, pages 267–274.
Springer International Publishing, Cham, 2017.
[230] X. Zheng, S. Jain, S. Koenig, and D. Kempe. Multi-robot forest coverage. In
IEEE/RSJ International Conference on Intelligent Robots and Systems, pages
3852–3857. IEEE, 2005.
[231] C. Zhu, C. Zheng, L. Shu, and G. Han. A survey on coverage and connectivity issues in wireless sensor networks. Journal of Network and Computer
Applications, 35(2):619–632, 2012.
[232] Q. Zhu, A. Liang, and H. Guan. A pso-inspired multi-robot search algorithm
independent of global information. In IEEE Symposium on Swarm Intelligence, pages 1–7. IEEE, 2011.
146
Appendix A
Acronyms
2D: two-dimensional
3D: three-dimensional
C: cubic
DOF: Degrees of Freedom
DR: decentralized random
GPS: Global Positioning System
H: regular hexagonal
HP: hexagonal prismatic
LF: Lévy flight
LFP: Lévy flight with potential field
R: random choice
RD: rhombic dodecahedral
RU: random choice with unvisited vertices first
RUN: random choice with path to the nearest unvisited vertex
RW: random walk
S: square
SLAM: Simultaneous Localization And Mapping
T: equilateral triangular
TCP: Transmission Control Protocol
TO: truncated octahedral
UAV: Unmanned Aerial Vehicle
UGV: Unmanned Ground Vehicle
UUV: Unmanned Underwater Vehicle
WMSN: Wireless Mobile Sensor Network
arXiv:1801.07650v1 [] 23 Jan 2018
Dynamic Optimization of Neural Network Structures
Using Probabilistic Modeling
Shinichi Shirakawa
Yokohama National University
[email protected]

Yasushi Iwata
Yokohama National University
[email protected]

Youhei Akimoto
Shinshu University
y [email protected]
Abstract
Deep neural networks (DNNs) are powerful machine learning models and have succeeded in various artificial intelligence tasks. Although various architectures and modules for the DNNs have been proposed, selecting and designing the appropriate network structure for a target problem is a challenging task. In this paper, we propose a method to simultaneously optimize the network structure and weight parameters during neural network training. We consider a probability distribution that generates network structures, and optimize the parameters of the distribution instead of directly optimizing the network structure. The proposed method can be applied to various network structure optimization problems under the same framework. We apply the proposed method to several structure optimization problems, such as selection of layers, selection of unit types, and selection of connections, using the MNIST, CIFAR-10, and CIFAR-100 datasets. The experimental results show that the proposed method can find appropriate and competitive network structures.
Introduction
Deep neural networks (DNNs) have become a popular
machine-learning model and seen great success in various tasks such as image recognition and natural language
processing. To date, a variety of DNN models has been
proposed. Considering the convolutional neural networks
(CNNs) for visual object recognition as an example, a variety of deep and complex CNN models were developed,
such as the VGG model (Simonyan and Zisserman 2015),
the residual networks (ResNets) (He et al. 2016), which have
the skip connections, and the dense convolutional networks
(DenseNets) (Huang et al. 2017). It is not easy for users
to select an appropriate network structure including hyperparameters, such as the depth of a network, the type of each
unit, and the connection between layers, since the performance depends on tasks and data. However, the appropriate
configuration of such structures is of importance for high
performance of the DNNs. Therefore, developing efficient
methods to optimize the structure of the DNNs is an important topic.
This is the camera-ready version of the following paper: Shirakawa, S., Iwata, Y., and Akimoto, Y., “Dynamic Optimization of
Neural Network Structures Using Probabilistic Modeling”, ThirtySecond AAAI Conference on Artificial Intelligence (AAAI-18).
A popular approach to such a network structure optimization is to treat the network structure as the hyper-parameters
of the DNNs and optimize them by a black-box optimization technique such as Bayesian optimization (Snoek,
Larochelle, and Adams 2012) or evolutionary algorithms
(Loshchilov and Hutter 2016). Given a network configuration (hyper-parameter vector), the training is done for a certain period and the trained network is evaluated based on the
accuracy or the loss for validation dataset. A black-box optimizer treats the hyper-parameter vector and the resulting
accuracy/loss as the design variables and its objective/cost
function. Recently, the methods for automatic network design that can construct more flexible network structures than
the conventional hyper-parameter optimization approaches
have been proposed. Zoph and Le defined recurrent neural networks (RNNs) that generate neural network architectures for a target problem and found state-of-the-art architectures by optimizing the RNN using the policy gradient
method (Zoph and Le 2017). The works of (Real et al. 2017;
Suganuma, Shirakawa, and Nagao 2017) optimize the connections and types of layers by evolutionary algorithms
to construct a high-performance CNN architecture. These
methods succeeded in finding the state-of-the-art configurations of the DNNs. We view all these approaches as static
optimization of the network structures. The main disadvantage of static optimization is the efficiency, since it repeats
the training with different network configurations until it
finds a reasonable configuration.
A dynamic optimization of the network structures, on the
other side, learns the connection weights and the structure of
a network simultaneously. A typical example is to represent
the network structure parameters as the learnable parameters
and optimize them by a stochastic gradient descent when
carrying out the weight training (Srinivas and Babu 2016;
Ba and Frey 2013). Srinivas and Babu (Srinivas and Babu
2016) introduce the Tri-State ReLU activation having differentiable parameters to prune the units and layers using
back-propagation. Ba and Frey (Ba and Frey 2013) use a binary belief network overlaying a neural network to decide
the dropout rate and jointly train two networks. The sparse
and compact network structures can be dynamically learned
by using regularization techniques (Wen et al. 2016) as well.
These methods require the loss function to be differentiable
with respect to (w.r.t.) the structure parameters or they use
heuristic optimization techniques. The dynamic optimization approach is computationally efficient since it optimizes
the connection weights and the network structure within a
single training loop, though it compromises the flexibility of
the learnable structures compared with the static optimization approaches.
In this paper, we propose a general framework for dynamically optimizing the network structures and the connection weights simultaneously. To achieve more flexibility of
learnable structures, we introduce a parametric distribution
generating the network structure and treat the distribution
parameters as the hyper-parameters. The objective function
for the weights and hyper-parameters are defined by the expectation of the loss function under the distribution. Then,
gradient based search algorithms can be applied. To demonstrate the flexibility and the efficiency of our framework,
we consider the Bernoulli distributions in this paper and
show that the proposed method can dynamically optimize
various network structure parameters under the same framework. Our method is more computationally efficient than the
static optimization approach and more flexible than the conventional dynamic optimization approach, such as directly
optimizing structure parameters (Srinivas and Babu 2016;
Ba and Frey 2013). We conduct four experiments: selection of layers, selection of activation functions, adaptation
of stochastic network, and selection of connections. The experimental results show that the proposed method can find
the appropriate and unusual network structures.
Dynamic Network Structure Optimization
Generic Framework In the following we consider the neural network φ(W, M) modeled by two parameter vectors: the vector W ∈ W consisting of the connection weights and the vector M ∈ M consisting of d hyper-parameters that determine the structure of the network, such as the connectivity of each unit, the type of activation function for each unit, and so on. The weights W are in general real valued, and the structure parameters M can live in an arbitrary space. Our original objective is to minimize the loss L(W, M), which is often defined as L(W, M) = ∫_D l(z, W, M) p(z) dz, where D and l(z, W, M) indicate the dataset and the loss function for a given data point z, respectively.
Let us consider a family of probability distributions
pθ (M ) of M parametrized by a real vector θ ∈ Θ. Instead
of directly optimizing L(W, M), we consider minimizing the expected loss G(W, θ) under pθ(M), namely,

    G(W, θ) = ∫_M L(W, M) pθ(M) dM ,    (1)
where dM is a reference measure on M. Note that the minimizer (W ∗ , θ∗ ) of G admits the minimizer (W ∗ , M ∗ ) of L
in the sense that pθ∗ will be concentrated at the minimizer
M ∗ of L as long as such a distribution is included in the
given family of probability distributions. We remark that the
domain W × Θ of G is continuous and the objective G is
likely to be differentiable (one may be able to choose pθ so
that it will be), whereas L itself is not necessarily continuous
since M can be discrete, as we consider in the following.
We optimize the parameters W and θ by taking a gradient step and a natural gradient step, respectively, which are given by

    ∇W G(W, θ) = ∫ ∇W L(W, M) pθ(M) dM ,    (2)

    ∇̃θ G(W, θ) = ∫ L(W, M) ∇̃θ ln pθ(M) pθ(M) dM ,    (3)
where ∇̃θ ln pθ(M) = F⁻¹(θ) ∇θ ln pθ(M) is the so-called natural gradient (Amari 1998) of the log-likelihood ln pθ and F(θ) is the Fisher information matrix of pθ(M). Note that the natural gradient generally requires the estimation of the inverse Fisher information matrix in the standard machine learning setting (e.g., for the W update). However, we can analytically compute the natural gradient of the log-likelihood w.r.t. θ since we have full access to the distribution pθ. For example, any exponential family with sufficient statistics T(M) under the expectation parameterization θ = E[T(M)] admits the natural gradient of the log-likelihood ∇̃θ ln pθ(M) = T(M) − θ. This setting is similar to those in the natural policy gradient method with parameter-based exploration (PGPE) (Miyamae et al. 2010) for reinforcement learning and in the information geometric optimization algorithm (IGO) (Ollivier et al. 2017) for simulation based black-box optimization. The natural gradient gives us a reasonable scaling of the gradient in our case, compared to the vanilla (Euclidean) gradient.
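For the Bernoulli case used below, this identity can be checked numerically. The following sketch (an illustration, not code from the paper) compares F⁻¹(θ)∇θ ln pθ(M) against M − θ, using the fact that the Fisher matrix of independent Bernoulli variables is diagonal:

```python
import numpy as np

theta = np.array([0.2, 0.5, 0.7])   # Bernoulli parameters
M = np.array([1.0, 0.0, 1.0])       # a sampled binary structure vector

# Vanilla gradient of the log-likelihood: m/theta - (1 - m)/(1 - theta)
grad = M / theta - (1.0 - M) / (1.0 - theta)

# Inverse Fisher information (diagonal): theta_k (1 - theta_k)
fisher_inv = theta * (1.0 - theta)

nat_grad = fisher_inv * grad        # F^{-1}(theta) * grad
assert np.allclose(nat_grad, M - theta)  # equals T(M) - theta = M - theta
```

For mk = 1 the vanilla gradient is 1/θk and scaling by θk(1 − θk) gives 1 − θk; for mk = 0 it is −1/(1 − θk), scaling to −θk — exactly the coordinates of M − θ.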
At each step of the training phase, we receive a mini-batch Z of N training data zj, i.e., Z = {z1, . . . , zN}. The loss function L(W, M) = ∫_D l(z, W, M) p(z) dz is then approximated by the sample average of the loss l(zj, W, M). We shall write it as L̄(W, M; Z). The cost in (3) and the gradient of the cost in (2) are replaced with L̄(W, M; Z) and ∇W L̄(W, M; Z), respectively. In our situation, we need to estimate the cost and its gradient for each structure parameter Mi. We consider the following two different ways:
(a) same mini-batches The same training data set Z is used for each Mi, namely,

    L̄(W, Mi; Z) = (1/N) Σ_{z∈Z} l(z, W, Mi)    (4)

(b) different mini-batches The training data set Z is decomposed into λ subsets Zi with an equal number of data, N/λ, and each subset Zi is used for each Mi, namely,

    L̄(W, Mi; Zi) = (λ/N) Σ_{z∈Zi} l(z, W, Mi)    (5)
Letting L̄(W, Mi) denote either (4) or (5), we obtain the Monte-Carlo approximation of the gradients as

    ∇W G(W, θ) ≈ (1/λ) Σ_{i=1}^{λ} ∇W L̄(W, Mi) ,    (6)

    ∇̃θ G(W, θ) ≈ (1/λ) Σ_{i=1}^{λ} L̄(W, Mi) ∇̃θ ln pθ(Mi) .    (7)
On one hand the latter (5) possibly has an advantage in computational time, since its mini-batch size for each network is
1/λ times smaller. This advantage will disappear if GPU is
capable of processing all the data of the original mini-batch
in parallel. On the other hand, since the latter uses different
batches to compute the loss of different networks, the resulting loss function may lead to a racing situation. From this
optimization viewpoint, the former (4) is preferred. These
two variations are compared in experiment (I).
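The two mini-batch schemes (4) and (5) can be sketched as follows; `loss_fn` and the list representations of data and structures are placeholders assumed for illustration, not the paper's implementation:

```python
def losses_same_batch(loss_fn, W, Ms, Z):
    """Scheme (a), eq. (4): every sampled structure Mi is evaluated
    on the full mini-batch Z."""
    N = len(Z)
    return [sum(loss_fn(z, W, M) for z in Z) / N for M in Ms]

def losses_split_batch(loss_fn, W, Ms, Z):
    """Scheme (b), eq. (5): Z is split into len(Ms) equal subsets,
    one per sampled structure Mi."""
    lam, N = len(Ms), len(Z)
    n = N // lam
    subsets = [Z[i * n:(i + 1) * n] for i in range(lam)]
    return [sum(loss_fn(z, W, M) for z in Zi) * lam / N
            for M, Zi in zip(Ms, subsets)]
```

With a dummy loss, both return λ loss estimates; scheme (b) touches each datum once, while scheme (a) evaluates each datum λ times but compares all Mi on identical data.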
Instantiation with Bernoulli Distribution In the following we focus on the cases where the structure variables are binary, i.e., mk ∈ {0, 1} for each element mk of M = (m1, . . . , md). We consider the Bernoulli distribution pθ as a law of M. The probability mass function is pθ(M) = Π_{k=1}^{d} θk^{mk} (1 − θk)^{1−mk}, where θk is the probability of each bit mk to be 1. The parameter vector θ = (θ1, . . . , θd) then lives in Θ = [0, 1]^d. Since it is an exponential family with the expectation parameterization, the natural gradient of the log-likelihood ∇̃θ ln pθ(M) is given by M − θ.
The parameters W and θ are updated by taking the approximated gradient steps with learning rates (aka step-sizes) ηW and ηθ. Any stochastic gradient optimizer can be used for the W update, as in standard neural network training. However, since Θ is bounded, one needs to constrain θ so that it remains in Θ = [0, 1]^d. To do so, a simple yet practically attractive treatment is to rescale the loss function at each training step. This is done by transforming the loss value L̄(W, Mi) into the ranking-based utility value ui (Hansen and Ostermeier 2001; Yi et al. 2009) as

    L̄(W, Mi) ↦ ui = 1 (best ⌈λ/4⌉ samples), −1 (worst ⌈λ/4⌉ samples), 0 (otherwise).    (8)
With this utility transformation, the θ update reads

    θ^{t+1} = θ^t + ηθ (1/λ) Σ_{i=1}^{λ} ui (Mi − θ^t) ,    (9)

where ηθ is set to 1/(d Σi |ui|) for all experiments. This way, we can guarantee that θ stays in Θ with neither the adaptation of ηθ nor constraint handling. Moreover, we restrict the range of θ within [1/d, 1 − 1/d] to leave open the possibility of generating both values, i.e., we replace the value of θ by the boundary value if it would otherwise be updated beyond the boundary. The optimization procedure of the proposed method is displayed in Algorithm 1.
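As a concrete illustration (a minimal sketch under the definitions above, not the paper's code), the utility transformation (8) and the update (9) with the [1/d, 1 − 1/d] restriction can be written as:

```python
import math
import numpy as np

def utilities(losses):
    """Ranking-based utility (8): +1 for the best ceil(lam/4) samples,
    -1 for the worst ceil(lam/4), 0 otherwise (lower loss is better)."""
    lam = len(losses)
    q = math.ceil(lam / 4)
    order = np.argsort(losses)            # ascending: best samples first
    u = np.zeros(lam)
    u[order[:q]] = 1.0
    u[order[-q:]] = -1.0
    return u

def update_theta(theta, Ms, losses):
    """Natural gradient step (9) on the Bernoulli parameters theta."""
    d = theta.shape[0]
    u = utilities(losses)
    eta = 1.0 / (d * np.sum(np.abs(u)))   # eta_theta = 1/(d * sum_i |u_i|)
    theta = theta + eta * np.mean(u[:, None] * (Ms - theta), axis=0)
    # keep both bit values possible: restrict to [1/d, 1 - 1/d]
    return np.clip(theta, 1.0 / d, 1.0 - 1.0 / d)
```

Because each ui ∈ {−1, 0, 1} and Mi − θ is bounded by 1 per coordinate, the step size above keeps θ inside Θ without extra constraint handling, as stated in the text.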
When we apply the trained network to test data, we have
two options: the deterministic and stochastic predictions.
The deterministic prediction indicates that we fix the random
variables as mi = 0 if θi < 0.5 and mi = 1 if θi ≥ 0.5,
while the stochastic one averages the values of the model
predictions using samples from pθ(M). The computational cost of the stochastic prediction grows in proportion to the number of samples. We report the results of
both predictions in the experiments and use 100 samples for
the stochastic prediction.
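The two prediction modes can be sketched as follows for any `predict(M, x)` returning model outputs; the function name and interface are placeholders assumed for illustration:

```python
import numpy as np

def deterministic_predict(predict, theta, x):
    """Fix each bit by thresholding theta at 0.5: m_i = 1 iff theta_i >= 0.5."""
    M = (theta >= 0.5).astype(float)
    return predict(M, x)

def stochastic_predict(predict, theta, x, n_samples=100, rng=None):
    """Average model predictions over structures sampled from p_theta."""
    rng = rng or np.random.default_rng()
    d = theta.shape[0]
    out = [predict((rng.random(d) < theta).astype(float), x)
           for _ in range(n_samples)]
    return np.mean(out, axis=0)
```

The deterministic mode costs one forward pass; the stochastic mode costs one pass per sample, which is why the text reports cost proportional to the number of samples.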
Algorithm 1 Optimization procedure of the proposed method instantiated with the Bernoulli distribution.
Input: Training data D
Output: Optimized parameters W and θ
Procedure:
1: Initialize the weights and Bernoulli parameters as W0 and θ0
2: t ← 0
3: while no stopping criterion is satisfied do
4:   Get N mini-batch samples from D
5:   Sample M1, . . . , Mλ from pθt
6:   Compute the loss using (4) or (5)
7:   Update the weights to Wt+1 using (6) by an SGD method
8:   Update the Bernoulli parameters to θt+1 by (9)
9:   t ← t + 1
10: end while
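To see the θ-update of Algorithm 1 in isolation, here is a toy run with the weight step removed and a synthetic loss that simply rewards 1-bits. This illustrates the distribution dynamics only; it is not an experiment from the paper:

```python
import math
import numpy as np

rng = np.random.default_rng(0)
d, lam = 10, 8
theta = np.full(d, 0.5)
q = math.ceil(lam / 4)

for t in range(3000):
    Ms = (rng.random((lam, d)) < theta).astype(float)  # sample structures
    losses = -Ms.sum(axis=1)                  # toy loss: prefer more 1-bits
    order = np.argsort(losses)                # ascending: best samples first
    u = np.zeros(lam)
    u[order[:q]], u[order[-q:]] = 1.0, -1.0   # utility transformation (8)
    eta = 1.0 / (d * np.sum(np.abs(u)))
    theta += eta * np.mean(u[:, None] * (Ms - theta), axis=0)  # update (9)
    theta = np.clip(theta, 1.0 / d, 1.0 - 1.0 / d)

# the distribution concentrates near the all-ones structure (clipped at 1 - 1/d)
assert theta.mean() > 0.75
```

With the cross-entropy loss of an actual network in place of the toy loss, and the weight step of line 7 restored, this loop becomes the full procedure.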
Relation to stochastic network models Since our method
uses the stochastic network structures, we describe the relation to the stochastic networks such as Dropout (Srivastava
et al. 2014), which stochastically zeros the output of hidden units during training to prevent overfitting. Also, other
stochastic networks, DropConnect (Wan et al. 2013) that
drops the connections and the stochastic depth (Huang et
al. 2016) that skips the layers in ResNets, were developed.
Swapout (Singh, Hoiem, and Forsyth 2016) is a generalization of Dropout and the stochastic depth, and it randomly
chooses each unit behavior from four types: dropped, feedforward, skipped, or a residual network unit. These dropout
techniques contribute to reducing the generalization error.
The stochastic behavior is decided based on the Bernoulli
distributions with typically θ = 0.5. If the binary vector M drawn from the Bernoulli distributions is used to decide whether each unit drops or not, our method can be regarded as an adaptation of the dropout ratio. Therefore, our method can also be applied to the adaptation of the
parameters of the existing stochastic network models.
Relation and difference to IGO Optimizing the parameters of the probability distribution in the proposed method
is based on the IGO (Ollivier et al. 2017). We can view the
IGO as a generalization of specific estimation of distribution algorithms (EDAs) (Larrañaga and Lozano 2001) such
as the population based incremental learning (PBIL) (Baluja
1994) and the compact genetic algorithm (cGA) (Harik,
Lobo, and Goldberg 1999) for discrete optimization. Moreover, it generalizes the covariance matrix adaptation evolution strategy (CMA-ES) (Hansen and Ostermeier 2001;
Hansen, Müller, and Koumoutsakos 2003), which is nowadays recognized as a state-of-the-art black-box continuous
optimizer. The update rule (9) is similar to the one in the
cGA.
In the standard IGO and EDAs, the optimizer only updates
the parameters of the distribution. On the contrary in this paper, the weight parameters of the neural network are simultaneously updated with a different mechanism (i.e., a stochas-
Table 1: Mean test errors (%) over 30 trials at the final iteration in the experiment of selection of layers. The values in parentheses denote the standard deviation.

                             θinit = 0.5                     θinit = 0.968
                             Deterministic   Stochastic      Deterministic   Stochastic
AdaptiveLayer (a) (λ = 2)    2.200 (0.125)   2.218 (0.135)   2.366 (0.901)   2.375 (0.889)
AdaptiveLayer (b) (λ = 2)    15.69 (29.9)    15.67 (29.9)    35.26 (41.6)    35.27 (41.7)
AdaptiveLayer (b) (λ = 8)    2.406 (0.189)   2.423 (0.189)   65.74 (38.8)    65.74 (38.8)
AdaptiveLayer (b) (λ = 32)   2.439 (0.224)   2.453 (0.228)   80.59 (24.6)    80.60 (24.6)
AdaptiveLayer (b) (λ = 128)  2.394 (0.163)   2.405 (0.173)   80.58 (24.8)    80.58 (24.8)
StochasticLayer              4.704 (0.752)
tic gradient descent with momentum). Differently from applying IGO to update both parameters at the same time, we
update the distribution parameters by IGO (i.e., natural gradient) and the weights are updated by using the gradient of
the loss function, since the gradient is available and it leads
to a faster learning compared to a direct search by IGO.
From the viewpoint of updating the distribution parameters, i.e. optimizing the network structures, the landscape of
the loss function dynamically changes at each algorithmic
iteration because the weight parameters as well as the mini-batch change. This is why we call methods that optimize both the structure and the weight parameters at the same time dynamic structure optimization.
Experiments and Results
We apply our methodology to the following four situations:
(I) selection of layers, (II) selection of activation functions,
(III) adaptation of stochastic network, and (IV) selection of
connections for densely connected CNNs. The algorithms
are implemented by the Chainer framework (Tokui et al.
2015) (version 1.23.0) on NVIDIA Geforce GTX 1070 GPU
for experiments (I) to (III) and on NVIDIA TITAN X GPU
for experiment (IV). In all experiments, the SGD with a Nesterov momentum (Sutskever et al. 2013) of 0.9 and a weight
decay of 10−4 is used to optimize the weight parameters.
The learning rate is divided by 10 at 1/2 and 3/4 of the
maximum number of epochs. This setting is based on the
literature (He et al. 2016; Huang et al. 2017).
(I) Selection of Layers
Experimental setting The base network consists of 32
fully connected hidden layers with 128 units for each layer
and the rectified linear unit (ReLU) (Nair and Hinton 2010).
We use the MNIST handwritten digits dataset containing
the 60,000 training examples and 10,000 test examples of
28 × 28 gray-scale images. The input and output layers correspond to the 784 input pixel values and class labels (0 to
9), respectively. We use the cross entropy error with softmax
activation as the loss function L.
We use the binary vector M = (m1 , . . . , md ) to decide whether the processing of the corresponding layer is
skipped: we skip the processing of l-th layer if ml = 0.
We re-connect the (l + 1)-th layer with the (l − 1)-th layer
when ml = 0. More precisely, denoting the l-th layer's processing by Hl, the (l + 1)-th layer's input vector becomes
Xl+1 = Hl (Xl ) if ml = 1 and Xl+1 = Xl if ml = 0. It
is possible because the number of units in each layer is the
same. The gradient ∇W L̄(W, Mi ) in (6) is then computed
in the straight-forward way, where the components of the
gradient corresponding to the skipped layers are zero. Such
skip processing is the same as the skip-forward defined in
(Singh, Hoiem, and Forsyth 2016), and the number of 1-bits
in M implies the number of hidden layers. To ensure the skip
processing, we do not skip the first hidden layer and decide
whether the second to 32-th hidden layers are skipped or not
based on the binary vector. For this setting, the dimension of
M is d = 31.
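The skip rule Xl+1 = Hl(Xl) if ml = 1 and Xl+1 = Xl if ml = 0 can be sketched for a toy stack of equally sized fully connected ReLU layers. The shapes and function names below are illustrative assumptions, not the experiment's actual code:

```python
import numpy as np

def forward(x, weights, m):
    """Forward pass through equally sized layers; layer l+1 is skipped
    if m[l] == 0. The first hidden layer is always applied, as in the
    experiment (only the 2nd to last hidden layers are skippable)."""
    h = np.maximum(weights[0] @ x, 0.0)        # first hidden layer, never skipped
    for W, ml in zip(weights[1:], m):          # one bit per skippable layer
        h = np.maximum(W @ h, 0.0) if ml == 1 else h
    return h

rng = np.random.default_rng(0)
weights = [rng.standard_normal((4, 4)) for _ in range(4)]  # 1 fixed + 3 skippable
x = rng.standard_normal(4)

all_on = forward(x, weights, [1, 1, 1])    # all layers active
all_off = forward(x, weights, [0, 0, 0])   # everything after layer 1 skipped
assert np.allclose(all_off, np.maximum(weights[0] @ x, 0.0))
```

The pass-through Xl+1 = Xl is only well defined because every hidden layer has the same number of units, which is exactly the condition noted in the text.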
The purpose of this experiment is to investigate the difference between the two types of loss approximation, (4) and (5), and to check whether the proposed method can find an appropriate number of layers. With the network structure described above, with fixed layer size and the following optimization setting, the training does not work properly when the number of layers is greater than 21. Therefore, the proposed method needs to find fewer than 22 layers
during the training. We denote the proposed methods using
(4) by AdaptiveLayer (a) and using (5) by AdaptiveLayer
(b). We vary the parameter λ as {2, 8, 32, 128} for AdaptiveLayer (b) and use λ = 2 for AdaptiveLayer (a) and report
the results using the deterministic and stochastic predictions
mentioned above. The data sample size and the number of
epochs are set to N = 64 and 100 for AdaptiveLayer (a), respectively, and N = 128 and 200 for other algorithms. The
number of iterations is about 9 × 104 for all algorithms. At
the beginning of training, we initialize the learning rate of
SGD by 0.01 and the Bernoulli parameters by θinit = 0.5 or
θinit = 1 − 1/31 ≈ 0.968 to verify the impact of the θ initialization¹. We also run the method using fixed Bernoulli parameters of 0.5, denoted by StochasticLayer, to check the effectiveness of optimizing θ. The experiments are conducted
over 30 trials with the same settings.
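Equations (4)–(6) themselves are not shown in this excerpt, so the following is only a generic sketch of the kind of update the method performs: sample λ masks from the Bernoulli distribution, score each with a (here synthetic) loss, and move θ along a Monte-Carlo estimate of the natural gradient, which for independent Bernoulli bits is simply M − θ. The loss function, step size, sample count, and iteration count below are illustrative assumptions, not the paper's values:

```python
import numpy as np

rng = np.random.default_rng(1)
d, lam, eta = 31, 10, 0.01          # mask dimension, samples per step, step size

def synthetic_loss(mask):
    """Stand-in for the mini-batch loss L(W, M); prefers about 10 active bits."""
    return (mask.sum() - 10.0) ** 2

theta = np.full(d, 0.5)             # theta_init = 0.5 (no prior knowledge)
for _ in range(500):
    masks = (rng.random((lam, d)) < theta).astype(float)   # M_i ~ p_theta
    losses = np.array([synthetic_loss(m) for m in masks])
    utilities = losses - losses.mean()                     # baseline subtraction
    # Monte-Carlo natural-gradient step: ln-likelihood gradient per bit is M_i - theta.
    g = (utilities[:, None] * (masks - theta)).mean(axis=0)
    theta = np.clip(theta - eta * g, 1.0 / d, 1.0 - 1.0 / d)

expected_bits = theta.sum()          # drifts toward roughly 10 active bits
```

Clipping θ away from 0 and 1 keeps every mask configuration reachable, mirroring the bounded search range used for the Bayesian-optimization baseline later in the paper.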
Result and discussion Table 1 shows the test error of each
method at the final iteration. We observe that AdaptiveLayer
¹ The initialization with θinit = 1 − 1/31 ≈ 0.968 is an artificially poor initialization. We use this setting here only to check the impact of the initialization. We do not recommend tuning θinit at all; it should be θinit = 0.5, which assumes no prior knowledge.
Figure 1: An example of a histogram of θ obtained by AdaptiveLayer (a) at the final iteration.
Figure 2: Transitions of the sum of the Bernoulli parameters (Σθi) using AdaptiveLayer (a) when initialized by θinit = 0.5 and 0.968. The expected number of hidden layers is given by Σθi + 1. The first 45,000 iterations (about half of the maximum number of iterations) are plotted.
(a) shows the best performance among the proposed methods, and the performance of AdaptiveLayer (b) becomes significantly worse when the bad initialization (θinit ≈ 0.968) is used. One reason for this is that the loss approximation (4) used in AdaptiveLayer (a) evaluates each sample of M with the same mini-batch, which leads to an accurate comparison between the samples of M. Comparing the deterministic and stochastic predictions, the performance differences are not significant because the values of θ are distributed close to 0.0 or 1.0, as shown in Figure 1.
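The two prediction modes compared here can be sketched as follows. The tiny scoring function and the θ values are made up for illustration; only the mechanism, thresholding θ versus averaging over sampled masks (the paper uses e.g. 100 samples in experiment (III)), matches the text:

```python
import numpy as np

rng = np.random.default_rng(2)
theta = np.array([0.9, 0.1, 0.8, 0.5])          # learned Bernoulli parameters (toy)

def scores(mask):
    """Stand-in for a forward pass of the network configured by `mask`."""
    return np.array([mask.sum(), mask.size - mask.sum()])

# Deterministic prediction: use the single most likely mask.
det_mask = (theta >= 0.5).astype(float)
det = scores(det_mask)

# Stochastic prediction: average the scores over masks sampled from p_theta.
samples = [(rng.random(theta.size) < theta).astype(float) for _ in range(100)]
sto = np.mean([scores(m) for m in samples], axis=0)
```

When θ concentrates near 0.0 or 1.0, almost every sampled mask equals the thresholded one, which is why the two modes barely differ in the experiment above.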
Figure 2 shows the transitions of the sum of the Bernoulli parameters (Σθi) for the first 45,000 iterations using AdaptiveLayer (a). The expected number of layers, given by Σθi + 1, converges to between eight and ten. We observe that the values converge to the learnable number of layers at an early iteration, even in the case of the bad initial condition. In these experiments we observed no significant difference in computational time between AdaptiveLayer (a) and (b).
The test error of StochasticLayer is inferior to most of the proposed methods; thus optimizing the Bernoulli parameters θ is effective. In our preliminary study, we found that the best number of layers was ten, whose test error is 1.778 in our experimental setting. There was a run where the layer size converged to 10; however, the final test error was inferior. From these observations, we conclude that the strength of the proposed method is not to find the optimal network configuration, but to find a reasonable configuration within a single training loop. This will improve the convenience of deep learning in practice. Based on the observation that the method using (4) with λ = 2 showed the best performance, even in the case of the bad initial condition, we adopt this setting in the following experiments.
(II) Selection of Activation Functions
Experimental setting We use the binary vector M to select the activation function for each unit. Different activation functions can be mixed in the same layer. The activation function of the i-th unit is the ReLU Frelu if mi = 1 and the hyperbolic tangent Ftanh if mi = 0. In other words, the activation function is defined as mi Frelu (Xi ) + (1 − mi )Ftanh (Xi ), where Xi denotes the input to the activation of the i-th unit.
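As a concrete illustration of the definition mi Frelu(Xi) + (1 − mi)Ftanh(Xi), the following sketch applies the mask element-wise (toy inputs; not the paper's code):

```python
import numpy as np

def mixed_activation(x, m):
    """The i-th unit uses ReLU where m_i == 1 and tanh where m_i == 0."""
    return m * np.maximum(x, 0.0) + (1 - m) * np.tanh(x)

x = np.array([-1.0, -1.0, 2.0, 2.0])
m = np.array([1.0, 0.0, 1.0, 0.0])
out = mixed_activation(x, m)
# units 0 and 2 pass through ReLU, units 1 and 3 through tanh
```

Writing the choice as a convex-looking combination of the two activations makes the forward pass a single vectorized expression, so switching functions per unit adds essentially no branching cost.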
The base network structure used in this experiment consists of three fully connected hidden layers with 1,024 units
for each layer. The number of activation functions to be decided is d = 3072. We use the MNIST dataset. In this experiment, we report the result of the method using (4) with
λ = 2 and denote it as AdaptiveActivation. We also run the
method using the fixed Bernoulli parameters of 0.5 and ones
using the ReLU and hyperbolic tangent activations for all
units; we denote them as StochasticActivation, ReLU, and
tanh, respectively.
The data sample size and the number of epochs are set to N = 64 and 1,000 for AdaptiveActivation, respectively, and N = 128 and 2,000 for the other algorithms. Note that the number of epochs is greater than in the previous experiment. This is because the number of bits to be optimized (i.e., 3,072) is significantly greater than in the previous setting (i.e., 31). We
initialize the learning rate of SGD by 0.01 and the Bernoulli
parameters by θinit = 0.5. The experiments are conducted
over 10 trials using the same settings.
Result and discussion Table 2 shows the test error and training time of each algorithm. We observe that AdaptiveActivation (stochastic) outperforms StochasticActivation, in which the Bernoulli parameters stay constant, suggesting that the optimization of such parameters by our method is effective. The predictive performance of AdaptiveActivation (deterministic) is competitive with StochasticActivation, but it is more computationally efficient than the stochastic prediction. In addition, the networks obtained by AdaptiveActivation have a better classification performance than both uniform activations: ReLU and hyperbolic tangent. Comparing the training time, we observe that the proposed method needs about twice the computational time for training compared to the fixed-structure neural networks. Our method additionally requires the computation regarding the Bernoulli distributions (e.g., equation (9)) and the operations to switch the structure. In our implementation, these are the
Table 2: Mean test errors (%) over 30 trials at the final iteration in the experiment of selection of activation functions. The values in parentheses denote the standard deviation. The training time of a typical single run is reported.

Method               | Test error (Deterministic) | Test error (Stochastic) | Training time (min.)
AdaptiveActivation   | 1.414 (0.054)              | 1.407 (0.036)           | 255
StochasticActivation | –                          | 1.452 (0.025)           | 204
ReLU                 | 1.609 (0.044)              | –                       | 120
tanh                 | 1.592 (0.069)              | –                       | 120
the l-th hidden layer is skipped (if m_l^layer = 0) or not (if m_l^layer = 1). The last LU bits, denoted as m_li^unit for l = 1, . . . , L and i = 1, . . . , U, determine whether the i-th unit of the l-th layer will be dropped (if m_li^unit = 0) or not (if m_li^unit = 1). The underlying probability distribution for m_l^layer is the Bernoulli distribution p_{θ_l^layer}, whereas the
dropout masks m_li^unit for all i = 1, . . . , U are drawn from the same Bernoulli distribution p_{θ_l^unit}. In other words, the dropout ratio is shared by the units within a layer. Let the vector of the dropout masks for the l-th layer be denoted by M_l^unit = (m_l1^unit, . . . , m_lU^unit) and p(M_l^unit) = ∏_{i=1}^{U} p_{θ_l^unit}(m_li^unit). Then, the underlying distribution of M is pθ(M) = p(M_1^unit) ∏_{l=2}^{L} p_{θ_l^layer}(m_l^layer) p(M_l^unit), and the parameter
Figure 3: Transitions of the test errors (%) of AdaptiveActivation (deterministic), ReLU, and tanh.
reason for the increase in computational time. As our implementation is naive, the computational time may be reduced by a more sophisticated implementation.
Figure 3 illustrates the transitions of the test errors of AdaptiveActivation (deterministic), ReLU, and tanh. We observe that the convergence of AdaptiveActivation is slow, but it achieves better results in the last iterations. More iterations are needed in our method to tune the structure and weight parameters simultaneously.
Figure 4 shows an example of the histograms of θ in each layer after training. In our setting, a larger value of θ means that the unit tends to become ReLU. Interestingly, only the histogram of the first layer is biased toward ReLU. We have observed that the number of units with θ ≥ 0.5 increases to about 2,000 through training.
(III) Adaptation of Stochastic Network
Experimental setting Our proposed framework can be applied to optimize more than one type of hyper-parameter. To demonstrate this, we adapt the dropout ratio as well as the layer-skip ratio at the same time. We use the MNIST dataset in this experiment.
The network model is defined as follows. We consider a fully connected network consisting of L = 10 hidden layers with U = 1,024 units for each layer as the base network. The configuration of the network is identified by LU + (L − 1) binary parameters M. The first L − 1 bits, denoted as m_l^layer for l = 2, . . . , L, determine whether
vector is θ = (θ_2^layer, . . . , θ_L^layer, θ_1^unit, . . . , θ_L^unit) ∈ R^{2L−1}.
Since the underlying probability distribution of M is no longer the independent Bernoulli model, the natural gradient of the log-likelihood is different from M − θ. Yet, the natural gradient of the log-likelihood of our network model is easily derived as

∇̃_{θ_l^layer} ln pθ(M) = m_l^layer − θ_l^layer,

∇̃_{θ_l^unit} ln pθ(M) = (1/U) ∑_{i=1}^{U} m_li^unit − θ_l^unit.
(It demonstrates the generality of our methodology to some
extent.) We use the same training parameters as used in the
first experiment and report the result of the method using (4)
with λ = 2 and denote it as AdaptiveNet.
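The two natural-gradient expressions translate directly into code. The sketch below uses the experiment's L = 10 and U = 1,024 with random masks and uniform θ purely for illustration, and shows the per-layer averaging over unit bits:

```python
import numpy as np

L, U = 10, 1024                       # hidden layers and units per layer

def natural_grad(m_layer, m_unit, theta_layer, theta_unit):
    """Natural gradient of ln p_theta(M) for the layer-skip/unit-dropout model.

    m_layer: (L-1,) skip bits for layers 2..L
    m_unit:  (L, U) dropout bits; one shared theta_l^unit per layer
    """
    g_layer = m_layer - theta_layer                # m_l^layer - theta_l^layer
    g_unit = m_unit.mean(axis=1) - theta_unit      # (1/U) sum_i m_li^unit - theta_l^unit
    return g_layer, g_unit

rng = np.random.default_rng(3)
theta_layer = np.full(L - 1, 0.5)
theta_unit = np.full(L, 0.5)
m_layer = (rng.random(L - 1) < theta_layer).astype(float)
m_unit = (rng.random((L, U)) < theta_unit[:, None]).astype(float)
g_layer, g_unit = natural_grad(m_layer, m_unit, theta_layer, theta_unit)
```

Because the U unit bits of a layer share one parameter, their natural gradient averages the bits, so the unit-level signal is much smoother than the single-bit layer-skip signal.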
We apply a simple Bayesian optimization to the same problem to compare the computational cost with a static hyper-parameter optimization method. We use the GPyOpt package (version 1.0.3, http://github.com/SheffieldML/GPyOpt) for the Bayesian optimization implementation and adopt the default parameter setting. The Bernoulli parameters of the stochastic network mentioned above are optimized as the hyper-parameters. The problem dimension is d = 2L − 1 = 19 and the range of the search space is [1/d, 1 − 1/d]^d. The training data is split into training and validation sets in the ratio of nine to one; the validation set is used to evaluate a hyper-parameter after training the neural network with a candidate hyper-parameter. We fix the parameters of the Bernoulli distribution during the network training. After searching for the hyper-parameter, we retrain the model using all training data and report the error
for test data. For fair comparison, we include the vector of
Figure 4: The histograms of θ in the first, second, and third hidden layers obtained by AdaptiveActivation after training. A larger value of θi means that the unit tends to become ReLU. These histograms were created on a certain run, but the histograms obtained on the other runs are similar.
Table 3: Test errors (%) and computational time of the proposed method (AdaptiveNet) and the Bayesian optimization (BO) with different budgets in the experiment of adaptation of the stochastic network. The mean values over 30 trials are reported for the proposed method, and the value in parentheses denotes the standard deviation. For the Bayesian optimization, the result of a single run is reported.

Method          | Test error (%) | Time (hour)
AdaptiveNet     | 1.645 (0.072)  | 1.01
BO (budget=10)  | 1.780          | 9.59
BO (budget=20)  | 1.490          | 18.29
(0.5, . . . , 0.5), which is the initial parameter of the proposed
method, to the initial points for the Bayesian optimization.
We use the same setting for the network training as used in
the proposed method.
Result and discussion Table 3 shows the test errors of the stochastic networks obtained by the proposed method and by the Bayesian optimization with different budgets, where the budget indicates the number of hyper-parameters to be evaluated. We use the stochastic prediction with 100 samples to calculate the test errors. We observe that the computational time of the Bayesian optimization increases proportionally with the budget, while our method is more computationally efficient. The proposed method can find a competitive stochastic network with a reasonable computational time. We observed that the networks obtained by the proposed method skip about five to seven layers and their units are not dropped with high probability. We also observed the same tendency for the network obtained by the Bayesian optimization. Although the Bayesian optimization could find a better configuration in this case within several tens of budgets, it probably needs many budgets if the dimension of the hyper-parameters increases, as in the setting of experiment (II).
(IV) Selection of Connections for DenseNets
Experimental setting In this experiment, we use the dense convolutional networks (DenseNets) (Huang et al. 2017), a state-of-the-art architecture for image classification, as the base network structure. DenseNets contain several dense blocks and transition layers. A dense block comprises Lblock layers, each of which implements a nonlinear transformation with a batch normalization (BN) (Ioffe and Szegedy 2015) followed by a ReLU activation and a 3 × 3 convolution. The size of the output feature-maps of each layer is the same as that of the input feature-maps. Let k be the number of output feature-maps of each layer, called the growth rate; the l-th layer in the dense block receives k(l − 1) + k0 feature-maps, where k0 indicates the number of input feature-maps to the dense block. Thus, the number of output feature-maps of the dense block is kLblock + k0. The transition layer is located between the dense blocks and consists of a batch normalization, a ReLU activation, and a 1×1 convolutional layer followed by a 2×2 average pooling layer. The detailed architecture of DenseNets can be found in (Huang et al. 2017).
We decide the existence of the connections between layers in each dense block according to the binary vector M. Namely, we remove a connection when the corresponding bit equals zero. Let us denote the k-th layer’s output feature-maps by Yk; then, the input feature-maps to the l-th layer are computed by (m_p Y0, . . . , m_{p+l−1} Y_{l−1}), where p = l(l − 1)/2. We use the simplest DenseNet of depth 40 (k = 12 and Lblock = 12) reported in (Huang et al. 2017) as the base network structure, containing three dense blocks and two transition layers. For this setting, the dimension of M becomes d = 273.
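The indexing can be sketched as follows. Note the bit count: with this indexing, layer l consumes l bits, so twelve layers use 78 bits per block, and d = 273 = 3 × 91 only works out if the concatenation feeding each transition layer is also masked as a thirteenth consumer; that interpretation is my assumption, not stated in the text:

```python
import numpy as np

def masked_inputs(feature_maps, m, l):
    """Masked input to the l-th layer: (m_p Y_0, ..., m_{p+l-1} Y_{l-1}),
    with p = l(l-1)/2 indexing into the block's flat bit vector m."""
    p = l * (l - 1) // 2
    return [m[p + k] * y for k, y in enumerate(feature_maps[:l])]

L_block = 12
bits_if_transition_masked = (L_block + 1) * (L_block + 2) // 2   # 91 per block

# Toy check with 2-channel "feature maps" Y_0..Y_3 and an arbitrary bit vector.
Y = [np.full(2, float(k)) for k in range(4)]
m = np.array([1, 0, 1, 1, 0, 1, 0, 1, 1, 1], dtype=float)
ins3 = masked_inputs(Y, m, 3)        # uses bits m[3], m[4], m[5] on Y_0, Y_1, Y_2
```

Multiplying a feature-map by its bit (rather than dropping it from the list) keeps the concatenated input shape fixed, so the convolution weights need no reshaping when connections are toggled during training.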
In this experiment, we use the CIFAR-10 and CIFAR-100
datasets in which the numbers of classes are 10 and 100,
respectively. The numbers of training and test images are
50,000 and 10,000, respectively, and the size of the images
is 32 × 32. We normalize the data using the per-channel
means and the standard deviations in the preprocessing. We
use the data augmentation method based on (He et al. 2016;
Huang et al. 2017): padding 4 pixels on each side followed
by choosing a random 32 × 32 crop from the padded image
and random horizontal flips on the cropped 32 × 32 image.
We report the results of the method using (4) with λ = 2
(AdaptiveConnection) and also run the normal DenseNet
for comparison. The data sample size and the number of
epochs are set to N = 32 and 300 for AdaptiveConnection, respectively, and N = 64 and 600 for the normal DenseNet.

Table 4: Test errors (%) at the final iteration in the experiment of connection selection for DenseNets. The values in parentheses denote the standard deviation.

Method                             | CIFAR-10 (Deterministic) | CIFAR-10 (Stochastic) | CIFAR-100 (Deterministic) | CIFAR-100 (Stochastic)
AdaptiveConnection                 | 5.427 (0.167)            | 5.050 (0.147)         | 25.461 (0.408)            | 25.518 (0.380)
Normal DenseNet (40 depth, k = 12) | 5.399 (0.153)            | –                     | 25.315 (0.409)            | –

We initialize the weight parameters using
the method described in (He et al. 2015), and the learning
rate of SGD and the initial Bernoulli parameters by 0.1 and 0.5, respectively. We conduct the experiments with the same settings over 20 and 5 trials for AdaptiveConnection and the normal DenseNet, respectively.
Result and discussion Table 4 shows the test errors of AdaptiveConnection and the normal DenseNet at the final iteration. In this case, the stochastic prediction is slightly better than the deterministic one, but the difference is not significant. The difference in predictive performance between AdaptiveConnection and the normal DenseNet is not significant for the CIFAR-100 dataset, whereas AdaptiveConnection is inferior for the CIFAR-10 dataset. We, however, observed that the obtained Bernoulli parameters are distributed close to 0.0 or 1.0, as in Figure 1. We observed that about 70 connections are removed with high probability for both datasets. Counting the weight parameters of those removed connections, we found that about 10% of the weight parameters can be removed without suffering from performance deterioration for CIFAR-100.
parameters to optimize them within a standard stochastic gradient descent framework, whereas the proposed method can optimize network structures that are not necessarily differentiable through parametric probability distributions. Although this paper focuses on the Bernoulli distributions (0/1 bits), the proposed framework can be used with other distributions, such as categorical distributions representing several categorical variables (A/B/C/...). Indeed, it is rather easy to derive the update of the distribution parameters if the distributions are in exponential families. Since it is difficult to design a model that represents categorical variables by differentiable parameters, the proposed framework is more flexible than the existing dynamic optimization methods in the sense that it can handle a wider range of structural optimization problems.
One direction of future work is to extend the proposed method to treat variables other than binary variables, i.e., categorical variables, and to optimize larger and more complex networks. Another direction of future work is to introduce a prior distribution for θ; one can incorporate a regularization term to obtain a sparse and compact representation through the prior distribution of θ.
Conclusion
In this paper, we proposed a methodology that dynamically and indirectly optimizes the network structure parameters by using probabilistic models. We instantiated the proposed method using the Bernoulli distributions and simultaneously optimized their parameters and the network weights. We conducted experiments where we optimized four different network components: layer skips, activation functions, layer skips and unit dropouts, and connections. We observed that the proposed method could find the learnable layer size and an appropriate mix of the activation functions. We also showed that our method can dynamically optimize more than one type of hyper-parameter and obtain competitive results with a reasonable training time. In the experiment of connection selection for DenseNets, the proposed method showed competitive results with a smaller number of connections.
The proposed method is computationally more efficient than static structure optimization in general, which is validated in experiment (III) (Table 3). A static optimization method such as Bayesian optimization may find a better hyper-parameter configuration, but it takes far more time. This is also observed in Table 3.
The existing dynamic structure optimization methods need to parameterize the network structure by differentiable
Acknowledgments
This work is partially supported by the SECOM Science and Technology Foundation.
References
[Amari 1998] Amari, S. 1998. Natural gradient works efficiently in learning. Neural Computation 10(2):251–276.
[Ba and Frey 2013] Ba, J., and Frey, B. 2013. Adaptive
dropout for training deep neural networks. In Advances
in Neural Information Processing Systems 26 (NIPS 2013),
3084–3092.
[Baluja 1994] Baluja, S. 1994. Population-based incremental learning: A method for integrating genetic search based
function optimization and competitive learning. Technical
Report Tech Rep CMU-CS-94-163, Carnegie Mellon University.
[Hansen and Ostermeier 2001] Hansen, N., and Ostermeier,
A. 2001. Completely derandomized self-adaptation in evolution strategies. Evolutionary Computation 9(2):159–195.
[Hansen, Müller, and Koumoutsakos 2003] Hansen, N.; Müller, S. D.; and Koumoutsakos, P. 2003. Reducing the time complexity of the derandomized evolution strategy with covariance matrix adaptation (CMA-ES). Evolutionary Computation 11(1):1–18.
[Harik, Lobo, and Goldberg 1999] Harik, G. R.; Lobo, F. G.;
and Goldberg, D. E. 1999. The compact genetic algorithm.
IEEE Transactions on Evolutionary Computation 3(4):287–
297.
[He et al. 2015] He, K.; Zhang, X.; Ren, S.; and Sun, J. 2015.
Delving deep into rectifiers: Surpassing human-level performance on imagenet classification. In Proceedings of the
2015 IEEE International Conference on Computer Vision
(ICCV 2015), 1026–1034.
[He et al. 2016] He, K.; Zhang, X.; Ren, S.; and Sun, J. 2016.
Deep residual learning for image recognition. In Proceedings of the 2016 IEEE Conference on Computer Vision and
Pattern Recognition (CVPR 2016), 770–778.
[Huang et al. 2016] Huang, G.; Sun, Y.; Liu, Z.; Sedra, D.;
and Weinberger, K. Q. 2016. Deep networks with stochastic
depth. In Proceedings of the 14th European Conference on
Computer Vision (ECCV 2016), volume 9908 of LNCS, 646–
661. Springer.
[Huang et al. 2017] Huang, G.; Liu, Z.; van der Maaten, L.;
and Weinberger, K. Q. 2017. Densely connected convolutional networks. In Proceedings of the 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR
2017), 4700–4708.
[Ioffe and Szegedy 2015] Ioffe, S., and Szegedy, C. 2015.
Batch normalization: Accelerating deep network training by
reducing internal covariate shift. In Proceedings of the
32nd International Conference on Machine Learning (ICML
2015), volume 37, 448–456. PMLR.
[Larrañaga and Lozano 2001] Larrañaga, P., and Lozano,
J. A. 2001. Estimation of Distribution Algorithms: A New
Tool for Evolutionary Computation. Kluwer Academic Publishers.
[Loshchilov and Hutter 2016] Loshchilov, I., and Hutter, F.
2016. CMA-ES for hyperparameter optimization of deep
neural networks. arXiv preprint.
[Miyamae et al. 2010] Miyamae, A.; Nagata, Y.; Ono, I.; and
Kobayashi, S. 2010. Natural policy gradient methods with
parameter-based exploration for control tasks. In Advances
in Neural Information Processing Systems 23 (NIPS 2010),
1660–1668.
[Nair and Hinton 2010] Nair, V., and Hinton, G. E. 2010.
Rectified linear units improve restricted boltzmann machines. In Proceedings of the 27th International Conference
on Machine Learning (ICML 2010), 807–814.
[Ollivier et al. 2017] Ollivier, Y.; Arnold, L.; Auger, A.; and
Hansen, N. 2017. Information-geometric optimization algorithms: A unifying picture via invariance principles. Journal
of Machine Learning Research 18:1–65.
[Real et al. 2017] Real, E.; Moore, S.; Selle, A.; Saxena, S.;
Suematsu, Y. L.; Tan, J.; Le, Q. V.; and Kurakin, A. 2017.
Large-scale evolution of image classifiers. In Proceedings
of the 34th International Conference on Machine Learning
(ICML 2017), volume 70, 2902–2911. PMLR.
[Simonyan and Zisserman 2015] Simonyan, K., and Zisserman, A. 2015. Very deep convolutional networks for largescale image recognition. In Proceedings of the 3rd In-
ternational Conference on Learning Representations (ICLR
2015).
[Singh, Hoiem, and Forsyth 2016] Singh, S.; Hoiem, D.; and
Forsyth, D. 2016. Swapout: Learning an ensemble of deep
architectures. In Advances in Neural Information Processing
Systems 29 (NIPS 2016), 28–36.
[Snoek, Larochelle, and Adams 2012] Snoek, J.; Larochelle,
H.; and Adams, R. P. 2012. Practical Bayesian optimization
of machine learning algorithms. In Advances in Neural Information Processing Systems 25 (NIPS 2012), 2951–2959.
[Srinivas and Babu 2016] Srinivas, S., and Babu, R. V. 2016.
Learning neural network architectures using backpropagation. In Proceedings of the British Machine Vision Conference (BMVC 2016).
[Srivastava et al. 2014] Srivastava, N.; Hinton, G.; Krizhevsky, A.; Sutskever, I.; and Salakhutdinov, R. 2014. Dropout: A simple way to prevent neural networks from overfitting. Journal of Machine Learning Research 15:1929–1958.
[Suganuma, Shirakawa, and Nagao 2017] Suganuma, M.;
Shirakawa, S.; and Nagao, T. 2017. A genetic programming
approach to designing convolutional neural network architectures. In Proceedings of the Genetic and Evolutionary
Computation Conference 2017 (GECCO 2017), 497–504.
[Sutskever et al. 2013] Sutskever, I.; Martens, J.; Dahl, G.;
and Hinton, G. 2013. On the importance of initialization and
momentum in deep learning. In Proceedings of the 30th International Conference on Machine Learning (ICML 2013),
volume 28, 1139–1147. PMLR.
[Tokui et al. 2015] Tokui, S.; Oono, K.; Hido, S.; and Clayton, J. 2015. Chainer: a next-generation open source framework for deep learning. In Proceedings of the Workshop on
Machine Learning Systems (LearningSys) in the 29th Annual Conference on Neural Information Processing Systems
(NIPS 2015), 1–6.
[Wan et al. 2013] Wan, L.; Zeiler, M.; Zhang, S.; Cun, Y. L.;
and Fergus, R. 2013. Regularization of neural networks using DropConnect. In Proceedings of the 30th International
Conference on Machine Learning (ICML 2013), volume 28,
1058–1066. PMLR.
[Wen et al. 2016] Wen, W.; Wu, C.; Wang, Y.; Chen, Y.; and
Li, H. 2016. Learning structured sparsity in deep neural
networks. In Advances in Neural Information Processing
Systems 29 (NIPS 2016), 2074–2082.
[Yi et al. 2009] Yi, S.; Wierstra, D.; Schaul, T.; and Schmidhuber, J. 2009. Stochastic search using the natural gradient.
In Proceedings of the 26th International Conference on Machine Learning (ICML 2009), 1161–1168.
[Zoph and Le 2017] Zoph, B., and Le, Q. V. 2017. Neural architecture search with reinforcement learning. In Proceedings of the 5th International Conference on Learning
Representations (ICLR 2017).
A Greedy Search Tree Heuristic for Symbolic Regression
Fabrício Olivetti de França
Center of Mathematics, Computing and Cognition (CMCC), Universidade Federal do ABC (UFABC) – Santo André, SP, Brazil. E-mail: [email protected]
arXiv:1801.01807v1 [] 4 Jan 2018
Abstract
Symbolic Regression tries to find a mathematical expression that describes the relationship of a set of explanatory variables to a measured variable. The main objective is to find a model that minimizes the error and, optionally, that also minimizes the expression size. A smaller expression can be seen as an interpretable model, considered a reliable decision model. This is often performed with Genetic Programming, which represents its solutions as expression trees. The shortcoming of this algorithm lies in this representation, which defines a rugged search space and contains expressions of any size and complexity. These pose a challenge to finding the optimal solution under computational constraints. This paper introduces a new data structure, called Interaction-Transformation (IT), that constrains the search space in order to exclude the region of larger and more complicated expressions. In order to test this data structure, a heuristic called SymTree is also introduced. The obtained results show evidence that SymTree is capable of obtaining the optimal solution whenever the target function is within the search space of the IT data structure, and competitive results when it is not. Overall, the algorithm found a good compromise between accuracy and simplicity for all the generated models.
Keywords: Symbolic Regression, Regression Analysis, Greedy Heuristic
1. Introduction
Many decision-making processes can be automated by learning a computational model from a set of observed data. For example, credit risk can be estimated by using explanatory variables related to the consumer behavior [9]. A recommender system can estimate the likelihood that a given person will consume an item given their past transactions [1].
Preprint submitted to Information Sciences
January 8, 2018
There are many techniques devised to generate such models, from simple Linear Regression [26] to more advanced universal approximators like Neural Networks [13]. The former has the advantage of being simple and easily interpretable, but the relationship must be close to linear for the approximation to be acceptable. The latter can numerically approximate any function, given the constraint that the final form of the function is pre-determined, usually as a weighted sum of a nonlinear function applied to a linear combination of the original variables. This constraint makes the regression model hard to understand, since the implications of changing the value of an input variable are not easily traced to the target variable. Because of that, these models are often called Black Box Models.
The concerns with using black box models for decision-making processes are the inability to predict what these models will do in critical scenarios and whether their responses are biased by the data used to adjust the parameters of the model.
For example, there is recent concern about how driverless cars will deal with variants of the famous Trolley problem [5, 29]. Driverless cars use classification and regression models to decide the next action to perform, such as speed up, slow down, brake, turn left or right by some degrees, etc. If the models used by these cars are difficult to understand, the manufacturer cannot be sure of the actions the car will take in extreme situations. Faced with the decision of killing a pedestrian or killing the driver, what choice will it make? Even though these situations may be rare, it is important to understand whether the model comprehends all possible alternatives to prevent life losses.
Another recent example concerns the regression models used to choose which online ad to show to a given user. It was found in [10] that the model presented a bias towards the gender of the user. Whenever the user was identified as male, the model chose ads for higher-paying jobs than when the user was identified as female. In this case, the bias was introduced by the data used as a reference to adjust the parameters of the model. Historically, the income distribution of men is skewed towards higher salaries than that of women [28].
An interpretable model could provide better insight into such concerns, since everything will be explicitly described in the mathematical expression of the model. In the example of the driverless car, an inspection of the use of variables corresponding to the location of bystanders around the car could reveal the probable actions taken by the model. Similarly, the inspection of the mathematical expression used to choose the online ads could reveal a negative correlation for the combination of salary and the female gender.
As such, an interpretable model should have both high accuracy regarding the target variable and, at the same time, be as simple as possible to allow the interpretation of the decision-making process.
Currently this type of model is being studied through Symbolic Regression [4], a field of study that aims to find a symbolic expression that fits an exemplary data set accurately. Often, a secondary objective is also included: that such an expression be as simple as possible. This is often solved by means of Genetic Programming [19], a metaheuristic from the Evolutionary Algorithms [3] field that evolves an expression tree by minimizing the model error and maximizing the simplicity of such a tree. Currently, the main challenges in this approach are that the search space induced by the tree representation does not always allow a smooth transition from the current solution towards an incremental improvement and that, since the search space is unrestricted, it allows the representation of black box models as well.
1.1. Objectives and Hypothesis
The main objective of this paper is to introduce a new data structure, named Interaction-Transformation (IT), for representing mathematical expressions, which constrains the search space by
removing the region comprising uninterpretable expressions. Additionally, a greedy divisive search
heuristic called SymTree is proposed to verify the suitability of this data structure to generate
smaller Symbolic Regression models.
The data structure simply describes a mathematical expression as a summation of polynomial
functions and transformation functions applied to the original set of variables. This data structure
restricts the search space of mathematical expressions and, as such, is not capable of representing
every possible expression.
As such, there are two hypotheses being tested in this paper:
H1. The IT data structure constrains the search space such that it is only possible to generate smaller
expressions.
H2. Even though the search space is restricted, this data structure is capable of finding function
approximations with competitive accuracy when compared to black box models.
In order to test these hypotheses, the SymTree algorithm will be applied to standard benchmark
functions commonly used in the literature. These are low-dimensional functions that
still prove to be a challenge to many Symbolic Regression algorithms. The functions will
be evaluated by means of the Mean Squared Error and the number of nodes in the tree representation
of the generated expression. Finally, these results will be compared to three standard regression
approaches (linear and nonlinear), three recent variations of Genetic Programming applied to this
problem and two other Symbolic Regression algorithms from the literature.
The experimental results will show that the proposed algorithm coupled with this data structure is indeed capable of finding the original form of the target functions whenever the particular
function is representable by the structure. Also, when the function is not representable by the IT
data structure, the algorithm can still manage to find an approximation that compromises between
simplicity and accuracy. Regarding numerical results, the algorithm performed better than the
tested Symbolic Regression algorithms in most benchmarks and was competitive when compared against an advanced black box model extensively used in the literature.
The remainder of this paper is organized as follows. Section 2 gives a brief explanation of
Symbolic Regression and its classical solution through Genetic Programming. In Section 3 some
recent work on this application is reported along with its contributions. Section 4 describes
the proposed algorithm in detail, highlighting its advantages and limitations. Section 5 explains
the experiments performed to assess the performance of the proposed algorithm and compare its
results with the algorithms described in Section 3. Finally, Section 6 summarizes the contributions
of this work and discusses some of the possibilities for future research.
2. Symbolic Regression
Consider the problem where we have collected n data points X = {x1 , ..., xn }, called explanatory
variables, and a set of n corresponding target variables Y = {y1 , ..., yn }. Each data point is described
as a vector with d measurable variables xi ∈ Rd . The goal is to find a function fˆ(x) : X → Y, also
called a model, that approximates the relationship of a given xi with its corresponding yi .
Sometimes, this can be accomplished by a linear regression, where the model is described by
a linear function, assuming that the relationship between the explanatory and target variables is
linear. When such an assumption does not hold, we can use non-linear regression techniques, such
as Artificial Neural Networks, which have theoretical guarantees of the capability of approximating
any given function. The problem with the latter is that every function has the form:
f̂(x) = Σ_{i=1}^{K} v_i · g(w_i^T · x + b_i),        (1)
where v is a vector with the regression coefficients for the nonlinear function g(.), W is a
K × d matrix containing the linear regression coefficients for every explanatory variable with respect to each nonlinear function, b is a vector with the biases for each of the K linear regressions
and g(.) is any non-constant, bounded, monotonically-increasing, continuous nonlinear function.
It is proven [12, 14] that, given the correct values of K, v, W and b, any function f(.) can be
approximated with a small error ε.
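As a plain illustration of Eq. 1 (our own sketch, with arbitrarily chosen weights and g = tanh, not values from the paper), such a network can be evaluated in a few lines of Python:

```python
import math

def mlp_eval(x, V, W, b, g=math.tanh):
    """Evaluate f^(x) = sum_i v_i * g(w_i . x + b_i) as in Eq. 1.

    V: list of K output coefficients, W: K x d weight matrix,
    b: list of K biases, g: any bounded nonlinear function.
    """
    return sum(
        v * g(sum(wj * xj for wj, xj in zip(w_row, x)) + bi)
        for v, w_row, bi in zip(V, W, b)
    )

# A tiny K = 2, d = 1 network; all values chosen arbitrarily for illustration.
y = mlp_eval([0.5], V=[1.0, -1.0], W=[[2.0], [1.0]], b=[0.0, 0.5])
```

Only the coefficients V, W and b are free; the function form itself stays fixed, which is exactly the limitation discussed next.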
If the goal of a given problem is just to obtain a numerical approximation of such a function,
this nonlinear model may suffice. But, if there is a need for understanding, inspecting or even
complementing the obtained model, this approximation becomes impractical, since the function
form is fixed and only the coefficients are optimized.
In these situations we need to find not only a function f̂ that approximates the observed relationship by minimizing the error, but also one that maximizes the simplicity or interpretability.
This is often achieved by means of Symbolic Regression.
In Symbolic Regression both the function form and the coefficients are searched, with the objective of minimizing the approximation error and, in some cases, also maximizing the simplicity
of the expression. Simplicity, in the context of Symbolic Regression, refers to the
ease of interpretation. For example, consider the following functions:
f(x) = x − x³/6 + x⁵/120 − x⁷/5040

f(x) = 16x(π − x) / (5π² − 4x(π − x))

f(x) = sin(x)
Assuming that the third function is the target, the first two functions return a reasonable approximation within a limited range of x values. If the goal is to interpret the function
behavior, instead of just obtaining a numerical approximation, the target function is the simplest
form and the easiest to understand. The simplicity is often measured by means of the size of the
expression tree or the number of nonlinear functions used in the expression.
The expression tree is a tree data structure representing a given expression. Each node of the
tree may represent an operator, a function, a variable or a constant. The nodes representing an
operator or a function must have a set of child nodes for each of the required parameters. The
variables and constants should all be leaf nodes. For example, the expression x² · (x + tan y) can
be represented by the tree depicted in Fig. 1. In this example the length of the expression could be
measured by the number of nodes (8) or the height of the tree (3). Additionally, the simplicity can
be measured by penalizing the bloat¹ of the expression for the number of nonlinear functions
used, the size of the composition of functions and the total number of variables used.
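The two size measures mentioned above can be illustrated with a small sketch (the node class and names are ours, not part of any GP library):

```python
class Node:
    """A node of an expression tree: an operator, function, variable or constant."""
    def __init__(self, value, *children):
        self.value = value
        self.children = list(children)

def size(node):
    """Number of nodes in the expression tree."""
    return 1 + sum(size(c) for c in node.children)

def height(node):
    """Number of edges on the longest root-to-leaf path."""
    if not node.children:
        return 0
    return 1 + max(height(c) for c in node.children)

# x^2 * (x + tan(y)), the tree of Fig. 1
tree = Node('*',
            Node('pow', Node('x'), Node('2')),
            Node('+', Node('x'), Node('tan', Node('y'))))

assert size(tree) == 8    # matches the node count in the text
assert height(tree) == 3  # matches the height in the text
```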
This expression tree is often used as the representation for an evolutionary algorithm called Genetic Programming in order to find the optimal expression for a regression problem.
2.1. Genetic Programming
The Genetic Programming (GP) algorithm, in the context of Symbolic Regression, tries to find
the optimal expression that minimizes the approximation error. As a secondary objective, it often
also seeks to maximize the simplicity of the expression, through regularization.
As usual for any evolutionary algorithm, GP starts with a population of randomly generated
solutions, usually by means of the Ramped half-and-half procedure to improve the variability of
¹In GP the term bloat is used to refer to large and complicated expressions.
Figure 1: Expression tree for the expression x2 · (x + tan (y)).
the structure [19], and afterwards iterates through the procedures of reproduction, mutation and
selection. The solutions are represented by means of expression trees.
The reproduction procedure tries to combine the good parts of two or more solutions, creating
a new and improved solution; this procedure works well whenever a part of the solution representation translates to a subproblem of the original problem.
The mutation, or perturbation, operator is responsible for introducing small changes to a given
solution in order to prevent the search process from getting stuck in a local optimum. This operator
works well whenever a small change to a solution does not change the fitness by a large amount.
For example, in numerical optimization the mutation is usually the addition of a random Gaussian vector such that |f(x) − f(x + σ)| < ε.
But the expression tree representation does not guarantee any of these properties for the common evolutionary operators. For example, given the expression tree in Fig. 2(a), a possible mutation may change the addition operator to the multiplication operator, resulting in the expression in
Fig. 2(b). In Fig. 2(c) we can see a possible result of the application of the crossover operator; in
this situation an entire sub-tree is replaced by a new one. As we can see from the respective plots
of each expression, they approximate completely different relations.
Many researchers have created specialized operators to alleviate the problems with the standard operators. In the next section, we will summarize some of the most recent proposals in the literature.
Figure 2: Example of the application of mutation (b) and crossover (c) operators in the expression x + cos (x) (a).
3. Literature Review
Recently, many extensions to the canonical GP, or even new algorithms, were proposed in
order to cope with the shortcomings pointed out in the previous section. This section highlights
some of the recent publications that reported improvements over previous works.
In [30] the authors propose neat-GP, a GP-based algorithm with mechanisms borrowed
from the Neuroevolution of Augmenting Topologies [27] (NEAT) algorithm and the Flat Operator
Equalization bloat control method for GP. From the former it borrows the speciation mechanism
through fitness sharing [24, 11], forcing the crossover operator to be applied only on similar solutions and, from the latter, it maximizes the simplicity of the generated expressions by encouraging a
uniform distribution of tree sizes among solutions of the population. The authors tested neat-GP against the classic GP algorithm and GP with the Flat Operator Equalization. Additionally,
the authors also tested different variations of neat-GP by replacing operators, selection and fitness
sharing mechanisms. For every tested function, at least one variation of the proposed algorithm
achieved the best solution.
Instead of modifying the search operators, in [17] the authors proposed the use of a surrogate
model to reduce the fitness calculation cost, thus allowing a higher number of iterations within a
time frame. In their proposal, called Semantic Surrogate Genetic Programming (SSGP), the fitness
approximation is composed of a linear equation controlling the number of training samples used
to calculate a partial fitness from the expression tree and the application of the k-NN algorithm to
infer the remaining training samples. When comparing the results of SSGP against four other Genetic Programming variants, the authors showed that their algorithm could produce solutions comparable to, and in some situations even better than, the contenders.
In [18] the authors also explored the use of a surrogate model to alleviate the cost of fitness
evaluation. The proposed algorithm, Surrogate Genetic Programming (sGP), trains two Radial
Basis Function Neural Networks [21] (RBFN) in order to first map the semantic space (described
by the expression tree) into the output space (composed of the target vector) and then another
network maps the output space into the fitness space. The sGP uses the surrogate model with 40%
of the population while the remaining solutions are fully evaluated. Comparing the sGP against
three other variants, the authors showed that their approach obtained the best results for every
tested function using the same number of function evaluations.
The algorithm named Evolutionary Feature Synthesis (EFS), introduced in [2], tries to fit a model of
the form:
f̂(x) = Σ_{i=1}^{M} w_i · h_i(x),        (2)
that minimizes the squared error. Differently from Genetic Programming, the algorithm tries to
evolve a population of M terms instead of a population of expressions, so at every step a new
term can be created, from a combination of terms of the current population, and old ones are
discarded with probability inversely proportional to |w_i|. The function h_i(x) can be any composition
of functions with one and two variables. The authors showed that this fixed model form was
capable of finding more accurate models when compared to traditional Genetic Programming.
Finally, in [15] the authors expanded the idea of the Fast Function Extraction algorithm [20]
(FFX) by creating the hybrid FFX/GP algorithm. The FFX algorithm enumerates the binary interactions between the original variables of the problem and some polynomial variations of such
variables. After this enumeration, different linear models are created by using the ElasticNet linear
regression [31], which acts as a feature selection mechanism. The ElasticNet is applied with different regularization parameters, rendering several different solutions. This set of solutions is then
used by FFX/GP to generate a new data set by extending the variable space with every unique interaction found in the set of solutions. The GP algorithm is then applied to this new data set. This
algorithm was tested on a large set of randomly generated polynomial functions and the results
showed that FFX/GP improved the capabilities of FFX when dealing with higher order polynomials.
4. Constrained Representation for Symbolic Regression
The overall idea introduced in this section is that, if a given representation for Symbolic Regression does not include bloated mathematical expressions, it will allow the algorithms to
focus the search only on the subset of expressions that can be interpreted.
For this purpose, a new data structure used to represent the search space of mathematical
expressions will be introduced, followed by a heuristic algorithm that makes use of such a representation to find approximations to nonlinear functions.
4.1. Interaction-Transformation Data Structure
Consider the regression problem described in Sec. 2, consisting of n data points described by a
d-dimensional vector of variables X and corresponding target variables Y. The goal is to find the
simplest function form f̂ : R^d → R that minimizes the approximation error to the target variables.
Let us describe the approximation to the target function by means of a linear regression of
functions in the form:
f̂(x) = Σ_i w_i · g_i(x),        (3)
where w_i is the i-th coefficient of a linear regression and g_i(.) is the i-th function (nonlinear or
linear). The function g(.) is a composition g(.) = t(.) ◦ p(.), with t : R → R a one-dimensional transformation function and p : R^d → R a d-dimensional interaction function. The
interaction function has the form:
p(x) = Π_{i=1}^{d} x_i^{k_i},        (4)
where k_i ∈ Z is the exponent of the i-th variable.
This approximation function (Eq. 3) is generic enough to correctly describe many different
functions. Following the previous example, the function sin(x) can be described as:

f̂(x) = 1 · sin(x).

In this example, there is only one term in the summation of Eq. 3, the single coefficient w_1 = 1,
and the composition function g(x) is simply the composition of t(z) = sin(z) and p(x) = x. As a
more advanced example, suppose the target function is f(x) = 3.5 sin(x_1^2 · x_2) + 5 log(x_2^3 / x_1). This
function could be described as:

f̂(x) = 3.5 · g_1(x) + 5 · g_2(x),

with

t_1(z) = sin(z)        p_1(x) = x_1^2 · x_2
t_2(z) = log(z)        p_2(x) = x_1^{-1} · x_2^3.
Notice that Eq. 3 does not cover more complicated mathematical expressions such as f(x) = sin(log(cos(x))), which prevents the search algorithm from finding
bloated expressions. On the other hand, this representation also does not cover expressions
such as f(x) = sin(x_1^2 + x_2)/x_3 or f(x) = sin(x_1^2 + 2 · x_2), which could be deemed simple.
As a comparison with the EFS algorithm, introduced in Sec. 3, notice that in the approximation
function described in Eq. 2, h_i(x) can be any composition of functions without constant values. From
the previous paragraph, this means it can represent the first and second functions, but not the third
one.
A computational representation of the function described in Eq. 3 can be derived from a set
T = {t_1, ..., t_n} of terms. Each term represents a tuple (P, t), with P = [p_1, ..., p_d] representing an interaction of the original variables, with each variable i having an exponent p_i, and t a transformation
function.
As previously indicated, the transformation function may be any mathematical function t :
R → R, taking one value and returning one value in the real numbers domain. In order to maintain
consistency of representation, we use the identity function, id(x) = x, to represent the cases where
we do not apply any transformation.
As an example of this representation, consider a three-variable regression problem with the
variable set x = {x1, x2, x3} and the function corresponding to a linear regression of these variables,
w1 · x1 + w2 · x2 + w3 · x3. Such an expression would be represented as:
T = {t1 , t2 , t3 }
t1 = ([1, 0, 0], id)
t2 = ([0, 1, 0], id)
t3 = ([0, 0, 1], id),
(5)
and the linear regression Ŷ = w · T is solved to determine the coefficients w.
Similarly, the function w_1 · x_1^3 · x_2 + w_2 · sin(x_3) would be represented as:
T = {t1 , t2 }
(6)
t1 = ([3, 1, 0], id)
(7)
t2 = ([0, 0, 1], sin),
(8)
again solving the regression Ŷ = w · T in order to obtain the original function.
Given a set of terms T and a set of weights W associated with each term, the computational
complexity of evaluating the mathematical expression represented by T for a given d-dimensional
point x can be determined by means of the number of terms n and the dimension d. Assuming a constant cost for calculating any transformation function or the power of a number, each term requires the
evaluation of d power functions plus a transformation function, and each term must also be multiplied by
its corresponding weight. As such, the computational complexity is O(n · (d + 2)), or simply O(n · d).
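A minimal Python sketch of this representation and of the O(n · d) evaluation (the helper names are ours; the paper does not prescribe an implementation):

```python
import math

# A term is a tuple (P, t): an exponent vector P and a transformation t.
def make_term(exponents, transform=lambda z: z):  # id(x) = x by default
    return (exponents, transform)

def evaluate(terms, weights, x):
    """Evaluate f^(x) = sum_i w_i * t_i(prod_j x_j ** p_ij), as in Eqs. 3-4."""
    total = 0.0
    for w, (P, t) in zip(weights, terms):
        interaction = 1.0
        for xj, pj in zip(x, P):
            interaction *= xj ** pj          # p(x) = prod x_j^{k_j}
        total += w * t(interaction)          # w * (t o p)(x)
    return total

# w1 * x1^3 * x2 + w2 * sin(x3) with w = (2, 1), evaluated at x = (1, 2, 0):
terms = [make_term([3, 1, 0]), make_term([0, 0, 1], math.sin)]
value = evaluate(terms, [2.0, 1.0], [1.0, 2.0, 0.0])  # 2*2 + sin(0) = 4.0
```

Each term costs d power evaluations plus one transformation and one multiplication, matching the O(n · (d + 2)) count above.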
4.2. Symbolic Regression Search Tree
The general idea of the proposed algorithm is to perform a tree-based search where the root
node is simply a linear regression of the original variables and every child node is an expansion
of the parent expression. The search is performed by expanding the tree in a breadth-first manner,
where every child of a given parent is expanded before exploring the next level of the tree.
The algorithm starts with a set of terms, each of which corresponds to one of the variables
without any interaction or transformation, as exemplified in Eq. 5. This is labeled as the root node
of our tree and this node is given a score calculated by:
of our tree and this node is given a score calculated by:
score(model) =
1
,
1 + MAE(model)
(9)
where MAE(.) returns the mean absolute error of the linear regression model learned from the
expression. After that, three different operators are applied to this expression: interaction, inverse
interaction or transformation.
In the interaction operator, every combination of terms (t_i, t_j) is enumerated, creating the term
t_k = (P_i + P_j, id); the addition operation of polynomials is simply the sum of the vectors P_i and
P_j. Likewise, the inverse interaction creates the term t_k = (P_i − P_j, id). Finally, the transformation
operator creates new terms by changing the associated function of every t_i with every function in
a list of transformation functions.
Following the example given in Eq. 5, the interaction operator would return the set {t1 + t1 , t1 +
t2 , t1 + t3 , t2 + t2 , t2 + t3 , t3 + t3 } which would generate the new terms:
t4 = ([2, 0, 0], id)
t5 = ([1, 1, 0], id)
t6 = ([1, 0, 1], id)
t7 = ([0, 2, 0], id)
t8 = ([0, 1, 1], id)
t9 = ([0, 0, 2], id)
Given a set of n_i terms in the i-th node of the search tree, this operation has a computational complexity of O(n_i^2).
The inverse interaction operation would return the set {t1 − t2 , t1 − t3 , t2 − t3 , t2 − t1 , t3 − t1 , t3 − t2 }
and the new terms:
t10 = ([1, −1, 0], id)
t11 = ([1, 0, −1], id)
t12 = ([0, 1, −1], id)
t13 = ([−1, 1, 0], id)
t14 = ([−1, 0, 1], id)
t15 = ([0, −1, 1], id)
Similarly, this operation has a computational complexity of O(n_i^2).
Finally, given the set of functions {sin, log} the transformation operator would return:
t16 = ([1, 0, 0], sin)
t17 = ([1, 0, 0], log)
t18 = ([0, 1, 0], sin)
t19 = ([0, 1, 0], log)
t20 = ([0, 0, 1], sin)
t21 = ([0, 0, 1], log)
Given a set of m transformation functions and n_i terms, this operation has a computational
complexity of O(m · n_i). The application of all these operations (i.e., each step of the algorithm)
then has a complexity of O(m · n_i + n_i^2).
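The three operators act only on the exponent vectors and transformation tags, so their enumeration can be sketched directly; this is our own illustration, reproducing the candidate sets of the running example:

```python
from itertools import combinations_with_replacement, product

def interaction(P_list):
    # every pair (t_i, t_j), including i == j, summed component-wise
    return [[a + b for a, b in zip(Pi, Pj)]
            for Pi, Pj in combinations_with_replacement(P_list, 2)]

def inverse_interaction(P_list):
    # every ordered pair with i != j, subtracted component-wise
    return [[a - b for a, b in zip(Pi, Pj)]
            for Pi, Pj in product(P_list, repeat=2) if Pi != Pj]

def transformation(P_list, functions):
    # re-tag every exponent vector with every transformation function
    return [(P, f) for P in P_list for f in functions]

root = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]          # the root node of Eq. 5
new_int = interaction(root)                        # 6 terms: t4..t9
new_inv = inverse_interaction(root)                # 6 terms: t10..t15
new_trans = transformation(root, ['sin', 'log'])   # 6 terms: t16..t21
```

For three root terms this yields the 6 + 6 + 6 candidates listed above, consistent with the O(n_i^2) and O(m · n_i) counts.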
After this procedure, every term that produces an indetermination (e.g., log(0)) on the current
data is discarded from the set of new terms. The score of each remaining term is then calculated
by inserting the term into the parent node expression and calculating the model score with Eq. 9.
Those terms that obtain a score smaller than the score of their parent are eliminated.
Finally, a greedy heuristic is performed to generate the child nodes. This heuristic expands the
parent node expression by inserting each generated term sequentially and recalculating the score
of the new expression; whenever the addition of a new term reduces the current score, the term is
removed and inserted into a list of unused terms. After every term is tested, this new expression
becomes a single child node and the process is repeated with the terms stored in the unused terms
list, generating other child nodes. This is repeated until the unused terms list is empty. Notice that,
because only the terms that improved upon the parent expression are used in this step, every term
will eventually be used in one of the child nodes.
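The greedy partition of the candidate terms into child nodes can be sketched as follows; the toy score in the usage example (distance of a sum to a target of 7) merely stands in for Eq. 9, and the safety check is our addition, not part of the described heuristic:

```python
def greedy_children(parent, candidates, score):
    """Partition improving candidate terms into child expressions.

    Terms are added one at a time; a term that lowers the score goes to
    the unused list and seeds a later child (a sketch of the heuristic
    described in the text, not the authors' exact implementation).
    """
    children = []
    unused = list(candidates)
    while unused:
        expr, leftover = list(parent), []
        for term in unused:
            if score(expr + [term]) > score(expr):
                expr.append(term)
            else:
                leftover.append(term)
        if expr == list(parent):     # safety: no term improved, stop
            break
        children.append(expr)
        unused = leftover
    return children

# Toy score: expressions are lists of numbers, the target sum is 7.
score = lambda e: 1.0 / (1.0 + abs(sum(e) - 7))
kids = greedy_children([], [3, 4, 5], score)  # -> [[3, 4], [5]]
```

Because the candidates are pre-filtered to improve on the parent, every term eventually lands in some child, matching the observation in the text.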
After every child node is created, the corresponding expressions are simplified by eliminating
every term whose associated coefficient (after applying a linear regression) is smaller than a
given threshold τ set by the user. This whole procedure is then repeated, expanding each of the
current leaf nodes, until a stop criterion is met.
Notice that the greedy heuristic prevents the enumeration of every possible expression, thus
avoiding unnecessary computation. But, on the other hand, it may also prevent the algorithm from
reaching the optimum expression. The elimination of terms with an importance below τ also
helps to avoid an exponential growth of the new terms during the subsequent applications of the
operators.
In the worst case scenario, when every new term increases the score of the expression and
has a coefficient higher than τ in every iteration, the number of terms of an expression will still
grow exponentially. In this scenario, each expansion produces a number of terms proportional to
n_i^2. After k iterations, the expected number of terms will be proportional to n_0^(2^k), with n_0 being the
number of original variables of the data set.
Another implementation procedure devised to avoid an exponential growth is the use of the
parameters minI and minT to control at which depth of the search tree the Inverse and Transformation operators start to be applied. With these parameters, the expression may first expand to a
set of terms containing only positive polynomial interactions, with the expectation of having enough
information to be simplified when applying the other operators.
Due to its tree-search nature, the algorithm is named Symbolic Regression Tree, or
SymTree for short. The pseudocode is illustrated in Alg. 1, together with the auxiliary functions in Algs. 2 and 3.
Alg. 1 is a straightforward and abstract description of the SymTree algorithm. The Expand
function, described in Alg. 2, gives further detail of the inner procedure of node expansion. In
this function the Interaction, Inverse and Transformation operators are applied, followed by the
GreedySearch function, responsible for the creation of a new expanded expression (Alg. 3). The
Simplify function removes the terms with a coefficient smaller than τ in the corresponding linear
regression.
The application of a linear regression helps to regulate the amount of change the expression
undergoes from one node to the other. This allows a smooth transition from the previous solution
to the next, if required. As an example, suppose that the current node contains the expression
0.3x + 0.6 cos(x) (Fig. 3(a)) and in one of its children the interaction x² is inserted. After solving
the linear regression for the new expression, the coefficients may become 0.2x + 0.6 cos(x) + 0.02x²
(Fig. 3(b)). Notice that, in order to achieve the same transition from 0.3x + 0.6 cos(x) to 0.2x +
Algorithm 1: SymTree algorithm
  input : data points X and corresponding set of target variables Y, simplification threshold τ,
          minimum iterations for the inverse and transformation operators minI, minT.
  output: symbolic function f
  /* create root node (Eq. 5). */
  root ← LinearExpression(X);
  leaves ← {root};
  while criteria not met do
      nodes ← ∅;
      for leaf in leaves do
          nodes ← nodes ∪ Expand(leaf, τ, it > minI, it > minT);
      leaves ← nodes;
  return arg max {Score(leaf) for leaf ∈ leaves};
0.6 cos(x) + 0.02x² in GP, the expression x² should be part of another expression tree and both
solutions should be combined through crossover at the exact point, as illustrated in Fig. 4.
Figure 3: Example of the transition between a parent expression 0.3x + 0.6 cos (x) (a) and the child expression 0.2x +
0.6 cos (x) + 0.02x2 (b) with SymTree.
Algorithm 2: Expand function
  input : expression node to be expanded, simplification threshold τ, booleans indicating
          whether to apply the Inverse (inv) and the Transformation (trans) operators.
  output: set of child nodes children.
  /* create candidate terms. */
  terms ← Interaction(node) ∪ Inverse(node, inv) ∪ Transformation(node, trans);
  terms ← [term ∈ terms if Score(node + term) > Score(node)];
  /* generate nodes. */
  children ← ∅;
  while terms ≠ ∅ do
      new_node, terms ← GreedySearch(node, terms);
      new_node ← Simplify(new_node, τ);
      children ← children ∪ {new_node};
  /* guarantees that it returns at least one child. */
  if children ≠ ∅ then
      return children;
  else
      return {node};
5. Experimental Results
The goal of the IT data structure and the SymTree algorithm proposed in this paper is to achieve a
concise and descriptive approximation function that minimizes the error to the measured data. As
such, not only should we minimize an error metric (i.e., absolute or squared error), but we should
also minimize the size of the final expression.
So, in order to test both SymTree and the representation, we have performed experiments with
a total of 17 different benchmark functions commonly used in the literature, extracted from [16,
30, 17, 18].
These functions are depicted in Table 1, together with the information of whether the function itself is
Algorithm 3: GreedySearch function
  input : expression node to be expanded and the list of candidate terms.
  output: the expanded node and the unused terms.
  /* expand terms. */
  for term ∈ terms do
      if Score(node + term) > Score(node) then
          node ← node + term;
  terms ← [term ∈ terms if term ∉ node];
  return node, terms;
Figure 4: Corresponding expression trees with standard GP representation of expressions depicted in Fig. 3.
expressible by the IT data structure or not. Notice that, even though some functions are not expressible, the algorithm can still find an approximation of such functions. In total, from the 17
functions, 8 are not expressible, almost half of the benchmark. As a comparison, for the EFS
only 5 of these functions are not expressible, while the remaining GP algorithms tested here can
represent every one of these benchmark functions.
For every benchmark function, 600 samples were randomly generated with uniform distribution within the range [−5, 5], with the exception of F7, which was sampled within [0, 2] in order to
avoid indeterminations. These samples were split in half into training and testing data sets. All the tested
algorithms had access only to the training data during the model fitting step and, afterwards, the
mean absolute error of the fitted model was calculated using the testing data.
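The sampling protocol above can be reproduced with a short sketch (the seed and helper name are arbitrary choices of ours):

```python
import random

def make_benchmark_data(f, n=600, low=-5.0, high=5.0, seed=42):
    """Sample n points uniformly in [low, high], split half/half into train and test."""
    rng = random.Random(seed)
    points = [(x, f(x)) for x in (rng.uniform(low, high) for _ in range(n))]
    rng.shuffle(points)
    return points[: n // 2], points[n // 2 :]

# F1 = x^3 + x^2 + 5x, sampled on [-5, 5]; F7 would use low=0, high=2.
train, test = make_benchmark_data(lambda x: x**3 + x**2 + 5 * x)
```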
Table 1: Benchmark functions used for the comparative experiments.

    Function                                    expressible
    F1  = x³ + x² + 5x                          Y
    F2  = x⁴ + x³ + x² + x                      Y
    F3  = x⁵ + x⁴ + x³ + x² + x                 Y
    F4  = x⁶ + x⁵ + x⁴ + x³ + x² + x            Y
    F5  = sin(x²)cos(x) − 1                     N
    F6  = sin(x) + sin(x + x²)                  N
    F7  = log(x + 1) + log(x² + 1)              Y
    F8  = 5√|x|                                 Y
    F9  = sin(x) + sin(y²)                      Y
    F10 = 6 sin(x)cos(y)                        N
    F11 = 2 − 2.1 cos(9.8x) sin(1.3w)           N
    F12 = e^(−(x−1)²) / (1.2 + (y − 2.5)²)      N
    F13 = 10 / (5 + Σ_{i=1..5} (x_i − 3)²)      N
    F14 = x_1 x_2 x_3 x_4 x_5                   Y
    F15 = x⁶ / (x³ + x² + 1)                    N
    F16 = x / (1 − log(x² + x + 1))             N
    F17 = 100 + log(x²) + 5√|x|                 Y
The proposed approach was compared against other Symbolic and traditional Regression algorithms. The representative set of
traditional regression algorithms was chosen to range from simpler to more complex models. As such,
this set is composed of Linear Regression [23], Linear Regression with Polynomial Features [25] and
Gradient Tree Boosting [8]. All of these regression models are provided by the scikit-learn Python package [22].
As for the Symbolic Regression algorithms set, the choices were the ones already described in
Sec. 3: neat-GP, EFS, sGP and SSGP. For the first two we have used the source code provided in
Python² and in Java³, respectively, and for the last two we have used the reported values from the
literature.
The algorithm parameters were set by applying a grid search within a set of pre-defined parameter values; each combination of parameters within this set was tested using only the training
data, and the best model obtained by each algorithm was used to measure the goodness-of-fit on
the testing data. Next, each algorithm will be briefly explained, together with the parameter sets
used for the grid search.
The Linear Regression (LR) uses Ordinary Least Squares in order to estimate its parameters.
Besides testing this model, we also transformed the data by generating Polynomial Features (PF)
in order to allow the modeling of non-linear relationships as linear. We have tested polynomials of
degrees in the range [2, 6].
Finally, within the traditional algorithms, the Gradient Tree Boosting (GB) creates the regression model by means of a boosting ensemble, outperforming many regression algorithms [6, 7].
This algorithm has two main parameters that improve the performance, but increase the size of
the final model: the number of boosting stages, improving the robustness, and the maximum depth
of each tree, minimizing the error of each regressor. The number of boosting stages was tested
within the set {100, 500}, after verifying that a larger number of estimators would just increase
the length of the final expression without significantly reducing the regression error on the tested
functions. The maximum depth was tested in the range [2, 6], following the same rationale.
Within the Symbolic Regression algorithms, EFS only allows one parameter to be set, the
maximum time in minutes allowed to search for a solution. As such, we allowed a total amount of
1 minute, more than what was used by every other algorithm within this benchmark. The neat-GP
algorithm was tested with a population size of 500, evolved through 100 iterations. The crossover
rate was tested with the values {0.5, 0.7, 0.9}, the mutation rate with {0.1, 0.3, 0.5}, the survival threshold
with {0.3, 0.4, 0.5}, the species threshold with {0.1, 0.15, 0.2} and α = 0.5, all fixed values
as suggested by the authors. The other two GP algorithms, sGP and SSGP, had neither of their
as suggested by the authors. The other two GP algorithms, sGP and SSGP, had neither of their
parameters adjusted for this test since we will only use the reported values in their respective
papers.

[2] https://github.com/saarahy/neatGP-deap
[3] https://github.com/flexgp/efs
Regarding the SymTree algorithm, the threshold parameter τ was tested within the set {1e−6, 1e−5, 1e−4, 1e−3, 1e−2}; the minI parameter, which controls the iteration at which the algorithm starts to apply
the inverse interaction operator, was tested within [1, 10[; the minT parameter, which controls the
iteration at which it starts to apply the transformation functions, was tested within [5, 10[; and the
total number of iterations performed by the algorithm was set as minI + minT + it, with it
tested within the range [0, 5]. Despite testing only a small number of iterations, it is
worth noticing that after n iterations the algorithm can represent polynomials of degree 2^n
using only the interaction operator.
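The degree claim can be checked with a minimal sketch: the interaction operator multiplies two existing terms, so interacting the highest-degree term with itself doubles the reachable degree at every iteration (the helper name below is ours).

```python
# The interaction operator combines two existing terms by multiplying them.
# Multiplying the highest-degree term by itself adds its degree to itself, so
# after n iterations the expression set can contain a term of degree 2**n.
def max_degree_after(n_iterations):
    degree = 1  # start from the linear approximation x_i
    for _ in range(n_iterations):
        degree = degree + degree  # term * term -> degrees add
    return degree

print([max_degree_after(n) for n in range(4)])  # [1, 2, 4, 8]
```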
The function set used for the transformation functions of the Symbolic Regression algorithms
was fixed to {sin(x), cos(x), tan(x), √|x|, log(x), log(x + 1)}. This set was chosen in order
to allow the IT data structure to correctly represent most of the functions tested here.
5.1. Accuracy comparison
Our first analysis concerns how well SymTree fared against the contenders with respect to the
goodness of fit for each data set from the benchmark. In this analysis we are only concerned with
the minimization of an error function. For this purpose, we have measured the Mean Absolute
Error to allow a proper comparison with the results reported in [17, 18].
The experiments with the non-deterministic algorithms (GB, neat-GP, EFS) were repeated 30
times and the average of these values is reported in the following tables and plots. Since SymTree
is deterministic, we have performed a one-sample Wilcoxon test comparing the result obtained by
SymTree against the results obtained by the best or second best (whenever SymTree had the best
result) algorithm, with the difference being considered significant for a p-value < 0.05.
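A sketch of this statistical protocol with scipy. The error values below are made up for illustration; only the shape of the test (30 stochastic runs compared against one deterministic value) mirrors the text.

```python
import numpy as np
from scipy.stats import wilcoxon

# Hypothetical numbers: 30 MAE values from a stochastic contender and the
# single deterministic MAE obtained by SymTree on the same function.
rng = np.random.default_rng(42)
contender_mae = rng.normal(loc=1.0, scale=0.1, size=30)
symtree_mae = 0.5

# One-sample signed-rank test: pass the differences to test whether the
# contender's errors are symmetric around SymTree's value.
stat, p_value = wilcoxon(contender_mae - symtree_mae)
significant = p_value < 0.05
print(p_value, significant)
```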
In Table 2 we can see the comparative results summarized by the Mean Absolute Error. From
this table we can see that, as expected, SymTree was capable of finding the optimal solution for
every function that the IT data structure was capable of expressing. In 5 of these functions the
proposed algorithm found a significantly better solution than every other tested algorithm,
and in 13 of them it also found a significantly better solution than every other Symbolic
Regression algorithm. Notice that in 9 of the benchmark functions it achieved the optimal solution,
sometimes tying with Polynomial Features and/or Gradient Boosting. Even when it did not
achieve the best solution, it still managed to find a competitive approximation.
It is interesting to notice that neat-GP seems to have difficulty when dealing with polynomial
functions, probably due to its bloat control, which penalizes taller trees and thus favors
trigonometric approximations (the generated expressions extensively used the trigonometric functions). The surrogate models were competitive in almost every one of their reported values, though they
never found the best solution; this should still be validated using the original source code.
The EFS algorithm performed competitively against SymTree on many functions but had some difficulty dealing with polynomial functions, similar to neat-GP, probably due to the use of the Lasso
penalty, which favors smaller expressions.
Among the traditional algorithms, surprisingly enough, Linear Regression obtained some very
competitive results, even finding a better approximation than every other contender on 2 occasions. The use of polynomial features improved the Linear Regression capabilities even further.
It is important to notice that, even though PF should be expected to find a perfect fit
for F4, the fourth and sixth degree polynomials are very close to each other; as such, the sampled
data for the training set may have deceived this (and the other) approaches into fitting the wrong model.
Even though this did not happen with SymTree, it cannot be assured that it would not happen
with different samples. Finally, GB behaved consistently close to the best approximation
with just two exceptions, F4 and F14, achieving the best approximation in 7 benchmark functions
(effectively being the sole winner in 5 of them).
Regarding the execution time, the tested algorithms (with the exception of sGP and
SSGP, which were not run on the same machine) took roughly the same time. Linear Regression and Polynomial Features took less than a second, on average, to fit each benchmark function.
Gradient Boosting took 2 seconds on average, while SymTree took an average of 6 seconds. Finally,
neat-GP and EFS took 30 and 60 seconds, respectively. All of these algorithms were implemented
in Python 3 [4], with the exception of EFS, which was implemented in Java. All of these experiments
were run on an Intel Core i5, 1.7 GHz, with 8 GB of RAM under the Debian Jessie operating system.
[4] Notice that Linear Regression, Polynomial Features and Gradient Boosting may have additional optimizations
unknown to this author.
Table 2: Mean Absolute Error obtained by each algorithm on the benchmark functions. The results in which SymTree
is better than all the other algorithms are marked in bold; those in which SymTree is better than all the other
Symbolic Regression algorithms are marked in italic.

Function      LR       PF      GB    neat-GP   SSGP    sGP     EFS   SymTree
F1         17.48     0.00    0.49    108.91      –     1.10    6.43     0.00
F2        131.99     0.00    1.83    284.70      –       –     6.62     0.00
F3        483.36     0.00    8.52   1025.46      –       –    32.50     0.00
F4       2823.94   247.72   58.39    813.25      –       –    47.55     0.00
F5          0.32     0.30    0.03      0.65      –       –     0.25     0.23
F6          0.79     0.60    0.07      1.95      –       –     0.57     0.58
F7          0.02     0.00    0.00      0.58      –       –     0.01     0.00
F8          2.22     0.56    0.03      0.95      –     0.99    0.14     0.00
F9          0.78     0.58    0.18      4.45      –       –     0.24     0.00
F10         2.26     2.05    0.95      8.74    2.31      –     1.88     0.72
F11         0.81     0.83    0.86      2.91      –       –     0.85     0.82
F12         0.06     0.06    0.01      2.00    0.07      –     0.04     0.06
F13         0.06     0.04    0.03      1.41    0.14      –     0.05     0.03
F14        61.26     0.00   71.87      3.62   76.08      –    90.50     0.00
F15        22.41     8.63    7.00    188.11      –    13.45   12.57     8.88
F16         5.21     5.82    5.50      6.26      –     8.89    6.71     6.31
F17         3.88     1.34    0.11      3.78      –     5.90    0.26     0.00
5.2. Compromise between accuracy and simplicity
In order to evaluate the compromise between accuracy and simplicity of the expressions generated by the algorithms, we have plotted the size of the expression tree obtained by
each algorithm against the MAE on the testing data set. These plots are depicted in Figs. 5, 6 and 7.
We have omitted the results obtained by GB in the following graphs, since this approach generated
much larger expressions (as a black-box model), ranging from 1,200 to 30,000 nodes.
[Figure 5: six panels (a)–(f), one per function F1–F6, each plotting MAE against expression size in nodes (×100) for LR, PF, NEAT, EFS and SymTree, with the optimum (OPT) marked.]
Figure 5: Compromise between accuracy and simplicity for functions F1 to F6.
From these figures we can see some distinct situations. Whenever the target function could be
represented by the IT data structure, the SymTree algorithm achieved the optimal compromise. In almost
every situation, neat-GP found a bigger expression with a higher error. The behavior of SymTree
on those functions in which it reached the optimum does not clearly indicate whether it
is biased towards generating simpler or more accurate expressions, but we can see that, with the
exception of four benchmark functions, it found a good balance between both objectives.
The expressions generated by SymTree for each benchmark function can be found at
http://professor.ufabc.edu.br/~folivetti/SymTree/.
5.3. Genetic Programming Based Symbolic Regression Using Deterministic Machine Learning
As a final experiment, we compared our results with those obtained by the FFX/GP algorithm [15] on randomly generated polynomial functions. In this experiment, 30 random functions were created for different
combinations of dimension, order of the polynomial and number of terms. The dimension was varied in the set {1, 2, 3}, and the order of the polynomial and the number of terms were
[Figure 6: six panels (a)–(f), one per function F7–F12, each plotting MAE against expression size in nodes (×100) for LR, PF, NEAT, EFS and SymTree, with the optimum (OPT) marked.]
Figure 6: Compromise between accuracy and simplicity for functions F7 to F12.
varied within the set {1, 2, 3, 4}. For every combination, 2,500 samples were generated for training
and 1,500 for testing, all in the domain [0, 1]. Notice that every function generated
in this test is achievable by SymTree. The results were reported as the number of correct
expressions found.
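This protocol can be sketched as follows. How each term's total degree is split across variables is our assumption, since the paper does not specify the term-construction details; the helper name is ours.

```python
import numpy as np

def random_polynomial(dim, order, n_terms, rng):
    """Build a random polynomial: each term picks a total degree in [1, order]
    and splits it across the `dim` variables; coefficients are uniform in [-1, 1]."""
    degrees = rng.integers(1, order + 1, size=n_terms)
    exponents = np.array([rng.multinomial(d, [1.0 / dim] * dim) for d in degrees])
    coeffs = rng.uniform(-1, 1, size=n_terms)
    def f(X):  # X has shape (n_samples, dim)
        return (coeffs * np.prod(X[:, None, :] ** exponents, axis=2)).sum(axis=1)
    return f

rng = np.random.default_rng(0)
f = random_polynomial(dim=2, order=3, n_terms=4, rng=rng)
X_train = rng.uniform(0, 1, size=(2500, 2))   # 2,500 training samples in [0, 1]
X_test = rng.uniform(0, 1, size=(1500, 2))    # 1,500 testing samples in [0, 1]
y_train, y_test = f(X_train), f(X_test)
print(y_train.shape, y_test.shape)
```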
In [15] the authors also performed some tests with 10 and 30 dimensions, but those results
were not reported in a table, so we refrain from testing these settings here.
The results depicted in Table 3 show the number of correct polynomial expressions found
by SymTree (number on the left) and FFX/GP (number on the right) for 1, 2, and 3 dimensions,
respectively. The reported results for FFX/GP were the best values among pure GP, FFX and the
combination of FFX with GP [15], as reported by the authors. From this table we can see that
SymTree achieved an almost perfect score, with just a few exceptions, vastly outperforming
FFX/GP in every combination.
[Figure 7: five panels (a)–(e), one per function F13–F17, each plotting MAE against expression size in nodes (×100) for LR, PF, NEAT, EFS and SymTree, with the optimum (OPT) marked.]
Figure 7: Compromise between accuracy and simplicity for functions F13 to F17.
6. Conclusion
In this paper a new data structure for mathematical expressions, named Interaction-Transformation,
was proposed with the goal of constraining the search space to simple and interpretable
expressions, represented as linear combinations of compositions of non-linear functions with polynomial functions. In order to test this data structure, a heuristic approach
to the Symbolic Regression problem, called SymTree, was introduced. The heuristic can be classified as a
greedy search tree method: it starts with a linear approximation function and expands its
nodes through the interaction and transformation of the parent expression by means of a greedy
algorithm.
This algorithm was tested on a set of benchmark functions commonly used in the Symbolic
Regression literature and compared against the traditional linear regression algorithm, linear
regression with polynomial features, gradient boosting and some recent Genetic Programming
variations from the literature. The results showed that SymTree can obtain the correct function
Table 3: Comparison of results obtained by SymTree (left) and FFX/GP (right) by number of correct answers.

Dim.   Order    Base 1     Base 2     Base 3     Base 4
1D       1      30 / 30       –          –          –
1D       2      30 / 30    30 / 29       –          –
1D       3      30 / 30    29 / 27    30 / 19       –
1D       4      30 / 30    29 / 28    29 / 16    29 / 17
2D       1      30 / 30       –          –          –
2D       2      30 / 30    30 / 29       –          –
2D       3      30 / 30    30 / 22    30 / 15       –
2D       4      30 / 30    30 / 20    30 / 11    30 / 3
3D       1      30 / 30       –          –          –
3D       2      30 / 30    30 / 26       –          –
3D       3      30 / 30    30 / 28    30 / 14       –
3D       4      30 / 30    30 / 17    28 / 12    30 / 6
form whenever the target function can be described by the representation. For every function
where it was not capable of finding the optimum, it was capable of finding competitive solutions.
Overall, the results were positive, chiefly considering the greedy heuristic nature of the proposal.
Another interesting fact observed in the results is that SymTree tends to favor smaller expressions, in contrast with black-box algorithms, which tend to favor accuracy over simplicity. It was
shown that SymTree can find a good balance between accuracy and conciseness of expression most
of the time. The evidence obtained through these experiments points to the validity of the hypothesis that the IT data structure helps to focus the search inside a region of smaller expressions and
that, even though the search space is restricted, it is still capable of finding good approximations
to the tested functions.
As a next step, we should investigate the use of the proposed representation in an evolutionary
algorithm context, with operators inspired by those introduced with the greedy heuristic. Since the greedy approach creates an estimate of the enumeration tree, it may have a tendency
toward exponential growth in higher dimensions, which can be alleviated by evolutionary approaches.
References
[1] G. Adomavicius, A. Tuzhilin, Toward the next generation of recommender systems: A survey of the state-of-the-art and possible extensions, IEEE Transactions on Knowledge and Data Engineering 17 (6) (2005) 734–749.
[2] I. Arnaldo, U.-M. O'Reilly, K. Veeramachaneni, Building predictive models via feature synthesis, in: Proceedings of the 2015 Annual Conference on Genetic and Evolutionary Computation, ACM, 983–990, 2015.
[3] T. Bäck, Evolutionary algorithms in theory and practice: evolution strategies, evolutionary programming, genetic algorithms, Oxford University Press, 1996.
[4] L. Billard, E. Diday, Symbolic regression analysis, in: Classification, Clustering, and Data Analysis, Springer, 281–288, 2002.
[5] J.-F. Bonnefon, A. Shariff, I. Rahwan, Autonomous Vehicles Need Experimental Ethics: Are We Ready for Utilitarian Cars?, arXiv preprint arXiv:1510.03346.
[6] T. Chen, C. Guestrin, XGBoost: A scalable tree boosting system, arXiv preprint arXiv:1603.02754.
[7] T. Chen, T. He, Higgs boson discovery with boosted trees, in: Cowan et al. (Eds.), JMLR: Workshop and Conference Proceedings, 42, 69–80, 2015.
[8] T. Chen, T. He, xgboost: eXtreme Gradient Boosting, R package version 0.4-2.
[9] M. Crouhy, D. Galai, R. Mark, A comparative analysis of current credit risk models, Journal of Banking & Finance 24 (1) (2000) 59–117.
[10] A. Datta, M. C. Tschantz, A. Datta, Automated experiments on ad privacy settings, Proceedings on Privacy Enhancing Technologies 2015 (1) (2015) 92–112.
[11] F. O. de França, G. P. Coelho, F. J. Von Zuben, On the diversity mechanisms of opt-aiNet: a comparative study with fitness sharing, in: IEEE Congress on Evolutionary Computation, IEEE, 1–8, 2010.
[12] G. Cybenko, Approximation by superposition of sigmoidal functions, Mathematics of Control, Signals and Systems 2 (4) (1989) 303–314.
[13] S. Haykin, Neural networks: A comprehensive foundation, Neural Networks 2 (2004).
[14] K. Hornik, Approximation capabilities of multilayer feedforward networks, Neural Networks 4 (2) (1991) 251–257.
[15] I. Icke, J. C. Bongard, Improving genetic programming based symbolic regression using deterministic machine learning, in: 2013 IEEE Congress on Evolutionary Computation, IEEE, 1763–1770, 2013.
[16] D. Karaboga, C. Ozturk, N. Karaboga, B. Gorkemli, Artificial bee colony programming for symbolic regression, Information Sciences 209 (2012) 1–15.
[17] A. Kattan, A. Agapitos, Y.-S. Ong, A. A. Alghamedi, M. O'Neill, GP made faster with semantic surrogate modelling, Information Sciences 355 (2016) 169–185.
[18] A. Kattan, Y.-S. Ong, Surrogate Genetic Programming: A semantic aware evolutionary search, Information Sciences 296 (2015) 345–359.
[19] J. R. Koza, Genetic programming: on the programming of computers by means of natural selection, vol. 1, MIT Press, 1992.
[20] T. McConaghy, FFX: Fast, scalable, deterministic symbolic regression technology, in: Genetic Programming Theory and Practice IX, Springer, 235–260, 2011.
[21] J. Park, I. W. Sandberg, Approximation and radial-basis-function networks, Neural Computation 5 (2) (1993) 305–316.
[22] F. Pedregosa, G. Varoquaux, A. Gramfort, V. Michel, B. Thirion, O. Grisel, M. Blondel, P. Prettenhofer, R. Weiss, V. Dubourg, et al., Scikit-learn: Machine learning in Python, Journal of Machine Learning Research 12 (2011) 2825–2830.
[23] C. R. Rao, Linear statistical inference and its applications, vol. 22, John Wiley & Sons, 2009.
[24] B. Sareni, L. Krähenbühl, Fitness sharing and niching methods revisited, IEEE Transactions on Evolutionary Computation 2 (3) (1998) 97–106.
[25] B. Schölkopf, Statistical learning and kernel methods, in: Data Fusion and Perception, Springer, 3–24, 2001.
[26] G. A. Seber, A. J. Lee, Linear regression analysis, vol. 936, John Wiley & Sons, 2012.
[27] K. O. Stanley, R. Miikkulainen, Evolving neural networks through augmenting topologies, Evolutionary Computation 10 (2) (2002) 99–127.
[28] L. E. Suter, H. P. Miller, Income differences between men and career women, American Journal of Sociology 78 (4) (1973) 962–974.
[29] J. J. Thomson, Killing, letting die, and the trolley problem, The Monist 59 (2) (1976) 204–217.
[30] L. Trujillo, L. Muñoz, E. Galván-López, S. Silva, neat Genetic Programming: Controlling bloat naturally, Information Sciences 333 (2016) 21–43.
[31] H. Zou, T. Hastie, Regularization and variable selection via the elastic net, Journal of the Royal Statistical Society: Series B (Statistical Methodology) 67 (2) (2005) 301–320.
Computing factorized approximations of Pareto-fronts
using mNM-landscapes and Boltzmann distributions
arXiv:1512.03466v1 [] 10 Dec 2015
Roberto Santana, Alexander Mendiburu, Jose A. Lozano
Intelligent Systems Group. University of the Basque Country (UPV/EHU)
{roberto.santana,alexander.mendiburu,ja.lozano}@ehu.es
Abstract
NM-landscapes have recently been introduced as a class of tunable rugged models. They are a subset of the general interaction models in which all the interactions
are of order less than or equal to M. The Boltzmann distribution has been extensively applied in single-objective evolutionary algorithms to implement selection and to study
the theoretical properties of model-building algorithms. In this paper we propose
the combination of the multi-objective NM-landscape model and the Boltzmann
distribution to obtain Pareto-front approximations. We investigate the joint effect
of the parameters of the NM-landscapes and of the probabilistic factorizations on the
shape of the Pareto front approximations.
keywords: multi-objective optimization, NM-landscape, factorizations, Boltzmann distribution
1 Introduction
One important question in multi-objective evolutionary algorithms (MOEAs) is how the
structure of the interactions between the variables of the problem influences the different
objectives and impacts the characteristics of the Pareto front (e.g. discontinuities,
clustered structure, etc.). The analysis of interactions is also important because there
is a class of MOEAs that explicitly capture and represent these interactions to make
the search more efficient [3, 10, 14]. In this paper, we approach this important question
by combining the use of a multi-objective fitness landscape model with the definition
of probability distributions on the search space and different factorized approximations
to these joint distributions. Our work follows a methodology similar to the one used in
[11, 12, 13, 16] to investigate the relationship between additively decomposable single-objective functions and the performance of estimation of distribution algorithms (EDAs)
[6, 8].
Landscape models are very useful to understand the behavior of optimizers under
different hypotheses about the complexity of the fitness function. Perhaps the best known
example of such models is the NK fitness landscape [5], a parametrized model of a fitness
landscape that allows one to explore the way in which the neighborhood structure and the
strength of interactions between neighboring variables determine the ruggedness of the
landscape. One relevant aspect of the NK fitness landscape is its simplicity and wide
usability across diverse disciplines.
Another recently introduced landscape model is the NM-landscape [9]. It can be seen
as a generalization of the NK-landscape. This model has a number of attributes that
make it particularly suitable to control the strength of the interactions between subsets
of variables of different size. In addition, it is not restricted to binary variables and allows
the definition of functions of any arity.
In [20], the NM-landscape was extended to multi-objective problems and used to
study the influence of the parameters on the characteristics of the MOP. We build on the
work presented in [20] to propose the use of the multi-objective NM-landscape (mNM-landscape) for investigating how the patterns of interactions in the landscape model
influence the shape of the Pareto front. We go one step further and propose the use
of factorized approximations computed from the landscapes to approximate the Pareto
fronts. We identify the conditions in which these approximations can be accurate.
2 NM-landscape

2.1 Definition
Let X = (X1 , . . . , XN ) denote a vector of discrete variables. We will use x = (x1 , . . . , xN )
to denote an assignment to the variables. S will denote a set of indices in {1, . . . , N},
and XS (respectively xS ) a subset of the variables of X (respectively x) determined by
the indices in S.
A fitness landscape F can be defined for N features using a general parametric interaction model of the form:

F(x) = \sum_{k=1}^{l} \beta_{U_k} \prod_{i \in U_k} x_i,    (1)

where l is the number of terms and each of the l coefficients β_{U_k} ∈ R. For k = 1, . . . , l,
U_k ⊆ {1, 2, . . . , N}, where U_k is the set of indices of the features in the k-th term, and the
length |U_k| is the order of the interaction. By convention [9], it is assumed that when
U_k = ∅, \prod_{j \in U_k} x_j ≡ 1. Also by convention, we assume that the model is defined for
binary variables represented as x_i ∈ {−1, 1}.
The NM models [9] comprise the set of all general interactions models specified by
Equation 1, with the following constraints:
• All coefficients βUk are non-negative.
• Each feature value xi ranges from negative to positive values.
• The absolute value of the lower bound of the range is less than or equal to the upper
bound of the range of x_i.
One key element of the model is how the parameters of the interactions are generated. In [9], each β_{U_k} is generated as e^{−|N(0,σ)|}, where N(0, σ) is a random number
drawn from a Gaussian distribution with mean 0 and standard deviation σ. Increasing
σ produces a smaller range and increased clumping of fitness values. In this paper, we
use the same procedure to generate the β_{U_k} parameters.
We will focus on NM-models defined on the binary alphabet. In this case, the NM-landscape has a global maximum that is reached at x = (1, . . . , 1) [9].
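A minimal sketch of sampling and evaluating an NM-landscape under these conventions. We omit the constant term for the empty set, which does not affect the location of the maximum; the helper name is ours.

```python
import itertools
import numpy as np

def nm_landscape(N, M, sigma, rng):
    """Sample an NM-landscape: one non-negative coefficient for every subset
    of 1..M variables, each drawn as exp(-|Normal(0, sigma)|)."""
    subsets = [S for m in range(1, M + 1)
               for S in itertools.combinations(range(N), m)]
    betas = np.exp(-np.abs(rng.normal(0.0, sigma, size=len(subsets))))
    def F(x):  # x is a vector over {-1, +1}
        return sum(b * np.prod([x[i] for i in S]) for b, S in zip(betas, subsets))
    return F

rng = np.random.default_rng(1)
F = nm_landscape(N=6, M=2, sigma=1.0, rng=rng)
# All coefficients are positive, so the global maximum is at x = (1, ..., 1).
values = {x: F(np.array(x)) for x in itertools.product([-1, 1], repeat=6)}
best = max(values, key=values.get)
print(best)  # (1, 1, 1, 1, 1, 1)
```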
3 Multi-objective NM-landscapes
The multi-objective NM-landscape model (mNM-landscape) is defined [20] as a vector
function mapping binary vectors of solutions into m real numbers, f(·) = (f_1(·), f_2(·), . . . , f_m(·)) :
B^N → R^m, where N is the number of variables, m is the number of objectives, f_i(·) is
the i-th objective function, and B = {−1, 1}. M = {M_1, . . . , M_m} is a set of integers
where M_i is the maximum order of the interaction in the i-th landscape. Each f_i(x) is
defined similarly to Equation (1) as:

f_i(x) = \sum_{k=1}^{l_i} \beta_{U_k^i} \prod_{j \in U_k^i} x_j,    (2)

where l_i is the number of terms in objective i, and each of the l_i coefficients β_{U_k^i} ∈ R.
For k = 1, . . . , l_i, U_k^i ⊆ {1, 2, . . . , N}, where U_k^i is the set of indices of the features in the
k-th term, and the length |U_k^i| is the order of the interaction.
Notice that the mNM fitness landscape model allows each objective to have
a different maximum order of interactions. The mNM-landscape is inspired by previous
extensions of the NK fitness landscape model to multi-objective functions [1, 2, 7, 21].
One of our goals is to use the mNM-landscape to investigate the effect that the combination of objectives with different structures of interactions has on the characteristics
of the MOP. Without loss of generality, we will focus on bi-objective mNM-landscapes
(i.e., m = 2) and will establish some connections between the objectives. In this section
we explain how the constrained mNM-landscapes are designed.
As previously explained, the NM-model is defined for (x_1, . . . , x_N) ∈ {−1, 1}^N. However, we will use a representation in which (x_1, . . . , x_N) ∈ {0, 1}^N. The following transformation [20] maps the desired representation to the one used by the mNM-landscape.
Given the analysis presented in [9], it also guarantees that the Pareto set will comprise
at least two points, respectively reached at (0, . . . , 0) and (1, . . . , 1) for objectives f_1 and
f_2:

f_1(y) : y_i = −2x_i + 1    (3)
f_2(z) : z_i = 2x_i − 1     (4)

where y = (y_1, . . . , y_N) ∈ {−1, 1}^N and z = (z_1, . . . , z_N) ∈ {−1, 1}^N are the new variables
obtained after the corresponding transformations have been applied to x = (x_1, . . . , x_N) ∈
{0, 1}^N.
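Transformations (3) and (4) amount to an opposite sign flip per objective; a minimal sketch (the function name is ours):

```python
import numpy as np

def to_objective_inputs(x):
    """Map x in {0,1}^N to the {-1,1}^N inputs of the two objectives (Eqs. 3-4)."""
    x = np.asarray(x)
    y = -2 * x + 1   # input of f1: x_i = 0 maps to +1
    z = 2 * x - 1    # input of f2: x_i = 1 maps to +1
    return y, z

y, z = to_objective_inputs([0, 1, 1, 0])
print(y.tolist(), z.tolist())  # [1, -1, -1, 1] [-1, 1, 1, -1]
```

With all coefficients positive, this makes (0, . . . , 0) maximal for f1 and (1, . . . , 1) maximal for f2, giving the two guaranteed Pareto points.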
When the complete space of solutions is evaluated, we add two normalization steps so
that landscapes with different orders of interactions can be compared. In the first step, f(x) is divided by the number of interaction terms (l_i). In the second
step, we re-normalize the fitness values to the interval [0, 1] by subtracting
the minimum fitness value among all the solutions and dividing by the difference between
the maximum and minimum fitness values.
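The two normalization steps can be sketched as (the function name is ours):

```python
import numpy as np

def normalize_objective(f_values, n_terms):
    f = np.asarray(f_values, dtype=float) / n_terms  # step 1: divide by l_i
    return (f - f.min()) / (f.max() - f.min())       # step 2: rescale to [0, 1]

f = normalize_objective([3.0, 6.0, 9.0], n_terms=3)
print(f)  # values: 0.0, 0.5, 1.0
```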
Another constraint we set in some of the experiments is that, if M_1 < M_2, then
β_{U_k}^1 = β_{U_k}^2 for all |U_k^i| ≤ M_1. This means that all interactions contained in f_1 are
also contained in f_2, but f_2 will also contain higher-order interactions. Starting from a
single mNM-landscape f of order M, we will generate all pairs of models M_1, M_2, where
M_1 ≤ M_2 ≤ M. The coefficients β_{U_k} for f_1 and f_2 will be set as in f. The idea of
considering these pairs of objectives is to evaluate the influence that objectives with different
orders of interactions between their variables have on the shape of
the Pareto front and other characteristics of the MOPs.
4 Boltzmann distribution
The relationship between the fitness function and the variable dependencies that arise
in the selected solutions can be modeled using the Boltzmann probability distribution
[11, 12]. The Boltzmann probability distribution p_B(x) is defined as

p_B(x) = \frac{e^{g(x)/T}}{\sum_{x'} e^{g(x')/T}},    (5)

where g(x) is a given objective function and T is the system temperature, which can be
used as a parameter to smooth the probabilities.
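Equation (5) takes a few lines of numpy over an enumerated search space; the objective values below are placeholders for illustration.

```python
import numpy as np

def boltzmann(g_values, T=1.0):
    """Boltzmann distribution over an enumerated search space (Eq. 5)."""
    w = np.exp(np.asarray(g_values, dtype=float) / T)
    return w / w.sum()

p = boltzmann([0.0, 1.0, 2.0])
print(p)  # probabilities increase with fitness and sum to 1
```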
The key point about p_B(x) is that it assigns a higher probability to solutions with
better fitness; the solutions with the highest probability correspond to the optima.
Starting from the complete enumeration of the search space, and using as fitness
functions the objectives of an mNM-landscape, we associate to each possible solution x^i of
the search space m probability values (p_B^1(x^i), . . . , p_B^m(x^i)) according to the corresponding Boltzmann probability distributions. There is one probability value for each objective,
and in this paper we use the same temperature parameter T = 1 for all the distributions.
Using the Boltzmann distribution we can investigate how potential regularities of the
fitness function translate into statistical properties of the distribution [11]. This
question has been investigated for single-objective functions in different contexts [15, 18,
17], but we have not found reports of similar analyses for MOPs. One relevant result for
single-objective problems is that if the objective function is additively decomposable into
a set of subfunctions defined on subsets of variables (definition sets), and the definition
sets satisfy certain constraints, then the associated Boltzmann
distribution can be factorized into a product of marginal distributions [12]. Factorizations allow problem
decomposition and are at the core of EDAs.
5 Experiments
In our experiments we investigate the following issues:
• How do the parameters of the mNM model determine the shape of the Pareto front?
• How is the strength of the interactions between variables influenced by the parameters of the model?
• Under which conditions can factorized approximations of the Boltzmann probability
reproduce the shape of the Pareto front?
Algorithm 1 describes the steps of our simulations. We use a reference NM-landscape (N = 10, M = 2) and create a bi-objective mNM model from it using different
combinations of the parameters σ and |U_k^i|.
Algorithm 1: Simulation approach
1. Define the mNM model using its parameters.
2. For each objective:
3.   Evaluate the 2^N points of the search space.
4.   Compute the Boltzmann distribution.
5.   Compute the univariate marginals from the Boltzmann distribution.
6.   For all solutions, compute the univariate distribution as the product of univariate marginals.
7. Determine the Pareto front using the objective values.
8. Determine the approximation of the Pareto front using the univariate factorizations of all the objectives.
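The per-objective steps above can be sketched compactly for a single hypothetical objective g (a stand-in, not an mNM objective); since this g is additively separable, the univariate factorization reproduces the Boltzmann distribution exactly.

```python
import itertools
import numpy as np

N, T = 4, 1.0
space = np.array(list(itertools.product([0, 1], repeat=N)))  # the 2^N points
g = space.sum(axis=1).astype(float)  # stand-in objective (an assumption)

# Boltzmann distribution over the enumerated space.
p = np.exp(g / T)
p /= p.sum()

# Univariate marginals p_i(x_i = 1).
marg1 = (p[:, None] * space).sum(axis=0)

# Product approximation q(x) = prod_i p_i(x_i).
q = np.prod(np.where(space == 1, marg1, 1.0 - marg1), axis=1)

print(np.allclose(p, q))  # True
```

For non-separable objectives (higher-order interaction terms) p and q differ, which is exactly the gap the experiments probe.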
5.1 Influence of the mNM-landscape parameters
We investigate how the parameters of the mNM model determine the shape of the Pareto front.
Figure 1 (column 1) shows the evaluation of the 2^10 solutions that form the
search space for N = 10 and different values of σ and |U_k^i|. From row 1 to row 4,
the figures respectively show the objective values of the mNM landscape for the following
combinations of its parameters: (σ = 1, |U_k^i| = 1), (σ = 19, |U_k^i| = 1), (σ = 1, |U_k^i| = 2),
(σ = 19, |U_k^i| = 2).
The influence of σ can be seen by comparing the figure in row 1 with the figure in
row 2, and by a similar comparison of the figures in rows 3 and 4. Increasing σ from
1 to 19 produces a clustering of the points in the objective space. One reason for this
behavior is that several genotypes map to the same objective values. The clustering
effect in the space of objectives is a direct result of the clumpiness effect described for
the NM-model when σ is increased [9].
The effect of the maximum order of the interactions can be seen by comparing the
figure in row 1 with the figure in row 3, and the figures in rows 2 and 4. For σ = 1,
adding interactions transforms the shape of the Pareto front from a line to a boomerang-like shape. For σ = 19, the 8 points are transformed into a set of 8 stripes that seem to
be parallel to each other. In both cases, the changes due to the increase in the order of
the interactions are remarkable.
In the next experiments, and in order to emphasize the flexibility of the mNM-landscape, we allow the two objectives of the same mNM-landscape to have different
maximum orders of interactions. Figure 2 shows the objective values and Pareto fronts
of the mNM model for σ = 36 in the situation in which f_1 has a maximum order of
interactions |U_k^1| = o and f_2 has a maximum order of interactions |U_k^2| = o + 1. It can be
observed that the shapes of the fronts are less regular than in the previous experiments,
but some regularities are kept.
5.2 Boltzmann distribution
Figure 1 (column 2) shows the Boltzmann probabilities associated with each mNM-landscape
model described in column 1, i.e., (p_B^1(x^i), p_B^2(x^i)).
The Boltzmann distribution modifies the shape of the objective space, but it does not
modify the solutions that belong to the Pareto set. This is so because the dominance
Figure 1: Objective values, Boltzmann distribution, and univariate approximations for
different values of σ and different maximum orders of interactions. Column 1: Evaluation
of the 2^10 solutions that are part of the search space for N = 10 and different values of
σ and |Uki |. From row 1 to row 4, the figures respectively show the objective values
of the mNM model for (σ = 1, |Uki | = 1), (σ = 19, |Uki | = 1), (σ = 1, |Uki | = 2) and
(σ = 19, |Uki | = 2). Column 2: Boltzmann distributions computed from the objectives.
Column 3: Univariate approximations of the Boltzmann distributions.
Figure 2: Objective values and Pareto fronts of the mNM model for σ = 36 and different
maximum orders of interactions: Left) |Uk1 | = 1 and |Uk2 | = 2, Right) |Uk1 | = 2 and
|Uk2 | = 3.
relationships between the points are preserved by the Boltzmann distribution. However,
the Boltzmann distribution “bends” the original objective space. This effect can be
clearly appreciated in rows 1 and 4. In the first case, the line is transformed into a
curve. In the second case, the parallel stripes that appear in the original objective
space change direction.
The Boltzmann distribution can be used as an effective way to modify the shape of
the Pareto front while keeping the dominance relationships. This can be convenient to modify
the spacing between Pareto-optimal solutions, for more informative visualization of the
objective space, and for investigating how changes in the strength of selection could be
manifested in the shape of the Pareto front approximations.
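The invariance claimed here (dominance is preserved because the Boltzmann transform is strictly increasing in each objective) can be checked numerically. The following is a minimal sketch of that check, our own illustration rather than code from the paper; the helper `pareto_set` and the random objective values are assumptions:

```python
import math
import random

def pareto_set(points):
    """Indices of non-dominated points (maximization of every objective)."""
    def dominates(a, b):
        return all(x >= y for x, y in zip(a, b)) and any(x > y for x, y in zip(a, b))
    return [i for i, p in enumerate(points)
            if not any(dominates(q, p) for j, q in enumerate(points) if j != i)]

random.seed(0)
F = [(random.random(), random.random()) for _ in range(50)]  # toy objective vectors

T = 0.1  # "temperature" controlling the strength of selection
Z = [sum(math.exp(f[k] / T) for f in F) for k in range(2)]
P = [tuple(math.exp(f[k] / T) / Z[k] for k in range(2)) for f in F]

# exp is strictly increasing, so the dominance relations are unchanged
assert pareto_set(F) == pareto_set(P)
```

Any strictly increasing per-objective transform would pass the same check; the Boltzmann form is singled out here because its temperature parameter links selection strength to the spacing of points along the front.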
5.3 Factorized univariate approximations
Figure 1 (column 3) shows the approximations of the Boltzmann distributions for the
two objectives, each approximation computed using the corresponding product of the
univariate marginals, i.e., (q_B^1(x_i), q_B^2(x_i)). For |Uk | = 1, the approximations are identical to the Boltzmann distribution. This is because the Boltzmann distribution can be
exactly factorized in the product of its univariate marginal distributions. Therefore, as a
straightforward extension of the factorization theorems available for the single-objective
additive functions, we hypothesize that if the structure of all objectives is decomposable and the decompositions satisfy the running intersection property [11, 12], then the
associated factorized distributions will preserve the shape of the Pareto front.
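The exactness of the univariate factorization for |Uk| = 1 can be verified directly on a tiny instance. This is our own illustrative sketch (the weights are arbitrary assumptions, and the function is additive with one variable per term, which is what makes the factorization exact):

```python
import itertools
import math

N = 4
X = list(itertools.product([0, 1], repeat=N))
w = [0.3, -0.7, 1.1, 0.5]  # hypothetical per-variable weights (order-1 terms only)

f = [sum(wi * xi for wi, xi in zip(w, x)) for x in X]
Z = sum(math.exp(v) for v in f)
p = {x: math.exp(v) / Z for x, v in zip(X, f)}  # Boltzmann distribution

# univariate marginals p_i(x_i) and their product q(x)
marg = [{v: sum(p[x] for x in X if x[i] == v) for v in (0, 1)} for i in range(N)]
q = {x: math.prod(marg[i][x[i]] for i in range(N)) for x in X}

# exact factorization: exp(sum_i w_i x_i) splits into a product over variables
assert all(abs(p[x] - q[x]) < 1e-12 for x in X)
```

Adding any term that couples two variables (maximum order 2) breaks this identity, which is exactly the regime where the univariate approximation starts to distort the front.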
However, the univariate approximation does not always respect the dominance relationships, and this causes changes in the composition and shape of the Pareto front.
This can be appreciated in rows 3 and 4, where the univariate approximation clearly departs from the Boltzmann distribution. Still, as shown in row 4, some characteristics
of the original function, such as the discontinuity in the space of objectives, are preserved by the
univariate factorization.
An open question is under which conditions the univariate approximation preserves the
dominance relationship between the solutions. One conjecture is that if the factorized
approximation keeps the ranking of the original functions for all the objectives then
the dominance relationship will be kept, but this condition may not be necessary. The
answer to this question is beyond the scope of this paper. Nevertheless, we include the
discussion to emphasize why explicit modeling of interactions by means of the mNM
landscape together with the use of the Boltzmann distribution is relevant for the study
of MOPs.
5.4 Interactions and dependencies in the mNM landscape
By computing bivariate and univariate marginals from the Boltzmann distribution and
computing the mutual information for every pair of variables we can assess which are the
strongest pair-wise interactions.
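The marginal-based computation described above can be sketched as follows. This is our own toy example, not code from the paper: a small Boltzmann distribution over {0,1}^4 whose function has a single explicit order-2 term, with all weights and the 1.5 coupling chosen arbitrarily:

```python
import itertools
import math

N = 4
X = list(itertools.product([0, 1], repeat=N))
w = [0.3, -0.7, 1.1, 0.5]

# additive part plus one explicit order-2 interaction between variables 0 and 1
f = [sum(wi * xi for wi, xi in zip(w, x)) + 1.5 * x[0] * x[1] for x in X]
Z = sum(math.exp(v) for v in f)
p = {x: math.exp(v) / Z for x, v in zip(X, f)}

def mutual_information(i, j):
    """I(X_i; X_j) from the bivariate and univariate marginals of p."""
    mi = 0.0
    for a in (0, 1):
        for b in (0, 1):
            pij = sum(p[x] for x in X if x[i] == a and x[j] == b)
            pi = sum(p[x] for x in X if x[i] == a)
            pj = sum(p[x] for x in X if x[j] == b)
            if pij > 0:
                mi += pij * math.log(pij / (pi * pj))
    return mi

pairs = [(i, j) for i in range(N) for j in range(i + 1, N)]
strongest = max(pairs, key=lambda ij: mutual_information(*ij))
assert strongest == (0, 1)  # the explicitly coupled pair carries the largest MI
```

Pairs that only appear in additive terms factor out of the distribution, so their mutual information is (numerically) zero, while the coupled pair stands out; this is the mechanism behind the interaction-to-dependency translation analyzed in this section.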
Figure 3: Influence of the maximum order of the interactions and σ on the mutual information.
In this section we analyze how the maximum order of the interactions and the σ
parameter affect the dependencies in the Boltzmann distribution. A reference NM model
with N = 10 was generated, and by varying the parameters M ∈ {1, . . . , 9} and σ =
2i + 1, i ∈ {0, 1, . . . , 9}, we generated different mNM landscapes. The results presented
in this section are the average of 10 models for each combination of parameters. We focus
on the analysis of the dependencies in only one of the objectives.
Figure 3 shows the values of the mutual information for the combinations of the
maximum order of the interactions and σ. When the maximum order of the interactions is
1, the approximation given by the univariate factorization is exact, therefore, the mutual
information between the variables is 0 for all values of σ. The mutual information is
maximized when the maximum order of interactions is 2. For these mNM landscapes
we would expect the univariate approximation to considerably distort the shape of the
Pareto front, as shown in Figure 1, column 3, rows 3 and 4.
Figure 3 shows that σ can be used to tune the strength of the interactions between
the variables. As σ increases the mutual information also increases. This fact would
allow us to define objectives that have interactions of the same maximum order but with
different strength.
5.5 Discussion
We summarize some of the findings from the experiments:
• Univariate factorizations are poor approximations for mNM models of maximum
order two and higher.
• The mutual information between the variables of the NM landscape is maximized
for problems with maximum order of interaction 2.
• The σ parameter can be used for changing the shape of the Pareto fronts and
increasing the strength of the interactions in the objectives. In particular, there is
a direct effect of σ on the discontinuity of the Pareto front and the emergence of
clusters.
6 Conclusions
We have shown how the mNM landscape can be used to investigate the effect that interactions between the variables have on the shapes of the fronts and on the emergence of
dependencies between the variables of the problem. We have shown that the Boltzmann
distribution can be used in conjunction with the mNM model to investigate how interactions are translated into dependencies. A limitation of the Boltzmann distribution is
that it can be computed exactly only for problems of limited size.
The idea of using the Boltzmann distribution to modify the Pareto shape of the
functions can be related to previous work by Okabe et al. [13] on the application of deformations, rotations, and shift operators to generate test functions with difficult Pareto
sets. However, by using the Boltzmann distribution we explicitly relate the changes
in the shape of the Pareto front to the relationship between interactions and dependencies determined by
the Boltzmann distribution. This can be considered as an alternative to other approaches to the creation of benchmarks for MOPs, like the combination of single-objective
functions of known difficulty [4] or the maximization of problem difficulty by applying
direct optimization approaches [19].
Our results can be useful for the conception and validation of MOEAs that use probabilistic modeling. In this direction, we have advanced the idea that the effectiveness of a
factorized approximation in the context of MOPs may be related to the way it preserves
the original dominance relationships between solutions. We have shown that the Boltzmann distribution changes the shape of the fronts but does not change which solutions
belong to the Pareto front.
Acknowledgment
This work has been partially supported by IT-609-13 program (Basque Government) and
the TIN2013-41272P (Spanish Ministry of Science and Innovation) project. R. Santana
acknowledges support from the Program Science Without Borders, No. 400125/2014-5.
References
[1] H. Aguirre and K. Tanaka. Insights and properties of multiobjective MNK-landscapes. In Proceedings of the 2004 Congress on Evolutionary Computation CEC-2004, pages 196–203, Portland, Oregon, 2004. IEEE Press.
[2] H. E. Aguirre and K. Tanaka. Working principles, behavior, and performance
of MOEAs on MNK-landscapes. European Journal of Operational Research,
181(3):1670–1690, 2007.
[3] P. A. Bosman and D. Thierens. Multi-objective optimization with diversity preserving mixture-based iterated density estimation evolutionary algorithms. International
Journal of Approximate Reasoning, 31(3):259–289, 2002.
[4] D. Brockhoff, T.-D. Tran, and N. Hansen. Benchmarking numerical multiobjective
optimizers revisited. In Proceedings of the Companion Publication of the 2015 on
Genetic and Evolutionary Computation Conference, pages 639–646, Madrid, Spain,
2015.
[5] S. Kauffman. Origins of Order. Oxford University Press, 1993.
[6] P. Larrañaga, H. Karshenas, C. Bielza, and R. Santana. A review on probabilistic
graphical models in evolutionary computation. Journal of Heuristics, 18(5):795–819,
2012.
[7] M. López-Ibánez, A. Liefooghe, and S. Verel. Local optimal sets and bounded
archiving on multi-objective NK-landscapes with correlated objectives. In Parallel
Problem Solving from Nature–PPSN XIII, pages 621–630. Springer, 2014.
[8] J. A. Lozano, P. Larrañaga, I. Inza, and E. Bengoetxea, editors. Towards a New
Evolutionary Computation: Advances on Estimation of Distribution Algorithms.
Springer, 2006.
[9] N. Manukyan, M. J. Eppstein, and J. S. Buzas. Tunably rugged landscapes with
known maximum and minimum. arXiv.org, arXiv:1409.1143, 2014.
[10] L. Marti, J. Garcia, A. Berlanga, C. A. Coello, and J. M. Molina. On current
model-building methods for multi-objective estimation of distribution algorithms:
Shortcomings and directions for improvement. Technical Report GIAA2010E001,
Department of Informatics of the Universidad Carlos III de Madrid, Madrid, Spain,
2010.
[11] H. Mühlenbein and T. Mahnig. Evolutionary algorithms and the Boltzmann distribution. In K. A. DeJong, R. Poli, and J. Rowe, editors, Foundation of Genetic
Algorithms 7, pages 133–150. Morgan Kaufmann, 2002.
[12] H. Mühlenbein, T. Mahnig, and A. Ochoa. Schemata, distributions and graphical
models in evolutionary optimization. Journal of Heuristics, 5(2):213–247, 1999.
[13] T. Okabe, Y. Jin, M. Olhofer, and B. Sendhoff. On test functions for evolutionary
multi-objective optimization. In Parallel Problem Solving from Nature-PPSN VIII,
pages 792–802. Springer, 2004.
[14] M. Pelikan, K. Sastry, and D. E. Goldberg. Multiobjective estimation of distribution algorithms. In M. Pelikan, K. Sastry, and E. Cantú-Paz, editors, Scalable
Optimization via Probabilistic Modeling: From Algorithms to Applications, Studies
in Computational Intelligence, pages 223–248. Springer, 2006.
[15] R. Santana, C. Bielza, and P. Larrañaga. Conductance interaction identification by
means of Boltzmann distribution and mutual information analysis in conductance-based neuron models. BMC Neuroscience, 13(Suppl 1):P100, 2012.
[16] R. Santana, P. Larrañaga, and J. A. Lozano. Interactions and dependencies in
estimation of distribution algorithms. In Proceedings of the 2005 Congress on Evolutionary Computation CEC-2005, pages 1418–1425, Edinburgh, U.K., 2005. IEEE
Press.
[17] R. Santana, P. Larrañaga, and J. A. Lozano. Protein folding in simplified models
with estimation of distribution algorithms. IEEE Transactions on Evolutionary
Computation, 12(4):418–438, 2008.
[18] R. Santana, R. B. McDonald, and H. G. Katzgraber. A probabilistic evolutionary
optimization approach to compute quasiparticle braids. In Proceedings of the 10th
International Conference Simulated Evolution and Learning (SEAL-2014), pages 13–
24. Springer, 2014.
[19] R. Santana, A. Mendiburu, and J. A. Lozano. Evolving MNK-landscapes with structural constraints. In Proceedings of the IEEE Congress on Evolutionary Computation
CEC 2015, pages 1364–1371, Sendai, Japan, 2015. IEEE Press.
[20] R. Santana, A. Mendiburu, and J. A. Lozano. Multi-objective NM-landscapes. In
Proceedings of the Companion Publication of the 2015 on Genetic and Evolutionary
Computation Conference, pages 1477–1478, Madrid, Spain, 2015.
[21] S. Verel, A. Liefooghe, L. Jourdan, and C. Dhaenens. On the structure of multiobjective combinatorial search space: MNK-landscapes with correlated objectives.
European Journal of Operational Research, 227(2):331–342, 2013.
arXiv:1509.05205v2 [math.GT] 10 Jul 2016
A FINITE GENERATING SET FOR THE LEVEL 2 TWIST
SUBGROUP OF THE MAPPING CLASS GROUP OF A CLOSED
NON-ORIENTABLE SURFACE
RYOMA KOBAYASHI AND GENKI OMORI
Abstract. We obtain a finite generating set for the level 2 twist subgroup of
the mapping class group of a closed non-orientable surface. The generating
set consists of crosscap pushing maps along non-separating two-sided simple
loops and squares of Dehn twists along non-separating two-sided simple closed
curves. We also prove that the level 2 twist subgroup is normally generated
in the mapping class group by a crosscap pushing map along a non-separating
two-sided simple loop for genus g ≥ 5 and g = 3. As an application, we
calculate the first homology group of the level 2 twist subgroup for genus
g ≥ 5 and g = 3.
1. Introduction
Let Ng,n be a compact connected non-orientable surface of genus g ≥ 1 with
n ≥ 0 boundary components. The surface Ng = Ng,0 is a connected sum of g
real projective planes. The mapping class group M(Ng,n ) of Ng,n is the group
of isotopy classes of self-diffeomorphisms on Ng,n fixing the boundary pointwise
and the twist subgroup T (Ng,n ) of M(Ng,n ) is the subgroup of M(Ng,n ) generated by all Dehn twists along two-sided simple closed curves. Lickorish [7] proved
that T (Ng ) is an index 2 subgroup of M(Ng ) and the non-trivial element of
M(Ng )/T (Ng ) ≅ Z/2Z =: Z2 is represented by a “Y-homeomorphism”. We define
a Y-homeomorphism in Section 2. Chillingworth [1] gave an explicit finite generating set for T (Ng ) and showed that T (N2 ) ≅ Z2 . The first homology group H1 (G)
of a group G is isomorphic to the abelianization Gab of G. The group H1 (T (Ng ))
is trivial for g ≥ 7, H1 (T (N3 )) ≅ Z12 , H1 (T (N4 )) ≅ Z2 ⊕ Z and H1 (T (Ng )) ≅ Z2
for g = 5, 6. These results were shown by Korkmaz [5] for g ≥ 7 and by Stukow [10]
for the other cases.
Let Σg,n be a compact connected orientable surface of genus g ≥ 0 with n ≥ 0
boundary components. The mapping class group M(Σg,n ) of Σg,n is the group
of isotopy classes of orientation preserving self-diffeomorphisms on Σg,n fixing the
boundary pointwise. Let S be either Ng,n or Σg,n . For n = 0 or 1, we denote
by Γ2 (S) the subgroup of M(S) which consists of elements acting trivially on
H1 (S; Z2 ). Γ2 (S) is called the level 2 mapping class group of S. For a group G,
a normal subgroup H of G and a subset X of H, H is normally generated in G
by X if H is the normal closure of X in G. In particular, for X = {x1 , . . . , xn },
if H is the normal closure of X in G, we also say that H is normally generated
in G by x1 , . . . , xn . In the case of orientable surfaces, Humphries [3] proved that
Date: November 13, 2017.
2010 Mathematics Subject Classification. 57M05, 57M07, 57M20, 57M60.
Γ2 (Σg,n ) is normally generated in M(Σg,n ) by the square of the Dehn twist along
a non-separating simple closed curve for g ≥ 1 and n = 0 or 1. In the case of nonorientable surfaces, Szepietowski [11] proved that Γ2 (Ng ) is normally generated in
M(Ng ) by a Y-homeomorphism for g ≥ 2. Szepietowski [12] also gave an explicit
finite generating set for Γ2 (Ng ). This generating set is minimal for g = 3, 4. Hirose
and Sato [2] gave a minimal generating set for Γ2 (Ng ) when g ≥ 5 and showed that
H1 (Γ2 (Ng )) ≅ Z_2^{\binom{g}{3}+\binom{g}{2}} .
We denote by T2 (Ng ) the subgroup of T (Ng ) which consists of elements acting
trivially on H1 (Ng ; Z2 ) and we call T2 (Ng ) the level 2 twist subgroup of M(Ng ).
Recall that T (N2 ) ≅ Z2 and Chillingworth [1] proved that T (N2 ) is generated by
the Dehn twist along a non-separating two-sided simple closed curve. T2 (N2 ) is
a trivial group because Dehn twists along non-separating two-sided simple closed
curves induce nontrivial actions on H1 (Ng ; Z2 ). Let Aut(H1 (Ng ; Z2 ), ·) be the group
of automorphisms on H1 (Ng ; Z2 ) preserving the intersection form · on H1 (Ng ; Z2 ).
Since the action of M(Ng ) on H1 (Ng ; Z2 ) preserves the intersection form ·, there
is the natural homomorphism from M(Ng ) to Aut(H1 (Ng ; Z2 ), ·). McCarthy and
Pinkall [8] showed that the restriction of the homomorphism to T (Ng ) is surjective.
Thus T2 (Ng ) is finitely generated.
In this paper, we give an explicit finite generating set for T2 (Ng ) (Theorem 3.1).
The generating set consists of “crosscap pushing maps” along non-separating two-sided simple loops and squares of Dehn twists along non-separating two-sided simple
closed curves. We review the crosscap pushing map in Section 2. We can see that the
generating set for T2 (Ng ) in Theorem 3.1 is minimal for g = 3 by Theorem 1.2. We
prove Theorem 3.1 in Section 3. In the last part of Subsection 3.2, we also give a
smaller finite generating set for T2 (Ng ) (Theorem 3.14). However, the generating
set consists of crosscap pushing maps along non-separating two-sided simple loops,
squares of Dehn twists along non-separating two-sided simple closed curves and
squares of Y-homeomorphisms.
By using the finite generating set for T2 (Ng ) in Theorem 3.1, we prove the
following theorem in Section 4.
Theorem 1.1. For g = 3 and g ≥ 5, T2 (Ng ) is normally generated in M(Ng ) by a
crosscap pushing map along a non-separating two-sided simple loop (See Figure 1).
T2 (N4 ) is normally generated in M(N4 ) by a crosscap pushing map along a
non-separating two-sided simple loop and the square of the Dehn twist along a non-separating two-sided simple closed curve whose complement is a connected orientable
surface (See Figure 2).
Throughout this paper, the x-marks as in Figure 1 and Figure 2 denote Möbius bands attached to boundary components, and we call such a Möbius band a crosscap. The group
which is normally generated in M(Ng ) by the square of the Dehn twist along a
non-separating two-sided simple closed curve is a subgroup of T2 (Ng ) clearly. The
authors do not know whether T2 (Ng ) is generated by squares of Dehn twists along
non-separating two-sided simple closed curves or not.
As an application of Theorem 1.1, we calculate H1 (T2 (Ng )) for g ≥ 5 in Section 5
and we obtain the following theorem.
Figure 1. A crosscap pushing map along a non-separating two-sided simple loop is described by a product of Dehn twists along
non-separating two-sided simple closed curves as in the figure.
Figure 2. A non-separating two-sided simple closed curve on N4
whose complement is a connected orientable surface.
Theorem 1.2. For g = 3 and g ≥ 5, the first homology group of T2 (Ng ) is as
follows:
H1 (T2 (Ng )) ≅ Z^2 ⊕ Z2 if g = 3, and H1 (T2 (Ng )) ≅ Z_2^{\binom{g}{3}+\binom{g}{2}−1} if g ≥ 5.
In the proof, we use the five-term exact sequence for an extension of a group for
g ≥ 5. The authors do not know the first homology group of T2 (N4 ).
2. Preliminaries
2.1. Crosscap pushing map. Let S be a compact surface and let e : D′ ↪ intS
be a smooth embedding of the unit disk D′ ⊂ C. Put D := e(D′ ). Let S ′ be the
surface obtained from S − intD by the identification of antipodal points of ∂D. We
call the manipulation that gives S ′ from S the blowup of S on D. Note that the
image M of the regular neighborhood of ∂D in S − intD by the blowup of S on
D is a crosscap, where a crosscap is a Möbius band in the interior of a surface.
Conversely, the blowdown of S ′ on M is the following manipulation that gives S
from S ′ . We paste a disk on the boundary obtained by cutting S along the center
line µ of M . The blowdown of S ′ on M is the inverse manipulation of the blowup
of S on D.
Let x0 be a point of Ng−1 and let e : D′ ↪ Ng−1 be a smooth embedding of a
unit disk D′ ⊂ C to Ng−1 such that the interior of D := e(D′ ) contains x0 . Let
M(Ng−1 , x0 ) be the group of isotopy classes of self-diffeomorphisms on Ng−1 fixing
the point x0 , where isotopies also fix x0 . Then we have the blowup homomorphism
ϕ : M(Ng−1 , x0 ) → M(Ng )
that is defined as follows. For h ∈ M(Ng−1 , x0 ), we take a representative h′ of h
which satisfies either of the following conditions: (a) h′ |D is the identity map on D,
(b) h′ (x) = e(e−1 (x)) for x ∈ D. Such h′ is compatible with the blowup of Ng−1
on D, thus ϕ(h) ∈ M(Ng ) is induced and well defined (c.f. [11, Subsection 2.3]).
The point pushing map
j : π1 (Ng−1 , x0 ) → M(Ng−1 , x0 )
is a homomorphism that is defined as follows. For γ ∈ π1 (Ng−1 , x0 ), j(γ) ∈
M(Ng−1 , x0 ) is described as the result of pushing the point x0 once along γ. Note
that for x, y ∈ π1 (Ng−1 ), yx means yx(t) = x(2t) for 0 ≤ t ≤ 1/2 and yx(t) = y(2t − 1)
for 1/2 ≤ t ≤ 1, and for elements [f ], [g] of the mapping class group, [f ][g] means
[f ◦ g].
We define the crosscap pushing map as the composition of homomorphisms:
ψ := ϕ ◦ j : π1 (Ng−1 , x0 ) → M(Ng ).
For γ ∈ π1 (Ng−1 , x0 ), we also call ψ(γ) the crosscap pushing map along γ. Remark
that for γ, γ ′ ∈ π1 (Ng−1 , x0 ), ψ(γ)ψ(γ ′ ) = ψ(γγ ′ ). The next two lemmas follow
from the description of the point pushing map (See [6, Lemma 2.2, Lemma 2.3]).
Lemma 2.1. For a two-sided simple loop γ on Ng−1 based at x0 , suppose that γ1 ,
γ2 are two-sided simple closed curves on Ng−1 such that γ1 ⊔ γ2 is the boundary
of the regular neighborhood N of γ in Ng−1 whose interior contains D. Then for
some orientation of N , we have
ψ(γ) = ϕ(t_{γ1} t_{γ2}^{−1}) = t_{γ̃1} t_{γ̃2}^{−1} ,
where γ̃1 , γ̃2 are images of γ1 , γ2 to Ng by blowups respectively (See Figure 3).
Let µ be a one-sided simple closed curve and let α be a two-sided simple closed
curve on Ng such that µ and α intersect transversely at one point. For these simple
closed curves µ and α, we denote by Yµ,α a self-diffeomorphism on Ng which is
described as the result of pushing the regular neighborhood of µ once along α.
We call Yµ,α a Y-homeomorphism (or crosscap slide). By Lemma 3.6 in [11], Y-homeomorphisms are in Γ2 (Ng ).
Lemma 2.2. Suppose that γ is a one-sided simple loop on Ng−1 based at x0 such
that γ and ∂D intersect at antipodal points of ∂D. Then we have
ψ(γ) = Y_{µ,γ̃} ,
where γ̃ is an image of γ to Ng by a blowup and µ is a center line of the crosscap
obtained from the regular neighborhood of ∂D in Ng−1 by the blowup of Ng−1 on D
(See Figure 4).
Remark that the image of a crosscap pushing map is contained in Γ2 (Ng ). By
Lemma 2.1, if γ is a two-sided simple loop on Ng , then ψ(γ) is an element of T2 (Ng ).
We remark that Y-homeomorphisms are not in T (Ng ) (See [7]).
2.2. Notation of the surface Ng . Let ei : Di′ ↪ Σ0 for i = 1, 2, . . . , g be smooth
embeddings of unit disks Di′ ⊂ C to a 2-sphere Σ0 such that Di := ei (Di′ ) and
Dj are disjoint for distinct 1 ≤ i, j ≤ g, and let xi ∈ Σ0 for i = 1, 2, . . . , g be g
points of Σ0 such that xi is contained in the interior of Di as the left-hand side
of Figure 5. Then Ng is diffeomorphic to the surface obtained from Σ0 by the
blowups on D1 , . . . , Dg . We describe the identification of ∂Di by the x-mark as
Figure 3. A crosscap pushing map along two-sided simple loop γ.
Figure 4. A crosscap pushing map along one-sided simple loop γ
(Y-homeomorphism Yµ,eγ ).
the right-hand side of Figure 5. We call the crosscap which is obtained from the
regular neighborhood of ∂Di in Σ0 by the blowup of Σ0 on Di the i-th crosscap.
We denote by N_{g−1}^{(k)} the surface obtained from Σ0 by the blowups on Di for every
i ≠ k. N_{g−1}^{(k)} is diffeomorphic to Ng−1 . Let xk;i be a simple loop on N_{g−1}^{(k)} based at
xk for i ≠ k as in Figure 6. Then the fundamental group π1 (N_{g−1}^{(k)} ) = π1 (N_{g−1}^{(k)} , xk ) of
N_{g−1}^{(k)} has the following presentation:
π1 (N_{g−1}^{(k)} ) = ⟨ xk;1 , . . . , xk;k−1 , xk;k+1 , . . . , xk;g | x_{k;1}^2 · · · x_{k;k−1}^2 x_{k;k+1}^2 · · · x_{k;g}^2 = 1 ⟩.
Figure 5. The embedded disks D1 , D2 , . . . , Dg on Σ0 and the
surface Ng .
2.3. Notations of mapping classes. Let ψk : π1 (N_{g−1}^{(k)} ) → M(Ng ) be the crosscap pushing map obtained from the blowup of N_{g−1}^{(k)} on Dk and let π1 (N_{g−1}^{(k)} )+ be
Figure 6. The simple loop xk;i for 1 ≤ i ≤ k − 1 and xk;j for k + 1 ≤ j ≤ g on N_{g−1}^{(k)} based at xk .
the subgroup of π1 (N_{g−1}^{(k)} ) generated by two-sided simple loops on N_{g−1}^{(k)} based at
xk . By Lemma 2.1, we have ψk (π1 (N_{g−1}^{(k)} )+ ) ⊂ T2 (Ng ). We define non-separating
two-sided simple loops αk;i,j and βk;i,j on N_{g−1}^{(k)} based at xk as in Figure 7 for distinct 1 ≤ i < j ≤ g and 1 ≤ k ≤ g. We also define αk;j,i := αk;i,j and βk;j,i := βk;i,j
for distinct 1 ≤ i < j ≤ g and 1 ≤ k ≤ g. We have the following equations:
αk;i,j = xk;i xk;j
for i < j < k or j < k < i or k < i < j,
βk;i,j = xk;j xk;i
for i < j < k or j < k < i or k < i < j.
Denote the crosscap pushing maps ak;i,j := ψk (αk;i,j ) and bk;i,j := ψk (βk;i,j ). Remark that ak;i,j and bk;i,j are contained in the image of ψk |_{π1 (N_{g−1}^{(k)} )+} . Let η be the
self-diffeomorphism on Ng which is the rotation of Ng such that η sends the i-th
crosscap to the (i + 1)-st crosscap for 1 ≤ i ≤ g − 1 and the g-th crosscap to the
1-st crosscap as Figure 8. Then we have ak;i,j = η k−1 a1;i−k+1,j−k+1 η −(k−1) and
bk;i,j = η k−1 b1;i−k+1,j−k+1 η −(k−1) for each distinct 1 ≤ i, j, k ≤ g.
Figure 7. Two-sided simple loops αk;i,j and βk;i,j on N_{g−1}^{(k)} based at xk .
For distinct i1 , i2 , . . . , in ∈ {1, 2, . . . , g}, we define a simple closed curve αi1 ,i2 ,...,in
on Ng as in Figure 9. The arrow on the side of the simple closed curve αi1 ,i2 ,...,in
Figure 8. The self-diffeomorphism η on Ng .
in Figure 9 indicates the direction of the Dehn twist tαi1 ,i2 ,...,in along αi1 ,i2 ,...,in if
n is even. We set the notations of Dehn twists and Y-homeomorphisms as follows:
Ti,j := t_{αi,j} for 1 ≤ i < j ≤ g,
Ti,j,k,l := t_{αi,j,k,l} for g ≥ 4 and 1 ≤ i < j < k < l ≤ g,
Yi,j := Y_{αi ,αi,j} = ψi (xi;j ) for distinct 1 ≤ i, j ≤ g.
Note that T_{i,j}^2 and T_{i,j,k,l}^2 are elements of T2 (Ng ), and Yi,j is an element of Γ2 (Ng ) but Yi,j is not an element of T2 (Ng ). We remark that ak;i,j = b_{k;i,j}^{−1} = T_{i,j}^2 for any
distinct i, j, k ∈ {1, 2, 3} when g = 3.
Figure 9. The simple closed curve αi1 ,i2 ,...,in on Ng .
3. Finite generating set for T2 (Ng )
In this section, we prove the main theorem in this paper. The main theorem is
as follows:
Theorem 3.1. For g ≥ 3, T2 (Ng ) is generated by the following elements:
(i) ak;i,i+1 , bk;i,i+1 , ak;k−1,k+1 , bk;k−1,k+1 for 1 ≤ k ≤ g, 1 ≤ i ≤ g and
i ≠ k − 1, k,
(ii) a1;2,4 , bk;1,4 , al;1,3 for k = 2, 3 and 4 ≤ l ≤ g when g is odd,
(iii) T_{1,j,k,l}^2 for 2 ≤ j < k < l ≤ g when g ≥ 4,
where the indices are considered modulo g.
We remark that the number of generators in Theorem 3.1 is (1/6)(g^3 + 6g^2 + 5g − 6)
for g ≥ 4 odd, (1/6)(g^3 + 6g^2 − g − 6) for g ≥ 4 even and 3 for g = 3.
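As a quick arithmetic sanity check of these counts (our own verification, not part of the paper), both expressions evaluate to integers at the smallest admissible genera:

```latex
\[
\tfrac{1}{6}\left(g^{3}+6g^{2}+5g-6\right)\Big|_{g=5}=\tfrac{294}{6}=49,
\qquad
\tfrac{1}{6}\left(g^{3}+6g^{2}-g-6\right)\Big|_{g=4}=\tfrac{150}{6}=25 .
\]
```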
3.1. Finite generating set for π1 (N_{g−1}^{(k)} )+ . First, we have the following lemma:
Lemma 3.2. For g ≥ 2, π1 (N_{g−1}^{(k)} )+ is an index 2 subgroup of π1 (N_{g−1}^{(k)} ).
Proof. Note that π1 (N_{g−1}^{(k)} ) is generated by xk;1 , . . . , xk;k−1 , xk;k+1 , . . . , xk;g . If g = 2, π1 (N_{g−1}^{(k)} ) is isomorphic to Z2 which is generated by a one-sided simple loop. Hence π1 (N_{g−1}^{(k)} )+ is trivial and we obtain this lemma when g = 2.
We assume that g ≥ 3. For i ≠ k, we have
xk;i = x_{k;k−1}^{−1} · xk;k−1 xk;i .
Since xk;k−1 xk;i = βk;i,k−1 ∈ π1 (N_{g−1}^{(k)} )+ , the equivalence classes of xk;i and x_{k;k−1}^{−1} in π1 (N_{g−1}^{(k)} )/π1 (N_{g−1}^{(k)} )+ are the same. We also have
xk;k−1 = x_{k;k−1}^{−1} · x_{k;k−1}^2 .
Since x_{k;k−1}^2 ∈ π1 (N_{g−1}^{(k)} )+ , the equivalence classes of xk;k−1 and x_{k;k−1}^{−1} in π1 (N_{g−1}^{(k)} )/π1 (N_{g−1}^{(k)} )+ are the same. Thus π1 (N_{g−1}^{(k)} )/π1 (N_{g−1}^{(k)} )+ is generated by the equivalence class [xk;k−1 ] whose order is 2 and we have completed the proof of Lemma 3.2.
N_{g−1}^{(k)} is diffeomorphic to the surface on the left-hand side (resp. right-hand side) of Figure 10 when g − 1 = 2h + 1 (resp. g − 1 = 2h + 2). We take a diffeomorphism which sends xk;i for i ≠ k and xk as in Figure 6 to xk;i for i ≠ k and xk as in Figure 10 and identify N_{g−1}^{(k)} with the surface in Figure 10 by the diffeomorphism. Denote by pk : Ñ_{g−1}^{(k)} ↠ N_{g−1}^{(k)} the orientation double covering of N_{g−1}^{(k)} as in Figure 11. Then Hk := (pk )∗ (π1 (Ñ_{g−1}^{(k)} )) is an index 2 subgroup of π1 (N_{g−1}^{(k)} ). Note that when g − 1 = 2h + 1, π1 (Ñ_{g−1}^{(k)} ) is generated by yk;i for 1 ≤ i ≤ 4h, and when g − 1 = 2h + 2, π1 (Ñ_{g−1}^{(k)} ) is generated by yk;i for 1 ≤ i ≤ 4h + 2, where the yk;i are two-sided simple loops on Ñ_{g−1}^{(k)} based at the lift x̃k of xk as in Figure 11.
We have the following Lemma.
Lemma 3.3. For g − 1 ≥ 1 and 1 ≤ k ≤ g, Hk = π1 (N_{g−1}^{(k)} )+ .
Proof. Note that π1 (N_{g−1}^{(k)} )+ is an index 2 subgroup of π1 (N_{g−1}^{(k)} ) by Lemma 3.2. It is sufficient for the proof of Lemma 3.3 to prove Hk ⊂ π1 (N_{g−1}^{(k)} )+ because the index of Hk in π1 (N_{g−1}^{(k)} ) is
2 = [π1 (N_{g−1}^{(k)} ) : Hk ] = [π1 (N_{g−1}^{(k)} ) : π1 (N_{g−1}^{(k)} )+ ][π1 (N_{g−1}^{(k)} )+ : Hk ] = 2 · [π1 (N_{g−1}^{(k)} )+ : Hk ]
if Hk ⊂ π1 (N_{g−1}^{(k)} )+ .
We define subsets of π1 (N_{g−1}^{(k)} )+ as follows:
A := {xk;j+1 xk;j , xk;k+1 xk;k−1 | 1 ≤ j ≤ g − 1, j ≠ k − 1, k},
B := {xk;j xk;j+1 , xk;k−1 xk;k+1 | 1 ≤ j ≤ g − 1, j ≠ k − 1, k},
Figure 10. N_{g−1}^{(k)} is diffeomorphic to the surface on the left-hand side (resp. right-hand side) of the figure when g − 1 = 2h + 1 (resp. g − 1 = 2h + 2). We regard the above surface on the left-hand side as the surface obtained by identifying antipodal points of the boundary component, and the above surface on the right-hand side as the surface obtained by attaching the boundary components along the orientation of the boundary.
C := {x_{k;1}^2} if k ≠ 1, and C := {x_{k;2}^2} if k = 1.
π1 (Ñ_{g−1}^{(k)} ) is generated by the yk;i . For i ≤ 2h when g − 1 = 2h + 1 (resp. i ≤ 2h + 1 when g − 1 = 2h + 2), we can check that
(pk )∗ (yk;i ) = xk;ρ(i+1) xk;ρ(i) if 2 ≤ i ≤ 2h and g − 1 = 2h + 1,
(pk )∗ (yk;i ) = xk;g xk;g−1 if i = 1 and g − 1 = 2h + 1,
(pk )∗ (yk;i ) = xk;ρ(i+1) xk;ρ(i) if 2 ≤ i ≤ 2h + 1 and g − 1 = 2h + 2,
(pk )∗ (yk;i ) = xk;g xk;g−1 if i = 1 and g − 1 = 2h + 2,
and (pk )∗ (yk;i ) is an element of A, where ρ is the order reversing bijection from {1, 2, . . . , 2h} (resp. {1, 2, . . . , 2h + 1}) to {1, 2, . . . , g − 1} − {k}. Since, when g − 1 = 2h + 1 or g − 1 = 2h′ + 2, we have
(pk )∗ (yk;2h+1 ) = x_{k;1}^2 if k ≠ 1, and x_{k;2}^2 if k = 1;
(pk )∗ (yk;2h′+2 ) = x_{k;1}^2 if k ≠ 1, and x_{k;2}^2 if k = 1,
(pk )∗ (yk;2h+1 ) and (pk )∗ (yk;2h′ +2 ) are elements of C respectively (See Figure 12).
Finally, for i ≥ 2h + 2 when g − 1 = 2h + 1 (resp. i ≥ 2h + 3 when g − 1 = 2h + 2), we
can also check that
(pk )∗ (yk;i ) = xk;ρ′(i) xk;ρ′(i+1) if 2h + 2 ≤ i ≤ 4h − 1 and g − 1 = 2h + 1,
(pk )∗ (yk;i ) = xk;g−1 xk;g if i = 4h and g − 1 = 2h + 1,
(pk )∗ (yk;i ) = xk;ρ′(i) xk;ρ′(i+1) if 2h + 3 ≤ i ≤ 4h + 1 and g − 1 = 2h + 2,
(pk )∗ (yk;i ) = xk;g−1 xk;g if i = 4h + 2 and g − 1 = 2h + 2,
and (pk )∗ (yk;i ) = xk;ρ′ (i) xk;ρ′ (i+1) is an element of B, where ρ′ is the order preserving bijection from {2h + 2, 2h + 3, . . . , 4h} (resp. {2h + 3, 2h + 4, . . . , 4h + 2}) to
{1, 2, . . . , g − 1} − {k} (See Figure 13). We obtain this lemma.
R. KOBAYASHI AND G. OMORI
Figure 11. The total space Ñ^(k)_{g−1} of the orientation double covering pk of N^(k)_{g−1} and two-sided simple loops yk;i on Ñ^(k)_{g−1} based at x̃k.
Figure 12. The representative of xk;1^2 when k ≠ 1 or xk;2^2 when k = 1.
By the proof of Lemma 3.3, we have the following proposition.
Proposition 3.4. For g ≥ 2, π1(N^(k)_{g−1})^+ is generated by the following elements:
Figure 13. The representative of xk;j xk;j+1 and xk;k−1 xk;k+1 for j ≠ k − 1, k.
(1) xk;i+1 xk;i, xk;i xk;i+1, xk;k+1 xk;k−1, xk;k−1 xk;k+1 for 1 ≤ i ≤ g − 1 and i ≠ k − 1, k,
(2) xk;2^2 when k = 1,
(3) xk;1^2 when 2 ≤ k ≤ g.
We remark that xk;i+1 xk;i = βk;i,i+1, xk;i xk;i+1 = αk;i,i+1, xk;k+1 xk;k−1 = αk;k−1,k+1, xk;k−1 xk;k+1 = βk;k−1,k+1 and Yi,j^2 = Yj,i^2. Let G be the subgroup of T2(Ng) generated by ∪_{k=1}^{g} ψk(π1(N^(k)_{g−1})^+). The next corollary follows from Proposition 3.4 immediately.
Corollary 3.5. For g ≥ 2, G is generated by the following elements:
(i) ak;i,i+1, bk;i,i+1, ak;k−1,k+1, bk;k−1,k+1 for 1 ≤ k ≤ g, 1 ≤ i ≤ g − 1 and i ≠ k − 1, k,
(ii) Y1,j^2 when 2 ≤ j ≤ g,
where the indices are considered modulo g.
The simple loops xk;1^2 and xk;2^2 are separating loops. By the next proposition, π1(N^(k)_{g−1})^+ is generated by finitely many two-sided non-separating simple loops.
Proposition 3.6. For g ≥ 2, π1(N^(k)_{g−1})^+ is generated by the following elements:
(1) xk;i+1 xk;i, xk;i xk;i+1, xk;k+1 xk;k−1, xk;k−1 xk;k+1 for 1 ≤ i ≤ g and i ≠ k − 1, k,
(2) xk;2 xk;4 when k = 1 and g − 1 is even,
(3) xk;1 xk;4 when k = 2, 3 and g − 1 is even,
(4) xk;1 xk;3 when 4 ≤ k ≤ g and g − 1 is even,
where the indices are considered modulo g.
Proof. When g − 1 is odd, since we have

x1;2^2 = x1;2 x1;3 · x1;3^{-1} x1;4 · x1;4^{-1} x1;5 · · · · · x1;g−1^{-1} x1;g · x1;g^{-1} x1;2
and

xk;1^2 = xk;1 xk;2 · xk;2^{-1} xk;3 · xk;3^{-1} xk;4 · · · · · xk;g−1^{-1} xk;g · xk;g^{-1} xk;1

for 2 ≤ k ≤ g, this proposition is clear.
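The telescoping pattern above can be checked mechanically. The following snippet is an illustrative sanity check of ours, not part of the paper: it models letters as (crosscap index, exponent) pairs and verifies free reduction, under the assumption that the inverses fall on the middle factors as displayed.

```python
# Illustrative check (not from the paper): the product
#   x1;2 x1;3 · x1;3^{-1} x1;4 · x1;4^{-1} x1;5 · ... · x1;g^{-1} x1;2
# freely reduces to x1;2^2.  Letters are (crosscap index, exponent) pairs.

def free_reduce(word):
    out = []
    for letter in word:
        if out and out[-1][0] == letter[0] and out[-1][1] == -letter[1]:
            out.pop()                  # cancel x · x^{-1} or x^{-1} · x
        else:
            out.append(letter)
    return out

def telescoping_product(g):
    word = [(2, 1), (3, 1)]            # first factor x1;2 x1;3
    for j in range(3, g):              # middle factors x1;j^{-1} x1;j+1
        word += [(j, -1), (j + 1, 1)]
    word += [(g, -1), (2, 1)]          # last factor x1;g^{-1} x1;2
    return free_reduce(word)

reduced = {g: telescoping_product(g) for g in range(4, 10)}
```

Every value in `reduced` is [(2, 1), (2, 1)], i.e. x1;2^2, as the telescoping suggests.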
When g − 1 is even, we use the relation

xk;1^2 · · · xk;k−1^2 xk;k+1^2 · · · xk;g^2 = 1.
For k = 1, we have

x1;2^2 = x1;2 x1;4 · x1;4 x1;5 · · · · · x1;g x1;2 · x1;2 x1;3 · x1;3 x1;2.
By a similar argument, we also have the following equations:

x2;1^2 = x2;1 x2;4 · x2;4 x2;5 · · · · · x2;g x2;1 · x2;1 x2;3 · x2;3 x2;1 if k = 2,
x3;1^2 = x3;1 x3;4 · x3;4 x3;5 · · · · · x3;g x3;1 · x3;1 x3;2 · x3;2 x3;1 if k = 3,
xk;1^2 = xk;1 xk;3 · xk;3 xk;4 · · · · · xk;k−1 xk;k+1 · · · · · xk;g xk;1 · xk;1 xk;2 · xk;2 xk;1 if 4 ≤ k ≤ g.
We obtain this proposition.
We remark that xk;1 xk;g = βk;1,g, xk;g xk;1 = αk;1,g, x1;2 x1;4 = α1;2,4, xk;1 xk;4 = βk;1,4 for k = 2, 3 and xk;1 xk;3 = αk;1,3 for 4 ≤ k ≤ g. By the above remarks, we have the following corollary.
Corollary 3.7. For g ≥ 2, G is generated by the following elements:
(i) ak;i,i+1, bk;i,i+1, ak;k−1,k+1, bk;k−1,k+1 for 1 ≤ k ≤ g, 1 ≤ i ≤ g and i ≠ k − 1, k,
(ii) a1;2,4, bk;1,4, al;1,3 for k = 2, 3 and 4 ≤ l ≤ g when g is odd,
where the indices are considered modulo g.
3.2. Proof of Main-Theorem. First, we obtain a finite generating set for T2(Ng) by the Reidemeister-Schreier method (see for instance [4]). We apply it to the following minimal generating set for Γ2(Ng), given by Hirose and Sato [2] when g ≥ 5 and by Szepietowski [12] when g = 3, 4.
Theorem 3.8 ([2, 12]). For g ≥ 3, Γ2(Ng) is generated by the following elements:
(1) Yi,j for 1 ≤ i ≤ g − 1, 1 ≤ j ≤ g and i ≠ j,
(2) T1,j,k,l^2 for 2 ≤ j < k < l ≤ g when g ≥ 4.
Proposition 3.9. For g ≥ 3, T2(Ng) is generated by the following elements:
(1) Yi,j Y1,2, Yi,j^2 for 1 ≤ i ≤ g − 1, 1 ≤ j ≤ g and i ≠ j,
(2) Y1,2^{-1} T1,j,k,l^2 Y1,2, T1,j,k,l^2 for 2 ≤ j < k < l ≤ g when g ≥ 4.
Proof. Note that T2 (Ng ) is the intersection of Γ2 (Ng ) and T (Ng ). Hence we have
the isomorphisms
Γ2(Ng)/(Γ2(Ng) ∩ T(Ng)) ≅ (Γ2(Ng)T(Ng))/T(Ng) ≅ Z2[Y1,2].
We remark that Γ2 (Ng )T (Ng ) = M(Ng ) and the last isomorphism is given by
Lickorish [7]. Thus T2 (Ng ) is an index 2 subgroup of Γ2 (Ng ).
Set U := {Y1,2 , 1} and X as the generating set for Γ2 (Ng ) in Theorem 3.8, where
1 means the identity element. Then U is a Schreier transversal for T2 (Ng ) in Γ2 (Ng ).
For x ∈ Γ2(Ng), define \overline{x} as the element of U such that [\overline{x}] = [x] in Γ2(Ng)/T2(Ng).
By the Reidemeister-Schreier method, for g ≥ 4, T2(Ng) is generated by

B = { \overline{wu}^{-1} wu | w ∈ X^±, u ∈ U, wu ∉ U }
  = { Yi,j^{±1} Y1,2, Y1,2^{-1} Yi,j^{±1} | 1 ≤ i ≤ g − 1, 1 ≤ j ≤ g, i ≠ j }
    ∪ { Y1,2^{-1} T1,j,k,l^{±2} Y1,2, T1,j,k,l^{±2} | 2 ≤ j < k < l ≤ g },

where X^± := X ∪ {x^{-1} | x ∈ X}; note that the equivalence class of a Y-homeomorphism in Γ2(Ng)/T2(Ng) is nontrivial. Since Y1,2^{-1} Yi,j^{±1} = (Yi,j^{∓1} Y1,2)^{-1} and Yi,j^{-1} Y1,2 = Yi,j^{-2} · Yi,j Y1,2, we have the following generating set for T2(Ng):

B′ = { Yi,j Y1,2, Yi,j^2 | 1 ≤ i ≤ g − 1, 1 ≤ j ≤ g, i ≠ j }
   ∪ { Y1,2^{-1} T1,j,k,l^2 Y1,2, T1,j,k,l^2 | 2 ≤ j < k < l ≤ g }.

By a similar discussion, T2(N3) is generated by

B′ = { Yi,j Y1,2, Yi,j^2 | 1 ≤ i ≤ 2, 1 ≤ j ≤ 3, i ≠ j }.
We obtain this proposition.
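The Reidemeister-Schreier computation above is small enough to mechanize. The following sketch is our own illustration, not part of the paper; it uses simplified stand-in symbols 'Yij' for a fixed Yi,j, 'Y12' for Y1,2 and 'T2' for a fixed T1,j,k,l^2, enumerates the Schreier generators \overline{wu}^{-1}wu for the transversal U = {1, Y1,2}, and checks that each lies in the kernel of the map to Z/2.

```python
# Sketch (not from the paper) of the Reidemeister-Schreier step in the proof of
# Proposition 3.9.  Words are lists of (symbol, exponent); the coset map to
# Z/2 = Gamma2/T2 counts the total exponent of Y-type letters.

Y_SYMBOLS = {'Yij', 'Y12'}

def coset(word):
    return sum(e for s, e in word if s in Y_SYMBOLS) % 2

def inv(word):
    return [(s, -e) for s, e in reversed(word)]

X = [[('Yij', 1)], [('T2', 1)]]        # generators of Gamma2 (Theorem 3.8)
U = [[], [('Y12', 1)]]                 # Schreier transversal {1, Y_{1,2}}

def bar(word):                         # coset representative in U
    return U[coset(word)]

schreier = []
for x in X:
    for w in (x, inv(x)):
        for u in U:
            wu = w + u
            if wu not in U:
                schreier.append(inv(bar(wu)) + wu)
```

The eight words produced have the shapes Y^{±1}·Y12, Y12^{-1}·Y^{±1}, T^{±2} and Y12^{-1}·T^{±2}·Y12, matching the set B in the proof.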
Let G be the group generated by the elements of type (i), (ii) and (iii) in Theorem 3.1. Then G is clearly a subgroup of T2(Ng), and it is sufficient for the proof of Theorem 3.1 to prove B′ ⊂ G, where B′ is the generating set for T2(Ng) in the proof of Proposition 3.9. By Corollary 3.7, we have ψk(π1(N^(k)_{g−1})^+) ⊂ G for any 1 ≤ k ≤ g. Thus Yi,j^2 = ψi(xi;j^2) ∈ ψi(π1(N^(i)_{g−1})^+) ⊂ G. We complete the proof of Theorem 3.1 if Yi,j Y1,2 and Y1,2^{-1} T1,j,k,l^2 Y1,2 are in G.
Lemma 3.10. For g ≥ 4, Y1,2^{-1} T1,j,k,l^2 Y1,2 ∈ G.
Proof. Since Y1,2^{-1} T1,j,k,l^2 Y1,2 = t^2_{Y1,2^{-1}(α1,j,k,l)}, the element Y1,2^{-1} T1,j,k,l^2 Y1,2 is the square of the Dehn twist along the two-sided simple closed curve as in Figure 14. Then we have a1;k,l(α1,2,k,l) = Y1,2^{-1}(α1,2,k,l) and Y1,2^{-2} a1;2,j a1;k,l(α1,j,k,l) = Y1,2^{-1}(α1,j,k,l) for 3 ≤ j ≤ g, and the local orientations of the regular neighborhoods of a1;k,l(α1,2,k,l) (resp. Y1,2^{-2} a1;2,j a1;k,l(α1,j,k,l)) and Y1,2^{-1}(α1,2,k,l) (resp. Y1,2^{-1}(α1,j,k,l)) are different. Therefore we have

Y1,2^{-1} T1,2,k,l^2 Y1,2 = a1;k,l T1,2,k,l^{-2} a1;k,l^{-1},
Y1,2^{-1} T1,j,k,l^2 Y1,2 = Y1,2^{-2} a1;2,j a1;k,l T1,j,k,l^{-2} a1;k,l^{-1} a1;2,j^{-1} Y1,2^2 for 3 ≤ j ≤ g.

By Corollary 3.7, a1;k,l, a1;2,j ∈ ψ1(π1(N^(1)_{g−1})^+) ⊂ G. We obtain this lemma.
Szepietowski [11, Lemma 3.1] showed that for any non-separating two-sided simple closed curve γ, t2γ is a product of two Y-homeomorphisms. In particular, we
have the following lemma.
Lemma 3.11 ([11]). For distinct 1 ≤ i, j ≤ g,

Yj,i^{-1} Yi,j = Yj,i Yi,j^{-1} = Ti,j^2 for i < j, and Yj,i^{-1} Yi,j = Yj,i Yi,j^{-1} = Ti,j^{-2} for j < i.
Figure 14. The upper side of the figure is the simple closed curve Y1,2^{-1}(α1,2,k,l) on Ng and the lower side of the figure is the simple closed curve Y1,2^{-1}(α1,j,k,l) on Ng for 3 ≤ j ≤ g.
Lemma 3.12. For distinct 1 ≤ i, j ≤ g, Ti,j^2 ∈ G.
Proof. We argue as in the proof of Lemma 3.5 in [12]. Let γi be the two-sided simple loop on N^(i)_{g−1} for i = 3, . . . , g as in Figure 15. Then we have T1,2^2 = ψg(γg) · · · ψ4(γ4) ψ3(γ3) (see Figure 15). Since γi ∈ π1(N^(i)_{g−1})^+, each ψi(γi) is an element of G by Corollary 3.7. Hence we have T1,2^2 ∈ G.
We denote by σi,j the self-diffeomorphism of Ng obtained by the transposition of the i-th crosscap and the j-th crosscap as in Figure 16; σi,j is called the crosscap transposition (cf. [9]). For 1 ≤ i < j ≤ g, put fi,j ∈ M(Ng) as follows:

f1,2 := 1,
f1,j := σj−1,j · · · σ3,4 σ2,3 for 3 ≤ j ≤ g,
fi,j := σi−1,i · · · σ2,3 σ1,2 f1,j for 2 ≤ i < j ≤ g.
Then Ti,j^2 = fi,j T1,2^2 fi,j^{-1} = fi,j ψg(γg) fi,j^{-1} · · · fi,j ψ4(γ4) fi,j^{-1} · fi,j ψ3(γ3) fi,j^{-1}. Since the action of σi,j on Ng preserves the set of crosscaps, fi,j ψk(γk) fi,j^{-1} is an element of ψk′(π1(N^(k′)_{g−1})) for some k′. By Corollary 3.7, we have fi,j ψk(γk) fi,j^{-1} ∈ G and we obtain this lemma.
Finally, by the following proposition, we complete the proof of Theorem 3.1.
Proposition 3.13. For 1 ≤ i, j, k, l ≤ g with i ≠ j and k ≠ l, Yk,l Yi,j ∈ G.
Proof. In each case below, Yk,l Yi,j is written as a product of elements of G.
(a) case (k, l) = (i, j):
Yk,l Yi,j = Yi,j^2.
By Corollary 3.7, the right-hand side is an element of G.
(b) case (k, l) = (j, i):
Yj,i Yi,j = Yj,i^2 · Yj,i^{-1} Yi,j = (a) · Ti,j^{±2} (by Lemma 3.11).
By Lemma 3.12, the right-hand side is an element of G.
(c) case k = i and l ≠ j:
Yi,l Yi,j = ψi(xi;l) ψi(xi;j) = ψi(αi;j,l) = ai;j,l ∈ G (by Corollary 3.7).
Figure 15. T1,2^2 is a product of crosscap pushing maps along γ3, γ4, . . . , γg.
Figure 16. The crosscap transposition σi,j .
(d) case k ≠ i and l = j:
Yk,j Yi,j = Yk,j Yk,i · Yk,i^{-1} Yi,k^{-1} · Yi,k Yi,j = (c) · (b) · (c) ∈ G.
(e) case k = j and l ≠ i:
Yj,l Yi,j = Yj,l Yj,i · Yj,i^{-1} Yi,j = (c) · Ti,j^{±2} (by Lemma 3.11) ∈ G (by Lemma 3.12).
(f) case k ≠ j and l = i:
Yk,i Yi,j = Yk,i Yi,k^{-1} · Yi,k Yi,j = Ti,k^{±2} · (c) (by Lemma 3.11) ∈ G (by Lemma 3.12).
(g) case {k, l} ∩ {i, j} is empty:
Yk,l Yi,j = Yk,l Yk,j · Yk,j^{-1} Yk,j^{-1} · Yk,j Yi,j = (c) · (a) · (d) ∈ G.
This completes the proof of the proposition.
By a discussion similar to that in Subsection 3.2, using Corollary 3.5, we obtain the following theorem.
Theorem 3.14. For g ≥ 3, T2(Ng) is generated by the following elements:
(i) ak;i,i+1, bk;i,i+1, ak;k−1,k+1, bk;k−1,k+1 for 1 ≤ k ≤ g, 1 ≤ i ≤ g − 1 and i ≠ k − 1, k,
(ii) Y1,j^2 for 2 ≤ j ≤ g,
(iii) T1,j,k,l^2 for 2 ≤ j < k < l ≤ g when g ≥ 4,
where the indices are considered modulo g.
Since the number of generators in Theorem 3.14 is (1/6)(g^3 + 6g^2 − 7g − 12) for g ≥ 4 and 3 for g = 3, the number of generators in Theorem 3.14 is smaller than the number of generators in Theorem 3.1. On the other hand, by Theorem 1.2, the dimension of the first homology group H1(T2(Ng)) of T2(Ng) is (g choose 3) + (g choose 2) − 1 = (1/6)(g^3 − g − 6) for g ≥ 4. The difference between them is g^2 − g − 1. The authors do not know the minimal number of generators for T2(Ng) when g ≥ 4.
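The counts quoted above can be checked by elementary arithmetic. The snippet below is our own sanity check, not part of the paper; it assumes that in family (i) each k contributes 2(g − 3) generators ak;i,i+1, bk;i,i+1 plus the pair ak;k−1,k+1, bk;k−1,k+1.

```python
# Arithmetic sanity check (ours, not the paper's) of the generator counts.
from math import comb

def generators_theorem_3_14(g):
    type_i = g * (2 * (g - 3) + 2)     # crosscap pushing maps a, b (assumption)
    type_ii = g - 1                    # Y_{1,j}^2 for 2 <= j <= g
    type_iii = comb(g - 1, 3)          # T_{1,j,k,l}^2, 2 <= j < k < l <= g
    return type_i + type_ii + type_iii

def h1_dim(g):                         # dim H_1(T2(Ng)), Theorem 1.2, g >= 4
    return comb(g, 3) + comb(g, 2) - 1

ok = all(
    generators_theorem_3_14(g) == (g**3 + 6 * g**2 - 7 * g - 12) // 6
    and h1_dim(g) == (g**3 - g - 6) // 6
    and generators_theorem_3_14(g) - h1_dim(g) == g**2 - g - 1
    for g in range(4, 50)
)
```

All three closed forms agree for 4 ≤ g < 50, in particular the difference g^2 − g − 1.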
4. Normal generating set for T2 (Ng )
The next lemma is a generalization of the argument in the proof of Lemma 3.5
in [12].
Lemma 4.1. Let γ be a non-separating two-sided simple closed curve on Ng such that Ng − γ is a non-orientable surface. Then tγ^2 is a product of crosscap pushing maps along two-sided non-separating simple loops, and these crosscap pushing maps are conjugate to a1;2,3 in M(Ng).
Proof of Theorem 1.1. By Theorem 3.1, T2(Ng) is generated by (I) crosscap pushing maps along non-separating two-sided simple loops and (II) T1,j,k,l^2 for 2 ≤ j < k < l ≤ g. When g = 3, T2(Ng) is generated by T1,2^2, T1,3^2, T2,3^2. Recall that Ti,j^2 = ak;i,j^{-1} when g = 3. Since Ng − αi,j is non-orientable for g ≥ 3, ak;i,j is conjugate to ak′;i′,j′ in M(Ng). Hence Theorem 1.1 is clear when g = 3.
Assume g ≥ 4. For a non-separating two-sided simple loop c on N^(k)_{g−1} based at xk, by Lemma 2.1, there exist non-separating two-sided simple closed curves c1 and c2 such that ψ(c) = t_{c1} t_{c2}^{-1}, where c1 and c2 are the images in Ng of the boundary components of a regular neighborhood of c in N^(k)_{g−1} under a blowup. Then the surface obtained by cutting Ng along c1 and c2 is diffeomorphic to a disjoint union of Ng−3,2 and N1,2. Thus the mapping classes of type (I) are conjugate to a1;2,3 in M(Ng). We obtain Theorem 1.1 for g = 4.
Assume g ≥ 5. The simple closed curves αi,j,k,l satisfy the condition of Lemma 4.1. Therefore T1,j,k,l^2 is a product of crosscap pushing maps along non-separating two-sided simple loops, and such crosscap pushing maps are conjugate to a1;2,3 in M(Ng). We have completed the proof of Theorem 1.1.
5. First homology group of T2 (Ng )
By the argument in the proof of Proposition 3.9, for g ≥ 2, we have the following
exact sequence:
(5.1)
1 −→ T2 (Ng ) −→ Γ2 (Ng ) −→ Z2 −→ 0,
where Z2 is generated by the equivalence class of a Y-homeomorphism.
The level 2 principal congruence subgroup Γ2 (n) of GL(n, Z) is the kernel of the
natural surjection GL(n, Z) ։ GL(n, Z2 ). Szepietowski [12, Corollary 4.2] showed
that there exists an isomorphism θ : Γ2 (N3 ) → Γ2 (2) which is induced by the action
of Γ2 (N3 ) on the free part of H1 (N3 ; Z). Since the determinant of the action of a
Dehn twist on the free part of H1(N3; Z) is 1, we have the following commutative diagram of exact sequences:

(5.2)
1 −→ T2(N3) −→ Γ2(N3) −→ Z2 −→ 1
1 −→ SL(2, Z)[2] −→ Γ2(2) −(det)→ Z2 −→ 1,

with vertical maps θ|_{T2(N3)} : T2(N3) → SL(2, Z)[2], θ : Γ2(N3) → Γ2(2) and the identity on Z2, where SL(n, Z)[2] := Γ2(n) ∩ SL(n, Z) is the level 2 principal congruence subgroup of the integral special linear group SL(n, Z). By the commutative diagram (5.2), T2(N3) is isomorphic to SL(2, Z)[2].
Proof of Theorem 1.2. For g = 3, the first homology group H1(T2(N3)) is isomorphic to H1(SL(2, Z)[2]) by the commutative diagram (5.2). The restriction to SL(2, Z)[2] of the natural surjection from SL(2, Z) to the projective special linear group PSL(2, Z) gives the following commutative diagram of exact sequences:

(5.3)
1 −→ Z2[−E] −→ SL(2, Z) −→ PSL(2, Z) −→ 1
1 −→ Z2[−E] −→ SL(2, Z)[2] −→ PSL(2, Z)[2] −→ 1,

with vertical maps the identity on Z2[−E] and the inclusions SL(2, Z)[2] ⊂ SL(2, Z) and PSL(2, Z)[2] ⊂ PSL(2, Z), where E is the identity matrix and PSL(n, Z)[2] := SL(n, Z)[2]/{±E} is the level 2 principal congruence subgroup of PSL(2, Z). Since PSL(2, Z)[2] is isomorphic to the free group F2 of rank 2 and −E commutes with all matrices, the exact sequence in the lower row of Diagram (5.3) is split and SL(2, Z)[2] is isomorphic to F2 ⊕ Z2. Thus H1(T2(N3)) is isomorphic to Z^2 ⊕ Z2.
For g ≥ 2, the exact sequence (5.1) induces the five term exact sequence between
these groups:
H2 (Γ2 (Ng )) −→ H2 (Z2 ) −→ H1 (T2 (Ng ))Z2 −→ H1 (Γ2 (Ng )) −→ H1 (Z2 ) −→ 0,
where

H1(T2(Ng))_{Z2} := H1(T2(Ng)) / ⟨ f·m − m | m ∈ H1(T2(Ng)), f ∈ Z2 ⟩.

For m ∈ H1(T2(Ng)) and f ∈ Z2, f·m := [f′ m′ f′^{-1}] ∈ H1(T2(Ng)) for some representative m′ ∈ T2(Ng) and f′ ∈ Γ2(Ng). Since H2(Z2) ≅ H2(RP^∞) = 0 and H1(Z2) ≅ Z2, we have the short exact sequence:

0 −→ H1(T2(Ng))_{Z2} −→ H1(Γ2(Ng)) −→ Z2 −→ 0.
Since Hirose and Sato [2] showed that H1(Γ2(Ng)) ≅ Z2^{(g choose 3) + (g choose 2)}, it is sufficient for the proof of Theorem 1.2 when g ≥ 5 to prove that the action of Z2 ≅ Γ2(Ng)/T2(Ng) on the set of the first homology classes of generators for T2(Ng) is trivial.
By Theorem 1.1, T2(Ng) is generated by crosscap pushing maps along non-separating two-sided simple loops for g ≥ 5. Let ψ(γ) = t_{γ1} t_{γ2}^{-1} be a crosscap pushing map along a non-separating two-sided simple loop γ, where γ1 and γ2 are the images in Ng of the boundary components of the regular neighborhood of γ in Ng−1 under a blowup. The surface S obtained by cutting Ng along γ1 and γ2 is diffeomorphic to a disjoint union of Ng−3,2 and N1,2. Since g − 3 ≥ 5 − 3 = 2, we can define a Y-homeomorphism Y on a component of S. The Y-homeomorphism is not a product of Dehn twists. Hence [Y] is the nontrivial element in Z2 and clearly Y ψ(γ) Y^{-1} = ψ(γ) in Γ2(Ng), i.e. [Y ψ(γ) Y^{-1}] = [ψ(γ)] in H1(T2(Ng)). Therefore the action of Z2 on H1(T2(Ng)) is trivial and we have completed the proof of Theorem 1.2.
Remark 5.1. When g = 3, we have the exact sequence

0 −→ H1(T2(N3))_{Z2} −→ H1(Γ2(N3)) −→ Z2 −→ 0

by the argument in the proof of Theorem 1.2. Since H1(Γ2(N3)) ≅ H1(Γ2(2)) ≅ Z2^4, Theorem 1.2 shows that the action of Z2 ≅ Γ2(N3)/T2(N3) on H1(T2(N3)) is not trivial when g = 3.
Acknowledgement: The authors would like to express their gratitude to Hisaaki Endo and Susumu Hirose for their encouragement and helpful advice. The authors also wish to thank Masatoshi Sato for his comments and helpful advice. The second author was supported by JSPS KAKENHI Grant number 15J10066.
References
[1] D. R. J. Chillingworth, A finite set of generators for the homeotopy group of a non-orientable
surface, Math. Proc. Camb. Philos. Soc. 65 (1969), 409–430.
[2] S. Hirose, M. Sato, A minimal generating set of the level 2 mapping class group of a nonorientable surface, Math. Proc. Camb. Philos. Soc. 157 (2014), no. 2, 345–355.
[3] S. P. Humphries, Normal closures of powers of Dehn twists in mapping class groups, Glasgow
Math. J. 34 (1992), 313–317.
[4] D. L. Johnson, Presentations of Groups, London Math. Soc. Stud. Texts 15 (1990).
[5] M. Korkmaz, First homology group of mapping class group of nonorientable surfaces, Math.
Proc. Camb. Phil. Soc. 123 (1998), 487–499.
[6] M. Korkmaz, Mapping class groups of nonorientable surfaces, Geom. Dedic. 89 (2002), 109–
133.
[7] W. B. R. Lickorish, On the homeomorphisms of a non-orientable surface, Math. Proc. Camb.
Philos. Soc . 61 (1965), 61–64.
[8] J. D. McCarthy and U. Pinkall, Representing homology automorphisms of nonorientable surfaces, Max Planck Inst. Preprint MPI/SFB 85-11, revised version written on 26 Feb 2004, available from http://www.math.msu.edu/~mccarthy/publications/selected.papers.html.
[9] L. Paris and B. Szepietowski. A presentation for the mapping class group of a nonorientable
surface, arXiv:1308.5856v1 [math.GT], 2013.
[10] M. Stukow, The twist subgroup of the mapping class group of a nonorientable surface, Osaka
J. Math. 46 (2009), 717–738.
[11] B. Szepietowski. Crosscap slides and the level 2 mapping class group of a nonorientable
surface, Geom. Dedicata 160 (2012), 169–183.
[12] B. Szepietowski. A finite generating set for the level 2 mapping class group of a nonorientable
surface. Kodai Math. J. 36 (2013), 1–14.
(Ryoma Kobayashi) Department of General Education, Ishikawa National College
of Technology, Tsubata, Ishikawa, 929-0392, Japan
E-mail address: kobayashi [email protected]
(Genki Omori) Department of Mathematics, Tokyo Institute of Technology, Ohokayama, Meguro, Tokyo 152-8551, Japan
E-mail address: [email protected]
arXiv:1802.03045v2 [] 13 Feb 2018
ALGORITHMIC ASPECTS OF BRANCHED COVERINGS III/V.
ERASING MAPS, ORBISPACES, AND THE BIRMAN EXACT
SEQUENCE
LAURENT BARTHOLDI AND DZMITRY DUDKO
Abstract. Let f̃ : (S², Ã) ⟲ be a Thurston map and let M(f̃) be its mapping class biset: isotopy classes rel Ã of maps obtained by pre- and post-composing f̃ by the mapping class group of (S², Ã). Let A ⊆ Ã be an f̃-invariant subset, and let f : (S², A) ⟲ be the induced map. We give an analogue of the Birman short exact sequence: just as the mapping class group Mod(S², Ã) is an iterated extension of Mod(S², A) by fundamental groups of punctured spheres, M(f̃) is an iterated extension of M(f) by the dynamical biset of f.
Thurston equivalence of Thurston maps classically reduces to a conjugacy problem in mapping class bisets. Our short exact sequence of mapping class bisets allows us to reduce in polynomial time the conjugacy problem in M(f̃) to that in M(f). In case f̃ is geometric (either expanding or doubly covered by a hyperbolic torus endomorphism) we show that the dynamical biset B(f) together with a "portrait of bisets" induced by Ã is a complete conjugacy invariant of f̃.
Along the way, we give a complete description of bisets of (2, 2, 2, 2)-maps as a crossed product of bisets of torus endomorphisms by the cyclic group of order 2, and we show that non-cyclic orbisphere bisets have no automorphisms.
We finally give explicit, efficient algorithms that solve the conjugacy and centralizer problems for bisets of expanding or torus maps.
1. Introduction

A Thurston map is a branched covering f : S² ⟲ of the sphere whose post-critical set Pf := ⋃_{n≥1} f^n({critical points of f}) is finite.

Extending [18], we developed in [2–5] an algebraic machinery that parallels the topological theory of Thurston maps: one considers the orbisphere (S², Pf, ordf), with ordf : Pf → {2, 3, . . . , ∞} defined by

ordf(p) = l.c.m.{ deg_q(f^n) | n ≥ 0, f^n(q) = p },

and the orbisphere fundamental group G = π1(S², Pf, ord, ∗), which has one generator of order ordf(p) per point p ∈ Pf, and one relation. Then f is encoded in the structure of a G-G-biset B(f): a set with commuting left and right actions of G. By [5, Theorem 7.9] (see also Corollary 3.7), the isomorphism class of B(f) is a complete invariant of f up to isotopy.
There is a missing element to this description, that of "extra marked points". In the process of a decomposition of spheres into smaller pieces, one is led to consider Thurston maps with extra marked points, such as periodic cycles or preimages of post-critical points. They can be added to Pf, but they have order 1 under ordf so are invisible in π1(S², Pf, ordf). The orbisphere orders can be made artificially larger, but then other properties, such as the characterization of expanding maps as those having a "contracting" biset (see [4, Theorem A]), are lost.

Date: February 8, 2018.
Partially supported by ANR grant ANR-14-ACHN-0018-01, DFG grant BA4197/6-1 and ERC grant "HOLOGRAM".
We resolve this issue, in this article, by introducing portraits of bisets and exhibiting their algorithmic properties. As an outcome, the conjugacy and centralizer problems for Thurston maps with extra marked points reduce to those of the underlying map with only Pf marked.

This allows us, in particular, to understand algorithmically maps doubly covered by torus endomorphisms.
1.1. Maps and bisets. The natural setting is an orbisphere (S², Ã, õrd) and a sub-orbisphere (S², A, ord); namely one has A ⊆ Ã and ord(a) | õrd(a) for all a ∈ A. There is a corresponding morphism of fundamental groups π1(S², Ã, õrd, ∗) =: G̃ ↠ G := π1(S², A, ord, ∗), called a forgetful morphism. We considered in [5, §7] the inessential case in which ord(a) > 1 ⇔ õrd(a) > 1, so that only the type of singularities is changed by the forgetful functor. Here we are more interested in the essential case, in which A ⊊ Ã.
In this Introduction, we restrict ourselves to self-maps of orbispheres; the general non-dynamical case is covered in §2. An orbisphere self-map f : (S², A, ord) ⟲ is a branched covering f : S² ⟲ with ord(p) deg_p(f) | ord(f(p)) for all p ∈ S². Given a map f : (S², A) ⟲, there may exist different orbisphere structures on (S², A) turning f into an orbisphere self-map; in particular, the maximal one, in which ord(a) = ∞ for every a ∈ A, and the minimal one, in which ord(a) = l.c.m.{ deg_x(f^n) | n ≥ 0, x ∈ f^{-n}(a) }. The self-map f is encoded by the biset

(1) B(f) := { β : [0, 1] → S² ∖ A | β(0) = ∗ = f(β(1)) } / ≈_{A,ord},

where ≈_{A,ord} denotes homotopy rel (A, ord), and the commuting left and right π1(S², A, ord, ∗)-actions are given respectively by concatenation of paths or their appropriate f-lift.
Bisets form a convenient category with products, detailed in [3]. A self-conjugacy of a G-G-biset B is a pair of maps (φ : G ⟲, β : B ⟲) with β(hbg) = φ(h)β(b)φ(g); and an automorphism of B is a self-conjugacy with φ = 1. Cyclic bisets (those of the maps z ↦ z^d : (Ĉ, {0, ∞}) ⟲) have a special status; for the others,

Proposition A (= Corollary 4.30). If GBG is a non-cyclic orbisphere biset then Aut(B) = 1.
Consider Ã = A ⊔ {d}, obtained by adding a point to the orbisphere (S², A, ord), resulting in an orbisphere (S², Ã, õrd). There is a short exact sequence, called the Birman exact sequence,

(2) 1 → π1(S² ∖ A, d) → Mod(S², Ã) → Mod(S², A) → 1

(note that the orbisphere structure is ignored). Let f : (S², A, ord) ⟲ be an orbisphere map, and recall that its mapping class biset is the set

M(f) := { m1 f m2 | m1, m2 ∈ Mod(S², A) } / ≈_A,

with natural left and right actions of Mod(S², A). Assume first that the extra point d is fixed by f, and write f̃ : (S², Ã, õrd) ⟲ for the map f acting on (S², Ã, õrd).
There is then a natural map M(f̃) → M(f), and we will see that it is an extension of bisets (see Definition 2.10):

Theorem B (= Corollary 2.23). Subordinate to the exact sequence of groups (2), there is a short exact sequence of bisets

( ⊔_{f′ ∈ M(f)} π1(S²∖A,d)B(f′)π1(S²∖A,d) ) ↪ M(f̃) ↠ M(f),

where B(f′) denotes the biset of f′ : (S², A) ⟲ rel the base point d.
The statement can be easily extended to Ã = A ⊔ {periodic points}: for a cycle of length ℓ, the fibres in the short exact sequence are of the form B(f)^ℓ with left- and right-action twisted along an ℓ-cycle. This case is at the heart of our reduction of the conjugacy problem from M(f̃) to M(f). Approximately the same picture applies if D contains preimages of points in A, but there are subtle complications which are taken care of in §2, see Theorem 2.13. In essence, the presence of preperiodic points imposes a finite index condition on centralizers, and splits conjugacy classes into finitely many pieces, see the remark after Lemma 4.24.
1.2. Portraits of bisets. Portraits of bisets emerge from a simple remark: fixed points of f naturally yield conjugacy classes in B(f). Indeed if f(p) = p then choose a path ℓ : [0, 1] → S² ∖ A from ∗ to p, and consider c_p := ℓ # (ℓ^{-1}↑_f^p) ∈ B(f), which is well-defined up to conjugation by G, namely a different choice of ℓ would yield g^{-1} c_p g for some g ∈ G. Conversely, if f expands a metric, then every conjugacy class in B(f) corresponds to a unique repelling f-fixed point.

A portrait of bisets in B(f), see Definition 2.15, consists of a map f∗ : Ã ⟲ extending f↾A; a collection of peripheral subgroups (G_a)_{a∈Ã} of G: they are such that every G_a is cyclic and generated by a "lollipop" around a; and a collection of cyclic G_a-G_{f∗(a)}-bisets (B_a)_{a∈Ã}. Two portraits of bisets (G_a, B_a)_{a∈Ã} and (G′_a, B′_a)_{a∈Ã} parameterized by the same f∗ : Ã ⟲ are conjugate if there exist (ℓ_a)_{a∈Ã} in G^Ã such that G′_a = ℓ_a^{-1} G_a ℓ_a and B′_a = ℓ_a^{-1} B_a ℓ_{f∗(a)}. The set of self-conjugacies of a portrait is called its centralizer.
In case A = Ã, every biset admits a unique minimal portrait up to conjugacy, which may be understood geometrically as follows. Consider a branched covering f : (S², A) ⟲. For every a ∈ A choose a small disk neighbourhood D_a of it; up to isotopy we may assume that f : D_a ∖ {a} → D_{f(a)} ∖ {f(a)} is a covering. A choice of embeddings π1(D_a ∖ {a}) ↪ π1(S², A) yields a family (G_a)_{a∈A} of peripheral subgroups; and the corresponding embeddings B(f : D_a ∖ {a} → D_{f(a)} ∖ {f(a)}) ↪ B(f) yield a minimal portrait of bisets (G_a, B_a)_{a∈A} in B(f).

In case Ã = A ⊔ {e1, . . . , en} with (e1, . . . , en) a periodic cycle, the bisets B_{ei} consist of single points, and almost coincide with Ishii and Smillie's notion of homotopy pseudo-orbits, see [13] and §4.2: imagine that (e1, . . . , en) ⊂ S² is almost a periodic cycle, in that f(ei) is so close to e_{i+1} that there is a well-defined "shortest" path ℓi from the first to the second, indices being read modulo n. Choose for each i a path mi from ∗ to ei. Set then B_i := { mi # (ℓ_{i+1} # m_{i+1}^{-1})↑_f^{ei} }, the portrait of bisets encoding (e1, . . . , en).
Theorem B is proven via portraits of bisets. There is a natural forgetful intertwiner of bisets

(3) G̃B̃G̃ := B(f̃ : (S², Ã, õrd) ⟲) → B(f : (S², A, ord) ⟲) =: GBG
given by b ↦ 1 ⊗ b ⊗ 1, where B ≅ G ⊗_{G̃} B̃ ⊗_{G̃} G. Every portrait in B̃, for example its minimal one, induces a portrait in B via the forgetful map. Let us denote by Mod(S², A) the pure mapping class group of S² ∖ A. A class m ∈ Mod(S², Ã) is called knitting if the image of m is trivial in Mod(S², A ⊔ {a}) for every a ∈ Ã ∖ A. We prove:
Theorem C (= Theorem 2.19). Let GBG be an orbisphere biset with portrait f∗ : A ⟲, and let G̃ ↠ G and f∗ : Ã ⟲ and deg : Ã → N be compatible extensions. There is then a bijection between, on the one hand, conjugacy classes of portraits of bisets (G_a, B_a)_{a∈Ã} in B parameterized by f∗ and deg and, on the other hand, G̃-G̃-bisets projecting to B under G̃ ↠ G, considered up to composition with the biset of a knitting element. This bijection maps every minimal portrait of bisets of B̃ to (G_a, B_a)_{a∈Ã}.
1.3. Geometric maps. A homeomorphism f : (S², A) ⟲ is geometric if f is either of finite order (f^n = 1 for some n > 0) or pseudo-Anosov (there are two transverse measured foliations preserved by f such that one foliation is expanded by f while the other is contracted). In both cases, f preserves a geometric structure on S², and every surface homeomorphism decomposes, up to isotopy, into geometric pieces, see [25].

Consider now a non-invertible sphere map f : (S², A) ⟲, and let A∞ ⊆ A denote the forward orbit of the periodic critical points of f. The map f is Böttcher expanding if there exists a metric on S² ∖ A∞ that is expanded by f, and such that f is locally conjugate to z ↦ z^{deg_a(f)} at every a ∈ A∞. The map f is geometric if f is either

{Exp} Böttcher expanding; or
{GTor/2} a quotient of a torus endomorphism z ↦ Mz + q : R²/Z² ⟲ by the involution z ↦ −z, for a 2 × 2 matrix M whose eigenvalues are different from ±1.

The two cases are not mutually exclusive. A map f ∈ {GTor/2} is expanding if and only if the absolute values of the eigenvalues of M are greater than 1. Note also that if f is non-invertible and covered by a torus endomorphism then either f ∈ {Exp} or the minimal orbisphere of f satisfies #Pf = 4 and ordf ≡ 2.
In that last case, we show that GBG is a crossed product of an Abelian biset with an order-2 group. Let us fix a (2, 2, 2, 2)-orbisphere (S², A, ord) and let us set G := π1(S², A, ord) ≅ Z² ⋊ {±1}. We may identify A with the set of all order-2 conjugacy classes of G. By Euler characteristic, every branched covering f : (S², A, ord) ⟲ is a self-covering. Therefore, the biset of f is right principal.

We denote by Mat₂⁺(Z) the set of 2 × 2 integer matrices M with det(M) > 0. For a matrix M ∈ Mat₂⁺(Z) and a vector v ∈ Z² there is an injective endomorphism M^v : Z² ⋊ {±1} ⟲ given by the following "crossed product" structure:

(4) M^v(n, 1) = (Mn, 1) and M^v(n, −1) = (Mn + v, −1).

Furthermore, Z² ⋊ {±1} has exactly 4 conjugacy classes of order 2, which we denote by A. Then M^v induces a map (M^v)∗ : A ⟲ on these conjugacy classes. We write B_{M^v} = B_M ⋊ {±1} for the crossed product decomposition of the biset of M^v.
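Formula (4) can be verified directly to define an endomorphism. The snippet below is an illustrative check of ours, not part of the paper; it uses the group law (n, s)(m, t) = (n + s·m, st) on Z² ⋊ {±1} and a sample matrix M with det M = 1.

```python
# Check (ours, not the paper's) that (4) defines a homomorphism of
# Z^2 x| {+-1}, where s = -1 acts on Z^2 by z |-> -z.
from itertools import product

def mul(a, b):                          # group law (n, s)(m, t) = (n + s m, s t)
    (n, s), (m, t) = a, b
    return ((n[0] + s * m[0], n[1] + s * m[1]), s * t)

def M_v(M, v, g):                       # the endomorphism of formula (4)
    (n, s) = g
    Mn = (M[0][0] * n[0] + M[0][1] * n[1],
          M[1][0] * n[0] + M[1][1] * n[1])
    return (Mn, 1) if s == 1 else ((Mn[0] + v[0], Mn[1] + v[1]), -1)

M, v = ((2, 1), (1, 1)), (1, 0)         # det M = 1, eigenvalues not +-1
elements = [((a, b), s)
            for a, b, s in product(range(-2, 3), range(-2, 3), (1, -1))]
is_hom = all(M_v(M, v, mul(g, h)) == mul(M_v(M, v, g), M_v(M, v, h))
             for g in elements for h in elements)
```

The homomorphism property holds on all sampled pairs; injectivity follows from det(M) ≠ 0.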
Proposition D (= Propositions 4.5 and 4.6). The biset of M^v from (4) is an orbisphere biset, and conversely every (2, 2, 2, 2)-orbisphere biset B is of the form B = B_{M^v} for some M ∈ Mat₂⁺(Z) and some v ∈ Z². Two bisets B_{M^v} and B_{N^w} are isomorphic if and only if M = ±N and v ≡ w (mod 2Z²). The biset B_{M^v} is geometric if and only if both eigenvalues of M are different from ±1.

A distinguished property of a geometric map is rigidity: two geometric maps are Thurston equivalent (namely, conjugate up to isotopy) if and only if they are topologically conjugate.
An orbisphere biset GBG is geometric if it is the biset of a geometric map, and {GTor/2} and {Exp} bisets are defined similarly. If B is a geometric biset, then by rigidity there is a map f_B : (S², A, ord) ⟲, unique up to conjugacy, with B(f_B) ≅ B.

If GBG is geometric and G̃B̃G̃ → GBG is a forgetful intertwiner as in (3), then elements of Ã ∖ A (which a priori do not belong to any sphere) can be interpreted dynamically as extra marked points on S² ∖ A. More precisely, if B̃ is itself geometric, and B is the biset of the geometric map f : (S², A) ⟲, then there is an embedding of Ã in S² as an f-invariant set, unique unless G is cyclic, in such a manner that B̃ is isomorphic to B(f : (S², Ã) ⟲).
Since furthermore geometric maps have only finitely many periodic points of
given period, we obtain a good understanding of conjugacy and centralizers of
geometric bisets:
Theorem E (= Theorem 4.41). Let Gr Ñ G be a forgetful morphism of groups and let
Gr BrGr Ñ G BG and Gr Br1Gr Ñ G B 1G
be two forgetful biset morphisms as in (3). Suppose furthermore that Br is geometric of degree ą 1. Denote by pGa , Ba qaPAr and pG1a , Ba1 qaPAr the portraits of bisets induced by Br and Br1 in B and B 1 respectively.
Then Br, Br1 are conjugate under ModpGrq if and only if there exists φ P ModpGq such that B φ – B 1 and the portraits pGφa , Baφ qaPAr and pG1a , Ba1 qaPAr are conjugate.
Furthermore, the centralizer of the portrait pGa , Ba qaPAr is trivial, and the centralizer ZpBrq of Br is isomorphic, via the forgetful map ModpGrq Ñ ModpGq, to
tφ P ZpBq | pGφa , Baφ qaPAr „ pGa , Ba qaPAr u
and is a finite-index subgroup of ZpBq.
Let us call an orbisphere map f : pS 2 , A, ordq ý weakly geometric if its minimal
quotient on pS 2 , Pf , ordf q is geometric; an orbisphere biset B is weakly geometric if
its minimal quotient orbisphere biset is geometric. In Theorem 4.35 we characterize
weakly geometric maps as those decomposing as a tuning by homeomorphisms:
starting from a geometric map, some points are blown up to disks which are mapped
to each other by homeomorphisms.
1.4. Algorithms. An essential virtue of the portraits of bisets introduced above
is that they are readily usable in algorithms. Previous articles in the series already
highlighted the algorithmic aspects of bisets; let us recall the following.
From the definition of orbisphere bisets in [5, Definition 2.6], it is clearly decidable whether a given biset H BG is an orbisphere biset; the groups G and
H may be algorithmically identified with orbisphere groups π1 pS 2 , A, ord, ˚q and
π1 pS 2 , C, ord, :q respectively, and the induced map f˚ : C Ñ A is computable. In
particular, if G BG is a G-G-biset, then the dynamical map f˚ : A ý is computable,
LAURENT BARTHOLDI AND DZMITRY DUDKO
the minimal orbisphere quotient G :“ π1 pS 2 , Pf , ordf , ˚q is computable, and the
induced G-G-biset B is computable. It is also easy (see [4, §5]) to determine from
an orbisphere biset whether it is tGTor/2u (and then to determine an affine map
M z ` q covering it) or tExpu.
We shall show that recognizing conjugacy of portraits is decidable, and give
efficient (see below) algorithms proving it, as follows:
Algorithm 1.1 (= Algorithms 5.6 and 5.10). Given a minimal geometric orbisphere biset G BG , an extension f˚ : Ar Ñ Ar of the dynamics of B on its peripheral classes, and two portraits of bisets pGa , Ba qaPAr and pG1a , Ba1 qaPAr with dynamics f˚ ,
Decide whether pGa , Ba qaPAr and pG1a , Ba1 qaPAr are conjugate, and Compute the centralizer of pGa , Ba qaPAr , which is a finite abelian group.
Algorithm 1.2 (= Algorithms 5.7 and 5.11). Given a minimal geometric orbisphere biset G BG and an extension f˚ : Ar Ñ Ar of the dynamics of B on its peripheral classes,
Produce a list of representatives of all conjugacy classes of portraits of bisets pGa , Ba qaPAr in B with dynamics f˚ .
Thurston equivalence to a map f : pS 2 , Aq ý reduces to the conjugacy problem
in the mapping class biset M pf q. Let X be a basis of M pf q and let N be a finite generating set of ModpS 2 , Aq; so M pf q “ Ť ně0 N n X. We call an algorithm with
input in M pf q ˆ M pf q efficient if for f, g P N n X the running time of the algorithm
is bounded by a polynomial in n.
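This efficiency convention can be made concrete with a toy enumeration. The sketch below is our own illustration, not the paper's: strings stand in for mapping classes, and the function lists the words in Ť kďn N k X; an "efficient" algorithm is then one whose running time is polynomial in the parameter n indexing such balls.

```python
from itertools import product

def ball(N, X, n):
    # Words of the form g_1 … g_k x with g_i ∈ N, x ∈ X and k ≤ n,
    # a combinatorial stand-in for ⋃_{k≤n} N^k X ⊆ M(f).
    words = set()
    for k in range(n + 1):
        for gens in product(sorted(N), repeat=k):
            for x in sorted(X):
                words.add("".join(gens) + x)
    return words

print(len(ball({"a", "b"}, {"x"}, 2)))  # 1 + 2 + 4 = 7 words
```

The ball grows exponentially in n, which is why bounding running time by a polynomial in n (rather than in the ball size) is the meaningful notion of efficiency here.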
We deduce that conjugacy and centralizer problems are decidable for geometric
maps, as long as they are decidable on their minimal orbisphere quotients:
Corollary F (= Algorithm 5.12). There is an efficient algorithm with oracle that,
given two orbisphere maps f, g by their bisets and such that f is geometric, decides
whether f, g are conjugate, and computes the centralizer of f .
The oracle must answer, given two geometric orbisphere maps f, g on their minimal orbisphere pS 2 , Pf , ordf q respectively pS 2 , Pg , ordg q, whether they are conjugate
and what the centralizer of f is.
Algorithms for the oracle itself will be described in detail in the last article of
the series [6]. Furthermore, we have the following oracle-free result, proven in §5.3:
Corollary G. There is an efficient algorithm that decides whether a rational map
is equivalent to a given twist of itself, when only extra marked points are twisted.
1.5. Historical remarks: Thurston equivalence and its complexity. The
conjugacy problem is known to be solvable in mapping class groups [11]. The
state of the art is based on the Nielsen-Thurston classification: decompose maps
along their canonical multicurve; then a complete conjugacy invariant of the map
is given by the combinatorics of the decomposition, the conjugacy classes of return
maps, and rotation parameters along the multicurve. For general surfaces, the
cost of computing the decomposition is at most exponential time in n, see [14, 24],
and so is the cost of comparing the pseudo-Anosov return maps [17]. Margalit,
Yurttas, Strenner recently announced polynomial-time algorithms for all the above.
At all rates, for punctured spheres the cost of computing the decomposition is
polynomial [7].
Kevin Pilgrim developed, in [20], a theory of decompositions of Thurston maps
extending the Nielsen-Thurston decomposition of homeomorphisms. It is a fundamental ingredient in understanding Thurston equivalence of maps, without any
claims on complexity.
In [1], a general strategy is developed, along with computations of the mapping
class biset for the three degree-2 polynomials with three finite post-critical points
and period respectively 1, 2, 3. The Douady-Hubbard “twisted rabbit problem” is
solved in this manner (it asks to determine for n P Z the conjugacy class of r ˝ tn
with rpzq « z 2 ´ 0.12256 ` 0.74486i the “rabbit” polynomial and tpzq the Dehn
twist about the rabbit’s ears). Russell Lodge computed in [16] the solution to an
array of other “twisted rabbit” problems, by finding explicitly the mapping class
biset structure.
If M pf q is a contracting biset, then its conjugacy problem may be solved in
quasi-linear time. In terms of a twist parameter such as the n above, the solution
has Oplog nq complexity. However, this bound is not uniform, in that it requires
e.g. the computation of the nucleus of M pf q, which cannot a priori be bounded.
Nekrashevych showed in [19, Theorem 7.2] that the mapping class biset of a hyperbolic polynomial is contracting. Conversely, if M pf q contains an obstructed map
then it is not contracting.
In the case of polynomials, however, even more is possible: bisets of polynomials admit a particularly nice form by placing the basepoint close to infinity (this
description goes hand-in-hand with Poirier’s “supporting rays” [21]). As a consequence, the “spider algorithm” of Hubbard and Schleicher [12] can be implemented
directly at the level of bisets and yields an efficient algorithm, also in practice since
it does not require the computation of M pf q’s nucleus. This algorithm will be
described in [6].
Selinger and Yampolsky showed in [23] that the canonical decomposition is computable. In this manner, they solve the conjugacy problem for maps whose canonical
decomposition has only rational maps with hyperbolic orbifold: a complete conjugacy invariant of the map is given by the combinatorics of the decomposition, the
conjugacy classes of its return maps, and rotation parameters along the canonical
obstruction.
We showed in [5] that the conjugacy problem is decidable in general: a complete
conjugacy invariant of a Thurston map is given by the combinatorics of the decomposition, the conjugacy classes of return maps, rotation parameters along the canonical obstruction, and the induced action of the centralizer
groups of return maps.
We finally mention a different path towards understanding Thurston maps, in
the case of maps with four post-critical points: the “nearly Euclidean Thurston
maps” from [8]. There, the restriction on the size of the post-critical set implies
that the maps may be efficiently encoded via linear algebra; and as a consequence,
conjugacy of NET maps is efficiently decidable. We are not aware of any direct
connections between their work and ours.
1.6. Notations. Throughout the text, some letters keep the same meaning and are
not always repeated in statements. The symbols A, C, D, E denote finite subsets of
the topological sphere S 2 . There is a sphere map fr: pS 2 , C \ Eq Ñ pS 2 , A \ Dq,
which restricts to a sphere map f : pS 2 , Cq Ñ pS 2 , Aq. Implicit in the definition, we
have f pCq Y tcritical values of f u Ď A. We write Cr “ C \ E and Ar “ A \ D. If there
are, furthermore, orbisphere structures on the involved spheres, we denote them by
pS 2 , A, ordq etc., with the same symbol ‘ord’. We also abbreviate X :“ pS 2 , A, ordq
and Y :“ pS 2 , C, ordq.
For sphere maps f0 , f1 : pS 2 , Cq Ñ pS 2 , Aq, we mean by f0 «C f1 that f0 and f1
are isotopic, namely there is a path pft qtPr0,1s of sphere maps ft : pS 2 , Cq Ñ pS 2 , Aq
connecting them.
For γ a path and f a (branched) covering, we denote by γÒxf the unique f -lift of
γ that starts at the preimage x of γp0q.
We denote by f åZ the restriction of a function f to a subset Z of its domain.
Finally, for a set Z we denote by Z Ó. the group of all permutations of Z.
2. Forgetful maps
We recall a minimal amount of information from [5]: a marked orbisphere is
pS 2 , A, ordq for a finite subset A Ă S 2 and a map ord : A Ñ t2, 3, . . . , 8u, extended to S 2 by ordpS 2 zAq ” 1. For a choice of basepoint ˚ P S 2 zA, its orbispace
fundamental group is generated by “lollipop” loops pγa qaPA based at ˚ that each
encircle once counterclockwise a single point of A, and with A “ ta1 , . . . , an u has
presentation
(5)
G “ π1 pS 2 , A, ord, ˚q “ xγa1 , . . . , γan | γa1 ordpa1 q , . . . , γan ordpan q , γa1 ¨ ¨ ¨ γan y.
Abstractly, i.e. without reference to a sphere, an orbisphere group is a group G as
in (5) together with the conjugacy classes Γ1 , . . . , Γn of γa1 , . . . , γan respectively.
An orbisphere map f : pS 2 , C, ordC q Ñ pS 2 , A, ordA q between orbispheres is
an orientation-preserving branched covering between the underlying spheres, with
f pCq Y tcritical values of f u Ď A, and with ordC ppq degp pf q | ordA pf ppqq for all
p P S2.
To avoid special cases, we make, throughout this article except in §6.3, the assumption
(6)
#A ě 3 and #C ě 3.
For #A “ 2, many things go wrong: one must require ord to be constant; the fundamental group has a non-trivial centre; and the degree-d self-covering of pS 2 , A, ordq
has an extra symmetry of order d ´ 1. All our statements can be modified to take
into account this special case, see §6.3.
Fix basepoints ˚ P S 2 zA and : P S 2 zC. The orbisphere biset of an orbisphere
map f : pS 2 , C, ordC q Ñ pS 2 , A, ordA q is the π1 pS 2 , C, ordC , :q-π1 pS 2 , A, ordA , ˚q-biset
Bpf q “ tβ : r0, 1s Ñ S 2 zC | βp0q “ :, f pβp1qq “ ˚u { «C,ordC ,
with ‘«C,ord ’ denoting homotopy in the orbispace pS 2 , C, ordq. An orbisphere biset
H BG can also be defined purely algebraically, see [5, §7 and Definition 2.6]. By
[5, Theorem 7.9], there is an orbisphere map f : pS 2 , C, ordC q Ñ pS 2 , A, ordA q,
unique up to isotopy, such that B is isomorphic to Bpf q. We denote by B˚ : C Ñ A
the induced map on the peripheral conjugacy classes of H and G – they are identified
with the associated punctures.
Let pS 2 , A, ordq and pS 2 , A \ D, Ąordq be orbispheres, and suppose ordpaq | Ąordpaq for all a P A. We then have a natural forgetful homomorphism FD : π1 pS 2 , A \ D, Ąord, ˚q Ñ π1 pS 2 , A, ord, ˚q given by γa ÞÑ γa for a P A and γd ÞÑ 1 for d P D. We write the forgetful map pS 2 , A \ D, Ąordq 99K pS 2 , A, ordq with a dashed arrow,
because even though FD is a genuine group homomorphism, the corresponding map
between orbispheres is only densely defined. Note, however, that its inverse is a
genuine orbisphere map.
Consider forgetful maps pS 2 , C \ E, Ąordq 99K pS 2 , C, ordq and pS 2 , A \ D, Ąordq 99K pS 2 , A, ordq; in the sequel we shall keep the notations
X :“ pS 2 , A, ordq, Ar :“ A \ D, Y :“ pS 2 , C, ordq, Cr :“ C \ E.
Let fr: pS 2 , Cr, Ąordq Ñ pS 2 , Ar, Ąordq be an orbispace map, such that fr restricts to an orbisphere map f : pS 2 , C, ordq Ñ pS 2 , A, ordq. We thus have the commutative square
(7)
pS 2 , Cr, Ąordq 99K pS 2 , C, ordq “ Y (via FE ),
pS 2 , Ar, Ąordq 99K pS 2 , A, ordq “ X (via FD ),
with vertical maps fr on the left and f on the right.
(In particular, frpCq Ď A and A contains all critical values of fr.) We are concerned, in this section, with the relationship between Bpfrq and Bpf q; we shall show that Bpfrq may be encoded by Bpf q and a portrait of bisets, and that this encoding is unique up to a certain equivalence. The algebraic counterpart of (7) is
(8)
Hr :“ π1 pS 2 , Cr, Ąord, :q í Bpfrq ì π1 pS 2 , Ar, Ąord, ˚q “: Gr
with vertical forgetful maps FE , FE,D , FD down to
H :“ π1 pS 2 , C, ord, :q í Bpf q ì π1 pS 2 , A, ord, ˚q “: G,
where we denote by
(9)
FE,D : Bpfrq Ñ Bpf q – H bHr Bpfrq bGr G
the natural map given by b ÞÑ 1 b b b 1.
2.1. Braid groups and knitting equivalence. We recall that there is no difference between ModpS 2 , Aq and ModpXq, see [5, §7]. We shall define a subgroup
ModpX|Dq intermediate between ModpXq – ModpS 2 , Aq and ModpS 2 , Arq which will be useful to relate Bpf q and Bpfrq.
Definition 2.1 (Pure braid group). Let D be a finite set on a finitely punctured
sphere S 2 zA. The pure braid group BraidpS 2 zA, Dq is the set of continuous motions
m : r0, 1s ˆ D Ñ S 2 zA considered up to isotopy so that
‚ mpt, ´q : D ãÑ S 2 zA is an inclusion for every t P r0, 1s;
‚ mp0, ´q “ mp1, ´q “ 1 åD .
The product in BraidpS 2 zA, Dq is concatenation of motions, and the inverse is
time reversal.
△
Note that in the special case D “ t˚u we have BraidpS 2 zA, t˚uq “ π1 pS 2 zA, ˚q.
Theorem 2.2 (Birman). For every m P BraidpS 2 zA, Dq there is a unique mapping
class pushpmq P kerpModpS 2 , A\Dq Ñ ModpS 2 , Aqq such that pushpmq is isotopic
rel A to the identity via an isotopy moving D along m´1 . The map
(10)
push : BraidpS 2 zA, Dq Ñ kerpModpS 2 , A \ Dq Ñ ModpS 2 , Aqq
is an isomorphism (it would be merely an epimorphism if #A ď 2).
From now on we identify BraidpS 2 zA, Dq with its image under (10).
Definition 2.3 (Knitting group). Let X “ pS 2 , A, ordq be an orbisphere and let D
be a finite subset of S 2 zA. The knitting braid group knBraidpX, Dq is the kernel
of the forgetful morphism
ED : BraidpS 2 zA, Dq Ñ ź dPD π1 pX, dq;
it is the set of D-strand braids in S 2 zA all of whose strands are homotopically trivial in X.
△
In case X “ pS 2 , Aq, knitting elements are the “p#D ´ 1q-decomposable braids”
from [15].
Lemma 2.4. The knitting group knBraidpX, Dq is a normal subgroup of ModpS 2 , A \ Dq.
Proof. We show that for every m P knBraidpX, Dq and h P ModpS 2 , A \ Dq
we have h´1 mh P knBraidpX, Dq. Indeed, h restricts to an orbisphere map
h : X ý fixing D pointwise. Thus mpd, ´q is a trivial loop in π1 pX, dq if and
only if hpmpd, ´qq is a trivial loop in π1 pX, dq for every d P D.
Define
(11)
ModpX|Dq :“ ModpS 2 , A \ Dq{knBraidpX, Dq.
With G “ π1 pX, ˚q, we write ModpG|Dq “ ModpX|Dq and ModpGq “ ModpXq.
We interpret elements of ModpGq as outer automorphisms of G, as mapping classes
and as biprincipal bisets. We also introduce the notation
π1 pX, Dq :“ ź dPD π1 pX, dq – GD .
Note the following four exact sequences:
(12)
knBraidpX, Dq ãÑ BraidpS 2 zA, Dq ։ π1 pX, Dq (via ED ),
BraidpS 2 zA, Dq ãÑ ModpGrq “ ModpS 2 , A \ Dq ։ ModpGq “ ModpS 2 , Aq (via push and FD ),
knBraidpX, Dq ãÑ ModpGrq “ ModpS 2 , A \ Dq ։ ModpX|Dq,
π1 pX, Dq ãÑ ModpX|Dq ։ ModpGq “ ModpS 2 , Aq (via FD );
exactness follows by definition except surjectivity in the top sequence. Given a
sequence of loops pγd qdPD P π1 pX, Dq, we may isotope them slightly to obtain
γd ptq ‰ γd1 ptq for all t P r0, 1s and all d ‰ d1 . Then bpd, tq :“ γd ptq is a braid in
S 2 zA and defines via ‘push’ an element of BraidpS 2 zA, Dq mapping to pγd q.
2.2. fr-impure mapping class groups. As in (7), consider an orbisphere map fr: pS 2 , C \ E, Ąordq Ñ pS 2 , A \ D, Ąordq that projects to f : pS 2 , C, ordq Ñ pS 2 , A, ordq.
We will enlarge the groups in (12) to “fr-impure mapping class groups” so that exact
sequences analogous to (16) hold.
Let Mod˚ pS 2 , C \ Eq be the group of homeomorphisms m : pS 2 , C \ Eq ý
considered up to isotopy rel C \ E such that m åC is the identity and for every
e P E we have frpmpeqq “ frpeq; i.e., m may permute points in fr´1 pfrpeqqzC. There
is a natural forgetful morphism
FE : Mod˚ pS 2 , C \ Eq Ñ ModpS 2 , Cq.
As in Definition 2.1, the braid group Braid˚ pS 2 zC, Eq is the set of continuous
motions m : r0, 1s ˆ E Ñ S 2 zC considered up to isotopy so that
‚ mpt, ´q : E ãÑ S 2 zC is an inclusion for every t P r0, 1s;
‚ mp0, ´q “ 1 åE ;
‚ mp1, eq P fr´1 pfrpeqq for every e P E.
Every m P Braid˚ pS 2 zC, Eq induces a permutation πm : e ÞÑ mp1, eq of E. The
pure braid group consists of those motions m with πm “ 1. The product in
Braid˚ pS 2 zC, Eq is m ¨ m1 “ m#pm1 ˝ p1 ˆ πm qq, with as usual ‘#’ standing
for concatenation of motions. Birman’s theorem (a slight generalization of Theorem 2.2) still holds: the group Braid˚ pS 2 zC, Eq is isomorphic to the kernel of
Mod˚ pS 2 , C \ Eq Ñ ModpS 2 , Cq via the push operator.
Let π1˚ pY, Eq be the group of motions m : r0, 1s ˆ E Ñ Y “ pS 2 , C, ordq, considered up to homotopy, such that
‚ mp0, ´q “ 1 åE ;
‚ mp1, eq P fr´1 pfrpeqq for every e P E;
here m, m1 : E ãÑ Y are homotopic if mp´, eq and m1 p´, eq are homotopic curves
(relative to their endpoints) in Y for all e P E. The product in π1˚ pY, Eq is again
m ¨ m1 “ m#pm1 ˝ p1 ˆ πm qq. We have
(13)
π1˚ pY, Eq – ź ePE π1 pY, eq ¸ ź dPfrpEq pfr´1 pdq X EqÓ. ,
the isomorphism mapping m to its restrictions mp´, eq and its permutation πm .
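The semidirect-product structure in (13) can be modelled combinatorially. In the sketch below (our own toy model, not notation from the paper), a motion is a pair: a formal word per point of E, standing in for the homotopy class of its track in Y , together with the endpoint permutation πm ; composition follows the rule m ¨ m1 “ m#pm1 ˝ p1 ˆ πm qq.

```python
def compose(m1, m2):
    # m = (words, perm): words[e] is the track of e (a formal word standing in
    # for a homotopy class), perm[e] its endpoint. Following m·m' = m#(m'∘(1×π_m)),
    # the point e first travels along m1, then along m2 starting from perm1[e].
    words1, perm1 = m1
    words2, perm2 = m2
    words = {e: words1[e] + words2[perm1[e]] for e in words1}
    perm = {e: perm2[perm1[e]] for e in perm1}
    return (words, perm)

E = ["e1", "e2"]
identity = ({e: "" for e in E}, {e: e for e in E})
swap = ({"e1": "a", "e2": "b"}, {"e1": "e2", "e2": "e1"})
loop = ({"e1": "c", "e2": "d"}, {e: e for e in E})

assert compose(swap, identity) == swap == compose(identity, swap)
# associativity, as expected of a group law:
assert compose(compose(swap, loop), swap) == compose(swap, compose(loop, swap))
```

Inverses would require free cancellation of words, which the sketch omits; it only exercises the composition law, which is exactly what the isomorphism (13) records (loops per point, twisted by the permutation part).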
Lemma 2.5. The following sequence is exact:
(14)
knBraidpY, Eq ãÑ Braid˚ pS 2 zC, Eq Ñ π1˚ pY, Eq,
where EE is the natural forgetful morphism.
Proof. Suppose EE pmq “ EE pm1 q. Then m´1 m1 is pure so m´1 m1 P knBraidpY, Eq.
The converse is also obvious.
The same argument as in Lemma 2.4 shows that the knitting group knBraidpY, Eq
is a normal subgroup of Mod˚ pS 2 , C \ Eq. We may thus define
(15)
Mod˚ pY |Eq :“ Mod˚ pS 2 , C \ Eq{knBraidpY, Eq.
As in (12) we have the following exact sequences
(16)
knBraidpY, Eq ãÑ Braid˚ pS 2 zC, Eq ։ π1˚ pY, Eq (via EE ),
Braid˚ pS 2 zC, Eq ãÑ Mod˚ pS 2 , C \ Eq ։ ModpHq “ ModpS 2 , Cq (via push and FE ),
knBraidpY, Eq ãÑ Mod˚ pS 2 , C \ Eq ։ Mod˚ pY |Eq,
π1˚ pY, Eq ãÑ Mod˚ pY |Eq ։ ModpHq “ ModpS 2 , Cq (via FE ).
2.3. Branched coverings. Recall that, for orbisphere maps f0 , f1 : pS 2 , C, ordq Ñ
pS 2 , A, ordq we write f0 «C f1 , and call them isotopic, if there is a path pft qtPr0,1s
of orbisphere maps ft : pS 2 , C, ordq Ñ pS 2 , A, ordq. Equivalently,
Lemma 2.6. f0 «C f1 if and only if hf0 “ f1 for a homeomorphism h : pS 2 , Cq ý
that is isotopic to the identity.
Proof. If there exists an isotopy pht q witnessing 1 «C h, then ft :“ ht f0 witnesses
f0 «C f1 . Conversely, since all critical values of ft are frozen in A, the set ft´1 pyq
moves homeomorphically for every y P S 2 (equivalently, no critical points collide).
Therefore, we may factor ft “ ht f0 , with ht pzq the trajectory of z P ft´1 pyq; this
defines an isotopy from 1 to h :“ h1 .
Consider orbisphere maps fr, gr : pS 2 , C \ E, Ąordq Ñ pS 2 , A \ D, Ąordq as in (7). We write fr «C|E gr, and call fr, gr knitting-equivalent, if fr “ hgr for a homeomorphism h : pS 2 , C \ Eq ý in knBraidpY, Eq; we have
fr «C\E gr ùñ fr «C|E gr ùñ fr «C gr.
For m P BraidpS 2 zA, Dq we define its pullback pfrq˚ m : r0, 1s ˆ E Ñ S 2 zC by
ppfrq˚ mqp´, eq :“ mp´, frpeqqÒefr if frpeq P D, and ppfrq˚ mqp´, eq :“ e if frpeq P A.
This defines a motion of E; note that ppfrq˚ mqp1, eq need not equal e:
Lemma 2.7. If pushpmq P BraidpS 2 zA, Dq, then pushppfrq˚ mq defines an element
of Braid˚ pS 2 zC, Eq and we have the following commutative diagram:
(17)
pS 2 , C \ Eq Ñ pS 2 , C \ Eq via pushppfrq˚ mq,
pS 2 , A \ Dq Ñ pS 2 , A \ Dq via pushpmq,
with vertical maps fr, the square commuting up to «C\E .
Proof. Let us discuss in more detail the operator ‘push’. Consider a simple arc
γ : r0, 1s Ñ S 2 zA and let U Ă S 2 zA be a small disk neighborhood of γ. We can
define (in a non-unique way) a homeomorphism pushpγq : pS 2 , Aq ý that maps γp0q
to γp1q and is identity away from U. Let U1 , U2 , . . . , Ud be the preimages of U under
fr, where d “ degpfrq. Each Ui contains a preimage γi of γ. Let pushpγi q : Ui ý be
the lift of pushpγq åU under fr: Ui Ñ U, extended by the identity on S 2 zUi . Then
(18)
pushpγ1 qpushpγ2 q ¨ ¨ ¨ pushpγd q ¨ fr “ fr ¨ pushpγq.
For m P BraidpS 2 zA, Dq, we can define pushpmq as a composition of pushes
along finitely many simple arcs βi . Using (18) we lift all pushpβi q through fr;
considering the equation rel C \ E we obtain (17).
We note that pushpmq does not necessarily lift to a ‘push’ if m moves a critical
value. Indeed if γ is a simple loop then pushpγq is the quotient of two Dehn twists
about the boundary curves of an annulus surrounding γ, see [9, §4.2.2]; however
the lift of γ will not be a union of simple closed curves if γ contains a critical value;
an annulus around γ will not lift to an annulus, but rather to a more complicated
surface F ; and the quotient of Dehn twists about boundary components of F will
not be a quotient of Dehn twists about boundary components of annuli.
Proposition 2.8. Let fr: pS 2 , C \ E, Ąordq Ñ pS 2 , A \ D, Ąordq be an orbisphere map as in (7). Then every element in knBraidpX, Dq lifts through fr to an element of knBraidpY, Eq.
If gr : pS 2 , C \ E, Ąordq Ñ pS 2 , A \ D, Ąordq is another orbisphere map, then fr «C|E gr if and only if fr «C hgrk for some h P knBraidpY, Eq, k P knBraidpX, Dq.
Proof. Consider h P knBraidpX, Dq. By Theorem 2.2 we may write h “ pushpbq.
Since bp´, dq is homotopically trivial in X for every d P D, the curve bp´, frpeqqÒefr
ends at e for all e P E with frpeq P D, because this curve is in Y . Therefore,
pfrq˚ bp1, ´q “ 1 åE , and pushppfrq˚ bq P BraidpS 2 zC, Eq. Since the lifts bp´, frpeqqÒefr are homotopically trivial in Y for all e P E, we have pushppfrq˚ bq P knBraidpY, Eq
and Lemma 2.7 concludes the first claim.
The second claim is a direct consequence of the first.
As a consequence, we may detail a little bit more the map ED in (12). Choose
for every d P D a path ℓd in S 2 zA from ˚ to d. This path defines an isomorphism
π1 pX, dq Ñ π1 pX, ˚q “ G by γ ÞÑ ℓd #γ#ℓd ´1 . We thus have a map
(19)
ED : BraidpS 2 zA, Dq ։ GD , m ÞÑ pℓd #ppush´1 pmq åd q#ℓd ´1 qdPD ,
and kerpED q “ knBraidpX, Dq.
2.4. Mapping class bisets. We introduce some notation parallel to that in (12) and (16) for mapping class bisets. Let FE : Hr Ñ H and FD : Gr Ñ G be forgetful morphisms of orbisphere groups as in (8), and let Br be an orbisphere biset. Let B :“ H bHr Br bGr G be the induced H-G-biset. We have forgetful morphisms of groups FD : ModpGrq Ñ ModpGq and FE : ModpHrq Ñ ModpHq. The corresponding mapping class bisets are written respectively, with fr and f the orbisphere maps associated with Br and B,
M pBrq “ M pfrq – tn b Br b m | n P ModpHrq, m P ModpGrqu{–
“ tnfrm | n P ModpS 2 , Crq, m P ModpS 2 , Arqu{«Cr ,
M pBq “ M pf q – tn b B b m | n P ModpHq, m P ModpGqu{–
“ tnf m | n P ModpS 2 , Cq, m P ModpS 2 , Aqu{«C ,
together with the natural forgetful intertwiner
(20)
FE,D : M pBrq Ñ M pBq, Br1 ÞÑ FE,D pBr1 q “ H bHr Br1 bGr G.
We may also define the following mapping class biset, sometimes larger than M pBrq: assume first that Ąord is constant on E, possibly 8, and set
(21)
M ˚ pBrq “ M ˚ pfrq :“ tBr1 an Hr-Gr-orbisphere biset | FE,D pBr1 q P M pBq, pBr1 q˚ “ pBrq˚ u{– ,
where pBr1 q˚ : Cr Ñ Ar denotes the induced map on marked conjugacy classes. It is a Mod˚ pS 2 , Crq-ModpS 2 , Arq-biset; note indeed that we have n b Br1 P M ˚ pBrq for Br1 P M ˚ pBrq and n P Mod˚ pS 2 , Crq, because FE,D pn b Br1 q “ FE pnq b FE,D pBr1 q. Again there is a natural forgetful intertwiner
(22)
FE,D : Mod˚ pS 2 ,Crq M ˚ pfrqModpS 2 ,Arq Ñ ModpS 2 ,Cq M pf qModpS 2 ,Aq .
We note that the left action of ModpS 2 , Crq on M pfrq does not necessarily extend to an action of Mod˚ pS 2 , Crq on M pfrq, because the result of the action is in general in M ˚ pfrq and not in M pfrq, see Example 6.2.
In case Ąord is not constant on E, we should be careful, because permutation of points in E does not leave Hr invariant; rather, the image of Hr under such a permutation gives an orbisphere group isomorphic to Hr. However, M ˚ pBrq and Mod˚ pS 2 , Crq do not depend on the orbisphere structure, so the definition may be applied with Hr and Gr replaced by orbisphere groups with larger orders.
Let us call the set of extra marked points E saturated if
fr´1 pfrpEqq Ď C \ E.
Lemma 2.9.
(1) The mapping class biset M ˚ pfrq is left-free.
(2) Suppose that E is saturated and that g0 , g1 : pS 2 , C \ E, Ąordq Ñ pS 2 , A \ D, Ąordq are orbisphere maps coinciding on C \ E and such that FE,D pg0 q and FE,D pg1 q are isotopic through maps pS 2 , Cq Ñ pS 2 , Aq. Then g0 , g1 P M ˚ pfrq, and there is an m P Braid˚ pS 2 zC, Eq such that mg0 “ g1 holds in M ˚ pfrq.
(3) If E is saturated, then
(23)
M ˚ pfrq “ tmfrn | m P Mod˚ pS 2 , C \ Eq, n P ModpS 2 , A \ Dqu{«Cr .
Proof. The proof of the first claim follows the lines of [5, Proposition 6.4]: suppose that g, mg are isotopic through maps pS 2 , Crq Ñ pS 2 , Arq for some m P Mod˚ pS 2 , C \ Eq and g P M ˚ pfrq. By Lemma 2.6, we may assume g “ mg as maps; then the homeomorphism m is a deck transformation of the covering induced by g, so m has finite order because degpgq ă 8. Recall that #C ě 3 by our standing assumption (6). Since m fixes at least 3 points in C and m has finite order, we deduce that m is the identity. This shows that M ˚ pfrq is left-free.
For the second claim, let pgt : pS 2 , Cq Ñ pS 2 , AqqtPr0,1s be an isotopy between g0 and g1 . By Lemma 2.6, we may write gt “ mt g0 for mt : pS 2 , Cq ý. Then m1 preserves E because E is saturated, so m1 P Braid˚ pS 2 zC, Eq as required.
The third claim directly follows from the second.
Definition 2.10 (Extensions of bisets, see [5, Definition 5.2]). Let G1 BG2 be a G1 -G2 -biset and let N1 , N2 be normal subgroups of G1 and G2 respectively, so that for i “ 1, 2 we have short exact sequences
(24)
1 Ñ Ni Ñ Gi ։ Qi Ñ 1, the surjection being denoted π.
If the quotient Q1 -Q2 -biset N1 zB{N2 , consisting of connected components of N1 BN2 , is left-free, then the sequence
(25)
N1 BN2 ãÑ G1 BG2 ։ Q1 pN1 zB{N2 qQ2 , the quotient map again denoted π,
is called an extension of left-free bisets.
△
Definition 2.11 (Inert biset morphism). Let Hr ։ H and Gr ։ G be surjective group homomorphisms, and let Br be a left-free Hr-Gr-biset. Recall that the tensor product B :“ H bHr Br bGr G is isomorphic to the double quotient kerpHr Ñ HqzBr{ kerpGr Ñ Gq with natural H-G-actions. The natural map F : Br Ñ B is called inert if B is a left-free biset and the natural map t¨u bGr Br Ñ t¨u bG B is a bijection. In particular, B has the same number of left orbits as Br. In other words, assuming that the groups Hr and H have similarly-written generators and so do Gr and G, the wreath recursions of Br and B are identical.
Yet said differently, in the extension ker Brker ãÑ Hr BrGr ։ H BG the kernel is a disjoint union of left-principal bisets. If Gr “ Hr and G “ H so that the bisets can be iterated, then Br Ñ B is inert precisely when we have a factorization Gr Ñ G Ñ IMGGr pBrq, the latter group being the quotient of Gr by the kernel of the right action on the rooted tree t¨u bH Ů ně0 B bn , see [5, §7.3].
△
Define
(26)
M ˚ pBr|E, Dq :“ knBraidpY, EqzM ˚ pBrq{knBraidpX, Dq;
this is naturally a Mod˚ pY |Eq-ModpX|Dq-biset, and Proposition 2.8 implies in particular that it is left-free:
Proposition 2.12. The natural forgetful maps
ModpHrq M pBrqModpGrq Ñ ModpH|Eq M pBr|E, DqModpG|Dq
and
Mod˚ pS 2 ,C\Eq M ˚ pBrqModpS 2 ,A\Dq Ñ Mod˚ pY |Eq M ˚ pBr|E, DqModpX|Dq
are inert.
Let E Ó.˚ denote the group of all permutations t : E ý such that frptpeqq “ frpeq. We denote by H E ¸ E Ó.˚ the semidirect product where E Ó.˚ acts on H E by permuting coordinates; compare with (13). We have π1˚ pY, Eq – H E ¸ E Ó.˚ .
We denote by Braid˚ pY,Eq M ˚ pBrqBraidpX,Dq and π1˚ pY,Eq M ˚ pBr|E, Dqπ1 pX,Dq the restrictions of M ˚ pBrq and M ˚ pBr|E, Dq to braid and fundamental groups.
Theorem 2.13. If E is saturated, then the following sequences are extensions of
bisets:
(27)
knBraidpY,Eq M ˚ pBrqknBraidpX,Dq ãÑ Braid˚ pY,Eq M ˚ pBrqBraidpX,Dq ։ π1˚ pY,Eq M ˚ pBr|E, Dqπ1 pX,Dq (via EE,D ),
Braid˚ pY,Eq M ˚ pBrqBraidpX,Dq ãÑ M ˚ pBrq ։ M pBq (via FE,D ),
knBraidpY,Eq M ˚ pBrqknBraidpX,Dq ãÑ M ˚ pBrq ։ M ˚ pBr|E, Dq,
π1˚ pY,Eq M ˚ pBr|E, Dqπ1 pX,Dq ãÑ M ˚ pBr|E, Dq ։ M pBq (via FE,D ),
and, as H E ¸ E Ó.˚ -GD -bisets,
π1˚ pY,Eq M ˚ pBr|E, Dqπ1 pX,Dq – tpB 1 P M pBq, pBc1 qcPCr q | pBc1 qcPCr is a portrait in B 1 u.
(The isomorphism on the right is the topic of Theorem 2.19, and will be proven
there.)
Proof. By Lemma 2.9(2), FE,D´1 pBq is a connected subbiset of Braid˚ pY,Eq M ˚ pBrqBraidpX,Dq ; thus the central-to-left sequence (the first one in (27)) is an extension of bisets. Exactness of the other sequences follows from Proposition 2.12.
Note that, if E were not saturated or if we replaced M ˚ pBrq by M pBrq in (27), then we wouldn't have exact sequences of bisets anymore, because the fibres of M ˚ pBrq ։ M pBq wouldn't have to be connected; see Example 6.1. The failure of
transitivity is described by Lemma 2.20. There are similar exact sequences in case
f˚ : E Ñ D is a bijection, see Theorem 2.22.
2.5. Portraits of bisets. First, a portrait of groups amounts to a choice of representative in each peripheral conjugacy class of an orbisphere group:
Definition 2.14 (Portraits of groups). Let G be an orbisphere group with marked conjugacy classes pΓa qaPA and let Ar be a finite set containing A. A portrait of groups pGa qaPAr in G is a collection of cyclic subgroups Ga ď G so that
Ga “ xgy for some g P Γa if a P A, and Ga “ x1y otherwise.
If Ar “ A, then pGa qaPA is called a minimal portrait of groups.
△
Definition 2.15 (Portraits of bisets). Let H, G be orbisphere groups with peripheral classes indexed by C, A respectively, let H BG be an orbisphere biset, and let
f˚ : C Ñ A be its portrait. We also have a “degree” map deg : C Ñ N. A portrait
of bisets relative to these data consists of
‚ portraits of groups pHc qcPCr in H and pGa qaPAr in G;
‚ extensions f˚ : Cr Ñ Ar of f˚ and deg : Cr Ñ N of deg;
‚ a collection pBc qcPCr of subbisets of B such that every Bc is an Hc -Gf˚ pcq -biset that is right-transitive and left-free of degree degpcq, and such that if f˚ pcq “ f˚ pc1 q and HBc “ HBc1 qua subsets of B then c “ c1 .
If Cr “ C and Ar “ A, then pBc qcPC is called a minimal portrait of bisets.
△
Note in particular that if c P CrzC then Hc is trivial and the subbiset Bc consists
of degpcq elements permuted by Gf˚ pcq . If moreover f˚ pcq R A, then degpcq “ 1.
For simplicity the portrait of bisets is sometimes simply written pBc qcPCr , the other
data f˚ , deg, pHc qcPCr , pGa qaPAr being implicit.
Here is a brief motivation for the definition. Consider an expanding Thurston map $f$ and its associated biset $B$. Recall from [18] that bounded sequences in $B$ represent points in the Julia set of $f$; in particular, constant sequences represent fixed points of $f$ and vice versa. On the other hand, fixed Fatou components of $f$ are represented by local subbisets of $B$ with the same cyclic group acting on the left and on the right. All of these are instances of portraits of bisets in $B$. Furthermore, (pre)periodic Julia or Fatou points are represented by portraits of bisets with (pre)periodic map $f_*$. For example, closures of Fatou components intersect if and only if they admit portraits that intersect; and similarly for the inclusion of a (pre)periodic Julia point in the closure of a Fatou component.
Let $F_D\colon\tilde G\to G$ be a marked forgetful morphism of orbisphere groups as in (8). For all $a\in\tilde A$ choose a small disk neighbourhood $U_a\ni a$ that avoids all other points in $\tilde A$, so that $\pi_1(U_a\setminus\tilde A)\cong\mathbb Z$. We call a curve $\gamma$ close to $a$ if $\gamma\subset U_a$.
Lemma 2.16. Every minimal portrait of groups $(\tilde G_a)_{a\in\tilde A}$ in $\tilde G$ is uniquely characterized by a family of paths $(\ell_a)_{a\in\tilde A}$ with
(28) $\ell_a\colon[0,1]\to S^2$ rel $\tilde A$, $\quad\ell_a(t)\notin\tilde A$ for $t<1$, $\quad\ell_a(0)=*$ and $\ell_a(1)=a$,
so that for any sufficiently small $\epsilon>0$
(29) $\tilde G_a=\bigl\{\ell_a{\restriction}_{[0,1-\epsilon]}\#\alpha_a\#(\ell_a{\restriction}_{[0,1-\epsilon]})^{-1}\bigm|\alpha_a\text{ is close to }a\bigr\}$ rel $\tilde A$.
Conversely, every collection of curves (28) defines by (29) a minimal portrait of groups for every sufficiently small $\epsilon>0$.
Proof. This follows immediately from the definition of "lollipop" generators, see (5). □
It follows that every $\tilde G_a$ is self-normalizing in $\tilde G$: conjugating $\tilde G_a$ by an element $g\notin\tilde G_a$ amounts to precomposing the path $\ell_a$ with $g$, resulting in a different path.
Lemma 2.17. Let ${}_HB_G$ be an orbisphere biset. Then every pair of minimal portraits of groups $(H_c)_{c\in C}$ in $H$ and $(G_a)_{a\in A}$ in $G$ can be uniquely completed to a minimal portrait of bisets $(B_c)_{c\in C}$ in $B$.
As a consequence, the minimal portrait of bisets is unique up to conjugacy. We note that the lemma is also true in case ${}_HB_G$ is a cyclic biset, namely if $H,G\cong\mathbb Z$; in this case all $B_c$ are equal to $B$.
Proof. Since by Assumption (6) the sets A and C contain at least 3 elements, H
and G are non-cyclic, and in particular have trivial centre.
LAURENT BARTHOLDI AND DZMITRY DUDKO
Let $f_*\colon C\to A$ be the portrait of $B$. Choose generators $g_a\in G_a$ and $h_c\in H_c$. Recall from [3, §2.6] or [5, Definition 2.5] that there are $b_c\in B$ and $k_c\in H$ such that $k_c^{-1}h_ck_cb_c=b_cg_{f(c)}^{\deg(c)}$ for all $c\in C$. Set then $B_c:=H_ck_cb_cG_{f(c)}$, and note that these subbisets satisfy Definition 2.15.
Suppose next that $(B'_c)_{c\in C}$ is another portrait of bisets, and choose elements $b'_c\in B'_c$. Then by [5, Definition 2.6(SB3)] the conjugacy class $\Delta_c$ appears exactly once among the lifts of $\Gamma_{f(c)}$, so $Hb'_c\cap B_c\ne\emptyset$, so we may choose $k_c\in H$ with $k_cb'_c\in B_c$. Then $k_cB'_c=k_cb'_cG_{f(c)}=B_cG_{f(c)}=B_c$. We have $k_cH_c=H_ck_c$, so $k_c\in H_c$ because $H_c$ is self-normalizing in $H$, and therefore $B_c=B'_c$. □
Consider next a forgetful morphism $F_{E,D}\colon{}_{\tilde H}\tilde B_{\tilde G}\to{}_HB_G$ of orbisphere bisets, and let $(\tilde B_c)_{c\in\tilde C}$ be the minimal portrait of bisets given by Lemma 2.17.
Lemma 2.18. Let $(m_c)_{c\in\tilde C}$ and $(\ell_a)_{a\in\tilde A}$ be the paths (see Lemma 2.16) associated with the respective portraits of groups $(\tilde H_c)_{c\in\tilde C}$ and $(\tilde G_a)_{a\in\tilde A}$. The portrait $(F_{E,D}(\tilde B_c))_{c\in\tilde C}$ admits the following description: for every sufficiently small $\epsilon>0$,
(30) $B_c=\bigl\{m_c{\restriction}_{[0,1-\epsilon]}\#\beta_c\#(\ell_{f_*(c)}{\restriction}_{[0,1-\epsilon]})^{-1}{\uparrow}_f^{\beta_c(1)}\bigm|\beta_c\text{ is close to }c\bigr\}$ rel $C$.
Proof. Since $B_c$ is obtained from $\tilde B_c$ by the intertwiner $F_{E,D}$ (see (9)), it suffices to consider the case $E=D=\emptyset$; and in that case, by Lemma 2.17 it suffices to note that $B_c$ is indeed an $H_c$-$G_{f_*(c)}$-biset. □
Let ${}_HB_G$ be an orbisphere biset; let $f_*\colon\tilde C\to\tilde A$ be an abstract portrait extending $B_*$; let $\deg\colon\tilde C\to\mathbb N$ be an extension of $\deg_B\colon C\to\mathbb N$; and let $(B_c)_{c\in\tilde C}$ be a portrait of bisets in $B$ with portraits of groups $(H_c)_{c\in\tilde C}$ in $H$ and $(G_a)_{a\in\tilde A}$ in $G$. A congruence of portraits is defined by a choice of $(h_c)_{c\in\tilde C}$ in $H$ and $(g_a)_{a\in\tilde A}$ in $G$, and modifies the portrait of bisets by replacing
$$H_c\rightsquigarrow h_c^{-1}H_ch_c,\qquad B_c\rightsquigarrow h_c^{-1}B_cg_{f_*(c)},\qquad G_a\rightsquigarrow g_a^{-1}G_ag_a.$$
By Lemma 2.17, any two minimal portraits of bisets are congruent.
2.6. Main result. Consider an orbisphere map $f\colon(S^2,C,\mathrm{ord})\to(S^2,A,\mathrm{ord})$. We call it compatible with $F_E\colon(S^2,\tilde C,\widetilde{\mathrm{ord}})\dashrightarrow(S^2,C,\mathrm{ord})$ and $F_D\colon(S^2,\tilde A,\widetilde{\mathrm{ord}})\dashrightarrow(S^2,A,\mathrm{ord})$ and $f_*\colon\tilde C\to\tilde A$ and $\deg\colon\tilde C\to\mathbb N$ if $\deg(e)=1$ for all $e\in E$ with $f_*(e)\in D$, and $\widetilde{\mathrm{ord}}(c)\deg(c)\mid\widetilde{\mathrm{ord}}(f_*(c))$ for all $c\in\tilde C$, and $\{\deg(f\text{ at }z)\mid z\in f^{-1}(a)\}=\{\deg(c)\mid c\in f_*^{-1}(a)\}$ for all $a\in A$. Equivalently, there is an orbisphere map $\tilde f\colon(S^2,\tilde C,\widetilde{\mathrm{ord}})\to(S^2,\tilde A,\widetilde{\mathrm{ord}})$ that can be isotoped within maps $(S^2,\tilde C,\widetilde{\mathrm{ord}})\to(S^2,\tilde A,\widetilde{\mathrm{ord}})$ to a map making (7) commute and such that $\deg\colon\tilde C\to\mathbb N$ and $f_*\colon\tilde C\to\tilde A$ are induced by $\tilde f$.
Compatibility of an orbisphere biset ${}_HB_G$ with $F_E\colon\tilde H\to H$, $F_D\colon\tilde G\to G$, $f_*\colon\tilde C\to\tilde A$, and $\deg\colon\tilde C\to\mathbb N$ is defined similarly, and is equivalent to the existence of a biset $\tilde B$ making (8) commute.
We are now ready to relate the mapping class bisets $M^*$ to portraits of bisets:

Theorem 2.19. Let $F_E\colon\tilde H\to H$ and $F_D\colon\tilde G\to G$ be forgetful morphisms of orbisphere groups as in (8), and let ${}_HB_G$ be an orbisphere biset compatible with $F_E$, $F_D$, $f_*\colon\tilde C\to\tilde A$, and $\deg\colon\tilde C\to\mathbb N$.

Then for every portrait of bisets $(B_c)_{c\in\tilde C}$ in $B$ parameterized by $f_*$ and $\deg$ there exists an orbisphere biset ${}_{\tilde H}\tilde B_{\tilde G}$ mapping to $B$ under $F_{E,D}$ with a minimal portrait mapping to $(B_c)_{c\in\tilde C}$.

Furthermore, $\tilde B$ is unique up to pre- and post-composition with bisets of knitting mapping classes, and we have a congruence of bisets (see (19))
(31) $P\colon{}_{\pi_1^*(Y,E)}M^*(B|E,D)_{\pi_1(X,D)}\to{}_{H^E\rtimes E{\downarrow}}\bigl\{(B',(B'_c)_{c\in\tilde C})\bigm|B'\in M(B),\ (B'_c)_{c\in\tilde C}\text{ a portrait in }B'\bigr\}_{G^D}$
given by $P(\tilde B')=(F_{E,D}(\tilde B'),\text{ induced portrait of }\tilde B')$.

The $H^E$-$G^D$-action on $\{(B',(B'_c)_{c\in\tilde C})\}$ is given by
$$(h_c)_{c\in E}\,(B',(B'_c)_{c\in\tilde C})\,(g_d)_{d\in D}=(B',(h_cB'_cg_{f_*(c)})_{c\in\tilde C}),$$
with the understanding that $h_c=1$ if $c\notin E$ and $g_{f_*(c)}=1$ if $f_*(c)\notin D$, and the action of $E{\downarrow}$ is given by permutation of the portrait of bisets.

In the dynamical situation (i.e. when $H=G$ and $\tilde H=\tilde G$), Theorem 2.19 proves Theorem C.
Proof of Theorem 2.19. Clearly, (31) is an intertwiner: firstly, the actions of $E{\downarrow}$ are compatible; we may ignore them in the sequel. Secondly, let $(B'_c)_{c\in\tilde C}$ be the induced portrait of bisets of $\tilde B'$. For $e\in E$ we may write
$$B'_e=\{m_e\#p^{-1}\mid p\colon[0,1]\to Y,\ p^{-1}(0)=e,\ f\circ p=\ell_{f_*(e)}\},$$
see (30). Consider $\psi\in\mathrm{Mod}(Y|E)$; the action of $\psi$ on $B'_e$ replaces $m_e$ by $\psi\circ m_e$; and if $E_E(\psi)=(h_e)_{e\in E}$ then $\psi\circ m_e=h_em_e$ by the very definition of 'push' and (19). The same argument applies to the right action.

Let us now show that $P$ is a congruence. Since $P$ is an intertwiner between left-free bisets with isomorphic acting groups, it is sufficient to show that $P$ preserves left orbits.

We proceed by adding new points to $D$ and $E$. If $E=\emptyset$, then the forgetful maps $M^*(B|E,D)\to M(B)$ and
$$\bigl\{(B',(B'_c)_{c\in\tilde C})\bigm|B'\in M(B),\ (B'_c)_{c\in\tilde C}\text{ a portrait in }B'\bigr\}\to M(B)$$
are bijections and the claim follows. Therefore, it is sufficient to assume that $D=\emptyset$.
By [5, Theorem 7.9], the biset $B$ may be written as $B(f)$ for a branched covering $f\colon(S^2,C,\mathrm{ord})\to(S^2,A,\mathrm{ord})$, unique up to isotopy rel $C$. We lift $f$ to a branched covering $f^+\colon(S^2,f^{-1}(A))\to(S^2,A)$. Let $\tilde B^+$ be its biset and let $(\tilde B_c^+)_{c\in f^{-1}(A)}$ be its minimal portrait of bisets, which is unique by Lemma 2.17.
Let us show that for every portrait of bisets $(B_c)_{c\in\tilde C}$ in $B$ there is an orbisphere biset $\tilde B$ whose minimal portrait of bisets maps to $(B_c)$. This $\tilde B$ will be of the form $B(m)\otimes F_{f^{-1}(A)\setminus C,\emptyset}(\tilde B^+)$ for a homeomorphism $m$ of $(S^2,f^{-1}(A))$.

First, consider the images of all $\tilde B_c^+$ in $\{\cdot\}\otimes_{\tilde H}\tilde B^+\cong\{\cdot\}\otimes_HB$, and compare them to the images of all $B_c$. The condition that, as $c'$ ranges over $f_*^{-1}(f_*(c))$, the $B_{c'}$ lie in different $H$-orbits of $B$ lets us select which preimages of $A$ correspond to the marked points in $\tilde C$, and thus produces a well-defined map $\tilde C\to f^{-1}(A)$.

Let $m'$ be an isotopy of $(S^2,C)$ which maps $\tilde C$ to $f^{-1}(A)$ in the specified manner; then $m'f^+\colon(S^2,\tilde C,\mathrm{ord})\to(S^2,A,\mathrm{ord})$ has biset $\tilde B^0$ and portrait of bisets $(\tilde B_c^0)_{c\in\tilde C}$; and $F_{E,\emptyset}(\tilde B_c^0)\subseteq HB_c$ for all $c\in\tilde C$, so we may write $h_cF_{E,\emptyset}(\tilde B_c^0)=B_c$ for some $h_c\in H$. (We recall that $B_c$ consists of $d(c)$ elements permuted by $G_{f_*(c)}$, where $d(c)$ is the local degree of $f$ at $c$. We have $h_cF_{E,\emptyset}(\tilde B_c^0)=B_c$ if and only if $h_cF_{E,\emptyset}(\tilde B_c^0)\cap B_c\ne\emptyset$.) We set $\tilde B=\prod_{c\in E}\mathrm{push}(h_c)\,\tilde B^0$, and note that the minimal portrait of bisets of $\tilde B$ maps to $(B_c)_{c\in\tilde C}$ under $F_{E,\emptyset}$.

The only choice involved is that of a mapping class that yields $\mathrm{push}(h_c)$ when restricted to $C\cup\{c\}$, namely that of knitting mapping classes. □
2.7. Fibre bisets. Consider an orbisphere map $\tilde f\colon(S^2,C\sqcup E,\widetilde{\mathrm{ord}})\to(S^2,A\sqcup D,\widetilde{\mathrm{ord}})$ as above and define the saturation of $E$ as
$$\bar E:=\bigsqcup_{e\in E}\tilde f^{-1}\bigl(\tilde f(e)\bigr)\setminus C.$$
Lemma 2.20. Let $\tilde f\colon(S^2,C\sqcup E,\widetilde{\mathrm{ord}})\to(S^2,A\sqcup D,\widetilde{\mathrm{ord}})$ be an orbisphere map and let $m\colon(S^2,C\sqcup\bar E)\circlearrowleft$ be a homeomorphism such that $m{\restriction}_C=1$ and for every $e\in\bar E$ we have $\tilde f(m(e))=\tilde f(e)$. If the isotopy class of $m$ is not in $\mathrm{Mod}^*(S^2,C\sqcup E)$, namely if $m$ moves at least one point in $E$ to $\bar E\setminus E$, then $m\tilde f\not\approx_{C\sqcup E}\tilde f$.

For every $\tilde g\in M^*(\tilde f)$ there is a homeomorphism $m\colon(S^2,C\sqcup\bar E)\circlearrowleft$ as above (i.e. $m{\restriction}_C=1$ and $\tilde f(m(e))=\tilde f(e)$ for $e\in\bar E$) such that $\tilde g\approx_{C\sqcup E}m\tilde f$.

Note that $\mathrm{Mod}^*(S^2,C\sqcup\bar E)$ does not act on $M^*(\tilde f)$: there are orbisphere maps $\tilde f_1,\tilde f_2\colon(S^2,C\sqcup E,\widetilde{\mathrm{ord}})\to(S^2,A\sqcup D,\widetilde{\mathrm{ord}})$ such that $\tilde f_1\approx_{C\sqcup E}\tilde f_2$ but $m\tilde f_1\not\approx_{C\sqcup E}m\tilde f_2$ for a homeomorphism $m\colon(S^2,C\sqcup\bar E)\circlearrowleft$ as above.
Proof. Suppose $m\tilde f\approx_{C\sqcup E}\tilde f$. Since $\bar E$ is saturated, by Lemma 2.9(2) there is an $n\colon(S^2,C\sqcup\bar E)\circlearrowleft$ with $n{\restriction}_{C\sqcup E}=1$ and $\tilde f(n(e))=\tilde f(e)$ for $e\in\bar E\setminus E$ such that $nm\tilde f\approx_{C\sqcup\bar E}\tilde f$. This contradicts Lemma 2.9(1): the biset
(32) $M^*\bigl(\tilde f\colon(S^2,C\sqcup\bar E)\to(S^2,A\sqcup D)\bigr)$
is left-free while $nm\ne1$.
The second claim follows from Lemma 2.9(2) applied to the biset (32). □
We are interested in the fibre biset ${}_{H^E}F_{E,D}^{-1}(B')_{G^D}$ under the forgetful map $F_{E,D}\colon M^*(B|E,D)\to M(B)$. For every $a\in A$ define $E_a:=E\cap f_*^{-1}(a)$, where $f_*\colon\tilde C\to\tilde A$ is the portrait.
Proposition 2.21. We have
(33) ${}_{H^E}F_{E,D}^{-1}(B')_{G^D}\cong\prod_{a\in A}{}_{H^{E_a}}M^*(B'|E_a,\emptyset)_1\times\prod_{d\in D}{}_{H^{E_d}}M^*(B'|E_d,\{d\})_{G^{\{d\}}}.$
Suppose that $B'$ is the biset of $g\colon(S^2,C,\mathrm{ord})\to(S^2,A,\mathrm{ord})$; then
$${}_{H^{E_a}}M^*(B'|E_a,\emptyset)_1\cong H^{E_a}\times\{\iota\colon E_a\hookrightarrow g^{-1}(a)\setminus C\}.$$
The biset ${}_{H^{E_d}}M^*(B'|E_d,\{d\})_{G^{\{d\}}}$ is congruent to the biset
$$\{(b_e)_{e\in E_d}\in B'^{E_d}\mid Hb_e\ne Hb_{e'}\text{ if }e\ne e'\}$$
endowed with the actions
$$(h_e)_{e\in E_d}\cdot(b_e)_{e\in E_d}\cdot g=(h_eb_eg)_{e\in E_d}.$$
Proof. By Theorem 2.19, the biset ${}_{H^E}F_{E,D}^{-1}(B')_{G^D}$ is isomorphic to the biset of portraits $(B_c)_{c\in\tilde C}$ in $B'$. Recall that $(B_c)_{c\in C}$ is fixed (by Lemma 2.17) while $(B_e)_{e\in E}$ varies; this shows (33).
The second claim follows from Lemma 2.20.
For $d\in D$ and $e\in E_d$ we have $B_e=\{b_e\}$; i.e. the choice of $(B_e)_{e\in E_d}$ is equivalent to the choice of $(b_e)_{e\in E_d}\in B'^{E_d}$ subject to the condition stated above. □
Note that ${}_{H^{E_a}}M^*(B'|E_a,\emptyset)_1$ will not be transitive as soon as there are at least two maps $\iota\colon E_a\to g^{-1}(a)\setminus C$.
Let us define
$$M(B|E,D):=\mathrm{Mod}(Y|E)\otimes M(\tilde B)\otimes\mathrm{Mod}(X|D).$$
Theorem 2.22. Suppose $f_*(E)\subseteq D$ and, moreover, that $f_*\colon E\to D$ is a bijection. We then have a congruence
(34) ${}_{H^E}\bigl(F_{E,D}^{-1}(B')\bigr)_{G^D}\to({}_HB'_G)^E$
mapping the portrait $(\tilde B_c)_{c\in\tilde C}$ in $B'$ to $(b_e)_{e\in E}$ where $\tilde B_e=\{b_e\}$. The group $G^D$ is identified with $G^E$ via the bijection $f_*\colon E\to D$.
Moreover, $M^*(\tilde B)=M(\tilde B)$, $M^*(B|E,D)=M(B|E,D)$, and exact sequences similar to (27) hold:
(35) [commutative diagram of exact sequences: the knitting braid groups $\mathrm{knBraid}(Y,E)$ and $\mathrm{knBraid}(X,D)$ act on $M(\tilde B)$; the biset ${}_{\mathrm{Braid}(Y,E)}M(\tilde B)_{\mathrm{Braid}(X,D)}$ maps by $E_{E,D}$ to ${}_{\pi_1(Y,E)}M(B|E,D)_{\pi_1(X,D)}$, which maps by the congruence $P$ to $\bigl\{(B'\in M(B),(B'_c)_{c\in\tilde C})\bigm|(B'_c)_{c\in\tilde C}\text{ is a portrait in }B'\bigr\}_{G^D}$; the forgetful maps $F_{E,D}$ send all of these to $M(B)$.]

The bottom sequence in (35) can be written (using (34)) as
(36) $\bigsqcup_{B'\in M(B)}(B')^E\hookrightarrow M(B|E,D)\twoheadrightarrow M(B).$

Proof. The first claim follows from Proposition 2.21. Since $(B')^E$ is a transitive biset, we obtain $M(B|E,D)=M^*(B|E,D)$, and the exact sequences (36) and (35) hold because the fibres are connected. □
Corollary 2.23. Let $f\colon(S^2,A,\mathrm{ord})\circlearrowleft$ be an orbisphere map and let $D=\bigsqcup_{i\in I}D_i$ be a finite union of periodic cycles $D_i$ of $f$. Then (36) takes the form
(37) $\bigsqcup_{g\in M(f)}\bigsqcup_{i\in I}B(g)^{\otimes(\#D_i)}\hookrightarrow M(f|D,D)\twoheadrightarrow M(f).$
2.8. Centralizers of portraits. Let us now consider the dynamical situation: $H=G$ and $\tilde H=\tilde G$; we abbreviate $F_D=F_{D,D}$ and $M(B|D)=M(B|D,D)$.
Given an orbisphere biset $\tilde B$ we denote by $Z(\tilde B)\le\mathrm{Mod}(\tilde G)$ the centralizer of $\tilde B$ in $M(\tilde B)$ and by $Z(\tilde B|D)\le\mathrm{Mod}(\tilde G|D)$ the centralizer of the image of $\tilde B$ in $M(B|D)$, see Theorem 2.19. We have a natural forgetful map
(38) $Z(\tilde B)\to Z(\tilde B|D)$
which is, in general, neither injective nor surjective. However, we will show in Corollary 4.28 that (38) is an isomorphism if $\tilde B$ is geometric non-invertible.
Recall from (12) the short exact sequence $\pi_1(X,D)\hookrightarrow\mathrm{Mod}(G|D)\twoheadrightarrow\mathrm{Mod}(G)$. We have the corresponding sequence for centralizers:
(39) $1\to Z(\tilde B|D)\cap\pi_1(X,D)\to Z(\tilde B|D)\to Z(\tilde B).$
If $\tilde B$ is geometric non-invertible, then $Z(\tilde B|D)\cap\pi_1(X,D)$ is trivial, so $Z(\tilde B|D)$ is naturally a subgroup of $Z(\tilde B)$, and in Theorem 4.41 we will show that it has finite index.
Definition 2.24. The relative centralizer $Z_D\bigl((B_a)_{a\in\tilde A}\bigr)$ of a portrait of bisets $(B_a)_{a\in\tilde A}$ is the set of $(g_d)\in G^D$ such that
$$B_d=g_d^{-1}B_dg_{f_*(d)}\quad\text{for all }d\in D,$$
with the understanding that $g_{f_*(d)}=1$ if $f_*(d)\notin D$. △
We remark that we could also have defined the "full" normalizer, consisting of all $(g_a)\in G^{\tilde A}$ with $G_a^{g_a}=G_a$ and $B_a=g_a^{-1}B_ag_{f_*(a)}$ for all $a\in\tilde A$, and its subgroup the "full" centralizer, in which $g_a$ centralizes $G_a$ and $g_a^{-1}\cdot{-}\cdot g_{f_*(a)}$ is the identity on $B_a$; but we will make no use of these notions. The "full" normalizer is the direct product of $\prod_{a\in A}G_a$ and the relative centralizer.
We also note that, if $(g_d)_{d\in D}$ belongs to the relative centralizer of $(B_a)_{a\in\tilde A}$ and $f^n(d)\in A$ for some $n\in\mathbb N$, then $g_d=1$.
Applying Theorem 2.19 to the dynamical situation we obtain:
Proposition 2.25. Let ${}_{\tilde G}\tilde B_{\tilde G}\twoheadrightarrow{}_GB_G$ be a forgetful morphism and let $(B_a)_{a\in\tilde A}$ be the induced portrait of bisets in $B$. Then any choice of isomorphisms $\pi_1(X,d)\cong G$ gives an isomorphism
$$Z(\tilde B|D)\cap\pi_1(X,D)\xrightarrow{\ \cong\ }Z_D\bigl((B_a)_{a\in\tilde A}\bigr).$$
3. G-spaces
We start with general considerations. Let $Y$ be a right $H$-space, and let $X$ be a right $G$-space. For every map $f\colon Y\to X$ there exists a natural $H$-$G$-biset $M(f)$, defined by
(40) $M(f):=\{hfg:=f(-\cdot h)\cdot g\mid h\in H,\ g\in G\},$
namely the set of maps $Y\to X$ obtained by precomposing with the $H$-action and post-composing with the $G$-action. Note that $M(f)$ is right-free if the action of $G$ is free on $X$. We have a natural $G$-equivariant map $Y\otimes_HM(f)\to X$ given by evaluation: $y\otimes b\mapsto b(y)$. Define
$$H_f:=\{h\in H\mid\exists g\in G\text{ with }hf=fg\text{ in }M(f)\},$$
the stabilizer in $H$ of $fG\in M(f)/G$. Then $f$ descends to a continuous map $f\colon Y/H_f\to X/G$.
Lemma 3.1. Suppose that $X$ and $Y$ are simply connected and that $G$, $H$ act freely with discrete orbits. Then $M(f)$ is isomorphic to the biset of the correspondence $Y/H\leftarrow Y/H_f\xrightarrow{f}X/G$ as defined in [3].

Proof. Let us define the following subbiset of $M(f)$:
(41) $M'(f):=\{f(-\cdot h)\cdot g\mid h\in H_f,\ g\in G\}.$
Since $Y$, $X$ are simply connected, the biset of $f\colon Y/H_f\to X/G$ is isomorphic to $M'(f)$. The isomorphism is explicit: choose basepoints ${:}\in Y$ and $*\in X$ so that $\pi_1(Y/H,{:}H)\cong H$ and $\pi_1(X/G,*G)\cong G$. Given $b\in B(f)$, represent it as a path $\beta\colon[0,1]\to X/G$ with $\beta(0)=f({:}H_f)$ and $\beta(1)=*G$, and lift it to a path $\beta\colon[0,1]\to X$. We have $\beta(0)=f({:})h$ and $\beta(1)=*g$ for some $h\in H_f$, $g\in G$, and we map $b\in B(f)$ to $h^{-1}\cdot f\cdot g\in M'(f)$. This map is a bijection because both $B(f)$ and $M'(f)$ are right-free. We finally have
$$M(f)\cong H\otimes_{H_f}M'(f)\cong B(Y/H\leftarrow Y/H_f\to X/G).\qquad\square$$

In case the actions of $G$, $H$ are discrete but not free, there still is a surjective morphism $B(X/G\leftrightarrow Y/H)\twoheadrightarrow M(f)$, when $X/G$ and $Y/H$ are treated as orbispaces.
Example 3.2 (Modular correspondence). The mapping class biset $M(f)$ from §2.4 is isomorphic to the biset of the associated correspondence on moduli space, see [5, Proposition 8.1]. Indeed, $M(f)$ is identified with
(42) $\{\sigma_n\circ\sigma_f\circ\sigma_m\mid n\in\mathrm{Mod}(S^2,C),\ m\in\mathrm{Mod}(S^2,A)\},$
where $\sigma_m\colon\mathcal T_A\circlearrowleft$, $\sigma_n\colon\mathcal T_C\circlearrowleft$, and $\sigma_f\colon\mathcal T_A\to\mathcal T_C$ are the pulled-back actions between Teichmüller spaces of $m$, $n$, $f$ respectively. By Lemma 3.1, the biset (42) is isomorphic to the biset of the modular correspondence
$$\mathcal M_C\xleftarrow{\ i\ }\mathcal W_f\xrightarrow{\ \sigma_f\ }\mathcal M_A.$$
3.1. Universal covers. Let us now generalize Lemma 3.1 by dropping the requirement that $X$, $Y$ be simply connected. Choose basepoints ${:}$, $*$, write $Q=\pi_1(Y,{:})$ and $P=\pi_1(X,*)$ and $\tilde H=\pi_1(Y/H,{:}H)$ and $\tilde G=\pi_1(X/G,*G)$; so we have exact sequences
(43) $1\to Q\to\tilde H\xrightarrow{\pi}H\to1,\qquad 1\to P\to\tilde G\xrightarrow{\pi}G\to1.$
The universal cover of $X$ may be defined as
$$\tilde X:=\{\beta\colon[0,1]\to X\mid\beta(1)=*\}/{\approx};$$
it has a natural basepoint $\tilde*$ given by the constant path at $*$, and admits a right $P$-action by right-concatenation of loops at $*$. The projection $\tilde X\to X$ is a covering,
and is given by $\beta\mapsto\beta(0)$. We denote by $\tilde X^\vee$ the left $P$-set structure on $\tilde X$, with action $g\cdot\beta^\vee:=(\beta\cdot g^{-1})^\vee$. We may naturally identify $\tilde X^\vee$ with $\{\beta^{-1}\mid\beta\in\tilde X\}$ and its natural left $P$-action.
Let us consider the universal covers $\tilde X$, $\tilde Y$ of $X$, $Y$ respectively and a lift $\tilde f$ of $f$. We have the following situation, with the acting groups represented on the right, and omitted in the left column:
[diagram: $\tilde Y\xrightarrow{\tilde f}\tilde X$ with $\tilde X\curvearrowleft\tilde G$; below it, $\tilde Y/Q\cong Y\xrightarrow{f}X$ with $X\curvearrowleft G$; at the bottom, $\tilde Y/\tilde H\cong Y/H\leftarrow Y/H_f\xrightarrow{f}X/G$]
Let $\tilde H_f$ be the full preimage of $H_f$ in $\tilde H$. Note that $\tilde H_f$ is the stabilizer of $\tilde f\tilde G$ in $M(\tilde f)/\tilde G$: we have $\tilde h\tilde f\in\tilde f\tilde G$ if and only if $\pi(\tilde h)f\in fG$ by the unique lifting property. By Lemma 3.1, we have
$$M(\tilde f)\cong B(\tilde Y/\tilde H\leftarrow\tilde Y/\tilde H_f\to\tilde X/\tilde G)\cong B(Y/H\leftarrow Y/H_f\to X/G),$$
and we shall see that $M(\tilde f)$ is an extension of bisets. We have a natural map $\pi\colon M(\tilde f)\to M(f)$, given by $\tilde h\tilde f\tilde g\mapsto\pi(\tilde h)f\pi(\tilde g)$ for all $\tilde h\in\tilde H$, $\tilde g\in\tilde G$.
Lemma 3.3. There is a short exact sequence of bisets
(44) ${}_QM(f)_P\hookrightarrow M(\tilde f)\xrightarrow{\pi}M(f),$
in which every fibre $\pi^{-1}(hfg)$ is isomorphic to a twisted form $B(hfg)$ of the biset of $f$.
Proof. This follows immediately from Lemma 3.1 applied to $\tilde Y$, $\tilde X$ with actions of $Q$, $P$ respectively. □
Let us now assume that the short exact sequences (43) are split, so $\pi_1(X/G)\cong P\rtimes G$ and $\pi_1(Y/H)\cong Q\rtimes H$. We shall see that the sequence (44) is split, and that some additional structure on $M(f)$ and $B(f)$ allows the extension $M(\tilde f)\cong B(Y/H\leftrightarrow X/G)$ to be reconstructed as a crossed product.

The splitting of the map $\pi\colon\pi_1(X/G,*G)\to G$ means that there is a family $\{\alpha_g\}_{g\in G}$ of curves in $X$ such that $\alpha_g$ connects $*g$ to $*$ and $\alpha_{g_1g_2}\approx(\alpha_{g_1}g_2)\#\alpha_{g_2}$. Similarly there is a family $\{\beta_h\}_{h\in H}$ of curves in $Y$ such that $\beta_h$ connects ${:}h$ to ${:}$ and $\beta_{h_1h_2}\approx(\beta_{h_1}h_2)\#\beta_{h_2}$.

For every $h\in H_f$ there is a unique element of $G$, written $h^f\in G$, such that $h\cdot f=f\cdot h^f$ in $M'(f)$. For every $h\in H_f$ and every $b\in B(f)$, represented as a path $b\colon[0,1]\to X$ with $b(0)=f({:})$ and $b(1)=*$, define
$$b^h:=(f\circ\beta_h^{-1})\#(b\cdot h^f)\#\alpha_{h^f}.$$
We clearly have $(q\cdot b\cdot p)^h=q^h\cdot b^h\cdot p^{h^f}$, so $H_f$ acts on $B(f)$ by congruences. We convert that right action to a left action by ${}^hb:=b^{h^{-1}}$.
For every $c\in M'(f)$ and every $p\in P$, write $c=fg$ and define ${}^cp:={}^gp=gpg^{-1}$. We clearly have ${}^{cg}p={}^c({}^gp)$.

Lemma 3.4. The biset of $f\colon Y/H_f\to X/G$ is isomorphic to the crossed product $B(f)\rtimes M'(f)$, which is $B(f)\times M'(f)$ as a set, with actions of $\tilde H_f\cong Q\rtimes H_f$ and $\tilde G\cong P\rtimes G$ given by
$$(q,h)\cdot(b,c)\cdot(p,g)=(q\cdot{}^h(b\cdot{}^cp),\ h\cdot c\cdot g).$$
As a consequence,
$$B(Y/H\leftarrow Y/H_f\to X/G)\cong\tilde H\otimes_{\tilde H_f}B(f)\rtimes M'(f).$$
Proof. This is almost a tautology. The short exact sequence (44) splits, with section $h\cdot f\cdot g\mapsto(1,h)\cdot\tilde f\cdot(1,g)$, and the actions of $\tilde G$, $\tilde H$ on $\tilde X$, $\tilde Y$ can be identified with concatenation of lifts of the paths $\alpha_g$, $\beta_h$. □
3.2. Self-similarity of G-spaces. We return in more detail to the situation in which $Y$, $X$ are universal covers; we rename them $\tilde Y$, $\tilde X$ so as to keep $Y$, $X$ for the original spaces.
Consider two pointed spaces $(Y,{:})$ and $(X,*)$ with $H:=\pi_1(Y,{:})$ and $G:=\pi_1(X,*)$, and let $f\colon Y\to X$ be a continuous map. Recall that its biset is defined by
(45) ${}_HB(f)_G:=\{\beta\colon[0,1]\to X\mid\beta(0)=f({:}),\ \beta(1)=*\}/{\approx},$
with the natural actions by left- and right-concatenation. We thus naturally have $B(f)\subseteq\tilde X$, with corresponding right actions, and left action given by composing loops via $f$.
The map $f\colon Y\to X$ naturally lifts to a map $\tilde Y\to Y\to X$, and every choice of $\beta\in B(f)$ defines uniquely, by the lifting property of coverings, a lift $\tilde f_\beta\colon\tilde Y\to\tilde X$ with the property that $\tilde{:}\mapsto\beta$. Furthermore, we have the natural identities $\tilde f_\beta(-\cdot h)\cdot g=\tilde f_{h\beta g}$, so that the biset $B(f)$ as defined in (45) is canonically isomorphic to every biset $M(\tilde f_\beta)$ as defined in (40), when an arbitrary $\beta\in B(f)$ is chosen, and to $B(\tilde f)$, when an arbitrary lift $\tilde f\colon\tilde Y\to\tilde X$ of $f$ is chosen.
r choosing fr “ 1
If f : Y Ñ X is a covering, then we may assume Yr “ X;
r
gives the simple description Bpf q “ H GG . Recall that the biset of a correspondence pf, iq : Y Ð Z Ñ X is defined as Bpf, iq “ Bpiq_ b Bpf q. In the case
of a covering correspondence, in which f is a covering, we therefore arrive at
Bpf, iq “ Bpiq_ bπ1 pZq G. We shall give now a more explicit description of this
biset using covering spaces, just as we had in [3, (15)]
Bpf, iq “ tpδ : r0, 1s Ñ Y, p P Zq | δp0q “ :, δp1q “ ippq, f ppq “ ˚u{«
and the special case, when i : Z Ñ Y is injective, of
Bpf, iq “ tδ : r0, 1s Ñ Y | δp0q “ :, f pi´1 pδp1qqq “ ˚u{«.
Still assuming that $f\colon Z\to X$ is a covering map, define
(46) $\tilde Z_H:=\{(\delta,p)\in\tilde Y\times Z\mid i(p)=\delta(0)\},$
the fibred product of $\tilde Y$ with $Z$ above $Y$, see Diagram (48) left. (Note that $\tilde Z_H$ is not the universal cover of $Z$.) In case $i\colon Z\to Y$ is injective, we may write
$$\tilde Z_H=\{\delta\in\tilde Y\mid\delta(0)\in i(Z)\},$$
so $\tilde Z_H$ is the full preimage of $i(Z)$ under the covering map $\tilde Y\to Y$.
Proposition 3.5. If $(f,i)$ is a covering correspondence, the following map defines an isomorphism of $H$-spaces:
$$\Phi\colon{}_HB(f,i)_G\otimes\tilde X^\vee\to(\tilde Z_H)^\vee\quad\text{given by}\quad\Phi\bigl((\delta,p)\otimes\alpha^\vee\bigr)=\bigl((\delta\#(i\circ\alpha^{-1}{\uparrow}_f^p))^{-1},p\bigr).$$
For every $b=(\delta,p)\in B(f,i)$ the map
$$\tilde f_b^{-1}\colon\tilde X^\vee\to(\tilde Z_H)^\vee\quad\text{given by}\quad\alpha^\vee\mapsto\Phi(b\otimes\alpha^\vee)$$
is the unique lift of the inverse of the correspondence $Y\leftarrow Z\to X$ mapping $\tilde*$ to $b$; we have the equivariance properties
(47) $\tilde f_{hbg}^{-1}=h\cdot\tilde f_b^{-1}(g\cdot-).$
The inverses and contragredients may seem unnatural in the statements above; but they are necessary for the actions to be on the right sides, and are justified by the fact that we construct a lift of the inverse of the correspondence, rather than the correspondence itself:
(48) [diagram: $\tilde Y^\vee\xleftarrow{\tilde i}(\tilde Z_H)^\vee\xleftarrow{\tilde f_b^{-1}}\tilde X^\vee$, lying over $Y\xleftarrow{i}Z\xrightarrow{f}X$]
Proof. It is obvious that $\Phi$ is $H$-equivariant, and it is surjective: given $(\delta,p)\in\tilde Z_H$, choose a path $\alpha\in\tilde X$ with $f(p)=\alpha(0)$, and write $\delta=(\alpha^{-1}{\uparrow}_f^p)^{-1}\#\alpha^{-1}{\uparrow}_f^p\#\delta$, expressing in this manner $(\delta,p)^\vee=\Phi\bigl(((\alpha^{-1}{\uparrow}_f^p\#\delta)^{-1},\alpha^{-1}{\uparrow}_f^p(1))\otimes\alpha^\vee\bigr)$.
If $\Phi((\delta,p)\otimes\alpha^\vee)=\Phi((\delta',p')\otimes(\alpha')^\vee)$, then $\alpha$ and $\alpha'$ start at the same point, so $\alpha=\alpha'g$ for some $g\in G$, and we have then $(\delta,p)=(\delta',p')g^{-1}$ so $(\delta,p)\otimes\alpha^\vee=(\delta',p')\otimes(\alpha')^\vee$ and $\Phi$ is injective.
It is easy to see that $\tilde f_b^{-1}$ is a lift of $f^{-1}$. Conversely, every lift $\tilde f^{-1}$ of $f^{-1}$ maps $\tilde*$ to an element $b\in B(f,i)$ because $\tilde f^{-1}(\tilde*^\vee)$ ends at an $f$-preimage of $*$ by construction; and then $\tilde f^{-1}=\tilde f_b^{-1}$ by unicity of lifts. Finally, equivariance follows from $\tilde f_b^{-1}(g\alpha^\vee)=\Phi(b\otimes g\alpha^\vee)=\Phi(bg\otimes\alpha^\vee)=\tilde f_{bg}^{-1}(\alpha^\vee)$. □
3.3. Planar discontinuous groups. A planar discontinuous group $\tilde X\curvearrowleft G$ is a group acting properly discontinuously on a plane, which will be denoted by $\tilde X$: for every bounded set $V$ the set $\{g\in G\mid Vg\cap V\ne\emptyset\}$ is finite.
Let $X:=(S^2,A,\mathrm{ord})$ be an orbifold with non-positive Euler characteristic, consider a basepoint $*\in S^2\setminus A$, and write $G:=\pi_1(X,*)$. Then the universal cover $\tilde X_G$ of $X$ is a plane endowed with a properly discontinuous action of $G$. We denote by $\pi\colon\tilde X\to X$ the covering map.
By the classification of surfaces, there are only two cases to consider: $\tilde X=\mathbb C$ with $G$ a lattice in the affine group $\{z\mapsto az+b\mid a,b\in\mathbb C,\ |a|=1\}$, and $\tilde X=\mathbb H$ the upper half plane, with $G$ a lattice in $\mathrm{PSL}_2(\mathbb R)$.
Consider another orbisphere $Y:=(S^2,C,\mathrm{ord})$ and a branched covering $f\colon Y\to X$, and fix basepoints ${:}\in Y$ and $*\in X$ with corresponding fundamental groups $H=\pi_1(Y,{:})$ and $G=\pi_1(X,*)$. Let ${}_HB(f)_G$ be the biset of $f$. As usual, we view $f$ as a correspondence $Y\leftarrow Z\to X$ with $Z=(S^2,f^{-1}(A),\mathrm{ord})$ and $i$ a homeomorphism $S^2\to S^2$ mapping a subset of $f^{-1}(A)$ to $C$. The fibred product $\tilde Z_H$ constructed in (46) is naturally a subset of the plane $\tilde Y$, with orbispace points and punctures at all $H$-orbits of $f^{-1}(A)\setminus C$. We need the following classical result.
Theorem 3.6 (Baer [26, Theorem 5.14.1]). Every orientation-preserving homeomorphism of a plane commuting with a properly discontinuous group action is
isotopic to the identity along an isotopy commuting with the action.
Let us reprove [5, Theorem 7.9] using our language of group actions:
Corollary 3.7. Let two orbisphere maps $f,g\colon(S^2,C,\mathrm{ord})\to(S^2,A,\mathrm{ord})$ have isomorphic orbisphere bisets. Then $f\approx_Cg$.
In other words, the orbisphere biset of $f\colon(S^2,C,\mathrm{ord})\to(S^2,A,\mathrm{ord})$ is a complete invariant of $f$ up to isotopy rel $C$.
Proof. Let us write $X=(S^2,A,\mathrm{ord})$ and $Y=(S^2,C,\mathrm{ord})$ and $G=\pi_1(X,*)$ and $H=\pi_1(Y,{:})$. We may represent $f$, $g$ respectively by covering pairs $(f,i)$ and $(g,i)$, with coverings $f,g\colon(S^2,P,\mathrm{ord})\to(S^2,A,\mathrm{ord})$ and $i\colon(S^2,P,\mathrm{ord})\to(S^2,C,\mathrm{ord})$. Let us furthermore write $Z=(S^2,P,\mathrm{ord})$ and $\tilde Z_H$ its fibred product with $\tilde Y$. Identifying $B(f,i)$ and $B(g,i)$, choose $b\in B(f,i)=B(g,i)$, and let
$$\tilde f_b^{-1},\tilde g_b^{-1}\colon\tilde X^\vee\to(\tilde Z_H)^\vee$$
be the corresponding lifts as in Proposition 3.5.
Since $i$ is injective, the map $(\tilde Z_H)^\vee\to Z$ is a covering, so $\tilde f_b^{-1}$ and $\tilde g_b^{-1}$ are coverings. We may therefore consider their quotient $\tilde f_b\circ\tilde g_b^{-1}$, which is a well-defined map $\tilde X^\vee\circlearrowleft$, and is normalized to preserve the basepoint $\tilde*$. By (47) it is a homeomorphism commuting with the action of $G$.
By Theorem 3.6 there is an isotopy $(\tilde h_{b,t}^{-1})_{t\in[0,1]}$ of maps satisfying (47) between $1$ and $\tilde f_b\circ\tilde g_b^{-1}$.
Since the set of fixed points of $G$ is discrete, $\tilde h_{b,t}^{-1}$ preserves all fixed points of $G$ and therefore projects to an isotopy $(h_t)_{t\in[0,1]}$ in $X$. We have $h_0=1$ and $h_1\circ g=f$, so the maps $f$ and $g$ are isotopic rel $C$. □
4. Geometric maps
Let $M$ be a matrix with integer entries and with $\det(M)>1$. We call $M$ exceptional if one of the eigenvalues of $M$ lies in $(-1,1)$; thus the dynamical system $M\colon\mathbb R^2\circlearrowleft$ has one attracting and one repelling direction.
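The exceptional condition is easy to test numerically; the following sketch (our illustration, not part of the paper) checks it for the sample matrix $M=\bigl(\begin{smallmatrix}3&1\\1&1\end{smallmatrix}\bigr)$, which has $\det M=2$ and eigenvalues $2\pm\sqrt2$:

```python
import numpy as np

def is_exceptional(M):
    """det(M) > 1 and exactly one eigenvalue strictly inside (-1, 1)."""
    M = np.asarray(M, dtype=float)
    if round(np.linalg.det(M)) <= 1:
        return False
    ev = np.linalg.eigvals(M)
    if max(abs(ev.imag)) > 1e-9:    # non-real pair: |l1| = |l2| > 1, expanding
        return False
    mags = sorted(abs(ev.real))
    return mags[0] < 1 < mags[1]    # one attracting, one repelling direction

# [[3,1],[1,1]]: det 2, eigenvalues 2 ± sqrt(2), roughly {0.59, 3.41} -- exceptional.
# [[2,0],[0,2]]: det 4, eigenvalues {2, 2} -- expanding, not exceptional.
```

The helper name `is_exceptional` and the sample matrices are our own choices.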
For $r\in\mathbb R^2$, denote by $\langle\mathbb Z^2,-z+r\rangle$ the group of affine transformations of $\mathbb R^2$ generated by the translations $z\mapsto z+v$ with $v\in\mathbb Z^2$ and the involution $z\mapsto-z+r$. The quotient $\mathbb R^2/\langle\mathbb Z^2,-z+r\rangle$ is a topological sphere, with cone singularities of angle $\pi$ at the four images of $\frac12(r+\mathbb Z^2)$.
We call a branched covering $f\colon S^2\circlearrowleft$ a geometric exceptional map if $f\colon S^2\circlearrowleft$ is conjugate to a quotient of an exceptional linear map $M\colon\mathbb R^2\circlearrowleft$ under the action of $\langle\mathbb Z^2,-z+r\rangle$ for some $r\in\mathbb R^2$; in particular, $(1-M)r\in\mathbb Z^2$. A Thurston map $f\colon(S^2,A,\mathrm{ord})\circlearrowleft$ is called geometric if the underlying branched covering $f\colon S^2\circlearrowleft$ is either Böttcher expanding (see [4, Definition 4.1]: there exists a metric on $S^2$ that is expanded everywhere except at critical periodic cycles), a geometric exceptional map, or a pseudo-Anosov homeomorphism. We refer to the first two types as non-invertible geometric maps.
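The count of four cone points can be checked directly; in this small illustration (ours, with an arbitrary sample value $r=(0.3,0.7)$ that is not from the paper), each involution $z\mapsto-z+(r+v)$, $v\in\mathbb Z^2$, fixes the single point $\frac12(r+v)$, and modulo the translation lattice these fixed points fall into exactly four classes:

```python
import itertools

# Fixed points of the involutions z -> -z + (r+v), v in Z^2, are (r+v)/2;
# reduce them modulo Z^2 and count the classes (rounding guards float noise).
r = (0.3, 0.7)   # arbitrary sample value of r
pts = {(round(((r[0] + v0) / 2) % 1, 9), round(((r[1] + v1) / 2) % 1, 9))
       for v0, v1 in itertools.product(range(-6, 6), repeat=2)}
# exactly four classes: the images of (r + Z^2)/2, the cone points of angle pi
```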
We may consider more generally affine maps $z\mapsto Mz+q$ on $\mathbb R^2$, and then their quotient by the group $\langle\mathbb Z^2,-z\rangle$; the map $z\mapsto Mz$ on $\mathbb R^2/\langle\mathbb Z^2,-z+r\rangle$ is converted to that form by setting $q=(M-1)r$. Conversely, if $1$ is not an eigenvalue of $M$, then we can always convert an affine map into a linear one, at the cost of replacing the reflection $-z$ by $-z+r$ in the acting group.
Lemma 4.1. Let $M\colon\mathbb R^2\circlearrowleft$ be exceptional. Then for every bounded set $D\subset\mathbb R^2$ containing $(0,0)$ there is an $n>0$ such that for every $m\ge n$ we have $M^{-m}(D)\cap\mathbb Z^2=\{(0,0)\}$. Moreover, $n=n(D)$ can be taken with $n(D)\le\log\operatorname{diam}(D)$.
Proof. Let $\lambda_1,\lambda_2$ be the eigenvalues of $M$, and let $e_1$ and $e_2$ be unit-normed eigenvectors associated with $\lambda_1$ and $\lambda_2$. It is sufficient to assume that $D$ is a parallelogram centered at $(0,0)$ with sides parallel to $e_1$ and $e_2$:
(49) $D=\{v\in\mathbb R^2\mid v=t_1e_1+t_2e_2\text{ with }|t_1|\le x\text{ and }|t_2|\le y\}.$
In particular, the area of $D$ is comparable to $xy$. Then $M^{-n}(D)$ is again a parallelogram centered at $(0,0)$ with sides parallel to $e_1$ and $e_2$.
We claim that there is $\delta>0$ such that if $D$ is a parallelogram of the form (49) with $\operatorname{area}(D)<\delta$, then $D\cap\mathbb Z^2=\{(0,0)\}$. This will prove the lemma because $\operatorname{area}(M^{-n}(D))=\operatorname{area}(D)/(\det M)^n$ and $\log\operatorname{diam}(D)\asymp\log\operatorname{area}(D)$.
Without loss of generality we assume $x>y$, so $D$ is close to $\mathbb Re_1$. Let $\mu_1$ be the slope of $\mathbb Re_1$. Since $M$ is exceptional, the numbers $\lambda_1,\lambda_2,\mu_1$ are quadratic irrationals, so they are not well approximated by rational numbers: there is a positive constant $C$ such that $|\mu_1-\frac pq|>\frac C{q^2}$ for all $\frac pq\in\mathbb Q$.
Suppose that $D$ contains a non-zero integer point $w=(p,q)$; so $x\ge|q|$. Also, $w$ is close to $\mathbb Re_1$, and in particular $q\ne0$ if $\delta$ is sufficiently small; we may suppose $q>0$. It also follows that $\mu_1$ is close to $\frac pq$. The distance from $w$ to $\mathbb Re_1$ is
$$d(w,\mathbb Re_1)\le y\asymp\frac{\operatorname{area}(D)}x.$$
On the other hand,
$$d(w,\mathbb Re_1)=|w|\sin\angle(w,e_1)\ge q\Bigl|\mu_1-\frac pq\Bigr|.$$
Combining, we get
$$\Bigl|\mu_1-\frac pq\Bigr|\le\frac{d(w,\mathbb Re_1)}q\le\frac{\operatorname{area}(D)}{q\cdot x}\le\frac\delta{q^2},$$
and the claim follows for $\delta\ll C$. □
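The lemma can be sanity-checked numerically. In this sketch (our illustration; the matrix $M=\bigl(\begin{smallmatrix}3&1\\1&1\end{smallmatrix}\bigr)$, the box $D=[-1,1]^2$, the exponent $6$ and the search window are our own choices), no nonzero integer point of a large window is carried into $D$ by $M^6$, i.e. $M^{-6}(D)\cap\mathbb Z^2=\{(0,0)\}$ within that window:

```python
import numpy as np

# Exceptional matrix (det 2, eigenvalues 2 ± sqrt(2)) and the box D = [-1,1]^2.
M = np.array([[3, 1], [1, 1]])
M6 = np.linalg.matrix_power(M, 6)

r = np.arange(-200, 201)
W = np.array(np.meshgrid(r, r)).reshape(2, -1)   # integer points as columns
inside = np.abs(M6 @ W).max(axis=0) <= 1         # does M^6 w land in D?
hits = W[:, inside]
# Only w = (0,0) survives: M^{-6}(D) is a thin parallelogram along the
# contracting eigendirection, whose quadratic-irrational slope avoids Z^2.
```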
Corollary 4.2. Let $f\colon(S^2,A)\circlearrowleft$ be a geometric exceptional map, let $p\in S^2\setminus A$ be a periodic point with period $n$, and let $\gamma\in\pi_1(S^2\setminus A,p)$ be a loop starting and ending at $p$. Let $|\gamma|$ be the length of $\gamma$ with respect to the Euclidean metric of the minimal orbifold structure $(P_f,\mathrm{ord}_f)$. If $\gamma$ is trivial rel $(P_f,\mathrm{ord}_f)$ and $m>\log|\gamma|$, then the lift $\gamma{\uparrow}_{f^{nm}}^p$ is a trivial loop rel $A$.
Proof. By passing to an iterate, we may assume that $p$ is a fixed point of $f$. Since $f$ is geometric exceptional, we have a branched covering map $\pi\colon\mathbb R^2\to S^2$ under which $f$ lifts to an exceptional linear map $M$, and we may assume $\pi(0,0)=p$. By assumption, $\gamma{\uparrow}_\pi^{(0,0)}$ is a trivial loop in $\mathbb R^2$. By Lemma 4.1, the sets $M^{-m}(\gamma{\uparrow}_\pi^{(0,0)})$ and $\pi^{-1}(A)$ are disjoint for $m>\log|\gamma|$; hence $\gamma{\uparrow}_{f^{nm}}^p$ is a trivial loop rel $A$. □
Geometric exceptional maps all admit a minimal orbifold modeled on $\mathbb R^2/\langle\mathbb Z^2,-z\rangle$, which has cone singularities of angle $\pi$ at the images of $\{0,\frac12\}\times\{0,\frac12\}$. We consider this class in a little more detail:
4.1. $(2,2,2,2)$-maps. A branched covering $f\colon(S^2,P_f,\mathrm{ord}_f)\circlearrowleft$ is of type $(2,2,2,2)$ if $|P_f|=4$ and $\mathrm{ord}_f(x)=2$ for every $x\in P_f$. In this case, $f$ is isotopic to a quotient of an affine map $z\mapsto Mz+q$ under the action of $\langle\mathbb Z^2,-z\rangle$, see Proposition 4.6(B).
Lemma 4.3. Let $M$ be a matrix with integer entries and $\det(M)>1$. Denote by $\lambda_1$ and $\lambda_2$ the eigenvalues of $M$, ordered as $|\lambda_2|\ge|\lambda_1|>0$. Then the following are all mutually exclusive possibilities:
• if $M$ is exceptional, that is $0<|\lambda_1|<1<|\lambda_2|$ and $\lambda_1,\lambda_2\in\mathbb R$, then $M\colon\mathbb R^2\circlearrowleft$ is expanding in the direction of the eigenvector corresponding to $\lambda_2$ and contracting in the direction of the eigenvector corresponding to $\lambda_1$;
• if $\lambda_1\in\{\pm1\}$, then $M^2\colon\mathbb R^2\circlearrowleft$ preserves the rational line $\{z\in\mathbb R^2\mid Mz=z\}$;
• the map $M\colon\mathbb R^2\circlearrowleft$ is expanding in all other cases; that is, $\lambda_1=\overline{\lambda_2}\notin\mathbb R$, or $\lambda_1,\lambda_2\in\mathbb R$ but $|\lambda_1|,|\lambda_2|>1$.
Proof. If $M$'s eigenvalues are non-real, then $\lambda_1=\overline{\lambda_2}$ and $|\lambda_1|=|\lambda_2|=\sqrt{\det M}>1$, so $M$ is expanding. If $\lambda_1$ and $\lambda_2$ are real, then they are of the same sign and their product is greater than $1$. The lemma follows. □
The following lemma follows from [23, Main Theorem II]:
Lemma 4.4. If $f\colon(S^2,P_f,\mathrm{ord}_f)\circlearrowleft$ is doubly covered by a torus endomorphism $z\mapsto Mz+q$, then $f$ is geometric if and only if $f$ is Levy-free.
Proof. We consider the exclusive cases of Lemma 4.3. In the first case, $f$ is geometric and $z\mapsto Mz+q$ preserves transverse irrational laminations (given by the eigenvectors of $M$), so $z\mapsto Mz+q$ admits no Levy cycle and a fortiori neither does $f$.
In the second case, $f$ is not geometric and admits as Levy cycle the projection of the $1$-eigenspace of $M$.
In the third case, $f$ is expanding, and admits no Levy cycle by [4, Theorem A]. □
We shall mainly study p2, 2, 2, 2q-maps algebraically: we write the torus as R2 {Z2
and the p2, 2, 2, 2q-orbisphere as R2 {xZ2 , ´zy. Its fundamental group G is identified
with xZ2 , ´zy – Z2 ¸ t˘1u. The orbifold points are the images on the orbisphere
of t0, 1{2u ˆ t0, 1{2u. We start with some basic properties of G:
Proposition 4.5.
(A) Every injective endomorphism of G is of the form
(50) M v : pn, 1q ÞÑ pM n, 1q and pn, ´1q ÞÑ pM n ` v, ´1q
for some v P Z2 and some non-degenerate matrix M with integer entries. We have N w ˝ M v “ pN M qw`N v .
LAURENT BARTHOLDI AND DZMITRY DUDKO
There are precisely 4 order-2 conjugacy classes in G, each of the form
pa, ´1qG “ tpa ` 2w, ´1q | w P Z2 u for some a P t0, 1u ˆ t0, 1u Ă Z2 .
The set of order-2 conjugacy classes of G is preserved by M v .
(B) The automorphism group of G is tM v | det M “ ˘1u and is naturally
identified with Z2 ¸ GL2 pZq. The inner automorphisms of G are identified
with 2Z2 ¸ t˘1u, and the outer automorphism group of G is identified
with pZ{2Zq2 ¸ PGL2 pZq. The index-2 subgroup of positively oriented outer
automorphisms is Out` pGq “ pZ{2Zq2 ¸ PSL2 pZq.
(C) The modular group ModpGq is free of rank 2, and we have
ModpGq “ tM v | detpM q “ 1, M ” 1 pmod 2q, v P 2Z2 u{p˘1q2Z2 – tM 0 | M ” 1 pmod 2qu{t˘1u.
(D) Two bisets BM v and BN w are isomorphic if and only if M “ ˘N and
pM v q˚ “ pN w q˚ as maps on order-2 conjugacy classes, if and only if M “
˘N and v ” w pmod 2Z2 q.
We remark that (50) also follows from Lemma 3.4.
Proof. (A). It is easy to check that M v defines an injective endomorphism. Conversely, let M 1 : G Ñ G be an injective endomorphism. Then M 1 p0, ´1q “ pv, ´1q
for some v P Z2 because all pw, ´1q have order 2. On the other hand, M 1 åZ2 ˆt1u “
M åZ2 for a non-degenerate matrix M with integer entries because M 1 is injective.
It easily follows that M 1 “ M v as given by (50).
The claims on composition and order-2 conjugacy classes of G follow from direct
computation.
(B). Follows directly from (A) and the identification of G with Z2 ¸ t˘1u.
(C). We use (B); the modular group of G is the subgroup of Out` pGq “ pZ{2Zq2 ¸ PSL2 pZq that fixes the order-2 conjugacy classes. The action of pZ{2Zq2 on these
classes is simply transitive, so the set of order-2 classes may be put in bijection with
pZ{2q2 under the correspondence pa ` 2Z2 , ´1q Ø a; then the action of PSL2 pZq
on order-2 conjugacy classes is identified with the natural linear action (noting
that ´1 acts trivially mod 2). It follows that ModpGq is the congruence subgroup
tM P PSL2 pZq | M ” 1 pmod ˘ 2qu, and it is classical that it is a free group of
rank 2.
(D). The bisets BM v and BN w are isomorphic if and only if the maps M v , N w are
conjugate by an inner automorphism; so the claimed description follows from (B).
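The composition rule N w ˝ M v “ pN M qw`N v of Proposition 4.5(A) can be checked mechanically. Below is a small sketch of ours in which G “ Z2 ¸ t˘1u and the endomorphisms M v are modelled directly; all names and the sample matrices are our choices.

```python
import numpy as np

def endo(M, v):
    """M^v : (n, 1) -> (Mn, 1) and (n, -1) -> (Mn + v, -1) on G = Z^2 x {+-1}."""
    M, v = np.asarray(M), np.asarray(v)
    def phi(g):
        n, eps = g
        image = M @ np.asarray(n) + (v if eps == -1 else 0)
        return (tuple(int(x) for x in image), eps)
    return phi

M, v = np.array([[2, 1], [1, 1]]), np.array([1, 0])
N, w = np.array([[1, 1], [0, 1]]), np.array([0, 3])

composite = lambda g: endo(N, w)(endo(M, v)(g))   # N^w after M^v
direct = endo(N @ M, w + N @ v)                   # (NM)^{w + Nv}

for g in [((1, 2), 1), ((3, -1), -1), ((0, 0), -1)]:
    assert composite(g) == direct(g)
print("composition rule verified on samples")
```

The only non-trivial case is ǫ “ ´1, where the translation parts combine as w ` N v, exactly as in the proposition.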
We turn to p2, 2, 2, 2q-maps, and their description in terms of the above; we use
throughout G “ Z2 ¸ t˘1u:
Proposition 4.6. Let f be a p2, 2, 2, 2q-map with biset G BG “ Bpf, Pf , ordf q. Then
(A) The biset B is right principal, and for any choice of b0 P B there exists an
injective endomorphism M v of G satisfying gb0 “ b0 M v pgq for all g P G.
(B) The map f is Thurston equivalent to a quotient of z ÞÑ M z ` v{2 : R2 ý
under the action of G.
(C) The map f is Levy obstructed if and only if M has an eigenvalue in t˘1u.
(D) If f is not Levy obstructed then for every b0 P B, writing M v as in (A) and (50), the map f is Thurston equivalent to the quotient of z ÞÑ M z : R2 ý by the action of xZ2 , z ÞÑ ´z ` ry í R2 for a vector r P R2 satisfying M r “ r ` v. The G-equivariant map of Proposition 4.17 takes the form
(51) Φ : G BG b xZ2 ,´z`ry R2 Ñ xZ2 ,´z`ry R2 , b0 b z ÞÑ M ´1 z.
Proof. (A). Since f is a self-covering of orbifolds, G BG is right principal. The claim
then follows from Proposition 4.5(A).
(B). We claim that the quotient map, denoted by f¯, has a biset isomorphic to
G BG . Indeed, for g P G it suffices to verify that
pM z ` v{2q ˝ gpzq “ M v pgq ˝ pM z ` v{2q.
If g “ pt, 1q then both parts are M z ` M t ` v{2, and if g “ p0, ´1q then both parts are ´pM z ` v{2q. Therefore, f and f¯ have isomorphic orbisphere bisets because
marked conjugacy classes are preserved automatically by Proposition 4.5 (A). By
Corollary 3.7 the maps f and f¯ are isotopic.
(C). Suppose that M has an eigenvalue in t1, ´1u; let w be the eigenvector of M
associated with this eigenvalue. Consider the foliation Fw of R2 parallel to w. Then
Fw is invariant under z ÞÑ M z ` v{2 as well as under the action of xZ2 , z ÞÑ ´zy.
Therefore, the quotient map has invariant foliation F w “ Fw {G. There are two
leaves in F w connecting points in pairs in the post-critical set of the quotient map;
any other leaf is a Levy cycle.
(D). Since detpM ´ 1q ‰ 0, there is a unique r such that M r “ r ` v. It is easy
to see (as in (C)) that the quotient map in (D) has a biset isomorphic to G BG . By
Corollary 3.7 the quotient map is conjugate to f , and (51) is immediate.
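Since detpM ´ 1q ‰ 0 for a geometric non-invertible M , the vector r of part (D) is obtained by solving the linear system pM ´ 1qr “ v. A quick numerical sketch; the matrix and v are sample data of our choosing, not from the paper.

```python
import numpy as np

M = np.array([[3.0, 1.0], [1.0, 1.0]])  # eigenvalues 2 +- sqrt(2), none in {+-1}
v = np.array([1.0, 0.0])

# M r = r + v  is equivalent to  (M - I) r = v
r = np.linalg.solve(M - np.eye(2), v)
assert np.allclose(M @ r, r + v)
print(r)
```

Here `np.linalg.solve` succeeds precisely because no eigenvalue of M equals 1, which is part of what "geometric" guarantees.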
Corollary 4.7. Let f be a p2, 2, 2, 2q-map. Then its biset Bpf : pS 2 , Pf , ordf q ýq
is of the form BM v for an endomorphism M v of G :“ Z2 ¸ t˘1u, namely it is G
qua right G-set, with left action given by g ¨ h “ M v pgqh.
Let us also recall how the biset of a p2, 2, 2, 2q-map is converted to the form
BM v . First, the fundamental group G is identified with Z2 ¸ t˘1u. The group
G has a unique subgroup H of index 2 that is isomorphic to Z2 , so H is easy to
find. The restriction of the biset to H yields a 2 ˆ 2-integer matrix M ; and the
translation part v is found by tracking the peripheral conjugacy classes. Note that
this procedure applies as well to non-invertible maps as to homeomorphisms; and
that the map is orientation-preserving precisely if detpM q ą 0.
4.2. Homotopy pseudo-orbits and shadowing. Let f : S 2 ý be a self-map,
and let I be an index set together with an index map also written f : I ý. An
I-poly-orbit pxi qiPI is a collection of points in S 2 such that f pxi q “ xf piq . If all
points xi are distinct, then pxi qiPI is an I-orbit.
Thus a poly-orbit differs from an orbit only in that it allows repetitions. Every
poly-orbit has a unique associated orbit, whose index set is obtained from I by
identifying i and j whenever xi “ xj . Note that we allow I “ N, I “ Z and
I “ Z{nZ as index sets, with f piq “ i ` 1, as well as I “ t0, . . . , m, . . . , m ` nu with
f piq “ i ` 1 except f pm ` nq “ m, an orbit with period n and preperiod m.
We shall consider a homotopical weakening of the notion of orbits. Our treatment
differs from [13] in a subtle manner (see below); recall that β «A,ord γ means that
the curves β, γ are homotopic in the orbispace pS 2 , A, ordq:
Definition 4.8 (Homotopy pseudo-orbits). Let f : pS 2 , A, ordq ý be an orbisphere
self-map, and let I be a finite index set together with an index map also written
f : I ý.
An I-homotopy pseudo-orbit is a collection of paths
pβi qiPI with βi : r0, 1s Ñ S 2 zA satisfying βf piq p0q “ f pβi p1qq.
Two homotopy pseudo-orbits pβi qiPI and pβi1 qiPI are homotopic, written pβi qiPI «A,ord
pβi1 qiPI , if βi «A,ord βi1 for all i P I. In particular, βi p0q “ βi1 p0q and βi p1q “ βi1 p1q.
Two homotopy pseudo-orbits pβi qiPI and pγi qiPI are conjugate, written pβi qiPI „
pγi qiPI , if there is a collection of paths pℓi qiPI with
ℓi p0q “ βi p0q, ℓi p1q “ γi p0q and βi #ℓf piq Ò^f_{βi p1q} «A,ord ℓi #γi .
Poly-orbits are special cases of homotopy pseudo-orbits, in which the paths βi are
all constant.
△
Remark 4.9. A homotopy pseudo-orbit can also be defined for an infinite I with
the assumption that pℓi qiPI and pβi qiPI in Definition 4.8 have uniformly bounded
length. Then Theorem 4.22 still holds.
In a homotopy pseudo-orbit, the curves pβi qiPI encode homotopically the difference between xi :“ βi p0q and a choice of preimage of xf piq . Note that Ishii-Smillie
define in [13, Definition 6.1] homotopy pseudo-orbits by paths βi connecting f pxi q
to xf piq ; their βi may be uniquely lifted to paths βi from xi to an f -preimage of
xf piq as in Definition 4.8, but our definition does not reduce to theirs, since our βi
are defined rel pA, ordq, not rel pf ´1 pAq, f ˚ pordqq.
[Diagram: the paths βi and γi near the orbit points xi and yi , together with the connecting paths ℓi , ℓf piq , ℓf 2 piq , illustrating conjugacy of homotopy pseudo-orbits.]
Choose a length metric on S 2 , and define the distance between homotopy pseudo-orbits by
(52) dppβi qiPI , pγi qiPI q :“ max_{iPI, tPr0,1s} dpγi ptq, βi ptqq.
The following result states that the set of conjugacy classes of homotopy pseudo-orbits is discrete:
Lemma 4.10 (Discreteness). Let pβi qiPI be a homotopy pseudo-orbit of f : pS 2 , A, ordq ý. Then there is an ε ą 0 such that every homotopy pseudo-orbit at distance less than ε from pβi qiPI is conjugate to it.
Proof. Set ε “ mina‰bPA dpa, bq. Consider a homotopy pseudo-orbit pγi qiPI at distance δ ă ε from pβi qiPI . Connect βi p0q to γi p0q by a path ℓi of length at most δ. Since S 2 zA is a locally contractible space, the curve ℓi is unique up to homotopy and βi #ℓf piq Ò^f_{βi p1q} «A,ord ℓi #γi .
We recall that A8 denotes the union of all periodic cycles containing critical
points.
Definition 4.11 (Shadowing). A homotopy pseudo-orbit pβi qiPI shadows rel pA, ordq a poly-orbit ppi qiPI in S 2 zA8 if they are conjugate; namely if there are curves ℓi connecting βi p0q to pi that lie in S 2 zA except possibly their endpoints and such that for every i P I we have
ℓi «pA,ordq βi #ℓf piq Ò^f_{βi p1q} .
△
Lemma 4.12. The homotopy pseudo-orbit pβi qiPI shadows the poly-orbit ppi qiPI if
and only if every neighbourhood pUi qiPI of ppi qiPI contains a homotopy pseudo-orbit
pβi1 qiPI conjugate to pβi qiPI ; namely, βi1 Ă Ui Q pi for all i P I.
Proof. If pβi q can be conjugated into a small enough neighbourhood of ppi qiPI , then
it is conjugate to ppi qiPI by Lemma 4.10. The converse is obvious.
Proposition 4.13. Suppose f : pS 2 , A, ordq ý is a geometric non-invertible map.
Then a periodic pseudo-orbit pβi qiPI shadows ppi qiPI rel pA, ordq if and only if pβi qiPI
shadows ppi qiPI rel pPf , ordf q.
Proof. Clearly if pβi qiPI shadows an orbit ppi qiPI rel pA, ordq then pβi qiPI shadows ppi qiPI rel pPf , ordf q. Conversely, suppose pβi qiPI shadows an orbit ppi qiPI rel
pPf , ordf q, so there are paths ℓi connecting βi p0q to pi with ℓi «Pf ,ordf βi #ℓf piq Ò^f_{βi p1q} . Thus pℓi q´1 #βi #ℓf piq Ò^f_{βi p1q} is a loop which is trivial rel pPf , ordf q, but may not be trivial rel A.
Consider the following pullback iteration. Set pβ^0_i qiPI :“ pβi qiPI and pℓ^0_i qiPI :“ pℓi qiPI . Define
pβ^n_i qiPI :“ pβ^{n´1}_{f piq} Ò^f_{β^{n´1}_i p1q} qiPI and pℓ^n_i qiPI :“ pℓ^{n´1}_{f piq} Ò^f_{βi p1q} qiPI .
Clearly the pβ^n_i qiPI are all conjugate. Observe that pℓ^n_i q´1 #βi #ℓ^n_{f piq} Ò^f_{βi p1q} is a loop passing through pi , and is a degree-1 preimage of pℓ^{n´1}_i q´1 #βi #ℓ^{n´1}_{f piq} Ò^f_{βi p1q} .
If f is expanding, then the diameter of pℓ^n_i q´1 #βi #ℓ^n_{f piq} Ò^f_{βi p1q} tends to 0 exponentially fast, hence it is trivial rel pA, ordq for all sufficiently large n, and the claim follows.
If f is exceptional, then pℓ^n_i q´1 #βi #ℓ^n_{f piq} Ò^f_{βi p1q} is trivial rel pA, ordq for all sufficiently large n by Corollary 4.2.
At one extreme, homotopy pseudo-orbits include poly-orbits, represented as constant paths. At the other extreme, homotopy pseudo-orbits may be assumed to
consist of paths all starting at the basepoint ˚. As such, these paths represent
elements of the biset Bpf q of f , see 1.
4.3. Symbolic orbits. We shall be interested in marking periodic orbits of regular
points. These are conveniently encoded in the following simplification of portraits
of bisets (in which the subbisets are singletons and therefore represented simply as
elements):
Definition 4.14 (Symbolic orbits). Let I be a finite index set with self-map
f : I ý, and let B be a G-biset. An I-symbolic orbit is a sequence pbi qiPI of
elements of B, and two I-symbolic orbits pbi qiPI and pci qiPI are conjugate if there
exists a sequence pgi qiPI in G with gi ci “ bi gf piq for all i P I.
△
Lemma 4.15. Every homotopy pseudo-orbit can be conjugated to a symbolic orbit,
unique up to conjugacy, in Bpf, A, ordq.
Proof. Given a homotopy pseudo-orbit pβi qiPI , choose paths ℓi from βi p0q to ˚ and define γi “ pℓi q´1 #βi #ℓf piq Ò^f_{βi p1q} . Then γi P Bpf, A, ordq and pβi qiPI „ pγi qiPI . Furthermore another choice of paths pℓ1i qiPI differs from pℓi qiPI by ℓ1i “ ℓi gi for some gi , so the symbolic orbits pγi qiPI and ppℓ1i q´1 #βi #ℓ1f piq Ò^f_{βi p1q} qiPI are conjugate.
Let f : pS 2 , Pf , ordf q ý be an expanding map, and let G BG be its biset. Recall
that the Julia set J pf q of f is the accumulation set of preimages of a generic point ˚,
J pf q :“ ⋂_{ně0} ⋃_{měn} f ´m p˚q.
Every bounded sequence b0 b1 ¨ ¨ ¨ P B^{b8} defines an element of J pf q as follows: set c0 “ b0 and ci “ ci´1 #bi Ò^f_{ci´1 p1q} for all i ě 1; then limnÑ8 cn p1q exists and defines a point ρpb0 b1 ¨ ¨ ¨ q P J pf q. The following proposition directly follows from the definition:
Proposition 4.16 (Expanding case). Suppose f : pS 2 , Pf , ordf q ý is an expanding
map with orbisphere biset G BG , and let pbi qiPI be a finite symbolic orbit. Let Σ be
a generating set of G BG containing all bi and let ρ : Σ`8 Ñ J pf q be the symbolic
encoding defined above. Then pbi qiPI shadows pρpbi bf piq bf 2 piq ¨ ¨ ¨ qqiPI .
This proposition is useful to solve shadowing problems (namely, determining when two symbolic orbits shadow the same poly-orbit) using the language of automata. It also follows from the proposition that every homotopy pseudo-orbit shadows a unique orbit in J pf q.
Proposition 4.17 (Shadowing and universal covers). Let f : pS 2 , A, ordq ý be an orbisphere map with biset G BG , let π : G Ũ Ñ pS 2 , A, ordq be the universal covering map of pS 2 , A, ordq, and let Φ : G BG b G Ũ ãÑ G Ũ be the G-equivariant map defined by Proposition 3.5.
Then there is a completion G Ũ` of G Ũ such that π and Φ extend to continuous maps π : G Ũ` Ñ S 2 zA8 and Φ : G BG b G Ũ` Ñ G Ũ` with the following property: a symbolic orbit pbi qiPI shadows an orbit pxi qiPI in S 2 zA8 if and only if Φpbi b x̃f piq q “ x̃i for some x̃i P Ũ` with πpx̃i q “ xi .
Furthermore, if ordpaq ă 8 for every a P AzA8 then we may take Ũ` “ Ũ.
Proof. To define Ũ` , add to Ũ all limit points of parabolic elements corresponding to small loops around punctures a P AzA8 with ordpaq “ 8. The extension of π and Φ is given by continuity; π is a branched covering with branch locus Ũ` zŨ.
If pbi qiPI shadows pxi qiPI , then there are curves x̃i : r0, 1s Ñ pS 2 , A, ordq with x̃i p0q “ ˚ and x̃i p1q “ xi such that px̃i q´1 #bi #x̃f piq Ò^f_{bi p1q} is a homotopically trivial loop. This exactly means that Φpbi b x̃f piq q “ x̃i . Conversely, if Φpbi b x̃f piq q “ x̃i for all i P I then pbi qiPI shadows pπpx̃i qqiPI .
Proposition 4.18 (Shadowing in the p2, 2, 2, 2q case). Using the notation of Proposition 4.6, suppose f is a p2, 2, 2, 2q geometric non-invertible map and
Φ : G BG b xZ2 ,´z`ry R2 Ñ xZ2 ,´z`ry R2 : pb0 , zq ÞÑ M ´1 z
is as in Proposition 4.6(D). Then for every symbolic orbit pbi qiPI of G BG there is a unique collection pri qiPI of points in R2 such that ri “ Φpbi b rf piq q. The points ri are solutions of the linear equations (53).
By Proposition 4.17, the image of pri qiPI under the projection π : R2 Ñ R2 {G « pS 2 , Pf , ordf q is the unique poly-orbit shadowed by pbi qiPI .
The proof will use the following easy fact.
Lemma 4.19. If M is geometric non-invertible, then | detpM n ` ǫIq| ě 1 for every n ě 1 and every ǫ P t˘1u.
Proof. If λ1 , λ2 are the eigenvalues of M , then Lemma 4.3 gives | detpM n ` ǫIq| “ |pλn1 ` ǫqpλn2 ` ǫq| ě 1.
Proof of Proposition 4.18. Write bi “ b0 gi . Recall that every gi acts on R2 as
z ÞÑ ǫi z ` ti with ǫi P t˘1u. We get the following system of linear equations:
M ´1 pǫi rf piq ` ti q “ ri ,
i P I.
By splitting f : I ý into grand orbits and eliminating variables we arrive at equations of the form
(53) p1 ` θM n qri “ t with θ P t˘1u
and n the period of an orbit and t P R2 some parameters depending on ǫi and ti .
By Lemma 4.19, the system (53) has a unique solution.
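The determinant bound of Lemma 4.19 and the resulting unique solvability of (53) are easy to illustrate numerically; below is a sketch with sample data of our choosing (the matrix, θ, n, and t are not from the paper).

```python
import numpy as np

M = np.array([[3, 1], [1, 1]])  # det = 2 > 1, eigenvalues 2 +- sqrt(2), none in {+-1}

# Lemma 4.19: |det(M^n + eps*I)| >= 1 for all n >= 1 and eps in {+-1},
# so each system (53) below has a unique solution.
for n in range(1, 6):
    Mn = np.linalg.matrix_power(M, n)
    for eps in (1, -1):
        assert abs(round(np.linalg.det(Mn + eps * np.eye(2)))) >= 1

# Solve (1 + theta*M^n) r = t for one period-n grand-orbit datum t.
theta, n, t = -1, 3, np.array([1.0, 2.0])
A = np.eye(2) + theta * np.linalg.matrix_power(M, n)
r = np.linalg.solve(A, t)
assert np.allclose(A @ r, t)
print("unique shadowing point r =", r)
```

Since detpM n ` ǫIq is an integer of absolute value at least 1, the coefficient matrix of (53) is always invertible, which is exactly what the `solve` call relies on.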
Consider an I-symbolic orbit pbi qiPI shadowing a poly-orbit pxi qiPI . This means
(see Definition 4.11) that there are curves pℓi qiPI connecting ˚ to xi such that
pℓi q´1 #bi #ℓf piq Ò^f_{bi p1q} is a trivial loop rel pA, ordq. The local group Gxi ď π1 pX, ˚q “ G consists of loops of the form
(54) ℓi r0, 1 ´ εs#α#pℓi r0, 1 ´ εsq´1 , α close to xi .
If xi R A, then Gxi is a trivial group; otherwise Gxi is an abelian group of
order ordpxi q. If pA, ordq “ pPf , ordf q, then Gxi is a finite abelian group. Clearly,
pgi bi hi qi also shadows pxi qi for all gi P Gxi and all hi P Gxf piq . Conversely:
Lemma 4.20. Let f : pS 2 , A, ordq ý be a geometric non-invertible map. Suppose
that the symbolic orbits pbi qiPI and pci qiPI shadow pxi qi . Let Gxi be the local group
associated with pbi qi . Then there are hi P Gxf piq such that pci qiPI is conjugate to
pbi hi qiPI .
If I consists only of periodic indices, then pci qiPI is conjugate to pgi bi qiPI for
some gi P Gxi .
Proof. By conjugating pci qiPI we can assume that pℓi q´1 #ci #ℓf piq Ò^f_{ci p1q} is a trivial loop; i.e. the local groups associated with pci qiPI coincide with those associated with pbi qiPI . It is now routine to check that pbi qiPI and pci qiPI differ by the action of local groups. Indeed, bi is of the form
ℓi r0, 1 ´ εs#βi #pℓf piq r0, 1 ´ εsq´1 Ò^f_{βi p1q} , βi close to xi ,
and, similarly, ci is of the form
ℓi r0, 1 ´ εs#γi #pℓf piq r0, 1 ´ εsq´1 Ò^f_{γi p1q} , γi close to xi .
Let αi be the image of pβi q´1 #γi and define hi to be ℓf piq r0, 1 ´ εs#αi #pℓf piq r0, 1 ´ εsq´1 ; compare with (54). Then ci “ bi hi for all i P I.
If I has only periodic indices, then pβi q´1 and γi end at the same point and we set αi :“ γi #pβi q´1 and gi :“ ℓi r0, 1 ´ εs#αi #pℓi r0, 1 ´ εsq´1 . We obtain ci “ gi bi for all i P I.
Corollary 4.21. Let f : pS 2 , Pf , ordf q ý be a geometric non-invertible map. For
every poly-orbit there are only finitely many symbolic orbits that shadow it. Therefore, there are only finitely many conjugacy classes of portraits in Bpf q.
Theorem 4.22. Let f : pS 2 , A, ordq ý be a non-invertible geometric map. Then
the shadowing operation defines a map from conjugacy classes of symbolic finite
orbits onto poly-orbits in S 2 zA8 . If pA, ordq “ pPf , ordf q, then the shadowing map
is finite-to-one.
Proof. Follows from Propositions 4.13, 4.16, 4.18 and Corollary 4.21.
4.4. From symbolic orbits to portraits of bisets. The centralizer of a symbolic
finite orbit pbi qiPI is the set of pgi qiPI P GI such that gi bi “ bi gf piq for all i P I.
Lemma 4.23. If G BG is the biset of a geometric non-invertible map f and pbi qiPI is a symbolic finite orbit shadowing a poly-orbit pxi qiPI , then its centralizer is contained in ś_{iPI} Gxi , where Gxi are the local groups associated with pbi qiPI .
Proof. By Definition 4.11 there are curves pℓi qiPI connecting ˚ to xi such that pℓi q´1 #bi #ℓf piq Ò^f_{bi p1q} is a trivial loop rel pA, ordq. Suppose that pgi qiPI P GI centralizes pbi qiPI . Set g̃i :“ pℓi q´1 #gi #ℓi . Then g̃f piq Ò^f_{xi} is isotopic to g̃i rel pA, ordq. By the expanding property of f , or Corollary 4.2 if f is exceptional, g̃i is trivial rel pA, ordq. This implies that pgi qiPI P ś_{iPI} Gxi .
Consider a portrait of bisets pGa , Ba qaPÃ in G BG . We may decompose Ã “ A \ F \ I with f˚n pF q Ď A for n ≫ 0 and f˚ pIq Ď I. Then for every i P I the group Gi is trivial and Bi “ tbi u is a singleton. We obtain the symbolic orbit pbi qiPI which is the essential part of pGa , Ba qaPÃ :
Lemma 4.24. The relative centralizer ZD ppGa , Ba qaPÃ q (see §2.8) is isomorphic
to the centralizer of pbi qiPI via the forgetful map pgd qdPD Ñ pgi qiPI .
Let pGa , Ba qaPÃ and pG1a , Ba1 qaPÃ be two portraits of bisets with associated symbolic orbits pbi qiPI and pb1i qiPI . Assume that Ga “ G1a and Ba “ Ba1 for all a P A. Then pGa , Ba qaPÃ and pG1a , Ba1 qaPÃ are conjugate if and only if
(1) pbi qiPI and pb1i qiPI are conjugate; and
(2) G bGd Bd “ G bGd Bd1 for every d P F .
This reduces the conjugacy problem of portraits of bisets to the conjugacy problem of symbolic orbits; indeed Condition (2) is easily checkable. Note that every preperiodic point in Ã imposes a finite condition on conjugacy and centralizers: for points attracted to A, Condition (2) imposes a congruence condition modulo the action on t¨u bG B on the conjugator; for preperiodic points in I, conjugacy again amounts (by Condition (1) and Definition 4.14) to a congruence condition modulo the action on t¨u bG B.
Proof. If d P D with f˚ pdq R D, then from gd Bd “ Bd gf˚ pdq “ Bd follows that
gd “ 1. Induction on the escaping time to A gives gd “ 1 for all d P F .
Similarly, if d P D with f˚ pdq R D, then there is a gd P G with gd Bd “ Bd1 “ Bd1 gf˚ pdq if and only if G bGd Bd “ G bGd Bd1 . By induction on the escaping
time, pGa , Ba qaPA\F and pG1a , Ba1 qaPA\F are conjugate if and only if Condition (2)
holds.
We are now ready to show that a geometric map, equipped with a portrait of
bisets, yields a sphere map — possibly with some points infinitesimally close to each
other. A blowup of a two-dimensional sphere is a topological sphere S̃2 equipped with a monotone map S̃2 Ñ S 2 , namely a continuous map under which preimages
of connected sets are connected. We remark that arbitrary countable subsets of S 2
may be blown up into disks.
Proposition 4.25. Let f : pS 2 , Aq ý be a non-invertible geometric map, let Ã be a set containing a copy of A, let f˚ : Ã ý be a symbolic map which coincides with f on the subset A Ď Ã, and let pBa qaPÃ be a portrait of bisets in Bpf q.
Then there exists a unique map e : Ã Ñ S 2 extending the identity on A, and a blowup b : S̃2 Ñ S 2 , with the following properties. The locus of non-injectivity of e is disjoint from A8 , and b blows up precisely at the grand orbits of
tx | Da ‰ a1 P Ã : x “ epaq “ epa1 qu
replacing points by disks on which the metric is identically 0 and all points in Ã Ă S̃2 are disjoint. The maps f˚ : Ã ý and f : pS 2 , Aq ý extend to a map f̃ : pS̃2 , Ãq ý, which is semiconjugate to f via b, and whose minimal portrait of bisets projects to pBa qaPÃ .
Proof. Let us set G :“ π1 pS 2 zA, ˚q. We may decompose Ã “ A \ I \ J with f˚ pIq “ I and f˚n pJq Ď A \ I for n ≫ 0. On A, we naturally define e as the identity.
If i P I, then i is f˚ -periodic and the bisets Bi , Bf˚ piq , . . . determine a homotopy
pseudo-orbit which shadows a unique periodic poly-orbit in S 2 , by Theorem 4.22.
Thus e is uniquely defined.
We now blow up the grand orbits of all points in S 2 which are the image of more
than one point in A \ I under e, replacing them by a disk on which the metric
is identically 0. We inject A \ I arbitrarily in the blown-up sphere S̃2 , and now
identify A \ I with its image in the blowup.
Let us next extend f to a self-map f̃ of S̃2 so that pBa qaPA\I is the induced portrait of bisets. We first do it by arbitrarily mapping the disks to each other by homeomorphisms restricting to f˚ on A \ I. Let pBa1 qaPA\I be the projection of the minimal portrait of bisets of f̃ via π1 pS̃2 , A \ Iq Ñ G. Consider i P I. By Lemma 4.20 (the periodic case), we can assume that bi “ hi b1i with hi P Gepiq . (Note that if epiq R A, then hi “ 1 and bi “ b1i .) Let m1 P BraidpS̃2 zA, Iq be a preimage of phi qiPI under EI (see (19)) and set m :“ pushpm1 q; it is defined up to pre-composing with a knitting element. Since hi P Gepiq , we can assume that m is the identity away from the blown-up disks. Then pBa qaPA\I is a portrait of bisets induced by mf̃.
Consider next j P J and assume that epf˚ pjqq has already been defined. The biset Bj , and more precisely already its image in t¨u bG Bpf q, see Definition 2.15(B), determines the correct f̃-preimage of f˚ pjq P S̃2 that corresponds to j, and thus determines epjq uniquely.
4.5. Promotions of geometric maps. Let X :“ pS 2 , A, ordq be an orbisphere,
and consider a subset D Ă S 2 zA. Recall from (11) the quotient ModpX|Dq of
mapping classes of pS 2 , A \ Dq by knitting elements; there, we called two maps
f, g : pX, Dq ý knitting-equivalent, written f «A|D g, if they differ by a knitting
element in knBraidpX, Dq. We shall show that knitting-equivalent rigid maps are
conjugate rel A \ D:
Theorem 4.26 (Promotion). Suppose f, g : pX, Dq ý are orbisphere maps, and
assume that either
‚ g is geometric non-invertible, or
‚ g m pDq Ď A for some m ě 0.
Then every conjugacy h P ModpX|Dq between f and g rel A|D lifts to a unique
conjugacy h̃ P ModpS 2 , A \ Dq between f and g rel A \ D such that h̃ «A|D h.
Proof. We begin with a lemma:
Lemma 4.27. Under the assumptions on g, consider b P BraidpS 2 zA, Dq. If for every m ě 1 the element b is liftable through g m , then pg ˚ qm b is trivial for all m ą log |b|, where | ¨ | is the word metric.
Proof. Consider first the case that g is expanding. Then the lengths of curves
pg ˚ qm pbqp´, aq tend to 0 as m Ñ 8, so pg ˚ qm pbq “ 1 for sufficiently large m. If
g is exceptional, then Corollary 4.2 replaces the expanding argument. If finally
g m pDq Ď A, then pg ˚ qm pbq “ 1 for the same m.
We resume the proof. Let h̃0 P ModpS 2 , A \ Dq be any preimage of h under the forgetful map ModpS 2 , A \ Dq Ñ ModpX|Dq. Then h̃0 f ph̃0 q´1 «A\D ph1 q´1 g for some h1 P knBraidpX, Dq. Setting h̃1 :“ h1 h̃0 we get h̃1 f ph̃1 q´1 «A\D ph2 q´1 g, where ph2 q´1 is the lift of ph1 q´1 through g. Continuing this process we eventually get h̃m f ph̃m q´1 «A\D g because the corresponding lift of h1 is trivial by Lemma 4.27. We have shown the existence of h̃.
If h̃′ is another promotion of h, then h̃′ and h̃ differ by a knitting element commuting with f . By Lemma 4.27, that knitting element is trivial, hence h̃′ “ h̃.
Corollary 4.28. Let ModpS 2 ,A\Dq M pf qModpS 2 ,A\Dq Ñ ModpX|Dq M pf |DqModpX|Dq
be the inert map forgetting the action of knitting elements (see Proposition 2.12).
Suppose that either f is geometric non-invertible or f m pDq Ď A for some m ě 0.
Then f, g are conjugate in M pf q if and only if their images are conjugate in
M pf |Dq. Moreover, the centralizers Zpf q and Zpf |Dq (see §2.8) are naturally
isomorphic via the projection ModpS 2 , A \ Dq Ñ ModpX|Dq.
4.6. Automorphisms of bisets. Recall that the automorphism group of a biset
H BG is the set AutpBq of maps τ : B ý satisfying τ phbgq “ hτ pbqg for all h P
H, g P G.
Proposition 4.29. If H BG is a left-free, right-transitive biset and H is centreless,
then AutpBq acts freely on B, so B{ AutpBq is also a left-free, right-transitive H-G-biset.
Proof. AutpBq acts by permutations on t¨u bH B, and commutes with the right
G-action, which is transitive, so AutpBq{ kerpactionq acts freely. Consider now
τ P AutpBq that acts trivially on t¨u bH B, and consider b P B. We have τ pbq “ tb
for some t P H, and for all h P H there exists g P G with hb “ bg, because B is
right-transitive. We have
htb “ hτ pbq “ τ phbq “ τ pbgq “ τ pbqg “ tbg “ thb,
so t P ZpHq “ 1 and therefore τ “ 1. It follows that AutpBq acts freely on B.
Corollary 4.30 (No biset automorphisms). For an orbisphere biset H BG with H non-cyclic, we have AutpBq “ 1.
In the dynamical situation, G BG is a cyclic biset if and only if G is a cyclic group.
Proof. Since B is an orbisphere biset, it is left-free and right-transitive; and since it
is not cyclic, H is a non-abelian orbisphere group and in particular is centreless, so
Proposition 4.29 applies. Now B cannot have a proper quotient, because conjugacy
classes of elements of H appear exactly once in wreath recursions of peripheral
elements, by [5, Definition 2.6(SB3 ) of orbisphere bisets].
Corollary 4.31. Suppose f, g : pS 2 , C, ordq Ñ pS 2 , A, ordq are isotopic orbisphere
maps. Let Bpf q and Bpgq be the bisets of f and g with respect to the same base
points : P S 2 zC and ˚ P S 2 zA. Then there is a unique isomorphism between Bpf q
and Bpgq.
Proof. Consider an isotopy pft qtPr0,1s rel A from f to g. It induces a continuous motion Bpft , A, ord, ˚q of the bisets of ft . Therefore, all Bpft , A, ord, ˚q are isomorphic.
This shows existence; the uniqueness follows from Corollary 4.30.
Theorem 4.32 (Rigidity). Suppose f : pS 2 , A, ordq ý is a geometric map. If
h : pS 2 , Aq ý commutes with f , is the identity in some neighbourhood of A8 , and
is isotopic to 1 rel A, then h “ 1.
Proof. Fix a basepoint ˚ P S 2 zA and the fundamental group G “ π1 pS 2 zA, ˚q.
Since h is isotopic to the identity rel A, there is a path ℓ : r0, 1s Ñ S 2 zA from ˚ to
hp˚q such that γ « ℓ#ph ˝ γq#ℓ´1 for all γ P G. Define then
h˚ : G Bpf qG ý, h˚ pβq :“ ℓ#ph ˝ βq#pℓ´1 qÒ^f_{hpβp1qq} .
Since h commutes with f , this defines an automorphism of Bpf q. By Corollary 4.30,
it is the identity on Bpf q. It also fixes, therefore, all conjugacy classes in Bpf qbn
for all n P N. By Theorem 4.22, these are in bijection with periodic orbits of f . It
follows that h is the identity on all periodic points of f , and therefore on f ’s Julia
set. Since furthermore h is the identity near A8 , and every point of S 2 zA either
belongs to J pf q or gets attracted to A8 , we get h “ 1 everywhere.
4.7. Weakly geometric maps. We are now ready to show that maps whose restriction to their minimal orbisphere is geometric are of a particularly simple form.
Definition 4.33 (Tunings). Let f : pS 2 , Aq ý be a Thurston map. A tuning multicurve for f is an f -invariant1 multicurve C such that f ´1 pC q does not
contain nested components rel f ´1 pAq; namely, the adjacency graph of components
of S 2 zf ´1 pC q is a star.
1In the sense that f ´1 pC q equals C rel A.
A tuning is an amalgam of orbisphere bisets in which the graph of bisets is a star.
Equivalently, it is an amalgam of Thurston maps along a tuning multicurve.
△
Every tuning has a central map, corresponding to the centre of the star, as well
as satellite maps, corresponding to its leaves. Furthermore, unless the star has two
vertices, its central map is uniquely determined.
Definition 4.34 (Weakly geometric maps). A tuning by homeomorphisms is a
tuning in which all satellite maps are homeomorphisms.
A map is weakly geometric if it is a (possibly trivial) tuning by homeomorphisms
in such a manner that the central map is isotopic to a geometric map.
△
Theorem 4.35. Let f : pS 2 , Aq ý be a non-invertible Thurston map. Then f is
weakly geometric if and only if its restriction f : pS 2 , Pf , ordf q ý is isotopic to a
geometric map.
Proof. If f is weakly geometric, then its central map f is isotopic to a geometric
map, and f is isotopic to f rel Pf .
Conversely, assume that the restriction f : pS 2 , Pf , ordf q ý of f is geometric,
and consider its portrait of bisets pBa qaPA induced from the minimal portrait of
bisets of f . By Proposition 4.25, there exists a blown-up sphere S̃2 and an extension f̃ of f , such that the portraits of bisets of f and f̃ are conjugate portraits of bisets in Bpf q. The tuning multicurve we seek is the boundary of the blowup disks.
By Theorem 2.19, the bisets Bpf q and Bpf̃q differ by a knitting element, so we may write f̃ “ mf for some m P knBraidpS 2 zA, ÃzAq. By Proposition 2.8, the class m is liftable arbitrarily often through f , and by Lemma 4.27 its lift becomes eventually trivial; in other words, we have a relation n´1 f̃n “ mf for
some m with trivial image in BraidpS 2 zAq, and therefore n is a product of mapping
classes in the infinitesimal disks. Its restriction to these disks yields the required
homeomorphisms of the tuning.
We now turn to the algebraic aspects of geometric maps.
Definition 4.36. An orbisphere biset is geometric if it is the biset of a geometric map. A biset $B$ is weakly geometric if its projection to its minimal orbisphere group is geometric.
△
Corollary 4.37. A Thurston map f is [weakly] geometric if and only if its biset
is [weakly] geometric.
The following result was already obtained by Selinger and Yampolsky in the case
of torus maps [23]:
Corollary 4.38. A non-invertible Thurston map is isotopic to a geometric map if
and only if it is Levy-free.
Proof. Let $f$ be a non-invertible Thurston map. If $f$ admits a Levy cycle, then either $f$ shrinks no metric under which a curve in the Levy cycle is geodesic, or $f$ is doubly covered by a torus endomorphism with an eigenvalue $\pm1$; in all cases, $f$ is not geometric.
Conversely, if $f$ is Levy-free, then its restriction to $(S^2,P_f,\operatorname{ord}_f)$ is still Levy-free. By [4, Theorem A] (for $\{\mathrm{Exp}\}$ maps) or Lemma 4.4 (for $\{\mathrm{GTor}/2\}$ maps) that restriction is geometric. By Proposition 4.25, there is a map $e\colon A\to S^2$ whose image is preserved by $f$. If $e$ were not injective, there would be a Levy cycle for $f$ consisting of curves surrounding elements of $A$ with the same image under $e$. Now since $e$ is injective, the tuning constructed in Theorem 4.35 is trivial, so $f$ is geometric. $\square$
4.8. Conjugacy and centralizer problem. We are now ready to show how conjugacy and centralizer problems may be solved in geometric bisets. We first review the algebraic interpretation of expanding maps: by [4, Theorem A], a map is expanding if and only if its minimal biset is contracting. We recall briefly that a left-free $G$-$G$-biset $B$ is contracting if, for every basis $X\subseteq B$, there exists a finite subset $N\subseteq G$ with the following property: for every $g\in G$ there is $n\in\mathbb N$ such that $X^ng\subseteq NX^n\subseteq B^{\otimes n}$.
The minimal such subset $N$ is called the nucleus associated with $(B,X)$. Note that different bases yield different nuclei, but that finiteness of $N$ in one basis implies its finiteness for all other bases, and even for all finite sets $X$ generating $B$ qua left $G$-set.

The biset $B$ may be given by its structure map $X\times G\to G\times X$, written $(x,g)\mapsto(g@x,x^g)$; it is defined by the equality $xg=(g@x)x^g$ in $B$. Then $B$ is contracting with nucleus $N$ if, for every $g\in G$, every sufficiently long iteration of the maps $g\mapsto g@x$ eventually gives elements in $N$.
We may assume, without loss of generality, that $XN\subseteq NX$ holds. For finitely generated groups, there is a perhaps more intuitive formulation of contraction: there exists a proper metric on $G$ and constants $\lambda<1$, $C$ such that $\|g@x\|\le\lambda\|g\|+C$ for all $g\in G$, $x\in X$.
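As a concrete illustration (ours, not the paper's): the binary adding-machine biset of $z\mapsto z^2$ has $G=\mathbb Z$, basis $X=\{0,1\}$, structure map $g@x=\lfloor(x+g)/2\rfloor$, $x^g=(x+g)\bmod 2$, and nucleus $N=\{-1,0,1\}$; iterating $g\mapsto g@x$ roughly halves $|g|$, which is exactly the contraction $\|g@x\|\le\lambda\|g\|+C$ with $\lambda=1/2$. A minimal sketch:

```python
def state(g, x):
    """Structure map of the binary adding-machine biset: x*g = (g@x) * x^g."""
    return (x + g) // 2, (x + g) % 2  # (g@x, x^g)

def contract(g, max_steps=200):
    """Iterate g -> g@x until g lies in the nucleus N = {-1, 0, 1}."""
    for step in range(max_steps):
        if g in (-1, 0, 1):
            return g
        g, _ = state(g, step % 2)  # the digit choice does not matter
    raise RuntimeError("did not contract")
```

Any starting element, however large, lands in $N$ after about $\log_2|g|$ steps.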
Lemma 4.39. Let ${}_GB_G$ be a contracting biset; choose a basis $X$ of $B$, and let $N\subseteq G$ be the associated nucleus.

Then every finite symbolic orbit $(b_i)_{i\in I}$ is conjugate to one in which $b_i\in NX$ for all $i\in I$.

Proof. Write every $b_i$ in the form $g_ix_i$ with $g_i\in G$ and $x_i\in X$. Conjugate the symbolic orbit by $(g_i)_{i\in I}$; then each $b_i$ becomes $g_i^{-1}b_ig_{f(i)}=x_ig_{f(i)}=g_i'x_i'\in B$ for some $g_i'\in G$, $x_i'\in X$. Note that each $g_i'$ is a state of some $g_i$. Conjugate again by $(g_i')_{i\in I}$, etc.; after a finite number of steps, each $b_i$ will belong to $NX$. $\square$
Lemma 4.40. Let $f\colon(S^2,A,\operatorname{ord})\to(S^2,A,\operatorname{ord})$ be a geometric map. Then for every $n\in\mathbb N$ the number of $n$-periodic points of $f$ is finite.

Proof. If $f$ is doubly covered by a torus endomorphism $z\mapsto Mz+q$, then its period-$n$ points are the solutions to $z\in(1-M)^{-1}q+(1-M^n)^{-1}\mathbb Z^2$; the image of this set under $\mathbb R^2\to\mathbb R^2/\langle\mathbb Z^2,-z\rangle$ is finite, of size at most $\det(1-M^n)$.

If $f$ is expanding, then consider its biset $B$, which is orbisphere contracting, see [4, Theorem A]. By Lemma 4.39, there are finitely many conjugacy classes of period-$n$ finite symbolic orbits in $B$, and by Theorem 4.22, every symbolic orbit shadows a unique finite poly-orbit. $\square$
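For the torus-covered case the bound $\det(1-M^n)$ is easy to check numerically; a small sketch (our own, counting solutions of $M^nz=z$ on the torus $\mathbb R^2/\mathbb Z^2$, before passing to the quotient by $z\mapsto-z$), with Arnold's cat matrix as example:

```python
def periodic_points_on_torus(M, n):
    """|det(1 - M^n)| = number of solutions of M^n z = z on R^2 / Z^2."""
    P = [[1, 0], [0, 1]]
    for _ in range(n):  # integer 2x2 matrix power P = M^n
        P = [[P[0][0]*M[0][0] + P[0][1]*M[1][0], P[0][0]*M[0][1] + P[0][1]*M[1][1]],
             [P[1][0]*M[0][0] + P[1][1]*M[1][0], P[1][0]*M[0][1] + P[1][1]*M[1][1]]]
    a, b = 1 - P[0][0], -P[0][1]
    c, d = -P[1][0], 1 - P[1][1]
    return abs(a*d - b*c)

print(periodic_points_on_torus([[2, 1], [1, 1]], 2))  # cat map, n = 2: 5 points
```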
Theorem 4.41. Let $\widetilde G\to G$ be a forgetful morphism of groups and let
$${}_{\widetilde G}\widetilde B_{\widetilde G}\to{}_GB_G\qquad\text{and}\qquad{}_{\widetilde G}\widetilde B'_{\widetilde G}\to{}_GB'_G$$
be two forgetful biset morphisms as in (8). Suppose furthermore that $\widetilde B$ is geometric of degree $>1$. Denote by $(G_a,B_a)_{a\in\widetilde A}$ and $(G'_a,B'_a)_{a\in\widetilde A}$ the portraits of bisets induced by $\widetilde B$ and $\widetilde B'$ in $B$ and $B'$ respectively.
Then $\widetilde B,\widetilde B'$ are conjugate under $\operatorname{Mod}(\widetilde G)$ if and only if there exists $\varphi\in\operatorname{Mod}(G)$ such that $B^\varphi\cong B'$ and the portraits $(G_a^\varphi,B_a^\varphi)_{a\in\widetilde A}$ and $(G'_a,B'_a)_{a\in\widetilde A}$ are conjugate.

Furthermore, the centralizer of the portrait $(G_a,B_a)_{a\in\widetilde A}$ is trivial, and the centralizer $Z(\widetilde B)$ of $\widetilde B$ is isomorphic, via the forgetful map $\operatorname{Mod}(\widetilde G)\to\operatorname{Mod}(G)$, to
$$\bigl\{\varphi\in Z(B)\bigm|(G_a^\varphi,B_a^\varphi)_{a\in\widetilde A}\sim(G_a,B_a)_{a\in\widetilde A}\bigr\}$$
and is a finite-index subgroup of $Z(B)$.
Proof. If $\widetilde B,\widetilde B'$ are conjugate, then certainly their images and subbisets are conjugate. Conversely, let $\varphi\in\operatorname{Mod}(G)$ be such that $B^\varphi\cong B'$ and the portraits $(G_a^\varphi,B_a^\varphi)_{a\in\widetilde A}$ and $(G'_a,B'_a)_{a\in\widetilde A}$ are conjugate, and let $\widetilde f,\widetilde f'$ be maps realizing $\widetilde B,\widetilde B'$ respectively, with $f$ geometric. By Theorem 2.19, we have $\widetilde f'=m\widetilde f$ for a knitting mapping class $m$. Since $m$ is liftable by Proposition 2.8, we have $m\widetilde f=\widetilde fm'$ for some knitting mapping class $m'$ and therefore $m'\widetilde f'(m')^{-1}=m'\widetilde f$. Repeating with $m'$, we obtain $m^{(k)}\cdots m'\widetilde f'(m^{(k)}\cdots m')^{-1}=m^{(k)}\widetilde f$ for all $k\in\mathbb N$, and $m^{(k)}=1$ when $k$ is large enough by Lemma 4.27. Therefore $\widetilde f,\widetilde f'$ are conjugate, and so are $\widetilde B,\widetilde B'$.

The description of $Z(\widetilde B)$ follows. The centralizer of the portrait $(G_a,B_a)_{a\in\widetilde A}$ is trivial, because (since $\widetilde B$ is geometric) all points shadowed by the bisets $B_a$ are unmarked in $G$. The $Z(B)$-orbit of $(G_a,B_a)_{a\in\widetilde A}$ contains finitely many conjugacy classes by Lemma 4.40, since all of them are induced from a geometric biset, and therefore are encoded in periodic or pre-periodic points for a map realizing $B$. $\square$
5. Algorithmic questions

We give some algorithms that decide whether two portraits of bisets are conjugate, thereby reducing the conjugacy and centralizer problems from orbisphere maps to their minimal orbisphere (with only the post-critical set marked).

All the algorithms we describe make use of the symbolic description of maps via bisets, and are inherently quite fast and practical. Their precise performance and implementation details will be studied in the last paper [6] of this series.

There are two implementations of Algorithms 1.1 and 1.2, one for $\{\mathrm{GTor}/2\}$ maps and one for $\{\mathrm{Exp}\}$ maps. We describe them in separate subsections.
We already gave the following algorithms:

Algorithm 5.1 ([4, Algorithms 5.1 and 5.2]). Given an orbisphere biset ${}_GB_G$, Decide whether $B$ is the biset of a map doubly covered by a torus endomorphism, and if so Compute parameters $M,q$ for a torus endomorphism $z\mapsto Mz+q$ covering $B$.

Algorithm 5.2 ([4, Algorithms 5.4 and 5.5]). Given an orbisphere biset ${}_GB_G$, Decide whether $B$ is the biset of a $\{\mathrm{GTor}/2\}$ or an $\{\mathrm{Exp}\}$ map, and in particular whether $B$ is geometric.
5.1. $\{\mathrm{GTor}/2\}$ maps. In the case of $\{\mathrm{GTor}/2\}$ maps, we can decide, without access to an oracle, whether two such maps are conjugate, and we can compute their centralizers, as follows. We shall need the following fact:

Theorem 5.3 (Corollary of [10]). There is an algorithm that decides whether two matrices $M,N\in\operatorname{Mat}_2^+(\mathbb Z)$ are conjugate by an element $X\in\operatorname{SL}_2(\mathbb Z)$, and produces such an $X$ if it exists.

There is an algorithm computing, as a finitely generated subgroup of $\operatorname{SL}_2(\mathbb Z)$, the centralizer of $M\in\operatorname{Mat}_2^+(\mathbb Z)$.
Algorithm 5.4. Given ${}_GB_G$, ${}_GC_G$ two minimal $\{\mathrm{GTor}/2\}$ bisets,

Decide whether $B$ and $C$ are conjugate by an element of $\operatorname{Mod}(G)$, and if so construct a conjugator; and compute the centralizer $Z(B)$, as follows:

(1) If $B_*\neq C_*$ as maps on peripheral conjugacy classes, then return fail.

(2) Using Proposition 4.5(A), identify $G$ with $\mathbb Z^2\rtimes\{\pm1\}$, and with Algorithm 5.1 present $B$ and $C$ as $B_{M,v}$ and $B_{N,w}$ respectively, see (50).

(3) Using Theorem 5.3 check whether $M$ and $N$ are conjugate. If not, return no; otherwise find a conjugator $X$ and compute the centralizer $Z(M)$ of $M$.

(4) Check whether there is a $Y\in Z(M)$ such that $B_{(YX),0}$ conjugates $B_{M,v}$ to $B_{N,w}$, as follows. The orbit of $B_{M,v}$ under conjugation of $\{B_{Y,0}\mid Y\in Z(M),\ B_{(YX),0}\in\operatorname{Mod}(G)\}$ is finite and hence computable; so is its image under $B_{X,0}$. Check whether $B_{N,w}$ belongs to it; if not, return no, and if yes, return yes and the conjugator $B_{(YX),0}$.

(5) The centralizer of $B$ is $\{B_{Y,0}\mid Y\in Z(M),\ B_{Y,0}\in\operatorname{Mod}(G),\text{ and }B_{Y,0}\text{ centralizes }B_{M,v}\}$, which naturally embeds as a subgroup of finite index in $Z(M)$.
Here is an algorithmic version of Proposition 4.18:

Algorithm 5.5. Given a minimal $\{\mathrm{GTor}/2\}$ orbisphere biset ${}_GB_G$ with $G=\langle\mathbb Z^2,-z+r\rangle$, a base point $b_0\in B$ specifying the map
$$(55)\qquad\Phi\colon B\otimes{}_{\langle\mathbb Z^2,-z+r\rangle}\mathbb R^2\to{}_{\langle\mathbb Z^2,-z+r\rangle}\mathbb R^2,\qquad(b_0,z)\mapsto M^{-1}z$$
(see Proposition 4.18), an extension $f_*\colon\widetilde A\to\widetilde A$ of the dynamics of $B$ on its peripheral classes, and a portrait of bisets $(G_a,B_a)_{a\in\widetilde A}$,

Compute $(r_a)_{a\in\widetilde A}$ shadowed by $(G_a,B_a)_{a\in\widetilde A}$: $r_a=\Phi(B_a\otimes r_{f_*(a)})$, the local groups $G_{\bar r_a}$ where $\bar r_a$ is the image of $r_a$ in $\mathbb R^2/G$ (see (54)), and the relative centralizer $Z_D((G_a,B_a)_{a\in\widetilde A})$, which is a finite abelian group, as follows:

(1) Choose $b_a\in B_a$. Proceed through all periodic cycles $E$ of $I$. Choose $e\in E$ and solve the linear equation
$$r_e=\Phi^{|E|}\bigl(b_e\otimes b_{f_*(e)}\otimes\cdots\otimes b_{f_*^{|E|-1}(e)}\otimes r_e\bigr);$$
the equation takes the form $(1+\theta M^n)r_i=t$ with $\theta\in\{\pm1\}$ (see (53)) and has a unique solution by Lemma 4.19.

(2) Inductively compute $r_a=\Phi(b_a\otimes r_{f_*(a)})$ for all $a\in\widetilde A$.

(3) For $a\in A$ we have $G_{\bar r_a}=G_a$.

(4) For $a\in\widetilde A\setminus A$ check whether $r_a-z\in G$. If $r_a-z\notin G$, then $G_{\bar r_a}$ is the trivial subgroup; otherwise $G_{\bar r_a}=\langle r_a-z\rangle$.

(5) By a finite check compute $Z_D((G_a,B_a)_{a\in\widetilde A})$: by Lemma 4.24 it is the set of self-conjugacies of the corresponding homotopy pseudo-orbit, and by Lemma 4.20 it is a subgroup of $\prod_{a\in\widetilde A}G_{\bar r_a}$, and is therefore an easily computable finite group.
Algorithm 5.6. Given a minimal $\{\mathrm{GTor}/2\}$ orbisphere biset ${}_GB_G$, an extension $f_*\colon\widetilde A\to\widetilde A$ of the dynamics of $B$ on its peripheral classes, and two portraits of bisets $(G_a,B_a)_{a\in\widetilde A}$ and $(G'_a,B'_a)_{a\in\widetilde A}$,

Decide whether $(G_a,B_a)_{a\in\widetilde A}$ and $(G'_a,B'_a)_{a\in\widetilde A}$ are conjugate, as follows:

(1) Normalize the portraits in such a manner that $G_a=G'_a$ and $B_a=B'_a$ for all $a\in A$; by Lemma 2.17 this follows from the conjugacy for subgroups: find $(\ell_a)_{a\in A}\in G^A$ with $G_a^{\ell_a}=G'_a$ and conjugate $(G_a,B_a)_{a\in\widetilde A}$ by $(\ell_a)_{a\in\widetilde A}$ with $\ell_a=1$ if $a\notin A$.

(2) Identify $G$ with $\langle\mathbb Z^2,-z+r\rangle$ and choose $b_0\in B$ characterizing the map $\Phi$, see (55).

(3) Using Algorithm 5.5 compute the points $r_a$ and $r'_a$ shadowed by $(G_a,B_a)_{a\in\widetilde A}$ and $(G'_a,B'_a)_{a\in\widetilde A}$ respectively.

(4) Check whether $\bar r_a=\bar r'_a$ in $\mathbb R^2/G$. If $\bar r_a\neq\bar r'_a$ for some $a\in\widetilde A$, then return no. Otherwise find $\ell_a\in G$ with $\ell_ar_a=r'_a$, with $\ell_a=1$ for $a\in A$, and conjugate $(G'_a,B'_a)$ by $(\ell_a)_{a\in\widetilde A}$. This reduces the original problem to the case $r'_a=r_a$.

(5) Using Algorithm 5.5 compute the (finite abelian) local groups $G_{\bar r_a}$ and by a finite check decide whether an element of $\prod_{a\in\widetilde A}G_{\bar r_a}$ conjugates $(G_a,B_a)_{a\in\widetilde A}$ to $(G'_a,B'_a)_{a\in\widetilde A}$.

If $(\bar r_a)_{a\in\widetilde A}$ is an actual orbit, then the $G_{\bar r_a}$ are trivial for all $a\in\widetilde A\setminus A$ and Step (5) can be omitted. This is the case when the algorithm is called from Algorithm 5.12.
Algorithm 5.7. Given a minimal $\{\mathrm{GTor}/2\}$ orbisphere biset ${}_GB_G$ and an extension $f_*\colon\widetilde A\to\widetilde A$ of the dynamics of $B$ on its peripheral classes,

Produce a list of all conjugacy classes of portraits of bisets $(G_a,B_a)_{a\in\widetilde A}$ in $B$ with dynamics $f_*$ as follows:

(1) Write $G=\langle\mathbb Z^2,-z+r\rangle$ and $B=B_{M,v}$, using Algorithm 5.1. Choose $b_0\in B$ characterizing the map $\Phi$, see (55).

(2) Using Algorithm 5.5 compute the orbit $(\bar r_a)_{a\in A}$ of $M\colon\mathbb R^2/G\to\mathbb R^2/G$. Produce a list of all possible poly-orbits $(\bar r_a)_{a\in\widetilde A}$ extending $(\bar r_a)_{a\in A}$.

(3) For every poly-orbit $(\bar r_a)_{a\in\widetilde A}$ find a portrait $(G_a,B_a)_{a\in\widetilde A}$ that shadows $(\bar r_a)_{a\in\widetilde A}$.

(4) Using Algorithm 5.5 compute the (finite) local groups $G_{\bar r_a}$. By Lemma 4.20 the finite set $\{(G_a,B_ah_a)_{a\in\widetilde A}\mid h_a\in G_{\bar r_a},\ h_a=1\text{ if }a\in A\}$ contains a representative of every conjugacy class of portraits that shadows $(\bar r_a)_{a\in\widetilde A}$. Using Algorithm 5.5 produce a list of all conjugacy classes of portraits of bisets that shadow $(\bar r_a)_{a\in\widetilde A}$.
5.2. $\{\mathrm{Exp}\}$ maps. We now turn to expanding maps, and start with a short example showing how the conjugacy problem may be solved algorithmically. We note, following the proof of Lemma 4.39, that every portrait of bisets can be algorithmically conjugated to one in which the terms belong to $NX$, for $X$ a basis and $N$ the associated nucleus.
Example 5.8. Suppose $E=\{e\}$ with $f_*(e)=e$, and let $(G_a,B_a)_{a\in A\sqcup E}$ and $(G_a,C_a)_{a\in A\sqcup E}$ be two portraits of bisets. Then $G_e=1$ and $B_e=\{b\}$ and $C_e=\{c\}$. The portraits $(G_a,B_a)_{a\in A\sqcup E}$ and $(G_a,C_a)_{a\in A\sqcup E}$ are conjugate if and only if there exists $\ell\in G$ such that $\ell^{-1}b\ell=c$.

Write $B_e=\{gx\}$ in a basis $X$ of $B$, with associated nucleus $N$; recall that $N$ is symmetric, contains $1$ and generates $G$. Then $gx$ is conjugate to $xg=(g@x)x^g=g'x'$. After iterating the process $gx\sim xg=g'x'$ finitely many times, we may assume $g\in N$, as in Lemma 4.39. Similarly, we may replace $C_e$ by a conjugate biset $\{hy\}$ with $h\in N$.

Find, by direct search, a $t\in\mathbb N$ with $N^{t-3}X\supseteq XN^t$; such a $t\ge4$ exists because $B$ is contracting. Then $B_e$ and $C_e$ are conjugate if and only if there exists $\ell\in G$ with $b\ell=\ell c$, namely $gx\ell=\ell hy$. Let $u\in\mathbb N$ be such that $\ell\in N^u\setminus N^{u-1}$; if $u\ge t$ then $gx\ell\in NN^{u-3}X$ while $\ell hy\notin N^{u-1}NX$, a contradiction. Therefore the search for a conjugator $\ell$ is constrained to $\ell\in N^{t-1}$.
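In the adding-machine biset of $z^2$ (our toy model, not the paper's: $b=2g+x$ with digit $x\in\{0,1\}$ and nucleus $N=\{-1,0,1\}$), the normalization $gx\sim xg=g'x'$ of a fixed symbolic-orbit element is simply the map $b\mapsto x+g$, and it reaches $NX=\{-2,\dots,3\}$ in logarithmically many steps:

```python
def normalize(b):
    """Conjugate b = g*x (that is, b = 2g + x with x = b % 2) into N*X, N = {-1,0,1}.

    One step replaces gx by xg, whose value in the biset is x + g."""
    steps = 0
    while b // 2 not in (-1, 0, 1):  # g = b // 2 not yet in the nucleus
        b = b % 2 + b // 2           # gx ~ xg = g'x'
        steps += 1
    return b, steps
```

The same halving argument is what bounds the search for a conjugator to a fixed power of the nucleus.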
As in Example 5.8 we have

Lemma 5.9. Let ${}_GB_G$ be a contracting biset; choose a basis $X$ of $B$, and let $N\subseteq G$ be the associated nucleus. Suppose that $t\in\mathbb N$ is such that $N^{t-3}X\supseteq XN^t$.

If two symbolic orbits $(b_i)_{i\in I}$ and $(c_i)_{i\in I}$ with $b_i,c_i\in NX$ are conjugate by $(g_i)$, then $g_i\in N^{t-1}$.

Proof. Suppose $g_i\in N^u\setminus N^{u-1}$ for some $u\ge t$ and some $i$. We may also assume that $g_j\in N^u$ for all $j\in I$. Then on one hand $g_ib_i\in N^uNX\setminus N^{u-1}NX$; but on the other hand $g_ib_i=c_ig_{f_*(i)}\in N^{u-3}NX$. This is a contradiction. $\square$
Algorithm 5.10. Given a minimal $\{\mathrm{Exp}\}$ orbisphere biset ${}_GB_G$, an extension $f_*\colon\widetilde A\to\widetilde A$ of the dynamics of $B$ on its peripheral classes, and two portraits of bisets $(G_a,B_a)_{a\in\widetilde A}$ and $(G'_a,B'_a)_{a\in\widetilde A}$,

Decide whether $(G_a,B_a)_{a\in\widetilde A}$ and $(G'_a,B'_a)_{a\in\widetilde A}$ are conjugate, and Compute the centralizer of $(G_a,B_a)_{a\in\widetilde A}$, which is a finite group, as follows:

(1) Write $\widetilde A=A\sqcup J\sqcup I$ with $f_*^n(J)\subseteq A$ and $f_*(I)\subseteq I$. Normalize $(G_a,B_a)_{a\in\widetilde A}$ and $(G'_a,B'_a)_{a\in\widetilde A}$ such that $G_a=G'_a$ and $B_a=B'_a$; by Lemma 2.17 this follows from the conjugacy for subgroups: find $(\ell_a)_{a\in A}\in G^A$ with $G_a^{\ell_a}=G'_a$ and conjugate $(G_a,B_a)_{a\in\widetilde A}$ by $(\ell_a)_{a\in\widetilde A}$ with $\ell_a=1$ if $a\notin A$.

(2) Check whether there is $(\ell_a)_{a\in A\sqcup J}$ with $\ell_a=1$ for $a\in A$ conjugating $(G_a,B_a)_{a\in A\sqcup J}$ and $(G'_a,B'_a)_{a\in A\sqcup J}$. If not, return no. This reduces the conjugacy problem of portraits to the conjugacy problem of symbolic orbits: writing $B_i=\{b_i\}$ and $B'_i=\{b'_i\}$ for $i\in I$, solve the conjugacy problem for $(b_i)_{i\in I}$ and $(b'_i)_{i\in I}$.

(3) Write the biset $B$ in the form $GX$ for a basis $X$, and let $N$ be its nucleus. Find, by direct search, a $t\in\mathbb N$ with $N^{t-3}X\supseteq XN^t$; such a $t\ge4$ exists because $B$ is contracting.

(4) Write $b_i=g_ix_i$ and replace $b_i$ with $x_ig_i=g_i'x_i'$. After iterating this process finitely many times, we obtain $b_i\in NX$ by Lemma 4.39. By a similar iteration, we may assume $b'_i\in NX$.

(5) Answer whether $(b_i)_{i\in I}$ and $(b'_i)_{i\in I}$ are conjugate by elements of $N^{t-1}$. This is correct by Lemma 5.9.

(6) By a direct search compute the centralizer of $(G_a,B_a)_{a\in\widetilde A}$: the centralizer of $(G_a,B_a)_{a\in A\sqcup J}$ is trivial, while elements centralizing $(G_a,B_a)_{a\in I}$ lie in $N^{t-1}$.
Algorithm 5.11. Given a minimal $\{\mathrm{Exp}\}$ orbisphere biset ${}_GB_G$ and an extension $f_*\colon\widetilde A\to\widetilde A$ of the dynamics of $B$ on its peripheral classes,

Produce a list of all conjugacy classes of portraits of bisets $(G_a,B_a)_{a\in\widetilde A}$ in $B$ with dynamics $f_*$ as follows:

(1) Write $\widetilde A=A\sqcup J\sqcup I$ with $f_*^n(J)\subseteq A$ and $f_*(I)\subseteq I$. Produce a list $L_{A\sqcup J}$ of all conjugacy classes of portraits of bisets $(G_a,B_a)_{a\in A\sqcup J}$.

(2) Write the biset $B$ in the form $GX$ for a basis $X$, and let $N$ be its nucleus. Produce a list of all symbolic orbits $(b_i)_{i\in I}$ with $b_i\in NX$. Using Algorithm 5.10 produce a list $L_I$ of all conjugacy classes of symbolic orbits $(b_i)_{i\in I}$ with $b_i\in NX$.

(3) Combine $L_{A\sqcup J}$ and $L_I$ to produce a list $L_{A\sqcup J}\times L_I$ of all conjugacy classes of portraits of bisets $(G_a,B_a)_{a\in\widetilde A}$ by setting $B_i:=\{b_i\}$ and $G_i:=\{1\}$ for $i\in I$.
5.3. Decidability of conjugacy and centralizer problems. The statements of Theorem E can be turned into an algorithm:

Algorithm 5.12. Given two geometric bisets ${}_{\widetilde G}\widetilde B_{\widetilde G}$, ${}_{\widetilde G}\widetilde C_{\widetilde G}$,

And given an oracle that decides whether two minimal $\{\mathrm{Exp}\}\setminus\{\mathrm{GTor}/2\}$ bisets are conjugate, and computes their centralizers,

Decide whether $\widetilde B$ and $\widetilde C$ are conjugate by an element of $\operatorname{Mod}(\widetilde G)$, and if so construct a conjugator; and compute the centralizer $Z(\widetilde B)$, as follows:

(1) If $\widetilde B_*\neq\widetilde C_*$ as maps on peripheral conjugacy classes, then return no.

(2) Let ${}_{\widetilde G}\widetilde B_{\widetilde G}\to{}_GB_G$ and ${}_{\widetilde G}\widetilde C_{\widetilde G}\to{}_{G'}C_{G'}$ be the maximal forgetful morphisms. If $G\neq G'$, then return no.

(3) From now on assume $G=G'$. Using Algorithm 5.4 or the oracle, decide whether $B,C$ are conjugate; if not, return no. Using Algorithm 5.4 or the oracle compute the centralizer $Z(B)$.

(4) From now on, assume $B^\varphi=C$ for some $\varphi\in\operatorname{Mod}(G)$. Compute the action of $Z(B)$ on the list of all conjugacy classes of portraits of bisets (this list is finite by Corollary 4.21; Algorithms 5.7 and 5.11 compute the list) and check whether $(G_a,B_a)_{a\in\widetilde A}$ and $(G_a^{\varphi^{-1}},C_a^{\varphi^{-1}})_{a\in\widetilde A}$ belong to the same orbit. Return no if the answer is negative. Otherwise replace $\varphi$ by an element of $Z(B)\varphi$ such that $(G_a^\varphi,B_a^\varphi)_{a\in\widetilde A}$ and $(G_a,C_a)_{a\in\widetilde A}$ are conjugate portraits.

(5) Choose an arbitrary lift $\widetilde\varphi\in\operatorname{Mod}(\widetilde G)$ of $\varphi$, and compute $k\in\operatorname{Mod}(\widetilde G)$ such that $k\widetilde B^{\widetilde\varphi}\cong\widetilde C$. It follows that $k$ is a knitting element. Compute inductively $k^{(n)}$ with $k^{(n)}\widetilde B^{\widetilde\varphi k^1k^2\cdots k^{(n-1)}}\cong\widetilde C$; proceed until $k^{(n)}$ is the identity (this is guaranteed by Lemma 4.27). Return $\widetilde\varphi k^1k^2\cdots k^{(n-1)}$.

(6) To compute the centralizer of $\widetilde B$, consider first $Z(B)$. Compute the action of $Z(B)$ on the list of all conjugacy classes of portraits of bisets (this list is finite by Corollary 4.21; Algorithms 5.7 and 5.11 compute the list) and compute the stabilizer $Z_0(B)$ of the $Z(B)$-action. Note that $Z_0(B)$ is a finite-index subgroup of $Z(B)$.

(7) List a generating set $S$ of $Z_0(B)$. For every $\varphi\in S$ compute its lift $\widehat\varphi\in\operatorname{Mod}(\widetilde G)$ with $\widetilde B^{\widehat\varphi}\cong\widetilde B$: compute first an arbitrary lift $\widetilde\varphi$ of $\varphi$, then inductively define $k^{(n)}$ with $k^{(n)}\widetilde B^{\widetilde\varphi k^1k^2\cdots k^{(n-1)}}\cong\widetilde B$ until $k^{(n)}$ is the identity, and set $\widehat\varphi:=\widetilde\varphi k^1k^2\cdots k^{(n-1)}$. Return $\{\widehat\varphi\mid\varphi\in S\}$.
Proof of Corollary F. We claim that Algorithms 5.5, 5.6, 5.7, 5.10 and 5.11 are efficient. Indeed, the efficiency of Step 4 of Algorithm 5.10 follows from expansion. The efficiency of Steps 5 and 7 of Algorithm 5.12 follows from Lemma 4.27. All the remaining steps are obviously efficient.

We note that, in the case of $(2,2,2,2)$ bisets, the oracle itself is efficient. Indeed the oracle reduces to solving conjugacy and centralizer problems in the group $\operatorname{SL}_2(\mathbb Z)$, and Theorem 5.3 is efficient, because $\operatorname{SL}_2(\mathbb Z)$ has a free subgroup of index 12. $\square$
Corollary 5.13. There is an algorithm that, given two geometric bisets ${}_GB_G$ and ${}_HC_H$, decides whether $B$ and $C$ are conjugate, and computes the centralizer of $B$.
Proof. We briefly sketch an algorithm that justifies the existence of an oracle deciding, given two minimal $\{\mathrm{Exp}\}$ orbisphere bisets, whether they are conjugate, and computing their centralizers; details will appear in [6].

Let $B,C$ be two minimal $\{\mathrm{Exp}\}$ orbisphere $G$-$G$-bisets. They admit a decomposition into rational maps along the canonical obstruction, which is computable by [22]. The graph of bisets along this decomposition is computable, and rational maps may be computed for each of the small bisets in the decomposition, e.g. by giving their coëfficients as algebraic numbers with floating-point enclosures to distinguish them from their Galois conjugates. The bisets $B,C$ are conjugate precisely when their respective rational maps are conjugate and the twists along the canonical obstruction agree; the first condition amounts to finite calculations with algebraic numbers, while the second is the topic of [5, Theorem A].

The centralizer of a rational map is trivial, and [5, Theorem A] shows that the centralizer of $B$ is computable. $\square$
Proof of Corollary G. If the rational map is $(2,2,2,2)$, then the oracle is efficient, as we noted above, so Corollary F applies. In the other case, the rational map is hyperbolic, so it has trivial centralizer and no oracle is needed in the application of Algorithm 5.12. $\square$
6. Examples

Finally, in this brief section, we consider some examples of portraits of bisets. The first ones come from marking points on the Basilica map $f(z)=z^2-1$, and more generally maps with three post-critical points. The second ones come from cyclic bisets (which are particularly simple, but to which our main results apply only with restrictions, because these bisets have automorphisms).
6.1. Twisted marked Basilica. Consider the Basilica polynomial
$$f(z):=z^2-1\colon(\widehat{\mathbb C},\{0,-1,\infty\})\to(\widehat{\mathbb C},\{0,-1,\infty\}).$$
It has two fixed points $\alpha$ and $\beta$, with $\alpha\in(-1,0)$ and $\beta>1$. Let us take $\alpha$ to be the basepoint and let ${}_GB_G$ be the biset of $f$. Denote respectively by $\gamma_{-1}$ and $\gamma_0$
Figure 1. The dynamical plane of $z^2-1$. Loops $\gamma_{-1},\gamma_0,\gamma_\beta$ circle around $-1,0,\beta$ respectively. The curve $x_2$ connects $\alpha$ to its preimage $-\alpha$. The simple closed curve $t$ surrounds $\{-1,\beta\}$.
the loops circling around $-1$ and $0$ as in Figure 1, and let $\gamma_\infty=(\gamma_{-1}\gamma_0)^{-1}$ be the loop around infinity. Then
$$G=\langle\gamma_{-1},\gamma_0,\gamma_\infty\mid\gamma_\infty\gamma_{-1}\gamma_0\rangle.$$
The basepoint $\alpha$ has two preimages $\alpha$ and $-\alpha$. Let $x_1$ be the constant path at $\alpha$ and let $x_2$ be a path slightly below $0$ connecting $\alpha$ to $-\alpha$. The presentation of $B$ in the basis $S:=\{x_1,x_2\}$ is
$$\gamma_{-1}=\langle\!\langle1,\gamma_0\rangle\!\rangle(1,2),\qquad\gamma_0=\langle\!\langle\gamma_{-1},1\rangle\!\rangle,\qquad\gamma_\infty=\langle\!\langle\gamma_{-1}^{-1}\gamma_0^{-1},1\rangle\!\rangle(1,2).$$
We shall consider first the effect of marking a fixed point, and then of marking
some preimages of the post-critical set.
6.1.1. Marking $\beta$. Consider a marked Basilica $\widetilde f(z)=z^2-1\colon(\widehat{\mathbb C},\{\infty,0,-1,\beta\})\to(\widehat{\mathbb C},\{\infty,0,-1,\beta\})$. Let $\gamma_\beta$ be the loop around $\beta$ depicted in Figure 1. The fundamental group of $(\widehat{\mathbb C},\{\infty,0,-1,\beta\})$, based at $\alpha$, is
$$\widetilde G=\langle\gamma_{-1},\gamma_0,\gamma_\beta,\gamma_\infty\mid\gamma_\infty\gamma_{-1}\gamma_\beta\gamma_0\rangle$$
and the forgetful map $\widetilde G\twoheadrightarrow G$ sends $\gamma_\beta$ to $1$. The presentation of the biset $\widetilde B=B(\widetilde f)$ in the basis $S$ is
$$\gamma_{-1}=\langle\!\langle1,\gamma_0\rangle\!\rangle(1,2),\qquad\gamma_\beta=\langle\!\langle1,\gamma_\beta\rangle\!\rangle,\qquad\gamma_0=\langle\!\langle\gamma_{-1},1\rangle\!\rangle,\qquad\gamma_\infty=\langle\!\langle\gamma_{-1}^{-1}\gamma_0^{-1},\gamma_\beta^{-1}\rangle\!\rangle(1,2).$$
Write $A=\{\infty,-1,0\}$ and $\widetilde A=\{\infty,-1,0,\beta\}$; then $\widetilde B$ has a minimal portrait of bisets
$$\widetilde G_{-1}=\langle\gamma_{-1}\rangle,\quad\widetilde B_{-1}=\widetilde G_{-1}x_1,\qquad\widetilde G_\beta=\langle\gamma_\beta\rangle,\quad\widetilde B_\beta=\widetilde G_\beta x_2,$$
$$\widetilde G_0=\langle\gamma_0\rangle,\quad\widetilde B_0=\widetilde G_0x_1\sqcup\widetilde G_0x_2=x_1\widetilde G_{-1},\qquad\widetilde G_\infty=\langle\gamma_\infty\rangle,\quad\widetilde B_\infty=\widetilde G_\infty(\gamma_{-1}x_1)\sqcup\widetilde G_\infty(\gamma_{-1}x_2\gamma_\beta).$$
Under the forgetful intertwiner $\widetilde B\twoheadrightarrow B$ the portrait $(\widetilde G_a,\widetilde B_a)_{a\in\widetilde A}$ projects to
$$G_{-1}=\langle\gamma_{-1}\rangle,\quad B_{-1}=G_{-1}x_1,\qquad G_\beta=1,\quad B_\beta=\{x_2\},$$
$$G_0=\langle\gamma_0\rangle,\quad B_0=G_0x_1\sqcup G_0x_2=x_1G_{-1},\qquad G_\infty=\langle\gamma_\infty\rangle,\quad B_\infty=G_\infty(\gamma_{-1}x_1)\sqcup G_\infty(\gamma_{-1}x_2).$$
Note that $B_\beta=\{x_2\}$ encodes $\beta$ in the sense that $x_2$ shadows $\beta$. This can be seen directly as follows: set $x_2^0=x_2$ and let $x_2^{i+1}$ be the $f$-lift of $x_2^i$ that starts at $x_2^i(1)$. Then the sequence of endpoints $x_2^i(1)$ converges to $\beta$, and the infinite concatenation $x_2^0\#x_2^1\#\cdots$ is a path from $\alpha$ to $\beta$.
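Numerically (our own check, not part of the argument): the endpoints $x_2^i(1)$ follow the inverse branch $z\mapsto\sqrt{z+1}$ of $f$, starting from $x_2(1)=-\alpha$, and converge to the fixed point $\beta=(1+\sqrt5)/2$:

```python
from math import sqrt

z = (sqrt(5) - 1) / 2      # -alpha, the endpoint of x_2
for _ in range(60):
    z = sqrt(z + 1)        # endpoint of the next f-lift, f(z) = z^2 - 1

beta = (1 + sqrt(5)) / 2   # the fixed point beta > 1 of f
print(abs(z - beta))       # essentially 0: the lifted endpoints shadow beta
```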
Let now $T$ be the clockwise Dehn twist around the simple closed curve $t$ surrounding $\{-1,\beta\}$ as in Figure 1. The action of $T$ on $\widetilde G$ is given by
$$T_*\colon\gamma_{-1}\mapsto\gamma_{-1}^{(\gamma_{-1}\gamma_\beta)^{-1}},\quad\gamma_\beta\mapsto\gamma_\beta^{(\gamma_{-1}\gamma_\beta)^{-1}},\quad\gamma_0\mapsto\gamma_0,\quad\gamma_\infty\mapsto\gamma_\infty.$$
The biset $\widetilde C$ of $g:=T\circ f$ is $\widetilde B\otimes B_{T_*}$. Let us show that $g$ is conjugate to $z^2-1\colon(\widehat{\mathbb C},\{\infty,-1,0,\alpha\})\to(\widehat{\mathbb C},\{\infty,-1,0,\alpha\})$. By Theorem 2.19,
$$(\widetilde C_a,\widetilde G_a)_{a\in\widetilde A}=(\widetilde B_a,\widetilde G_a)_{a\in\widetilde A}E(T).$$
Note that we have $C_a=B_a$ for all $a\in A$, so it remains to compute $C_\beta$.

Let $\bar t\in\pi_1(\mathbb C\setminus\{0,-1\},\beta)$ be the simple loop below $0$ circling around $-1$; then $T=\operatorname{push}(\bar t)$. Let $\ell_\beta\colon[0,1]\to\mathbb C\setminus\{-1,0\}$ (using the notations of Lemma 2.16) be an arc from $\alpha$ to $\beta$ slightly below $0$, so that $\gamma_\beta$ may be homotoped to a small neighborhood of $\ell_\beta$. We have $C_\beta=B_\beta\gamma_{-1}^{-1}$; the claim then follows from $x_2\gamma_{-1}^{-1}=x_1$ and the fact that $x_1$ shadows $\alpha$.
More generally, suppose that $\widetilde g=\operatorname{push}(\bar s)\widetilde f\operatorname{push}(\bar t)$ for some motions $\bar s,\bar t\in\pi_1(\mathbb C\setminus\{0,-1\},\beta)$ of $\beta$; then
$$C_\beta=(\ell_\beta\#\bar s\#\ell_\beta^{-1})B_\beta(\ell_\beta\#\bar t\#\ell_\beta^{-1}).$$
The process
$$(\ell_\beta\#\bar s\#\ell_\beta^{-1})x_2(\ell_\beta\#\bar t\#\ell_\beta^{-1})=g_1x_{i(1)}\sim x_{i(1)}g_1=g_2x_{i(2)}\sim x_{i(2)}g_2=\cdots$$
eventually terminates in either $x_1$ or $x_2$. In the former case, $\widetilde g$ is conjugate to $z^2-1\colon(\widehat{\mathbb C},\{\infty,0,-1,\alpha\})$ and in the latter case $\widetilde g$ is conjugate to $z^2-1\colon(\widehat{\mathbb C},\{\infty,0,-1,\beta\})$.
6.1.2. Marking $1$ and $\sqrt2$. Consider now
$$\widetilde f(z)=z^2-1\colon(\widehat{\mathbb C},\{\infty,-1,0,1,\sqrt2\})\to(\widehat{\mathbb C},\{\infty,-1,0,1,\sqrt2\})$$
Figure 2. Curves $\ell_{-1},\ell_0,\ell_1,\ell_{\sqrt2}$ and their lifts.
with $\widetilde A\setminus A=\{1,\sqrt2\}$ and $A=\{\infty,-1,0\}$. The dynamics are $\sqrt2\mapsto1\mapsto0\leftrightarrow-1$. As in §6.1.1 we readily compute a presentation of $\widetilde B=B(\widetilde f)$ in the basis $S$:
$$\gamma_{-1}=\langle\!\langle1,\gamma_0\rangle\!\rangle(1,2),\quad\gamma_{\sqrt2}=\langle\!\langle1,1\rangle\!\rangle,\quad\gamma_1=\langle\!\langle1,\gamma_{\sqrt2}\rangle\!\rangle,\quad\gamma_0=\langle\!\langle\gamma_{-1},\gamma_1\rangle\!\rangle,\quad\gamma_\infty=\langle\!\langle\gamma_{-1}^{-1}\gamma_0^{-1},\gamma_1^{-1}\gamma_{\sqrt2}^{-1}\rangle\!\rangle(1,2),$$
with $\widetilde G=\langle\gamma_\infty,\gamma_{-1},\gamma_{\sqrt2},\gamma_1,\gamma_0\mid\gamma_\infty\gamma_{-1}\gamma_{\sqrt2}\gamma_1\gamma_0\rangle$.

As in Lemma 2.16, let $\ell_{-1}$ and $\ell_0\subset\mathbb R$ respectively be simple arcs from $\alpha$ to $-1$ and to $0$, so that $\gamma_{-1}$ and $\gamma_0$ may be homotoped to small neighborhoods of $\ell_{-1}$ and $\ell_0$ respectively. Let $\ell_1$ and $\ell_{\sqrt2}$ be simple arcs from $\alpha$ to $1$ and $\sqrt2$ slightly below $0$, as in Figure 2.
Denote by $(G_a,B_a)_{a\in\widetilde A}$ the portrait of bisets in $B$ induced by $\widetilde B=B(\widetilde f)$. Then $G_a$ and $B_a$ are the same as in §6.1.1 for all $a\in A$, while (by Lemma 2.18)
$$B_1=\{\ell_1\#(\ell_0^{-1}{\uparrow}f)\}=\{x_2\}\qquad\text{and}\qquad B_{\sqrt2}=\{\ell_{\sqrt2}\#(\ell_1^{-1}{\uparrow}f)\}=\{x_2\}.$$
We consider again some twists of $\widetilde f$. Consider $\widetilde g=m_1\widetilde fm_2$ with $m_1,m_2$ trivial rel $A$. Suppose that $m_1$ moves $1$ and $\sqrt2$ along $s_1\in\pi_1(\mathbb C\setminus\{-1,0\},1)$ and $s_{\sqrt2}\in\pi_1(\mathbb C\setminus\{-1,0\},\sqrt2)$ respectively, while $m_2$ moves $1$ and $\sqrt2$ along $t_1\in\pi_1(\mathbb C\setminus\{-1,0\},1)$ and $t_{\sqrt2}\in\pi_1(\mathbb C\setminus\{-1,0\},\sqrt2)$ respectively. If $(G_a,C_a)_{a\in\widetilde A}$ is the portrait of bisets in $B$ induced by $B(\widetilde g)$, then
$$C_1=\{(\ell_1\#s_1\#\ell_1^{-1})x_2\},\qquad C_{\sqrt2}=\{(\ell_{\sqrt2}\#s_{\sqrt2}\#\ell_{\sqrt2}^{-1})x_2(\ell_{\sqrt2}\#(t_1{\uparrow}f)\#\ell_{\sqrt2}^{-1})\}.$$
Write $C_1=\{h_1x_2\}$ and $C_{\sqrt2}=\{h_2x_j\}$. We may conjugate $(C_1,C_{\sqrt2})$ to $(\{x_2\},\{x_jh_1\})$, and write $x_jh_1=h_2x_k$. If $k=1$, then $\widetilde g$ is conjugate to $z^2-1\colon(\widehat{\mathbb C},\{\infty,-1,0,1,\sqrt2\})$; otherwise $\widetilde g$ is conjugate to $z^2-1\colon(\widehat{\mathbb C},\{\infty,-1,0,1,-\sqrt2\})$.
6.1.3. Mapping class bisets. We continue the discussion from §6.1.2, and consider the related mapping class bisets. The biset $M(B)$ is of course reduced to $\{B\}$, with $\operatorname{Mod}(G)=1$.

On the other hand, the biset $M(\widetilde B)$ is a left-free $\operatorname{Mod}(\widetilde G)$-biset of degree 2. This can be seen in various ways: analytically, the point $\sqrt2$ may be moved to
$-\sqrt2$, and a basis of $M(\widetilde B)$ may be chosen as $\{(z^2-1,\{\infty,-1,0,1,\sqrt2\}),(z^2-1,\{\infty,-1,0,1,-\sqrt2\})\}$. More symbolically, the action of exchanging $\sqrt2$ with $-\sqrt2$ amounts to changing, in the wreath recursion of $\widetilde f$, the entry `$\gamma_1=\langle\!\langle1,\gamma_{\sqrt2}\rangle\!\rangle$' into `$\gamma_1=\langle\!\langle\gamma_{\sqrt2},1\rangle\!\rangle$'.

Note that the biset $M(\widetilde B)$ is nevertheless connected, and $M(\widetilde B)=M^*(\widetilde B)$: indeed the right action by the mapping class that pushes $1$ once along the circle $\{|z|=1\}$ has the effect of exchanging the two left orbits.
Example 6.1. Let us now consider the sets $A=\{\infty,-1,0,1\}$ and $D=\{\sqrt2\}$, still with the map $f(z)=z^2-1\colon(\widehat{\mathbb C},A)\to(\widehat{\mathbb C},A)$ and $\widetilde f=f\colon(\widehat{\mathbb C},A\sqcup D)\to(\widehat{\mathbb C},A\sqcup D)$. Then the fibres of the map $F_{D,D}\colon M^*(\widetilde f)=M(\widetilde f)\to M(f)$ are not connected (by the second claim of Proposition 2.21).

Indeed the left action of $\operatorname{Mod}(\widehat{\mathbb C}\setminus A,D)$ has two orbits, while the right action does not identify these orbits, since $1$ is not allowed to move around a critical value.
6.2. Belyi maps. The Basilica map $f(z)=z^2-1$ is an example of a dynamical Belyi map, namely a map whose post-critical set consists of 3 points. All such maps are realizable as holomorphic maps, and the three points may be normalized to be $\{0,1,\infty\}$.

In this subsection, we briefly state how the main results of this article simplify considerably. We concentrate on the dynamical situation, namely a map $f\colon(S^2,A)\to(S^2,A)$ covered by $\widetilde f\colon(S^2,A\sqcup D)\to(S^2,A\sqcup D)$, with $\#A=3$.

In that case, $\operatorname{Mod}(S^2,A)=1$, and we have a short exact sequence
$$1\to\operatorname{knBraid}(S^2,A\sqcup D)\to\operatorname{Mod}(S^2,A\sqcup D)\to\pi_1(S^2\setminus A,D)\to1.$$
Assume first that $D$ consists only of $\widetilde f$-fixed points. Then the extension-of-bisets decomposition of $M(\widetilde f)$ from Theorem B reduces to the statement that $M(\widetilde f)$ is an inert extension of $B(f)^D$. The case of $D$ consisting of a single fixed point was considered in [5, §8.2].

More concretely: if we are given a wreath recursion $G\to G\wr S_X$ for $B(f)$, with $G=\pi_1(S^2\setminus A,*)$, then a generating set for $\operatorname{Mod}(S^2,A\sqcup D)$ can be chosen to consist of $\#D$ copies of a generating set of $G$, corresponding to point pushes of $D$ in $S^2\setminus A$, together with some additional knitting elements. A wreath recursion for $M(\widetilde f)$ will then be of the form $\operatorname{Mod}(S^2,A\sqcup D)\to\operatorname{Mod}(S^2,A\sqcup D)\wr S_X^D$, consisting of $D$ parallel copies of the wreath recursion of $B(f)$. The wreath recursion associated with the knitting elements is trivial.

In case $D$ consists of periodic points of period $>1$, the actions of $G$ on the left and the right should be appropriately permuted. If $D$ contains $n_i$ cycles of period $i$, then abstractly $M(\widetilde f)$ will consist in the direct product of $n_i$ copies of $B(f^i)=B(f)^{\otimes i}$.
Example 6.2. Consider the map $\widetilde f(z)=f(z)=z^3$ with $A=\{0,1,\infty\}$ and $D=\{\omega=\exp(2\pi i/3)\}$. The biset $M(f)$ is of course reduced to $\{f\}$, while $M(\widetilde f)=\operatorname{Mod}(\widehat{\mathbb C}\setminus A,D)\cong\pi_1(\widehat{\mathbb C}\setminus A)$ with trivial right action. Indeed $D$ is not in the image of $f$, so pushing $\omega$ has no effect.

However, $M^*(\widetilde f)$ has two left $\operatorname{Mod}(\widehat{\mathbb C}\setminus A,D)$-orbits by Lemma 2.20. Representatives may be chosen as $\{(z^3,A\sqcup\{\omega\}),(z^3,A\sqcup\{\omega^2\})\}$, or equivalently (if the marked set is to remain $A\sqcup D$) as the maps $\{z^3,z^3\circ m\}$ for a homeomorphism $m\colon(\widehat{\mathbb C},A)\to(\widehat{\mathbb C},A)$ that pushes $\omega$ to $\omega^2$.
6.3. Cyclic bisets. We finally consider the easy case of a cyclic biset, namely the biset ${}_GB_G$ of a monomial map $f(z)=z^d\colon(\widehat{\mathbb C},\{0,\infty\})\to(\widehat{\mathbb C},\{0,\infty\})$ for some $d\in\mathbb Z\setminus\{-1,0,1\}$. Then $G\cong\mathbb Z$ and $B$ is a left-free, right-principal biset. Choosing $b_0\in B$ we can identify $B=b_0G$ with $\mathbb Z$, with the actions given by
$$m\cdot b\cdot n=dm+b+n.$$
Observe first that $b\mapsto b+k$ is an automorphism of $B$ for all $k\in\mathbb Z$. Therefore, contrary to Proposition A, we have $\operatorname{Aut}(B)\cong\mathbb Z$.
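A three-line check (ours) that the shifts $b\mapsto b+k$ commute with both actions and hence are automorphisms of $B$:

```python
d = 3  # any d in Z \ {-1, 0, 1}; the biset of z^d marked at {0, infinity}

def act(m, b, n):
    """Two-sided action on B identified with Z: m . b . n = d*m + b + n."""
    return d * m + b + n

# the shift b -> b + k intertwines both actions, so it is a biset automorphism
for k in (-5, 1, 7):
    for m, b, n in [(2, -4, 9), (0, 3, -1), (-6, 5, 2)]:
        assert act(m, b + k, n) == act(m, b, n) + k
```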
Suppose that $(G_a,B_a)_{a\in\widetilde A}$ is a portrait of bisets in $B$. Then $\operatorname{Aut}(B)$ acts on portraits by
$$(G_a,B_a)_{a\in\widetilde A}+k:=(G_a,B_a+k)_{a\in\widetilde A}.$$
We denote by $(G_a,B_a)_{a\in\widetilde A}/\operatorname{Aut}(B)$ the orbit of this action. We can now adjust Theorem C to cyclic bisets:
Lemma 6.3. Let $\widetilde G\twoheadrightarrow G$ be a forgetful morphism of groups with $G\cong\mathbb Z$, and let ${}_GB_G$ be the biset of $z^d\colon(\widehat{\mathbb C},\{0,\infty\})\to(\widehat{\mathbb C},\{0,\infty\})$.

There is then a bijection between, on the one hand, conjugacy classes of portraits of bisets $(B_a)_{a\in\widetilde A}$ in $B$ considered up to the action of $\operatorname{Aut}(B)$ and, on the other hand, $\widetilde G$-$\widetilde G$-bisets projecting to $B$ under $\widetilde G\twoheadrightarrow G$ considered up to composition with the biset of a knitting element. This bijection maps every minimal portrait of bisets of $\widetilde B$ to $(B_a)_{a\in\widetilde A}/\operatorname{Aut}(B)$.

Proof. Mark an extra fixed point in $(\widehat{\mathbb C},\{0,\infty\})$ so as to remove automorphisms, and apply Theorem C. $\square$
To illustrate Lemma 6.3, consider $\widetilde f(z)=z^d\colon(\widehat{\mathbb C},\{0,\infty\}\cup D)\to(\widehat{\mathbb C},\{0,\infty\}\cup D)$ and let $(G_a,B_a)_{a\in\widetilde A}$ be the induced portrait of bisets on $(\widehat{\mathbb C},\{0,\infty\})$. Let us also assume that $d>1$. For $a\in D$ write $B_a=\{b_a\}$ with $b_a\in B\cong\mathbb Z$.

Let us consider a twisted map $\widetilde g=m\widetilde fn$. For $a\in D$ let $m_a$ and $n_a$ be the number of times $m$ and $n$ push $a$ around $0$. If $(G_a,C_a)_{a\in\widetilde A}$ is the induced portrait of bisets, then $C_a=\{dm_a+b_a+n_{f_*(a)}\}=\{c_a\}$ for $a\in D$. For $a\in D$ set
$$x_a:=c_a/d+c_{f_*(a)}/d^2+c_{f_*^2(a)}/d^3+\cdots\mod\mathbb Z\quad\in\mathbb R/\mathbb Z.$$
Then $(C_a)_{a\in D}$ shadows $(x_a)_{a\in D}$. The map $\widetilde g$ is unobstructed if and only if the points $x_a$ are pairwise different. If $\widetilde g$ is unobstructed, then $(x_a)_{a\in D}$ considered up to the action
$$(x_a)_{a\in D}\mapsto\Bigl(x_a+\tfrac{k}{d-1}\Bigr)_{a\in D},\qquad k\in\mathbb Z/(d-1)\mathbb Z$$
is a complete conjugacy invariant of $\widetilde g$.
ERASING MAPS, ORBISPACES, AND THE BIRMAN EXACT SEQUENCE
E-mail address: [email protected]
École Normale Supérieure, Paris and Mathematisches Institut, Georg-August Universität zu Göttingen
E-mail address: [email protected]
Jacobs University, Bremen
CausalGAN: Learning Causal Implicit Generative Models
with Adversarial Training
arXiv:1709.02023v2 [cs.LG] 14 Sep 2017
Murat Kocaoglu*1,a, Christopher Snyder*1,b, Alexandros G. Dimakis1,c, and Sriram Vishwanath1,d

1 Department of Electrical and Computer Engineering, The University of Texas at Austin, USA
a [email protected], b [email protected], c [email protected], d [email protected]
September 18, 2017
Abstract
We propose an adversarial training procedure for learning a causal implicit generative model
for a given causal graph. We show that adversarial training can be used to learn a generative
model with true observational and interventional distributions if the generator architecture is
consistent with the given causal graph. We consider the application of generating faces based
on given binary labels where the dependency structure between the labels is preserved with
a causal graph. This problem can be seen as learning a causal implicit generative model for
the image and labels. We devise a two-stage procedure for this problem. First we train a
causal implicit generative model over binary labels using a neural network consistent with a
causal graph as the generator. We empirically show that Wasserstein GAN can be used to
output discrete labels. Later we propose two new conditional GAN architectures, which we call
CausalGAN and CausalBEGAN. We show that the optimal generator of the CausalGAN, given
the labels, samples from the image distributions conditioned on these labels. The conditional
GAN combined with a trained causal implicit generative model for the labels is then an implicit
causal generative network over the labels and the generated image. We show that the proposed
architectures can be used to sample from observational and interventional image distributions,
even for interventions which do not naturally occur in the dataset.
1
Introduction
Generative adversarial networks are neural generative models that can be trained using backpropagation to mimic sampling from very high dimensional nonparametric distributions [13]. A generator
network models the sampling process through feedforward computation. The generator output is
constrained and refined through feedback from a competitive "adversary network" that attempts
to discriminate between the generated and real samples. In the application of sampling from a
distribution over images, a generator, typically a neural network, outputs an image given independent noise variables. The objective of the generator is to maximize the loss of the discriminator
(convince the discriminator that it outputs images from the real data distribution). GANs have
shown tremendous success in generating samples from distributions such as image and video [37]
and have even been proposed for language translation [38].
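The minimax game described above can be made concrete with the original GAN losses. The following sketch (our own illustration, not code from this paper) evaluates both losses for a batch of discriminator outputs:

```python
import numpy as np

def discriminator_loss(d_real, d_fake):
    """Standard GAN discriminator loss: -E[log D(x)] - E[log(1 - D(G(z)))].

    d_real and d_fake are discriminator outputs in (0, 1) for real and
    generated samples respectively."""
    return -np.mean(np.log(d_real)) - np.mean(np.log(1.0 - d_fake))

def generator_loss(d_fake):
    """Non-saturating generator loss: -E[log D(G(z))], i.e. the generator
    tries to convince the discriminator its samples are real."""
    return -np.mean(np.log(d_fake))

# A perfectly confused discriminator outputs 0.5 everywhere, giving the
# well-known equilibrium value 2 log 2 for its loss.
d = np.full(8, 0.5)
print(np.isclose(discriminator_loss(d, d), 2 * np.log(2)))  # True
```

At equilibrium the discriminator cannot do better than a coin flip, which is the property the adversarial training procedure exploits.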
* Equal contribution.
(a) Top: Intervened on Bald = 1. Bottom: Conditioned on Bald = 1. Male → Bald. (b) Top: Intervened on Mustache = 1. Bottom: Conditioned on Mustache = 1. Male → Mustache.
Figure 1: Observational and interventional image samples from CausalBEGAN. Our architecture
can be used to sample not only from the joint distribution (conditioned on a label) but also from
the interventional distribution, e.g., under the intervention do(Mustache = 1). The resulting
distributions are clearly different, as is evident from the samples outside the dataset, e.g., females
with mustaches.
One extension idea for GANs is to enable sampling from the class conditional data distributions
by feeding labels to the generator. Various neural network architectures have been proposed for
solving this problem [27, 30, 1]. As far as we are aware, in all of these works the class labels are
chosen independently from one another. Therefore, choosing one label does not affect the distribution
of the other labels. As a result, these architectures do not provide the functionality to condition
on a label, and sample other labels and the image. For concreteness consider a generator trained
to output images of birds when given the color and species labels. On one hand, if we feed the
generator color = blue, since the species label is independent of the color label, we are likely to see
blue eagles as well as blue jays. However, we do not expect to see any blue eagles when conditioned
on color=blue in any dataset of real bird images. Similarly, consider a generator trained to output
face images given the gender and mustache labels. When labels are chosen independently from one
another, images generated under mustache = 1 should contain both males and females, which is
clearly different from conditioning on mustache = 1. The key to understanding and unifying these two notions, conditioning and being able to sample from distributions different from the dataset's, is to use causality.
We can think of generating an image conditioned on labels as a causal process: Labels determine
the image distribution. The generator is a functional map from labels to image distributions. This is
consistent with a simple causal graph "Labels cause the Image", represented with the graph L → G,
where L is the set of labels and G is the generated image. Using a finer model, we can also include
the causal graph between the labels. Using the notion of causal graphs, we are interested in extending
the previous work on conditional image generation by
(i) capturing the dependence and
(ii) capturing the causal effect
between labels and the image.
As an example, consider the causal graph between gender (G) and mustache (M ) labels. The
causal relation is clearly that gender causes mustache¹, shown with the graph G → M. Conditioning
on gender=male, we expect to see males with or without mustaches, based on the fraction of males
with mustaches in the population. When we condition on mustache = 1, we expect to sample from
males only since the population does not contain females with mustaches. In addition to sampling
from conditional distributions, causal models allow us to sample from various different distributions
called interventional distributions, which we explain next.
From a causal lens, using independent labels corresponds to using an empty causal graph between
1 In reality, there may be confounder variables, i.e., variables that affect both, which are not observable. In this work, we ignore this effect by assuming the graph has causal sufficiency, i.e., there do not exist unobserved variables that cause more than one observable variable.
the labels. However in practice the labels are not independent and even have clear causal connections
(e.g., gender causes mustache). Using an empty causal graph instead of the true causal graph, and
setting a label to a particular value is equivalent to intervening on that label in the original causal
graph, but also ignoring the way it affects other variables. An intervention is an experiment which
fixes the value of a variable, without affecting the rest of the causal mechanism, which is different
from conditioning. An intervention on a variable affects its descendant variables in the causal graph.
But unlike conditioning, it does not affect the distribution of its ancestors. For example, instead
of the causal graph Gender causes Mustache, if we used the empty causal graph between the same
labels, intervening on Gender = Female would create females with mustaches, whereas with the
correct causal graph, it should only yield females without mustaches since setting the Gender variable
will affect all the variables that are downstream, e.g., mustache. See Figure 1 for a sample of our
results which illustrate this concept on the bald and mustache variables. Similarly, for generating
birds with the causal graph Species causes color, intervening on color = blue allows us to sample
blue eagles (which do not exist) whereas conditioning on color = blue does not.
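The difference between conditioning and intervening can be checked numerically on a toy structural causal model for Gender → Mustache; the probabilities below are illustrative assumptions of ours, not estimates from any dataset:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000

# Structural equations for the graph Gender -> Mustache:
# Gender ~ Bernoulli(0.5) (True = male), and only males may grow a mustache.
gender = rng.random(n) < 0.5
mustache = gender & (rng.random(n) < 0.4)

# Conditioning on Mustache = 1 restricts attention to the subpopulation
# with mustaches: they are all male.
p_male_given_mustache = gender[mustache].mean()

# Intervening with do(Mustache = 1) overrides the structural equation for
# Mustache without touching its ancestor Gender, so the gender
# distribution is unchanged and we obtain females with mustaches.
mustache_do = np.ones(n, dtype=bool)
p_male_do_mustache = gender[mustache_do].mean()

print(p_male_given_mustache)  # ~1.0
print(p_male_do_mustache)     # ~0.5
```

The two answers differ exactly as described in the text: conditioning propagates information to ancestors, an intervention does not.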
An implicit generative model [28] is a mechanism that can sample from a probability distribution
but cannot provide likelihoods for data points. In this work we propose causal implicit generative
models (CiGM): mechanisms that can sample not only from probability distributions but also from
conditional and interventional distributions. We show that when the generator structure inherits
its neural connections from the causal graph, GANs can be used to train causal implicit generative
models. We use WassersteinGAN to train a causal implicit generative model for image labels, as
part of a two-step procedure for training a causal implicit generative model for the images and image
labels. For the second step, we propose a novel conditional GAN architecture and loss function
called the CausalGAN. We show that the optimal generator can sample from the correct conditional
and interventional distributions, which is summarized by the following theorem.
Theorem 1 (Informal). Let G(l, z) be the output of the generator for a given label l and latent
vector z. Let G∗ be the global optimal generator for the loss function in (5), when the rest of the
network is trained to optimality. Then the generator samples from the conditional image distribution
given the label, i.e., pg (G(l, Z) = x) = pdata (X = x|L = l), where pdata is the data probability density
function over the image and the labels, pg is the probability density function induced by the random
variable Z, and X is the image random variable.
The following corollary states that the trained causal implicit generative model for the labels
concatenated with CausalGAN is a causal implicit generative model for the labels and image.
Corollary 1. Suppose C : Z1 → L is a causal implicit generative model for the causal graph
D = (V, E) where V is the set of image labels and the observational joint distribution over
these labels is strictly positive. Let G : L × Z2 → I be the class conditional generator that can
sample from the image distribution conditioned on the given label combination L ∈ L. Then
G(C(Z1 ), Z2 ) is a causal implicit generative model for the causal graph D0 = (V ∪ {Image}, E ∪
{(V1 , Image), (V2 , Image), . . . , (Vn , Image)}).
In words, the corollary states the following: Consider a causal graph D0 on the image labels
and the image variable, where every label causes the image. Then combining an implicit causal
generative model for the induced subgraph on the labels with a conditional generative model for the
image given the labels yields a causal implicit generative model for D0 .
Our contributions are as follows:
• We observe that adversarial training can be used after simply structuring the generator
architecture based on the causal graph to train a causal implicit generative model.
• We empirically show how simple GAN training can be adapted using WassersteinGAN to learn
a graph-structured generative model that outputs essentially discrete² labels.
• We consider the problem of conditional and interventional sampling of images given a causal
graph over binary labels. We propose a two-stage procedure to train a causal implicit generative
model over the binary labels and the image. As part of this procedure, we propose a novel
conditional GAN architecture and loss function. We show that the global optimal generator3
provably samples from the class conditional distributions.
• We propose a natural but nontrivial extension of BEGAN to accept labels: using the same
motivations for margins as in BEGAN [4], we arrive at a "margin of margins" term, which
cannot be neglected. We show empirically that this model, which we call CausalBEGAN,
produces high quality images that capture the image labels.
• We evaluate our causal implicit generative model training framework on the labeled CelebA
data [23]. We show that the combined architecture generates images that can capture both the
observational and interventional distributions over images and labels jointly⁴. We show the
surprising result that CausalGAN and CausalBEGAN can produce high-quality label-consistent
images even for label combinations realized under interventions that never occur during training,
e.g., "woman with mustache".
2
Related Work
Using a generative adversarial network conditioned on the image labels has been proposed before:
In [27], authors propose to extend generative adversarial networks to the setting where there is
extra information, such as labels. The label of the image is fed to both the generator and the
discriminator. This architecture is called conditional GAN. In [7], authors propose a new architecture
called InfoGAN, which attempts to maximize a variational lower bound of mutual information
between the labels given to the generator and the image. In [30], authors propose a new conditional
GAN architecture, which performs well on higher resolution images. A class label is given to the
generator. Image from the dataset is also chosen conditioned on this label. In addition to deciding if
the image is real or fake, the discriminator has to also output an estimate of the class label.
Using causal principles for deep learning and using deep learning techniques for causal inference
has been recently gaining attention. In [26], authors observe the connection between conditional
GAN layers, and structural equation models. Based on this observation, they use CGAN [27] to
learn the causal direction between two variables from a dataset. In [25], the authors propose using a
neural network in order to discover the causal relation between image class labels based on static
images. In [3], authors propose a new regularization for training a neural network, which they call
causal regularization, in order to assure that the model is predictive in a causal sense. In a very
recent work [5], authors point out the connection of GANs to causal generative models. However
they see image as a cause of the neural net weights, and do not use labels.
BiGAN [9] and ALI [10] improve the standard GAN framework to provide the functionality of
learning the mapping from image space to latent space. In CoGAN [22] the authors learn a joint
distribution given samples from marginals by enforcing weight sharing between generators. This
can, for example, be used to learn the joint distribution between image and labels. It is not clear,
however, if this approach will work when the generator is structured via a causal graph. SD-GAN
[8] is an architecture which splits the latent space into "Identity" and "Observation" portions. To
2 Each of the generated labels is sharply concentrated around 0 and 1.
3 Global optimal after the remaining network is trained to optimality.
4 Our code is available at https://github.com/mkocaoglu/CausalGAN
generate faces of the same person, one can then fix the identity portion of the latent code. This
works well for datasets where each identity has multiple observations. Authors in [1] use conditional
GAN of [27] with a one-hot encoded vector that encodes the age interval. A generator conditioned
on this one-hot vector can then be used for changing the age attribute of a face image. Another
application of generative models is in compressed sensing: Authors in [6] give compressed sensing
guarantees for recovering a vector, if the data lies close to the output of a trained generative model.
3
Background
3.1
Causality Basics
In this section, we give a brief introduction to causality. Specifically, we use Pearl’s framework
[31], i.e., structural causal models, which uses structural equations and directed acyclic graphs
between random variables to represent a causal model. We explain how causal principles apply to
our framework through examples. For a more detailed treatment of the subject with more of the
technical details, see [31].
Consider two random variables X, Y . Within the structural causal modeling framework and
under the causal sufficiency assumption5 , X causes Y simply means that there exists a function f
and some unobserved random variable E, independent from X, such that Y = f (X, E). Unobserved
variables are also called exogenous. The causal graph that represents this relation is X → Y . In
general, a causal graph is a directed acyclic graph implied by the structural equations: The parents of
a node in the causal graph represent the causes of that variable. The causal graph can be constructed
from the structural equations as follows: The parents of a variable are those that appear in the
structural equation that determines the value of that variable.
Formally, a structural causal model is a tuple M = (V, E, F, PE (.)) that contains a set of
functions F = {f1 , f2 , . . . , fn }, a set of random variables V = {X1 , X2 , . . . , Xn }, a set of exogenous
random variables E = {E1 , E2 , . . . , En }, and a probability distribution over the exogenous variables
PE 6 . The set of observable variables V has a joint distribution implied by the distributions of E, and
the functional relations F. This distribution is the projection of PE onto the set of variables V and
is shown by PV . The causal graph D is then the directed acyclic graph on the nodes V, such that
a node Xj is a parent of node Xi if and only if Xj is in the domain of fi , i.e., Xi = fi (Xj , S, Ei ),
for some S ⊂ V . The set of parents of variable Xi is shown by P ai . D is then a Bayesian network
for the induced joint probability distribution over the observable variables V. We assume causal
sufficiency: Every exogenous variable is a direct parent of at most one observable variable.
An intervention, is an operation that changes the underlying causal mechanism, hence the
corresponding causal graph. An intervention on Xi is denoted as do(Xi = xi ). It is different
from conditioning on Xi = x in the following way: An intervention removes the connections of
node Xi to its parents, whereas conditioning does not change the causal graph from which data is
sampled. The interpretation is that, for example, if we set the value of Xi to 1, then it is no longer
determined through the function fi (P ai , Ei ). An intervention on a set of nodes is defined similarly.
The joint distribution over the variables after an intervention (post-interventional distribution) can
be calculated as follows: Since D is a Bayesian network for the joint distribution, the observational distribution can be factorized as P(x1, x2, . . . , xn) = ∏i∈[n] Pr(xi | Pai), where the nodes in Pai
are assigned to the corresponding values in {xi }i∈[n] . After an intervention on a set of nodes
XS := {Xi}i∈S, i.e., do(XS = s), the post-interventional distribution is given by ∏i∈[n]∖S Pr(xi | PaSi), where PaSi is the shorthand notation for the following assignment: Xj = xj for Xj ∈ Pai if j ∉ S⁷ and Xj = s(j) if j ∈ S.

5 In a causally sufficient system, every unobserved variable affects no more than a single observed variable.
6 The definition provided here assumes causal sufficiency, i.e., there are no exogenous variables that affect more than one observable variable. Under causal sufficiency, Pearl’s model assumes that the distribution over the exogenous variables is a product distribution, i.e., exogenous variables are mutually independent.
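As a sketch of the truncated factorization above (our own illustration, with made-up probabilities), consider the two-node network X1 → X2 with binary variables: conditioning on the effect changes the belief about the cause, while do(X2 = 1) drops the factor Pr(x2 | x1) and leaves the ancestor's marginal untouched.

```python
# Conditional probability tables for the two-node causal Bayesian
# network X1 -> X2 (both binary); the numbers are illustrative.
p_x1 = {0: 0.7, 1: 0.3}
p_x2_given_x1 = {0: {0: 0.9, 1: 0.1}, 1: {0: 0.2, 1: 0.8}}

def joint(x1, x2):
    # Observational factorization: P(x1, x2) = Pr(x1) * Pr(x2 | x1).
    return p_x1[x1] * p_x2_given_x1[x1][x2]

# Conditioning on the effect X2 = 1 updates our belief about the cause X1:
p_x1_given_x2 = joint(1, 1) / (joint(0, 1) + joint(1, 1))

# The truncated factorization for do(X2 = 1) removes the factor
# Pr(x2 | x1) entirely, so the ancestor X1 keeps its marginal.
p_x1_do_x2 = p_x1[1]

print(round(p_x1_given_x2, 3))  # 0.774
print(p_x1_do_x2)               # 0.3
```

The gap between 0.774 and 0.3 is precisely the difference between observing a mustache and pasting one on.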
In general it is not possible to identify the true causal graph for a set of variables without
performing experiments or making additional assumptions. This is because there are multiple causal
graphs that lead to the same joint probability distribution even for two variables [36]. This paper
does not address the problem of learning the causal graph: We assume the causal graph is given to
us, and we learn a causal model, i.e., the functions and the distributions of the exogenous variables
comprising the structural equations8 . There is significant prior work on learning causal graphs that
could be used before our method, see e.g. [11, 17, 18, 16, 35, 24, 12, 32, 21, 20, 19]. When the true
causal graph is unknown we can use any feasible graph, i.e., any Bayesian network that respects the
conditional independencies present in the data. If only a few conditional independencies are known,
a richer model (i.e., a denser Bayesian network) can be used, although a larger number of functional
relations should be learned in that case. We explore the effect of the used Bayesian network in
Section 8. If the used Bayesian network has edges that are inconsistent with the true causal graph,
our conditional distributions will be correct, but the interventional distributions will be different.
4
Causal Implicit Generative Models
Implicit generative models [28] are used to sample from a probability distribution without an explicit
parameterization. Generative adversarial networks are arguably one of the most successful examples
of implicit generative models. Thanks to an adversarial training procedure, GANs are able to produce
realistic samples from distributions over a very high dimensional space, such as images. To sample
from the desired distribution, one samples a vector from a known distribution, such as Gaussian
or uniform, and feeds it into a feedforward neural network which was trained on a given dataset.
Although implicit generative models can sample from the data distribution, they do not provide the
functionality to sample from interventional distributions. Causal implicit generative models provide
a way to sample from both observational and interventional distributions.
We show that generative adversarial networks can also be used for training causal implicit
generative models. Consider the simple causal graph X → Z ← Y . Under the causal sufficiency
assumption, this model can be written as X = fX (NX ), Y = fY (NY ), Z = fZ (X, Y, NZ ), where
fX , fY , fZ are some functions and NX , NY , NZ are jointly independent variables. The following
simple observation is useful: In the GAN training framework, generator neural network connections
can be arranged to reflect the causal graph structure. Consider Figure 2b. The feedforward neural
networks can be used to represent the functions fX , fY , fZ . The noise terms can be chosen as
independent, complying with the condition that (NX , NY , NZ ) are jointly independent. Hence this
feedforward neural network can be used to represent the causal graph X → Z ← Y if fX , fY , fZ
are within the class of functions that can be represented with the given family of neural networks.
The following proposition is well known in the causality literature. It shows that given the
true causal graph, two causal models that have the same observational distribution have the same
interventional distribution for any intervention.
Proposition 1. Let M1 = (D1 = (V, E), N1 , F1 , PN1 (.)), M2 = (D2 = (V, E), N2 , F2 , QN2 (.)) be
two causal models. If PV (.) = QV (.), then PV (.|do(S)) = QV (.|do(S)).
7
With slight abuse of notation, we use s(j) to represent the value assigned to variable Xj by the intervention rather
than the jth coordinate of s
8
Even when the causal graph is given, there will be many different sets of functions and exogenous noise distributions
that explain the observed joint distribution for that causal graph. We are learning one such model.
[Diagrams: (a) standard generator architecture and the causal graph it represents; (b) generator neural network architecture representing the causal graph X → Z ← Y.]
Figure 2: (a) The causal graph implied by the standard generator architecture, feedforward neural
network. (b) A neural network implementation of the causal graph X → Z ← Y : Each feed forward
neural net captures the function f in the structural equation model V = f (P aV , E).
Proof. Note that D1 and D2 are the same causal Bayesian networks [31]. Interventional distributions
for causal Bayesian networks can be directly calculated from the conditional probabilities and the
causal graph. Thus, M1 and M2 have the same interventional distributions.
We have the following definition, which ties a feedforward neural network with a causal graph:
Definition 1. Let Z = {Z1 , Z2 , . . . , Zm } be a set of mutually independent random variables. A
feedforward neural network G that outputs the vector G(Z) = [G1 (Z), G2 (Z), . . . , Gn (Z)] is called
consistent with a causal graph D = ([n], E), if ∀i ∈ [n], ∃ a set of layers fi such that Gi (Z)
can be written as Gi (Z) = fi ({Gj (Z)}j∈P ai , ZSi ), where P ai are the set of parents of i in D, and
ZSi := {Zj : j ∈ Si } are collections of subsets of Z such that {Si : i ∈ [n]} is a partition of [m].
Based on the definition, we say a feedforward neural network G with output
G(Z) = [G1 (Z), G2 (Z), . . . , Gn (Z)],
(1)
is a causal implicit generative model for the causal model M = (D = ([n], E), N, F, PN (.)) if G is
consistent with the causal graph D and Pr(G(Z) = x) = PV (x), ∀x.
We propose using adversarial training where the generator neural network is consistent with the
causal graph according to Definition 1. This notion is illustrated in Figure 2b.
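A minimal sketch of a generator consistent with the causal graph X → Z ← Y in the sense of Definition 1, with toy one-layer maps standing in for trained neural networks (all names and dimensions here are our own assumptions, not the paper's architecture):

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy "feedforward networks": one affine layer plus tanh each.
Wx = rng.normal(size=(4, 4))   # implements f_X
Wy = rng.normal(size=(4, 4))   # implements f_Y
Wz = rng.normal(size=(4, 12))  # implements f_Z, consuming [X, Y, N_Z]

def f(W, *inputs):
    return np.tanh(W @ np.concatenate(inputs))

def generator(z):
    # z is split into disjoint noise blocks, one per node, mirroring
    # the partition {S_i} of Definition 1.
    n_x, n_y, n_z = z[:4], z[4:8], z[8:12]
    x = f(Wx, n_x)            # X = f_X(N_X)
    y = f(Wy, n_y)            # Y = f_Y(N_Y)
    z_out = f(Wz, x, y, n_z)  # Z = f_Z(X, Y, N_Z)
    return x, y, z_out

x, y, z_out = generator(rng.normal(size=12))
```

Because X reads only its own noise block, replacing its output by a constant (an intervention do(X = x)) leaves the mechanisms for Y and Z intact, which is exactly what sampling from an interventional distribution requires.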
5
Causal Generative Adversarial Networks
Causal implicit generative models can be trained given a causal graph and samples from a joint
distribution. However, for the application of image generation with binary labels, we found it difficult
to simultaneously learn the joint label and image distribution 9 . For these applications, we focus on
dividing the task of learning a causal implicit generative causal model into two subtasks: First, learn
the causal implicit generative model over a small set of variables. Then, learn the remaining set of
variables conditioned on the first set of variables using a conditional generative network. For this
training to be consistent with the causal structure, every node in the first set should come before
any node in the second set with respect to the partial order of the causal graph. We assume that the
problem of generating images based on the image labels inherently contains a causal graph similar to
the one given in Figure 3, which makes it suitable for a two-stage training: First, train a generative
9 Please see the Appendix for our primitive result using this naive attempt.
[Diagram: a causal graph over the labels Smiling, Narrow Eyes, Gender, Gray Hair, Age, and Eyeglasses, with edges into the Image node.]
Figure 3: A plausible causal model for image generation.
model over the labels, then train a generative model for the images conditioned on the labels. As we
show next, our new architecture and loss function (CausalGAN) assures that the optimum generator
outputs the label conditioned image distributions. Under the assumption that the joint probability
distribution over the labels is strictly positive,¹⁰ combining a pretrained causal generative model for
labels with a label-conditioned image generator gives a causal implicit generative model for images.
The formal statement for this corollary is postponed to Section 6.
5.1 Causal Implicit Generative Model for Binary Labels
Here we describe the adversarial training of a causal implicit generative model for binary labels. This
generative model, which we call the Causal Controller, will be used for controlling which distribution
the images will be sampled from when intervened or conditioned on a set of labels. As in Section 4,
we structure the Causal Controller network to sequentially produce labels according to the causal
graph.
Since our theoretical results hold for binary labels, we prefer a generator which can sample from
an essentially discrete label distribution.¹¹ However, the standard GAN training is not suited for
learning a discrete distribution due to the properties of Jensen-Shannon divergence. To be able to
sample from a discrete distribution, we employ WassersteinGAN [2]. We used the model of [15],
where the Lipschitz constraint on the gradient is replaced by a penalty term in the loss.
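The gradient-penalty idea of [15] can be sketched numerically as follows. This is a toy illustration, not the network used in the paper: a one-dimensional scalar "critic" stands in for the real critic network, and a finite-difference quotient stands in for backpropagation. The penalty drives the critic's gradient magnitude toward 1 at points interpolated between real and generated samples.

```python
import random

def critic(x):
    return 3.0 * x  # toy critic with slope 3 (Lipschitz constant 3)

def grad_penalty(critic, x_real, x_fake, eps=1e-5, rng=random):
    a = rng.random()                       # uniform interpolation weight
    x_hat = a * x_real + (1 - a) * x_fake  # point between real and fake
    g = (critic(x_hat + eps) - critic(x_hat - eps)) / (2 * eps)
    return (abs(g) - 1.0) ** 2             # penalize |gradient| != 1

penalty = grad_penalty(critic, x_real=1.0, x_fake=-1.0)
```

For this linear toy critic the gradient is 3 everywhere, so the penalty is (3 − 1)² = 4 regardless of the interpolation point; a 1-Lipschitz critic would incur zero penalty.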
5.2 CausalGAN Architecture
As part of the two-step process proposed in Section 4 of learning a causal implicit generative model
over the labels and the image variables, we design a new conditional GAN architecture to generate the
images based on the labels of the Causal Controller. Unlike previous work, our new architecture and
loss function assures that the optimum generator outputs the label conditioned image distributions.
We use a pretrained Causal Controller which is not further updated.
Labeler and Anti-Labeler: We have two separate labeler neural networks. The Labeler is
trained to estimate the labels of images in the dataset. The Anti-Labeler is trained to estimate the
labels of the images which are sampled from the generator. The label of a generated image is the
label produced by the Causal Controller.
¹⁰ This assumption does not hold in the CelebA dataset: Pr(Male = 0, Mustache = 1) = 0. However, we will see
that the trained model is able to extrapolate to these interventional distributions when the CausalGAN model is not
trained for very long.
¹¹ Ignoring the theoretical considerations, adding noise to transform the labels artificially into continuous targets
also works. However, we observed better empirical convergence with this technique.
[Figure 4 omitted: block diagram in which the Causal Controller produces labels LG that feed the Generator G(Z, LG) and the Anti-Labeler, while dataset images X feed the Discriminator and the Labeler; both labelers output label estimates and the discriminator outputs P(Real).]
Figure 4: CausalGAN architecture.
Generator: The objective of the generator is 3-fold: producing realistic images by competing
with the discriminator, capturing the labels it is given in the produced images by minimizing the
Labeler loss, and avoiding drifting towards unrealistic image distributions that are easy to label by
maximizing the Anti-Labeler loss. For the optimum Causal Controller, Labeler, and Anti-Labeler,
we will later show that the optimum generator samples from the same distribution as the class
conditional images.
The most important distinction of CausalGAN with the existing conditional GAN architectures
is that it uses an Anti-Labeler network in addition to a Labeler network. Notice that the theoretical
guarantee we develop in Section 6 does not hold when the Anti-Labeler network is not used. Intuitively,
the Anti-Labeler loss discourages the generator network from generating only a few typical faces for a
fixed label combination. This is a phenomenon that we call label-conditioned mode collapse. In the
literature, minibatch-features are one of the most popular techniques used to avoid mode-collapse
[34]. However, the diversity within a batch of images due to different label combinations can make
this approach ineffective for combatting label-conditioned mode collapse. We observe that this
intuition carries over to practice.
Loss Functions
We present the results for a single binary label l. For the more general case of d binary labels, we
have an extension where the labeler and the generator losses are slightly modified. We explain
this extension in the supplementary material in Section 10.4 along with the proof that the optimal
generator samples from the class conditional distribution given the d-dimensional label vector.
Let P(l = 1) = ρ. We use p0g (x) := P(G(z, l) = x|l = 0) and p1g (x) := P(G(z, l) = x|l = 1).
G(.), D(.), DLR (.), and DLG (.) are the mappings due to generator, discriminator, Labeler, and
Anti-Labeler respectively.
The generator loss function of CausalGAN contains label loss terms, the GAN loss in [13], and
an added loss term due to the discriminator. With the addition of this term to the generator loss,
we will be able to prove that the optimal generator outputs the class conditional image distribution.
This result will also be true for multiple binary labels.
For a fixed generator, Anti-Labeler solves the following optimization problem:
maxDLG ρEx∼p1g (x) [log(DLG (x))] + (1 − ρ)Ex∼p0g (x) [log(1 − DLG (x))].    (2)
The Labeler solves the following optimization problem:
maxDLR ρEx∼p1data (x) [log(DLR (x))] + (1 − ρ)Ex∼p0data (x) [log(1 − DLR (x))].    (3)
For a fixed generator, the discriminator solves the following optimization problem:
maxD Ex∼pdata (x) [log(D(x))] + Ex∼pg (x) [log((1 − D(x))/D(x))].    (4)
For a fixed discriminator, Labeler and Anti-Labeler, the generator solves the following optimization
problem:
minG Ex∼pdata (x) [log(D(x))] + Ex∼pg (x) [log((1 − D(x))/D(x))]
  − ρEx∼p1g (x) [log(DLR (x))] − (1 − ρ)Ex∼p0g (x) [log(1 − DLR (x))]
  + ρEx∼p1g (x) [log(DLG (x))] + (1 − ρ)Ex∼p0g (x) [log(1 − DLG (x))].    (5)
Remark: Although the authors in [13] have the additive term Ex∼pg (x) [log(1 − D(X))] in the
definition of the loss function, in practice they use the term Ex∼pg (x) [− log(D(X))]. It is interesting
to note that this is the extra loss term we need for the global optimum to correspond to the class
conditional image distributions under a label loss.
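The generator objective (5) can be sketched as an empirical average over mini-batches. The constant toy networks and single-sample batches below are illustrative placeholders for D, DLR, and DLG, not the paper's architecture:

```python
import math

def generator_loss(D, D_LR, D_LG, x_real, x_g1, x_g0, rho):
    """Empirical version of (5). x_g1 / x_g0 are generator samples
    produced with label l=1 / l=0; rho = P(l = 1)."""
    mean = lambda f, xs: sum(f(x) for x in xs) / len(xs)
    # GAN terms: log D on real data, log((1-D)/D) on generated data.
    gan = (mean(lambda x: math.log(D(x)), x_real)
           + mean(lambda x: math.log((1 - D(x)) / D(x)), x_g1 + x_g0))
    # Labeler term (subtracted: generator wants labels captured).
    labeler = (rho * mean(lambda x: math.log(D_LR(x)), x_g1)
               + (1 - rho) * mean(lambda x: math.log(1 - D_LR(x)), x_g0))
    # Anti-Labeler term (added: generator maximizes the Anti-Labeler loss).
    anti = (rho * mean(lambda x: math.log(D_LG(x)), x_g1)
            + (1 - rho) * mean(lambda x: math.log(1 - D_LG(x)), x_g0))
    return gan - labeler + anti

# Toy check with constant networks outputting 0.5:
loss = generator_loss(D=lambda x: 0.5, D_LR=lambda x: 0.5,
                      D_LG=lambda x: 0.5, x_real=[0.0], x_g1=[1.0],
                      x_g0=[2.0], rho=0.5)
```

The sign pattern matches (5): the Labeler terms enter with a minus sign and the Anti-Labeler terms with a plus sign, so the generator minimizes the former and maximizes the latter.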
5.3 CausalBEGAN Architecture
In this section, we propose a simple, but non-trivial extension of BEGAN where we feed image
labels to the generator. One of the central contributions of BEGAN [4] is a control theory-inspired
boundary equilibrium approach that encourages generator training only when the discriminator is
near optimum and its gradients are the most informative. The following observation helps us carry
the same idea to the case with labels: Label gradients are most informative when the image quality
is high. Here, we introduce a new loss and a set of margins that reflect this intuition.
Formally, let L(x) be the average L1 pixel-wise autoencoder loss for an image x, as in BEGAN. Let
Lsq (u, v) be the squared loss term, i.e., ‖u − v‖²₂. Let (x, lx ) be a sample from the data distribution,
where x is the image and lx is its corresponding label. Similarly, G(z, lg ) is an image sample from
the generator, where lg is the label used to generate this image. Denoting the space of images by I,
let G : Rn × {0, 1}m → I be the generator. As a naive attempt to extend the original BEGAN loss
formulation to include the labels, we can write the following loss functions:
LossD = L(x) − L(G(z, lg )) + Lsq (lx , Labeler(x)) − Lsq (lg , Labeler(G(z, lg ))),
LossG = L(G(z, lg )) + Lsq (lg , Labeler(G(z, lg ))).    (6)
However, this naive formulation does not address the use of margins, which is extremely critical
in the BEGAN formulation. Just as a better trained BEGAN discriminator creates more useful
gradients for image generation, a better trained Labeler is a prerequisite for meaningful gradients.
This motivates an additional margin-coefficient tuple (b2 , c2 ), as shown in (7,8).
The generator tries to jointly minimize the two loss terms in the formulation in (6). We empirically
observe that occasionally the image quality will suffer because the images that best exploit the
Labeler network are often not obliged to be realistic, and can be noisy or misshapen. Based on
this, label loss seems unlikely to provide useful gradients unless the image quality remains good.
Therefore we encourage the generator to incorporate label loss only when the image quality margin
b1 is large compared to the label margin b2 . To achieve this, we introduce a new margin of margins
term, b3 . As a result, the margin equations and update rules are summarized as follows, where
λ1 , λ2 , λ3 are learning rates for the coefficients.
b1 = γ1 ∗ L(x) − L(G(z, lg )).
b2 = γ2 ∗ Lsq (lx , Labeler(x)) − Lsq (lg , Labeler(G(z, lg ))).
(7)
b3 = γ3 ∗ relu(b1 ) − relu(b2 ).
c1 ← clip[0,1] (c1 + λ1 ∗ b1 ).
c2 ← clip[0,1] (c2 + λ2 ∗ b2 ).
(8)
c3 ← clip[0,1] (c3 + λ3 ∗ b3 ).
LossD = L(x) − c1 ∗ L(G(z, lg )) + Lsq (lx , Labeler(x)) − c2 ∗ Lsq (lg , Labeler(G(z, lg ))).    (9)
LossG = L(G(z, lg )) + c3 ∗ Lsq (lg , Labeler(G(z, lg ))).
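The margin bookkeeping of (7)–(8) and the convergence measure (10) can be sketched as a single update step. The scalar loss values fed in below are hypothetical; in training they come from the autoencoder and Labeler networks:

```python
relu = lambda v: max(0.0, v)
clip01 = lambda v: min(1.0, max(0.0, v))

def update_margins(state, L_x, L_g, Lsq_real, Lsq_fake,
                   gammas=(0.5, 0.5, 0.5), lams=(0.001, 0.00008, 0.01)):
    """One update of the coefficients (c1, c2, c3) from the margins
    (b1, b2, b3) in (7)-(8)."""
    g1, g2, g3 = gammas
    l1, l2, l3 = lams
    c1, c2, c3 = state
    b1 = g1 * L_x - L_g                   # image-quality margin
    b2 = g2 * Lsq_real - Lsq_fake         # label margin
    b3 = g3 * relu(b1) - relu(b2)         # margin of margins
    return (clip01(c1 + l1 * b1),
            clip01(c2 + l2 * b2),
            clip01(c3 + l3 * b3)), (b1, b2, b3)

state, (b1, b2, b3) = update_margins((0.0, 0.0, 0.0),
                                     L_x=1.0, L_g=0.2,
                                     Lsq_real=0.4, Lsq_fake=0.1)
M_complete = 1.0 + abs(b1) + abs(b2) + abs(b3)  # convergence measure (10)
```

The clip keeps each coefficient in [0, 1], and b3 only turns positive (enabling the label loss via c3) when the image-quality margin b1 dominates the label margin b2, as described above.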
One of the advantages of BEGAN is the existence of a monotonically decreasing scalar which can
track the convergence of the gradient descent optimization. Our extension preserves this property as
we can define
Mcomplete = L(x) + |b1 | + |b2 | + |b3 |,
(10)
and show that Mcomplete decreases progressively during our optimizations. See Figure 21 in the
Appendix.
6 Theoretical Guarantees for CausalGAN
In this section, we show that the best CausalGAN generator for the given loss function outputs the
class conditional image distribution when Causal Controller outputs the real label distribution and
labelers operate at their optimum. We show this result for the case of a single binary label l ∈ {0, 1}.
The proof can be extended to multiple binary variables, which we explain in the supplementary
material in Section 10.4. As far as we are aware, this is the first conditional generative adversarial
network architecture with this guarantee.
6.1 CausalGAN with Single Binary Label
First, we find the optimal discriminator for a fixed generator. Note that in (4), the terms that the
discriminator can optimize are the same as the GAN loss in [13]. Hence the optimal discriminator
behaves the same as in the standard GAN. Then, the following lemma from [13] directly applies to
our discriminator:
Proposition 2 ([13]). For fixed G, the optimal discriminator D is given by
D∗G (x) = pdata (x)/(pdata (x) + pg (x)).    (11)
Second, we identify the optimal Labeler and Anti-Labeler. We have the following lemma:
Lemma 1. The optimum Labeler has DLR (x) = Pr (l = 1|x).
Proof. Please see the supplementary material.
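The pointwise argument behind Lemma 1 can be sketched as follows; this is the standard GAN-style reasoning, not the supplementary material's verbatim derivation:

```latex
% Write the Labeler objective (3) as an integral over x:
\max_{D_{LR}} \int_x \left[\rho\, p^1_{\mathrm{data}}(x)\log D_{LR}(x)
   + (1-\rho)\, p^0_{\mathrm{data}}(x)\log\!\big(1-D_{LR}(x)\big)\right] dx.
% For each fixed x the integrand has the form a\log t + b\log(1-t) with
% a = \rho\, p^1_{\mathrm{data}}(x) and b = (1-\rho)\, p^0_{\mathrm{data}}(x),
% which is maximized at t^* = a/(a+b). Hence
D_{LR}^*(x)
  = \frac{\rho\, p^1_{\mathrm{data}}(x)}
         {\rho\, p^1_{\mathrm{data}}(x) + (1-\rho)\, p^0_{\mathrm{data}}(x)}
  = \Pr(l = 1 \mid x).
```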
Similarly, we have the corresponding lemma for Anti-Labeler:
Lemma 2. For a fixed generator with x ∼ pg , the optimum Anti-Labeler has DLG (x) = Pg (l = 1|x).
Proof. Proof is the same as the proof of Lemma 1.
Define C(G) as the generator loss when the discriminator, Labeler and Anti-Labeler are at their
optimum. Then, we show that the generator that minimizes C(G) outputs class conditional image
distributions.
Theorem 2 (Theorem 1 formal for single binary label). The global minimum of the virtual training
criterion C(G) is achieved if and only if p0g = p0data and p1g = p1data , i.e., if and only if given a label l,
generator output G(z, l) has the class conditional image distribution pdata (x|l).
Proof. Please see the supplementary material.
Now we can show that our two stage procedure can be used to train a causal implicit generative
model for any causal graph where the Image variable is a sink node, captured by the following
corollary:
Corollary 2. Suppose C : Z1 → L is a causal implicit generative model for the causal graph
D = (V, E), where V is the set of image labels and the observational joint distribution over
these labels is strictly positive. Let G : L × Z2 → I be the class conditional GAN that can
sample from the image distribution conditioned on the given label combination L ∈ L. Then
G(C(Z1 ), Z2 ) is a causal implicit generative model for the causal graph D′ = (V ∪ {Image}, E ∪
{(V1 , Image), (V2 , Image), . . . , (Vn , Image)}).
Proof. Please see the supplementary material.
6.2 Extensions to Multiple Labels
In Theorem 2 we show that the optimum generator samples from the class conditional distributions
given a single binary label. Our objective is to extend this result to the case with d binary labels.
First we show that if the Labeler and Anti-Labeler are trained to output 2d scalars, each
interpreted as the posterior probability of a particular label combination given the image, then the
minimizer of C(G) samples from the class conditional distributions given d labels. This result is
shown in Theorem 3 in the supplementary material. However, when d is large, this architecture may
be hard to implement. To resolve this, we propose an alternative architecture, which we implement
for our experiments: We extend the single binary label setup and use cross entropy loss terms for
each label. This requires Labeler and Anti-Labeler to have only d outputs. However, although
we need the generator to capture the joint label posterior given the image, this loss only assures that
the generator captures each label’s posterior distribution, i.e., pdata (li |x) = pg (li |x) (Proposition 3).
This, in general, does not guarantee that the class conditional distributions will be true to the data
distribution. However, for many joint distributions of practical interest, where the set of labels
is completely determined by the image,¹² we show that this guarantee implies that the joint label
posterior will be true to the data distribution, implying that the optimum generator samples from
the class conditional distributions. Please see Section 10.5 for the formal results and more details.
7 Implementation
In this section, we explain the differences between implementation and theory, along with other
implementation details for both CausalGAN and CausalBEGAN.
¹² The dataset we are using arguably satisfies this condition.
7.1 Pretraining Causal Controller for Face Labels
In this section, we explain the implementation details of the Wasserstein Causal Controller for
generating face labels. We used the total variation distance (TVD) between the generator
distribution and the data distribution as a metric to judge the success of the models.
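The TVD between two discrete distributions over label combinations is half the sum of the absolute probability differences. A minimal sketch, with two hypothetical example distributions over pairs of binary labels:

```python
def tvd(p, q):
    """Total variation distance between two discrete distributions,
    given as {outcome: probability} dicts: (1/2) * sum |p - q|."""
    keys = set(p) | set(q)
    return 0.5 * sum(abs(p.get(k, 0.0) - q.get(k, 0.0)) for k in keys)

# Hypothetical generator and data distributions over two binary labels:
p_gen = {(0, 0): 0.5, (0, 1): 0.1, (1, 0): 0.1, (1, 1): 0.3}
p_data = {(0, 0): 0.4, (0, 1): 0.2, (1, 0): 0.1, (1, 1): 0.3}
d = tvd(p_gen, p_data)
```

In practice the two dicts would be empirical histograms over the 2^d label combinations, estimated from generator samples and from the dataset.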
The gradient term used as a penalty is estimated by evaluating the gradient at points interpolated
between the real and fake batches. Interestingly, this Wasserstein approach gives us the opportunity
to train the Causal Controller to output (almost) discrete labels (See Figure 7a). In practice though,
we still found benefit in rounding them before passing them to the generator.
The generator architecture is structured in accordance with Section 4 based on the causal graph
in Figure 5, using uniform noise as exogenous variables and 6 layer neural networks as functions
mapping parents to children. For the training, we used 25 Wasserstein discriminator (critic) updates
per generator update, with a learning rate of 0.0008.
7.2 Implementation Details for CausalGAN
In practice, we use stochastic gradient descent to train our model. We use DCGAN [33], a
convolutional neural net-based implementation of generative adversarial networks, and extend it
into our CausalGAN framework. We have expanded it by adding our Labeler networks, training a
Causal Controller network and modifying the loss functions appropriately. Compared to DCGAN, an
important distinction is that we make 6 generator updates for each discriminator update on average.
The discriminator and labeler networks are concurrently updated in a single iteration.
Notice that the loss terms defined in Section 5.2 contain a single binary label. In practice we
feed a d-dimensional label vector and need a corresponding loss function. We extend the Labeler
and Anti-Labeler loss terms by simply averaging the loss terms for every label. The ith coordinates
of the d-dimensional vectors given by the labelers determine the loss terms for label i. Note that this
is different than the architecture given in Section 10.4, where the discriminator outputs a length-2d
vector and estimates the probabilities of all label combinations given the image. Therefore this
approach does not have the guarantee to sample from the class conditional distributions, if the data
distribution is not restricted. However, for the type of labeled image dataset we use in this work,
where labels seem to be completely determined given an image, this architecture is sufficient to have
the same guarantees. For the details, please see Section 10.5 in the supplementary material.
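The d-label extension used in practice can be sketched as follows: the labelers output a d-dimensional vector of per-label probabilities and the loss is the average of the per-label cross-entropy terms. The names and the example numbers are illustrative:

```python
import math

def labeler_loss(pred, target):
    """Average binary cross-entropy over the d label coordinates.
    pred: d probabilities in (0, 1); target: d binary labels."""
    assert len(pred) == len(target)
    terms = [t * math.log(p) + (1 - t) * math.log(1 - p)
             for p, t in zip(pred, target)]
    return -sum(terms) / len(terms)

# Toy d = 2 example: labeler output [0.9, 0.2] against labels [1, 0].
loss = labeler_loss(pred=[0.9, 0.2], target=[1, 0])
```

This is the d-output architecture described above, as opposed to the length-2^d output of Section 10.4.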
Compared to the theory we have, another difference in the implementation is that we have
swapped the order of the terms in the cross entropy expressions for labeler losses. This has provided
sharper images at the end of the training.
7.3 Usage of Anti-Labeler in CausalGAN
An important challenge that comes with gradient-based training is the use of Anti-Labeler. We
observe the following: In the early stages of the training, the Anti-Labeler can very quickly minimize
its loss if the generator falls into label-conditioned mode collapse. Recall that we define
label-conditioned mode collapse as the problem of generating few typical faces when a label is fixed. For
example, the generator can output the same face when the Eyeglasses variable is set to 1. This helps
the generator easily satisfy the label loss term we add to our loss function. Notice, however, that if
label-conditioned mode collapse occurs, the Anti-Labeler will very easily estimate the true labels given
an image, since it is always provided with the same image. Hence, maximizing the Anti-Labeler loss
in the early stages of the training helps the generator avoid label-conditioned mode collapse with our
loss function.
In the later stages of the training, due to the other loss terms, the generator outputs realistic
images, which drives the Anti-Labeler to act similarly to the Labeler. Thus, maximizing the Anti-Labeler loss and
minimizing Labeler loss become contradicting tasks. This moves the training in a direction where
labels are captured less and less by the generator, hence losing the conditional image generation
property.
Based on these observations, we employ the following loss function for the generator in practice:
LG = LGAN + LLabelerR − e−t/T LLabelerG ,    (12)
where the terms are the GAN loss, the Labeler loss, and the Anti-Labeler loss, respectively (see the
first, second and third lines of (5)). Here t is the number of iterations in the training and T is the time
constant of the exponential decaying coefficient for the Anti-Labeler loss. T = 3000 is chosen for the
experiments, which corresponds to roughly 1 epoch of training.
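The practical loss (12) can be sketched directly; the scalar loss values passed in are hypothetical placeholders for the three loss terms of (5):

```python
import math

def practical_generator_loss(L_gan, L_labeler_R, L_labeler_G, t, T=3000):
    """Generator loss (12): the Anti-Labeler term is faded out with an
    exponentially decaying coefficient e^{-t/T}."""
    return L_gan + L_labeler_R - math.exp(-t / T) * L_labeler_G

early = practical_generator_loss(1.0, 1.0, 1.0, t=0)      # full weight
late = practical_generator_loss(1.0, 1.0, 1.0, t=30000)   # ~faded out
```

At t = 0 the Anti-Labeler term carries full weight (combating label-conditioned mode collapse); after many multiples of T its coefficient is negligible, so the contradiction with the Labeler loss disappears.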
[Figure 5 omitted: a causal graph over the labels Male, Young, Eyeglasses, Mouth Slightly Open, Bald, Mustache, Smiling, Wearing Lipstick, and Narrow Eyes.]
Figure 5: The causal graph used for simulations for both CausalGAN and CausalBEGAN, called
Causal Graph 1 (G1). We also add edges (see Appendix Section 10.6) to form the complete graph
"cG1". We also make use of the graph rcG1, which is obtained by reversing the direction of every
edge in cG1.
7.4 Conditional Image Generation for CausalBEGAN
The labels input to CausalBEGAN are taken from the Causal Controller. We use very few parameter
tunings. We use the same learning rate (0.00008) for both the generator and discriminator and do
1 update of each simultaneously (calculating the gradients for each before applying either). We simply use
γ1 = γ2 = γ3 = 0.5. We do not expect the model to be very sensitive to these parameter values, as we
achieve good performance without hyperparameter tweaking. We do use customized margin learning
rates λ1 = 0.001, λ2 = 0.00008, λ3 = 0.01, which reflect the asymmetry in how quickly the generator
can respond to each margin. For example c2 can have much more "spiky", fast responding behavior
compared to others even when paired with a smaller learning rate, although we have not explored
this parameter space in depth. In these margin behaviors, we observe that the best performing
models have all three margins "active": near 0 while frequently taking small positive values.
8 Results
8.1 Dependence of GAN Behavior on Causal Graph
In Section 4 we showed how a GAN could be used to train a causal implicit generative model by
incorporating the causal graph into the generator structure. Here we investigate the behavior and
[Figure 6 panels omitted: total variation distance vs. iteration (in thousands) for generators structured as Collider, Complete, FC10, FC3, FC5, and Linear, on data from (a) X → Y → Z, (b) X → Y ← Z, (c) X → Y → Z, X → Z.]
Figure 6: Convergence in total variation distance of the generated distribution to the true distribution
for causal implicit generative models, when the generator is structured based on different causal
graphs. (a) Data generated from the line graph X → Y → Z. The best convergence behavior is observed
when the true causal graph is used in the generator architecture. (b) Data generated from the collider
graph X → Y ← Z. Fully connected layers may perform better than the true graph depending
on the number of layers. Collider and complete graphs perform better than the line graph, which
implies the wrong Bayesian network. (c) Data generated from the complete graph X → Y → Z, X → Z.
Fully connected with 3 layers performs the best, followed by the complete graph and fully connected with
5 and 10 layers. The line and collider graphs, which imply the wrong Bayesian network, do not show
convergence behavior.
convergence of causal implicit generative models when the true data distribution arises from another
(possibly distinct) causal graph.
We consider causal implicit generative model convergence on synthetic data whose three features
{X, Y, Z} arise from one of three causal graphs: "line" X → Y → Z, "collider" X → Y ← Z, and
"complete" X → Y → Z, X → Z. For each node, a (randomly sampled once) cubic polynomial in
n + 1 variables computes the value of that node given its n parents and 1 uniform exogenous variable.
We then repeat, creating a new synthetic dataset in this way for each causal model, and report the
averaged results of 20 runs for each model.
For each of these data generating graphs, we compare the convergence of the joint distribution to
the true joint in terms of the total variation distance, when the generator is structured according to
a line, collider, or complete graph. For completeness, we also include generators with no knowledge
of causal structure: {fc3, fc5, fc10} are fully connected neural networks that map uniform random
noise to 3 output variables using either 3, 5, or 10 layers respectively.
The results are given in Figure 6. Data is generated from line causal graph X → Y → Z (left
panel), collider causal graph X → Y ← Z (middle panel), and complete causal graph X → Y →
Z, X → Z (right panel). Each curve shows the convergence behavior of the generator distribution
when the generator is structured based on one of these causal graphs. We expect convergence
when the causal graph used to structure the generator is capable of generating the joint distribution
due to the true causal graph: as long as we use the correct Bayesian network, we should be able
to fit to the true joint. For example, the complete graph can encode all joint distributions. Hence, we
expect the complete graph to work well with all data generation models. Standard fully connected layers
correspond to the causal graph with a latent variable causing all the observable variables. Ideally,
this model should be able to fit to any causal generative model. However, the convergence behavior
of adversarial training across these models is unclear, which is what we are exploring with Figure 6.
For the line graph data X → Y → Z, we see that the best convergence behavior is when the line
graph is used in the generator architecture. As expected, the complete graph also converges well, with
[Figure 7 panels omitted: (a) "Essentially Discrete Range of Causal Controller" — the unit interval binned into 4 unequal bins containing 66%, 2%, 2%, and 30% of samples; (b) "TVD vs. No. of Iters in CelebA Labels" — TVD vs. training step for Causal Graph 1, complete Causal Graph 1, and edge-reversed complete Causal Graph 1.]
Figure 7: (a) A number line of unit length binned into 4 unequal bins along with the percent
of Causal Controller (G1) samples in each bin. Results are obtained by sampling the joint label
distribution 1000 times and forming a histogram of the scalar outputs corresponding to any label.
Note that our Causal Controller output labels are approximately discrete even though the input is a
continuum (uniform). The 4% between 0.05 and 0.95 is not at all uniform and almost zero near 0.5.
(b) Progression of total variation distance between the Causal Controller output with respect to the
number of iterations: Causal Graph 1 is used in the training with Wasserstein loss.
slight delay. Similarly, the fully connected network with 3 layers shows good performance, although
surprisingly fully connected with 5 and 10 layers perform much worse. It seems that although fully
connected networks can encode the joint distribution in theory, in practice with adversarial training, the
number of layers should be tuned to achieve the same performance as using the true causal graph.
Using the wrong Bayesian network, the collider, also yields worse performance.
For the collider graph, surprisingly, using a fully connected generator with 3 or 5 layers shows
the best performance. However, consistent with the previous observation, the number of layers is
important, and using 10 layers gives the worst convergence behavior. Using complete and collider
graphs achieves the same decent performance, whereas the line graph, a wrong Bayesian network,
performs worse than the two.
For the complete graph, fully connected with 3 layers performs the best, followed by fully connected with
5 and 10 layers and the complete graph. As we expect, the line and collider graphs, which cannot encode all the distributions
generated by a complete graph, perform the worst and do not actually show any convergence behavior.
8.2 Wasserstein Causal Controller on CelebA Labels
We test the performance of our Wasserstein Causal Controller on a subset of the binary labels of
the CelebA dataset. We use the causal graph given in Figure 5.
For causal graph training, first we verify that our Wasserstein training allows the generator to
learn a mapping from continuous uniform noise to a discrete distribution. Figure 7a shows where the
samples from this generator, averaged over all the labels in Causal Graph 1, appear on the real line.
The result emphasizes that the proposed Causal Controller outputs an almost discrete distribution:
96% of the samples appear in a 0.05-neighborhood of 0 or 1. Outputs shown are unrounded generator
outputs.
A stronger measure of convergence is the total variational distance (TVD). For Causal Graph 1
(G1), our defined completion (cG1), and cG1 with arrows reversed (rcG1), we show convergence of
TVD with training (Figure 7b). Both cG1 and rcG1 have TVD decreasing to 0, and TVD for G1
asymptotes to around 0.14, which corresponds to the incorrect conditional independence assumptions
that G1 makes. This suggests that any given complete causal graph will lead to a nearly perfect
implicit causal generator over labels, and that partially incorrect causal graphs can still give
reasonable convergence.
8.3 CausalGAN Results
In this section, we train the whole CausalGAN together using a pretrained Causal Controller network.
The results are given in Figures 8a-12a. The difference between intervening and conditioning is clear
through certain features. We implement conditioning through rejection sampling. See [29, 14] for
other works on conditioning for implicit generative models.
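Conditioning by rejection sampling can be sketched as follows: draw label vectors from the Causal Controller and keep only the samples that agree with the conditioned labels. The toy controller below is a hypothetical stand-in (the 0.42 marginal comes from the text; the 0.1 conditional is illustrative):

```python
import random

def toy_controller(rng):
    """Stand-in Causal Controller for the edge Male -> Mustache."""
    male = int(rng.random() < 0.42)
    mustache = int(male == 1 and rng.random() < 0.1)
    return {"Male": male, "Mustache": mustache}

def condition(controller, evidence, n, rng=random.Random(1)):
    """Accept samples matching `evidence`; can be slow when the
    evidence has small probability under the controller."""
    out = []
    while len(out) < n:
        s = controller(rng)
        if all(s[k] == v for k, v in evidence.items()):
            out.append(s)
    return out

samples = condition(toy_controller, {"Mustache": 1}, n=20)
```

Every accepted sample here has Male = 1, mirroring P(Male = 1|Mustache = 1) ≈ 1; intervening with do(Mustache = 1) would instead leave the marginal of Male untouched, which is exactly the contrast illustrated in the figures below.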
(a) Intervening vs Conditioning on Mustache, Top: Intervene Mustache=1, Bottom: Condition
Mustache=1
Figure 8: Intervening/Conditioning on the Mustache label in Causal Graph 1. Since Male → Mustache
in Causal Graph 1, we do not expect do(Mustache = 1) to affect the probability of Male =
1, i.e., P(Male = 1|do(Mustache = 1)) = P(Male = 1) = 0.42. Accordingly, the top row
shows both males and females with mustaches, even though the generator never sees the label
combination {Male = 0, Mustache = 1} during training. The bottom row of images sampled from
the conditional distribution P(.|Mustache = 1) shows only male images because in the dataset
P(Male = 1|Mustache = 1) ≈ 1.
(a) Intervening vs Conditioning on Bald, Top: Intervene Bald=1, Bottom: Condition Bald=1
Figure 9: Intervening/Conditioning on the Bald label in Causal Graph 1. Since Male → Bald in
Causal Graph 1, we do not expect do(Bald = 1) to affect the probability of Male = 1, i.e.,
P(Male = 1|do(Bald = 1)) = P(Male = 1) = 0.42. Accordingly, the top row shows both bald males
and bald females. The bottom row of images sampled from the conditional distribution P(.|Bald = 1)
shows only male images because in the dataset P(Male = 1|Bald = 1) ≈ 1.
8.4 CausalBEGAN Results
In this section, we train CausalBEGAN on CelebA dataset using Causal Graph 1. The Causal
Controller is pretrained with a Wasserstein loss and used for training the CausalBEGAN.
To first empirically justify the need for the margin of margins we introduced in (9) (c3 and b3 ),
we train the same CausalBEGAN model setting c3 = 1, removing the effect of this margin. We show
(a) Intervening vs Conditioning on Wearing Lipstick, Top: Intervene Wearing Lipstick=1,
Bottom: Condition Wearing Lipstick=1
Figure 10: Intervening/Conditioning on the Wearing Lipstick label in Causal Graph 1. Since Male →
Wearing Lipstick in Causal Graph 1, we do not expect do(Wearing Lipstick = 1) to affect the
probability of Male = 1, i.e., P(Male = 1|do(Wearing Lipstick = 1)) = P(Male = 1) = 0.42.
Accordingly, the top row shows both males and females who are wearing lipstick. However, the
bottom row of images sampled from the conditional distribution P(.|Wearing Lipstick = 1) shows
only female images because in the dataset P(Male = 0|Wearing Lipstick = 1) ≈ 1.
(a) Intervening vs Conditioning on Mouth Slightly Open, Top: Intervene Mouth Slightly
Open=1, Bottom: Condition Mouth Slightly Open=1
Figure 11: Intervening/Conditioning on the Mouth Slightly Open label in Causal Graph 1. Since
Smiling → Mouth Slightly Open in Causal Graph 1, we do not expect do(Mouth Slightly Open = 1)
to affect the probability of Smiling = 1, i.e., P(Smiling = 1|do(Mouth Slightly Open = 1)) =
P(Smiling = 1) = 0.48. However, on the bottom row, conditioning on Mouth Slightly Open = 1
increases the proportion of smiling images (0.48 → 0.76 in the dataset), although 10 images may not
be enough to show this difference statistically.
(a) Intervening vs Conditioning on Narrow Eyes, Top: Intervene Narrow Eyes=1, Bottom:
Condition Narrow Eyes=1
Figure 12: Intervening/Conditioning on the Narrow Eyes label in Causal Graph 1. Since Smiling →
Narrow Eyes in Causal Graph 1, we do not expect do(Narrow Eyes = 1) to affect the probability
of Smiling = 1, i.e., P(Smiling = 1|do(Narrow Eyes = 1)) = P(Smiling = 1) = 0.48. However,
on the bottom row, conditioning on Narrow Eyes = 1 increases the proportion of smiling images
(0.48 → 0.59 in the dataset), although 10 images may not be enough to show this difference
statistically.
that the image quality for rare labels deteriorates. Please see Figure 20 in the appendix. Then for
the labels Mustache, Bald, Mouth Slightly Open, and Narrow Eyes, we illustrate the difference between
interventional and conditional sampling when the label is 1. (Figures 13a-16a).
(a) Intervening vs Conditioning on Mustache, Top: Intervene Mustache=1, Bottom: Condition
Mustache=1
Figure 13: Intervening/Conditioning on the Mustache label in Causal Graph 1. Since Male → Mustache
in Causal Graph 1, we do not expect do(Mustache = 1) to affect the probability of Male =
1, i.e., P(Male = 1|do(Mustache = 1)) = P(Male = 1) = 0.42. Accordingly, the top row
shows both males and females with mustaches, even though the generator never sees the label
combination {Male = 0, Mustache = 1} during training. The bottom row of images sampled from
the conditional distribution P(.|Mustache = 1) shows only male images because in the dataset
P(Male = 1|Mustache = 1) ≈ 1.
(a) Intervening vs Conditioning on Bald, Top: Intervene Bald=1, Bottom: Condition Bald=1
Figure 14: Intervening/Conditioning on Bald label in Causal Graph 1. Since M ale → Bald in
Causal Graph 1, we do not expect do(Bald = 1) to affect the probability of M ale = 1, i.e.,
P(M ale = 1|do(Bald = 1)) = P(M ale = 1) = 0.42. Accordingly, the top row shows both bald males
and bald females. The bottom row of images sampled from the conditional distribution P(.|Bald = 1)
shows only male images because in the dataset P(M ale = 1|Bald = 1) ≈ 1.
(a) Intervening vs Conditioning on Mouth Slightly Open, Top: Intervene Mouth Slightly
Open=1, Bottom: Condition Mouth Slightly Open=1
Figure 15: Intervening/Conditioning on Mouth Slightly Open label in Causal Graph 1. Since
Smiling → M outhSlightlyOpen in Causal Graph 1, we do not expect do(Mouth Slightly Open = 1)
to affect the probability of Smiling = 1, i.e., P(Smiling = 1|do(Mouth Slightly Open = 1)) =
P(Smiling = 1) = 0.48. However on the bottom row, conditioning on Mouth Slightly Open = 1
increases the proportion of smiling images (0.48 → 0.76 in the dataset), although 10 images may not
be enough to show this difference statistically.
(a) Intervening vs Conditioning on Narrow Eyes, Top: Intervene Narrow Eyes=1, Bottom:
Condition Narrow Eyes=1
Figure 16: Intervening/Conditioning on Narrow Eyes label in Causal Graph 1. Since Smiling →
Narrow Eyes in Causal Graph 1, we do not expect do(Narrow Eyes = 1) to affect the probability
of Smiling = 1, i.e., P(Smiling = 1|do(Narrow Eyes = 1)) = P(Smiling = 1) = 0.48. However
on the bottom row, conditioning on Narrow Eyes = 1 increases the proportion of smiling images
(0.48 → 0.59 in the dataset), although 10 images may not be enough to show this difference
statistically. As a rare artifact, in the dark image in the third column the generator appears to rule
out the possibility of Narrow Eyes = 0 instead of demonstrating Narrow Eyes = 1.
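The gap between conditioning and intervening described in these captions can be reproduced numerically on a toy structural causal model. The sketch below assumes an illustrative two-node model Smiling → Narrow Eyes with made-up conditional probabilities (only the 0.48 base smiling rate is taken from the text); it is not fitted to CelebA.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000

# Toy SCM (illustrative parameters): Smiling ~ Bernoulli(0.48),
# and Narrow Eyes depends on Smiling.
p_smiling = 0.48
p_ne_given_s = {0: 0.10, 1: 0.25}  # P(NarrowEyes = 1 | Smiling)

smiling = rng.random(n) < p_smiling
narrow = rng.random(n) < np.where(smiling, p_ne_given_s[1], p_ne_given_s[0])

# Conditioning: among samples with NarrowEyes = 1, Smiling is over-represented.
p_cond = smiling[narrow].mean()

# Intervening: do(NarrowEyes = 1) overwrites the mechanism for NarrowEyes,
# leaving its non-descendant Smiling untouched.
narrow_do = np.ones(n, dtype=bool)
p_do = smiling[narrow_do].mean()  # just the marginal P(Smiling = 1)

print(f"P(S=1 | NE=1)     ~ {p_cond:.2f}")  # noticeably above 0.48
print(f"P(S=1 | do(NE=1)) ~ {p_do:.2f}")    # close to 0.48
```

With these toy parameters the conditional probability is about 0.70 while the interventional one stays at the 0.48 marginal, mirroring the qualitative difference in the figures.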
9 Conclusion
We proposed a novel generative model with label inputs. In addition to being able to create samples
conditional on labels, our generative model can also sample from the interventional distributions. Our
theoretical analysis provides provable guarantees about correct sampling under such interventions
and conditionings. The difference between these two sampling mechanisms is the key for causality.
Interestingly, causality leads to generative models that are more creative since they can produce
samples that are different from their training samples in multiple ways. We have illustrated this
point for two models (CausalGAN and CausalBEGAN) and numerous label examples.
Acknowledgements
We thank Ajil Jalal for the helpful discussions.
References
[1] Grigory Antipov, Moez Baccouche, and Jean-Luc Dugelay. Face aging with conditional generative
adversarial networks. In arXiv pre-print, 2017.
[2] Martin Arjovsky, Soumith Chintala, and Léon Bottou. Wasserstein gan. In arXiv pre-print, 2017.
[3] Mohammad Taha Bahadori, Krzysztof Chalupka, Edward Choi, Robert Chen, Walter F. Stewart, and
Jimeng Sun. Causal regularization. In arXiv pre-print, 2017.
[4] David Berthelot, Thomas Schumm, and Luke Metz. Began: Boundary equilibrium generative adversarial
networks. In arXiv pre-print, 2017.
[5] Michel Besserve, Naji Shajarisales, Bernhard Schölkopf, and Dominik Janzing. Group invariance
principles for causal generative models. In arXiv pre-print, 2017.
[6] Ashish Bora, Ajil Jalal, Eric Price, and Alexandros G. Dimakis. Compressed sensing using generative
models. In ICML 2017, 2017.
[7] Xi Chen, Yan Duan, Rein Houthooft, John Schulman, Ilya Sutskever, and Pieter Abbeel. Infogan: Interpretable representation learning by information maximizing generative adversarial nets. In Proceedings
of NIPS 2016, Barcelona, Spain, December 2016.
[8] Chris Donahue, Akshay Balsubramani, Julian McAuley, and Zachary C. Lipton. Semantically decomposing the latent spaces of generative adversarial networks. In arXiv pre-print, 2017.
[9] Jeff Donahue, Philipp Krähenbühl, and Trevor Darrell. Adversarial feature learning. In ICLR, 2017.
[10] Vincent Dumoulin, Ishmael Belghazi, Ben Poole, Olivier Mastropietro, Alex Lamb, Martin Arjovsky,
and Aaron Courville. Adversarially learned inference. In ICLR, 2017.
[11] Frederick Eberhardt. Causation and Intervention. Ph.D. thesis, 2007.
[12] Jalal Etesami and Negar Kiyavash. Discovering influence structure. In IEEE ISIT, 2016.
[13] Ian J. Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair,
Aaron Courville, and Yoshua Bengio. Generative adversarial nets. In Proceedings of NIPS 2014, Montreal,
CA, December 2014.
[14] Matthew Graham and Amos Storkey. Asymptotically exact inference in differentiable generative models.
In Aarti Singh and Jerry Zhu, editors, Proceedings of the 20th International Conference on Artificial
Intelligence and Statistics, volume 54 of Proceedings of Machine Learning Research, pages 499–508, Fort
Lauderdale, FL, USA, 20–22 Apr 2017. PMLR.
[15] Ishaan Gulrajani, Faruk Ahmed, Martin Arjovsky, Vincent Dumoulin, and Aaron Courville. Improved
training of wasserstein gans. In arXiv pre-print, 2017.
[16] Alain Hauser and Peter Bühlmann. Two optimal strategies for active learning of causal models from
interventional data. International Journal of Approximate Reasoning, 55(4):926–939, 2014.
[17] Patrik O Hoyer, Dominik Janzing, Joris Mooij, Jonas Peters, and Bernhard Schölkopf. Nonlinear causal
discovery with additive noise models. In Proceedings of NIPS 2008, 2008.
[18] Antti Hyttinen, Frederick Eberhardt, and Patrik Hoyer. Experiment selection for causal discovery.
Journal of Machine Learning Research, 14:3041–3071, 2013.
[19] Murat Kocaoglu, Alexandros G. Dimakis, and Sriram Vishwanath. Cost-optimal learning of causal
graphs. In ICML’17, 2017.
[20] Murat Kocaoglu, Alexandros G. Dimakis, Sriram Vishwanath, and Babak Hassibi. Entropic causal
inference. In AAAI’17, 2017.
[21] Ioannis Kontoyiannis and Maria Skoularidou. Estimating the directed information and testing for
causality. IEEE Trans. Inf. Theory, 62:6053–6067, Aug. 2016.
[22] Ming-Yu Liu and Oncel Tuzel. Coupled generative adversarial networks. In Proceedings of NIPS 2016,
Barcelona, Spain, December 2016.
[23] Ziwei Liu, Ping Luo, Xiaogang Wang, and Xiaoou Tang. Deep learning face attributes in the wild. In
Proceedings of International Conference on Computer Vision (ICCV), December 2015.
[24] David Lopez-Paz, Krikamol Muandet, Bernhard Schölkopf, and Ilya Tolstikhin. Towards a learning
theory of cause-effect inference. In Proceedings of ICML 2015, 2015.
[25] David Lopez-Paz, Robert Nishihara, Soumith Chintala, Bernhard Schölkopf, and Léon Bottou. Discovering causal signals in images. In Proceedings of CVPR 2017, Honolulu, HI, July 2017.
[26] David Lopez-Paz and Maxime Oquab. Revisiting classifier two-sample tests. In arXiv pre-print, 2016.
[27] Mehdi Mirza and Simon Osindero. Conditional generative adversarial nets. In arXiv pre-print, 2016.
[28] Shakir Mohamed and Balaji Lakshminarayanan. Learning in implicit generative models. In arXiv
pre-print, 2016.
[29] Christian Naesseth, Francisco Ruiz, Scott Linderman, and David Blei. Reparameterization Gradients
through Acceptance-Rejection Sampling Algorithms. In Aarti Singh and Jerry Zhu, editors, Proceedings
of the 20th International Conference on Artificial Intelligence and Statistics, volume 54 of Proceedings of
Machine Learning Research, pages 489–498, Fort Lauderdale, FL, USA, 20–22 Apr 2017. PMLR.
[30] Augustus Odena, Christopher Olah, and Jonathon Shlens. Conditional image synthesis with auxiliary
classifier gans. In arXiv pre-print, 2016.
[31] Judea Pearl. Causality: Models, Reasoning and Inference. Cambridge University Press, 2009.
[32] Christopher Quinn, Negar Kiyavash, and Todd Coleman. Directed information graphs. IEEE Trans. Inf.
Theory, 61:6887–6909, Dec. 2015.
[33] Alec Radford, Luke Metz, and Soumith Chintala. Unsupervised representation learning with deep
convolutional generative adversarial networks. In arXiv pre-print, 2015.
[34] Tim Salimans, Ian Goodfellow, Wojciech Zaremba, Vicki Cheung, Alec Radford, and Xi Chen. Improved
techniques for training gans. In NIPS’16, 2016.
[35] Karthikeyan Shanmugam, Murat Kocaoglu, Alex Dimakis, and Sriram Vishwanath. Learning causal
graphs with small interventions. In NIPS 2015, 2015.
[36] Peter Spirtes, Clark Glymour, and Richard Scheines. Causation, Prediction, and Search. A Bradford
Book, 2001.
[37] Carl Vondrick, Hamed Pirsiavash, and Antonio Torralba. Generating videos with scene dynamics. In
Proceedings of NIPS 2016, Barcelona, Spain, December 2016.
[38] Lijun Wu, Yingce Xia, Li Zhao, Fei Tian, Tao Qin, Jianhuang Lai, and Tie-Yan Liu. Adversarial neural
machine translation. In arXiv pre-print, 2017.
10 Appendix

10.1 Proof of Lemma 1
The proof follows the same lines as the proof for the optimal discriminator. Consider the objective

ρ E_{x∼p_data^1(x)}[log(D_LR(x))] + (1 − ρ) E_{x∼p_data^0(x)}[log(1 − D_LR(x))]
  = ∫ ρ p_r(x|l = 1) log(D_LR(x)) + (1 − ρ) p_r(x|l = 0) log(1 − D_LR(x)) dx.   (13)

Since 0 < D_LR < 1, the D_LR that maximizes (3) is given by

D*_LR(x) = ρ p_r(x|l = 1) / (p_r(x|l = 1) ρ + p_r(x|l = 0)(1 − ρ)) = ρ p_r(x|l = 1) / p_r(x) = p_r(l = 1|x).   (14)
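As a sanity check of the optimum in (14): for fixed densities at a point x, a brute-force grid search over D recovers exactly the posterior. The densities and prior below are arbitrary illustrative numbers, not taken from the paper.

```python
import numpy as np

# Pointwise check: for fixed p1 = p_r(x|l=1), p0 = p_r(x|l=0) at some x, the map
# D -> rho*p1*log(D) + (1-rho)*p0*log(1-D) is maximized at
# D* = rho*p1 / (rho*p1 + (1-rho)*p0) = p_r(l=1|x).
rho, p1, p0 = 0.3, 0.8, 0.5

def objective(d):
    return rho * p1 * np.log(d) + (1 - rho) * p0 * np.log(1 - d)

grid = np.linspace(1e-4, 1 - 1e-4, 100_000)
d_grid = grid[np.argmax(objective(grid))]
d_star = rho * p1 / (rho * p1 + (1 - rho) * p0)

assert abs(d_grid - d_star) < 1e-3
print(d_star)
```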
10.2 Proof of Theorem 2
Define C(G) as the generator loss when the discriminator, Labeler and Anti-Labeler are at their optimum. p_data, p_r, P_data and P_r are used interchangeably for the data distribution. Then we have

C(G) = E_{x∼p_data(x)}[log(D*(x))] + E_{x∼p_g(x)}[log(1 − D*(x))] − E_{x∼p_g(x)}[log(D*(x))]
     − (1 − ρ) E_{x∼p_g^0(x)}[log(1 − D*_LR(x))] − ρ E_{x∼p_g^1(x)}[log(D*_LR(x))]
     + (1 − ρ) E_{x∼p_g^0(x)}[log(1 − D*_LG(x))] + ρ E_{x∼p_g^1(x)}[log(D*_LG(x))]
   = E_{x∼p_data(x)}[log(D*(x))] + E_{x∼p_g(x)}[log(1 − D*(x))] − E_{x∼p_g(x)}[log(D*(x))]
     − (1 − ρ) E_{x∼p_g^0(x)}[log(P_r(l = 0|x))] − ρ E_{x∼p_g^1(x)}[log(P_r(l = 1|x))]
     + (1 − ρ) E_{x∼p_g^0(x)}[log(P_g(l = 0|x))] + ρ E_{x∼p_g^1(x)}[log(P_g(l = 1|x))]
   = E_{x∼p_data(x)}[log(p_data(x) / (p_data(x) + p_g(x)))] + E_{x∼p_g(x)}[log(p_g(x) / p_data(x))]
     − (1 − ρ) E_{x∼p_g^0(x)}[log(P_r(l = 0|x))] − ρ E_{x∼p_g^1(x)}[log(P_r(l = 1|x))]
     + (1 − ρ) E_{x∼p_g^0(x)}[log(P_g(l = 0|x))] + ρ E_{x∼p_g^1(x)}[log(P_g(l = 1|x))].   (15)

Using Bayes' rule, we can write P(l = 1|x) = P(x|l = 1) ρ / P(x) and P(l = 0|x) = P(x|l = 0)(1 − ρ) / P(x). Then we have the following:

C(G) = −1 + KL(p_r ‖ (p_r + p_g)/2) + KL(p_g ‖ p_r) + H(ρ)
     + (1 − ρ) KL(p_g^0 ‖ p_r^0) + ρ KL(p_g^1 ‖ p_r^1) − (1 − ρ) KL(p_g^0 ‖ p_r) − ρ KL(p_g^1 ‖ p_r)
     + (1 − ρ) E_{x∼p_g^0(x)}[log(p_g^0(x)(1 − ρ) / p_g(x))] + ρ E_{x∼p_g^1(x)}[log(p_g^1(x) ρ / p_g(x))],

where H(ρ) stands for the binary entropy function. Notice that we have

− (1 − ρ) KL(p_g^0 ‖ p_r) − ρ KL(p_g^1 ‖ p_r)
  = −(1 − ρ) ∫ p_g^0(x) log(p_g^0(x)) dx − ρ ∫ p_g^1(x) log(p_g^1(x)) dx
    + (1 − ρ) ∫ p_g^0(x) log(p_r(x)) dx + ρ ∫ p_g^1(x) log(p_r(x)) dx
  = −(1 − ρ) ∫ p_g^0(x) log(p_g^0(x)) dx − ρ ∫ p_g^1(x) log(p_g^1(x)) dx + ∫ p_g(x) log(p_r(x)) dx
  = −(1 − ρ) ∫ p_g^0(x) log(p_g^0(x)) dx − ρ ∫ p_g^1(x) log(p_g^1(x)) dx − KL(p_g ‖ p_r) + ∫ p_g(x) log(p_g(x)) dx.

Also notice that we have

(1 − ρ) E_{x∼p_g^0(x)}[log(p_g^0(x)(1 − ρ) / p_g(x))] + ρ E_{x∼p_g^1(x)}[log(p_g^1(x) ρ / p_g(x))]
  = − ∫ p_g(x) log(p_g(x)) dx − H(ρ) + (1 − ρ) ∫ p_g^0(x) log(p_g^0(x)) dx + ρ ∫ p_g^1(x) log(p_g^1(x)) dx.

Substituting this into the above equation and combining terms, we get

C(G) = −1 + KL(p_r ‖ (p_r + p_g)/2) + (1 − ρ) KL(p_g^0 ‖ p_r^0) + ρ KL(p_g^1 ‖ p_r^1).

Observe that for p_g^0 = p_r^0 and p_g^1 = p_r^1, we have p_g = p_r, yielding C(G) = −1. Finally, since KL divergence is always non-negative, we have C(G) ≥ −1, concluding the proof.
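The final identity can be checked numerically on a discrete toy support. Note that the constant −1 corresponds to taking logarithms base 2; the distributions below are randomly generated stand-ins, and the optimal networks D*, D*_LR, D*_LG are plugged in analytically.

```python
import numpy as np

rng = np.random.default_rng(1)
m = 6  # toy discrete support for x

def norm(v):
    return v / v.sum()

pr0, pr1 = norm(rng.random(m)), norm(rng.random(m))  # p_r(x|l=0), p_r(x|l=1)
pg0, pg1 = norm(rng.random(m)), norm(rng.random(m))  # generator analogues
rho = 0.3
pr = (1 - rho) * pr0 + rho * pr1
pg = (1 - rho) * pg0 + rho * pg1

log2 = np.log2
KL = lambda p, q: p @ log2(p / q)

# Optimal networks: D*(x) = pr/(pr+pg), D_LR*(x) = p_r(l=1|x), D_LG*(x) = p_g(l=1|x).
D, DLR, DLG = pr / (pr + pg), rho * pr1 / pr, rho * pg1 / pg

# C(G) evaluated directly from its definition in (15):
C = (pr @ log2(D) + pg @ log2(1 - D) - pg @ log2(D)
     - (1 - rho) * (pg0 @ log2(1 - DLR)) - rho * (pg1 @ log2(DLR))
     + (1 - rho) * (pg0 @ log2(1 - DLG)) + rho * (pg1 @ log2(DLG)))

# ... agrees with the closed form at the end of the proof.
C_closed = -1 + KL(pr, (pr + pg) / 2) + (1 - rho) * KL(pg0, pr0) + rho * KL(pg1, pr1)
assert np.isclose(C, C_closed)
```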
10.3 Proof of Corollary 2
Since C is a causal implicit generative model for the causal graph D, by definition it is consistent with
the causal graph D. Since in a conditional GAN the generator G is given the noise terms and the labels,
it is easy to see that the concatenated generator neural network G(C(Z1), Z2) is consistent with
the causal graph D′, where D′ = (V ∪ {Image}, E ∪ {(V1, Image), (V2, Image), . . . , (Vn, Image)}).
Assume that C and G are perfect, i.e., they sample from the true label joint distribution and
conditional image distribution. Then the joint distribution over the generated labels and image
is the true distribution since P(Image, Label) = P(Image|Label)P(Label). By Proposition 1, the
concatenated model can sample from the true observational and interventional distributions. Hence,
the concatenated model is a causal implicit generative model for graph D′.
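A minimal sketch of the concatenation G(C(Z1), Z2) used in the corollary, with stand-in functions: `causal_controller` and `image_generator` below are toy stubs (not the trained networks), meant only to show that labels are sampled first by ancestral sampling and then fed to the conditional generator.

```python
import numpy as np

rng = np.random.default_rng(2)

def causal_controller(z):
    # Stand-in for C(Z1): ancestral sampling over a 2-label graph Male -> Mustache.
    male = (z[0] < 0.42).astype(float)
    mustache = ((z[1] < 0.1) & (male == 1)).astype(float)
    return np.stack([male, mustache])

def image_generator(labels, z2):
    # Stand-in for a conditional generator G(labels, Z2): here just a toy
    # deterministic "image" summarizing labels and noise.
    return labels.sum(axis=0) + 0.01 * z2

# Concatenated model G(C(Z1), Z2): sample labels causally, then the image.
z1 = rng.random((2, 5))
z2 = rng.random(5)
labels = causal_controller(z1)
images = image_generator(labels, z2)
print(labels.shape, images.shape)
```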
10.4 CausalGAN Architecture and Loss for Multiple Labels
In this section, we explain the modifications required to extend the proof to the case with multiple
binary labels, or a label variable with more than 2 states in general. p_data, p_r, P_data and P_r are used
interchangeably for the data distribution in the following.
Consider Figure 4 in the main text. The Labeler outputs the scalar D_LR(x) given an image x. With
the given loss function in (3), i.e., when there is a single binary label l, we show in Section 10.1
that the optimum Labeler has D*_LR(x) = p_r(l = 1|X = x). We first extend the Labeler objective as
follows: suppose we have d binary labels. Then we allow the Labeler to output a 2^d-dimensional
vector D_LR(x), where D_LR(x)[i] is the ith coordinate of this vector. The Labeler then solves the
following optimization problem:

max_{D_LR} Σ_{j=1}^{2^d} ρ_j E_{p_r^j}[log(D_LR(x)[j])],   (16)

where p_r^j(x) := P_r(X = x|l = j) and ρ_j = P_r(l = j). We have the following lemma:
Lemma 3. Consider a Labeler D_LR that outputs the 2^d-dimensional vector D_LR(x) such that
Σ_{j=1}^{2^d} D_LR(x)[j] = 1, where x ∼ p_r(x, l). Then the optimum Labeler with respect to the loss in (16)
has D*_LR(x)[j] = p_r(l = j|x).
Proof. Suppose p_r(l = j|x) = 0 for a set of (label, image) combinations. Then p_r(x, l = j) = 0,
hence these label combinations do not contribute to the expectation. Thus, without loss of generality,
we can consider only the combinations with strictly positive probability. We can also restrict
our attention to the functions D_LR that are strictly positive on these (label, image) combinations;
otherwise the loss becomes infinite, whereas, as we will show, a finite loss is achievable. Consider the
vector D_LR(x) with coordinates D_LR(x)[j], j ∈ [2^d]. Introduce the discrete random variable
Z_x ∈ [2^d] with P(Z_x = j) = D_LR(x)[j]. The Labeler loss can then be written as

min −E_{(x,l)∼p_r(x,l)}[log(P(Z_x = l))]   (17)
  = min E_{x∼p_r(x)}[KL(L_x ‖ Z_x) + H(L_x)],   (18)

where L_x is the discrete random variable such that P(L_x = j) = P_r(l = j|x). H(L_x) is the Shannon
entropy of L_x, and it only depends on the data. Since KL divergence is non-negative, the loss is
lower bounded by E_{x∼p_r(x)}[H(L_x)]. Notice that this minimum can be achieved by satisfying
P(Z_x = j) = P_r(l = j|x). Since KL divergence is zero if and only if the two random variables have
the same distribution, this is the unique optimum, i.e., D*_LR(x)[j] = P_r(l = j|x).
The lemma above simply states that the optimum Labeler network will give the posterior
probability of a particular label combination, given the observed image. In practice, the constraint
that the coordinates sum to 1 could be satisfied by using a softmax function in the implementation.
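Both points — the sum-to-one constraint via a softmax head, and the fact that the per-image Labeler loss is minimized exactly at the posterior (Gibbs' inequality) — can be illustrated with a few lines of NumPy; the dimensions and random candidates below are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(3)

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

k = 8  # 2^d label combinations for d = 3
q = softmax(rng.normal(size=k))  # plays the role of the posterior p_r(l = j | x)

def cross_entropy(q, z):
    # Per-image Labeler loss: -sum_j q[j] log z[j]
    return -(q @ np.log(z))

# A softmax head automatically satisfies the sum-to-one constraint ...
z = softmax(rng.normal(size=k))
assert np.isclose(z.sum(), 1.0)

# ... and the loss is minimized exactly when the output equals the posterior.
best = cross_entropy(q, q)
for _ in range(1000):
    assert best <= cross_entropy(q, softmax(rng.normal(size=k))) + 1e-12
```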
Next, we have the corresponding loss function and lemma for the Anti-Labeler network. The
Anti-Labeler solves the following optimization problem:

max_{D_LG} Σ_{j=1}^{2^d} ρ_j E_{p_g^j}[log(D_LG(x)[j])],   (19)

where p_g^j(x) := P(G(z, l) = x|l = j) and ρ_j = P(l = j). We have the following lemma:
Lemma 4. The optimum Anti-Labeler has D*_LG(x)[j] = P_g(l = j|x).
Proof. The proof is the same as the proof of Lemma 3, since Anti-Labeler does not have control
over the joint distribution between the generated image and the labels given to the generator, and
cannot optimize the conditional entropy of labels given the image under this distribution.
For a fixed discriminator, Labeler and Anti-Labeler, the generator solves the following optimization
problem:

min_G E_{x∼p_data(x)}[log(D(x))] + E_{x∼p_g(x)}[log((1 − D(x)) / D(x))]
    − Σ_{j=1}^{2^d} ρ_j E_{x∼p_g^j(x)}[log(D_LR(X)[j])]
    + Σ_{j=1}^{2^d} ρ_j E_{x∼p_g^j(x)}[log(D_LG(X)[j])].   (20)
We then have the following theorem, which shows that the optimal generator samples from the class
conditional image distributions given a particular label combination:

Theorem 3 (Theorem 1, formal version for multiple binary labels). Define C(G) as the generator loss
when the discriminator, Labeler and Anti-Labeler are at the optimum obtained from (20). The global
minimum of the virtual training criterion C(G) is achieved if and only if p_g^j = p_data^j, ∀j ∈ [2^d], i.e., if
and only if, given a d-dimensional label vector l, the generator samples from the class conditional image
distribution: P(G(z, l) = x) = p_data(x|l).
Proof. Substituting the optimum values for the Discriminator, Labeler and Anti-Labeler networks,
we get the virtual training criterion C(G) as

C(G) = E_{x∼p_data(x)}[log(D*(x))] + E_{x∼p_g(x)}[log(1 − D*(x))] − E_{x∼p_g(x)}[log(D*(x))]
     − Σ_{j=1}^{2^d} ρ_j E_{x∼p_g^j(x)}[log(D*_LR(x)[j])]
     + Σ_{j=1}^{2^d} ρ_j E_{x∼p_g^j(x)}[log(D*_LG(x)[j])]
   = E_{x∼p_data(x)}[log(p_data(x) / (p_data(x) + p_g(x)))] + E_{x∼p_g(x)}[log(p_g(x) / p_data(x))]
     − Σ_{j=1}^{2^d} ρ_j E_{x∼p_g^j(x)}[log(p_r(l = j|X = x))]
     + Σ_{j=1}^{2^d} ρ_j E_{x∼p_g^j(x)}[log(p_g(l = j|X = x))].   (21)

Using Bayes' rule, we can write P(l = j|x) = P(x|l = j) ρ_j / P(x). Then we have the following:

C(G) = E_{x∼p_data(x)}[log(p_data(x) / (p_data(x) + p_g(x)))] + E_{x∼p_g(x)}[log(p_g(x) / p_data(x))]
     − Σ_{j=1}^{2^d} ρ_j E_{x∼p_g^j(x)}[log(p_r^j(x) ρ_j / p_r(x))]
     + Σ_{j=1}^{2^d} ρ_j E_{x∼p_g^j(x)}[log(p_g^j(x) ρ_j / p_g(x))]
   = −1 + KL(p_r ‖ (p_r + p_g)/2) + KL(p_g ‖ p_r) + H(l)
     + Σ_{j=1}^{2^d} ρ_j KL(p_g^j ‖ p_r^j) − Σ_{j=1}^{2^d} ρ_j KL(p_g^j ‖ p_r)
     + Σ_{j=1}^{2^d} ρ_j E_{x∼p_g^j(x)}[log(p_g^j(x) ρ_j / p_g(x))].

Notice that we have

− Σ_{j=1}^{2^d} ρ_j KL(p_g^j ‖ p_r)
  = − Σ_{j=1}^{2^d} ρ_j ∫ p_g^j(x) log(p_g^j(x)) dx − KL(p_g ‖ p_r) + ∫ p_g(x) log(p_g(x)) dx.

Also notice that we have

Σ_{j=1}^{2^d} ρ_j E_{x∼p_g^j(x)}[log(p_g^j(x) ρ_j / p_g(x))]
  = − ∫ p_g(x) log(p_g(x)) dx − H(l) + Σ_{j=1}^{2^d} ρ_j ∫ p_g^j(x) log(p_g^j(x)) dx.

Substituting this into the above equation and combining terms, we get

C(G) = −1 + KL(p_r ‖ (p_r + p_g)/2) + Σ_{j=1}^{2^d} ρ_j KL(p_g^j ‖ p_r^j).

Observe that for p_g^j = p_r^j, ∀j ∈ [2^d], we have p_g = p_r, yielding C(G) = −1. Finally, since KL
divergence is always non-negative, we have C(G) ≥ −1, concluding the proof.
10.5 Alternate CausalGAN Architecture for d Labels

In this section, we provide the theoretical guarantees for the implemented CausalGAN architecture
with d labels. Later we show that these guarantees are sufficient to prove that the globally optimal
generator samples from the class conditional distributions for a practically relevant class of
distributions.
First, let us restate the loss functions more formally. Note that D_LR(x) and D_LG(x) are d-dimensional
vectors. For each coordinate j, the Labeler solves the following optimization problem:

max_{D_LR} ρ_j E_{x∼p_r^{j1}}[log(D_LR(x)[j])] + (1 − ρ_j) E_{x∼p_r^{j0}}[log(1 − D_LR(x)[j])],   (22)

where p_r^{j1}(x) := P(X = x|l_j = 1), p_r^{j0}(x) := P(X = x|l_j = 0) and ρ_j = P(l_j = 1). For a fixed
generator, the Anti-Labeler solves the following optimization problem:

max_{D_LG} ρ_j E_{x∼p_g^{j1}}[log(D_LG(x)[j])] + (1 − ρ_j) E_{x∼p_g^{j0}}[log(1 − D_LG(x)[j])],   (23)

where p_g^{j1}(x) := P_g(x|l_j = 1) and p_g^{j0}(x) := P_g(x|l_j = 0). For a fixed discriminator, Labeler and
Anti-Labeler, the generator solves the following optimization problem:

min_G E_{x∼p_data(x)}[log(D(x))] + E_{x∼p_g(x)}[log((1 − D(x)) / D(x))]
    − (1/d) Σ_{j=1}^{d} ( ρ_j E_{x∼p_g^{j1}(x)}[log(D_LR(X)[j])] + (1 − ρ_j) E_{x∼p_g^{j0}(x)}[log(1 − D_LR(X)[j])] )
    + (1/d) Σ_{j=1}^{d} ( ρ_j E_{x∼p_g^{j1}(x)}[log(D_LG(X)[j])] + (1 − ρ_j) E_{x∼p_g^{j0}(x)}[log(1 − D_LG(X)[j])] ).   (24)
We have the following proposition, which characterizes the optimum generator for the optimum
Labeler, Anti-Labeler and Discriminator:

Proposition 3. Define C(G) as the generator loss when the discriminator, Labeler and Anti-Labeler
are at the optimum obtained from (24). The global minimum of the virtual training criterion C(G)
is achieved if and only if p_g(x|l_i) = p_r(x|l_i), ∀i ∈ [d], and p_g(x) = p_r(x).

Proof. The proof follows the same lines as the proofs of Theorem 2 and Theorem 3 and is omitted.
Thus we have

p_r(x, l_i) = p_g(x, l_i), ∀i ∈ [d], and p_r(x) = p_g(x).   (25)

However, this does not in general imply p_r(x, l_1, l_2, . . . , l_d) = p_g(x, l_1, l_2, . . . , l_d), which is equivalent
to saying that the generator samples from the class conditional image distributions. To
guarantee correct conditional sampling given all labels, we introduce the following assumption:
we assume that the image x determines all the labels. This assumption is very relevant in practice.
For example, in the CelebA dataset, which we use, the label vector, e.g., whether the person is
male or female, with or without a mustache, can be thought of as a deterministic function of the
image. When this is true, we can write p_r(l_1, l_2, . . . , l_d|x) = p_r(l_1|x) p_r(l_2|x) · · · p_r(l_d|x).
We need the following lemma, where a Kronecker delta function refers to a function that takes
the value 1 at a single point and 0 everywhere else:

Lemma 5. Any discrete joint probability distribution whose marginal probability distributions
are all Kronecker delta functions is the product of these marginals.
Proof. Let δ{x−u} be the Kronecker delta function, which is 1 if x = u and 0 otherwise. Consider
a joint distribution p(X_1, X_2, . . . , X_n) with p(X_i) = δ{X_i − u_i}, ∀i ∈ [n], for some set of elements
{u_i}_{i∈[n]}. We show by contradiction that the joint probability distribution is zero everywhere
except at (u_1, u_2, . . . , u_n). Suppose, for the sake of contradiction, that p(v_1, v_2, . . . , v_n) ≠ 0 for some
v = (v_1, v_2, . . . , v_n) ≠ (u_1, u_2, . . . , u_n). Then ∃ j ∈ [n] such that v_j ≠ u_j, and we can marginalize
the joint distribution as

p(v_j) = Σ_{X_1, ..., X_{j−1}, X_{j+1}, ..., X_n} p(X_1, . . . , X_{j−1}, v_j, X_{j+1}, . . . , X_n) > 0,   (26)

where the inequality is due to the fact that the particular configuration (v_1, v_2, . . . , v_n) must have
contributed to the summation. However, this contradicts the fact that p(X_j) = 0, ∀X_j ≠ u_j.
Hence, p(·) is zero everywhere except at (u_1, u_2, . . . , u_n), where it must be 1.
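A toy instance of the lemma for two binary variables, as a sketch: once the marginal deltas P(X1 = 0) = 1 and P(X2 = 1) = 1 are imposed, non-negativity forces every consistent joint to be the point mass at (0, 1), which is exactly the product of the marginals.

```python
import numpy as np

# Joint over (X1, X2) in {0,1}^2, stored as p[x1, x2].
p = np.zeros((2, 2))

# Marginal delta at X1 = 0: P(X1 = 1) = 0, and p >= 0 forces p[1, x2] = 0.
p[1, :] = 0.0
# Marginal delta at X2 = 1: P(X2 = 0) = 0, and p >= 0 forces p[x1, 0] = 0.
p[:, 0] = 0.0
# All remaining mass must sit on the single point (0, 1).
p[0, 1] = 1.0

assert p.sum() == 1.0
# The joint equals the product of its marginals delta_{X1-0} and delta_{X2-1}.
assert np.array_equal(p, np.outer([1, 0], [0, 1]))
```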
We can now apply the above lemma to the conditional distribution p_g(l_1, l_2, . . . , l_d|x).
Proposition 3 shows that the image distributions and the marginals p_g(l_i|x) are true to the data
distribution by Bayes' rule. Since the vector (l_1, . . . , l_d) is a deterministic function of x by
assumption, the p_r(l_i|x) are Kronecker delta functions, and so are the p_g(l_i|x) by Proposition 3. Thus,
since every marginal p(l_i|x) of the joint p_g(x, l_1, l_2, . . . , l_d) is a Kronecker delta function, the
conditional must be a product distribution by Lemma 5, and we can write
p_g(l_1, l_2, . . . , l_d|x) = p_g(l_1|x) p_g(l_2|x) · · · p_g(l_d|x).
Then we have the following chain of equalities:

p_r(x, l_1, l_2, . . . , l_d) = p_r(l_1, . . . , l_d|x) p_r(x)
  = p_r(l_1|x) p_r(l_2|x) · · · p_r(l_d|x) p_r(x)
  = p_g(l_1|x) p_g(l_2|x) · · · p_g(l_d|x) p_g(x)
  = p_g(l_1, l_2, . . . , l_d|x) p_g(x)
  = p_g(x, l_1, l_2, . . . , l_d).

Thus we also have p_r(x|l_1, l_2, . . . , l_d) = p_g(x|l_1, l_2, . . . , l_d), since p_r(l_1, l_2, . . . , l_d) = p_g(l_1, l_2, . . . , l_d),
concluding the proof that the optimum generator samples from the class conditional image distributions.
10.6 Additional Simulations for Causal Controller
First, we evaluate the effect of using the wrong causal graph on an artificially generated dataset.
Figure 17 shows the scatter plot for the two coordinates of a three dimensional distribution. As we
observe, using the correct graph gives the closest scatter plot to the original data, whereas using the
wrong Bayesian network, the collider graph, results in a very different distribution.
Second, we expand on the causal graphs used for experiments for the CelebA dataset. The graph
Causal Graph 1 (G1) is as illustrated in Figure 5. The graph cG1, which is a completed version of
G1, is the complete graph associated with the ordering: Young, Male, Eyeglasses, Bald, Mustache,
Smiling, Wearing Lipstick, Mouth Slightly Open, Narrow Eyes. For example, in cG1 Male causes
Smiling because Male comes before Smiling in the ordering. The graph rcG1 is associated with the
reverse ordering.
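The completion cG1 described above can be sketched as the complete DAG induced by the total ordering: there is an edge u → v exactly when u precedes v in the ordering.

```python
from itertools import combinations

# Complete DAG associated with the ordering used for cG1 in the text.
ordering = ["Young", "Male", "Eyeglasses", "Bald", "Mustache",
            "Smiling", "Wearing Lipstick", "Mouth Slightly Open", "Narrow Eyes"]

# combinations() preserves order, so each pair is (earlier, later) = (u, v) with u -> v.
edges = {(u, v) for u, v in combinations(ordering, 2)}

assert ("Male", "Smiling") in edges      # Male precedes Smiling in the ordering
assert ("Smiling", "Male") not in edges  # ... and the edge is directed
print(len(edges))  # C(9, 2) = 36 edges
```

Reversing `ordering` before building the edge set gives the graph rcG1 in the same way.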
Next, we check the effect of using an incorrect Bayesian network for the data. The causal graph
G1 generates Male and Young independently, which is incorrect for the data. Comparison of pairwise
distributions in Table 1 demonstrates that for G1 a reasonable approximation to the true distribution
is still learned for {Male, Young} jointly. For cG1 a nearly perfect distributional approximation is
learned. Furthermore, we show that despite this inaccuracy, both graphs G1 and cG1 lead to Causal
[Figure 17: five scatter plots of (X1, X3) samples, panels (a)-(e); both axes range from 0.0 to 1.0.]

Figure 17: Synthetic data experiments: (a) Scatter plot for the actual data. Data is generated using the
causal graph X1 → X2 → X3. (b) Generated distribution when the generator causal model is
X1 → X2 → X3. (c) Generated distribution when the generator causal model is X1 → X2 → X3 with
the extra edge X1 → X3. (d) Generated distribution when the generator causal model is X1 → X2 ← X3.
(e) Generated distribution when the generator is a fully connected last layer of a 5-layer feedforward
neural net.
Label pair     |  Male = 0            |  Male = 1
Young = 0      |  0.14 [0.07] (0.07)  |  0.09 [0.15] (0.15)
Young = 1      |  0.47 [0.51] (0.51)  |  0.29 [0.27] (0.26)
Mustache = 0   |  0.61 [0.58] (0.58)  |  0.34 [0.38] (0.38)
Mustache = 1   |  0.00 [0.00] (0.00)  |  0.04 [0.04] (0.04)

Table 1: Pairwise marginal distribution for select label pairs when the Causal Controller is trained
on G1 (plain text), its completion cG1 [square brackets], and the true pairwise distribution (in
parentheses). Note that G1 treats the Male and Young labels as independent, but does not completely
fail to generate a reasonable (product of marginals) approximation. Also note that when the edge
Young → Male is added, the learned distribution is nearly exact. Both graphs contain the
edge Male → Mustache and so are able to learn that women have no mustaches.
Controllers that never output the label combination {Female,Mustache}, which will be important
later.
The Wasserstein GAN in its original form (with a Lipschitz discriminator) assures convergence in
distribution of the Causal Controller output to the discretely supported distribution of labels. We use
a slightly modified version of Wasserstein GAN with a gradient penalty [15]. We first demonstrate
that the learned outputs actually have "approximately discrete" support. In Figure 7a, we sample the
joint label distribution 1000 times and make a histogram of (all) the scalar outputs corresponding
to any label.

Although Figure 7b demonstrates conclusively good convergence for both graphs, TVD is not
always intuitive. For example, how much can each marginal be off if there are 9 labels and the
TVD is 0.14? To expand upon Figure 2, where we showed that the Causal Controller learns the
correct distribution for a pairwise subset of nodes, here we also show that both Causal Graph 1
(G1) and the completion we define (cG1) allow training of very reasonable marginal distributions
for all labels (Table 2) that are not off by more than 0.03 for the worst label. PD(L = 1) is the
probability that the label is 1 in the dataset, and PG(L = 1) is the probability that the generated
label is (within a small neighborhood of) 1.
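The remark about TVD can be made concrete: the total variation distance of any marginal is upper bounded by the TVD of the joint, so a joint TVD of 0.14 limits how far off each single-label marginal can be. The sketch below checks this on random toy distributions over 3 binary labels (stand-ins for the dataset and Causal Controller label distributions).

```python
import numpy as np
from itertools import product

rng = np.random.default_rng(4)

def norm(v):
    return v / v.sum()

d = 3  # binary labels; joints supported on 2^d label combinations
states = np.array(list(product([0, 1], repeat=d)))
p = norm(rng.random(2 ** d))  # stand-in for the dataset label joint
q = norm(rng.random(2 ** d))  # stand-in for the generated label joint

tvd = lambda a, b: 0.5 * np.abs(a - b).sum()
joint_tvd = tvd(p, q)

# Marginals can never be further apart than the joints: TVD(p_i, q_i) <= TVD(p, q).
for i in range(d):
    mask = states[:, i] == 1
    p_i = np.array([1 - p[mask].sum(), p[mask].sum()])
    q_i = np.array([1 - q[mask].sum(), q[mask].sum()])
    assert tvd(p_i, q_i) <= joint_tvd + 1e-12
print(joint_tvd)
```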
Label, L              PG1(L = 1)   PcG1(L = 1)   PD(L = 1)
Bald                  0.02244      0.02328       0.02244
Eyeglasses            0.06180      0.05801       0.06406
Male                  0.38446      0.41938       0.41675
Mouth Slightly Open   0.49476      0.49413       0.48343
Mustache              0.04596      0.04231       0.04154
Narrow Eyes           0.12329      0.11458       0.11515
Smiling               0.48766      0.48730       0.48208
Wearing Lipstick      0.48111      0.46789       0.47243
Young                 0.76737      0.77663       0.77362

Table 2: Marginal distributions of pretrained Causal Controller labels when the Causal Controller is
trained on Causal Graph 1 (PG1) and its completion (PcG1), where cG1 is the (non-unique) largest
DAG containing G1 (see appendix). The third column lists the actual marginal distributions in the
dataset.
10.7 Additional Simulations for CausalGAN
In this section, we provide additional simulations for CausalGAN. In Figures 18a-18d, we show the
conditional image generation properties of CausalGAN by sweeping a single label from 0 to 1 while
keeping all other inputs/labels fixed. In Figure 19, to examine the degree of mode collapse and show
the image diversity, we show 256 randomly sampled images.
10.8 Additional CausalBEGAN Simulations
In this section, we provide additional simulation results for CausalBEGAN. First we show that
although our third margin term b3 introduces complications, it cannot be ignored. Figure 20
demonstrates the effect of omitting the third margin on the image quality of rare labels.

Furthermore, just as the setup in BEGAN permitted the definition of a scalar "M" that was
monotonically decreasing during training, our definition permits an obvious extension Mcomplete
(defined in (10)) that preserves these properties. See Figure 21 to observe Mcomplete decreasing
monotonically during training.

We also show the conditional image generation properties of CausalBEGAN by using "label
sweeps" that move a single label input from 0 to 1 while keeping all other inputs fixed (Figures 22a-22d).
It is interesting to note that while generators are often implicitly thought of as continuous
functions, the generator in this CausalBEGAN architecture learns a discrete function with respect
to its label input parameters. (Initially there is label interpolation; later in the optimization,
label interpolation becomes more step-function-like (not shown).) Finally, to examine the degree of
mode collapse and show the image diversity, we show a random sampling of 256 images (Figure 23).
(a) Interpolating Bald label
(b) Interpolating Male label
(c) Interpolating Young label
(d) Interpolating Eyeglasses label
Figure 18: The effect of interpolating a single label for CausalGAN, while keeping the noise terms
and other labels fixed.
Figure 19: Diversity of the proposed CausalGAN showcased with 256 samples.
Figure 20: Omitting the non-obvious margin b3 = γ3 · relu(b1) − relu(b2) results in poorer image
quality, particularly for rare labels such as Mustache. We compare samples from two interventional
distributions. Samples from P(.|do(Mustache = 1)) (top) have much poorer image quality compared
to those under P(.|do(Mustache = 0)) (bottom).
[Figure 21 plot: Mcomplete vs. training step; Mcomplete decreases from about 0.30 to 0.10 over 200000 steps.]
Figure 21: Convergence of CausalBEGAN captured through the parameter Mcomplete .
(a) Interpolating Bald label
(b) Interpolating Male label
(c) Interpolating Young label
(d) Interpolating Eyeglasses label
Figure 22: The effect of interpolating a single label for CausalBEGAN, while keeping the noise terms
and other labels fixed. Although most labels are properly captured, we see that the Eyeglasses label
is not.
Figure 23: Diversity of CausalBEGAN showcased with 256 samples.
Figure 24: Failed Image generation for simultaneous label and image generation after 20k steps.
10.9 Directly Training Labels+Image Fails
In this section, we present the result of attempting to jointly train an implicit causal generative model for labels and the image. This approach treats the image as part of the causal graph. It is not clear how best to feed both labels and the image to the discriminator, but one way is to simply encode each label as a constant image in an additional channel. We tried this for Causal Graph 1 and observed that image generation is not learned (Figure 24). One hypothesis is that the discriminator focuses on the labels without providing useful gradients for image generation.
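The label-as-channel encoding described above can be sketched as follows; the function name and tensor shapes are illustrative assumptions, not the exact implementation used in the experiments.

```python
import numpy as np

def concat_label_channels(images, labels):
    """Append one constant channel per binary label to each image.

    images: (batch, H, W, C) array; labels: (batch, L) array of 0/1 values.
    Each label is broadcast to a constant (H, W) plane, so the
    discriminator sees labels and pixels in a single tensor.
    """
    b, h, w, _ = images.shape
    # (batch, L) -> (batch, H, W, L), constant over the spatial dimensions
    planes = np.broadcast_to(labels[:, None, None, :],
                             (b, h, w, labels.shape[1]))
    return np.concatenate([images, planes], axis=-1)
```

A 64x64 RGB image with 10 binary labels would thus become a 13-channel input.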
arXiv:1207.1789v1 [] 7 Jul 2012
Monomial ideals with 3-linear resolutions
Marcel Morales², Abbas Nasrollah Nejad¹, Ali Akbar Yazdan Pour¹,², Rashid Zaare-Nahandi¹
¹ Institute for Advanced Studies in Basic Sciences, P. O. Box 45195-1159, Zanjan, Iran
² Université de Grenoble I, Institut Fourier, Laboratoire de Mathématiques, France
Abstract
In this paper, we study the Castelnuovo-Mumford regularity of square-free monomial ideals generated in degree 3. We define some operations on the clutters associated to such ideals and prove that the regularity is preserved under these operations. We apply the operations to introduce some classes of ideals with linear resolutions, and we also show that no clutter corresponding to a triangulation of the sphere has a linear resolution, while every proper sub-clutter of it does.
1 Introduction
Let S = K[x1, . . . , xn] be the polynomial ring over a field K and let I be a homogeneous ideal of S. Computing the Castelnuovo-Mumford regularity of I, or even proving that I has a linear resolution, is difficult in general. It is known that a monomial ideal has a d-linear resolution if and only if its polarization, which is a square-free monomial ideal, has a d-linear resolution. Therefore, the classification of monomial ideals with linear resolution is equivalent to the classification of square-free monomial ideals with linear resolution. In this subject, one of the fundamental results is the Eagon-Reiner theorem, which says that the Stanley-Reisner ideal of a simplicial complex has a linear resolution if and only if its Alexander dual is Cohen-Macaulay.
The problem of the existence of a 2-linear resolution was completely solved by Fröberg [Fr] (see also [Mo]). An ideal of S generated by square-free monomials of degree 2 can be regarded as the edge ideal of a graph. Fröberg proved that the edge ideal of a finite simple graph G has a linear resolution if and only if the complementary graph Ḡ of G is chordal, i.e., Ḡ has no induced cycle of length more than 3.
Other combinatorial objects corresponding to square-free monomial ideals are clutters, which are special cases of hypergraphs. Let [n] = {1, . . . , n}. A clutter C on a vertex set [n] is a set of subsets of [n] (called circuits of C) such that if e1 and e2 are distinct circuits, then e1 ⊄ e2. A d-circuit is a circuit with d vertices, and a clutter is called d-uniform if every circuit is a d-circuit. To a clutter C with circuits {e1, . . . , em} one associates the ideal generated by the monomials xej for j = 1, . . . , m, which is called the circuit ideal of C and denoted by I(C). One says that a d-uniform clutter C has a linear resolution if the circuit ideal of the complementary clutter C̄ has a d-linear resolution. Trying to generalize Fröberg's result to d-uniform clutters (d > 2), several mathematicians, including E. Emtander [E] and R. Woodroofe [W], have defined notions of chordal clutters and proved that any d-uniform chordal clutter has a linear resolution. These results are one-sided; that is, there are non-chordal d-uniform clutters with linear resolution.
In the present paper, we introduce some reduction processes on 3-uniform clutters which do not change the regularity of the minimal resolution. We then construct a class of 3-uniform clutters which have a linear resolution and a class of 3-uniform clutters which do not.
MSC 2010: Primary 13C14, 13D02; Secondary 13F55.
Key words: minimal free resolution, linear resolution, uniform clutter, monomial ideal, regularity.
Some of the results of this paper have been conjectured after explicit computations performed by the
computer algebra systems Singular [Si] and CoCoA [Co].
2 Preliminaries
Let K be a field, S = K[x1, . . . , xn] be the polynomial ring over K with the standard grading and m = (x1, . . . , xn) be the irrelevant maximal ideal of S.
We quote the following well-known results that will be used in this paper.
Theorem 2.1 (Grothendieck, [St, Theorem 6.3]). Let M be a finitely generated S-module, t = depth(M) and d = dim(M). Then H^i_m(M) ≠ 0 for i = t and i = d, and H^i_m(M) = 0 for i < t and i > d.
Corollary 2.2. Let M be a finitely generated S-module. Then M is Cohen-Macaulay if and only if H^i_m(M) = 0 for all i < dim M.
Lemma 2.3. Let S = K[x1, . . . , xn, y1, . . . , ym] be the polynomial ring and let I be an ideal in K[y1, . . . , ym]. Then
depth S/((x1 · · · xn)I) = depth S/I.
Definition 2.4 (Alexander duality). For a square-free monomial ideal I = (M1 , . . . , Mq ) ⊂ K[x1 , . . . , xn ],
the Alexander dual I ∨ , of I is defined to be
I ∨ = PM1 ∩ · · · ∩ PMq
where PMi is the prime ideal generated by {xj : xj | Mi}.
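Since the generators are square-free, I∨ can be computed combinatorially: a square-free monomial lies in every PMi exactly when its support meets every Mi, so the minimal generators of I∨ correspond to minimal vertex covers of the generator supports. A brute-force sketch (the function name is ours, for illustration only):

```python
from itertools import chain, combinations

def alexander_dual_gens(gens, n):
    """Supports of the minimal generators of the Alexander dual I∨ of a
    square-free monomial ideal, given the supports of its generators
    (subsets of {1..n}).  A square-free monomial x_T lies in
    I∨ = P_{M_1} ∩ ... ∩ P_{M_q} iff T meets every M_i, i.e. T is a
    vertex cover; the minimal generators are the minimal covers."""
    subsets = chain.from_iterable(
        combinations(range(1, n + 1), k) for k in range(1, n + 1))
    covers = [set(T) for T in subsets
              if all(set(T) & set(M) for M in gens)]
    return [T for T in covers if not any(U < T for U in covers)]
```

For example, for I = (x1 x2, x2 x3) the dual is I∨ = (x1, x2) ∩ (x2, x3) = (x2, x1 x3).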
Definition 2.5. Let I be a non-zero homogeneous ideal of S. For every i ∈ N one defines
t^S_i(I) = max{j : β^S_{i,j}(I) ≠ 0},
where β^S_{i,j}(I) is the (i, j)-th graded Betti number of I as an S-module. The Castelnuovo-Mumford regularity of I is given by
reg(I) = sup{t^S_i(I) − i : i ∈ Z}.
We say that the ideal I has a d-linear resolution if I is generated by homogeneous polynomials of degree d and β^S_{i,j}(I) = 0 for all j ≠ i + d and i ≥ 0. For an ideal with a d-linear resolution, the Castelnuovo-Mumford regularity equals d.
Theorem 2.6 (Eagon-Reiner [ER, Theorem 3]). Let I be a square-free monomial ideal in S = K[x1, . . . , xn]. Then I has a q-linear resolution if and only if S/I∨ is Cohen-Macaulay of dimension n − q.
Theorem 2.7 ([T, Theorem 2.1]). Let I be a square-free monomial ideal in S = K[x1, . . . , xn] with dim S/I ≤ n − 2. Then
dim S/I∨ − depth S/I∨ = reg(I) − indeg(I).
Remark 2.8. Let I, J be square-free monomial ideals generated by elements of degree d ≥ 2 in S = K[x1, . . . , xn]. By Theorem 2.7, we have
reg(I) = n − depth S/I∨,    reg(J) = n − depth S/J∨.
Therefore, reg(I) = reg(J) if and only if depth S/I∨ = depth S/J∨.
Definition 2.9 (Clutter). Let [n] = {1, . . . , n}. A clutter C on a vertex set [n] is a set of subsets of [n] (called circuits of C) such that if e1 and e2 are distinct circuits of C, then e1 ⊄ e2. A d-circuit is a circuit consisting of exactly d vertices, and a clutter is d-uniform if every circuit has exactly d vertices.
For a non-empty clutter C on the vertex set [n], we define the ideal I(C) as
I(C) = (xF : F ∈ C),
and we define I(∅) = 0. Let n, d be positive integers with d ≤ n. We define Cn,d, the maximal d-uniform clutter on [n], as follows:
Cn,d = {F ⊂ [n] : |F| = d}.
If C is a d-uniform clutter on [n], we define C̄, the complement of C, to be
C̄ = Cn,d \ C = {F ⊂ [n] : |F| = d, F ∉ C}.
Frequently in this paper, we take a d-uniform clutter C and consider the square-free ideal I = I(C̄) in the polynomial ring S = K[x1, . . . , xn]. The ideal I is called the circuit ideal.
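These definitions are easy to make concrete. The following sketch (with helper names of our own choosing) computes C̄ and the generator supports of the circuit ideal I(C̄):

```python
from itertools import combinations

def complement_clutter(circuits, n, d):
    """C̄ = C_{n,d} \\ C: all d-subsets of {1..n} not in the clutter."""
    C = {frozenset(F) for F in circuits}
    return {frozenset(F) for F in combinations(range(1, n + 1), d)} - C

def circuit_ideal_supports(circuits, n, d):
    """Supports of the generators x_F of the circuit ideal I(C̄)."""
    return sorted(sorted(F) for F in complement_clutter(circuits, n, d))
```

For the tetrahedron C = {123, 124, 134, 234} on [4] the complement is empty, so I(C̄) = 0.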
Definition 2.10 (Clique). Let C be a d-uniform clutter on [n]. A subset G ⊂ [n] is called a clique in C if every d-subset of G belongs to C.
Remark 2.11. Let C be a d-uniform clutter on [n] and let I = I(C̄) be the circuit ideal. If G is a clique in C and F ∈ C̄, then ([n] \ G) ∩ F ≠ ∅, so that x_{[n]\G} ∈ PF. Hence
x_{[n]\G} ∈ ⋂_{F∈C̄} PF = I∨.
Example 2.12. We show that I(Cn,d) has a linear resolution. Let ∆ be a simplex on [n]. Then clearly I∆ = (0) and K[∆] = K[x1, . . . , xn] is Cohen-Macaulay. It follows from [BH, Exercise 5.1.23] that for any r < n, the skeleton ∆(r) = ⟨F ⊂ [n] : |F| = r + 1⟩ is Cohen-Macaulay. Note that
(I_{∆(r)})∨ = (xF : |F| = n − (r + 1)),
which has a linear resolution by Theorem 2.6. Using this argument for r = n − d − 1, one concludes that (I_{∆(n−d−1)})∨ = I(Cn,d) has a linear resolution.
Definition 2.13 (Simplicial submaximal circuit). Let C be a d-uniform clutter on [n]. A (d − 1)-subset e ⊂ [n] is called a submaximal circuit of C if there exists F ∈ C such that e ⊂ F. The set of all submaximal circuits of C is denoted by E(C). For e ∈ E(C), let N[e] = e ∪ {c ∈ [n] : e ∪ {c} ∈ C} ⊂ [n]. We say that e is a simplicial submaximal circuit if N[e] is a clique in C. In the case of 3-uniform clutters, E(C) is called the edge set and we say simplicial edge instead of simplicial submaximal circuit.
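For small clutters this definition can be transcribed directly (helper names are ours):

```python
from itertools import combinations

def closed_neighborhood(C, e):
    """N[e] = e ∪ {c : e ∪ {c} is a circuit}, for a (d-1)-subset e."""
    e = frozenset(e)
    return set(e) | {c for F in C for c in frozenset(F) - e
                     if e | {c} == frozenset(F)}

def is_simplicial(C, e, d):
    """e is a simplicial submaximal circuit iff N[e] is a clique, i.e.
    every d-subset of N[e] is a circuit of C (Definition 2.13)."""
    C = {frozenset(F) for F in C}
    N = closed_neighborhood(C, e)
    return all(frozenset(G) in C for G in combinations(sorted(N), d))
```

In the tetrahedron every edge is simplicial, since N[e] = {1, 2, 3, 4} is a clique.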
3 Operations on Clutters
In this section, for a clutter C, we introduce some operations, such as changing or removing circuits, which do not change the regularity of the resolution of the circuit ideal. We begin with the following well-known results.
Lemma 3.1. Let M be an R-module. For any submodules A, B, C of M such that B ⊂ C, one has
(A + B) ∩ C = (A ∩ C) + B.    (1)
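The modular law of Lemma 3.1 can be sanity-checked on finite examples, e.g. with subspaces of F₂³ in place of general submodules (a toy instance, not a proof; helper names are ours):

```python
from itertools import product

def span(gens, dim):
    """All F_2-linear combinations of the given 0/1 vectors."""
    vecs = {tuple(0 for _ in range(dim))}
    for cs in product((0, 1), repeat=len(gens)):
        v = tuple(sum(c * g[i] for c, g in zip(cs, gens)) % 2
                  for i in range(dim))
        vecs.add(v)
    return vecs

def plus(A, B, dim):
    """Sum of subspaces: the set of all pairwise sums mod 2."""
    return {tuple((a[i] + b[i]) % 2 for i in range(dim))
            for a in A for b in B}
```

With A, B, C subspaces and B ⊆ C, one checks (A + B) ∩ C == (A ∩ C) + B.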
Theorem 3.2 (Mayer-Vietoris sequence). For any two ideals I1, I2 in a commutative Noetherian local ring (R, m), the short exact sequence
0 → R/(I1 ∩ I2) → R/I1 ⊕ R/I2 → R/(I1 + I2) → 0
gives rise to the long exact sequence
· · · → H^{i−1}_m(R/(I1 + I2)) → H^i_m(R/(I1 ∩ I2)) → H^i_m(R/I1) ⊕ H^i_m(R/I2) → H^i_m(R/(I1 + I2)) → H^{i+1}_m(R/(I1 ∩ I2)) → · · · .
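Exactness of the short exact sequence forces, in each degree d, the dimension identity dim(S/(I1 ∩ I2))_d + dim(S/(I1 + I2))_d = dim(S/I1)_d + dim(S/I2)_d. For monomial ideals this can be checked by counting monomials, since a monomial lies in I1 ∩ I2 iff it lies in both ideals, and in I1 + I2 iff it lies in at least one (a brute-force sketch, helper names ours):

```python
from itertools import combinations_with_replacement

def divides(g, m):
    """Does monomial g (a sorted tuple of variable indices) divide m?"""
    return all(g.count(v) <= m.count(v) for v in set(g))

def member(gens, m):
    return any(divides(g, m) for g in gens)

def mv_dims_agree(I1, I2, n, dmax):
    """Check, in every degree d <= dmax, the dimension identity implied
    by exactness of 0 -> S/(I1∩I2) -> S/I1 ⊕ S/I2 -> S/(I1+I2) -> 0."""
    for d in range(dmax + 1):
        monos = list(combinations_with_replacement(range(n), d))
        q = lambda pred: sum(1 for m in monos if not pred(m))
        lhs = q(lambda m: member(I1, m) and member(I2, m)) \
              + q(lambda m: member(I1, m) or member(I2, m))
        rhs = q(lambda m: member(I1, m)) + q(lambda m: member(I2, m))
        if lhs != rhs:
            return False
    return True
```
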
Lemma 3.3. Let I1, I2 be ideals in a commutative Noetherian local ring (R, m) such that
depth R/I1 ≥ depth R/I2 > depth R/(I1 + I2).
Then depth R/(I1 ∩ I2) = 1 + depth R/(I1 + I2).
Proof. Let r := 1 + depth R/(I1 + I2). Then, for all i < r,
H^{i−1}_m(R/(I1 + I2)) = H^i_m(R/I1) = H^i_m(R/I2) = 0.
Hence, by the Mayer-Vietoris exact sequence
· · · → H^{i−1}_m(R/I1) ⊕ H^{i−1}_m(R/I2) → H^{i−1}_m(R/(I1 + I2)) → H^i_m(R/(I1 ∩ I2)) → H^i_m(R/I1) ⊕ H^i_m(R/I2) → · · ·
we have H^i_m(R/(I1 ∩ I2)) = 0 for all i < r, and H^r_m(R/(I1 ∩ I2)) ≠ 0. So that
depth R/(I1 ∩ I2) = r = 1 + depth R/(I1 + I2).
Lemma 3.4. Let I, I1, I2 be ideals in a commutative Noetherian local ring (R, m) such that I = I1 + I2 and
r := depth R/(I1 ∩ I2) ≤ depth R/I2.
Then, for all i < r − 1, one has
H^i_m(R/I1) ≅ H^i_m(R/I).
Proof. For i < r − 1, our assumption implies that
H^i_m(R/(I1 ∩ I2)) = H^{i+1}_m(R/(I1 ∩ I2)) = H^i_m(R/I2) = 0.
Hence, from the Mayer-Vietoris exact sequence
· · · → H^i_m(R/(I1 ∩ I2)) → H^i_m(R/I1) ⊕ H^i_m(R/I2) → H^i_m(R/I) → H^{i+1}_m(R/(I1 ∩ I2)) → · · ·
we have H^i_m(R/I1) ≅ H^i_m(R/I) for all i < r − 1, as desired.
Notation. For n > 3, let T1,n, T′1,n ⊂ S = K[x1, . . . , xn] denote the ideals
T1,n = ⋂_{2≤i<j≤n} (x1, xi, xj),    T′1,n = ⋂_{2≤i<j≤n} (xi, xj).
Proposition 3.5. For n ≥ 3, let S = K[x1, . . . , xn] be the polynomial ring. Then
(i) T′1,n = (∏_{2≤i≤n, i≠j} xi : 2 ≤ j ≤ n) and T1,n = (x1, ∏_{2≤i≤n, i≠j} xi : 2 ≤ j ≤ n).
(ii) S/T′1,n (resp. S/T1,n) is Cohen-Macaulay of dimension n − 2 (resp. n − 3).
Proof. The assertion is well known, but a direct proof of the primary decomposition of the Alexander dual of T′1,n can be found in [Mo, Example 7].
Let C be a 3-uniform clutter on the vertex set [n]. Clearly, one can consider C as a 3-uniform clutter on [m] for any m ≥ n. However, C̄ (and hence I(C̄)) changes when we consider C on [n] or on [m]. To be more precise, when we pass from [n] to [n + 1], the new generators {x_{n+1} xi xj : 1 ≤ i < j ≤ n} are added to I(C̄). Below, we show that the linearity of the resolution does not change when we pass from [n] to [m].
Lemma 3.6. Let I ⊂ K[x1 , . . . , xn ] be a square-free monomial ideal generated in degree 3 such that
x1 xi xj ∈ I for all 1 < i < j ≤ n. If J = I ∩ K[x2 , . . . , xn ], then reg (I) = reg (J).
Proof. By our assumption, J is an ideal of K[x2, . . . , xn] and
I = J + (x1 xi xj : 1 < i < j ≤ n).
So that I∨ = J∨ ∩ T1,n. By Remark 2.8, it is enough to show that depth S/I∨ = depth S/J∨.
The ideal J∨ is an intersection of prime ideals P such that the set of generators of P is a 3-subset of {x1, . . . , xn}. So that, for all j, ∏_{2≤i≤n, i≠j} xi ∈ J∨. Hence J∨ + T1,n = (x1, J∨) by Proposition 3.5(i). In particular,
depth S/(J∨ + T1,n) = depth S/J∨ − 1.    (2)
By Proposition 3.5 and (2), depth S/T1,n ≥ depth S/J∨ > depth S/(J∨ + T1,n). Hence, by Lemma 3.3 and (2), we have
depth S/I∨ = 1 + depth S/(J∨ + T1,n) = depth S/J∨.
Theorem 3.7. Let C ≠ Cn,d be a d-uniform clutter on [n] and let e be a simplicial submaximal circuit. Let
C′ = C \ e = {F ∈ C : e ⊄ F}
and I = I(C̄), J = I(C̄′). Then reg(I) = reg(J).
Proof. By Remark 2.8, it is enough to show that depth S/I∨ = depth S/J∨. Without loss of generality,
Proof. By Remark 2.8, it is enough to show that depth S/I ∨ = depth S/J ∨ . Without loss of generality,
we may assume that e = {1, . . . , d − 1} and N [e] = {1, . . . , r}.
Since e = {1, . . . , d − 1} is a simplicial submaximal circuit, by Remark 2.11 and Lemma 3.1 we have:
I∨ = (x1, . . . , x_{d−1}, x_{r+1} · · · xn) ∩ ⋂_{F∈C̄, e⊄F} PF = ((x1, . . . , x_{d−1}) ∩ ⋂_{F∈C̄, e⊄F} PF) + (x_{r+1} · · · xn),
J∨ = (x1, . . . , x_{d−1}, xd · · · xn) ∩ ⋂_{F∈C̄, e⊄F} PF = ((x1, . . . , x_{d−1}) ∩ ⋂_{F∈C̄, e⊄F} PF) + (xd · · · xn).
Since
((x1, . . . , x_{d−1}) ∩ ⋂_{F∈C̄, e⊄F} PF) ∩ (x_{r+1} · · · xn) = (x1 x_{r+1} · · · xn, . . . , x_{d−1} x_{r+1} · · · xn),
((x1, . . . , x_{d−1}) ∩ ⋂_{F∈C̄, e⊄F} PF) ∩ (xd · · · xn) = (x1 xd · · · xn, . . . , x_{d−1} xd · · · xn)
both have depth equal to n − (d − 1), by Lemma 3.4 we have:
H^i_m(S/I∨) ≅ H^i_m(S/((x1, . . . , x_{d−1}) ∩ ⋂_{F∈C̄, e⊄F} PF)) ≅ H^i_m(S/J∨)  for all i < n − d.    (3)
Since dim S/I∨ = dim S/J∨ = n − d, equation (3) implies that depth S/I∨ = depth S/J∨.
For a d-uniform clutter C, if there exists only one circuit F ∈ C containing the submaximal circuit e ∈ E(C), then clearly e is a simplicial submaximal circuit. Hence we have the following result.
Corollary 3.8. Let C be a d-uniform clutter on [n] and let I = I(C̄) be the circuit ideal. If F is the only circuit containing the submaximal circuit e, then reg(I) = reg(I + (xF)).
Let C be a 3-uniform clutter on [n] such that {1, 2, 3}, {1, 2, 4}, {1, 3, 4}, {2, 3, 4} ∈ C. If there exists no other circuit containing e = {1, 2}, then e is a simplicial edge. Hence by Theorem 3.7 we have the following result.
Theorem 3.9. Let C be a 3-uniform clutter on [n] and let I = I(C̄) be the circuit ideal. Assume that {1, 2, 3}, {1, 2, 4}, {1, 3, 4}, {2, 3, 4} ∈ C and that there exists no other circuit containing {1, 2}. If J = I + (x1 x2 x3, x1 x2 x4), then reg(I) = reg(J).
E. Emtander [E] introduced generalized chordal clutters: d-uniform clutters obtained inductively as follows:
• Cn,d is a generalized chordal clutter.
• If G is a generalized chordal clutter, then so is C = G ∪_{Ci,d} Cn,d for all 0 ≤ i < n.
• If G is generalized chordal and V ⊂ V(G) is a finite set with |V| = d such that at least one element of {F ⊂ V : |F| = d − 1} is not a subset of any element of G, then G ∪ V is generalized chordal.
R. Woodroofe [W] defined a simplicial vertex in a d-uniform clutter to be a vertex v such that, whenever v belongs to two circuits e1 and e2, there is another circuit contained in (e1 ∪ e2) \ {v}. He calls a clutter chordal if every minor of the clutter has a simplicial vertex.
Remark 3.10. Let C be the class of 3-uniform clutters which can be transformed into the empty clutter by a sequence of deletions of simplicial edges. Using Theorem 3.7, it is clear that if C ∈ C, then the ideal I(C̄) has a linear resolution over any field K. It is easy to see that generalized 3-uniform chordal clutters are contained in this class, so they have a linear resolution over any field K. This generalizes Theorem 5.1 of [E]. It is worth noting that C contains the generalized chordal clutters strictly. For example, C = {123, 124, 134, 234, 125, 126, 156, 256} is in C but is not a generalized chordal clutter. It is also easy to see that any 3-uniform clutter which is chordal in the sense of [W] has a simplicial edge.
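Membership in the class C can be decided by exhaustive search over deletion orders. The following sketch (our own code, brute force and only practical for tiny clutters) confirms that the tetrahedron and the example above reduce to the empty clutter, while the hexahedron of Lemma 3.16 below has no simplicial edge at all:

```python
from itertools import combinations
from functools import lru_cache

def reduces_to_empty(circuits):
    """Decide whether a 3-uniform clutter can be transformed into the
    empty clutter by successively deleting simplicial edges
    (membership in the class C of Remark 3.10), by exhaustive search."""
    def simplicial(C, e):
        # N[e] must be a clique: every 3-subset of N[e] is a circuit.
        N = set(e) | {c for F in C if e < F for c in F - e}
        return all(frozenset(G) in C for G in combinations(sorted(N), 3))

    @lru_cache(maxsize=None)
    def search(C):
        if not C:
            return True
        edges = {e for F in C
                 for e in map(frozenset, combinations(sorted(F), 2))}
        # Deleting edge e removes every circuit containing e.
        return any(search(frozenset(F for F in C if not e < F))
                   for e in edges if simplicial(C, e))

    return search(frozenset(frozenset(F) for F in circuits))
```
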
Definition 3.11 (Flip). Let C be a 3-uniform clutter on [n]. Assume that {1, 2, 3}, {1, 2, 4} ∈ C are the only circuits containing {1, 2} and that there is no circuit in C containing {3, 4}. Let
C′ = C ∪ {{1, 3, 4}, {2, 3, 4}} \ {{1, 2, 3}, {1, 2, 4}}.
Then C′ is called a flip of C. Clearly, if C′ is a flip of C, then C is a flip of C′ too.
[Illustration: the two triangles {1, 2, 3}, {1, 2, 4} of C, glued along the edge {1, 2}, are replaced in C′ by the triangles {1, 3, 4}, {2, 3, 4}, glued along {3, 4}.]
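A flip is straightforward to implement; the sketch below (our own helper) checks the two hypotheses of the definition before exchanging the circuits:

```python
def flip(C, a, b, c, d):
    """Apply the flip of Definition 3.11: replace circuits {a,b,c} and
    {a,b,d} (the only circuits containing {a,b}, with no circuit
    containing {c,d}) by {a,c,d} and {b,c,d}."""
    C = {frozenset(F) for F in C}
    e = frozenset({a, b})
    old = {frozenset({a, b, c}), frozenset({a, b, d})}
    assert {F for F in C if e <= F} == old, \
        "{a,b} must lie in exactly these two circuits"
    assert not any({c, d} <= F for F in C), "no circuit may contain {c,d}"
    return (C - old) | {frozenset({a, c, d}), frozenset({b, c, d})}
```

Applying the inverse flip recovers the original clutter, matching the symmetry noted in the definition.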
Corollary 3.12. Let C be a 3-uniform clutter on [n] and let C′ be a flip of C. Then reg I(C̄) = reg I(C̄′).
Proof. With the same notation as in the above definition, let C′′ = C ∪ {{1, 3, 4}, {2, 3, 4}}. Theorem 3.9, applied to {3, 4}, shows that reg I(C̄′′) = reg I(C̄′). Using Theorem 3.9 again, applied to {1, 2}, we conclude that reg I(C̄′′) = reg I(C̄). So that reg I(C̄) = reg I(C̄′), as desired.
For our next theorem, we use the following lemmas.
Lemma 3.13. Let n ≥ 4, let S = K[x1, . . . , xn] be the polynomial ring and let Tn be the ideal
Tn = (x4 · · · xn, x1x2x3 x̂4 · · · xn, . . . , x1x2x3 x4 · · · x̂n).
Then:
(i) Tn = (Tn−1 ∩ (xn)) + (x1x2x3 x4 · · · x̂n).
(ii) depth S/Tn = n − 2.
Proof. (i) This is an easy computation.
(ii) The proof is by induction on n. For n = 4, everything is clear. Let n > 4 and assume that (ii) holds for n − 1.
Clearly, (Tn−1 ∩ (xn)) ∩ (x1x2x3 x4 · · · x̂n) = (x1x2x3 x4 · · · xn), which has depth n − 1. So by Lemmas 3.4 and 2.3 and the induction hypothesis, we have:
depth S/Tn = depth S/Tn−1 = n − 2.
Lemma 3.14. Let C be a 3-uniform clutter on [n] such that F = {1, 2, 3} ∈ C and, for all r > 3,
{1, 2, r}, {1, 3, r}, {2, 3, r} ∉ C.    (4)
Let C1 = C \ F and I = I(C̄), I1 = I(C̄1). Then,
(i) depth S/(I∨ + (x1, x2, x3)) ≥ depth S/I∨ − 1.
(ii) depth S/I1∨ ≥ depth S/I∨.
Proof. Let t := depth S/I∨ ≤ dim S/I∨ = n − 3.
(i) One can easily check that condition (4) is equivalent to saying that, for all r > 3, there exists F ∈ C̄ such that PF ⊂ (x1, x2, x3, xr). So that
I∨ = ⋂_{F∈C̄} PF = ⋂_{F∈C̄} PF ∩ ((x1, x2, x3, x4) ∩ · · · ∩ (x1, x2, x3, xn))
   = ⋂_{F∈C̄} PF ∩ (x1, x2, x3, x4 · · · xn) = I∨ ∩ (x1, x2, x3, x4 · · · xn).
Clearly, x4 · · · xn ∈ I∨. So, from the Mayer-Vietoris long exact sequence
· · · → H^{i−1}_m(S/I∨) ⊕ H^{i−1}_m(S/(x1, x2, x3, x4 · · · xn)) → H^{i−1}_m(S/(I∨ + (x1, x2, x3))) → H^i_m(S/I∨) → · · ·
we have:
H^{i−1}_m(S/(I∨ + (x1, x2, x3))) = 0  for all i < t ≤ n − 3.    (5)
This proves inequality (i).
(ii) Clearly, I1∨ = I∨ ∩ (x1, x2, x3). So, from the Mayer-Vietoris long exact sequence
· · · → H^{i−1}_m(S/(I∨ + (x1, x2, x3))) → H^i_m(S/I1∨) → H^i_m(S/I∨) ⊕ H^i_m(S/(x1, x2, x3)) → · · ·
and (5), we have:
H^i_m(S/I1∨) = 0  for all i < t ≤ n − 3.
Theorem 3.15. Let C be a 3-uniform clutter on [n] such that F = {1, 2, 3} ∈ C and, for all r > 3, {1, 2, r}, {1, 3, r}, {2, 3, r} ∉ C. Let C1 = C \ F, let C′ = C1 ∪ {{0, 1, 2}, {0, 1, 3}, {0, 2, 3}} and let I = I(C̄), J = I(C̄′) be the circuit ideals in the polynomial ring S = K[x0, x1, . . . , xn]. Then reg(I) = reg(J).
[Illustration: the circuit {1, 2, 3} is replaced by the three circuits {0, 1, 2}, {0, 1, 3}, {0, 2, 3} through the new vertex 0.]
Proof. By Remark 2.8, it is enough to show that depth S/I∨ = depth S/J∨.
Let I1 = I(C̄1). Clearly, I1∨ = (x1, x2, x3) ∩ I∨ and
J∨ = I1∨ ∩ (⋂_{i=4}^{n} (x0, x1, xi)) ∩ (⋂_{i=4}^{n} (x0, x2, xi)) ∩ (⋂_{3≤i<j≤n} (x0, xi, xj))
   = (x0, x4 · · · xn, x1x2x3 x̂4 · · · xn, . . . , x1x2x3 x4 · · · x̂n) ∩ I1∨.
Let T be the ideal T = (x0, x4 · · · xn, x1x2x3 x̂4 · · · xn, . . . , x1x2x3 x4 · · · x̂n). Then J∨ = I1∨ ∩ T and, by Lemma 3.13, depth S/T = n − 2. Moreover, our assumption implies that, for all r > 3, there exists F ∈ C̄ such that PF ⊂ (x1, x2, x3, xr). So that
I1∨ + T = (x0, x4 · · · xn, I1∨) = (x0) + (x4 · · · xn, (x1, x2, x3) ∩ ⋂_{F∈C̄} PF)
        = (x0) + ((x1, x2, x3, x4) ∩ · · · ∩ (x1, x2, x3, xn) ∩ ⋂_{F∈C̄} PF) = (x0) + ⋂_{F∈C̄} PF = (x0, I∨).    (6)
Hence, by Lemma 3.14(ii) and (6), depth S/(I1∨ + T) = depth S/I∨ − 1 ≤ depth S/I1∨ − 1. Thus, depth S/T ≥ depth S/I1∨ > depth S/(I1∨ + T). Using Lemma 3.3 and (6),
depth S/J∨ = 1 + depth S/(I1∨ + T) = depth S/I∨.
Lemma 3.16. Let T be a hexahedron. Then the circuit ideal of T does not have a linear resolution. If T′ is the hexahedron with one circuit removed, then the circuit ideal of T′ has a linear resolution.
[Illustration: the hexahedron, a bipyramid over the triangle {1, 2, 3} with apexes 4 and 5.]
Proof. Let I = I(T̄). We know that T̄ = {145, 245, 345, 123}. So that
I∨ = (x1x2x3, x4, x5) ∩ (x1, x2, x3) ⊂ S := K[x1, . . . , x5].
It follows from Theorem 3.2 that H^1_m(S/I∨) ≠ 0. Since dim S/I∨ = 5 − 3 = 2, we conclude that S/I∨ is not Cohen-Macaulay. So the ideal I does not have a linear resolution by Theorem 2.6.
The second part of the lemma is a direct consequence of Corollary 3.8.
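The complement T̄ used in the proof can be checked mechanically; here we take the hexahedron to be the bipyramid over the triangle {1, 2, 3} with apexes 4 and 5, an assumption consistent with T̄ = {145, 245, 345, 123}:

```python
from itertools import combinations

# The hexahedron T of Lemma 3.16: the six triangles of the bipyramid
# over {1, 2, 3} with apexes 4 and 5.
T = {frozenset(F) for F in
     [{1,2,4},{1,2,5},{1,3,4},{1,3,5},{2,3,4},{2,3,5}]}

# C_{5,3}: all 3-subsets of {1,...,5} (10 of them).
C53 = {frozenset(F) for F in combinations(range(1, 6), 3)}

T_bar = C53 - T   # the complement clutter T̄
```
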
Let S² be a sphere in R³. A triangulation of S² is a finite simple graph embedded on S² such that each face is triangular and any two faces share at most one edge. Note that if C is a triangulation of a surface, then C defines a 3-uniform clutter, which we again denote by C. Moreover, any proper subclutter C′ ⊂ C has an edge e ∈ E(C′) such that e is contained in only one circuit of C′.
Corollary 3.17. Let S = K[x1, . . . , xn]. Let Pn be the clutter defined by a triangulation of the sphere with n ≥ 5 vertices, and let I ⊂ S be the circuit ideal of Pn. Then,
(i) for any proper subclutter C1 ⊂ Pn, the ideal I(C̄1) has a linear resolution;
(ii) the ideal I does not have a linear resolution.
Proof. (i) If C1 is a proper subclutter of Pn, then C1 has an edge e contained in only one circuit of C1, which can be deleted without changing the regularity by Corollary 3.8. Continuing this process proves the assertion.
(ii) The proof is by induction on the number of vertices n. The base case is Lemma 3.16. Let n > 5. If there is a vertex of degree 3 (that is, the number of edges passing through the vertex is 3), then by Theorem 3.15 we can remove the vertex and the three circuits containing it and add a new circuit instead. We then have a clutter with fewer vertices and, by the induction hypothesis, its circuit ideal does not have a linear resolution. Now assume that there is no vertex of degree 3, and take a vertex u of degree > 3 and all circuits containing u (see the illustration below). Using several flips and Corollary 3.12, we can reduce our triangulation to one in which only 3 circuits contain u. Now, using Theorem 3.15, we get a triangulation of the sphere with n − 1 vertices, which does not have a linear resolution by the induction hypothesis.
[Illustration: a vertex u of high degree with its surrounding triangles; flips reduce the number of circuits containing u to 3.]
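The claim that every proper subclutter of a triangulation has an edge lying in only one circuit can be verified exhaustively for the smallest case, the bipyramid of Lemma 3.16 (brute-force sketch, helper names ours):

```python
from itertools import combinations

def has_free_edge(C):
    """Is some edge of the 3-uniform clutter C contained in exactly one
    circuit?  (Corollary 3.8 then lets that circuit be deleted.)"""
    edges = [e for F in C for e in combinations(sorted(F), 2)]
    return any(edges.count(e) == 1 for e in edges)

# Bipyramid triangulation of the sphere: every edge lies in two faces.
bipyramid = [frozenset(F) for F in
             [{1,2,4},{1,2,5},{1,3,4},{1,3,5},{2,3,4},{2,3,5}]]
```

In the full triangulation no such edge exists, but removing any nonzero number of faces creates one.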
Remark 3.18. Let Pn be the 3-uniform clutter as in Corollary 3.17. Let I be the circuit ideal of P̄n and let ∆ be a simplicial complex such that the Stanley-Reisner ideal of ∆ is I. In this case ∆∨, the Alexander dual of ∆, is a pure simplicial complex of dimension n − 4 which is not Cohen-Macaulay, but adding any further facet to ∆∨ makes it Cohen-Macaulay.
References
[BH] W. Bruns and J. Herzog, Cohen-Macaulay Rings, Revised Edition, Cambridge University Press,
Cambridge, (1996).
[Co] CoCoATeam: CoCoA: A System for Doing Computations in Commutative Algebra, available at
http://cocoa.dima.unige.it.
[ER] J. A. Eagon and V. Reiner, Resolutions of Stanley-Reisner rings and Alexander duality, J. Pure
and Applied Algebra 130 (1998), 265-275.
[E] E. Emtander, A class of hypergraphs that generalizes chordal graphs, Math. Scand. 106 (2010), no.
1, 50–66.
[Fr] R. Fröberg, On Stanley-Reisner rings, Topics in algebra, Part 2 (Warsaw, 1988), Banach Center
Publ., vol. 26, PWN, Warsaw, 1990, pp. 57-70.
[Mo] M. Morales, Simplicial ideals, 2-linear ideals and arithmetical rank, J. Algebra 324 (2010), no. 12,
3431–3456.
[Si] W. Decker, G.-M. Greuel, G. Pfister, H. Schönemann: Singular 3-1-3 — A computer algebra
system for polynomial computations. http://www.singular.uni-kl.de (2011).
[St] R. Stanley, Combinatorics and Commutative Algebra, second ed., Progress in Mathematics, vol. 41,
Birkhäuser Boston Inc., Boston, MA, (1996).
[T] N. Terai, Generalization of Eagon-Reiner theorem and h-vectors of graded rings, preprint (2000).
[W] R. Woodroofe, Chordal and sequentially Cohen-Macaulay clutters, Electron. J. Combin. 18 (2011),
no. 1, Paper 208, 20 pages, arXiv:0911.4697.
| 0 |
Information Bottleneck on General Alphabets
Georg Pichler, Günther Koliander
arXiv:1801.01050v1 [] 3 Jan 2018
Abstract—We prove rigorously a source coding theorem that can probably be considered folklore: a generalization to arbitrary alphabets of a problem motivated by the Information Bottleneck method. For general random variables (Y, X), we show essentially that for some n ∈ N, a function f with rate limit log|f| ≤ nR and I(Y^n; f(X^n)) ≥ nS exists if and only if there is a discrete random variable U such that the Markov chain Y −◦− X −◦− U holds, I(U; X) ≤ R and I(U; Y) ≥ S.
I. Introduction
Since its inception [1], the Information Bottleneck (IB) method became a widely applied tool, especially in the context of machine learning problems. It has been successfully applied to various problems in machine learning [2], computer vision [3], biomedical signal processing [4], and communications [5], [6], [7]. Furthermore, it is a valuable tool for channel output compression in a communication system [8], [9].
In the underlying information-theoretic problem, we define a pair (S, R) ∈ R² to be achievable for the two arbitrary random sources (Y, X) if there exists a function f with rate-limited range (1/n) log|f| ≤ R and I(Y; f(X)) ≥ nS, where (Y, X) are n independent and identically distributed (i.i.d.) copies of (Y, X). While this Shannon-theoretic problem and variants thereof were also considered (e. g., [10], [11]), a large part of the literature is aimed at studying the IB function

SIB(R) = sup_{U : I(U;X) ≤ R, Y −◦− X −◦− U} I(U; Y)    (1)

in different contexts. In particular, several works (e. g., [1], [2], [12], [13], [14]) intend to compute a probability distribution that achieves the supremum in (1). The resulting distribution is then used as a building block in numerical algorithms, e. g., for document clustering [2] or dimensionality reduction [12].
In the discrete case, SIB(R) is equal to the maximum of all S such that (S, R) is in the achievable region (the closure of the set of all achievable pairs). This statement has been re-proven many times in different contexts [15], [11], [16], [17]. In this note, we prove a theorem, which can probably be considered folklore, extending this result from discrete to arbitrary random variables. Formally speaking, using the definitions in [18], we prove that a pair (S, R) is in the achievable region of an arbitrary source (Y, X) if and only if, for every ε > 0, there exists a random variable U with Y −◦− X −◦− U, I(X; U) ≤ R + ε, and I(Y; U) ≥ S − ε. This provides a single-letter solution to the information-theoretic problem behind the Information Bottleneck method for arbitrary random sources, and in particular it shows that the information bottleneck for Gaussian random variables [12] is indeed the solution to a Shannon-theoretic problem.
In the proof, both the inner bound and the converse are inferred from the discrete case. The techniques employed could therefore be useful for lifting other discrete coding theorems to the case of arbitrary alphabets.

II. Main Result
Let Y and X be random variables with arbitrary alphabets SY and SX, respectively. The bold-faced random vectors Y and X are n i.i.d. copies of Y and X, respectively. We then have the following definitions.
Definition 1. A pair (S, R) ∈ R² is achievable if for some n ∈ N there exists a measurable function f : SX^n → M for some finite set M with bounded cardinality (1/n) log|M| ≤ R and

(1/n) I(Y; f(X)) ≥ S.    (2)

The set of all achievable pairs is denoted R ⊆ R².
Definition 2. A pair (S, R) ∈ R² is IB-achievable if there exists an additional random variable U with arbitrary alphabet SU, satisfying Y −◦− X −◦− U and

R ≥ I(X; U),    (3)
S ≤ I(Y; U).    (4)

The set of all IB-achievable pairs is denoted RIB ⊆ R².
In what follows, we will prove the following theorem.
Theorem 3. The equality RIB = R holds.
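In the discrete case, the supremum in (1) is typically approached numerically via the self-consistent fixed-point iteration of the original IB paper [1]. The following is a minimal sketch of those update equations for a finite joint distribution; the function name, initialization, and iteration count are our own choices, not from this paper:

```python
import numpy as np

def ib_encoder(p_xy, beta, n_t, iters=200, seed=0):
    """Self-consistent IB iteration for a discrete joint p_xy
    (shape |X| x |Y|).  Returns the encoder p(t|x) as a |X| x n_t
    array.  A sketch of the usual fixed-point equations, not an exact
    solver of the constrained problem (1)."""
    rng = np.random.default_rng(seed)
    p_x = p_xy.sum(axis=1)                      # p(x)
    p_y_x = p_xy / p_x[:, None]                 # p(y|x)
    q = rng.random((p_xy.shape[0], n_t))
    q /= q.sum(axis=1, keepdims=True)           # initial p(t|x)
    for _ in range(iters):
        p_t = q.T @ p_x + 1e-12                 # p(t)
        p_y_t = (q * p_x[:, None]).T @ p_y_x / p_t[:, None]   # p(y|t)
        # KL(p(y|x) || p(y|t)) for every pair (x, t)
        kl = (p_y_x[:, None, :]
              * (np.log(p_y_x[:, None, :] + 1e-12)
                 - np.log(p_y_t[None, :, :] + 1e-12))).sum(axis=2)
        q = p_t[None, :] * np.exp(-beta * kl) + 1e-30
        q /= q.sum(axis=1, keepdims=True)
    return q
```

Larger beta trades more rate I(U; X) for more relevance I(U; Y), sweeping out the boundary of the achievable region.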
G. Pichler is with the Institute of Telecommunications, Technische Universität Wien, Vienna, Austria
G. Koliander is with the Acoustics Research Institute, Austrian
Academy of Sciences, Vienna, Austria
Funding by WWTF Grants MA16-053, ICT15-119, and NXT17-013.
III. Preliminaries
When introducing a function, we implicitly assume it to be
measurable w.r.t. the appropriate σ-algebras. The σ-algebra
associated with a finite set is its power set and the σ-algebra
associated with R is the Borel σ-algebra. The symbol ∅ is used
for the empty set and for a constant random variable. When
there is no possibility for confusion, we will not distinguish
between a single-element set and its element, e. g., we write x
instead of {x} and 1x for the indicator function of {x}. We use
A △ B := A \ B ∪ B \ A to denote the symmetric set difference.
Let (Ω, Σ, µ) be a probability space. A random variable
X : Ω → SX takes values in the measurable space (SX , AX ). The
push-forward probability measure µX : AX → [0, 1] is defined by µX(A) = µ(X−1(A)) for all A ∈ AX. We will state most results
in terms of push-forward measures and usually ignore the
background probability space. When multiple random variables
are defined, we implicitly assume the push-forward measures to
be consistent in the sense that, e. g., µX (A) = µXY (A × SY ) for
all A ∈ AX .
For n ∈ N let Ωn denote the n-fold Cartesian product
of (Ω, Σ, µ). A bold-faced random vector, e. g., X, defined on
Ωn , is an n-fold copy of X, i. e., X = Xn . Accordingly, the
corresponding push-forward measure, e. g., µX is the n-fold
product measure.
For a random variable X let aX , bX , and cX denote arbitrary
functions on SX , each with finite range. We will use the symbol
MX to denote the range of aX , i. e., aX : SX → MX .
Definition 4 ([19, Definition 8.11]). The conditional expectation of a random variable X with SX = R, given a random
variable Y, is a random variable E[X|Y] such that
1) E[X|Y] is σ(Y)-measurable, and
2) for all A ∈ σ(Y), we have E[1A E[X|Y]] = E[1A X].
The conditional probability of an event B ∈ Σ given Y is defined
as P{B|Y} := E[1B |Y].
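For discrete Y and the empirical measure of a finite sample, Definition 4 reduces to averaging X over the level sets of Y; property 2 can then be checked directly (a toy sketch with a helper name of our own):

```python
import numpy as np

def cond_exp(x, y):
    """E[X|Y] for samples (x_i, y_i) with discrete y: on each level set
    {Y = v}, E[X|Y] is the average of x there, which is sigma(Y)-
    measurable and satisfies E[1_A E[X|Y]] = E[1_A X] for A in sigma(Y)."""
    out = np.empty_like(x, dtype=float)
    for v in np.unique(y):
        out[y == v] = x[y == v].mean()
    return out
```

Taking A = {Y = v} (a generator of σ(Y)) exhibits property 2 of the definition.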
The conditional expectation and therefore also the conditional
probability exists and is unique up to equality almost surely
by [19, Theorem 8.12]. Furthermore, if (SX , AX ) is a standard
space [18, Section 1.5], there even exists a regular conditional
distribution of X given Y [19, Theorem 8.37].
Definition 5. For two random variables X and Y a regular
conditional distribution of X given Y is a function κX|Y : Ω ×
AX → [0, 1] such that
1) for every ω ∈ Ω, the set function κX|Y (ω) := κX|Y (ω, · ) is
a probability measure on (SX , AX ).
2) for every set A ∈ AX , the function κX|Y ( · ; A) is σ(Y)measurable.
3) for
µ-a. e. ω ∈ Ω and all A ∈ AX , we have κX|Y (ω; A) =
P X−1 (A) Y (ω) (cf. Def. 4).
Note, in particular, that finite spaces are standard spaces.
Remark 1. If the random variable Y is discrete, then κX|Y reduces to conditioning given the events Y = y for y ∈ SY, i.e., κX|Y(ω; A) = µXY(A × Y(ω)) / µY(Y(ω)) (cf. [19, Lemma 8.10]).
We use the following definitions and results from [18], [19].
Definition 6. For random variables X and Y with |SX| < ∞, the conditional entropy is defined as [18, Section 5.5]

H(X|Y) := ∫ H(κX|Y) dµ, (5)

where H(·) denotes discrete entropy on SX. For arbitrary random variables X, Y, and Z, the conditional mutual information is defined as [18, Lemma 5.5.7]

I(X; Y|Z) := sup_{aX,aY} ∫ D( κ_{aX(X)aY(Y)|Z} ‖ κ_{aX(X)|Z} × κ_{aY(Y)|Z} ) dµ (6)
= sup_{aX,aY} ( H(aX(X)|Z) + H(aY(Y)|Z) − H(aX(X)aY(Y)|Z) ), (7)

where D(·‖·) denotes Kullback-Leibler divergence [18, Section 2.3]. The mutual information is given by [18, Lemma 5.5.1]

I(X; Y) := I(X; Y|∅) = sup_{aX,aY} I(aX(X); aY(Y)). (8)

Definition 7 ([19, Definition 12.20]). For arbitrary random variables X, Y, and Z, the Markov chain X −◦− Y −◦− Z holds if, for any A ∈ AX, B ∈ AZ, the following holds µ-a.e.:

P(X−1(A) ∩ Z−1(B) | Y) = P(X−1(A) | Y) P(Z−1(B) | Y). (9)

In the following, we collect some properties of these definitions.

Lemma 8. For random variables X, Y, and Z the following properties hold:
(i) I(X; Y|Z) ≥ 0, with equality if and only if X −◦− Z −◦− Y.
(ii) For discrete X, i.e., |SX| < ∞, we have I(X; Y) = H(X) − H(X|Y).
(iii) I(X; YZ) = I(X; Z) + I(X; Y|Z).
(iv) If X −◦− Y −◦− Z, then I(X; Y) ≥ I(X; Z).

Proof. (i): The claim I(X; Y|Z) ≥ 0 follows directly from (6) and the non-negativity of divergence.

Assume that X −◦− Z −◦− Y holds, i.e., P(X−1(A) ∩ Y−1(B) | Z) = P(X−1(A) | Z) P(Y−1(B) | Z) almost everywhere. Let aX and aY denote arbitrary functions, pick two arbitrary sets A ⊆ MX, B ⊆ MY (cf. Section III), and we obtain µ-a.e.

κ_{aX(X)aY(Y)|Z}(·; A × B) = P( X−1(aX−1(A)) ∩ Y−1(aY−1(B)) | Z ) (10)
= P( X−1(aX−1(A)) | Z ) P( Y−1(aY−1(B)) | Z ) (11)
= κ_{aX(X)|Z}(·; A) κ_{aY(Y)|Z}(·; B), (12)

where (10) and (12) follow from part 3 of Def. 5. This proves that µ-a.e. the equality of measures κ_{aX(X)aY(Y)|Z} = κ_{aX(X)|Z} × κ_{aY(Y)|Z} holds. By the properties of Kullback-Leibler divergence [18, Theorem 2.3.1] we have I(X; Y|Z) = 0 due to (6).

On the other hand, assume I(X; Y|Z) = 0 and choose arbitrary sets A ∈ AX and B ∈ AY. We define aX := 1A, aY := 1B, X̂ := aX(X), and Ŷ := aY(Y). By (6) we have D( κX̂Ŷ|Z(ω) ‖ κX̂|Z(ω) × κŶ|Z(ω) ) = 0 for µ-a.e. ω ∈ Ω, which is equivalent to the µ-a.e. equality of the measures κX̂Ŷ|Z = κX̂|Z × κŶ|Z. We obtain µ-a.e.

P(X−1(A) ∩ Y−1(B) | Z) = κX̂Ŷ|Z(·; 1 × 1) (13)
= κX̂|Z(·; 1) κŶ|Z(·; 1) (14)
= P(X−1(A) | Z) P(Y−1(B) | Z). (15)

(ii): see [18, Lemma 5.5.6].
(iii): see [18, Lemma 5.5.7].
(iv): Using Prop. (i) we have I(X; Z|Y) = 0, and by Prop. (iii) it follows that

I(X; Z) ≤ I(X; YZ) (16)
= I(X; Y) + I(X; Z|Y) = I(X; Y).

Occasionally we will interpret a probability measure on a finite space M as a vector in [0, 1]^M, equipped with the Borel σ-algebra. We will use the L∞-distance on this space.

Definition 9. For two probability measures µ and ν on a finite space M, their distance is defined as the L∞-distance d(µ, ν) := max_{m∈M} |µ(m) − ν(m)|. The diameter of A ⊆ [0, 1]^M is defined as diam(A) := sup_{a,b∈A} d(a, b).

Lemma 10 ([20, Lemma 2.7]). For two probability measures µ and ν on a finite space M with d(µ, ν) ≤ ε ≤ 1/2, the inequality |H(µ) − H(ν)| ≤ −ε|M| log ε holds.

IV. Inner Bound

For finite spaces SY, SX, and SU, the statement RIB ⊆ R is well known, cf. [10, Section IV], [11, Section III.F]. We restate it in the form of the following lemma.

Lemma 11. For random variables Y, X, and U with finite SY, SX, and SU, assume that Y −◦− X −◦− U holds. Then, for any ε > 0, there exist n ∈ N and a function f : SX^n → M with (1/n) log|M| ≤ I(X; U) + ε such that (1/n) I(Y; f(X)) ≥ I(Y; U) − ε.

In a first step, we will utilize Lem. 11 to show RIB ⊆ R for an arbitrary alphabet SX, i.e., we wish to prove the following Proposition 12, lifting the restriction |SX| < ∞.

Proposition 12. For random variables Y, X, and U with finite SY and SU, assume that Y −◦− X −◦− U holds. Then, for any ε > 0, there exist n ∈ N and a function f : SX^n → M with (1/n) log|M| ≤ I(X; U) + ε such that

(1/n) I(Y; f(X)) ≥ I(Y; U) − ε. (17)

Remark 2. Considering that both definitions of achievability (Defs. 1 and 2) only rely on the notion of mutual information, it is natural to assume that Def. 6 can be used to directly infer Proposition 12 from Lem. 11. However, this is not the case. For an arbitrary discretization aX(X) of X, we do have I(aX(X); U) ≤ I(X; U). However, the Markov chain Y −◦− aX(X) −◦− U does not hold in general. To circumvent this problem, we will use a discrete random variable X̂ = g(X) with an appropriate quantizer g and construct a new random variable Ũ satisfying the Markov chain Y −◦− X̂ −◦− Ũ, such that I(Y; Ũ) is close to I(Y; U). Figure 1a illustrates this strategy. We choose the quantizer g based on the conditional probability distribution of U given X, i.e., quantization based on κU|X using the L∞-distance (cf. Def. 9). Subsequently, we will use that, by Lem. 10, a small L∞-distance guarantees a small gap in terms of information measures.

[Fig. 1: Illustrations. (a) Inner bound: X is quantized by g(·) to X̂, and Ũ approximates U, so that f(X) is close to a function of X̂. (b) Outer bound: g(X̂) approximates f(X) for ZY −◦− X.]

Proof of Proposition 12. Let µYXU be a probability measure on Ω := SY × SX × SU such that Y −◦− X −◦− U holds. Fix 0 < δ ≤ 1/2 and find a finite, measurable partition (Pi)_{i∈I} of the space of probability measures on SU such that for every i ∈ I we have diam(Pi) ≤ δ, and choose νi ∈ Pi arbitrarily for every i ∈ I. Define the random variable X̂ : Ω → I as X̂ = i if κU|X ∈ Pi. The random variable X̂ is σ(X)-measurable (see Appendix A). We can therefore find a measurable function g such that X̂ = g(X) by the factorization lemma [19, Corollary 1.97]. Define the new probability space Ω × ×_{i∈I} SU, equipped with the probability measure µYXUŨI := µYXU × ×_{i∈I} νi. Slightly abusing notation, we define the random variables Y, X, U, and Ũi (for every i ∈ I) as the according projections. We also use X̂ = g(X) and define the random variable Ũ := ŨX̂. From this construction we have µYXUŨI-a.e. the equality of measures κŨ|X̂ = κŨ|X = νX̂, as well as Y −◦− X̂ −◦− Ũ and Y −◦− X −◦− Ũ (see Appendix B). Therefore, we have µYXUŨI-a.e.

d(κŨ|X̂, κU|X) ≤ δ and d(κŨ|X, κU|X) ≤ δ, (18)

by κŨ|X̂ = κŨ|X = νX̂ and κU|X, νX̂ ∈ PX̂. Thus, for any u ∈ SU,

µU(u) = ∫ κU|X(·; u) dµYXU (19)
≤ ∫ ( κŨ|X(·; u) + δ ) dµYXUŨI = µŨ(u) + δ, (20)

using (18), and, by the same argument, µU(u) ≥ µŨ(u) − δ, i.e., in total,

d(µU, µŨ) ≤ δ. (21)

Thus, we obtain

I(X; U) = H(µU) − H(U|X) (22)
≥ H(µŨ) + δ|SU| log δ − ∫ H(κU|X) dµYXU (23)
≥ H(µŨ) + 2δ|SU| log δ − ∫ H(κŨ|X̂) dµYXUŨI (24)
= I(X̂; Ũ) + 2δ|SU| log δ, (25)

where (22) and (25) follow from Prop. (ii) of Lem. 8, and in both (23) and (24) we used Lem. 10 (together with (21) and (18), respectively). From Y −◦− X −◦− U and Def. 7, we know that µYXU-a.e. we have the equality of measures κYU|X = κY|X × κU|X. Using this equality in (27), we obtain

µYU(y × u) = ∫ κYU|X(·; y × u) dµYXU (26)
= ∫ κY|X(·; y) κU|X(·; u) dµYXU (27)
≤ ∫ κY|X(·; y) ( κŨ|X(·; u) + δ ) dµYXUŨI (28)
= ∫ κYŨ|X(·; y × u) dµYXUŨI + δ (29)
= µYŨ(y × u) + δ, (30)

where (26) and (30) follow from the defining property of conditional probability, part 2 of Def. 4, and (28) follows from (18). Equality in (29) follows from Y −◦− X −◦− Ũ and Def. 7. By the same argument, one can show that µYU(y × u) ≥ µYŨ(y × u) − δ. Therefore, in total, d(µYU, µYŨ) ≤ δ and, by Lem. 10,

|H(YU) − H(YŨ)| ≤ −δ|SY||SU| log δ. (31)

Thus, the mutual information can be bounded by

I(Y; U) = H(Y) + H(U) − H(YU) (32)
≤ H(Y) + H(Ũ) − δ|SU| log δ − H(YU) (33)
≤ I(Y; Ũ) − δ(|SY| + 1)|SU| log δ (34)
≤ I(Y; Ũ) − 2δ|SY||SU| log δ, (35)

where we applied Lem. 10 in (33) and (34), using (21) and (31), respectively. We apply Lem. 11 to the three random variables Y, X̂, and Ũ and obtain a function f̂ : I^n → M with (1/n) I(Y; f̂(X̂)) ≥ I(Y; Ũ) − δ and

(1/n) log|M| ≤ I(X̂; Ũ) + δ ≤ I(X; U) + δ − 2δ|SU| log δ, (36)

where the last inequality follows from (25). We have X̂ = g^n ∘ X, and defining f := f̂ ∘ g^n, we obtain

(1/n) I(Y; f(X)) = (1/n) I(Y; f̂(X̂)) ≥ I(Y; Ũ) − δ (37)
≥ I(Y; U) + 2δ|SY||SU| log δ − δ, (38)

where (38) follows from (35). Choosing δ such that ε ≥ −2δ|SY||SU| log δ + δ completes the proof.

We can now complete the proof of the inner bound by showing the following lemma.

Lemma 13. RIB ⊆ R.

Proof. Assuming (S, R) ∈ RIB, choose µYXU according to Def. 2. Clearly I(X; U) < ∞ to satisfy (3), and thus also I(Y; U) < ∞ by Prop. (iv) of Lem. 8, as Y −◦− X −◦− U holds. Pick ε > 0, select functions aX, aU such that I(aX(X); aU(U)) ≥ I(X; U) − ε, and select functions aY, bU such that I(aY(Y); bU(U)) ≥ I(Y; U) − ε (cf. (8)). Using Û := (aU(U), bU(U)) as well as Ŷ := aY(Y), we have

0 = I(Y; U|X) = sup_{cY,cU} I(cY(Y); cU(U)|X) ≥ I(Ŷ; Û|X) ≥ 0, (39)

as well as

I(X; U) = sup_{cX,cU} I(cX(X); cU(U)) (40)
≥ sup_{cX} I(cX(X); Û) = I(X; Û), (41)

and

I(Y; U) − ε ≤ I(aY(Y); bU(U)) ≤ I(Ŷ; Û). (42)

We apply Proposition 12, substituting Û → U and Ŷ → Y; note that Ŷ −◦− X −◦− Û holds by (39) and Prop. (i) of Lem. 8. Proposition 12 guarantees the existence of a function f : SX^n → M with

(1/n) log|M| ≤ I(X; Û) + ε ≤ I(X; U) + ε ≤ R + ε, (43)

using (41) and (3), and

(1/n) I(Y; f(X)) = sup_{cY} (1/n) I(cY ∘ Y; f(X)) ≥ (1/n) I(aY^n ∘ Y; f(X)) = (1/n) I(Ŷ; f(X)) (44)
≥ I(Ŷ; Û) − ε ≥ I(Y; U) − 2ε ≥ S − 2ε, (45)

where we used (17), (42), and (4). Thus, (S − 2ε, R − ε) ∈ R and therefore (S, R) ∈ R.
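The entropy-continuity bound of Lem. 10, used repeatedly in the proof above, can be checked numerically. A small sketch; the two distributions are arbitrary illustrations, and entropy is taken in nats:

```python
import math

def H(p):
    # Discrete entropy in nats.
    return -sum(x * math.log(x) for x in p if x > 0)

mu = [0.25, 0.25, 0.25, 0.25]
nu = [0.30, 0.20, 0.25, 0.25]
eps = max(abs(a - b) for a, b in zip(mu, nu))  # L-infinity distance, here 0.05

# Lemma 10: |H(mu) - H(nu)| <= -eps * |M| * log(eps) for eps <= 1/2.
assert eps <= 0.5
assert abs(H(mu) - H(nu)) <= -eps * len(mu) * math.log(eps)
```

The bound is loose here (the actual gap is about 0.01 nats against an allowance of about 0.6), which is consistent with its role as a uniform continuity estimate rather than a tight inequality.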
V. Outer Bound

We also start the proof of the outer bound with the well-known result RIB ⊆ R for finite spaces SY, SX, and SU, cf. [10, Section IV], [11, Section III.F]. The statement is rephrased in the following lemma.
Lemma 14. Assume that the spaces SY and SX are both finite and µYX is fixed. For some n ∈ N, let f : SX^n → M be a function with |M| < ∞. Then there exists a probability measure µYXU, extending µYX, such that SU is finite, Y −◦− X −◦− U, and

I(X; U) ≤ (1/n) log|M|, (46)
I(Y; U) ≥ (1/n) I(Y; f(X)). (47)

We can slightly strengthen Lem. 14.

Corollary 15. Assume that, in the setting of Lem. 14, we are given µZYX on SZ × SY × SX, extending µYX, where SZ is arbitrary, not necessarily finite. Then there exists a probability measure µZYXU, extending µZYX, such that SU is finite and ZY −◦− X −◦− U, (46), and (47) hold.

Proof. Apply Lem. 14 to obtain µYXU on SY × SX × SU satisfying (46), (47), and Y −◦− X −◦− U. We define µZYXU by

µZYXU(A × y × x × u) = ( µZYX(A × y × x) / µYX(y × x) ) µYXU(y × x × u) (48)

for any (y, x, u) ∈ SY × SX × SU and A ∈ AZ. Pick arbitrary A ∈ AZ, y ∈ SY, and u ∈ SU. The Markov chain ZY −◦− X −◦− U now follows as the events Z−1(A) ∩ Y−1(y) and U−1(u) are independent given X−1(x) for any x ∈ SX (cf. Rmk. 1).

Again, we proceed by extending Cor. 15, lifting the restriction that SX is finite, and obtain the following proposition.

Proposition 16. Given a probability measure µZYX as in Cor. 15, assume that |SY| < ∞. For some n ∈ N, let f : SX^n → M be a function with |M| < ∞. Then, for any ε > 0, there exists a probability measure µZYXU, extending µZYX, with ZY −◦− X −◦− U and

I(X; U) ≤ (1/n) log|M|, (49)
I(Y; U) ≥ (1/n) I(Y; f(X)) − ε. (50)

Remark 3. In the proof of Proposition 16, we face a similar problem as in the inner bound, which was described in Rmk. 2. For any "per-letter" quantization X̂ := aX^n(X), in general f(X) cannot be written as a function g(X̂) of the quantization X̂. We will approach this problem by choosing aX such that, for m ∈ M, the pre-image f−1(m) can be well approximated, in terms of low probability of error, by (aX^n)−1(Am) for some Am ⊆ MX^n. Choosing g such that g(x̂) = m, with high probability, whenever x̂ ∈ Am, we obtain g(X̂), which is close to f(X) in terms of distribution, and thus the information measures are also close by Lem. 10. Figure 1b provides a sketch of this strategy.

Proof of Proposition 16. We can partition SX^n = ⋃_{m∈M} Qm into finitely many measurable, mutually disjoint sets Qm := f−1(m), m ∈ M. We want to approximate the sets Qm by finite unions of rectangles in the semiring [19, Definition 1.9] Ξ := { B : B = ×_{i=1}^{n} Bi with Bi ∈ AX }. We choose δ > 0, which will be specified later. According to [19, Theorem 1.65(ii)], we obtain B(m) := ⋃_{k=1}^{K} Bk(m) for each m ∈ M, where the Bk(m) ∈ Ξ are mutually disjoint sets satisfying µX(B(m) △ Qm) ≤ δ. Since Bk(m) ∈ Ξ, we have Bk(m) = ×_{i=1}^{n} Bk,i(m) for some Bk,i(m) ∈ AX. We can construct functions aX and g such that g ∘ aX^n(x) = m whenever x ∈ B(m) and x ∉ B(≠m), with B(≠m) := ⋃_{m'≠m} B(m'). Indeed, we obtain aX by finding a measurable partition of SX that is finer than (Bk,i(m), (Bk,i(m))^c) for all i, k, m. For fixed m ∈ M, we have

Qm ⊆ Qm ∪ ( B(m) \ B(≠m) ) (51)
⊆ ( B(m) \ B(≠m) ) ∪ ( Qm \ B(m) ) ∪ ⋃_{m'≠m} ( Qm ∩ B(m') ) (52)
⊆ ( B(m) \ B(≠m) ) ∪ ( Qm △ B(m) ) ∪ ⋃_{m'≠m} ( B(m') \ Qm' ) (53)
⊆ ( B(m) \ B(≠m) ) ∪ ⋃_{m'} ( B(m') △ Qm' ), (54)

where we used the fact that Qm ∩ Qm' = ∅ for m ≠ m' in (53). Using X̂ := aX^n(X), we obtain for any y ∈ SY^n

µYf(X)(y × m) = µYX(y × Qm) (55)
≤ µYX( y × (B(m) \ B(≠m)) ) + Σ_{m'} µX(B(m') △ Qm') (56)
≤ µYg(X̂)(y × m) + |M|δ, (57)

where (56) follows from (54). On the other hand, we have

µYf(X)(y × m) = µY(y) − Σ_{m'≠m} µYf(X)(y × m') (58)
≥ µY(y) − Σ_{m'≠m} ( µYg(X̂)(y × m') + |M|δ ) (59)
≥ µYg(X̂)(y × m) − |M|²δ, (60)

where (59) follows from (57). We thus obtain d(µYf(X), µYg(X̂)) ≤ |M|²δ. This also implies d(µf(X), µg(X̂)) ≤ |SY|^n |M|²δ. Assume |SY|^n |M|²δ ≤ 1/2 and apply Cor. 15, substituting X̂ → X, XZ → Z, and the function g → f. This yields a random variable U with XZY −◦− X̂ −◦− U,

I(X̂; U) ≤ (1/n) log|M|, and I(Y; U) ≥ (1/n) I(Y; g(X̂)). (61)

We also obtain ZY −◦− X −◦− U due to

0 = I(XZY; U|X̂) (62)
= I(XZY; U) − I(U; X̂) (63)
≥ I(XZY; U) − I(U; X) (64)
= I(ZY; U|X) (65)
≥ 0, (66)

where (62) follows from XZY −◦− X̂ −◦− U using Prop. (i) of Lem. 8, (63) and (65) follow from Prop. (iii) of Lem. 8, (64) is a consequence of Def. 6, and we used Prop. (i) of Lem. 8 in (66). This also immediately implies 0 = I(X; U|X̂) and hence

(1/n) log|M| ≥ I(X̂; U) = I(X̂; U) + I(X; U|X̂) (67)
= I(XX̂; U) = I(X; U), (68)

where we used (61) in (67) and Prop. (iii) of Lem. 8 in (68). We also have

I(Y; U) ≥ (1/n) I(Y; g(X̂)) (69)
= (1/n) ( H(Y) + H(g(X̂)) − H(Yg(X̂)) ) (70)
≥ (1/n) I(Y; f(X)) + (1/n) |SY|^n |M|³ δ log(|M|²δ) + (1/n) |SY|^n |M|³ δ log(|SY|^n |M|²δ) (71)
≥ (1/n) I(Y; f(X)) + (2/n) |SY|^n |M|³ δ log(|M|²δ), (72)

where (69) follows from (61) and we used Lem. 10 in (71). Select δ such that ε ≥ −(2/n) |SY|^n |M|³ δ log(|M|²δ); then (50) holds.

We can now finish the proof by showing the following lemma.

Lemma 17. R ⊆ RIB.

Proof. Assume (S, R) ∈ R and choose n ∈ N and f satisfying (1/n) log|M| ≤ R and (2). Choose any ε > 0 and find aY such that

I(aY^n(Y); f(X)) ≥ I(Y; f(X)) − ε ≥ nS − ε. (73)

This is possible by applying [18, Lemma 5.2.2] with the algebra that is generated by the rectangles (cf. the paragraph above [18, Lemma 5.5.1]). We apply Proposition 16, substituting aY(Y) → Y and Y → Z. For arbitrary ε > 0, Proposition 16 provides U with Y aY(Y) −◦− X −◦− U (i.e., Y −◦− X −◦− U) and

I(X; U) ≤ (1/n) log|M| ≤ R,
I(Y; U) ≥ I(aY(Y); U) ≥ (1/n) I(aY^n(Y); f(X)) − ε ≥ S − 2ε,

where the second chain uses (50) and (73). Hence, (S − 2ε, R) ∈ RIB and consequently (S, R) ∈ RIB.

Appendix

A. X̂ is σ(X)-measurable

For u ∈ SU consider the σ(X)-measurable function hu := κU|X(·; u) with values in [0, 1]. We obtain the vector-valued function h := (hu)_{u∈SU} with values in [0, 1]^{|SU|}. This function h is σ(X)-measurable as every component is σ(X)-measurable. Thus, we have X̂−1(i) = h−1(Pi) ∈ σ(X).

B. Distribution of Ũ and Conditional Independence

We will first show that µYXUŨI-a.e.

κŨ|X̂ = κŨ|X = νX̂. (77)

Clearly, νX̂ is a probability measure everywhere. Fixing u ∈ SU, we need that νX̂(u) is σ(X̂)-measurable, which is shown by the factorization lemma [19, Corollary 1.97] when writing νX̂(u) = ν(·)(u) ∘ X̂. Also, this proves σ(X)-measurability, as X̂ is σ(X)-measurable, i.e., σ(X̂) ⊆ σ(X). It remains to show the defining property of conditional probability, part 2 of Def. 4. Choosing B ∈ σ(X) and u ∈ SU, we need to show that

E[1B νX̂(u)] = E[1B 1u(Ũ)]. (78)

The statement for B ∈ σ(X̂) then follows by σ(X̂) ⊆ σ(X), i.e., the σ(X)-measurability of X̂. We prove (78) by

E[1B νX̂(u)] = Σ_{i∈I} E[1i(X̂) 1B νX̂(u)] (79)
= Σ_{i∈I} νi(u) E[1i(X̂) 1B] (80)
= Σ_{i∈I} E[1u(Ũi)] E[1i(X̂) 1B] (81)
= Σ_{i∈I} E[1i(X̂) 1B 1u(Ũi)] (82)
= Σ_{i∈I} E[1i(X̂) 1B 1u(Ũ)] (83)
= E[1B 1u(Ũ)], (84)

where we used Fubini's theorem [19, Theorem 14.16] in (82). To prove I(Y; Ũ|X) = 0, we need to show that for every y ∈ SY, u ∈ SU, and B ∈ σ(X), we have

∫ 1B κY|X(·; y) νX̂(u) dµYXU = ∫ 1B 1u(Ũ) 1y(Y) dµYXUŨI, (85)

and by integrating, we indeed obtain

∫ 1B κY|X(·; y) νX̂(u) dµYXU = Σ_{i∈I} ∫ 1B 1i(X̂) κY|X(·; y) νi(u) dµYXU (86)
= Σ_{i∈I} νi(u) ∫ 1B 1i(X̂) κY|X(·; y) dµYXU (87)
= Σ_{i∈I} ( ∫ 1B 1i(X̂) κY|X(·; y) dµYXU ) ( ∫ 1u(Ũi) dµŨI ) (88)
= Σ_{i∈I} ( ∫ 1B 1i(X̂) 1y(Y) dµYXU ) ( ∫ 1u(Ũi) dµŨI ) (89)
= Σ_{i∈I} ∫ 1B 1u(Ũi) 1i(X̂) 1y(Y) dµYXUŨI (90)
= Σ_{i∈I} ∫ 1B 1u(Ũ) 1i(X̂) 1y(Y) dµYXUŨI (91)
= ∫ 1B 1u(Ũ) 1y(Y) dµYXUŨI, (92)

where we used part 2 of Def. 4 in (89) and Fubini's theorem [19, Theorem 14.16] in (90). By replacing κY|X with κY|X̂ and using B ∈ σ(X̂), the same argument can be used to show I(Y; Ũ|X̂) = 0.

Acknowledgment

The authors would like to thank Michael Meidlinger for providing inspiration for this work.
References
[1] N. Tishby, F. C. Pereira, and W. Bialek, “The information
bottleneck method,” in Proc. 37th Annual Allerton Conference
on Communication, Control, and Computing, Monticello, IL,
Sep. 1999, pp. 368–377.
[2] N. Slonim and N. Tishby, “Document clustering using word
clusters via the information bottleneck method,” in Proceedings
of the 23rd Annual International ACM SIGIR Conference on
Research and Development in Information Retrieval, ser. SIGIR
’00, ACM. New York, NY, USA: ACM, 2000, pp. 208–215.
[3] S. Gordon, H. Greenspan, and J. Goldberger, “Applying the
information bottleneck principle to unsupervised clustering of
discrete and continuous image representations,” in Computer
Vision, 2003. Proceedings. Ninth IEEE International Conference
on, Oct 2003, pp. 370–377 vol.1.
[4] E. Schneidman, N. Slonim, N. Tishby, R. deRuyter van
Steveninck, and W. Bialek, “Analyzing neural codes using the
information bottleneck method,” Proc. of Advances in Neural
Information Processing System (NIPS-13), 2002.
[5] G. Zeitler, R. Kötter, G. Bauch, and J. Widmer, “On quantizer
design for soft values in the multiple-access relay channel,” in
Proc. IEEE ICC 2009, Jun. 2009.
[6] G. Zeitler, A. Singer, and G. Kramer, “Low-precision A/D
conversion for maximum information rate in channels with memory,” IEEE Trans. Communications, vol. 60, no. 9, pp. 2511–
2521, 9 2012.
[7] A. Winkelbauer and G. Matz, “Joint network-channel coding for
the asymmetric multiple-access relay channel,” in Proc. IEEE
ICC 2012, Jun. 2012, pp. 2485–2489.
[8] A. Winkelbauer, S. Farthofer, and G. Matz, “The rate-information trade-off for Gaussian vector channels,” in IEEE Int. Symp. Information Theory, Honolulu, HI, USA, Jun. 2014.
[9] A. Winkelbauer and G. Matz, “On quantization of log-likelihood
ratios for maximum mutual information,” in Proc. IEEE
SPAWC, 6 2015, pp. 316–320.
[10] G. Pichler, P. Piantanida, and G. Matz, “Distributed
information-theoretic biclustering of two memoryless sources,”
in Proc. 53rd Annual Allerton Conference on Communication,
Control, and Computing, Monticello, IL, Sep. 2015, pp. 426–433.
[11] T. A. Courtade and T. Weissman, “Multiterminal source coding
under logarithmic loss,” IEEE Trans. Inf. Theory, vol. 60, no. 1,
pp. 740–761, Jan. 2014.
[12] G. Chechik, A. Globerson, N. Tishby, and Y. Weiss, “Information bottleneck for gaussian variables,” Journal of Machine
Learning Research, vol. 6, pp. 165–188, Jan. 2005.
[13] N. Slonim and N. Tishby, “Agglomerative information
bottleneck,” in Advances in Neural Information Processing
Systems 12, S. Solla, T. Leen, and K. Müller, Eds. MIT Press,
2000, pp. 617–623.
[14] B. M. Kurkoski, “On the relationship between the KL means algorithm and the information bottleneck method,” in Proc. 11th International ITG Conference on Systems, Communications and Coding (SCC), Hamburg, Germany, Feb. 2017, pp. 1–6.
[15] M. B. Westover and J. A. O’Sullivan, “Achievable rates for
pattern recognition,” IEEE Trans. Inf. Theory, vol. 54, no. 1,
pp. 299–320, Jan. 2008.
[16] R. Ahlswede and I. Csiszár, “Hypothesis testing with communication constraints,” IEEE Trans. Inf. Theory, vol. 32, no. 4, pp.
533–542, Jul. 1986.
[17] T. S. Han, “Hypothesis testing with multiterminal data compression,” IEEE Trans. Inf. Theory, vol. 33, no. 6, pp. 759–772,
Nov. 1987.
[18] R. M. Gray, Entropy and Information Theory, 1st ed. Springer-Verlag, 2013.
[19] A. Klenke, Probability Theory, ser. Universitext. Springer-Verlag GmbH, 2013.
[20] I. Csiszár and J. Körner, Information Theory: Coding Theorems
for Discrete Memoryless Systems. Cambridge University Press,
Aug. 2011.
AN ERGODIC THEOREM FOR THE QUASI-REGULAR
REPRESENTATION OF THE FREE GROUP
arXiv:1601.00668v1 [] 4 Jan 2016
ADRIEN BOYER AND ANTOINE PINOCHET LOBOS
Abstract. In [BM11], an ergodic theorem à la Birkhoff-von Neumann for the action of the
fundamental group of a compact negatively curved manifold on the boundary of its universal
cover is proved. A quick corollary is the irreducibility of the associated unitary representation.
These results were generalized in [Boy15] to the context of convex cocompact groups of isometries of a CAT(-1) space, using Theorem 4.1.1 of [Rob03], under the hypothesis of non-arithmeticity of the spectrum. We prove all the analogous results in the case of the free group Fr of rank r, even though Fr is not the fundamental group of a closed manifold and may have an arithmetic spectrum.
1. Introduction
In this paper, we consider the action of the free group Fr on its boundary B, a probability
space associated to the Cayley graph of Fr relative to its canonical generating set. This action
is known to be ergodic (see for example [FTP82] and [FTP83]), but since the measure is not
preserved, no theorem on the convergence of means of the corresponding unitary operators had
been proved. Note that a close result is proved in [FTP83, Lemma 4, Item (i)].
We formulate such a convergence theorem in Theorem 1.2. We prove it following the ideas of
[BM11] and [Boy15] replacing [Rob03, Theorem 4.1.1] by Theorem 1.1.
1.1. Geometric setting and notation. We will denote by Fr = ⟨a1, ..., ar⟩ the free group on r generators, for r ≥ 2. For an element γ ∈ Fr, there is a unique reduced word in {a1±1, ..., ar±1} which represents it. This word is denoted γ1 · · · γk for some integer k, which is called the length of γ and is denoted by |γ|. The set of all elements of length k is denoted Sk and is called the sphere of radius k. If u ∈ Fr and k ≥ |u|, let us denote Pru(k) := {γ ∈ Fr | |γ| = k, u is a prefix of γ}.
Let X be the Cayley graph of Fr with respect to the set of generators {a1±1, ..., ar±1}, which is a 2r-regular tree. We endow it with the (natural) distance, denoted by d, which gives length 1 to every edge; for this distance, the natural action of Fr on X is isometric and freely transitive on the vertices; the space X is uniquely geodesic, the geodesics between vertices being finite sequences of successive edges. We denote by [x, y] the unique geodesic joining x to y.
We fix, once and for all, a vertex x0 in X. For x ∈ X, the vertex of X which is closest to x in [x0, x] is denoted by ⌊x⌋; because the action is free, we can identify ⌊x⌋ with the element γ that brings x0 to it, and this identification is an isometry.
The Cayley tree and its boundary. As for any other CAT(−1) space, we can construct a boundary of X and endow it with a distance and a measure. For a general construction, see [Bou95].
The construction we provide here is elementary.
Let us denote by B the set of all right-infinite reduced words on the alphabet {a1±1, ..., ar±1}.
This set is called the boundary of X.
We will consider the set X̄ := X ∪ B.
For u = u1 · · · ul ∈ Fr \ {e}, we define the sets
Xu := {x ∈ X | u is a prefix of ⌊x⌋}
Bu := {ξ ∈ B | u is a prefix of ξ}
2010 Mathematics Subject Classification. Primary 37; Secondary 43, 47.
Key words and phrases. boundary representations, ergodic theorems, irreducibility, equidistribution, free
groups.
Weizmann Institute of Science, [email protected].
Université d’Aix-Marseille, CNRS UMR7373, [email protected].
Cu := Xu ∪ Bu
We can now define a natural topology on X̄ by choosing as a basis of neighborhoods:
(1) for x ∈ X, the set of all neighborhoods of x in X;
(2) for ξ ∈ B, the set {Cu | u is a prefix of ξ}.
For this topology, X̄ is a compact space in which the subset X is open and dense. The induced topology on X is the one given by the distance. Every isometry of X extends continuously to a homeomorphism of X̄.
Distance and measure on the boundary. For ξ1 and ξ2 in B, we define the Gromov product
of ξ1 and ξ2 with respect to x0 by
(ξ1 |ξ2 )x0 := sup {k ∈ N | ξ1 and ξ2 have a common prefix of length k}
and
dx0 (ξ1 , ξ2 ) := e−(ξ1 |ξ2 )x0 .
Then dx0 defines an ultrametric distance on B which induces the same topology; precisely, if ξ = u1 u2 u3 · · ·, then the ball centered at ξ of radius e−k is just Bu1···uk.
On B, there is at most one Borel regular probability measure which is invariant under the isometries of X fixing x0; indeed, such a measure µx0 must satisfy

µx0(Bu) = 1 / (2r(2r − 1)^(|u|−1)),

and it is straightforward to check that the ln(2r − 1)-dimensional Hausdorff measure verifies this property.
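The invariance argument pins down µx0 on cylinders, and the stated formula is indeed consistent: the cylinders of a fixed level partition B, and each Bu splits into its 2r − 1 one-letter extensions. A quick numerical sketch (the rank r = 2 and the levels tested are illustrative choices):

```python
from fractions import Fraction as F

r = 2  # rank of the free group (illustrative choice)

def mu(word_len):
    # mu_{x0}(B_u) for a reduced word u of length word_len >= 1.
    return F(1, 2 * r * (2 * r - 1) ** (word_len - 1))

# The 2r(2r-1)^(l-1) cylinders of level l partition the boundary B:
for l in range(1, 6):
    n_cylinders = 2 * r * (2 * r - 1) ** (l - 1)
    assert n_cylinders * mu(l) == 1

# Additivity: B_u is the disjoint union of its 2r - 1 one-letter extensions.
for l in range(1, 5):
    assert (2 * r - 1) * mu(l + 1) == mu(l)
```

Both checks together are exactly the finite-additivity constraints that force the formula once invariance under vertex-fixing automorphisms is imposed.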
If ξ = u1 · · · un · · · ∈ B and x, y ∈ X, then the sequence (d(x, u1 · · · un) − d(y, u1 · · · un))n∈N is stationary. We denote its limit by βξ(x, y). The function βξ is called the Busemann function at ξ.
Let us denote, for ξ ∈ B and γ ∈ Fr, the function

P(γ, ξ) := (2r − 1)^(βξ(x0, γx0)).
The measure µx0 is, in addition, quasi-invariant under the action of Fr. Precisely, the Radon-Nikodym derivative is given, for γ ∈ Fr and for a.e. ξ ∈ B, by

(dγ∗µx0 / dµx0)(ξ) = P(γ, ξ),

where γ∗µx0(A) = µx0(γ−1 A) for any Borel subset A ⊂ B.
The quasi-regular representation. Denote by

π : Fr → U(L2(B)), γ ↦ π(γ),

the unitary representation, called the quasi-regular representation of Fr on the boundary of X, defined as

(π(γ)g)(ξ) := P(γ, ξ)^(1/2) g(γ−1 ξ)

for γ ∈ Fr and for g ∈ L2(B). We define the Harish-Chandra function

(1.1)   Ξ(γ) := ⟨π(γ)1B, 1B⟩ = ∫B P(γ, ξ)^(1/2) dµx0(ξ),

where 1B denotes the characteristic function on the boundary.
For f ∈ C(X̄), we define the operators

(1.2)   Mn(f) : g ∈ L2(B) ↦ (1/|Sn|) Σ_{γ∈Sn} f(γx0) (π(γ)g / Ξ(γ)) ∈ L2(B).

We also define the operator

(1.3)   M(f) := m(f|B) P1B,

where m(f|B) is the multiplication operator by f|B on L2(B), and P1B is the orthogonal projection onto the subspace of constant functions.
Results. The analog of Roblin’s equidistribution theorem for the free group is the following.
Theorem 1.1. We have, in C(X̄ × X̄)∗, the weak-∗ convergence

(1/|Sn|) Σ_{γ∈Sn} Dγx0 ⊗ Dγ−1x0 ⇀ µx0 ⊗ µx0,

where Dx denotes the Dirac measure at a point x.
Remark 1. It is then straightforward to deduce the weak-∗ convergence

‖mΓ‖ e^(−δn) Σ_{|γ|≤n} Dγx0 ⊗ Dγ−1x0 ⇀ µx0 ⊗ µx0,

mΓ denoting the Bowen-Margulis-Sullivan measure on the geodesic flow of SX/Γ (where SX is the “unit tangent bundle”) and δ denoting ln(2r − 1), the Hausdorff dimension of B.
(1) Notice that in our case, the spectrum is Z, so the geodesic flow is not topologically mixing, according to [Dal99] or directly by [CT01, Ex 1.3].
(2) Notice also that our multiplicative term is different from that of [Rob03, Theorem 4.1.1], which shows that the hypothesis of non-arithmeticity of the spectrum cannot be removed.
We use the above theorem to prove the following convergence of operators.
Theorem 1.2. We have, for all f in C(X̄), the weak operator convergence

Mn(f) −→ M(f) as n → +∞.

In other words, we have, for all f in C(X̄) and for all g, h in L2(B), the convergence

(1/|Sn|) Σ_{γ∈Sn} f(γx0) ⟨π(γ)g, h⟩ / Ξ(γ) −→ ⟨M(f)g, h⟩ as n → +∞.
We deduce the irreducibility of π, and give an alternative proof of this well known result (see
[FTP82, Theorem 5]).
Corollary 1.3. The representation π is irreducible.
Proof. Applying Theorem 1.2 to f = 1X̄ shows that the orthogonal projection onto the space of constant functions is in the von Neumann algebra associated with π. Then applying Theorem 1.2 to g = 1B shows that the vector 1B is cyclic. Finally, the classical argument of [Gar14, Lemma 6.1] concludes the proof.
Remark 2. For α ∈ R∗+, let us denote by Wα the wedge of two circles, one of length 1 and the other of length α. Let p : Tα ։ Wα be the universal cover, with Tα endowed with the distance making p a local isometry. Then F2 ≃ π1(Wα) acts freely, properly discontinuously and cocompactly on the 4-regular tree Tα (which is a CAT(-1) space) by isometries. For α ∈ R \ Q, the analog of Theorem 1.2 for the quasi-regular representation πα of F2 on L2(∂Tα, µα), for a Patterson-Sullivan measure associated to a Bourdon distance, is known to hold ([Boy15]) because [Rob03, Theorem 4.1.1] is true in this setting. Now if α1 and α2 are such that α1 ≠ α2±1, then the representations παi are not unitarily equivalent ([Gar14, Theorem 7.5]). For α ∈ Q∗+ \ {1}, it would be interesting to formulate and prove an equidistribution result like Theorem 1.1 in order to prove Theorem 1.2 for πα.
2. Proofs
2.1. Proof of the equidistribution theorem. For the proof of Theorem 1.1, let us denote

E := { f ∈ C(X̄ × X̄) | (1/|Sn|) Σ_{γ∈Sn} f(γx0, γ−1x0) → ∫_{X̄×X̄} f d(µx0 ⊗ µx0) }.

The subspace E is clearly closed in C(X̄ × X̄); it remains only to show that it contains a dense subspace of C(X̄ × X̄).
Let us define a modified version of certain characteristic functions: for u ∈ Fr we define

χu(x) := max{1 − dX(x, Cu), 0}  if x ∈ X,
χu(x) := 0                      if x ∈ B \ Bu,
χu(x) := 1                      if x ∈ Bu.

It is easy to check that the function χu is a continuous function which coincides with χCu on Fr x0 and B.
The proof of the following lemma is straightforward.
Lemma 2.1. Let u ∈ Fr and k ≥ |u|. Then χu − Σ_{γ∈Pru(k)} χγ has compact support included in X.
Proposition 2.2. The set χ := {χu | u ∈ Fr \ {e}} separates points of B, and the product
of two such functions of χ is either in χ, the sum of a function in χ and of a function with
compact support contained in X, or zero.
Proof. It is clear that χ separates points. It follows from Lemma 2.1 that χu χv = χv if u is a proper prefix of v, that χu² − χu has compact support in X, and that χu χv = 0 if none of u and v is a proper prefix of the other.
Proposition 2.3. The subspace E contains all functions of the form χu ⊗ χv .
Proof. We make the useful observation that

(1/|Sn|) Σ_{γ∈Sn} (χu ⊗ χv)(γx0, γ−1x0) = |Sn^{u,v}| / |Sn|,

where Sn^{u,v} is the set of reduced words of length n with u as a prefix and v−1 as a suffix. We easily see that this set is in bijection with the set of all reduced words of length n − (|u| + |v|) that do not begin with the inverse of the last letter of u and do not end with the inverse of the first letter of v−1. So we have to compute, for s, t ∈ {a1±1, ..., ar±1} and m ∈ N, the cardinality of the set Sm(s, t) of reduced words of length m that do not start with s and do not end with t.
Now we have

Sm = Sm(s, t) ∪ {x | |x| = m and x starts with s} ∪ {x | |x| = m and x ends with t}.

Note that the intersection of the two last sets is the set of words both starting with s and ending with t, which is in bijection with Sm−2(s−1, t−1).
We have then the recurrence relation:

|Sm(s, t)| = 2r(2r − 1)^(m−1) − 2(2r − 1)^(m−1) + |Sm−2(s−1, t−1)|
= 2(r − 1)(2r − 1)^(m−1) + 2(r − 1)(2r − 1)^(m−3) + |Sm−4(s, t)|
= (2r − 1)^m · 2(r − 1)((2r − 1)² + 1)/(2r − 1)³ + |Sm−4(s, t)|.

We set C := 2(r − 1)((2r − 1)² + 1)/(2r − 1)³, write m = 4k + j with 0 ≤ j ≤ 3, and we obtain

|S4k+j(s, t)| = C(2r − 1)^(4k+j) + |S4(k−1)+j(s, t)|
= C(2r − 1)^(4k+j) + C(2r − 1)^(4(k−1)+j) + |S4(k−2)+j(s, t)|
= C Σ_{i=1}^{k} (2r − 1)^(4i+j) + |Sj(s, t)|
= C(2r − 1)^(4+j) ((2r − 1)^(4k) − 1)/((2r − 1)⁴ − 1) + |Sj(s, t)|
= (2r − 1)^(1+j) ((2r − 1)^(4k) − 1)/(2r) + |Sj(s, t)|.

Now we can compute
|S4k+j^{u,v}| / |S4k+j| = |S4k+j−(|u|+|v|)(u|u|, v|v|^(−1))| / |S4k+j|
= [ (2r − 1)^(1+j) ((2r − 1)^(4k−(|u|+|v|)) − 1)/(2r) + |Sj(u|u|, v|v|^(−1))| ] / (2r(2r − 1)^(4k+j−1))
= 1/(2r(2r − 1)^(|u|−1)) · 1/(2r(2r − 1)^(|v|−1)) + o(1)
= µx0(Bu) µx0(Bv) + o(1)

when k → ∞, and this proves the claim.
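The inclusion-exclusion recurrence for |Sm(s, t)| can be confirmed by brute-force enumeration of reduced words in F2. A small sketch (the parameters m = 4, s = a, t = b are illustrative choices):

```python
letters = "aAbB"  # a, a^-1, b, b^-1 in the free group F_2
inv = {"a": "A", "A": "a", "b": "B", "B": "b"}

def sphere(m):
    # All reduced words of length m (no letter followed by its inverse).
    if m == 0:
        return [""]
    return [w + c for w in sphere(m - 1) for c in letters
            if not w or inv[w[-1]] != c]

def S(m, s, t):
    # Reduced words of length m not starting with s and not ending with t.
    return [w for w in sphere(m) if not w.startswith(s) and not w.endswith(t)]

r, m, s, t = 2, 4, "a", "b"
lhs = len(S(m, s, t))
# Recurrence: |S_m(s,t)| = 2r(2r-1)^(m-1) - 2(2r-1)^(m-1) + |S_{m-2}(s^-1, t^-1)|
rhs = 2 * r * (2 * r - 1) ** (m - 1) - 2 * (2 * r - 1) ** (m - 1) \
      + len(S(m - 2, inv[s], inv[t]))
assert lhs == rhs == 61
```

The asserted value matches the inclusion-exclusion count 108 − 27 − 27 + 7 for these parameters.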
Corollary 2.4. The subspace E is dense in C(X̄ × X̄).

Proof. Let us consider E′, the subspace generated by the constant functions, the functions which can be written as f ⊗ g, where f, g are continuous functions on X̄ and such that one of them has compact support included in X, and the functions of the form χu ⊗ χv. By Proposition 2.2, it is a subalgebra of C(X̄ × X̄) containing the constants and separating points, so by the Stone-Weierstraß theorem, E′ is dense in C(X̄ × X̄). Now, by Proposition 2.3, we have that E′ ⊆ E, so E is dense as well.
2.2. Proof of the ergodic theorem. The proof of Theorem 1.2 consists of two steps:
Step 1: Prove that the sequence Mn is bounded in L(C(X̄), B(L2(B))).
Step 2: Prove that the sequence converges on a dense subset.

2.2.1. Boundedness. In the following, 1X̄ denotes the characteristic function of X̄. Define

Fn := Mn(1X̄) 1B.

We denote by Ξ(n) the common value of Ξ on elements of length n.

Corollary 2.5. The function ξ ↦ Σ_{γ∈Sn} P(γ, ξ)^(1/2) is constant, equal to |Sn| × Ξ(n).
Proof. This function is constant on orbits of the action of the group of automorphisms of X fixing x0. Since this action is transitive on B, the function is constant. By integrating, we find

Σ_{γ∈Sn} P(γ, ξ)^(1/2) = ∫B Σ_{γ∈Sn} P(γ, ξ)^(1/2) dµx0(ξ)
= Σ_{γ∈Sn} ∫B P(γ, ξ)^(1/2) dµx0(ξ)
= Σ_{γ∈Sn} Ξ(n)
= |Sn| Ξ(n).
Lemma 2.6. The function Fn is constant, equal to 1B .
Proof. Because Ξ depends only on the length, we have that
F_n(ξ) := (1/|S_n|) Σ_{γ∈S_n} (P(γ, ξ))^{1/2}/Ξ(γ) = (1/(|S_n| Ξ(n))) Σ_{γ∈S_n} (P(γ, ξ))^{1/2} = 1,
and the proof is done.
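Lemma 2.6 can also be checked by brute force for small ranks: since P(γ, ξ)^{1/2} = (2r − 1)^{cp(γ,ξ) − n/2}, where cp(γ, ξ) is the length of the common prefix of γ ∈ S_n and the ray defining ξ, and Ξ(n) = (1 + (r − 1)n/r)(2r − 1)^{−n/2}, the identity F_n(ξ) = 1 amounts to Σ_{γ∈S_n} (2r − 1)^{cp(γ,ξ)} = 2(2r − 1)^{n−1}(r + (r − 1)n). A sketch (ours; the encoding of the generators as ±1, . . . , ±r is an assumption):

```python
# Brute-force check of Lemma 2.6 (F_n ≡ 1): with
#   P(γ, ξ)^{1/2} = (2r-1)^{cp(γ,ξ) - n/2}   (cp = common-prefix length)
#   Ξ(n) = (1 + (r-1)n/r) (2r-1)^{-n/2},
# F_n(ξ) = 1 is equivalent to
#   sum over γ in S_n of (2r-1)^{cp(γ,ξ)} = 2 (2r-1)^{n-1} (r + (r-1)n).
from itertools import product

def reduced_words(r, n):
    """All reduced words of length n in F_r; generators encoded as ±1, ..., ±r."""
    letters = list(range(1, r + 1)) + [-i for i in range(1, r + 1)]
    for w in product(letters, repeat=n):
        if all(w[i] != -w[i + 1] for i in range(n - 1)):
            yield w

def common_prefix(w, xi):
    c = 0
    for a, b in zip(w, xi):
        if a != b:
            break
        c += 1
    return c

r, n = 2, 4
q = 2 * r - 1
xi = (1, 2, 1, 2, 1, 2)      # a reduced ray representing the boundary point ξ
total = sum(q ** common_prefix(w, xi) for w in reduced_words(r, n))
assert total == 2 * q ** (n - 1) * (r + (r - 1) * n)
print(total)
```

The enumeration is exponential in n, so it is only a sanity check for small spheres.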
It is easy to see that Mn (f ) induces continuous linear transformations of L1 and L∞ , which
we also denote by Mn (f ).
Proposition 2.7. The operator Mn (1X ), as an element of L(L∞ , L∞ ), has norm 1; as an
element of B(L2 (B)), it is self-adjoint.
Proof. Let h ∈ L∞(B). Since Mn(1X) is positive, we have that

‖Mn(1X) h‖∞ ≤ ‖Mn(1X) 1B‖∞ ‖h‖∞ = ‖Fn‖∞ ‖h‖∞ = ‖h‖∞,

so that ‖Mn(1X)‖_{L(L∞,L∞)} ≤ 1.
The self-adjointness follows from the fact that π(γ)∗ = π(γ −1 ) and that the set of summation
is symmetric.
Let us briefly recall one useful corollary of the Riesz–Thorin theorem:
Let (Z, µ) be a probability space.
Proposition 2.8. Let T be a continuous operator of L1 (Z) to itself such that the restriction T2
to L2 (Z) (resp. T∞ to L∞ (Z)) induces a continuous operator of L2 (Z) to itself (resp. L∞ (Z)
to itself ).
Suppose also that T2 is self-adjoint, and assume that ‖T∞‖_{L(L∞(Z),L∞(Z))} ≤ 1. Then ‖T2‖_{L(L²(Z),L²(Z))} ≤ 1.
Proof. Consider the adjoint operator T* of (L¹)* = L∞ to itself. We have that

‖T*‖_{L(L∞,L∞)} = ‖T‖_{L(L¹(Z),L¹(Z))}.

Now, because T2 is self-adjoint, it is easy to see that T* = T∞. This implies

1 ≥ ‖T*‖_{L(L∞,L∞)} = ‖T‖_{L(L¹(Z),L¹(Z))}.

Hence the Riesz–Thorin theorem gives the claim.
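The interpolation inequality behind Proposition 2.8, ‖T‖_{L²→L²} ≤ ‖T‖_{L¹→L¹}^{1/2} ‖T‖_{L∞→L∞}^{1/2}, can be illustrated on a finite uniform measure space, where operators are matrices and the three norms are the spectral norm, the maximal column sum, and the maximal row sum (our numerical sketch, not part of the paper):

```python
# Riesz–Thorin endpoint interpolation for a matrix A on a finite uniform
# measure space: ||A||_{2->2} <= sqrt(||A||_{1->1} * ||A||_{inf->inf}),
# with ||A||_{1->1} = max column sum and ||A||_{inf->inf} = max row sum.
import numpy as np

rng = np.random.default_rng(0)
for _ in range(100):
    A = rng.normal(size=(6, 6))
    n1 = np.abs(A).sum(axis=0).max()     # L1 -> L1 operator norm
    ninf = np.abs(A).sum(axis=1).max()   # Linf -> Linf operator norm
    n2 = np.linalg.norm(A, 2)            # L2 -> L2 (spectral) norm
    assert n2 <= np.sqrt(n1 * ninf) + 1e-12
print("interpolation bound verified on 100 random matrices")
```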
Proposition 2.9. The sequence (Mn )n∈N is bounded in L(C(X), B(L2 (B))).
Proof. Because Mn (f ) is positive in f , we have, for every positive g ∈ L2 (B), the inequality
−‖f‖∞ [Mn(1X)]g ≤ [Mn(f)]g ≤ ‖f‖∞ [Mn(1X)]g,

from which we deduce, for every g ∈ L²(B),

‖[Mn(f)]g‖_{L²} ≤ ‖f‖∞ ‖[Mn(1X)]g‖_{L²} ≤ ‖f‖∞ ‖Mn(1X)‖_{B(L²)} ‖g‖_{L²},

which allows us to conclude that

‖Mn(f)‖_{B(L²)} ≤ ‖Mn(1X)‖_{B(L²)} ‖f‖∞.

This proves that ‖Mn‖_{L(C(X),B(L²))} ≤ ‖Mn(1X)‖_{B(L²)}.
Now, it follows from Proposition 2.7 and Proposition 2.8 that the sequence (Mn (1X ))n∈N is
bounded by 1 in B(L2 ), so we are done.
2.2.2. Estimates for the Harish-Chandra function. The values of the Harish-Chandra function are known (see for example [FTP82, Theorem 2, Item (iii)]). We provide here the simple computations we need.
We will calculate the value of

⟨π(γ)1_B, 1_{B_u}⟩ = ∫_{B_u} P(γ, ξ)^{1/2} dµ_{x0}(ξ).
Lemma 2.10. Let γ = s1 · · · sn ∈ F_r. Let l ∈ {1, . . . , |γ|}, and let u = s1 · · · s_{l−1} t_l t_{l+1} · · · t_{l+k}, with t_l ≠ s_l and k ≥ 0, be a reduced word¹. Then

⟨π(γ)1_B, 1_{B_u}⟩ = 1/(2r(2r − 1)^{|γ|/2 + k})

and

⟨π(γ)1_B, 1_{B_γ}⟩ = (2r − 1)/(2r(2r − 1)^{|γ|/2}).
Proof. The function ξ ↦ β_ξ(x0, γx0) is constant on B_u, equal to 2(l − 1) − |γ|. So ⟨π(γ)1_B, 1_{B_u}⟩ is the integral of a constant function:

∫_{B_u} P(γ, ξ)^{1/2} dµ_{x0}(ξ) = µ_{x0}(B_u) e^{log(2r−1)((l−1)−|γ|/2)} = 1/(2r(2r − 1)^{|γ|/2 + k}).

The value of ⟨π(γ)1_B, 1_{B_γ}⟩ is computed in the same way.
Lemma 2.11 (The Harish-Chandra function). Let γ = s1 · · · sn ∈ S_n be written as a reduced word. We have that

Ξ(γ) = (1 + ((r − 1)/r)|γ|)(2r − 1)^{−|γ|/2}.
Proof. We decompose B into the following partition:

B = ⊔_{u1 ≠ s1} B_{u1} ⊔ ⊔_{l=2}^{|γ|} ⊔_{u = s1···s_{l−1}t_l, t_l ∉ {s_l, (s_{l−1})^{−1}}} B_u ⊔ B_γ,

and Lemma 2.10 provides us the value of the integral on the subsets forming this partition. A simple calculation yields the announced formula.
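The "simple calculation" can be spelled out: by Lemma 2.10, each cylinder contributes (2r − 1)^{−|γ|/2}/(2r) times an integer weight — (2r − 1) from the l = 1 cylinders together, (2r − 2) from each level l = 2, . . . , |γ|, and (2r − 1) from B_γ. An exact check that these weights sum to the announced formula (our sketch):

```python
# Partition sum in Lemma 2.11: in units of (2r-1)^{-n/2}/(2r), the pieces
# contribute (2r-1) for l = 1, (2r-2) for each l = 2..n, and (2r-1) for B_γ;
# their sum divided by 2r must equal 1 + (r-1)n/r.
from fractions import Fraction

for r in range(2, 6):
    q = 2 * r - 1
    for n in range(1, 12):
        pieces = q + (n - 1) * (q - 1) + q
        lhs = Fraction(pieces, 2 * r)
        rhs = 1 + Fraction((r - 1) * n, r)
        assert lhs == rhs, (r, n)
print("partition sum matches the Harish-Chandra formula")
```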
The proof of the following lemma is then obvious:
Lemma 2.12. If γ, w ∈ F_r are such that w is not a prefix of γ, then there is a constant C_w not depending on γ such that

⟨π(γ)1_B, 1_{B_w}⟩/Ξ(γ) ≤ C_w/|γ|.
2.2.3. Analysis of matrix coefficients. The goal of this section is to compute the limit of the matrix coefficients ⟨Mn(χu)1_{B_v}, 1_{B_w}⟩.
Lemma 2.13. Let u, w ∈ F_r be such that none of them is a prefix of the other (i.e. B_u ∩ B_w = ∅). Then

lim_{n→∞} ⟨Mn(χu)1_B, 1_{B_w}⟩ = 0.
Proof. Using Lemma 2.12, we get

⟨Mn(χu)1_B, 1_{B_w}⟩ = (1/|S_n|) Σ_{γ∈S_n} χ_u(γx0) ⟨π(γ)1_B, 1_{B_w}⟩/Ξ(γ)
= (1/|S_n|) Σ_{γ∈C_u∩S_n} ⟨π(γ)1_B, 1_{B_w}⟩/Ξ(γ)
≤ (1/|S_n|) Σ_{γ∈C_u∩S_n} C_w/|γ|
= O(1/n).
¹For l = 1, s1 · · · s_{l−1} is e by convention.
Lemma 2.14. Let u, v ∈ F_r. Then

lim sup_{n→∞} ⟨Mn(χu)1_{B_v}, 1_B⟩ ≤ µ_{x0}(B_u)µ_{x0}(B_v).
Proof. We compute

⟨Mn(χu)1_{B_v}, 1_B⟩ = ⟨Mn(χu)* 1_B, 1_{B_v}⟩
= (1/|S_n|) Σ_{γ∈S_n} χ_u(γ^{−1}x0) ⟨π(γ)1_B, 1_{B_v}⟩/Ξ(γ)
≤ (1/|S_n|) Σ_{γ∈S_n} χ_u(γ^{−1}x0) χ_v(γx0) + (1/|S_n|) Σ_{γ∈S_n, γ∉C_v} χ_u(γ^{−1}x0) ⟨π(γ)1_B, 1_{B_v}⟩/Ξ(γ)
= (1/|S_n|) Σ_{γ∈S_n} χ_u(γ^{−1}x0) χ_v(γx0) + O(1/n).

Hence, by taking the lim sup and using Theorem I, we obtain the desired inequality.
Proposition 2.15. For all u, v, w ∈ F_r, we have

lim_{n→∞} ⟨Mn(χu)1_{B_v}, 1_{B_w}⟩ = µ_{x0}(B_u ∩ B_w)µ_{x0}(B_v).
Proof. We first show the inequality

lim sup_{n→∞} ⟨Mn(χu)1_{B_v}, 1_{B_w}⟩ ≤ µ_{x0}(B_u ∩ B_w)µ_{x0}(B_v).

If none of u and w is a prefix of the other, we have nothing to do according to Lemma 2.13. Let us assume that u is a prefix of w (the other case can be treated analogously), so that B_u ∩ B_w = B_w. We have, by Lemma 2.14, that

µ_{x0}(B_w)µ_{x0}(B_v) ≥ lim sup_{n→∞} ⟨Mn(χw)1_{B_v}, 1_B⟩
≥ lim sup_{n→∞} ⟨Mn(χw)1_{B_v}, 1_{B_w}⟩
≥ lim sup_{n→∞} ⟨Mn(χw)1_{B_v}, 1_{B_w}⟩ + Σ_{γ∈Pr_u(|w|)\{w}} lim sup_{n→∞} ⟨Mn(χγ)1_{B_v}, 1_{B_w}⟩
≥ lim sup_{n→∞} ⟨Mn(χu)1_{B_v}, 1_{B_w}⟩,

where the sum in the third line vanishes by Lemma 2.13.
We now compute the expected limit. Let us define

S_{u,v,w} := {(u′, v′, w′) ∈ F_r³ : |u| = |u′|, |v| = |v′|, |w| = |w′|}.

Then

1 = lim inf_{n→∞} ⟨Mn(1X)1_B, 1_B⟩
≤ lim inf_{n→∞} ⟨Mn(χu)1_{B_v}, 1_{B_w}⟩ + Σ_{(u′,v′,w′)∈S_{u,v,w}\{(u,v,w)}} lim sup_{n→∞} ⟨Mn(χ_{u′})1_{B_{v′}}, 1_{B_{w′}}⟩
≤ lim sup_{n→∞} ⟨Mn(χu)1_{B_v}, 1_{B_w}⟩ + Σ_{(u′,v′,w′)∈S_{u,v,w}\{(u,v,w)}} lim sup_{n→∞} ⟨Mn(χ_{u′})1_{B_{v′}}, 1_{B_{w′}}⟩
≤ µ_{x0}(B_u ∩ B_w)µ_{x0}(B_v) + Σ_{(u′,v′,w′)∈S_{u,v,w}\{(u,v,w)}} µ_{x0}(B_{u′} ∩ B_{w′})µ_{x0}(B_{v′})
= 1.
This proves that all the inequalities above are in fact equalities, and moreover proves that the inequalities

lim inf_{n→∞} ⟨Mn(χu)1_{B_v}, 1_{B_w}⟩ ≤ lim sup_{n→∞} ⟨Mn(χu)1_{B_v}, 1_{B_w}⟩ ≤ µ_{x0}(B_u ∩ B_w)µ_{x0}(B_v)

are in fact equalities.
Proof of Theorem 1.2. Because of the boundedness of the sequence (Mn )n∈N proved in Proposition 2.9, it is enough to prove the convergence for all (f, h1 , h2 ) in a dense subset of C(X) ×
L2 × L2 , which is what Proposition 2.15 asserts.
References
[BM11] U. Bader and R. Muchnik. Boundary unitary representations - irreducibility and rigidity. Journal of
Modern Dynamics, 5(1):49–69, 2011.
[Bou95] M. Bourdon. Structure conforme au bord et flot géodésique d’un CAT(-1)-espace. Enseign. Math,
2(2):63–102, 1995.
[Boy15] A. Boyer. Equidistribution, ergodicity and irreducibility in CAT(-1) spaces. arXiv:1412.8229v2, 2015.
[CT01] C. Charitos and G. Tsapogas. Topological mixing in CAT(-1)-spaces. Trans. of the American Math.
Society, 354(1):235–264, 2001.
[Dal99] F. Dal’bo. Remarques sur le spectre des longueurs d’une surface et comptages. Bol. Soc. Bras. Math.,
30(2):199–221, 1999.
[FTP82] A. Figà-Talamanca and M. A. Picardello. Spherical functions and harmonic analysis on free groups. J.
Functional Anal., 47:281–304, 1982.
[FTP83] A. Figà-Talamanca and M. A. Picardello. Harmonic analysis on free groups. Lecture Notes in Pure and
Applied Mathematics, 87, 1983.
[Gar14] L. Garncarek. Boundary representations of hyperbolic groups. arXiv:1404.0903, 2014.
[Rob03] T. Roblin. Ergodicité et Equidistribution en courbure négative. Mémoires de la SMF 95, 2003.
On the weak convergence of the empirical conditional copula
under a simplifying assumption
François Portier∗
Johan Segers†
arXiv:1511.06544v2 [] 16 May 2017
May 17, 2017
Abstract. A common assumption in pair-copula constructions is that the copula of
the conditional distribution of two random variables given a covariate does not depend on
the value of that covariate. Two conflicting intuitions arise about the best possible rate of
convergence attainable by nonparametric estimators of that copula. On the one hand, the
best possible rates for estimating the marginal conditional distribution functions is slower
than the parametric one. On the other hand, the invariance of the conditional copula given
the value of the covariate suggests the possibility of parametric convergence rates. The more
optimistic intuition is shown to be correct, confirming a conjecture supported by extensive
Monte Carlo simulations by I. Hobaek Haff and J. Segers [Computational Statistics and Data
Analysis 84:1–13, 2015] and improving upon the nonparametric rate obtained theoretically
by I. Gijbels, M. Omelka and N. Veraverbeke [Scandinavian Journal of Statistics 42:1109–
1126, 2015]. The novelty of the proposed approach lies in a double smoothing procedure for
the estimator of the marginal conditional distribution functions. The copula estimator itself
is asymptotically equivalent to an oracle empirical copula, as if the marginal conditional
distribution functions were known.
Keywords: Donsker class; Empirical copula process; Local linear estimator; Pair-copula
construction; Partial copula; Smoothing; Weak convergence.
1 Introduction
Let (Y1 , Y2 ) be a pair of continuous random variables with joint distribution function H(y1 , y2 ) =
Pr(Y1 ≤ y1 , Y2 ≤ y2 ) and marginal distribution functions Fj (yj ) = Pr(Yj ≤ yj ), for yj ∈ R
and j ∈ {1, 2}. If H is continuous, then (F1 (Y1 ), F2 (Y2 )) is a pair of uniform (0, 1) random
variables. Their joint distribution function, D(u1 , u2 ) = Pr{F1 (Y1 ) ≤ u1 , F2 (Y2 ) ≤ u2 } for uj ∈
[0, 1], is therefore a copula. By Sklar’s celebrated theorem (Sklar, 1959), we have H(y1 , y2 ) =
D{F1 (y1 ), F2 (y2 )}. The copula D thus captures the dependence between the random variables
Y1 and Y2 .
Suppose there is a third random variable, X, and suppose the joint conditional distribution
of (Y1 , Y2 ) given X = x, with x ∈ R, is continuous. Then we can apply Sklar’s theorem to the
joint conditional distribution function H(y1 , y2 |x) = Pr(Y1 ≤ y1 , Y2 ≤ y2 | X = x). Writing
Fj (yj |x) = Pr(Yj ≤ yj | X = x), we have H(y1 , y2 |x) = C{F1 (y1 |x), F2 (y2 |x) | x}, where
C(u1 , u2 |x) = Pr{F1 (Y1 |x) ≤ u1 , F2 (Y2 |x) ≤ u2 | X = x} is the copula of the conditional
distribution of (Y1 , Y2 ) given X = x. This conditional copula thus captures the dependence
between Y1 and Y2 conditionally on X = x. Examples include exchange rates before and
after the introduction of the euro (Patton, 2006), diastolic versus systolic blood pressure when
∗ LTCI, CNRS, Télécom ParisTech, Université Paris-Saclay. E-mail: [email protected]
† Université catholique de Louvain, Institut de Statistique, Biostatistique et Sciences Actuarielles, Voie du Roman Pays 20, B-1348 Louvain-la-Neuve, Belgium. E-mail: [email protected]
controlling for cholesterol (Lambert, 2007), and life expectancies of males versus females for
different categories of countries (Veraverbeke et al., 2011).
Evidently, we can integrate out the joint and marginal conditional distributions to obtain their unconditional versions: if X has density fX, then H(y1, y2) = ∫ H(y1, y2|x) fX(x) dx and similarly for Fj(yj). For the copula, however, this relation does not hold: in general, D(u1, u2) will be different from ∫ C(u1, u2|x) fX(x) dx.
Example 1.1. Suppose that Y1 and Y2 are conditionally independent given X: for all x, we
have H(y1 , y2 |x) = F1 (y1 |x) F2 (y2 |x). Then the copula of the conditional distribution is the
independence copula: C(u1 , u2 |x) = u1 u2 . Nevertheless, unconditionally, Y1 and Y2 need not
be independent, and the copula, D, of the unconditional distribution of (Y1 , Y2 ) can be different
from the independence copula.
In the previous example, the copula of the conditional distribution of (Y1 , Y2 ) given X = x
does not depend on the value of x. This invariance property is called the simplifying assumption.
The property does not hold in general but it is satisfied, for instance, for trivariate Gaussian
distributions. For further examples and counterexamples of distributions which do or do not
satisfy the simplifying assumption, see for instance Hobaek Haff et al. (2010) and Stöber et al.
(2013).
Example 1.2. Let (Y1 , Y2 , X) be trivariate Gaussian with means µ1 , µ2 , µX , standard deviations
σ1 , σ2 , σX > 0, and correlations ρ12 , ρ1X , ρ2X ∈ (−1, 1). The conditional distribution of (Y1 , Y2 )
given X = x is bivariate Gaussian with means µj + ρjX (σj/σX)(x − µX) and standard deviations σj(1 − ρ²jX)^{1/2} for j ∈ {1, 2}, while the correlation is given by the partial correlation of (Y1, Y2) given X, i.e., ρ12|X = (ρ12 − ρ1X ρ2X)/{(1 − ρ²1X)(1 − ρ²2X)}^{1/2}. As a consequence, the conditional copula
of (Y1 , Y2 ) given X = x is the so-called Gaussian copula with correlation parameter ρ12|X ,
whatever the value of x, while the unconditional copula of (Y1 , Y2 ) is the Gaussian copula with
correlation parameter ρ12 .
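The formula for ρ12|X can be verified against the conditional covariance matrix of (Y1, Y2) given X, computed as a Schur complement of the correlation matrix. The sketch below (ours, not part of the paper) uses the correlation values of the numerical experiments in Section 3.3, for which ρ12|X = 0.5:

```python
# Verify the partial-correlation formula via the Schur complement of the
# correlation matrix of (Y1, Y2, X); the values are the ones used in the
# numerical experiments of Section 3.3.
import numpy as np

rho12, rho1X, rho2X = 0.3689989, 0.4, -0.2
R = np.array([[1.0,   rho12, rho1X],
              [rho12, 1.0,   rho2X],
              [rho1X, rho2X, 1.0]])
S = R[:2, :2] - np.outer(R[:2, 2], R[2, :2])       # cov of (Y1, Y2) given X
rho_cond = S[0, 1] / np.sqrt(S[0, 0] * S[1, 1])
formula = (rho12 - rho1X * rho2X) / np.sqrt((1 - rho1X**2) * (1 - rho2X**2))
assert abs(rho_cond - formula) < 1e-12
assert abs(rho_cond - 0.5) < 1e-6                  # the rho_{12|X} of Section 3.3
print(round(rho_cond, 6))
```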
Given an iid sample (Xi , Yi1 , Yi2 ), i = 1, . . . , n, we seek to make nonparametric inference
on the conditional dependence of (Y1 , Y2 ) given X under the simplifying assumption that there
exists a single copula, C, such that C(u1 , u2 |x) = C(u1 , u2 ) for all x and all (u1 , u2 ). As
noted in the two examples above, C is usually not equal to the copula, D, of the unconditional
distribution of (Y1 , Y2 ). The inference problem arises in so-called pair-copula constructions,
where a multivariate copula is broken down into multiple bivariate copulas through iterative
conditioning (Joe, 1996; Bedford and Cooke, 2002; Aas et al., 2009). If the pair copulas are
assumed to belong to parametric copula families, then likelihood-based inference yields 1/√n-consistent estimators of the copula parameters (Hobæk Haff, 2013). Here, we concentrate
instead on the nonparametric case. The mathematical analysis is difficult and our treatment is
therefore limited to a single conditioning variable.
Suppose we have no further structural information on the joint distribution of (X, Y1 , Y2 )
besides smoothness and the simplifying assumption. Then how well can we hope to estimate the
conditional copula C? The simplifying assumption implies that C is an object that combines
both local and global properties of the joint distribution of (X, Y1 , Y2 ): conditioning upon X = x
and integrating out, we find that C is equal to the partial copula of (Y1 , Y2 ) given X (Bergsma,
2011; Gijbels et al., 2015b),
C(u1 , u2 ) = Pr{F1 (Y1 |X) ≤ u1 , F2 (Y2 |X) ≤ u2 }.
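This identity can be checked by simulation in the Gaussian model of Example 1.2: there, F_j(Y_j|X) ≤ 1/2 exactly when Y_j lies below its conditional mean, and the Gaussian copula with parameter ρ12|X = 0.5 satisfies C(1/2, 1/2) = 1/4 + arcsin(1/2)/(2π) = 1/3. A Monte Carlo sketch (ours, reusing the correlation values of Section 3.3):

```python
# Monte Carlo check of C(u1, u2) = Pr{F1(Y1|X) <= u1, F2(Y2|X) <= u2} in the
# Gaussian model: at u = (1/2, 1/2), F_j(Y_j|X) <= 1/2 iff Y_j <= rho_jX * X,
# and the Gaussian copula with rho_{12|X} = 0.5 gives exactly 1/3.
import numpy as np

rho12, rho1X, rho2X = 0.3689989, 0.4, -0.2         # so that rho_{12|X} = 0.5
R = np.array([[1.0,   rho12, rho1X],
              [rho12, 1.0,   rho2X],
              [rho1X, rho2X, 1.0]])
rng = np.random.default_rng(42)
Y1, Y2, X = rng.multivariate_normal(np.zeros(3), R, size=200_000).T
hit = np.mean((Y1 <= rho1X * X) & (Y2 <= rho2X * X))
assert abs(hit - 1 / 3) < 1e-2
print(hit)
```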
On the one hand, the necessity to estimate the univariate conditional distribution functions
Fj (yj |x) suggests that the best convergence rate that can be achieved for estimating C will be
slower than 1/√n. On the other hand, since the outer probability is an unconditional one, we may hope to achieve the parametric rate, 1/√n.
In Hobæk Haff and Segers (2015), evidence from extensive numerical experiments was provided to support the more optimistic conjecture that the parametric rate 1/√n can be achieved.
The estimator proposed was constructed as the empirical copula (Deheuvels, 1979; Rüschendorf,
1976) based on the pairs
(F̂n,1(Yi1|Xi), F̂n,2(Yi2|Xi)),   i = 1, . . . , n,   (1.1)
where the F̂n,j were nearest-neighbour estimators of the Fj. In Gijbels et al. (2015a), a high-level theorem was provided saying that if F̂n,j are estimators of the univariate conditional margins that satisfy a number of assumptions, then the empirical copula based on the pairs (1.1) is consistent and asymptotically normal with convergence rate 1/√n. In addition, it was
(1.1) is consistent and asymptotically normal with convergence rate 1/ n. In addition, it was
proposed to estimate the univariate conditional margins Fj by local linear estimators. However,
attempts to prove that any of these estimators of Fj also satisfy the conditions of the cited
theorem have failed so far. The only positive results have been obtained under additional
structural assumptions on the conditional margins, exploitation of which leads to estimators
F̂n,j with the desired properties.
The contribution of our paper is then two-fold:
1. We provide an alternative to Theorem 2 in Gijbels et al. (2015a), imposing weaker conditions on the estimators F̂n,j of the univariate conditional margins, and concluding that
the empirical copula based on the pairs (1.1) is consistent and asymptotically normal with
rate 1/√n. The conclusion of our main theorem is a bit weaker than the one of the cited
theorem in that we can only prove weak convergence of stochastic processes on [γ, 1 − γ]2 ,
for arbitrary 0 < γ < 1/2, rather than on [0, 1]2 .
2. We provide nonparametric estimators F̂n,j of the conditional margins Fj that actually
satisfy the conditions of our theorem. The estimators are smoothed local linear estimators.
In contrast to Gijbels et al. (2015a), we also smooth in the yj -direction, ensuring that the
trajectories (yj , x) 7→ F̂n,j (yj |x) belong to a Donsker class with high probability.
To prove that the smoothed local linear estimators F̂n,j satisfy the requirements of our main
theorem, we prove a number of asymptotic results on those estimators that may be interesting
in their own right.
Although our estimator of Fj and thus of C is different from the ones of Hobæk Haff and
Segers (2015) or Gijbels et al. (2015a), we do not claim it to be superior. Our objective is rather
to provide the first construction of a nonparametric estimator of C that can be proven to achieve
the 1/√n convergence rate. This is why, in the numerical experiments, we limit ourselves to
illustrate the asymptotic theory but do not provide comparisons between estimators. For the
same reason, we do not address other important questions of practical interest such as how to
choose the bandwidths or how to test the hypothesis that the simplifying assumption holds.
For the latter question, we refer to Acar et al. (2012) and Derumigny and Fermanian (2016).
The paper is structured as follows. In Section 2, we state the main theorem giving conditions on the estimators F̂n,j for the empirical copula based on the pairs (1.1) to be consistent
and asymptotically normal with convergence rate 1/√n. In Section 3, we then show that the
smoothed local linear estimator satisfies the requirements. The theory is illustrated by numerical experiments. Auxiliary results of independent interest on the smoothed local linear
estimator are stated in Section 4. The proofs build upon empirical process theory and their
details are spelled out in the appendices.
2 Empirical conditional copula
The goal of this section is to present the mathematical framework of the paper (Section 2.1)
and to provide general conditions on the estimated margins that ensure the weak convergence
of the estimated conditional copula (Section 2.2).
2.1 Set-up and definitions
Let fX,Y be the density function of the random triple (X, Y ) = (X, Y1 , Y2 ). Let fX and
SX = {x ∈ R : fX (x) > 0} denote the density and the support of X, respectively. The
conditional cumulative distribution function of Y given X = x is given by
H(y | x) = Pr(Y1 ≤ y1, Y2 ≤ y2 | X = x) = ∫_{−∞}^{y1} ∫_{−∞}^{y2} f_{X,Y}(x, z1, z2)/f_X(x) dz2 dz1,

for y = (y1, y2) ∈ R² and x ∈ SX. Since H( · |x) is a continuous bivariate cumulative distribution
function, its copula is given by the function
C(u | x) = Pr {F1 (Y1 |X) ≤ u1 , F2 (Y2 |X) ≤ u2 | X = x} ,
for u = (u1 , u2 ) ∈ [0, 1]2 and x ∈ SX , where F1 ( · |x) and F2 ( · |x) are the margins of H( · |x).
We make the simplifying assumption that the copula of H( · |x) does not depend on x, i.e.,
C( · |x) ≡ C( · ). This assumption is equivalent to the one that
(F1(Y1|X), F2(Y2|X)) is independent of X.   (2.1)
Under (2.1), the copula of H( · |x) is given by C for every x ∈ SX . It is worth mentioning that,
for any j ∈ {1, 2}, and even without the simplifying assumption,
Fj(Yj|X) is uniformly distributed on [0, 1] and independent of X.   (2.2)
Let (Xi , Yi1 , Yi2 ), for i ∈ {1, . . . , n}, be independent and identically distributed random
vectors, with common distribution equal to the one of (X, Y1 , Y2 ). Our aim is to estimate the
conditional copula C without any further structural or parametric assumptions on C or on
Fj ( · |x). A reasonable procedure is to estimate the conditional margins in some way, producing
random functions F̂n,j ( · |x), and then proceed with the pseudo-observations F̂n,j (Yij |Xi ) from
C. Exploiting the knowledge that C is a copula, we estimate it by the empirical copula, Ĉn ,
of those pseudo-observations. Formally, let Ĝn,j , for j ∈ {1, 2}, be the empirical distribution
function of the pseudo-observations F̂n,j (Yij |Xi ), i ∈ {1, . . . , n}, i.e.,
Ĝn,j(uj) = (1/n) Σ_{i=1}^{n} 1{F̂n,j(Yij|Xi) ≤ uj},   uj ∈ [0, 1].
The generalized inverse of a univariate distribution function F is defined as
F⁻(u) = inf{y ∈ R : F(y) ≥ u},   u ∈ [0, 1].   (2.3)
Let Ĝ⁻n,j be the generalized inverse of Ĝn,j. The empirical conditional copula, Ĉn, is defined by

Ĉn(u) = (1/n) Σ_{i=1}^{n} 1{F̂n,1(Yi1|Xi) ≤ Ĝ⁻n,1(u1)} 1{F̂n,2(Yi2|Xi) ≤ Ĝ⁻n,2(u2)},   u ∈ [0, 1]²,   (2.4)
the empirical copula of the pseudo-observations (F̂n,1 (Yi1 |Xi ), F̂n,2 (Yi2 |Xi )), for i ∈ {1, . . . , n}.
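A direct implementation sketch of (2.3)–(2.4) (ours; the pseudo-observations below are simulated placeholders — in practice they come from the conditional-margin estimators of Section 3):

```python
# Sketch of the empirical conditional copula (2.4): given pseudo-observations
# U[i, j] = F-hat_{n,j}(Y_ij | X_i), build G-hat_{n,j}, take its generalized
# inverse (2.3), and count.
import numpy as np

def G_inverse(sample, u):
    """Generalized inverse (2.3) of the empirical cdf of `sample` (0 < u <= 1)."""
    s = np.sort(sample)
    k = int(np.ceil(len(s) * u))     # smallest k with k/n >= u
    return s[max(k, 1) - 1]

def C_hat(U, u1, u2):
    q1 = G_inverse(U[:, 0], u1)
    q2 = G_inverse(U[:, 1], u2)
    return np.mean((U[:, 0] <= q1) & (U[:, 1] <= q2))

rng = np.random.default_rng(1)
U = rng.uniform(size=(1000, 2))      # placeholder pseudo-observations
c = C_hat(U, 0.5, 0.7)               # for independent margins, about 0.35
assert 0.25 < c < 0.45
print(c)
```

Because the generalized inverse is evaluated on the sorted pseudo-observations, the estimator is rank-based and invariant to monotone transformations of each margin.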
We introduce an oracle copula estimator, defined as the empirical copula based on the unobservable random pairs (F1(Yi1|Xi), F2(Yi2|Xi)), i ∈ {1, . . . , n}. Let Ĝ(or)n,j be the empirical distribution function of the uniform random variables Fj(Yij|Xi), i ∈ {1, . . . , n}, i.e.,

Ĝ(or)n,j(uj) = (1/n) Σ_{i=1}^{n} 1{Fj(Yij|Xi) ≤ uj},   uj ∈ [0, 1].

Let Ĝ(or)⁻n,j be its generalized inverse, as in (2.3). The oracle empirical copula is defined as

Ĉn(or)(u) = (1/n) Σ_{i=1}^{n} 1{F1(Yi1|Xi) ≤ Ĝ(or)⁻n,1(u1)} 1{F2(Yi2|Xi) ≤ Ĝ(or)⁻n,2(u2)},   u ∈ [0, 1]².
The oracle empirical copula is not computable in practice, as it requires the knowledge of the
marginal conditional distributions F1 and F2 .
We rely on the following Hölder regularity class. Let d ∈ N \ {0}, 0 < δ ≤ 1, k ∈ N, and
M > 0 be scalars and let S ⊂ Rd be non-empty, open and convex. Let Ck+δ,M (S) be the space
of functions S → R that are k times differentiable and whose derivatives (including the zero-th
derivative, that is, the function itself) are uniformly bounded by M and such that every mixed
partial derivative of order l ≤ k, say f (l) , satisfies the Hölder condition
sup_{z ≠ z̃} |f^{(l)}(z) − f^{(l)}(z̃)| / |z − z̃|^δ ≤ M,   (2.5)
where | · | in the denominator denotes the Euclidean norm. In particular, C1,M (R) is the space
of Lipschitz functions R → R bounded by M and with Lipschitz constant bounded by M .
2.2 Asymptotic normality
We now give a set of sufficient conditions to ensure that Ĉn and Ĉn(or) are asymptotically equivalent, i.e., their difference is oP(n−1/2). As a consequence, Ĉn is consistent and asymptotically
normal with convergence rate OP (n−1/2 ). In the same spirit as Theorem 2 in Gijbels et al.
(2015a), some of the assumptions are “ground-level conditions” and concern directly the distribution P , while others are “high-level conditions” and deal with the estimators F̂n,j of the
conditional margins. Given a choice for F̂n,j , the high-level conditions need to be verified, as
we will do in Section 3 for a specific proposal. Ground-level assumptions are denoted with the
letter G and high-level conditions are denoted with the letter H.
(G1) The law P admits a density fX,Y on SX × R2 such that SX is a nonempty, bounded,
open interval. For some M > 0 and δ > 0, the functions F1 ( · | · ) and F2 ( · | · ) belong to
C3+δ,M (R × SX ) and the function fX belongs to C2,M (SX ). There exists b > 0 such that
fX (x) ≥ b for every x ∈ SX . For any j ∈ {1, 2} and any γ ∈ (0, 1/2), there exists bγ > 0
such that, for every yj ∈ [Fj− (γ|x), Fj− (1 − γ|x)] and every x ∈ SX , we have fj (yj |x) ≥ bγ .
(G2) Let Ċj and C̈jk denote the first and second-order partial derivatives of C, where j, k ∈
{1, 2}. The copula C is twice continuously differentiable on the open unit square, (0, 1)2 .
There exists κ > 0 such that, for all u = (u1 , u2 ) ∈ (0, 1)2 and all j, k ∈ {1, 2},
C̈jk (u) ≤ κ {uj (1 − uj ) uk (1 − uk )}−1/2 .
(H1) For any j ∈ {1, 2} and any γ ∈ (0, 1/2), with probability going to 1, we have that for
every x ∈ SX, the function yj ↦ F̂n,j(yj|x) is continuous on R and strictly increasing on
[Fj− (γ|x), Fj− (1 − γ|x)].
(H2) For any j ∈ {1, 2}, we have
sup_{x∈SX, yj∈R} |F̂n,j(yj|x) − Fj(yj|x)| = oP(n−1/4).
(H3) For any j ∈ {1, 2} and any γ ∈ (0, 1/2), there exist positive numbers (δ1 , M1 ) such that,
with probability going to 1,
{x ↦ F̂⁻n,j(uj|x) : uj ∈ [γ, 1 − γ]} ⊂ C1+δ1,M1(SX).
The support SX is assumed to be open so that the derivatives of functions on SX are defined as
usual. Condition (G2) also appears as equation (9) in Omelka et al. (2009), where it is verified
for some popular copula models (Gaussian, Gumbel, Clayton, Student t).
Let ℓ∞(T) denote the space of bounded real functions on the set T, the space being equipped with the supremum distance, and let “⇝” denote weak convergence in this space (van der Vaart
and Wellner, 1996). Let P denote the probability measure on the underlying probability space
associated to the whole sequence (Xi , Yi )i=1,2,... .
Theorem 2.1. Assume that (G1), (G2), (H1), (H2) and (H3) hold. If the simplifying assumption (2.1) holds, then for any γ ∈ (0, 1/2), we have
sup_{u∈[γ,1−γ]²} n^{1/2} |Ĉn(u) − Ĉn(or)(u)| = oP(1),   n → ∞.   (2.6)

Moreover, n^{1/2}(Ĉn(u) − C(u)) ⇝ C in ℓ∞([γ, 1 − γ]²), the limiting process C having the same law as the random process

B(u) − Ċ1(u) B(u1, 1) − Ċ2(u) B(1, u2),   u ∈ [γ, 1 − γ]²,

where B is a C-Brownian bridge, i.e., a centered Gaussian process with continuous trajectories and with covariance function given by

cov(B(u), B(v)) = C(u1 ∧ v1, u2 ∧ v2) − C(u) C(v),   (u, v) ∈ ([γ, 1 − γ]²)².   (2.7)
The proof of Theorem 2.1 is given in Appendix A. The theorem bears resemblance with Theorem 2 in Gijbels et al. (2015a). Our approach is more specific because we consider a smoothness
approach through the spaces C1+δ,M (SX ) to obtain their Donsker property. Exploiting this context, we formulate in (H2) a condition on the rate of convergence of the estimator F̂n,j that
does not involve the inverse F̂⁻n,j, as expressed in their condition (Yn).
3 Smoothed local linear estimators of the conditional margins
The objective in this section is to provide estimators, F̂n,j , j ∈ {1, 2}, of the conditional marginal
distribution functions Fj (yj |x) = Pr(Yj ≤ yj | X = x) that satisfy conditions (H1), (H2) and
(H3) of Theorem 2.1 (Section 3.1). As a consequence, the empirical conditional copula based
on these estimators is a nonparametric estimator of the conditional copula under the simplifying assumption which, by Theorem 2.1, is consistent and asymptotically normal with convergence rate 1/√n (Section 3.2). Numerical experiments confirm that the asymptotic theory provides
a good approximation to the sampling distribution, at least for large samples (Section 3.3).
3.1 Definition of the smoothed local linear estimator
Let K : R → [0, ∞) and L : R → [0, ∞) be two kernel functions, i.e., nonnegative, symmetric
functions integrating to unity. Let (hn,1)n≥1 and (hn,2)n≥1 be two bandwidth sequences that tend to 0 as n → ∞. For (y, Y) ∈ R² and h > 0, put

Lh(y) = h⁻¹ L(h⁻¹ y),   ϕh(y, Y) = ∫_{−∞}^{y} Lh(t − Y) dt.   (3.1)
For j ∈ {1, 2}, we introduce the smoothed local linear estimator of Fj (yj |x) defined by
F̂n,j(yj|x) = ân,j,   (3.2)

where ân,j is the first component of the random pair

(ân,j, b̂n,j) = argmin_{(a,b)∈R²} Σ_{i=1}^{n} {ϕ_{hn,2}(yj, Yij) − a − b(Xi − x)}² K((x − Xi)/hn,1),   (3.3)
where ϕh in (3.1) serves to smooth the indicator function y ↦ 1{Y ≤ y}. The kernels K and L do
not have the same role: L is concerned with “smoothing” over Y1 and Y2 whereas K “localises”
the variable X at x ∈ SX . For this reason, we purposefully use two different bandwidth
sequences (hn,1 )n≥1 and (hn,2 )n≥1 . We shall see that the conditions on the bandwidth hn,2
for the y-directions are weaker than the ones for the bandwidth hn,1 for the x-direction. The
assumptions related to the two kernels and bandwidth sequences are stated in (G3) and (G4)
below.
In the classical regression context, local linear estimators have been introduced in Stone
(1977) and are further studied for instance in Fan and Gijbels (1996). In Gijbels et al. (2015a),
local linear estimators for Fj (y|x) are considered too, but without smoothing in the y-variable,
so that condition (H3) does not hold; see Section 3.4 below.
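For concreteness, here is a sketch implementation (ours, not the authors' code) of (3.2)–(3.3): the smoothed indicators ϕ_h(y, Y_i) are regressed locally on X_i − x by weighted least squares, and â is read off the normal equations. The kernels and the bandwidth rule are those used later in Section 3.3, while the data-generating model is an arbitrary toy example:

```python
# Sketch of the smoothed local linear estimator (3.2)-(3.3), with the
# triweight kernel K and biweight kernel L of Section 3.3.
import numpy as np

def Phi_L(z):
    """Antiderivative of the biweight kernel L(t) = (15/16)(1 - t^2)^2."""
    z = np.clip(z, -1.0, 1.0)
    return 0.5 + (15 / 16) * (z - 2 * z**3 / 3 + z**5 / 5)

def K_triweight(t):
    return np.where(np.abs(t) < 1, (35 / 32) * (1 - t**2) ** 3, 0.0)

def F_hat(y, x, X, Y, h1, h2):
    """First component a-hat of the weighted least-squares problem (3.3)."""
    w = K_triweight((x - X) / h1)                  # localisation in x
    phi = Phi_L((y - Y) / h2)                      # smoothed indicators (3.1)
    D = np.column_stack([np.ones_like(X), X - x])  # local linear design
    DW = D * w[:, None]
    beta = np.linalg.solve(DW.T @ D, DW.T @ phi)   # normal equations
    return beta[0]

rng = np.random.default_rng(3)
X = rng.uniform(-1, 1, size=2000)
Y = 0.5 * X + rng.normal(scale=0.5, size=2000)     # toy model, F(0 | 0) = 1/2
n = len(X)
h1 = h2 = 0.5 * n ** (-1 / 5)                      # bandwidth rule of Section 3.3
est = F_hat(0.0, 0.0, X, Y, h1, h2)
assert abs(est - 0.5) < 0.15
print(est)
```

Smoothing with Phi_L, rather than using raw indicators, is exactly what makes the trajectories y ↦ F̂(y|x) smooth enough for condition (H3).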
3.2 Weak convergence of the empirical conditional copula
We derive the limit distribution of the empirical conditional copula Ĉn in (2.4) when the
marginal conditional distribution functions are estimated via the smoothed local linear estimators F̂n,j (yj |x) in (3.2). We need the following additional conditions.
(G3) The kernels K and L are bounded, nonnegative, symmetric functions on R, supported on (−1, 1), and such that ∫ L(u) du = ∫ K(u) du = 1. The function L is continuously
differentiable on R and its derivative is a bounded real function of bounded variation.
The function K is twice continuously differentiable on R and its second-order derivative
is a bounded real function of bounded variation.
(G4) There exists α > 0 such that the bandwidth sequences hn,1 > 0 and hn,2 > 0 satisfy, as
n → ∞,
nh^8_{n,1} → 0,   nh^{3+α}_{n,1}/|log h_{n,1}| → ∞,   nh^8_{n,2} → 0,   h^2_{n,1} h^{−1−α/2}_{n,2} → 0,   nh^{1+α}_{n,1} h_{n,2}/|log h_{n,1} h_{n,2}| → ∞.
Condition (G4) implies in particular that the bandwidth sequences are small enough to ensure
that the bias associated to the estimation of F1 and F2 does not affect the asymptotic distribution of the empirical conditional copula. An interesting situation occurs when the bandwidths
satisfy h_{n,1}/h_{n,2}^{1/2+α/4} → 0 and h^2_{n,1}/h_{n,2} → 0. Then the above condition becomes
nh^8_{n,1} → 0,   nh^8_{n,2} → 0,   nh^{3+α}_{n,1}/|log h_{n,1}| → ∞.
Hence the conditions on the bandwidth hn,1 are more restrictive than the ones on hn,2. Typically, hn,2 might be chosen smaller than hn,1, as it does not need to satisfy nh^{3+α}_{n,1}/|log h_{n,1}| → ∞. This
n,1 /|log hn,1 | → ∞. This
might result in a smaller bias. By way of comparison, in the location-scale model, no smoothing
in the y-direction is required, but Gijbels et al. (2015a) still require the stronger condition that
nh5n,1 → 0 and nh3+α
n,1 / log n → ∞.
Theorem 3.1. Assume that (G1), (G2), (G3) and (G4) hold. Then (H1), (H2) and (H3) are
valid. If the simplifying assumption (2.1) also holds, then for any γ ∈ (0, 1/2), equation (2.6)
is satisfied and n^{1/2}(Ĉn − C) ⇝ C in ℓ∞([γ, 1 − γ]²), where the limiting process C is defined in
the statement of Theorem 2.1.
The proof of Theorem 3.1 is given in Appendix B and relies on results on the smoothed
local linear estimator presented in Section 4.
Distinguishing between the bandwidth sequence hn,1 for the x-direction on the one hand
and the bandwidth sequence hn,2 for the y1 and y2 -directions on the other hand allows for
a weaker assumption than if both sequences would have been required to be the same. In
practice, one could even consider, for each j ∈ {1, 2}, the smoothed local linear estimator F̂n,j
based on a bandwidth sequence h^{(j)}_{n,1} for the x-direction and a bandwidth sequence h^{(j)}_{n,2} for
the yj -direction, yielding four bandwidth sequences in total. However, this would not really
lead to weaker assumptions in Theorem 3.1, since (G4) would then be required to hold for
each pair (h^{(j)}_{n,1}, h^{(j)}_{n,2}). The same remark also applies to the kernels K and L, which could be
chosen differently for each margin j ∈ {1, 2}. The required modification of the formulation of
Theorem 3.1 is obvious.
3.3 Numerical illustrations
The assertion in (2.6) that the empirical conditional copula Ĉn is only oP (n−1/2 ) away from
the oracle empirical copula Ĉn(or) is perhaps surprising, since the estimators of the conditional
margins converge at a rate slower, rather than faster, than OP (n−1/2 ). To support the claim,
we performed a number of numerical experiments based on independent random samples from
the copula of the trivariate Gaussian distribution (Example 1.2) with correlations given by
ρ1X = 0.4, ρ2X = −0.2, and ρ12 = 0.3689989. The conditional copula of (Y1 , Y2 ) given X is
then equal to the bivariate Gaussian copula with correlation parameter ρ12|X = 0.5. Estimation
target was the value of the copula at u = (0.5, 0.7).
To estimate the conditional margins, we used the smoothed local linear estimator (3.2)–
(3.3) based on the triweight kernel K(x) = (35/32)(1 − x2 )3 1[−1,1] (x) and the biweight kernel
L(y) = (15/16)(1 − y^2)^2 1_{[−1,1]}(y), in accordance with assumption (G3). The bandwidths were chosen as h_{n,1} = h_{n,2} = 0.5 n^{−1/5}, with n the sample size. The experiments were performed
within the statistical software environment R (R Core Team, 2017). The algorithms for the
smoothed local linear estimator were implemented in C for greater speed.
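For reproducibility, the kernels and bandwidth rule of this set-up are easy to code; the following Python sketch (our own illustration, not the C implementation used in the experiments) defines both kernels and checks numerically that they integrate to one:

```python
# Triweight kernel K (x-direction) and biweight kernel L (y-direction),
# both supported on [-1, 1], as in assumption (G3).
def K(x):
    return (35 / 32) * (1 - x * x) ** 3 if -1 <= x <= 1 else 0.0

def L(y):
    return (15 / 16) * (1 - y * y) ** 2 if -1 <= y <= 1 else 0.0

def integrate(f, a=-1.0, b=1.0, m=100_000):
    # midpoint rule; accurate enough to check kernel normalization
    h = (b - a) / m
    return h * sum(f(a + (i + 0.5) * h) for i in range(m))

def bandwidth(n, c=0.5):
    # the rule h_{n,1} = h_{n,2} = 0.5 * n**(-1/5) used in the experiments
    return c * n ** (-1 / 5)

print(round(integrate(K), 4), round(integrate(L), 4))  # both 1.0
print(round(bandwidth(500), 4), round(bandwidth(2000), 4))
```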
Figure 1 illustrates the proximity between the estimator Ĉ_n(u) and the oracle Ĉ_n^{(or)}(u). The left-hand and middle panels show scatterplots of 1 000 independent realizations of the pairs (n^{1/2}{Ĉ_n(u) − C(u)}, n^{1/2}{Ĉ_n^{(or)}(u) − C(u)}) based on samples of size n = 500 and n = 2 000. As n increases, the points are concentrated more strongly along the diagonal. The
[Figure 1 about here.]
Figure 1: Scatterplots of 1 000 independent realizations of the pairs (n^{1/2}{Ĉ_n(u) − C(u)}, n^{1/2}{Ĉ_n^{(or)}(u) − C(u)}) based on the trivariate Gaussian copula for sample sizes n = 500 (left) and n = 2 000 (middle). Right: correlation between the empirical conditional copula and the oracle empirical copula as a function of the sample size n between 100 and 10 000, each point being based on 5 000 samples of size n.
[Figure 2 about here. Panel annotations: normal QQ plot, n = 500; sampling density vs limit normal density, n = 500; standard deviations, with limit std dev 0.208 vs sampling std dev 0.205.]
Figure 2: QQ-plot (left) and density plot (middle) based on 1 000 independent realizations of
n1/2 {Ĉn (u)−C(u)} for the trivariate Gaussian copula for sample size n = 500. Right: standard
deviation of n1/2 {Ĉn (u) − C(u)} as a function of n between 100 and 10 000 versus the limit
value σ(u) = 0.2080, each point being based on 5 000 samples of size n.
linear correlations between the estimator and the oracle for sample sizes between n = 100 and
n = 10 000 are plotted in the right-hand panel, each point being based on 5 000 samples.
The proximity of the sampling distribution of the estimation error n1/2 {Ĉn (u) − C(u)}
and the limiting normal distribution is illustrated in Figure 2. According to Theorems 2.1
and 3.1, the limiting distribution of the empirical conditional copula process is N (0, σ 2 (u)).
The asymptotic variance is σ 2 (u) = var{C(u)} = var{B(u) − Ċ1 (u) B(u1 , 1) − Ċ2 (u) B(1, u2 )}
and can be computed from (2.7) and the knowledge that C is the bivariate Gaussian copula
with correlation parameter ρ12|X . As the sample size increases, the sampling distribution moves
closer to the limiting normal distribution as confirmed in the QQ-plot and density plot in
Figure 2, both based on 1 000 samples of size n = 500. The realized standard deviations of the
empirical conditional copula process are given in the right-hand panel, again for sample sizes
between n = 100 and n = 10 000, each point being based on 5 000 samples of size n. The limit
value σ(u) = 0.2080 is represented as a horizontal line.
3.4
Further comments
Smoothing in two directions rather than one. In Gijbels et al. (2015a), local linear estimators that involve smoothing in x but not in y_j are employed to estimate the conditional margins F_j(y_j|x). Interestingly, the approach taken in our paper does not extend to those estimators. Indeed, smoothing in the y-direction is crucial to obtain that the random functions x ↦ F̂_{n,j}^−(u|x), when u ranges over [γ, 1 − γ], are included in a Hölder regularity class (Proposition 4.4 and its proof). Our approach is based on the result that the class of functions formed by the indicators (y, x) ↦ 1{y ≤ q(x)}, where q : S_X → R belongs to C_{1+δ,M}(S_X) with δ > 0, M > 0 and S_X convex, is Donsker (Dudley, 1999; van der Vaart and Wellner, 1996). When the estimators of the margins are not smooth with respect to y_j, the difficulty consists in finding appropriate restrictions on q so that the class (y, x) ↦ 1{y ≤ q(x)} is Donsker and still contains the functions corresponding to q(x) = F̂_{n,j}^−(u|x), u ∈ [γ, 1 − γ]. Negative results are available and they shed some light on the difficulties that arise. For instance, when q is constrained to be nonincreasing, the class of indicators is equivalent to the class of lower layers in R^2 studied by Dudley (1999), Section 12.4, which fails to be Donsker. When q is restricted to C_{1,M}(S_X) rather than C_{1+δ,M}(S_X), the corresponding class is not Donsker either (Dudley, 1999, Section 12.4).
Local linear rather than Nadaraya–Watson smoothing. Rather than a local linear smoother in the x-direction, one may be tempted to consider a simpler Nadaraya–Watson smoother F̃_{n,j}(y_j|x) = ∫_{−∞}^{y_j} f̃_{n,j}(y|x) dy, where

    f̃_{n,j}(y_j|x) = [ Σ_{i=1}^{n} K_{h_{n,1}}(x − X_i) L_{h_{n,2}}(y_j − Y_{ij}) ] / [ Σ_{i=1}^{n} K_{h_{n,1}}(x − X_i) ].

However, the Nadaraya–Watson density estimator f̃_{n,j} behaves poorly at the boundary points of the support S_X, since the true marginal density f_X of X is not continuous at those boundary points in view of the assumption inf{f_X(x) : x ∈ S_X} ≥ b > 0 (Fan, 1992). In particular, the uniform rates given in Proposition 4.2 are no longer available, making our approach incompatible with such a Nadaraya–Watson type estimator. Some techniques to bound the metric entropy of such estimators are investigated in Portier (2016).
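For illustration only (this is not the estimator recommended in the paper), the Nadaraya–Watson smoother above can be coded in a few lines of Python; integrating the density estimate in y amounts to replacing L by its primitive:

```python
import random

def K(u):  # triweight kernel on [-1, 1]
    return (35 / 32) * (1 - u * u) ** 3 if -1 <= u <= 1 else 0.0

def Phi_L(u):  # primitive of the biweight kernel L on [-1, 1]
    if u <= -1:
        return 0.0
    if u >= 1:
        return 1.0
    return (15 / 16) * (u - 2 * u ** 3 / 3 + u ** 5 / 5 + 8 / 15)

def F_nw(y, x, data, h1, h2):
    """Nadaraya-Watson estimate of F(y|x); the h^{-1} factors of the
    scaled kernels cancel between numerator and denominator."""
    num = sum(K((x - xi) / h1) * Phi_L((y - yi) / h2) for xi, yi in data)
    den = sum(K((x - xi) / h1) for xi, yi in data)
    return num / den if den > 0 else float("nan")

random.seed(1)
data = [(random.random(), random.gauss(0.0, 1.0)) for _ in range(2000)]
# here Y ~ N(0,1) independently of X, so F(0|x) = 1/2 for every x
print(round(F_nw(0.0, 0.5, data, 0.2, 0.3), 2))  # close to 0.5
```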
Extensions. The extension of Theorem 3.1 to the whole unit square [0, 1]^2 might be obtained by verifying the "high-level" conditions given in Theorem 2 and Corollary 1 in Gijbels et al. (2015a). We believe this is not straightforward and needs further research.
The extension to multivariate covariates X is technical and perhaps even impossible as some
conflicts between the bias and the variance might arise when choosing the bandwidths hn,1 and
hn,2 in (G4). As noted by a Referee, using higher-order local polynomial estimators would result
in lower bias (under appropriate smoothness conditions) and would (in theory) allow to extend
the results presented here to higher-dimensional covariates. Still, completely nonparametric
estimation would remain infeasible in high dimensions due to the curse of dimensionality in
nonparametric smoothing.
The extension to vectors Y of arbitrary dimension is probably feasible, but we did not
pursue this in view of the motivation from pair-copula constructions.
As suggested by a Referee, an alternative approach to estimating the conditional marginal
quantiles could be to use (nonparametric) quantile regression directly instead of inverting conditional distribution functions.
4 Auxiliary results on the smoothed local linear estimator of the conditional distribution function
This section contains asymptotic results on the smoothed local linear estimator of the conditional
distribution function introduced in Section 3.1.
The presentation is independent of the copula set-up introduced before, so the index j ∈ {1, 2} will be omitted in this section. Moreover, when possible, we provide weaker assumptions on the bandwidth sequences than the ones introduced in Section 3. The proofs of the results are given in Appendix C.2.
The main difficulty is to show that the estimator (x, y) ↦ F̂_n(y|x) introduced in (3.2) satisfies
(H3). Our approach relies on bounds on the uniform convergence rates of the estimator and its
partial derivatives. Exact rates of uniform strong consistency for Nadaraya–Watson estimators
of the conditional distribution function are given in Einmahl and Mason (2000) and Härdle
et al. (1988), among others. Strong consistency of derivatives of the Nadaraya–Watson and
Gasser–Müller estimators of the regression function are studied in Akritas and Van Keilegom
(2001) and Gasser and Müller (1984), respectively. References on strong uniform rates for local
linear estimators include Masry (1996) and Dony et al. (2006).
Assume that (X_1, Y_1), . . . , (X_n, Y_n) are independent and identically distributed random vectors with common distribution equal to the law, P, of the random vector (X, Y) taking values in R^2. Assume that P has a density f_{X,Y} with respect to the Lebesgue measure. As before, let f_X and S_X = {x ∈ R : f_X(x) > 0} denote the density and the support of X, respectively. The conditional distribution function of Y given X = x is given by

    F(y|x) = ∫_{−∞}^{y} f_{X,Y}(x, z) / f_X(x) dz,    y ∈ R, x ∈ S_X.
(G1’) The law P of (X, Y ) verifies (G1), i.e., the function F ( · | · ) replaces Fj ( · | · ).
(G4’) The bandwidth sequences h_{n,1} > 0 and h_{n,2} > 0 satisfy, as n → ∞,

    h_{n,1} → 0,    n h_{n,1}^3 / |log h_{n,1}| → ∞,    h_{n,2} → 0,    n h_{n,1} h_{n,2} / |log h_{n,1} h_{n,2}| → ∞.
The quantity F̂n (y|x) can be found as the solution of a linear system of equations derived
from (3.3). With probability going to 1, a closed formula is available for F̂n (y|x). Similar
expressions are available for the local linear estimator of the regression function (Fan and
Gijbels, 1996, page 55, equation (3.5)). For every k ∈ N, define
    p̂_{n,k}(x) = n^{−1} Σ_{i=1}^{n} w_{k,h_{n,1}}(x − X_i),
    Q̂_{n,k}(y, x) = n^{−1} Σ_{i=1}^{n} ϕ_{h_{n,2}}(y, Y_i) w_{k,h_{n,1}}(x − X_i),

where w_{k,h}( · ) = h^{−1} w_k( · /h) and w_k(u) = u^k K(u). In what follows, the function w_k plays the role of a kernel for smoothing in the x-direction. Let P denote the probability measure on the probability space carrying the sequence of random pairs (X_1, Y_1), (X_2, Y_2), . . ..
Lemma 4.1. Let (X_1, Y_1), (X_2, Y_2), . . . be independent random vectors with common law P. Assume that (G1’), (G3) and (G4’) hold. For some c > 0 depending on K, we have

    lim_{n→∞} P( p̂_{n,0}(x) p̂_{n,2}(x) − p̂_{n,1}(x)^2 ≥ b^2 c for all x ∈ S_X ) = 1.    (4.1)
Consequently, with probability going to 1, we have

    F̂_n(y|x) = [ Q̂_{n,0}(y, x) p̂_{n,2}(x) − Q̂_{n,1}(y, x) p̂_{n,1}(x) ] / [ p̂_{n,0}(x) p̂_{n,2}(x) − p̂_{n,1}(x)^2 ],    y ∈ R, x ∈ S_X.    (4.2)
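As a concrete illustration (our own Python sketch, with placeholder kernels, bandwidths and simulated data), formula (4.2) can be evaluated by accumulating the quantities p̂_{n,k} and Q̂_{n,k}:

```python
import random

def K(u):  # triweight kernel on [-1, 1]
    return (35 / 32) * (1 - u * u) ** 3 if -1 <= u <= 1 else 0.0

def phi(u):  # integrated biweight kernel: phi_h(y, Y) = phi((y - Y)/h)
    if u <= -1:
        return 0.0
    if u >= 1:
        return 1.0
    return (15 / 16) * (u - 2 * u ** 3 / 3 + u ** 5 / 5 + 8 / 15)

def F_ll(y, x, data, h1, h2):
    """Smoothed local linear estimate of F(y|x) via formula (4.2):
    (Q0 p2 - Q1 p1) / (p0 p2 - p1^2), with
    p_k = mean of w_k((x - Xi)/h1)/h1, where w_k(u) = u^k K(u),
    Q_k = mean of phi((y - Yi)/h2) * w_k((x - Xi)/h1)/h1."""
    n = len(data)
    p = [0.0, 0.0, 0.0]
    Q = [0.0, 0.0]
    for xi, yi in data:
        u = (x - xi) / h1
        for k in range(3):
            p[k] += u ** k * K(u) / (h1 * n)
        for k in range(2):
            Q[k] += phi((y - yi) / h2) * u ** k * K(u) / (h1 * n)
    den = p[0] * p[2] - p[1] ** 2
    return (Q[0] * p[2] - Q[1] * p[1]) / den

random.seed(2)
data = [(random.random(), random.gauss(0.0, 1.0)) for _ in range(2000)]
# here Y ~ N(0,1) independently of X, so F(0|x) = 1/2
print(round(F_ll(0.0, 0.5, data, 0.2, 0.3), 2))  # close to 0.5
```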
Differentiating F̂_n(y|x) in (4.2) with respect to y, we obtain f̂_n(y|x), the estimated conditional density of Y given X = x. It is given by

    f̂_n(y|x) = [ q̂_{n,0}(y, x) p̂_{n,2}(x) − q̂_{n,1}(y, x) p̂_{n,1}(x) ] / [ p̂_{n,0}(x) p̂_{n,2}(x) − p̂_{n,1}(x)^2 ],    (4.3)

where, for every k ∈ N,

    q̂_{n,k}(y, x) = n^{−1} Σ_{i=1}^{n} L_{h_{n,2}}(y − Y_i) w_{k,h_{n,1}}(x − X_i).
To derive the uniform rates of convergence of the smoothed local linear estimator F̂_n(y|x) and its derivatives, formula (4.2) allows us to work with the quantities Q̂_{n,k}(y, x), k ∈ {0, 1}, and p̂_{n,k}(x), k ∈ {0, 1, 2}. The asymptotic behavior of the latter quantities is handled using empirical process theory in Proposition C.1. For ease of writing, we abbreviate the partial differential operator ∂^l/∂x^l to ∂_x^l.
Proposition 4.2. Let (X_1, Y_1), (X_2, Y_2), . . . be independent random vectors with common law P. Assume that (G1’), (G3) and (G4’) hold. We have, as n → ∞,

    sup_{x∈S_X, y∈R} |F̂_n(y|x) − F(y|x)| = O_P( √(|log h_{n,1}| / (n h_{n,1})) + h_{n,1}^2 + h_{n,2}^2 ),
    sup_{x∈S_X, y∈R} |∂_x{F̂_n(y|x) − F(y|x)}| = O_P( √(|log h_{n,1}| / (n h_{n,1}^3)) + h_{n,1} + h_{n,1}^{−1} h_{n,2}^2 ),
    sup_{x∈S_X, y∈R} |∂_x^2{F̂_n(y|x) − F(y|x)}| = O_P( √(|log h_{n,1}| / (n h_{n,1}^5)) + 1 + h_{n,1}^{−2} h_{n,2}^2 ),
    sup_{x∈S_X, y∈R} |∂_y{F̂_n(y|x) − F(y|x)}| = O_P( √(|log h_{n,1} h_{n,2}| / (n h_{n,1} h_{n,2})) + h_{n,1}^2 + h_{n,2}^2 ),
    sup_{x∈S_X, y∈R} |∂_y^2{F̂_n(y|x) − F(y|x)}| = O_P( √(|log h_{n,1} h_{n,2}| / (n h_{n,1} h_{n,2}^3)) + h_{n,1}^{1+δ} + h_{n,2}^{1+δ} ),
    sup_{x∈S_X, y∈R} |∂_y ∂_x{F̂_n(y|x) − F(y|x)}| = O_P( √(|log h_{n,1} h_{n,2}| / (n h_{n,1}^3 h_{n,2})) + h_{n,2}^2 h_{n,1}^{−1} + h_{n,1} ).
For the sake of brevity, the previous proposition is stated under (G4’) although the first
assertion remains true under the weaker condition hn,1 + hn,2 → 0 and nhn,1 /|log hn,1 | → ∞.
Recall the generalized inverse in (2.3). Uniform convergence rates for the estimated conditional quantile function F̂_n^−(u|x) are provided in the following proposition.
Proposition 4.3. Let (X_1, Y_1), (X_2, Y_2), . . . be independent random vectors with common law P. Assume that (G1’), (G3), (G4’) hold. For any γ ∈ (0, 1/2), we have, as n → ∞,

    sup_{x∈S_X, u∈[γ,1−γ]} |F( F̂_n^−(u|x) | x ) − u| = O_P( √(|log h_{n,1}| / (n h_{n,1})) + h_{n,1}^2 + h_{n,2}^2 ),
    sup_{x∈S_X, u∈[γ,1−γ]} |F̂_n^−(u|x) − F^−(u|x)| = O_P( √(|log h_{n,1}| / (n h_{n,1})) + h_{n,1}^2 + h_{n,2}^2 ).
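In practice, the estimated conditional quantile F̂_n^−(u|x) is obtained by inverting y ↦ F̂_n(y|x); a minimal grid-inversion sketch of the generalized inverse (our own illustration, not the paper's implementation):

```python
def generalized_inverse(F, u, grid):
    """F^-(u) = inf{y : F(y) >= u}, approximated on an increasing grid."""
    for y in grid:
        if F(y) >= u:
            return y
    return grid[-1]

# toy check with the uniform distribution function F(y) = y on [0, 1]
grid = [i / 1000 for i in range(1001)]
F = lambda y: y
print(generalized_inverse(F, 0.25, grid))  # 0.25
```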
The convergence rates in Propositions 4.2 and 4.3 serve to show that the estimated quantile
functions x 7→ F̂n− (u|x), as u varies in [γ, 1 − γ], belong to a certain Hölder regularity class. The
bandwidths hn,1 and hn,2 are required to be large enough.
(G4”) There exists α > 0 such that the bandwidth sequences h_{n,1} > 0 and h_{n,2} > 0 satisfy, as n → ∞,

    h_{n,1} → 0,    n h_{n,1}^{3+α} / |log h_{n,1}| → ∞,
    h_{n,2} → 0,    n h_{n,1}^{1+α} h_{n,2} / |log h_{n,1} h_{n,2}| → ∞,
    h_{n,1}^{−1−α/2} h_{n,2}^2 → 0.
Note that in the case h_{n,1} = h_{n,2}, the previous condition becomes h_{n,1} → 0 and n h_{n,1}^{3+α} / |log h_{n,1}| → ∞. Recall the function class C_{k+δ,M}(S) defined via the Hölder condition (2.5).
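As a quick numerical sanity check (our own illustration), a polynomial bandwidth h_{n,1} = n^{−β} with β < 1/(3 + α) makes n h_{n,1}^{3+α}/|log h_{n,1}| diverge:

```python
import math

def ratio(n, beta, alpha):
    # n * h^(3 + alpha) / |log h| for the bandwidth h = n^(-beta)
    h = n ** (-beta)
    return n * h ** (3 + alpha) / abs(math.log(h))

alpha, beta = 0.5, 1 / 5   # beta = 0.2 < 1 / (3 + alpha) = 2/7
values = [ratio(10 ** k, beta, alpha) for k in range(2, 7)]
print([round(v, 2) for v in values])  # strictly increasing
```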
Proposition 4.4. Let (X_1, Y_1), (X_2, Y_2), . . . be independent random vectors with common law P. Assume that (G1’), (G3) and (G4”) hold. For any γ ∈ (0, 1/2), we have

    lim_{n→∞} P( {x ↦ F̂_n^−(u|x) : u ∈ [γ, 1 − γ]} ⊂ C_{1+δ_1,M_1}(S_X) ) = 1,

where δ_1 = min(α/2, δ) and where M_1 > 0 depends only on b_γ and M.
Appendix A
Proof of Theorem 2.1
Condition (G2) implies the existence and continuity of Ċ_j on {u ∈ [0, 1]^2 : 0 < u_j < 1}, for j ∈ {1, 2}. As a consequence, the oracle empirical process n^{1/2}(Ĉ_n^{(or)} − C) converges weakly in ℓ^∞([0, 1]^2) to the tight Gaussian process C (Segers, 2012). The proof of Theorem 2.1 therefore consists of showing equation (2.6).
P
We use notation from empirical process theory. Let Pn = n−1 ni=1 δ(Xi ,YiR) denote the
empirical measure. For a function f and a probability measure Q, write Qf = f dQ. The
empirical process is
Gn = n1/2 (Pn − P ).
For any pair of cumulative distribution functions F1 and F2 on R, put F (y) = (F1 (y1 ), F2 (y2 ))
for y = (y1 , y2 ) ∈ R2 and F − (u) = (F1− (u1 ), F2− (u2 )) for u = (u1 , u2 ) ∈ [0, 1]2 , the generalized
inverse being defined in (2.3).
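These notational conventions are easy to make concrete; a toy Python sketch (our own illustration) of P_n f and G_n f = n^{1/2}(P_n f − P f):

```python
import math
import random

def P_n(f, sample):
    # empirical measure applied to f: P_n f = n^{-1} * sum_i f(xi_i)
    return sum(f(x) for x in sample) / len(sample)

def G_n(f, sample, Pf):
    # empirical process applied to f: G_n f = sqrt(n) * (P_n f - P f)
    return math.sqrt(len(sample)) * (P_n(f, sample) - Pf)

random.seed(3)
sample = [random.random() for _ in range(10_000)]  # Uniform(0, 1) draws
f = lambda x: x                                    # here P f = 1/2
print(round(P_n(f, sample), 3))                    # close to 0.5
```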
Our proof follows from an application of Theorem 2.1 stated in van der Vaart and Wellner
(2007) and reported below; for a proof see for instance van der Vaart and Wellner (1996,
Lemma 3.3.5), noting that the conclusion of their proof is in fact stronger than what is claimed
in their statement. Let ξ1 , ξ2 , . . . be independent and identically distributed random elements
of a measurable space (X , A) and with common distribution equal to P . Let P denote the
probability measure on the probability space on which the sequence ξ1 , ξ2 , . . . is defined. Let
Gξ,n be the empirical process associated to the sample ξ1 , . . . , ξn . Let E and U be sets and let
{mu,η : u ∈ U, η ∈ E} be a collection of real-valued, measurable functions on X .
Theorem A.1 (Theorem 2.1 in van der Vaart and Wellner (2007)). Let η̂_n be random elements in E. Suppose there exist η_0 ∈ E and E_0 ⊂ E such that the following three conditions hold:

    (i) sup_{u∈U} P( m_{u,η̂_n} − m_{u,η_0} )^2 = o_P(1) as n → ∞;
    (ii) P(η̂_n ∈ E_0) → 1 as n → ∞;
    (iii) {m_{u,η} − m_{u,η_0} : u ∈ U, η ∈ E_0} is P-Donsker.

Then it holds that

    sup_{u∈U} |G_{ξ,n}( m_{u,η̂_n} − m_{u,η_0} )| = o_P(1),    n → ∞.
The empirical process notation allows us to write

    Ĉ_n^{(or)}(u) = P_n 1{F ≤ Ĝ_n^{(or)−}(u)},    Ĉ_n(u) = P_n 1{F̂_n ≤ Ĝ_n^−(u)}.
To establish (2.6), we rely on the decomposition

    n^{1/2}{ Ĉ_n(u) − Ĉ_n^{(or)}(u) }
        = G_n( 1{F̂_n ≤ Ĝ_n^−(u)} − 1{F ≤ Ĝ_n^{(or)−}(u)} ) + n^{1/2} P( 1{F̂_n ≤ Ĝ_n^−(u)} − 1{F ≤ Ĝ_n^{(or)−}(u)} )
        = Â_{n,1}(u) + Â_{n,2}(u).
Let γ ∈ (0, 1/2). The proof consists in showing that the empirical process term Ân,1 (u) goes to
zero, uniformly over u ∈ [γ, 1 − γ]2 , in probability (first step) and that the bias term Ân,2 (u)
goes to zero, uniformly over u ∈ [γ, 1 − γ]2 , in probability (second step). The simplifying
assumption (2.1) is crucial for treating the bias term in the second step. But before executing
this program, it is useful to obtain some results on F̂n,j and Ĝn,j , j = 1, 2 (preliminary steps).
Preliminary step: We establish some preliminary results on F̂n,j and Ĝn,j , j = 1, 2, that we
list as facts in the following.
Fact 1. For any j = 1, 2,

    sup_{u_j∈[γ,1−γ], x∈S_X} |F_j( F̂_{n,j}^−(u_j|x) | x ) − u_j| = o_P(n^{−1/4}).
Proof. Because of (H1), invoking Lemma D.3, it holds that with probability going to one,

    u_j = F̂_{n,j}( F̂_{n,j}^−(u_j|x) | x ),

for each x ∈ S_X and u_j ∈ [γ, 1 − γ]. It follows that

    |F_j( F̂_{n,j}^−(u_j|x) | x ) − u_j| = |F_j( F̂_{n,j}^−(u_j|x) | x ) − F̂_{n,j}( F̂_{n,j}^−(u_j|x) | x )|
        ≤ sup_{x∈S_X, y_j∈R} |F_j(y_j|x) − F̂_{n,j}(y_j|x)|,

which is o_P(n^{−1/4}) by (H2).
A consequence of Fact 1 is that

    inf_{x∈S_X, u_j∈[γ,1−γ]} F_j( F̂_{n,j}^−(u_j|x) | x ) = γ + o_P(1),    sup_{x∈S_X, u_j∈[γ,1−γ]} F_j( F̂_{n,j}^−(u_j|x) | x ) = 1 − γ + o_P(1).
Hence, the sequence of events

    E_{n,j,1} = { ∀x ∈ S_X, ∀u_j ∈ [γ, 1 − γ] : γ/2 ≤ F_j( F̂_{n,j}^−(u_j|x) | x ) ≤ 1 − γ/2 }

has probability going to one. From (H2) we have that, with probability going to 1, for any x ∈ S_X and y_j ≥ F_j^−(1 − γ/2|x), it holds that F̂_{n,j}(y_j|x) ≥ 1 − γ. Hence the sequence of events

    E_{n,j,2} = { ∀x ∈ S_X : inf_{y_j ≥ F_j^−(1−γ/2|x)} F̂_{n,j}(y_j|x) ≥ 1 − γ }

has probability going to one as well.
Fact 2. On a sequence of events whose probabilities tend to one, it holds that for every u_j ∈ [γ, 1 − γ] and every (y_j, x) ∈ R × S_X,

    F̂_{n,j}(y_j|x) ≤ u_j  ⇔  y_j ≤ F̂_{n,j}^−(u_j|x).
Proof. The implication "⇐" is an easy consequence of (H1): because of the continuity of y_j ↦ F̂_{n,j}(y_j|x) we can apply Lemma D.3 to obtain the implication. The converse direction "⇒" requires the fact that F̂_{n,j}( · |x) is strictly increasing on [F_j^−(γ|x), F_j^−(1 − γ|x)], which is given by (H1). Assume that u_j ∈ [γ, 1 − γ] and (y_j, x) ∈ R × S_X are such that F̂_{n,j}^−(u_j|x) < y_j. Consider two cases, according to whether y_j is smaller than F_j^−(1 − γ/2|x) or not.

• On the one hand, suppose that y_j < F_j^−(1 − γ/2|x). We have, under E_{n,j,1}, that F_j^−(γ/2|x) ≤ F̂_{n,j}^−(u_j|x). Hence we get the chain of inequalities

    F_j^−(γ/2|x) ≤ F̂_{n,j}^−(u_j|x) < y_j < F_j^−(1 − γ/2|x).

Using the monotonicity of F̂_{n,j}(·|x) on [F_j^−(γ/2|x), F_j^−(1 − γ/2|x)] and Lemma D.3, we find that u_j < F̂_{n,j}(y_j|x).

• On the other hand, suppose that F_j^−(1 − γ/2|x) ≤ y_j, or equivalently that 1 − γ/2 ≤ F_j(y_j|x). Under E_{n,j,2}, it then holds that u_j ≤ 1 − γ < F̂_{n,j}(y_j|x).
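The switching relation in Fact 2 mirrors the classical equivalence for a continuous, strictly increasing distribution function F: F(y) ≤ u ⇔ y ≤ F^−(u). A toy numeric check (our own illustration) with the logistic distribution:

```python
import math

F = lambda y: 1 / (1 + math.exp(-y))   # logistic distribution function
F_inv = lambda u: math.log(u / (1 - u))

pairs = [(y / 10, u / 20) for y in range(-30, 31) for u in range(1, 20)]
# switching relation:  F(y) <= u  <=>  y <= F^-(u)
ok = all((F(y) <= u) == (y <= F_inv(u)) for y, u in pairs)
print(ok)  # True
```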
Fact 3. We have

    sup_{u_j∈[γ,1−γ]} |Ĝ_{n,j}(u_j) − Ĝ_{n,j}^{(or)}(u_j)| = o_P(n^{−1/4}).
Proof. From Fact 2, it holds that, on a sequence of events whose probabilities tend to one, with a slight abuse of notation,

    Ĝ_{n,j}(u_j) − Ĝ_{n,j}^{(or)}(u_j)
        = n^{−1/2} G_n( 1{Y ≤ F̂_{n,j}^−(u_j|X)} − 1{Y ≤ F_j^−(u_j|X)} ) + P( 1{F̂_{n,j} ≤ u_j} − 1{F_j ≤ u_j} ).

We apply Theorem A.1 with ξ_i = (Y_{i,j}, X_i), X = R × S_X, U = [γ, 1 − γ] and E the space of measurable functions valued in R and defined on [γ, 1 − γ] × S_X. Moreover, the quantities η_0, η̂_n, and the map m_{u_j,η} are given by, for every u_j ∈ [γ, 1 − γ] and x ∈ S_X,

    η_0(u_j, x) = F_j^−(u_j|x),    η̂_n(u_j, x) = F̂_{n,j}^−(u_j|x),    m_{u_j,η}(y, x) = 1{y ≤ η(u_j, x)}.
Finally, the space E_0 is the collection of those elements η in E such that

    {x ↦ η(u_j, x) : u_j ∈ [γ, 1 − γ]} ⊂ C_{1+δ_1,M_1}(S_X).
The verification of the three assumptions in Theorem A.1 is as follows:
• First, we show point (i). Recall that if the random variable U is uniformly distributed on (0, 1), then E(1{U ≤ u_1} − 1{U ≤ u_2})^2 = |u_1 − u_2|. We have

    ∫ ( 1{y ≤ F̂_{n,j}^−(u_j|x)} − 1{y ≤ F_j^−(u_j|x)} )^2 f_{X,Y}(x, y) d(x, y)
        = ∫ |F_j( F̂_{n,j}^−(u_j|x) | x ) − u_j| f_X(x) dx
        ≤ sup_{u_j∈[γ,1−γ], x∈S_X} |F_j( F̂_{n,j}^−(u_j|x) | x ) − u_j|,

which, by Fact 1, tends to zero in probability.
• Second, point (ii) is directly obtained invoking (H3).
• Third, point (iii) follows from the existence of δ_2 > 0 and M_2 > 0 such that

    {x ↦ F_j^−(u_j|x) : u_j ∈ [γ, 1 − γ]} ⊂ C_{1+δ_2,M_2}(S_X).    (A.1)

The inclusion is indeed implied by the formula

    ∂_x F_j^−(u_j|x) = − [ ∂_x F_j(y_j|x) / f_j(y_j|x) ]_{y_j = F_j^−(u_j|x)},
which, by (G1), is bounded by M/bγ . Then, based on (G1), we easily obtain that the
function x 7→ ∂x Fj− (uj |x) is δ-Hölder with Hölder constant depending only on bγ and M
(for more details, the reader is invited to read the proof of Proposition 4.4). It remains
to note that under (G1), the set SX is bounded and convex, implying that for any δ > 0
and M > 0, the class of subgraphs of C1+δ,M (SX ) is Donsker (van der Vaart and Wellner,
1996, Corollary 2.7.5). As the difference of two Donsker classes remains Donsker (van der
Vaart and Wellner, 1996, Example 2.10.7), we obtain point (iii).
Consequently, we have shown that

    Ĝ_{n,j}(u_j) − Ĝ_{n,j}^{(or)}(u_j) = ∫ { F_j( F̂_{n,j}^−(u_j|x) | x ) − u_j } f_X(x) dx + o_P(n^{−1/2}).

Conclude invoking Fact 1.
Fact 4. We have

    sup_{u_j∈[γ,1−γ]} |Ĝ_{n,j}^−(u_j) − Ĝ_{n,j}^{(or)−}(u_j)| = o_P(n^{−1/4}).
Proof. By Fact 3, the supremum distance ε_n = sup_{u_j∈[γ,1−γ]} |Ĝ_{n,j}(u_j) − Ĝ_{n,j}^{(or)}(u_j)| converges to zero in probability. We work on the event {ε_n < γ}, the probability of which tends to 1. By Lemma D.4, we have, for u_j ∈ [γ, 1 − γ],

    |Ĝ_{n,j}^−(u_j) − Ĝ_{n,j}^{(or)−}(u_j)|
        ≤ |Ĝ_{n,j}^{(or)−}((u_j − ε_n) ∨ 0) − Ĝ_{n,j}^{(or)−}(u_j)| ∨ |Ĝ_{n,j}^{(or)−}((u_j + ε_n) ∧ 1) − Ĝ_{n,j}^{(or)−}(u_j)|,

with a ∨ b = max(a, b). In terms of W_n(u_j) = n^{1/2}{Ĝ_{n,j}^{(or)−}(u_j) − u_j}, we have

    |Ĝ_{n,j}^{(or)−}((u_j − ε_n) ∨ 0) − Ĝ_{n,j}^{(or)−}(u_j)| ∨ |Ĝ_{n,j}^{(or)−}((u_j + ε_n) ∧ 1) − Ĝ_{n,j}^{(or)−}(u_j)|
        ≤ ε_n + 2 n^{−1/2} sup_{u_j∈[0,1]} |W_n(u_j)|.

From Fact 3 and Lemma D.2, we get the desired rate o_P(n^{−1/4}).
For u_j ∈ [γ, 1 − γ], x ∈ S_X, and j ∈ {1, 2}, define

    Δ̂_{n,j}(u_j|x) = F_j( F̂_{n,j}^−( Ĝ_{n,j}^−(u_j) | x ) | x ) − Ĝ_{n,j}^{(or)−}(u_j).    (A.2)

Fact 5. We have

    sup_{u_j∈[γ,1−γ], x∈S_X} |Δ̂_{n,j}(u_j|x)| = o_P(n^{−1/4}).
Proof. Write

    Δ̂_{n,j}(u_j|x) = { F_j( F̂_{n,j}^−( Ĝ_{n,j}^−(u_j) | x ) | x ) − Ĝ_{n,j}^−(u_j) } + { Ĝ_{n,j}^−(u_j) − Ĝ_{n,j}^{(or)−}(u_j) }.

By Fact 4, we only need to treat the first term on the right-hand side. Using the fact that sup_{u_j∈[γ,1−γ]} |Ĝ_{n,j}^−(u_j) − Ĝ_{n,j}^{(or)−}(u_j)| = o_P(1) and the fact that sup_{u_j∈(0,1]} |Ĝ_{n,j}^{(or)−}(u_j) − u_j| = o_P(1), which is a consequence of Lemma D.2, we know that Ĝ_{n,j}^−(u_j) takes values in [γ/2, 1 − γ/2] with probability going to 1. Then we use Fact 1 to conclude that Δ̂_{n,j}(u_j|x) = o_P(n^{−1/4}).
First step: We show that

    sup_{u∈[γ,1−γ]^2} |Â_{n,1}(u)| = o_P(1),    n → ∞.

By Fact 2, it holds that (with a slight abuse of notation)

    Â_{n,1}(u) = G_n( 1{Y ≤ F̂_n^−( Ĝ_n^−(u) | X )} − 1{Y ≤ F^−( Ĝ_n^{(or)−}(u) | X )} ).
Therefore we apply Theorem A.1 with ξ_i = (X_i, Y_i), X = S_X × R^2, U = [γ, 1 − γ]^2 and E the space of measurable functions valued in R^4 and defined on S_X × [γ, 1 − γ]^2. Moreover, the quantities η_0 and η̂_n are given by, for every u ∈ [γ, 1 − γ]^2 and x ∈ S_X,

    η_0(u, x) = ( F^−(u|x), F^−(u|x) ),    η̂_n(u, x) = ( F̂_n^−( Ĝ_n^−(u) | x ), F^−( Ĝ_n^{(or)−}(u) | x ) ).

Identifying u ∈ U with u ∈ [γ, 1 − γ]^2 and η ∈ E with (η_1, η_2), where η_j, j ∈ {1, 2}, are valued in R^2, the map m_{u,η} : R^2 × S_X → R is given by

    m_{u,η}(y, x) = 1{y ≤ η_1(u, x)} − 1{y ≤ η_2(u, x)}.

Finally, the space E_0 is the collection of those elements η = (η_1, η_2) in E such that

    {x ↦ η_1(u, x) : u ∈ [γ, 1 − γ]^2} ⊂ ( C_{1+δ_1,M_1}(S_X) )^2,
    {x ↦ η_2(u, x) : u ∈ [γ, 1 − γ]^2} ⊂ ( C_{1+δ,M_2}(S_X) )^2,
where M2 depends only on bγ and M . In the following we check each condition of Theorem A.1.
Verification of Condition (i) in Theorem A.1. Because the indicator function is bounded by 1, we have

    ∫ ( 1{y ≤ F̂_n^−( Ĝ_n^−(u) | x )} − 1{y ≤ F^−( Ĝ_n^{(or)−}(u) | x )} )^2 f_{X,Y}(x, y) d(x, y)
        ≤ Σ_{j=1}^{2} sup_{u_j∈[γ,1−γ]} ∫ ( 1{y_j ≤ F̂_{n,j}^−( Ĝ_{n,j}^−(u_j) | x )} − 1{y_j ≤ F_j^−( Ĝ_{n,j}^{(or)−}(u_j) | x )} )^2 f_{X,Y_j}(x, y_j) d(x, y_j),

so that we can focus on each margin separately. Recall that if the random variable U is uniformly distributed on (0, 1), then E(1{U ≤ u_1} − 1{U ≤ u_2})^2 = |u_1 − u_2|. Writing â_{n,x}(u_j) = F̂_{n,j}^−( Ĝ_{n,j}^−(u_j) | x ), and using (2.1) and (2.2), we have

    ∫ ( 1{y_j ≤ â_{n,x}(u_j)} − 1{y_j ≤ F_j^−( Ĝ_{n,j}^{(or)−}(u_j) | x )} )^2 f_{X,Y_j}(x, y_j) d(x, y_j)
        = ∫ ( 1{F_j(y_j|x) ≤ F_j( â_{n,x}(u_j) | x )} − 1{F_j(y_j|x) ≤ Ĝ_{n,j}^{(or)−}(u_j)} )^2 f_{X,Y_j}(x, y_j) d(x, y_j)
        = ∫_{S_X} |F_j( â_{n,x}(u_j) | x ) − Ĝ_{n,j}^{(or)−}(u_j)| f_X(x) dx
        = ∫_{S_X} |Δ̂_{n,j}(u_j|x)| f_X(x) dx,

where Δ̂_{n,j} has been defined in (A.2). Fact 5, demonstrated during the preliminary step, allows us to conclude.
Verification of Condition (ii) in Theorem A.1. We establish that, for each j = 1, 2,

    {x ↦ F̂_{n,j}^−( Ĝ_{n,j}^−(u_j) | x ) : u_j ∈ [γ, 1 − γ]} ⊂ C_{1+δ_1,M_1}(S_X),
    {x ↦ F_j^−( Ĝ_{n,j}^{(or)−}(u_j) | x ) : u_j ∈ [γ, 1 − γ]} ⊂ C_{1+δ,M_2}(S_X),

with probability going to 1.

• For the first inclusion, we have already shown in the proof of Fact 5 that Ĝ_{n,j}^−(u_j) ∈ [γ/2, 1 − γ/2], for every u_j ∈ [γ, 1 − γ], with probability going to 1. On this set, we have

    {x ↦ F̂_{n,j}^−( Ĝ_{n,j}^−(u_j) | x ) : u_j ∈ [γ, 1 − γ]} ⊂ {x ↦ F̂_{n,j}^−(u_j|x) : u_j ∈ [γ/2, 1 − γ/2]},

which is included in C_{1+δ_1,M_1}(S_X) by (H3).

• For the second inclusion, by Lemma D.2, for every u_j ∈ [γ, 1 − γ], it holds that Ĝ_{n,j}^{(or)−}(u_j) ∈ [γ/2, 1 − γ/2] with probability going to one. It follows that

    {x ↦ F_j^−( Ĝ_{n,j}^{(or)−}(u_j) | x ) : u_j ∈ [γ, 1 − γ]} ⊂ {x ↦ F_j^−(u_j|x) : u_j ∈ [γ/2, 1 − γ/2]},

which is included in C_{1+δ,M_2}(S_X) by (A.1), for some M_2 > 0.
Verification of Condition (iii) in Theorem A.1. It is enough to show that the class of functions

    { 1{y ≤ g_1(x)} − 1{y ≤ g_2(x)} : (g_1, g_2) ∈ ( C_{1+δ_1,M_1}(S_X) )^2 × ( C_{1+δ,M_2}(S_X) )^2 }

is P-Donsker. Since the sum and the product of two bounded Donsker classes is Donsker (van der Vaart and Wellner, 1996, Example 2.10.8), we can focus on the class

    { 1{y ≤ g(x)} : g ∈ C_{1+δ,M}(S_X) }.

For any δ > 0 and M > 0, the latter is Donsker since the class of subgraphs of C_{1+δ,M}(S_X), under (G1), has a sufficiently small entropy (van der Vaart and Wellner, 1996, Corollary 2.7.5).
Second step: We show that

    sup_{u∈[γ,1−γ]^2} |Â_{n,2}(u)| = o_P(1),    n → ∞.
Because of the simplifying assumption (2.1), we have, for every u ∈ [0, 1]^2, the formula

    ∫ 1{y ≤ F̂_n^−( Ĝ_n^−(u) | x )} f_{X,Y}(x, y) d(x, y) = ∫ C( F( F̂_n^−( Ĝ_n^−(u) | x ) | x ) ) f_X(x) dx.    (A.3)

Next we use (G2) to expand C in the previous expression around Ĝ_n^{(or)−}(u) and then we conclude by using the rates for the quantities Δ̂_{n,j}(u_j|x), j = 1, 2, established in Fact 5.
In the light of the first step, we have

    sup_{u∈[γ,1−γ]^2} | n^{1/2}{ Ĉ_n(u) − Ĉ_n^{(or)}(u) } − n^{1/2} P( 1{F̂_n ≤ Ĝ_n^−(u)} − 1{F ≤ Ĝ_n^{(or)−}(u)} ) | = o_P(1).
Moreover, because Ĉ_n(u_1, 1) = Ĝ_{n,1}( Ĝ_{n,1}^−(u_1) ) and Ĉ_n^{(or)}(u_1, 1) = Ĝ_{n,1}^{(or)}( Ĝ_{n,1}^{(or)−}(u_1) ), we find

    sup_{u_1∈[γ,1−γ]} |Ĉ_n(u_1, 1) − u_1| = O(n^{−1})    and    sup_{u_1∈[γ,1−γ]} |Ĉ_n^{(or)}(u_1, 1) − u_1| = O(n^{−1}),

which implies, by the triangle inequality, that

    sup_{u_1∈[γ,1−γ]} n^{1/2} |Ĉ_n(u_1, 1) − Ĉ_n^{(or)}(u_1, 1)| = o(1).

Similarly, we find

    sup_{u_2∈[γ,1−γ]} n^{1/2} |Ĉ_n(1, u_2) − Ĉ_n^{(or)}(1, u_2)| = o(1).
Bringing all these facts together yields, for any j = 1, 2,

    sup_{u_j∈[γ,1−γ]} n^{1/2} | P( 1{F̂_{n,j} ≤ Ĝ_{n,j}^−(u_j)} − 1{F_j ≤ Ĝ_{n,j}^{(or)−}(u_j)} ) | = o_P(1).

Hence, from the definition of Δ̂_{n,j}(u_j|x) given in equation (A.2), it follows that, for any j = 1, 2,

    sup_{u_j∈[γ,1−γ]} | n^{1/2} ∫ Δ̂_{n,j}(u_j|x) f_X(x) dx | = o_P(1).    (A.4)
Next define

    Ŵ_n(u|x) = Δ̂_{n,1}(u_1|x) Ċ_1(u) + Δ̂_{n,2}(u_2|x) Ċ_2(u),    Ŵ_n(u) = ∫ Ŵ_n(u|x) f_X(x) dx.

By (A.4), it holds that sup_{u∈[γ,1−γ]^2} |n^{1/2} Ŵ_n(u)| = o_P(1). Then because of (A.3) and the simplifying assumption (2.1), we have
    Â_{n,2}(u) − n^{1/2} Ŵ_n(u)
        = n^{1/2} ∫_{S_X} { H( F̂_n^−( Ĝ_n^−(u) | x ) | x ) − H( F^−( Ĝ_n^{(or)−}(u) | x ) | x ) − Ŵ_n(u|x) } f_X(x) dx
        = n^{1/2} ∫_{S_X} { C( F( F̂_n^−( Ĝ_n^−(u) | x ) | x ) ) − C( Ĝ_n^{(or)−}(u) ) − Ŵ_n(u|x) } f_X(x) dx.
The term inside the second integral equals

    C( Ĝ_n^{(or)−}(u) + Δ̂_n(u|x) ) − C( Ĝ_n^{(or)−}(u) ) − Ŵ_n(u|x),

with Δ̂_n(u|x) = ( Δ̂_{n,1}(u_1|x), Δ̂_{n,2}(u_2|x) ). Then, by Lemma D.1 [which we can apply since Ĝ_n^{(or)−}(u) lies in the interior of the unit square], we have, for all u ∈ [γ, 1 − γ]^2 and all x ∈ S_X,

    | C( Ĝ_n^{(or)−}(u) + Δ̂_n(u|x) ) − C( Ĝ_n^{(or)−}(u) ) − Ŵ_n(u|x) | ≤ (4κ/γ) |Δ̂_n(u|x)|^2.
Integrating out over x ∈ S_X yields

    | Â_{n,2}(u) − n^{1/2} Ŵ_n(u) | ≤ n^{1/2} (4κ/γ) ∫_{S_X} |Δ̂_n(u|x)|^2 f_X(x) dx ≤ n^{1/2} (4κ/γ) sup_{u∈[γ,1−γ]^2, x∈S_X} |Δ̂_n(u|x)|^2,

which is o_P(1) by Fact 5.
Appendix B
Proof of Theorem 3.1
The proof follows from an application of Theorem 2.1. We need to show that (H1), (H2) and (H3) are valid. The last two assertions are direct consequences of Proposition 4.2 and Proposition 4.4, respectively. Those propositions are stated in Section 4.
Hence, we only need to show that (H1) holds. By Lemma 4.1, there exists c > 0 depending on K such that the events

    E_{n,3} = { inf_{x∈S_X} ( p̂_{n,0}(x) p̂_{n,2}(x) − p̂_{n,1}(x)^2 ) ≥ b^2 c }

have probability going to 1. Under E_{n,3}, equations (4.2) and (4.3) hold, i.e.,
    F̂_{n,j}(y_j|x) = [ Q̂_{n,j,0}(y_j, x) p̂_{n,2}(x) − Q̂_{n,j,1}(y_j, x) p̂_{n,1}(x) ] / [ p̂_{n,0}(x) p̂_{n,2}(x) − p̂_{n,1}(x)^2 ],
    f̂_{n,j}(y_j|x) = [ q̂_{n,j,0}(y_j, x) p̂_{n,2}(x) − q̂_{n,j,1}(y_j, x) p̂_{n,1}(x) ] / [ p̂_{n,0}(x) p̂_{n,2}(x) − p̂_{n,1}(x)^2 ],

with f̂_{n,j}(y_j|x) = ∂_{y_j} F̂_{n,j}(y_j|x), and, for every k ∈ N,

    q̂_{n,j,k}(y_j, x) = n^{−1} Σ_{i=1}^{n} L_{h_{n,2}}(y_j − Y_{ij}) w_{k,h_{n,1}}(x − X_i),    Q̂_{n,j,k}(y_j, x) = ∫_{−∞}^{y_j} q̂_{n,j,k}(t, x) dt.
Hence, on the event E_{n,3}, y_j ↦ F̂_{n,j}(y_j|x) is continuous and differentiable on R. By the uniform convergence of f̂_{n,j}, stated in Proposition 4.2, and using (G1), we get for all x ∈ S_X and all y_j ∈ [F_j^−(γ|x), F_j^−(1 − γ|x)] that

    f̂_{n,j}(y_j|x) ≥ f_j(y_j|x) − sup_{x∈S_X, y_j∈R} |f̂_{n,j}(y_j|x) − f_j(y_j|x)| ≥ b − o_P(1).

Hence the event

    E_{n,4} = { min_{j∈{1,2}} inf_{x∈S_X} inf_{y_j∈[F_j^−(γ|x), F_j^−(1−γ|x)]} f̂_{n,j}(y_j|x) ≥ b/2 }

has probability going to 1. Conclude by noting that under E_{n,4}, for every x ∈ S_X, the function F̂_{n,j}( · |x) is strictly increasing on [F_j^−(γ|x), F_j^−(1 − γ|x)].
Appendix C
Proofs of the auxiliary results on the smoothed local linear estimator
C.1 Uniform convergence rates for kernel estimators
Our analysis of the smoothed local linear estimator relies on a result on the concentration around
their expectations of certain kernel estimators. The result follows from empirical process theory
and notably by the use of some version of Talagrand’s inequality (Talagrand, 1994), formulated
in Giné and Guillou (2002). For stronger results including exact almost-sure convergence rates,
see Einmahl and Mason (2000).
Proposition C.1. Let (Xi , Yi ), i = 1, 2, . . . , be independent copies of a bivariate random vector
(X, Y ). Assume that (X, Y ) has a bounded density fX,Y and that K and L are bounded real
functions of bounded variation that vanish outside [−1, 1]. For any sequences hn,1 → 0 and
hn,2 → 0 such that nhn,1 / |log hn,1 | → ∞ and nhn,1 hn,2 / |log hn,1 hn,2 | → ∞, we have, as n → ∞,
\[
\sup_{x \in \mathbb{R}} \Big| n^{-1} \sum_{i=1}^n \big\{ K_{h_{n,1}}(x - X_i) - \mathrm{E}[K_{h_{n,1}}(x - X)] \big\} \Big| = O_P\Bigg( \sqrt{\frac{|\log h_{n,1}|}{n h_{n,1}}} \Bigg),
\]
\[
\sup_{(x,y) \in \mathbb{R}^2} \Big| n^{-1} \sum_{i=1}^n \big\{ L_{h_{n,2}}(y - Y_i)\, K_{h_{n,1}}(x - X_i) - \mathrm{E}[L_{h_{n,2}}(y - Y)\, K_{h_{n,1}}(x - X)] \big\} \Big| = O_P\Bigg( \sqrt{\frac{|\log h_{n,1} h_{n,2}|}{n h_{n,1} h_{n,2}}} \Bigg),
\]
\[
\sup_{(x,y) \in \mathbb{R}^2} \Big| n^{-1} \sum_{i=1}^n \big\{ L_{h_{n,2}}(y - Y_i) - \mathrm{E}[L_{h_{n,2}}(y - Y_i) \,|\, X_i] \big\}\, K_{h_{n,1}}(x - X_i) \Big| = O_P\Bigg( \sqrt{\frac{|\log h_{n,1} h_{n,2}|}{n h_{n,1} h_{n,2}}} \Bigg),
\]
\[
\sup_{(x,y) \in \mathbb{R}^2} \Big| n^{-1} \sum_{i=1}^n \big\{ \varphi_{h_{n,2}}(y, Y_i)\, K_{h_{n,1}}(x - X_i) - \mathrm{E}[\varphi_{h_{n,2}}(y, Y)\, K_{h_{n,1}}(x - X)] \big\} \Big| = O_P\Bigg( \sqrt{\frac{|\log h_{n,1}|}{n h_{n,1}}} \Bigg),
\]
\[
\sup_{(x,y) \in \mathbb{R}^2} \Big| n^{-1} \sum_{i=1}^n \big\{ \varphi_{h_{n,2}}(y, Y_i) - \mathrm{E}[\varphi_{h_{n,2}}(y, Y_i) \,|\, X_i] \big\}\, K_{h_{n,1}}(x - X_i) \Big| = O_P\Bigg( \sqrt{\frac{|\log h_{n,1}|}{n h_{n,1}}} \Bigg),
\]
where $K_h(\,\cdot\,) = h^{-1} K(\,\cdot\,/h)$, $L_h(\,\cdot\,) = h^{-1} L(\,\cdot\,/h)$, and $\varphi_h(y, Y_i) = \int_{-\infty}^y L_h(t - Y_i)\, dt$.
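The flavor of the first assertion can be illustrated by simulation; the sketch below is not part of the formal argument, and the kernel (Epanechnikov), the distribution of $X$ (standard normal), and the sample sizes and bandwidths are arbitrary choices. The supremum deviation shrinks as $n h_{n,1}$ grows relative to $|\log h_{n,1}|$.

```python
import numpy as np

rng = np.random.default_rng(1)

def sup_deviation(n, h):
    """sup_x |n^{-1} sum_i K_h(x - X_i) - E K_h(x - X)| over a grid of x."""
    X = rng.normal(size=n)
    xs = np.linspace(-3.0, 3.0, 201)
    U = (xs[:, None] - X[None, :]) / h
    Kh = 0.75 * (1.0 - U**2) * (np.abs(U) <= 1.0) / h   # Epanechnikov K_h
    est = Kh.mean(axis=1)
    # E[K_h(x - X)] = \int K(t) f_X(x - h t) dt, by a Riemann sum on [-1, 1]
    t = np.linspace(-1.0, 1.0, 2001)
    f = np.exp(-((xs[:, None] - h * t[None, :]) ** 2) / 2) / np.sqrt(2 * np.pi)
    expc = (0.75 * (1.0 - t**2) * f).mean(axis=1) * 2.0
    return np.abs(est - expc).max()

# median over a few replications, small (n, h) versus large (n, h)
dev_small = np.median([sup_deviation(500, 0.5) for _ in range(5)])
dev_large = np.median([sup_deviation(20000, 0.2) for _ in range(5)])
```

With the larger sample the deviation is markedly smaller, in line with the rate $\sqrt{|\log h_{n,1}|/(n h_{n,1})}$.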
Proof. We use the definition of bounded measurable VC classes given in Giné and Guillou (2002). That is, call a class $\mathcal{F}$ of measurable functions a bounded measurable VC class if $\mathcal{F}$ is separable or image admissible Suslin (Dudley, 1999, Section 5.3) and if there exist $A > 0$ and $v > 0$ such that, for every probability measure $Q$ and every $0 < \varepsilon < 1$,
\[
N\big( \mathcal{F}, L_2(Q), \varepsilon \|F\|_{L_2(Q)} \big) \le \Big( \frac{A}{\varepsilon} \Big)^v,
\]
where $F$ is an envelope for $\mathcal{F}$ and $N(T, d, \varepsilon)$ denotes the $\varepsilon$-covering number of the metric space $(T, d)$ (van der Vaart and Wellner, 1996). [Other terminologies associated to the same concepts are Euclidean classes (Nolan and Pollard, 1987) and uniform entropy numbers (van der Vaart and Wellner, 1996).] Using Nolan and Pollard (1987), Lemma 22, Assertion (ii), the classes $\{z \mapsto L(h^{-1}(y - z)) : y \in \mathbb{R},\, h > 0\}$ and $\{z \mapsto K(h^{-1}(x - z)) : x \in \mathbb{R},\, h > 0\}$ are bounded measurable VC [see Giné and Guillou (2002) for remarks and references on measurability issues associated to the previous classes].
The first statement is a direct application of Theorem 2.1, Equation (2.2), in Giné and Guillou (2002), with, in their notation,
\[
\mathcal{F}_n = \big\{ z \mapsto K\big( h_{n,1}^{-1}(x - z) \big) : x \in \mathbb{R} \big\},
\qquad
\sigma_n^2 = h_{n,1} \|f_X\|_\infty \int K(u)^2\, du,
\qquad
U = \sup_{u \in [-1,1]} |K(u)|,
\]
where $f_X$ denotes the density of $X$ and $\|\cdot\|_\infty$ is the supremum norm. The class $\mathcal{F}_n$ is included in one of the bounded measurable VC classes given above. Moreover, we have
\[
\mathrm{Var}\big( K\big( h_{n,1}^{-1}(x - X) \big) \big) \le \int K\big( h_{n,1}^{-1}(x - z) \big)^2 f_X(z)\, dz = h_{n,1} \int K(u)^2 f_X(x - h_{n,1} u)\, du \le \sigma_n^2.
\]
The cited equation yields that the expectation of the supremum over $x$ of the absolute value of $n^{-1} \sum_{i=1}^n \{ K_{h_{n,1}}(x - X_i) - \mathrm{E}[K_{h_{n,1}}(x - X)] \}$ is bounded by some constant times
\[
(n h_{n,1})^{-1} |\log h_{n,1}| + \big\{ (n h_{n,1})^{-1} |\log h_{n,1}| \big\}^{1/2} = O\Big( \big\{ (n h_{n,1})^{-1} |\log h_{n,1}| \big\}^{1/2} \Big).
\]
Hence we have obtained the first assertion.
Preservation properties for VC classes (van der Vaart and Wellner, 1996, Lemma 2.6.18) imply that the product class
\[
\big\{ (z_1, z_2) \mapsto L\big( h_2^{-1}(y - z_2) \big)\, K\big( h_1^{-1}(x - z_1) \big) : x \in \mathbb{R},\, y \in \mathbb{R},\, h_1 > 0,\, h_2 > 0 \big\}
\]
is bounded measurable VC. Then we can apply Theorem 2.1, Equation (2.2), in Giné and Guillou (2002) in a similar fashion as before. The main difference lies in the variance term, which follows from
\begin{align*}
\mathrm{Var}\big( L\big( h_{n,2}^{-1}(y - Y) \big)\, K\big( h_{n,1}^{-1}(x - X) \big) \big)
&\le \int L\big( h_{n,2}^{-1}(y - z_2) \big)^2 K\big( h_{n,1}^{-1}(x - z_1) \big)^2 f_{X,Y}(z_1, z_2)\, d(z_1, z_2) \\
&= h_{n,1} h_{n,2} \int L(u_2)^2 K(u_1)^2 f_{X,Y}\big( x - h_{n,1} u_1,\, y - h_{n,2} u_2 \big)\, d(u_1, u_2) \\
&\le h_{n,1} h_{n,2} \|f_{X,Y}\|_\infty \int L(u_2)^2 K(u_1)^2\, d(u_1, u_2) = \sigma_n^2.
\end{align*}
Computing the bound leads to the second assertion.
Using the second statement and the triangle inequality, the third assertion is obtained whenever
\[
\sup_{(x,y) \in \mathbb{R}^2} \Big| n^{-1} \sum_{i=1}^n \big\{ \mathrm{E}[L_{h_{n,2}}(y - Y_i) \,|\, X_i]\, K_{h_{n,1}}(x - X_i) - \mathrm{E}[L_{h_{n,2}}(y - Y)\, K_{h_{n,1}}(x - X)] \big\} \Big| = O_P\Bigg( \sqrt{\frac{|\log h_{n,1} h_{n,2}|}{n h_{n,1} h_{n,2}}} \Bigg).
\]
If the class $\{z \mapsto \mathrm{E}[L(h^{-1}(y - Y)) \,|\, X = z] : y \in \mathbb{R},\, h > 0\}$ is a bounded measurable VC class of functions, then the class $\{z \mapsto \mathrm{E}[L(h_2^{-1}(y - Y)) \,|\, X = z]\, K(h_1^{-1}(x - z)) : x \in \mathbb{R},\, y \in \mathbb{R},\, h_1 > 0,\, h_2 > 0\}$ is still a bounded measurable VC class of functions (van der Vaart and Wellner, 1996, Lemma 2.6.18). Consequently, we can apply Theorem 2.1, Equation (2.2) in Giné and Guillou (2002), with the same $\sigma_n^2$ as before, because, by Jensen's inequality,
\begin{align*}
\mathrm{Var}\big( \mathrm{E}\big[ L\big( h_{n,2}^{-1}(y - Y) \big) \,\big|\, X \big]\, K\big( h_{n,1}^{-1}(x - X) \big) \big)
&\le \mathrm{E}\Big[ \mathrm{E}\big[ L\big( h_{n,2}^{-1}(y - Y) \big) \,\big|\, X \big]^2\, K\big( h_{n,1}^{-1}(x - X) \big)^2 \Big] \\
&\le \mathrm{E}\Big[ L\big( h_{n,2}^{-1}(y - Y) \big)^2\, K\big( h_{n,1}^{-1}(x - X) \big)^2 \Big] \le \sigma_n^2.
\end{align*}
Now we show that $\mathcal{L} = \{z \mapsto \mathrm{E}[L(h^{-1}(y - Y)) \,|\, X = z] : y \in \mathbb{R},\, h > 0\}$ is a bounded measurable VC class of functions. Let $Q$ be a probability measure on $S_X$. Define $\tilde Q$ as the probability measure given by
\[
d\tilde Q(y) = \int f(y|x)\, dQ(x)\, dy.
\]
Let $f_1, \ldots, f_N$ denote the centers of an $\varepsilon$-covering of the class $\mathcal{L}_0 = \{z \mapsto L(h^{-1}(y - z)) : y \in \mathbb{R},\, h > 0\}$ with respect to the metric $L_2(\tilde Q)$. For a function $x \mapsto \mathrm{E}[f(Y) \,|\, X = x]$, element of the class $\mathcal{L}$, there exists $k \in \{1, \ldots, N\}$ such that, by Jensen's inequality and Fubini's theorem,
\begin{align*}
\int \big| \mathrm{E}[f(Y) \,|\, X = x] - \mathrm{E}[f_k(Y) \,|\, X = x] \big|^2\, dQ(x)
&\le \int \mathrm{E}\big[ |f(Y) - f_k(Y)|^2 \,\big|\, X = x \big]\, dQ(x) \\
&= \int \!\! \int |f(y) - f_k(y)|^2 f(y|x)\, dy\, dQ(x) \\
&= \int |f(y) - f_k(y)|^2\, d\tilde Q(y) \le \varepsilon^2.
\end{align*}
Consequently,
\[
N\big( \mathcal{L}, L_2(Q), \varepsilon \big) \le N\big( \mathcal{L}_0, L_2(\tilde Q), \varepsilon \big).
\]
Since the kernel $L$ is bounded, there exists a positive constant $c_\infty$ that is an envelope for both classes $\mathcal{L}$ and $\mathcal{L}_0$. Note that $\|c_\infty\|_{L_2(Q)} = c_\infty$ for any probability measure $Q$. Using the fact that $\mathcal{L}_0$ is bounded measurable VC (see the proof of the first assertion), it follows that
\[
N\big( \mathcal{L}, L_2(Q), \varepsilon c_\infty \big) \le N\big( \mathcal{L}_0, L_2(\tilde Q), \varepsilon c_\infty \big) \le \Big( \frac{A}{\varepsilon} \Big)^v,
\]
i.e., the class $\mathcal{L}$ is bounded measurable VC.
In the same spirit, the fourth assertion holds true whenever the class $\{z \mapsto \varphi_h(y, z) : y \in \mathbb{R},\, h > 0\}$ is a bounded measurable VC class of functions. This is indeed true in virtue of Lemma 22, Assertion (ii), in Nolan and Pollard (1987), because each function $z \mapsto \varphi_h(y, z)$ of the class can be written as $z \mapsto \bar L(h^{-1}(y - z))$, where
\[
\bar L(t) = \int_{-\infty}^t L(u)\, du = \int_{-\infty}^t L^+(u)\, du + \int_{-\infty}^t L^-(u)\, du,
\]
with $L = L^+ + L^-$, $L^+(u) \ge 0$, $L^-(u) \le 0$, which is indeed a bounded real function of bounded variation (as the sum of an increasing function and a decreasing function is of bounded variation). Applying Theorem 2.1, Equation (2.2) in Giné and Guillou (2002), with $\sigma_n$ given by the upper bound in the inequality chain
\[
\mathrm{Var}\big( \varphi_{h_{n,2}}(y, Y)\, K\big( h_{n,1}^{-1}(x - X) \big) \big) \le \int \varphi_{h_{n,2}}(y, z_2)^2\, K\big( h_{n,1}^{-1}(x - z_1) \big)^2 f_{X,Y}(z_1, z_2)\, d(z_1, z_2) \le h_{n,1} \sup_{u \in [-1,1]} \big\{ \bar L(u)^2 \big\}\, \|f_X\|_\infty \int K(u)^2\, du = \sigma_n^2,
\]
leads to the same rate as in the first assertion.
Using the fourth statement and the triangle inequality, the fifth assertion is obtained whenever
\[
\sup_{(x,y) \in \mathbb{R}^2} \Big| n^{-1} \sum_{i=1}^n \big\{ \mathrm{E}[\varphi_{h_{n,2}}(y, Y_i) \,|\, X_i]\, K_{h_{n,1}}(x - X_i) - \mathrm{E}[\varphi_{h_{n,2}}(y, Y)\, K_{h_{n,1}}(x - X)] \big\} \Big| = O_P\Bigg( \sqrt{\frac{|\log h_{n,1}|}{n h_{n,1}}} \Bigg).
\]
Following exactly the same lines as in the proof of the third statement [replacing $L(h^{-1}(y - Y))$ by $\varphi_h(y, Y)$] we show that, since $\{z \mapsto \varphi_h(y, z) : y \in \mathbb{R},\, h > 0\}$ is a bounded measurable VC class of functions (obtained in the proof of the fourth statement), the class $\{z \mapsto \mathrm{E}[\varphi_h(y, Y) \,|\, X = z] : y \in \mathbb{R},\, h > 0\}$ is a bounded measurable VC class of functions. Then the class $\{z \mapsto \mathrm{E}[\varphi_{h_2}(y, Y) \,|\, X = z]\, K(h_1^{-1}(x - z)) : x \in \mathbb{R},\, y \in \mathbb{R},\, h_1 > 0,\, h_2 > 0\}$ is still a bounded measurable VC class of functions (van der Vaart and Wellner, 1996, Lemma 2.6.18). We conclude by applying Theorem 2.1, Equation (2.2) in Giné and Guillou (2002), with the same $\sigma_n^2$ as before; note that Jensen's inequality gives $\mathrm{E}[\varphi_{h_{n,2}}(y, Y) \,|\, X = x]^2 \le \mathrm{E}[\varphi_{h_{n,2}}(y, Y)^2 \,|\, X = x]$.
C.2  Proofs for Section 4

C.2.1  Proof of Lemma 4.1
P
Recall that p̂n,k (x) = n−1 ni=1 wk,hn,1 (x − Xi ) with wk,h ( · ) = h−1 wk ( · /h) and wk (u) =
uk K(u). We have
Z
E{p̂n,k (x)} =
fX (x − hn,1 u) wk (u) du.
By Condition (G3), the function $K$ is a bounded function of bounded variation that vanishes outside $(-1, 1)$. Applying Proposition C.1, first statement, with kernel $u \mapsto w_k(u)$ and using Condition (G4), we have
\[
\sup_{x \in \mathbb{R}} \big| \hat p_{n,k}(x) - \mathrm{E}\{\hat p_{n,k}(x)\} \big| = o_P(1), \qquad n \to \infty.
\]
To obtain (4.1), we can rely on the expectations of all terms involved: it suffices to show that, for some $c > 0$ and $n$ sufficiently large, for all $x \in S_X$,
\[
\mathrm{E}\{\hat p_{n,0}(x)\}\, \mathrm{E}\{\hat p_{n,2}(x)\} - \big[ \mathrm{E}\{\hat p_{n,1}(x)\} \big]^2 \ge 2 b^2 c. \tag{C.1}
\]
We have
\[
\mathrm{E}\{\hat p_{n,0}(x)\}\, \mathrm{E}\{\hat p_{n,2}(x)\} - \big[ \mathrm{E}\{\hat p_{n,1}(x)\} \big]^2 = \mathrm{E}\{\hat p_{n,0}(x)\} \int f_X(x - h_{n,1} u) \big( u - a_n(x) \big)^2 K(u)\, du,
\]
with $a_n(x) = \mathrm{E}\{\hat p_{n,1}(x)\} / \mathrm{E}\{\hat p_{n,0}(x)\}$. On the one hand, by (G3), for $n$ sufficiently large, we have, for all $x \in S_X$,
\[
\mathrm{E}\{\hat p_{n,0}(x)\} = \int f_X(x - h_{n,1} u)\, K(u)\, du \ge b \int_{x - h_{n,1} u \in S_X} K(u)\, du \ge b \min\Big( \int_{-1}^0 K(u)\, du,\ \int_0^1 K(u)\, du \Big) = b/2,
\]
where the last equality is due to the symmetry of the function K. On the other hand, for n
large enough, for all x ∈ SX ,
\begin{align*}
\int f_X(x - h_{n,1} u) \big( u - a_n(x) \big)^2 K(u)\, du
&\ge b \int_{x - h_{n,1} u \in S_X} \big( u - a_n(x) \big)^2 K(u)\, du \\
&\ge b \min\Big( \int_{-1}^0 \big( u - a_n(x) \big)^2 K(u)\, du,\ \int_0^1 \big( u - a_n(x) \big)^2 K(u)\, du \Big) \\
&= b \min\Big( \int_0^1 \big( u + a_n(x) \big)^2 K(u)\, du,\ \int_0^1 \big( u - a_n(x) \big)^2 K(u)\, du \Big) \\
&\ge b \inf_{a \in \mathbb{R}} \int_0^1 (u - a)^2 K(u)\, du = \frac{b\, c_K}{2},
\end{align*}
where $c_K = 2 \int_0^1 (u - a_K)^2 K(u)\, du$ and $a_K = 2 \int_0^1 u\, K(u)\, du$. By taking $c = c_K/8$ we obtain (C.1). We have shown (4.1).
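For a concrete choice of kernel this last step can be checked numerically. The sketch below uses the Epanechnikov kernel (an arbitrary choice for illustration) and computes $a_K$ and $c_K$ by a Riemann sum; a grid search then confirms that the infimum over $a$ is attained at $a_K$ with value $c_K/2$.

```python
import numpy as np

# Epanechnikov kernel K(u) = 0.75 (1 - u^2) on [-1, 1]; symmetric, integrates to 1
u = np.linspace(0.0, 1.0, 200001)
Ku = 0.75 * (1.0 - u**2)

def J(a):
    # J(a) = \int_0^1 (u - a)^2 K(u) du, by a Riemann sum on [0, 1]
    return np.mean((u - a) ** 2 * Ku)

a_K = 2.0 * np.mean(u * Ku)   # a_K = 2 \int_0^1 u K(u) du
c_K = 2.0 * J(a_K)            # c_K = 2 \int_0^1 (u - a_K)^2 K(u) du
```

For the Epanechnikov kernel this gives $a_K = 0.375$.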
By differentiating, we see that (3.3) is equivalent to a linear system whose determinant is equal to $\hat p_{n,0}(x)\, \hat p_{n,2}(x) - \hat p_{n,1}(x)^2$. Because the latter is strictly positive with probability going to 1, we can invert the linear system to obtain (4.2).

C.2.2  Proof of Proposition 4.2

Using Lemma 4.1, and since we are concerned with convergence in probability, we can restrict attention to the event that $\hat p_{n,0}(x)\, \hat p_{n,2}(x) - \hat p_{n,1}(x)^2 \ge b^2 c$ for every $x \in S_X$. Recall $\hat Q_{n,k}(y, x) = n^{-1} \sum_{i=1}^n \varphi_{h_{n,2}}(y, Y_i)\, w_{k,h_{n,1}}(x - X_i)$, where $\varphi_h$ is defined in (3.1) and $w_{k,h}$ is defined right before Lemma 4.1. It follows that
\[
\hat F_n(y|x) = \frac{\hat Q_{n,0}(y, x)\, \hat p_{n,2}(x) - \hat Q_{n,1}(y, x)\, \hat p_{n,1}(x)}{\hat p_{n,0}(x)\, \hat p_{n,2}(x) - \hat p_{n,1}(x)^2} = \hat Q_{n,0}(y, x)\, \hat s_{n,2}(x) - \hat Q_{n,1}(y, x)\, \hat s_{n,1}(x),
\]
with $\hat s_{n,k}(x) = \big\{ \hat p_{n,0}(x)\, \hat p_{n,2}(x) - \hat p_{n,1}(x)^2 \big\}^{-1} \hat p_{n,k}(x)$. Write
\[
\hat Q_{n,k}(y, x) = \hat\beta_{n,k}(y, x) + \hat b_{n,k}(y, x) + \hat\gamma_{n,k}(y, x),
\]
with
\begin{align*}
\hat\beta_{n,k}(y, x) &= n^{-1} \sum_{i=1}^n \big\{ \varphi_{h_{n,2}}(y, Y_i) - \mathrm{E}[\varphi_{h_{n,2}}(y, Y_i) \,|\, X_i] \big\}\, w_{k,h_{n,1}}(x - X_i), \\
\hat b_{n,k}(y, x) &= n^{-1} \sum_{i=1}^n \big\{ \mathrm{E}[\varphi_{h_{n,2}}(y, Y_i) \,|\, X_i] - F(y|x) - (X_i - x)\, \partial_x F(y|x) \big\}\, w_{k,h_{n,1}}(x - X_i), \\
\hat\gamma_{n,k}(y, x) &= F(y|x)\, \hat p_{n,k}(x) - h_{n,1}\, \hat p_{n,k+1}(x)\, \partial_x F(y|x).
\end{align*}
Elementary algebra yields
\[
\hat\gamma_{n,0}(y, x)\, \hat s_{n,2}(x) - \hat\gamma_{n,1}(y, x)\, \hat s_{n,1}(x) = F(y|x).
\]
It follows that
F̂n (y|x) − F (y|x) = {β̂n,0 (y, x) + b̂n,0 (y, x)} ŝn,2 (x) − {β̂n,1 (y, x) + b̂n,1 (y, x)} ŝn,1 (x).
(C.2)
In what follows, we apply Proposition C.1 to establish uniform rates of convergence for the
derivatives of order l ∈ {0, 1, 2} of β̂n,k and b̂n,k , for k ∈ {0, 1}, and of ŝn,k , for k ∈ {1, 2}. We
need to distinguish between many situations because differentiating with respect to x or y does
not impact the rates of convergence in the same way. In contrast, the index k has no effect.
Consequently, in what follows we fix $k \in \{0, 1, 2\}$. We define
\[
\hat r_{n,k}^{(l)}(x) = n^{-1} \sum_{i=1}^n \big| w_{k,h_{n,1}}^{(l)}(x - X_i) \big|,
\qquad
w_{k,h}^{(l)}(\,\cdot\,) = h^{-1} w_k^{(l)}(\,\cdot\,/h),
\]
where $w_k^{(l)} : u \mapsto \partial_u^l w_k(u)$ for $l \in \{0, 1, 2\}$. All asymptotic statements are for $n \to \infty$, which is omitted for brevity.
Rate of $\hat r_{n,k}^{(l)}$ for $l \in \{0, 1, 2\}$. Invoking Condition (G3), the functions $K$, $K'$ and $K''$ are bounded real functions of bounded variation that vanish outside $(-1, 1)$. Consequently, the functions $w_k^{(l)}$, $l \in \{0, 1, 2\}$, are bounded functions of bounded variation that vanish outside $(-1, 1)$. Note that the absolute value of a function of bounded variation is of bounded variation too, in view of the fact that $\big| |a| - |b| \big| \le |a - b|$ for all $a, b \in \mathbb{R}$. By Proposition C.1, first assertion, with $K$ equal to $u \mapsto |w_k^{(l)}(u)|$, we therefore have
\[
\sup_{x \in S_X} \big| \hat r_{n,k}^{(l)}(x) - \mathrm{E}\{\hat r_{n,k}^{(l)}(x)\} \big| = O_P\Bigg( \sqrt{\frac{|\log h_{n,1}|}{n h_{n,1}}} \Bigg).
\]
By (G1) it holds that
\[
\mathrm{E}\{\hat r_{n,k}^{(l)}(x)\} = \int f_X(x - h_{n,1} u)\, |w_k^{(l)}(u)|\, du \le M \int |w_k^{(l)}(u)|\, du.
\]
By the triangle inequality and Condition (G4), it follows that, for $l \in \{0, 1, 2\}$,
\[
\sup_{x \in S_X} \hat r_{n,k}^{(l)}(x) = O_P\Bigg( \sqrt{\frac{|\log h_{n,1}|}{n h_{n,1}}} + 1 \Bigg) = O_P(1). \tag{C.3}
\]
Rate of $\partial_x^l \hat p_{n,k}$ for $l \in \{0, 1, 2\}$. Differentiating under the expectation [apply the dominated convergence theorem invoking (G3)], we get for any $l \in \{0, 1, 2\}$,
\[
\partial_x^l \{ \mathrm{E}\, \hat p_{n,k}(x) \} = \partial_x^l \int f_X(z)\, w_{k,h_{n,1}}(x - z)\, dz = h_{n,1}^{-l} \int f_X(z)\, w_{k,h_{n,1}}^{(l)}(x - z)\, dz. \tag{C.4}
\]
Similarly [apply the dominated convergence theorem invoking (G1)], we have
\[
\partial_x^l \{ \mathrm{E}\, \hat p_{n,k}(x) \} = \partial_x^l \int f_X(x - h_{n,1} u)\, w_k(u)\, du = \int f_X^{(l)}(x - h_{n,1} u)\, w_k(u)\, du. \tag{C.5}
\]
As a consequence of (C.4), we have
\[
\partial_x^l \{ \hat p_{n,k}(x) - \mathrm{E}\, \hat p_{n,k}(x) \} = (n h_{n,1}^l)^{-1} \sum_{i=1}^n \Big\{ w_{k,h_{n,1}}^{(l)}(x - X_i) - \mathrm{E}\big[ w_{k,h_{n,1}}^{(l)}(x - X) \big] \Big\},
\]
and in view of Proposition C.1, first assertion, with $K$ equal to $u \mapsto w_k^{(l)}(u)$, it holds that
\[
\sup_{x \in S_X} \big| \partial_x^l \{ \hat p_{n,k}(x) - \mathrm{E}\, \hat p_{n,k}(x) \} \big| = O_P\Bigg( \sqrt{\frac{|\log h_{n,1}|}{n h_{n,1}^{1+2l}}} \Bigg).
\]
Using (C.5) and invoking (G1), we have
\[
\big| \partial_x^l\, \mathrm{E}\{\hat p_{n,k}(x)\} \big| \le M \int |w_k(u)|\, du.
\]
By the triangle inequality, it follows that, for $l \in \{0, 1, 2\}$,
\[
\sup_{x \in S_X} \big| \partial_x^l \hat p_{n,k}(x) \big| = O_P\Bigg( \sqrt{\frac{|\log h_{n,1}|}{n h_{n,1}^{1+2l}}} + 1 \Bigg). \tag{C.6}
\]
Rate of $\partial_x^l \hat s_{n,k}$ for $l \in \{0, 1, 2\}$. We use the quotient rule for derivatives to obtain that
\[
\sup_{x \in S_X} \big| \partial_x^l \hat s_{n,k}(x) \big| = O_P\Bigg( \sqrt{\frac{|\log h_{n,1}|}{n h_{n,1}^{1+2l}}} + 1 \Bigg).
\]
For $l = 0$, the previous formula follows from equation (C.6) because $\hat p_{n,0}(x)\, \hat p_{n,2}(x) - \hat p_{n,1}(x)^2 \ge b^2 c$ for every $x \in S_X$. For $l = 1$, differentiating $\hat s_{n,k}(x)$ gives a sum of terms which are all of the form
\[
\frac{\partial_x \{\hat p_{n,k_1}(x)\}\, \hat p_{n,k_2}(x)\, \hat p_{n,k_3}(x)}{\big\{ \hat p_{n,0}(x)\, \hat p_{n,2}(x) - \hat p_{n,1}(x)^2 \big\}^2},
\]
where $k_1, k_2, k_3 \in \{0, 1, 2\}$. By (C.6), each term is of the order $O_P\big( \sqrt{|\log h_{n,1}| / (n h_{n,1}^3)} + 1 \big)$. Finally, when $l = 2$, the order is given by differentiating the previous expression. We obtain a sum of different terms. Putting $\hat d_n(x) = \hat p_{n,0}(x)\, \hat p_{n,2}(x) - \hat p_{n,1}(x)^2$, these terms are of the form
\[
\frac{\partial_x^{l_1} \{\hat p_{n,k_1}(x)\}\, \partial_x^{l_2} \{\hat p_{n,k_2}(x)\}\, \hat p_{n,k_3}(x)}{\hat d_n(x)^2}, \quad \text{with } l_1 + l_2 = 2,
\qquad \text{and} \qquad
\frac{\partial_x \{\hat d_n(x)\}\, \partial_x \{\hat p_{n,k_1}(x)\}\, \hat p_{n,k_2}(x)\, \hat p_{n,k_3}(x)}{\hat d_n(x)^3}.
\]
By (C.6), the term with the highest order is
\[
\frac{\partial_x^2 \{\hat p_{n,k_1}(x)\}\, \hat p_{n,k_2}(x)\, \hat p_{n,k_3}(x)}{\hat d_n(x)^2} = O_P\Bigg( \sqrt{\frac{|\log h_{n,1}|}{n h_{n,1}^5}} + 1 \Bigg).
\]
Rate of $\partial_x^l \hat\beta_{n,k}$ for $l \in \{0, 1, 2\}$. Proposition C.1, fifth assertion, with $K$ equal to $w_k^{(l)}$, yields
\[
\sup_{x \in S_X,\, y \in \mathbb{R}} \big| \partial_x^l \hat\beta_{n,k}(y, x) \big| = O_P\Bigg( \sqrt{\frac{|\log h_{n,1}|}{n h_{n,1}^{1+2l}}} \Bigg).
\]
Rate of $\partial_x^l \hat b_{n,k}$ for $l \in \{0, 1, 2\}$. Let $F_n(y|x) = \int F(y - h_{n,2} u \,|\, x)\, L(u)\, du$. Fubini's theorem gives that $\mathrm{E}[\varphi_{h_{n,2}}(y, Y_i) \,|\, X_i = x] = F_n(y|x)$. Hence we consider
\[
\hat b_{n,k}(y, x) = \hat B_{n,1}^{(0)}(y, x) + \hat B_{n,2}^{(0)}(y, x),
\]
with
\begin{align*}
\hat B_{n,1}^{(0)}(y, x) &= n^{-1} \sum_{i=1}^n \{ F_n(y|X_i) - F(y|X_i) \}\, w_{k,h_{n,1}}(x - X_i), \\
\hat B_{n,2}^{(0)}(y, x) &= n^{-1} \sum_{i=1}^n \{ F(y|X_i) - F(y|x) - (X_i - x)\, \partial_x F(y|x) \}\, w_{k,h_{n,1}}(x - X_i).
\end{align*}
Differentiating once with respect to $x$, we find
\[
\partial_x \hat b_{n,k}(y, x) = \hat B_{n,1}^{(1)}(y, x) + \hat B_{n,2}^{(1)}(y, x) + \hat B_{n,3}^{(1)}(y, x),
\]
with
\begin{align*}
\hat B_{n,1}^{(1)}(y, x) &= n^{-1} \sum_{i=1}^n \{ F_n(y|X_i) - F(y|X_i) \}\, \partial_x \{ w_{k,h_{n,1}}(x - X_i) \}, \\
\hat B_{n,2}^{(1)}(y, x) &= n^{-1} \sum_{i=1}^n \{ F(y|X_i) - F(y|x) - (X_i - x)\, \partial_x F(y|x) \}\, \partial_x \{ w_{k,h_{n,1}}(x - X_i) \}, \\
\hat B_{n,3}^{(1)}(y, x) &= h_{n,1}\, \partial_x^2 F(y|x)\, \hat p_{n,k+1}(x).
\end{align*}
Differentiating twice with respect to $x$, we find
\[
\partial_x^2 \hat b_{n,k}(y, x) = \hat B_{n,1}^{(2)}(y, x) + \hat B_{n,2}^{(2)}(y, x) + \hat B_{n,3}^{(2)}(y, x) + \hat B_{n,4}^{(2)}(y, x),
\]
with
\begin{align*}
\hat B_{n,1}^{(2)}(y, x) &= n^{-1} \sum_{i=1}^n \{ F_n(y|X_i) - F(y|X_i) \}\, \partial_x^2 \{ w_{k,h_{n,1}}(x - X_i) \}, \\
\hat B_{n,2}^{(2)}(y, x) &= n^{-1} \sum_{i=1}^n \{ F(y|X_i) - F(y|x) - (X_i - x)\, \partial_x F(y|x) \}\, \partial_x^2 \{ w_{k,h_{n,1}}(x - X_i) \}, \\
\hat B_{n,3}^{(2)}(y, x) &= h_{n,1}\, \hat p_{n,k+1}(x)\, \partial_x^3 F(y|x), \\
\hat B_{n,4}^{(2)}(y, x) &= h_{n,1}\, \partial_x^2 F(y|x) \Bigg( n^{-1} \sum_{i=1}^n \{ (x - X_i)/h_{n,1} \}\, \partial_x \{ w_{k,h_{n,1}}(x - X_i) \} \Bigg) + h_{n,1}\, \partial_x \{ \hat p_{n,k+1}(x) \}\, \partial_x^2 F(y|x).
\end{align*}
All this results in the formula
\[
\partial_x^l \hat b_{n,k}(y, x) = \hat B_{n,1}^{(l)}(y, x) + \hat B_{n,2}^{(l)}(y, x) + \hat B_{n,3}^{(l)}(y, x) + \hat B_{n,4}^{(l)}(y, x),
\]
with, for $l \in \{0, 1, 2\}$,
\begin{align*}
\hat B_{n,1}^{(l)}(y, x) &= n^{-1} \sum_{i=1}^n \{ F_n(y|X_i) - F(y|X_i) \}\, \partial_x^l \{ w_{k,h_{n,1}}(x - X_i) \}, \\
\hat B_{n,2}^{(l)}(y, x) &= n^{-1} \sum_{i=1}^n \{ F(y|X_i) - F(y|x) - (X_i - x)\, \partial_x F(y|x) \}\, \partial_x^l \{ w_{k,h_{n,1}}(x - X_i) \},
\end{align*}
whereas $\hat B_{n,3}^{(0)}(y, x) = 0$ and $\hat B_{n,3}^{(l)}(y, x) = h_{n,1}\, \hat p_{n,k+1}(x)\, \partial_x^{l+1} F(y|x)$ for $l \in \{1, 2\}$, and $\hat B_{n,4}^{(0)}(y, x) = \hat B_{n,4}^{(1)}(y, x) = 0$. Assumption (G1) implies that
\[
\big| F(y + u \,|\, x) - F(y|x) - u\, \partial_y F(y|x) \big| \le M \frac{u^2}{2}.
\]
By (G3) and (C.3), since $\int u\, L(u)\, du = 0$, we have
\begin{align*}
\sup_{x \in S_X,\, y \in \mathbb{R}} \big| \hat B_{n,1}^{(l)}(y, x) \big|
&\le h_{n,1}^{-l} \sup_{x \in S_X} \{ \hat r_{n,k}^{(l)}(x) \} \sup_{x \in S_X,\, y \in \mathbb{R}} | F_n(y|x) - F(y|x) | \\
&= h_{n,1}^{-l} \sup_{x \in S_X} \{ \hat r_{n,k}^{(l)}(x) \} \sup_{x \in S_X,\, y \in \mathbb{R}} \Big| \int \{ F(y - h_{n,2} u \,|\, x) - F(y|x) + h_{n,2} u\, \partial_y F(y|x) \}\, L(u)\, du \Big| \\
&\le h_{n,1}^{-l} h_{n,2}^2 \sup_{x \in S_X} \{ \hat r_{n,k}^{(l)}(x) \}\, \frac{M}{2} \int u^2 L(u)\, du \\
&= O_P\Bigg( h_{n,1}^{-l} h_{n,2}^2 \sqrt{\frac{|\log h_{n,1}|}{n h_{n,1}}} + h_{n,1}^{-l} h_{n,2}^2 \Bigg) = O_P\big( h_{n,1}^{-l} h_{n,2}^2 \big).
\end{align*}
By (G1), one has
\[
\big| F(y \,|\, x + u) - F(y|x) - u\, \partial_x F(y|x) \big| \le M \frac{u^2}{2}.
\]
Using (C.3), it follows that
\begin{align*}
\sup_{x \in S_X,\, y \in \mathbb{R}} \big| \hat B_{n,2}^{(l)}(y, x) \big|
&\le h_{n,1}^{-l} \sup_{x \in S_X} \{ \hat r_{n,k}^{(l)}(x) \} \sup_{|x - \tilde x| < h_{n,1},\, y \in \mathbb{R}} | F(y|\tilde x) - F(y|x) - (\tilde x - x)\, \partial_x F(y|x) | \\
&= O_P\Bigg( h_{n,1}^{2-l} \sqrt{\frac{|\log h_{n,1}|}{n h_{n,1}}} + h_{n,1}^{2-l} \Bigg) = O_P\big( h_{n,1}^{2-l} \big).
\end{align*}
Next, by (C.6), we have $\hat B_{n,3}^{(l)}(y, x) = O_P\big( h_{n,1} \sqrt{|\log h_{n,1}| / (n h_{n,1})} + h_{n,1} \big) = O_P(h_{n,1})$ for $l \in \{1, 2\}$. Finally, because $\partial_u \{ u^{k+1} K(u) \} = u^k K(u) + u\, \partial_u \{ u^k K(u) \}$, we have
\begin{align*}
\hat B_{n,4}^{(2)}(y, x)
&= h_{n,1}\, \partial_x^2 \{ F(y|x) \} \Bigg( n^{-1} \sum_{i=1}^n \{ (x - X_i)/h_{n,1} \}\, \partial_x \{ w_{k,h_{n,1}}(x - X_i) \} + \partial_x \{ \hat p_{n,k+1}(x) \} \Bigg) \\
&= h_{n,1}\, \partial_x^2 \{ F(y|x) \} \big( 2\, \partial_x \{ \hat p_{n,k+1}(x) \} - \hat p_{n,k}(x) \big) \\
&= O_P\Bigg( h_{n,1} \sqrt{\frac{|\log h_{n,1}|}{n h_{n,1}^3}} + h_{n,1} \Bigg) = O_P(h_{n,1}).
\end{align*}
Putting all this together yields
\[
\sup_{x \in S_X,\, y \in \mathbb{R}} \big| \partial_x^l \hat b_{n,k}(y, x) \big| = O_P\big( h_{n,1}^{2-l} + h_{n,1}^{-l} h_{n,2}^2 \big), \qquad l \in \{0, 1, 2\}.
\]

Rate of $\partial_y^{l_1} \partial_x^{l_2} \hat\beta_{n,k}$ for $(l_1, l_2) \in \{(1,0), (1,1), (2,0)\}$. Start by differentiating under the expectation to obtain
\begin{align*}
\partial_y^{l_1} \partial_x^{l_2} \hat\beta_{n,k}
&= n^{-1} \sum_{i=1}^n \big\{ \partial_y^{l_1} \{ \varphi_{h_{n,2}}(y, Y_i) \} - \mathrm{E}\big[ \partial_y^{l_1} \{ \varphi_{h_{n,2}}(y, Y_i) \} \,\big|\, X_i \big] \big\}\, \partial_x^{l_2} \{ w_{k,h_{n,1}}(x - X_i) \} \\
&= \big( n h_{n,1}^{l_2} h_{n,2}^{l_1 - 1} \big)^{-1} \sum_{i=1}^n \big\{ L_{h_{n,2}}^{(l_1 - 1)}(y - Y_i) - \mathrm{E}\big[ L_{h_{n,2}}^{(l_1 - 1)}(y - Y_i) \,\big|\, X_i \big] \big\}\, w_{k,h_{n,1}}^{(l_2)}(x - X_i),
\end{align*}
with $L_h^{(l)}(u) = h^{-1} L^{(l)}(u/h)$. By Condition (G3), the functions $L$ and $L^{(1)}$ are bounded functions of bounded variation. Applying Proposition C.1, third assertion, with $L$ equal to $L^{(l_1 - 1)}$ and $K$ equal to $w_k^{(l_2)}$, for $(l_1, l_2) \in \{(1,0), (1,1), (2,0)\}$, gives
\[
\sup_{x \in S_X,\, y \in \mathbb{R}} \big| \partial_y^{l_1} \partial_x^{l_2} \hat\beta_{n,k}(y, x) \big| = O_P\Bigg( \sqrt{\frac{|\log h_{n,1} h_{n,2}|}{n h_{n,1}^{1+2 l_2} h_{n,2}^{1+2(l_1 - 1)}}} \Bigg).
\]
Rate of $\partial_y^{l_1} \partial_x^{l_2} \hat b_{n,k}$ for $(l_1, l_2) \in \{(1,0), (1,1), (2,0)\}$. We mimic here the approach taken when treating $\partial_x^l \hat b_{n,k}$. In the following, terms denoted by $\hat B_n^{(l_1, l_2)}(y, x)$ are related to the derivatives of $\hat b_{n,k}$ of order $l_1$ (resp. $l_2$) with respect to $y$ (resp. $x$). Differentiating $l_1$ times with respect to $y$ produces
\[
\partial_y^{l_1} \hat b_{n,k}(y, x) = \hat B_{n,1}^{(l_1, 0)}(y, x) + \hat B_{n,2}^{(l_1, 0)}(y, x),
\]
with
\begin{align*}
\hat B_{n,1}^{(l_1, 0)}(y, x) &= n^{-1} \sum_{i=1}^n \big\{ \partial_y^{l_1} F_n(y|X_i) - \partial_y^{l_1} F(y|X_i) \big\}\, w_{k,h_{n,1}}(x - X_i), \\
\hat B_{n,2}^{(l_1, 0)}(y, x) &= n^{-1} \sum_{i=1}^n \big\{ \partial_y^{l_1} F(y|X_i) - \partial_y^{l_1} F(y|x) - (X_i - x)\, \partial_y^{l_1} \partial_x F(y|x) \big\}\, w_{k,h_{n,1}}(x - X_i).
\end{align*}
Differentiating with respect to $x$ gives
\[
\partial_x \partial_y \hat b_{n,k}(y, x) = \hat B_{n,1}^{(1,1)}(y, x) + \hat B_{n,2}^{(1,1)}(y, x) + \hat B_{n,3}^{(1,1)}(y, x),
\]
with
\begin{align*}
\hat B_{n,1}^{(1,1)}(y, x) &= n^{-1} \sum_{i=1}^n \big\{ \partial_y F_n(y|X_i) - \partial_y F(y|X_i) \big\}\, \partial_x \{ w_{k,h_{n,1}}(x - X_i) \}, \\
\hat B_{n,2}^{(1,1)}(y, x) &= n^{-1} \sum_{i=1}^n \big\{ \partial_y F(y|X_i) - \partial_y F(y|x) - (X_i - x)\, \partial_y \partial_x F(y|x) \big\}\, \partial_x \{ w_{k,h_{n,1}}(x - X_i) \}, \\
\hat B_{n,3}^{(1,1)}(y, x) &= h_{n,1}\, \hat p_{n,k+1}(x)\, \partial_y \partial_x^2 F(y|x).
\end{align*}
By (G1) we have
\[
\big| \partial_y F(y + u \,|\, x) - \partial_y F(y|x) - u\, \partial_y^2 F(y|x) \big| \le \frac{M u^2}{2}.
\]
By (G3) and (C.3), since $\int u\, L(u)\, du = 0$, we have
\begin{align*}
\sup_{x \in S_X,\, y \in \mathbb{R}} \big| \hat B_{n,1}^{(1, l_2)}(y, x) \big|
&\le h_{n,1}^{-l_2} \sup_{x \in S_X} \{ \hat r_{n,k}^{(l_2)}(x) \} \sup_{x \in S_X,\, y \in \mathbb{R}} \big| \partial_y F_n(y|x) - \partial_y F(y|x) \big| \\
&= h_{n,1}^{-l_2} \sup_{x \in S_X} \{ \hat r_{n,k}^{(l_2)}(x) \} \sup_{x \in S_X,\, y \in \mathbb{R}} \Big| \int \big\{ \partial_y F(y - h_{n,2} u \,|\, x) - \partial_y F(y|x) + h_{n,2} u\, \partial_y^2 F(y|x) \big\}\, L(u)\, du \Big| \\
&\le h_{n,2}^2 h_{n,1}^{-l_2} \sup_{x \in S_X} \{ \hat r_{n,k}^{(l_2)}(x) \}\, \frac{M}{2} \int u^2 L(u)\, du \\
&= O_P\Bigg( h_{n,2}^2 h_{n,1}^{-l_2} \sqrt{\frac{|\log h_{n,1}|}{n h_{n,1}}} + h_{n,2}^2 h_{n,1}^{-l_2} \Bigg) = O_P\big( h_{n,2}^2 h_{n,1}^{-l_2} \big).
\end{align*}
By (G1) we have
\[
\big| \partial_y^2 F(y + u \,|\, x) - \partial_y^2 F(y|x) - u\, \partial_y^3 F(y|x) \big| \le M \frac{|u|^{1+\delta}}{1+\delta}.
\]
Then, by (C.3), it holds that
\begin{align*}
\sup_{x \in S_X,\, y \in \mathbb{R}} \big| \hat B_{n,1}^{(2,0)}(y, x) \big|
&\le \sup_{x \in S_X} \{ \hat r_{n,k}(x) \} \sup_{x \in S_X,\, y \in \mathbb{R}} \big| \partial_y^2 F_n(y|x) - \partial_y^2 F(y|x) \big| \\
&= \sup_{x \in S_X} \{ \hat r_{n,k}(x) \} \sup_{x \in S_X,\, y \in \mathbb{R}} \Big| \int \big\{ \partial_y^2 F(y - h_{n,2} u \,|\, x) - \partial_y^2 F(y|x) + h_{n,2} u\, \partial_y^3 F(y|x) \big\}\, L(u)\, du \Big| \\
&\le h_{n,2}^{1+\delta} \sup_{x \in S_X} \{ \hat r_{n,k}(x) \}\, \frac{M}{1+\delta} \int |u|^{1+\delta} L(u)\, du \\
&= O_P\Bigg( h_{n,2}^{1+\delta} \sqrt{\frac{|\log h_{n,1}|}{n h_{n,1}}} + h_{n,2}^{1+\delta} \Bigg) = O_P\big( h_{n,2}^{1+\delta} \big).
\end{align*}
By (G1), one has
\[
\big| \partial_y F(y \,|\, x + u) - \partial_y F(y|x) - u\, \partial_y \partial_x F(y|x) \big| \le M \frac{u^2}{2}.
\]
Using (C.3), it follows that
\begin{align*}
\sup_{x \in S_X,\, y \in \mathbb{R}} \big| \hat B_{n,2}^{(1, l_2)}(y, x) \big|
&\le h_{n,1}^{-l_2} \sup_{x \in S_X} \{ \hat r_{n,k}^{(l_2)}(x) \} \sup_{|x - \tilde x| < h_{n,1},\, y \in \mathbb{R}} \big| \partial_y F(y|\tilde x) - \partial_y F(y|x) - (\tilde x - x)\, \partial_y \partial_x F(y|x) \big| \\
&= O_P\Bigg( h_{n,1}^{2 - l_2} \sqrt{\frac{|\log h_{n,1}|}{n h_{n,1}}} + h_{n,1}^{2 - l_2} \Bigg) = O_P\big( h_{n,1}^{2 - l_2} \big).
\end{align*}
By (G1), one has
\[
\big| \partial_y^2 F(y \,|\, x + u) - \partial_y^2 F(y|x) - u\, \partial_y^2 \partial_x F(y|x) \big| \le M \frac{|u|^{1+\delta}}{1+\delta}.
\]
Using (C.3), it then follows that
\begin{align*}
\sup_{x \in S_X,\, y \in \mathbb{R}} \big| \hat B_{n,2}^{(2,0)}(y, x) \big|
&\le \sup_{x \in S_X} \{ \hat r_{n,k}(x) \} \sup_{|x - \tilde x| < h_{n,1},\, y \in \mathbb{R}} \big| \partial_y^2 F(y|\tilde x) - \partial_y^2 F(y|x) - (\tilde x - x)\, \partial_y^2 \partial_x F(y|x) \big| \\
&= O_P\Bigg( h_{n,1}^{1+\delta} \sqrt{\frac{|\log h_{n,1}|}{n h_{n,1}}} + h_{n,1}^{1+\delta} \Bigg) = O_P\big( h_{n,1}^{1+\delta} \big).
\end{align*}
Finally, by (C.6), we have $\hat B_{n,3}^{(1,1)}(y, x) = O_P\big( h_{n,1} \sqrt{|\log h_{n,1}| / (n h_{n,1})} + h_{n,1} \big) = O_P(h_{n,1})$. Putting all this together yields
\begin{align*}
\sup_{x \in S_X,\, y \in \mathbb{R}} \big| \partial_y \hat b_{n,k}(y, x) \big| &= O_P\big( h_{n,1}^2 + h_{n,2}^2 \big), \\
\sup_{x \in S_X,\, y \in \mathbb{R}} \big| \partial_y^2 \hat b_{n,k}(y, x) \big| &= O_P\big( h_{n,1}^{1+\delta} + h_{n,2}^{1+\delta} \big), \\
\sup_{x \in S_X,\, y \in \mathbb{R}} \big| \partial_y \partial_x \hat b_{n,k}(y, x) \big| &= O_P\big( h_{n,2}^2 h_{n,1}^{-1} + h_{n,1} \big).
\end{align*}
Back to equation (C.2). So far, we have obtained uniform convergence rates in probability for $\hat\beta_{n,k}$, $\hat b_{n,k}$ and $\hat s_{n,k}$ (with $k \in \{0, 1\}$ for $\hat\beta_{n,k}$ and $\hat b_{n,k}$, and $k \in \{1, 2\}$ for $\hat s_{n,k}$), and their first and second-order partial derivatives. In combination with equation (C.2) and hypothesis (H3), these yield the convergence rates for $\hat F_n(y|x) - F(y|x)$ stated in the proposition. For the benefit of the reader, we provide here the details. For $l \in \{0, 1, 2\}$, we have, since $|\log h_{n,1}| / (n h_{n,1}^3) = o(1)$ by (G4),
\[
\sup_{x \in S_X} \big| \partial_x^l \hat s_{n,k}(x) \big| =
\begin{cases}
O_P(1) & \text{if } l \in \{0, 1\}, \\[4pt]
O_P\Big( \sqrt{|\log h_{n,1}| / (n h_{n,1}^5)} + 1 \Big) & \text{if } l = 2,
\end{cases}
\]
\[
\sup_{x \in S_X,\, y \in \mathbb{R}} \big| \partial_x^l \hat\beta_{n,k}(y, x) \big| = O_P\Bigg( \sqrt{\frac{|\log h_{n,1}|}{n h_{n,1}^{1+2l}}} \Bigg),
\qquad
\sup_{x \in S_X,\, y \in \mathbb{R}} \big| \partial_x^l \hat b_{n,k}(y, x) \big| = O_P\big( h_{n,1}^{2-l} + h_{n,1}^{-l} h_{n,2}^2 \big).
\]
Moreover,
\begin{align*}
\sup_{x \in S_X,\, y \in \mathbb{R}} \big| \partial_y \hat b_{n,k}(y, x) \big| &= O_P\big( h_{n,1}^2 + h_{n,2}^2 \big), \\
\sup_{x \in S_X,\, y \in \mathbb{R}} \big| \partial_y^2 \hat b_{n,k}(y, x) \big| &= O_P\big( h_{n,1}^{1+\delta} + h_{n,2}^{1+\delta} \big), \\
\sup_{x \in S_X,\, y \in \mathbb{R}} \big| \partial_y \partial_x \hat b_{n,k}(y, x) \big| &= O_P\big( h_{n,2}^2 h_{n,1}^{-1} + h_{n,1} \big).
\end{align*}
Finally, for $(l_1, l_2)$ equal to $(1,0)$, $(1,1)$ or $(2,0)$,
\[
\sup_{x \in S_X,\, y \in \mathbb{R}} \big| \partial_y^{l_1} \partial_x^{l_2} \hat\beta_{n,k}(y, x) \big| = O_P\Bigg( \sqrt{\frac{|\log h_{n,1} h_{n,2}|}{n h_{n,1}^{1+2 l_2} h_{n,2}^{1+2(l_1 - 1)}}} \Bigg).
\]
Using equation (C.2), we find the following uniform convergence rates for $\hat F_n(y|x) - F(y|x)$ and its derivatives. First, differentiating both sides of equation (C.2) $l \in \{0, 1, 2\}$ times with respect to $x$, we have, using $|\log h_{n,1}| / (n h_{n,1}^3) = o(1)$,
\[
\sup_{x \in S_X,\, y \in \mathbb{R}} \big| \partial_x^l \hat F_n(y|x) - \partial_x^l F(y|x) \big| = O_P\Bigg( \sqrt{\frac{|\log h_{n,1}|}{n h_{n,1}^{1+2l}}} + h_{n,1}^{2-l} + h_{n,1}^{-l} h_{n,2}^2 \Bigg).
\]
Second, differentiating once or twice with respect to $y$, we find
\[
\sup_{x \in S_X,\, y \in \mathbb{R}} \big| \partial_y \hat F_n(y|x) - \partial_y F(y|x) \big| = O_P\Bigg( \sqrt{\frac{|\log h_{n,1} h_{n,2}|}{n h_{n,1} h_{n,2}}} + h_{n,1}^2 + h_{n,2}^2 \Bigg),
\]
\[
\sup_{x \in S_X,\, y \in \mathbb{R}} \big| \partial_y^2 \hat F_n(y|x) - \partial_y^2 F(y|x) \big| = O_P\Bigg( \sqrt{\frac{|\log h_{n,1} h_{n,2}|}{n h_{n,1} h_{n,2}^3}} + h_{n,1}^{1+\delta} + h_{n,2}^{1+\delta} \Bigg).
\]
Finally, the rate for the mixed second-order partial derivative is
\[
\sup_{x \in S_X,\, y \in \mathbb{R}} \big| \partial_y \partial_x \hat F_n(y|x) - \partial_y \partial_x F(y|x) \big| = O_P\Bigg( \sqrt{\frac{|\log h_{n,1} h_{n,2}|}{n h_{n,1}^3 h_{n,2}}} + h_{n,2}^2 h_{n,1}^{-1} + h_{n,1} \Bigg).
\]
This completes the proof of the proposition.
C.2.3  Proof of Proposition 4.3

The first statement is a direct consequence of Fact 1. On the event corresponding to $E_{n,j,1}$, by the mean-value theorem, we find
\[
\big| \hat F_n^-(u|x) - F^-(u|x) \big| = \Big| F^-\big( F( \hat F_n^-(u|x) \,|\, x ) \,\big|\, x \big) - F^-(u|x) \Big| \le \frac{1}{b} \big| F( \hat F_n^-(u|x) \,|\, x ) - u \big|.
\]
Conclude using the first statement of the proposition to obtain a uniform rate of convergence of $\sqrt{|\log h_{n,1}| / (n h_{n,1})} + h_{n,1}^2 + h_{n,2}^2$.
C.2.4  Proof of Proposition 4.4

Recall that $\delta_1 = \min(\alpha/2, \delta)$. We establish the occurrence with high probability of the following uniform bounds: for certain constants $M_1'$ and $M_1''$, to be determined later,
\[
\sup_{x \in S_X,\, u \in [\gamma, 1-\gamma]} \big| \partial_x \hat F_n^-(u|x) \big| \le M_1', \tag{C.7}
\]
\[
\sup_{x \ne x',\, u \in [\gamma, 1-\gamma]} \frac{\big| \partial_x \hat F_n^-(u|x) - \partial_x \hat F_n^-(u|x') \big|}{|x - x'|^{\delta_1}} \le M_1''. \tag{C.8}
\]
The constant $M_1$ of the statement will be taken equal to the maximum between $M_1'$ and $M_1''$.

By Lemma D.3, we have, almost surely, $\hat F_n( \hat F_n^-(u|x) \,|\, x ) = u$ for $u \in [\gamma, 1-\gamma]$. Differentiating both sides of this equality with respect to $x$ produces the identity
\[
\partial_x \hat F_n^-(u|x) = - \frac{\partial_x \hat F_n(y|x)}{\hat f_n(y|x)} \bigg|_{y = \hat F_n^-(u|x)}.
\]
Using (G1), we have
\begin{align*}
\big| \hat f_n( \hat F_n^-(u|x) \,|\, x ) - f( F^-(u|x) \,|\, x ) \big|
&\le \big| \hat f_n( \hat F_n^-(u|x) \,|\, x ) - f( \hat F_n^-(u|x) \,|\, x ) \big| + M \big| \hat F_n^-(u|x) - F^-(u|x) \big| \\
&\le \sup_{x' \in S_X,\, y \in \mathbb{R}} \big| \hat f_n(y|x') - f(y|x') \big| + M \sup_{x' \in S_X,\, u \in [\gamma, 1-\gamma]} \big| \hat F_n^-(u|x') - F^-(u|x') \big|.
\end{align*}
Invoking Proposition 4.2 (using $n h_{n,1}^2 / |\log h_{n,1}| \to \infty$) and Proposition 4.3, we get
\[
\sup_{x \in S_X,\, u \in [\gamma, 1-\gamma]} \big| \hat f_n( \hat F_n^-(u|x) \,|\, x ) - f( F^-(u|x) \,|\, x ) \big| = o_P(1). \tag{C.9}
\]
Now we introduce $\hat g_n(y, x) = \partial_x \hat F_n(y|x)$ and $g(y, x) = \partial_x F(y|x)$. In a similar way, invoking Proposition 4.2 (using $n h_{n,1}^3 / |\log h_{n,1}| \to \infty$ and $h_{n,1}^{-1} h_{n,2}^2 \to 0$) and Proposition 4.3, we find
\[
\sup_{x \in S_X,\, u \in [\gamma, 1-\gamma]} \big| \hat g_n( \hat F_n^-(u|x), x ) - g( F^-(u|x), x ) \big| = o_P(1). \tag{C.10}
\]
It follows that, with probability going to 1,
\[
\sup_{x \in S_X,\, u \in [\gamma, 1-\gamma]} \big| \partial_x \hat F_n^-(u|x) \big| \le \frac{2M}{b_\gamma / 2}.
\]
Hence we have shown (C.7) with $M_1' = 2M / (b_\gamma / 2)$. To show (C.8), we start by proving that (C.8) holds true whenever the maps $(y, x) \mapsto \partial_x \hat F_n(y|x)$ and $(y, x) \mapsto \hat f_n(y|x)$ are both $\delta_1$-Hölder with constant $2M$, i.e., when
\[
\sup_{x \ne x',\, y \ne y'} \frac{\big| \partial_x \hat F_n(y|x) - \partial_x \hat F_n(y'|x') \big|}{|(y - y', x - x')|^{\delta_1}} \le 2M, \tag{C.11}
\]
\[
\sup_{x \ne x',\, y \ne y'} \frac{\big| \hat f_n(y|x) - \hat f_n(y'|x') \big|}{|(y - y', x - x')|^{\delta_1}} \le 2M. \tag{C.12}
\]
Abbreviate $\hat F_n^-(u|x)$ to $\xi_{u,x}$, and write
\[
\partial_x \hat F_n^-(u|x) - \partial_x \hat F_n^-(u|x') = \frac{\hat g_n(\xi_{u,x'}, x') - \hat g_n(\xi_{u,x}, x)}{\hat f_n(\xi_{u,x'} \,|\, x')} + \frac{\hat g_n(\xi_{u,x}, x)}{\hat f_n(\xi_{u,x'} \,|\, x')\, \hat f_n(\xi_{u,x} \,|\, x)} \big( \hat f_n(\xi_{u,x} \,|\, x) - \hat f_n(\xi_{u,x'} \,|\, x') \big).
\]
Use (C.9) and (C.10) to obtain, with probability going to 1,
\[
\big| \partial_x \hat F_n^-(u|x) - \partial_x \hat F_n^-(u|x') \big| \le \frac{1}{b_\gamma / 2} \big| \hat g_n(\xi_{u,x'}, x') - \hat g_n(\xi_{u,x}, x) \big| + \frac{2M}{(b_\gamma / 2)^2} \big| \hat f_n(\xi_{u,x} \,|\, x) - \hat f_n(\xi_{u,x'} \,|\, x') \big|.
\]
Use the Hölder properties (C.11) and (C.12), then the equivalence between $\ell_p$ norms, $p > 0$, and finally (C.7), to get
\begin{align*}
\big| \partial_x \hat F_n^-(u|x) - \partial_x \hat F_n^-(u|x') \big|
&\le \frac{2M}{b_\gamma / 2} \big| (\xi_{u,x'} - \xi_{u,x},\, x' - x) \big|^{\delta_1} + \frac{(2M)^2}{(b_\gamma / 2)^2} \big| (\xi_{u,x} - \xi_{u,x'},\, x - x') \big|^{\delta_1} \\
&= \frac{2M}{b_\gamma / 2} \Big( 1 + \frac{2M}{b_\gamma / 2} \Big) \big| (\xi_{u,x'} - \xi_{u,x},\, x' - x) \big|^{\delta_1} \\
&\le \frac{2M}{b_\gamma / 2} \Big( 1 + \frac{2M}{b_\gamma / 2} \Big) M_{\delta_1} \big( |\xi_{u,x'} - \xi_{u,x}|^{\delta_1} + |x' - x|^{\delta_1} \big) \\
&\le \frac{2M}{b_\gamma / 2} \Big( 1 + \frac{2M}{b_\gamma / 2} \Big) M_{\delta_1} \big( 1 + (M_1')^{\delta_1} \big) |x' - x|^{\delta_1},
\end{align*}
for some positive constant $M_{\delta_1}$.
To terminate the proof, we just need to show (C.11) and (C.12). We only focus on the former because the treatment of the latter results in the same approach involving slightly weaker conditions. Because the function spaces $C_{s,M}(S_X)$, $s \in \mathbb{R}$, are decreasing sets in $s$, (G1) still holds with $\delta_1 = \min(\alpha/2, \delta)$ in place of $\delta$. We shall apply Proposition 4.2 with $\delta_1$ in place of $\delta$. Write $\hat m_n(y, x) = \partial_x \{ \hat F_n(y|x) - F(y|x) \}$. We distinguish between the case that $|(y - y', x - x')|$ is smaller than $h_{n,1}$ or not.

• First, suppose $|(y - y', x - x')| \le h_{n,1}$. By the mean-value theorem and because of the rates associated to $\partial_x^2$ and $\partial_x \partial_y$ in Proposition 4.2, we have
\begin{align*}
\frac{|\hat m_n(y, x) - \hat m_n(y', x')|}{|(y - y', x - x')|^{\delta_1}}
&\le \sup_{x \in S_X,\, y \in \mathbb{R}} |\nabla \hat m_n(y, x)|\, |(y - y', x - x')|^{1 - \delta_1}
\le \sup_{x \in S_X,\, y \in \mathbb{R}} |\nabla \hat m_n(y, x)|\, h_{n,1}^{1 - \delta_1} \\
&= O_P\Bigg( \sqrt{\frac{|\log h_{n,1}|}{n h_{n,1}^{3 + 2\delta_1}}} + h_{n,1}^{1 - \delta_1} + h_{n,1}^{-1 - \delta_1} h_{n,2}^2 + \sqrt{\frac{|\log h_{n,1} h_{n,2}|}{n h_{n,1}^{1 + 2\delta_1} h_{n,2}}} + h_{n,1}^{2 - \delta_1} + h_{n,1}^{-\delta_1} h_{n,2}^2 \Bigg).
\end{align*}
Because $\delta_1 = \min(\alpha/2, \delta)$, (G4'') still holds with $2\delta_1$ in place of $\alpha$. This implies that the previous bound goes to 0.

• Second, suppose $|(y - y', x - x')| > h_{n,1}$. Using the rates associated to $\partial_x$ in Proposition 4.2, we find
\[
\frac{|\hat m_n(y, x) - \hat m_n(y', x')|}{|(y - y', x - x')|^{\delta_1}}
\le 2 \sup_{x \in S_X,\, y \in \mathbb{R}} |\hat m_n(y, x)|\, h_{n,1}^{-\delta_1}
= O_P\Bigg( \sqrt{\frac{|\log h_{n,1}|}{n h_{n,1}^{3 + 2\delta_1}}} + h_{n,1}^{1 - \delta_1} + h_{n,1}^{-1 - \delta_1} h_{n,2}^2 \Bigg),
\]
which also goes to 0 by (G4'').

As a consequence of (G4'), the previous bounds converge to 0. Now because $\partial_x F$ belongs to $C_{\delta_1, M}(\mathbb{R} \times S_X)$, we have, with probability going to 1,
\[
\frac{\big| \partial_x \hat F_n(y|x) - \partial_x \hat F_n(y'|x') \big|}{|(y - y', x - x')|^{\delta_1}}
\le \frac{|\hat m_n(y, x) - \hat m_n(y', x')|}{|(y - y', x - x')|^{\delta_1}} + \frac{|\partial_x F(y|x) - \partial_x F(y'|x')|}{|(y - y', x - x')|^{\delta_1}} \le 2M,
\]
as required.
Appendix D  Analytical results
Lemma D.1. If $C$ is a bivariate copula satisfying (G2), then for all $\gamma \in (0, 1/2)$, all $u \in [\gamma, 1-\gamma]^2$ and all $v \in [0,1]^2$,
\[
\Big| C(v) - C(u) - \sum_{j=1}^2 (v_j - u_j)\, \dot C_j(u) \Big| \le \frac{4\kappa}{\gamma} \sum_{j=1}^2 (v_j - u_j)^2.
\]
In the lemma, the point v is allowed to lie anywhere in [0, 1]2 , even on the boundary or
at the corner. The ‘anchor point’ u, however, must be at a distance at least γ away from the
boundary.
Proof. Fix $\gamma \in (0, 1/2)$, $u \in [\gamma, 1-\gamma]^2$ and $v \in [0,1]^2$. For $t \in [0,1]$, put
\[
w(t) = u + t(v - u).
\]
Note that $w(t) \in (0,1)^2$ for $t \in [0,1)$. The function $t \mapsto C(w(t))$ is continuous on $[0,1]$ and is continuously differentiable on $(0,1)$. By the fundamental theorem of calculus,
\[
C(v) - C(u) = C(w(1)) - C(w(0)) = \int_0^1 \frac{dC(w(t))}{dt}\, dt = \int_0^1 \sum_{j=1}^2 (v_j - u_j)\, \dot C_j(w(t))\, dt.
\]
It follows that
\[
C(v) - C(u) - \sum_{j=1}^2 (v_j - u_j)\, \dot C_j(u) = \sum_{j=1}^2 (v_j - u_j) \int_0^1 \big\{ \dot C_j(w(t)) - \dot C_j(u) \big\}\, dt.
\]
Fix $t \in [0,1)$ and $j \in \{1,2\}$. Note that
\[
u + s(w(t) - u) = u + st(v - u) = w(st).
\]
The function $s \mapsto \dot C_j(w(st))$ is continuous on $[0,1]$ and continuously differentiable on $(0,1)$. By the fundamental theorem of calculus,
\[
\dot C_j(w(t)) - \dot C_j(u) = \int_0^1 \frac{d \dot C_j(w(st))}{ds}\, ds = \sum_{k=1}^2 t (v_k - u_k) \int_0^1 \ddot C_{jk}(w(st))\, ds.
\]
We obtain
\[
C(v) - C(u) - \sum_{j=1}^2 (v_j - u_j)\, \dot C_j(u) = \sum_{j=1}^2 \sum_{k=1}^2 (v_j - u_j)(v_k - u_k) \int_0^1 \!\! \int_0^1 \ddot C_{jk}(w(st))\, t\, dt\, ds.
\]
Suppose we find a positive constant $L$ such that, for all $u \in [\gamma, 1-\gamma]^2$, $v \in [0,1]^2$ and $j, k \in \{1,2\}$,
\[
\int_0^1 \!\! \int_0^1 \big| \ddot C_{jk}(w(st)) \big|\, t\, dt\, ds \le L. \tag{D.1}
\]
Then
\[
\Big| C(v) - C(u) - \sum_{j=1}^2 (v_j - u_j)\, \dot C_j(u) \Big| \le L \sum_{j=1}^2 \sum_{k=1}^2 |v_j - u_j|\, |v_k - u_k| = L \big( |v_1 - u_1| + |v_2 - u_2| \big)^2 \le 2L \big( (v_1 - u_1)^2 + (v_2 - u_2)^2 \big),
\]
which is the inequality stated in the lemma with $2L$ in place of $4\kappa/\gamma$. It remains to show (D.1). By Condition (G2),
\[
\big| \ddot C_{jk}(w(st)) \big| \le \kappa \big\{ w_j (1 - w_j)\, w_k (1 - w_k) \big\}^{-1/2},
\]
where $w_j = w_j(st)$ is the $j$-th coordinate of $w(st)$. Now
\[
w_j (1 - w_j) = \big\{ u_j + st (v_j - u_j) \big\} \big\{ 1 - u_j - st (v_j - u_j) \big\}.
\]
For fixed $st \in [0,1)$, this expression is concave as a function of $(u_j, v_j) \in [\gamma, 1-\gamma] \times [0,1]$. Hence it attains its minimum for $(u_j, v_j) \in \{\gamma, 1-\gamma\} \times \{0,1\}$. In each of the four possible cases, we find
\[
w_j (1 - w_j) \ge \tfrac{1}{2} (1 - st)\, \gamma.
\]
We obtain
\[
\big| \ddot C_{jk}(w(st)) \big| \le \frac{2\kappa}{(1 - st)\, \gamma}.
\]
As a consequence,
\[
\int_0^1 \!\! \int_0^1 \big| \ddot C_{jk}(w(st)) \big|\, t\, dt\, ds \le \frac{2\kappa}{\gamma} \int_0^1 \!\! \int_0^1 \frac{t}{1 - st}\, dt\, ds = \frac{2\kappa}{\gamma},
\]
by direct calculation of the double integral.
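The direct calculation of the double integral can be checked numerically: the inner integral over $s$ gives $-\log(1-t)$, whose integral over $[0,1]$ equals 1, so the bound is $2\kappa/\gamma$. The midpoint Riemann sum below handles the integrable singularity at $s = t = 1$ without issue.

```python
import numpy as np

# \int_0^1 \int_0^1 t / (1 - s t) dt ds = \int_0^1 -log(1 - t) dt = 1
m = 2000
s = (np.arange(m) + 0.5) / m          # midpoints in (0, 1)
t = (np.arange(m) + 0.5) / m
S, T = np.meshgrid(s, t)
integral = np.mean(T / (1.0 - S * T))  # midpoint rule on the unit square
```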
We quote Lemma 4.3 in Segers (2015), which is a variant of “Vervaat's lemma”, i.e., the functional delta method for the mapping sending a monotone function to its inverse:

Lemma D.2. Let $F : \mathbb{R} \to [0,1]$ be a continuous distribution function. Let $0 < r_n \to \infty$ and let $\hat F_n$ be a sequence of random distribution functions such that, in $\ell^\infty(\mathbb{R})$,
\[
r_n (\hat F_n - F) \rightsquigarrow \beta \circ F, \qquad n \to \infty,
\]
where $\beta$ is a random element of $\ell^\infty([0,1])$ with continuous trajectories. Then $\beta(0) = \beta(1) = 0$ almost surely and
\[
\sup_{u \in [0,1]} \Big( \big| r_n \{ F( \hat F_n^-(u) ) - u \} \big| + \big| r_n \{ \hat F_n( F^-(u) ) - u \} \big| \Big) = o_P(1).
\]
As a consequence, in $\ell^\infty([0,1])$,
\[
\big( r_n \{ F( \hat F_n^-(u) ) - u \} \big)_{u \in [0,1]} \rightsquigarrow -\beta, \qquad n \to \infty.
\]
Lemma D.3. Let $F$ be a continuous function such that $\lim_{y \to -\infty} F(y) = 0$ and $\lim_{y \to +\infty} F(y) = 1$. Then for any $u \in (0,1)$, we have $F(F^-(u)) = u$.

Proof. Let $y_0 = F^-(u)$. By assumption, the set $\{y \in \mathbb{R} : F(y) \ge u\}$ is non-empty and therefore $-\infty < y_0 < +\infty$. By definition of the quantile transformation, for any $y < y_0$, it holds that $F(y) < u$. Now using the continuity of $F$ gives $F(y_0) \le u$. Conclude by noting that we always have $F(y_0) \ge u$.
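A quick numerical illustration of the lemma (with an arbitrary continuous $F$, here the standard logistic distribution function, discretized on a fine grid):

```python
import numpy as np

def gen_inv(F_vals, ys, u):
    """Generalized inverse F^-(u) = inf{y : F(y) >= u} on a discretized CDF."""
    return ys[np.searchsorted(F_vals, u, side="left")]

ys = np.linspace(-20.0, 20.0, 400001)
F_vals = 1.0 / (1.0 + np.exp(-ys))   # standard logistic CDF (continuous, strictly increasing)
```

Up to grid resolution, $F(F^-(u)) = u$ at every interior $u$, as the lemma asserts.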
Lemma D.4. Let $F$ and $G$ be cumulative distribution functions. If there exists $\varepsilon > 0$ such that $|F(y) - G(y)| \le \varepsilon$ for every $y \in \mathbb{R}$, then
\[
G^-\big( (u - \varepsilon) \vee 0 \big) \le F^-(u), \qquad u \in [0,1], \tag{D.2}
\]
\[
F^-(u) \le G^-(u + \varepsilon), \qquad u \in [0, 1-\varepsilon]. \tag{D.3}
\]

Proof. We first show (D.2). If $F^-(u) = \infty$, there is nothing to show, while if $F^-(u) = -\infty$, then $u \le F(F^-(u)) = F(-\infty) = 0$, so $G^-((u - \varepsilon) \vee 0) = G^-(0) = -\infty$ too. Hence we can suppose that $F^-(u)$ is finite. Since $G(y) \ge F(y) - \varepsilon$ for all $y \in \mathbb{R}$, we have $G(F^-(u)) \ge F(F^-(u)) - \varepsilon \ge u - \varepsilon$. Trivially, also $G(F^-(u)) \ge 0$. Together, we find $G(F^-(u)) \ge (u - \varepsilon) \vee 0$. As a consequence, $F^-(u) \ge G^-((u - \varepsilon) \vee 0)$.

Next we show (D.3). Let $u \in [0, 1-\varepsilon]$. By (D.2) with the roles of $F$ and $G$ interchanged and applied to $u + \varepsilon$ rather than to $u$, we find $F^-(u) = F^-\big( ((u + \varepsilon) - \varepsilon) \vee 0 \big) \le G^-(u + \varepsilon)$.
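A numerical illustration of the sandwich (D.2)-(D.3); the distributions below are arbitrary choices for illustration, with $G$ an $\varepsilon$-perturbation of $F$ kept nondecreasing:

```python
import numpy as np

ys = np.linspace(-10.0, 10.0, 100001)
F = 1.0 / (1.0 + np.exp(-ys))                    # logistic CDF
eps = 0.05
G = np.clip(F + eps * np.sin(3.0 * ys), 0.0, 1.0)
G = np.maximum.accumulate(G)                     # restore monotonicity; still |F - G| <= eps

def inv(H, u):
    # generalized inverse H^-(u) = inf{y : H(y) >= u} on the grid
    return ys[np.searchsorted(H, u, side="left")]
```

The sandwich holds exactly for the discretized step functions as well, since the proof of the lemma only uses $H(H^-(u)) \ge u$ and the uniform bound.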
Acknowledgments
The research by F. Portier was supported by the Fonds de la Recherche Scientifique Belgium
(F.R.S.-FNRS) A4/5 FC 2779/2014-2017 No. 22342320, and the research by J. Segers was
supported by the contract “Projet d’Actions de Recherche Concertées” No. 12/17-045 of the
“Communauté française de Belgique” and by IAP research network Grant P7/06 of the Belgian
government (Belgian Science Policy).
The authors gratefully acknowledge valuable comments by the reviewers and the editors, in
particular the helpful suggestions related to the structure of the theory and the presentation of
the paper.
arXiv:1705.04395v1 [math.CO] 11 May 2017
Unit Incomparability Dimension and Clique
Cover Width in Graphs
Farhad Shahrokhi
Department of Computer Science and Engineering, UNT
[email protected]
Abstract
For a clique cover C in the undirected graph G, the clique cover graph of C is the graph obtained by contracting the vertices of each clique in C into a single vertex. The clique cover width of G, denoted by CCW(G), is the minimum value of the bandwidth over all clique cover graphs of G. Any G with CCW(G) = 1 is known to be an incomparability graph, and is hence called a unit incomparability graph. We introduce the unit incomparability dimension of G, denoted by U dim(G), to be the smallest integer d so that there are unit incomparability graphs Hi with V(Hi) = V(G), i = 1, 2, ..., d, such that E(G) = ∩_{i=1}^d E(Hi). We prove a decomposition theorem establishing the inequality U dim(G) ≤ CCW(G). Specifically, given any G, there are unit incomparability graphs H1, H2, ..., H_{CCW(G)} with V(Hi) = V(G) so that E(G) = ∩_{i=1}^{CCW(G)} E(Hi); in addition, Hi is co-bipartite for i = 1, 2, ..., CCW(G) − 1. Furthermore, we observe that CCW(G) ≥ s(G)/2 − 1, where s(G) is the number of leaves in a largest induced star of G, and use Ramsey theory to give an upper bound on s(G) when G is represented as an intersection graph using our decomposition theorem. Finally, when G is an incomparability graph we prove that CCW(G) ≤ s(G) − 1.
1 Introduction and Summary
Throughout this paper, G = (V(G), E(G)) denotes a graph on n vertices.
G is assumed to be undirected, unless stated otherwise. The complement of
G is denoted by Ḡ. A set C of vertex-disjoint cliques in G is a clique cover if
every vertex of V(G) is in precisely one clique of C. Let L = {v0, v1, ..., v_{n−1}}
be a linear ordering of the vertices of G. The width of L, denoted by W(L),
is max_{v_i v_j ∈ E(G)} |j − i|. The bandwidth of G [2], [3], denoted by BW(G), is
the smallest width over all linear orderings of V(G). The bandwidth problem
is well studied and has intimate connections to other important concepts,
including graph separation [4], [1], [7]. Unfortunately, computing the bandwidth is NP-hard, even if G is a tree [11].
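Both W(L) and BW(G) can be computed by exhaustive search on very small graphs; the following sketch (with made-up example graphs) mirrors the definitions above:

```python
from itertools import permutations

def width(order, edges):
    """Width of a linear ordering: max |pos(u) - pos(v)| over edges uv."""
    pos = {v: i for i, v in enumerate(order)}
    return max(abs(pos[u] - pos[v]) for u, v in edges)

def bandwidth(vertices, edges):
    """BW(G): the minimum width over all orderings (exponential; tiny graphs only)."""
    return min(width(p, edges) for p in permutations(vertices))

# The path P4 has bandwidth 1; the star K_{1,3} has bandwidth 2.
assert bandwidth([0, 1, 2, 3], [(0, 1), (1, 2), (2, 3)]) == 1
assert bandwidth([0, 1, 2, 3], [(0, 1), (0, 2), (0, 3)]) == 2
```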
In [15] we introduced the clique cover width problem, which is a generalization of the bandwidth problem. For a clique cover C of G, let the clique
cover graph of C, denoted by G(C), be the graph obtained from G by contracting the vertices in each clique of C to one vertex. Thus V(G(C)) = C,
and E(G(C)) = {XY | X, Y ∈ C, there is xy ∈ E(G) with x ∈ X and y ∈ Y}. The clique cover width of G, denoted by CCW(G), is the minimum
value of BW(G(C)), where the minimum is taken over all clique covers C
of G. Note that CCW(G) ≤ BW(G), since {{x} | x ∈ V(G)} is a trivial
clique cover of G whose cliques have only one vertex. We strongly
suspect that the problem of computing CCW(G) is NP-hard, due to the
connection to the bandwidth problem. Let C be a clique cover in G.
Throughout this paper, we will write C = {C0, C1, ..., Ct} to indicate that
C is an ordered set of cliques. For a clique cover C = {C0, C1, ..., Ct} in G,
let the width of C, denoted by W(C), be max{|j − i| : Ci Cj ∈ E(G(C))}.
Observe that W(C) = max{|j − i| : xy ∈ E(G), x ∈ Ci, y ∈ Cj, Ci, Cj ∈ C}.
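The quantity W(C) in the last identity can be computed directly from the block index of each vertex; the graph and covers below are illustrative:

```python
def cover_width(cover, edges):
    """W(C): the largest index distance |j - i| over edges xy with x in C_i, y in C_j."""
    block = {v: i for i, clique in enumerate(cover) for v in clique}
    return max(abs(block[x] - block[y]) for x, y in edges)

# Two triangles joined by the edge (2, 3).
edges = [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)]
assert cover_width([[0, 1, 2], [3, 4, 5]], edges) == 1   # cover by the two triangles
assert cover_width([[v] for v in range(6)], edges) == 2  # trivial singleton cover
```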
A crucial tool for the design of divide-and-conquer algorithms is
separation. The planar separator theorem [12] asserts that any n-vertex planar
graph can be separated into two subgraphs, each having at most 2n/3
vertices, by removing O(√n) vertices. The key application of the clique
cover width is in the derivation of separation theorems in graphs, where
separation can be defined for measures other than the
number of vertices [14]. For instance, given a clique cover C in G, can G be
separated by removing a "small" number of cliques in C so that each of the
two remaining subgraphs of G can be covered by at most α|C| cliques from
C, where α < 1 is a constant [14, 16]?
Any (strict) partially ordered set (S, <) has a directed acyclic graph Ĝ
associated with it in a natural way: V(Ĝ) = S, and ab ∈ E(Ĝ) if and only
if a < b. The comparability graph associated with (S, <) is the undirected
graph obtained by dropping the orientation of the edges of Ĝ [6, 21].
The complement of a comparability graph is an incomparability graph.
Any graph G with CCW(G) = 1 is known to be an incomparability graph [15], and hence we call such a G a unit incomparability graph.
Clearly, any co-bipartite graph, that is, the complement of a bipartite graph, is
a unit incomparability graph. In addition, it is easy to verify that any unit
interval graph is also a unit incomparability graph. Thus, the class of unit
incomparability graphs is relatively large.
Let d ≥ 1, for i = 1, 2, ..., d let Hi be a graph with V(Hi) = V, and
let G be a graph with V(G) = V and E(G) = ∩_{i=1}^d E(Hi). Then we say
that G is the intersection graph of H1, H2, ..., Hd, and write G = ∩_{i=1}^d Hi
[18]. Let the incomparability dimension of G, denoted by Idim(G), be
the smallest integer d so that there are d incomparability graphs whose
intersection graph is G. Similarly, let the unit incomparability dimension
of G, denoted by U dim(G), be the smallest integer d so that there are
d unit incomparability graphs whose intersection is G. In this paper we
focus on the connection between the clique cover width and the unit
incomparability dimension of a graph. Our work gives rise to a new way
of representing any graph as the intersection graph of unit incomparability
graphs.
Recall that the Boxicity and Cubicity of a graph, denoted Box(G) and
Cub(G) respectively, are the smallest integers d such that G is the intersection
graph of d interval graphs, or of d unit interval graphs, respectively [13]. Recent
work [18, 19] has elevated the importance of Boxicity and Cubicity
by improving the upper bounds and by linking these concepts to other important graph parameters, including treewidth [4], poset dimension [21], and
crossing numbers [17]. Clearly, Idim(G) ≤ U dim(G), Box(G) ≤ Cub(G),
Idim(G) ≤ Box(G), and U dim(G) ≤ Cub(G).
While Cubicity and Boxicity are related to the unit incomparability dimension, and while the results in [18, 19] are extremely valuable, these results
do not imply, or even address, the concepts and results presented here, particularly due to the focus of our work on the pivotal concept of the clique
cover width.
In Section 2 we prove a decomposition theorem that establishes the
inequality U dim(G) ≤ CCW(G) for any graph G. Furthermore, we observe that CCW(G) ≥ s(G)/2 − 1, where s(G) is the largest number of
leaves of an induced star in G, and use Ramsey theory to give an upper
bound on s(G) when G is represented as an intersection graph using our
decomposition theorem. In Section 3 we study the clique cover width
problem in incomparability graphs and prove that s(G) − 1 ≥ CCW(G)
when G is an incomparability graph. The results give rise to polynomial-time
algorithms for the construction of the appropriate structures.
Remark. In work in progress, the author has improved the upper
bound on U dim(G) (in the decomposition theorem) to O(log(CCW(G))).
This drastically improves all upper bounds presented here.
2 Main Results
Now we prove the decomposition result.
Theorem 2.1 (Decomposition Theorem) Let C = {C0, C1, ..., Ct} be a
clique cover in G. Then there are W(C) unit incomparability graphs
H1, H2, ..., H_{W(C)} whose intersection is G. Specifically, Hi is a co-bipartite
graph for i = 1, 2, ..., W(C) − 1. Moreover, the constructions can be done
in polynomial time.
Proof. For i = 1, 2, ..., W(C) − 1, we define a graph H̄i on the vertex
set V(G) and the edge set
E(H̄i) = {xy ∈ E(Ḡ) | x ∈ Cl, y ∈ Ck, |k − l| = i},
and prove that H̄i is a bipartite graph. For i = 1, 2, ..., W(C) − 1, let odd(i)
and even(i) denote the sets of all integers 0 ≤ j ≤ t so that j = a·i + r, 0 ≤ r ≤
i − 1, where a ≥ 0 is odd, or even, respectively, and note that odd(i) ∪ even(i)
is a partition of {0, 1, 2, ..., t}. Next, for i = 1, 2, ..., W(C) − 1, let V1 =
{x ∈ Cj | j ∈ odd(i)} and V2 = {x ∈ Cj | j ∈ even(i)}, and note that V1 ∪ V2 is a
partition of V(G) so that any edge in E(H̄i) has one endpoint in V1 and
the other endpoint in V2. Therefore, for i = 1, 2, ..., W(C) − 1, H̄i is
bipartite, and consequently Hi is co-bipartite. Next, let H̄_{W(C)} be a graph
on the vertex set V(G) and the edge set
E(H̄_{W(C)}) = {xy ∈ E(Ḡ) | x ∈ Cl, y ∈ Ck, |l − k| ≥ W(C)}.
Let xy ∈ E(H̄_{W(C)}); then y ∈ Cl and x ∈ Ck with |l − k| ≥ W(C). Now orient
xy from x to y if l ≥ k + W(C); otherwise, orient xy from y to x. It
is easy to verify that this orientation is transitive, and hence H̄_{W(C)} is a
comparability graph. Consequently, H_{W(C)} is an incomparability graph.
We need to show CCW(H_{W(C)}) = 1. First, observe that the union of
any at most W(C) consecutive cliques of C is a clique in H_{W(C)}.
Next, let t = a·W(C) + r, 0 ≤ r ≤ W(C) − 1. For i = 0, 1, ..., a − 1, let Si be
the union of the W(C) consecutive cliques of C starting at C_{i·W(C)+1} and
ending at C_{(i+1)·W(C)}. Define Sa to be the union of the r consecutive cliques
of C, starting at C_{a·W(C)+1} and ending at Ct. It is easy to verify that
S = {S0, S1, ..., Sa} is a clique cover in H_{W(C)} with W(S) = 1, and thus
CCW(H_{W(C)}) = 1. ✷
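The construction in this proof is effective and easy to run; the sketch below builds the complements H̄_i exactly as defined above, takes H_i to be their complements, and checks E(G) = ∩_i E(H_i) on two toy inputs:

```python
from itertools import combinations

def decompose(cover, edges, n):
    """H_1, ..., H_W from an ordered clique cover, following the Decomposition Theorem:
    H_bar_i collects the non-edges of G between cliques at index distance exactly i
    (distance >= W for the last graph); H_i is its complement."""
    block = {v: i for i, clique in enumerate(cover) for v in clique}
    E = {frozenset(e) for e in edges}
    W = max(abs(block[x] - block[y]) for x, y in edges)
    all_pairs = {frozenset(p) for p in combinations(range(n), 2)}
    graphs = []
    for i in range(1, W + 1):
        Hbar = set()
        for x, y in combinations(range(n), 2):
            d = abs(block[x] - block[y])
            if frozenset((x, y)) not in E and (d == i if i < W else d >= W):
                Hbar.add(frozenset((x, y)))
        graphs.append(all_pairs - Hbar)
    return graphs, E

# Path P4 with the singleton cover: W(C) = 1, so a single factor H_1 = G.
H, E = decompose([[0], [1], [2], [3]], [(0, 1), (1, 2), (2, 3)], 4)
assert set.intersection(*H) == E

# Star K_{1,3} with the singleton cover [0],[1],[2],[3]: W(C) = 3, three factors.
H, E = decompose([[0], [1], [2], [3]], [(0, 1), (0, 2), (0, 3)], 4)
assert len(H) == 3 and set.intersection(*H) == E
```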
Remark 2.1 If G is a clique, then CCW(G) = 0, whereas U dim(G) =
1. In addition, if G is disconnected, then CCW(G) and U dim(G) equal
the maximum values of these parameters, respectively, taken over all
components of G.
A simple consequence of Theorem 2.1 is the following.
Corollary 2.1 CCW (G) ≥ U dim(G), for any connected graph G which
is not a clique.
In light of Theorem 2.1, one may want estimates for CCW(G).
Let s(G) denote the number of leaves of a largest induced star in G. When
|V(G)| ≤ 2, we define s(G) = 1.
Observation 2.1 Let C = {C0, C1, ..., Ct} be a clique cover in G. Then
W(C) ≥ ⌈s(G)/2⌉ − 1.
Proof. Let S be an induced star with center r and s(G) leaves. Then
r ∈ Ci for some 0 ≤ i ≤ t. Note that no two leaves of S can be in
the same clique of C, and hence there must be an edge rx ∈ E(S) with
x ∈ Cj for some 0 ≤ j ≤ t so that |j − i| ≥ ⌈s(G)/2⌉ − 1. Consequently,
W(C) ≥ ⌈s(G)/2⌉ − 1. ✷
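Observation 2.1 can be checked by brute force on small graphs; the following sketch computes s(G) exhaustively (the star example is illustrative):

```python
from itertools import combinations
from math import ceil

def star_leaves(n, edges):
    """s(G): the largest number of leaves of an induced star, by brute force."""
    E = {frozenset(e) for e in edges}
    best = 1
    for r in range(n):
        nbrs = [v for v in range(n) if frozenset((r, v)) in E]
        for k in range(len(nbrs), best, -1):
            if any(all(frozenset(p) not in E for p in combinations(leaves, 2))
                   for leaves in combinations(nbrs, k)):
                best = k
                break
    return best

# Star K_{1,4}: s(G) = 4; the singleton cover 0, 1, ..., 4 has width 4 >= ceil(4/2) - 1.
edges = [(0, i) for i in (1, 2, 3, 4)]
s = star_leaves(5, edges)
width = max(abs(x - y) for x, y in edges)   # width of the ordered singleton cover
assert s == 4
assert width >= ceil(s / 2) - 1
```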
Ideally, one would like a converse of Theorem 2.1, where the
clique cover width of G can be estimated using the clique cover widths of
the factor graphs. Unfortunately, we have not been able to derive a general
result that can be used effectively. In addition, Observation 2.1 poses the
problem of finding an upper bound for CCW(G) in terms of s(G) only.
Unfortunately, this is also impossible. For instance, take the n × n planar
grid G; then s(G) = 4, but CCW(G) is unbounded. (We omit the details.)
Nonetheless, one may pose the related question of finding an upper bound
for s(G) in G = ∩_{i=1}^d Hi, using s(Hi), i = 1, 2, ..., d.
For integers n1, n2, ..., nc, let R(n1, n2, ..., nc) denote the general Ramsey
number. Thus, for any t ≥ R(n1, n2, ..., nc), if the edges of Kt are colored
with c different colors, then for some 1 ≤ i ≤ c we always get a complete
subgraph of Kt on ni vertices whose edges are all colored with color i.
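For tiny parameters the definition can be verified exhaustively; the sketch below confirms the classical value R(3, 3) = 6 by enumerating all 2-colorings of the edges of K5 and K6:

```python
from itertools import combinations, product

def has_mono_triangle(n, color):
    """color maps each 2-subset of range(n) to one of two colors."""
    return any(color[frozenset((a, b))] == color[frozenset((a, c))] == color[frozenset((b, c))]
               for a, b, c in combinations(range(n), 3))

def forces_mono_triangle(n):
    """True iff every 2-coloring of the edges of K_n has a monochromatic triangle."""
    pairs = [frozenset(p) for p in combinations(range(n), 2)]
    return all(has_mono_triangle(n, dict(zip(pairs, cols)))
               for cols in product((0, 1), repeat=len(pairs)))

assert forces_mono_triangle(6)        # t = 6 >= R(3, 3)
assert not forces_mono_triangle(5)    # so R(3, 3) = 6
```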
Theorem 2.2 Let G = ∩_{i=1}^d Hi. Then s(G) < R(s(H1) + 1, s(H2) + 1, ..., s(Hd) + 1).
Proof. Assume to the contrary that s(G) ≥ R(s(H1) + 1, s(H2) +
1, ..., s(Hd) + 1), and let S be an induced star in G rooted at r with s(G)
leaves. Note that S = ∩_{i=1}^d Si, where, for i = 1, 2, ..., d, Si is an induced
subgraph of Hi with V(Si) = V(S). Now let H be a complete graph on
the vertex set V(H) = V(S) \ {r}. Thus, the vertex set of H is precisely
the set of leaves of S. Now assign to any ab ∈ E(H) a color i with
ab ∉ E(Si); such an i exists, since ab is not an edge of the induced star S and
E(S) = ∩_{i=1}^d E(Si). Since |V(H)| = s(G) ≥
R(s(H1) + 1, s(H2) + 1, ..., s(Hd) + 1), there must be a monochromatic complete subgraph of H in some color i, with at least s(Hi) + 1 vertices, for some 1 ≤ i ≤ d. Let
Wi denote the set of vertices of this complete subgraph, and note that
our coloring policy implies that Wi is an independent set of vertices in Hi.
Thus Wi ∪ {r} is an induced star in Hi with at least s(Hi) + 1 > s(Hi)
leaves, which is a contradiction. ✷
Corollary 2.2 For any G, s(G) < R(3, 3, ..., 3, 4), where the entry 3 is repeated CCW(G) − 1 times.
Proof. By Theorem 2.1, there are CCW(G) unit incomparability
graphs H1, H2, ..., H_{CCW(G)} whose intersection is G. Observe that s(Hi) ≤ 2,
since Hi is co-bipartite for i = 1, 2, ..., CCW(G) − 1, and s(H_{CCW(G)}) ≤ 3,
since H_{CCW(G)} is a unit incomparability graph. Now apply Theorem 2.2.
✷
3 Clique Cover Width in Incomparability Graphs
Theorem 3.1 Let G be an incomparability graph. Then there is a clique
cover C = {C0, C1, ..., Ck} so that s(G) − 1 ≥ W(C) ≥ ⌈s(G)/2⌉ − 1. Moreover,
if the graph Ĝ, that is, the transitive orientation of Ḡ, is given in adjacency list
form, then C and W(C) can be computed in O(|V(G)| + |E(G)|) time.
Proof. Let C = {C0, C1, ..., Ck} be a greedy clique cover in G, where
k + 1 is the size of a largest independent set in G. Thus, C0 is the set
of all sources in Ĝ, C1 is the set of all sources obtained after the
removal of C0, and so on. The lower bound for W(C) follows from Observation 2.1.
For the upper bound, let e = ab ∈ E(G) with a ∈ Ci, b ∈ Cj, j > i, so that
W(C) = |j − i|. Now let xj = b; then for t = i, i + 1, ..., j − 1 there is xt ∈ Ct
so that xt x_{t+1} ∈ E(Ĝ). It follows that for t, p = i, i + 1, ..., j with p > t, we have
xt xp ∈ E(Ĝ) and thus xt xp ∉ E(G). We conclude that the graph induced on
a, xi, x_{i+1}, ..., xj is a star in G with center a, having j − i + 1 = W(C) + 1
leaves. Consequently, s(G) ≥ j − i + 1 ≥ W(C) + 1. To finish the proof,
and for the algorithm, one can apply topological ordering to the graph Ĝ to
compute C and W(C) in linear time. ✷
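The greedy cover in this proof is a layering of the DAG Ĝ by repeated removal of sources; each layer is an antichain of the order and hence a clique of G. A minimal sketch (the 4-cycle example is illustrative):

```python
from collections import defaultdict

def greedy_clique_cover(n, dag_edges):
    """Layer the transitive DAG (the orientation of the complement of G) by
    repeatedly removing its sources; each layer is a clique of the
    incomparability graph G."""
    indeg = {v: 0 for v in range(n)}
    out = defaultdict(list)
    for a, b in dag_edges:
        out[a].append(b)
        indeg[b] += 1
    cover, layer = [], sorted(v for v in range(n) if indeg[v] == 0)
    while layer:
        cover.append(layer)
        nxt = []
        for v in layer:
            for w in out[v]:
                indeg[w] -= 1
                if indeg[w] == 0:
                    nxt.append(w)
        layer = sorted(nxt)
    return cover

# G = 4-cycle 0-1-2-3; its complement has edges {0,2},{1,3}, transitively oriented 0->2, 1->3.
cover = greedy_clique_cover(4, [(0, 2), (1, 3)])
assert cover == [[0, 1], [2, 3]]   # two cliques of G, and the resulting W(C) is 1
```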
Corollary 3.1 Let G be an incomparability graph. Then CCW(G) can be approximated within a factor of two in O(|V| + |E|) time.
Corollary 3.2 Let G be an incomparability graph. Then U dim(G) ≤ s(G) − 1.
Remark 3.1 Let G be a co-bipartite graph which is not a clique. Then s(G) = 2
and CCW(G) = 1, and hence the upper bound in Theorem 3.1 is tight.
Let G be the graph consisting of 3 cliques C1, C2, C3, in that order, each
on 4 vertices, with xi ∈ Ci, i = 1, 2, 3, and additional edges x1 x2 and x2 x3.
Then s(G) = 3 and CCW(G) = 1, and hence the lower bound in
Observation 2.1 is tight.
References
[1] J. Böttcher, K. P. Pruessmannb, A. Taraz, A. Würfel Bandwidth,
treewidth, separators, expansion, and universality, Electronic Notes
in Discrete Mathematics, 31(20), 2008, 91-96.
[2] P.Z. Chinn, J. Chvátalová, A.K. Dewdney, N.E. Gibbs, The bandwidth
problem for graphs and matrices: a survey, Journal of Graph Theory,
6(3), 1982, 223-254.
[3] J. Diaz, J. Petit, M. Serna A survey of graph layout problems, ACM
Computing Surveys (CSUR) 34(3), 2002, 313 - 356.
[4] H.L. Bodlaender , A Tourist Guide through Treewidth. Acta Cybern.
11(1-2), 1993, 1-22.
[5] M. Golumbic, D. Rotem, J. Urrutia, Comparability graphs and intersection graphs, Discrete Mathematics 43(1), 1983, 37-46.
[6] M.C. Golumbic, Algorithmic Graph Theory and Perfect Graphs, Academic Press, 1980.
[6] Golumbic, Martin Charles (1980). Algorithmic Graph Theory and Perfect Graphs. Academic Press.
[7] Ellis J.A, Sudborough I.H., Turner J.S., The vertex separation and
search number of a graph, Information and Computation, 113(1), 1994,
50-79.
[8] B. Dushnik, E. W. Miller, Partially ordered Sets, American Journal of
Mathematics 63(3), 1941, 600-610.
[9] J. Fox, J. Pach, A separator theorem for string graphs and its applications, Combinatorics, Probability and Computing 19(2010), 371-390.
[10] J. Fox and J. Pach, String graphs and incomparability graphs, Advances in Mathematics, 2009.
[11] M.R. Garey and D.S. Johnson, Computers and Intractability: A Guide
to the Theory of NP-Completeness, Freeman, San Francisco, CA, 1979.
[12] R.J. Lipton, R.E. Tarjan, A separator theorem for planar graphs,
SIAM Journal on Applied Mathematics 36, 1979, 177-189.
[13] F. Roberts, On the boxicity and cubicity of a graph, in Recent Progresses in Combinatorics, Academic Press, New York,
1969, 301-310.
[14] F. Shahrokhi, A New Separation Theorem with Geometric Applications, Proceedings of EuroCG2010, 2010, 253-256.
[15] F. Shahrokhi, On the clique cover width problem, Congressus Numerantium 205 (2010), 97-103.
[16] F. Shahrokhi, in preparation.
[17] Shahrokhi, F., Sýkora, O., Székely, L.A., and Vrt́o, I., Crossing Number Problems: Bounds and Applications, in Intuitive Geometry eds.
I. Bárány and K. Böröczky, Bolyai Society Mathematical Studies 6,
1997, 179-206.
[18] L. Sunil Chandran and Naveen Sivadasan, Boxicity and treewidth,
Journal of Combinatorial Theory, Series B, 97(5), 2007, 733-744.
[19] Abhijin Adiga, L. Sunil Chandran, Rogers Mathew: Cubicity, Degeneracy, and Crossing Number. FSTTCS 2011: 176-190.
[20] W.T. Trotter, New perspectives on interval orders and interval graphs,
in Surveys in Combinatorics, Cambridge Univ. Press, 1997, 237-286.
[21] W.T. Trotter, Combinatorics and partially ordered sets: Dimension
theory, Johns Hopkins series in the mathematical sciences, The Johns
Hopkins University Press, 1992.
On the Local Structure of Stable Clustering Instances
Vincent Cohen-Addad∗
University of Copenhagen
arXiv:1701.08423v3 [] 10 Aug 2017
Chris Schwiegelshohn†
Sapienza University of Rome
Abstract
We study the classic k-median and k-means clustering objectives in the beyond-worst-case
scenario. We consider three well-studied notions of structured data that aim at characterizing
real-world inputs:
• Distribution Stability (introduced by Awasthi, Blum, and Sheffet, FOCS 2010)
• Spectral Separability (introduced by Kumar and Kannan, FOCS 2010)
• Perturbation Resilience (introduced by Bilu and Linial, ICS 2010)
We prove structural results showing that inputs satisfying at least one of the conditions are
inherently “local”. Namely, for any such input, any local optimum is close both in term of
structure and in term of objective value to the global optima.
As a corollary we obtain that the widely-used Local Search algorithm has strong performance
guarantees for both the tasks of recovering the underlying optimal clustering and obtaining a
clustering of small cost. This is a significant step toward understanding the success of local
search heuristics in clustering applications.
∗ [email protected]
† [email protected]
1 Introduction
Clustering is a fundamental, routinely-used approach to extract information from datasets. Given
a dataset and the most important features of the data, a clustering is a partition of the data such
that data elements in the same part have common features. The problem of computing a clustering
has received a considerable amount of attention in both practice and theory.
The variety of contexts in which clustering problems arise makes the problem of computing a
“good” clustering hard to define formally. From a theoretician’s perspective, clustering problems
are often modeled by an objective function we wish to optimize (e.g., the famous k-median or
k-means objective functions). This modeling step is both needed and crucial since it provides a
framework to quantitatively compare algorithms. Unfortunately, the most popular objectives for
clustering, like the k-median and k-means objectives, are hard to approximate, even when restricted
to Euclidean spaces.
This view is generally not shared by practitioners. Indeed, clustering is often used as a preprocessing step to simplify and speed up subsequent analysis, even if this analysis admits polynomial
time algorithms. If the clustering itself is of independent interest, there are many heuristics with
good running times and results on real-world inputs.
This induces a gap between theory and practice. On the one hand, the algorithms that are
efficient in practice cannot be proven to achieve good approximation to the k-median and k-means
objectives in the worst-case. Since approximation ratios are one of the main methods to evaluate
algorithms, theory predicts that determining a good clustering is a difficult task. On the other
hand, the best theoretical algorithms turn out to be noncompetitive in applications because they
are designed to handle “unrealistically” hard instances with little importance for practitioners. To
bridge the gap between theory and practice, it is necessary to go beyond the worst-case analysis by,
for example, characterizing and focusing on inputs that arise in practice.
1.1
Real-world Inputs
Several approaches have been proposed to bridge the gap between theory and practice. For example,
researchers have considered the average-case scenario (e.g., [26]) where the running time of an
algorithm is analyzed with respect to some probability distribution over the set of all inputs.
Smoothed analysis (e.g., [90]) is another celebrated approach that analyzes the running time of an
algorithm with respect to worst-case inputs subject to small random perturbations.
Another successful approach, the one we take in this paper, consists in focusing on structured
inputs. In a seminal paper, Ostrovsky, Rabani, Schulman, and Swamy [85] introduced the idea
that inputs that come from practice induce a ground-truth or a meaningful clustering. They argued
that an input I contains a meaningful clustering into k clusters if the optimal k-median cost of
a clustering using k centers, say OPTk (I), is much smaller than the optimal cost of a clustering
using k − 1 centers OPTk−1 (I). This is also motivated by the elbow method 1 (see Section 7 for
more details) used by practitioners to define the number of clusters. More formally, an instance I
of k-median or k-means satisfies the α-ORSS property if OPTk (I)/OPTk−1 (I) ≤ α.
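The α-ORSS ratio can be computed exactly on toy inputs by brute force over candidate center sets; the sketch below uses discrete k-means with centers restricted to the input points, and the data are made up for illustration:

```python
from itertools import combinations

def kmeans_cost(points, centers):
    """1-D k-means cost: each point pays squared distance to its nearest center."""
    return sum(min((p - c) ** 2 for c in centers) for p in points)

def opt_cost(points, k):
    """Brute-force discrete k-means: best k centers chosen among the points."""
    return min(kmeans_cost(points, C) for C in combinations(points, k))

# Two tight 1-D clusters: OPT_2 is tiny relative to OPT_1, i.e. a sharp "elbow" at k = 2.
pts = [0.0, 0.1, 0.2, 10.0, 10.1, 10.2]
ratio = opt_cost(pts, 2) / opt_cost(pts, 1)
assert ratio < 0.01     # the instance satisfies the alpha-ORSS property for small alpha
```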
α-ORSS inputs exhibit interesting properties. The popular k-means++ algorithm (also known
as the D^2-sampling technique) achieves an O(1)-approximation for these inputs2. The condition is
also robust with respect to noisy perturbations of the data set. ORSS-stability also implies several
1 The elbow method consists in running an (approximation) algorithm for an incrementally increasing number of clusters until the cost drops significantly.
2 For worst-case inputs, the k-means++ algorithm achieves an O(log k)-approximation ratio [9, 31, 66, 85].
other conditions aiming to capture well-clusterable instances. Thus, the inputs satisfying the ORSS
property arguably share some properties with the real-world inputs. In this paper, we also provide
experimental results supporting this claim, see Appendix C.
These results have opened new research directions and raised several questions. For example:
• Is it possible to obtain similar results for more general classes of inputs?
• How does the parameter α impact the approximation guarantee and running time?
• Is it possible to prove good performance guarantees for other popular heuristics?
• How close to the “ground-truth” clustering are the approximate clusterings?
We now review the most relevant work in connection to the above open questions, see Sections 2
for other related work.
Distribution Stability (Def. 4.1) Awasthi, Blum and Sheffet [12] have tackled the first two
questions by introducing the notion of distribution stable instances. Distribution stable instances
are a generalization of the ORSS instances (in other words, any instance satisfying the ORSS
property is distribution stable). They also introduced a new algorithm tailored for distribution
stable instances that achieves a (1 + ε)-approximation for α-ORSS inputs (and more generally α-distribution-stable instances) in time n^{O(1/(εα))}. This was the first algorithm whose approximation
guarantee was independent of the parameter α for α-ORSS inputs.
Spectral Separability (Def. 6.1) Kumar and Kannan [74] tackled the first and third questions
by introducing the proximity condition3 . This condition also generalizes the ORSS condition. It
is motivated by the goal of learning a distribution mixture in a d-dimensional Euclidean space.
Quoting [74], the message of their paper can loosely be stated as:
If the projection of any data point onto the line joining its cluster center to any other
cluster center is γk times standard deviations closer to its own center than the other
center, then we can cluster correctly in polynomial time.
In addition, they have made a significant step toward understanding the success of the classic
k-means by showing that it achieves a 1 + O(1/γ)-approximation for instances that satisfy the
proximity condition.
Perturbation Resilience (Def. 5.1) In a seminal work, Bilu and Linial [29] introduced a new
condition to capture real-world instances. They argue that the optimal solution of a real-world
instance is often much better than any other solution and so, a slight perturbation of the instance
does not lead to a different optimal solution. Perturbation-resilient instances have been studied in
various contexts (see e.g., [13, 16, 20, 21, 27, 76]). For clustering problems, an instance is said to
be α-perturbation resilient if an adversary can change the distances between pairs of elements by a
factor at most α and the optimal solution remains the same. Recently, Angelidakis, Makarychev,
and Makarychev [80] have given a polynomial-time algorithm for solving 2-perturbation-resilient instances4. Balcan and Liang [21] have tackled
the third question by showing that a classic algorithm
for hierarchical clustering can solve (1 + √2)-perturbation-resilient instances. This very interesting
3 In this paper, we work with a slightly more general condition called spectral separability, but the motivations behind the two conditions are similar.
4 We note that it is NP-hard to recover the optimal clustering of a < 2-perturbation-resilient instance [27].
result leaves open the question of whether classic algorithms for (“flat”) clustering could also be
proven to be efficient for perturbation-resilient instances.
Main Open Questions Previous work has made important steps toward bridging the gap between theory and practice for clustering problems. However, we still do not have a complete
understanding of the properties of “well-structured” inputs, nor do we know why the algorithms
used in practice perform so well. Some of the most important open questions are the following:
• Do the different definitions of well-structured input have common properties?
• Do heuristics used in practice have strong approximation ratios for well-structured inputs?
• Do heuristics used in practice recover the “ground-truth” clustering on well-structured inputs?
1.2 Our Results: A unified approach via Local Search
We make a significant step toward answering the above open questions. We show that the classic
Local Search heuristic (see Algorithm 1), which has found widespread application in practice (see
Section 2), achieves good approximation guarantees for distribution-stable, spectrally-separable,
and perturbation-resilient instances (see Theorems 4.2, 5.2, 6.2).
More concretely, we show that Local Search is a polynomial-time approximation scheme (PTAS) for both distribution-stable and spectrally-separable⁵ instances. In the case of distribution stability, we also answer the above open question by showing that most of the structure of the optimal underlying clustering is recovered by the algorithm. Furthermore, our results hold even when, for any constant δ > 0, a δ fraction of the points of each optimal cluster may fail to satisfy the β-distribution-stability property.
For γ-perturbation-resilient instances, we show that if γ > 3 then any 2(γ − 3)⁻¹-locally optimal solution, i.e., any solution that cannot be improved by swapping that many centers, is the optimal solution. We also show that the analysis is essentially tight.
These results show that well-structured inputs have the property that the local optima are close
both qualitatively (in terms of structure) and quantitatively (in terms of objective value) to the
global “ground-truth” optimum.
These results make a significant step toward explaining the success of Local Search approaches
for solving clustering problems in practice.
Algorithm 1 Local Search(ε) for k-Median and k-Means
Input: A, F, cost, k
Parameter: ε
1: S ← arbitrary subset of F of cardinality at most k
2: while ∃ S′ s.t. |S′| ≤ k and |S − S′| + |S′ − S| ≤ 2/ε and cost(S′) ≤ (1 − ε/n)·cost(S) do
3:   S ← S′
4: end while
5: Output: S

⁵ Assuming a standard preprocessing step consisting of a projection onto a subspace of lower dimension.
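A direct, brute-force rendering of Algorithm 1 in Python may help fix ideas. The helper names and the exhaustive swap enumeration below are ours; a practical implementation would prune the neighborhood search rather than enumerate it:

```python
import itertools

def solution_cost(clients, centers, cost):
    # cost(S): every client pays its distance to the nearest open center
    return sum(min(cost(x, c) for c in centers) for x in clients)

def local_search(clients, facilities, cost, k, eps):
    # Start from an arbitrary set of at most k centers, then repeatedly
    # apply any swap of at most 1/eps centers that improves the objective
    # by a (1 - eps/n) factor, as in the while-loop of Algorithm 1.
    n = len(clients)
    p = max(1, int(1 / eps))                 # swap size |S - S'| <= p
    S = set(facilities[:k])                  # arbitrary initial solution
    improved = True
    while improved:
        improved = False
        cur = solution_cost(clients, S, cost)
        if cur == 0:
            break
        for out in itertools.chain.from_iterable(
                itertools.combinations(sorted(S), t) for t in range(p + 1)):
            rest = [f for f in facilities if f not in S]
            for ins in itertools.combinations(rest, len(out)):
                new = (S - set(out)) | set(ins)
                if new and solution_cost(clients, new, cost) <= (1 - eps / n) * cur:
                    S, improved = new, True
                    break
            if improved:
                break
    return S
```

For example, on clients at 0, 1, 10, 11 with the same points as facilities and k = 2, the loop settles on one center per tight pair, with total k-median cost 2.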
1.3 Organization of the Paper
Section 2 provides a more detailed review of previous work on worst-case approximation algorithms
and Local Search. Further comments on stability conditions not covered in the introduction can
be found in Section 7 at the end of the paper. Section 3 introduces preliminaries and notation.
Section 4 is dedicated to distribution-stable instances, Section 5 to perturbation-resilient instances,
and Section 6 to spectrally-separated instances. All the missing proofs can be found in the appendix.
2 Related Work
Worst-Case Hardness The problems we study are NP-hard: k-median and k-means are already
NP-hard in the Euclidean plane (see Megiddo and Supowit [83], Mahajan et al. [79], and Dasgupta and Freund [43]). In terms of hardness of approximation, both problems are APX-hard, even in the
Euclidean setting when both k and d are part of the input (see Guha and Khuller [56], Jain et
al. [64], Guruswami et al. [59], and Awasthi et al. [14]). On the positive side, constant factor
approximations are known in metric space for both k-median and k-means (see [3, 33, 77, 65, 84]).
For Euclidean spaces we have a PTAS for both problems, either assuming d fixed and k arbitrary [7,
37, 52, 62, 63, 72], or assuming k fixed and d arbitrary [48, 75].
Local Search Local Search is an all-purpose heuristic that may be applied to any problem,
see Aarts and Lenstra [1] for a general introduction. For clustering, there exists a large body
of bicriteria approximations for k-median and k-means [23, 34, 38, 73]. Arya et al. [11] showed
that Local Search with a neighborhood size of 1/ε gives a 3 + 2ε approximation to k-median, see
also [58]. Kanungo et al. [70] proved an approximation ratio of 9 + ε for k-means clustering by
Local Search, which was until very recently [3] the best known algorithm with a polynomial running
time in metric and Euclidean spaces.⁶ Recently, Local Search with an appropriate neighborhood
size was shown to be a PTAS for k-means and k-median in certain restricted metrics including
constant dimensional Euclidean space [37, 52]. Due to its simplicity, Local Search is also a popular
subroutine for clustering tasks in various more specialized computational models [24, 30, 57]. For
more theoretical clustering papers using Local Search, we refer to [39, 45, 53, 60, 95].
Local Search is also often used for clustering in more applied areas of computer science (e.g., [92,
54, 4, 61]). Indeed, the use of Local Search with a neighborhood of size 1 for clustering was first
proposed by Tüzün and Burke [93], see also Ghosh [55] for a more efficient version of the same
approach. Due to the ease with which it may be implemented, Local Search has become one of the most commonly used heuristics for clustering and facility location, see Ardjmand [5]. Nevertheless, high running time is one of the biggest drawbacks of Local Search compared to other approaches,
though a number of papers have engineered it to become surprisingly competitive, see Frahling and
Sohler [51], Kanungo et al. [69], and Sun [91].
3 Definitions and Notations
The problem The problem we consider in this work is the following slightly more general version of the k-means and k-median problems.

⁶ They combined Local Search with techniques from Matousek [81] for k-means clustering in Euclidean spaces. The running time of the algorithm as stated incurs an additional factor of ε⁻ᵈ due to the use of Matousek's approximate centroid set. Using standard techniques (see e.g. Section B of this paper), a fully polynomial running time in n, d, and k is also possible without sacrificing approximation guarantees.
Definition 3.1 (k-Clustering). Let A be a set of clients, F a set of centers, both lying in a metric space (X, dist), cost a function A × F → R₊, and k a non-negative integer. The k-clustering problem asks for a subset S of F, of cardinality at most k, that minimizes

cost(S) = Σ_{x∈A} min_{c∈S} cost(x, c).

The clustering of A induced by S is the partition of A into subsets C = {C_1, ..., C_k} such that C_i = {x ∈ A | c_i = argmin_{c∈S} cost(x, c)} (breaking ties arbitrarily).
The well-known k-median and k-means problems correspond to the special cases cost(a, c) = dist(a, c) and cost(a, c) = dist(a, c)², respectively. Throughout the rest of this paper, let OPT denote the value of an optimal solution. To give slightly simpler proofs for β-distribution-stable and α-perturbation-resilient instances, we will assume that cost(a, b) = dist(a, b). If cost(a, b) = dist(a, b)^p, then α depends exponentially on p for perturbation resilience. For distribution stability, we still have a PTAS by introducing a dependency in 1/ε^{O(p)} in the neighborhood size of the algorithm. The analysis is unchanged save for various applications of the following lemma at different steps of the proof.

Lemma 3.2. Let p ≥ 0 and 1/2 > ε > 0. For any a, b, c ∈ A ∪ F, we have cost(a, b) ≤ (1 + ε)^p·cost(a, c) + (1 + 1/ε)^p·cost(c, b).
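Lemma 3.2 follows from the triangle inequality by a case distinction: either dist(c, b) ≤ ε·dist(a, c), so dist(a, b) ≤ (1 + ε)·dist(a, c), or dist(a, c) ≤ dist(c, b)/ε, so dist(a, b) ≤ (1 + 1/ε)·dist(c, b); raising to the power p gives the claim. A quick empirical sanity check on the real line (random points; the helper is ours, not from the paper):

```python
import random

def powered_triangle_ok(p, eps, trials=1000, seed=0):
    # Check |a-b|^p <= (1+eps)^p |a-c|^p + (1+1/eps)^p |c-b|^p
    # for random points a, b, c on the line, i.e. cost = dist^p.
    rng = random.Random(seed)
    for _ in range(trials):
        a, b, c = (rng.uniform(-10, 10) for _ in range(3))
        lhs = abs(a - b) ** p
        rhs = (1 + eps) ** p * abs(a - c) ** p \
            + (1 + 1 / eps) ** p * abs(c - b) ** p
        if lhs > rhs + 1e-9:
            return False
    return True
```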
4 Distribution Stability

We work with the notion of (β, δ)-distribution stability, which generalizes β-distribution stability. This extends our results to datasets that exhibit a slightly weaker structure than β-distribution stability. Namely, (β, δ)-distribution stability only requires that, for each cluster of the optimal solution, most of the points satisfy the β-distribution-stability condition.
Definition 4.1 ((β, δ)-Distribution Stability). Let (A, F, cost, k) be an instance of k-clustering where A ∪ F lie in a metric space, and let S∗ = {c∗_1, ..., c∗_k} ⊆ F be a set of centers and C∗ = {C∗_1, ..., C∗_k} be the clustering induced by S∗. Further, let β > 0 and 0 ≤ δ ≤ 1. Then the pair (A, F, cost, k), (C∗, S∗) is a (β, δ)-distribution-stable instance if, for any i, there exists a set ∆_i ⊆ C∗_i such that |∆_i| ≥ (1 − δ)·|C∗_i| and, for any x ∈ ∆_i and any j ≠ i,

cost(x, c∗_j) ≥ β · OPT / |C∗_j|,

where cost(x, c∗_j) is the cost of assigning x to c∗_j.
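For a given clustering, Definition 4.1 is straightforward to test. A small checker (Python sketch; the representation of clusters and the helper name are our choices, and OPT is taken here to be the cost of the given clustering):

```python
def is_distribution_stable(clusters, centers, cost, beta, delta):
    # clusters: list of lists of points; centers[i] is the center of clusters[i]
    opt = sum(cost(x, centers[i]) for i, C in enumerate(clusters) for x in C)
    for i, C in enumerate(clusters):
        # Delta_i: points of cluster i that are expensive to assign to
        # every other center, in the sense of Definition 4.1
        far = [x for x in C
               if all(cost(x, centers[j]) >= beta * opt / len(clusters[j])
                      for j in range(len(clusters)) if j != i)]
        if len(far) < (1 - delta) * len(C):
            return False
    return True
```

Two tight, well-separated groups on the line are stable for moderate β but not for β exceeding the cross-cluster distances.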
For any instance (A, F, cost, k) that is (β, δ)-distribution stable, we refer to (C∗, S∗) as a (β, δ)-clustering of the instance. We show the following theorem for the k-median problem. For the k-clustering problem with parameter p, the constant η becomes a function of p.

Theorem 4.2. Let p > 0, β > 0, and ε < min(1 − δ, 1/3). For a (β, δ)-stable instance with (β, δ)-clustering (C∗, S∗) and an absolute constant η, the cost of the solution output by Local Search(4ε⁻³β⁻¹ + O(ε⁻²β⁻¹)) (Algorithm 1) is at most (1 + ηε)·cost(C∗).
Moreover, let L = {L_1, ..., L_k} denote the clusters of the solution output by Local Search(4ε⁻³β⁻¹ + O(ε⁻²β⁻¹)). If δ = 0 (i.e., the instance is simply β-distribution-stable), there exists a bijection φ : L → C∗ such that for at least m = k − O(ε⁻³β⁻¹) clusters L′_1, ..., L′_m ⊆ L, the following two statements hold.
Figure 1: Example of a cluster C∗_i ∉ Z∗. An important fraction of the points in IR_i^{ε²} are served by L(i) and few points in ∪_{j≠i} ∆_j are served by L(i).
• At least a (1 − ε) fraction of IR_i^{ε²} ∩ C∗_i are served by a unique center L(i) in solution L.
• The total number of clients p ∈ ∪_{j≠i} C∗_j served by L(i) in L is at most ε·|IR_i^{ε²} ∩ C∗_i|.
We first give a high-level description of the analysis. Assume for simplicity that all the optimal
clusters cost less than an ε3 fraction of the total cost of the optimal solution. Combining this
assumption with the β-distribution-stability property, one can show that the centers and points
close to the center are far away from each other. Thus, guided by the objective function, the local
search algorithm identifies most of these centers. In addition, we can show that for most of these
good centers the corresponding cluster in the local solution is very similar to the optimal cluster
(see Figure 1). In total, only very few clusters (a function of ε and β) of the optimal solution are not
present in the local solution. We conclude our proof by using local optimality. Our proof includes
a few ingredients from [12] such as the notion of inner-ring (we work with a slightly more general
definition) and distinguishes between cheap and expensive clusters. Nevertheless our analysis is
slightly stronger as we consider a significantly weaker stability condition and can not only analyze
the cost of the solution of the algorithm, but also the structure of its clusters.
Throughout this section, we consider a set of centers S∗ = {c∗_1, ..., c∗_k} whose induced clustering is C∗ = {C∗_1, ..., C∗_k} and such that the instance is (β, δ)-stable with respect to (C∗, S∗). We refer to the parts of the partition C∗ = {C∗_1, ..., C∗_k} as clusters. Let cost(C∗) = Σ_{i=1}^{k} Σ_{x∈C∗_i} cost(x, c∗_i). Moreover, for any cluster C∗_i and any client x ∈ C∗_i, denote by g_x the cost of client x in solution C∗: g_x = cost(x, c∗_i) = dist(x, c∗_i), since we consider the k-median problem. Let L denote the output of Local Search(β⁻¹ε⁻³) and l_x the cost induced by client x in solution L, namely l_x = min_{ℓ∈L} cost(x, ℓ), and cost(L) = Σ_{x∈A} l_x. The following definition is a generalization of the inner-ring definition of [12].
Definition 4.3. For any ε′, we define the inner ring of cluster i, IR_i^{ε′}, as the set of x ∈ A ∪ F such that dist(x, c∗_i) ≤ ε′·β·OPT/|C∗_i|.

We say that cluster i is cheap if Σ_{x∈C∗_i} g_x ≤ ε³·β·OPT, and expensive otherwise. We aim at proving the following structural lemma.
Lemma 4.4. There exists a set of clusters Z∗ ⊆ C∗ of size at most 2ε⁻³β⁻¹ + O(ε⁻²β⁻¹) such that for any cluster C∗_i ∈ C∗ − Z∗, we have the following properties:
1. C∗_i is cheap.
2. At least a (1 − ε) fraction of IR_i^{ε²} ∩ C∗_i are served by a unique center L(i) in solution L.
3. The total number of clients p ∈ ∪_{j≠i} ∆_j served by L(i) in L is at most ε·|IR_i^{ε²} ∩ C∗_i|.
See Figure 1 for a typical cluster of C∗ − Z∗. We start with the following lemma, which generalizes Fact 4.1 in [12].

Lemma 4.5. Let C∗_i be a cheap cluster. For any ε′, we have |IR_i^{ε′} ∩ C∗_i| > (1 − ε³/ε′)·|C∗_i|.

We then prove that the inner rings of cheap clusters are disjoint for δ + ε³/ε′ < 1 and ε′ < 1/3.

Lemma 4.6. Let δ + ε³/ε′ < 1 and ε′ < 1/3. If C∗_i ≠ C∗_j are cheap clusters, then IR_i^{ε′} ∩ IR_j^{ε′} = ∅.

For each cheap cluster C∗_i, let L(i) denote the center of L that belongs to IR_i^{ε} if there exists exactly one such center, and leave L(i) undefined otherwise. By Lemma 4.6, L(i) ≠ L(j) for i ≠ j.

Lemma 4.7. Let ε < 1/3. Let C∗ − Z_1 denote the set of clusters C∗_i that are cheap, such that L(i) is defined, and such that at least (1 − ε)·|IR_i^{ε²} ∩ C∗_i| clients of IR_i^{ε²} ∩ C∗_i are served in L by L(i). Then |Z_1| ≤ (2ε⁻³ + 11.25·ε⁻² + 22.5·ε⁻¹)·β⁻¹.
Proof. There are five different types of clusters in C∗:
1. k_1 expensive clusters;
2. k_2 cheap clusters with no center of L belonging to IR_i^{ε};
3. k_3 cheap clusters with at least two centers of L belonging to IR_i^{ε};
4. k_4 cheap clusters with L(i) defined and less than (1 − ε)·|IR_i^{ε²} ∩ C∗_i| clients of IR_i^{ε²} ∩ C∗_i served in L by L(i);
5. k_5 cheap clusters with L(i) defined and at least (1 − ε)·|IR_i^{ε²} ∩ C∗_i| clients of IR_i^{ε²} ∩ C∗_i served in L by L(i).

The definition of cheap clusters immediately yields k_1 ≤ ε⁻³β⁻¹.
Since L and C∗ both have k clusters and the inner rings of cheap clusters are disjoint (Lemma 4.6), we have c_1·k_1 + c_3·k_3 + k_4 + k_5 = k_1 + k_2 + k_3 + k_4 + k_5 = |Z_1| + k_5 = k with c_1 ≥ 0 and c_3 ≥ 2, resulting in k_3 ≤ (c_3 − 1)·k_3 = (1 − c_1)·k_1 + k_2 ≤ k_1 + k_2.

Before bounding k_2 and k_4, we discuss the impact of a cheap cluster C∗_i with at least a p fraction of the clients of IR_i^{ε²} ∩ C∗_i being served in L by some centers that are not in IR_i^{ε}. By the triangle inequality, the cost for any client x of this p fraction is at least (ε − ε²)·β·cost(C∗)/|C∗_i|. Then the total cost of all clients of this p fraction in L is at least p·|IR_i^{ε²} ∩ C∗_i|·(1 − ε)·ε·β·cost(C∗)/|C∗_i|. By Lemma 4.5, substituting |IR_i^{ε²} ∩ C∗_i| yields for this total cost

p·|IR_i^{ε²} ∩ C∗_i|·(1 − ε)·ε·β·cost(C∗)/|C∗_i| ≥ p·(1 − ε)²·|C∗_i|·ε·β·cost(C∗)/|C∗_i| = p·(1 − ε)²·ε·β·cost(C∗).

To bound k_2 we must use p = 1, while we have p > ε for k_4. Therefore, the total costs of all clients of the k_2 and the k_4 clusters in L are at least k_2·(1 − ε)²·ε·β·cost(C∗) and k_4·(1 − ε)²·ε²·β·cost(C∗), respectively.
Now, since cost(L) ≤ 5·OPT ≤ 5·cost(C∗), we have (k_2 + k_4·ε)·ε·β ≤ 5/(1 − ε)² ≤ 45/4.
Therefore, we have |Z_1| = k_1 + k_2 + k_3 + k_4 ≤ 2k_1 + 2k_2 + k_4 ≤ (2ε⁻³ + 11.25·ε⁻² + 22.5·ε⁻¹)·β⁻¹.
We continue with the following lemma, whose proof relies on similar arguments.

Lemma 4.8. There exists a set Z_2 ⊆ C∗ − Z_1 of size at most 11.25·ε⁻¹β⁻¹ such that for any cluster C∗_j ∈ C∗ − Z_2, the total number of clients x ∈ ∪_{i≠j} ∆_i that are served by L(j) in L is at most ε·|IR_j^{ε²} ∩ C∗_j|.

The proof of Lemma 4.4 then follows by combining Lemmas 4.7 and 4.8.
We now turn to the analysis of the cost of L. Let C(Z∗) = ∪_{C∗_i∈Z∗} C∗_i. For any cluster C∗_i ∈ C∗ − Z∗, let L(i) be the unique center of L that serves at least (1 − ε)·|IR_i^{ε²} ∩ C∗_i| > (1 − ε)²·|C∗_i| clients of IR_i^{ε²} ∩ C∗_i (see Lemmas 4.4 and 4.5). Let L̂ = ∪_{C∗_i∈C∗−Z∗} {L(i)} and define Â to be the set of clients that are served in solution L by centers of L̂. Finally, let A(L(i)) be the set of clients that are served by L(i) in solution L. Observe that the sets A(L(i)) partition Â.
Lemma 4.9. We have

−ε·cost(L)/n + Σ_{x∈A−(Â∪C(Z∗))} l_x ≤ Σ_{x∈A−(Â∪C(Z∗))} g_x + (2ε/(1 − ε)²)·(cost(C∗) + cost(L)).
Proof. Consider the following mixed solution M = L̂ ∪ {c∗_i | C∗_i ∈ Z∗}. We start by bounding the cost of M. For any client x ∈ Â, the center that serves it in L belongs to M; thus its cost in M is at most l_x. Now, for any client x ∈ C(Z∗), the center that serves it in C∗ is in M, so its cost in M is at most g_x.
Finally, we evaluate the cost of the clients in A − (Â ∪ C(Z∗)). Consider such a client x and let C∗_i be the cluster it belongs to in solution C∗. Since C∗_i ∈ C∗ − Z∗, L(i) is defined and we have L(i) ∈ L̂ ⊆ M. Hence, the cost of x in M is at most cost(x, L(i)). Observe that by the triangle inequality, cost(x, L(i)) ≤ cost(x, c∗_i) + cost(c∗_i, L(i)) = g_x + cost(c∗_i, L(i)).
Now consider a client x′ ∈ IR_i^{ε²} ∩ C∗_i ∩ A(L(i)). By the triangle inequality, we have cost(c∗_i, L(i)) ≤ cost(c∗_i, x′) + cost(x′, L(i)) = g_{x′} + l_{x′}. Hence,

cost(c∗_i, L(i)) ≤ (1/|IR_i^{ε²} ∩ C∗_i ∩ A(L(i))|)·Σ_{x′∈IR_i^{ε²}∩C∗_i∩A(L(i))} (g_{x′} + l_{x′}).

It follows that assigning the clients of C∗_i ∩ (A − Â) to L(i) induces a cost of at most

Σ_{x∈C∗_i∩(A−Â)} g_x + (|C∗_i ∩ (A − Â)| / |IR_i^{ε²} ∩ C∗_i ∩ A(L(i))|)·Σ_{x′∈IR_i^{ε²}∩C∗_i∩A(L(i))} (g_{x′} + l_{x′}).
Due to Lemma 4.4, we have |IR_i^{ε²} ∩ C∗_i ∩ A(L(i))| ≥ (1 − ε)·|IR_i^{ε²} ∩ C∗_i| and |(IR_i^{ε²} ∩ C∗_i) ∩ (A − Â)| ≤ ε·|IR_i^{ε²} ∩ C∗_i|. Further, |(C∗_i − IR_i^{ε²}) ∩ (A − Â)| ≤ |C∗_i − IR_i^{ε²}| = |C∗_i| − |IR_i^{ε²} ∩ C∗_i|. Combining these three bounds, we have

|C∗_i ∩ (A − Â)| / |IR_i^{ε²} ∩ C∗_i ∩ A(L(i))|
  = ( |(C∗_i − IR_i^{ε²}) ∩ (A − Â)| + |(C∗_i ∩ IR_i^{ε²}) ∩ (A − Â)| ) / |IR_i^{ε²} ∩ C∗_i ∩ A(L(i))|
  ≤ ( |C∗_i| − (1 − ε)·|IR_i^{ε²} ∩ C∗_i| ) / ( (1 − ε)·|IR_i^{ε²} ∩ C∗_i| )
  ≤ |C∗_i| / ( (1 − ε)²·|C∗_i| ) − 1        (1)
  = (2ε − ε²)/(1 − ε)² < 2ε/(1 − ε)²,

where the inequality in (1) follows from Lemma 4.5.
Summing over all clusters C∗_i ∈ C∗ − Z∗, we obtain that the cost in M for the clients in (A − Â) ∩ C∗_i is less than

Σ_{x∈A−(Â∪C(Z∗))} g_x + (2ε/(1 − ε)²)·(cost(C∗) + cost(L)).
By Lemmas 4.7 and 4.8, we have |M − L| + |L − M| = 2·|Z∗| ≤ (4ε⁻³ + O(ε⁻²))·β⁻¹. By selecting the neighborhood size of Local Search (Algorithm 1) to be greater than this value, we have (1 − ε/n)·cost(L) ≤ cost(M). Therefore, combining the above observations, we have

(1 − ε/n)·cost(L) ≤ Σ_{x∈Â−C(Z∗)} l_x + Σ_{x∈C(Z∗)} g_x + Σ_{x∈A−(Â∪C(Z∗))} g_x + (2ε/(1 − ε)²)·(cost(C∗) + cost(L)).

By simple transformations, we then obtain

−(ε/n)·cost(L) + Σ_{x∈A−(Â∪C(Z∗))} l_x ≤ Σ_{x∈A−(Â∪C(Z∗))} g_x + (2ε/(1 − ε)²)·(cost(C∗) + cost(L)).
We now turn to evaluating the cost of the clients that are in Â − C(Z∗). For any cluster C∗_i ∈ C∗ − Z∗ and for any x ∈ C∗_i − A(L(i)), define Reassign(x) to be the cost of x with respect to the center L(i). Note that there exists only one center of L in IR_i^{ε} for any cluster C∗_i ∈ C∗ − Z∗. Before going deeper into the analysis, we need the following lemma.

Lemma 4.10. For any C∗_i ∈ C∗ − Z∗, we have

Σ_{x∈C∗_i−A(L(i))} Reassign(x) ≤ Σ_{x∈C∗_i−A(L(i))} g_x + (2ε/(1 − ε)²)·Σ_{x∈C∗_i} (l_x + g_x).
We now partition the clients of cluster C∗_i ∈ C∗ − Z∗. For any i, let B_i be the set of clients of C∗_i that are served in solution L by a center L(j) for some j ≠ i with C∗_j ∈ C∗ − Z∗. Moreover, let D_i = A(L(i)) ∩ (∪_{j≠i} B_j). Finally, define E_i = (C∗_i ∩ Â) − ∪_{j≠i} D_j.

Lemma 4.11. Let C∗_i be a cluster in C∗ − Z∗. Define the solution M_i = L − {L(i)} ∪ {c∗_i} and denote by m^i_x the cost of client x in solution M_i. Then

Σ_{x∈A} m^i_x ≤ Σ_{x∈A−(A(L(i))∪E_i)} l_x + Σ_{x∈E_i} g_x + Σ_{x∈D_i} Reassign(x) + Σ_{x∈A(L(i))−(E_i∪D_i)} l_x + (ε/(1 − ε))·Σ_{x∈E_i} (g_x + l_x).
We can thus prove the following lemma, which concludes the proof.

Lemma 4.12. We have

−ε·cost(L) + Σ_{x∈Â−C(Z∗)} l_x ≤ Σ_{x∈Â−C(Z∗)} g_x + (3ε/(1 − ε)²)·(cost(L) + cost(C∗)).

The proof of Theorem 4.2 follows from (1) summing the equations from Lemmas 4.9 and 4.12 and (2) Lemma 4.4. The comparison of the structure of the local solution to the structure of C∗ is an immediate corollary of Lemma 4.4.
5 Perturbation Resilience

We first give the definition of α-perturbation-resilient instances.

Definition 5.1. Let I = (A, F, cost, k) be an instance of the k-clustering problem. For α ≥ 1, I is α-perturbation-resilient if there exists a unique optimal set of centers C∗ = {c∗_1, ..., c∗_k} and, for any instance I′ = (A, F, cost′, k) such that

∀ a, b ∈ A ∪ F, cost(a, b) ≤ cost′(a, b) ≤ α·cost(a, b),

the unique optimal set of centers of I′ is C∗ = {c∗_1, ..., c∗_k}.
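The definition can be illustrated on a toy instance by brute force: compute the optimal centers, apply one adversarial perturbation within factor α, and check that the optimum is unchanged. (This checks a single perturbation only, a necessary but not sufficient condition for α-perturbation-resilience; the instance and helper below are ours.)

```python
import itertools

def optimal_centers(clients, facilities, cost, k):
    # brute-force optimal k-median centers (tiny instances only)
    return min(itertools.combinations(facilities, k),
               key=lambda S: sum(min(cost(x, c) for c in S) for x in clients))

clients = [0.0, 1.0, 50.0, 51.0]
facilities = [0.5, 50.5, 25.0]
dist = lambda a, b: abs(a - b)
base = optimal_centers(clients, facilities, dist, 2)
# one adversarial perturbation within factor alpha = 2: double every
# distance except those to the currently optimal centers
pert = lambda a, b: dist(a, b) if b in base else 2 * dist(a, b)
unchanged = optimal_centers(clients, facilities, pert, 2) == base
```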
For ease of exposition, we assume that cost(a, b) = dist(a, b) (i.e., we work with the k-median problem). Given a solution S_0, we say that S_0 is 1/ε-locally optimal if any solution S_1 such that |S_0 − S_1| + |S_1 − S_0| ≤ 2/ε has cost at least cost(S_0).

Theorem 5.2. Let α > 3. For any instance of the k-median problem that is α-perturbation-resilient, any 2(α − 3)⁻¹-locally optimal solution is the optimal set of centers {c∗_1, ..., c∗_k}.

Moreover, define l_c to be the cost for client c in solution L and g_c to be its cost in the optimal solution C∗. Finally, for any sets of centers S and S_0 ⊂ S, define N_S(S_0) to be the set of clients served by a center of S_0 in solution S, i.e., N_S(S_0) = {x | ∃s ∈ S_0, dist(x, s) = min_{s′∈S} dist(x, s′)}.
The proof of Theorem 5.2 relies on the following theorem of particular interest.

Theorem 5.3 (Local-Approximation Theorem). Let L be a 1/ε-locally optimal solution and C∗ be any solution. Define S = L ∩ C∗, L̃ = L − S, and C̃∗ = C∗ − S. Then

Σ_{c∈N_{C∗}(C̃∗)−N_L(L̃)} l_c + Σ_{c∈N_L(L̃)} l_c ≤ Σ_{c∈N_{C∗}(C̃∗)−N_L(L̃)} g_c + (3 + 2ε)·Σ_{c∈N_L(L̃)} g_c.
We first show how Theorem 5.3 allows us to prove Theorem 5.2.

Proof of Theorem 5.2. Given an instance (A, F, dist, k), we define the following instance I′ = (A, F, dist′, k), where dist′(a, b) is a distance function defined over A ∪ F that we detail below. For each client c ∈ N_L(L̃) ∪ N_{C∗}(C̃∗), let ℓ_c be the center of L that serves it in L; for any point p ≠ ℓ_c, we define dist′(c, p) = α·dist(c, p), and dist′(c, ℓ_c) = dist(c, ℓ_c). For the other clients we set dist′ = dist. Observe that by local optimality, the clustering induced by L is {c∗_1, ..., c∗_k} if and only if L = C∗. Therefore, the cost of C∗ in instance I′ is equal to

α·Σ_{c∈N_L(L̃)} g_c + Σ_{c∈N_{C∗}(C̃∗)−N_L(L̃)} min(α·g_c, l_c) + Σ_{c∉N_{C∗}(C̃∗)∪N_L(L̃)} g_c.
On the other hand, the cost of L in I′ is the same as in I. By Theorem 5.3,

Σ_{c∈N_{C∗}(C̃∗)−N_L(L̃)} l_c + Σ_{c∈N_L(L̃)} l_c ≤ Σ_{c∈N_{C∗}(C̃∗)−N_L(L̃)} g_c + (3 + 2·(α − 3)/2)·Σ_{c∈N_L(L̃)} g_c,

and by definition of S we have l_c = g_c for each element c ∉ N_{C∗}(C̃∗) ∪ N_L(L̃).
Thus the cost of L in I′ is at most

Σ_{c∈N_{C∗}(C̃∗)−N_L(L̃)} g_c + (3 + 2·(α − 3)/2)·Σ_{c∈N_L(L̃)} g_c + Σ_{c∉N_{C∗}(C̃∗)∪N_L(L̃)} g_c.

Now, observe that for the clients in N_{C∗}(C̃∗) − N_L(L̃) = N_{C∗}(C̃∗) ∩ N_L(S), we have l_c ≥ g_c. Therefore, the cost of L is at most the cost of C∗ in I′, and so by the definition of α-perturbation-resilience, the clustering {c∗_1, ..., c∗_k} is the unique optimal solution in I′. Therefore L = C∗ and the theorem follows.
We now turn to the proof of Theorem 5.3. Consider the following bipartite graph Γ = (L̃ ∪ C̃∗, E), where E is defined as follows. For any center f ∈ C̃∗, we have (f, ℓ) ∈ E, where ℓ is the center of L̃ that is closest to f. Denote by N_Γ(ℓ) the neighbors of the point corresponding to center ℓ in Γ.
For each edge (f, ℓ) ∈ E and any client c ∈ N_{C∗}(f) − N_L(ℓ), we define Reassign_c as the cost of reassigning client c to ℓ. We derive the following lemma.

Lemma 5.4. For any client c, Reassign_c ≤ l_c + 2g_c.

Proof. By definition we have Reassign_c = dist(c, ℓ). By the triangle inequality, dist(c, ℓ) ≤ dist(c, f) + dist(f, ℓ). Since f serves c in C∗ we have dist(c, f) = g_c, hence dist(c, ℓ) ≤ g_c + dist(f, ℓ). We now bound dist(f, ℓ). Consider the center ℓ′ that serves c in solution L. By the triangle inequality we have dist(f, ℓ′) ≤ dist(f, c) + dist(c, ℓ′) = g_c + l_c. Finally, since ℓ is the closest center of L̃ to f, we have dist(f, ℓ) ≤ dist(f, ℓ′) ≤ g_c + l_c and the lemma follows.
We partition the centers of L̃ as follows. Let L̃_0 be the set of centers of L̃ that have degree 0 in Γ. Let L̃_{≤ε⁻¹} be the set of centers of L̃ that have degree at least one and at most 1/ε in Γ. Let L̃_{>ε⁻¹} be the set of centers of L̃ that have degree greater than 1/ε in Γ.
We now partition the centers of L̃ and C̃∗ using the neighborhoods of the vertices of L̃ in Γ. We start by iteratively constructing two sets of pairs S_{≤ε⁻¹} and S_{>ε⁻¹}. For each center ℓ ∈ L̃_{≤ε⁻¹} ∪ L̃_{>ε⁻¹}, we pick a set A_ℓ of |N_Γ(ℓ)| − 1 centers of L̃_0 and define a pair ({ℓ} ∪ A_ℓ, N_Γ(ℓ)). We then remove A_ℓ from L̃_0 and repeat. Let S_{≤ε⁻¹} be the pairs that contain a center of L̃_{≤ε⁻¹} and let S_{>ε⁻¹} be the remaining pairs.
The following lemma follows from the definition of the pairs.

Lemma 5.5. Let (R_{L̃}, R_{C̃∗}) be a pair in S_{≤ε⁻¹} ∪ S_{>ε⁻¹}. If ℓ ∈ R_{L̃}, then for any f such that (f, ℓ) ∈ E, f ∈ R_{C̃∗}.
Lemma 5.6. For any pair (R_{L̃}, R_{C̃∗}) ∈ S_{≤ε⁻¹} we have that

Σ_{c∈N_{C∗}(R_{C̃∗})} l_c ≤ Σ_{c∈N_{C∗}(R_{C̃∗})} g_c + 2·Σ_{c∈N_L(R_{L̃})} g_c.
Proof. Consider the mixed solution M = (L − R_{L̃}) ∪ R_{C̃∗}. For each point c, let m_c denote the cost of c in solution M. We have the following upper bounds:

m_c ≤ g_c          if c ∈ N_{C∗}(R_{C̃∗});
m_c ≤ Reassign_c   if c ∈ N_L(R_{L̃}) − N_{C∗}(R_{C̃∗}), by Lemma 5.5;
m_c ≤ l_c          otherwise.

Now, observe that the solution M differs from L by at most 2/ε centers. Thus, by 1/ε-local optimality we have cost(L) ≤ cost(M). Summing over all clients and simplifying, we obtain

Σ_{c∈N_{C∗}(R_{C̃∗})} l_c + Σ_{c∈N_L(R_{L̃})−N_{C∗}(R_{C̃∗})} l_c ≤ Σ_{c∈N_{C∗}(R_{C̃∗})} g_c + Σ_{c∈N_L(R_{L̃})−N_{C∗}(R_{C̃∗})} Reassign_c.

The lemma follows by combining with Lemma 5.4.
We now analyze the cost of the clients served by a center of L that has degree greater than ε⁻¹ in Γ. The argument is very similar.

Lemma 5.7. For any pair (R_{L̃}, R_{C̃∗}) ∈ S_{>ε⁻¹} we have that

Σ_{c∈N_{C∗}(R_{C̃∗})} l_c ≤ Σ_{c∈N_{C∗}(R_{C̃∗})} g_c + 2(1 + ε)·Σ_{c∈N_L(R_{L̃})} g_c.
Proof. Consider the center ℓ̂ ∈ R_{L̃} that has degree greater than ε⁻¹ in Γ. Let L̂ = R_{L̃} − {ℓ̂}. For each ℓ ∈ L̂, we associate a center f(ℓ) in R_{C̃∗} in such a way that f(ℓ) ≠ f(ℓ′) for ℓ ≠ ℓ′. Note that this is possible since |L̂| = |R_{C̃∗}| − 1. Let f̃ be the center of R_{C̃∗} that is not associated with any center of L̂.
Now, for each center ℓ of L̂ we consider the mixed solution M^ℓ = (L − {ℓ}) ∪ {f(ℓ)}. For each client c, we bound its cost m^ℓ_c in solution M^ℓ. We have

m^ℓ_c ≤ g_c          if c ∈ N_{C∗}(f(ℓ));
m^ℓ_c ≤ Reassign_c   if c ∈ N_L(ℓ) − N_{C∗}(f(ℓ)), by Lemma 5.5;
m^ℓ_c ≤ l_c          otherwise.
Summing over all centers ℓ ∈ L̂, we have by ε⁻¹-local optimality

Σ_{c∈N_{C∗}(R_{C̃∗})−N_{C∗}(f̃)} l_c + Σ_{ℓ∈R_{L̃}} Σ_{c∈N_L(ℓ)} l_c ≤ Σ_{c∈N_{C∗}(R_{C̃∗})−N_{C∗}(f̃)} g_c + Σ_{ℓ∈R_{L̃}} Σ_{c∈N_L(ℓ)} Reassign_c.   (2)
We now complete the proof of the lemma by analyzing the cost of the clients in N_{C∗}(f̃). We consider the center ℓ∗ ∈ L̂ that minimizes the reassignment cost of its clients, namely, the center ℓ∗ such that Σ_{c∈N_L(ℓ∗)} Reassign_c is minimized. We then consider the solution M^{(ℓ∗,f̃)} = (L − {ℓ∗}) ∪ {f̃}. For each client c, we bound its cost m^{(ℓ∗,f̃)}_c in solution M^{(ℓ∗,f̃)}. We have

m^{(ℓ∗,f̃)}_c ≤ g_c          if c ∈ N_{C∗}(f̃);
m^{(ℓ∗,f̃)}_c ≤ Reassign_c   if c ∈ N_L(ℓ∗) − N_{C∗}(f̃), by Lemma 5.5;
m^{(ℓ∗,f̃)}_c ≤ l_c          otherwise.
Thus, summing over all clients c, we have by local optimality

Σ_{c∈N_{C∗}(f̃)} l_c ≤ Σ_{c∈N_{C∗}(f̃)} g_c + Σ_{c∈N_L(ℓ∗)−N_{C∗}(f(ℓ∗))} l_c + Σ_{c∈N_L(ℓ∗)−N_{C∗}(f(ℓ∗))} Reassign_c.   (3)

By Lemma 5.4, combining Equations (2) and (3) and averaging over all centers of L̂, we have

Σ_{c∈N_{C∗}(R_{C̃∗})} l_c ≤ Σ_{c∈N_{C∗}(R_{C̃∗})} g_c + 2(1 + ε)·Σ_{c∈N_L(R_{L̃})} g_c.
We now turn to the proof of Theorem 5.3.

Proof of Theorem 5.3. Observe first that for any c ∈ N_L(L̃) − N_{C∗}(C̃∗), we have l_c ≤ g_c. This follows from the fact that the center that serves c in C∗ is in S and so in L, and thus we have l_c ≤ g_c. Therefore

Σ_{c∈N_L(L̃)−N_{C∗}(C̃∗)} l_c ≤ Σ_{c∈N_L(L̃)−N_{C∗}(C̃∗)} g_c.   (4)

We now sum the equations of Lemmas 5.6 and 5.7 over all pairs and obtain

Σ_{(R_{L̃},R_{C̃∗})} Σ_{c∈N_{C∗}(R_{C̃∗})∪N_L(R_{L̃})} l_c ≤ Σ_{(R_{L̃},R_{C̃∗})} ( Σ_{c∈N_{C∗}(R_{C̃∗})∪N_L(R_{L̃})} g_c + (2 + 2ε)·Σ_{c∈N_L(R_{L̃})} g_c ),

Σ_{c∈N_{C∗}(C̃∗)∪N_L(L̃)} l_c ≤ Σ_{c∈N_{C∗}(C̃∗)∪N_L(L̃)} g_c + (2 + 2ε)·Σ_{c∈N_L(L̃)} g_c.

Therefore,

Σ_{c∈N_{C∗}(C̃∗)−N_L(L̃)} l_c + Σ_{c∈N_L(L̃)} l_c ≤ Σ_{c∈N_{C∗}(C̃∗)−N_L(L̃)} g_c + (3 + 2ε)·Σ_{c∈N_L(L̃)} g_c.
Additionally, we show that the analysis is tight (up to a (1 + ε) factor):

Proposition 5.8. For any ε > 0, there exists an infinite family of (3 − ε)-perturbation-resilient instances for which there exists a locally optimal solution that has cost at least 3·OPT.

Proof. Consider a tripartite graph with nodes O, C, and L, where O is the set of optimal centers, L is the set of centers of a locally optimal solution, and C is the set of clients. We have |O| = |L| = k and |C| = k². We specify the distances as follows. First, assume some arbitrary but fixed ordering on the elements of O, L, and C. Then dist(O_i, C_{i,j}) = 1 + ε/3 and dist(L_i, C_{j,i}) = 3 for any i, j ∈ [k]. All other distances are induced by the shortest-path metric along the edges of the graph, i.e., dist(O_i, C_{j,ℓ}) = 7 + ε/3 and dist(L_i, C_{j,ℓ}) = 5 + 2ε/3 for j, ℓ ≠ i. We first note that O is indeed the optimal solution, with a cost of k²·(1 + ε/3). Multiplying the distances dist(O_i, C_{i,j}) by a factor of (3 − ε) for all i ∈ [k] and j mod k = i still ensures that O is an optimal solution, now with a cost of k²·(1 + ε/3)·(3 − ε) = k²·(3 − ε²/3), which shows that the instance is (3 − ε)-perturbation-resilient.

What remains to be shown is that L is locally optimal. Assume that we swap out s centers. Due to symmetry, we can consider the solution {O_i | i ∈ [s]} ∪ {L_i | i ∈ [k] − [s]}. Each of the centers {O_i | i ∈ [s]} serves k clients, for a total cost of k·s·(1 + ε/3). The remaining clients are served by {L_i | i ∈ [k] − [s]}, as 5 + 2ε/3 < 7 + ε/3. The cost amounts to s·(k − s)·(5 + 2ε/3) for the clients that get reassigned and (k − s)²·3 for the remaining clients. Combining these three figures gives us a cost of 3k² + ksε − s²·(2 + 2ε/3) ≥ 3k² + ksε − 3s². For k > 3s/ε, this is greater than 3k², the cost of L.
6 Spectral Separability

In this section we will study the spectral separability condition for the Euclidean k-means problem.

Definition 6.1 (Spectral Separation [74]⁷). Let (A, R^d, ||·||₂, k) be an input for k-means clustering in Euclidean space and let {C∗_1, ..., C∗_k} denote an optimal clustering of A with centers S = {c∗_1, ..., c∗_k}. Denote by C the n × d matrix whose row C_i = argmin_{c∗_j∈S} ||A_i − c∗_j||₂, and denote by ||·||₂ the spectral norm of a matrix. Then {C∗_1, ..., C∗_k} is γ-spectrally separated if for any pair (i, j) the following condition holds:

||c∗_i − c∗_j|| ≥ γ·( 1/√|C∗_i| + 1/√|C∗_j| )·||A − C||₂.
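The separation parameter of Definition 6.1 can be computed directly with numpy; the sketch below returns the largest γ for which the condition holds (the label/center representation and function name are our choices):

```python
import numpy as np

def spectral_separation(A, labels, centers):
    # gamma = min over pairs (i, j) of
    #   ||c_i - c_j|| / ((1/sqrt(|C_i|) + 1/sqrt(|C_j|)) * ||A - C||_2)
    C = centers[labels]                  # row i holds the center of point i
    spec = np.linalg.norm(A - C, 2)      # spectral norm of A - C
    k = len(centers)
    sizes = np.bincount(labels, minlength=k)
    gammas = []
    for i in range(k):
        for j in range(i + 1, k):
            scale = (1 / np.sqrt(sizes[i]) + 1 / np.sqrt(sizes[j])) * spec
            gammas.append(np.linalg.norm(centers[i] - centers[j]) / scale)
    return min(gammas)
```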
Nowadays, a standard preprocessing step in Euclidean k-means clustering is to project onto the subspace spanned by the best rank k approximation. Indeed, this is the first step of the algorithm by Kumar and Kannan [74] (see Algorithm 2).

Algorithm 2 k-means with spectral initialization [74]
1: Project points onto the best rank k subspace
2: Compute a clustering C with constant approximation factor on the projection
3: Initialize centroids of each cluster of C as centers in the original space
4: Run Lloyd's k-means until convergence
In general, projecting onto the best rank k subspace and computing a constant approximation on the projection results in a constant approximation in the original space. Kumar and Kannan [74] and later Awasthi and Sheffet [15] gave tighter bounds if the spectral separation is large enough. Our algorithm omits steps 3 and 4. Instead, we project onto slightly more dimensions and subsequently use Local Search as the constant factor approximation in step 2. To utilize Local Search, we further require a candidate set of solutions, which is described in Section B. For pseudocode, we refer to Algorithm 3. Our main result is to show that, given spectral separability, this algorithm is a PTAS for k-means (Theorem 6.2).

Theorem 6.2. Let (A, R^d, ||·||₂, k) be an instance of Euclidean k-means clustering with optimal clustering C = {C∗_1, ..., C∗_k} and centers S = {c∗_1, ..., c∗_k}. If C is more than 3√k-spectrally separated, then Algorithm 3 is a polynomial-time approximation scheme.

⁷ The proximity condition of Kumar and Kannan [74] implies the spectral separation condition.
Algorithm 3 SpectralLS
1: Project points A onto the best rank k/ε subspace
2: Embed points into a random subspace of dimension O(ε⁻² log n)
3: Compute candidate centers (Corollary B.3)
4: Local Search(Θ(ε⁻⁴))
5: Output clustering
We first recall the basic notions and definitions for Euclidean k-means. Let A ∈ Rn×d be a set of
points in d-dimensional Euclidean space, where the row Ai contains the coordinates of the ith point.
The singular value decomposition is defined as A = U ΣV T , where U ∈ Rn×d and V ∈ Rd×d are
orthogonal and Σ ∈ R^{d×d} is a diagonal matrix containing the singular values, where per convention the singular values are given in descending order, i.e. Σ1,1 = σ1 ≥ Σ2,2 = σ2 ≥ . . . ≥ Σd,d = σd. Denote the Euclidean norm of a d-dimensional vector x by ||x|| = √(Σ_{i=1}^d x_i^2). The spectral norm and Frobenius norm are defined as ||A||2 = σ1 and ||A||F = √(Σ_{i=1}^d σ_i^2), respectively.
The best rank k approximation argmin_{rank(X)=k} ||A − X||F is given via Ak = Uk Σ V^T = U Σk V^T = U Σ Vk^T, where Uk, Σk and Vk consist of the first k columns of U, Σ and V, respectively, and are zero otherwise. The best rank k approximation also minimizes the spectral norm, that is, ||A − Ak||2 = σ_{k+1} is minimal among all matrices of rank k. The following fact is well known throughout the k-means literature and will be used frequently throughout this section.
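These SVD facts are easy to check numerically; the following snippet (on an arbitrary random matrix) verifies that truncating the SVD attains spectral error σ_{k+1} and Frobenius error √(σ_{k+1}^2 + · · · + σ_d^2):

```python
import numpy as np

# Best rank-k approximation A_k = U_k Sigma_k V_k^T via truncated SVD.
rng = np.random.default_rng(0)
A = rng.standard_normal((8, 5))
k = 2
U, s, Vt = np.linalg.svd(A, full_matrices=False)  # s holds sigma_1 >= ... >= sigma_d
Ak = U[:, :k] @ np.diag(s[:k]) @ Vt[:k]
# Spectral error of the truncation is exactly the (k+1)-st singular value.
assert np.isclose(np.linalg.svd(A - Ak, compute_uv=False)[0], s[k])
# Frobenius error is the root of the sum of squares of the dropped values.
assert np.isclose(np.linalg.norm(A - Ak), np.sqrt((s[k:] ** 2).sum()))
```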
Fact 6.3. Let A be a set of points in Euclidean space and denote by c(A) = (1/|A|) Σ_{x∈A} x the centroid of A. Then the 1-means cost of any candidate center c can be decomposed via
Σ_{x∈A} ||x − c||^2 = Σ_{x∈A} ||x − c(A)||^2 + |A| · ||c(A) − c||^2
and
Σ_{x∈A} ||x − c(A)||^2 = (1/(2|A|)) · Σ_{x∈A} Σ_{y∈A} ||x − y||^2.
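Both identities of Fact 6.3 can be verified numerically; the snippet below checks them on random data with an arbitrary candidate center:

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((30, 4))
c = rng.standard_normal(4)            # arbitrary candidate center
centroid = A.mean(axis=0)
# First identity: 1-means cost of c = cost of the centroid + |A|*||c(A)-c||^2.
lhs = ((A - c) ** 2).sum()
rhs = ((A - centroid) ** 2).sum() + len(A) * ((centroid - c) ** 2).sum()
assert np.isclose(lhs, rhs)
# Second identity: cost of the centroid = sum of pairwise squared distances / (2|A|).
pairwise = ((A[:, None, :] - A[None, :, :]) ** 2).sum()
assert np.isclose(((A - centroid) ** 2).sum(), pairwise / (2 * len(A)))
```

The first identity is the reason the centroid is the optimal 1-means center: the extra term |A|·||c(A) − c||^2 is nonnegative and vanishes only at c = c(A).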
Note that the centroid is the optimal 1-means center of A. For a clustering C = {C1, . . . , Ck} of A with centers S = {c1, . . . , ck}, the cost is Σ_{i=1}^k Σ_{p∈Ci} ||p − ci||^2. Further, if ci = (1/|Ci|) Σ_{p∈Ci} p, we can rewrite the objective function in matrix form by associating the ith point with the ith row of some matrix A and using the cluster matrix X ∈ R^{n×k} with X_{i,j} = 1/√|Cj∗| if Ai ∈ Cj∗ and X_{i,j} = 0 otherwise to denote membership. Note that X^T X = I, i.e. XX^T is an orthogonal projection, and that ||A − XX^T A||^2_F is the k-means cost of the clustering encoded by X. k-means is therefore a constrained rank k-approximation problem.
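The cluster matrix construction can be made concrete as follows; the example labels are arbitrary, and the snippet checks both X^T X = I and that ||A − XX^T A||^2_F equals the k-means cost of the encoded clustering:

```python
import numpy as np

rng = np.random.default_rng(2)
A = rng.standard_normal((12, 3))
labels = np.array([0, 0, 0, 0, 1, 1, 1, 2, 2, 2, 2, 2])
k = 3
sizes = np.bincount(labels, minlength=k)
# Cluster matrix: X[i, j] = 1/sqrt(|C_j|) if point i belongs to cluster j.
X = np.zeros((len(A), k))
X[np.arange(len(A)), labels] = 1.0 / np.sqrt(sizes[labels])
assert np.allclose(X.T @ X, np.eye(k))          # columns are orthonormal
# XX^T A replaces every row of A by its cluster centroid, so the Frobenius
# error below is exactly the k-means cost of this clustering.
centroids = np.array([A[labels == j].mean(0) for j in range(k)])
cost = sum(((A[labels == j] - centroids[j]) ** 2).sum() for j in range(k))
assert np.isclose(np.linalg.norm(A - X @ X.T @ A) ** 2, cost)
```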
We first restate the separation condition.
Definition 6.4 (Spectral Separation). Let A be a set of points and let {C1, . . . , Ck} be a clustering of A with centers {c1, . . . , ck}. Denote by C the n×d matrix whose ith row is Ci = argmin_{cj : j∈{1,...,k}} ||Ai − cj||. Then {C1, . . . , Ck} is γ-spectrally separated, if for any pair of centers ci and cj the following condition holds:
||ci − cj|| ≥ γ · (1/√|Ci| + 1/√|Cj|) · ||A − C||2.
The following crucial lemma relates spectral separation and distribution stability.
Lemma 6.5. For a point set A, let C = {C1, . . . , Ck} be an optimal clustering with centers S = {c1, . . . , ck} and associated clustering matrix X that is at least γ·√k spectrally separated, where γ > 3. For ε > 0, let Am be the best rank m = k/ε approximation of A. Then there exists a clustering K = {C1′, . . . , Ck′} and a set of centers Sk, such that
1. the cost of clustering Am with centers Sk via the assignment of K is less than ||Am − XX^T Am||^2_F, and
2. (K, Sk) is Ω((γ − 3)^2 · ε)-distribution stable.
We note that this lemma would also allow us to use the PTAS of Awasthi et al. [12]. Before
giving the proof, we outline how Lemma 6.5 helps us prove Theorem 6.2. We first notice that if the
rank of A is of order k, then elementary bounds on matrix norm show that spectral separability
implies distribution stability. We aim to combine this observation with the following theorem due
to Cohen et al. [36]. Informally, it states that for every rank k approximation, (and in particular for
every constrained rank k approximation such as k-means clustering), projecting to the best rank
k/ε subspace is cost-preserving.
Theorem 6.6 (Theorem 7 of [36]). For any A ∈ R^{n×d}, let A′ be the rank ⌈k/ε⌉ approximation of A. Then there exists some positive number c such that for any rank k orthogonal projection P,
||A − P A||^2_F ≤ ||A′ − P A′||^2_F + c ≤ (1 + ε) · ||A − P A||^2_F.
The combination of the low rank case and this theorem is not trivial as points may be closer to a
wrong center after projecting, see also Figure 2. Lemma 6.5 determines the existence of a clustering
whose cost for the projected points Am is at most the cost of C ∗ . Moreover, this clustering has
constant distribution stability as well which, combined with the results from Section B, allows us to
use Local Search. Given that we can find a clustering with cost at most (1 + ε) · ||Am − XX T Am ||2F ,
Theorem 6.6 implies that we will have a (1 + ε)2 -approximation overall.
To prove the lemma, we will require the following steps:
• A lower bound on the distance of the projected centers ||ci Vm Vm^T − cj Vm Vm^T|| ≈ ||ci − cj||.
• Find a clustering K with centers S_m^∗ = {c∗1 Vm Vm^T, . . . , c∗k Vm Vm^T} of Am with cost less than ||Am − XX^T Am||^2_F.
• Show that in a well-defined sense, K and C∗ agree on a large fraction of points.
• For any point x ∈ Ki, show that the distance of x to any center not associated with Ki is large.
We first require a technical statement.
Lemma 6.7. For a point set A, let C = {C1, . . . , Ck} be a clustering with associated clustering matrix X and let A′ and A″ be optimal low rank approximations where without loss of generality k ≤ rank(A′) < rank(A″). Then for each cluster Ci
|| (1/|Ci|) Σ_{j∈Ci} (A″_j − A′_j) || ≤ √(k/|Ci|) · ||A − XX^T A||2.
Figure 2: Despite the centroids of each cluster being close after computing the best rank m approximation, the projection of a point p to the line connecting the centroids of clusters Ci and Cj can change after computing the best rank m approximation. In this case ||p − cj|| < ||p − ci|| and ||p − c_i^m|| < ||p − c_j^m||. (Here ∆i = √(k/|Ci|) · ||A − XX^T A||2.)
Proof. By Fact 6.3, |Ci| · ||(1/|Ci|) Σ_{j∈Ci} (A″_j − A′_j)||^2 is, for a set of point indices Ci, the cost of moving the centroid of the cluster computed on A″ to the centroid of the cluster computed on A′. For a clustering matrix X, ||XX^T A″ − XX^T A′||^2_F is the sum of squared distances of moving the centroids computed on the point set A″ to the centroids computed on A′. We then have
|Ci| · ||(1/|Ci|) Σ_{j∈Ci} (A″_j − A′_j)||^2 ≤ ||XX^T A″ − XX^T A′||^2_F ≤ ||X||^2_F · ||A″ − A′||^2_2 ≤ k · σ^2_{k+1} ≤ k · ||A − XX^T A||^2_2.
Proof of Lemma 6.5. For any point p associated with some row of A, let p^m = p Vm Vm^T be the corresponding row in Am. Similarly, for some cluster Ci, denote the center in A by ci and the center in Am by c_i^m. Extend these notions analogously for the projections p^k and c_i^k to the span of the best rank k approximation Ak.
We have for any m ≥ k and i ≠ j
||c_i^m − c_j^m|| ≥ ||ci − cj|| − ||ci − c_i^m|| − ||cj − c_j^m||
≥ γ · (1/√|Ci| + 1/√|Cj|) · √k · ||A − XX^T A||2 − (1/√|Ci|) · √k · ||A − XX^T A||2 − (1/√|Cj|) · √k · ||A − XX^T A||2
= (γ − 1) · (1/√|Ci| + 1/√|Cj|) · √k · ||A − XX^T A||2,   (5)
where the second inequality follows from Lemma 6.7.
In the following, let ∆i = (√k/√|Ci|) · ||A − XX^T A||2. We will now construct our target clustering K. Note that we require this clustering (and its properties) only for the analysis. We distinguish between the following three cases.
Case 1: p ∈ Ci and c_i^m = argmin_{j∈{1,...,k}} ||p^m − c_j^m||:
These points remain assigned to c_i^m. The distance between p^m and a different center c_j^m is at least (1/2) · ||c_i^m − c_j^m|| ≥ ((γ − 1)/2) · (∆i + ∆j) due to Equation 5.
Case 2: p ∈ Ci, c_i^m ≠ argmin_{j∈{1,...,k}} ||p^m − c_j^m||, and c_i^k ≠ argmin_{j∈{1,...,k}} ||p^k − c_j^k||:
These points will get reassigned to their closest center. The distance between p^m and a different center c_j^m is at least (1/2) · ||c_i^m − c_j^m|| ≥ ((γ − 1)/2) · (∆i + ∆j) due to Equation 5.
Case 3: p ∈ Ci, c_i^m ≠ argmin_{j∈{1,...,k}} ||p^m − c_j^m||, and c_i^k = argmin_{j∈{1,...,k}} ||p^k − c_j^k||:
We assign p^m to c_i^m at the cost of a slightly weaker bound on the distance between p^m and c_j^m. Due to orthogonality of V, we have for m > k, (Vm − Vk)^T Vk = Vk^T (Vm − Vk) = 0. Hence Vm Vm^T Vk = Vm Vk^T Vk + Vm (Vm − Vk)^T Vk = Vk Vk^T Vk + (Vm − Vk) Vk^T Vk = Vk Vk^T Vk = Vk. Then p^k = p Vk Vk^T = p Vm Vm^T Vk Vk^T = p^m Vk Vk^T.
Further, ||p^k − c_j^k|| ≥ (1/2) · ||c_j^k − c_i^k|| ≥ ((γ − 1)/2) · (∆i + ∆j) due to Equation 5. Then the distance between p^m and a different center c_j^m is
||p^m − c_j^m|| ≥ ||p^m − c_j^k|| − ||c_j^m − c_j^k|| = √(||p^m − p^k||^2 + ||p^k − c_j^k||^2) − ||c_j^m − c_j^k||
≥ ||p^k − c_j^k|| − ∆j ≥ ((γ − 3)/2) · (∆i + ∆j),
where the equality follows from orthogonality and the second to last inequality follows from Lemma 6.7.
m
Now, given the centers {cm
1 , . . . ck }, we obtain a center matrix MK where the ith row of MK
is the center according to the assignment of above. Since both clusterings use the same centers
but K improves locally on the assignments, we have ||Am − MK ||2F ≤ ||Am − XX T Am ||2F , which
proves the first statement of the lemma. Additionally, due to the fact that Am − XX T Am has rank
m = k/ε, we have
||Am − MK ||2F ≤ ||Am − XX T Am ||2F ≤ m · ||Am − XX T Am ||22 ≤ k/ε · ||A − XX T A||2F
(6)
To ensure stability, we will show that for each element of K there exists an element of C, such
that both clusters agree on a large fraction of points. This can be proven by using techniques from
Awasthi and Sheffet [15] (Theorem 3.1) and Kumar and Kannan [74] (Theorem 5.4), which we
repeat for completeness.
Lemma 6.8. Let K = {C1′, . . . , Ck′} and C = {C1, . . . , Ck} be defined as above. Then there exists a bijection b : C → K such that for any i ∈ {1, . . . , k}
(1 − 32/(γ − 1)^2) · |Ci| ≤ |b(Ci)| ≤ (1 + 32/(γ − 1)^2) · |Ci|.
Proof. Denote by T_{i→j} the set of points from Ci such that ||c_i^k − p^k|| > ||c_j^k − p^k||. We first note that ||Ak − XX^T A||^2_F ≤ 2k · ||Ak − XX^T A||^2_2 ≤ 2k · (||A − Ak||2 + ||A − XX^T A||2)^2 ≤ 8k · ||A − XX^T A||^2_2 ≤ 8 · |Ci| · ∆_i^2 for any i ∈ {1, . . . , k}. The distance ||p^k − c_i^k|| ≥ (1/2) · ||c_i^k − c_j^k|| ≥ ((γ − 1)/2) · (1/√|Ci| + 1/√|Cj|) · √k · ||A − XX^T A||2. Assigning these points to c_i^k, we can bound the total number of points added to and subtracted from cluster Cj by observing
Σ_{i≠j} ∆_j^2 · |T_{i→j}| ≤ Σ_{i≠j} |T_{i→j}| · ((γ − 1)/2)^2 · (∆i + ∆j)^2 ≤ ||Ak − XX^T A||^2_F ≤ 8 · |Cj| · ∆_j^2
and
Σ_{j≠i} ∆_j^2 · |T_{j→i}| ≤ Σ_{j≠i} |T_{j→i}| · ((γ − 1)/2)^2 · (∆i + ∆j)^2 ≤ ||Ak − XX^T A||^2_F ≤ 8 · |Cj| · ∆_j^2.
Therefore, the cluster sizes are identical up to a multiplicative factor of 1 ± 32/(γ − 1)^2.
We now have for each point p^m ∈ Ci′ a minimum cost of
||p^m − c_j^m||^2 ≥ ((γ − 3)/2)^2 · (1/√|Ci| + 1/√|Cj|)^2 · k · ||A − XX^T A||^2_2
≥ ((γ − 3)/2)^2 · (1/√((1 + 32/(γ − 1)^2) · |Ci′|) + 1/√((1 + 32/(γ − 1)^2) · |Cj′|))^2 · k · ||A − XX^T A||^2_2
≥ (4 · (γ − 3)^2/81) · ε · ||Am − MK||^2_F / |Cj′|,
where the first inequality holds due to Case 3, the second inequality holds due to Lemma 6.8 and the last inequality follows from γ > 3 and Equation 6. This ensures that the distribution stability condition is satisfied.
Proof of Theorem 6.2. Given the optimal clustering C∗ of A with clustering matrix X, Lemma 6.5 guarantees the existence of a clustering K with center matrix MK such that ||Am − MK||^2_F ≤ ||Am − XX^T Am||^2_F and that K has constant distribution stability. If ||Am − MK||^2_F is not a constant factor approximation, we are already done, as Local Search is guaranteed to find a constant factor approximation. Otherwise, due to Corollary B.3 (Section B in the appendix), there exists a discretization (Am, F, || · ||2, k) of (Am, R^d, || · ||2, k) such that the clustering K of the first instance has at most (1 + ε) times the cost of K in the second instance and such that K has constant distribution stability. By Theorem 4.2, Local Search with appropriate (but constant) neighborhood size will find a clustering C′ with cost at most (1 + ε) times the cost of K in (Am, F, || · ||2, k). Let Y be the clustering matrix of C′. We then have ||Am − Y Y^T Am||^2_F + ||A − Am||^2_F ≤ (1 + ε)^2 · ||Am − MK||^2_F + ||A − Am||^2_F ≤ (1 + ε)^2 · ||Am − XX^T Am||^2_F + ||A − Am||^2_F ≤ (1 + ε)^3 · ||A − XX^T A||^2_F due to Theorem 6.6. Rescaling ε completes the proof.
Remark. Any (1 + ε)-approximation will not in general agree with a target clustering. To see this
consider two clusters: (1) with mean on the origin and (2) with mean δ on the first axis and
0 on all other coordinates. We generate points via a multivariate Gaussian distribution with an
identity covariance matrix centered on the mean of each cluster. If we generate enough points, the
instance will have constant spectral separability. However, if δ is small and the dimension large
enough, an optimal 1-clustering will approximate the k-means objective.
7 A Brief Survey on Stability Conditions
There are two general aims that shape the definitions of stability conditions. First, we want the
objective function to be appropriate. For instance, if the data is generated by a mixture of Gaussians,
the k-means objective will be more appropriate than the k-median objective. Secondly, we assume
that there exists some ground truth, i.e. a correct assignment of points into clusters. Our objective
is to recover this ground truth as well as possible. These aims are not mutually exclusive. For
instance, an ideal objective function will allow us to recover the ground truth. We refer to Figure 3
for a visual overview of stability conditions and their relationships.
7.1 Cost-Based Separation
Given that an algorithm optimizes some objective function, it is natural to define a stability condition as a property that the optimum clustering is required to have.
ORSS-Stability [85] Assume that we want to cluster a data set with respect to the k-means objective, but have not decided on the number of clusters. A simple way of determining the "correct" value of k is to run a k-means algorithm for k ∈ {1, 2, . . . , m} until the objective value decreases only marginally (using m centers). At this point, we set k = m − 1. The reasoning behind this method, commonly known as the elbow method, is that we do not gain much information by using m instead of m − 1 clusters, so we should favor the simpler model. Contrariwise, this implies that we did gain information going from m − 2 to m − 1 and, in particular, that the (m − 2)-means cost was considerably larger than the (m − 1)-means cost.
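A minimal sketch of the elbow method just described, using a deterministic farthest-point seeding for Lloyd's; this seeding is an assumption made here only to keep the example reproducible and is not the method of [85]:

```python
import numpy as np

def kmeans_cost(A, k, iters=30):
    """k-means cost after Lloyd's with deterministic farthest-point seeding."""
    centers = [A[0]]
    for _ in range(k - 1):                 # farthest-point initialization
        dists = np.min([((A - c) ** 2).sum(1) for c in centers], axis=0)
        centers.append(A[int(dists.argmax())])
    centers = np.array(centers)
    for _ in range(iters):                 # Lloyd's iterations
        labels = ((A[:, None] - centers[None]) ** 2).sum(-1).argmin(1)
        centers = np.array([A[labels == j].mean(0) if np.any(labels == j)
                            else centers[j] for j in range(k)])
    labels = ((A[:, None] - centers[None]) ** 2).sum(-1).argmin(1)
    return ((A - centers[labels]) ** 2).sum()

# Three well-separated blobs: the cost collapses up to k = 3 and then
# flattens, so the elbow rule (last k whose cost is far below the previous
# one) recovers the number of clusters.
rng = np.random.default_rng(0)
A = np.vstack([c + 0.1 * rng.standard_normal((40, 2))
               for c in ([0, 0], [50, 0], [0, 50])])
costs = [kmeans_cost(A, k) for k in range(1, 6)]
elbow = max(k for k in range(2, 6) if costs[k - 1] < 0.5 * costs[k - 2])
```

The 0.5 threshold is arbitrary; ORSS-stability formalizes exactly this kind of gap, demanding that the optimal k-means cost be an ε²-fraction of the (k − 1)-means cost.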
Ostrovsky et al. [85] considered whether such discrepancies in the cost also allow us to solve the
k-means problem more efficiently, see also Schulman [88] for an earlier condition for two clusters
and the irreducibility condition by Kumar et al. [75]. Specifically, they assumed that the optimal
k-means clustering has only an ε²-fraction of the cost of the optimal (k − 1)-means clustering. For such cost-separated instances, the popular D²-sampling technique has an improved performance
compared to the worst-case O(log k)-approximation ratio [9, 31, 66, 85]. Awasthi et al. [12] showed
that if an instance is cost-stable, it also admits a PTAS. In fact, they also showed that the weaker
condition β-stability is sufficient. β-stability states that the cost of assigning a point of cluster Ci to
another cluster Cj costs at least β times the total cost divided by the size of cluster Ci . Despite its
focus on the properties of the optimum, β-stability has many connections to target-clustering (see
below). Nowadays, the cost-stable property is one of the strongest stability conditions, implying
both distribution stability and spectral separability (see below). It is nevertheless the arguably
most intuitive stability condition.
Perturbation Resilience The other main optimum-based stability condition is perturbation
resilience. It was originally considered for the weighted max-cut problem by Bilu et al. [29, 28].
There, the optimum max cut is said to be α-perturbation resilient, if it remains the optimum even
if we multiply any edge weight up to a factor of α > 1. This notion naturally extends to metric
clustering problems, where, given an n×n distance matrix, the optimum clustering is α-perturbation
resilient if it remains optimal if we multiply entries by a factor α. Perturbation resilience has some
similarity to smoothed analysis (see Arthur et al. [8, 10] for work on k-means). Both smoothed
analysis and perturbation stability aim to study a smaller, more interesting part of the instance
space as opposed to worst case analysis that covers the entire space. Perturbation resilience assumes
that the optimum clustering stands out among any alternative clustering and measures the degree
by which it stands out via α. Smoothed analysis is motivated by considering a problem after applying a random perturbation, which for example accounts for measurement errors.
Perturbation resilience is unique among the considered stability conditions in that we aim to
recover the optimum solution, as opposed to finding a good (1+ε) approximation. Awasthi et al. [13]
showed that 3-perturbation resilience is sufficient to find the optimum k-median clustering, which was further improved by Balcan and Liang to 1 + √2 [21]8 and finally to 2 by Angelidakis et al. [80].
Ben-David and Reyzin [27] showed that recovering the optimal clustering is NP-hard if the instance
is less than 2-perturbation resilient. Balcan et al. [20] gave an algorithm that optimally solves
symmetric and asymmetric k-center on 2-perturbation resilient instances. Recently, Angelidakis
et al. gave an algorithm that determines the optimum clustering for almost all commonly used center-based clustering objectives if the instance is 2-perturbation resilient [80].
8 These results also hold for a slightly more general condition called the center proximity condition.
Figure 3: An overview of all definitions of well-clusterability and their relationships: Cost Separation (Ostrovsky, Rabani, Schulman, Swamy [85]; Jaiswal, Garg [66]), Perturbation Resilience (Bilu, Daniely, Linial, Saks [28, 29]; Awasthi, Blum, Sheffet [13]; Balcan, Liang [21]), Distribution Stability (Awasthi, Blum, Sheffet [12]), Center Proximity (Awasthi, Blum, Sheffet [13]; Balcan, Liang [21]), Approximation Stability (Balcan, Blum, Gupta [17, 18]), and Spectral Separation (Kumar, Kannan [74]; Awasthi, Sheffet [15]). Arrows correspond to implication. For example, if an instance is cost-separated then it is distribution-stable; therefore the algorithm by Awasthi, Blum and Sheffet [12] also works for cost-separated instances. The three highlighted stability definitions in the middle of the figure are considered in this paper.
7.2 Target-Based Stability
The notion of finding a target clustering is more prevalent in machine learning than minimizing an
objective function. Though optimizing an objective value plays an important part in this line of
research, our ultimate goal is to find a clustering C that is close to the target clustering C ∗ . The
distance between two clusterings is the fraction of points where C and C ∗ disagree when considering
an optimal matching of clusters in C to clusters in C ∗ .
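The clustering distance just defined can be computed exactly for small k by brute-forcing the matching; for larger k, the Hungarian algorithm would replace the permutation loop:

```python
import numpy as np
from itertools import permutations

def clustering_distance(labels_a, labels_b, k):
    """Fraction of points on which two clusterings disagree, minimized over
    all k! matchings of the cluster ids (brute force; fine for small k)."""
    best = len(labels_a)
    for perm in permutations(range(k)):
        relabeled = np.array([perm[l] for l in labels_b])
        best = min(best, int((np.asarray(labels_a) != relabeled).sum()))
    return best / len(labels_a)
```

For example, [0, 0, 1, 1] and [1, 1, 0, 0] describe the same clustering (distance 0), while flipping a single point out of four gives distance 1/4.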
When the points are generated from some (unknown) mixture model, we are also given an
implicit target clustering. As a result, much work has focused on finding such clusterings using
probabilistic assumptions, see, for instance, [2, 6, 25, 32, 40, 41, 42, 44, 68, 82, 94]. We would like
to highlight two conditions that make no probabilistic assumptions and have a particular emphasis
on the k-means and k-median objective functions.
Approximation Stability The first assumption is that finding the target clustering is related
to optimizing the k-means objective function. In the simplest case, the target clustering coincides
with the optimum k-means clustering, but this is a strong assumption that Balcan et al. [17, 18] avoid.
Instead they consider instances where any clustering with cost within a factor c of the optimum has
a distance at most ε to the target clustering, a condition they call (c, ε)-approximation stability.
Balcan et al. [17, 18] then showed that this condition is sufficient to both bypass worst-case lower
bounds for the approximation factor, and to find a clustering with distance O(ε) from the target
clustering. The condition was extended to account for the presence of noisy data by Balcan et
al. [22]. This approach was improved for other min-sum clustering objectives such as correlation
clustering by Balcan and Braverman [19]. For constant c, (c, ε) approximation stability also implies
the β-stability condition of Awasthi et al. [12] with constant β, if the target clusters are greater
22
than εn.
Spectral Separability Another condition relating target clustering recovery to the k-means objective was introduced by Kumar and Kannan [74]. In order to give an intuitive explanation,
consider a mixture model consisting of k centers. If the mixture is in a low-dimensional space, and
assuming that we have, for instance, approximation stability with respect to the k-means objective,
we could simply use the algorithm by Balcan et al. [18]. If the mixture has many additional
dimensions, the previous conditions have scaling issues, as the k-means cost may increase with each
dimension, even if many of the additional dimensions mostly contain noise. The notion behind the
spectral separability condition is that if the means of the mixture are well-separated in the subspace
containing their centers, it should be possible to determine the mixture even with the added noise.
Slightly more formally, Kumar and Kannan state that a point satisfies a proximity condition
if the projection of a point onto the line connecting its cluster center to another cluster center
is Ω(k) standard deviations closer to its own center than to the other. The standard deviations
are scaled with respect to the spectral norm of the matrix in which the ith row is the difference
vector between the ith point and its cluster mean. Given that all but an ε-fraction of points satisfy
the proximity condition, Kumar and Kannan [74] gave an algorithm that computes a clustering
with distance O(ε) to the target. They also show that their condition is (much) weaker than the
cost-stability condition by Ostrovsky et al. [85] and discuss some implications
of cost-stability on approximation factors. Awasthi and Sheffet [15] later showed that Ω(√k) standard deviations are sufficient to recover most of the results by Kumar and Kannan.
8 Acknowledgments
The authors thank their dedicated advisor for this project: Claire Mathieu. Without her, this
collaboration would not have been possible.
The second author acknowledges the support by Deutsche Forschungsgemeinschaft within the
Collaborative Research Center SFB 876, project A2, and the Google Focused Award on Web
Algorithmics for Large-scale Data Analysis.
Appendix
A (β, δ)-Stability
Lemma 4.5. Let Ci∗ be a cheap cluster. For any ε′, we have |IR_i^{ε′} ∩ Ci∗| > (1 − ε^3/ε′) · |Ci∗|.
Proof. Observe that each client that is not in IR_i^{ε′} is at distance larger than ε′β cost(C∗)/|Ci∗| from c∗i. Since Ci∗ is cheap, the total cost of the clients in Ci∗ = (IR_i^{ε′} ∩ Ci∗) ∪ (Ci∗ − IR_i^{ε′}) is at most ε^3 β cost(C∗) and, in particular, the total cost of the clients in Ci∗ − IR_i^{ε′} does not exceed ε^3 β cost(C∗). Therefore, the total number of such clients is at most ε^3 β cost(C∗)/(ε′β cost(C∗)/|Ci∗|) = ε^3 |Ci∗|/ε′.
Lemma 4.6. Let δ + ε^3/ε′ < 1. If Ci∗ ≠ Cj∗ are cheap clusters, then IR_i^{ε′} ∩ IR_j^{ε′} = ∅.
Proof. Assume that the claim is not true and consider a client x ∈ IR_i^{ε′} ∩ IR_j^{ε′}. Without loss of generality assume |Ci∗| ≥ |Cj∗|. By the triangle inequality, we have cost(c∗j, c∗i) ≤ cost(c∗j, x) + cost(x, c∗i) ≤ ε′β cost(C∗)/|Cj∗| + ε′β cost(C∗)/|Ci∗| ≤ 2ε′β cost(C∗)/|Cj∗|. Since the instance is (β, δ)-distribution stable with respect to (C∗, S∗) and due to Lemma 4.5, we have |∆i| + |IR_i^{ε′} ∩ Ci∗| > (1 − δ)|Ci∗| + (1 − ε^3/ε′)|Ci∗| = (2 − δ − ε^3/ε′)|Ci∗|. For δ + ε^3/ε′ < 1, there exists a client x′ ∈ IR_i^{ε′} ∩ ∆i. Thus, we have cost(x′, c∗j) ≤ cost(x′, c∗i) + cost(c∗j, c∗i) ≤ 3ε′β cost(C∗)/|Cj∗| < β cost(C∗)/|Cj∗|. Since x′ is in ∆i, we have cost(x′, c∗j) ≥ β cost(C∗)/|Cj∗|, resulting in a contradiction.
Lemma 4.8. There exists a set Z2 ⊆ C∗ − Z1 of size at most 11.25ε^{−1}β^{−1} such that for any cluster Cj∗ ∈ C∗ − Z2, the total number of clients x ∈ ∪_{i≠j} ∆i that are served by L(j) in L is at most ε · |IR_j^{ε²} ∩ Cj∗|.
Proof. Consider a cheap cluster Cj∗ ∈ C∗ − Z1 such that the total number of clients x ∈ ∆i for i ≠ j, that are served by L(j) in L, is greater than ε · |IR_j^{ε²} ∩ Cj∗|. By the triangle inequality and the definition of (β, δ)-stability, the total cost for each x ∈ ∆i with i ≠ j served by L(j) is at least (1 − ε)β cost(C∗)/|Cj∗|. Since there are at least ε · |IR_j^{ε²} ∩ Cj∗| such clients, their total cost is at least ε · |IR_j^{ε²} ∩ Cj∗| · (1 − ε)β cost(C∗)/|Cj∗|. By Lemma 4.5, this total cost is at least
ε · |IR_j^{ε²} ∩ Cj∗| · (1 − ε)β · cost(C∗)/|Cj∗| ≥ ε(1 − ε)^2 · |Cj∗| · β · cost(C∗)/|Cj∗| = ε(1 − ε)^2 β cost(C∗).
Recall that by [11], L is a 5-approximation and so there exist at most 11.25 · ε^{−1}β^{−1} such clusters.
Lemma 4.10. Let Ci∗ be a cluster in C∗ − Z∗. Define the solution Mi = L − {L(i)} ∪ {c∗i} and denote by m_x^i the cost of client x in solution Mi. Then
Σ_{x∈A} m_x^i ≤ Σ_{x∈A−(A(L(i))∪Ei)} lx + Σ_{x∈Ei} gx + Σ_{x∈Di} Reassign(x) + Σ_{x∈A(L(i))−(Ei∪Di)} lx + (ε/(1 − ε)) · Σ_{x∈Ei} (gx + lx).
Proof. Consider a client x ∈ Ci∗ − A(L(i)). By the triangle inequality, we have Reassign(x) = cost(x, L(i)) ≤ cost(x, c∗i) + cost(c∗i, L(i)) = gx + cost(c∗i, L(i)). Then,
Σ_{x∈Ci∗−A(L(i))} Reassign(x) ≤ Σ_{x∈Ci∗−A(L(i))} gx + |Ci∗ − A(L(i))| · cost(c∗i, L(i)).
Now consider the clients in Ci∗ ∩ A(L(i)). By the triangle inequality, we have cost(c∗i, L(i)) ≤ cost(c∗i, x) + cost(x, L(i)) ≤ gx + lx. Therefore,
cost(c∗i, L(i)) ≤ (1/|Ci∗ ∩ A(L(i))|) · Σ_{x∈Ci∗∩A(L(i))} (gx + lx).
We now bound |Ci∗ − A(L(i))| / |Ci∗ ∩ A(L(i))|. Due to Lemma 4.5, we have |IR_i^{ε²} ∩ Ci∗| ≥ (1 − ε)|Ci∗| and due to Lemma 4.4, we have |IR_i^{ε²} ∩ Ci∗ ∩ A(L(i))| ≥ (1 − ε)|IR_i^{ε²} ∩ Ci∗|. Therefore |Ci∗ ∩ A(L(i))| ≥ (1 − ε)^2 |Ci∗| and |Ci∗ − A(L(i))| ≤ (1 − (1 − ε)^2)|Ci∗| ≤ 2ε|Ci∗|, yielding |Ci∗ − A(L(i))| / |Ci∗ ∩ A(L(i))| ≤ 2ε/(1 − ε)^2.
Combining, we obtain
Σ_{x∈Ci∗−A(L(i))} Reassign(x) ≤ Σ_{x∈Ci∗−A(L(i))} gx + (|Ci∗ − A(L(i))| / |Ci∗ ∩ A(L(i))|) · Σ_{x∈Ci∗∩A(L(i))} (gx + lx) ≤ Σ_{x∈Ci∗−A(L(i))} gx + (2ε/(1 − ε)^2) · Σ_{x∈Ci∗∩A(L(i))} (gx + lx).
Lemma 4.11. Let Ci∗ be a cluster in C∗ − Z∗. Define the solution Mi = L − {L(i)} ∪ {c∗i} and denote by m_x^i the cost of client x in solution Mi. Then
Σ_{x∈A} m_x^i ≤ Σ_{x∈A−(A(L(i))∪C̃i∗)} lx + Σ_{x∈C̃i∗} gx + Σ_{x∈Di} Reassign(x) + Σ_{x∈A(L(i))−(C̃i∗∪Di)} lx + (ε/(1 − ε)) · Σ_{x∈C̃i∗} (gx + lx).
Proof. For any client x ∈ A − A(L(i)), the center that serves it in L belongs to Mi. Thus its cost is at most lx. Moreover, observe that any client x ∈ Ei ⊆ Ci∗ can now be served by c∗i, and so its cost is at most gx. For each client x ∈ Di, we bound its cost by Reassign(x) since all the centers of L except for L(i) are in Mi and x ∈ Bj∗ ⊆ Cj∗ ∈ C∗ − C(Z∗).
Now, we bound the cost of a client x ∈ A(L(i)) − (Ei ∪ Di) ⊆ A(L(i)). The closest center in Mi for a client x′ ∈ A(L(i)) is not farther than c∗i. By the triangle inequality, the cost of such a client x′ is at most cost(x′, c∗i) ≤ cost(x′, L(i)) + cost(L(i), c∗i) = l_{x′} + cost(L(i), c∗i), and so
Σ_{x∈A(L(i))−(Ei∪Di)} m_x^i ≤ |A(L(i)) − (Ei ∪ Di)| · cost(L(i), c∗i) + Σ_{x∈A(L(i))−(Ei∪Di)} lx.   (7)
Now, observe that, for any client x ∈ A(L(i)) ∩ Ei, by the triangle inequality, we have cost(L(i), c∗i) ≤ cost(L(i), x) + cost(x, c∗i) = lx + gx. Therefore,
cost(L(i), c∗i) ≤ (1/|A(L(i)) ∩ Ei|) · Σ_{x∈A(L(i))∩Ei} (lx + gx).   (8)
Combining Equations 7 and 8, we have
Σ_{x∈A(L(i))−(Ei∪Di)} m_x^i ≤ Σ_{x∈A(L(i))−(Ei∪Di)} lx + (|A(L(i)) − (Ei ∪ Di)| / |A(L(i)) ∩ Ei|) · Σ_{x∈A(L(i))∩Ei} (lx + gx) ≤ Σ_{x∈A(L(i))−(Ei∪Di)} lx + (|A(L(i)) − Ei| / |A(L(i)) ∩ Ei|) · Σ_{x∈Ei} (lx + gx).   (9)
We now remark that since Ei is in C∗ − Z∗, we have by Lemmas 4.7 and 4.8, |A(L(i)) − Ei| ≤ ε · |IR_i^{ε²} ∩ Ci∗| and (1 − ε) · |IR_i^{ε²} ∩ Ci∗| ≤ |A(L(i)) ∩ Ei|. Thus, combining with Equation 9 yields the lemma.
Lemma 4.12. We have
−ε · cost(L) + Σ_{x∈A−C(Ẑ∗)} lx ≤ Σ_{x∈A−C(Ẑ∗)} gx + (3ε/(1 − ε)^2) · (cost(L) + cost(C∗)).
(1 − ε)2
Proof. We consider a cluster Ci∗ in C ∗ − Z ∗ and the solution Mi = L − {L(i)} ∪ {c∗i }. Observe that
Mi and L only differ by L(i) and c∗i . Therefore, by local optimality we have (1 − nε ) · cost(Li ) ≤
cost(Mi ). Then Lemma 4.11 yields
ε
(1 − ) · cost(Li ) ≤
n
X
x∈A−
(A(L(i))∪Ei )
lx +
X
x∈Ei
gx +
X
Reassign(x) +
x∈Di
25
X
x∈A(L(i))−
(Ei ∪Di )
lx +
X
ε
·
(gx + lx )
(1 − ε)
x∈E
and so, simplifying
−
X
X
X
X
X
ε
ε
· cost(Li ) +
lx +
lx ≤
gx +
Reassign(x) +
·
(gx + lx )
n
(1 − ε)
x∈Ei
x∈Di
x∈Ei
x∈Di
x∈Ei
We now apply this analysis to each cluster Ci∗ ∈ C ∗ − Z ∗ . Summing over all clusters Ci∗ , we obtain,
|C ∗ −Z ∗ |
X
X
X
ε
− · cost(L)+
lx +
lx ≤
n
i=1
x∈Ei
x∈Di
∗
∗
|C −Z |
X
X
X
ε
· (cost(L) + cost(C ∗ ))
gx +
Reassign(c) +
(1 − ε)
i=1
x∈Ei
x∈Di
By Lemma 4.10 and the definition of Ei ,
ε
− · cost(L) +
n
|C ∗ −Z ∗ |
≤
Therefore, −
B
X
X
i=1
b
x∈Ci∗ ∩A
ε
· cost(L) +
n
gx +
X
∗)
b
x∈A−C(Z
ε
2ε
+
1 − ε (1 − ε)2
lx ≤
X
gx +
∗)
b
x∈A−C(Z
|C ∗ −Z ∗ |
X
X
i=1
b
x∈Ci∗ ∩A
lx
· (cost(L) + cost(C ∗ )).
3ε
· (cost(L) + cost(C ∗ )).
(1 − ε)2
Euclidean Distribution Stability
In this section we show how to reduce the Euclidean problem to the discrete version. Our analysis is focused on the k-means problem; however, we note that the discretization works for all values of cost = dist^p, where the dependency on p grows exponentially. For constant p, we obtain polynomial-sized candidate solution sets in polynomial time. For k-means itself, we could alternatively combine Matousek's approximate centroid set [81] with the Johnson-Lindenstrauss lemma and avoid the following construction; however, this would only work for optimal distribution-stable clusterings and the proof of Theorem 6.2 requires it to hold for non-optimal clusterings as well.
First, we describe a discretization procedure. It will be important to us that the candidate solution set preserves (1) the cost of any given set of centers and (2) distribution stability.
For a set of points P, a set of points Nε is an ε-net of P if for every point x ∈ P there exists some point y ∈ Nε with ||x − y|| ≤ ε. It is well known that for the unit Euclidean ball of dimension d, there exists an ε-net of cardinality (1 + 2/ε)^d, see for instance Pisier [87], though in this case the proof is non-constructive. Constructive methods yield slightly worse, but asymptotically similar bounds of the form ε^{−O(d)}, see for instance Chazelle [35] for an extensive overview on how to construct such nets. Note that having constructed an ε-net for the unit sphere, we also have an ε·r-net for any sphere with radius r. The following lemma shows that a sufficiently small ε-net preserves distribution stability. Again for ease of exposition, we only give the proof for p = 1, and assuming we can construct an appropriate ε-net; similar results also hold for (k, p)-clustering as long as p is constant.
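A simple constructive net of the ε^{−O(d)} kind mentioned above is a grid of spacing ε/√d. The sketch below allows net points to lie slightly outside the ball, a standard relaxation (assumed here) that keeps the covering property:

```python
import numpy as np
from itertools import product

def grid_eps_net(eps, d):
    """Constructive eps-net of the d-dimensional unit ball via a grid of
    spacing eps/sqrt(d); net points may lie up to eps outside the ball.
    Rounding each coordinate of any x in the ball to the nearest grid
    multiple moves x by at most eps/2, so the net property holds."""
    step = eps / np.sqrt(d)
    kmax = int(np.ceil((1 + eps) / step))
    axis = step * np.arange(-kmax, kmax + 1)
    return np.array([p for p in product(axis, repeat=d)
                     if np.dot(p, p) <= (1 + eps) ** 2])
```

The cardinality grows like (1/ε)^{O(d)}, matching the non-constructive bound up to the constant in the exponent; rescaling the grid by r yields the ε·r-net for a ball of radius r mentioned in the text.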
Lemma B.1. Let A be a set of n points in d-dimensional Euclidean space and let β, ε > 0 with min(β, ε) > 2η > 0 be constants. Suppose there exists a clustering C = {C1, . . . , Ck} with centers S = {c1, . . . , ck} such that
1. cost(C, S) = Σ_{i=1}^k Σ_{x∈Ci} ||x − ci|| is a constant approximation to the optimum clustering, and
2. C is β-distribution stable.
Then there exists a discretization D of the solution space such that there exists a subset S′ = {c′1, . . . , c′k} ⊂ D of size k with
1. Σ_{i=1}^k Σ_{x∈Ci} ||x − c′i|| ≤ (1 + ε) · cost(C, S), and
2. C with centers S′ is β/2-distribution stable.
The discretization consists of O(n · log n · η^{−(d+2)}) many points.
Proof. Let OPT be the cost of an optimal k-median clustering. Define an exponential sequence
to the base (1 + η), starting at η · OPT/n and ending at n · OPT. The sequence contains
t = log_{1+η}(n²/η) ∈ O(η^{−1} log n) many elements for 1/η < n. For each point p ∈ A, define B(p, ℓi)
as the d-dimensional ball centered at p with radius ℓi = (1 + η)^i · η · OPT/n. We cover the ball B(p, ℓi) with an
η/8 · ℓi net denoted by N_{η/8}(p, ℓi). As the set of candidate centers, we let D = ∪_{p∈A} ∪_{i=0}^{t} N_{η/8}(p, ℓi).
Clearly, |D| ∈ O(n · log n · (1 + 16/η)^{d+2}).
Now for each ci ∈ S, set c′i = argmin_{q∈D} ||q − ci||. We will show that S′ = {c′1, . . . , c′k} satisfies the
two conditions of the lemma.

For (1), we first consider the points p with ||p − ci|| ≤ ε/8 · OPT/n. Then there exists a c′i such
that ||p − c′i|| ≤ (η/8 + ε/8) · OPT/n ≤ ε/4 · OPT/n and, summing up over all such points, we have a total
contribution to the objective value of at most ε/4 · OPT.

Now consider the remaining points. Since cost(C, S) is a constant approximation, the center
ci of each point p satisfies (1 + η)^i · η · OPT/n ≤ ||ci − p|| ≤ (1 + η)^{i+1} · η · OPT/n for some i ∈ {0, . . . , t}. Then
there exists some point q ∈ N_{η/8}(p, ℓ_{i+1}) with ||q − ci|| ≤ η/8 · (1 + η)^{i+1} · η · OPT/n ≤ η/8 · (1 + η) · ||p − ci|| ≤
η/4 · ||p − ci||. We then have ||p − c′i|| ≤ (1 + η/4) · ||p − ci||. Summing up over both cases, we have a
total cost of at most ε/4 · OPT + (1 + η/4) · cost(C, S) ≤ (1 + ε/2) · cost(C, S).
To show (2), let us consider some point p ∉ Cj with ||p − cj|| > β · OPT/|Cj|. Since β · OPT/|Cj| ≥ 2η · OPT/n,
there exists a point q and an i ∈ {0, . . . , t} such that β/8 · (1 + η)^i · OPT/n ≤ ||cj − q|| ≤ β/8 · (1 + η)^{i+1} · OPT/n.
Then ||c′j − cj|| ≤ β · (1 + η)^{i+1} · OPT/n. Similarly to above, the point c′j satisfies ||p − c′j|| ≥
||p − cj|| − ||cj − c′j|| ≥ β · OPT/|Cj| − β/8 · (1 + η) · OPT/n ≥ (1 − 1/4) · β · OPT/|Cj| > β/2 · OPT/|Cj|.
To reduce the dependency on the dimension, we combine this statement with the seminal
theorem originally due to Johnson and Lindenstrauss [67].
Lemma B.2 (Johnson-Lindenstrauss lemma). For any set of n points N in d-dimensional Euclidean space and any 0 < ε < 1/2, there exists a distribution F over linear maps f : ℓ2^d → ℓ2^m with
m ∈ O(ε^{−2} log n) such that

P_{f∼F}[∀x, y ∈ N, (1 − ε) · ||x − y|| ≤ ||f(x) − f(y)|| ≤ (1 + ε) · ||x − y||] ≥ 2/3.
It is easy to see that Johnson-Lindenstrauss type embeddings preserve the Euclidean k-means
cost of any clustering, as the cost of any clustering can be written in terms of pairwise distances
(see also Fact 6.3 in Section 6). Since the distribution over linear maps F can be chosen obliviously
with respect to the points, this extends to distribution stability of a set of k candidate centers as
well.
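A minimal pure-Python sketch of such a map is a Gaussian matrix scaled by 1/√m, one standard Johnson-Lindenstrauss construction; the concrete target dimension and points below are illustrative, not the constants of the lemma.

```python
import math
import random

def jl_project(points, m, seed=0):
    """Project points to m dimensions with a random Gaussian matrix scaled
    by 1/sqrt(m); pairwise distances are preserved up to (1 +/- eps) with
    high probability when m is on the order of eps^-2 log n."""
    rng = random.Random(seed)
    d = len(points[0])
    G = [[rng.gauss(0.0, 1.0) / math.sqrt(m) for _ in range(d)]
         for _ in range(m)]
    return [tuple(sum(row[j] * p[j] for j in range(d)) for row in G)
            for p in points]

# Pairwise distances of a few 50-dimensional points survive projection
# to m = 400 dimensions with small distortion (w.h.p. over the matrix).
rng = random.Random(1)
pts = [tuple(rng.gauss(0.0, 1.0) for _ in range(50)) for _ in range(4)]
proj = jl_project(pts, 400)
```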
Combining Lemmas B.2 and B.1 gives us the following corollary.
Corollary B.3. Let A be a set of points in d-dimensional Euclidean space with a clustering C =
{C1, . . . , Ck} and centers S = {c1, . . . , ck} such that C is β-perturbation stable. Then there exists a
(A, F, || · ||2, k)-clustering instance with clients A, n^{poly(ε^{−1})} centers F and a subset S′ ⊂ F ∪ A of
k centers such that C with centers S′ is O(β)-stable and the cost of clustering A with S′ is at most (1 + ε)
times the cost of clustering A with S.
Remark. This procedure can be adapted to work for general powers of cost functions. For Lemma B.1,
we simply rescale η. The Johnson-Lindenstrauss lemma can also be applied in these settings, at
a slightly worse target dimension of O((p + 1)² log((p + 1)/ε) · ε^{−3} log n), see Kerber and Raghvendra [71].
C Experimental Results
In this section, we discuss the empirical applicability of stability as a model to capture real-world
data. Theorem 4.2 states that local search with a neighborhood of size n^{Ω(ε^{−3} β^{−1})} returns a solution
of cost at most (1 + ε) · OPT. Thus, we ask the following question.
For which values of β are the random and real instances β-distribution-stable?
We focus on the k-means objective. For real-world and random instances with a ground truth
clustering, we study under which conditions the value of the solution induced by the ground truth
clustering is close to the value of the optimal clustering with respect to the k-means objective.
Our aim is to determine (a range of) values of β for which various data sets satisfy distribution
stability.
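The quantity we estimate can be sketched as follows, assuming the k-median-style stability condition used in the proof of Lemma B.1 (every point p outside a cluster Cj satisfies ||p − cj|| ≥ β · OPT/|Cj|). The function names are ours, and the clustering cost is used as a stand-in for OPT.

```python
import math

def clustering_cost(clusters, centers):
    """k-median cost of an explicit clustering."""
    return sum(math.dist(p, c)
               for C, c in zip(clusters, centers) for p in C)

def stability_beta(clusters, centers, opt=None):
    """Largest beta for which the clustering satisfies the distribution
    stability condition: for all j and all p outside C_j,
    ||p - c_j|| >= beta * OPT / |C_j|."""
    if opt is None:
        opt = clustering_cost(clusters, centers)  # stand-in for true OPT
    beta = math.inf
    for j, c in enumerate(centers):
        size = len(clusters[j])
        for i, C in enumerate(clusters):
            for p in (C if i != j else []):
                beta = min(beta, math.dist(p, c) * size / opt)
    return beta

# Two well-separated clusters are stable for a large beta.
clusters = [[(0.0, 0.0), (0.1, 0.0)], [(10.0, 0.0), (10.1, 0.0)]]
centers = [(0.05, 0.0), (10.05, 0.0)]
beta = stability_beta(clusters, centers)
```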
Setup
The machines used for the experiments have an Intel(R) Core(TM) i7-3770 CPU at 3.40GHz
with four cores and a total virtual memory of 8GB, running on an Ubuntu 12.04.5 LTS operating
system. We implemented the algorithms in C++ and Python. The C++ compiler is g++ 4.6.3.
Our experiments always used Local Search with a neighborhood of size 1. At each step, the neighborhood of the current solution was explored in parallel: 8 threads were created by a Python script,
each corresponding to a C++ subprocess that explores a 1/8 fraction of the space of
the neighboring solutions. The best neighboring solution found by the 8 threads was taken for the
next step. For Lloyd's algorithm we use the C++ implementation by Kanungo et al. [70] available
online.
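As a rough sketch (our own simplified single-process version, not the parallel C++/Python pipeline described above), single-swap local search for k-means looks like this:

```python
import itertools
import math

def kmeans_cost(A, S):
    """k-means cost: each client pays squared distance to its nearest center."""
    return sum(min(math.dist(a, c) ** 2 for c in S) for a in A)

def local_search(A, F, k):
    """Single-swap (s = 1) local search: start from any k candidate centers
    and keep swapping one center for a candidate while the cost drops."""
    S = list(F[:k])
    improved = True
    while improved:
        improved = False
        for i, f in itertools.product(range(k), F):
            if f in S:
                continue
            T = S[:i] + [f] + S[i + 1:]
            if kmeans_cost(A, T) < kmeans_cost(A, S):
                S, improved = T, True
    return S

# Two separated pairs of points: local search recovers one center per pair.
A = [(0.0, 0.0), (0.0, 1.0), (5.0, 0.0), (5.0, 1.0)]
S = local_search(A, A, 2)
```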
To determine the stability parameter β, we also required a lower bound on the cost. This was
obtained via the linear relaxation described in Algorithm 4. The linear program was generated
by a Python script and solved using CPLEX. The average ratio between the upper bound
given by Local Search and the lower bound given by Algorithm 4 is 1.15, and the variance of the
value of the optimal fractional solution is less than 0.5% of the value of the optimal solution.
Therefore, our estimate of β is quite accurate.
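On instances small enough to enumerate, such a lower bound can be cross-checked against the exact optimum. The brute-force sketch below is ours, purely for illustration, and is not part of the paper's pipeline; any valid lower bound (such as the LP of Algorithm 4) must lie below the value it returns.

```python
import itertools
import math

def kmeans_cost(A, S):
    """k-means cost of serving clients A with centers S."""
    return sum(min(math.dist(a, c) ** 2 for c in S) for a in A)

def exact_opt(A, F, k):
    """Exact optimum over all k-subsets of the candidate centers F; only
    feasible for tiny instances, but usable as a reference value."""
    return min(kmeans_cost(A, list(S)) for S in itertools.combinations(F, k))

# Tiny sanity check: two separated pairs, candidate centers = the points.
A = [(0.0, 0.0), (0.0, 1.0), (5.0, 0.0), (5.0, 1.0)]
opt = exact_opt(A, A, 2)
```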
Algorithm 4 Linear relaxation for the k-means problem.
Input: A set of clients A, a set of candidate centers F, a number of centers k, a distance function
dist.

min Σ_{a∈A} Σ_{b∈F} x_{a,b} · dist(a, b)²

subject to
Σ_{b∈F} y_b ≤ k
Σ_{b∈F} x_{a,b} = 1    ∀a ∈ A
y_b ≥ x_{a,b}          ∀a ∈ A, ∀b ∈ F
x_{a,b} ≥ 0            ∀a ∈ A, ∀b ∈ F

C.1 Real Data
In this section, we focus on four classic real-world datasets with ground truth clustering: abalone,
digits, iris, and movement libras. abalone, iris, and movement libras have been used in
various works (see [46, 47, 49, 50, 89] for example) and are available online at the UCI Machine
learning repository [78].
The abalone dataset consists of 8 physical characteristics of all the individuals of a population
of abalones. Each abalone corresponds to a point in an 8-dimensional Euclidean space. The ground
truth clustering consists in partitioning the points according to the age of the abalones.
The digits dataset consists of 8px-by-8px images of handwritten digits from the standard
machine learning library scikit-learn [86]. Each image is associated with a point in a 64-dimensional
Euclidean space where each pixel corresponds to a coordinate. The ground truth clustering consists
in partitioning the points according to the number depicted in their corresponding images.
The iris dataset consists of the sepal and petal lengths and widths of all the individuals of a
population of iris plants containing 3 different types of iris plant. Each plant is associated with a point
in 4-dimensional Euclidean space. The ground truth clustering consists in partitioning the points
according to the type of iris plant of the corresponding individual.
The Movement libras dataset consists of a set of instances of 15 hand movements in LIBRAS
(the official Brazilian sign language). Each instance is a curve that is mapped to a representation
with 90 numeric values representing
the coordinates of the movements. The ground truth clustering consists in partitioning the points
according to the type of the movement they correspond to.
Properties                            Abalone    Digits     Iris     Movement libras
Number of points                      636        1000       150      360
Number of clusters                    28         10         3        15
Value of ground truth clustering      169.19     938817.0   96.1     780.96
Value of fractional relaxation        4.47       855567.0   83.96    366.34
Value of Algorithm 1                  4.53       855567.0   83.96    369.65
% of pts correct. class. by Alg. 1    17         76.2       90       39
β-stability                           1.27e-06   0.0676     0.2185   0.0065
Table 1: Properties of the real-world instances with ground truth clustering. The neighborhood
size for Algorithm 1 is 1.
Table 1 shows the properties of the four instances.
For the Abalone and Movement libras instances, the value of an optimal solution is much
smaller than the value of the ground truth clustering. Therefore the k-means objective function
might not be ideal as a recovery mechanism. Since Local Search optimizes with respect to the
k-means objective, the clustering output by Local Search is far from the ground truth clustering
for those instances: the percentage of points correctly classified by Algorithm 1 is at most 17% for
the Abalone instance and at most 39% for the Movement libras instance. For the Digits and
Iris instances the value of the ground truth clustering is at most 1.15 times the optimal value.
In those cases, the number of points correctly classified is much higher: 90% for the Iris instance
and 76.2% for the Digits instance.
The experiments also show that the β-distribution-stability condition is satisfied for β > 0.006
for the Digits, Iris and Movement libras instances. This shows that the β-distribution-stability
condition captures the structure of some famous real-world instances for which the k-means objective is meaningful for finding the optimal clusters. We thus make the following observations.
Observation C.1. If the value of the ground truth clustering is close to the value of the optimal
solution, then one can expect the instance to satisfy the β-distribution stability property for some
constant β.
The experiments show that Algorithm 1 with neighborhood size 1 (s = 1) is very efficient for
all these instances: it returns a solution whose value is within 2% of the optimal solution for
the Abalone instance and within 0.002% for the other instances. Note that the running time
of Algorithm 1 with s = 1 is Õ(k · n/ε) (using a set of O(n) candidate centers) and less than 15
minutes for all the instances. We make the following observation.
Observation C.2. If the value of the ground truth clustering is close to the value of the optimal
solution, then one can expect both clusterings to agree on a large fraction of points.
Finally, observe that for these instances the value of an optimal solution to the fractional
relaxation of the linear program is very close to the value of an optimal integral solution
(since the cost of the integral solution is smaller than the cost returned by Algorithm 1). This
suggests that the fractional relaxation (Algorithm 4) might have a small integrality gap for real-world instances.
Open Problem: We believe that it would be interesting to study the integrality gap of the classic
LP relaxation for the k-median and k-means problems under the stability assumption (for example
β-distribution stability).
C.2 Data generated from a mixture of k Gaussians
The synthetic data was generated via a Python script using numpy. The instances consist of 1000
points generated from a mixture of k Gaussians with the same variance σ lying in d-dimensional
space, where d ∈ {2, 10, 50} and k ∈ {5, 50, 100}. We generate 100 instances for all possible combinations of the parameters. The means of the k Gaussians are chosen uniformly and independently
at random in Q^d ∩ (0, 1)^d. The ground truth clustering is the family of sets of points generated
by the same Gaussian. We compare the value of the ground truth clustering to the value of the
optimal clustering.
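A pure-Python sketch of the generator just described (the paper's script used numpy; the names here are ours, and we pass the standard deviation σ directly):

```python
import random

def gaussian_mixture(n, k, d, sigma, seed=0):
    """Sample n points from a mixture of k spherical Gaussians with common
    standard deviation sigma, whose means are drawn uniformly at random in
    (0, 1)^d, as in the experiments. Returns points, ground truth labels,
    and the means."""
    rng = random.Random(seed)
    means = [tuple(rng.random() for _ in range(d)) for _ in range(k)]
    points, labels = [], []
    for _ in range(n):
        j = rng.randrange(k)  # uniform mixture weights
        points.append(tuple(rng.gauss(mu, sigma) for mu in means[j]))
        labels.append(j)
    return points, labels, means

points, labels, means = gaussian_mixture(200, 5, 10, 0.01, seed=3)
```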
The results are presented in Figures 4 and 5. We observe that when the variance σ is large, the
ratio between the average value of the ground truth clustering and the average value of the optimal
clustering becomes larger. Indeed, the ground truth clusters start to overlap, making it possible to
improve the objective value by defining slightly different clusters. Therefore, the use of the k-means
or k-median objectives for modeling the recovery problem is no longer suitable. In these cases,
since Local Search optimizes the solution with respect to the current cost, the clustering output by
Local Search is very different from the ground truth clustering. We thus identify instances for which
the k-means objective is meaningful and for which Local Search is a relevant heuristic. This motivates
the following definition.
Definition C.3. We say that a variance σ̂ is relevant if, for the k-means instances generated
with variance σ̂, the ratio between the average value of the ground truth clustering and the average
value of the optimal clustering is less than 1.05.
We summarize in Table 2 the relevant variances observed.
Number of dimensions      2           10       50
k = 5                     < 0.05      < 15     < 1000000.0
k = 50                    < 0.002     < 1      < 100
k = 100                   < 0.0005    < 0.5    < 7
Table 2: Relevant variances for k ∈ {5, 50, 100} and d ∈ {2, 10, 50}.
We consider the β-distribution-stability condition and ask whether the instances generated
from a relevant variance satisfy this condition for constant values of β. We remark that β can take
arbitrarily small values.
We thus identify relevant variances (see Table 2) for each pair (k, d), such that optimizing the
k-means objective on d-dimensional instances generated from a relevant variance corresponds to
finding the underlying clusters.
On stability conditions. We now study the β-distribution-stability condition for random instances generated from a mixture of k Gaussians. The results are depicted in Figures 6 and 7.
We observe that random instances that are not generated from a relevant variance are
β-distribution-stable only for very small values of β (e.g., β < 1e-07). We also make the
following observation.
Observation C.4. Instances generated using relevant variances satisfy the β-distribution-stability
condition for β > 0.001.
We remark that the number of dimensions is constant here and that having more dimensions
might yield slightly different values of β. It would be interesting to study this dependency in
future work.
References
[1] Emile Aarts and Jan K. Lenstra, editors. Local Search in Combinatorial Optimization. John
Wiley & Sons, Inc., New York, NY, USA, 1st edition, 1997.
[2] Dimitris Achlioptas and Frank McSherry. On spectral learning of mixtures of distributions. In
Learning Theory, 18th Annual Conference on Learning Theory, COLT 2005, Bertinoro, Italy,
June 27-30, 2005, Proceedings, pages 458–469, 2005.
[3] Sara Ahmadian, Ashkan Norouzi-Fard, Ola Svensson, and Justin Ward. Better guarantees for
k-means and euclidean k-median by primal-dual algorithms. CoRR, abs/1612.07925, 2016.
[4] Ehsan Ardjmand, Namkyu Park, Gary Weckman, and Mohammad Reza Amin-Naseri. The
discrete unconscious search and its application to uncapacitated facility location problem.
Computers & Industrial Engineering, 73:32 – 40, 2014.
[5] Ehsan Ardjmand, Namkyu Park, Gary R. Weckman, and Mohammad Reza Amin-Naseri.
The discrete unconscious search and its application to uncapacitated facility location problem.
Computers & Industrial Engineering, 73:32–40, 2014.
[6] Sanjeev Arora and Ravi Kannan. Learning mixtures of arbitrary gaussians. In Proceedings
on 33rd Annual ACM Symposium on Theory of Computing, July 6-8, 2001, Heraklion, Crete,
Greece, pages 247–257, 2001.
[7] Sanjeev Arora, Prabhakar Raghavan, and Satish Rao. Approximation schemes for Euclidean
k -medians and related problems. In Proceedings of the Thirtieth Annual ACM Symposium on
the Theory of Computing, Dallas, Texas, USA, May 23-26, 1998, pages 106–113, 1998.
[8] David Arthur, Bodo Manthey, and Heiko Röglin. Smoothed analysis of the k-means method.
J. ACM, 58(5):19, 2011.
[9] David Arthur and Sergei Vassilvitskii. k-means++: the advantages of careful seeding. In
Proceedings of the Eighteenth Annual ACM-SIAM Symposium on Discrete Algorithms, SODA
2007, New Orleans, Louisiana, USA, January 7-9, 2007, pages 1027–1035, 2007.
[10] David Arthur and Sergei Vassilvitskii. Worst-case and smoothed analysis of the ICP algorithm,
with an application to the k-means method. SIAM J. Comput., 39(2):766–782, 2009.
[11] Vijay Arya, Naveen Garg, Rohit Khandekar, Adam Meyerson, Kamesh Munagala, and
Vinayaka Pandit. Local search heuristics for k-median and facility location problems. SIAM
J. Comput., 33(3):544–562, 2004.
[12] Pranjal Awasthi, Avrim Blum, and Or Sheffet. Stability yields a PTAS for k-median and
k-means clustering. In 51th Annual IEEE Symposium on Foundations of Computer Science,
FOCS 2010, October 23-26, 2010, Las Vegas, Nevada, USA, pages 309–318, 2010.
[13] Pranjal Awasthi, Avrim Blum, and Or Sheffet. Center-based clustering under perturbation
stability. Inf. Process. Lett., 112(1-2):49–54, 2012.
[14] Pranjal Awasthi, Moses Charikar, Ravishankar Krishnaswamy, and Ali Kemal Sinop. The
hardness of approximation of Euclidean k-means. In 31st International Symposium on Computational Geometry, SoCG 2015, June 22-25, 2015, Eindhoven, The Netherlands, pages 754–
767, 2015.
[15] Pranjal Awasthi and Or Sheffet. Improved spectral-norm bounds for clustering. In Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques - 15th
International Workshop, APPROX 2012, and 16th International Workshop, RANDOM 2012,
Cambridge, MA, USA, August 15-17, 2012. Proceedings, pages 37–49, 2012.
[16] Ainesh Bakshi and Nadiia Chepurko. Polynomial time algorithm for 2-stable clustering instances. CoRR, abs/1607.07431, 2016.
[17] Maria-Florina Balcan, Avrim Blum, and Anupam Gupta. Approximate clustering without the
approximation. In Proceedings of the Twentieth Annual ACM-SIAM Symposium on Discrete
Algorithms, SODA 2009, New York, NY, USA, January 4-6, 2009, pages 1068–1077, 2009.
[18] Maria-Florina Balcan, Avrim Blum, and Anupam Gupta. Clustering under approximation
stability. J. ACM, 60(2):8, 2013.
[19] Maria-Florina Balcan and Mark Braverman. Finding low error clusterings. In COLT 2009
- The 22nd Conference on Learning Theory, Montreal, Quebec, Canada, June 18-21, 2009,
2009.
[20] Maria-Florina Balcan, Nika Haghtalab, and Colin White. k-center clustering under perturbation resilience. In 43rd International Colloquium on Automata, Languages, and Programming,
ICALP 2016, July 11-15, 2016, Rome, Italy, pages 68:1–68:14, 2016.
[21] Maria-Florina Balcan and Yingyu Liang. Clustering under perturbation resilience. SIAM J.
Comput., 45(1):102–155, 2016.
[22] Maria-Florina Balcan, Heiko Röglin, and Shang-Hua Teng. Agnostic clustering. In Algorithmic
Learning Theory, 20th International Conference, ALT 2009, Porto, Portugal, October 3-5,
2009. Proceedings, pages 384–398, 2009.
[23] Sayan Bandyapadhyay and Kasturi R. Varadarajan. On variants of k-means clustering. CoRR,
abs/1512.02985, 2015.
[24] MohammadHossein Bateni, Aditya Bhaskara, Silvio Lattanzi, and Vahab S. Mirrokni. Distributed balanced clustering via mapping coresets. In Advances in Neural Information Processing Systems 27: Annual Conference on Neural Information Processing Systems 2014, December
8-13 2014, Montreal, Quebec, Canada, pages 2591–2599, 2014.
[25] Mikhail Belkin and Kaushik Sinha. Polynomial learning of distribution families. In 51th
Annual IEEE Symposium on Foundations of Computer Science, FOCS 2010, October 23-26,
2010, Las Vegas, Nevada, USA, pages 103–112, 2010.
[26] Shai Ben-David, Benny Chor, Oded Goldreich, and Michel Luby. On the theory of average
case complexity. Journal of Computer and system Sciences, 44(2):193–219, 1992.
[27] Shalev Ben-David and Lev Reyzin. Data stability in clustering: A closer look. Theor. Comput.
Sci., 558:51–61, 2014.
[28] Yonatan Bilu, Amit Daniely, Nati Linial, and Michael E. Saks. On the practically interesting
instances of MAXCUT. In 30th International Symposium on Theoretical Aspects of Computer
Science, STACS 2013, February 27 - March 2, 2013, Kiel, Germany, pages 526–537, 2013.
[29] Yonatan Bilu and Nathan Linial. Are stable instances easy? Combinatorics, Probability &
Computing, 21(5):643–660, 2012.
[30] Guy E. Blelloch and Kanat Tangwongsan. Parallel approximation algorithms for facilitylocation problems. In SPAA 2010: Proceedings of the 22nd Annual ACM Symposium on
Parallelism in Algorithms and Architectures, Thira, Santorini, Greece, June 13-15, 2010, pages
315–324, 2010.
[31] Vladimir Braverman, Adam Meyerson, Rafail Ostrovsky, Alan Roytman, Michael Shindler,
and Brian Tagiku. Streaming k-means on well-clusterable data. In Proceedings of the TwentySecond Annual ACM-SIAM Symposium on Discrete Algorithms, SODA 2011, San Francisco,
California, USA, January 23-25, 2011, pages 26–40, 2011.
[32] S. Charles Brubaker and Santosh Vempala. Isotropic PCA and affine-invariant clustering. In
49th Annual IEEE Symposium on Foundations of Computer Science, FOCS 2008, October
25-28, 2008, Philadelphia, PA, USA, pages 551–560, 2008.
[33] Jaroslaw Byrka, Thomas Pensyl, Bartosz Rybicki, Aravind Srinivasan, and Khoa Trinh. An improved approximation for k -median, and positive correlation in budgeted optimization. In Proceedings of the Twenty-Sixth Annual ACM-SIAM Symposium on Discrete Algorithms, SODA
2015, San Diego, CA, USA, January 4-6, 2015, pages 737–756, 2015.
[34] Moses Charikar and Sudipto Guha. Improved combinatorial algorithms for facility location
problems. SIAM J. Comput., 34(4):803–824, 2005.
[35] Bernard Chazelle. The discrepancy method - randomness and complexity. Cambridge University Press, 2001.
[36] Michael B. Cohen, Sam Elder, Cameron Musco, Christopher Musco, and Madalina Persu.
Dimensionality reduction for k-means clustering and low rank approximation. In Proceedings
of the Forty-Seventh Annual ACM on Symposium on Theory of Computing, STOC 2015,
Portland, OR, USA, June 14-17, 2015, pages 163–172, 2015.
[37] Vincent Cohen-Addad, Philip N. Klein, and Claire Mathieu. Local search yields approximation schemes for k-means and k-median in euclidean and minor-free metrics. In IEEE 57th
Annual Symposium on Foundations of Computer Science, FOCS 2016, 9-11 October 2016,
Hyatt Regency, New Brunswick, New Jersey, USA, pages 353–364, 2016.
[38] Vincent Cohen-Addad and Claire Mathieu. Effectiveness of local search for geometric optimization. In 31st International Symposium on Computational Geometry, SoCG 2015, June
22-25, 2015, Eindhoven, The Netherlands, pages 329–343, 2015.
[39] David Cohen-Steiner, Pierre Alliez, and Mathieu Desbrun. Variational shape approximation.
ACM Trans. Graph., 23(3):905–914, 2004.
[40] Amin Coja-Oghlan. Graph partitioning via adaptive spectral techniques. Combinatorics,
Probability & Computing, 19(2):227–284, 2010.
[41] Anirban Dasgupta, John E. Hopcroft, Ravi Kannan, and Pradipta Prometheus Mitra. Spectral
clustering with limited independence. In Proceedings of the Eighteenth Annual ACM-SIAM
Symposium on Discrete Algorithms, SODA 2007, New Orleans, Louisiana, USA, January 7-9,
2007, pages 1036–1045, 2007.
[42] Sanjoy Dasgupta. Learning mixtures of gaussians. In 40th Annual Symposium on Foundations
of Computer Science, FOCS ’99, 17-18 October, 1999, New York, NY, USA, pages 634–644,
1999.
[43] Sanjoy Dasgupta and Yoav Freund. Random projection trees for vector quantization. IEEE
Trans. Information Theory, 55(7):3229–3242, 2009.
[44] Sanjoy Dasgupta and Leonard J. Schulman. A probabilistic analysis of EM for mixtures of
separated, spherical gaussians. Journal of Machine Learning Research, 8:203–226, 2007.
[45] Inderjit S. Dhillon, Yuqiang Guan, and Jacob Kogan. Iterative clustering of high dimensional
text data augmented by local search. In Proceedings of the 2002 IEEE International Conference
on Data Mining (ICDM 2002), 9-12 December 2002, Maebashi City, Japan, pages 131–138,
2002.
[46] Daniel B Dias, Renata CB Madeo, Thiago Rocha, Helton H Bı́scaro, and Sarajane M Peres.
Hand movement recognition for brazilian sign language: a study using distance-based neural
networks. In Neural Networks, 2009. IJCNN 2009. International Joint Conference on, pages
697–704. IEEE, 2009.
[47] Richard O Duda, Peter E Hart, et al. Pattern classification and scene analysis, volume 3.
Wiley New York, 1973.
[48] Dan Feldman and Michael Langberg. A unified framework for approximating and clustering
data. In Proceedings of the 43rd ACM Symposium on Theory of Computing, STOC 2011, San
Jose, CA, USA, 6-8 June 2011, pages 569–578, 2011.
[49] Bernd Fischer, Johann Schumann, Wray Buntine, and Alexander G Gray. Automatic derivation of statistical algorithms: The em family and beyond. In Advances in Neural Information
Processing Systems, pages 673–680, 2002.
[50] Ronald A Fisher. The use of multiple measurements in taxonomic problems. Annals of
eugenics, 7(2):179–188, 1936.
[51] Gereon Frahling and Christian Sohler. A fast k-means implementation using coresets. Int. J.
Comput. Geometry Appl., 18(6):605–625, 2008.
[52] Zachary Friggstad, Mohsen Rezapour, and Mohammad R. Salavatipour. Local search yields
a PTAS for k-means in doubling metrics. In IEEE 57th Annual Symposium on Foundations
of Computer Science, FOCS 2016, 9-11 October 2016, Hyatt Regency, New Brunswick, New
Jersey, USA, pages 365–374, 2016.
[53] Zachary Friggstad and Yifeng Zhang. Tight analysis of a multiple-swap heurstic for budgeted
red-blue median. In 43rd International Colloquium on Automata, Languages, and Programming, ICALP 2016, July 11-15, 2016, Rome, Italy, pages 75:1–75:13, 2016.
[54] Diptesh Ghosh. Neighborhood search heuristics for the uncapacitated facility location problem.
European Journal of Operational Research, 150(1):150 – 162, 2003. O.R. Applied to Health
Services.
[55] Diptesh Ghosh. Neighborhood search heuristics for the uncapacitated facility location problem.
European Journal of Operational Research, 150(1):150–162, 2003.
[56] Sudipto Guha and Samir Khuller. Greedy strikes back: Improved facility location algorithms.
J. Algorithms, 31(1):228–248, 1999.
[57] Sudipto Guha, Adam Meyerson, Nina Mishra, Rajeev Motwani, and Liadan O’Callaghan.
Clustering data streams: Theory and practice. IEEE Trans. Knowl. Data Eng., 15(3):515–
528, 2003.
[58] Anupam Gupta and Kanat Tangwongsan. Simpler analyses of local search algorithms for
facility location. CoRR, abs/0809.2554, 2008.
[59] Venkatesan Guruswami and Piotr Indyk. Embeddings and non-approximability of geometric problems. In Proceedings of the Fourteenth Annual ACM-SIAM Symposium on Discrete
Algorithms, January 12-14, 2003, Baltimore, Maryland, USA., pages 537–538, 2003.
[60] Pierre Hansen and Nenad Mladenovic. J-means : a new local search heuristic for minimum
sum of squares clustering. Pattern Recognition, 34(2):405–413, 2001.
[61] Pierre Hansen and Nenad Mladenović. Variable neighborhood search: Principles and applications. European journal of operational research, 130(3):449–467, 2001.
[62] Sariel Har-Peled and Akash Kushal. Smaller coresets for k-median and k-means clustering.
Discrete & Computational Geometry, 37(1):3–19, 2007.
[63] Sariel Har-Peled and Soham Mazumdar. On coresets for k-means and k-median clustering. In
Proceedings of the 36th Annual ACM Symposium on Theory of Computing, Chicago, IL, USA,
June 13-16, 2004, pages 291–300, 2004.
[64] Kamal Jain, Mohammad Mahdian, and Amin Saberi. A new greedy approach for facility
location problems. In Proceedings on 34th Annual ACM Symposium on Theory of Computing,
May 19-21, 2002, Montréal, Québec, Canada, pages 731–740, 2002.
[65] Kamal Jain and Vijay V. Vazirani. Approximation algorithms for metric facility location
and k -median problems using the primal-dual schema and Lagrangian relaxation. J. ACM,
48(2):274–296, 2001.
[66] Ragesh Jaiswal and Nitin Garg. Analysis of k-means++ for separable data. In Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques - 15th
International Workshop, APPROX 2012, and 16th International Workshop, RANDOM 2012,
Cambridge, MA, USA, August 15-17, 2012. Proceedings, pages 591–602, 2012.
[67] W. B. Johnson and J. Lindenstrauss. Extensions of Lipschitz mapping into Hilbert space.
In Conf. in modern analysis and probability, volume 26 of Contemporary Mathematics, pages
189–206. American Mathematical Society, 1984.
[68] Ravindran Kannan, Hadi Salmasian, and Santosh Vempala. The spectral method for general
mixture models. SIAM J. Comput., 38(3):1141–1156, 2008.
[69] Tapas Kanungo, David M. Mount, Nathan S. Netanyahu, Christine D. Piatko, Ruth Silverman,
and Angela Y. Wu. An efficient k-means clustering algorithm: Analysis and implementation.
IEEE Trans. Pattern Anal. Mach. Intell., 24(7):881–892, 2002.
[70] Tapas Kanungo, David M. Mount, Nathan S. Netanyahu, Christine D. Piatko, Ruth Silverman,
and Angela Y. Wu. A local search approximation algorithm for k-means clustering. Comput.
Geom., 28(2-3):89–112, 2004.
[71] Michael Kerber and Sharath Raghvendra. Approximation and streaming algorithms for projective clustering via random projections. In Proceedings of the 27th Canadian Conference
on Computational Geometry, CCCG 2015, Kingston, Ontario, Canada, August 10-12, 2015,
2015.
[72] Stavros G. Kolliopoulos and Satish Rao. A nearly linear-time approximation scheme for the
euclidean k-median problem. SIAM J. Comput., 37(3):757–782, June 2007.
[73] Madhukar R. Korupolu, C. Greg Plaxton, and Rajmohan Rajaraman. Analysis of a local
search heuristic for facility location problems. J. Algorithms, 37(1):146–188, 2000.
[74] Amit Kumar and Ravindran Kannan. Clustering with spectral norm and the k-means algorithm. In 51th Annual IEEE Symposium on Foundations of Computer Science, FOCS 2010,
October 23-26, 2010, Las Vegas, Nevada, USA, pages 299–308, 2010.
[75] Amit Kumar, Yogish Sabharwal, and Sandeep Sen. Linear-time approximation schemes for
clustering problems in any dimensions. J. ACM, 57(2), 2010.
[76] Shrinu Kushagra, Samira Samadi, and Shai Ben-David. Finding meaningful cluster structure
amidst background noise. In Algorithmic Learning Theory - 27th International Conference,
ALT 2016, Bari, Italy, October 19-21, 2016, Proceedings, pages 339–354, 2016.
[77] Shi Li and Ola Svensson. Approximating k-median via pseudo-approximation. In Symposium
on Theory of Computing Conference, STOC’13, Palo Alto, CA, USA, June 1-4, 2013, pages
901–910, 2013.
[78] Moshe Lichman. UCI machine learning repository, 2013.
[79] Meena Mahajan, Prajakta Nimbhorkar, and Kasturi R. Varadarajan. The planar k-means
problem is NP-hard. Theor. Comput. Sci., 442:13–21, 2012.
[80] Konstantin Makarychev and Yury Makarychev. Algorithms for stable and perturbation-resilient problems.
[81] Jiří Matoušek. On approximate geometric k-clustering. Discrete & Computational Geometry,
24(1):61–84, 2000.
[82] Frank McSherry. Spectral partitioning of random graphs. In 42nd Annual Symposium on
Foundations of Computer Science, FOCS 2001, 14-17 October 2001, Las Vegas, Nevada, USA,
pages 529–537, 2001.
[83] Nimrod Megiddo and Kenneth J. Supowit. On the complexity of some common geometric
location problems. SIAM J. Comput., 13(1):182–196, 1984.
[84] Ramgopal R. Mettu and C. Greg Plaxton. The online median problem. SIAM J. Comput.,
32(3):816–832, 2003.
[85] Rafail Ostrovsky, Yuval Rabani, Leonard J. Schulman, and Chaitanya Swamy. The effectiveness of Lloyd-type methods for the k-means problem. J. ACM, 59(6):28, 2012.
[86] Fabian Pedregosa, Gaël Varoquaux, Alexandre Gramfort, Vincent Michel, Bertrand Thirion,
Olivier Grisel, Mathieu Blondel, Peter Prettenhofer, Ron Weiss, Vincent Dubourg, Jake
Vanderplas, Alexandre Passos, David Cournapeau, Matthieu Brucher, Matthieu Perrot, and
Édouard Duchesnay. Scikit-learn: Machine learning in python. J. Mach. Learn. Res., 12:2825–
2830, November 2011.
[87] Gilles Pisier. The volume of convex bodies and Banach space geometry. Cambridge Tracts in
Mathematics. 94, 1999.
[88] Leonard J. Schulman. Clustering for edge-cost minimization (extended abstract). In Proceedings of the Thirty-Second Annual ACM Symposium on Theory of Computing, May 21-23,
2000, Portland, OR, USA, pages 547–555, 2000.
[89] Edward Snelson, Carl Edward Rasmussen, and Zoubin Ghahramani. Warped gaussian processes. Advances in neural information processing systems, 16:337–344, 2004.
[90] Daniel A. Spielman and Shang-Hua Teng. Smoothed analysis of algorithms: Why the simplex
algorithm usually takes polynomial time. J. ACM, 51(3):385–463, 2004.
[91] Minghe Sun. Solving the uncapacitated facility location problem using tabu search. Computers
& OR, 33:2563–2589, 2006.
[92] Dilek Tuzun and Laura I Burke. A two-phase tabu search approach to the location routing
problem. European journal of operational research, 116(1):87–99, 1999.
[93] Dilek Tüzün and Laura I. Burke. A two-phase tabu search approach to the location routing
problem. European Journal of Operational Research, 116(1):87–99, 1999.
[94] Santosh Vempala and Grant Wang. A spectral algorithm for learning mixture models. J.
Comput. Syst. Sci., 68(4):841–860, 2004.
[95] Yi Yang, Min Shao, Sencun Zhu, Bhuvan Urgaonkar, and Guohong Cao. Towards event source
unobservability with minimum network traffic in sensor networks. In Proceedings of the First
ACM Conference on Wireless Network Security, WISEC 2008, Alexandria, VA, USA, March
31 - April 02, 2008, pages 77–88, 2008.
38
[Figure 4 omitted; plot data not recoverable. Caption: The ratio of the average k-means cost induced by the means over the average optimal cost vs the variance for 2-dimensional instances generated from a mixture of k Gaussians (k ∈ {5, 50, 100}); panels (a) k = 5, d = 2, (b) k = 50, d = 2, (c) k = 100, d = 2. We observe that the k-means objective becomes “relevant” (i.e., is less than 1.05 times the optimal value) for finding the clustering induced by Gaussians when the variance is less than 0.1 for k = 5, less than 0.02 when k = 50, and less than 0.0005 when k = 100.]
[Figure 5 omitted; plot data not recoverable. Caption: The ratio of the average k-means cost induced by the means over the average optimal cost vs the variance for 10-dimensional instances generated from a mixture of k Gaussians (k ∈ {5, 50, 100}); panels (a) k = 5, d = 10, (b) k = 50, d = 10, (c) k = 100, d = 10. We observe that the k-means objective becomes “relevant” (i.e., is less than 1.05 times the optimal value) for finding the clustering induced by Gaussians when the variance is less than 0.1 for k = 5, less than 0.02 when k = 50, and less than 0.0005 when k = 100.]
[Figure 6 omitted; plot data not recoverable. Caption: The average minimum value of β for which the instance is β-distribution-stable vs the variance for 10-dimensional instances generated from a mixture of k Gaussians (k ∈ {5, 50, 100}); panels (a) k = 5, d = 10, (b) k = 50, d = 10, (c) k = 100, d = 10. We observe that for relevant variances, the value of β is greater than 0.001.]
[Figure 7 omitted; plot data not recoverable. Caption: The average minimum value of β for which the instance is β-distribution-stable vs the variance for 2-dimensional instances generated from a mixture of k Gaussians (k ∈ {5, 50, 100}); panels (a) k = 5, (b) k = 50, (c) k = 100, all 2-dimensional. We observe that for relevant variances, the value of β is greater than 0.001.]
Capturing the Future by Replaying the Past
Functional Pearl
arXiv:1710.10385v1 [] 28 Oct 2017
JAMES KOPPEL, MIT
ARMANDO SOLAR-LEZAMA, MIT
Delimited continuations are the mother of all monads! So goes the slogan inspired by Filinski’s 1994 paper, which showed
that delimited continuations can implement any monadic effect, letting the programmer use an effect as easily as if it was
built into the language. It’s a shame that not many languages have delimited continuations.
Luckily, exceptions and state are also the mother of all monads! In this Pearl, we show how to implement delimited
continuations in terms of exceptions and state, a construction we call thermometer continuations. While traditional implementations of delimited continuations require some way of "capturing" an intermediate state of the computation, the insight of
thermometer continuations is to reach this intermediate state by replaying the entire computation from the start, guiding it
using a "replay stack" so that the same thing happens until the captured point.
Along the way, we explain delimited continuations and monadic reflection, show how the Filinski construction lets
thermometer continuations express any monadic effect, share an elegant special-case for nondeterminism, and discuss why
our construction is not prevented by theoretical results that exceptions and state cannot macro-express continuations.
CCS Concepts: •Theory of computation → Control primitives; Functional constructs; •Software and its engineering → Functional languages; Control structures; General programming languages;
Additional Key Words and Phrases: monads, delimited continuations
ACM Reference format:
James Koppel and Armando Solar-Lezama. 2016. Capturing the Future by Replaying the Past. 1, 1, Article 1 (January 2016),
27 pages.
DOI: 10.1145/nnnnnnn.nnnnnnn
1 INTRODUCTION
In the days when mainstream languages have been adopting higher-order functions, advanced monadic effects
like continuations and nondeterminism have held out as the province of the bourgeois programmer of obscure
languages. Until now, that is.
Of course, there’s a difference between effects which are built into a language and those that must be encoded.
Mutable state is built into C, and so one can write int x = 1; x += 1; int y = x + 1;. Curry 1 is nondeterministic,
and so one can write (3 ? 4) * (5 ? 6) , which evaluates to all of {15, 18, 20, 24}. This is called the direct style.
When an effect is not built into a language, the monadic, or indirect, style is needed. In the orthodox indirect
style, after every use of an effect, the remainder of the program is wrapped in a lambda. For instance, the nondeterminism example would be rendered in Scala as List(3,4).flatMap(x ⇒List(5,6).flatMap(y ⇒List(x * y))).
Effects implemented in this way are called monadic. "Do-notation," as seen in Haskell, makes this easier, but still
inconvenient.
1 Hanus et al. (1995)
Permission to make digital or hard copies of part or all of this work for personal or classroom use is granted without fee provided that
copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page.
Copyrights for third-party components of this work must be honored. For all other uses, contact the owner/author(s).
© 2016 Copyright held by the owner/author(s). XXXX-XXXX/2016/1-ART1 $15.00
DOI: 10.1145/nnnnnnn.nnnnnnn
, Vol. 1, No. 1, Article 1. Publication date: January 2016.
We show that, in any language with exceptions and state, you can implement any monadic effect in direct
style. With our construction, you could implement a ? operator in Scala such that the example (3 ? 4) * (5 ? 6)
will run and return List(15,18,20,24). Filinski showed how to do this in any language that has an effect called
delimited continuations or delimited control 2 . We first show how to implement delimited continuations in
terms of exceptions and state, a construction we call thermometer continuations. Filinski’s result does the rest.
Continuations are rare, but exceptions are common. With thermometer continuations, you can get any effect in
direct style in 9 of the TIOBE top 10 languages 3 .
Here’s what delimited continuations look like in cooking: Imagine a recipe for making chocolate nut bars. Soak
the almonds in cold water. Rinse, and grind them with a mortar and pestle. Delimited continuations are like a step
that references a sub-recipe. Repeat the last two steps, but this time with pistachios. Delimited continuations can
perform arbitrary logic with these subprograms ("Do the next three steps once for each pan you’ll be using"), and
they can abort the present computation ("If using store-bought chocolate, ignore the previous four steps"). They
are "delimited" in that they capture only a part of the program, unlike traditional continuations, where you could
not capture the next three steps as a procedure without also capturing everything after them, including the part
where you serve the treats to friends and then watch Game of Thrones. Implementing delimited continuations
requires capturing the current state of the program, along with the rest of the computation up to a "delimited"
point. It’s like being able to rip out sections of the recipe and copy them, along with clones of whatever ingredients
have been prepared prior to that section. This is a form of "time travel" that typically requires runtime support —
if the nuts had not yet been crushed at step N, and you captured a continuation at step N, when it’s invoked, the
nuts will suddenly be uncrushed again.
The insight of thermometer continuations is that every subcomputation is contained within the entire computation, and so there is an alternative to time travel: just repeat the entire recipe from the start! But this time, use
the large pan for step 7. Because the computation contains delimited control (which can simulate any effect), it’s
not guaranteed to do the same thing when replayed. Thermometer continuations hence need state for a replay
stack, which makes each replayed effectful function call do the same thing as a previous invocation. Additionally,
like a recipe step that overrides previous steps, or that asks you to let it bake for an hour, delimited continuations
can abort or suspend the rest of the computation. Implementing this uses exceptions.
This approach poses an obvious limitation: the replayed computation can’t have any side effects, except for
thermometer continuations. And replays are inefficient. Luckily, thermometer continuations can implement all
other effects, and there are optimization techniques that make it less inefficient. Also, memoization — a "benign
effect" — is an exception to the no-side-effects rule, and makes replays cheaper.
Here’s what’s in the rest of this Pearl: Our construction has an elegant special case for nondeterminism,
presented in Section 2, which also serves as a warm-up to full delimited control. In the following section, we
give an intuition for how to generalize the nondeterminism construction with continuations. Section 4 explains
thermometer continuations. Section 5 explains Filinski’s construction and how it combines with thermometer
continuations to get arbitrary monadic effects. Section 6 discusses how to optimize the fusion of thermometer
continuations with Filinski’s construction, and provides a few benchmarks showing that thermometer continuations are not entirely impractical. Finally, Section 7 discusses why our construction does not contradict a
theoretical result that exceptions and state cannot simulate continuations.
2 Filinski (1994)
3 TIOBE Software BV (2017)
SML code                      | Closest Haskell equivalent
int option                    | Maybe Int
type α f = α * α              | type F a = (a, a)
!r                            | readIORef r
val x : int = 1               | x :: Int; x = 1
let <decls> in <expr> end     | let <decl1> in let <decl2> in <expr>
1 :: 2 :: []                  | 1 : 2 : []
f [1,2] @ g [3,4]             | f [1,2] ++ g [3,4]
"abc" ^ "def"                 | "abc" ++ "def"
f o g                         | f . g
#1 (a,b)                      | fst (a,b)

Fig. 1. The SML/Haskell Cheat Sheet
2 WARM-UP: REPLAY-BASED NONDETERMINISM
Nondeterminism is perhaps the first effect students learn which is not readily available in a traditional imperative language. This section presents replay-based nondeterminism, a useful specialization of thermometer
continuations, and an introduction to its underlying ideas.
When writing the examples in this paper, we sought an impure language with built-in support for exceptions
and state, and which has a simple syntax with good support for closures. We hence chose to present in SML.
The cheat sheet in Figure 1 explains SML’s less obvious syntax by giving their closest Haskell equivalents. Also
note that ML functors are module-valued functions, and are substantially different from Haskell-style functors.
Throughout this paper, we will assume there is no concurrency.
Nondeterminism provides a choice operator choose such that choose [x1, x2, . . .] may return any of the x i . Its
counterpart is a withNondeterminism operator which executes a block that uses choose, and returns the list of values
resulting from all executions of the block.
withNondeterminism (fn () ⇒ (choose [2,3,4]) * (choose [5,6]))
(* val it = [10,12,15,18,20,24] : int list *)
In this example, there are six resulting possible values, yet the body returns one value. It hence must run six
times. The replay-based implementation of nondeterminism does exactly this: in the first run, the two calls to
choose return 2 and 5, then 2 and 6 in the second, etc. In doing so, the program behaves as if the first call to choose
was run once but returned thrice. We’ll soon show exactly how this is done. But first, let us connect our approach
to the one most familiar to Haskell programmers: achieving nondeterminism through monads.
In SML, a monad is any module which implements the following signature (and satisfies the monad laws):
signature MONAD = sig
  type α m;
  val return : α → α m;
  val bind : α m → (α → β m) → β m;
end;
Here is the implementation of the list monad in SML:
structure ListMonad : MONAD = struct
  type α m = α list
  fun return x = [x]
  fun bind []        f = []
    | bind (x :: xs) f = f x @ bind xs f
end;
The ListMonad lets us rewrite the above example in monadic style. In the direct style, choose [2,3,4] would
return thrice, causing the rest of the code to run thrice. Comparatively, in the monadic style, the rest of the
computation is passed as a function argument to ListMonad.bind, which invokes it thrice.
open ListMonad;

bind [2,3,4] (fn x ⇒
  bind [5,6] (fn y ⇒
    return (x * y)))
(* val it = [10,12,15,18,20,24] : int list *)
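For readers following along outside ML, the same list-monad computation can be sketched in Python. This is a hedged transliteration, not code from the paper; the names return_ and bind are ours:

```python
def return_(x):
    # return x = [x]
    return [x]

def bind(xs, f):
    # bind (x :: xs) f = f x @ bind xs f, i.e. apply f to each element
    # and concatenate the resulting lists
    return [y for x in xs for y in f(x)]

result = bind([2, 3, 4], lambda x:
         bind([5, 6], lambda y:
         return_(x * y)))
print(result)  # [10, 12, 15, 18, 20, 24]
```

The list comprehension plays exactly the role of the `@`-concatenating recursion in the SML version.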
Let’s look at how the monadic version is constructed from the direct one. From the perspective of the invocation choose [2, 3, 4], the rest of the expression is a function awaiting its result, which it must invoke thrice:

C = (fn □ ⇒ □ * choose [5, 6])
This remaining computation is the continuation of choose [2,3,4]. Each time choose [2,3,4] returns, it invokes
this continuation. The monadic transformation captured this continuation, explicitly turning it into a function.
This transformation captures the continuation at compile time, but it can also be captured at runtime with the
callcc "call with current continuation" operator: if this first call to choose were replaced with callcc (fn k ⇒. . .),
then k would be equivalent to C. So, the functions being passed to bind are exactly what would be obtained if the
program were instead written in direct style and used callcc.
This insight makes it possible to implement a direct-style choose operator. The big idea is that, once callcc
has captured that continuation C in k, it must invoke k thrice, with values 2, 3, 4. This implementation is a little
verbose in terms of callcc, but we’ll later see how delimited continuations make this example simpler than with
callcc-style continuations.
Like the monadic and callcc-based implementations of nondeterminism, replay-based nondeterminism invokes the continuation (fn x ⇒ x * choose [5,6]) three times. Since the program is left in direct style (choose [2,3,4] * choose [5,6]), and it cannot rely on a built-in language mechanism to capture the continuation, it does this by running the entire block multiple times, with some bookkeeping to coordinate the runs. We begin our explanation of replay-based nondeterminism with the simplest case: a variant of choose which takes only two arguments, and may only be used once.

2.1 The Simple Case: Two-choice nondeterminism, used once

We begin by developing the simplified choose2 operator. Calling choose2 (x,y) splits the execution into two branches, returning x in the first branch and y in the second. For example:

(withNondeterminism2 (fn () ⇒ 3 * choose2 (5, 6)))
=⇒ [3 * 5, 3 * 6]
=⇒ [15, 18]
[Figure 2 omitted; tree diagrams not recoverable. Caption: Fig. 2. Several points in the execution of the replay-based nondeterminism algorithm.]
This execution trace hints at an implementation in which withNondeterminism2 calls the block twice, and where choose2 returns the first value in the first run, and the second value in the second run. withNondeterminism2 uses a single bit of state to communicate to choose2 whether it is being called in the first or second execution. As long as the block passed to withNondeterminism2 is pure, with no effects other than the single use of choose (and hence no nested calls to withNondeterminism2), each execution will have the same state at the time choose is called.
val firstTime = ref false

(* choose2 : α * α → α *)
fun choose2 (x1,x2) = if !firstTime then x1 else x2

(* withNondeterminism2 : (unit → α) → α list *)
fun withNondeterminism2 f = [(firstTime := true;  f ()),
                             (firstTime := false; f ())]

withNondeterminism2 (fn () ⇒ 3 * choose2 (5,6))
(* val it = [15,18] : int list *)
While simple and restrictive, this implementation contains the insights that allow arbitrary nondeterminism. View a nondeterministic computation as a tree, where each call to choose is a node, and the sequence of
values returned by calls to choose identify a branch. This section gives the special case where the tree has but
two branches. Here, withNondeterminism2 picks the branch, and choose2 follows it. Because there are only two
branches of execution, withNondeterminism2 and choose2 share only a single bit of state. What’s needed for arbitrary
nondeterminism? More state.
2.2 Towards Arbitrary Nondeterminism
Replay-based nondeterminism executes every branch in the computation tree, replaying the program once for
each. It’s like a depth-first tree traversal, except that reaching each leaf requires traversing the entire path from
the root. For instance, consider this program:
withNondeterminism (fn () ⇒
  if choose [3,4] = 3 then
    choose [5,6]
  else
    choose [7,8,9])
There are five branches in the execution tree of this program. In the first run, the two calls to choose return
(3, 5). In the second, they return (3, 6), followed by (4, 7), (4, 8) and (4, 9). Each branch is identified by a branch
index. Figure 2 depicts this execution tree with branch indices used at different points in the algorithm. The gist
of our algorithm is: withNondeterminism will run the block once for every branch index; within a run, each call to
choose uses the current branch index to pick which alternative to return.
Or, in code: br_idx stores the current branch index, while pos tells the next call to choose which element of the
branch index to look at.
val br_idx : int list ref = ref [];
val pos = ref 0;
A branch index is a list of numbers, each number indicating which alternative to return at a call to choose.
These choices are numbered right-to-left, so that, when this number reaches 0, the algorithm knows that all
choices at that call to choose have been exhausted. Like the stacks used to traverse trees, the first element of a
branch index corresponds to the last call to choose in that branch, which makes updating to the next branch index
simple. For instance, the first branch, in which the calls to choose return 3 and 5, has branch index [1, 1], and the
next branch, returning 3 and 6, has branch index [0, 1]. These are shown in Figures 2a and 2b. The decr function takes a branch index, and returns the index of the next branch.
fun decr (0 :: ns) = decr ns
  | decr (n :: ns) = (n-1) :: ns
  | decr []        = []
After executing the branch with index [0, 1], withNondeterminism updates the current branch index to [0]. This
is merely a prefix of the actual branch index: it instructs the first call to choose to return 4, but withNondeterminism
does not yet know that doing so causes execution to encounter the call choose [7,8,9]. Instead, that call to choose
will extend the branch index to [2, 0]. Figure 2c depicts this.
Do note that nested calls to withNondeterminism will not work, because both calls use the same global state and will interfere with each other. The thermometer continuations in Section 4.4 lack this problem.
We can now implement choose. If the branch index records a choice for the current invocation of choose, it
selects that alternative. Else, it extends the branch index to pick the first alternative of the current choice.
open List;

(* choose : α list → α *)
fun choose xs =
  if not (0 = !pos) then
    let val idxFromEnd = nth (!br_idx, !pos - 1) in
      (pos := !pos - 1;
       nth (xs, (length xs - 1) - idxFromEnd))
    end
  else
    (br_idx := (length xs - 1) :: !br_idx;
     hd xs)
The withNondeterminism function repeatedly executes the given block. It picks a different branch each time until
the branches have been exhausted, and concatenates the results together. Note that no initialization code is
needed, because the state is always returned to its initial value upon completing a call of withNondeterminism.
(* withNondeterminism : (unit → α) → α list *)
fun withNondeterminism f =
  let val v = [f()] in
    br_idx := decr (!br_idx);
    pos := length (!br_idx);
    case !br_idx of
        [] ⇒ v
      | _  ⇒ v @ withNondeterminism f
  end
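For reference, the whole algorithm — choose, decr, and withNondeterminism — also transliterates directly into Python. This is a hedged sketch, not the paper's code; global variables stand in for SML's ref cells:

```python
br_idx = []   # current branch index; first element = most recent choose
pos = 0       # how many recorded choices the next choose calls may consume

def decr(ns):
    # Compute the next branch index, dropping exhausted trailing choices.
    if not ns:
        return []
    if ns[0] == 0:
        return decr(ns[1:])
    return [ns[0] - 1] + ns[1:]

def choose(xs):
    global br_idx, pos
    if pos != 0:
        # Replay the choice recorded for this call site.
        idx_from_end = br_idx[pos - 1]
        pos -= 1
        return xs[len(xs) - 1 - idx_from_end]
    # First visit: extend the branch index, take the first alternative.
    br_idx = [len(xs) - 1] + br_idx
    return xs[0]

def with_nondeterminism(f):
    global br_idx, pos
    results = [f()]
    br_idx = decr(br_idx)
    pos = len(br_idx)
    if not br_idx:
        return results
    return results + with_nondeterminism(f)

print(with_nondeterminism(lambda: choose([2, 3, 4]) * choose([5, 6])))
# [10, 12, 15, 18, 20, 24]
```

As in the SML version, the state returns to its initial value when with_nondeterminism finishes, so consecutive (but not nested) calls work.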
With this implementation in place, we can now finally run the examples.
withNondeterminism (fn () ⇒ choose [2,3,4] * choose [5, 6])
(* val it = [10,12,15,18,20,24] : int list *)

withNondeterminism (fn () ⇒
  if choose [3,4] = 3 then
    choose [5,6]
  else
    choose [7,8,9])
(* val it = [5,6,7,8,9] : int list *)

2.3 But What About the Empty Case?
The above implementation can handle nondeterminism with 1 or more alternatives. But having 0 alternatives is
fundamentally different: the previous implementation assumes one value per branch, but choose allows branches
with no values.
What to do when a program calls choose []? Returning a dummy value is not an option: choose has type
α list →α , and there is no value of type α to return. We look to the monadic version as a guide. A call to
choose [] is translated into:
bind [] (fn x ⇒ ...)
Here, the function argument of bind represents the continuation of choose. bind never invokes it, which is
equivalent to aborting the continuation: it’s like choose never returns. The replay-based implementation of
choose [] achieves this by raising an exception.
Supporting empty choices requires minimal modification to our implementation. When a program calls
choose [], choose raises an exception to pass control to withNondeterminism, which moves on to the next branch.
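This abort-a-branch pattern is plain exception handling; a hedged Python sketch of the idea (fail and run_branch are our names, not the paper's):

```python
class Empty(Exception):
    pass

def fail():
    # Plays the role of choose []: this branch produces no value.
    raise Empty()

def run_branch(f):
    # The SML `[f()] handle Empty => []`: one value per branch,
    # unless the branch was aborted.
    try:
        return [f()]
    except Empty:
        return []

print(run_branch(lambda: 3 * fail()))   # []
print(run_branch(lambda: 3 * 5))        # [15]
```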
exception Empty;

fun choose [] = raise Empty
  | choose xs = ...

fun withNondeterminism f =
  let val v = [f()] handle Empty ⇒ [] in
    ...
  end

fun fail () = choose []

withNondeterminism (fn () ⇒
  let val x = choose [2,3,4] * choose [5,7] in
    if x ≥ 20 then x else fail ()
  end);
(* val it = [21,20,28] : int list *)

3 CONTINUATIONS IN DISGUISE
The previous section showed a trick for implementing direct-style nondeterminism in deterministic languages.
Now, we delve into the deeper idea behind it, and surface the ability to generalize from nondeterminism to
any monadic effect. We now examine how replay-based nondeterminism stealthily manipulates continuations.
Consider evaluating this expression:
val e = withNondeterminism (fn () ⇒ choose [2,3,4] * choose [5, 6])
Every subexpression of e has a continuation, and when it returns a value, it invokes that continuation. After
the algorithm takes e down the first branch and reaches the point T = 2 * choose [5, 6], this second call to choose
has continuation C = fn □ ⇒ 2 * □.
choose must invoke this continuation twice, with two different values. But C is not actually a function that can
be repeatedly invoked: it’s a description of what the program does with a value after it’s returned, and returning
a value causes the program to keep executing, consuming the continuation. choose invokes this continuation
the first time normally, returning 5. To copy this ephemeral continuation, it re-runs the computation until it’s
reached a point identical to T, evaluating that same call to choose with a second copy of C as its continuation —
and this time, it invokes the continuation with 6.
So, the first action of the choose operator is capturing the continuation. And what happens next? The
continuation is invoked once for each value, and the results are later appended together. We’ve already seen
another operation that invokes a function once on each value of a list and appends the results: the ListMonad.bind
operator. Figure 3 depicts how applying ListMonad.bind to the continuation produces direct-style nondeterminism.
So, replay-based nondeterminism is actually a fusion of two separate ideas:
(1) Capturing the continuation using replay
(2) Using the captured continuation with operators from the nondeterminism monad
In Section 4, we extract the first half to create thermometer continuations, our replay-based implementation of
delimited control. The second half — using continuations and monads to implement any effect in direct style
— is Filinski’s construction, which we explain in Section 5. These produce something more inefficient than the
replay-based nondeterminism of Section 2, but we'll show in Section 6 how to fuse them together into something equivalent.

[Figure 3 omitted; diagram not recoverable. Caption: Fig. 3. To implement choose: first capture the continuation, and then use the list monad's bind operator to evaluate it multiple times.]
4 THERMOMETER CONTINUATIONS: REPLAY-BASED DELIMITED CONTROL
In the previous section, we explained how the replay-based nondeterminism algorithm actually hides a mechanism
for capturing continuations. Over the remainder of this section, we extract out that mechanism, developing
the more general idea of thermometer continuations. But first, let us explain the variant of continuations that our
mechanism uses: delimited continuations.
4.1 What is delimited control?
When we speak of "the rest of the computation," a natural question is "until where?" For traditional continuations,
the answer is: until the program halts. This crosses all abstraction boundaries, making these "undelimited
continuations" difficult to work with. Delimited continuations on the other hand, introduced by Felleisen 4 ,
only represent a prefix of the remaining computation. Just as callcc makes continuations first class, allowing a
program to modify its continuation to implement many different global control-flow operators, the shift and
reset constructs make delimited continuations first-class, and can be used to implement many local control-flow
operators.
In the remainder of this section, we’ll denote a one-holed context C with the notation C[x], i.e.: C[x+1] is some
expression containing x+1. This notation makes it easy to give the semantics for shift and reset.
Consider an expression which is about to evaluate a shift. In the special case where there is only one shift in
the code, it evaluates as follows:
C1[reset (fn () ⇒ C2[shift (fn k ⇒ E)])] =⇒ C1[(fn k ⇒ E)(fn x ⇒ C2[x])]
C1[reset (fn () ⇒ E)] =⇒ E   if E does not contain a shift
Figure 4 depicts this evaluation. The name "shift" is illustrative. Suppose shift and reset were normal functions
rather than control operators. Then the delimited continuation of the shift up until the reset is C 2 , which in
turn has a delimited continuation of C 3 [x] = x. What shift does is, well, shift these. Continuation C 3 replaces
Continuation C 2 . This means that, after the shift returns, control jumps straight to the reset. In that regard, it
acts like a C-style return operator. C 2 , however, gets turned into a function and saved in the variable k.
E is free to use k in interesting ways. It can invoke k with several values. This essentially makes the shift
"return" multiple times, and can implement nondeterminism a la Figure 3. It can stuff k in a data-structure to be
"resumed" later, similar to a Python-style yield. It can even "chain" calls to k, like in this example:
1 + reset (fn () ⇒ 2 * (shift (fn k ⇒ k (k 5))))

4 Felleisen (1988)
[Figure 4 omitted; diagram not recoverable. Caption: Fig. 4. Graphical depiction of the action of the shift operator.]
Let’s rewrite this with C1[x] = 1 + x and C2[x] = 2 * x so that it matches the semantics we gave earlier. We can now evaluate it:

1 + reset (fn () ⇒ 2 * (shift (fn k ⇒ k (k 5))))
=  C1[reset (fn () ⇒ C2[shift (fn k ⇒ k (k 5))])]
=⇒ C1[(fn k ⇒ k (k 5))(fn x ⇒ C2[x])]
=⇒ 1 + (fn k ⇒ k (k 5))(fn x ⇒ 2 * x)
=⇒ 21
The definition above only works when there is only one shift. We now give the full semantics, which can
handle code with multiple shift’s. In the case where there is only one shift, this is equivalent to the previous
semantics.
C1[reset (fn () ⇒ C2[shift (fn k ⇒ E)])]
  =⇒ C1[reset (fn () ⇒ (fn k ⇒ E)(fn x ⇒ reset (fn () ⇒ C2[x])))]
C1[reset (fn () ⇒ E)] =⇒ E   if E does not contain a shift
When the captured delimited continuation is invoked, it gets delimited by the inner reset. So, any shift in C_2
will return to the body of the first shift E, instead of discarding the computation in E. Meanwhile, if E contains a
shift, the inner shift will only capture the continuation up to the outer reset rather than the entire remainder of
the program, because the outer reset gets left in place.
Here’s an example of multiple shift’s in a row.

reset (fn () ⇒ shift (fn k ⇒ 1 + k 2) * shift (fn k’ ⇒ 1 + k’ 3))
=⇒ reset (fn () ⇒ (fn k ⇒ 1 + k 2)(fn x ⇒ reset (fn () ⇒ x * shift (fn k’ ⇒ 1 + k’ 3))))
=⇒ 1 + reset (fn () ⇒ 2 * shift (fn k’ ⇒ 1 + k’ 3))
=⇒ 1 + reset (fn () ⇒ (fn k’ ⇒ 1 + k’ 3)(fn x ⇒ reset (fn () ⇒ 2 * x)))
=⇒ 1 + 1 + 2 * 3
=⇒ 8
And here’s an example of nested shift’s. Note how the two delimited continuations fn x ⇒ 2 + x and fn x ⇒ 3 * x get applied in reverse order.

1 + reset (fn () ⇒ 2 + (shift (fn k ⇒ 3 * shift (fn l ⇒ l (k 10)))))
=⇒ 1 + reset (fn () ⇒ (fn k ⇒ 3 * shift (fn l ⇒ l (k 10)))(fn x ⇒ reset (fn () ⇒ 2 + x)))
=⇒ 1 + reset (fn () ⇒ 3 * shift (fn l ⇒ l (2 + 10)))
=⇒ 1 + reset (fn () ⇒ (fn l ⇒ l 12)(fn x ⇒ reset (fn () ⇒ 3 * x)))
=⇒ 1 + 3 * 12
=⇒ 37
As the previous examples show, shift and reset are quite versatile. Indeed, they can implement any other
control operator.
shift and reset are termed delimited control operators, and are encoded in the following SML signature. There are also other equivalent formulations of delimited control using different operators 5 .

signature CONTROL = sig
  type ans
  val reset : (unit → ans) → ans
  val shift : ((α → ans) → ans) → α
end;
4.2 Baby Thermometer Continuations
Programming with continuations requires being able to capture and copy an intermediate state of the program. We
showed in Section 3 that replay-based nondeterminism implicitly does this by replaying the whole computation,
and hence needs no support from the runtime. We now see how to do this more explicitly.
This section presents a simplified pseudocode version of thermometer continuations. This version assumes the
reset block only contains one shift, and also ignores type-safety, but can still handle the example of delimited control from Section 4.1. Consider an expression containing a single shift, C1[reset (fn () ⇒ C2[shift (fn k ⇒ E)])].
The tough part of delimited continuations is to capture the continuation of the shift, namely C[t] = 2 * t, as
a function fn t ⇒C[t]. But suppose shift used mutable state so that change_state x; shift f evaluates to x. If C
doesn’t use any mutable state, then change_state x commutes with everything in C, so that change_state x;C [shift f]
is equivalent to C [change_state x; shift f]. Then, since block () is equal to C [shift (fn k ⇒E)], the function
fn x ⇒(change_state x; block () ) is equivalent to fn x ⇒C [x], the continuation to capture!
The other part of shift is to return a value directly to the enclosing reset (to "abort the current continuation").
It can do this by raising an exception. So, in totality, here are the semantics of shift and reset implemented in
this fashion:
C1[reset (fn () ⇒ C2[shift (fn k ⇒ E)])]
=⇒ C1[C2[raise (Done ((fn k ⇒ E) (fn x ⇒ (change_state x; C2[shift (fn k ⇒ E)]))))]
       handle (Done x ⇒ x)]
≡  C1[(fn k ⇒ E) (fn x ⇒ C2[change_state x; shift (fn k ⇒ E)])]
≡  C1[(fn k ⇒ E) (fn x ⇒ C2[x])]
This is exactly the semantics of shift and reset given in Section 4.1 for the case when there is only one shift.
We will later add support for multiple shifts.
For the remainder of this section, define block = (fn () ⇒2 * shift (fn k ⇒1 + k 5)). We consider the example
reset block. It evaluates as follows:
reset block
=⇒ let
=⇒ 11
val k = (fn t
⇒
2 * t) in 1 + k 5 end
⇒1 + k 5)] .
We now show a pseudocode implementation of shift and reset. shift checks a piece of mutable state to see
if it should ignore its argument and return some other value. For ease of extension, we use a stack, called the
replay_stack. If the replay_stack is nonempty, it does so; else, it does a "normal shift."
C[shift (fn k
5 Dyvbig
et al. (2007)
11
fun shift f = if
<replay_stack
is not empty>
pop replay_stack
else
normal_shift f
normal_shift calls its argument with "the captured continuation." In this example, (push replay_stack x; block ())
is equivalent to 2 * x. So, the captured continuation is equivalent to (fn x ⇒(push replay_stack x; block ())) .
Then normal_shift f must invoke f with this "proto thermometer continuation." shift then transfers control, along
with its result, to the enclosing reset; it does so by raising an exception.

fun normal_shift f = raise (Done (f (fn x ⇒ (push replay_stack x; block ()))))
And reset must catch this exception so as to receive control.
reset f = (f ()) handle (Done x ⇒ x)
This is not a real definition of reset. With our definition of shift, reset f only works when f = block. We're ignoring how to handle multiple nested calls to reset until the real implementation.

The example is now translated as follows:
reset block
=⇒ (2 * (raise (Done (1 + (push replay_stack 5; block ())))))
    handle (Done x ⇒ x)
=⇒ 11
This implementation also handles the example from Section 4.1 nicely. The example

1 + reset (fn () ⇒ 2 * (shift (fn k ⇒ k (k 5))))

becomes

let val block = (fn () ⇒ 2 * (shift (fn k ⇒ k (k 5)))) in
  1 + ((2 * (raise (Done (push replay_stack (push replay_stack 5; block ());
                          block ()))))
       handle (Done x ⇒ x))
end
=⇒ 21
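The pseudocode above is concrete enough to run. As an illustration of our own (Python rather than the paper's SML, with Done as an exception class and the global block looked up at call time), the single-shift scheme becomes:

```python
class Done(Exception):
    """Raised by shift to abort to the enclosing reset, carrying the answer."""
    def __init__(self, value):
        self.value = value

replay_stack = []  # the piece of mutable state consulted by shift

def shift(f):
    if replay_stack:                 # replaying: just return the recorded value
        return replay_stack.pop()
    def k(x):                        # the captured continuation:
        replay_stack.append(x)       #   change_state x;
        return block()               #   replay the body of the reset
    raise Done(f(k))                 # abort to the enclosing reset with f's result

def reset(f):
    try:
        return f()
    except Done as e:
        return e.value

# reset (fn () => 2 * shift (fn k => 1 + k 5)) evaluates to 11:
def block():
    return 2 * shift(lambda k: 1 + k(5))

print(reset(block))      # prints 11

# The k (k 5) example from Section 4.1 also works; rebinding the global
# block re-points the replay at the new body:
block = lambda: 2 * shift(lambda k: k(k(5)))
print(1 + reset(block))  # prints 21
```

Invoking the continuation twice simply replays the block twice, once per pushed value, which is the whole trick.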
4.3 Multiple (non-nested) shifts
Replaying the block becomes more interesting when there are multiple shifts. We'll now show how to extend our pseudocode implementation to handle multiple non-nested shifts, introducing the record_stack. We'll use the following, which we explained in Section 4.1:

val block = (fn () ⇒ shift (fn k ⇒ 1 + k 2)
               * shift (fn k' ⇒ 1 + k' 3))

reset block
With our previous implementation of shift, calling block () when the replay_stack contains [x, y] will return
x ∗ y. What happens when replay_stack only contains one value, [x]? It will run (pop replay_stack) * (shift . . .) .
x gets popped off the replay_stack before the second shift runs and pushes on y, so the block is never invoked
with more than one value on the replay_stack. This motivates the second stack, the record_stack. When shift pops a value
off the replay_stack, it saves it on the record_stack. This way, it can "remember" that value of x when it invokes
block the second time, moving values back from the record_stack to the replay_stack.
fun shift f = if <replay_stack is not empty>
                let val x = pop replay_stack in
                  (push record_stack x; x)
                end
              else
                normal_shift f
fun normal_shift f = raise (Done (f (fn x ⇒
      (replay_stack := reverse (x :: (!record_stack));
       record_stack := [];
       reset block))))
This is enough to run our multi-shift example:

reset block
=⇒ (raise (Done (1 + (replay_stack := reverse (2 :: []);
                      record_stack := [];
                      reset block)))
    * shift (fn k' ⇒ 1 + k' 3)) handle (Done x ⇒ x)
=⇒ 1 + (push replay_stack 2; reset block)
=⇒ 1 + (push replay_stack 2;
        (((push record_stack 2; pop replay_stack)
          * raise (Done (1 + (replay_stack := reverse (3 :: (!record_stack));
                              record_stack := [];
                              block ()))))
         handle (Done x ⇒ x)))
=⇒ 8
In Section 4.2, we could represent a continuation fn t ⇒ C[t] as fn t ⇒ (push replay_stack t; block ()). In this section, if the continuation of a shift is fn t ⇒ C[t], this continuation is represented by fn t ⇒ (replay_stack := reverse (t :: s); record_stack := []; reset block), where s is some stack. Hence, the continuation C can be represented by the pair (s, block). This motivates our definition of a thermometer continuation.

Definition 4.1. A thermometer continuation for a continuation C is a pair of a stack and a computation, (s, block), so that for appropriate functions change_state and run, (change_state x s; run block) evaluates to C[x].
So, the definitions of shift and reset in this section already implement thermometer continuations. For instance, the continuation of the second shift in this example, (fn y ⇒ 2 * y), is equivalent to invoking block with a replay stack of [2, y], and hence is represented by the thermometer continuation ([2], block). Figure 5 depicts a thermometer continuation before it is invoked, and Figure 6 animates the execution of one: as block executes, each element of the replay stack pairs with a call to shift. When the replay stack is exhausted, the remaining
execution of block is equivalent to that continuation C. The left side of Figure 6 resembles a thermometer sticking out of the function, which inspired the name "thermometer continuations."

Fig. 5. A thermometer continuation before being invoked

Fig. 6. Graphical depiction of running a thermometer continuation
The version in this section, of course, still has limitations. We’ve been assuming the variable block was magically
set to the body of the reset, which ignores nested resets. It also ignores nested shifts. It doesn't work if the
captured continuation escapes the reset, which is needed for implementing the state monad (Section 5.4) and for
implementing iterators (e.g.: Python’s yield). And we’ve been assuming that replay_stack and record_stack only
contain integers; they need to store any value.
The general algorithm presented in the next section solves all these problems. The replay and record stacks
have a universal type. It's careful about saving old values in closures. For nested shifts, the replay stack can
store markers, indicating that the replay should re-enter a shift. And, to evaluate a nested reset, it must be able
to stow away the current block, record_stack, and replay_stack, evaluate the nested reset, and then restore them
afterwards. It does this with a third stack, called the reset_stack.
4.4 Real Thermometer Continuations
The previous two sections gave a quick sketch of thermometer continuations. Now we polish it, producing real
code that can handle anything that can be done with shift and reset. We start by defining the stack operations
we used in the previous sections. A stack is a mutable list:
(* push : α list ref → α → () *)
fun push st x = (st := x :: !st)

(* pop : α list ref → α option *)
fun pop st = case !st of
    (x :: xs) ⇒ (st := xs; SOME x)
  | [] ⇒ NONE
One problem with the record and replay stacks is that the values recorded may be of different types. So that
we may store many types of values on a single stack, following Filinski 6 , we define a universal type, which all
other types may be converted to and from:
signature UNIVERSAL = sig
  type u;
  val to_u : α → u;
  val from_u : u → α;
end;

structure Universal : UNIVERSAL = struct
  datatype u = U;
  val to_u = Unsafe.cast;
  val from_u = Unsafe.cast;
end;
Regrettably, this implementation uses Unsafe.cast, but shift and reset are carefully designed so that these casts
never fail. While Filinski was able to upgrade these definitions to a type-safe implementation of the universal
type in his follow-up paper 7 , the heavy use of replay in our construction unfortunately prevents that solution
from working. We chose not to search for another type-safe version of the universal type in order to keep this
paper focused.
A thermometer continuation is a (function, replay stack) pair. It can’t be called like a function directly. We will
provide a function called invoke_cont that invokes a thermometer continuation, running the function using the
replay stack as long as it lasts. We will present the definitions out-of-order, saving the invoke_cont function for
last, but the overall setup is:
functor Control (type ans) : CONTROL = struct
  type ans = ans

  (* ... type, exception, and state definitions ... *)

  (* invoke_cont : (unit → ans) → stack → ans *)
  fun invoke_cont f st = ...
  fun reset f = ...
  fun shift f = ...
end;
6. Filinski (1994)
7. Filinski (1999)
The key state is the replay_stack and the record_stack. The entire executing computation is also stored in
mutable state.
To handle nested shifts, the stacks store a frame type instead of raw values. Previously, when replaying an expression that contains shift (fn k ⇒ shift (fn l ⇒ E)), there would be no way to enter the first shift, but make the second shift return a value x. Now, this can be done with the replay stack [ENTER, RET x].
exception MissingFun

datatype frame = RET of Universal.u
               | ENTER
type stack = frame list

val record_stack : stack ref = ref []
val replay_stack : stack ref = ref []
val cur_fun : (unit → ans) ref = ref (fn () ⇒ raise MissingFun)
The reset function is implemented as a small wrapper around invoke_cont: it invokes a computation with an
empty replay stack, causing the computation to execute from the start.
fun reset f = invoke_cont f []
The shift of Section 4.3 is almost complete. shift f just needs a couple of casts to deal with the universally-typed stacks, and it needs to dereference and capture the values of the record_stack and cur_fun before invoking f. Then the captured continuation becomes observationally pure, so it can escape the current reset.
exception Done of ans
fun shift f = case pop replay_stack of
    (SOME (RET v)) ⇒ (push record_stack (RET v);
                      Universal.from_u v)
  | _ ⇒ let val st = !record_stack
            val g = !cur_fun in
          (push record_stack ENTER;
           raise (Done (f (fn v ⇒
             invoke_cont g (RET (Universal.to_u v) :: st)))))
        end
One funny thing about this implementation is how it behaves the same when it encounters an ENTER frame
versus when the replay_stack is exhausted. Whether it was told to do so by the replay_stack or if it’s entering the shift for the first time, the correct thing to do is push an ENTER frame to the record stack. Note
how it captures the current state of the record_stack before pushing on the ENTER frame. So, in the expression
reset (fn () ⇒shift (fn k ⇒shift (fn l ⇒E))), invoking k x will replay the computation with a replay_stack
of [RET x], and will evaluate to reset (fn () ⇒x). Meanwhile, invoking l x will replay it with a replay_stack of
[ENTER, RET x], evaluating to reset (fn () ⇒shift (fn k ⇒x)).
We need one additional piece of state in order to implement invoke_cont. If a thermometer continuation is
invoked while executing another computation, the program must be able to save and restore the current values
in the replay and record stacks. Nested reset calls will also change the function currently being invoked, so this
must be saved as well. To accomplish this, invoke_cont uses a third stack, the reset stack.
type reset_stack = ((unit → ans) * stack * stack) list

val reset_stack : reset_stack ref = ref []
We are now ready to present invoke_cont f st. It is similar to the definition of reset given in Section 4.2, except that it also saves and restores state to the reset_stack.
(* invoke_cont : (unit → ans) → stack → ans *)
fun invoke_cont f st =
  (push reset_stack (!cur_fun, !record_stack, !replay_stack);
   record_stack := [];
   replay_stack := rev st;
   cur_fun := f;
   let val res = (f () handle (Done x) ⇒ x)
       val (SOME (f', rec_stack', rep_stack')) = pop reset_stack in
     cur_fun := f';
     record_stack := rec_stack';
     replay_stack := rep_stack';
     res
   end)
With this implementation finished, we can now run our earlier examples:

structure C = Control(type ans = int);

1 + C.reset (fn () ⇒ 2 * (C.shift (fn k ⇒ k (k 5))));
(* val it = 21 : int *)

C.reset (fn () ⇒ C.shift (fn k ⇒ 1 + k 2) * C.shift (fn k' ⇒ 1 + k' 3));
(* val it = 8 : int *)

1 + C.reset (fn () ⇒ 2 + C.shift (fn k ⇒ 3 * (C.shift (fn l ⇒ l (k 10)))));
(* val it = 37 : int *)
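The full construction (invoke_cont, ENTER frames, the captured record stack) also ports nearly line-for-line to dynamically-typed languages. The following Python sketch is our own illustration, not the paper's code; Python needs no universal type, so the casts disappear:

```python
class Done(Exception):
    def __init__(self, value):
        self.value = value

ENTER = "ENTER"     # marker frame: "re-enter this shift" during a replay
replay_stack = []   # frames still to replay, oldest first
record_stack = []   # frames consumed by the current run
cur_fun = None      # body of the innermost reset (the SML cur_fun ref)

def invoke_cont(f, st):
    """Run f with replay stack st, saving and restoring the global state
    (the role played by the SML reset_stack)."""
    global cur_fun
    saved = (cur_fun, record_stack[:], replay_stack[:])
    cur_fun = f
    record_stack[:] = []
    replay_stack[:] = list(st)
    try:
        res = f()
    except Done as e:
        res = e.value
    cur_fun, record_stack[:], replay_stack[:] = saved
    return res

def reset(f):
    return invoke_cont(f, [])

def shift(f):
    frame = replay_stack.pop(0) if replay_stack else None
    if frame is not None and frame is not ENTER:
        record_stack.append(frame)          # frame is ("RET", v): replay it
        return frame[1]
    st = record_stack[:]                    # capture the record so far ...
    g = cur_fun
    record_stack.append(ENTER)              # ... before marking this entry
    def k(v):
        return invoke_cont(g, st + [("RET", v)])
    raise Done(f(k))

print(1 + reset(lambda: 2 * shift(lambda k: k(k(5)))))                         # 21
print(reset(lambda: shift(lambda k: 1 + k(2)) * shift(lambda q: 1 + q(3))))    # 8
print(1 + reset(lambda: 2 + shift(lambda k: 3 * shift(lambda l: l(k(10))))))   # 37
```

The three printed values match the SML session above, including the nested-shift case, which exercises the ENTER frames.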
5 ARBITRARY MONADS
In 1994, Filinski showed how to use delimited continuations to express any monadic effect in direct style8 . The
explanation is quite obtuse, heavy on notation and light on examples. Dan Piponi has a blog post which is more
readable9 , but also missing big concepts like monadic reflection. In this section, we hope to convey a better intuition
for Filinski’s construction, and also discuss what it looks like when combined with thermometer continuations.
The code in this section comes almost verbatim from Filinski. This section is helpful for understanding the
optimizations of Section 6, in which we explain how to fuse thermometer continuations with the code in this
section.
8. Filinski (1994)
9. Dan Piponi (2008)
5.1 Monadic Reflection
In SML and Java, there are two ways to program with mutable state. The first is to use the language’s built-in
variables and assignment. The second is to use the monadic encoding, programming similar to how a pure
language like Haskell handles mutable state. A stateful computation is a monadic value, a pure value of type
s → (a, s).
These two approaches are interconvertible. The program can take a value of type s → (a, s) and run it,
yielding a stateful computation of return type a. This operation is called reflect. Conversely, it can take a stateful
computation of type a, and reify it into a pure value of type s → (a, s). Together, the reflect and reify operations
give a correspondence between monadic values and effectful computations. This correspondence is termed
monadic reflection.
reflect and reify generalize to arbitrary monads. Consider nondeterminism, where a nondeterministic computation is either an effectful computation of type a, or a monadic value of type [a]. Then the reflect operator
would take the input [1, 2, 3] and nondeterministically return 1, 2, or 3 — this is the choose operator from Section
2. reify would take a computation that nondeterministically returns 1, 2, or 3, and return the pure value [1, 2, 3]
— this is withNondeterminism.
So, for languages which natively support an effect, reflect and reify convert between effects implemented by
the semantics of the language, and effects implemented within the language. Curry is a language with built-in
nondeterminism, and it has these operators, calling them anyOf and getAllValues. SML does not have built-in
nondeterminism, but, for our previous example, one can think of the code within a withNondeterminism block as
running in a language extended with nondeterminism. So, one can think of the construction in the next section
as being able to extend a language with any monadic effect.
In SML, monadic reflection is given by the following signature:
signature RMONAD = sig
  structure M : MONAD
  val reflect : α M.m → α
  val reify : (unit → α) → α M.m
end;
5.2 Monadic Reflection through Delimited Control
Filinski’s insight was that the monadic style is similar to an older concept called continuation-passing style. We
can see this by revisiting an example from Section 2.
Consider this expression:
withNondeterminism (fn () ⇒ (choose [2,3,4]) * (choose [5,6]))
It is transformed into the monadic style as follows:
bind [2,3,4] (fn x ⇒
  bind [5,6] (fn y ⇒
    return (x * y)))
The first call to choose has continuation fn □ ⇒ □ * (choose [5,6]). If x is the value returned by the first call to choose, the second has continuation fn □ ⇒ x * □. These continuations correspond exactly to the functions bound in the monadic style. The monadic bind is the "glue" between a value and its continuation. Nondeterministically choosing from [2,3,4] wants to return thrice, which is the same as invoking the continuation thrice, which is the same as binding to the continuation.
So, converting a program to monadic style is quite similar to converting a program to this "continuation-passing
style." Does this mean a language that has continuations can program with monads in direct style? Filinski
answers yes.
The definition of monadic reflection in terms of delimited control is short. The overall setup is as follows:
functor Represent (M : MONAD) : RMONAD = struct
  structure C = Control(type ans = Universal.u M.m)
  structure M = M
  fun reflect m = ...
  fun reify t = ...
end;
Figure 3 showed how nondeterminism can be implemented by binding a value to the (delimited) continuation.
The definition of reflect is a straightforward generalization of this.
fun reflect m = C.shift (fn k ⇒ M.bind m k)
If reflect uses shift, then reify uses reset to delimit the effects implemented through shift. This implementation
requires use of type casts, because reset is monomorphized to return a value of type Universal.u m. Without these
type casts, reify would read
fun reify t = C.reset (fn () ⇒ M.return (t ()))

Because of the casts, the actual definition of reify is slightly more complicated:

fun reify t = M.bind (C.reset (fn () ⇒ M.return (Universal.to_u (t ()))))
              (M.return o Universal.from_u)
5.3 Example: Nondeterminism
Using this general construction, we immediately obtain an implementation of nondeterminism equivalent to the
one in Section 2.3 from Section 2’s definition of ListMonad.
structure N = Represent(ListMonad)
fun choose xs = N.reflect xs
fun fail () = choose []

N.reify (fn () ⇒ let val x = choose [2,3,4] * choose [5,7] in
                   if x ≥ 20 then x else fail () end);
(* val it = [21,20,28] : int list *)
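To see the generic construction end-to-end without an SML toolchain, here is a self-contained Python rendering of our own (not the paper's code): a thermometer-continuation core plus monadic reflection for the list monad, with bind inlined as a list comprehension.

```python
class Done(Exception):
    def __init__(self, value):
        self.value = value

ENTER = "ENTER"
replay_stack, record_stack = [], []
cur_fun = None

def invoke_cont(f, st):
    # Run f with replay stack st, saving and restoring the global state.
    global cur_fun
    saved = (cur_fun, record_stack[:], replay_stack[:])
    cur_fun = f
    record_stack[:] = []
    replay_stack[:] = list(st)
    try:
        res = f()
    except Done as e:
        res = e.value
    cur_fun, record_stack[:], replay_stack[:] = saved
    return res

def reset(f):
    return invoke_cont(f, [])

def shift(f):
    frame = replay_stack.pop(0) if replay_stack else None
    if frame is not None and frame is not ENTER:
        record_stack.append(frame)
        return frame[1]
    st, g = record_stack[:], cur_fun
    record_stack.append(ENTER)
    def k(v):
        return invoke_cont(g, st + [("RET", v)])
    raise Done(f(k))

# Monadic reflection for the list monad: bind m k = concatMap k m.
def reflect(m):
    return shift(lambda k: [y for x in m for y in k(x)])

def reify(t):
    return reset(lambda: [t()])     # reset (fn () => return (t ()))

def choose(xs):
    return reflect(xs)

def fail():
    return reflect([])

def prog():
    x = choose([2, 3, 4]) * choose([5, 7])
    return x if x >= 20 else fail()

print(reify(prog))  # prints [21, 20, 28]
```

The branch order matches the SML session: failing branches contribute empty lists, and the surviving products appear in replay order.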
It’s worth thinking about how this generic implementation executes on the example, and contrasting it with
the direct implementation of Section 2.3. The direct implementation executes the function body 6 times, once
for each branch of the computation. The generic one executes the function body 10 times (once with a replay
stack of length 0, 3 times with length 1, and 6 times with length 2). In the direct implementation, choose will
return a value if it can. In the generic one, choose never returns. Instead, it invokes the thermometer continuation,
causes the desired value to be returned at the equivalent point in the computation, and then raises an exception
containing the final result. So, 4 of those times, it could just return a value rather than replaying the computation.
This is the idea of one of the optimizations we discuss in Section 6. This, plus one other optimization, let us
derive the direct implementation from the generic one.
5.4 Example: State monad
State implemented through delimited control works differently from SML’s native support for state.
functor StateMonad (type state) : MONAD = struct
  type α m = state → α * state
  fun return x = fn s ⇒ (x, s)
  fun bind m f = fn s ⇒ let val (x, s') = m s
                        in f x s' end
end;

structure S = Represent (StateMonad (type state = int))
fun tick () = S.reflect (fn s ⇒ ((), s+1))
fun get () = S.reflect (fn s ⇒ (s, s))
fun put n = S.reflect (fn _ ⇒ ((), n))

#1 (S.reify (fn () ⇒ (put 5; tick (); 2 * get ())) 0);
(* val it = 12 : int *)
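The state-monad example can likewise be exercised in Python. The sketch below is our own self-contained rendering (a thermometer-continuation core plus a function-based state monad); the result pairs the final value with the final state.

```python
class Done(Exception):
    def __init__(self, value):
        self.value = value

ENTER = "ENTER"
replay_stack, record_stack = [], []
cur_fun = None

def invoke_cont(f, st):
    # Run f with replay stack st, saving and restoring the global state.
    global cur_fun
    saved = (cur_fun, record_stack[:], replay_stack[:])
    cur_fun = f
    record_stack[:] = []
    replay_stack[:] = list(st)
    try:
        res = f()
    except Done as e:
        res = e.value
    cur_fun, record_stack[:], replay_stack[:] = saved
    return res

def reset(f):
    return invoke_cont(f, [])

def shift(f):
    frame = replay_stack.pop(0) if replay_stack else None
    if frame is not None and frame is not ENTER:
        record_stack.append(frame)
        return frame[1]
    st, g = record_stack[:], cur_fun
    record_stack.append(ENTER)
    def k(v):
        return invoke_cont(g, st + [("RET", v)])
    raise Done(f(k))

# State monad: a computation is a pure function s -> (value, s).
def m_return(x):
    return lambda s: (x, s)

def m_bind(m, f):
    def run(s):
        x, s2 = m(s)
        return f(x)(s2)
    return run

def reflect(m):
    return shift(lambda k: m_bind(m, k))

def reify(t):
    return reset(lambda: m_return(t()))

def tick():  return reflect(lambda s: (None, s + 1))
def get():   return reflect(lambda s: (s, s))
def put(n):  return reflect(lambda s: (None, n))

def prog():
    put(5)
    tick()
    return 2 * get()

print(reify(prog)(0))  # prints (12, 6)
```

Note that the continuations here escape their reset: reify returns a function awaiting the initial state, and each invocation re-enters invoke_cont, which saves and restores the globals, so the suspensions compose correctly.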
Let's take a look at how this works, starting with the example (reify (fn () ⇒ 3 * get ())) 2.

(reify (fn () ⇒ 3 * get ())) 2
=⇒ (reify (fn () ⇒ 3 * (reflect (fn s ⇒ (s, s))))) 2
=⇒ (reset (fn () ⇒ return (3 * (shift (fn k ⇒ bind (fn s ⇒ (s,s)) k))))) 2
=⇒ ((fn k ⇒ bind (fn s ⇒ (s, s)) k) (fn x ⇒ return (3*x))) 2
=⇒ (let val k = (fn x ⇒ fn s ⇒ (3*x, s)) in (fn s ⇒ k s s) end) 2
=⇒ (fn s ⇒ (3*s, s)) 2
=⇒ (6, 2)
The get in reify (fn () ⇒3 * get ()) suspends the current computation, causing the reify to return a function
which awaits the initial state. Once invoked with an initial state, it resumes the computation (multiplying by 3).
What does reify (fn () ⇒(tick (); get ())) do? The call to tick () expands into shift (fn k ⇒fn s ⇒k () (s+1)).
It again suspends the computation, awaiting the state s. Once it receives s, it resumes it, returning () from tick.
The call to get suspends the computation again, returning a function that awaits a new state; tick supplies s+1.
Think for a second about how this works when shift and reset are implemented as thermometer continuations.
The get, put, and tick operators do not communicate by mutating state. They communicate by suspending the
computation, i.e.: by raising exceptions containing functions. So, although the implementation of state in terms
of thermometer continuations uses SML’s native support for state under the hood, it only does so tangentially, to
capture the continuation.
6 OPTIMIZATIONS
Section 5.3 compared the two implementations of nondeterminism, and found that the generic one using
thermometer continuations replayed the computation gratuitously. Thermometer continuations also replay the
program in nested fashion, consuming stack space. In this section, we sketch a few optimizations that make
monadic reflection via thermometer continuations less impractical, and illustrate the connections between the
two implementations of nondeterminism.
6.1 CPS-bind: Invoking the Continuation at the Top of the Stack
The basic implementation of thermometer continuations wastes stack space. Look at the last example of Section
4.3, and notice how it calls block () three nested times. And yet, the outer two calls to block () will be discarded
by a raised exception as soon as the inner one completes. So, the implementation could save a lot of stack space
by raising an exception before replaying the computation. Indeed, we did this when symbolically evaluating that example in Section 4.3 to make it easier to read.
So, when a program invokes a thermometer continuation, it will need to raise an exception to transfer control
to the enclosing reset, and thereby signal reset to replay the computation. While the existing Done exception
signals that a computation is complete, it can do this with a second kind of exception, which we call Invoke.
However, the shift and reset functions do not invoke a thermometer continuation: the body of the shift does.
In the case of monadic reflection, this is the monad’s bind operator. Raising an Invoke exception will discard the
remainder of bind, so it must somehow also capture the continuation of bind. We can do this by writing bind itself
in the continuation-passing style, i.e.: with the following signature:
val bind : α m → (α → (β m) cont) → (β m) cont;
where (β m) cont = forall δ. (β m → δ) → δ
The above is not valid SML because SML lacks the rank-2 polymorphism (i.e.: the nested forall) required by
the continuation-passing style. Nonetheless, we have implemented this in both SML, using additional unsafe
casts, and in OCaml, which does support rank-2 polymorphism.
The supplementary material contains code with this optimization, and uses it to implement nondeterminism
in a way that executes more similarly to the direct implementation. We give here some key points. Here’s what
the CPS’d bind operator for the list monad would look like if SML hypothetically had rank-2 polymorphism:
fun bind [] f d = d []
  | bind (x :: xs) f d = f x (fn a ⇒ bind xs f (fn b ⇒ d (a @ b)))
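Since the rank-2 type blocks a direct SML definition, a dynamically-typed rendering may help convey the data flow. The following Python transcription of the CPS'd list bind is our own sketch; the toy f and d below are hypothetical stand-ins for the roles reflect and reset play in the real pipeline.

```python
# CPS'd list bind: d is the continuation receiving the accumulated results.
def bind(m, f, d):
    if not m:
        return d([])
    x, xs = m[0], m[1:]
    # Each branch passes its results a onward; the list appends d(a + b)
    # only run once every branch is done, at the very end.
    return f(x, lambda a: bind(xs, f, lambda b: d(a + b)))

# A toy f that "returns" one result per input by calling its continuation:
print(bind([1, 2, 3], lambda x, c: c([10 * x]), lambda r: r))  # prints [10, 20, 30]
```

Writing bind this way lets the caller raise an Invoke exception inside f without losing the pending appends, which now live in the d closures instead of on the discarded stack.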
When used by reflect, f becomes a function that raises the Invoke exception, transferring control to the
enclosing reset, which then replays the entire computation, but at the top level. The continuations d of the bind get nested in a manner which is harder to describe, but ultimately get evaluated at the very end, also at the
top level. So the list appends in d (a @ b) actually run at the top level of the reset, similar to how, in direct
nondeterminism, it is the outer call to withNondeterminism that aggregates the results of each branch.
While this CPS-monad optimization as described here can be used to implement many monadic effects, it
cannot be used to implement all of them, nor general delimited continuations. Consider the state monad from
Section 5.4: bind actually returns a function which escapes the outer reify. Then, when the program invokes that
function and it tries to invoke its captured thermometer continuation, it will try to raise an Invoke exception to
transfer control to its enclosing reify, but there is none. This CPS-monad optimization as described does not work
if the captured continuation can escape the enclosing reset. With more work, it could use mutable state to track
whether it is still inside a reset block, and then choose to raise an Invoke exception or invoke the thermometer
continuation directly.
6.2 Direct Returns
In our implementation, a reset body C[reflect (return 1)] expands into C[raise (Done (C[1] handle (Done x ⇒x)))].
So, the entire computation up until that reflect runs twice. Instead of replaying the entire computation, that reflect could just return a value. C[reflect (return 1)] could expand into C[1].
reflect (return 1) expands into shift (fn k ⇒bind (return 1) k). By the monad laws, this is equivalent to
shift (fn k ⇒k 1). Tail-calling a continuation is the same as returning a value, so this is equivalent to 1. So, it’s
the tail-call that allows this instance of reflect to return a value instead of replaying the computation.
Implementing the direct-return optimization is a small tweak to the CPS-bind optimization. The signature for
bind is further modified to:
val bind : α m → (α → (β m) cont) → (α → (β m) cont) → (β m) cont;
where (β m) cont = forall δ. (β m → δ) → δ

This variant of bind takes two arguments of type α → (β m) cont.
One raises an Invoke exception, as described in Section 6.1. The other returns a value directly, after updating the
internal state of the thermometer continuation implementation. So, the first time bind invokes the continuation,
it may do so by directly returning a value, and thereafter instead raises an Invoke exception.
The bind operator for the list monad never performs a tail-call (it must always wrap the result in a list), but,
after converting it to CPS, it always performs a tail-call. So this direct-return optimization combines well with
the previous CPS-monad optimization. Indeed, applying them both transforms the generic nondeterminism of Section 5.3 into the direct nondeterminism of Section 2. In Section 6.4, we give benchmarks showing that this actually gives a faster implementation of nondeterminism than the code in Section 2.
In the supplementary material, we demonstrate this optimization, providing optimized implementations of
nondeterminism (list monad) and failure (maybe monad).
6.3 Memoization
While the frequent replays of thermometer continuations can interfere with any other effects in a computation, they cannot interfere with observationally-pure memoization. Memoizing nested calls to reset can save a lot of
computation, and any expensive function can memoize itself without integrating into the implementation of
thermometer continuations.
6.4 Benchmarks
To get a better understanding of the performance cost of thermometer continuations and the effect of our optimizations, we implemented several benchmarks with different monadic effects.

There are four benchmarks: NQUEENS, INTPARSE-GLOB, INTPARSE-LOCAL, and MONADIC-ARITH-PARSE. These four benchmarks use three different monads. Depending on the monad, we gave three to six
implementations of each benchmark. Each Direct implementation implements the program pure-functionally,
without monadic effects. The ThermoCont and Filinski implementations use monadic reflection, implemented
via thermometer continuations and Filinski’s construction, respectively. For the nondeterminism and failure
monads, our optimizations apply, given in Opt. ThermoCont. For the nondeterminism monad, we can also
use our Replay-based Nondet construction from Section 2. These were all implemented in SML. Finally,
for nondeterminism, we also compared to an implementation in Curry, which provides native support for
nondeterminism.
The SML solutions were all run using the SML/NJ interpreter v110.80¹⁰. Although MLTON¹¹, the whole-program
optimizing compiler for SML, is far more efficient than SML/NJ, we could not easily port our implementation of
10. Appel and MacQueen (1991)
11. Weeks (2006)
thermometer continuations to MLTON because it lacks Unsafe.cast. We also tried using our OCaml implementation
of thermometer continuations; surprisingly, we got a stack overflow error even for relatively small inputs, even
though the implementation uses only shallow recursion, and the SML version ran fine. We ran the Curry
implementation in KiCS2¹² v0.5.1. Our informal experiments show that a different Curry compiler, PAKCS¹³, was much slower. All experiments were conducted on a 2015 MacBook Pro with a 2.8 GHz Intel Core i7 processor. All
times shown are the average of 5 trials, except for MONADIC-ARITH-PARSE, as discussed below.
The first benchmark NQUEENS is the problem of enumerating all solutions to the n-queens problem. Table 1
reports the times for each implementation for different n. While the direct implementation was unsurprisingly the
fastest, thermometer continuations beat Filinski’s construction for small n, and the optimized version remained
within a factor of 2 until n = 10. Optimized thermometer continuations beat replay-based nondeterminism, likely
because the optimized thermometer continuation solution replaces list allocations and operations with closures.
Surprisingly, Curry performed by far the worst; for n = 11, we killed the process after running it for 20 minutes.
The twin benchmarks INTPARSE-GLOB and INTPARSE-LOCAL both take a list of numbers as strings, parse
each one, and return their sum. They both use a monadic failure effect (like Haskell’s Maybe monad), and differ
only in their treatment of strings which are not valid numbers: INTPARSE-GLOB returns failure for the entire
computation, while INTPARSE-LOCAL will recover from failure and ignore any malformed entry. Table 2
gives the results for INTPARSE-GLOB. For each input size n, we constructed both a list of n valid integers, as
well as one which contains an invalid string halfway through. Table 3 gives the results for INTPARSE-LOCAL.
For each n, we constructed lists of n strings where every 1/100th, 1/10th, or 1/2nd string was not a valid int.
For INTPARSE-GLOB, unoptimized thermometer continuations wins, as it avoids Filinski’s cost of callcc, the
optimized version’s reliance on closures, as well as the direct approach’s cost of wrapping and unwrapping
results in an option type. For INTPARSE-LOCAL, unoptimized thermometer continuations lost out to the direct implementation. Note that thermometer continuations here devolve into raising an exception for bad input, but with a clean relation to the pure monadic version.
Finally, benchmark MONADIC-ARITH-PARSE is a monadic parser in the style of Hutton and Meijer14 . These
programs input an arithmetic expression, and return the list of results of evaluating any prefix of the string which
is itself a valid expression. The Filinski and ThermoCont implementations closely follow Hutton and Meijer,
executing in a "parser" monad with both mutable state and nondeterminism (equivalent to Haskell’s StateT List
monad). The direct implementation inlines the monad definition, passing around a list of (remaining input, parse
result) pairs. We did not provide an implementation with optimized thermometer continations, as we have not
yet found how to make our optimizations work with the state monad. Note that all three implementations use
the same algorithm, while producing beautiful code (for the monadic versions), is exponentially slower than the
standard LL/LR parsing algorithms.
Table 4 reports the average running time of each implementation on 30 random arithmetic expressions with a
fixed number of leaves. There was very high variance in the running time of different inputs. For inputs with
40 leaves, the fastest input took under 25ms for all implementations, while the slowest took approximately 25
minutes on the direct implementation, 5 hours and 43 minutes for Filinski’s construction, and 9 hours 50 minutes
for thermometer continuations.
Overall, these benchmarks show that the optimizations of this section can provide a substantial speedup, and
there are many computations for which thermometer continuations do not pose a prohibitive cost. Thermometer
continuations are surprisingly competitive with Filinski’s construction, even though SML/NJ is known for its
efficient callcc, and yet thermometer continuations can be used in far more programming languages.
12 Braßel et al. (2011)
13 Hanus et al. (2003)
14 Hutton and Meijer (1998)
n                   |      4 |      5 |      6 |      7 |      8 |       9 |       10 |     11 |        12
--------------------+--------+--------+--------+--------+--------+---------+----------+--------+----------
Direct              | 0.006s | 0.005s | 0.005s | 0.006s | 0.006s |  0.008s |   0.015s | 0.064s |    0.310s
Replay-based Nondet | 0.006s | 0.006s | 0.007s | 0.007s | 0.017s |  0.089s |   0.506s | 2.606s | 8m54.364s
Filinski            | 0.010s | 0.009s | 0.010s | 0.011s | 0.014s |  0.022s |   0.046s | 0.199s |    1.063s
ThermoCont          | 0.006s | 0.006s | 0.007s | 0.009s | 0.017s |  0.052s |   0.248s | 1.497s |    9.462s
Opt. ThermoCont     | 0.006s | 0.006s | 0.006s | 0.008s | 0.013s |  0.041s |   0.198s | 1.210s |    7.809s
Curry               | 0.002s | 0.004s | 0.014s | 0.119s | 1.270s | 13.593s | 154.987s |   >20m |      >20m
Table 1. Benchmark NQUEENS
                 Bad input? |   1000 | 10,000 | 50,000 | 100,000 | 500,000 | 1,000,000 | 5,000,000
----------------------------+--------+--------+--------+---------+---------+-----------+----------
Direct                Y     | 0.000s | 0.001s | 0.004s |  0.026s |  0.159s |    0.260s |   36.260s
Direct                N     | 0.000s | 0.002s | 0.014s |  0.051s |  0.213s |    0.371s | 1m54.072s
Filinski              Y     | 0.000s | 0.004s | 0.018s |  0.012s |  0.172s |    0.258s |   28.719s
Filinski              N     | 0.000s | 0.004s | 0.020s |  0.014s |  0.221s |    0.360s | 1m26.603s
ThermoCont            Y     | 0.000s | 0.003s | 0.008s |  0.024s |  0.167s |    0.260s |   27.461s
ThermoCont            N     | 0.000s | 0.003s | 0.011s |  0.029s |  0.223s |    0.367s | 1m20.841s
Opt. ThermoCont       Y     | 0.000s | 0.000s | 0.013s |  0.015s |  0.197s |    0.293s |   27.298s
Opt. ThermoCont       N     | 0.000s | 0.002s | 0.015s |  0.024s |  0.247s |    0.364s | 1m23.433s
Table 2. Benchmark INTPARSE-GLOB

                % bad input |   1000 | 10,000 | 50,000 | 100,000 | 500,000 | 1,000,000 | 5,000,000
----------------------------+--------+--------+--------+---------+---------+-----------+----------
Direct               0.01   | 0.000s | 0.002s | 0.011s |  0.060s |  0.232s |    0.404s | 2m04.316s
Direct               0.10   | 0.000s | 0.002s | 0.002s |  0.053s |  0.221s |    0.375s | 1m40.251s
Direct               0.50   | 0.000s | 0.000s | 0.001s |  0.014s |  0.176s |    0.257s |   15.515s
Filinski             0.01   | 0.000s | 0.001s | 0.048s |  0.053s |  0.264s |    0.456s | 3m19.265s
Filinski             0.10   | 0.000s | 0.004s | 0.043s |  0.048s |  0.234s |    0.420s | 3m22.245s
Filinski             0.50   | 0.000s | 0.004s | 0.008s |  0.050s |  0.182s |    0.301s |   23.632s
ThermoCont           0.01   | 0.000s | 0.002s | 0.045s |  0.064s |  0.344s |    0.470s | 4m28.700s
ThermoCont           0.10   | 0.000s | 0.003s | 0.028s |  0.064s |  0.266s |    0.445s | 3m26.809s
ThermoCont           0.50   | 0.000s | 0.001s | 0.023s |  0.067s |  0.214s |    0.353s |   22.750s
Opt. ThermoCont      0.01   | 0.000s | 0.002s | 0.030s |  0.059s |  0.227s |    0.403s | 3m01.981s
Opt. ThermoCont      0.10   | 0.000s | 0.002s | 0.030s |  0.058s |  0.218s |    0.386s | 2m37.321s
Opt. ThermoCont      0.50   | 0.000s | 0.002s | 0.023s |  0.055s |  0.183s |    0.289s |   27.546s
Table 3. Benchmark INTPARSE-LOCAL
           |     10 |     20 |      30 |         40
-----------+--------+--------+---------+-----------
Direct     | 0.011s | 0.163s |  1.908s |  2m39.860s
Filinski   | 0.116s | 1.638s | 19.035s | 25m18.794s
ThermoCont | 0.184s | 2.540s | 30.229s | 39m58.183s
Table 4. Benchmark MONADIC-ARITH-PARSE
7 BUT ISN’T THIS IMPOSSIBLE?
In a 1990 paper, Matthias Felleisen presented formal notions of expressibility and macro-expressibility of one
language feature in terms of others, along with proof techniques to show a feature cannot be expressed 15 . Hayo
Thielecke used these to show that exceptions and state together cannot macro-express continuations 16 . This is
concerning, because, at first glance, this is exactly what we did.
First, a quick review of Felleisen’s concepts: Expressibility and macro-expressibility help define what should be
considered core to a language, and what is mere "syntactic sugar." An expression is a translation from a language
L containing a feature F to a language L′ without it which preserves program semantics. A key restriction is
that an expression may only rewrite AST nodes that implement F and the descendants of these nodes. So, an
expression of state may only rewrite assignments, dereferences, and expressions that allocate reference cells. The
whole-program transformation that transforms a stateful program into a pure one that constantly passes around
an ever-updating "state" variable is not an expression. A macro-expression is an expression which may rewrite
nodes from F , but may only move or copy the children of such nodes (technically speaking, it must be a term
homomorphism). A classic example of a macro-expression is implementing the += operator in terms of normal
assignment and addition. A classic example of an expression which is not a macro-expression is desugaring
for-loops into while-loops (it must dig into the loop body and modify every continue statement). Another one is
implementing Java finally blocks (which need to execute an action after every return statement).
There are a couple of reasons why Thielecke’s proof does not immediately apply. First, it only concerns macro-expressibility. Second, it concerns callcc-style continuations rather than delimited continuations. So, there are
two ways it could be extended to forbid our construction. First, one could extend Thielecke’s results to general
expressibility and delimited continuations. Second, we could limit the discussion to programs wrapped in an
all-encompassing reset, so that callcc will itself be macro-expressible using shift. We need not worry about their
combination: If we limit ourselves to programs that are entirely enclosed by a reset, then an "expression" of
shift/reset may rewrite the entire program.
It turns out that neither extension applies either. First, implementing effects with monadic reflection is not
an expression. An expression for mutable state may rewrite assignments and dereferences in terms of other
operations, but our construction must also enclose that entire program fragment in a reify. Filinski’s construction
is not an expression either for the same reason.
Second, even setting aside the issue of wrapping with reify, our construction is still not a macro-expression.
Let’s take a look at Thielecke’s proof and see where it fails. Thielecke’s proof is based on showing that, in a
language with exceptions and state but not continuations, all expressions of the following form with different j
are operationally equivalent:
R_j = λf. ((λx. λy. (f 0; x := !y; y := j; !x)) (ref 0) (ref 0))
The intuition behind this equivalence is that the two reference cells are allocated locally and then discarded,
and so the values stored in them can never be observed. With continuations, on the other hand, f could
cause the two assignments to run twice on the same reference cells.
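For concreteness, here is a direct Python transliteration of R_j (our sketch; each ref cell is modeled as a one-element list). Without any way for f to capture and re-invoke its continuation, the stored j is never read back, so every R_j returns 0:

```python
def R(j):
    def body(f):
        x = [0]       # (ref 0)
        y = [0]       # (ref 0)
        f(0)
        x[0] = y[0]   # x := !y  -- copies the 0 still stored in y
        y[0] = j      # y := j   -- j is written but never read again
        return x[0]   # !x
    return body
```

If f could capture a continuation and later re-enter the body, the assignments would run a second time on the same cells: x := !y would then read the j stored on the first pass, making R_1 and R_2 observably different.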
This example breaks down because it cannot be expressed in our monadic reflection framework as is. The
monadic reflection framework assumes there are no other effects within the program other than the ones
implemented via monadic reflection. To write R_j using thermometer continuations and monadic reflection,
the uses of ref must be changed from the native SML version to one implemented using the state monad. Then,
when the computation is replayed, repeated calls to ref may return the same reference cell, allowing the state to
escape, thereby allowing different R j to be distinguished.
15 Felleisen (1990)
16 Thielecke (2001)
8 RELATED WORK
Our work is most heavily based on Filinski’s work expressing monads using delimited control 17 . We have also
discussed theoretical results regarding the inter-expressibility of exceptions and continuations in Section 7. Other
work on implementing continuations using exceptions relate the two from a runtime-engineering perspective
and from a typing perspective.
Continuations from stack inspection. Oleg Kiselyov’s delimcc library 18 provides an implementation of delimited
control for OCaml, based on the insight that the stack-unwinding facilities used to implement exceptions are
also useful in implementing delimited control. Unlike our approach, delimcc works by tightly integrating with
the OCaml runtime, exposing low-level details of its virtual machine to user code. Its implementation relies on
copying portions of the stack into a data structure, repurposing functionality used for recovering from stack
overflows. It hence would not work for, e.g., many implementations of Java, which recover from stack overflows
by simply deleting activation records. On the other hand, its low-level implementation makes it efficient and lets it
persist delimited continuations to disk. A similar insight is used by Pettyjohn et al. 19 to implement continuations
using a global program transformation.
Typing power of exceptions vs. continuations. Lillibridge 20 shows that exceptions introduce a typing loophole
that can be used to implement unbounded loops in otherwise strongly-normalizing languages, while continuations
cannot do this, giving the slogan "Exceptions are strictly more powerful than call/cc." As noted by other authors 21 ,
this argument only concerns the typing of exceptions rather than their execution semantics, and is inapplicable
in languages that already have recursion.
9 CONCLUSION
Filinski’s original construction of monadic reflection from delimited continuations, and delimited continuations
from normal continuations plus state, provided a new way to program for the small fraction of languages which
support first-class continuations. With our demonstration that exceptions and state are sufficient, this capability
is extended to a large number of popular languages, including 9 of the TIOBE top 10 22 . While languages like Haskell
with syntactic support for monads may not benefit from this construction, bringing advanced monadic effects
to more languages paves the way for ideas spawned in the functional programming community to influence a
broader population.
In fact, the roots of this paper came from an attempt to make one of the benefits of monads more accessible.
We built a framework for Java where a user could write something that looks like a normal interpreter for a
language, but, executed differently, it would become a flow-graph generator, a static analyzer, a compiler, etc.
Darais 23 showed that this could be done by writing an interpreter in the monadic style (concrete interpreters
run programs directly; abstract interpreters run them nondeterministically). We discovered this concurrently
with Darais, and then discovered replay-based nondeterminism so that Java programmers could write normal,
non-monadic programs.
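The flavor of replay-based nondeterminism can be conveyed by a small Python sketch (ours; the framework described above targets Java). The body is re-executed from scratch once per completed path, using only state (the recorded choice prefix) and an exception (to abandon failing branches):

```python
class Fail(Exception):
    """Raised by the body to abandon the current branch."""

def run_nondet(body):
    results = []
    pending = [[]]                # choice prefixes still to explore
    while pending:
        prefix = pending.pop()
        trace = []                # choices made during this replay
        def choose(options):
            i = len(trace)
            if i < len(prefix):
                c = prefix[i]     # replay a recorded choice
            else:
                c = 0             # fresh choice point: take the first branch
                for alt in range(1, len(options)):
                    pending.append(trace + [alt])  # queue the siblings
            trace.append(c)
            return options[c]
        try:
            results.append(body(choose))
        except Fail:
            pass                  # this branch produced no result
    return results

def ordered_pairs(choose):
    x = choose([1, 2, 3])
    y = choose([1, 2, 3])
    if x >= y:
        raise Fail()
    return (x, y)

pairs = run_nondet(ordered_pairs)  # the three pairs with x < y
```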
Despite the apparent inefficiency of thermometer continuations, the optimizations discussed in Section 6,
combined with the oft-unused speed of modern machines, provide hope that the ideas of this paper can find their
17 Filinski (1994)
18 Kiselyov (2010)
19 Pettyjohn et al. (2005)
20 Lillibridge (1999)
21 Thielecke (2001)
22 TIOBE Software BV (2017)
23 Darais et al. (2017)
way into practical applications. Indeed, Filinski’s construction is actually known as a way to make programs
faster 24 .
Overall, we view finding a way to bring delimited control into mainstream languages as a significant achievement. We hope to see a flourishing of work with advanced effects now that they can be used by more programmers.
Working code for all examples and benchmarks, as well as the CPS-bind and direct-return optimizations, is
available from https://github.com/jkoppel/thermometer-continuations .
REFERENCES
Andrew W. Appel and David B. MacQueen. 1991. Standard ML of New Jersey. In International Symposium on Programming Language
Implementation and Logic Programming. Springer, 1–13.
Bernd Braßel, Michael Hanus, Björn Peemöller, and Fabian Reck. 2011. KiCS2: A new compiler from Curry to Haskell. Functional and
Constraint Logic Programming (2011), 1–18.
Dan Piponi. 2008. The Mother of all Monads. http://blog.sigfpe.com/2008/12/mother-of-all-monads.html. (2008). Posted: 2008-12-24. Accessed:
2017-02-27.
David Darais, Nicholas Labich, Phúc C Nguyen, and David Van Horn. 2017. Abstracting definitional interpreters (functional pearl). Proceedings
of the ACM on Programming Languages 1, ICFP (2017), 12.
R. Kent Dybvig, Simon Peyton Jones, and Amr Sabry. 2007. A monadic framework for delimited continuations. Journal of Functional
Programming 17, 6 (2007), 687–730.
Matthias Felleisen. 1988. The Theory and Practice of First-Class Prompts. In Proceedings of the 15th ACM SIGPLAN-SIGACT Symposium on
Principles of Programming Languages. ACM, 180–190.
Matthias Felleisen. 1990. On the Expressive Power of Programming Languages. (1990), 134–151.
Andrzej Filinski. 1994. Representing Monads. In Proceedings of the 21st ACM SIGPLAN-SIGACT Symposium on Principles of Programming
Languages (POPL ’94). ACM, New York, NY, USA, 446–457. DOI:http://dx.doi.org/10.1145/174675.178047
Andrzej Filinski. 1999. Representing Layered Monads. In Proceedings of the 26th ACM SIGPLAN-SIGACT Symposium on Principles of
Programming Languages. ACM, 175–188.
M Hanus, S Antoy, B Braßel, M Engelke, K Höppner, J Koj, P Niederau, R Sadre, and F Steiner. 2003. PAKCS: The Portland-Aachen-Kiel Curry
System. (2003).
Michael Hanus, Herbert Kuchen, and Juan Jose Moreno-Navarro. 1995. Curry: A Truly Functional Logic Language. In Proc. ILPS, Vol. 95.
95–107.
Ralf Hinze. 2012. Kan Extensions for Program Optimisation or: Art and Dan Explain an Old Trick. In International Conference on Mathematics
of Program Construction. Springer, 324–362.
Graham Hutton and Erik Meijer. 1998. Monadic Parsing in Haskell. Journal of functional programming 8, 4 (1998), 437–444.
Oleg Kiselyov. 2010. Delimited Control in OCaml, Abstractly and Concretely: System Description. In International Symposium on Functional
and Logic Programming. Springer, 304–320.
Mark Lillibridge. 1999. Unchecked Exceptions Can Be Strictly More Powerful Than Call/CC. Higher-Order and Symbolic Computation 12, 1
(1999), 75–104.
Greg Pettyjohn, John Clements, Joe Marshall, Shriram Krishnamurthi, and Matthias Felleisen. 2005. Continuations from Generalized Stack
Inspection. In ACM SIGPLAN Notices, Vol. 40. ACM, 216–227.
Hayo Thielecke. 2001. Contrasting Exceptions and Continuations. Version available from http://www.cs.bham.ac.uk/hxt/research/exncontjournal.pdf (2001).
TIOBE Software BV. 2017. TIOBE Index for February 2017. http://www.tiobe.com/tiobe-index/. (2017). Posted: 2017-02-08. Accessed:
2017-02-22.
Stephen Weeks. 2006. Whole-program compilation in MLton. ML 6 (2006), 1–1.
24 Hinze (2012)
arXiv:1502.01956v2 [] 9 Oct 2015
Stochastic recursive inclusion in two timescales
with an application to the Lagrangian dual
problem
Arun Selvan R.1 and Shalabh Bhatnagar2
1 [email protected]
2 [email protected]
1,2 Department of Computer Science and Automation, Indian Institute of Science, Bangalore - 560012, India.
Abstract
In this paper we present a framework to analyze the asymptotic behavior of two timescale stochastic approximation algorithms, including those with set-valued mean fields. This paper builds on the works of Borkar and Perkins & Leslie. The framework presented herein is more general than the synchronous two timescale framework of Perkins & Leslie; however, the assumptions involved are easily verifiable. As an application, we use this framework to analyze the two timescale stochastic approximation algorithm corresponding to the Lagrangian dual problem in optimization theory.
1 Introduction
The classical dynamical systems approach was developed by Benaïm [2, 3] and Benaïm and Hirsch [4]. They showed that the asymptotic behavior of a stochastic approximation algorithm (SA) can be studied by analyzing the asymptotics of the associated ordinary differential equation (o.d.e.). This method is popularly known as the o.d.e. method and was originally introduced by Ljung [12]. In 2005, Benaïm, Hofbauer and Sorin [5] extended the dynamical systems approach to include the situation where the stochastic approximation algorithm tracks a solution to the associated differential inclusion. Such algorithms are called stochastic recursive inclusions. For a detailed exposition on SA, the reader is referred to the books by Borkar [8] and Kushner and Yin [11].
There are many applications where the aforementioned paradigms are inadequate. For example, the right-hand side of an SA may require further averaging or an additional recursion to evaluate it. An instance mentioned in Borkar [7] is the ‘adaptive heuristic critic’ approach to reinforcement learning [10] that requires a stationary value iteration executed between two policy iterations. To solve such problems, Borkar [7] analyzed two timescale SA algorithms. The two timescale paradigm presented in Borkar [7] is inadequate if the coupled iterates are stochastic recursive inclusions. Such iterates arise naturally in many
learning algorithms, see for instance Section 5 of [13]. For another application
from convex optimization the reader is referred to Section 4 of this paper. Such
iterates also arise in applications that involve projections onto non-convex sets.
The first attempt at tackling this problem was made by Perkins and Leslie [13]
in 2012. They extended the two timescale scheme of Borkar [7] to include the
situation when the two iterates track solutions to differential inclusions.
Consider the following coupled recursion:

x_{n+1} = x_n + a(n)[u_n + M^1_{n+1}],
y_{n+1} = y_n + b(n)[v_n + M^2_{n+1}],          (1)

where u_n ∈ h(x_n, y_n), v_n ∈ g(x_n, y_n), h : R^{d+k} → {subsets of R^d} and g : R^{d+k} → {subsets of R^k}. Such iterates were analyzed in [13]. Further, as an application a Markov decision process (MDP) based actor-critic type learning algorithm was also presented in [13].
In this paper we generalize the synchronous two timescale stochastic approximation scheme presented in [13]. We present sufficient conditions that are mild
and easily verifiable. For a complete list of assumptions used herein, the reader
is referred to Section 2.2 and for the analyses under these conditions the reader is
referred to Section 3. It is worth noting that the analysis of the faster timescale
proceeds in a predictable manner, however, the analysis of the slower timescale
presented herein is new to the literature to the best of our knowledge.
In convex optimization, one is interested in minimizing an objective function
(that is convex) subject to a few constraints. A solution to this optimization
problem is a set of vectors that minimize our objective function. Often this set
is referred to as a minimum set. In Section 4, we analyze the two timescale
SA algorithm corresponding to the Lagrangian dual of a primal problem. As
we shall see later, this analysis considers a family of minimum sets and as a
consequence of our framework these minimum sets are no longer required to be
singleton. In [9], Dantzig, Folkman and Shapiro presented sufficient conditions
for the continuity of minimum sets of continuous functions. We shall use results
from that paper to show that under some standard convexity conditions the
assumptions of Section 2.2 are satisfied. We then conclude from our main result,
Theorem 3, that the two timescale algorithm in question converges to a solution
to the dual problem.
2 Preliminaries and assumptions

2.1 Definitions and notations

The definitions and notations used in this paper are similar to those in Benaïm et al. [5], Aubin et al. [1] and Borkar [8]. We present a few for easy reference.
Let H be an upper semi-continuous, set-valued map on R^d, where for any x ∈ R^d, H(x) is compact and convex valued. Note that we say that H is upper semi-continuous when x_n → x, y_n → y and y_n ∈ H(x_n) ∀n implies y ∈ H(x). Consider the differential inclusion (DI)

ẋ ∈ H(x).          (2)

We say that x ∈ Σ if x is an absolutely continuous map that satisfies (2). The set-valued semiflow Φ associated with (2) is defined on [0, +∞) × R^d as: Φ_t(x) = {x(t) | x ∈ Σ, x(0) = x}. Let T × M ⊂ [0, +∞) × R^d and define

Φ_T(M) = ∪_{t ∈ T, x ∈ M} Φ_t(x).

M ⊆ R^d is invariant if for every x ∈ M there exists a complete trajectory in M, say x ∈ Σ, with x(0) = x.
Let x ∈ R^d and A ⊆ R^d; then d(x, A) := inf{‖x − y‖ | y ∈ A}. We define the δ-open neighborhood of A by N^δ(A) := {x | d(x, A) < δ}. The δ-closed neighborhood of A is defined by N̄^δ(A) := {x | d(x, A) ≤ δ}.
Let M ⊆ R^d; the ω-limit set is given by ω_Φ(M) := ∩_{t≥0} cl(Φ_{[t,+∞)}(M)), where cl denotes closure. Similarly the limit set of a solution x is given by L(x) = ∩_{t≥0} cl(x([t, +∞))).
A ⊆ R^d is an attractor if it is compact, invariant and there exists a neighborhood U such that for any ε > 0, ∃ T(ε) ≥ 0 such that Φ_{[T(ε),+∞)}(U) ⊂ N^ε(A). Such a U is called the fundamental neighborhood of A. The basin of attraction of A is given by B(A) = {x | ω_Φ(x) ⊂ A}. If B(A) = R^d, then the set is called a globally attracting set. It is called Lyapunov stable if for all δ > 0, ∃ ε > 0 such that Φ_{[0,+∞)}(N^ε(A)) ⊆ N^δ(A).
A set-valued map h : R^n → {subsets of R^m} is called a Marchaud map if it satisfies the following properties:
(i) For each z ∈ R^n, h(z) is convex and compact.
(ii) (point-wise boundedness) For each z ∈ R^n, sup_{w∈h(z)} ‖w‖ < K(1 + ‖z‖) for some K > 0.
(iii) h is an upper semi-continuous map.
The open ball of radius r around 0 is represented by B_r(0), while the closed ball is represented by B̄_r(0).
2.2 Assumptions
Recall that we have the following coupled recursion:

x_{n+1} = x_n + a(n)[u_n + M^1_{n+1}],
y_{n+1} = y_n + b(n)[v_n + M^2_{n+1}],

where u_n ∈ h(x_n, y_n), v_n ∈ g(x_n, y_n), h : R^{d+k} → {subsets of R^d} and g : R^{d+k} → {subsets of R^k}.
We list below our assumptions.
(A1) h and g are Marchaud maps.
(A2) {a(n)}_{n≥0} and {b(n)}_{n≥0} are two scalar sequences such that: a(n), b(n) > 0 for all n, Σ_{n≥0} (a(n) + b(n)) = ∞, Σ_{n≥0} (a(n)² + b(n)²) < ∞ and lim_{n→∞} b(n)/a(n) = 0. Without loss of generality, we let sup_n a(n), sup_n b(n) ≤ 1.
(A3) {M^i_n}_{n≥1}, i = 1, 2, are square integrable martingale difference sequences with respect to the filtration F_n := σ(x_m, y_m, M^1_m, M^2_m : m ≤ n), n ≥ 0, such that E[‖M^i_{n+1}‖² | F_n] ≤ K(1 + (‖x_n‖ + ‖y_n‖)²), i = 1, 2, for some constant K > 0. Without loss of generality assume that the same constant, K, works for both (A1) (in property (ii) of Marchaud maps, see Section 2.1) and (A3).
(A4) sup_n {‖x_n‖ + ‖y_n‖} < ∞ a.s.
(A5) For each y ∈ R^k, the differential inclusion ẋ(t) ∈ h(x(t), y) has a globally attracting set, A_y, that is also Lyapunov stable. Further, sup_{x∈A_y} ‖x‖ ≤ K(1 + ‖y‖). The set-valued map λ : R^k → {subsets of R^d}, where λ(y) = A_y, is upper semi-continuous.
Define for each y ∈ R^k a function G(y) := co(∪_{x∈λ(y)} g(x, y)). The convex closure of a set A ⊆ R^k, denoted by co(A), is the closure of the convex hull of A, i.e., the closure of the smallest convex set containing A. It will be shown later that G is a Marchaud map.
(A6) ẏ(t) ∈ G(y(t)) has a globally attracting set, A_0, that is also Lyapunov stable.
With respect to the faster timescale, the slower timescale iterates appear stationary; hence the faster timescale iterates track a solution to ẋ(t) ∈ h(x(t), y_0), where y_0 is fixed (see Theorem 1). The y iterates track a solution to ẏ(t) ∈ G(y(t)) (see Theorem 2). It is worth noting that Theorems 1 & 2 only require (A1)–(A5) to hold. Since G(·) is the convex closure of a union of compact convex sets one can expect the set-valued map to be point-wise bounded and convex. However, it is unclear why it should be upper semi-continuous (hence Marchaud). In Lemma 2 we prove that G is indeed Marchaud without any additional assumptions.
Over the course of this paper we shall see that (A5) is the key assumption that links the asymptotic behaviors of the faster and slower timescale iterates. It may be noted that (A5) is weaker than the corresponding assumption (B6)/(B6)′ used in [13]. For example, (B6)′ requires that λ(y) and ∪_{x∈λ(y)} g(x, y) be convex for every y ∈ R^k, while (B6) requires that λ(y) be a singleton for every y ∈ R^k. The reader is referred to [13] for more details. Note that λ(y) being a singleton is a strong requirement in itself since it is the global attractor of some DI. It is observed in most applications that both λ(y) and ∪_{x∈λ(y)} g(x, y) will not be convex and therefore (B6)/(B6)′ are easily violated. Further, our application discussed in Section 4 illustrates the same.
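As a numerical illustration of the timescale separation (a toy example of ours, not from the paper: h and g are single-valued here), take h(x, y) = {y − x} and g(x, y) = {−x}, with a(n) = (n+1)^(−0.6) and b(n) = (n+1)^(−0.9), which satisfy (A2). The fast iterate x_n collapses onto λ(y) = {y}, and along that attractor the slow iterate effectively follows ẏ = −y, whose globally attracting set is A_0 = {0}:

```python
import random

random.seed(0)

def two_timescale(n_iters=200_000):
    x, y = 5.0, -3.0
    for n in range(n_iters):
        a = (n + 1) ** -0.6              # fast step size
        b = (n + 1) ** -0.9              # slow step size, b(n)/a(n) -> 0
        m1 = random.gauss(0.0, 0.1)      # martingale difference noise
        m2 = random.gauss(0.0, 0.1)
        x += a * ((y - x) + m1)          # u_n = y_n - x_n
        y += b * ((-x) + m2)             # v_n = -x_n
    return x, y

x_final, y_final = two_timescale()
# x_final tracks y_final, and both end up near the attractor {0}
```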
3 Proof of convergence
Before we start analyzing the coupled recursion given by (1), we prove a few auxiliary results.
Lemma 1. Consider the differential inclusion ẋ(t) ∈ H(x(t)), where H : R^n → {subsets of R^n} is a Marchaud map. Let A be the associated globally attracting set that is also Lyapunov stable. Then A is an attractor and every compact set containing A is a fundamental neighborhood.
Proof. Since A is compact and invariant, it is left to prove the following: given a compact set K ⊆ R^n such that A ⊆ K, for each ε > 0 there exists T(ε) > 0 such that Φ_t(K) ⊆ N^ε(A) for all t ≥ T(ε).
Since A is Lyapunov stable, corresponding to N^ε(A) there exists N^δ(A), where δ > 0, such that Φ_{[0,+∞)}(N^δ(A)) ⊆ N^ε(A). Fix x_0 ∈ K. Since A is a globally attracting set, ∃ t(x_0) > 0 such that Φ_{t(x_0)}(x_0) ⊆ N^{δ/4}(A). Further, from the upper semi-continuity of the flow it follows that Φ_{t(x_0)}(x) ⊆ N^{δ/4}(Φ_{t(x_0)}(x_0)) for all x ∈ N^{δ(x_0)}(x_0), where δ(x_0) > 0; see Chapter 2 of Aubin and Cellina [1]. Hence we get Φ_{t(x_0)}(x) ⊆ N^δ(A). Further, since A is Lyapunov stable, we get Φ_{[t(x_0),+∞)}(x) ⊆ N^ε(A). In this manner for each x ∈ K we calculate t(x) and δ(x); the collection {N^{δ(x)}(x) : x ∈ K} is an open cover of K. Since K is compact, there exists a finite sub-cover {N^{δ(x_i)}(x_i) | 1 ≤ i ≤ m}. For T(ε) := max{t(x_i) | 1 ≤ i ≤ m}, we have Φ_{[T(ε),+∞)}(K) ⊆ N^ε(A).
In Theorem 2 we prove that the slower timescale trajectory asymptotically
tracks a solution to ẏ(t) ∈ G(y(t)). The following lemma ensures that the
aforementioned DI has at least one solution.
Lemma 2. The map G referred to in (A6) is a Marchaud map.
Proof. Fix an arbitrary y ∈ R^k. For any x ∈ λ(y), it follows from (A1) that

sup_{z∈g(x,y)} ‖z‖ ≤ K(1 + ‖x‖ + ‖y‖).

From assumption (A5), we have that ‖x‖ ≤ K(1 + ‖y‖). Substituting in the above equation we may conclude the following:

sup_{z∈g(x,y)} ‖z‖ ≤ K(1 + K(1 + ‖y‖) + ‖y‖) = K(K + 1)(1 + ‖y‖),
sup_{z ∈ ∪_{x∈λ(y)} g(x,y)} ‖z‖ ≤ K(K + 1)(1 + ‖y‖),
sup_{z∈G(y)} ‖z‖ ≤ K(K + 1)(1 + ‖y‖).

We have thus proven that G is point-wise bounded. From the definition of G, it follows that G(y) is convex and compact.
It remains to show that G is an upper semi-continuous map. Let z_n → z and y_n → y in R^k with z_n ∈ G(y_n) ∀ n ≥ 1. We need to show that z ∈ G(y). We present a proof by contradiction. Since G(y) is convex and compact, z ∉ G(y) implies that there exists a linear functional on R^k, say f, such that sup_{w∈G(y)} f(w) ≤ α − ε and f(z) ≥ α + ε, for some α ∈ R and ε > 0. Since z_n → z, there exists N such that for all n ≥ N, f(z_n) ≥ α + ε/2. In other words, G(y_n) ∩ [f ≥ α + ε/2] ≠ ∅ for all n ≥ N. Here the notation [f ≥ a] is used to denote the set {x | f(x) ≥ a}.
For the sake of convenience, we denote the set ∪_{x∈λ(y)} g(x, y) by B(y). We claim that B(y_n) ∩ [f ≥ α + ε/2] ≠ ∅ for all n ≥ N. We prove this claim later; for now we assume that the claim is true and proceed. Pick w_n ∈ g(x_n, y_n) ∩ [f ≥ α + ε/2], where x_n ∈ λ(y_n) and n ≥ N. It can be shown that {x_n}_{n≥N} and {w_n}_{n≥N} are norm bounded sequences and hence contain convergent sub-sequences. Construct sub-sequences {w_{n(k)}}_{k≥1} ⊆ {w_n}_{n≥N} and {x_{n(k)}}_{k≥1} ⊆ {x_n}_{n≥N} such that lim_{k→∞} w_{n(k)} = w and lim_{k→∞} x_{n(k)} = x. It follows from the upper semi-continuity of g that w ∈ g(x, y) and from the upper semi-continuity of λ that x ∈ λ(y); hence w ∈ G(y). Since f is continuous, f(w) ≥ α + ε/2. This is a contradiction.
It remains to prove that B(y_n) ∩ [f ≥ α + ε/2] ≠ ∅ for all n ≥ N. Suppose this were false; then ∃ {m(k)}_{k≥1} ⊆ {n ≥ N} such that B(y_{m(k)}) ⊆ [f < α + ε/2] for each k ≥ 1. It can be shown that co(B(y_{m(k)})) ⊆ [f ≤ α + ε/2] for each k ≥ 1. Since z_{m(k)} → z, ∃ N_1 such that for all m(k) ≥ N_1, f(z_{m(k)}) ≥ α + 3ε/4. This is a contradiction. Hence we get B(y_n) ∩ [f ≥ α + ε/2] ≠ ∅ for all n ≥ N.
It is worth noting that (A5) is a key requirement in the above proof. In the
next lemma, we show the convergence of the martingale noise terms.
Lemma 3. The sequences {ζ^1_n} and {ζ^2_n}, where ζ^1_n = Σ_{m=0}^{n−1} a(m)M^1_{m+1} and ζ^2_n = Σ_{m=0}^{n−1} b(m)M^2_{m+1}, are convergent almost surely.
Proof. Although a proof of the above statement can be found in [2] or [8], we provide one for the sake of completeness. We only prove the almost sure convergence of ζ^1_n as the convergence of ζ^2_n can be similarly shown.
It is enough to show that

Σ_{m=0}^∞ E[ ‖ζ^1_{m+1} − ζ^1_m‖² | F_m ] < ∞ a.s., i.e., Σ_{m=0}^∞ a(m)² E[ ‖M^1_{m+1}‖² | F_m ] < ∞ a.s.

From assumption (A3) it follows that

Σ_{m=0}^∞ a(m)² E[ ‖M^1_{m+1}‖² | F_m ] ≤ K Σ_{m=0}^∞ a(m)² (1 + (‖x_m‖ + ‖y_m‖)²).

From assumptions (A2) and (A4) it follows that

K Σ_{m=0}^∞ a(m)² (1 + (‖x_m‖ + ‖y_m‖)²) < ∞ a.s.
We now prove a couple of technical results that are essential to the proofs
of Theorems 1 and 2.
Lemma 4. Given any y_0 ∈ R^k and ε > 0, there exists δ > 0 such that for all x ∈ N^δ(λ(y_0)), we have g(x, y_0) ⊆ N^ε(G(y_0)).
Proof. Assume the statement is not true. Then ∃ δ_n ↓ 0 and x_n ∈ N^{δ_n}(λ(y_0)) such that g(x_n, y_0) ⊄ N^ε(G(y_0)), n ≥ 1. In other words, ∃ γ_n ∈ g(x_n, y_0) with γ_n ∉ N^ε(G(y_0)) for each n ≥ 1. Since {x_n} and {γ_n} are bounded sequences there exist convergent sub-sequences, lim_{k→∞} x_{n(k)} = x and lim_{k→∞} γ_{n(k)} = γ. Since x_{n(k)} ∈ N^{δ_{n(k)}}(λ(y_0)) and δ_{n(k)} ↓ 0 it follows that x ∈ λ(y_0) and hence g(x, y_0) ⊆ G(y_0). We also have that γ ∉ N^ε(G(y_0)) as γ_{n(k)} ∉ N^ε(G(y_0)) for all k ≥ 1. Since g is upper semi-continuous it follows that γ ∈ g(x, y_0) and hence γ ∈ G(y_0). This is a contradiction.
Lemma 5. Let x_0 ∈ R^d and y_0 ∈ R^k be such that the statement of Lemma 4 is satisfied (with x_0 in place of x). If lim_{n→∞} x_n = x_0 and lim_{n→∞} y_n = y_0 then ∃ N such that ∀ n ≥ N, g(x_n, y_n) ⊆ N^ε(G(y_0)).
Proof. If not, ∃ {n(k)} ⊆ {n} such that lim_{k→∞} n(k) = ∞ and g(x_{n(k)}, y_{n(k)}) ⊄ N^ε(G(y_0)). Without loss of generality assume that {n(k)} = {n}. In other words, ∃ γ_n ∈ g(x_n, y_n) such that γ_n ∉ N^ε(G(y_0)) for all n ≥ 1. Since {γ_n} is a bounded sequence, it has a convergent sub-sequence, i.e., lim_{m→∞} γ_{n(m)} = γ. Since lim_{m→∞} x_{n(m)} = x_0, lim_{m→∞} y_{n(m)} = y_0 and g is upper semi-continuous, it follows that γ ∈ g(x_0, y_0) and finally from Lemma 4 we get that γ ∈ N^ε(G(y_0)). This is a contradiction.
Before we proceed let us construct trajectories, using (1), with respect to the faster timescale. Define t(0) := 0, t(n) := Σ_{i=0}^{n−1} a(i), n ≥ 1. The linearly interpolated trajectory x(t), t ≥ 0, is constructed from the sequence {x_n} as follows: let x(t(n)) := x_n and for t ∈ (t(n), t(n + 1)), let

x(t) := [(t(n+1) − t)/(t(n+1) − t(n))] x(t(n)) + [(t − t(n))/(t(n+1) − t(n))] x(t(n+1)).          (3)

We construct a piecewise constant trajectory from the sequence {u_n} as follows: u(t) := u_n for t ∈ [t(n), t(n + 1)), n ≥ 0.
Let us construct trajectories with respect to the slower timescale in a similar manner. Define s(0) := 0, s(n) := Σ_{i=0}^{n−1} b(i), n ≥ 1. Let ỹ(s(n)) := y_n and for s ∈ (s(n), s(n + 1)), let

ỹ(s) := [(s(n+1) − s)/(s(n+1) − s(n))] ỹ(s(n)) + [(s − s(n))/(s(n+1) − s(n))] ỹ(s(n+1)).          (4)

Also ṽ(s) := v_n for s ∈ [s(n), s(n + 1)), n ≥ 0, is the corresponding piecewise constant trajectory.
For s ≥ 0, let x^s(t), t ≥ 0, denote the solution to ẋ^s(t) = u(s + t) with the initial condition x^s(0) = x(s). Similarly, let y^s(t), t ≥ 0, denote the solution to ẏ^s(t) = ṽ(s + t) with the initial condition y^s(0) = ỹ(s).
The y iterate in recursion (1) can be re-written as

y_{n+1} = y_n + a(n)[(b(n)/a(n)) v_n + (b(n)/a(n)) M^2_{n+1}].          (5)

Define ε(n) := (b(n)/a(n)) v_n and M^3_{n+1} := (b(n)/a(n)) M^2_{n+1}. It can be shown that the stochastic iteration given by y_{n+1} = y_n + a(n)M^3_{n+1} satisfies the set of assumptions given in Benaïm [2]. From (A1), (A2) and (A4) it follows that ε(n) → 0 almost surely. Since ε(n) → 0, the recursion given by (5) and y_{n+1} = y_n + a(n)M^3_{n+1} have the same asymptotics. For a precise statement and proof the reader is referred to Lemma 2.1 of [7].
Define y(t(n)) := y_n, where n ≥ 0, and y(t) for t ∈ (t(n), t(n + 1)) by

y(t) := [(t(n+1) − t)/(t(n+1) − t(n))] y(t(n)) + [(t − t(n))/(t(n+1) − t(n))] y(t(n+1)).          (6)

The trajectory y(·) can be seen as an evolution of the y iterate with respect to the faster timescale, {a(n)}.
Lemma 6. Almost surely, every limit point y(·) of {y(s+·) | s ≥ 0} in C([0,∞), R^k) as s → ∞ satisfies y(t) = y(0), t ≥ 0.

Proof. It can be shown that y_{n+1} = y_n + a(n) M^3_{n+1} satisfies the assumptions of Benaïm [2]. Hence the corresponding linearly interpolated trajectory tracks the solution to ẏ(t) = 0. The statement of the lemma then follows trivially.
Lemma 7. For any T > 0, lim_{s→∞} sup_{t∈[0,T]} ‖x(s+t) − x^s(t)‖ = 0 and lim_{s→∞} sup_{t∈[0,T]} ‖ỹ(s+t) − y^s(t)‖ = 0, a.s.

Proof. In order to prove the above lemma, it is enough to prove the following:

lim_{t(n)→∞} sup_{0 ≤ t(n+m)−t(n) ≤ T} ‖x(t(n+m)) − x^{t(n)}(t(n+m) − t(n))‖ = 0 and

lim_{s(n)→∞} sup_{0 ≤ s(n+m)−s(n) ≤ T} ‖ỹ(s(n+m)) − y^{s(n)}(s(n+m) − s(n))‖ = 0 a.s.
Note the following:

x(t(n+m)) = x(t(n)) + ∑_{k=0}^{m−1} a(n+k) [ u(t(n+k)) + M^1_{n+k+1} ],

x^{t(n)}(t(n+m) − t(n)) = x(t(n)) + ∫_0^{t(n+m)−t(n)} u(t(n)+z) dz = x(t(n)) + ∫_{t(n)}^{t(n+m)} u(z) dz.   (7)

From (7) we get

‖x(t(n+m)) − x^{t(n)}(t(n+m) − t(n))‖ = ‖ ∑_{k=0}^{m−1} a(n+k) u(t(n+k)) − ∑_{k=0}^{m−1} ∫_{t(n+k)}^{t(n+k+1)} u(z) dz + ∑_{k=0}^{m−1} a(n+k) M^1_{n+k+1} ‖.

The R.H.S. of the above equation equals ‖ ∑_{k=0}^{m−1} a(n+k) M^1_{n+k+1} ‖, since

∑_{k=0}^{m−1} a(n+k) u(t(n+k)) = ∑_{k=0}^{m−1} ∫_{t(n+k)}^{t(n+k+1)} u(z) dz.

Since ζ^1_n := ∑_{m=0}^{n−1} a(m) M^1_{m+1}, n ≥ 1, converges a.s., the first part of the claim follows. The second part, for the y iterates, can be similarly proven.
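The cancellation used in the proof — the weighted sum over the grid equals the integral of the piecewise constant trajectory u(·) exactly — can be checked numerically (our sketch with made-up step sizes and values):

```python
import numpy as np

rng = np.random.default_rng(0)
a = 1.0 / np.arange(1, 21)               # made-up step sizes a(n)
t = np.concatenate(([0.0], np.cumsum(a)))
u_vals = rng.standard_normal(20)         # made-up values u_n on [t(n), t(n+1))

# Left-hand side: sum_k a(n+k) * u(t(n+k)).
lhs = np.sum(a * u_vals)

# Right-hand side: integral of the piecewise constant u over [t(0), t(20)],
# computed interval by interval; each piece contributes u_n * (t(n+1) - t(n)).
rhs = np.sum(u_vals * np.diff(t))

print(abs(lhs - rhs))  # zero up to floating-point round-off
```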
From assumptions (A1) and (A4) it follows that {x^r(·) | r ≥ 0} and {y^r(·) | r ≥ 0} are equicontinuous and pointwise bounded families of functions. By the Arzelà–Ascoli theorem they are relatively compact in C([0,∞), R^d) and C([0,∞), R^k), respectively. From Lemma 7 it then follows that {x(r+·) | r ≥ 0} and {ỹ(r+·) | r ≥ 0} are also relatively compact; see (3) and (4) for the definitions of x(·) and ỹ(·), respectively.
3.1 Convergence in the faster timescale

The following theorem and its proof are similar to Theorem 2 from Chapter 5 of Borkar [8]. We present a proof for the sake of completeness.
Theorem 1. Almost surely, every limit point of {x(r+·) | r ≥ 0} in C([0,∞), R^d) is of the form x(t) = x(0) + ∫_0^t u(z) dz, where u is a measurable function such that u(t) ∈ h(x(t), y(0)), t ≥ 0, for some fixed y(0) ∈ R^k.
Proof. Fix T > 0. Then {u(r+t) | t ∈ [0,T]}, r ≥ 0, can be viewed as a subset of L^2([0,T], R^d). From (A1) and (A4) it follows that the above is uniformly bounded and hence weakly relatively compact. Let {r(n)} be a sequence such that the following hold:

(i) lim_{n→∞} r(n) = ∞.

(ii) There exists some x(·) ∈ C([0,∞), R^d) such that x(r(n)+·) → x(·) in C([0,∞), R^d). This is because {x(r+·) | r ≥ 0} is relatively compact in C([0,∞), R^d).

(iii) y(r(n)+·) → y(·) in C([0,∞), R^k) for some y ∈ C([0,∞), R^k). It follows from Lemma 6 that y(t) = y(0) for all t ≥ 0.

(iv) u(r(n)+·) → u(·) weakly in L^2([0,T], R^d).

From Lemma 7 it follows that x^{r(n)}(·) → x(·) in C([0,∞), R^d), and we have that ∫_0^t u(r(n)+z) dz → ∫_0^t u(z) dz for t ∈ [0,T]. Letting n → ∞ in

x^{r(n)}(t) = x^{r(n)}(0) + ∫_0^t u(r(n)+z) dz,  t ∈ [0,T],

we get x(t) = x(0) + ∫_0^t u(z) dz, t ∈ [0,T].
Since u(r(n)+·) → u(·) weakly in L^2([0,T], R^d), there exists {n(k)} ⊂ {n} such that n(k) ↑ ∞ and

(1/N) ∑_{k=1}^{N} u(r(n(k))+·) → u(·)

strongly in L^2([0,T], R^d). Further, there exists {N(m)} ⊂ {N} such that N(m) ↑ ∞ and

(1/N(m)) ∑_{k=1}^{N(m)} u(r(n(k))+·) → u(·)   (8)

a.e. in [0,T].
Define [t] := max{t(n) | t(n) ≤ t}. If we fix t_0 ∈ [0,T] such that (8) holds, then u(r(n(k)) + t_0) ∈ h(x([r(n(k)) + t_0]), y([r(n(k)) + t_0])) for k ≥ 1. Since lim_{k→∞} ‖x(r(n(k)) + t_0) − x([r(n(k)) + t_0])‖ = 0, it follows that lim_{k→∞} x([r(n(k)) + t_0]) = x(t_0), and similarly we have that lim_{k→∞} y([r(n(k)) + t_0]) = y(0). Since h is upper semi-continuous, it follows that lim_{k→∞} d(u(r(n(k)) + t_0), h(x(t_0), y(0))) = 0. The set h(x(t_0), y(0)) is compact and convex, hence it follows from (8) that u(t_0) ∈ h(x(t_0), y(0)).
3.2 Convergence in the slower timescale

Theorem 2. For any ε > 0, almost surely any limit point of {ỹ(r+·) | r ≥ 0} in C([0,∞), R^k) is of the form y(t) = y(0) + ∫_0^t v(z) dz, where v is a measurable function such that v(t) ∈ N^ε(G(y(t))), t ≥ 0.
Proof. Fix T > 0. As before, let {r(n)}_{n≥1} be a sequence such that the following hold:

(i) lim_{n→∞} r(n) = ∞.

(ii) ỹ(r(n)+·) → y(·) in C([0,∞), R^k), where y(·) ∈ C([0,∞), R^k).

(iii) ṽ(r(n)+·) → v(·) weakly in L^2([0,T], R^k).

Also, as before, we have the following:

(i) There exists {n(k)} ⊆ {n} such that (1/N) ∑_{k=1}^{N} ṽ(r(n(k))+·) → v(·) strongly in L^2([0,T], R^k) as N → ∞.

(ii) There exists {N(m)} ⊂ {N} such that N(m) ↑ ∞ and

(1/N(m)) ∑_{k=1}^{N(m)} ṽ(r(n(k))+·) → v(·)   (9)

a.e. on [0,T].
Define [s]′ := max{s(n) | s(n) ≤ s}, and choose t_0 ∈ (0,T) such that (9) is satisfied. Construct a sequence {m(n)}_{n≥1} ⊆ N such that s(m(n)) = [r(n) + t_0]′ for each n ≥ 1. Observe that y(t(m(n))) = ỹ(s(m(n))) and ṽ(r(n) + t_0) ∈ g(x(t(m(n))), y(t(m(n)))). If we show that there exists N such that for all n ≥ N, g(x(t(m(n))), y(t(m(n)))) ⊆ N^ε(G(y(t_0))), then (9) implies that v(t_0) ∈ N^ε(G(y(t_0))).
It remains to show the existence of such an N. We present a proof by contradiction. We may assume, without loss of generality, that for each n ≥ 1, g(x(t(m(n))), y(t(m(n)))) ⊄ N^ε(G(y(t_0))), i.e., there exists γ_n ∈ g(x(t(m(n))), y(t(m(n)))) such that γ_n ∉ N^ε(G(y(t_0))). Let S_1 be the set on which (A4) is satisfied and S_2 be the set on which Lemma 3 holds. Clearly P(S_1 ∩ S_2) = 1. For each ω ∈ S_1 ∩ S_2 there exists R(ω) < ∞ such that sup_n ‖x_n(ω) + y_n(ω)‖ ≤ R(ω) and sup_n K(1 + ‖y_n(ω)‖) ≤ R(ω). In what follows we merely use R, and the dependence on ω (the sample path) is understood to be implicit. From Lemma 1 it follows that, corresponding to ẋ(t) ∈ h(x(t), y(t_0)) and some δ > 0, there exists T_0, possibly dependent on R, such that for all t ≥ T_0, Φ_t(x_0) ∈ N^δ(λ(y(t_0))) for all x_0 ∈ B_R(0).

We construct a new sequence {l(n)}_{n≥1} from {m(n)}_{n≥1} such that t(l(n)) = min{t(m) | |t(m(n)) − t(m)| ≤ T_0}. Since {x(r+·) | r ≥ 0} is relatively compact in C([0,∞), R^d), it follows that x(t(l(n))+·) → x(·) in C([0,T_0], R^d). From Lemma 6 we can conclude that y(t(l(n))+·) → y(·) in C([0,T_0], R^k), where y(t) = y(t_0) for all t ∈ [0,T_0]. Lemma 6 only asserts that the limiting function is a constant; we recognize this constant to be y(t_0) since ‖y(t(l(n)) + T_0) − y(t(m(n)))‖ → 0 and y(t(l(n)) + T_0) → y(t_0). Note that in the foregoing discussion we can only assert the existence of convergent subsequences; again, for the sake of convenience we assume that the sequences at hand are both convergent. It follows from Theorem 1 that x(t) = x(0) + ∫_0^t u(z) dz, where u(t) ∈ h(x(t), y(t_0)). Since x(0) ∈ B_R(0), it follows that x(T_0) ∈ N^δ(λ(y(t_0))). From Lemma 4 we get g(x(T_0), y(t_0)) ⊆ N^ε(G(y(t_0))). Since ‖x(t(m(n))) − x(t(l(n)) + T_0)‖ → 0, it follows that x(t(m(n))) → x(T_0). It follows from Lemma 5 that there exists N such that for n ≥ N, g(x(t(m(n))), y(t(m(n)))) ⊆ N^ε(G(y(t_0))). This is a contradiction.
A direct consequence of the above theorem is that almost surely any limit point of {ỹ(r+·) | r ≥ 0} in C([0,∞), R^k) is of the form y(t) = y(0) + ∫_0^t v(z) dz, where v is a measurable function such that v(t) ∈ G(y(t)), t ≥ 0.
3.3 Main result

Theorem 3. Under assumptions (A1)–(A6), almost surely the set of accumulation points satisfies

{(x, y) | lim_{n→∞} d((x, y), (x_n, y_n)) = 0} ⊆ ⋃_{y∈A_0} {(x, y) | x ∈ λ(y)}.   (10)
Proof. The statement follows directly from Theorems 1 and 2.
Note that assumption (A6) allows us to narrow the set of interest. If (A6) does not hold, then we can only conclude that the R.H.S. of (10) is ⋃_{y∈R^k} {(x, y) | x ∈ λ(y)}. On the other hand, if (A6) holds and A_0 consists of a single point, say y_0, then the R.H.S. of (10) is {(x, y_0) | x ∈ λ(y_0)}. Further, if λ(y_0) is of cardinality one, then the R.H.S. of (10) is just (λ(y_0), y_0).
Remark: It may be noted that all proofs and conclusions in this paper will go through if (A1) is weakened to let g be upper semi-continuous and g(x, ·) be Marchaud on R^k for each fixed x ∈ R^d.
4 Application: An SA algorithm to solve the Lagrangian dual problem

Let f : R^d → R and g : R^d → R^k be two given functions. We want to minimize f(x) subject to the condition that g(x) ≤ 0 (every component of g(x) is non-positive). This problem can be stated in the following primal form:

inf_{x∈R^d} sup_{µ∈R^k, µ≥0} ( f(x) + µ^T g(x) ).   (11)
Let us consider the following two timescale SA algorithm to solve the primal (11):

µ_{n+1} = µ_n + a(n) [ ∇_µ( f(x_n) + µ_n^T g(x_n) ) + M^1_{n+1} ],
x_{n+1} = x_n − b(n) [ ∇_x( f(x_n) + µ_n^T g(x_n) ) + M^2_{n+1} ],   (12)

where a(n), b(n) > 0, ∑_{n≥0} a(n) = ∑_{n≥0} b(n) = ∞, ∑_{n≥0} a(n)^2 < ∞, ∑_{n≥0} b(n)^2 < ∞ and b(n)/a(n) → 0. Without loss of generality assume that sup_n a(n), b(n) ≤ 1. The sequences {M^1_n}_{n≥1} and {M^2_n}_{n≥1} are suitable martingale difference noise terms.
Suppose there exists x_0 ∈ R^d such that g(x_0) ≥ 0; then µ = (∞, . . . , ∞) maximizes f(x_0) + µ^T g(x_0). With respect to the faster timescale (µ) iterates, the slower timescale (x) iterates can be viewed as being "quasi-static"; see [8] for more details. It then follows from the aforementioned observation that the µ iterates cannot be guaranteed to be stable. In other words, we cannot use (12) to solve the primal problem.
If strong duality holds, then solving (11) is equivalent to solving its dual, given by:

sup_{µ∈R^k, µ≥0} inf_{x∈R^d} ( f(x) + µ^T g(x) ).   (13)
Further, the two timescale scheme to solve the dual problem is given by:

x_{n+1} = x_n − a(n) [ ∇_x( f(x_n) + µ_n^T g(x_n) ) + M^2_{n+1} ],
µ_{n+1} = µ_n + b(n) [ ∇_µ( f(x_n) + µ_n^T g(x_n) ) + M^1_{n+1} ].   (14)
Note that (14) is obtained by flipping the timescales of (12). Strong duality can be enforced if we assume the following:

(S1) f(x) = x^T Q x + b^T x + c, where Q is a positive semi-definite d × d matrix, b ∈ R^d and c ∈ R.

(S2) g is linear, i.e., g(x) = Ax, where A is a k × d matrix.

(S3) f is bounded from below.
The reader is referred to Bertsekas [6] for further details. For the purposes of this section we assume the following:

(S1)–(S3) are satisfied.

(A3)′ ∑_{n≥0} a(n) M^i_{n+1} < ∞ a.s., where i = 1, 2.
The sole purpose of (A3) in Section 2.2 is to ensure the convergence of the martingale noise terms, i.e., that (A3)′ holds. It is clear that (14) satisfies (A1) since (S1)–(S3) hold, while (A2) is the step-size assumption that is enforced. The stability of the µ iterates in (14) follows directly from strong duality and (A3)′. The µ iterates are "quasi-static" with respect to the x iterates. Further, since f(x) + µ_0^T g(x) is a convex function (from (S1) and (S2)), for a fixed µ_0, f(x) + µ_0^T g(x) achieves its minimum "inside" R^d. Hence the stability of the x iterates will follow from that of the µ iterates and (A3)′. In other words, (14) satisfies (A1), (A2), (A3)′ and (A4); see Section 2.2 for the definitions of (A1), (A2) and (A4).
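As an illustration (our own sketch, not from the paper), the dual scheme (14) can be simulated on a toy problem: f(x) = ‖x‖², g(x) = 1 − x₁ (the constraint x₁ ≥ 1; g is affine here, a harmless variant of (S2)), whose saddle point is x* = e₁, µ* = 2. The step-size exponents, the noise level and the projection of µ onto [0, ∞) are our own choices.

```python
import numpy as np

rng = np.random.default_rng(1)
d = 3
e1 = np.zeros(d); e1[0] = 1.0

# Toy problem: f(x) = ||x||^2, g(x) = 1 - x_1; saddle point x* = e1, mu* = 2.
def grad_x(x, mu):
    return 2.0 * x - mu * e1          # nabla_x (f(x) + mu * g(x))

def grad_mu(x):
    return 1.0 - x[0]                 # nabla_mu (f(x) + mu * g(x)) = g(x)

x, mu = np.zeros(d), 0.0
for n in range(1, 100001):
    a = n ** -0.6                     # faster steps: sum a = inf, sum a^2 < inf
    b = n ** -0.9                     # slower steps: b(n)/a(n) -> 0
    x = x - a * (grad_x(x, mu) + 0.1 * rng.standard_normal(d))
    mu = max(0.0, mu + b * (grad_mu(x) + 0.1 * rng.standard_normal()))

print(x, mu)                          # approximately e1 and 2
```

The x iterates run on the faster timescale and track the Lagrangian minimizer x = (µ/2)e₁ for the current µ; the slow µ dynamics then climbs the concave dual H(µ) toward µ* = 2.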
For a fixed µ_0, the minimizers of f(x) + µ_0^T g(x) constitute the global attractor of the o.d.e. ẋ(t) = −∇_x( f(x) + µ_0^T g(x) ). Our paradigm comes in handy when this attractor set is not a singleton, which is generally the case. In other words, we can define the following set-valued map λ_m : R^k → R^d, where λ_m(µ_0) is the global attractor of ẋ(t) = −∇_x( f(x) + µ_0^T g(x) ).
Now we check that (14) satisfies (A5). To do so it is enough to ensure that λ_m is an upper semi-continuous map. Recall that λ_m(µ) is the minimum set of f(x) + µ^T g(x) for each µ ∈ R^k. Dantzig, Folkman and Shapiro [9] studied the continuity of minimum sets of continuous functions. A wealth of sufficient conditions can be found in [9] which, when satisfied, guarantee "continuity" of the corresponding minimum sets. In our case, since (S1)–(S3) are satisfied, Corollary I.2.3 of [9] guarantees upper semi-continuity of λ_m.
Since (A1)–(A5) are satisfied by (14), it follows from Theorems 1 and 2 that:

(I) Almost surely, every limit point of {x(r+·) | r ≥ 0} in C([0,∞), R^d) is of the form x(t) = x(0) − ∫_0^t ∇_x( f(x(z)) + µ_0^T g(x(z)) ) dz for some x(0) ∈ R^d and some µ_0 ∈ R^k.

(II) Almost surely, any limit point of {µ̃(r+·) | r ≥ 0} in C([0,∞), R^k) is of the form µ(t) = µ(0) + ∫_0^t ν(z) dz for some measurable function ν with ν(t) ∈ G(µ(t)), t ≥ 0, and G(µ(t)) = co({∇_µ( f(x) + µ(t)^T g(x) ) | x ∈ λ_m(µ(t))}).
For the construction of x(·) and µ̃(·) see equations (3) and (4), respectively. If, in addition, (14) satisfies (A6), i.e., there exists A_µ ⊂ R^k that is the global attractor of µ̇(t) ∈ G(µ(t)), then it follows from Theorem 3 that almost surely any accumulation point of {(x_n, µ_n) | n ≥ 0} belongs to the set A := ⋃_{µ∈A_µ} {(x, µ) | x ∈ λ_m(µ)}. The attractor A_µ is the maximum set of H(µ) := inf_{x∈R^d} ( f(x) + µ^T g(x) ) subject to µ ≥ 0. It may be noted that H is a concave function that is bounded above as a consequence of strong duality. For any (x*, µ*) ∈ A we have that

f(x*) + (µ*)^T g(x*) = sup_{µ∈R^k, µ≥0} inf_{x∈R^d} ( f(x) + µ^T g(x) ).
In other words, almost surely the two timescale iterates given by (14) converge
to a solution of the dual (13). It follows from strong duality that they almost
surely converge to a solution of the primal (11).
5 Conclusions

In this paper we have presented a framework for the analysis of two timescale stochastic approximation algorithms with set-valued mean fields. Our framework generalizes the one by Perkins and Leslie. We note that the analysis of the faster timescale proceeds in a predictable manner, but the analysis of the slower timescale is, to the best of our knowledge, new to the literature. As an application, we analyze the two timescale scheme that arises from the Lagrangian dual problem in optimization using our framework. Our framework is applicable even when the minimum sets are not singletons.
References
[1] J. Aubin and A. Cellina. Differential Inclusions: Set-Valued Maps and
Viability Theory. Springer, 1984.
[2] M. Benaı̈m. A dynamical system approach to stochastic approximations.
SIAM J. Control Optim., 34(2):437–472, 1996.
[3] M. Benaı̈m. Dynamics of stochastic approximation algorithms. Séminaire
de Probabilités XXXIII, Lecture Notes in Mathematics, Springer, 1709:1–
68, 1999.
[4] M. Benaı̈m and M. W. Hirsch. Asymptotic pseudotrajectories and chain
recurrent flows, with applications. J. Dynam. Differential Equations, 8:141–
176, 1996.
[5] M. Benaı̈m, J. Hofbauer, and S. Sorin. Stochastic approximations and
differential inclusions. SIAM Journal on Control and Optimization, pages
328–348, 2005.
[6] D.P. Bertsekas. Convex Optimization Theory. Athena Scientific optimization and computation series. Athena Scientific, 2009.
[7] V. S. Borkar. Stochastic approximation with two time scales. Syst. Control
Lett., 29(5):291–294, 1997.
[8] V. S. Borkar. Stochastic Approximation: A Dynamical Systems Viewpoint.
Cambridge University Press, 2008.
[9] G. B. Dantzig, J. Folkman, and N. Shapiro. On the continuity of the
minimum set of a continuous function. Journal of Mathematical Analysis
and Applications, 17(3):519 – 548, 1967.
[10] Sathiya S. Keerthi and B. Ravindran. A tutorial survey of reinforcement learning. Indian Academy of Sciences, Proc. in Engineering Sciences,
19:851–889, 1994.
[11] H. Kushner and G.G. Yin. Stochastic Approximation and Recursive Algorithms and Applications. Springer, 2003.
[12] L. Ljung. Analysis of recursive stochastic algorithms. Automatic Control,
IEEE Transactions on, 22(4):551–575, 1977.
[13] S. Perkins and D.S. Leslie. Asynchronous stochastic approximation with
differential inclusions. Stochastic Systems, 2(2):409–446, 2012.
arXiv:1703.05038v2 [math.NA] 27 Feb 2018
Harmonic Mean Iteratively Reweighted Least Squares for
Low-Rank Matrix Recovery
Christian Kümmerle∗, Juliane Sigl†
February 28, 2018
Abstract

We propose a new iteratively reweighted least squares (IRLS) algorithm for the recovery of a matrix X ∈ C^{d1×d2} of rank r ≪ min(d1, d2) from incomplete linear observations, solving a sequence of low complexity linear problems. The easily implementable algorithm, which we call harmonic mean iteratively reweighted least squares (HM-IRLS), optimizes a non-convex Schatten-p quasi-norm penalization to promote low-rankness and carries three major strengths, in particular for the matrix completion setting. First, we observe a remarkable global convergence behavior of the algorithm's iterates to the low-rank matrix for relevant, interesting cases, for which any other state-of-the-art optimization approach fails the recovery. Secondly, HM-IRLS exhibits an empirical recovery probability close to 1 even for a number of measurements very close to the theoretical lower bound r(d1 + d2 − r), i.e., already for significantly fewer linear observations than any other tractable approach in the literature. Thirdly, HM-IRLS exhibits a locally superlinear rate of convergence (of order 2 − p) if the linear observations fulfill a suitable null space property. While for the first two properties we have so far only strong empirical evidence, we prove the third property as our main theoretical result.
1 Introduction
The problem of recovering a low-rank matrix from incomplete linear measurements or observations
has gained considerable attention in the last few years due to the omnipresence of low-rank models
in different areas of science and applied mathematics. Low-rank models arise in a variety of areas such as system identification [LHV13, LV10], signal processing [AR15], quantum tomography
[GLF+ 10, Gro11] and phase retrieval [CSV13, CESV13, GKK15]. An instance of this problem of particular importance, e.g., in recommender systems [SRJ05, GNOT92, CR09], is the matrix completion
problem, where the measurements correspond to entries of the matrix to be recovered.
Although the low-rank matrix recovery problem is NP-hard in general, several tractable algorithms have been proposed that allow for provable recovery in many important cases. The nuclear
norm minimization (NNM) approach [Faz02, CR09], which solves a surrogate semidefinite program,
is particularly well-understood. For NNM, recovery guarantees have been shown for a number of
∗ Department of Mathematics, Technische Universität München, Boltzmannstr. 3, 85748 Garching/Munich, Germany. E-mail: [email protected]
† Department of Mathematics, Technische Universität München, Boltzmannstr. 3, 85748 Garching/Munich, Germany. E-mail: [email protected]
measurements on the order of the information theoretical lower bound r (d 1 + d 2 − r ), if r denotes
the rank of a d 1 × d 2 -matrix [RFP10, CR09]; i.e., for a number of measurements m ≥ ρr (d 1 + d 2 − r )
with some oversampling constant ρ ≥ 1. Even though NNM is solvable in polynomial time, it can
be computationally very demanding if the problem dimensions are large, which is the case in many
potential applications. Another issue is that although the number of measurements necessary for
successful recovery by nuclear norm minimization is of optimal order, it is not optimal. More precisely, it turns out that the oversampling factor ρ of nuclear norm minimization has to be much
larger than the oversampling factor of some other, non-convex algorithmic approaches [ZL15, TW13].
These limitations of convex relaxation approaches have led to a rapidly growing line of research
discussing the advantages of non-convex optimization for the low-rank matrix recovery problem
[JMD10, TW13, HH09, JNS13, WYZ12, TW16, Van13, WCCL16, TBS+15]. For several of these non-convex algorithmic approaches, recovery guarantees comparable to those of NNM have been derived [CLS15, TBS+15, ZL15, SL16]. Their advantage is a higher empirical recovery rate and an often more efficient implementation. While there are some results about the global convergence of first-order methods minimizing a non-convex objective [GLM16, BNS16], so that the success of the method might not depend on a particular initialization, the assumptions of these results are not always optimal, e.g., in the scaling of the number of measurements m in the rank r [GLM16, Theorem 5.3]. In general, the success of many non-convex optimization approaches relies on a distinct, possibly expensive initialization step.
1.1 Contribution of this paper

In this spirit, we propose a new iteratively reweighted least squares (IRLS) algorithm for the low-rank matrix recovery problem¹ that strives to minimize a non-convex objective function based on the Schatten-p quasi-norm:

min_X ‖X‖_{S_p}^p   subject to   Φ(X) = Y,   (1)

for 0 < p < 1, where Φ : C^{d1×d2} → C^m is the linear measurement operator and Y ∈ C^m is the
data vector that define the problem. The overall strategy of the proposed IRLS algorithm is to mimic
this minimization by a sequence of weighted least squares problems. This strategy is shared by
the related previous algorithms of Fornasier, Rauhut & Ward [FRW11] and Mohan & Fazel [MF12]
which minimize (1) by defining iterates as
X^{(n+1)} = argmin_X ‖(W_L^{(n)})^{1/2} X‖_F²   subject to   Φ(X) = Y,   (2)

where W_L^{(n)} ≈ (X^{(n)} X^{(n)∗})^{(p−2)/2} is a so-called weight matrix which reweights the quadratic penalty by operating on the column space of the matrix variable. Thus, we call this column-reweighting type of IRLS algorithms IRLS-col. Due to the inherent symmetry, it is natural to conceive, still in the spirit of [FRW11, MF12], the algorithm IRLS-row

X^{(n+1)} = argmin_X ‖(W_R^{(n)})^{1/2} X^∗‖_F²   subject to   Φ(X) = Y   (3)

with W_R^{(n)} ≈ (X^{(n)∗} X^{(n)})^{(p−2)/2}, which reweights the quadratic penalty by acting on the row space of the matrix variable. We note that even for square dimensions d1 = d2, IRLS-col and IRLS-row do not coincide.

¹ The algorithm and partial results were presented at the 12th International Conference on Sampling Theory and Applications in Tallinn, Estonia, July 3–7, 2017. The corresponding conference paper has been published in its proceedings [KS17].
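For illustration (our sketch, not the paper's algorithm), a one-sided weight matrix of the form W_L^{(n)} ≈ (X^{(n)} X^{(n)∗})^{(p−2)/2} can be formed via an eigendecomposition; the ε-regularization that keeps the negative matrix power well defined is a standard device we assume here.

```python
import numpy as np

def left_weight(X, p, eps=1e-2):
    """Regularized column-space weight W_L = (X X* + eps*I)^((p-2)/2)."""
    d1 = X.shape[0]
    G = X @ X.conj().T + eps * np.eye(d1)   # Hermitian positive definite
    w, U = np.linalg.eigh(G)                # G = U diag(w) U*
    return (U * w ** ((p - 2) / 2)) @ U.conj().T

# Sanity check: for p = 2 the exponent is 0, the weight is the identity, and
# the weighted least squares step reduces to an unweighted one.
X = np.random.default_rng(0).standard_normal((4, 6))
W = left_weight(X, p=2.0)
print(np.allclose(W, np.eye(4)))  # True
```

For 0 < p < 2 the weight is Hermitian positive definite, which is what makes the reweighted quadratic penalty in (2) a valid norm.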
In this paper, as an important innovation, we propose the use of a different type of weight matrix, the so-called harmonic mean weight matrix, which can be interpreted as the harmonic mean of the matrices W_L^{(n)} and W_R^{(n)} above. This motivates the name harmonic mean iteratively reweighted least squares (HM-IRLS) for the corresponding algorithm. The harmonic mean of the weight matrices of IRLS-col and of IRLS-row in HM-IRLS is able to use the information in both the column and the row space of the iterates, and it also gives rise to a qualitatively better behavior than the use of more obvious symmetrizations as, e.g., the arithmetic mean of the weight matrices would allow for, both in theory and in practice.

We argue that the choice of harmonic mean weight matrices as in HM-IRLS leads to an efficient algorithm for the low-rank matrix recovery problem with fast convergence and superior performance in terms of sample complexity, also compared to algorithms based on strategies different from IRLS.
On the one hand, we show that the accumulation points of the iterates of HM-IRLS converge to stationary points of a smoothed Schatten-p functional under the linear constraint, as is known for, e.g., IRLS-col, cf. [FRW11, MF12]. On the other hand, we extend the theoretical guarantees based on a Schatten-p null space property (NSP) of the measurement operator [OMFH11, FR13] to HM-IRLS.

Our main theoretical result is that HM-IRLS exhibits a locally superlinear convergence rate of order 2 − p in the neighborhood of a low-rank matrix for the non-convexity parameter 0 < p < 1 connected to the Schatten-p quasi-norm, if the measurement operator fulfills the mentioned NSP of sufficient order. For p ≪ 1, this means that the convergence rate is almost quadratic.
Although parts of our theoretical results, as in the case of the IRLS algorithms of Fornasier, Rauhut & Ward [FRW11] and Mohan & Fazel [MF12], do not apply to the matrix completion setting, due to the popularity of the problem and for reasons of comparability with other algorithms we conduct numerical experiments to explore the empirical performance of HM-IRLS also for this setting. Surprisingly enough, we observe that the theoretical results comply with our numerical experiments also for matrix completion. In particular, the theoretically predicted local convergence rate of order 2 − p can be observed very precisely for this important measurement model as well (see Figures 3 to 5).
This local superlinear convergence rate is unprecedented for IRLS variants such as IRLS-col and those that use the arithmetic mean of the one-sided weight matrices: neither can a superlinear rate be verified numerically for them, nor is it possible to show such a rate by our proof techniques for any other IRLS variant. To the best of our knowledge, HM-IRLS is the first algorithm for low-rank matrix recovery which achieves a superlinear rate of convergence for low complexity measurements as well as for larger problems.
Additionally, we conduct extensive numerical experiments comparing the efficiency of HM-IRLS with previous IRLS algorithms such as IRLS-col, Riemannian optimization techniques [Van13], alternating minimization approaches [HH09, TW16], algorithms based on iterative hard thresholding [KC14, BTW15], and others [PKCS16], in terms of sample complexity, again for the important
case of matrix completion.
The experiments lead to the following observation: HM-IRLS recovers low-rank matrices systematically with an optimal number of measurements that is very close to the theoretical lower
bound on the number of measurements that is necessary for recovery with high empirical probability. We consider this result to be remarkable, as it means that for problems of moderate dimensionality (matrices of ≈ 10^7 variables, e.g., (d1 × d2)-matrices with d1 ≈ d2 ≈ 5·10^3) the proposed
algorithm needs fewer measurements for the recovery of a low-rank matrix than all the state-of-the-art
algorithms we included in our experiments (see Figure 6).
An important practical observation is that the performance of HM-IRLS is very robust to the choice of the initialization, so that it can be used as a stand-alone algorithm to recover low-rank matrices even starting from a trivial initialization. This is suggested by our numerical experiments, since even for random or adversarial initializations HM-IRLS converges to the low-rank matrix, even though it is based on a highly non-convex objective function. While a complete theoretical understanding of this behavior has not yet been achieved, we regard the empirical evidence in a variety of interesting cases as strong. In this context, we consider a proof of the global convergence of HM-IRLS for non-convex penalizations under appropriate assumptions an interesting open problem.
1.2 Outline

We proceed in the paper as follows. In the next section, we provide some background on Kronecker and Hadamard products of matrices, as these concepts are used in the analysis of the algorithm. Moreover, we explain different reformulations of the Schatten-p quasi-norm in terms of weighted ℓ2-norms, which lead to the derivation of the harmonic mean iteratively reweighted least squares (HM-IRLS) algorithm in Section 3. We present our main theoretical results, the convergence guarantees and the locally superlinear convergence rate of the algorithm, in Section 4. Numerical experiments and comparisons to state-of-the-art methods for low-rank matrix recovery are carried out in Section 5. In Section 6, we interpret the algorithm's different steps as minimizations of an auxiliary functional with respect to its arguments and show theoretical guarantees for HM-IRLS extending similar guarantees for IRLS-col. After this, we detail the proof of the locally superlinear convergence rate under appropriate assumptions on the null space of the measurement operator.
2 Notation and background

2.1 General notation, Schatten-p and weighted norms
In this section, we explain some of the notation we use in the course of this paper. The set of matrices X ∈ C^{d1×d2} is denoted by M_{d1×d2}. Unless stated otherwise, vectors x ∈ C^d are considered as column vectors. We also use the vectorized form X_vec = [X_1^T, …, X_j^T, …, X_{d2}^T]^T ∈ C^{d1 d2} of a matrix X ∈ M_{d1×d2} with columns X_j, j ∈ {1, …, d2}. The reverse recast of a vector x ∈ C^{d1 d2} into a matrix of dimension d1 × d2 is denoted by x_mat(d1,d2) = [X_1, …, X_j, …, X_{d2}], where X_j = [x_{(d1−1)·j+1}, …, x_{(d1−1)·j+d1}]^T, j = 1, …, d2, are column vectors, or x_mat if the dimensions are clear from the context. Obviously, it holds that X = (X_vec)_mat.

The identity matrix in dimension d × d is denoted by I_d. With 0_{d1×d2} ∈ M_{d1×d2} and 1_{d1×d2} ∈ M_{d1×d2} we denote the matrices with only 0- or 1-entries, respectively. The set of Hermitian matrices is denoted by H_{d×d} := {X ∈ M_{d×d} | X = X^∗}. We write X^+ ∈ M_{d1×d2} for the Moore–Penrose inverse of the matrix X ∈ M_{d1×d2}.
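In NumPy terms (our own remark), this column-stacking vectorization corresponds to Fortran-order flattening, and the reverse recast is a Fortran-order reshape:

```python
import numpy as np

X = np.array([[1, 2, 3],
              [4, 5, 6]])               # 2 x 3, columns [1,4], [2,5], [3,6]

X_vec = X.flatten(order="F")            # column stacking: [1 4 2 5 3 6]
X_back = X_vec.reshape(2, 3, order="F") # the reverse recast x_mat

print(X_vec)
print(np.array_equal(X, X_back))        # True: X = (X_vec)_mat
```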
Let U_d = {U ∈ C^{d×d} ; U U^∗ = I_d} denote the set of unitary matrices. Then the singular value decomposition of a matrix X ∈ M_{d1×d2} can be written as X = U Σ V^∗ with U ∈ U_{d1}, V ∈ U_{d2} and Σ ∈ M_{d1×d2}, where Σ is diagonal and contains the singular values of X such that Σ_ii = σ_i(X) ≥ 0 for i ∈ {1, …, min(d1, d2)}. We define the Schatten-p (quasi-)norm of X ∈ M_{d1×d2} as

‖X‖_{S_p} := rank(X)  for p = 0,
‖X‖_{S_p} := [ ∑_{j=1}^{min(d1,d2)} σ_j(X)^p ]^{1/p}  for 0 < p < ∞,
‖X‖_{S_p} := σ_max(X)  for p = ∞.   (4)
Note that for p = 1, the Schatten-p norm is also called nuclear norm, written as kX k∗ := kX kS1 .
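Definition (4) translates directly into code via the SVD (our sketch):

```python
import numpy as np

def schatten_p(X, p):
    """Schatten-p (quasi-)norm of X as in (4)."""
    s = np.linalg.svd(X, compute_uv=False)   # singular values, descending
    if p == 0:
        return np.count_nonzero(s > 1e-12)   # rank, up to numerical tolerance
    if p == np.inf:
        return s.max()                       # spectral norm
    return (s ** p).sum() ** (1.0 / p)

X = np.diag([3.0, 4.0])
print(schatten_p(X, 1))        # nuclear norm: 3 + 4 = 7
print(schatten_p(X, 2))        # Frobenius norm: 5
print(schatten_p(X, np.inf))   # spectral norm: 4
```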
The trace tr[X] of a matrix X ∈ M_{d1×d2} is defined as the sum of its diagonal elements, tr[X] = ∑_{j=1}^{min(d1,d2)} X_jj. It can be seen that the p-th power of the Schatten-p norm coincides with ‖X‖_{S_p}^p = tr[(X^∗X)^{p/2}]. The Schatten-2 norm is also called Frobenius norm and has the property that it is induced by the Frobenius scalar product ⟨X, Y⟩_F = tr[X^∗Y], i.e., ‖X‖_F = ‖X‖_{S_2} = √⟨X, X⟩_F. We define the weighted Frobenius scalar product of two matrices X, Y ∈ M_{d1×d2}, weighted by the positive definite weight matrix W ∈ H_{d1×d1}, as ⟨X, Y⟩_{F(W)} := ⟨W X, Y⟩_F = ⟨X, W Y⟩_F. This scalar product induces the weighted Frobenius norm ‖X‖_{F(W)} = √⟨X, X⟩_{F(W)} = √(tr[(W X)^∗ X]). It is clear that the Frobenius norm of a matrix X coincides with the ℓ2-norm of its vectorization X_vec, i.e., ‖X‖_F = ‖X_vec‖_{ℓ2}.

Similar to weighted Frobenius norms, we define the weighted ℓ2-scalar product of vectors x, y ∈ C^d, weighted by the positive definite weight matrix W ∈ H_{d×d}, as ⟨x, y⟩_{ℓ2(W)} = x^∗ W y, and its induced weighted ℓ2-norm as ‖x‖_{ℓ2(W)} = √(x^∗ W x). We use the notation X ≻ 0 for a positive definite matrix X ∈ H_{d×d}. Furthermore, we denote the range of a linear map Φ : M_{d1×d2} → C^m by Ran(Φ) = {Y ∈ C^m ; there is X ∈ M_{d1×d2} such that Y = Φ(X)} and its null space by N(Φ) = {X ∈ M_{d1×d2} ; Φ(X) = 0}.
2.2 Problem setting and characterization of S_p- and reweighted Frobenius norm minimizers
Given a linear map Φ : M_{d1×d2} → C^m such that m ≪ d1 d2, we want to uniquely identify and reconstruct an unknown matrix X_0 from its linear image Y := Φ(X_0) ∈ C^m. However, basic linear algebra tells us that this is not possible without further assumptions, since Φ is not injective if m < d1 d2. Indeed, there is a (d1 d2 − m)-dimensional affine space {X_0} + N(Φ) fulfilling the linear constraint Φ(X) = Y.
and under appropriate assumptions on the map Φ, the recovery of X 0 is possible by solving the affine
rank minimization problem
min rank(X ) subject to Φ(X ) = Y .
(5)
The unique solvability of (5) is given with high probability if, for example, Φ is a linear map whose
matrix representation has i.i.d. Gaussian entries [ENP12] and m = Ω(r (d 1 + d 2 )). Unfortunately,
solving (5) is intractable in general, but the works [CR09, RFP10, CP11] suggest solving the tractable
convex optimization program

min ‖X‖_{S_1} subject to Φ(X) = Y,  (6)
also called nuclear norm minimization (NNM), as a proxy.
As discussed in the introduction, there are empirical as well as theoretical results (e.g., in [DDFG10,
Cha07]) coming from the related sparse vector recovery problem that suggest alternative relaxation
approaches. These results indicate that it might be even more advantageous to solve the non-convex
problem

min F_p(X) := ‖X‖_{S_p}^p subject to Φ(X) = Y,  (7)
for 0 < p < 1, i.e., minimizing the p-th power of the Schatten-p quasi-norms under the affine
constraint. Heuristically, the choice of p < 1 relatively small can be motivated by the observation
that by the definition (4) of the Schatten-p quasi-norm

‖X‖_{S_p}^p → rank(X) =: ‖X‖_{S_0}  as p → 0.
The above consideration suggests that the solution of (7) might be closer to (5) than (6) for small
p. On the other hand, again, it is in general computationally intractable to find a global minimum
of the non-convex optimization problem (7) if p < 1. Therefore it is a natural and very relevant
question to ask which optimization algorithm to use to find global minimizers of (7).
In this paper, we discuss an algorithm striving to solve (7) that is based on the following observations. Assume for the moment that we are given a square matrix X ∈ M_{d_1×d_2} with d_1 = d_2 of full rank. Then, we can rewrite the p-th power of its Schatten-p quasi-norm as a square of a weighted Frobenius norm, or, using Kronecker product notation as explained in Appendix A, as a square of a weighted ℓ_2-norm (if we use the vectorized notation X_vec): It turns out that
(i) ‖X‖_{S_p}^p = tr[(XX^*)^{p/2}] = tr[(XX^*)^{(p−2)/2}(XX^*)] = tr(W_L XX^*) = ‖W_L^{1/2} X‖_F^2
    = ‖X‖_{F(W_L)}^2 = ‖(I_{d_2} ⊗ W_L)^{1/2} X_vec‖_{ℓ_2}^2 = ‖X_vec‖_{ℓ_2(I_{d_2} ⊗ W_L)}^2,

where W_L is the symmetric weight matrix (XX^*)^{(p−2)/2} in M_{d_1×d_1} and I_{d_2} ⊗ W_L is the block diagonal weight matrix in M_{d_1d_2×d_1d_2} with d_2 instances of W_L on the diagonal blocks, but also that
(ii) ‖X‖_{S_p}^p = tr[(X^*X)^{p/2}] = tr[(X^*X)(X^*X)^{(p−2)/2}] = tr(X^*X W_R) = ‖X W_R^{1/2}‖_F^2
    = ‖X^*‖_{F(W_R)}^2 = ‖(W_R ⊗ I_{d_1})^{1/2} X_vec‖_{ℓ_2}^2 = ‖X_vec‖_{ℓ_2(W_R ⊗ I_{d_1})}^2,

where W_R is the symmetric weight matrix (X^*X)^{(p−2)/2} in M_{d_2×d_2}. It follows from the definition of the Kronecker product that the weight matrix W_R ⊗ I_{d_1} ∈ M_{d_1d_2×d_1d_2} is a block matrix of diagonal blocks of the type diag((W_R)_{ij}, . . . , (W_R)_{ij}) ∈ M_{d_1×d_1}, i, j ∈ [d_2].
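The identities (i) and (ii) are easy to verify numerically. The following NumPy sketch (variable names ours) builds W_L and W_R from the SVD of a random full-rank square X and checks that tr(W_L XX^*), tr(X^*X W_R) and the vectorized quadratic form all reproduce ‖X‖_{S_p}^p; vectorization is column-major so that vec(W_L X) = (I_{d_2} ⊗ W_L) vec(X):

```python
import numpy as np

rng = np.random.default_rng(1)
d, p = 4, 0.5
X = rng.standard_normal((d, d))          # generic, hence full rank almost surely

U, s, Vt = np.linalg.svd(X)
schatten_p_pow = np.sum(s ** p)          # ||X||_{S_p}^p

WL = U @ np.diag(s ** (p - 2)) @ U.T     # (X X^*)^{(p-2)/2}
WR = Vt.T @ np.diag(s ** (p - 2)) @ Vt   # (X^* X)^{(p-2)/2}

lhs_i = np.trace(WL @ X @ X.T)           # identity (i): tr(W_L X X^*)
lhs_ii = np.trace(X.T @ X @ WR)          # identity (ii): tr(X^* X W_R)
assert np.isclose(lhs_i, schatten_p_pow)
assert np.isclose(lhs_ii, schatten_p_pow)

# Vectorized form with column-major (column-stacking) vectorization.
xvec = X.ravel(order="F")
W_big = np.kron(np.eye(d), WL)           # I_{d_2} (x) W_L, block diagonal
assert np.isclose(xvec @ W_big @ xvec, schatten_p_pow)
```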
Figure 1: Sparsity structure of the weight matrices (a) I_{d_2} ⊗ W_L and (b) W_R ⊗ I_{d_1} in M_{d_1d_2×d_1d_2}
The sparsity structures of I_{d_2} ⊗ W_L and W_R ⊗ I_{d_1} are illustrated in Fig. 1. Note that a representation of ‖X‖_{S_p}^p by squares of Frobenius norms can be achieved by multiplying X by W_L^{1/2} from the left in (i), or by W_R^{1/2} from the right in (ii).
The above calculations are not well-defined if X is not of full rank or if d_1 ≠ d_2, since in these cases at least one of the matrices XX^* ∈ M_{d_1×d_1} or X^*X ∈ M_{d_2×d_2} is singular, prohibiting the definition of the matrices W_R = (X^*X)^{(p−2)/2} or W_L = (XX^*)^{(p−2)/2} for p < 2. However, these issues can be overcome by introducing a smoothing parameter ε > 0 and smoothed weight matrices W_L(X, ε) ∈ M_{d_1×d_1} and W_R(X, ε) ∈ M_{d_2×d_2} defined by

W_L(X, ε) := (XX^* + ε^2 I_{d_1})^{(p−2)/2},  (8)
W_R(X, ε) := (X^*X + ε^2 I_{d_2})^{(p−2)/2}.  (9)
Remark 1. The weight matrices WL (X , ϵ) and WR (X , ϵ) are symmetric and positive definite.
The possibility to rewrite the p-th power of the Schatten-p quasi-norm of a matrix as a weighted Frobenius
norm gives rise to the general strategy of IRLS algorithms for low-rank matrix recovery: Weighted
least squares problems of the type

min_{X ∈ M_{d_1×d_2}, Φ(X)=Y} ‖X‖_{F(W_L)}^2   or   min_{X ∈ M_{d_1×d_2}, Φ(X)=Y} ‖X^*‖_{F(W_R)}^2

are solved and the weight matrices W_L resp. W_R are updated alternatingly, leading to the algorithms column-reweighting IRLS-col and row-reweighting IRLS-row, respectively [MF12, FRW11].
2.3 Averaging of weight matrices
While the algorithms IRLS-col and IRLS-row provide a tractable local minimization strategy
of smoothed Schatten-p functionals under the linear constraint, we argue that it is suboptimal to
follow either one of the two approaches as they do not exploit the symmetry of the problem in an
optimal way: They either use low-rank information in the column space or in the row space.
A first intuitive approach towards a symmetric exploitation of the low-rank structure is inspired by the following identity, obtained by combining the calculations (i) and (ii) carried out in Section 2.2.
Lemma 2. Let 0 < p ≤ 2 and X ∈ M_{d_1×d_2} with d = d_1 = d_2 be a full rank matrix. Then

‖X‖_{S_p}^p = (1/2)‖W_L^{1/2} X‖_F^2 + (1/2)‖X W_R^{1/2}‖_F^2 = ‖((W_L ⊕ W_R)/2)^{1/2} X_vec‖_{ℓ_2}^2 = ‖X_vec‖_{ℓ_2(W_(arith))}^2,

where

W_(arith) := (1/2)(I_{d_2} ⊗ W_L + W_R ⊗ I_{d_1}) = (W_L ⊕ W_R)/2

is the arithmetic mean matrix of the symmetric and positive definite weight matrices I_{d_2} ⊗ W_L and W_R ⊗ I_{d_1}, W_L := (XX^*)^{(p−2)/2}, and W_R := (X^*X)^{(p−2)/2}.
Unfortunately, the introduction of arithmetic mean weight matrices does not prove to be particularly advantageous compared to one-sided reweighting strategies: no convincing improvements can be noted, neither in numerical experiments nor in the theoretical investigation of the convergence rate of IRLS for low-rank matrix recovery, cf. also Section 5.2 and Remark 22.
In contrast, we want to promote the usage of the harmonic mean of the weight matrices I_{d_2} ⊗ W_L and W_R ⊗ I_{d_1}, i.e., weight matrices of the type

2(W_R^{−1} ⊗ I_{d_1} + I_{d_2} ⊗ W_L^{−1})^{−1} = 2(W_L^{−1} ⊕ W_R^{−1})^{−1} =: W_(harm).

In the remaining parts of the paper, we explain why W_(harm) is able to significantly outperform other weighting variants both theoretically and practically.
The following lemma verifies that also the harmonic mean of the weight matrices Id2 ⊗ WL and
WR ⊗ Id1 leads to a legitimate reformulation of the Schatten-p quasi-norm power.
Lemma 3. Let 0 < p ≤ 2 and X ∈ C^{d_1×d_2} with d = d_1 = d_2 be a full rank matrix. Then

‖X‖_{S_p}^p = ‖(2(W_L^{−1} ⊕ W_R^{−1})^{−1})^{1/2} X_vec‖_{ℓ_2}^2 = ‖X_vec‖_{ℓ_2(W_(harm))}^2,

where

2(W_R^{−1} ⊗ I_{d_1} + I_{d_2} ⊗ W_L^{−1})^{−1} = 2(W_L^{−1} ⊕ W_R^{−1})^{−1} =: W_(harm)

is the harmonic mean matrix of the symmetric and positive definite weight matrices I_{d_2} ⊗ W_L and W_R ⊗ I_{d_1}, W_L := (XX^*)^{(p−2)/2} and W_R := (X^*X)^{(p−2)/2}.
Proof. Let X = UΣV^* = Σ_{i=1}^d σ_i u_i v_i^* ∈ M_{d×d} be the singular value decomposition of X. Therefore, for the vectorized version, X_vec = (V ⊗ U)Σ_vec holds true. By the definitions of W_L and W_R, we can write W_L^{−1} = Σ_{i=1}^d σ_i^{2−p} u_i u_i^* and W_R^{−1} = Σ_{i=1}^d σ_i^{2−p} v_i v_i^*. Using the Kronecker sum inversion formula of Lemma 23 in Appendix A, we obtain

‖X_vec‖_{ℓ_2(W_(harm))}^2 = ‖W_(harm)^{1/2} X_vec‖_{ℓ_2}^2 = ‖(2(W_L^{−1} ⊕ W_R^{−1})^{−1})^{1/2} X_vec‖_{ℓ_2}^2
= 2 tr([(W_L^{−1} ⊕ W_R^{−1})^{−1} X_vec]_mat^* X)
= Σ_{i,j=1}^d (2/(σ_i^{2−p} + σ_j^{2−p})) |⟨u_i, X v_j⟩|^2
= 2 Σ_{i=1}^d σ_i^2/(2σ_i^{2−p}) = Σ_{i=1}^d σ_i^p = ‖X‖_{S_p}^p,

where the second-to-last equality uses that ⟨u_i, X v_j⟩ = σ_i δ_{ij}, which finishes the proof.
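The identity of Lemma 3 can likewise be checked numerically by forming the Kronecker sum explicitly (only advisable for tiny dimensions). The sketch below, with our variable names, confirms that the harmonic mean weight matrix reproduces ‖X‖_{S_p}^p on a random full-rank 4 × 4 matrix:

```python
import numpy as np

rng = np.random.default_rng(2)
d, p = 4, 0.5
X = rng.standard_normal((d, d))
U, s, Vt = np.linalg.svd(X)

# W_L^{-1} = (X X^*)^{(2-p)/2} and W_R^{-1} = (X^* X)^{(2-p)/2}.
WL_inv = U @ np.diag(s ** (2 - p)) @ U.T
WR_inv = Vt.T @ np.diag(s ** (2 - p)) @ Vt

# Kronecker sum A (+) B acting on column-major vectorizations:
# I (x) A + B (x) I, so that it maps vec(Z) to vec(A Z + Z B).
kron_sum = np.kron(np.eye(d), WL_inv) + np.kron(WR_inv, np.eye(d))
W_harm = 2 * np.linalg.inv(kron_sum)     # harmonic mean weight matrix

xvec = X.ravel(order="F")
assert np.isclose(xvec @ W_harm @ xvec, np.sum(s ** p))
```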
3 Harmonic mean iteratively reweighted least squares algorithm
In this section, we use this idea to formulate a new iteratively reweighted least squares algorithm
for low-rank matrix recovery. The so-called harmonic mean iteratively reweighted least squares algorithm (HM-IRLS) solves a sequence of weighted least squares problems to recover a low-rank
matrix X 0 ∈ Md1 ×d2 from few linear measurements Φ(X 0 ) ∈ Cm . The weight matrices appearing in
the least squares problems can be seen as the harmonic mean of the weight matrices in (8) and (9),
i.e., the ones used by IRLS-col and IRLS-row.
More precisely, for 0 < p ≤ 1 and d = min(d_1, d_2), D = max(d_1, d_2), given a non-increasing sequence of non-negative real numbers (ε^(n))_{n=1}^∞ and the sequence of iterates (X^(n))_{n=1}^∞ produced by the algorithm, we update our weight matrices such that

W̃^(n) = 2[U^(n) (Σ_{d_1}^(n))^{2−p} U^(n)* ⊕ V^(n) (Σ_{d_2}^(n))^{2−p} V^(n)*]^{−1},  (10)

with the diagonal matrices Σ_{d_t}^(n) ∈ M_{d_t×d_t} for d_t ∈ {d_1, d_2} such that

(Σ_{d_t}^(n))_{ii} = (σ_i(X^(n))^2 + (ε^(n))^2)^{1/2}  if i ≤ d,   and   (Σ_{d_t}^(n))_{ii} = 0  if d < i ≤ D,  (11)

and the matrices U^(n) ∈ U_{d_1} and V^(n) ∈ U_{d_2} containing the left and right singular vectors of X^(n) in their columns, respectively.
We note that this definition of W̃^(n) can be seen as a stabilized version of the harmonic mean weight matrix W_(harm) of Lemma 3. This stabilization is necessary, as W̃^(n) becomes very ill-conditioned as soon as some of the singular values of X^(n) approach zero and, related to that, (X^(n)X^(n)*)^{(2−p)/2} ⊕ (X^(n)*X^(n))^{(2−p)/2} would even be singular as soon as X^(n) is not of full rank.
Additionally, for the formulation of the algorithm and any n ∈ N, it is convenient to define the linear operator (W̃^(n))^{−1} : M_{d_1×d_2} → M_{d_1×d_2} as

(W̃^(n))^{−1}(X) := (1/2)[U^(n) (Σ_{d_1}^(n))^{2−p} U^(n)* X + X V^(n) (Σ_{d_2}^(n))^{2−p} V^(n)*],  (12)

describing the operation of the inverse of W̃^(n) on M_{d_1×d_2}. Finally, HM-IRLS can be formulated in pseudo code as follows.
Algorithm 1 Harmonic Mean IRLS for low-rank matrix recovery (HM-IRLS)

Input: A linear map Φ : M_{d_1×d_2} → C^m, image Y = Φ(X_0) of the ground truth matrix X_0 ∈ M_{d_1×d_2}, rank estimate r̃, non-convexity parameter 0 < p ≤ 1.
Output: Sequence (X^(n))_{n=1}^{n_0} ⊂ M_{d_1×d_2}.
Initialize n = 0, ε^(0) = 1 and W̃^(0) = I_{d_1d_2} ∈ M_{d_1d_2×d_1d_2}.
repeat
    X^(n+1) = argmin_{Φ(X)=Y} ‖X_vec‖_{ℓ_2(W̃^(n))}^2 = (W̃^(n))^{−1} Φ^*(Φ ∘ (W̃^(n))^{−1} ∘ Φ^*)^{−1}(Y),  (13)
    ε^(n+1) = min(ε^(n), σ_{r̃+1}(X^(n+1))),  (14)
    W̃^(n+1) = 2[U^(n+1) (Σ_{d_1}^(n+1))^{2−p} U^(n+1)* ⊕ V^(n+1) (Σ_{d_2}^(n+1))^{2−p} V^(n+1)*]^{−1},  (15)
    where U^(n+1) ∈ U_{d_1} and V^(n+1) ∈ U_{d_2} are matrices containing the left and right singular vectors of X^(n+1) in their columns, and the Σ_{d_t}^(n+1) are defined for t ∈ {1, 2} according to (11).
    n = n + 1,
until stopping criterion is met.
Set n_0 = n.
From a practical point of view, it is beneficial that the explicit calculation of the very large weight matrices W̃^(n+1) ∈ H_{d_1d_2×d_1d_2} from (15) is not necessary in implementations of Algorithm 1. As suggested by formulas (12) and (13), it can be seen that just the operation of its inverse (W̃^(n+1))^{−1} resp. (W̃^(n))^{−1} is needed, which can be implemented by matrix-matrix multiplications on the space M_{d_1×d_2}: For matrices X, X̃ ∈ M_{d_1×d_2}, we have that W̃^(n) X_vec = X̃_vec if and only if X_vec = (W̃^(n))^{−1} X̃_vec, which can be written in matrix variables as

X = (1/2)[U^(n) (Σ_{d_1}^(n))^{2−p} U^(n)* X̃ + X̃ V^(n) (Σ_{d_2}^(n))^{2−p} V^(n)*].

The last equivalence is due to the definitions of W̃^(n) and the Kronecker sum, cf. (15) and Appendix A.
Note that the smoothing parameters ε^(n) are chosen in dependence on a rank estimate r̃ here, which will be an important ingredient for the theoretical analysis of the algorithm. In practice, however, other choices of non-increasing sequences of non-negative real numbers (ε^(n))_{n=1}^∞ are possible and can as well lead to (maybe even faster) convergence when tuned appropriately. We refer to Section 5.4 for a further discussion of implementation details.
Example. With a simple example, we illustrate the versatility of HM-IRLS: Let d_1 = d_2 = 4, and assume that we want to reconstruct the rank-1 matrix

X_0 = uv^* = (1, 10, −2, 0.1)^T (1, 2, 3, 4)
    = [  1    2    3    4
        10   20   30   40
        −2   −4   −6   −8
        0.1  0.2  0.3  0.4 ]

from m = d_f = r(d_1 + d_2 − r) = 7 sampled entries Φ(X_0), where Φ is the linear map Φ : M_{4×4} → C^7, Φ(X) = (X_{2,1}, X_{4,1}, X_{3,2}, X_{4,2}, X_{4,3}, X_{1,4}, X_{2,4}). Since the linear map Φ samples some
entries of matrices in M 4×4 and does not see the others, this is an instance of the problem that is
called matrix completion.
In general, reconstructing a (d 1 × d 2 ) rank-r matrix from m = r (d 1 + d 2 − r ) entries is a hard
problem, as it is known that if m < r (d 1 + d 2 − r ), there is always more than one matrix X such
that Φ(X ) = Φ(X 0 ), and even for equality, the property that Φ is invertible on (most) rank-r matrices
might be hard to verify [KTT15].
It can be argued that the specific matrix completion problem we consider is in some sense a hard one, since, e.g., the deterministic sufficient condition for unique completability of [PABN16, Theorem 2] is not fulfilled (less than 2 observed entries in the third column), and since the classical coherence parameters μ(u) = d_1 max_{1≤i≤4} ‖uu^*e_i‖_2^2/‖u‖_2^4 ≈ 3.81 and μ(v) = d_2 max_{1≤i≤4} ‖vv^*e_i‖_2^2/‖v‖_2^4 ≈ 2.13 that are used to analyze the behavior of many matrix completion algorithms [CR09, JNS13] are quite large, with μ(u) being quite close to the maximal value of 4.
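For rank-1 vectors, ‖ww^*e_i‖_2^2/‖w‖_2^4 simplifies to w_i^2/‖w‖_2^2, so both coherence parameters can be recomputed in a few lines:

```python
import numpy as np

u = np.array([1.0, 10.0, -2.0, 0.1])
v = np.array([1.0, 2.0, 3.0, 4.0])

def coherence(w):
    """mu(w) = d * max_i ||w w^* e_i||^2 / ||w||^4 = d * max_i w_i^2 / ||w||^2."""
    d = len(w)
    return d * np.max(w ** 2) / np.sum(w ** 2)

assert abs(coherence(u) - 3.81) < 0.01   # close to the maximal value d = 4
assert abs(coherence(v) - 2.13) < 0.01
```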
On the other hand, as the problem is small and X_0 has rank r = 1, it is possible to impute the missing values of

    [  ∗    ∗    ∗    4
      10    ∗    ∗   40
       ∗   −4    ∗    ∗
      0.1  0.2  0.3   ∗ ]

by solving very simple linear equations, since, for example, X_{4,4} = u_4v_4, X_{2,1} = u_2v_1, X_{2,4} = u_2v_4, and X_{4,1} = u_4v_1, and therefore X_{4,4} = X_{4,1}X_{2,4}/X_{2,1} = 0.4. This shows that the only rank-1 matrix compatible with Φ(X_0) is X_0.
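In code, this imputation argument is just a ratio of observed entries (values taken from the example above):

```python
# Rank-1 structure: every 2x2 minor of X0 = u v^* vanishes, so
# X[i, j] * X[k, l] = X[i, l] * X[k, j] for all index pairs.
X21, X41, X24 = 10.0, 0.1, 40.0     # observed entries X_{2,1}, X_{4,1}, X_{2,4}
X44 = X41 * X24 / X21               # = (u_4 v_1)(u_2 v_4) / (u_2 v_1) = u_4 v_4
```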
It turns out that, without using the combinatorial simplicity of the problem, the classical NNM does not solve the problem, as the nuclear norm minimizer (solution of (6) for Y = Φ(X_0)) produced by the semidefinite program of the convex optimization package CVX [GB14] converges to

X_nuclear ≈ [  1      0.023   0.041   4
              10      0.232   0.411  40
              −0.056 −4      −0.200  −0.226
               0.1    0.2     0.3     0.400 ],

a matrix with 45.74 ≈ ‖X_nuclear‖_{S_1} < ‖X_0‖_{S_1} = σ_1(X_0) ≈ 56.13 and a relative Frobenius error of ‖X_nuclear − X_0‖_F/‖X_0‖_F = 0.661.
Interestingly, HM-IRLS is able to solve the problem, if p is chosen small enough, with very high precision already after few iterations, for example, up to a relative error of 4.18 · 10^{−13} after 24 iterations if p = 0.1. This is in contrast to the behavior of IRLS-col, IRLS-row and also to AM-IRLS, the IRLS variant that uses weight matrices derived from the arithmetic mean of the weights of IRLS-col and IRLS-row, cf. Lemma 2. The iterates X^(n) for iteration n = 2000 of these algorithms exhibit relative errors of 0.240, 0.489 and 0.401, respectively, for the choice of p = 0.1; furthermore, there is no choice of p that would lead to convergence to X_0.
To understand this very different behavior, we note that the n-th iterate of any of the four IRLS variants can be written, using Appendix A, in a concise way as

X^(n+1) = argmin_{Φ(X)=Y} ⟨X_vec, W^(n) X_vec⟩,  (16)
Figure 2: Values of the matrix H^(1) of "weight coefficients" corresponding to the orthonormal basis (u_i^(1)v_j^(1)*)_{i,j=1}^4 after the first iteration in the example, for (a) HM-IRLS, (b) IRLS-col, (c) IRLS-row, (d) AM-IRLS
where

⟨X_vec, W^(n) X_vec⟩ = ⟨X, U^(n)[H^(n) ∘ (U^(n)* X V^(n))]V^(n)*⟩_F = Σ_{i,j=1}^4 H_{ij}^(n) |⟨u_i^(n), X v_j^(n)⟩|^2  (17)

with X^(n) = U^(n)Σ^(n)V^(n)* = Σ_{i=1}^4 σ_i^(n) u_i^(n) v_i^(n)* being the SVD of X^(n), and

H_{ij}^(n) =
    2[((σ_i^(n))^2 + (ε^(n))^2)^{(2−p)/2} + ((σ_j^(n))^2 + (ε^(n))^2)^{(2−p)/2}]^{−1}   for HM-IRLS,
    ((σ_i^(n))^2 + (ε^(n))^2)^{(p−2)/2}                                                 for IRLS-col,
    ((σ_j^(n))^2 + (ε^(n))^2)^{(p−2)/2}                                                 for IRLS-row,
    0.5 · [((σ_i^(n))^2 + (ε^(n))^2)^{(p−2)/2} + ((σ_j^(n))^2 + (ε^(n))^2)^{(p−2)/2}]   for AM-IRLS,

for i, j ∈ {1, 2, 3, 4} and ε^(n) = min(σ_2^(n), ε^(n−1)).
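The four weight-coefficient rules can be written compactly with NumPy broadcasting (`weight_coefficients` is our helper name). On the diagonal i = j all four variants agree, and by the AM-HM inequality the harmonic mean coefficients are dominated elementwise by the arithmetic mean ones:

```python
import numpy as np

def weight_coefficients(s, eps, p, variant):
    """H^{(n)}_{ij} for the four IRLS variants, from singular values s of X^{(n)}."""
    t = s ** 2 + eps ** 2                # smoothed squared singular values
    a = (t[:, None]) ** ((2 - p) / 2)    # depends on the row index i
    b = (t[None, :]) ** ((2 - p) / 2)    # depends on the column index j
    if variant == "hm":                  # harmonic mean (HM-IRLS)
        return 2.0 / (a + b)
    if variant == "col":                 # one-sided column reweighting (IRLS-col)
        return np.broadcast_to(1.0 / a, (len(s), len(s))).copy()
    if variant == "row":                 # one-sided row reweighting (IRLS-row)
        return np.broadcast_to(1.0 / b, (len(s), len(s))).copy()
    if variant == "am":                  # arithmetic mean (AM-IRLS)
        return 0.5 * (1.0 / a + 1.0 / b)
    raise ValueError(variant)
```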
The values of the matrix H (1) of weight coefficients after the first iteration in the above example
are visualized in Figure 2, for each of the four IRLS versions above.
The intuition for the superior behavior of HM-IRLS is now the following: Since large entries of H^(n) penalize the corresponding parts of the space M_{d_1×d_2} = span{u_i^(n)v_j^(n)*, i ∈ [d_1], j ∈ [d_2]} in the minimization problem (16), large areas of blue and dark blue in Figure 2 indicate a benign optimization landscape where the minimizer X^(n+1) of (16) is able to improve considerably on the previous iterate X^(n).

In particular, it can be seen that in the case of HM-IRLS, the penalties on the whole direct sum of column and row space of the best rank-r approximation of X^(n),

T^(n) := {[u_1^(n), . . . , u_r^(n)]Z_1^* + Z_2[v_1^(n), . . . , v_r^(n)]^* : Z_1 ∈ M_{d_2×r}, Z_2 ∈ M_{d_1×r}},

are small compared to the other penalties, since the coefficients of H^(1) corresponding to T^(1) are exactly the ones in the first row and first column of the (4 × 4) matrices in Figure 2, a contrast that becomes more and more pronounced as X^(n) approaches the rank-r ground truth X_0 (with r = 1 in the example).
On the other hand, IRLS-col, IRLS-row and AM-IRLS only have small coefficients on smaller parts of T^(n), which, from a global perspective, explains why their usage might lead to non-global minima of the Schatten-p objective.
We note that the space T (n) plays also an important role in Riemannian optimization approaches
for matrix recovery problems [Van13], since it is also the tangent space of the smooth manifold of
rank-r matrices at the best rank-r approximation of X (n) .
4 Convergence results
In the following part, we state our main theoretical results about convergence properties of the algorithm HM-IRLS. Furthermore, their relation to existing results for IRLS-col and IRLS-row is discussed.
It cannot be expected that a low-rank matrix recovery algorithm like HM-IRLS succeeds to
converge to a low-rank matrix without any assumptions on the measurement operator Φ that defines
the recovery problem (5). For the purpose of the convergence analysis of HM-IRLS, we introduce
the following strong Schatten-p null space property [FRW11, OMFH11, FR13].
Definition 4 (Strong Schatten-p null space property). Let 0 < p ≤ 1. We say that a linear map
Φ : Md1 ×d2 → Cm fulfills the strong Schatten-p null space property (Schatten-p NSP) of order r with
constant 0 < γr ≤ 1 if
(Σ_{i=1}^r σ_i(X)^2)^{p/2} < (γ_r / r^{1−p/2}) Σ_{i=r+1}^d σ_i(X)^p  (18)

for all X ∈ N(Φ) \ {0}.
Intuitively explained, if a map Φ fulfills the strong Schatten-p null space property of order r, there are no rank-r matrices in its null space, and no element of the null space may have a quickly decaying spectrum.
Null space properties have already been used to guarantee the success of nuclear norm minimization (6), or Schatten-1 minimization in our terminology, for solving the low-rank matrix recovery problem [RXH11].
We note that the definitions of Schatten-p null space properties are quite analogous to the ℓ_p-null space property in classical compressed sensing [FR13, Theorem 4.9], applied to the vector of singular values. In particular, (18) implies that

Σ_{i=1}^r σ_i(X)^p < Σ_{i=r+1}^d σ_i(X)^p  for all X ∈ N(Φ) \ {0},  (19)

since ‖X‖_{S_p} ≤ r^{1/p−1/2} ‖X‖_{S_2} for X that is rank-r. This, in turn, ensures the existence of unique solutions to (7) if Y = Φ(X_0) are the measurements of a low-rank matrix X_0.
Proposition 5 ([Fou18]). Let Φ : Md1 ×d2 → Cm be a linear map, let 0 < p ≤ 1 and r ∈ N. Then every
matrix X 0 ∈ Md1 ×d2 such that rank(X 0 ) ≤ r and Φ(X 0 ) = Y ∈ Cm is the unique solution of Schatten-p
minimization (7) if and only if Φ fulfills (19).
Remark 6. The sufficiency of the Schatten-p NSP (19) in Proposition 5 has already been pointed out by Oymak et al. [OMFH11]. The necessity as stated in the proposition, however, is due to a recent generalization of Mirsky's singular value inequalities to concave functions [Aud14, Fou18].
It can be seen that the (weak) Schatten-p NSP of (19) is a stronger property for larger p in the
sense that if 0 < p 0 ≤ p ≤ 1, the Schatten-p property implies the Schatten-p 0 property. Very related
to that, it can be seen that for any 0 < p ≤ 1, the strong Schatten-p null space property is implied
by a sufficiently small rank restricted isometry constant δr , which is a classical tool in the analysis of
low-rank matrix recovery algorithms [RFP10, CP11].
Definition 7 (Restricted isometry property (RIP)). The restricted isometry constant δ_r > 0 of order r of the linear map Φ : M_{d_1×d_2} → C^m is defined as the smallest number such that

(1 − δ_r)‖X‖_F^2 ≤ ‖Φ(X)‖_{ℓ_2}^2 ≤ (1 + δ_r)‖X‖_F^2

for all matrices X ∈ M_{d_1×d_2} of rank at most r.
Indeed, it follows from the proof of [CDK15, Theorem 4.1] that a restricted isometry constant of order 2r such that δ_{2r} < 2/(√2 + 3) ≈ 0.4531 implies the strong Schatten-p NSP of order r with a constant γ_r < 1 for any 0 < p ≤ 1. More precisely, it can be seen that δ_{2r} < 2/(√2 + 3) implies that the strong Schatten-p NSP (18) of order r holds with the constant γ_r = (√2 + 1)^p δ_{2r}^p / (2^p (1 − δ_{2r})^p).
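The constant above (our reading of the formula, to be checked against [CDK15]) can be evaluated directly; note that it equals 1 exactly at the threshold δ_{2r} = 2/(√2 + 3):

```python
import math

def nsp_constant_from_rip(delta_2r, p):
    """gamma_r = ((sqrt(2)+1) * delta / (2 * (1 - delta)))**p, as quoted above."""
    return ((math.sqrt(2) + 1) * delta_2r / (2 * (1 - delta_2r))) ** p

delta_star = 2 / (math.sqrt(2) + 3)          # critical threshold, approx. 0.4531
assert abs(delta_star - 0.4531) < 1e-4
# Below the threshold, gamma_r < 1 for any 0 < p <= 1; at the threshold it equals 1.
assert nsp_constant_from_rip(0.4, 0.5) < 1
assert abs(nsp_constant_from_rip(delta_star, 1.0) - 1.0) < 1e-12
```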
Linear maps that are instances drawn from certain random models are known to fulfill the restricted isometry property with high probability if the number of measurements is sufficiently large [DR16], and, a fortiori, the Schatten-p null space property. In particular, this is true for (sub-)Gaussian linear measurement maps Φ : M_{d_1×d_2} → C^m whose matrix representation is such that

Φ = (1/√m) Φ̃ ∈ C^{m×d_1d_2}, where Φ̃ has i.i.d. standard (sub-)Gaussian entries,  (20)

as it is summarized in the following lemma.
Lemma 8. For any 0 < p ≤ 1, 0 < γ < 1 and any (sub-)Gaussian random operator Φ : M_{d_1×d_2} → C^m (e.g. as defined in (20)), there exist constants C_1 > 1, C_2 > 0 such that if m ≥ C_1 r(d_1 + d_2), the strong Schatten-p null space property (18) of order r with constant γ_r < γ is fulfilled with probability at least 1 − e^{−C_2 m}.
4.1 Local convergence for p < 1
In this section, we provide a convergence analysis for HM-IRLS covering several aspects. We are able to show that the algorithm converges to stationary points of a smoothed Schatten-p functional g_ε^p as in (21) without any additional assumptions on the measurement map Φ. Such guarantees have already been obtained for IRLS algorithms with one-sided reweighting such as IRLS-col and IRLS-row, in particular for p = 1 by Fornasier, Rauhut & Ward [FRW11] and for 0 < p ≤ 1 by Mohan & Fazel [MF12].

Beyond that, assuming the measurement operator fulfills an appropriate Schatten-p null space property as defined in Definition 4, we show the a-posteriori exact recovery statement that HM-IRLS converges to the low-rank matrix X_0 if lim_{n→∞} ε_n = 0, which was only shown for one-sided IRLS for the case p = 1 by [FRW11].

Moreover, we provide a local convergence guarantee stating that HM-IRLS recovers the low-rank matrix X_0 if we obtain an iterate X^(n) that is close enough to X_0, which is novel for IRLS algorithms.
Let 0 < p ≤ 1 and ε > 0. To state the theorem, we introduce the ε-perturbed Schatten-p functional g_ε^p : M_{d_1×d_2} → R_{≥0} such that

g_ε^p(X) = Σ_{i=1}^d (σ_i(X)^2 + ε^2)^{p/2},  (21)

where σ(X) ∈ R^d denotes the vector of singular values of X ∈ M_{d_1×d_2}.
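For reference, (21) translates directly into code (helper name ours):

```python
import numpy as np

def g_eps_p(X, eps, p):
    """Epsilon-perturbed Schatten-p functional: sum_i (sigma_i(X)^2 + eps^2)^{p/2}."""
    s = np.linalg.svd(X, compute_uv=False)
    return np.sum((s ** 2 + eps ** 2) ** (p / 2))
```

For ε = 0 and p = 1 this reduces to the nuclear norm, and for ε = 0 and p = 2 to the squared Frobenius norm.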
Theorem 9. Let Φ : M_{d_1×d_2} → C^m be a linear operator and Y ∈ Ran(Φ) a vector in its range. Let (X^(n))_{n≥1} and (ε^(n))_{n≥1} be the sequences produced by Algorithm 1 for input parameters Φ, Y, r and 0 < p ≤ 1, and let ε = lim_{n→∞} ε^(n).

(i) If ε = 0 and if Φ fulfills the strong Schatten-p NSP (18) of order r with constant 0 < γ_r < 1, then the sequence (X^(n))_{n≥1} converges to a matrix X̄ ∈ M_{d_1×d_2} of rank at most r that is the unique minimizer of the Schatten-p minimization problem (7). Moreover, there exists an absolute constant Ĉ > 0 such that for any X with Φ(X) = Y and any r̃ ≤ r, it holds that

‖X − X̄‖_F^p ≤ (Ĉ / r̃^{1−p/2}) β_{r̃}(X)_{S_p},  (22)

where Ĉ = 2^{p+1} γ_r^{1−p/2} / (1 − γ_r) and β_{r̃}(X)_{S_p} is the best rank-r̃ Schatten-p approximation error of X, i.e.,

β_{r̃}(X)_{S_p} := inf{‖X − X̃‖_{S_p}^p : X̃ ∈ M_{d_1×d_2} has rank r̃}.

(ii) If ε > 0, then each accumulation point X̄ of (X^(n))_{n≥1} is a stationary point of the ε-perturbed Schatten-p functional g_ε^p of (21) under the linear constraint Φ(X) = Y. If additionally p = 1, then X̄ is the unique global minimizer of g_ε^p.

(iii) Assume that there exists a matrix X_0 ∈ M_{d_1×d_2} with Φ(X_0) = Y such that rank(X_0) = r ≤ min(d_1, d_2)/2, a constant 0 < ζ < 1 and an iteration n̄ ∈ N such that

‖X^(n̄) − X_0‖_{S_∞} ≤ ζ σ_r(X_0)

and ε^(n̄) = σ_{r+1}(X^(n̄)). If Φ fulfills the strong Schatten-p NSP of order 2r with γ_{2r} < 1 and if the condition number κ = σ_1(X_0)/σ_r(X_0) of X_0 and ζ are sufficiently small (see condition (25) and formula (26)), then

X^(n) → X_0 for n → ∞.
It is important to note that by using Lemma 8, it follows that the assertions of Theorem 9(i) and (iii) hold for (sub-)Gaussian operators (20) with high probability in the regime of measurements of optimal sample complexity order. In particular, there exist constant oversampling factors ρ_1, ρ_2 ≥ 1 such that the assertions of (i) and (iii) hold with high probability if m > ρ_k r(d_1 + d_2), k ∈ {1, 2}, respectively.
Remark 10. However, if m < d_1d_2, null space property-type assumptions such as (18) or (19) do not hold for the important case of matrix completion-type measurements [CR09], where Φ(X) is given as m sample entries

Φ(X)_ℓ = X_{i_ℓ, j_ℓ},  ℓ = 1, . . . , m,  (23)

and (i_ℓ, j_ℓ) ∈ [d_1] × [d_2] for all ℓ ∈ [m], of the matrix X ∈ M_{d_1×d_2}, which also were considered in the example of Section 3.
This means that parts (i) and (iii) of Theorem 9 do, unfortunately, not apply for matrix completion
measurements, which define a very relevant class of low-rank matrix recovery problems. This problem
is shared by any existing theory for IRLS algorithms for low-rank matrix recovery [FRW11, MF12].
However, in Section 5, we provide strong numerical evidence that HM-IRLS exhibits properties as
predicted by (i) and (iii) of Theorem 9 even for the matrix completion setting. We leave the extension
of the theory of HM-IRLS to matrix completion measurements as an open problem to be tackled by
techniques different from uniform null space properties [DR16, Section V].
4.2 Locally superlinear convergence rate for p < 1
Next, we state the second main theoretical result of this paper, Theorem 11. It shows that in a neighborhood of a low-rank matrix X_0 that is compatible with the measurement vector Y, the algorithm HM-IRLS converges to X_0 with a convergence rate that is superlinear of order 2 − p, if the operator Φ fulfills an appropriate Schatten-p null space property.
Theorem 11 (Locally Superlinear Convergence Rate). Assume that the linear map Φ : M_{d_1×d_2} → C^m fulfills the strong Schatten-p NSP of order 2r with constant γ_{2r} < 1 and that there exists a matrix X_0 ∈ M_{d_1×d_2} with rank(X_0) = r ≤ min(d_1, d_2)/2 such that Φ(X_0) = Y, and let Φ, Y, r and 0 < p ≤ 1 be the input parameters of Algorithm 1. Moreover, let κ = σ_1(X_0)/σ_r(X_0) be the condition number of X_0 and η^(n) := X^(n) − X_0 be the error matrices of the n-th output of Algorithm 1 for n ∈ N.

Assume that there exists an iteration n̄ ∈ N and a constant 0 < ζ < 1 such that

‖η^(n̄)‖_{S_∞} ≤ ζ σ_r(X_0)  (24)

and ε^(n̄) = σ_{r+1}(X^(n̄)). If additionally the condition number κ and ζ are small enough, or more precisely, if

μ ‖η^(n̄)‖_{S_∞}^{p(1−p)} < 1  (25)

with the constant

μ := 2^{5p} (1 + γ_{2r}) (γ_{2r}(3 + γ_{2r})(1 + γ_{2r})^{2−p} / (1 − γ_{2r})) ((d − r)/r)^{2−p/2} r^p κ^p σ_r(X_0)^{p(p−1)} / (1 − ζ)^{2p},  (26)

then

‖η^(n+1)‖_{S_∞} ≤ μ^{1/p} ‖η^(n)‖_{S_∞}^{2−p}   and   ‖η^(n+1)‖_{S_p} ≤ μ^{1/p} ‖η^(n)‖_{S_p}^{2−p}

for all n ≥ n̄.
We think that the result of Theorem 11 is remarkable, since there are only few low-rank recovery algorithms which exhibit either theoretically or practically verifiable superlinear convergence rates. In particular, although the algorithms of [MMBS13] and NewtonSLRA of [SS16] do show superlinear convergence rates, the former are not competitive with HM-IRLS in terms of sample complexity and the latter has neither applicable theoretical guarantees for most of the interesting problems nor the ability to solve medium size problems.
Remark 12. We observe that while the statement describes the observed rates of convergence very accurately (cf. Section 5.2), the assumption (25) on the neighborhood that enables convergence of order 2 − p is more pessimistic than our numerical experiments suggest. Our experiments confirm that the local convergence rate of order 2 − p also holds for matrix completion measurements, where the assumption of a Schatten-p null space property fails to hold, cf. Section 5.
4.3 Discussion and comparison with existing IRLS algorithms
Optimally, we would like to have a statement in Theorem 9 about the accumulation points X̄ being global minimizers of g_ε^p, instead of mere stationary points [FRW11, Theorem 6.11], [DDFG10, Theorem 5.3]. A statement that strong is, unfortunately, difficult to achieve due to the non-convexity of the Schatten-p quasi-norm and of the ε-perturbed version g_ε^p. Nevertheless, our theorems can be seen as analogues of [DDFG10, Theorem 7.7], which discusses the convergence properties of an IRLS algorithm for sparse recovery based on ℓ_p-minimization with p < 1.
As already mentioned in previous sections, Fornasier, Rauhut & Ward [FRW11] and Mohan & Fazel [MF12] proposed IRLS algorithms for low-rank matrix recovery and analyzed their convergence properties. The algorithm of [FRW11] corresponds (almost) to IRLS-col with p = 1 as explained in Section 3. In this context, Theorem 9 recovers the results of [FRW11, Theorem 6.11(i-ii)] for p = 1 and generalizes them, with weaker conclusions due to the non-convexity, to the cases 0 < p < 1. The algorithm IRLS-p of [MF12] is similar to the former, but differs in the choice of the ε-smoothing and also covers non-convex choices 0 < p < 1. However, we note that in the non-convex case, its convergence result [MF12, Theorem 5.1] corresponds to Theorem 9(ii), but does not provide statements similar to (i) and (iii) of Theorem 9.
Theorem 11 with its analysis of the convergence rate is new in the sense that, to the best of our knowledge, there are no convergence rate proofs for IRLS algorithms for the low-rank matrix recovery problem in the literature. Indeed, we refer to Remark 22 in Section 6.3 for an explanation why the variants of [FRW11] and [MF12] cannot exhibit superlinear convergence rates, unlike HM-IRLS.
We also note that there is a close connection between the statements of Theorems 9 and 11 and
results that were obtained by Daubechies, DeVore, Fornasier and Güntürk [DDFG10, Theorems 7.7
and 7.9] for an IRLS algorithm dedicated to the sparse vector recovery problem.
5 Numerical experiments
In this section, we first demonstrate that the superlinear convergence rate that was proven theoretically for Algorithm 1 (HM-IRLS) in Theorem 11 can indeed be accurately verified in numerical experiments, even beyond measurement operators fulfilling the strong null space property, and compare its performance to other variants of IRLS.

In Section 5.3, we then compare the recovery performance of HM-IRLS for the matrix completion setting with the performance of other state-of-the-art algorithms, comparing the measurement complexities that are needed for successful recovery for many random instances.
The numerical experiments are conducted on Linux and Mac systems with MATLAB R2017b. An implementation of the HM-IRLS algorithm and a minimal test example are available at https://www-m15.ma.tum.de/Allgemeines/SoftwareSite.
5.1 Experimental setup
In the experiments, we sample (d 1 × d 2 ) dimensional ground truth matrices X 0 of rank r such that
X 0 = U ΣV ∗ , where U ∈ Rd1 ×r and V ∈ Rd2 ×r are independent matrices with i.i.d. standard Gaussian
entries and Σ ∈ Rr ×r is a diagonal matrix with i.i.d. standard Gaussian diagonal entries, independent
from U and V .
We recall that a rank-r matrix X ∈ M_{d_1×d_2} has d_f = r(d_1 + d_2 − r) degrees of freedom, which is the theoretical lower bound on the number of measurements that are necessary for exact reconstruction [CP11]. The random measurement setting we use in the experiments can be described as follows: We take measurements of matrix completion type, sampling m = ⌊ρ d_f⌋ entries of X_0 uniformly over its d_1d_2 indices to obtain Y = Φ(X_0). Here, ρ is such that d_1d_2/d_f ≥ ρ ≥ 1 and parametrizes the difficulty of the reconstruction problem, from very hard problems for ρ ≈ 1 to easier problems for larger ρ.
However, this uniform sampling of Φ could yield instances of measurement operators whose
information content is not large enough to ensure well-posedness of the corresponding low-rank
matrix recovery problem, even if ρ > 1. More precisely, it is impossible to recover a matrix exactly
if the number of revealed entries in any row or column is smaller than its rank r , which is explained
and shown in the context of the proof of [PABN16, Theorem 1].
Thus, in order to provide for a sensible measurement model for small ρ, we exclude operators Φ
that sample fewer than r entries in any row or column. Therefore, we adapt the uniform sampling
model such that operators Φ are discarded and sampled again until the requirement of at least r
entries per column and row is met and recovery can be achieved from a theoretical point of view.
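The sampling procedure just described can be sketched in a few lines (a minimal numpy sketch; the function name, defaults and dimensions are ours, not from the paper):

```python
import numpy as np

def sample_problem(d1, d2, r, rho, rng=None):
    """Sample a Gaussian rank-r ground truth and a completion mask with
    at least r revealed entries in every row and column."""
    rng = np.random.default_rng(0) if rng is None else rng
    U = rng.standard_normal((d1, r))
    V = rng.standard_normal((d2, r))
    sigma = rng.standard_normal(r)          # i.i.d. Gaussian diagonal of Sigma
    X0 = U @ np.diag(sigma) @ V.T

    df = r * (d1 + d2 - r)                  # degrees of freedom of a rank-r matrix
    m = int(np.floor(rho * df))             # m = floor(rho * df) revealed entries
    while True:
        idx = rng.choice(d1 * d2, size=m, replace=False)
        mask = np.zeros((d1, d2), dtype=bool)
        mask[np.unravel_index(idx, (d1, d2))] = True
        # discard and resample until every row and column has >= r entries
        if mask.sum(axis=1).min() >= r and mask.sum(axis=0).min() >= r:
            return X0, mask

X0, mask = sample_problem(d1=40, d2=40, r=5, rho=2.0)
```

The rejection loop implements the adapted uniform sampling model: masks violating the per-row/per-column requirement are simply discarded and drawn again.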
We note that the described phenomenon is closely related to the fact that matrix completion recovery guarantees for the uniform sampling model require at least one additional log factor, i.e., they require at least m ≥ log(max(d1, d2)) d_f sampled entries [DR16, Section V].
While we detail the experiments for the matrix completion measurement setting just described
in the remaining section, we add that Gaussian measurement models also lead to very similar results
in experiments.
5.2 Convergence rate comparison with other IRLS algorithms
In this subsection, we vary the Schatten-p parameter between 0 and 1 and compare the corresponding convergence behavior of HM-IRLS with the IRLS variant IRLS-col, which performs the reweighting just in the column space, and with the arithmetic mean variant AM-IRLS. The latter two coincide with Algorithm 1 except that the weight matrices are chosen as described in Equation (17) in Section 3.
We note that IRLS-col is very similar to the IRLS algorithms of [FRW11] and [MF12] and differs from them basically just in the choice of the ϵ-smoothing. We present the experiments with IRLS-col to isolate the influence of the weight matrix type, but very similar results can be observed for the algorithms of [FRW11] and [MF12].²
In the matrix completion setup of Section 5.1, we choose d 1 = d 2 = 40, r = 10 and distinguish
easy, hard and very hard problems corresponding to oversampling factors ρ of 2.0, 1.2 and 1.0,
respectively. The algorithms are provided with the ground truth rank r and are stopped whenever
² Implementations of the mentioned authors' algorithms were downloaded from https://faculty.washington.edu/mfazel/ and https://github.com/rward314/IRLSM, respectively.
[Figure 3: error curves for HM-IRLS, AM-IRLS and IRLS-col with p ∈ {1, 0.8, 0.5, 0.1}, plotted over iterations n ∈ [0, 50].]
Figure 3: Relative Frobenius errors as a function of the iteration n for oversampling factor ρ = 2
(easy problem).
the relative change of Frobenius norm ‖X^(n) − X^(n−1)‖_F / ‖X^(n−1)‖_F drops below the threshold of 10^−10 or a maximal number of iterations n_max is reached.
5.2.1 Convergence rates
First, we study the behavior of the three IRLS algorithms for the easy setting of an oversampling factor of ρ = 2, which means that a fraction 2r(d1 + d2 − r)/(d1 d2) = 0.875 of the entries are sampled, and parameters p ∈ {0.1, 0.5, 0.8, 1}.
In Figure 3, we observe that for p = 1, HM-IRLS, AM-IRLS and IRLS-col have a quite
similar behavior, as the relative Frobenius errors kX (n) − X 0 kF /kX 0 kF decrease only slowly, i.e.,
even a linear rate is hardly identifiable. For choices p < 1 that correspond to non-convex objectives,
we observe a very fast, superlinear convergence of HM-IRLS, as the iterates X (n) converge up to
a relative error of less than 10−12 within fewer than 20 iterations for p ∈ {0.8, 0.5, 0.1}. Precise
calculations verify that the rates of convergence are indeed of order 2 − p, the order predicted by
Theorem 11. We note that this fast convergence rate kicks in not only locally, but starting from the
very first iteration.
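Such a rate claim can be checked numerically from an error sequence via successive log-ratios, q_n = log(e_{n+1}/e_n) / log(e_n/e_{n−1}); the following sketch (our own illustration, using a synthetic error sequence rather than data from the experiments) shows the idea:

```python
import numpy as np

def empirical_order(errors):
    """Estimate q in e_{n+1} ~ C * e_n^q from consecutive error ratios."""
    e = np.asarray(errors, dtype=float)
    q = np.log(e[2:] / e[1:-1]) / np.log(e[1:-1] / e[:-2])
    return np.median(q)

# synthetic errors obeying e_{n+1} = e_n^{1.5}, i.e., order 2 - p for p = 0.5
e = [1e-1]
for _ in range(6):
    e.append(e[-1] ** 1.5)
assert abs(empirical_order(e) - 1.5) < 1e-6
```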
On the other hand, it is easy to see that AM-IRLS and IRLS-col converge linearly, but
not superlinearly to the ground truth X 0 for p ∈ {0.8, 0.5, 0.1}. The linear rate of AM-IRLS is
slightly better than the one of IRLS-col, but the numerical stability of AM-IRLS deteriorates
for p = 0.1 close to the ground truth (after iteration 43). This is due to a bad conditioning of the
quadratic problems as the X (n) are close to rank-r matrices. In contrast, no numerical instability
issues can be observed for HM-IRLS.
For the hard matrix completion problems with oversampling factor of ρ = 1.2, we observe that
for p = 0.8, the three algorithms typically do not converge to ground truth. This can be seen in
[Figure 4: error curves for HM-IRLS, AM-IRLS and IRLS-col with p ∈ {0.8, 0.5, 0.25, 0.01}, plotted over iterations n ∈ [0, 100].]
Figure 4: Relative Frobenius errors as a function of the iteration n for oversampling factor ρ = 1.2
(hard problem). Left column: y-range [10−10 ; 100 ]. Right column: Enlarged section of left column
corresponding to y-range of [10−2 ; 100 ].
the example that is shown in Figure 4, where HM-IRLS, AM-IRLS and IRLS-col all exhibit
a relative error of 0.27 after 100 iterations. We do not visualize the result for p = 1, as the iterates
of the three algorithms do not converge to the ground truth either, which is to be expected: In
some sense, they implement nuclear norm minimization, which is typically not able to recover a
low-rank matrix from measurements with an oversampling factor as small as ρ = 1.2 [DGM13].
The dramatically different behavior between HM-IRLS and the other approaches becomes very
apparent for more non-convex choices of p ∈ {0.01, 0.25, 0.5}, where the former converges up to a
relative Frobenius error of less than 10−10 within 15 to 35 iterations, while the others do not reach a
relative error of 10−2 even after 100 iterations. For HM-IRLS, the convergence of order 2 −p can be
very well locally observed also here, it just takes some iterations until the superlinear convergence
begins, which is due to the increased difficulty of the recovery problem.
Finally, we see in the example shown in Figure 5 that even for the very hard problems where
ρ = 1, which means that the number of sampled entries corresponds exactly to the degrees of
freedom r (d 1 + d 2 − r ), HM-IRLS can be successful to recover the rank-r matrix if the parameter
p is chosen small enough (here: p ≤ 0.25). This is not the case for the algorithms AM-IRLS and
IRLS-col.
5.2.2 HM-IRLS as the best extension of IRLS for sparse recovery
We summarize that among the three variants HM-IRLS, AM-IRLS and IRLS-col, only HM-IRLS is able to solve the low-rank matrix recovery problem for very low sample complexities
corresponding to ρ ≈ 1. Furthermore, it is the only IRLS algorithm for low-rank matrix recovery
that exhibits a superlinear rate of convergence at all.
It is worthwhile to compare the properties of HM-IRLS with the behavior of the IRLS algorithm
of [DDFG10] designed to solve the sparse vector recovery problem by mimicking ℓp-minimization
for 0 < p ≤ 1. While neither IRLS-col nor AM-IRLS are able to generalize the superlinear
convergence behavior of [DDFG10] (which is illustrated in Figure 8.3 of the same paper) to the
[Figure 5: error curves for HM-IRLS, AM-IRLS and IRLS-col with p ∈ {0.8, 0.5, 0.25, 0.01}, plotted over iterations n ∈ [0, 100].]
Figure 5: Relative Frobenius errors as a function of the iteration n for oversampling factor ρ = 1.0
(very hard problem). Left column: y-range [10−10 ; 100 ]. Right column: Enlarged section of left
column corresponding to y-range of [10−2 ; 100 ].
low-rank matrix recovery problem, HM-IRLS is, as can be seen in Figures 3 to 5.
Taking the theoretical guarantees as well as the numerical evidence into account, we claim that
HM-IRLS is the presently best extension of IRLS for vector recovery [DDFG10] to the low-rank matrix
recovery setting, providing a substantial improvement over the reweighting strategies of [FRW11]
and [MF12].
Moreover, we mention two observations which suggest that HM-IRLS has in some sense even
more favorable properties than the algorithm of [DDFG10]: First, the discussion of [DDFG10, Section 8] states that a superlinear convergence can only be observed locally after a considerable
amount of iterations with just a linear error decay. In contrast to that, HM-IRLS exhibits a superlinear error decay quite early (i.e., for example as early as after two iterations), at least if the
sample complexity is large enough, cf. Figure 3.
Secondly, it can be observed that the convergence of the algorithm of [DDFG10] to a sparse
vector often breaks down if p is smaller than 0.5 [DDFG10, Section 8]. In contrast to that, we
observe that HM-IRLS does not suffer from this loss of global convergence for p < 0.5. Thus, a
choice of very small parameters p ≈ 0.1 or smaller is suggested as such a choice is accompanied by
a very fast convergence.
5.3 Recovery performance compared to state-of-the-art algorithms
After comparing the performance of HM-IRLS with other IRLS variants, we now conduct experiments to compare the empirical performance of HM-IRLS also to that of low-rank matrix recovery
algorithms different from IRLS.
To obtain a comprehensive picture, we consider not only the IRLS variants AM-IRLS and IRLS-col, but a variety of state-of-the-art methods in the experiments, such as the Riemannian optimization technique Riemann Opt [Van13], the alternating minimization approaches AltMin [HH09], ASD [TW16] and BFGD [PKCS16], and finally the algorithms Matrix ALPS II [KC14]
[Figure 6: empirical probability of successful recovery (y-axis, from 0 to 1) for the compared algorithms, including Riemann_Opt, over oversampling factors ρ from 1.0 to 2.6.]
Figure 6: Comparison of empirical success rates of state-of-the-art algorithms, as a function of the
oversampling factor ρ
and CGIHT Matrix [BTW15], which are based on iterative hard thresholding. Like the IRLS variants we consider, all these algorithms use knowledge of the ground truth rank r.
In the experiments, we examine the empirical recovery probabilities of the different algorithms systematically for varying oversampling factors ρ, which determine the difficulty of the low-rank recovery problem as the sample complexity fulfills m = ⌊ρ d_f⌋. We recall that a large parameter ρ
corresponds to an easy reconstruction problem, while a small ρ, e.g., ρ ≈ 1, defines a very hard
problem.
We choose d1 = d2 = 100 and r = 8 as parameters of the experimental setting, conducting the experiments to recover rank-8 matrices X_0 ∈ R^{100×100}. We remain in the matrix completion measurement setting described in Section 5.1, but now sample 150 random instances of X_0 and Φ for each number of measurements, varying between m_min = 1500 and m_max = 4000. This means that the oversampling factor ρ increases from ρ_min = 0.975 to ρ_max = 2.60. For each algorithm, a successful recovery of X_0 is defined as a relative Frobenius error ‖X_out − X_0‖_F / ‖X_0‖_F of the matrix X_out returned by the algorithm smaller than 10^−3. The algorithms are run until stagnation of
the iterates or until the maximal number of iterations n max = 3000 is reached. The number n max is
chosen large enough to ensure that a recovery failure is not due to a lack of iterations.
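This kind of phase-transition experiment amounts to a simple Monte Carlo loop; the harness below is a sketch (the `solver` argument stands for any of the compared algorithms, and all names and defaults are ours, not from the paper):

```python
import numpy as np

def empirical_success_rate(solver, d1, d2, r, rho, n_trials=150,
                           tol=1e-3, seed=0):
    """Fraction of random instances recovered to relative error below tol."""
    rng = np.random.default_rng(seed)
    successes = 0
    for _ in range(n_trials):
        X0 = rng.standard_normal((d1, r)) @ rng.standard_normal((r, d2))
        m = int(np.floor(rho * r * (d1 + d2 - r)))   # m = floor(rho * df)
        mask = np.zeros(d1 * d2, dtype=bool)
        mask[rng.choice(d1 * d2, size=m, replace=False)] = True
        mask = mask.reshape(d1, d2)
        X_out = solver(X0 * mask, mask, r)           # solver sees only entries
        if np.linalg.norm(X_out - X0) / np.linalg.norm(X0) < tol:
            successes += 1
    return successes / n_trials

# trivial "solver" returning the zero-filled data: never succeeds
rate = empirical_success_rate(lambda Y, mask, r: Y, 20, 20, 2, 1.5, n_trials=5)
assert rate == 0.0
```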
In the experiments, except for AltMin, for which we used our own implementation, we used
implementations provided by the authors of the corresponding papers for the respective algorithms,
using default input parameters provided by the authors. The respective code sources can be found
in the references.
5.3.1 Beyond the state-of-the-art performance of HM-IRLS
The results of the experiment can be seen in Figure 6. We observe that HM-IRLS exhibits a very
high empirical recovery probability for p = 0.1 and p = 0.5 as soon as the sample complexity
parameter ρ is slightly larger than 1.0, which means that m = bρr (d 1 + d 2 − r )c measurements
suffice to recover (d 1 × d 2 )-dimensional rank-r matrices with ρ close to 1. This is very close to the
information theoretical lower bound of d f = r (d 1 + d 2 − r ). Very interestingly, it can be observed
that the empirical recovery percentage reaches almost 100% already for an oversampling factor of
ρ ≈ 1.1, and remains at exactly 100% starting from ρ ≈ 1.2.
Quite good success rates can also be observed for the algorithms AM-IRLS and IRLS-col
for non-convex parameter choices p ∈ {0.1, 0.5}, reaching an empirical success probability of almost
100% at around ρ = 1.5. AM-IRLS performs only marginally better than the classical IRLS strategy
IRLS-col, which are both outperformed considerably by HM-IRLS. It is important to note that
in accordance to what was observed in Section 5.2, in the successful instances, the error threshold
that defines successful recovery is achieved already after a few dozen iterations for HM-IRLS, while
typically only after several or many hundreds for AM-IRLS and IRLS-col. Furthermore, it is
interesting to observe that the algorithm IRLS-MF, which corresponds to the variant studied and implemented by [MF12] and differs from IRLS-col mainly in the choice of the ϵ-smoothing (14), has a considerably worse performance than the other IRLS methods. This is plausible since the smoothing severely influences the optimization landscape of the objective to be minimized.
The strong performance of HM-IRLS is in stark contrast to the behavior of all the algorithms
that are based on different approaches than IRLS and that we considered in our experiments. They
basically never recover any rank-r matrix if ρ < 1.2, and most of the algorithms need a sample complexity parameter of ρ > 1.7 to exceed an empirical recovery probability of a mere 50%. A
success rate of close to 80% is reached not before raising ρ above 2.0 in our experimental setting,
and also only for a subset of the comparison algorithms, in particular for Matrix ALPS II,
ASD, AltMin. The empirical probability of 100% is only reached for some of the IRLS methods,
and not for any competing method in our experimental setting, even for quite large oversampling
factors such as ρ = 2.5. While we do not rule out that a possible parameter tuning could improve
the performance of any of the algorithms slightly, we conclude that for hard matrix completion
problems, the experimental evidence for the vast differences in the recovery performance of HM-IRLS compared to other methods is very apparent.
Thus, our observation is that the proposed HM-IRLS algorithm recovers low-rank matrices systematically with nearly the optimal number of measurements and needs fewer measurements than all
the state-of-the-art algorithms we included in our experiments, if the non-convexity parameter p is
chosen such that p ≪ 1.
We also note that the very sharp phase transition between failure and success that can be observed in Figure 6 for HM-IRLS indicates that the sample complexity parameter ρ is indeed the
major variable determining the success of HM-IRLS. In contrast, the wider phase transitions for
the other algorithms suggest that they might depend more on other factors, as the realizations of
the random sampling model and the interplay of measurement operator Φ and ground truth matrix
X0.
Another conclusion that can be drawn from the empirical recovery probability of 1 is that, despite the severe non-convexity of the underlying Schatten-p quasinorm for, e.g., p = 0.1, HM-IRLS
with the initialization of X (1) as the Frobenius norm minimizer does not get stuck in stationary
points if the oversampling factor is large enough. Further experiments conducted with random
initializations as well as severely adversary initializations, e.g., with starting points chosen in the
orthogonal complement of the spaces spanned by the singular vectors of the ground truth matrix
X 0 , lead to comparable results. Therefore, we claim that HM-IRLS exhibits a global convergence
behavior in interesting application cases and for oversampling factor ranges for which competing
non-convex low-rank matrix recovery algorithms fail to succeed. We consider a theoretical investigation of such behavior as an interesting open problem to explore.
5.4 Computational complexity
While the harmonic mean weight matrix W̃^(n), cf. (15), is an inverse of a (d1d2 × d1d2)-matrix and therefore in general a dense (d1d2 × d1d2)-matrix, it is important to note that it never has to be computed explicitly in an implementation of HM-IRLS; neither is it necessary to compute its inverse

(W̃^(n))^{−1} = (1/2) [U^(n) (Σ^(n))^{2−p} U^(n)* ⊕ V^(n) (Σ^(n))^{2−p} V^(n)*]

explicitly.
Indeed, as it can be seen in (13) and by the definition of the Kronecker sum (55), the harmonic
mean weight matrix appears just as the linear operator (W (n) )−1 on the space of matrices Md1 ×d2 ,
whose action consists of a left- and right-sided matrix multiplication, cf. (12). Therefore, the application of (W (n) )−1 is O(d 1d 2 (d 1 + d 2 )) by the naive matrix multiplication algorithm, and can be
easily parallelized.
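This observation is easy to verify numerically (a minimal sketch for real matrices, with our own variable names): the action (W^(n))^{−1}(X) = ½(A X + X B), with A = U Σ^{2−p} U* and B = V Σ^{2−p} V*, matches the dense Kronecker-sum matrix applied to the column-major vectorization of X.

```python
import numpy as np

rng = np.random.default_rng(1)
d1, d2, p = 4, 3, 0.5

Xn = rng.standard_normal((d1, d2))          # stand-in for the iterate X^(n)
U, s, Vt = np.linalg.svd(Xn, full_matrices=False)
A = U @ np.diag(s ** (2 - p)) @ U.T         # left factor,  d1 x d1
B = Vt.T @ np.diag(s ** (2 - p)) @ Vt       # right factor, d2 x d2

def W_inv(X):
    # O(d1*d2*(d1+d2)) left/right multiplications instead of a dense
    # (d1 d2 x d1 d2) matrix-vector product
    return 0.5 * (A @ X + X @ B)

# dense Kronecker-sum representation (column-major vec convention):
# vec(A X + X B) = (I_{d2} (x) A + B^T (x) I_{d1}) vec(X), B symmetric here
K = 0.5 * (np.kron(np.eye(d2), A) + np.kron(B, np.eye(d1)))
X = rng.standard_normal((d1, d2))
assert np.allclose(W_inv(X).flatten(order="F"), K @ X.flatten(order="F"))
```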
While this useful observation is helpful for the implementation of HM-IRLS, it does not carry over to AM-IRLS, as the action of (W^(n)_{arith})^{−1}, the inverse of the arithmetic mean weight matrix at iteration n, is not representable as a sum of left- and right-sided matrix multiplications. This means that HM-IRLS has a computational advantage over AM-IRLS already in the execution of a fixed number of iterations.
The cost to compute Φ ∘ (W̃^(n))^{−1} ∘ Φ* ∈ M_{m×m} depends on the linear measurement operator Φ. In the matrix completion setting (23), no additional arithmetic operations have to be performed, as Φ is just a selection operator in this case, and for HM-IRLS, this means that Φ ∘ (W̃^(n))^{−1} ∘ Φ* is a sparse matrix.
Thus, the algorithm HM-IRLS consists basically of two computational steps per iteration: the computation of the SVD of the (d1 × d2)-matrix X^(n) and the solution of the linearly constrained least squares problem in (13). The first is of time complexity O(d1 d2 min(d1, d2)). The time complexity of the second depends on Φ, but is dominated by the inversion of a symmetric, sparse (m × m) linear system in the matrix completion setting, if m is the number of given entries. This has a worst case time complexity of O(max(d1, d2)³ r³) if ρ is just a constant oversampling factor.
For the matrix completion case, this allows us to recover low-rank matrices up to, e.g., d 1 = d 2 =
3000 on a single machine given very few entries with HM-IRLS.
Acceleration possibilities and extensions
To tackle higher dimensionalities in reasonable runtimes, a key strategy could be to address the
computational bottleneck of HM-IRLS, the solution of the m × m linear system in (13), by using
iterative methods. For IRLS algorithms designed for the related sparse recovery problem, the usage
of conjugate gradient (CG) methods is discussed in [FPRW16]. By coupling the accuracy of the CG
solutions to the outer IRLS iteration and using appropriate preconditioning, the authors obtain a
competitive solver for the sparse recovery problem, also providing a convergence analysis. Similar
ideas could be used for an acceleration of HM-IRLS.
It is interesting to see if further computational improvements can be achieved by combining the
ideas of HM-IRLS with the usage of truncated and randomized SVDs [HMT11], replacing the full
SVDs of the X (n) that are needed to define the linear operator (W (n) )−1 in Algorithm 1.
6 Theoretical analysis
For the theoretical analysis of HM-IRLS, we introduce the following auxiliary functional Jp , leading to a variational interpretation of the algorithm. In the whole section, we denote d = min(d 1 , d 2 )
and D = max(d 1 , d 2 ).
Definition 13. Let 0 < p ≤ 1. Given a full rank matrix Z ∈ M_{d1×d2}, let

W̃(Z) := 2 [I_{d2} ⊗ (ZZ*)^{1/2}] [(ZZ*)^{1/2} ⊕ (Z*Z)^{1/2}]^{−1} [(Z*Z)^{1/2} ⊗ I_{d1}] ∈ H_{d1d2×d1d2}

be the harmonic mean matrix W̃ associated to Z.
We define the auxiliary functional Jp : M_{d1×d2} × R_{≥0} × M_{d1×d2} → R_{≥0} as

Jp(X, ϵ, Z) := (p/2) ‖X_vec‖²_{ℓ2(W̃(Z))} + (ϵ²p/2) Σ_{i=1}^d σ_i(Z) + ((2−p)/2) Σ_{i=1}^d σ_i(Z)^{p/(p−2)}   if rank(Z) = d,
Jp(X, ϵ, Z) := +∞   if rank(Z) < d.
We note that the matrix W̃ of Definition 13 is just the harmonic mean of the matrices W̃1 := I_{d2} ⊗ (ZZ*)^{1/2} and W̃2 := (Z*Z)^{1/2} ⊗ I_{d1}, as introduced in Section 2.3, if (ZZ*)^{1/2} and (Z*Z)^{1/2} are positive definite. Indeed, in this case, (ZZ*)^{1/2} ⊕ (Z*Z)^{1/2} = W̃1 + W̃2 is invertible and, as (A^{−1} + B^{−1})^{−1} = A(A + B)^{−1}B for any positive definite matrices A and B of the same dimensions,

W̃(Z) = 2 W̃1 (W̃1 + W̃2)^{−1} W̃2 = 2 (W̃1^{−1} + W̃2^{−1})^{−1}.   (27)

We use the more general definition of W̃(Z) as it is well-defined for any full-rank Z ∈ M_{d1×d2} and as it allows to handle the case of non-square matrices, i.e., the case d1 ≠ d2, as in this case (ZZ*)^{1/2} or (Z*Z)^{1/2} has to be singular. Using the Moore-Penrose pseudo-inverses W̃1^+ and W̃2^+ of the matrices W̃1 and W̃2, we can rewrite W̃(Z) from Definition 13 as

W̃(Z) = 2 W̃1 (W̃1 + W̃2)^{−1} W̃2 = 2 (W̃1^+ + W̃2^+)^{−1}.
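The matrix identity underlying (27) is easy to sanity-check numerically; the following minimal numpy sketch (ours, for real symmetric positive definite matrices) verifies 2 W̃1 (W̃1 + W̃2)^{−1} W̃2 = 2 (W̃1^{−1} + W̃2^{−1})^{−1}:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5
A = rng.standard_normal((n, n)); A = A @ A.T + n * np.eye(n)  # positive definite
B = rng.standard_normal((n, n)); B = B @ B.T + n * np.eye(n)  # positive definite

inv = np.linalg.inv
harmonic_via_sum = 2 * A @ inv(A + B) @ B          # 2 W1 (W1 + W2)^{-1} W2
harmonic_via_inverses = 2 * inv(inv(A) + inv(B))   # 2 (W1^{-1} + W2^{-1})^{-1}
assert np.allclose(harmonic_via_sum, harmonic_via_inverses)
```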
With the auxiliary functional Jp at hand, we can interpret Algorithm 1 as an alternating minimization of the functional Jp(X, ϵ, Z) with respect to its arguments X, ϵ and Z.
In the following, we derive the formula (15) for the weight matrix W̃^(n+1) as the evaluation W̃^(n+1) = W̃(Z^(n+1)) of W̃ from Definition 13 at the minimizer

Z^(n+1) = argmin_{Z ∈ M_{d1×d2}} Jp(X^(n+1), ϵ^(n+1), Z),   (28)

with the minimizer being unique. Similarly, the formula (13) can be interpreted as

X^(n+1) = argmin_{X ∈ M_{d1×d2}, Φ(X)=Y} ‖X_vec‖²_{ℓ2(W̃(Z^(n)))} = argmin_{X ∈ M_{d1×d2}, Φ(X)=Y} Jp(X, ϵ^(n), Z^(n)).   (29)

These observations constitute the starting point of the convergence analysis of Algorithm 1, which is detailed subsequently after the verification of the optimization steps.
6.1 Optimization of Jp with respect to Z and X
We fix X ∈ M_{d1×d2} with singular value decomposition X = Σ_{i=1}^d σ_i u_i v_i*, where u_i ∈ C^{d1} and v_i ∈ C^{d2} are the left and right singular vectors, respectively, and σ_i = σ_i(X) denote its singular values for i ∈ [d]. Our objective in the following is the justification of formula (15). To obtain the building blocks of the weight matrix W̃^(n+1), we consider the minimization problem

argmin_{Z ∈ M_{d1×d2}} Jp(X, ϵ, Z)   (30)

for ϵ > 0.
Lemma 14. The unique minimizer of (30) is given by

Z_opt = Σ_{i=1}^d (σ_i(X)² + ϵ²)^{(p−2)/2} u_i v_i*.

Furthermore, the value of Jp at the minimizer Z_opt is

Jp(X, ϵ, Z_opt) = Σ_{i=1}^d (σ_i(X)² + ϵ²)^{p/2} =: g_ϵ^p(X)   (31)

for p > 0.
The proof of this result is detailed in the appendix.
Remark 15. We note that the value of Jp(X, ϵ, Z_opt) can be interpreted as a smooth ϵ-perturbation of the p-th power of the Schatten-p quasi-norm of the matrix X. In fact, for ϵ = 0 we have

Jp(X, 0, Z_opt) = ‖X‖_{Sp}^p = g_0^p(X).
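The perturbed objective g_ϵ^p is straightforward to evaluate from the singular values; a minimal sketch (our own helper name, not from the paper):

```python
import numpy as np

def g_eps_p(X, eps, p):
    """g_eps^p(X) = sum_i (sigma_i(X)^2 + eps^2)^(p/2), cf. (31)."""
    s = np.linalg.svd(X, compute_uv=False)
    return float(np.sum((s**2 + eps**2) ** (p / 2)))

X = np.diag([3.0, 4.0])
p = 0.5
# for eps = 0 this is the p-th power of the Schatten-p quasinorm of X
assert np.isclose(g_eps_p(X, 0.0, p), 3.0**p + 4.0**p)
```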
Now, we show that our definition rule (13) of X^(n+1) in Algorithm 1 can be interpreted as a minimization of the auxiliary functional Jp with respect to the variable X. Additionally, this minimization step can be formulated as the solution of a weighted least squares problem with weight matrix W̃^(n). This is summarized in the following lemma.
Lemma 16. Let 0 < p ≤ 1. Given a full-rank matrix Z ∈ M_{d1×d2}, let W̃(Z) := 2([(ZZ*)^{1/2}]^+ ⊕ [(Z*Z)^{1/2}]^+)^{−1} ∈ H_{d1d2×d1d2} be the matrix from Definition 13 and W^{−1} : M_{d1×d2} → M_{d1×d2} the linear operator of its inverse,

W^{−1}(X) := (1/2) [ [(ZZ*)^{1/2}]^+ X + X [(Z*Z)^{1/2}]^+ ].

Then the matrix

X_opt = W^{−1} ∘ Φ* ∘ (Φ ∘ W^{−1} ∘ Φ*)^{−1} Y ∈ M_{d1×d2}

is the unique minimizer of the optimization problems

argmin_{Φ(X)=Y} Jp(X, ϵ, Z) = argmin_{Φ(X)=Y} ‖X_vec‖²_{ℓ2(W̃)}.   (32)

Moreover, a matrix X_opt ∈ M_{d1×d2} is a minimizer of the minimization problem (32) if and only if it fulfills the property

⟨W̃(Z)(X_opt)_vec, H_vec⟩_{ℓ2} = 0 for all H ∈ N(Φ) and Φ(X_opt) = Y.   (33)

In Appendix B.3, the interested reader can find a sketch of the proof of this lemma.
6.2 Basic properties of the algorithm and convergence results
In the following subsection, we will have a closer look at Algorithm 1 and point out some of its properties, in particular, the boundedness of the iterates (X^(n))_{n∈N} and the fact that two consecutive iterates get arbitrarily close as n → ∞. These results will be useful to finally develop the proof of convergence and to determine the rate of convergence of Algorithm 1 under conditions determined along the way.
Lemma 17. Let (X^(n), ϵ^(n))_{n∈N} be the sequence of iterates and smoothing parameters of Algorithm 1. Let X^(n) = Σ_{i=1}^d σ_i^(n) u_i^(n) v_i^(n)* be the SVD of the n-th iterate X^(n). Let (Z^(n))_{n∈N} be a corresponding sequence such that

Z^(n) = Σ_{i=1}^d (σ_i^(n)² + ϵ^(n)²)^{(p−2)/2} u_i^(n) v_i^(n)*

for n ∈ N. Then the following properties hold:

(a) Jp(X^(n), ϵ^(n), Z^(n)) ≥ Jp(X^(n+1), ϵ^(n+1), Z^(n+1)) for all n ≥ 1,
(b) ‖X^(n)‖_{Sp}^p ≤ Jp(X^(1), ϵ^(0), Z^(0)) =: Jp,0 for all n ≥ 1,
(c) the iterates X^(n), X^(n+1) come arbitrarily close as n → ∞, i.e.,

lim_{n→∞} ‖(X^(n) − X^(n+1))_vec‖²_{ℓ2} = 0.

At this point we notice that, assuming X^(n) → X and ϵ^(n) → ϵ for n → ∞ with the limit point (X, ϵ) ∈ M_{d1×d2} × R_{≥0}, it would follow that

Jp(X^(n), ϵ^(n), Z^(n)) → g_ϵ^p(X)

for n → ∞ by equation (31).
Now, let ϵ > 0, a measurement vector Y ∈ C^m and the linear operator Φ be given, and consider the optimization problem

min_{X ∈ M_{d1×d2}, Φ(X)=Y} g_ϵ^p(X)   (34)

with g_ϵ^p(X) = Σ_{i=1}^d (σ_i(X)² + ϵ²)^{p/2} and σ_i(X) being the i-th singular value of X, cf. (31). If g_ϵ^p(X) is non-convex, which is the case for p < 1, one might practically only be able to find critical points of the problem.
Lemma 18. Let X ∈ M_{d1×d2} be a matrix with SVD X = Σ_{i=1}^d σ_i u_i v_i*, and let ϵ > 0. If we define

W̃(X, ϵ) = 2 [ Σ_{i=1}^d (σ_i² + ϵ²)^{(2−p)/2} u_i u_i* ⊕ Σ_{i=1}^d (σ_i² + ϵ²)^{(2−p)/2} v_i v_i* ]^{−1} ∈ H_{d1d2×d1d2},

then W̃(X^(n), ϵ^(n)) = W̃^(n), with W̃^(n) defined as in Algorithm 1, cf. (10).
Furthermore, X is a critical point of the optimization problem (34) if and only if

⟨W̃(X, ϵ) X_vec, H_vec⟩_{ℓ2} = 0 for all H ∈ N(Φ) and Φ(X) = Y.   (35)

In the case that g_ϵ^p is convex, i.e., if p = 1, (35) implies that X is the unique minimizer of (34).
Now, we have some basic properties of the algorithm at hand that allow us, together with the strong null space property in Definition 4, to carry out the proof of the convergence result in Theorem 9. The proof is sketched in Appendix C using the results above.
6.3 Locally superlinear convergence
In the proof of Theorem 11 we use the following bound on perturbations of the singular value
decomposition, which is originally due to Wedin [Wed72]. It bounds the alignment of the subspaces
spanned by the singular vectors of two matrices by their norm distance, given a gap between the first
singular values of the one matrix and the last singular values of the other matrix that is sufficiently
pronounced.
Lemma 19 (Wedin's bound [Ste06]). Let X and X̄ be two matrices of the same size with singular value decompositions

X = [U1 U2] diag(Σ1, Σ2) [V1 V2]*   and   X̄ = [Ū1 Ū2] diag(Σ̄1, Σ̄2) [V̄1 V̄2]*,

where the submatrices have the sizes of corresponding dimensions. Suppose that δ, α satisfying 0 < δ ≤ α are such that α ≤ σ_min(Σ1) and σ_max(Σ̄2) < α − δ. Then

‖Ū2* U1‖_{S∞} ≤ √2 ‖X − X̄‖_{S∞} / δ   and   ‖V̄2* V1‖_{S∞} ≤ √2 ‖X − X̄‖_{S∞} / δ.   (36)
As a first step towards the proof of Theorem 11, we show the following lemma.
Lemma 20. Let (X^(n))_n be the output sequence of Algorithm 1 for parameters Φ, Y, r and 0 < p ≤ 1, and let X_0 ∈ M_{d1×d2} be a matrix such that Φ(X_0) = Y.

(i) Let η_{2r}^(n+1) be the best rank-2r approximation of η^(n+1) = X^(n+1) − X_0. Then

‖η^(n+1) − η_{2r}^(n+1)‖_{Sp}^{2p} ≤ 2^{2−p} ( Σ_{i=r+1}^d (σ_i²(X^(n)) + ϵ^(n)²)^{p/2} )^{2−p} ‖η_vec^(n+1)‖_{ℓ2(W̃^(n))}^{2p},

where W̃^(n) denotes the harmonic mean weight matrix from (10).

(ii) Assume that the linear map Φ : M_{d1×d2} → C^m fulfills the strong Schatten-p NSP of order 2r with constant γ_{2r} < 1. Then

‖η^(n+1)‖_{S2}^{2p} ≤ (2^p γ_{2r}^{2−p} / r^{2−p}) ( Σ_{i=r+1}^d (σ_i²(X^(n)) + ϵ^(n)²)^{p/2} )^{2−p} ‖η_vec^(n+1)‖_{ℓ2(W̃^(n))}^{2p}.

(iii) Under the same assumption as for (ii), it holds that

‖η^(n+1)‖_{Sp}^{2p} ≤ (1 + γ_{2r})² 2^{2−p} ( Σ_{i=r+1}^d (σ_i²(X^(n)) + ϵ^(n)²)^{p/2} )^{2−p} ‖η_vec^(n+1)‖_{ℓ2(W̃^(n))}^{2p}.   (37)
Proof of Lemma 20. (i) Let X^(n) = Ũ^(n) Σ^(n) Ṽ^(n)* be the (full) singular value decomposition of X^(n), i.e., Ũ^(n) ∈ U_{d1} and Ṽ^(n) ∈ U_{d2} are unitary matrices and Σ^(n) = diag(σ_1(X^(n)), …, σ_d(X^(n))) ∈ M_{d1×d2}. We define U_T^(n) ∈ M_{d1×r} as the matrix of the first r columns of Ũ^(n) and U_{Tc}^(n) ∈ M_{d1×(d1−r)} as the matrix of its last d1 − r columns, so that Ũ^(n) = [U_T^(n) U_{Tc}^(n)], and similarly V_T^(n) and V_{Tc}^(n).
As I_{d1} = U_T^(n) U_T^(n)* + U_{Tc}^(n) U_{Tc}^(n)* and I_{d2} = V_T^(n) V_T^(n)* + V_{Tc}^(n) V_{Tc}^(n)*, we note that

U_{Tc}^(n) U_{Tc}^(n)* η^(n+1) V_{Tc}^(n) V_{Tc}^(n)* = η^(n+1) − (U_T^(n) U_T^(n)* η^(n+1) + U_{Tc}^(n) U_{Tc}^(n)* η^(n+1) V_T^(n) V_T^(n)*),

while U_T^(n) U_T^(n)* η^(n+1) + U_{Tc}^(n) U_{Tc}^(n)* η^(n+1) V_T^(n) V_T^(n)* has a rank of at most 2r. This implies that

‖η^(n+1) − η_{2r}^(n+1)‖_{Sp} ≤ ‖U_{Tc}^(n) U_{Tc}^(n)* η^(n+1) V_{Tc}^(n) V_{Tc}^(n)*‖_{Sp} = ‖U_{Tc}^(n)* η^(n+1) V_{Tc}^(n)‖_{Sp}.   (38)
(38)
Using the definitions of Ue(n) and Ve(n) , we write the harmonic mean weight matrices of the n-th
iteration (10) as
e (n) = 2(Ve(n) ⊗ Ue(n) ) Σd(n)2−p ⊕ Σd(n)2−p −1 (Ve(n) ⊗ Ue(n) )∗ ,
(39)
W
1
2
(n)
(n)
where Σd1 ∈ Md1 ×d1 and Σd2 ∈ Md2 ×d2 are the diagonal matrices with the smoothed singular values
of X (n) from (11), but filled up with zeros if necessary. Using the abbreviation
1
(n+1)
e (n) 2 η vec
Ω := (Ve(n) ⊗ Ue(n) )∗W
∈ Cd1d2 ,
(40)
(n)2−p
(n)2−p 1/2
(n+1)
(n+1)
e (n)− 21 W
e (n) 12 η vec
η vec
=W
⊕ Σd2
Ω
= 2−1/2 (Ve(n) ⊗ Ue(n) ) Σd1
2−p
2−p
(n)
(n)
= 2−1/2 (Ve(n) ⊗ Ue(n) ) (Id2 ⊗ Σd1 2 )D L + (Σd2 2 ⊗ Id1 )D R Ω
(41)
we rewrite
with the diagonal matrices D L , D R ∈ Md1d2 ×d1d2 such that
(D L )i+(j−1)d1,i+(j−1)d1 = 1 +
and
(D R )i+(j−1)d1,i+(j−1)d1 =
σ 2 (X (n) ) + ϵ (n)2 2−p
−1/2
2
j
σi2 (X (n) )
+ ϵ (n)2
σ 2 (X (n) ) + ϵ (n)2 2−p
2
i
σ j2 (X (n) )
+ ϵ (n)2
+1
−1/2
for i ∈ [d 1 ] and j ∈ [d 2 ]. This can be seen from the definitions of the Kronecker product ⊗ and the
Kronecker sum ⊕ (cf. Appendix A), as
(n)2−p
(n)2−p 1/2
Σd1
⊕ Σd2
= (si + s j )1/2
i+(j−1)d 1,i+(j−1)d 1
= si (si + s j )
−1/2
+ s j (si + s j )−1/2 = si1/2 (1 +
(n)2−p
if s ` denotes the `-th diagonal entry of Σd2
(n)2−p
and Σd1
29
s j −1/2
si
)
+ s j1/2 ( + 1)−1/2
si
sj
for ` ∈ [max(d 1 , d 2 )].
If we write Σ_{d1,Tc}^{(n)(2−p)/2} ∈ M_{(d1−r)×(d1−r)} for the diagonal matrix containing the last d1 − r diagonal elements of Σ_{d1}^{(n)(2−p)/2}, and Σ_{d2,Tc}^{(n)(2−p)/2} ∈ M_{(d2−r)×(d2−r)} for the diagonal matrix containing the last d2 − r diagonal elements of Σ_{d2}^{(n)(2−p)/2}, it follows from (41) that

‖U_{Tc}^(n)* η^(n+1) V_{Tc}^(n)‖_{Sp}^p
= 2^{−p/2} ‖U_{Tc}^(n)* Ũ^(n) [Σ_{d1}^{(n)(2−p)/2} (D_L Ω)_mat + (D_R Ω)_mat Σ_{d2}^{(n)(2−p)/2}] Ṽ^(n)* V_{Tc}^(n)‖_{Sp}^p
≤ 2^{−p/2} ( ‖Σ_{d1,Tc}^{(n)(2−p)/2} ((D_L Ω)_mat)_{Tc,Tc}‖_{Sp}^p + ‖((D_R Ω)_mat)_{Tc,Tc} Σ_{d2,Tc}^{(n)(2−p)/2}‖_{Sp}^p ),

with the notation that M_{Tc,Tc} denotes the submatrix of M which contains the intersection of the last d1 − r rows of M with its last d2 − r columns.
Now, Hölder's inequality for Schatten-p quasinorms (e.g., [GGK00, Theorem 11.2]) can be used to see that

‖Σ_{d1,Tc}^{(n)(2−p)/2} ((D_L Ω)_mat)_{Tc,Tc}‖_{Sp}^p ≤ ‖Σ_{d1,Tc}^{(n)(2−p)/2}‖_{S_{2p/(2−p)}}^p ‖((D_L Ω)_mat)_{Tc,Tc}‖_{S2}^p.   (42)

Inserting the definition

‖Σ_{d1,Tc}^{(n)(2−p)/2}‖_{S_{2p/(2−p)}}^p = ( Σ_{i=r+1}^d (σ_i²(X^(n)) + ϵ^(n)²)^{p/2} )^{(2−p)/2}

allows us to rewrite the first factor, while the second factor can be bounded by

‖((D_L Ω)_mat)_{Tc,Tc}‖_{S2}^p ≤ ‖(D_L Ω)_mat‖_{S2}^p ≤ ‖Ω_mat‖_{S2}^p = ‖(Ṽ^(n) ⊗ Ũ^(n))* (W̃^(n))^{1/2} η_vec^(n+1)‖_{ℓ2}^p = ‖η_vec^(n+1)‖_{ℓ2(W̃^(n))}^p,

as the matrix D_L ∈ M_{d1d2×d1d2} from (41) fulfills ‖D_L‖_{S∞} ≤ 1 since its entries are bounded by 1; we also recall the definition (40) of Ω and that Ṽ^(n) and Ũ^(n) are unitary.
The term ‖((D_R Ω)_mat)_{Tc,Tc} Σ_{d2,Tc}^{(n)(2−p)/2}‖_{Sp}^p in the bound of ‖U_{Tc}^(n)* η^(n+1) V_{Tc}^(n)‖_{Sp}^p can be estimated analogously. Combining this with (38), we obtain

‖η^(n+1) − η_{2r}^(n+1)‖_{Sp}^{2p} ≤ 2^{2−p} ( Σ_{i=r+1}^d (σ_i²(X^(n)) + ϵ^(n)²)^{p/2} )^{2−p} ‖η_vec^(n+1)‖_{ℓ2(W̃^(n))}^{2p},

concluding the proof of statement (i).
(ii) Using the strong Schatten-$p$ null space property (18) of order $2r$ and that $\eta^{(n+1)} \in \mathcal N(\Phi)$, we estimate
$$\|\eta^{(n+1)}\|_{S_2}^{2p} = \Big(\|\eta_{2r}^{(n+1)}\|_{S_2}^2+\|\eta^{(n+1)}-\eta_{2r}^{(n+1)}\|_{S_2}^2\Big)^p \le \Bigg(\frac{\gamma_{2r}^{2/p}+\gamma_{2r}^{2/p-1}}{(2r)^{2/p-1}}\Bigg)^p\,\big\|\eta^{(n+1)}-\eta_{2r}^{(n+1)}\big\|_{S_p}^{2p} = \frac{\gamma_{2r}^{2-p}(\gamma_{2r}+1)^p}{(2r)^{2-p}}\,\big\|\eta^{(n+1)}-\eta_{2r}^{(n+1)}\big\|_{S_p}^{2p} \le \frac{2^p\gamma_{2r}^{2-p}}{2^{2-p}\,r^{2-p}}\,\big\|\eta^{(n+1)}-\eta_{2r}^{(n+1)}\big\|_{S_p}^{2p},$$
where we use in the second inequality a version of Stechkin's lemma [KKRT16, Lemma 3.1], which leads to the estimate
$$\big\|\eta^{(n+1)}-\eta_{2r}^{(n+1)}\big\|_{S_2}^{2} \le \frac{\big\|\eta_{2r}^{(n+1)}\big\|_{S_p}^{2}}{(2r)^{2/p-1}} \le \frac{\gamma_{2r}^{2/p}}{(2r)^{2/p-1}}\,\big\|\eta^{(n+1)}-\eta_{2r}^{(n+1)}\big\|_{S_p}^{2}.$$
Combining the estimate for $\|\eta^{(n+1)}\|_{S_2}^{2p}$ with statement (i), this results in
$$\|\eta^{(n+1)}\|_{S_2}^{2p} \le \frac{2^{2-p}\gamma_{2r}^{p}}{r^{2-p}}\Bigg(\sum_{i=r+1}^{d}\big(\sigma_i^2(X^{(n)})+\epsilon^{(n)2}\big)^{\frac p2}\Bigg)^{2-p}\big\|\eta_{\mathrm{vec}}^{(n+1)}\big\|_{\ell_2(\widetilde W^{(n)})}^{2p},$$
which shows statement (ii).
(iii) For the third statement, we use the strong Schatten-$p$ null space property (18) to see that
$$\|\eta^{(n+1)}\|_{S_p}^p = \|\eta_{2r}^{(n+1)}\|_{S_p}^p + \big\|\eta^{(n+1)}-\eta_{2r}^{(n+1)}\big\|_{S_p}^p \le (1+\gamma_{2r})\,\big\|\eta^{(n+1)}-\eta_{2r}^{(n+1)}\big\|_{S_p}^p,$$
and combine this with statement (i).
Lemma 21. Let $(X^{(n)})_n$ be the output sequence of Algorithm 1 with parameters $\Phi$, $Y$, $r$ and $0 < p \le 1$, and let $\widetilde W^{(n)}$ be the harmonic mean weight matrix (10) for $n \in \mathbb N$. Let $X_0 \in M_{d_1\times d_2}$ be a rank-$r$ matrix such that $\Phi(X_0) = Y$, with condition number $\kappa := \frac{\sigma_1(X_0)}{\sigma_r(X_0)}$.
(i) If (24) is fulfilled for iteration $n$, then $\eta^{(n+1)} = X^{(n+1)} - X_0$ fulfills
$$\big\|\eta_{\mathrm{vec}}^{(n+1)}\big\|_{\ell_2(\widetilde W^{(n)})}^{2p} \le \frac{4^p\,r^{p/2}\,\sigma_r(X_0)^{p(p-1)}}{(1-\zeta)^{2p}}\,\kappa^p\,\frac{\|\eta^{(n)}\|_{S_\infty}^{2p-p^2}}{(\epsilon^{(n)})^{2p-p^2}}\,\|\eta^{(n+1)}\|_{S_2}^{p}.$$
(ii) Under the same assumption as for (i), it holds that
$$\big\|\eta_{\mathrm{vec}}^{(n+1)}\big\|_{\ell_2(\widetilde W^{(n)})}^{2p} \le \frac{7^p\,r^{p/2}\max(r,d-r)^{p/2}\,\sigma_r(X_0)^{p(p-1)}}{(1-\zeta)^{2p}}\,\kappa^p\,\frac{\|\eta^{(n)}\|_{S_\infty}^{2p-p^2}}{(\epsilon^{(n)})^{2p-p^2}}\,\|\eta^{(n+1)}\|_{S_\infty}^{p}.$$
Proof of Lemma 21. (i) Recall that $X^{(n+1)} = \operatorname{arg\,min}_{\Phi(X)=Y}\|X_{\mathrm{vec}}\|^2_{\ell_2(\widetilde W^{(n)})}$ is the minimizer of the weighted least squares problem with weight matrix $\widetilde W^{(n)}$. As $\eta^{(n+1)} = X^{(n+1)} - X_0$ is in the null space of the measurement map $\Phi$, it follows from Lemma 16 that
$$0 = \big\langle \widetilde W^{(n)}X_{\mathrm{vec}}^{(n+1)},\,\eta_{\mathrm{vec}}^{(n+1)}\big\rangle = \big\langle \widetilde W^{(n)}(\eta^{(n+1)}+X_0)_{\mathrm{vec}},\,\eta_{\mathrm{vec}}^{(n+1)}\big\rangle,$$
which is equivalent to
$$\big\|\eta_{\mathrm{vec}}^{(n+1)}\big\|^2_{\ell_2(\widetilde W^{(n)})} = \big\langle \widetilde W^{(n)}\eta_{\mathrm{vec}}^{(n+1)},\,\eta_{\mathrm{vec}}^{(n+1)}\big\rangle = -\big\langle \widetilde W^{(n)}(X_0)_{\mathrm{vec}},\,\eta_{\mathrm{vec}}^{(n+1)}\big\rangle.$$
Using Hölder's inequality, we can therefore estimate
$$\big\|\eta_{\mathrm{vec}}^{(n+1)}\big\|^2_{\ell_2(\widetilde W^{(n)})} = -\big\langle \widetilde W^{(n)}(X_0)_{\mathrm{vec}},\,\eta_{\mathrm{vec}}^{(n+1)}\big\rangle_{\ell_2} = -\big\langle [\widetilde W^{(n)}(X_0)_{\mathrm{vec}}]_{\mathrm{mat}},\,\eta^{(n+1)}\big\rangle_F \le \big\|[\widetilde W^{(n)}(X_0)_{\mathrm{vec}}]_{\mathrm{mat}}\big\|_{S_2}\,\|\eta^{(n+1)}\|_{S_2}. \tag{43}$$
To bound the first factor, we first rewrite the action of $\widetilde W^{(n)}$ on $X_0$ in the matrix space as
$$\big[\widetilde W^{(n)}(X_0)_{\mathrm{vec}}\big]_{\mathrm{mat}} = 2\Big[(\widetilde V^{(n)}\otimes\widetilde U^{(n)})\big(\Sigma_{d_1}^{(n)2-p}\oplus\Sigma_{d_2}^{(n)2-p}\big)^{-1}(\widetilde V^{(n)}\otimes\widetilde U^{(n)})^*(X_0)_{\mathrm{vec}}\Big]_{\mathrm{mat}} = \widetilde U^{(n)}\Big(H^{(n)}\circ\big(\widetilde U^{(n)*}X_0\widetilde V^{(n)}\big)\Big)\widetilde V^{(n)*},$$
using (39) and Lemma 20 about the action of inverses of Kronecker sums, with the notation that $H^{(n)} \in M_{d_1\times d_2}$ such that
$$H^{(n)}_{i,j} = 2\Big[\mathbb 1_{\{i\le d\}}\big(\sigma_i(X^{(n)})^2+\epsilon^{(n)2}\big)^{\frac{2-p}{2}} + \mathbb 1_{\{j\le d\}}\big(\sigma_j(X^{(n)})^2+\epsilon^{(n)2}\big)^{\frac{2-p}{2}}\Big]^{-1}$$
for $i \in [d_1]$, $j \in [d_2]$, where $\mathbb 1_{\{i\le d\}} = 1$ if $i \le d$ and $\mathbb 1_{\{i\le d\}} = 0$ otherwise. This enables us to estimate
$$\begin{aligned}
\big\|[\widetilde W^{(n)}(X_0)_{\mathrm{vec}}]_{\mathrm{mat}}\big\|_{S_2}^2 &= \big\|\widetilde U^{(n)}\big(H^{(n)}\circ(\widetilde U^{(n)*}X_0\widetilde V^{(n)})\big)\widetilde V^{(n)*}\big\|_{S_2}^2 = \big\|H^{(n)}\circ(\widetilde U^{(n)*}X_0\widetilde V^{(n)})\big\|_{S_2}^2\\
&= \left\|H^{(n)}\circ\begin{pmatrix} U_T^{(n)*}X_0V_T^{(n)} & U_T^{(n)*}X_0V_{T_c}^{(n)}\\[1mm] U_{T_c}^{(n)*}X_0V_T^{(n)} & U_{T_c}^{(n)*}X_0V_{T_c}^{(n)}\end{pmatrix}\right\|_{S_2}^2\\
&= \big\|H^{(n)}_{T,T}\circ(U_T^{(n)*}X_0V_T^{(n)})\big\|_{S_2}^2 + \big\|H^{(n)}_{T,T_c}\circ(U_T^{(n)*}X_0V_{T_c}^{(n)})\big\|_{S_2}^2\\
&\quad + \big\|H^{(n)}_{T_c,T}\circ(U_{T_c}^{(n)*}X_0V_T^{(n)})\big\|_{S_2}^2 + \big\|H^{(n)}_{T_c,T_c}\circ(U_{T_c}^{(n)*}X_0V_{T_c}^{(n)})\big\|_{S_2}^2,
\end{aligned}\tag{44}$$
using the notation from the proof of Lemma 20. To bound the first summand, we calculate
$$\begin{aligned}
\big\|H^{(n)}_{T,T}\circ(U_T^{(n)*}X_0V_T^{(n)})\big\|_{S_2} &\le \big\|H^{(n)}_{T,T}\circ(U_T^{(n)*}X^{(n)}V_T^{(n)})\big\|_{S_2} + \big\|H^{(n)}_{T,T}\circ(-U_T^{(n)*}\eta^{(n)}V_T^{(n)})\big\|_{S_2}\\
&\le \big\|H^{(n)}_{T,T}\circ\Sigma_T^{(n)}\big\|_{S_2} + \big\|H^{(n)}_{T,T}\circ(U_T^{(n)*}\eta^{(n)}V_T^{(n)})\big\|_{S_2}\\
&\le \Bigg(\sum_{i=1}^{r}\frac{\sigma_i^2(X^{(n)})}{\big(\sigma_i^2(X^{(n)})+\epsilon^{(n)2}\big)^{2-p}}\Bigg)^{1/2} + \max_{i,j\in[r]}\big|H^{(n)}_{i,j}\big|\,\big\|U_T^{(n)*}\eta^{(n)}V_T^{(n)}\big\|_{S_2}\\
&\le \sqrt r\,\sigma_r(X^{(n)})^{p-1} + \big(\sigma_r^2(X^{(n)})+\epsilon^{(n)2}\big)^{\frac{p-2}{2}}\sqrt r\,\|\eta^{(n)}\|_{S_\infty}\\
&\le \sqrt r\,\sigma_r(X^{(n)})^{p-1} + \sqrt r\,\sigma_r(X^{(n)})^{p-2}\|\eta^{(n)}\|_{S_\infty} = \sqrt r\,\sigma_r(X^{(n)})^{p-2}\big(\sigma_r(X^{(n)})+\|\eta^{(n)}\|_{S_\infty}\big),
\end{aligned}$$
denoting $\Sigma_T^{(n)} = \operatorname{diag}(\sigma_i(X^{(n)}))_{i=1}^r$ and using in the second inequality that the matrices $U_T^{(n)}$ and $V_T^{(n)}$ contain the first $r$ left resp. right singular vectors of $X^{(n)}$, together with the estimates $\|X\|_{S_1} \le \sqrt r\,\|X\|_{S_2} \le r\,\|X\|_{S_\infty}$ for $(r\times r)$-matrices $X$.
With the notations $s_r^0 := \sigma_r(X_0)$ and $s_1^0 := \sigma_1(X_0)$, we note that $\sigma_r(X^{(n)}) \ge s_r^0(1-\zeta)$, as the assumption (24) implies that
$$s_r^0 = \sigma_r(X_0) = \sigma_r(X^{(n)}-\eta^{(n)}) \le \sigma_r(X^{(n)}) + \sigma_1(\eta^{(n)}) \le \sigma_r(X^{(n)}) + \zeta s_r^0,$$
using [Ber09, Proposition 9.6.8] in the first inequality.
Therefore, we can bound the first summand of (44) such that
$$\big\|H^{(n)}_{T,T}\circ(U_T^{(n)*}X_0V_T^{(n)})\big\|_{S_2} \le \sqrt r\,\big(s_r^0(1-\zeta)\big)^{p-2}\big[s_r^0(1-\zeta)+\zeta s_r^0\big] = \sqrt r\,(s_r^0)^{p-1}(1-\zeta)^{p-2}. \tag{45}$$
For the second summand in the estimate of $\big\|[\widetilde W^{(n)}(X_0)_{\mathrm{vec}}]_{\mathrm{mat}}\big\|_{S_2}^2$, similar arguments and again assumption (24) are used to compute
$$\begin{aligned}
\big\|H^{(n)}_{T,T_c}\circ(U_T^{(n)*}X_0V_{T_c}^{(n)})\big\|_{S_2} &\le \underbrace{\big\|H^{(n)}_{T,T_c}\circ(U_T^{(n)*}X^{(n)}V_{T_c}^{(n)})\big\|_{S_2}}_{=0} + \big\|H^{(n)}_{T,T_c}\circ(U_T^{(n)*}\eta^{(n)}V_{T_c}^{(n)})\big\|_{S_2}\\
&\le \max_{\substack{i\in[r]\\ j\in\{r+1,\dots,d_2\}}}\big|H^{(n)}_{i,j}\big|\,\big\|U_T^{(n)*}\eta^{(n)}V_{T_c}^{(n)}\big\|_{S_2} \le \frac{2\,\big\|U_T^{(n)*}\eta^{(n)}V_{T_c}^{(n)}\big\|_F}{\big(\sigma_r^2(X^{(n)})+\epsilon^{(n)2}\big)^{\frac{2-p}{2}}}\\
&\le 2\,\sigma_r(X^{(n)})^{p-2}\,\big\|U_T^{(n)*}\eta^{(n)}V_{T_c}^{(n)}\big\|_{S_2} \le 2\sqrt r\,\big(s_r^0(1-\zeta)\big)^{p-2}\|\eta^{(n)}\|_{S_\infty} \le 2\zeta\sqrt r\,(s_r^0)^{p-1}(1-\zeta)^{p-2}.
\end{aligned}\tag{46}$$
From exactly the same arguments it follows that also
$$\big\|H^{(n)}_{T_c,T}\circ(U_{T_c}^{(n)*}X_0V_T^{(n)})\big\|_{S_2} \le 2\zeta\sqrt r\,(s_r^0)^{p-1}(1-\zeta)^{p-2}. \tag{47}$$
It remains to bound the last summand $\big\|H^{(n)}_{T_c,T_c}\circ(U_{T_c}^{(n)*}X_0V_{T_c}^{(n)})\big\|_{S_2}$. We see that
$$\begin{aligned}
\big\|H^{(n)}_{T_c,T_c}\circ(U_{T_c}^{(n)*}X_0V_{T_c}^{(n)})\big\|_{S_2} &\le \max_{\substack{i\in\{r+1,\dots,d_1\}\\ j\in\{r+1,\dots,d_2\}}} H^{(n)}_{i,j}\,\big\|U_{T_c}^{(n)*}X_0V_{T_c}^{(n)}\big\|_{S_2} \le (\epsilon^{(n)})^{p-2}\,\big\|U_{T_c}^{(n)*}X_0V_{T_c}^{(n)}\big\|_{S_2}\\
&\le (\epsilon^{(n)})^{p-2}\,\big\|U_{T_c}^{(n)*}U_T^0\big\|_{S_\infty}\,\big\|S^0\big\|_{S_2}\,\big\|V_T^{0*}V_{T_c}^{(n)}\big\|_{S_2}\\
&\le (\epsilon^{(n)})^{p-2}\,\frac{\sqrt2\,\|\eta^{(n)}\|_{S_\infty}}{(1-\zeta)s_r^0}\,\sqrt r\,s_1^0\,\frac{\sqrt2\,\|\eta^{(n)}\|_{S_\infty}}{(1-\zeta)s_r^0} = 2\sqrt r\,\|\eta^{(n)}\|_{S_\infty}^2\,(\epsilon^{(n)})^{p-2}(1-\zeta)^{-2}(s_r^0)^{-1}\,\frac{s_1^0}{s_r^0},
\end{aligned}\tag{48}$$
where Hölder's inequality for Schatten norms was used in the third inequality. In the fourth inequality, Wedin's singular value perturbation bound of Lemma 19 is used with the choice $Z = X_0$, $\widetilde Z = X^{(n)}$, $\alpha = s_r^0$ and $\delta = (1-\zeta)s_r^0$, and finally $\epsilon^{(n)} \le \zeta s_r^0$ in the last inequality, which is implied by the rule (14) for $\epsilon^{(n)}$ together with assumption (24).
Summarizing the estimates (45)–(48), we conclude that
$$\begin{aligned}
\big\|[\widetilde W^{(n)}(X_0)_{\mathrm{vec}}]_{\mathrm{mat}}\big\|_{S_2}^2 &\le \frac{r(s_r^0)^{2p-2}}{(1-\zeta)^{4-2p}}\Bigg(1+8\zeta^2+4\,\frac{\|\eta^{(n)}\|_{S_\infty}^4\,(\epsilon^{(n)})^{2p-4}\,(s_r^0)^{-2p}}{(1-\zeta)^{2p}}\Big(\frac{s_1^0}{s_r^0}\Big)^2\Bigg)\\
&= \frac{r(s_r^0)^{2p-2}}{(1-\zeta)^4}\Bigg((1+8\zeta^2)(1-\zeta)^{2p}+4\,\frac{\|\eta^{(n)}\|_{S_\infty}^{4-2p}\,\|\eta^{(n)}\|_{S_\infty}^{2p}}{(\epsilon^{(n)})^{4-2p}\,(s_r^0)^{2p}}\Big(\frac{s_1^0}{s_r^0}\Big)^2\Bigg)\\
&\le \frac{r(s_r^0)^{2p-2}}{(1-\zeta)^4}\big(9+4\zeta^{2p}\kappa^2\big)\,\frac{\|\eta^{(n)}\|_{S_\infty}^{4-2p}}{(\epsilon^{(n)})^{4-2p}} \le \frac{13\,r(s_r^0)^{2p-2}}{(1-\zeta)^4}\,\kappa^2\,\frac{\|\eta^{(n)}\|_{S_\infty}^{4-2p}}{(\epsilon^{(n)})^{4-2p}},
\end{aligned}$$
as $0 < \zeta < 1$, $\epsilon^{(n)} \le \sigma_{r+1}(X^{(n)}) = \|X_{T_c}^{(n)}\|_{S_\infty} \le \|\eta^{(n)}\|_{S_\infty}$, and using the assumption (24) in the second inequality. This concludes the proof of Lemma 21(i) together with inequality (43), as $13^{p/2} \le 16^{p/2} = 4^p$.
(ii) For the second statement of Lemma 21, we proceed similarly as before, but note that by Hölder's inequality, also
$$\big\|\eta_{\mathrm{vec}}^{(n+1)}\big\|^2_{\ell_2(\widetilde W^{(n)})} \le \big\|[\widetilde W^{(n)}(X_0)_{\mathrm{vec}}]_{\mathrm{mat}}\big\|_{S_1}\,\|\eta^{(n+1)}\|_{S_\infty}, \tag{49}$$
cf. (43). Furthermore,
$$\begin{aligned}
\big\|[\widetilde W^{(n)}(X_0)_{\mathrm{vec}}]_{\mathrm{mat}}\big\|_{S_1} &\le \big\|H^{(n)}_{T,T}\circ(U_T^{(n)*}X_0V_T^{(n)})\big\|_{S_1} + \big\|H^{(n)}_{T,T_c}\circ(U_T^{(n)*}X_0V_{T_c}^{(n)})\big\|_{S_1}\\
&\quad + \big\|H^{(n)}_{T_c,T}\circ(U_{T_c}^{(n)*}X_0V_T^{(n)})\big\|_{S_1} + \big\|H^{(n)}_{T_c,T_c}\circ(U_{T_c}^{(n)*}X_0V_{T_c}^{(n)})\big\|_{S_1}.
\end{aligned}\tag{50}$$
The four Schatten-1 norms can then be estimated by max(r, (d − r ))1/2 times the corresponding
Schatten-2 norms. Using then again inequalities (45)–(48), we conclude the proof of (ii).
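The Schatten-norm Hölder inequality invoked repeatedly above (for instance in (42), with the exponent pairing $\frac1p = \frac{2-p}{2p} + \frac12$) can be sanity-checked numerically. The following is an illustrative sketch in NumPy, not part of the original argument; the helper name `schatten` is ours:

```python
import numpy as np

def schatten(M, q):
    """Schatten-q (quasi)norm: the l_q norm of the singular values of M."""
    s = np.linalg.svd(M, compute_uv=False)
    return np.sum(s**q)**(1.0/q)

rng = np.random.default_rng(0)
p = 0.5
q = 2*p/(2 - p)   # pairing used in (42): 1/p = 1/q + 1/2
for _ in range(100):
    A = rng.standard_normal((5, 5))
    B = rng.standard_normal((5, 5))
    # Hoelder for Schatten quasinorms: ||AB||_{S_p} <= ||A||_{S_q} ||B||_{S_2}
    assert schatten(A @ B, p) <= schatten(A, q)*schatten(B, 2) + 1e-9
```

The same check with other conjugate pairings $1/p = 1/q + 1/r$ behaves analogously.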
Proof of Theorem 11. First we note that
$$\Bigg(\sum_{i=r+1}^{d}\big(\sigma_i^2(X^{(n)})+\epsilon^{(n)2}\big)^{\frac p2}\Bigg)^{2-p} \le 2^{p-\frac{p^2}{2}}\,(d-r)^{2-p}\,\sigma_{r+1}(X^{(n)})^{p(2-p)} \tag{51}$$
as $\epsilon^{(n)} \le \sigma_{r+1}(X^{(n)})$ due to the choice of $\epsilon^{(n)}$ in (14). We proceed by induction over $n \ge \bar n$.
Lemma 20(ii) and Lemma 21(ii) imply together with (51) that for $n = \bar n$,
$$\|\eta^{(n+1)}\|_{S_\infty}^{2p} \le \|\eta^{(n+1)}\|_{S_2}^{p}\,\|\eta^{(n+1)}\|_{S_\infty}^{p} \le 2^p\gamma_{2r}^{2-p}\,2^{p-\frac{p^2}{2}}\Big(\frac{d-r}{r}\Big)^{2-p/2}\frac{7^p\,r^p\,(s_r^0)^{p(p-1)}}{(1-\zeta)^{2p}}\,\kappa^p\,\|\eta^{(n)}\|_{S_\infty}^{2p-p^2} \le 2^{5p}\gamma_{2r}^{2-p}\Big(\frac{d-r}{r}\Big)^{2-p/2}\frac{r^p\,(s_r^0)^{p(p-1)}}{(1-\zeta)^{2p}}\,\kappa^p\,\|\eta^{(n)}\|_{S_\infty}^{p(2-p)} \tag{52}$$
as $\sigma_{r+1}(X^{(n)}) = \epsilon^{(n)}$ by assumption for $n = \bar n$.
Similarly, by Lemma 20(iii), Lemma 21(ii) and (51), the error in the Schatten-$p$ quasinorm fulfills
$$\|\eta^{(n+1)}\|_{S_p}^{2p} \le (1+\gamma_{2r})^2\,2^{2+2p}\,(d-r)^{2-p}\,\frac{r^{p/2}\,(s_r^0)^{p(p-1)}}{(1-\zeta)^{2p}}\,\kappa^p\,\|\eta^{(n)}\|_{S_\infty}^{p(2-p)}\,\|\eta^{(n+1)}\|_{S_2}^{p} \tag{53}$$
for $n = \bar n$. Using the strong Schatten-$p$ null space property of order $2r$ for the operator $\Phi$, we see from the arguments of the proof of Lemma 20(ii) that
$$\|\eta^{(n)}\|_{S_\infty}^p \le \|\eta^{(n)}\|_{S_2}^p \le \frac{2^{p-1}\gamma_{2r}^{1-p/2}}{r^{1-p/2}}\,\|\eta^{(n)}\|_{S_p}^p$$
and also $\|\eta^{(n+1)}\|_{S_2}^p \le \frac{2^{p-1}\gamma_{2r}^{1-p/2}}{r^{1-p/2}}\|\eta^{(n+1)}\|_{S_p}^p$. Inserting that in (53) and dividing by $\|\eta^{(n+1)}\|_{S_p}^p$, we obtain
$$\|\eta^{(n+1)}\|_{S_p}^{p} \le 2^{4p}(1+\gamma_{2r})^2\gamma_{2r}^{2-p}\Big(\frac{d-r}{r}\Big)^{2-p}\frac{r^{p/2}\,(s_r^0)^{p(p-1)}}{(1-\zeta)^{2p}}\,\kappa^p\,\|\eta^{(n)}\|_{S_\infty}^{p(1-p)}\,\|\eta^{(n)}\|_{S_p}^{p}.$$
Under the assumption that (25) holds, it follows from this and (52) that
$$\|\eta^{(n+1)}\|_{S_\infty}^p \le \|\eta^{(n)}\|_{S_\infty}^p \qquad\text{and}\qquad \|\eta^{(n+1)}\|_{S_p}^p \le \|\eta^{(n)}\|_{S_p}^p \tag{54}$$
for $n = \bar n$, which also entails the statement of Theorem 11 for this iteration.

Let now $n' > \bar n$ be such that (54) is true for all $n$ with $n' > n \ge \bar n$. If $\sigma_{r+1}(X^{(n')}) \le \epsilon^{(n'-1)}$, then $\epsilon^{(n')} = \sigma_{r+1}(X^{(n')})$ and the arguments from above show (54) also for $n = n'$.
Otherwise $\sigma_{r+1}(X^{(n')}) > \epsilon^{(n'-1)}$ and there exists $n' > n'' \ge \bar n$ such that $\epsilon^{(n')} = \epsilon^{(n'')} = \sigma_{r+1}(X^{(n'')})$. Then
$$\|\eta^{(n'+1)}\|_{S_\infty}^{p} \le 14^p\,\gamma_{2r}^{2-p}\Bigg(\sum_{i=r+1}^{d}\Big(\frac{\sigma_i^2(X^{(n')})}{\epsilon^{(n'')2}}+1\Big)^{\frac p2}\Bigg)^{2-p}\frac{r^{p/2}\max(r,d-r)^{p/2}}{r^{2-p}}\,\frac{(s_r^0)^{p(p-1)}}{(1-\zeta)^{2p}}\,\kappa^p\,\|\eta^{(n')}\|_{S_\infty}^{p(2-p)}$$
and we compute
$$\begin{aligned}
\Bigg(\sum_{i=r+1}^{d}\Big(\frac{\sigma_i^2(X^{(n')})}{\epsilon^{(n'')2}}+1\Big)^{\frac p2}\Bigg)^{2-p} &\le \Bigg(\sum_{i=r+1}^{d}\frac{\sigma_i^p(X^{(n')})}{\epsilon^{(n'')p}}+(d-r)\Bigg)^{2-p} \le \Bigg(\frac{\|\eta^{(n')}\|_{S_p}^p}{\epsilon^{(n'')p}}+(d-r)\Bigg)^{2-p}\\
&\le \Bigg(\frac{\|\eta^{(n'')}\|_{S_p}^p}{\epsilon^{(n'')p}}+(d-r)\Bigg)^{2-p} \le \Bigg(\frac{2(1+\gamma_{2r})\,\|X_{T_c}^{(n'')}\|_{S_p}^p}{(1-\gamma_{2r})\,\epsilon^{(n'')p}}+(d-r)\Bigg)^{2-p}\\
&\le \Big(\frac{3+\gamma_{2r}}{1-\gamma_{2r}}\Big)^{2-p}(d-r)^{2-p},
\end{aligned}$$
using that $X_0$ is a matrix of rank at most $r$ in the second inequality, the inductive hypothesis in the third and an analogue of (61) for a Schatten-$p$ quasinorm on the left hand side (cf. [KKRT16, Lemma 3.2] for the corresponding result for $p = 1$) in the last inequality. The latter argument uses the assumption on the null space property. This shows that
$$\|\eta^{(n'+1)}\|_{S_\infty}^p \le \widetilde\mu\,\|\eta^{(n')}\|_{S_\infty}^{p(2-p)}$$
for
$$\widetilde\mu := 2^{4p}\gamma_{2r}^{2-p}\,\max\Big(2^p(d-r)^{\frac p2},\,(1+\gamma_{2r})^2\Big)\Big(\frac{(3+\gamma_{2r})(d-r)}{(1-\gamma_{2r})\,r}\Big)^{2-p}\frac{r^{p/2}\,(s_r^0)^{p(p-1)}}{(1-\zeta)^{2p}}\,\kappa^p,$$
and $\|\eta^{(n'+1)}\|_{S_\infty}^p \le \|\eta^{(n')}\|_{S_\infty}^p$ under the assumption (25) of Theorem 11, as $\widetilde\mu \le \mu$ with $\mu$ as in (26). Indeed $\widetilde\mu \le \mu$ since
$$\max\Big(2^p(d-r)^{\frac p2},\,(1+\gamma_{2r})^2\Big)\Big(\frac{d-r}{r}\Big)^{2-p}r^{p/2} \le 2^p(1+\gamma_{2r})^2\Big(\frac{d-r}{r}\Big)^{2-p/2}r^{p}.$$
The same argument shows that $\|\eta^{(n'+1)}\|_{S_p}^p \le \|\eta^{(n')}\|_{S_p}^p$, which finishes the proof.
Remark 22. We note that the weight matrices of the previous IRLS approaches IRLS-col and IRLS-row [FRW11, MF12] at iteration $n$ could be expressed in our notation as
$$I_{d_2}\otimes W_L^{(n)} := I_{d_2}\otimes U^{(n)}\big(\Sigma_{d_1}^{(n)}\big)^{p-2}U^{(n)*} \qquad\text{and}\qquad W_R^{(n)}\otimes I_{d_1} := V^{(n)}\big(\Sigma_{d_2}^{(n)}\big)^{p-2}V^{(n)*}\otimes I_{d_1},$$
respectively, cf. Section 2.2, if $X^{(n)} = U^{(n)}\Sigma^{(n)}V^{(n)*} = U_T^{(n)}\Sigma_T^{(n)}V_T^{(n)*} + U_{T_c}^{(n)}\Sigma_{T_c}^{(n)}V_{T_c}^{(n)*}$ is the SVD of the iterate $X^{(n)}$ with $U_T^{(n)}$ and $V_T^{(n)}$ containing the first $r$ left and right singular vectors.
Now let
$$T^{(n)} := \big\{U_T^{(n)}Z_1^* + Z_2V_T^{(n)*} : Z_1 \in M_{d_2\times r},\ Z_2 \in M_{d_1\times r}\big\}$$
be the tangent space of the smooth manifold of rank-$r$ matrices at the best rank-$r$ approximation $U_T^{(n)}\Sigma_T^{(n)}V_T^{(n)*}$ of $X^{(n)}$, or, put differently, the direct sum of the row and column spaces of $U_T^{(n)}\Sigma_T^{(n)}V_T^{(n)*}$.
The fact that left- or right-sided weight matrices do not lead to algorithms with superlinear convergence rates for $p < 1$ can be explained by noting that there are always parts of the space $T^{(n)}$ that are equipped with too large weights if $X^{(n)} = U^{(n)}\Sigma^{(n)}V^{(n)*}$ is already approximately low-rank. In particular, proceeding as in (44), we obtain for $I_{d_2}\otimes W_L^{(n)}$
$$\begin{aligned}
\big\|\big[I_{d_2}\otimes W_L^{(n)}(X_0)_{\mathrm{vec}}\big]_{\mathrm{mat}}\big\|_{S_2}^2 &= \big\|\Sigma_T^{(n)p-2}\,U_T^{(n)*}X_0V_T^{(n)}\big\|_{S_2}^2 + \big\|\Sigma_T^{(n)p-2}\,U_T^{(n)*}X_0V_{T_c}^{(n)}\big\|_{S_2}^2\\
&\quad + \big\|\Sigma_{T_c}^{(n)p-2}\,U_{T_c}^{(n)*}X_0V_T^{(n)}\big\|_{S_2}^2 + \big\|\Sigma_{T_c}^{(n)p-2}\,U_{T_c}^{(n)*}X_0V_{T_c}^{(n)}\big\|_{S_2}^2
\end{aligned}$$
if $\Sigma_T^{(n)}$ denotes the diagonal matrix with the first $r$ non-zero entries of $\Sigma_{d_1}^{(n)}$ and $\Sigma_{T_c}^{(n)}$ the one of the remaining entries.
Here, the third of the four summands would become too large for p < 1 to allow for a superlinear
convergence when the last d − r singular values of X (n) approach zero. An analogous argument can be
used for the right-sided weight matrix $W_R^{(n)}\otimes I_{d_1}$ and, notably, also for arithmetic mean weight matrices $W_{\mathrm{arith}}^{(n)} = I_{d_2}\otimes W_L^{(n)} + W_R^{(n)}\otimes I_{d_1}$, cf. Section 2.3.
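To make the role of the harmonic-mean weight operator concrete, the following is a minimal numerical sketch of an IRLS iteration of the kind analyzed in this paper. It is an illustrative reimplementation under our own assumptions, not the authors' reference code: all function names are ours, the smoothing update only mimics the rule $\epsilon^{(n)} \le \sigma_{r+1}(X^{(n)})$ from (14) (with an added numerical floor), and the inverse weight is applied as $\widetilde W^{(n)-1} = \tfrac12\big[(X^{(n)}X^{(n)*}+\epsilon^2I)^{\frac{2-p}{2}} \oplus (X^{(n)*}X^{(n)}+\epsilon^2I)^{\frac{2-p}{2}}\big]$:

```python
import numpy as np

def fractional_power(S, a):
    """Fractional power of a symmetric PSD matrix via eigendecomposition."""
    w, Q = np.linalg.eigh(S)
    return Q @ np.diag(np.maximum(w, 0.0)**a) @ Q.T

def hm_irls(A, y, d1, d2, r, p=0.5, iters=150):
    """Sketch of IRLS with harmonic-mean weights for min ||X||_{S_p}^p s.t. A X_vec = y."""
    x = np.linalg.lstsq(A, y, rcond=None)[0]        # minimum-norm feasible initialization
    eps = np.inf
    for _ in range(iters):
        X = x.reshape((d1, d2), order='F')
        s = np.linalg.svd(X, compute_uv=False)
        eps = max(min(eps, s[r]), 1e-5)             # smoothing: eps^(n) <= sigma_{r+1}(X^(n))
        # inverse weight W^{-1} = (1/2)[(XX*+eps^2 I)^{(2-p)/2} (+) (X*X+eps^2 I)^{(2-p)/2}]
        L = fractional_power(X @ X.T + eps**2*np.eye(d1), (2-p)/2)
        R = fractional_power(X.T @ X + eps**2*np.eye(d2), (2-p)/2)
        Winv = 0.5*(np.kron(np.eye(d2), L) + np.kron(R, np.eye(d1)))
        # weighted least squares step: x = W^{-1} A^T (A W^{-1} A^T)^{-1} y
        x = Winv @ A.T @ np.linalg.solve(A @ Winv @ A.T, y)
    return x.reshape((d1, d2), order='F')

# tiny demo: recover a rank-1 (4 x 4) matrix from 15 random linear measurements
rng = np.random.default_rng(1)
X0 = np.outer(rng.standard_normal(4), rng.standard_normal(4))
A = rng.standard_normal((15, 16))
X_rec = hm_irls(A, A @ X0.flatten(order='F'), 4, 4, r=1)
print(np.linalg.norm(X_rec - X0) / np.linalg.norm(X0))
```

Note that the weight is built from both $XX^*$ and $X^*X$, in contrast to the one-sided choices $I_{d_2}\otimes W_L^{(n)}$ and $W_R^{(n)}\otimes I_{d_1}$ discussed in Remark 22.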
7
Acknowledgments
The two authors acknowledge the support and hospitality of the Hausdorff Research Institute for
Mathematics (HIM) during the early stage of this work within the HIM Trimester Program ”Mathematics of Signal Processing”. C.K. is supported by the German Research Foundation (DFG) in the
context of the Emmy Noether Junior Research Group “Randomized Sensing and Quantization of
Signals and Images” (KR 4512/1-1) and the ERC Starting Grant “High-Dimensional Sparse Optimal
Control” (HDSPCONTR - 306274). J.S. is supported by the DFG through the D-A-CH project no.
I1669-N26 and through the international research training group IGDK 1754 “Optimization and Numerical Analysis for Partial Differential Equations with Nonsmooth Structures”. The authors thank
Ke Wei for providing code of his implementations. They also thank Massimo Fornasier for helpful
discussions.
A
Kronecker and Hadamard products
For two matrices $A = (a_{ij})_{i\in[d_1],j\in[d_3]} \in \mathbb C^{d_1\times d_3}$ and $B \in \mathbb C^{d_2\times d_4}$, we call the matrix representation of their tensor product with respect to the standard bases the Kronecker product $A\otimes B \in \mathbb C^{d_1d_2\times d_3d_4}$. By its definition, $A\otimes B$ is a block matrix of $d_2\times d_4$ blocks whose block of index $(i,j) \in [d_1]\times[d_3]$ is the matrix $a_{ij}B \in \mathbb C^{d_2\times d_4}$. This implies, e.g., for $A \in \mathbb C^{d_1\times d_3}$ with $d_1 = 2$ and $d_3 = 3$ that
$$A\otimes B = \begin{pmatrix} a_{11} & a_{12} & a_{13}\\ a_{21} & a_{22} & a_{23}\end{pmatrix}\otimes B = \begin{pmatrix} a_{11}B & a_{12}B & a_{13}B\\ a_{21}B & a_{22}B & a_{23}B\end{pmatrix}.$$
The Kronecker product is useful for the elegant formulation of matrix equations involving left
and right matrix multiplications with the variable X , as
AX B ∗ = Y
if and only if
(B ⊗ A)X vec = Yvec .
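For real matrices this vectorization identity is easy to verify numerically; a minimal sketch (column-stacking vectorization, i.e. NumPy's `order='F'`):

```python
import numpy as np

rng = np.random.default_rng(2)
A = rng.standard_normal((2, 3))
X = rng.standard_normal((3, 5))
B = rng.standard_normal((4, 5))   # real B, so B* = B^T

Y = A @ X @ B.T
# column-stacking vectorization: vec(A X B^T) = (B kron A) vec(X)
x_vec = X.flatten(order='F')
y_vec = Y.flatten(order='F')
assert np.allclose(np.kron(B, A) @ x_vec, y_vec)
```

This is the identity that makes the weighted least squares subproblems of the algorithm expressible in vectorized form.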
We define the Hadamard product A ◦ B ∈ Cd1 ×d2 of two matrices A ∈ Cd1 ×d2 and B ∈ Cd1 ×d2 as
their entry-wise product
(A ◦ B)i, j = Ai, j Bi, j
with i ∈ [d 1 ] and j ∈ [d 2 ]. The Hadamard product is also known as Schur product in the literature.
Furthermore, if d 1 = d 3 and d 2 = d 4 , we define the Kronecker sum A ⊕ B ∈ Cd1d2 ×d1d2 of two
matrices A ∈ Cd1 ×d1 and B ∈ Cd2 ×d2 as the matrix
A ⊕ B = (Id2 ⊗ A) + (B ⊗ Id1 ).
(55)
Note that equations of the form AX + X B ∗ = Y can be rewritten as
(A ⊕ B)X vec = Yvec ,
using again the vectorizations of X and Y . An explicit formula that expresses the inverse (A ⊕ B)−1
of the Kronecker sum A ⊕ B is provided by the following lemma.
Lemma 23 ([Jam68]). Let $A \in \mathbb H^{d_1\times d_1}$ and $B \in \mathbb H^{d_2\times d_2}$, where one of the matrices is positive definite and the other positive semidefinite. If we denote the singular vectors of $A$ by $u_i \in \mathbb C^{d_1}$, $i \in [d_1]$, its singular values by $\sigma_i$, $i \in [d_1]$, and the singular vectors resp. values of $B$ by $v_j \in \mathbb C^{d_2}$ resp. $\mu_j$, $j \in [d_2]$, then
$$(A\oplus B)^{-1} = \sum_{i=1}^{d_1}\sum_{j=1}^{d_2}\frac{v_jv_j^*\otimes u_iu_i^*}{\sigma_i+\mu_j}. \tag{56}$$
Furthermore, the action of $(A\oplus B)^{-1}$ on the matrix space $M_{d_1\times d_2}$ can be written as
$$\big[(A\oplus B)^{-1}Z_{\mathrm{vec}}\big]_{\mathrm{mat}} = U\big(H\circ(U^*ZV)\big)V^* \tag{57}$$
for $Z \in M_{d_1\times d_2}$, $U = [u_1,\dots,u_{d_1}]$, and $V = [v_1,\dots,v_{d_2}]$, and the matrix $H \in M_{d_1\times d_2}$ with the entries $H_{i,j} = (\sigma_i+\mu_j)^{-1}$, $i \in [d_1]$, $j \in [d_2]$.
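Formula (57) can be checked numerically for real symmetric matrices; in the sketch below (our own, for illustration) the eigenvectors play the role of the singular vectors, since $A$ and $B$ are positive (semi)definite:

```python
import numpy as np

rng = np.random.default_rng(3)
d1, d2 = 3, 4
M = rng.standard_normal((d1, d1)); A = M @ M.T + np.eye(d1)   # positive definite
N = rng.standard_normal((d2, d2)); B = N @ N.T                 # positive semidefinite

sigma, U = np.linalg.eigh(A)   # A = U diag(sigma) U^T
mu, V = np.linalg.eigh(B)      # B = V diag(mu) V^T
ksum = np.kron(np.eye(d2), A) + np.kron(B, np.eye(d1))         # Kronecker sum A (+) B

Z = rng.standard_normal((d1, d2))
lhs = np.linalg.solve(ksum, Z.flatten(order='F')).reshape((d1, d2), order='F')
H = 1.0 / (sigma[:, None] + mu[None, :])                       # H_ij = 1/(sigma_i + mu_j)
rhs = U @ (H * (U.T @ Z @ V)) @ V.T                            # formula (57)
assert np.allclose(lhs, rhs)
```

The Hadamard product with $H$ replaces the explicit inversion of the $d_1d_2 \times d_1d_2$ Kronecker sum, which is what makes the weight action in the main text cheap to evaluate.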
B
Proofs of preliminary statements in Section 6
B.1
Proof of Lemma 14: Main part
First, we define the function
$$f_{X,\epsilon}^p(Z) = J_p(X,\epsilon,Z) = \begin{cases}\dfrac p2\|X_{\mathrm{vec}}\|^2_{\ell_2(\widetilde W(Z))} + \dfrac{\epsilon^2p}{2}\displaystyle\sum_{i=1}^d\sigma_i(Z) + \dfrac{2-p}{2}\displaystyle\sum_{i=1}^d\sigma_i(Z)^{\frac{p}{p-2}} & \text{if } \operatorname{rank}(Z)=d,\\[3mm] +\infty & \text{if } \operatorname{rank}(Z)<d,\end{cases}$$
for $X \in M_{d_1\times d_2}$, $\epsilon > 0$ fixed and with $Z \in M_{d_1\times d_2}$ as its only argument. We note that the set of minimizers of $f_{X,\epsilon}^p(Z)$ does not contain an instance $Z$ with rank smaller than $d$, as the value of $f_{X,\epsilon}^p(Z)$ is infinite at such points and, therefore, it is sufficient to search for minimizers on the set $\Omega = \{Z \in M_{d_1\times d_2} \mid \operatorname{rank}(Z) = d\}$ of matrices with rank $d$. We observe that the set $\Omega$ is an open set and that we have that
(a) $f_{X,\epsilon}^p(Z)$ is lower semicontinuous, which means that any sequence $(Z_k)_{k\in\mathbb N}$ with $Z_k \xrightarrow{k\to\infty} Z$ fulfills $\liminf_{k\to\infty} f_{X,\epsilon}^p(Z_k) \ge f_{X,\epsilon}^p(Z)$,

(b) $f_{X,\epsilon}^p(Z) \ge \alpha$ for all $Z \in M_{d_1\times d_2}$ for some constant $\alpha$,

(c) $f_{X,\epsilon}^p(Z)$ is coercive, i.e., for any sequence $(Z_k)_{k\in\mathbb N}$ with $\|Z_k\|_F \xrightarrow{k\to\infty} \infty$, we have $f_{X,\epsilon}^p(Z_k) \xrightarrow{k\to\infty} \infty$.
Property (a) is true as $f_{X,\epsilon}^p(Z)|_\Omega$ is a concatenation of an indicator function of an open set, which is lower-semicontinuous, and a sum of continuous functions on $\Omega$. Property (b) is obviously true for the choice $\alpha = 0$. To justify point (c), we note that $f_{X,\epsilon}^p(Z) > \frac{\epsilon^2p}{2}\sum_{i=1}^d\sigma_i(Z) = \frac{\epsilon^2p}{2}\|Z\|_{S_1} \ge \frac{\epsilon^2p}{2}\|Z\|_F$ and therefore, coercivity is clear from its definition. As a consequence of (a) and (c), it is also true that the level sets $L_C = \{Z \in M_{d_1\times d_2} \mid f_{X,\epsilon}^p(Z) \le C\}$ are closed and bounded and therefore, compact.
Via the direct method of calculus of variations, we conclude from the properties (a)–(c) that $f_{X,\epsilon}^p(Z)$ has at least one global minimizer belonging to the set of critical points of $f_{X,\epsilon}^p(Z)$ [Dac89, Theorem 1].

To characterize the set of critical points of $f_{X,\epsilon}^p(Z)$, its derivative with respect to $Z$ is calculated explicitly and equated with zero in Appendix B.2. The solution of the resulting equation reveals that $Z_{\mathrm{opt}} = \sum_{i=1}^d(\sigma_i^2(X)+\epsilon^2)^{\frac{p-2}{2}}u_iv_i^* =: \sum_{i=1}^d\widetilde\sigma_iu_iv_i^*$ is the only critical point and consequently the unique global minimizer of $f_{X,\epsilon}^p(Z)$. We define the matrices $W_{\mathrm{opt}}^L := \sum_{i=1}^d\widetilde\sigma_iu_iu_i^*$ and $W_{\mathrm{opt}}^R := \sum_{i=1}^d\widetilde\sigma_iv_iv_i^*$, and note that $\widetilde W(Z_{\mathrm{opt}}) = 2\big[(W_{\mathrm{opt}}^R)^{-1}\oplus(W_{\mathrm{opt}}^L)^{-1}\big]^{-1}$ with Definition 13. To verify the second part of the theorem, we simply plug the optimal solution $Z_{\mathrm{opt}}$ into the functional $J_p$ and compute using (56) that
(56) that
d
d
p
p
ϵ 2p Õ
2 − p Õ p−2
kX vec k`2 (W
+
e
σ
+
e
σ
i
2 f(Z opt ))
2
2 i=1
2 i=1 i
!
#
"
d2 Õ
d1 u u ∗ ⊗ v v ∗
d
d
d
p
Õ
j
k
pÕ 2
ϵ 2p Õ
2 − p Õ p−2
j
k
=
(u
⊗
v
)
+
e
σ
+
e
σ
σi (X )(ui∗ ⊗ vi∗ )2
i
i
i
2 i=1
2 i=1
2 i=1 i
e
σk−1 + e
σ j−1
k =1 j=1
Jp (X , ϵ, Z opt ) =
ii
d
d
p
pÕ 2
2 − p Õ p−2
=
(σi (X ) + ϵ 2 )e
σi +
e
σi
2 i=1
2 i=1
=
=
d
d
p−2
p
pÕ 2
2−p Õ 2
(σi (X ) + ϵ 2 )(σi2 (X ) + ϵ 2 ) 2 +
(σi (X ) + ϵ 2 ) 2
2 i=1
2 i=1
d
Õ
i=1
B.2
p
(σi2 (X ) + ϵ 2 ) 2 .
p
Proof of Lemma 14: Critical points of fX ,ϵ
Let us without loss of generality consider the case $d = d_1 = d_2$ and define
$$\Omega = \{Z \in M_{d\times d}\ \text{s.t.}\ \operatorname{rank}(Z) = d\}.$$
As already mentioned in (27), the harmonic mean matrix $\widetilde W(Z)$ can then be rewritten as
$$\widetilde W(Z) = 2\widetilde W_1\big(\widetilde W_1+\widetilde W_2\big)^{-1}\widetilde W_2 = 2\big(\widetilde W_1^{-1}+\widetilde W_2^{-1}\big)^{-1}$$
for $Z \in \Omega$ with the definitions $\widetilde W_1 := I_d\otimes(ZZ^*)^{\frac12}$ and $\widetilde W_2 := (Z^*Z)^{\frac12}\otimes I_d$. For $Z \in \Omega$, we reformulate the auxiliary functional such that
$$f_{X,\epsilon}^p(Z) = J_p(X,\epsilon,Z) = \frac p2\|X_{\mathrm{vec}}\|^2_{\ell_2(\widetilde W(Z))} + \frac{\epsilon^2p}{2}\sum_{i=1}^d\sigma_i(Z) + \frac{2-p}{2}\sum_{i=1}^d\sigma_i(Z)^{\frac{p}{p-2}} = \frac p2\|X_{\mathrm{vec}}\|^2_{\ell_2(\widetilde W(Z))} + \frac{\epsilon^2p}{2}\big\|(Z^*Z)^{\frac14}\big\|_F^2 + \frac{2-p}{2}\big\|(ZZ^*)^{\frac{p}{4(p-2)}}\big\|_F^2.$$
To identify the set of critical points of $f_{X,\epsilon}^p(Z)$ located in $\Omega$, we compute its derivative with respect to $Z$ using the derivative rules (7), (12), (13), (15), (16), (18), (20) in Chapter 8.2 and Theorem 3 in Chapter 8.4 of [MN99] in the following. Using the notation of [MN99], we calculate
$$\partial f_{X,\epsilon}^p(Z) = -\frac p2\operatorname{tr}\big(X_{\mathrm{vec}}^*\widetilde W\,\partial\widetilde W^{-1}\,\widetilde WX_{\mathrm{vec}}\big) + \frac{p\epsilon^2}{4}\Big[\operatorname{tr}\big(Z^*(Z^*Z)^{-\frac12}\partial Z\big)+\operatorname{tr}\big((Z^*Z)^{-\frac12}Z^*\partial Z^*\big)\Big] - \frac p4\Big[\operatorname{tr}\big(Z^*(Z^*Z)^{\frac{4-p}{2(p-2)}}\partial Z\big)+\operatorname{tr}\big((Z^*Z)^{\frac{4-p}{2(p-2)}}Z^*\partial Z^*\big)\Big], \tag{58}$$
where
$$\partial\widetilde W^{-1} = \frac12\,\partial\Big[(ZZ^*)^{-\frac12}\oplus(Z^*Z)^{-\frac12}\Big] = -\frac14\Big[\big((Z^*Z)^{-\frac32}Z^*\partial Z+\partial Z^*Z(Z^*Z)^{-\frac32}\big)\otimes I_{d}\Big] - \frac14\Big[I_{d}\otimes\big(\partial Z\,Z^*(ZZ^*)^{-\frac32}+(ZZ^*)^{-\frac32}Z\,\partial Z^*\big)\Big].$$
We can reformulate the first term as follows using the cyclicity of the trace,
$$\begin{aligned}
-\frac p2\operatorname{tr}\big(X_{\mathrm{vec}}^*\widetilde W\,\partial\widetilde W^{-1}\,\widetilde WX_{\mathrm{vec}}\big) = \frac p8\Big[&\operatorname{tr}\big((\widetilde WX_{\mathrm{vec}})_{\mathrm{mat}}^*(\widetilde WX_{\mathrm{vec}})_{\mathrm{mat}}(Z^*Z)^{-\frac32}Z^*\,\partial Z\big)\\
+&\operatorname{tr}\big(Z(Z^*Z)^{-\frac32}(\widetilde WX_{\mathrm{vec}})_{\mathrm{mat}}^*(\widetilde WX_{\mathrm{vec}})_{\mathrm{mat}}\,\partial Z^*\big)\\
+&\operatorname{tr}\big(Z^*(ZZ^*)^{-\frac32}(\widetilde WX_{\mathrm{vec}})_{\mathrm{mat}}(\widetilde WX_{\mathrm{vec}})_{\mathrm{mat}}^*\,\partial Z\big)\\
+&\operatorname{tr}\big((\widetilde WX_{\mathrm{vec}})_{\mathrm{mat}}(\widetilde WX_{\mathrm{vec}})_{\mathrm{mat}}^*(ZZ^*)^{-\frac32}Z\,\partial Z^*\big)\Big].
\end{aligned}$$
To determine the critical points of $f_{X,\epsilon}^p(Z)$, we summarize the calculations above, rearrange the terms and equate the derivative with zero, such that
$$\begin{aligned}
\partial f_{X,\epsilon}^p(Z) &= \frac p8\operatorname{tr}\Big(\Big[(\widetilde WX_{\mathrm{vec}})_{\mathrm{mat}}^*(\widetilde WX_{\mathrm{vec}})_{\mathrm{mat}}(Z^*Z)^{-\frac32}Z^* + Z^*(ZZ^*)^{-\frac32}(\widetilde WX_{\mathrm{vec}})_{\mathrm{mat}}(\widetilde WX_{\mathrm{vec}})_{\mathrm{mat}}^*\\
&\qquad\qquad + 2\epsilon^2(Z^*Z)^{-\frac12}Z^* - 2(Z^*Z)^{\frac{4-p}{2(p-2)}}Z^*\Big]\partial Z\Big)\\
&\quad + \frac p8\operatorname{tr}\Big(\Big[Z(Z^*Z)^{-\frac32}(\widetilde WX_{\mathrm{vec}})_{\mathrm{mat}}^*(\widetilde WX_{\mathrm{vec}})_{\mathrm{mat}} + (\widetilde WX_{\mathrm{vec}})_{\mathrm{mat}}(\widetilde WX_{\mathrm{vec}})_{\mathrm{mat}}^*(ZZ^*)^{-\frac32}Z\\
&\qquad\qquad + 2\epsilon^2Z(Z^*Z)^{-\frac12} - 2Z(Z^*Z)^{\frac{4-p}{2(p-2)}}\Big]\partial Z^*\Big)\\
&=: \frac p8\operatorname{tr}(A\,\partial Z) + \frac p8\operatorname{tr}(A^*\,\partial Z^*) = \frac p8\operatorname{tr}\big((A\oplus A)\partial Z\big) = 0,
\end{aligned}$$
where
$$A = (\widetilde WX_{\mathrm{vec}})_{\mathrm{mat}}^*(\widetilde WX_{\mathrm{vec}})_{\mathrm{mat}}(Z^*Z)^{-\frac32}Z^* + Z^*(ZZ^*)^{-\frac32}(\widetilde WX_{\mathrm{vec}})_{\mathrm{mat}}(\widetilde WX_{\mathrm{vec}})_{\mathrm{mat}}^* + 2\epsilon^2(Z^*Z)^{-\frac12}Z^* - 2(Z^*Z)^{\frac{4-p}{2(p-2)}}Z^*, \tag{59}$$
and hence an easy calculation as in [Duc] gives
$$\frac{\partial f_{X,\epsilon}^p(Z)}{\partial Z} = \frac p8\,\frac{\partial\operatorname{tr}\big((A\oplus A)\partial Z\big)}{\partial Z} = \frac p8(A\oplus A) = 0.$$
Now we have to find Z such that A ⊕ A = 0. This implies that all eigenvalues of A ⊕ A =
A ⊗ Id + Id ⊗ A are equal to zero. The eigenvalues of the Kronecker sum of two matrices A1 and A2
with eigenvalues λs and µ t with s, t ∈ [d] are the sum of the eigenvalues λs + µ t . As in our case
A = A1 = A2 this means that all eigenvalues of A itself have to be zero. This is only possible if A is
the zero matrix.
Let $Z = U\Sigma V^* \in M_{d\times d}$ with $U, V \in \mathcal U_d$ and $\Sigma \in M_{d\times d}$, where $\Sigma = \operatorname{diag}(\sigma)$ is a diagonal matrix with ascending entries. We define the matrix $H$ with $H_{i,j} = \frac{2}{\sigma_i^{-1}+\sigma_j^{-1}}$ for $i = 1,\dots,d$, $j = 1,\dots,d$, corresponding to the result of reshaping the diagonal of $2\big(\Sigma^{-1}\oplus\Sigma^{-1}\big)^{-1}$ into a $d\times d$-matrix. Using (57), we can express $(\widetilde WX_{\mathrm{vec}})_{\mathrm{mat}} = U\big(H\circ(U^*XV)\big)V^*$ and denote $B := H\circ(U^*XV)$.

Plugging the decomposition $Z = U\Sigma V^*$ into (59), we can therefore calculate
$$\begin{aligned}
A = 0 &\Leftrightarrow (UBV^*)^*(UBV^*)(V\Sigma^2V^*)^{-\frac32}(U\Sigma V^*)^* + (U\Sigma V^*)^*(U\Sigma^2U^*)^{-\frac32}(UBV^*)(UBV^*)^*\\
&\qquad\quad + 2\epsilon^2(V\Sigma^2V^*)^{-\frac12}(U\Sigma V^*)^* - 2(V\Sigma^2V^*)^{\frac{4-p}{2(p-2)}}(U\Sigma V^*)^* = 0\\
&\Leftrightarrow VB^*B\Sigma^{-2}U^* + V\Sigma^{-2}BB^*U^* + 2\epsilon^2VI_dU^* - 2V\Sigma^{\frac{2}{p-2}}U^* = 0\\
&\Leftrightarrow B^*B\Sigma^{-2} + \Sigma^{-2}BB^* + 2\epsilon^2I_d - 2\Sigma^{\frac{2}{p-2}} = 0.
\end{aligned}\tag{60}$$
We now note that $2\epsilon^2I_d - 2\Sigma^{\frac{2}{p-2}}$ is diagonal and therefore, $B^*B\Sigma^{-2} + \Sigma^{-2}BB^*$ is diagonal as well. Moreover, observe that $B^*B + \Sigma^{-2}BB^*\Sigma^2$ is again a diagonal matrix and has a symmetric first summand $B^*B$. As the sum or difference of symmetric matrices is again symmetric, also the second summand $\Sigma^{-2}BB^*\Sigma^2$ has to be symmetric, i.e., $\Sigma^{-2}BB^*\Sigma^2 = (\Sigma^{-2}BB^*\Sigma^2)^* = \Sigma^2BB^*\Sigma^{-2}$. We conclude that it has to hold that $BB^*\Sigma^4 = \Sigma^4BB^*$ and hence $\Sigma^4$ and $BB^*$ commute.
This is only possible if either Σ is a multiple of the identity or if BB ∗ is diagonal. Assuming the
first case, (60) would imply that also BB ∗ and B ∗ B have to be a multiple of the identity. Therefore, this
first case, where Σ is a multiple of the identity is a special case of the second possible scenario, where
BB ∗ is diagonal. Hence, it suffices to further consider the more general second case. (Considerations
for B ∗ B can be carried out analogously.)
Diagonality of BB ∗ only occurs if B is either orthonormal or diagonal. Assuming orthonormality
would lead to contradictions with the equations in (60). Hence B = H ◦(U ∗XV ) can only be diagonal.
Let now $X = \bar U\bar S\bar V^*$ be the singular value decomposition of $X$. As $H$ has no zero entries due to the full rank of $W$, this implies the diagonality of $U^*\bar U\bar S\bar V^*V$. Consequently, $U$ and $V$ can only be chosen such that $P = [U^*\bar U]_{d\times d}$ and $P^* = [\bar V^*V]_{d\times d}$ for a permutation matrix $P \in \mathcal U_d$. The reshuffled indexing corresponding to $P$ is denoted by $p(i) \in [d]$ for $i \in [d]$. Having in mind that $H_{ii} = \sigma_i$ for $i \in [d]$, we obtain
$$\big(H\circ(P\bar SP^*)\big)^*\big(H\circ(P\bar SP^*)\big)\Sigma^{-2} + \Sigma^{-2}\big(H\circ(P\bar SP^*)\big)\big(H\circ(P\bar SP^*)\big)^* + 2\epsilon^2I_d - 2\Sigma^{\frac{2}{p-2}} = 0$$
$$\Leftrightarrow\quad 2\bar s_{p(i)}^2 + 2\epsilon^2 = 2\sigma_i^{\frac{2}{p-2}}\ \text{ for all } i\in[d]\quad\Leftrightarrow\quad \sigma_i = \big(\bar s_{p(i)}^2+\epsilon^2\big)^{\frac{p-2}{2}}\ \text{ for all } i\in[d].$$
As the diagonal of $\Sigma$ was assumed to have ascending entries and the diagonal of $\bar S$ has descending entries, the permutation matrix $P$ has to be equal to the identity matrix. From $P = I_d$, it follows that $U = \bar U$ and $V = \bar V$ and hence $\Sigma = (\bar S^2+\epsilon^2I_d)^{\frac{p-2}{2}}$. We summarize our calculations by stating that
$$Z_{\mathrm{opt}} = \bar U\Sigma\bar V^* = \bar U\big(\bar S^2+\epsilon^2I_d\big)^{\frac{p-2}{2}}\bar V^*$$
is the only critical point of $f_{X,\epsilon}^p$ on the domain $\Omega$.
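This closed form can be sanity-checked numerically. The sketch below is our own illustration, not the authors' code; it assumes the square full-rank case and the definition of $J_p$ used in this appendix, i.e. $J_p(X,\epsilon,Z) = \frac p2\|X_{\mathrm{vec}}\|^2_{\ell_2(\widetilde W(Z))} + \frac{\epsilon^2p}{2}\sum_i\sigma_i(Z) + \frac{2-p}{2}\sum_i\sigma_i(Z)^{p/(p-2)}$ with $\widetilde W(Z)^{-1} = \frac12\big[(ZZ^*)^{-1/2}\oplus(Z^*Z)^{-1/2}\big]$:

```python
import numpy as np

def J_p(X, eps, Z, p):
    """J_p(X, eps, Z) for square full-rank Z, with harmonic-mean weight W~(Z)."""
    d = X.shape[0]
    U, s, Vt = np.linalg.svd(Z)
    # W~(Z)^{-1} = (1/2)[(ZZ*)^{-1/2} (+) (Z*Z)^{-1/2}]  (Kronecker sum)
    ZZ_m12 = U @ np.diag(1.0/s) @ U.T
    ZtZ_m12 = Vt.T @ np.diag(1.0/s) @ Vt
    Winv = 0.5*(np.kron(np.eye(d), ZZ_m12) + np.kron(ZtZ_m12, np.eye(d)))
    xvec = X.flatten(order='F')
    quad = xvec @ np.linalg.solve(Winv, xvec)     # x* W~(Z) x
    return 0.5*p*quad + 0.5*eps**2*p*np.sum(s) + 0.5*(2-p)*np.sum(s**(p/(p-2)))

rng = np.random.default_rng(4)
d, p, eps = 4, 0.5, 0.1
X = rng.standard_normal((d, d))
U, sx, Vt = np.linalg.svd(X)
Z_opt = U @ np.diag((sx**2 + eps**2)**((p-2)/2)) @ Vt
val = J_p(X, eps, Z_opt, p)
# value at the critical point equals sum_i (sigma_i(X)^2 + eps^2)^{p/2}
assert np.isclose(val, np.sum((sx**2 + eps**2)**(p/2)))
# random full-rank perturbations of Z_opt do not improve the value
for _ in range(20):
    assert J_p(X, eps, Z_opt + 0.05*rng.standard_normal((d, d)), p) >= val - 1e-8
```

Both checks reflect the two claims above: the closed-form value $\sum_i(\sigma_i^2(X)+\epsilon^2)^{p/2}$ and the global minimality of $Z_{\mathrm{opt}}$.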
The results extend to the case $d_1 \neq d_2$, where the definition of $\widetilde W(Z)$ is adapted by introducing the Moore–Penrose pseudoinverse of $(ZZ^*)^{\frac12}$:
$$\widetilde W(Z) = 2\widetilde W_1\big(\widetilde W_1+\widetilde W_2\big)^{-1}\widetilde W_2 = 2\big(\widetilde W_1^{+}+\widetilde W_2^{-1}\big)^{-1}.$$
The corresponding derivative rule as pointed out in [MN99, Chapter 8.4, Theorem 5] can be used for the calculation in (58).
B.3
Proof of Lemma 16
The equality of the optimization problems (32) can easily be seen by the fact that only the first summand of $J_p(X,\epsilon,Z)$ depends on $X$. Now, it is important to show first that $\widetilde W(Z) = 2\big([(Z^*Z)^{\frac12}]^+\oplus[(ZZ^*)^{\frac12}]^+\big)^{-1}$ is positive definite, as minimizing $J_p(X,\epsilon,Z)$ then reduces to minimizing a quadratic form. Let $Z = \sum_{i=1}^d\sigma_iu_iv_i^*$, where $u_i, v_i$ for $i\in[d]$ are the left and right singular vectors, respectively, and $\sigma_i$ for $i\in[d]$ are the singular values of $Z$. Since $Z^*Z = \sum_{i=1}^d\sigma_i^2v_iv_i^* \succeq 0$, also the generalized inverse root fulfills $[(Z^*Z)^{\frac12}]^+ \succeq 0$, and for $ZZ^* = \sum_{i=1}^d\sigma_i^2u_iu_i^* \succeq 0$, it follows that $[(ZZ^*)^{\frac12}]^+ \succeq 0$. We stress that at least one of the matrices $(ZZ^*)^{\frac12}$ and $(Z^*Z)^{\frac12}$ is positive definite and hence also $\widetilde W(Z) \succ 0$. With the fact that $\widetilde W(Z) \succ 0$, the statement can be proven analogously to the results in [FRW11, Lemma 5.1].
B.4
Proof of Lemma 17
(a) With the minimization property that defines $X^{(n+1)}$ in (29), the inequality $\epsilon^{(n+1)} \le \epsilon^{(n)}$, and the minimization property that defines $Z^{(n+1)}$ in (28) and Lemma 14, the monotonicity follows from
$$J_p(X^{(n)},\epsilon^{(n)},Z^{(n)}) \ge J_p(X^{(n+1)},\epsilon^{(n)},Z^{(n)}) \ge J_p(X^{(n+1)},\epsilon^{(n+1)},Z^{(n)}) \ge J_p(X^{(n+1)},\epsilon^{(n+1)},Z^{(n+1)}).$$

(b) Using Lemma 14 and the monotonicity property of (a) for all $n \in \mathbb N$, we see that
$$\|X^{(n)}\|_{S_p}^p \le g_{\epsilon^{(n)}}^p(X^{(n)}) = J_p(X^{(n)},\epsilon^{(n)},Z^{(n)}) \le J_p(X^{(1)},\epsilon^{(0)},Z^{(0)}).$$

(c) The proof follows analogously to [FRW11, Proposition 6.1], where only the technical calculation to bound $\sigma_1\big((\widetilde W^{(n)})^{-1}\big)$ requires taking into account that the spectrum of a Kronecker sum $A\oplus B$ consists of the pairwise sums of the spectra of $A$ and $B$ [Ber09, Proposition 7.2.3].
B.5
Proof of Lemma 18
The first statement $\widetilde W(X^{(n)},\epsilon^{(n)}) = \widetilde W^{(n)}$ is clear from the definition of $\widetilde W(X,\epsilon)$ and (10). To show the necessity of (35), let $X \in M_{d_1\times d_2}$ be a critical point of (34). Without loss of generality, let us assume that $d_1 \le d_2$. In this case, a short calculation shows that $g_\epsilon^p(X) = \operatorname{tr}\big((XX^*+\epsilon^2I_{d_1})^{p/2}\big)$. It follows from the matrix derivative rules of [MN99, Chapter 8.2, (7), (15), (18) and (20)] that
$$\nabla g_\epsilon^p(X) = p\big(XX^*+\epsilon^2I_{d_1}\big)^{\frac{p-2}{2}}X = p\sum_{i=1}^{d}\big(\sigma_i^2+\epsilon^2\big)^{\frac{p-2}{2}}\sigma_iu_iv_i^*,$$
using the singular value decomposition $X = \sum_{i=1}^d\sigma_iu_iv_i^*$ in the last equality. Using the Kronecker sum inversion formula (56), we see that $\nabla g_\epsilon^p(X) = p\big[\widetilde W(X,\epsilon)X_{\mathrm{vec}}\big]_{\mathrm{mat}}$. The proof can be continued analogously to [DDFG10, Lemma 5.2].
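The gradient formula above can be verified by finite differences. The following sketch is our own illustration; it assumes real matrices and $d_1 \le d_2$ as in the lemma:

```python
import numpy as np

def g(X, eps, p):
    """g_eps^p(X) = sum_i (sigma_i(X)^2 + eps^2)^{p/2}  (for d1 <= d2)."""
    s = np.linalg.svd(X, compute_uv=False)
    return np.sum((s**2 + eps**2)**(p/2))

def grad_g(X, eps, p):
    """Gradient p (XX* + eps^2 I)^{(p-2)/2} X via eigendecomposition of XX*."""
    d1 = X.shape[0]
    w, Q = np.linalg.eigh(X @ X.T + eps**2*np.eye(d1))
    return p * (Q @ np.diag(w**((p-2)/2)) @ Q.T) @ X

rng = np.random.default_rng(5)
X = rng.standard_normal((3, 5))
eps, p = 0.3, 0.7
G = grad_g(X, eps, p)
# finite-difference check of the directional derivative in random directions
for _ in range(5):
    D = rng.standard_normal(X.shape)
    h = 1e-6
    fd = (g(X + h*D, eps, p) - g(X - h*D, eps, p)) / (2*h)
    assert abs(fd - np.sum(G*D)) < 1e-5
```

The agreement of the finite differences with $\langle\nabla g_\epsilon^p(X), D\rangle_F$ confirms the derivative rule used in the proof.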
C
Proof of Theorem 9
For statement (i) of the convergence result of Algorithm 1, we use the following reverse triangle inequalities implied by the strong Schatten-$p$ NSP: Let $X, X' \in M_{d_1\times d_2}$ be such that $\Phi(X-X') = 0$. Then
$$\|X'-X\|_F^p \le \frac{2^p\gamma_r^{1-p/2}}{r^{1-p/2}(1-\gamma_r)}\Big(\|X\|_{S_p}^p - \|X'\|_{S_p}^p + 2\beta_r(X')_{S_p}^p\Big), \tag{61}$$
where $\beta_r(X')_{S_p}$ is defined in (22). This inequality can be proven using an adaptation of the proof of the corresponding result for $\ell_p$-minimization in [GPYZ15, Theorem 13] and the generalization of Mirsky's singular value inequality to concave functions [Aud14, Fou18]. Furthermore, the proof of the similar statement in [KKRT16, Theorem 12] can be adapted to show (61).

The further part of the proof of (i) as well as (ii) follow analogously to [FRW11, Theorem 6.11] and [DDFG10, Theorem 5.3], using the preliminary results deduced in Section 6. Statement (iii) is a direct consequence of Theorem 11, which is proven in Section 6.3.
References
[AR15] A. Ahmed and J. Romberg. Compressive multiplexing of correlated signals. IEEE Trans.
Inf. Theory, 61(1):479–498, 2015.
[Aud14] K. M. R. Audenaert. A generalisation of Mirsky’s singular value inequalities. preprint,
arXiv:1410.4941 [math.FA], 2014.
[Ber09] D. S. Bernstein. Matrix Mathematics: Theory, Facts, and Formulas (Second Edition).
Princeton University Press, 2009.
[BNS16] S. Bhojanapalli, B. Neyshabur, and N. Srebro. Global optimality of local search for low
rank matrix recovery. In Advances in Neural Information Processing Systems (NIPS),
pages 3873–3881, 2016.
[BTW15] J. D. Blanchard, J. Tanner, and K. Wei. CGIHT: conjugate gradient iterative hard thresholding for compressed sensing and matrix completion. Inf. Inference, 4(4):289–327, 2015.
[CDK15] J. A. Chavez-Dominguez and D. Kutzarova. Stability of low-rank matrix recovery and
its connections to Banach space geometry. J. Math. Anal. Appl., 427(1):320–335, 2015.
[CESV13] E. J. Candès, Y. Eldar, T. Strohmer, and V. Voroninski. Phase retrieval via matrix completion. SIAM J. Imag. Sci., 6(1):199–225, 2013.
[Cha07] R. Chartrand. Exact reconstructions of sparse signals via nonconvex minimization. IEEE
Signal Process. Lett., 14:707–710, 2007.
[CLS15] E. J. Candès, X. Li, and M. Soltanolkotabi. Phase retrieval via Wirtinger flow: Theory
and algorithms. IEEE Trans. Inf. Theory, 61(4):1985–2007, 2015.
[CP11] E. J. Candès and Y. Plan. Tight oracle inequalities for low-rank matrix recovery from a
minimal number of noisy random measurements. IEEE Trans. Inf. Theory, 57(4):2342–
2359, April 2011.
[CR09] E. J. Candès and B. Recht. Exact matrix completion via convex optimization. Found.
Comput. Math., 9(6):717–772, 2009.
[CSV13] E. J. Candès, T. Strohmer, and V. Voroninski. PhaseLift: Exact and Stable Signal Recovery
from Magnitude Measurements via Convex Programming. Commun. Pure Appl. Math.,
66(8):1241–1274, 2013.
[Dac89] B. Dacorogna. Direct Methods in the Calculus of Variations. Springer, New York, 1989.
[DDFG10] I. Daubechies, R. DeVore, M. Fornasier, and C.S. Güntürk. Iteratively reweighted least
squares minimization for sparse recovery. Commun. Pure Appl. Math., 63:1–38, 2010.
[DGM13] D. L. Donoho, M. Gavish, and A. Montanari. The phase transition of matrix recovery
from Gaussian measurements matches the minimax MSE of matrix denoising. Proc. Nat.
Acad. Sci. U.S.A., 110(21):8405–8410, 2013.
[DR16] M. A. Davenport and J. Romberg. An overview of low-rank matrix recovery from incomplete observations. IEEE J. Sel. Topics Signal Process., 10:608–622, 06 2016.
[Duc] J. Duchi. Properties of the Trace and Matrix Derivatives. Available electronically at https://web.stanford.edu/~jduchi/projects/matrix_prop.pdf.
[ENP12] Y.C. Eldar, D. Needell, and Y. Plan. Uniqueness conditions for low-rank matrix recovery.
Appl. Comput. Harmon. Anal., 33(2):309–314, 2012.
[Faz02] M. Fazel. Matrix rank minimization with applications. Ph.D. Thesis, Electrical Engineering Department, Stanford University, 2002.
[Fou18] S. Foucart. Concave Mirsky Inequality and Low-Rank Recovery. SIAM J. Matrix Anal.
Appl., 39(1):99–103, 2018.
[FPRW16] M. Fornasier, S. Peter, H. Rauhut, and S. Worm. Conjugate gradient acceleration of
iteratively re-weighted least squares methods. Comput. Optim. Appl., 65(1):205–259,
2016.
[FR13] S. Foucart and H. Rauhut. A Mathematical Introduction to Compressive Sensing. Applied
and Numerical Harmonic Analysis. Birkhäuser/Springer, New York, 2013.
[FRW11] M. Fornasier, H. Rauhut, and R. Ward. Low-rank matrix recovery via iteratively reweighted least squares minimization. SIAM J. Optim., 21(4):1614–1640, 2011. [code from https://github.com/rward314/IRLSM].
[GB14] M. Grant and S. Boyd. CVX: Matlab software for disciplined convex programming,
version 2.1. http://cvxr.com/cvx, March 2014.
[GGK00] I. Gohberg, S. Goldberg, and N. Krupnik. Traces and determinants of linear operators,
volume 116 of Operator Theory: Advances and Applications. Birkhäuser, Basel, 2000.
[GKK15] D. Gross, F. Krahmer, and R. Kueng. A partial derandomization of phaselift using spherical designs. J. Fourier Anal. Appl., 21(2):229–266, 2015.
[GLF+ 10] D. Gross, Y.-K. Liu, S. T. Flammia, S. Becker, and J. Eisert. Quantum state tomography
via compressed sensing. Phys. Rev. Lett., 105:150401, 2010.
[GLM16] R. Ge, J. D. Lee, and T. Ma. Matrix completion has no spurious local minimum. In
Advances in Neural Information Processing Systems (NIPS), pages 2973–2981, 2016.
[GNOT92] D. Goldberg, D. Nichols, B. M. Oki, and D. Terry. Using collaborative filtering to weave
an information tapestry. Commun. ACM, 35(12):61–70, 1992.
[GPYZ15] Y. Gao, J. Peng, S. Yue, and Y. Zhao. On the null space property of `q -minimization for
0 < q ≤ 1 in compressed sensing. J. Funct. Spaces, 2015:4203–4215, 2015.
[Gro11] D. Gross. Recovering low-rank matrices from few coefficients in any basis. IEEE Trans.
Inf. Theory, 57(3):1548–1566, 2011.
[HH09] J. P. Haldar and D. Hernando. Rank-constrained solutions to linear matrix equations
using powerfactorization. IEEE Signal Process. Lett., 16(7):584–587, July 2009. [using
AltMin (Alternating Minimization) algorithm].
44
[HMT11] N. Halko, P. G. Martinsson, and J. A. Tropp. Finding structure with randomness: Probabilistic algorithms for constructing approximate matrix decompositions. SIAM Rev.,
53(2):217–288, 2011.
[Jam68] A. Jameson. Solution of the equation ax + xb = c by inversion of an m x m or n x n
matrix. SIAM J. Appl. Math., 16(5):1020–1023, 1968.
[JMD10] P. Jain, Raghu M., and I. S. Dhillon. Guaranteed rank minimization via singular value
projection. In Advances in Neural Information Processing Systems (NIPS), pages 937–945,
2010.
[JNS13] P. Jain, P. Netrapalli, and S. Sanghavi. Low-rank matrix completion using alternating
minimization. In Proc. ACM Symp. Theory Comput. (STOC), pages 665–674, Palo Alto,
CA, USA, June 2013.
[KC14] A. Kyrillidis and V. Cevher. Matrix recipes for hard thresholding methods. J. Math.
Imaging Vision, 48(2):235–265, 2014. [using Matrix ALPS II (’Matrix ALgrebraic PursuitS II’) algorithm, code from http://akyrillidis.github.io/
projects/].
[KKRT16] M. Kabanava, R. Kueng, H. Rauhut, and U. Terstiege. Stable low-rank matrix recovery
via null space properties. Inf. Inference, 5(4):405–441, 2016.
[KS17] C. Kümmerle and J. Sigl. Harmonic Mean Iteratively Reweighted Least Squares for
low-rank matrix recovery. In 12th International Conference on Sampling Theory and
Applications (SampTA), pages 489–493, 2017.
[KTT15] F. J. Király, L. Theran, and R. Tomioka. The Algebraic Combinatorial Approach for LowRank Matrix Completion. J. Mach. Learn. Res., 16:1391–1436, 2015.
[LHV13] Z. Liu, A. Hansson, and L. Vandenberghe. Nuclear norm system identification with
missing inputs and outputs. Systems Control Lett., 62(8):605–612, 2013.
[LV10] Z. Liu and L. Vandenberghe. Interior-point method for nuclear norm approximation
with application to system identification. SIAM J. Matrix Anal. Appl., 31(3):1235–1256,
2010.
[MF12] K. Mohan and M. Fazel. Iterative reweighted algorithms for matrix rank minimization.
J. Mach. Learn. Res., 13(1):3441–3473, 2012. [using IRLS-MF (’IRLS-p’) algorithm, code
from https://faculty.washington.edu/mfazel/].
[MMBS13] B. Mishra, G. Meyer, F. Bach, and R. Sepulchre. Low-rank optimization with trace norm
penalty. SIAM J. Optim., 23(4):2124–2149, 2013.
[MN99] J.R. Magnus and H. Neudecker. Matrix Differential Calculus with Applications in Statistics
and Econometrics. Wiley Series in Probability and Statistics. Wiley, 1999.
[OMFH11] S. Oymak, K. Mohan, M. Fazel, and B. Hassibi. A simplified approach to recovery conditions for low rank matrices. In Proceedings of the IEEE International Symposium on
Information Theory (ISIT), pages 2318–2322, 2011.
45
[PABN16] D.L. Pimentel-Alarcón, N. Boston, and R. D. Nowak. A Characterization of Deterministic Sampling Patterns for Low-Rank Matrix Completion. preprint, arXiv:1503.02596v3
[stat.ML], October 2016.
[PKCS16] D. Park, A. Kyrillidis, C. Caramanis, and S. Sanghavi.
Finding Low-rank Solutions to Matrix Problems, Efficiently and Provably. preprint, arXiv:1606.03168
[math.OC], [using BFGD (’Bi-Factored Gradient Descent’) algorithm, code from http:
//akyrillidis.github.io/projects/], 2016.
[RFP10] B. Recht, M. Fazel, and P. A. Parrilo. Guaranteed minimum-rank solutions of linear
matrix equations via nuclear norm minimization. SIAM Rev., 52(3):471–501, 2010.
[RXH11] B. Recht, W. Xu, and B. Hassibi. Null space conditions and thresholds for rank minimization. Math. Program., 127(1):175–202, 2011.
[SL16] R. Sun and Z. Q. Luo. Guaranteed matrix completion via non-convex factorization. IEEE
Trans. Inf. Theory, 62(11):6535–6579, 2016.
[SRJ05] N. Srebro, J. Rennie, and T. S. Jaakkola. Maximum-margin matrix factorization. In
Advances in Neural Information Processing Systems (NIPS), pages 1329–1336, 2005.
[SS16] É. Schost and P.-J. Spaenlehauer. A quadratically convergent algorithm for structured
low-rank approximation. Found. Comput. Math., 16(2):457–492, 2016.
[Ste06] M. Stewart. Perturbation of the SVD in the presence of small singular values. Linear
Algebra Appl., 419(1):53–77, 2006.
[TBS+ 15] S. Tu, R. Boczar, M. Simchowitz, M. Soltanolkotabi, and B. Recht. Low-rank Solutions
of Linear Matrix Equations via Procrustes Flow. preprint, arXiv:1507.03566 [math.OC],
2015.
[TW13] J. Tanner and K. Wei. Normalized Iterative Hard Thresholding for Matrix Completion.
SIAM J. Sci. Comput., 35(5):S104–S125, 2013.
[TW16] J. Tanner and K. Wei. Low rank matrix completion by alternating steepest descent
methods. Appl. Comput. Harmon. Anal., 40(2):417–429, 2016. [using ASD (’Alternating
Steepest Descent’) algorithm, code from https://www.math.ucdavis.edu/
∼kewei/publications.html].
[Van13] B. Vandereycken.
Low-rank matrix completion by Riemannian optimization.
SIAM J. Optim., 23(2):1214–1236, 2013.
[using Riemann Opt (’Riemannian Optimization’) algorithm, code from http://www.unige.ch/math/
vandereycken/matrix completion.html].
[WCCL16] K. Wei, J.-F. Cai, T. F. Chan, and S. Leung. Guarantees of Riemannian Optimization for
Low Rank Matrix Recovery. SIAM J. Matrix Anal. Appl., 37(3):1198–1222, 2016.
[Wed72] P.-Å. Wedin. Perturbation bounds in connection with singular value decomposition.
BIT, 12(1):99–111, 1972.
46
[WYZ12] Z. Wen, W. Yin, and Y. Zhang. Solving a low-rank factorization model for matrix completion by a nonlinear successive over-relaxation algorithm. Math. Program. Comput.,
4(4):333–361, 2012.
[ZL15] Q. Zheng and J. Lafferty. A convergent gradient descent algorithm for rank minimization and semidefinite programming from random linear measurements. In Advances in
Neural Information Processing Systems (NIPS), pages 109–117, 2015.
47
| 7 |
On the adoption of abductive reasoning for time series
interpretation
T. Teijeiro, P. Félix
arXiv:1609.05632v2 [] 20 Dec 2017
Centro Singular de Investigación en Tecnoloxı́as da Información (CITIUS), University of
Santiago de Compostela, Santiago de Compostela, Spain
Abstract
Time series interpretation aims to provide an explanation of what is observed in
terms of its underlying processes. The present work is based on the assumption
that the common classification-based approaches to time series interpretation
suffer from a set of inherent weaknesses, whose ultimate cause lies in the monotonic nature of the deductive reasoning paradigm. In this document we propose
a new approach to this problem, based on the initial hypothesis that abductive
reasoning properly accounts for the human ability to identify and characterize
the patterns appearing in a time series. The result of this interpretation is a
set of conjectures in the form of observations, organized into an abstraction hierarchy and explaining what has been observed. A knowledge-based framework
and a set of algorithms for the interpretation task are provided, implementing a
hypothesize-and-test cycle guided by an attentional mechanism. As a representative application domain, interpretation of the electrocardiogram allows us to
highlight the strengths of the proposed approach in comparison with traditional
classification-based approaches.
Keywords: Abduction, Interpretation, Time Series, Temporal Abstraction,
Temporal Reasoning, Non-monotonic Reasoning, Signal Abstraction
1. Introduction
The interpretation and understanding of the behavior of a complex system
involves the deployment of a cognitive apparatus aimed at guessing the processes
and mechanisms underlying what is observed. The human ability to recognize
patterns plays a paramount role as an instrument for highlighting evidence which
should require an explanation, by matching information from observations with
information retrieved from memory. Classification naturally arises as a pattern
recognition task, defined as the assignment of observations to categories.
Let us first state precisely the problem under consideration: we wish to interpret the behavior of a complex system by measuring a
physical quantity along time. This quantity is represented as a time series.
Preprint submitted to Artificial Intelligence
December 21, 2017
The Artificial Intelligence community has devoted a great deal of effort to
different paradigms, strategies, methodologies and techniques for time series
classification. Nonetheless, in spite of the wide range of proposals for building
classifiers, either by eliciting domain knowledge or by induction from a set of
observations, the resulting classifier behaves as a deductive system. The present
work is premised on the assumption that some of the important weaknesses of
this approach lie in its deductive nature, and that an abductive approach can
address these shortcomings.
Let us remember that a deduction contains in its conclusions information
that is already implicitly contained in the premises, and thus it is truth-preserving.
In this sense, a classifier ultimately assigns a label or a set of labels to observations. This label can designate a process or a mechanism of the system being
observed, but it is no more than a term that summarizes the premises satisfied
by the observations. Conversely, abduction, or inference to the best explanation,
is a form of inference that goes from data to a hypothesis that best explains
or accounts for the data [21]. Abductive conclusions contain new information
not contained in the premises, and are capable of predicting new evidence, although they are fallible. Abductions are thus truth-widening, and they can
make the leap from the language of observations to the language of the underlying processes and mechanisms, responding to the aforementioned problem in
a natural way [24]. For example, consider a simple rule stating that if a patient
experiences a sudden tachycardia and a decrease in blood pressure, then we can
conclude that the patient is suffering from shock due to a loss of blood volume. From a deductive perspective, loss of blood volume is just a name provided by the rule for
the satisfaction of the two premises. However, from an abductive perspective,
loss of blood volume is an explanatory hypothesis, a conjecture, that expands
the truth contained in the premises, enabling the observer to predict additional
consequences such as, for example, pallid skin, faintness, dizziness or thirst.
Of course, the result of a classifier can be considered as a conjecture, but
always from an external agent, since a classifier is monotonic as a logical system
and its conclusions cannot be refuted from within. Classifier ensembles aim to
overcome the errors of individual classifiers by combining different classification
instances to obtain a better result; thus, a classifier can be amended by others in
the final result of the ensemble. However, even an ensemble represents a bottom-up mapping, and classification invariably fails above a certain level of distortion
within the data. The interpretation and understanding of a complex system
usually unfolds along a set of abstraction layers, where at each layer the temporal
granularity of the representation is reduced from below. A classification strategy
provides an interpretation as the result of connecting a set of classifiers along the
abstraction structure, and the monotonicity of deduction entails a propagation
of errors from the first abstraction layers upwards, narrowing the capability of
making a proper interpretation as new abstraction layers are successively added.
Following an abductive process instead, an observation is conjectured at each
abstraction layer as the best explanatory hypothesis for the data from the layer
or layers below, within the context of information from above, and the non-monotonicity of abduction supports the retraction of any observation at any
abstraction layer in the search for the best global explanation. Thus, bottom-up and top-down processing complement one another and provide a joint result.
As a consequence, abduction can guess the underlying processes from corrupted
data or even in the temporary absence of data.
On the other hand, a classifier is based on the assumption that the underlying processes or mechanisms are mutually exclusive. Superpositions of two or
more processes are excluded; they must be represented by a new process, corresponding to a new category which is different and usually unrelated to previous
ones. Therefore, an artificial casuistry-based heuristics is adopted, increasing
the complexity of the interpretation and reducing its adaptability to the variability of observations. In contrast, abduction can reach a conclusion from the
availability of partial evidence, refining the result by the incremental addition of
new information. This makes it possible to discern different processes just from
certain distinguishable features, and at the end to infer a set of explanations as
far as the available evidence does not allow us to identify the best one, and they
are not incompatible with each other.
In a classifier, the truth of the conclusion follows from the truth of all the
premises, and missing data usually demand an imputation strategy that results
in a conjecture: a sort of abducing to go on deducing. In contrast, an abductive
interpretation is posed as a hypothesize-and-test cycle, in which missing data
is naturally managed, since a hypothesis can be evoked by every single piece of
evidence in isolation and these can be incrementally added to reasoning. This
fundamental property of abduction is well suited to the time-varying requirements of the interpretation of time series, where future data can compel changes
to previous conclusions, and the interpretation task may be requested to provide
the current result as the best explanation at any given time.
Abduction has primarily been proposed for diagnostic tasks [10, 33], but also
for question answering [15], language understanding [22], story comprehension
[6], image understanding [36] or plan recognition [28], amongst others. Some
studies have proposed that perception might rely on some form of abduction.
Even though abductive reasoning has been proven to be NP-complete, a compiled form of abduction based on a set of pre-stored hypotheses could narrow
the generation of hypotheses [24]. The present work takes this assumption as
a starting point and proposes a model-based abductive framework for time series interpretation supported on a set of temporal abstraction patterns. An
abstraction pattern represents a set of constraints that must be satisfied by
some evidence in order to be interpreted as the hypothetical observation of a
certain process, together with an observation procedure providing a set of measurements for the features of the conjectured observation. A set of algorithms is
devised in order to achieve the best explanation through a process of successive
abstraction from raw data, by means of a hypothesize-and-test strategy.
Some previous proposals have adopted a non-monotonic schema for time
series interpretation. The TrenDx system detects significant trends in time series
by matching data to predefined trend patterns [19, 20]. One of these patterns
plays the role of the expected or normal pattern, and the other patterns are
fault patterns. A matching score of each pattern is based on the error between
the pattern and the data. Multiple trend patterns can be maintained as competing hypotheses according to their matching score; as additional data arrive
some of the patterns can be discarded and new patterns can be triggered. This
proposal has been applied to diagnose pediatric growth trends. A similar proposal can be found in [27], taking a step further by providing complex temporal
abstractions, the result of finding out specific temporal relationships between
a set of significant trends. This proposal has been applied to the infectious
surveillance of heart transplanted patients. Another example is the Résumé
system, a knowledge-based temporal abstraction framework [42, 39]. Its goal
is to provide, from time-stamped input data, a set of interval-based temporal
abstractions, distinguishing four output abstraction types: state, gradient, rate
and pattern. It uses a truth maintenance system to retract inferred intervals
that are no longer true, and propagate new abstractions. Furthermore, this
framework includes a non-monotonic interpolation mechanism for trend detection [41]. This proposal has been applied to several clinical domains (protocol-based care, monitoring of children’s growth and therapy of diabetes) and to an
engineering domain (monitoring of traffic control).
The present work includes several examples and results from the domain
of electrocardiography. The electrocardiogram (ECG) is the recording at the
body’s surface of the electrical activity of the heart as it changes with time,
and is the primary method for the study and diagnosis of cardiac disease, since
the processes involved in cardiac physiology manifest in characteristic temporal
patterns on the ECG trace. In other words, a correct reading of the ECG has
the potential to provide valuable insight into cardiac phenomena. Learning to
interpret the ECG involves the acquisition of perceptual skills from an extensive
bibliography with interpretation criteria and worked examples. In particular,
pattern recognition is especially important in order to build a bottom-up representation of cardiac phenomena in multiple abstraction levels. This has encouraged extensive research on classification techniques for interpreting the ECG;
however, in spite of all these efforts, this is still considered an open problem.
We shall try to demonstrate that the problem lies in the nature of deduction
itself.
The rest of this paper is structured as follows: Section 2 introduces the main
concepts and terminology used in the paper in an informal and intuitive way.
Following this, in Sections 3, 4 and 5 we formally describe all the components
of the interpretation framework, including the knowledge representation model
and the algorithms used to obtain effective interpretations within an affordable
time. Section 6 illustrates the capabilities of the framework in overcoming some
of the most important shortcomings of deductive classifiers. Section 7 presents
the main experimental results derived from this work. Finally, in Section 8 we
discuss the properties of the model compared with other related approaches and
draw several conclusions.
2. Interpretation as a process-guessing task
We propose a knowledge-based interpretation framework upon the principles
of abductive reasoning, on the basis of a strategy of hypothesis formation and
testing. Taking as a starting point a time series of physical measurements, a set
of observations are guessed as conjectures of the underlying processes, through
successive levels of abstraction. Each new observation will be generated from
previous levels as the underlying processes aggregate, superimpose or concatenate to form more complex processes with greater duration and scope, and are
organized into an abstraction hierarchy.
The knowledge of the domain is described as a set of abstraction patterns
as follows:
hψ((Ah, T_h^b, T_h^e) = Θ(A1, T1, ..., An, Tn)) abstracts m1(A1, T1), ..., mn(An, Tn)
{C(Ah, T_h^b, T_h^e, A1, T1, ..., An, Tn)}
where hψ(Ah, T_h^b, T_h^e) is an observable of the domain playing the role of a hypothesis on the observation of an underlying process ψ, where Ah represents a set of attributes, and its temporal support is represented by two instants T_h^b and T_h^e, corresponding to the beginning and the end of the observable; m1(A1, T1), . . . , mn(An, Tn) is a set of observables of the domain which plays the role of the evidence suggesting the observation of hψ, where each of these has its own set of attributes Ai and temporal support Ti, represented here as a single instant for the sake of simplicity, although this may also be an interval; C is a set of constraints among the variables involved in the abstraction pattern, which are interpreted as necessary conditions in order for the evidence m1(A1, T1), . . . , mn(An, Tn) to be abstracted into hψ(Ah, T_h^b, T_h^e); Θ(A1, T1, . . . , An, Tn) is an observation procedure that gives as a result an observation of hψ(Ah, T_h^b, T_h^e) from a set of observations for m1(A1, T1), . . . , mn(An, Tn).
To illustrate this concept, consider the sequence of observations in Figure 1.
Each of these observations is an instance of an observable we call point (p),
represented as p(A = {V }, T ), where T determines the temporal location of the
observation and V is a value attribute.
If we analyze these observations visually, we may hypothesize the presence
of an underlying sinusoidal process. Let us define an observable sinus for such
a sinusoidal process, with two attributes: the amplitude of the process (α) and
its frequency (ω). The knowledge necessary to conjecture this hypothesis is
collected in the following abstraction pattern:
hsinus(({α, ω}, T_h^b, T_h^e) = Θ(V1, T1, ..., Vn, Tn)) abstracts p(V1, T1), ..., p(Vn, Tn)
{C(α, ω, Thb , The , V1 , T1 , ..., Vn , Tn )}
[Figure 1 here: a scatter plot of the point observations (Ti, Vi) listed below, with v on the vertical axis (range -30 to 30) and T on the horizontal axis (range 0 to 100).]
V: 3.4, 17.6, 12.9, 2.6, -17.5, -20, -10.5, -0.8, 7.8, 17.5, 19.4, 19.4, 17.6, 7.8, -14.9, -16.3, -11.6, 15.6, 17.1, -15.8, 7.7, 13.7, 2.7, -19.8, -19.6, -3.1, 0.2, 8.1, 9.6, 0
T: 1, 4, 8, 10, 14, 15, 19, 21, 22, 24, 26, 27, 28, 30, 34, 35, 40, 45, 47, 55, 64, 70, 73, 78, 80, 83, 84, 85, 86, 94
Figure 1: Initial temporal observations.

We can estimate the attribute values (α, ω, T_h^b, T_h^e) of this process by a simple observation procedure Θ that calculates α = max(|Vi|), for 1 ≤ i ≤ n, i.e., the amplitude α is obtained as the maximum absolute value of the observations; ω = π/mean(T_j^peak − T_{j−1}^peak), where the T_j^peak are point observations representing a peak, satisfying (V_j^peak = Vk, T_j^peak = Tk) ∧ sign(Vk − V_{k−1}) ≠ sign(V_{k+1} − Vk), so that the frequency ω is obtained as the inverse of the mean temporal separation between consecutive peaks in the sequence of observations; and T_h^b = T1, T_h^e = Tn, i.e., the temporal support of the hypothesis is the time interval between the first and the last evidence points.
We can impose the following constraint C(α, ω, T_h^b, T_h^e, V1, T1, . . . , Vn, Tn) for every pair (Vi, Ti) in the sequence:

|α · sin(ω · Ti) − Vi| ≤ ε.

This constraint provides a model of a sinusoidal process and a measure of how well it fits a set of observations by means of a maximum error ε. Figure 2 shows the continuous representation of the abstracted process, whose resulting observation is hsinus(α = 20, ω = 0.3, T_h^b = 1, T_h^e = 94). A value of α/3 has been chosen for ε.
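The observation procedure Θ and the constraint C for this example can be sketched in a few lines of code. The following is only an illustration under our own naming, not the authors' implementation; note that the naive peak test used here counts the flat top at T = 26, 27 as two peaks, so the frequency estimate lands near 0.34 rather than exactly the 0.3 reported above.

```python
from math import pi, sin
from statistics import mean

# Point observations p(V_i, T_i) from Figure 1.
V = [3.4, 17.6, 12.9, 2.6, -17.5, -20, -10.5, -0.8, 7.8, 17.5,
     19.4, 19.4, 17.6, 7.8, -14.9, -16.3, -11.6, 15.6, 17.1, -15.8,
     7.7, 13.7, 2.7, -19.8, -19.6, -3.1, 0.2, 8.1, 9.6, 0]
T = [1, 4, 8, 10, 14, 15, 19, 21, 22, 24, 26, 27, 28, 30, 34, 35,
     40, 45, 47, 55, 64, 70, 73, 78, 80, 83, 84, 85, 86, 94]

def sign(x):
    return (x > 0) - (x < 0)

def theta_sinus(V, T):
    """Observation procedure Θ: estimate (α, ω, T_h^b, T_h^e)."""
    alpha = max(abs(v) for v in V)
    # A peak is a point where the sign of the first difference changes.
    peaks = [T[k] for k in range(1, len(T) - 1)
             if sign(V[k] - V[k - 1]) != sign(V[k + 1] - V[k])]
    omega = pi / mean(b - a for a, b in zip(peaks, peaks[1:]))
    return alpha, omega, T[0], T[-1]

def consistent(alpha, omega, V, T, eps):
    """Necessary constraint C: every point fits the sinusoid within ε."""
    return all(abs(alpha * sin(omega * t) - v) <= eps for v, t in zip(V, T))

alpha, omega, tb, te = theta_sinus(V, T)
```

With this data the estimated amplitude is exactly 20, and the hypothesis hsinus(α = 20, ω = 0.3, T_h^b = 1, T_h^e = 94) is indeed consistent with the observations for ε = α/3, while a much tighter ε would reject it.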
[Figure 2 here: the abstracted sinusoid α · sin(ω · T), with α = 20 and ω = 0.3, plotted over T ∈ [0, 100] together with the point observations of Figure 1.]
Figure 2: Abstracted sinusoidal process.
Of course, various observation procedures can be devised in order to estimate the same or different characteristics of the process being guessed. These
procedures can provide one or several valid estimations in terms of their consistency with the abovementioned necessary constraints. In addition, different
processes can be guessed from the same set of observations, all of them being
valid in terms of their consistency. Hence, further criteria may be needed in
order to rank the set of interpretations.
This simple example summarizes the common approach to the interpretation of experimental results in science and technology, when the knowledge is
available as a model or a set of models. The challenge is to assume that this
knowledge is not available in an analytical but in a declarative form, as a pattern or a set of patterns, and that the interpretation task is expected to mimic
certain mechanisms of human perception.
3. Definitions
In this section we formally define the main pieces of our interpretation framework: observables and observations for representing the behavior of the system
under study, and abstraction patterns for representing the knowledge about this
system.
3.1. Representation entities
An observation is the result of observing something with the quality of being
observable. We call Q = {q0 , q1 , ..., qn } the set of observables of a particular
domain.
Definition 1. We define an observable as a tuple q = ⟨ψ, A, T^b, T^e⟩, where ψ is a name representing the underlying process being observable, A = {A1, ..., Anq} is a set of attributes to be valued, and T^b and T^e are two temporal variables representing the beginning and the end of the observable.

We call Vq(Ai) the domain of possible values for the attribute Ai. We assume a discrete representation of the time domain τ, isomorphic to the set of natural numbers N. For any observable, we implicitly assume the constraint T^b < T^e. In the case of an instantaneous observable, this is represented as q = ⟨ψ, A, T⟩. Some observables can be dually represented from the temporal perspective, as either an observable supported by a temporal interval or as an observable supported by a temporal instant, according to the task to be carried out. A paradigmatic example is found in representing the heart beat, since it can be represented as a domain entity with a temporal extension comprising its constituent waves, and it can also be represented as an instantaneous entity for measuring heart rate.
Example 3.1. In the ECG signal, several distinctive waveforms can be identified, corresponding to the electrical activation-recovery cycle of the different
heart chambers. The so-called P wave represents the activation of the atria,
and is the first wave of the cardiac cycle. The next group of waves recorded is
the QRS complex, representing the simultaneous activation of the right and left
ventricles. Finally, the wave that represents the ventricular recovery is called
the T wave. Together, these waveforms form the characteristic pattern of the
heart cycle, which is repeated in a normal situation with every beat [46]. An
example of a common ECG strip is shown in Figure 3.
According to this description, the observable qPw = ⟨atrial activation, {amplitude}, T^b, T^e⟩ represents a P wave resulting from an atrial activation process with an unknown amplitude, localized in a still unknown temporal interval.
[Figure 3 here: an ECG strip with one P wave and one T wave labeled.]
Figure 3: Example of the ECG basic waveforms. [Source: MIT-BIH arrhythmia DB [18], recording: 123, between 12:11.900 and 12:22.400]
Definition 2. We define an observation as a tuple o = ⟨q, v, t^b, t^e⟩, an instance of the observable q resulting from assigning a specific value to each attribute and to the temporal variables, where v = (v1, . . . , vnq) is the set of attribute values such that v ∈ Vq(A1) × . . . × Vq(Anq) and t^b, t^e ∈ τ are two precise instants limiting the beginning and the end of the observation.
We also use the notation (A1 = v1, . . . , Anq = vnq) to represent the assignment of values to the attributes of the observable, and T^b = t^b and T^e = t^e to represent the assignment of temporal limits to the observation.
Example 3.2. The tuple o = ⟨qPw, 0.17 mV, 12:16.977, 12:17.094⟩ represents the particular occurrence of the P wave observable highlighted in Figure 3.
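Definitions 1 and 2 translate directly into two small data structures. The sketch below is our own (the field names and the encoding of the timestamps as seconds are illustrative assumptions, not part of the formal model):

```python
from dataclasses import dataclass
from typing import Tuple

@dataclass(frozen=True)
class Observable:
    """Definition 1: q = ⟨ψ, A, T^b, T^e⟩ (attribute *names* only;
    the temporal variables stay implicit until instantiation)."""
    psi: str                     # name of the underlying process
    attributes: Tuple[str, ...]  # A = {A_1, ..., A_nq}

@dataclass(frozen=True)
class Observation:
    """Definition 2: o = ⟨q, v, t^b, t^e⟩, an instance of q."""
    q: Observable
    v: Tuple                     # one value per attribute of q
    tb: float                    # t^b, here encoded as seconds from 12:00
    te: float                    # t^e

    def __post_init__(self):
        assert len(self.v) == len(self.q.attributes)
        assert self.tb <= self.te  # implicit constraint T^b < T^e

# Example 3.2: the P wave highlighted in Figure 3.
q_pw = Observable("atrial activation", ("amplitude",))
o = Observation(q_pw, ("0.17 mV",), 12 * 60 + 16.977, 12 * 60 + 17.094)
```

The frozen dataclasses make observations immutable values, which matches their role here as pieces of recorded evidence rather than mutable state.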
Some notions involving observables and observations are defined below that
will be useful in describing certain properties and constraints of the domain
concepts, as well as in temporally arranging the interpretation process.
Definition 3. Given a set of observables Q, a generalization relation can be defined between two different observables q = ⟨ψ, A, T^b, T^e⟩ and q′ = ⟨ψ′, A′, T′^b, T′^e⟩, denoted by q′ is a q, meaning that q generalizes q′ if and only if A ⊆ A′ and Vq′(Ai) ⊆ Vq(Ai) ∀Ai ∈ A.
The generalization relation is reflexive, antisymmetric and transitive. The inverse of a generalization relation is a specification relation. From a logical perspective, a generalization relation can be read as an implication q′ → q, meaning that q′ is more specific than q. It holds that every observation o = ⟨q′, v, t^b, t^e⟩ of the observable q′ is also an observation of q.
Example 3.3. A common example of a generalization relation can be defined from a domain partition of an attribute. For example, the observable q1 = ⟨Sinus Rhythm, {RR ∈ [200 ms, 4000 ms]}, T^b, T^e⟩ is a generalization of the observables q2 = ⟨Sinus Tachycardia, {RR ∈ [200 ms, 600 ms]}, T^b, T^e⟩, q3 = ⟨Normal Rhythm, {RR ∈ [600 ms, 1000 ms]}, T^b, T^e⟩ and q4 = ⟨Sinus Bradycardia, {RR ∈ [1000 ms, 4000 ms]}, T^b, T^e⟩. The RR attribute represents the measure of the mean time distance between consecutive beats, while q2, q3 and q4 represent the normal cardiac rhythm denominations according to the heart rate [46].
Definition 4. Given a set of observables Q, an exclusion relation can be defined between two different observables q = ⟨ψ, A, T^b, T^e⟩ and q′ = ⟨ψ′, A′, T′^b, T′^e⟩, denoted by q excludes q′, meaning that they are mutually exclusive if and only if their respective processes ψ and ψ′ cannot concurrently occur.
The exclusion relation is defined extensionally from the knowledge of the
domain, and its rationale lies in the nature of the underlying processes and
mechanisms. In so far as the occurrence of a process can only be hypothesized
as long as it is observable, the exclusion relation behaves as a restriction on observations. Thus, given two observables q and q′, q excludes q′ entails that they cannot be observed over two overlapping intervals, i.e., every two observations o = ⟨q, v, t^b, t^e⟩ and o′ = ⟨q′, v′, t′^b, t′^e⟩ satisfy either t^e < t′^b or t′^e < t^b. The opposite is not generally true. The exclusion relation is symmetric and transitive.
As an example, in the domain of electrocardiography, the knowledge about the
physiology of the heart precludes the observation of a P wave during an episode
of Atrial fibrillation [46], so these two observables are mutually exclusive.
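As a restriction on observations, the exclusion relation is easy to operationalize. The following is a hypothetical sketch (names and tuple encoding are ours), where an exclusion table vetoes temporally overlapping observations of mutually exclusive observables:

```python
def compatible(o1, o2, excludes):
    """Check the restriction entailed by Definition 4. Observations are
    (observable_name, tb, te) triples; `excludes` is a set of frozensets
    encoding the symmetric relation 'q excludes q''. Mutually exclusive
    observables must not overlap: either t^e < t'^b or t'^e < t^b."""
    (q1, tb1, te1), (q2, tb2, te2) = o1, o2
    if frozenset((q1, q2)) not in excludes:
        return True  # no exclusion: temporal overlap is allowed
    return te1 < tb2 or te2 < tb1

# Electrocardiography example: no P wave during atrial fibrillation.
excludes = {frozenset(("P wave", "Atrial fibrillation"))}
```

For instance, a P wave observed over [10, 12] is incompatible with an atrial fibrillation episode over [11, 20], but compatible with one over [13, 20].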
We call O the set of observations available for the observables in Q. In
order to index this set of observations, they will be represented as a sequence
by defining an order relation between them. This ordering aims to prioritize the
interpretation of the observations as they appear.
Definition 5. Let < be an order relation between two observations oi = ⟨qi, vi, t_i^b, t_i^e⟩ and oj = ⟨qj, vj, t_j^b, t_j^e⟩ such that (oi < oj) ⇔ (t_i^b < t_j^b) ∨ ((t_i^b = t_j^b) ∧ (t_i^e < t_j^e)) ∨ ((t_i^b = t_j^b) ∧ (t_i^e = t_j^e) ∧ (qi < qj)), assuming a lexicographical order between observable names.
A sequence of observations is an ordered set of observations O = (o1, ..., oi, ...) such that oi < oj for all i < j. Every subset of a sequence of observations is also a sequence. The q-sequence of observations from O, denoted as O(q), is the subset of the observations for the observable q. The exclusion relation forces that any two observations oi = ⟨q, vi, t_i^b, t_i^e⟩ and oj = ⟨q, vj, t_j^b, t_j^e⟩ in O(q) satisfy oi < oj ⇒ t_i^e < t_j^b for the current application domain. By succ(oi) we denote the successor of the observation oi in the sequence O, according to the order relation <. By q-succ(oi) we denote the successor of the observation oi ∈ O(q) in its q-sequence O(q). Conversely to this notation, we denote by q(oi) the observable corresponding to the observation oi.
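The order of Definition 5 is a plain lexicographic comparison, so it can be realized directly as a sort key. A minimal sketch, using our own tuple encoding of observations as (observable name, t^b, t^e):

```python
def order_key(o):
    """Definition 5: compare by t^b, then t^e, then observable name."""
    q_name, tb, te = o
    return (tb, te, q_name)

# Four toy observations, deliberately out of order.
obs = [("QRS", 5, 7), ("P wave", 2, 3), ("T wave", 5, 7), ("QRS", 2, 4)]
sequence = sorted(obs, key=order_key)
# Ties on (t^b, t^e) are broken lexicographically: "QRS" < "T wave".
```

Because the key is a tuple, Python's built-in tuple comparison reproduces exactly the nested disjunction of Definition 5.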
3.2. Abstraction patterns
We model an abstraction process as an abduction process, based on the conjectural relation m ← h [21], which can be read as 'the observation of the
finding m allows us to conjecture the observation of h as a possible explanatory hypothesis'. For example, a very prominent peak in the ECG signal allows
us to conjecture the observation of a heartbeat. A key aspect of the present
proposal is that both the hypothesis and the finding are observables, and therefore formally identical, i.e., there exist qi , qj ∈ Q, with qi ≠ qj , such that
h ≡ qi = hψi , Ai , Tib , Tie i and m ≡ qj = hψj , Aj , Tjb , Tje i. In general, an abstraction process can involve a number of different findings, even multiple findings of
the same observable, and a set of constraints among them; thus, for example, a
regular sequence of normal heartbeats allows us to conjecture the observation of
a sinus rhythm. Additionally, an observation procedure is required in order to
produce an observation of the hypothesis from the observation of those findings
involved in the abstraction process.
We devise an abstraction process as a knowledge-based reasoning process,
supported by the notion of abstraction pattern, which brings together those
elements required to perform an abstraction. Formally:
Definition 6. An abstraction pattern P = hh, MP , CP , ΘP i consists of
a hypothesis h, a set of findings MP = {m1 , . . . , mn }, a set of constraints
CP = {C1 , . . . , Ct } among the findings and the hypothesis, and an observation
procedure ΘP (A1 , T1b , T1e , . . . , An , Tnb , Tne ) ∈ O(h).
Every constraint Ci ∈ CP is a relation defined on a subset of the set of
variables taking part in the set of findings and the hypothesis {Ah , Thb , The , A1 ,
T1b , T1e , . . . , An , Tnb , Tne }. Thus, a constraint is a subset of the Cartesian product of the respective domains, and represents the simultaneously valid assignments to the variables involved. We will denote each constraint by making
reference to the set of variables being constrained, as in CP (Ah , Thb , The , A1 ,
T1b , T1e , . . . , An , Tnb , Tne ) for the whole abstraction pattern.
An abstraction pattern establishes, through the set CP , the conditions for
conjecturing the observation of h from a set of findings MP , and through the
observation procedure ΘP , the calculations for producing a new observation oh ∈
O(h) from the observation of these findings. We call MPq = {mq1 , mq2 , ..., mqs }
the set of findings of the observable q in P , being MP = ⋃q∈Q MPq . Thus, a set
of findings allows the elements of a multiset of observables to be distinguished.
The interpretation procedure will choose, as we will see later, from the available
observations for every observable q satisfying the constraints CP , which are to
be assigned to the findings in MPq in order to calculate oh .
The set of findings MP is divided into two disjoint sets AP and EP (AP ∩ EP =
∅), where AP is the set of findings that are said to be abstracted in oh , and EP is
the set of findings that constitute the observation environment of oh , that is, the
set of findings needed to properly conjecture oh , but which are not synthesized
in oh .
A temporal covering assumption can be made as a default assumption [36] on
a hypothesis h = hψh , Ah , Thb , The i with respect to those findings m = hψm , Am , Tmb , Tme i appearing in an abstraction pattern:
Default Assumption 1. (Temporal covering) Given an abstraction pattern P ,
it holds that Thb ≤ Tmb and Tme ≤ The , for all m ∈ AP ⊆ MP .
and Tm
The temporal covering assumption allows us to define the exclusiveness of
an interpretation as the impossibility of including competing abstractions in the
same interpretation.
Example 3.4. According to [11], in the electrocardiography domain a “wave” is
a discernible deviation from a horizontal reference line called baseline, where at
least two opposite slopes can be identified. The term discernible means that both
the amplitude and the duration of the deviation must exceed some minimum values, agreed as 20 µV and 6 ms respectively. A wave can be completely described
by a set of attributes: its amplitude (A), voltage polarity (V P ∈ {+, −}) and
its main turning point T tp , resulting in the following observable:
qwave = helectrical activity, {A, V P, T tp }, T b , T e i
Let us consider the following abstraction pattern:
Pwave = hwave, MP = {mECG0 , . . . , mECGn }, CPwave , wave observation()i
where mECGi is a finding representing an ECG sample, with a single attribute
Vi representing the sample value, and a temporal variable Ti representing its
time point. We set the onset and end of a wave to the time of the second
mECG1 and second-to-last mECGn−1 samples, considering mECG0 and mECGn as
environmental observations which are used to check the presence of a slope
change just before and after the wave; thus EPwave = {mECG0 , mECGn }, and
APwave = {mECG1 , . . . , mECGn−1 }.
A set of temporal constraints are established between the temporal variables:
c1 = {T e − T b ≥ 6ms}, c2 = {T b = T1 }, c3 = {T e = Tn−1 } and c4 = {T b <
T tp < T e }. Another set of constraints limit the amplitude and slope changes
of the samples included in a wave: c5 = {sign(V1 − V0 ) 6= sign(V2 − V1 )},
c6 = {sign(Vn − Vn−1 ) 6= sign(Vn−1 − Vn−2 )}, c7 = {sign(Vtp − Vtp−1 ) =
−sign(Vtp+1 −Vtp )} and c8 = {min{|Vtp −V1 |, |Vtp −Vn−1 |} ≥ 20µV }. These two
sets form the complete set of constraints of the pattern CPwave = {c1 , . . . , c8 }.
Once a set of ECG samples has satisfied these constraints, they support the
observation of a wave: owave = hqwave , (a, vp, ttp ), tb , te i. The values of tb and
te are completely determined by the constraints c2 and c3 , while the observation
procedure wave observation() provides a value for the attributes as follows:
vp = sign(Vtp − V1 ), a = max{|Vtp − V1 |, |Vtp − Vn−1 |}, and ttp = tb + tp, where
tp = arg mink {Vk |1 ≤ k ≤ n − 1}, if V1 < V0 , or tp = arg maxk {Vk |1 ≤ k ≤
n − 1}, if V1 > V0 .
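Assuming samples as plain lists, the constraint check of Pwave might be sketched as follows; `check_wave` and the sample data are illustrative names, and c2/c3 are treated as definitional (they fix tb and te):

```python
def sign(x):
    return (x > 0) - (x < 0)

def check_wave(V, T, tp):
    """Constraints c1 and c4-c8 of Pwave for samples V0..Vn taken at times
    T0..Tn (ms), with main turning point index tp. c2 and c3 simply fix
    the wave limits as tb = T1 and te = T(n-1)."""
    n = len(V) - 1
    return (T[n - 1] - T[1] >= 6                                  # c1: >= 6 ms
            and T[1] < T[tp] < T[n - 1]                           # c4
            and sign(V[1] - V[0]) != sign(V[2] - V[1])            # c5
            and sign(V[n] - V[n - 1]) != sign(V[n - 1] - V[n - 2])  # c6
            and sign(V[tp] - V[tp - 1]) == -sign(V[tp + 1] - V[tp])  # c7
            and min(abs(V[tp] - V[1]), abs(V[tp] - V[n - 1])) >= 20)  # c8: >= 20 uV

# A small triangular deflection (amplitudes in microvolts, times in ms):
V = [0, 0, 25, 50, 25, 0, 0]
T = [0, 2, 4, 6, 8, 10, 12]
```

Once the constraints hold, the observation procedure would compute vp, a, and ttp from the same values, as described in the example.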
3.3. Abstraction grammars
According to the definition, an abstraction pattern is defined over a fixed
set of evidence findings MP . In general, however, an abstraction involves an
undetermined number of pieces of evidence (in the case of an ECG wave, the
number of samples). Hence we provide a procedure for dynamically generating
abstraction patterns, based on formal language theory. The set Q of observables
can be considered as an alphabet. Given an alphabet Q, the special symbols ∅
(empty set), and λ (empty string), and the operators | (union), · (concatenation),
and ∗ (Kleene closure), a formal grammar G denotes a pattern of symbols of
the alphabet, describing a language L(G) ⊆ Q∗ as a subset of the set of possible
strings of symbols of the alphabet.
Let Gap be the class of formal grammars of abstraction patterns. An abstraction grammar G ∈ Gap is syntactically defined as a tuple (VN , VT , H, R).
For the production rules in R the expressiveness of right-linear grammars is
adopted [23]:
H → qD
D → qF | q | λ
H is the initial symbol of the grammar, and this plays the role of the hypothesis guessed by the patterns generated by G. VN is the set of non-terminal
symbols of the grammar, satisfying H ∈ VN , although H cannot be found on the
right-hand side of any production rule, since a hypothesis cannot be abstracted
by itself. VT is the set of terminal symbols of the grammar, representing the set
of observables QG ⊆ Q that can be abstracted by the hypothesis.
Given a grammar G ∈ Gap , we devise a constructive method for generating
a set of abstraction patterns PG = {P1 , . . . , Pi , . . .}. Since a formal grammar
is simply a syntactic specification of a set of strings, every grammar G ∈ Gap
is semantically extended to an attribute grammar [1], embedded with a set of
actions to be performed in order to incrementally build an abstraction pattern by
the application of production rules. An abstraction grammar is represented as
G = ((VN , VT , H, R), B, BR), where B(α) associates each grammar symbol α ∈
VN ∪VT with a set of attributes, and BR(r) associates each rule r ∈ R with a set
of attribute computation rules. An abstraction grammar associates the following
attributes: i) P (attern), with each non-terminal symbol of the grammar; this
will be assigned an abstraction pattern; ii) A(bstracted), with each terminal
symbol corresponding to an observable q ∈ QG ; this allows us to assign each
finding either to the set AP or EP , depending on its value of true or false; iii)
C(onstraint), with each terminal symbol corresponding to an observable; this
will be assigned a set of constraints. There are proposals in the bibliography
dealing with different descriptions of Constraint Satisfaction Problems and their
semantic expression in different formalisms [2, 5, 12]. By explicitly specifying a
constraint as a relation a clear description is provided on its underlying meaning,
but this can lead to cumbersome knowledge representation processes. Multiple
mathematical conventions can concisely and conveniently describe a constraint
as a boolean-valued function over the variables of a set of observables. However,
we will focus on the result of applying a set of constraints among the variables
involved.
In the following, the set of attribute computation rules associated with the
grammar productions is specified to provide a formal method for building abstraction patterns P ∈ PGh from a grammar Gh ∈ Gap . PGh gathers the set
of abstraction patterns that share the same observable h as a hypothesis; thus,
these represent the different ways to conjecture h. Using this method, the application of every production incrementally adds a new observable as a finding
and a set of constraints between this finding and previous entities, as follows:
1. The initial production H → qD entails:
PH := hh, MH = ∅, CH = ∅, ΘH = ∅i
Cq := C(Ah , Thb , The , A1 , T1b , T1e )
Aq ∈ {true, false}
PD := hh, MD = MH ∪ {mq1 }, CD = CH ∪ Cq , ΘD (A1 , T1b , T1e )i
2. All the productions of the form D → qF entail:
PD := hh, MD , CD , ΘD (A1 , T1b , T1e , . . . , Ak , Tkb , Tke )i
Cq := C(Ah , Thb , The , A1 , . . . , Ak+1 , Tk+1b , Tk+1e )
Aq ∈ {true, false}
PF := hh, MF = MD ∪ {mqk+1 }, CF = CD ∪ Cq , ΘF (A1 , T1b , T1e , . . . , Ak+1 , Tk+1b , Tk+1e )i
3. Productions of the form D → q conclude the generation of a pattern
P ∈ PG h :
PD := hh, MD , CD , ΘD (A1 , T1b , T1e , . . . , Ak , Tkb , Tke )i
Cq := C(Ah , Thb , The , A1 , . . . , Ak+1 , Tk+1b , Tk+1e )
Aq ∈ {true, false}
P := hh, MP = MD ∪ {mqk+1 }, CP = CD ∪ Cq , ΘP (A1 , T1b , T1e , . . . , Ak+1 , Tk+1b , Tk+1e )i
4. Productions of the form D → λ also conclude the generation of a pattern:
PD := hh, MD , CD , ΘD (A1 , T1b , T1e , . . . , Ak , Tkb , Tke )i
P := PD
This constructive method enables the incremental addition of new constraints as new findings are included in the representation of the abstraction
pattern, providing a dynamic mechanism for knowledge assembly by language
generation. The final constraints in CP are obtained from the conjunction of the
constraints added at each step. Moreover, it is possible to design an adaptive
observation procedure as new evidence becomes available, since the observation
procedure may be different at each step.
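The attribute computation rules above can be sketched as an incremental builder; the `Pattern` class and its string placeholders for constraints are illustrative only:

```python
class Pattern:
    """Growing abstraction pattern <h, M_P, C_P> (observation procedure
    omitted; all names in this sketch are ours)."""
    def __init__(self, h):
        self.h = h
        self.findings = []      # the set M_P, one entry per finding
        self.constraints = []   # the set C_P, grown as a conjunction

    def apply_production(self, q, constraints=()):
        """One step H -> qD or D -> qF: add a finding of observable q and
        the constraints attributed to the production."""
        self.findings.append(f"m{len(self.findings) + 1}^{q}")
        if constraints:
            self.constraints.extend(constraints)
        else:
            # No attributed constraints: 'hereafter' ordering by default.
            k = len(self.findings)
            self.constraints.append(f"Ti^b <= T{k}^b for all mi in M_P")
        return self

P = Pattern("wave")
P.apply_production("ECG", ["T^b = T1"])
P.apply_production("ECG")              # falls back to the default ordering
P.apply_production("ECG", ["T^e = T3"])
```

Each production adds one finding and one batch of constraints, mirroring the conjunction semantics of CP described above.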
In the case that no temporal constraints are attributed to a production, a
'hereafter' temporal relationship will be assumed by default to exist between
the new finding and the set of previous findings. For instance, a production of
the form D → qF entails that CF = CP ∪ {Tib ≤ Tk+1b | mi ∈ MP }.
Hence, in the absence of any temporal constraint, an increasing temporal order among consecutive findings in every abstraction pattern is assumed. Moreover, every temporal constraint must be consistent with this temporal order.
According to the limitation imposed on observations of the same observable
which prevents two different observations from occurring at the same time, an
additional constraint is added on any two findings of the same observable, and
thus ∀mqi , mqj ∈ MPq , (Tie < Tjb ∨ Tje < Tib ).
Several examples of abstraction pattern grammars modeling common knowledge in electrocardiography are given below, in order to illustrate the expressiveness of the Gap grammars.
Example 3.5. The grammar GN = (VN , VT , H, R) is designed to generate an
abstraction pattern for a normal cardiac cycle, represented by the observable
qN , including the descriptions of common durations and intervals [46]. In this
grammar, VN = {H, D, E}, VT = {qP w , qQRS , qT w }, and R is given by:
H → qP w D
{PH := hqN ,MH =∅,CH =∅,ΘH =∅i,
CP w := {TNb =TPb w ; 50ms≤TPe w −TPb w ≤120ms},
AP w := true,
PD := hqN ,MD ={mP w },CD =CP w ,ΘD =∅i
}
D → qQRS E
{PD := hqN ,MD ={mP w },CD =CP w ,ΘD =∅i,
CQRS := {50ms≤TQRSe −TQRSb ≤150ms; 100ms≤TQRSb −TPb w ≤210ms},
AQRS := true,
PE := hqN ,ME =MD ∪{mQRS },CE =CD ∪CQRS ,ΘE =∅i
}
E → qT w
{PE := hqN ,ME ={mP w ,mQRS },CE ,ΘE =∅i,
CT w := {80ms≤TTb w −TQRSe ≤120ms; TTe w −TQRSb ≤520ms; TNe =TTe w },
AT w := true,
P := hqN ,MP =ME ∪{mT w },CP =CE ∪CT w ,ΘP =∅i
}
This grammar generates a single abstraction pattern, which allows us to interpret
the sequence of a P wave, a QRS complex, and a T wave as the coordinated
contraction and relaxation of the heart muscle, from the atria to the ventricles.
Some additional temporal constraints are required and specified in the semantic
description of the production rules. In this case, an observation procedure Θ is
not necessary since the attributes of the hypothesis are completely determined
by the constraints in the grammar, and do not require additional calculus.
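A minimal sketch of the timing checks accumulated by GN (all names ours; intervals given as (tb, te) pairs in milliseconds):

```python
def check_normal_cycle(pw, qrs, tw):
    """Temporal constraints attributed by grammar GN; each argument is an
    interval (tb, te) in milliseconds."""
    c_pw = 50 <= pw[1] - pw[0] <= 120                       # P wave duration
    c_qrs = (50 <= qrs[1] - qrs[0] <= 150) and (100 <= qrs[0] - pw[0] <= 210)
    c_tw = (80 <= tw[0] - qrs[1] <= 120) and (tw[1] - qrs[0] <= 520)
    return c_pw and c_qrs and c_tw

# A plausible cycle: P wave 0-90 ms, QRS 150-240 ms, T wave 330-500 ms.
```

The hypothesis interval would then be fixed by the definitional constraints TNb = TPwb and TNe = TTwe, with no observation procedure needed.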
The next example shows the ability of an abstraction grammar to generate
abstraction patterns dynamically with an undefined number of findings.
Example 3.6. A bigeminy is a heart arrhythmia in which there is a continuous
alternation of long and short heart beats. Most often this is due to ectopic heart
beats occurring so frequently that there is one after each normal beat, typically
premature ventricular contractions (PVCs) [46]. For example, a normal beat
is followed shortly by a PVC, which is then followed by a pause. The normal
beat then returns, only to be followed by another PVC. The grammar GV B =
(VN , VT , H, R) generates a set of abstraction patterns for ventricular bigeminy,
where VN = {H, D, E, F }, VT = {qN , qV }, and R is given by:
H → qN D
{PH := hqV B ,MH =∅,CH =∅,ΘH =∅i,
CN := {TVb B =T1 },
AN := true,
PD := hqV B ,MD ={mN1 },CD =CN ,ΘD =∅i
}
D → qV E
{PD := hqV B ,MD ={mN1 },CD =CN ,ΘD =∅i,
CV := {200ms≤T2 −T1 ≤800ms},
AV := true,
PE := hqV B ,ME =MD ∪{mV2 },CE =CD ∪CV ,ΘE =∅i
}
E → qN F
{PE := hqV B ,ME ={mN1 ,...,mVk−1 },CE ,ΘE =∅i,
CN := {1.5·200ms≤Tk −Tk−1 ≤4·800ms},
AN := true,
PF := hqV B ,MF =ME ∪{mNk },CF =CE ∪CN ,ΘF =∅i
}
F → qV E
{PF := hqV B ,MF ={mN1 ,mV2 ,...,mNk },CF ,ΘF =∅i,
CV := {200ms≤Tk+1 −Tk ≤800ms},
AV := true,
PE := hqV B ,ME =MF ∪{mVk+1 },CE =CF ∪CV ,ΘE =∅i
}
F → qV
{PF := hqV B ,MF ={mN1 ,mV2 ,...,mNn−1 },CF ,ΘF =∅i,
CV := {200ms≤Tn −Tn−1 ≤800ms; TVe B =Tn },
AV := true,
P := hqV B ,MP =MF ∪{mVn },CP =CF ∪CV ,ΘP =∅i
}
For simplicity, we have referenced each N and V heart beat with a single
temporal variable. Thus Ti represents the time point of the ith heart beat, and
is a normal beat if i is odd, and a PVC if i is even. With the execution of
these production rules, an unbounded sequence of alternating normal and premature ventricular QRS complexes is generated, described above as ventricular
bigeminy. Note that in terms of the {N, V } symbols the GV B grammar is syntactically equivalent to the regular expression N V (N V )+ .
In this example, as in Example 3.5, an observation procedure ΘP is not necessary,
since the constraints in the grammar completely determine the temporal endpoints of the hypothesis and there are no more attributes to be valued. Figure 4
shows an example of a ventricular bigeminy pattern.
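The claimed equivalence with the regular expression NV(NV)+ can be exercised directly; `gvb_generates` is an illustrative name:

```python
import re

# Derivations of G_VB: H -> qN D; D -> qV E; E -> qN F; F -> qV E | qV.
# Every terminal string alternates N and V and contains at least two NV
# pairs, i.e. the language is exactly that of the regex NV(NV)+.
def gvb_generates(s):
    return re.fullmatch(r"NV(NV)+", s) is not None
```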
4. An interpretation framework
In this section, we define and characterize an interpretation problem. Informally, an interpretation problem arises from the availability of a set of initial
Figure 4: Example of ventricular bigeminy. [Source: MIT-BIH arrhythmia DB, recording:
106, between 25:06.350 and 25:16.850]
observations from a given system, and of domain knowledge formalized as a
set G = {Gq1 , . . . , Gqn } of Gap grammars. Every abstraction grammar Gh ∈ G
generates a set of abstraction patterns that share the same hypothesis h. The
whole set of abstraction patterns that can be generated by G is denoted by P.
Definition 7. Let Q be a set of observables and G a set of abstraction grammars. We say G induces an abstraction relation in Q × Q, denoted by qi ⇝ qj ,
if and only if there exists an abstraction pattern P generated by some Gh ∈ G
such that:
1. qj = h
2. MPqi ∩ AP ≠ ∅
3. qi ⇝+ qi does not hold, where ⇝+ is the transitive closure of ⇝
The relation qi ⇝ qj is a sort of conjectural relation that allows us to conjecture the presence of qj from the observation of qi . The transitive closure of
the abstraction relation is a strict partial order relation between the domain
observables, such that qi < qj ⇔ qi ⇝+ qj ; that is, if and only if ∃qk0 , . . . , qkn ∈ Q
such that qk0 = qi , qkn = qj and for all m, with 0 ≤ m < n, it holds that
qkm ⇝ qkm+1 . We denote by qi = qk0 ⇝ qk1 ⇝ . . . ⇝ qkn = qj an abstraction
sequence in n steps that allows the conjecture of qj from qi . This order relation
defines an abstraction hierarchy among the observables in Q. From the definition of a strict partial order, there must be at the base of this hierarchy at least
one observable we call q0 , corresponding in the domain of electrocardiography
to the digital signal.
Example 4.1. Let Q = {qP w , qQRS , qT w , qN , qV , qV B } and G = {GN , GV B },
containing the knowledge represented in examples 3.5 and 3.6. The derived abstraction relation states that qP w , qQRS , qT w ⇝ qN , and qN , qV ⇝ qV B . Intuitively,
we can see that this relation splits the observables into three abstraction levels:
the wave level, describing the activation/recovery of the different heart chambers; the heartbeat level, describing each cardiac cycle by its origin in the muscle
tissue; and the rhythm level, describing the dynamic behavior of the heart over
multiple cardiac cycles. These levels match those commonly used by experts in
electrocardiogram analysis [46].
It is worth noting that the abstraction relation is only established between
observables in the AP set. This provides flexibility in defining the evidence
forming the context of a pattern, as this may belong to different abstraction
levels.
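The abstraction relation of this example and its transitive closure can be sketched as a reachability computation (names ours):

```python
def transitive_closure(pairs):
    """Closure of the abstraction relation: one or more abstraction steps."""
    closure = set(pairs)
    changed = True
    while changed:
        changed = False
        for a, b in list(closure):
            for c, d in list(closure):
                if b == c and (a, d) not in closure:
                    closure.add((a, d))
                    changed = True
    return closure

# Relation of Example 4.1: waves abstract into beats, beats into rhythms.
R = {("Pw", "N"), ("QRS", "N"), ("Tw", "N"), ("N", "VB"), ("V", "VB")}
Rplus = transitive_closure(R)
```

The closure reaches from the wave level to the rhythm level in two steps, while the absence of reverse pairs reflects the strict partial order.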
Definition 8. We define an abstraction model as a tuple M = hQ, ⇝, Gi,
where Q is the set of domain observables, ⇝ is an abstraction relation between
such observables, and G is the available knowledge as a set of abstraction grammars.
The successive application of the available abstraction grammars results in
a series of observations organized in a hierarchy of abstraction, according to the
order relation between observables as described above. We are able to define an
interpretation problem as follows.
Definition 9. We define an interpretation problem as a tuple IP = hO, Mi,
where O = (o1 , o2 , . . . , oi , . . .) is a sequence of observations requiring interpretation and M is an abstraction model of the domain.
It is worth mentioning that this definition of an abductive interpretation
problem differs from the common definition of an abductive diagnosis problem,
where the difference between normal and faulty behaviors is explicit, leading to
the role of faulty manifestations. Only when a faulty manifestation is detected is
the abductive process of diagnosis started. In contrast, in the present framework
all the observations have the same status, and the objective of the interpretation
process is to provide an interpretation of what is observed at the highest possible
abstraction level in terms of the underlying processes. As we will see later,
some observables may stand out amongst others regarding the efficiency of the
interpretation process, as salient features that can draw some sort of perceptual
attention.
As discussed above, any observable q ∈ QP can appear multiple times as
different pieces of evidence for an abstraction pattern P , in the form of findings
collected in the set MP . As a consequence, P can predict multiple observations
of the set O for a given observable q ∈ QP , each of these corresponding to
one of the findings of the set MP through a matching relation. This matching
relation is a matter of choice for the agent in charge of the interpretation task,
by selecting from the evidence the observation corresponding to each finding in
a given pattern.
Definition 10. Given an interpretation problem IP , a matching relation for
a pattern P ∈ P is an injective relation in MP × O, defined by mq ↦ o if and
only if o = hq, v, tb , te i ∈ O(q) ⊆ O and mq = hψ, A, T b , T e i ∈ MP , such that
(A1 = v1 , . . . , Anq = vnq ), T b = tb and T e = te .
A matching relation makes an assignment of a set of observations to a set
of findings of a certain pattern, leading us to understand the interpretation
problem as a search within the available evidence for a valid assignment for the
constraints represented in an abstraction pattern.
From the notion of matching relation we can design a mechanism for abductively interpreting a subset of observations in O through the use of abstraction
patterns. Thus, a matching relation for a given pattern allows us to hypothesize new observations from previous ones, and to iteratively incorporate new
evidence into the interpretation by means of a hypothesize-and-test cycle. The
notion of abstraction hypothesis defines those conditions that a subset of observations must satisfy in order to be abstracted by a new observation, and makes
it possible to incrementally build an interpretation from the incorporation of
new evidence.
Definition 11. Given an interpretation problem IP , we define an abstraction
hypothesis as a tuple ~ = hoh , P, ↦i, where P = hh, MP , CP , ΘP i ∈ P, ↦ ⊆
MP × O, and we denote O~ = codomain(↦), satisfying:
1. oh ∈ O(h).
2. oh = ΘP (O~ ).
3. CP (Ah , Thb , The , A1 , T1b , T1e , . . . , An , Tnb , Tne )|oh ,o1 ,...,on ∈O~ is satisfied.
These conditions entail: (1) an abstraction hypothesis guesses an observation
of the observable hypothesized by the pattern; (2) a new observation is obtained
from the application of the observation procedure to those observations being
assigned to the set of findings MP by the matching relation; and (3) the observations taking part in an abstraction hypothesis must satisfy those constraints
of the pattern whose variables are assigned a value by the observations.
Even though the matching relation is a matter of choice, and therefore a
conjecture in itself, some additional constraints may be considered as default
assumptions. An important default assumption in the abstraction of periodic
processes states that consecutive observations are related by taking part in the
same hypothesis, defining the basic period of the process. This assumption
functions as a sort of operative hypothesis of the abstraction task:
Default Assumption 2. (Basic periodicity) Periodic findings in an abstraction
pattern must be assigned consecutive observations by any matching relation:
∀mqi , mqi+1 ∈ MPq , mqi ↦ oj ∧ q-succ(oj ) ∈ O~ ⇒ mqi+1 ↦ q-succ(oj )
This default assumption allows us to avoid certain combinations of abstraction hypotheses that, although formally correct, are meaningless from an interpretation point of view. For example, without the assumption of basic periodicity, a normal rhythm fragment might be abstracted by two alternating
bradycardia hypotheses, as shown in Figure 5.
Figure 5: Motivation for the assumption of basic periodicity. [Source: MIT-BIH arrhythmia
DB, recording: 103, between 00:40.700 and 00:51.200]
The set of observations that may be abstracted in an interpretation problem
IP is O(domain(⇝)), that is, observations corresponding to observables involved
in the set of findings to be abstracted by some abstraction pattern. An abstraction hypothesis defines in the set of observations O a counterpart of the subsets
AP and EP of the set of findings MP of a pattern P , resulting from the selection of a set of observations O~ ⊆ O by means of a matching relation, satisfying
those requirements shown in Definition 11.
Definition 12. Given an interpretation problem IP and an abstraction hypothesis ~ = hoh , P, ↦i, we define the following sets of observations:
• abstracted by(oh ) = {o ∈ O~ | mqi ↦ o ∧ mqi ∈ AP }.
• environment of(oh ) = {o ∈ O~ | mqi ↦ o ∧ mqi ∈ EP }.
• evidence of(oh ) = abstracted by(oh ) ∪ environment of(oh ).
We denote by abstracted by(oh ) the set of observations abstracted by oh
and which are somehow its constituents, while environment of(oh ) denotes
the evidential context of oh . We denote by evidence of(oh ) the set of all
observations supporting a specific hypothesis. Since the matching relation is
injective, it follows that abstracted by(oh ) ∩ environment of(oh ) = ∅.
The definition of these sets can be generalized to include as arguments a set of
observations O = {oh1 , ..., ohm } from a set of abstraction hypotheses ~1 , ..., ~m :
• abstracted by(O) = ⋃oh ∈O abstracted by(oh ).
• environment of(O) = ⋃oh ∈O environment of(oh ).
• evidence of(O) = ⋃oh ∈O evidence of(oh ).
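Representing a matching relation as an injective finding-to-observation mapping, the sets of Definition 12 can be sketched as follows (all names ours):

```python
def abstracted_by(matching, A_P):
    """Observations assigned to findings in the abstracted set A_P."""
    return {o for m, o in matching.items() if m in A_P}

def environment_of(matching, E_P):
    """Observations assigned to findings in the environment set E_P."""
    return {o for m, o in matching.items() if m in E_P}

# As in Pwave: the first and last findings are environmental, the rest
# are abstracted. The matching maps each finding to one observation.
matching = {"m0": "o0", "m1": "o1", "m2": "o2", "m3": "o3"}
A = abstracted_by(matching, {"m1", "m2"})
E = environment_of(matching, {"m0", "m3"})
# Injectivity of the matching keeps the two sets disjoint.
```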
As a result of an abstraction hypothesis, a new observation oh is generated
which can be included in the set of domain observations, so that O = O ∪ {oh }.
In this way, an interpretation can be incrementally built from the observations,
by means of the aggregation of abstraction hypotheses.
Definition 13. Given an interpretation problem IP , an interpretation is
defined as a set of abstraction hypotheses I = {~1 , . . . , ~m }.
An interpretation can be rewritten as I = hOI , PI , ↦I i, where OI = {oh1 , . . .
, ohm } is the set of observations guessed by performing multiple abstraction
hypotheses; PI = {P1 , . . . , Pm } is the set of abstraction patterns used in the
interpretation; and ↦I = ↦~1 ∪ . . . ∪ ↦~m ⊆ (M1 ∪ . . . ∪ Mm ) × O is the global
matching relation. We should note that the global matching relation ↦I is
not necessarily injective, since some observations may simultaneously belong to
both the abstracted by() and environment of() sets of different observations.
From a given interpretation problem IP , multiple interpretations can be
abductively proposed through different sets of abstraction hypotheses. Indeed,
the definition of interpretation is actually weak, since even an empty set I = ∅
is formally a valid interpretation. Thus, we need additional criteria in order
to select the solution to the interpretation problem as the best choice among
different possibilities [33].
Definition 14. Given an interpretation problem IP , an interpretation I is a
cover of IP if the set of observations to be interpreted O(domain(⇝)) ⊆ O is
included in the set of observations abstracted by I, that is, O(domain(⇝)) ⊆
abstracted by(OI ).
Definition 15. Given an interpretation problem IP , two different abstraction
hypotheses ~ and ~′ of the mutually exclusive observables qh and qh′ are alternative hypotheses if and only if abstracted by(oh ) ∩ abstracted by(oh′ ) ≠
∅.
Example 4.2. A ventricular trigeminy is an infrequent arrhythmia very similar
to ventricular bigeminy, except that the ectopic heart beats occur after every pair
of normal beats instead of after each one. The grammar for hypothesizing a
ventricular trigeminy qV T would therefore be very similar to that described in
example 3.6, with the difference that each qV finding would appear after every
pair of qN findings. These two processes are mutually exclusive, insofar as the
heart can develop just one of these activation patterns at a given time. For this
reason, in the event of an observation of qV , this may be abstracted by either a
qV B or a qV T hypothesis, but never by both simultaneously.
Definition 16. Given an interpretation problem IP , a cover I for IP is exclusive if and only if it contains no alternative hypotheses.
Thus, two or more different hypotheses of mutually exclusive observables
abstracted from the same observation will be incompatible in the same interpretation, since inferring both a statement and its negation is logically prevented,
and therefore only one of them can be selected.
On the other hand, a parsimony criterion is required, in order to disambiguate the possible interpretations to select as the most plausible those of
which the complexity is minimum [33]. We translate this minimum complexity
in terms of minimal cardinality.
Definition 17. Given an interpretation problem IP , a cover I for IP is minimal, if and only if its cardinality is the smallest among all covers for IP .
Minimality introduces a parsimony criterion on hypothesis generation, promoting temporally maximal hypotheses, that is, those hypotheses of a larger
scope rather than multiple equivalent hypotheses of smaller scope. For example,
consider an abstraction pattern that allows the conjecture of a regular cardiac
rhythm from the presence of three or more consecutive heart beats. Without a
parsimony criterion, a sequence of nine consecutive beats could be abstracted
by up to three consecutive rhythm observations, even when a single rhythm
observation would be sufficient and better.
Definition 18. The solution of an interpretation problem IP is the set of all
minimal and exclusive covers of IP .
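Definition 18 can be sketched as a brute-force filter over candidate hypothesis sets; `minimal_exclusive_covers` and its arguments are illustrative simplifications (each hypothesis reduced to the set of observations it abstracts):

```python
from itertools import combinations

def minimal_exclusive_covers(observations, hypotheses, alternative):
    """Brute-force Definition 18: among hypothesis sets covering all
    observations and containing no alternative (mutually exclusive)
    pair, return those of minimum cardinality."""
    for k in range(len(hypotheses) + 1):
        found = []
        for combo in combinations(hypotheses, k):
            covered = set().union(*(hypotheses[h] for h in combo), set())
            if covered >= observations and not any(
                    alternative(a, b) for a, b in combinations(combo, 2)):
                found.append(set(combo))
        if found:  # the smallest cardinality is reached first
            return found
    return []

hyps = {"h1": {1, 2, 3}, "h2": {3, 4}, "h3": {1, 2, 3, 4}}
solution = minimal_exclusive_covers({1, 2, 3, 4}, hyps, lambda a, b: False)
# One temporally maximal hypothesis beats any pair of smaller ones.
```

The enumeration by increasing cardinality directly encodes the parsimony criterion; the exhaustive search reflects why the problem is intractable in general.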
This definition of solution is very conservative and has limited practical
value, since the usual objective is to obtain a small set of interpretations explaining what has been observed (and ideally only a single one). However, it
allows us to characterize the problem in terms of complexity. Abduction has
been formulated under different frameworks according to the task to be addressed, but has always been found an intractable problem in the general case
[24]. The next theorem proves that an interpretation problem is also an intractable problem.
Theorem 1. Finding the solution to an interpretation problem is NP-hard.
Proof: We will provide a polynomial-time reduction of the well-known set covering problem to an interpretation problem. Given a set of elements U =
{u1 , . . . , um } and a set S of subsets of U , a cover is a set C ⊆ S whose
union is U . In terms of complexity analysis, two different problems
of interest are identified:
• A set covering decision problem, stating that given a pair (U, S) and an
integer k the question is whether there is a set covering of size k or less.
This decision version of set covering is NP-complete.
• A set covering optimization problem, stating that given a pair (U, S) the
task is to find a set covering that uses the fewest sets. This optimization
version of set covering is NP-hard.
We will therefore reduce the set covering problem to an interpretation problem
by means of a polynomial-time function ϕ. Thus, we shall prove that ϕ(U, S)
is an interpretation problem, and that there is an exclusive cover of ϕ(U, S) of
size k or less if and only if there is a set covering of U in S of size k or less.
Given a pair (U, S), let ϕ(U, S) = hO, Mi where:
1. O = U = {u1 , . . . , um }, such that ui = hq, true, ii and q = hψ, present, T i.
2. M = hQ, ⇝, Pi, such that domain(⇝) = q.
3. ∀s = {ui1 , . . . , uin } ∈ S, ∃P ∈ P, being P = hqP , MP , CP , ΘP i, where:
• q ⇝ qP and P ≠ P ′ ⇒ qP ≠ qP ′ .
• MP = AP = MPq = {mq1 = hψ, present1 , T1 i, . . . , mqn }.
• CP = {T1 = i1 ∧ . . . ∧ Tn = in ; Thb = min{Tk }; The = max{Tk }}.
• presentP = ΘP (mq1 , . . . , mqn ) = present1 ∧ . . . ∧ presentn .
Thus, ϕ(U, S) is an interpretation problem according to this definition. On the
other hand, ϕ(U, S) can be built in polynomial time. In addition, for all s ∈ S
there exists an abstraction hypothesis ~ = hoh , P, ↦i such that:
1. oh = hh, true, minui ∈s {i}, maxui ∈s {i}i.
2. ui ∈ s ⇒ ui ∈ codomain(↦).
3. ↦ provides a valid assignment, since the set of observations satisfying
ΘP = true also satisfies the constraints in CP .
Since each abstraction hypothesis involves a different abstraction pattern,
there are no alternative hypotheses in any interpretation of ϕ(U, S).
Suppose there is a set covering C ⊆ S of U of size k or less. For all u ∈ U
there exists ci ∈ C − {∅} such that u ∈ ci and, by the above construction, there
exists ℏi ∈ I such that abstracted_by(ohi ) = {u ∈ codomain(↦i )} = {u ∈ ci } = ci ,
and therefore O(domain(⊑)) ⊆ ∪_{ℏi ∈I} abstracted_by(ohi ) = ∪_i ci = ∪C.
That is, the set of abstraction hypotheses I is an exclusive cover of the
interpretation problem ϕ(U, S) of size k or less.
Following the same reasoning as for the set covering optimization problem,
finding a minimal and exclusive cover of an interpretation problem ϕ(U, S) is
NP-hard, since the solution of this problem could be used to check whether there
is an exclusive cover of the interpretation problem of size k or less, which has
been proven above to be NP-complete.
5. Solving an interpretation problem: A heuristic search approach
The solution set for an interpretation problem IP consists of all exclusive
covers of IP having the minimum possible number of abstraction hypotheses.
Obtaining this solution set can be stated as a search on the set of interpretations of IP . The major source of complexity of searching for a solution is the
local selection, from the available evidence in O, of the most appropriate matching relation for a number of abstraction hypotheses that can globally shape a
minimal and exclusive cover of IP .
Nevertheless, the whole concept of solution must be revised in practical
terms, due to the intractability of the task and the incompleteness of the abstraction model, that is, of the available knowledge. Indeed, we assume that
any realistic abstraction model can hardly provide a cover for every possible
interpretation problem. Hence the objective should shift from searching for a
solution to searching for an approximate solution.
Certain principles applicable to the interpretation problem can be exploited
in order to approach a solution in an iterative way, bounding the combinatorial
complexity of the search. These principles can be stated as a set of heuristics
that make it possible to evaluate and discriminate some interpretations against
others from the same base evidence:
• A coverage principle, which states the preference for interpretations explaining more initial observations.
• A simplicity principle, which states the preference for interpretations with
fewer abstraction hypotheses.
• An abstraction principle, which states the preference for interpretations
involving higher abstraction levels.
• A predictability principle, which states the preference for interpretations
that properly predict future evidence.
The coverage and simplicity principles are used to define a cost measure
for the heuristic search process [14], while the abstraction and predictability
principles are used to guide the reasoning process, in an attempt to emulate the
same shortcuts used by humans.
Given an interpretation problem IP , a heuristic vector for a certain interpretation I can be defined to guide the search as η(I) = (1 − ς(I), κ(I)), where
ς(I) = |abstracted_by(OI )|/|O(domain(⊑))| is the covering ratio of I, and
κ(I) = |OI | is the complexity of I. The main goal of the search strategy is to
approach a solution with a maximum covering ratio and a minimum complexity,
which is equivalent to the minimization of the heuristic vector. The covering
ratio is considered the primary heuristic, and complexity is used to rank
interpretations with the same covering ratio. The η(I) heuristic is
intuitive and very easy to calculate but, as a drawback, it is non-admissible,
since it is not monotone and may underestimate or overestimate the
true goal covering. Optimality therefore cannot be guaranteed, and we require
an algorithm that is efficient with this type of heuristic. We propose the CONSTRUE()
algorithm, whose pseudocode is shown in Algorithm 1. This algorithm is a minor variation of the K-Best First Search algorithm [14], with partial expansion
to reduce the number of explored nodes.
Algorithm 1 CONSTRUE search algorithm.
1: function CONSTRUE(IP )
2:     var I0 = ∅
3:     var K = max(|{qj ∈ Q | qi ⊑ qj , qi ∈ Q}|)
4:     set_focus(I0 , o1 )
5:     var open = sorted([⟨η(I0 ), I0 ⟩])
6:     var closed = sorted([])
7:     while open ≠ ∅ do
8:         for all I ∈ open[0 . . . K] do
9:             I′ = next(get_descendants(I))
10:            if I′ is null then
11:                open = open − {⟨η(I), I⟩}
12:                closed = closed ∪ {⟨η(I), I⟩}
13:            else if ς(I′ ) = 1.0 then
14:                return I′
15:            else
16:                open = open ∪ {⟨η(I′ ), I′ ⟩}
17:            end if
18:        end for
19:    end while
20:    return min(closed)
21: end function
The CONSTRUE() algorithm takes as its input an interpretation problem IP ,
and returns the first interpretation found with full coverage, or the interpretation with the maximum covering ratio and minimum complexity if no covers
are found, using the abstraction and predictability principles in the searching
process. To do this, it manages two ordered lists of interpretations, named open
and closed. Each interpretation is annotated with the computed values of the
heuristic vector. The open list contains those partial interpretations that can
further evolve by (1) appending new hypotheses or (2) extending previously
conjectured hypotheses to subsume or predict new evidence. This open list is
initialized with the trivial interpretation I0 = ∅. The closed list contains those
interpretations that cannot explain more evidence.
At each iteration, the algorithm selects the K most promising interpretations
according to the heuristic vector (line 8), and partially expands each one of them
to obtain the next descendant node I 0 . If this node is a solution, then the process
ends by returning it (line 13), otherwise it is added to the open list. The partial
expansion ensures that the open list grows at each iteration by at most K new
nodes, in order to save memory. When a node cannot expand further, it is added
to the closed list (line 12), from which the solution is taken if no full coverages
are found (line 20).
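The open/closed bookkeeping just described can be sketched as a simplified stand-in for Algorithm 1 (the `descendants` generator, `heuristic` function and the toy set-cover data below are assumptions for illustration, not the paper's implementation):

```python
def construe(initial, descendants, heuristic, K):
    """K-best-first search with partial expansion (sketch of Algorithm 1).

    `descendants(node)` lazily yields successor interpretations;
    `heuristic(node)` returns the vector (1 - covering_ratio, complexity),
    which is minimized lexicographically.
    """
    tie = 0  # unique tie-breaker so nodes themselves are never compared
    open_list = [(heuristic(initial), tie, initial, descendants(initial))]
    closed = []
    while open_list:
        # Partially expand the K most promising open interpretations.
        for entry in sorted(open_list)[:K]:
            h, _, node, gen = entry
            child = next(gen, None)
            if child is None:                 # exhausted: move to closed
                open_list.remove(entry)
                closed.append((h, node))
            elif heuristic(child)[0] == 0.0:  # covering ratio = 1: solution
                return child
            else:
                tie += 1
                open_list.append((heuristic(child), tie, child,
                                  descendants(child)))
    return min(closed, key=lambda e: e[0])[1] if closed else initial

# Toy instance: an interpretation is a tuple of conjectured "hypotheses",
# each covering a subset of the initial observations.
U = {1, 2, 3, 4}
S = [frozenset({1, 2}), frozenset({3}), frozenset({3, 4})]

def desc(node):
    return (node + (s,) for s in S if s not in node)

def heur(node):
    covered = set().union(*node) if node else set()
    return (1 - len(covered) / len(U), len(node))

best = construe((), desc, heur, K=2)
```

Since the heuristic is non-admissible, the first full-coverage interpretation returned is not guaranteed to have minimal complexity, mirroring the loss of optimality discussed above.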
The selection of a value for the K parameter depends on the problem at
hand. We select its value as K = max(|{qj ∈ Q | qi ⊑ qj , qi ∈ Q}|), that is, as
the maximum number of observables that can be abstracted from any observable
qi . The intuition behind this choice is that at any point in the interpretation
process, and with the same heuristic values, the same chance is given to every
plausible abstraction hypothesis in order to explain a certain observation.
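As a sketch, with the abstraction relation ⊑ stored as a mapping from each observable to the set of observables that can abstract it (the mapping below is a hypothetical fragment, not the full ECG model), K is simply:

```python
def choose_k(abstracts):
    """K = maximum number of observables abstracting any single observable."""
    return max(len(higher) for higher in abstracts.values())

# Hypothetical fragment: a raw wave may be abstracted as a P wave, a QRS
# complex or a T wave; each of those only by the normal beat observable.
abstracts = {
    "q_wave": {"q_Pw", "q_QRS", "q_Tw"},
    "q_Pw": {"q_N"},
    "q_QRS": {"q_N"},
    "q_Tw": {"q_N"},
}
K = choose_k(abstracts)  # K = 3 for this fragment
```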
In order to expand the current set of interpretations, the GET_DESCENDANTS() function relies on different reasoning modes, that is, different forms of
abduction and deduction, which are brought into play under the guidance of an
attentional mechanism. Since searching for a solution ultimately involves the selection of a matching relation, both observations and findings should be included
in the scope of this mechanism. Hence, a focus of attention can be defined to
answer the following question: which is the next observation or finding to be
processed? The answer to this question takes the form of a hypothesize-and-test
cycle: if the attention focuses on an observation, then an abstraction hypothesis explaining this observation should be generated (hypothesize); however, if
the attention focuses on a finding predicted by some hypothesis, an observation
should be sought to match such a finding (test). Thus, the interpretation problem
is solved by a reasoning strategy that progresses incrementally over time, coping with new evidence through the dynamic generation of abstraction patterns
from a finite number of abstraction grammars, and bounding the theoretical
complexity by a parsimony criterion.
To illustrate and motivate the reasoning modes implemented in building
interpretations and supporting the execution of the CONSTRUE() algorithm,
we use a simple, but complete, interpretation problem.
Example 5.1. Let Q = {qwave , qPw , qQRS , qTw , qN } and G = {Gw , GN , GTw }, where
Gw models Example 3.4, GN is described in Example 3.5, and GTw =
⟨{H, D}, {qQRS , qwave }, H, R⟩ describes the knowledge needed to conjecture a T wave
with the following rules:
H → qQRS D
    {PH := ⟨qTw , MH = ∅, CH = ∅, ΘH = ∅⟩,
     CQRS := {80ms ≤ T_Tw^b − T_QRS^e ≤ 120ms; T_Tw^e − T_QRS^b ≤ 520ms},
     AQRS := false,
     PD := ⟨qTw , MD = {mQRS }, CD = CQRS , ΘD = ∅⟩}

D → qwave
    {PD := ⟨qTw , MD = {mQRS }, CD = CQRS , ΘD = ∅⟩,
     Cwave := {T_Tw^b = T_wave^b ; T_Tw^e = T_wave^e ; max(diff(sig[mwave ])) ≤ 0.7 · max(diff(sig[mQRS ]))},
     Awave := true,
     P := ⟨qTw , MP = MD ∪ {mwave }, CP = CD ∪ Cwave , ΘP = Tw_delin(T_QRS^b , T_QRS^e , T_wave^b , T_wave^e )⟩}
This grammar hypothesizes the observation of a T wave from a wave appearing shortly after the observation of a QRS complex, requiring a significant
decrease in the maximum slope of the signal (in the constraint definition Cwave ,
the expression "max(diff(sig[m]))" stands for the maximum absolute value of the
derivative of the ECG signal between T_m^b and T_m^e ). The observation procedure of
the generated pattern is denoted as Tw_delin(), and may be any of the methods
described in the literature for the delineation of T waves, such as in [26].
In addition to the Pwave pattern generated by Gw and detailed in Example 3.4,
GN and GTw generate the following abstraction patterns:

PN = ⟨qN , APN = {mPw , mQRS , mTw } ∪ EPN = ∅, CPN , ΘPN = ∅⟩
PTw = ⟨qTw , APTw = {mwave } ∪ EPTw = {mQRS }, CQRS ∪ Cwave , Tw_delin()⟩

Finally, let O = {owave_1 = ⟨qwave , ∅, 0.300, 0.403⟩, owave_2 = ⟨qwave , ∅, 0.463,
0.549⟩, oPw = ⟨qPw , ∅, 0.300, 0.403⟩, oQRS = ⟨qQRS , ∅, 0.463, 0.549⟩} be a set of
initial observations including a P wave and a QRS complex abstracting two wave
observations located at specific time points.
Given this interpretation problem, Figure 6 shows the starting point for the
interpretation, where the root of the interpretation process is the trivial interpretation I0 , and the attention is focused on the first observation. The sequence
of reasoning steps towards the resolution of this interpretation problem will be
explained in the following subsections.
5.1. Focus of attention
The focus of attention is modeled as a stack; thus, once the focus is set
on a particular observation (or finding), any observation that was previously
under focus will not be focused on again until the reasoning process on the
current observation is finished. Algorithm 2 shows how the different reasoning
modes are invoked based on the content of the focus of attention, resulting in a
hypothesize-and-test cycle.
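The stack discipline and the resulting dispatch can be sketched as follows (the `Focus` class and the string items are illustrative, not part of the formal model):

```python
class Focus:
    """LIFO focus of attention over observations and findings."""
    def __init__(self):
        self._stack = []
    def push(self, item):
        self._stack.append(item)
    def pop(self):
        return self._stack.pop()
    def top(self):
        return self._stack[-1] if self._stack else None

def reasoning_modes(item):
    """Dispatch as in Algorithm 2: findings are tested, observations trigger
    hypothesis generation (here, items ending in '?' denote findings)."""
    if item.endswith("?"):
        return ["subsume", "predict"]       # test step
    return ["deduce", "abduce", "advance"]  # hypothesize step

focus = Focus()
focus.push("o_Pw")     # observation under focus
focus.push("m_QRS?")   # a finding deduced afterwards is processed first
```

Only when the reasoning on `m_QRS?` finishes and it is popped does the attention return to `o_Pw`.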
Lines 4-8 generate the descendants of an interpretation I when there is an
observation at the top of the stack. These descendants are the result of two
possible reasoning modes: the deduction of new findings, performed by the
Algorithm 2 Method for obtaining the descendants of an interpretation using different reasoning modes based on the content of the focus of attention.
1: function GET_DESCENDANTS(I)
2:     var focus = get_focus(I).top()
3:     var desc = ∅
4:     if is_observation(focus) then
5:         if focus = oh | ℏ ∈ I then
6:             desc = deduce(I, focus)
7:         end if
8:         desc = desc ∪ abduce(I, focus) ∪ advance(I, focus)
9:     else if is_finding(focus) then
10:        desc = subsume(I, focus) ∪ predict(I, focus)
11:    end if
12:    return desc
13: end function
DEDUCE() function, provided that the observation being focused on is an abstraction hypothesis; and the abduction of a new hypothesis explaining the
observation being focused on, performed by the ABDUCE() function. A last
descendant is obtained using the ADVANCE() function, which simply restores
the previous focus of attention by means of a POP() operation. If the focus is
then empty, ADVANCE() inserts the next observation to explain, which may be
selected by temporal order in the general case, or by some domain-dependent
saliency criterion to prioritize certain observations over others. By removing the
observation at the top of the focus of attention, the ADVANCE() function sets
aside that observation as unintelligible in the current interpretation, according
to the available knowledge.
If the top of the stack contains a finding, then Algorithm 2 obtains the
descendants of the interpretation from the SUBSUME() and PREDICT() functions (line 10). The first of these functions looks for an existing observation
satisfying the constraints on the finding focused on, while the second makes
predictions about observables that have not yet been observed. All of these reasoning modes are described separately and detailed below; we will illustrate how
the CONSTRUE() algorithm combines these in order to solve the interpretation
problem in Example 5.1.
5.2. Building an interpretation: Abduction
Algorithm 3 enables the abductive generation of new abstraction hypotheses. It is applied when the attention is focused on an observation that can
be abstracted by some abstraction pattern, producing a new observation at a
higher level of abstraction.
The result of ABDUCE() is a set of interpretations I 0 , each one adding a new
abstraction hypothesis with respect to the parent interpretation I. To generate
these hypotheses, we iterate through those grammars that can make a conjecture from the observation oi under focus (line 3). Then, for each grammar, each
production including the corresponding observable q(oi ) (line 4) initializes an
Algorithm 3 Moving forward an interpretation through abduction.
1: function ABDUCE(I, oi )
2:     var desc = ∅
3:     for all Gh = ⟨VN , VT , H, R⟩ ∈ G | q(oi ) ⊑ h do
4:         for all (U → qV ) ∈ R | q(oi ) is_a q ∧ Aq = true do
5:             PV = ⟨h, MV = {mq }, CV , ΘV ⟩
6:             ℏ = ⟨oh , PV , ↦ℏ = {mq ↦ oi }⟩
7:             Lℏ = [(U → qV )]; Bℏ = U ; Eℏ = V
8:             I′ = ⟨OI ∪ {oh }, PI ∪ {PV }, ↦I ∪ ↦ℏ ⟩
9:             O = O ∪ {oh }
10:            get_focus(I′ ).pop()
11:            get_focus(I′ ).push(oh )
12:            desc = desc ∪ {I′ }
13:        end for
14:    end for
15:    return desc
16: end function
abstraction pattern with a single finding of this observable (line 5), and a new
hypothesis is conjectured with a matching relation involving both the observation under focus and the finding (line 6). A list structure Lℏ and two additional
variables Bℏ and Eℏ are initialized to trace the sequence of productions used to
generate the findings in the abstraction pattern; these will play an important
role in subsequent reasoning steps (line 7). Finally, the new hypothesis opens a
new interpretation (lines 8-9) focused on this hypothesis (line 11).
In this way, the ABDUCE() function implements, from a single piece of evidence, the hypothesize step of the hypothesize-and-test cycle. Below we explain
the reasoning modes involved in the test step of the cycle.
Example 5.2. Let us consider the interpretation problem set out in example 5.1
and the interpretation I0 shown in Figure 6. According to Algorithm 2, the ABDUCE() function is used to move forward the interpretation, since the focus
of attention points to an observation oPw . The abstraction pattern that supports this operation is PN , and a matching relation is established with the mPw
finding. As a result, the following hypothesis is generated:

ℏ1 = ⟨oN , PN , {mPw ↦ oPw }⟩
Figure 6 shows the result of this reasoning process, in a new interpretation
called I1 . Note that the focus of attention has been moved to the newly created
hypothesis (lines 10-11 of the ABDUCE() function).
5.3. Building an interpretation: Deduction
This reasoning mode is applied when the attention is focused on an observation oh previously conjectured as part of an abstraction hypothesis ℏ (see Algorithm 4). The DEDUCE() function takes the evidence that has led to conjecture
oh and tries to extend it with new findings which can be expected, i.e., deduced,
Algorithm 4 Moving forward an interpretation through the deduction of new findings.
1: function DEDUCE(I, oh )
2:     var desc = ∅
3:     if Bℏ ≠ H then
4:         for all (X → qBℏ ) ∈ R do
5:             PBℏ = ⟨h, MBℏ = {mq }, CBℏ , ΘBℏ ⟩
6:             for all (U → q′ V ) ∈ Lℏ do
7:                 PV = ⟨h, MU ∪ {mq }, CU ∪ CV , ΘV ⟩
8:             end for
9:             ℏ = ⟨oh , PEℏ , ↦ℏ ⟩
10:            I′ = ⟨OI , PI ∪ {PEℏ }, ↦I ⟩
11:            insert(Lℏ , (X → qBℏ ), begin); Bℏ = X
12:            get_focus(I′ ).push(mq )
13:            desc = desc ∪ {I′ }
14:        end for
15:    else
16:        for all (Eℏ → qX) ∈ R do
17:            PX = ⟨h, MEℏ ∪ {mq }, CEℏ ∪ CX , ΘX ⟩
18:            ℏ = ⟨oh , PX , ↦ℏ ⟩
19:            I′ = ⟨OI , PI \ {PEℏ } ∪ {PX }, ↦I ⟩
20:            insert(Lℏ , (Eℏ → qX), end); Eℏ = X
21:            get_focus(I′ ).push(mq )
22:            desc = desc ∪ {I′ }
23:        end for
24:    end if
25:    return desc
26: end function
from the abstraction grammar Gh used to guess the observation. The key point
is that this deduction process follows an iterative procedure, as the corresponding abstraction pattern is dynamically generated from the grammar. Hence the
DEDUCE() function aims to extend a partial matching relation by providing the
next finding to be tested, as part of the test step of the hypothesize-and-test
cycle.
Since the first finding leading to conjecture oh does not necessarily appear at
the beginning of the grammar description, the corresponding abstraction pattern
will not, in general, be generated incrementally from the first production of the
grammar. Taking as a starting point the production used to conjecture oh (line 4
in Algorithm 3), the goal is to add a new finding by applying a new production
on either side, towards the beginning and the end of the grammar, using the
information in the Lℏ list. The Bℏ variable represents the non-terminal on the
left-hand side of the first production in Lℏ , while Eℏ represents the non-terminal
on the right-hand side of the last production in Lℏ . Hence, this list has the form
Lℏ = [(Bℏ → q′ V ′ ), (V ′ → q′′ V ′′ ), . . . , (V ′(n−1) → q′(n) Eℏ )]. In case Lℏ is empty,
both variables Bℏ and Eℏ represent the H non-terminal. With this information
the sequence of findings supporting the hypothesis ℏ can be updated in two
opposite directions:
• Towards the beginning of the grammar (lines 3-14): we explore the set
of observables that may occur before the first finding according to the
productions of the grammar (line 4), and a new finding is deduced for
each of these in different descendant interpretations. A new pattern PBℏ
associated with the Bℏ non-terminal is initialized with the new finding
(line 5), and by moving along the sequence of productions generating the
previous set of findings (lines 6-8) the pattern associated with the rightmost non-terminal, PEℏ , is updated with a new set of findings containing
mq . Consequently, the hypothesis and the interpretation are also updated
(lines 9 and 10), and the applied production is inserted at the beginning
of Lℏ (line 11). Finally, the newly deduced finding is focused on (line 12).
• Towards the end of the grammar (lines 15-23): for each one of the observables that may occur after the last finding, a new finding mq is deduced,
expanding the abstraction pattern associated with the new rightmost non-terminal X. After updating the hypothesis ℏ, the previous pattern PEℏ
in the resulting interpretation I′ is replaced by the new one, PX , and the
applied production is inserted at the end of Lℏ . Finally, the new finding
is focused on (line 21).
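The bookkeeping of Lℏ, Bℏ and Eℏ can be sketched with a double-ended queue over the toy grammar H → qPw D, D → qQRS E, E → qTw (the production triples `(lhs, observable, rhs)` are an assumed encoding, with `rhs = None` for the final production):

```python
from collections import deque

# Productions of the grammar G_N: lhs -> observable rhs.
R = [("H", "q_Pw", "D"), ("D", "q_QRS", "E"), ("E", "q_Tw", None)]

# Abduction from an observation of q_QRS starts in the middle of the grammar.
L = deque([("D", "q_QRS", "E")])
B, E = "D", "E"                # non-terminals at both ends of L

# Deduce towards the beginning: apply a production X -> q B at the front.
for (x, q, v) in R:
    if v == B:
        L.appendleft((x, q, v))
        B = x

# Deduce towards the end: apply a production E -> q X at the back.
for (x, q, v) in R:
    if x == E:
        L.append((x, q, v))
        E = v
```

After both steps L spans the whole grammar, B has reached the initial symbol H, and E is exhausted (None), meaning no further finding can be deduced for this hypothesis.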
Example 5.3. Let us consider the interpretation problem set out in Example 5.1
and the interpretation I1 shown in Figure 6. Remember that the grammar used
to generate the hypothesis in the focus of attention, GN , has the following form:

H → qPw D
D → qQRS E
E → qTw

In this situation, it is possible to deduce new findings from the oN hypothesis.
Following Algorithm 3 we can check that Bℏ = H and Eℏ = D, since the only
finding in the matching relation is mPw . Deduction then has to be performed
after this last finding, using the production D → qQRS E. After constraint checking, the resulting finding is as follows:

mq_{n+1} = mQRS = ⟨qQRS , ∅, T_QRS^b ∈ [0.400, 0.520], T_QRS^e ∈ [0.450, 0.660]⟩
Figure 6 illustrates the outcome of this reasoning process and the uncertainty
in the temporal limits of the predicted finding, which is now focused on in the
interpretation I2 .
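The temporal bounds of such a deduced finding follow from interval arithmetic over the grammar constraints; a sketch with purely hypothetical offsets (the actual constraints of GN are not reproduced here):

```python
def propagate(prev_end, begin_offset, end_offset):
    """Given the interval of a previous time point and (min, max) offsets
    constraining the next finding, return its begin and end intervals."""
    lo, hi = prev_end
    begin = (lo + begin_offset[0], hi + begin_offset[1])
    end = (lo + end_offset[0], hi + end_offset[1])
    return begin, end

# The P wave ends at exactly 0.403 s; the offsets below are invented for
# illustration and do not reproduce the bounds of Example 5.3.
begin, end = propagate((0.403, 0.403), (0.050, 0.150), (0.100, 0.300))
```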
5.4. Building an interpretation: Subsumption
Subsumption is performed when the attention is focused on a finding previously deduced from some abstraction grammar (see Algorithm 5). This reasoning mode avoids the generation of a new hypothesis for every piece of available
evidence if it can be explained by a previous hypothesis. The SUBSUME()
function explores the set of observations O and selects those consistent with
the constraints on the finding in the focus of attention (line 3), expanding the
matching relation of the corresponding hypothesis in different descendant interpretations (line 4). The focus of attention is then restored to its previous state
(line 5), allowing the deduction of new findings from the same hypothesis. The
SUBSUME() function clearly enforces the simplicity principle.
Algorithm 5 Moving forward an interpretation through subsumption.
1: function SUBSUME(I, mi )
2:     var desc = ∅
3:     for all oj ∈ O | mi ↦ oj do
4:         I′ = ⟨OI , PI , ↦I ∪ {mi ↦ oj }⟩
5:         get_focus(I′ ).pop(mi )
6:         desc = desc ∪ {I′ }
7:     end for
8:     return desc
9: end function
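A sketch of this matching step, with observations and findings encoded as hypothetical `(observable, begin, end)` tuples and the consistency check reduced to interval containment:

```python
def subsume(finding, observations):
    """Return the observations consistent with the finding's observable and
    temporal bounds; each match yields one descendant interpretation
    (a simplified stand-in for Algorithm 5)."""
    q, (b_lo, b_hi), (e_lo, e_hi) = finding
    return [
        obs for obs in observations
        if obs[0] == q and b_lo <= obs[1] <= b_hi and e_lo <= obs[2] <= e_hi
    ]

# Data mirroring Example 5.4: the QRS observation fits the deduced bounds.
observations = [("q_QRS", 0.463, 0.549), ("q_wave", 0.300, 0.403)]
finding = ("q_QRS", (0.400, 0.520), (0.450, 0.660))
matches = subsume(finding, observations)
```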
Example 5.4. Let us consider the interpretation I2 shown in Figure 6. If we
apply the subsumption procedure, it is possible to set a matching relation between oQRS and mQRS , since this observation satisfies all the constraints on
the finding. The result is shown in the interpretation I3 . Note that the uncertainty in the end time of the oN hypothesis is reduced after the matching,
with T_N^e ∈ [0.631, 1.030]. Following this, the attention focuses once again on
this hypothesis, and a new deduction operation may therefore be performed.
5.5. Building an interpretation: Prediction
This reasoning mode is also performed when the attention is focused on a
finding deduced from some abstraction grammar (see Algorithm 6). In this case,
if a finding previously deduced has not yet been observed, it will be predicted.
The goal of the PREDICT() function is to conjecture a new observation to
match the focused finding. For this, the abstraction model is explored and those
grammars whose hypothesized observable is more specific than the predicted observable are selected (line 3). Then, a new pattern is initialized with no evidence
supporting it, and a new abstraction hypothesis with an empty matching relation is generated (lines 4-5). Finally, the attention focuses on the observation
being guessed (lines 9-10) to enable the DEDUCE() function to start a new test
step at a lower abstraction level. Since Lℏ is initialized as an empty list (line 6),
Bℏ and Eℏ point to the initial symbol of the grammar, and the corresponding
abstraction pattern will be generated only towards the end of the grammar.
Example 5.5. Starting from the I3 interpretation shown in Figure 6, the next
step we can take to move forward the interpretation is a new deduction on the
oN hypothesis, generating a new finding mTw and leading to the I4 interpretation. Since there is no available observation of the T wave, a matching with
this new finding mTw cannot be made by the SUBSUME() function, and thus the
Algorithm 6 Moving forward an interpretation through the prediction of non-available evidence.
1: function PREDICT(I, mi )
2:     var desc = ∅
3:     for all Gh = ⟨VN , VT , H, R⟩ ∈ G | h is_a q(mi ) do
4:         PH = ⟨h, MH = ∅, CH = ∅, ΘH = ∅⟩
5:         ℏ = ⟨oh , PH , ↦ℏ = ∅⟩
6:         Lℏ = ∅; Bℏ = Eℏ = H
7:         I′ = ⟨OI ∪ {oh }, PI ∪ {PH }, ↦I ∪ {mi ↦ oh }⟩
8:         O = O ∪ {oh }
9:         get_focus(I′ ).pop(mi )
10:        get_focus(I′ ).push(oh )
11:        desc = desc ∪ {I′ }
12:    end for
13:    return desc
14: end function
only option for moving forward this interpretation is through prediction. Following the PREDICT() function, the GTw grammar can be selected, and a new
observation oTw can be conjectured, generating the I5 interpretation.
From I5 we can continue the deduction on the oTw hypothesis. If we apply the DEDUCE() function we obtain the mQRS′ finding from the environment,
shown in Figure 6 as I6 . To move on, we can apply the SUBSUME() function,
establishing the matching relation {mQRS′ ↦ oQRS }. This leads to the I7 interpretation, in which the uncertainty on the oTw observation is reduced; however,
the evidence for the PTw pattern is not yet complete. A new DEDUCE() step is
necessary, which deduces the necessary mwave finding in the I8 interpretation.
This finding is also absent, so another PREDICT() step is required. In this last
step, the Pwave pattern can be applied to observe the deviation in the raw ECG
signal, generating the owave_3 observation and completing the necessary evidence
for the oTw observation and thus also for oN . Constraint solving assigns the
values of t_Tw^b , t_Tw^e and t_N^e , so the result is a cover of the initial interpretation
problem in which all the hypotheses have a necessary and sufficient set of evidence. This solution is depicted in I9 .
It is worth noting that in this example the global matching relation ↦I is
not injective, since mQRS ↦ oQRS and mQRS′ ↦ oQRS . Also note that each
interpretation only generates one descendant; in a more complex scenario, however, the possibilities are numerous, and the responsibility of finding the proper
sequence of reasoning steps lies with the CONSTRUE() algorithm.
5.6. Improving the efficiency of interpretation through saliency
Starting a hypothesize-and-test cycle for every single sample is not feasible
for most time series interpretation problems. Still, many problems may
benefit from certain saliency features that can guide the focus of attention to
a limited number of temporal fragments that are easily interpretable. Thus, the
[Figure 6 depicts the sequence of interpretations I0 , . . . , I9 , linked by the reasoning steps ABDUCE(I0 , oPw ), DEDUCE(I1 , oN ), SUBSUME(I2 , mQRS ), DEDUCE(I3 , oN ), PREDICT(I4 , mTw ), DEDUCE(I5 , oTw ), SUBSUME(I6 , mQRS′ ), DEDUCE(I7 , oTw ) and PREDICT(I8 , mwave ).]
Figure 6: Sequence of reasoning steps for solving a simple interpretation problem.
interpretation of the whole time series can pivot on a reduced number of initial
observations, thereby speeding up the interpretation process.
A saliency-based attentional strategy can be devised from the definition of
abstraction patterns using a subset of their constraints as a coarse filter to
identify a set of plausible observations. For example, in the ECG interpretation
problem the most common strategy is to begin the analysis by considering a
reduced set of time points showing a significant slope in the signal, consistent
with the presence of QRS complexes [47]. This small set of evidence allows us
to focus the interpretation on the promising signal segments, in the same way
that a cardiologist focuses on the prominent peaks to start the analysis [46]. It
should be noted that this strategy is primarily concerned with the behavior of
the focus of attention, and that it does not discard the remaining, non-salient
observations, as these are included later in the interpretation by means of the
subsumption and prediction reasoning modes.
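A coarse slope-based saliency filter of this kind can be sketched as follows (the threshold and sampling frequency are hypothetical values, not taken from [47]):

```python
def salient_points(signal, fs, threshold):
    """Return the time points (s) where the absolute first difference of the
    signal exceeds `threshold`, as coarse candidates for QRS complexes."""
    return [
        i / fs
        for i in range(1, len(signal))
        if abs(signal[i] - signal[i - 1]) > threshold
    ]

# Flat signal with one sharp deflection around sample 5.
ecg = [0.0, 0.0, 0.1, 0.0, 0.0, 1.2, -0.4, 0.0, 0.0]
peaks = salient_points(ecg, fs=250, threshold=0.5)
```

Only the samples around the deflection pass the filter, so a hypothesize-and-test cycle would be started just there rather than at every sample.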
6. Some strengths of the framework
In this section we provide several practical examples which illustrate some of
the strengths of the proposed interpretation framework and its ability to tackle
typical weaknesses of strategies based solely on a classification approach.
6.1. Avoiding a casuistry-based interpretation
In the time domain, classification-based recognition of multiple concurrent processes usually leads to a casuistry-based proliferation of classes, in
which a new class is needed for each possible superposition of processes
in order to properly identify all situations. It is common to use a representation
in the transform domain, where certain regular processes are easily separable,
although at the expense of a cumbersome representation of the temporal information [30]. In contrast, in the present framework, the hypothesize-and-test
cycle aims to conjecture those hypotheses that best explain the available evidence, including simultaneous hypotheses in a natural way as long as these are
not mutually exclusive.
ECG interpretation provides some interesting examples of this type of problem. Atrial fibrillation, a common heart arrhythmia caused by the independent
and erratic contractions of the atrial muscle fibers, is characterized by an irregularly irregular heart rhythm [46]. Most of the classification techniques for the
identification of atrial fibrillation are based on the analysis of the time interval
between consecutive beats, and attempt to detect this irregularity [34]. These
techniques offer good results in those situations in which atrial fibrillation is
the only anomaly, but they fail to properly identify complex scenarios which go
beyond the distinction between atrial fibrillation and normal rhythm. In the
strip shown in Figure 7, obtained during a pilot study for the home follow-up of
patients with cardiac diseases [38], such a classifier would wrongly identify this
segment as an atrial fibrillation episode, since the observed rhythm variability
is consistent with the description of this arrhythmia. In contrast, the present
interpretation framework properly explains the first five beats as a sinus bradycardia, compatible with the presence of a premature ectopic beat in the second
position, followed by a trigeminy pattern during six beats, and finally another
ectopic beat with a morphology change. The reason to choose this interpretation, despite being more complex than the atrial fibrillation explanation, is
that it is able to abstract some of the small P waves before the QRS complexes,
increasing the interpretation coverage.
Figure 7: False atrial fibrillation episode. [Source: Mobiguide Project [38], private recording]
6.2. Coping with ignorance
Most of the classifiers solve a separability problem among classes, either by
learning from a training set or by eliciting prior knowledge, and these are implicitly based on the closed-world assumption, i.e., every new instance to be
classified is assigned to one of the predefined classes. Such classifiers may additionally include a 'reject' option for those instances that could be misclassified
because they appear too close to the classification boundaries [7, 17]. This reject
option is added as another possible answer expressing doubt. However, such
classifiers fail to classify new instances of unknown classes, since they cannot
express ignorance. An approach to this problem can be found in novelty detection proposals [35], which can detect when a new instance does not fit any of
the predefined classes as it substantially differs from those instances available
during training. Still, these are limited to a common feature representation for
every instance, hindering the identification of what is unintelligible from the
available knowledge.
The present framework provides an expression of ignorance as a common
result of the interpretation problem. As long as the abstraction model is incomplete, the non-coverage of some piece of evidence by any interpretation is
an expression of partial ignorance. In the extreme case, the trivial interpretation I0 may be a proper solution for an interpretation problem, expressing total
ignorance. Furthermore, abduction naturally includes the notion of ignorance
in the reasoning process, since any single piece of evidence can be sufficient to
guess an interpretation, and the hypothesize-and-test cycle can be understood
as a process of incremental addition of evidence against an initial state of ignorance, while being able to provide an interpretation at any time based on the
available evidence.
As an example, consider the interpretation problem illustrated in Figure 8.
Let the initial evidence be the set of QRS annotations obtained by a state-of-the-art detection algorithm [47]. In this short strip, the eighth and ninth annotations
correspond to false positives caused by the presence of noise. A classification-based strategy processes these two annotations as true QRS complexes, and the
monotone nature of the reasoning prevents their possible refutation, probably
leading to beat misclassification and false arrhythmia detection, with errors
propagating onwards to the end of the processing. In contrast, the present
framework provides a single normal rhythm as the best interpretation, which
explains all but the two aforementioned annotations, which are ignored and
considered unintelligible in the available model. It is also worth noting the
ability of this framework to integrate the results of an available classifier as a
type of constraint specification in the interpretation cycle.
Figure 8: Unintelligible evidence due to noise. [Source: MIT-BIH arrhythmia DB, recording:
112, between 13:46.200 and 13:56.700]
6.3. Looking for missing evidence
The application of the classification paradigm to pattern detection also entails the potential risk of providing false negative results. In the worst case,
a false negative result may be interpreted by a decision maker as evidence of
absence, leading to interpretation errors with their subsequent costs, or in the
best case as an absence of evidence caused by the lack of a proper detection
instrument.
Even though abduction is fallible, and false negative results persist, the
hypothesize-and-test cycle involves a prediction mechanism that points to missing evidence that is expected and, moreover, estimates when it should appear.
Both the bottom-up and top-down processing performed in this cycle reinforce
confidence in the interpretation, since the semantics of any conclusion is widened
according to its explanatory power.
As an example, consider the interpretation problem illustrated in Figure 9.
The initial evidence is again a set of QRS annotations obtained by a state-of-the-art detection algorithm [47]. Note that the eighth beat has not been
annotated, due to a sudden decrease in the signal amplitude. This error can be
amended in the hypothesize-and-test cycle, since the normal rhythm hypothesis
that abstracts the first seven QRS annotations predicts the following QRS to
be in the position of the missing annotation, and the PREDICT() procedure can
look for this (e.g., checking an alternative set of constraints).
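The prediction step described above can be illustrated with a minimal sketch: the median RR interval of the beats already abstracted by a normal-rhythm hypothesis points to where the missing QRS annotation should appear. This is our own simplified rendering of the general idea, not the actual PREDICT() procedure; the function name, tolerance and beat times are assumptions.

```python
# Sketch: a rhythm hypothesis predicts the time of the next (possibly missing)
# QRS annotation from the median RR interval of the abstracted beats.
from statistics import median

def predict_next_qrs(annotations, tolerance=0.15):
    """Given QRS times (in seconds) abstracted by a normal-rhythm hypothesis,
    return the predicted time of the next beat and a search window around it."""
    rr = [b - a for a, b in zip(annotations, annotations[1:])]
    expected_rr = median(rr)
    predicted = annotations[-1] + expected_rr
    return predicted, (predicted - tolerance * expected_rr,
                       predicted + tolerance * expected_rr)

# Seven annotated beats at roughly 0.8 s intervals; the eighth is missing.
beats = [0.0, 0.8, 1.61, 2.4, 3.19, 4.0, 4.8]
pred, window = predict_next_qrs(beats)
```

A detector restricted to the returned window (possibly with relaxed amplitude constraints, as suggested above) can then recover the missed annotation.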
The capability of abduction to ignore or look for new evidence has been
tested with a simplified version of the present framework in the QRS detection
problem [43], leading to a statistically significant improvement over a state-of-the-art algorithm.
Figure 9: Missing evidence that may be discovered by prediction. [Source: MIT-BIH normal
sinus rhythm DB, recording: 18184, between 09:12:45.000 and 09:12:55.500]
6.4. Interpretability of the reasoning process and the results
The interpretability of a reasoning formalism, defined as the ability to understand and evaluate its conclusions, is an essential feature for achieving an
adequate confidence in decision making [31]. In this sense, there are a number
of classification methods with good interpretability; however, the methods that
typically offer the best performance belong to the so-called black box approaches.
The present interpretation framework is able to provide a justification of any
result in relation to the available model. Given any solution or partial solution
of an interpretation problem, the search path back to I0 gives full details of all
the reasoning steps taken to this end, and any abstraction hypothesis can be
traced back to the information supporting it.
This interpretation framework is also able to answer the question of why a
certain hypothesis has been rejected or neglected at any reasoning step. This is
done by exploring the branches outside the path between I0 and the solution.
Since the K exploration parameter within the CONSTRUE() algorithm has been
chosen as the maximum number of hypotheses that may explain a given observable, it is possible to reproduce the reasoning steps taken in the conjecture of any
abstraction hypothesis, and to check why this did not succeed (non-satisfaction
of pattern constraints, lower coverage, etc.). This can be useful in building and
refining the knowledge base.
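The justification mechanism described above, walking the path from a solution back to the trivial interpretation I0, can be sketched with parent pointers on the nodes of the search tree. The class and step names below are illustrative, not taken from the actual CONSTRUE() implementation.

```python
# Sketch: each interpretation records its parent and the reasoning step that
# produced it, so a solution can be justified by tracing back to I0.

class Interpretation:
    def __init__(self, name, parent=None, step=None):
        self.name = name        # e.g. "I0", "I1", ...
        self.parent = parent    # interpretation this one was derived from
        self.step = step        # reasoning step that produced it

def justification(node):
    """Return the list of reasoning steps from I0 down to `node`."""
    steps = []
    while node.parent is not None:
        steps.append((node.step, node.name))
        node = node.parent
    return list(reversed(steps))

i0 = Interpretation("I0")
i1 = Interpretation("I1", parent=i0, step="abduce(normal_rhythm)")
i2 = Interpretation("I2", parent=i1, step="subsume(qrs_3)")
```

Exploring the sibling branches off this path (the up-to-K hypotheses conjectured per observable) answers the complementary question of why an alternative hypothesis was rejected.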
7. Experimental evaluation: beat labeling and arrhythmia detection
The interpretation of the electrocardiogram has served both as a challenge
and as an inspiration for the AI community almost since its inception, due to a
number of factors that can be summarized as: (1) the complexity of the physiological processes underlying what is observed; and (2) the absence of an accurate
model of the heart and the hardly formalizable knowledge that constitutes the
experience of the cardiologist. There are numerous problems falling within the
scope of ECG interpretation, the most relevant being heartbeat labeling [29].
We have tested the present framework by abductively identifying and measuring
a set of qualitative morphological and rhythm attributes for each heartbeat, and
using a rule-based classifier to assign a label to clusters of similar heartbeats [44].
It is noteworthy that an explicit representation of knowledge has been adopted,
namely the kind of knowledge that can be found in an ECG handbook. Table 1
reproduces the performance comparison between this approach and the most
Table 1: VEB and SVEB classification performance of the abductive approach and comparison
with the most relevant automatic and assisted methods of the state-of-the-art
Dataset              Method                       VEB Se  VEB P+  SVEB Se  SVEB P+
MIT-BIH Arrhythmia   Teijeiro et al. - Automatic  92.82   92.23   85.10    84.51
DS1+DS2              Llamedo et al. - Assisted    90±1    97±0    89±2     88±3
                     Kiranyaz et al. - Assisted   93.9    90.6    60.3     63.5
                     Ince et al. - Assisted       84.6    87.4    63.5     53.7
                     Llamedo et al. - Automatic   80±2    82±3    76±2     43±2
MIT-BIH Arrhythmia   Teijeiro et al. - Automatic  94.63   96.79   87.17    83.98
DS2                  Llamedo et al. - Assisted    93±1    97±1    92±1     90±3
                     Kiranyaz et al. - Assisted   95.0    89.5    64.6     62.1
                     Chazal et al. - Assisted     93.4    97.0    94.0     62.5
                     Zhang et al. - Automatic     85.48   92.75   79.06    35.98
                     Llamedo et al. - Automatic   89±1    87±1    79±2     46±2
                     Chazal et al. - Automatic    77.7    81.9    75.9     38.5
relevant automatic and assisted approaches of the state-of-the-art, using sensitivity and positive predictivity of ventricular and supraventricular ectopic beat
classes.
As can be seen, this method significantly outperforms all other automatic
approaches in the state of the art, and even improves on most of the assisted approaches that require expert aid. The most remarkable improvement concerns
the classification of supraventricular ectopic beats, which are usually hard to
distinguish using only morphological features. The abductive interpretation in
multiple abstraction levels, including a rhythm description of the signal, is what
enables a more precise classification of each individual heartbeat.
Furthermore, the abductive interpretation approach has been used for arrhythmia detection in short single-lead ECG records, focusing on atrial fibrillation [45]. The interpretation results are combined with machine learning
techniques to obtain an arrhythmia classifier, achieving the best score in the
2017 Physionet/CinC Challenge dataset and outperforming some of the most
popular techniques such as deep learning and random forests [8].
8. Discussion
A new model-based framework for time series interpretation is proposed.
This framework relies on some basic assumptions: (i) interpretation of the behavior of a system from the set of available observations is a sort of conjecturing,
and as such follows the logic of abduction; (ii) the interpretation task involves
both bottom-up and top-down processing of information along a set of abstraction levels; (iii) at the lower levels of abstraction, the interpretation task is a
form of precompiled knowledge-based pattern recognition; (iv) the interpretation task involves both the representation of time and reasoning about time and
along time.
Model-based representation in the present framework is based on the notion
of abstraction pattern, which defines an abstraction relation between observables
and provides the knowledge and methods to conjecture new observations from
previous ones. Let us deepen in both the backward and forward logical meaning
of an abstraction pattern, following a reasoning similar to that of [4]:
• Backward meaning. From the backward reading of an abstraction pattern P , a hypothesis h is a possible abstraction of m1 , . . . , mn , provided
that the constraints in CP hold. An abstraction pattern satisfies the compositionality principle of abductive reasoning, and hence an abstraction
hypothesis can be conjectured from a single piece of evidence, and new
pieces of evidence can be added later [16]. On the other hand, if there
are multiple ways of observing h by means of multiple patterns, and their
respective constraints are inconsistent with the evidence, we do not conclude ¬h, interpreted as failure to prove h; we will only conclude ¬h in all
those interpretations conjecturing an observation of a different h′, where
h and h′ are mutually exclusive.
• Forward meaning. An abductive observation is built upon an archetypical
representation of a hypothesis h, creating an observation as an instance of
h by estimating, from the available evidence, its attribute values A and
its temporal location T b and T e by means of an observation procedure
ΘP . From a forward reading, assuming h is true, there is an observation
for each observable of the set m1 , . . . , mn such that the constraints in
CP hold. However, the estimated nature of abstraction does not usually
allow us to infer, from the observation of h, the same observations of
m1 , . . . , mn that have been abstracted into h. We must presume instead
that assuming h is true entails the occurrence of an observation for each
observable of m1 , . . . , mn , without necessarily entailing its attribute values
and its temporal location.
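The two readings above can be rendered schematically as follows. The field names, the constraint predicate and the example pattern are illustrative assumptions, not the paper's formal definitions.

```python
# Schematic rendering of the backward (abductive) and forward (predictive)
# readings of an abstraction pattern P.

class AbstractionPattern:
    def __init__(self, hypothesis, observables, constraints):
        self.hypothesis = hypothesis    # observable h the pattern abstracts to
        self.observables = observables  # m1, ..., mn
        self.constraints = constraints  # predicate over a partial evidence map

    def abduce(self, evidence):
        """Backward reading: conjecture h from (possibly partial) evidence,
        provided the constraints hold on what has been observed so far."""
        return self.hypothesis if self.constraints(evidence) else None

    def predict(self):
        """Forward reading: assuming h, an observation is entailed for each
        observable, without committing to attribute values or timing."""
        return [(m, "expected") for m in self.observables]

p = AbstractionPattern("normal_beat", ["P_wave", "QRS", "T_wave"],
                       lambda ev: all(v > 0 for v in ev.values()))
```

Note that `abduce` accepts a single piece of evidence (compositionality), while `predict` only asserts the existence of an observation per observable, mirroring the estimated nature of abstraction discussed above.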
Both the forward and the backward meanings of an abstraction pattern support the incremental building of an interpretation in the present framework.
Thus, what initially was defined as a set covering problem over a time series fragment (a completely intractable problem as soon as it moves beyond a toy example) can be feasibly solved if it is properly structured into a set of abstraction levels, on which four reasoning modes (abduction, deduction, subsumption and
prediction) can make a more efficient search of the best explanation under a
parsimony criterion. Moreover, this incremental reasoning primarily follows the
time direction, since the available knowledge is usually compiled in the form of
a set of processes that can be expected to be found in a certain sequence, which
underscores the anticipatory information contained in the evidence.
An abstraction model, built on a set of abstraction patterns, establishes a
causal responsibility for the behavior observed in a complex system [24]. This
responsibility is expressed in the language of processes: a process is said to be
observable if it is assumed that it causes a recognizable trace in the physical
quantity to be interpreted. This notion of causality is behind perception, i.e.,
concerned with the explanation of sensory data, in contrast with the notion of
causality in diagnosis, concerned with the explanation of abnormality [10].
Representing and reasoning about context is a relevant issue in model-based
diagnosis [4, 10, 33, 40]. A contextual observation is nothing more than another observation that need not be explained by a diagnosis. In most of the
literature, the distinction between these two roles must be defined beforehand. Several other works enable the same observation to play different roles
in different causal patterns, thus providing some general operations for expressing common changes made by the context in a diagnostic pattern [25, 32]. In
the present interpretation framework, an observation can either be part of the
evidence to be explained in a certain abstraction pattern, or can be part of the
environment in another abstraction pattern. Both types of observation play a
part in the hypothesize-and-test cycle, the only difference being that observations
of the environment of an abstraction pattern are not expected to be abstracted
by this pattern. Hence, observations of the environment are naturally included
in the deduction, subsumption and prediction modes of reasoning.
An important limitation of the present framework is its knowledge-intensive
nature, requiring a non-trivial elicitation of expert knowledge. It is worth exploring different possibilities for the inclusion of machine learning strategies,
both for the adaptation and the definition of the knowledge base. A first approach may address the automatic adjustment of the initial constraints among
recurrent findings in abstraction grammars. In this manner, for example, temporal constraints between consecutive heartbeats in a normal rhythm abstraction
grammar could be adapted to the characteristics of the subject whose ECG is
being interpreted, allowing the identification of possible deviations from normality with greater sensitivity. On the other hand, the discovery of new abstraction
patterns and abstraction grammars by data mining methods appears as a key
challenge. In this regard, the CONSTRUE() algorithm should be extended by
designing an INDUCE() procedure aimed at conjecturing new observables after
an inductive process. To this end, new default assumptions should be made in
order to define those grammar structures that should rule the inductive process.
These grammar structures may lead to the discovery of new morphologies or rhythms
not previously included in the knowledge base.
The proposed framework formulates an interpretation problem as an abduction problem with constraints, targeted at finding a set of hypotheses covering
all the observations while satisfying a set of constraints on their attribute and
temporal values. Thus, consistency is the only criterion to evaluate the plausibility of a hypothesis, resulting in a true or false value, and any evoked hypothesis
(no matter how unusual it is) for which inconsistent evidence cannot be found is
considered plausible and, consequently, explored in the interpretation cycle. Even though this simple approach has provided remarkable results,
it can be expected that the inclusion of a hypothesis evaluation scheme, typically based on probability [33, 37] or possibility [13, 32] theories, will allow us
to better discriminate between plausible and implausible hypotheses, leading to
better explanations with fewer computational requirements.
The expressiveness of the present framework should also be enhanced to
support the representation of the absence of some piece of evidence, in the form
of negation, so that ¬q represents the absence of q. The exclusion relation is a
first approach to handling the notion of absence in the hypothesize-and-test
cycle, since the occurrence of a process is negated by the concurrent occurrence
of any of the processes related to it by the exclusion relation. On the other hand,
an inhibitory relation can enable us to represent a certain process preventing
another from occurring under some temporal constraints, providing a method to
insert the prediction of the absence of some observable in the hypothesize-and-test cycle. Furthermore, other forms of interaction between processes, possibly
modifying the respective initial patterns of evidence, should be modeled.
Further efforts should be made to improve the efficiency of the interpretation
process. To this end, two main strategies are currently being explored. On the
one hand, the model structure is exploited to identify necessary and sufficient
conditions for every hypothesis to be conjectured; the necessary conditions avoid
the expansion of the hypotheses that can be ruled out because they are inconsistent with observations, while sufficient conditions avoid the construction of
redundant interpretations [9]. A different strategy entails additional restrictions
in the amount of computer memory and time needed to run the algorithm, resulting in a selective pruning of the node expansion while sacrificing optimality;
this strategy is similar to the one used in the K-Beam algorithm [14].
The CONSTRUE() algorithm is based on the assumption that all the evidence to be explained is available at the beginning of the interpretation task. A
new version of the algorithm should be provided to cope with a wide range of
problems, where the interpretation must be updated as new evidence becomes
available over time. Examples of such problems are continuous biosignal monitoring or plan execution monitoring [3]. At the emergence of a new piece of evidence, two reasoning modes may come into play triggered by the CONSTRUE()
algorithm: a new explanatory hypothesis can be conjectured by means of the
ABDUCE() procedure, or the evidence can be incorporated in an existing hypothesis by means of the SUBSUME() procedure. In this way, the incorporation
of new evidence over time is seamlessly integrated into the hypothesize-and-test cycle. Furthermore, to properly address these interpretation scenarios, the
heuristics used to guide the search must be updated to account for the timing
of the interpretation process, which will lead to the definition of a covering ratio
until time t, and a complexity until time t.
Implementation
With the aim of supporting reproducible research, the full source code of the
algorithms presented in this paper has been published under an Open Source
License1 , along with a knowledge base for the interpretation of the ECG signal
strips of all examples in this paper.
1 https://github.com/citiususc/construe
Acknowledgments
This work was supported by the Spanish Ministry of Economy and Competitiveness under project TIN2014-55183-R. T. Teijeiro was funded by an FPU
grant from the Spanish Ministry of Education (MEC) (ref. AP2010-1012).
References
[1] A.V. Aho, M.S. Lam, R. Sethi, and J.D. Ullman. Compilers: Principles,
Techniques and Tools. Pearson Education, Inc., 2006.
[2] S. Barro, R. Marı́n, J. Mira, and A. Patón. A model and a language for
the fuzzy representation and handling of time. Fuzzy Sets and Systems,
61:153–175, 1994.
[3] R. Barták, R. A. Morris, and K. B. Venable. An Introduction to Constraint-Based Temporal Reasoning. Synthesis Lectures on Artificial Intelligence
and Machine Learning, 8(1):1–121, 2014.
[4] V. Brusoni, L. Console, P. Terenziani, and D. Theseider Dupré. A spectrum
of definitions for temporal model-based diagnosis. Artificial Intelligence,
102(1):39–79, 1998.
[5] S. Chakravarty and Y. Shahar. CAPSUL: A constraint-based specification
of repeating patterns in time-oriented data. Annals of Mathematics and
Artificial Intelligence, 30:3–22, 2000.
[6] E. Charniak. Motivation analysis, abductive unification and nonmonotonic
equality. Artificial Intelligence, 34(3):275–295, 1989.
[7] C.K. Chow. On optimum recognition error and reject tradeoff. IEEE
Transaction on Information Theory, 16(1):41–46, 1970.
[8] G. Clifford, C. Liu, B. Moody, I. Silva, Q. Li, A. Johnson, and R. Mark.
AF Classification from a Short Single Lead ECG Recording: the PhysioNet
Computing in Cardiology Challenge. In Proceedings of the 2017 Computing
in Cardiology Conference (CinC), volume 47, 2017.
[9] L. Console, L. Portinale, and D. Theseider Dupré. Using compiled knowledge to guide and focus abductive diagnosis. IEEE Transactions on Knowledge and Data Engineering, 8(5):690–706, 1996.
[10] L. Console and P. Torasso. A spectrum of logical definitions of model-based
diagnosis. Computational Intelligence, 3(7):133–141, 1991.
[11] Working Party CSE. Recommendations for measurement standards in
quantitative electrocardiography. European Heart Journal, 6(10):815–825,
1985.
[12] R. Dechter. Constraint Processing. Morgan Kaufmann Publishers, 2003.
[13] D. Dubois and H. Prade. Fuzzy relation equations and causal reasoning.
Special Issue on "Equations and Relations on Ordered Structures: Mathematical Aspects and Applications" (A. Di Nola, W. Pedrycz, S. Sessa,
eds.), Fuzzy Sets and Systems, 75:119–134, 1995.
[14] S. Edelkamp and S. Schrödl. Heuristic Search: Theory and Applications.
Morgan Kaufmann, 2011.
[15] D. Ferrucci, A. Levas, S. Bagchi, D. Gondek, and E.T. Mueller. Watson:
Beyond Jeopardy. Artificial Intelligence, 199–200:93–105, 2012.
[16] P. Flach. Abduction and induction: Syllogistic and inferential perspectives. In Abductive and Inductive Reasoning Workshop Notes, pages 31–35.
University of Bristol, 1996.
[17] G. Fumera, F. Roli, and G. Giacinto. Reject option with multiple thresholds. Pattern Recognition, (33):2099–2101, 2000.
[18] A. L. Goldberger et al. PhysioBank, PhysioToolkit, and PhysioNet: Components of a New Research Resource for Complex Physiologic Signals. Circulation, 101(23):215–220, June 2000.
[19] I.J. Haimowitz and I.S. Kohane. Automated Trend Detection with Alternate Temporal Hypotheses. In Proceedings of the 13th International Joint
Conference of Artificial Intelligence, volume 1, pages 146–151, 1993.
[20] I.J. Haimowitz, P.P. Le, and I.S. Kohane. Clinical monitoring using
regression-based trend templates. Artificial Intelligence in Medicine,
7(6):473–496, 1995.
[21] C. Hartshorne et al. Collected papers of Charles Sanders Peirce. Harvard
University Press, 1931.
[22] J.R. Hobbs, M. Stickel, and P. Martin. Interpretation as abduction. Artificial Intelligence, 63:69–142, 1993.
[23] J. Hopcroft, R. Motwani, and J. Ullman. Introduction to automata theory,
languages and computation. Addison-Wesley, 2001.
[24] J.R. Josephson and S.G. Josephson. Abductive inference. Computation,
philosophy, technology. Cambridge University Press, 1994.
[25] J. M. Juárez, M. Campos, J. Palma, and R. Marı́n. Computing contextdependent temporal diagnosis in complex domains. Expert Systems with
Applications, 35(3):991–1010, 2008.
[26] P. Laguna, R. Jané, and P. Caminal. Automatic detection of wave boundaries in multilead ECG signals: validation with the CSE database. Computers and Biomedical Research, 27:45–60, 1994.
[27] C. Larizza, G. Bernuzzi, and M. Stefanelli. A general framework for building patient monitoring systems. In Proceedings of the 5th Conference on
Artificial intelligence in Medicine, pages 91–102, 1995.
[28] D. Litman and J. Allen. A plan recognition model for subdialogues in
conversation. Cognitive Science, 11:163–200, 1987.
[29] E. J. S. Luz, W. R. Schwartz, G. Cámara-Chávez, and D. Menotti. ECG-based Heartbeat Classification for Arrhythmia Detection: A Survey. Computer Methods and Programs in Biomedicine, 2016.
[30] F. Mörchen. Time series feature extraction for data mining using DWT and
DFT. Technical Report no. 33, Department of Mathematics and Computer
Science, University of Marburg, 2003.
[31] D. Nauck and R. Kruse. Obtaining interpretable fuzzy classification rules
from medical data. Artificial Intelligence in Medicine, 16(2):149–169, 1999.
[32] J. Palma, J. M. Juárez, M. Campos, and R. Marı́n. Fuzzy theory approach
for temporal model-based diagnosis: An application to medical domains.
Artificial Intelligence in Medicine, 38(2):197, 2006.
[33] Y. Peng and J.A. Reggia. Abductive inference models for diagnostic
problem-solving. Springer-Verlag, 1990.
[34] A. Petrenas, V. Marozas, and L. Sörnmo. Low-complexity detection of
atrial fibrillation in continuous long-term monitoring. Computers in Biology
and Medicine, 65:184–191, 2015.
[35] M.A.F. Pimentel, D.A. Clifton, L. Clifton, and L. Tarassenko. A review of
novelty detection. Signal Processing, (99):215–249, 2014.
[36] D. Poole. A methodology for using a default and abductive reasoning
system. International Journal of Intelligent Systems, 5(5):521–548, 1990.
[37] D. Poole. Learning, Bayesian Probability, Graphical Models, and Abduction. In Abduction and Induction: Essays on their Relation and Integration,
pages 153–168. Springer Netherlands, 2000.
[38] L. Sacchi, E. Parimbelli, S. Panzarasa, N. Viani, E. Rizzo, C. Napolitano,
R. Ioana Budasu, and S. Quaglini. Combining Decision Support System-Generated Recommendations with Interactive Guideline Visualization for
Better Informed Decisions. In Artificial Intelligence in Medicine, pages
337–341. Springer International Publishing, 2015.
[39] Y. Shahar. A framework for knowledge-based temporal abstraction. Artificial intelligence, 90(1–2):79–133, 1997.
[40] Y. Shahar. Dynamic temporal interpretation contexts for temporal abstraction. Annals of Mathematics and Artificial Intelligence, 22(1–2):159–192,
1998.
[41] Y. Shahar. Knowledge-based temporal interpolation. Journal of experimental and theoretical artificial intelligence, 11:123–144, 1999.
[42] Y. Shahar and M.A. Musen. Knowledge-based temporal abstraction in
clinical domains. Artificial Intelligence in Medicine, 8(3):267–298, 1996.
[43] T. Teijeiro, P. Félix, and J. Presedo. Using Temporal Abduction for Biosignal Interpretation: A Case Study on QRS Detection. In 2014 IEEE International Conference on Healthcare Informatics, pages 334–339, 2014.
[44] T. Teijeiro, P. Félix, J. Presedo, and D. Castro. Heartbeat classification
using abstract features from the abductive interpretation of the ECG. IEEE
Journal of Biomedical and Health Informatics, 2016.
[45] T. Teijeiro, C.A. Garcı́a, D. Castro, and P. Félix. Arrhythmia Classification
from the Abductive Interpretation of Short Single-Lead ECG Records. In
Proceedings of the 2017 Computing in Cardiology Conference (CinC), volume 47, 2017.
[46] Galen S. Wagner. Marriott’s Practical Electrocardiography. Wolters Kluwer
Health/Lippincott Williams & Wilkins, 11 edition, 2008.
[47] W. Zong, G.B. Moody, and D. Jiang. A robust open-source algorithm to
detect onset and duration of QRS complexes. In Computers in Cardiology,
pages 737–740, 2003.
Characterization and recognition of proper tagged
probe interval graphs
arXiv:1607.02922v1 [math.CO] 11 Jul 2016
Sourav Chakraborty∗, Shamik Ghosh†‡, Sanchita Paul§ and Malay Sen¶
July 12, 2016
Abstract
Interval graphs were used in the study of genomics by the famous molecular biologist Benzer. Later on
probe interval graphs were introduced by Zhang as a generalization of interval graphs for the study of
cosmid contig mapping of DNA.
A tagged probe interval graph (briefly, TPIG) is motivated by similar applications to genomics, where
the set of vertices is partitioned into two sets, namely, probes and nonprobes and there is an interval
on the real line corresponding to each vertex. The graph has an edge between two probe vertices if
their corresponding intervals intersect, has an edge between a probe vertex and a nonprobe vertex if the
interval corresponding to a nonprobe vertex contains at least one end point of the interval corresponding
to a probe vertex, and the set of nonprobe vertices is an independent set. This class of graphs has
been defined nearly two decades ago, but to date there is no known recognition algorithm for it.
In this paper, we consider a natural subclass of TPIG, namely, the class of proper tagged probe interval graphs (in short, PTPIG). We present a characterization and a linear time recognition algorithm for
PTPIG. To obtain this characterization theorem we introduce a new concept called canonical sequence
for proper interval graphs, which, we believe, is of independent interest in the study of proper interval
graphs. Also, to obtain the recognition algorithm for PTPIG, we introduce and solve a variation of
the consecutive 1's problem, namely, the oriented consecutive 1's problem, and some variations of the PQ-tree algorithm. We also discuss the interrelations between the classes of PTPIG and TPIG with probe interval
graphs and probe proper interval graphs.
Keywords: Interval graph, proper interval graph, probe interval graph, probe proper interval graph,
tagged probe interval graph, consecutive 1’s property, PQ-tree algorithm.
∗ Chennai Mathematical Institute, Chennai, India. [email protected]
† Department of Mathematics, Jadavpur University, Kolkata - 700 032, India. [email protected]
‡ (Corresponding author)
§ Department of Mathematics, Jadavpur University, Kolkata - 700 032, India. [email protected]
¶ Department of Mathematics, North Bengal University, District - Darjeeling, West Bengal, Pin - 734 430, India. [email protected]
1 Introduction
A graph G = (V, E) is an interval graph if one can map each vertex into an interval on the real line
so that any two vertices are adjacent if and only if their corresponding intervals intersect. Such
a mapping of vertices into an interval on the real line is called an interval representation of G.
The study of interval graphs was motivated by the study of the famous molecular biologist Benzer
[2] in 1959. Since then interval graphs have been widely used in molecular biology and genetics,
particularly for DNA sequencing. Different variations of interval graphs have been used to model
different scenarios arising in the area of DNA sequencing. Literature on the applications of different
variations of interval graphs can be found in [5, 13, 22].
In an attempt to aid a problem called cosmid contig mapping, a particular component of the physical
mapping of DNA, in 1998 Sheng, Wang and Zhang [34] defined a new class of graphs called tagged
probe interval graphs (briefly, TPIG) which is a generalization of interval graphs. Since then one of
the main open problems in this area has been: “Given a graph G, recognize whether G is a tagged
probe interval graph.”
Definition 1.1. A graph G = (V, E) is a tagged probe interval graph if the vertex set V can be
partitioned into two disjoint sets P (called “probe vertices”) and N (called “nonprobe vertices”)
and one can map each vertex into an interval on the real line (vertex x ∈ V mapped to Ix = [`x , rx ])
such that all the following conditions hold:
1. N is an independent set in G, i.e., there is no edge between nonprobe vertices.
2. If x, y ∈ P then there is an edge between x and y if and only if Ix ∩ Iy ≠ ∅, or in other words
the mapping is an interval representation of the subgraph of G induced by P .
3. If x ∈ P and y ∈ N then there is an edge between x and y if and only if the interval corresponding to the nonprobe vertex contains at least one end point of the interval corresponding
to the probe vertex, i.e., either `x ∈ Iy or rx ∈ Iy .
We call the collection {Ix | x ∈ V } a TPIG representation of G. If the partition of the vertex set
V into probe and nonprobe vertices is given, then we denote the graph as G = (P, N, E).
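The three conditions of Definition 1.1 can be checked directly for a candidate family of intervals. The sketch below is our own encoding (vertices as integers, intervals as pairs, edges as 2-element sets), not taken from the paper.

```python
# Check whether a family of intervals I is a TPIG representation of G = (P, N, E).
# I maps each vertex to a pair (l, r) with l <= r.

def is_tpig_representation(P, N, E, I):
    edges = {frozenset(e) for e in E}
    # 1. N must be an independent set.
    if any(frozenset((x, y)) in edges for x in N for y in N if x != y):
        return False
    # 2. Probe-probe edges must match interval intersection.
    for x in P:
        for y in P:
            if x < y:
                meet = max(I[x][0], I[y][0]) <= min(I[x][1], I[y][1])
                if (frozenset((x, y)) in edges) != meet:
                    return False
    # 3. Probe-nonprobe edges: I_y must contain an endpoint of I_x.
    for x in P:
        for y in N:
            covers = (I[y][0] <= I[x][0] <= I[y][1]) or \
                     (I[y][0] <= I[x][1] <= I[y][1])
            if (frozenset((x, y)) in edges) != covers:
                return False
    return True
```

For example, with P = {1, 2}, N = {3} and I = {1: (0, 2), 2: (1, 3), 3: (1.5, 2.5)}, the edge set must be exactly {{1, 2}, {1, 3}}: interval I3 contains the right endpoint of I1 but no endpoint of I2.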
Problem 1.2. Given a graph G = (P, N, E), give a linear time algorithm for checking if G is a
tagged probe interval graph.
Tagged probe interval graphs were defined nearly two decades ago and since then this problem
has not seen much progress. Until this paper there was no known algorithm for tagged probe
interval graphs or any natural subclass of tagged probe interval graphs, except for probe proper
interval graphs (cf. Subsection 1.2.3).
A natural and well studied subclass of interval graphs are the proper interval graphs. A proper
interval graph G is an interval graph in which there is an interval representation of G such that
no interval contains another properly. Such an interval representation is called a proper interval
representation of G. Proper interval graphs form an extremely rich class of graphs and we have a
number of different characterizations of proper interval graphs.
In this paper, we study a natural special case of tagged probe interval graphs which we call proper
tagged probe interval graph (in short, PTPIG). The only extra condition that a PTPIG should
satisfy over TPIG is that the mapping of the vertices into intervals on the real line that gives a
TPIG representation of G should be a proper interval representation of the subgraph of G
induced by P .
In this paper we present a linear time (linear in |V | + |E|) recognition algorithm for PTPIG.
The backbone of our recognition algorithm is the characterization of proper tagged probe interval
graphs that we obtain in Theorem 3.4. To obtain this characterization theorem we introduce a
new concept called “canonical sequence” for proper interval graphs, which we believe would
be of independent interest in the study of proper interval graphs. Also, to obtain the recognition
algorithm for PTPIG, we introduce and solve a variation of the consecutive 1's problem, namely
the oriented consecutive 1's problem.
1.1
Notations
If a graph G is a PTPIG (or TPIG), then we will assume that the vertex set is partitioned
into two sets P (for probe vertices) and N (for nonprobe vertices). To indicate that the partition
is known to us, we will sometimes denote G by G = (P, N, E), where E is the edge set. We will
denote by GP the subgraph of G that is induced by the vertex set P . We will assume that there
are p probe vertices {u1 , . . . , up } and q nonprobe vertices {w1 , . . . , wq }. To be consistent in our
notation we will use i or i0 or i1 , i2 , . . . as indices for probe vertices and use j or j 0 or j1 , j2 , . . . as
indices for nonprobe vertices.
Let G = (V, E) be a graph and v ∈ V . Then the set N [v] = {u ∈ V | u is adjacent to v}∪{v} is the
closed neighborhood of v in G. A graph G is called reduced if no two vertices have the same closed
neighbourhood. If the graph is not reduced then we define an equivalence relation on the vertex
set V such that vi and vj are equivalent if and only if vi and vj have the same closed neighborhood
in V . Each equivalence class under this relation is called a block of G. For any vertex v ∈ V we
denote the equivalence class containing v by B(v). Let the blocks of G be B1G , B2G , . . . , BtG (or
sometimes just B1 , B2 , . . . , Bt ). So the collection of blocks is a partition of V . The reduced graph
of G (denoted by G̃ = (Ṽ , Ẽ)) is the graph obtained by merging all the vertices that are in the
same equivalence class.
If M is a (0, 1)-matrix, then we say M satisfies the consecutive 1’s property if in each row and
column, 1’s appear consecutively [17, 24]. We will denote by A(G) the augmented adjacency matrix
of the graph G, in which all the diagonal entries are 1 and non-diagonal elements are same as the
adjacency matrix of G.
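For concreteness, here is a small sketch (helper names are ours, not the paper's) that builds the augmented adjacency matrix A(G) for a given vertex ordering and tests the consecutive 1's property by brute force.

```python
# A sketch (names ours): A(G) has 1's on the diagonal and the adjacency
# matrix elsewhere; the consecutive 1's property asks that in each row
# and each column the 1's occupy consecutive positions.
def augmented_adjacency(n, edges):
    A = [[1 if i == j else 0 for j in range(n)] for i in range(n)]
    for u, v in edges:
        A[u][v] = A[v][u] = 1
    return A

def has_consecutive_ones(A):
    lines = A + [list(col) for col in zip(*A)]   # rows, then columns
    for line in lines:
        ones = [k for k, x in enumerate(line) if x == 1]
        if ones and ones[-1] - ones[0] + 1 != len(ones):
            return False
    return True
```

On a path the property holds for the natural ordering; for K1,3 with the center first it fails, consistent with the claw-free characterization of Table 1.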
1.2
Background Materials
1.2.1
PQ-trees
In the past few decades many variations of interval graphs have been studied, mostly in the context of
modeling different scenarios arising from molecular biology and DNA sequencing. Understanding the
structure and properties of these classes of graphs and designing efficient recognition algorithms are
the central problems in this field. Many times these studies have led to nice combinatorial problems
and the development of important data structures.
For example, the original linear time recognition algorithm for interval graphs by Booth and
Lueker [4] in 1976 is based on their complex PQ tree data structure (also see [3]). Habib et
al. [14] in 2000 showed how to solve the problem more simply using lexicographic breadth-first
search, based on the fact that a graph is an interval graph if and only if it is chordal and its
complement is a comparability graph. A similar approach using a 6-sweep LexBFS algorithm is
described in Corneil, Olariu and Stewart [8] in 2010.
In this paper we will use the data structure of PQ-trees quite extensively. PQ-trees are not
only used to check whether a given matrix satisfies the consecutive 1's property; they also store all
the permutations of the rows (or columns) under which the matrix satisfies the consecutive 1's
property. We define a generalization of the problem of
checking consecutive 1’s property to a problem called Oriented-consecutive 1’s problem and show
how the PQ-tree representation can be used to solve this problem also. Details are available in
Section 4.
1. G is a proper interval graph.
2. G is a unit interval graph.
3. G is claw-free, i.e., G does not contain K1,3 as an induced subgraph.
4. For all v ∈ V , the elements of N [v] = {u ∈ V | uv ∈ E} ∪ {v} are consecutive for some
   ordering of V (closed neighborhood condition). [27, 28, 29, 30, 12]
5. There is an ordering v1 , v2 , · · · , vn of V such that G has a proper interval representation
   {Ivi = [ai , bi ] | i = 1, 2, · · · , n} where a1 < a2 < · · · < an and b1 < b2 < · · · < bn . [6, 7, 8, 12]
6. There is an ordering of V such that the augmented adjacency matrix A(G) of G satisfies the
   consecutive 1's property.
7. A straight enumeration of G is a linear ordering of the blocks (vertices having the same closed
   neighborhood) of G such that, for every block, the block and its neighboring blocks are
   consecutive in the ordering. G has a straight enumeration, which is unique up to reversal if
   G is connected. [9, 15, 16, 25]
8. The reduced graph G̃ is obtained from G by merging vertices having the same closed
   neighborhood. G(n, r) is a graph with n vertices x1 , x2 , . . . , xn such that xi is adjacent to xj
   if and only if 0 < |i − j| 6 r, where r is a positive integer. G̃ is an induced subgraph of
   G(n, r) for some positive integers n, r with n > r. [19]

Table 1: Characterizations of proper interval graphs: equivalent conditions on an interval graph G = (V, E).
1.2.2
Proper Interval Graphs
For proper interval graphs we have a number of equivalent characterizations. Recall that a proper
interval graph G is an interval graph in which there is an interval representation of G such that
no interval contains another properly and such an interval representation is called a proper interval
representation of G. It is important to note that a proper interval graph G may have an interval
representation which is not proper. Linear-time recognition algorithms for proper interval graphs
are obtained in [9, 10, 15, 16]. A unit interval graph is an interval graph in which there is an
interval representation of G such that all intervals have the same length. Interestingly, these two
concepts are equivalent. Another equivalence is that an interval graph is a proper interval graph if
and only if it does not contain K1,3 as an induced subgraph. Apart from these, there are several
other characterizations of proper interval graphs (see Table 1). Among them we repeatedly use the
following equivalent conditions in the rest of the paper:
Theorem 1.3. Let G = (V, E) be an interval graph. Then the following are equivalent:
1. G is a proper interval graph.
2. There is an ordering of V such that for all v ∈ V , elements of N [v] are consecutive (the closed
neighborhood condition).
3. There is an ordering of V such that the augmented adjacency matrix A(G) of G satisfies the
consecutive 1’s property.
4. There is an ordering {v1 , v2 , . . . , vn } of V such that G has a proper interval representation
{Ivi = [ai , bi ] | i = 1, 2, . . . , n} where ai 6= bj for all i, j ∈ {1, 2, . . . , n}, a1 < a2 < · · · < an
and b1 < b2 < · · · < bn .
Remark 1.4. We note that in a proper interval graph G = (V, E), the ordering of V that satisfies
any one of the conditions (2), (3) and (4) in the above theorem also satisfies the other conditions
among them.
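Condition (2) of Theorem 1.3 is straightforward to test for one fixed ordering of the vertices; a minimal sketch (the function name is ours):

```python
# A sketch (names ours): test the closed neighborhood condition of
# Theorem 1.3(2) for a given ordering of the vertices.
def closed_neighborhood_condition(order, edges):
    pos = {v: k for k, v in enumerate(order)}
    adj = {v: {v} for v in order}          # closed neighborhoods N[v]
    for u, w in edges:
        adj[u].add(w)
        adj[w].add(u)
    for v in order:
        ps = sorted(pos[u] for u in adj[v])
        if ps[-1] - ps[0] + 1 != len(ps):  # N[v] is not consecutive
            return False
    return True
```

Any ordering of a path satisfies the condition; no ordering of K1,3 does, as the check below illustrates for one ordering.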
1.2.3
Relation between PTPIG and other variants
A very similar definition to the TPIG is the class of graphs called probe interval graphs. A graph
G = (V, E) is a probe interval graph (in short, PIG) if the vertex set V can be partitioned into two
disjoint sets probe vertices P and nonprobe vertices N and one can map each vertex into an interval
on the real line (vertex x ∈ V mapped to Ix ) such that there is an edge between two vertices x
and y if at least one of them is in P and their corresponding intervals Ix and Iy intersect. When
the interval representation is proper (i.e., no interval is properly contained in another interval) the
graph is called a probe proper interval graph (briefly, PPIG). Clearly PPIG is a subclass of PIG. The
concept of PIG was introduced by Zhang [37] in 1994 in an attempt to aid a problem called cosmid
contig mapping. Recognition algorithms for PIG can be found in [18, 20, 21]. A characterization
of probe interval graphs in terms of adjacency matrix was obtained in [11]. For more information
about PIG, see [5, 13, 26, 22, 23]. Only recently, Nussbaum [25] presented the first recognition
algorithms for PPIG.
While the definition of PIG is very similar to that of TPIG, the two classes of graphs are not
comparable [34]. But PPIG is a proper subclass of PTPIG. In fact, since in an interval representation of
a PPIG no interval properly contains another, there is an edge between a probe vertex and a
nonprobe vertex if and only if an end point of the interval corresponding to the probe vertex
belongs to the interval corresponding to the nonprobe vertex.
On the other hand, K1,3 with a single nonprobe at the center cannot be a PPIG for otherwise it
would be a proper interval graph (as any probe interval graph with a single nonprobe vertex is an
interval graph). But it is a PTPIG by choosing three disjoint intervals for probe vertices and an
interval containing all of them corresponding to the nonprobe vertex. As K1,3 is an interval graph,
it is an example of PIG and PTPIG, but not a PPIG.
Similarly, C4 with a single nonprobe vertex is a PTPIG with an interval representation {[1, 3], [2, 5], [4, 6]}
for probes and [3, 4] for a nonprobe, but this is not a PIG (for otherwise it would be an interval
graph). Now consider the graph G1 in Figure 1. That G1 is a PIG and TPIG follows from the
interval representation described in Figure 2. But G1 is not a PTPIG as the subgraph induced by
probe vertices is K1,3 which is not a proper interval graph.
1.3
Organization of the paper
The first section contains definitions, preliminaries and examples. Also we fix some notations in
Subsection 1.1 which we will use for the rest of the paper. In Section 2, we introduce the concepts
[Figure: the graph G1 , with probe vertices p1 , p2 , p3 , p4 and nonprobe vertices n1 , n2 , n3 .]
Figure 1: The graph G1 .
[Figure: intervals on the real line for p1 , p2 , p3 , p4 , n1 , n2 , n3 .]
Figure 2: Interval representation of the graph G1 in Figure 1.
of a canonical sequence of a proper interval graph and study the structural properties of the
sequence. Section 3 contains the details of the graph class PTPIG. In this section we provide the
main characterization theorem for PTPIG (cf. Theorem 3.4) with illustrative examples and more
structural properties of the canonical sequence in the context of a PTPIG which are used to develop
the recognition algorithm presented in the next section. In Section 4, a linear time recognition
algorithm for PTPIG is obtained gradually, from special cases to the most general case, in three
subsections. We put some concluding remarks and the bibliography at the end of the paper.
2
Canonical Sequence of Proper Interval Graphs
Let G be a proper interval graph. Then there is an ordering of V that satisfies conditions (2), (3)
and (4) of Theorem 1.3. Henceforth we call such an ordering a natural or canonical ordering of
V . But a canonical ordering is not unique: a proper interval graph may have more than one
canonical ordering of its vertices. Interestingly, it follows from Corollary 2.5 of [9] (also see [25])
that the canonical ordering is unique up to reversal for a connected reduced proper interval graph.
Definition 2.1. Let G = (V, E) be a proper interval graph. Let {v1 , v2 , . . . , vn } be a canonical
ordering of the set V with interval representation {Ivi = [ai , bi ] | i = 1, 2, . . . , n}, where ai 6= bj
for all i, j ∈ {1, 2, . . . , n}, a1 < a2 < · · · < an and b1 < b2 < · · · < bn . Now we combine all the ai
and bi (i = 1, 2, . . . , n) into a single increasing sequence, which we call the interval canonical sequence
with respect to the canonical ordering of vertices of G; it is denoted by IG .
Now if we replace ai or bi by i for all i = 1, 2, . . . , n in IG , then we obtain a sequence of integers
belonging to {1, 2, . . . , n} each occurring twice. We call such a sequence a canonical sequence of
G with respect to the canonical ordering of vertices of G and is denoted by SG . Moreover, if we
replace i by vi for all i = 1, 2, . . . , n in SG , then the resulting sequence is called a vertex canonical
sequence of G (corresponding to the canonical sequence SG ) and is denoted by VG .
Note that SG and its corresponding VG and IG can all be obtained uniquely from each other. Thus
by abuse of notations, sometimes we will use the term canonical sequence to mean any of these.
                 v1     v2     v3      v4      v5      v6       v7       v8
               [2, 4] [3, 9] [6, 12] [7, 14] [8, 17] [10, 20] [16, 24] [22, 28]
[2, 4]   v1      1      1      0       0       0       0        0        0
[3, 9]   v2      1      1      1       1       1       0        0        0
[6, 12]  v3      0      1      1       1       1       1        0        0
[7, 14]  v4      0      1      1       1       1       1        0        0
[8, 17]  v5      0      1      1       1       1       1        1        0
[10, 20] v6      0      0      1       1       1       1        1        0
[16, 24] v7      0      0      0       0       1       1        1        1
[22, 28] v8      0      0      0       0       0       0        1        1

Table 2: The augmented adjacency matrix A(G) of the graph G in Example 2.2.
2.1
Example of Canonical Sequence
Example 2.2. Consider the proper interval graph G = (V, E) whose augmented adjacency matrix
A(G), along with a proper interval representation, is given in Table 2. Note that the vertices of G are
arranged in a canonical ordering, so A(G) satisfies the consecutive 1's property. Let [ai , bi ] be
the interval corresponding to the vertex vi for i = 1, 2, . . . , 8. Then
a1 = 2, a2 = 3, a3 = 6, a4 = 7, a5 = 8, a6 = 10, a7 = 16, a8 = 22,
b1 = 4, b2 = 9, b3 = 12, b4 = 14, b5 = 17, b6 = 20, b7 = 24, b8 = 28.
Then the interval canonical sequence combining ai and bi is given by
IG = (a1 , a2 , b1 , a3 , a4 , a5 , b2 , a6 , b3 , b4 , a7 , b5 , b6 , a8 , b7 , b8 ).
Therefore the canonical sequence and the vertex canonical sequence with respect to the given
canonical vertex ordering are
SG = (1 2 1 3 4 5 2 6 3 4 7 5 6 8 7 8) and VG = (v1 v2 v1 v3 v4 v5 v2 v6 v3 v4 v7 v5 v6 v8 v7 v8 ).
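The computation in Example 2.2 can be reproduced mechanically; the following sketch (the function name is ours) merges the interval endpoints in increasing order and relabels ai and bi by i, yielding IG implicitly and SG explicitly.

```python
# A sketch (names ours): given intervals in canonical order, merge the
# endpoints in increasing order (I_G) and relabel a_i, b_i by i to obtain
# the canonical sequence S_G.
def canonical_sequence(intervals):
    endpoints = []
    for i, (a, b) in enumerate(intervals, start=1):
        endpoints.append((a, i))
        endpoints.append((b, i))
    endpoints.sort()                 # the interval canonical sequence I_G
    return [i for _, i in endpoints]

# The intervals of Example 2.2 reproduce S_G:
S = canonical_sequence([(2, 4), (3, 9), (6, 12), (7, 14),
                        (8, 17), (10, 20), (16, 24), (22, 28)])
# S == [1, 2, 1, 3, 4, 5, 2, 6, 3, 4, 7, 5, 6, 8, 7, 8]
```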
2.2
Structure of the Canonical Sequence for Proper Interval Graphs
If a graph G is a connected reduced proper interval graph, then the following lemma states that the
canonical sequence for G is unique up to reversal.
Lemma 2.3. Let G = (V, E) be a proper interval graph and V = {v1 , v2 , . . . , vn } be a canonical
ordering of vertices of G. Then the canonical sequence SG is independent of the proper interval
representation that satisfies the given canonical ordering. Moreover SG is unique up to reversal for
connected reduced proper interval graphs.
     v1 v2 v3 v4 v5 v6 v7 v8
v1    1  1  0  0  0  0  0  0
v2    1  1  1  1  1  0  0  0
v3    0  1  1  1  1  1  0  0
v4    0  1  1  1  1  1  0  0
v5    0  1  1  1  1  1  1  0
v6    0  0  1  1  1  1  1  0
v7    0  0  0  0  1  1  1  1
v8    0  0  0  0  0  0  1  1

[The stair, not reproducible here, runs between the 1's and the 0's to the right of the principal diagonal.]

Table 3: The matrix A(G) of the graph G in Example 2.2 with its stair partition.
Proof. Let {Ivi = [ai , bi ] | i = 1, 2, . . . , n} and {Jvi = [ci , di ] | i = 1, 2, . . . , n} be two proper interval
representations of G that satisfy the given canonical ordering. Now for any i < j, aj < bi if and
only if vi is adjacent to vj if and only if cj < di . Thus the canonical sequence SG is independent
of proper interval graph representations. Now since the canonical ordering is unique up to reversal
for a connected reduced proper interval graph, the canonical sequence SG is unique up to reversal
for connected reduced proper interval graphs.
Now there is an alternative way to get the canonical sequence directly from the augmented adjacency
matrix. Let G = (V, E) be a proper interval graph with V = {vi | i = 1, 2, . . . , n} and A(G) be the
augmented adjacency matrix of G with consecutive 1’s property. We partition positions of A(G)
into two sets (L, U ) by drawing a polygonal path from the upper left corner to the lower right
corner such that the set L [resp. U ] is closed under leftward or downward [respectively, rightward
or upward] movement (called a stair partition [1, 31, 32]) and U contains precisely all the zeros
right to the principal diagonal of A(G) (see Table 3). This is possible due to the consecutive 1’s
property of A(G). Now we obtain a sequence of positive integers belonging to {1, 2, . . . , n}, each
occurs exactly twice, by writing the row or column numbers as they appear along the stair. We call
this sequence the stair sequence of A(G) (see Table 4) and note that it is the same as the canonical
sequence of G with respect to the given canonical ordering of vertices of G.
Proposition 2.4. Let G = (V, E) be a proper interval graph with a canonical ordering V =
{v1 , v2 , . . . , vn } of vertices of G. Let A(G) be the augmented adjacency matrix of G arranging
vertices in the same order as in the canonical ordering. Then the canonical sequence SG of G is
the same as the stair sequence of A(G).
Proof. We show that the stair sequence is a canonical sequence for some proper interval representation and so the proof follows by the uniqueness mentioned in Lemma 2.3. Given the
[Table 4 shows A(G) with the row and column numbers written along the stair; reading them off gives the sequence in the caption.]

Table 4: The stair sequence (1 2 1 3 4 5 2 6 3 4 7 5 6 8 7 8) of A(G) of the graph G in Example 2.2.
matrix A(G), a proper interval representation of G is obtained as follows. Let ai = i and
bi = U (i) + 1 − 1/i, where U (i) = max{j | j ≥ i and vi vj = 1 in A(G)}, for each i = 1, 2, . . . , n.
Then {Ivi = [ai , bi ] | i = 1, 2, . . . , n} is a proper interval representation of G [9, 15]. If U (1) > 1,
to make all the end points distinct, we slightly increase the value of b1 (which is the only integer
valued right end point and is equal to aU (1) ) so that it is still less than its nearest end point which
is greater than it. Now we get a proper interval representation of G that satisfies condition 4 of
Theorem 1.3. Then the canonical sequence coincides with the stair sequence for this proper interval
representation of G since, for i < j, aj = j < bi if and only if vi vj = 1 if and only if the column number
j appears before the row number i in the stair sequence.
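The construction in the proof can be sketched as follows (helper names are ours); exact rational arithmetic keeps the end points bi = U (i) + 1 − 1/i free of rounding issues.

```python
# A sketch of the construction above (names ours): from A(G) with the
# consecutive 1's property, set a_i = i and b_i = U(i) + 1 - 1/i, then
# merge the endpoints to read off the stair/canonical sequence.
from fractions import Fraction

def stair_sequence(A):
    n = len(A)
    endpoints = []
    for i in range(1, n + 1):
        U = max(j + 1 for j in range(n) if A[i - 1][j] == 1)   # U(i)
        # The middle tag breaks the only possible tie b_1 = a_{U(1)} in
        # favor of the left end point, mimicking the slight increase of b_1.
        endpoints.append((Fraction(i), 0, i))
        endpoints.append((U + 1 - Fraction(1, i), 1, i))
    endpoints.sort()
    return [i for _, _, i in endpoints]
```

Applied to the matrix of Example 2.2 this recovers the stair sequence (1 2 1 3 4 5 2 6 3 4 7 5 6 8 7 8).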
Corollary 2.5. Let G = (V, E) be a connected proper interval graph. Then SG is unique up to
reversal.
Proof. By Lemma 2.3, the result is true if G is reduced. Suppose G is not reduced and let
G̃ = (Ṽ , Ẽ) be the reduced graph of G, having vertices ṽ1 , . . . , ṽt corresponding to the blocks
B1 , . . . , Bt of G. Now G̃ has a unique canonical ordering of vertices up to reversal. Consider any
of these orderings.
When blocks of G are arranged according to this order in its augmented adjacency matrix A(G),
we have same rows (and hence same columns) for vertices in the same block. Thus permutation of
vertices within the same block does not change the matrix. Let bi = |Bi | for each i = 1, 2, . . . , t.
Then, considering the stair sequences of G and G̃, it is clear that SG is obtained uniquely from SG̃
by replacing each i by the subsequence

(b1 + · · · + bi−1 + 1, b1 + · · · + bi−1 + 2, . . . , b1 + · · · + bi−1 + bi )
irrespective of the permutations of vertices within a block.
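The replacement step used in this proof can be sketched as follows (names ours): each symbol i of SG̃ is expanded into the run of vertex numbers of the i-th block.

```python
# A sketch (names ours) of the expansion step: each symbol i of the reduced
# canonical sequence becomes the run of vertex numbers of the i-th block,
# whose sizes are b_1, ..., b_t.
from itertools import accumulate

def expand_canonical_sequence(S_reduced, block_sizes):
    offsets = [0] + list(accumulate(block_sizes))   # prefix sums of b_j
    out = []
    for i in S_reduced:                  # i is a 1-based block index
        lo = offsets[i - 1] + 1
        out.extend(range(lo, lo + block_sizes[i - 1]))
    return out
```

For instance, with two blocks of sizes 2 and 1, the reduced sequence (1 2 1 2) expands to (1 2 3 1 2 3).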
Figure 3: The graph Gb and its TPIG representation [34]
Remark 2.6. Let G be a connected proper interval graph which is not reduced and let G̃ be the
reduced graph of G. Then the graph G̃ has a unique (up to reversal) canonical ordering of vertices,
say, b1 , . . . , bt (corresponding to the blocks B1 , . . . , Bt ), as it is connected and reduced. Now the
canonical orderings of the vertices of G are obtained from this ordering (and its reversal) by all
possible permutations of the vertices of G within each block. In all such cases SG remains the same
up to reversal.
3
Structure of PTPIG
Let us recall the definition of proper tagged probe interval graph.
Definition 3.1. A tagged probe interval graph G = (P, N, E) is a proper tagged probe interval graph
(briefly PTPIG) if G has a TPIG representation {Ix | x ∈ P ∪ N } such that {Ip | p ∈ P } is a proper
interval representation of GP . We call such an interval representation a PTPIG representation of
G.
It is interesting to note that there are examples of TPIGs G for which GP is a proper interval
graph but G is not a PTPIG. For example, the graph Gb (see Figure 3) in [34] is a TPIG in which
(Gb )P consists of a path of length 4 along with 2 isolated vertices, which is a proper interval graph.
But Gb has no TPIG representation with a proper interval representation of (Gb )P .
Now let us consider a graph G = (V, E), in general, with an independent set N and P = V r N
such that the subgraph GP of G induced by P is a proper interval graph. Let us order the vertices
of P in a canonical ordering. Now the adjacency matrix of G looks like the following:
          P            N
  P     A(P )      A(P, N )
  N   A(P, N )T       0
Note that the (augmented) adjacency matrix A(P ) of GP satisfies the consecutive 1’s property and
the P × N submatrix A(P, N ) of the adjacency matrix of G represents edges between probe vertices
and nonprobe vertices. In the following lemma we obtain a necessary condition for a PTPIG.
Lemma 3.2. Let G = (P, N, E) be a PTPIG. Then for any canonical ordering of the vertices
belonging to P , no column of A(P, N ) can have more than two consecutive stretches of 1's.
Proof. Let us prove by contradiction. Consider a canonical ordering of vertices belonging to P , say,
{u1 , u2 , . . . , um }. Let wj be a vertex in N such that in the matrix A(P, N ) the column corresponding
to wj has at least three consecutive stretches of 1’s. That is, there are five vertices in P , say
ui1 , ui2 , ui3 , ui4 and ui5 (with i1 , i2 , i3 , i4 , i5 ∈ {1, 2, . . . , m}) such that i1 < i2 < i3 < i4 < i5 and
ui1 , ui3 and ui5 are neighbors of wj while ui2 and ui4 are not neighbors of wj . Now let us prove its
impossibility. We prove it case by case.
Let the interval corresponding to the vertex uik be Iuik = [`k , rk ] for k = 1, 2, 3, 4, 5 in a PTPIG
representation. Now by Theorem 1.3, we have `1 < `2 < `3 < `4 < `5 and r1 < r2 < r3 < r4 < r5 .
Since G is a PTPIG, either `k ∈ Iwj or rk ∈ Iwj for each k = 1, 3, 5.
Case 1 (`1 , `5 ∈ Iwj ): In this case, the left end point of Iut belongs to Iwj for every t with
i1 6 t 6 i5 . In particular the left end points of Iui2 and Iui4 are in Iwj , i.e., ui2 and ui4 are
neighbors of wj , which is a contradiction.
Case 2 (r1 , r5 ∈ Iwj ): In this case, the right end point of Iut belongs to Iwj for every t with
i1 6 t 6 i5 , and we reach a contradiction just like in the previous case.
Case 3 (`1 , r5 ∈ Iwj but r1 , `5 ∉ Iwj ): Let Iwj = [`wj , rwj ]. Since `1 ∈ Iwj , we have `wj 6 `1 ,
and since r5 ∈ Iwj but `5 ∉ Iwj , we have `5 < `wj . Thus `5 < `wj 6 `1 , which contradicts
`1 < `5 .
Case 4 (r1 , `5 ∈ Iwj ): If `3 ∈ Iwj , then the left end point of Iut belongs to Iwj for all
t ∈ {i3 , . . . , i5 }, and this would mean that `4 ∈ Iwj , which is a contradiction. Similarly, if
r3 ∈ Iwj , then the right end point of Iut belongs to Iwj for all t ∈ {i1 , . . . , i3 }, and then
r2 ∈ Iwj , which also gives a contradiction.
Note that all the cases are taken care of, and thus no column of A(P, N ) can have more than
two consecutive stretches of 1's.
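The necessary condition of Lemma 3.2 is easy to test column by column; a minimal sketch (the function name is ours):

```python
# A sketch (names ours): Lemma 3.2 requires that every column of A(P, N)
# contains at most two maximal runs of consecutive 1's.
def at_most_two_stretches(column):
    runs, prev = 0, 0
    for x in column:
        if x == 1 and prev == 0:     # a new run of 1's starts here
            runs += 1
        prev = x
    return runs <= 2
```

A column such as (0 1 1 0 1 0) passes, while (1 0 1 0 1) has three stretches and fails.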
Unfortunately the condition in the above lemma is not sufficient, as the following example shows. For
convenience, we say an interval Ip = [a, b] contains strongly an interval In = [c, d] if a < c 6 d < b,
where p ∈ P and n ∈ N .1

1 In [36], Sheng et al. used the term “contains properly” in this case. Here we consider a different term in order to
avoid confusion with the definition of proper interval graph. Note that if a 6 c 6 d < b or a < c 6 d 6 b, then also
Ip contains In properly, but not strongly.
A(P ):                            A(P, N ):
     p1 p2 p3 p4 p5 p6                 n1 n2
p1    1  1  0  0  0  0            p1    0  0
p2    1  1  1  1  0  0            p2    1  0
p3    0  1  1  1  0  0            p3    0  0
p4    0  1  1  1  1  0            p4    1  1
p5    0  0  0  1  1  1            p5    1  1
p6    0  0  0  0  1  1            p6    0  1

Table 5: The matrix A(P ) (left) and A(P, N ) (right) of the graph G in Example 3.3.
Example 3.3. Consider the graph G = (V, E) with an independent set N = {n1 , n2 } and P =
V r N = {p1 , p2 , . . . , p6 }, where the matrices A(P ) and A(P, N ) are given in Table 5. We note
that A(P ) satisfies the consecutive 1's property, so GP is a proper interval graph by Theorem 1.3.
Also note that GP is connected and reduced. Thus the given ordering and its reversal are the
only canonical orderings of vertices of GP by Lemma 2.3. Suppose G is a PTPIG with an interval
representation {Ix | x ∈ V }, where Ipi = [ai , bi ] and In1 = [c, d] such that a1 < a2 < · · · < a6 and
b1 < b2 < · · · < b6 .
Since p2 n1 = 1, we have either a2 ∈ [c, d] or b2 ∈ [c, d]. Let a2 ∈ [c, d]. Now a4 ∈ [c, d] implies
a3 ∈ [c, d]. But p3 n1 = 0, which is a contradiction. Thus b4 ∈ [c, d] and a4 ∉ [c, d]. But then we
have a4 < c 6 a2 as a4 < b4 , which is again a contradiction. Again b2 , b4 ∈ [c, d] would imply
b3 ∈ [c, d] and consequently p3 n1 = 1. Thus we must have b2 , a4 ∈ [c, d] and [a3 , b3 ] contains [c, d]
strongly, as a2 < a3 < a4 and b2 < b3 < b4 . Again p5 n1 = 1 implies [c, d] ∩ {a5 , b5 } 6= ∅. But
p3 p5 = 0, which implies [a3 , b3 ] ∩ [a5 , b5 ] = ∅. Thus a3 < b3 < a5 < b5 as a3 < a5 . But then
[c, d] ∩ {a5 , b5 } = ∅ (as [c, d] $ [a3 , b3 ]), which is a contradiction. A similar contradiction would arise
if one considers the reverse ordering of vertices of GP . Therefore G is not a PTPIG, though each
column of A(P, N ) has at most two consecutive stretches of 1's.
The following is a characterization theorem for a PTPIG. For convenience, henceforth, a continuous
stretch (subsequence) in a canonical sequence will be called a substring.
Theorem 3.4. Let G = (V, E) be a graph with an independent set N and P = V r N such that GP ,
the subgraph induced by P , is a proper interval graph. Then G is a proper tagged probe interval graph
with probes P and nonprobes N if and only if there is a canonical ordering of the vertices belonging to
P such that the following condition holds:
(A) for every nonprobe vertex w ∈ N , there is a substring in the canonical sequence with respect
to the canonical ordering such that all the vertices in the substring are neighbors of w and all
the neighbors of w are present at least once in the substring.
Proof. Necessity: Let G = (V, E) be a PTPIG with probes P and nonprobes N such that
V = P ∪ N . Let {Ix = [`x , rx ] | x ∈ V } be a PTPIG representation of G such that {Iu | u ∈ P } is
a proper interval representation of GP . Then a probe vertex u ∈ P is adjacent to w ∈ N if and
only if `u ∈ Iw or ru ∈ Iw . Let u1 , u2 , . . . , up be a canonical ordering of vertices in P that satisfies
the conditions of Theorem 1.3. Now consider the corresponding canonical sequence SGP , i.e., the
combined increasing sequence of the `ui and rui for i = 1, 2, . . . , p. Since the sequence is increasing
and Iw is an interval, all the `ui ’s and rui ’s which belong to Iw occur consecutively in that
sequence. Thus for any w ∈ N there exists a substring of SGP such that all the vertices in the
substring are neighbors of w and all the neighbors of w are present at least once in the substring.
Sufficiency: Let G = (V, E) be a graph with an independent set N and P = V r N such
that GP , the subgraph induced by P , is a proper interval graph, with P = {u1 , u2 , . . . , up } and
N = {w1 , w2 , . . . , wq }. Suppose there is a canonical ordering u1 , u2 , . . . , up of the vertices belonging to
P such that for any nonprobe vertex w ∈ N , there is a substring in the canonical sequence S = SGP
with respect to this canonical ordering such that all the vertices in the substring are neighbors of w
and all the neighbors of w are present at least once in the substring. Let us count the positions of
each element in S from 1 to 2p. Now for each probe vertex ui , we assign the closed interval [`ui , rui ]
such that `ui and rui are position numbers of first and second occurrences of i in S respectively.
By definition of a canonical sequence, we have `u1 < `u2 < · · · < `up and ru1 < ru2 < · · · < rup .
Also since all position numbers are distinct, `ui 6= ruj for all i, j ∈ {1, 2, . . . , p}. Thus this interval
representation obeys the given canonical ordering of vertices belonging to P and by construction
the canonical sequence with respect to it is same as S.
We show that this interval representation is indeed an interval representation of GP which is proper.
Let i < j, i, j ∈ {1, 2, . . . , p}. Then `ui < `uj and rui < ruj . Thus neither of [`ui , rui ] and [`uj , ruj ]
contains the other properly. Now ui is adjacent to uj in GP if and only if ui uj = 1 in A(P ) when
vertices of A(P ) are arranged as in the given canonical ordering. Again ui uj = 1 with i < j if and
only if j is lying between two occurrences of i in the stair sequence of A(P ) and hence in S by
Proposition 2.4. Also since i < j, the second occurrence of j is always after the second occurrence
of i in S. Thus ui uj = 1 with i < j if and only if `uj ∈ [`ui , rui ]. This completes the verification
that {[`ui , rui ] | i = 1, 2, . . . , p} is a proper interval representation of GP and that corresponds to
S.
Next for each j = 1, 2, . . . , q, consider the substring in the canonical sequence S such that all the
vertices in the substring are neighbors of wj and all the neighbors of wj are present at least once
in the substring. Let the substring start at position `wj and end at position rwj in S. Then we assign
the interval [`wj , rwj ] to the vertex wj . If wj is an isolated vertex, then we assign a closed interval
whose end points are greater than `ui and rui for all i = 1, 2, . . . , p. Now all we need to show is that
{[`ui , rui ] | i = 1, 2, . . . , p} ∪ {[`wj , rwj ] | j = 1, 2, . . . , q} is a PTPIG representation of G, i.e., if ui
is a probe vertex and wj is a nonprobe vertex then there is an edge between them if and only if
either `ui ∈ [`wj , rwj ] or rui ∈ [`wj , rwj ].
First let us assume that there is an edge between ui and wj . So the vertex ui must be present
in the substring of S that contains all the neighbors of wj and contains only the neighbors of wj .
Since `wj and rwj are the beginning and ending positions of the substring respectively, either `ui or
rui must be in the interval [`wj , rwj ]. Conversely, let either `ui ∈ [`wj , rwj ] or rui ∈ [`wj , rwj ]. Then
either `ui or rui must be present in the substring. Since the substring contains only vertices
that are neighbors of wj , ui must be a neighbor of wj .
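Both directions of the proof suggest a direct check of condition (A); the following sketch (helper names are ours) assigns probe intervals from the occurrence positions in S, as in the sufficiency part, and searches for the required substring for a given neighbor set.

```python
# A sketch (names ours). Probe u_i receives the positions of the two
# occurrences of i in S; a nonprobe with a nonempty neighbor set receives
# the span of a substring of S that consists only of its neighbors and
# covers all of them (condition (A)).
def probe_intervals(S):
    first, intervals = {}, {}
    for pos, i in enumerate(S, start=1):
        if i in first:
            intervals[i] = (first[i], pos)
        else:
            first[i] = pos
    return intervals

def find_perfect_substring(S, neighbors):
    n = len(S)
    for l in range(n):
        if S[l] not in neighbors:
            continue
        seen = set()
        for r in range(l, n):
            if S[r] not in neighbors:
                break
            seen.add(S[r])
            if seen == neighbors:
                return (l + 1, r + 1)    # 1-based span in S
    return None
```

For S = (1 2 1 3 4 2 3 5 4 6 5 6) of Example 3.3 and the neighbor set {2, 4, 5} of n1 , no such substring exists, matching the discussion there.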
Remark 3.5. If G is a PTPIG such that GP is connected and reduced, then there is a unique
(up to reversal) canonical ordering of vertices belonging to P , as we mentioned at the beginning of
Section 2. Thus the corresponding canonical sequence is also unique up to reversal. Also if condition
(A) holds for a canonical sequence, it also holds for its reversal. Thus in this case condition (A)
holds for any canonical ordering of vertices belonging to P .
A(P ):
     p1 p2 p3 p4 p5 p6 p7 p8
p1    1  1  0  0  0  0  0  0
p2    1  1  1  1  1  0  0  0
p3    0  1  1  1  1  1  0  0
p4    0  1  1  1  1  1  0  0
p5    0  1  1  1  1  1  1  0
p6    0  0  1  1  1  1  1  0
p7    0  0  0  0  1  1  1  1
p8    0  0  0  0  0  0  1  1

A(P, N ):
     n1 n2 n3 n4 n5 n6
p1    0  0  0  1  0  1
p2    1  1  0  1  0  1
p3    0  1  0  1  0  1
p4    0  1  1  1  0  1
p5    1  1  1  1  0  1
p6    1  1  1  1  0  1
p7    0  0  1  0  0  1
p8    0  0  1  0  0  1

Table 6: The matrices A(P ) and A(P, N ) of the graph G in Example 3.6.
Let us illustrate the above theorem by the following example.
Example 3.6. Consider the graph G = (V, E) with an independent set N = {n1 , n2 , . . . , n6 }
and P = V r N = {p1 , p2 , . . . , p8 }, where the matrices A(P ) and A(P, N ) are given in Table 6.
First note that A(P ) satisfies the consecutive 1's property, so GP is a proper interval graph.
Secondly, no column of A(P, N ) has more than two consecutive stretches of 1's. Now
S = SGP = (1 2 1 3 4 5 2 6 3 4 7 5 6 8 7 8). The required substrings of probe neighbors for the
nonprobe vertices n1 , n2 , . . . , n6 are (5 2 6), (3 4 5 2 6 3 4), (4 7 5 6 8 7 8), (1 2 1 3 4 5 2 6 3 4),
∅ and S, respectively. Note that G is indeed a PTPIG with an interval representation shown in Table
7, which is constructed by the method described in the sufficiency part of Theorem 3.4.
Now consider the graph G in Example 3.3, which is not a PTPIG. From Table 5 we compute
S = SGP = (1 2 1 3 4 2 3 5 4 6 5 6). The graph GP is connected and reduced, so S is unique up
to reversal. Note that the nonprobe vertex n1 is adjacent to the probe vertices {p2 , p4 , p5 } and there is
no substring in S containing only {2, 4, 5}.
Definition 3.7. Let G = (V, E) be a graph with an independent set N and P = V r N such that
GP , the subgraph induced by P , is a proper interval graph. Let SGP be a canonical sequence of
GP and let w ∈ N . If there exists a substring in SGP which contains all the neighbors of w, and all
the vertices in the substring are neighbors of w, then we call the substring a perfect substring of w
in G. If the canonical sequence SGP contains a perfect substring of w for all w ∈ N , we call
it a perfect canonical sequence for G.
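Definition 3.7 can be turned directly into a scan over maximal runs: any perfect substring extends to a maximal run of neighbors, so it suffices to check those. A small Python sketch, tested on the sequences of Examples 3.6 and 3.3:

```python
def find_perfect_substring(S, nbrs):
    """Return (i, j) bounding a perfect substring of w in S (0-based, inclusive):
    a run containing only neighbors of w that covers every neighbor of w.
    Returns None if no such substring exists."""
    i, n = 0, len(S)
    while i < n:
        if S[i] in nbrs:
            j = i
            while j < n and S[j] in nbrs:  # extend to the maximal run of neighbors
                j += 1
            if nbrs <= set(S[i:j]):        # the run covers every neighbor of w
                return (i, j - 1)
            i = j
        else:
            i += 1
    return None

# Example 3.6: the neighbors {2, 5, 6} of n1 form the perfect substring (5 2 6).
S1 = [1, 2, 1, 3, 4, 5, 2, 6, 3, 4, 7, 5, 6, 8, 7, 8]
assert find_perfect_substring(S1, {2, 5, 6}) == (5, 7)

# Example 3.3: n1 is adjacent to {p2, p4, p5}, but no perfect substring exists.
S2 = [1, 2, 1, 3, 4, 2, 3, 5, 4, 6, 5, 6]
assert find_perfect_substring(S2, {2, 4, 5}) is None
```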
Proposition 3.8. Let G = (P, N, E) be a PTPIG such that GP is a connected reduced proper
interval graph and let SGP be a canonical sequence of GP . Then for any nonprobe vertex w ∈ N , there
cannot exist two disjoint perfect substrings of w in SGP , unless the substrings consist of
a single element.
Proof. Let u1 , u2 , . . . , up be the canonical ordering of the probe vertices of G with the proper interval
representation {[`i , ri ] | i = 1, 2, . . . , p} that satisfies condition 4 of Theorem 1.3, and let S be the
corresponding canonical sequence SGP . We first note that, since each vertex appears twice in S,
there cannot be more than two disjoint perfect substrings of S.
vertex   interval    p1  p2  p3  p4  p5  p6  p7  p8
p1       [1, 3]       1   1   0   0   0   0   0   0
p2       [2, 7]       1   1   1   1   1   0   0   0
p3       [4, 9]       0   1   1   1   1   1   0   0
p4       [5, 10]      0   1   1   1   1   1   0   0
p5       [6, 12]      0   1   1   1   1   1   1   0
p6       [8, 13]      0   0   1   1   1   1   1   0
p7       [11, 15]     0   0   0   0   1   1   1   1
p8       [14, 16]     0   0   0   0   0   0   1   1
n1       [6, 8]       0   1   0   0   1   1   0   0
n2       [4, 10]      0   1   1   1   1   1   0   0
n3       [10, 16]     0   0   0   1   1   1   1   1
n4       [1, 10]      1   1   1   1   1   1   0   0
n5       [17, 17]     0   0   0   0   0   0   0   0
n6       [1, 16]      1   1   1   1   1   1   1   1

Table 7: A proper tagged probe interval representation of the graph G in Example 3.6

Now suppose there is a nonprobe vertex, say w, in G such that there are two disjoint perfect
substrings of length greater than 1. We will refer to these substrings as the first substring and the
second substring, corresponding to the relative location of the substrings in S. Now in S, each
number i appears twice, due to `i and ri only. Thus if we think of the canonical sequence as an
ordering of `i 's and ri 's, then the first substring contains all the `i 's and the second
substring contains all the ri 's for all probe vertices ui that are neighbors of w, since `i < ri and
both substrings contain all numbers i such that ui is a neighbor of w.
Moreover, due to the increasing order of the `i 's and ri 's, both substrings contain the numbers k, k +
1, . . . , k + r for some integers k, r with 1 ≤ k ≤ p and 1 ≤ r ≤ p − k. So the first substring must
consist of some consecutive collection of `i 's, and similarly for the second substring; i.e., the first
substring is `k , `k+1 , . . . , `k+r and the second substring is rk , rk+1 , . . . , rk+r (in IGP ). Therefore the
vertices uk , . . . , uk+r form a clique.
Now suppose ui is adjacent to uk+t for some i < k and 1 ≤ t ≤ r. Then `i < `k and `k+r < ri ,
as `k to `k+r are consecutive in the first substring (in IGP ). But this implies that ui is adjacent to all of
uk , uk+1 , . . . , uk+r . Similarly, one can show that if uj is adjacent to uk+t for some j > k + r and 1 ≤
t ≤ r, then uj is adjacent to all of uk , uk+1 , . . . , uk+r . Thus the (closed) neighborhoods of uk , uk+1 , . . . , uk+r
are the same in GP , which contradicts the assumption that GP is reduced, as r ≥ 1.
In fact, we can go one step further in understanding the structure of a PTPIG. If G is a PTPIG,
not only can there not be two disjoint perfect substrings (of length more than 1) for any nonprobe
vertex in any canonical sequence, but also any two perfect substrings for the same vertex must
intersect in at least two places, except in two trivial cases.
Lemma 3.9. Let G = (P, N, E) be a PTPIG such that GP is a connected reduced proper interval
graph with a canonical ordering of vertices {u1 , u2 , . . . , up } and let VGP be the corresponding vertex
canonical sequence of GP . Let w ∈ N be such that w has at least two neighbors in P and let T1 and T2
be two perfect substrings for w in VGP intersecting in exactly one place. Then one of the following
holds:
1. VGP begins with u1 u2 u1 and only u1 and u2 are neighbors of w.
2. VGP ends with up up−1 up and only up−1 and up are neighbors of w.
Proof. Let [ai , bi ] be the interval corresponding to ui for i = 1, 2, . . . , p. Let the place where T1 and
T2 intersect be the first occurrence of the vertex uk .
Without loss of generality, let the substring T1 end with the first occurrence of uk and the substring
T2 start with the first occurrence of uk . Thus for all i > k, the vertex ui cannot appear before the
first occurrence of uk in VGP . So T1 does not contain any ui with i > k. Thus w is not a
neighbor of any ui with i > k. Note that this also means that for any vertex in the neighborhood
of w (except for uk ), the substring T1 contains its first occurrence, while the substring T2 contains
its second occurrence. Thus the vertices in the neighborhood of w have to be consecutive vertices
in the canonical ordering of GP . Let the vertices in the neighborhood of w be uk−r , . . . , uk−1 , uk ,
where 1 ≤ r ≤ k − 1.
Now for any vertex ui with i < k − r, ui is in neither T1 nor T2 . So the first occurrence
of ui is before the first occurrence of uk−r , and the second occurrence of ui is either also before the
first occurrence of uk−r or after T2 , i.e., after the second occurrence of uk−r . But if the second
case happens, then we would violate the fact that GP is proper. So the only option is that both
the first and second occurrences of ui are before the first occurrence of uk−r , and this would violate
the condition that the graph GP is connected. So the only option is that there exists no ui with
i < k − r. So k − r = 1. Thus the neighbors of w are precisely u1 , . . . , uk .
Now if we look at the interval canonical sequence of GP , we see that T1 corresponds to a1 , . . . , ak and
T2 corresponds to ak , b1 , . . . , bk−1 . But this would mean that all the vertices u1 , . . . , uk−1 have the
same (closed) neighborhood in GP , which is not possible as we assumed GP is reduced, unless the
set {u1 , . . . , uk−1 } is a single-element set. In that case, w has neighbors u1 and u2 , and T1 and
T2 correspond to a1 , a2 and a2 , b1 respectively (in IGP ). This is the first option in the lemma. By a
similar argument, if we assume that T1 and T2 intersect in the second occurrence of the vertex uk ,
we get the other option.
Now let us consider what happens when the graph GP induced by the probe vertices is not necessarily reduced. Let the blocks of GP be B1 , . . . , Bt . Then a canonical ordering of vertices in the
reduced graph G̃P can be considered as a canonical ordering of blocks of GP , and the corresponding
canonical sequence SG̃P can be considered as a canonical sequence of blocks of GP . A substring
in SG̃P can then also be called a block substring. If there is a vertex in a block Bj that is a neighbor
of a nonprobe vertex w, we call Bj a block-neighbor of w. Also, if a block substring contains all
the block-neighbors of w and all the blocks in it are block-neighbors of w, then we call the block
substring a perfect block substring of w in SG̃P . The following corollaries are then immediate from
Proposition 3.8 and Lemma 3.9.
Corollary 3.10. Let G = (P, N, E) be a PTPIG such that GP is a connected proper interval graph.
Let SG̃P be a canonical sequence for the reduced graph G̃P . Then for any w ∈ N , there cannot exist
two disjoint perfect block substrings of w in SG̃P , unless the block substrings consist of
a single element.
Corollary 3.11. Let G = (P, N, E) be a PTPIG such that GP is a connected proper interval
graph with a canonical ordering of blocks {B1 , B2 , . . . , Bt } and let VG̃P be the corresponding vertex
canonical sequence of blocks of GP . Let w ∈ N be such that w has at least two block-neighbors in
GP and let T1 and T2 be two perfect block substrings for w in VG̃P intersecting in exactly one place.
Then one of the following holds:
1. VG̃P begins with B1 B2 B1 and only B1 and B2 are block-neighbors of w.
2. VG̃P ends with Bt Bt−1 Bt and only Bt−1 and Bt are block-neighbors of w.
4 Recognition algorithm
In this section, we present a linear time recognition algorithm for PTPIG. That is, given a graph
G = (V, E) and a partition of the vertex set into N and P = V r N , we can check, in time
O(|V | + |E|), whether the graph G = (P, N, E) is a PTPIG. Now G = (P, N, E) is a PTPIG if and only if
it is a TPIG, i.e., it satisfies the three conditions in Definition 1.1, and GP is a proper interval graph
for a TPIG representation of G. Note that it is easy to check in linear time whether the graph G satisfies
the first condition, namely, whether N is an independent set in the graph. For testing whether the graph
satisfies the other two properties, we will use the characterization obtained in Theorem 3.4.
We will use the recognition algorithm for a proper interval graph H = (V 0 , E 0 ) given by Booth and
Lueker [4] as a black box that runs in O(|V 0 | + |E 0 |) time. The main idea of their algorithm is that H
is a proper interval graph if and only if the adjacency matrix of the graph satisfies the consecutive
1's property. In other words, H is a proper interval graph if and only if there is an ordering of
the vertices of H such that for any vertex v in H, the neighbors of v are consecutive in that
ordering. So for every vertex v in H they consider a restriction, on the ordering of the vertices, of
the form "all vertices in the neighborhood of v must be consecutive". This is done using the
PQ-tree data structure. The PQ-tree stores all the possible orderings that respect all these
restrictions. It is important to note that the orderings that satisfy all the restrictions
are precisely the canonical orderings of vertices of H.
The main idea of our recognition algorithm is that if the graph G = (P, N, E) is a PTPIG then, from
Condition (A) in Theorem 3.4, we can obtain a series of restrictions on the ordering of vertices
that can also be "stored" using the PQ-tree data structure. These restrictions are over and above the
restrictions needed to ensure that the graph GP is a proper interval graph. If finally there exists
an ordering of the vertices that satisfies all the restrictions, then that ordering will be a canonical
ordering that satisfies Condition (A) in Theorem 3.4. So the main challenge is to identify all
the extra restrictions on the ordering and how to store them in the PQ-tree.
Once we have verified that the graph G = (P, N, E) satisfies the first condition in Definition 3.1
and we have stored all the possible canonical orderings of the vertices of the subgraph GP = (P, E1 )
of G induced by P in a PQ-tree (in O(|P | + |E1 |) time), we proceed to find the extra restrictions
that need to be imposed on the orderings. We present our algorithm in three steps, each
step handling a class of graphs that generalizes the class of graphs handled in the previous
one.
STEP I: First we consider the case when GP is a connected reduced proper interval graph.
STEP II: Next we consider the case when GP is a connected proper interval graph, but not necessarily reduced.
STEP III: Finally we consider the general case when the graph GP is a proper interval graph, but may not be connected or reduced.
For all the steps we will assume that the vertices in P are v1 , . . . , vp , the vertices in N are w1 , . . . , wq ,
Aj is the adjacency list of the vertex wj , and dj is the degree of the vertex wj . So the
neighbors of wj are Aj [1], Aj [2], . . . , Aj [dj ] for j = 1, 2, . . . , q.
4.1 Step I: The graph GP is a connected reduced proper interval graph
By Lemma 2.3, there is a unique (up to reversal) canonical ordering of the vertices of GP . By
Theorem 3.4, we know that the graph G is a PTPIG if and only if the following condition is satisfied:
Condition (A1): For all 1 ≤ j ≤ q, there is a substring of SGP in which only the neighbors of wj
appear and every neighbor of wj appears at least once.
In this case, when the graph GP is a connected reduced proper interval graph, since there is a unique
canonical ordering of the vertices, all we have to do is check whether the corresponding canonical
sequence satisfies Condition (A1). So the rest of the algorithm in this case simply checks whether
this condition is satisfied.
Idea of the algorithm: Since we know the canonical sequence SGP (or can obtain it using the known
algorithms described before in O(|P | + |E1 |) time, where E1 is the set of edges between probe
vertices), we can build two look-up tables L and R such that for any vertex vi ∈ P , L(vi ) and
R(vi ) hold the indices of the first and the second appearance of vi in SGP respectively. We can obtain
the look-up tables in O(|P |) steps.
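Building the look-up tables is a single pass over the canonical sequence; a minimal Python sketch (0-based indices, tested on the sequence of Example 3.6):

```python
def build_lookup(S):
    """One pass over the canonical sequence S: L[v] and R[v] are the 0-based
    indices of the first and second appearance of v in S."""
    L, R = {}, {}
    for pos, v in enumerate(S):
        if v not in L:
            L[v] = pos   # first appearance
        else:
            R[v] = pos   # second appearance
    return L, R

S = [1, 2, 1, 3, 4, 5, 2, 6, 3, 4, 7, 5, 6, 8, 7, 8]  # sequence from Example 3.6
L, R = build_lookup(S)
assert (L[1], R[1]) == (0, 2) and (L[2], R[2]) == (1, 6)
```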
Also, by SGP [k1 , k2 ] (where 1 ≤ k1 ≤ k2 ≤ 2p) we will denote the substring of the canonical
sequence SGP that starts at the k1 -th position and ends at the k2 -th position in SGP .
To check Condition (A1), we go over all wj ∈ N . For j ∈ {1, 2, . . . , q}, let L(Aj [1]) = `j
and R(Aj [1]) = rj . Now since all the neighbors of wj have to be in a substring, there must be
a substring of length at least dj and at most 2dj (as each number appears twice) in SGP [`j −
2dj , `j + 2dj ] or SGP [rj − 2dj , rj + 2dj ] which contains only and all the neighbors of wj . We can
identify all such possible substrings by first marking the positions in SGP [`j − 2dj , `j + 2dj ] and
SGP [rj − 2dj , rj + 2dj ] that are neighbors of wj and then, by doing a double pass (Algorithm 1),
finding all the possible substrings of length greater than or equal to dj in SGP [`j − 2dj , `j + 2dj ]
and SGP [rj − 2dj , rj + 2dj ] that contain only neighbors of wj .
We prove the correctness and run-time of the algorithm in the following theorem:
Theorem 4.1. Let G = (V, E) be a graph with an independent set N and P = V r N such that
GP is a connected reduced proper interval graph. Then Algorithm 2 correctly decides whether G is
a PTPIG with probes P and nonprobes N in time O(|P | + |N | + |E2 |), where E2 is the set of edges
between probes P and nonprobes N .
Proof. For each j = 1, 2, . . . , q, we test Condition (A1). In Line 3, ` and r are the indices of the first and
second appearance of the first probe neighbor of wj respectively. In Lines 4–9, we generate two (0, 1)
arrays X and Y , each of length 4dj + 1. The probe neighbors of wj lying in [` − 2dj , ` + 2dj ] are
marked 1 in X, with positions translated by 2dj + 1 − `. Similarly, Y is formed by marking 1 for the
probe neighbors of wj lying in [r − 2dj , r + 2dj ], translated by 2dj + 1 − r. The translations are
required to avoid negative positions. Then, using Algorithm 1, we find all substrings
of probe neighbors of length greater than or equal to dj in Lines 10 and 15. Finally, in Lines
12–13 and 17–18, we check whether the substring contains all probe neighbors of wj . Thus, by
Theorem 3.4 and Remark 3.5, Algorithm 2 correctly decides whether G is a PTPIG with probes P
and nonprobes N by testing whether Condition (A1) is satisfied.
Now we compute the running time. For each j = 1, 2, . . . , q, Algorithm 1 requires O(dj ) time. Also,
Lines 4–9, 12 and 17 of Algorithm 2 require O(dj ) time. The other steps require constant time. Thus
Algorithms 1 and 2 run in O(|E2 | + |N |) time, where E2 is the set of edges between probes P and
nonprobes N , and the look-up tables L and R were obtained in O(|P |) time. Thus the total running
time is O(|P | + |N | + |E2 |).
Algorithm 1: Finding-1Subsequence
Input   : An array A[1, T ] containing only 0/1 entries, and d (assume A[T + 1] = 0).
Output  : All (s, t) pairs such that t − s + 1 ≥ d, A[i] = 1 for all s ≤ i ≤ t,
          (either s = 1 or A[s − 1] = 0) and (either t = T or A[t + 1] = 0).
1  Initialize ` = r = 0
2  while r ≤ T do
3      if A[r + 1] = 1 then
4          r = r + 1
5      if A[r + 1] = 0 then
6          if r − ` ≥ d then
7              Output (` + 1, r)
8          ` = r + 1
9          r = r + 1
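A direct transliteration of Algorithm 1 in Python (0/1 list input, 1-based output pairs as in the pseudocode; a sketch, with a sentinel playing the role of A[T + 1] = 0):

```python
def finding_1subsequence(A, d):
    """Report every maximal run of 1's in A of length at least d,
    as 1-based (s, t) index pairs."""
    runs, s = [], None
    for i, bit in enumerate(A + [0]):      # sentinel stands in for A[T + 1] = 0
        if bit == 1 and s is None:
            s = i                          # a run starts at 0-based index i
        elif bit == 0 and s is not None:
            if i - s >= d:                 # run A[s .. i-1] has length i - s
                runs.append((s + 1, i))    # convert to 1-based (s, t)
            s = None
    return runs

assert finding_1subsequence([0, 1, 1, 1, 0, 1, 1, 0], 2) == [(2, 4), (6, 7)]
assert finding_1subsequence([0, 1, 1, 1, 0, 1, 1, 0], 3) == [(2, 4)]
```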
Algorithm 2: Testing Condition (A1)
Input   : A graph G = (V, E)
Output  : ACCEPT if Condition (A1) is satisfied
1   for j ← 1 to q do
2       Initialize two arrays X and Y of length 4dj + 1 by setting all the entries to 0.
3       Let ` = L(Aj [1]) and r = R(Aj [1]).
4       for each k ← 1 to dj do
5           Let s = L(Aj [k]) and t = R(Aj [k])
6           If (` − 2dj ≤ s ≤ ` + 2dj ) Mark X[s − ` + 2dj + 1] = 1
7           If (` − 2dj ≤ t ≤ ` + 2dj ) Mark X[t − ` + 2dj + 1] = 1
8           If (r − 2dj ≤ s ≤ r + 2dj ) Mark Y [s − r + 2dj + 1] = 1
9           If (r − 2dj ≤ t ≤ r + 2dj ) Mark Y [t − r + 2dj + 1] = 1
10      for all (a, b) output of Finding-1Subsequence(X, dj ) do
11          Let â = a + ` − 2dj − 1 and b̂ = b + ` − 2dj − 1.
12          if ∀ 1 ≤ k ≤ dj either â ≤ L(Aj [k]) ≤ b̂ or â ≤ R(Aj [k]) ≤ b̂ then
13              j = j + 1
14              Go To Step 1
15      for all (a, b) output of Finding-1Subsequence(Y, dj ) do
16          Let â = a + r − 2dj − 1 and b̂ = b + r − 2dj − 1.
17          if ∀ 1 ≤ k ≤ dj either â ≤ L(Aj [k]) ≤ b̂ or â ≤ R(Aj [k]) ≤ b̂ then
18              j = j + 1
19              Go To Step 1
20      REJECT
21  ACCEPT
Remark 4.2. Given a graph G = (V, E) with an independent set N and P = V r N , checking
whether GP = (P, E1 ) is a proper interval graph and obtaining SGP (from any proper interval representation
of GP ) requires O(|P | + |E1 |) time, so the total recognition time is O(|P | + |N | + |E1 | + |E2 |) = O(|V | + |E|).
4.2 Step II: The graph GP is a connected (not necessarily reduced) proper interval graph
In this case, that is, when the graph GP is not reduced, we cannot say that there exists a unique
canonical ordering of the vertices of GP . So by Theorem 3.4, the question becomes: among the
set of canonical orderings of the vertices of GP , is there an ordering such that the corresponding
canonical sequence satisfies Condition (A) of Theorem 3.4? As mentioned before, we will assume
that we have all the possible canonical orderings of the vertices of GP stored in a PQ-tree. Now we
will impose more constraints on the orderings so that the required condition is satisfied.
Let G̃P be the reduced graph of GP . By Remark 2.6, G̃P has a unique (up to reversal) canonical
ordering of vertices, say b1 , . . . , bt (corresponding to the blocks B1 , . . . , Bt of the vertices of GP ),
and the canonical orderings of the vertices of GP are obtained by all possible permutations of the
vertices of GP within each block.
Since we know the canonical sequence SG̃P for the vertices in G̃P , we can build two look-up tables
L and R such that for any vertex bk ∈ G̃P , L(bk ) and R(bk ) hold the indices of the first and the
second appearance of bk in SG̃P respectively. Abusing notation, for any probe vertex v ∈ Bk ,
by L(v) and R(v) we will denote L(bk ) and R(bk ) respectively. We can obtain the look-up tables
in O(|P |) steps.
Definition 4.3. For any w ∈ N and any block Bk , we say that Bk is a block-neighbor of w if there
exists at least one vertex in Bk that is a neighbor of w. If all the vertices in Bk are neighbors of w,
we call Bk a full-block-neighbor of w, and if there exists at least one vertex in Bk that is a neighbor
of w and at least one vertex of Bk that is not a neighbor of w, we call it a partial-block-neighbor of
w. Also, for any w ∈ N , let us define the function fw : {1, 2, . . . , t} → {0, 1, 2} as

fw (k) = 0,  if Bk is not a block-neighbor of w;
fw (k) = 1,  if Bk is a full-block-neighbor of w;
fw (k) = 2,  if Bk is a partial-block-neighbor of w.
By abuse of notation, for any probe vertex v ∈ Bk , by fw (v) we will denote fw (k).
Note that for any wj ∈ N one can compute the function fwj in O(dj ) steps and can
store the function fwj in an array of size t.
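Computing fw amounts to counting, per block, how many of its vertices are neighbors of w. A minimal sketch (the block and neighbor sets below are illustrative, not from the paper):

```python
def classify_blocks(blocks, nbrs_w):
    """f_w(k) for each block B_k: 0 = not a block-neighbor,
    1 = full-block-neighbor, 2 = partial-block-neighbor of w."""
    f = []
    for B in blocks:
        hits = sum(1 for v in B if v in nbrs_w)
        f.append(0 if hits == 0 else 1 if hits == len(B) else 2)
    return f

# Hypothetical blocks of G_P and neighbor set of a nonprobe vertex w:
blocks = [{1, 2}, {3}, {4, 5}, {6}]
assert classify_blocks(blocks, {2, 3, 4, 5}) == [2, 1, 1, 0]
```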
4.2.1 Idea of the Algorithm
If G is a PTPIG then from Condition (A) in Theorem 3.4 we can see that the following condition is
necessary (though not sufficient):
Condition B1: For all 1 ≤ j ≤ q, there is a substring of SG̃P in which only the block-neighbors of
wj appear and all the block-neighbors of wj appear at least once, and any block that is not at the
beginning or end of the substring must be a full-block-neighbor of wj .
Though the above Condition B1 is not a sufficient condition, as a first step we will check
whether the graph satisfies it. For every wj ∈ N , we will identify (using the algorithm
CheckConditionB1(wj )) all possible maximal substrings of SG̃P that can satisfy Condition B1.
If such a substring exists, then CheckConditionB1(wj ) outputs the block numbers that appear at
the beginning and end of the substring. Suppose that for some wj , CheckConditionB1(wj ) outputs (k1 , k2 );
then note that 1 ≤ k1 ≤ k2 ≤ t and Bk1 and Bk2 are the only possible partial-block-neighbors of wj . The
algorithm CheckConditionB1(wj ) is very similar to the algorithm Testing Condition (A1) that
we described for Step I, where we assumed that GP is a connected reduced proper interval graph.
Now, as mentioned earlier, Condition (B1) is not sufficient for G to be a PTPIG. For G to be a
PTPIG (that is, to satisfy Condition (A) of Theorem 3.4) we have to find a suitable canonical
ordering of the vertices or, in other words, by Remark 2.6, we need to find a suitable ordering of
the vertices in each block. Depending on the pair (k1 , k2 ) output by CheckConditionB1(wj ), we
have a number of cases, and in each case some restrictions will be imposed on the ordering
of the vertices within blocks.
Let σ1 , . . . , σt be the orderings of the vertices of blocks B1 , B2 , . . . , Bt respectively. For every wj ∈ N ,
let us denote by Ngbk (wj ) the vertices in the block Bk that are neighbors of wj . We list the
different cases and the restrictions on σ1 , . . . , σt that are imposed in each case when the
algorithm CheckConditionB1(wj ) outputs (k1 , k2 ).
• Category 0: There are no partial-block-neighbors of wj . In this case Condition (B1)
is sufficient.
• Category 1: There exists exactly one partial-block-neighbor of wj . Let Bk be the
block that is a partial-block-neighbor of wj . In this case we have 4 subcategories:
– Category 1a: Bk is the only block-neighbor of wj . In that case, σk must ensure
that the vertices in Ngbk (wj ) are contiguous.
– Category 1b: All pairs returned by CheckConditionB1(wj ) are of the form
(k, k2 ) with k2 ≠ k. In that case, σk must ensure that the vertices in Ngbk (wj ) are
contiguous and flushed to the Right.
– Category 1c: All pairs returned by CheckConditionB1(wj ) are of the form
(k1 , k) with k1 ≠ k. In that case, σk must ensure that the vertices in Ngbk (wj ) are
contiguous and flushed to the Left.
– Category 1d: All other cases; that is, CheckConditionB1(wj ) returns (k, k), or
returns both (k, k2 ) and (k1 , k). In both cases σk must ensure that the vertices in
Ngbk (wj ) are contiguous and either flushed to the Left or flushed to the Right.
• Category 2: There exist exactly two partial-block-neighbors of wj . Let Bk1 and Bk2 be the
two blocks which are partial-block-neighbors of wj . In this case we claim that the following
three cases may happen:
– Category 2a: {k1 , k2 } = {1, 2} and SG̃P begins with 121. In this case the
following two conditions must be satisfied:
∗ Condition 2a(1): σ1 must ensure that the vertices in Ngb1 (wj ) are contiguous and
either flushed to the Right or to the Left, and σ2 must ensure that the vertices in
Ngb2 (wj ) are contiguous and either flushed to the Right or to the Left.
∗ Condition 2a(2): If the vertices in Ngb1 (wj ) are flushed to the Left then the vertices
of Ngb2 (wj ) must be flushed to the Right, and if the vertices in Ngb1 (wj ) are flushed to the
Right then the vertices of Ngb2 (wj ) must be flushed to the Left.
– Category 2b: {k1 , k2 } = {t − 1, t} and SG̃P ends with t(t − 1)t. As in the previous
case, both of the following conditions must be satisfied:
∗ Condition 2b(1): σt−1 must ensure that the vertices in Ngbt−1 (wj ) are contiguous
and either flushed to the Right or to the Left, and σt must ensure that the vertices
in Ngbt (wj ) are contiguous and either flushed to the Right or to the Left.
∗ Condition 2b(2): If the vertices in Ngbt−1 (wj ) are flushed to the Left then the vertices
of Ngbt (wj ) must be flushed to the Right, and if the vertices in Ngbt−1 (wj ) are flushed to
the Right then the vertices of Ngbt (wj ) must be flushed to the Left.
– Category 2c: CheckConditionB1(wj ) outputs (k1 , k2 ). In this case (by Corollary 3.11)
CheckConditionB1(wj ) cannot output both (k1 , k2 ) and (k2 , k1 ), unless we are in
Category 2a or 2b. Thus σk1 must ensure that the vertices in Ngbk1 (wj )
are contiguous and flushed to the Right, and σk2 must ensure that the vertices in
Ngbk2 (wj ) are contiguous and flushed to the Left.
• Category 3: If for some wj there are more than 2 partial-block-neighbors, then G cannot
be a PTPIG.
Using the algorithm CheckConditionB1(wj ), we can identify all the various kinds of restrictions on
σ1 , . . . , σt that need to be imposed for G to be a PTPIG. Note that if there exist
σ1 , . . . , σt such that the above restrictions are satisfied for all wj , then G is a PTPIG. So our goal
is to check whether there exist σ1 , . . . , σt such that the above restrictions are satisfied for all wj .
For this we define a generalization of the consecutive 1's problem (we call it the oriented-consecutive
1's problem) and show how it can be used to store all the restrictions in the PQ-tree that already
stores all the restrictions imposed by the fact that GP is a proper interval graph. We
will first show how to handle all the conditions except for the special conditions Condition 2a(2) of
Category 2a and Condition 2b(2) of Category 2b. After we have successfully stored all the other
restrictions, we will show how to handle these special conditions.
Note that except for these special conditions all the restrictions are of the following four kinds:
• All the vertices in Ngbk (wj ) are consecutive and flushed to the Left.
• All the vertices in Ngbk (wj ) are consecutive and flushed to the Right.
• All the vertices in Ngbk (wj ) are consecutive and either flushed to the Left or flushed to the Right.
• All the vertices in Ngbk (wj ) are consecutive.
In Section 4.2.2 we describe the oriented-consecutive 1's problem and show how it can be
used to store all the above restrictions except for the special conditions mentioned above. The
following lemma shows how to handle the special conditions Condition 2a(2) of Category 2a and
Condition 2b(2) of Category 2b.
Lemma 4.4. Let σ1 , . . . , σt be orderings of the blocks B1 , . . . , Bt in G such that for all k = 1, 2, . . . , t
and for all wj ∈ N , all the restrictions imposed by wj on σk are satisfied, except possibly for the special
conditions Condition 2a(2) of Category 2a and Condition 2b(2) of Category 2b. If σkr denotes
the reverse permutation of σk , then one of the 16 possibilities obtained by reversing
or not reversing the orderings σ1 , σ2 , σt−1 and σt is a valid ordering if G is a PTPIG.
Proof. Straightforward.
So, by the above lemma, once we identify orderings σ1 , . . . , σt that satisfy all the restrictions
imposed by all wj ∈ N except for the special conditions Condition 2a(2) of Category 2a and
Condition 2b(2) of Category 2b, we have to check whether any of the 16 possibilities is a valid set of
permutations.
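Enumerating the 16 candidates of Lemma 4.4 is mechanical; a sketch, where each per-block ordering σi is represented as a list and only σ1, σ2, σt−1 and σt are optionally reversed (the block data below is illustrative):

```python
from itertools import product

def reversal_variants(sigmas):
    """Yield the (up to) 16 block-ordering tuples obtained by independently
    reversing or keeping sigma_1, sigma_2, sigma_{t-1} and sigma_t."""
    idx = sorted({0, 1, len(sigmas) - 2, len(sigmas) - 1})
    for flips in product([False, True], repeat=len(idx)):
        variant = [list(s) for s in sigmas]
        for k, flip in zip(idx, flips):
            if flip:
                variant[k].reverse()
        yield variant

variants = list(reversal_variants([[1, 2], [3, 4], [5, 6], [7, 8]]))
assert len(variants) == 16
assert [[2, 1], [3, 4], [5, 6], [7, 8]] in variants
```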
4.2.2 Oriented-Consecutive 1's problem
Problem 4.5 (Oriented-Consecutive 1's problem). Input: A set Ω = {s1 , . . . , sm } and n restrictions (S1 , b1 ), . . . , (Sn , bn ), where Si ⊆ Ω and bi ∈ {−1, 0, 1, 2}.
Question: Is there a linear ordering of Ω, say sσ(1) , . . . , sσ(m) , such that for 1 ≤ i ≤ n the following
are satisfied:
• If bi = 0, then all the elements in Si are consecutive in the linear ordering.
• If bi = −1, then all the elements in Si are consecutive in the linear ordering and all the elements
of Si are flushed towards the Left, i.e., sσ(1) ∈ Si .
• If bi = 1, then all the elements in Si are consecutive in the linear ordering and all the elements
of Si are flushed towards the Right, i.e., sσ(m) ∈ Si .
• If bi = 2, then all the elements in Si are consecutive in the linear ordering and all the elements
of Si are either flushed towards the Left or flushed towards the Right, i.e., either sσ(1) ∈ Si or
sσ(m) ∈ Si .
Now this problem is very similar to the consecutive 1's problem, which is solved using the
PQ-tree. Recall the algorithm: given a set Ω = {s1 , . . . , sm } and a collection of
subsets of Ω, say S1 , . . . , Sn , the algorithm outputs a PQ-tree T with leaves {s1 , . . . , sm } and the
property that P Q(T ) is the set of all orderings of Ω in which, for all Si (1 ≤ i ≤ n), the elements of
Si are contiguous. The algorithm has n iterations. It starts with a trivial PQ-tree T ,
and at the beginning of the k-th iteration the PQ-tree T has the property that P Q(T ) is the set of all
orderings of Ω in which, for all Si (1 ≤ i ≤ k − 1), the elements of Si are contiguous.
At iteration k, the PQ-tree T is updated using the algorithm Restrict(T, Sk ) [4].² If for some Sk ,
Restrict(T, Sk ) cannot be performed, the algorithm halts and rejects.
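For intuition only, the consecutive-arrangement question that the PQ-tree answers can be stated as a brute-force search. This sketch is exponential in |Ω| and is not the paper's method; PQ-trees decide the same question efficiently:

```python
from itertools import permutations

def has_consecutive_ordering(universe, sets):
    """Brute force: is there an ordering of `universe` in which every set in
    `sets` occupies consecutive positions? (What a PQ-tree decides efficiently.)"""
    for perm in permutations(universe):
        pos = {v: i for i, v in enumerate(perm)}
        if all(max(pos[v] for v in S) - min(pos[v] for v in S) == len(S) - 1
               for S in sets):
            return True
    return False

assert has_consecutive_ordering({1, 2, 3, 4}, [{1, 2}, {3, 4}, {1, 3}])
# A 4-cycle of adjacency requirements cannot be realized on a line:
assert not has_consecutive_ordering({1, 2, 3, 4}, [{1, 3}, {2, 4}, {1, 2}, {3, 4}])
```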
We will use a similar technique to solve the Oriented-Consecutive 1's Problem. In fact, we will reduce
the Oriented-Consecutive 1's Problem to the standard Consecutive 1's Problem, but we will assume
that all the restrictions of the form (Si , bi ) with bi = 2 appear at the end. That is, if for some
i we have bi = 2, then for all j ≥ i we have bj = 2. We present Algorithm 4 for solving
the Oriented-Consecutive 1's Problem, and the following result proves the correctness of the
algorithm.

² In [4] they define the update algorithm as Reduce(T, S). In this paper we use the name Restrict(T, S) instead of
Reduce(T, S) as we think that the word "Restrict" is more suitable.
Claim 4.6. If we assume that bi = 2 implies bj = 2 for all j ≥ i, then Algorithm 4 is a correct
algorithm for the Oriented-Consecutive 1's Problem. In fact, all the valid orderings are stored in the
PQ-tree T .
Proof. First let us assume that bi ≠ 2 for all i. We initialize the PQ-tree with Ω ∪ {`, a} as leaves.
Then we use Restrict(T, Ω ∪ {`}) and Restrict(T, Ω ∪ {a}); this implies that any ordering σ
of Ω ∪ {`, a} must have ` and a at the two ends. Note that if σ is in fact a valid ordering of the
elements in Ω ∪ {`, a} that satisfies all the constraints, then the reverse of σ is also a valid ordering
of the elements in Ω ∪ {`, a} that satisfies all the constraints. So we can always assume that ` is
at the beginning (i.e., the leftmost position) and a is at the end (i.e., the rightmost position).
From now on, if we say that the elements of the set Si are flushed to the Left, we mean that Si ∪ {`}
is contiguous. Similarly, if we say that the elements of the set Si are flushed to the Right, we mean
that Si ∪ {a} is contiguous. Now let us check case by case:
Case 1: bj = 0. By using Restrict(T, Sj ) we ensure that all the elements in Sj are contiguous in
the ordering.
Case 2: bj = −1. By using Restrict(T, Sj ∪ {`}) we ensure that all the elements in Sj ∪ {`} are
contiguous in the ordering, and since ` is at the leftmost position, this means the elements
in Sj are contiguous and flushed to the Left.
Case 3: bj = 1. By using Restrict(T, Sj ∪ {a}) we ensure that all the elements in Sj ∪ {a} are
contiguous in the ordering, and since a is at the rightmost position, this means the elements
in Sj are contiguous and flushed to the Right.
So if bi ≠ 2 for all i, then the algorithm solves the Oriented-Consecutive 1's Problem correctly.
Now let us assume that there exists i such that bj = 2 for all j ≥ i and bj ≠ 2 for all j < i.
First consider the case that there exists an i0 < i such that bi0 = 1. Then, at the i-th iteration of the
algorithm, every permutation σ that satisfies the PQ-tree has the elements of Si0 flushed
to the Right. Now at the i-th iteration, the algorithm ORestrict(T, Si , {`, a}) first checks
whether Si can be flushed to the Right or Si can be flushed to the Left. By Lemma 4.7, at most
one of the two options is possible. In other words, there is no choice, and the
algorithm will find that option. A similar argument holds if there exists an i0 < i such that bi0 = −1.
The complement of the above two cases is that for all i, bi is either 0 or 2. In this case, note
that if σ is a permutation that satisfies the conditions, then the reverse permutation σ r also
satisfies the conditions. Now suppose that for some i, bi = 2 and we flush Si to the Left. By Lemma 4.7,
if bj = 2 for all j > i, only one option is possible for each subsequent Sj . Had we instead flushed
Si to the Right, we would obtain the reverse permutations.
Algorithm 3: ORestrict(T, S, {a, b})
Input: PQ-tree T on set Ω, set S ⊂ Ω, {a, b}
Output: PQ-tree
    if Restrict(T, S ∪ {a}) is possible then
        Restrict(T, S ∪ {a})
    else
        Restrict(T, S ∪ {b})
Algorithm 4: Oriented-Consecutive 1’s test
Input: Ω, (S1 , b1 ), . . . , (Sn , bn )
Output: A PQ-tree T over the universe Ω such that all the remaining orderings satisfy the restrictions (Sj , bj )
    Initialize PQ-tree T with Ω ∪ {`, a} as leaves attached to the root (which is a P-node)
    Restrict(T, Ω ∪ {`});
    Restrict(T, Ω ∪ {a});
    for j ← 1 to n do
        if bj = 0 then
            Restrict(T, Sj );
        if bj = −1 then
            Restrict(T, Sj ∪ {`});
        if bj = 1 then
            Restrict(T, Sj ∪ {a});
        if bj = 2 then
            ORestrict(T, Sj , {`, a});
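Since Restrict only prunes the set of orderings a PQ-tree represents, the behaviour that Algorithm 4 enforces can be written down as a brute-force reference specification over explicit permutations (a sketch for illustration only — the real algorithm uses PQ-trees for efficiency, and all function names here are ours):

```python
from itertools import permutations

def contiguous(order, S):
    """True if the elements of S occupy consecutive positions in `order`."""
    pos = sorted(order.index(x) for x in S)
    return pos[-1] - pos[0] == len(S) - 1

def satisfies(order, S, b):
    """Check one restriction (S, b): 0 = contiguous, -1 = flushed Left
    (a prefix), 1 = flushed Right (a suffix), 2 = prefix or suffix."""
    n = len(order)
    if b == 0:
        return contiguous(order, S)
    if b == -1:
        return set(order[:len(S)]) == set(S)
    if b == 1:
        return set(order[n - len(S):]) == set(S)
    # b == 2: oriented, but the direction is free
    return (set(order[:len(S)]) == set(S)
            or set(order[n - len(S):]) == set(S))

def oriented_consecutive_ones(universe, restrictions):
    """All orderings of `universe` satisfying every (S, b) restriction."""
    return [p for p in permutations(universe)
            if all(satisfies(p, S, b) for S, b in restrictions)]
```

For example, `oriented_consecutive_ones([1, 2, 3, 4], [({1, 2}, -1), ({3, 4}, 1), ({2, 3}, 0)])` leaves only the ordering `(1, 2, 3, 4)`: {1, 2} must be a prefix, {3, 4} a suffix, and 2 must sit next to 3.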
Lemma 4.7. Let Ω = {s1 , . . . , sm } and let S1 , S2 ⊂ Ω be nonempty proper subsets. Then the following two conditions cannot
hold simultaneously:
1. There is an ordering of the elements of Ω such that the elements of both S1 and S2 are flushed Left.
2. There is an ordering of the elements of Ω such that the elements of S1 are flushed Left while the
elements of S2 are flushed Right.
Proof. Let σ be an ordering of the elements of Ω such that the elements of both S1 and S2 are
flushed Left. This can happen only if either S1 ⊆ S2 or S2 ⊆ S1 . Similarly, let σ 0 be an ordering
of the elements of Ω such that the elements of S1 are flushed Left and those of S2 are flushed Right. This
can happen only if either S2 ⊆ (S1 )c or (S1 )c ⊆ S2 , where (S1 )c is the complement of the set S1 .
Note that for nonempty proper subsets the two conditions, “either S1 ⊆ S2 or S2 ⊆ S1 ” and “either
S2 ⊆ (S1 )c or (S1 )c ⊆ S2 ”, cannot hold simultaneously.
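Lemma 4.7 can be sanity-checked exhaustively on a small universe, modelling “flushed Left” as being a prefix of the ordering and “flushed Right” as being a suffix; the check is restricted to nonempty proper subsets, since the degenerate cases Si = ∅ and Si = Ω are excluded (a sketch; all names are ours):

```python
from itertools import permutations, combinations

def proper_nonempty_subsets(universe):
    """All nonempty proper subsets of `universe`."""
    s = list(universe)
    return [set(c) for r in range(1, len(s)) for c in combinations(s, r)]

def exists_ordering(universe, pred):
    """True if some ordering of `universe` satisfies the predicate."""
    return any(pred(p) for p in permutations(universe))

def is_prefix(order, S):
    return set(order[:len(S)]) == S

def is_suffix(order, S):
    return set(order[len(order) - len(S):]) == S

# Exhaustively check Lemma 4.7 on a 4-element universe.
U = [1, 2, 3, 4]
for S1 in proper_nonempty_subsets(U):
    for S2 in proper_nonempty_subsets(U):
        both_left = exists_ordering(
            U, lambda p: is_prefix(p, S1) and is_prefix(p, S2))
        left_right = exists_ordering(
            U, lambda p: is_prefix(p, S1) and is_suffix(p, S2))
        # The two conditions of the lemma never hold simultaneously.
        assert not (both_left and left_right)
```

The loop passes silently: for nested prefixes (condition 1) a prefix/suffix split (condition 2) would force S1 or S2 to be all of Ω, as in the proof.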
Algorithm 5: Finding-1-2-Subsequence
Input: An array A[1..T ] containing only 0/1/2 entries, and d (assume A[T + 1] = 0).
Output: All pairs (s, t) such that (t − s + 1) ≥ d; for all s < i < t, A[i] = 1; A[s], A[t] ∈ {1, 2};
(either s = 1 or A[s − 1] = 0 or A[s − 1] = A[s] = 2); and (either t = T or A[t + 1] = 0 or
A[t] = A[t + 1] = 2).
    Initialize ` = r = 0
    while r ≤ T do
        if A[r + 1] = 0 then
            if r − ` ≥ d then
                Output (` + 1, r)
            ` = r + 1; r = r + 1
        if A[r + 1] = 1 then
            r = r + 1
        if A[r + 1] = 2 then
            if A[r] = 1 then
                if r + 1 − ` ≥ d then
                    Output (` + 1, r + 1)
                ` = r; r = r + 1
            if A[r] ≠ 1 then
                ` = r; r = r + 1
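Algorithm 5 can be transcribed directly into Python (a sketch; we pad a 0-indexed input list so that indices run from 1 to T, with the sentinel A[T + 1] = 0 from the input assumption):

```python
def finding_1_2_subsequence(A, d):
    """Transcription of Algorithm 5. `A` is a 0-indexed Python list of
    0/1/2 entries; internally we work 1-indexed with sentinel A[T+1] = 0.
    Returns all (s, t) pairs described in the algorithm's output spec."""
    T = len(A)
    A = [None] + list(A) + [0]  # A[1..T] plus sentinel A[T + 1] = 0
    out = []
    l = r = 0
    while r <= T:
        if A[r + 1] == 0:
            if r - l >= d:
                out.append((l + 1, r))
            l = r + 1
            r = r + 1
        elif A[r + 1] == 1:
            r = r + 1
        else:  # A[r + 1] == 2
            if r >= 1 and A[r] == 1:
                if r + 1 - l >= d:
                    out.append((l + 1, r + 1))
            # In both sub-cases of the pseudocode, the window restarts at r.
            l = r
            r = r + 1
    return out
```

For example, `finding_1_2_subsequence([1, 1, 1, 0, 2, 1, 1, 2], 2)` returns `[(1, 3), (5, 8)]`: the run of 1’s at positions 1–3 and the run at positions 5–8 delimited by 2’s.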
Algorithm 6: CheckConditionB1(wj )
Input: A graph G = (P, N, E) and wj ∈ N . (For fwj refer to Definition 4.3.)
Output: Accept if Condition B1 is satisfied for the vertex wj .
    Initialize two arrays X and Y of length 4dj + 1 by setting all the entries to 0.
    Let ` = L(Aj [1]) and r = R(Aj [1]).
    for each k ← 1 to dj do
        Let s = L(Aj [k]) and t = R(Aj [k])
        If ` − 2dj ≤ s ≤ ` + 2dj then mark X[s − ` + 2dj + 1] = fwj (Aj [k])
        If ` − 2dj ≤ t ≤ ` + 2dj then mark X[t − ` + 2dj + 1] = fwj (Aj [k])
        If r − 2dj ≤ s ≤ r + 2dj then mark Y [s − r + 2dj + 1] = fwj (Aj [k])
        If r − 2dj ≤ t ≤ r + 2dj then mark Y [t − r + 2dj + 1] = fwj (Aj [k])
    for all pairs (a, b) output by Finding-1-2-Subsequence(X, dj ) do
        Let â = a + ` − 2dj − 1 and b̂ = b + ` − 2dj − 1.
        if for all 1 ≤ k ≤ dj either â ≤ L(Aj [k]) ≤ b̂ or â ≤ R(Aj [k]) ≤ b̂ then
            Return (k1 , k2 ) such that (either L(bk1 ) = â or R(bk1 ) = â) and (either L(bk2 ) = b̂ or R(bk2 ) = b̂)
    for all pairs (a, b) output by Finding-1-2-Subsequence(Y, dj ) do
        Let â = a + r − 2dj − 1 and b̂ = b + r − 2dj − 1.
        if for all 1 ≤ k ≤ dj either â ≤ L(Aj [k]) ≤ b̂ or â ≤ R(Aj [k]) ≤ b̂ then
            Return (k1 , k2 ) such that (either L(bk1 ) = â or R(bk1 ) = â) and (either L(bk2 ) = b̂ or R(bk2 ) = b̂)
    If nothing is returned then REJECT
Algorithm 7: Testing if G is PTPIG
Input: A graph G = (P, N, E)
Output: Accept if G is a PTPIG
1   for k ← 1 to t do
2       Initialize PQ-tree Tk with Bk ∪ {`, a} as leaves attached to the root (which is a P-node)
3       Restrict(Tk , Bk ∪ {`}); Restrict(Tk , Bk ∪ {a});
4   for j ← 1 to q do
5       if Condition B1 is satisfied by node wj then
6           if the number of partial-block-neighbors of wj is 0 then
7               j = j + 1
8           if the number of partial-block-neighbors of wj is 1 (let Bk be the partial-block-neighbor) then
9               if the number of block-neighbors of wj is 1 then
10                  Restrict(Tk , Ngbk (wj )); j = j + 1
11              if all pairs returned by CheckConditionB1(wj ) are of the form (k, k2 ) with k2 ≠ k then
12                  Restrict(Tk , Ngbk (wj ) ∪ {`}); j = j + 1
13              if all pairs returned by CheckConditionB1(wj ) are of the form (k1 , k) with k1 ≠ k then
14                  Restrict(Tk , Ngbk (wj ) ∪ {a}); j = j + 1
15              else
16                  ORestrict(Tk , Ngbk (wj ), {`, a}); j = j + 1
17          if the number of partial-block-neighbors of wj is 2 (let Bk1 , Bk2 be the partial-block-neighbors) then
18              if ({k1 , k2 } = {1, 2} and S^g_{G_P} starts with 121) or ({k1 , k2 } = {t − 1, t} and S^g_{G_P} ends with t(t − 1)t) then
19                  ORestrict(Tk1 , Ngbk1 (wj ), {`, a}); ORestrict(Tk2 , Ngbk2 (wj ), {`, a}); j = j + 1
20              else
21                  Let CheckConditionB1(wj ) return (k1 , k2 )
22                  Restrict(Tk1 , Ngbk1 (wj ) ∪ {`}); Restrict(Tk2 , Ngbk2 (wj ) ∪ {a}); j = j + 1
23          if the number of partial-block-neighbors of wj is ≥ 3 then
24              REJECT
25  Let σ1 , σ2 , . . . , σt be permutations satisfying the PQ-trees T1 , . . . , Tt .
26  if σ̄1 , σ̄2 , . . . , σ̄t−1 , σ̄t satisfies Condition (A) of Theorem 3.4, where σ̄s = σs or σsr for s = 1, 2, t − 1, t then
27      ACCEPT
28  else
29      REJECT
4.2.3 The algorithm
Algorithm 6 checks whether Condition (B1) is satisfied for a given wj ∈ N . The algorithm is
similar to the one in Step I, where we assumed that the graph GP is a connected reduced proper
interval graph. Once we have Algorithm 6 for checking whether the graph satisfies Condition (B1)
for every wj ∈ N , Algorithm 7 checks whether the graph G = (P, N, E) is a PTPIG. In the
following theorem we prove the correctness of the algorithm.
Theorem 4.8. Given a graph G = (P, N, E) such that GP is a connected proper interval graph,
Algorithm 7 correctly decides whether G is a PTPIG with probes P and nonprobes N in time
O(|P | + |N | + |E2 |), where E2 is the set of edges between probes P and nonprobes N .
Proof. We follow the notation, terminology and ideas developed at the beginning of Step II and in the
‘Idea of the Algorithm’. In Lines 1–3 of Algorithm 7, we fix the markers ` and a at the beginning
and end of each block Bk , respectively, k = 1, 2, . . . , t. Then for each j = 1, 2, . . . , q, we run the
algorithm. First, in Line 5, we check Condition B1 for wj by Algorithm 6, which takes help from
Algorithm 5. These two algorithms are similar to Algorithms 2 and 1, respectively. The difference is that Algorithm 6
not only ensures that the graph G satisfies Condition B1 for the vertex wj ∈ N but also returns (k1 , k2 ),
where k1 and k2 are the indices of the start and end blocks of the substring that helps to satisfy
Condition B1. It also ensures that no block between k1 and k2 is a partial-block-neighbor of wj .
If wj has more than two partial-block-neighbors, the algorithm rejects.
Now in Lines 6–24, Algorithm 7 uses the ideas developed for Algorithm 4 for the Oriented-Consecutive
1’s Problem to apply all the necessary restrictions on the orderings of vertices in each block, as
described in Section 4.2.1, except for the special conditions Condition 2a(2) for Category 2a and
Condition 2b(2) for Category 2b. Here we assume that all the calls to ORestrict are made at the
very end. This can be done by making two passes over the graph. Once all the calls to
ORestrict are made at the very end, Claim 4.6 gives the correctness of the algorithm so far.
Finally, to check whether the special conditions Condition 2a(2) for Category 2a and Condition
2b(2) for Category 2b are satisfied, we require Lines 17–19 and 25–26. Suppose wj falls under
Category 2a and both B1 and B2 are partial-block-neighbors of wj . Then Algorithm 6 returns
both (1, 2) and (2, 1). Note that we have already ensured that both Ngb1 (wj ) and Ngb2 (wj ) are
flushed either Left or Right. If for some other wj 0 we have already flushed Ngb1 (wj 0 ) Left or Right,
then by Lemma 4.7 we had only one option when flushing Ngb1 (wj ) Left or Right. If
for no other wj 0 we have already flushed Ngb1 (wj 0 ) Left or Right, then we can consider either σ1 or
σ1r . The same argument holds for σ2 , σt and σt−1 . Thus if we check all possibilities for σi or σir ,
i = 1, 2, t − 1, t (as in Lines 25–26 of Algorithm 7), we can correctly decide whether G is a PTPIG
with probes P and nonprobes N .
Now we compute the running time. The total running time of Algorithm 6 (including the formation
of the look-up tables and the function fwj ) is O(|P | + |E2 |) as before, where E2 is the set of edges
between probes P and nonprobes N . The variation of the PQ-tree algorithm used in Lines 10, 12, 14,
16, 19 and 22 requires O(|P | + |N | + |E2 |) time, as does the standard PQ-tree algorithm. Line
26 requires a constant multiple of |E2 | steps for the 4 blocks B1 , B2 , Bt−1 and Bt , where σs or σsr
(s = 1, 2, t − 1, t) is required if the special cases of Category 2a or 2b occur. Thus the total running
time is O(|P | + |N | + |E2 |).
4.3 Step III: The graph GP is a proper interval graph (not necessarily connected or reduced)
Finally, we consider a graph G = (V, E) with an independent set N (nonprobes) and P = V \ N
(probes) such that GP is a proper interval graph, which may not be connected. Suppose GP has
r connected components G1 , G2 , . . . , Gr with vertex sets P1 , . . . , Pr . For G to be a PTPIG, it is
essential that the subgraph of G induced by Pk ∪ N be a PTPIG for each k = 1, 2, . . . , r. As we
have seen in the last subsection, we can check whether all these subgraphs are PTPIGs in time O(|V | + |E|).
In fact, for each k, we can store all the possible canonical orderings of the vertices in Pk such that
the corresponding canonical sequence satisfies Condition (A) of Theorem 3.4, so that the graph
induced by Pk ∪ N is a PTPIG.
For any vertex wj ∈ N , we call Gk a component-neighbor of wj if there is a vertex in Pk that is
adjacent to wj . Gk is called a full-component-neighbor of wj if all the vertices in Pk are neighbors
of wj . Also, Gk is called a partial-component-neighbor of wj if there exists at least one vertex in Pk
that is a neighbor of wj and at least one vertex in Pk that is not a neighbor of wj .
We will only present the idea of the algorithm; its steps are clear from the description given.
We will use all the algorithms developed in the previous sections for this recognition algorithm.
4.3.1 Idea of the algorithm
To check whether the whole graph G is a PTPIG, we have to determine whether there exists a canonical ordering of
all the vertices in GP such that Condition (A) of Theorem 3.4 is satisfied for the whole graph.
Note that a canonical ordering of the vertices of GP places the vertices in each connected
component next to each other and, moreover, for each k, the ordering of the vertices of Gk is
a canonical ordering for the graph Gk . So to check whether G is a PTPIG, we have to determine whether there exist an
ordering of the connected components and canonical orderings of the vertices in each of the components
such that the corresponding canonical ordering satisfies Condition (A) of Theorem 3.4. In fact,
G is a PTPIG if and only if the following condition is satisfied:
Condition (C1): There exist a permutation π : {1, . . . , r} → {1, . . . , r} and canonical sequences
SG1 , . . . , SGr of G1 , . . . , Gr such that the canonical sequence SGP of GP obtained by concatenation
of the canonical sequences of Gπ(1) , . . . , Gπ(r) (that is, SGP = SGπ(1) . . . SGπ(r) ) has the property
that for all w ∈ N , there exists a perfect substring of w in SGP (that is, there exists a substring of
SGP in which only the neighbors of w appear and every neighbor of w appears at least once).
First, we use our previous algorithms to store all the possible canonical orderings of the
vertices in each component so that the graph induced by Pk ∪ N is a PTPIG, for each k. As usual,
we store the restrictions using the PQ-trees. Next, we have to add some more restrictions
on the canonical orderings of the vertices in each of the connected components which are necessary
for the graph G to be a PTPIG. These restrictions will be stored in the same PQ-trees. At
last, we check whether there exists an ordering of the components such that the corresponding canonical
ordering satisfies Condition (A) of Theorem 3.4.
To find the extra restrictions that we have to place on the canonical orderings, we consider several
cases depending on how many partial-component-neighbors each wj ∈ N has. The main
idea is that the following is a necessary condition for Condition (C1) above to be satisfied:
Condition (C2): If a vertex wj has more than one component-neighbor, then for any component-neighbor
Gk the canonical ordering of the vertices in Gk should ensure not only that there is a perfect
substring of wj in SGk , but also that a perfect substring of wj is present either at the beginning
or at the end of SGk .
Depending on how many partial-component-neighbors each wj ∈ N has and on the number
of blocks in each partial component, we list the restrictions on the canonical orderings of the
vertices in each component so that Condition (C2) is satisfied. Here we also use the notion of
block-neighbor as defined in the previous section.
• (Case 1): wj has only one component-neighbor. In this case there are no extra restrictions.
• (Case 2): wj has no partial-component-neighbors. In this case there are no extra restrictions.
• (Case 3): wj has more than two partial-component-neighbors. In this case the graph G cannot
be a PTPIG.
• (Case 4): wj has more than one component-neighbor, and in one of the component-neighbors
there are 2 or more partial-block-neighbors. In this case the graph G cannot be a PTPIG.
• (Case 5): wj has more than one component-neighbor, and Gk is a partial-component-neighbor
of wj . Here we have two sub-cases, depending on the arrangement of the blocks of Gk :
– (Case 5a): Gk has only one block. Then we add the restriction that, in the
canonical ordering of the vertices in Gk , the neighbors of wj must be flushed either Left
or Right. If the component Gk is a partial-component-neighbor of more than
one nonprobe vertex, then by Lemma 4.7 the choice of whether to flush Left or Right is
fixed.
– (Case 5b): Gk has more than one block, and at least one of the blocks (say Bsk ) is
a partial-block-neighbor of wj . By Corollary 3.10 and Corollary 3.11 we see that there
is at most one direction (Left or Right) to flush the neighbors of wj in Bsk so that in
the corresponding canonical sequence there is a perfect substring of wj either at the
beginning or at the end.
Note that one can easily identify the cases above and apply the necessary restrictions on the
orderings using the PQ-trees and the same techniques as developed for the Oriented-Consecutive
1’s Problem in Section 4.2.2. If after applying the restrictions we see that Condition (C2) is not
satisfied, we know that G is not a PTPIG. All this can be done in time O(|V | + |E|).
Once we have applied the restrictions and ensured that Condition (C2) is satisfied, we
need to check whether there exists an ordering of the components such that Condition (C1) is satisfied.
For any wj ∈ N and any connected component Gk , if Gk is a component-neighbor of wj , then we
know whether the perfect substring of wj is at the beginning or at the end of the canonical sequence of
Gk . If the perfect substring of wj is at the beginning of the canonical sequence of Gk , we call
wj left-oriented with respect to Gk , and if the perfect substring of wj is at the end of the canonical
sequence of Gk , we call wj right-oriented with respect to Gk .
The problem now is to check whether there exists an ordering of the connected components such that for
any wj ∈ N the following conditions are satisfied:
1. the component-neighbors of wj must be consecutive,
2. there exists at most one component-neighbor Gk of wj such that wj is right-oriented
with respect to Gk , and this component-neighbor is the first component-neighbor of wj in the
ordering,
3. there exists at most one component-neighbor Gk of wj such that wj is left-oriented with
respect to Gk , and this component-neighbor is the last component-neighbor of wj in the ordering.
This problem can be reduced to a consecutive 1’s problem. Let Ω = ∪_{k=1}^{r} {`k , ak }. For any wj ∈ N ,
define the set Tj as follows:
• if Gk is a full-component-neighbor of wj , then Tj contains `k and ak ;
• if Gk is a partial-component-neighbor of wj and wj is right-oriented with respect to Gk , then
Tj contains ak ;
• if Gk is a partial-component-neighbor of wj and wj is left-oriented with respect to Gk , then
Tj contains `k .
Now G is a PTPIG if and only if there exists a permutation of Ω that satisfies the following
properties:
• For all k, `k and ak are next to each other.
• For all wj ∈ N , the elements in Tj must be consecutive.
Note that the above can be tested easily using the PQ-tree in time linear in |V | and |E|, and with
this we have the complete recognition algorithm for PTPIG.
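The reduction above can be illustrated by a brute-force check over permutations of Ω (for illustration only — the paper performs the same test with a PQ-tree in linear time; the encoding of the markers `k and ak as pairs ('l', k) and ('a', k) is ours):

```python
from itertools import permutations

def consecutive(order, S):
    """True if the elements of S occupy consecutive positions in `order`."""
    pos = sorted(order.index(x) for x in S)
    return pos[-1] - pos[0] == len(S) - 1

def has_valid_marker_ordering(r, T_sets):
    """Omega = {('l', k), ('a', k) : k = 1..r}; each Tj in T_sets is a
    subset of Omega.  Check whether some permutation of Omega keeps every
    pair {l_k, a_k} adjacent and every Tj consecutive (brute force)."""
    omega = [(m, k) for k in range(1, r + 1) for m in ('l', 'a')]
    pairs = [{('l', k), ('a', k)} for k in range(1, r + 1)]
    for p in permutations(omega):
        if (all(consecutive(p, pr) for pr in pairs)
                and all(consecutive(p, T) for T in T_sets)):
            return True
    return False
```

For r = 2, a single nonprobe that is right-oriented with respect to G1 and left-oriented with respect to G2 (Tj = {a1, `2}) is feasible, while two nonprobes with opposite orientation demands (Tj = {a1, `2} and Tj' = {a2, `1}) are not.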
5 Conclusion
The study of interval graphs was spearheaded by Benzer [2] in his studies in the field of molecular
biology. In [37], Zhang introduced a generalization of interval graphs called probe interval graphs
(PIG) in an attempt to aid a problem called cosmid contig mapping. In order to obtain a better
model, another generalization of interval graphs that captures overlap
information, namely tagged probe interval graphs (TPIG), was introduced by Sheng, Wang and Zhang in [34]. Still
there is no recognition algorithm for TPIG in general.
In this paper we characterize and obtain a linear-time recognition algorithm for a special class of
TPIG, namely proper tagged probe interval graphs (PTPIG). The problem of obtaining a recognition algorithm for TPIG in general is challenging and remains open to date. It is well known that an
interval graph is a proper interval graph if and only if it does not contain K1,3 as an induced subgraph. A similar forbidden-subgraph characterization for PTPIG is another interesting problem.
References
[1] Asim Basu, Sandip Das, Shamik Ghosh and Malay Sen, Circular-arc bigraphs and its subclasses, J. Graph Theory 73 (2013), 361–376.
[2] S. Benzer, On the topology of the genetic fine structure, Proc. Nat. Acad. Sci. USA 45 (1959), 1607–1620.
[3] K. S. Booth and G. S. Lueker, Linear algorithms to recognize interval graphs and test for the consecutive ones property, Proc. 7th ACM Symp. Theory of Computing, (1975), 255–265.
[4] K. S. Booth and G. S. Lueker, Testing for the consecutive ones property, interval graphs and graph planarity using PQ-tree algorithms, J. Comput. System Sci. 13 (1976), 335–379.
[5] D. E. Brown, Variations on interval graphs, Ph.D. Thesis, Univ. of Colorado at Denver, USA, 2004.
[6] D. G. Corneil, A simple 3-sweep LBFS algorithm for the recognition of unit interval graphs, Discrete Appl. Math. 138 (2004), 371–379.
[7] D. G. Corneil, S. Olariu and L. Stewart, The LBFS structure and recognition of interval graphs, in preparation; extended abstract appeared as The ultimate interval graph recognition algorithm?, in: Proceedings of SODA 98, Ninth Annual ACM-SIAM Symposium on Discrete Algorithms, San Francisco, CA, USA, 1998, pp. 175–180.
[8] D. G. Corneil, S. Olariu and L. Stewart, The LBFS structure and recognition of interval graphs, SIAM J. Discrete Math. 23 (2010), 1905–1953.
[9] X. Deng, P. Hell and J. Huang, Linear time representation algorithms for proper circular arc graphs and proper interval graphs, SIAM J. Comput. 25 (1996), 390–403.
[10] C. M. H. de Figueiredo, J. Meidanis and C. P. de Mello, A linear-time algorithm for proper interval graph recognition, Inform. Process. Lett. 56 (1995), 179–184.
[11] Shamik Ghosh, Maitry Podder and Malay Sen, Adjacency matrices of probe interval graphs, Discrete Appl. Math. 158 (2010), 2004–2013.
[12] M. C. Golumbic, Algorithmic Graph Theory and Perfect Graphs, Annals of Disc. Math. 57, Elsevier Sci., USA, 2004.
[13] M. C. Golumbic and A. Trenk, Tolerance Graphs, Cambridge Studies in Advanced Mathematics, Cambridge University Press, 2004.
[14] M. Habib, R. McConnell, C. Paul and L. Viennot, Lex-BFS and partition refinement, with applications to transitive orientation, interval graph recognition, and consecutive ones testing, Theor. Comput. Sci. 234 (2000), 59–84.
[15] Pavol Hell and Jing Huang, Certifying LexBFS recognition algorithms for proper interval graphs and proper interval bigraphs, SIAM J. Discrete Math. 18 (2005), 554–570.
[16] P. Hell, R. Shamir and R. Sharan, A fully dynamic algorithm for recognizing and representing proper interval graphs, SIAM J. Comput. 31 (2001), 289–305.
[17] Wen-Lian Hsu, A simple test for the consecutive ones property, J. Algorithms 43 (2002), 1–16.
[18] J. Johnson and J. Spinrad, A polynomial time recognition algorithm for probe interval graphs, Proc. 12th Annual ACM-SIAM Symposium on Disc. Algorithms, Washington D.C., 2001.
[19] D. S. Malik, M. K. Sen and S. Ghosh, Introduction to Graph Theory, Cengage Learning, New York, 2014.
[20] R. M. McConnell and Y. Nussbaum, Linear-time recognition of probe interval graphs, in: A. Fiat, P. Sanders (Eds.), ESA 2009, LNCS, vol. 5757, Springer, Heidelberg, 2009, pp. 349–360. Full version available at arXiv:1307.5547.
[21] R. M. McConnell and J. P. Spinrad, Construction of probe interval models, in: SODA ’02: Proceedings of the Thirteenth Annual ACM-SIAM Symposium on Discrete Algorithms, (2002), 866–875.
[22] Terry A. McKee and F. R. McMorris, Topics in Intersection Graph Theory, SIAM Monographs on Discrete Mathematics and Applications, Philadelphia, 1999.
[23] F. McMorris, C. Wang and P. Zhang, On probe interval graphs, Disc. App. Math. 88 (1998), 315–324.
[24] J. Meidanis, O. Porto and G. P. Telles, On the consecutive ones property, Disc. App. Math. 88 (1998), 325–354.
[25] Yahav Nussbaum, Recognition of probe proper interval graphs, Discrete Applied Mathematics 167 (2014), 228–238.
[26] H.-J. Pan, Partitioned probe proper interval graphs, Masters Thesis, National Chung Cheng University, Taiwan, 2008.
[27] F. S. Roberts, Representations of indifference relations, Ph.D. Thesis, Stanford University, 1968.
[28] F. S. Roberts, Indifference graphs, in: F. Harary (Ed.), Proof Techniques in Graph Theory, Academic Press, New York, 1969, pp. 139–146.
[29] F. S. Roberts, On the compatibility between a graph and a simple order, J. Combin. Theory Ser. B 11 (1971), 28–38.
[30] F. S. Roberts, Some applications of graph theory, in: Combinatorial and Computational Mathematics, World Scientific Publishing Co. Pte. Ltd.
[31] M. Sen, S. Das, A. B. Roy and D. B. West, Interval digraphs: an analogue of interval graphs, J. Graph Theory 13 (1989), 189–202.
[32] M. Sen, S. Das and D. B. West, Circular-arc digraphs: a characterization, J. Graph Theory 13 (1989), 581–592.
[33] L. Sheng, Cycle free probe interval graphs, Congressus Numerantium 140 (1999), 33–42.
[34] L. Sheng, C. Wang and P. Zhang, Tagged probe interval graphs, J. Combinatorial Optimization 5 (2001), 133–142.
[35] L. Sheng, C. Wang and P. Zhang, On the perfectness of tagged probe interval graphs, in: Discrete Mathematical Problems with Medical Applications, American Mathematical Society, Providence, RI, 2000, pp. 159–163.
[36] L. Sheng, C. Wang and P. Zhang, Tagged probe interval graphs, DIMACS Technical Report 98-12, 1998.
[37] P. Zhang, Probe interval graphs and their application to physical mapping of DNA, Manuscript, 1994.
Central limit theorems for functionals of large sample covariance matrix and mean vector in matrix-variate location mixture of normal distributions
arXiv:1602.05522v2 [] 13 Jul 2017
Taras Bodnar^{a,1}, Stepan Mazur^{b}, Nestor Parolya^{c}
^{a} Department of Mathematics, Stockholm University, SE-10691 Stockholm, Sweden
^{b} Department of Statistics, School of Business, Örebro University, SE-70182 Örebro, Sweden
^{c} Institute of Statistics, Leibniz University of Hannover, D-30167 Hannover, Germany
Abstract
In this paper we consider the asymptotic distributions of functionals of the sample covariance matrix and the sample mean vector obtained under the assumption that the matrix of observations has a matrix-variate location mixture of normal distributions. The central limit theorem is derived for the product of the sample covariance matrix and the sample mean vector. Moreover, we consider the product of the inverse sample covariance matrix and the mean vector, for which the central limit theorem is established as well. All results are obtained under the large-dimensional asymptotic regime, where the dimension p and the sample size n tend to infinity such that p/n → c ∈ [0, +∞) when the sample covariance matrix does not need to be invertible, and p/n → c ∈ [0, 1) otherwise.
AMS Classification: 62H10, 62E15, 62E20, 60F05, 60B20
Keywords: Normal mixtures, skew normal distribution, large dimensional asymptotics, stochastic representation, random matrix theory.
1 Corresponding author. E-mail address: [email protected]. The second author appreciates the financial support of the Swedish Research Council Grant Dnr: 2013-5180 and Riksbankens Jubileumsfond Grant Dnr: P13-1024:1
1 Introduction
Functions of the sample covariance matrix and the sample mean vector appear in various
statistical applications. Classical improvement techniques for mean estimation have already been discussed by Stein (1956) and Jorion (1986). In particular, Efron (2006) constructed
confidence regions of smaller volume than the standard spheres for the mean vector of a multivariate normal distribution. Fan et al. (2008), Bai and Shi (2011), Bodnar and Gupta (2011),
Cai and Zhou (2012), Cai and Yuan (2012), Fan et al. (2013), Bodnar et al. (2014a), Wang et al.
(2015), Bodnar et al. (2016a), among others, suggested improved techniques for the estimation
of the covariance matrix and the precision matrix (the inverse of the covariance matrix).
In our work we introduce the family of matrix-variate location mixtures of normal distributions (MVLMN), which is a generalization of the models considered by Azzalini and Dalla-Valle
(1996), Azzalini and Capitanio (1999), Azzalini (2005), Liseo and Loperfido (2003, 2006), Bartoletti and Loperfido (2010), Loperfido (2010), Christiansen and Loperfido (2014), Adcock et al.
(2015), De Luca and Loperfido (2015), among others. Under the assumption of MVLMN we
consider expressions involving the sample mean vector x and the sample covariance matrix S.
In particular, we deal with the two products l⊤ Sx and l⊤ S−1 x, where l is a non-zero vector of
constants. It is noted that these kinds of expressions have not been intensively studied in the
literature, although they are present in numerous important applications. The first application
of the products arises in portfolio theory, where the vector of optimal portfolio weights is
proportional to S−1 x. The second application is in discriminant analysis, where the coefficients of the discriminant function are expressed as a product of the inverse sample covariance
matrix and the difference of the sample mean vectors.
Bodnar and Okhrin (2011) derived the exact distribution of the product of the inverse sample
covariance matrix and the sample mean vector under the assumption of normality, while Kotsiuba and Mazur (2015) obtained its asymptotic distribution as well as its approximate density
based on the Gaussian integral and a third-order Taylor series expansion. Moreover, Bodnar
et al. (2013, 2014b) analyzed the product of the sample (singular) covariance matrix and the
sample mean vector. In the present paper, we contribute to the existing literature by deriving
central limit theorems (CLTs) under the introduced class of matrix-variate distributions in
the case of a high-dimensional observation matrix. Under the considered family of distributions, the columns of the observation matrix are no longer independent and, thus, the CLTs
cover a more general class of random matrices.
Nowadays, modern scientific data include a large number of sample points, which is often
comparable to the number of features (dimension), and so the sample covariance matrix and
the sample mean vector are no longer efficient estimators. For example, stock markets
include a large number of companies, which is often close to the number of available time points.
In order to better understand the statistical properties of the traditional estimators and tests
in high-dimensional settings, it is of interest to study the asymptotic distributions of the
above mentioned bilinear forms involving the sample covariance matrix and the sample mean
vector.
The appropriate central limit theorems, which do not suffer from the “curse of dimensionality” and do not reduce the number of dimensions, are of great interest for high-dimensional
statistics, because more efficient estimators and tests may be constructed and applied in practice. The classical multivariate procedures are based on central limit theorems assuming that
the dimension p is fixed and the sample size n increases. However, numerous authors provide
quite reasonable proofs that this assumption does not lead to precise distributional approximations for commonly used statistics, and that under increasing-dimension asymptotics better
approximations can be obtained [see, e.g., Bai and Silverstein (2004) and references therein].
Technically speaking, under the high-dimensional asymptotics we understand the case when
the sample size n and the dimension p tend to infinity such that their ratio p/n converges to
some positive constant c. Under this condition the well-known Marčenko-Pastur and Silverstein
equations were derived [see Marčenko and Pastur (1967), Silverstein (1995)].
The rest of the paper is structured as follows. In Section 2 we introduce a semi-parametric
matrix-variate location mixture of normal distributions. The main results are given in Section 3,
where we derive the central limit theorems, under the high-dimensional asymptotic regime, for products of the
(inverse) sample covariance matrix and the sample mean vector under the MVLMN. Section 4
presents a short numerical study in order to verify the obtained analytic results.
2 Semi-parametric family of matrix-variate location mixture of normal distributions
In this section we introduce the family of MVLMN, which generalizes the existing families of
skew normal distributions.
Let
\[
X = \begin{pmatrix} x_{11} & \cdots & x_{1n} \\ \vdots & \ddots & \vdots \\ x_{p1} & \cdots & x_{pn} \end{pmatrix} = (x_1, \ldots, x_n),
\]
be the p × n observation matrix, where xj is the j-th observation vector. In the following, we
assume that the random matrix X possesses a stochastic representation given by
\[
X \overset{d}{=} Y + B \nu 1_n^{\top}, \tag{1}
\]
where Y ∼ Np,n (µ1>
n , Σ ⊗ In ) (p × n-dimensional matrix-variate normal distribution with mean
>
matrix µ1n and covariance matrix Σ ⊗ In ), ν is a q-dimensional random vector with continuous
density function fν (·), B is a p × q matrix of constants. Further, it is assumed that Y and ν
are independently distributed. If the random matrix X follows model (1), then we say that X is MVLMN distributed with parameters µ, Σ, B, and f_ν(·). The first three parameters are finite dimensional, while the fourth is infinite dimensional; this makes model (1) semi-parametric. We denote this by $X \sim LMN_{p,n;q}(\mu, \Sigma, B; f_\nu)$. If $f_\nu$ can be parametrized by a finite dimensional parameter θ, then (1) reduces to a parametric model, which is denoted by $X \sim LMN_{p,n;q}(\mu, \Sigma, B; \theta)$. If n = 1, then we use the notation $LMN_{p;q}(\cdot, \cdot, \cdot; \cdot)$
instead of LMN p,1;q (·, ·, ·; ·).
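The representation (1) is directly simulable. A minimal sketch (all dimensions and parameter values below are illustrative choices, not taken from the paper), using a truncated-normal mixing vector ν = |ψ| as one possible choice of f_ν:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative dimensions (hypothetical choices, not from the paper)
p, n, q = 5, 10, 3

mu = rng.uniform(-1.0, 1.0, p)             # location vector
B = rng.uniform(0.0, 1.0, (p, q))          # p x q matrix of constants
Sigma = np.diag(rng.uniform(0.5, 1.5, p))  # covariance matrix (diagonal for simplicity)

# Y ~ N_{p,n}(mu 1_n^T, Sigma (x) I_n): i.i.d. columns N_p(mu, Sigma)
Y = mu[:, None] + np.linalg.cholesky(Sigma) @ rng.standard_normal((p, n))

# nu = |psi| with psi ~ N_q(0, I_q): a q-variate truncated normal mixing vector
nu = np.abs(rng.standard_normal(q))

# Representation (1): every column of Y is shifted by the same random vector B nu
X = Y + np.outer(B @ nu, np.ones(n))
```

Conditionally on ν the columns of X are i.i.d. $N_p(\mu + B\nu, \Sigma)$; marginally they are dependent through the common random shift Bν.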
From (1) the density function of X is expressed as
$$f_X(Z) = \int_{\mathbb{R}^q} f_{N_{p,n}(\mu 1_n^\top, \Sigma \otimes I_n)}\left(Z - B\nu^* 1_n^\top\right) f_\nu(\nu^*)\, d\nu^*. \qquad (2)$$
In the special case when ν = |ψ| is the vector formed by the absolute values of every element of ψ, where ψ ∼ N_q(0, Ω), i.e., ν has a q-variate truncated normal distribution, we get
Proposition 1. Assume model (1). Let ν = |ψ| with ψ ∼ Nq (0, Ω). Then the density function
of X is given by
$$f_X(Z) = \widetilde{C}^{-1}\, \Phi_q\left(0; -DE\,\mathrm{vec}(Z - \mu 1_n^\top), D\right) \phi_{pn}\left(\mathrm{vec}(Z - \mu 1_n^\top); 0, F\right), \qquad (3)$$
where $D = (nB^\top \Sigma^{-1} B + \Omega^{-1})^{-1}$, $E = 1_n^\top \otimes B^\top \Sigma^{-1}$, $F = (I_n \otimes \Sigma^{-1} - E^\top D E)^{-1}$, and
$$\widetilde{C}^{-1} = C^{-1}\,\frac{|F|^{1/2}|D|^{1/2}}{|\Omega|^{1/2}|\Sigma|^{n/2}} \quad \text{with } C = \Phi_q(0; 0, \Omega).$$
The proof of Proposition 1 is given in the Appendix.
It is remarkable that model (1) includes several skew-normal distributions considered by
Azzalini and Dalla-Valle (1996), Azzalini and Capitanio (1999), Azzalini (2005). For example,
in the case of n = 1, q = 1, µ = 0, $B = \Delta 1_p$, and $\Sigma = (I_p - \Delta^2)^{1/2}\Psi(I_p - \Delta^2)^{1/2}$ we get
$$X \stackrel{d}{=} (I_p - \Delta^2)^{1/2} v_0 + \Delta 1_p |v_1|, \qquad (4)$$
where v0 ∼ Np (0, Ψ) and v1 ∼ N (0, 1) are independently distributed; Ψ is a correlation matrix
and ∆ = diag(δ1 , ..., δp ) with δj ∈ (−1, 1). Model (4) was previously introduced by Azzalini
(2005).
Moreover, model (1) also extends the classical random coefficient growth-curve model (see Potthoff and Roy (1964), Amemiya (1994), among others), i.e., the columns of X can be rewritten in the following way
$$x_i \stackrel{d}{=} \mu + B\nu + \varepsilon_i, \quad i = 1, \dots, n, \qquad (5)$$
where ε1 , . . . , εn are i.i.d. Np (0, Σ), ν ∼ fν and ε1 , . . . , εn , ν are independent. In the random
coefficient growth-curve model, it is typically assumed that εi ∼ Np (0, σ 2 I) and ν ∼ Nq (0, Ω)
(see, e.g., Rao (1965)). As a result, the suggested MVLMN model may be useful for studying the robustness of random coefficient models against non-normality.
3 CLTs for expressions involving the sample covariance matrix and the sample mean vector
The sample estimators for the mean vector and the covariance matrix are given by
$$\bar{x} = \frac{1}{n}\sum_{i=1}^n x_i = \frac{1}{n} X 1_n \quad \text{and} \quad S = \frac{1}{n-1}\sum_{i=1}^n (x_i - \bar{x})(x_i - \bar{x})^\top = \frac{1}{n-1} X V X^\top,$$
where $V = I_n - \frac{1}{n} 1_n 1_n^\top$ is a symmetric idempotent matrix, i.e., $V^\top = V$ and $V^2 = V$.
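These matrix identities are easy to verify numerically; a small sketch (sizes are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(1)
p, n = 4, 50
X = rng.standard_normal((p, n))            # columns are the observations x_i

ones = np.ones((n, 1))
V = np.eye(n) - ones @ ones.T / n          # centering matrix V = I_n - (1/n) 1_n 1_n^T
assert np.allclose(V, V.T) and np.allclose(V @ V, V)  # symmetric and idempotent

xbar = (X @ ones / n).ravel()              # sample mean:       (1/n) X 1_n
S = X @ V @ X.T / (n - 1)                  # sample covariance: (1/(n-1)) X V X^T

# Agrees with the elementwise definition and with numpy's 1/(n-1) convention
S_sum = sum(np.outer(X[:, i] - xbar, X[:, i] - xbar) for i in range(n)) / (n - 1)
assert np.allclose(S, S_sum) and np.allclose(S, np.cov(X))
```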
The following proposition shows that x̄ and S are independently distributed and presents their marginal distributions under model (1). Moreover, its results lead to the conclusion that the independence of x̄ and S cannot be used as a characterization property of the multivariate normal distribution if the observation vectors in the data matrix are dependent.
Proposition 2. Let X ∼ LMN p,n;q (µ, Σ, B; fν ). Then
(a) (n−1)S ∼ Wp (n−1, Σ) (p-dimensional Wishart distribution for p ≤ n−1 and p-dimensional
singular Wishart distribution for p > n − 1 with (n − 1) degrees of freedom and covariance
matrix Σ),
(b) $\bar{x} \sim LMN_{p;q}\left(\mu, \frac{1}{n}\Sigma, B; f_\nu\right)$,
(c) S and x are independently distributed.
Proof. The statements of the proposition follow immediately from the fact that
$$\bar{x} = \frac{1}{n}\sum_{i=1}^n x_i = \bar{y} + B\nu \quad \text{and} \quad S = \frac{1}{n-1} X V X^\top = \frac{1}{n-1} Y V Y^\top. \qquad (6)$$
Indeed, from (6), the multivariate normality of Y and the independence of Y and ν, we get that x̄ and S are independent; S is (singular) Wishart distributed with (n − 1) degrees of freedom and covariance matrix Σ; x̄ has a location mixture of normal distributions with parameters µ, $\frac{1}{n}\Sigma$, B and $f_\nu$.
For the validity of the asymptotic results presented in Sections 3.1 and 3.2 we need the
following two conditions
(A1) Let (λi , ui ) denote the set of eigenvalues and eigenvectors of Σ. We assume that there
exist m1 and M1 such that
0 < m1 ≤ λ1 ≤ λ2 ≤ ... ≤ λp ≤ M1 < ∞
uniformly in p.
(A2) There exists M2 < ∞ such that
$$|u_i^\top \mu| \le M_2 \quad \text{and} \quad |u_i^\top b_j| \le M_2 \quad \text{for all } i = 1, \dots, p \text{ and } j = 1, \dots, q$$
uniformly in p where bj , j = 1, ..., q, are the columns of B.
Generally, we say that an arbitrary p-dimensional vector l satisfies condition (A2) if $|u_i^\top l| \le M_2 < \infty$ for all $i = 1, \dots, p$.
Assumption (A1) is a classical condition in random matrix theory (see, Bai and Silverstein
(2004)), which bounds the spectrum of Σ from below as well as from above. Assumption (A2) is
a technical one. In combination with (A1) this condition ensures that p−1 µ> Σµ, p−1 µ> Σ−1 µ,
p−1 µ> Σ3 µ, p−1 µ> Σ−3 µ, as well as that all the diagonal elements of B> ΣB, B> Σ3 B, and
B> Σ−1 B are uniformly bounded. All these quadratic forms are used in the statements and the
proofs of our results. Note that the constants appearing in the inequalities will be denoted by
M₂ and may vary from one expression to another. We further note that assumption (A2) is automatically fulfilled if µ and B are sparse (not Σ itself). More precisely, a stronger condition which implies (A2) is ||µ|| < ∞ and ||b_j|| < ∞ for j = 1, ..., q uniformly in p. Indeed, using
the Cauchy–Schwarz inequality on $(u_i^\top \mu)^2$ we get
$$(u_i^\top \mu)^2 \le \|u_i\|^2 \|\mu\|^2 = \|\mu\|^2 < \infty. \qquad (7)$$
It is remarkable that ||µ|| < ∞ is fulfilled if µ is indeed a sparse vector or if all its elements are of order $O(p^{-1/2})$. Thus, a sufficient condition for (A2) to hold is that either µ and B are sparse or their elements are reasonably small but not exactly equal to zero. Note that the assumption of
sparsity for B is quite natural in the context of high-dimensional random coefficients regression
models. It implies that if B is large dimensional, then it may have many zeros and there exists
a small set of highly significant random coefficients which drive the random effects in the model
(see, e.g., Bühlmann and van de Geer (2011)). Moreover, it is hard to estimate µ in a reasonable
way when ||µ|| → ∞ as p → ∞ (see, e.g., Bodnar et al. (2016c)). In general, however, ||µ|| and ||b_j|| need not be bounded; in that case the sparsity of the eigenvectors of the matrix Σ may guarantee the validity of (A2). Consequently, depending on the properties of the given data set
the proposed model may cover many practical problems.
3.1 CLT for the product of sample covariance matrix and sample mean vector
In this section we present the central limit theorem for the product of the sample covariance
matrix and the sample mean vector.
Theorem 1. Assume $X \sim LMN_{p,n;q}(\mu, \Sigma, B; f_\nu)$ with Σ positive definite and let $p/n = c + o(n^{-1/2})$, $c \in [0, +\infty)$ as $n \to \infty$. Let l be a p-dimensional vector of constants that satisfies condition (A2). Then, under (A1) and (A2) it holds that
$$\sqrt{n}\,\sigma_\nu^{-1}\left(l^\top S \bar{x} - l^\top \Sigma \mu_\nu\right) \stackrel{D}{\longrightarrow} N(0, 1) \quad \text{for } p/n \to c \in [0, +\infty) \text{ as } n \to \infty, \qquad (8)$$
where
$$\mu_\nu = \mu + B\nu, \qquad (9)$$
$$\sigma_\nu^2 = \left(\mu_\nu^\top \Sigma \mu_\nu + c\,\frac{\mathrm{tr}(\Sigma^2)}{p}\right) l^\top \Sigma l + (l^\top \Sigma \mu_\nu)^2 + l^\top \Sigma^3 l. \qquad (10)$$
Proof. First, we consider the case $p \le n-1$, i.e., S has a Wishart distribution. Let $L = (l, \bar{x})^\top$ and define $\widetilde{S} = L S L^\top = \{\widetilde{S}_{ij}\}_{i,j=1,2}$ with $\widetilde{S}_{11} = l^\top S l$, $\widetilde{S}_{12} = l^\top S \bar{x}$, $\widetilde{S}_{21} = \bar{x}^\top S l$, and $\widetilde{S}_{22} = \bar{x}^\top S \bar{x}$. Similarly, let $\widetilde{\Sigma} = L \Sigma L^\top = \{\widetilde{\Sigma}_{ij}\}_{i,j=1,2}$ with $\widetilde{\Sigma}_{11} = l^\top \Sigma l$, $\widetilde{\Sigma}_{12} = l^\top \Sigma \bar{x}$, $\widetilde{\Sigma}_{21} = \bar{x}^\top \Sigma l$, and $\widetilde{\Sigma}_{22} = \bar{x}^\top \Sigma \bar{x}$.
Since S and x̄ are independently distributed, $S \sim W_p\left(n-1, \frac{1}{n-1}\Sigma\right)$ and $\mathrm{rank}\, L = 2 \le p$ with probability one, we get from Theorem 3.2.5 of Muirhead (1982) that $\widetilde{S}\,|\,\bar{x} \sim W_2\left(n-1, \frac{1}{n-1}\widetilde{\Sigma}\right)$. As a result, the application of Theorem 3.2.10 of Muirhead (1982) leads to
$$\widetilde{S}_{12}\,\Big|\,\widetilde{S}_{22}, \bar{x} \sim N\left(\widetilde{\Sigma}_{12}\widetilde{\Sigma}_{22}^{-1}\widetilde{S}_{22},\ \frac{1}{n-1}\widetilde{\Sigma}_{11\cdot 2}\widetilde{S}_{22}\right),$$
where $\widetilde{\Sigma}_{11\cdot 2} = \widetilde{\Sigma}_{11} - \widetilde{\Sigma}_{12}^2/\widetilde{\Sigma}_{22}$ is the Schur complement.
Let $\xi = (n-1)\widetilde{S}_{22}/\widetilde{\Sigma}_{22}$; then
$$l^\top S \bar{x}\,\Big|\,\xi, \bar{x} \sim N\left(\frac{\xi}{n-1}\, l^\top \Sigma \bar{x},\ \frac{\xi}{(n-1)^2}\left[\bar{x}^\top \Sigma \bar{x}\; l^\top \Sigma l - (\bar{x}^\top \Sigma l)^2\right]\right).$$
From Theorem 3.2.8 of Muirhead (1982) it follows that ξ and x̄ are independently distributed and $\xi \sim \chi^2_{n-1}$. Hence, the stochastic representation of $l^\top S \bar{x}$ is given by
$$l^\top S \bar{x} \stackrel{d}{=} \frac{\xi}{n-1}\, l^\top \Sigma \bar{x} + \sqrt{\frac{\xi}{n-1}}\left(\bar{x}^\top \Sigma \bar{x}\; l^\top \Sigma l - (l^\top \Sigma \bar{x})^2\right)^{1/2} \frac{z_0}{\sqrt{n-1}}, \qquad (11)$$
where $\xi \sim \chi^2_{n-1}$, $z_0 \sim N(0, 1)$, $\bar{x} \sim LMN_{p;q}\left(\mu, \frac{1}{n}\Sigma, B; f_\nu\right)$; ξ, z₀ and x̄ are mutually independent. The symbol "$\stackrel{d}{=}$" stands for equality in distribution.
It is remarkable that the stochastic representation (11) for $l^\top S \bar{x}$ remains valid also in the case $p > n-1$, following the proof of Theorem 4 in Bodnar et al. (2014b). Hence, we make no difference between these two cases in the remaining part of the proof.
From the properties of the χ²-distribution and using the fact that $n/(n-1) \to 1$ as $n \to \infty$, we immediately receive
$$\sqrt{n}\left(\frac{\xi}{n} - 1\right) \stackrel{D}{\longrightarrow} N(0, 2) \quad \text{as } n \to \infty. \qquad (12)$$
We further get that $\sqrt{n}(z_0/\sqrt{n}) \sim N(0, 1)$ for all n and, consequently, it is also its asymptotic distribution.
Next, we show that $l^\top \Sigma \bar{x}$ and $\bar{x}^\top \Sigma \bar{x}$ are jointly asymptotically normally distributed given ν. For any $a_1$ and $a_2$, we consider
$$a_1 \bar{x}^\top \Sigma \bar{x} + 2 a_2 l^\top \Sigma \bar{x} = a_1\left(\bar{x} + \frac{a_2}{a_1} l\right)^\top \Sigma \left(\bar{x} + \frac{a_2}{a_1} l\right) - \frac{a_2^2}{a_1}\, l^\top \Sigma l = a_1 \tilde{x}^\top \Sigma \tilde{x} - \frac{a_2^2}{a_1}\, l^\top \Sigma l,$$
where $\tilde{x} = \bar{x} + \frac{a_2}{a_1} l$ and $\tilde{x}\,|\,\nu \sim N_p\left(\mu_{a,\nu}, \frac{1}{n}\Sigma\right)$ with $\mu_{a,\nu} = \mu + B\nu + \frac{a_2}{a_1} l$. By Provost and Rudiuk (1996) the stochastic representation of $\tilde{x}^\top \Sigma \tilde{x}$ is given by
$$\tilde{x}^\top \Sigma \tilde{x} \stackrel{d}{=} \frac{1}{n}\sum_{i=1}^p \lambda_i^2 \xi_i,$$
where $\xi_1, \dots, \xi_p$ given ν are independent with $\xi_i\,|\,\nu \sim \chi^2_1(\delta_i^2)$, $\delta_i = \sqrt{n}\,\lambda_i^{-1/2} u_i^\top \mu_{a,\nu}$. Here, the symbol $\chi^2_d(\delta_i^2)$ denotes a chi-squared distribution with d degrees of freedom and non-centrality parameter $\delta_i^2$.
Now, we apply the Lindeberg CLT to the conditionally independent random variables $V_i = \lambda_i^2 \xi_i/n$. For that reason, we first need to verify Lindeberg's condition. Denoting $\sigma_n^2 = \mathbb{V}\left(\sum_{i=1}^p V_i\,\big|\,\nu\right)$ we get
$$\sigma_n^2 = \sum_{i=1}^p \frac{\lambda_i^4}{n^2}\,\mathbb{V}(\xi_i\,|\,\nu) = \sum_{i=1}^p \frac{\lambda_i^4}{n^2}\, 2(1 + 2\delta_i^2) = \frac{1}{n^2}\left(2\,\mathrm{tr}(\Sigma^4) + 4n\,\mu_{a,\nu}^\top \Sigma^3 \mu_{a,\nu}\right).$$
We need to check that for any small ε > 0 it holds that
$$\lim_{n\to\infty} \frac{1}{\sigma_n^2} \sum_{i=1}^p \mathbb{E}\left[(V_i - \mathbb{E}(V_i\,|\,\nu))^2\, 1_{\{|V_i - \mathbb{E}(V_i|\nu)| > \varepsilon \sigma_n\}}\,\Big|\,\nu\right] = 0.$$
First, we get
$$\sum_{i=1}^p \mathbb{E}\left[(V_i - \mathbb{E}(V_i|\nu))^2\, 1_{\{|V_i - \mathbb{E}(V_i|\nu)| > \varepsilon\sigma_n\}}\,\Big|\,\nu\right] \stackrel{\text{Cauchy--Schwarz}}{\le} \sum_{i=1}^p \mathbb{E}^{1/2}\left[(V_i - \mathbb{E}(V_i|\nu))^4\,\big|\,\nu\right]\, \mathbb{P}^{1/2}\left\{|V_i - \mathbb{E}(V_i|\nu)| > \varepsilon\sigma_n\,\big|\,\nu\right\}$$
$$\stackrel{\text{Chebyshev}}{\le} \sum_{i=1}^p \frac{\lambda_i^4}{n^2}\sqrt{12(1+2\delta_i^2)^2 + 48(1+4\delta_i^2)}\;\frac{\sigma_i}{\varepsilon\sigma_n} \qquad (13)$$
with $\sigma_i^2 = \mathbb{V}(V_i\,|\,\nu)$ and, thus,
$$\frac{1}{\sigma_n^2}\sum_{i=1}^p \mathbb{E}\left[(V_i - \mathbb{E}(V_i|\nu))^2\, 1_{\{|V_i - \mathbb{E}(V_i|\nu)| > \varepsilon\sigma_n\}}\,\Big|\,\nu\right] \le \frac{1}{\varepsilon}\,\frac{\sum_{i=1}^p \lambda_i^4 \sqrt{12(1+2\delta_i^2)^2 + 48(1+4\delta_i^2)}\;\frac{\sigma_i}{\sigma_n}}{2\,\mathrm{tr}(\Sigma^4) + 4n\,\mu_{a,\nu}^\top \Sigma^3 \mu_{a,\nu}}$$
$$= \frac{\sqrt{3}}{\varepsilon}\,\frac{\sum_{i=1}^p \lambda_i^4 \sqrt{(5+2\delta_i^2)^2 - 20}\;\frac{\sigma_i}{\sigma_n}}{\mathrm{tr}(\Sigma^4) + 2n\,\mu_{a,\nu}^\top \Sigma^3 \mu_{a,\nu}} \le \frac{\sqrt{3}}{\varepsilon}\,\frac{\sum_{i=1}^p \lambda_i^4 (5+2\delta_i^2)\,\frac{\sigma_i}{\sigma_n}}{\mathrm{tr}(\Sigma^4) + 2n\,\mu_{a,\nu}^\top \Sigma^3 \mu_{a,\nu}}$$
$$\le \frac{\sqrt{3}}{\varepsilon}\,\frac{5\,\mathrm{tr}(\Sigma^4) + 2n\,\mu_{a,\nu}^\top \Sigma^3 \mu_{a,\nu}}{\mathrm{tr}(\Sigma^4) + 2n\,\mu_{a,\nu}^\top \Sigma^3 \mu_{a,\nu}}\,\frac{\sigma_{\max}}{\sigma_n} \le \frac{\sqrt{3}}{\varepsilon}\left(\frac{4}{1 + 2n\,\mu_{a,\nu}^\top \Sigma^3 \mu_{a,\nu}/\mathrm{tr}(\Sigma^4)} + 1\right)\frac{\sigma_{\max}}{\sigma_n} \le \frac{5\sqrt{3}}{\varepsilon}\,\frac{\sigma_{\max}}{\sigma_n}.$$
Finally, Assumptions (A1) and (A2) yield
$$\frac{\sigma_{\max}^2}{\sigma_n^2} = \frac{\sup_i \sigma_i^2}{\sigma_n^2} = \frac{\sup_i \lambda_i^4(1+2\delta_i^2)}{\mathrm{tr}(\Sigma^4) + 2n\,\mu_{a,\nu}^\top \Sigma^3 \mu_{a,\nu}} = \frac{\sup_i \left(\lambda_i^4 + 2n\lambda_i^3 (u_i^\top \mu_{a,\nu})^2\right)}{\mathrm{tr}(\Sigma^4) + 2n\,\mu_{a,\nu}^\top \Sigma^3 \mu_{a,\nu}} \longrightarrow 0, \qquad (14)$$
which verifies the Lindeberg condition since
$$(u_i^\top \mu_{a,\nu})^2 = \left(u_i^\top \mu + u_i^\top B\nu + \frac{a_2}{a_1}\, u_i^\top l\right)^2$$
$$= (u_i^\top \mu)^2 + (u_i^\top B\nu)^2 + \frac{a_2^2}{a_1^2}(u_i^\top l)^2 + 2\, u_i^\top \mu \cdot u_i^\top B\nu + 2\,\frac{a_2}{a_1}\, u_i^\top l\,(u_i^\top \mu + u_i^\top B\nu)$$
$$\stackrel{(A2)}{\le} M_2^2 + q M_2^2\, \nu^\top \nu + M_2^2\,\frac{a_2^2}{a_1^2} + 2 M_2^2 \sqrt{q \nu^\top \nu} + 2 M_2^2\,\frac{|a_2|}{|a_1|}\left(1 + \sqrt{q \nu^\top \nu}\right)$$
$$= M_2^2 \left(1 + \sqrt{q \nu^\top \nu} + \frac{|a_2|}{|a_1|}\right)^2 < \infty. \qquad (15)$$
Thus, using (13) and
$$\sum_{i=1}^p \mathbb{E}(V_i\,|\,\nu) = \sum_{i=1}^p \frac{\lambda_i^2}{n}(1 + \delta_i^2) = \mathrm{tr}(\Sigma^2)/n + \mu_{a,\nu}^\top \Sigma \mu_{a,\nu}$$
we get that
$$\sqrt{n}\,\frac{\tilde{x}^\top \Sigma \tilde{x} - \mathrm{tr}(\Sigma^2)/n - \mu_{a,\nu}^\top \Sigma \mu_{a,\nu}}{\sqrt{\mathrm{tr}(\Sigma^4)/n + 2\,\mu_{a,\nu}^\top \Sigma^3 \mu_{a,\nu}}}\,\Bigg|\,\nu \stackrel{D}{\longrightarrow} N(0, 2) \qquad (16)$$
and for $a_1 \bar{x}^\top \Sigma \bar{x} + 2a_2 l^\top \Sigma \bar{x}$ we have
$$\sqrt{n}\,\frac{a_1 \bar{x}^\top \Sigma \bar{x} + 2a_2 l^\top \Sigma \bar{x} - a_1\left(\mathrm{tr}(\Sigma^2)/n + \mu_{a,\nu}^\top \Sigma \mu_{a,\nu}\right) + \frac{a_2^2}{a_1}\, l^\top \Sigma l}{\sqrt{a_1^2\left(\mathrm{tr}(\Sigma^4)/n + 2\,\mu_{a,\nu}^\top \Sigma^3 \mu_{a,\nu}\right)}}\,\Bigg|\,\nu \stackrel{D}{\longrightarrow} N(0, 2).$$
Denoting $a = (a_1, 2a_2)^\top$ and $\mu_\nu = \mu + B\nu$ we can rewrite it as
$$\sqrt{n}\left[a^\top \begin{pmatrix} \bar{x}^\top \Sigma \bar{x} \\ l^\top \Sigma \bar{x} \end{pmatrix} - a^\top \begin{pmatrix} \mu_\nu^\top \Sigma \mu_\nu + c\,\frac{\mathrm{tr}(\Sigma^2)}{p} \\ l^\top \Sigma \mu_\nu \end{pmatrix}\right]\Bigg|\,\nu \stackrel{D}{\longrightarrow} N\left(0,\ a^\top \begin{pmatrix} 2c\,\frac{\mathrm{tr}(\Sigma^4)}{p} + 4\,\mu_\nu^\top \Sigma^3 \mu_\nu & 2\, l^\top \Sigma^3 \mu_\nu \\ 2\, l^\top \Sigma^3 \mu_\nu & l^\top \Sigma^3 l \end{pmatrix} a\right), \qquad (17)$$
which implies that the vector
$$\sqrt{n}\left(\bar{x}^\top \Sigma \bar{x} - \mu_\nu^\top \Sigma \mu_\nu - c\,\frac{\mathrm{tr}(\Sigma^2)}{p},\ \ l^\top \Sigma \bar{x} - l^\top \Sigma \mu_\nu\right)$$
has asymptotically a multivariate normal distribution conditionally on ν because the vector a is arbitrary.
Taking into account (17), (12) and the fact that ξ, z₀ and x̄ are mutually independent we get
$$\sqrt{n}\left[\begin{pmatrix} \frac{\xi}{n} \\ \bar{x}^\top \Sigma \bar{x} \\ l^\top \Sigma \bar{x} \\ \frac{z_0}{\sqrt{n}} \end{pmatrix} - \begin{pmatrix} 1 \\ \mu_\nu^\top \Sigma \mu_\nu + c\,\frac{\mathrm{tr}(\Sigma^2)}{p} \\ l^\top \Sigma \mu_\nu \\ 0 \end{pmatrix}\right]\Bigg|\,\nu \stackrel{D}{\longrightarrow} N\left(0, \begin{pmatrix} 2 & 0 & 0 & 0 \\ 0 & 2c\,\frac{\mathrm{tr}(\Sigma^4)}{p} + 4\,\mu_\nu^\top \Sigma^3 \mu_\nu & 2\, l^\top \Sigma^3 \mu_\nu & 0 \\ 0 & 2\, l^\top \Sigma^3 \mu_\nu & l^\top \Sigma^3 l & 0 \\ 0 & 0 & 0 & 1 \end{pmatrix}\right).$$
The application of the multivariate delta method leads to
$$\sqrt{n}\,\sigma_\nu^{-1}\left(l^\top S \bar{x} - l^\top \Sigma \mu_\nu\right)\Big|\,\nu \stackrel{D}{\longrightarrow} N(0, 1), \qquad (18)$$
where
$$\sigma_\nu^2 = (l^\top \Sigma \mu_\nu)^2 + l^\top \Sigma^3 l + l^\top \Sigma l \left(\mu_\nu^\top \Sigma \mu_\nu + c\,\frac{\mathrm{tr}(\Sigma^2)}{p}\right).$$
The asymptotic distribution does not depend on ν and, thus, it is also the unconditional asymptotic distribution.
Theorem 1 shows that the properly normalized bilinear form $l^\top S \bar{x}$ can be accurately approximated by a mixture of normal distributions with both mean and variance depending on ν. Moreover, this central limit theorem delivers the following approximation for the distribution of $l^\top S \bar{x}$: for large n and p we have
$$p^{-1}\, l^\top S \bar{x}\,\big|\,\nu \approx \mathcal{CN}\left(p^{-1}\, l^\top \Sigma \mu_\nu,\ \frac{p^{-2}\sigma_\nu^2}{n}\right), \qquad (19)$$
i.e., it has a compound normal distribution with random mean and variance.
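The quality of this approximation can be probed by brute-force Monte Carlo: simulate X directly from model (1), form S and x̄, and standardize $l^\top S\bar{x}$ as in Theorem 1. A hedged sketch (the dimensions, the number of replications N, and the uniform parameter draws are illustrative choices, loosely mirroring the setup of Section 4):

```python
import numpy as np

rng = np.random.default_rng(2)
p, n, q, N = 20, 200, 5, 2000
c = p / n

mu = rng.uniform(-1, 1, p)
B = rng.uniform(0, 1, (p, q))
d = rng.uniform(0.5, 1.5, p)               # eigenvalues of a diagonal Sigma
Sigma, Sigma3 = np.diag(d), np.diag(d**3)
l = np.ones(p)
tr2 = np.sum(d**2)                          # tr(Sigma^2)

stats = np.empty(N)
for k in range(N):
    nu = np.abs(rng.standard_normal(q))     # nu = |psi|, psi ~ N_q(0, I_q)
    mu_nu = mu + B @ nu
    # columns of X are i.i.d. N_p(mu + B nu, Sigma) given nu, as in model (1)
    X = mu_nu[:, None] + np.sqrt(d)[:, None] * rng.standard_normal((p, n))
    xbar = X.mean(axis=1)
    Xc = X - xbar[:, None]
    S = Xc @ Xc.T / (n - 1)
    sigma2 = (mu_nu @ Sigma @ mu_nu + c * tr2 / p) * (l @ Sigma @ l) \
             + (l @ Sigma @ mu_nu) ** 2 + l @ Sigma3 @ l
    stats[k] = np.sqrt(n) * (l @ S @ xbar - l @ Sigma @ mu_nu) / np.sqrt(sigma2)

# Empirically the standardized statistic should be close to N(0, 1)
print(f"mean = {stats.mean():.3f}, var = {stats.var():.3f}")
```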
The proof of Theorem 1 shows, in particular, that its key point is a stochastic representation of the product $l^\top S \bar{x}$ which can be presented using a χ²-distributed random variable, a standard normally distributed random variable, and a random vector which has a location mixture of normal distributions. Both assumptions (A1) and (A2) guarantee that the asymptotic mean $p^{-1} l^\top \Sigma \mu_\nu$ and the asymptotic variance $p^{-2}\sigma_\nu^2$ are bounded with probability one and that the covariance matrix Σ is invertible as the dimension p increases. Note that the case of standard asymptotics can be easily recovered from our result if we set c → 0.
Although Theorem 1 presents a result similar to the classical central limit theorem, it is not a proper CLT because ν is random; hence, it provides an asymptotic equivalence only. In the current setting it is not possible to obtain a proper CLT without additional assumptions imposed on ν. The reason is that the dimension q of ν is fixed and independent of p and n, which assures that the randomness in ν does not vanish asymptotically. So, in order to justify a classical CLT we need an increasing value of q = q(n) together with the following additional assumptions:
(A3) Let $\mathbb{E}(\nu) = \omega$ and $\mathrm{Cov}(\nu) = \Omega$. For any vector l it holds that
$$\sqrt{q}\,\frac{l^\top \nu - l^\top \omega}{\sqrt{l^\top \Omega l}} \stackrel{D}{\longrightarrow} \tilde{z}_0, \qquad (20)$$
where the distribution of $\tilde{z}_0$ may in general depend on l.
(A4) There exists $M_2 < \infty$ such that $|u_i^\top B\omega| \le M_2$ for all $i = 1, \dots, p$ uniformly in p. Moreover, $B\Omega B^\top$ has a uniformly bounded spectral norm.
Assumption (A3) assures that the vector of random coefficients ν satisfies a sort of concentration property, in the sense that any linear combination of its elements asymptotically concentrates around some random variable. In the special case of z̃₀ being standard normally distributed, (A3) implies a usual CLT for a linear combination of the vector ν. As a result, assumption (A3) is a rather general and natural condition on ν, taking into account that ν ∼ N_q(ω, Ω) is assumed in many practical situations (see, e.g., Rao (1965)). Assumption (A4) is similar to (A1) and (A2) and it ensures that µ and Bω, as well as Σ and BΩB^⊤, have the same behaviour as p → ∞.
Corollary 1 (CLT). Under the assumptions of Theorem 1, assume (A3) and (A4). Let $l^\top \Sigma l = O(p)$ and $\mu^\top \Sigma \mu = O(p)$. Then it holds that
$$\sqrt{n}\,\sigma^{-1}\left(l^\top S \bar{x} - l^\top \Sigma(\mu + B\omega)\right) \stackrel{D}{\longrightarrow} N(0, 1)$$
for $p/n = c + o(n^{-1/2})$ with $c \ge 0$ and $q/n \to \gamma$ with $\gamma > 0$ as $n \to \infty$, where
$$\sigma^2 = \left(l^\top \Sigma(B\omega + \mu)\right)^2 + l^\top \Sigma^3 l + l^\top \Sigma l \left((\mu + B\omega)^\top \Sigma(\mu + B\omega) + c\,\frac{\mathrm{tr}(\Sigma^2)}{p}\right).$$
Proof. Since
$$(u_i^\top \mu_{a,\nu})^2 = \left(u_i^\top \mu + u_i^\top B\omega + u_i^\top B(\nu - \omega) + \frac{a_2}{a_1}\, u_i^\top l\right)^2 \stackrel{(A2),(A4)}{\le} M_2^2\left(2 + \frac{|a_2|}{|a_1|} + \sqrt{q(\nu-\omega)^\top(\nu-\omega)}\right)^2 < \infty$$
with probability one, we get from the proof of Theorem 1 and assumption (A3) that
$$\sqrt{n}\,\sigma^{-1}\left(l^\top S \bar{x} - l^\top \Sigma(\mu + B\omega)\right) = \frac{\sigma_\nu}{\sigma}\, z_0 + \frac{\sqrt{n}}{\sqrt{q}}\,\frac{\sqrt{l^\top \Sigma B \Omega B^\top \Sigma l}}{\sigma}\,\tilde{z}_0 + o_P(1), \qquad (21)$$
where z0 ∼ N (0, 1) and z̃0 are independently distributed.
Let $\lambda_{\max}(A)$ denote the largest eigenvalue of a symmetric positive definite matrix A. First, we note that for any vector m that satisfies (A2), we get
$$m^\top \Sigma B \Omega B^\top \Sigma m \le \lambda_{\max}(\Sigma)^2\, \lambda_{\max}(U^\top B \Omega B^\top U) \sum_{i=1}^p |m^\top u_i|^2 = O(p),$$
where U is the eigenvector matrix of Σ, which is a unitary matrix, and, consequently (see, e.g., Chapter 8.5 in Lütkepohl (1996)),
$$\lambda_{\max}(U^\top B \Omega B^\top U) = \lambda_{\max}(B \Omega B^\top) = O(1).$$
As a result, we get $l^\top \Sigma B(\nu - \omega) = O_P(1)$ and $(\mu + B\omega)^\top \Sigma B(\nu - \omega) = O_P(1)$.
Furthermore, it holds that
$$(\nu - \omega)^\top B^\top \Sigma B (\nu - \omega) \le \lambda_{\max}(\Sigma) \sum_{i=1}^p \left(u_i^\top B(\nu - \omega)\right)^2 = O_P(1)\,\frac{1}{q}\sum_{i=1}^p u_i^\top B \Omega B^\top u_i \le O_P(1)\,\lambda_{\max}(B\Omega B^\top)\,\frac{1}{q}\sum_{i=1}^p u_i^\top u_i = O_P(1).$$
Hence, we have
$$\frac{\sqrt{n}}{\sqrt{q}}\,\frac{\sqrt{l^\top \Sigma B \Omega B^\top \Sigma l}}{\sigma} \to 0$$
and, consequently, together with $\tilde{z}_0 = O_P(1)$ and $q/n \to \gamma > 0$, it ensures that the second summand in (21) vanishes. At last,
$$\frac{\sigma_\nu^2}{\sigma^2} \stackrel{a.s.}{\longrightarrow} 1$$
for $p/n = c + o(n^{-1/2})$ with $c \ge 0$ and $q/n \to \gamma$ with $\gamma > 0$ as $n \to \infty$.
Note that the asymptotic regime $q/n \to \gamma > 0$ ensures that the dimension of the random coefficients is not increasing much faster than the sample size, i.e., q and n are comparable. It is worth mentioning that due to Corollary 1 the asymptotic distribution of the bilinear form
l> Sx̄ does not depend on the covariance matrix of the random effects Ω. This knowledge may
be further useful in constructing a proper pivotal statistic.
3.2 CLT for the product of inverse sample covariance matrix and sample mean vector
In this section we consider the distributional properties of the product of the inverse sample covariance matrix S⁻¹ and the sample mean vector x̄. Again we prove that properly weighted bilinear forms involving S⁻¹ and x̄ have asymptotically a normal distribution. This result is summarized in Theorem 2.
Theorem 2. Assume $X \sim LMN_{p,n;q}(\mu, \Sigma, B; f_\nu)$, $p < n-1$, with Σ positive definite and let $p/n = c + o(n^{-1/2})$, $c \in [0, 1)$ as $n \to \infty$. Let l be a p-dimensional vector of constants that satisfies (A2). Then, under (A1) and (A2) it holds that
$$\sqrt{n}\,\tilde{\sigma}_\nu^{-1}\left(l^\top S^{-1} \bar{x} - \frac{1}{1-c}\, l^\top \Sigma^{-1} \mu_\nu\right) \stackrel{D}{\longrightarrow} N(0, 1), \qquad (22)$$
where $\mu_\nu = \mu + B\nu$ and
$$\tilde{\sigma}_\nu^2 = \frac{1}{(1-c)^3}\left[\left(l^\top \Sigma^{-1} \mu_\nu\right)^2 + l^\top \Sigma^{-1} l \left(1 + \mu_\nu^\top \Sigma^{-1} \mu_\nu\right)\right].$$
Proof. From Theorem 3.4.1 of Gupta and Nagar (2000) and Proposition 2 we get
$$S^{-1} \sim IW_p\left(n + p,\ (n-1)\Sigma^{-1}\right).$$
It holds that
$$l^\top S^{-1} \bar{x} = (n-1)\,\bar{x}^\top \Sigma^{-1} \bar{x}\;\frac{l^\top S^{-1} \bar{x}}{\bar{x}^\top S^{-1} \bar{x}}\;\frac{\bar{x}^\top S^{-1} \bar{x}}{(n-1)\,\bar{x}^\top \Sigma^{-1} \bar{x}}.$$
Since S⁻¹ and x̄ are independently distributed, we get from Theorem 3.2.12 of Muirhead (1982) that
$$\tilde{\xi} = (n-1)\,\frac{\bar{x}^\top \Sigma^{-1} \bar{x}}{\bar{x}^\top S^{-1} \bar{x}} \sim \chi^2_{n-p} \qquad (23)$$
and it is independent of x̄. Moreover, the application of Theorem 3 in Bodnar and Okhrin (2008) proves that $\bar{x}^\top S^{-1} \bar{x}$ is independent of $l^\top S^{-1} \bar{x}/\bar{x}^\top S^{-1} \bar{x}$ for given x̄. As a result, it is also independent of $\bar{x}^\top \Sigma^{-1} \bar{x} \cdot l^\top S^{-1} \bar{x}/\bar{x}^\top S^{-1} \bar{x}$ for given x̄ and, consequently,
$$(n-1)\,\bar{x}^\top \Sigma^{-1} \bar{x}\;\frac{l^\top S^{-1} \bar{x}}{\bar{x}^\top S^{-1} \bar{x}} \quad \text{and} \quad \frac{\bar{x}^\top S^{-1} \bar{x}}{(n-1)\,\bar{x}^\top \Sigma^{-1} \bar{x}}$$
are independent.
From the proof of Theorem 1 of Bodnar and Schmid (2008) we obtain
$$(n-1)\,\bar{x}^\top \Sigma^{-1} \bar{x}\;\frac{l^\top S^{-1} \bar{x}}{\bar{x}^\top S^{-1} \bar{x}}\,\Bigg|\,\bar{x} \sim t\left(n-p+1;\ (n-1)\, l^\top \Sigma^{-1} \bar{x};\ (n-1)^2\,\frac{\bar{x}^\top \Sigma^{-1} \bar{x}}{n-p+1}\, l^\top R_{\bar{x}}\, l\right), \qquad (24)$$
where $R_a = \Sigma^{-1} - \Sigma^{-1} a a^\top \Sigma^{-1}/a^\top \Sigma^{-1} a$, $a \in \mathbb{R}^p$, and the symbol $t(k, \mu, \tau^2)$ denotes a t-distribution with k degrees of freedom, location parameter µ and scale parameter τ².
Combining (23) and (24), we get the stochastic representation of $l^\top S^{-1} \bar{x}$ given by
$$l^\top S^{-1} \bar{x} \stackrel{d}{=} \tilde{\xi}^{-1}(n-1)\left(l^\top \Sigma^{-1} \bar{x} + t_0\,\sqrt{\frac{\bar{x}^\top \Sigma^{-1} \bar{x}}{n-p+1}\; l^\top R_{\bar{x}}\, l}\right) = \tilde{\xi}^{-1}(n-1)\left(l^\top \Sigma^{-1} \bar{x} + \frac{t_0}{\sqrt{n-p+1}}\,\sqrt{l^\top \Sigma^{-1} l\; \bar{x}^\top R_l\, \bar{x}}\right),$$
where (see Proposition 2)
$$l^\top \Sigma^{-1} \bar{x} \stackrel{d}{=} l^\top \Sigma^{-1}(\mu + B\nu) + \sqrt{\frac{l^\top \Sigma^{-1} l}{n}}\, z_0',$$
with $\tilde{\xi} \sim \chi^2_{n-p}$, $z_0' \sim N(0, 1)$, and $t_0 \sim t(n-p+1, 0, 1)$; $\tilde{\xi}$, ν, $z_0'$, and $t_0$ are mutually independent.
Since $R_l \Sigma R_l = R_l$, $\mathrm{tr}(R_l \Sigma) = p - 1$, and $R_l \Sigma \Sigma^{-1} l = 0$, the application of Corollary 5.1.3a and Theorem 5.5.1 in Mathai and Provost (1992) leads to
$$n\,\bar{x}^\top R_l\, \bar{x}\,\big|\,\nu \sim \chi^2_{p-1}\left(n\delta^2(\nu)\right) \quad \text{with } \delta^2(\nu) = (\mu + B\nu)^\top R_l\, (\mu + B\nu), \qquad (25)$$
as well as that $l^\top \Sigma^{-1} \bar{x}$ and $\bar{x}^\top R_l\, \bar{x}$ are independent given ν. Finally, using the stochastic representation of a t-distributed random variable, we get
$$t_0 \stackrel{d}{=} \frac{z_0''}{\sqrt{\zeta/(n-p+1)}} \qquad (26)$$
with $\zeta \sim \chi^2_{n-p+1}$, $z_0'' \sim N(0, 1)$ and $z_0' \sim N(0, 1)$ being mutually independent. Together with (25) it yields
$$l^\top S^{-1} \bar{x} \stackrel{d}{=} \tilde{\xi}^{-1}(n-1)\left(l^\top \Sigma^{-1}(\mu + B\nu) + \frac{z_0'}{\sqrt{n}}\sqrt{l^\top \Sigma^{-1} l} + \frac{z_0''}{\sqrt{n}}\sqrt{\frac{p-1}{n-p+1}\,\eta}\,\sqrt{l^\top \Sigma^{-1} l}\right)$$
$$= \tilde{\xi}^{-1}(n-1)\left(l^\top \Sigma^{-1}(\mu + B\nu) + \sqrt{l^\top \Sigma^{-1} l}\,\sqrt{1 + \frac{p-1}{n-p+1}\,\eta}\;\frac{z_0}{\sqrt{n}}\right), \qquad (27)$$
where the last equality in (27) follows from the fact that
$$z_0' + z_0''\sqrt{\frac{p-1}{n-p+1}\,\eta} \stackrel{d}{=} z_0 \sqrt{1 + \frac{p-1}{n-p+1}\,\eta}$$
with $\eta = \frac{n\,\bar{x}^\top R_l\,\bar{x}/(p-1)}{\zeta/(n-p+1)}\,\big|\,\nu \sim F_{p-1,\,n-p+1}\left(n\delta^2(\nu)\right)$ (a non-central F-distribution with $p-1$ and $n-p+1$ degrees of freedom and non-centrality parameter $n\delta^2(\nu)$). Moreover, we have that $\tilde{\xi} \sim \chi^2_{n-p}$ and $z_0 \sim N(0, 1)$; $\tilde{\xi}$, $z_0$, and η are mutually independent.
From Lemma 6.4(b) in Bodnar et al. (2016b) we get
$$\sqrt{n}\left[\begin{pmatrix} \tilde{\xi}/(n-p) \\ \eta \\ z_0/\sqrt{n} \end{pmatrix} - \begin{pmatrix} 1 \\ 1 + \delta^2(\nu)/c \\ 0 \end{pmatrix}\right]\Bigg|\,\nu \stackrel{D}{\longrightarrow} N\left(0, \begin{pmatrix} 2/(1-c) & 0 & 0 \\ 0 & \sigma_\eta^2 & 0 \\ 0 & 0 & 1 \end{pmatrix}\right)$$
for $p/n = c + o(n^{-1/2})$, $c \in [0, 1)$ as $n \to \infty$ with
$$\sigma_\eta^2 = \frac{2}{c}\left(1 + 2\,\frac{\delta^2(\nu)}{c}\right) + \frac{2}{1-c}\left(1 + \frac{\delta^2(\nu)}{c}\right)^2.$$
Consequently,
$$\sqrt{n}\left[\begin{pmatrix} \tilde{\xi}/(n-1) \\ (p-1)\eta/(n-p+1) \\ z_0/\sqrt{n} \end{pmatrix} - \begin{pmatrix} 1-c \\ (c + \delta^2(\nu))/(1-c) \\ 0 \end{pmatrix}\right]\Bigg|\,\nu \stackrel{D}{\longrightarrow} N\left(0, \begin{pmatrix} 2(1-c) & 0 & 0 \\ 0 & c^2\sigma_\eta^2/(1-c)^2 & 0 \\ 0 & 0 & 1 \end{pmatrix}\right)$$
for $p/n = c + o(n^{-1/2})$, $c \in [0, 1)$ as $n \to \infty$.
Finally, the application of the delta method (cf. DasGupta (2008, Theorem 3.7)) leads to
$$\sqrt{n}\left(l^\top S^{-1} \bar{x} - \frac{1}{1-c}\, l^\top \Sigma^{-1}(\mu + B\nu)\right)\Bigg|\,\nu \stackrel{D}{\longrightarrow} N(0, \tilde{\sigma}_\nu^2)$$
for $p/n = c + o(n^{-1/2})$, $c \in [0, 1)$ as $n \to \infty$ with
$$\tilde{\sigma}_\nu^2 = \frac{1}{(1-c)^3}\left[2\left(l^\top \Sigma^{-1}(\mu + B\nu)\right)^2 + l^\top \Sigma^{-1} l\left(1 + \delta^2(\nu)\right)\right],$$
which coincides with the expression in the statement of the theorem since $l^\top \Sigma^{-1} l\,\delta^2(\nu) = l^\top \Sigma^{-1} l\,\mu_\nu^\top \Sigma^{-1}\mu_\nu - (l^\top \Sigma^{-1}\mu_\nu)^2$. Consequently,
$$\sqrt{n}\,\tilde{\sigma}_\nu^{-1}\left(l^\top S^{-1} \bar{x} - \frac{1}{1-c}\, l^\top \Sigma^{-1}(\mu + B\nu)\right)\Bigg|\,\nu \stackrel{D}{\longrightarrow} N(0, 1),$$
where the asymptotic distribution does not depend on ν. Hence, it is also the unconditional asymptotic distribution.
Again, Theorem 2 shows that the distribution of $l^\top S^{-1} \bar{x}$ can be approximated by a mixture of normal distributions. Indeed,
$$p^{-1}\, l^\top S^{-1} \bar{x}\,\big|\,\nu \approx \mathcal{CN}\left(\frac{p^{-1}}{1-c}\, l^\top \Sigma^{-1} \mu_\nu,\ \frac{p^{-2}\tilde{\sigma}_\nu^2}{n}\right). \qquad (28)$$
In the proof of Theorem 2 we can see that the stochastic representation for the product of the inverse sample covariance matrix and the sample mean vector is obtained by using a χ²-distributed random variable, a general skew normally distributed random vector and a standard t-distributed random variable. This result is itself very useful and allows generating the values of $l^\top S^{-1}\bar{x}$ by simulating just three random variables from standard univariate distributions together with a random vector ν which determines the family of the matrix-variate location mixture of normal distributions. The assumptions about the boundedness of the quadratic and bilinear forms involving Σ⁻¹ play here the same role as in Theorem 1. Note that in this case we need no assumption on the Frobenius norm of either the covariance matrix or its inverse.
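This three-draw recipe, based on representation (27), can be sketched with NumPy's noncentral-F generator. The standardized statistic of Theorem 2 is then checked against N(0, 1); all concrete parameter values are illustrative, and the diagonal of Σ is drawn on [0.1, 1] rather than [0, 1] purely to keep Σ⁻¹ well conditioned in this sketch:

```python
import numpy as np

rng = np.random.default_rng(3)
p, n, q, N = 20, 200, 5, 2000
c = p / n

mu = rng.uniform(-1, 1, p)
B = rng.uniform(0, 1, (p, q))
d = rng.uniform(0.1, 1.0, p)                    # diagonal of Sigma (floored at 0.1)
Sinv = np.diag(1.0 / d)
l = np.ones(p)
lSl = l @ Sinv @ l                               # l^T Sigma^{-1} l
Rl = Sinv - np.outer(Sinv @ l, Sinv @ l) / lSl   # R_l

z = np.empty(N)
for k in range(N):
    nu = np.abs(rng.standard_normal(q))          # nu = |psi|, psi ~ N_q(0, I_q)
    mu_nu = mu + B @ nu
    delta2 = mu_nu @ Rl @ mu_nu                  # delta^2(nu)
    # representation (27): only three univariate draws, no matrix inversion of S
    xi = rng.chisquare(n - p)
    eta = rng.noncentral_f(p - 1, n - p + 1, n * delta2)
    z0 = rng.standard_normal()
    lSinvx = (n - 1) / xi * (l @ Sinv @ mu_nu
              + np.sqrt(lSl) * np.sqrt(1 + (p - 1) / (n - p + 1) * eta) * z0 / np.sqrt(n))
    # standardize as in Theorem 2
    sig2 = (2 * (l @ Sinv @ mu_nu) ** 2 + lSl * (1 + delta2)) / (1 - c) ** 3
    z[k] = np.sqrt(n) * (lSinvx - l @ Sinv @ mu_nu / (1 - c)) / np.sqrt(sig2)

print(f"mean = {z.mean():.3f}, var = {z.var():.3f}")  # should be near 0 and 1
```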
Finally, in Corollary 2 we formulate the CLT for the product of the inverse sample covariance
matrix and the sample mean vector.
Corollary 2 (CLT). Under the assumptions of Theorem 2, assume (A3) and (A4). Let $l^\top \Sigma^{-1} l = O(p)$ and $\mu^\top \Sigma^{-1} \mu = O(p)$. Then it holds that
$$\sqrt{n}\,\tilde{\sigma}^{-1}\left(l^\top S^{-1} \bar{x} - \frac{1}{1-c}\, l^\top \Sigma^{-1}(\mu + B\omega)\right) \stackrel{D}{\longrightarrow} N(0, 1)$$
for $p/n = c + o(n^{-1/2})$ with $c \in [0, 1)$ and $q/n \to \gamma$ with $\gamma > 0$ as $n \to \infty$, where
$$\tilde{\sigma}^2 = \frac{1}{(1-c)^3}\left[\left(l^\top \Sigma^{-1}(\mu + B\omega)\right)^2 + l^\top \Sigma^{-1} l\left(1 + (\mu + B\omega)^\top \Sigma^{-1}(\mu + B\omega)\right)\right].$$
Proof. From the proof of Theorem 2 and assumption (A3) it holds that
$$\sqrt{n}\,\tilde{\sigma}^{-1}\left(l^\top S^{-1} \bar{x} - \frac{1}{1-c}\, l^\top \Sigma^{-1}(\mu + B\omega)\right) = \frac{\tilde{\sigma}_\nu}{\tilde{\sigma}}\, z_0 + \frac{1}{1-c}\,\frac{\sqrt{n}}{\sqrt{q}}\,\frac{\sqrt{l^\top \Sigma^{-1} B \Omega B^\top \Sigma^{-1} l}}{\tilde{\sigma}}\,\tilde{z}_0 + o_P(1),$$
where $z_0 \sim N(0, 1)$ and $\tilde{z}_0$ are independently distributed.
Finally, in a similar way as in Corollary 1 we get from (A4) that
$$(1-c)^{-2}\gamma^{-1}\,\frac{l^\top \Sigma^{-1} B \Omega B^\top \Sigma^{-1} l}{\tilde{\sigma}^2} \to 0, \qquad (29)$$
$$\frac{\tilde{\sigma}_\nu^2}{\tilde{\sigma}^2} \stackrel{a.s.}{\longrightarrow} 1 \qquad (30)$$
for $p/n = c + o(n^{-1/2})$ with $c \in [0, 1)$ and $q/n \to \gamma$ with $\gamma > 0$ as $n \to \infty$.
4 Numerical study
In this section we provide a Monte Carlo simulation study to investigate the performance of the
suggested CLTs for the products of the (inverse) sample covariance matrix and the sample mean
vector.
In our simulations we put l = 1p , each element of the vector µ is uniformly distributed
on [−1, 1] while each element of the matrix B is uniformly distributed on [0, 1]. Also, we take
Σ as a diagonal matrix where each diagonal element is uniformly distributed on [0, 1]. It can
be checked that in such a setting the assumptions (A1) and (A2) are satisfied. Indeed, the population covariance matrix satisfies condition (A1) because the probability of getting an exactly zero eigenvalue equals zero. On the other hand, condition (A2) is obviously valid too because the ith eigenvector of Σ is $u_i = e_i = (0, \dots, 0, 1, 0, \dots, 0)^\top$, with the 1 in the ith place.
In order to define the distribution for the random vector ν, we consider two special cases.
In the first case we take ν = |ψ|, where ψ ∼ Nq (0, Iq ), i.e., ν has a q-variate truncated normal
distribution. In the second case we put ν ∼ GALq (Iq , 1q , 10), i.e., ν has a q-variate generalized
asymmetric Laplace distribution (c.f., Kozubowski et al. (2013)). Also, we put q = 10.
We compare the results for several values of c ∈ {0.1, 0.5, 0.8, 0.95}. The simulated data
consist of N = 10⁵ independent realizations which are used to fit the corresponding kernel density estimators with the Epanechnikov kernel. The bandwidth parameters are determined via cross-validation for every sample. The asymptotic distributions are simulated using the results
of Theorems 1 and 2. The corresponding algorithm is given next:
a) generate ν = |ψ|, where ψ ∼ Nq (0q , Iq ), or generate ν ∼ GALq (Iq , 1q , 10);
b) generate $l^\top S\bar{x}$ by using the stochastic representation (11) obtained in the proof of Theorem 1, namely
$$l^\top S\bar{x} \stackrel{d}{=} \frac{\xi}{n-1}\, l^\top \Sigma(y + B\nu) + \frac{\sqrt{\xi}}{n-1}\left((y + B\nu)^\top \Sigma (y + B\nu)\; l^\top \Sigma l - (l^\top \Sigma(y + B\nu))^2\right)^{1/2} z_0,$$
where $\xi \sim \chi^2_{n-1}$, $z_0 \sim N(0, 1)$, $y \sim N_p\left(\mu, \frac{1}{n}\Sigma\right)$; ξ, z₀, y, and ν are mutually independent;
b') generate $l^\top S^{-1}\bar{x}$ by using the stochastic representation (27) obtained in the proof of Theorem 2, namely
$$l^\top S^{-1}\bar{x} \stackrel{d}{=} \tilde{\xi}^{-1}(n-1)\left(l^\top \Sigma^{-1}(\mu + B\nu) + \sqrt{l^\top \Sigma^{-1} l}\,\sqrt{1 + \frac{p-1}{n-p+1}\,\eta}\;\frac{z_0}{\sqrt{n}}\right),$$
where $\tilde{\xi} \sim \chi^2_{n-p}$, $z_0 \sim N(0, 1)$, and $\eta \sim F_{p-1,\,n-p+1}(n\delta^2(\nu))$ with $\delta^2(\nu) = (\mu + B\nu)^\top R_l\,(\mu + B\nu)$, $R_l = \Sigma^{-1} - \Sigma^{-1} l l^\top \Sigma^{-1}/l^\top \Sigma^{-1} l$; $\tilde{\xi}$, $z_0$ and (η, ν) are mutually independent;
c) compute
$$\sqrt{n}\,\sigma_\nu^{-1}\left(l^\top S\bar{x} - l^\top \Sigma \mu_\nu\right) \quad \text{and} \quad \sqrt{n}\,\tilde{\sigma}_\nu^{-1}\left(l^\top S^{-1}\bar{x} - \frac{1}{1-c}\, l^\top \Sigma^{-1} \mu_\nu\right),$$
where
$$\mu_\nu = \mu + B\nu,$$
$$\sigma_\nu^2 = \left(\mu_\nu^\top \Sigma \mu_\nu + c\,\frac{\mathrm{tr}(\Sigma^2)}{p}\right) l^\top \Sigma l + (l^\top \Sigma \mu_\nu)^2 + l^\top \Sigma^3 l,$$
$$\tilde{\sigma}_\nu^2 = \frac{1}{(1-c)^3}\left[2\left(l^\top \Sigma^{-1} \mu_\nu\right)^2 + l^\top \Sigma^{-1} l\left(1 + \delta^2(\nu)\right)\right]$$
with $\delta^2(\nu) = \mu_\nu^\top R_l\, \mu_\nu$, $R_l = \Sigma^{-1} - \Sigma^{-1} l l^\top \Sigma^{-1}/l^\top \Sigma^{-1} l$;
d) repeat a)-c) N times.
It is remarkable that for generating $l^\top S\bar{x}$ and $l^\top S^{-1}\bar{x}$ only random variables from standard distributions are needed. Neither the data matrix X nor the sample covariance matrix S is used.
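Steps a)-d) can be sketched as follows for the statistic of Theorem 1 (step b); the parameter draws follow the description above, while the concrete dimensions and the number of replications N are illustrative, and the kernel-density-fitting step is omitted:

```python
import numpy as np

rng = np.random.default_rng(4)
p, n, q, N = 20, 200, 10, 5000
c = p / n

mu = rng.uniform(-1, 1, p)
B = rng.uniform(0, 1, (p, q))
d = rng.uniform(0, 1, p)                   # diagonal of Sigma, as in the setup above
Sigma, Sigma3 = np.diag(d), np.diag(d**3)
l = np.ones(p)
tr2 = np.sum(d**2)

out = np.empty(N)
for k in range(N):
    # a) nu = |psi|, psi ~ N_q(0, I_q)
    nu = np.abs(rng.standard_normal(q))
    mu_nu = mu + B @ nu
    # b) stochastic representation (11): no data matrix X and no matrix S needed
    y = mu + np.sqrt(d / n) * rng.standard_normal(p)   # y ~ N_p(mu, Sigma/n)
    m = y + B @ nu
    xi, z0 = rng.chisquare(n - 1), rng.standard_normal()
    lSx = xi / (n - 1) * (l @ Sigma @ m) + np.sqrt(xi) / (n - 1) \
          * np.sqrt((m @ Sigma @ m) * (l @ Sigma @ l) - (l @ Sigma @ m) ** 2) * z0
    # c) standardize with the quantities of Theorem 1
    sigma2 = (mu_nu @ Sigma @ mu_nu + c * tr2 / p) * (l @ Sigma @ l) \
             + (l @ Sigma @ mu_nu) ** 2 + l @ Sigma3 @ l
    out[k] = np.sqrt(n) * (lSx - l @ Sigma @ mu_nu) / np.sqrt(sigma2)
    # d) repeated N times by the loop

print(f"mean = {out.mean():.3f}, var = {out.var():.3f}")
```

The inner square root is nonnegative by the Cauchy–Schwarz inequality, since Σ is positive semidefinite.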
[Figures 1-8]
In Figures 1-4 we present the simulation results for the asymptotic distribution given in Theorem 1, while the asymptotic distribution given in Theorem 2 is presented in Figures 5-8, for the different values of c ∈ {0.1, 0.5, 0.8, 0.95}. The suggested asymptotic distributions
are shown as a dashed black line, while the standard normal distribution is a solid black line.
All results demonstrate a good performance of both asymptotic distributions for all considered
values of c. Even in the extreme case c = 0.95 our asymptotic results seem to produce a quite
reasonable approximation. Moreover, we observe a good robustness of our theoretical results
for different distributions of ν. Also, we observe that all asymptotic distributions are slightly skewed to the right for finite dimensions. This effect is even more pronounced in the case of the generalized asymmetric Laplace distribution. Nevertheless, the skewness disappears with growing dimension and sample size, i.e., the distribution becomes symmetric and converges to its asymptotic counterpart.
5 Summary
In this paper we introduce the family of the matrix-variate location mixture of normal distributions that generalizes a large number of the existing skew normal models. Under the MVLMN we
derive the distributions of the sample mean vector and the sample covariance matrix. Moreover,
we show that they are independently distributed. Furthermore, we derive the CLTs under the
high-dimensional asymptotic regime for the products of the (inverse) sample covariance matrix
and the sample mean vector. In the numerical study, the good finite sample performance of
both asymptotic distributions is documented.
Acknowledgement
The authors are thankful to Professor Niels Richard Hansen, the Associate Editor, and two
anonymous Reviewers for careful reading of the manuscript and for their suggestions which have
improved an earlier version of this paper.
6 Appendix
Proof of Proposition 1.
Proof. Straightforward but tedious calculations give
$$f_X(Z) = C^{-1}\int_{\mathbb{R}^q_+} f_{N_{p,n}(\mu 1_n^\top, \Sigma\otimes I_n)}\left(Z - B\nu^* 1_n^\top\right) f_{N_q(0,\Omega)}(\nu^*)\, d\nu^*$$
$$= C^{-1}\,\frac{(2\pi)^{-(np+q)/2}}{|\Omega|^{1/2}|\Sigma|^{n/2}}\int_{\mathbb{R}^q_+} \exp\left(-\frac{1}{2}\nu^{*\top}\Omega^{-1}\nu^*\right) \exp\left(-\frac{1}{2}\,\mathrm{vec}\left(Z - \mu 1_n^\top - B\nu^* 1_n^\top\right)^\top (I_n\otimes\Sigma)^{-1}\, \mathrm{vec}\left(Z - \mu 1_n^\top - B\nu^* 1_n^\top\right)\right) d\nu^*$$
$$= C^{-1}\,\frac{(2\pi)^{-(np+q)/2}}{|\Omega|^{1/2}|\Sigma|^{n/2}}\, \exp\left(-\frac{1}{2}\,\mathrm{vec}(Z - \mu 1_n^\top)^\top (I_n\otimes\Sigma)^{-1}\,\mathrm{vec}(Z - \mu 1_n^\top)\right) \exp\left(\frac{1}{2}\,\mathrm{vec}(Z - \mu 1_n^\top)^\top E^\top D E\,\mathrm{vec}(Z - \mu 1_n^\top)\right)$$
$$\times \int_{\mathbb{R}^q_+} \exp\left(-\frac{1}{2}\left[\nu^* - DE\,\mathrm{vec}(Z - \mu 1_n^\top)\right]^\top D^{-1}\left[\nu^* - DE\,\mathrm{vec}(Z - \mu 1_n^\top)\right]\right) d\nu^*$$
$$= C^{-1}\,\frac{|F|^{1/2}|D|^{1/2}}{|\Omega|^{1/2}|\Sigma|^{n/2}}\, \Phi_q\left(0; -DE\,\mathrm{vec}(Z - \mu 1_n^\top), D\right) \phi_{pn}\left(\mathrm{vec}(Z - \mu 1_n^\top); 0, F\right)$$
$$= \widetilde{C}^{-1}\, \Phi_q\left(0; -DE\,\mathrm{vec}(Z - \mu 1_n^\top), D\right) \phi_{pn}\left(\mathrm{vec}(Z - \mu 1_n^\top); 0, F\right),$$
where $D = (nB^\top\Sigma^{-1}B + \Omega^{-1})^{-1}$, $E = 1_n^\top\otimes B^\top\Sigma^{-1}$, $F = (I_n\otimes\Sigma^{-1} - E^\top DE)^{-1}$, and
$$\widetilde{C}^{-1} = C^{-1}\,\frac{|F|^{1/2}|D|^{1/2}}{|\Omega|^{1/2}|\Sigma|^{n/2}}.$$
References
Adcock, C., Eling, M., and Loperfido, N. (2015). Skewed distributions in finance and actuarial
science: a review. The European Journal of Finance, 21:1253–1281.
Amemiya, Y. (1994). On multivariate mixed model analysis. Lecture Notes-Monograph Series,
24:83–95.
Azzalini, A. (2005). The skew-normal distribution and related multivariate families. Scandinavian Journal of Statistics, 32:159–188.
Azzalini, A. and Capitanio, A. (1999). Statistical applications of the multivariate skew-normal
distribution. Journal of the Royal Statistical Society: Series B, 61:579–602.
Azzalini, A. and Dalla-Valle, A. (1996). The multivariate skew-normal distribution. Biometrika,
83:715–726.
Bai, J. and Shi, S. (2011). Estimating high dimensional covariance matrices and its applications.
Annals of Economics and Finance, 12:199–215.
Bai, Z. D. and Silverstein, J. W. (2004). CLT for linear spectral statistics of large dimensional
sample covariance matrices. Annals of Probability, 32:553–605.
Bartoletti, S. and Loperfido, N. (2010). Modelling air pollution data by the skew-normal distribution. Stochastic Environmental Research and Risk Assessment, 24:513–517.
Bodnar, T. and Gupta, A. K. (2011). Estimation of the precision matrix of multivariate elliptically contoured stable distribution. Statistics, 45:131–142.
Bodnar, T., Gupta, A. K., and Parolya, N. (2014a). On the strong convergence of the optimal
linear shrinkage estimator for large dimensional covariance matrix. Journal of Multivariate
Analysis, 132:215–228.
Bodnar, T., Gupta, A. K., and Parolya, N. (2016a). Direct shrinkage estimation of large dimensional precision matrix. Journal of Multivariate Analysis, 146:223–236.
Bodnar, T., Hautsch, N., and Parolya, N. (2016b). Consistent estimation of the high dimensional
efficient frontier. Technical report.
Bodnar, T., Mazur, S., and Okhrin, Y. (2013). On the exact and approximate distributions of the product of a Wishart matrix with a normal vector. Journal of Multivariate Analysis, 125:176–189.
Bodnar, T., Mazur, S., and Okhrin, Y. (2014b). Distribution of the product of singular Wishart matrix and normal vector. Theory of Probability and Mathematical Statistics, 91:1–14.
Bodnar, T., Okhrin, O., and Parolya, N. (2016c). Optimal shrinkage estimator for high-dimensional mean vector. ArXiv e-prints.
Bodnar, T. and Okhrin, Y. (2008). Properties of the singular, inverse and generalized inverse partitioned Wishart distributions. Journal of Multivariate Analysis, 99:2389–2405.
Bodnar, T. and Okhrin, Y. (2011). On the product of inverse Wishart and normal distributions with applications to discriminant analysis and portfolio theory. Scandinavian Journal of Statistics, 38:311–331.
Bodnar, T. and Schmid, W. (2008). A test for the weights of the global minimum variance
portfolio in an elliptical model. Metrika, 67(2):127–143.
Bühlmann, P. and van de Geer, S. (2011). Statistics for High-Dimensional Data: Methods,
Theory and Applications. Springer Publishing Company, Incorporated, 1st edition.
Cai, T. T. and Yuan, M. (2012). Adaptive covariance matrix estimation through block thresholding. Annals of Statistics, 40:2014–2042.
Cai, T. T. and Zhou, H. H. (2012). Optimal rates of convergence for sparse covariance matrix
estimation. Ann. Statist., 40:2389–2420.
Christiansen, M. and Loperfido, N. (2014). Improved approximation of the sum of random
vectors by the skew normal distribution. Journal of Applied Probability, 51(2):466–482.
20
DasGupta, A. (2008). Asymptotic Theory of Statistics and Probability. Springer Texts in
Statistics. Springer.
De Luca, G. and Loperfido, N. (2015). Modelling multivariate skewness in financial returns: a
SGARCH approach. The European Journal of Finance, 21:1113–1131.
Efron, B. (2006). Minimum volume confidence regions for a multivariate normal mean vector.
Journal of the Royal Statistical Society: Series B, 68:655–670.
Fan, J., Fan, Y., and Lv, J. (2008). High dimensional covariance matrix estimation using a
factor model. Journal of Econometrics, 147:186–197.
Fan, J., Liao, Y., and Mincheva, M. (2013). Large covariance estimation by thresholding principal orthogonal complements. Journal of the Royal Statistical Society: Series B, 75:603–680.
Gupta, A. and Nagar, D. (2000). Matrix Variate Distributions. Chapman & Hall/CRC.
Jorion, P. (1986). Bayes-stein estimation for portfolio analysis.
Quantative Analysis, 21:279–292.
Journal of Financial and
Kotsiuba, I. and Mazur, S. (2015). On the asymptotic and approximate distributions of the
product of an inverse Wishart matrix and a Gaussian random vector. Theory of Probability
and Mathematical Statistics, 93:96–105.
Kozubowski, T. J., Podgórski, K., and Rychlik, I. (2013). Multivariate generalized Laplace
distribution and related random fields. Journal of Multivariate Analysis, 113:59–72.
Liseo, B. and Loperfido, N. (2003). A Bayesian interpretation of the multivariate skew-normal
distribution. Statistics & Probability Letters, 61:395–401.
Liseo, B. and Loperfido, N. (2006). A note on reference priors for the scalar skew-normal
distribution. Journal of Statistical Planning and Inference, 136:373–389.
Loperfido, N. (2010). Canonical transformations of skew-normal variates. TEST, 19:146–165.
Lütkepohl, H. (1996). Handbook of Matrices. New York: John wiley & Sons.
Marčenko, V. A. and Pastur, L. A. (1967). Distribution of eigenvalues for some sets of random
matrices. Sbornik: Mathematics, 1:457–483.
Mathai, A. and Provost, S. B. (1992). Quadratic Forms in Random Variables. Marcel Dekker.
Muirhead, R. J. (1982). Aspects of Multivariate Statistical Theory. Wiley, New York.
Potthoff, R. F. and Roy, S. N. (1964). A generalized multivariate analysis of variance model
useful especially for growth curve problems. Biometrika, 51(3/4):313–326.
Provost, S. and Rudiuk, E. (1996). The exact distribution of indefinite quadratic forms in
noncentral normal vectors. Annals of the Institute of Statistical Mathematics, 48:381–394.
21
Rao, C. R. (1965). The theory of least squares when the parameters are stochastic and its
application to the analysis of growth curves. Biometrika, 52(3/4):447–458.
Silverstein, J. W. (1995). Strong convergence of the empirical distribution of eigenvalues of
large-dimensional random matrices. Journal of Multivariate Analysis, 55:331–339.
Stein, C. (1956). Inadmissibility of the usual estimator of the mean of a multivariate normal distribution. In Neyman, J., editor, Proceedings of the third Berkeley symposium on
mathematical and statistical probability. University of California, Berkley.
Wang, C., Pan, G., Tong, T., and Zhu, L. (2015). Shrinkage estimation of large dimensional
precision matrix using random matrix theory. Statistica Sinica, 25:993–1008.
22
[Figure 1: The kernel density estimator of the asymptotic distribution as given in Theorem 1 for c = 0.1, compared with the standard normal density. Panels: (a) p = 50, n = 500, ν ∼ TN_q(0, I_q); (b) p = 100, n = 1000, ν ∼ TN_q(0, I_q); (c) p = 50, n = 500, ν ∼ GAL_q(1_q, I_q, 10); (d) p = 100, n = 1000, ν ∼ GAL_q(1_q, I_q, 10).]
[Figure 2: The kernel density estimator of the asymptotic distribution as given in Theorem 1 for c = 0.5, compared with the standard normal density. Panels: (a) p = 250, n = 500, ν ∼ TN_q(0, I_q); (b) p = 500, n = 1000, ν ∼ TN_q(0, I_q); (c) p = 250, n = 500, ν ∼ GAL_q(1_q, I_q, 10); (d) p = 500, n = 1000, ν ∼ GAL_q(1_q, I_q, 10).]
[Figure 3: The kernel density estimator of the asymptotic distribution as given in Theorem 1 for c = 0.8, compared with the standard normal density. Panels: (a) p = 400, n = 500, ν ∼ TN_q(0, I_q); (b) p = 800, n = 1000, ν ∼ TN_q(0, I_q); (c) p = 400, n = 500, ν ∼ GAL_q(1_q, I_q, 10); (d) p = 800, n = 1000, ν ∼ GAL_q(1_q, I_q, 10).]
[Figure 4: The kernel density estimator of the asymptotic distribution as given in Theorem 1 for c = 0.95, compared with the standard normal density. Panels: (a) p = 475, n = 500, ν ∼ TN_q(0, I_q); (b) p = 950, n = 1000, ν ∼ TN_q(0, I_q); (c) p = 475, n = 500, ν ∼ GAL_q(1_q, I_q, 10); (d) p = 950, n = 1000, ν ∼ GAL_q(1_q, I_q, 10).]
[Figure 5: The kernel density estimator of the asymptotic distribution as given in Theorem 2 for c = 0.1, compared with the standard normal density. Panels: (a) p = 50, n = 500, ν ∼ TN_q(0, I_q); (b) p = 100, n = 1000, ν ∼ TN_q(0, I_q); (c) p = 50, n = 500, ν ∼ GAL_q(1_q, I_q, 10); (d) p = 100, n = 1000, ν ∼ GAL_q(1_q, I_q, 10).]
[Figure 6: The kernel density estimator of the asymptotic distribution as given in Theorem 2 for c = 0.5, compared with the standard normal density. Panels: (a) p = 250, n = 500, ν ∼ TN_q(0, I_q); (b) p = 500, n = 1000, ν ∼ TN_q(0, I_q); (c) p = 250, n = 500, ν ∼ GAL_q(1_q, I_q, 10); (d) p = 500, n = 1000, ν ∼ GAL_q(1_q, I_q, 10).]
[Figure 7: The kernel density estimator of the asymptotic distribution as given in Theorem 2 for c = 0.8, compared with the standard normal density. Panels: (a) p = 400, n = 500, ν ∼ TN_q(0, I_q); (b) p = 800, n = 1000, ν ∼ TN_q(0, I_q); (c) p = 400, n = 500, ν ∼ GAL_q(1_q, I_q, 10); (d) p = 800, n = 1000, ν ∼ GAL_q(1_q, I_q, 10).]
[Figure 8: The kernel density estimator of the asymptotic distribution as given in Theorem 2 for c = 0.95, compared with the standard normal density. Panels: (a) p = 475, n = 500, ν ∼ TN_q(0, I_q); (b) p = 950, n = 1000, ν ∼ TN_q(0, I_q); (c) p = 475, n = 500, ν ∼ GAL_q(1_q, I_q, 10); (d) p = 950, n = 1000, ν ∼ GAL_q(1_q, I_q, 10).]
Linear Recursion

arXiv:1001.3368v4 [cs.LO] 25 Nov 2016

Sandra Alves¹, Maribel Fernández², Mário Florido¹, and Ian Mackie³

¹ University of Porto, Faculty of Science & LIACC, R. do Campo Alegre 1021/55, 4169-007, Porto, Portugal
² King’s College London, Department of Informatics, Strand, London WC2R 2LS, U.K.
³ LIX, CNRS UMR 7161, École Polytechnique, 91128 Palaiseau Cedex, France
Abstract

We define two extensions of the typed linear lambda-calculus that yield minimal Turing-complete systems. The extensions are based on unbounded recursion in one case, and bounded recursion with minimisation in the other. We show that both approaches are compatible with linearity and typeability constraints. Both extensions of the typed linear lambda-calculus are minimal, in the sense that taking out any of the components breaks the universality of the system. We discuss implementation techniques that exploit the linearity of the calculi. Finally, we apply the results to languages with fixpoint operators: we give a compilation of the programming language PCF into a linear lambda-calculus with linear unbounded recursion.
1 Introduction
Turing completeness is significant in computer science because it is a standard measure of computational power: all general purpose programming languages are Turing complete. There are a
number of Turing-complete models of computation: Turing Machines, the λ-calculus, term rewriting systems, partial recursive functions, etc. We refer to these as computation models rather than
programming languages, as the former can be seen as abstract representations of computing devices, where the emphasis is on the essential notions, whereas the latter include additional features
to make representing data and algorithms easier.
In this paper, we are interested in minimal models of computation that are Turing complete
(or universal). In particular, we contribute to the collection of universal systems based on the
typed λ-calculus, which is a paradigmatic model of functional computation.
There are several approaches to building a Turing-complete system starting from a typed λ-calculus. To obtain a minimal system, our starting point is the typed linear λ-calculus, and we
add the least machinery needed to obtain a complete system.
The linear λ-calculus [1] is a restriction of the λ-calculus that models linear functions, defined
by syntactically linear terms where each variable occurs exactly once [38]. The linear λ-calculus
captures the essence of functional computation, but it is computationally weak: all the functions
terminate in linear time. In fact, the linear λ-calculus is operationally linear, that is, functions
cannot duplicate or erase arguments during evaluation (see also [8, 41]). Operational linearity has
great impact when the management of resources (copying and erasing of arguments) is important,
as it can be used to efficiently implement garbage collection, for instance. Note, however, that
checking if a system is operationally linear relies on evaluation. On the other hand, syntactical
linearity is easy to check, and it is well-known that compilers can make use of this information to
optimise code. Syntactic linearity is relevant in several program analysis techniques, for instance,
strictness analysis, pointer analysis, effects and resource analysis (see, e.g., [16, 22, 56, 54, 55,
49, 35, 21]). Linear functions are also relevant in hardware compilation [26]: circuits are static
(i.e., they cannot be copied at run-time), so linear computations are more naturally compiled into
hardware.
Starting from the linear λ-calculus, we define two Turing-complete typed λ-calculi that are
universal and syntactically linear: one is based on bounded iteration and minimisation, and the
other uses unbounded recursion.
In the context of the simply typed λ-calculus, interesting classes of programs can be captured by
extensions of the linear λ-calculus based on bounded iteration (see, e.g., [27, 31, 9, 11, 34, 45, 52]).
In particular, a linear version of Gödel’s System T , which we call System L, captures exactly
the class of primitive recursive functions (PR), if iterators use only closed linear functions [19],
whereas the same system with a closed reduction strategy [23] has all the computation power of
System T [6]. The latter result shows some redundancy regarding duplication in System T , which
can be achieved through iteration or through non-linear occurrences of the bound variable in the
body of a function.
In recursion theory, Turing completeness can be achieved by adding a minimisation operator
to a first-order linear system built from a set of linear initial functions and a linear primitive
recursion scheme [4]. A similar result is shown in this paper for the linear λ-calculus: an extension
of System L with a minimiser, which we call System Lµ , is Turing-complete. In System Lµ , both
iteration and minimisation are needed to achieve completeness.
Alternatively, Turing completeness can be achieved by adding a fixpoint operator to a typed
λ-calculus (as it is done in PCF [51]). This approach has been used to extend linear functional
calculi (see, e.g., [46, 15, 50, 17]), however, it relies on the existence of a non-linear conditional
which throws away a possibly infinite computation in one of the branches.
The question that arises is: what is the minimal extension of the typed linear λ-calculus that yields a Turing-complete system compatible with the notion of linear function? We show how to
obtain a Turing-complete typed linear λ-calculus through the use of an unbounded recursor with
a built-in test on pairs, which allows the encoding of both finite iteration and minimisation. More
precisely, we define System Lrec , a linear λ-calculus extended with numbers, pairs and a linear
unbounded recursor, with a closed-reduction strategy. We show that Lrec is Turing-complete and
can be easily implemented: we give an abstract machine whose configurations consist simply of a
pair of a term and a stack of terms.
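To give a flavour of how small such a machine can be, here is a rough sketch (our own, not the machine defined later in the paper) of a substitution-based, call-by-name machine for closed λ-terms whose configurations are exactly a term together with a stack of terms; the machine for Lrec additionally handles numbers, pairs and the recursor. The tuple encoding of terms and all names are our own.

```python
# Terms: ('var', x) | ('lam', x, body) | ('app', t, u)

def subst(t, x, v):
    """Replace x by the closed term v in t; no alpha-conversion is needed
    because only closed terms are substituted (as in closed reduction)."""
    tag = t[0]
    if tag == 'var':
        return v if t[1] == x else t
    if tag == 'lam':
        return t if t[1] == x else ('lam', t[1], subst(t[2], x, v))
    return ('app', subst(t[1], x, v), subst(t[2], x, v))

def run(t):
    """Reduce a closed term to weak head normal form with a (term, stack)
    machine: unwind applications onto the stack, pop them at abstractions."""
    stack = []
    while True:
        if t[0] == 'app':          # push the argument, focus on the function
            stack.append(t[2])
            t = t[1]
        elif t[0] == 'lam' and stack:   # beta step with the top of the stack
            t = subst(t[2], t[1], stack.pop())
        else:
            return t               # weak head normal form reached

I = ('lam', 'x', ('var', 'x'))
apply2 = ('lam', 'f', ('lam', 'y', ('app', ('var', 'f'), ('var', 'y'))))  # λf.λy.f y (linear)
print(run(('app', ('app', apply2, I), I)))  # ('lam', 'x', ('var', 'x'))
```

Since every argument pushed from a closed initial term is itself closed, `subst` never needs to rename bound variables, which is the point of closed reduction.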
System L, System Lrec and System Lµ use a closed-reduction strategy in order to preserve
linearity and accommodate iteration or recursion. This strategy is inspired by the closed cut-elimination strategy defined by Girard [29] for proof nets, which was adapted to the λ-calculus
in [23]. Closed cut elimination is a simple and exceptionally efficient strategy in terms of the
number of cut elimination steps. In the λ-calculus, it avoids α-conversion while allowing reductions
inside abstractions (in contrast with standard weak strategies), thus achieving more sharing of
computation. An alternative approach to preserve linearity of systems with iterators or recursors
is to consider a “closed-at-construction” discipline: the function used in a bounded or unbounded
recursor should be closed when the recursor is built (rather than closed at the time of reduction).
In this paper, we consider both approaches and analyse their computational power. Although in
the case of linear calculi with bounded recursion closed reduction and closed construction capture
different classes of functions, we show that both disciplines yield Turing-complete systems in calculi
with unbounded recursion.
Summarising, this paper investigates the relationship between linearity and bounded/unbounded
recursion in typed functional theories, aiming at obtaining minimal Turing complete systems. The
main contributions are:
• We define two extensions of the typed linear λ-calculus: Lrec , a linear calculus with numbers,
pairs and an unbounded recursor, with a closed-reduction strategy; and Lµ , a linear λcalculus extended with numbers, pairs, a bounded recursor and a minimisation operator,
also with a closed-reduction strategy. We show some properties regarding reduction (such
as subject-reduction and confluence), and prove Turing completeness of both systems by
encoding the set of partial recursive functions in Lrec and Lµ . We also show that both systems
are minimal, in the sense that taking out any of their components breaks the universality
of the system. Lrec relies only on unbounded recursion, whereas Lµ needs both the iterator
and the minimiser.
• We explore some implementation issues for Lrec : we give call-by-name and call-by-value
evaluation strategies, and define a simple abstract machine, exploiting its linearity.
• We study the interplay between linearity and recursion based on fixpoint combinators, and
define an encoding of PCF into Lrec , which combined with the definition of an abstract
machine for Lrec , gives a new implementation of PCF via a simple stack-based abstract
machine.
• We study the interplay between linearity and closed-reduction/closed-construction disciplines in systems with bounded iteration and in systems with unbounded recursion.
Related Work Extensions of the linear λ-calculus based on bounded iteration capture interesting classes of programs and have been used to characterise complexity classes (see, e.g.,
[27, 31, 9, 11, 34, 45, 52]). However, in this paper we are interested in Turing complete systems,
so bounded iteration is not sufficient.
Several approaches to obtaining Turing-complete systems are described in the literature, inspired by
the work on linear logic [28]. In linear logic, linearity is the default, and copying is obtained by the
use of the “of course” exponential operator (!). To recover the full power of linear logic, the linear
calculi defined in [1, 46, 36] provide explicit syntactical constructs for copying and erasing terms,
corresponding to the exponentials in linear logic. However, adding only copy and erase constructs
to the typed linear λ-calculus does not yield a universal system (see Section 3). In these works,
some form of unbounded recursion (using for instance fixpoint combinators and conditionals) is
also included. Moreover, copy and erase constructs are superfluous once recursion is added: a
PCF-like language with explicit resource management is not minimal (copy and erase constructs
are not needed). Instead, copy and erase can be encoded through bounded or unbounded recursion
as shown in this paper (see also [10, 2, 3]).
Several abstract machines for linear calculi are available in the literature (see for instance [47,
55, 44]). The novelty here is that we implement a calculus that is syntactically linear (in the sense
that each variable is linear in Lrec terms) and therefore there is no need to include in the abstract
machine an environment (or store in the terminology of [55]) to store bindings for variables. As
an application, we give a compilation of the full PCF language into Lrec , establishing a relation
between unbounded recursion and recursion through the use of fixpoint operators.
For Lrec , which combines syntactical linearity with closed reduction, the fragment without
recursion is operationally linear; erasing and duplication can only be done by the recursor (in
linear logic [28] this is done by the use of exponentials, and in other linear calculi [1, 46, 36, 55] by
explicit syntactical constructs). Moreover, only closed terms can be erased or duplicated in Lrec .
There are several other domains where linearity plays a key role. For instance, in the area of
quantum computation, the no-cloning theorem, which states that qubits cannot be duplicated, is
one of the most important results in the area. This property is captured by a linear calculus [53]. In
concurrent calculi, like the π-calculus [48], a key aspect is the notion of name, and the dual role that
names play as communication channels and variables. The linear π-calculus [43] has linear (useonce) channels, which leads to clear gains in efficiency and on program analysis avoiding several
problems of channel sharing. Also, inspired by the works by Kobayashi, Pierce and Turner [43]
and the works by Honda [37] on session types, several type systems for the π-calculus rely directly
on linearity to deal with resources, non-interference and effects [32, 57]. In this paper we focus
on functional computations, and aim at obtaining linear, universal models of computation that
can serve as a basis for the design of programming languages. Our approach is to begin with the
linear λ-calculus, and achieve Turing-completeness in a controlled way.
This paper is an extended and revised version of [7], where Lrec was first defined. Here, we
provide proofs of Subject Reduction, confluence and Turing completeness of Lrec , introduce Lµ ,
analyse the power of iteration, minimisation, recursion and fixpoint operators in linear calculi,
and compare the closed-reduction and closed-construction approaches.
2 Preliminaries: Linear Iteration
In this section we recall the definition of System L [6], a linear version of Gödel’s System T (for
details on the latter see [30]). We assume the reader is familiar with the λ-calculus [12].
System L is an extension of the linear λ-calculus [1] with numbers, pairs, and an iterator.
Linear λ-terms t, u, . . . are inductively defined by: x ∈ Λ, λx.t ∈ Λ if x ∈ fv(t), and tu ∈ Λ if
fv(t)∩fv(u) = ∅. Note that x is used at least once in the body of the abstraction, and the condition
on the application ensures that all variables are used at most once. Thus these conditions ensure
syntactic linearity (variables occur exactly once). In System L we also have numbers, generated
by 0 and S, with an iterator:
iter t u v if fv(t) ∩ fv(u) = fv(u) ∩ fv(v) = fv(v) ∩ fv(t) = ∅
and pairs:
⟨t, u⟩                if fv(t) ∩ fv(u) = ∅
let ⟨x, y⟩ = t in u   if x, y ∈ fv(u) and fv(t) ∩ (fv(u) − {x, y}) = ∅
Since λ and let are binders, terms are defined modulo α-equivalence as usual.
Note that, when projecting from a pair, we use both projections. A simple example is the function that swaps the components of a pair: λx.let ⟨y, z⟩ = x in ⟨z, y⟩. In examples below we use tuples of any size, built from pairs. For example, ⟨x₁, x₂, x₃⟩ = ⟨x₁, ⟨x₂, x₃⟩⟩ and let ⟨x₁, x₂, x₃⟩ = u in t represents the term let ⟨x₁, y⟩ = u in let ⟨x₂, x₃⟩ = y in t.
System L uses a closed reduction strategy. The reduction rules for System L are given in
Table 1. Substitution is a meta-operation defined as usual, and reductions can take place in any
context.
Name    Reduction                                     Condition
Beta    (λx.t) v → t[v/x]                             fv(v) = ∅
Let     let ⟨x, y⟩ = ⟨t, u⟩ in v → (v[t/x])[u/y]      fv(t) = fv(u) = ∅
Iter0   iter 0 u v → u                                fv(v) = ∅
IterS   iter (S t) u v → v (iter t u v)               fv(v) = ∅

Table 1: Closed reduction in System L
Note that the Iter rules are only triggered when the function v is closed. Thanks to the use
of a closed reduction strategy, iterators on open linear functions are accepted in System L (since
these terms are syntactically linear), and reduction preserves linearity. The closedness conditions
in rules Beta and Let are not necessary to preserve linearity (since variables are used linearly
in abstractions and lets), but they ensure that all the substitutions created during reduction are
closed (thus, there is no need to perform α-conversions during reduction). Normal forms are not
the same as in the λ-calculus (for example, λx.(λy.y)x is a normal form), but closed reduction
is still adequate for the evaluation of closed terms (if a closed term has a weak head normal
form, it will be reached [6]). Closed reduction can also be used to evaluate open terms, using the
“normalisation by evaluation” technique [14] as shown in [23, 24] (in the latter director strings are
used to implement closedness tests as local checks on terms).
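Read on numerals n = S(· · · (S 0) · · · ), the Iter rules of Table 1 compute n-fold application: iter n u v evaluates to vⁿ(u). A quick sketch with Python naturals standing in for numerals (the function names are ours; `add` mirrors the shape λmn.iter m n (λx.S x)):

```python
def iter_L(n, u, v):
    # iter 0 u v -> u ;  iter (S t) u v -> v (iter t u v)
    return u if n == 0 else v(iter_L(n - 1, u, v))

succ = lambda x: x + 1
add = lambda m, n: iter_L(m, n, succ)   # addition as iterated successor
print(add(3, 4))  # 7
```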
System L is a typed calculus. Note that, although linear, some untyped terms are not strongly normalisable. For instance, ∆∆, where ∆ = λx.iter (S(S 0)) (λxy.xy) (λy.yx), reduces to itself. However, the linear type system defined in [6] ensures strong normalisation. We recall the type definitions for System L below.
Axiom and Structural Rule:

(Axiom)     x : A ⊢ x : A
(Exchange)  from Γ, x : A, y : B, ∆ ⊢ t : C, infer Γ, y : B, x : A, ∆ ⊢ t : C

Logical Rules:

(−◦Intro)   from Γ, x : A ⊢ t : B, infer Γ ⊢ λx.t : A −◦ B
(−◦Elim)    from Γ ⊢ t : A −◦ B and ∆ ⊢ u : A, infer Γ, ∆ ⊢ tu : B
(⊗Intro)    from Γ ⊢ t : A and ∆ ⊢ u : B, infer Γ, ∆ ⊢ ⟨t, u⟩ : A ⊗ B
(⊗Elim)     from Γ ⊢ t : A ⊗ B and ∆, x : A, y : B ⊢ u : C, infer Γ, ∆ ⊢ let ⟨x, y⟩ = t in u : C

Numbers:

(Zero)      ⊢ 0 : N
(Succ)      from Γ ⊢ n : N, infer Γ ⊢ S n : N
(Iter)      from Γ ⊢ t : N, Θ ⊢ u : A and ∆ ⊢ v : A −◦ A, infer Γ, Θ, ∆ ⊢ iter t u v : A

Table 2: Type System for System L
The syntax of terms in L does not include type annotations, instead we will use a type assignment system based on linear types. The set of linear types is generated by the grammar:
A, B ::= N | A −◦ B | A ⊗ B
where N is the type of numbers. A type environment Γ is a list of type assumptions of the form
x : A where x is a variable and A a type, and each variable occurs at most once in Γ. We write
dom(Γ) to denote the set of variables that occur in Γ.
We write Γ ⊢ t : A if the term t can be assigned the type A in the environment Γ using the
typing rules in Table 2. Note that the only structural rule is Exchange; we do not have Weakening
and Contraction rules: we are in a linear system. For the same reason, the logical rules split the
context between the premises (i.e., the variable conditions in Table 3 are enforced by the typing
rules).
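The context splitting can be made concrete with a small checker (a sketch of ours, not from the paper, using our own tuple encoding of terms with type-annotated λ-binders). Each call returns the type together with the set of variables consumed, so the absence of Weakening shows up as a failure when a bound variable goes unused, and the absence of Contraction as a failure when sibling premises overlap:

```python
# Types: 'N' | ('lin', A, B) for A -o B | ('ten', A, B) for A ⊗ B.
# Terms: ('var', x) | ('lam', x, A, t) | ('app', t, u) | ('pair', t, u)
#      | ('let', x, y, t, u) | ('zero',) | ('succ', t) | ('iter', t, u, v)

def check(t, env):
    tag = t[0]
    if tag == 'var':
        return env[t[1]], {t[1]}
    if tag == 'lam':                         # (-o Intro)
        _, x, a, body = t
        b, used = check(body, {**env, x: a})
        assert x in used, "linear variable unused"
        return ('lin', a, b), used - {x}
    if tag == 'app':                         # (-o Elim): split the context
        f, u1 = check(t[1], env)
        a, u2 = check(t[2], env)
        assert not (u1 & u2), "variable used twice"
        assert f[0] == 'lin' and f[1] == a, "ill-typed application"
        return f[2], u1 | u2
    if tag == 'pair':                        # (⊗ Intro)
        a, u1 = check(t[1], env)
        b, u2 = check(t[2], env)
        assert not (u1 & u2), "variable used twice"
        return ('ten', a, b), u1 | u2
    if tag == 'let':                         # (⊗ Elim)
        _, x, y, s, body = t
        ab, u1 = check(s, env)
        assert ab[0] == 'ten'
        c, u2 = check(body, {**env, x: ab[1], y: ab[2]})
        assert x in u2 and y in u2, "linear variables unused"
        u2 = u2 - {x, y}
        assert not (u1 & u2), "variable used twice"
        return c, u1 | u2
    if tag == 'zero':                        # (Zero)
        return 'N', set()
    if tag == 'succ':                        # (Succ)
        n, used = check(t[1], env)
        assert n == 'N'
        return 'N', used
    if tag == 'iter':                        # (Iter): three disjoint contexts
        n, u1 = check(t[1], env)
        a, u2 = check(t[2], env)
        f, u3 = check(t[3], env)
        assert n == 'N' and f == ('lin', a, a)
        assert not (u1 & u2 or u2 & u3 or u1 & u3)
        return a, u1 | u2 | u3

NN = ('ten', 'N', 'N')                       # swap : N ⊗ N -o N ⊗ N
swap = ('lam', 'p', NN, ('let', 'x', 'y', ('var', 'p'),
                         ('pair', ('var', 'y'), ('var', 'x'))))
assert check(swap, {}) == (('lin', NN, NN), set())
```

A non-linear term such as λx.⟨x, x⟩ is rejected by the disjointness assertion in the pair case.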
System L has all the power of System T ; we refer to [6] for more details and examples.
3 Towards a Minimal Universal Type System
In this section we will present two universal type systems which extend the linear λ-calculus: Lrec
and Lµ . While Lrec is a linear calculus with an unbounded recursor, Lµ is a linear calculus where
recursion is obtained through iteration and minimisation. We show that both typed calculi are
universal and minimal (in the sense that all their constructors are necessary for the system to be
universal).
We avoid introducing superfluous operators and rules, such as copy and erase combinators.
Indeed, these can be encoded using recursion, as we will show in this section. The reverse is not
true (although in an untyped system, adding copy and erase combinators to the linear λ-calculus
would produce a Turing-complete system). More precisely, the untyped linear λ-calculus extended
with linear pairs and projections, and copy and erase combinators (c and w) with the following
reduction rules:
c t → ⟨t, t⟩    if fv(t) = ∅
w t → λx.x      if fv(t) = ∅
has the computational power of the pure untyped λ-calculus, but the same does not follow if we
Construction           Variable Constraint                    Free Variables (fv)
0                      −                                      ∅
S t                    −                                      fv(t)
rec t1 t2 t3 t4        fv(ti) ∩ fv(tj) = ∅, for i ≠ j         ∪ fv(ti)
x                      −                                      {x}
t u                    fv(t) ∩ fv(u) = ∅                      fv(t) ∪ fv(u)
λx.t                   x ∈ fv(t)                              fv(t) r {x}
⟨t, u⟩                 fv(t) ∩ fv(u) = ∅                      fv(t) ∪ fv(u)
let ⟨x, y⟩ = t in u    x, y ∈ fv(u), fv(t) ∩ fv(u) = ∅        fv(t) ∪ (fv(u) r {x, y})

Table 3: Terms in System Lrec
Name   Reduction                                          Condition
Rec0   rec ⟨0, t′⟩ u v w → u                              fv(t′ v w) = ∅
RecS   rec ⟨S t, t′⟩ u v w → v (rec (w ⟨t, t′⟩) u v w)    fv(v w) = ∅

Table 4: Closed reduction for recursion
consider typed terms. The typing rules for c and w are:

from Γ ⊢ t : A, infer Γ ⊢ c t : A ⊗ A        from Γ ⊢ t : B, infer Γ ⊢ w t : A −◦ A
Since this system can be encoded in System L (see [6]), which is not Turing complete (all typable
terms are terminating), we conclude that the typed linear λ-calculus with pairs, projections and
the combinators c and w is not universal.
Another way to obtain Turing completeness of typed λ-calculi is via fixpoint operators and
conditionals, as done in PCF [51]. In Section 5 we discuss fixpoints in the presence of linearity
and study the relation between Lrec and PCF.
3.1 Linear Unbounded Recursion
In this section we define Lrec , an extension of the linear λ-calculus [1] with numbers, pairs, and a
typed unbounded recursor with a closed reduction strategy that preserves syntactic linearity. We
prove that this system is Turing complete.
The syntax of System Lrec is similar to that of System L (recalled in Section 2), except that
instead of a bounded iterator we have a recursor working on pairs of natural numbers. Table 3
summarises the syntax of terms in Lrec . We assume Barendregt’s convention regarding names of
free and bound variables in terms (in particular, bound names are different from free names).
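The variable constraints of Table 3 are purely syntactic, so they can be checked by a single traversal that computes free variables while enforcing each side condition. A sketch under our own tuple encoding of Lrec terms (not code from the paper):

```python
def fv(t):
    """Free variables of an Lrec term, asserting the Table 3 constraints."""
    tag = t[0]
    if tag == 'zero':
        return set()
    if tag == 'succ':
        return fv(t[1])
    if tag == 'var':
        return {t[1]}
    if tag == 'rec':                    # fv(ti) ∩ fv(tj) = ∅ for i ≠ j
        parts = [fv(s) for s in t[1:]]
        union = set().union(*parts)
        assert sum(len(p) for p in parts) == len(union), "shared variable"
        return union
    if tag in ('app', 'pair'):          # fv(t) ∩ fv(u) = ∅
        a, b = fv(t[1]), fv(t[2])
        assert not (a & b), "shared variable"
        return a | b
    if tag == 'lam':                    # x ∈ fv(t)
        _, x, body = t
        b = fv(body)
        assert x in b, "unused linear variable"
        return b - {x}
    if tag == 'let':                    # x, y ∈ fv(u) and fv(t) ∩ fv(u) = ∅
        _, x, y, s, u = t
        a, b = fv(s), fv(u)
        assert x in b and y in b and not (a & b)
        return a | (b - {x, y})

# the swap function from Section 2 is a closed linear term
swap = ('lam', 'p', ('let', 'x', 'y', ('var', 'p'),
                     ('pair', ('var', 'y'), ('var', 'x'))))
assert fv(swap) == set()
```

A term like z z, which uses a variable twice, trips the disjointness assertion in the application case.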
The reduction rules for Lrec are Beta and Let, given in Table 1, together with two rules for the
recursor shown in Table 4.
Note that the Rec rules are only triggered when the closedness conditions hold, thus linearity is
preserved by reduction. The conditions on Beta and Let are orthogonal to the linearity issues (as
explained in the previous section, they simply produce a more efficient strategy of reduction) and
do not affect the technical results of the paper (we discuss the role of closed reduction in System
L and System Lrec in more detail in Section 6).
The Rec rules pattern-match on a pair of numbers (the usual bounded recursor works on
a single number). This is because we are representing both bounded and unbounded recursion
with the same operator (as the examples below illustrate), which requires (for a particular n and
function f ) being able to test the value of f (n), and access the value n. An alternative would be
to have an extra parameter of type N in the recursor.
Example 1 We illustrate the use of the recursor by encoding some standard functions in System
Lrec .
• Bounded iteration Let I be the identity function λx.x. System L’s iterator can be encoded
in Lrec using the term “iter” defined as follows:
“iter” t u v  =def  rec ⟨t, 0⟩ u v I
We will show later that this term has the same behaviour as System L’s iterator.
• Projections and duplication of natural numbers The first and second projection functions on pairs ha, bi of natural numbers can be defined by using the numbers in a recursor.
pr₁ = λx.let ⟨a, b⟩ = x in rec ⟨b, 0⟩ a I I
pr₂ = λx.let ⟨a, b⟩ = x in rec ⟨a, 0⟩ b I I
The following function C can be used to copy numbers:
C = λx.rec ⟨x, 0⟩ ⟨0, 0⟩ (λx.let ⟨a, b⟩ = x in ⟨S a, S b⟩) I
Other mechanisms to erase and copy numbers in Lrec will be shown later.
• Arithmetic functions We can now define some arithmetic functions that we will use in
the paper.
– add = λmn.rec ⟨m, 0⟩ n (λx.S x) I;
– mult = λmn.rec ⟨m, 0⟩ 0 (add n) I;
– pred = λn.pr₁ (rec ⟨n, 0⟩ ⟨0, 0⟩ F I), where F = λx.let ⟨t, u⟩ = C(pr₂ x) in ⟨t, S u⟩;
– iszero = λn.pr₁ (rec ⟨n, 0⟩ ⟨0, S 0⟩ (λx.C(pr₂ x)) I).
The correctness of these encodings can be easily proved by induction.
• Minimisation The examples above can also be defined in System L, using bounded recursion. Lrec is a more powerful system: it can encode the minimisation operator µf used to
define partial recursive functions. Recall that if f : N → N is a total function on natural
numbers, µf = min{x ∈ N | f (x) = 0}.
Let f be a closed λ-term in Lrec representing a total function f on natural numbers. The
encoding of µf is
M = rec ⟨f 0, 0⟩ 0 (λx.S(x)) F
where F = λx.let ⟨y, z⟩ = C(pr₂ x) in ⟨f (S y), S z⟩. We prove the correctness of this encoding
below (see Theorem 2).
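These encodings can be replayed concretely by reading the Rec rules of Table 4 as a recursive function on pairs of naturals, with Python integers standing in for numerals and I the identity. Here pr1, pr2 and C are taken as primitive projections and copying (Example 1 shows how to define them with rec); all naming is ours:

```python
def rec(pair, u, v, w):
    # rec <0, t'> u v w -> u ;  rec <S t, t'> u v w -> v (rec (w <t, t'>) u v w)
    n, m = pair
    return u if n == 0 else v(rec(w((n - 1, m)), u, v, w))

I = lambda x: x
pr1 = lambda p: p[0]                  # projections, taken as primitive here
pr2 = lambda p: p[1]
C = lambda n: (n, n)                  # copy a natural number
succ = lambda x: x + 1

add  = lambda m, n: rec((m, 0), n, succ, I)                 # λmn. rec <m,0> n (λx.S x) I
mult = lambda m, n: rec((m, 0), 0, lambda x: add(n, x), I)  # (add n) partially applied
pred = lambda n: pr1(rec((n, 0), (0, 0),
                         lambda x: (pr1(C(pr2(x))), pr2(C(pr2(x))) + 1), I))

def minimise(f):
    # M = rec <f 0, 0> 0 (λx.S x) F, where F x = let <y,z> = C(pr2 x) in <f(S y), S z>
    F = lambda x: (f(pr2(x) + 1), pr2(x) + 1)
    return rec((f(0), 0), 0, succ, F)

print(add(3, 4), mult(3, 4), pred(5), pred(0))   # 7 12 4 0
print(minimise(lambda x: max(0, 3 - x)))         # 3
```

As in the paper, `minimise` terminates only when f is total and actually reaches 0; each RecS step contributes one successor, so the number of steps is exactly the minimum sought.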
We use the same notation for typing judgements in System L and System Lrec , since there will
be no ambiguity. We write Γ ⊢ t : A if the term t can be assigned the type A in the environment Γ
using the typing rules in Table 2, where we replace the rule for the iterator by the following rule:
(Rec)  from Γ ⊢ t : N ⊗ N, Θ ⊢ u : A, ∆ ⊢ v : A −◦ A and Σ ⊢ w : N ⊗ N −◦ N ⊗ N, infer Γ, Θ, ∆, Σ ⊢ rec t u v w : A
Note that all the terms given in the example above can be typed.
Theorem 1 (Properties of reductions in System Lrec )
1. If Γ ⊢ t : T then dom(Γ) = fv(t).
2. Subject Reduction: Reductions preserve types.
3. Church-Rosser: System Lrec is confluent.
4. Adequacy: If ⊢ t : T in System Lrec , and t is a normal form, then:
T = N        ⇒ t = S(S . . . (S 0))
T = A ⊗ B    ⇒ t = ⟨u, s⟩
T = A −◦ B   ⇒ t = λx.s
5. System Lrec is not strongly normalising, even for typeable terms.
Proof:
1. By induction on type derivations.
2. By induction on type derivations, using a substitution lemma as usual. We show the case
where the term has the form rec ht, t′ i u v w (for the other cases, the proof is the same as
for System L [6]).
Assume Γ ⊢ rec ht, t′ i u v w : A. If the reduction takes place inside t, t′ , u, v or w the
property follows directly by induction. If the reduction takes place at the root, there are
two cases:
(a) rec h0, t′ i u v w → u if fv(t′ vw) = ∅. Then, by part 1, dom(Γ) = fv(rec h0, t′ i u v w) =
fv(u). The type derivation may end with (Exchange), in which case the result is trivial,
or with (Rec), in which case the derivation has conclusion Γ ⊢ rec h0, t′ i u v w : A with
premises: ⊢ h0, t′ i : N ⊗ N, Γ ⊢ u : A, ⊢ v : A −◦ A, ⊢ w : N ⊗ N −◦ N ⊗ N. Therefore
the property holds, directly from Γ ⊢ u : A.
(b) rec hS t, t′ i u v w → v(rec (wht, t′ i) u v w) if fv(vw) = ∅. Reasoning in a similar
way, we note that when the type derivation ends with an application of the rule (Rec),
it has conclusion Γ, ∆ ⊢ rec hSt, t′ i u v w : A with premises Γ ⊢ hSt, t′ i : N ⊗ N,
∆ ⊢ u : A, ⊢ v : A −◦ A, and ⊢ w : N ⊗ N −◦ N ⊗ N. If Γ ⊢ hSt, t′ i : N ⊗ N, then we can
deduce Γ ⊢ ht, t′ i : N ⊗ N, therefore we have Γ ⊢ wht, t′ i : N ⊗ N. Thus we can obtain
Γ, ∆ ⊢ rec (wht, t′ i) u v w : A. From these we deduce Γ, ∆ ⊢ v(rec wht, t′ i u v w) : A as
required.
3. Confluence can be proved directly, using Martin-Löf’s technique (as it was done for System
L, see [2]) or can be obtained as a consequence of Klop’s theorem for orthogonal higher-order
reduction systems [42].
4. By induction on t. If t = 0, λx.t′ or ⟨t1 , t2 ⟩, then we are done. Otherwise:
• If t = S t′ , the claim follows by induction.
• If t = rec t0 t1 t2 t3 : since t is in normal form, so are the terms ti . Since t is typable,
t0 must be a term of type N ⊗ N, and by induction, t0 is a pair of numbers. But then
one of the recursor rules applies (contradiction).
• The cases of application and let are similar.
5. The following term is typable but is not strongly normalisable:
rec ⟨S(0), 0⟩ 0 I (λx.let ⟨y, z⟩ = x in ⟨S(y), z⟩)
Another non-terminating typable term will be given later, using the encoding of a fixpoint
operator.
      Γ ⊢ t : N
  ──────────────────
  Γ ⊢ ⟨t, 0⟩ : N ⊗ N        Θ ⊢ u : A        ∆ ⊢ v : A −◦ A        ⊢ I : N ⊗ N −◦ N ⊗ N
  ─────────────────────────────────────────────────────────────────────────────────────
  Γ, Θ, ∆ ⊢ rec ⟨t, 0⟩ u v I : A

Figure 1: Type derivation for “iter” t u v
The Computational Power of System Lrec
We now prove that System Lrec is Turing complete. Since System L can encode all the primitive
recursive functions [2, 6], it suffices to show that System L is a sub-system of Lrec (therefore Lrec
also encodes primitive recursion), and that one can encode minimisation.
First we show that the encoding of System L’s iterator, defined in Example 1, behaves as
expected; this establishes that System L is a sub-system of Lrec .
Proposition 1
“iter” t u v →∗ u                      if t →∗ 0, fv(v) = ∅
“iter” t u v →∗ v(“iter” t1 u v)       if t →∗ S(t1 ), fv(v) = ∅
Proof:
• If t →∗ 0:
  “iter” t u v  def=  rec ⟨t, 0⟩ u v I  →∗  rec ⟨0, 0⟩ u v I  →  u,   if fv(v) = ∅
• If t →∗ S(t1 ):
  “iter” t u v  def=  rec ⟨t, 0⟩ u v I  →∗  rec ⟨S(t1 ), 0⟩ u v I
                 →  v(rec I⟨t1 , 0⟩ u v I),   if fv(t1 v) = ∅
                 →  v(rec ⟨t1 , 0⟩ u v I)  def=  v(“iter” t1 u v)
If Γ ⊢ t : N, Θ ⊢ u : A, and ∆ ⊢ v : A −◦ A, then Γ, Θ, ∆ ⊢ rec ht, 0i u v I : A, that is “iter” t u v
is properly typed in System Lrec , as shown in Figure 1.
Corollary 1 System Lrec has all the computation power of System L, thus, any function definable
in System T can be defined in Lrec .
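As a sanity check on Proposition 1, the recursor's two reduction rules and the derived iterator can be transcribed directly. The Python sketch below is our own modelling (the names `rec` and `iter_` are ours, not the calculus syntax):

```python
# rec <0, t'> u v w → u ;  rec <S t, t'> u v w → v(rec (w<t, t'>) u v w)
def rec(pair, u, v, w):
    n, aux = pair
    if n == 0:
        return u
    return v(rec(w((n - 1, aux)), u, v, w))

# "iter" t u v  :=  rec <t, 0> u v I, where I is the identity on pairs
def iter_(t, u, v):
    return rec((t, 0), u, v, lambda p: p)
```

For example, `iter_(3, 0, lambda x: x + 2)` applies the step function three times to 0, exactly as the proposition predicts.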
We now show that the encoding of the minimiser given in Section 1 behaves as expected.
Theorem 2 (Minimisation in System Lrec ) Let f be a closed λ-term in Lrec , encoding the
total function f on natural numbers. Consider the term M = rec ⟨f 0, 0⟩ 0 (λx.S(x)) F , with
F = λx.let ⟨y, z⟩ = C(pr2 x) in ⟨f (Sy), Sz⟩. The term M encodes µf .
Proof: Consider the non-empty sequence S = f (i), f (i + 1), . . . , f (i + n), such that f (i + n) is
the first element in the sequence that is equal to zero. Then
rec ⟨f (S^i 0), S^i 0⟩ 0 (λx.S(x)) F →∗ S^n 0
We proceed by induction on the length of S.
• Basis: S = f (i). Thus
  rec ⟨f (S^i 0), S^i 0⟩ 0 (λx.S(x)) F  →∗  rec ⟨0, S^i 0⟩ 0 (λx.S(x)) F  →  0
• Induction: If S = f (i), f (i + 1), . . . , f (i + n), then f (i) > 0, therefore f i reduces to a term
  of the form (S t). One easily notices that
  rec ⟨f (S^i 0), S^i 0⟩ 0 (λx.S(x)) F
    →∗  rec ⟨S t, S^i 0⟩ 0 (λx.S(x)) F
    →∗  S(rec ⟨f (S^{i+1} 0), S^{i+1} 0⟩ 0 (λx.S(x)) F )
    →∗  S(S^{n−1} 0) = S^n 0                       (by I.H.)
Now, let j = min{x ∈ N | f (x) = 0}, and consider the sequence f (0), . . . , f (j). One easily notices
that rec ⟨f 0, 0⟩ 0 (λx.S(x)) F →∗ S^j 0. Note that, if there exists no x such that f (x) = 0, then
rec ⟨f 0, 0⟩ 0 (λx.S(x)) F diverges, and so does the minimisation of f .
Corollary 2 System Lrec is Turing complete.
Erasing and Duplicating in Lrec
There are various ways of encoding erasing and duplicating in Lrec . First note that, although
in the linear λ-calculus we are not able to discard arguments of functions, terms are consumed
by reduction. The idea of erasing by consuming is related to the notion of Solvability (see [12],
Chapter 8) as it relies on reduction to the identity. Using this technique, in [2, 6] it is shown that
in System L there is a general form of erasing. In Lrec this technique can be used to erase terms
of type A, where A is a type generated by the grammar: A, B ::= N | A ⊗ B. In the definition
of the erasing function E(t, A) we use a function M(A) to build a term of type A (E and M are
mutually recursive).
Definition 1 (Erasing) If Γ ⊢ t : A, then E(t, A) is defined as follows:
  E(t, N)        =  rec ⟨t, 0⟩ I I I
  E(t, A ⊗ B)    =  let ⟨x, y⟩ = t in E(x, A)E(y, B)
  E(t, A −◦ B)   =  E(tM(A), B)
and
  M(N)        =  0
  M(A ⊗ B)    =  ⟨M(A), M(B)⟩
  M(A −◦ B)   =  λx.E(x, A)M(B)
Theorem 3
1. If Γ ⊢ t : T then Γ ⊢ E(t, T ) : B −◦ B, for any type B.
2. M(T ) is closed and typeable: ⊢ M(T ) : T .
3. For any type T , E(M(T ), T ) →∗ I.
4. M(T ) is normalisable.
Proof: The first two parts are proved by simultaneous induction on T , as done for System L [6].
The third part is proved by induction on T .
• If T = N, then M(T ) = 0, and E(0, N) = rec ⟨0, 0⟩ I I I → I.
• If T = A ⊗ B, then M(A ⊗ B) = ⟨M(A), M(B)⟩, and
  E(⟨M(A), M(B)⟩, A ⊗ B)  =  let ⟨x, y⟩ = ⟨M(A), M(B)⟩ in E(x, A)E(y, B)
                          →  E(M(A), A)E(M(B), B)  →∗  II  →  I
  since, by induction, E(M(A), A) →∗ I and E(M(B), B) →∗ I.
• If T = A −◦ B then M(T ) = λx.E(x, A)M(B), therefore
  E(λx.E(x, A)M(B), A −◦ B)  =  E((λx.E(x, A)M(B))M(A), B)
                             →  E(E(M(A), A)M(B), B)
                             →∗ E(IM(B), B)                 (by I.H. on A)
                             →  E(M(B), B)  →∗  I           (by I.H. on B)
The last part is proved by induction on T .
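The mutual recursion between E and M can be animated on the type grammar N | A ⊗ B | A −◦ B. In the Python sketch below (our own modelling choices: types as tagged tuples, `I` as the Python identity) erasing a canonical value always collapses to I, as Theorem 3 states:

```python
I = lambda x: x   # the identity, the end result of a completed erasure

def erase(t, T):
    """E(t, T): consume a value t of type T, ending in the identity."""
    if T == 'N':
        return I                         # rec <t, 0> I I I consumes the numeral
    tag, a, b = T
    if tag == 'prod':
        x, y = t
        return erase(x, a)(erase(y, b))  # E(x, A) E(y, B) →* I I → I
    return erase(t(make(a)), b)          # E(t M(A), B)

def make(T):
    """M(T): a closed canonical inhabitant of type T."""
    if T == 'N':
        return 0                           # M(N) = 0
    tag, a, b = T
    if tag == 'prod':
        return (make(a), make(b))          # M(A ⊗ B) = <M(A), M(B)>
    return lambda x: erase(x, a)(make(b))  # M(A −◦ B) = λx.E(x, A) M(B)
```

Part 3 of Theorem 3, E(M(T ), T ) →∗ I, then becomes a directly checkable property of the model.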
Lrec , unlike System L, is not normalising, and there are terms that cannot be consumed
using the technique described above. There are even normalising terms that cannot be erased
by reduction. For example, consider the following term YN which represents a fixpoint operator
(more details are given in Section 5):
YN = λf.rec ⟨S(0), 0⟩ 0 f (λx.let ⟨y, z⟩ = x in ⟨S(y), z⟩)
This term is typable (it has type (N −◦ N) −◦ N) and is a normal form (the recursor rules do
not apply because f is a variable). However, the term
E(YN , (N −◦ N) −◦ N) = rec ⟨YN (λx.E(x, N)0), 0⟩ I I I
does not have a normal form. On the positive side, closed terms of type N, or tuples where the
elements are terms of type N, can indeed be erased using this technique. Erasing “by consuming”
reflects the work that needs to be done to effectively dispose of a data structure (where each
component is garbage collected). For arrow types, a different erasing mechanism will be defined
in Section 5.
Theorem 4 Let T be a type generated by the grammar: A, B ::= N | A ⊗ B. If ⊢ t : T and t has
a normal form, then E(t, T ) →∗ I.
Proof: By induction on T .
• If T = N, then E(t, T ) = rec ht, 0i I I I. Since t is normalising, t →∗ v, and by the Adequacy
result (Theorem 1), v = Sn 0, n ≥ 0. Therefore rec ht, 0i I I I →∗ rec hSn 0, 0i I I I →∗ I.
• If T = A ⊗ B: E(t, T ) = let hx, yi = t in E(x, A)E(y, B). Since t is normalisable then,
by Adequacy (Theorem 1), t →∗ v = hu, si. Thus let hx, yi = t in E(x, A)E(y, B) →∗
let hx, yi = hu, si in E(x, A)E(y, B) → E(u, A)E(s, B). By induction hypothesis E(u, A) →∗
I and E(s, B) →∗ I, therefore E(u, A)E(s, B) →∗ II → I.
There is also a mechanism to copy closed terms in Lrec :
Definition 2 (Duplication) Define DA : A −◦ A ⊗ A as:
λx.rec ⟨S(S 0), 0⟩ ⟨M(A), M(A)⟩ F I
where F = (λy.let ⟨z, w⟩ = y in E(z, A)⟨w, x⟩).
Theorem 5 If ⊢ t : A then DA t →∗ ht, ti.
Proof: By the definition of →:
  DA t  →   rec ⟨S(S 0), 0⟩ ⟨M(A), M(A)⟩ (λy.let ⟨z, w⟩ = y in E(z, A)⟨w, t⟩) I
        →∗  (λy.let ⟨z, w⟩ = y in E(z, A)⟨w, t⟩)² ⟨M(A), M(A)⟩
        →∗  (λy.let ⟨z, w⟩ = y in E(z, A)⟨w, t⟩)(E(M(A), A)⟨M(A), t⟩)
        →∗  (λy.let ⟨z, w⟩ = y in E(z, A)⟨w, t⟩)⟨M(A), t⟩
        →∗  E(M(A), A)⟨t, t⟩  →∗  ⟨t, t⟩
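The two F-steps driven by the numeral S(S 0) can be traced in a small model (ours, with an opaque stand-in for M(A)): the dummy components are pushed out one at a time until both slots hold t.

```python
def duplicate(t):
    # D_A t → rec <S(S 0), 0> <M(A), M(A)> F I, i.e. two applications of
    # F = λy.let <z, w> = y in E(z, A)<w, t> to the dummy pair <M(A), M(A)>:
    # <M, M> → <M, t> → <t, t>, after which a final erasure leaves <t, t>.
    M = object()                     # stand-in for the canonical inhabitant M(A)
    F = lambda pair: (pair[1], t)    # drop the first slot, slide t in
    state = (M, M)
    for _ in range(2):               # the numeral S(S 0) drives two F-steps
        state = F(state)
    return state
```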
3.2 System Lµ : Minimisation vs. Unbounded Recursion
There are two standard ways of extending the primitive recursive functions so that all partial
recursive functions are obtained. One is unbounded minimisation, the other is unbounded recursion. For first-order functions (i.e., functions of type level 1), both methods are equivalent, see for
instance [13]. In this section we extend System L with a minimisation operator – we will refer to
this extension as System Lµ – and establish its relation with System Lrec . Starting from System
L, we add a minimiser with a typing rule
Γ ⊢ t : N        Θ ⊢ u : N        ∆ ⊢ f : N −◦ N
──────────────────────────────────────────────── (Min)
Γ, Θ, ∆ ⊢ µ t u f : N
and two reduction rules:
µ 0 u f      →  u,                        fv(f ) = ∅
µ (S t) u f  →  µ (f (S u)) (S u) f,      fv(f tu) = ∅
Theorem 6 (Properties of reductions in System Lµ )
1. If Γ ⊢ t : T then dom(Γ) = fv(t).
2. Subject Reduction: If Γ ⊢ t : T and t −→ t′ then Γ ⊢ t′ : T .
3. System Lµ is confluent: If t −→∗ u and t −→∗ v then there is some term s such that u −→∗ s
and v −→∗ s.
Proof:
1. By induction on the type derivation.
2. Straightforward extension of the proof given for System L in [6], by induction on the type
derivation Γ ⊢ t : T . We show the case where the term t is µ s u f and there is a type
derivation ending in:
Γ ⊢ s : N        Θ ⊢ u : N        ∆ ⊢ f : N −◦ N
──────────────────────────────────────────────── (Min)
Γ, Θ, ∆ ⊢ µ s u f : N
If the reduction step takes place inside s, u or f , the result follows directly by induction. If
reduction takes place at the root, we have two cases:
(a) µ 0 u f → u, with fv(f ) = ∅. Note that fv(µ 0 u f ) = fv(u) = dom(Θ) by part 1, and
we have Θ ⊢ u : N.
(b) µ (S t) u f → µ (f (S u)) (S u) f , with fv(tuf ) = ∅. Then fv(µ (S t) u f ) = ∅, and
we have:
  ⊢ S t : N        ⊢ u : N        ⊢ f : N −◦ N
  ──────────────────────────────────────────── (Min)
  ⊢ µ (S t) u f : N
Therefore:
                   ⊢ u : N
                  ──────────
  ⊢ f : N −◦ N    ⊢ S u : N
  ──────────────────────────
       ⊢ f (S u) : N              ⊢ S u : N        ⊢ f : N −◦ N
  ───────────────────────────────────────────────────────────── (Min)
  ⊢ µ (f (S u)) (S u) f : N
3. Using Tait-Martin-Löf’s method (see [12] for more details).
Since System L, and therefore System Lµ , includes all the primitive recursive functions, to
show Turing completeness of System Lµ it is sufficient to show that unbounded minimisation can
be encoded. First, we recall the following result from Kleene [39], which uses the well-known
minimisation operator µ (already mentioned in Example 1).
Theorem 7 (The Kleene normal form) Let h be a partial recursive function on Nk . Then,
a number n and two primitive recursive functions f , g can be found such that h(x1 , . . . , xk ) =
f (µg (n, x1 , . . . , xk )) where µg is the minimisation operator on the last argument of g, that is,
µg (n, x1 , . . . , xk ) = min{y | g(n, x1 , . . . , xk , y) = 0}.
As a consequence of Kleene’s theorem, we only have to prove that we can encode minimisation
of primitive recursive functions in order to show Turing-completeness of Lµ , relying on the fact
that primitive recursive functions can be encoded in System L. Below we give the encoding of
minimisation for functions of arity 1 (the extension to functions of arity n > 1 is straightforward).
Theorem 8 (Unbounded minimisation in System Lµ ) If f : N → N is a primitive recursive
function and f is its encoding in System Lµ , then
µf = µ (f 0) 0 f
Proof: Similar to the proof for System Lrec (Theorem 2), considering the non-empty sequence
S = f (i), f (i + 1), . . . , f (i + j), such that f (i + j) is the first element in the sequence that is equal
to zero, and showing (by induction on the length of S) that:
µ (f i) i f →∗ i + j.
Corollary 3 System Lµ is Turing complete.
We can also encode System Lrec into System Lµ , simulating the recursor with iter and µ. Consider
the following term:
f = λn.pr1 (iter n ⟨t, t′ ⟩ (w ◦ pred1 ))
where pred1 is such that pred1 ⟨S(t), t′ ⟩ = ⟨t, t′ ⟩. The function f , given n, will produce pr1 ((w ◦
pred1 )^n ⟨t, t′ ⟩). Now consider (µ t 0 f ), which will lead to the following sequence:
µ t 0 f → µ f (1) 1 f → µ f (2) 2 f → µ f (3) 3 f → ... → n
where n is the minimum number such that (w ◦ pred1 )^n ⟨t, t′ ⟩ produces ⟨0, t′′ ⟩. Now, one can encode
rec ⟨t, t′ ⟩ u v w as:
iter (µ t 0 f ) u v
Intuitively, rec ⟨S t, t′ ⟩ u v w will iterate v until the first component of the pair reaches zero, and
µ t 0 f counts the number of iterations that will actually be necessary, or goes on forever if that
never happens.
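The round-trip can be checked concretely. The Python sketch below (our illustration, with w restricted to total step functions on pairs) computes rec ⟨t, t′⟩ u v w by first running µ to find the number of steps and then iterating v that many times:

```python
def rec_via_iter_mu(t, t2, u, v, w):
    # pred1 decrements the first component: pred1 <S t, t'> = <t, t'>
    pred1 = lambda pair: (pair[0] - 1, pair[1])

    # f(n) = pr1((w ∘ pred1)^n <t, t2>)
    def f(n):
        pair = (t, t2)
        for _ in range(n):
            pair = w(pred1(pair))
        return pair[0]

    n = 0                    # µ t 0 f: search the first n with f(n) = 0
    while f(n) != 0:
        n += 1
    result = u               # iter n u v: apply v exactly n times
    for _ in range(n):
        result = v(result)
    return result
```

With w the identity this coincides with the direct recursor: three unfoldings of v are performed before the counter hits zero.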
System Lµ is a minimal universal system in the sense that the subsystems obtained by taking
out iter and µ, respectively, are not universal. Note that the subsystem without µ corresponds
to System L and is therefore strongly normalising. Also note that the minimiser cannot replace
bounded iteration, either in recursion theory or in the typed λ-calculus, as we now show.
Partial Recursive Functions without Bounded Iteration
Lemma 1 For any function f (x1 , . . . , xm ), m ≥ 0, defined from the initial functions (0, S and projections) and composition, without using the primitive recursive scheme, there is a constant k such
that f (n1 , n2 , . . . , nm ) = ni + k or f (n1 , n2 , . . . , nm ) = k, for any given arguments n1 , . . . , nm ∈ N.
Proof: Assume f (x1 , . . . , xm ) is defined by the expression e. We proceed by induction on e: The
base cases (e = 0, e = S(x) and e = pr in (x1 , . . . , xm )) are trivial. Let us consider the composition
case. If e = g(f1 (x1 , . . . , xm ), . . . , fj (x1 , . . . , xm )), where g and fi are previously defined functions,
then by induction hypothesis fi (n1 , . . . , nm ) = k′i , where k′i = ki or k′i = nℓ + ki for some constant
ki and some argument nℓ . But then, by induction hypothesis applied to g, the value g(k′1 , . . . , k′j )
is a constant k or k′i + k, and the result follows.
  V is a value
  ──────────── Val
    V ⇓ V

  s ⇓ λx.u        u[t/x] ⇓ V
  ────────────────────────── App
          st ⇓ V

  t ⇓ ⟨t1 , t2 ⟩        (λxy.u)t1 t2 ⇓ V
  ────────────────────────────────────── Let
      let ⟨x, y⟩ = t in u ⇓ V

  t ⇓ ⟨t1 , t2 ⟩        t1 ⇓ 0        u ⇓ V
  ───────────────────────────────────────── Rec1
            rec t u v w ⇓ V

  t ⇓ ⟨t1 , t2 ⟩        t1 ⇓ S t′        v(rec (w⟨t′ , t2 ⟩) u v w) ⇓ V
  ──────────────────────────────────────────────────────────────────── Rec2
                          rec t u v w ⇓ V

Table 5: CBN evaluation for System Lrec
Theorem 9 Minimisation applied to functions in the previous class either returns 0 or is not
defined.
Proof: By the previous lemma, when f (x1 , . . . , xm ) = 0, then either f is the constant function
returning 0, or it returns 0 when the argument xi = 0. In the first case µf returns 0, and in the
second case either i = m, and then µf returns 0, or µf diverges.
System Lµ without Iteration
Lemma 2 If ⊢ f : N −◦ N is a term in System Lµ without iter and µ, then
f (S t) →∗ S^k 0, where k ≠ 0.
Proof: First note that f (S t) : N, and it is strongly normalisable1 . Therefore, by Adequacy,
f (S t) →∗ S^k 0, for some k ≥ 0. Since f is linear, it cannot erase the S in its argument, therefore
k ≠ 0.
Theorem 10 Let ⊢ t : N, ⊢ u : N, ⊢ f : N −◦ N be terms in System Lµ without iter and µ. Then
µ t u f either reduces to a reduct of u, or diverges.
Proof: By Adequacy, t →∗ S^k 0, for some k. If k = 0, then µ 0 u f → u, using the first
rule for µ. If k ≠ 0 then, using the second rule for µ, the computation diverges because, by the
previous lemma, f (S^k 0) with k ≠ 0 will never reduce to 0.
This theorem is stated for closed terms, but is valid also if t →∗ 0 and u is an open term. Otherwise,
if we have open terms the rule for µ will not apply.
Lrec can be seen as a more compact version of System Lµ , in which the recursor can perform both
bounded iteration and minimisation.
4 Evaluation Strategies for System Lrec
In this section we define two evaluation strategies for System Lrec and derive a stack-based abstract
machine.
Call-by-name The CBN evaluation relation for closed terms in System Lrec is defined in Table 5.
The notation t ⇓ V means that the closed term t evaluates in System Lrec to the value V .
Values are terms of the form 0, St, λx.t and hs, ti, i.e., weak head normal forms (whnf). Note
that System Lrec does not evaluate under a S symbol, since S is used as a constructor for natural
numbers. Also note that no closedness conditions are needed in the evaluation rules for closed
terms. The rule Let is given using application to simplify the presentation (in this way, we will be
able to reuse this rule when we define the call-by-value evaluation relation below).
The evaluation relation · ⇓ · corresponds to standard reduction to weak head normal form.
Recall that a reduction is called standard if the contraction of redexes is made from left-to-right
(i.e., leftmost-outermost). It is well known that for the λ-calculus [12], the standard reduction is
normalising, that is, if a term has a normal form, then it will be reached. A “standardisation”
result holds for closed terms in Lrec , as the following theorem shows.

1 We are in a proper subset of System L, for which the properties of Strong Normalisation and Adequacy hold [6].
Theorem 11 (Standardisation) If ⊢ t : T (i.e., t is a closed term in Lrec ) and t has a whnf, then
t ⇓ V , for some value V .
Proof: We rely on Klop’s result [40, 20], which states that leftmost-outermost reduction is normalising for left-normal orthogonal Combinatory Reduction Systems (CRSs). A CRS is orthogonal if
its rules are left-linear (i.e., the left hand-sides of the rewrite rules contain no duplicated variables)
and non-overlapping (there are no critical pairs). A CRS is left-normal if on the left hand-sides of
the rewrite rules, all the function symbols appear before the variables. The λ-calculus is an example of a left-normal orthogonal CRS, as is System Lrec . Therefore, leftmost-outermost reduction
is normalising for Lrec . The result follows, since CBN performs leftmost-outermost reduction.
For open terms, the set of weak head normal forms includes not only values but also other
kinds of terms, since, for instance, reduction of an application is blocked if the argument is open.
However, an evaluation procedure can also be defined for open terms using closed reduction, if we
consider all the free variables as constants as shown in [23] (see also [14]).
Call-by-value A call-by-value evaluation relation for System Lrec can be obtained from the
CBN relation by changing the rule for application, as usual.
s ⇓ λx.u        t ⇓ V ′        u[V ′ /x] ⇓ V
──────────────────────────────────────────── App
                st ⇓ V
There is no change in the Rec and Let rules, since they rely on the App rule. Unlike CBN,
the CBV strategy does not always reach a value, even if a closed term has one (Theorem 11
does not hold for a CBV strategy). For example, recall the term YN in Section 3.1, and consider
(λxy.(rec ⟨0, 0⟩ I E(x, N) I)y)(YN I). This term has a value under the CBN strategy, but not under
CBV. In fact, innermost strategies are normalising in an orthogonal system if and only if the
system is itself strongly normalising.
4.1 Stack Machine for System Lrec
Intermediate languages that incorporate linearity have well known implementation advantages
whether in compilers, static analysis, or whenever resources are limited [44, 46, 15, 55]. Inspired
by these previous works, we finish this section by illustrating how simply System Lrec can be
implemented as a stack machine. We show a call-by-name version, but it is straightforward to
modify to other reduction strategies.
The basic principle of the machine is to find the next redex, using a stack S to store future
computations. The elements of the stack are terms in an extension of Lrec that includes the
following additional kinds of terms: LET (x, y, t), REC(u, v, w), REC ′ (n, u, v, w), where x, y are
variables bound in LET (x, y, t) and n, t, u, v, w are Lrec terms.
The configurations of the machine are pairs consisting of a term and a stack of extended terms.
Unlike Krivine’s machine or its variants (see for instance [33, 18, 25]) we do not need to include an
environment (sometimes called store, as in [55]) in the configurations. Indeed, the environment is
used to store bindings for variables, but here as soon as a binding of a variable to a term is known
we can replace the unique occurrence of that variable (the calculus is syntactically linear). In
other words, instead of building an environment, we use “assignment” and replace the occurrence
of the variable by the term.
The transitions of the machine are given in Table 6. For a program (closed term t), the machine
is started with an empty stack: (t, []). The machine stops when no rule can apply.
The use of “assignment” means that there is no manipulation (no copying, erasing, or even
searching for bindings) in environments usually associated to these kinds of implementations.
(app)     (st, S)                              ⇒  (s, t : S)
(abs)     (λx.u, t : S)                        ⇒  (u[t/x], S)
(let)     (let ⟨x, y⟩ = t in u, S)             ⇒  (t, LET (x, y, u) : S)
(pair1)   (⟨t1 , t2 ⟩, LET (x, y, u) : S)      ⇒  (u[t1 /x][t2 /y], S)
(rec)     (rec t u v w, S)                     ⇒  (t, REC(u, v, w) : S)
(pair2)   (⟨t1 , t2 ⟩, REC(u, v, w) : S)       ⇒  (t1 , REC ′ (t2 , u, v, w) : S)
(zero)    (0, REC ′ (t2 , u, v, w) : S)        ⇒  (u, S)
(succ)    (S(t1 ), REC ′ (t2 , u, v, w) : S)   ⇒  (v, (rec (w⟨t1 , t2 ⟩) u v w) : S)

Table 6: Stack machine for System Lrec
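The transitions of Table 6 can be transcribed almost line-for-line. The Python sketch below is our own rendering (tagged tuples for terms; 'LET'/'REC'/'RECP' markers for the extended stack entries); it assumes closed, well-typed input and no variable shadowing:

```python
MARKS = ('LET', 'REC', 'RECP')   # markers for LET(x,y,u), REC(u,v,w), REC'(t2,u,v,w)

def subst(t, x, s):
    # Substitution; the calculus is syntactically linear, so at most one occurrence.
    tag = t[0]
    if tag == 'var':
        return s if t[1] == x else t
    if tag == 'lam':
        return t if t[1] == x else ('lam', t[1], subst(t[2], x, s))
    return (tag,) + tuple(subst(a, x, s) if isinstance(a, tuple) else a for a in t[1:])

def run(t, S=()):
    while True:
        tag = t[0]
        if tag == 'app':                                         # (app)
            t, S = t[1], (t[2],) + S
        elif tag == 'lam' and S and S[0][0] not in MARKS:        # (abs)
            t, S = subst(t[2], t[1], S[0]), S[1:]
        elif tag == 'let':                                       # (let)
            t, S = t[3], (('LET', t[1], t[2], t[4]),) + S
        elif tag == 'pair' and S and S[0][0] == 'LET':           # (pair1)
            _, x, y, u = S[0]
            t, S = subst(subst(u, x, t[1]), y, t[2]), S[1:]
        elif tag == 'rec':                                       # (rec)
            t, S = t[1], (('REC', t[2], t[3], t[4]),) + S
        elif tag == 'pair' and S and S[0][0] == 'REC':           # (pair2)
            _, u, v, w = S[0]
            t, S = t[1], (('RECP', t[2], u, v, w),) + S
        elif tag == 'zero' and S and S[0][0] == 'RECP':          # (zero)
            t, S = S[0][2], S[1:]
        elif tag == 'succ' and S and S[0][0] == 'RECP':          # (succ)
            _, t2, u, v, w = S[0]
            t, S = v, (('rec', ('app', w, ('pair', t[1], t2)), u, v, w),) + S[1:]
        else:
            return t                                             # no rule applies: a whnf

def to_int(t):
    # Force a closed term of type N all the way down to a Python integer.
    n = 0
    while True:
        t = run(t)
        if t[0] == 'succ':
            n, t = n + 1, t[1]
        else:
            return n                                             # t is ('zero',)
```

Since the machine does not evaluate under S, `run` stops at a weak head normal form; `to_int` restarts it under each S to read off a numeral.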
The correctness of the machine with respect to the CBN evaluation relation is proved in the
usual way: first we show that if a typeable term has a value, the machine will find it (it cannot
remain blocked) and then we show that if the machine starting with a configuration (t, []) stops
at a value, then this value is a reduct of t in the calculus.
Theorem 12 (Completeness) If ⊢ t : T and there is a value V such that t ⇓ V , then (t, []) ⇒∗
(V, []).
Proof: By induction on the evaluation relation, using Subject Reduction (Theorem 1) and the
following property:
If (t, S) ⇒ (t′ , S ′ ) then (t, S ++ S ′′ ) ⇒ (t′ , S ′ ++ S ′′ ).
This property is proved by induction on (t, S). Intuitively, since only the top of the stack is
used to select a transition, it is clear that appending elements at the bottom of the stack does not
affect the computation.
Theorem 13 (Soundness) If ⊢ t : T and (t, []) ⇒∗ (V, []) then t →∗ V .
Proof: First, we define a readback function that converts a machine configuration (t, S) into a
term, by induction on S as follows:
  Readback(t, [])                          =  t
  Readback(t, LET (x, y, u) : S)           =  Readback(let ⟨x, y⟩ = t in u, S)
  Readback(t, REC(u, v, w) : S)            =  Readback(rec t u v w, S)
  Readback(t1 , REC ′ (t2 , u, v, w) : S)  =  Readback(rec ⟨t1 , t2 ⟩ u v w, S)
  Readback(s, t : S)                       =  Readback(st, S), otherwise
Then, we show that a machine transition does not change the meaning of the configuration: If
(t, S) ⇒ (t′ , S ′ ) then Readback(t, S) →∗ Readback(t′ , S ′ ). To prove this result we distinguish
cases depending on the transition rule applied from Table 6.
If the transition (t, S) ⇒ (t′ , S ′ ) is an instance of the rules (app), (let), (rec) or (pair2), the
result follows trivially since the readback is the same for both configurations: Readback(t, S) =
Readback(t′ , S ′ ).
If the transition (t, S) ⇒ (t′ , S ′ ) is an instance of rule (abs), (pair1), (zero) or (succ) then we
can prove that Readback(t, S) → Readback(t′ , S ′ ) as follows. We observe that by definition of the
readback function, in each of these cases there are terms t1 , t2 such that t1 → t2 , Readback(t, S) =
Readback(t1 , S) and Readback(t′ , S ′ ) = Readback(t2 , S). Finally, by induction on the definition
of the readback function, we show that if t1 → t2 then Readback(t1 , S) → Readback(t2 , S).
Having shown that a single transition (t, S) ⇒ (t′ , S ′ ) is sound, we derive the soundness
of the machine by induction on the length of the transition sequence: If (t, []) ⇒∗ (V, []) then
t = Readback(t, []) →∗ Readback(V, []) = V .
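Under a tuple representation of terms (tagged tuples, with stack markers 'LET'/'REC'/'RECP' standing for the extended terms), the readback function is a single fold over the stack. This is our own sketch of the definition used in the proof:

```python
def readback(t, S):
    # Rebuild a term from a configuration (t, S), by induction on the stack:
    # markers fold back into let/rec nodes, plain entries into applications.
    for e in S:
        if e[0] == 'LET':                      # LET(x, y, u)
            t = ('let', e[1], e[2], t, e[3])
        elif e[0] == 'REC':                    # REC(u, v, w)
            t = ('rec', t, e[1], e[2], e[3])
        elif e[0] == 'RECP':                   # REC'(t2, u, v, w)
            t = ('rec', ('pair', t, e[1]), e[2], e[3], e[4])
        else:                                  # a pending argument
            t = ('app', t, e)
    return t
```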
  ⊢ ⟨S(0), 0⟩ : N ⊗ N    ⊢ M(A) : A    f : A −◦ A ⊢ f : A −◦ A    ⊢ W : N ⊗ N −◦ N ⊗ N
  ─────────────────────────────────────────────────────────────────────────────────────
  f : A −◦ A ⊢ rec ⟨S(0), 0⟩ M(A) f W : A
  ─────────────────────────────────────────────────────────────────────────────────────
  ⊢ λf.rec ⟨S(0), 0⟩ M(A) f W : (A −◦ A) −◦ A

Figure 2: Type derivation for YA
5 Applications: Fixpoint Operators and PCF
We now study the relation between Lrec and languages with fixpoint operators, in particular PCF.
5.1 The Role of Conditionals
Recursive function definitions based on fixpoint operators rely on the use of a non-linear conditional that should discard the branch corresponding to an infinite computation. For instance, the
definition of factorial:
fact = Y (λf n.cond n 1 (n ∗ f (n − 1)))
relies on the fact that cond will return 1 when the input number is 0, and discard the non-terminating “else” branch. Enabling the occurrence of the (bound) variable used to iterate the
function (f in the above definition) in only one branch of the conditional is crucial for the definition
of interesting recursive programs. This is why denotational linear versions of PCF [50] allow stable
variables to be used non-linearly but not to be abstracted, since their only purpose is to obtain
fixpoints.
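The point about the conditional can be made concrete in any non-linear setting. In the Python sketch below (standing in for PCF, with Python's lazy `if` expression playing the role of cond), the fixpoint unfolds one step per call and the untaken branch is never evaluated:

```python
def Y(f):
    # Y f → f (Y f): one unfolding per call (the eta-expansion keeps it lazy)
    return f(lambda *args: Y(f)(*args))

# cond n 1 (n * f(n - 1)): the else-branch is discarded when n = 0,
# which is exactly what a purely linear calculus cannot do without
# extra erasing machinery.
fact = Y(lambda f: lambda n: 1 if n == 0 else n * f(n - 1))
```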
Fixpoint operators can be encoded in System Lrec : recall the term YN in Section 3.1. More
generally, for any type A we define the term
YA = λf.rec ⟨S(0), 0⟩ M(A) f W
where W represents the term (λx.let ⟨y, z⟩ = x in ⟨S(y), z⟩). For every type A, YA : (A−◦ A)−◦ A
is well-typed in System Lrec (see Figure 2). Note that, for any closed term f of type A −◦ A, we
have:
YA f = rec ⟨S(0), 0⟩ M(A) f W
     →∗ f (rec (let ⟨y, z⟩ = ⟨0, 0⟩ in ⟨S(y), z⟩) M(A) f W )
     →  f (rec ⟨S(0), 0⟩ M(A) f W ) = f (YA f )
Although YA behaves like a fixpoint operator, one cannot write useful recursive programs using
fixpoint operators alone (i.e. without a conditional): if we apply YA to a linear function f , we
obtain a non-normalisable term (recall the example in Section 3.1). Instead, in System Lrec ,
recursive functions, such as factorial, can be easily encoded using rec:
λn.pr2 (rec ⟨n, 0⟩ ⟨S(0), S(0)⟩ (λx.let ⟨t, u⟩ = x in F ) I)
where F = let ⟨t1 , t2 ⟩ = DN t in ⟨S t1 , mult u t2 ⟩ and DN is the duplicator term defined previously
(see Definition 2). Note that, although conditionals are not part of System Lrec syntax, reduction
rules for rec use pattern-matching. In the remainder of this section we show how we can encode
in System Lrec recursive functions defined using fixpoints.
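The recursor-based factorial above can be read as a loop over pairs. This Python model (ours) makes the role of the duplicated counter explicit:

```python
def factorial(n):
    # pr2(rec <n, 0> <S(0), S(0)> step I): the pair is <counter, accumulator>.
    counter, acc = 1, 1
    for _ in range(n):                 # rec <n, 0> drives n steps
        # F: duplicate the counter (D_N), increment one copy,
        # multiply the accumulator by the other
        counter, acc = counter + 1, acc * counter
    return acc                         # pr2 of the final pair
```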
5.2 Encoding PCF in System Lrec
PCF (Programming Language for Computable Functions) [51] can be seen as a minimalistic typed
functional programming language. It is an extension of the simply typed λ-calculus with numbers,
a fixpoint operator, and a conditional. Let us first recall its syntax. PCF is a variant of the typed
λ-calculus, with a basic type N for numbers and the following constants:
• n : N, for n = 0, 1, 2, . . .
• succ, pred : N → N
• iszero : N → N, such that
    iszero 0        →  0
    iszero (n + 1)  →  1
• for each type A, condA : N → A → A → A, such that
    condA 0 u v        →  u
    condA (n + 1) u v  →  v
• for each type A, YA : (A → A) → A, such that YA f → f (YA f ).
Definition 3 PCF types and environments are translated into System Lrec types using ⟨⟨·⟩⟩:
  ⟨⟨N⟩⟩                            =  N
  ⟨⟨A → B⟩⟩                        =  ⟨⟨A⟩⟩ −◦ ⟨⟨B⟩⟩
  ⟨⟨x1 : T1 , . . . , xn : Tn ⟩⟩   =  x1 : ⟨⟨T1 ⟩⟩, . . . , xn : ⟨⟨Tn ⟩⟩
Since System Lrec is Turing complete, it can simulate any PCF program. Furthermore, it is possible
to define an encoding in System Lrec for all the terms in PCF. We give a definition below, which is
inspired by the encoding of System T [6]. For convenience, we make the following abbreviations,
where the variables x1 and x2 are assumed fresh, and [x]t is defined below:
x1 ,x2
t
Cx:A
Axy t
= let hx1 , x2 i = DA x in t
= ([x]t)[y/x]
Definition 4 Let t be a PCF term such that fv(t) = {x1 , . . . , xn } and x1 : A1 , . . . , xn : An ⊢ t : A.
The compilation into System Lrec 2 is defined as: [x1^{A1}] . . . [xn^{An}]⟨⟨t⟩⟩, where ⟨⟨·⟩⟩ is defined in Table
7, and for a term t and a variable x, such that x ∈ fv(t), [x]t is inductively defined in the following
way:
  [x]x        =  x
  [x](S u)    =  S([x]u)
  [x](λy.u)   =  λy.[x]u
  [x^A](su)   =  C^{x1 ,x2}_{x:A} (A^x_{x1} s)(A^x_{x2} u)   if x ∈ fv(s) ∩ fv(u)
              =  ([x]s)u                                     if x ∉ fv(u)
              =  s([x]u)                                     if x ∉ fv(s)
Notice that [x]t is not defined for the entire syntax of System Lrec . The reason for this is
that, although other syntactic constructors (like recursors or pairs) may appear in t, they are the
outcome of ⟨⟨·⟩⟩ and therefore are closed terms, where x does not occur free.
Some observations about the encoding follow.
First, we remark that succ is not encoded as λx.Sx, since Lrec does not evaluate under λ or S.
We should not encode a divergent PCF program into a terminating term in Lrec . In particular,
the translation of condA (succ(YN I)) P Q is ⟨⟨condA ⟩⟩ (⟨⟨succ⟩⟩(⟨⟨YN ⟩⟩ I)) ⟨⟨P ⟩⟩ ⟨⟨Q⟩⟩, which diverges (if
we encoded succ as λx.Sx, then we would obtain ⟨⟨Q⟩⟩, which is not right).
Regarding abstractions and conditionals, the encoding is different from the one used for
System T in [6]. We cannot use the same encoding as in System L, where terms are erased by
“consuming them”, because PCF, unlike System T , is not strongly normalising. The technique
used here for erasing could have been used for System L, but erasing “by consuming” reflects the
work needed to erase a data structure.
2 We omit the types of variables when they do not play a role in the compilation.
  ⟨⟨n⟩⟩        =  S^n 0
  ⟨⟨succ⟩⟩     =  λn.rec ⟨n, 0⟩ (S 0) (λx.Sx) I
  ⟨⟨pred⟩⟩     =  λn.pr1 (rec ⟨n, 0⟩ ⟨0, 0⟩ (λx.let ⟨t, u⟩ = DN (pr2 x) in ⟨t, S u⟩) I)
  ⟨⟨iszero⟩⟩   =  λn.pr1 (rec ⟨n, 0⟩ ⟨0, S 0⟩ (λx.DN (pr2 x)) I)
  ⟨⟨YA ⟩⟩      =  λf.rec ⟨S(0), 0⟩ M(⟨⟨A⟩⟩) f (λx.let ⟨y, z⟩ = x in ⟨S(y), z⟩)
  ⟨⟨condA ⟩⟩   =  λtuv.rec ⟨t, 0⟩ u (λx.(rec ⟨0, 0⟩ I E(x, ⟨⟨A⟩⟩) I)v) I
  ⟨⟨x⟩⟩        =  x
  ⟨⟨uv⟩⟩       =  ⟨⟨u⟩⟩⟨⟨v⟩⟩
  ⟨⟨λx^A.t⟩⟩   =  λx.[x^A]⟨⟨t⟩⟩                                                   if x ∈ fv(t)
               =  λx.(rec ⟨0, 0⟩ I λy.E(E(y, ⟨⟨B⟩⟩ −◦ ⟨⟨B⟩⟩)x, ⟨⟨A⟩⟩) I)⟨⟨t⟩⟩     otherwise

Table 7: PCF compilation into Lrec
  t : N ⊢ ⟨t, 0⟩ : N ⊗ N    u : ⟨⟨A⟩⟩ ⊢ u : ⟨⟨A⟩⟩    v : ⟨⟨A⟩⟩ ⊢ V : ⟨⟨A⟩⟩ −◦ ⟨⟨A⟩⟩    ⊢ I : N ⊗ N −◦ N ⊗ N
  ─────────────────────────────────────────────────────────────────────────────────────────────────────
  t : N, u : ⟨⟨A⟩⟩, v : ⟨⟨A⟩⟩ ⊢ rec ⟨t, 0⟩ u V I : ⟨⟨A⟩⟩
  ─────────────────────────────────────────────────────────────────────────────────────────────────────
  ⊢ ⟨⟨condA ⟩⟩ : N −◦ ⟨⟨A⟩⟩ −◦ ⟨⟨A⟩⟩ −◦ ⟨⟨A⟩⟩

Figure 3: Type derivation for ⟨⟨condA ⟩⟩
The second case in the encoding for abstractions (see Table 7) uses a recursor on zero to
discard the argument, where the function parameter is λy.E(E(y, ⟨⟨B⟩⟩ −◦ ⟨⟨B⟩⟩)x, ⟨⟨A⟩⟩). The reason
for this is that one cannot use x directly as the function parameter because that might make the
term untypable, and just using E(x, ⟨⟨A⟩⟩) would make the types work, but could encode strongly
normalisable terms into terms with infinite reduction sequences (because E(x, ⟨⟨A⟩⟩) might not
terminate). For example, consider the encoding of (λxy.y)YN .
The translation of a typable PCF term is also typable in System Lrec (this is proved below). In
particular, for any type A, the term ⟨⟨condA ⟩⟩ is well-typed. In Figure 3, we show the type derivation
for the encoding of the conditional (we use V to represent the term λx.(rec ⟨0, 0⟩ I E(x, ⟨⟨A⟩⟩) I)v).
The type derivation for V depends on the fact that, if Γ ⊢ t : A, then for any type B, we have
Γ ⊢ E(t, A) : B −◦ B by Theorem 3. Note that the recursor on ⟨0, 0⟩ in V discards the remaining
recursion (corresponding to the branch of the conditional that is not needed), returning Iv.
We prove by induction that the encoding respects types. To make the induction work, we need
to define an intermediate system where certain variables (not yet affected by the encoding) may
occur non-linearly. More precisely, we consider an extension of System Lrec which allows variables
in a certain set X to appear non-linearly in a term. We call the extended system System L+X_rec;
it is defined by the rules in Table 8. Intuitively, if X is the set of free variables of t, then ⟨⟨t⟩⟩
will be a System Lrec term, except for the variables X = fv(t), which may occur non-linearly, and
[x1 ] . . . [xn ]⟨⟨t⟩⟩ will be a typed System Lrec term. We can prove the following results regarding
System L+X_rec.
Lemma 3 If Γ ⊢+X t : A, where dom(Γ) = fv(t) and x ∈ X ⊆ fv(t), then Γ ⊢+X′ [x]t : A, where
X ′ = X \ {x}.
Proof: By induction on t, using the fact that x : A ⊢+∅ DA x : A ⊗ A. We show the cases for
variable and application.
• t ≡ x. Then [x]x = x, and using the axiom we obtain both x : A ⊢+{x} x : A and
x : A ⊢+∅ x : A.
• t ≡ uv, and x ∈ fv(u), x ∉ fv(v) (the case where x ∉ fv(u), x ∈ fv(v) is similar). Then
[x](uv) = ([x]u)v and Γ ⊢+X uv : A. Let Γ1 = Γ|fv(u) and Γ2 = Γ|fv(v) . Then Γ1 ⊢+X u :
Axiom and Structural Rules:

  x : A ⊢+X x : A   (Axiom)

  Γ ⊢+X t : B
  ────────────────────── (Weakening), x ∈ X
  Γ, x : A ⊢+X t : B

  Γ, x : A, y : B, ∆ ⊢+X t : C
  ──────────────────────────── (Exchange)
  Γ, y : B, x : A, ∆ ⊢+X t : C

  Γ, x : A, x : A ⊢+X t : B
  ───────────────────────── (Contraction), x ∈ X
  Γ, x : A ⊢+X t : B

Logical Rules:

  Γ, x : A ⊢+X t : B
  ────────────────────── (−◦Intro)
  Γ ⊢+X λx.t : A −◦ B

  Γ ⊢+X1 t : A −◦ B        ∆ ⊢+X2 u : A
  ───────────────────────────────────── (−◦Elim)
  Γ, ∆ ⊢+(X1 ∪X2 ) tu : B

  Γ ⊢+X1 t : A        ∆ ⊢+X2 u : B
  ──────────────────────────────────── (⊗Intro)
  Γ, ∆ ⊢+(X1 ∪X2 ) ⟨t, u⟩ : A ⊗ B

  Γ ⊢+X1 t : A ⊗ B        x : A, y : B, ∆ ⊢+X2 u : C
  ────────────────────────────────────────────────── (⊗Elim)
  Γ, ∆ ⊢+(X1 ∪X2 ) let ⟨x, y⟩ = t in u : C

Numbers:

  ⊢+∅ 0 : N   (Zero)

  Γ ⊢+X t : N
  ────────────────── (Succ)
  Γ ⊢+X S(t) : N

  Γ ⊢+X1 t : N ⊗ N    Θ ⊢+X2 u : A    ∆ ⊢+X3 v : A −◦ A    Σ ⊢+X4 w : N ⊗ N −◦ N ⊗ N
  ─────────────────────────────────────────────────────────────────────────────────── (Rec)
  Γ, Θ, ∆, Σ ⊢+(X1 ∪X2 ∪X3 ∪X4 ) rec t u v w : A

Table 8: Typing rules for System L+X_rec
B −◦ A and Γ2 ⊢+X v : B, where Γ1 and Γ2 can only share variables in X. By induction hypothesis, Γ1 ⊢+X′ [x]u : B −◦ A. Also, since x ∉ fv(v) and dom(Γ2) = fv(v), we have Γ2 ⊢+X′ v : B. Therefore Γ ⊢+X′ ([x]u)v : A.
• t ≡ uv, x ∈ fv(u), and x ∈ fv(v). Let Γ1 = Γ|fv(u)\{x} and Γ2 = Γ|fv(v)\{x}, and assume C is the type associated to x in Γ. Then Γ1, x : C ⊢+X u : B −◦ A and Γ2, x : C ⊢+X v : B. By induction hypothesis, Γ1, x : C ⊢+X′ [x]u : B −◦ A and Γ2, x : C ⊢+X′ [x]v : B. Thus Γ1, x1 : C ⊢+X′ ([x]u)[x1/x] : B −◦ A and Γ2, x2 : C ⊢+X′ ([x]v)[x2/x] : B. Therefore Γ1, x1 : C, Γ2, x2 : C ⊢+X′ (Axx1 u)(Axx2 v) : A. Also x : C ⊢+∅ Dx : C ⊗ C, therefore Γ1, Γ2, x : C ⊢+X′ let ⟨x1, x2⟩ = Dx in (Axx1 u)(Axx2 v) : A.
Lemma 4 If t is a PCF term of type A, then ⟨⟨Γ|fv(t)⟩⟩ ⊢+fv(t) ⟨⟨t⟩⟩ : ⟨⟨A⟩⟩, where the notation Γ|X denotes the restriction of Γ to the variables in X.
Proof: By induction on the PCF type derivation for t, as done for System T in [6].
Theorem 14 If t is a PCF term of type A under a set of assumptions Γ for its free variables {x1, . . . , xn}, then ⟨⟨Γ|fv(t)⟩⟩ ⊢ [x1] . . . [xn]⟨⟨t⟩⟩ : ⟨⟨A⟩⟩.
Proof: By induction on the number of free variables of t, using Lemmas 3 and 4.
Using the encodings given above, it is possible to simulate the evaluation of a PCF program
in System Lrec . More precisely, if t is a closed PCF term of type N, which evaluates to V under a
  V ⇓PCF V, if V is a value

  from s ⇓PCF V′ and V′ t ⇓PCF V infer s t ⇓PCF V, if s is not a value

  from u[t/x] ⇓PCF V infer (λx.u) t ⇓PCF V

  from t ⇓PCF n infer succ t ⇓PCF n + 1

  from t ⇓PCF 0 infer pred t ⇓PCF 0

  from t ⇓PCF n + 1 infer pred t ⇓PCF n

  from t ⇓PCF 0 infer iszero t ⇓PCF 0

  from t ⇓PCF n + 1 infer iszero t ⇓PCF 1

  from t ⇓PCF 0 and u ⇓PCF V infer condA t u v ⇓PCF V

  from t ⇓PCF n + 1 and v ⇓PCF V infer condA t u v ⇓PCF V

  from f (YA f) ⇓PCF V infer YA f ⇓PCF V

Table 9: CBN evaluation for PCF
CBN semantics for PCF [51], then the encoding of t reduces in System Lrec to the encoding of V ,
and evaluates under a CBN semantics to a value which is equal to the encoding of V . In Table 9
we recall the CBN rules for PCF: t ⇓PCF V means that the closed term t evaluates to the value V
(a value is either a number, a λ-abstraction, a constant, or a partially applied conditional).
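As an illustration of how directly these big-step rules transcribe into an interpreter, the following Python sketch (our own illustration, not part of the paper; constants such as succ are treated here as term formers rather than first-class values, so the partially applied conditional cases do not arise) evaluates a small PCF fragment under CBN.

```python
# Minimal CBN evaluator for a PCF fragment, transcribing the rules of Table 9.
# Terms are tagged tuples: ("num", n), ("var", x), ("lam", x, body),
# ("app", s, t), ("succ", t), ("pred", t), ("iszero", t),
# ("cond", t, u, v), ("fix", f).

def subst(t, x, s):
    """t[s/x]; sufficient for closed arguments, as in the CBN rules."""
    tag = t[0]
    if tag == "var":
        return s if t[1] == x else t
    if tag == "lam":
        return t if t[1] == x else ("lam", t[1], subst(t[2], x, s))
    if tag == "num":
        return t
    return (tag,) + tuple(subst(a, x, s) for a in t[1:])

def eval_cbn(t):
    tag = t[0]
    if tag in ("num", "lam"):
        return t                                   # a value evaluates to itself
    if tag == "app":
        f = eval_cbn(t[1])                         # evaluate the function position
        assert f[0] == "lam"
        return eval_cbn(subst(f[2], f[1], t[2]))   # CBN: argument passed unevaluated
    if tag == "succ":
        return ("num", eval_cbn(t[1])[1] + 1)
    if tag == "pred":
        n = eval_cbn(t[1])[1]
        return ("num", 0 if n == 0 else n - 1)     # pred 0 = 0
    if tag == "iszero":
        return ("num", 0 if eval_cbn(t[1])[1] == 0 else 1)
    if tag == "cond":
        branch = t[2] if eval_cbn(t[1])[1] == 0 else t[3]
        return eval_cbn(branch)
    if tag == "fix":
        return eval_cbn(("app", t[1], t))          # YA f evaluates as f (YA f)
    raise ValueError(tag)

# Example: (\x. cond x 1 (pred x)) (succ 0) evaluates to 0
prog = ("app",
        ("lam", "x", ("cond", ("var", "x"), ("num", 1), ("pred", ("var", "x")))),
        ("succ", ("num", 0)))
assert eval_cbn(prog) == ("num", 0)
```

Note how the "fix" case mirrors the last rule of the table: the fixpoint is unfolded once and evaluation proceeds, which only terminates when a conditional eventually discards the recursive branch.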
Lemma 5 (Substitution) Let t be a term in System Lrec.
1. If x ∈ fv(t) and fv(u) = ∅, then ⟨⟨t⟩⟩[⟨⟨u⟩⟩/x] = ⟨⟨t[u/x]⟩⟩.
2. If x ∈ fv(t), then ([x]t)[u/x] →∗ t[u/x].
Proof: By induction on t.
Lemma 6 Let t be a closed PCF term. If t ⇓PCF V, then ⟨⟨t⟩⟩ →∗ ⟨⟨V⟩⟩.
Proof: By induction on the evaluation relation, using a technique similar to the one used for System T in [6]. Here we show the main steps of reduction for condA t u v, where u and v are closed terms by assumption.
• If t ⇓PCF 0:
  ⟨⟨condA t u v⟩⟩ = ⟨⟨condA⟩⟩ ⟨⟨t⟩⟩ ⟨⟨u⟩⟩ ⟨⟨v⟩⟩ →∗ ⟨⟨condA⟩⟩ 0 ⟨⟨u⟩⟩ ⟨⟨v⟩⟩ (I.H.) →∗ ⟨⟨u⟩⟩ →∗ ⟨⟨V⟩⟩ (I.H.)
• If t ⇓PCF n + 1, let v′ be the term λx.(rec ⟨0, 0⟩ I E(x, ⟨⟨A⟩⟩) I)⟨⟨v⟩⟩:
  ⟨⟨condA t u v⟩⟩ = ⟨⟨condA⟩⟩ ⟨⟨t⟩⟩ ⟨⟨u⟩⟩ ⟨⟨v⟩⟩ →∗ ⟨⟨condA⟩⟩ (S^{n+1} 0) ⟨⟨u⟩⟩ ⟨⟨v⟩⟩ (I.H.) →∗ rec ⟨S^{n+1} 0, 0⟩ ⟨⟨u⟩⟩ v′ I →∗ I⟨⟨v⟩⟩ → ⟨⟨v⟩⟩ →∗ ⟨⟨V⟩⟩ (I.H.)
For application, we rely on the substitution lemmas above. Note that for an application uv, where u is a constant, we rely on the correctness of the encodings of constants, which can easily be proved by induction. For example, in the case of succ it is trivial to prove that, if t is a number S^n 0 in Lrec (n ≥ 0), then rec ⟨t, 0⟩ (S 0) (λx.Sx) I →∗ S^{n+1} 0.
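The reduction behaviour used here can be modelled concretely. In the Python sketch below (our illustration; the unfolding rule for rec is extrapolated from the reduction steps displayed in this section, not quoted from the paper's formal rules), numerals S^n 0 are machine integers, and rec on a pair ⟨n, m⟩ returns the base u at 0 and otherwise applies the step v to the recursive call on the w-adjusted pair:

```python
def rec(pair, u, v, w):
    """Lrec-style recursor: base u at <0, m>, otherwise apply v to the
    recursive call on the pair transformed by w."""
    n, m = pair
    if n == 0:
        return u
    return v(rec(w((n - 1, m)), u, v, w))

I = lambda p: p  # the identity, written I in the paper

def succ_enc(t):
    """Encoding of succ t: rec <t, 0> (S 0) (\\x. S x) I."""
    return rec((t, 0), 1, lambda x: x + 1, I)

assert succ_enc(0) == 1 and succ_enc(4) == 5
```

With these conventions, succ_enc(n) applies the step function n times to the base S 0, yielding S^{n+1} 0 as claimed.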
Theorem 15 Let t be a closed PCF term. If t ⇓PCF V, then there exists V′ such that ⟨⟨t⟩⟩ ⇓ V′ and V′ =Lrec ⟨⟨V⟩⟩.
Proof: By Lemma 6, t ⇓PCF V implies ⟨⟨t⟩⟩ →∗ ⟨⟨V⟩⟩. By Theorem 11, ⟨⟨t⟩⟩ ⇓ V′. Therefore, since ⇓ ⊂ →∗ and the system is confluent (Theorem 1), V′ =Lrec ⟨⟨V⟩⟩.
Lemma 7 If t ⇓ V and t =Lrec u, then u ⇓ V ′ and V =Lrec V ′ .
Proof: By transitivity of the equality relation.
Theorem 16 Let t be a closed PCF term. If ⟨⟨t⟩⟩ ⇓ V, then there exists V′ such that t ⇓PCF V′ and ⟨⟨V′⟩⟩ =Lrec V.
Proof: By induction on the evaluation relation, using Lemma 7. Note that, if t is a value different from a partially applied conditional, the result follows because t = V′ and ⟨⟨t⟩⟩ is also a value, i.e. ⟨⟨t⟩⟩ = V, therefore ⟨⟨t⟩⟩ = ⟨⟨V′⟩⟩ = V. If t is an application uv, then ⟨⟨t⟩⟩ = ⟨⟨u⟩⟩⟨⟨v⟩⟩, therefore ⟨⟨u⟩⟩⟨⟨v⟩⟩ ⇓ V if ⟨⟨u⟩⟩ ⇓ λx.s and s[⟨⟨v⟩⟩/x] ⇓ V. If ⟨⟨u⟩⟩ ⇓ λx.s, then by I.H. u ⇓PCF W and ⟨⟨W⟩⟩ =Lrec λx.s. Note that W is a value of arrow type, whose compilation equals an abstraction, therefore W = λx.s′, pred, succ, iszero, Y, cond, cond p or cond p q.
• If W = λx.s′, we have two cases:
– x ∈ fv(s′): then ⟨⟨W⟩⟩ = λx.[x]⟨⟨s′⟩⟩ =Lrec λx.s, thus [x]⟨⟨s′⟩⟩ =Lrec s. Since s[⟨⟨v⟩⟩/x] ⇓ V and s[⟨⟨v⟩⟩/x] =Lrec ([x]⟨⟨s′⟩⟩)[⟨⟨v⟩⟩/x] then, by Lemma 5.2, ([x]⟨⟨s′⟩⟩)[⟨⟨v⟩⟩/x] →∗ ⟨⟨s′⟩⟩[⟨⟨v⟩⟩/x], which, by Lemma 5.1, equals ⟨⟨s′[v/x]⟩⟩; therefore (by Lemma 7) ⟨⟨s′[v/x]⟩⟩ ⇓ V′′ and V =Lrec V′′. By I.H., s′[v/x] ⇓PCF V′ and ⟨⟨V′⟩⟩ =Lrec V′′, therefore uv ⇓PCF V′ and ⟨⟨V′⟩⟩ =Lrec V′′ =Lrec V.
– x ∉ fv(s′): let v′ represent the term λy.E(E(y, ⟨⟨B⟩⟩ −◦ ⟨⟨B⟩⟩)x, ⟨⟨A⟩⟩). Then ⟨⟨W⟩⟩ = λx.(rec ⟨0, 0⟩ I v′ I)⟨⟨s′⟩⟩ =Lrec λx.s, therefore (rec ⟨0, 0⟩ I v′ I)⟨⟨s′⟩⟩ =Lrec s. Note that s[⟨⟨v⟩⟩/x] = (rec ⟨0, 0⟩ I v′[⟨⟨v⟩⟩/x] I)⟨⟨s′⟩⟩ and (rec ⟨0, 0⟩ I v′[⟨⟨v⟩⟩/x] I)⟨⟨s′⟩⟩ ⇓ V if ⟨⟨s′⟩⟩ ⇓ V; then, since s′[v/x] = s′, by I.H., s′ ⇓PCF V′ and ⟨⟨V′⟩⟩ =Lrec V, therefore uv ⇓PCF V′ and ⟨⟨V′⟩⟩ =Lrec V as required.
• W = succ: then ⟨⟨W⟩⟩ = λx.rec ⟨x, 0⟩ (S 0) (λx.Sx) I =Lrec λx.s, hence rec ⟨x, 0⟩ (S 0) (λx.Sx) I =Lrec s. Then s[⟨⟨v⟩⟩/x] = rec ⟨⟨⟨v⟩⟩, 0⟩ (S 0) (λx.Sx) I and s[⟨⟨v⟩⟩/x] ⇓ V if ⟨⟨v⟩⟩ ⇓ W′, in which case we have two possibilities:
– W′ = 0: then rec ⟨⟨⟨v⟩⟩, 0⟩ (S 0) (λx.Sx) I ⇓ V if S 0 ⇓ V, in which case V = S 0. By I.H., v ⇓PCF W′′ and ⟨⟨W′′⟩⟩ =Lrec 0, therefore W′′ = 0 (0 is the only value of type N that compiles to 0). Therefore succ v ⇓PCF 1 and ⟨⟨1⟩⟩ = S 0 =Lrec V.
– W′ = Sp: then rec ⟨⟨⟨v⟩⟩, 0⟩ (S 0) (λx.Sx) I ⇓ V if (λx.Sx)(rec ⟨p, 0⟩ (S 0) (λx.Sx) I) ⇓ V. By I.H., v ⇓PCF W′′ and ⟨⟨W′′⟩⟩ =Lrec Sp, thus W′′ = n + 1 (W′′ is a number in PCF and it must be different from 0, otherwise its compilation would be 0) and p =Lrec S^n 0. Note that (λx.Sx)(rec ⟨S^n 0, 0⟩ (S 0) (λx.Sx) I) →∗ S^{n+2} 0, therefore, by Lemma 7, V =Lrec S^{n+2} 0. Now it suffices to notice that succ v ⇓PCF n + 2 and ⟨⟨n + 2⟩⟩ = S^{n+2} 0 =Lrec V as required.
• For pred and iszero, the proof is similar to the case of succ.
• If W = YA: let w′ represent the term λy.let ⟨y1, y2⟩ = y in ⟨S(y1), y2⟩. Then ⟨⟨W⟩⟩ = λx.rec ⟨S(0), 0⟩ M(⟨⟨A⟩⟩) x w′ =Lrec λx.s, therefore rec ⟨S(0), 0⟩ M(⟨⟨A⟩⟩) x w′ =Lrec s. Then, since s[⟨⟨v⟩⟩/x] = rec ⟨S(0), 0⟩ M(⟨⟨A⟩⟩) ⟨⟨v⟩⟩ w′, s[⟨⟨v⟩⟩/x] ⇓ V if ⟨⟨v⟩⟩(⟨⟨YA⟩⟩ ⟨⟨v⟩⟩) ⇓ V (and ⟨⟨v⟩⟩(⟨⟨YA⟩⟩ ⟨⟨v⟩⟩) = ⟨⟨v(YA v)⟩⟩). Thus, by I.H., v(YA v) ⇓PCF V′ and ⟨⟨V′⟩⟩ =Lrec V, therefore YA v ⇓PCF V′ and ⟨⟨V′⟩⟩ =Lrec V as required.
• W = condA: let v′ represent the term λz.(rec ⟨0, 0⟩ I E(z, ⟨⟨A⟩⟩) I)q. Then ⟨⟨W⟩⟩ = λxpq.rec ⟨x, 0⟩ p v′ I =Lrec λx.s, therefore λpq.rec ⟨x, 0⟩ p v′ I =Lrec s. Then s[⟨⟨v⟩⟩/x] = λpq.rec ⟨⟨⟨v⟩⟩, 0⟩ p v′ I and s[⟨⟨v⟩⟩/x] ⇓ λpq.rec ⟨⟨⟨v⟩⟩, 0⟩ p v′ I. Note that condA v ⇓PCF condA v, because it is a value, and ⟨⟨condA v⟩⟩ =Lrec λpq.rec ⟨⟨⟨v⟩⟩, 0⟩ p v′ I.
• W = condA p1: let v′ represent the term λz.(rec ⟨0, 0⟩ I E(z, ⟨⟨A⟩⟩) I)q. Then ⟨⟨W⟩⟩ = (λpxq.rec ⟨p, 0⟩ x v′ I)⟨⟨p1⟩⟩ =Lrec λxq.rec ⟨⟨⟨p1⟩⟩, 0⟩ x v′ I =Lrec λx.s, therefore λq.rec ⟨⟨⟨p1⟩⟩, 0⟩ x v′ I =Lrec s. Then s[⟨⟨v⟩⟩/x] = λq.rec ⟨⟨⟨p1⟩⟩, 0⟩ ⟨⟨v⟩⟩ v′ I and s[⟨⟨v⟩⟩/x] ⇓ λq.rec ⟨⟨⟨p1⟩⟩, 0⟩ ⟨⟨v⟩⟩ v′ I. Note that condA p1 v ⇓PCF condA p1 v, because it is a value, and ⟨⟨condA p1 v⟩⟩ =Lrec λq.rec ⟨⟨⟨p1⟩⟩, 0⟩ ⟨⟨v⟩⟩ v′ I.
• W = condA p1 p2: let v′ represent the term λz.(rec ⟨0, 0⟩ I E(z, ⟨⟨A⟩⟩) I)x. Then ⟨⟨W⟩⟩ = (λpqx.rec ⟨p, 0⟩ q v′ I)⟨⟨p1⟩⟩ ⟨⟨p2⟩⟩ =Lrec λx.rec ⟨⟨⟨p1⟩⟩, 0⟩ ⟨⟨p2⟩⟩ v′ I =Lrec λx.s, therefore s =Lrec rec ⟨⟨⟨p1⟩⟩, 0⟩ ⟨⟨p2⟩⟩ v′ I. Then s[⟨⟨v⟩⟩/x] = rec ⟨⟨⟨p1⟩⟩, 0⟩ ⟨⟨p2⟩⟩ v′[⟨⟨v⟩⟩/x] I and s[⟨⟨v⟩⟩/x] ⇓ V if ⟨⟨p1⟩⟩ ⇓ W′, in which case we have two possibilities:
– W′ = 0: then rec ⟨⟨⟨p1⟩⟩, 0⟩ ⟨⟨p2⟩⟩ v′[⟨⟨v⟩⟩/x] I ⇓ V if ⟨⟨p2⟩⟩ ⇓ V. By I.H., p1 ⇓PCF W′′ and ⟨⟨W′′⟩⟩ =Lrec 0, therefore W′′ = 0 (0 is the only value of type N that compiles to 0). Also by I.H., p2 ⇓PCF V′ and ⟨⟨V′⟩⟩ =Lrec V, therefore condA p1 p2 v ⇓PCF V′, thus uv ⇓PCF V′, and ⟨⟨V′⟩⟩ =Lrec V as required.
– W′ = Sp′: then rec ⟨⟨⟨p1⟩⟩, 0⟩ ⟨⟨p2⟩⟩ v′[⟨⟨v⟩⟩/x] I ⇓ V if ⟨⟨v⟩⟩ ⇓ V. By I.H., p1 ⇓PCF W′′ and ⟨⟨W′′⟩⟩ =Lrec Sp′, thus W′′ = n + 1 (W′′ is a number in PCF and it must be different from 0, otherwise its compilation would be 0). Also by I.H., v ⇓PCF V′ and ⟨⟨V′⟩⟩ =Lrec V, therefore condA p1 p2 v ⇓PCF V′ and ⟨⟨V′⟩⟩ =Lrec V as required.
This completes the proof of soundness and completeness of the encoding.
Note that the terms of the form rec ⟨0, 0⟩ I t I used in the encoding of conditionals and λ-abstractions allow us to discard terms without evaluating them. This is a feature of the encoding: otherwise, terminating programs in PCF could be translated to non-terminating programs in System Lrec. This differs from the definition of erasing given in Section 3.1, where terms are consumed and not discarded (in pure linear systems functions do not discard their arguments). However, allowing terms to be discarded without being evaluated is crucial when defining recursion based on fixpoints.
Once a PCF term is compiled into Lrec it can be implemented using the techniques in Section
4, thus we obtain a new stack machine implementation of PCF.
6  Closed Reduction vs Closed Construction in Calculi with Recursion
Both System L and System Lrec use a closed reduction strategy that waits for arguments to become closed before firing redexes. We now look in more detail at the implications of using a closed reduction strategy, instead of imposing that functions used in iteration/recursion be closed by construction (a viable alternative in the presence of linearity).
As mentioned in the Introduction, the closed reduction strategy for the λ-calculus avoids α-conversion while allowing reductions inside abstractions, thus achieving more sharing of computation. When applied to System L and System Lrec, it imposes certain conditions on reduction rules; in particular, iterated functions should be closed. The intuition here is that we should only copy closed terms, because then all the resources are there. In linear logic terms, we can promote a term that is closed.
The closed reduction strategy waits, before reducing an iterator/recursor term, until the iterated functions are closed. One can impose a stronger constraint on the construction of terms, that is, constrain iterators/recursors to be closed on construction (i.e., we have a syntactic constraint that only terms without free variables are used in this context). For System L, to follow the closed-construction approach one imposes an extra condition on the iterated function v when defining iterators:
  iter t u v,  if fv(t) ∩ fv(u) = ∅ and fv(v) = ∅
For System Lrec , one imposes an extra condition on v and w:
  rec t u v w,  if fv(t) ∩ fv(u) = ∅ and fv(vw) = ∅
In the rest of this section we compare the computation power of linear calculi with closed
reduction vs closed construction. We consider first calculi with bounded recursion (iterators) and
then unbounded recursion.
6.1  Closed Reduction/Closed Construction and Iteration
Dal Lago [19] defines a linear λ-calculus with bounded iteration that encodes exactly the set of primitive recursive functions following the closed-construction approach. A similar system allowing iterators to be open at construction, but imposing a closed condition on reduction, makes it possible to encode more than the primitive recursive functions, and in particular allows the encoding of the Ackermann function, as shown in [5]. Thus, imposing a closed-at-construction restriction on iterators clearly has an impact in the presence of linearity.
For Gödel’s System T, the fact that we do not allow iterators to be open at construction does not affect the set of definable functions. If we define v = λxy.y(xy), then each iterator term iter n b f in System T, where f may be an open term, can be translated into the typable term (iter n (λx.b) v)f, where x ∉ fv(b). It is easy to see that iter n b f and (iter n (λx.b) v)f have the same normal form f(. . . (f b)). It is worth remarking that we rely on a non-linear term v to get this result. Indeed, iterating v is essentially equivalent to constructing a Church numeral.
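This translation is easy to check concretely. The Python sketch below (our own notation; System T terms are modelled set-theoretically as Python functions) verifies that iter n b f and (iter n (λx.b) v)f with v = λxy.y(xy) compute the same result:

```python
def iter_t(n, b, f):
    """Goedel's System T iterator: f applied n times to b."""
    return b if n == 0 else f(iter_t(n - 1, b, f))

# v = \x y. y (x y): a closed term, but non-linear (y occurs twice)
v = lambda x: lambda y: y(x(y))

def iter_closed(n, b, f):
    """(iter n (\\x. b) v) f: the iterated function v is closed;
    iterating v builds a Church-numeral-like term applied to f at the end."""
    return iter_t(n, lambda _x: b, v)(f)

f = lambda x: x + 3
assert iter_t(3, 1, f) == iter_closed(3, 1, f) == 10  # f(f(f(1)))
```

The closed iterator accumulates λy.y(y(. . . (b))), i.e. a Church numeral waiting for f, which is exactly the non-linear trick the paragraph above describes.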
For a linear system with iteration such as System L, although some functions are naturally
defined using an open function, for example: mult = λmn.iter m 0 (add n), one can encode them
using a closed-at-construction iteration. In general, an iterator with an open function where the
free variables are of type N can be encoded using a closed-at-construction iterator, as follows.
Consider iter t u v, where v is open, with free variables x1, . . . , xk of type N. Then let
  F ≡ let Cx′1 = ⟨x1, x′′1⟩ in . . . let Cx′k = ⟨xk, x′′k⟩ in ⟨vx0, x′′1, . . . , x′′k⟩
  W ≡ λx.let x = ⟨x0, x′1, . . . , x′k⟩ in F
Then we simulate iter t u v using a closed iterator as follows: π1(iter t ⟨u, x1, . . . , xk⟩ W).
This technique can also be applied to open functions where the free variables are of type τ , for
τ generated by the following grammar: τ ::= N | τ ⊗ τ . More generally, open functions where the
free variables have base type can be encoded when we consider iteration closed-at-construction.
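The idea behind this translation can be illustrated concretely. In the Python sketch below (our own illustration, not the paper's syntax), the closed iterator only ever applies a closed step function; an open step such as add n is simulated by threading the free variable through the iteration state and duplicating it at each step, mirroring the mult example above:

```python
def iter_closed(n, u, w):
    """Closed-at-construction iterator: w captures no outside state."""
    acc = u
    for _ in range(n):
        acc = w(acc)
    return acc

def mult(m, n):
    """Simulates iter m 0 (add n): the open variable n is threaded
    through the state and duplicated each round (it has base type N)."""
    def w(state):
        acc, x = state
        return (acc + x, x)  # duplicate x for the next round
    return iter_closed(m, (0, n), w)[0]  # first projection, as in pi_1
```

The duplication in w is only legitimate because the threaded value has base type, which is exactly the restriction stated in the paragraph above.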
6.2  Closed Construction and Unbounded Recursion
We now consider what happens when we use the closed-at-construction approach in a linear system
with unbounded recursion such as System Lrec .
Notice that the encoding of µf in Lrec given in Section 3.1 is a term closed-at-construction. Since
all the primitive recursive functions are definable using closed-at-construction iterators, which are
trivially encoded using closed Lrec recursors, we conclude that imposing a closed-at-construction
condition on System Lrec still gives a Turing complete system.
Note however that, although System Lrec can encode all the computable functions, that does
not mean one can encode all the computational behaviours. For example for any closed function
f , one can encode in Lrec a term Y , such that, Y f → f (Y f ). However, this relies on the fact that
one can copy any closed function f , which can be done both in System L and System Lrec with
closed reduction, but so far there is no known encoding when one imposes a closed-at-construction
condition.
7  Conclusions
This paper completes a line of work investigating the power of linear functions, from the set
of primitive recursive functions to the full set of computable functions, with a strong focus on
Turing complete systems based on linear calculi. In previous work, we investigated linear primitive
recursive functions, and a linear version of Gödel’s System T . Here, we extended these notions to
general recursion, using iteration and minimisation (Lµ ) and, alternatively, unbounded recursion
(Lrec ). System Lrec is a syntactically linear calculus, but only the fragment without the recursor is
operationally linear. The linear recursor allows us to encode duplicating and erasing, thus playing
a similar role to the exponentials in linear logic. It encompasses bounded recursion (iteration) and
minimisation in just one operator. Summarising, a typed linear λ-calculus with bounded iteration
(System L) is not Turing complete, but replacing the iterator with an unbounded recursor (Lrec ),
or adding a minimiser (Lµ ), yields a universal system.
Linear calculi have been successfully used to characterise complexity classes, for instance, as a
consequence of Dal Lago’s results [19], we know that a closed-by-construction discipline in System
L gives exactly the set of PR functions, whereas closed reduction recovers the power of System T .
Interestingly, a closed-construction discipline does not weaken Lrec (the encoding of µ is closed).
The encoding of PCF in Lrec is type-respecting, and Lrec seems a potentially useful intermediate language for compilation. The meaning of the linear recursor will be further analysed in a denotational setting in future work, and the practical impact of these results is currently being investigated within the language Lilac [46].
References
[1] S. Abramsky. Computational Interpretations of Linear Logic. Theoretical Computer Science,
111:3–57, 1993.
[2] S. Alves. Linearisation of the Lambda Calculus. PhD thesis, Faculty of Science - University
of Porto, April 2007.
[3] S. Alves, M. Fernández, M. Florido, and I. Mackie. The power of linear functions. In Computer
Science Logic, volume 4207 of LNCS, pages 119–134. Springer, 2006.
[4] S. Alves, M. Fernández, M. Florido, and I. Mackie. Linear recursive functions. In Rewriting,
Computation and Proof, volume 4600 of LNCS, pages 182–195. Springer, 2007.
[5] S. Alves, M. Fernández, M. Florido, and I. Mackie. The power of closed reduction strategies.
ENTCS, 174(10):57–74, 2007.
[6] S. Alves, M. Fernández, M. Florido, and I. Mackie. Gödel’s system T revisited. Theor.
Comput. Sci., 411(11-13):1484–1500, 2010.
[7] S. Alves, M. Fernández, M. Florido, and I. Mackie. Linearity and recursion in a typed
lambda-calculus. In PPDP, 2011.
[8] S. Alves and M. Florido. Weak linearization of the lambda calculus. Theoretical Computer
Science, 342(1):79–103, 2005.
[9] A. Asperti and L. Roversi. Intuitionistic light affine logic. ACM Transactions on Computational Logic, 3(1):137–175, 2002.
[10] D. Baelde and D. Miller. Least and greatest fixed points in linear logic. In LPAR 2007: Logic
for Programming, Artificial Intelligence and Reasoning. Springer, 2007.
[11] P. Baillot and V. Mogbil. Soft lambda-calculus: a language for polynomial time computation.
In Proc. FOSSACS’04, volume 2987 of LNCS, pages 27–41. Springer-Verlag, 2004.
[12] H. P. Barendregt. The Lambda Calculus: Its Syntax and Semantics, volume 103 of Studies in
Logic and the Foundations of Mathematics. North-Holland, 1984.
[13] U. Berger. Minimisation vs. recursion on the partial continuous functionals. In In the Scope
of Logic, Methodology and Philosophy of Science, volume 1 of Synthese Library 316, pages
57–64. Kluwer, 2002.
[14] U. Berger and H. Schwichtenberg. An inverse of the evaluation functional for typed lambdacalculus. In Proc. Logic in Computer Science (LICS’91), pages 203–211. IEEE Computer
Society, 1991.
[15] G. M. Bierman, A. M. Pitts, and C. V. Russo. Operational properties of Lily, a polymorphic
linear lambda calculus with recursion. In Workshop on Higher Order Operational Techniques
in Semantics, volume 41 of ENTCS, pages 70–88. Elsevier, 2000.
[16] G. Boudol, P.-L. Curien, and C. Lavatelli. A semantics for lambda calculi with resources.
MSCS, 9(4):437–482, 1999.
[17] T. Braüner. The Girard translation extended with recursion. In Computer Science Logic,
8th International Workshop, CSL’94, Kazimierz, Poland, volume 933 of Lecture Notes in
Computer Science, pages 31–45. Springer, 1994.
[18] P.-L. Curien. An abstract framework for environment machines. Theor. Comput. Sci.,
82(2):389–402, 1991.
[19] U. Dal Lago. The geometry of linear higher-order recursion. In Proc. Logic in Computer
Science (LICS’05), pages 366–375, June 2005.
[20] N. Dershowitz. Term rewriting systems by “terese”. Theory Pract. Log. Program., 5(3):395–399, 2005.
[21] J. Egger, R. E. Møgelberg, and A. Simpson. Enriching an effect calculus with linear types. In
Computer Science Logic, 23rd international Workshop, CSL 2009, 18th Annual Conference of
the EACSL, Coimbra, Portugal, September 7-11, 2009. Proceedings, volume 5771 of Lecture
Notes in Computer Science, pages 240–254. Springer, 2009.
[22] T. Ehrhard and L. Regnier. The differential lambda-calculus. Theor. Comput. Sci., 309(1-3):1–41, 2003.
[23] M. Fernández, I. Mackie, and F.-R. Sinot. Closed reduction: explicit substitutions without
alpha conversion. MSCS, 15(2):343–381, 2005.
[24] M. Fernández, I. Mackie, and F.-R. Sinot. Lambda-calculus with director strings. Applicable
Algebra in Engineering, Communication and Computing, 15(6):393–437, 2005.
[25] M. Fernández and N. Siafakas. New developments in environment machines. Electr. Notes
Theor. Comput. Sci., 237:57–73, 2009.
[26] D. R. Ghica. Geometry of synthesis: a structured approach to VLSI design. In POPL, pages
363–375, 2007.
[27] J. Girard. Light linear logic. Inf. and Comp., 143(2):175–204, 1998.
[28] J.-Y. Girard. Linear Logic. Theor. Comp. Sci., 50(1):1–102, 1987.
[29] J.-Y. Girard. Towards a geometry of interaction. In Categories in Computer Science and
Logic: Proc. of the Joint Summer Research Conference, pages 69–108. American Mathematical Society, 1989.
[30] J.-Y. Girard, Y. Lafont, and P. Taylor. Proofs and Types. Cambridge Tracts in Theor. Comp.
Sci. Cambridge University Press, 1989.
[31] J.-Y. Girard, A. Scedrov, and P. J. Scott. Bounded linear logic: A modular approach to
polynomial time computability. Theoretical Computer Science, 97:1–66, 1992.
[32] M. Giunti and V. T. Vasconcelos. A linear account of session types in the pi calculus. In
CONCUR, pages 432–446, 2010.
[33] C. Hankin. An Introduction to Lambda Calculi for Computer Scientists, volume 2. College
Publications, 2004. ISBN 0-9543006-5-3.
[34] M. Hofmann. Linear types and non-size-increasing polynomial time computation. In Proc.
Logic in Computer Science (LICS’99). IEEE Computer Society, 1999.
[35] M. Hofmann and S. Jost. Static prediction of heap space usage for first-order functional
programs. In POPL, pages 185–197, 2003.
[36] S. Holmström. Linear functional programming. In Proc. of the Workshop on Implementation
of Lazy Functional Languages, pages 13–32, 1988.
[37] K. Honda. Types for dyadic interaction. In CONCUR’93, volume 715 of LNCS, pages 509–
523. Springer, 1993.
[38] A. J. Kfoury. A linearization of the lambda-calculus and consequences. Journal of Logic and
Computation, 10(3):411–436, 2000.
[39] S. C. Kleene. Introduction to Metamathematics. North-Holland, 1952.
[40] J. W. Klop. Combinatory Reduction Systems. PhD thesis, Mathematisch Centrum, Amsterdam, 1980.
[41] J. W. Klop. New fixpoint combinators from old. Reflections on Type Theory, 2007.
[42] J.-W. Klop, V. van Oostrom, and F. van Raamsdonk. Combinatory reduction systems,
introduction and survey. Theor. Computer Science, 121:279–308, 1993.
[43] N. Kobayashi, B. C. Pierce, and D. N. Turner. Linearity and the pi-calculus. In POPL, pages
358–371, 1996.
[44] Y. Lafont. The linear abstract machine. Theor. Comp. Sci., 59:157–180, 1988.
[45] Y. Lafont. Soft linear logic and polynomial time. Theoretical Computer Science, 318(1-2):163–
180, 2004.
[46] I. Mackie. Lilac: A functional programming language based on linear logic. Journal of
Functional Programming, 4(4):395–433, 1994.
[47] I. Mackie. The geometry of interaction machine. In Principles of Programming Languages
(POPL), pages 198–208. ACM Press, 1995.
[48] R. Milner, J. Parrow, and D. Walker. A calculus of mobile processes, I. Information and
Computation, 100(1):1 – 40, 1992.
[49] E. Nöcker, J. Smetsers, M. van Eekelen, and M. Plasmeijer. Concurrent clean. In PARLE’91,
volume 506 of LNCS, pages 202–219. Springer, 1991.
[50] L. Paolini and M. Piccolo. Semantically linear programming languages. In PPDP, pages
97–107, Valencia, Spain, 2008. ACM.
[51] G. D. Plotkin. LCF Considered as a Programming Language. Theoretical Computer Science,
5:223–255, 1977.
[52] K. Terui. Light affine calculus and polytime strong normalization. In Proc. Logic in Comp
Sci. (LICS’01). IEEE Computer Society, 2001.
[53] A. v. Tonder. A lambda calculus for quantum computation. SIAM J. Comput., 33(5):1109–
1135, 2004.
[54] P. Wadler. Linear types can change the world! In IFIP TC 2 Conf. on Progr. Concepts and
Methods, pages 347–359. North Holland, 1990.
[55] D. Walker. Substructural type systems. In Adv. Topics in Types and Progr. Languages,
chapter 1, pages 3–43. MIT Press, Cambridge, 2005.
[56] K. Wansbrough and S. P. Jones. Simple usage polymorphism. In Proc. ACM SIGPLAN
Workshop on Types in Compilation. ACM Press, 2000.
[57] N. Yoshida, K. Honda, and M. Berger. Linearity and bisimulation. In FoSSaCS, LNCS, pages
417–434. Springer-Verlag, 2002.
Modelling contextuality by probabilistic programs with
hypergraph semantics
arXiv:1802.00690v1 [] 31 Jan 2018
Peter D. Bruzaa
a School of Information Systems
Queensland University of Technology
GPO Box 2434
Brisbane 4001
Australia
Abstract
Models of a phenomenon are often developed by examining it under different
experimental conditions, or measurement contexts. The resultant probabilistic
models assume that the underlying random variables, which define a measurable
set of outcomes, can be defined independent of the measurement context. The
phenomenon is deemed contextual when this assumption fails. Contextuality
is an important issue in quantum physics. However, there has been growing
speculation that it manifests outside the quantum realm with human cognition
being a particularly prominent area of investigation. This article contributes
the foundations of a probabilistic programming language that allows convenient
exploration of contextuality in a wide range of applications relevant to cognitive
science and artificial intelligence. Using the style of syntax employed by the
probabilistic programming language WebPPL, specific syntax is proposed to
allow the specification of “measurement contexts”. Each such context delivers
a partial model of the phenomenon based on the associated experimental condition described by the measurement context. An important construct in the
syntax determines if and how these partial models can be consistently combined
into a single model of the phenomenon. The associated semantics are based on
hypergraphs in two ways. Firstly, if the schema of random variables of the partial models is acyclic, a hypergraph approach from relational database theory
is used to compute a join tree from which the partial models can be combined
to form a single joint probability distribution. Secondly, if the schema is cyclic,
measurement contexts are mapped to a hypergraph where edges correspond to
sets of events denoting outcomes in measurement contexts. Recent theoretical
results from the field of quantum physics show that contextuality can be equated
with the possibility of constructing a probabilistic model on the resulting hypergraph. The use of hypergraphs opens the door for a theoretically succinct
and efficient computational semantics sensitive to modelling both contextual
and non-contextual phenomena. In addition, the hypergraph semantics allow
Email address: [email protected] (Peter D. Bruza)
Preprint submitted to Elsevier
February 5, 2018
measurement contexts to be combined in various ways. This aspect is exploited
to allow the modular specification of experimental designs involving both signalling and no signalling between components of the design. An example is
provided as to how the hypergraph semantics may be applied to investigate
contextuality in an information fusion setting. Finally, the overarching aim of
this article is to raise awareness of contextuality beyond quantum physics and
to contribute formal methods to detect its presence by means of probabilistic
programming language semantics.
Keywords: probabilistic programming, probabilistic modelling, programming
language semantics, contextuality
1. Introduction
Probabilistic models are used in a broad swathe of disciplines ranging from
the social and behavioural sciences, biology, the physical and computational
sciences, to name but a few. At their very core, probabilistic models are defined
in terms of random variables, which range over a set of outcomes that are subject
to chance. For example, a random variable R might be defined to model the
performance of human memory. In this case, the possible outcomes might be
words studied by a human subject before their memory is cued. After cueing,
the subject recalls the first word that comes to mind from the set of study words.
This outcome is recorded as a measurement. Repeated measurements over a set
of subjects allow the probability of the recall of a certain word to be empirically
established.
It is important to note from the outset that the random variable R has been devised by the modeller with a specific functional identity in mind, namely to model the recall of a set of predefined study words. When developing probabilistic models in this way, the underlying assumption is that the functional identity of a random variable is independent of the context in which it is measured. For example, the purpose, or functional identity, of R is assumed to be
the same regardless of whether the memories of human subjects are studied in
a controlled laboratory, or in “the wild”, such as in a night club. This assumption seems perfectly reasonable. However, in quantum physics the analog of
this assumption does not always hold and has become known as “contextuality”. More formally, the Kochen-Specker theorem (Kochen and Specker, 1967)
implies that quantum theory is incompatible with the assumption that measurement outcomes are determined by physical properties that are independent of
the measurement context. Placing this theorem in the context of probabilistic
models: contextuality is the “impossibility of assigning a single random variable to represent the outcomes of the same measurement procedure in different
measurement conditions” (Acacio De Barros and Oas, 2015).
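One way to make this concrete: if the same measurement A is recorded in two measurement contexts, a single context-independent random variable for A forces its marginal distribution to agree across the contexts. The Python sketch below (our own illustration; the probability numbers are invented for the example) checks this necessary consistency condition:

```python
# Two measurement contexts sharing the measurement A:
# context 1 records (A, B), context 2 records (A, C).
ctx1 = {(0, 0): 0.4, (0, 1): 0.1, (1, 0): 0.1, (1, 1): 0.4}  # P(A, B)
ctx2 = {(0, 0): 0.3, (0, 1): 0.3, (1, 0): 0.2, (1, 1): 0.2}  # P(A, C)

def marginal_first(ctx):
    """Marginal distribution of the first (shared) measurement."""
    m = {}
    for (a, _), p in ctx.items():
        m[a] = m.get(a, 0.0) + p
    return m

mA1, mA2 = marginal_first(ctx1), marginal_first(ctx2)
# If the marginals differ, no single random variable A can reproduce
# both contexts: the consistency assumption fails.
consistent = all(abs(mA1[a] - mA2[a]) < 1e-9 for a in mA1)
print(consistent)  # False for the numbers above: P(A=0) is 0.5 vs 0.6
```

Agreement of the marginals is necessary but not sufficient: contextuality proper concerns whether the context-wise distributions can be glued into one joint model, which is the question the hypergraph semantics below is designed to answer.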
Contextuality plays a central role in the rapidly developing field of quantum
information in delineating how quantum resources can transcend the bounds of
classical information processing (Howard et al., 2014). It also has important
2
consequences for our understanding of the very nature of physical reality. It is
still an open question, however, if contextuality manifests outside the quantum
realm. Some authors in the emerging field of quantum cognition have investigated whether contextuality manifests in cognitive information processing, for
example, human conceptual processing (Gabora and Aerts, 2002; Aerts et al.,
2014; Aerts and Sozzo, 2014; Bruza et al., 2015; Gronchi and Strambini, 2016)
and perception (Atmanspacher and Filk, 2010; Asano et al., 2014; Zhang and
Dzhafarov, 2017).
It is curious that the preceding deliberations around random variables have
a parallel in the field of computer programming languages. More than five
decades ago, programming languages such as FORTRAN featured variables that
were global. (In early versions of FORTRAN, all variables were global.) As
programming languages developed, global variables were seen as a potential
cause of errors. For example, in a large program a variable X can inadvertently
be used for functionally different purposes at different points in the program.
The error can be fixed by splitting variable X into two global variables X1
and X2 . In this way X1 can be used for one functional purpose and X2 for
the other, and hence there is no danger that their unique functional identities
can become confounded. However, when the program involves large numbers
of global variables, keeping track of the functional identities of variables can become tedious and a source of error. Such errors were considered serious and
prevalent enough that following in the wake of Dijkstra’s famous paper titled
“Go To statement considered harmful”, Wulf and Shaw (1973) advocated in
a similarly influential article that global variables are “harmful” and perhaps
should be abolished. This stance was developed in relation to block structured
programming languages. A “block”, or “scope”, refers to the set of program
constructs, such as variable definitions, that are only valid within a delineated
syntactic fragment of program code. Wulf and Shaw (1973) argued that when
a program employs a scope in which variable X is defined locally, as well as a
variable with the same label X that is global to that scope, then X becomes
“vulnerable” for erroneous overloading. The theory of programming languages
subsequently developed means so that a variable with the same label can be
used in two different scopes but preserve a unique functional identity within the
given scope. This is not the case in state-of-the-art probabilistic modelling. We
believe that the way probabilistic models are currently developed is somewhat
akin to writing FORTRAN programs from a few decades ago. By this we mean
that in the development of a probabilistic model all the random variables are
global. As a consequence errors can appear in the associated model should
the functional identity of variables change because the phenomenon being
modelled is contextual.
The aim of this article is to contribute to the foundations of a probabilistic programming language that allows convenient exploration of contextuality in a wide
range of applications relevant to cognitive science and artificial intelligence. For
example, dedicated syntax is illustrated which shows how a measurement context can be specified as a syntactic scope in a probabilistic program. In addition,
random variables can be declared local to a scope to allow overloading, which
is convenient for the development of models. Such programs are referred to
as P-programs and fall within the emerging area of probabilistic programming
(Gordon et al., 2014).
Probabilistic programming languages (PPLs) unify techniques from conventional programming, such as modularity and imperative or functional specification, with the representation and use of uncertain knowledge. A variety of
PPLs have been proposed (see Gordon et al. (2014) for references), which
have attracted interest from artificial intelligence, programming languages, cognitive science, and the natural language processing communities (Goodman
and Stuhlmüller, 2014). However, unlike conventional programming languages,
which are written with the intention to be executed, a core purpose of a probabilistic program is to specify a model in the form of a probability distribution.
In short, PPLs are high-level and universal languages for expressing probabilistic models. As a consequence, these languages should not be confused with
probabilistic algorithms, or randomized algorithms, which employ a degree of
randomness as part of their logic.
In addition to the dedicated syntax, P-programs have a semantics based on
hypergraphs which determine whether the phenomenon is contextual. These
semantics will be based on hypergraphs in two ways: Firstly, a hypergraph
approach from relational database theory is used to determine whether the
schema of variables of the various measurement contexts is acyclic. If so, the
phenomenon being modelled is non-contextual. Secondly, if the schema is
cyclic, measurement contexts are mapped to “contextuality scenarios”, which
are probabilistic hypergraphs. Although these hypergraphs have been developed
in the field of quantum physics, they provide a comprehensive general framework
to determine whether the phenomenon being modelled by the P-program is
contextual.
2. An example P-program
In order to convey some of the core ideas behind P-programs, Figure 1 illustrates an example program where the phenomenon being modelled is two coins
being tossed in four experimental conditions. Some of these conditions induce
various biases on the coins. The syntax of the P-program is expressed in the style
of a feature rich probabilistic programming language called WebPPL1 (Goodman
and Tenenbaum, 2016). However, the choice of the language is not significant.
WebPPL is simply being used as an example syntactic framework.
In P-programs, syntactic scopes are delineated by the reserved word context.
Each scope specifies an experimental condition, or “measurement context” under which a phenomenon is being examined. The example P-program defines
four such contexts labelled P1, P2, P3 and P4. Consider context P1 which declares two coins as dichotomous random variables A1 and B1 which are local
to this scope. The syntax flip(0.5) denotes a fair coin; any value other than
1 http://webppl.org/
var P1 = context(){
    # declare two binary random variables; 0.5 signifies a fair coin toss
    var A1 = flip(0.6)
    var B1 = flip(0.5)
    # declare joint distribution across the variables A1, B1
    var p = [A1, B1]
    # flip the dual coins 1000 times to form the joint distribution
    return { Infer({samples:1000}, p) }
};
var P2 = context(){
    var A1 = flip(0.6)
    var B2 = flip(0.3)
    var p = [A1, B2]
    return { Infer({samples:1000}, p) }
};
var P3 = context(){
    var A2 = flip(0.2)
    var B1 = flip(0.5)
    var p = [A2, B1]
    return { Infer({samples:1000}, p) }
};
var P4 = context(){
    var A3 = flip(0.5)
    var B2 = flip(0.3)
    var p = [A3, B2]
    return { Infer({samples:1000}, p) }
};
# return a single model
return { model(P1, P2, P3, P4) }
Figure 1: Example P-program in the style of WebPPL.
P1:  A1  B1  p          P2:  A1  B2  p
      1   1  p1               1   1  q1
      1   0  p2               1   0  q2
      0   1  p3               0   1  q3
      0   0  p4               0   0  q4

P3:  A2  B1  p          P4:  A3  B2  p
      1   1  r1               1   1  s1
      1   0  r2               1   0  s2
      0   1  r3               0   1  s3
      0   0  r4               0   0  s4
Figure 2: Four p-tables returned by the respective contexts P1, P2, P3, P4 from the P-program
of Figure 1. The values in the column labelled “p” denote probabilities and sum to unity in
each table.
0.5 defines a biased coin. Declaring variables local to the scope syntactically
expresses the assumption that the variables retain a unique functional identity
within the scope. The random variable declarations within a scope define a set
of events which correspond to outcomes which can be observed in relation to the
phenomenon being examined in the given measurement context. For example,
A1 = 1 signifies that coin A1 has been observed as being a head after flipping.
Joint event spaces are defined by the syntax var p = [A1,B1], which becomes a joint
probability distribution via the syntax Infer({samples:1000}, p). In this case
two coins have been flipped 1000 times to prime the probabilities associated
with each of the four mutually exclusive joint events in the event space. The
resulting distribution represents the model of the phenomenon in that measurement context which is returned from the scope as a partial model. The other
measurement contexts P2, P3 and P4 are similarly defined resulting in the four
distributions depicted in Figure 2. The structure of each distribution will be
referred to as a probabilistic table, or p-table for short, as these are a natural
probabilistic extension to the tables defined in relational databases (Bruza and
Abramsky, 2016).
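The behaviour of a single context can be sketched outside WebPPL. The following Python analogue (an illustration under our own naming, not the paper's implementation) estimates the p-table of context P1 by sampling the two coins:

```python
import random
from collections import Counter

def flip(bias, rng):
    """Return 1 with probability `bias`, else 0 -- a (possibly biased) coin."""
    return 1 if rng.random() < bias else 0

def context_P1(samples=1000, seed=0):
    """Estimate the p-table of context P1: A1 is a coin with bias 0.6,
    B1 a fair coin; the joint distribution is formed from `samples` flips."""
    rng = random.Random(seed)
    counts = Counter((flip(0.6, rng), flip(0.5, rng)) for _ in range(samples))
    return {event: n / samples for event, n in counts.items()}

p_table = context_P1()
# The four mutually exclusive joint events carry probabilities summing to one.
assert abs(sum(p_table.values()) - 1.0) < 1e-9
```

The dictionary returned plays the role of one row-set of Figure 2: keys are joint events, values their empirical probabilities.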
Modelling practice is usually governed by the norm that it is desirable to construct a single model of the phenomenon being studied. Dedicated syntax, e.g.,
model(P1,P2,P3,P4) allows partial models from the four measurement contexts to be combined to form a single distribution, such that each distribution
corresponding to partial models can be recovered from this single distribution
by appropriately marginalizing it (Bruza and Abramsky, 2016; Bruza, 2016). It
turns out that it is not always possible to construct such a single model. As
we shall see below, when this happens the phenomenon being modelled turns
out to be “contextual”. Abramsky (2015) discovered that this formulation of
contextuality is equivalent to the universal relation problem in database theory.
This problem involves determining whether the relations in a relational database
can be joined together (using the natural join operator) to form a single "universal" relation such that each constituent relation can be recovered from the
universal relation by means of projection.
Relational database theory tells us that a key consideration in this problem
turns out to be whether the database schema comprising constituent relations
(p-tables) is cyclic or acyclic. A database schema is deemed “acyclic” iff the
hypergraph H(N, E) can be reduced into an empty graph using the Graham
procedure (Gyssens and Paredaens, 1984). The set N denotes the vertices in
the graph and E the set of edges.
A hypergraph differs from a normal graph in that an edge can connect more
than two vertices. For this reason, such edges are termed “hyperedges”. For
example, the database schema corresponding to the P-program in Figure 1 is
depicted in Figure 2. In our case, the nodes N of the hypergraph are the
individual variables in the headers of the p-tables and the edges correspond
to the sets of variables in these headers, i.e., there will be one edge corresponding to each constituent p-table, where the edge is the set of variables
defining the header of that p-table. Therefore, N = {A1 , A2 , A3 , B1 , B2 } and
E = {{A1 , B1 }, {A1 , B2 }, {A2 , B1 }, {A3 , B2 }}. (As the headers of p-tables only
contain two variables, H is in this case a standard graph.)
The Graham procedure is applied to the hypergraph H until no further
action is possible:
• delete every edge that is properly contained in another one;
• delete every node that is only contained in one edge.
The following details the steps of the Graham procedure when applied to the
example:
1. {A1 , B1 }, {A1 , B2 }, {A2 , B1 }, {A3 , B2 }
2. {A1 , B1 }, {A1 , B2 }, {A2 , B1 }, {B2 }
3. {A1 , B1 }, {A1 , B2 }, {A2 , B1 }
4. {A1 , B1 }, {A1 , B2 }, {B1 }
5. {A1 , B1 }, {A1 , B2 }
6. {A1 , B1 }, {A1 }
7. {A1 , B1 }
8. {A1 }
9. ∅
In this case the Graham procedure results in an empty hypergraph, so the
schema is deemed “acyclic”.
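The Graham procedure is mechanical enough to sketch directly. The Python function below is a minimal illustration (it uses the common GYO variant in which an edge contained in another, distinct edge is deleted, which handles duplicate edges produced along the way): it reduces the acyclic example schema to the empty hypergraph, while a cyclic schema is left unreduced.

```python
def graham_reduce(edges):
    """Apply the Graham procedure until no further action is possible:
    (a) delete every edge contained in another (distinct) edge;
    (b) delete every vertex that occurs in only one edge.
    The schema is acyclic iff the result is the empty hypergraph []."""
    edges = [set(e) for e in edges]
    changed = True
    while changed:
        changed = False
        # (a) drop an edge contained in another edge
        for i, e in enumerate(edges):
            if any(j != i and e <= f for j, f in enumerate(edges)):
                del edges[i]
                changed = True
                break
        # (b) drop every vertex occurring in exactly one edge
        for e in edges:
            for v in list(e):
                if sum(v in f for f in edges) == 1:
                    e.discard(v)
                    changed = True
        edges = [e for e in edges if e]
    return edges

acyclic = [{"A1", "B1"}, {"A1", "B2"}, {"A2", "B1"}, {"A3", "B2"}]
cyclic  = [{"A1", "B1"}, {"A1", "B2"}, {"A2", "B1"}, {"A2", "B2"}]
print(graham_reduce(acyclic) == [])   # True: reduces to the empty hypergraph
print(graham_reduce(cyclic) == [])    # False: every vertex occurs twice, nothing fires
```

The second schema is exactly the Bell-type schema that reappears in Figure 9, where every variable is shared by two contexts, so neither reduction rule ever applies.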
There are a number of theoretical results in relational database theory which
make acyclic hypergraphs significant with regard to providing the semantics of
joining partial models into a single model. Wong (1997) formalizes the relationship between Markov distributions and relational database theory by means of
a generalized acyclic join dependency (GAJD). The key idea behind this relationship is the equivalence between probabilistic conditional independence and
a generalized form of multivalued dependency, the latter being a functional constraint imposed between two sets of attributes in the database schema. It turns
out that a joint distribution factorized on an acyclic hypergraph is equivalent
to a GAJD (Wong, 2001).
For example, consider once again the acyclic schema in Figure 2. There
are four p-tables, P = {P1 , P2 , P3 , P4 }. As the hypergraph is acyclic, there is
necessarily a so-called join tree construction, denoted ⊗{S1 , . . . , Sn }, that satisfies the GAJD. In the tree construction, each Si , 1 ≤ i ≤ n denotes a unique
p-table in the set P . The practical consequence of this is that there is a join
expression of the form: (((S1 ⊗ S2 ) ⊗ S3 ) ⊗ S4 ) where the sequence S1 , . . . , S4 is
a tree construction ordering derived from the acyclic hypergraph. If the hypergraph constructed from the schema comprising n p-tables {P1 , P2 , P3 , . . . Pn } is
acyclic, then a generalized join expression (· · · ((S1 ⊗ S2 ) ⊗ S3 ) ⊗ · · · ⊗ Sn ) exists
which joins the p-tables into a single probability distribution P such that each
Pi , 1 ≤ i ≤ n is a marginal distribution of P .
[Join tree: nodes P1:{A1,B1}, P2:{A1,B2}, P3:{A2,B1}, P4:{A3,B2}; edge 1:{B1} between P3 and P1, edge 2:{A1} between P1 and P2, edge 3:{B2} between P2 and P4]
Figure 3: Join tree of the p-tables P1 , P2 , P3 , P4 to be joined.
In order to gain some intuition about how this plays out in practice, the
acyclic database schema depicted in Figure 2 results in the join tree depicted in
Figure 3. The nodes depict the variables in the respective p-tables and the edges
represent the overlap between the sets of variables in the respective headers. The
numbers on the edges denote the ordering used to produce the join expression:
(((P3 ⊗P1 )⊗P2 )⊗P4 ). Under the assumption that the probability distributions
represented in the nodes have identical distributions when marginalized by the
variable associated with the edge, we can see how the hypertree produces a
Markov network which, in turn, specifies the probabilistic join of the constituent
p-tables (Liu et al., 2011):
P (A1 , A2 , A3 , B1 , B2 ) = [P (A2 , B1 ) P (A1 , B1 ) P (A1 , B2 ) P (A3 , B2 )] / [P (B1 ) P (A1 ) P (B2 )]    (1)
Observe how the structure of the equation mirrors the graph in Figure 3 where
the numerator corresponds to the nodes of the join tree and the denominator
corresponds to terms which normalize the probabilities. In addition, this expression reflects conditional independence assumptions implied by the join tree,
namely A1 and A2 are conditionally independent given B1 , B1 and B2 are conditionally independent given A1 , and A1 and A3 are conditionally independent given B2 .
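A minimal numerical sketch of equation (1) follows. For simplicity we assume the five variables are independent, so the shared-variable marginals of the p-tables automatically agree; the biases are hypothetical:

```python
from itertools import product

# Hypothetical marginals for the five coins (any biases work, provided the
# shared-variable marginals of the p-tables agree across contexts).
pA1, pA2, pA3, pB1, pB2 = 0.6, 0.2, 0.5, 0.5, 0.3
m = lambda bias: {1: bias, 0: 1 - bias}

# The four pairwise p-tables of Figure 2.
P1 = {(a, b): m(pA1)[a] * m(pB1)[b] for a, b in product((0, 1), repeat=2)}
P2 = {(a, b): m(pA1)[a] * m(pB2)[b] for a, b in product((0, 1), repeat=2)}
P3 = {(a, b): m(pA2)[a] * m(pB1)[b] for a, b in product((0, 1), repeat=2)}
P4 = {(a, b): m(pA3)[a] * m(pB2)[b] for a, b in product((0, 1), repeat=2)}

def joint(a1, a2, a3, b1, b2):
    """Equation (1): join the p-tables along the tree P3 - P1 - P2 - P4."""
    num = P3[a2, b1] * P1[a1, b1] * P2[a1, b2] * P4[a3, b2]
    den = m(pB1)[b1] * m(pA1)[a1] * m(pB2)[b2]
    return num / den

# The join is a genuine distribution, and P1 is one of its marginals.
total = sum(joint(*v) for v in product((0, 1), repeat=5))
marg_P1 = {(a1, b1): sum(joint(a1, a2, a3, b1, b2)
                         for a2, a3, b2 in product((0, 1), repeat=3))
           for a1, b1 in product((0, 1), repeat=2)}
assert abs(total - 1.0) < 1e-9
assert all(abs(marg_P1[k] - P1[k]) < 1e-9 for k in P1)
```

The assertions confirm the two defining properties of the join: it sums to one, and each constituent p-table is recovered by marginalization.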
Let us summarize the situation so far and reflect on the issue of contextuality. A P-program comprises a number of scopes where each scope corresponds
to a measurement context. A scope returns a probability distribution in the
form of a p-table, which can be considered a partial model of the phenomenon.
A reasonable goal is to combine these distributions into a single distribution
so a single model of the phenomenon is produced. When the schema of the
constituent p-tables is acyclic and the marginal distributions of the set of intersecting variables are constant, then a straightforward extension of relational
database theory can be used to produce the required single model (as has been
shown in more detail in (Bruza, 2016)). The fact that it is possible to construct a single model means that the random variables in the P-program have
a functional identity that is independent of the measurement contexts. The
phenomenon is therefore non-contextual.
Much of the research on contextuality corresponds to when the schema of
the p-tables is cyclic (Zhang and Dzhafarov, 2017). In order to explore such
cases, we will continue to use hypergraphs, but instead of defining the hypergraph
structure at the level of the schema, as was illustrated in the Graham procedure, the
structure of the hypergraph will be defined in terms of the underlying events
in the measurement contexts defined in the P-program. In database terms
this equates to defining the hypergraph structure in terms of the data in the
respective p-tables.
3. Probabilistic models and hypergraph semantics
In the following, we draw from a comprehensive theoretical investigation
using hypergraphs to model contextuality in quantum physics (Acin et al., 2015).
The driving motivation is to leverage these theoretical results to provide the
semantics of P-programs when the schema of the p-tables to be joined is cyclic.
How these semantics are expressed relates to how the syntax has been specified,
which in turn relates to the experimental design that the modeller has in mind.
The basic building block of these semantics is a “contextuality scenario”.
Definition 3.1. (Contextuality Scenario) (Definition 2.2.1 (Acin et al., 2015))
A contextuality scenario is a hypergraph X = (V, E) such that:
• v ∈ V denotes an event which can occur in a measurement context;
• e ∈ E is the set of all possible events in a measurement context.
The set of hyperedges E are determined by both the measurement contexts
as well as the measurement protocol. Each measurement context is represented
by an edge in the hypergraph X. The basic idea is that each syntactic scope in
a P-program will lead to a hyperedge, where the events are a complete set of
outcomes in the given measurement context specified in the associated scope.
Additional hyperedges are a consequence of the constraints inherent in the measurement protocol that is applied. The examples to follow aim to make this
clear.
var P1 = context(){
    var A = flip(0.7)
    var B = A ? flip(0.8) : flip(0.1)
    var p = [A, B]
    return { Infer({samples:1000}, p) }
};
var P2 = context(){
    var B = flip(0.4)
    var A = B ? flip(0.4) : flip(0.6)
    var p = [B, A]
    return { Infer({samples:1000}, p) }
};
return { model(P1, P2) }
Figure 4: Example order effects P-program.
In some cases, hyperedges will have a non-trivial intersection: If v ∈ e1 and
v ∈ e2 , then this represents the idea that the two different measurement outcomes corresponding to v should be thought of as equivalent as will be detailed
below by means of an order effects experiment.
Order effects experiments involve two measurement contexts each involving
two dichotomous variables A and B which represent answers to yes/no questions
QA and QB . In one measurement context, the question QA is served before
question QB and in the second measurement context the reverse order is served,
namely QB then QA . Order effects occur when the answer to the first question
influences the answer to the second. These two measurement contexts are
syntactically specified by the scopes P1 and P2 shown in Figure 4.
In this P-program, syntax of the form var B = A ? flip(0.8): flip(0.1)
models the influence of the answer of QA on QB via a pair of biased coins. In
this case, if QA = y, then the response to QB is determined by flipping an 80%
biased coin. Conversely, if QA = n, then the response to QB is determined by
flipping a 10% biased coin (the choices of such biases are determined by the
modeller). It should be carefully noted that the measurement contexts in the
order effects program do not reflect the usual understanding of measurement
context employed in experiments analyzing contextuality in quantum physics.
In these experiments, a measurement context comprises observables that are
jointly measurable, so the order in which the observables within a given context
are measured will not affect the associated statistics.
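The order-effects program can be mimicked in Python (a sketch using the biases of Figure 4, not the paper's WebPPL runtime). The simulation makes the later contextuality argument concrete: the marginal of A is roughly 0.7 when QA is asked first but roughly 0.4 × 0.4 + 0.6 × 0.6 = 0.52 when it is asked second, so no single p : V → [0, 1] can reproduce both p-tables.

```python
import random

def flip(bias, rng):
    return 1 if rng.random() < bias else 0

def context_P1(samples, rng):
    """QA asked first: A ~ flip(0.7), then B depends on A's answer."""
    pairs = []
    for _ in range(samples):
        a = flip(0.7, rng)
        b = flip(0.8, rng) if a else flip(0.1, rng)
        pairs.append((a, b))
    return pairs

def context_P2(samples, rng):
    """QB asked first: B ~ flip(0.4), then A depends on B's answer."""
    pairs = []
    for _ in range(samples):
        b = flip(0.4, rng)
        a = flip(0.4, rng) if b else flip(0.6, rng)
        pairs.append((a, b))
    return pairs

rng = random.Random(1)
n = 100_000
pA_in_P1 = sum(a for a, _ in context_P1(n, rng)) / n   # close to 0.7
pA_in_P2 = sum(a for a, _ in context_P2(n, rng)) / n   # close to 0.52
# The marginal of A differs across the two contexts: the variable A does
# not have a single functional identity independent of measurement context.
print(abs(pA_in_P1 - pA_in_P2) > 0.05)   # True
```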
We will now use this simple example to illustrate the associated contextuality
scenario, which is shown in Figure 5. Firstly, the set V of events (measurement outcomes) comprises all possible combinations of yes/no answers to the
questions QA and QB , namely V = {A = 1 ∧ B = 1, A = 1 ∧ B = 0, A = 0 ∧ B =
1, A = 0 ∧ B = 0}, where 1 denotes ‘yes’ and 0 denotes ‘no’. In this figure, the
two rounded rectangles represent the events within the two measurement contexts specified by the syntactic scopes P1 and P2. For example, in the rectangle
labeled P1, “11” is shorthand for the event A = 1 ∧ B = 1 etc. Observe that the
[Contextuality scenario: hyperedge P1:{A,B} contains the events p1 = 11, p2 = 10, p3 = 01, p4 = 00; hyperedge P2:{A,B} contains the events q1 = 11, q2 = 10, q3 = 01, q4 = 00; two further hyperedges, {p1, p2, q2, q4} (dashed) and {p3, p4, q1, q3}, span the two contexts]
Figure 5: Contextuality scenario corresponding to P-program depicted in Figure 4. In total,
the hypergraph has 4 edges of four vertices each.
corresponding hyperedges (rounded rectangles) contain an exhaustive, mutually
exclusive set of events. In addition, the two spanning hyperedges going across
these rectangles similarly comprise a set of exhaustive, mutually exclusive set
of events. These spanning edges help illustrate events that are considered to be
equivalent.
Firstly, it is reasonable to assume answering yes (or no) to both questions in
either measurement context represents equivalent events. Therefore, the events
labelled p1 and p4 can respectively be assumed equivalent to q1 and q4 . It becomes a little more subtle when the polarity of the answers differ. For example,
the event labelled p3 represents the event A = 0∧B = 1, remembering that question QA was asked before question QB in this context. The equivalent event in
hyperedge P2 is labelled q2 , which corresponds to the event B = 1 ∧ A = 0, where
question B is asked before question A. As conjunction is commutative, it is
reasonable to view these two converse events as equivalent. In summary, if p3 is
equivalent to q2 and p4 is equivalent to q4 then the hyperedge {p1 , p2 , q2 , q4 } (the
dashed hyperedge in Figure 5) can be established, in addition to the hyperedge
{p1 , p2 , p3 , p4 }.
Let us now return to the issue of contextuality. A probabilistic model corresponding to a contextuality scenario X is the mapping of measurement outcomes
to a probability p : V → [0, 1]. Henson and Sainz (2015) point out that
“By defining probabilistic models in this way [rather than by a function pe (V ) depending on the measurement e performed], we are assuming that in the set of experimental protocols that we are interested in, the probability for a given outcome is independent of the
measurement that is performed”.
Observe carefully that defining probabilistic models in this way formalizes the
assumption mentioned in the introduction, namely that random variables are
independent of measurement context and thus have a single functional identity.
Without a single functional identity it is impossible to assign a random variable to represent the outcomes of the same measurement protocol in different
measurement contexts.
It is a requirement that the mapping adheres to the expected normalization condition: ∀e ∈ E : Σ_{v∈e} p(v) = 1. By way of illustration, consider once
again Figure 5. This contextuality scenario has four edges. The normalization
condition enforces the following constraints:
p1 + p2 + p3 + p4 = 1    (2)
q1 + q2 + q3 + q4 = 1    (3)
p1 + p2 + q2 + q4 = 1    (4)
p3 + p4 + q1 + q3 = 1    (5)
where pi , 1 ≤ i ≤ 4 and qj , 1 ≤ j ≤ 4 denote the probabilities of outcomes in
the four hyperedges. A definition of contextuality can now be presented.
Definition 3.2 ((Probabilistic) contextuality). Let X = (V, E) be a contextuality scenario. Let G(X) denote the set of probabilistic models on X. X is
deemed “contextual” if G(X) = ∅.
In other words, the impossibility of a probabilistic model signifies that the
phenomenon being modelled is contextual. (The label “probabilistic” mirrors
an analogous definition of contextuality based on sheaf theory (Abramsky et al.,
2016)).
Let us now examine the possibility of a probabilistic model on the order
effects contextuality scenario (Figure 5). Equations (2) and (5) imply that
p1 + p2 = q1 + q3 . Now, p1 and p2 are respectively associated with the outcomes
A = 1 ∧ B = 1 and A = 1 ∧ B = 0. In other words, p1 + p2 denotes the marginal
probability p(A) in measurement context P1. By a similar argument, q1 + q3
denotes p(B = 1 ∧ A = 1) + p(B = 0 ∧ A = 1) which is written this way to
emphasize that question QB is asked first in measurement context P2. This
also equates to the marginal probability p(A). In other words, the constraints
imposed by normalization conditions in the hyperedges imply that the marginal
probability p(A) must be the same across both measurement contexts P1 and
P2.
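The four normalization constraints are easy to check mechanically. The sketch below is illustrative Python: the first pair of distributions is hypothetical and chosen so that p(A) = 0.7 in both contexts, while the second pair is the one induced by the biases of the Figure 4 program.

```python
def probabilistic_model_exists(p, q, tol=1e-9):
    """Check the edge normalization constraints (2)-(5) of the order-effects
    contextuality scenario (Figure 5).  p = (p1, p2, p3, p4) and
    q = (q1, q2, q3, q4) are the distributions of contexts P1 and P2; a
    probabilistic model on the hypergraph exists iff every edge sums to one."""
    edges = [
        p,                          # (2): context edge P1
        q,                          # (3): context edge P2
        (p[0], p[1], q[1], q[3]),   # (4): spanning edge {p1, p2, q2, q4}
        (p[2], p[3], q[0], q[2]),   # (5): spanning edge {p3, p4, q1, q3}
    ]
    return all(abs(sum(e) - 1.0) < tol for e in edges)

# Hypothetical distributions with the same marginal p(A) = 0.7 in both
# contexts: a probabilistic model exists.
print(probabilistic_model_exists((0.56, 0.14, 0.03, 0.27),
                                 (0.16, 0.24, 0.54, 0.06)))   # True

# Distributions induced by the Figure 4 biases (p(A) is 0.7 in P1 but
# 0.52 in P2): no probabilistic model, i.e. the phenomenon is contextual.
print(probabilistic_model_exists((0.56, 0.14, 0.03, 0.27),
                                 (0.16, 0.24, 0.36, 0.24)))   # False
```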
This conclusion makes sense when considered in relation to the definition
of contextuality: The only way that a function p : V → [0, 1] can be defined
is if the marginal probabilities of the variables A and B are the same in both
measurement contexts P1 and P2. If not, then this means that variable A has
a different functional identity when question QA is asked first (in measurement
context P1) as opposed to when it is asked second (in measurement context P2).
In summary, the semantics of a P-program is represented by a contextuality
scenario, which has the form of a hypergraph. Contextuality equates to the
impossibility of a probabilistic model over the hypergraph. This impossibility
is where contextuality meets probabilistic models.
4. Syntax and semantics of combining contextual scenarios according
to experimental design
Different fields employ various experimental designs when studying a phenomenon of interest. For example, in psychology a "between subjects" experimental design means a given participant should only be subjected to one
measurement context. In quantum physics, however, some experiments involve
measurement contexts which are enacted simultaneously with the requirement
that observations made in each context are local to that context and don’t influence other measurement contexts. This constraint is often referred to as the
“no signalling” condition.
One of the advantages of using a programming approach to develop probabilistic models is that experimental designs can be syntactically specified in a
modular way. In this way, a wide variety of experimental designs across fields
can potentially be catered for. For example, consider the situation where an
experimenter wishes to determine whether a system S can validly be modelled
compositionally in terms of two component subsystems A and B as shown in
Figure 6.
[System S with assumed components A and B; experiments A1 {yes, no} and A2 {yes, no} interact with component A, and B1 {yes, no} and B2 {yes, no} with component B]
Figure 6: A potentially compositional system S, consisting of two assumed components A
and B. S can perhaps be understood in terms of a mutually exclusive choice of experiments
performed upon those components, one represented by the random variables A1, A2 (pertaining to an interaction between the experimenter and component A), and the other by B1, B2
(pertaining to an interaction between the experimenter and component B). Each of these
experiments can return a value of ‘yes’ or ‘no’.
Two different experiments can be carried out upon each of the two presumed
components, which will answer a set of ‘questions’ with binary outcomes, leading
to four measurement contexts. For example, one experimental context would
be to ask A1 of component A and B1 of component B.
This abstract experimental design has been instantiated in a number of ways.
For example, in quantum physics it has been employed to determine whether
system S comprising photons2 A and B is entangled. In addition, it has been
employed in cognitive psychology to test for contextuality in human cognition
(Aerts et al., 2014; Aerts and Sozzo, 2014; Bruza et al., 2015; Dzhafarov et al.,
2015; Gronchi and Strambini, 2016). For example, Bruza et al. (2015) describe
an experiment to determine whether novel conceptual combinations such as
BOXER BAT adhere to the principle of semantic compositionality (Pelletier,
1994). Semantic compositionality entails that the meaning of BOXER BAT is
2 Here a bipartite system of photons is being introduced. Whenever such systems of photons
are mentioned throughout this article, in principle, any bipartite or multipartite quantum
system would do, even fermions.
some function of the meaning of the component concepts BOXER and BAT.
In this case, component A corresponds to the concept BOXER and component
B corresponds to the concept BAT. Each of these concepts happens to be bi-ambiguous, for example, BOXER can be interpreted in a sport sense or an
animal sense (a breed of dog). Similarly, the concept BAT can be interpreted in
either of these senses. Interpretation of concepts can be manipulated by priming
words which correspond to the ‘questions’ asked of the component concepts. For
example, one experimental context would be to ask a set of human subjects to
return an interpretation of BOXER BAT after being shown the priming words
fighter (A1) and vampire (B1). Note how A1 is designed to prime the sport
sense of BOXER and B1 to prime the animal sense of BAT. An interpretation
given in this context might be “an angry furry black animal with boxing gloves
on”. It is important to note that the interpretation is probabilistic, namely the
priming word influences an interpretation of the concept but does not determine
it.
How can system S depicted in Figure 6 be modelled as a P-program? And,
how can the semantics of the P-program determine whether S is contextual?
One way to think about system S is that it is equivalent to a set of biased coins
A and B, where the bias is local to a given measurement context. Figure 7
depicts a P-program that follows this line of thinking.
The P-program will be referred to as “Bell scenario” as it programmatically
specifies the design of experiments in quantum physics inspired by the physicist
John Bell (Clauser and Horne, 1974). Such experiments involve a system of two
space-like separated photons.
4.1. Bell contextuality scenario with no-signalling
The Bell scenario program follows the design depicted in Figure 6 by first
defining the components A and B together with the associated variables. Thereafter, the program features the four associated measurement contexts P1, P2, P3
and P4. Finally, the line model(design: ’no-signal’,P1,P2,P3,P4) specifies that the measurement contexts are to be combined according to a “no signalling” condition. Formal details of this condition will follow, but essentially it
imposes a constraint that measurements made on one component do not affect
outcomes observed in relation to the other component. This could be because
the components have sufficient spatial separation in a physics experiment, or
alternatively, in a psychology experiment the cognitive phenomena represented
by components A and B of system S are independent cognitive functions.
The question now to be addressed is how the hypergraph semantics are
to be formulated. Acin et al. (2015) provides the general semantics of the
Bell scenarios by means of multipartite composition of contextuality scenarios.
As these semantics are compositional, it opens the door to map syntactically
specified components in a P-program to contextuality scenarios and then to
exploit the composition to provide the semantics of the program as a whole.
Consider the Bell scenario program depicted in Figure 7. The syntactically
defined components A and B are modelled as a contextuality scenarios XA and
XB respectively. The corresponding hypergraphs are depicted in Figure 8.
# define the components of the experiment
def A = component(A1, A2)
def B = component(B1, B2)

var P1 = context(){
    # declare two binary random variables; 0.5 signifies a fair coin toss
    var A1 = flip(0.6)
    var B1 = flip(0.5)
    # declare joint distribution across the variables A1, B1
    var p = [A1, B1]
    # flip the dual coins 1000 times to form the joint distribution
    return { Infer({samples:1000}, p) }
};
var P2 = context(){
    var A1 = flip(0.4)
    var B2 = flip(0.7)
    var p = [A1, B2]
    return { Infer({samples:1000}, p) }
};
var P3 = context(){
    var A2 = flip(0.2)
    var B1 = flip(0.7)
    var p = [A2, B1]
    return { Infer({samples:1000}, p) }
};
var P4 = context(){
    var A2 = flip(0.4)
    var B2 = flip(0.5)
    var p = [A2, B2]
    return { Infer({samples:1000}, p) }
};
# return a single model
return { model({design: 'no-signal', P1, P2, P3, P4}) }
Figure 7: Example P-program “Bell scenario” which models system S depicted in Figure 6.
[XA: edges A1 = {A1=0, A1=1} and A2 = {A2=0, A2=1}; XB: edges B1 = {B1=0, B1=1} and B2 = {B2=0, B2=1}]
Figure 8: Contextuality scenarios corresponding to the components A and B of Figure 6.
Note how the variable definitions associated with a component map to edges
in a hypergraph. For example, the syntax def A = component(A1,A2)
corresponds to the two edges labelled A1 and A2 on the left hand side of Figure
8.
The question now is how to compose the contextuality scenarios XA and
XB into a single contextuality scenario XAB , which will express the semantics
of the Bell scenario P-program. The most basic form of composition is by
means of a direct product of the respective hypergraphs. The direct product is a
contextuality scenario XAB = XA × XB such that V (XA × XB ) = V (XA ) × V (XB )
and E(XA × XB ) = E(XA ) × E(XB ). (See Definition 3.1.1 in (Acin et al.,
2015).) The hypergraph of the product is shown in Figure 10. Observe how each
syntactic context P1, P2, P3 and P4 specified in the Bell scenario P-program
corresponds to an edge in the hypergraph. In addition, note the structural
correspondence of the hypergraph in Figure 10 with the cyclic database schema
depicted in Figure 9.
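The direct product is simple enough to sketch (illustrative Python for Definition 3.1.1; the vertex names are chosen here for readability):

```python
from itertools import product

def direct_product(VA, EA, VB, EB):
    """Direct product XA x XB of two contextuality scenarios: vertices are
    pairs of vertices, and each pair of edges (ea, eb) contributes ea x eb."""
    V = [(v, w) for v, w in product(VA, VB)]
    E = [[(v, w) for v, w in product(ea, eb)] for ea, eb in product(EA, EB)]
    return V, E

# Component scenarios of Figure 8: two binary measurements per component.
VA = ["A1=0", "A1=1", "A2=0", "A2=1"]
EA = [["A1=0", "A1=1"], ["A2=0", "A2=1"]]
VB = ["B1=0", "B1=1", "B2=0", "B2=1"]
EB = [["B1=0", "B1=1"], ["B2=0", "B2=1"]]

V, E = direct_product(VA, EA, VB, EB)
print(len(V), len(E), len(E[0]))   # 16 4 4
```

The counts mirror Figure 10: sixteen joint events grouped into four context edges of four events each.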
[Cyclic schema: P1:{A1,B1} and P2:{A1,B2} share {A1}; P2 and P4:{A2,B2} share {B2}; P4 and P3:{A2,B1} share {A2}; P3 and P1 share {B1}]
Figure 9: Cyclic schema of the p-tables P1, P2, P3, and P4.
Note that the events in Figure 10 are denoted as various coloured dots with
each such dot corresponding directly to a row of a p-table within the cyclic
schema.
The Bell scenario program syntactically specifies that there should be “no
signalling” between the respective components A and B via the command
model(design: ‘no-signal’,P1,P2,P3,P4). This condition imposes constraints on the allowable probabilistic models on the combined hypergraph struc-
16
P1:{A1,B1}
P2:{A1,B2}
00
01
00
01
10
11
10
11
00
01
00
01
10
11
10
11
P4:{A2,B2}
P3:{A2,B1}
Figure 10: Contextuality scenario corresponding to the direct product XAB = XA × XB
ture. Following Definition 3.1.2 in (Acin et al., 2015), a probabilistic model
p ∈ G(XA × XB ) is a “no signalling” model if:
X
X
p(v, w), ∀v ∈ V (XA ), e, e0 ∈ E(XB )
p(v, w) =
w∈e0
w∈e
X
w∈e
p(v, w)
=
X
p(v, w), ∀w ∈ V (XB ), e, e0 ∈ E(XA )
w∈e0
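This definition can be checked mechanically for a candidate model p on the product scenario. A sketch (function name is ours), where a model is a dict from events (v, w) to probabilities:

```python
def is_no_signalling(p, EA, EB, tol=1e-9):
    """For every vertex of one component, its marginal probability must be
    the same whichever edge of the other component it is summed over."""
    VA = {v for e in EA for v in e}
    VB = {w for e in EB for w in e}
    for v in VA:
        sums = [sum(p.get((v, w), 0.0) for w in e) for e in EB]
        if max(sums) - min(sums) > tol:
            return False
    for w in VB:
        sums = [sum(p.get((v, w), 0.0) for v in e) for e in EA]
        if max(sums) - min(sums) > tol:
            return False
    return True

# Edges of the two binary components of Figure 8.
EA = [frozenset({("A1", 0), ("A1", 1)}), frozenset({("A2", 0), ("A2", 1)})]
EB = [frozenset({("B1", 0), ("B1", 1)}), frozenset({("B2", 0), ("B2", 1)})]
# The uniform model (1/4 on each of the 16 events) is normalized on
# every product edge and trivially satisfies "no signalling".
uniform = {(v, w): 0.25 for e in EA for v in e for f in EB for w in f}
```

A model that skews one context but not the other fails the check, which is exactly the kind of violation discussed below.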
The probabilistic constraints entailed by this definition will be illustrated in an
example to follow. Acin et al. (2015, p. 45) show that not all probabilistic models of contextuality scenarios composed by a direct product are "no signalling"
models. In order to guarantee that all probabilistic models of a combined contextuality scenario are "no signalling" models, the constituent contextuality
scenarios XA and XB should be combined by the Foulis-Randall (FR) product,
denoted XAB = XA ⊗_FR XB. As with the direct product XA × XB of contextuality scenarios, the vertices of the FR product are defined by V(XA ⊗_FR XB) =
V(XA) × V(XB). It is with respect to the hyperedges that there is a difference
between the FR product and the direct product:
    E(XA ⊗_FR XB) = E_{A→B} ∪ E_{A←B}

where

    E_{A→B} := { ∪_{v∈e_a} {v} × f(v) : e_a ∈ E(XA), f : e_a → E(XB) }

    E_{A←B} := { ∪_{w∈e_b} f(w) × {w} : e_b ∈ E(XB), f : e_b → E(XA) }
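The FR product can be enumerated directly for the two binary components of Figure 8; the count below reproduces the twelve edges of four events mentioned in the caption of Figure 11. (Helper names are ours; edges are represented as frozensets of (v, w) pairs.)

```python
from itertools import product

def fr_edges(EA, EB):
    """Hyperedges of the Foulis-Randall product: E_{A->B} union E_{A<-B}."""
    edges = set()
    for ea in EA:                          # E_{A->B}: choose f : ea -> E(XB)
        for f in product(EB, repeat=len(ea)):
            edge = set()
            for v, eb in zip(sorted(ea), f):
                edge |= {(v, w) for w in eb}
            edges.add(frozenset(edge))
    for eb in EB:                          # E_{A<-B}: choose f : eb -> E(XA)
        for f in product(EA, repeat=len(eb)):
            edge = set()
            for w, ea in zip(sorted(eb), f):
                edge |= {(v, w) for v in ea}
            edges.add(frozenset(edge))
    return edges

EA = [frozenset({("A1", 0), ("A1", 1)}), frozenset({("A2", 0), ("A2", 1)})]
EB = [frozenset({("B1", 0), ("B1", 1)}), frozenset({("B2", 0), ("B2", 1)})]
E = fr_edges(EA, EB)
# Constant functions f reproduce the 4 direct-product edges; non-constant
# functions add 8 spanning edges, for 12 edges of 4 events each.
```
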
We are now in a position to illustrate the semantics of the P-program of Figure
7 by the corresponding contextuality scenario depicted in Figure 11. Observe
how the FR product produces the extra edges that span the events across measurement contexts labeled P1, P2, P3 and P4 when compared with the direct
product hypergraph depicted in Figure 10. At first these spanning edges may
seem arbitrary, but they happen to guarantee that the allowable probabilistic models over the composite contextuality scenario XA ⊗ FR XB satisfy the
“no signalling” condition (Sainz and Wolfe, 2017). By way of illustration, the
normalization condition on edges imposes the following constraints (see Figure
11):
p1 + p2 + p3 + p4 = 1
(6)
q1 + q2 + q3 + q4 = 1
(7)
p1 + p2 + q3 + q4 = 1
(8)
p3 + p4 + q1 + q2 = 1
(9)
where pi , 1 ≤ i ≤ 4 and qj , 1 ≤ j ≤ 4 denote the probabilities of events in
the respective hyperedges. A consequence of constraints (6) and (8) is that
p3 + p4 = q3 + q4 . When considering the associated outcomes this means
    p(A1 = 1 ∧ B1 = 0) + p(A1 = 1 ∧ B1 = 1) = p(A1 = 1 ∧ B2 = 0) + p(A1 = 1 ∧ B2 = 1)

where the four terms are, from left to right, p3, p4, q3 and q4.
(The preceding is an example of one of the constraints imposed by Definition
3.1.2 in (Acin et al., 2015) as specified above). In other words, the marginal
probability p(A1 = 1) does not differ across the measurement contexts P1 and
P2 specified in the P-program of Figure 7. In a similar vein, equations (6)
and (9) imply that the marginal probability p(A1 = 0) does not differ across
measurement contexts P1 and P2. The stability of marginal probability ensures
that “no signalling” is occurring from component B to component A (see Figure
6). In terms of our BOXER BAT example, “no signalling” implies that the
probability of interpretation of the concept BOXER does not change whether the
priming word for BAT is ball (B1 - sport sense) or vampire (B2 - animal sense).
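The arithmetic behind this marginal-stability argument is easy to verify numerically: any assignment satisfying the edge-normalization constraints (6)-(9) forces p3 + p4 = q3 + q4 and p1 + p2 = q1 + q2. A small check with concrete, purely illustrative numbers (not empirical data):

```python
# One concrete assignment of the edge probabilities of Figure 11.
p1, p2, p3, p4 = 0.3, 0.2, 0.4, 0.1      # context P1
q1, q2, q3, q4 = 0.3, 0.2, 0.35, 0.15    # context P2

# Edge-normalization constraints (6)-(9).
assert abs((p1 + p2 + p3 + p4) - 1) < 1e-12   # (6)
assert abs((q1 + q2 + q3 + q4) - 1) < 1e-12   # (7)
assert abs((p1 + p2 + q3 + q4) - 1) < 1e-12   # (8)
assert abs((p3 + p4 + q1 + q2) - 1) < 1e-12   # (9)

# (6) and (8) give p3 + p4 = q3 + q4: p(A1 = 1) agrees across P1 and P2.
# (6) and (9) give p1 + p2 = q1 + q2: p(A1 = 0) agrees likewise.
```
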
4.2. Bell contextuality scenario with signalling
Investigations into contextuality in quantum physics involve the “no signalling” condition. However, in cognitive science and related areas, the situation
seems less clear cut. Dzhafarov and Kujala (2015) argue, for example, that
[Figure 11 diagram, drawn twice for clarity: the four contexts P1:{A1,B1}, P2:{A1,B2}, P3:{A2,B1} and P4:{A2,B2}, each with events 00, 01, 10, 11; edge probabilities p1-p4 and q1-q4 label the hyperedges, including the spanning edges across contexts.]
Figure 11: Contextuality scenario of the P-program of Figure 7. This P-program has the
cyclic schema depicted in Figure 9. In total the hypergraph comprises 12 edges of four events each.
The nodes in rectangles represent events in a probability distribution returned by a given
scope: P1, P2, P3, and P4. Note this figure depicts a single hypergraph. Two copies have been
made to depict the spanning edges more clearly. This figure corresponds to Figure 7f in (Acin
et al., 2015).
the “no signalling” condition seems always to be violated in psychological experiments. By way of illustration, consider once again the conceptual combination
BOXER BAT. Recall that the "no signalling" condition entails that the probability of interpretation of the concept BOXER does not change whether the priming
word for BAT is ball (B1 - sport sense) or vampire (B2 - animal sense). Nor
does the probability of interpretation of the concept BAT change whether
the priming word for BOXER is fighter (A1 - sport sense) or dog (A2 - animal
sense). However, it is easy to imagine that signalling may be involved in forming
an interpretation of BOXER BAT. For example, Wisniewski (1997) identifies
a property-based interpretation of conceptual combinations whereby properties
of the modifying concept BOXER apply in some way to the head concept BAT.
One way to view this kind of interpretation is that a sense of BOXER is first
established and then influences the interpretation of the concept BAT. In other
words, the interpretation of the conceptual combination is formed by processing
the combination from left to right. In relation to the general system depicted in
Figure 6, the preceding situation involves an arrow proceeding from component
A to B, which represents component A signalling information to component B.
We can model Wisniewski’s property interpretation by extending the Bell
scenario to involve signalling as specified in the P-program shown in Figure 12.
The signalling from concept A to concept B in a given measurement context is
modelled as the outcome of the B coin being dependent on the outcome of the
A coin. Note that signalling does not occur the other way, namely, the probability of interpretation of A does not change according to outcomes measured
in relation to component B. This fact allows a more refined understanding of
the hypergraph semantics depicted in Figure 11 and how these semantics relate
# define the components of the experiment
def A = component(A1, A2)
def B = component(B1, B2)

var P1 = context(){
    # signalling: variable B1's outcome is dependent on A1's outcome
    var A1 = flip(0.6)
    var B1 = A1 ? flip(0.8) : flip(0.2)
    # declare joint distribution across the variables A1, B1
    var p = [A1, B1]
    # flip the dual coins 1000 times to form the joint distribution
    return {Infer({samples:1000}, p)}
};
var P2 = context(){
    var A1 = flip(0.4)
    var B2 = A1 ? flip(0.3) : flip(0.6)
    var p = [A1, B2]
    return {Infer({samples:1000}, p)}
};
var P3 = context(){
    var A2 = flip(0.2)
    var B1 = A2 ? flip(0.9) : flip(0.2)
    var p = [A2, B1]
    return {Infer({samples:1000}, p)}
};
var P4 = context(){
    var A2 = flip(0.4)
    var B2 = A2 ? flip(0.8) : flip(0.1)
    var p = [A2, B2]
    return {Infer({samples:1000}, p)}
};
# return a single model
return {model({design: 'signal(A->B)', P1, P2, P3, P4})}
Figure 12: Example P-program specifying a signalling “Bell” scenario
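For a context with the dependent coin, the exact joint distribution follows by conditioning on the first coin. Taking context P4 of Figure 12 (A2 = flip(0.4), B2 = A2 ? flip(0.8) : flip(0.1)), a sketch with our helper name `signalling_joint`:

```python
def signalling_joint(p_a, p_b_if_a1, p_b_if_a0):
    """Exact joint for A ~ Bernoulli(p_a) with B | A=1 ~ Bernoulli(p_b_if_a1)
    and B | A=0 ~ Bernoulli(p_b_if_a0), i.e. B's coin depends on A's outcome."""
    dist = {}
    for a in (0, 1):
        pa = p_a if a == 1 else 1 - p_a
        pb = p_b_if_a1 if a == 1 else p_b_if_a0
        for b in (0, 1):
            dist[(a, b)] = pa * (pb if b == 1 else 1 - pb)
    return dist

P4 = signalling_joint(0.4, 0.8, 0.1)
# p(1,1) = 0.4*0.8, p(1,0) = 0.4*0.2, p(0,1) = 0.6*0.1, p(0,0) = 0.6*0.9
```

This is the distribution that the sampled p-table of scope P4 approaches as the sample count grows.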
to an experimental design which now involves signalling.
In the previous section it was established that the spanning edges on the left
hand side of Figure 11 prevent signalling from component B to A. Conversely,
the spanning edges on the right hand side of that figure prevent signalling from
A to B. Therefore, the hypergraph semantics of the signalling Bell scenario
specified by the program in Figure 12 does not include these right hand side
spanning hyperedges. The resulting hypergraph semantics is depicted as the
contextuality scenario shown in Figure 13, which is the semantics corresponding
to the syntax model(design: 'signal(A->B)',P1,P2,P3,P4), where A->B
expresses the direction of the signalling between the respective components.
Definition 3.2 can then be applied to determine whether a probabilistic model
exists in relation to this contextuality scenario. If not, the signalling system
modelled by the P-program in Figure 12 is deemed to be “strongly contextual”.
[Figure 13 diagram: the four contexts P1:{A1,B1}, P2:{A1,B2}, P3:{A2,B1} and P4:{A2,B2}, each with events 00, 01, 10, 11 and edge probabilities p1-p4, q1-q4; only the spanning edges that prevent signalling from B to A are retained.]
Figure 13: Contextuality scenario corresponding to the signalling Bell scenario specified by
the P-program in Figure 12.
5. Discussion
The aim of this article is to take an algorithmic approach for the development of probabilistic models by providing a high level language that makes
it convenient for the modeller to express models of a phenomenon that may
be contextual. Borrowing from programming language theory, a key feature is
the use of syntactic scopes which permits measurement contexts to be specified
that correspond to the experimental conditions under which the phenomenon
is being examined. The use of syntactic scopes has two consequences. Firstly,
random variables local to a scope will be invisible to those local to other scopes.
Secondly, each scope returns a probability distribution as a partial model.
The first consequence relates to scopes preventing the incorrect overloading
of random variables. This article has attempted to show that the overloading
of variables in probabilistic models relates to contextuality, namely "contextual
situations are those in which seemingly the same random variable changes its
identity depending on the conditions under which it is recorded” (Dzhafarov
and Kujala, 2014a).
Regarding the second consequence, Abramsky (2015) discovered that the
problem of combining partial models into a single model has an equivalent expression in relational database theory where the problem is to determine whether
a universal relation exists for a set of relations such that these relations can be
recovered from the universal relation via projection. Contextuality occurs when
it possible to construct the universal relation. The question to be addressed,
then, is how to determine when it is possible, and when it is not.
This article proposes hypergraphs as an underlying semantic structure to
address this question. Firstly, an approach developed in relational database
theory is used to determine whether the schema of the partial models is acyclic.
If so, the hypergraph is exploited to form a join tree which can compute a
single model such that the partial models can be retrieved by appropriately
marginalizing this model.
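The acyclicity test alluded to here is commonly formulated as the GYO (Graham, Yu, Ozsoyoglu) reduction: a schema is acyclic exactly when repeatedly removing attributes unique to one edge and edges contained in other edges empties the hypergraph. A minimal sketch (function name is ours):

```python
def gyo_acyclic(edges):
    """GYO reduction: return True iff the hypergraph schema is acyclic."""
    edges = [set(e) for e in edges]
    changed = True
    while changed and edges:
        changed = False
        # (a) drop attributes occurring in only one edge
        for e in edges:
            for attr in list(e):
                if sum(attr in f for f in edges) == 1:
                    e.discard(attr)
                    changed = True
        # (b) drop empty edges and edges contained in another edge
        pruned = []
        for i, e in enumerate(edges):
            if not e or any(i != j and e <= f for j, f in enumerate(edges)):
                changed = True
            else:
                pruned.append(e)
        edges = pruned
    return not edges
```

On a chain schema such as {A,B}, {B,C}, {C,D} the reduction succeeds, whereas the Bell schema of Figure 9 ({A1,B1}, {A1,B2}, {A2,B1}, {A2,B2}) is irreducible, i.e. cyclic.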
When the schema is cyclic, hypergraphs called “contextuality scenarios” are
formed. The general picture is the following: Experimental designs are syntactically specified in addition to associated measurement contexts appropriate to
the design. Each component can be translated into a contextuality scenario.
Multipartite composition of these contextuality scenarios yields a single contextuality scenario corresponding to the experimental design. In this article, we
illustrated two Bell scenario designs based on whether the “no signalling” condition holds. If this condition does hold, then the Foulis-Randall (FR) product
can be used to define the composition. However, when signalling is permitted,
means other than the FR product need to be developed. This is an open question which is particularly relevant to psychology experiments where signalling
appears to be pervasive. In this regard, recent work on signalling in Bell scenarios may provide a useful basis for further development (Brask and Chaves,
2017). For example, Brask and Chaves (2017) study relaxations of the "no
signalling” condition where different forms of communication are allowed. The
P-program depicted in Figure 12 modelled one such condition in which outcomes can be uni-directionally communicated between the two components of
the assumed model. Investigating contextuality in the presence of signalling is
an important issue for cognitive science and related areas. Perhaps surprisingly,
it is an issue that has received scant attention to date (Dzhafarov and Kujala,
2014b). When signalling is not present, it would be interesting to investigate
how variations of multipartite composition of contextuality being investigated
in physics may inspire new experimental designs outside of physics (Sainz and
Wolfe, 2017).
Once a contextuality scenario has been constructed for the P-program, “strong
contextuality” occurs when it is not possible to construct a probabilistic model
on the underlying hypergraph. If a probabilistic model on the hypergraph is
possible, then the random variables are independent of the measurement contexts.
The motivation for demarcating the problem into acyclic vs. cyclic cases
is related to efficiency: The number of variables at the schema level is likely
to be much smaller than the number of underlying events, especially when one
considers larger scale experiments involving numerous random variables. This
is notwithstanding the fact that determining whether there is a global model
turns out to be tractable. Stated more formally, given a contextuality scenario
X, a linear program can determine whether strong contextuality holds. (See
Proposition 8.1.1 in (Acin et al., 2015).) This theoretical result echoes linear
programming solutions which have been found for contextual semantics based
on sheaf theory (Abramsky et al., 2016) and selective influence (Dzhafarov and
Kujala, 2012).
One of the advantages of the hypergraph semantics of contextuality scenarios
is that they are general enough to allow contextuality to be investigated in a
variety of experimental settings. In the next section we show how contextuality
could be investigated in an information fusion setting.
5.1. The use of P-programs for investigating contextuality in information fusion
Information fusion refers to the problem of making a judgement about an
entity, situation, or event by combining data from multiple sources which are
simultaneously presented to a human subject. For example, one source might
be an image and another might be a social media post. Fusion allows a much
better judgment to be made because it is based on multiple sources of evidence.
However, the sources may involve uncertainty, for example, the human subject
may not trust the source of a social media post, or the image may appear
manipulated. As a consequence, a decision of trust may be contextual because
a random variable T modelling trust may have different functional identities
depending on the source stimulus.
Let us now sketch how a P-program could be developed to investigate whether
trust is contextual. Firstly, imagine that empirical data is collected from human subjects in an experiment. For example, subjects could be simultaneously
presented with two visual stimuli as is shown in Figure 14. The left hand stimulus purports to be an image of a typhoon hitting the Philippines sourced from
an obscure Asian media site. The right hand stimulus is sourced from Twitter
where the language is unfamiliar (Japanese), but the graphic seems to depict
a typhoon tracking towards the Philippines. The subject must decide if they
trust whether the stimuli depict the same event. Random variables affecting
the decision of trust could be defined as follows:
Figure 14: Example information fusion scenario. Do the two stimuli pertain to the same
event?
• Variables relating to the image: I1 (e.g., “Do you trust that the image
does correspond to the situation described by the text?”), I2 (e.g., “Does
the image look fake or manipulated in any way?”)
• Variables related to the tweet: Credibility S1 (e.g., “Do you trust the
source of the tweet to be credible?”), S2 (e.g., “Do you trust that the
tweet corresponds to the situation depicted in the image?”).
These four variables allow for an experiment in which one variable of each
stimulus is measured, thus implying four measurement contexts based on the
following pairs of variables: {I1, S1}, {I1, S2}, {I2, S1} and {I2, S2}. A between
subjects design allows experimental data to be collected in each experimental
context, meaning a human subject is exposed to only one measurement context in
order to counter learning effects. The corresponding P-program would therefore
include four scopes corresponding to these measurement contexts and each scope
would return the corresponding partial model based on the data collected in that
measurement context. These four partial models correspond to the pairwise
distributions: p(I1 , S1 ), p(I1 , S2 ), p(I2 , S1 ) and p(I2 , S2 ).
As this program involves a cyclic schema, the situation is similar to that
depicted in Figure 9. Therefore, measurement contexts would be defined around
observations of individual variables I1 , I2 , S1 , S2 using a signalling Bell scenario
design. As subjects are processing both stimuli simultaneously, it raises the
possibility of signalling between the left stimulus (component A) and the right
stimulus (component B).
6. Summary and Future Directions
The aim of this article is to contribute to the foundations of a probabilistic programming language that allows exploration of contextuality in a wide range of
applications relevant to cognitive science and artificial intelligence. The core
idea is that probabilistic models are specified as a program with associated
semantics which are sensitive to contextuality. The programs feature specific
syntactic scopes to specify experimental conditions under which a phenomenon
is being examined. Random variables are declared local to a scope, and hence
are not visible to other scopes. In this way, random variables can be safely
overloaded, which is convenient for developing models, whilst the programming
semantics, not the modeller, keeps track of whether the functional identities of
the random variables are being preserved.
Hypergraphs were proposed as an underlying structure to specify contextually sensitive program semantics. Firstly, a hypergraph approach developed in
relational database theory was used to determine whether the schema of the
partial probabilistic models is acyclic. If so, the hypergraph is exploited to form
a join tree which can compute a single model such that the partial models can
be retrieved by appropriately marginalizing this model. In this case, the phenomenon is non-contextual. When the schema is cyclic, the phenomenon may or
may not be contextual. For the cyclic case a hypergraph called a “contextuality
scenario” is formed. “Strong contextuality” occurs when it is not possible to
construct a probabilistic model on the hypergraph. If it is possible, then each
such model is a candidate global model and the phenomenon is non-contextual.
Further research could be directed at refining the semantics to admit different
types of contextuality (Abramsky and Brandenburger, 2011; Acin et al., 2015),
as well as experimental designs based on different variations of signalling (Brask
and Chaves, 2017; Curchod et al., 2017).
Just as higher level programming languages, such as functional programming languages, provided a convenient means for harnessing the power of the lambda calculus, P-programs aim to advance the understanding of contextuality by
providing a convenient means for harnessing the power of contextual semantics.
As P-programs are algorithmic, future work could provide syntax to specify the
temporal flow of actions using control structures akin to those used in high level
programming languages. This feature allows measurements with some causal
structure, which is an important topic in cognitive psychology where Bayesian
models are often used.
Finally, the overarching aim of this article is to raise awareness of contextuality beyond quantum physics and to contribute formal methods to detect its
presence in the form of a convenient programming language.
Acknowledgements
Thanks to the three anonymous reviewers, Bevan Koopman and Ana Belen
Sainz for their constructive input and suggestions. This research was supported
by the Asian Office of Aerospace Research and Development (AOARD) grant:
FA2386-17-1-4016
References
Abramsky, S. (2015). Contextual semantics: From quantum mechanics to logic,
databases, constraints, and complexity. Bulletin of EATCS, 2(113).
Abramsky, S., Barbosa, R., and Mansfield, S. (2016). Quantifying contextuality
via linear programming. In Proceedings of the 13th International Conference
on Quantum Physics and Logic (QPL 2016).
Abramsky, S. and Brandenburger, A. (2011). The sheaf-theoretic structure of
non-locality and contextuality. New Journal of Physics, 13(113036).
Acacio De Barros, J. and Oas, G. (2015). Some examples of contextuality in
physics: Implications to quantum cognition. arXiv:1512.00033.
Acin, A., Fritz, T., Leverrier, A., and Sainz, A. (2015). A combinatorial approach to nonlocality and contextuality. Communications in Mathematical
Physics, 334:533–628.
Aerts, D., Gabora, L., and Sozzo, S. (2014). Concept combination, entangled
measurements, and prototype theory. Topics in Cognitive Science, 6:129–137.
Aerts, D. and Sozzo, S. (2014). Quantum entanglement in concept combinations.
International Journal of Theoretical Physics, 53:3587–3603.
Asano, M., Hashimoto, T., Khrennikov, A., Ohya, M., and Tanaka, Y. (2014).
Violation of contextual generalization of the Leggett–Garg inequality for
recognition of ambiguous figures. Physica Scripta, 2014(T163):014006.
Atmanspacher, H. and Filk, T. (2010). A proposed test of temporal nonlocality
in bistable perception. Journal of Mathematical Psychology, 54:314–321.
Brask, J. B. and Chaves, R. (2017). Bell scenarios with communication. Journal
of Physics A: Mathematical and Theoretical, 50(9):094001.
Bruza, P. and Abramsky, S. (2016). Probabilistic programs: Contextuality and
relational database theory. In Acacio De Barros, J., Coecke, B., and Pothos,
E., editors, Quantum Interaction: 10th International Conference (QI’2016),
Lecture Notes in Computer Science. Springer (In Press).
Bruza, P., Kitto, K., Ramm, B., and Sitbon, L. (2015). A probabilistic framework for analysing the compositionality of conceptual combinations. Journal
of Mathematical Psychology, 67:26–38.
Bruza, P. D. (2016). Syntax and operational semantics of a probabilistic
programming language with scopes. Journal of Mathematical Psychology
10.1016/j.jmp.2016.06.006 (In press).
Clauser, J. and Horne, M. (1974). Experimental consequences of objective local
theories. Physical Review D, 10(2):526–535.
Curchod, F., Johansson, M., Augusiak, R., Hoban, M., Wittek, P., and Acin,
A. (2017). Unbounded randomness certification using sequences of measurements. arXiv:1510.03394v2.
Dzhafarov, E. and Kujala, J. (2014a). Contextuality is about identity of random
variables. Physica Scripta, T163(014009).
Dzhafarov, E. and Kujala, J. (2014b). Embedding quantum into classical: Contextualization vs conditionalization. PLoS ONE, 9(3):e92818.
Dzhafarov, E. and Kujala, J. (2015). Probabilistic contextuality in EPR/Bohmtype systems with signaling allowed. In Dzhafarov, E., editor, Contextuality
from Quantum Physics to Psychology., chapter 12, pages 287–308. World Scientific Press.
Dzhafarov, E., Zhang, R., and Kujala, J. (2015). Is there contextuality in behavioral and social systems? Philosophical Transactions of the Royal Society
A, 374(20150099).
Dzhafarov, R. and Kujala, J. (2012). Selectivity in probabilistic causality:
Where psychology runs into quantum physics. Journal of Mathematical Psychology, 56(1):54–63.
Gabora, L. and Aerts, D. (2002). Contextualizing concepts using a mathematical generalization of the quantum formalism. Journal of Experimental and
Theoretical Artificial Intelligence, 14:327–358.
Goodman, N. D. and Stuhlmüller, A. (2014). The Design and Implementation of
Probabilistic Programming Languages. http://dippl.org. Accessed: 2017-9-14.
Goodman, N. D. and Tenenbaum, J. B. (2016). Probabilistic Models of Cognition. http://probmods.org/v2. Accessed: 2017-6-5.
Gordon, A., Henzinger, T., Nori, A., and Rajamani, S. (2014). Probabilistic
programming. In Proceedings of the on Future of Software Engineering (FOSE
2014), pages 167–181. ACM Press.
Gronchi, G. and Strambini, E. (2016). Quantum cognition and Bell’s inequality:
A model for probabilistic judgment bias. Journal of Mathematical Psychology,
78:65–75.
Gyssens, M. and Paredaens, J. (1984). A decomposition methodology for cyclic
databases. In Advances in Database Theory, volume 2, pages 85–122. Springer.
Henson, J. and Sainz, A. (2015). Macroscopic noncontextuality as a principle
for almost-quantum correlations. Physical Review A, 91:042114.
Howard, M., Wallman, J., Veitch, V., and Emerson, J. (2014). Contextuality
supplies the magic for quantum computation. Nature, 510(7505):351–355.
Kochen, S. and Specker, E. (1967). The problem of hidden variables in quantum
mechanics. Journal of Mathematics and Mechanics, 17(59).
Liu, W., Yue, K., and Li, W. (2011). Constructing the Bayesian network structure from dependencies implied in multiple relational schemas. Expert systems
with Applications, 38:7123–7134.
Pelletier, J. (1994). The principle of semantic compositionality. Topoi, 13:11–24.
Sainz, A. and Wolfe, E. (2017). Multipartite composition of contextuality scenarios. arXiv:1701.05171 [quant-ph].
Wisniewski, E. J. (1997). When concepts combine. Psychonomic Bulletin and
Review, 42(2):167–183.
Wong, S. (1997). An extended relational model for probabilistic reasoning.
Journal of Intelligent Information Systems, 9:181–202.
Wong, S. (2001). The relational structure of belief networks. Journal of Intelligent Information Systems, 16:117–148.
Wulf, W. and Shaw, M. (1973). Global variable considered harmful. SIGPLAN
Notices, 8(2):80–86.
Zhang, R. and Dzhafarov, E. S. (2017). Testing contextuality in cyclic psychophysical systems of high ranks. In Acacio De Barros, J., Coecke, B.,
and Pothos, E., editors, Quantum Interaction: 10th International Conference
(QI’2016), Lecture Notes in Computer Science. Springer (In Press).
arXiv:1708.07989v1 [] 26 Aug 2017
On Secure Communication using RF Energy
Harvesting Two-Way Untrusted Relay
Vipul Gupta
Dept. of EECS, University of California, Berkeley, CA, USA
E-mail: vipul [email protected]

Sanket S. Kalamkar
Dept. of EE, University of Notre Dame, IN, USA
E-mail: [email protected]

Adrish Banerjee
Dept. of EE, IIT Kanpur, India
E-mail: [email protected]
Abstract—We focus on a scenario where two wireless source
nodes wish to exchange confidential information via an RF energy
harvesting untrusted two-way relay. Despite its cooperation in
forwarding the information, the relay is considered untrusted out
of the concern that it might attempt to decode the confidential information that is being relayed. To discourage the eavesdropping
intention of the relay, we use a friendly jammer. Under the total
power constraint, to maximize the sum-secrecy rate, we allocate
the power among the sources and the jammer optimally and
calculate the optimal power splitting ratio to balance between
the energy harvesting and the information processing at the
relay. We further examine the effect of imperfect channel state
information at both sources on the sum-secrecy rate. Numerical
results highlight the role of the jammer in achieving the secure
communication under channel estimation errors. We have shown
that, as the channel estimation error on any of the channels
increases, the power allocated to the jammer decreases to abate
the interference caused to the confidential information reception
due to the imperfect cancellation of jammer’s signal.
Index Terms—Energy harvesting, imperfect channel state information, physical layer security, two-way relay, untrusted relay
I. INTRODUCTION
The demand for higher data rates has led to a shift towards
higher frequency bands, resulting in higher path loss. Thus
relays have become important for reliable long distance wireless transmissions. The two-way relay has received attention in
the past few years due to its ability to make communications
more spectrally efficient [1], [2]. In a two-way relay-assisted
communication, the relay receives the information from two
nodes simultaneously, which it broadcasts in the next slot.
A. Motivation
To improve the energy efficiency, harvesting energy from
the surrounding environment has become a promising approach, which can prolong the lifetime of energy-constrained
nodes and avoid frequent recharging and replacement of
batteries. In [3] and [4], authors have proposed the concept
of energy harvesting using radio-frequency (RF) signals that
carry information as a viable source of energy. Simultaneous
wireless information and power transfer has applications in
cooperative relaying. The works in [5]–[9] study throughput
maximization problems when the cooperative relays harvest
energy from incoming RF signals to forward the information,
where references [8], [9] have focused on two-way relaying.
Though the open wireless medium has facilitated cooperative relaying, it has also allowed unintended nodes to
eavesdrop the communication between two legitimate nodes.
Traditional ways to achieve secure communication rely on
upper-layer cryptographic methods that involve intensive key
distribution. Unlike this technique, the physical layer security
aims to achieve secure communication by exploiting the
random nature of the wireless channel. In this regard, Wyner
introduced the idea of secrecy rate for the wiretap channel,
where the secure communication between two nodes was
obtained without private keys [10].
For cooperative relaying with energy harvesting, [11]–
[13] investigate relay-assisted secure communication in the
presence of an external eavesdropper. The security of the
confidential message may still be a concern when the source
and the destination wish to keep the message secret from the
relay, despite its help in forwarding the information [14]–[18].
Hence the relay is trusted in forwarding the information, but
untrusted out of the concern that the relay might attempt to
decode the confidential information that is being relayed.1
In practice, such a scenario may occur in heterogeneous
networks, where all nodes do not possess the same right to
access the confidential information. For example, if two nodes
having the access to confidential information wish to exchange
that information but do not have the direct link due to severe
fading and shadowing, they might require to take the help from
an intermediate node that does not have the privilege to access
the confidential information.
B. Related Work
In [14], authors show that the cooperation by an untrusted
relay can be beneficial and can achieve higher secrecy rate than
just treating the untrusted relay as a pure eavesdropper. In [19],
authors investigate the secure communication in untrusted
two-way relay systems with the help of external friendly
jammers and show that, though it is possible to achieve a nonzero secrecy rate without the friendly jammers, the secrecy
rate at both sources can effectively be improved with the
help from an external friendly jammer. In [20], authors have
focused on improving the energy efficiency while achieving
1 In this case, the decode-and-forward relay is no longer suitable to forward
the confidential information.
the minimum secrecy rate for the untrusted two-way relay. The works in [14]–[20] assume that the relay is a conventional node and has a stable power supply. As to energy harvesting untrusted relaying, the works in [21]–[23] analyze the effect of an untrusted energy harvesting one-way relay on the secure communication between two legitimate nodes. To the best of our knowledge, for an energy harvesting two-way untrusted relay, the problem of achieving secure communication has not yet been studied in the literature.

C. Contributions

The contributions and main results of this paper are as follows:

• First, assuming perfect channel state information (CSI) at the source nodes, we extend the notion of secure communication via an untrusted relay to the two-way wireless-powered relay, as shown in Fig. 1. To discourage the eavesdropping intentions of the relay, a friendly jammer sends a jamming signal during the relay's reception of signals from the source nodes.

• To harvest energy, the relay uses a part of the received RF signals, which consist of the two sources' transmissions and the jamming signal. Hence we utilize the jamming signal effectively as a source of extra energy in addition to its original purpose of degrading the relay's eavesdropping channel.

• Under the total power constraint, we exploit the structure of the original optimization problem and make use of the signomial geometric programming technique [24] to jointly find the optimal power splitting ratio for energy harvesting and the optimal power allocation among the sources and the jammer that maximize the sum-secrecy rate for the two source nodes.

• Finally, with imperfect CSI at the source nodes, we study the joint effects of the energy harvesting nature of an untrusted relay and channel estimation errors on the sum-secrecy rate and the power allocated to the jammer. We particularly focus on the role of the jammer in achieving secure communication, where we show that the power allocated to the jammer decreases as the estimation error on any of the channels increases, in order to subside the detrimental effects of the imperfect cancellation of the jamming signal at the source nodes.

[Fig. 1. Secure communication via an untrusted energy harvesting two-way relay. hi with i ∈ {1, 2, J} denotes a channel coefficient.]

II. SECURE COMMUNICATION WITH PERFECT CSI

A. System Model
Fig. 1 shows the communication protocol between two
legitimate source nodes S1 and S2 —lacking the direct link
between them—via an untrusted two-way relay R. All nodes
are half-duplex and have a single antenna [19]. To discourage
eavesdropping by the relay, a friendly jammer J sends the
jamming signal during relay’s reception of sources’ signals.
The communication of a secret message between S1 and S2
happens over two slots of equal duration T /2. In the first
slot, the nodes S1 and S2 jointly send their information to the
relay with powers P1 and P2 , respectively, and the jammer
J sends the jamming signal with power PJ . The powers
P1 , P2 , and PJ are restricted by the power budget P such
that P1 + P2 + PJ ≤ P . This constraint may arise, for
instance, when the sources and the jammer belong to the same
network, and the network has a limited power budget to cater to the transmission requirements of the sources and the jammer. The
relay uses a part of the received power to harvest energy. In the
second slot, using the harvested energy, the relay broadcasts
the received signal in an amplify-and-forward manner.
Let h1 , h2 , and hJ denote the channel coefficients of the
reciprocal channels from the relay to S1 , S2 , and jammer
J, respectively. In this section, we assume that both sources
have the perfect CSI for all channels, which can be obtained
from the classical channel training, estimation, and feedback
from the relay. But if there are errors in the estimation and/or
feedback, the sources will have imperfect CSI, which is the
focus of Section III. Hence the relay is basically trusted when
it comes to providing the services like feeding CSI back to
transmitters and forwarding the information but untrusted in
the sense that it is not supposed to decode the confidential
information that is being relayed [20]. Both sources have the
perfect knowledge of the jamming signal [19].2
B. RF Energy Harvesting at Relay
The relay is an energy-starved node. It harvests energy from
incoming RF signals which include information signals from
nodes S1 and S2 and the jamming signal from the jammer. To
harvest energy from received RF signals, the relay uses power
splitting (PS) policy [4]. In PS policy, the relay uses a fraction
β of the total received power for energy harvesting. Under PS
policy, the energy harvested by the relay is³

EH = βη (P1|h1|^2 + P2|h2|^2 + PJ|hJ|^2) (T/2),   (1)

where η is the energy conversion efficiency factor with 0 < η < 1. The transmit power of the relay in the second slot is

PH = EH / (T/2) = βη (P1|h1|^2 + P2|h2|^2 + PJ|hJ|^2).   (2)
2 Jammer can use pseudo-random codes as the jamming signals that are
known to both sources beforehand but not to the untrusted relay.
3 For the exposition, we assume that the incident power on the energy
harvesting circuitry of the relay is sufficient to activate it.
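As a quick numerical check of (1) and (2), the PS-policy energy harvesting step can be sketched as follows. This is our illustration; all channel gains, powers, and parameter values below are placeholders, not values from the paper.

```python
# Illustration of eqs. (1)-(2): power-splitting (PS) energy harvesting at the relay.
# All numeric values are illustrative placeholders.

def harvested_energy(beta, eta, P, gains, T):
    """E_H = beta * eta * (P1|h1|^2 + P2|h2|^2 + PJ|hJ|^2) * (T/2), eq. (1).

    beta  : power splitting ratio (fraction of received power used for harvesting)
    eta   : energy conversion efficiency, 0 < eta < 1
    P     : [P1, P2, PJ] transmit powers of S1, S2, and the jammer
    gains : [|h1|^2, |h2|^2, |hJ|^2] channel power gains
    """
    received = sum(p * g for p, g in zip(P, gains))
    return beta * eta * received * (T / 2)

def relay_transmit_power(beta, eta, P, gains, T):
    """P_H = E_H / (T/2), eq. (2)."""
    return harvested_energy(beta, eta, P, gains, T) / (T / 2)

# Example with placeholder values
beta, eta, T = 0.3, 0.5, 1.0
P = [1.0, 2.0, 0.5]          # P1, P2, PJ
gains = [0.66, 2.92, 1.33]   # |h1|^2, |h2|^2, |hJ|^2
E_H = harvested_energy(beta, eta, P, gains, T)
P_H = relay_transmit_power(beta, eta, P, gains, T)
```

Note that PH does not depend on T: the harvested energy is spent over a slot of the same duration T/2.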
C. Information Processing and Relaying Protocol
In the first slot, the relay receives the signal

yR = √(1 − β) (√P1 h1 x1 + √P2 h2 x2 + √PJ hJ xJ) + nR,   (3)
where x1 and x2 are the messages of S1 and S2, respectively, with E[|x1|^2] = E[|x2|^2] = 1. Also xJ is the artificial noise by the jammer with E[|xJ|^2] = 1, and nR is the
additive white Gaussian noise (AWGN) at relay with mean
zero and variance N0 . Using the received signal yR , the relay
may attempt to decode the confidential messages x1 and x2 .
To shield the confidential messages x1 and x2 from relay’s
eavesdropping, we assume that the physical layer security
coding like stochastic encoding and nested code structure can
be used (see [20] and [25]). The relay can decode one of the
sources’ confidential messages, i.e., either x1 or x2 , if its rate
is such that it can be decoded by considering other source’s
message as noise [26]. In this case, at relay, the signal-to-noise
ratio (SNR) corresponding to x1 , i.e., the message intended for
S2 , is given by
SNR_R2 = β̃ P1 |h1|^2 / ( β̃ P2 |h2|^2 + β̃ PJ |hJ|^2 + N0 ),   (4)

where β̃ = 1 − β. Accordingly the achievable throughput of the S1 − R link is C2R = (1/2) log(1 + SNR_R2). In (4), the term β̃ P2 |h2|^2, corresponding to S2's message for S1, indirectly serves as an artificial noise for the relay in addition to the signal β̃ PJ |hJ|^2 from the jammer. Similarly, the SNR
corresponding to x2 , i.e., the message intended for S1 , is given
by
SNR_R1 = β̃ P2 |h2|^2 / ( β̃ P1 |h1|^2 + β̃ PJ |hJ|^2 + N0 ),   (5)

where β̃ P1 |h1|^2 serves as an artificial noise for the relay. Thus the achievable throughput of the S2 − R link is C1R = (1/2) log(1 + SNR_R1). Let γi = Pi |hi|^2 / N0, where i ∈ {1, 2, J}. It follows that

SNR_R2 = β̃γ1 / (β̃γ2 + β̃γJ + 1),   SNR_R1 = β̃γ2 / (β̃γ1 + β̃γJ + 1).   (6)
The relay amplifies the received signal yR given by (3) by a factor α based on its harvested power PH. Accordingly,

α = √[ PH / ( β̃P1|h1|^2 + β̃P2|h2|^2 + β̃PJ|hJ|^2 + N0 ) ]
  = √[ βη(γ1 + γ2 + γJ) / ( β̃γ1 + β̃γ2 + β̃γJ + 1 ) ].   (7)
The received signal at S2 in the second slot is given by
y2 = h2 (αyR ) + n2 ,
(8)
where n2 is AWGN with power N0 . We assume that S1 and
S2 know xJ beforehand. Hence after cancelling the terms that
are known to S2 , i.e., the terms corresponding to x2 and xJ ,
the resultant received signal at S2 is

y2 = h2 α √(β̃P1) h1 x1 + h2 α nR + n2,   (9)

where the first term is the desired signal and the remaining terms constitute the noise.
The perfect CSI allows S2 to cancel unwanted components of
the signal. Substituting α from (7) in (9), we can express the
SNR at node S2 as
SNR_S2 = γ1 |h2|^2 β̃ βη (γ1 + γ2 + γJ) / [ (|h2|^2 βη + β̃)(γ1 + γ2 + γJ) + 1 ],   (10)
and the corresponding achievable throughput of link R − S2
is C2S = (1/2) log(1 + SNRS2 ). Similarly the received signal
at S1 is
y1 = h1 α √(β̃P2) h2 x2 + h1 α nR + n1,   (11)

where the first term is the desired signal and the remaining terms constitute the noise. The SNR at node S1 is

SNR_S1 = γ2 |h1|^2 β̃ βη (γ1 + γ2 + γJ) / [ (|h1|^2 βη + β̃)(γ1 + γ2 + γJ) + 1 ],   (12)
and the corresponding achievable throughput of link R − S1
is C1S = (1/2) log(1 + SNRS1 ).
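The SNR expressions (6), (10), and (12) can be collected in a small numerical sketch (ours, with placeholder numbers; recall γi = Pi|hi|^2/N0):

```python
# Sketch of the SNR expressions (6), (10), and (12) in normalized form,
# with gamma_i = P_i |h_i|^2 / N0. All numbers are illustrative placeholders.

def relay_snrs(beta, g1, g2, gJ):
    """Eq. (6): SNRs at the untrusted relay for messages x1 and x2."""
    bt = 1.0 - beta                                   # beta_tilde = 1 - beta
    snr_r2 = bt * g1 / (bt * g2 + bt * gJ + 1.0)      # message x1 (intended for S2)
    snr_r1 = bt * g2 / (bt * g1 + bt * gJ + 1.0)      # message x2 (intended for S1)
    return snr_r1, snr_r2

def source_snrs(beta, eta, h1_sq, h2_sq, g1, g2, gJ):
    """Eqs. (10) and (12): end-to-end SNRs at S2 and S1 with perfect CSI."""
    bt = 1.0 - beta
    G = g1 + g2 + gJ
    snr_s2 = g1 * h2_sq * bt * beta * eta * G / ((h2_sq * beta * eta + bt) * G + 1.0)
    snr_s1 = g2 * h1_sq * bt * beta * eta * G / ((h1_sq * beta * eta + bt) * G + 1.0)
    return snr_s1, snr_s2

# Placeholder evaluation
beta, eta = 0.3, 0.5
h1_sq, h2_sq = 0.66, 2.92
g1, g2, gJ = 2.0, 3.0, 1.0
snr_r1, snr_r2 = relay_snrs(beta, g1, g2, gJ)
snr_s1, snr_s2 = source_snrs(beta, eta, h1_sq, h2_sq, g1, g2, gJ)
```

These helpers evaluate the closed-form expressions only; the optimization over β and the powers comes later in Section II-D.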
D. Secrecy Rate and Problem Formulation
For the communication via two-way untrusted relay, the
sum-secrecy rate is given by
CS = [C1S − C1R]^+ + [C2S − C2R]^+
   = [ (1/2) log2(1 + SNR_S1) − (1/2) log2(1 + SNR_R1) ]^+
   + [ (1/2) log2(1 + SNR_S2) − (1/2) log2(1 + SNR_R2) ]^+,   (13)

where [x]^+ ≜ max(x, 0). Given the total power budget P, we
have a constraint on transmit powers, i.e., P1 + P2 + PJ ≤
P . To maximize the sum-secrecy rate, we optimally allocate
powers P1 , P2 , and PJ to S1 , S2 , and J, respectively, and
find the optimal power splitting ratio β. We can formulate the
optimization problem as
maximize_{β, β̃, P1, P2, PJ}   CS

subject to   P1 + P2 + PJ ≤ P,
             β + β̃ = 1,
             β, β̃ ≤ 1,
             β, β̃, P1, P2, PJ ≥ 0.   (14)
Based on the non-negativeness of two terms in the secrecy rate
expression given by (13), we need to investigate four cases.
We calculate the sum-secrecy rate in all four cases, with the
best case being the one that gives the maximum sum-secrecy
rate.
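The per-case bookkeeping around (13) amounts to a small helper (our illustration, not the paper's code): compute both differences, clip each at zero, and keep the case giving the largest sum.

```python
# Eq. (13): sum-secrecy rate C_S = [C1S - C1R]^+ + [C2S - C2R]^+,
# with C = (1/2) log2(1 + SNR) and [x]^+ = max(x, 0).
from math import log2

def sum_secrecy_rate(snr_s1, snr_r1, snr_s2, snr_r2):
    pos = lambda x: max(x, 0.0)
    c1 = 0.5 * log2(1.0 + snr_s1) - 0.5 * log2(1.0 + snr_r1)  # C1S - C1R
    c2 = 0.5 * log2(1.0 + snr_s2) - 0.5 * log2(1.0 + snr_r2)  # C2S - C2R
    return pos(c1) + pos(c2)
```

When the relay's SNR exceeds the legitimate node's SNR on both links, the rate is zero, which corresponds to Case IV below.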
Case I: C1S − C1R ≥ 0 and C2S − C2R ≥ 0

Substituting γi = Pi |hi|^2 / N0 and simplifying the problem in (14), it follows that

minimize_{β, β̃, P1, P2, PJ}   (1/2) log2 [ f(β, β̃, γ1, γ2, γJ) / g(β, β̃, γ1, γ2, γJ) ]   (15a)

subject to   γ1 N0 / |h1|^2 + γ2 N0 / |h2|^2 + γJ N0 / |hJ|^2 ≤ P,   (15b)
             β + β̃ = 1,   (15c)
             β, β̃ ≤ 1,   (15d)
             β, β̃, P1, P2, PJ ≥ 0,   (15e)

where

f(β, β̃, γ1, γ2, γJ) = [β̃(γ1 + γ2 + γJ) + 1]^2 × [1 + (γ1 + γ2 + γJ)(|h2|^2 βη + β̃)] × [1 + (γ1 + γ2 + γJ)(|h1|^2 βη + β̃)]

and

g(β, β̃, γ1, γ2, γJ) = (β̃(γ2 + γJ) + 1)(β̃(γ1 + γJ) + 1) × [(γ1 + γ2 + γJ)(β̃ + |h2|^2 βη(β̃γ1 + 1)) + 1] × [(γ1 + γ2 + γJ)(β̃ + |h1|^2 βη(β̃γ2 + 1)) + 1].
We can drop the logarithm from the objective (15a) as it
retains the monotonicity and yields the same optimal solution.
We introduce an auxiliary variable t and do the following
transformation.
minimize_{t, β, β̃, P1, P2, PJ}   f(β, β̃, γ1, γ2, γJ) / t   (16a)

subject to   t ≤ g(β, β̃, γ1, γ2, γJ),   (16b)
             γ1 N0 / |h1|^2 + γ2 N0 / |h2|^2 + γJ N0 / |hJ|^2 ≤ P,   (16c)
             β + β̃ ≤ 1,   (16d)
             β, β̃ ≤ 1,   (16e)
             t, β, β̃, P1, P2, PJ ≥ 0.   (16f)
The above transformation is valid for t > 0 because, to minimize the objective f(β, β̃, γ1, γ2, γJ)/t, we need to maximize t, and this happens when t = g(β, β̃, γ1, γ2, γJ). Hence under the optimal condition, we have t = g(β, β̃, γ1, γ2, γJ), and the problems (14) and (16) are equivalent. Further we can replace the constraint (15c) by

β + β̃ ≤ 1.   (17)

The substitution of (15c) by (17) in problem (16) yields an equivalent problem because β + β̃ = 1 under the optimal condition, i.e., if β + β̃ < 1, we can always increase the value of β so that β + β̃ = 1. The increase in β leads to more harvested energy, which in turn increases the transmit power of the relay and the sum-secrecy rate.
The objective (16a) is a posynomial function and (16c),
(16d), and (16e) are posynomial constraints [24]. When the
objective and constraints are of posynomial form, the problem
can be transformed into a Geometric Programming (GP) form
and converted into a convex problem [24]. Also, as the domain
of GP problem includes only real positive variables, the
constraint (16f) is implicit. But the constraint (16b) is not
posynomial as it contains a posynomial function g which is
bounded from below and GP cannot handle such constraints.
We can solve this problem if the right-hand side of (16b), i.e., g(β, β̃, γ1, γ2, γJ), can be approximated by a monomial. Then the problem (16) reduces to a class of problems that can be solved by Signomial Geometric Programming (SGP) [24].
To find a monomial approximation of the form ĝ(x) = c ∏_{i=1}^{5} x_i^{a_i} of a function g(x), where x = [β, β̃, γ1, γ2, γJ]^T is the vector containing all variables, it suffices to find an affine approximation of h(y) = log g(y), with the ith element of y given by yi = log xi [24]. Let the affine approximation of h(y) be ĥ(y) = log ĝ(x) = log c + a^T y. Using Taylor's approximation of h(y) around a point y0 in the feasible region and equating it with ĥ(y), it follows that

h(y) ≈ h(y0) + ∇h(y0)^T (y − y0) = log c + a^T y,   (18)

for y ≈ y0. From (18), we have a = ∇h(y0), i.e.,

a_i = (x_i / g(x)) ∂g/∂x_i |_{x = x0},

and

c = exp( h(y0) − ∇h(y0)^T y0 ) = g(x0) ∏_{i=1}^{5} x_{0,i}^{−a_i},
where x0,i is the ith element of x0. We substitute the monomial approximation ĝ(x) of g(x) in (16b) and use the GP technique to solve (16). The aforementioned affine approximation is,
however, imprecise if the optimal solution lies far from the
initial guess x0 as the Taylor’s approximation would be
inaccurate. To overcome this problem, we take an iterative
approach, where, if the current guess is xk , we obtain the
Taylor’s approximation about xk and solve a GP again. Let the
current solution of GP be xk+1 . In the next iteration, we take
Taylor’s approximation around xk+1 and solve a GP again. We
keep iterating in this fashion until the convergence. Since the
problem (16) is close to GP (as we have only one constraint
in (16) that is not a posynomial), the aforementioned iterative
approach works well in our case and yields the optimal
solution [24]. If the obtained optimal solution contradicts with
our initial assumption that C1S − C1R ≥ 0 and C2S − C2R ≥ 0,
we move to other three cases discussed below.
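The monomial condensation step of (18) can be sketched numerically as follows. This is our illustration with an arbitrary example posynomial (not the paper's g); a GP solver would then be called on the condensed problem.

```python
# Monomial approximation g_hat(x) = c * prod_i x_i^{a_i} of a posynomial g
# around a point x0, via a first-order Taylor expansion of log g in log
# coordinates (eq. (18)). The gradient is taken numerically; g below is an
# arbitrary example posynomial chosen for illustration.
import math

def monomial_approx(g, x0, h=1e-6):
    g0 = g(x0)
    a = []
    for i in range(len(x0)):
        xp = list(x0)
        xp[i] += h
        a.append(x0[i] * (g(xp) - g0) / (h * g0))   # a_i = (x_i/g) dg/dx_i at x0
    c = g0 * math.prod(x0[i] ** (-a[i]) for i in range(len(x0)))
    return c, a

def g(x):                          # example posynomial
    return x[0] * x[1] + 2.0 * x[0] ** 2 + x[1]

x0 = [1.5, 0.7]
c, a = monomial_approx(g, x0)
g_hat = lambda x: c * math.prod(x[i] ** a[i] for i in range(len(x)))
# g_hat agrees with g at x0 exactly and to first order in its neighbourhood
```

Because the expansion is taken in log coordinates, ĝ matches g exactly at the expansion point, which is why the iterative re-expansion described above converges well when the initial guess is not too far from the optimum.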
Case II: C1S − C1R ≥ 0 and C2S − C2R < 0
In this case, the secrecy rate is given by CS = (C1S − C1R)^+, and we need to solve the problem (16) with the following expressions for f(β, β̃, γ1, γ2, γJ) and g(β, β̃, γ1, γ2, γJ):

f(β, β̃, γ1, γ2, γJ) = [β̃(γ1 + γ2 + γJ) + 1] × [1 + (γ1 + γ2 + γJ)(|h2|^2 βη + β̃)],

g(β, β̃, γ1, γ2, γJ) = (β̃(γ2 + γJ) + 1) × [1 + (γ1 + γ2 + γJ)(β̃ + |h2|^2 βη(β̃γ1 + 1))].
We again check if the assumption C1S − C1R ≥ 0 and C2S −
C2R < 0 is valid; if not, we move to the remaining two cases.
Case III: C1S − C1R < 0 and C2S − C2R ≥ 0

This case is similar to Case II, and only the subscripts 1 and 2 need to be interchanged in the expressions of f(β, β̃, γ1, γ2, γJ) and g(β, β̃, γ1, γ2, γJ). If the solution obtained does not satisfy the initial assumptions, we move to Case IV.

Case IV: C1S − C1R < 0 and C2S − C2R < 0

In this case, the sum-secrecy rate is zero.

Algorithm 1 summarizes the aforementioned process of obtaining the optimal sum-secrecy rate and power allocation by solving (16).
Algorithm 1 Solution of (16)

Input: Total power P; channel coefficients h1, h2, and hJ; energy conversion efficiency η; noise power N0; tolerance δ
Output: Power splitting ratio β; powers P1, P2, and PJ; sum-secrecy rate CS
Initialize: 0 ≤ P1,k, P2,k, PJ,k ≤ P and 0 < βk < 1 (random initialization) with k = 0
1) While |CS,k − CS,k−1| > δ CS,k−1
2)   Find the monomial expression ĝ for g using Taylor's approximation around xk = [βk, γ1,k, γ2,k, γJ,k]
3)   k = k + 1
4)   Solve (16) with the monomial approximation ĝ to find [βk, γ1,k, γ2,k, γJ,k]
5)   Assign C1S, C1R, C2S, and C2R using the above solution
6)   If C1S − C1R ≥ 0 and C2S − C2R ≥ 0, go to step 1; else proceed to Case II
7)   Check for Cases II, III, and IV in a similar fashion
8)   Find the optimal [βk, γ1,k, γ2,k, γJ,k] for the current iteration after going through all cases
9)   Assign CS,k = (1/2) log ( g(βk, γ1,k, γ2,k, γJ,k) / f(βk, γ1,k, γ2,k, γJ,k) )
10) End While
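The outer loop of Algorithm 1 has the following shape in code. This is a control-flow skeleton only: `solve_gp` stands in for the actual GP solve after monomial condensation and is stubbed here with a toy contraction so that the termination logic can be exercised.

```python
# Skeleton of the outer loop of Algorithm 1 (successive GP approximations).
# `solve_gp` is a placeholder for a real GP solver step; `secrecy_rate`
# evaluates C_S at an iterate.

def algorithm1(solve_gp, secrecy_rate, x_init, delta=1e-3, max_iter=100):
    x, cs, cs_prev = x_init, 0.0, None
    for _ in range(max_iter):
        x = solve_gp(x)            # step 4: GP solve around x_k
        cs = secrecy_rate(x)       # step 9: evaluate C_S,k
        if cs_prev is not None and abs(cs - cs_prev) <= delta * abs(cs_prev):
            break                  # step 1: |C_S,k - C_S,k-1| <= delta * C_S,k-1
        cs_prev = cs
    return x, cs

# Exercise the control flow with a toy contraction whose fixed point is x = 1:
x_opt, cs_opt = algorithm1(lambda v: [0.5 * (vi + 1.0) for vi in v], sum, [0.0])
```

The per-case checks (steps 6 and 7) would wrap this loop, switching the f and g expressions as described for Cases II and III.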
III. SECURE COMMUNICATION WITH IMPERFECT CSI

We now investigate the effect of imperfect CSI on the sum-secrecy rate. We model the imperfection in channel knowledge as in [27], where the channel coefficients are given as

hi = ĥi + ∆hi,   (19)

for i ∈ {1, 2, J}. Here ĥi is the estimated channel coefficient and ∆hi is the error in estimation, which is bounded as |∆hi| ≤ ǫi; ǫi is the maximum possible error in estimating hi with respect to S1 and S2. We consider the worst case scenario where the relay knows all channel coefficients perfectly, while the legitimate nodes S1 and S2 concede estimation errors according to (19). In this case, the SNRs at the relay corresponding to the messages x1 and x2 remain the same as in (6). The signal received at S2 in the second slot is
y2 = h2 α ( √(β̃P1) h1 x1 + √(β̃P2) h2 x2 + √(β̃PJ) hJ xJ + nR ) + n2
   = (ĥ2 + ∆h2) α ( √(β̃P1)(ĥ1 + ∆h1) x1 + √(β̃P2)(ĥ2 + ∆h2) x2 + √(β̃PJ)(ĥJ + ∆hJ) xJ + nR ) + n2,   (20)
where ĥ1 , ĥ2 , and ĥJ are the channel coefficients estimated by
node S2. Using these imperfect channel estimates, the node S2 tries to cancel the self-interference and the known jammer's signal in the following manner:

y2 = (ĥ2 + ∆h2) α ( √(β̃P1)(ĥ1 + ∆h1) x1 + √(β̃P2)(ĥ2 + ∆h2) x2 + √(β̃PJ)(ĥJ + ∆hJ) xJ + nR )
   + n2 − ĥ2 α ( √(β̃P2) ĥ2 x2 + √(β̃PJ) ĥJ xJ ),   (21)

where the subtracted term corresponds to the imperfect interference cancellation.
It follows that

y2 = ĥ2 α √(β̃P1) ĥ1 x1 + (ĥ2 + ∆h2) α nR + n2
   + ∆h2 α ( √(β̃P1) ĥ1 x1 + √(β̃P2) ĥ2 x2 + √(β̃PJ) ĥJ xJ )
   + ĥ2 α ( √(β̃P1) ∆h1 x1 + √(β̃P2) ∆h2 x2 + √(β̃PJ) ∆hJ xJ ).
As (21) shows, due to the imperfect CSI, S2 cannot cancel the
jamming signal and the self-interference completely. Here we
ignore the smaller terms of the form ∆hi ∆hj as they will be
negligible compared to other terms. The received SNR at S2
is thus given by (22) below. Using the
triangle inequality, it follows that
|ĥi| − |∆hi| ≤ |hi| ≤ |ĥi| + |∆hi|,   ∀ i ∈ {1, 2, J}.

The worst case secrecy rate will occur when

|hi| = |ĥi| + |∆hi| = |ĥi| + ǫi,   ∀ i ∈ {1, 2, J},
and this will happen when the phase of hi and ∆hi are the
same and ∆hi concedes maximum error, i.e., |∆hi | = ǫi . Then
the worst case SNR at node S2 (denoted by SNR^wc_S2) is given by (23) below. Similarly the worst case SNR at S1 (denoted by SNR^wc_S1) is given by (24) below. In (24), we again denote the estimated channels by ĥ1, ĥ2, and ĥJ for brevity, but these values may be different from those estimated by S2.
Using these worst case SNRs, we maximize the worst case
sum-secrecy rate and solve for the corresponding optimal
power allocation and β using SGP as done for problem in
(16), i.e., for the case of perfect CSI.
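To make the worst-case analysis concrete, the structure of (23) can be sketched as follows. This is our illustrative code with placeholder numbers; (24) follows by swapping the roles of S1 and S2.

```python
# Worst-case SNR at S2 under bounded CSI errors, following the structure of
# eq. (23). All numeric values are illustrative placeholders.
import math

def worst_case_snr_s2(alpha_sq, beta, P, h_hat_sq, eps, N0):
    """P = [P1, P2, PJ]; h_hat_sq = estimated gains [|h1|^2, |h2|^2, |hJ|^2];
    eps = [e1, e2, eJ] = maximum channel estimation errors."""
    bt = 1.0 - beta
    num = h_hat_sq[1] * alpha_sq * bt * P[0] * h_hat_sq[0]
    recv = sum(p * g for p, g in zip(P, h_hat_sq))   # P1|h1|^2 + P2|h2|^2 + PJ|hJ|^2
    err = sum(p * e * e for p, e in zip(P, eps))     # P1 e1^2 + P2 e2^2 + PJ eJ^2
    den = (N0 * ((math.sqrt(h_hat_sq[1]) + eps[1]) ** 2 * alpha_sq + 1.0)
           + alpha_sq * bt * eps[1] ** 2 * recv
           + alpha_sq * bt * h_hat_sq[1] * err)
    return num / den

# With eps = 0 this collapses to the perfect-CSI form
# |h2|^2 a^2 bt P1 |h1|^2 / (N0 (|h2|^2 a^2 + 1)):
snr0 = worst_case_snr_s2(0.8, 0.3, [1.0, 2.0, 0.5], [0.66, 2.92, 1.33],
                         [0.0, 0.0, 0.0], 1.0)
```

Any positive estimation error enlarges the denominator, so the worst-case SNR is monotonically decreasing in each ǫi, which matches the qualitative behaviour discussed in Section IV.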
SNR_S2 = |ĥ2|^2 α^2 β̃ P1 |ĥ1|^2 / [ N0 (|ĥ2 + ∆h2|^2 α^2 + 1) + α^2 β̃ ( |∆h2|^2 (P1|ĥ1|^2 + P2|ĥ2|^2 + PJ|ĥJ|^2) + |ĥ2|^2 (P1|∆h1|^2 + P2|∆h2|^2 + PJ|∆hJ|^2) ) ]   (22)

SNR^wc_S2 = |ĥ2|^2 α^2 β̃ P1 |ĥ1|^2 / [ N0 ((|ĥ2| + ǫ2)^2 α^2 + 1) + α^2 β̃ ǫ2^2 (P1|ĥ1|^2 + P2|ĥ2|^2 + PJ|ĥJ|^2) + α^2 β̃ |ĥ2|^2 (P1 ǫ1^2 + P2 ǫ2^2 + PJ ǫJ^2) ]   (23)

SNR^wc_S1 = |ĥ1|^2 α^2 β̃ P2 |ĥ2|^2 / [ N0 ((|ĥ1| + ǫ1)^2 α^2 + 1) + α^2 β̃ ǫ1^2 (P1|ĥ1|^2 + P2|ĥ2|^2 + PJ|ĥJ|^2) + α^2 β̃ |ĥ1|^2 (P1 ǫ1^2 + P2 ǫ2^2 + PJ ǫJ^2) ]   (24)

IV. NUMERICAL RESULTS AND DISCUSSIONS

A. Effect of Power Splitting Ratio β

[Fig. 2. Effect of β on harvested energy at relay and the sum-secrecy rate (curves: optimal β, β = 0.15, β = 0.85).]

[Fig. 4. Effect of ǫ on the sum-secrecy rate and the power PJ allocated to the jammer (Case I: imperfect h1, h2, and hJ; Case II: imperfect h1 and h2; Case III: imperfect hJ); |h1|^2 = 1.2479, |h2|^2 = 1.4484, |hJ|^2 = 6.0162, P = 30 dB.]

Fig. 2 shows the sum-secrecy rate (left y-axis) and the harvested energy (right y-axis) versus the total power budget
for a random channel realization: |h1|^2 = 0.6647, |h2|^2 = 2.9152, and |hJ|^2 = 1.3289. We set η = 0.5 and N0 = 1. A higher β (= 0.85) than the optimal β (the solution of the problem (16)) results in more harvested energy, which increases the relay's transmit power, but the reduced strength of the received information signal at the relay (and thus at nodes S1 and S2) due to the higher β dominates the secrecy performance of the system. A lower β (= 0.15) ensures more power for the information processing at the relay, but this reduces the harvested energy (reducing its transmit power to forward the information) and increases the chances of the relay eavesdropping on the secret message. As a result, the sum-secrecy rate reduces.

[Fig. 3. Effect of power allocation on sum-secrecy rate (optimal vs. equal power allocation; ǫ = 0, 0.05, 0.1).]

B. Effect of Power Allocation
For different values of maximum channel estimation errors,
Fig. 3 compares the sum-secrecy rate when the total power
is allocated optimally (obtained by solving the problem (16))
and equally among nodes S1 , S2 , and jammer J for the same
system parameters used to obtain Fig. 2. For exposition, we
consider ǫ1 = ǫ2 = ǫJ = ǫ in numerical results. The case
ǫ = 0 corresponds to the perfect CSI at S1 and S2 . Since
the equal power allocation does not use channel conditions
optimally, it suffers a loss in sum-secrecy rate as expected.
Due to the error in channel estimation, the nodes S1 and S2
cannot cancel the self-interference (information signals sent
to the relay in the first slot) and the jamming signal perfectly
from the received signal in the second slot. This reduces the
SNR at legitimate nodes S1 and S2 , which further reduces the
sum-secrecy rate.
C. Effect of Imperfect CSI
Fig. 4 shows three cases based on the knowledge of channel
conditions at S1 and S2 .4 The sum-secrecy rate in Case II
is slightly better than that in Case I, because in Case II, a
higher fraction of the total power is allocated to the jammer
(see the right y-axis of Fig. 4) to use the perfect channel
knowledge about hJ . But this has a side-effect: the imperfect
CSI about h1 and h2 leads to higher interference from the
4 These three cases in Fig. 4 should not be confused with the four cases considered in Section II-D.
jammer to S1 and S2 . As a result, Case II does not gain much
compared to Case I in terms of the sum-secrecy rate. Under
Case III, the sum-secrecy rate is the highest, because S1 and
S2 can cancel the jamming signal more effectively as they
have imperfect CSI about only one channel. When ǫ is small
enough (less than 0.06 in this case), the power allocated to
the jammer in Case III is higher than that in Cases I and II.
This is because when ǫ is small, if we allocate the power to
S1 and S2 instead of jammer, it increases relay’s chances of
eavesdropping the information due to the increased received
power, which dominates the detrimental effect incurred due to
imperfect cancellation of jammer’s signal at S1 and S2 . But if
ǫ goes beyond a threshold, the loss in the secrecy rate due to
the imperfect cancellation of jammer’s interference dominates,
and the system is better off by allocating more power to S1 and
S2 and using each other’s signals to confuse the relay. Hence
the power allocated to jammer in Case III is smaller than that
in Cases I and II at higher ǫ. In Case III, the redistribution of
the power from jammer to S1 and S2 with the increase in ǫ
keeps the sum-secrecy rate almost the same.
V. CONCLUDING REMARKS AND FUTURE DIRECTIONS
In a two-way untrusted relay scenario, though the signal
from one source can indirectly serve as an artificial noise
to the relay while processing other source’s signal, the nonzero power allocated to the jammer implies that the assistance
from an external jammer can still be useful to achieve a better
secrecy rate. But the knowledge of two sources about channel
conditions decides the contribution of the jammer in achieving
the secure communication. For example, as the channel estimation error on any of the channels increases, the power allocated
to the jammer decreases to subside the interference caused at
the sources due to the imperfect cancellation of the jamming
signal. The optimal power splitting factor balances between
the energy harvesting and the information processing at relay.
Hence the joint allocation of the total power and the selection
of the power splitting factor are necessary to maximize the
sum-secrecy rate.
Future directions: There are several interesting future directions that are worth investigating. First the proposed model
can be extended to general setups such as multiple antennas at
nodes and multiple relays. Another interesting future direction
is to investigate the effect of the placement of the jammer
and the relay, which also incorporates the effect of path loss.
Third we have considered the bounded uncertainty model to
characterize the imperfect CSI. Extension to other models of
imperfect CSI such as the model where only channel statistics
are known is also possible.
R EFERENCES
[1] B. Rankov and A. Wittneben, “Achievable rate regions for the two-way
relay channel,” in Proc. 2006 IEEE ISIT, pp. 1668–1672.
[2] M. Chen and A. Yener, “Multiuser two-way relaying: detection and
interference management strategies,” IEEE Trans. Wireless Commun.,
vol. 8, no. 8, pp. 4296–4305, Aug. 2009.
[3] L. Varshney, “Transporting information and energy simultaneously,” in
Proc. 2008 IEEE ISIT, pp. 1612–1616.
[4] X. Zhou, R. Zhang, and C. K. Ho, “Wireless information and power
transfer: Architecture design and rate-energy tradeoff,” IEEE Trans.
Commun., vol. 61, no. 11, pp. 4754–4767, Nov. 2013.
[5] A. A. Nasir, X. Zhou, S. Durrani, and R. A. Kennedy, “Relaying
protocols for wireless energy harvesting and information processing,”
IEEE Trans. Wireless Commun., vol. 12, no. 7, pp. 3622–3636, July
2013.
[6] H. Chen, Y. Jiang, Y. Li, Y. Ma, and B. Vucetic, “A game-theoretical
model for wireless information and power transfer in relay interference
channels,” in Proc. 2014 IEEE ISIT, pp. 1161–1165.
[7] S. S. Kalamkar and A. Banerjee, “Interference-aided energy harvesting:
Cognitive relaying with multiple primary transceivers,” IEEE Trans.
Cogn. Commun. Netw., accepted for publication.
[8] Z. Chen, B. Xia, and H. Liu, “Wireless information and power transfer
in two-way amplify-and-forward relaying channels,” in Proc. 2014 IEEE
GLOBALSIP, pp. 168–172.
[9] Y. Liu, L. Wang, M. Elkashlan, T. Q. Duong, and A. Nallanathan, “Two-way relaying networks with wireless power transfer: Policies design and
throughput analysis,” in Proc. 2014 IEEE GLOBECOM, pp. 4030–4035.
[10] A. D. Wyner, “The wire-tap channel,” Bell Syst. Tech. J., vol. 54, no. 8,
pp. 1355–1387, Jul. 1975.
[11] Q. Li, Q. Zhang, and J. Qin, “Secure relay beamforming for simultaneous wireless information and power transfer in nonregenerative relay
networks,” IEEE Trans. Veh. Technol., vol. 63, no. 5, pp. 2462–2467,
June 2014.
[12] H. Xing, Z. Chu, Z. Ding, and A. Nallanathan, “Harvest-and-jam:
Improving security for wireless energy harvesting cooperative networks,”
in Proc. 2014 IEEE GLOBECOM, pp. 3145–3150.
[13] X. Chen, J. Chen, H. Zhang, Y. Zhang, and C. Yuen, “On secrecy
performance of a multi-antenna jammer aided secure communications
with imperfect CSI,” IEEE Trans. Veh. Technol., vol. 65, no. 10, pp.
8014–8024, Oct. 2016.
[14] X. He and A. Yener, “Cooperation with an untrusted relay: A secrecy
perspective,” IEEE Trans. Inf. Theory, vol. 56, no. 8, pp. 3807–3827,
Aug. 2010.
[15] J. Huang, A. Mukherjee, and A. L. Swindlehurst, “Secure communication via an untrusted non-regenerative relay in fading channels,” IEEE
Trans. Signal Process., vol. 61, no. 10, pp. 2536–2550, May 2013.
[16] L. Wang, M. Elkashlan, J. Huang, N. H. Tran, and T. Q. Duong, “Secure
transmission with optimal power allocation in untrusted relay networks,”
IEEE Wireless Commun. Lett., vol. 3, no. 3, pp. 289–292, June 2014.
[17] L. Sun, P. Ren, Q. Du, Y. Wang, and Z. Gao, “Security-aware relaying
scheme for cooperative networks with untrusted relay nodes,” IEEE
Commun. Lett., vol. 19, no. 3, pp. 463–466, Mar. 2015.
[18] K.-H. Park and M.-S. Alouini, “Secure amplify-and-forward untrusted
relaying networks using cooperative jamming and zero-forcing cancelation,” in Proc. 2015 IEEE PIMRC, pp. 234–238.
[19] R. Zhang, L. Song, Z. Han, and B. Jiao, “Physical layer security for
two-way untrusted relaying with friendly jammers,” IEEE Trans. Veh.
Technol., vol. 61, no. 8, pp. 3693–3704, Oct. 2012.
[20] D. Wang, B. Bai, W. Chen, and Z. Han, “Secure green communication
via untrusted two-way relaying: A physical layer approach,” IEEE Trans.
Commun., vol. 64, no. 5, pp. 1861–1874, May 2016.
[21] S. S. Kalamkar and A. Banerjee, “Secure communication via a wireless
energy harvesting untrusted relay,” IEEE Trans. Veh. Technol., vol. 66,
no. 3, pp. 2199–2213, Mar. 2017.
[22] M. Zhao, S. Feng, X. Wang, M. Zhang, Y. Liu, and H. Fu, “Joint
power splitting and secure beamforming design in the wireless-powered
untrusted relay networks,” in Proc. 2015 IEEE GLOBECOM, pp. 1–6.
[23] D. J. Su, S. A. Mousavifar, and C. Leung, “Secrecy capacity and wireless
energy harvesting in amplify-and-forward relay networks,” in Proc. 2015
IEEE PACRIM, pp. 258–262.
[24] S. Boyd, S.-J. Kim, L. Vandenberghe, and A. Hassibi, “A tutorial on
geometric programming,” Optimization and Engineering, vol. 8, no. 1,
pp. 67–127, 2007.
[25] M. Hayashi and R. Matsumoto, “Construction of wiretap codes from
ordinary channel codes,” in Proc. 2010 IEEE ISIT, pp. 2538–2542.
[26] E. Tekin and A. Yener, “The general Gaussian multiple-access and two-way wiretap channels: Achievable rates and cooperative jamming,” IEEE
Trans. Inf. Theory, vol. 54, no. 6, pp. 2735–2751, June 2008.
[27] Z. Xiang and M. Tao, “Robust beamforming for wireless information
and power transmission,” IEEE Wireless Commun. Lett., vol. 1, no. 4,
pp. 372–375, Aug. 2012.
ESAIM: Probability and Statistics
Will be set by the publisher
arXiv:1412.7103v4 [] 7 Jul 2016
URL: http://www.emath.fr/ps/
ADAPTIVE CONFIDENCE BANDS FOR MARKOV CHAINS AND
DIFFUSIONS: ESTIMATING THE INVARIANT MEASURE AND THE DRIFT ∗
Jakob Söhl 1 and Mathias Trabs 2
Abstract. As a starting point we prove a functional central limit theorem for estimators of the invariant measure of a geometrically ergodic Harris-recurrent Markov chain in a multi-scale space. This allows us to construct confidence bands for the invariant density with optimal (up to undersmoothing) L∞-diameter by using wavelet projection estimators. In addition, our setting applies to the drift estimation of diffusions observed discretely with fixed observation distance. We prove a functional central limit theorem for estimators of the drift function and finally construct adaptive confidence bands for the drift by using a completely data-driven estimator.
1991 Mathematics Subject Classification. Primary 62G15; secondary 60F05, 60J05, 60J60, 62M05.
The dates will be set by the publisher.
Introduction
Diffusion processes are prototypical examples of the theory of stochastic differential equations as well as of
continuous time Markov processes. At the same time diffusions are widely used in applications, for instance, to
model molecular movements, climate data or in econometrics. Focusing on Langevin diffusions, we will consider
the solution of the stochastic differential equation
dXt = b(Xt )dt + σdWt ,
t > 0,
with unknown drift function b : R → R, a volatility parameter σ > 0 and with a Brownian motion W = {Wt :
t > 0}. The problem of statistical estimation based on discrete observations from this model is embedded
into the framework of geometrically ergodic Harris-recurrent Markov chains. We study the estimation of the
invariant density of such Markov chains. The drift function b depends nonlinearly on the invariant density µ so
that the two estimation problems of b and µ are closely related. We prove functional central limit theorems for
Keywords and phrases: Adaptive confidence bands, diffusion, drift estimation, ergodic Markov chain, stationary density, Lepski’s
method, functional central limit theorem
The authors acknowledge intensive and very helpful discussions with Richard Nickl. J.S. thanks the European Research
Council (ERC) for support under Grant No. 647812. M.T. is grateful to the Statistical Laboratory of the University of
Cambridge for its hospitality during a visit from February to March 2014, where this research was initiated, and to the Deutsche
Forschungsgemeinschaft (DFG) for the research fellowship TR 1349/1-1. Part of the paper was carried out while M.T. was
employed at the Humboldt-Universität zu Berlin.
1 Statistical Laboratory, Department of Pure Mathematics and Mathematical Statistics, University of Cambridge, CB3 0WB
Cambridge, UK. Email: [email protected]
2 Department of Mathematics, University of Hamburg, Bundesstraße 55, 20146 Hamburg, Germany. Email: [email protected]
© EDP Sciences, SMAI 1999
estimators of both b and µ in multi-scale spaces. This allows the construction of confidence bands for µ. Owing
to the nonlinear dependence the construction of confidence bands for b is more involved. In this more difficult
situation and by using a self-similarity assumption we make the additional step of constructing confidence bands
for b that shrink at a rate adapting to the unknown smoothness.
Estimating the invariant density of a Markov process has been of interest for a long time. An early treatment
is given by Roussas [39], who considered kernel estimators and showed consistency and asymptotic normality of
the estimators under the strong Doeblin condition. Rosenblatt [38] analysed kernel estimators under the weaker
condition G2 on the Markov chain. More general δ-sequences were used for the estimation by Castellana and
Leadbetter [6], who prove pointwise consistency and under strong mixing assumptions asymptotic normality.
Yakowitz [47] shows asymptotic normality of kernel density estimators for the invariant density of Markov chains
without using assumptions on the rates of mixing parameter sequences. Adaptive estimation was considered by
Lacour [29], who estimates the invariant density and the transition density of Markov chains by model selection
and proves that the estimators attain the minimax convergence rate under L2 -loss. For stationary processes
Schmisser [40] estimates the derivatives of the invariant density by model selection, derives the convergence rates
of the estimators and pays special attention to the case of discretely observed diffusion processes. We see that
asymptotic normality has been widely considered in the nonparametric estimation of invariant densities and
thus implicitly also confidence intervals. However, we are not aware of any extensions of the pointwise results to
uniform confidence bands for invariant densities, which are, for instance, necessary to construct goodness-of-fit
tests of the Kolmogorov–Smirnov type.
The statistical properties of the diffusion model depend crucially on the observation scheme. If the whole path
(X_t)_{0≤t≤T} is observed for some time horizon T > 0, we speak of continuous observations. The case of discrete
observations (X_{k∆})_{k=0,...,n−1} with observation distance ∆ > 0 is distinguished into high-frequency observations,
i.e. ∆ ↓ 0, and low-frequency observations, where ∆ > 0 is fixed. While in the first two settings path properties
of the process can be used, statistical inference for low-frequency observations has to rely on the Markovian
structure of the observations. A review on parametric estimation in diffusion models is given by Kutoyants [28]
and Aït-Sahalia [2]. Nonparametric results are summarized in Gobet et al. [20], where also estimators based
on low-frequency observations are introduced and analysed. These low-frequency estimators rely on a spectral
identification of diffusion coefficients which have been introduced by Hansen and Scheinkman [22] and Hansen
et al. [23]. On the same observation scheme Kristensen [27] studies a pseudo-maximum likelihood approach
in a semiparametric model. Nonparametric estimation based on random sampling times of the diffusion has
been studied in Chorowski and Trabs [12]. While we pursue a frequentist approach, the Bayesian approach is
also very attractive. Based on low-frequency observations van der Meulen and van Zanten [44] have proved
consistency of the Bayesian method and Nickl and Söhl [36] showed posterior contraction rates.
As usual, nonparametric estimators depend on some tuning parameters, such as the bandwidth for classical
kernel estimators. Choosing these parameters in a data-driven way, Spokoiny [41] initiated adaptive drift
estimation in the diffusion model based on continuous observations. This was further developed by Dalalyan
[14] and Löcherbach et al. [31]. Based on high-frequency observations, adaptive estimation was studied by
Hoffmann [25] as well as Comte et al. [13]. In the low-frequency case the question of adaptive estimation has
been studied by Chorowski and Trabs [12]. In this work we go one step further not only constructing a (rate
optimal) adaptive estimator for the drift, but constructing adaptive confidence bands.
Statistical applications require tests and confidence statements. Negri and Nishiyama [35] as well as Masuda
et al. [33] have constructed goodness-of-fit tests for diffusions based on high-frequency observations. Low [32]
has shown that even in a simple density estimation problem no confidence bands exist which are honest and
adaptive at the same time. Circumventing this negative result by a “self-similarity” condition, Giné and Nickl
[18] have constructed honest and adaptive confidence bands for density estimation. Hoffmann and Nickl [26]
have further studied necessary and sufficient conditions for the existence of adaptive confidence bands and the
“self-similarity” condition has led to several recent papers on adaptive confidence bands, notably Chernozhukov
et al. [11] and Szabó et al. [42]. The present paper extends the theory of adaptive confidence bands beyond the
classical nonparametric models of density estimation, white noise regression and the Gaussian sequence model
which have been treated in the above papers.
In order to derive confidence bands, we first have to establish a uniform central limit theorem. The empirical
measure of the observations X0 , . . . , X(n−1)∆ is the canonical estimator for the invariant measure of a Markov
chain or diffusion. Considering a wavelet projection estimator, we obtain a smoothed version of the empirical
measure, which is subsequently used to estimate the drift function in the case of diffusions. Thus a natural
starting point is a functional central limit theorem for the invariant measure. Since our observations are not
independent, the standard empirical process theory does not apply. Instead we have to use the Markov structure
of the chain (Xk∆ )k . In the continuous time analogue the Donsker theorem for diffusion processes has been
studied by van der Vaart and van Zanten [45]. In the case of low-frequency observations, the estimation problem
is ill-posed and we have nonparametric convergence rates under the uniform loss. For the asymptotic behaviour
of the estimation error in the uniform norm we would expect a Gumbel distribution as shown by Giné and Nickl
[18] in the density estimation case using extreme value theory. Recent papers by Castillo and Nickl [7, 8] show
that we can hope for parametric rates and an asymptotic normal distribution if we consider instead a weaker
norm for the loss. More precisely, the estimation error can be measured in a multi-scale space where the wavelet
coefficients are down-weighted appropriately. The resulting norm corresponds to a negative Hölder norm.
Following this approach and relying on a concentration inequality by Adamczak and Bednorz [1], our first
result is a functional central limit theorem for rather general geometrically ergodic, Harris-recurrent Markov
chains. This could also be of interest for the theory on Markov chain Monte Carlo (MCMC) methods considering
that the central limit theorem measures the distance between a target integral and its approximation,
∫_R f(z) µ(dz)   and   (1/n) ∑_{k=0}^{n−1} f(Z_k),
respectively, where (Zk )k is a Markov chain with invariant measure µ, cf. Geyer [16]. Nevertheless, our
focus is on the statistical point of view. The functional central limit theorem immediately yields non-adaptive
confidence bands and as in Castillo and Nickl [8] these have an L∞ -diameter shrinking with (almost) the optimal
nonparametric rate. This small deviation from the optimal rate corresponds to the usual undersmoothing in
the construction of nonparametric confidence sets.
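The comparison above, between a target integral ∫ f dµ and the ergodic average (1/n) ∑ f(Z_k), can be illustrated with a minimal sketch (our own toy example, not from the paper): an AR(1) chain Z_{k+1} = a Z_k + ε_k with standard normal innovations has invariant measure N(0, 1/(1 − a²)), so for f(z) = z² the target integral is 1/(1 − a²).

```python
import numpy as np

# Toy illustration (hypothetical example, not the paper's setting): the
# ergodic average (1/n) * sum f(Z_k) approximates the target integral
# ∫ f dµ for a geometrically ergodic AR(1) chain.
rng = np.random.default_rng(0)
a, n = 0.5, 100_000
z = np.empty(n)
z[0] = 0.0
eps = rng.standard_normal(n)
for k in range(n - 1):
    z[k + 1] = a * z[k] + eps[k]     # Z_{k+1} = a Z_k + ε_k

f = lambda x: x ** 2
ergodic_avg = f(z).mean()            # (1/n) Σ f(Z_k)
target = 1.0 / (1.0 - a ** 2)        # ∫ f dµ, since µ = N(0, 1/(1-a²))
```

The central limit theorem quantifies the fluctuation of `ergodic_avg` around `target` at the rate n^{−1/2}.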
Applying the results for general Markov chains to diffusion processes observed at low frequency, we obtain
a functional central limit theorem for estimators of the drift function. Inspired by Giné and Nickl [18], in a
last demanding step the smoothness of b and the corresponding size of the confidence band is estimated to find
adaptive confidence bands. The adaptive procedure relies on Lepski’s method. In order to make the construction
of adaptive confidence bands feasible, we impose a self-similarity assumption on the drift function.
This work is organized as follows: In Section 1 we study general Markov chains and prove the functional
central limit theorem and confidence bands under appropriate conditions on the chain. These results are applied
to diffusion processes in Section 2. The adaptive confidence bands for the drift estimator are constructed in
Section 3. Some proofs are postponed to the last two sections.
1. Confidence bands for the invariant probability density of Markov processes
1.1. Preliminaries on Markov chains
We start by recalling some facts from the theory of Markov chains. For all basic definitions and results we
refer to Meyn and Tweedie [34]. Let Z = (Zk ), k = 0, 1, . . . , be a time-homogeneous Markov chain with state
space (R, B(R)). To fix the notation, let Px and Pν denote the probability measure of the chain with initial
conditions Z0 = x ∈ R and Z0 ∼ ν, respectively. The corresponding expectations will be denoted by Ex and
Eν , the Markov chain transition kernel by P (x, A), x ∈ R, A ∈ B(R). The transition operator is defined by
(P f )(x) = Ex [f (Z1 )].
From the general theory of Markov chains we know that for a Harris-recurrent Markov chain Z the existence
of a unique invariant probability measure µ is equivalent to the drift condition
PV(x) − V(x) ≤ −1 + c·1_C(x)
for some petite set C, some c < ∞ and some non-negative function V , which is finite at some x0 ∈ R. If Z is
additionally aperiodic, then this drift condition is already equivalent to Z being ergodic, i.e.
‖P^n(x, ·) − µ‖_TV → 0,  as n → ∞,  for all x ∈ R,
denoting the total variation norm of a measure by k · kT V . If we impose a stronger drift condition, namely the
geometric drift towards C, we obtain even geometric ergodicity: For a ψ-irreducible and aperiodic Markov chain
Z satisfying
(PV)(x) − V(x) ≤ −λV(x) + c·1_C(x),  for all x ∈ R,  (1)
for a petite set C, some λ > 0, c < ∞ and a function V : R → [1, ∞), it holds for some r > 1, R < ∞,
∑_{n≥0} r^n ‖P^n(x, ·) − µ‖_TV ≤ R V(x),  for all x ∈ R.
Note that ψ-irreducibility together with the geometric drift condition (1) implies already that Z is positive
Harris with invariant probability measure µ.
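The geometric drift condition (1) can be checked in closed form for simple chains. As a minimal sketch (our own toy example, not from the paper), take the AR(1) chain Z_{k+1} = a Z_k + ε_k with ε_k ∼ N(0,1) and the Lyapunov function V(x) = 1 + x², for which PV(x) = 2 + a²x² is explicit:

```python
import numpy as np

# Sketch: verify (1) for an AR(1) chain with V(x) = 1 + x^2.  Here
# PV(x) = E[1 + (a x + ε)^2] = 2 + a^2 x^2, so the drift
# PV(x) - V(x) = (a^2 - 1) x^2 + 1 is available in closed form.
a = 0.5
lam = (1 - a ** 2) / 2                  # candidate λ > 0
V = lambda x: 1 + x ** 2
PV = lambda x: 2 + a ** 2 * x ** 2

x = np.linspace(-10, 10, 2001)
drift = PV(x) - V(x)                    # = (a^2 - 1) x^2 + 1
# Outside the compact set C = [-R, R] the inequality holds without the c-term;
# R solves (a^2 - 1) x^2 + 1 = -λ V(x):
R = np.sqrt((1 + lam) / (1 - a ** 2 - lam))
outside = np.abs(x) > R + 1e-9
ok = np.all(drift[outside] <= -lam * V(x)[outside])
```

Inside [−R, R] the constant c = 1 + λ(1 + R²) absorbs the remainder, so (1) holds with the small set C = [−R, R].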
The geometric ergodicity yields the following central limit theorem, see Chen [10, Thm. II.4.1]. The weakest
form of ergodicity so that the central limit theorem holds is ergodicity of degree 2 which is slightly weaker than
the geometric ergodicity that we have assumed here.
Proposition 1.1. Let (Zk )k>0 be a geometrically ergodic Markov chain with arbitrary initial condition and
invariant probability measure µ, then there exists for every bounded function f = (f1 , . . . , fd ) : R → Rd a
symmetric, positive semidefinite matrix Σf = (Σfi ,fj )i,j=1,...,d such that
n^{−1/2} ( ∑_{k=0}^{n−1} f(Z_k) − n E_µ[f(Z_0)] ) →^d N(0, Σ_f),  as n → ∞.
For i, j ∈ {1, . . . , d} the asymptotic covariances are given by
Σ_{f_i,f_j} := lim_{n→∞} n^{−1} Cov_µ( ∑_{k=0}^{n−1} f_i(Z_k), ∑_{k=0}^{n−1} f_j(Z_k) )  (2)
= E_µ[(f_i(Z_0) − E_µ[f_i])(f_j(Z_0) − E_µ[f_j])] + ∑_{k=1}^{∞} E_µ[(f_i(Z_0) − E_µ[f_i])(f_j(Z_k) − E_µ[f_j])] + ∑_{k=1}^{∞} E_µ[(f_i(Z_k) − E_µ[f_i])(f_j(Z_0) − E_µ[f_j])].
In order to lift this “pointwise” result to a functional central limit theorem, we will in addition need a concentration inequality for a more precise control on how the sum n^{−1} ∑_{k=0}^{n−1} f(Z_k) deviates from the integral ∫ f(z) µ(dz) for finite sample sizes. To this end, we strengthen the aperiodicity assumption to strong aperiodicity [see 34, Prop. 5.4.5], that is, there exist a set C ∈ B(R), a probability measure ν with ν(C) > 0 and a constant δ > 0 such that
P(x, B) ≥ δν(B),  for all x ∈ C, B ∈ B(R).  (3)
Any set C satisfying this condition is called a small set. Recall that any small set is a petite set.
Proposition 1.2 (Theorem 9 by Adamczak and Bednorz [1]). Let Z = (Zk )k>0 be a Harris recurrent, strongly
aperiodic Markov chain on (R, B(R)) with unique invariant measure µ. For some set C ∈ B(R) with µ(C) > 0
let Z satisfy the drift condition (1) and the small set condition (3).
Let f ∈ L2 (µ) be bounded. For any 0 < τ < 1 there are constants K, c2 depending only on δ, V , λ, c and τ
and a constant c1 depending additionally on the initial value x ∈ R such that for any t > 0
P_x( |∑_{k=0}^{n−1} f(Z_k) − n E_µ[f(Z_0)]| > t ) ≤ K exp( −c_1 (t/‖f‖_∞)^τ ) + K exp( − c_2 t² / ( nΣ_f + t max(‖f‖_∞ (log n)^{1/τ}, Σ_f^{1/2}) ) ),
where Σf is given by (2) with d = 1.
As a last ingredient we need to bound the asymptotic variance Σ_f in Propositions 1.1 and 1.2 in terms of ‖f̄‖²_{L²(µ)} for the centred function f̄ := f − ∫ f dµ. The geometric ergodicity only yields a bound O(‖f‖²_∞).
Therefore, we require that the transition operator is a contraction in the sense that there exists some ρ ∈ (0, 1)
satisfying
‖Pg‖_{L²(µ)} ≤ ρ ‖g‖_{L²(µ)}  for all g ∈ L²(µ) with ∫ g dµ = 0.  (4)
This property is also known as ρ-mixing. It corresponds to a Poincaré inequality [cf. 3, Thm. 1.3] and its
relation to drift conditions is analysed by Bakry et al. [3]. If (4) is fulfilled, the Cauchy–Schwarz inequality
yields
Σ_f = Σ_{f̄} ≤ ‖f̄‖²_{L²(µ)} + 2 ∑_{k=1}^{∞} ‖f̄‖_{L²(µ)} ‖P^k f̄‖_{L²(µ)} ≤ ( 1 + 2 ∑_{k=1}^{∞} ρ^k ) ‖f̄‖²_{L²(µ)} = (1+ρ)/(1−ρ) ‖f̄‖²_{L²(µ)}.  (5)
1.2. A functional central limit theorem
The basic idea is to prove a functional central limit theorem for the invariant probability measure µ by
choosing an orthonormal basis, applying the pointwise central limit theorem to the basis functions (Proposition 1.1) and extending this result to finite linear combinations with the help of the concentration inequality
(Proposition 1.2). Provided µ has some regularity, the approximation error due to considering only a finite basis
expansion of µ will be negligible. Noting that it is straightforward to extend the results to any compact subset
of R, we focus on a central limit theorem on a bounded interval [a, b] with −∞ < a < b < ∞.
Let (ϕj0 ,l , ψj,k : j > j0 , l, k ∈ Z), for some j0 > 0, a scaling function ϕ and a wavelet function ψ, be a
regular compactly supported L2 -orthonormal wavelet basis of L2 (R). For the sake of clarity we throughout use
Daubechies’ wavelets of order N ∈ N, but any other compactly supported regular wavelet basis can be applied
as well. As a standing assumption we suppose that N is chosen large enough such that the Hölder regularity of ϕ
and ψ is larger than the regularity required for the invariant measure. The approximation spaces for resolution
levels J > j0 are defined as
V_J := span{ϕ_{j_0,l}, ψ_{j,k} : j = j_0, . . . , J, l, k ∈ Z}.
The projection onto V_J is denoted by π_J. Since j_0 is fixed and to simplify the notation, we write ψ_{−1,l} := ϕ_{j_0,l}.
Using the first n ∈ N steps Z_0, Z_1, . . . , Z_{n−1} of a realisation of the chain, we define the empirical measure
µ_n := (1/n) ∑_{k=0}^{n−1} δ_{Z_k},
where δx denotes the Dirac measure at the point x ∈ R. The canonical projection wavelet estimator of µ is
given by
µ̂_J := π_J(µ_n) = ∑_{l∈Z} µ̂_{−1,l} ψ_{−1,l} + ∑_{j=j_0}^{J} ∑_{k∈Z} µ̂_{j,k} ψ_{j,k},   µ̂_{j,k} := ⟨ψ_{j,k}, µ_n⟩ := ∫ ψ_{j,k} dµ_n.  (6)
For any ψ_{j,k} Proposition 1.1 yields that √n(µ_n − µ)(ψ_{j,k}) converges in distribution for n → ∞ to a Gaussian random variable G_µ(j, k) ∼ N(0, Σ_{ψ_{j,k}}) with covariances
E_µ[G_µ(j, k) G_µ(l, m)] = Σ_{ψ_{j,k},ψ_{l,m}}.  (7)
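In the simplest case the father-wavelet part of the projection estimator (6) can be computed explicitly: with the Haar basis, ⟨ϕ_{J,k}, µ_n⟩ ϕ_{J,k} reduces to a histogram with bin width 2^{−J}. The following sketch uses this simplification (the paper works with Daubechies wavelets; the Haar case and the AR(1) chain are illustrative choices of ours):

```python
import numpy as np

# Sketch of the projection estimator (6) with the Haar father wavelet
# φ_{J,k} = 2^{J/2} 1_{[k 2^{-J}, (k+1) 2^{-J})}: at a point x it is a
# histogram estimate with bin width h = 2^{-J}.
rng = np.random.default_rng(1)
a, n, J = 0.5, 100_000, 4
z = np.empty(n)
z[0] = 0.0
eps = rng.standard_normal(n)
for k in range(n - 1):
    z[k + 1] = a * z[k] + eps[k]      # AR(1) chain, invariant law N(0, 1/(1-a²))

h = 2.0 ** -J                         # bin width 2^{-J}
def mu_hat(x):
    k = np.floor(x / h)               # index of the Haar cell containing x
    # <φ_{J,k}, µ_n> φ_{J,k}(x) = (fraction of samples in the cell) / h
    return np.mean((z >= k * h) & (z < (k + 1) * h)) / h

true_density = 1.0 / np.sqrt(2 * np.pi * (1 / (1 - a ** 2)))  # µ(0)
```

Evaluating `mu_hat(0.0)` then approximates the invariant density at 0; with Daubechies wavelets the coefficients µ̂_{j,k} are computed analogously from the empirical measure.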
Using the techniques from Castillo and Nickl [8], this pointwise convergence of µ_n can be extended to a uniform central limit theorem on [a, b] for the projection estimator µ̂_J in the multi-scale sequence spaces which are defined as follows: Noting that the Daubechies wavelets fulfil supp ϕ ⊆ [0, 2N − 1] and supp ψ ⊆ [−N + 1, N], cf. Härdle et al. [24, Chap. 7], the sets L := K_{−1} := {k ∈ Z : 2^{j_0}a − 2N + 1 ≤ k ≤ 2^{j_0}b} and K_j := {k ∈ Z : 2^j a − N ≤ k ≤ 2^j b + N − 1} contain all indices of ϕ_{j_0,·} and ψ_{j,·}, respectively, whose support intersects with the interval [a, b]. For a monotonically increasing weighting sequence w = (w_j)_{j=−1,j_0,j_0+1,j_0+2,...} with w_j ≥ 1 and w_{−1} := 1 we define the multi-scale sequence spaces as
M := M(w) := { x = (x_{jk}) : ‖x‖_{M(w)} := sup_{j∈{−1,j_0,j_0+1,...}} max_{k∈K_j} |x_{jk}|/w_j < ∞ }.
Since the Banach space M(w) is non-separable, we define the separable, closed subspace
M_0 := M_0(w) := { x = (x_{jk}) : lim_{j→∞} max_{k∈K_j} |x_{jk}|/w_j = 0 }.
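The multi-scale norm is straightforward to evaluate on a finite coefficient array. A minimal sketch (the dictionary layout and the sample coefficients are our own convention, purely for illustration):

```python
import numpy as np

# Sketch of the norm ‖x‖_{M(w)} = sup_j max_k |x_{jk}| / w_j: coefficients
# at level j are down-weighted by w_j, here w_j = sqrt(j), so the maximum
# at scale j may grow like sqrt(j) without the norm blowing up.
def multiscale_norm(coeffs, w):
    """coeffs: dict {level j: array of coefficients x_{jk}}."""
    return max(np.max(np.abs(x)) / w(j) for j, x in coeffs.items())

w = lambda j: max(np.sqrt(j), 1.0)           # weights w_j = sqrt(j), w_j >= 1
coeffs = {1: np.array([0.5, -1.0]),
          4: np.array([1.5, 0.0, -2.0]),
          9: np.array([2.9])}
norm = multiscale_norm(coeffs, w)            # levels contribute 1.0, 1.0, 2.9/3
```

The coefficient 2.9 at level 9 contributes only 2.9/3 < 1 after weighting, which is exactly the mechanism that makes the maxima max_k |G_µ(j, k)| = O_P(√j) summable into a finite norm.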
Let us assume that µ is absolutely continuous with respect to the Lebesgue measure and denote the density likewise by µ. If the density is bounded on D = [a − 2^{−j_0}(2N − 1), b + 2^{−j_0}(2N − 1)], the orthonormality and the support of (ψ_{j,k}) together with (5) yield Σ_{ψ_{j,k}} = O(‖µ‖_∞). Standard estimates of the supremum of normal random variables yield that the maximum over the 2^j variables G_µ(j, ·) of a resolution level j is of the order max_k |G_µ(j, k)| = O_P(√j), see (14) below. Since the cardinality of K_j is of the order 2^j, a weighting w_j = √j seems to be appropriate and indeed we conclude as Castillo and Nickl [8, Prop. 3]:
Lemma 1.3. Let µ admit a Lebesgue density which is bounded on D. Then G_µ from (7) satisfies E[‖G_µ‖_{M(w)}] < ∞ for the weights w_j = √j. Moreover, L(G_µ) is a tight Gaussian Borel probability measure in M_0(w) if √j/w_j → 0.
Let us now summarise the assumptions on the Markov chain, which are needed to prove the functional central
limit theorem and for the construction of confidence bands. For any regularity s > 0, denoting the integer part
of s by [s], the Hölder space on a domain D is defined by
C^s(D) := { f : D → R : ‖f‖_{C^s} := ∑_{k=0}^{[s]} ‖f^{(k)}‖_∞ + sup_{x≠y} |f^{([s])}(x) − f^{([s])}(y)| / |x − y|^{s−[s]} < ∞ }.
Assumption A. Let (Zk )k>0 be a Harris recurrent, strongly aperiodic Markov chain on (R, B(R)) with initial
condition Z0 = x. Let the invariant probability measure have a density µ in C s (D) for some s > 0 and some
sufficiently large set D ⊆ R containing [a, b]. Let the drift condition (1) and small set condition (3) be satisfied
for some C ∈ B(R) with µ(C) > 0. Further suppose that the transition operator is an L2 (µ)-contraction fulfilling
(4) with ρ ∈ (0, 1).
7
TITLE WILL BE SET BY THE PUBLISHER
Remark 1.4. As we have discussed above, it suffices to verify that the chain (Z_k)_{k≥0} is ψ-irreducible and satisfies (1) and (3) in order to conclude that (Z_k)_{k≥0} is Harris recurrent, strongly aperiodic and has a unique invariant probability measure.
Now we can show the functional central limit theorem for µ̂_J in the space M_0(w). Note that the natural nonparametric choice J_n given by 2^{J_n} ∼ n^{1/(2s+1)} satisfies the conditions of the following theorem. Recall that weak convergence of laws L(X) of random variables X on a metric space (S, d) can be metrised by the bounded-Lipschitz metric
β_S(µ, ν) := sup_{F : ‖F‖_{BL}≤1} ∫_S F(x) (µ(dx) − ν(dx)),  with  ‖F‖_{BL} := sup_{x∈S} |F(x)| + sup_{x,y∈S : x≠y} |F(x) − F(y)| / d(x, y).
Theorem 1.5. Grant Assumption A and let w = (w_j) be increasing and satisfy √j/w_j → 0 as j → ∞. Let J_n ∈ N fulfil, for some τ ∈ (0, 1),
√n 2^{−J_n(2s+1)/2} w_{J_n}^{−1} = o(1),   (log n)^{2/τ} n^{−1} 2^{J_n} J_n = O(1).
Then µ̂_{J_n} from (6) satisfies, for n → ∞,
√n(µ̂_{J_n} − µ) →^d G_µ  in M_0(w).
Proof. We follow the strategy of [8, Thm. 1]. First we deal with the bias term. By the s-Hölder regularity of µ we have [19, Definition (5.90) and Proposition 5.3.13]
sup_{j,k} 2^{j(2s+1)/2} |⟨ψ_{j,k}, µ⟩| < ∞
and thus by the assumption on J_n
‖µ − π_{J_n}(µ)‖_M = sup_{j>J_n} max_{k∈K_j} w_j^{−1} |⟨ψ_{jk}, µ⟩| ≲ sup_{j>J_n} w_j^{−1} 2^{−j(2s+1)/2} = o(n^{−1/2}).
Defining ν_n := √n(µ̂_{J_n} − π_{J_n}(µ)), we decompose the stochastic error, for J < J_n to be specified later,
β_{M_0}(L(ν_n), L(G_µ)) ≤ β_{M_0}(L(ν_n), L(ν_n) ∘ π_J^{−1}) + β_{M_0}(L(ν_n) ∘ π_J^{−1}, L(G_µ) ∘ π_J^{−1}) + β_{M_0}(L(G_µ) ∘ π_J^{−1}, L(G_µ)).  (8)
In the sequel we will separately show that all three terms converge to zero. Let ε > 0. By definition of the β_{M_0}-norm we estimate the first term by
β_{M_0}(L(ν_n), L(ν_n) ∘ π_J^{−1}) = sup_{F : ‖F‖_{BL}≤1} |E[F(ν_n) − F(π_J(ν_n))]| ≤ E[‖√n(π_{J_n} − π_J)(µ_n − µ)‖_M] ≤ max_{J<j≤J_n}(w_j^{−1} j^{1/2}) · E[ max_{J<j≤J_n} max_{k∈K_j} j^{−1/2} |⟨√n(µ_n − µ), ψ_{j,k}⟩| ].  (9)
By the assumptions on w and due to the factor in front of the expectation, the above display can be bounded by ε/3 if J is chosen large enough, provided that the expectation can be bounded by a constant independent of J and n. To apply the concentration inequality in Proposition 1.2, note that Σ_{ψ_{j,k}} = O(‖µ‖_∞) by (5) and
√j ‖ψ_{j,k}‖_∞ = √j 2^{j/2} = O(√n (log n)^{−1/τ}) for j ≤ J_n. Hence, for any M > 0 large enough we obtain for constants c_i > 0, i = 1, 2, . . . ,
E[ max_{J<j≤J_n} max_{k∈K_j} j^{−1/2} |⟨√n(µ_n − µ), ψ_{j,k}⟩| ]
≤ M + ∫_M^∞ P( max_{J<j≤J_n} max_{k∈K_j} j^{−1/2} |⟨√n(µ_n − µ), ψ_{j,k}⟩| > u ) du
≤ M + ∫_M^∞ ∑_{J<j≤J_n, k∈K_j} P( |⟨√n(µ_n − µ), ψ_{j,k}⟩| > √j u ) du
≲ M + ∑_{J<j≤J_n} 2^j ∫_M^∞ ( exp(−c_1 (log n) j^τ u^τ) + exp(−c_2 j u²/(1 + u)) ) du
≲ M + ∑_{J<j≤J_n} 2^j ( e^{−c_3 (jM)^τ log n}/(j^τ log n) + e^{−c_4 jM}/j )  (10)
≲ M + e^{−c_5 JM} ≲ M + 1,
where we have used in the next-to-last estimate that J_n ≲ log n and thus j^τ log n ≳ j for all j ≤ J_n.
To bound the second term in (8), we use Proposition 1.1 and the Cramér–Wold device to see that it is smaller than ε/3 for fixed J and n sufficiently large. It remains to consider the third term in (8), which can be estimated similarly to (9), using that E[sup_j max_k j^{−1/2} |G_µ(j, k)|] < ∞ by Lemma 1.3.
1.3. The construction of confidence bands
Using the multi-scale central limit theorem, we now construct confidence bands for the density of the invariant
probability measure. For some confidence level α ∈ (0, 1) the natural idea is to take
C_n(ζ_α) := { f : ‖f − µ̂_{J_n}‖_M < ζ_α/√n } = { f : sup_{j,k} w_j^{−1} |⟨f − µ̂_{J_n}, ψ_{j,k}⟩| < ζ_α/√n },
where ζ_α is chosen such that P(‖G_µ‖_M < ζ_α) ≥ 1 − α. For this set the asymptotic coverage follows immediately from Theorem 1.5. However, C_n(ζ_α) is too large in terms of the L^∞([a, b])-diameter
|C_n(ζ_α)|_∞ := sup_{x∈[a,b]} sup{ |f(x) − g(x)| : f, g ∈ C_n(ζ_α) }.
To obtain the (nearly) optimal L^∞-diameter, we need to control the large resolution levels. As suggested by Castillo and Nickl [8], we use a-priori knowledge of the regularity s to define
C̄_n := C̄_n(ζ_α, s, u_n) := C_n(ζ_α) ∩ { f : ‖f‖_{C^s} ≤ u_n }  (11)
for a sequence u_n → ∞.
Proposition 1.6. Grant Assumption A with s > 0 and let w = (w_j) be increasing and satisfy √j/w_j → 0. For α ∈ (0, 1) let ζ_α > 0 be such that P(‖G_µ‖_M > ζ_α) ≤ α and choose J_n := J_n(s) such that
2^{J_n} = (n/log n)^{1/(2s+1)}.
Then the confidence set C̄_n = C̄_n(ζ_α, s, u_n) from (11) with u_n := w_{J_n}/√J_n satisfies
lim inf_{n→∞} P(µ ∈ C̄_n) ≥ 1 − α   and   |C̄_n|_∞ = O_P( (n/log n)^{−s/(2s+1)} u_n ).
Proof. Let us first verify lim inf_{n→∞} P(µ ∈ C̄_n) ≥ 1 − α. Since µ ∈ C^s(u_n) for large enough n, Theorem 1.5 yields
lim inf_{n→∞} P(µ ∈ C̄_n) = lim inf_{n→∞} P(√n ‖µ̂_{J_n} − µ‖_M < ζ_α) ≥ P(‖G_µ‖_M < ζ_α) ≥ 1 − α.
To bound the diameter let f, g ∈ C̄_n. Using ‖f − µ̂_{J_n}‖_M = O_P(n^{−1/2}) and f − g ∈ C^s(2u_n), we obtain
‖f − g‖_{L^∞([a,b])} ≲ ∑_{j≤J_n} 2^{j/2} max_{k∈K_j} |⟨f − g, ψ_{j,k}⟩| + ∑_{j>J_n} 2^{j/2} max_{k∈K_j} |⟨f − g, ψ_{j,k}⟩|
≤ ∑_{j≤J_n} 2^{j/2} ( max_{k∈K_j} |⟨f − µ̂_{J_n}, ψ_{j,k}⟩| + max_{k∈K_j} |⟨g − µ̂_{J_n}, ψ_{j,k}⟩| ) + ∑_{j>J_n} 2^{−js} 2^{j(s+1/2)} max_{k∈K_j} |⟨f − g, ψ_{j,k}⟩|
≤ ( ‖f − µ̂_{J_n}‖_M + ‖g − µ̂_{J_n}‖_M ) ∑_{j≤J_n} 2^{j/2} w_j + ‖f − g‖_{C^s} ∑_{j>J_n} 2^{−js}
= O_P( n^{−1/2} 2^{J_n/2} J_n^{1/2} u_n ) + O_P( 2^{−J_n s} u_n ).
Plugging in the choice of J_n, we finally have n^{−1/2} √(2^{J_n} J_n) ≲ (n/log n)^{−s/(2s+1)} = 2^{−J_n s}.  (12)
A multi-scale confidence band as in (11) allows for the construction of a classical L^∞-band on [a, b] around µ̂_{J_n} as follows: Let us denote the almost optimal diameter by ρ_n := (n/log n)^{−s/(2s+1)} u_n. As we can deduce from (12), there is a constant D > 0 such that ‖f − µ̂_{J_n}‖_{L^∞([a,b])} ≤ Dρ_n for any f ∈ C̄_n. Hence, the band
C̃_n := { f : [a, b] → R : ‖f − µ̂_{J_n}‖_{L^∞([a,b])} ≤ Dρ_n }
contains C̄_n, which only improves the coverage. Consequently, C̃_n is an L^∞-confidence band with level α which shrinks with almost optimal rate ρ_n. In addition, a multi-scale confidence band as in (11) allows for simultaneous confidence intervals in all wavelet coefficients. This is especially useful for goodness-of-fit tests, where the optimal L^∞-diameter of C̄_n is a measure of the power of the test.
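Once the smoothness s is fixed, the radius of the resulting L^∞-band is explicit. A minimal sketch (all concrete numbers, the constant D, the grid and the stand-in estimate are illustrative choices of ours, not values from the paper):

```python
import numpy as np

# Sketch of the band around a density estimate: the almost optimal
# diameter is rho_n = (n / log n)^{-s/(2s+1)} * u_n, and the band at each
# point x is [mu_hat(x) - D*rho_n, mu_hat(x) + D*rho_n].
def band_radius(n, s, u_n, D=1.0):
    rho_n = (n / np.log(n)) ** (-s / (2 * s + 1)) * u_n
    return D * rho_n

n, s = 100_000, 2.0
radius = band_radius(n, s, u_n=1.0)
x = np.linspace(-2, 2, 5)
mu_hat = np.exp(-x ** 2 / 2) / np.sqrt(2 * np.pi)   # stand-in estimate
lower, upper = mu_hat - radius, mu_hat + radius     # the L∞ band on the grid
```

For n = 10^5 and s = 2 the radius is of order a few percent, shrinking at the minimax rate up to the log factor.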
In order to apply the confidence band (11) we need the regularity s of the invariant density and a critical
value ζα such that P (kGµ kM < ζα ) > 1 − α for α ∈ (0, 1). Adaptive confidence bands will be presented later
in the context of diffusions. So let us suppose for a moment that the regularity s is known. Then the problem
reduces to the construction of the critical value to which the remainder of this section is devoted.
A first observation is that if several independent copies of the diffusion are observed, then one could calculate for each copy an estimator µ̂_{J_n} and obtain estimators for the values ζ_α from the distribution of the estimators µ̂_{J_n} around their joint mean. Since the assumption of many independent copies is not realistic, we will not pursue this further. Instead of the consistent estimation of the lowest possible ζ_α we restrict ourselves to estimating an upper bound, which yields possibly more conservative confidence sets. By the concentration of Gaussian measures we know for any κ > 0 that
P( ‖G_µ‖_M > E[‖G_µ‖_M] + κ ) ≤ e^{−κ²/(2Σ)},
where Σ := sup_{j,k} E[|G_µ(j, k)|²] = sup_{j,k} Σ_{ψ_{j,k}}, see for example Ledoux [30, Thm. 7.1]. Hence, an upper bound for ζ_α is given by
√(2Σ log α^{−1}) + E[‖G_µ‖_M].
The expected value E[‖G_µ‖_M] can be bounded as in Proposition 2 by Castillo and Nickl [8], depending on Σ again. We obtain the following upper bound for ζ_α:
Lemma 1.7. Let j_0 ≥ 1 and w = (w_j) satisfy w_{−1} = √j_0 and inf_j w_j/√j ≥ 1, j ≥ j_0, and define Σ := sup_{j,k} Σ_{ψ_{j,k}}. Then P(‖G_µ‖_M > ζ̄_α) ≤ α holds for
ζ̄_α(Σ) := ( √(2 log α^{−1}) + 2C + (32/(3C)) 2^{−2j_0} ) √Σ  (13)
with C := ( sup_{j≥j_0} (4 log|K_j| + 2 log 2)/j )^{1/2}.
Proof. The cardinality of K_j is denoted by |K_j|. Recall that a standard normal random variable Z satisfies
E[e^{Z²/4}] = √2   and   P(Z > κ) ≤ (κ√(2π))^{−1} e^{−κ²/2},  κ > 0.
For each j ≥ j_0 and κ = 2 sup_k Σ_{ψ_{j,k}}^{1/2}, Jensen’s inequality thus yields
E[max_k |G_µ(j, k)|] ≤ κ ( log E[e^{max_k |G_µ(j,k)|²/κ²}] )^{1/2} ≤ 2 sup_k Σ_{ψ_{j,k}}^{1/2} ( log|K_j| + (1/2) log 2 )^{1/2} ≤ C √(Σj),  (14)
for the constant C := (sup_j (4 log|K_j| + 2 log 2)/j)^{1/2}. Theorem 7.1 in [30] yields for all t, T such that t ≥ Σ^{1/2}CT and T ≥ 1
P( ‖G_µ‖_M > t ) ≤ ∑_j P( |max_k G_µ(j, k) − E[max_k G_µ(j, k)]| > t w_j − E[max_k |G_µ(j, k)|] )
≤ ∑_j P( |max_k G_µ(j, k) − E[max_k G_µ(j, k)]| > (t − C)√(Σj) )
≤ 2 ∑_j e^{−jt²(T−1)²/(2ΣT²)}.
Recall that j = j_0, j_0, j_0 + 1, j_0 + 2, . . . in the above sum. Using Fubini’s theorem and the Gaussian tail bound, we conclude
E[‖G_µ‖_M] ≤ Σ^{1/2}CT + ∫_{Σ^{1/2}CT}^∞ P(‖G_µ‖_M > t) dt ≤ Σ^{1/2}CT + 2Σ^{1/2} ∑_j ∫_{CT}^∞ e^{−jt²(1−1/T)²/2} dt
≤ Σ^{1/2}CT + ( 2Σ^{1/2}T/(C(T−1)²) ) ∑_j j^{−1} e^{−(2 log 2) j (T−1)²} ≤ Σ^{1/2}CT + 4Σ^{1/2}T 2^{−2(T−1)² j_0} / ( C(T−1)² (1 − 2^{−2(T−1)²}) ).
Choosing T = 2, we obtain E[‖G_µ‖_M] ≤ ( 2C + (32/(3C)) 2^{−2j_0} ) Σ^{1/2}.
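Formula (13) is a direct computation once Σ, j_0 and the level cardinalities |K_j| are fixed. A minimal sketch (the concrete values of j_0, Σ and |K_j| below are illustrative assumptions, not quantities from the paper):

```python
import numpy as np

# Direct evaluation of the critical value ζ̄_α(Σ) from (13).
def critical_value(alpha, Sigma, j0, K_sizes):
    # C := (sup_{j >= j0} (4 log|K_j| + 2 log 2) / j)^{1/2}
    C = np.sqrt(max((4 * np.log(Kj) + 2 * np.log(2)) / j
                    for j, Kj in K_sizes.items()))
    return (np.sqrt(2 * np.log(1 / alpha)) + 2 * C
            + 32 / (3 * C) * 2.0 ** (-2 * j0)) * np.sqrt(Sigma)

# |K_j| of order 2^j on a bounded interval (illustrative sizes):
K_sizes = {j: 2 ** j + 8 for j in range(3, 12)}
zeta = critical_value(alpha=0.05, Sigma=1.0, j0=3, K_sizes=K_sizes)
```

Note that ζ̄_α scales like √Σ and grows only like √(log α^{−1}) as the level α decreases, so conservative estimates of Σ inflate the band linearly in √Σ.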
From the above lemma we see that Σ is the key quantity for the construction of the critical values ζ_α. A natural estimator for Σ is Σ̂_n := max_{j≤J_n,k} Σ̂_{ψ_{j,k}}, where the Σ̂_{ψ_{j,k}} are estimators of Σ_{ψ_{j,k}} based on n observations. Since J_n tends to infinity, the maximum over all j ≤ J_n converges to the supremum over all j, so that we are asymptotically estimating the right quantity. For the estimators Σ̂_{ψ_{j,k}} we propose the initial monotone sequence estimators based on autocovariances by Geyer [16], which are consistent over-estimates, and this yields almost surely
lim inf_{n→∞} Σ̂_n ≥ Σ,
which suffices for our purposes.
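The initial monotone sequence construction can be sketched as follows (this is our reading of the estimator in [16], applied to a toy AR(1) chain; the lag cap and sample size are illustrative choices): pair the empirical autocovariances, Γ_m = γ_{2m} + γ_{2m+1}, truncate at the first non-positive pair, and force the retained pairs to be non-increasing.

```python
import numpy as np

# Sketch of Geyer's initial monotone sequence estimator of the asymptotic
# variance Σ_f = γ_0 + 2 Σ_{k>=1} γ_k of an ergodic average.  The monotone
# truncation makes it an asymptotic over-estimate, which is what the band
# construction above needs.
def initial_monotone_sequence(x, max_pairs=100):
    x = np.asarray(x, dtype=float)
    n = len(x)
    xc = x - x.mean()
    gamma = np.array([xc[: n - k] @ xc[k:] / n for k in range(2 * max_pairs)])
    Gammas = []
    for m in range(max_pairs):
        G = gamma[2 * m] + gamma[2 * m + 1]   # paired autocovariances
        if G <= 0:                            # initial-sequence truncation
            break
        if Gammas and G > Gammas[-1]:         # enforce monotone decrease
            G = Gammas[-1]
        Gammas.append(G)
    return -gamma[0] + 2.0 * sum(Gammas)      # γ_0 + 2 Σ γ_k, via pairs

# AR(1) chain with a = 0.5: the true asymptotic variance of the sample
# mean is 1 / (1 - a)^2 = 4.
rng = np.random.default_rng(2)
a, n = 0.5, 20_000
z = np.empty(n)
z[0] = 0.0
for k in range(n - 1):
    z[k + 1] = a * z[k] + rng.standard_normal()
sigma2_hat = initial_monotone_sequence(z)
```

Applied to the wavelet coefficients f = ψ_{j,k}(Z_·), the same routine yields the estimators Σ̂_{ψ_{j,k}} entering Σ̂_n.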
The estimation of Σψj,k amounts to the estimation of the asymptotic variance Σf in (2) for a known function
f and this problem is studied in the MCMC-literature. In addition to the sequence estimators, Geyer [16]
discusses two other constructions together with their advantages and disadvantages. Robert [37] constructs
another estimator applying renewal theory, which is however difficult to calculate. A more recent estimator
using i.i.d. copies of the process X is given by Chauveau and Diebolt [9].
As an alternative to the above estimation of Σ in (13), an upper bound could be estimated as follows: Using (5) we can bound Σ from above,
Σ ≤ sup_{j,k} (1+ρ)/(1−ρ) ‖ψ̄_{j,k}‖²_{L²(µ)} ≤ sup_{j,k} (1+ρ)/(1−ρ) ‖ψ_{j,k}‖²_{L²} ‖µ‖_∞ = (1+ρ)/(1−ρ) ‖µ‖_∞,
where we can plug in estimators for ‖µ‖_∞ and ρ. Considering a wavelet ψ_{j,k} localised around the maximum of µ, we see that the second inequality should provide a good bound. To estimate ‖µ‖_∞, a calculation along the lines of the bound (12) shows that for µ ∈ C^s(D) with J_n as in Proposition 1.6
‖µ̂_{J_n} − µ‖_{L^∞([a,b])} = O_P( ((log n)/n)^{s/(2s+1)} u_n ),  (15)
where u_n = w_{J_n}/√J_n. Provided the supremum of µ is attained in [a, b] or µ admits some positive global Hölder regularity, we conclude that ‖µ‖_∞ can be estimated by ‖µ̂_{J_n}‖_∞ with the above rate; in particular this estimator is consistent, which is all that is needed. For the estimation of ρ we observe that it is the second largest
consistent estimator, which is all that is needed. For the estimation of ρ we observe that it is the second largest
eigenvalue of the transition operator P∆ . Gobet et al. [20] estimate this eigenvalue in a reflected diffusion model
by constructing first an empirical transition matrix for the transition operator restricted to a finite dimensional
space and then taking the second largest eigenvalue of the empirical transition matrix as an estimator for ρ,
there denoted by κ1 . They give a rate for their estimator, in particular the estimator is consistent.
Let us finally note that the estimation of ζα can be circumvented by a Bayesian approach as studied by
Castillo and Nickl [8] as well as Szabó et al. [42] in simpler statistical problems. The papers analyse Bayesian
credible sets in the density estimation model and in the white noise regression model as well as in the Gaussian
sequence model and show that they are frequentist confidence sets. Estimating the drift of a diffusion from
low-frequency observations is a more complicated statistical model. Consistency of the Bayesian approach in
this setting has been established by van der Meulen and van Zanten [44] and has been extended to the multidimensional case by Gugushvili and Spreij [21]. Recently Nickl and Söhl [36] have shown Bayesian posterior
contraction rates for scalar diffusions with unknown drift and unknown diffusion coefficient observed at low
frequency.
2. Application to diffusion processes
2.1. Estimation of the invariant density and its consequences
We now apply the results from the previous section to diffusion processes. At the same time we extend the
results from inference on the invariant probability measure to confidence bands for the drift function. Let us
consider the diffusion
dXt = b(Xt )dt + σdWt ,
t > 0, X0 = x,
(16)
with a Brownian motion Wt , an unknown drift function b : R → R, a volatility parameter σ > 0 and starting
point x ∈ R. We observe X at equidistant time points 0, ∆, 2∆, . . . , (n− 1)∆ for some fixed observation distance
∆ > 0 and sample size n → ∞. Our aim is inference on the drift b.
Underlying the sequence of observations (X∆k )k>0 is a Markov structure described by the transition operator
P∆ f (x) := E[f (X∆ )|X0 = x].
The semi-group (P_t : t ≥ 0) has the infinitesimal generator L on the space of twice continuously differentiable functions given by
Lf(x) = L_b f(x) := b(x) f′(x) + (σ²/2) f″(x).  (17)
If there is an invariant density µ = µ_b, the operator L is symmetric with respect to the scalar product of L²(µ) = {f : ∫ |f|² dµ < ∞}. We impose the following assumptions on the diffusion:
Assumption B. In model (16) let b be continuously differentiable and satisfy b ∈ C^s(D) for s ≥ 1 and a sufficiently large set D ⊆ ℝ containing the interval [a, b] for a < b. Let σ lie in a fixed bounded interval away from the origin. Suppose that b' is bounded and that there are M, r > 0 such that
sign(x) b(x) ≤ −r  for all |x| ≥ M.
More precisely, we will need D = [a − 2^{1−j_0}(2N−1), b + 2^{1−j_0}(2N−1)]. Due to the global Lipschitz continuity
and the assumptions on the drift, equation (16) has a unique strong solution. Moreover, Xt is a Markov process
with invariant probability density given by
µ(x) = C_0 σ^{−2} exp(2σ^{−2} ∫_0^x b(y) dy),  x ∈ ℝ,  (18)
with normalization constant C0 > 0, cf. Bass [4, Chaps. 1,4]. The corresponding Markov chain Z with Zk = Xk∆
satisfies Assumption A from the previous section.
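Formula (18) can be evaluated numerically for a given drift. The sketch below uses trapezoidal integration on a grid containing 0; the Ornstein–Uhlenbeck drift b(x) = −x and the grid are illustrative assumptions, chosen because the resulting invariant density is the N(0, 1/2) density.

```python
import numpy as np

def invariant_density(b, sigma, grid):
    """mu(x) = C0 * sigma^{-2} * exp(2 sigma^{-2} * int_0^x b(y) dy), cf. (18),
    evaluated on a grid containing 0 via trapezoidal integration."""
    steps = (b(grid[:-1]) + b(grid[1:])) / 2 * np.diff(grid)
    B = np.concatenate(([0.0], np.cumsum(steps)))   # antiderivative of b
    B -= B[np.argmin(np.abs(grid))]                 # normalise so that B(0) = 0
    unnorm = sigma**-2 * np.exp(2 * sigma**-2 * B)
    mass = float(np.sum((unnorm[:-1] + unnorm[1:]) / 2 * np.diff(grid)))
    return unnorm / mass                            # C0 turns mu into a density

grid = np.linspace(-6.0, 6.0, 4001)
mu = invariant_density(lambda x: -x, 1.0, grid)     # OU drift b(x) = -x
```

For b(x) = −x and σ = 1 the exact invariant density is exp(−x²)/√π, so the numerical result can be checked against it.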
Proposition 2.1. If the diffusion process (16) satisfies Assumption B, then the Markov chain (Xk∆ )k>0 satisfies
Assumption A where µ ∈ C s+1 (D).
Proof. By a time-change argument we can set σ = 1 without loss of generality. Gihman and Skorohod [17, Thm. 13.2] have given an explicit formula for the transition density p_∆(x,y) with respect to the Lebesgue measure, i.e., P_∆(x,B) = ∫_B p_∆(x,y) dy for all B ∈ B(ℝ). In particular, p_∆(x,y) is strictly positive and thus Z is ψ-irreducible, where ψ is given by the Lebesgue measure on ℝ.
Moreover, (x,y) ↦ p_∆(x,y) is continuous, so that for any compact interval C ⊆ ℝ we have δ := δ(C) := inf_{x,y∈C} p_∆(x,y) > 0 and the small set condition (3) is satisfied:
P_∆(x,B) = ∫_B p_∆(x,y) dy ≥ δ ∫_{B∩C} dy = δ|C| ν(B),
where |C| denotes the Lebesgue measure of C and ν is the uniform distribution on C. It also follows that the
Markov chain is strongly aperiodic.
To show the drift condition (1), we first construct a Lyapunov function for the infinitesimal generator (which is the continuous-time analogue of the drift operator P − Id), that is, we find a function V ≥ 1 such that
LV(x) ≤ −λV(x) + c 1_C(x),  x ∈ ℝ.  (19)
Let V be a smooth function with V(x) = e^{a|x|} for |x| ≥ R for some R > 0. Due to the assumptions on b, we then obtain for these x and R large enough
LV(x) = ½ V''(x) + b(x) V'(x) = (a²/2 + a sign(x) b(x)) V(x) ≤ −λ V(x)
for sufficiently small a, λ and thus the previous inequality is satisfied with C = [−R, R]. To carry this result
over to the drift condition (1), we adopt the approach by Galtchouk and Pergamenshchikov [15, Prop. 6.4]:
Itô's formula yields for all 0 ≤ t ≤ ∆
V(X_t) = V(x) + ∫_0^t L(V)(X_s) ds + ∫_0^t V'(X_s) dW_s.
We note that Fubini's theorem yields E_µ[∫_0^∆ V'(X_s)² ds] = ∫_0^∆ E_µ[V'(X_0)²] ds < ∞ for constants a small enough by (18) and by the assumptions on b. Consequently, we have E_x[∫_0^∆ V'(X_s)² ds] < ∞ for almost all x ∈ ℝ. By the explicit formula for p_∆(x,y) we conclude that E_x[∫_0^∆ V'(X_s)² ds] < ∞ for all x ∈ ℝ. Hence, the stochastic integral is a martingale (under P_x) and Z(t) := P_t V(x) satisfies
Z'(t) = E_x[L(V)(X_t)] = −λZ(t) + ψ(t),  ψ(t) := E_x[L(V)(X_t) + λV(X_t)],
where we have ψ(t) ≤ c P_x(X_t ∈ C) ≤ c by (19). Solving this differential equation, we obtain for all t ∈ [0, ∆]
Z(t) = Z(0) e^{−λt} + ∫_0^t e^{−λ(t−s)} ψ(s) ds ≤ V(x) e^{−λt} + c (1 − e^{−λ∆})/λ.
Therefore, the drift condition follows:
P_∆V(x) − V(x) ≤ (e^{−λ∆} − 1) V(x) + c/λ ≤ −λ̃ V(x) + (c/λ) 1_{{|x|≤R}}(x),
where R > 0 and λ̃ > 0 are chosen such that (1 − e^{−λ∆} − λ̃) V(x) ≥ c/λ for |x| > R. In combination with the ψ-irreducibility the drift condition shows that the Markov chain is positive Harris recurrent.
Since our diffusion is symmetric, in the sense that the transition operator is symmetric with respect to L²(µ), we argue as Bakry et al. [3, Sect. 4.3], using that the Poincaré inequality is implied by a Lyapunov–Poincaré inequality, and we thus have the contraction property (4) [3, Thm. 1.3]. Finally, the smoothness of b in combination with the formula for the invariant probability density (18) implies that µ is in C^{s+1}(D).
Theorem 1.5 and Proposition 1.6 immediately yield
Corollary 2.2. Grant Assumption B and let w = (w_j) be increasing and satisfy √j/w_j → 0. Then the wavelet projection estimator µ̂_{J_n} from (6) with 2^{J_n} = (n/log n)^{1/(2s+3)} satisfies
√n (µ̂_{J_n} − µ) →^d G_µ  in M_0(w).
Moreover, the confidence band C_n = C_n(ζ_α, s+1, u_n) from (11) with critical value ζ_α such that P(‖G_µ‖_M > ζ_α) ≤ α and u_n = w_{J_n}/√J_n satisfies
lim inf_{n→∞} P(µ ∈ C_n) ≥ 1 − α  and  |C_n|_∞ = O_P((n/log n)^{−(s+1)/(2s+3)} u_n).
2.2. Drift estimation via plug-in
Supposing from now on that σ = 1 and rewriting the formula for the invariant measure (18), we see that
b(x) = ½ (log µ(x))'.  (20)
Obviously, b depends on µ in a nonlinear way and the estimation problem is ill-posed because b is a function of the derivative µ'. In general, the same calculation leads to a formula for the function b(x)/σ². Note that all shape properties of the drift function, like monotonicity, extrema, etc., are already determined by b/σ². As
demonstrated by Gobet et al. [20], the information on σ is encoded in the transition operator of the underlying
Markov chain. However, the estimation procedure in this latter article is quite involved and the construction
of adaptive confidence bands in the general setting is beyond the scope of the present article. In the following
we always set σ = 1. Note that if we have an estimator for σ at hand, for instance from a short high-frequency
time series of the diffusion, the results easily carry over to an unknown volatility σ > 0.
Denoting the set of continuous functions on the real line by C(ℝ), we introduce the map
ξ : {f ∈ C¹(ℝ) : f > 0, ‖f‖_{L¹} = 1} → C(ℝ),  f ↦ f'/(2f),
which is one-to-one with inverse function ξ^{−1}(g) = exp(2 ∫_0^· g(y) dy − c_g) with normalization constant c_g ∈ ℝ for any function g in the range of ξ. We can thus estimate the drift function of the diffusion by the plug-in estimator ξ(µ̂_{J_n}).
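The map ξ can be sketched numerically; the sketch below takes the derivative by central finite differences on a grid, which is an illustrative stand-in for the wavelet-based derivative used in the paper.

```python
import numpy as np

def xi(f_vals, grid):
    """Plug-in map xi(f) = f'/(2f), with f' computed by central finite
    differences -- a stand-in for the wavelet-based derivative of the paper."""
    return np.gradient(f_vals, grid) / (2 * f_vals)

grid = np.linspace(-3.0, 3.0, 2001)
mu = np.exp(-grid**2) / np.sqrt(np.pi)   # invariant density of dX = -X dt + dW
b_hat = xi(mu, grid)                      # should recover b(x) = -x, cf. (20)
```

Applied to the exact Ornstein–Uhlenbeck invariant density, the map recovers the drift b(x) = −x up to discretisation error, away from the grid boundary where the one-sided differences are less accurate.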
Using the confidence set C_n(ζ_α, s+1, u_n) for the invariant density µ from (11), a confidence band for the drift can be constructed via
D_n := D_n(ζ_α, s, u_n) := {ξ(f) : f ∈ C_n(ζ_α, s+1, u_n)}.  (21)
Since ξ is one-to-one, an immediate consequence of Corollary 2.2 is that we have for the coverage probability
lim inf n→∞ P (b ∈ Dn ) = lim inf n→∞ P (µ ∈ C n ) > 1 − α. To bound the diameter of Dn , we first note that ξ is
locally Lipschitz continuous: for f, g ∈ C¹(ℝ) both bounded away from zero on [a,b] we have in L^∞([a,b])
‖ξ(f) − ξ(g)‖_∞ = ½ ‖f'/f − g'/g‖_∞ ≤ ½ ‖(f' − g')/f‖_∞ + ½ ‖g'(g − f)/(fg)‖_∞
 ≤ ½ ‖f^{−1}‖_∞ ‖f' − g'‖_∞ + ‖ξ(g)‖_∞ ‖f^{−1}‖_∞ ‖f − g‖_∞
 ≤ ‖f^{−1}‖_∞ (½ + ‖ξ(g)‖_∞) ‖f − g‖_{C¹([a,b])}.  (22)
For f, g ∈ C_n we conclude in L^∞([a,b])
‖ξ(f) − ξ(g)‖_∞ ≤ ‖ξ(f) − ξ(µ)‖_∞ + ‖ξ(g) − ξ(µ)‖_∞ ≤ (½ + ‖b‖_∞)(‖f^{−1}‖_∞ ‖f − µ‖_{C¹([a,b])} + ‖g^{−1}‖_∞ ‖g − µ‖_{C¹([a,b])}).
Analogously to (12), the choice 2^{J_n} = (n/log n)^{1/(2s+3)} yields
‖f − µ‖_{C¹([a,b])} = O_P((n/log n)^{−s/(2s+3)} u_n)  for all f ∈ C_n(ζ_α, s+1, u_n).
We conclude that f −1 is uniformly bounded in L∞ ([a, b]) for all f ∈ C n . Hence, we have proved
Proposition 2.3. Grant Assumption B with σ = 1, s > 0 and let w = (w_j) satisfy √j/w_j → 0. Then the confidence set D_n = D_n(ζ_α, s, u_n) from (21) with critical value ζ_α satisfying P(‖G_µ‖_M > ζ_α) ≤ α, u_n = w_{J_n}/√J_n and J_n chosen such that 2^{J_n} = (n/log n)^{1/(2s+3)} fulfils
lim inf_{n→∞} P(b ∈ D_n) ≥ 1 − α  and  |D_n|_∞ = O_P((n/log n)^{−s/(2s+3)} u_n).
Let us comment on the rate appearing in the previous proposition. Since the identification (20) incorporates the derivative of the invariant measure, drift estimation is an inverse problem, which is ill-posed of degree one. Therefore, the minimax rate for the pointwise or L²-loss is n^{−s/(2s+3)}. Considering the uniform loss, we obtain the rate (n/log n)^{−s/(2s+3)}. Finally, u_n → ∞ is the price for undersmoothing (by using a weighting sequence slightly larger than √j). Note that we obtain a faster rate than Gobet et al. [20], who have proved that the
minimax rate for drift estimation for the mean integrated squared error is n−s/(2s+5) if there is additionally an
unknown volatility function in front of the Brownian motion in (16).
In fact the map ξ is not only Lipschitz continuous, but even Hadamard differentiable (on appropriate function spaces) with derivative at µ
ξ'_µ(h) = ½ (h/µ)',  h ∈ M_0(w).  (23)
Using the delta method [46, Thm. 20.8], we obtain a functional central limit theorem for the plug-in estimator ξ(µ̂_{J_n}).
Theorem 2.4. Grant Assumption B with σ = 1 and let w = (w_j) be increasing and satisfy √j/w_j → 0 and w_j ≤ 2^{jδ} for some δ ∈ (0, 1/2). Let J_n ∈ ℕ fulfil, for some τ ∈ (0,1),
2^{(9/4+δ/2)J_n} w_{J_n} n^{−1/2} = o(1),  √n 2^{−J_n(2s+3)/2} w_{J_n}^{−1} = o(1),  (log n)^{2/τ} n^{−1} 2^{J_n} J_n = O(1).
For w̃_j ≥ 2^{j(1+δ)} we have, as n → ∞,
√n (ξ(µ̂_{J_n}) − b) →^d ξ'_µ(G_µ)  in M_0(w̃).
The proof of this theorem is postponed to Section 5.1. Similarly as in (11), confidence bands for the drift function can alternatively be constructed by
D̄_n(ζ̄_α, s, u_n) := {f : ‖f − ξ(µ̂_{J_n})‖_{M(w̃)} < ζ̄_α/√n, ‖f‖_{C^s} ≤ u_n},
for α ∈ (0, 1/2), a quantile ζ̄_α such that P(‖ξ'_µ(G_µ)‖_{M(w̃)} < ζ̄_α) ≥ 1 − α and a sequence u_n → ∞. With 2^{J_n} = (n/log n)^{1/(2s+3)} and u_n = w̃_{J_n} 2^{−J_n}/√J_n this leads to asymptotic coverage of at least 1 − α and a diameter decaying at rate (n/log n)^{−s/(2s+3)} u_n. Note that in contrast to D_n the diameter of D̄_n is slightly suboptimal due to the δ > 0 that appeared in Theorem 2.4 and which presumably could be removed by a more technical proof. Based on a direct estimator of the drift we will construct a similar confidence band with the optimal diameter (up to undersmoothing) in the next section.
Comparing both constructions of confidence sets, we see that D_n can be understood as the variance stabilised version of D̄_n: the critical value of D_n depends on the unknown µ only through the covariance structure of the limit process G_µ, which seems to be unavoidable due to the underlying Markov chain structure. In contrast, ζ̄_α depends additionally on µ through the derivative ξ'_µ. As a consequence the confidence band D̄_n has the same diameter everywhere while the diameter of D_n changes.
2.3. A direct approach to estimate the drift
Instead of relying only on the estimator µ̂_{J_n} of the invariant density and the plug-in approach, we can use a direct approach to estimate the drift and to obtain its confidence bands. Although there is a one-to-one correspondence between the drift function b and the invariant measure µ, the drift is both the canonical parameter of our model and the main parameter of interest in the context of diffusions. Since we aim for adapting to the regularity of b, the direct estimation approach is natural and, additionally, the resulting confidence bands will have a constant diameter.
Motivated by formula (20), we define our drift estimator for integers J, U ≥ 0 as
b̂_{J,U} = ½ π_J((log µ̂_{J+U})') = ½ Σ_{j≤J} Σ_{k∈K_j} ⟨(log µ̂_{J+U})', ψ_{j,k}⟩ ψ_{j,k},  (24)
using the wavelet projection estimator µ
bJ+U from (6). In contrast to the plug-in estimator in the previous section
the underlying bias-variance trade-off is now driven by the estimation problem of b and the outer projection πJ
onto level J. However, in order to linearise the estimation error, we need a stable prior estimator of µ such that
we cannot simply use the empirical measure µn but instead use its projection onto some resolution level J + U
which is strictly larger than J. As a rule of thumb, U = Un can be chosen such that 2Un = log n implying that
an additional bias term from estimating µ is negligible. Linearising the estimation error, we obtain
⟨b̂_{J,U} − b, ψ_{j,k}⟩ = −⟨µ̂_{J+U} − µ, ψ'_{j,k}/(2µ)⟩ + ⟨R_{J+U}, ψ_{j,k}⟩,  j ≤ J, k ∈ K_j,  (25)
where the remainder is of order oP (n−1/2 ) for appropriate choices of J = Jn , cf. Lemma 4.1 below. In view
of the linear error term and our findings in Section 1, the limit process Gb in the multi-scale space M0 will be
given by
G_b(j,k) ∼ N(0, Σ_{j,k}),  where Σ_{j,k} := Σ_{f,f} is given by (2) with f = ψ'_{j,k}/(2µ),  (26)
with covariances E[G_b(j,k) G_b(l,m)] = Σ_{f₁,f₂} from (2) with f₁ = ψ'_{j,k}/(2µ) and f₂ = ψ'_{l,m}/(2µ). The ill-posedness of the problem is reflected by ψ'_{j,k} being a factor 2^j larger than ψ_{j,k}. We thus need larger weights for high resolution levels to ensure that G_b takes values in M_0(w).
Definition 2.5. A weighting sequence w = (w_j) is called admissible if it is monotonically increasing, satisfies √j 2^j/w_j → 0 as j → ∞ and if there is some δ ∈ (1,2] such that j ↦ 2^{jδ}/w_j is monotonically increasing for large j.
The last condition in the definition is a mild technical assumption that we will need in the multi-scale central limit theorem below. For instance, any weighting sequence w_j = u_j √j 2^j with u_j = j^p for some polynomial rate p > 0 is admissible for any δ ∈ (1,2]. Note that admissibility of w implies in particular that w_j ≲ 2^{jδ}, which allows to compare the ‖·‖_∞-norm with the ‖·‖_M-norm. We find an analogous result to Lemma 1.3, cf. Castillo and Nickl [8, Prop. 3].
Lemma 2.6. G_b from (26) satisfies E[‖G_b‖_{M(w)}] < ∞ for the weights given by w_j = √j 2^j. Moreover, L(G_b) is a tight Gaussian Borel probability measure in M_0(w) for any admissible sequence w.
For the following result suppose that the wavelet basis (ϕj0 ,l , ψj,k : j > j0 , l ∈ L, k ∈ Kj ) of L2 (R) is
sufficiently regular (i.e., satisfies (34) with γ > 3/2 + δ), for instance, Daubechies’ wavelets of order N > 20.
Theorem 2.7. Grant Assumption B with σ = 1 and let w = (w_j) be admissible. Let J_n → ∞, U_n → ∞ fulfil
√n 2^{−J_n(s+1/2)} w_{J_n}^{−1} = o(1),  √n 2^{−(J_n+U_n)(2s+1)} = o(1),  n^{−1/2} 2^{2(J_n+U_n)} (J_n+U_n) = o(1).
Then b̂_{J,U} from (24) satisfies, as n → ∞,
√n (b̂_{J_n,U_n} − b) →^d G_b  in M_0(w)
for the tight Gaussian random variable G_b in M_0(w) given by (26).
The proof of this theorem is postponed to Section 4. The first condition on Jn is the bias condition for b in
M0 . The latter two conditions on Jn + Un are determined by a bias and a variance condition for µ which we
will need to bound the remainder RJn +Un from (25) in L∞ . If δ < 1/2 + s in Definition 2.5, then the second
condition is strictly weaker than the first one.
Similarly to the confidence band for µ in Proposition 1.6 we can now construct a confidence band for the drift function b. For some α ∈ (0,1) we consider
E_n := E_n(ζ_α, s, u_n) := {f : ‖f − b̂_{J_n,U_n}‖_M < ζ_α/√n, ‖f‖_{C^s} ≤ u_n},  (27)
where ζ_α is chosen such that P(‖G_b‖_M < ζ_α) ≥ 1 − α and (u_n)_n is a diverging sequence.
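The sets C_n, D_n and E_n are all balls in a multi-scale norm of the type ‖h‖_M = sup_j max_k w_j^{−1}|⟨h, ψ_{j,k}⟩|. A toy version with Haar wavelets on a dyadic grid over [0,1] (an illustrative assumption; the paper works with boundary-corrected Daubechies-type bases) can be computed as follows.

```python
import numpy as np

def haar_multiscale_norm(f_vals, weights):
    """sup_j max_k |<f, psi_{j,k}>| / w_j with Haar wavelets psi_{j,k} on [0, 1],
    the inner products approximated by Riemann sums on a dyadic grid."""
    N = len(f_vals)                      # N = 2**j_max equidistant grid points
    j_max = int(np.log2(N))
    norm = 0.0
    for j in range(j_max):
        half = N >> (j + 1)              # half-support of psi_{j,k} in grid points
        for k in range(2**j):
            left = f_vals[2 * k * half:(2 * k + 1) * half].sum()
            right = f_vals[(2 * k + 1) * half:(2 * k + 2) * half].sum()
            coeff = 2.0**(j / 2) * (left - right) / N   # <f, psi_{j,k}>
            norm = max(norm, abs(coeff) / weights[j])
    return norm

f = np.concatenate([np.ones(8), np.zeros(8)])    # indicator of [0, 1/2)
```

Constant functions have vanishing wavelet coefficients and hence norm zero, while the indicator of [0, 1/2) has the single nonzero Haar coefficient ⟨f, ψ_{0,0}⟩ = 1/2.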
Proposition 2.8. Grant Assumption B with σ = 1, s ≥ 1 and let w = (w_j) be admissible. For α ∈ (0,1) let ζ_α > 0 satisfy P(‖G_b‖_M > ζ_α) ≤ α and choose J_n := J_n(s) and U_n → ∞ such that
2^{J_n} = (n/log n)^{1/(2s+3)}  and  2^{U_n} = O(log n).
Then the confidence set E_n = E_n(ζ_α, s, u_n) from (27) with u_n := w_{J_n} 2^{−J_n}/√J_n satisfies
lim inf_{n→∞} P(b ∈ E_n) ≥ 1 − α  and  |E_n|_∞ = O_P((n/log n)^{−s/(2s+3)} u_n).
Proof. The proof is essentially the same as for the confidence band of the invariant probability density. We show that the asymptotic coverage probability is at least 1 − α and obtain for f, g ∈ E_n, as in (12), the bound
‖f − g‖_∞ = O_P(n^{−1/2} 2^{J_n/2} w_{J_n}) + O_P(2^{−J_n s} u_n).
Using u_n = w_{J_n} 2^{−J_n}/√J_n we thus have
‖f − g‖_∞ = O_P(n^{−1/2} 2^{3J_n/2} J_n^{1/2} u_n) + O_P(2^{−J_n s} u_n).
The choice of J_n yields n^{−1/2} √(2^{3J_n} J_n) ≲ (n/log n)^{−s/(2s+3)} = 2^{−J_n s}.
3. Adaptive confidence bands for drift estimation
Inspired by Giné and Nickl [18], we will now construct an adaptive version of the confidence set En from (27).
To this end we estimate the regularity s of the drift with a Lepski-type method. For some maximal regularity
r ≥ 1, let the integers 0 < J_min < J_max be given by
2^{J_min} ∼ (n/log n)^{1/(2r+3)},  2^{J_max} ∼ n^{1/4}/(log n)².
Note that Jmin , Jmax depend on the sample size n, which is suppressed in the notation. If we knew in advance
that b has regularity r, then we would choose the resolution level Jmin . The upper bound Jmax is chosen such
that Jmax + Un satisfies the third condition in Theorem 2.7. The set in which we will adaptively choose the
optimal resolution level for regularities s ∈ [1, r] is defined by
Jn := [Jmin , Jmax ] ∩ N.
Similar to Giné and Nickl [18, Lem. 2], we show under the following assumption on b that the optimal truncation
level can be consistently estimated up to a fixed integer.
Assumption C. Let b ∈ C^s(D), s ≥ 1, satisfy for constants 0 < d₁ < d₂ < ∞ and an integer J₀ > 0
d₁ 2^{−Js} ≤ ‖π_J(b) − b‖_{L^∞([a,b])} ≤ d₂ 2^{−Js}  for all J ≥ J₀.  (28)
The second inequality in (28) is the well-known Jackson inequality, which is satisfied for all usual choices of wavelet bases. The first inequality is the main condition here, called the self-similarity assumption. It excludes the cases where the bias would be smaller than the usual order 2^{−Js}. Although the estimator b̂_{J,U} would profit from a smaller bias, we cannot hope for consistent estimation of the optimal projection level and the resulting regularity index s if (28) (or a slight generalisation by Bull [5]) is violated. Indeed, Hoffmann and Nickl [26] have shown that this kind of condition is necessary to construct adaptive and honest confidence bands. On
the other hand, it has been proved by Giné and Nickl [18] that the set of functions that do not satisfy the
self-similarity assumption is nowhere dense in the Hölder norm topologies. In that sense, the self-similarity
assumption is satisfied by “typical” functions. We will give an illustrative example next. Probabilistic examples
for self-similar functions are those Gaussian processes which can be represented as stochastic series expansions
like the Karhunen-Loève expansion for Brownian motion or typical examples of Bayesian priors. Naturally,
more regular functions b ∈ C r (D) for some r > s cannot satisfy Assumption C. For a further discussion and
examples we refer to Giné and Nickl [18, Section 3.5] as well as Bull [5].
Example 3.1. Let b be a smooth function on ℝ except for some point x₀ ∈ (a,b) where the s-th order derivative b^{(s)} has a jump for some integer s ≥ 1. Locally around x₀ the function b can be approximated by a Taylor polynomial b(x) = β_s^±(x − x₀)^s + Σ_{l=0}^{s−1} β_l (x − x₀)^l + O(|x − x₀|^{s+1}) with coefficients β₀, ..., β_{s−1} ∈ ℝ and where the coefficient of order s is some β_s^+ ∈ ℝ or β_s^− ∈ ℝ depending on x > x₀ or x < x₀, respectively. Due to the jump of b^{(s)} we have β_s^+ ≠ β_s^−.
Choose k_j as the nearest integer to 2^j x₀, implying that x₀ lies in the middle of the support of ψ_{j,k_j}. Using the elementary estimate |⟨f, ψ_{j,k}⟩| ≤ ‖f‖_{L^∞(supp ψ_{j,k})} 2^{−j/2} ‖ψ‖_{L¹}, we obtain
‖π_J(b) − b‖_{L^∞([a,b])} = ‖Σ_{j>J,k} ⟨b, ψ_{j,k}⟩ ψ_{j,k}‖_{L^∞([a,b])} ≳ sup_{j>J} 2^{j/2} |⟨b, ψ_{j,k_j}⟩|.
For sufficiently large j, the regularity of the wavelet basis, which is thus orthogonal to polynomials, and its compact support yield
2^{j/2} |⟨b, ψ_{j,k_j}⟩| ≥ 2^j |∫ (b(x) − β_s^−(x − x₀)^s − Σ_{l=0}^{s−1} β_l (x − x₀)^l) ψ(2^j x − k_j) dx| + O(2^{−j(s+1)})
 ≥ 2^j |β_s^+ − β_s^−| |∫_{x>x₀} (x − x₀)^s ψ(2^j x − k_j) dx| + O(2^{−j(s+1)})
 ≥ 2^{−js} |β_s^+ − β_s^−| min_{ε∈[−1/2,1/2]} |∫_{y>0} y^s ψ(y + ε) dy| + O(2^{−j(s+1)}).
We conclude for J sufficiently large that ‖π_J(b) − b‖_{L^∞([a,b])} ≳ 2^{−Js}.
The oracle choice J_n^*, which balances the bias ‖π_J(b) − b‖_{L^∞([a,b])} and the main stochastic error, is given by
J_n^* := J_n^*(s) := min{J ∈ J_n : (d₂ + 1) 2^{−Js} ≤ (K/4) √(2^{3J} J/n)}
for some suitable constant K > 0 depending only on ψ, inf{µ(x) : x ∈ ∪_{l∈L} supp φ_{j₀,l}} and the maximal asymptotic variance Σ̃ = sup_{j,k} Σ_{j,k}, where the latter two quantities can be replaced by the consistent estimators which we have discussed in Section 1.3. We see easily that 2^{J_n^*} ∼ (n/log n)^{1/(2s+3)}. Following Lepski's approach, we define the estimator for J_n^* by
Ĵ_n = min{J ∈ J_n : ‖b̂_J − b̂_j‖_{L^∞([a,b])} ≤ K √(2^{3j} j/n)  for all j > J, j ∈ J_n}.  (29)
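The selection rule (29) can be sketched as follows; the estimators on a common grid, the constant K and the toy data below are illustrative assumptions, not the quantities of the paper.

```python
import numpy as np

def lepski_level(levels, estimators, K, n):
    """Lepski-type rule, cf. (29): return the smallest J in `levels` such that
    the sup-distance between the level-J estimate and every finer estimate
    stays below the stochastic error bound K * sqrt(2**(3j) * j / n)."""
    for J in levels:
        if all(
            np.max(np.abs(estimators[J] - estimators[j]))
            <= K * np.sqrt(2.0**(3 * j) * j / n)
            for j in levels
            if j >= J
        ):
            return J
    return levels[-1]

levels = [3, 4, 5]
n = 1000
est = {3: np.full(100, 50.0), 4: np.zeros(100), 5: np.zeros(100)}
J_hat = lepski_level(levels, est, K=1.0, n=n)   # level 3 is rejected here
```

In this toy example the level-3 estimate is far from the finer ones, so the rule rejects it and returns 4; if all estimates agree, the coarsest level is selected.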
Lemma 3.2. Grant Assumptions B and C for s ∈ [1,r] with some r ≥ 1 and σ = 1. Let w be admissible. Then there are a constant K > 0, depending only on ψ, inf{µ(x) : x ∈ ∪_{l∈L} supp φ_{j₀,l}} and the maximal asymptotic variance Σ̃ = sup_{j,k} Σ_{j,k}, an integer M ≥ 0 depending only on d₁, d₂, K, and for any τ ∈ (0,1) constants C, c > 0 depending on τ, K, ψ such that
P(Ĵ_n ∉ [J_n^* − M, J_n^*]) ≤ C(n^{−cJ_min} + e^{−cJ_min^τ}) → 0.
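For intuition on the oracle level appearing in this lemma, J_n^* can be computed directly from its defining inequality; the constants d₂ = K = 1 below are placeholders for the (unknown) quantities of the paper.

```python
import numpy as np

def oracle_level(s, n, d2=1.0, K=1.0, J_range=range(1, 41)):
    """Smallest J with (d2 + 1) * 2^(-J*s) <= (K/4) * sqrt(2^(3J) * J / n):
    the level where the bias bound drops below the stochastic error bound."""
    for J in J_range:
        if (d2 + 1) * 2.0**(-J * s) <= (K / 4) * np.sqrt(2.0**(3 * J) * J / n):
            return J
    return J_range[-1]

# J* grows like log2(n / log n) / (2s + 3), here with s = 1:
levels = [oracle_level(1.0, n) for n in (10**4, 10**6, 10**8)]
```

As expected from 2^{J_n^*} ∼ (n/log n)^{1/(2s+3)}, the selected level grows slowly and monotonically with the sample size.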
The proof of this lemma relies on the concentration result in Proposition 1.2 and is postponed to Section 5.2. Using that Ĵ_n is a reasonable estimator of J_n^*, we obtain a completely data-driven estimator
b̂ := b̂_{Ĵ_n, U_n},  with Ĵ_n from (29) and 2^{U_n} = log n.  (30)
Corollary 3.3. In the situation of Lemma 3.2 the adaptive estimator b̂ defined by (30) satisfies ‖b̂ − b‖_M = O_P(n^{−1/2}) and ‖b̂ − b‖_{L^∞([a,b])} = O_P((n/log n)^{−s/(2s+3)} u_n) with u_n := w_{J_n^*} 2^{−J_n^*}/√(J_n^*). Further, for every m ∈ {0, 1, ..., M} we have
√n (b̂_{J_n^*−m, U_n} − b) →^d G_b  in M_0(w)
as n → ∞ for the tight Gaussian random variable G_b in M_0(w) given by (26).
Proof. Combining Lemma 3.2 and Theorem 2.7, there is for any δ > 0 a constant C > 0 such that for n large enough
P(√n ‖b̂ − b‖_M > C) ≤ Σ_{J=J_n^*−M}^{J_n^*} P(√n ‖b̂_J − b‖_M > C) + o(1) ≤ (M+1)δ.
Since M is a finite constant, we have ‖b̂ − b‖_M = O_P(n^{−1/2}). Using that 2^{Ĵ_n} ∼ 2^{J_n^*}, a calculation similar to (12) yields the bound for the uniform norm. For the second claim notice that the estimators b̂_{J_n^*−m, U_n} satisfy the conditions of Theorem 2.7.
The bound for the uniform risk is slightly suboptimal because u_n diverges arbitrarily slowly (depending on the choice of w) to infinity. Using direct estimates of the ‖·‖_∞-norm in the proofs in Section 4, this additional factor could be circumvented. However, it can be interpreted as an additional factor corresponding to a slight undersmoothing, which is often used to make the bias negligible in the construction of confidence bands.
Another consequence of Lemma 3.2 is that we can consistently estimate the regularity s of b. For a sequence of random variables (v_n) with v_n^{−1} = o_P(1) we define the estimator
ŝ_n := max{1, (log n − log log n)/(2(log 2)(Ĵ_n + v_n)) − (3/2)(1 + v_n/Ĵ_n)}.  (31)
Using that 2^{J_n^*} ∼ (n/log n)^{1/(2s+3)}, we derive from Lemma 3.2 the following corollary. The proof can be found in Section 5.3.
Corollary 3.4. In the situation of Lemma 3.2 the estimator ŝ_n given by (31) satisfies, for any sequence of random variables (v_n) with v_n^{−1} = o_P(1),
P(ŝ_n ≤ s) → 1  and  s − ŝ_n = O_P(v_n/J_n^*).
With the estimator b̂ from (30) we can now construct our adaptive confidence bands as follows. By a Bonferroni correction we take care of the possible dependence between the estimators b̂_{J_n^*−m, U_n} and the adaptive choice Ĵ_n of the resolution level. In this way sample splitting can be avoided, which was also used by Bull [5].
For any level α ∈ (0,1) let β = α/(M+1) and define
Ẽ_n := Ẽ_n(ζ̂_{β,n}, ŝ_n, t_n) := {f : ‖f − b̂‖_M < ζ̂_{β,n}/√n, ‖f‖_{C^{ŝ_n}} ≤ t_n},  (32)
where (t_n) is a sequence of random variables with t_n^{−1} = o_P(1) and ζ̂_{β,n} is an (over-)estimator of the critical value ζ_β given by P(‖G_b‖_M < ζ_β) ≥ 1 − β, similarly to the construction in Section 1.3. Now we can state our final theorem:
Theorem 3.5. Grant Assumptions B and C for σ = 1, s ∈ [1,r] with some r ≥ 1. Let w = (w_j) be admissible and define u_n := w_{J_n^*} 2^{−J_n^*}/√(J_n^*) as well as û_n := w_{Ĵ_n} 2^{−Ĵ_n}/√(Ĵ_n). For α ∈ (0,1) set β := α/(M+1) with M as in Lemma 3.2. Let ζ_β > 0 be given by P(‖G_b‖_M > ζ_β) ≤ β and let ζ̂_{β,n} be an (over-)estimator satisfying P(ζ̂_{β,n} > ζ_β − ε) → 1 for all ε > 0. Then the confidence set Ẽ_n = Ẽ_n(ζ̂_{β,n}, ŝ_n, t_n) given by (32), where we choose t_n := û_n and ŝ_n according to (31) with v_n = o_P(log û_n) and v_n^{−1} = o_P(1), satisfies
lim inf_{n→∞} P(b ∈ Ẽ_n) ≥ 1 − α  and  |Ẽ_n|_∞ = O_P((n/log n)^{−s/(2s+3)} u_n).
Proof. We will adapt the proof of Proposition 2.8 to the estimated quantities Ĵ_n, ŝ_n and ζ̂_β. By Corollary 3.4 the probability of the event {ŝ_n ≤ s} converges to one. Due to v_n^{−1} = o_P(1), we thus have b ∈ C^{ŝ_n}(v_n) with probability tending to one. Using Lemma 3.2 and Corollary 3.3, we infer
lim sup_{n→∞} P(b ∉ Ẽ_n) = lim sup_{n→∞} Σ_{m=0}^{M} P(√n ‖b̂_{J_n^*−m, U_n} − b‖_M > ζ̂_{β,n}) ≤ (M+1) P(‖G_b‖_M > ζ_β) ≤ α.
We conclude that lim inf_{n→∞} P(b ∈ Ẽ_n) ≥ 1 − α.
To estimate the diameter, we proceed as in (12). Applying additionally Corollary 3.4, we obtain for any f, g ∈ Ẽ_n
‖f − g‖_∞ ≲ Σ_{j≤J_n^*} 2^{j/2} max_k |⟨f − g, ψ_{j,k}⟩| + Σ_{j>J_n^*} 2^{j/2} max_k |⟨f − g, ψ_{j,k}⟩|
 ≤ (‖f − b̂‖_M + ‖g − b̂‖_M) Σ_{j≤J_n^*} 2^{j/2} w_j + ‖f − g‖_{C^{ŝ_n}} Σ_{j>J_n^*} 2^{−jŝ_n}
 = O_P(n^{−1/2} 2^{J_n^*/2} w_{J_n^*}) + O_P(t_n 2^{−J_n^* s + O_P(v_n)})
 = O_P(n^{−1/2} 2^{3J_n^*/2} (J_n^*)^{1/2} u_n + 2^{−J_n^* s} û_n),
where we have plugged in the choices of t_n, u_n and v_n, and û_n ≲ u_n with probability converging to one. Since 2^{J_n^*} ∼ (n/log n)^{1/(2s+3)}, the assertion follows.
The confidence bands are constructed explicitly and this helps to verify that the confidence bands are honest,
i.e. the coverage is achieved uniformly over some set of the unknown parameter. The general philosophy being
that uniformity in the assumptions leads to uniformity in the statements, the detailed derivation of honesty is
tedious so that we only sketch it here. The main ingredients of the proof are the central limit theorem and the
concentration inequality for Markov chains. In the original version of the concentration inequality, Theorem 9
by Adamczak and Bednorz [1], the constants are given explicitly in terms of the assumptions and thus the
concentration inequality is uniform in the underlying Markov chain Z. It is also to be expected that the central
limit theorem holds uniformly in the bounded-Lipschitz metric with respect to Z although this is not explicitly
contained in the statement. With these uniform ingredients a uniform version of Theorem 1.5 can be proved,
where the convergence in distribution is again metrised in the bounded-Lipschitz metric. In combination with
a uniform bound on the Lebesgue densities of kGµ kM this leads to honest confidence bands in Proposition 1.6.
Thanks to the explicit derivation of Assumption A from Assumption B, uniformity in the diffusion model carries
over to uniformity in the Markov chain and we see that the confidence bands in Proposition 2.3 are honest.
Likewise a uniform version of Theorem 2.7 can be proved. Provided the random variables kGb kM have uniformly
bounded Lebesgue densities this uniform version entails honest and adaptive confidence bands for the drift in
Theorem 3.5.
4. Proof of Theorem 2.7
In the sequel we use the notation J^+ = J + U for the projection level of µ̂_{J^+} and we define
S := ∪_{l∈L} supp φ_{j₀,l} ⊆ [a − 2^{−j₀}(2N−1), b + 2^{−j₀}(2N−1)].
To analyse the estimation error of the wavelet coefficients ⟨b̂_{J,U}, ψ_{j,k}⟩, we apply the following linearisation lemma:
Lemma 4.1. Grant Assumption B with σ = 1. For j ∈ {−1, j₀, ..., J} and k ∈ K_j we have
⟨b̂_{J,U} − b, ψ_{j,k}⟩ = −⟨µ̂_{J^+} − µ, ψ'_{j,k}/(2µ)⟩ + ⟨R_{J^+}, ψ_{j,k}⟩,
where the remainder is given by R_{J^+} = −((µ̂_{J^+} − µ)/(2µ̂_{J^+})) ((µ̂_{J^+} − µ)/µ)'. If J^+ = J_n^+ satisfies, for some τ ∈ (0,1),
2^{−J_n^+(s+1)} = o(1)  and  (log n)^{2/τ} n^{−1} 2^{J_n^+} J_n^+ = O(1),
then
‖R_{J_n^+}‖_{L^∞(S)} = O_P(n^{−1} J_n^+ 2^{2J_n^+} + 2^{−J_n^+(2s+1)}).
Proof. Writing η := (µ̂_{J^+} − µ)/µ, the chain rule yields
½ (log µ̂_{J^+})' − b = ½ (log(1+η))' = η'/(2(1+η)) = ½ ((µ̂_{J^+} − µ)/µ)' + R_{J^+},
where the remainder is given by
R_{J^+} := −ηη'/(2(1+η)) = −((µ̂_{J^+} − µ)/(2µ̂_{J^+})) ((µ̂_{J^+} − µ)/µ)'.
Using integration by parts with vanishing boundary terms, the wavelet coefficients corresponding to the linear term can be written as
½ ⟨((µ̂_{J^+} − µ)µ^{−1})', ψ_{j,k}⟩ = −½ ⟨µ̂_{J^+} − µ, ψ'_{j,k} µ^{−1}⟩.
Let us bound the remainder, starting with ‖µ̂'_{J_n^+} − µ'‖_{L^∞(S)}. Decomposing the uniform error into a bias and a stochastic error term, we obtain
‖µ̂'_{J_n^+} − µ'‖_{L^∞(S)} ≤ ‖Σ_{j≤J_n^+, k} ⟨µ_n − µ, ψ_{j,k}⟩ ψ'_{j,k}‖_{L^∞(S)} + ‖Σ_{j>J_n^+, k} ⟨µ, ψ_{j,k}⟩ ψ'_{j,k}‖_{L^∞(S)} =: V_n + B_n.
Using the localisation property of the wavelet function, ‖Σ_k |ψ(• − k)|‖_∞ ≲ 1 (which holds for ψ' as well), and the regularity of µ ∈ C^{s+1}(D), implying sup_{j,k: supp ψ_{j,k}∩S≠∅} 2^{j(s+3/2)} |⟨µ, ψ_{j,k}⟩| < ∞, the bias can be estimated by
B_n ≲ Σ_{j>J_n^+} (max_{k: supp ψ_{j,k}∩S≠∅} |⟨µ, ψ_{j,k}⟩|) ‖Σ_k |ψ'_{j,k}|‖_{L^∞(S)} ≲ Σ_{j>J_n^+} 2^{−js} ≲ 2^{−J_n^+ s}.
For the stochastic error term we obtain similarly
V_n ≲ Σ_{j≤J_n^+} 2^{3j/2} max_{k: supp ψ_{j,k}∩S≠∅} |⟨µ_n − µ, ψ_{j,k}⟩|.
The maximum of 2^j subgaussian random variables is of order O_P(√j). More precisely, Proposition 1.2 and the assumptions on J_n^+ yield for any j₀ ≤ j ≤ J_n^+ and τ ∈ (0,1), similarly to (10),
P(max_{k: supp ψ_{j,k}∩S≠∅} √n |⟨µ_n − µ, ψ_{j,k}⟩| > √j t) ≲ 2^j exp(−c₁ (log n) j^τ t^τ) + 2^j exp(−c₂ j (t ∧ t²))
 ≤ exp(j(log 2 − c₁ j^{τ−1} (log n) t^τ)) + exp(j(log 2 − c₂ (t ∧ t²))).
Using J_n^+ ≲ log n, the right-hand side of the previous display is arbitrarily small for large enough t. An analogous estimate holds for the scaling functions ψ_{−1,•}. Therefore,
‖µ̂'_{J_n^+} − µ'‖_∞ = O_P(Σ_{j₀≤j≤J_n^+} 2^{3j/2} √(j/n)) + O(2^{−J_n^+ s}) = O_P(2^{3J_n^+/2} √(J_n^+/n)) + O(2^{−J_n^+ s}).
Analogously, we have
‖µ̂_{J_n^+} − µ‖_∞ = O_P(2^{J_n^+/2} √(J_n^+/n)) + O(2^{−J_n^+(s+1)}).
Since µ is bounded away from zero on S, the choice of J_n^+ yields in particular that lim_{n→∞} P(inf_{x∈S} µ̂_{J_n^+}(x) ≥ ½ inf_{x∈S} µ(x)) = 1. We conclude
‖R_{J_n^+}‖_{L^∞(S)} = O_P(‖µ̂'_{J_n^+} − µ'‖_{L^∞(S)} ‖µ̂_{J_n^+} − µ‖_{L^∞(S)} + ‖µ̂_{J_n^+} − µ‖²_{L^∞(S)})
 = O_P((2^{3J_n^+/2} √(J_n^+/n) + 2^{−J_n^+ s})(2^{J_n^+/2} √(J_n^+/n) + 2^{−J_n^+(s+1)})),
which shows the asserted bound for ‖R_{J_n^+}‖_{L^∞(S)}.
The linearised stochastic error term can be decomposed into
−½ ⟨µ̂_{J^+} − µ, ψ'_{j,k} µ^{−1}⟩ = −⟨Σ_{l≤J^+, m} ⟨µ_n − µ, ψ_{l,m}⟩ ψ_{l,m} − (Id − π_{J^+})µ, ψ'_{j,k}/(2µ)⟩
 = −⟨µ_n − µ, ψ'_{j,k}/(2µ)⟩ + ⟨µ_n − µ, (Id − π_{J^+}) ψ'_{j,k}/(2µ)⟩ + ⟨(Id − π_{J^+})µ, ψ'_{j,k}/(2µ)⟩.  (33)
Roughly, for j ≤ J ≤ J^+, Theorem 1.5 (or an analogous result for ill-posed problems) applies to the first term in the above display, the second term should converge to zero by the localisation of the ψ_{j,k} in the Fourier domain, and the third term is a bias that can be bounded by the smoothness of µ. If U_n → ∞ this "µ-bias" term is of smaller order than the "b-bias", which is determined by Σ_{j>J, k} ⟨b, ψ_{j,k}⟩ ψ_{j,k}.
Let us make these considerations precise. We will need the following lemma, which relies on the localisation of the wavelets in the Fourier domain. More precisely, ψ can be chosen such that for some γ > 1 we have
φ, ψ ∈ C^γ(ℝ)  and  ∫ x^k ψ(x) dx = 0  for k = 0, ..., ⌈γ⌉.
In the Fourier domain we conclude by the compact support of ψ
|Fφ(u)| ≲ 1/(1 + |u|)^γ,  |Fψ(u)| ≲ |u|^γ/(1 + |u|)^{2γ},  u ∈ ℝ.  (34)
Lemma 4.2. Grant Assumption B and let the compactly supported father and mother wavelet functions φ and ψ satisfy (34) for some γ > 1. Then for any m ∈ ℤ, j < l and k ∈ K_j
|⟨ψ_{l,m}, ψ'_{j,k} µ^{−1}⟩| ≲ 2^{l−(l−j)(γ−1/2)} + 2^{−l+j},
Σ_{m∈ℤ} |⟨ψ_{l,m}, ψ'_{j,k} µ^{−1}⟩| ≲ 2^{l−(l−j)(γ−3/2)} + 1  and
Σ_{m∈ℤ} |⟨ψ_{l,m}, ψ'_{j,k} µ^{−1}⟩|² ≲ 2^{2l−(l−j)(2γ−2)} + 2^{−(l−j)},
where we have to replace j by j₀ on the right-hand side for ψ_{−1,k} = φ_{j₀,k}.
Proof. Let Γ > 0 be large enough such that supp φ ∪ supp ψ ⊆ [−Γ, Γ]. Noting that the following scalar product can only be nonzero if the support of ψ_{l,m} is contained in D, a Taylor expansion of µ^{−1} yields for j ≥ j₀
|⟨ψ_{l,m} µ^{−1}, ψ'_{j,k}⟩| ≤ |µ(2^{−l}m)|^{−1} |⟨ψ_{l,m}, ψ'_{j,k}⟩| + 2^{−l} Γ max_{x: |x−m2^{−l}|≤2^{−l}Γ} |(µ^{−1})'(x)| ∫ |ψ_{l,m}(x)| |ψ'_{j,k}(x)| dx
 ≤ ‖µ^{−1}‖_{L^∞(D)} |⟨ψ_{l,m}, ψ'_{j,k}⟩| + 2^{−l+j} Γ max_{x: |x−m2^{−l}|≤2^{−l}Γ} |(µ^{−1})'(x)| ‖ψ‖_{L²} ‖ψ'‖_{L²}.
We conclude
|⟨ψ_{l,m}, ψ'_{j,k} µ^{−1}⟩| ≲ ‖µ^{−1}‖_{L^∞(D)} |⟨ψ_{l,m}, ψ'_{j,k}⟩| + 2^{−l+j} ‖(µ^{−1})'‖_{L^∞(D)}.  (35)
Using Plancherel's identity, Fψ_{l,m}(u) = F[2^{l/2} ψ(2^l • − m)](u) = 2^{−l/2} e^{imu2^{−l}} Fψ(2^{−l}u), and (34), we obtain
|⟨ψ_{l,m}, ψ'_{j,k}⟩| ≤ (2^{−(j+l)/2}/(2π)) ∫ |Fψ(2^{−j}u) u Fψ(2^{−l}u)| du
 ≲ 2^{−(j+l)/2} ∫ 2^{−(j+l)γ} |u|^{2γ+1} / ((1 + 2^{−j}|u|)^{2γ} (1 + 2^{−l}|u|)^{2γ}) du
 ≤ 2^{−(j+l)/2−(j+l)γ} ∫ |u| du / (2^{−2γj} (1 + 2^{−l}|u|)^{2γ}) = 2^{l−(l−j)(γ−1/2)} ∫ |v| dv / (1 + |v|)^{2γ},
where we have substituted v = 2^{−l}u in the last step. Due to γ > 1, the integral in the last display is finite, so that combining this bound with (35) yields the assertions, noting that by the compact support of ψ the scalar products ⟨ψ_{l,m}, ψ'_{j,k} µ^{−1}⟩ are nonzero for only O(2^{l−j}) many m.
For j = −1 we substitute again v = 2^{−l}u and obtain analogously
|⟨ψ_{l,m}, φ'_{j₀,k}⟩| ≲ ∫ 2^{−(j₀+l)/2−lγ} |u|^{γ+1} / ((1 + 2^{−l}|u|)^{2γ} (1 + 2^{−j₀}|u|)^γ) du ≤ 2^{l−(l−j₀)(γ−1/2)} ∫ |v| dv / (1 + |v|)^{2γ}
and
|⟨ψ_{l,m}, ψ'_{−1,k} µ^{−1}⟩| ≲ ‖µ^{−1}‖_{L^∞(D)} |⟨ψ_{l,m}, φ'_{j₀,k}⟩| + 2^{−l+j₀} ‖(µ^{−1})'‖_{L^∞(D)}.
Now we can bound the bias in (33) in the multi-scale space M_0.

Lemma 4.3. Let the weighting sequence w be admissible and grant Assumption B and (34) for some γ > 3/2 + δ, δ ∈ (1, 2]. Then we have
\[ \Big\| \Big( \big\langle (\mathrm{Id} - \pi_{J+U})\mu, \psi'_{j,k}/(2\mu) \big\rangle \Big)_{j\le J, k} \Big\|_{\mathcal{M}} \lesssim 2^{-J(s+1/2)}\, 2^{-U(s+3/2)}\, w_J^{-1}. \]
Proof. Recall that we have by definition
\[ \Big\| \Big( \big\langle (\mathrm{Id}-\pi_{J^+})\mu, \psi'_{j,k}/(2\mu)\big\rangle \Big)_{j\le J,k} \Big\|_{\mathcal{M}} = \sup_{j\le J} \max_{k\in K_j} w_j^{-1} \big| \big\langle (\mathrm{Id}-\pi_{J^+})\mu, \psi'_{j,k}/(2\mu)\big\rangle \big|. \]
As in the proof of Theorem 1.5 we have
\[ \sup_{l,m:\,\mathrm{supp}\,\psi_{l,m}\cap S \neq \emptyset} 2^{l(s+3/2)} |\langle \psi_{l,m}, \mu\rangle| \lesssim \|\mu\|_{C^{s+1}(D)}. \]
Hence, for all j ≤ J,
\begin{align*}
\big| \big\langle (\mathrm{Id}-\pi_{J^+})\mu, \psi'_{j,k}/(2\mu)\big\rangle \big| &= \Big| \sum_{l>J^+, m} \langle \mu, \psi_{l,m}\rangle \langle \psi_{l,m}, \psi'_{j,k}/(2\mu)\rangle \Big| \\
&\le \sup_{l>J^+, m} 2^l |\langle \mu, \psi_{l,m}\rangle| \sum_{l>J^+, m} 2^{-l} |\langle \psi_{l,m}, \psi'_{j,k}/(2\mu)\rangle| \\
&\lesssim 2^{-J^+(s+1/2)} \sum_{l>J^+, m} 2^{-l} |\langle \psi_{l,m}, \psi'_{j,k}/(2\mu)\rangle|.
\end{align*}
Now Lemma 4.2 yields
\[ \sum_{l>J^+, m} 2^{-l} |\langle \psi_{l,m}, \psi'_{j,k}/(2\mu)\rangle| \lesssim \sum_{l>J^+} 2^{-(l-j)(\gamma-3/2)} + \sum_{l>J^+} 2^{-l} \lesssim 2^{-(J^+-j)(\gamma-3/2)} + 2^{-J^+}. \]
Due to the monotonicity of j ↦ 2^{jδ}w_j^{-1}, we conclude for γ > 3/2 + δ
\[ \sup_{j\le J} \max_{k\in K_j} w_j^{-1} \big| \big\langle (\mathrm{Id}-\pi_{J^+})\mu, \psi'_{j,k}/(2\mu)\big\rangle \big| \lesssim 2^{-J^+(s+1/2)} \sup_{j\le J} w_j^{-1}\big( 2^{-(J^+-j)(\gamma-3/2)} + 2^{-J^+} \big) \lesssim 2^{-J^+(s+1/2)}\, 2^{-(J^+-J)}\, w_J^{-1}. \]
The second term in (33) can be bounded by the following lemma.

Lemma 4.4. Let the weighting sequence w satisfy \(\sqrt{j}\,2^j/w_j = O(1)\) and grant Assumption B and (34) for some γ > 5/2. If J_n^+ = J_n + U_n satisfies U_n → ∞ and, for some τ ∈ (0, 1),
\[ (\log n)^{2/\tau}\, n^{-1}\, 2^{J_n^+} J_n^+ = O(1), \]
then we have
\[ \Big\| \Big( \big\langle \mu_n - \mu, (\mathrm{Id} - \pi_{J_n^+}) \frac{\psi'_{j,k}}{2\mu} \big\rangle \Big)_{j,k} \Big\|_{\mathcal{M}} = o_P(n^{-1/2}). \]
Proof. In order to apply Proposition 1.2, we need to calculate the L²-norm and the L∞-norm of \((\mathrm{Id}-\pi_{J_n^+})(\psi'_{j,k}/\mu)\). For j ∈ {j_0, …, J_n}, Parseval's identity and Lemma 4.2 yield
\begin{align*}
\Big\| (\mathrm{Id}-\pi_{J_n^+}) \frac{\psi'_{j,k}}{\mu} \Big\|_{L^2}^2 &= \sum_{l>J_n^+, m} |\langle \psi_{l,m}, \psi'_{j,k}/\mu\rangle|^2 \lesssim \sum_{l>J_n^+} 2^{2l-(l-j)(2\gamma-2)} + \sum_{l>J_n^+} 2^{-(l-j)} \\
&\lesssim 2^{-(J_n^+-j)(2\gamma-4)}\, 2^{2j} + 2^{-(J_n^+-j)} \lesssim 2^{-(J_n^+-j)}\, 2^{2j}.
\end{align*}
Another application of Lemma 4.2 yields
\begin{align*}
\Big\| (\mathrm{Id}-\pi_{J_n^+}) \frac{\psi'_{j,k}}{2\mu} \Big\|_{\infty} &\le \sum_{l>J_n^+} 2^{l/2} \max_m |\langle \psi_{l,m}, \psi'_{j,k}/\mu\rangle| \lesssim \sum_{l>J_n^+} 2^{3l/2-(l-j)(\gamma-1/2)} + \sum_{l>J_n^+} 2^{-l/2+j} \\
&\lesssim 2^{-(J_n^+-j)(\gamma-2)+3j/2} + 2^{-(J_n^+-j)/2+j/2} \lesssim 2^{-(J_n^+-j)/2}\, 2^{3j/2}.
\end{align*}
The concentration inequality, Proposition 1.2, yields for positive constants c_i > 0, i = 1, 2, …,
\begin{align*}
&P\Big( \sup_{1\le j\le J_n} \max_k \sqrt{n}\, w_j^{-1} \Big| \big\langle \mu_n-\mu, (\mathrm{Id}-\pi_{J_n^+})\frac{\psi'_{j,k}}{2\mu} \big\rangle \Big| > t \Big) \\
&\quad\le \sum_{1\le j\le J_n,\, k} P\Big( \sqrt{n}\, \Big| \big\langle \mu_n-\mu, (\mathrm{Id}-\pi_{J_n^+})\frac{\psi'_{j,k}}{2\mu} \big\rangle \Big| > t w_j \Big) \\
&\quad\lesssim \sum_{j=1}^{J_n} 2^j \Big( \exp\big( -c_1 \big(2^{(J_n^+-j)/2}\, 2^{-3j/2}\, \sqrt{n}\, w_j\, t\big)^{\tau} \big) + \exp\Big( -\frac{c_2\, 2^{(J_n^+-j)/2}\, t^2}{(2^j/w_j)^2 + t \max(2^{3j/2}(\log n)^{1/\tau}, 2^j)/(w_j\sqrt{n})} \Big) \Big).
\end{align*}
Since \(j w_j^{-1} 2^{3j/2}(\log n)^{1/\tau} n^{-1/2} \lesssim 2^{j/2} j^{1/2} (\log n)^{1/\tau} n^{-1/2} \lesssim 1\) and \(J_n^+ \lesssim \log n\) by the assumptions on w_j and J_n^+, we conclude for any t > 0 and n sufficiently large
\begin{align*}
&P\Big( \sup_{1\le j\le J_n} \max_k \sqrt{n}\, w_j^{-1} \Big| \big\langle \mu_n-\mu, (\mathrm{Id}-\pi_{J_n^+})\frac{\psi'_{j,k}}{2\mu} \big\rangle \Big| > t \Big) \\
&\quad\lesssim \sum_{j=1}^{J_n} 2^j \Big( \exp\big( -c_3 (\log n)^2\, 2^{\tau(J_n^+-j)/2}\, j^{\tau} t^{\tau} \big) + \exp\Big( -c_4\, \frac{2^{(J_n^+-j)/2}\, j\, t^2}{1+t} \Big) \Big) \\
&\quad\lesssim \sum_{j=1}^{J_n} \exp\big( j\log 2 - c_3 (J_n^+)^{\tau-1} t^{\tau} \log n \big) + \sum_{j=1}^{J_n} \exp\Big( j\log 2 - c_5\, 2^{U_n/2}\, \frac{t^2}{1+t} \Big) \\
&\quad\lesssim e^{\log 2 - c_3 (J_n^+)^{\tau-1} t^{\tau} \log n}\, \frac{1 - e^{J_n(\log 2 - c_3 (J_n^+)^{\tau-1} t^{\tau} \log n)}}{1 - e^{\log 2 - c_3 (J_n^+)^{\tau-1} t^{\tau} \log n}} + e^{\log 2 - c_6 2^{U_n/2}(t^2\wedge t)}\, \frac{1 - e^{J_n(\log 2 - c_6 2^{U_n/2}(t^2\wedge t))}}{1 - e^{\log 2 - c_6 2^{U_n/2}(t^2\wedge t)}} \\
&\quad\lesssim e^{-c_3 (J_n^+)^{\tau-1} t^{\tau} \log n} + e^{-c_6 2^{U_n/2}(t^2\wedge t)} \to 0.
\end{align*}
Finally note that all bounds hold true for the scaling function ϕ_{j_0,·} if j is replaced by j_0.
Now we have all pieces together to prove the multi-scale central limit theorem.

Proof of Theorem 2.7. Since b has Hölder regularity s > 0 on S ⊆ D, the bias can be bounded by
\[ \|b - \pi_{J_n}(b)\|_{\mathcal{M}} = \sup_{j>J_n} \max_{k\in K_j} w_j^{-1} |\langle \psi_{j,k}, b\rangle| \lesssim \sup_{j>J_n} w_j^{-1}\, 2^{-j(s+1/2)} = o(n^{-1/2}). \]
Using that the M_0-norm is weaker than the L∞(S)-norm, Lemma 4.1 and decomposition (33) together with Lemmas 4.3 and 4.4 yield
\[ \big( \langle \hat b_{J_n,U_n} - b, \psi_{j,k}\rangle \big)_{j\le J_n,k} = -\Big( \big\langle \mu_n - \mu, \frac{\psi'_{j,k}}{2\mu} \big\rangle \Big)_{j\le J_n,k} + \widetilde R_{J_n,U_n} \]
with \(\|\pi_{J_n}(\widetilde R_{J_n,U_n})\|_{\mathcal{M}} = o_P(n^{-1/2})\). Therefore, it remains to show that
\[ \beta_{\mathcal{M}_0}\Big( \mathcal{L}\big( \pi_{J_n}\big( -\sqrt{n}\, \langle \mu_n-\mu, \psi'_{j,k}/(2\mu)\rangle \big)_{j,k} \big), \mathcal{L}(\mathbb{G}_b) \Big) \to 0. \]
This follows exactly as in Theorem 1.5, where we use that the factor 2^j, by which the norms
\[ \|\psi'_{j,k}/(2\mu)\|_{L^2} \lesssim 2^j, \qquad \|\psi'_{j,k}/(2\mu)\|_{\infty} \lesssim 2^{3j/2} \tag{36} \]
are larger than \(\|\psi_{j,k}\|_{L^2}\) and \(\|\psi_{j,k}\|_{\infty}\), respectively, is counterbalanced through the additional growth of the admissible weighting sequence w.
5. Remaining proofs
5.1. Proof of Theorem 2.4
Step 1: For δ ∈ (0, 1/2) and 0 < c < C < ∞ define
\[ V_\xi := \{ \mu \in C^{7/4+\delta/2}(D) : 0 < c < \mu \text{ and } \|\mu\|_{C^{7/4+\delta/2}} < C \}, \]
V := M_0(w) and W := M_0(w̃) with w_j ≤ 2^{δj} and w̃_j ≥ 2^{(1+δ)j}. We first establish the Hadamard differentiability of
\[ \xi : V_\xi \subseteq V \to W, \qquad \mu \mapsto \frac{\mu'}{2\mu} \]
with derivative given by (23).

To this end let h ∈ M_0(w) and h_t → h as t → 0. For all h_t such that μ + t h_t is contained in V_ξ for small t > 0 we have
\begin{align*}
\Big\| \frac{\xi(\mu+th_t)-\xi(\mu)}{t} - \xi'_\mu(h) \Big\|_{\mathcal{M}(\tilde w)} &= \Big\| \frac{\mu(\mu'+th_t') - \mu'(\mu+th_t)}{2(\mu+th_t)\mu\, t} - \frac{\mu h' - \mu' h}{2\mu^2} \Big\|_{\mathcal{M}(\tilde w)} \\
&= \Big\| \frac{\mu h_t' - \mu' h_t}{2(\mu+th_t)\mu} - \frac{\mu h' - \mu' h}{2\mu^2} \Big\|_{\mathcal{M}(\tilde w)} \\
&= \Big\| \frac{\mu^2(h_t'-h') + \mu\mu'(h-h_t) + t\, h_t(\mu'h - \mu h')}{2\mu^2(\mu+th_t)} \Big\|_{\mathcal{M}(\tilde w)};
\end{align*}
using w̃_j ≥ 2^{j(1+δ)} for δ ∈ (0, 1/2), this is bounded by
\[ \Big\| \frac{h_t'-h'}{2(\mu+th_t)} \Big\|_{B^{-3/2-\delta}_{\infty\infty}} + \Big\| \frac{\mu'(h-h_t)}{2\mu(\mu+th_t)} \Big\|_{B^{-3/2-\delta}_{\infty\infty}} + t\, \Big\| \frac{h_t(\mu'h-\mu h')}{2\mu^2(\mu+th_t)} \Big\|_{\mathcal{M}(\tilde w)}. \]
Applying a pointwise multiplier theorem [43, Thm. 2.8.2] and the continuous embedding \(C^{\rho} \hookrightarrow B^{\rho}_{\infty\infty}\), we obtain up to constants the upper bound
\begin{align*}
&\Big\| \frac{1}{2(\mu+th_t)} \Big\|_{C^{7/4+\delta/2}} \|h_t'-h'\|_{B^{-3/2-\delta}_{\infty\infty}} + \Big\| \frac{\mu'}{2\mu(\mu+th_t)} \Big\|_{C^{7/4+\delta/2}} \|h-h_t\|_{B^{-1/2-\delta}_{\infty\infty}} + t\, \Big\| \frac{h_t(\mu'h-\mu h')}{2\mu^2(\mu+th_t)} \Big\|_{\mathcal{M}(\tilde w)} \\
&\quad\lesssim \|h_t-h\|_{B^{-1/2-\delta}_{\infty\infty}} + \|h-h_t\|_{B^{-1/2-\delta}_{\infty\infty}} + t \lesssim \|h_t-h\|_{\mathcal{M}(w)} + t,
\end{align*}
where we have used w_j ≤ 2^{jδ} in the last step. The last expression tends to 0 as t → 0 and this shows the Hadamard differentiability of ξ : V_ξ → W.
Step 2: To apply the delta method it is now important that µ̂_{J_n} maps into V_ξ. Theorem 1.5 gives conditions such that ‖µ̂_{J_n} − µ‖_{M(w)} = O(n^{-1/2}). Provided that \(2^{(9/4+\delta/2)J_n} w_{J_n} n^{-1/2} = o(1)\), we deduce from the fact that µ̂_{J_n} is developed until level J_n only and from the ratio of the weights at level J_n that ‖µ̂_{J_n} − µ‖_{C^{7/4+δ/2}} = o(1). We conclude that with probability tending to one µ̂ ∈ V_ξ. By modifying µ̂ on events with probability tending to zero we can achieve that always µ̂ ∈ V_ξ. On the above assumptions we obtain the weak convergence \(\sqrt{n}(\hat\mu_{J_n} - \mu) \to \mathbb{G}_\mu\) in M_0(w) by Theorem 1.5 and application of the delta method yields the assertion.
5.2. Proof of Lemma 3.2
We will prove that:
(i) for any τ ∈ (0, 1) there are constants 0 < c, C < ∞ depending only on τ, K, ψ such that for any J ∈ 𝒥_n satisfying J > J_n^* and for all n ∈ ℕ large enough
\[ P(\hat J_n = J) \le C\big( n^{-cJ} + e^{-cJ^{\tau}} \big), \]
(ii) there is an integer M > 0 depending only on d_1, d_2, K and constants 0 < c', C' < ∞ depending on τ, K, ψ such that for any J ∈ 𝒥_n satisfying J < J_n^* − M and for all n ∈ ℕ large enough
\[ P(\hat J_n = J) \le C'\big( n^{-c'J} + e^{-c'J^{\tau}} \big). \]
Given (i) and (ii), we obtain, for a constant c'' > 0,
\begin{align*}
P\big( \hat J_n \notin [J_n^*-M, J_n^*] \big) &\le \sum_{J=J_{\min}}^{J_n^*-M-1} P(\hat J_n = J) + \sum_{J=J_n^*+1}^{J_{\max}} P(\hat J_n = J) \\
&= O\Big( (J_{\max}-J_{\min}) \big( n^{-c''J_{\min}} + e^{-c''(J_{\min})^{\tau}} \big) \Big) = O\Big( \log n\, \big( n^{-c''J_{\min}} + e^{-c''J_{\min}^{\tau}} \big) \Big).
\end{align*}
Since \(n^{-c''J_{\min}} + e^{-c''J_{\min}^{\tau}}\) decays polynomially in n, the assertion of the lemma follows.

To show (i) and (ii), recall J^+ = J + U_n = J + log₂ log n. For notational convenience we define
\[ V(n,j) := \big(2^{3j} j/n\big)^{1/2}, \]
which is the order of magnitude of the stochastic error for projection level j. Recall that for any f ∈ M ∩ V_J we can bound
\[ \|f\|_{L^\infty([a,b])} \lesssim \sum_{j\le J} 2^{j/2} \max_{k\in K_j} |\langle f, \psi_{j,k}\rangle| \le \|f\|_{\mathcal{M}} \sum_{j\le J} 2^{j/2} w_j \lesssim \|f\|_{\mathcal{M}}\, 2^{J/2} w_J. \]
Since any J ∈ 𝒥_n satisfies \(2^{-J^+(s+1)} = o(1)\) and \((\log n)^{2/\tau} n^{-1} 2^{J^+} J^+ = O(1)\), we conclude from decomposition (33) as well as from Lemma 4.1 and Lemma 4.4 (applied to \(w_j = j^{1/2} 2^j\)) that
\[ \hat b_J - b = -\sum_{j\le J,k\in\mathbb{Z}} \Big\langle \mu_n-\mu, \frac{\psi'_{j,k}}{2\mu} \Big\rangle \psi_{j,k} + \underbrace{\sum_{j\le J,k\in\mathbb{Z}} \Big\langle (\mathrm{Id}-\pi_{J^+})\mu, \frac{\psi'_{j,k}}{2\mu} \Big\rangle \psi_{j,k}}_{=:B^{\mu}(J)} + \underbrace{\sum_{j>J,k\in\mathbb{Z}} \langle b, \psi_{j,k}\rangle \psi_{j,k}}_{=:B^{b}(J)} + R_J \]
for some remainder R_J ∈ V_J with
\[ \|R_J\|_{L^\infty([a,b])} = O_P\big( n^{-1} J^+ 2^{2J^+} \big) + O\big( 2^{-J^+(2s+1)} \big) + o_P\big( n^{-1/2} J^{1/2} 2^{-3J/2} \big). \]
Moreover, Lemma 4.3 and Assumption C yield
\[ \|B^{\mu}(J) + B^{b}(J)\|_{L^\infty([a,b])} \le (d_2 + o(1))\, 2^{-Js}. \tag{37} \]
Using \(n^{-1/2} 2^{2J^+} J^+ = o(1)\) for all J ∈ 𝒥_n, we conclude
\[ \|\hat b_J - b\|_{L^\infty([a,b])} \le \Big\| \sum_{j\le J,k\in K_j} \Big\langle \mu_n-\mu, \frac{\psi'_{j,k}}{2\mu} \Big\rangle \psi_{j,k} \Big\|_{L^\infty([a,b])} + (d_2+o(1))\, 2^{-Js} + o_P(V(n,J)). \]
With this preparation at hand we can proceed similarly as in [18, Lem. 2].
Part (i): For any fixed J > J_n^* we have
\[ P\big( \hat J_n = J \big) \le \sum_{L\in\mathcal{J}_n,\, L>J} P\big( \|\hat b_{J-1} - \hat b_L\|_{L^\infty([a,b])} > K\, V(n,L) \big). \]
As in the derivation of (38) we obtain for n sufficiently large
\[ \|\hat b_{J-1} - \hat b_L\|_{L^\infty([a,b])} \le \Big\| \sum_{J<j\le L,\, k} \Big\langle \mu_n-\mu, \frac{\psi'_{j,k}}{2\mu} \Big\rangle \psi_{j,k} \Big\|_{L^\infty([a,b])} + \big( d_2+1 \big)\big( 2^{-(J-1)s} + 2^{-Ls} \big) + \frac{1}{4}\big( V(n,J-1) + V(n,L) \big). \]
By definition of J_n^* we have for any L > J > J_n^* that
\[ \big( d_2+1 \big)\big( 2^{-(J-1)s} + 2^{-Ls} \big) \le 2(d_2+1)\, 2^{-J_n^* s} \le \frac{K}{2} V(n, J_n^*) \le \frac{K}{2} V(n,L). \]
Therefore,
\[ P\big( \hat J_n = J \big) \le \sum_{L\in\mathcal{J}_n,\, L>J} P\Big( \Big\| \sum_{J<j\le L,\, k} \Big\langle \mu_n-\mu, \frac{\psi'_{j,k}}{2\mu} \Big\rangle \psi_{j,k} \Big\|_{L^\infty([a,b])} \ge \frac{K-1}{2} V(n,L) \Big). \]
Analogously to (10) and using (36), Proposition 1.2 yields for any τ ∈ (0, 1) and constants c_1, …, c_4 > 0:
\[ P\Big( \Big\| \sum_{J<j\le L,\, k\in K_j} \Big\langle \mu_n-\mu, \frac{\psi'_{j,k}}{2\mu} \Big\rangle \psi_{j,k} \Big\|_{L^\infty([a,b])} \ge \frac{K-1}{2} V(n,L) \Big) \le P\Big( \sum_{J<j\le L} 2^{j/2} \max_{k\in K_j} \Big| \Big\langle \mu_n-\mu, \frac{\psi'_{j,k}}{2\mu} \Big\rangle \Big| \ge \frac{K-1}{2} \sqrt{\frac{2^{3L}L}{n}} \Big); \tag{38} \]
using that \(\sum_{k=0}^{\infty} 2^{-k/2} \le 7/2\), we obtain the upper bound
\begin{align*}
&P\Big( \max_{J<j\le L,\, k} n^{1/2} 2^{-L} \Big| \Big\langle \mu_n-\mu, \frac{\psi'_{j,k}}{2\mu} \Big\rangle \Big| \ge \frac{K-1}{7} L^{1/2} \Big) \\
&\quad\le \sum_{J<j\le L,\, k} P\Big( n^{1/2} 2^{-j} \Big| \Big\langle \mu_n-\mu, \frac{\psi'_{j,k}}{2\mu} \Big\rangle \Big| \ge \frac{K-1}{7} j^{1/2} \Big) \\
&\quad\le c_1 \sum_{J<j\le L} \big( e^{-c_2 j \log n} + e^{-c_3 j^{\tau}} \big) \le c_1 \big( n^{-c_4 J} + e^{-c_4 J^{\tau}} \big), \tag{39}
\end{align*}
where we require that K is chosen sufficiently large, depending on \(\|\psi'_{j,k}/\mu\|_{L^\infty(S)}\) and Σ_{j,k}. It remains to sum this upper bound over all L ∈ 𝒥_n with L > J, which yields the claim since 𝒥_n contains no more than log n elements.
Part (ii): Let J < J_n^* − M for some M ∈ ℕ to be specified below. We have
\[ P\big( \hat J_n = J \big) \le P\big( \|\hat b_J - \hat b_{J_n^*}\|_{L^\infty([a,b])} \le K\, V(n, J_n^*) \big). \]
Using Assumption C and the triangle inequality, we obtain similarly to (38), for sufficiently large n,
\[ \|\hat b_J - \hat b_{J_n^*}\|_{L^\infty([a,b])} \ge \frac{d_1}{2} 2^{-Js} - \big( d_2+1 \big) 2^{-J_n^* s} - \Big\| \sum_{J<j\le J_n^*,\, k\in K_j} \Big\langle \mu_n-\mu, \frac{\psi'_{j,k}}{2\mu} \Big\rangle \psi_{j,k} \Big\|_{L^\infty([a,b])} - \frac{1}{4}\big( V(n,J_n^*) + V(n,J) \big). \]
Owing to J < J_n^* − M, s ≥ 1 and the definition of J_n^*, we can bound
\begin{align*}
\frac{d_1}{2} 2^{-Js} - \big( d_2+1 \big) 2^{-J_n^* s} &\ge (d_2+1) \Big( \frac{d_1}{2(d_2+1)} 2^{M-1} - \frac{1}{2} \Big) 2^{-(J_n^*-1)s} \\
&\ge \frac{K}{4} \Big( \frac{d_1}{2(d_2+1)} 2^{M-1} - \frac{1}{2} \Big) V(n, J_n^*-1) \\
&\ge \frac{K}{16} \Big( \frac{d_1}{2(d_2+1)} 2^{M-1} - \frac{1}{2} \Big) V(n, J_n^*),
\end{align*}
where we have used in the last inequality that \((J_n^*-1)/J_n^* \ge 1/2\) for n sufficiently large. We conclude
\[ \|\hat b_J - \hat b_{J_n^*}\|_{L^\infty([a,b])} \ge \widetilde K\, V(n, J_n^*) - \Big\| \sum_{J<j\le J_n^*,\, k\in K_j} \Big\langle \mu_n-\mu, \frac{\psi'_{j,k}}{2\mu} \Big\rangle \psi_{j,k} \Big\|_{L^\infty([a,b])} \]
with \(\widetilde K := \frac{K d_1}{32(d_2+1)} 2^{M-1} - \frac{K}{32} - \frac{1}{2}\). Since \(\widetilde K > K\) for M large enough, we obtain similarly as in (39) for any τ ∈ (0, 1) and some c', C' > 0
\[ P\big( \hat J_n = J \big) \le P\Big( \Big\| \sum_{J<j\le J_n^*,\, k} \Big\langle \mu_n-\mu, \frac{\psi'_{j,k}}{2\mu} \Big\rangle \psi_{j,k} \Big\|_{L^\infty([a,b])} \ge (\widetilde K - K)\, V(n, J_n^*) \Big) \le C'\big( n^{-c'J} + e^{-c'J^{\tau}} \big). \]
5.3. Proof of Corollary 3.4
Owing to \((cn/\log n)^{1/(2s+3)} \le 2^{J_n^*} \le (Cn/\log n)^{1/(2s+3)}\) for constants 0 < c < C, we find
\[ \frac{\log n - \log\log n}{2(\log 2)J_n^*} + \frac{\log c}{2(\log 2)J_n^*} - \frac{3}{2} \le s \le \frac{\log n - \log\log n}{2(\log 2)J_n^*} + \frac{\log C}{2(\log 2)J_n^*} - \frac{3}{2}. \]
Since \(P(\hat J_n \le J_n^*) \to 1\) by Lemma 3.2 and due to \(v_n^{-1} = o_P(1)\), we obtain with probability converging to one that
\[ s \ge \max\Big( 1, \frac{\log n - \log\log n}{2(\log 2)(\hat J_n + v_n)} - \frac{3}{2} \Big) - o_P\Big( \frac{v_n}{\hat J_n} \Big) \ge \hat s_n. \]
Moreover, since \(P(J_n^* - M \le \hat J_n \le J_n^*) \to 1\) and \(v_n^{-1} = o_P(1)\), we have with probability converging to one
\[ s - \hat s_n \le \frac{\log n - \log\log n}{2(\log 2)J_n^*} \Big( 1 - \frac{1}{1 + v_n/J_n^*} \Big) + \frac{\log C}{2(\log 2)J_n^*} + \frac{3 v_n}{2(J_n^* - M)} \lesssim \frac{v_n}{J_n^*}. \]
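As a sanity check on this rearrangement, the following short Python snippet (our own illustration, not part of the proof; the values n = 10^6, a = 1, s = 2 and the real-valued level J* are chosen arbitrarily so that the relation \(2^{J^*} = (an/\log n)^{1/(2s+3)}\) holds exactly) solves for s and recovers the prescribed value:

```python
import math

def s_from_level(n, a, Jstar):
    # Rearranging 2^{J*} = (a*n/log n)^{1/(2s+3)}:
    # (2s+3) * Jstar * log 2 = log n - log log n + log a
    return (math.log(n) - math.log(math.log(n)) + math.log(a)) / (2 * math.log(2) * Jstar) - 1.5

n, a, s = 10**6, 1.0, 2.0
# real-valued level satisfying the equality exactly, for the check only
Jstar = math.log2(a * n / math.log(n)) / (2 * s + 3)
assert abs(s_from_level(n, a, Jstar) - s) < 1e-9
```

With a strict inequality \(c < a < C\) instead of equality, the same computation yields the lower and upper bounds on s displayed above.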
References
[1] Adamczak, R. and Bednorz, W. (2015). Exponential concentration inequalities for additive functionals of
Markov chains. ESAIM Probab. Stat., 19:440–481.
[2] Aı̈t-Sahalia, Y. (2010). Econometrics of Diffusion Models. John Wiley & Sons, Ltd.
[3] Bakry, D., Cattiaux, P., and Guillin, A. (2008). Rate of convergence for ergodic continuous Markov processes:
Lyapunov versus Poincaré. J. Funct. Anal., 254(3):727–759.
[4] Bass, R. F. (1998). Diffusions and elliptic operators. Springer, New York.
[5] Bull, A. D. (2012). Honest adaptive confidence bands and self-similar functions. Electron. J. Stat., 6:1490–
1516.
[6] Castellana, J. V. and Leadbetter, M. R. (1986). On smoothed probability density estimation for stationary
processes. Stochastic Process. Appl., 21(2):179–193.
[7] Castillo, I. and Nickl, R. (2013). Nonparametric Bernstein–von Mises theorems in Gaussian white noise.
Ann. Statist., 41(4):1999–2028.
[8] Castillo, I. and Nickl, R. (2014). On the Bernstein–von Mises phenomenon for nonparametric Bayes procedures. Ann. Statist., 42(5):1941–1969.
[9] Chauveau, D. and Diebolt, J. (2003). Estimation of the asymptotic variance in the CLT for Markov chains.
Stoch. Models, 19(4):449–465.
[10] Chen, X. (1999). Limit theorems for functionals of ergodic Markov chains with general state space. Mem.
Amer. Math. Soc., 139(664).
[11] Chernozhukov, V., Chetverikov, D., and Kato, K. (2014). Anti-concentration and honest, adaptive confidence bands. Ann. Statist., 42(5):1787–1818.
[12] Chorowski, J. and Trabs, M. (2016). Spectral estimation for diffusions with random sampling times.
Stochastic Process. Appl., doi 10.1016/j.spa.2016.03.009.
[13] Comte, F., Genon-Catalot, V., and Rozenholc, Y. (2007). Penalized nonparametric mean square estimation
of the coefficients of diffusion processes. Bernoulli, 13(2):514–543.
[14] Dalalyan, A. (2005). Sharp adaptive estimation of the drift function for ergodic diffusions. Ann. Statist.,
33(6):2507–2528.
[15] Galtchouk, L. and Pergamenshchikov, S. (2014). Geometric ergodicity for classes of homogeneous Markov
chains. Stochastic Process. Appl., 124(10):3362–3391.
[16] Geyer, C. J. (1992). Practical Markov chain Monte Carlo. Statistical Science, 7(4):473–483.
[17] Gihman, I. I. and Skorohod, A. V. (1972). Stochastic differential equations. Springer, Heidelberg.
[18] Giné, E. and Nickl, R. (2010). Confidence bands in density estimation. Ann. Statist., 38(2):1122–1170.
[19] Giné, E. and Nickl, R. (2015). Mathematical foundations of infinite-dimensional statistical models. Cambridge Series in Statistical and Probabilistic Mathematics. Cambridge University Press.
[20] Gobet, E., Hoffmann, M., and Reiß, M. (2004). Nonparametric estimation of scalar diffusions based on low
frequency data. Ann. Statist., 32(5):2223–2253.
[21] Gugushvili, S. and Spreij, P. (2014). Nonparametric Bayesian drift estimation for multidimensional stochastic differential equations. Lith. Math. J., 54(2):127–141.
[22] Hansen, L. P. and Scheinkman, J. A. (1995). Back to the future: generating moment implications for
continuous-time Markov processes. Econometrica, 63(4):767–804.
[23] Hansen, L. P., Scheinkman, J. A., and Touzi, N. (1998). Spectral methods for identifying scalar diffusions.
J. Econometrics, 86(1):1–32.
[24] Härdle, W., Kerkyacharian, G., Picard, D., and Tsybakov, A. (1998). Wavelets, approximation, and
statistical applications, volume 129 of Lecture Notes in Statistics. Springer-Verlag, New York.
[25] Hoffmann, M. (1999). Adaptive estimation in diffusion processes. Stochastic Process. Appl., 79(1):135–163.
[26] Hoffmann, M. and Nickl, R. (2011). On adaptive inference and confidence bands. Ann. Statist., 39(5):2383–
2409.
[27] Kristensen, D. (2010). Pseudo-maximum likelihood estimation in two classes of semiparametric diffusion
models. Journal of Econometrics, 156(2):239–259.
[28] Kutoyants, Y. A. (2004). Statistical inference for ergodic diffusion processes. Springer Series in Statistics.
Springer-Verlag London, Ltd., London.
[29] Lacour, C. (2008). Nonparametric estimation of the stationary density and the transition density of a
Markov chain. Stochastic Process. Appl., 118(2):232–260.
[30] Ledoux, M. (2001). The concentration of measure phenomenon, volume 89 of Mathematical Surveys and
Monographs. American Mathematical Society, Providence, RI.
[31] Löcherbach, E., Loukianova, D., and Loukianov, O. (2011). Penalized nonparametric drift estimation for
a continuously observed one-dimensional diffusion process. ESAIM Probab. Stat., 15:197–216.
[32] Low, M. G. (1997). On nonparametric confidence intervals. Ann. Statist., 25(6):2547–2554.
[33] Masuda, H., Negri, I., and Nishiyama, Y. (2011). Goodness-of-fit test for ergodic diffusions by discrete-time
observations: an innovation martingale approach. J. Nonparametr. Stat., 23(2):237–254.
[34] Meyn, S. P. and Tweedie, R. L. (2009). Markov chains and stochastic stability. Cambridge University
Press, Cambridge.
[35] Negri, I. and Nishiyama, Y. (2010). Goodness of fit test for ergodic diffusions by tick time sample scheme.
Stat. Inference Stoch. Process., 13(1):81–95.
[36] Nickl, R. and Söhl, J. (2015). Nonparametric Bayesian posterior contraction rates for discretely observed
scalar diffusions. arXiv preprint arXiv:1510.05526.
[37] Robert, C. P. (1995). Convergence control methods for Markov chain Monte Carlo algorithms. Statistical
Science, 10(3):231–253.
[38] Rosenblatt, M. (1970). Density estimates and Markov sequences. In Nonparametric Techniques in Statistical
Inference (Proc. Sympos., Indiana Univ., Bloomington, Ind., 1969), pages 199–213. Cambridge Univ. Press,
London.
[39] Roussas, G. G. (1969). Nonparametric estimation in Markov processes. Ann. Inst. Statist. Math., 21:73–87.
[40] Schmisser, E. (2013). Nonparametric estimation of the derivatives of the stationary density for stationary
processes. ESAIM Probab. Stat., 17:33–69.
[41] Spokoiny, V. G. (2000). Adaptive drift estimation for nonparametric diffusion model. Ann. Statist.,
28(3):815–836.
[42] Szabó, B., van der Vaart, A. W., and van Zanten, J. H. (2015). Frequentist coverage of adaptive nonparametric Bayesian credible sets (with discussion). Ann. Statist., 43(4):1391–1428.
[43] Triebel, H. (1983). Theory of function spaces, volume 78 of Monographs in Mathematics. Birkhäuser Verlag,
Basel.
[44] van der Meulen, F. and van Zanten, H. (2013). Consistent nonparametric Bayesian inference for discretely
observed scalar diffusions. Bernoulli, 19(1):44–63.
[45] van der Vaart, A. and van Zanten, H. (2005). Donsker theorems for diffusions: necessary and sufficient
conditions. Ann. Probab., 33(4):1422–1451.
[46] van der Vaart, A. W. (1998). Asymptotic statistics, volume 3 of Cambridge Series in Statistical and
Probabilistic Mathematics. Cambridge University Press, Cambridge.
[47] Yakowitz, S. (1989). Nonparametric density and regression estimation for Markov sequences without mixing
assumptions. J. Multivariate Anal., 30(1):124–136.
An Energy Efficient Spectrum Sensing in Cognitive
Radio Wireless Sensor Networks
Atchutananda Surampudi and Krishnamoorthy Kalimuthu
arXiv:1711.09255v1 [] 25 Nov 2017
Department of Electronics and Communication Engineering,
SRM University,
Chennai, India 603203.
[email protected], [email protected]
Abstract—The cognitive radio wireless sensor networks
have become an integral part of communicating spectrum
information to the fusion centre, in a cooperative spectrum
sensing environment. A group of battery operated sensors or
nodes, sensing information about spectrum availability in the
radio links, needs an energy efficient strategy to pass on the
binary information. A proper routing method through intelligent
decision making can be a good solution. In this paper, an energy
efficient routing protocol has been introduced and a performance
comparison is made with the existing system in terms of its
energy consumption. The proposed routing technique considers
both uniform and non-uniform clustering strategies and proves
to be an energy efficient method of communication.
Index Terms: Cluster head, clustering, minimum spanning tree,
Prim's algorithm, routing.
I. I NTRODUCTION
The cognitive radio wireless sensor network (CR-WSN) is
a group of sensors, performing the task of sensing spectrum
availability information in a cognitive radio (CR) environment
[1]. The spectrum sensing can happen in a cooperative or
a non-cooperative way. In a cooperative spectrum sensing
these sensor nodes communicate their respective information
to an information sink called the fusion centre (FC), by
forming an interconnected wireless network [2]. At the FC
the data is aggregated and further decision for utilization of
spectrum is taken. So, once a wireless network is formed,
the one bit information present with each node is routed
along a wireless route with the help of routing protocols and
associated algorithms defining the protocols. There have been
several routing protocols proposed for routing information
along these sensor networks [3], [4], [5]. In this paper, the
focus has been made on the routing strategy used by sensors
which sense spectrum availability information in cognitive
radio systems.
The existing system of sending the binary sensing
information is done by hierarchical clustering using the low
energy adaptive clustering hierarchy (LEACH) protocol and
there have been several developments on the same [6], [7]. In
such systems, energy efficiency has been the most important
key point of the network routing design. Here we introduce
a novel routing protocol which combines with hierarchical
clustering and inter cluster routing to communicate the
information to the FC in an energy efficient manner. This
research puts light on the energy demand for the sensors
to send the sensed information to the FC directly or along
a certain intelligently decided path. A greedy algorithm
called the prim’s algorithm [8] is used to form a low energy
minimum spanning tree at every round of cluster formation.
The rest depends on the elected cluster head to take an
energy efficient decision to send the sensed information. In
this analysis, both uniform and non uniform size of clustering
have been considered.
This paper describes the proposed methodology of the new
routing protocol and compares the energy parameter with
that of the existing protocol, the LEACH. This paper is
organized as follows: In section II, the clustering strategy
of existing system is described and the proposed method of
communication protocol is analyzed in section III. Section IV
discusses the results and comparisons made with the existing
system. Finally the paper concludes with section V.
II. E XISTING S YSTEM
A group of Wireless Sensors are placed for spectrum sensing
along with the FC, where the data is aggregated. These nodes
are further differentiated into advanced and normal nodes and
are powered by their residual energy. A probability of selection
of Cluster Head (CH) is pre-specified. The system undergoes
a certain number of rounds and selection of CH is based on
distance and energy parameters of each node from the FC.
From [9], the condition for electing the CH for each round is
given as
\[ T(s) = \begin{cases} \dfrac{p}{1 - p\,\big(r \bmod \tfrac{1}{p}\big)}, & s \in G, \\[2pt] 0, & \text{otherwise}, \end{cases} \tag{1} \]
where r is the current round of clustering, G is the set of nodes
that have not been CHs in the last 1/p rounds. Each sensor
node decides independently of other senor nodes whether it
will claim to be a CH or not, by picking a random value
between 0 and 1 and comparing it with a threshold T(s) based
on a user-specified probability p. After the CH is selected, it
aggregates information from its cluster of nodes and calculates
its distance from the fusion centre. This distance is used to
calculate the cost to send the one bit information to the fusion
centre soon after election and the sensing information is sent.
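For illustration, one round of this threshold-based election can be sketched in Python as follows (a simplified sketch of the LEACH rule in (1); the function name and the random draw are our own, not taken from the protocol specification):

```python
import random

def elect_cluster_heads(nodes, p, r, history):
    """One LEACH election round: each node in `nodes` independently
    compares a uniform draw against the threshold T(s) from (1).
    `history` holds nodes that served as CH in the last 1/p rounds."""
    period = int(1 / p)                      # CH role rotates every 1/p rounds
    threshold = p / (1 - p * (r % period))   # T(s) for eligible nodes
    heads = []
    for s in nodes:
        if s in history:                     # s not in G: threshold is 0
            continue
        if random.random() < threshold:
            heads.append(s)
    return heads

random.seed(0)
chs = elect_cluster_heads(list(range(100)), p=0.1, r=3, history=set())
```

On average roughly p·N eligible nodes claim the CH role per round, which is the intended behaviour of the rotation scheme.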
In this method, all elected CHs transmit the aggregated data to
the FC directly. If the CH is far away to the FC, then it needs
more energy to transmit. So, the overall lifetime becomes very
short and a considerable number of nodes die after certain
rounds.
III. P ROPOSED M ETHOD
In the proposed method an energy efficient routing protocol
has been introduced. As a further extension of the previously
described method, the proposed method introduces the
process of intelligent decision making and forwarding the
information along a minimum energy route by forming a
minimum spanning tree (MST) using a greedy algorithm
called the Prim’s algorithm. A greedy algorithm formulates
a certain path in a stepwise manner by greedily choosing
the next step with respect to the previous one, according to
the least value of the available metric. The Prim’s algorithm
finds an MST in a connected, weighted graph. It begins with
a vertex and no edges and then applies the greedy rule,
Add an edge of minimum weight that has one vertex in the
current tree and the other not in the current tree.
Using this Prims algorithm, the proposed communication is
made as follows
• A node is elected as a CH using (1), but it does not directly send the sensed information to the FC soon after election.
• Once all the required M cluster heads are elected for the r-th round, each CH calculates its Cartesian distance from the fusion centre and its distances from every other node.
• Using this information an adjacency matrix A(i,j) is formed, with distance as the metric.
• The MST is formed using Prim's algorithm. This connects all the elected M CHs along a minimum energy route in that round. This tree varies dynamically with every round.
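A minimal version of this MST construction over the CH distance matrix might look as follows (a textbook Prim implementation written for this illustration, not the authors' code):

```python
def prim_mst(A):
    """Prim's algorithm on a symmetric adjacency matrix A of
    pairwise CH distances; returns the MST edges as (i, j) pairs."""
    n = len(A)
    in_tree = {0}                 # start from an arbitrary vertex
    edges = []
    while len(in_tree) < n:
        # greedy rule: cheapest edge with exactly one endpoint in the tree
        i, j = min(((u, v) for u in in_tree for v in range(n)
                    if v not in in_tree), key=lambda e: A[e[0]][e[1]])
        in_tree.add(j)
        edges.append((i, j))
    return edges

# four CHs on a line at positions 0, 1, 2, 10
pos = [0, 1, 2, 10]
A = [[abs(a - b) for b in pos] for a in pos]
assert sorted(prim_mst(A)) == [(0, 1), (1, 2), (2, 3)]
```

Since the CH set changes every round, the matrix A and hence the spanning tree are recomputed at each round, exactly as described above.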
Now, as a development with respect to the earlier protocol, each CH maintains a sensing table which provides comprehensive information about the one bit sensed by itself and by all other CHs that are present. So, the CH sends M-bit information rather than one bit along an intelligently decided path. It calculates the respective cost (C) to the fusion centre and to the nearest CH along the minimum spanning tree, using the previously calculated distances, as
\[ C = \begin{cases} (E_t + E_d)M + E_f M d^2, & d \le d_o, \\ (E_t + E_d)M + E_m M d^4, & d > d_o, \end{cases} \tag{2} \]
where Et is energy spent by transmitting one bit information,
Ed is data aggregation energy, do is relative distance, Ef and
Em is energy spent for free space path loss and multipath
loss respectively. Now the CH takes a decision on which
route to select, either along the minimum spanning tree or
directly to the fusion centre, based on the costs calculated. The minimum cost path is selected. So, all the CHs perform intelligent decision making and finally the fusion centre acquires complete information about the available spectrum in a short period of time. Moreover, network lifetime is extended and comparatively very few nodes die after the same number of rounds considered earlier.

Table I
PARAMETERS: This table shows the parameters considered in this study.

PARAMETER                                 VALUE
Initial energy Eo                         0.5 J
Transmission energy Et                    50 × 10^-9 J
Aggregation energy Ed                     5 × 10^-9 J
Reception energy Er                       50 × 10^-9 J
Free space energy Ef                      10 × 10^-12 J
Multipath energy Em                       0.0013 × 10^-12 J
Path loss exponent α                      0.3
Maximum number of rounds rmax             1500
Probability of CH election p              0.1
Number of sensor nodes in the area N      100

The energy expended during transmission E_t and reception E_r for M bits to a distance d between transmitter and receiver for the secondary user is given by
\[ E_t = E_e M + E_a \alpha d, \qquad E_r = E_e M, \]
where α is path loss component, Ea is energy constant
for propagation, Ee is the electronics energy. The following
assumptions are made while making the analysis,
• A secondary users’ network is stable and consists of one
FC, one primary transmitter and N number of cognitive
radio users or sensor nodes.
• The base station acts as the Fusion Centre and it has
the location information of all secondary users, possibly
determined using Global Positioning System (GPS).
• The instantaneous channel state information of the channel is available at the each secondary user.
• The channel between any two secondary users in the same
cluster is perfect since they are close to each other.
• In this research work, minimum tree cost determination
is made with the adjacency matrix of distances.
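Putting the cost model (2) and the per-CH decision together, the route choice can be sketched as below. This is our own illustration: the parameter values follow Table I, while the crossover distance d0 ≈ 87 m is an assumed value (the usual sqrt(Ef/Em) crossover used with this radio model), and comparing only the single next hop is a simplification.

```python
def tx_cost(d, M, Et=50e-9, Ed=5e-9, Ef=10e-12, Em=0.0013e-12, d0=87.0):
    """Transmission cost C from (2) for an M-bit packet over distance d.
    d0 is the assumed free-space/multipath crossover distance."""
    if d <= d0:
        return (Et + Ed) * M + Ef * M * d ** 2   # free-space path loss regime
    return (Et + Ed) * M + Em * M * d ** 4       # multipath regime

def choose_route(d_fc, d_next_ch, M):
    """CH decision: transmit directly to the FC or relay along the MST,
    whichever single hop is cheaper (one-hop comparison only)."""
    return "direct" if tx_cost(d_fc, M) <= tx_cost(d_next_ch, M) else "mst"

assert choose_route(d_fc=200.0, d_next_ch=20.0, M=10) == "mst"
assert choose_route(d_fc=15.0, d_next_ch=80.0, M=10) == "direct"
```

The quartic growth of the multipath term is what makes a short MST hop preferable for CHs far from the FC, which is the intuition behind the protocol's energy savings.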
IV. S IMULATION R ESULTS
The performance of the proposed algorithm is carried with
the following parameters given in Table I. The performance
of the proposed routing protocol is compared with the
conventional protocol.
In the uniform clustering method, the number of clusters is restricted to 10. From Fig. 1, the proposed communication protocol proves to be energy efficient by 1.4 times (for 1500 rounds) and this increases the lifetime of the network. Moreover, in this research, when uniform clustering is restricted to 10 CHs, the network residual energy is still higher by 27% (for 1500 rounds) than non-uniform clustering (where the cluster
size is not uniform for every cluster). Thus, it is observed that the overall network residual energy using the proposed protocol with uniform size clustering is much greater than the existing system and non-uniform clustering as well. From Fig. 2, it is observed that the number of active nodes after 1500 rounds of clustering and spectrum sensing, in the proposed protocol with uniform clustering is more than the number of active nodes in the existing protocol by 7 times. So, with this it can be shown that the proposed method is an energy efficient strategy with respect to the existing system of communication for CR-WSN.

Figure 1. This graph shows the total energy remaining in Joules, in all the alive nodes N, at the end of each round r. Uniform and non-uniform clustering strategies have also been compared for the proposed protocol.

Figure 2. This graph shows the number of alive sensor nodes N, at the end of each round r. Uniform and non-uniform clustering strategies have also been compared for the proposed protocol.

V. C ONCLUSION
The analysis and simulation results show that the proposed protocol with uniform or non-uniform clustering shows better performance in terms of energy consumption. Moreover, this scales down a large size of network and proves to be an elegant design paradigm of reducing energy consumption. Moreover, all the elected CHs, along with the FC, have complete information about the spectrum availability and as a result, network convergence has taken place. When M number of bits are sent only by those CHs who are nearer to the FC, the problem of channel congestion is also eliminated. As an extension of this work, the sensing tables can be used to store control information as well as for information storage and intelligence for sending the sensed information. This may be useful in mission critical situations like natural disasters, where network failure is a common phenomenon.

R EFERENCES
[1] J. Mitola, "Cognitive Radio: An Integrated Agent Architecture for Software Defined Radio," 2000.
[2] A. S. Zahmati, S. Hussain, X. Fernando, and A. Grami, "Cognitive Wireless Sensor Networks: Emerging Topics and Recent Challenges," pp. 593–596, 2009.
[3] A. Ghasemi and E. S. Sousa, "Spectrum Sensing in Cognitive Radio Networks: Requirements, Challenges and Design Trade-offs," IEEE Communications Magazine, vol. 46, no. 4, 2008.
[4] K.-L. A. Yau, P. Komisarczuk, and P. D. Teal, "Cognitive Radio-Based Wireless Sensor Networks: Conceptual Design and Open Issues," in Local Computer Networks, 2009. LCN 2009. IEEE 34th Conference on. IEEE, 2009, pp. 955–962.
[5] A. Jamal, C.-K. Tham, and W.-C. Wong, "CR-WSN MAC: An Energy Efficient and Spectrum Aware MAC Protocol for Cognitive Radio Sensor Network," in Cognitive Radio Oriented Wireless Networks and Communications (CROWNCOM), 2014 9th International Conference on. IEEE, 2014, pp. 67–72.
[6] J. Xu, N. Jin, X. Lou, T. Peng, Q. Zhou, and Y. Chen, "Improvement of LEACH Protocol for WSN," in Fuzzy Systems and Knowledge Discovery (FSKD), 2012 9th International Conference on. IEEE, 2012, pp. 2174–2177.
[7] K. Akkaya and M. Younis, "A Survey on Routing Protocols for Wireless Sensor Networks," Ad Hoc Networks, vol. 3, no. 3, pp. 325–349, 2005.
[8] C. J. Alpert, T. C. Hu, J. Huang, A. B. Kahng, and D. Karger, "Prim-Dijkstra Tradeoffs for Improved Performance-Driven Routing Tree Design," IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems, vol. 14, no. 7, pp. 890–896, 1995.
[9] M. Handy, M. Haase, and D. Timmermann, "Low Energy Adaptive Clustering Hierarchy with Deterministic Cluster-Head Selection," in Mobile and Wireless Communications Network, 2002. 4th International Workshop on. IEEE, 2002, pp. 368–372.
Data-Driven Stochastic Robust Optimization: General Computational
Framework and Algorithm Leveraging Machine Learning for
Optimization under Uncertainty in the Big Data Era
Chao Ning, Fengqi You *
Robert Frederick Smith School of Chemical and Biomolecular Engineering,
Cornell University, Ithaca, New York 14853, USA
December 29, 2017
Submitted to Computers & Chemical Engineering
Abstract
A novel data-driven stochastic robust optimization (DDSRO) framework is proposed for
optimization under uncertainty leveraging labeled multi-class uncertainty data. Uncertainty data
in large datasets are often collected from various conditions, which are encoded by class labels.
Machine learning methods including Dirichlet process mixture model and maximum likelihood
estimation are employed for uncertainty modeling. A DDSRO framework is further proposed
based on the data-driven uncertainty model through a bi-level optimization structure. The outer
optimization problem follows a two-stage stochastic programming approach to optimize the
expected objective across different data classes; adaptive robust optimization is nested as the inner
problem to ensure the robustness of the solution while maintaining computational tractability. A
decomposition-based algorithm is further developed to solve the resulting multi-level optimization
problem efficiently. Case studies on process network design and planning are presented to
demonstrate the applicability of the proposed framework and algorithm.
Key words: big data, optimization under uncertainty, Bayesian model, machine learning, process
design and operations
* Corresponding author. Phone: (607) 255-1162; Fax: (607) 255-9166; E-mail: [email protected]
1. Introduction
Labeled multi-class data are ubiquitous in a variety of areas and disciplines, such as process
monitoring [1], multimedia studies [2, 3], and machine learning [4]. For example, process data are
labeled with the operating modes of chemical plants [1], and text documents are tagged with labels
to indicate their topics [5]. Due to the massive amount of available uncertainty data (realizations
of uncertain parameters) and dramatic progress in big data analytics [6], data-driven optimization
emerges as a promising paradigm for decision making under uncertainty [7-14]. Most existing
data-driven optimization methods are restricted to unlabeled uncertainty data. However, recent
work has revealed the significance of leveraging labeled uncertainty data in decision-making under
uncertainty [15]. Applying the existing methods to labeled uncertainty data cannot take advantage
of useful information embedded in labels. Consequently, novel modeling frameworks and
computational algorithms are needed to leverage labeled multi-class uncertainty data in data-driven decision making under uncertainty.
Optimization under uncertainty has attracted tremendous attention from both academia and
industry [16-21]. Uncertain parameters, if not being accounted for, could render the solution of an
optimization problem suboptimal or even infeasible [22]. To this end, a plethora of mathematical
programming techniques, such as stochastic programming and robust optimization [23-25], have
been proposed for decision making under uncertainty. These techniques have their respective
strengths and weaknesses, which lead to different application scopes [26-41]. Stochastic
programming focuses on the expected performance of a solution by leveraging the scenarios of
uncertainty realization and their probability distribution [42-44]. However, this approach requires
accurate information on the probability distribution, and the resulting optimization problem could
become computationally challenging as the number of scenarios increases [40, 45, 46]. Robust
optimization provides an alternative approach that does not require accurate knowledge on
probability distributions of uncertain parameters [47-51]. It has attracted considerable interest
owing to the merit of feasibility guarantee and computational tractability [50]. Nevertheless, robust
optimization does not take advantage of available probability distribution information, and its
solution usually suffers from the conservatism issue. The state-of-the-art approaches for
optimization under uncertainty leverage the synergy of different optimization methods to inherit
their corresponding strengths and complement respective weaknesses [52-54]. However, these
approaches do not take advantage of the recent advances in machine learning and big data analytics
to leverage uncertainty data for optimization under uncertainty. Therefore, the research objective
of our work is to propose a novel data-driven decision-making framework that organically
integrates machine learning methods for labeled uncertainty data with the state-of-the-art
optimization approach.
There are a number of research challenges towards a general data-driven stochastic robust
optimization (DDSRO) framework. First, uncertainty data are often collected from a wide
spectrum of conditions, which are indicated by categorical labels. For example, uncertain power
generation data of a solar farm are labeled with weather conditions, such as cloudy, sunny and
rainy [55]. The occurrence probability of each weather condition is known in the data collection
process, and is typically embedded in the datasets. Moreover, the uncertainty data with the same
label could exhibit a very complicated distribution, which is impossible to fit into some commonly-known probability distributions. Therefore, a major research challenge is how to develop machine
learning methods to accurately extract useful information from labeled multi-class uncertainty data
for constructing the uncertainty model in the data-driven optimization framework. Second, we are
confronted with the challenge of how to integrate different approaches for optimization under
uncertainty in a holistic framework to leverage their respective advantages. The third challenge is
to develop an efficient computational algorithm for the resulting large-scale multi-level
optimization problem, which cannot be solved directly by any off-the-shelf optimization solvers.
This paper proposes a novel DDSRO framework that leverages machine learning for decision
making under uncertainty. In large datasets, uncertainty data are often attached with labels to
indicate their data classes [6]. The proposed framework takes advantage of machine learning
methods to accurately and truthfully extract uncertainty information, including probability
distributions of data classes and uncertainty sets. The probability distribution of data classes is
learned from labeled uncertainty data through maximum likelihood estimation for the multinoulli
distribution [4, 56]. To accurately capture the structure and complexity of uncertainty data, a group
of Dirichlet process mixture models are employed to construct uncertainty sets with a variational
inference algorithm [57, 58]. These two pieces of uncertainty information are incorporated into the
DDSRO framework through a bi-level optimization structure. Two-stage stochastic programming
is nested in the outer problem to optimize the expected objective over categorical data classes;
adaptive robust optimization (ARO) is nested as the inner problem to hedge against uncertainty
and ensure computational tractability. Estimating a categorical distribution is much more
computationally tractable than estimating a joint probability distribution of high-dimensional
uncertainties [4]. Therefore, the outer optimization problem follows a two-stage stochastic
programming structure to take advantage of the available probability information on data classes.
Robust optimization, which uses an uncertainty set instead of accurate knowledge on probability
distributions, is more suitable to be nested as the inner problem to tackle high-dimensional
uncertainties. The DDSRO problem is formulated as a multi-level mixed-integer linear program
(MILP). We further develop a decomposition-based algorithm to efficiently solve the resulting
multi-level MILP problem. To demonstrate the advantages of the proposed framework, two case
studies on design and planning of process networks under uncertainty are presented.
The major novelties of this paper are summarized as follows.
•
A novel uncertainty modeling framework based on machine learning methods that
accurately extract different types of uncertainty information for optimization under
uncertainty;
•
A general data-driven decision making under uncertainty framework that combines
machine learning with mathematical programming and that leverages the strengths of
both stochastic programming and robust optimization;
•
An efficient decomposition-based algorithm to solve the resulting data-driven stochastic
robust four-level MILP problems.
The rest of this paper is organized as follows. Section 2 proposes the general DDSRO
framework, which includes data-driven uncertainty model, optimization model and computational
algorithm. A motivating example is presented in Section 3. It is followed by case studies on process
network design and planning under uncertainty in Section 4. Conclusions are drawn in Section 5.
2. Data-Driven Stochastic Robust Optimization Framework
In this section, we first provide a brief introduction to stochastic robust optimization. The
proposed data-driven uncertainty model and the DDSRO framework are presented next. A
decomposition-based algorithm is further developed to efficiently solve the resulting multi-level
optimization problem.
2.1. Stochastic robust optimization
The stochastic robust optimization framework combines two-stage stochastic programming
and ARO [53]. In this subsection, we first analyze the strengths and weaknesses of two-stage
stochastic programming and two-stage ARO. They are then integrated in a novel form to leverage
their corresponding advantages and complement respective shortcomings.
A general two-stage stochastic MILP in its compact form is given as follows [23].
min_x  c^T x + E_u[ Q(x, u) ]
s.t.   Ax ≥ d,  x ∈ R_+^{n_1} × Z^{n_2}      (1)
The recourse function Q(x, u) is defined as follows.
Q(x, u) = min_y  b^T y
          s.t.   Wy ≥ h − Tx − Mu,  y ∈ R_+^{n_3}      (2)
where x denotes the first-stage decisions made "here-and-now" before the uncertainty u is realized, while
the second-stage or recourse decisions y are postponed in a "wait-and-see" manner until after
the uncertainties are revealed. x can include both continuous and integer variables, while y includes
only continuous variables. The objective of (1) includes two parts: the first-stage objective c^T x
and the expectation of the second-stage objective b^T y. The constraints associated with the first-stage
decisions are Ax ≥ d, x ∈ R_+^{n_1} × Z^{n_2}, and the constraints of the second-stage decisions are
Wy ≥ h − Tx − Mu, y ∈ R_+^{n_3}.
Two-stage stochastic programming makes full use of probability distribution information, and
aims to find a solution that performs well on average under all scenarios. However, an accurate
joint probability distribution of high-dimensional uncertainty u is required to calculate the
expectation term in (1). Besides, the resulting two-stage stochastic programming problem could
become computationally challenging as the number of scenarios increases [45].
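To make the two-stage structure in (1)–(2) concrete, consider a one-dimensional toy instance (all numbers hypothetical) in which the recourse LP has the closed form y = max(u − x, 0), so the scenario-averaged objective can be evaluated and minimized over its breakpoints directly:

```python
# Toy two-stage stochastic program (hypothetical numbers):
#   min_x  c*x + E_u[ Q(x, u) ],  x >= 0
#   Q(x, u) = min { b*y : y >= u - x, y >= 0 } = b * max(u - x, 0)
c, b = 1.0, 3.0
scenarios = [2.0, 4.0, 6.0, 8.0]          # equiprobable realizations of u

def expected_cost(x):
    recourse = sum(b * max(u - x, 0.0) for u in scenarios) / len(scenarios)
    return c * x + recourse

# The objective is piecewise linear and convex in x, so an optimum lies at a
# breakpoint: x = 0 or one of the scenario values.
candidates = [0.0] + scenarios
x_best = min(candidates, key=expected_cost)
print(x_best, expected_cost(x_best))
```

In a real instance Q(x, u) has no closed form and each scenario requires an LP solve, which is exactly why the problem size grows with the number of scenarios.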
Another popular approach for optimization under uncertainty is two-stage ARO, which
hedges against the worst-case uncertainty realization within an uncertainty set. A general two-stage ARO problem in its compact form is given as follows [59].
min_x  c^T x + max_{u∈U} min_{y∈Ω(x,u)}  b^T y
s.t.   Ax ≥ d,  x ∈ R_+^{n_1} × Z^{n_2}      (3)
Ω(x, u) = { y ∈ R_+^{n_3} : Wy ≥ h − Tx − Mu }
where x is the vector of first-stage decision variables, and y represents the vector of second-stage
decisions. Note that x includes both continuous and integer variables, and y is the vector of
continuous recourse variables. U is an uncertainty set that characterizes the region in which
uncertainty realization resides. Note that the two-stage ARO problem in (3) is a tri-level
optimization problem.
The stochastic robust optimization framework is able to organically integrate the two-stage
stochastic programming approach with the ARO method and leverage their strengths. A general
form of the stochastic robust MILP is given by [53],
min_x  c^T x + E_{σ∈Π}[ C_σ(x) ]
s.t.   Ax ≥ d,  x ∈ R_+^{n_1} × Z^{n_2}
C_σ(x) = max_{u∈U_σ} min_{y_σ}  b^T y_σ
         s.t.   W y_σ ≥ h − Tx − Mu,  y_σ ∈ R_+^{n_3},   ∀σ ∈ Π      (4)
where σ is an uncertain scenario that influences the uncertainty set, and Π is a set of the scenarios.
Two-stage stochastic programming approach is nested in the outer problem to optimize the
expectation of objectives for different scenarios, and robust optimization is nested inside to hedge
against the worst-case uncertainty realization. Stochastic robust optimization is capable of
handling different types of uncertainties in a holistic framework.
In stochastic programming, an accurate joint probability distribution of high-dimensional
uncertainty u is required. However, from a statistical inference perspective, estimating a joint
probability distribution of high-dimensional uncertainty is far more challenging than estimating
accurate probability distributions of individual scenarios [4]. Therefore, it is better to use a coarse-grained uncertainty set to describe high-dimensional uncertainty u, and to use fine-grained
probability distributions to model scenarios. As a result, two-stage stochastic programming
approach is nested outside to leverage the available probability distributions. Robust optimization
is more suitable to be nested inside for computational tractability.
The stochastic robust optimization method typically assumes the uncertainty probability
distributions and uncertainty sets are given and known a priori, rather than deriving them from the
uncertainty data. However, the predefined probability distribution and manually constructed
uncertainty sets might not correctly capture the fine-grained information from the available
uncertainty data. Thus, we propose a novel data-driven uncertainty modeling framework for
stochastic robust optimization with labeled multi-class uncertainty data to fill this knowledge gap.
2.2. Data-driven uncertainty modeling
In this subsection, we propose a data-driven uncertainty modeling framework based on
machine learning techniques to seamlessly integrate data-driven and model-based systems
in the proposed decision-making framework. This uncertainty modeling framework includes two
parts, i.e. probability distribution estimation for data classes, and uncertainty set construction from
uncertainty data in the same data class. For the first part, the probability distributions of data classes
are learned from the label information in multi-class uncertainty data through maximum likelihood
estimation. For the second part, a group of Dirichlet process mixture models is employed to
construct uncertainty sets with a variational inference algorithm [58].
We consider multi-class uncertainty data with labels {(u^(i), c^(i))}_{i=1}^{L}, where u^(i) is the i-th
uncertainty data point, c^(i) is its corresponding label, and L is the total number of uncertainty data points.
Although labeled uncertainty data are of practical relevance [6, 15], the existing literature on data-driven
optimization under uncertainty is restricted to unlabeled uncertainty data {u^(i)}_{i=1}^{L} [8, 12].
The uncertainty information on the probability of different data classes can be extracted from
labeled uncertainty data by leveraging their label information. The occurrence probabilities of data
classes are modeled with a multinoulli distribution. The probability of each data class can be
calculated through maximum likelihood estimation, as given by [4, 56],
p_s = ( Σ_i 1(c^(i) = s) ) / L      (5)
where p_s represents the occurrence probability of data class s, c^(i) is the label associated with the
i-th uncertainty data point, c^(i) = s indicates that the i-th uncertainty data point is from data class s,
and the indicator function is defined as follows.
1(c^(i) = s) = 1 if c^(i) = s, and 0 otherwise      (6)
As will be shown in the next subsection, the extracted probability distribution information of
data classes is incorporated into the two-stage stochastic programming framework nested as the
outer optimization problem of DDSRO.
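In code, the maximum-likelihood estimator (5) amounts to counting label frequencies; a minimal sketch with hypothetical weather-condition labels:

```python
from collections import Counter

def class_probabilities(labels):
    """Maximum likelihood estimate of the multinoulli distribution in (5):
    p_s = (1/L) * sum_i 1(c^(i) == s)."""
    L = len(labels)
    counts = Counter(labels)
    return {s: n / L for s, n in counts.items()}

labels = ["sunny", "sunny", "cloudy", "rainy", "sunny"]  # hypothetical c^(i)
print(class_probabilities(labels))
# {'sunny': 0.6, 'cloudy': 0.2, 'rainy': 0.2}
```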
The second part of the uncertainty modeling is the construction of data-driven uncertainty sets
from uncertainty data in the same data class. The information on uncertainty sets is incorporated
into robust optimization nested as the inner optimization problem of DDSRO. To handle the
labeled multi-class uncertainty data, a group of Dirichlet process mixture models is employed, and
uncertainty data with the same label are modeled using one separate Dirichlet process mixture
model [58]. The details of the Dirichlet process mixture model are given in Appendix A.
Based on the extracted information from Dirichlet process mixture model, we construct a
data-driven uncertainty set for data class s using both l1 and l∞ norms [8], as shown below,
U_s = ∪_{i: γ_{s,i} ≥ γ*} U_{s,i} = U_{s,1} ∪ U_{s,2} ∪ … ∪ U_{s,m(s)}      (7)

U_{s,i} = { u : u = μ_{s,i} + κ_{s,i} Ψ_{s,i}^{1/2} Λ_{s,i} z,  ‖z‖_∞ ≤ 1,  ‖z‖_1 ≤ Φ_{s,i} }      (8)

where U_s is the data-driven uncertainty set for data class s, U_{s,i} is the basic uncertainty set
corresponding to the i-th component of data class s, Λ_{s,i} is a scaling factor, and γ_{s,i} is the weight
of the i-th component of data class s, given by

γ_{s,i} = [ τ_{s,i} / (τ_{s,i} + ν_{s,i}) ] · ∏_{j=1}^{i−1} ν_{s,j} / (τ_{s,j} + ν_{s,j}),    γ_{s,M} = 1 − Σ_{i=1}^{M−1} γ_{s,i}

The weight γ_{s,i} indicates the probability of the corresponding component, γ* is a threshold value, and
m(s) is the total number of components for data class s. The scaling parameter is
κ_{s,i} = (λ_{s,i} + 1) / [ λ_{s,i} (ω_{s,i} + 1 − dim(u)) ] [57], where τ_{s,i}, ν_{s,i}, μ_{s,i}, λ_{s,i}, ω_{s,i}, Ψ_{s,i}
are the inference results of the i-th component learned from uncertainty data of data class s, and Φ_{s,i}
is its uncertainty budget. It is worth noting that the number of basic uncertainty sets m(s) for
each data class varies depending on the structure and complexity of the uncertainty data.
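To illustrate how one basic uncertainty set in (8) generates realizations, the sketch below maps a latent point z (with ‖z‖∞ ≤ 1 and ‖z‖1 ≤ Φ) to u = μ + κ Ψ^{1/2} Λ z in two dimensions; every parameter value here is a hypothetical placeholder for the variational-inference results.

```python
# Hypothetical parameters of one mixture component (stand-ins for the
# inference results mu_{s,i}, kappa_{s,i}, Psi^{1/2}_{s,i} Lambda_{s,i} in (8)).
mu     = [10.0, 20.0]                      # component mean
kappa  = 1.5                               # scaling parameter kappa_{s,i}
PsiLam = [[2.0, 0.5],                      # Psi^{1/2} * Lambda (2x2 matrix)
          [0.5, 1.0]]
Phi    = 1.5                               # uncertainty budget Phi_{s,i}

def in_latent_polytope(z, budget):
    """Check the norm constraints ||z||_inf <= 1 and ||z||_1 <= budget."""
    return max(abs(v) for v in z) <= 1.0 and sum(abs(v) for v in z) <= budget

def realize(z):
    """Map a feasible latent z to a realization u = mu + kappa*PsiLam*z."""
    assert in_latent_polytope(z, Phi)
    return [mu[r] + kappa * sum(PsiLam[r][c] * z[c] for c in range(2))
            for r in range(2)]

print(realize([1.0, 0.5]))   # a boundary point of the latent polytope
```

The intersection of the two norm balls is what trims the corners of the box and makes the set less conservative than a pure box set.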
2.3. Data-driven stochastic robust optimization model
The proposed DDSRO framework is shown as follows.
min_x  c^T x + E_{s∈Ξ}[ max_{u ∈ U_{s,1} ∪ U_{s,2} ∪ … ∪ U_{s,m(s)}}  min_{y_s ∈ Ω(x,u)}  b^T y_s ]
s.t.   Ax ≥ d,  x ∈ R_+^{n_1} × Z^{n_2}
U_{s,i} = { u : u = μ_{s,i} + κ_{s,i} Ψ_{s,i}^{1/2} Λ_{s,i} z,  ‖z‖_∞ ≤ 1,  ‖z‖_1 ≤ Φ_{s,i} }      (9)
Ω(x, u) = { y ∈ R_+^{n_3} : Wy ≥ h − Tx − Mu }
where Ξ = {1, 2, …, C} is the set of data classes, C is the total number of data classes, m(s) is the
total number of mixture components for data class s. Each data class corresponds to a scenario in
the two-stage stochastic program nested outside the DDSRO framework. It is worth noting that
the DDSRO problem in (9) is a multi-level MILP that involves mixed-integer first-stage decisions,
and continuous recourse decisions.
In the DDSRO framework, the outer optimization problem follows a two-stage stochastic
programming structure, and robust optimization is nested as the inner optimization problem to
hedge against high-dimensional uncertainty u. DDSRO aims to find a solution that performs well
on average across all data classes by making full use of their occurrence probabilities. Instead of
treating all the uncertainty data as a whole, the DDSRO framework explicitly accounts for multi-class uncertainty data by leveraging their label information. Stochastic robust optimization
structure is employed because it is more suitable to leverage different types of uncertainty
information in a holistic framework. Note that DDSRO is a general framework for data-driven
optimization under uncertainty, while stochastic robust optimization methods assume uncertainty
information is given a priori rather than learning it from uncertainty data.
There are two reasons that two-stage stochastic programming is nested as the outer
optimization problem while robust optimization is nested as the inner optimization problem. First,
the proposed DDSRO framework aims to balance all data classes instead of focusing only on the
worst-case data class, and it also ensures the robustness of the solution. Second, estimating the
probability distribution of data classes accurately is much more computationally tractable than
estimating an accurate joint probability distribution [4]. Consequently, it is better to employ
an uncertainty set to characterize high-dimensional uncertain parameters in the inner optimization
problem. On the other hand, stochastic programming is more suitable as the outer optimization
problem to leverage the probability information of data classes.
The DDSRO framework leverages machine learning techniques to truthfully capture the
structure and complexity of uncertainty data for decision making under uncertainty. It enjoys the
computational tractability of robust optimization, while accounting for exact probability
distributions of data classes in the same way as the stochastic programming approach.
Note that the proposed data-driven uncertainty set for each data class in (7) is a union of
several basic uncertainty sets. Therefore, we reformulate the DDSRO model in (9) into (10), as
given below.
min_x  c^T x + E_{s∈Ξ}[ max_{i∈{1,…,m(s)}} max_{u∈U_{s,i}} min_{y_s∈Ω(x,u)}  b^T y_s ]
s.t.   Ax ≥ d,  x ∈ R_+^{n_1} × Z^{n_2}
U_{s,i} = { u : u = μ_{s,i} + κ_{s,i} Ψ_{s,i}^{1/2} Λ_{s,i} z,  ‖z‖_∞ ≤ 1,  ‖z‖_1 ≤ Φ_{s,i} }      (10)
Ω(x, u) = { y ∈ R_+^{n_3} : Wy ≥ h − Tx − Mu }

Fig. 1. An overview of the proposed DDSRO framework.
The DDSRO model explicitly accounts for the label information within multi-class
uncertainty data, and balances different data classes by using the weighted sum based on their
occurrence probabilities. Robust optimization is nested as the inner optimization problem to handle
uncertainty u, thus providing a good balance between solution quality and computational
tractability. Moreover, multiple basic uncertainty sets Us,i are unified to capture the structure and
complexity of uncertainty data, and this scheme leads to the inner max-max optimization problem
in (10). An overview of the proposed DDSRO framework for labeled multi-class uncertainty data
is shown in Fig. 1. Uncertainty data points are depicted using circles with different colors to
indicate their associated data classes, and they are treated separately using a group of Dirichlet process
mixture models. The probability distribution of data classes can be inferred from uncertainty data
using eq. (5). The proposed framework leverages the strengths of stochastic programming and
robust optimization, and uses the uncertainty information embedded in uncertainty data.
The resulting DDSRO problem has a multi-level optimization structure that cannot be solved
by any general-purpose optimization solvers directly. An efficient computational algorithm is
further proposed in the next subsection to address this computational challenge.
2.4. The decomposition-based algorithm
To address computational challenge of solving the DDSRO problem, we extend the algorithm
proposed in [8]. The main idea is to first decompose the multi-level MILP shown in (10) into
master problem and a number of groups of sub-problems, where each group corresponds to a data
class. Moreover, there are a number of sub-problems in each group, and each sub-problem
corresponds to a component in the Dirichlet process mixture model.
The proposed decomposition-based algorithm iteratively solves the master problem and the
sub-problems, and adds cuts to the master problem after iteration, until the relative optimality gap
is reduced to the predefined tolerance ζ. The pseudocode of the algorithm is given in Fig. 2, where
C represents the total number of data classes. The algorithm is guaranteed to terminate within finite
iterations due to the finite number of extreme points of basic uncertainty sets [53]. The details of
the decomposition-based algorithm, including the formulations of the master problem (MP) and sub-problem (SUP_{s,i}), are given in Appendix B.
Decomposition Algorithm
1:  Set LB ← −∞, UB ← +∞, k ← 0, and tolerance ζ;
2:  while (UB − LB)/UB > ζ do
3:      Solve (MP) to obtain x*_{k+1}, η*_{k+1}, y^{1*}_s, …, y^{k*}_s;
4:      Update LB ← c^T x*_{k+1} + η*_{k+1};
5:      for s = 1 to C do
6:          for i = 1 to m(s) do
7:              Solve (SUP_{s,i}) to obtain u^{k+1}_{s,i} and Q_{s,i}(x*_{k+1});
8:          end
9:          i* ← argmax_i Q_{s,i}(x*_{k+1});
10:         u^{k+1}_s ← u^{k+1}_{s,i*} and Q_s(x*_{k+1}) ← Q_{s,i*}(x*_{k+1});
11:     end
12:     Update UB ← min{ UB, c^T x*_{k+1} + Σ_s p_s · Q_s(x*_{k+1}) };
13:     Create second-stage variables y^{k+1}_s and add cuts η ≥ Σ_s p_s (b^T y^{k+1}_s) and
        Tx + W y^{k+1}_s ≥ h − M u^{k+1}_s to (MP);
14:     k ← k + 1;
15: end
16: return UB;

Fig. 2. The pseudocode of the decomposition-based algorithm.
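The loop in Fig. 2 can be exercised on a deliberately tiny instance in which both (MP) and (SUP_{s,i}) have closed-form solutions. This is only a sketch of the cut-generation bookkeeping, not of the MILP solves; all data are hypothetical (one nonnegative first-stage variable with cost c, recourse cost b·max(u − x, 0), and interval basic uncertainty sets per class).

```python
# Toy instance of the Fig. 2 loop (hypothetical numbers).  In the real
# algorithm (MP) and (SUP_{s,i}) are solved by an optimization solver;
# here both have closed forms, which keeps the cut logic easy to follow.
c, b = 1.0, 2.0
p = {1: 0.6, 2: 0.4}                       # class probabilities p_s
U = {1: [(0.0, 4.0), (2.0, 6.0)],          # basic sets U_{s,i} as intervals
     2: [(1.0, 3.0)]}
zeta = 1e-6

def solve_master(worst_us):
    """(MP): min_x c*x + eta with cuts eta >= sum_s p_s*b*max(u_s^k - x, 0).
    The objective is piecewise linear and convex, so it is minimized at a
    breakpoint: x = 0 or one of the recorded worst-case u values."""
    def value(x):
        eta = max((sum(p[s] * b * max(us[s] - x, 0.0) for s in p)
                   for us in worst_us), default=0.0)
        return c * x + eta
    candidates = [0.0] + [us[s] for us in worst_us for s in p]
    x = min(candidates, key=value)
    return x, value(x)

def solve_sub(x, s):
    """(SUP_{s,i}) for every component i, keeping the worst one (steps 6-10):
    for an interval set the worst case is the upper endpoint."""
    vals = [(b * max(hi - x, 0.0), hi) for (_lo, hi) in U[s]]
    return max(vals)

LB, UB = float("-inf"), float("inf")
worst_us = []                              # one dict {s: u_s^k} per iteration
while UB == float("inf") or (UB - LB) / UB > zeta:
    x, LB = solve_master(worst_us)                       # steps 3-4
    Q, us = {}, {}
    for s in p:                                          # steps 5-11
        Q[s], us[s] = solve_sub(x, s)
    UB = min(UB, c * x + sum(p[s] * Q[s] for s in p))    # step 12
    worst_us.append(us)                                  # step 13: new cuts
print(x, UB)
```

On this instance the loop converges in two iterations; in general, finite termination follows from the finite number of extreme points of the basic uncertainty sets, as noted above.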
2.5. Relationship with the existing data-driven adaptive nested robust
optimization framework
In this subsection, we discuss the relationship between the DDSRO framework proposed in
this work and the data-driven adaptive nested robust optimization (DDANRO) framework
proposed earlier [8].
The DDANRO framework in its general form has a four-level optimization structure, as given
below.
min_x  c^T x + max_{i∈{1,…,m}} max_{u∈U_i} min_{y∈Ω(x,u)}  b^T y
s.t.   Ax ≥ d,  x ∈ R_+^{n_1} × Z^{n_2}
U_i = { u : u = μ_i + κ_i Ψ_i^{1/2} Λ_i z,  ‖z‖_∞ ≤ 1,  ‖z‖_1 ≤ Δ_i }      (11)
Ω(x, u) = { y ∈ R_+^{n_3} : Wy ≥ h − Tx − Mu }
It is worth noting that the DDANRO framework treats uncertainty data homogeneously
without considering the label information [8]. DDSRO greatly enhances the capabilities of
DDANRO, and expands its application scope to more complex uncertainty data structures. There
are some similarities between DDSRO and DDANRO. For example, they both employ data-driven
uncertainty sets, and have multi-level optimization structures. Moreover, DDANRO can be
considered as a special case of DDSRO when there is only one class of uncertainty data.
Specifically, if there is only one data class, we can simplify the expectation term as a single term
in (10), such that the DDSRO model shown in (10) reduces to the DDANRO model in (11). The
DDANRO framework is more appropriate for handling single-class or unlabeled uncertainty data,
while the proposed DDSRO framework is suitable for handling labeled multi-class uncertainty data systematically. Therefore, the DDSRO framework proposed in this paper
could be considered as a generalization of the existing DDANRO approach.
3. Motivating Example
In this section, a motivating example is first presented to illustrate different characteristics of
optimization methods, and to show that accounting for label information in optimization problems
is critical.
The deterministic formulation of the motivating example is shown in (12).
min_{x,y}  3x_1 + 5x_2 + 6x_3 + 6y_1 + 10y_2 + 12y_3
s.t.   x_1 + x_2 + x_3 ≤ 200
       x_1 + y_1 ≥ u_1
       x_2 + y_2 ≥ u_2      (12)
       x_3 + y_3 ≥ u_3
       x_i, y_i ≥ 0,  i = 1, 2, 3
where x1, x2, x3, y1, y2 and y3 are decision variables, u1, u2 and u3 are parameters. In the deterministic
optimization problem, u1, u2 and u3 are assumed to be known exactly.
Under the uncertain environment, the parameters u1, u2 and u3 are subject to uncertainty. In a
data-driven optimization setting, what decision makers know about these uncertain parameters are
their realizations, also known as uncertainty data [60]. The scatter plot of 1,000 labeled uncertainty
data is shown in Fig. 1, where the points represent uncertainty realizations of u1, u2 and u3. There
are 4 labels within uncertainty data, which means that there are 4 data classes in total. Note that
uncertainty data points from different data classes are indicated with disparate shapes in Fig. 1.
Moreover, uncertainty data in the same data class could exhibit complex characteristics, such as
correlation, asymmetry and multimode.
Fig. 3. The scatter plot of labeled uncertainty data in the motivating example.
One conventional way to hedge against uncertainty is the two-stage stochastic programming
approach [23]. However, it is computationally challenging to fit the uncertainty data points in Fig.
3 into a reasonable probability distribution. If a scenario-based stochastic programming approach
is applied to this problem, each data point in Fig. 3 corresponds to a scenario. There are 1,000
scenarios in total, and the probability of each scenario is given as 0.001. The scenario-based two-stage stochastic programming model is shown as follows.
min_{x, y_ω}  3x_1 + 5x_2 + 6x_3 + Σ_{ω∈Θ} p_ω (6y_{ω,1} + 10y_{ω,2} + 12y_{ω,3})
s.t.   x_1 + x_2 + x_3 ≤ 200
       x_1 + y_{ω,1} ≥ u_{ω,1}   ∀ω
       x_2 + y_{ω,2} ≥ u_{ω,2}   ∀ω      (13)
       x_3 + y_{ω,3} ≥ u_{ω,3}   ∀ω
       x_i, y_{ω,i} ≥ 0,   ∀ω, i = 1, 2, 3
where the notation Θ represents the set of scenarios, and ω is the scenario index. The probability
of each scenario is pω. x1, x2 and x3 are the first-stage or “here-and-now” decision variables, uω,1,
u_{ω,2} and u_{ω,3} are the uncertainty realizations, and y_{ω,1}, y_{ω,2} and y_{ω,3} are the second-stage or "wait-and-see" decision variables corresponding to scenario ω. Note that the "here-and-now" decisions
are made prior to the resolution of uncertainties, whereas “wait-and-see” decisions could be made
after the uncertainties are revealed. The deterministic equivalent of this stochastic program will be
a large-scale linear program with 3,003 variables and 6,004 constraints based on the 1,000
scenarios. We note that the size of the stochastic programming problem grows exponentially as
the number of uncertainty data points increases. While decomposition-based optimization
algorithms, such as multi-cut Benders decomposition algorithm, can take advantage of the problem
structure and improve the computational efficiency of solving this problem [61], it remains
intractable to handle problems with “big” data for uncertain parameters (e.g. those with millions
of data points).
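As a quick sanity check on the size of the deterministic equivalent quoted above, the count can be reproduced directly: for N scenarios there are 3 first-stage plus 3N recourse variables, and 1 budget constraint, 3N demand constraints, and 3 + 3N nonnegativity constraints.

```python
def deterministic_equivalent_size(n_scenarios):
    """Variable/constraint counts of the deterministic equivalent of (13)."""
    variables = 3 + 3 * n_scenarios                 # x_i plus y_{w,i}
    constraints = (1                                # budget constraint
                   + 3 * n_scenarios                # demand constraints
                   + 3 + 3 * n_scenarios)           # nonnegativity bounds
    return variables, constraints

print(deterministic_equivalent_size(1000))   # (3003, 6004)
```

The linear growth per scenario is exactly what makes the scenario-based formulation intractable for millions of data points.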
An alternative approach to optimization under uncertainty is the two-stage ARO method [50],
whose problem size is much less sensitive to the amount of uncertainty data. The two-stage
ARO model with a box uncertainty set is given as follows.
min_x  3x_1 + 5x_2 + 6x_3 + max_{u∈U} min_y  6y_1 + 10y_2 + 12y_3
s.t.   x_1 + x_2 + x_3 ≤ 200
       x_1 + y_1 ≥ u_1
       x_2 + y_2 ≥ u_2      (14)
       x_3 + y_3 ≥ u_3
       x_i, y_i ≥ 0,  i = 1, 2, 3
U = { u : u_i^L ≤ u_i ≤ u_i^U,  i = 1, 2, 3 }
where u_i^L and u_i^U are the lower and upper bounds of uncertain parameter u_i, respectively. These
two parameters can be readily derived from the uncertainty data in Fig. 3.
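Under the box set in (14), the worst case of each constraint x_i + y_i ≥ u_i is attained at u_i = u_i^U, after which the model collapses to a single LP with recourse y_i = max(u_i^U − x_i, 0). Because b_i > a_i for every product, allocating the 200 units of first-stage budget greedily by the per-unit saving b_i − a_i solves this continuous-knapsack LP exactly; the upper bounds below are hypothetical stand-ins for values that would be read off the data in Fig. 3.

```python
# Worst-case reduction of the two-stage ARO model (14) with a box set:
# at u = u^U the problem becomes
#   min  sum_i a_i*x_i + b_i*max(uU_i - x_i, 0)   s.t.  sum_i x_i <= 200,
# solved greedily by the per-unit saving b_i - a_i (continuous knapsack).
a = [3.0, 5.0, 6.0]          # first-stage costs
b = [6.0, 10.0, 12.0]        # recourse costs
uU = [80.0, 90.0, 70.0]      # hypothetical upper bounds of u_i
budget = 200.0

order = sorted(range(3), key=lambda i: b[i] - a[i], reverse=True)
x = [0.0, 0.0, 0.0]
remaining = budget
for i in order:
    x[i] = min(uU[i], remaining)   # fill the most valuable demand first
    remaining -= x[i]

worst_case_cost = sum(a[i] * x[i] + b[i] * max(uU[i] - x[i], 0.0)
                      for i in range(3))
print(x, worst_case_cost)
```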
The numerical results are shown in Table 1. In the deterministic optimization method, u1, u2
and u_3 are set to the mean values of the uncertainty data. Although the deterministic approach
achieves a low objective value of 455.3 in the minimization problem, its optimal solution could
become infeasible when the parameters u1, u2 and u3 are subject to uncertainty. The two-stage
stochastic programming approach focuses on the expected objective value, while the two-stage
ARO method aims to minimize the worst-case objective value. Therefore, the objective value of
the two-stage stochastic programming approach is lower than that of the two-stage ARO method.
Compared with the two-stage stochastic programming approach, the two-stage ARO method is
more computationally efficient.
A large amount of uncertainty data implies that much uncertainty information is available.
Nevertheless, the two-stage stochastic programming problem could become computationally
intractable as the size of uncertainty data increases. Moreover, the stochastic programming
approach could not well capture uncertainty data if we fit the uncertainty data into an inaccurate
probability distribution, thereby resulting in poor-quality solutions. Consequently, it might be
unsuitable to apply the conventional two-stage stochastic programming approach to the
optimization problem where large amounts of uncertainty data are available. Despite its
computational efficiency, the conventional two-stage ARO method only extracts coarse-grained
uncertainty information on the bounds of uncertain parameters. Neglecting relevant uncertainty
information also makes the conventional ARO method unsuitable for the data-driven optimization
setting.
The DDANRO framework was recently proposed to overcome the drawback of the
conventional ARO method [8] by integrating machine learning methods with ARO. The
DDANRO model of this motivating example is given by
min_x 3x1 + 5x2 + 6x3 + max_{j∈{1,…,m}} max_{u∈Uj} min_y 6y1 + 10y2 + 12y3
s.t. x1 + x2 + x3 ≤ 200
     x1 + y1 ≥ u1
     x2 + y2 ≥ u2
     x3 + y3 ≥ u3                                                (15)
     xi, yi ≥ 0, i = 1, 2, 3
     Uj = {u : u = μj + κj·Ψj^{1/2}·Λj·z, ||z||∞ ≤ 1, ||z||1 ≤ Δj}
where Uj is a basic uncertainty set, and μj, κj, Ψj, and Λj are parameters derived from uncertainty
data [8]. Δj is an uncertainty budget, and m represents the total number of the basic uncertainty
sets.
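As a quick illustration of what membership in such a basic set means, the sketch below checks whether a point u lies in one set; the parameter values are purely hypothetical, and the mapping matrix κ·Ψ^{1/2}·Λ is assumed square and invertible so that z can be recovered directly:

```python
import numpy as np

def in_basic_set(u, mu, kappa, psi_sqrt, lam, delta, tol=1e-9):
    """Membership test for {mu + kappa * Psi^(1/2) * Lambda * z :
    ||z||_inf <= 1, ||z||_1 <= delta}, assuming an invertible mapping."""
    A = kappa * (psi_sqrt @ lam)
    z = np.linalg.solve(A, u - mu)          # recover the latent vector z
    return bool(np.max(np.abs(z)) <= 1 + tol and np.sum(np.abs(z)) <= delta + tol)

mu = np.array([10.0, 20.0])                 # hypothetical component mean
psi_sqrt = np.diag([2.0, 3.0])              # hypothetical covariance factor
lam = np.eye(2)                             # hypothetical scaling matrix
print(in_basic_set(np.array([11.0, 21.5]), mu, 1.0, psi_sqrt, lam, delta=1.5))
print(in_basic_set(np.array([14.0, 26.0]), mu, 1.0, psi_sqrt, lam, delta=1.5))
```

The budget Δ shrinks the set: the second point satisfies the box condition ||z||∞ ≤ 1 only after scaling, and its 1-norm exceeds the budget, so it is excluded.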
Nevertheless, one limitation of the DDANRO method is that it does not leverage the label
information in Fig. 3 to infer the occurrence probability of each data class. This information
derived from labels should inform the decision-making process. Besides, DDANRO does not
leverage the strengths of stochastic programming for tackling uncertainty in optimization. These
limitations motivate us to propose the DDSRO framework in this paper as an extension of the
DDANRO method for optimization under labeled multi-class uncertainty data.
The DDSRO model formulation of this motivating example is given in (16).
min_x 3x1 + 5x2 + 6x3 + ∑_{s∈Ξ} ps max_{u∈Us} min_y 6y1 + 10y2 + 12y3
s.t. x1 + x2 + x3 ≤ 200
     x1 + y1 ≥ u1
     x2 + y2 ≥ u2                                                (16)
     x3 + y3 ≥ u3
     xi, yi ≥ 0, i = 1, 2, 3
(with ps the occurrence probability of data class s)
where the notation Ξ represents the set of data classes, and s is the corresponding element. The
data-driven uncertainty set Us for data class s is defined in (7).
Following the proposed data-driven uncertainty modeling framework in Section 2, the
occurrence probabilities of different data classes are inferred from the label information embedded
within uncertainty data using maximum likelihood estimation. The computational results show
that the probabilities of data classes 1-4 are 0.2, 0.4, 0.3 and 0.1, respectively. Moreover, the
uncertainty modeling results also show that there are two components in the Dirichlet process
mixture models for data classes 1-3, while there is only one component for data class 4. The
occurrence probabilities of different data classes are listed in Fig. 4. The data-driven uncertainty
sets with uncertainty budget Φ=1.8 are also constructed for each data class, and are depicted using
different colors in Fig. 4. From Fig. 4, we can see that the structure and complexity of uncertainty
data are accurately and truthfully captured by the proposed data-driven uncertainty model. For
example, the uncertainty set for data class 2 accurately captures the bimodal feature in uncertainty
data, and the correlations among uncertain parameters u1, u2 and u3. The computational results of
the DDANRO and DDSRO methods are also provided in Table 1. By leveraging the label
information, the DDSRO approach is less conservative than the DDANRO method. From Table 1,
we can see that the objective value of DDANRO is 18.8% larger than that of DDSRO, while the
computational times of both problems are comparable.
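The maximum likelihood step used above to infer the class probabilities reduces to computing empirical label frequencies. A minimal sketch with a hypothetical label vector chosen to reproduce the reported proportions (0.2, 0.4, 0.3, 0.1):

```python
from collections import Counter

# Hypothetical class labels attached to 100 uncertainty data points.
labels = [1] * 20 + [2] * 40 + [3] * 30 + [4] * 10
counts = Counter(labels)
probs = {cls: n / len(labels) for cls, n in counts.items()}
print(probs)
```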
Fig. 4. The data-driven uncertainty modeling results for the four data classes in the motivating
example.
Table 1. Computational results of different optimization methods in the motivating example.

                 Deterministic   Two-stage stochastic   Two-stage    DDANRO       DDSRO
                 optimization    programming            ARO          Δ=1.8*       Φ=1.8#
Min. obj.        455.3           646.4                  964.0        944.6        795.4
Iterations       N/A             7                      2            5            6
Total CPU (s)    0.1             477                    1            9            12
Decision x       x1 = 33.93      x1 = 38.43             x1 = 66.64   x1 = 55.48   x1 = 25.18
                 x2 = 29.55      x2 = 27.55             x2 = 62.49   x2 = 52.56   x2 = 33.62
                 x3 = 34.30      x3 = 39.92             x3 = 70.87   x3 = 67.60   x3 = 53.14
* Δ is the data-driven uncertainty budget in the DDANRO approach.
# Φ is the data-driven uncertainty budget in the DDSRO approach.
4. Application: Data-Driven Stochastic Robust Planning of Chemical Process
Networks under Uncertainty
The application of the proposed DDSRO framework to a multi-period process network
planning problem is presented in this section. Large integrated chemical complexes consist of
interconnected processes and various chemicals (see Fig. 9 for an example) [62, 63]. These
interconnections allow the chemical production to make the most of different processes [64, 65].
The chemicals in the process network include raw materials, intermediates and final products. In
the process network planning problem, purchase levels of raw materials, sales of products, capacity
expansions, and production profiles of processes at each time period should be determined in order
to maximize the net present value (NPV) during the entire planning horizon [66].
4.1. Data-driven stochastic robust planning of process networks model
The DDSRO model for process network planning under uncertainty is formulated as follows.
The objective is to maximize the NPV, which is given in (17). The constraints include capacity
expansion constraints (18)-(19), budget constraints (20)-(21), production level constraint (22),
mass balance constraint (23), supply and demand constraints (24)-(25), non-negativity constraints
(26)-(27), and integrality constraint (28). A list of indices/sets, parameters and variables is given in
Nomenclature, where all the parameters are in lower-case symbols, and all the variables are
denoted in upper-case symbols.
The objective function is defined by (17).
max_{QEit, Yit} −∑_{i∈I} ∑_{t∈T} (c1it·QEit + c2it·Yit)
  + ∑_{s∈Ξ} ps min_{dujt∈Us^dem, sujt∈Us^sup} max_{Psjt, Qsit, SAsjt, Wsit}
    [ ∑_{j∈J} ∑_{t∈T} νjt·SAsjt − ∑_{i∈I} ∑_{t∈T} c3it·Wsit − ∑_{j∈J} ∑_{t∈T} c4jt·Psjt ]    (17)
(with ps the occurrence probability of data class s)
where QEit is a decision variable for the capacity expansion of process i at period t, Yit is a binary
decision variable that determines whether process i is expanded at period t, SAsjt is the amount of
chemical j sold to the market at period t for data class s, Wsit represents the operating level of
process i at period t for data class s, and Psjt is a decision variable on the amount of chemical j
purchased at period t for data class s. c1it, c2it, c3it and c4jt are the coefficients associated with
variable investment cost, fixed investment cost, operating cost and purchase cost, respectively. νjt
is the sale price of chemical j in time period t.
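The nested sums in (17) can be evaluated term by term with arrays. The sketch below uses small random arrays (hypothetical data, with the scenario index s fixed) only to illustrate the indexing over i ∈ I, j ∈ J and t ∈ T:

```python
import numpy as np

rng = np.random.default_rng(0)
nI, nJ, nT = 2, 3, 4                         # hypothetical numbers of processes, chemicals, periods
c1, c2, c3 = (rng.random((nI, nT)) for _ in range(3))
c4, nu = rng.random((nJ, nT)), rng.random((nJ, nT))
QE, W = rng.random((nI, nT)), rng.random((nI, nT))
Y = rng.integers(0, 2, (nI, nT))
SA, P = rng.random((nJ, nT)), rng.random((nJ, nT))

investment = (c1 * QE + c2 * Y).sum()        # sum over i in I, t in T
operating_profit = (nu * SA).sum() - (c3 * W).sum() - (c4 * P).sum()
npv_contribution = -investment + operating_profit
```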
Constraint (18) specifies the lower and upper bounds of the capacity expansions. Specifically,
if Yit=1, i.e. process i is selected to be expanded at period t, the expanded capacity should be within
the range [qeitL, qeitU].
qeitL·Yit ≤ QEit ≤ qeitU·Yit,   ∀i, t                            (18)
Constraint (19) depicts the update of capacity of process i.
Qit = Qi,t−1 + QEit,   ∀i, t                                     (19)
where Qit is a decision variable for total capacity of process i at period t.
Constraints (20) and (21) enforce the maximum number of capacity expansions per process and the
investment budget, respectively.
∑_t Yit ≤ cei,   ∀i                                              (20)
∑_i (c1it·QEit + c2it·Yit) ≤ cbt,   ∀t                           (21)
where cei is the maximum number of expansions of process i, and cbt is the investment budget for
period t.
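Constraints (18)-(20) can be prototyped directly as a small MILP. The sketch below uses scipy's `milp` on a hypothetical single-process instance (qeL = 10, qeU = 50, at most two expansions, a final-capacity requirement of 60) and minimizes investment cost; all numerical values are illustrative:

```python
import numpy as np
from scipy.optimize import milp, LinearConstraint, Bounds

nT = 3                                   # hypothetical number of periods
c1, c2 = 2.0, 30.0                       # variable / fixed investment cost
# Variables ordered as [QE_1, QE_2, QE_3, Y_1, Y_2, Y_3].
c = np.array([c1] * nT + [c2] * nT)
A = np.vstack([
    np.hstack([np.eye(nT), -50.0 * np.eye(nT)]),              # QE_t - qeU*Y_t <= 0  (18, upper)
    np.hstack([-np.eye(nT), 10.0 * np.eye(nT)]),              # -QE_t + qeL*Y_t <= 0 (18, lower)
    np.hstack([np.zeros(nT), np.ones(nT)]).reshape(1, -1),    # sum_t Y_t <= 2       (20)
    np.hstack([-np.ones(nT), np.zeros(nT)]).reshape(1, -1),   # sum_t QE_t >= 60
])
b_u = np.concatenate([np.zeros(2 * nT), [2.0, -60.0]])
res = milp(c=c,
           constraints=LinearConstraint(A, -np.inf, b_u),
           integrality=np.array([0] * nT + [1] * nT),
           bounds=Bounds(0, [np.inf] * nT + [1] * nT))
print(res.fun)
```

One expansion alone (at most 50) cannot reach the 60-unit requirement, so the solver picks two expansions totaling 60 units.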
Constraint (22) specifies that the production level of a process cannot exceed its total capacity.
Wsit ≤ Qit,   ∀s, i, t                                           (22)
Constraint (23) specifies the mass balance for chemicals.
Psjt − ∑_i κij·Wsit − SAsjt = 0,   ∀s, j, t                      (23)
where κij represents the mass balance coefficient of chemical j in process i.
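Constraint (23) can be read as: sales of chemical j equal purchases minus the net amount ∑_i κij·Wsit used across processes. A toy check with one process and two chemicals (hypothetical coefficients; with the sign convention implied by (23) as written, a consumed chemical carries a positive κ and a produced chemical a negative κ):

```python
import numpy as np

# kappa[i, j]: mass-balance coefficient of chemical j in process i (hypothetical).
kappa = np.array([[1.0, -0.9]])      # one process: consumes chem 0, yields chem 1
W = np.array([10.0])                 # operating level of the process
P = np.array([10.0, 0.0])            # purchases of the two chemicals
net_use = kappa.T @ W                # sum over i of kappa_ij * W_i, per chemical j
SA = P - net_use                     # sales implied by constraint (23)
print(SA)
```

Here the feedstock is fully consumed (zero sales) and 9 units of the product become available for sale.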
Constraint (24) specifies that the purchase amount of a feedstock cannot exceed its available
amount. The sale amount of a product and its market demand are specified in Constraint (25).
Psjt ≤ sujt,   ∀s, j, t                                          (24)
SAsjt ≤ dujt,   ∀s, j, t                                         (25)
where sujt is the market availability and dujt is the demand of chemical j in time period t.
Constraints (26) and (27) enforce the non-negativity of the continuous decisions.
QEit ≥ 0, Qit ≥ 0,   ∀i, t                                       (26)
Psjt, SAsjt, Wsit ≥ 0,   ∀s, j, t                                (27)
Constraint (28) ensures that Yit is a binary decision variable.
Yit ∈ {0, 1},   ∀i, t                                            (28)
Uncertainty information for demand and supply is shown in (29)-(30).
Us^dem = ∪_{k: γs,k^dem ≥ γ*} {du : du = μs,k^dem + κs,k^dem·(Ψs,k^dem)^{1/2}·Λs,k^dem·z,
         ||z||∞ ≤ 1, ||z||1 ≤ Φs,k^dem}                          (29)
where μs,k^dem, κs,k^dem and Ψs,k^dem are the inference results learned from uncertain product demand data
using the variational inference algorithm.
Us^sup = ∪_{k: γs,k^sup ≥ γ*} {su : su = μs,k^sup + κs,k^sup·(Ψs,k^sup)^{1/2}·Λs,k^sup·z,
         ||z||∞ ≤ 1, ||z||1 ≤ Φs,k^sup}                          (30)
where μs,k^sup, κs,k^sup and Ψs,k^sup are the inference results learned from uncertain supply data employing the
variational inference algorithm. The probabilities of different data classes are extracted using
maximum likelihood estimation.
The resulting problem on strategic planning of process network under uncertainty is a
stochastic robust MILP. The proposed computational algorithm is employed to efficiently solve
the large-scale multi-level optimization problem. In the next two subsections, two case studies are
presented to demonstrate the applicability and advantages of the proposed approach.
4.2. Case study 1
A small-scale case study is provided in this subsection. The process network, which is shown
in Fig. 5, consists of five chemicals and three processes [63]. In Fig. 5, chemicals A-C represent
raw materials, which can be either purchased from suppliers or produced by certain processes. For
example, Chemical C can be either manufactured by Process 3 or purchased from a supplier.
Chemicals D and E are final products, which are sold to the markets. In this case study, we consider
10 time periods over the 20-year planning horizon, and the duration of each time period is 2 years.
It is assumed that processes do not have initial capacities, and they can be installed at the beginning
of the planning horizon.
Fig. 5. The chemical process network of case study 1.
The supplies of feedstocks A-C, and demands of products D-E are subject to uncertainty. A
total of 160 demand uncertainty data points are used in this case study, and each data point
corresponds to a combination of all product demand realizations. For the supply uncertainty, 200
uncertainty data points are used. Each data point represents a combination of supply uncertainty
realizations for all raw materials. Demand and supply uncertainty data are labeled with different
government policies [67]. A government can encourage or discourage a certain industry by means
of subsidies and tax rates [68]. There are two labels within the demand uncertainty data, indicating
two data classes. There are also two data classes within the supply
uncertainty data. The two different labels represent two kinds of government policies that
encourage or discourage the industry related to the materials and products of this process network
[67, 68].
The results of the proposed data-driven uncertainty modeling framework are given in Fig. 6
and Fig. 7. In these two figures, the red dots and red stars represent uncertainty data from different
data classes, and the polytopes represent data-driven uncertainty sets with uncertainty budgets
Φdem=1 and Φsup=1. Due to the settings of uncertainty budgets, some data points are not included
in the uncertainty sets. The occurrence probabilities of different data classes are inferred from the
label information, and the corresponding numerical values are listed in these two figures. From
Fig. 6, we can see that the probabilities of demand data classes 1 and 2 are both 0.5. There is only
one component in the Dirichlet process mixture model for both data classes. The proposed
uncertainty model accurately captures the correlations among the two uncertain parameters for
demands. Fig. 7 shows that the occurrence probabilities for the two supply data classes are the
same. There is one component in each of these data classes. The DDSRO problem for the planning
of process network can be solved using the proposed computational algorithm.
Fig. 6. The data-driven demand uncertainty modeling results (Φdem=1) for the two data classes in
case study 1.
Fig. 7. The data-driven supply uncertainty modeling results (Φsup=1) for the two data classes in
case study 1.
We solve the optimization problem of process network planning using the deterministic
optimization method, the two-stage stochastic programming method, the two-stage ARO method
with a box set, the DDANRO approach, and the DDSRO approach. All optimization problems are
modeled in GAMS 24.7.3 [69], solved with CPLEX 12.6.3, and are implemented on a computer
with an Intel (R) Core (TM) i7-6700 CPU @ 3.40 GHz and 32 GB RAM. The relative optimality
gaps for both the decomposition-based algorithm in Section 2.4 and the multi-cut Benders
decomposition algorithm are set to be 0.1%.
The problem sizes and computational results are summarized in Table 2. For the two-stage
stochastic programming problem, each scenario corresponds to a demand uncertainty data point
and a supply uncertainty data point. There are 32,000 (160×200) scenarios in the two-stage
stochastic programming problem, because there are 160 and 200 data points for demand and supply
uncertainty, respectively. The two-stage stochastic programming approach aims to find a solution that
performs well on average, so its objective value is only 3.7% higher than that of the DDSRO
approach. However, it takes 28 iterations and 106,140 seconds to solve the resulting two-stage
stochastic programming problem with the multi-cut Benders decomposition [61]. We note that the
deterministic equivalent MILP of this two-stage stochastic program is not suitable to be solved
directly due to its large-scale problem size. By contrast, the DDSRO problem can be solved within
only 3 seconds, which demonstrates its computational advantage. The two-stage ARO problem
with box uncertainty sets is solved using the bounds of demand and supply derived from
uncertainty data points in Fig. 6 and Fig. 7, respectively. Data-driven uncertainty sets for demand
and supply are constructed based on all the data points in Fig. 6 and Fig. 7, respectively. The
uncertainty budgets for demand and supply are both 1. The computational times of the two-stage
ARO problem and the DDANRO problem are 1 CPU second and 7 CPU seconds, respectively. In
spite of its computational tractability, the two-stage ARO method is conservative and only
generates a NPV of $1,372.5MM. Compared with the two-stage ARO method, the DDANRO
method ameliorates the conservatism issue and generates $244.6MM higher NPV.
Table 2. Comparisons of problem sizes and computational results in case study 1.

                 Deterministic   Two-stage stochastic   Two-stage   DDANRO              DDSRO
                 planning        programming            ARO         (Δdem=1, Δsup=1)*   (Φdem=1, Φsup=1)#
Bin. Var.        30              30                     30          30                  30
Cont. Var.       196             4,160,060              246         246                 468
Constraints      276             10,560,193             376         396                 551
Max. NPV ($MM)   1,809.5         1,802.6                1,372.5     1,671.7             1,739.1
Total CPU (s)    0.2             106,140                1           7                   3
Iterations       N/A             28                     2           4                   2
* Δdem and Δsup are data-driven uncertainty budgets in the DDANRO approach for demand and supply, respectively.
# Φdem and Φsup are data-driven uncertainty budgets in the DDSRO approach for demand and supply, respectively.
The optimal capacity expansion decisions determined by the two-stage stochastic
programming approach, the DDANRO method and DDSRO approach are shown in Fig. 8 (a), (b)
and (c), respectively. As can be observed from Fig. 8, Process 1 is expanded at time period 5, and
Process 2 starts the expansion at time period 2 for all the three methods. Moreover, Process 2 is
further expanded at the 6-th time period in the solution determined by the DDSRO approach. By
contrast, Process 2 is not selected to be expanded from time period 2 in the DDANRO solution.
The optimal process capacities at the end of the planning horizon determined by the two-stage
stochastic programming approach, the DDANRO method and the DDSRO approach are listed in
Table 3.
Table 3. The optimal capacity of processes at the end of the planning horizon.

Process capacity (kt/y)   Two-stage stochastic programming   DDANRO   DDSRO
Process 1                 100                                114      114
Process 2                 227                                207      225
Process 3                 51                                 39       39
Fig. 8. Optimal capacity expansion decisions over the entire planning horizon determined by (a)
the two-stage stochastic programming approach, (b) the DDANRO approach, and (c) the DDSRO
approach in case study 1.
4.3. Case study 2
A large-scale case study is presented in this subsection to demonstrate the advantages of the
DDSRO approach. The process network in this case study consists of 28 chemicals and 38
processes [63], as shown in Fig. 9. Chemicals A-J represent raw materials, which can be purchased
from suppliers or manufactured by some processes. Chemicals K-Z are final products, which could
be sold to markets. There are 2 intermediates AA and AB. This complex process network has such
flexibility that many process technologies are available. For example, Chemical L can be
manufactured by Process 2, Process 15 and Process 17. In this case study, we consider 10 time
periods over the planning horizon, and the duration of each time period is 2 years. It is assumed
that processes 12, 13, 16 and 38 have initial capacities of 40, 25, 300 and 200 kt/y at the beginning
of the planning horizon. These 4 processes cannot be expanded until time period 2. The other
processes can be built at the beginning of time period 1.
As in the previous case study, supplies of all raw materials and demands of all final products
are subject to uncertainty. For the supply uncertainty, a set of 40,000 uncertainty data points is used for
uncertainty modeling. Each data point has 10 dimensions for a combination of all raw materials.
For the demand uncertainty, 40,000 uncertainty data points are used, and each data point is for a
combination of all the 16 products. There are 3 labels within the demand uncertainty data, indicating
different data classes. There are also 3 data classes within the supply uncertainty data.
The 3 labels are different kinds of government policies regarding the industry associated with the
process network [67, 68]: (1) encouragement policy, (2) neutral policy and (3) discouragement
policy. These policies not only influence the willingness of suppliers to provide related raw
materials but also the consumptions of products via subsidies or tax rates. The occurrence
probabilities of different data classes are inferred from the label information. The data-driven
uncertainty modeling results show that the probabilities of supply data classes 1-3 are 0.25, 0.50,
and 0.25, respectively. The probabilities of demand data classes 1-3 are calculated to be 0.25, 0.50,
and 0.25, respectively. The number of components in the Dirichlet process mixture models is 1 for
all demand data classes. The extracted uncertainty information is incorporated into the DDSRO
based process network planning problem.
To demonstrate the advantages of the proposed approach, we solve the process network
planning problem using deterministic method, the two-stage stochastic programming approach,
the two-stage ARO method, the DDANRO method and the DDSRO approach. In the two-stage
stochastic programming problem, there are 1.6 billion scenarios (40,000×40,000), as there are
40,000 data points for both demand and supply uncertainty. In the deterministic equivalent of the
two-stage stochastic programming problem, there are 380 integer variables, 1,472,000,001,000
continuous variables, and 1,024,000,004,000 constraints. The size of this two-stage stochastic
programming problem is too large to be solved within the computational time limit of 48 hours
using the multi-cut Benders decomposition algorithm. While the multi-cut Benders decomposition
algorithm can take advantage of the problem structure and improve the computational efficiency,
it is still intractable to handle this process network planning problem with “big” data for uncertain
parameters. Therefore, we only report the problem sizes and computational results of the
deterministic approach, the two-stage ARO method, the DDANRO method and the DDSRO
approach in Table 4.
Fig. 9. The chemical process network of case study 2.
Table 4. Comparisons of problem sizes and computational results in case study 2.

                 Deterministic   Two-stage   DDANRO              DDSRO
                 planning        ARO         (Δdem=5, Δsup=3)*   (Φdem=5, Φsup=3)#
Bin. Var.        380             380         380                 380
Cont. Var.       1,706           1,986       1,986               3,678
Constraints      2,366           2,926       2,946               6,462
Max. NPV ($MM)   2,204.5         698.4       761.0               1,831.2
Total CPU (s)    0.4             5           3,232               505
Iterations       N/A             6           12                  6
* Δdem and Δsup are data-driven uncertainty budgets in the DDANRO approach for demand and supply, respectively.
# Φdem and Φsup are data-driven uncertainty budgets in the DDSRO approach for demand and supply, respectively.
From Table 4, we can see that the deterministic planning method consumes less computational
time, and generates the highest NPV. However, its solution suffers from infeasibility issues if the
supply of raw materials and demand of final products are uncertain. By leveraging the
corresponding strengths of two-stage stochastic programming approach and the two-stage ARO
method, the DDSRO approach generates $1,070.2MM higher NPV than the solution determined
by the DDANRO method, while still maintaining computational tractability. The DDSRO problem
can be solved in only 505 seconds. The optimal design and planning decisions at the end of the
planning horizon determined by the deterministic optimization method, DDANRO, and DDSRO
are shown in Fig. 10, Fig. 11 and Fig. 12, respectively. In these 3 figures, the optimal total
production capacities are displayed under operating processes.
Fig. 10. The optimal design and planning decisions at the end of the planning horizon determined
by the deterministic optimization method in case study 2.
Fig. 11. The optimal design and planning decisions at the end of the planning horizon determined
by the DDANRO approach in case study 2.
Fig. 12. The optimal design and planning decisions at the end of the planning horizon determined
by the proposed DDSRO approach in case study 2.
The optimal capacity expansion decisions determined by the DDANRO method and the
DDSRO approach are shown in Fig. 13 and Fig. 14, respectively. From Fig. 13, we can see that a
total of 15 processes are selected in the optimal process network determined by the DDANRO
method. As shown in Fig. 14, a total of 18 processes are chosen in the optimal process network
determined by the DDSRO approach. Specifically, Processes 2, 6 and 11 are not selected in the
optimal process network determined by the DDANRO method. Besides, the optimal expansion
frequencies and total capacities of Process 28 determined by the two methods are different.
Fig. 13. Optimal capacity expansion decisions over the entire planning horizon determined by the
DDANRO approach in case study 2.
Fig. 14. Optimal capacity expansion decisions over the entire planning horizon determined by the
proposed DDSRO approach in case study 2.
Fig. 15 displays the upper and lower bounds in each iteration of the proposed algorithm. The
X-axis and Y-axis denote the iteration number and the objective function values, respectively. In
Fig. 15, the green dots stand for the upper bounds, while the yellow circles represent the lower
bounds. The proposed solution algorithm requires only 6 iterations to converge. During the first 3
iterations, the relative optimality gap decreases significantly from 88.1% to 0.6%.
Fig. 15. Upper and lower bounds in each iteration of the decomposition-based algorithm in case
study 2.
5. Conclusions
This paper proposed a novel DDSRO framework that leveraged machine learning methods to
extract accurate uncertainty information from multi-class uncertainty data for decision making
under uncertainty. The probability distribution of data classes was learned through maximum
likelihood estimation. A group of Dirichlet process mixture models were used to construct
uncertainty sets for each data class. Stochastic programming approach was nested as the outer
optimization problem to leverage available probability distribution, while ARO was nested as the
inner optimization problem for computational tractability. The proposed DDSRO framework
leveraged the strengths of both stochastic programming and robust optimization. A decomposition-based
algorithm was further developed to efficiently solve the resulting DDSRO problem. The two-stage
stochastic programming approach was computationally intractable for the large-scale
optimization problem with huge amounts of uncertainty data. The DDSRO approach was much
more computationally efficient than two-stage stochastic programming approach. The size of the
stochastic programming problem grew exponentially as the number of uncertainty data points
increased. By contrast, the problem size of DDSRO was less sensitive to the amount of uncertainty
data. Therefore, DDSRO remained tractable to handle problems with “big” data for uncertainties.
Moreover, DDSRO generated less conservative solutions compared with two-stage ARO method.
In addition, the DDSRO approach could be considered as a generalization of the DDANRO method.
The results showed that the DDSRO approach was advantageous over the DDANRO method for
labeled multi-class uncertainty data.
Acknowledgements
The authors acknowledge financial support from the National Science Foundation (NSF)
CAREER Award (CBET-1643244).
Appendix A. Dirichlet Process Mixture Model
The Dirichlet process is a widely used stochastic process, especially in the Dirichlet process
mixture model. Suppose a random distribution G follows a Dirichlet process, denoted as
G ~ DP(α, G0); then (G(Θ1), …, G(Θr)) is Dirichlet distributed for any measurable partition
(Θ1, …, Θr) of Θ [70].
(G(Θ1), …, G(Θr)) ~ Dir(αG0(Θ1), …, αG0(Θr))                     (A1)
where α and G0 are the concentration parameter and base distribution, respectively. Dir in (A1)
represents the Dirichlet distribution. A random draw from the Dirichlet process follows the
stick-breaking construction in (A2) [71].
G = ∑_{k=1}^∞ πk δθk,   πk = βk ∏_{j=1}^{k−1} (1 − βj)           (A2)

where βk is the proportion of stick broken off from the remaining stick, and βk ~ Beta(1, α). θk is
a random variable drawn from the base distribution G0. The delta function δθk is equal to 1 at the
location of θk and 0 elsewhere.
In the Dirichlet process mixture model, θk is treated as the parameters of the data distribution.
A significant feature of the Dirichlet process mixture model is that it has a potentially infinite
number of components. This feature grants it great flexibility to model intricate data
distributions. The Dirichlet process mixture model is given in (A3) [58].
βk | α ~ Beta(1, α)
θk | G0 ~ G0
ti | {β1, β2, …} ~ Mult(π(β))                                    (A3)
oi | ti ~ p(oi | θti)
where Beta and Mult denote the Beta distribution and the multinomial distribution, respectively.
The observation oi is drawn from the ti-th mixture component.
Appendix B. Details of The Decomposition Algorithm
The multi-level optimization problem (10) can be reformulated to an equivalent single-level
optimization problem by enumerating all the extreme points [72]. However, this single-level
optimization problem could be a large-scale optimization problem due to the potentially large
number of extreme points. Therefore, we partially enumerate the extreme points, and construct the
master problem following the literature [8]. Because master problem (MP) contains a subset of the
extreme points, it is a relaxation of the original single-level optimization problem. The master
problem (MP) is shown below.
min_{x, η, yls} cᵀx + η
s.t. Ax ≥ d
     η ≥ ∑_s ps(bᵀyls),   l = 1, …, r                            (MP)
     Tx + Wyls ≥ h − Muls,   l = 1, …, r, ∀s
     x ∈ R+^{n1} × Z^{n2},  yls ∈ R+^{n3},   l = 1, …, r, ∀s
where s is the index for data class, uls is the enumerated uncertainty realization at the l-th
iteration for data class s, yls is its corresponding recourse variable, and r stands for the current
number of iterations.
To enumerate the important uncertainty realization on-the-fly, the sub-problem in (B1) is
solved in each iteration [8].
Qs,i(x) = max_{u∈Us,i} min_{ys} bᵀys
s.t. Wys ≥ h − Tx − Mu                                           (B1)
     ys ∈ R+^{n3}
It is worth noting that (B1) is a max-min optimization problem, which can be transformed to
a single-level optimization problem by employing strong duality or KKT conditions. To make the
sub-problem computationally tractable, we dualize the inner optimization problem of (B1), which
is a linear program with regard to ys, and merge the dual problem with the maximization problem
with respect to u. The resulting problem is shown below.
Qs,i(x) = max_{z,φ} (h − Tx − Mμs,i)ᵀφ − ∑_t ∑_j (κs,i·MΨs,i^{1/2}·Λs,i)_{tj} zj φt
s.t. Wᵀφ ≤ b                                                     (SUPs,i)
     φ ≥ 0
     ||z||∞ ≤ 1, ||z||1 ≤ Φs,i
where φ is the vector of dual variables corresponding to the constraint Wy ≥ h − Tx − Mu, φt
represents the t-th component of vector φ, and zj is the j-th component of vector z. Problem (SUPs,i)
is reformulated to handle the bilinear term zjφt in its objective function.
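The dualization step described above (replacing the inner minimization over ys by a maximization over φ) can be sanity-checked numerically on a toy instance with x and u fixed; the data below are hypothetical:

```python
import numpy as np
from scipy.optimize import linprog

W = np.array([[1.0, 2.0], [3.0, 1.0]])   # hypothetical recourse matrix
b = np.array([4.0, 5.0])                 # recourse cost vector
rhs = np.array([6.0, 7.0])               # stands for the fixed h - T x - M u

# Primal: min b^T y  s.t.  W y >= rhs, y >= 0.
primal = linprog(b, A_ub=-W, b_ub=-rhs, bounds=[(0, None)] * 2)
# Dual:   max rhs^T phi  s.t.  W^T phi <= b, phi >= 0.
dual = linprog(-rhs, A_ub=W.T, b_ub=b, bounds=[(0, None)] * 2)
print(primal.fun, -dual.fun)             # equal by strong LP duality
```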
To facilitate the reformulation, zj is divided into two parts, zj = zj+ − zj−. Thus, we have

u = μs,i + κs,i·Ψs,i^{1/2}·Λs,i·(z+ − z−)                        (B2)
Following the existing literature [73, 74], the uncertainty budget parameter Φs,i is typically set to
an integer value due to its physical meaning. For an integer uncertainty budget, the optimal zj+
and zj− take values of 0 or 1, so these two variables can be restricted to binary variables. We
employ Glover's linearization for the bilinear terms Gtj+ = zj+·φt and Gtj− = zj−·φt [75], which are
the products of a binary variable zj+ (or zj−) and a continuous variable φt, and reformulate (SUPs,i)
into (B3) following the procedure introduced in [8].
Qs,i(x) = max_{φ, z+, z−, G+, G−} (h − Tx − Mμs,i)ᵀφ − Tr((κs,i·MΨs,i^{1/2}·Λs,i)ᵀ(G+ − G−))
s.t. Wᵀφ ≤ b
     φ ≥ 0
     0 ≤ G+ ≤ φ·eᵀ
     0 ≤ G− ≤ φ·eᵀ
     G+ ≤ ē·(M0·z+)ᵀ                                             (B3)
     G− ≤ ē·(M0·z−)ᵀ
     G+ ≥ φ·eᵀ − ē·(M0·(e − z+))ᵀ
     G− ≥ φ·eᵀ − ē·(M0·(e − z−))ᵀ
     eᵀ(z+ + z−) ≤ Φs,i
     z+ + z− ≤ e,  z+, z− ∈ {0,1}^K
where e and ē denote all-ones vectors of conformable dimensions. The t-th row, j-th column
elements of matrices G+ and G− are Gtj+ and Gtj−, respectively.
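The Glover constraints in (B3) pin G to exactly z·φ for binary z, as the small scalar check below illustrates (M0 is any constant upper-bounding φ):

```python
def glover_bounds(phi, z, M0):
    """Feasible interval for G under 0 <= G <= phi, G <= M0*z,
    G >= phi - M0*(1 - z); for binary z it collapses to the point z*phi."""
    lo = max(0.0, phi - M0 * (1 - z))
    hi = min(phi, M0 * z)
    return lo, hi

print(glover_bounds(phi=3.5, z=1, M0=100.0))   # -> (3.5, 3.5), i.e. G = phi
print(glover_bounds(phi=3.5, z=0, M0=100.0))   # -> (0.0, 0.0), i.e. G = 0
```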
Nomenclature
DDSRO Framework
The main notations used in the DDSRO modeling framework are listed below.
b        vector of objective coefficients corresponding to second-stage decisions
c        vector of objective coefficients corresponding to first-stage decisions
u        vector of uncertainties
x        vector of first-stage decisions
ys       vector of second-stage decisions for data class s
C        total number of data classes in uncertainty data
m(s)     total number of basic uncertainty sets for data class s
M        truncation level in the variational inference
M0       constant with a very large value
Us       uncertainty set for data class s
s        index of data class
Λi       scaling factor in the data-driven uncertainty set
τi       hyper-parameter for the Beta distribution in the variational inference
νi       hyper-parameter for the Beta distribution in the variational inference
γi       weight of mixture component
γ*       threshold value for the weight of mixture component
Φs,i     uncertainty budget in the data-driven uncertainty set
A        coefficient matrix corresponding to the first-stage decisions
M        coefficient matrix corresponding to the uncertainties
T        technology matrix in the DDSRO framework
W        recourse matrix in the DDSRO framework
Application to planning of process networks
The sets, parameters, and variables used in the application part are summarized below. Note
that all parameters are denoted in lower-case symbols, and all variables are denoted in upper-case
symbols.
Sets/indices
I        set of processes indexed by i
J        set of chemicals indexed by j
T        set of time periods indexed by t
Ξ        set of data classes indexed by s

Parameters
c1it     variable investment cost for process i in time period t
c2it     fixed investment cost for process i in time period t
c3it     unit operating cost for process i in time period t
c4jt     purchase price of chemical j in time period t
cbt      maximum allowable investment in time period t
cei      maximum number of expansions for process i over the planning horizon
dujt     demand of chemical j in time period t
qeitL    lower bound for capacity expansion of process i in time period t
qeitU    upper bound for capacity expansion of process i in time period t
sujt     supply of chemical j in time period t
νjt      sale price of chemical j in time period t
κij      mass balance coefficient for chemical j in process i

Binary variables
Yit      binary variable that indicates whether process i is chosen for expansion in time period t

Continuous variables
Psjt     purchase amount of chemical j in time period t for data class s
Qit      total capacity of process i in time period t
QEit     capacity expansion of process i in time period t
SAsjt    sale amount of chemical j in time period t for data class s
Wsit     operating level of process i in time period t for data class s
References
[1] M. S. Afzal, W. Tan, and T. Chen, "Process monitoring for multimodal processes with mode-reachability constraints," IEEE Transactions on Industrial Electronics, vol. 64, pp. 4325-4335, 2017.
[2] A. Karpathy, G. Toderici, S. Shetty, T. Leung, R. Sukthankar, and F. Li, "Large-scale video classification with convolutional neural networks," in 2014 IEEE Conference on Computer Vision and Pattern Recognition, 2014, pp. 1725-1732.
[3] T. Xiao, T. Xia, Y. Yang, C. Huang, and X. Wang, "Learning from massive noisy labeled data for image classification," in 2015 IEEE Conference on Computer Vision and Pattern Recognition, 2015, pp. 2691-2699.
[4] K. P. Murphy, Machine learning: A probabilistic perspective. MIT Press, 2012.
[5] F. Sebastiani, "Machine learning in automated text categorization," ACM Computing Surveys, vol. 34, pp. 1-47, 2002.
[6] M. I. Jordan and T. M. Mitchell, "Machine learning: Trends, perspectives, and prospects," Science, vol. 349, pp. 255-260, 2015.
[7] B. A. Calfa, I. E. Grossmann, A. Agarwal, S. J. Bury, and J. M. Wassick, "Data-driven individual and joint chance-constrained optimization via kernel smoothing," Computers & Chemical Engineering, vol. 78, pp. 51-69, 2015.
[8] C. Ning and F. You, "Data-driven adaptive nested robust optimization: General modeling framework and efficient computational algorithm for decision making under uncertainty," AIChE Journal, vol. 63, pp. 3790-3817, 2017.
[9] R. Jiang and Y. Guan, "Data-driven chance constrained stochastic program," Mathematical Programming, vol. 158, pp. 291-327, 2016.
[10] C. Shang, X. Huang, and F. You, "Data-driven robust optimization based on kernel learning," Computers & Chemical Engineering, vol. 106, pp. 464-479, 2017.
[11] C. Ning and F. You, "A data-driven multistage adaptive robust optimization framework for planning and scheduling under uncertainty," AIChE Journal, vol. 63, pp. 4343-4369, 2017.
[12] D. Bertsimas, V. Gupta, and N. Kallus, "Data-driven robust optimization," Mathematical Programming, 2017. DOI: 10.1007/s10107-017-1125-8.
[13] C. Ning and F. You, "Adaptive robust optimization with minimax regret criterion: Multiobjective optimization framework and computational algorithm for planning and scheduling under uncertainty," Computers & Chemical Engineering, vol. 108, pp. 425-447, 2018.
[14] C. Shang and F. You, "Distributionally robust optimization for planning and scheduling under uncertainty," Computers & Chemical Engineering. DOI: 10.1016/j.compchemeng.2017.12.002.
[15] D. Bertsimas and N. Kallus, "From predictive to prescriptive analytics," arXiv preprint arXiv:1402.5481, 2014.
[16] N. V. Sahinidis, "Optimization under uncertainty: State-of-the-art and opportunities," Computers & Chemical Engineering, vol. 28, pp. 971-983, 2004.
[17] I. E. Grossmann, R. M. Apap, B. A. Calfa, P. García-Herreros, and Q. Zhang, "Recent advances in mathematical programming techniques for the optimization of process systems under uncertainty," Computers & Chemical Engineering, vol. 91, pp. 3-14, 2016.
[18] E. N. Pistikopoulos, "Uncertainty in process design and operations," Computers & Chemical Engineering, vol. 19, pp. 553-563, 1995.
[19] J. M. Mulvey, R. J. Vanderbei, and S. A. Zenios, "Robust optimization of large-scale systems," Operations Research, vol. 43, pp. 264-281, 1995.
[20] P. Psarris and C. A. Floudas, "Robust stability analysis of systems with real parametric uncertainty: A global optimization approach," International Journal of Robust and Nonlinear Control, vol. 5, pp. 699-717, 1995.
[21] M. G. Ierapetritou, E. N. Pistikopoulos, and C. A. Floudas, "Operational planning under uncertainty," Computers & Chemical Engineering, vol. 20, pp. 1499-1516, 1996.
[22] A. Ben-Tal and A. Nemirovski, "Robust optimization - methodology and applications," Mathematical Programming, vol. 92, pp. 453-480, 2002.
[23] J. R. Birge and F. Louveaux, Introduction to stochastic programming. Springer Science & Business Media, 2011.
[24] V. Gabrel, C. Murat, and A. Thiele, "Recent advances in robust optimization: An overview," European Journal of Operational Research, vol. 235, pp. 471-483, 2014.
[25] D. Bertsimas, D. B. Brown, and C. Caramanis, "Theory and applications of robust optimization," SIAM Review, vol. 53, pp. 464-501, 2011.
[26] J. Gong, D. J. Garcia, and F. You, "Unraveling optimal biomass processing routes from bioconversion product and process networks under uncertainty: An adaptive robust optimization approach," ACS Sustainable Chemistry & Engineering, vol. 4, pp. 3160-3173, 2016.
[27] J. Gong and F. You, "Optimal processing network design under uncertainty for producing fuels and value-added bioproducts from microalgae: Two-stage adaptive robust mixed integer fractional programming model and computationally efficient solution algorithm," AIChE Journal, vol. 63, pp. 582-600, 2017.
[28] H. Shi and F. You, "A computational framework and solution algorithms for two-stage adaptive robust scheduling of batch manufacturing processes under uncertainty," AIChE Journal, vol. 62, pp. 687-703, 2016.
[29] J. Gao and F. You, "Deciphering and handling uncertainty in shale gas supply chain design and optimization: Novel modeling framework and computationally efficient solution algorithm," AIChE Journal, vol. 61, pp. 3739-3755, 2015.
[30] Y. Chu and F. You, "Integration of scheduling and dynamic optimization of batch processes under uncertainty: Two-stage stochastic programming approach and enhanced generalized Benders decomposition algorithm," Industrial & Engineering Chemistry Research, vol. 52, pp. 16851-16869, 2013.
[31] H. Kasivisvanathan, A. T. Ubando, D. K. S. Ng, and R. R. Tan, "Robust optimization for process synthesis and design of multifunctional energy systems with uncertainties," Industrial & Engineering Chemistry Research, vol. 53, pp. 3196-3209, 2014.
[32] S. R. Cardoso, A. P. F. D. Barbosa-Póvoa, and S. Relvas, "Design and planning of supply chains with integration of reverse logistics activities under demand uncertainty," European Journal of Operational Research, vol. 226, pp. 436-451, 2013.
[33] S. Liu, S. S. Farid, and L. G. Papageorgiou, "Integrated optimization of upstream and downstream processing in biopharmaceutical manufacturing under uncertainty: A chance constrained programming approach," Industrial & Engineering Chemistry Research, vol. 55, pp. 4599-4612, 2016.
[34] A. Krieger and E. N. Pistikopoulos, "Model predictive control of anesthesia under uncertainty," Computers & Chemical Engineering, vol. 71, pp. 699-707, 2014.
[35] G. H. Huang and D. P. Loucks, "An inexact two-stage stochastic programming model for water resources management under uncertainty," Civil Engineering and Environmental Systems, vol. 17, pp. 95-118, 2000.
[36] P. M. Verderame, J. A. Elia, J. Li, and C. A. Floudas, "Planning and scheduling under uncertainty: A review across multiple sectors," Industrial & Engineering Chemistry Research, vol. 49, pp. 3993-4017, 2010.
[37] H. Khajuria and E. N. Pistikopoulos, "Optimization and control of pressure swing adsorption processes under uncertainty," AIChE Journal, vol. 59, pp. 120-131, 2013.
[38] S. Ahmed, A. J. King, and G. Parija, "A multi-stage stochastic integer programming approach for capacity expansion under uncertainty," Journal of Global Optimization, vol. 26, pp. 3-24, 2003.
[39] J. Li, P. M. Verderame, and C. A. Floudas, "Operational planning of large-scale continuous processes: Deterministic planning model and robust optimization for demand amount and due date uncertainty," Industrial & Engineering Chemistry Research, vol. 51, pp. 4347-4362, 2012.
[40] P. Liu, E. N. Pistikopoulos, and Z. Li, "Decomposition based stochastic programming approach for polygeneration energy systems design under uncertainty," Industrial & Engineering Chemistry Research, vol. 49, pp. 3295-3305, 2010.
[41] J. Li, R. Misener, and C. A. Floudas, "Scheduling of crude oil operations under demand uncertainty: A robust optimization framework coupled with global optimization," AIChE Journal, vol. 58, pp. 2373-2396, 2012.
[42] J. R. Birge, "State-of-the-art-survey - stochastic programming: Computation and applications," INFORMS Journal on Computing, vol. 9, pp. 111-133, 1997.
[43] A. Bonfill, A. Espuña, and L. Puigjaner, "Addressing robustness in scheduling batch processes with uncertain operation times," Industrial & Engineering Chemistry Research, vol. 44, pp. 1524-1534, 2005.
[44] A. Bonfill, M. Bagajewicz, A. Espuña, and L. Puigjaner, "Risk management in the scheduling of batch plants under uncertain market demand," Industrial & Engineering Chemistry Research, vol. 43, pp. 741-750, 2004.
[45] B. H. Gebreslassie, Y. Yao, and F. You, "Design under uncertainty of hydrocarbon biorefinery supply chains: Multiobjective stochastic programming models, decomposition algorithm, and a comparison between CVaR and downside risk," AIChE Journal, vol. 58, pp. 2155-2179, 2012.
[46] L. J. Zeballos, C. A. Méndez, and A. P. Barbosa-Povoa, "Design and planning of closed-loop supply chains: A risk-averse multistage stochastic approach," Industrial & Engineering Chemistry Research, vol. 55, pp. 6236-6249, 2016.
[47] Y. Yuan, Z. Li, and B. Huang, "Robust optimization under correlated uncertainty: Formulations and computational study," Computers & Chemical Engineering, vol. 85, pp. 58-71, 2016.
[48] A. Ben-Tal and A. Nemirovski, "Robust solutions of linear programming problems contaminated with uncertain data," Mathematical Programming, vol. 88, pp. 411-424, 2000.
[49] D. Bertsimas and M. Sim, "The price of robustness," Operations Research, vol. 52, pp. 35-53, 2004.
[50] A. Ben-Tal, L. El Ghaoui, and A. Nemirovski, Robust optimization. Princeton University Press, 2009.
[51] J. Gong and F. You, "Resilient design and operations of process systems: Nonlinear adaptive robust optimization model and algorithm for resilience analysis and enhancement," Computers & Chemical Engineering, 2017. DOI: 10.1016/j.compchemeng.2017.11.002.
[52] K. McLean and X. Li, "Robust scenario formulations for strategic supply chain optimization under uncertainty," Industrial & Engineering Chemistry Research, vol. 52, pp. 5721-5734, 2013.
[53] D. Yue and F. You, "Optimal supply chain design and operations under multi-scale uncertainties: Nested stochastic robust optimization modeling framework and solution algorithm," AIChE Journal, vol. 62, pp. 3041-3055, 2016.
[54] C. Liu, C. Lee, H. Chen, and S. Mehrotra, "Stochastic robust mathematical programming model for power system optimization," IEEE Transactions on Power Systems, vol. 31, pp. 821-822, 2016.
[55] J. Shi, W. J. Lee, Y. Liu, Y. Yang, and P. Wang, "Forecasting power output of photovoltaic systems based on weather classification and support vector machines," IEEE Transactions on Industry Applications, vol. 48, pp. 1064-1069, 2012.
[56] L. Wasserman, All of statistics: A concise course in statistical inference. Springer Science & Business Media, 2013.
[57] T. Campbell and J. P. How, "Bayesian nonparametric set construction for robust optimization," in 2015 American Control Conference (ACC), 2015, pp. 4216-4221.
[58] D. M. Blei and M. I. Jordan, "Variational inference for Dirichlet process mixtures," Bayesian Analysis, vol. 1, pp. 121-143, 2006.
[59] A. Billionnet, M.-C. Costa, and P.-L. Poirion, "2-stage robust MILP with continuous recourse variables," Discrete Applied Mathematics, vol. 170, pp. 21-32, 2014.
[60] C. Ning and F. You, "A data-driven multistage adaptive robust optimization framework for planning and scheduling under uncertainty," AIChE Journal, vol. 63, pp. 4343-4369, 2017.
[61] F. You and I. E. Grossmann, "Multicut Benders decomposition algorithm for process supply chain planning under uncertainty," Annals of Operations Research, vol. 210, pp. 191-211, 2013.
[62] N. V. Sahinidis, I. E. Grossmann, R. E. Fornari, and M. Chathrathi, "Optimization model for long range planning in the chemical industry," Computers & Chemical Engineering, vol. 13, pp. 1049-1063, 1989.
[63] F. You and I. E. Grossmann, "Stochastic inventory management for tactical process planning under uncertainties: MINLP models and algorithms," AIChE Journal, vol. 57, pp. 1250-1277, 2011.
[64] D. Yue and F. You, "Planning and scheduling of flexible process networks under uncertainty with stochastic inventory: MINLP models and algorithm," AIChE Journal, vol. 59, pp. 1511-1532, 2013.
[65] M. L. Liu and N. V. Sahinidis, "Optimization in process planning under uncertainty," Industrial & Engineering Chemistry Research, vol. 35, pp. 4154-4165, 1996.
[66] S. Ahmed and N. V. Sahinidis, "Robust process planning under uncertainty," Industrial & Engineering Chemistry Research, vol. 37, pp. 1883-1892, 1998.
[67] M. Dal-Mas, S. Giarola, A. Zamboni, and F. Bezzo, "Strategic design and investment capacity planning of the ethanol supply chain under price uncertainty," Biomass and Bioenergy, vol. 35, pp. 2059-2071, 2011.
[68] A. A. Vertes, N. Qureshi, H. Yukawa, and H. P. Blaschek, Biomass to biofuels: Strategies for global industries. John Wiley & Sons, 2011.
[69] E. Rosenthal, GAMS: A user's guide. Washington, DC: GAMS Development Corporation, 2008.
[70] T. S. Ferguson, "A Bayesian analysis of some nonparametric problems," The Annals of Statistics, vol. 1, pp. 209-230, 1973.
[71] J. Sethuraman, "A constructive definition of Dirichlet priors," Statistica Sinica, vol. 4, pp. 639-650, 1994.
[72] A. Takeda, S. Taguchi, and R. H. Tütüncü, "Adjustable robust optimization models for a nonlinear two-period system," Journal of Optimization Theory and Applications, vol. 136, pp. 275-295, 2008.
[73] D. Bertsimas, E. Litvinov, X. A. Sun, J. Zhao, and T. Zheng, "Adaptive robust optimization for the security constrained unit commitment problem," IEEE Transactions on Power Systems, vol. 28, pp. 52-63, 2013.
[74] A. Thiele, T. Terry, and M. Epelman, "Robust linear optimization with recourse," Rapport technique, pp. 4-37, 2009.
[75] F. Glover, "Improved linear integer programming formulations of nonlinear integer problems," Management Science, vol. 22, pp. 455-460, 1975.
arXiv:1703.01326v1 [] 3 Mar 2017
Prediction based on the Kennedy-O’Hagan calibration
model: asymptotic consistency and other properties
Rui Tuo
Academy of Mathematics and Systems Sciences
Chinese Academy of Sciences
C. F. Jeff Wu
School of Industrial and Systems Engineering
Georgia Institute of Technology
March 7, 2017
Abstract
Kennedy and O’Hagan (2001) propose a model for calibrating some unknown parameters in a computer model and estimating the discrepancy between the computer
output and physical response. This model is known to have certain identifiability
issues. Tuo and Wu (2016) show that there are examples for which the Kennedy-O'Hagan method renders unreasonable results in calibration. In spite of its unstable
performance in calibration, the Kennedy-O’Hagan approach has a more robust behavior in predicting the physical response. In this work, we present some theoretical
analysis to show the consistency of predictor based on their calibration model in the
context of radial basis functions.
Key words and phrases: computer experiments, kriging, Bayesian inference
1  Introduction
With the development of mathematical models and computational techniques, simulation programs have become increasingly powerful tools for the prediction,
validation and control of many physical processes. A computer simulation run, based
on a virtual platform, requires only computational resources that are rather inexpensive in today’s computing environment. In contrast, a physical experiment usually
requires more facilities, materials, and human labor. As a consequence, a typical
computer simulation run is much cheaper than its corresponding physical experiment
trial. The economic benefits of computer simulations make them particularly useful
and attractive in scientific and engineering research. As a branch of statistics, design
of experiments mainly studies methodologies for the planning, analysis and optimization of physical experiments (Wu and Hamada, 2011). Given the rapid spread of
computer simulations, it is beneficial to develop theory and methods for the design and
analysis of computer simulation experiments. This emerging field is commonly referred
to as computer experiments. We refer to Santner et al. (2003) for more details.
The input variables of a computer experiment normally consist of factors which
can be controlled in the physical process, referred to as the control variables, as well as
some model parameters. These model parameters represent certain intrinsic properties
of the physical system. For example, to simulate a heat transfer process, we need to
solve a heat equation. The formulation of the equation requires the environmental settings and the initial conditions of the system which can be controlled physically, as well
as the thermal conductivity which is uncontrollable and cannot be measured directly
in general. For most computer simulations, the prediction accuracy of the computer
model is closely related to the choice of the model parameters. A standard method
for determining the unknown model parameters is to estimate them by comparing the
computer outputs and the physical responses. Such a procedure is known as the calibration for computer models, and the model parameters to be identified are called
the calibration parameters. Kennedy and O’Hagan (2001) first study the calibration
problem using ideas and methods in computer experiments. They propose a Bayesian
hierarchical model to estimate the calibration parameters by computing their posterior distributions. Tuo and Wu (2016) show that the Kennedy-O’Hagan method may
render unreasonable estimates for the calibration parameters. Given the widespread
use of the Kennedy-O’Hagan method, it will be desirable to make a comprehensive
assessment about this method. For brevity, we sometimes refer to Kennedy-O’Hagan
as KO.
This paper endeavors to study the prediction performance of the Kennedy-O’Hagan
approach. First we adopt the framework of Tuo and Wu (2016) which assumes the
physical observations to be non-random. Interpolation theory in the native spaces
becomes the key mathematical tool in this part. Then, we study the more realistic
situation where the physical data are noisy. We employ the asymptotic theory of the
smoothing splines in the Sobolev spaces to obtain the rate of convergence of the KO
predictor in this case.
This article is organized as follows. In Section 2 we review the Bayesian method
proposed by Kennedy and O’Hagan (2001) for calibrating the model parameters and
predicting for new physical responses. In Section 3 we present our main results on
the asymptotic theory on the prediction performance of the KO method. Concluding
remarks and further discussions are made in Section 4. Some technical proofs are given
in Appendix A.
2  Review on the Kennedy-O’Hagan Method
In this section we review the Bayesian method proposed by Kennedy and O’Hagan
The formulation of this approach can be generalized to some extent. See, for
example, Higdon et al. (2004).
Denote the experimental region for the control variables as Ω. We suppose that Ω
is a convex and compact subset of Rd . Let {x1 , . . . , xn } ⊂ Ω be the set of design points
for the physical experiment. Denote the responses of the n physical experimental runs
by y1p , . . . , ynp respectively, with p standing for “physical”. Let Θ be the domain of the
calibration parameter. In this article, we suppose the computer model is deterministic,
i.e., the computer output is a deterministic function of the control variables and the
calibration parameters, denoted by y s (x, θ) for x ∈ Ω, θ ∈ Θ with s standing for
“simulation”.
We consider two types of computer models. The first is called “cheap computer
simulations”. In these problems each run of the computer code takes only a short time
so that we can call the computer simulation code inside our statistical analysis program
which is usually based on an iterative algorithm like the Markov Chain Monte Carlo
(MCMC). The second is called “expensive computer simulations”. In these problems
each run of the computer code takes a long time so that it is unrealistic to embed
the computer simulation code into an iterative algorithm. A standard approach in
computer experiments is to run the computer code over a set of selected points, and
build a surrogate model based on the obtained computer outputs to approximate the
underlying true function. The surrogate model can be evaluated much faster. In the
statistical analysis, the response values from the surrogate model are used instead of
those from the original computer model.
2.1  The Case of Cheap Computer Simulations
We model the physical response y p in the following nonparametric manner
yi^p = ζ(xi) + ei,    (1)
where ζ(·) is an underlying function, referred to as the true process, and ei ’s are the
observation errors. We assume the ei ’s are independent and identically distributed normal
random variables with mean zero and unknown variance σ 2 . The computer output
function and the physical true process are linked by
ζ(·) = y^s(·, θ0) + δ(·),    (2)
where θ0 denotes the “true” calibration parameter (from a physical point of view),
and δ denotes an underlying discrepancy function between the physical process and
the computer model under the true calibration parameters. It is reasonable to believe
that in most computer experiment problems, the discrepancy function δ should be
nonzero and possibly highly nonlinear because the computer codes are usually built
under assumptions or simplifications that do not hold true in reality.
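A toy instance of the model (1)-(2) makes the roles of ζ, y^s and δ concrete; the linear computer model, the sinusoidal discrepancy and all numbers below are invented for illustration and are not from the paper:

```python
import math
import random

def y_sim(x, theta):
    """Toy computer model y^s(x, theta) (illustrative choice)."""
    return theta * x

def delta(x):
    """Toy discrepancy function (illustrative choice)."""
    return 0.3 * math.sin(2 * math.pi * x)

theta0 = 1.5  # "true" calibration parameter

def zeta(x):
    """True process, eq. (2): computer model at theta0 plus discrepancy."""
    return y_sim(x, theta0) + delta(x)

# physical observations, eq. (1): true process plus i.i.d. Gaussian noise
random.seed(0)
sigma = 0.05
xs = [i / 10 for i in range(11)]
yp = [zeta(x) + random.gauss(0.0, sigma) for x in xs]

# by construction, delta is exactly the model-minus-physics gap at theta0
for x in xs:
    assert abs((zeta(x) - y_sim(x, theta0)) - delta(x)) < 1e-12
```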
To estimate θ0 and δ, we follow a standard Bayesian procedure by imposing certain
prior distributions on the unknown parameters θ0 and σ 2 and the unknown function
δ(·). In the computer experiment literature, a prominent method is to use a Gaussian
process as the prior for an unknown function (Santner et al., 2003). There are two
major reasons for choosing Gaussian processes. First, the sample paths of a Gaussian
process are smooth if a smooth covariance function is chosen, which can be beneficial when the target function is smooth as well. Second, the computational burden
of the statistical inference and prediction for a Gaussian process model is relatively
low. Specifically, we use a Gaussian process with mean zero and covariance function
τ 2 Cγ (·, ·) as the prior of δ(·), where Cγ is a stationary kernel with hyper-parameter γ.
In view of the finite-dimensional distribution of a Gaussian process, given τ 2 and
γ, δ(x) = (δ(x1 ), . . . , δ(xn ))T follows the multivariate normal distribution N (0, τ 2 Σγ ),
where Σγ = (Cγ (xi , xj ))ij . In order to discuss the prediction problem later, we apply the data augmentation algorithm of Tanner and Wong (1987) and consider the
posterior distribution of (θ0 , δ(x), τ^2, σ^2, γ) given by
π(θ0 , δ(x), τ 2 , σ 2 , γ|yp )
∝ π(yp |θ0 , δ(x), τ 2 , σ 2 , γ)π(δ(x)|θ0 , τ 2 , σ 2 , γ)π(θ0 , τ 2 , σ 2 , γ)
∝ (σ^2)^{-n/2} exp{ −‖yp − y^s(x, θ0) − δ(x)‖^2 / (2σ^2) }
  × (τ^2)^{-n/2} (det Σγ)^{-1/2} exp{ −δ(x)^T Σγ^{-1} δ(x) / (2τ^2) } π(θ0, τ^2, σ^2, γ),    (3)
where yp = (y1^p, . . . , yn^p)^T, y^s(x, θ0) = (y^s(x1, θ0), . . . , y^s(xn, θ0))^T. It is not time-consuming to evaluate the posterior density function π(·, ·, ·, ·, ·|yp) because the computer code is cheap to run. A standard MCMC procedure can then be employed to
draw samples from the posterior distribution. We refer to Higdon et al. (2004) for
further details.
In this work, we pay special attention to the prediction for a new physical response
at an untried point xnew , denoted as y p (xnew ). Samples from the posterior predictive
distribution of y p (xnew ) can be drawn along with the MCMC sampling. To see this,
we note that in view of the Gaussian process assumption, given δ(x) and γ, δ(xnew )
follows the normal distribution
N( Σ1^T Σγ^{-1} δ(x),  τ^2 ( Cγ(xnew, xnew) − Σ1^T Σγ^{-1} Σ1 ) ),
where Σ1 = (Cγ (x1 , xnew ), . . . , Cγ (xn , xnew ))T . Because in each iteration of the MCMC
procedure a sample of (δ(x), θ0 , γ, σ 2 ) is drawn, we can draw a sample of y p (xnew ) from
its posterior distribution π(y^p(xnew) | yp, δ(x), θ0, τ^2, γ, σ^2), which is the
normal distribution
N( y^s(xnew, θ0) + Σ1^T Σγ^{-1} δ(x),  τ^2 ( Cγ(xnew, xnew) − Σ1^T Σγ^{-1} Σ1 ) + σ^2 ).    (4)
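The conditional mean and variance appearing in this predictive law can be assembled with a few lines of linear algebra. A sketch with two design points, so the kernel solve has a closed 2×2 form (the exponential kernel, the draw of δ(x) and all numbers are illustrative, not from the paper):

```python
import math

def C(a, b, gamma=1.0):
    """Illustrative stationary kernel (exponential)."""
    return math.exp(-gamma * abs(a - b))

x1, x2, xnew = 0.0, 1.0, 0.25
d = [0.2, -0.1]          # a posterior draw of (delta(x1), delta(x2))
tau2, sigma2 = 1.0, 0.01

# Sigma_gamma and its inverse (2x2, closed form)
a, b = C(x1, x1), C(x1, x2)
det = a * a - b * b
inv = [[a / det, -b / det], [-b / det, a / det]]

s1 = [C(x1, xnew), C(x2, xnew)]
w = [inv[i][0] * s1[0] + inv[i][1] * s1[1] for i in range(2)]  # Sigma^{-1} Sigma_1
mean_delta = w[0] * d[0] + w[1] * d[1]       # Sigma_1^T Sigma^{-1} delta(x)
var = tau2 * (C(xnew, xnew) - (s1[0] * w[0] + s1[1] * w[1])) + sigma2
assert var >= sigma2 - 1e-12                 # the Schur complement is nonnegative

# at a design point the conditional mean reproduces the drawn value
s1 = [C(x1, x1), C(x2, x1)]
w = [inv[i][0] * s1[0] + inv[i][1] * s1[1] for i in range(2)]
assert abs(w[0] * d[0] + w[1] * d[1] - d[0]) < 1e-9
```

The last check illustrates why, away from the noise term σ^2, the predictive variance vanishes at the design points: conditioning on δ(x) interpolates the drawn discrepancy values.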
2.2  The Case of Expensive Computer Simulations
When the computer code is expensive to run, it is intractable to run MCMC based
on (3) directly. Instead, we need a surrogate model to approximate the computer
output function y s (·, ·). In this setting Kennedy and O’Hagan (2001) use the Gaussian
process modeling again. Suppose we first run the computer simulation over a set of
design points {(xs1 , θ1s ), . . . , (xsl , θls )} ⊂ Ω × Θ. We choose a Gaussian process with
mean mβ (·) and covariance function τ ′2 Cγs′ (·, ·) as the prior for y s , where β, τ ′ and
γ ′ are hyper-parameters. Besides, the prior processes of y s and δ are assumed to be
independent.
The Bayesian analysis for the present model is similar to that in Section 2.1 but
with more cumbersome derivations. We write ys := (y s (xs1 , θ1s ), . . . , y s (xsl , θls ))T and
define (n + l)-dimensional vectors
x^E = (x^E_1, . . . , x^E_{n+l})^T := (x1, . . . , xn, x^s_1, . . . , x^s_l)^T,
θ^E = (θ^E_1, . . . , θ^E_{n+l})^T := (θ0, . . . , θ0, θ^s_1, . . . , θ^s_l)^T.
By (1) and (2), the joint distribution of yp and ys conditional on θ0, σ^2, γ, β, τ^2, τ′^2 and γ′ is

(yp, ys) | θ0, σ^2, γ, β, τ^2, τ′^2, γ′ ∼ N( mβ(x^E),  ΣE + [[ Σ11 + σ^2 In, 0 ], [ 0, 0 ]] ),

where the second term is a 2×2 block matrix, mβ(x^E) = (mβ(x^E_1), . . . , mβ(x^E_{n+l}))^T and

ΣE = ( τ′^2 C^s_{γ′}(x^E_i, θ^E_i, x^E_j, θ^E_j) )_{ij},    Σ11 = ( τ^2 Cγ(xi, xj) )_{ij}.
Then the posterior distribution of the parameters is given by
π(θ0 , σ 2 , γ, β, τ 2 , τ ′2 , γ ′ |yp , ys ) ∝ π(yp , ys |θ0 , σ 2 , γ, β, τ 2 , τ ′2 , γ ′ )π(θ0 , σ 2 , γ, β, τ 2 , τ ′2 , γ ′ ).
The parameter estimation proceeds in a similar manner to the MCMC scheme discussed
in Section 2.1. As before, the prediction for the true process can be done along with
the MCMC iterations. Noting the fact that (y p (xnew ), yp , ys ) follows a multivariate
normal distribution given the model parameters, the posterior predictive distribution
of y p (xnew ) can be obtained using the Bayes’ theorem.
It can be seen that the modeling and analysis for the KO method with expensive
computer code is much more complicated than that with cheap computer code. For
the ease of mathematical analysis, our theoretical studies in the next section consider
only the cases with cheap code. Hence, we omit the detailed formulae of the posterior
density of the model parameters and the posterior predictive distribution of y p (xnew )
in this section.
3  Theoretical Studies
In this section we conduct a theoretical study of the predictive performance of the
KO method. For the ease of the mathematical treatment, we only consider the case of
cheap computer code, because the formulae for the case of expensive computer code
are much more complicated and cumbersome as shown in Section 2.2. We believe that
this simplification does not affect our general conclusion.
The mathematical treatment to develop the asymptotic theory for the KO method
also depends on the choice of the correlation family Cγ . In the present work, we restrict ourselves to the Matérn family of kernel functions (Stein, 1999), defined as
Cυ,γ(s, t) = [1 / (Γ(υ) 2^{υ−1})] (2√υ γ ‖s − t‖)^υ Kυ(2√υ γ ‖s − t‖),    (5)
where Kυ is the modified Bessel function of the second kind. In the Matérn family, the parameter υ controls the smoothness of the process and γ is a scale parameter.
Because the smoothness parameter υ has an effect on the rate of convergence of the
prediction, for simplicity we suppose υ is fixed in the entire data analysis.
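For half-integer υ the Bessel function Kυ has a closed form. In particular, substituting K_{1/2}(x) = √(π/(2x)) e^{−x} and Γ(1/2) = √π into (5) shows that υ = 1/2 yields the exponential kernel Cυ,γ(s, t) = exp(−√2 γ‖s − t‖). A pure-Python numeric check of this reduction (no Bessel-function library needed):

```python
import math

def K_half(x):
    """Modified Bessel function of the second kind, order 1/2 (closed form)."""
    return math.sqrt(math.pi / (2 * x)) * math.exp(-x)

def matern(r, nu, gamma, K):
    """Matern kernel (5) as a function of the distance r = ||s - t||."""
    a = 2 * math.sqrt(nu) * gamma * r
    return (a ** nu) * K(a) / (math.gamma(nu) * 2 ** (nu - 1))

gamma = 0.7
for r in [0.1, 0.5, 1.0, 3.0]:
    assert abs(matern(r, 0.5, gamma, K_half)
               - math.exp(-math.sqrt(2) * gamma * r)) < 1e-12
```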
All proofs in this section are postponed to Appendix A.
3.1  A Function Approximation Perspective
In this section we follow the theoretical framework of Tuo and Wu (2016) to study
the prediction performance of the KO method. Under this framework, the physical
responses are assumed to have no random error, i.e., the ei ’s in (1) are zero. This is an
unrealistic assumption in practice. But this assumption simplifies the model structure,
so that we are able to find some mathematical tools which help us to understand certain
intrinsic properties of the KO method.
From (1), we have yi^p = ζ(xi). Recall that ζ is indeed a deterministic function
(as the expectation of the physical response). Therefore, we will regard the Gaussian
process modeling technique used in the KO method as a way of reconstructing the
function ζ based on samples ζ(xi ).
An immediate consequence of the deterministic assumption is δ(x) = yp − y s (x, θ0 ),
i.e., δ(x) is determined by θ0 given the observations. Thus (3) is not applicable. Instead,
we have
π(θ0 , τ 2 , γ|yp ) ∝ π(yp |θ0 , τ 2 , γ)π(θ0 , τ 2 , γ)
∝ (det Σγ)^{-1/2} exp{ −(1/2) (yp − y^s(x, θ0))^T Σγ^{-1} (yp − y^s(x, θ0)) } π(θ0, τ^2, γ).
To differentiate between the true process ζ and its estimate based on the observations, we denote a draw from the predictive distribution π(ζ(xnew )) by ζ rep (xnew ).
Then the posterior predictive distribution π(ζ rep (xnew )|θ0 , γ, yp ) is
N( y^s(xnew, θ0) + Σ1^T Σγ^{-1} (yp − y^s(x, θ0)),  τ^2 ( Cυ,γ(xnew, xnew) − Σ1^T Σγ^{-1} Σ1 ) ).    (6)
We now suppose the prior distribution π(θ0 , τ 2 , γ) is separable, i.e., π(θ0 , τ 2 , γ) =
π(θ0 )π(τ 2 )π(γ). Let Sθ , Sτ 2 and Sγ denote the supports of the distributions π(θ0 ), π(τ 2 )
and π(γ) respectively. For the ease of mathematical treatment, we further suppose that
Sθ is a compact subset of R, and Sτ 2 ⊂ [0, τ02 ], Sγ ⊂ [γ1 , γ2 ] for some 0 < τ02 < +∞, 0 <
γ1 < γ2 < +∞. The independence assumption of the prior distributions can be replaced
with a more general assumption, which would not affect the validity of our theoretical
analysis. However, the compact support assumption is technically unavoidable in the
current treatment. Because here we only focus on the posterior mode, the use of the
compact support assumption does not affect the practical applicability of the results.
The aim of this section is to study the asymptotic behavior of
μ̂θ,γ = y^s(xnew, θ) + Σ1^T Σγ^{-1} (yp − y^s(x, θ)),
ς̂^2_{τ^2,γ} = τ^2 ( Cυ,γ(xnew, xnew) − Σ1^T Σγ^{-1} Σ1 ),
as the design points become dense in Ω, for (θ, τ^2, γ) ∈ Sθ × Sτ^2 × Sγ . Clearly, the true
posterior mean of ζ rep (xnew ) given by (6) is
E[ζ rep (xnew )|yp ] = E[µ̂θ̂,γ̂ |yp ],
where (θ̂, γ̂) follows the posterior distribution π(θ0 , γ|yp ). Note that
|E[ζ^rep(xnew) | yp] − ζ(xnew)|
    = | E{ E[ζ^rep(xnew) − ζ(xnew) | yp, θ̂, γ̂] | yp } |
    ≤ sup_{θ∈Sθ, γ∈Sγ} |E[ζ^rep(xnew) − ζ(xnew) | yp, θ, γ]|
    = sup_{θ∈Sθ, γ∈Sγ} |μ̂θ,γ − ζ(xnew)|,
i.e., the bias of the posterior predictive mean can be bounded by the supremum of
|µ̂θ,γ − ζ(xnew )|. Similarly, we find
Var(ζ rep (xnew )|yp ) ≤
sup
τ 2 ∈S
τ 2 ,γ∈Sγ
ςˆτ22 ,γ .
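To make these quantities concrete, here is a minimal numerical sketch of μ̂θ,γ and ς̂²τ²,γ in one dimension. The Matérn correlation with smoothness 3/2 below (so υ ≥ 1 holds), its scale parametrization, the toy simulator y^s, and the physical data are all illustrative assumptions, not constructions from the paper.

```python
import numpy as np

def matern32(r, gamma=0.3):
    # Matern correlation with smoothness 3/2; the scale parametrization of
    # gamma here is illustrative and need not match the paper's C_{v,gamma}.
    z = np.sqrt(3.0) * np.abs(r) / gamma
    return (1.0 + z) * np.exp(-z)

def ko_noise_free(x, yp, ys, theta, xnew, gamma=0.3, tau2=1.0):
    """Noise-free KO predictor at xnew:
       mu  = ys(xnew, theta) + Sigma1^T Sigma_gamma^{-1} (yp - ys(x, theta)),
       var = tau2 * (C(xnew, xnew) - Sigma1^T Sigma_gamma^{-1} Sigma1)."""
    Sigma = matern32(x[:, None] - x[None, :], gamma)
    Sigma1 = matern32(x - xnew, gamma)
    mu = ys(xnew, theta) + Sigma1 @ np.linalg.solve(Sigma, yp - ys(x, theta))
    var = tau2 * (1.0 - Sigma1 @ np.linalg.solve(Sigma, Sigma1))
    return mu, var
```

At a design point the noise-free predictor reproduces the physical observation exactly and the variance term vanishes; this interpolation property is what the subsequent error analysis quantifies away from the design.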
In this section we will bound $\sup_{\theta \in S_\theta,\, \gamma \in S_\gamma} |\hat\mu_{\theta,\gamma} - \zeta(x_{new})|$ and $\sup_{\tau^2 \in S_{\tau^2},\, \gamma \in S_\gamma} \hat\varsigma^2_{\tau^2,\gamma}$. To this end, we resort to the theory of native spaces. We refer to Wendland (2005) for detailed discussions. For a symmetric and positive definite function Φ over Ω × Ω, consider the linear space
\[
F_\Phi(\Omega) := \Big\{ \sum_{i=1}^m \alpha_i \Phi(s_i, \cdot) : m \in \mathbb{N}_+,\ \alpha_i \in \mathbb{R} \Big\},
\]
equipped with the inner product
\[
\Big\langle \sum_{i=1}^m \alpha_i \Phi(s_i, \cdot),\ \sum_{j=1}^l \beta_j \Phi(t_j, \cdot) \Big\rangle = \sum_{i=1}^m \sum_{j=1}^l \alpha_i \beta_j \Phi(s_i, t_j). \tag{7}
\]
The completion of FΦ(Ω) with respect to this inner product is called the native space generated by Φ, denoted by NΦ(Ω). Denote the inner product and the norm of NΦ(Ω) by ⟨·, ·⟩_{NΦ(Ω)} and ‖·‖_{NΦ(Ω)} respectively.

Now we state the interpolation scheme in the native space. Let f ∈ NΦ(Ω) and x = {x1, . . . , xn} be a set of distinct points in Ω. Let y = (f(x1), . . . , f(xn))^T be the observed data. Define
\[
s_{f,x}(x) = \sum_{i=1}^n u_i \Phi(x_i, x), \tag{8}
\]
where u = (u1, . . . , un)^T is given by the linear system
\[
y = \Phi(x, x)\, u,
\]
with (Φ(x, x))_{ij} = Φ(x_i, x_j).
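As a sanity check, the interpolant (8) is a few lines of linear algebra; the Gaussian-type kernel below is only a convenient stand-in for a symmetric positive definite Φ.

```python
import numpy as np

def kernel_interpolant(Phi, x, y):
    """Return s_{f,x}(.) = sum_i u_i Phi(x_i, .), with u solving Phi(x, x) u = y."""
    G = Phi(x[:, None], x[None, :])          # Gram matrix (Phi(x, x))_ij
    u = np.linalg.solve(G, y)
    return lambda t: Phi(np.atleast_1d(t)[:, None], x[None, :]) @ u

# stand-in positive definite kernel (an assumption, not the paper's Matern kernel)
phi = lambda s, t: np.exp(-((s - t) / 0.3) ** 2)
```

By construction, s_{f,x} reproduces f on the design x.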
Clearly, s_{f,x} ∈ FΦ and thus s_{f,x} ∈ NΦ(Ω). The next lemma can be found in Wendland (2005). For the completeness of the present article, we provide its proof in Appendix A.
Lemma 1. For f ∈ NΦ(Ω) and a set of design points x ⊂ Ω,
\[
\langle s_{f,x},\ f - s_{f,x} \rangle_{N_\Phi(\Omega)} = 0.
\]
From Lemma 1 we can deduce the Pythagorean identity
\[
\|s_{f,x}\|^2_{N_\Phi(\Omega)} + \|f - s_{f,x}\|^2_{N_\Phi(\Omega)} = \|f\|^2_{N_\Phi(\Omega)}. \tag{9}
\]
Now we consider an arbitrary function h ∈ NΦ(Ω) which interpolates f over x, denoted as f|x = h|x. Then we have s_{f,x} = s_{h,x}, and thus (9) also holds true if we replace f with h. This suggests ‖s_{f,x}‖_{NΦ(Ω)} ≤ ‖h‖_{NΦ(Ω)}, which yields the following optimality condition:
\[
s_{f,x} = \mathop{\mathrm{argmin}}_{\substack{h \in N_\Phi(\Omega) \\ h|_x = f|_x}} \|h\|_{N_\Phi(\Omega)}, \tag{10}
\]
i.e., s_{f,x} has the minimum native norm among all functions in NΦ(Ω) that interpolate f over x.
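Lemma 1 and the identity (9) can be checked in finite dimensions: for f ∈ FΦ, the native norm of Σᵢ cᵢΦ(pᵢ, ·) equals c^T G c with G the Gram matrix, by (7). The kernel, points, and coefficients below are arbitrary illustrative choices.

```python
import numpy as np

phi = lambda s, t: np.exp(-((s - t) / 0.3) ** 2)   # stand-in positive definite kernel

def native_norm_sq(pts, coef):
    # ||sum_i c_i phi(p_i, .)||^2_{N_Phi} = c^T (phi(p_i, p_j))_{ij} c, by (7)
    G = phi(pts[:, None], pts[None, :])
    return coef @ G @ coef

pts = np.array([0.0, 0.25, 0.5, 0.75, 1.0])        # f lives on n + m = 5 points
alpha = np.array([1.0, -0.5, 2.0, 0.3, -1.2])
x = pts[:3]                                        # interpolate over the first n = 3

fx = phi(x[:, None], pts[None, :]) @ alpha         # f evaluated on the design x
u = np.linalg.solve(phi(x[:, None], x[None, :]), fx)

# coefficients of f - s_{f,x} in the combined basis
diff = np.concatenate([alpha[:3] - u, alpha[3:]])
```

The Pythagorean identity (9) and the optimality (10) can then be verified by comparing the three native norms directly.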
It can be shown that the native space generated by the Matérn kernel Cυ,γ for υ ≥ 1 coincides with the (fractional) Sobolev space H^{υ+d/2}(Ω) (Adams and Fournier, 2003), and the norms are equivalent. See Tuo and Wu (2016) for details. Moreover, we can also prove that the norms of the native spaces generated by Cυ,γ, for a set of γ values bounded away from 0 and +∞, are equivalent.

Lemma 2. Suppose υ ≥ 1. There exist constants c1, c2 > 1 such that
\[
c_1 \|f\|_{H^{\upsilon+d/2}(\Omega)} \le \|f\|_{N_{C_{\upsilon,\gamma}}(\Omega)} \le c_2 \|f\|_{H^{\upsilon+d/2}(\Omega)} \tag{11}
\]
holds for all f ∈ H^{υ+d/2}(Ω) and all γ ∈ [γ1, γ2].
Next, we turn to the error estimate of the interpolant s_{f,x}. Wendland (2005) shows that for u ∈ H^μ(Ω) with u|x = 0 and ⌊μ⌋ > d/2,
\[
\|u\|_{L^\infty(\Omega)} \le C\, h_{x,\Omega}^{\mu - d/2}\, \|u\|_{H^\mu(\Omega)},
\]
provided that x is “sufficiently dense”, where C is independent of x and u; h_{x,Ω} is the fill distance of the design x, defined as
\[
h_{x,\Omega} = \sup_{x \in \Omega}\ \min_{x_j \in x} \|x - x_j\|.
\]
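The fill distance is straightforward to approximate numerically by evaluating the inner minimum on a fine grid over Ω; the one-dimensional sketch below is illustrative.

```python
import numpy as np

def fill_distance(design, omega_grid):
    """Approximate h_{x,Omega} = sup_{x in Omega} min_{x_j in x} |x - x_j| on a grid."""
    dists = np.abs(omega_grid[:, None] - design[None, :])
    return dists.min(axis=1).max()
```

For the design {0, 0.5, 1} on Ω = [0, 1], the fill distance is 0.25, attained halfway between neighboring design points; adding points can only decrease it.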
Here “x is sufficiently dense” means that its fill distance h_{x,Ω} is less than a constant h0 depending only on Ω and μ. Noting the fact that (f − s_{f,x})|x = 0 and f − s_{f,x} ∈ H^{υ+d/2}(Ω), we obtain that for υ ≥ 1,
\[
\|f - s_{f,x}\|_{L^\infty(\Omega)} \le C\, h_{x,\Omega}^{\upsilon}\, \|f - s_{f,x}\|_{H^{\upsilon+d/2}(\Omega)},
\]
which, together with (9), yields
\[
\|f - s_{f,x}\|_{L^\infty(\Omega)} \le C\, h_{x,\Omega}^{\upsilon}\, \|f\|_{H^{\upsilon+d/2}(\Omega)}. \tag{12}
\]
Then we apply Lemma 2 to prove Lemma 3.

Lemma 3. Suppose υ ≥ 1. For f ∈ H^{υ+d/2}(Ω), let s_{f,x} be the interpolant of f over x with the kernel Cυ,γ, γ ∈ [γ1, γ2]. Then for sufficiently dense x,
\[
\|f - s_{f,x}\|_{L^\infty(\Omega)} \le C\, h_{x,\Omega}^{\upsilon}\, \|f\|_{N_{C_{\upsilon,\gamma}}(\Omega)},
\]
where C is independent of the choices of f, x and γ.
Following the notation of Tuo and Wu (2016), we define ǫ(x, θ) = ζ(x) − y^s(x, θ). It is commented by Tuo and Wu (2016) that in general θ0 is not estimable due to the identifiability problem, and thus neither is δ(·) = ǫ(·, θ0). However, as will be shown later, the function ǫ(·, ·) can be consistently estimated using KO calibration. Suppose ǫ(·, θ) ∈ H^{υ+d/2}(Ω) for each θ ∈ Sθ. Let ǫ(x, θ) = (ǫ(x1, θ), . . . , ǫ(xn, θ))^T. Clearly, y^p − y^s(x, θ) = ǫ(x, θ) and thus
\[
s_{\epsilon(\cdot,\theta),\,x}(x_{new}) = \Sigma_1^T \Sigma_\gamma^{-1}\big(y^p - y^s(x, \theta)\big).
\]
By (12) we obtain
\[
\begin{aligned}
|\hat\mu_{\theta,\gamma} - \zeta(x_{new})| &= \big|\epsilon(x_{new}, \theta) - s_{\epsilon(\cdot,\theta),\,x}(x_{new})\big| \\
&\le C\, h_{x,\Omega}^{\upsilon}\, \|\epsilon(\cdot, \theta)\|_{H^{\upsilon+d/2}(\Omega)} \\
&\le C\, h_{x,\Omega}^{\upsilon} \sup_{\theta \in S_\theta} \|\epsilon(\cdot, \theta)\|_{H^{\upsilon+d/2}(\Omega)}.
\end{aligned} \tag{13}
\]
The error bound for the variance term can be obtained similarly. Elementary calculations show that
\[
\Sigma_1^T \Sigma_\gamma^{-1} \Sigma_1 = s_{C_{\upsilon,\gamma}(\cdot,\,x_{new}),\,x}(x_{new}).
\]
Hence we apply Lemma 3 to find
\[
\begin{aligned}
\big|\tau^2\big(C_{\upsilon,\gamma}(x_{new}, x_{new}) - \Sigma_1^T \Sigma_\gamma^{-1}\Sigma_1\big)\big|
&= \tau^2 \big|C_{\upsilon,\gamma}(x_{new}, x_{new}) - s_{C_{\upsilon,\gamma}(\cdot,\,x_{new}),\,x}(x_{new})\big| \\
&\le \tau_0^2\, C\, h_{x,\Omega}^{\upsilon}\, \|C_{\upsilon,\gamma}(\cdot, x_{new})\|_{N_{C_{\upsilon,\gamma}}(\Omega)}
= \tau_0^2\, C\, h_{x,\Omega}^{\upsilon},
\end{aligned} \tag{14}
\]
where the last equality follows from the fact that ‖Cυ,γ(·, x_new)‖_{N_{Cυ,γ}(Ω)} = 1. We summarize our findings in (13) and (14) as Theorem 1.
Theorem 1. Suppose υ ≥ 1, γ ∈ [γ1, γ2], τ ≤ τ0. Then for a sufficiently dense design x, we have the upper bound for the predictive mean
\[
\sup_{\theta \in S_\theta,\, \gamma \in S_\gamma} |\hat\mu_{\theta,\gamma} - \zeta(x_{new})| \le C\, h_{x,\Omega}^{\upsilon} \sup_{\theta \in S_\theta} \|\epsilon(\cdot, \theta)\|_{H^{\upsilon+d/2}(\Omega)},
\]
and the upper bound for the predictive variance
\[
\sup_{\tau^2 \in S_{\tau^2},\, \gamma \in S_\gamma} \hat\varsigma^2_{\tau^2,\gamma} \le \tau_0^2\, C\, h_{x,\Omega}^{\upsilon},
\]
with a constant C depending only on Ω, υ, γ1, γ2.
From Theorem 1, the rate of convergence is O(h^υ_{x,Ω}), which is known to be optimal in the current setting (Wendland, 2005). It is worth noting that the predictive behavior of the KO calibration is more robust than in the case of estimation, as shown by Tuo and Wu (2016), Theorem 4.2. Specifically, they show that the KO calibration estimator tends to the minimizer of a norm involving the prior assumption, i.e., the KO calibration can rely heavily on the prior specification. By comparison, the predictive performance shown in Theorem 1 above does not depend on the choice of the prior asymptotically.
3.2 A Nonparametric Regression Perspective
Now we turn to a more realistic case, where the physical observations have random measurement errors. As before, we treat the true process ζ(·) as a deterministic function. For ease of mathematical treatment, in this section we fix the value of γ. Our analysis will show, in Theorems 3 and 4, that the resulting rate of convergence is not influenced by the choice of γ. Other parameters are either estimated or chosen to vary along with the sample size n.
To study the predictive behavior of the KO method asymptotically, the key is
to understand the posterior mode of δ(x) in (3). We first introduce the representer
theorem (Schölkopf et al., 2001; Wahba, 1990). We also give a proof of the representer
theorem using Lemma 1 in Appendix A.
Lemma 4 (Representer Theorem). Let x1, . . . , xn be a set of distinct points in Ω and L : R^n → R be an arbitrary function. Denote the minimizer of the optimization problem
\[
\min_{f \in N_\Phi(\Omega)} L\big(f(x_1), f(x_2), \ldots, f(x_n)\big) + \|f\|^2_{N_\Phi(\Omega)}
\]
by f̂. Then f̂ possesses the representation
\[
\hat f = \sum_{i=1}^n \alpha_i \Phi(x_i, \cdot),
\]
with coefficients αi ∈ R, i = 1, . . . , n.
Similar to Section 3.1, we first fix the values of τ², σ², γ in their domains. Then we consider the profile posterior density function of δ(x), which, according to (3), is proportional to
\[
\pi_{\tau^2,\sigma^2,\gamma}(\theta, \delta(x)) = \exp\left\{ -\frac{1}{2\sigma^2}\big\|y^p - y^s(x, \theta) - \delta(x)\big\|^2 - \frac{\delta(x)^T \Sigma_\gamma^{-1} \delta(x)}{2\tau^2} \right\}. \tag{15}
\]
The profile posterior mode (θ̂_KO, δ̂(x)) maximizes π_{τ²,σ²,γ}(·, ·). Using the representer theorem, we show an equality between δ̂(x) and the solution to a penalized least squares problem.
Theorem 2. Let (θ̂, Δ̂) be the solution to
\[
\mathop{\mathrm{argmin}}_{\substack{\theta \in \Theta \\ f \in N_{C_{\upsilon,\gamma}}(\Omega)}}
\sum_{i=1}^n \big(y_i^p - y^s(x_i, \theta) - f(x_i)\big)^2 + \frac{\sigma^2}{\tau^2}\|f\|^2_{N_{C_{\upsilon,\gamma}}(\Omega)}. \tag{16}
\]
Then θ̂ = θ̂_KO and (Δ̂(x_1), . . . , Δ̂(x_n))^T =: Δ̂(x) = δ̂(x).
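Concretely, for fixed θ the inner minimization in (16) is a kernel ridge regression, whose minimizer evaluated at the design points has the standard closed form Δ(x) = Σγ(Σγ + rn I)^{−1}(y^p − y^s(x, θ)) with rn = σ²/τ²; this closed form is a standard consequence of the reduction in the proof of Theorem 2, not a display from the paper. A minimal sketch that profiles θ over a grid, with all concrete choices (kernel, simulator, grid) being illustrative assumptions:

```python
import numpy as np

def fit_ko_smoothing(x, yp, ys, thetas, kernel, rn):
    """Profile (16) over a grid of theta values; for each theta the inner
    minimizer over f satisfies Delta(x) = Sigma (Sigma + rn I)^{-1} residual."""
    Sigma = kernel(x[:, None], x[None, :])
    A = Sigma @ np.linalg.inv(Sigma + rn * np.eye(len(x)))
    best = None
    for th in thetas:
        r = yp - ys(x, th)
        delta = A @ r
        # objective of (16): residual sum of squares + rn * delta^T Sigma^{-1} delta
        obj = np.sum((r - delta) ** 2) + rn * delta @ np.linalg.solve(Sigma, delta)
        if best is None or obj < best[0]:
            best = (obj, th, delta)
    return best[1], best[2]
```

As rn grows the penalty forces Δ toward zero; as rn shrinks, Δ approaches the interpolant of the residuals, mirroring the bias–variance discussion below.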
Now we are ready to state the main asymptotic theory. We will first investigate the
asymptotic properties of the predictive mean. Next we will consider the consistency of
the predictive variance.
From (4), the predictive mean of the KO model is
\[
\hat\zeta(x_{new}) = y^s(x_{new}, \hat\theta_{KO}) + \Sigma_1^T(x_{new})\, \Sigma_\gamma^{-1}\, \hat\delta(x), \tag{17}
\]
where Σ1(x_new) = (Cυ,γ(x_new, x1), . . . , Cυ,γ(x_new, xn))^T. Invoking Theorem 2, we have δ̂(x) = Δ̂(x) with Δ̂ defined in (16). Using the formula of the kernel interpolant given by (8), it can be seen that ζ̂(·) − y^s(·, θ̂_KO) is the kernel interpolant of the data (x, Δ̂(x)). Hence, from Lemma 4 and Theorem 2 we have
\[
\hat\zeta(\cdot) - y^s(\cdot, \hat\theta_{KO}) = \hat\Delta(\cdot).
\]
We note that the ratio of the variances σ²/τ² plays an important role in (16). In the nonparametric regression literature, such a quantity is commonly referred to as the smoothing parameter, a tuning parameter that balances the bias and variance of the estimator. It can be seen that as σ²/τ² → ∞, ǫ̂ tends to 0, which has the smallest variance but a large bias; as σ²/τ² ↓ 0, ǫ̂ will eventually interpolate (xi, yᵢ^p − y^s(xi, θ̂_KO)), which typically leads to an over-fitting problem. We denote σ²/τ² by rn when the sample size is n. According to Theorem 3, the optimal rate for rn is $r_n \sim n^{\frac{4\upsilon+5d}{4\upsilon+4d}}$. There is a theory by van der Vaart and van Zanten (2008) which says that the optimal tuning rate can be achieved automatically by following a standard Bayesian analysis procedure. To save space, we do not pursue this approach here.
Some asymptotic theory for the penalized least squares problem (16) is available
in the literature (van der Geer, 2000). In order to employ such a theory, we need to
choose the smoothing parameter rn to diverge at an appropriate rate as n goes to
infinity. For convenience, we suppose that the design points are randomly chosen.
We consider the rate of convergence of the penalized least squares estimator under
the L2 metric. We assume that y s is Lipschitz continuous. Then the metric entropy
of {y s (·, θ) : θ ∈ Θ} is dominated by that of the unit ball of the nonparametric class
NCυ,γ (Ω). See van der Vaart and Wellner (1996) for details. Theorem 3 then is a direct
consequence of Theorem 10.2 of van der Geer (2000), where the required upper bound
for the metric entropy is obtained from (3.6) of Tuo and Wu (2015).
Theorem 3. Suppose the design points {xi} are independent samples from the uniform distribution over Ω. We assume υ ≥ 1 and y^s is Lipschitz continuous. Choose rn appropriately so that $r_n \sim n^{\frac{4\upsilon+5d}{4\upsilon+4d}}$. Under model (1) with σ² > 0, the KO predictor ζ̂ defined in (17) has the approximation properties
\[
\frac{1}{n}\sum_{i=1}^n \big(\hat\zeta(x_i) - \zeta(x_i)\big)^2 = O_p\big(n^{-\frac{2\upsilon+d}{2\upsilon+2d}}\big), \tag{18}
\]
\[
\|\hat\zeta - \zeta\|_{L^2(\Omega)} = O_p\big(n^{-\frac{\upsilon+d/2}{2\upsilon+2d}}\big), \tag{19}
\]
and
\[
\|\hat\zeta - \zeta\|_{H^{\upsilon+d/2}(\Omega)} = O_p(1). \tag{20}
\]
Since the native space N_{Cυ,γ}(Ω) is equivalent to the Sobolev space H^{υ+d/2}(Ω), the rate of convergence in (19) is optimal according to the theory of Stone (1982). The interpolation inequality of Sobolev spaces (see (12) in Chapter 5 of Adams and Fournier (2003)) claims that
\[
\|\hat\zeta - \zeta\|_{L^\infty(\Omega)} \le K\, \|\hat\zeta - \zeta\|_{H^{\upsilon+d/2}(\Omega)}^{\frac{d}{2(\upsilon+d/2)}}\, \|\hat\zeta - \zeta\|_{L^2(\Omega)}^{1 - \frac{d}{2(\upsilon+d/2)}},
\]
with a constant K depending only on Ω and υ. In view of (19) and (20), we have
\[
\|\hat\zeta - \zeta\|_{L^\infty(\Omega)} = O_p\big(n^{-\frac{\upsilon}{2\upsilon+2d}}\big), \tag{21}
\]
which gives the rate of convergence of the predictive mean under the uniform metric.
Now we turn to the consistency of the predictive variance. To avoid ambiguity, we denote the true value of σ² by σ0². First we consider the profile posterior mode of σ² in (3). For simplicity, we only consider the non-informative prior for σ² with π(σ²) ∝ 1. However, we note that the limiting value of the posterior mode of σ² is not affected by the choice of π(σ²), provided that σ0² is contained in the support of π(σ²). It is easily seen from (3) that the posterior mode of σ² is
\[
\begin{aligned}
\hat\sigma^2 &= \frac{\big\|y^p - y^s(x, \hat\theta_{KO}) - \hat\delta(x)\big\|^2}{n}
= \frac{1}{n}\sum_{i=1}^n \big\{ e_i + \big(\hat\zeta(x_i) - \zeta(x_i)\big) \big\}^2 \\
&= \frac{1}{n}\sum_{i=1}^n e_i^2 + \frac{2}{n}\sum_{i=1}^n e_i\big(\hat\zeta(x_i) - \zeta(x_i)\big) + \frac{1}{n}\sum_{i=1}^n \big(\hat\zeta(x_i) - \zeta(x_i)\big)^2,
\end{aligned}
\]
which yields the following asymptotic property:
\[
\begin{aligned}
|\hat\sigma^2 - \sigma_0^2|
&= \Big|\hat\sigma^2 - \frac{1}{n}\sum_{i=1}^n e_i^2\Big| + O_p(n^{-1/2}) \\
&= \Big|\frac{2}{n}\sum_{i=1}^n e_i\big(\hat\zeta(x_i) - \zeta(x_i)\big) + \frac{1}{n}\sum_{i=1}^n \big(\hat\zeta(x_i) - \zeta(x_i)\big)^2\Big| + O_p(n^{-1/2}) \\
&\le 2\Big(\frac{1}{n}\sum_{i=1}^n e_i^2\Big)^{1/2}\Big(\frac{1}{n}\sum_{i=1}^n \big(\hat\zeta(x_i) - \zeta(x_i)\big)^2\Big)^{1/2}
+ \frac{1}{n}\sum_{i=1}^n \big(\hat\zeta(x_i) - \zeta(x_i)\big)^2 + O_p(n^{-1/2}) \\
&= O_p\big(n^{-\frac{\upsilon+d/2}{2\upsilon+2d}}\big),
\end{aligned} \tag{22}
\]
where the inequality follows from the Cauchy–Schwarz inequality and the last equality follows from (18).
From (4), the predictive variance of the KO model is
\[
\hat\varsigma^2(x_{new}) = \tau^2\big(C_\gamma(x_{new}, x_{new}) - \Sigma_1^T \Sigma_\gamma^{-1} \Sigma_1\big) + \hat\sigma^2. \tag{23}
\]
As discussed in Section 3.1, Cγ(x_new, x_new) − Σ1^T Σγ^{−1} Σ1 is the approximation error of the kernel interpolation for the function Cγ(·, x_new). Clearly, the error from the interpolation problem discussed in Section 3.1 should be no more than that from the smoothing problem discussed in the current section, because of the presence of the random error in the latter situation. Thus we have
\[
\begin{aligned}
\sup_{x_{new} \in \Omega} \big|\tau^2\big(C_\gamma(x_{new}, x_{new}) - \Sigma_1^T \Sigma_\gamma^{-1}\Sigma_1\big)\big|
&= \sup_{x_{new} \in \Omega} \big| r_n^{-1}\, \hat\sigma^2 \big(C_\gamma(x_{new}, x_{new}) - \Sigma_1^T \Sigma_\gamma^{-1}\Sigma_1\big)\big| \\
&= O_p\big(n^{-\frac{d}{4\upsilon+4d}}\, n^{-\frac{\upsilon}{2\upsilon+2d}}\big) = O_p\big(n^{-\frac{\upsilon+d/2}{2\upsilon+2d}}\big),
\end{aligned} \tag{24}
\]
where the second equality follows from the rate assumed for rn in Theorem 3, (23) and (21). Combining (23) and (24), we obtain Theorem 4.
Theorem 4. Under the conditions of Theorem 3, we have the error bound for the predictive variance under the uniform metric
\[
\|\hat\varsigma^2(\cdot) - \sigma_0^2\|_{L^\infty(\Omega)} = O_p\big(n^{-\frac{\upsilon+d/2}{2\upsilon+2d}}\big).
\]
Note that σ0² is the variance of the random noise, which is present in the prediction of a new physical response. In other words, no predictor has a mean squared error less than σ0². Theorems 3 and 4 reveal that the predictive distribution given by the KO method can capture the true uncertainty of the physical data in the asymptotic sense.
4 Discussions
In this work, we prove some error bounds for the predictive error of the Kennedy-O'Hagan method in two cases: 1) the physical observations have no random error, and 2) the physical observations are noisy. For ease of mathematical analysis, we only consider the Matérn correlation family. If a different covariance structure is used, we believe that the consistency of the predictive mean and the predictive variance still holds; however, additional study is required to obtain the appropriate rate of convergence. In our entire analysis, we ignore the estimation of some model parameters such as γ and τ². One may consider the error estimate in a fully Bayesian procedure, but the analysis will then become rather complicated and it is unclear whether a new theory can be developed along the same lines.
Throughout this work, we assume that the smoothness parameter υ is given. From Theorems 1 and 3, a better rate of convergence can be obtained by using a greater υ, provided that the target function still lies in N_{Cυ,γ}(Ω). Ideally, one should therefore choose υ to be close to, but no more than, the true degree of smoothness of the target function. There are different ways of choosing a data-dependent υ, but the mathematical analysis then becomes much more involved. We refer to Loh et al. (2015) and the references therein for some related discussions.
In Section 3.2, we assume that the design points xi are random samples over Ω. In practice, one may also wish to choose design points using a systematic (deterministic) scheme. In general, if a sequence of fixed designs is used, the same (optimal) rate of convergence is retained, provided that these designs satisfy certain space-filling conditions. We refer to Utreras (1988) for the results and the necessary mathematical tools.
Finally, we discuss how the calibration procedure can affect the prediction of the true process. In this article, we allow the number of physical measurements to grow to infinity and obtain the rate of convergence. By comparing the results presented here with the standard ones using radial basis function or smoothing spline approximation, we find that the rate of convergence is not improved by doing calibration. We can, however, use the following heuristics to show that KO calibration improves the predictive error by a constant factor. To see this, we review the proof of Theorem 1, from which it can be seen that if we fix Φ and γ, the predictive error is bounded by
\[
|\hat\mu_{\theta,\gamma} - \zeta(x_{new})| \le C\, h_{x,\Omega}^{\upsilon}\, \|\epsilon(\cdot, \theta)\|_{H^{\upsilon+d/2}(\Omega)} \tag{25}
\]
for an arbitrarily chosen θ ∈ Θ. So the rate of convergence is given by O(h^υ_{x,Ω}), and ‖ǫ(·, θ)‖_{H^{υ+d/2}(Ω)} acts as a constant factor. Tuo and Wu (2016) show that under certain conditions, the KO estimator of the calibration parameter converges to
\[
\theta' = \mathop{\mathrm{argmin}}_{\theta \in \Theta} \|\epsilon(\cdot, \theta)\|_{N_\Phi(\Omega)}
\]
as the design points become dense over Ω. Since ‖·‖_{NΦ(Ω)} is equivalent to ‖·‖_{H^{υ+d/2}(Ω)}, estimating the calibration parameter via the KO method is apparently beneficial for prediction, in the sense that the upper error bound is reduced because ‖ǫ(·, θ′)‖_{NΦ(Ω)} ≤ ‖ǫ(·, θ)‖_{NΦ(Ω)} for all θ ∈ Θ. There is a similar phenomenon in the stochastic case, by the arguments in the proof of Theorem 10.2 of van der Geer (2000).
Appendix

A Technical Proofs

In this section, we present the proofs of Lemma 1, Lemma 2, Lemma 4 and Theorem 2.
Proof of Lemma 1. We first assume f ∈ FΦ. If f = s_{f,x}, there is nothing to prove. If f ≠ s_{f,x}, without loss of generality, we write
\[
f(x) = \sum_{i=1}^{n+m} \alpha_i \Phi(x, x_i),
\]
for an extra set of distinct points {x_{n+1}, . . . , x_{n+m}} ⊂ Ω. Now partition (A_{i,j}) = Φ(x_i, x_j), 1 ≤ i, j ≤ n + m, into
\[
A = \begin{pmatrix} (A_1)_{n \times n} & (A_2)_{n \times m} \\ (A_3)_{m \times n} & (A_4)_{m \times m} \end{pmatrix},
\]
where A_3 = A_2^T because Φ is symmetric.
Let y = (f(x_1), . . . , f(x_n))^T, a_1 = (α_1, . . . , α_n)^T, a_2 = (α_{n+1}, . . . , α_{n+m})^T. Clearly, y = A_1 a_1 + A_2 a_2. By the definition of s_{f,x}, we have
\[
s_{f,x}(x) = \sum_{i=1}^n u_i \Phi(x, x_i),
\]
with u = (u_1, . . . , u_n)^T satisfying y = A_1 u. Then from (7) we obtain
\[
\begin{aligned}
\langle s_{f,x},\ f - s_{f,x} \rangle_{N_\Phi(\Omega)}
&= \Big\langle \sum_{i=1}^n u_i \Phi(x, x_i),\ \sum_{i=1}^n (\alpha_i - u_i)\Phi(x, x_i) + \sum_{i=n+1}^{n+m} \alpha_i \Phi(x, x_i) \Big\rangle_{N_\Phi(\Omega)} \\
&= \begin{pmatrix} u^T & 0 \end{pmatrix} \begin{pmatrix} A_1 & A_2 \\ A_3 & A_4 \end{pmatrix} \begin{pmatrix} a_1 - u \\ a_2 \end{pmatrix} \\
&= u^T (A_1 a_1 + A_2 a_2 - A_1 u) \\
&= u^T (y - y) = 0.
\end{aligned} \tag{26}
\]
For a general f ∈ NΦ(Ω), we can find a sequence fn ∈ FΦ with fn → f in NΦ(Ω) as n → ∞. The desired result then follows from a limiting form of (26).
Proof of Lemma 2. For any g ∈ L²(R^d) ∩ C(R^d), its native norm admits the representation
\[
\|g\|^2_{N_\Phi(\mathbb{R}^d)} = (2\pi)^{-d/2} \int_{\mathbb{R}^d} \frac{|\tilde g(\omega)|^2}{\tilde\Phi(\omega)}\, d\omega, \tag{27}
\]
where g̃ and Φ̃ denote the Fourier transforms of g and Φ respectively. See Theorem 10.12 of Wendland (2005). The (fractional) Sobolev norms have a similar representation
\[
\|g\|^2_{H^s(\mathbb{R}^d)} = (2\pi)^{-d/2} \int_{\mathbb{R}^d} |\tilde g(\omega)|^2 (1 + \|\omega\|^2)^s\, d\omega. \tag{28}
\]
See Adams and Fournier (2003) for details. Tuo and Wu (2016) show that
\[
\tilde C_{\upsilon,\gamma}(\omega) = 2^{d/2} (4\upsilon\gamma^2)^{\upsilon}\, \frac{\Gamma(\upsilon + d/2)}{\Gamma(\upsilon)}\, \big(4\upsilon\gamma^2 + \|\omega\|^2\big)^{-(\upsilon+d/2)}.
\]
Using the inequality
\[
(1+b)\min(1, a) \le a + b \le (1+b)\max(1, a)
\]
for a, b ≥ 0, we obtain
\[
\begin{aligned}
\tilde C_{\upsilon,\gamma}(\omega)
&\le 2^{d/2} (4\upsilon\gamma^2)^{\upsilon}\, \frac{\Gamma(\upsilon+d/2)}{\Gamma(\upsilon)}\, \max\big\{1,\ (4\upsilon\gamma^2)^{-(\upsilon+d/2)}\big\}\, (1+\|\omega\|^2)^{-(\upsilon+d/2)} \\
&\le 2^{d/2}\, \frac{\Gamma(\upsilon+d/2)}{\Gamma(\upsilon)}\, \max\big\{(4\upsilon\gamma_2^2)^{\upsilon},\ (4\upsilon\gamma_1^2)^{-d/2}\big\}\, (1+\|\omega\|^2)^{-(\upsilon+d/2)}
=: C_1 (1+\|\omega\|^2)^{-(\upsilon+d/2)},
\end{aligned} \tag{29}
\]
and
\[
\begin{aligned}
\tilde C_{\upsilon,\gamma}(\omega)
&\ge 2^{d/2} (4\upsilon\gamma^2)^{\upsilon}\, \frac{\Gamma(\upsilon+d/2)}{\Gamma(\upsilon)}\, \min\big\{1,\ (4\upsilon\gamma^2)^{-(\upsilon+d/2)}\big\}\, (1+\|\omega\|^2)^{-(\upsilon+d/2)} \\
&\ge 2^{d/2}\, \frac{\Gamma(\upsilon+d/2)}{\Gamma(\upsilon)}\, \min\big\{(4\upsilon\gamma_1^2)^{\upsilon},\ (4\upsilon\gamma_2^2)^{-d/2}\big\}\, (1+\|\omega\|^2)^{-(\upsilon+d/2)}
=: C_2 (1+\|\omega\|^2)^{-(\upsilon+d/2)},
\end{aligned} \tag{30}
\]
hold for all ω ∈ R^d.
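The envelopes (29)-(30) can be verified numerically. The sketch below checks them for the illustrative choices d = 1, υ = 2 and γ ∈ [γ1, γ2] = [0.5, 2] over a grid of frequencies.

```python
import math
import numpy as np

d, v, g1, g2 = 1, 2.0, 0.5, 2.0
K = 2 ** (d / 2) * math.gamma(v + d / 2) / math.gamma(v)

def spectral_density(w, g):
    # tilde C_{v,gamma}(w) = 2^{d/2} (4 v g^2)^v Gamma(v + d/2) / Gamma(v)
    #                        * (4 v g^2 + w^2)^{-(v + d/2)}
    c = 4.0 * v * g ** 2
    return K * c ** v * (c + w ** 2) ** (-(v + d / 2))

# the constants defined in (29) and (30)
C1 = K * max((4.0 * v * g2 ** 2) ** v, (4.0 * v * g1 ** 2) ** (-d / 2))
C2 = K * min((4.0 * v * g1 ** 2) ** v, (4.0 * v * g2 ** 2) ** (-d / 2))
```

Checking the two envelopes on a frequency grid for several γ values confirms that the spectral density is sandwiched between C₂(1+ω²)^{−(υ+d/2)} and C₁(1+ω²)^{−(υ+d/2)}.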
Now we apply the extension theorem of the native spaces (Theorem 10.46 of Wendland, 2005) to obtain a function f^E ∈ N_{Cυ,γ}(R^d) such that f^E|_Ω = f and ‖f‖_{N_{Cυ,γ}(Ω)} = ‖f^E‖_{N_{Cυ,γ}(R^d)} for each γ ∈ [γ1, γ2]. We use (27)-(29) to obtain
\[
\begin{aligned}
\|f\|^2_{N_{C_{\upsilon,\gamma}}(\Omega)} = \|f^E\|^2_{N_{C_{\upsilon,\gamma}}(\mathbb{R}^d)}
&= (2\pi)^{-d/2} \int_{\mathbb{R}^d} \frac{|\tilde f^E(\omega)|^2}{\tilde C_{\upsilon,\gamma}(\omega)}\, d\omega \\
&\ge C_1^{-1} (2\pi)^{-d/2} \int_{\mathbb{R}^d} |\tilde f^E(\omega)|^2 (1+\|\omega\|^2)^{\upsilon+d/2}\, d\omega \\
&= C_1^{-1} \|f^E\|^2_{H^{\upsilon+d/2}(\mathbb{R}^d)} \ge C_1^{-1} \|f\|^2_{H^{\upsilon+d/2}(\Omega)},
\end{aligned} \tag{31}
\]
where the last inequality follows from the fact that f^E|_Ω = f. On the other hand,
because Ω is convex, f has an extension f_E ∈ H^{υ+d/2}(R^d) satisfying ‖f_E‖_{H^{υ+d/2}(R^d)} ≤ c‖f‖_{H^{υ+d/2}(Ω)} for some constant c independent of f. Then we use (27), (28) and (30) to obtain
\[
\begin{aligned}
\|f\|^2_{H^{\upsilon+d/2}(\Omega)} \ge c^{-2}\, \|f_E\|^2_{H^{\upsilon+d/2}(\mathbb{R}^d)}
&= c^{-2} (2\pi)^{-d/2} \int_{\mathbb{R}^d} |\tilde f_E(\omega)|^2 (1+\|\omega\|^2)^{\upsilon+d/2}\, d\omega \\
&\ge c^{-2} C_2 (2\pi)^{-d/2} \int_{\mathbb{R}^d} \frac{|\tilde f_E(\omega)|^2}{\tilde C_{\upsilon,\gamma}(\omega)}\, d\omega \\
&= c^{-2} C_2\, \|f_E\|^2_{N_{C_{\upsilon,\gamma}}(\mathbb{R}^d)} \ge c^{-2} C_2\, \|f\|^2_{N_{C_{\upsilon,\gamma}}(\Omega)},
\end{aligned} \tag{32}
\]
where the last inequality follows from the restriction theorem of the native space, which states that the restriction f = f_E|_Ω is contained in N_{Cυ,γ}(Ω) with a norm less than or equal to ‖f_E‖_{N_{Cυ,γ}(R^d)}. See Theorem 10.47 of Wendland (2005). The desired result is proved by combining (31) and (32).
Proof of Lemma 4. For f ∈ NΦ(Ω), define
\[
M(f) = L\big(f(x_1), \ldots, f(x_n)\big) + \|f\|^2_{N_\Phi(\Omega)}.
\]
Now consider s_{f̂,X}, i.e., the interpolant of f̂ over X = {x_1, . . . , x_n} using the kernel function Φ. Because f̂(x_i) = s_{f̂,X}(x_i) for i = 1, . . . , n, we have
\[
L\big(\hat f(x_1), \ldots, \hat f(x_n)\big) = L\big(s_{\hat f,X}(x_1), \ldots, s_{\hat f,X}(x_n)\big).
\]
In addition, it is easily seen from Lemma 1, (9) and (10) that
\[
\|s_{\hat f,X}\|^2_{N_\Phi(\Omega)} \le \|\hat f\|^2_{N_\Phi(\Omega)}, \tag{33}
\]
with equality if and only if s_{f̂,X} = f̂. Combining the last two displays, we obtain
\[
M(s_{\hat f,X}) \le M(\hat f). \tag{34}
\]
Because f̂ minimizes M(f), the reverse of (34) also holds. Hence we deduce s_{f̂,X} = f̂, which proves the theorem according to the definition of the interpolant.
Proof of Theorem 2. We first rewrite the minimization problem (16) in the following iterated form:
\[
\begin{aligned}
&\min_{\substack{\theta \in \Theta \\ f \in N_{C_{\upsilon,\gamma}}(\Omega)}} \sum_{i=1}^n \big(y_i^p - y^s(x_i, \theta) - f(x_i)\big)^2 + \frac{\sigma^2}{\tau^2}\|f\|^2_{N_{C_{\upsilon,\gamma}}(\Omega)} \\
&\qquad = \min_{\theta \in \Theta}\ \min_{f \in N_{C_{\upsilon,\gamma}}(\Omega)} \sum_{i=1}^n \big(y_i^p - y^s(x_i, \theta) - f(x_i)\big)^2 + \frac{\sigma^2}{\tau^2}\|f\|^2_{N_{C_{\upsilon,\gamma}}(\Omega)}.
\end{aligned} \tag{35}
\]
Now we apply Lemma 4 to the inner minimization problem in (35) and obtain the following representation for Δ̂:
\[
\hat\Delta = \sum_{i=1}^n \alpha_i C_{\upsilon,\gamma}(x_i, \cdot),
\]
with an undetermined vector of coefficients α = (α_1, . . . , α_n)^T. Using the definition Σγ = (Cυ,γ(x_i, x_j))_{ij}, we clearly have the matrix representation
\[
\hat\Delta(x) = \Sigma_\gamma\, \alpha. \tag{36}
\]
Now using (7) we have
\[
\|\hat\Delta\|^2_{N_{C_{\upsilon,\gamma}}(\Omega)} = \Big\langle \sum_{i=1}^n \alpha_i C_{\upsilon,\gamma}(x_i, \cdot),\ \sum_{i=1}^n \alpha_i C_{\upsilon,\gamma}(x_i, \cdot) \Big\rangle_{N_{C_{\upsilon,\gamma}}(\Omega)} = \alpha^T \Sigma_\gamma\, \alpha.
\]
The minimization problem (16) then reduces to
\[
\mathop{\mathrm{argmin}}_{\substack{\theta \in \Theta \\ \alpha \in \mathbb{R}^n}} \big\|y^p - y^s(x, \theta) - \Sigma_\gamma \alpha\big\|^2 + \frac{\sigma^2}{\tau^2}\, \alpha^T \Sigma_\gamma\, \alpha.
\]
Applying a change-of-variable argument using (36), we obtain the following optimization formula:
\[
\mathop{\mathrm{argmin}}_{\substack{\theta \in \Theta \\ \Delta(x) \in \mathbb{R}^n}} \big\|y^p - y^s(x, \theta) - \Delta(x)\big\|^2 + \frac{\sigma^2}{\tau^2}\, \Delta(x)^T \Sigma_\gamma^{-1}\, \Delta(x).
\]
Elementary calculations show its equivalence to the definition of (θ̂_KO, δ̂(x)).
Acknowledgements

Tuo's work is supported by the National Center for Mathematics and Interdisciplinary Sciences in CAS and NSFC grants 11501551, 11271355 and 11671386. Wu's work is supported by NSF grant DMS 1564438. The authors are grateful to the Associate Editor and the referees for very helpful comments.
References
Adams, R. A. and J. J. Fournier (2003). Sobolev Spaces, Volume 140. Access Online
via Elsevier.
Higdon, D., M. Kennedy, J. Cavendish, J. Cafeo, and R. Ryne (2004). Combining
field data and computer simulations for calibration and prediction. SIAM Journal
of Scientific Computing 26, 448–466.
Kennedy, M. and A. O’Hagan (2001). Bayesian calibration of computer models. Journal
of the Royal Statistical Society: Series B 63 (3), 425–464.
Loh, W.-L. et al. (2015). Estimating the smoothness of a Gaussian random field from irregularly spaced data via higher-order quadratic variations. The Annals of Statistics 43 (6), 2766–2794.
Santner, T., B. Williams, and W. Notz (2003). The Design and Analysis of Computer
Experiments. Springer Verlag.
Schölkopf, B., R. Herbrich, and A. J. Smola (2001). A generalized representer theorem.
In Computational Learning Theory, pp. 416–426. Springer.
Stein, M. (1999). Interpolation of Spatial Data: Some Theory for Kriging. Springer
Verlag.
Stone, C. J. (1982). Optimal global rates of convergence for nonparametric regression.
The Annals of Statistics 10 (4), 1040–1053.
Tanner, M. A. and W. H. Wong (1987). The calculation of posterior distributions
by data augmentation. Journal of the American Statistical Association 82 (398),
528–540.
Tuo, R. and C. F. J. Wu (2015). Efficient calibration for imperfect computer models.
The Annals of Statistics 43, 2331–2352.
Tuo, R. and C. F. J. Wu (2016). A theoretical framework for calibration in computer models: parametrization, estimation and convergence properties. SIAM/ASA
Journal on Uncertainty Quantification 4 (1), 767–795.
Utreras, F. I. (1988). Convergence rates for multivariate smoothing spline functions.
Journal of Approximation Theory 52 (1), 1–27.
van der Geer, S. A. (2000). Empirical Processes in M-estimation, Volume 6. Cambridge University Press.
van der Vaart, A. W. and J. H. van Zanten (2008). Rates of contraction of posterior distributions based on Gaussian process priors. The Annals of Statistics 36 (3), 1435–1463.
van der Vaart, A. W. and J. A. Wellner (1996). Weak Convergence and Empirical
Processes: With Applications to Statistics. Springer.
Wahba, G. (1990). Spline Models for Observational Data, Volume 59. Society for
Industrial Mathematics.
Wendland, H. (2005). Scattered Data Approximation. Cambridge University Press.
Wu, C. F. J. and M. S. Hamada (2011). Experiments: Planning, Analysis, and Optimization, Volume 552. John Wiley & Sons.
New Frameworks for Offline and
Streaming Coreset Constructions
arXiv:1612.00889v1 [] 2 Dec 2016
Vladimir Braverman
∗
Dan Feldman†
Harry Lang‡
Abstract
Let P be a set (called points), Q be a set (called queries), and f : P × Q → [0, ∞) a function (called cost). For an error parameter ε > 0, a set S ⊆ P with a weight function w : P → [0, ∞) is an ε-coreset if $\sum_{s \in S} w(s) f(s, q)$ approximates $\sum_{p \in P} f(p, q)$ up to a multiplicative factor of 1 ± ε for every given query q ∈ Q. Coresets are used to solve fundamental problems in machine learning of streaming and distributed data.
We construct coresets for the k-means clustering of n input points, both in an
arbitrary metric space and d-dimensional Euclidean space. For Euclidean space, we
present the first coreset whose size is simultaneously independent of both d and n. In
particular, this is the first coreset of size o(n) for a stream of n sparse points in a
d ≥ n dimensional space (e.g. adjacency matrices of graphs). We also provide the first
generalizations of such coresets for handling outliers. For arbitrary metric spaces, we
improve the dependence on k to k log k and present a matching lower bound.
For M-estimator clustering (special cases include the well-known k-median and k-means clustering), we introduce a new technique for converting an offline coreset construction to the streaming setting. Our method yields streaming coreset algorithms requiring the storage of O(S + k log n) points, where S is the size of the offline coreset. In comparison, the previous state-of-the-art was the merge-and-reduce technique that required O(S log^{2a+1} n) points, where a is the exponent in the offline construction's dependence on ε^{-1}. For example, combining our offline and streaming results, we produce a streaming metric k-means coreset algorithm using O(ε^{-2} k log k log n) points of storage. The previous state-of-the-art required O(ε^{-4} k log k log^6 n) points.
∗
Department of Computer Science, Johns Hopkins University. This material is based upon work supported
in part by the National Science Foundation under Grant No. 1447639, by the Google Faculty Award and
by DARPA grant N660001-1-2-4014. Its contents are solely the responsibility of the authors and do not
represent the official view of DARPA or the Department of Defense.
†
Department of Computer Science, University of Haifa
‡
Department of Mathematics, Johns Hopkins University. This material is based upon work supported by
the Franco-American Fulbright Commission.
1
Introduction
In the algorithmic field of computer science, we usually have an optimization problem at hand and a state-of-the-art or a straightforward exhaustive search algorithm that solves it. The challenge is then to suggest a new algorithm with a better running time, storage or other feature. A different and less traditional approach is to use data reduction, which is a compression of the input data in some sense, and to run the (possibly inefficient) existing algorithm on the compressed data. In this case, solving the problem at hand reduces to computing a problem-dependent compression such that:

1. An existing algorithm that solves the optimization problem on the original (complete) data will yield a good approximate solution for the original data when applied to the compressed data.

2. The time and space needed for constructing the compression and running the optimization algorithm on the coreset will be better than simply solving the problem on the complete data.
There are many approaches for obtaining such a provable data reduction for different
problems and from different fields, such as using uniform sampling, random projections (i.e.,
the Johnson-Lindenstrauss Lemma), compressed sensing, sketches or PCA.
In this paper we focus on a specific type of reduced data set, called a coreset (or core-set), which originated in computational geometry but is now applied in other fields such as computer vision and machine learning. Our paper is organized into two basic parts: results for maintaining coresets over data streams (Section 3) and results for offline coresets (Sections 4-6). We briefly introduce both of these topics in the remainder of this section. Many of our results, along with comparisons to prior works, are summarized in Table 1 in Section 2.

In Appendix A (Section 7) we summarize the merge-and-reduce technique that is used in previous approaches [Che09a, HPM04, HPK07, AMR+12, FL11]. In Appendix B (Section 8) we provide an alternative framework that generalizes our main result (Theorem 3.1), applying to a wide array of constructions although giving a weaker bound.
1.1 Streaming Results
In the streaming model of computation, the input arrives sequentially. This differs from the
standard model where the algorithm is given free access to the entire input. Given a memory
that is linear in the size of the input, these models are evidently equivalent; therefore the
goal of a streaming algorithm is to perform the computation using a sublinear amount of
memory.
Our stream consists of n elements p1 , . . . , pn . In the streaming model (or more specifically
the insertion-only streaming model, since points that arrive will never be deleted), we attempt
to compute our solution using o(n) memory. Sometimes the algorithm will be allowed to
pass over the stream multiple times, resulting in another parameter called the number of
passes. All of our algorithms use polylog(n) memory and require only a single pass.
1
Prior to the current work, the merge-and-reduce technique due to Har-Peled and Mazumdar [HPM04] and Bentley and Saxe [BS80] was used to maintain a coreset on an insertion-only stream. For a summary of this technique, see Section 7 in the Appendix. In this paper we introduce an alternative technique that reduces the multiplicative overhead from log^{2a+1} n to log n (here, a is the offline construction's dependence on 1/ε). While our method is not as general as merge-and-reduce (it requires that the function in question satisfies more than just the "merge" and "reduce" properties, defined in Section 7), it is general enough to apply to all M-estimators. For the special case of our offline coreset construction for M-estimators (introduced in Section 6), we use a more tailored method that turns this into a log n additive overhead. Therefore our streaming space complexity matches our offline space complexity, both of which improve upon the state of the art.
The offline coreset construction of [FL11] has the following structure: first, a bicriterion approximation is computed. Second, points are sampled according to a distribution that depends only on the distances between points of the input and their assigned bicriterion centers. This suggests a two-pass streaming algorithm (which we later combine into a single pass): in the first pass, construct a bicriterion using an algorithm such as [BMO+11]. In the second pass, sample according to the bicriterion found in the first pass. This provides a two-pass algorithm for a coreset using O(ε^{-2} k log k log n) space. Our contribution is showing how these two passes can be combined into a single pass. Using the algorithm of [BMO+11] to output O(k log n) centers at any time, we show that this is sufficient to carry out the sampling (originally in the second pass) in parallel without re-reading the stream. Our main lemma (Lemma 3.7) shows that the bicriterion, rather than just providing "central" points to concentrate the sampling, can actually be thought of as a proof of the importance of points for the coreset (technically, a bound on the "sensitivity" that we define at the beginning of Section 3). Moreover, the importance of points is non-increasing as the stream progresses, so we can maintain a sample in the streaming setting without using any additional space.
1.2
Offline Results
The name coreset was suggested by Agarwal, Har-Peled, and Varadarajan in [AHPV04] as a
small subset S of points for a given input set P , such that any shape from a given family that
covers S will also cover P , after expanding the shape by a factor of (1 + ε). In particular,
the smallest shape that covers S will be a good approximation for the smallest shape that
covers P . For approximating different cost functions, e.g. the sum of distances to a given
shape, we expect that the total weight of the sample will be similar to the number n of input
points. Hence, in their seminal work [HPM04], Har-Peled and Mazumdar used multiplicative
weights for each point in S, such that the weighted sum of distances from S to a given shape
from the family will approximate its sum of distances from the original data. In [HPM04]
each shape in the family was actually a set of k points, and the application was the classic
k-means problem.
In this paper, we are given an input set P (called points), a family (set) Q of items
called queries, and a cost function f : P × Q → [0, ∞). A coreset is then
a subset S of P that is associated with a non-negative weight function u : S → [0, ∞) such
that, for every given query q ∈ Q, the sum of original costs Σ_{p∈P} f(p, q) is approximated by
the weighted sum Σ_{p∈S} u(p)f(p, q) of costs in S up to a multiplicative factor, i.e.,

(1 − ε) Σ_{p∈P} f(p, q) ≤ Σ_{p∈S} u(p)f(p, q) ≤ (1 + ε) Σ_{p∈P} f(p, q).
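As a quick sanity check of this definition, a uniform sample S with weights u(p) = |P|/|S| already satisfies the inequality with high probability for any fixed query (though not for every query simultaneously, which is what makes coreset constructions non-trivial). A minimal sketch, with an illustrative 1-center squared-distance cost:

```python
import random

def cost(points, weights, center):
    # Weighted k-means cost for a single center (k = 1).
    return sum(w * (p - center) ** 2 for p, w in zip(points, weights))

random.seed(0)
P = [random.gauss(0, 1) for _ in range(10000)]

# Uniform-sample "coreset": each of the |S| samples carries weight |P|/|S|.
S = random.sample(P, 500)
u = [len(P) / len(S)] * len(S)

eps = 0.2
for q in (-1.0, 0.0, 2.0):  # a few fixed 1-center queries
    full = cost(P, [1.0] * len(P), q)
    approx = cost(S, u, q)
    assert (1 - eps) * full <= approx <= (1 + eps) * full
```

The point of the coreset machinery below is to replace uniform sampling with importance sampling so that the guarantee holds for all queries at once, with a much smaller S.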
While our framework is general, we demonstrate it on the k-means problem and its variants.
There are at least three reasons for this: (i) it is a fundamental problem in both computer
science and machine learning, (ii) it is probably the most common clustering technique
used in practice, (iii) many other clustering and non-clustering problems can be reduced
to k-means, e.g. mixture of Gaussians, Bregman clustering, or DP-means [FFK11, LBK15,
BLK15]. In this context we suggest offline coreset constructions for k-clustering queries that
can be constructed in a streaming fashion using our streaming approach, and are:
• of size linear in k (for d > log k) in an arbitrary metric space of dimension d. Current
coresets that are subsets of the input have size at least cubic in k [LS10]. We achieve this
by reducing the total sensitivity to O(1) without introducing negative weights that might
be conditioned on the queries as in [FL11].
• of size independent of d for the Euclidean case of k-means (squared distances). This
is particularly useful for sparse input sets of points where d ≥ n, such as adjacency
matrices of graphs, document-term matrices, or image-object matrices. The recent coreset for sparse
k-means of [BF15] is of size exponential in 1/ε and thus turns into O(n) when used within
the merge-and-reduce tree (where ε is replaced by O(ε/log n)). The result of [FSS13]
for k-means is exponential in k/ε and fails with constant probability, so it also cannot
be used with streaming. Another result of [FSS13] suggests a coreset-type set for k-means of size O(k/ε), but it is based on projections that lose the sparsity of the
data. A similar sparsity loss occurs with other projection-type compression methods, e.g.
in [CEM+ 15]. Nevertheless, we use the technique in [FSS13] to bound the dimension
of the k-means problem by O(k/ε).
• of size independent of d for the Euclidean case with non-squared distances, using weak
coresets. These coresets can be used to approximate the optimal solution, but not
every set of k centers. Unlike the weak coresets in [FL11, FMS07], we can run any
existing heuristic on these coresets, as explained in Section 6.1.
• robust to outliers. This is because the general pseudo-metric definition we use (inspired
by [FS12]) supports M-estimators, which are a tool for handling outliers [Tyl87]. Unlike
in [FS12], our coresets are linear (and not exponential) in k, and also independent of n
(and not logarithmic in n).
2
Related Work
The following table summarizes previous work along with our current results. By far,
the most widely-studied problems in this class have been the k-median and k-means functions. In general, the extension to arbitrary M-estimators is non-trivial; the first such result
was [FS12]. Our approach naturally lends itself to this extension. M-estimators are highly
important for noisy data or data with outliers. As one example, Huber's estimator is widely
used in the statistics community [HHR11, Hub81]. It was written that "this estimator is so
satisfactory that it has been recommended for almost all situations" [Zha11]. Our results
hold not only for Huber's estimator but for all M-estimators, such as the Cauchy and Tukey
biweight functions, which are also widely used.
Note that in the table below, Õ notation is used with respect to d, ε, k, and log n
(thereby hiding factors of log log n but not of log n).
Problem                  Offline Size                          Streaming Size                               Paper
Euclidean k-means        O(k ε^{-d} log n)                     O(k ε^{-d} log^{d+2} n)                      [HPM04]
Euclidean k-means        O(k^3 ε^{-(d+1)})                     O(k^3 ε^{-(d+1)} log^{d+2} n)                [HPK07]
Euclidean k-means        O(d k^2 ε^{-2} log n)                 O(d k^2 ε^{-2} log^8 n)                      [Che09a]
Euclidean k-means        O(d k log k ε^{-4})                   O(d k log k ε^{-4} log^5 n)                  [FL11]
Euclidean k-means        Õ((d/ε)^{O(d)} k log n)               Õ((d/ε)^{O(d)} k log^{O(d)} n)               [AMR+ 12]
Euclidean k-means        O(ε^{-2} k log k min(k/ε, d))         O(ε^{-2} k log k min(k/ε, d) + k log n)      **
Metric k-means           O(ε^{-2} k^2 log^2 n)                 O(ε^{-2} k^2 log^8 n)                        [Che09b]
Metric k-means           O(ε^{-4} k log k log n)               O(ε^{-4} k log k log^6 n)                    [FL11]
Metric k-means           O(ε^{-2} k log k log n)               O(ε^{-2} k log k log n)                      **
Euclidean k-median       Õ(d k^2 ε^{-2} log n)                 O(d k^2 ε^{-2} log^8 n)                      [Che09a]
Euclidean k-median       O(k ε^{-d} log n)                     O(k ε^{-d} log^{d+2} n)                      [HPM04]
Euclidean k-median       O(k^2 ε^{-d})                         O(k^2 ε^{-d} log^{d+1} n)                    [HPK07]
Euclidean k-median       O(d ε^{-2} k log k)                   O(d ε^{-2} k log k log^3 n)                  [FL11]
Euclidean k-median       O(d ε^{-2} k log k)                   O(d ε^{-2} k log k + k log n)                **
Metric k-median          O(k^2 ε^{-2} log n)                   O(k^2 ε^{-2} log^8 n)                        [Che09a]
Metric k-median          O(ε^{-2} k log k log n)               O(ε^{-2} k log k log^4 n)                    [FL11]
Metric k-median          O(ε^{-2} k log k log n)               O(ε^{-2} k log k log n)                      **
Euclidean M-estimator    O(ε^{-2} k^{O(k)} d^2 log n)          O(ε^{-2} k^{O(k)} d^2 log^5 n)               [FS12]
Euclidean M-estimator    O(d ε^{-2} k log k)                   O(d ε^{-2} k log k + k log n)                **
Metric M-estimator       O(ε^{-2} k^{O(k)} log^4 n)            O(ε^{-2} k^{O(k)} log^7 n)                   [FS12]
Metric M-estimator       O(ε^{-2} k log k log n)               O(ε^{-2} k log k log n)                      **

Table 1: Summary of Related Work (rows marked ** are the results of this paper)
Framework. A generic framework for coreset construction was suggested in [FL11]. The
main technique is a reduction from coreset to ε-approximations, that can be computed using
non-uniform sampling. The distribution of the sampling is based on the importance of each
point (in some well defined sense), and the size of the coreset depends on the sum of these
importance levels. This term of importance appeared in the literature as leverage score (in
the context of low-rank approximation, see [PKB14] and references therein), or, for the case
of k-clustering, sensitivity [LS10]. The proofs of many previous coreset constructions were
significantly simplified by using this framework and, maybe more importantly, the sizes of
these coresets were sometimes significantly reduced; see [FL11] for references. Many of these
coreset sizes can be further improved by our improved framework, as explained below.
The size of the coreset in [FL11] depends quadratically on the sum of sensitivities, called
the total sensitivity [LS10]. In this paper, we reduce the size of the coresets constructed
by this framework to be only near-linear (t log t) in the total sensitivity t. In addition, we
generalize and significantly simplify the notation and results from this framework.
k-means.
In the k-means problem we wish to compute a set of k centers (points) in
some metric space, such that the sum of squared distances to the input points is minimized,
where each input point is assigned to its nearest center. The corresponding coreset is a
positively weighted subset of points that approximates this cost for every given set of k
centers. Deterministic coresets of size exponential in d were first suggested by Har-Peled and Mazumdar in [HPM04]. The first coreset construction of size polynomial in d was
suggested by Chen in [Che09a] using several sets of uniform samples.
The state-of-the-art is the result of Langberg and Schulman [LS10], who suggested a
coreset of size O(d^2 k^3 /ε^2) for k-means in the Euclidean case based on non-uniform sampling.
The distribution over the input points is similar to the distribution in [FMS07]; however,
in [FMS07] the goal was to obtain weaker coresets that can be used to find the
optimal solution, but are of size independent of d.
A coreset for k-means of size near-linear in k was suggested in [FL11]. However, unlike the definition of this paper, the multiplicative weights of some of the points in
this coreset were (i) negative, and (ii) dependent on the query, i.e., instead of a weight w(p) > 0
for an input point p, as in this paper, the weight is w(p, C) ∈ R where C is the set of queries.
While exhaustive search was suggested to compute a PTAS for the coreset, it is not clear how
to run existing algorithms or heuristics (what is actually done in practice) on such a
coreset. On the contrary, generalizing an existing approximation algorithm for the k-means
problem to handle positive weights is easy, and public implementations are not hard to
find (e.g. in Matlab and Python).
k-means for handling outliers.
Coresets for k-means and its variants that handle
outliers via M-estimators were suggested in [FS12], which also inspired our paper. The size
of these coresets is exponential in k and also depends on log n. For comparison, we suggest
a similar coreset of size near-linear in k, and independent of n. A PTAS for handling exactly
m outliers was suggested in [Che08], but with no coreset or streaming version.
Streaming.
The metric results of [Che09a, FL11] and Euclidean results of [Che09a,
HPM04, HPK07, FL11] that rely on merge-and-reduce have already been mentioned. A
summary of these results appears in the tables below. For the specific case of Euclidean
space, a more diverse set of stronger results is known. In particular, coreset constructions
are known that do not begin with a bicriterion solution, and whose streaming variant does
not rely on merge-and-reduce [AMR+ 12]. With the additional assumption in Euclidean
space that the points lie on a discrete grid {1, . . . , ∆}d , alternative techniques are known for
k-means and other problems, even when the stream allows the deletion of points [FS05].
3
Streaming Algorithm
We present a streaming algorithm for constructing a coreset for metric k-means clustering
that requires the storage of O(ε^{-2} k log k log n) points. The previous state-of-the-art [FL11] required the storage of O(ε^{-4} k log k log^6 n) points. In this section we assume the correctness
of our offline algorithm, which is proven in Section 6.
More generally, our technique works for M -estimators, a general class of clustering objectives that includes the well-known k-median and k-means functions as special cases. Other
special cases include the Cauchy functions, the Tukey functions, and the Lp norms. Our
method combines a streaming bicriterion algorithm [BMO+ 11] and a batch coreset construction [FL11] to create a streaming coreset algorithm. The space requirements are combined
additively, therefore ensuring no overhead.
The streaming algorithm of [BMO+ 11] provides a bicriterion solution using O(k log n)
space. Our new offline construction of Section 6 requires O(ε^{-2} k log k log n) space. Therefore
our main theorem yields a streaming algorithm that combines these spaces additively, therefore requiring O(ε^{-2} k log k log n) space while maintaining a coreset for k-means clustering.
The previous state-of-the-art framework that works for the metric variant (other methods
are known to improve upon this for the special case of Euclidean space) was the merge-and-reduce technique [BS80], which yields a streaming algorithm requiring O(ε^{-4} k log k log^6 n)
space, incurring an overhead of Θ(log^5 n) over the offline coreset size. In comparison, our
framework incurs no overhead. The additional improvement in our space is due to the improved
offline construction given in Sections 5-7.
We now state our main theorem. The result is stated in full generality: the k-median
clustering in a ρ-metric space (see Definition 6.1). Note that metric k-means clustering
corresponds to setting ρ = 2. Also, the probability of success 1 − δ typically has one of two
meanings: that the construction succeeds at the end of the stream (a weaker result), or that
the construction succeeds at every intermediate point of the stream (a stronger result). Our
theorem gives the stronger result, maintaining a valid coreset at every point of the stream.
Theorem 3.1 (Main Theorem). There exists an insertion-only streaming algorithm that
maintains a (k, ε)-coreset for k-median clustering in a ρ-metric space, requires the storage
of O(ρ^2 ε^{-2} k log(ρk) log(n) log(1/δ)) points, has poly(k, log n, ρ, 1/ε, log(1/δ)) worst-case update
time, and succeeds at every point of the stream with probability 1 − δ.
Our method can be applied to the coreset constructions of [HPM04, HPK07, Che09a,
FL11] with a multiplicative overhead of O(log n). Our second theorem is a more generally
applicable technique; it applies to all constructions that first compute a bicriterion solution
and then sample points according to the bicriterion solution. The constructions of [HPM04,
HPK07, Che09a, FL11] follow this outline, and we are unaware of any constructions that do
not. The theorem yields immediate corollaries as well as reducing certain streaming coreset
problems to that of constructing an offline coreset.
Theorem 3.2. Given an offline algorithm that constructs a (k, ε)-coreset consisting of S =
S(n, k, ε, δ) points with probability 1 − δ by sampling points based on a bicriterion solution,
there exists a streaming algorithm requiring the storage of O(S log n) points that maintains
a (k, ε)-coreset on an insertion-only stream.
Proof Sketch. The known offline coreset constructions start with a bicriterion solution of
O(k) points. We modify the algorithm of [BMO+ 11] to output O(k log n) centers; this is
trivial since the final step of the algorithm of [BMO+ 11] is to take the O(k log n) centers
stored in memory and reduce them to exactly k centers to provide a solution. Our first
modification to the original algorithm is thus to simply remove this final step, but we must
also keep a data structure storing log(1/ε) intermediate states of these O(k log n) centers.
See Section 8 for a precise description of our modification and of the sampling method, applied
to the construction of [FL11] as an example (but equally applicable to [HPM04, HPK07,
Che09a]). The high-level idea is that, since the bicriterion given to the offline construction
consists of O(k log n) centers instead of exactly k, the number of additional points taken
in the coreset increases by a factor of O(log n).
Two important corollaries include:
1. Using the result of [FL11], we obtain a streaming algorithm that maintains a (k, ε)-coreset with negative weights for metric k-median requiring the storage of O(ε^{-2} k log n)
points.
2. Given an O(k · poly(1/ε, log n, log(1/δ)))-point (k, ε)-coreset, we would obtain a streaming
algorithm that maintains a (k, ε)-coreset (with only positive weights) for metric k-median requiring the storage of O(k · poly(1/ε, log n, log(1/δ))) points. This differs from
Theorem 3.1 in that the dependence on k is linear instead of O(k log k).
3.1
Definitions
We begin by defining a ρ-metric space, given in full as Definition 6.1. Briefly, let
X be a set. If D : X × X → [0, ∞) is a symmetric function such that for every x, z ∈ X we
have D(x, z) ≤ ρ(D(x, y) + D(y, z)) for every y ∈ X, then we call (X, D) a ρ-metric
space. Note that this is a weakening of the triangle inequality, and at ρ = 1 we recover the
definition of a metric space. All M-estimators can be re-cast for a certain constant value of
ρ, and k-means is obtained with ρ = 2. This generality is therefore useful, and working in
this language allows us to naturally generalize our results to any M-estimator.
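For instance, squared Euclidean distance (the k-means cost) is not a metric, but it is a 2-metric: by (a + b)^2 ≤ 2a^2 + 2b^2 we get ‖x − z‖^2 ≤ 2(‖x − y‖^2 + ‖y − z‖^2). A small numerical check, assuming nothing beyond this definition:

```python
import itertools
import random

def d_sq(x, z):
    # Squared Euclidean distance: a rho-metric with rho = 2, not a metric.
    return sum((a - b) ** 2 for a, b in zip(x, z))

random.seed(1)
pts = [tuple(random.uniform(-1, 1) for _ in range(3)) for _ in range(20)]

rho = 2.0
for x, y, z in itertools.permutations(pts, 3):
    # Relaxed triangle inequality: D(x, z) <= rho * (D(x, y) + D(y, z)).
    assert d_sq(x, z) <= rho * (d_sq(x, y) + d_sq(y, z)) + 1e-12
```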
The k-median problem is, given an input set P and an integer k ≥ 1, to find a set C of
k points that minimizes

Σ_{p∈P} min_{c∈C} D(p, c).
We use OPTk (P ) to denote this minimal value. As this is NP-Hard to compute, we settle
for an approximation. The notion of a bicriterion approximation is well-known; we state a
definition that suits our needs while also fitting into the definition of previous works.
Definition 3.3 ((α, β)-approximation). An (α, β)-approximation for the k-median clustering
of a multiset P is a map π : P → B for some set B such that Σ_{p∈P} w(p)D(p, π(p)) ≤
α OPT_k(P) and |B| ≤ βk.
We now define a coreset:
Definition 3.4 ((k, ε)-coreset). A (k, ε)-coreset of a multiset P is a weighted set (S, v) with
non-negative weight function v such that for every Z ∈ X^k we have

(1 − ε) Σ_{p∈P} D(p, Z) ≤ Σ_{s∈S} v(s)D(s, Z) ≤ (1 + ε) Σ_{p∈P} D(p, Z).
Coresets with arbitrary weight functions (i.e. with negative weights allowed) have been
considered [FL11, among others]. However, computing approximate solutions on these coresets in
polynomial time remains a challenge, so we restrict our definition to non-negative weight
functions. This ensures that an approximate solution can be quickly produced. This implies
a PTAS for Euclidean space and a polynomial-time 2γ(1 + ε)-approximation for general
metric spaces (where γ is the best polynomial-time approximation factor for the problem
in the batch setting). This factor of 2γ(1 + ε) is well-known in the literature; see [COP03,
BMO+ 11, GMM+ 03] for details.
3.2
Constant-Approximation Algorithm
Let P_i denote the prefix of the stream {p_1, . . . , p_i}. The entire stream is then P_n. Consider
the moment when the first i points have arrived, meaning that the prefix P_i is the current set
of arrived points. The algorithm A of [BMO+ 11] provides an (O(1), O(log n))-approximation
of P_i in the following sense. Define f_0 : ∅ → ∅ as the null map, and define B_i = image(f_i).
Upon receiving point p_i, algorithm A defines a map f_i : B_{i−1} ∪ {p_i} → B_i. We define
π_i : P_i → B_i by π_i(p_j) = f_i(f_{i−1}(. . . (f_j(p_j)) . . .)) for each 1 ≤ j ≤ i. These mappings have
an essential guarantee stated in the following theorem.
Theorem 3.5 ([BMO+ 11]). For every 1 ≤ i ≤ n, after receiving P_i, Algorithm A(k, n, δ)
defines a function f_i such that, with probability 1 − δ, using the above definition of π_i, the
bound Σ_{p∈P_i} D(p, π_i(p)) ≤ α OPT_k(P_i) holds. The algorithm deterministically requires the
storage of O(k(log n + log(1/δ))) points.
3.3
Offline Coreset Construction
We briefly describe the offline coreset construction. The proof of correctness can be found
in Sections 5 and 6. It is this construction that we will maintain in the streaming setting.
The sensitivity of a point p ∈ P is defined as:

s(p) = max_{Z∈X^k} D(p, Z) / Σ_{q∈P} D(q, Z)
Notice that 0 ≤ s(p) ≤ 1. We give an upper bound s′(p) ∈ [s(p), 1]. Define the total
sensitivity t = Σ_{p∈P} s(p). Likewise, we give an upper bound t′ ≥ t, where t′ = Σ_{p∈P} s′(p),
and will show that t′ = O(k). The sampling probability at point p is set to
s′(p)/t′. We take an i.i.d. sample from P of size m for any m ≥ ct′ε^{-2}(log n log t′ + log(1/δ)),
where c is a constant.
Let R be the union of these m i.i.d. samples, and define a weight function v : R →
[0, ∞) where v(r) = (|R| s′(r))^{-1}. It is proven as one of our main theorems (Theorem 6.6)
that the weighted set (R, v) is a (k, ε)-coreset for P.
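In code, the sampling step of this offline construction looks as follows. This is a sketch only: `s_upper` stands for the upper bounds s′(p), and the weight t′/(m · s′(p)) is the unnormalized form of v(r) = (|R| s′(r))^{-1} (identical when s′ is scaled to sum to 1 and |R| = m):

```python
import random

def sensitivity_sample(P, s_upper, m, rng):
    """Draw an i.i.d. sample of m points, point p taken with probability
    s'(p)/t', and attach to each sampled copy the weight t'/(m * s'(p))
    so that weighted sums are unbiased estimates of sums over P."""
    t_prime = sum(s_upper)
    idx = rng.choices(range(len(P)), weights=s_upper, k=m)
    return [(P[i], t_prime / (m * s_upper[i])) for i in idx]

rng = random.Random(0)
P = list(range(100))
coreset = sensitivity_sample(P, [1.0] * len(P), 50, rng)
# With uniform sensitivities every weight is |P|/m, so the weights sum to |P|.
assert abs(sum(w for _, w in coreset) - len(P)) < 1e-9
```

Points with a larger sensitivity bound are sampled more often but receive proportionally smaller weights, which is what keeps the weighted cost unbiased for every query.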
3.4
Bounding the Sensitivity
Consider the prefix P_i, which is the input after the first i points have arrived. Using Algorithm
A we obtain an (α, β)-approximation π_i where α = O(1) and β = O(log n). Recall that B_i
is the image of this approximation, i.e. B_i = image(π_i(P_i)).
Running an offline (γ, λ)-approximation algorithm on B_i, we obtain a multiset C_i of at
most λk distinct points. Let p′ denote the element of C_i nearest to π_i(p) (this is the element
of C_i that p gets mapped to when we pass from P_i → B_i → C_i). The following lemma implies
that Σ_{p∈P} w(p)D(p, p′) ≤ ᾱ OPT(P), where ᾱ = ρα + 2ρ^2 γ(α + 1). This is an observation
used widely in the literature [COP03, BMO+ 11], but we include a proof for completeness.
Lemma 3.6. Let B be an (α, β)-approximation of P, and let C be a (γ, λ)-approximation of
B. Then C is a (ρα + 2ρ^2 γ(α + 1), λ)-approximation of P.
Proof. Let π : P → B be the (α, β)-approximation of P and let t : B → C be the (γ, λ)-approximation of B. In the following, all sums will be taken over all p ∈ P. The hypotheses
state that Σ D(p, π(p)) ≤ α OPT(P) and Σ D(π(p), t(π(p))) ≤ γ OPT(B). Let P* be an
optimal clustering of P, that is, Σ D(p, P*) = OPT(P). Then (1/2) OPT(B) ≤ Σ D(π(p), P*) ≤
Σ ρ(D(π(p), p) + D(p, P*)) ≤ ρ(α + 1) OPT(P). The factor of 1/2 comes from the fact that
OPT(B) is defined using centers restricted to B (see [GMM+ 03] for details). We now write
Σ D(p, t(π(p))) ≤ Σ ρ(D(p, π(p)) + D(π(p), t(π(p)))) ≤ (ρα + 2ρ^2 γ(α + 1)) OPT(P), as desired.
We now prove the lemma that gives us our sampling probability s′(p). Recall
that for the construction to succeed, the sampling probability s′(p) must be at least the
sensitivity s(p) (defined in the previous subsection). Since we focus on a single iteration, we
drop subscripts and write C = C_i and P = P_i. Let p ↦ p′ be an (ᾱ, λ)-approximation of P.
Define P(p) = {q ∈ P : q′ = p′} to be the cluster containing p.
Lemma 3.7. Let the map p ↦ p′ define an (ᾱ, λ)-approximation for the k-median clustering
of P. For every point p ∈ P:

s(p) ≤ ρᾱ D(p, p′) / Σ_{q∈P} D(q, q′) + ρ^2(ᾱ + 1) / |P(p)|
Proof. For an arbitrary Z ∈ X^k we need to provide a uniform bound for

D(p, Z) / Σ_{q∈P} D(q, Z) ≤ ρD(p, p′) / Σ_{q∈P} D(q, Z) + ρD(p′, Z) / Σ_{q∈P} D(q, Z)
                          ≤ ᾱρD(p, p′) / Σ_{q∈P} D(q, q′) + ρD(p′, Z) / Σ_{q∈P} D(q, Z)      (1)

where the second inequality holds because Σ_{q∈P} D(q, q′) ≤ ᾱ OPT(P) ≤ ᾱ Σ_{q∈P} D(q, Z). To
bound the last term, recall that q′ = p′ for all q ∈ P(p), so:

D(p′, Z)|P(p)| = Σ_{q∈P(p)} D(p′, Z) = Σ_{q∈P(p)} D(q′, Z)
              ≤ ρ Σ_{q∈P(p)} (D(q′, q) + D(q, Z))
              ≤ ρ Σ_{q∈P} D(q′, q) + ρ Σ_{q∈P(p)} D(q, Z)
              ≤ ρᾱ Σ_{q∈P} D(q, Z) + ρ Σ_{q∈P(p)} D(q, Z)
              ≤ ρ(ᾱ + 1) Σ_{q∈P} D(q, Z).

Dividing by |P(p)| Σ_{q∈P} D(q, Z) gives

D(p′, Z) / Σ_{q∈P} D(q, Z) ≤ ρ(ᾱ + 1) / |P(p)|.

Substituting this in (1) yields the desired result.
We therefore define our upper bound s′(p) as in the lemma. An immediate but extremely
important consequence of Lemma 3.7 is that t′ = Σ_{p∈P} s′(p) = ρᾱ + ρ^2(ᾱ + 1)k ≤ 3ρ^2 ᾱk.
This can be seen by directly summing the formula given in the lemma.
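A sketch of how this bound translates to code, with ρ = ᾱ = 1 for readability (the paper keeps these as parameters); `assign` is a hypothetical dict mapping each point to its bicriterion center p′:

```python
def sensitivity_upper(assign, D):
    # s'(p) = rho*abar*D(p, p') / sum_q D(q, q') + rho^2*(abar + 1)/|P(p)|,
    # here specialized to rho = abar = 1.
    cost = sum(D(p, c) for p, c in assign.items())
    sizes = {}
    for c in assign.values():  # cluster sizes |P(p)|
        sizes[c] = sizes.get(c, 0) + 1
    return {p: D(p, c) / cost + 2.0 / sizes[c] for p, c in assign.items()}

# Toy check: summing s' gives 1 + 2 * (number of clusters), matching
# t' = rho*abar + rho^2*(abar + 1)*k for rho = abar = 1 and k = 2 clusters.
assign = {0: "a", 1: "a", 2: "b"}
s = sensitivity_upper(assign, lambda p, c: 1.0)
assert abs(sum(s.values()) - 5.0) < 1e-9
```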
3.5
Streaming Algorithm
We now state Algorithm 1, which we then prove maintains a coreset. To use Lemma 3.7
to determine s′(p), we will compute the cluster sizes |P(p)| and estimate the clustering cost
Σ_{q∈P} D(q, q′). We must bound the clustering cost from below because we need an upper
bound on s(p). On Line 9, L is an estimate of the cost of clustering P to the centers C. On
Line 4, c is the absolute constant used in Theorem 6.6.
Algorithm 1 outputs (R, v) such that point p is sampled with probability xs′(p), where x
is defined on Line 4. For each p that has arrived, the value of s′(p) is non-increasing (notice
that it is defined as the minimum of itself and a new value on Line 13), so it is possible to
Algorithm 1: Input: stream of n points in a ρ-metric space, ε > 0, k ∈ N, maximum
failure probability δ > 0
1  Initialization:
2    R ← ∅
3    t′ ← ρᾱ + ρ^2(ᾱ + 1)k
4    x ← 2cε^{-2}(log n log t′ + log(1/δ))
5    Initialize A(k, n, δ)
6  Update Process: after receiving p_i
7    (B_i, f_i) ← A.update(p_i)
8    C ← a (γ, λ)-approximation of B_i
9    L ← D(p_i, C) + ᾱ^{-1}(1 + ε)^{-1} Σ_{r∈R} v(r)D(r, C)
10   for r ∈ R do
11     π_i(r) ← f_i(π_{i−1}(r))
12     z(r) ← ρᾱD(r, r′)/L + ρ^2(ᾱ + 1)/|P(r)|
13     s′(r) ← min(s′(r), z(r))
14   for r ∈ R do
15     if u(r) > xs′(r) then
16       Delete r from R
17   u(p_i) ← uniform random from [0, 1)
18   if u(p_i) ≤ xs′(p_i) then
19     Add p_i to R
20     π_i(p_i) ← f_i(p_i)
21 Query Process:
22   for each r ∈ R do
23     v(r) ← (|R|s′(r))^{-1}
maintain this in the streaming setting since once the deletion condition on Line 15 becomes
satisfied, it remains satisfied forever.
We now proceed with the proof of Theorem 3.1. Since the probability of storing point p is
xs′(p), the expected space of Algorithm 1 is xt′. By Lemma 3.7, which implies t′ ≤ 3ρ^2 ᾱk, we
then bound the expected space by 2cε^{-2}(log n log t′ + log(1/δ))(3ρ^2 ᾱk). Simplifying notation
by defining an absolute constant c̃ (a function of c and ᾱ), we write this expected space as
c̃ρ^2 ε^{-2} k log(ρk) log n log(1/δ). By a Chernoff bound, the high-probability guarantee follows
by replacing c̃ with 2c̃.
Proof of Theorem 3.1. The proof can be divided into the following pieces:
1. For correctness (to satisfy the bound given in Lemma 3.7), we must show that L ≤
Σ_{p∈P_i} D(p, p′). For space, it is important that L is not too small; in particular, the
space grows as 1/L. We show that L is an ε-approximation of the true cost.
2. The value of |P(p)| can be computed exactly for every p. This is needed on Line 12.
3. The construction of Algorithm 1, which samples p with probability xs′(p), can be processed
to be identical to the offline construction of Subsection 3.3, which takes an i.i.d. sample
of size xt′ from the distribution where p is given sampling probability s′(p)/t′.
1: To lower bound the clustering cost, inductively assume that we have a (k, ε)-coreset
S_{i−1} of P_{i−1}. Note that p_i is a (k, ε)-coreset of itself (in fact a (k, 0)-coreset of itself),
so S_{i−1} ∪ {p_i} is a (k, ε)-coreset of P_i. Let L be the cost of clustering S_{i−1} ∪ {p_i} to C.
Therefore the cost of clustering P_i to C is in the interval [(1 − ε)L, (1 + ε)L]. Recall that the
upper bound on s(p) from Lemma 3.7 is:

ρᾱ D(p, p′) / Σ_{q∈P} D(q, q′) + ρ^2(ᾱ + 1) / |P(p)|

By using L in place of the true cost Σ_{p∈P_i} D(p, p′) for defining s′(p), the value of t′ =
Σ_{p∈P_i} s′(p) increases to at most ρᾱ(1 + ε)/(1 − ε) + ρ^2(ᾱ + 1)λk = O(ρ^4 k). Here there is no dependence
on ε since we assume ε ≤ 1/2, so (1 + ε)/(1 − ε) is bounded by an absolute constant.
2: Computing |P(p)| is straightforward. Define w(b) = |{p ∈ P : π(p) = b}| and then let
h : B → C be the (γ, λ)-approximate clustering. Then |P(p)| = Σ_{b∈h^{-1}(p′)} w(b).
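Step 2 can be sketched directly: with w(b) the number of stream points mapped to bicriterion center b, and h the (γ, λ)-approximate clustering of B, the cluster sizes are:

```python
def cluster_sizes(w, h):
    # |P(p)| for each final center p': sum of w(b) over all b with h(b) = p'.
    sizes = {}
    for b, count in w.items():
        sizes[h[b]] = sizes.get(h[b], 0) + count
    return sizes

# Hypothetical toy data: three bicriterion centers reduced to two centers.
w = {"b1": 3, "b2": 2, "b3": 5}
h = {"b1": "c1", "b2": "c1", "b3": "c2"}
assert cluster_sizes(w, h) == {"c1": 5, "c2": 5}
```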
3: In Algorithm 1, we sample point p with probability xs′(p) to maintain a set R of
non-deterministic size. We now argue that this can be converted to the desired coreset,
where an i.i.d. sample of size m is taken from the distribution s′/t′. First, by a Chernoff bound, |R| ≥ E[|R|]/2 with probability 1 − exp(−E[|R|]/8). Since E[|R|] = xt′ =
2cε^{-2}(log n log t′ + log(1/δ)) · (ρᾱ + ρ^2(ᾱ + 1)k) = Ω(log n), we have that |R| ≥ E[|R|]/2
with probability 1 − O(1/n). Then by the union bound, this inequality holds true at each of
the n iterations of receiving a point throughout the entire stream. Recall that for the offline
coreset construction outlined in Subsection 3.3 to hold, we need an i.i.d. sample of at least
m = ct′ε^{-2}(log n log t′ + log(1/δ)) points. By Lemma 3.7, t′ = ρᾱ + ρ^2(ᾱ + 1)k, and so by plugging
in values we see that E[|R|] = xt′ ≥ 2m. Having |R| ≥ m with probability 1 − O(1/n),
it is well-known (see [DDH+ 07] for example) that this can be converted to the required i.i.d.
sample.
4
Preliminaries for Offline Coreset Construction
4.1
Query space
In our framework, as in [FL11], we are given a finite input set P of items that are called
points, a (usually infinite) set Q of items called queries, and a cost function f that
maps each pair of a point in P and a query in Q to a non-negative number f(p, q). The cost
of the set P for a query q is the sum over all the costs,

f̄(P, q) := Σ_{p∈P} f(p, q).
More generally, each input point might be given a positive multiplicative weight w(p) > 0,
and the overall cost of each point is then reweighted, so that

f̄(P, w, q) = Σ_{p∈P} w(p)f(p, q).

The tuple (P, w, f, Q) thus defines our input problem, and we call it a query space. In the
case that the points are unweighted we can simply define w(p) = 1 for every p ∈ P.
However, for the following sections, it might help to scale the weights so that their sum
is 1. In this case, we can think of the weights as a given distribution over the input points,
and the cost f̄(P, w, q) is the expected value of f(p, q) for a point that is sampled at random
from P. For the unweighted case we thus have w(p) = 1/n for each point, and

f̄(P, w, q) = (1/n) Σ_{p∈P} f(p, q)

is the average cost per point. The cost function f is usually also scaled, as will be explained
later, to have values f(p, q) between 0 and 1, or −1 and 1.
Example 1: Consider the following problem where the input is a set P of n points in
R^d. Given a ball B = B(c, r) of radius r centered at c ∈ R^d, we wish to compute the
fraction of input points that are covered by this ball, |P ∩ B|/|P|. More generally, the query is a
set of k balls, and we wish to compute the fraction of points in P that are covered by the
union of these balls. In this case, each input point p ∈ P has a weight w(p) = 1/n, the set
Q of queries is the union over every k balls in R^d,

Q = { B(c_1, r_1) ∪ · · · ∪ B(c_k, r_k) | ∀i ∈ [k] : c_i ∈ R^d, r_i ≥ 0 },     (2)

and the cost f(p, q) for a query q = {B_1 ∪ · · · ∪ B_k} ∈ Q is 1/n if p is inside one of the
balls of q, and 0 otherwise. The overall cost f̄(P, w, q) = Σ_{p∈P} f(p, q) is thus the fraction of
points of P that are covered by the union of these k balls.
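The query cost of Example 1 is easy to state in code; this sketch evaluates f̄(P, w, q) for a union-of-balls query q in the unweighted case w(p) = 1/n:

```python
def covered_fraction(P, balls):
    # f-bar(P, w, q) with w(p) = 1/n: the fraction of points of P lying
    # in the union of the balls (center, radius) that make up the query q.
    def in_ball(p, center, r):
        return sum((a - b) ** 2 for a, b in zip(p, center)) <= r * r
    return sum(1 for p in P if any(in_ball(p, c, r) for c, r in balls)) / len(P)

P = [(0.0, 0.0), (2.0, 0.0), (5.0, 5.0)]
q = [((0.0, 0.0), 1.0), ((2.0, 0.0), 0.5)]  # a query of k = 2 balls
assert abs(covered_fraction(P, q) - 2 / 3) < 1e-9
```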
The motivation for defining query spaces is usually to solve some related optimization
problem, such as finding the query q that minimizes the cost f̄(P, w, q). In this case, the requirement
to approximate every query in Q is too strong, and we may want to approximate only the
optimal query in some sense. To this end, we may wish to replace the set Q by a function
that assigns a different set of queries Q(S) to each subset S of P. For the correctness of
our results, we require that this function Q be monotonic in the following sense: if T is
a subset of S then Q(T) must be a subset of Q(S). If we wish to have a single set Q(P) of
queries as above, we can simply define Q(S) := Q(P) for every subset S of P, so that the
desired monotonicity property will hold.
Example 2: For the set Q in (2), define Q(S) to be the set of balls in R^d such that the
center of each ball is a point in S. More generally, we can require that the center of each
ball be spanned by (i.e., a linear combination of) at most 10 points from S.
We now conclude with the formal definitions for the above discussion.
Definition 4.1 (weighted set). Let S be a subset of some set P and w : P → [0, ∞) be a
function. The pair (S, w) is called a weighted set.
Definition 4.2 (query space). [FL11]. Let Q be a function that maps every set S ⊆ P to
a corresponding set Q(S), such that Q(T ) ⊆ Q(S) for every T ⊆ S. Let f : P × Q(P ) → R
be a cost function. The tuple (P, w, Q, f ) is called a query space.
We denote the cost of a query q ∈ Q(P) by

f̄(P, w, q) := Σ_{p∈P} w(p)f(p, q).
4.2
(ε, ν)-Approximation
Consider the query space (P, w, Q, f) in Example 1 for k = 1, and suppose that we wish to
compute a set S ⊆ P such that, for every given ball B, the fraction of points of P that are
covered by B is approximately the same as the fraction of points of S that are covered,
up to a given small additive error ε > 0. That is, for every ball B,

| |P ∩ B|/|P| − |S ∩ B|/|S| | ≤ ε.
By defining the weight u(p) = 1/|S| for every p ∈ S, this implies that for every query
(ball) q = B,
f¯(P, w, q) − f¯(S, u, q) =
X 1
X 1
|P ∩ B| |S ∩ B|
· f (p, q) −
· f (p, q) =
−
≤ ε.
|P |
|S|
|P |
|S|
p∈P
p∈S
A weighted set (S, u) that satisfies the last inequality for every query q ∈ Q(S) is called an ε-approximation for the query space (P, w, Q, f). Note that the above example assumes that the maximum answer to a query q is f(p, q) ≤ 1. Otherwise, the error guaranteed by an ε-approximation for a query q is ε max_{p∈P} |f(p, q)|.
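The claim that a uniform sample yields an ε-approximation can be checked empirically; the one-dimensional setup below (points in [0, 1], queries of the form "fraction of points below x") is an illustrative stand-in for the ball queries above.

```python
import random

random.seed(0)
P = [random.random() for _ in range(2000)]   # input points in [0, 1]
S = random.sample(P, 500)                    # uniform sample

def frac_below(points, x):
    """Fraction of points covered by the query 'at most x'."""
    return sum(1 for p in points if p <= x) / len(points)

# discrepancy between P and S over a grid of threshold queries
disc = max(abs(frac_below(P, x / 100) - frac_below(S, x / 100))
           for x in range(101))
```

With this sample size the discrepancy is a small constant, as predicted by the theory.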
The above inequality implies that if a ball covers a fraction of at least ε of the points of P (i.e., at least εn points), then it must cover at least one point of S. If we ask only for this (weaker) property from S, then S is called an ε-net. To obtain the new results of this paper, we use a tool that generalizes the notions of ε-approximation and ε-net, but is less common in the literature, and is known as (ε, ν)-approximation [LLS01].
By letting a = f̄(P, w, q) and b = f̄(S, u, q), an ε-approximation implies that |a − b| ≤ ε for every query q, and an ε-net implies that b > 0 if a ≥ ε. Following [LLS00], we define below a distance function that maps each pair of nonnegative real numbers a and b to a nonnegative real |a − b|_ν. A specific value of ν will imply |a − b|_ν ≤ ε, i.e., that S is an ε-approximation, and a different value of ν will imply that S is an ε-net for P. This is formalized as follows.
Definition 4.3 ((ε, ν)-approximation [LLS00]). Let ν > 0. For every a, b ≥ 0, we define the distance function

|a − b|_ν = |a − b| / (a + b + ν).
Let (P, w, Q, f) be a query space such that Σ_{p∈P} w(p) = 1 and f : P × Q(P) → [0, 1]. For ε > 0, a weighted set (S, u) is an (ε, ν)-approximation for this query space if for every q ∈ Q(S) we have

|f̄(P, w, q) − f̄(S, u, q)|_ν ≤ ε.    (3)
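Definition 4.3 is easy to check numerically. The sketch below implements |a − b|_ν and brute-forces the implication of Corollary 4.4(i) on a grid of values a, b ∈ [0, 1] (the range of f̄ when the weights sum to 1); the grid itself is an illustrative choice.

```python
def nu_dist(a, b, nu):
    """The distance |a - b|_nu = |a - b| / (a + b + nu) from Definition 4.3."""
    return abs(a - b) / (a + b + nu)

# Corollary 4.4(i): if |a - b|_{1/4} <= eps/4 then |a - b| <= eps.
eps = 0.2
grid = [i / 100 for i in range(101)]
for a in grid:
    for b in grid:
        if nu_dist(a, b, 0.25) <= eps / 4:
            assert abs(a - b) <= eps
```

The check passes because |a − b| ≤ (ε/4)(a + b + 1/4) ≤ (ε/4)·(9/4) < ε whenever a, b ∈ [0, 1].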
Corollary 4.4 ( [LLS00, HP11]). Let a, b ≥ 0 and τ, ν > 0 such that |a − b|ν ≤ τ . Let ε > 0.
Then
(i) (ε-sample/approximation). If τ = ε/4 and ν = 1/4 then
|a − b| ≤ ε.
(ii) (ε-net). If τ = 1/4 and ν = ε then (a ≥ ε ⇒ b > 0).
(iii) (Relative ε-approximation). Put µ > 0. If ν = µ/2 and τ = ε/9 then
(1) a ≥ µ ⇒ (1 − ε)a ≤ b ≤ (1 + ε)a.
(2) a < µ ⇒ b ≤ (1 + ε)µ.

4.3 Constructing ε-Approximations
Unlike the notion of coresets in the next section, the idea of ε-approximations has been known for decades [VC71, Mat89]. In particular, unlike coresets, ε-approximations, and (ε, ν)-approximations in general, can be constructed using simple uniform random sampling of the points. The size of the sample depends linearly on the complexity of the queries, in the sense soon to be defined in this section.
Intuitively, and in most practical cases, including the examples in this paper, this complexity is roughly the number of parameters that are needed to define a single query. For example, a ball in Rd can be defined by d + 1 parameters: its center c ∈ Rd, which is defined
using d numbers and the radius r > 0 which is an additional number. A query of k balls is
similarly defined by k(d + 1) parameters. However, by set theory we have |R| = |Rm| for every integer m ≥ 1, which means that we can encode every m numbers into a single number. We can thus always reduce the number of parameters that are needed to define a query from m to 1 by redefining our cost function f and re-encoding the set of queries.
There are also natural examples of query spaces whose cost function f is defined by one parameter, but the size of the sample needed for obtaining an ε-approximation is unbounded, e.g., the query space where f(p, q) = sign(sin(pq)) and P = Q = R, where sign(x) = 1 if x > 0 and 0 otherwise; see details e.g. in [?]. Hence, a more involved definition of complexity is needed, as follows.
While the number of subsets of a given set of n points is 2^n, it can easily be verified that the number of subsets that can be covered by a ball in Rd is roughly n^{O(d)}. The exponent O(d) is called the VC-dimension of the family of balls in Rd. The following definition is a simple generalization by [FL11] for query spaces where the query set is a function and not a single set. The original definition of pseudo-dimension can be found e.g. in [LLS00] and is very similar to the definition of VC-dimension given by [VC71], as well as to many other similar measures for the complexity of a family of shapes.
Definition 4.5 (dimension [FL11]). For a query space (P, w, Q, f) and r ∈ [0, ∞) we define

range(q, r) = {p ∈ P | w(p) · f(p, q) ≤ r}.

The dimension of (P, w, Q, f) is the smallest integer d such that for every S ⊆ P we have

|{range(q, r) | q ∈ Q(S), r ∈ [0, ∞)}| ≤ |S|^d.
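For intuition, the number of distinct subsets that a family of queries induces on a set can be counted by brute force. The sketch below does this for intervals on the line, whose induced subsets grow roughly quadratically in |S|, consistent with a small dimension; the point set is an illustrative choice.

```python
def induced_subsets(S):
    """All distinct subsets of S of the form {p in S : a <= p <= b},
    where the interval endpoints a, b range over S."""
    subsets = {frozenset()}                  # the empty range
    for a in S:
        for b in S:
            subsets.add(frozenset(p for p in S if a <= p <= b))
    return subsets

S = [1, 3, 4, 7, 9]
R = induced_subsets(S)
# far fewer than the 2^|S| = 32 arbitrary subsets of S
```

For |S| sorted points the induced subsets are exactly the contiguous runs (plus the empty set), so their count is quadratic rather than exponential in |S|.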
The main motivation for the above definition is that it tells us how many samples we need to take uniformly at random from the input set P to get an (ε, ν)-approximation (S, u), as follows.
Theorem 4.6 ([VC71, LLS00, FL11]). Let (P, w, Q, f) be a query space of dimension d such that Σ_{p∈P} w(p) = 1 and f : P × Q(P) → [0, 1], and let ε, δ, ν > 0. Let S be a random sample from P, where every point p ∈ P is sampled independently with probability w(p). Assign a weight u(p) = 1/|S| for every p ∈ S. If

|S| ∈ Ω(1) · (1/(ε²ν)) · (d log(1/ν) + log(1/δ))

then, with probability at least 1 − δ, (S, u) is an (ε, ν)-approximation for the query space (P, w, Q, f).
5 Improved Coreset Framework

5.1 Improved (ε, ν)-approximations
In this section we show the first technical result of this paper: that the additive error ε max_{p∈P} |f(p, q)| in (3) can be replaced by ε, and that the assumption Σ_{p∈P} w(p) = 1 in Theorem 4.6 can be removed. The condition for this to work is that the total importance value t, as
defined below, is small. More precisely, the required sample size will be near-linear with
t. Also, the uniform sampling in Theorem 4.6 will be replaced by non-uniform sampling
with a distribution that is proportional to the importance of each point. This result is
essentially a generalization and significant simplification of the framework in [FL11] that
used ε-approximations for constructing coresets. Maybe more importantly: using the idea
of (ε, ν)-approximations we are able to show that the sample size is near linear in t while
in [FL11] it is quadratic in t. For some applications, such an improvement means turning a theoretical result into a practical result, especially when t is close to √n.
We define the importance of a point p as the maximum absolute weighted cost w(p)f(p, q) over all the possible queries q, i.e.,

s(p) := w(p) · max_{q∈Q(P)} |f(p, q)|,

and hope that the sum of these importances is small (say, constant or log n); in other words, that not all the points are very important. More precisely, if the sum t = Σ_{p∈P} s(p) of these costs is 1, then we prove below that a new query space (P, w′, Q, f′) can be constructed, such that an (ε, ν)-approximation (S, u) for the new query space would imply

|f̄(P, w, q) − f̄(S, u, q)|_ν ≤ ε.    (4)
That is, the additive error in (3) is reduced as desired.
The new query space (P, w′, Q, f′) is essentially a re-scaling of the original weights by their importance, w′(p) = s(p). To make sure that the cost of each query will still be the same, we need to define f′ such that w(p)f(p, q) = w′(p)f′(p, q). This implies f′(p, q) := w(p)f(p, q)/s(p). While the new cost f̄′(P, w′, q) is the same as the old one f̄(P, w, q) for every query q, the maximum value of |f′(p, q)| is 1, by the definition of s(p), even if |f(p, q)| is arbitrarily large. Hence, the additive error ε|f′(p, q)| in (3) is reduced to ε in (4).
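A quick numeric check of this re-scaling on a toy query space (the points, weights, queries, and cost values below are made up): the re-scaled space has the same query costs while |f′| ≤ 1.

```python
P = ["a", "b", "c"]
w = {"a": 0.2, "b": 0.3, "c": 0.5}
Q = [0, 1]                                   # two hypothetical queries
f = {("a", 0): 4.0, ("a", 1): 1.0,
     ("b", 0): 0.5, ("b", 1): 2.0,
     ("c", 0): 3.0, ("c", 1): 0.1}

s  = {p: w[p] * max(abs(f[p, q]) for q in Q) for p in P}    # importance
wp = {p: s[p] for p in P}                                   # w'(p) = s(p)
fp = {(p, q): w[p] * f[p, q] / s[p] for p in P for q in Q}  # f'(p, q)

for q in Q:
    old = sum(w[p] * f[p, q] for p in P)     # f-bar(P, w, q)
    new = sum(wp[p] * fp[p, q] for p in P)   # f-bar'(P, w', q)
    assert abs(old - new) < 1e-12            # cost of every query preserved
```

The largest |f′| value is attained by the query achieving the maximum in s(p), where it equals exactly 1.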
More generally, an (ε, ν/t)-approximation for (P, w0 , Q, f 0 ) would yield (4). Using the
uniform sample construction of Theorem 4.6, this implies that to get (4) we need to increase
the sample size by a factor that is nearly linear in t.
Theorem 5.1.
• Let (P, w, Q, f) be a query space where f(p, q) ≤ 1 for every p ∈ P and q ∈ Q(P).
• Let s : P → (0, ∞) such that s(p) ≥ w(p) max_{q∈Q(P)} f(p, q).
• Let t = Σ_{p∈P} s(p).
• Let w′ : P → [0, 1] such that w′(p) := s(p)/t.
• Let f′ : P × Q(P) → [0, 1] be defined as f′(p, q) := w(p) · f(p, q)/s(p).
• Let (S, u) be an (ε, ν/t)-approximation for (P, w′, Q, f′).
• Let u′(p) = u(p) · w(p)/w′(p) for every p ∈ S.

Then for every q ∈ Q(S),

|f̄(P, w, q) − f̄(S, u′, q)|_ν ≤ ε.    (5)
Proof. Put q ∈ Q(S). Then

|f̄(P, w, q) − f̄(S, u′, q)|_ν
= | Σ_{p∈P} w(p)f(p, q) − Σ_{p∈S} u(p) · (w(p)/w′(p)) · f(p, q) |_ν    (6)
= | t · Σ_{p∈P} (s(p)/t) · w(p) · (f(p, q)/s(p)) − t · Σ_{p∈S} u(p) · w(p) · (f(p, q)/s(p)) |_ν    (7)
= | t · Σ_{p∈P} w′(p) · f′(p, q) − t · Σ_{p∈S} u(p) · f′(p, q) |_ν
= | Σ_{p∈P} w′(p) · f′(p, q) − Σ_{p∈S} u(p) · f′(p, q) |_{ν/t}    (8)
≤ ε,    (9)

where (6) is by the definition of u′, (7) is by the definition of f′, (8) follows since

|ta − tb|_ν = |ta − tb| / (ta + tb + ν) = |a − b| / (a + b + ν/t) = |a − b|_{ν/t},    (10)

and (9) is by (3) and the definition of S in the sixth bullet of the statement.
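The scaling identity (10) used in step (8) can also be verified numerically on a few arbitrary values:

```python
def nu_dist(a, b, nu):
    """The distance |a - b|_nu = |a - b| / (a + b + nu)."""
    return abs(a - b) / (a + b + nu)

# |ta - tb|_nu == |a - b|_{nu/t}, the identity from (10)
for t in (0.5, 2.0, 10.0):
    for a, b, nu in ((0.3, 0.7, 0.25), (1.0, 0.0, 0.1), (2.0, 5.0, 1.0)):
        assert abs(nu_dist(t * a, t * b, nu) - nu_dist(a, b, nu / t)) < 1e-12
```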
Plugging Theorem 4.6 into Theorem 5.1 yields our main technical result, which will imply smaller coresets in the next sections.
Theorem 5.2. Let (P, w, Q, f) be a query space of dimension d, where f is non-negative, and let ε, δ, ν > 0. Let S be a random sample from P, where every point p ∈ P is sampled independently with probability s(p)/t. Assign a weight u′(p) = t·w(p)/(s(p)·|S|) for every p ∈ S. If

|S| ∈ Ω(1) · (t/(ε²ν)) · (d log(t/ν) + log(1/δ))

then, with probability at least 1 − δ,

|f̄(P, w, q) − f̄(S, u′, q)|_ν ≤ ε.
Proof. By Theorem 5.1 it suffices to prove that S is an (ε, ν/t)-approximation for (P, w′, Q, f′). Indeed, since Σ_{p∈P} w′(p) = 1 and S is a random sample where p ∈ P is sampled with probability w′(p) = s(p)/t, the weighted set (S, u) is, with probability at least 1 − δ, an (ε, ν/t)-approximation for (P, w′, Q, f′) by Theorem 4.6.
While Theorem 4.6 suggests constructing an ε-approximation simply by taking a uniform random sample from P, Theorem 5.2 requires us to take a non-uniform sample where the distribution is defined by the importances s(·). Bounding these importances such that their sum t = Σ_{p∈P} s(p) will be small raises a new optimization problem that, as we will see in the next sections, might not be easy at all to solve. So what did we gain by proving Theorem 4.6 (or the framework of [FL11] in general)?
Most of the existing papers for constructing coresets essentially had to bound the total importance of the related problem anyway. However, without Theorem 4.6, their proofs also had to deal with complicated terms that involve ε and include sophisticated probability arguments. Essentially, each paper had to re-prove in some sense the very involved proofs in [VC71, LLS00] that were researched for decades, as well as the mathematics behind the usage of Definition 4.3. Besides the complicated proofs, the final bounds, the dependency on ε and δ, as well as the coreset construction algorithms, were usually sub-optimal compared to Theorem 4.6. On the contrary, bounding the total importance allows us to focus on a deterministic result (no δ involved), and the terms s(p) can be approximated up to constant factors (and not (1 + ε)-factors). This is demonstrated in Section 6.
5.2 Improved Coreset Framework

While the result in the previous section can be used to improve the quality of (ε, ν)-approximations in general, its main motivation is to construct a more recent type of data structure that is sometimes called a coreset.
As explained in Theorem 4.6, ε-approximations, and (ε, ν)-approximations in general, can easily be computed using uniform random sampling. While ε-approximations are useful for hitting sets, where we wish to know how many points are covered by a set of shapes, they are less relevant for shape fitting, where we wish to approximate the sum of distances of the input points to a given query shape or model. The main reason is that in the covering case the maximum contribution of a point to the overall cost (the number of covered points) is bounded by 1, while in shape fitting its distance to the shape is in general unbounded. Using the notation of the previous section: in the covering case the importance of each point is the same, so Theorem 5.2 yields a uniform sample S and the same error as Theorem 4.6.
Example: In the Euclidean k-means problem the input is a set P of n points in Rd. For simplicity, assume that the set is unweighted, that is, w(p) = 1/n for every p ∈ P. A query in this context is a set of k points (centers) in Rd, so Q(P) is the family of all such sets. We denote the squared Euclidean distance from a point p ∈ P to a center c ∈ Rd by D(p, c) = ‖p − c‖₂². The distance from p to its closest center in C = {c₁, · · · , c_k} is then

D(p, C) := min_{c∈C} D(p, c) = min_{c∈C} ‖p − c‖₂².
By defining g(p, C) = D(p, C) we obtain the query space (P, w, Q, g), where ḡ(P, w, C) is the average of the squared distances to a given (query) set C of k centers. Our goal is to compute a weighted subset (S, u) such that ḡ(S, u, C) will approximate the average squared distance ḡ(P, w, C) for every set of centers.
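Concretely, ḡ(P, w, C) for the toy input below (two well-separated pairs of points and one hypothetical query of k = 2 centers) is just the weighted average of squared distances to the nearest center:

```python
def D(p, c):
    """Squared Euclidean distance between two points."""
    return sum((pi - ci) ** 2 for pi, ci in zip(p, c))

def g(p, C):
    """Squared distance from p to its nearest center in C."""
    return min(D(p, c) for c in C)

P = [(0.0, 0.0), (1.0, 0.0), (10.0, 0.0), (11.0, 0.0)]
w = {p: 1.0 / len(P) for p in P}           # unweighted: w(p) = 1/n
C = [(0.5, 0.0), (10.5, 0.0)]              # a query: k = 2 centers

avg = sum(w[p] * g(p, C) for p in P)       # g-bar(P, w, C)
```

Every point here is at squared distance 0.25 from its nearest center, so the average cost is 0.25.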
Suppose that (S, u) is an ε-approximation of (P, w, Q, g). Then

|ḡ(P, w, C) − ḡ(S, u, C)| ≤ ε max_{p∈P} g(p, C).    (11)

That is,

| (1/n) · Σ_{p∈P} D(p, C) − Σ_{p∈S} u(p)D(p, C) | ≤ ε max_{p∈P} D(p, C).    (12)

In other words, the additive error depends on the maximum distance between a point and a given set of centers, which can be arbitrarily large, unless we assume, say, that both the input points and the centers are inside the unit cube. Theorem 5.2 would not improve this bound by itself, since the importance of each point, max_{C∈Q(P)} D(p, C), is unbounded.
In this section we wish to compute a weighted subset (S, u) that will approximate the
average distance ḡ(P, w, C) for every query, up to a multiplicative factor of 1 ± ε without
further assumptions. Such a set is sometimes called a coreset as follows.
Definition 5.3 (ε-coreset). For an error parameter ε ∈ (0, 1), the weighted set (S, u) is an
ε-coreset for the query space (P, w, Q, g) if S ⊆ P and for every q ∈ Q(S),
(1 − ε)ḡ(P, w, q) ≤ ḡ(S, u, q) ≤ (1 + ε)ḡ(P, w, q).
An equivalent definition of an ε-coreset is that the additive error ε max_{p∈P} g(p, C) in (11) is replaced by εḡ(P, w, q). That is, for every q ∈ Q(S)
|ḡ(P, w, q) − ḡ(S, u, q)| ≤ εḡ(P, w, q).
An ε-coreset implies that not only the average, but also the sum of squared distances (or sum of costs, in general) is preserved up to a factor of 1 ± ε. Note also that simply multiplying the weight of each point in S by 1/(1 − ε) would yield a one-sided error,

ḡ(P, w, q) ≤ ḡ(S, u, q) ≤ ((1 + ε)/(1 − ε)) · ḡ(P, w, q) = (1 + 2ε/(1 − ε)) · ḡ(P, w, q).

If we assume in addition that ε ∈ (0, 1 − 2/c) for some c > 2, then 1 − ε ≥ 2/c and thus

ḡ(P, w, q) ≤ ḡ(S, u, q) ≤ (1 + cε)ḡ(P, w, q).

Hence, an (ε/c)-coreset (S, u) implies

ḡ(P, w, q) ≤ ḡ(S, u, q) ≤ (1 + ε)ḡ(P, w, q).    (13)

For example, if ε ∈ (0, 1/2), then an (ε/4)-coreset would yield (13), by substituting c = 4.
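The constants in this rescaling argument can be brute-force checked (the grid of ε values is an illustrative choice):

```python
for k in range(1, 50):
    eps = k / 100                         # eps in (0, 1/2)
    upper = (1 + eps) / (1 - eps)         # one-sided bound after rescaling
    # the identity (1+eps)/(1-eps) = 1 + 2*eps/(1-eps)
    assert abs(upper - (1 + 2 * eps / (1 - eps))) < 1e-12
    # with c = 4: 1 - eps >= 1/2, so the bound is at most 1 + 4*eps
    assert upper <= 1 + 4 * eps + 1e-12
```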
The main observation for getting the desired multiplicative approximation is that a multiplicative error of εḡ(P, w, q) can be turned into an additive error of ε by replacing g(p, q) with its scaled version

f(p, q) = g(p, q) / ḡ(P, w, q).

To get a coreset for g as in (13) it suffices to have an ε-approximation for f. In addition, for many problems, while the importance s(p) of a point p is unbounded with respect to g, it is bounded with respect to f. We formalize this observation as follows.
Corollary 5.4. Let (P, w, Q, g) be a query space for some non-negative function g, and define f : P × Q(P) → R such that for every p ∈ P and q ∈ Q(P) we have

f(p, q) = g(p, q) / ḡ(P, w, q).

Let (S, u′) be an (ε/4, 1/(4t))-approximation for (P, w′, Q, f′) as defined in Theorem 5.1. Then (S, u′) is an ε-coreset for (P, w, Q, g), i.e., for every q ∈ Q(S),

|ḡ(P, w, q) − ḡ(S, u′, q)| ≤ εḡ(P, w, q).
Proof. Put q ∈ Q(S), τ = ε/4 and ν = 1/4. Applying Theorem 5.1 with ν yields

|f̄(P, w, q) − f̄(S, u′, q)|_ν ≤ τ.

Since f̄(P, w, q) = 1, this implies

| 1 − ḡ(S, u′, q)/ḡ(P, w, q) |_ν = |f̄(P, w, q) − f̄(S, u′, q)|_ν ≤ τ.

Substituting a = 1 and b = f̄(S, u′, q) in Corollary 4.4(i) yields

| 1 − ḡ(S, u′, q)/ḡ(P, w, q) | ≤ ε.

Multiplying by ḡ(P, w, q) yields an ε-coreset, as

|ḡ(P, w, q) − ḡ(S, u′, q)| ≤ εḡ(P, w, q).
Combining Corollary 5.4 and Theorem 4.6 yields the following theorem.
Theorem 5.5. Let (P, w, Q, g) be a query space, where g is a non-negative function. Let s : P → (0, ∞) such that

s(p) ≥ max_{q∈Q(P)} [ w(p)g(p, q) / Σ_{p′∈P} w(p′)g(p′, q) ],

and t = Σ_{p∈P} s(p). Let d be the dimension of (P, w′, Q, f′) as defined in Corollary 5.4. Let c ≥ 1 be a sufficiently large constant, and let S be a random sample of

|S| ≥ (ct/ε²) · (d log t + log(1/δ))

points from P, such that for every p ∈ P and q ∈ S we have p = q with probability s(p)/t. Let u′(p) = t·w(p)/(s(p)|S|) for every p ∈ S. Then, with probability at least 1 − δ, (S, u′) is an ε-coreset for (P, w, Q, g), i.e.,

∀q ∈ Q(S) : (1 − ε)ḡ(P, w, q) ≤ ḡ(S, u′, q) ≤ (1 + ε)ḡ(P, w, q).
6 Application: Smaller and Generalized Coreset for k-Means
Definition 6.1 (ρ-metric space). Let X be a set, and D : X² → [0, ∞) be a function. Let ρ > 0. The pair (X, D) is a ρ-metric space if for every (x, y, z) ∈ X³ we have

D(x, z) ≤ ρ(D(x, y) + D(y, z)).

For C ⊆ X we denote D(x, C) := min_{c∈C} D(x, c), assuming that such a minimum exists. Moreover, for ε > 0, the pair (X, D) is a (ψ, ε)-metric space if for every (x, y, z) ∈ X³ we have

|D(x, z) − D(y, z)| ≤ D(x, y)/ψ + εD(y, z).

Note that for every x, y ∈ X, and a center c_y ∈ C that is closest to y, we have

D(x, C) = min_{c∈C} D(x, c) ≤ D(x, c_y) ≤ ρ(D(x, y) + D(y, c_y)) = ρ(D(x, y) + D(y, C)).    (14)
A simple way to decide whether (X, D) is indeed a ρ-metric space is to use the following bound, known as the Log-Log Lipschitz condition, which is usually easier to compute. The following lemma is very similar to [FS12, Lemma 2.1], where in the proof of (i) the constant 4 that appeared there is replaced here by 1/ε.
Lemma 6.2 (Lemma 2.1(ii) in [FS12]). Let D̃ : [0, ∞) → [0, ∞) be a monotonic non-decreasing function that satisfies the following (Log-Log Lipschitz) condition: there is r > 0 such that for every x > 0 and ∆ > 1 we have

D̃(∆x) ≤ ∆^r · D̃(x).

Let (X, dist) be a metric space, and let D(x, y) = D̃(dist(x, y)) for every x, y ∈ X. Then (X, D) is a ρ-metric space for ρ = max{2^{r−1}, 1}, and a (ψ, ε)-metric space for every ε ∈ (0, 1) and ψ = (r/ε)^r.
For example, consider a metric space (X, dist) and the function D̃(x) = x², which corresponds to the squared distance D(p, q) = D̃(dist(p, q)) = (dist(p, q))². Note that (X, D) is not a metric space, since the triangle inequality does not hold. However, for every x > 0 and ∆ > 1,

D̃(∆x) = (∆x)² = ∆²x² = ∆²D̃(x).

Hence, by Lemma 6.2, r = 2 and ρ = 2^{r−1} = 2, so (X, D) is a 2-metric space and a (ψ, ε)-metric space for ψ = (2/ε)².
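The ρ = 2 relaxed triangle inequality for squared distances can be confirmed on random inputs (a numeric spot check, not a proof; the one-dimensional setup is an illustrative choice):

```python
import random

def D(x, y):
    """Squared distance on the real line."""
    return (x - y) ** 2

random.seed(0)
for _ in range(1000):
    x, y, z = (random.uniform(-10, 10) for _ in range(3))
    # relaxed triangle inequality with rho = 2:
    # (x - z)^2 <= 2 * ((x - y)^2 + (y - z)^2)
    assert D(x, z) <= 2 * (D(x, y) + D(y, z)) + 1e-9
```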
To compute a coreset for a problem we need to decide which are the important points; more formally, to use Theorem 5.5 we need to bound the importance s(p) of each point p ∈ P. To do this, we usually need to solve another optimization problem, which is usually related to computing the query with the minimal cost. For example, the bound on the importance of a point in the k-means problem, as will be defined later, is based on the optimal solution for the k-means problem. Unfortunately, this optimization problem is usually hard, and solving it is the main motivation for constructing the coreset in the first place.
There are two leeways from this chicken-and-egg problem:
(i) Use the merge-and-reduce approach that reduces the problem of computing a coreset for a large set of n items to the problem of computing coresets for n/(2|S|) small weighted (core)sets. Each input set of size 2|S| is reduced to a coreset of size |S| and merged with another such coreset, where 2|S| is the minimum size of an input set that can be reduced to half using the given coreset construction. In this case, if the coreset construction takes time f(|S|) then, since there are O(n) such constructions, the overall running time will be O(n) · f(|S|).
(ii) For problems such as k-means, it is NP-hard to compute the optimal solution, even for a small set of n = O(k) points. Instead of computing an optimal solution, a constant-factor approximation usually suffices for computing the importance of each point. Since for many problems such an approximation is also unknown or too slow to compute, an (α, β)-approximation (or bi-criteria approximation) can be used instead, as explained below.
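The merge-and-reduce approach of item (i) can be sketched as follows. The coreset construction here is a placeholder (uniform sampling with rescaled weights, standing in for a real construction such as Algorithm 2 below); all names and sizes are illustrative.

```python
import random

def toy_coreset(weighted_points, size):
    """Placeholder coreset construction: uniform sample with rescaled
    weights. A real implementation would use importance sampling."""
    if len(weighted_points) <= size:
        return list(weighted_points)
    sample = random.sample(weighted_points, size)
    # rescale so the total weight of the input is preserved
    scale = sum(w for _, w in weighted_points) / sum(w for _, w in sample)
    return [(p, w * scale) for p, w in sample]

def merge_and_reduce(stream, size):
    """Maintain O(log n) buckets; bucket i holds a coreset representing
    2^i chunks. Two full buckets are merged and reduced, like binary
    counter increments."""
    buckets = []                        # buckets[i] is None or a coreset
    for chunk in stream:                # chunks of `size` weighted points
        core = toy_coreset(chunk, size)
        i = 0
        while i < len(buckets) and buckets[i] is not None:
            core = toy_coreset(buckets[i] + core, size)   # merge + reduce
            buckets[i] = None
            i += 1
        if i == len(buckets):
            buckets.append(None)
        buckets[i] = core
    merged = [p for b in buckets if b is not None for p in b]
    return toy_coreset(merged, size)

random.seed(1)
stream = [[(random.random(), 1.0) for _ in range(64)] for _ in range(16)]
core = merge_and_reduce(stream, 64)
total = sum(w for _, w in core)         # total weight of the 1024 inputs
```

Because each reduction rescales weights, the final coreset represents the full input weight while only ever holding O(|S| log n) points in memory.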
Suppose that the cost of the optimal solution for the k-means problem on a set is OPT_k. That is, there is a set C of k centers whose cost (sum of squared distances from the input points of P) is OPT_k, and there is no such set of smaller cost. Then a set C̃ is called an α-approximation if its cost is at most α · OPT_k. However, for many problems, even a rougher approximation would do: instead of using k centers to approximate OPT_k by a factor of α, we use βk centers, where each input point may be assigned to its nearest center. Note that the cost is still compared to OPT_k and not to OPT_{βk}. We define (α, β)-approximation formally below. For our purposes later, we generalize this common definition of (α, β)-approximation, and allow a point to be assigned to a different center than its nearest one, as long as the overall cost is small.
Definition 6.3 ((α, β)-approximation). Let (X, D) be a ρ-metric space, and k ≥ 1 be an integer. Let (P, w, Q, g) be a query space such that P ⊆ X, Q(P) = {C ⊆ X | |C| ≤ k}, and g : P × Q(P) → [0, ∞) is a function such that

g(p, C) = D(p, C) := min_{c∈C} D(p, c).    (15)

Let α, β ≥ 0, B ⊆ X such that |B| ≤ βk, and B : P → B such that

Σ_{p∈P} w(p)g(p, {B(p)}) ≤ α · min_{q∈Q(P)} ḡ(P, w, q).

Then B is called an (α, β)-approximation for (P, w, Q, g).
For every b ∈ B, we denote by Pb = {p ∈ P | B(p) = b} the points that are mapped to
the center b. We also denote p0 = B(p) for every p ∈ P .
One of the main tools in our novel streaming algorithm is also a technique to update an (α, β)-approximation. However, due to memory limitations, our streaming algorithm cannot attach each point to its nearest center, but the distances to the approximated centers are still bounded. We thus generalize the definition of an (α, β)-approximation, which is usually a set B of size βk, to a function B : P → B that assigns each input point to a (not necessarily closest) center in B, while the overall cost is still bounded.
Since an (α, β)-approximation yields a weaker result compared to a PTAS or an α-approximation, it can usually be computed very quickly. Indeed, a very general framework for constructing an (α, β)-approximation for any query space with a small VC-dimension is suggested in [FL11], where α = 1 + ε and β = O(log n).
Reducing an (α, β)-approximation to an α = O(1) approximation. The size of the coreset usually depends on α and β. However, if they are reasonably small (e.g. polynomial in log n), we can reduce the approximation factor and the number of centers in a few phases as follows: (i) Compute an (α, β)-approximation for small (but maybe not constant) α and β.
(ii) Compute an ε-coreset for ε = 1/2 using this approximation. (iii) Compute an O(1) factor
approximation on the coreset. Since the coreset is small, such an approximation algorithm
can run inefficiently, say, in polynomial time if the coreset is of size (log n)O(1) . The resulting
O(1) approximation for the coreset is also an O(1) approximation for the original set, by
the definition of the ε = 1/2 coreset. (iv) Recompute the coreset for the complete (original)
data using the O(1) approximation instead of the (α, β)-approximation to obtain a coreset
of size independent of both n and d.
Assumption 6.4. In what follows we assume that:
• P is a set that is contained in X, where (X, D) is a ρ-metric space and a (φ, ε)-metric space as defined in Definition 6.1.
Algorithm 2: Coreset(P, w, B, m)
Input: A weighted set (P, w) where P ⊆ X and (X, D) is a ρ-metric space, an (α, β)-approximation B : P → B, and a sample size m ≥ 1.
Output: A pair (S, u) that satisfies Theorem 6.6.

1 for each b ∈ B do
2   Set Pb ← {p ∈ P | B(p) = b}
3 for each b ∈ B and p ∈ Pb do
    Set Prob(p) ← w(p)D(p, B(p)) / (2 Σ_{q∈P} w(q)D(q, B(q))) + w(p) / (2|B| Σ_{q∈Pb} w(q))
4 Pick a sample S of at least m points from P such that for each q ∈ S and p ∈ P we have q = p with probability Prob(p).
5 for each p ∈ P do
6   Set u(p) ← w(p) / (|S| · Prob(p))
7 Set u(p) ← 0 for each p ∈ P \ S.    /* Used only in the analysis. */
8 return (S, u)
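A direct transcription of Algorithm 2 might look as follows (the Python names, the distance function D, the (α, β)-approximation map, and the toy input are our illustrative choices). Note that the probabilities of Line 3 sum to exactly 1: half the mass is distributed by each point's contribution to the approximation cost and half uniformly within each cluster.

```python
import random

def sampling_probs(P, w, B_map, D):
    """Prob(p) from Line 3 of Algorithm 2. B_map sends each point to its
    (alpha, beta)-approximation center; assumes the total cost is > 0."""
    B = set(B_map.values())
    total = sum(w[q] * D(q, B_map[q]) for q in P)
    cw = {b: sum(w[q] for q in P if B_map[q] == b) for b in B}
    return {p: w[p] * D(p, B_map[p]) / (2 * total)
               + w[p] / (2 * len(B) * cw[B_map[p]]) for p in P}

def coreset(P, w, B_map, m, D):
    """Sample m points i.i.d. by Prob and reweight as in Lines 4-6."""
    prob = sampling_probs(P, w, B_map, D)
    S = random.choices(P, weights=[prob[p] for p in P], k=m)
    u = {p: w[p] / (m * prob[p]) for p in set(S)}   # u(p) = w(p)/(|S| Prob(p))
    return S, u

# toy input: points on a line, two approximate centers
P = [0.0, 1.0, 9.0, 10.0]
w = {p: 0.25 for p in P}
B_map = {0.0: 0.5, 1.0: 0.5, 9.0: 9.5, 10.0: 9.5}
D = lambda x, y: (x - y) ** 2

prob = sampling_probs(P, w, B_map, D)
S, u = coreset(P, w, B_map, 3, D)
```

The first summand of Prob(p) favors points far from their assigned center, the second guarantees every cluster keeps some sampling mass.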
• (P, w, Q, g) is a query space as defined in (15).
• dk denotes the dimension of f′ as defined in Corollary 5.4.
• c is a sufficiently large constant that can be determined from the proofs of the theorems.
• B is an (α, β)-approximation for (P, w, Q, g) as in Definition 6.3.
• We are given an error parameter ε ∈ (0, 1).
• We are given a maximum probability of failure δ ∈ (0, 1).
We begin with the following claim, which is a simplified and generalized version of a similar claim in [LS10].

Lemma 6.5. For every b ∈ B and p ∈ Pb, we have

w(p)D(p, C) / Σ_{q∈P} w(q)D(q, C) ≤ ραw(p)D(p, p′) / Σ_{q∈P} w(q)D(q, q′) + ρ²(α + 1)w(p) / Σ_{q∈Pb} w(q).
Proof. Put p ∈ P and b ∈ B such that p ∈ Pb. We need to bound

w(p)D(p, C) / Σ_{q∈P} w(q)D(q, C)
≤ ρw(p)D(p, p′) / Σ_{q∈P} w(q)D(q, C) + ρw(p)D(p′, C) / Σ_{q∈P} w(q)D(q, C)
≤ αρw(p)D(p, p′) / Σ_{q∈P} w(q)D(q, q′) + ρw(p)D(p′, C) / Σ_{q∈P} w(q)D(q, C),    (16)

where the first inequality holds by (14), and the second inequality holds since B is an (α, β)-approximation.
To bound the last term, we sum the inequality D(p′, C) ≤ ρ(D(p′, q) + D(q, C)) over every q ∈ Pb to obtain

D(p′, C) · Σ_{q∈Pb} w(q) = Σ_{q∈Pb} w(q)D(p′, C) ≤ Σ_{q∈Pb} w(q) · ρ(D(p′, q) + D(q, C))
= ρ Σ_{q∈Pb} w(q)D(q′, q) + ρ Σ_{q∈Pb} w(q)D(q, C)
≤ ρα Σ_{q∈P} w(q)D(q, C) + ρ Σ_{q∈Pb} w(q)D(q, C)
≤ ρ(α + 1) Σ_{q∈P} w(q)D(q, C).

Dividing by Σ_{q∈Pb} w(q) · Σ_{q∈P} w(q)D(q, C) yields

D(p′, C) / Σ_{q∈P} w(q)D(q, C) ≤ ρ(α + 1) / Σ_{q∈Pb} w(q).

Substituting this in (16) yields the desired result

w(p)D(p, C) / Σ_{q∈P} w(q)D(q, C) ≤ ραw(p)D(p, p′) / Σ_{q∈P} w(q)D(q, q′) + ρ²(α + 1)w(p) / Σ_{q∈Pb} w(q).
Our first theorem suggests a coreset for k-means of size near-quadratic in k and quadratic in 1/ε, based on our improved framework and the last lemma. Existing work [LS10] for obtaining such coresets with only positive weights requires size cubic in k.
Theorem 6.6. Under Assumption 6.4, let t = k · ρ²(α + 1)β. Let (S, u) be the output of a call to algorithm Coreset(P, w, B, m), where

m ≥ (ct/ε²) · (dk log t + log(1/δ)).

Then, with probability at least 1 − δ, (S, u) is an ε-coreset of size m for (P, w, Q, g).
Proof. Let f : P × Q(P) → [0, ∞) be such that for every C ∈ Q(P),

f(p, C) = D(p, C) / Σ_{q∈P} w(q)D(q, C),

and define

s(p) = ραw(p)D(p, p′) / Σ_{q∈P} w(q)D(q, q′) + ρ²(α + 1)w(p) / Σ_{q∈Pb} w(q),

where b is the center with p ∈ Pb. By Lemma 6.5,

Σ_{p∈P} max_{C∈Q(P)} w(p)|f(p, C)| = Σ_{b∈B} Σ_{p∈Pb} max_{C∈Q(P)} w(p)f(p, C) ≤ Σ_{b∈B} Σ_{p∈Pb} s(p) = ρα + |B| · ρ²(α + 1) ∈ O(ρ²(α + 1)βk).

Applying Theorem 5.5 with the query space (P, w, Q, g) then yields the desired bound.
Our second theorem in this section suggests a coreset for k-means of size near-linear in
k by combining new observations with our improved framework.
Theorem 6.7. Under Assumption 6.4, let t = α/φ. Let (S, u) be the output of a call to algorithm Coreset(P, w, B, m), where

m ≥ (ck(t + β)/ε²) · (d log t + log(βk) + log(1/δ)).

Then, with probability at least 1 − δ, (S, u) is an ε-coreset of size m for (P, w, Q, g).
Proof. Let

H = { p ∈ P | |D(p, C) − D(p′, C)| ≤ 2D(p, p′)/φ }.

We need to bound by O(ε) the expression

| Σ_{p∈P} (w(p) − u(p)) · D(p, C) | / Σ_{q∈P} D(q, C)    (17)

≤ | Σ_{p∈H} (w(p) − u(p)) · (D(p, C) − D(p′, C)) | / Σ_{q∈P} D(q, C)    (18)

+ Σ_{b∈B} ( Σ_{q∈Pb} w(q)D(b, C) / Σ_{r∈P} w(r)D(r, C) )    (19)

· | Σ_{p∈Pb} (w(p) − u(p)) · D(p, C) / Σ_{q∈Pb} w(q)D(b, C) − Σ_{p∈Pb∩H} (w(p) − u(p)) · (D(p, C) − D(p′, C)) / Σ_{q∈Pb} w(q)D(b, C) |.    (20)
Put b ∈ B and p ∈ Pb. Then,

Σ_{p∈Pb} (w(p) − u(p)) · D(p, C) / Σ_{q∈Pb} w(q)D(b, C) − Σ_{p∈Pb∩H} (w(p) − u(p)) · (D(p, C) − D(p′, C)) / Σ_{q∈Pb} w(q)D(b, C)

= Σ_{p∈Pb} (w(p) − u(p)) / Σ_{q∈Pb} w(q)    (21)

+ Σ_{p∈Pb\H} (w(p) − u(p)) · (D(p, C) − D(b, C)) / Σ_{q∈Pb} w(q)D(b, C).    (22)
We now prove that, with probability at least 1 − cδ, each of the expressions (18), (21)
and (22) is bounded by 2ε.
Bound on (18): Let h(p, C) = (D(p, C) − D(p′, C)) / Σ_{q∈P} w(q)D(q, C) if p ∈ H, and h(p, C) = 0 otherwise. For every p ∈ P,

w(p) · |h(p, C)| ≤ 2w(p)D(p, p′) / (φ Σ_{q∈P} w(q)D(q, C)) ≤ 2αw(p)D(p, p′) / (φ Σ_{q∈P} w(q)D(q, q′)) ≤ 2αProb(p)/φ.

Hence, using t = 2α/φ in Theorem 5.5, with probability at least 1 − δ,

| Σ_{p∈H} (w(p) − u(p)) · (D(p, C) − D(p′, C)) | / Σ_{q∈P} D(q, C) = | Σ_{p∈P} (w(p) − u(p))h(p, C) | = |h̄(P, w, q) − h̄(S, u, q)| ≤ ε.    (23)
Bound on (21): Let I(p, b) = 1 / Σ_{q∈Pb} w(q) if p ∈ Pb, and I(p, b) = 0 otherwise. We have

max_b w(p)I(p, b) ≤ 2|B|Prob(p),

where Prob(p) is defined in Line 3 of Algorithm 2. Hence Σ_{p∈P} max_b w(p)I(p, b) ≤ 2|B|. Also, each point in S is sampled with probability proportional to 2|B|Prob(p), and for c ≥ 2,

|S| ≥ (2|B|/ε²) · (2 log(2|B|) + log(1/δ)) ≥ (2|B|/ε²) · (log(|B|) + log(|B|/δ)).

Using d = 1 and replacing δ by δ/|B| in Theorem 5.5 yields that S is an ε-coreset for (P, w, {b}, I), with probability at least 1 − δ/|B|. That is,

| Σ_{p∈Pb} (w(p) − u(p)) / Σ_{q∈Pb} w(q) | = |Ī(P, w, b) − Ī(S, u, b)| ≤ ε.    (24)

By the union bound, with probability at least 1 − δ, the last inequality holds for every b ∈ B simultaneously.
Bound on (22): Since (X, D) is a (φ, ε)-metric space,

|D(p, C) − D(b, C)| ≤ D(p, b)/φ + εD(b, C) ≤ max{ 2D(p, b)/φ, 2εD(b, C) }.

Hence

| Σ_{p∈Pb\H} (w(p) − u(p)) · (D(p, C) − D(b, C)) / Σ_{q∈Pb} w(q)D(b, C) | ≤ 2ε Σ_{p∈Pb\H} (w(p) + u(p)) / Σ_{q∈Pb} w(q).    (25)

The last expression is bounded using (24), as

Σ_{p∈Pb\H} (w(p) + u(p)) / Σ_{q∈Pb} w(q) ≤ Σ_{p∈Pb\H} (1 + ε)w(p) / Σ_{q∈Pb} w(q) ≤ 1 + ε.    (26)
Bounding (17): By combining the above bounds we obtain that, with probability at least 1 − 10δ,

| Σ_{p∈P} (w(p) − u(p)) · D(p, C) | / Σ_{q∈P} D(q, C) ≤ ε + (ε + 2ε(1 + ε)) · Σ_{b∈B} ( Σ_{q∈Pb} w(q)D(b, C) / Σ_{r∈P} w(r)D(r, C) ) ≤ cε.

Replacing ε with ε/c, and δ with δ/c, then proves the theorem.
The values of ρ and ψ. Algorithm 2 can be applied to compute a coreset for any given variant of the k-means/median problem, given a set P and a ρ-metric or (ψ, ε)-metric (X, D). The only difference is the size of the required coreset. The parameters ρ and ψ can usually be computed easily using Lemma 6.2. For example, in the case of distances to the power of r ≥ 1, the value of ρ is roughly 2^r and the value of ψ is roughly (r/ε)^r. For most common m-estimators the values are similar, when r is some constant.
The values of α and β can be α = β = O(1) for k-means and all its variants, by first using the generic algorithm in [FL11] for computing an (α, β)-approximation with α = β = O(log n), and then using the technique above for reducing α and β. For bounding the approximation quality of the bi-criteria approximation in [FL11], only the pseudo-dimension of the problem is required, which is usually easy to compute, as explained below.
Dimension d. Unlike the total sensitivity t, the dimension d for numerous problems was already computed in many papers in computational geometry and machine learning (in the context of PAC learning). This includes reductions and connections to similar notions such as the shattering dimension or the VC-dimension of a set. General techniques for computing the dimension of a set, based on the number of parameters needed to define a query or the number of operations needed to answer a query, can be found in the book [AB99].
Dimension of the k-means problem and its variants. Note that, unlike the sensitivity, the dimension is less dependent on the exact type of distance function. For example, using the Euclidean distance or the Euclidean distance to the power of 3 as a variant of the k-means clustering problem does not change the dimension. This is because the set of ranges for both of these problems is the same: subsets of the input points that can be covered by k balls. It is easy to compute the dimension for the k-means problem (the query space (P, w, Q, g) in our paper), as well as for the modified query space for the function f. These bounds can be found in [FL11]. In addition, [FL11] provides a simple reduction that shows that the dimension for k centers is the same as the dimension of 1 center multiplied by k.
In short, for the Euclidean space, the dimension of the k-means/median problem is O(dk), and for metric spaces (graphs) the dimension is O(k log n).
Smaller coreset for k-means queries. Consider the k-means queries in Rd, i.e., the cost is the sum of squared distances Σ_{p∈P} D(p, C) over every point in P to its nearest center in a given set C of k points in Rd. It was proven that projecting P onto an O(k/ε)-dimensional subspace that minimizes its sum of squared distances, known as the low-rank approximation of P, would preserve this sum, for any set of k centers, up to a factor of 1 ± ε. This is in some sense an ε-coreset for P of size n that is not a subset of the input, but of low dimensionality. In particular, this result implies that there is a set of centers (known as a centroid set [HPK07]) that is contained in an O(k/ε)-dimensional space, such that every set of k centers in Rd can be replaced by a set of k centers in the centroid set that would yield the same cost up to a factor of 1 ± ε. In particular, this implies that the dimension of the k-means problem can be reduced to O(k/ε) instead of dk, i.e., independent of d. Combining this result with our paper yields the first coreset for k-means of size independent of d that is a subset of the input (in particular, preserves the sparsity of the input points) and that also supports streaming.
Weak coresets of size independent of d. For the non-Euclidean case or non-squared
distances it seems that it is impossible to obtain a coreset of size independent of d. However,
a coreset as defined above (sometimes called a strong coreset) approximates every query in
the set of queries, while the main application and motivation for constructing a coreset is to
compute the optimal query or its approximation. A weak coreset is a small set that can
be used to give such an approximation. The exact definition of weak coreset also changes
from paper to paper. In particular, a weak coreset for k-means was suggested in [FMS07].
However, to extract the approximated solution from the coreset we must run an exhaustive
search and cannot use existing algorithms or heuristics as in the case of strong coreset.
In this paper, following [FL11], we use a simple and general definition of weak coreset
that is also more practical. Instead of defining a unique (static, global) set Q of queries,
we define Q to be a function that maps every subset (potential coreset) S of P to a set
of queries. A weak coreset S needs only to approximate the queries in Q(S). It turns out
that for many cases the (generalized) dimension of such a query space is much
smaller compared to the traditional case where Q(S) = Q(P ) is the same for every subset
S ⊆ P.
To be able to use this property in our existing proofs, we require a monotonicity property:
that the set Q(T ) of queries that are assigned to a subset T ⊂ S must be contained in Q(S).
If we can prove that for every S ⊆ P , the set Q(S) contains a (1 + ε)-approximation to both
the optimal solution of S and P , then we can extract such an approximation from S. For
k-means, k-median and their variants, it was proven in [SV07] that the optimal k centers of a
set S can be approximated by a set that is spanned by O(k/ε) points in S. By defining
Q(S) to be the union of the optimal center of P with all the centers that are spanned by
O(k/ε) points in S, we get that S is a weak coreset. Note that the definition of Q can be
explicit (without knowing the optimal center of P ) and is needed only to bound its dimension
as in Definition 4.5.
Extracting a (1 + ε)-approximation from the coreset. It was proven in [FL11] that
for problems such as k-median such weak coresets have dimension O(k/ε), i.e., independent
of d. Unlike [FL11], we suggest here a very simple way to extract the approximated solution
from the coreset: compute the weighted coreset (S, u) (of size independent of d) as defined in
Algorithm 2, and then use any given algorithm to compute a (1 + ε) approximation set C of
k centers on the coreset (or any other trusted heuristic that we hope computes such a set C).
Since C is not necessarily spanned by few points in S, it may not be a good approximation for
the original set P and we should not return it. However, the proof in [SV07] is constructive
and yields a near-linear time algorithm that, given such an approximate solution set C,
computes another (1 + O(ε))-approximation C 0 with the additional property that C 0
is spanned by O(k/ε) points in S. Hence, C 0 is both a near-optimal solution to S and in
Q(S), so it must be a (1 + ε)-approximation for the optimal solution of P .
7 Appendix A: Merge and Reduce Tree
We now briefly introduce the previous technique for maintaining coresets in the streaming
setting due to Har-Peled and Mazumdar [HPM04] and Bentley and Saxe [BS80]. In this
method, a merge-and-reduce tree is built by using an offline coreset construction as a black
box. Previously, merge-and-reduce was the only known technique for building a streaming
coreset for metric k-median, and it relies solely on the following two properties:
1. Merge: The union of (k, ε)-coresets is a (k, ε)-coreset.
2. Reduce: A (k, ε)-coreset of a (k, δ)-coreset is a (k, ε + δ)-coreset.
The merge-and-reduce tree works as follows. There are buckets Bi for i ≥ 0. In each step,
the bucket B0 takes in a segment of O(1) points from the stream. Then the tree works like
counting in binary: whenever buckets B0 to Bi−1 are full, these i buckets are merged and
then reduced by taking a (k, ε/ log n)-coreset and storing the result in Bi .
Let s be the space of the offline construction, which depends on ε as ε−a . At the end of
the stream, O(log n) buckets have been used and each bucket uses O(s loga n) space; this
incurs a multiplicative overhead of Θ(loga+1 n) in the storage requirement. The second factor
comes from using the accuracy parameter ε/ log n, which is necessary by Property 2 since the
construction will be compounded O(log n) times. Due to this compounding, the runtime is
multiplied by a factor of O(log n).
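The bucket mechanics above can be sketched in Python. This is only an illustrative sketch, not the construction of [HPM04]: `offline_coreset` stands for a hypothetical black-box routine, and for simplicity every reduce step is called with the same accuracy `eps` (the real construction must use ε/ log n at each level, as explained above).

```python
def merge_reduce_stream(stream, offline_coreset, eps, segment_size=4):
    """Maintain a coreset over a stream via merge-and-reduce.

    buckets[i] holds a coreset representing 2**i segments of the stream.
    Like counting in binary, a full run of lower buckets is merged and
    then reduced into the next level ("carry").
    """
    buckets = []   # buckets[i] is None (empty) or a list of points
    segment = []   # points not yet forming a full segment
    for p in stream:
        segment.append(p)
        if len(segment) < segment_size:
            continue
        # Reduce the fresh segment, then carry it up through full buckets.
        carry = offline_coreset(segment, eps)
        segment = []
        i = 0
        while i < len(buckets) and buckets[i] is not None:
            carry = offline_coreset(buckets[i] + carry, eps)  # merge + reduce
            buckets[i] = None
            i += 1
        if i == len(buckets):
            buckets.append(None)
        buckets[i] = carry
    # The current coreset is the union of all non-empty buckets
    # plus the points of the partial segment.
    result = list(segment)
    for b in buckets:
        if b is not None:
            result.extend(b)
    return result
```

With the identity "coreset" (which keeps every point), the output is exactly the stream, which makes the bucket bookkeeping easy to check; a real `offline_coreset` would shrink each bucket to size s.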
8 Appendix B: General Streaming Reduction
We present a general technique for converting an offline coreset construction to a streaming coreset construction with O(log n) overhead. Given a ρ-metric space (X, D) (recall
Definition 6.1), we build a query space (P, w, g, Q) in the same way as in Definition 6.3:
Q(P ) = {C ⊆ X | |C| ≤ k} and g(p, C) = D(p, C) := minc∈C D(p, c). Here k is a positive
integer that denotes how many centers may be used for the clustering.
Our bicriterion algorithm is an adjustment to the algorithms of [BMO+ 11] and [CCF]
with the following important difference: our bicriterion is online, so we do not delete and
reassign centers as in [BMO+ 11]. This “online” property is critical for the algorithm to work
and is one of the main technical ideas. Although a fully online bicriterion can require linear
space, we maintain a division of the stream P into a prefix R and a suffix P \ R such that
our bicriterion is online on the suffix P \ R and the prefix R can be largely ignored. To
maintain this property of being online for the suffix, we incur only a modest space increase
from O(k log n) to O(log(1/ε) k log n).
After having an online bicriterion (it is further explained below why this property is essential), the offline coreset algorithms perform a non-uniform sampling procedure with carefully
chosen probabilities that are defined by the bicriterion. Equipped with our new bicriterion
algorithm, implementing the sampling procedure is a rather straightforward computation,
which is explained in Section 8.2. As a result, we can implement any sampling-based coreset
algorithm for k-median without merge-and-reduce and in one pass. As such it is applicable
to several coreset constructions (such as k-means and other M -estimators). In addition,
we believe that our methods will work with other objective functions as well, such as
(k, j)-subspace, and we hope that future work will investigate these directions.
Many clustering algorithms [COP03, GMM+ 03, BMO+ 11] maintain a weighted set (B, u)
of points (which are selected using a facility-location algorithm). Upon arrival of an update
(p, w(p)) from the stream, this update is added to the set by u(p) ← u(p) + w(p).
In these algorithms, only a single operation is performed on (B, u) which we call MOVE.
For two points p, p0 ∈ B with weights u(p) and u(p0 ), the function MOVE(p, p0 ) does the
following: u(p0 ) ← u(p0 ) + u(p) and u(p) ← 0. This essentially moves the weight at location
p to the location of p0 . The motivation for building (B, u) will be compression; (B, u) will
be maintained over the stream P in such a way that |B| = O(log |P |).
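The two operations performed on (B, u) can be sketched directly. A minimal Python sketch, assuming points are hashable and weights are numeric:

```python
def add_update(u, p, w):
    """Arrival of (p, w(p)) from the stream: u(p) <- u(p) + w(p)."""
    u[p] = u.get(p, 0) + w

def move(u, p, p2):
    """MOVE(p, p2): shift all weight stored at p onto p2."""
    u[p2] = u.get(p2, 0) + u.get(p, 0)
    u[p] = 0
```

Note that MOVE preserves the total weight of the set; only locations change, which is what makes the Earth-Mover-distance accounting below possible.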
Throughout this section, we will assume for ease of exposition that each point in the
stream is from a distinct location. This simplifies the analysis, allowing B to take only a
single value for each input point. The algorithm works for general inputs without modification, requiring only a more careful notation for the analysis. Additionally, we state the
algorithm for unweighted input (where w(p) = 1 for all p ∈ P ) and the parameter n is
defined as |P |. We still include w(p) throughout the algorithm and analysis, as the analysis
generalizes to weighted inputs where the parameter n is replaced by the sum of all weights
(after normalizing the minimally weighted point to 1).
8.1 An Algorithm for building a coreset
First, let us describe how the algorithm of [BMO+ 11] works. We will modify this algorithm
as part of our coreset construction. In this summary, we alter the presentation from that
of [BMO+ 11] to more fluidly transition to our modified version, but the algorithm remains the
same. [BMO+ 11] operates in phases i ≥ 1. This means that the algorithm maintains a phase
number i (used internally by the algorithm), beginning in phase i = 1. As the stream arrives,
the algorithm may decide to increment the phase number. Let (Ri , wi ) denote the prefix of
the input received before the end of phase i, and let OPTk (Ri ) denote the minimal value of
Figure 1: A bicriterion approximation. Although connections are not optimal, the sum of
all connection costs is O(OPT).
ḡ(Ri , wi , C) over every C ∈ Q. When phase i + 1 begins, a value Li+1 is declared on Line 27
as a lower-bound for the current value of OPTk (Ri+1 ). The algorithm has computed (Mi , ui ),
which we inductively assume is a bicriterion approximation for Ri (more precisely, a map
B : Ri → Mi such that ∪p∈Ri (B(p), w(p)) = (Mi , ui )). However, to maintain polylogarithmic
space the algorithm pushes (Mi , ui ) to the beginning of the stream and restarts the bicriterion
construction. This means that the algorithm, at this point, restarts by viewing the stream as
(P, w − wi + ui ) (i.e. replacing (Ri , wi ) with (Mi , ui )). Continuing in this way, the algorithm
maintains a bicriterion (Mi+1 , ui+1 ) for (Ri+1 , wi+1 − wi + ui ) (which is also a bicriterion for
(Ri+1 , wi+1 ) by Theorem 8.2) until the next phase change is triggered.
Now we explain our modifications to [BMO+ 11] (see Algorithm 3). The first step is
that the bicriterion our algorithm builds must be determined “online” in the following sense:
upon receiving a point (x, w(x)), the value of B(x) must be determined (and never be altered)
before receiving the next point from the stream.
This generalization is necessary for the following reason. Suppose we connect an element
p to a center b1 . Later in the stream, we open a new center b2 that becomes the closest
center to p. However, using polylogarithmic space, we have already deleted b1 and/or p from
memory and the state of our algorithm is identical to the case where b1 remains the closest
center to p. Therefore the connections must be immutable, and this results in non-optimal
connections.
Definition 8.1. An online [α, β]-bicriterion is an algorithm that maintains a bicriterion
(B, t) over a stream X, and operates online in the following sense. Upon arrival of each
point p, it is immediately decided whether p is added to B, and then t(p) is determined. Both
of these decisions are permanent.
Upon receiving an update (p, w(p)) from the stream, the algorithm may call MOVE(p, p0 )
for the nearest p0 ∈ B to p. In the analysis we use the function B : P → B that maps
each point p to its immediate location after this initial move (either p itself, or p0 ). If future
moves are performed on p, this does not change the value B(p). B is not stored by the
algorithm due to space constraints; only the value of B(p) for the most recent point p is used
by the algorithm, and older values are used in the analysis only. We will show that B is a
(O(1), O(log n))-approximation that allows us to maintain a coreset over the stream.
We now state Algorithm 3. φ and γ are constants (dependent on ρ) used in the analysis
that are defined as in [BMO+ 11]. Each point has a flag that is either raised or lowered (given
by the F lag function). All points have their flag initially lowered, given on Line 1. A lowered
flag shows that the point is being read for the first time (being received from the stream),
and a raised flag shows that the point is being re-read by the algorithm (having been stored
in memory).
On Line 22, (Mi , ui ) is the weighted set (B, u) as it exists at the end of phase i. We
define the cost of MOVE(p, p0 ) to be w(p)D(p, p0 ). The value Ki is therefore the total cost of
all moves performed in phase i.
At a phase change, the set (B, u) is pushed onto the beginning of the stream with all
points having a raised flag. This means that during the next |B| iterations of the outer-loop
where a point (x, w(x)) is received we actually receive a point from memory. We continue
to process the stream after these |B| points are read.
The following theorem summarizes the guarantees of this algorithm. We note that, as
stated, the algorithm’s runtime of O(nk log n) is not correct; in fact the number of phases
may be arbitrarily large. However, using the same technique as detailed in Section 3.3
of [BMO+ 11], the number of phases can be bounded by O(n/(k log n)) while requiring
O(k 2 log2 n) time per phase.
Theorem 8.2 ([BMO+ 11]). Let Algorithm 3 process a stream (P, w) of at most n points.
Let Ri denote the prefix of the stream received before the end of phase i, and let Ki and
Li be their final values from the algorithm (which are never modified after phase i). With
probability at least 1 − 1/n, the following statements all hold after processing each point:
1. For every phase i, Σj≤i Kj ≤ (ρφγ/(φ − ρ)) OPTk (Ri )
2. The total runtime is O(nk log n)
3. For every phase i, Li ≤ OPTk (Ri ) ≤ φLi ≤ Li+1
4. At the execution of Line 22, Mi consists of O(k log n) points
Part 1 of the preceding theorem, which bounds the total cost of all moves performed
by the algorithm, also serves as an upper bound on the Earth-Mover distance between the
stream (P, w) and the maintained set (B, u).
Definition 8.3 (Earth-Mover Distance). Let (A, wA ) and (B, wB ) be weighted sets in (X, D).
Moreover, let them be of equal weight in the sense that Σa∈A wA (a) = Σb∈B wB (b). Define a
“movement” from (A, wA ) to (B, wB ) to be a weighted set (Y, v) in (X × X, D) such that
∪(a,b)∈Y (a, v(a, b)) = (A, wA ) and ∪(a,b)∈Y (b, v(a, b)) = (B, wB ). Then dEM (A, wA , B, wB ) is
the minimum of Σ(a,b)∈Y v((a, b))D(a, b) over all movements Y from (A, wA ) to (B, wB ).
Another way to view the preceding definition is in terms of probability distributions.
Over all joint distributions over A × B with marginal distributions (A, wA ) and (B, wB ), we
seek the minimum possible cost (as defined) for any such joint distribution.
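As a concrete special case (a standard fact about one-dimensional optimal transport, not stated in the paper): for two equal-size sets of unit-weight points on the real line, the monotone matching is an optimal movement, so dEM can be computed by sorting.

```python
def emd_1d_unit(A, B):
    """Earth-Mover distance between two equal-size multisets of
    unit-weight points on the real line. In one dimension the
    monotone (sorted) matching is an optimal movement."""
    assert len(A) == len(B), "sets must have equal total weight"
    return sum(abs(a - b) for a, b in zip(sorted(A), sorted(B)))
```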
Algorithm 3: Input: integer k, ρ-metric space (X, D), stream (P, w) of n weighted
points from (X, D). Output: B(x) after receiving each point x, and a weighted set
(Mi , ui ) after each phase i ≥ 1
1   Flag(x) ← 0 for all x ∈ P
2   L1 ← minimum D(x, y) for any x, y in the first k distinct points
3   i ← 1
4   K1 ← 0
5   B ← ∅
6   u(x) ← 0 for all x
7   for each point (x, w(x)) received do
8       u(x) ← u(x) + w(x)
9       y ← argmin_{y∈B} D(x, y) (break ties arbitrarily)
10      I ← 1 with probability min{w(x)D(x, y)/(Li /(k(1 + log2 n))), 1}, otherwise I ← 0
11      if I then
12          if Flag(x) = 0 then
13              B(x) ← x
14          B ← B ∪ {x}
15      else
16          Ki ← Ki + w(x)D(x, y)
17          u(y) ← u(y) + u(x)   /* Step 1 of MOVE(x, y) */
18          u(x) ← 0             /* Step 2 of MOVE(x, y) */
19          if Flag(x) = 0 then
20              B(x) ← y
21      if Ki > γLi or |B| > (γ − 1)(1 + log2 n)k then
22          (Mi , ui ) ← (B, u)
23          Flag(b) ← 1 for all b ∈ B
24          Push (B, u) onto the stream (P, w) before the next point to read
25          B ← ∅
26          u(x) ← 0 for all x
27          Li+1 ← φLi
28          q ← 0
29          Ki+1 ← 0
30          i ← i + 1
From now on we will write a weighted set A instead of (A, wA ) when the meaning is clear.
If we start with a set A0 and apply n operations of MOVE until it becomes the set An , we can
provide an upper-bound for dEM (A0 , An ) by summing dEM (Ai , Ai+1 ) for 0 ≤ i < n. This is
a direct application of the triangle inequality. And if Ai+1 is obtained from Ai by applying
MOVE(p, p0 ), then dEM (Ai , Ai+1 ) = D(p, p0 )w(p), the cost of this move.
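A sketch of this bookkeeping: applying the moves while accumulating their costs yields an upper bound on dEM between the initial and final weighted sets, by the triangle-inequality argument above. Here `dist` is a stand-in for the distance D, and the weighted set is a plain dict.

```python
def run_moves(u, moves, dist):
    """Apply a sequence of MOVE(p, p2) operations to the weighted set u.

    Returns the total cost sum of w(p) * D(p, p2) over all moves, which
    upper-bounds the Earth-Mover distance between the initial and final
    weighted sets (triangle inequality over the intermediate sets)."""
    total = 0.0
    for p, p2 in moves:
        total += u.get(p, 0) * dist(p, p2)  # cost of this MOVE
        u[p2] = u.get(p2, 0) + u.get(p, 0)
        u[p] = 0
    return total
```

For a single move the bound is tight: moving weight 1 from location 0 to location 5 has cost exactly 5, which equals dEM of the two resulting sets.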
The Earth-Mover distance is important for clustering problems for the following reason. For any weighted sets (P, w) and (B, u) and query C, |ḡ(P, w, C) − ḡ(B, u, C)| ≤
dEM (P, w, B, u). This is immediate from a repeated application of the triangle inequality
(proofs are found in Theorem 2.3 of [GMM+ 03] as well as in [COP03, BMO+ 11, Guh09]).
Theorem 8.4. There exists an algorithm that stores O(log(1/ε) k log n) points and maintains for
every prefix (R, w0 ) of the stream (P, w): (1) a weighted set (M, u) such that dEM (M, u, R, w0 ) ≤
ε OPTk (R), and (2) a (O(1), O(log(1/ε) log n))-bicriterion B for (R, w0 ). Moreover, this bicriterion B is computed online in the sense that B(x) is determined upon receiving x from the
stream.
Proof. The algorithm will consist of running Algorithm 3 and storing certain information
for the λ most recent phases (where λ depends on ε).
Let i be the current phase number, and let (R, w0 ) be the points received so far. We remind
the reader that when the meaning is clear we suppress notation and write R for the weighted
set (R, w0 ), and likewise for (M, u). Define λ = 2 + ⌈logφ (ργ/((φ − ρ)ε))⌉. The prefix R and the set
M in the statement of the theorem will be Ri−λ and Mi−λ . To upper bound dEM (R, M ),
which by definition is the minimum cost of any movement from R to M , we note that one
movement is the set of moves carried out by the algorithm through phase i − λ, whose cost is
Σj=1..i−λ Kj . Part 1 of Theorem 8.2 then shows that dEM (R, M ) ≤ Σj=1..i−λ Kj ≤
(ρφγ/(φ − ρ)) OPTk (Ri−λ ).
Combining Statements 3 and 4 of the theorem shows that OPTk (Ri−λ ) ≤ φ1−λ OPTk (R).
Therefore dEM (R, M ) ≤ (ρφγ/(φ − ρ)) φ1−λ OPTk (R) ≤ ε OPTk (R) as desired.
As for the second statement of the theorem, the algorithm defines the map B and we are
currently interested in the restriction of B to R \ Ri−λ . B maps to at most O(k log n) points
per phase; this is guaranteed by Statement 4 (a direct result of the phase-change condition on
Line 21). Over the last λ phases, this then maps to O(λk log n) = O(log(1/ε) k log n) points.
Therefore we have that β = O(log(1/ε) log n). And the value of α is immediate from Statement 1
of Theorem 8.2, since B incurs only a subset of the costs as Kj (the subset that comes from
the new portion of the stream Rj \ Rj−1 ).
As a final note, we do not store B(P ) in memory (it may be linear in space). For
algorithms in the following sections it is only required to know its value for the most recent
point received; previous values are used only in the analysis.
8.2 Maintaining a coreset over the stream
In the previous section, we presented an algorithm that maintains an (α, β)-approximation of
the stream (this is given by the function B : P → B). In this section we show how we can use
this approximation to carry out the coreset construction of Algorithm 2 on an insertion-only
stream. In the offline construction, a sample S of m points is taken from X according to a
distribution where point p is sampled with probability depending on D(p, B(p)), |B|, and nB(p)
(the total weight of points connected to the center B(p), which is written as Σq∈Pb w(q) where
b = B(p)); the specific formula for the probability is written below (and comes from Line 3 of
Algorithm 2). All three of these quantities can easily be maintained over the stream using
Algorithm 4 since B is online (i.e. B(p) never changes).
Prob(p) = w(p)D(p, B(p)) / (2 Σq∈P w(q)D(q, B(q))) + w(p) / (2|B| Σq∈Pb w(q))
Upon receiving a point p, we assign r(p) a uniform random number in the interval (0, 1).
This is the threshold for keeping p in our sample S - we keep the point p if and only if
r(p) < P rob(p). For each point p, P rob(p) is non-increasing as the stream progresses (this is
immediate from the formula and from the fact that clusters never decrease in size since B is
online). Therefore after receiving point p, we update P rob(s) for each s ∈ S and delete any
such s that drop below their threshold: r(s) ≥ P rob(s). Once a point crosses the threshold,
it may be deleted since P rob(s) is non-increasing and so it will remain below the threshold
at all future times. In this way, the construction exactly matches the output as if the offline
Algorithm 2 had been used.
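The thresholding idea can be sketched as follows. This is an illustrative simplification, not Algorithm 4 itself: the online assignment B is passed as a hypothetical function, the sensitivity formula follows the expression above, and Prob(s) is recomputed from scratch for every stored point instead of being maintained incrementally.

```python
import random

def stream_sample(stream, B, dist):
    """Maintain S = {p : r(p) < Prob(p)} over a stream of (point, weight)
    pairs. Prob(p) is non-increasing as the stream grows (the total cost
    and the cluster weights only increase), so deletions are permanent."""
    S = {}               # p -> (r(p), w(p)); kept while r(p) < Prob(p)
    total_cost = 0.0     # running sum of w(q) * D(q, B(q))
    cluster_w = {}       # center b -> total weight of points with B(q) = b

    def prob(p, w):
        b = B(p)
        first = w * dist(p, b) / (2 * total_cost) if total_cost > 0 else 0.0
        return first + w / (2 * len(cluster_w) * cluster_w[b])

    for p, w in stream:
        b = B(p)
        total_cost += w * dist(p, b)
        cluster_w[b] = cluster_w.get(b, 0.0) + w
        S[p] = (random.random(), w)   # draw the threshold r(p) once
        # Prob only decreases, so a point that falls below its threshold
        # can be discarded forever.
        for s in list(S):
            r, ws = S[s]
            if r >= prob(s, ws):
                del S[s]
    return set(S)
```

Whatever the random draws, the output is always a subset of the stream's points, matching the output distribution of the offline sampler run on the full input.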
Algorithm 4: Input: integer k, ρ-metric space (X, D), stream P of n points from
(X, D), online bicriterion B, sample size m ≥ 1. Output: a coreset S
1   S ← ∅
2   for each point p that arrives do
3       Assign r(p) a uniform random number in (0, 1)
4       S ← S ∪ {p}
5       Compute Prob(p) according to Line 3 of Algorithm 2
6       for each point s ∈ S do
7           Update Prob(s)
8           if Prob(s) ≤ r(s) then
9               Delete s from S
10      Update u(s) according to Line 6 of Algorithm 2
Algorithm 3 provides the function B and a weighted set Mi−λ . Beginning in phase i − λ,
we will begin running Algorithm 4. This outputs the sample S for phases i − λ + 1 until the
current phase i. The following theorem shows that Mi−λ ∪ Si is a coreset for the stream. Of
course, we need to do this construction for each of the λ = O(log(1/ε)) most recent phases, so
the space gets multiplied by this factor.
We return to Definition 6.2 of an r-Log-Log Lipschitz function D̃. Given a metric space
(X, dist), the space (X, D) where D = D̃(dist) is a ρ-metric space for ρ = max{2r−1 , 1}. It
is well-known that most M -estimators can be recast as a Log-Log Lipschitz function for a
low constant value of r. For example, k-means has r = 2.
Theorem 8.5. There exists a single-pass streaming algorithm requiring O(ε−O(1) k log n(log n +
log k + log(1/δ))) space that maintains a (k, ε)-coreset for the k-median clustering of an r-Log-Log-Lipschitz function D̃ on a stream of at most n elements with probability at least 1 − δ.
Proof. Running Algorithm 3 (producing the Mi ) and Algorithm 4 (producing the Si ) in
parallel, we will show that Mi−λ ∪ Si is the desired coreset of the stream.
We already have by Theorem 8.4 that dEM (Mi−λ , Ri−λ ) ≤ ε OPTk (P ) ≤ ε ḡ(P, w, C) for
any set C of size k. Also, by Theorem 6.6 we have that Si is a (k, ε)-coreset for P \
Ri−λ . This is because (in the statement of Theorem 6.6) we are required to carry out the
construction of Algorithm 2 using m ≥ (ck(t + β)/ε2 )(d log t + log(βk) + log(1/δ)), where t = α/ψ.
It was shown in Lemma 6.2 that ψ = (ε/r)r . We are using the (O(1), O(log(1/ε) log n))-approximation B, and in [FL11] it is shown that d = O(log n) for k-median in a ρ-metric
space. So therefore the minimal possible value of m that satisfies the hypotheses of the
theorem is O((k(ε−r + log n)/ε2 )(log n log ε−r + log(k log n) + log(1/δ))). Simplifying notation,
this is O(ε−O(1) k log n(log n + log k + log(1/δ))).
We write COST(A, C) to denote ḡ(A, wA , C) to simplify notation; by A ∪ B we mean
(A ∪ B, wA + wB ). Taking the union, we get that |COST(P, C) − COST(Mi−λ ∪ Si , C)| ≤
|COST(Ri−λ , C) − COST(Mi−λ , C)| + |COST(P \ Ri−λ , C) − COST(Si , C)|. The first term is
upper-bounded by ε COST(P, C) via the Earth-Mover distance, and the second term is upper-bounded by
ε COST(P \ Ri−λ , C) since Si is a (k, ε)-coreset for P \ Ri−λ . So therefore Mi−λ ∪ Si is a
2ε-coreset for the stream P , and the proof is complete after rescaling ε.
The previous theorem has, as special cases, streaming coresets for k-median, k-means,
Lp , Cauchy estimators, Tukey estimators, etc. This is the first algorithm that does not use
merge-and-reduce for building coresets over streams for any of these problems. Moreover,
the constant in the exponent of ε is small; for the example of k-median the dependence is
ε−3 log(1/ε).
References
[AB99]
M. Anthony and P. L. Bartlett. Neural Network Learning: Theoretical Foundations. Cambridge University Press, Cambridge, England, 1999.
[AHPV04] P. K. Agarwal, S. Har-Peled, and K. R. Varadarajan. Approximating extent
measures of points. Journal of the ACM, 51(4):606–635, 2004.
[AMR+ 12] Marcel R. Ackermann, Marcus Märtens, Christoph Raupach, Kamil Swierkot,
Christiane Lammersen, and Christian Sohler. Streamkm++: A clustering algorithm for data streams. J. Exp. Algorithmics, 17:2.4:2.1–2.4:2.30, May 2012.
[BF15]
Artem Barger and Dan Feldman. k-means for streaming and distributed big
sparse data. SDM’16 and arXiv preprint arXiv:1511.08990, 2015.
[BLK15]
Olivier Bachem, Mario Lucic, and Andreas Krause. Coresets for nonparametric estimation-the case of dp-means. In Proceedings of the 32nd International
Conference on Machine Learning (ICML-15), pages 209–217, 2015.
[BMO+ 11] Vladimir Braverman, Adam Meyerson, Rafail Ostrovsky, Alan Roytman,
Michael Shindler, and Brian Tagiku. Streaming k-means on well-clusterable
data. In Proceedings of the Twenty-second Annual ACM-SIAM Symposium on
Discrete Algorithms, SODA ’11, pages 26–40. SIAM, 2011.
[BS80]
Jon Louis Bentley and James B. Saxe. Decomposable searching problems I:
static-to-dynamic transformation. Journal of Algorithms, 1(4):301–358, 1980.
[CEM+ 15] Michael B Cohen, Sam Elder, Cameron Musco, Christopher Musco, and
Madalina Persu. Dimensionality reduction for k-means clustering and low rank
approximation. In Proceedings of the Forty-Seventh Annual ACM on Symposium
on Theory of Computing, pages 163–172. ACM, 2015.
[Che08]
Ke Chen. A constant factor approximation algorithm for k -median clustering
with outliers. In Shang-Hua Teng, editor, SODA, pages 826–835. SIAM, 2008.
[Che09a]
Ke Chen. On coresets for k-median and k-means clustering in metric and Euclidean spaces and their applications. SIAM Journal on Computing, 39(3):923–947, 2009.
[Che09b]
Ke Chen. On coresets for k-median and k-means clustering in metric and Euclidean spaces and their applications. SIAM J. Comput., 39(3):923–947, August 2009.
[COP03]
Moses Charikar, Liadan O’Callaghan, and Rina Panigrahy. Better streaming
algorithms for clustering problems. In Proceedings of the Thirty-fifth Annual
ACM Symposium on Theory of Computing, STOC ’03, pages 30–39, New York,
NY, USA, 2003. ACM.
[DDH+ 07] Anirban Dasgupta, Petros Drineas, Boulos Harb, Ravi Kumar, and Michael W.
Mahoney. Sampling algorithms and coresets for lp regression. CoRR,
abs/0707.1714, 2007.
[FFK11]
Dan Feldman, Matthew Faulkner, and Andreas Krause. Scalable training of
mixture models via coresets. In Advances in Neural Information Processing
Systems, pages 2142–2150, 2011.
[FL11]
Dan Feldman and Michael Langberg. A unified framework for approximating
and clustering data. In Proceedings of the Forty-third Annual ACM Symposium
on Theory of Computing, STOC ’11, pages 569–578, New York, NY, USA, 2011.
ACM.
[FMS07]
D. Feldman, M. Monemizadeh, and C. Sohler. A PTAS for k-means clustering
based on weak coresets. In SoCG, 2007.
[FS05]
G. Frahling and C. Sohler. Coresets in dynamic geometric data streams. In Proc.
37th Annu. ACM Symp. on Theory of Computing (STOC), pages 209–217, 2005.
[FS12]
Dan Feldman and Leonard J. Schulman. Data reduction for weighted and outlier-resistant clustering. In Proceedings of the twenty-third annual ACM-SIAM symposium on Discrete Algorithms, pages 1343–1354. SIAM, 2012.
[FSS13]
Dan Feldman, Melanie Schmidt, and Christian Sohler. Turning big data into
tiny data: Constant-size coresets for k-means, pca and projective clustering. In
Proceedings of the Twenty-Fourth Annual ACM-SIAM Symposium on Discrete
Algorithms, pages 1434–1453. SIAM, 2013.
[GMM+ 03] Sudipto Guha, Adam Meyerson, Nina Mishra, Rajeev Motwani, and Liadan
O’Callaghan. Clustering data streams: Theory and practice. IEEE Trans. on
Knowl. and Data Eng., 15(3):515–528, March 2003.
[Guh09]
Sudipto Guha. Tight results for clustering and summarizing data streams. In
Proceedings of the 12th International Conference on Database Theory, ICDT ’09,
pages 268–275, New York, NY, USA, 2009. ACM.
[HHR11]
Frank Hampel, Christian Hennig, and Elvezio Ronchetti. A smoothing principle
for the Huber and other location M-estimators. Computational Statistics & Data
Analysis, 55(1):324–337, 2011.
Analysis, 55(1):324 – 337, 2011.
[HP11]
Sariel Har-Peled. Geometric approximation algorithms, volume 173. American
mathematical society Providence, 2011.
[HPK07]
S. Har-Peled and A. Kushal. Smaller coresets for k-median and k-means clustering. Discrete Comput. Geom., 37(1):3–19, 2007.
[HPM04]
S. Har-Peled and S. Mazumdar. On coresets for k-means and k-median clustering. In STOC, 2004.
[Hub81]
P. J. Huber. Robust statistics. 1981.
[LBK15]
Mario Lucic, Olivier Bachem, and Andreas Krause. Strong coresets for hard
and soft bregman clustering with applications to exponential family mixtures.
CoRR, abs/1508.05243, 2015.
[LLS00]
Yi Li, Philip M. Long, and Aravind Srinivasan. Improved bounds on the sample
complexity of learning. In SODA, pages 309–318, 2000.
[LLS01]
Y. Li, P. M. Long, and A. Srinivasan. Improved bounds on the sample complexity
of learning. Journal of Computer and System Sciences (JCSS), 62, 2001.
[LS10]
M. Langberg and L. J. Schulman. Universal ε approximators for integrals. To appear in proceedings of ACM-SIAM Symposium on Discrete Algorithms (SODA),
2010.
[Mat89]
Jiří Matoušek. Construction of epsilon nets. In Proceedings of the fifth annual
symposium on Computational geometry, pages 1–10. ACM, 1989.
[PKB14]
Dimitris Papailiopoulos, Anastasios Kyrillidis, and Christos Boutsidis. Provable
deterministic leverage score sampling. In Proceedings of the 20th ACM SIGKDD
international conference on Knowledge discovery and data mining, pages 997–
1006. ACM, 2014.
[SV07]
Nariankadu D Shyamalkumar and Kasturi Varadarajan. Efficient subspace approximation algorithms. In SODA, volume 7, pages 532–540. Citeseer, 2007.
[Tyl87]
David E Tyler. A distribution-free m-estimator of multivariate scatter. The
Annals of Statistics, pages 234–251, 1987.
[VC71]
V. N. Vapnik and A. Y. Chervonenkis. On the uniform convergence of relative
frequencies of events to their probabilities. Theory Prob. Appl., 16:264–280,
1971.
[Zha11]
Z. Zhang. M-estimators. http://research.microsoft.com/en-us/um/people/zhang/INRIA/Publis/Tutorial-Estim/node20.html, [accessed July 2011].
Crowd ideation of supervised learning problems
James P. Bagrow1,2,*
1 Department of Mathematics & Statistics, University of Vermont, Burlington, VT, United States
2 Vermont Complex Systems Center, University of Vermont, Burlington, VT, United States
* Corresponding author. Email: [email protected], Homepage: bagrow.com
arXiv:1802.05101v1 [cs.HC] 14 Feb 2018
February 14, 2018
Abstract
Crowdsourcing is an important avenue for collecting machine learning data, but crowdsourcing can go beyond simple data collection by employing the creativity and wisdom of crowd
workers. Yet crowd participants are unlikely to be experts in statistics or predictive modeling, and
workers. Yet crowd participants are unlikely to be experts in statistics or predictive modeling, and
it is not clear how well non-experts can contribute creatively to the process of machine learning.
Here we study an end-to-end crowdsourcing algorithm where groups of non-expert workers propose
supervised learning problems, rank and categorize those problems, and then provide data to train
predictive models on those problems. Problem proposal includes and extends feature engineering because workers propose the entire problem, not only the input features but also the target variable. We
show that workers without machine learning experience can collectively construct useful datasets and
that predictive models can be learned on these datasets. In our experiments, the problems proposed
by workers covered a broad range of topics, from politics and current events to problems capturing
health behavior, demographics, and more. Workers also favored questions showing positively correlated relationships, which has interesting implications given that many supervised learning methods
perform just as well with strong negative correlations. Proper instructions are crucial for non-experts,
so we also conducted a randomized trial to understand how different instructions may influence the
types of problems proposed by workers. In general, shifting the focus of machine learning tasks
from designing and training individual predictive models to problem proposal allows crowdsourcers
to design requirements for problems of interest and then guide workers towards contributing to the
most suitable problems.
Keywords— citizen science; novel data collection; Amazon Mechanical Turk; top-k ranking; randomized
control trial
1 Introduction
We study how to combine crowd creativity with supervised learning. Crowdsourcers often use crowd workers
for data collection and active learning, but it is less common to ask workers for creative input as part of their
tasks. Indeed, crowdsourcing is typically not conducive to creative processes. Creativity is often enabled by
autonomy and freedom [2], yet crowdsourcing, with its focus on the requirements of the crowdsourcer and
the drive towards small microtasks, is generally not suitable for autonomy or freedom. This tension poses
important crowdsourcing research challenges: what is the best and most efficient way to guide workers to
creative ideation around a topic the crowdsourcer is interested in? How much guidance is possible without
negatively impacting creativity? Can creative tasks be combined with traditional crowdsourcing tasks such
as data collection?
In this paper, we investigate the following research questions:
1. Can crowd workers, who are typically not versed in the details of statistical or machine learning,
propose meaningful supervised learning problems?
2. What is the best way to pose the task of proposing machine learning problems to workers, allowing
them to understand the task of supervised learning while accounting for crowdsourcer requirements
and minimizing potential bias? Does the design of the task influence what problems workers may
propose?
3. Are workers able to compare and contrast previously proposed problems to help the crowdsourcer
allocate workers towards problems deemed important or interesting?
4. What are the properties of problems proposed by workers? Do workers tend to ideate around certain
topics or certain types of questions?
5. Can data collected for worker-proposed problems be used to build accurate predictive models without
intervention from the crowdsourcer?
To study these questions, we implement and test a crowdsourcing algorithm where groups of workers
ideate supervised learning problems, categorize and efficiently rank those problems according to criteria of
interest, and collect training data for those problems. We study the topics and features of problems, show
that performant predictive models can be trained on proposed problems, explore the design of the problem
proposal task using a randomized trial, and discuss limitations and benefits when approaching learning from
this perspective—how it changes the learning task from feature engineering and modeling to the specification
of problem requirements.
The rest of this paper is organized as follows. In Sec. 2 we discuss crowdsourcing research on machine
learning data collection and on creative ideation. In Sec. 3 we detail the end-to-end crowdsourcing algorithm
we introduce for proposing supervised learning problems and generating novel datasets. Section 4 describes
methods for how to rank proposed problems according to various criteria, saving crowdsourcer resources by
eliminating poor problems and focusing data collection on problems of interest. In Sec. 5 we present our
results applying our end-to-end algorithm using Amazon Mechanical Turk, studying features of problems
proposed by workers and showing that supervised learning can be achieved on newly collected data. As
crowdsourcing relies on workers having a clear understanding of the task at hand, Section 6 describes a
randomized trial we conducted to further understand how giving examples may help or hinder workers as
they propose problems. Finally, we discuss our results and how our findings can inform future applications
of crowdsourcing to machine learning problem ideation in Secs. 7 and 8.
2 Background
Crowdsourcing has long been used as an avenue to gather training data for machine learning methods [14].
In this setting, it is important to understand the quality of worker responses, to prevent gathering bad data and
to maximize the wisdom-of-the-crowd effects without introducing bias [11]. Researchers have also studied
active learning, where the supervised learning algorithm is coupled in a feedback loop with responses
from the crowd. One example is the Galaxy Zoo project [12], where crowd workers are asked to classify
photographs of galaxies while learning algorithms try to predict the galaxy classes and also manage quality
by predicting the responses of individual workers.
While most crowdsourcing focuses on relatively rote tasks such as basic image classification [20], many
researchers have studied how to incorporate crowdsourcing into creative tasks. Some examples include the
work of Bernstein et al. [3] and Teevan et al. [24], both of which leverage crowd workers for prose writing;
Kittur [13], where the crowd helps with translation of poetry; Chilton et al. [7], where the crowd develops
taxonomic hierarchies; and Dontcheva et al. [8], where crowd workers were asked to ideate new and interesting
applications or uses of common everyday objects, such as coins. In the context of machine learning, the
website Kaggle provides a competition platform for expert crowd participants to create predictive models,
allowing data holders to crowdsource modeling, but problems are posed by the data providers, not the crowd.
Two recent studies take an approach to using crowdsourcing for creative ideation that share similarities
with the algorithm we propose here. The work of Siangliulue et al. [22] studies crowdsourcing to write
greeting cards, and implemented a proposal-and-ranking algorithm similar to our proposal-ranking-data-collection approach (Sec. 3). Similarly, the “Wiki Surveys” project [19] asks volunteers to contribute and
vote on new ideas for improving quality-of-life in New York City. As with our algorithm, this project couples
a proposal phase with a ranking and selection step, to create ideas and then filter and select the best ideas
for the city government to consider. None of these studies applied crowdsourced creativity to problems of
machine learning or data collection, however.
Perhaps the research most related to ours is the series of papers [4, 5, 23, 26]. That work studies the
crowdsourcing of survey questions in multiple problem domains. Participants answered questions related
to a quantity of interest to the crowdsourcer, such as how much they exercised vs. their obesity level or
how much laundry they did at home compared with their home energy usage. Those participants were
also offered the chance to propose new questions related to the quantity of interest (obesity level or home
energy use). Supervised learning algorithms were deployed while crowdsourcing occurred to relate proposed
questions (predictors) to the quantity of interest, and thus participants were performing crowdsourced feature
engineering with the goal of discovering novel predictors of interest. Our work here generalizes this to
crowdsourcing the entire supervised learning problem, not just the features, by allowing workers to propose
not just questions related to a quantity of interest chosen by the research team, but also the quantity of interest
itself.
3 Crowd ideation algorithm
Here we introduce a crowd ideation algorithm for the creation and data collection of supervised learning
problems. The algorithm works in three phases: (i) problem proposal, (ii) problem selection by ranking,
and (iii) data collection for selected problems. As part of the crowdsourcing, proposed problems may also
be categorized or labeled by workers. This is an end-to-end algorithm in that crowd workers generate all
problems and data without manual interventions from the crowdsourcer.
3.1 Problem proposal
In the first phase, a small number of workers are directed to propose sets of questions (see supplemental
materials for the exact wording of these and all instructions we used in our experiments). Workers are
instructed to provide a problem consisting of one target question and p = 4 input questions. We focused
on four input questions here to keep the proposal task short; we discuss generalizing this in Sec. 7. Several
examples of problems proposed by workers are shown in Table 1. Workers are told that our goal is to predict
what a person’s answer will be to the target question after only receiving answers to the input questions.
Describing the problem in this manner allows workers to envision the underlying goal of the supervised
learning problem without the need to discuss data matrices, response variables, predictors, or other field-specific vocabulary. Workers were also instructed to use their judgment and experience to determine
“interesting and important” problems. Importantly, no examples of questions were shown to workers, to
help ensure they were not biased in favor of the example (we investigate this bias with a randomized trial in
Sec. 6). Workers were asked to write their questions into provided text fields, ending each with a question
mark. They were also asked to categorize the type of answer expected for each question; for simplicity, we
directed workers to provide questions whose answers were either numeric or true/false (Boolean), though
this can be readily generalized. Lastly, workers in the first phase are also asked to provide answers to their
own questions.
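One way to picture the structure of a single submitted problem is as a small record with a target question and p = 4 input questions. This is purely illustrative: the field names are our own invention, not part of the task interface, and the questions are taken from Table 1.

```python
# Illustrative record for one crowd-proposed problem (Sec. 3.1).
# Field names are hypothetical; questions are from Table 1.
problem = {
    "target": {"text": "What is your annual income?", "answer_type": "numeric"},
    "inputs": [
        {"text": "You have a job?", "answer_type": "boolean"},
        {"text": "How much do you make per hour?", "answer_type": "numeric"},
        {"text": "How many hours do you work per week?", "answer_type": "numeric"},
        {"text": "How many weeks per year do you work?", "answer_type": "numeric"},
    ],
    # Proposers also answer their own questions (values here are made up):
    "proposer_answers": {"target": 52000, "inputs": [True, 25.0, 40, 50]},
}
```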
3.2 Problem ranking
In the second phase, new workers are shown previously proposed problems, along with instructions again
describing the goal of predicting the target answer given the input answers, but these workers are asked to (i)
rank the problems according to our criteria but using their own judgment, and (ii) answer survey questions
describing the problems they were shown. It is useful to keep individual crowdsourcing tasks short, so it is
generally too burdensome to ask each worker to rank all N problems. Instead, we suppose that workers will
study either one problem or a pair of problems depending on the ranking procedure, complete the survey
Table 1: Examples of crowd-proposed supervised learning problems. Each problem is a set of questions, one target and
p inputs, all generated by workers. Answers to input questions form the data matrix X and answers to the target question
form the target vector y. Machine learning algorithms try to predict the value of the target given only responses to the
inputs.

Problem 1
  Target: What is your annual income?
  Input:  You have a job?
  Input:  How much do you make per hour?
  Input:  How many hours do you work per week?
  Input:  How many weeks per year do you work?

Problem 2
  Target: Do you have a good doctor?
  Input:  How many times have you had a physical in the last year?
  Input:  How many times have you gone to the doctor in the past year?
  Input:  How much do you weigh?
  Input:  Do you have high blood pressure?

Problem 3
  Target: Has racial profiling in America gone too far?
  Input:  Do you feel authorities should use race when determining who to give scrutiny to?
  Input:  How many times have you been racially profiled?
  Input:  Should laws be created to limit the use of racial profiling?
  Input:  How many close friends of a race other than yourself do you have?
questions for the problem(s), and, if shown a pair of problems, to rate which of the two problems they
believed was “better” according to the instructions given to them by the crowdsourcer. To use these ratings
to develop a global ranking of problems from “best” to “worst”, a crowdsourcer can apply top-K ranking
algorithms such as those described below (Sec. 4). These algorithms select the K most suitable problems to
pass along to phase three.
Problem categorization
As an optional part of phase two, data can be gathered to categorize what types of problems are being
proposed, and what are the properties of those problems. To categorize problems a crowdsourcer can design
any manner of survey questions depending on her interests, although it is helpful to keep the overall task
short and as easy for workers to complete as possible. In our case, we asked workers what the topic of each
problem is, whether questions in the problem were subjective or objective, how well the input questions
would help to predict the answer to the target question, and what kind of responses other people would give
to some questions. We describe the properties of proposed problems in Sec. 5.
3.3 Data collection and supervised learning
In phase three, workers were tasked with answering the input and target questions for the problems selected
during the ranking phase. Workers could answer the questions in each selected problem only once but could
work on multiple problems. In our case, we collected data from workers until each problem had responses
from a fixed number of unique workers n, but a crowdsourcer could specify other criteria for collecting data.
The answers workers give in this phase create the datasets to be used for learning. Specifically, the n × p
matrix X consists of the n worker responses to the p input questions (we can also represent the answers to
each input question i as a predictor vector $x_i$, with $X = [x_1, \dots, x_p]$). Likewise, the answers to the target
question provide the supervising or target vector y.
After data collection, supervised learning methods can be applied to find the best predictive model $\hat{f}$
that relates y and X, i.e., $y = \hat{f}(X)$. In our case, we focused on random forests [6], a powerful and general
ensemble learning method. Random forests work well on both linear and nonlinear problems and can be used
for both regression problems (where y is numeric) and classifications (where y is categorical). However, any
supervised learning method can be applied in this context. For hyperparameters used to fit the forests, we
chose 200 trees per forest, a split criterion of MSE for regression and Gini impurity for classification, and
tree nodes are expanded until all leaves are pure or contain fewer than 2 samples.
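As a concrete illustration, this learning step can be sketched with scikit-learn (an assumption on our part; the text does not name an implementation). The dataset below is a synthetic stand-in for crowd-collected answers, with n = 200 responses and p = 4 inputs.

```python
# Sketch of the phase-three learning step, assuming scikit-learn.
# X and y are synthetic stand-ins for crowd-collected answers.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.integers(0, 2, size=(200, 4)).astype(float)  # n=200 responses, p=4 inputs
y = (X[:, 0] + X[:, 1] > 0).astype(int)              # Boolean target -> classification

# Hyperparameters from the text: 200 trees, Gini impurity for classification
# (squared error for regression), nodes grown until pure or < 2 samples.
model = RandomForestClassifier(n_estimators=200, criterion="gini",
                               min_samples_split=2, random_state=0)
scores = cross_val_score(model, X, y, cv=5)          # mean accuracy per fold
```

For a numeric target one would swap in RandomForestRegressor with a squared-error split criterion.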
4 Ranking proposed problems
Not all workers will propose meaningful problems, so it is important to include a ranking step (phase two)
that filters out poor problems while promoting important problems. Furthermore, problems may not be
meaningful for different reasons. Problems may lead to unimportant or unimpactful broader consequences,
or problems may simply recapitulate known relationships (“Do you think 1 + 1 = 2?”). Another reason is
that they may lack learnability. For a binary classification task, learnability is reflected in the balance of
class labels. For example, the target question “Do you think running and having a good diet are healthy?” is
likely to lead to very many “true” responses and very few “false” responses. This makes learning difficult
and not particularly meaningful (while a predictive model in such a scenario is not especially useful, the
relationships and content of the target and input questions are likely to be meaningful, as we saw in some of
our examples; see supplemental materials). Given these potential concerns, it is important to rank problems
according to criteria of interest in order to focus the crowd on the most important and meaningful problems.
The choice of ranking criteria gives the crowdsourcer flexibility to guide workers in favor of, not
necessarily specific types of problems, but problems that possess certain features or properties. This
balances the needs of the crowdsourcer (and possible budget constraints) without restricting the free-form
creative ideation of the crowd. Here we detail two methods to use crowd feedback to efficiently rank problems
based on importance and learnability.
4.1 Importance ranking
We asked workers to use their judgment to estimate the “importance” of problems (see supplemental materials
for the exact wording of instructions). To keep the task size per worker manageable, we have workers
compare only a pair of problems per task, with a simple “Which of these two
problems is more important?”-style question. This reduces the worker’s task to a pairwise comparison. Yet
even reduced to pairwise comparisons, the global ranking problem is still challenging, as one needs O(N²)
pairwise comparisons for N problems, comparing every problem to every other problem. Furthermore,
importance is generally subjective, so we need the responses of many workers and cannot rely on a single
response to a given pair of problems. Assuming we require L independent worker comparisons per pair, the
number of worker responses required for problem ranking grows as O(LN²).
Thankfully, ranking algorithms can reduce this complexity. Instead of comparing all pairs of problems,
these algorithms allow us to compare a subset of pairs to infer a latent score for each problem, then rank
all problems according to these latent scores. For this work, we chose the following top-K spectral ranking
algorithm, due to Negahban et al. [17], to rank crowd-proposed problems and extract the K best problems
for subsequent crowdsourced data collection.
The algorithm begins with a comparison graph G = (V, E), where the vertices V = {1, 2, . . . , N } denote
the problems to be compared, and comparison between two problems i and j occurs only if (i, j) ∈ E. If any
two problems are independently and equally likely to be chosen for comparison, then G is an Erdős-Rényi
graph $G_{N,p}$ where p is the constant probability that an edge exists. In order to compute a global ranking from
the pairwise comparisons, G must be connected. For Erdős–Rényi graphs, this occurs when $p \geq c \log(N)/N$
for some constant c > 1. The choice of an Erdős–Rényi comparison graph here is useful: when all possible
edges are equally and independently probable, the number of samples needed to produce a consistent ranking
is nearly optimal [17].
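The graph construction can be sketched as follows, assuming NumPy; the function name is illustrative, and the default c = 1.5 matches the value used in our experiment later in this section. Since a connected graph is required, the sketch simply re-samples until connectivity holds.

```python
# Sketch: sample an Erdos-Renyi comparison graph G(N, p) with
# p = c*log(N)/N, re-sampling until it is connected. Assumes NumPy;
# `comparison_graph` is an illustrative name.
import numpy as np

def comparison_graph(N, c=1.5, seed=0):
    rng = np.random.default_rng(seed)
    p = c * np.log(N) / N
    while True:
        A = np.triu(rng.random((N, N)) < p, k=1)
        A = A | A.T                          # symmetric adjacency matrix
        seen, stack = {0}, [0]               # connectivity check via DFS
        while stack:
            v = stack.pop()
            for w in np.flatnonzero(A[v]):
                if int(w) not in seen:
                    seen.add(int(w))
                    stack.append(int(w))
        if len(seen) == N:
            # Edge list: the pairs of problems workers will compare.
            return [(i, j) for i in range(N) for j in range(i + 1, N) if A[i, j]]

edges = comparison_graph(50)                 # N = 50 problems, as in our experiment
```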
After generating a comparison graph, crowd workers are assigned ranking tasks for problem pairs
(i, j) ∈ E. We seek L independent comparisons per edge, giving L|E| total comparison tasks (fewer than the
original O(LN²) when p is small). The outcome $t_{ij}^{(\ell)}$ of the $\ell$-th comparison between problems i and j is

$$
t_{ij}^{(\ell)} =
\begin{cases}
1, & \text{problem } j \text{ beats } i,\\
0, & \text{otherwise.}
\end{cases}
\tag{1}
$$

From Eq. (1), the aggregate comparison for pair (i, j) is $t_{ij} = \frac{1}{L} \sum_{\ell=1}^{L} t_{ij}^{(\ell)}$. Next, these $t_{ij}$ are used to convert
the comparison graph into a transition matrix T representing a first-order Markov chain. The elements of
this transition matrix are

$$
T_{ij} =
\begin{cases}
\frac{1}{d}\, t_{ij}, & \text{if } (i, j) \in E,\\
1 - \frac{1}{d} \sum_{k=1}^{N} t_{ik} A_{ik}, & \text{if } i = j,\\
0, & \text{otherwise,}
\end{cases}
\tag{2}
$$

where the constant $d \equiv \Delta(G)$ is the maximum vertex degree, $A = [A_{ij}]$ is the adjacency matrix of G, and
the diagonal terms $T_{ii}$ ensure that T is a stochastic matrix. Lastly, the stationary distribution of this Markov
chain, computed from the leading left eigenvector u of T, provides the latent scores for ranking. The top-K
problems then correspond with the K largest elements of u.
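The ranking procedure condenses into a short sketch, assuming NumPy (the function name and input conventions are ours): t is an N × N array whose entry t[i, j] is the aggregate fraction of comparisons of pair (i, j) won by j (zero for non-edges), and A is the comparison graph's adjacency matrix.

```python
# Sketch of the top-K spectral ranking (Sec. 4.1), assuming NumPy.
# t[i, j]: aggregate fraction of comparisons of pair (i, j) won by j.
# A: adjacency matrix of the comparison graph G.
import numpy as np

def top_k_spectral(t, A, K):
    """Return indices of the K highest-scoring problems."""
    d = A.sum(axis=1).max()            # d = Delta(G), maximum vertex degree
    row = (A * t).sum(axis=1)          # sum_k t_ik A_ik for each problem i
    T = (A * t) / d                    # off-diagonal entries of Eq. (2)
    np.fill_diagonal(T, 1 - row / d)   # diagonal terms make T stochastic
    # Latent scores: stationary distribution u of the chain, i.e. the
    # leading left eigenvector of T (u T = u).
    vals, vecs = np.linalg.eig(T.T)
    u = np.abs(np.real(vecs[:, np.argmax(np.real(vals))]))
    u /= u.sum()
    return np.argsort(u)[::-1][:K]     # indices of the K largest scores
```

On a toy three-problem complete graph where problem 2 wins most comparisons, the procedure ranks problem 2 first.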
For our specific crowdsourcing experiment, we generated N = 50 problems during the proposal phase,
so here we generated a single Erdős-Rényi comparison graph of 50 nodes with p = 1.5 log(N)/N, and opted
for L = 15. Increasing L can improve ranking accuracy, but doing so comes at the cost of more worker time
and crowdsourcer resources.
4.2 Learnability ranking
As discussed above, problems lack learnability when there is insufficient diversity in the dataset. If nearly
every observation is identical, there is not enough “spread” of data for the supervised learning method to
train upon. To avoid collecting data for such problems, we seek a means for workers to estimate for us the
learnability of a proposed problem when shown the input and target questions. The challenge is providing
workers with a task that is sufficiently simple for them to perform quickly yet the workers do not require
training or background in how supervised learning works.
To address this challenge, we designed a task to ask workers about their opinions of the set of answers we
would receive to a given question (a form of meta-knowledge). We limited ourselves to Boolean (true/false)
target questions, although it is straightforward to generalize to regression problems (numeric target questions)
by rephrasing the task slightly. Specifically, we asked workers what proportion of respondents would answer
“true” to the given question. Workers gave a 1–5 Likert-scale response from (1) “No one will answer true”
to (3) “About half will answer true” to (5) “Everyone will answer true”. The idea is that, since a diversity of
responses is generally necessary (but not sufficient) for (binary) learnability, classification problems that are
balanced between two class labels are more likely to be learnable. To select problems, we use a simple ranking
procedure that seeks questions with responses predominantly in the middle of the Likert scale. Specifically,
if $t_{ij} \in \{1, \dots, 5\}$ is the response of the i-th worker to problem j, we take the aggregate learnability score
to be

$$
\bar{t}_j = 3 - \frac{\sum_{i=1}^{W} t_{ij}\, \delta_{ij}}{\sum_{i=1}^{W} \delta_{ij}},
\tag{3}
$$

where W is the total number of workers participating in learnability ranking tasks, and $\delta_{ij} = 1$ if worker i
ranked problem j, and zero otherwise. The closer a problem’s score is to 3, the more the workers agree that
target answers would be evenly split between true and false, and so we rank problems based on the absolute
deviation from the middle score of 3. While Eq. (3) is specific to a 1–5 Likert scale variable, similar scores
can be constructed for any ordinal variable.
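This selection step is easy to sketch (standard library only; the problem labels and ratings below are invented): average each problem's Likert responses and rank by absolute deviation from the middle score of 3.

```python
# Sketch of the learnability selection step (Sec. 4.2).
# ratings[j]: the 1-5 Likert responses collected for problem j (invented data).
def learnability_rank(ratings, K):
    """Rank problems by absolute deviation of the mean rating from 3."""
    dev = {j: abs(3 - sum(r) / len(r)) for j, r in ratings.items()}
    # Smaller deviation -> workers expect answers split near 50/50 true/false.
    return sorted(dev, key=dev.get)[:K]

ratings = {
    "A": [3, 3, 2, 4, 3],   # balanced -> likely learnable
    "B": [5, 5, 4, 5, 5],   # nearly all "true" -> class imbalance
    "C": [1, 1, 2, 1, 1],   # nearly all "false" -> class imbalance
}
print(learnability_rank(ratings, 1))  # -> ['A']
```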
This learnability ranking task can be combined with a pairwise comparison methodology like the one
described for importance ranking. In our case, we elected to perform a simpler one-problem task because
learnability ranking only requires examining the target question and because workers are less likely to need
a relative baseline here as much as they may with importance ranking, where a contrast effect between two
problems is useful for judging subjective values such as importance. Due to time and budget constraints we
also took K = 5 for experiments using this ranking task.
Table 2: Summary of crowdsourcing tasks. Rewards are in USD.

Task                                            Reward   # responses   # workers
Problem proposal                                $3.00    50            50
Importance rating & problem categorization      $0.25    2042          239
Learnability rating                             $0.05    835           83
Data collection for top importance problems     $0.12    2004          495
Data collection for top learnability problems   $0.12    990           281

5 Results

5.1 Crowdsourcing tasks
We performed crowdsourcing using Amazon Mechanical Turk during August 2017. Tasks were performed
in sequence, first problem proposal (phase one), then ranking and categorization (phase two), then data
collection (phase three). These tasks and the numbers of responses and numbers of workers involved in
each task are detailed in Table 2, as are the rewards workers were given. Rewards were determined based on
estimates of the difficulty or time spent on the problem, so proposing a problem had a much higher reward
($3 USD) than providing data by answering the problem’s questions ($0.12 USD). No responses were filtered
out at any point, although a small number of responses (less than 1%) were not successfully recorded.
We solicited N = 50 problems in phase one, compensating Mechanical Turk workers $3 for their task.
Workers could submit only one problem. A screenshot of the task interface for this phase (and all phases)
is shown in the supplemental materials. Some example problems provided by crowd workers are shown in
Table 1; all 50 problems are shown in the supplemental materials. After these problems were collected,
phase two began where workers were asked to rate the problems by their importance and learnability and to
categorize the features of the proposed problems. Workers were compensated $0.25 per task in phase two
and were limited to 25 tasks total. After the second phase completed, we chose the top-10 most important
problems and the top-5 most learnable problems (Sec. 4) to pass on to data collection (phase three). We
collected data for these problems until n = 200 responses were gathered for each problem (we have slightly
fewer responses for some problems as a few responses were not recorded successfully; no worker responses
were rejected). Workers in this phase could respond to more than one problem but only once to each problem.
[Figure 1 shows two bar charts over the categories Health/wellness, Demographic/personal, Politics/current events, Factual, and Other/unsure: one of the proportion of worker responses per category, and one of the number of proposed problems per category.]
Figure 1: Categories of crowdsourced problems. The bottom plot counts the majority categorization of each problem.
5.2 Characteristics of proposed problems
We examined the properties of problems proposed by workers in phase one. We measured the prevalence
of Boolean and numeric questions. Of the N = 50 proposed problems, 34 were classifications (Boolean
target question) and 16 were regressions (numeric target question). Further, of the 250 total questions
provided across the N = 50 problems, 177 (70.8%) were Boolean and 73 were numeric (95% CI on the
proportion of Boolean: 64.74% to 76.36%), indicating that workers were significantly in favor of Boolean
questions over numeric. Likewise, we also found an association between whether the input questions
were numeric or Boolean given the target question was numeric or Boolean. Specifically, we found that
problems with a Boolean target question had on average 3.12 Boolean input questions out of 4 (median
of 4 Boolean input questions), whereas problems with a numeric target question had 2.31 Boolean input
questions on average (median of 2 Boolean input questions). The difference was significant (Mann-Whitney
test: U = 368.5, n_bool = 34, n_num = 16, p < 0.02). Although it is difficult to draw a strong conclusion from
this test given the amount of data we have (only N = 50 problems), the evidence we have indicates that
workers tend to think of the same type of question for both the target and the inputs, despite the potential
power of mixing the two types of questions.
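The confidence interval quoted above (64.74% to 76.36% Boolean out of 177/250 questions) can be approximately reproduced with an exact (Clopper–Pearson) binomial interval; this is a sketch assuming SciPy, and the text does not state which interval method was actually used, so the endpoints may differ slightly.

```python
# Sketch: Clopper-Pearson 95% CI for the proportion of Boolean questions
# (177 of 250). Assumes SciPy; the original interval method is unstated.
from scipy.stats import beta

x, n = 177, 250
lo = beta.ppf(0.025, x, n - x + 1)      # exact lower endpoint
hi = beta.ppf(0.975, x + 1, n - x)      # exact upper endpoint
print(f"95% CI: {lo:.2%} to {hi:.2%}")
```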
To understand more properties of the questions workers proposed, we gave survey questions to workers as
part of the importance rating task to categorize the problems shown to them. We used survey questions about
[Figure 2 shows a bar chart of Likert responses, from “Strongly disagree” to “Strongly agree”, by proportion of worker responses.]
Figure 2: Worker responses to, “Are the input questions useful at predicting answers to the target question?”
[Figure 3 shows, for each problem category, the proportion of question ratings that were ‘objective’.]
Figure 3: Proportion of question ratings of ‘objective’ instead of ‘subjective’ vs. the majority category of the problem.
the topical nature of the problem (Fig. 1), whether the inputs were useful at predicting the target (Fig. 2), and
whether the questions were objective or subjective (Fig. 3). Problem categories (Fig. 1) were selected from a
multiple choice categorization we determined manually. Problems about demographic or personal attributes
were common, as were political and current events. Workers generally reported that the inputs were useful
at predicting the target, either rating “agree” or “strongly agree” to that statement (Fig. 2). Many types of
problems were mixes between objective and subjective questions, while problems categorized as “factual”
tended to contain the most objective questions and problems categorized as “other/unsure” contained the
most subjective questions, indicating a degree of meaningful consistency across the categorization survey
questions.
To rank the learnability of classification problems, we asked workers about the diversity of responses
they expected others to give to the Boolean target question, whether they believed most people would answer
false to the target question, or answer true, or if people would be evenly split between true and false (Fig. 4).
We found that generally there was a bias in favor of positive (true) responses to the target questions, but that
workers felt that, for many questions, responses to the target question would be split between true and false.
This bias is potentially important for a crowdsourcer to consider when designing her own tasks, but seeing
that most Boolean target questions are well split between true and false responses also supports that workers
are proposing useful problems; if the answer to the target question is always false, for example, then the input
[Figure 4 shows the proportion of ratings for each option, from “NO ONE will answer ‘true’” through “ABOUT HALF will answer ‘true’” to “EVERYONE will answer ‘true’”.]
Figure 4: Crowd-categorized diversity of the (Boolean) target questions.
questions are likely not necessary, and the workers generally realize this when proposing their problems.
5.3 Performance of supervised learning on collected data
Given the proposed problems and the selection of problems for subsequent data collection, it is also important
to quantify predictive model performance on these problems. Since workers are typically not familiar with
supervised learning, there is a risk they may be unable to propose learnable problems. At the same time,
however, workers may not be locked into traditional modes of thinking, such as assumptions that predictors
are linearly related to the target, leading to problems with interesting and potentially unexpected combinations
of predictor variables and the response variable.
Here we trained and measured the performance of random forest regressors and classifiers (Sec. 3.3),
depending on whether the proposer flagged the target question as either numeric or Boolean, using the
data collected for the 15 selected problems. Predictive performance was measured using the coefficient of
determination for regressions and mean accuracy for classifications, as assessed with k-fold cross-validation
(stratified k-fold cross-validation if the problem is a classification). To assess the variability of performance
over different datasets, we used bootstrap replicates of the original crowd-collected data to estimate a
distribution of cross-validation scores. There is also a risk of class imbalance, where nearly every target
variable is equal and always guessing the majority class label can appear to perform well. To assess this, we
also trained on a shuffled version of each problem’s dataset, where we randomly permuted the rows of the
data matrix X, breaking the connection with the target variable y. If models trained on these data performed
similarly to models trained on the real data, then it is difficult to conclude that learning has occurred, although
this does not mean the questions are not meaningful, only that the data collected does not lead to useful
predictive models.
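This assessment protocol can be sketched as follows, assuming scikit-learn and NumPy; the synthetic dataset, the helper name `cv_score_distribution`, and the bootstrap count are all illustrative, not from the original experiments.

```python
# Sketch of the assessment protocol: bootstrap replicates of the dataset,
# cross-validation on each, and a shuffled-X baseline to expose class
# imbalance. Assumes scikit-learn and NumPy; the data is synthetic.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 4))
y = (X[:, 0] > 0).astype(int)              # learnable synthetic target

def cv_score_distribution(X, y, n_boot=10):
    """Mean CV score over bootstrap replicates, for real and shuffled data."""
    real, shuffled = [], []
    n = len(y)
    for _ in range(n_boot):
        idx = rng.integers(0, n, size=n)   # one bootstrap replicate
        Xb, yb = X[idx], y[idx]
        model = RandomForestClassifier(n_estimators=50, random_state=0)
        real.append(cross_val_score(model, Xb, yb, cv=5).mean())
        # Shuffle rows of X to break the X-y connection (imbalance baseline).
        Xs = Xb[rng.permutation(n)]
        shuffled.append(cross_val_score(model, Xs, yb, cv=5).mean())
    return np.mean(real), np.mean(shuffled)

real_acc, null_acc = cv_score_distribution(X, y)
```

If the real and shuffled score distributions overlap, it is difficult to conclude that learning has occurred.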
[Figure 5 shows, for each selected problem, the probability density of cross-validation scores for models trained on the real data (“real”) versus on data with the target randomized (“random”): panel (a) covers the top importance problems (a mix of regressions and classifications), panel (b) the top learnability (classification) problems.]
Figure 5: Cross-validation scores for (a) the top-10 importance ranked problems and (b) the top-5 learnable problems. At
least two of the importance problems and four of the learnable problems demonstrate significant prediction performance.
Performance variability was assessed with bootstrap replicates of the crowdsourced datasets and class imbalance was
assessed by randomizing the target variable relative to the input data. Note that the second regression problem in panel
a showed poor predictive performance for reasons we describe in Sec. 7.
The results of this model assessment procedure are shown in Fig. 5. Many of the 10 importance-ranked problems in Fig. 5(a) demonstrate this class imbalance, but at least two of the ten problems, one regression
and one classification, show significant learning1. At the same time, four out of the five learnability-ranked
problems (Fig. 5(b)) showed strong predictive performance.
These results show that, while many of the worker-proposed problems are difficult to learn on, it is
possible to generate multiple problems where learning can be successful, and to assess this with an automatic
procedure such as testing the differences of the distributions shown in Fig. 5.
5.3.1 Avoiding redundant questions
One concern when allowing non-experts to propose supervised learning problems is that they may introduce
multiple questions that effectively ask the same thing. For example, it is redundant to ask both “Are you
obese?” and “Is your BMI over 30?” within a single problem. This could lead to redundant predictors,
which is inefficient, or redundant target variables, which is disastrous: if one can immediately guess the
1 One regression problem showed terrible performance scores for reasons we detail in the discussion.
[Figure 6 plots omitted: left, distribution of predictor-target correlations R with mean and median marked; right, distributions of predictor-target R2 and model scores.]
Figure 6: Distributions of correlations between individual predictors and the target variable, and the model score
between the target variable and the predictions of the fitted model. Most problems do not feature a redundant predictor
that can fully explain the target variable.
target answer given a predictor, then there is not much point in doing the supervised learning.
Determining whether two natural language questions are equivalent up to rephrasing is a challenging computational linguistics task. However, with access to worker responses, we can infer redundancy based on
correlations between the answers given during the data collection task. Therefore, to estimate redundancy, we measured the correlation between each individual predictor xi and the target variable y, R(xi, y). We used the Pearson correlation coefficient, which reduces to the phi coefficient when xi and y
are binary. The distribution of these predictor-target correlations for the 15 crowdsourced problems is shown
in Fig. 6. Many predictors are only weakly correlated or anti-correlated with the target—on the right panel
we see nearly all predictors have an R2 < 1/2. In contrast, the model scores, the correlations between the
observed y and the trained model fˆ(X), also shown in the right panel, are all much higher. Together, these
distributions imply that few if any target variables have redundant predictor questions, and thus workers
generally avoided the concern of redundancy2.
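A minimal sketch of this redundancy check: compute R(xi, y) for every predictor and flag any predictor that (nearly) determines the target. The toy data below, with one deliberately duplicated predictor, is illustrative only and is not drawn from the crowdsourced problems.

```python
import numpy as np

def pearson_r(a, b):
    """Pearson correlation; for binary a and b this equals the phi coefficient."""
    a, b = a - a.mean(), b - b.mean()
    return float(a @ b / np.sqrt((a @ a) * (b @ b)))

rng = np.random.default_rng(1)
y = rng.integers(0, 2, 500)                                  # binary target
noisy = [(y ^ (rng.random(500) < 0.4)).astype(int) for _ in range(3)]
X = np.column_stack(noisy + [y])                             # last column duplicates the target

corrs = [pearson_r(X[:, j], y) for j in range(X.shape[1])]
redundant = [j for j, r in enumerate(corrs) if r ** 2 > 0.9]  # flag near-deterministic predictors
print(np.round(corrs, 2), redundant)
```

Only the duplicated column is flagged; the noisy copies show the weak correlations typical of the left panel of Fig. 6.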
In Fig. 7 we compare the training and cross-validation scores for each problem as a function of the
maximum predictor-target correlation for that problem (maxi R(xi, y)). If the learned model was only
performing as well as the information available to it from the best single predictor, then the points would
fall on the dashed line. However, we see that all problems3 outperform that base single-predictor level,
demonstrating that the problems workers proposed capture useful, non-redundant information.
2 One may worry that this was due to workers simply answering the different questions randomly, but if that were the case it would be unlikely to produce model scores as high as the ones we found in Fig. 5.
3 We excluded one regression problem from this set where learning was not possible; see the discussion for details.
[Figure 7 plot omitted: model score (training and cross-validation) versus maximum predictor-target correlation, with the single-predictor baseline shown as a dashed identity line.]
Figure 7: Predictive performance of learned models compared with the most correlated individual predictor for
the problem. Under both training and cross-validation, learned models perform better than expected if there was a
redundant predictor that perfectly or best explained the target (dashed line).
5.3.2 Positive correlation bias
While investigating the correlations between individual predictors and the target, we also found that positive
correlations were more likely than negative correlations. The mean and median correlation over all predictor-target pairs, annotated in the left panel of Fig. 6, were significantly different from zero: mean correlation (95% CI) = 0.128 (0.0519, 0.204); median correlation (95% CI) = 0.101 (0.0420, 0.193). In other words, crowd workers were more likely to propose positive relationships than negative relationships, which has interesting
implications given that in principle learning algorithms can perform just as well with both positive and
negative relationships. This positivity bias was also observed by Wagy et al. [26], and leveraging it for
improved response diversity, perhaps by focusing on workers who have a record of proposing anti-correlated
relationships, is an interesting avenue for further research.
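Confidence intervals like those reported above can be obtained with a percentile bootstrap over the pooled predictor-target correlations. The sketch below uses synthetic correlation values as a stand-in; the paper's actual per-pair correlations are not reproduced here.

```python
import numpy as np

def boot_ci(x, stat, n_boot=2000, alpha=0.05, seed=0):
    """Percentile bootstrap confidence interval for a statistic of a 1-D sample."""
    rng = np.random.default_rng(seed)
    reps = [stat(rng.choice(x, size=len(x), replace=True)) for _ in range(n_boot)]
    return tuple(np.quantile(reps, [alpha / 2, 1 - alpha / 2]))

# synthetic stand-in for the pooled predictor-target correlations
corrs = np.random.default_rng(2).normal(loc=0.13, scale=0.2, size=75)

for name, stat in [("mean", np.mean), ("median", np.median)]:
    lo, hi = boot_ci(corrs, stat)
    print(f"{name} r = {stat(corrs):.3f}, 95% CI = ({lo:.3f}, {hi:.3f})")
```

A CI that excludes zero, as in the paper's estimates, is what indicates the positivity bias.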
6 Task design: Do examples help explain the problem proposal task? Do examples introduce bias?
Care must be taken when instructing workers to perform the problem proposal task. Without experience in
machine learning, they may be unable to follow instructions which are too vague or too reliant on machine
learning terminology. Providing an example with the instructions is one way to make the task more clear
while avoiding jargon. An example helps avoid the ambiguity effect [10], where workers are more likely to
avoid the task because they do not understand it. However, there are potential downsides as well: introducing
an example may lead to anchoring [25] where workers will be biased towards proposing problems related to
the example and may not think of important, different problems.
To understand what role an example may play—positive or negative—in problem proposal, we conducted
a randomized trial investigating the instructions given for the problem proposal task. Workers who did not
participate in previous tasks were randomly assigned to one of three arms when accepting the problem
proposal task (simple random assignment). One arm had no example given with the instructions and was
identical to the task studied previously (Sec. 5.2). The second arm included with the instructions an example
related to obesity (An example target question is: “Are you obese?”), and the third arm presented an example
related to personal finance (An example target question is: “What is your current life savings?”). The
presence or absence of an example is the only difference across arms; all other instructions were identical
and, crucially, workers were not instructed to propose problems related to any specific topic or area of
interest.
After we collected new problems proposed by workers who participated in this randomized trial, we
then initiated a followup problem categorization task (Sec. 3.2) identical to the categorization task discussed
previously but with two exceptions: we asked workers to only look at one problem per task and we did not
use a comparison graph as here we will not rank the problems for subsequent data collection. Since only one
problem was categorized per task instead of two, workers were paid $0.13 per task instead of the original
$0.25 per task. The results of this categorization task allow us to investigate the categories and features of
the proposed problems and to see whether or not the problems differ across the three experimental arms.
6.1 Results
We collected n = 90 proposed problems across all three arms (27 in the no-example baseline arm, 33 in the
obesity example arm, and 30 in the savings example arm), paying workers as before. We then collected 458
problem categorization ratings, gathering ratings from 5 or more distinct workers per proposed problem (no
worker could rate more than 25 different problems). From these ratings we can study changes in problem
category, usefulness of input questions at answering the target question, if the questions are answerable or
unanswerable, and if the questions are objective or subjective, as judged by workers participating in the
[Figure 8 plot omitted: proportion of worker responses per category (Health/wellness, Demographic/personal, Politics/current events, Factual, Other/unsure) for the no-example, obesity, and savings arms.]
Figure 8: Categories of proposed problems under the different instructional treatments (the baseline instructions with
no example, the instructions with the obesity example, and the instructions with the savings example). Problems
proposed by workers who saw either example were more likely to be rated as demographic or personal and less likely to
be considered factual. Interestingly, the obesity example led to fewer proposed problems related to health or wellness.
Table 3: Typical ratings and features of problems proposed under the three instruction types (the no-example baseline,
the obesity example, and the savings example). Bold treatment quantities show a significant difference with the baseline
(Table 4).
Rating or feature (variable type)                        | Mean x baseline | Mean x obesity | Mean x savings
Problem importance (x = 1–5 Likert; 5: Strongly agree)   |      3.09       |      3.16      |      3.50
Inputs are useful (x = 1–5 Likert; 5: Strongly agree)    |      3.36       |      3.91      |      3.67
Questions are answerable (x = 1) or unanswerable (x = 0) |      0.82       |      0.92      |      0.92
Questions are objective (x = 1) or subjective (x = 0)    |      0.59       |      0.67      |      0.68
Questions are numeric (x = 1) or Boolean (x = 0)         |      0.25       |      0.35      |      0.60
followup rating tasks.
The results of this trial are summarized in Fig. 8 and Table 3, with statistical tests comparing the
no-example baseline to the example treatments summarized in Table 4. In brief, we found that:
• Problem categories changed significantly across arms (Fig. 8 and Table 4), with more ‘demographic/personal’ problems, fewer ‘politics/current events’, and fewer ‘factual’ questions under the
example treatments compared with the baseline.
• Workers shown the savings example were significantly more likely than workers in other arms to
propose questions with numeric responses instead of Boolean responses: 60% of questions proposed
in the savings arm were numeric compared with 25% in the no-example baseline (p < 10−8 ; Table 4).
Table 4: Statistical tests comparing the categories and ratings given for problems generated under the no-example baseline with the categories and ratings of problems generated under the obesity example and savings example treatments. For categorical and Likert-scale ratings we used a Chi-squared test of independence, while for binary ratings we used a Fisher exact test. Significant results (p < 0.05) are denoted with ∗.
                                      |              | Baseline vs. obesity  | Baseline vs. savings
Difference in                         | Test         | statistic | p-value   | statistic | p-value
problem categories (cf. Fig. 8)       | Chi-square   | 52.73∗    | < 10−10   | 52.73∗    | < 10−9
problem importance (cf. Table 3)      | Chi-square   | 8.57      | > 0.05    | 11.84∗    | < 0.02
inputs are useful (cf. Table 3)       | Chi-square   | 16.35∗    | < 0.005   | 7.20      | > 0.1
answerable/unanswerable (cf. Table 3) | Fisher exact | —∗        | 0.0083    | —∗        | 0.012
objective/subjective (cf. Table 3)    | Fisher exact | —         | 0.12      | —         | 0.078
numeric/Boolean (cf. Table 3)         | Fisher exact | —∗        | 0.041     | —∗        | < 10−8
• All three arms were rated as having mostly answerable questions, with a higher proportion of answerable questions for both example treatments: 92% of ratings were ‘answerable’ for both example
treatments compared with 82% for the baseline (Table 3). Proportions for both example treatments
were significantly different from the baseline (Table 4).
• Workers more strongly agreed that the inputs were useful at predicting the target for problems proposed
by workers under the example treatments than the no-example baseline problems. The overall increase
was not dramatic however, and tested as significant (p < 0.05) only for the savings example vs. the
baseline (Table 4).
• Questions proposed under the example treatments were more likely to be rated as objective than
questions proposed under the no-example baseline: 67% and 68% of ratings were ‘objective’ for the
obesity and savings examples, respectively, compared with 59% for the baseline (Table 3). However,
this difference was not significant (Table 4).
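For reference, the binary comparisons in Table 4 use a 2x2 Fisher exact test, which can be computed directly from the hypergeometric distribution. The counts in the final line are hypothetical (the paper reports proportions, not raw rating counts), so the printed p-value is purely illustrative.

```python
from math import comb

def fisher_exact_p(table):
    """Two-sided Fisher exact p-value for a 2x2 table [[a, b], [c, d]]."""
    (a, b), (c, d) = table
    n, row1, col1 = a + b + c + d, a + b, a + c

    def hyper(x):  # P(top-left cell = x) with all margins fixed
        return comb(col1, x) * comb(n - col1, row1 - x) / comb(n, row1)

    p_obs = hyper(a)
    lo, hi = max(0, row1 - (n - col1)), min(row1, col1)
    # sum the probabilities of all tables at least as extreme as the observed one
    return sum(hyper(x) for x in range(lo, hi + 1) if hyper(x) <= p_obs * (1 + 1e-9))

# e.g. answerable vs. unanswerable ratings, baseline arm vs. obesity arm (hypothetical counts)
print(round(fisher_exact_p([[82, 18], [110, 10]]), 4))
```

The Chi-squared comparisons for the categorical and Likert-scale ratings follow the same pattern with a contingency-table test instead.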
Taken together, the results of this trial demonstrate that examples, while helping to explain the task to
workers, will lead to significant changes in the features and content of problems the workers will propose. A
crowdsourcer may be able to get better and somewhat more specific questions, depending on her requirements,
but care should be taken when selecting which examples to use, as workers may anchor onto those examples
in some ways when developing their own problem proposals.
7 Discussion
Here we studied a problem where crowd workers independently designed supervised learning problems
and then collected data for selected problems. We determined that workers were able to propose learnable
and important problems with minimal instruction, but that several challenges mean care should be taken
when developing worker instructions as workers may propose trivial or “bad” problems. To avoid wasting
resources on such bad problems, we introduced efficient problem selection and data collection algorithms to
maximize the crowd’s ability to generate suitable problems while minimizing the resources required from
the crowdsourcer.
Analyzing the problems proposed by workers, we found that workers tended to favor Boolean questions
over numeric questions, that input questions tended to be positively correlated with target questions, and that
many problems were related to demographic or personal attributes. To better understand how the design of the
proposal task may affect the problems workers proposed, we also conducted a randomized trial comparing
problems proposed by workers shown no example to those shown examples, and found that examples
significantly altered the categories of proposed problems. These associations make it important to carefully
consider the tasks assigned to workers, but they also provide opportunities to help the crowdsourcer. For
example, it is less common for workers to mix Boolean and numeric questions, but workers that do propose
such mixtures may be identified early on and then steered towards particular tasks, perhaps more difficult
tasks. Likewise, given that examples have a powerful indirect effect on problem proposal, a crowdsourcer
may be able to use examples to “nudge” workers in one direction while retaining more of their creativity
than if they explicitly restricted workers to a particular type of problem. We saw an example of this in
Sec. 6: workers shown the savings example were over 2.5 times more likely to propose numeric questions
than workers shown no example.
When allowing creative contributions from the crowd, a challenge is that workers may propose trivial or
uninteresting problems. This may happen intentionally, due to bad actors, or unintentionally, due to workers
misunderstanding the goal of their task. Indeed, we encountered a number of such proposed problems, further
underscoring the need for both careful instructions and the problem ranking phase. Yet, we found that the
problem ranking phase did a reasonable job at detecting and down-ranking such problems, although there is
room to improve on this further, for example by reputation modeling of workers or implementing other quality
control measures [1, 14, 21]. More generally, it may be worth combining the ranking and data collection
phases, collecting data immediately for all or most problems but simultaneously monitoring problems as
data are collected for certain specifications and then dynamically allocating more incoming workers to the
most suitable subset of problems [15, 16]. To monitor learnability, for example, a crowdsourcer can detect
if most responses are similar or identical while data is collected, and deemphasize that particular problem
accordingly.
The ranking procedures we used considered problem importance and problem learnability separately,
but a single ranking may be more practical. One approach is to design the ranking task so that workers
account for all the attributes the crowdsourcer wishes to select for at the same time. Another approach is to
determine separate rankings for each desired attribute, then develop a joint ranking using a rank aggregation
method such as Borda count [9]. Further work will be helpful in delineating which approaches work best for
which problems.
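As a concrete example of the second approach, Borda count aggregation of two per-attribute rankings might look like the sketch below; the problem names and rankings are placeholders, not results from the paper.

```python
from collections import defaultdict

def borda(rankings):
    """Aggregate best-first rankings over the same items by Borda count:
    an item in position p of an n-item ranking earns n - 1 - p points."""
    scores = defaultdict(int)
    for r in rankings:
        for pos, item in enumerate(r):
            scores[item] += len(r) - 1 - pos
    return sorted(scores, key=scores.get, reverse=True)

importance   = ["P3", "P1", "P2", "P4"]   # hypothetical importance ranking
learnability = ["P3", "P4", "P1", "P2"]   # hypothetical learnability ranking
print(borda([importance, learnability]))  # → ['P3', 'P1', 'P4', 'P2']
```

Items strong on both attributes rise to the top of the joint ranking even when the two input rankings disagree about the middle.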
We limited ourselves to numeric or Boolean questions, and a total of five questions per problem, but
varying the numbers of questions and incorporating other answer types could be useful. For numeric
questions, one important consideration is the choice of units. We encountered one regression problem
(mentioned previously; see supplemental materials) where learning failed because the questions involved
distances and volumes, but workers were not given information on units, leading to wildly varying answers.
This teaches us that workers may need to be asked if units should be associated with the answers to numeric
questions during problem ideation tasks.
One of the great potentials of crowdsourcing problem ideation is that it allows a diverse set of individuals
to contribute their ideas and perspectives into the design of problems. Diversity can come in many forms,
from gender and nationality to interdisciplinary training and education. Diversity is known to lead to a
number of positive outcomes for groups and organizations [18] and judicious use of crowdsourced input may
allow smaller teams of researchers to augment their own diversity with the crowd’s diversity.
Allowing the crowd to propose problems changes the researcher’s main workload from feature engineering and model fitting to problem specification. While here we allowed the crowd to ideate freely about
problems, with the goal of determining what problems they were most likely to propose, in practice the
crowdsourcer is likely to instead choose to focus on particular types of problems. As one example, a team
of medical researchers or a team working at an insurance firm may request only problems focused on health
care. Future work will investigate methods for steering the crowd towards topics of interest, in particular
ways of focusing the crowd while biasing workers as little as possible.
8 Conclusion
In this work, we introduced an end-to-end algorithm allowing crowd workers to propose new supervised
learning problems and to collect datasets on which to train predictive models. Ranking algorithms were
used to eliminate poor problems and to select the problems most desirable to a crowdsourcer. We described
the properties of the collected problems and validated the predictive performance of trained models. While
not all proposed problems were useful or led to performant models, we demonstrated that the crowd can
create multiple novel, learnable problems and then generate novel and useful datasets associated with those
problems.
Acknowledgments This material is based upon work supported by the National Science Foundation under
Grant No. IIS-1447634.
References
[1] M. Allahbakhsh, B. Benatallah, A. Ignjatovic, H. R. Motahari-Nezhad, E. Bertino, and S. Dustdar.
Quality control in crowdsourcing systems: Issues and directions. IEEE Internet Computing, 17(2):
76–81, 2013.
[2] T. M. Amabile, R. Conti, H. Coon, J. Lazenby, and M. Herron. Assessing the work environment for
creativity. Academy of management journal, 39(5):1154–1184, 1996.
[3] M. S. Bernstein, G. Little, R. C. Miller, B. Hartmann, M. S. Ackerman, D. R. Karger, D. Crowell, and
K. Panovich. Soylent: a word processor with a crowd inside. Communications of the ACM, 58(8):
85–94, 2015.
[4] K. E. Bevelander, K. Kaipainen, R. Swain, S. Dohle, J. C. Bongard, P. D. H. Hines, and B. Wansink.
Crowdsourcing Novel Childhood Predictors of Adult Obesity. PLOS ONE, 9(2):e87756, 2014. ISSN
1932-6203. doi: 10.1371/journal.pone.0087756.
[5] J. C. Bongard, P. D. Hines, D. Conger, P. Hurd, and Z. Lu. Crowdsourcing predictors of behavioral
outcomes. IEEE Transactions on Systems, Man, and Cybernetics: Systems, 43(1):176–185, 2013.
[6] L. Breiman. Random Forests. Machine Learning, 45(1):5–32, 2001. ISSN 0885-6125, 1573-0565.
doi: 10.1023/A:1010933404324.
[7] L. B. Chilton, G. Little, D. Edge, D. S. Weld, and J. A. Landay. Cascade: Crowdsourcing taxonomy
creation. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, pages
1999–2008. ACM, 2013.
[8] M. Dontcheva, E. Gerber, and S. Lewis. Crowdsourcing and creativity. In CHI 2011: Crowdsourcing
Workshop, 2011.
[9] C. Dwork, R. Kumar, M. Naor, and D. Sivakumar. Rank aggregation methods for the web. In
Proceedings of the 10th international conference on World Wide Web, pages 613–622. ACM, 2001.
[10] D. Ellsberg. Risk, ambiguity, and the Savage axioms. The Quarterly Journal of Economics, pages
643–669, 1961.
[11] P.-Y. Hsueh, P. Melville, and V. Sindhwani. Data quality from crowdsourcing: a study of annotation
selection criteria. In Proceedings of the NAACL HLT 2009 workshop on active learning for natural
language processing, pages 27–35. Association for Computational Linguistics, 2009.
[12] E. Kamar, S. Hacker, and E. Horvitz. Combining human and machine intelligence in large-scale
crowdsourcing. In Proceedings of the 11th International Conference on Autonomous Agents and
Multiagent Systems-Volume 1, pages 467–474, 2012.
[13] A. Kittur. Crowdsourcing, collaboration and creativity. XRDS: crossroads, the ACM magazine for
students, 17(2):22–26, 2010.
[14] M. Lease. On quality control and machine learning in crowdsourcing. In Proceedings of the 11th AAAI
Conference on Human Computation, AAAIWS’11-11, pages 97–102. AAAI Press, 2011.
[15] Q. Li, F. Ma, J. Gao, L. Su, and C. J. Quinn. Crowdsourcing high quality labels with a tight budget.
In Proceedings of the Ninth ACM International Conference on Web Search and Data Mining, pages
237–246. ACM, 2016.
[16] T. C. McAndrew, E. Guseva, and J. P. Bagrow. Reply & Supply: Efficient crowdsourcing when workers
do more than answer questions. PLOS ONE, 12(8):e0182662, 2017. doi: 10.1371/journal.pone.0182662.
[17] S. Negahban, S. Oh, and D. Shah. Rank centrality: Ranking from pairwise comparisons. Operations
Research, 65(1):266–287, 2017. doi: 10.1287/opre.2016.1534.
[18] S. E. Page. The difference: How the power of diversity creates better groups, firms, schools, and
societies. Princeton University Press, 2008.
[19] M. J. Salganik and K. E. Levy. Wiki surveys: Open and quantifiable social data collection. PLOS ONE,
10(5):e0123483, 2015.
[20] E. Schenk and C. Guittard. Crowdsourcing: What can be outsourced to the crowd, and why. In
Workshop on Open Source Innovation, Strasbourg, France, volume 72, 2009.
[21] F. Scholer, A. Turpin, and M. Sanderson. Quantifying test collection quality based on the consistency
of relevance judgements. In Proceedings of the 34th international ACM SIGIR conference on Research
and development in Information Retrieval, pages 1063–1072. ACM, 2011.
[22] P. Siangliulue, K. C. Arnold, K. Z. Gajos, and S. P. Dow. Toward collaborative ideation at scale:
Leveraging ideas from others to generate more creative and diverse ideas. In Proceedings of the 18th
ACM Conference on Computer Supported Cooperative Work & Social Computing, pages 937–945.
ACM, 2015.
[23] R. Swain, A. Berger, J. Bongard, and P. Hines. Participation and contribution in crowdsourced surveys.
PLOS ONE, 10(4):e0120521, 2015.
[24] J. Teevan, S. T. Iqbal, and C. von Veh. Supporting Collaborative Writing with Microtasks. In
Proceedings of the 2016 CHI Conference on Human Factors in Computing Systems, CHI ’16, pages
2657–2668, New York, NY, USA, 2016. ACM. ISBN 978-1-4503-3362-7. doi: 10.1145/2858036.2858108.
[25] A. Tversky and D. Kahneman. Judgment under uncertainty: Heuristics and biases. Science, 185(4157):
1124–1131, 1974.
[26] M. D. Wagy, J. C. Bongard, J. P. Bagrow, and P. D. H. Hines. Crowdsourcing predictors of residential
electric energy usage. IEEE Systems Journal, PP(99):1–10, 2017.
arXiv:1706.07136v2 [stat.ME] 18 Aug 2017
Multiscale Information Decomposition: Exact Computation for Multivariate Gaussian Processes

L. Faes1, S. Stramaglia2, and D. Marinazzo3

1 Bruno Kessler Foundation, Trento, Italy; BIOtech, Dept. of Industrial Engineering, University of Trento, Italy
2 Dipartimento di Fisica, Università degli Studi Aldo Moro, Bari, Italy; INFN, Sezione di Bari, Italy
3 Data Analysis Department, Ghent University, Ghent, Belgium
August 21, 2017
Abstract
Exploiting the theory of state space models, we derive the exact expressions of the information transfer, as well as redundant and synergistic transfer, for coupled Gaussian processes
observed at multiple temporal scales. All the terms, constituting the frameworks known as
interaction information decomposition and partial information decomposition, can thus
be analytically obtained for different time scales from the parameters of the VAR model that
fits the processes. We report the application of the proposed methodology firstly to benchmark
Gaussian systems, showing that this class of systems may generate patterns of information decomposition characterized by prevalently redundant or synergistic information transfer persisting across multiple time scales, or even by alternating prevalence of redundant and synergistic
source interaction depending on the time scale. Then, we apply our method to an important
topic in neuroscience, i.e. the detection of causal interactions in human epilepsy networks, for
which we show the relevance of partial information decomposition to the detection of multiscale information transfer spreading from the seizure onset zone.
1 Introduction
The information-theoretic treatment of groups of correlated degrees of freedom can reveal their
functional roles as memory structures or information processing units. A large body of recent work
has shown how the general concept of “information processing” in a network of multiple interacting dynamical systems described by multivariate stochastic processes can be dissected into basic
elements of computation defined within the so-called framework of information dynamics Lizier
et al. (2014). These elements essentially reflect the new information produced at each moment
in time about a target system in the network Pincus (1995), the information stored in the target
system Lizier et al. (2012), Wibral et al. (2014), the information transferred to it from the other
connected systems Schreiber (2000), Wibral et al. (2014) and the modification of the information
flowing from multiple source systems to the target Lizier et al. (2010), Wibral et al. (2015). The
measures of information dynamics have gained more and more importance in both theoretical and
applicative studies in several fields of science Lizier et al. (2011), Wibral et al. (2011), Hlinka
et al. (2013), Barnett et al. (2013), Marinazzo et al. (2014), Faes et al. (2014, 2015), Porta et al.
(2015), Faes et al. (2017), Wollstadt et al. (2017). While the information-theoretic approaches to
the definition and quantification of new information, information storage and information transfer
are well understood and widely accepted, the problem of defining, interpreting and using measures
of information modification has not been fully addressed in the literature.
Information modification in a network is tightly related to the concepts of redundancy and synergy between source systems sharing information about a target system, which refer to the existence
of common information about the target that can be retrieved when the sources are used separately
(redundancy) or when they are used jointly (synergy) Schneidman et al. (2003). Classical multivariate entropy-based approaches refer to the interaction information decomposition (IID), which
reflects information modification through the balance between redundant and synergetic interaction among different source systems influencing the target Stramaglia et al. (2012, 2014, 2016).
The IID framework has the drawback that it implicitly considers redundancy and synergy as mutually exclusive concepts, because it quantifies information modification with a single measure of
interaction information McGill (1954) (also called co-information Bell (2003)) that takes positive
or negative values depending on whether the net interaction between the sources is synergistic or
redundant. This limitation has been overcome by the elegant mathematical framework introduced
by Williams and Beer Williams and Beer (2010), who proposed the so-called partial information
decomposition (PID) as a nonnegative decomposition of the information shared between a target
and a set of sources into terms quantifying separately unique, redundant and synergistic contributions. However, the PID framework has the drawback that the terms composing the PID cannot
be obtained unequivocally from classic measures of information theory (i.e., entropy and mutual
information), but a new definition of either redundant, synergistic or unique information needs to
be provided to implement the decomposition. Accordingly, much effort has focused on finding
the most proper measures to define the components of the PID, with alternative proposals defining new measures of redundancy Williams and Beer (2010), Harder et al. (2013), synergy Griffith
et al. (2014), Quax et al. (2017) or unique information Bertschinger et al. (2014). The proliferation
of different definitions is mainly due to the fact that there is no full consensus on which axioms
should be stated to impose desirable properties for the PID measures. An additional problem which
so far has seriously limited the practical implementation of these concepts is the difficulty in providing reliable estimates of the information measures appearing in the IID and PID decompositions.
The naive estimation of probabilities by histogram-based methods followed by the use of plug-in
estimators leads to serious bias problems Panzeri et al. (2007), Faes and Porta (2014). While the
use of binless density estimators Kozachenko and Leonenko (1987) and the adoption of schemes
for dimensionality reduction Vlachos and Kugiumtzis (2010), Marinazzo et al. (2012) have been
shown to improve the reliability of estimates of information storage and transfer Faes et al. (2015),
the effectiveness of these approaches for the computation of measures of information modification
has not been demonstrated yet. Interestingly, both the problems of defining appropriate PID measures and of reliably estimating these measures from data are much alleviated if one assumes that
the observed variables have a joint Gaussian distribution. Indeed, in such a case, recent studies
have proven the equivalence between most of the proposed redundancy measures to be used in the
PID Barrett (2015) and have provided closed form solutions to the issue of computing any measure
of information dynamics from the parameters of the vector autoregressive (VAR) model that characterizes an observed multivariate Gaussian process Faes et al. (2017), Barrett et al. (2010), Porta
et al. (2017).
The second fundamental question that is addressed in this study is relevant to the computation
of information dynamics for stochastic processes displaying multiscale dynamical structures. It is
indeed well known that many complex physical and biological systems exhibit peculiar oscillatory
activities, which are deployed across multiple temporal scales Ivanov et al. (1999), Chou (2011),
Wang et al. (2013). The most common way to investigate such activities is to resample at different scales, typically through low pass filtering and downsampling Costa et al. (2002), Valencia
et al. (2009), the originally measured realization of an observed process, so as to yield a set of
rescaled time series, which are then analyzed employing different dynamical measures. This approach is well established and widely used for the multiscale entropy analysis of individual time
series measured from scalar stochastic processes. However, its extension to the investigation of the
multiscale structure of the information transfer among coupled processes is complicated by theoretical and practical issues Barnett and Seth (2015), Solo (2016). Theoretically, the procedure of
rescaling alters the causal interactions between lagged components of the processes in a way that
is not fully understood and, if not properly performed, may alter the temporal relations between
processes and thus induce spurious detection of information transfer. In practical analysis, filtering
and downsampling are known to degrade severely the estimation of information dynamics and to
impact consistently the detectability, accuracy and data demand Florin et al. (2010), Barnett and
Seth (2017).
In recent works, we have started tackling the above problems within the framework of linear
VAR modeling of multivariate Gaussian processes, with the focus on the multiscale computation
of information storage and information transfer Faes et al. (2016, 2017). In this study, we aim at
extending these recent theoretical advances to the multiscale analysis of information modification
in multivariate Gaussian systems performed through the IID and PID decomposition frameworks.
To this end, we exploit the theory of state space (SS) models Aoki and Havenner (1991) and build
on recent theoretical results Barnett and Seth (2015), Solo (2016) to show that exact values of interaction transfer, as well as redundant and synergistic transfer can be obtained for coupled Gaussian
processes observed at different time scales starting from the parameters of the VAR model that
fits the processes and from the scale factor. The theoretical derivations are first used in examples
of benchmark Gaussian systems, reporting that these systems may generate patterns of information
decomposition characterized by prevalently redundant or synergistic information transfer persisting across multiple time scales or even by alternating the prevalence of redundant and synergistic
source interaction depending on the time scale. The high computational reliability of the SS approach is then exploited in the analysis of real data by the application to a topic of great interest in
neuroscience, i.e., the detection of information transfer in epilepsy networks.
The proposed framework is implemented in the msID MATLAB® toolbox, which is uploaded as Supplementary Material to this article and is freely available for download from www.lucafaes.net/msID.html and https://github.com/danielemarinazzo/multiscale_PID.
2 Information Transfer Decomposition in Multivariate Processes
Let us consider a discrete-time, stationary vector stochastic process composed of M real-valued
zero-mean scalar processes, Yn = [Y1,n · · · YM,n ]T , −∞ < n < ∞. In an information-theoretic
framework, the information transfer between scalar sub-processes is quantified by the well-known
transfer entropy (TE), which is a popular measure of the “information transfer” directed towards
an assigned target process from one or more source processes. Specifically, the TE quantifies the
amount of information that the past of the source provides about the present of the target over and
above the information already provided by the past of the target itself Schreiber (2000). Taking Yj
as target and Yi as source, the TE is defined as:
Ti→j = I(Yj,n ; Yi,n− | Yj,n− )    (1)
where Yi,n− = [Yi,n−1 Yi,n−2 · · · ] and Yj,n− = [Yj,n−1 Yj,n−2 · · · ] represent the past of the source and
target processes and I(·; ·|·) denotes conditional mutual information (MI). In the presence of two
sources Yi and Yk and a target Yj , the information transferred toward Yj from the sources Yi and
Yk taken together is quantified by the joint TE:
Tik→j = I(Yj,n ; Yi,n− , Yk,n− | Yj,n− ).    (2)
Under the premise that the information jointly transferred to the target by the two sources is
different than the sum of the amounts of information transferred individually, in the following, we
present two possible strategies to decompose the joint TE into amounts eliciting the individual TEs,
as well as redundant and/or synergistic TE terms.
2.1 Interaction Information Decomposition
The first strategy, which we denote as interaction information decomposition (IID), decomposes
the joint TE (2) as:
Tik→j = Ti→j + Tk→j + Iik→j ,    (3)
where Iik→j is denoted as interaction transfer entropy (ITE) because it is equivalent to the interaction information McGill (1954) computed between the present of the target and the past of the two
sources, conditioned to the past of the target:
Iik→j = I(Yj,n ; Yi,n− ; Yk,n− | Yj,n− ).    (4)
The interaction TE quantifies the modification of the information transferred from the source
processes Yi and Yk to the target Yj , being positive when Yi and Yk cooperate in a synergistic
way and negative when they act redundantly. This interpretation is evident from the diagrams
of Figure 1: in the case of synergy (Figure 1a), the two sources Yi and Yk taken together contribute
to the target Yj with more information than the sum of their individual contributions (Tik→j >
Ti→j + Tk→j ), and the ITE is positive; in the case of redundancy (Figure 1b), the sum of the
information amounts transferred individually from each source to the target is higher than the joint
information transfer (Ti→j + Tk→j > Tik→j ), so that the ITE is negative.
Figure 1: Venn diagram representations of the interaction information decomposition (IID) (a,b) and the partial information decomposition (PID) (c). The IID is depicted in a way such that all areas in the diagrams are positive: the interaction information transfer Iik→j is positive in (a), denoting net synergy, and is negative in (b), denoting net redundancy.
2.2 Partial Information Decomposition
An alternative expansion of the joint TE is that provided by the so-called partial information decomposition (PID) Williams and Beer (2010). The PID evidences four distinct quantities: the unique information transferred from each individual source to the target, measured by the unique TEs Ui→j and Uk→j , and the redundant and synergistic information transferred from the two sources to the target, measured by the redundant TE Rik→j and the synergistic TE Sik→j .
These four measures are related to each other and to the joint and individual TEs by the following
equations (see also Figure 1c):
Tik→j = Ui→j + Uk→j + Rik→j + Sik→j ,    (5a)
Ti→j = Ui→j + Rik→j ,    (5b)
Tk→j = Uk→j + Rik→j .    (5c)
In the PID defined above, the terms Ui→j and Uk→j quantify the parts of the information transferred to the target process Yj , which are unique to the source processes Yi and Yk , respectively,
thus reflecting contributions to the predictability of the target that can be obtained from one of the
sources alone, but not from the other source alone. Each of these unique contributions sums up
with the redundant transfer Rik→j to yield the information transfer from one source to the target as
is known from the classic Shannon information theory. Then, the term Sik→j refers to the synergy
between the two sources while they transfer information to the target, intended as the information that is uniquely obtained taking the two sources Yi and Yk together, but not considering them
alone. Compared to the IID defined in (3), the PID (5) has the advantage that it provides distinct
non-negative measures of redundancy and synergy, thereby accounting for the possibility that redundancy and synergy may coexist as separate elements of information modification. Interestingly,
the IID and PID defined in Equations (3) and (5) are related to each other in a way such that:
Iik→j = Sik→j − Rik→j ,    (6)
thus showing that the interaction TE is actually a measure of the ‘net’ synergy manifested in the
transfer of information from the two sources to the target.
An issue with the PID (5) is that its constituent measures cannot be obtained through classic
information theory simply subtracting conditional MI terms as done for the IID; an additional
ingredient to the theory is needed to get a fourth defining equation to be added to (5) for providing
an unambiguous definition of Ui→j , Uk→j , Rik→j and Sik→j . While several PID definitions have
been proposed arising from different conceptual definitions of redundancy and synergy Harder
et al. (2013), Griffith et al. (2014), Bertschinger et al. (2014), here, we make reference to the so-called minimum MI (MMI) PID Barrett (2015). According to the MMI PID, redundancy is defined
as the minimum of the information provided by each individual source to the target. In terms of
information transfer measured by the TE, this leads to the following definition of the redundant
TE:
Rik→j = min{Ti→j , Tk→j }.    (7)
This choice satisfies the desirable property that the redundant TE is independent of the correlation between the source processes. Moreover, it has been shown that, if the observed processes
have a joint Gaussian distribution, all previously-proposed PID formulations reduce to the MMI
PID Barrett (2015).
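The algebra linking the IID of Equation (3) with the MMI PID of Equations (5)–(7) is simple enough to sketch in a few lines of Python; the function below is our own illustration (not part of the paper's toolbox) and assumes the three TE values are already available:

```python
def mmi_pid(T_i, T_k, T_ik):
    """MMI-PID terms (Eqs. 5-7) from the individual TEs T_{i->j}, T_{k->j}
    and the joint TE T_{ik->j}; returns unique, redundant, synergistic and
    interaction transfer entropies."""
    R = min(T_i, T_k)           # redundant TE, Eq. (7)
    U_i = T_i - R               # unique TE of source i, Eq. (5b)
    U_k = T_k - R               # unique TE of source k, Eq. (5c)
    S = T_ik - U_i - U_k - R    # synergistic TE, Eq. (5a)
    I = S - R                   # interaction TE, Eq. (6); equals T_ik - T_i - T_k
    return {"U_i": U_i, "U_k": U_k, "R": R, "S": S, "I": I}
```

For instance, Ti→j = 0.3, Tk→j = 0.2 and Tik→j = 0.6 give R = 0.2, Ui→j = 0.1, Uk→j = 0, S = 0.3 and a positive interaction TE, i.e., net synergy.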
3 Multiscale Information Transfer Decomposition
3.1 Multiscale Representation of Multivariate Gaussian Processes
In the linear signal processing framework, the M -dimensional vector stochastic process Yn = [Y1,n · · · YM,n ]T
is classically described using a vector autoregressive (VAR) model of order p:
Yn = Σ_{k=1}^{p} Ak Yn−k + Un    (8)
where Ak are M × M matrices of coefficients and Un = [U1,n · · · UM,n ]T is a vector of M zero-mean Gaussian processes with covariance matrix Σ ≡ E[Un UnT ] (E is the expectation operator).
To study the observed process Y at the temporal scale identified by the scale factor τ , we apply the
following transformation to each constituent process Ym , m = 1, . . . , M :
Ȳm,n = Σ_{l=0}^{q} bl Ym,nτ−l .    (9)
This rescaling operation corresponds to transforming the original process Y through a two-step
procedure that consists of the following filtering and downsampling steps, yielding respectively the
processes Ỹ and Ȳ :
Ỹn = Σ_{l=0}^{q} bl Yn−l ,    (10a)
Ȳn = Ỹnτ , n = 1, . . . , N/τ .    (10b)
The change of scale in (9) generalizes the averaging procedure originally proposed in Costa
et al. (2002), which sets q = τ − 1 and bl = 1/τ and, thus, realizes the step of filtering through
the simple procedure of averaging τ subsequent samples. To improve the elimination of the fast
temporal scales, in this study, we follow the idea of Valencia et al. (2009), in which a more appropriate low pass filter than averaging is employed. Here, we identify the bl as the coefficients
of a linear finite impulse response (FIR) low pass filter of order q; the FIR filter is designed using
the classic window method with the Hamming window Oppenheim and Schafer (1975), setting the
cutoff frequency at fτ = 1/2τ in order to avoid aliasing in the subsequent downsampling step.
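As an illustration of Equations (9) and (10), the sketch below implements the filtering and downsampling steps with numpy only; the windowed-sinc design stands in for the Hamming-window FIR method described above, and all function names are ours:

```python
import numpy as np

def hamming_fir(q, tau):
    """Windowed-sinc FIR low pass of order q with cutoff f_tau = 1/(2*tau)
    cycles/sample, normalized to unit DC gain (coefficients b_l of Eq. 9)."""
    n = np.arange(q + 1) - q / 2.0
    fc = 1.0 / (2.0 * tau)
    h = 2 * fc * np.sinc(2 * fc * n)   # ideal low-pass impulse response
    h *= np.hamming(q + 1)             # Hamming window
    return h / h.sum()

def rescale(y, tau, q=12):
    """Filter each column of y (Eq. 10a) and downsample by tau (Eq. 10b)."""
    b = hamming_fir(q, tau)
    yf = np.apply_along_axis(
        lambda s: np.convolve(s, b, mode="full")[:len(s)], 0, y)
    return yf[::tau]
```

With τ = 1 the cutoff sits at the Nyquist frequency, so the filter is close to an identity, while larger τ progressively removes the fast temporal scales before decimation.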
Substituting (8) in (10a), the filtering step leads to the process representation:
Ỹn = Σ_{k=1}^{p} Ak Ỹn−k + Σ_{l=0}^{q} Bl Un−l    (11)
where Bl = bl IM (IM is the M × M identity matrix). Hence, the change of scale introduces a
moving average (MA) component of order q in the original VAR(p) process, transforming it into
a VARMA(p, q) process. As we will show in the next section, the downsampling step (10b) keeps
the VARMA representation, altering the model parameters.
3.2 State Space Processes
3.2.1 Formulation of State Space Models
State space models are models that make use of state variables to describe a system by a set of first-order difference equations, rather than by one or more high-order difference equations Hannan and Deistler (2012), Aoki (2013). The general linear state space (SS) model describing an observed
vector process Y has the form:
Xn+1 = AXn + Wn    (12a)
Yn = CXn + Vn    (12b)
where the state Equation (12a) describes the update of the L-dimensional state (unobserved) process through the L × L matrix A, and the observation Equation (12b) describes the instantaneous
mapping from the state to the observed process through the M × L matrix C. Wn and Vn are
zero-mean white noise processes with covariances Q ≡ E[Wn WnT ] and R ≡ E[Vn VnT ] and cross-covariance S ≡ E[Wn VnT ]. Thus, the parameters of the SS model (12) are (A, C, Q, R, S).
Another possible SS representation is that evidencing the innovations En = Yn − E[Yn |Yn− ],
i.e., the residuals of the linear regression of Yn on its infinite past Yn− = [YTn−1 YTn−2 · · · ]T Aoki
(2013). This new SS representation, usually referred to as the “innovations form” SS model (ISS),
is characterized by the state process Zn = E[Xn |Yn− ] and by the L × M Kalman gain matrix K:
Zn+1 = AZn + KEn    (13a)
Yn = CZn + En    (13b)
The parameters of the ISS model (13) are (A, C, K, V), where V is the covariance of the
innovations, V ≡ E[En EnT ]. Note that the ISS (13) is a special case of (12) in which Wn = KEn
and Vn = En , so that Q = KVKT , R = V and S = KV.
Given an SS model in the form (12), the corresponding ISS model (13) can be identified by
solving a so-called discrete algebraic Riccati equation (DARE) formulated in terms of the state
error variance matrix P Solo (2016):
P = APAT + Q − (APCT + S)(CPCT + R)−1 (CPAT + ST )    (14)
Under some assumptions Solo (2016), the DARE (14) has a unique stabilizing solution, from
which the Kalman gain and innovation covariance can be computed as:
V = CPCT + R ,
K = (APCT + S)V−1 ,    (15)
thus completing the transformation from the SS form to the ISS form.
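A minimal numeric sketch of the SS-to-ISS conversion: here, the DARE (14) is solved by plain fixed-point iteration, which converges for stable, well-posed models (dedicated DARE solvers are preferable in production code; function and variable names are ours):

```python
import numpy as np

def ss_to_iss(A, C, Q, R, S, n_iter=1000):
    """Convert SS parameters (A, C, Q, R, S) to ISS form by iterating the
    DARE (14); returns the Kalman gain K, the innovation covariance V
    (Eq. 15) and the state error variance P."""
    P = Q.copy()
    for _ in range(n_iter):
        V = C @ P @ C.T + R
        K = (A @ P @ C.T + S) @ np.linalg.inv(V)
        P = A @ P @ A.T + Q - K @ V @ K.T   # right-hand side of Eq. (14)
    V = C @ P @ C.T + R
    K = (A @ P @ C.T + S) @ np.linalg.inv(V)
    return K, V, P
```

On exit, P should be (numerically) a fixed point of Equation (14), and (A, C, K, V) are the ISS parameters of Equation (13).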
3.2.2 State Space Models of Filtered and Downsampled Linear Processes
Exploiting the close relation between VARMA models and SS models, first we show how to convert
the VARMA model (11) into an ISS model in the form of (13) that describes the filtered process
Ỹn . To do this, we exploit Aoki’s method Aoki and Havenner (1991) defining the state process
Z̃n = [ỸTn−1 · · · ỸTn−p UTn−1 · · · UTn−q ]T that, together with Ỹn , obeys the state Equation (13) with
parameters (Ã, C̃, K̃, Ṽ), where:

Ã = [ A1 · · · Ap−1 Ap B1 · · · Bq−1 Bq ;
      IM · · · 0M 0M 0M · · · 0M 0M ;
      · · · ;
      0M · · · IM 0M 0M · · · 0M 0M ;
      0M · · · 0M 0M 0M · · · 0M 0M ;
      0M · · · 0M 0M IM · · · 0M 0M ;
      · · · ;
      0M · · · 0M 0M 0M · · · IM 0M ] ,

C̃ = [ A1 · · · Ap B1 · · · Bq ] ,

K̃ = [ IM 0M×M(p−1) B0−T 0M×M(q−1) ]T ,

and Ṽ = B0 ΣBT0 , where Ṽ is the covariance of the innovations Ẽn = B0 Un .
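Under Aoki's construction above, the ISS parameters of the filtered process can be assembled from the VAR matrices and the FIR coefficients as in the following numpy sketch (variable and function names are ours):

```python
import numpy as np

def filtered_iss(A_list, b, Sigma):
    """ISS parameters of the filtered process (Eq. 11) via Aoki's method.
    A_list = [A1..Ap] (M x M VAR matrices), b = FIR coefficients [b0..bq],
    Sigma = covariance of the VAR innovations."""
    M = A_list[0].shape[0]
    p, q = len(A_list), len(b) - 1
    B = [bl * np.eye(M) for bl in b]                 # B_l = b_l * I_M
    Ctil = np.hstack(A_list + B[1:])                 # C~ = [A1 .. Ap B1 .. Bq]
    L = M * (p + q)
    Atil = np.zeros((L, L))
    Atil[:M, :] = Ctil                               # first block row of A~
    for r in range(1, p):                            # shift register, past Y~
        Atil[r*M:(r+1)*M, (r-1)*M:r*M] = np.eye(M)
    for r in range(1, q):                            # shift register, past U
        Atil[(p+r)*M:(p+r+1)*M, (p+r-1)*M:(p+r)*M] = np.eye(M)
    Ktil = np.zeros((L, M))
    Ktil[:M, :] = np.eye(M)                          # Y~ block driven by E~_n
    if q > 0:
        Ktil[p*M:(p+1)*M, :] = np.linalg.inv(B[0])   # U_n = B0^{-1} E~_n
    Vtil = B[0] @ Sigma @ B[0].T                     # V~ = B0 Sigma B0^T
    return Atil, Ctil, Ktil, Vtil
```

The two identity shift registers reproduce the block-companion structure of Ã shown above, with the U_n block row left at zero because U_n enters the state only through K̃.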
Now, we turn to show how the downsampled process Ȳn can be represented through an ISS
model directly from the ISS formulation of the filtered process Ỹn . To this end, we exploit recent
theoretical findings providing the state space form of downsampled signals (Theorem III in Solo
(2016)). Accordingly, the SS representation of the process downsampled at scale τ , Ȳn = Ỹnτ has
parameters (Ā, C̄, Q̄, R̄, S̄), where Ā = Ãτ , C̄ = C̃, Q̄ = Qτ , R̄ = Ṽ and S̄ = Sτ , with Qτ
and Sτ given by:
Sτ = Ãτ−1 K̃Ṽ ,
Qτ = ÃQτ−1 ÃT + K̃ṼK̃T , τ ≥ 2 ,    (16)
Q1 = K̃ṼK̃T .
Therefore, the downsampled process has an ISS representation with state process Z̄n = Z̃nτ ,
innovation process Ēn = Ẽnτ and parameters (Ā, C̄, K̄, V̄), where K̄ and V̄ are obtained solving
the DARE (14) and (15) for the SS model with parameters (Ā, C̄, Q̄, R̄, S̄).
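The downsampling relations, including the recursion (16), translate directly into code; this is again an illustrative sketch with names of our choosing:

```python
import numpy as np
from numpy.linalg import matrix_power

def downsampled_ss(Atil, Ctil, Ktil, Vtil, tau):
    """SS parameters (A_, C_, Q_, R_, S_) of the process downsampled at
    scale tau, from the ISS parameters of the filtered process (Eq. 16)."""
    KVK = Ktil @ Vtil @ Ktil.T
    Q = KVK.copy()                      # Q_1
    for _ in range(tau - 1):            # recursion Q_t = A~ Q_{t-1} A~^T + KVK
        Q = Atil @ Q @ Atil.T + KVK
    A = matrix_power(Atil, tau)         # A_ = A~^tau
    S = matrix_power(Atil, tau - 1) @ Ktil @ Vtil   # S_tau
    return A, Ctil, Q, Vtil, S          # C_ = C~, R_ = V~
```

The returned quintuple can then be fed to a DARE solver to recover the Kalman gain K̄ and innovation covariance V̄ of the downsampled ISS model.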
To sum up, the relations and parametric representations of the original process Y , the filtered
process Ỹ and the downsampled process Ȳ are depicted in Figure 2a. The step of low pass filtering
(FLT) applied to a VAR(p) process yields a VARMA(p, q) process (where q is the filter order,
and the cutoff frequency is fτ = 1/2τ ); this process is equivalent to an ISS process Aoki and
Havenner (1991). The subsequent downsampling (DWS) yields a different SS process, which in
turn can be converted to the ISS form solving the DARE. Thus, both the filtered process Ỹn and
the downsampled process Ȳn can be represented as ISS processes with parameters (Ã, C̃, K̃, Ṽ)
and (Ā, C̄, K̄, V̄) which can be derived analytically from the knowledge of the parameters of
the original process (A1 , . . . , Ap , Σ) and of the filter (q, fτ ). In the next section, we show how to
compute analytically any measure appearing in the information decomposition of a jointly Gaussian
multivariate stochastic process starting from its associated ISS model parameters, thus opening the
way to the analytical computation of these measures for multiscale (filtered and downsampled)
processes.
Figure 2: Schematic representation of a linear VAR process and of its multiscale representation obtained through filtering (FLT) and downsampling (DWS) steps. The downsampled
process has an innovations form state space model (ISS) representation from which submodels can be formed to compute the partial variances needed for the computation of
information measures appearing in the IID and PID decompositions. This makes it possible to perform multiscale information decomposition analytically from the original VAR
parameters and from the scale factor.
3.3 Multiscale IID and PID
After introducing the general theory of information decomposition and deriving the multiscale
representation of the parameters of a linear VAR model, in this section, we provide expressions
for the terms of the IID and PID decompositions of the information transfer valid for multivariate
jointly Gaussian processes. The derivations are based on the knowledge that the linear parametric
representation of Gaussian processes given in (8) captures all of the entropy differences that define
the various information measures Barrett et al. (2010) and that these entropy differences are related
to the partial variances of the present of the target given its past and the past of one or more sources,
intended as variances of the prediction errors resulting from linear regression Faes et al. (2015,
2017). Specifically, let us denote as Ej|j,n = Yj,n − E[Yj,n |Yj,n− ], Ej|ij,n = Yj,n − E[Yj,n |Yi,n− , Yj,n− ] the prediction errors of a linear regression of Yj,n performed respectively on Yj,n− and (Yj,n− , Yi,n− ), and as λj|j = E[E²j|j,n ], λj|ij = E[E²j|ij,n ] the corresponding prediction error variances. Then, the
TE from Yi to Yj can be expressed as:
Ti→j = (1/2) ln (λj|j / λj|ij ) .    (17)
In a similar way, the joint TE from (Yi , Yk ) to Yj can be defined as:
Tik→j = (1/2) ln (λj|j / λj|ijk ) ,    (18)
where λj|ijk = E[E²j|ijk,n ] is the variance of the prediction error of a linear regression of Yj,n on (Yj,n− , Yi,n− , Yk,n− ), Ej|ijk,n = Yj,n − E[Yj,n |Yi,n− , Yj,n− , Yk,n− ]. Based on these derivations, one
can easily complete the IID decomposition of the TE by computing Tk→j as in (17) and deriving the interaction TE from (3); the PID decomposition is completed as well by deriving the redundant TE from (7), the synergistic TE from (6) and the unique TEs from (5).
Next, we show how to compute any partial variance from the parameters of an ISS model in the
form of (13) Barnett and Seth (2015), Solo (2016). The partial variance λj|a , where the subscript
a denotes any combination of indexes ∈ {1, . . . , M }, can be derived from the ISS representation
of the innovations of a submodel obtained removing the variables not indexed by a from the observation equation. Specifically, we need to consider the submodel with state Equation (13a) and
observation equation:
Yn(a) = C(a) Zn + En(a) ,    (19)
where the superscript (a) denotes the selection of the rows with indices a of a vector or a matrix.
It is important to note that the submodels (13a) and (19) are not in innovations form, but are
rather an SS model with parameters (A, C(a) , KVKT , V(a, a), KV(:, a)). This SS model can be
converted to an ISS model with innovation covariance V(a) solving the DARE (14) and (15), so that
the partial variance λj|a is derived as the diagonal element of V(a) corresponding to the position
of the target Yj . Thus, with this procedure, it is possible to compute the partial variances needed
for the computation of the information measures starting from a set of ISS model parameters; since
any VAR process can be represented at scale τ as an ISS process, the procedure allows computing
the IID and PID information decompositions for the rescaled multivariate process (see Figure 2).
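The submodel procedure just described, combined with Equation (17), can be sketched as follows; the DARE is solved by plain fixed-point iteration, which assumes a stable model, and all function names are ours:

```python
import numpy as np

def iss_innovation_cov(A, C, Q, R, S, n_iter=2000):
    """Innovation covariance of the ISS form of the SS model (A, C, Q, R, S),
    obtained by iterating the DARE (14) and applying (15)."""
    P = Q.copy()
    for _ in range(n_iter):
        V = C @ P @ C.T + R
        K = (A @ P @ C.T + S) @ np.linalg.inv(V)
        P = A @ P @ A.T + Q - K @ V @ K.T
    return C @ P @ C.T + R

def partial_variance(A, C, K, V, idx, j):
    """lambda_{j|a}: partial variance of target j in the submodel keeping
    only the observed variables listed in idx (Eq. 19); j must be in idx."""
    Ca = C[idx, :]                                   # rows indexed by a
    Va = iss_innovation_cov(A, Ca, K @ V @ K.T,
                            V[np.ix_(idx, idx)], K @ V[:, idx])
    return Va[idx.index(j), idx.index(j)]

def transfer_entropy(A, C, K, V, i, j):
    """T_{i->j} from the partial variances via Eq. (17)."""
    lam_j = partial_variance(A, C, K, V, [j], j)
    lam_ij = partial_variance(A, C, K, V, sorted({i, j}), j)
    return 0.5 * np.log(lam_j / lam_ij)
```

As a check, for a bivariate VAR(1) written in ISS form (A = C = A1, K = I, V = Σ) with unidirectional coupling, the TE along the coupled direction is positive while the TE along the uncoupled direction vanishes.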
It is worth remarking that, while the general formulation of IID and PID decompositions introduced in Section 2 holds for arbitrary processes, the multiscale extension detailed in Section 3
is exact only if the processes have a joint Gaussian distribution. In such a case, the linear VAR
representation captures exhaustively the joint variability of the processes, and any nonlinear extension has no additional utility (a formal proof of the fact that a stationary Gaussian VAR process
must be linear can be found in Barrett et al. (2010)). If, on the contrary, non-Gaussian processes
are under scrutiny, the linear representation provided in Section 3.1 can still be adopted, but may
miss important properties in the dynamics and thus provide only a partial description. Moreover,
since the close correspondence between conditional entropies and partial variances reported in this
subsection does not hold anymore for non-Gaussian processes, all of the obtained measures should
be regarded as indexes of (linear) predictability rather than as information measures.
4 Simulation Experiment
To study the multiscale patterns of information transfer in a controlled setting with known dynamical interactions between time series, we consider a simulation scheme similar to some already
used for the assessment of theoretical values of information dynamics Faes et al. (2015, 2017).
Specifically, we analyze the following VAR process composed of M = 4 scalar processes:
Y1,n = 2ρ1 cos(2πf1 ) Y1,n−1 − ρ1² Y1,n−2 + U1,n ,    (20a)
Y2,n = 2ρ2 cos(2πf2 ) Y2,n−1 − ρ2² Y2,n−2 + cY1,n−1 + U2,n ,    (20b)
Y3,n = 2ρ3 cos(2πf3 ) Y3,n−1 − ρ3² Y3,n−2 + cY1,n−1 + U3,n ,    (20c)
Y4,n = bY2,n−1 + (1 − b)Y3,n−1 + U4,n ,    (20d)
where Un = [U1,n · · · U4,n ]T is a vector of zero-mean white Gaussian noises with unit variance and uncorrelated with each other (Σ = I). The parameter design in Equation (20) is chosen to allow
autonomous oscillations in the processes Yi , i = 1, . . . , 3, obtained placing complex-conjugate
poles with modulus ρi and frequency fi in the complex plane representation of the transfer function
of the vector process, as well as causal interactions between the processes at a fixed time lag of one
sample and with strength modulated by the parameters b and c (see Figure 3). In this study, we
set the coefficients related to self-dependencies to values generating well-defined oscillations in
all processes (ρ1 = ρ2 = ρ3 = 0.95) and letting Y1 fluctuate at faster time scales than Y2 and Y3
(f1 = 0.1, f2 = f3 = 0.025). We consider four configurations of the parameters, chosen to
reproduce paradigmatic conditions of interaction between the processes:
(a) isolation of Y1 and Y2 and unidirectional coupling Y3 → Y4 , obtained setting b = c = 0;
(b) common driver effects Y2 ← Y1 → Y3 and unidirectional coupling Y3 → Y4 , obtained setting b = 0 and c = 1;
(c) isolation of Y1 and unidirectional couplings Y2 → Y4 and Y3 → Y4 , obtained setting b = 0.5 and c = 0;
(d) common driver effects Y2 ← Y1 → Y3 and unidirectional couplings Y2 → Y4 and Y3 → Y4 , obtained setting b = 0.5 and c = 1.
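For concreteness, the simulated system of Equation (20) can be generated with a few lines of numpy (our sketch, not the authors' original simulation code):

```python
import numpy as np

def var_coefficients(b, c, rho=(0.95, 0.95, 0.95), f=(0.1, 0.025, 0.025)):
    """Coefficient matrices A1, A2 of the VAR(2) process in Eq. (20)."""
    A1 = np.zeros((4, 4))
    A2 = np.zeros((4, 4))
    for i in range(3):                        # self-oscillations of Y1, Y2, Y3
        A1[i, i] = 2 * rho[i] * np.cos(2 * np.pi * f[i])
        A2[i, i] = -rho[i] ** 2
    A1[1, 0] = c                              # Y1 -> Y2
    A1[2, 0] = c                              # Y1 -> Y3
    A1[3, 1] = b                              # Y2 -> Y4
    A1[3, 2] = 1 - b                          # Y3 -> Y4
    return A1, A2

def simulate(A1, A2, n=1000, seed=0):
    """Generate one realization of Eq. (20) with Sigma = I."""
    rng = np.random.default_rng(seed)
    Y = np.zeros((n, 4))
    U = rng.standard_normal((n, 4))
    for t in range(2, n):
        Y[t] = A1 @ Y[t - 1] + A2 @ Y[t - 2] + U[t]
    return Y
```

Configurations (a)–(d) then correspond to `var_coefficients(0, 0)`, `var_coefficients(0, 1)`, `var_coefficients(0.5, 0)` and `var_coefficients(0.5, 1)`.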
Figure 3: Graphical representation of the four-variate VAR process of Equation (20) that
we use to explore the multiscale decomposition of the information transferred to Y4 , selected
as the target process, from Y2 and Y3 , selected as the source processes, in the presence of Y1 ,
acting as the exogenous process. To favor such exploration, we set oscillations at different
time scales for Y1 (f1 = 0.1) and for Y2 and Y3 (f2 = f3 = 0.025), induce common driver
effects from the exogenous process to the sources modulated by the parameter c and allow
for varying strengths of the causal interactions from the sources to the target as modulated
by the parameter b. The four configurations explored in this study are depicted in (a–d).
With this simulation setting, we compute all measures appearing in the IID and PID decompositions of the information transfer, considering Y4 as the target process and Y2 and Y3 as the source
processes. The theoretical values of these measures, computed as a function of the time scale using
the IID and the PID, are reported in Figure 4. In the simple case of unidirectional coupling Y3 → Y4
(b = c = 0, Figure 4a), the joint information transferred from (Y2 , Y3 ) to Y4 is exclusively due to
the source Y3 without contributions from Y2 and without interaction effects between the sources
(T23→4 = T3→4 = U3→4 , T2→4 = U2→4 = 0, I23→4 = S23→4 = R23→4 = 0).
When the causal interactions towards Y4 are still due exclusively to Y3 , but the two sources
Y2 and Y3 share information arriving from Y1 (b = 0, c = 1; Figure 4b), the IID evidences that
the joint information transfer coincides again with the transfer from Y3 (T23→4 = T3→4 ), but a
non-trivial amount of information transferred from Y2 to Y4 emerges, which is fully redundant
(T2→4 = −I23→4 ). The PID highlights that the information from Y3 to Y4 is not all unique, but
is in part transferred redundantly with Y2 , while the unique transfer from Y2 and the synergistic
transfer are negligible.
In the case of two isolated sources equally contributing to the target (b = 0.5, c = 0, Figure
4c), the IID evidences the presence of net synergy and of identical amounts of information transferred to Y4 from Y2 or Y3 (I23→4 > 0, T2→4 = T3→4 ). The PID documents that there are no
unique contributions, so that the two amounts of information transfer from each source to the target coincide with the redundant transfer, and the remaining part of the joint transfer is synergistic
(U2→4 = U3→4 = 0, T2→4 = T3→4 = R23→4 , S23→4 = T23→4 − R23→4 ).
Finally, when the two sources share common information and contribute equally to the target
(b = 0.5, c = 1; Figure 4d), we find that they send the same amount of information as before, but in
this case, no unique information is sent by any of the sources (T2→4 = T3→4 , U2→4 = U3→4 = 0).
Moreover, the nature of the interaction between the sources is not trivial and is scale dependent:
at low time scales, where the dynamics are likely dominated by the fast oscillations of Y1 , the IID
reveals net redundancy, and the PID shows that the redundant transfer prevails over the synergistic
(I23→4 < 0, R23→4 > S23→4 ); at higher time scales, where fast dynamics are filtered out and
the slow dynamics of Y2 and Y3 prevail, the IID reveals net synergy, and the PID shows that the
synergistic transfer prevails over the redundant (I23→4 > 0, S23→4 > R23→4 ).
Figure 4: Multiscale information decomposition for the simulated VAR process of Equation (20). Plots depict the exact values of the entropy measures forming the interaction
information decomposition (IID, upper row) and the partial information decomposition
(PID, lower row) of the information transferred from the source processes Y2 and Y3 to
the target process Y4 generated according to the scheme of Figure 3 with four different
configurations of the parameters. We find that linear processes may generate trivial information patterns with the absence of synergistic or redundant behaviors (a), patterns with
the prevalence of redundant information transfer (b) or synergistic information transfer
(c) that persist across multiple time scales, or even complex patterns with the alternating
prevalence of redundant transfer and synergistic transfer at different time scales (d).
5 Application
As a real data application, we analyze intracranial EEG recordings from a patient with drug-resistant epilepsy measured by an implanted array of 8 × 8 cortical electrodes and two left hippocampal depth electrodes with six contacts each. The data are available in epi, and further details
on the dataset are given in Kramer et al. (2008). Data were sampled at 400 Hz and correspond to
10-s segments recorded in the pre-ictal period, just before the seizure onset, and 10 s during the
ictal stage of the seizure, for a total of eight seizures. Defining and locating the seizure onset zone,
i.e., the specific location in the brain where the synchronous activity of neighboring groups of cells
becomes so strong as to be able to spread its own activity to other distant regions, is an important issue in the study of epilepsy in humans. Here, we focus on the information flow from the
sub-cortical regions, probed by depth electrodes, to the brain cortex. In Stramaglia et al. (2014), it
has been suggested that Contacts 11 and 12, in the second depth electrode, are mostly influencing
the cortical activity; accordingly, in this work, we consider Channels 11 and 12 as a pair of source
variables for all of the cortical electrodes and decompose the information flowing from them using
the multiscale IID and PID here proposed, both in the pre-ictal stage and in the ictal stage. An FIR
filter with q = 12 coefficients is used, and the order p of the VAR model is fixed according to the
Bayesian information criterion. In the analyzed dataset, the model order assessed in the pre-ictal
phase was p = 14.61 ± 1.07 (mean ± std. dev. across 64 electrodes and eight seizures) and during
the ictal phase decreased significantly to p = 11.09 ± 3.95.
In Figure 5, we depict the terms of the IID applied from the two sources (Channels {11, 12})
to any of the electrodes as a function of the scale τ , averaged over the eight seizures. We observe
a relevant enhancement of the joint TE during the seizure, w.r.t. the pre-ictal period. This enhancement is determined by a marked increase of both the individual TEs from Channels 11 and 12 to all
of the cortical electrodes; the patterns of the two TEs are similar to each other in both stages. The
pattern of interaction information transfer displays prevalent redundant transfer for low values of
τ and prevalent synergistic transfer for high τ , but the values of the interaction TE have relatively
low magnitude and are only slightly different in pre-ictal and ictal conditions. It is worth stressing
that at scale τ , the algorithm analyzes oscillations in the time series with periods longer than 2τ /fs s, where fs = 400 Hz.
Figure 5: Interaction information decomposition (IID) of the intracranial EEG information
flow from subcortical to cortical regions in an epileptic patient. The joint transfer entropy
from depth Channels 11 and 12 to cortical electrodes (a); the transfer entropy from depth
Channel 11 to cortical electrodes (b); the transfer entropy from depth Channel 12 to
cortical electrodes (c) and the interaction transfer entropy from depth Channels 11 and
12 to cortical electrodes (d) are depicted as a function of the scale τ , after averaging over
the eight pre-ictal segments (left column) and over the eight ictal segments (right column).
Compared with pre-ictal periods, during the seizure, the IID evidences marked increases
of the joint and individual information transfer from depth to cortical electrodes and low
and almost unvaried levels of interaction transfer.
In Figure 6, we depict, on the other hand, the terms of the PID computed for the same data.
This decomposition shows that the increased joint TE across the seizure transition seen in Figure 5a
is in large part the result of an increase of both the synergistic and the redundant TE, which are
markedly higher during the ictal stage compared with the pre-ictal. This explains why the interaction TE of Figure 5d, which is the difference between two quantities that both increase, is nearly
constant moving from the pre-ictal to the ictal stage. The quantity that, instead, clearly differentiates between Channels 11 and 12 is the unique information transfer: indeed, only the unique TE
from Channel 12 increases in the ictal stage, while the unique TE from Channel 11 remains at low
levels.
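The bookkeeping behind this argument can be made explicit. The PID and IID terms are tied together by linear identities: the joint TE splits into the two unique terms plus redundancy plus synergy, each individual TE equals its unique term plus the redundancy, and the interaction TE equals synergy minus redundancy. The sketch below illustrates these identities on made-up numbers (not the patient data); the function name and toy values are illustrative assumptions.

```python
# Bookkeeping of the decompositions discussed above (Williams & Beer-style PID):
#   T_joint = U1 + U2 + R + S,   T_i = U_i + R,   interaction TE = S - R.
# Given joint and individual TEs plus a redundancy value, the remaining
# atoms follow. Toy values only; nothing here is estimated from data.

def pid_from_tes(t_joint, t1, t2, redundancy):
    """Return (unique1, unique2, synergy) implied by the PID identities."""
    u1 = t1 - redundancy
    u2 = t2 - redundancy
    synergy = t_joint - u1 - u2 - redundancy
    return u1, u2, synergy

t_joint, t1, t2, r = 0.9, 0.5, 0.4, 0.3   # transfer entropies in nats
u1, u2, s = pid_from_tes(t_joint, t1, t2, r)
interaction_te = s - r                     # the IID "interaction TE"
```

In this toy configuration synergy and redundancy are equal, so the interaction TE is zero even though both atoms are large, mirroring the situation described for Figure 5d.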
Figure 6: Partial information decomposition (PID) of the intracranial EEG information
flow from subcortical to cortical regions in an epileptic patient. The synergistic transfer
entropy from depth Channels 11 and 12 to cortical electrodes (a); the redundant transfer
entropy from depth Channels 11 and 12 to cortical electrodes (b); the unique transfer
entropy from depth Channel 11 to cortical electrodes (c) and the unique transfer entropy
from depth Channel 12 to cortical electrodes (d) are depicted as a function of the scale
τ , after averaging over the eight pre-ictal segments (left column) and over the eight ictal
segments (right column). Compared with pre-ictal periods, during the seizure, the PID
evidences marked increases of the information transferred synergistically and redundantly
from depth to cortical electrodes and of the information transferred uniquely from one of
the two depth electrodes, but not from the other.
In order to investigate the variability across trials of the estimates of the various information
measures, in Figure 7, we depict the terms of both IID and PID expressed for each ictal episode as
average values over all 64 cortical electrodes. The analysis shows that the higher average values
observed in Figures 5 and 6 at Scales 1–4 during the ictal state for the joint TE, the two individual
TEs, the redundant and synergistic TEs and the unique TE from depth Channel 12 are the result of
an increase of the measures for almost all of the observed seizure episodes.
These findings are largely in agreement with the increasing awareness that epilepsy is a network
phenomenon that involves aberrant functional connections across vast parts of the brain on virtually
all spatial scales Richardson (2012), Dickten et al. (2016). Indeed, our results document that the
occurrence of seizures is associated with a relevant increase of the information flowing from the
subcortical regions (associated with the depth electrode) to the cortex and that the character of this
information flow is mostly redundant both in the pre-ictal and in the ictal state. Here, the need for
a multiscale approach is attested by the fact that several quantities in the ictal state (e.g., the joint
TE, the synergistic information transfer and the unique information transfer from Channel 12) attain their maximum at scale τ > 1.
Moreover, the approaches that we propose for information decomposition appear useful to
improve the localization of epileptogenic areas in patients with drug-resistant epilepsy. Indeed,
our analysis suggests that Contact 12 is the closest to the seizure onset zone, and it is driving the
cortical oscillations during the ictal stage, as it sends unique information to the cortex. On the other
hand, disentangling this effect required including Channel 11 in the analysis as well and performing
the PID of the total information flowing from the pair of depth channels to the cortex; indeed, the
redundancy between Channels 11 and 12 confounds the informational pattern unless the PID is
performed.
Figure 7: Multiscale representation of the measures of interaction information decomposition (IID, top) and partial information decomposition (PID, bottom) computed as a
function of the time scale for each of the eight seizures during the pre-ictal period (black)
and the ictal period (red). Values of joint transfer entropy (TE), individual TE, interaction
TE, redundant TE, synergistic TE and unique TE are obtained taking the depth Channels
11 and 12 as sources and averaging over all 64 target cortical electrodes. Increases during
seizure of the joint TE, individual TEs from both depth electrodes, redundant and synergistic TE and unique TE from the depth electrode 12 are evident at low time scales for
almost all considered episodes.
6
Conclusions
Understanding how multiple inputs may combine to create the output of a given target is a fundamental challenge in many fields, in particular in neuroscience. Shannon's information theory is the
most suitable framework in which to cope with this problem and thus to assess the informational character of multiplets of variables describing complex systems; indeed, the IID measures the balance between redundant and synergistic interaction within the classical multivariate entropy-based approach. Recently,
Shannon's information theory has been extended, through the PID, so as to provide specific measures
for the information that several variables convey individually (unique information), redundantly
(shared information) or only jointly (synergistic information) about the output.
The contribution of the present work is the proposal of an analytical framework in which both IID
and PID can be evaluated exactly, in a multiscale fashion, for multivariate Gaussian processes, on
the basis of simple vector autoregressive identification. In doing so, our work opens the way
to both the theoretical analysis and the practical implementation of information modification in
processes that exhibit multiscale dynamical structures. The effectiveness of the proposed approach
has been demonstrated both on simulated examples and on real, publicly available intracranial EEG
data. Our results provide firm ground for the multiscale evaluation of the PID, to be applied
wherever causal influences coexist at multiple temporal scales.
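At the native scale (τ = 1), the Gaussian, linear-model route used throughout the paper reduces transfer entropy to a ratio of residual variances of two nested linear regressions. The sketch below estimates it by ordinary least squares on p lagged regressors; it is a minimal illustration under Gaussian assumptions, not the state-space estimator developed in the paper, and the function name and toy coupled signals are assumptions of this example.

```python
import numpy as np

def transfer_entropy_linear(source, target, p=2):
    """Gaussian transfer entropy source -> target (nats) at scale tau = 1,
    estimated as 0.5*log(resid_var(reduced)/resid_var(full)) from ordinary
    least squares on p lagged regressors. A minimal sketch, not the
    state-space machinery of the paper."""
    n = len(target)
    Y = target[p:]
    lags = lambda v: [v[p - k:n - k] for k in range(1, p + 1)]
    reduced = np.column_stack(lags(target))              # target's own past
    full = np.column_stack(lags(target) + lags(source))  # + source's past
    def resid_ms(X):
        beta, *_ = np.linalg.lstsq(X, Y, rcond=None)
        r = Y - X @ beta
        return np.mean(r**2)
    return 0.5 * np.log(resid_ms(reduced) / resid_ms(full))

# Toy unidirectionally coupled pair: x drives y, not vice versa.
rng = np.random.default_rng(0)
n = 5000
x = rng.standard_normal(n)
y = np.zeros(n)
for t in range(1, n):
    y[t] = 0.5 * y[t - 1] + 0.8 * x[t - 1] + 0.1 * rng.standard_normal()
te_xy = transfer_entropy_linear(x, y)
te_yx = transfer_entropy_linear(y, x)
```

Because the full regressor set nests the reduced one, the estimate is nonnegative by construction, and for this toy pair the x → y direction dominates.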
Future developments of this work include the refinement of the SS model structure to accommodate the description of long-range linear correlations Sela and Hurvich (2009), its expansion
to the description of nonstationary processes Kitagawa (1987), and the formalization of exact cross-scale computation of information decomposition within and between multivariate processes Paluš
(2014). A major challenge in the field remains the generalization of this type of analysis to non-Gaussian processes, for which exact analytical solutions or computationally reliable estimation
approaches are still lacking. This constitutes a main direction for further research, because real-world processes very often display non-Gaussian distributions, which would make an extension
to nonlinear models or model-free approaches beneficial. The questions that remain open in this
respect include the evaluation of proper theoretical definitions of synergy and redundancy for nonlinear processes Williams and Beer (2010), Harder et al. (2013), Griffith et al. (2014), Quax et al.
(2017), Bertschinger et al. (2014), the development of reliable entropy estimators for multivariate
variables with different dimensions Wibral et al. (2014), Faes et al. (2015), Papana et al. (2011)
and the assessment of the extent to which nonlinear model-free methods really outperform the
linear model-based approach adopted here and in previous investigations Porta et al. (2017).
References
Lizier, J.T.; Prokopenko, M.; Zomaya, A.Y. A framework for the local information dynamics of
distributed computation in complex systems. In Guided Self-Organization: Inception; Springer:
Berlin, Heidelberg, Germany, 2014; pp. 115–158.
Pincus, S. Approximate entropy (ApEn) as a complexity measure. Chaos Interdiscip. J. Nonlinear
Sci. 1995, 5, 110–117.
Lizier, J.T.; Prokopenko, M.; Zomaya, A.Y. Local measures of information storage in complex
distributed computation. Inf. Sci. 2012, 208, 39–54.
Wibral, M.; Lizier, J.T.; Vögler, S.; Priesemann, V.; Galuske, R. Local active information storage
as a tool to understand distributed neural information processing. Front. Neuroinform. 2014, 8,
doi:10.3389/fninf.2014.00001.
Schreiber, T. Measuring information transfer. Phys. Rev. Lett. 2000, 85, 461.
Wibral, M.; Vicente, R.; Lizier, J.T. Directed Information Measures in Neuroscience; Springer:
Berlin, Heidelberg, Germany, 2014.
Lizier, J.T.; Prokopenko, M.; Zomaya, A.Y. Information modification and particle collisions in
distributed computation. Chaos Interdiscip. J. Nonlinear Sci. 2010, 20, 037109.
Wibral, M.; Lizier, J.T.; Priesemann, V. Bits from brains for biologically inspired computing.
Front. Robot. Artif. Intell. 2015, 2, doi:10.3389/frobt.2015.00005.
Lizier, J.T.; Pritam, S.; Prokopenko, M. Information dynamics in small-world Boolean networks.
Artif. Life 2011, 17, 293–314.
Wibral, M.; Rahm, B.; Rieder, M.; Lindner, M.; Vicente, R.; Kaiser, J. Transfer entropy in magnetoencephalographic data: Quantifying information flow in cortical and cerebellar networks.
Prog. Biophys. Mol. Biol. 2011, 105, 80–97.
Hlinka, J.; Hartman, D.; Vejmelka, M.; Runge, J.; Marwan, N.; Kurths, J.; Paluš, M. Reliability
of inference of directed climate networks using conditional mutual information. Entropy 2013,
15, 2023–2045.
Barnett, L.; Lizier, J.T.; Harré, M.; Seth, A.K.; Bossomaier, T. Information flow in a kinetic Ising
model peaks in the disordered phase. Phys. Rev. Lett. 2013, 111, 177203.
Marinazzo, D.; Pellicoro, M.; Wu, G.; Angelini, L.; Cortés, J.M.; Stramaglia, S. Information transfer and criticality in the Ising model on the human connectome. PLoS ONE 2014, 9, e93616.
Faes, L.; Nollo, G.; Jurysta, F.; Marinazzo, D. Information dynamics of brain–heart physiological
networks during sleep. New J. Phys. 2014, 16, 105005.
Faes, L.; Porta, A.; Nollo, G. Information decomposition in bivariate systems: Theory and application to cardiorespiratory dynamics. Entropy 2015, 17, 277–303.
Porta, A.; Faes, L.; Nollo, G.; Bari, V.; Marchi, A.; de Maria, B.; Takahashi, A.C.; Catai, A.M.
Conditional self-entropy and conditional joint transfer entropy in heart period variability during
graded postural challenge. PLoS ONE 2015, 10, e0132851.
Faes, L.; Porta, A.; Nollo, G.; Javorka, M. Information Decomposition in Multivariate Systems:
Definitions, Implementation and Application to Cardiovascular Networks. Entropy 2017, 19, 5.
Wollstadt, P.; Sellers, K.K.; Rudelt, L.; Priesemann, V.; Hutt, A.; Fröhlich, F.; Wibral, M. Breakdown of local information processing may underlie isoflurane anesthesia effects. PLoS Comput.
Biol. 2017, 13, e1005511.
Schneidman, E.; Bialek, W.; Berry, M.J. Synergy, redundancy, and independence in population
codes. J. Neurosci. 2003, 23, 11539–11553.
Stramaglia, S.; Wu, G.R.; Pellicoro, M.; Marinazzo, D. Expanding the transfer entropy to identify
information circuits in complex systems. Phys. Rev. E 2012, 86, 066211.
Stramaglia, S.; Cortes, J.M.; Marinazzo, D. Synergy and redundancy in the Granger causal analysis
of dynamical networks. New J. Phys. 2014, 16, 105003.
Stramaglia, S.; Angelini, L.; Wu, G.; Cortes, J.; Faes, L.; Marinazzo, D. Synergetic and Redundant
Information Flow Detected by Unnormalized Granger Causality: Application to Resting State
fMRI. IEEE Trans. Biomed. Eng. 2016, 63, 2518–2524.
McGill, W. Multivariate information transmission. Trans. IRE Prof. Group Inf. Theory 1954,
4, 93–111.
Bell, A.J. The co-information lattice. In Proceedings of the Fourth International Symposium on
Independent Component Analysis and Blind Signal Separation (ICA), Nara, Japan, 1–4 April
2003.
Williams, P.L.; Beer, R.D. Nonnegative decomposition of multivariate information. arXiv 2010,
arXiv:1004.2515.
Harder, M.; Salge, C.; Polani, D. Bivariate measure of redundant information. Phys. Rev. E 2013,
87, 012130.
Griffith, V.; Chong, E.K.; James, R.G.; Ellison, C.J.; Crutchfield, J.P. Intersection information
based on common randomness. Entropy 2014, 16, 1985–2000.
Quax, R.; Har-Shemesh, O.; Sloot, P.M.A. Quantifying Synergistic Information Using Intermediate
Stochastic Variables. Entropy 2017, 19, 85.
Bertschinger, N.; Rauh, J.; Olbrich, E.; Jost, J.; Ay, N. Quantifying unique information. Entropy
2014, 16, 2161–2183.
Panzeri, S.; Senatore, R.; Montemurro, M.A.; Petersen, R.S. Correcting for the sampling bias
problem in spike train information measures. J. Neurophysiol. 2007, 98, 1064–1072.
Faes, L.; Porta, A. Conditional entropy-based evaluation of information dynamics in physiological systems. In Directed Information Measures in Neuroscience; Springer: Berlin, Heidelberg,
Germany, 2014; pp. 61–86.
Kozachenko, L.; Leonenko, N.N. Sample estimate of the entropy of a random vector. Probl.
Pereda. Inf. 1987, 23, 9–16.
Vlachos, I.; Kugiumtzis, D. Nonuniform state-space reconstruction and coupling detection. Phys.
Rev. E 2010, 82, 016207.
Marinazzo, D.; Pellicoro, M.; Stramaglia, S. Causal information approach to partial conditioning
in multivariate data sets. Comput. Math. Methods Med. 2012, 2012, 303601.
Faes, L.; Kugiumtzis, D.; Nollo, G.; Jurysta, F.; Marinazzo, D. Estimating the decomposition of
predictive information in multivariate systems. Phys. Rev. E 2015, 91, 032904
Barrett, A.B. Exploration of synergistic and redundant information sharing in static and dynamical
Gaussian systems. Phys. Rev. E 2015, 91, 052802.
Barrett, A.B.; Barnett, L.; Seth, A.K. Multivariate Granger causality and generalized variance.
Phys. Rev. E 2010, 81, 041907.
Porta, A.; Bari, V.; de Maria, B.; Takahashi, A.C.; Guzzetti, S.; Colombo, R.; Catai, A.M.; Raimondi, F.; Faes, L. Quantifying Net Synergy/Redundancy of Spontaneous Variability Regulation
via Predictability and Transfer Entropy Decomposition Frameworks. IEEE Trans. Biomed. Eng.
2017, doi:10.1109/TBME.2017.2654509.
Ivanov, P.; Nunes Amaral, L.; Goldberger, A.; Havlin, S.; Rosenblum, M.; Struzik, Z.; Stanley, H.
Multifractality in human heartbeat dynamics. Nature 1999, 399, 461–465.
Chou, C.M. Wavelet-based multi-scale entropy analysis of complex rainfall time series. Entropy
2011, 13, 241–253.
Wang, J.; Shang, P.; Zhao, X.; Xia, J. Multiscale entropy analysis of traffic time series. Int. J. Mod.
Phys. C 2013, 24, 1350006.
Costa, M.; Goldberger, A.L.; Peng, C.K. Multiscale entropy analysis of complex physiologic time
series. Phys. Rev. Lett. 2002, 89, 068102.
Valencia, J.; Porta, A.; Vallverdú, M.; Clariá, F.; Baranowski, R.; Orłowska-Baranowska, E.; Caminal, P. Refined multiscale entropy: Application to 24-h holter recordings of heart period variability in healthy and aortic stenosis subjects. IEEE Trans. Biomed. Eng. 2009, 56, 2202–2213.
Barnett, L.; Seth, A.K. Granger causality for state-space models. Phys. Rev. E 2015, 91, 040101.
Solo, V. State-space analysis of Granger-Geweke causality measures with application to fMRI.
Neural Comput. 2016, 28, 914–949.
Florin, E.; Gross, J.; Pfeifer, J.; Fink, G.R.; Timmermann, L. The effect of filtering on Granger
causality based multivariate causality measures. Neuroimage 2010, 50, 577–588.
Barnett, L.; Seth, A.K. Detectability of Granger causality for subsampled continuous-time neurophysiological processes. J. Neurosci. Methods 2017, 275, 93–121.
Faes, L.; Montalto, A.; Stramaglia, S.; Nollo, G.; Marinazzo, D. Multiscale Analysis of Information Dynamics for Linear Multivariate Processes. arXiv 2016, arXiv:1602.06155.
Faes, L.; Nollo, G.; Stramaglia, S.; Marinazzo, D. Multiscale Granger causality. arXiv 2017,
arXiv:1703.08487.
Aoki, M.; Havenner, A. State space modeling of multiple time series. Econom. Rev. 1991, 10, 1–
59.
Oppenheim, A.V.; Schafer, R.W. Digital Signal Processing; Prentice-Hall: Englewood Cliffs, NJ,
USA, 1975.
Hannan, E.J.; Deistler, M. The Statistical Theory of Linear Systems; Society for Industrial and
Applied Mathematics: Philadelphia, PA, USA, 2012; Volume 70.
Aoki, M. State Space Modeling of Time Series; Springer Science & Business Media: Berlin,
Heidelberg, Germany, 2013.
Earth System Research Laboratory. Available online: http://math.bu.edu/people/kolaczyk/datasets.html (accessed on 5 May 2017).
Kramer, M.A.; Kolaczyk, E.D.; Kirsch, H.E. Emergent network topology at seizure onset in humans. Epilepsy Res. 2008, 79, 173–186.
Richardson, M.P. Large scale brain models of epilepsy: Dynamics meets connectomics. J. Neurol
Neurosurg. Psychiatry 2012, 83, 1238–1248.
Dickten, H.; Porz, S.; Elger, C.E.; Lehnertz, K. Weighted and directed interactions in evolving
large-scale epileptic brain networks. Sci. Rep. 2016, 6, 34824.
Sela, R.J.; Hurvich, C.M. Computationally efficient methods for two multivariate fractionally
integrated models. J. Time Ser. Anal. 2009, 30, 631–651.
Kitagawa, G. Non-Gaussian state-space modeling of nonstationary time series. J. Am. Stat. Assoc.
1987, 82, 1032–1041.
Paluš, M. Cross-scale interactions and information transfer. Entropy 2014, 16, 5263–5289.
Papana, A.; Kugiumtzis, D.; Larsson, P. Reducing the bias of causality measures. Phys. Rev. E
2011, 83, 036207.
Porta, A.; de Maria, B.; Bari, V.; Marchi, A.; Faes, L. Are Nonlinear Model-Free Conditional
Entropy Approaches for the Assessment of Cardiac Control Complexity Superior to the Linear
Model-Based One? IEEE Trans. Biomed. Eng. 2017, 64, 1287–1296.
arXiv:1708.05026v1 [] 16 Aug 2017
Adjusting systematic bias in high
dimensional principal component scores
Sungkyu Jung
Department of Statistics, University of Pittsburgh, Pittsburgh, PA 15260, USA
e-mail: [email protected]
Abstract: Principal component analysis continues to be a powerful tool
in dimension reduction of high dimensional data. We assume a variance-diverging model and use high-dimension, low-sample-size asymptotics
to show that, even though the principal component directions are not consistent, the sample and prediction principal component scores can be useful
in revealing the population structure. We further show that these scores are
biased, and that the bias asymptotically decomposes into a rotation part and a scaling
part. We propose methods of bias adjustment that are shown to be consistent and to work well in finite but high dimensional situations with small
sample sizes. The potential advantage of bias adjustment is demonstrated
in a classification setting.
Keywords and phrases: proportional bias, HDLSS, high-dimension, low-sample-size, jackknife, principal component analysis, pervasive factor.
1. Introduction
Principal component analysis is a workhorse method of multivariate analysis,
and has been used in a variety of fields for dimension reduction, visualization
and exploratory analysis. The standard estimates of principal components,
obtained by either the eigendecomposition of the sample covariance matrix or
the singular value decomposition of the data matrix, are now well known to be
inconsistent when the number of variables, or the dimension d, is much larger
than the sample size n (Paul, 2007; Johnstone & Lu, 2009; Jung & Marron,
2009). These observations were paralleled by a vast number of proposals on,
e.g., sparse principal component estimation (cf., most notably, Zou et al., 2006),
which performs better in some models with high dimensions.
However, the standard estimates of principal components continue to be
useful, partly due to available fast computations (see, e.g., Abraham & Inouye,
2014). Many of the sparse estimation methods, unfortunately, do not scale well computationally for large data with hundreds of thousands of variables.
Moreover, the standard estimation has been shown to be useful in some application
areas such as imaging, genomics and big-data analysis (Fan et al., 2014). In
these areas, the sample principal component scores (the projection scores of the
data points onto the principal component directions) are often used in the next
stage of analysis, such as regression and classification. The predicted principal
component scores of new observations can be obtained as well, and can be used
as the input to fitted models for prediction.
S. Jung/Bias in principal component scores
Fig 1. Sample and prediction principal component scores connected to their true values. This
toy data set of size (d, n) = (10000, 50) is generated from the spike model with m = 2 spikes
and polynomially-decreasing eigenvalues with β = 0.3; see Section 4.2 for details.
In this paper, we revisit the standard estimates of principal components in
ultra-high dimensions and reveal that, while the component directions and variances are inconsistent, the sample and prediction scores are useful for moderately large sample sizes. For low sample sizes, the scores are biased. We quantify
the bias, decompose it into two systematic parts, and propose to estimate bias-adjustment factors.
As a visual example of the systematic bias, a toy data set with two distinguishable principal components is simulated and plotted in Fig. 1. Each observation
in the data set consists of d = 10,000 variables. The first two sample principal
component directions are estimated from n = 50 observations, and are used
to obtain the sample and prediction scores (the latter are computed from 20
new observations). The true principal scores of each observation are also
plotted and connected to their empirical counterparts. This example visually
reveals that the sample scores are systematically biased, that is, uniformly rotated and stretched. What is more surprising is that the prediction scores are
also uniformly rotated, by the same angle as the sample scores, and uniformly
shrunk.
On the other hand, the third component scores from this example appear to
be quite arbitrary; see Fig. 2. (The estimates for component 3 in this example
are only as good as a random guess.) Moreover, unlike the first two components
plotted in Fig. 1, the sample scores of the third component are grossly inflated,
while the prediction scores are much smaller than the sample scores.
Fig 2. Sample and prediction principal component scores connected to their true values.
Models and data are the same as in Fig. 1.
In Section 2, we provide theoretical justification of the phenomenon observed
in Figs. 1 and 2, and asymptotically quantify the two parts of the systematic
bias. We assume m-component models with diverging variances, and use the
high-dimension, low-sample-size asymptotic scenario (i.e., d → ∞ while n is
fixed). The models and asymptotics are used to give the contrasting results
for the sample and prediction scores. The correlation coefficients between the
sample (or prediction) and true scores turn out to be close to 1 for large signals
and large sample sizes, indicating the situations in which the principal component
scores are most useful.
Since the bias is asymptotically quantified, the natural next step is to adjust
the bias by estimating the bias-adjustment factor. In Section 3, we propose a
simple, yet consistent, estimator and several variants based on the idea of the
jackknife. Adjusting these biases improves the performance of prediction modeling, and we demonstrate its potential with an example involving classification.
Results from numerical studies are summarized in Section 4.
There is a considerable body of related work on principal component scores in high
dimensions (Lee et al., 2010; Fan et al., 2013; Lee et al., 2014; Sundberg &
Feldmann, 2016; Shen et al., 2016; Hellton & Thoresen, 2017; Jung et al., 2017).
This paper is built upon these previous works; connections to them are discussed
in Section 5. A survey of high-dimension, low-sample-size asymptotics can be
found in Aoshima et al. (2017).
2. Asymptotic behavior of principal component scores
2.1. Model and assumptions
Let X = [X_1, . . . , X_n] be a d × n data matrix, where the columns X_i are mutually
independent, each with zero mean and covariance matrix Σ_d. Population principal components are obtained by the eigendecomposition Σ_d = UΛU^T, where
Λ = diag(λ_1, . . . , λ_d) is the diagonal matrix of principal component variances
and U = [u_1, . . . , u_d] consists of principal component directions. For a fixed m,
we assume an m-component model, in which the first m component variances are
distinguishably larger than the rest. Specifically, the larger variances increase
at the same rate as the dimension d, i.e. λ_i ≍ d, which was previously noted as
the “boundary situation” (Jung et al., 2012). This diverging-variance condition
seems to be more realistic than the other, simpler cases λ_i ≫ d and λ_i ≪ d (Hellton & Thoresen, 2017; Shen et al., 2016), and is satisfied for high-dimensional
models used in factor analysis (Fan et al., 2013; Li et al., 2017; Sundberg &
Feldmann, 2016).
We assume that the population principal component variances satisfy the
following:

(A1) λ_i = σ_i² d, i = 1, . . . , m, with σ_1² ≥ · · · ≥ σ_m².
(A2) lim_{d→∞} Σ_{i=m+1}^d λ_i / d := τ² ∈ (0, ∞).
(A3) There exists B < ∞ such that for all i > m, lim sup_{d→∞} λ_i < B.

The conditions (A2) and (A3) are used to keep the λ_i, i > m, from increasing as d
increases. All of our results hold when condition (A3) is relaxed to allow, e.g.,
the situation λ_i ≍ d^α, α < 1/2. Such a generalization is straightforward,
but invites nonintuitive technicality (see, e.g., Jung et al., 2012, 2017). By
decomposing each independent observation into the first m components and the
remaining term, we write

X_j = Σ_{i=1}^m λ_i^{1/2} u_i z_{ij} + Σ_{i=m+1}^d λ_i^{1/2} u_i z_{ij},   (j = 1, . . . , n),   (1)

where z_{ij} is the normalized principal component score.

(A4) For each j = 1, 2, . . ., (z_{1j}, z_{2j}, . . .) is a sequence of independent random
variables such that for any i, E(z_{ij}) = 0, Var(z_{ij}) = 1, and the fourth
moment of z_{ij} is uniformly bounded.
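For simulation purposes, a draw from model (1) under (A1)–(A4) can be generated directly in the eigenbasis. The sketch below assumes u_i = e_i (a convenient choice, since the scores do not depend on the particular orthonormal directions) and a flat tail λ_i = τ² for i > m, one simple way to satisfy (A2) and (A3); this is an illustrative setup, not the simulation design of Section 4.

```python
import numpy as np

def spike_model_sample(d, n, sigma2, tau2=1.0, rng=None):
    """Draw a d x n matrix X = [X_1, ..., X_n] from model (1), taking
    u_i = e_i (an assumed, convenient basis) and a flat tail of variances
    lambda_i = tau2 for i > m, so that (A2) holds with limit tau2 and (A3)
    holds trivially. The spikes follow (A1): lambda_i = sigma_i^2 * d."""
    if rng is None:
        rng = np.random.default_rng(0)
    m = len(sigma2)
    lam = np.full(d, float(tau2))                  # tail variances
    lam[:m] = np.asarray(sigma2, dtype=float) * d  # spike variances (A1)
    Z = rng.standard_normal((d, n))                # normalized scores z_ij, (A4)
    return np.sqrt(lam)[:, None] * Z, lam

X, lam = spike_model_sample(d=5000, n=50, sigma2=[4.0, 1.0])
```

Here the Gaussian scores satisfy (A4), and the tail average Σ_{i>m} λ_i/d is within O(1/d) of τ², matching (A2).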
2.2. Sample and prediction principal component scores
Suppose we have a data matrix X = [X_1, . . . , X_n] and a vector X_*, independently drawn from the same population with principal component directions u_i.
The principal component analysis is performed on the data X and is used to predict
the principal component scores of X_*.
We define the ith true principal component scores of X as the vector of n
projection scores:

w_i^T = u_i^T X = (w_{i1}, . . . , w_{in}),   (i = 1, . . . , d),   (2)

where w_{ij} = u_i^T X_j = √λ_i z_{ij}. The last equality is given by the decomposition
of X_j in (1). Likewise, the true ith principal component score of X_* is w_{i*} =
u_i^T X_* = √λ_i z_{i*}.
The classical estimators of the pair of the ith principal component direction and variance are (û_i, λ̂_i), obtained either by the eigendecomposition of the
sample covariance matrix S_d = n^{-1} X X^T,

S_d = Σ_{i=1}^n λ̂_i û_i û_i^T,

or by the singular value decomposition of the data matrix,

X = √n Σ_{i=1}^n √λ̂_i û_i v̂_i^T,   (3)

where v̂_i is the ith right singular vector of X. By replacing u_i in (2) with its estimator
û_i, we define the ith sample principal component scores of X as

ŵ_i^T = û_i^T X = (ŵ_{i1}, . . . , ŵ_{in}),   (i = 1, . . . , n).   (4)

The sample principal component scores are in fact weighted right singular vectors of X; comparing with (3), ŵ_i^T = √(n λ̂_i) v̂_i^T.
For an independent observation X_*, the definition (4) gives

ŵ_{i*} = û_i^T X_*,

which is called the ith prediction principal component score for X_*.
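These definitions translate directly into a few lines of linear algebra: one thin SVD of X yields the û_i and λ̂_i, and hence both kinds of scores. A minimal numpy sketch (function name is an assumption of this example), taking the columns of X as already centered:

```python
import numpy as np

def pc_scores(X, X_new, m):
    """First m sample scores of X and prediction scores of the columns of
    X_new, per definition (4): w_hat_i^T = u_hat_i^T X and
    w_hat_{i*} = u_hat_i^T X_*. A minimal sketch assuming centered data."""
    n = X.shape[1]
    U, s, Vt = np.linalg.svd(X, full_matrices=False)  # X = U diag(s) V^T
    lam_hat = s**2 / n              # sample PC variances lambda_hat_i
    W_hat = U[:, :m].T @ X          # sample scores
    W_pred = U[:, :m].T @ X_new     # prediction scores
    return W_hat, W_pred, lam_hat, Vt

rng = np.random.default_rng(1)
X = rng.standard_normal((300, 20))
X_new = rng.standard_normal((300, 5))
W_hat, W_pred, lam_hat, Vt = pc_scores(X, X_new, m=2)
```

Since U^T X = diag(s) V^T, the identity ŵ_i^T = √(n λ̂_i) v̂_i^T noted above holds exactly on the output; no sign ambiguity arises here because both sides come from the same SVD.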
2.3. Main results
Denote by W_1 = (σ_i z_{ij})_{i,j} = (d^{-1/2} w_{ij})_{i,j} = d^{-1/2} [u_1, . . . , u_m]^T X the m × n
matrix of the scaled true scores of the first m principal components. The ith
row of W_1 is d^{-1/2} w_i^T. Similarly, the scaled sample scores of the first m principal
components are denoted by Ŵ_1 = d^{-1/2} [û_1, . . . , û_m]^T X.
For a new observation X_*, write W_* = d^{-1/2} (w_{1*}, . . . , w_{m*})^T and Ŵ_* =
d^{-1/2} (ŵ_{1*}, . . . , ŵ_{m*})^T for the scaled true scores and prediction scores, respectively, of the first m principal components.
Denote W = W_1 W_1^T, the scaled m × m sample covariance matrix of the
first m scores. Let (λ_i(S), v_i(S)) denote the ith largest eigenvalue–eigenvector
pair of a non-negative definite matrix S, and let v_{ij}(S) denote the jth loading of
the vector v_i(S).
Theorem 1. Assume the m-component model under Conditions (A1)–(A4) and
let n > m ≥ 0 be fixed and d → ∞. Then, the first m sample and prediction
scores are systematically biased:

Ŵ_1 = S R^T W_1 + O_p(d^{-1/4}),   (5)
Ŵ_* = S^{-1} R^T W_* + O_p(d^{-1/2}),   (6)

where R = [v_1(W), . . . , v_m(W)], S = diag(ρ_1, . . . , ρ_m), and ρ_k = √(1 + τ²/λ_k(W)).
Moreover, for k > m,

ŵ_{kj} = O_p(d^{1/2}),   j = 1, . . . , n,   (7)
ŵ_{k*} = O_p(1).   (8)
Our main results show that the first m sample and prediction scores are
comparable to the true scores. The asymptotic relation (5) tells us that, for large d, the
first m sample scores in Ŵ_1 converge to the true scores in W_1, uniformly rotated
and scaled for all data points. It is thus valid to use the first m sample principal
scores for exploration of important data structure, to reduce the dimension of
the data space from d to m, in the high-dimension, low-sample-size context.
Theorem 1 explains and quantifies the two parts of the bias exemplified
in Fig. 1. In particular, the same rotational bias applies to both sample and
prediction scores. The scaling bias factors ρ_k in the matrix S are all greater
than 1. Thus, while the sample scores are all stretched, the prediction scores
are all shrunk. The second part of the theorem shows that the magnitude of
inflation of the sample scores of a “noise” component (see, e.g., the component
3 scores in Fig. 2) is of order d^{1/2}. On the other hand, the prediction scores of
the noise components do not diverge.
Remark 1. Suppose m = 1 in Theorem 1. Then the sample and prediction scores
are simply proportionally biased in the limit: ŵ_{1j}/w_{1j} → ρ_1 and ŵ_{1*}/w_{1*} → ρ_1^{-1}
in probability as d → ∞. There is no rotational bias.
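Remark 1 lends itself to a direct numerical check. The sketch below (an illustration with an assumed basis u_1 = e_1 and a flat noise tail λ_i = τ², not the paper's simulation design) compares the norm ratio of sample to true first scores with ρ_1 = √(1 + τ²/λ_1(W)), and the prediction-score ratio with ρ_1^{-1}:

```python
import numpy as np

# One-spike model: lambda_1 = sigma2 * d per (A1), flat tail lambda_i = tau2
# so that (A2) holds with limit tau2. Basis u_1 = e_1 is an assumed choice.
rng = np.random.default_rng(2)
d, n, n_new = 100000, 10, 10
sigma2, tau2 = 0.5, 4.0

lam = np.full(d, tau2)
lam[0] = sigma2 * d
Z = rng.standard_normal((d, n + n_new))
X_all = np.sqrt(lam)[:, None] * Z
X, X_new = X_all[:, :n], X_all[:, n:]

w1, w1_new = X[0], X_new[0]               # true scores, since u_1 = e_1
U, s, Vt = np.linalg.svd(X, full_matrices=False)
u1_hat = U[:, 0] * np.sign(U[0, 0])       # resolve the sign ambiguity
w1_hat = u1_hat @ X                       # sample scores
w1_pred = u1_hat @ X_new                  # prediction scores

lam1_W = np.sum(w1**2) / d                # lambda_1(W) for W = W_1 W_1^T
rho1 = np.sqrt(1 + tau2 / lam1_W)         # scaling bias factor of Theorem 1

samp_ratio = np.linalg.norm(w1_hat) / np.linalg.norm(w1)
pred_ratio = np.linalg.norm(w1_pred) / np.linalg.norm(w1_new)
```

For d this large, samp_ratio should sit close to ρ_1 and pred_ratio close to ρ_1^{-1}, reproducing the stretching of the sample scores and the shrinking of the prediction scores seen in Fig. 1.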
The proof of Theorem 1 relies on the asymptotic behavior of the principal
component directions and variances, which is now well understood. For reference
we restate it here.
Lemma 2. [Theorem S2.1, Jung et al. (2017)] Assume the conditions of Theorem 1. (i) The sample principal component variances converge in probability as
d → ∞:

d^{-1} n λ̂_i = λ_i(W) + τ² + O_p(d^{-1/2}) for i = 1, . . . , m, and
d^{-1} n λ̂_i = τ² + O_p(d^{-1/2}) for i = m + 1, . . . , n.

(ii) The inner products between sample and population PC directions converge
in probability as d → ∞:

û_i^T u_j = ρ_i^{-1} v_{ij}(W) + O_p(d^{-1/2}) for i, j = 1, . . . , m, and
û_i^T u_j = O_p(d^{-1/2}) otherwise.
This result is abridged later in Section 2.4 for discussion. To handle prediction
scores, we need in addition the following observation, summarized in Lemma 3.
For each k = 1, . . . , m, the kth projection score ŵ_{k*} is decomposed into

ŵ_{k*} = û_k^T X_* = Σ_{i=1}^m w_{i*} û_k^T u_i + ε_{k*},   (9)

where ε_{k*} = Σ_{i=m+1}^d w_{i*} û_k^T u_i. In the next lemma, we show that the “error
term,” ε_{k*}, is stochastically bounded.
Lemma 3. Assume the m-factor model with (A1)–(A4) and let n > m ≥ 0 be
fixed. For k = 1, . . . , n, E(ε_{k*} | W_1) = 0, and

lim_{d→∞} Var(ε_{k*} | W_1) = υ_O² / (λ_k(W) + τ²),   for k ≤ m;   (10)

lim_{d→∞} (n − m)^{-1} Σ_{k=m+1}^n Var(ε_{k*} | W_1) = υ_O² / τ²,   (11)

where υ_O² = lim_{d→∞} d^{-1} Σ_{i=m+1}^d λ_i². As d → ∞, ε_{k*} = O_p(1).
?
Proof of Lemma 3. Fix k “ 1, . . . , n. Let Yi “ λi zi˚ pki , where pki “ ûTk ui .
řd
Then k˚ “ i“m`1 Yi . Since zi˚ and pki are independent, for each i ą m,
EpYi | W1 q “ 0 and
Varp
d
ÿ
i“m`1
Yi | W1 q “ Ep
i“m`1
d
ÿ
2 2
λi zi˚
pki | W1 q “
i“m`1
λi Epp2ki | W1 q,
i“m`1
where we use the fact that Epzi˚ q “ 0,
For k ď m, if the following claim,
Epp2ki | W1 q “ d´1
d
ÿ
2
Epzi˚
q
“ 1.
λi
` Opd´3{2 q,
pλk pWq ` τ 2 q
(12)
is true for any i ą m, then it is easy to check (10).
To show (12), we first post-multiply v̂i to (3) to obtain ûi “ pnλ̂i q´1{2 X v̂i .
´1{2
By writing ziT “ λi wiT “ pzi1 , . . . , zin q from (2), we have
pki “ uTi ûk
“ pnλ̂k q´1{2 uTi X v̂k
1{2
“ pnλ̂k q´1{2 λi ziT v̂k .
Thus,
p2ki “ d´1
λi
nd´1 λ̂k
pziT v̂k q2
λi
pz T v̂k q2
λk pWq ` τ 2 ` Op pd´1{2 q i
λi
“ d´1
pz T v̂k q2 ` Op pd´3{2 q.
λk pWq ` τ 2 i
“ d´1
(13)
S. Jung/Bias in principal component scores
8
In (13), we used Lemma 2(i) and that p1 ` xq´1 “ 1 ` Opxq, and the fact that
2
2
2
|ziT v̂k |2 ď }zi }2 }v̂k }2 “ }zi }2 “ Op p1q.
Write $(z_i^T \hat{v}_k)^2 = [z_i^T v_k(W_1^T W_1) + z_i^T (\hat{v}_k - v_k(W_1^T W_1))]^2$. Note that $W_1^T W_1$ is an $n \times n$ matrix, and is different from the $m \times m$ matrix $W = W_1 W_1^T$. It can be shown that the right singular vector $\hat{v}_k$ converges to $v_k(W_1^T W_1)$ (see, e.g., Lemma S1.1 of Jung et al., 2017): For $k = 1, \ldots, m$,
$$\hat{v}_k = v_k(W_1^T W_1) + O_p(d^{-1/2}). \qquad (14)$$
Thus we get $|z_i^T (\hat{v}_k - v_k(W_1^T W_1))| \le \|z_i\|_2 \|\hat{v}_k - v_k(W_1^T W_1)\|_2 = O_p(d^{-1/2})$. Therefore,
$$E((z_i^T \hat{v}_k)^2 \mid W_1) = E((z_i^T v_k(W_1^T W_1))^2 \mid W_1) + O(d^{-1/2}) = \sum_{\ell=1}^{n} E(z_{i\ell}^2)\, v_{k\ell}^2(W_1^T W_1) + O(d^{-1/2}) = 1 + O(d^{-1/2}). \qquad (15)$$
Combining (13) and (15), we get (12) for $k \le m$ as desired.
To show (11), note that $W = W_1 W_1^T$ is of rank $m$. For $k > m$, (13) holds with $\lambda_k(W) = 0$. Thus,
$$\frac{1}{n-m} \sum_{k=m+1}^{n} \mathrm{Var}(\epsilon_{k*} \mid W_1) = \frac{1}{n-m} \sum_{k=m+1}^{n} \sum_{i=m+1}^{d} \lambda_i\, E(p_{ki}^2 \mid W_1) = \frac{1}{d(n-m)} \sum_{i=m+1}^{d} \frac{\lambda_i^2}{\tau^2} \sum_{k=m+1}^{n} E((z_i^T \hat{v}_k)^2 \mid W_1). \qquad (16)$$
To simplify the expression $E((z_i^T \hat{v}_k)^2 \mid W_1)$, one should not naively apply (15). This is because (15) does not apply for $k > m$, due to the non-uniqueness of the $k$th eigenvector $v_k(W_1^T W_1)$ of the rank-$m$ matrix $W_1^T W_1$. Instead, from
$$\sum_{k=m+1}^{n} (z_i^T \hat{v}_k)^2 = z_i^T z_i - \sum_{k=1}^{m} (z_i^T \hat{v}_k)^2,$$
and (15) for $k \le m$, we get
$$\sum_{k=m+1}^{n} E((z_i^T \hat{v}_k)^2 \mid W_1) = n - m + O(d^{-1/2}). \qquad (17)$$
Taking the limit $d \to \infty$ in (16), combined with (17), leads to (11).
The last statement, $\epsilon_{k*} = O_p(1)$, easily follows from the fact that $\lim_{d \to \infty} \mathrm{Var}(\epsilon_{k*}) \le (n-m)\, \upsilon_O^2 / \tau^2 < \infty$, which is obtained by (10) and (11).
We are now ready to show Theorem 1. Note that the results on the sample scores, (5) and (7), can be easily shown using the decomposition $d^{-1/2} \hat{w}_k = \sqrt{d^{-1} n \hat{\lambda}_k}\, \hat{v}_k$, together with Lemma 2(i) and (14). We show (6) and (8).
Proof of Theorem 1. Proof of (6). Recall the decomposition (9). Using the notation $p_{ki} = \hat{u}_k^T u_i$, we write $\hat{w}_{k*} = (p_{k1}, \ldots, p_{km})(w_{1*}, \ldots, w_{m*})^T + \epsilon_{k*}$. Putting all parts together, we have
$$\widehat{W}_* = d^{-1/2} (\hat{w}_{1*}, \ldots, \hat{w}_{m*})^T = \begin{pmatrix} p_{11} & \cdots & p_{1m} \\ \vdots & \ddots & \vdots \\ p_{m1} & \cdots & p_{mm} \end{pmatrix} W_* + \tilde{\epsilon}_*,$$
where $\tilde{\epsilon}_* = d^{-1/2} (\epsilon_{1*}, \ldots, \epsilon_{m*})^T$. By Lemma 3, as $d \to \infty$, $\tilde{\epsilon}_* = O_p(d^{-1/2})$. Since $p_{ki} = \rho_k^{-1} v_{ki}(W) + O_p(d^{-1/2})$, by Lemma 2(ii), we have
$$\widehat{W}_*^T = S^{-1} R^T W_*^T + O_p(d^{-1/2}).$$
Proof of (8). Using the decomposition (9), and by the fact that $\epsilon_{k*} = O_p(1)$ from Lemma 3, it is enough to show $\sum_{i=1}^{m} w_{i*} p_{ki} = O_p(1)$. But, since Lemma 2 implies $d^{1/2} p_{ki} = O_p(1)$ for any pair $(k, i)$ such that $k > m$, $i \le m$, we have
$$\sum_{i=1}^{m} w_{i*} p_{ki} = \sigma_i\, (d^{1/2} p_{k1}, \ldots, d^{1/2} p_{km})(z_{1*}, \ldots, z_{m*})^T = O_p(1).$$
The next result shows that the sample and true scores (or prediction and true scores) are highly correlated with each other. For this, we compute the inner product between standardized sample scores $\hat{w}_k / \sqrt{\hat{w}_k^T \hat{w}_k}$ and true scores $w_k / \sqrt{w_k^T w_k}$. Define, for a pair $(x, y)$ of $n$-vectors, $r(x, y) = x^T y / \sqrt{x^T x \cdot y^T y}$, which is an empirical correlation coefficient between $x$ and $y$ when the mean is assumed to be zero.
Theorem 4. Let $\zeta_{kj} = \lambda_k(W) / \big(\sum_{\ell=1}^{m} v_{\ell j}^2(W)\, \lambda_\ell(W)\big)$ and $\bar{\zeta}_{kj} = \sigma_k^2 / \big(\sum_{\ell=1}^{m} v_{\ell j}^2(W)\, \sigma_\ell^2\big)$. Under the assumptions of Theorem 1, as $d \to \infty$, for $k, j = 1, \ldots, m$,
(i) $r(\hat{w}_k, w_j) \to v_{kj}(W)\, \zeta_{kj}^{1/2}$ in probability;
(ii) $\lim_{d \to \infty} \mathrm{Corr}(\hat{w}_{k*}, w_{j*} \mid W_1) = v_{kj}(W)\, \bar{\zeta}_{kj}^{1/2}$.
Remark 2. In the special case $m = 1$, both the sample and prediction scores of the first principal component are perfectly correlated with the true scores in the limit. Specifically, Theorem 4 gives $|r(\hat{w}_1, w_1)| \to 1$ in probability and $|\mathrm{Corr}(\hat{w}_{1*}, w_{1*})| \to 1$ as $d \to \infty$.
Remark 3. The somewhat complex limiting quantity $v_{kj}(W)\, \zeta_{kj}^{1/2}$ is an artifact of the fixed sample size. To simplify the expression for the case $k = j$, write
$$\big(v_{kk}(W)\, \zeta_{kk}^{1/2}\big)^2 = \frac{1}{1 + \xi_k(W)}, \qquad \xi_k(W) = \sum_{\ell \ne k} v_{\ell k}^2(W)\, \frac{\lambda_\ell(W)}{\lambda_k(W)}.$$
Note that $W = W_1 W_1^T$ is proportional to the sample covariance matrix of the first $m$ true scores, and that $v_{kk}(W)$ is the inner product between the $k$th sample and theoretical principal component directions of the data set $W_1$, where the number of variables, $m$, is smaller than the sample size $n$. Therefore, we expect that $|v_{kk}(W)| \approx 1$ and $\xi_k(W) \approx 0$ for large sample size $n$. Taking the additional limit $n \to \infty$, the results in Theorem 4 become more interpretable:
$$|r(\hat{w}_k, w_j)| \to 1_{(k=j)} \text{ in probability, and } |\mathrm{Corr}(\hat{w}_{k*}, w_{j*})| \to 1_{(k=j)},$$
as $d \to \infty$, $n \to \infty$ (limits are taken progressively).
Remark 4. What is the correlation coefficient $r(\hat{w}_k, w_k)$ for $k > m$ in the limit $d \to \infty$? In an attempt to answer this question, we note $\hat{w}_k = (n \hat{\lambda}_k)^{1/2} \hat{v}_k$, $\hat{v}_k = v_k(X^T X)$ and $X^T X = \sum_{i=1}^{d} w_i w_i^T$. Thus,
$$r(\hat{w}_k, w_k) = w_k^T v_k\Big(\sum_{i=1}^{d} w_i w_i^T\Big) \Big/ \sqrt{\lambda_k},$$
and it is natural to guess that the dependence of $\hat{v}_k$ on any $w_i$, including the case $i = k$, would diminish as $d$ tends to infinity. In fact, $d^{-1} X^T X$ converges to the matrix $S_0 := W_1^T W_1 + \tau^2 I_n$ (Jung et al., 2012), and $w_k$ and $S_0$ are independent. Thus, it is reasonable to conjecture that $\lim_{d \to \infty} E[r(\hat{w}_k, w_k)] = 0$ for $k > m$. Unfortunately, in the limit $d \to \infty$, the $k$th eigenvector, $k > m$, of $d^{-1} X^T X$ becomes an arbitrary choice in the left null space of $W_1$. Due to this non-unique eigenvector, the inner product $w_k^T v_k(S_0)$ is not defined, and consequently discussing the convergence of $r(\hat{w}_k, w_k)$ is somewhat demanding. We numerically confirm the conjecture in Section 4.1.
Proof of Theorem 4. Proof of (i). Write the singular value decomposition of the $m \times n$ matrix of scaled scores $W_1$ as
$$W_1 = R\, \mathrm{diag}\big(\sqrt{\lambda_1(W)}, \ldots, \sqrt{\lambda_m(W)}\big)\, G^T, \qquad (18)$$
where $G = [g_1, \ldots, g_m]$ is the $n \times m$ matrix consisting of right singular vectors of $W_1$. The left singular vector matrix $R = [v_1(W), \ldots, v_m(W)]$ is exactly the matrix $R$ appearing in Theorem 1. Since
$$W_1 = \sum_{\ell=1}^{m} \sqrt{\lambda_\ell(W)}\, v_\ell(W)\, g_\ell^T,$$
the $j$th row of $W_1$ is, for $j \le m$,
$$d^{-1/2} w_j^T = \sum_{\ell=1}^{m} \sqrt{\lambda_\ell(W)}\, v_{\ell j}(W)\, g_\ell^T.$$
For the scaled sample score $d^{-1/2} \hat{w}_k$, $k \le m$, we obtain from Theorem 1 and (18) that $\widehat{W}_1 = S\, \mathrm{diag}\big(\sqrt{\lambda_1(W)}, \ldots, \sqrt{\lambda_m(W)}\big)\, G^T + O_p(d^{-1/4})$, and its $k$th row is $d^{-1/2} \hat{w}_k^T = \sqrt{\lambda_k(W) + \tau^2}\, g_k^T + O_p(d^{-1/4})$. Since the $g_\ell$'s are orthonormal,
$$\|d^{-1/2} \hat{w}_k\|_2 = \sqrt{\lambda_k(W) + \tau^2} + O_p(d^{-1/4}),$$
and
$$d^{-1} \hat{w}_k^T w_j = (d^{-1/2} \hat{w}_k)^T (d^{-1/2} w_j) = \sqrt{\lambda_k(W)}\, \sqrt{\lambda_k(W) + \tau^2}\, v_{kj}(W) + O_p(d^{-1/4}).$$
Since $d^{-1} w_j^T w_j = \sum_{\ell=1}^{m} v_{\ell j}^2(W)\, \lambda_\ell(W)$, we have
$$r(\hat{w}_k, w_j) = \frac{d^{-1} \hat{w}_k^T w_j}{\|d^{-1/2} \hat{w}_k\|_2 \cdot \|d^{-1/2} w_j\|_2} \to v_{kj}(W)\, \zeta_{kj}^{1/2}$$
in probability, as $d \to \infty$.
Proof of (ii). From Theorem 1, write
$$d^{-1/2} \hat{w}_{k*} = \rho_k^{-1} \sum_{\ell=1}^{m} v_{k\ell}(W)\, d^{-1/2} w_{\ell *} + O_p(d^{-1/2}), \qquad (19)$$
and note that $E(w_{k*}) = E(\hat{w}_{k*}) = 0$. Then for $k = 1, \ldots, m$, we have
$$\mathrm{Var}(d^{-1/2} w_{k*}) = d^{-1} E(w_{k*}^2) = \sigma_k^2\, E(z_{k*}^2) = \sigma_k^2,$$
and, by (19),
$$\mathrm{Var}(d^{-1/2} \hat{w}_{k*} \mid W_1) = \rho_k^{-2} \sum_{\ell=1}^{m} v_{k\ell}^2(W)\, \sigma_\ell^2 + O(d^{-1/2}).$$
The independence of $w_{\ell *}$ and $w_{k*}$ for $k \ne \ell$ and (19) give
$$\mathrm{Cov}(d^{-1/2} \hat{w}_{k*}, d^{-1/2} w_{j*} \mid W_1) = E(d^{-1} \hat{w}_{k*} w_{j*} \mid W_1) = \rho_k^{-1} v_{kj}(W)\, \sigma_j^2 + O(d^{-1/2}),$$
which in turn leads to
$$\mathrm{Corr}(\hat{w}_{k*}, w_{j*} \mid W_1) = \frac{\mathrm{Cov}(d^{-1/2} \hat{w}_{k*}, d^{-1/2} w_{j*} \mid W_1)}{\big(\mathrm{Var}(d^{-1/2} w_{j*})\, \mathrm{Var}(d^{-1/2} \hat{w}_{k*} \mid W_1)\big)^{1/2}} = v_{kj}(W)\, \frac{\sigma_j}{\big[\sum_{\ell=1}^{m} v_{k\ell}^2(W)\, \sigma_\ell^2\big]^{1/2}} + O(d^{-1/2}).$$
2.4. Inconsistency of the direction and variance estimators
The findings in the previous subsection may be summarized as follows: the first $m$ principal component scores convey about the same visual information as the true values when displayed. (The information is further honed by bias adjustment in Section 3.) From a practical point of view, the scores and their graph matter the most.
On the other hand, a quite different conclusion about standard principal component analysis is reached when the standard estimator $\hat{u}_i$ of the principal component direction is of interest. The asymptotic behavior of the direction $\hat{u}_i$
as well as the variance estimator $\hat{\lambda}_i$ are obtained as a special case of Lemma 2. Under our model,
$$(\hat{u}_i^T u_i,\; d^{-1} n \hat{\lambda}_i) \to \begin{cases} (\rho_i^{-1} v_{ii}(W),\; \lambda_i(W) + \tau^2), & i = 1, \ldots, m; \\ (0,\; \tau^2), & i = m+1, \ldots, n, \end{cases} \qquad (20)$$
in probability as $d \to \infty$ ($n$ is fixed).
The variance estimator $\hat{\lambda}_i$, for $i \le m$, is asymptotically proportionally biased. Specifically, $\hat{\lambda}_i / \lambda_i \to (\lambda_i(W) + \tau^2) / (n \sigma_i^2)$ in probability as $d \to \infty$. Thus, by using a classical result on the expansion of the eigenvalues of $W$ for large $n$,
$$E(\hat{\lambda}_i / \lambda_i) \to 1 + \frac{1}{n} \bigg[ \sum_{j \ne i} \frac{\sigma_j^2}{\sigma_i^2 - \sigma_j^2} + \frac{\tau^2}{\sigma_i^2} \bigg] + O(n^{-2}),$$
as $d \to \infty$. Note that even when $m = 1$, the bias is still of order $n^{-1}$. This proportional bias may be empirically adjusted, using good estimates of $\sigma_i^2$ and $\tau^2$; we do not pursue this here. Note that all empirical principal component variances, for $i > m$, converge to $\tau^2 / n$ when scaled by $d$, and thus do not reflect any information about the population.
The result (20) also shows that the direction estimator $\hat{u}_i$ is inconsistent and asymptotically biased, compared to $u_i$. The estimator $\hat{u}_i$ is closer to $u_i$ when $\rho_i^{-1} |v_{ii}(W)|$ is closer to 1. This is impossible to achieve since, for finite $n$, both $|v_{ii}(W)|$ and $\rho_i^{-1}$ are strictly less than 1. Although the "angle" between $\hat{u}_i$ and $u_i$ is quantified in (20), the theorem itself is of no use in adjusting the bias. This is because the direction in which $\hat{u}_i$ moves away from $u_i$ is not specified by the theorem, and is conjectured to be random (i.e. uniformly distributed). See Jung et al. (2012) and Shen et al. (2016) for related discussions on the inconsistency of principal component directions.
In short, while the bias in the principal component direction is challenging to
remove, the bias in sample and prediction scores can be quantified and removed.
3. Bias-adjusted scores
In this section, we describe and compare several choices for estimating the bias-adjustment factor $\rho_i$, of which the matrix $S$ in Theorem 1 consists. Since both sample and prediction scores are rotated by the same direction and amount, specified in the matrix $R$, there is little practical advantage in estimating $R$. We focus on adjusting the scores by estimating $\rho_i$.
Suppose that the number of effective principal components, $m$, is prespecified or estimated in advance. Our first estimator is obtained by replacing $\tau^2$ and $\lambda_i(W)$ in $\rho_i = \sqrt{1 + \tau^2 / \lambda_i(W)}$ with reasonable estimators. In particular, we set
$$\tilde{\tau}^2 = \frac{\sum_{i=m+1}^{n} \hat{\lambda}_i}{n - m} \cdot \frac{n}{d}, \qquad \tilde{\lambda}_i(W) = d^{-1} n \hat{\lambda}_i - \tilde{\tau}^2, \qquad (21)$$
and
$$\tilde{\rho}_i = \sqrt{1 + \tilde{\tau}^2 / \tilde{\lambda}_i(W)}, \qquad (i = 1, \ldots, m). \qquad (22)$$
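As a concrete illustration of (21)–(22), the following NumPy sketch computes the plug-in quantities from a vector of ordered sample eigenvalues. The function name and the toy eigenvalues are our own, not part of the paper:

```python
import numpy as np

def adjustment_factors(lam_hat, d, m):
    """Plug-in estimates in (21)-(22): returns (tau2_tilde, lam_tilde, rho_tilde)
    from the ordered sample eigenvalues lam_hat = (lam_hat_1 >= ... >= lam_hat_n)."""
    lam_hat = np.asarray(lam_hat, dtype=float)
    n = lam_hat.size
    tau2 = lam_hat[m:].sum() / (n - m) * (n / d)   # (21), noise-variance estimate
    lam_tilde = n / d * lam_hat[:m] - tau2         # (21), signal eigenvalues of W
    rho = np.sqrt(1.0 + tau2 / lam_tilde)          # (22)
    return tau2, lam_tilde, rho

# Toy check: eigenvalues chosen so that d^{-1} n lam_hat = (5, 3, 1, ..., 1).
n, d, m = 10, 1000, 2
lam_hat = np.array([5.0, 3.0] + [1.0] * (n - m)) * d / n
tau2, lam_tilde, rho = adjustment_factors(lam_hat, d, m)
print(tau2, lam_tilde, rho)
```

With these toy eigenvalues, $\tilde{\tau}^2 = 1$, $\tilde{\lambda}(W) = (4, 2)$ and $\tilde{\rho} = (\sqrt{1.25}, \sqrt{1.5})$, as can be read off (21)–(22) directly.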
[Figure 3 here: bias-adjusted sample scores (left) and prediction scores (right), each overlaid with the true scores, PC 1 plotted against PC 2.]
Fig 3. Bias-adjusted sample and prediction scores using (23) for the toy data introduced in Fig. 1. The estimates (22) are $(\tilde{\rho}_1, \tilde{\rho}_2) = (1.385, 1.546)$, very close to the theoretical values $(\rho_1, \rho_2) = (1.385, 1.557)$. Both sample and prediction scores are simultaneously rotated about 16 degrees clockwise.
This simple estimator $\tilde{\rho}_i$ is in fact consistent.
Corollary 5. Suppose the assumptions of Lemma 2 are satisfied. Let $d \to \infty$. For $i = 1, \ldots, m$, conditional on $W_1$, $\tilde{\tau}^2$, $\tilde{\lambda}_i(W)$ and $\tilde{\rho}_i$ are consistent estimators of $\tau^2$, $\lambda_i(W)$ and $\rho_i$, respectively.
Proof. Lemma 2 is used to show that $\tilde{\tau}^2$ and $\tilde{\lambda}_i(W)$ converge in probability to $\tau^2$ and $\lambda_i(W)$ as $d \to \infty$, respectively. By the continuous mapping theorem, $\tilde{\rho}_i$ converges in probability to $\rho_i$.
Using (22), the bias-adjusted sample and prediction scores are $\hat{w}_i^{(\mathrm{adj})} = \tilde{\rho}_i^{-1} \hat{w}_i$ and $\hat{w}_{i*}^{(\mathrm{adj})} = \tilde{\rho}_i \hat{w}_{i*}$ for $i = 1, \ldots, m$. The sample and prediction score matrices in (5) and (6) are then adjusted to, using $\tilde{S} = \mathrm{diag}(\tilde{\rho}_1, \ldots, \tilde{\rho}_m)$,
$$\widehat{W}_1^{(\mathrm{adj})} = \tilde{S}^{-1} \widehat{W}_1, \qquad \widehat{W}_*^{(\mathrm{adj})} = \tilde{S}\, \widehat{W}_*. \qquad (23)$$
An application of the above bias-adjustment procedure is exemplified in Fig. 3. There, the magnitudes of the sample and prediction scores are well adjusted. Adjustment for the "rotation" part is not needed, since both sample and prediction scores are simultaneously rotated.
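The two rescalings in (23) go in opposite directions: sample scores are divided by $\tilde{\rho}_i$, prediction scores are multiplied by it. A small NumPy sketch with hypothetical numbers of our own:

```python
import numpy as np

# Sketch of (23): rescale unadjusted scores by S_tilde = diag(rho_tilde).
rho_tilde = np.array([1.4, 1.6])              # hypothetical estimates from (22)
W1_hat = np.array([[2.8, -1.4],               # rows = components,
                   [3.2,  0.8]])              # columns = observations
W_star_hat = np.array([0.7, 0.5])             # prediction scores of a new point

S_tilde = np.diag(rho_tilde)
W1_adj = np.linalg.inv(S_tilde) @ W1_hat      # sample scores are shrunk
W_star_adj = S_tilde @ W_star_hat             # prediction scores are stretched
print(W1_adj, W_star_adj)
```

Since $\tilde{S}$ is diagonal, the matrix products reduce to elementwise division and multiplication by $\tilde{\rho}_i$ along the component axis.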
Our next proposed estimators are motivated by the well-known jackknife bias-adjustment procedures and also by leave-one-out cross-validation. For simplicity, assume $m = 1$. The bias-adjustment factor we aim to estimate is $\rho_1 = (1 + \tau^2 / \|\omega_1\|_2^2)^{1/2}$, where $\omega_1 = d^{-1/2} w_1 = \sigma_1 (z_{11}, \ldots, z_{1n})^T$ is the vector of scaled true scores for the first principal component.
Write, for each $j = 1, \ldots, n$, the $j$th scaled sample score as $\hat{\omega}_{1j} = d^{-1/2} \hat{u}_1^T X_j$, and the $j$th scaled prediction score as
$$\hat{\omega}_{1(j)} = d^{-1/2} \hat{u}_{1(-j)}^T X_j,$$
where $\hat{u}_{1(-j)}$ is the first principal component direction computed from $X_{(-j)}$, i.e., the data except the $j$th observation.
Then from Theorem 1, $\rho_1$ is the asymptotic bias-adjustment factor for $\hat{\omega}_1$: $\hat{\omega}_{1j} = \rho_1 \omega_{1j} + O_p(d^{-1/4})$. For $\hat{\omega}_{1(j)}$, again applying Theorem 1, we get $\hat{\omega}_{1(j)} = \rho_{1(-j)}^{-1} \omega_{1j} + O_p(d^{-1/2})$, where $\rho_{1(-j)} = (1 + \tau^2 / \|\omega_{1(-j)}\|_2^2)^{1/2}$ is the bias-adjustment factor computed from $X_{(-j)}$, using $\omega_{1(-j)} = \sigma_1 (z_{11}, \ldots, z_{1,j-1}, z_{1,j+1}, \ldots, z_{1n})^T$.
To simplify terms, a Taylor expansion is used to expand $\rho_{1(-j)}$ as a function of $\omega_{1j}^2 / n$, resulting in
$$\rho_{1(-j)} = \bigg(1 + \frac{\tau^2 / n}{\|\omega_1\|_2^2 / n - \omega_{1j}^2 / n}\bigg)^{1/2} = \rho_1 + \frac{1}{2 \rho_1 \tau^2} \cdot \frac{\|\omega_1\|_2^2}{n} \cdot \frac{\omega_{1j}^2}{n} + O_p\Big(\frac{1}{n^2}\Big). \qquad (24)$$
Using the approximation
$$\rho_1 \rho_{1(-j)} \approx \rho_1^2 + \frac{\|\omega_1\|_2^2\, \omega_{1j}^2}{2 \tau^2 n^2}$$
given by (24), we write the ratio of the sample and prediction scores to cancel out the unknown true score $\omega_{1j}$ as follows:
$$\bigg(\frac{\hat{w}_{1j}}{\hat{w}_{1(j)}}\bigg)^{1/2} = \bigg(\frac{\hat{\omega}_{1j}}{\hat{\omega}_{1(j)}}\bigg)^{1/2} \approx \rho_1.$$
Based on the above heuristic, we define the following estimators of the bias-adjustment factors:
$$\hat{\rho}_i^{(1)} = \frac{1}{n} \sum_{j=1}^{n} \bigg(\frac{\hat{w}_{ij}}{\hat{w}_{i(j)}}\bigg)^{1/2}, \qquad (25)$$
$$\hat{\rho}_i^{(2)} = \bigg(\frac{\sum_{j=1}^{n} \hat{w}_{ij}}{\sum_{j=1}^{n} \hat{w}_{i(j)}}\bigg)^{1/2}, \qquad (26)$$
$$\hat{\rho}_i^{(3)} = \bigg(\frac{\sum_{j=1}^{n} \hat{w}_{ij}^2}{\sum_{j=1}^{n} \hat{w}_{i(j)}^2}\bigg)^{1/4}. \qquad (27)$$
In implementing the above estimators, we use absolute values of the sample and predicted scores.
The estimators $\hat{\rho}_i^{(1)}$, $\hat{\rho}_i^{(2)}$, and $\hat{\rho}_i^{(3)}$ tend to overestimate $\rho_i$ for small sample size $n$, as expected from (24). In our numerical experiments, these three estimators perform similarly.
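A leave-one-out implementation of the jackknife estimator (25) for $i = 1$ can be sketched as follows. This is our own illustration: the toy one-component model, the helper names, and the SVD-based score computation are assumptions for the demo, not the paper's code. The estimate is close to 1 here because the noise is weak relative to the signal:

```python
import numpy as np

rng = np.random.default_rng(0)

def first_pc(X):
    """First sample PC direction and sample scores of the d x n matrix X
    (columns are observations): w_hat_1j = u_hat_1^T X_j."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return U[:, 0], U[:, 0] @ X

def rho_hat_1(X):
    """Jackknife estimator (25) for i = 1: average over j of
    (|w_hat_1j| / |w_hat_1(j)|)^{1/2}, where the prediction score w_hat_1(j)
    projects X_j onto the direction fitted without the j-th observation.
    Absolute values are used, as noted in the text."""
    d, n = X.shape
    _, w = first_pc(X)
    ratios = np.empty(n)
    for j in range(n):
        u_minus_j, _ = first_pc(np.delete(X, j, axis=1))
        ratios[j] = np.sqrt(np.abs(w[j]) / np.abs(u_minus_j @ X[:, j]))
    return ratios.mean()

# A 1-component toy model, X_j = w_1j u_1 + noise, with d >> n.
d, n = 400, 15
u1 = np.ones(d) / np.sqrt(d)
w_true = 3.0 * np.sqrt(d) * rng.standard_normal(n)
X = np.outer(u1, w_true) + rng.standard_normal((d, n))
rho1 = rho_hat_1(X)
print(rho1)
```

The absolute values guard against arbitrary sign flips of the leave-one-out direction $\hat{u}_{1(-j)}$, which the SVD does not pin down.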
4. Numerical studies
4.1. Simulations to confirm the asymptotic bias and near-perfect correlations
In this section, we compare the theoretical asymptotic quantities derived in Section 2.3 with their finite-dimensional empirical counterparts.
First, the theoretical values of the scaling bias $\rho_i$ and the rotation matrix $R$ in Theorem 1 are compared with their empirical counterparts. The empirical counterparts of the two matrices $R$, $S$ are defined as the minimizers of the Procrustes problem
$$\min \big\| W_1 - \widehat{W}_1^T S_0^{-1} R_0 \big\|_F^2, \qquad (28)$$
with the constraint that $S_0$ is a diagonal matrix with positive entries and $R_0$ is an orthogonal matrix. The solutions are denoted by $\check{S} = \mathrm{diag}(\check{\rho}_1(W_1), \ldots, \check{\rho}_m(W_1))$ and $\check{R}$. For simplicity, we consider the $m = 2$ case, and parameterize $R$ by the rotation angle $\theta_R = \cos^{-1}(R_{1,1})$, and $\check{R}$ by $\check{\theta}_R = \cos^{-1}(\check{R}_{1,1})$. We compare $\theta_R$ with $\check{\theta}_R$ and $\rho_i(W_1)$ with $\check{\rho}_i(W_1)$, from a 2-component model with $(n, d) = (50, 5000)$ (precisely, the spike model with $m = 2$ and $\beta = 0.3$ in Section 4.2).
Note that both the theoretical values and the best-fitted values depend on the true scores $W_1$. To capture the natural variation given by $W_1$, the experiment is repeated 100 times. The results, summarized in the top row of Fig. 4, confirm that the asymptotic statements in Theorem 1 approximately hold for finite dimensions. In particular, the rotation matrices $R$ and $\check{R}$ are very close to each other. The Procrustes-fitted, or "best," $\check{\rho}_i$ tends to be larger than the asymptotic, or theoretical, $\rho_i$, especially for $i = 2$ (see Fig. 4) and for larger values of $\rho_2$. This is not unexpected. Larger values of $\rho_2$ come from smaller $\lambda_2(W)$. Take the extreme case where $\lambda_2(W) = 0$; then by (7) in Theorem 1, the sample scores are of magnitude $d^{1/2}$ compared to the true scores. Thus, as $\lambda_2(W)$ decreases to 0, the Procrustes scaler $\check{\rho}_2$ empirically interpolates between the finite-scaling case (5) and the diverging case (7) of Theorem 1.
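The constrained Procrustes problem has no closed form under the joint diagonal-plus-orthogonal constraint. One simple numerical strategy — our own sketch, not necessarily the solver used for Fig. 4 — alternates between a closed-form orthogonal Procrustes step and a per-component least-squares scale step:

```python
import numpy as np

rng = np.random.default_rng(1)

def fit_scale_rotation(H, T, iters=300):
    """Alternating least-squares sketch for a Procrustes fit in the spirit
    of (28): find a diagonal D and an orthogonal R minimizing
    ||T - H D R||_F, where H holds unadjusted sample scores and T true
    scores (columns index components).  diag(1/D) then plays the role of
    the fitted scalers rho_check."""
    D = np.ones(H.shape[1])
    R = np.eye(H.shape[1])
    for _ in range(iters):
        # Orthogonal Procrustes step (closed form via SVD).
        U, _, Vt = np.linalg.svd((H * D).T @ T)
        R = U @ Vt
        # Per-component scale step: least squares on T R^T ~ H D.
        TR = T @ R.T
        D = np.einsum('ij,ij->j', H, TR) / np.einsum('ij,ij->j', H, H)
    return D, R

# Recover a known scale and rotation exactly: T = (H0 D0) R0.
n, m, theta = 40, 2, 0.3
H0 = rng.standard_normal((n, m))
R0 = np.array([[np.cos(theta), -np.sin(theta)],
               [np.sin(theta),  np.cos(theta)]])
D0 = np.array([0.7, 0.5])
T = (H0 * D0) @ R0
D, R = fit_scale_rotation(H0, T)
print(np.round(D, 4))
```

Each step weakly decreases the objective (both subproblems are solved exactly), so on this zero-residual toy problem the iteration recovers $D_0$ and $R_0$ to numerical precision.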
Second, we compare the limits of the correlation coefficients in Theorem 4 with finite-dimensional empirical correlations, $r(\hat{w}_k, w_k)$, for $k = 1, 2$. For the correlation coefficient of the prediction scores, we use the sample correlation coefficient between $(\hat{w}_{k*}, w_{k*})$ as an estimate of $\mathrm{Corr}(\hat{w}_{k*}, w_{k*} \mid W_1)$. The simulated results are shown in the bottom row of Fig. 4. The empirical correlation coefficients tend to be smaller than their theoretical counterparts. Note that the approximation of $r(\hat{w}_k, w_k)$ by its limit is of better quality if $n \sigma_k^2 = E(\lambda_k(W))$ is larger. Moreover, the correlation coefficient tends to be larger for larger $n \sigma_k^2$, which represents the "signal strength" of the $k$th principal component.
Third, from the same simulations, it can be checked that the $k$th sample scores, for $k > m$, are diverging, while the prediction scores are stable, as indicated in (7) and (8). For this, we choose $k = 3$ and, for each experiment, compute $\widehat{\mathrm{Var}}(\hat{w}_3)$, the sample variance of the sample scores, and an approximation of $\mathrm{Var}(\hat{w}_{3*})$. The results are shown in Table 1. As expected, the sample scores
[Figure 4 here: top row compares theoretical rotation angles (in degrees) and bias-adjustment factors with their Procrustes-fitted counterparts; bottom row compares empirical correlation coefficients (sample and prediction) with their theoretical limits.]
Fig 4. (Top row) Theoretical rotation angles $\theta_R$ and bias-adjustment factors $\rho_1$, $\rho_2$, compared with the best-fitting Procrustes counterparts $(\check{\theta}_R, \check{\rho}_i(W_1))$. (Bottom row) Empirical correlations compared with their limits in Theorem 4.
are grossly inflated, while the prediction scores are stable. Finally, the conjecture in Remark 4 is also empirically checked; Table 1 shows that, for large $d$, the sample (or prediction) and true scores for the $k$th component, $k > m$, are nearly uncorrelated.
4.2. Numerical performance of the bias-adjustment factor estimation
We now test our estimators of the bias-adjustment factor $\rho_i$, using the following data-generating models with $m = 2$.
The first one is called a spike model. We sample from the $d$-dimensional zero-mean normal distribution where the two largest eigenvalues of the covariance matrix are $\lambda_i = \sigma_i^2 d$, for $i = 1, 2$, with $(\sigma_1^2, \sigma_2^2) = (0.02, 0.01)$. The rest of the eigenvalues are slowly decreasing. In particular, $\lambda_i = \tau i^{-\beta}$, where $\tau = \big[\sum_{i=3}^{d} i^{-\beta} / (d - 2)\big]^{-1}$. We set $\beta = 0.3$ or $0.5$. This spike model has more than two unique principal components for each fixed dimension, but in the limit $d \to \infty$, only the first two principal components are useful.
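A generator for this spike model can be sketched as follows. Sampling in the covariance eigenbasis (independent coordinates with variances $\lambda_i$) is our simplification; a fixed rotation would not change the distribution of the scores:

```python
import numpy as np

rng = np.random.default_rng(2)

def spike_model_sample(d, n, sigma2=(0.02, 0.01), beta=0.3):
    """Spike-model eigenvalues: lambda_i = sigma_i^2 d for i = 1, 2, and
    lambda_i = tau * i^{-beta} for i >= 3, with tau chosen so that the
    trailing eigenvalues average to one.  Returns a d x n sample drawn in
    the eigenbasis, together with the eigenvalue vector."""
    m = len(sigma2)
    i = np.arange(m + 1, d + 1, dtype=float)
    tau = ((i ** -beta).sum() / (d - m)) ** -1
    lam = np.concatenate([np.asarray(sigma2) * d, tau * i ** -beta])
    X = np.sqrt(lam)[:, None] * rng.standard_normal((d, n))
    return X, lam

X, lam = spike_model_sample(d=2000, n=50)
print(lam[:2], lam[2:].mean())
```

By construction the two spikes grow linearly in $d$ while the trailing eigenvalues stay bounded, which is exactly the regime of (A1)–(A4).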
Table 1
The case $k > m$: the 3rd sample and prediction scores (unadjusted). Shown are the mean (standard deviation) of the variances and of the correlation coefficients with the true scores, from 100 repetitions. The true variance is $\lambda_3 = \mathrm{Var}(w_{3*}) \approx 6.5$.

              Sample scores    Prediction scores
Variance      120.7 (4.4)      1.38 (0.2)
Corr. Coef.   -0.0024 (0.2)    -0.004 (0.15)
The second model is a mixture model. Let $\mu_g$ ($g = 1, 2, 3$) be $d$-dimensional vectors whose elements are randomly drawn from $\{-a, 0, a\}$ with replacement, for a given $a > 0$, and then treated as fixed quantities. Given the $\mu_g$'s, we sample from the mixture model $X \mid G = g \sim N(\mu_g, I_d)$, $P(G = g) = p_g > 0$, $\sum_{g=1}^{3} p_g = 1$. We set $(p_1, p_2, p_3) = (0.5, 0.3, 0.2)$. It can be shown that the mean $E(X)$ is non-zero, and that the covariance matrix satisfies the assumptions of the 2-component model in (A1)–(A4).
For various high-dimension, low-sample-size situations, with $d$ ranging from 5,000 to 20,000 and $n$ from 50 to 100, samples from each of these models are obtained. For each sample, the theoretical quantity $\rho_i = \rho_i(W_1)$ and the best-fitted Procrustes scaler $\check{\rho}_i = \check{\rho}_i(W_1)$ are computed. These quantities depend on the $m \times n$ random matrix $W_1$. The mean and standard deviation of $\rho_i$ (from 100 repetitions) are shown in the first column of Table 2. As expected, the theoretical value $\rho_i$ depends on the sample size $n$; a larger sample size decreases both the bias, $E(\rho_i)$, and the variance, $\mathrm{Var}(\rho_i)$.
The mean of the best-fitted scaler $\check{\rho}_i$ ($i = 1, 2$) is displayed in the second column of the table. While they are quite close to their theoretical counterparts, the $\check{\rho}_i$'s are significantly larger for the mixture model, whose signal-to-noise ratio is smaller than that of the spike model, and for the not-so-large dimension $d = 5{,}000$. This is not unexpected, since the theoretical values are also based on increasing-dimension asymptotic arguments.
We further compute the proposed estimators of $\rho_i$, given in (22) and (25)–(27). We also compute the estimator derived from Lee et al. (2010), which is the square root of the reciprocal of the shrinkage factor, obtained by numerical iterations, denoted $\hat{d}_\nu$ in Lee et al. (2010). (The relation of Lee et al. (2010) to our work is further discussed in Section 5.) Table 2 summarizes the results from our simulation study. All of the methods considered provide accurate estimates of the theoretical quantity $\rho_i$. We omit the numerical results for the estimators (26) and (27), as their performance is very close to that of (25).
4.3. Bias-adjustment improves classification
Our last simulation study is an application of the bias-adjustment procedure to classification. Our training and testing data, each with sample size 100, are sampled from the mixture model with three groups, as described in Section 4.2. As frequently done in practice, dimension reduction by standard principal component analysis is performed first; then our classifier, a support vector machine (SVM; Cristianini & Shawe-Taylor, 2000), is trained on the sample principal
Table 2
Simulation results from 100 repetitions. "Theory" is the mean (standard deviation) of $\rho_i$; "Best" is $\check{\rho}_i$ (28); "Asymp." is $\tilde{\rho}_i$ (22); "Jackknife" is $\hat{\rho}_i^{(1)}$ (25); "LZW" is from Lee et al. (2010). Averages are shown for the latter four columns. The standard errors of the quantities in estimation of $\rho_i$ are at most 0.04.

ρ1
Model               d      n    Theory        Best   Asymp.  Jackknife  LZW
Spike, β = 0.3      5000   50   1.41 (0.07)   1.42   1.40    1.43       1.41
                    10000  50   1.42 (0.06)   1.43   1.42    1.44       1.42
                    10000  100  1.23 (0.03)   1.23   1.23    1.24       1.23
                    20000  100  1.23 (0.02)   1.23   1.23    1.24       1.23
Spike, β = 0.5      5000   50   1.42 (0.08)   1.45   1.41    1.45       1.40
                    10000  50   1.43 (0.07)   1.45   1.43    1.46       1.42
                    10000  100  1.22 (0.02)   1.23   1.22    1.23       1.21
                    20000  100  1.23 (0.02)   1.23   1.23    1.24       1.22
Mixture, a = 0.15   5000   50   2.06 (0.06)   2.22   1.92    2.14       2.00
                    10000  50   2.09 (0.06)   2.17   1.98    2.14       2.02
                    10000  100  1.63 (0.02)   1.67   1.61    1.65       1.63
                    20000  100  1.64 (0.02)   1.66   1.62    1.66       1.63

ρ2
Model               d      n    Theory        Best   Asymp.  Jackknife  LZW
Spike, β = 0.3      5000   50   1.79 (0.11)   1.86   1.75    1.78       1.79
                    10000  50   1.79 (0.11)   1.82   1.77    1.77       1.79
                    10000  100  1.43 (0.06)   1.44   1.43    1.42       1.43
                    20000  100  1.43 (0.05)   1.44   1.43    1.42       1.43
Spike, β = 0.5      5000   50   1.79 (0.11)   1.99   1.72    1.81       1.71
                    10000  50   1.80 (0.11)   1.88   1.76    1.79       1.74
                    10000  100  1.44 (0.05)   1.47   1.43    1.44       1.41
                    20000  100  1.42 (0.05)   1.44   1.42    1.41       1.40
Mixture, a = 0.15   5000   50   2.62 (0.21)   5.44   2.20    2.68       2.46
                    10000  50   2.68 (0.19)   3.20   2.35    2.68       2.50
                    10000  100  2.00 (0.09)   2.13   1.90    2.00       1.99
                    20000  100  1.99 (0.10)   2.05   1.93    1.97       1.97
component scores. In this simulation, we fix m “ 2 and d “ 5000. We compare the training and testing missclassification error rates (estimated by 100
repetitions) of SVMs trained (and tested) either on the unadjusted sample and
x1 and W
x‹ , or on the bias-adjusted sample and prediction
prediction scores, W
padjq
padjq
x
x
scores, W1
and W‹
in (23). The estimated error rates are shown in Table 3. It is clear that using the bias-adjusted scores, proposed in (23), greatly
improves the performance of classification.
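The whole pipeline — scores, plug-in adjustment, classification — can be sketched end to end. Everything below is our own demo under stated simplifications: we center the data, use a smaller toy configuration, and substitute a nearest-centroid classifier for the SVM of the paper; the scale mismatch between unadjusted sample and prediction scores is visible directly in the score magnitudes:

```python
import numpy as np

rng = np.random.default_rng(4)

def mixture(n, d, a=0.15, probs=(0.5, 0.3, 0.2), mus=None):
    """Three-group mixture of Section 4.2 (our own generator)."""
    if mus is None:
        mus = rng.choice([-a, 0.0, a], size=(3, d))
    g = rng.choice(3, size=n, p=probs)
    return mus[g] + rng.standard_normal((n, d)), g, mus

d, n, m = 5000, 100, 2
Xtr, gtr, mus = mixture(n, d)
Xte, gte, _ = mixture(n, d, mus=mus)

center = Xtr.mean(axis=0)
U, s, Vt = np.linalg.svd(Xtr - center, full_matrices=False)
S_tr = (Xtr - center) @ Vt[:m].T        # sample scores (n x m)
S_te = (Xte - center) @ Vt[:m].T        # prediction scores for held-out data

# Plug-in adjustment factors in the spirit of (21)-(22).
lam_hat = s ** 2 / n
tau2 = lam_hat[m:-1].mean() * n / d     # trailing eigenvalues estimate the noise
lam_tilde = np.maximum(n / d * lam_hat[:m] - tau2, 1e-8)
rho = np.sqrt(1 + tau2 / lam_tilde)

S_tr_adj, S_te_adj = S_tr / rho, S_te * rho   # (23)

def centroid_error(train, labels, test, truth):
    """Nearest-centroid classifier: a simple stand-in for the SVM."""
    cents = np.stack([train[labels == k].mean(axis=0) for k in range(3)])
    pred = np.argmin(((test[:, None, :] - cents) ** 2).sum(-1), axis=1)
    return float(np.mean(pred != truth))

ratio_unadj = np.abs(S_tr).mean() / np.abs(S_te).mean()
ratio_adj = np.abs(S_tr_adj).mean() / np.abs(S_te_adj).mean()
err_unadj = centroid_error(S_tr, gtr, S_te, gte)
err_adj = centroid_error(S_tr_adj, gtr, S_te_adj, gte)
print(ratio_unadj, ratio_adj, err_unadj, err_adj)
```

In this configuration the unadjusted sample scores are a factor of roughly $\rho_i^2$ larger than the prediction scores, while after adjustment the two sets of scores live on a common scale and the classifier on adjusted scores performs well.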
To better understand the huge improvement in classification performance, we plot the sample and prediction scores that are the inputs of the classifier. In Fig. 5, the classifier is estimated from the sample scores and is used to classify future observations, i.e., the prediction scores. Due to the scaling bias, the unadjusted sample and prediction scores are of different scales (shown in the left panel), and classification is bound to fail. On the other hand, the proposed bias adjustment, shown in the right panel, works well for this data set, leading to a better classification performance.
Table 3
Means (standard errors) of misclassification error rates (in percent).

                 Unadjusted scores   Bias-adjusted scores
Training Error   0.04 (0.02)         0.07 (0.03)
Testing Error    21.4 (1.33)         1.98 (0.23)
[Figure 5 here: PC 1 scores against PC 2 scores, unadjusted (left) and bias-adjusted (right), with sample and prediction scores overlaid.]
Fig 5. Bias-adjusted scores from the mixture models greatly improve the classification performance. Different colors correspond to different groups; sample scores (unadjusted in the left panel, adjusted in the right panel) are plotted together with the prediction scores.
5. Discussion
Standard principal component analysis is shown to be useful in the dimension reduction of data from $m$-component models with diverging variances. In particular, in the high-dimension, low-sample-size asymptotic scenario, we reveal that the sample and prediction scores have a systematic bias that can be consistently adjusted. We propose several estimators of the scaling bias, while there is no compelling reason to adjust the rotational bias. The amount of bias is large when the sample size is small and when the variance of the accumulated noise is large compared to the variances of the first $m$ components.
This work is built upon several previous findings on principal component scores in high dimensions. The decomposition of the bias into rotation and scaling parts is also found in Hellton & Thoresen (2017) and Jung et al. (2017). While our current work is a continuation of the latter, Hellton & Thoresen (2017) focused on the standardized sample scores $\hat{w}_i / \|\hat{w}_i\|$. Using the standardized scores complicates the interpretation, since the resulting scaling bias, say $\rho_i^{(\mathrm{HT})}$, can be any positive value. For example, if $m = 1$, then $\rho_1^{(\mathrm{HT})} = n^{1/2} \rho_1 / \|\hat{w}_1\| = \sqrt{n / \lambda_1(W)} \approx 1 / \sigma_1$, and the sample scores are stretched if $\sigma_1 < 1$ and shrunk if $\sigma_1 > 1$, for any value of the error variance $\tau^2$. In comparison, the unstandardized sample scores $\hat{w}_i$ are always stretched, by $\rho_i > 1$. The identification of the systematic bias is the key to revealing the bias of the prediction scores, which is not available in the previous work (Hellton & Thoresen, 2017; Jung et al., 2017).
Shen et al. (2016) used a stronger model, with eigenvalues diverging faster than the dimension, i.e., $\lambda_1 / d \to \infty$, to show that there is an asymptotic scaling bias for standardized sample scores. They, however, did not consider either the multiple-component model or the prediction scores. Moreover, their result, when applied to unstandardized scores, is less useful, as, following Shen et al. (2016), the scores are asymptotically unbiased. For example, under their assumptions, for a 1-component model with $\lambda_1 \gg d$, $|\hat{w}_{11} / w_{11}| \to 1$ as $d \to \infty$ for any fixed sample size $n$. This is an artifact of assuming fast-diverging variance; our scaling factor $\rho_i$ also approaches 1 when $\lambda_i(W) \approx n \lambda_i / d$ increases.
Lee et al. (2010) discussed adjusting the bias in the prediction of principal components, based on random matrix theory and the asymptotic scenario $d/n \to \gamma \in (0, \infty)$, $n \to \infty$. They showed that the prediction scores tend to be smaller than the sample scores, and that the ratio of the shrinkage is asymptotically $\mathrm{sd}(\hat{w}_{i1}) / \mathrm{sd}(\hat{w}_{i*}) \approx \rho_i^{(\mathrm{LZW})} = (\lambda_i - 1)/(\lambda_i + \gamma - 1)$. This "shrinkage factor" $\rho_i^{(\mathrm{LZW})}$ corresponds to the squared reciprocal of our scaling bias, $\rho_i^{-2}$. Specifically, these two quantities, for finite $d$ and $n$ with $d \gg n$, are in fact close to each other; replacing $\lambda_i$ by $\sigma_i^2 d$ and $\gamma$ by $d/n$ results in $\rho_i^{(\mathrm{LZW})} \approx (n \sigma_i^2 - n/d)/(n \sigma_i^2 + 1 - n/d)$, which approximates $\rho_i^{-2} = \lambda_i(W)/(\lambda_i(W) + \tau^2)$, with $\lambda_i(W) \approx n \sigma_i^2$, to order $n^{-1/2}$ when $\tau = 1$. Our work can be thought of as an extension of Lee et al. (2010) from the asymptotic regime $d \asymp n$ to the high-dimension, low-sample-size situations (see also Lee et al. (2014)). Finally, we note that in the asymptotic scenario of Lee et al. (2010) and Lee et al. (2014) there is no rotational bias. This is because in their limit the sample size is infinite. We show that the rotational bias is universal to both sample and prediction scores, and is of order $n^{-1/2}$.
Principal component analysis is often thought of as a special case of the factor model, or as one of its estimation methods. Our covariance model with diverging variances frequently appears in recent investigations of high-dimensional factor models (e.g., Fan et al., 2013; Li et al., 2017; Sundberg & Feldmann, 2016). In particular, Sundberg & Feldmann (2016) also investigated the high-dimension, low-sample-size asymptotic scenario for factor analysis, and reached a similar conclusion to this work. Our work echoes the message that "(principal component or factor) scores are useful in high dimensions," and further provides an interpretable decomposition of the bias and methods of bias adjustment.
References
Abraham, G. & Inouye, M. (2014). Fast principal component analysis of
large-scale genome-wide data. PloS one 9, e93766.
Aoshima, M., Shen, D., Shen, H., Yata, K., Zhou, Y.-H. & Marron, J. S. (2017). A survey of high dimension low sample size asymptotics. Aust. N. Z. J. Stat., to appear.
Cristianini, N. & Shawe-Taylor, J. (2000). An Introduction to Support
Vector Machines. Cambridge University Press.
Fan, J., Han, F. & Liu, H. (2014). Challenges of big data analysis. Natl. Sci.
Rev. 1, 293–314.
Fan, J., Liao, Y. & Mincheva, M. (2013). Large covariance estimation by thresholding principal orthogonal complements. J. R. Stat. Soc. B 75, 603–680.
Hellton, K. H. & Thoresen, M. (2017). When and why are principal component scores a good tool for visualizing high-dimensional data? Scand. J. Stat., to appear.
Johnstone, I. M. & Lu, A. Y. (2009). On Consistency and Sparsity for
Principal Components Analysis in High Dimensions. J. Am. Stat. Assoc.
104, 682–693.
Jung, S., Ahn, J. & Lee, M. H. (2017). On the number of principal components in high dimensions. Submitted to Biometrika.
Jung, S. & Marron, J. S. (2009). PCA consistency in high dimension, low
sample size context. Ann. Stat. 37, 4104–4130.
Jung, S., Sen, A. & Marron, J. (2012). Boundary behavior in High Dimension, Low Sample Size asymptotics of PCA. J. Multivar. Anal. 109, 190–203.
Lee, S., Zou, F. & Wright, F. A. (2010). Convergence and prediction of
principal component scores in high-dimensional settings. Ann. Stat. 38, 3605.
Lee, S., Zou, F. & Wright, F. A. (2014). Convergence of sample eigenvalues,
eigenvectors, and principal component scores for ultra-high dimensional data.
Biometrika 101, 484.
Li, Q., Cheng, G., Fan, J. & Wang, Y. (2017). Embracing the blessing of
dimensionality in factor models. J. Am. Stat. Assoc. To appear.
Paul, D. (2007). Asymptotics of sample eigenstructure for a large dimensional
spiked covariance model. Stat. Sin. 17, 1617–1642.
Shen, D., Shen, H., Zhu, H. & Marron, J. (2016). The statistics and
mathematics of high dimension low sample size asymptotics. Stat. Sin. 26,
1747.
Sundberg, R. & Feldmann, U. (2016). Exploratory factor analysis – parameter estimation and scores prediction with high-dimensional data. J. Multivar. Anal. 148, 49–59.
Zou, H., Hastie, T. & Tibshirani, R. (2006). Sparse Principal Component
Analysis. J. Comp. Graph. Stat. 15, 265–286.
GEOMETRY AND DYNAMICS IN GROMOV HYPERBOLIC
METRIC SPACES
arXiv:1409.2155v7 [math.DS] 28 Jun 2016
WITH AN EMPHASIS ON NON-PROPER SETTINGS
Tushar Das
David Simmons
Mariusz Urbański
University of Wisconsin – La Crosse, Department of Mathematics
& Statistics, 1725 State Street, La Crosse, WI 54601, USA
E-mail address: [email protected]
URL: https://sites.google.com/a/uwlax.edu/tdas/
University of York, Department of Mathematics, Heslington, York
YO10 5DD, UK
E-mail address: [email protected]
URL: https://sites.google.com/site/davidsimmonsmath/
University of North Texas, Department of Mathematics, 1155 Union
Circle #311430, Denton, TX 76203-5017, USA
E-mail address: [email protected]
URL: http://www.urbanskimath.com/
2010 Mathematics Subject Classification. Primary 20H10, 28A78, 37F35, 20F67,
20E08; Secondary 37A45, 22E65, 20M20
Key words and phrases. hyperbolic geometry, Gromov hyperbolic metric space,
infinite-dimensional symmetric space, metric tree, Hausdorff dimension, Poincaré
exponent, Patterson–Sullivan measure, conformal measure, divergence type,
geometrically finite group, global measure formula, exact dimensional measure
Abstract. Our monograph presents the foundations of the theory of groups
and semigroups acting isometrically on Gromov hyperbolic metric spaces. We
make it a point to avoid any assumption of properness/compactness, keeping in mind the motivating example of H∞, the infinite-dimensional rank-one
symmetric space of noncompact type over the reals. We have not skipped
over parts that might be thought of as “trivial” extensions of the finite-dimensional/proper theory, as our intuition has turned out to be wrong often
enough regarding these matters that we feel it is worth writing everything down
explicitly and with an eye to detail. Moreover, we feel that it is a methodological advantage to have the entire theory presented from scratch, in order
to provide a basic reference for the theory.
Though our work unifies and extends a long list of results obtained by
many authors, Part 1 of this monograph may be treated as mostly expository.
The remainder of this work, some of whose highlights are described in brief
below, contains several new methods, examples, and theorems. In Part 2, we
introduce a modification of the Poincaré exponent, an invariant of a group
which provides more information than the usual Poincaré exponent, which we
then use to vastly generalize the Bishop–Jones theorem relating the Hausdorff
dimension of the radial limit set to the Poincaré exponent of the underlying
semigroup. We construct examples which illustrate the surprising connection
between Hausdorff dimension and various notions of discreteness which show
up in non-proper settings. Part 3 of the monograph provides a number of
examples of groups acting on H∞ which exhibit a wide range of phenomena not
to be found in the finite-dimensional theory. Such examples often demonstrate
the optimality of our theorems.
In Part 4, we construct Patterson–Sullivan measures for groups of divergence type without any compactness assumption on either the boundary
or the limit set. This is carried out by first constructing such measures on
the Samuel–Smirnov compactification of the bordification of the underlying
hyperbolic space, and then showing that the measures are supported on the
(non-compactified) bordification. We end with a study of quasiconformal measures of geometrically finite groups in terms of doubling and exact dimensionality. Our analysis characterizes exact dimensionality in terms of Diophantine
approximation on the boundary. We demonstrate that though all doubling
Patterson–Sullivan measures are exact dimensional, there exist Patterson–
Sullivan measures that are exact dimensional but not doubling, as well as
ones that are neither doubling nor exact dimensional.
Dedicated to the memory of our friend
Bernd O. Stratmann
Mathematiker
17th July 1957 – 8th August 2015
Contents

List of Figures
Prologue

Chapter 1. Introduction and Overview
  1.1. Preliminaries
    1.1.1. Algebraic hyperbolic spaces
    1.1.2. Gromov hyperbolic metric spaces
    1.1.3. Discreteness
    1.1.4. The classification of semigroups
    1.1.5. Limit sets
  1.2. The Bishop–Jones theorem and its generalization
    1.2.1. The modified Poincaré exponent
  1.3. Examples
    1.3.1. Schottky products
    1.3.2. Parabolic groups
    1.3.3. Geometrically finite and convex-cobounded groups
    1.3.4. Counterexamples
    1.3.5. R-trees and their isometry groups
  1.4. Patterson–Sullivan theory
    1.4.1. Quasiconformal measures of geometrically finite groups
  1.5. Appendices

Part 1. Preliminaries

Chapter 2. Algebraic hyperbolic spaces
  2.1. The definition
  2.2. The hyperboloid model
  2.3. Isometries of algebraic hyperbolic spaces
  2.4. Totally geodesic subsets of algebraic hyperbolic spaces
  2.5. Other models of hyperbolic geometry
    2.5.1. The (Klein) ball model
    2.5.2. The half-space model
    2.5.3. Transitivity of the action of Isom(H) on ∂H

Chapter 3. R-trees, CAT(-1) spaces, and Gromov hyperbolic metric spaces
  3.1. Graphs and R-trees
  3.2. CAT(-1) spaces
    3.2.1. Examples of CAT(-1) spaces
  3.3. Gromov hyperbolic metric spaces
    3.3.1. Examples of Gromov hyperbolic metric spaces
  3.4. The boundary of a hyperbolic metric space
    3.4.1. Extending the Gromov product to the boundary
    3.4.2. A topology on bord X
  3.5. The Gromov product in algebraic hyperbolic spaces
    3.5.1. The Gromov boundary of an algebraic hyperbolic space
  3.6. Metrics and metametrics on bord X
    3.6.1. General theory of metametrics
    3.6.2. The visual metametric based at a point w ∈ X
    3.6.3. The extended visual metric on bord X
    3.6.4. The visual metametric based at a point ξ ∈ ∂X

Chapter 4. More about the geometry of hyperbolic metric spaces
  4.1. Gromov triples
  4.2. Derivatives
    4.2.1. Derivatives of metametrics
    4.2.2. Derivatives of maps
    4.2.3. The dynamical derivative
  4.3. The Rips condition
  4.4. Geodesics in CAT(-1) spaces
  4.5. The geometry of shadows
    4.5.1. Shadows in regularly geodesic hyperbolic metric spaces
    4.5.2. Shadows in hyperbolic metric spaces
  4.6. Generalized polar coordinates

Chapter 5. Discreteness
  5.1. Topologies on Isom(X)
  5.2. Discrete groups of isometries
    5.2.1. Topological discreteness
    5.2.2. Equivalence in finite dimensions
    5.2.3. Proper discontinuity
    5.2.4. Behavior with respect to restrictions
    5.2.5. Countability of discrete groups

Chapter 6. Classification of isometries and semigroups
  6.1. Classification of isometries
    6.1.1. More on loxodromic isometries
    6.1.2. The story for real hyperbolic spaces
  6.2. Classification of semigroups
    6.2.1. Elliptic semigroups
    6.2.2. Parabolic semigroups
    6.2.3. Loxodromic semigroups
  6.3. Proof of the Classification Theorem
  6.4. Discreteness and focal groups

Chapter 7. Limit sets
  7.1. Modes of convergence to the boundary
  7.2. Limit sets
  7.3. Cardinality of the limit set
  7.4. Minimality of the limit set
  7.5. Convex hulls
  7.6. Semigroups which act irreducibly on algebraic hyperbolic spaces
  7.7. Semigroups of compact type

Part 2. The Bishop–Jones theorem

Chapter 8. The modified Poincaré exponent
  8.1. The Poincaré exponent of a semigroup
  8.2. The modified Poincaré exponent of a semigroup

Chapter 9. Generalization of the Bishop–Jones theorem
  9.1. Partition structures
  9.2. A partition structure on ∂X
  9.3. Sufficient conditions for Poincaré regularity

Part 3. Examples

Chapter 10. Schottky products
  10.1. Free products
  10.2. Schottky products
  10.3. Strongly separated Schottky products
  10.4. A partition-structure–like structure
  10.5. Existence of Schottky products

Chapter 11. Parabolic groups
  11.1. Examples of parabolic groups acting on E∞
    11.1.1. The Haagerup property and the absence of a Margulis lemma
    11.1.2. Edelstein examples
  11.2. The Poincaré exponent of a finitely generated parabolic group
    11.2.1. Nilpotent and virtually nilpotent groups
    11.2.2. A universal lower bound on the Poincaré exponent
    11.2.3. Examples with explicit Poincaré exponents

Chapter 12. Geometrically finite and convex-cobounded groups
  12.1. Some geometric shapes
    12.1.1. Horoballs
    12.1.2. Dirichlet domains
  12.2. Cobounded and convex-cobounded groups
    12.2.1. Characterizations of convex-coboundedness
    12.2.2. Consequences of convex-coboundedness
  12.3. Bounded parabolic points
  12.4. Geometrically finite groups
    12.4.1. Characterizations of geometrical finiteness
    12.4.2. Consequences of geometrical finiteness
    12.4.3. Examples of geometrically finite groups

Chapter 13. Counterexamples
  13.1. Embedding R-trees into real hyperbolic spaces
  13.2. Strongly discrete groups with infinite Poincaré exponent
  13.3. Moderately discrete groups which are not strongly discrete
  13.4. Poincaré irregular groups
  13.5. Miscellaneous counterexamples

Chapter 14. R-trees and their isometry groups
  14.1. Construction of R-trees by the cone method
  14.2. Graphs with contractible cycles
  14.3. The nearest-neighbor projection onto a convex set
  14.4. Constructing R-trees by the stapling method
  14.5. Examples of R-trees constructed using the stapling method

Part 4. Patterson–Sullivan theory

Chapter 15. Conformal and quasiconformal measures
  15.1. The definition
  15.2. Conformal measures
  15.3. Ergodic decomposition
  15.4. Quasiconformal measures
    15.4.1. Pointmass quasiconformal measures
    15.4.2. Non-pointmass quasiconformal measures

Chapter 16. Patterson–Sullivan theorem for groups of divergence type
  16.1. Samuel–Smirnov compactifications
  16.2. Extending the geometric functions to X̂
  16.3. Quasiconformal measures on X̂
  16.4. The main argument
  16.5. End of the argument
  16.6. Necessity of the generalized divergence type assumption
  16.7. Orbital counting functions of nonelementary groups

Chapter 17. Quasiconformal measures of geometrically finite groups
  17.1. Sufficient conditions for divergence type
  17.2. The global measure formula
  17.3. Proof of the global measure formula
  17.4. Groups for which µ is doubling
  17.5. Exact dimensionality of µ
    17.5.1. Diophantine approximation on Λ
    17.5.2. Examples and non-examples of exact dimensional measures

Appendix A. Open problems
Appendix B. Index of defined terms
Bibliography
List of Figures

3.1.1 A geodesic triangle in an R-tree
3.3.1 A quadruple of points in an R-tree
3.3.2 Expressing distance via Gromov products in an R-tree
3.4.1 A Gromov sequence in an R-tree
3.5.1 Relating angle and the Gromov product
3.5.2 B is strongly Gromov hyperbolic
3.5.3 A formula for the Busemann function in the half-space model
3.6.1 The Hamenstädt distance
4.2.1 The derivative of g at ∞
4.3.1 The Rips condition
4.4.1 The triangle ∆(x, y1, y2)
4.5.1 Shadows in regularly geodesic hyperbolic metric spaces
4.5.2 The Intersecting Shadows Lemma
4.5.3 The Big Shadows Lemma
4.5.4 The Diameter of Shadows Lemma
4.6.1 Polar coordinates in the half-space model
6.4.1 High altitude implies small displacement in the half-space model
7.1.1 Conical convergence to the boundary
7.1.2 Converging horospherically but not radially to the boundary
9.2.1 The construction of children
9.2.2 The sets Cn, for n ∈ Z
10.3.1 The strong separation lemma for Schottky products
12.1.1 Visualizing horoballs in the ball and half-space models
12.1.2 Diameter decay of a ball complement inside a horoball
12.1.3 The Cayley graph of Γ = F2(Z) = ⟨γ1, γ2⟩
12.2.1 Proving that convex-cobounded groups are of compact type
12.3.1 The geometry of bounded parabolic points
12.4.1 Proving that geometrically finite groups are of compact type
12.4.2 Local finiteness of the horoball collection
12.4.3 Orbit maps of geometrically finite groups are QI embeddings
13.4.1 Geometry of automorphisms of a simplicial tree
14.2.1 Triangles in graphs with contractible cycles
14.2.2 Triangles in graphs with contractible cycles: general case
14.4.1 The consistency condition for stapling metric spaces
14.5.1 The Cayley graph of F2(Z) as a pure Schottky product
14.5.2 An example of a geometric product
14.5.3 Another example of a geometric product
17.2.1 Cusp excursion and ball measure functions
17.3.1 Estimating measures of balls via information “at infinity”
17.3.2 Estimating measures of balls via “local” information
Prologue
. . . Cela suffit pour faire comprendre que dans les cinq mémoires
des Acta mathematica que j’ai consacrés à l’étude des transcendantes fuchsiennes et kleinéennes, je n’ai fait qu’effleurer un sujet très vaste, qui fournira sans doute aux géomètres l’occasion de
nombreuses et importantes découvertes.1
– H. Poincaré, Acta Mathematica, 5, 1884, p. 278.
The theory of discrete subgroups of real hyperbolic space has a long history. It
was inaugurated by Poincaré, who developed the two-dimensional (Fuchsian) and
three-dimensional (Kleinian) cases of this theory in a series of articles published
between 1881 and 1884 that included numerous notes submitted to the C. R. Acad.
Sci. Paris, a paper at Klein’s request in Math. Annalen, and five memoirs commissioned by Mittag-Leffler for his then freshly-minted Acta Mathematica. One
must also mention the complementary work of the German school that came before Poincaré and continued well after he had moved on to other areas, viz. that
of Klein, Schottky, Schwarz, and Fricke. See [80, Chapter 3] for a brief exposition of this fascinating history, and [79, 63] for more in-depth presentations of the
mathematics involved.
We note that in finite dimensions, the theory of higher-dimensional Kleinian
groups, i.e., discrete isometry groups of the hyperbolic d-space Hd for d ≥ 4, is
markedly different from that in H3 and H2 . For example, the Teichmüller theory used by the Ahlfors–Bers school (viz. Marden, Maskit, Jørgensen, Sullivan,
Thurston, etc.) to study three-dimensional Kleinian groups has no generalization
to higher dimensions. Moreover, the recent resolution of the Ahlfors measure conjecture [3, 43] has more to do with three-dimensional topology than with analysis
and dynamics. Indeed, the conjecture remains open in higher dimensions [106, p.
526, last paragraph]. Throughout the twentieth century, there are several instances
of theorems proven for three-dimensional Kleinian groups whose proofs extended
1This is enough to make it apparent that in these five memoirs in Acta Mathematica which I have
dedicated to the study of Fuchsian and Kleinian transcendents, I have only skimmed the surface
of a very broad subject, which will no doubt provide geometers with the opportunity for many
important discoveries.
easily to n dimensions (e.g. [21, 133]), but it seems that the theory of higher-dimensional Kleinian groups was not really considered a subject in its own right
until around the 1990s. For more information on the theory of higher-dimensional
Kleinian groups, see the survey article [106], which describes the state of the art
up to the last decade, emphasizing connections with homological algebra.
But why stop at finite n? Dennis Sullivan, in his IHÉS Seminar on Conformal
and Hyperbolic Geometry [164] that ran during the late 1970s and early ’80s, indicated a possibility of developing the theory of discrete groups acting by hyperbolic
isometries on the open unit ball of a separable infinite-dimensional Hilbert space.2
Later in the early ’90s, Misha Gromov observed the paucity of results regarding
such actions in his seminal lectures Asymptotic Invariants of Infinite Groups [86]
where he encouraged their investigation in memorable terms: “The spaces like this
[infinite-dimensional symmetric spaces] . . . look as cute and sexy to me as their
finite dimensional siblings but they have been for years shamefully neglected by
geometers and algebraists alike”.
Gromov’s lament had not fallen on deaf ears, and the geometry and representation theory of infinite-dimensional hyperbolic space H∞ and its isometry group
have been studied in the last decade by a handful of mathematicians, see e.g.
[40, 65, 132]. However, infinite-dimensional hyperbolic geometry has come into
prominence most spectacularly through the recent resolution of a long-standing
conjecture in algebraic geometry due to Enriques from the late nineteenth century. Cantat and Lamy [47] proved that the Cremona group (i.e. the group of
birational transformations of the complex projective plane) has uncountably many
non-isomorphic normal subgroups, thus disproving Enriques’ conjecture. Key to
their enterprise is the fact, due to Manin [125], that the Cremona group admits a
faithful isometric action on a non-separable infinite-dimensional hyperbolic space,
now known as the Picard–Manin space.
Our project was motivated by a desire to answer Gromov’s plea by exposing a
coherent general theory of groups acting isometrically on the infinite-dimensional
hyperbolic space H∞ . In the process we came to realize that a more natural domain for our inquiries was the much larger setting of semigroups acting on Gromov hyperbolic metric spaces – that way we could simultaneously answer our own
questions about H∞ and construct a theoretical framework for those who are interested in more exotic spaces such as the curve graph, arc graph, and arc complex
[95, 126, 96] and the free splitting and free factor complexes [89, 27, 104, 96].
2This was the earliest instance of such a proposal that we could find in the literature, although
(as pointed out to us by P. de la Harpe) infinite-dimensional hyperbolic spaces without groups
acting on them had been discussed earlier [130, §27], [131, 60]. It would be of interest to know
whether such an idea may have been discussed prior to that.
These examples are particularly interesting as they extend the well-known dictionary [26, p.375] between mapping class groups and the groups Out(FN ). In
another direction, a dictionary is emerging between mapping class groups and Cremona groups, see [30, 66]. We speculate that developing the Patterson–Sullivan
theory in these three areas would be fruitful and may lead to new connections and
analogies that have not surfaced till now.
In a similar spirit, we believe there is a longer story for which this monograph
lays the foundations. In general, infinite-dimensional space is a wellspring of outlandish examples and the wide range of new phenomena we have started to uncover
has no analogue in finite dimensions. The geometry and analysis of such groups
should pique the interests of specialists in probability, geometric group theory, and
metric geometry. More speculatively, our work should interact with the ongoing
and still nascent study of geometry, topology, and dynamics in a variety of infinite-dimensional spaces and groups, especially in scenarios with sufficient negative curvature. Here are three concrete settings that would be interesting to consider: the
universal Teichmüller space, the group of volume-preserving diffeomorphisms of R3
or a 3-torus, and the space of Kähler metrics/potentials on a closed complex manifold in a fixed cohomology class equipped with the Mabuchi–Semmes–Donaldson
metric. We have been developing a few such themes. The study of thermodynamics
(equilibrium states and Gibbs measures) on the boundaries of Gromov hyperbolic
spaces will be investigated in future work [57]. We speculate that the study of
stochastic processes (random walks and Brownian motion) in such settings would
be fruitful. Furthermore, it would be of interest to develop the theory of discrete
isometric actions and limit sets in infinite-dimensional spaces of higher rank.
Acknowledgements. This monograph is dedicated to our colleague Bernd O.
Stratmann, who passed away on the 8th of August, 2015. Various discussions with
Bernd provided inspiration for this project and we remain grateful for his friendship.
The authors thank D. P. Sullivan, D. Mumford, B. Farb, P. Pansu, F. Ledrappier,
A. Wilkinson, K. Biswas, E. Breuillard, A. Karlsson, I. Assani, M. Lapidus, R.
Guo, Z. Huang, I. Gekhtman, G. Tiozzo, P. Py, M. Davis, M. Roychowdury, M.
Hochman, J. Tao, P. de la Harpe, T. Barthelme, J. P. Conze, and Y. Guivarc’h
for their interest and encouragement, as well as for invitations to speak about our
work at various venues. We are grateful to S. J. Patterson, J. Elstrodt, and É. Ghys
for enlightening historical discussions on various themes relating to the history of
Fuchsian and Kleinian groups and their study through the twentieth century, and to
D. P. Sullivan and D. Mumford for suggesting work on diffeomorphism groups and
the universal Teichmüller space. We thank X. Xie for pointing us to a solution to
one of the problems in our Appendix A. The research of the first-named author was
supported in part by 2014-2015 and 2016-2017 Faculty Research Grants from the
University of Wisconsin–La Crosse. The research of the second-named author was
supported in part by the EPSRC Programme Grant EP/J018260/1. The research
of the third-named author was supported in part by the NSF grant DMS-1361677.
CHAPTER 1
Introduction and Overview
The purpose of this monograph is to present the theory of groups and semigroups acting isometrically on Gromov hyperbolic metric spaces in full detail as
we understand it, with special emphasis on the case of infinite-dimensional algebraic hyperbolic spaces X = H∞_F, where F denotes a division algebra. We have
not skipped over the parts which some would call “trivial” extensions of the finite-dimensional/proper theory, for two main reasons: first, intuition has turned out to
be wrong often enough regarding these matters that we feel it is worth writing everything down explicitly; second, we feel it is better methodologically to present the
entire theory from scratch, in order to provide a basic reference for the theory, since
no such reference exists currently (the closest, [39], has a fairly different emphasis).
Thus Part 1 of this monograph should be treated as mostly expository, while Parts
2-4 contain a range of new material. For experts who want a geodesic path to
significant theorems, we list here five such results that we prove in this monograph:
Theorems 1.2.1 and 1.4.4 provide generalizations of the Bishop–Jones theorem [28,
Theorem 1] and the Global Measure Formula [160, Theorem 2], respectively, to
Gromov hyperbolic metric spaces. Theorem 1.4.1 guarantees the existence of a
δ-quasiconformal measure for groups of divergence type, even if the space they are
acting on is not proper. Theorem 1.4.5 provides a sufficient condition for the exact
dimensionality of the Patterson–Sullivan measure of a geometrically finite group,
and Theorem 1.4.6 relates the exact dimensionality to Diophantine properties of
the measure. However, the reader should be aware that a sharp focus on just these
results, without care for their motivation or the larger context in which they are situated, will necessarily preclude access to the interesting and uncharted landscapes
that our work has begun to uncover. The remainder of this chapter provides an
overview of these landscapes.
Convention 1. The symbols ≲, ≳, and ≍ will denote coarse asymptotics;
a subscript of + indicates that the asymptotic is additive, and a subscript of ×
indicates that it is multiplicative. For example, A ≲_{×,K} B means that there exists
a constant C > 0 (the implied constant), depending only on K, such that A ≤ CB.
Moreover, A ≲_{+,×} B means that there exist constants C_1, C_2 > 0 so that A ≤
C_1 B + C_2. In general, dependence of the implied constant(s) on universal objects
such as the metric space X, the group G, and the distinguished point o ∈ X (cf.
Notation 1.1.5) will be omitted from the notation.
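Spelled out with explicit constants (our paraphrase, not the monograph's wording), these conventions amount to:

```latex
% Coarse asymptotics with explicit constants (paraphrase):
A \lesssim_{+} B \;\Longleftrightarrow\; \exists\, C > 0 :\ A \le B + C
A \lesssim_{\times} B \;\Longleftrightarrow\; \exists\, C > 0 :\ A \le C B
A \asymp_{+} B \;\Longleftrightarrow\; A \lesssim_{+} B \text{ and } B \lesssim_{+} A
A \asymp_{\times} B \;\Longleftrightarrow\; A \lesssim_{\times} B \text{ and } B \lesssim_{\times} A
A \lesssim_{+,\times} B \;\Longleftrightarrow\; \exists\, C_1, C_2 > 0 :\ A \le C_1 B + C_2
```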
Convention 2. The notation x_n →_n x means that x_n → x as n → ∞, while
the notation x_n →_{n,+} x means that

    x ≍_+ limsup_{n→∞} x_n ≍_+ liminf_{n→∞} x_n,

and similarly for x_n →_{n,×} x.
Convention 3. The symbol ⊳ is used to indicate the end of a nested proof.
Convention 4. We use the Iverson bracket notation:

    [statement] = 1 if the statement is true, and 0 if it is false.
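In computational terms the Iverson bracket is just a boolean-to-integer cast; a minimal Python sketch (the function names are ours, not the monograph's):

```python
def iverson(statement: bool) -> int:
    """Iverson bracket: 1 if the statement is true, 0 if it is false."""
    return 1 if statement else 0

def count_if(xs, pred):
    """The bracket lets sums absorb case distinctions,
    e.g. counting the elements of a sequence that satisfy a predicate."""
    return sum(iverson(pred(x)) for x in xs)
```

For instance, `count_if(range(10), lambda n: n % 2 == 0)` returns 5.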
Convention 5. Given a distinguished point o ∈ X, we write
‖x‖ = d(o, x) and ‖g‖ = ‖g(o)‖.
1.1. Preliminaries
1.1.1. Algebraic hyperbolic spaces. Although we are mostly interested in
this monograph in the real infinite-dimensional hyperbolic space H∞_R, the complex
and quaternionic hyperbolic spaces H∞_C and H∞_Q are also interesting. In finite
dimensions, these spaces constitute (modulo the Cayley hyperbolic plane1) the rank
one symmetric spaces of noncompact type. In the infinite-dimensional case we retain
this terminology by analogy; cf. Remark 2.2.6. For brevity we will refer to a rank
one symmetric space of noncompact type as an algebraic hyperbolic space.
There are several equivalent ways to define algebraic hyperbolic spaces; these
are known as “models” of hyperbolic geometry. We consider here the hyperboloid
model, ball model (Klein’s, not Poincaré’s), and upper half-space model (which
only applies to algebraic hyperbolic spaces defined over the reals, which we will call
real hyperbolic spaces), which we denote by H^α_F, B^α_F, and E^α, respectively. Here F
denotes the base field (either R, C, or Q), and α denotes a cardinal number. We
omit the base field when it is R, and denote the exponent by ∞ when it is #(N),
so that H∞ = H^{#(N)}_R is the unique separable infinite-dimensional real hyperbolic
space.
The main theorem of Chapter 2 is Theorem 2.3.3, which states that any isometry of an algebraic hyperbolic space must be an “algebraic” isometry. The finite-dimensional case is given as an exercise in Bridson–Haefliger [39, Exercise II.10.21].
1We omit all discussion of the Cayley hyperbolic plane H^2_O, as the algebra involved is too exotic
for our taste; cf. Remark 2.1.1.
We also describe the relation between totally geodesic subsets of algebraic hyperbolic spaces and fixed point sets of isometries (Theorem 2.4.7), a relation which
will be used throughout the paper.
Remark 1.1.1. Key to the study of finite-dimensional algebraic hyperbolic
spaces is the theory of quasiconformal mappings (e.g., as in Mostow and Pansu’s
rigidity theorems [133, 141]). Unfortunately, it appears to be quite difficult to
generalize this theory to infinite dimensions. For example, it is an open question
[92, p.1335] whether every quasiconformal homeomorphism of Hilbert space is also
quasisymmetric.
1.1.2. Gromov hyperbolic metric spaces. Historically, the first motivation for the theory of negatively curved metric spaces came from differential geometry and the study of negatively curved Riemannian manifolds. The idea was
to describe the most important consequences of negative curvature in terms of the
metric structure of the manifold. This approach was pioneered by Aleksandrov
[6], who discovered for each κ ∈ R an inequality regarding triangles in a metric
space with the property that a Riemannian manifold satisfies this inequality if and
only if its sectional curvature is bounded above by κ, and popularized by Gromov,
who called Aleksandrov’s inequality the “CAT(κ) inequality” as an abbreviation
for “comparison inequality of Alexandrov–Toponogov” [85, p.106].2 A metric space
is called CAT(κ) if the distance between any two points on a geodesic triangle is
smaller than the corresponding distance on the “comparison triangle” in a model
space of constant curvature κ; see Definition 3.2.1.
The second motivation came from geometric group theory, in particular the
study of groups acting on manifolds of negative curvature. For example, Dehn
proved that the word problem is solvable for finitely generated Fuchsian groups [64],
and this was generalized by Cannon to groups acting cocompactly on manifolds of
negative curvature [44]. Gromov attempted to give a geometric characterization of
these groups in terms of their Cayley graphs; he tried many definitions (cf. [83,
§6.4], [84, §4]) before converging to what is now known as Gromov hyperbolicity
in 1987 [85, 1.1, p.89], a notion which has influenced much research. A metric
space is said to be Gromov hyperbolic if it satisfies a certain inequality that we call
Gromov’s inequality; see Definition 3.3.2. A finitely generated group is then said
to be word-hyperbolic if its Cayley graph is Gromov hyperbolic.
2It appears that Bridson and Haefliger may be responsible for promulgating the idea that the C in
CAT refers to E. Cartan [39, p.159]. We were unable to find such an indication in [85], although
Cartan is referenced in connection with some theorems regarding CAT(κ) spaces (as are Riemann
and Hadamard).
The big advantage of Gromov hyperbolicity is its generality. We give some idea
of its scope by providing the following nested list of metric spaces which have been
proven to be Gromov hyperbolic:
• CAT(-1) spaces (Definition 3.2.1)
– Riemannian manifolds (both finite- and infinite-dimensional) with
sectional curvature ≤ −1
∗ Algebraic hyperbolic spaces (Definition 2.2.5)
· Picard–Manin spaces of projective surfaces defined over
algebraically closed fields [125], cf. [46, §3.1]
– R-trees (Definition 3.1.10)
∗ Simplicial trees
· Unweighted simplicial trees
• Cayley metrics (Example 3.1.2) on word-hyperbolic groups
• Green metrics on word-hyperbolic groups [29, Corollary 1.2]
• Quasihyperbolic metrics of uniform domains in Banach spaces [173, Theorem 2.12]
• Arc graphs and curve graphs [95] and arc complexes [126, 96] of finitely
punctured oriented surfaces
• Free splitting complexes [89, 96] and free factor complexes [27, 104, 96]
Remark 1.1.2. Many of the above examples admit natural isometric group
actions:
• The Cremona group acts isometrically on the Picard–Manin space [125],
cf. [46, Theorem 3.3].
• The mapping class group of a finitely punctured oriented surface acts
isometrically on its arc graph, curve graph, and arc complex.
• The outer automorphism group Out(FN ) of the free group on N generators
acts isometrically on the free splitting complex F S(FN ) and the free factor
complex F F (FN ).
Remark 1.1.3. Most of the above examples are examples of non-proper hyperbolic metric spaces. Recall that a metric space is said to be proper if its distance
function x ↦ ‖x‖ = d(o, x) is proper, or equivalently if closed balls are compact.
Though much of the existing literature on CAT(-1) and hyperbolic metric spaces
assumes that the spaces in question are proper, it is often not obvious whether this
assumption is really essential. However, since results about proper metric spaces
do not apply to infinite-dimensional algebraic hyperbolic spaces, we avoid the assumption of properness.
Remark 1.1.4. One of the above examples, namely, Green metrics on word-hyperbolic groups, is a natural class of non-geodesic hyperbolic metric spaces.3
However, Bonk and Schramm proved that all non-geodesic hyperbolic metric spaces
can be isometrically embedded into geodesic hyperbolic metric spaces [31, Theorem
4.1], and the equivariance of their construction was proven by Blachère, Haı̈ssinsky,
and Mathieu [29, Corollary A.10]. Thus, one could view the assumption of geodesicity to be harmless, since most theorems regarding geodesic hyperbolic metric spaces
can be pulled back to non-geodesic hyperbolic metric spaces. However, for the most
part we also avoid the assumption of geodesicity, mostly for methodological reasons
rather than because we are considering any particular non-geodesic hyperbolic metric space. Specifically, we felt that Gromov’s definition of hyperbolicity in metric
spaces is a “deep” definition whose consequences should be explored independently
of such considerations as geodesicity. We do make the assumption of geodesicity in Chapter 12, where it seems necessary in order to prove the main theorems.
(The assumption of geodesicity in Chapter 12 can for the most part be replaced by
the weaker assumption of almost geodesicity [31, p.271], but we felt that such a
presentation would be more technical and less intuitive.)
We now introduce a list of standing assumptions and notations. They apply to
all chapters except for Chapters 2, 3, and 5 (see also §4.1).
Notation 1.1.5. Throughout the introduction,
• X is a Gromov hyperbolic metric space (cf. Definition 3.3.2),
• d denotes the distance function of X,
• ∂X denotes the Gromov boundary of X, and bord X denotes the bordification bord X = X ∪ ∂X (cf. Definition 3.4.2),
• D denotes a visual metric on ∂X with respect to a parameter b > 1 and a distinguished point o ∈ X (cf. Proposition 3.6.8). By definition, a visual metric satisfies the asymptotic
(1.1.1)  D_{b,o}(ξ, η) ≍× b^{−⟨ξ|η⟩_o},
where ⟨·|·⟩ denotes the Gromov product (cf. (3.3.2)).
• Isom(X) denotes the isometry group of X. Also, G ≤ Isom(X) will mean that G is a subgroup of Isom(X); when G is merely a subsemigroup of Isom(X), we will say so explicitly.
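To make the Gromov product and the visual metric concrete, here is a small sketch (our own illustration, not part of the monograph; the function names are invented) in the simplest hyperbolic space, a rooted simplicial tree, where the asymptotic (1.1.1) holds with equality:

```python
# Toy model: vertices of a rooted binary tree are strings over {'0','1'},
# the basepoint o is the empty string, and d(x, y) = |x| + |y| - 2 lcp(x, y).

def lcp(x, y):
    """Length of the longest common prefix of two strings."""
    n = 0
    for a, b in zip(x, y):
        if a != b:
            break
        n += 1
    return n

def dist(x, y):
    return len(x) + len(y) - 2 * lcp(x, y)

def gromov_product(x, y, z=""):
    """<x|y>_z = (d(z,x) + d(z,y) - d(x,y)) / 2."""
    return (dist(z, x) + dist(z, y) - dist(x, y)) / 2

def visual_dist(xi, eta, b=2.0):
    """D_{b,o}(xi, eta) = b^(-<x|y>_o); in a tree this is an equality."""
    if xi == eta:
        return 0.0
    return b ** (-gromov_product(xi, eta))

# Based at the root, the Gromov product is exactly the common-prefix length...
assert gromov_product("0010", "0011") == 3
# ...so D is an ultrametric: D(xi, zeta) <= max(D(xi, eta), D(eta, zeta)).
pts = ["000", "001", "010", "111"]
for xi in pts:
    for eta in pts:
        for zeta in pts:
            assert visual_dist(xi, zeta) <= max(visual_dist(xi, eta),
                                                visual_dist(eta, zeta)) + 1e-12
```

In a general hyperbolic space one only gets the asymptotic (1.1.1) rather than equality, and D need not be an ultrametric.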
A prime example to have in mind is the special case where X is an infinite-dimensional algebraic hyperbolic space, in which case the Gromov boundary ∂X
3Quasihyperbolic metrics on uniform domains in Banach spaces can also fail to be geodesic, but
they are almost geodesic which is almost as good. See e.g. [172] for a study of almost geodesic
hyperbolic metric spaces.
1. INTRODUCTION AND OVERVIEW
can be identified with the natural boundary of X (Proposition 3.5.3), and we can
set b = e and get equality in (1.1.1) (Observation 3.6.7).
Another important example of a hyperbolic metric space that we will keep in
our minds is the case of R-trees alluded to above. R-trees are a generalization of
simplicial trees, which in turn are a generalization of unweighted simplicial trees,
also known as “Z-trees” or just “trees”. R-trees are worth studying in the context
of hyperbolic metric spaces for two reasons: first of all, they are “prototype spaces”
in the sense that any finite set in a hyperbolic metric space can be roughly isometrically embedded into an R-tree, with a roughness constant depending only on
the cardinality of the set [77, pp.33-38]; second of all, R-trees can be equivariantly
embedded into infinite-dimensional real hyperbolic space H∞ (Theorem 13.1.1),
meaning that any example of a group acting on an R-tree can be used to construct
an example of the same group acting on H∞ . R-trees are also much simpler to
understand than general hyperbolic metric spaces: for any finite set of points, one
can draw out a list of all possible diagrams, and then the set of distances must be
determined from one of these diagrams (cf. e.g., Figure 3.3.1).
Besides introducing R-trees, CAT(-1) spaces, and hyperbolic metric spaces, the
following things are done in Chapter 3: construction of the Gromov boundary
∂X and analysis of its basic topological properties (Section 3.4), proof that the
Gromov boundary of an algebraic hyperbolic space is equal to its natural boundary
(Proposition 3.5.3), and the construction of various metrics and metametrics on the
boundary of X (Section 3.6). None of this is new, although the idea of a metametric
(due to Väisälä [172, §4]) is not very well known.
In Chapter 4, we go more into detail regarding the geometry of hyperbolic
metric spaces. We prove the geometric mean value theorem for hyperbolic metric
spaces (Section 4.2), the existence of geodesic rays connecting two points in the
boundary of a CAT(-1) space (Proposition 4.4.4), and various geometrical theorems
regarding the sets
Shad_z(x, σ) := {ξ ∈ ∂X : ⟨x|ξ⟩_z ≤ σ},
which we call “shadows” due to their similarity to the famous shadows of Sullivan
[161, Fig. 2] on the boundary of Hd (Section 4.5). We remark that most proofs
of the existence of geodesics between points on the boundary of complete CAT(-1)
spaces, e.g. [39, Proposition II.9.32], assume properness and make use of it in a
crucial way, whereas we make no such assumption in Proposition 4.4.4. Finally,
in Section 4.6 we introduce “generalized polar coordinates” in a hyperbolic metric
space. These polar coordinates tell us that the action of a loxodromic isometry
(see Definition 6.1.2) on a hyperbolic metric space is roughly the same as the map x ↦ λx in the upper half-plane E2.
1.1.3. Discreteness. The first step towards extending the theory of Kleinian
groups to infinite dimensions (or more generally to hyperbolic metric spaces) is to
define the appropriate class of groups to consider. This is less trivial than might be
expected. Recalling that a d-dimensional Kleinian group is defined to be a discrete
subgroup of Isom(Hd ), we would want to define an infinite-dimensional Kleinian
group to be a discrete subgroup of Isom(H∞ ). But what does it mean for a subgroup
of Isom(H∞ ) to be discrete? In finite dimensions, the most natural definition is
to call a subgroup discrete if it is discrete relative to the natural topology on
Isom(Hd ); this definition works well since Isom(Hd ) is a Lie group. But in infinite
dimensions and especially in more exotic spaces, many applications require stronger
hypotheses (e.g., Theorem 1.2.1, Chapter 12). In Chapter 5, we discuss several
potential definitions of discreteness, which are inequivalent in general but agree in
the case of finite-dimensional space X = Hd (Proposition 5.2.10):
Definitions 5.2.1 and 5.2.6. Fix G ≤ Isom(X).
• G is called strongly discrete (SD) if for every bounded set B ⊆ X, we have
#{g ∈ G : g(B) ∩ B ≠ ∅} < ∞.
• G is called moderately discrete (MD) if for every x ∈ X, there exists an
open set U containing x such that
#{g ∈ G : g(U) ∩ U ≠ ∅} < ∞.
• G is called weakly discrete (WD) if for every x ∈ X, there exists an open
set U containing x such that
g(U) ∩ U ≠ ∅ ⇒ g(x) = x.
• G is called COT-discrete (COTD) if it is discrete as a subset of Isom(X)
when Isom(X) is given the compact-open topology (COT).
• If X is an algebraic hyperbolic space, then G is called UOT-discrete
(UOTD) if it is discrete as a subset of Isom(X) when Isom(X) is given
the uniform operator topology (UOT; cf. Section 5.1).
As our naming suggests, the condition of strong discreteness is stronger than
the condition of moderate discreteness, which is in turn stronger than the condition
of weak discreteness (Proposition 5.2.4). Moreover, any moderately discrete group
is COT-discrete, and any weakly discrete subgroup of Isom(H∞ ) is COT-discrete
(Proposition 5.2.7). These relations and more are summarized in Table 1 on p. 93.
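The flavor of the strongest condition can already be seen in a toy 0-hyperbolic example. The following sketch (our own illustration, not from the text; function names invented) tests the defining condition of strong discreteness for translation groups acting on the real line:

```python
# A translation g_t(x) = x + t of X = R moves the ball B = [-R, R] to
# [t - R, t + R], so g_t(B) intersects B iff |t| <= 2R.

import math

def overlapping_translations(ts, R):
    """Translation lengths t in ts with g_t(B) ∩ B nonempty, B = [-R, R]."""
    return [t for t in ts if abs(t) <= 2 * R]

R = 5.0

# G = Z (integer translations): only finitely many group elements move B onto
# a set meeting B, consistent with strong discreteness.
assert len(overlapping_translations(range(-100, 101), R)) == 21  # |t| <= 10

# G = Z + sqrt(2) Z is dense in R: enumerating more and more of the group
# produces ever more overlapping elements, so G is not strongly discrete
# (indeed not discrete in any of the senses above).
dense_part = [m + n * math.sqrt(2) for m in range(-50, 51) for n in range(-50, 51)]
assert len(overlapping_translations(dense_part, R)) > 21
```

Of course, X = R is degenerate; the point of Chapter 5 is that in genuinely infinite-dimensional spaces the various discreteness conditions separate.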
Out of all these definitions, strong discreteness should perhaps be thought of as
the best generalization of discreteness to infinite dimensions. Thus, we propose that
the phrase “infinite-dimensional Kleinian group” should mean “strongly discrete
subgroup of Isom(H∞ )”. However, in this monograph we will be interested in the
consequences of all the different notions of discreteness, as well as the interactions
between them.
Remark 1.1.6. Strongly discrete groups are known in the literature as metrically proper, and moderately discrete groups are known as wandering. However,
we prefer our terminology since it more clearly shows the relationship between the
different notions of discreteness.
1.1.4. The classification of semigroups. After clarifying the different types
of discreteness which can occur in infinite dimensions, we turn to the question of
classification. This question makes sense both for individual isometries and for entire semigroups.4 Historically, the study of classification began in the 1870s when
Klein proved a theorem classifying isometries of H2 and attached the words “elliptic”, “parabolic”, and “hyperbolic” to these classifications. Elliptic isometries are
those which have at least one fixed point in the interior, while parabolic isometries
have exactly one fixed point, which is a neutral fixed point on the boundary, and
hyperbolic isometries have two fixed points on the boundary, one of which is attracting and one of which is repelling. Later, the word “loxodromic” was used to
refer to isometries in H3 which have two fixed points on the boundary but which
are geometrically “screw motions” rather than simple translations. In what follows
we use the word “loxodromic” to refer to all isometries of Hn (or more generally a
hyperbolic metric space) with two fixed points on the boundary – this is analogous
to calling a circle an ellipse. Our real reason for using the word “loxodromic” in
this instance, rather than “hyperbolic”, is to avoid confusion with the many other
meanings of the word “hyperbolic” that have entered usage in various scenarios.
To extend this classification from individual isometries to groups, we call a
group “elliptic” if its orbits are bounded, “parabolic” if it has a unique neutral
global fixed point on the boundary, and “loxodromic” if it contains at least one
loxodromic isometry. The main theorem of Chapter 6 (viz. Theorem 6.2.3) is that
every subsemigroup of Isom(X) is either elliptic, parabolic, or loxodromic.
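For individual isometries of H2 represented by matrices in SL2(R), Klein's trichotomy can be read off from the trace, a standard fact we sketch here (our own illustration; the function name is invented):

```python
# A Mobius transformation z -> (az + b)/(cz + d) with ad - bc = 1 acting on
# the upper half-plane is elliptic, parabolic, or loxodromic ("hyperbolic")
# according to whether |a + d| < 2, = 2, or > 2.

def classify(a, b, c, d, tol=1e-9):
    """Classify the isometry z -> (az + b)/(cz + d), assuming ad - bc = 1."""
    assert abs(a * d - b * c - 1) < tol, "matrix must have determinant 1"
    t = abs(a + d)
    if t < 2 - tol:
        return "elliptic"
    if t > 2 + tol:
        return "loxodromic"
    return "parabolic"

assert classify(0, -1, 1, 0) == "elliptic"      # rotation z -> -1/z, fixes i
assert classify(1, 1, 0, 1) == "parabolic"      # z -> z + 1, fixes infinity
assert classify(2, 0, 0, 0.5) == "loxodromic"   # z -> 4z, fixes 0 and infinity
```

In a general hyperbolic metric space no matrix model is available, and the classification is instead phrased in terms of fixed points and displacement, as above.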
Classification of groups has appeared in the literature in various contexts, from
Eberlein and O’Neill’s results regarding visibility manifolds [69], through Gromov’s remarks about groups acting on strictly convex spaces [83, §3.5] and word-hyperbolic groups [85, §3.1], to the more general results of Hamann [88, Theorem
4In Chapters 6-10, we work in the setting of semigroups rather than groups. Like dropping the
assumption of geodesicity (cf. Remark 1.1.4), this is done partly in order to broaden our class of
examples and partly for methodological reasons – we want to show exactly where the assumption
of being closed under inverses is being used. It should be also noted that semigroups sometimes
show up naturally when one is studying groups; cf. Proposition 10.5.4(B).
2.7], Osin [140, §3], and Caprace, de Cornulier, Monod, and Tessera [48, §3.A]
regarding geodesic hyperbolic metric spaces.5 Many of these theorems have similar
statements to ours ([88] and [48] seem to be the closest), but we have not kept
track of this carefully, since our proof appears to be sufficiently different to warrant
independent interest anyway.
After proving Theorem 6.2.3, we discuss further aspects of the classification
of groups, such as the further classification of loxodromic groups given in §6.2.3:
a loxodromic group is called “lineal”, “focal”, or “of general type” according to
whether it has two, one, or zero global fixed points, respectively. (This terminology
was introduced in [48].) The “focal” case is especially interesting, as it represents a
class of nonelementary groups which have global fixed points.6 We show that certain
classes of discrete groups cannot be focal (Proposition 6.4.1), which explains why
such groups do not appear in the theory of Kleinian groups. On the other hand, we
show that in infinite dimensions, focal groups can have interesting limit sets even
though they satisfy only a weak form of discreteness; cf. Remark 13.4.3.
1.1.5. Limit sets. An important invariant of a Kleinian group G is its limit
set Λ = ΛG , the set of all accumulation points of the orbit of any point in the
interior. By putting an appropriate topology on the bordification of our hyperbolic
metric space X (§3.4.2), we can generalize this definition to an arbitrary subsemigroup of Isom(X). Many results generalize relatively straightforwardly7 to this new
context, such as the minimality of the limit set (Proposition 7.4.1) and the connection between classification and the cardinality of the limit set (Proposition 7.3.1).
In particular, we call a semigroup elementary if its limit set is finite.
In general, the convex hull of the limit set may need to be replaced by a
quasiconvex hull (cf. Definition 7.5.1), since in certain cases the convex hull does
not accurately reflect the geometry of the group. Indeed, Ancona [9, Corollary
C] and Borbely [32, Theorem 1] independently constructed examples of CAT(-1)
three-manifolds X for which there exists a point ξ ∈ ∂X such that the convex hull
of any neighborhood of ξ is equal to bord X. Although in a non-proper setting the
limit set may no longer be compact, compactness of the limit set is a reasonable
geometric condition that is satisfied for many examples of subgroups of Isom(H∞ )
5We remark that the results of [48, §3.A] can be generalized to non-geodesic hyperbolic metric
spaces by using the Bonk–Schramm embedding theorem [31, Theorem 4.1] (see also [29, Corollary
A.10]).
6Some sources (e.g. [148, §5.5]) define nonelementarity in a way such that global fixed points are
automatically ruled out, but this is not true of our definition (Definition 7.3.2).
7As is the case for many of our results, the classical proofs use compactness in a crucial way –
so here “straightforwardly” means that the statements of the theorems themselves do not require
modification.
(e.g. Examples 13.2.2, 13.4.2). We call this condition compact type (Definition
7.7.1).
1.2. The Bishop–Jones theorem and its generalization
The term Poincaré series classically referred to a variety of averaging procedures, initiated by Poincaré in his aforementioned Acta memoirs, with a view towards uniformization of Riemann surfaces via the construction of automorphic forms. Given a Fuchsian group Γ and a rational function H : Ĉ → Ĉ with no poles on ∂B², Poincaré proved that for every m ≥ 2 the series
∑_{γ∈Γ} H(γ(z)) (γ′(z))^m
(defined for z outside the limit set of Γ) converges uniformly to an automorphic
form of dimension m; see [63, p.218]. Poincaré called these series “θ-fuchsian series
of order m”, but the name “Poincaré series” was later used to refer to such objects.8
The question of for which m < 2 the Poincaré series still converges was investigated
by Schottky, Burnside, Fricke, and Ritter; cf. [2, pp.37-38].
In what would initially appear to be an unrelated development, mathematicians
began to study the “thickness” of the limit set of a Fuchsian group: in 1941 Myrberg
[135] showed that the limit set Λ of a nonelementary Fuchsian group has positive
logarithmic capacity; this was improved by Beardon [17] who showed that Λ has
positive Hausdorff dimension, thus deducing Myrberg’s result as a corollary (since
positive Hausdorff dimension implies positive logarithmic capacity for compact subsets of R2 [166]). The connection between this question and the Poincaré series was
first observed by Akaza, who showed that if G is a Schottky group for which the
Poincaré series converges in dimension s, then the Hausdorff s-dimensional measure
of Λ is zero [5, Corollary of Theorem A]. Beardon then extended Akaza’s result to
finitely generated Fuchsian groups [19, Theorem 5], as well as defining the exponent
of convergence (or Poincaré exponent ) δ = δG of a Fuchsian or Kleinian group to
be the infimum of s for which the Poincaré series converges in dimension s (cf. Definition 8.1.1 and [18]). The reverse direction was then proven by Patterson [142]
using a certain measure on Λ to produce the lower bound, which we will say more
about below in §1.4. Patterson’s results were then generalized by Sullivan [161] to
the setting of geometrically finite Kleinian groups. The necessity of the geometrical finiteness assumption was demonstrated by Patterson [143], who showed that
there exist Kleinian groups of the first kind (i.e. with limit set equal to ∂Hd ) with
8The modern definition of Poincaré series (cf. Definition 8.1.1) is phrased in terms of hyperbolic
geometry rather than complex analysis, but it agrees with the special case of Poincaré’s original
definition which occurs when H ≡ 1 and z = 0, with the caveat that γ′(z)^m should be replaced by |γ′(z)|^m.
arbitrarily small Poincaré exponent [143] (see also [100] or [157, Example 8] for
an earlier example of the same phenomenon).
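To make the exponent of convergence concrete, here is a small numerical sketch (our own, with b = e; the function name is invented): for a cyclic group generated by a loxodromic isometry of translation length 1, the orbit distances are |n|, the Poincaré series is geometric, and δ = 0.

```python
# The Poincare series in dimension s is sum_{g in G} e^(-s d(o, g(o))),
# and delta is the infimum of those s for which it converges.

import math

def partial_poincare_sum(orbit_dists, s):
    """Partial sum of the Poincare series over the given orbit distances."""
    return sum(math.exp(-s * d) for d in orbit_dists)

# Orbit of a cyclic loxodromic group with translation length 1:
# d(o, g^n(o)) = |n|.
dists_10k = [abs(n) for n in range(-10_000, 10_001)]
dists_5k = [abs(n) for n in range(-5_000, 5_001)]

# For every s > 0 the series converges (partial sums stabilize under doubling
# the truncation), so the Poincare exponent is delta = 0.
for s in (0.1, 0.5, 1.0):
    assert abs(partial_poincare_sum(dists_10k, s)
               - partial_poincare_sum(dists_5k, s)) < 1e-3
```

By contrast, for a lattice in Isom(Hd) the number of orbit points at distance ≤ R grows like e^((d−1)R), and the same computation gives δ = d − 1.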
Generalizing these theorems beyond the geometrically finite case requires the
introduction of the radial and uniformly radial limit sets. In what follows, we will
denote these sets by Λr and Λur , respectively. Note that the radial and uniformly
radial limit sets as well as the Poincaré exponent can all (with some care) be defined
for general hyperbolic metric spaces; see Definitions 7.1.2, 7.2.1, and 8.1.1. The
radial limit set was introduced by Hedlund in 1936 in his analysis of transitivity of
horocycles [90, Theorem 2.4].
After some intermediate results [72, 158], Bishop and Jones [28, Theorem
1] generalized Patterson and Sullivan by proving that if G is a nonelementary
Kleinian group, then dimH (Λr ) = dimH (Λur ) = δ.9 Further generalization was
made by Paulin [144], who proved the equation dimH (Λr ) = δ in the case where
G ≤ Isom(X), and X is either a word-hyperbolic group, a CAT(-1) manifold, or a
locally finite unweighted simplicial tree which admits a discrete cocompact action.
We may now state the first major theorem of this monograph, which generalizes all
the aforementioned results:
Theorem 1.2.1. Let G ≤ Isom(X) be a nonelementary group. Suppose either that
(1) G is strongly discrete,
(2) X is a CAT(-1) space and G is moderately discrete,
(3) X is an algebraic hyperbolic space and G is weakly discrete, or that
(4) X is an algebraic hyperbolic space and G acts irreducibly (cf. Section 7.6) and is COT-discrete.
Then there exists σ > 0 such that
(1.2.1)  dimH(Λr) = dimH(Λur) = dimH(Λur ∩ Λr,σ) = δ
(cf. Definitions 7.1.2 and 7.2.1 for the definition of Λr,σ); moreover, for every 0 < s < δ there exist τ > 0 and an Ahlfors s-regular10 set Js ⊆ Λur,τ ∩ Λr,σ.
For the proof of Theorem 1.2.1, see the comments below Theorem 1.2.3.
Remark. We note that weaker versions of Theorem 1.2.1 already appeared in
[58] and [73], each of which has a two-author intersection with the present paper.
In particular, case (1) of Theorem 1.2.1 appeared in [73] and the proofs of Theorem
1.2.1 and [73, Theorem 5.9] contain a number of redundancies. This was due to the
9Although Bishop and Jones’ theorem only states that dimH(Λr) = δ, they remark that their proof actually shows that dimH(Λur) = δ [28, p.4].
10Recall that a measure µ on a metric space Z is called Ahlfors s-regular if for all z ∈ Z and 0 < r ≤ 1, we have µ(B(z, r)) ≍× r^s. The topological support of an Ahlfors s-regular measure is called an Ahlfors s-regular set.
fact that we worked on two projects which, despite having fundamentally different
objectives, both required essentially the same argument to produce “large, nice”
subsets of the limit set: in the present monograph, this argument forms the core of
the proof of our generalization of the Bishop–Jones theorem, while in [73], the main
use of the argument is in proving the full dimension of the set of badly approximable
points, in two different senses of the phrase “badly approximable” (approximation
by the orbits of distinguished points, vs. approximation by rational vectors in an
ambient Euclidean space). There are also similarities between the proof of Theorem
1.2.1 and the proof of the weaker version found in [58, Theorem 8.13], although
in this case the presentation is significantly different. However, we remark that
the main Bishop–Jones theorem of this monograph, Theorem 1.2.3, is significantly
more powerful than both [73, Theorem 5.9] and [58, Theorem 8.13].
Remark. The “moreover” clause is new even in the case which Bishop and
Jones considered, demonstrating that the limit set Λur can be approximated by
subsets which are particularly well distributed from a geometric point of view. It
does not follow from their theorem since a set could have large Hausdorff dimension
without having any closed Ahlfors regular subsets of positive dimension (much less
full dimension); in fact it follows from the work of Kleinbock and Weiss [116] that
the set of well approximable numbers forms such a set.11 In [73], a slight strengthening of this clause was used to deduce the full dimension of badly approximable
vectors in the radial limit set of a Kleinian group [73, Theorem 9.3].
Remark. It is possible for a group satisfying one of the hypotheses of Theorem
1.2.1 to also satisfy δ = ∞ (Examples 13.2.1-13.3.3 and 13.5.1-13.5.2);12 note that
Theorem 1.2.1 still holds in this case.
Remark. A natural question is whether (1.2.2) can be improved by showing
that there exists some σ > 0 for which dimH (Λur,σ ) = δ (cf. Definitions 7.1.2 and
7.2.1 for the definition of Λur,σ ). The answer is negative. For a counterexample,
take X = H2 and G = SL2 (Z) ≤ Isom(X); then for all σ > 0 there exists ε > 0
such that Λur,σ ⊆ BA(ε), where BA(ε) denotes the set of all real numbers with
Lagrange constant at most 1/ε. (This follows e.g. from making the correspondence
in [73, Observation 1.15 and Proposition 1.21] explicit.) It is well-known (see e.g.
[118] for a more precise result) that dimH (BA(ε)) < 1 for all ε > 0, demonstrating
that dimH (Λur,σ ) < 1 = δ.
11It could be objected that this set is not closed and therefore should not constitute a counterexample. However, since it has full measure, it has closed subsets of arbitrarily large measure (which
in particular still have dimension 1).
12For the parabolic examples, take a Schottky product (Definition 10.2.1) with a lineal group
(Definition 6.2.13) to get a nonelementary group, as suggested at the beginning of Chapter 13.
Remark. Although Theorem 1.2.1 computes the Hausdorff dimension of the
radial and uniformly radial limit sets, there are many other subsets of the limit set
whose Hausdorff dimension it does not compute, such as the horospherical limit set
(cf. Definitions 7.1.3 and 7.2.1) and the “linear escape” sets (Λα )α∈(0,1) [122]. We
plan on discussing these issues at length in [57].
Finally, let us also remark that the hypotheses (1) - (4) cannot be weakened in
any of the obvious ways:
Proposition 1.2.2. We may have dimH (Λr ) < δ even if:
(1) G is moderately discrete (even properly discontinuous) (Example 13.4.4).
(2) X is a proper CAT(-1) space and G is weakly discrete (Example 13.4.1).
(3) X = H∞ and G is COT-discrete (Example 13.4.9).
(4) X = H∞ and G is irreducible and UOT-discrete (Example 13.4.2).
(5) X = H2 (Example 13.4.5).
In each case the counterexample group G is of general type (see Definition 6.2.13)
and in particular is nonelementary.
1.2.1. The modified Poincaré exponent. The examples of Proposition
1.2.2 illustrate that the Poincaré exponent does not always accurately calculate
the Hausdorff dimension of the radial and uniformly radial limit sets. In Chapter 8
we introduce a modified version of the Poincaré exponent which succeeds at accurately calculating dimH (Λr ) and dimH (Λur ) for all nonelementary groups G. (When
G is an elementary group, dimH (Λr ) = dimH (Λur ) = 0, so there is no need for a
sophisticated calculation in this case.) Some motivation for the following definition
is given in §8.2.
Definition 8.2.3. Let G be a subsemigroup of Isom(X).
• For each set S ⊆ X and s ≥ 0, let
Σ_s(S) = ∑_{x∈S} b^{−s‖x‖},
∆(S) = {s ≥ 0 : Σ_s(S) = ∞},
δ(S) = sup ∆(S).
• The modified Poincaré set of G is the set
(8.2.2)  ∆̃_G = ⋂_{ρ>0} ⋂_{S_ρ} ∆(S_ρ),
where the second intersection is taken over all maximal ρ-separated sets S_ρ ⊆ G(o).
• The number δ̃_G = sup ∆̃_G is called the modified Poincaré exponent of G. If δ̃_G ∈ ∆̃_G, we say that G is of generalized divergence type,13 while if δ̃_G ∈ [0, ∞) \ ∆̃_G, we say that G is of generalized convergence type. Note that if δ̃_G = ∞, then G is neither of generalized convergence type nor of generalized divergence type.
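The role of the ρ-separated sets can be illustrated with a toy computation (ours, not the monograph's; names invented): if the orbit has roughly e^n points clustered at distance n from o, the raw series diverges for all s < 1, yet a maximal 1-separated subset keeps about one point per cluster and its series converges for every s > 0.

```python
# Toy sketch of Definition 8.2.3 with b = e.  We sum multiplicity * e^(-s n)
# directly rather than enumerating the (huge) clusters point by point.

import math

def sigma_weighted(mult, s, N):
    """Partial sum of Sigma_s(S) for ||x|| = n <= N, the point at distance n
    occurring mult(n) times."""
    return sum(mult(n) * math.exp(-s * n) for n in range(1, N + 1))

clustered = lambda n: math.exp(n)   # ~ e^n points at distance n (not separated)
separated = lambda n: 1.0           # a maximal 1-separated subset: one each

# For the clustered orbit, Sigma_s ~ sum_n e^((1-s) n): divergent for s < 1
# and convergent for s > 1, so delta(S) = 1 ...
assert sigma_weighted(clustered, 0.9, 200) > 10 * sigma_weighted(clustered, 0.9, 100)
assert abs(sigma_weighted(clustered, 1.1, 200) - sigma_weighted(clustered, 1.1, 100)) < 1e-3
# ... but after passing to a 1-separated subset the series converges for all
# s > 0, so delta(S_rho) = 0: the modified exponent sees 0, not 1.
assert abs(sigma_weighted(separated, 0.5, 200) - sigma_weighted(separated, 0.5, 100)) < 1e-3
```

This is exactly the sort of non-discreteness that makes the raw Poincaré exponent overshoot dimH(Λr) in the examples of Proposition 1.2.2.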
We may now state the most powerful version of our Bishop–Jones theorem:
Theorem 1.2.3 (Proven in Chapter 9). Let G be a nonelementary subsemigroup of Isom(X). There exists σ > 0 such that
(1.2.2)  dimH(Λr) = dimH(Λur) = dimH(Λur ∩ Λr,σ) = δ̃.
Moreover, for every 0 < s < δ̃ there exist τ > 0 and an Ahlfors s-regular set Js ⊆ Λur,τ ∩ Λr,σ.
Theorem 1.2.1 can be deduced as a corollary of Theorem 1.2.3; specifically, Propositions 8.2.4(ii) and 9.3.1 show that any group satisfying the hypotheses of Theorem 1.2.1 satisfies δ = δ̃, and hence for such a group (1.2.2) implies (1.2.1).
On the other hand, Proposition 1.2.2 shows that Theorem 1.2.3 applies in many
cases where Theorem 1.2.1 does not.
We call a group Poincaré regular if its Poincaré exponent δ and modified Poincaré exponent δ̃ are equal. In this language, Proposition 9.3.1/Theorem 1.2.1
describes sufficient conditions for a group to be Poincaré regular, and Proposition
1.2.2 provides a list of examples of groups which are Poincaré irregular.
Though Theorem 1.2.3 requires G to be nonelementary, the following corollary
does not:
Corollary 1.2.4. Fix a subsemigroup G of Isom(X). Then for some σ > 0,
(1.2.3)  dimH(Λr) = dimH(Λur) = dimH(Λur ∩ Λr,σ).
Proof. If G is nonelementary, then (1.2.3) follows from (1.2.2). On the other
hand, if G is elementary, then all three terms of (1.2.3) are equal to zero.
1.3. Examples
A theory of groups acting on infinite-dimensional space would not be complete
without some good ways to construct examples. Techniques used in the finite-dimensional setting, such as arithmetic construction of lattices and Dehn surgery,
do not work in infinite dimensions. (The impossibility of constructing lattices in
13We use the adjective “generalized” rather than “modified” because all groups of convergence/divergence type are also of generalized convergence/divergence type; see Corollary 8.2.8
below.
Isom(H∞ ) as a direct limit of arithmetic lattices in Isom(Hd ) is due to known lower
bounds on the covolumes of such lattices which blow up as the dimension goes
to infinity; see Proposition 12.2.3 below.) Nevertheless, there is a wide variety of
groups acting on H∞ , including many examples of actions which have no analogue
in finite dimensions.
1.3.1. Schottky products. The most basic tool for constructing groups or
semigroups on hyperbolic metric spaces is the theory of Schottky products. This
theory was created by Schottky in 1877 when he considered the Fuchsian group
generated by a finite collection of loxodromic isometries gi described by a disjoint
collection of balls Bi+ and Bi− with the property that gi (H2 \ Bi− ) = Bi+ . It was
extended further in 1883 by Klein’s Ping-Pong Lemma, and used effectively by
Patterson [143] to construct a “pathological” example of a Kleinian group of the
first kind with arbitrarily small Poincaré exponent.
We consider here a quite general formulation of Schottky products: a collection
of subsemigroups of Isom(X) is said to be in Schottky position if open sets can be
found satisfying the hypotheses of the Ping-Pong lemma whose closure is not equal
to X (cf. Definition 10.2.1). This condition is sufficient to guarantee that the
product of groups in Schottky position (called a Schottky product) is always COT-discrete, but stronger hypotheses are necessary in order to prove stronger forms of
discreteness. There is a tension here between hypotheses which are strong enough to
prove useful theorems and hypotheses which are weak enough to admit interesting
examples. For the purposes of this monograph we make a fairly strong assumption
(the strong separation condition, Definition 10.3.1), one which rules out infinitely
generated Schottky groups whose generating regions have an accumulation point
(for example, infinitely generated Schottky subgroups of Isom(Hd )). However, we
plan on considering weaker hypotheses in future work [57].
One theorem of significance in Chapter 10 is Theorem 10.4.7, which relates
the limit set of a Schottky product to the limit set of its factors together with the
image of a Cantor set ∂Γ under a certain symbolic coding map π : ∂Γ → ∂X.
As a consequence, we deduce that the properties of compact type and geometrical finiteness are both preserved under finite strongly separated Schottky products
(Corollary 10.4.8 and Proposition 12.4.19, respectively). A result analogous to Theorem 10.4.7 in the setting of infinite alphabet conformal iterated function systems
can be found in [128, Lemma 2.1].
In §10.5, we discuss some (relatively) explicit constructions of Schottky groups,
showing that Schottky products are fairly ubiquitous: for example, any two groups
which act properly discontinuously at some point of ∂X may be rearranged to be in
Schottky position, assuming that X is sufficiently symmetric (Proposition 10.5.1).
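The mechanism behind the Ping-Pong Lemma can be seen in the classical Sanov example in SL2(Z) (our illustration, not the monograph's): the two parabolic matrices below play ping-pong with the regions |x| > |y| and |x| < |y| in R², hence generate a free group, so no nonempty reduced word in them is the identity. We verify this for all short reduced words:

```python
# Sanov subgroup: a = [[1,2],[0,1]] and b = [[1,0],[2,1]] generate a free
# subgroup of SL_2(Z) by ping-pong.  We check freeness up to word length 6.

from itertools import product

def mat_mul(A, B):
    return [[A[0][0]*B[0][0] + A[0][1]*B[1][0], A[0][0]*B[0][1] + A[0][1]*B[1][1]],
            [A[1][0]*B[0][0] + A[1][1]*B[1][0], A[1][0]*B[0][1] + A[1][1]*B[1][1]]]

I = [[1, 0], [0, 1]]
a, a_inv = [[1, 2], [0, 1]], [[1, -2], [0, 1]]
b, b_inv = [[1, 0], [2, 1]], [[1, 0], [-2, 1]]
gens = {0: a, 1: a_inv, 2: b, 3: b_inv}
inverse_of = {0: 1, 1: 0, 2: 3, 3: 2}

for length in range(1, 7):
    for word in product(range(4), repeat=length):
        # skip non-reduced words, which cancel trivially
        if any(word[i + 1] == inverse_of[word[i]] for i in range(length - 1)):
            continue
        M = I
        for g in word:
            M = mat_mul(M, gens[g])
        assert M != I, "a nonempty reduced word gave the identity"
```

A finite check is of course not a proof; the proof is precisely the ping-pong argument, applied here with the two half-cones as the disjoint regions.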
1.3.2. Parabolic groups. A major point of departure where the theory of
subgroups of Isom(H∞ ) becomes significantly different from the finite-dimensional
theory is in the study of parabolic groups. As a first example, knowing that a group
admits a discrete parabolic action on X places strong restrictions on the algebraic properties of the group if X = Hd_F, but not if X = H∞_F. Concretely, discrete parabolic subgroups of Isom(Hd_F) are always virtually nilpotent (virtually abelian
if F = R), but any group with the Haagerup property admits a parabolic strongly
discrete action on H∞ (indeed, this is a reformulation of one of the equivalent definitions of the Haagerup property; cf. [50, p.1, (4)]). Examples of groups with
the Haagerup property include all amenable groups and free groups. Moreover,
strongly discrete parabolic subgroups of Isom(H∞ ) need not be finitely generated;
cf. Example 11.2.20.
Moving to infinite dimensions changes not only the algebraic but also the geometric properties of parabolic groups. For example, the cyclic group generated
by a parabolic isometry may fail to be discrete in any reasonable sense (Example
11.1.12), or it may be discrete in some senses but not others (Example 11.1.14).
The Poincaré exponent of a parabolic subgroup of Isom(Hd_F) is always a half-integer
[54, Proof of Lemma 3.5], but the situation is much more complicated in infinite
dimensions. We prove a general lower bound on the Poincaré exponent of a parabolic subgroup of Isom(X) for any hyperbolic metric space X, depending only on
the algebraic structure of the group (Theorem 11.2.6); in particular, the Poincaré
exponent of a parabolic action of Zk on a hyperbolic metric space is always at least
k/2. Of course, it is well-known that all parabolic actions of Zk on Hd achieve
equality. By contrast, we show that for every δ > k/2 there exists a parabolic
action of Zk on H∞ whose Poincaré exponent is equal to δ (Theorem 11.2.11).
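A quick sanity check (our own, with an invented function name) of the exponent k/2 in the simplest case k = 1: in the upper half-plane model of H², the parabolic translation z ↦ z + 1 satisfies d(o, gⁿ(o)) = 2 log|n| + O(1), so the Poincaré series behaves like ∑ n^(−2s) and δ = 1/2.

```python
# Orbit growth of a parabolic isometry of H^2.  With o = i, the orbit point
# g^n(o) = n + i lies at hyperbolic distance acosh(1 + n^2 / 2) from o,
# which is 2 log n + O(1); hence the Poincare series is comparable to
# sum_n e^(-2 s log n) = sum_n n^(-2s), divergent exactly for s <= 1/2.

import math

def hyperbolic_dist_upper_half_plane(z, w):
    """Distance in the upper half-plane model; points are (x, y) with y > 0."""
    (x1, y1), (x2, y2) = z, w
    num = (x2 - x1) ** 2 + (y2 - y1) ** 2
    return math.acosh(1 + num / (2 * y1 * y2))

o = (0.0, 1.0)
# Orbit of o under the parabolic translation z -> z + 1:
for n in (10, 100, 1000):
    d = hyperbolic_dist_upper_half_plane(o, (float(n), 1.0))
    assert abs(d - 2 * math.log(n)) < 1.0   # d(o, g^n(o)) = 2 log n + O(1)
```

The logarithmic orbit growth is what makes parabolic exponents so rigid in finite dimensions; Theorem 11.2.11 shows that in H∞ this rigidity disappears above the threshold k/2.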
1.3.3. Geometrically finite and convex-cobounded groups. It has been
known for a long time that every finitely generated Fuchsian group has a finite-sided
convex fundamental domain (e.g. [108, Theorem 4.6.1]). This result does not generalize beyond two dimensions (e.g. [25, 102]), but subgroups of Isom(H3 ) with
finite-sided fundamental domains came to be known as geometrically finite groups.
Several equivalent definitions of geometrical finiteness in the three-dimensional setting became known, for example Beardon and Maskit’s condition that the limit set
is the union of the radial limit set Λr with the set Λbp of bounded parabolic points
[21], but the situation in higher dimensions was somewhat murky until Bowditch
[34] wrote a paper which described which equivalences remain true in higher dimensions, and which do not. The condition of a finite-sided convex fundamental
domain is no longer equivalent to any other conditions in higher dimensions (e.g.
[12]), so a higher-dimensional Kleinian group is said to be geometrically finite if it
satisfies any of Bowditch’s five equivalent conditions (GF1)-(GF5).
In infinite dimensions, conditions (GF3)-(GF5) are no longer useful (cf. Remark
12.4.6), but appropriate generalizations of conditions (GF1) (convex core is equal
to a compact set minus a finite number of cusp regions) and (GF2) (the Beardon–
Maskit formula Λ = Λr ∪Λbp ) are still equivalent for groups of compact type. In fact,
(GF1) is equivalent to (GF2) + compact type (Theorem 12.4.5). We define a group
to be geometrically finite if it satisfies the appropriate analogue of (GF1) (Definition
12.4.1). A large class of examples of geometrically finite subgroups of Isom(H∞ )
is furnished by combining the techniques of Chapters 10 and 11; specifically, the
strongly separated Schottky product of any finite collection of parabolic groups
and/or cyclic loxodromic groups is geometrically finite (Corollary 12.4.20).
It remains to answer the question of what can be proven about geometrically
finite groups. This is a quite broad question, and in this monograph we content
ourselves with proving two theorems. The first theorem, Theorem 12.4.14, is a
generalization of the Milnor–Schwarz lemma [39, Proposition I.8.19] (see also Theorem 12.2.12), and describes both the algebra and geometry of a geometrically finite
group G: firstly, G is generated by a finite subset F ⊆ G together with a finite
collection of parabolic subgroups Gξ (which are not necessarily finitely generated,
e.g. Example 11.2.20), and secondly, the orbit map g 7→ g(o) is a quasi-isometric
embedding from (G, dG ) into X, where dG is a certain weighted Cayley metric (cf.
Example 3.1.2 and (12.4.6)) on G whose generating set is F ∪ ⋃_ξ Gξ . As a consequence (Corollary 12.4.17), we see that if the groups Gξ , ξ ∈ Λbp , are all finitely
generated, then G is finitely generated, and if these groups have finite Poincaré
exponent, then G has finite Poincaré exponent.
1.3.4. Counterexamples. A significant class of subgroups of Isom(H∞ ) that
has no finite-dimensional analogue is provided by the Burger–Iozzi–Monod (BIM)
representation theorem [40, Theorem 1.1], which states that any unweighted simplicial tree can be equivariantly and quasi-isometrically embedded into an infinite-dimensional real hyperbolic space, with a precise relation between distances in the
domain and distances in the range. We call the embeddings provided by their
theorem BIM embeddings, and the corresponding homomorphisms provided by the
equivariance we call BIM representations. We generalize the BIM embedding theorem to the case where X is a separable R-tree rather than an unweighted simplicial
tree (Theorem 13.1.1).
If we have an example of an R-tree X and a subgroup Γ ≤ Isom(X) with a
certain property, then the image of Γ under a BIM representation generally has
the same property (Remark 13.1.4). Thus, the BIM embedding theorem allows
us to translate counterexamples in R-trees into counterexamples in H∞ . For example, if Γ is the free group on two elements acting on its Cayley graph, then
the image of Γ under a BIM representation provides a counterexample both to
an infinite-dimensional analogue of Margulis’s lemma (cf. Example 13.1.5) and to
an infinite-dimensional analogue of I. Kim’s theorem regarding length spectra of
finite-dimensional algebraic hyperbolic spaces (cf. Remark 13.1.6).
Most of the other examples in Chapter 13 are concerned with our various
notions of discreteness (cf. §1.1.3 above), the notion of Poincaré regularity (i.e.
whether or not δ = δ̃), and the relations between them. Specifically, we show that
the only relations are the relations which were proven in Chapter 5 and Proposition 9.3.1, as summarized in Table 1, p.93. Perhaps the most interesting of the
counterexamples we give is Example 13.4.2, which is the image under a BIM representation of (a countable dense subgroup of) the automorphism group Γ of the
4-regular unweighted simplicial tree. This example is notable because discreteness
properties are not preserved under taking the BIM representation: specifically, Γ
is weakly discrete but its image under the BIM representation is not. It is also
interesting to try to visualize this image geometrically (cf. Figure 13.4.1).
1.3.5. R-trees and their isometry groups. Motivated by the BIM representation theorem, we discuss some ways of constructing R-trees which admit
natural isometric actions. Our first method is the cone construction, in which one
starts with an ultrametric space (Z, D) and builds an R-tree X as a “cone” over
Z. This construction first appeared in a paper of F. Choucroun [52], although it is
similar to several other known cone constructions: [85, 1.8.A.(b)], [168], [31, §7].
R-trees constructed by the cone method tend to admit natural parabolic actions,
and in Theorem 14.1.5 we provide a necessary and sufficient condition for a function
to be the orbital counting function of some parabolic group acting on an R-tree.
Our second method is to staple R-trees together to form a new R-tree. We give
sufficient conditions on a graph (V, E), a collection of R-trees (Xv )v∈V , and a collection of sets A(v, w) ⊆ Xv and bijections ψv,w : A(v, w) → A(w, v) ((v, w) ∈ E)
such that stapling the trees (Xv )v∈V along the isometries (ψv,w )(v,w)∈E yields an
R-tree (Theorem 14.4.4). In §14.5, we give three examples of the stapling construction, including looking at the cone construction as a special case of the stapling
construction. The stapling construction is somewhat similar to a construction of
G. Levitt [120].
1.4. Patterson–Sullivan theory
The connection between the Poincaré exponent δ of a Kleinian group and the
geometry of its limit set is not limited to Hausdorff dimension considerations such
as those in the Bishop–Jones theorem. As we mentioned before, Patterson and
Sullivan’s proofs of the equality dimH (Λ) = δ for geometrically finite groups rely on
the construction of a certain measure on Λ, the Patterson–Sullivan measure, whose
Hausdorff dimension is also equal to δ. In addition to connecting the Poincaré
exponent and Hausdorff dimension, the Patterson–Sullivan measure also relates to
the spectral theory of the Laplacian (e.g. [142, Theorem 3.1], [161, Proposition
28]) and the geodesic flow on the quotient manifold [103]. An important property
of Patterson–Sullivan measures is conformality. Given s > 0, a measure µ on ∂Bd
is said to be s-conformal with respect to a discrete group G ≤ Isom(Bd ) if
(1.4.1)  µ(g(A)) = ∫_A |g′(ξ)|^s dµ(ξ)  ∀g ∈ G ∀A ⊆ ∂Bd.
The Patterson–Sullivan theorem on the existence of conformal measures may now
be stated as follows: For every Kleinian group G, there exists a δ-conformal measure
on Λ, where δ is the Poincaré exponent of G and Λ is the limit set of G.
When dealing with “coarse” spaces such as arbitrary hyperbolic metric spaces,
it is unreasonable to expect equality in (1.4.1). Thus, a measure µ on ∂X is said
to be s-quasiconformal with respect to a group G ≤ Isom(X) if
µ(g(A)) ≍× ∫_A g′(ξ)^s dµ(ξ)  ∀g ∈ G ∀A ⊆ ∂X.
Here g ′ (ξ) denotes the upper metric derivative of g at ξ; cf. §4.2.2. We remark that
if X is a CAT(-1) space and G is countable, then every quasiconformal measure is
coarsely asymptotic to a conformal measure (Proposition 15.2.1).
In Chapter 15, we describe the theory of conformal and quasiconformal measures in hyperbolic metric spaces. The main theorem is the existence of δ̃-conformal
measures for groups of compact type (Theorem 15.4.6). An important special case
of this theorem has been proven by Coornaert [53, Théorème 5.4] (see also [41,
§1], [152, Lemme 2.1.1]): the case where X is proper and geodesic and G satisfies
δ < ∞. The main improvement from Coornaert’s theorem to ours is the ability
to construct quasiconformal measures for Poincaré irregular (δ̃ < δ = ∞) groups;
this improvement requires an argument using the class of uniformly continuous
functions on bord X.
The big assumption of Theorem 15.4.6 is the assumption of compact type.
All proofs of the Patterson–Sullivan theorem seem to involve taking a weak-*
limit of a sequence of measures in X and then proving that the limit measure
is (quasi)conformal, but how can we take a weak-* limit if the limit set is not
compact? In fact, Theorem 15.4.6 becomes false if you remove the assumption of
compact type. In Proposition 16.6.1, we construct a group acting on an R-tree and
satisfying δ < ∞ which admits no δ-conformal measure on its limit set, and then
use the BIM embedding theorem (Theorem 13.1.1) to get an example in H∞ .
Surprisingly, it turns out that if we replace the hypothesis of compact type with
the hypothesis of divergence type, then the theorem becomes true again. Specifically,
we have the following:
Theorem 1.4.1 (Proven in Chapter 16). Let G ≤ Isom(X) be a nonelementary
group of generalized divergence type (see Definition 8.2.3). Then there exists a
δ̃-quasiconformal measure µ for G supported on Λ, where δ̃ is the modified Poincaré
exponent of G. It is unique up to a multiplicative constant in the sense that if
µ1 , µ2 are two such measures then µ1 ≍× µ2 (cf. Remark 15.1.2). In addition, µ
is ergodic and gives full measure to the radial limit set of G.
To motivate Theorem 1.4.1, we recall the connection between the divergence
type condition and Patterson–Sullivan theory in finite dimensions. Although the
Patterson–Sullivan theorem guarantees the existence of a δ-conformal measure, it
does not guarantee its uniqueness. Indeed, the δ-conformal measure is often not
unique; see e.g. [10]. However, it turns out that the hypothesis of divergence type
is enough to guarantee uniqueness. In fact, the condition of divergence type turns
out to be quite important in the theory of conformal measures:
Theorem 1.4.2 (Hopf–Tsuji–Sullivan theorem, [138, Theorem 8.3.5]). Fix d ≥
2, let G ≤ Isom(Hd ) be a discrete group, and let δ be the Poincaré exponent of G.
Then for any δ-conformal measure µ ∈ M(Λ), the following are equivalent:
(A) G is of divergence type.
(B) µ gives full measure to the radial limit set Λr (G).
(C) G acts ergodically on (Λ, µ) × (Λ, µ).
In particular, if G is of divergence type, then every δ-conformal measure is ergodic,
so there is exactly one (ergodic) δ-conformal probability measure.
We remark that the sentence “In particular . . . ” in the theorem above was not
included in [138, Theorem 8.3.5], but it is well known and follows easily from
the equivalence of (A) and (C).
Remark 1.4.3. Theorem 1.4.2 has a long history. The equivalence (B) ⇔ (C)
was first proven by E. Hopf in the case δ = d − 1¹⁴ [99, 100] (1936, 1939). The
equivalence (A) ⇔ (B) was proven by Z. Yûjôbô in the case δ = d − 1 = 1 [176]
(1949), following an incorrect proof by M. Tsuji [169] (1944).15 Sullivan proved (A)
⇔ (C) in the case δ = d − 1 [163, Theorem II], then generalized this equivalence
14In this paragraph, when we say that someone proves the case δ = d − 1, we mean that they
considered the case where µ is Hausdorff (d − 1)-dimensional measure on S d−1 .
15See [163, p.484] for some further historical remarks on the case δ = d − 1 = 1.
to the case δ > (d − 1)/2 [161, Theorem 32]. He also proved (B) ⇔ (C) in full
generality [161, Theorem 21]. Next, W. P. Thurston gave a simpler proof of (A)
⇒ (B)¹⁶ in the case δ = d − 1 [4, Theorem 4 of Section VII]. P. J. Nicholls finished
the proof by showing (A) ⇔ (B) in full generality [138, Theorems 8.2.2 and 8.2.3].
Later S. Hong re-proved (A) ⇒ (B) in full generality twice in two independent
papers [97, 98], apparently unaware of any previous results. Another proof of (A)
⇒ (B) in full generality, which was conceptually similar to Thurston’s proof, was
given by P. Tukia [171, Theorem 3A]. Further generalization was made by C. Yue
[175] to negatively curved manifolds, and by T. Roblin [151, Théorème 1.7] to
proper CAT(-1) spaces.
Having stated the Hopf–Tsuji–Sullivan theorem, we can now describe why Theorem 1.4.1 is true, first on an intuitive level and then giving a sketch of the real
proof. On an intuitive level, the fact that divergence type implies both “existence
and uniqueness” of the δ-conformal measure in finite dimensions indicates that perhaps the compactness assumption is not needed – the sequence of measures used
to construct the Patterson–Sullivan measure converges already, so it should not be
necessary to use compactness to take a convergent subsequence.
The real proof involves taking the Samuel–Smirnov compactification of bord X,
considered as a metric space with respect to a visual metric (cf. §3.6.3). The
Samuel–Smirnov compactification of a metric space (cf. [136, §7]) is conceptually
similar to the more familiar Stone–Čech compactification, except that only uniformly continuous functions on the metric space extend to continuous functions
on the compactification, not all continuous functions. If we used the Stone–Čech
compactification rather than the Samuel–Smirnov compactification, then our proof
would only apply to groups with finite Poincaré exponent; cf. Remark 16.1.3 and
Remark 16.3.5.
Sketch of the proof of Theorem 1.4.1. We denote the Samuel–Smirnov
compactification of bord X by X̂. By a nonstandard analogue of Theorem 15.4.6
(viz. Lemma 16.3.4), there exists a δ̃-quasiconformal measure µ̂ on ∂X̂. By a
generalization of Theorem 1.4.2 (viz. Proposition 16.4.1), µ̂ gives full measure to
the radial limit set Λ̂r . But a simple computation (Lemma 16.2.5) shows that
Λ̂r = Λr , demonstrating that µ̂ ∈ M(Λ).
1.4.1. Quasiconformal measures of geometrically finite groups. Let
us consider a geometrically finite group G ≤ Isom(X) with Poincaré exponent
δ < ∞, and let µ be a δ-quasiconformal measure on Λ. Such a measure exists
since geometrically finite groups are of compact type (Theorem 12.4.5 and Theorem
16By this point, it was considered obvious that (B) ⇒ (A).
15.4.6), and is unique as long as G is of divergence type (Corollary 16.4.6). When
X = Hd , the geometry of µ is described by the Global Measure Formula [165,
Theorem on p.271], [160, Theorem 2]: the measure of a ball B(η, e−t ) is coarsely
asymptotic to e−δt times a factor depending on the location of the point ηt := [o, η]t
in the quotient manifold Hd /G. Here [o, η]t is the unique point on the geodesic
connecting o and η with distance t from o; cf. Notations 3.1.6, 4.4.3.
In a general hyperbolic metric space X (indeed, already for X = H∞ ), one
cannot get a precise asymptotic for µ(B(η, e−t )), due to the fact that the measure
µ may fail to be doubling (Example 17.4.12). Instead, our version of the global
measure formula gives both an upper bound and a lower bound for µ(B(η, e−t )).
Specifically, we define a function m : Λ × [0, ∞) → (0, ∞) (for details see (17.2.1))
and then show:
Theorem 1.4.4 (Global measure formula, Theorem 17.2.2; proven in Section
17.3). For all η ∈ Λ and t > 0,
(1.4.2)  m(η, t + σ) ≲× µ(B(η, e−t )) ≲× m(η, t − σ),
where σ > 0 is independent of η and t.
It is natural to ask for which groups (1.4.2) can be improved to an exact asymptotic, i.e. for which groups µ is doubling. We address this question in Section
17.4, proving a general result (Proposition 17.4.8), a special case of which is that if
X is a finite-dimensional algebraic hyperbolic space, then µ is doubling (Example
17.4.11). Nevertheless, there are large classes of examples of groups G ≤ Isom(H∞ )
for which µ is not doubling (Example 17.4.12), illustrating once more the wide
difference between H∞ and its finite-dimensional counterparts.
It is also natural to ask about the implications of the Global Measure Formula
for the dimension theory of the measure µ. For example, when X = Hd , the Global
Measure Formula was used to show that dimH (µ) = δ [160, Proposition 4.10]. In
our case we have:
Theorem 1.4.5 (Cf. Theorem 17.5.9). If for all p ∈ P , the series
(1.4.3)  Σ_{h∈Gp} e^{−δ‖h‖} ‖h‖
converges, then µ is exact dimensional (cf. Definition 17.5.2) of dimension δ. In
particular,
dimH (µ) = dimP (µ) = δ .
The hypothesis that (1.4.3) converges is a very non-restrictive hypothesis. For
example, it is satisfied whenever δ > δp for all p ∈ P (Corollary 17.5.10). Combining
with Proposition 10.3.10 shows that any counterexample must satisfy
Σ_{h∈Gp} e^{−δ‖h‖} < ∞ = Σ_{h∈Gp} e^{−δ‖h‖} ‖h‖
for some p ∈ P , creating a very narrow window for the orbital counting function
Np (cf. Notation 17.2.1) to lie in. Nevertheless, we show that there exist counterexamples (Example 17.5.14) for which the series (1.4.3) diverges. After making
some simplifying assumptions, we are able to prove (Theorem 17.5.13) that the
Patterson–Sullivan measures of groups for which (1.4.3) diverges cannot be exact
dimensional, and in fact satisfy dimH (µ) = 0.
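To get a feel for condition (1.4.3), consider a toy model (ours, purely illustrative, not from the monograph): suppose a parabolic subgroup Gp has on the order of e^{δp·n} elements h with ‖h‖ ≈ n. Then (1.4.3) behaves like Σ_n n·e^{(δp−δ)n}, which converges exactly when δ > δp, in line with the sufficient condition of Corollary 17.5.10. A short Python check of the two regimes:

```python
import math

def weighted_poincare_series(delta, delta_p, n_max):
    # Toy model: about e^{delta_p * n} elements h of Gp with ||h|| ~ n,
    # so (1.4.3) is modeled by the partial sum of n * e^{(delta_p - delta) * n}.
    return sum(n * math.exp((delta_p - delta) * n) for n in range(1, n_max + 1))

# delta > delta_p: partial sums stabilize (the modeled series converges).
s_1000 = weighted_poincare_series(1.0, 0.5, 1000)
s_2000 = weighted_poincare_series(1.0, 0.5, 2000)
assert abs(s_2000 - s_1000) < 1e-9

# delta = delta_p: partial sums keep growing (the modeled series diverges).
t_1000 = weighted_poincare_series(1.0, 1.0, 1000)
t_2000 = weighted_poincare_series(1.0, 1.0, 2000)
assert t_2000 > 3 * t_1000
```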
There is a relation between exact dimensionality of the Patterson–Sullivan measure and the theory of Diophantine approximation on the boundary ∂X, as described in [73]. Specifically, if VWAξ denotes the set of points which are very well
approximable with respect to a distinguished point ξ (cf. §17.5.1), then we have
the following:
Theorem 1.4.6 (Cf. Theorem 17.5.8). The following are equivalent:
(A) For all p ∈ P , µ(VWAp ) = 0.
(B) µ is exact dimensional.
(C) dimH (µ) = δ.
(D) For all ξ ∈ Λ, µ(VWAξ ) = 0.
In particular, combining with Theorem 1.4.5 demonstrates that the equation
µ(VWAξ ) = 0
holds for a large class of geometrically finite groups G and for all ξ ∈ Λ. This
improves the results of [73, §1.5.3].
1.5. Appendices
We conclude this monograph with two appendices. Appendix A contains a list
of open problems, and Appendix B an index of defined terms.
Part 1
Preliminaries
This part will be divided as follows: In Chapter 2 we define the class of algebraic
hyperbolic spaces, which are often called rank one symmetric spaces of noncompact
type. In Chapters 3 and 4, we define the class of hyperbolic metric spaces and study
their geometry. In Chapter 5, we explore different notions of discreteness for groups
of isometries of a metric space. In Chapter 6 we prove two classification theorems,
one for isometries (Theorem 6.1.4) and one for semigroups of isometries (Theorem
6.2.3). Finally, in Chapter 7 we define and study the limit set of a semigroup of
isometries.
CHAPTER 2
Algebraic hyperbolic spaces
In this chapter we introduce our main objects of interest, algebraic hyperbolic spaces in finite and infinite dimensions. References for the theory of finite-dimensional algebraic hyperbolic spaces, which are often called rank one symmetric
spaces of noncompact type, include [39, 45, 123]. Infinite-dimensional algebraic
hyperbolic spaces, as well as some non-hyperbolic infinite-dimensional symmetric
spaces, have been discussed in [67].
2.1. The definition
Finite-dimensional rank one symmetric spaces of noncompact type come in
four flavors, corresponding to the classical division algebras R, C, Q (quaternions),
and O (octonions).1 The first three division algebras have corresponding rank one
symmetric spaces of noncompact type of arbitrary dimension, but there is only one
rank one symmetric space of noncompact type corresponding to the octonions; it
occurs in dimension two (which corresponds to real dimension 16). Consequently,
the octonion rank one symmetric space of noncompact type (known as the Cayley
hyperbolic plane 2) does not have an infinite-dimensional analogue, while the other
three classes do admit infinite-dimensional analogues.
The rank one symmetric spaces of noncompact type corresponding to R have
constant negative curvature. However, those corresponding to the other division
algebras have variable negative curvature [147, Lemmas 2.3, 2.7, 2.11] (see also
[93, Corollary of Proposition 4]).
Remark 2.1.1. In this monograph we will use the term “algebraic hyperbolic
spaces” to refer to all rank one symmetric spaces of noncompact type except the
Cayley hyperbolic plane H2O , in order to avoid dealing with the complicated algebra of the octonions.3 However, we feel confident that all the theorems regarding
1We denote the quaternions by Q in order to avoid confusion with the rank one symmetric space
of noncompact type (defined over Q) itself, which we will denote by H. Be aware that Q should
not be confused with the set of rational numbers.
2Not to be confused with the Cayley plane, a different mathematical object.
3The complications come from the fact that the octonions are not associative, thus making it
somewhat unclear what it means to say that O3 is a vector space “over” the octonions, since in
general (xa)b ≠ x(ab).
algebraic hyperbolic spaces in this monograph can be generalized to the Cayley
hyperbolic plane (possibly after modifying the statements slightly). We leave this
task to an algebraist.
For the reader interested in learning more about the Cayley hyperbolic plane,
see [133, pp.136-139], [156], or [7]; see also [14] for an excellent introduction to
octonions in general.
Fix F ∈ {R, C, Q} and an index set J, and let us construct an algebraic hyper-
bolic space (i.e. a rank one symmetric space of noncompact type) over the field F
in dimension #(J). We remark that usually we will let J = N = {1, 2, . . .}, but
occasionally J may be an uncountable set. Let
H = H_F^J := { x = (x_i)_{i∈J} ∈ F^J : Σ_{i∈J} |x_i|² < ∞ },
and for x ∈ H let
‖x‖ := ( Σ_{i∈J} |x_i|² )^{1/2} .
We will think of H as a right F-module, so scalars will always act on the right.4
Note that
‖xa‖ = |a| · ‖x‖ ∀x ∈ H ∀a ∈ F.
A sesquilinear form on H is an R-bilinear map B(·, ·) : H × H → F satisfying
B(xa, y) = āB(x, y) and B(x, ya) = B(x, y)a.5
Here and from now on ā denotes the conjugate of a complex or quaternionic number
a ∈ F; if F = R, then ā = a.
A sesquilinear form is said to be skew-symmetric if B(y, x) equals the conjugate
of B(x, y). For example, the map
BE (x, y) := Σ_{i∈J} x̄i yi
is a skew-symmetric sesquilinear form. Note that
E(x) := BE (x, x) = ‖x‖².
2.2. The hyperboloid model
Assume that 0 ∉ J, and let
L = L_F^{J∪{0}} = H_F^{J∪{0}} = { x = (x_i)_{i∈J∪{0}} ∈ F^{J∪{0}} : Σ_{i∈J∪{0}} |x_i|² < ∞ }.
4The advantage of this convention is that it allows operators to act on the left.
5In the case F = C, this disagrees with the usual convention; we follow here the convention of
[123, §3.3.1].
Consider the skew-symmetric sesquilinear form BQ : L × L → F defined by
BQ (x, y) := −x̄0 y0 + Σ_{i∈J} x̄i yi
and its associated quadratic form
(2.2.1)  Q(x) := BQ (x, x) = −|x0 |² + Σ_{i∈J} |x_i|².
We observe that the form Q is not positive definite, since Q(e0 ) = −1.
Remark 2.2.1. If F = R, then the form Q is called a Lorentzian quadratic
form, and the pair (L, Q) is called a Minkowski space.
Let P(L) denote the projectivization of L, i.e. the quotient of L \ {0} under
the equivalence relation x ∼ xa (x ∈ L \ {0}, a ∈ F \ {0}). Let
H = H^J_F := { [x] ∈ P(L_F^{J∪{0}}) : Q(x) < 0 },
and consider the map dH : H × H → [0, ∞) defined by the equation
(2.2.2)  cosh dH ([x], [y]) = |BQ (x, y)| / √(|Q(x)| · |Q(y)|),  [x], [y] ∈ H.
Note that the map dH is well-defined because the right hand side is invariant under
multiplying x and y by scalars.
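Before turning to the proposition, formula (2.2.2) is easy to experiment with numerically. The following Python sketch (ours; it assumes F = R and #(J) = 2, with arbitrary sample points) evaluates dH on representatives and checks well-definedness, symmetry, and an instance of the triangle inequality established below:

```python
import math

def B_Q(x, y):
    # Lorentzian form over F = R: -x0*y0 + sum over the remaining coordinates.
    return -x[0] * y[0] + sum(a * b for a, b in zip(x[1:], y[1:]))

def Q(x):
    return B_Q(x, x)

def d_H(x, y):
    # Formula (2.2.2), applied to representatives x, y with Q < 0.
    return math.acosh(abs(B_Q(x, y)) / math.sqrt(abs(Q(x)) * abs(Q(y))))

def lift(u, v):
    # Lift (u, v) in R^2 to the hyperboloid Q = -1.
    return (math.sqrt(1 + u * u + v * v), u, v)

o, p, q = lift(0, 0), lift(1, 0), lift(0, 2)

# Well-defined: rescaling a representative does not change the distance.
assert abs(d_H(tuple(3 * c for c in o), p) - d_H(o, p)) < 1e-12
# Symmetric, and satisfies the triangle inequality on these sample points.
assert abs(d_H(o, p) - d_H(p, o)) < 1e-12
assert d_H(o, q) <= d_H(o, p) + d_H(p, q) + 1e-12
```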
Proposition 2.2.2. The map dH is a metric on H that is compatible with the
natural topology, when viewed as a subspace of the quotient space P(L). Moreover,
for any two distinct points [x], [y] ∈ H there exists a unique isometric embedding
γ : R → H such that γ(0) = [x] and γ(dH ([x], [y])) = [y].
Remark 2.2.3. The second sentence is the unique geodesic extension property
of H. It holds more generally for Riemannian manifolds (cf. Remark 2.2.7 below),
but is an important distinguishing feature in the larger class of uniquely geodesic
metric spaces.
Proof of Proposition 2.2.2. The key to the proof is the following lemma,
which may also be deduced from the infinite-dimensional analogue of Sylvester’s
law of inertia [124, Lemma 3].
Lemma 2.2.4. Fix z ∈ L with Q(z) < 0, and let z⊥ = {w : BQ (z, w) = 0}.
Then Q ↾ z⊥ is positive definite.
Proof of Lemma 2.2.4. By contradiction, suppose Q(y) ≤ 0 for some nonzero
y ∈ z⊥ . There exist a, b ∈ F, not both zero, such that y0 a + z0 b = 0. Since y and z
are F-linearly independent (as z ∉ z⊥ but y ∈ z⊥ ), we have ya + zb ≠ 0, and its
0th coordinate vanishes. But then
0 < Q(ya + zb) = |a|² Q(y) + |b|² Q(z) ≤ 0,
which provides a contradiction. ⊳
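The lemma can also be checked numerically in the real three-coordinate case (an illustrative sketch of ours, not part of the proof): project random vectors onto z⊥ and verify that Q is positive on the nonzero projections:

```python
import random

def B_Q(x, y):
    # Lorentzian form over F = R in three coordinates (x0, x1, x2).
    return -x[0] * y[0] + sum(a * b for a, b in zip(x[1:], y[1:]))

def Q(x):
    return B_Q(x, x)

z = (2.0, 1.0, 0.5)  # Q(z) = -4 + 1 + 0.25 = -2.75 < 0
rng = random.Random(0)
for _ in range(100):
    w = tuple(rng.uniform(-1.0, 1.0) for _ in range(3))
    # Orthogonal projection of w onto z-perp: w' = w - z * (B_Q(z, w) / Q(z)).
    c = B_Q(z, w) / Q(z)
    w_perp = tuple(a - b * c for a, b in zip(w, z))
    assert abs(B_Q(z, w_perp)) < 1e-9   # w' lies in z-perp
    if any(abs(a) > 1e-9 for a in w_perp):
        assert Q(w_perp) > 0            # and Q is positive there
```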
Now fix [x], [y], [z] ∈ H, and let x, y, z ∈ L \ {0} be representatives which satisfy
BQ (x, z) = BQ (y, z) = Q(z) = −1.
Then
cosh dH ([x], [z]) = 1/√(1 − Q(x − z)) ,  sinh dH ([x], [z]) = √(Q(x − z))/√(1 − Q(x − z)) ,
cosh dH ([y], [z]) = 1/√(1 − Q(y − z)) ,  sinh dH ([y], [z]) = √(Q(y − z))/√(1 − Q(y − z)) .
By the addition law for hyperbolic cosine we have
cosh(dH ([x], [z]) + dH ([y], [z])) = (1 + √(Q(x − z)) √(Q(y − z))) / (√(1 − Q(x − z)) √(1 − Q(y − z))) .
On the other hand, we have
cosh dH ([x], [y]) = |−1 + BQ (x − z, y − z)| / (√(1 − Q(x − z)) √(1 − Q(y − z))) .
Since x − z, y − z ∈ z⊥ , the Cauchy–Schwarz inequality together with Lemma 2.2.4
gives
|−1 + BQ (x − z, y − z)| ≤ 1 + √(Q(x − z)) √(Q(y − z)) ,
with equality if and only if x − z and y − z are proportional with a negative real
constant of proportionality. This demonstrates the triangle inequality.
To show that dH is compatible with the natural topology, it suffices to show
that if U is a neighborhood in the natural topology of a point [x] ∈ H, then there
exists ε > 0 such that B([x], ε) ⊆ U . Indeed, fix a representative x ∈ [x]; then there
exists δ > 0 such that ‖y − x‖ ≤ δ implies [y] ∈ U . Now, given [y] ∈ B([x], ε),
choose a representative y ∈ [y] such that z := y − x satisfies BQ (x, z) = 0; this
is possible since any representative y ∈ [y] satisfies BQ (x, y) ≠ 0 by Lemma 2.2.4.
Then
cosh dH ([x], [y]) = |Q(x)| / √(|Q(x)| · |Q(x) + Q(z)|) = √( Q(x) / (Q(x) + Q(z)) ) .
So if dH ([x], [y]) ≤ ε, then Q(z) ≤ |Q(x)|[1 − 1/ cosh²(ε)]. By Lemma 2.2.4, there
exists C > 0 such that ‖z‖² ≤ C|Q(x)|[1 − 1/ cosh²(ε)]. In particular, we may choose
ε so that C|Q(x)|[1 − 1/ cosh²(ε)] ≤ δ², which completes the proof that dH is compatible
with the natural topology.
Now suppose that γ : R → H is an isometric embedding, and let [z] = γ(0).
Choose a representative z ∈ L \ {0} such that Q(z) = −1, and for each t ∈ R \ {0}
choose a representative xt ∈ γ(t) such that BQ (xt , z) = −1. The preceding
argument shows that for t1 < 0 < t2 , xt1 − z and xt2 − z are proportional with a
negative constant of proportionality. Together with (2.2.2), this implies that
(2.2.3)
xt = z + tanh(t)w
for some w ∈ z⊥ with Q(w) = 1. Conversely, direct calculation shows that the
equation (2.2.3) defines an isometric embedding γz,w : R → H via the formula
γz,w (t) = [xt ].
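The direct calculation just mentioned is easy to confirm numerically. In the real three-coordinate model (our illustrative choice of z and w), the curve t ↦ [z + tanh(t)w] of (2.2.3) is unit-speed, i.e. dH(γz,w(s), γz,w(t)) = |s − t|:

```python
import math

def B_Q(x, y):
    return -x[0] * y[0] + sum(a * b for a, b in zip(x[1:], y[1:]))

def Q(x):
    return B_Q(x, x)

def d_H(x, y):
    # Formula (2.2.2) on representatives with Q < 0.
    return math.acosh(abs(B_Q(x, y)) / math.sqrt(abs(Q(x)) * abs(Q(y))))

z = (1.0, 0.0, 0.0)   # Q(z) = -1
w = (0.0, 0.6, 0.8)   # w in z-perp with Q(w) = 1

def gamma(t):
    # The curve (2.2.3): gamma_{z,w}(t) is the class of z + tanh(t) w.
    return tuple(a + math.tanh(t) * b for a, b in zip(z, w))

# Unit speed: d_H(gamma(s), gamma(t)) = |s - t|.
for s, t in [(-1.0, 0.5), (0.0, 2.0), (1.5, -0.25)]:
    assert abs(d_H(gamma(s), gamma(t)) - abs(s - t)) < 1e-9
```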
Definition 2.2.5. An algebraic hyperbolic space is a pair (H^J_F , dH ), where
F ∈ {R, C, Q} and J is a nonempty set such that 0 ∉ J.
Remark 2.2.6. In finite dimensions, the class of algebraic hyperbolic spaces is
identical (modulo the Cayley hyperbolic plane, cf. Remark 2.1.1) to the class of
rank one symmetric spaces of noncompact type. This follows from the classification
theorem for finite-dimensional symmetric spaces, see e.g. [94, p.518].6 It is not clear
whether an analogous theorem holds in infinite dimensions (but see [68] for some
results in this direction).
Remark 2.2.7. In finite dimensions, the metric dH may be defined as the length
metric associated to a certain Riemannian metric on H; cf. [147, §2.2]. The same
procedure works in infinite dimensions; cf. [119] for an exposition of the theory
of infinite dimensional manifolds. Although a detailed account of the theory of
infinite-dimensional Riemannian manifolds would be too much of a digression, let
us make the following points:
• An infinite-dimensional analogue of the Hopf–Rinow theorem is false [13],
i.e. there exists an infinite-dimensional Riemannian manifold such that
some two points on that manifold cannot be connected by a geodesic.
However, if an infinite-dimensional Riemannian manifold X is nonpositively curved, then any two points of X can be connected by a unique
geodesic as a result of the infinite-dimensional Cartan–Hadamard theorem [119, IX, Theorem 3.8]; moreover, this geodesic is length-minimizing.
In particular, if one takes a Riemannian manifolds approach to defining
infinite-dimensional algebraic hyperbolic spaces, then the second assertion
of Proposition 2.2.2 follows from the Cartan–Hadamard theorem.
• A bijection between two infinite-dimensional Riemannian manifolds is an
isometry with respect to the length metric if and only if it is a diffeomorphism which induces an isometry on the Riemannian metric [76, Theorem
7]. This theorem is commonly known as the Myers–Steenrod theorem, as
S. B. Myers and N. E. Steenrod proved its finite-dimensional version [134].
The difficult part of this theorem is proving that any bijection which is an
6In the notation of [94], the spaces H^p_R , H^p_C , H^p_Q , and H^2_O are written as SO(p, 1)/ SO(p),
SU(p, 1)/ SU(p), Sp(p, 1)/ Sp(p), and (f4(−20) , so(9)), respectively.
isometry with respect to the length metric is differentiable. In the case of
algebraic hyperbolic spaces, however, this follows directly from Theorem
2.3.3 below.
2.3. Isometries of algebraic hyperbolic spaces
We define the group of isometries of a metric space (X, d) to be the group
Isom(X) := {g : X → X : g is a bijection and d(g(x), g(y)) = d(x, y) ∀x, y ∈ X}.
In this section we will compute the group of isometries of an arbitrary algebraic
hyperbolic space. Fix F ∈ {R, C, Q} and an index set J, and let H = H_F^J ,
L = L_F^{J∪{0}} , and H = H^J_F . We begin with the following observation:
Observation 2.3.1. Let OF (L; Q) denote the group of Q-preserving F-linear
automorphisms of L. Then for all T ∈ OF (L; Q), the map [T ] : H → H defined by
the equation
(2.3.1)
[T ]([x]) = [T x]
is an isometry of (H, dH ).
Proof. The map [T ] is well-defined by the associativity property T (xa) =
(T x)a. Since T is Q-preserving and F-linear, the polarization identity (the three
versions cover the three cases when the base field F = R, C, and Q respectively)
BQ (x, y) = ¼[Q(x + y) − Q(x − y)]
BQ (x, y) = ¼[Q(x + y) − Q(x − y) − iQ(x + yi) + iQ(x − yi)]
BQ (x, y) = ¼[Q(x + y) − Q(x − y) + Σ_{ℓ=i,j,k} (−ℓQ(x + yℓ) + ℓQ(x − yℓ))]
guarantees that
(2.3.2)  BQ (T x, T y) = BQ (x, y) ∀x, y ∈ L.
Comparing with (2.2.2) shows that [T ] is an isometry.
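The complex version of the polarization identity can be verified numerically. The sketch below (ours; three coordinates over F = C, with BQ conjugate-linear in the first slot, matching the convention above) recovers BQ(x, y) from values of Q alone:

```python
def B_Q(x, y):
    # Over F = C: conjugate-linear in the first slot, -conj(x0)*y0 + sum conj(xi)*yi.
    return -x[0].conjugate() * y[0] + sum(a.conjugate() * b for a, b in zip(x[1:], y[1:]))

def Q(x):
    # Q(x) = B_Q(x, x) is always real.
    return B_Q(x, x).real

def shift(x, y, a):
    # Coordinatewise x + y*a (scalars act on the right).
    return tuple(c + d * a for c, d in zip(x, y))

def polarize(x, y):
    # The complex polarization identity from the proof of Observation 2.3.1.
    return (Q(shift(x, y, 1)) - Q(shift(x, y, -1))
            - 1j * Q(shift(x, y, 1j)) + 1j * Q(shift(x, y, -1j))) / 4

x = (1 + 2j, 0.5j, 3 + 0j)
y = (2 - 1j, 1 + 0j, 1j)
assert abs(polarize(x, y) - B_Q(x, y)) < 1e-12
```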
The group OF (L; Q) is quite large. In addition to containing all maps of the
form T ⊕ I, where T ∈ OF (H; E) and I : F → F is the identity map, it also contains
the so-called Lorentz boosts
(2.3.3)  Tj,t (x)i =
    xi                         if i ≠ 0, j
    cosh(t)xj + sinh(t)x0      if i = j
    sinh(t)xj + cosh(t)x0      if i = 0
for i ∈ J ∪ {0}, j ∈ J, t ∈ R.
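As a quick sanity check on (2.3.3) (an illustrative real three-coordinate sketch of ours): the boost Tj,t preserves Q, and its orbit through (1, 0, 0) traces the curve (cosh t, sinh t, 0) that appears in the proof of Observation 2.3.2:

```python
import math

def Q(x):
    # Lorentzian quadratic form (2.2.1) over F = R, coordinates (x0, x1, ..., xn).
    return -x[0] ** 2 + sum(c * c for c in x[1:])

def boost(j, t, x):
    # Lorentz boost T_{j,t} of (2.3.3): mixes coordinates 0 and j, fixes the rest.
    y = list(x)
    y[j] = math.cosh(t) * x[j] + math.sinh(t) * x[0]
    y[0] = math.sinh(t) * x[j] + math.cosh(t) * x[0]
    return tuple(y)

x = (2.0, 1.0, 0.5)
for t in (-1.0, 0.3, 2.0):
    assert abs(Q(boost(1, t, x)) - Q(x)) < 1e-9  # Q-preserving

# Orbit of o = (1, 0, 0) under the boosts T_{1,t}: the curve (cosh t, sinh t, 0).
o = (1.0, 0.0, 0.0)
b = boost(1, 0.7, o)
assert all(abs(u - v) < 1e-12 for u, v in zip(b, (math.cosh(0.7), math.sinh(0.7), 0.0)))
```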
We leave it as an exercise to show that OF (H; E) ⊕ {I} and the Lorentz boosts in
fact generate the group OF (L; Q).
Observation 2.3.2. The group
POF (L; Q) = {[T ] : T ∈ OF (L; Q)} ≤ Isom(H)
acts transitively on H.
Proof. Let o = [(1, 0)]. The orbit of o under POF (L; Q) contains its image
under the Lorentz boosts. Specifically, for every t ∈ R the orbit of o contains the
point [(cosh(t), sinh(t), 0)]. Applying maps of the form [T ⊕I], T ∈ OF (H, E), shows
that the orbit of o is H.
We may ask whether the group POF (L; Q) is equal to Isom(H) or is merely a
proper subgroup. The answer turns out to depend on the division algebra F:
Theorem 2.3.3. If F ∈ {R, Q} then Isom(H) = POF (L; Q). If F = C, then
POF (L; Q) is of index 2 in Isom(H).
Remark 2.3.4. In finite dimensions, Theorem 2.3.3 is given as an exercise
in [39, Exercise II.10.21]. Because of the importance of Theorem 2.3.3 to this
monograph, we provide a full proof.
Before proving Theorem 2.3.3, it will be convenient for us to introduce a group
somewhat larger than OF (L; Q). Let Aut(F) denote the group of automorphisms
of F as an R-algebra, i.e.
Aut(F) = { σ : F → F : σ is an R-linear bijection and σ(ab) = σ(a)σ(b) for all a, b ∈ F } .
We will say that an R-linear map T : L → L is F-skew linear if there exists
σ ∈ Aut(F) such that
(2.3.4)  T (xa) = T (x)σ(a) for all x ∈ L and a ∈ F.
The group of skew-linear bijections T : L → L which preserve Q will be denoted
O∗F (L; Q). For each T , the unique σ ∈ Aut(F) satisfying (2.3.4) will be denoted σT .
Note that the map T 7→ σT is a homomorphism.
Warning. The associative law (T x)a = T (xa) is not valid for T ∈ O∗F (L; Q);
rather, T (xa) = (T x)σT (a) by (2.3.4). Thus when discussing elements of O∗F (L; Q),
we must be careful of parentheses.
Example 2.3.5. For each σ ∈ Aut(F), the map
σ^J (x) = (σ(x_i))_{i∈J∪{0}}
is F-skew-linear and Q-preserving, and σ_{σ^J} = σ.
Observation 2.3.6. For T ∈ O∗F (L; Q),
BQ (T x, T y) = σT (BQ (x, y)) ∀x, y ∈ L.
Proof. By (2.3.2), the formula holds when T ∈ OF (L; Q), and direct calcula-
tion shows that it holds when T = σ J for some σ ∈ Aut(F). Since O∗F (L; Q) is a
semidirect product of the groups OF (L; Q) and {σ J : σ ∈ Aut(F)}, this completes
the proof.
We observe that if T ∈ O∗F (L; Q), then (2.3.4) shows that T preserves F-lines,
i.e. T (xF) = T (x)F for all x ∈ L \ {0}. Thus the equation (2.3.1) defines a map
[T ] : H → H, which is an isometry by Observation 2.3.6. Thus if
PO∗F (L; Q) = {[T ] : T ∈ O∗F (L; Q)},
then
POF (L; Q) ≤ PO∗F (L; Q) ≤ Isom(H).
We are now ready to begin the
Proof of Theorem 2.3.3. The proof will consist of two parts. In the first,
we show that PO∗F (L; Q) = Isom(H), and in the second we show that POF (L; Q) is
equal to PO∗F (L; Q) if F = R, Q and is of index 2 in PO∗F (L; Q) if F = C.
Fix g ∈ Isom(H); we claim that g ∈ PO∗F (L; Q). Let z = (1, 0), and let o = [z].
By Observation 2.3.2, there exists [T ] ∈ POF (L; Q) such that [T ](o) = g(o). Thus,
we may without loss of generality assume that g(o) = o.
We observe that z⊥ = H. Let S(H) denote the unit sphere of H, i.e. S(H) =
{w ∈ H : Q(w) = 1}. For each w ∈ S(H), the embedding γz,w : R → H defined in
the proof of Proposition 2.2.2 is an isometry. By Proposition 2.2.2, its image under
g must also be an isometry. Specifically, there exists f (w) ∈ S(H) such that
(2.3.5)
g([z + tanh(t)w]) = [z + tanh(t)f (w)] ∀t ∈ R.
The fact that g is a bijection implies that f : S(H) → S(H) is a bijection. Moreover,
the fact that g is an isometry means that for all w1 , w2 ∈ S(H) and t1 , t2 ∈ R, we
have
d([z + tanh(t1 )w1 ], [z + tanh(t2 )w2 ]) = d([z + tanh(t1 )f (w1 )], [z + tanh(t2 )f (w2 )]).
Recalling that

cosh d([z + tanh(t1)w1], [z + tanh(t2)w2])
  = |BQ(z + tanh(t1)w1, z + tanh(t2)w2)| / √(|Q(z + tanh(t1)w1)| · |Q(z + tanh(t2)w2)|)
  = |−1 + tanh(t1) tanh(t2)BQ(w1, w2)| / √((1 − tanh²(t1))(1 − tanh²(t2))),

we see that

|−1 + tanh(t1) tanh(t2)BQ(w1, w2)| = |−1 + tanh(t1) tanh(t2)BQ(f(w1), f(w2))|.
Write θ = tanh(t1) tanh(t2). Squaring both sides gives

(2.3.6)  LHS² = θ²|BQ(w1, w2)|² − 2θ Re[BQ(w1, w2)] + 1 = θ²|BQ(f(w1), f(w2))|² − 2θ Re[BQ(f(w1), f(w2))] + 1 = RHS².
We observe that for w1 , w2 ∈ S(H) fixed, (2.3.6) holds for all −1 < θ < 1. In
particular, taking the first and second derivatives and plugging in θ = 0 gives
(2.3.7)
Re[BQ (w1 , w2 )] = Re[BQ (f (w1 ), f (w2 ))]
(2.3.8)
|BQ (w1 , w2 )| = |BQ (f (w1 ), f (w2 ))|.
Extend f to a bijection f : H → H by letting f (0) = 0 and f (tw) = tf (w) for
t > 0, w ∈ S(H). We observe that (2.3.7) and (2.3.8) hold also for the extended
version of f .
Claim 2.3.7. f is R-linear.
Proof. Fix w1 , w2 ∈ H and c1 , c2 ∈ R. By (2.3.7), the maps
w 7→ Re[BQ (f (c1 w1 + c2 w2 ), f (w))] and w 7→ Re[BQ (c1 f (w1 ) + c2 f (w2 ), f (w))]
are identical. By the surjectivity of f together with the Riesz representation theorem, this implies that f (c1 w1 + c2 w2 ) = c1 f (w1 ) + c2 f (w2 ).
⊳
Claim 2.3.8. f preserves F-lines.
Proof. For each x ∈ H \ {0}, the F-line xF may be defined using the quantity
|BQ | via the formula
xF = {y ∈ H : ∀w ∈ H, |BQ (x, w)| = 0 ⇔ |BQ (y, w)| = 0} .
The claim therefore follows from (2.3.8).
⊳
From Claim 2.3.8, we see that for all x ∈ H \ {0} and a ∈ F, there exists
σx (a) ∈ F such that
f (xa) = f (x)σx (a).
Claim 2.3.9. For x, y ∈ H \ {0},
σx (a) = σy (a).
Proof. By Claim 2.3.7,
[f (x) + f (y)]σx+y (a) = f (xa + ya) = f (x)σx (a) + f (y)σy (a).
Rearranging, we see that
f (x)[σx+y (a) − σx (a)] + f (y)[σx+y (a) − σy (a)] = 0.
If x and y are linearly independent, then σx+y (a) − σx (a) = 0 and σx+y (a) −
σy (a) = 0, so σx (a) = σy (a). But the general case clearly follows from the linearly
independent case.
⊳
For a ∈ F, denote the common value of σx (a) (x ∈ H \ {0}) by σ(a). Then
(2.3.9)
f (xa) = f (x)σ(a) ∀x ∈ H ∀a ∈ F.
Claim 2.3.10. σ ∈ Aut(F).
Proof. The R-linearity of σ follows from Claim 2.3.7, and the bijectivity of σ
follows from the bijectivity of f . Fix x ∈ H \ {0} arbitrary. For a, b ∈ F,
f (x)σ(ab) = f (xab) = f (xa)σ(b) = f (x)σ(a)σ(b),
which proves that σ is a multiplicative homomorphism.
⊳
Thus f ∈ O∗F (H; E), and so T = f ⊕I ∈ O∗F (L; Q). But [T ] = g by (2.3.5), so g ∈
PO∗F (L; Q). This completes the first part of the proof, namely that PO∗F (L; Q) =
Isom(H).
To complete the proof, we need to show that POF (L; Q) is equal to PO∗F (L; Q) if
F = R, Q and is of index 2 in PO∗F (L; Q) if F = C. If F = R, this is obvious. If F = C,
it follows from the semidirect product structure O∗F (L; Q) = OF (L; Q) ⋉ {σ J : σ ∈
Aut(F)} together with the fact that Aut(F) = {I, z 7→ z̄} ≡ Z2 .
If F = Q, then Aut(F) = {Φa : a ∈ S(Q)}, where Φa (b) = aba−1 . Here
S(F) = {a ∈ F : |a| = 1}. So O∗Q (L; Q) 6= OQ (L; Q); nevertheless, we will show
that PO∗Q (L; Q) = POQ (L; Q). Fix [T ] ∈ PO∗Q (L; Q), and fix a ∈ S(Q) for which
σT = Φa. Consider the map

(2.3.10)  Ta(x) = xa.

We have Ta ∈ O∗Q(L; Q) and σTa = Φa^{−1}. Thus σ_{Ta T} = Φa^{−1}Φa = I, so Ta T is F-linear. But

[Ta T] = [T],

so [T] ∈ POQ(L; Q). This completes the proof of Theorem 2.3.3.
Remark 2.3.11. Using algebraic language, the automorphisms Φa of Q are
inner automorphisms, while the automorphism z 7→ z̄ of C is an outer automorphism. Although both inner and outer automorphisms contribute to the quotient O∗F (L; Q)/ OF (L; Q), only the outer automorphisms contribute to the quotient
PO∗F (L; Q)/ POF (L; Q). This explains why the index #(PO∗F (L; Q)/ POF (L; Q)) is
smaller when F = Q than when F = C: although the group Aut(Q) is much larger
than Aut(C), it consists entirely of inner automorphisms, while Aut(C) has an outer
automorphism.
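The inner automorphisms Φa of Q mentioned in Remark 2.3.11 can be checked concretely. The following sketch (ours, not the text's; the encoding of quaternions as 4-tuples (w, x, y, z) is an assumption of the example) verifies that for a unit quaternion a, Φa(b) = aba⁻¹ is multiplicative and restricts to the identity on the reals:

```python
import math

def qmul(a, b):
    """Hamilton product of quaternions represented as 4-tuples (w, x, y, z)."""
    w1, x1, y1, z1 = a
    w2, x2, y2, z2 = b
    return (w1*w2 - x1*x2 - y1*y2 - z1*z2,
            w1*x2 + x1*w2 + y1*z2 - z1*y2,
            w1*y2 - x1*z2 + y1*w2 + z1*x2,
            w1*z2 + x1*y2 - y1*x2 + z1*w2)

def conj(a):
    return (a[0], -a[1], -a[2], -a[3])

def phi(a, b):
    """The inner automorphism Phi_a(b) = a b a^{-1}; for |a| = 1, a^{-1} = conj(a)."""
    return qmul(qmul(a, b), conj(a))

a = tuple(c / math.sqrt(30.0) for c in (1.0, 2.0, 3.0, 4.0))  # a unit quaternion
p, q = (0.5, -1.0, 2.0, 0.25), (3.0, 0.0, -1.5, 1.0)

# Phi_a is a multiplicative homomorphism ...
left = phi(a, qmul(p, q))
right = qmul(phi(a, p), phi(a, q))
assert all(abs(u - v) < 1e-12 for u, v in zip(left, right))

# ... and fixes the real axis (the center of Q)
r = phi(a, (7.0, 0.0, 0.0, 0.0))
assert abs(r[0] - 7.0) < 1e-12 and all(abs(c) < 1e-12 for c in r[1:])
```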
Definition 2.3.12. The bordification of H is its closure relative to the topological space P(L), i.e.
bord H = {[x] : Q(x) ≤ 0}.
The boundary of H is its topological boundary relative to P(L), i.e.
∂H = bord H \ H = {[x] : Q(x) = 0}.
The following is a corollary of Theorem 2.3.3:
Corollary 2.3.13. Every isometry of H extends uniquely to a homeomorphism
of bord H.
Proof. If T ∈ O∗F (L; Q), then the formula (2.3.1) defines a homeomorphism
of bord H which extends the action of [T ] on H. The uniqueness is automatic.
Remark 2.3.14. Corollary 2.3.13 can also be proven independently of Theorem
2.3.3 via the theory of hyperbolic metric spaces; cf. Lemma 3.4.25 and Proposition
3.5.3.
The following observation will be useful in the sequel:
Observation 2.3.15. Fix [x], [y] ∈ bord H. Then
BQ (x, y) = 0 ⇔ [x] = [y] ∈ ∂H.
Proof. If either [x] or [y] is in H, this follows from Lemma 2.2.4. Suppose that
[x], [y] ∈ ∂H, and that BQ (x, y) = 0. Then Q is identically zero on xF + yF. Thus
(xF+ yF)∩H = {0}, and so xF+ yF is one-dimensional. This implies [x] = [y].
2.4. Totally geodesic subsets of algebraic hyperbolic spaces
Given two pairs (X, bord X) and (Y, bord Y ), where X and Y are metric spaces
contained in the topological spaces bord X and bord Y (and dense in these spaces),
an isomorphism between (X, bord X) and (Y, bord Y ) is a homeomorphism between
bord X and bord Y which restricts to an isometry between X and Y .
Proposition 2.4.1. Let K ≤ F be an R-subalgebra, and let V ≤ L be a closed (right) K-module such that

(2.4.1)  BQ(x, y) ∈ K  ∀x, y ∈ V.

Then either [V] ∩ H = ∅ and #([V] ∩ bord H) ≤ 1, or ([V] ∩ H, [V] ∩ bord H) is isomorphic to an algebraic hyperbolic space together with its closure.
Proof.
Case 1: [V] ∩ H ≠ ∅. In this case, fix [z] ∈ [V] ∩ H, and let z be a representative of [z] with Q(z) = −1. By Lemma 2.2.4, Q is positive-definite on z⊥. We leave it as an exercise that the quadratic forms Q ↿ z⊥ and E ↿ z⊥ agree up to a bounded multiplicative error factor, which implies that z⊥ is complete with respect to the norm √Q.
From (2.4.1), we see that (V ∩ z⊥, BQ) is a K-Hilbert space. By the usual Gram–Schmidt process, we may construct an orthonormal basis (e_i)_{i∈J′} for V ∩ z⊥, thus proving that V ∩ z⊥ is isomorphic to H_K^{J′} for some set J′. Thus V is isomorphic to L_K^{J′∪{0}}, and so ([V] ∩ H, [V] ∩ bord H) is isomorphic to (H_K^{J′}, bord H_K^{J′}).
Case 2: [V] ∩ H = ∅. We need to show that #([V] ∩ bord H) ≤ 1. By contradiction, fix [x], [y] ∈ [V] distinct, and let x, y ∈ V be representatives. By Observation 2.3.15, BQ(x, y) ≠ 0. On the other hand, Q(x) = Q(y) = 0 since [x], [y] ∈ ∂H. Thus Q(x − yBQ(x, y)^{−1}) = −2 < 0. On the other hand, x − yBQ(x, y)^{−1} ∈ V by (2.4.1). Thus [x − yBQ(x, y)^{−1}] ∈ [V] ∩ H, a contradiction.
Definition 2.4.2. A totally geodesic subset of an algebraic hyperbolic space
H is a set of the form [V ] ∩ bord H, where V is as in Proposition 2.4.1. A totally
geodesic subset is nontrivial if it contains an element of H.
Remark 2.4.3. As with Definition 2.2.5, the terminology “totally geodesic” is
motivated here by the finite-dimensional situation, where totally geodesic subsets
correspond precisely with the closures of those submanifolds which are totally geodesic in the sense of Riemannian geometry; see [147, Proposition A.4 and A.7].
However, note that we consider both the empty set and singletons in ∂H to be
totally geodesic.
Remark 2.4.4. If V ≤ L is a closed K-module satisfying (2.4.1), then for each
a ∈ F \ {0}, V a is a closed a−1 Ka-module satisfying (2.4.1) (with K = a−1 Ka).
Lemma 2.4.5. The intersection of any collection of totally geodesic sets is totally geodesic.
Proof. Suppose that (Sα)_{α∈A} is a collection of totally geodesic sets, and suppose that S = ⋂_{α∈A} Sα ≠ ∅. Fix [z] ∈ S, and let z be a representative of [z]. Then for each α ∈ A, there exist (cf. Remark 2.4.4) an R-algebra Kα and a closed Kα-subspace Vα ≤ L satisfying (2.4.1) (with K = Kα) such that z ∈ Vα and Sα = [Vα] ∩ bord H.
Let K = ⋂_α Kα and V = ⋂_α Vα. Clearly, V is a K-module and satisfies (2.4.1).
We have [V ] ∩ bord H ⊆ S. To complete the proof, we must show the converse
direction. Fix [x] ∈ S \ {[z]}. By Observation 2.3.15, there exists a representative
x of [x] such that BQ (z, x) = 1. Then for each α, we may find aα ∈ F \ {0} such
that xaα ∈ Vα . We have
aα = BQ (z, x)aα = BQ (z, xaα ) ∈ Kα .
Since Vα is a Kα -module, this implies x ∈ Vα . Since α was arbitrary, x ∈ V , and
so [x] ∈ [V ] ∩ bord H.
Remark 2.4.6. Given K ⊆ bord H, Lemma 2.4.5 implies that there exists a
smallest totally geodesic set containing K. If we are only interested in the geometry
of K, then by Proposition 2.4.1 we can assume that this totally geodesic set is really
our ambient space. In such a situation, we may without loss of generality suppose
that there is no proper totally geodesic subset of bord H which contains K. In this
case we say that K is irreducible.
Warning. Although the intersection of any collection of totally geodesic sets
is totally geodesic, it is not necessarily the case that the decreasing intersection of
nontrivial totally geodesic sets is nontrivial; cf. Remark 11.2.19.
The main reason that totally geodesic sets are relevant to our development is
their relationship with the group of isometries. Specifically, we have the following:
Theorem 2.4.7. Let (gn)_{n=1}^∞ be a sequence in Isom(H), and let

(2.4.2)  S = {[x] ∈ bord H : gn([x]) → [x]}.

Then either S ⊆ ∂H and #(S) = 2, or S is a totally geodesic set.
Remark 2.4.8. An important example is the case where the sequence (gn)_{n=1}^∞ is constant, say gn = g for all n. Then S is precisely the fixed point set of g:

S = Fix(g) := {[x] ∈ bord H : g([x]) = [x]}.

If H is finite-dimensional, then it is possible to reduce Theorem 2.4.7 to this special case by a compactness argument.
Proof of Theorem 2.4.7. If S = ∅, then the statement is trivial. Suppose that S ≠ ∅, and fix [z] ∈ S.
Step 1: Choosing representatives Tn . From the proof of Theorem 2.3.3, we see
that each gn may be written in the form [Tn ] for some Tn ∈ O∗F (L; Q). We have
some freedom in choosing the representatives Tn ; specifically, given an ∈ S(F) we
may replace Tn by Tn Tan , where Tan is defined by (2.3.10).
Since gn ([z]) → [z], there exist representatives zn of gn ([z]) such that zn → z.
For each n, there is a unique representative Tn of gn such that
(Tn z)cn = zn for some cn ∈ R \ {0}.
Then
(Tn z)cn → z.
Remark 2.4.9. If F = Q, it may be necessary to choose Tn ∈ O∗F (L; Q) \
OF (L; Q), despite the fact that each gn can be represented by an element of
OF (L; Q).
Step 2: A totally geodesic set. Write σn = σTn, and let

K = {a ∈ F : σn(a) → a},
V = {x ∈ L : Tn x → x}.

Then K is an R-subalgebra of F, and V is a K-module. Given x, y ∈ V, by Observation 2.3.6 we have

σn(BQ(x, y)) = BQ(Tn x, Tn y) → BQ(x, y),

so BQ(x, y) ∈ K. Thus V satisfies (2.4.1). If V is closed, then the above observations show that [V] ∩ bord H is totally geodesic. However, this issue is a bit delicate:
Claim 2.4.10. If #([V ] ∩ bord H) ≥ 2, then V is closed.
Proof. Suppose that #([V] ∩ bord H) ≥ 2. The proof of Proposition 2.4.1 shows that [V] ∩ H ≠ ∅. Thus, there exists x ∈ V for which [x] ∈ H. In particular, gn([x]) → [x]. Letting o = [(1, 0)], we have

dH(o, gn(o)) ≤ 2dH(o, [x]) + dH([x], gn([x])) → 2dH(o, [x]).

In particular dH(o, gn(o)) is bounded, say dH(o, gn(o)) ≤ C.
Lemma 2.4.11. Fix T ∈ O∗F(L; Q), and let ‖T‖ denote the operator norm of T. Then

‖T‖ = e^{dH(o,[T](o))}.

Proof. Write T = Tj,t(A ⊕ I), where Tj,t is a Lorentz boost (cf. (2.3.3)) and A ∈ O∗F(H; E). Then

[T](o) = [Tj,t](o) = [(cosh(t), sinh(t), 0)].

Here the second entry represents the jth coordinate. In particular,

cosh dH(o, [T](o)) = |BQ((1, 0), (cosh(t), sinh(t), 0))| / √(|Q(1, 0)| · |Q(cosh(t), sinh(t), 0)|) = cosh(t)/1 = cosh(t).

On the other hand, Tj,t acts as the symmetric matrix

( cosh(t)  sinh(t) )
( sinh(t)  cosh(t) ) ⊕ I,

whose largest eigenvalue is cosh(t) + sinh(t) = e^t, so ‖T‖ = ‖Tj,t‖ = e^t. This completes the proof. ⊳
Thus ‖Tn‖ ≤ e^C for all n, and so the sequence (Tn)_{n=1}^∞ is equicontinuous. It follows that V is closed. ⊳
Since #([V ] ∩ bord H) ≤ 1 implies that [V ] ∩ bord H is totally geodesic, we conclude
that [V ] ∩ bord H is totally geodesic, regardless of whether or not V is closed.
Remark 2.4.12. When #([V ] ∩ bord H) ≤ 1, there seems to be no reason to
think that V should be closed.
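The operator-norm computation ‖T_{j,t}‖ = e^t from Lemma 2.4.11 is easy to confirm numerically in the real two-dimensional case. The following sketch (ours, not the text's) estimates the norm of the boost matrix by maximizing |Mv| over sampled unit vectors v:

```python
import math

def boost_norm(t, samples=20000):
    """Estimate the operator norm of the Lorentz boost matrix
    [[cosh t, sinh t], [sinh t, cosh t]] by maximizing |Mv| over unit vectors v.
    (The true maximum is attained at v = (1, 1)/sqrt(2), with value e^t.)"""
    c, s = math.cosh(t), math.sinh(t)
    best = 0.0
    for k in range(samples):
        a = 2 * math.pi * k / samples
        v1, v2 = math.cos(a), math.sin(a)
        best = max(best, math.hypot(c * v1 + s * v2, s * v1 + c * v2))
    return best

for t in (0.0, 0.5, 1.3, 2.0):
    assert abs(boost_norm(t) - math.exp(t)) < 1e-6
```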
Step 3: Relating S to [V] ∩ bord H. The object of this step is to show that S = [V] ∩ bord H unless S ⊆ ∂H and #(S) ≤ 2. For each [x] ∈ S \ {[z]}, let x be a representative of [x] such that BQ(z, x) = 1; this is possible by Observation 2.3.15. It is possible to choose a sequence of scalars (a_n^([x]))_{n=1}^∞ in F \ {0} such that (Tn x)a_n^([x]) → x. Let a_n^([z]) = cn. For [x], [y] ∈ S, we have

(2.4.3)  ā_n^([x]) σTn(BQ(x, y)) a_n^([y]) = ā_n^([x]) BQ(Tn x, Tn y) a_n^([y])  (by Observation 2.3.6)
          = BQ((Tn x)a_n^([x]), (Tn y)a_n^([y])) → BQ(x, y).

In particular,

(2.4.4)  |a_n^([x])| · |a_n^([y])| → 1 whenever BQ(x, y) ≠ 0.
Claim 2.4.13. Unless S ⊆ ∂H and #(S) ≤ 2, for all [x] ∈ S we have

(2.4.5)  |a_n^([x])| → 1.

Proof. We first observe that it suffices to demonstrate (2.4.5) for one value of x; if (2.4.5) holds for x and [y] ≠ [x], then BQ(x, y) ≠ 0 by Observation 2.3.15, and so (2.4.4) implies |a_n^([y])| → 1.
Now suppose that S ⊈ ∂H, and choose [x] ∈ S ∩ H. Then BQ(x, x) ≠ 0, and so (2.4.4) implies (2.4.5).
Finally, suppose that #(S) ≥ 3, and choose [x], [y], [z] ∈ S distinct. By (2.4.4) together with Observation 2.3.15, we have |a_n^([x])| · |a_n^([y])| → 1, |a_n^([x])| · |a_n^([z])| → 1, and |a_n^([y])| · |a_n^([z])| → 1. Multiplying the first two formulas and dividing by the third, we see that |a_n^([x])|² → 1, and hence |a_n^([x])| → 1. ⊳
For the remainder of the proof we assume that either S ⊈ ∂H or #(S) ≥ 3.
Plugging x = z into (2.4.5), we see that cn → 1. In particular, [z] ∈ [V] ∩ bord H. Now fix [x] ∈ S \ {[z]}. Since cn → 1 and BQ(z, x) = 1, (2.4.3) becomes

a_n^([x]) → 1.

Thus x ∈ V, and so [x] ∈ [V] ∩ bord H.
2.5. Other models of hyperbolic geometry
Fix F ∈ {R, C, Q} and a set J, and let H = HJF . The pair (H, bord H) is known
as the hyperboloid model of hyperbolic geometry (over the division algebra F and
in dimension #(J)). In this section we discuss two other important models of
hyperbolic geometry. Note that the Poincaré ball model, which many of the figures
of later chapters are drawn in, is not discussed here. References for this section
include [45, 78].
2.5.1. The (Klein) ball model. Let

B = B_F^J = {x ∈ H := H_F^J : ‖x‖ < 1},

and let bord B denote the closure of B relative to H.
Observation 2.5.1. The map eB,H : bord B → bord H defined by the equation
eB,H (x) = [(1, x)]
is a homeomorphism, and eB,H (B) = H. Thus if we let
(2.5.1)  cosh dB(x, y) = cosh dH(eB,H(x), eB,H(y)) = |1 − BE(x, y)| / (√(1 − ‖x‖²) · √(1 − ‖y‖²)),

then eB,H is an isomorphism between (B, bord B) and (H, bord H).
The pair (B, bord B) is called the ball model of hyperbolic geometry. It is often
convenient for computations, especially those for which a single point plays an
important role: by Observation 2.3.2, such a point can be moved to the origin
0 ∈ B via an isomorphism of (B, bord B).
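As a concrete illustration (ours, not the text's): in the real case, (2.5.1) gives cosh dB(0, y) = 1/√(1 − ‖y‖²), i.e. dB(0, y) = artanh ‖y‖. The following sketch verifies this numerically:

```python
import math

def cosh_dB(x, y):
    """(2.5.1) in the real case, where B_E(x, y) is the Euclidean inner product."""
    dot = sum(a * b for a, b in zip(x, y))
    nx2 = sum(a * a for a in x)
    ny2 = sum(b * b for b in y)
    return abs(1 - dot) / math.sqrt((1 - nx2) * (1 - ny2))

# distance to the origin in the Klein ball model: d_B(0, y) = artanh(|y|)
y = (0.3, -0.4, 0.5)
r = math.sqrt(sum(b * b for b in y))
d = math.acosh(cosh_dB((0.0, 0.0, 0.0), y))
assert abs(d - math.atanh(r)) < 1e-9
```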
Remark 2.5.2. We should warn that the ball model B_R^J of real hyperbolic geometry is not the same as the well-known Poincaré model; rather, it is the same as the Klein model.
Observation 2.5.3. For all T ∈ O∗F(H; E), T ↿ B is an isometry which stabilizes 0.
Proposition 2.5.4. In fact,
Stab(Isom(B); 0) = {T ↿ B : T ∈ O∗F (H; E)}.
Proof. This is an immediate consequence of Theorem 2.3.3.
2.5.2. The half-space model. Now suppose F = R.⁷ Assume that 1 ∈ J, and let

E = E^J = {x ∈ H := H_F^J : x1 > 0}.
We will view E as resting inside the larger space

Ĥ := H ∪ {∞}.

The topology on Ĥ is defined as follows: a subset U ⊆ Ĥ is open if and only if U ∩ H is open and (∞ ∈ U ⇒ H \ U is bounded).
The boundary and closure of E will be subsets of Ĥ according to the topology defined above, i.e.

∂E = {x ∈ H : x1 = 0} ∪ {∞},
bord E = {x ∈ H : x1 ≥ 0} ∪ {∞}.
Proposition 2.5.5. The map eE,H : bord E → bord H defined by the formula

(2.5.2)  eE,H(x) = [(y_i)_{i∈J∪{0}}], where y0 = 1 + ‖x‖², y1 = 1 − ‖x‖², and y_i = 2x_i for i ≠ 0, 1, if x ≠ ∞; and eE,H(∞) = [(1, −1, 0)],

is a homeomorphism, and eE,H(E) = H. Thus if we let

(2.5.3)  cosh dE(x, y) = cosh dH(eE,H(x), eE,H(y)) = 1 + ‖y − x‖² / (2x1y1),

then eE,H is an isomorphism between (E, bord E) and (H, bord H).

⁷The appropriate analogue of the half-space model when F ∈ {C, Q} is the paraboloid model; see e.g. [78, Chapter 4].
Proof. For x ∈ bord E \ {∞},

Q(eE,H(x)) = −(1 + ‖x‖²)² + (1 − ‖x‖²)² + Σ_{i∈J\{1}} (2x_i)² = −4x1².

It follows that eE,H(E) ⊆ H and eE,H(∂E) ⊆ ∂H. Calculation verifies that the map

(2.5.4)  eH,E([x]) = (y_i)_{i∈J}, where y_i = x_i/2 for i ≠ 1 and y1 = √(−Q(x))/2, if the representative x of [x] satisfies x0 + x1 = 2; and eH,E([x]) = ∞ if x = (1, −1, 0),

is both a left and a right inverse of eE,H. Notice that it is defined in a way such that for each [x] ∈ bord H, there is a unique representative x of [x] for which the formula (2.5.4) makes sense. We leave it to the reader to verify that eE,H and eH,E are both continuous, and that (2.5.3) holds.
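The identity (2.5.3) can be confirmed numerically from the coordinate formula (2.5.2). The following sketch (ours; real case with #(J) = 3, so a point of E is a triple (x1, x2, x3) with height x1) compares the hyperboloid-model distance of the images with the half-space formula:

```python
import math, random

def e_EH(x):
    """A sketch of (2.5.2): for x != infinity the image point is
    [(1 + |x|^2, 1 - |x|^2, 2*x_2, 2*x_3, ...)], where x = (x_1, x_2, ...)."""
    n2 = sum(a * a for a in x)
    return [1 + n2, 1 - n2] + [2 * a for a in x[1:]]

def Q(v):   # the quadratic form: Q(v) = -v_0^2 + v_1^2 + v_2^2 + ... (real case)
    return -v[0] * v[0] + sum(a * a for a in v[1:])

def BQ(v, w):  # the associated bilinear form
    return -v[0] * w[0] + sum(a * b for a, b in zip(v[1:], w[1:]))

def cosh_dH(v, w):  # hyperboloid-model distance
    return abs(BQ(v, w)) / math.sqrt(abs(Q(v)) * abs(Q(w)))

random.seed(1)
for _ in range(100):
    x = (random.uniform(0.5, 3), random.uniform(-3, 3), random.uniform(-3, 3))
    y = (random.uniform(0.5, 3), random.uniform(-3, 3), random.uniform(-3, 3))
    half_space = 1 + sum((b - a) ** 2 for a, b in zip(x, y)) / (2 * x[0] * y[0])
    assert abs(cosh_dH(e_EH(x), e_EH(y)) - half_space) < 1e-9 * half_space
```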
The point ∞ ∈ ∂E, corresponding to the point [(1, −1, 0)] ∈ ∂H, plays a special
role in the half-space model. In fact, the half-space model can be thought of as an
attempt to understand the geometry of hyperbolic space when a single point on the
boundary is fixed. Consequently, we are less interested in the set of all isometries
of E than simply the set of all isometries which fix ∞.
Observation 2.5.6 (Poincaré extension). Let B = ∂E \ {∞} = H_R^{J\{1}}, and let g : B → B be a similarity, i.e. a map of the form

g(x) = λT x + b,

where λ > 0, T ∈ O_R(B; E), and b ∈ B. Then the map ĝ : bord E → bord E defined by the formula

(2.5.5)  ĝ(x) = (λx1, g(π(x))) if x ≠ ∞, and ĝ(∞) = ∞,

is an isomorphism of (E, bord E); in particular, ĝ ↿ E is an isometry of E. Here π : H → B is the natural projection.
Proof. This is immediate from (2.5.3).
The isometry gb defined by (2.5.5) is called the Poincaré extension of g to E.
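That ĝ ↿ E is indeed an isometry can be seen directly from (2.5.3): g scales distances in B by λ, while the heights also scale by λ, so the ratio in (2.5.3) is unchanged. A numerical sketch (ours; we take B = R², so E sits in R³, and use rotations R(θ) as the orthogonal maps):

```python
import math, random

def cosh_dE(x, y):
    """(2.5.3): the half-space metric; x[0] plays the role of the height x_1."""
    return 1 + sum((b - a) ** 2 for a, b in zip(x, y)) / (2 * x[0] * y[0])

def g_hat(x, lam, theta, b):
    """A sketch of the Poincare extension (2.5.5) of the planar similarity
    g(u) = lam * R(theta) u + b."""
    c, s = math.cos(theta), math.sin(theta)
    u1, u2 = x[1], x[2]
    return (lam * x[0],
            lam * (c * u1 - s * u2) + b[0],
            lam * (s * u1 + c * u2) + b[1])

random.seed(2)
lam, theta, b = 1.7, 0.6, (2.0, -1.0)
for _ in range(100):
    x = (random.uniform(0.5, 3), random.uniform(-3, 3), random.uniform(-3, 3))
    y = (random.uniform(0.5, 3), random.uniform(-3, 3), random.uniform(-3, 3))
    assert abs(cosh_dE(x, y) -
               cosh_dE(g_hat(x, lam, theta, b), g_hat(y, lam, theta, b))) < 1e-9
```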
Remark 2.5.7. Intuitively we shall think of the number x1 as representing the height of a point x ∈ bord E. Then (2.5.5) says that if g : B → B is an isometry, then the Poincaré extension of g is an isometry of E which preserves the heights of points.
Proposition 2.5.8. For all g ∈ Isom(E) such that g(∞) = ∞, there exists a similarity h : B → B such that g = ĥ.
Proof. By Theorem 2.3.3, there exists T ∈ O(L; Q) such that [T] = eE,H ∘ g ∘ eE,H^{−1}. This gives an explicit formula for g, and one must check that if [T] preserves [(1, −1, 0)], then g is a Poincaré extension.
2.5.3. Transitivity of the action of Isom(H) on ∂H. Using the ball and
half-space models of hyperbolic geometry, it becomes easy to prove the following
assertion:
Proposition 2.5.9. If F = R, the group Isom(H) acts triply transitively on ∂H.
This complements the fact that Isom(H) acts transitively on H (Observation
2.3.2).
Proof. By Observation 2.5.1 and Proposition 2.5.5, we may switch between
models as convenient. It is clear that Isom(B) acts transitively on ∂B, and that
Stab(Isom(E); ∞) acts doubly transitively on ∂E \ {∞}. Therefore given any triple
(ξ1 , ξ2 , ξ3 ), we may conjugate to B, conjugate ξ1 to a standard point, conjugate to
E while conjugating ξ1 to ∞, and then conjugate ξ2 , ξ3 to standard points.
We end this chapter with a convention:
Convention 6. When α is a cardinal number, H_F^α will denote H_F^J for any set J of cardinality α, but particularly J = {1, . . . , n} if α = n ∈ N and J = N if α = #(N). Moreover, H_F^∞ will always be used to denote H_F^{#(N)} = H_F^N, the unique (up to isomorphism) infinite-dimensional separable algebraic hyperbolic space defined over F. Finally, real hyperbolic spaces will be denoted without using R as a subscript, e.g. H^∞ = H_R^∞, B^J = B_R^J, H^α = H_R^α.
CHAPTER 3
R-trees, CAT(-1) spaces, and Gromov hyperbolic
metric spaces
In this chapter we review the theory of “negative curvature” in general metric
spaces. A good reference for this subject is [39]. We begin by defining the class of R-trees, the main class of examples we will talk about in this monograph other than the
class of algebraic hyperbolic spaces, which we will discuss in more detail in Chapter
14. Next we will define CAT(-1) spaces, which are geodesic metric spaces whose
triangles are “thinner” than the corresponding triangles in two-dimensional real
hyperbolic space H2 . Both algebraic hyperbolic spaces and R-trees are examples of
CAT(-1) spaces. The next level of generality considers Gromov hyperbolic metric
spaces. After defining these spaces, we proceed to define the boundary ∂X of a
hyperbolic metric space X, introducing the families of so-called visual metametrics
and extended visual metrics on the bordification bord X := X ∪ ∂X. We show that
the bordification of an algebraic hyperbolic space X is isomorphic to its closure
bord X defined in Chapter 2; under this isomorphism, the visual metric on ∂Bα is
proportional to the Euclidean metric.
3.1. Graphs and R-trees
To motivate the definition of R-trees we begin by defining simplicial trees, which
requires first defining graphs.
Definition 3.1.1. A weighted undirected graph is a triple (V, E, ℓ), where V
is a nonempty set, E ⊆ V × V \ {(x, x) : x ∈ V } is invariant under the map
(x, y) 7→ (y, x), and ℓ : E → (0, ∞) is also invariant under (x, y) → (y, x). (If ℓ ≡ 1,
the graph is called unweighted, and can be denoted simply (V, E).) The graph is
called connected if for all x, y ∈ V , there exist x = z0 , z1 , . . . , zn = y such that
(zi , zi+1 ) ∈ E for all i = 0, . . . , n − 1. If (V, E, ℓ) is connected, then the path metric
on V is the metric
(3.1.1)  d_{E,ℓ}(x, y) := inf { Σ_{i=0}^{n−1} ℓ(z_i, z_{i+1}) : z0 = x, zn = y, (z_i, z_{i+1}) ∈ E ∀i = 0, . . . , n − 1 }.
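The infimum in (3.1.1) is exactly a shortest-path computation, so on a finite graph the path metric can be computed with Dijkstra's algorithm. A minimal sketch (ours, not the text's; the representation of E and ℓ as a single dict is an assumption of the example):

```python
import heapq

def path_metric(V, E_ell, x, y):
    """d_{E,ell}(x, y) from (3.1.1), computed by Dijkstra's algorithm.
    E_ell maps an edge (v, w) to its length ell(v, w); edges appear symmetrically."""
    dist = {v: float("inf") for v in V}
    dist[x] = 0.0
    heap = [(0.0, x)]
    while heap:
        d, v = heapq.heappop(heap)
        if d > dist[v]:
            continue  # stale heap entry
        for (a, b), l in E_ell.items():
            if a == v and d + l < dist[b]:
                dist[b] = d + l
                heapq.heappush(heap, (dist[b], b))
    return dist[y]

# a small star-shaped graph: a center C joined to p, q, r
V = {"C", "p", "q", "r"}
E_ell = {}
for v, l in (("p", 1.0), ("q", 2.0), ("r", 3.0)):
    E_ell[("C", v)] = E_ell[(v, "C")] = l
assert path_metric(V, E_ell, "p", "q") == 3.0
assert path_metric(V, E_ell, "q", "r") == 5.0
```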
The geometric realization of the graph (V, E, ℓ) is the metric space

X = X(V, E, ℓ) = ( V ∪ ⋃_{(v,w)∈E} [0, ℓ(v, w)] ) / ∼,

where ∼ represents the following identifications:

v ∼ ((v, w), 0) ∀(v, w) ∈ E,
((v, w), t) ∼ ((w, v), ℓ(v, w) − t) ∀(v, w) ∈ E ∀t ∈ [0, ℓ(v, w)],

and the metric d on X is given by

d(((v0, v1), t), ((w0, w1), s)) = min_{i,j∈{0,1}} { |t − iℓ(v0, v1)| + d(v_i, w_j) + |s − jℓ(w0, w1)| }.
(The geometric realization of a graph is sometimes also called a graph. In the
sequel, we shall call it a geometric graph.)
Example 3.1.2 (The Cayley graph of a group). Let Γ be a group, and let
E0 ⊆ Γ be a generating set. (In most circumstances E0 will be finite; there is an
exception in Example 13.3.2 below.) Assume that E0 = E0−1 . The Cayley graph of
Γ with respect to the generating set E0 is the unweighted graph (Γ, E), where
(3.1.2)
(γ, β) ∈ E ⇔ γ −1 β ∈ E0 .
More generally, if ℓ0 : E0 → (0, ∞) satisfies ℓ0 (g −1 ) = ℓ0 (g), the weighted Cayley
graph of Γ with respect to the pair (E0 , ℓ0 ) is the graph (Γ, E, ℓ), where E is defined
by (3.1.2), and
(3.1.3)
ℓ(γ, β) = ℓ0 (γ −1 β).
The path metric of a Cayley graph is called a Cayley metric.
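For a finite group, the (unweighted) Cayley metric of (3.1.2) can be computed by breadth-first search. A sketch (ours, not the text's), using Γ = Z/12 with symmetric generating set {±1, ±3}, which also checks that a left translation preserves distances:

```python
from collections import deque

def cayley_distance(mul, gens, gamma, beta):
    """BFS word metric: (g, h) is an edge of the Cayley graph (3.1.2)
    iff g^{-1} h is a generator, i.e. h = g * s for some generator s."""
    dist = {gamma: 0}
    queue = deque([gamma])
    while queue:
        g = queue.popleft()
        for s in gens:
            h = mul(g, s)
            if h not in dist:
                dist[h] = dist[g] + 1
                queue.append(h)
    return dist.get(beta)

mul = lambda a, b: (a + b) % 12        # the group Z/12
gens = [1, 11, 3, 9]                   # {+-1, +-3}, closed under inversion
d0 = lambda b: cayley_distance(mul, gens, 0, b)
assert d0(6) == 2                      # 3 + 3
assert d0(5) == 3                      # e.g. 1 + 1 + 3
# left translations beta -> gamma*beta are isometries of the Cayley metric
assert cayley_distance(mul, gens, 7, (7 + 5) % 12) == d0(5)
```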
Remark 3.1.3. The equations (3.1.2), (3.1.3) guarantee that for each γ ∈ Γ,
the map Γ ∋ β → γβ ∈ Γ is an isometry of Γ with respect to any Cayley metric.
This isometry extends in a unique way to an isometry of the geometric Cayley
graph X = X(Γ, E, ℓ). The map sending γ to this isometry is a homomorphism
from Γ to Isom(X), and is called the natural action of Γ on X.
Remark 3.1.4. The path metric (3.1.1) satisfies the following universal property: if Y is a metric space and if φ : V → Y satisfies d(φ(v), φ(w)) ≤ ℓ(v, w) for every (v, w) ∈ E, then d(φ(v), φ(w)) ≤ d_{E,ℓ}(v, w) for every v, w ∈ V.
Remark 3.1.5. The main difference between the metric space (V, dE,ℓ ) and the
geometric graph X = X(V, E, ℓ) is that the latter is a geodesic metric space. A
metric space X is said to be geodesic if for every p, q ∈ X, there exists an isometric
embedding π : [t, s] → X such that π(t) = p and π(s) = q, for some t, s ∈ R. The
set π([t, s]) is denoted [p, q] and is called a geodesic segment connecting p and q.
The map π is called a parameterization of the geodesic segment [p, q]. (Note that
although [q, p] = [p, q], π is not a parameterization of [q, p].)
Warning: Although we may denote any geodesic segment connecting p and q
by [p, q], such a geodesic segment is not necessarily unique. A geodesic metric space
X is called uniquely geodesic if for every p, q ∈ X, the geodesic segment connecting
p and q is unique.
Notation 3.1.6. If π : [0, t0 ] → X is a parameterization of the geodesic seg-
ment [p, q], then for each t ∈ [0, t0 ], [p, q]t denotes the point π(t), i.e. the unique
point on the geodesic segment [p, q] such that d(p, [p, q]t ) = t.
We are now ready to define the class of simplicial trees. Let (V, E, ℓ) be a
weighted undirected graph. A cycle in (V, E, ℓ) is a finite sequence of distinct
vertices v1 , . . . , vn ∈ V , with n ≥ 3, such that
(3.1.4)
(v1 , v2 ), (v2 , v3 ), . . . , (vn−1 , vn ), (vn , v1 ) ∈ E.
Definition 3.1.7. A simplicial tree is the geometric realization of a weighted
undirected graph with no cycles. A Z-tree (or unweighted simplicial tree, or just
tree 1) is the geometric realization of an unweighted undirected graph with no cycles.
Example 3.1.8. Let F2 (Z) denote the free group on two elements γ1 , γ2 . Let
E0 = {γ1 , γ1−1 , γ2 , γ2−1 }. The geometric Cayley graph of F2 (Z) with respect to the
generating set E0 is an unweighted simplicial tree.
Example 3.1.9. Let V = {C, p, q, r}, and fix ℓp , ℓq , ℓr > 0. Let
E = {(C, x), (x, C) : x = p, q, r},
ℓ(C, x) = ℓ(x, C) = ℓx .
The geometric realization of the graph (V, E, ℓ) is a simplicial tree; see Figure
3.1.1. It will be denoted ∆ = ∆(p, q, r), and will be called a tree triangle. For
x, y ∈ {p, q, r} distinct, the distance between x and y is given by
d(x, y) = ℓx + ℓy .
Solving for ℓp in terms of d(p, q), d(p, r), d(q, r) gives

(3.1.5)  ℓp = d(p, C) = (1/2)[d(p, q) + d(p, r) − d(q, r)].
Definition 3.1.10. A metric space X is an R-tree if for all p, q, r ∈ X, there
exists a tree triangle ∆ = ∆(p, q, r) and an isometric embedding ι : ∆ → X sending
p, q, r to p, q, r, respectively.
1However, in [167], the word “trees” is used to refer to what are now known as R-trees.
Definition 3.1.11. Let X be an R-tree, fix p, q, r ∈ X, and let ι : ∆ → X
be as above. The point C = C(p, q, r) := ι(C) is called the center of the geodesic
triangle ∆ = ∆(p, q, r).
As the name suggests, every simplicial tree is an R-tree; the converse does not
hold; see e.g. [51, Example on p.50]. Before we can prove that every simplicial tree
is an R-tree, we will need a lemma:
Lemma 3.1.12 (Cf. [51, p.29]). Let X be a metric space. The following are
equivalent:
(A) X is an R-tree.
(B) There exists a collection of geodesics G, with the following properties:
(BI) For each x, y ∈ X, there is a geodesic [x, y] ∈ G connecting x and y.
(BII) Given [x, y] ∈ G and z, w ∈ [x, y], we have [z, w] ∈ G, where [z, w] is
interpreted as the set of points in [x, y] which lie between z and w.
(BIII) Given x1 , x2 , x3 ∈ X distinct and geodesics [x1 , x2 ], [x1 , x3 ], [x2 , x3 ] ∈
G, at least one pair of the geodesics [xi , xj ], i 6= j, has a nontrivial
intersection. More precisely, there exist distinct i, j, k ∈ {1, 2, 3} such
that
[xi, xj] ∩ [xi, xk] ⊋ {xi}.
Proof of (A) ⇒ (B). Note that (BI) and (BII) are true for any uniquely
geodesic metric space. Given x1 , x2 , x3 distinct, let C be the center. Then xi 6= C
for some i; without loss of generality x1 6= C. Then
[x1, x2] ∩ [x1, x3] = [x1, C] ⊋ {x1}.
Proof of (B) ⇒ (A). We first show that given points x1, x2, x3 ∈ X and geodesics [x1, x2], [x1, x3], [x2, x3] ∈ G, the intersection ⋂_{i≠j}[xi, xj] is nonempty. Indeed, suppose not. For i = 2, 3 let γi : [0, d(x1, xi)] → X be a parameterization of [x1, xi], and let

t1 = max{t ≥ 0 : γ2(t) = γ3(t)}.

By replacing x1 with γ2(t1) = γ3(t1) and using (BII), we may without loss of generality assume that t1 = 0, or equivalently that [x1, x2] ∩ [x1, x3] = {x1}. Similarly, we may without loss of generality assume that [x2, x1] ∩ [x2, x3] = {x2} and [x3, x1] ∩ [x3, x2] = {x3}. But then (BIII) implies that x1, x2, x3 cannot be all distinct. This immediately implies that ⋂_{i≠j}[xi, xj] ≠ ∅.
To complete the proof, we must show that X is uniquely geodesic. Indeed
suppose that for some x1 , x2 ∈ X, there is more than one geodesic connecting x1
and x2 . Let [x1 , x2 ] ∈ G be a geodesic connecting x1 and x2 , and let [x1 , x2 ]′ be
Figure 3.1.1. A geodesic triangle in an R-tree
another geodesic connecting x1 and x2. Then there exists x3 ∈ [x1, x2]′ \ [x1, x2]. By the above paragraph, there exists w ∈ ⋂_{i≠j}[xi, xj]. Since w ∈ [xi, x3], we have

(3.1.6)  d(xi, w) ≤ d(xi, x3).
On the other hand, since w ∈ [x1 , x2 ] and x3 ∈ [x1 , x2 ]′ , we have
d(x1 , x2 ) = d(x1 , w) + d(x2 , w) ≤ d(x1 , x3 ) + d(x2 , x3 ) = d(x1 , x2 ).
It follows that equality holds in (3.1.6), i.e. d(xi , w) = d(xi , x3 ). Since w ∈ [xi , x3 ],
this implies w = x3 . But then x3 = w ∈ [x1 , x2 ], a contradiction.
Corollary 3.1.13. Every simplicial tree is an R-tree.
Proof. Let X = X(V, E, ℓ) be a simplicial tree, and let G be the collection
of all geodesics; then (BI) and (BII) both hold. By contradiction, suppose that
there exist points x1 , x2 , x3 ∈ X such that [xi , xj ] ∩ [xi , xk ] = {xi } for all distinct
i, j, k ∈ {1, 2, 3}. Then the path [x1 , x2 ] ∪ [x2 , x3 ] ∪ [x3 , x1 ] is equal to the union of
the edges of a cycle of the graph (V, E, ℓ). This is a contradiction.
We shall investigate R-trees in more detail in Chapter 14, where we will give
various examples of R-trees together with groups acting isometrically on them.
3.2. CAT(-1) spaces
The following definitions have been modified from [39, p.158], to which the
reader is referred for more details.
A geodesic triangle in X consists of three points p, q, r ∈ X (the vertices of
the triangle) together with a choice of three geodesic segments [p, q], [q, r], and
[r, p] joining them (the sides). Such a geodesic triangle will be denoted ∆(p, q, r),
although we note that this could cause ambiguity if X is not uniquely geodesic.
Although formally ∆(p, q, r) is an element of X 3 ×P(X)3 , we will sometimes identify
∆(p, q, r) with the set [p, q] ∪ [q, r] ∪ [r, p] ⊆ X, writing x ∈ ∆(p, q, r) if x ∈ [p, q] ∪
[q, r] ∪ [r, p].
28
3. R-TREES, CAT(-1) SPACES, AND GROMOV HYPERBOLIC METRIC SPACES
A triangle ∆̄ = ∆(p̄, q̄, r̄) in H2 is called a comparison triangle for ∆ = ∆(p, q, r)
if d(p̄, q̄) = d(p, q), d(q̄, r̄) = d(q, r), and d(p̄, r̄) = d(p, r). Any triangle admits a
comparison triangle, unique up to isometry. For any point x ∈ [p, q], we define its
comparison point x̄ ∈ [p̄, q̄] to be the unique point such that d(x̄, p̄) = d(x, p) and
d(x̄, q̄) = d(x, q). In the notation above, the comparison point of [p, q]t is equal to
[p̄, q̄]t for all t ∈ [0, d(p, q)] = [0, d(p̄, q̄)]. For x ∈ [q, r] and x ∈ [r, p], the comparison
point is defined similarly.
Let X be a metric space and let ∆ be a geodesic triangle in X. We say that ∆
satisfies the CAT(-1) inequality if for all x, y ∈ ∆,
(3.2.1)
d(x, y) ≤ d(x̄, ȳ),
where x̄ and ȳ are any² comparison points for x and y, respectively. Intuitively, ∆
satisfies the CAT(-1) inequality if it is “thinner” than its comparison triangle ∆̄.
Definition 3.2.1. X is a CAT(-1) space if it is a geodesic metric space and if
all of its geodesic triangles satisfy the CAT(-1) inequality.
Observation 3.2.2 ([39, Proposition II.1.4(1)]). CAT(-1) spaces are uniquely
geodesic.
Proof. Let X be a CAT(-1) space, and suppose that two points p, q ∈ X are
connected by two geodesic segments [p, q] and [p, q]′ . Fix t ∈ [0, d(p, q)] and let
x = [p, q]t , x′ = [p, q]′t . Consider the triangle ∆(p, q, x) determined by the geodesic
segments [p, q]′ , [p, x], and [x, q], and a comparison triangle ∆̄(p̄, q̄, x̄). Then x and
x′ have the same comparison point x̄, so by the CAT(-1) inequality
d(x, x′ ) ≤ d(x̄, x̄) = 0,
and thus x = x′ . Since t was arbitrary, it follows that [p, q] = [p, q]′ . Since [p, q]′
was arbitrary, [p, q] is the unique geodesic segment connecting p and q.
3.2.1. Examples of CAT(-1) spaces. In this text we concentrate on two
main examples of CAT(-1) spaces: algebraic hyperbolic spaces and R-trees. We
therefore begin by proving the following result which implies that algebraic hyperbolic spaces are CAT(-1):
Proposition 3.2.3. Any Riemannian manifold (finite- or infinite-dimensional)
with sectional curvature bounded above by −1 is a CAT(-1) space.
Proof. The finite-dimensional case is proven in [39, Theorem II.1A.6]. The
infinite-dimensional case follows upon augmenting the finite-dimensional proof with
the infinite-dimensional Cartan–Hadamard theorem [119, IX, Theorem 3.8] to
guarantee surjectivity of the exponential map.
²The comparison points x̄ and ȳ may not be uniquely determined if either x or y lies on two sides
of the triangle simultaneously.
Since algebraic hyperbolic spaces have sectional curvature bounded between
−4 and −1 (e.g. [93, Corollary of Proposition 4]; see also [147, Lemmas 2.3, 2.7,
and 2.11]), the following corollary is immediate:
Corollary 3.2.4. Every algebraic hyperbolic space is a CAT(-1) space.
Remark 3.2.5. One can prove Corollary 3.2.4 without using the full strength
of Proposition 3.2.3. Indeed, any geodesic triangle in an algebraic hyperbolic space
is isometric to a geodesic triangle in H2F for some F ∈ {R, C, Q}. Since H2F is
finite-dimensional, thinness of its geodesic triangles follows from the finite-dimensional
version of Proposition 3.2.3.
Observation 3.2.6. R-trees are CAT(-1).
Proof. First of all, an argument similar to the proof of Observation 3.2.2
shows that R-trees are uniquely geodesic, justifying Figure 3.1.1. In particular,
if ∆(p, q, r) is a geodesic triangle in an R-tree and if C = C(p, q, r) then [p, q] =
[p, C]∪[q, C], [q, r] = [q, C]∪[r, C], and [r, p] = [r, C]∪[p, C]. It follows that any two
points x, y ∈ ∆ share a side in common, without loss of generality say x, y ∈ [p, q].
Then
d(x, y) = d(p, q) − d(x, p) − d(y, q) = d(p̄, q̄) − d(x̄, p̄) − d(ȳ, q̄) ≤ d(x̄, ȳ).
In a sense R-trees are the “most negatively curved spaces”; although we did
not define the notion of a CAT(κ) space, R-trees are CAT(κ) for every κ ∈ R.
3.3. Gromov hyperbolic metric spaces
We now come to the theory of Gromov hyperbolic metric spaces. In a sense,
Gromov hyperbolic metric spaces are those which are “approximately R-trees”. A
good reference for this section is [172].
For any three numbers dpq , dqr , drp ≥ 0 satisfying the triangle inequality, there
exists an R-tree X and three points p, q, r ∈ X such that d(p, q) = dpq , etc. Thus
in some sense looking at triples “does not tell you” that you are looking at an
R-tree. Now let us look at quadruples. A quadruple (p, q, r, s) in an R-tree X looks
something like Figure 3.3.1. Of course, the points p, q, r, s ∈ X could be arranged
in any order. However, let us consider them the way that they are arranged in
Figure 3.3.1 and note that
Figure 3.3.1. A quadruple of points p, q, r, s in an R-tree
(3.3.1)
C(p, q, r) = C(p, q, s).
In order to write this equality in terms of distances, we need some way of measuring
the distance from the vertex of a geodesic triangle to its center.
Observation 3.3.1. If ∆(p, q, r) is a geodesic triangle in an R-tree then the
distance from the vertex p to the center C(p, q, r), i.e. d(p, C(p, q, r)), is equal to
(3.3.2)
hq|rip := ½ [d(p, q) + d(p, r) − d(q, r)].
The expression hq|rip is called the Gromov product of q and r with respect to
p, and it makes sense in any metric space. It can be thought of as measuring the
“defect in the triangle inequality”; indeed, the triangle inequality is exactly what
assures that hq|rip ≥ 0 for all p, q, r ∈ X.
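As a sanity check, the tripod construction implicit in Observation 3.3.1 can be computed directly: the leg at each vertex is the Gromov product of the other two vertices with respect to it. A minimal Python sketch (ours; the function name is illustrative):

```python
def tripod_legs(dpq, dqr, drp):
    """Legs of the tripod (an R-tree) realizing three distances satisfying
    the triangle inequality; each leg is a Gromov product as in (3.3.2)."""
    leg_p = (dpq + drp - dqr) / 2  # <q|r>_p
    leg_q = (dpq + dqr - drp) / 2  # <p|r>_q
    leg_r = (dqr + drp - dpq) / 2  # <p|q>_r
    return leg_p, leg_q, leg_r

lp, lq, lr = tripod_legs(7, 9, 8)
print(lp, lq, lr)        # 3.0 4.0 5.0
print(lp + lq, lq + lr)  # 7.0 9.0 -- adjacent legs reassemble the distances
```

The triangle inequality is exactly the condition that all three legs are nonnegative, matching the remark above.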
Now (3.3.1) implies that
hq|rip = hq|sip ≤ hr|sip .
(The last inequality does not follow from (3.3.1) but it may be seen from Figure
3.3.1.) However, since the arrangement of points was arbitrary we do not know
which two Gromov products will be equal and which one will be larger. An inequality which captures all possibilities is
(3.3.3)
hq|rip ≥ min(hq|sip , hr|sip ).
Now, as mentioned before, we will define hyperbolic metric spaces as those which
are “approximately R-trees”. Thus they will satisfy (3.3.3) with an asymptotic.
Definition 3.3.2. A metric space X is called hyperbolic (or Gromov hyperbolic)
if for every four points x, y, z, w ∈ X we have
(3.3.4)
hx|ziw &+ min(hx|yiw , hy|ziw ).
We refer to (3.3.4) as Gromov’s inequality.
From the above discussion, every R-tree is Gromov hyperbolic with an implied
constant of 0 in (3.3.4). (This can also be deduced from Proposition 3.3.4 below.)
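For a finite metric space, the optimal implied constant in (3.3.4) can be found by brute force over all quadruples. The following Python sketch (ours, not from the text) confirms that a tripod metric satisfies Gromov's inequality exactly, while the vertices of a 4-cycle with the graph metric do not:

```python
import itertools

def gromov_product(d, x, y, w):
    """Gromov product <x|y>_w = (d(x,w) + d(y,w) - d(x,y)) / 2."""
    return (d[x][w] + d[y][w] - d[x][y]) / 2

def four_point_delta(d):
    """Smallest delta with <x|z>_w >= min(<x|y>_w, <y|z>_w) - delta for all
    quadruples, i.e. the worst-case defect in Gromov's inequality (3.3.4)."""
    pts = list(d)
    return max(min(gromov_product(d, x, y, w), gromov_product(d, y, z, w))
               - gromov_product(d, x, z, w)
               for x, y, z, w in itertools.product(pts, repeat=4))

# A tripod: center c joined to leaves p, q, r by legs of lengths 1, 2, 3.
leg = {"c": 0, "p": 1, "q": 2, "r": 3}
tripod = {a: {b: 0 if a == b else leg[a] + leg[b] for b in leg} for a in leg}
print(four_point_delta(tripod))  # 0.0 -- tree metrics satisfy (3.3.4) exactly

# The vertices of a 4-cycle with the graph metric are not 0-hyperbolic:
cycle = {a: {b: min(abs(a - b), 4 - abs(a - b)) for b in range(4)} for a in range(4)}
print(four_point_delta(cycle))  # 1.0
```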
Note that many authors require X to be a geodesic metric space in order to
be hyperbolic; we do not. If X is a geodesic metric space, then the condition
of hyperbolicity can be reformulated in several different ways, including the thin
triangles condition; for details, see [39, § III.H.1] or Section 4.3 below.
It will be convenient for us to make a list of several identities satisfied by the
Gromov product. For each z ∈ X, let B z denote the Busemann function
(3.3.5)
B z (x, y) := d(z, x) − d(z, y).
Proposition 3.3.3. The Gromov product and Busemann function satisfy the
following identities and inequalities:
(a) hx|yiz = hy|xiz
(b) d(y, z) = hy|xiz + hz|xiy
(c) 0 ≤ hx|yiz ≤ min(d(x, z), d(y, z))
(d) hx|yiz ≤ hx|yiw + d(z, w)
(e) hx|yiw ≤ hx|ziw + d(y, z)
(f) |B x (z, w)| ≤ d(z, w)
(g) hx|yiz = hx|yiw + ½ [B x (z, w) + B y (z, w)]
(h) hx|yiz = ½ [d(x, z) + B y (z, x)]
(j) B x (y, z) = hz|xiy − hy|xiz
(k) hx|yiz = hx|yiw + d(z, w) − hx|ziw − hy|ziw
(l) hx|yiw = hx|ziw + ½ [B w (y, z) − B x (y, z)]
The proof is a straightforward computation. We remark that (a)-(e) may be
found in [172, Lemma 2.8].
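Each identity is a finite algebraic relation among distances, so it can be spot-checked numerically in any metric space. A Python sketch (ours), testing a sample of the identities for random points in the Euclidean plane:

```python
import random

def d(a, b):  # Euclidean metric on R^2, used as a sample metric space
    return ((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2) ** 0.5

def gp(x, y, z):  # Gromov product <x|y>_z
    return (d(x, z) + d(y, z) - d(x, y)) / 2

def B(x, z, w):  # Busemann function B^x(z, w) = d(x, z) - d(x, w)
    return d(x, z) - d(x, w)

random.seed(0)
for _ in range(100):
    x, y, z, w = [(random.uniform(-5, 5), random.uniform(-5, 5)) for _ in range(4)]
    assert abs(gp(x, y, z) - gp(y, x, z)) < 1e-9                                  # (a)
    assert abs(d(y, z) - (gp(y, x, z) + gp(z, x, y))) < 1e-9                      # (b)
    assert -1e-9 <= gp(x, y, z) <= min(d(x, z), d(y, z)) + 1e-9                   # (c)
    assert gp(x, y, z) <= gp(x, y, w) + d(z, w) + 1e-9                            # (d)
    assert abs(gp(x, y, z) - (gp(x, y, w) + (B(x, z, w) + B(y, z, w)) / 2)) < 1e-9  # (g)
    assert abs(B(x, y, z) - (gp(z, x, y) - gp(y, x, z))) < 1e-9                   # (j)
    assert abs(gp(x, y, z) - (gp(x, y, w) + d(z, w) - gp(x, z, w) - gp(y, z, w))) < 1e-9  # (k)
print("all sampled identities hold")
```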
3.3.1. Examples of Gromov hyperbolic metric spaces.
Proposition 3.3.4 (Proven in Section 3.5). Every CAT(-1) space (in particular
every algebraic hyperbolic space) is Gromov hyperbolic. In fact, if X is a CAT(-1)
space then for every four points x, y, z, w ∈ X we have
(3.3.6)
e−hx|ziw ≤ e−hx|yiw + e−hy|ziw ,
and so X satisfies (3.3.4) with an implied constant of log(2).
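The constant log(2) can be read off from (3.3.6) by bounding the right-hand side by twice its larger term:

```latex
e^{-\langle x|z\rangle_w}
\le e^{-\langle x|y\rangle_w} + e^{-\langle y|z\rangle_w}
\le 2\,e^{-\min(\langle x|y\rangle_w,\,\langle y|z\rangle_w)},
\qquad\text{whence}\qquad
\langle x|z\rangle_w \ge \min\bigl(\langle x|y\rangle_w,\,\langle y|z\rangle_w\bigr) - \log 2.
```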
Figure 3.3.2. An illustration of (b) of Proposition 3.3.3 in an R-tree.
Remark 3.3.5. The first assertion of Proposition 3.3.4, namely, that CAT(-1)
spaces are Gromov hyperbolic, is [39, Proposition III.H.1.2]. The inequality (3.3.6)
in the case where x, y, z ∈ ∂X and w ∈ X can be found in [33, Théorème 2.5.1].
Definition 3.3.6. A space X satisfying the conclusion of Proposition 3.3.4 is
said to be strongly hyperbolic.
Note that
R-tree ⇒ CAT(-1) ⇒ Strongly hyperbolic ⇒ Hyperbolic.
A large class of examples of hyperbolic metric spaces which are not CAT(-1) is
furnished by the Cayley graphs of finitely presented groups. Indeed, we have the
following:
Theorem 3.3.7 ([86, p.78], [139]; see also [49]). Fix k ≥ 2 and an alphabet
A = {a1±1 , a2±1 , · · · , ak±1 }. Fix i ∈ N and a sequence of positive integers
(n1 , · · · , ni ). Let N = N (k, i, n1 , · · · , ni ) be the number of group presentations
G = ha1 , · · · , ak | r1 , · · · , ri i such that r1 , · · · , ri are reduced words in the alphabet A
and the length of rj is nj for j = 1, 2, · · · , i. If Nh is the number of groups in
this collection whose Cayley graphs are hyperbolic and if n = min(n1 , · · · , ni ) then
limn→∞ Nh /N = 1.
This theorem says that in some sense, “almost every” finitely presented group
is hyperbolic.
If one has a hyperbolic metric space X, there are two ways to get another
hyperbolic metric space from X, one trivial and one nontrivial.
Observation 3.3.8. Any subspace of a hyperbolic metric space is hyperbolic.
Any subspace of a strongly hyperbolic metric space is strongly hyperbolic.
To describe the other method we need to define the notion of a quasi-isometric
embedding.
Definition 3.3.9. Let (X1 , d1 ) and (X2 , d2 ) be metric spaces. A map Φ :
X1 → X2 is a quasi-isometric embedding if for every x, y ∈ X1
d2 (Φ(x), Φ(y)) ≍+,× d1 (x, y).
A quasi-isometric embedding Φ is called a quasi-isometry if its image Φ(X1 ) is
cobounded in X2 , that is, if there exists R > 0 such that Φ(X1 ) is R-dense in X2 ,
meaning that supx∈X2 d(x, Φ(X1 )) ≤ R. In this case, the spaces X1 and X2 are
said to be quasi-isometric.
Theorem 3.3.10 ([39, Theorem III.H.1.9]). Any geodesic metric space which
can be quasi-isometrically embedded into a geodesic hyperbolic metric space is also
a hyperbolic metric space.
Remark 3.3.11. Theorem 3.3.10 is not true if the hypothesis of geodesicity is
dropped. For example, R is quasi-isometric to [0, ∞) × {0} ∪ {0} × [0, ∞) ⊆ R2 , but
the former is hyperbolic and the latter is not.
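The failure of hyperbolicity for the L-shaped set can be seen concretely: place the basepoint w far out on the horizontal ray and three points along the vertical ray; seen from w, the vertical ray looks almost collapsed in the Euclidean subspace metric, and the defect in Gromov's inequality grows without bound. A Python sketch (ours; the particular quadruples are chosen for illustration):

```python
import math

def d(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def gp(x, y, w):  # Gromov product <x|y>_w in the Euclidean subspace metric
    return (d(x, w) + d(y, w) - d(x, y)) / 2

# Defect in Gromov's inequality for a basepoint w far out on the horizontal
# ray and three points spread along the vertical ray; it grows like c/4.
defects = []
for c in [10, 100, 1000]:
    w, x, y, z = (c**3, 0.0), (0.0, 0.0), (0.0, c / 2), (0.0, c)
    defects.append(min(gp(x, y, w), gp(y, z, w)) - gp(x, z, w))
print([round(t, 2) for t in defects])  # [2.48, 25.0, 250.0]
```

Since no single constant bounds these defects, (3.3.4) fails for the L-shaped set.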
There are many more examples of hyperbolic metric spaces which we will not
discuss; cf. the list in §1.1.2.
3.4. The boundary of a hyperbolic metric space
In this section we define the Gromov boundary of a hyperbolic metric space
X. The construction will depend on a distinguished point o ∈ X, but the resulting
space will be independent of which point is chosen. If X is an R-tree, then the
boundary of X will turn out to be the set of infinite branches through X, i.e. the
set of all isometric embeddings π : [0, ∞) → X sending 0 to o, where o ∈ X is a
distinguished fixed point. If X is an algebraic hyperbolic space, then the boundary
of X will turn out to be isomorphic to the space ∂X defined in Chapter 2.
To motivate the definition of the boundary, suppose that X is an R-tree. An
infinite branch through X can be approximated by finite branches which agree
on longer and longer segments. Suppose that ([o, xn ])∞1 is a sequence of geodesic
segments. For each n, m ∈ N, the length of the intersection of [o, xn ] and [o, xm ] is
equal to d(o, C(o, xn , xm )), which in turn is equal to hxn |xm io . Thus, the sequence
([o, xn ])∞1 converges to an infinite geodesic if and only if
(3.4.1)
hxn |xm io → ∞ as n, m → ∞.
(Cf. Figure 3.4.1.) The formula (3.4.1) is reminiscent of the definition of a Cauchy
sequence. This intuition will be made explicit in Section 3.6, where we will introduce
a metametric on X with the property that a sequence in X satisfies (3.4.1) if and
only if it is Cauchy with respect to this metametric.
Figure 3.4.1. A Gromov sequence in an R-tree
Definition 3.4.1. A sequence (xn )∞1 in X for which (3.4.1) holds is called a
Gromov sequence. Two Gromov sequences (xn )∞1 and (yn )∞1 are called equivalent
if
hxn |yn io → ∞ as n → ∞,
or equivalently if
hxn |ym io → ∞ as n, m → ∞.
In this case, we write (xn )∞1 ∼ (yn )∞1 . It is readily verified using Gromov's inequality
that ∼ is an equivalence relation on the set of Gromov sequences in X. We will
denote the class of sequences equivalent to a given sequence (xn )∞1 by [(xn )∞1 ].
Definition 3.4.2. The Gromov boundary of X is the set of Gromov sequences
modulo equivalence. It is denoted ∂X. The Gromov closure or bordification of X
is the disjoint union bord X := X ∪ ∂X.
Remark 3.4.3. If X is an algebraic hyperbolic space, then this notation causes
some ambiguity, since it is not clear whether ∂X represents the Gromov boundary
of X, or rather the topological boundary of X as in Chapter 2. This ambiguity
will be resolved in §3.5.1 below when it is shown that the two bordifications are
isomorphic.
Remark 3.4.4. In the literature, the ideal boundary of a hyperbolic metric
space is often taken to be the set of equivalence classes of geodesic rays under
asymptotic equivalence, rather than the set of equivalence classes of Gromov sequences (e.g. [39, p.427]). If X is proper and geodesic, then these two notions are
equivalent [39, Lemma III.H.3.13], but in general they may be different.
Remark 3.4.5. By (d) of Proposition 3.3.3, the concepts of Gromov sequence
and equivalence do not depend on the basepoint o. In particular, the Gromov
boundary ∂X is independent of o.
3.4.1. Extending the Gromov product to the boundary. We now wish
to extend the Gromov product and Busemann function to the boundary “by
continuity”. Fix ξ, η ∈ ∂X and z ∈ X. Ideally, we would like to define hξ|ηiz to be
(3.4.2)
limn,m→∞ hxn |ym iz ,
where (xn )∞1 ∈ ξ and (ym )∞1 ∈ η. (The definition would then have to be shown
independent of which sequences were chosen.) The naive definition (3.4.2) does not
work, because the limit (3.4.2) does not necessarily exist:
Example 3.4.6. Let
X = {x ∈ R2 : x2 ∈ [0, 1]}
be interpreted as a subspace of R2 with the L1 metric. Then X is a hyperbolic
metric space, since it contains the cobounded hyperbolic metric space R × {0}. Its
Gromov boundary consists of two points −∞ and +∞, which are the limits of x
as x1 approaches −∞ or +∞, respectively. Let y = (0, 1) and z = (1, 0). Then for
all x ∈ X with x1 ≥ 1, we have hx|yiz = x2 . In particular, we can find a sequence
xn → +∞ such that limn→∞ hxn |yiz does not exist.
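The computation in Example 3.4.6 is elementary to verify; a Python sketch (ours) checking hx|yiz = x2 on the right half of the strip and exhibiting an oscillating sequence:

```python
def d1(a, b):
    """L^1 metric on the strip R x [0, 1]."""
    return abs(a[0] - b[0]) + abs(a[1] - b[1])

def gp(x, y, z):  # Gromov product <x|y>_z
    return (d1(x, z) + d1(y, z) - d1(x, y)) / 2

y, z = (0, 1), (1, 0)
for x in [(1, 0.0), (5, 0.25), (100, 1.0), (10**6, 0.7)]:
    assert abs(gp(x, y, z) - x[1]) < 1e-9  # <x|y>_z = x_2 whenever x_1 >= 1
# Along x_n = (n, (1 + (-1)**n) / 2) the products oscillate, so no limit exists:
print([gp((n, (1 + (-1)**n) / 2), y, z) for n in range(1, 7)])  # [0.0, 1.0, 0.0, 1.0, 0.0, 1.0]
```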
Fortunately, the limit (3.4.2) “exists up to a constant”:
Lemma 3.4.7. Let (xn )∞1 and (ym )∞1 be Gromov sequences, and fix y, z ∈ X.
Then
(3.4.3)
lim inf n,m→∞ hxn |ym iz ≍+ lim supn,m→∞ hxn |ym iz ,
(3.4.4)
lim inf n→∞ hxn |yiz ≍+ lim supn→∞ hxn |yiz ,
with equality if X is strongly hyperbolic.
Note that except for the statement about strongly hyperbolic spaces, this
lemma is simply [172, Lemma 5.6].
Proof of Lemma 3.4.7. Fix n1 , n2 , m1 , m2 ∈ N. By Gromov's inequality
hxn1 |ym1 iz &+ min(hxn2 |ym2 iz , hxn1 |xn2 iz , hym1 |ym2 iz ).
Taking the liminf over n1 , m1 and the limsup over n2 , m2 gives
lim inf n,m→∞ hxn |ym iz &+ min( lim supn,m→∞ hxn |ym iz , lim inf n1 ,n2 →∞ hxn1 |xn2 iz , lim inf m1 ,m2 →∞ hym1 |ym2 iz )
= lim supn,m→∞ hxn |ym iz (since (xn )∞1 and (ym )∞1 are Gromov),
demonstrating (3.4.3). On the other hand, suppose that X is strongly hyperbolic.
Then by (3.3.6) we have
exp(−hxn1 |ym1 iz ) ≤ exp(−hxn2 |ym2 iz ) + exp(−hxn1 |xn2 iz ) + exp(−hym1 |ym2 iz );
taking the limsup over n1 , m1 and the liminf over n2 , m2 gives
exp(− lim inf n,m→∞ hxn |ym iz ) ≤ exp(− lim supn,m→∞ hxn |ym iz )
+ exp(− lim inf n1 ,n2 →∞ hxn1 |xn2 iz )
+ exp(− lim inf m1 ,m2 →∞ hym1 |ym2 iz )
= exp(− lim supn,m→∞ hxn |ym iz ) (since (xn )∞1 and (ym )∞1 are Gromov),
demonstrating equality in (3.4.3). The proof of (3.4.4) is similar and will be omitted.
Remark 3.4.8. Many of the statements in this monograph concerning strongly
hyperbolic metric spaces are in fact valid for all hyperbolic metric spaces satisfying
the conclusion of Lemma 3.4.7.
Now that we know that it does not matter too much whether we replace the
limit in (3.4.2) by a liminf or a limsup, we make the following definition without
fear:
Definition 3.4.9. For ξ, η ∈ ∂X and y, z ∈ X, let
(3.4.5)
hξ|ηiz := inf { lim inf n,m→∞ hxn |ym iz : (xn )∞1 ∈ ξ, (ym )∞1 ∈ η }
(3.4.6)
hξ|yiz := hy|ξiz := inf { lim inf n→∞ hxn |yiz : (xn )∞1 ∈ ξ }
(3.4.7)
B ξ (y, z) = hz|ξiy − hy|ξiz .
As a corollary of Lemma 3.4.7, we have the following:
Lemma 3.4.10. Fix ξ, η ∈ ∂X and y, z ∈ X. For all (xn )∞1 ∈ ξ and (ym )∞1 ∈ η
we have
(3.4.8)
hxn |ym iz →n,m,+ hξ|ηiz
(3.4.9)
hxn |yiz →n,+ hξ|yiz
(3.4.10)
B xn (y, z) →n,+ B ξ (y, z),
(cf. Convention 2), with exact limits if X is strongly hyperbolic.
Note that except for the statement about strongly hyperbolic spaces, this
lemma is simply [172, Lemma 5.11].
Proof of Lemma 3.4.10. Say we are given (xn(i) )∞1 ∈ ξ and (ym(i) )∞1 ∈ η for
each i = 1, 2. Let
xn = x(1)n/2 if n is even, and xn = x(2)(n+1)/2 if n is odd,
and define ym similarly. It may be verified using Gromov's inequality that (xn )∞1 ∈
ξ and (ym )∞1 ∈ η. Applying Lemma 3.4.7, we have
min i,j∈{1,2} lim inf n,m→∞ hxn(i) |ym(j) iz ≍+ max i,j∈{1,2} lim supn,m→∞ hxn(i) |ym(j) iz .
In particular,
lim inf n,m→∞ hxn(1) |yn(1) iz .+ lim supn,m→∞ hxn(1) |yn(1) iz .+ lim inf n,m→∞ hxn(2) |yn(2) iz
.+ lim supn,m→∞ hxn(2) |yn(2) iz .+ lim inf n,m→∞ hxn(1) |yn(1) iz .
Taking the infimum over all (xn(2) )∞1 ∈ ξ and (ym(2) )∞1 ∈ η gives (3.4.8). A similar
argument gives (3.4.9). Finally, (3.4.10) follows from (3.4.9), (3.4.7), and (j) of
Proposition 3.3.3.
If X is strongly hyperbolic, then all error terms are equal to zero, demonstrating
that the limits converge exactly.
Remark 3.4.11. In the sequel, the statement that “if X is strongly hyperbolic,
then all error terms are zero” will typically be omitted from our proofs.
A simple but useful consequence of Lemma 3.4.10 is the following:
Corollary 3.4.12. The formulas of Proposition 3.3.3 together with Gromov’s
inequality hold for points on the boundary as well, if the equations and inequalities
there are replaced by additive asymptotics. If X is strongly hyperbolic, then we may
keep the original formulas without adding an error term.
Proof. For each identity, choose a Gromov sequence representing each element of the boundary which appears in the formula. Replace each occurrence of
this element in the formula by the general term of the chosen sequence. This yields
a sequence of formulas, each of which is known to be true. Take a subsequence
on which each term in these formulas converges. Taking the limit along this subsequence again yields a true formula, and by Lemma 3.4.10 we may replace each
limit term by the term which it stood for, with only bounded error in doing so,
and no error if X is strongly hyperbolic. Thus the formula holds as an additive
asymptotic, and holds exactly if X is strongly hyperbolic.
Remark 3.4.13. In fact, (a), (c), (d), and (e) of Proposition 3.3.3 hold in
bord X in the usual sense, i.e. as exact formulas without additive constants.
Proof. These are the identities where there is at most one Gromov product
on each side of the formula. For each element of the boundary, we may simply
replace each occurrence of that element with the general term of an arbitrary Gromov
sequence, take the liminf, and then take the infimum over all Gromov sequences.
Observation 3.4.14. hx|yiz = ∞ if and only if x = y ∈ ∂X.
Proof. This follows directly from (3.4.5) and (3.4.6).
3.4.2. A topology on bord X. One can endow the bordification bord X =
X ∪ ∂X with a topological structure T as follows: Given S ⊆ bord X, write S ∈ T
(i.e. call S open) if
(I) S ∩ X is open, and
(II) For each ξ ∈ S ∩ ∂X there exists t ≥ 0 such that Nt (ξ) ⊆ S, where
Nt (ξ) := Nt,o (ξ) := {y ∈ bord X : hy|ξio > t}.
Remark 3.4.15. The topology T may equivalently be defined to be the unique
topology on bord X satisfying:
(I) T ↿ X is compatible with the metric d, and
(II) For each ξ ∈ ∂X, the collection
{Nt (ξ) : t ≥ 0}
(3.4.11)
is a neighborhood base for T at ξ.
Remark 3.4.16. It follows from Lemma 3.4.23 below that the sets Nt (ξ) are
open in the topology T .
Remark 3.4.17. By (d) of Proposition 3.3.3 (cf. Remark 3.4.13), we have
Nt,x (ξ) ⊇ Nt+d(x,y),y (ξ) for all x, y ∈ X, ξ ∈ ∂X, and t ≥ 0. Thus the topology T
is independent of the basepoint o.
The topology T is quite nice. In fact, we have the following:
Proposition 3.4.18. The topological space (bord X, T ) is completely metrizable. If X is proper and geodesic, then bord X (and thus also ∂X) is compact. If
X is separable, then bord X (and thus also ∂X) is separable.
Remark 3.4.19. If X is proper and geodesic, then Proposition 3.4.18 is [39,
Exercise III.H.3.18(4)].
Proof of Proposition 3.4.18. We delay the proof of the complete metrizability of bord X until Section 3.6, where we will introduce a class of compatible
complete metrics on bord X which are important from a geometric point of view,
the so-called visual metrics.
Since X is dense in bord X, the separability of X implies the separability of
bord X. Moreover, since bord X is metrizable (as we will show in Section 3.6), the
separability of bord X implies the separability of ∂X.
Finally, assume that X is proper and geodesic; we claim that bord X is compact.
Let (xn )∞1 be a sequence in X. If (xn )∞1 contains a bounded subsequence, then
since X is proper it contains a convergent subsequence. Thus we assume that (xn )∞1
contains no bounded subsequence, i.e. kxn k → ∞.
For each n ∈ N and t ≥ 0 let
xn,t = [o, xn ]t∧kxn k ,³
where [o, xn ] is any geodesic connecting o and xn . Since X is proper, there exists a
sequence (nk )∞1 such that for each t ≥ 0, the sequence (xnk ,t )∞1 is convergent, say
xnk ,t → xt as k → ∞.
It is readily verified that the map t 7→ xt is an isometric embedding from [0, ∞) to
X. Thus there exists a point ξ ∈ ∂X such that xt → ξ. We claim that xnk → ξ.
Indeed, for each t ≥ 0,
lim supk→∞ D(xnk , xnk ,t ) ≍× lim supk→∞ D(xnk ,t , xt ) ≍× lim supk→∞ D(xt , ξ) ≍× b−t ,
and so the triangle inequality gives
lim supk→∞ D(xnk , ξ) .× b−t .
Letting t → ∞ shows that xnk → ξ.
Observation 3.4.20. A sequence (xn )∞1 in bord X converges to a point ξ ∈ ∂X
if and only if
(3.4.12)
hxn |ξio → ∞ as n → ∞.
Observation 3.4.21. A sequence (xn )∞1 in X converges to a point ξ ∈ ∂X if
and only if (xn )∞1 is a Gromov sequence and (xn )∞1 ∈ ξ.
³Here and from now on A ∧ B = min(A, B) and A ∨ B = max(A, B).
We now investigate the continuity properties of the Gromov product and Busemann
function.
Lemma 3.4.22 (Near-continuity of the Gromov product and Busemann function).
The maps (x, y, z) 7→ hx|yiz and (x, z, w) 7→ B x (z, w) are nearly continuous
in the following sense: Suppose that (xn )∞1 and (yn )∞1 are sequences in bord X
which converge to points xn → x ∈ bord X and yn → y ∈ bord X. Suppose that
(zn )∞1 and (wn )∞1 are sequences in X which converge to points zn → z ∈ X and
wn → w ∈ X. Then
(3.4.13)
hxn |yn izn →n,+ hx|yiz
(3.4.14)
B xn (zn , wn ) →n,+ B x (z, w),
with →n if X is strongly hyperbolic.
Proof. In the proof of (3.4.13), there are three cases:
Case 1: x, y ∈ X. In this case, (3.4.13) follows directly from (d) and (e) of Proposition 3.3.3.
Case 2: x, y ∈ ∂X. In this case, for each n ∈ N, choose x̂n ∈ X such that either
(3.4.15)
(1) x̂n = xn (if xn ∈ X), or
(2) hx̂n |xn iz ≥ n (if xn ∈ ∂X).
Choose ŷn similarly. Clearly, x̂n → x and ŷn → y. By Observation 3.4.21,
(x̂n )∞1 ∈ x and (ŷn )∞1 ∈ y. Thus by Lemma 3.4.10,
hx̂n |ŷn iz →n,+ hx|yiz .
Now by Gromov's inequality and (e) of Proposition 3.3.3, either
(1) hx̂n |ŷn iz ≍+ hxn |yn izn , or
(2) hx̂n |ŷn iz &+ n,
with which asymptotic is true depending on n. But for n sufficiently large,
(3.4.15) ensures that (2) fails, so (1) holds.
Case 3: x ∈ X, y ∈ ∂X, or vice-versa. In this case, a straightforward combination
of the above arguments demonstrates (3.4.13).
Finally, note that (3.4.14) is an immediate consequence of (3.4.13), (3.4.7), and (j)
of Proposition 3.3.3.
Although Lemma 3.4.22 is generally sufficient for applications, we include the
following lemma which reassures us that the Gromov product does behave somewhat regularly even on an “exact” level.
Lemma 3.4.23. The function (x, y, z) 7→ hx|yiz is lower semicontinuous on
bord X × bord X × X.
Proof. Since bord X is metrizable, it is enough to show that if xn → x,
yn → y, and zn → z, then
lim inf n→∞ hxn |yn izn ≥ hx|yiz .
Now fix ε > 0.
Claim 3.4.24. For each n ∈ N, there exist points x̂n , ŷn ∈ X satisfying:
(3.4.16) hx̂n |ŷn izn ≤ hxn |yn izn + ε,
(3.4.17) hx̂n |xn io ≥ n, or x̂n = xn ∈ X,
(3.4.18) hŷn |yn io ≥ n, or ŷn = yn ∈ X.
Proof. Suppose first that xn , yn ∈ ∂X. By the definition of hxn |yn izn , there
exist (xn,k )∞1 ∈ xn and (yn,ℓ )∞1 ∈ yn such that
lim inf k,ℓ→∞ hxn,k |yn,ℓ izn ≤ hxn |yn izn + ε/2.
It follows that there exist arbitrarily large⁴ pairs (k, ℓ) ∈ N2 such that the points
x̂n := xn,k and ŷn := yn,ℓ satisfy (3.4.16). Since (3.4.17) and (3.4.18) are satisfied
for all sufficiently large (k, ℓ) ∈ N2 , this completes the proof. Finally, if either
xn ∈ X, yn ∈ X, or both, a straightforward adaptation of the above argument
yields the claim.
⊳
The equations (3.4.17) and (3.4.18), together with Gromov's inequality, imply
that x̂n → x and ŷn → y. Now suppose that x, y ∈ ∂X. Then by Observation
3.4.21, (x̂n )∞1 ∈ x and (ŷn )∞1 ∈ y. So by the definition of hx|yiz , we have
hx|yiz ≤ lim inf n→∞ hx̂n |ŷn iz (by the definition of hx|yiz )
= lim inf n→∞ hx̂n |ŷn izn (by (d) of Proposition 3.3.3)
≤ lim inf n→∞ hxn |yn izn + ε. (by (3.4.16))
Letting ε tend to zero completes the proof. A similar argument applies to the case
where x ∈ X, y ∈ X, or both.
Lemma 3.4.25. If g is an isometry of X, then it extends in a unique way to a
continuous map g̃ : bord X → bord X.
Proof. This follows more or less directly from Remarks 3.4.5 and 3.4.17;
details are left to the reader.
In the sequel we will omit the tilde from the extended map g̃.
3.5. The Gromov product in algebraic hyperbolic spaces
In this section we analyze the Gromov product in an algebraic hyperbolic space
X. We prove Proposition 3.3.4 which states that CAT(-1) spaces are strongly
hyperbolic, and then we show that the Gromov boundary of X is isomorphic to its
topological boundary, justifying Remark 3.4.3.
⁴Here, of course, “arbitrarily large” means that min(k, ℓ) can be made arbitrarily large.
Figure 3.5.1. If F = R and x, y ∈ ∂B, then e−hx|yi0 = ½ ky − xk = sin(θ/2),
where θ denotes the angle ∡0 (x, y) drawn in the figure.
In what follows, we will switch between the hyperboloid model H = HαF and
the ball model B = BαF according to convenience. In the following lemma, ∂B and
bord B denote the topological boundary and closure of B as defined in Chapter 2,
not the Gromov boundary and closure as defined above.
Lemma 3.5.1. The Gromov product (x, y, z) 7→ hx|yiz : B × B × B → [0, ∞)
extends uniquely to a continuous function (x, y, z) 7→ hx|yiz : bord B × bord B × B →
[0, ∞]. Moreover, the extension satisfies the following:
(i) hx|yiz = ∞ if and only if x = y ∈ ∂B.
(ii) For all x, y ∈ bord B,
(3.5.1)
e−hx|yi0 ≥ (1/√8) ky − xk.
If F = R and x, y ∈ ∂B, then
(3.5.2)
e−hx|yi0 = ½ ky − xk.
Proof. We begin by making some computations in the hyperboloid model H.
For [x], [y] ∈ bord H and [z] ∈ H, let
α[z] ([x], [y]) = |Q(z)| · |BQ (x, y)| / (|BQ (x, z)| · |BQ (y, z)|) ∈ [0, ∞).
By (2.2.2), for [x], [y], [z] ∈ H we have
(3.5.3)
α[z] ([x], [y]) = cosh dH ([x], [y]) / [cosh dH ([x], [z]) cosh dH ([y], [z])].
Let D = {(A, B, C) ∈ [0, ∞)3 : cosh(A) cosh(B)C ≥ 1}, and define F : D → [0, ∞)
by
F (A, B, C) = exp(cosh−1 (cosh(A) cosh(B)C)) / (eA eB ).
Then by (3.5.3), we have
e−2h[x]|[y]i[z] = F (dH ([z], [x]), dH ([z], [y]), α[z] ([x], [y]))
for all [x], [y], [z] ∈ H. Now since limt→∞ et / cosh(t) = 2, we have for all A ≥ 0 and
C > 0
limB→∞ F (A, B, C) = limB→∞ 2 cosh(A) cosh(B)C / (eA eB ) = (cosh(A)/eA ) C
and
limA→∞ (cosh(A)/eA ) C = C/2.
Let D̂ be the closure of D relative to [0, ∞]2 × [0, ∞), i.e.
D̂ = D ∪ ( ([0, ∞]2 × [0, ∞)) \ [0, ∞)3 ).
If we let
F̂ (A, B, C) := F (A, B, C) if A, B < ∞;  (cosh(A)/eA ) C if A < B = ∞;
(cosh(B)/eB ) C if B < A = ∞;  C/2 if A = B = ∞,
then F̂ : D̂ → [0, ∞) is a continuous function.⁵ Thus, letting
h[x]|[y]i[z] := −½ log F̂ (dH ([z], [x]), dH ([z], [y]), α[z] ([x], [y]))
defines a continuous extension of the Gromov product to bord H × bord H × H.
We now prove (i)-(ii):
(i) Using the inequality et /2 ≤ cosh(t) ≤ et , it is easily verified that
(3.5.4)
F̂ (A, B, C) ≥ C/4
for all (A, B, C) ∈ D̂. In particular, if F̂ (A, B, C) = 0 then C = 0. Thus if
h[x]|[y]i[z] = ∞ then α[z] ([x], [y]) = 0; since [z] ∈ H we have BQ (x, y) = 0,
and by Observation 2.3.15 we have [x] = [y] ∈ ∂H. Conversely, if [x] =
[y] ∈ ∂H, then dH ([z], [x]) = dH ([z], [y]) = ∞ and α[z] ([x], [y]) = 0, so
h[x]|[y]i[z] = −½ log F̂ (∞, ∞, 0) = ∞.
⁵Technically, the calculations above do not prove the continuity of F̂ ; however, this continuity is
easily verified using standard methods.
(ii) Recall that in B, o = [(1, 0)]. For x, y ∈ B,
αo (eB,H (x), eB,H (y)) = |Q(1, 0)| · |BQ ((1, x), (1, y))| / (|BQ ((1, 0), (1, x))| · |BQ ((1, 0), (1, y))|)
= |1 − BE (x, y)|
≥ 1 − Re BE (x, y)    (with equality if F = R)
≥ ½ [kxk2 + kyk2 ] − Re BE (x, y)    (with equality if x, y ∈ ∂B)
= ½ ky − xk2 .
Combining with (3.5.4) gives
e−2hx|yi0 ≥ ¼ αo (eB,H (x), eB,H (y)) ≥ ⅛ ky − xk2 .
If F = R and x, y ∈ ∂B, then
e−2hx|yi0 = ½ αo (eB,H (x), eB,H (y)) = ¼ ky − xk2 .
We now prove Proposition 3.3.4, beginning with the following lemma:
Lemma 3.5.2. If F = R then B is strongly Gromov hyperbolic.
Proof. By the transitivity of the isometry group (Observation 2.3.2), it suffices to check (3.3.6) for the special case w = o. So let us fix x, y, z ∈ B, and by
contradiction suppose that
e−hx|zio > e−hx|yio + e−hy|zio ,
or equivalently that
1 > ehx|zio −hx|yio + ehx|zio −hy|zio .
Clearly, the above inequality implies that x 6= z and y 6= o. Now let γ1 and γ2
be the unique bi-infinite geodesics extending the geodesic segments [x, z] and [o, y],
respectively. Let x∞ , z∞ ∈ ∂B be the appropriate endpoints of γ1 , and let y∞ be
the endpoint of γ2 which is closer to y than to o. (See Figure 3.5.2.) For each
t ∈ [0, ∞), let
xt = [x, x∞ ]t ∈ γ1 ,
and let yt ∈ γ2 , zt ∈ γ1 be defined similarly.
Figure 3.5.2. If Gromov's inequality fails for the quadruple x, y, z, o, then it
also fails for the quadruple x∞ , y∞ , z∞ , o.
We observe that
∂/∂t [hxt |zt io − hxt |yt io ] = ½ ∂/∂t [dB (o, zt ) + dB (xt , yt ) − dB (xt , zt ) − dB (o, yt )]
= ½ ∂/∂t [dB (o, zt ) + dB (xt , yt ) − 2t − t]
≤ ½ ∂/∂t [t + 2t − 2t − t] = 0,
i.e. the expression hxt |zt io − hxt |yt io is nonincreasing with respect to t. Taking the
limit as t approaches infinity, we have
  ⟨x∞|z∞⟩_o − ⟨x∞|y∞⟩_o ≤ ⟨x|z⟩_o − ⟨x|y⟩_o,

and a similar argument shows that

  ⟨x∞|z∞⟩_o − ⟨y∞|z∞⟩_o ≤ ⟨x|z⟩_o − ⟨y|z⟩_o.

Thus

  1 > e^{⟨x∞|z∞⟩_o − ⟨x∞|y∞⟩_o} + e^{⟨x∞|z∞⟩_o − ⟨y∞|z∞⟩_o},

or equivalently,

  e^{−⟨x∞|z∞⟩_o} > e^{−⟨x∞|y∞⟩_o} + e^{−⟨y∞|z∞⟩_o}.
But by (3.5.2), if we write x∞ = x, y∞ = y, and z∞ = z, then
  (1/2)‖z − x‖ > (1/2)‖y − x‖ + (1/2)‖z − y‖.
This is a contradiction.
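Lemma 3.5.2 can also be checked numerically. The sketch below (an illustration, not from the text) samples random triples in the upper half-plane model and verifies the strong hyperbolicity inequality e^{−⟨x|z⟩_o} ≤ e^{−⟨x|y⟩_o} + e^{−⟨y|z⟩_o} at o = (1, 0); it assumes the half-space distance formula cosh d(p, q) = 1 + ‖q − p‖²/(2p₁q₁), which is quoted later in this chapter (cf. the proof of Proposition 3.5.5).

```python
import math
import random

def d_h2(p, q):
    """Hyperbolic distance in the upper half-plane, points (height, x):
    cosh d = 1 + |q - p|^2 / (2 p1 q1)."""
    diff2 = (p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2
    return math.acosh(1 + diff2 / (2 * p[0] * q[0]))

def gp(x, y, w):
    """Gromov product <x|y>_w = (1/2)[d(x,w) + d(y,w) - d(x,y)]."""
    return 0.5 * (d_h2(x, w) + d_h2(y, w) - d_h2(x, y))

random.seed(0)
o = (1.0, 0.0)
violations = 0
for _ in range(10_000):
    x, y, z = [(random.uniform(0.05, 20.0), random.uniform(-20.0, 20.0))
               for _ in range(3)]
    # strong hyperbolicity: e^{-<x|z>} <= e^{-<x|y>} + e^{-<y|z>}
    if math.exp(-gp(x, z, o)) > math.exp(-gp(x, y, o)) + math.exp(-gp(y, z, o)) + 1e-12:
        violations += 1
```

No violations occur, as the lemma predicts for real hyperbolic space.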
We are now ready to prove
Proposition 3.3.4. Every CAT(-1) space is strongly hyperbolic.
Proof. Let X be a CAT(-1) space, and fix x, y, z, w ∈ X. By [39, Proposition II.1.11], there exist x̄, ȳ, z̄, w̄ ∈ H² such that

  d(x̄, ȳ) = d(x, y),   d(ȳ, z̄) = d(y, z),   d(z̄, w̄) = d(z, w),   d(w̄, x̄) = d(w, x),
  d(x, z) ≤ d(x̄, z̄),   d(y, w) ≤ d(ȳ, w̄).

It follows that

  e^{−⟨x|z⟩_w} ≤ e^{−⟨x̄|z̄⟩_w̄} ≤ e^{−⟨x̄|ȳ⟩_w̄} + e^{−⟨ȳ|z̄⟩_w̄} ≤ e^{−⟨x|y⟩_w} + e^{−⟨y|z⟩_w}.
3.5.1. The Gromov boundary of an algebraic hyperbolic space. Again let X = H = H^α_F be an algebraic hyperbolic space. By Proposition 3.3.4, X is a (strongly) hyperbolic metric space. (If F = R, we can use Lemma 3.5.2.) In particular, X has a Gromov boundary, defined in Section 3.4. On the other hand, X also has a topological boundary, defined in Chapter 2. For this subsection only, we will write

  ∂G X = Gromov boundary of X,
  ∂T X = topological boundary of X.

We will now show that this distinction is in fact unnecessary.

Proposition 3.5.3. The identity map id : X → X extends uniquely to a homeomorphism îd : X ∪ ∂G X → X ∪ ∂T X. Thus the pairs (X, X ∪ ∂G X) and (X, X ∪ ∂T X) are isomorphic in the sense of Section 2.4.
Proof of Proposition 3.5.3. By Observation 2.5.1 and Proposition 2.5.5, it suffices to consider the case where X = B = B^α_F is the ball model. Fix ξ ∈ ∂G B. By definition, ξ = [(x_n)₁^∞] for some Gromov sequence (x_n)₁^∞. By (3.5.1), the sequence (x_n)₁^∞ is Cauchy in the metric ‖· − ·‖. Thus x_n → x for some x ∈ bord B; since (x_n)₁^∞ is a Gromov sequence, we have

  ⟨x|x⟩_0 = lim_{n,m→∞} ⟨x_n|x_m⟩_0 = ∞,

and thus x ∈ ∂T B by (i) of Lemma 3.5.1. Let îd(ξ) = x.
To see that the map îd is well-defined, note that if (y_n)₁^∞ is another Gromov sequence equivalent to (x_n)₁^∞, and if y_n → y ∈ ∂T B, then

  ⟨x|y⟩_0 = lim_{n→∞} ⟨x_n|y_n⟩_0 = ∞,

and so by (i) of Lemma 3.5.1 we have x = y.
We next claim that îd : ∂G B → ∂T B is a bijection. To demonstrate injectivity, we note that if îd(ξ) = îd(η) = x, then by (i) of Lemma 3.5.1

  lim_{n→∞} ⟨x_n|y_n⟩_0 = ⟨x|x⟩_0 = ∞,

where (x_n)₁^∞ and (y_n)₁^∞ are Gromov sequences representing ξ and η, respectively. Thus (x_n)₁^∞ and (y_n)₁^∞ are equivalent, and so ξ = η.
To demonstrate surjectivity, we observe that for x ∈ ∂T B, we have

  îd( [ ((n−1)/n · x)₁^∞ ] ) = x.
Finally, we must demonstrate that îd is a homeomorphism, or in other words that the topology defined in §3.4.2 is the usual topology on bord B (i.e. the topology inherited from H). It suffices to demonstrate the following:
Claim 3.5.4. For any x ∈ ∂T B, the collection (3.4.11) (with ξ = x) is a
neighborhood base of x with respect to the usual topology.
Proof. By (3.5.1), we have

  N_t(x) ⊆ B(x, √8 e^{−t}).
On the other hand, the continuity of the Gromov product on bord B guarantees
that Nt (x) contains a neighborhood of x with respect to the usual topology.
⊳
In the sequel, the following will be useful:
Proposition 3.5.5. Let E = Eα be the half-space model of a real hyperbolic
space. For x, y ∈ E, we have
B ∞ (x, y) = − log(x1 /y1 ).
Proof. By (2.5.3) we have

  e^{B_∞(x,y)} = lim_{z→∞} ( exp d_E(z, x) / exp d_E(z, y) ) = lim_{z→∞} ( cosh d_E(z, x) / cosh d_E(z, y) )
    = lim_{z→∞} [ 1 + ‖z − x‖²/(2x₁z₁) ] / [ 1 + ‖z − y‖²/(2y₁z₁) ]
    = lim_{z→∞} [ ‖z − x‖²/(2x₁z₁) ] / [ ‖z − y‖²/(2y₁z₁) ]
    = y₁/x₁.
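As a sanity check on Proposition 3.5.5, one can approximate the Busemann function by its defining limit B_∞(x, y) = lim_{z→∞} [d_E(z, x) − d_E(z, y)], sending z straight up in the half-plane. The sketch below is an illustration, not part of the text; the sample points are arbitrary, and the distance formula is the one quoted in the proof above.

```python
import math

def d_half(p, q):
    """Half-space distance via cosh d = 1 + |q - p|^2 / (2 p1 q1),
    points written (height, horizontal coordinate)."""
    diff2 = (p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2
    return math.acosh(1 + diff2 / (2 * p[0] * q[0]))

def busemann_inf(x, y, t=1e8):
    """Approximate B_infty(x, y) = lim_{z -> infty} [d(z, x) - d(z, y)]
    by sending z = (t, 0) straight up."""
    z = (t, 0.0)
    return d_half(z, x) - d_half(z, y)

x, y = (2.0, 0.3), (0.5, -1.0)     # arbitrary sample points
approx = busemann_inf(x, y)
exact = -math.log(x[0] / y[0])     # Proposition 3.5.5: -log(x1 / y1)
```

With t = 10⁸ the finite-z approximation already agrees with −log(x₁/y₁) to several decimal places.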
Figure 3.5.3. The value of the Busemann function B ∞ (x, y) depends on the heights of the points x and y.
3.6. Metrics and metametrics on bord X
3.6.1. General theory of metametrics.
Definition 3.6.1. Recall that a metric on a set Z is a map D : Z × Z → [0, ∞)
which satisfies:
(I) Reflexivity: D(x, x) = 0.
(II) Reverse reflexivity: D(x, y) = 0 ⇒ x = y.
(III) Symmetry: D(x, y) = D(y, x).
(IV) Triangle inequality: D(x, z) ≤ D(x, y) + D(y, z).
Now we can define a metametric on Z to be a map D : Z × Z → [0, ∞) which
satisfies (II), (III), and (IV), but not necessarily (I). This concept is not to be
confused with the more common notion of a pseudometric, which satisfies (I), (III),
and (IV), but not necessarily (II). The term “metametric” was introduced by J.
Väisälä in [172].
If D is a metametric, we define its domain of reflexivity to be the set Zrefl :=
{x ∈ Z : D(x, x) = 0}.6 Obviously, D restricted to its domain of reflexivity is a
metric.
As in metric spaces, a sequence (x_n)₁^∞ in a metametric space (Z, D) is called Cauchy if D(x_n, x_m) → 0 as n, m → ∞, and convergent if there exists x ∈ Z such that D(x_n, x) → 0. (However, see Remark 3.6.5 below.) The metametric space (Z, D) is
called complete if every Cauchy sequence is convergent. Using these definitions, the
standard proof of the Banach contraction principle immediately yields the following:
Theorem 3.6.2 (Banach contraction principle for metametric spaces). Let
(Z, D) be a complete metametric space. Fix 0 < λ < 1. If g : Z → Z satisfies
D(g(z), g(w)) ≤ λD(z, w) ∀z, w ∈ Z,
6In the terminology of [172, p.19], the domain of reflexivity is the set of “small points”.
then there exists a unique point z ∈ Z so that g(z) = z. Moreover, for all w ∈ Z,
we have g n (w) → z with respect to the metametric D.
Observation 3.6.3. The fixed point z coming from Theorem 3.6.2 must lie in the domain of reflexivity Zrefl.
Proof.
D(z, z) = D(g(z), g(z)) ≤ λD(z, z),
and thus D(z, z) = 0.
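To make Definition 3.6.1, Theorem 3.6.2, and Observation 3.6.3 concrete, here is a toy metametric (our own example, not from the text): D(x, y) = |x − y| + |x| + |y| on ℝ, whose domain of reflexivity is {0}, together with the contraction g(x) = x/2, whose fixed point indeed lands in the domain of reflexivity.

```python
import random

def D(x, y):
    """A toy metametric on R: reverse-reflexive, symmetric, and satisfying
    the triangle inequality, but D(x, x) = 2|x| > 0 off the origin."""
    return abs(x - y) + abs(x) + abs(y)

def g(x):
    """A contraction for D: D(g(x), g(y)) = D(x, y) / 2."""
    return x / 2

random.seed(1)
triples = [(random.uniform(-5, 5), random.uniform(-5, 5), random.uniform(-5, 5))
           for _ in range(1000)]
triangle_ok = all(D(x, z) <= D(x, y) + D(y, z) + 1e-12 for x, y, z in triples)
contraction_ok = all(abs(D(g(x), g(y)) - D(x, y) / 2) < 1e-12
                     for x, y, _ in triples)

# Banach iteration (Theorem 3.6.2): g^n(w) -> 0 in the metametric D, and the
# fixed point 0 lies in the domain of reflexivity (Observation 3.6.3).
w = 5.0
for _ in range(100):
    w = g(w)
```

Note that D(x, x) = 2|x| vanishes only at the fixed point, so Zrefl = {0} here, matching Observation 3.6.3.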
Recall that a metric is said to be compatible with a topology if that topology
is equal to the topology induced by the metric. We now generalize this concept by
introducing the notion of compatibility between a topology and a metametric.
Definition 3.6.4. Let (Z, D) be a metametric space. A topology T on Z is
compatible with the metametric D if for every ξ ∈ Zrefl , the collection
(3.6.1)
{BD (ξ, r) := {y ∈ Z : D(ξ, y) < r} : r > 0}
forms a neighborhood base for T at ξ.
Note that unlike a metric, a metametric may have multiple topologies with
which it is compatible.7 The metametric is viewed as determining a neighborhood
base for points in the domain of reflexivity; neighborhood bases for other points
must arise from some other structure. In the case we are interested in, namely
the case where the underlying space for the metametric is the Gromov closure of
a hyperbolic metric space X, the topology on the complement of the domain of
reflexivity will come from the original metric d on X.
Remark 3.6.5. If (Z, D) is a metametric space with a compatible topology T ,
then there are two notions of what it means for a sequence (x_n)₁^∞ in Z to converge
to a point x ∈ Z: the sequence may converge with respect to the topology T , or
it may converge with respect to the metametric (i.e. D(xn , x) → 0). The relation
between these notions is as follows: xn → x with respect to the metametric D if
and only if both of the following hold: xn → x with respect to the topology T ,
and x ∈ Zrefl.
Remark 3.6.6. If a metametric D on a set Z is compatible with a topology
T , then the metric D ↿ Zrefl is compatible with the topology T ↿ Zrefl. However,
the converse does not necessarily hold.
For the remainder of this chapter, we fix a hyperbolic metric space X, and we
let T be the topology on bord X introduced in §3.4.2. We will consider various
metrics and metametrics on bord X which are compatible with the topology T .
7The topology considered in [172, p.19] is the finest topology compatible with a given metametric.
3.6.2. The visual metametric based at a point w ∈ X. The first metametric that we will consider is designed to emulate the Euclidean or “spherical” metric on the boundary of the ball model B. Recall from Lemma 3.5.1 that

(3.5.2)  (1/2)‖y − x‖ = e^{−⟨x|y⟩_0}  for all x, y ∈ ∂B.

The metric (x, y) ↦ (1/2)‖y − x‖ can be thought of as “seen from 0”. The expression on the right-hand side makes sense if x, y ∈ bord B, and defines a metametric on bord B. Moreover, the formula can be generalized to an arbitrary strongly hyperbolic metric space:
Observation 3.6.7. If X is a strongly hyperbolic metric space, then for each
w ∈ X the map Dw : bord X × bord X → [0, ∞) defined by
(3.6.2)
Dw (x, y) := e−hx|yiw
is a complete metametric on bord X. This metametric is compatible with the
topology T ; moreover, its domain of reflexivity is ∂X.
Proof. Reverse reflexivity and the fact that (bord X)refl = ∂X follow directly
from Observation 3.4.14; symmetry follows from (a) of Proposition 3.3.3 together
with Corollary 3.4.12; the triangle inequality follows from the definition of strong
hyperbolicity together with Corollary 3.4.12.
To show that D_w is complete, suppose that (x_n)₁^∞ is a Cauchy sequence in X. Applying (3.6.2), we see that ⟨x_n|x_m⟩_w → ∞ as n, m → ∞, i.e. (x_n)₁^∞ is a Gromov sequence. Letting ξ = [(x_n)₁^∞], we have x_n → ξ in the D_w metametric. Thus every Cauchy sequence in X converges in bord X. Since X is dense in bord X, a standard approximation argument shows that bord X is complete.
Given ξ ∈ (bord X)refl = ∂X, the collection (3.6.1) is equal to the collection
(3.4.11), and is therefore a neighborhood base for T at ξ. Thus Dw is compatible
with T .
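For a concrete instance of Observation 3.6.7, take X = ℝ with the usual metric; ℝ is an ℝ-tree and hence strongly hyperbolic (cf. Observation 3.2.6 and Proposition 3.3.4). The sketch below (an illustration, not from the text) checks the triangle inequality for D_w(x, y) = e^{−⟨x|y⟩_w} on random samples and exhibits the failure of reflexivity at interior points.

```python
import math
import random

def gp(x, y, w):
    """Gromov product on the R-tree (R, |.|)."""
    return 0.5 * (abs(x - w) + abs(y - w) - abs(x - y))

def D_w(x, y, w=0.0):
    """Visual metametric of Observation 3.6.7 with base point w."""
    return math.exp(-gp(x, y, w))

random.seed(2)
pts = [random.uniform(-10, 10) for _ in range(300)]
triangle_ok = all(
    D_w(x, z) <= D_w(x, y) + D_w(y, z) + 1e-12
    for x, y, z in zip(pts[::3], pts[1::3], pts[2::3]))

# Reflexivity fails off the boundary: D_w(x, x) = e^{-d(x, w)} > 0,
# so the domain of reflexivity is the Gromov boundary, not X itself.
refl_defect = D_w(3.0, 3.0)   # equals e^{-3}
```

The positive value of D_w(x, x) for x ∈ X is exactly why D_w is only a metametric: its domain of reflexivity is ∂X, as the observation states.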
Next, we drop the assumption that X is strongly hyperbolic. Fix b > 1 and w ∈ X, and consider the function

(3.6.3)  D_{b,w}(x, y) = inf_{(x_i)₀ⁿ} Σ_{i=0}^{n−1} b^{−⟨x_i|x_{i+1}⟩_w},

where the infimum is taken over finite sequences (x_i)₀ⁿ satisfying x₀ = x and x_n = y.
Proposition 3.6.8. If b > 1 is sufficiently close to 1, then for each w ∈ X, the function D_{b,w} defined by (3.6.3) is a complete metametric on bord X satisfying the following inequality:

(3.6.4)  b^{−⟨x|y⟩_w}/4 ≤ D_{b,w}(x, y) ≤ b^{−⟨x|y⟩_w}.
This metametric is compatible with the topology T ; moreover, its domain of reflexivity is ∂X.
We will refer to Db,w as the “visual (meta)metric from the point w with respect
to the parameter b”.
Remark 3.6.9. The metric Db,w ↿ ∂X has been referred to in the literature as
the Bourdon metric.
Remark 3.6.10. The first part of Proposition 3.6.8 is [172, Propositions 5.16
and 5.31].
Proof of Proposition 3.6.8. Let δ ≥ 0 be the implied constant in Gromov's inequality, and fix 1 < b ≤ 2^{1/δ}. Then raising b^{−1} to the power of both sides of Gromov's inequality gives

  b^{−⟨x|z⟩_w} ≤ 2 max( b^{−⟨x|y⟩_w}, b^{−⟨y|z⟩_w} ),

i.e. the function (x, y) ↦ b^{−⟨x|y⟩_w}
satisfies the “weak triangle inequality” of [154]. A straightforward adaptation of
the proof of [154, Theorem 1.2] demonstrates (3.6.4). Condition (II) of being a
metametric and the equality (bord X)refl = ∂X now follow from Observation 3.4.14.
Conditions (III) and (IV) of being a metametric are immediate from (3.6.3).
The argument for completeness is the same as in the proof of Observation 3.6.7.
Finally, given ξ ∈ (bord X)refl = ∂X, we observe that although the collections
(3.6.1) and (3.4.11) are no longer equal, (3.6.4) guarantees that the filters they
generate are equal, which is enough to show that Db,w is compatible with T .
Remark 3.6.11. If X is strongly hyperbolic, then Proposition 3.6.8 holds for
all 1 < b ≤ e; moreover, the metametric De,w is equal to the metametric Dw defined
in Observation 3.6.7.
Remark 3.6.12. If (X, d) is an R-tree, then for all t > 0, (X, td) is also an
R-tree and is therefore strongly hyperbolic (by Observation 3.2.6 and Proposition
3.3.4). It follows that Proposition 3.6.8 holds for all b > 1.
For the remainder of this chapter, we fix b > 1 close enough to 1 so that
Proposition 3.6.8 holds.
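The infimum (3.6.3) becomes exactly computable once the chains are restricted to a finite configuration: it is then a shortest-path problem with edge weights b^{−⟨x_i|x_j⟩_w}, and restricting the chains can only increase the infimum, so both bounds of (3.6.4) remain valid tests. The sketch below (an illustration, not from the text) runs Floyd–Warshall on random points of the upper half-plane, which is strongly hyperbolic, so b = e is permitted by Remark 3.6.11; the distance formula cosh d = 1 + ‖q − p‖²/(2p₁q₁) is assumed.

```python
import math
import random

def d_h2(p, q):
    """Upper half-plane distance: cosh d = 1 + |q - p|^2 / (2 p1 q1)."""
    diff2 = (p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2
    return math.acosh(1 + diff2 / (2 * p[0] * q[0]))

def gp(x, y, w):
    return 0.5 * (d_h2(x, w) + d_h2(y, w) - d_h2(x, y))

random.seed(3)
w = (1.0, 0.0)
pts = [(random.uniform(0.1, 10.0), random.uniform(-10.0, 10.0))
       for _ in range(30)]
n = len(pts)
# one-step chain weights b^{-<xi|xj>_w}, with b = e
W = [[math.exp(-gp(pts[i], pts[j], w)) for j in range(n)] for i in range(n)]
# Floyd-Warshall in (min, +): D[i][j] = infimum of (3.6.3) over chains
# staying inside the sample
D = [row[:] for row in W]
for k in range(n):
    for i in range(n):
        for j in range(n):
            if D[i][k] + D[k][j] < D[i][j]:
                D[i][j] = D[i][k] + D[k][j]

# sandwich (3.6.4): b^{-<x|y>_w}/4 <= D_{b,w}(x,y) <= b^{-<x|y>_w}
sandwich_ok = all(
    W[i][j] / 4 - 1e-12 <= D[i][j] <= W[i][j] + 1e-12
    for i in range(n) for j in range(n) if i != j)
# strong hyperbolicity: chains never beat the one-step weight (Remark 3.6.11)
no_shortcut = all(abs(D[i][j] - W[i][j]) < 1e-9
                  for i in range(n) for j in range(n))
```

That chains give no improvement here reflects Remark 3.6.11: in a strongly hyperbolic space, D_{e,w} equals e^{−⟨·|·⟩_w} on the nose.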
3.6.3. The extended visual metric on bord X. Although the metametric
Db,w has the advantage of being directly linked to the Gromov product via (3.6.4),
it is sometimes desirable to put a metric on bord X, not just a metametric. We
show now that such a metric can be constructed which agrees with Db,w on ∂X.
In the following proposition, we use the convention that d(x, y) = ∞ if x, y ∈ bord X and either x ∈ ∂X or y ∈ ∂X.

Proposition 3.6.13. Fix w ∈ X, and for all x, y ∈ bord X let

  D̄_{b,w}(x, y) = min( log(b) d(x, y), D_{b,w}(x, y) ).

Then D̄ = D̄_{b,w} is a complete metric on bord X which agrees with D = D_{b,w} on ∂X and induces the topology T.

We call the metric D̄ an extended visual metric.
As an immediate consequence we have the following result which was promised
in §3.4.2:
Corollary 3.6.14. The topological space (bord X, T ) is completely metrizable.
Proof of Proposition 3.6.13. Let us show that D̄ is a metric. Conditions (I)–(III) are obvious. To demonstrate the triangle inequality, fix x, y, z ∈ bord X.

(1) If D̄(x, y) = log(b)d(x, y) and D̄(y, z) = log(b)d(y, z), then D̄(x, z) ≤ log(b)d(x, z) ≤ log(b)d(x, y) + log(b)d(y, z) = D̄(x, y) + D̄(y, z). Similarly, if D̄(x, y) = D(x, y) and D̄(y, z) = D(y, z), then D̄(x, z) ≤ D(x, z) ≤ D(x, y) + D(y, z) = D̄(x, y) + D̄(y, z).

(2) If D̄(x, y) = log(b)d(x, y) and D̄(y, z) = D(y, z), fix ε > 0, and let y = y₀, y₁, . . . , y_n = z be a sequence such that

  Σ_{i=0}^{n−1} b^{−⟨y_i|y_{i+1}⟩_w} ≤ D(y, z) + ε.

Let x_i = y_i for i ≥ 1 but let x₀ = x. Then by (e) of Proposition 3.3.3 and the inequality

  b^{−t} ≤ s log(b) + b^{−(t+s)}   (s, t ≥ 0),

we have

  b^{−⟨x|y₁⟩_w} ≤ log(b)d(x, y) + b^{−⟨y|y₁⟩_w}.

It follows that

  D̄(x, z) ≤ D(x, z) ≤ Σ_{i=0}^{n−1} b^{−⟨x_i|x_{i+1}⟩_w} = b^{−⟨x|y₁⟩_w} + Σ_{i=1}^{n−1} b^{−⟨y_i|y_{i+1}⟩_w}
    ≤ log(b)d(x, y) + b^{−⟨y|y₁⟩_w} + Σ_{i=1}^{n−1} b^{−⟨y_i|y_{i+1}⟩_w}
    ≤ log(b)d(x, y) + D(y, z) + ε = D̄(x, y) + D̄(y, z) + ε.

Taking the limit as ε goes to zero finishes the proof.

(3) The third case is identical to the second.
If a sequence (x_n)₁^∞ is Cauchy with respect to D̄, then Ramsey's theorem (for example) guarantees that some subsequence is Cauchy with respect to either d or D. This subsequence converges with respect to that metametric, and therefore also with respect to D̄. It follows that the entire sequence converges, and therefore that D̄ is complete.
Finally, to show that D̄ induces the topology T, suppose that U ⊆ bord X is open in T, and fix x ∈ U. If x ∈ X, then B_d(x, r) ⊆ U for some r > 0. On the other hand, by the triangle inequality D(x, y) ≥ (1/2)D(x, x) > 0 for all y ∈ bord X. Letting r̃ = min(r, (1/2)D(x, x)), we have B_D̄(x, r̃) ⊆ B_d(x, r) ⊆ U. If x ∈ ∂X, then N_t(x) ⊆ U for some t ≥ 0; letting C be the implied constant of (3.6.4), we have B_D̄(x, e^{−t}/C) = B_D(x, e^{−t}/C) ⊆ N_t(x) ⊆ U. Thus U is open in the topology generated by the metric D̄. The converse direction is similar but simpler, and will be omitted.
Remark 3.6.15. The proof of Proposition 3.6.13 actually shows more, namely that

  D(x, z) ≤ D̄(x, y) + D(y, z)   ∀x, y, z ∈ bord X.

Since D(x, x) = b^{−‖x‖} = inf_{y∈bord X} D(x, y), plugging in x = z gives

  b^{−‖x‖} ≤ b^{−‖y‖} + D̄(x, y)   ∀x, y ∈ bord X.
Remark 3.6.16. Although the metric D̄ is convenient since it induces the correct topology on bord X, it is not a generalization of the Euclidean metric on the closure of an algebraic hyperbolic space. Indeed, when X = B², then D̄ is not bi-Lipschitz equivalent to the Euclidean metric on bord X.
3.6.4. The visual metametric based at a point ξ ∈ ∂X. Our final metametric is supposed to generalize the Euclidean metric on the boundary of the half-space model E. This metric should be thought of as “seen from the point ∞”.
Notation 3.6.17. If X is a hyperbolic metric space and ξ ∈ ∂X, then let
Eξ := bord X \ {ξ}.
Since we have not yet introduced a formula analogous to (3.5.2) for the Euclidean metric on ∂E \ {∞}, we will instead motivate the visual metametric based
at a point ξ ∈ ∂X by considering a sequence (w_n)₁^∞ in X converging to ξ, and
taking the limits of their visual metametrics.
In fact, Db,wn (y1 , y2 ) → 0 for every y1 , y2 ∈ Eξ . Some normalization is needed.
Lemma 3.6.18. Fix o ∈ X, and suppose w_n → ξ ∈ ∂X. Then for all y₁, y₂ ∈ E_ξ,

  b^{‖w_n‖} D_{b,w_n}(y₁, y₂) →_{n,×} b^{−[⟨y₁|y₂⟩_o − Σ_{i=1}^{2} ⟨y_i|ξ⟩_o]},

with →_n if X is strongly hyperbolic.

Proof.

  b^{‖w_n‖} D_{b,w_n}(y₁, y₂) ≍× b^{−[⟨y₁|y₂⟩_{w_n} − ‖w_n‖]}    (by (3.6.4))
    ≍× b^{−[⟨y₁|y₂⟩_o − Σ_{i=1}^{2} ⟨y_i|w_n⟩_o]}    (by (k) of Proposition 3.3.3)
    →_{n,×} b^{−[⟨y₁|y₂⟩_o − Σ_{i=1}^{2} ⟨y_i|ξ⟩_o]}.    (by Lemma 3.4.22)

In each step, equality holds if X is strongly hyperbolic.
We can now construct the visual metametric based at a point ξ ∈ ∂X.
Proposition 3.6.19. For each o ∈ X and ξ ∈ ∂X, there exists a complete metametric D_{b,ξ,o} on E_ξ satisfying

(3.6.5)  D_{b,ξ,o}(y₁, y₂) ≍× b^{−[⟨y₁|y₂⟩_o − Σ_{i=1}^{2} ⟨y_i|ξ⟩_o]},

with equality if X is strongly hyperbolic. The metametric D_{b,ξ,o} is compatible with the topology T ↿ E_ξ; moreover, a set S ⊆ E_ξ is bounded in the metametric D_{b,ξ,o} if and only if ξ ∉ S̄.
Remark 3.6.20. The metric Db,ξ,o ↿ Eξ ∩ ∂X has been referred to in the
literature as the Hamenstädt metric.
Proof of Proposition 3.6.19. Let

  D_{b,ξ,o}(y₁, y₂) = lim sup_{w→ξ} b^{‖w‖} D_w(y₁, y₂)
    := sup{ lim sup_{n→∞} b^{‖w_n‖} D_{w_n}(y₁, y₂) : w_n → ξ }.

Since the class of metametrics is closed under suprema and limits, it follows that D_{b,ξ,o} is a metametric. The asymptotic (3.6.5) follows from Lemma 3.6.18.
For the remainder of this proof, we write D = Db,o and Dξ = Db,ξ,o .
For all x ∈ E_ξ,

(3.6.6)  D_ξ(o, x) ≍× b^{−[⟨o|x⟩_o − ⟨o|ξ⟩_o − ⟨x|ξ⟩_o]} = b^{⟨x|ξ⟩_o} ≍× 1/D(x, ξ),
with equality if X is strongly hyperbolic. It follows that for any set S ⊆ Eξ , the
function Dξ (o, ·) is bounded on S if and only if the function D(·, ξ) is bounded from
below on S. This demonstrates that S is bounded in the D_ξ metametric if and only if ξ ∉ S̄.
Let (x_n)₁^∞ be a Cauchy sequence with respect to D_ξ. Since D ≲× D_ξ, it follows that (x_n)₁^∞ is also Cauchy with respect to the metametric D, so it converges to a point x ∈ bord X with respect to D. If x ∈ E_ξ, then we have

  D_ξ(x_n, x) ≍× b^{⟨x_n|ξ⟩_o + ⟨x|ξ⟩_o} D(x_n, x) →_{n,×} b^{2⟨x|ξ⟩_o} · 0 = 0.

On the other hand, if x = ξ, then the sequence (x_n)₁^∞ is unbounded in the D_ξ metametric, which contradicts the fact that it is Cauchy. Thus D_ξ is complete.
Finally, given η ∈ (Eξ )refl = Eξ ∩ ∂X, consider the filters F1 and F2 generated
by the collections {BD (η, r) : r > 0} and {BDξ (η, r) : r > 0}, respectively. Since
D .× Dξ , we have F2 ⊆ F1 . Conversely, since BDξ (η, 1) is bounded in the Dξ
metametric, its closure does not contain ξ, and so the function h·|ξio is bounded
on this set. Thus D ≍×,η Dξ on BDξ (η, 1). Letting C be the implied constant of
the asymptotic, we have BDξ (η, min(r, 1)) ⊆ BD (η, Cr), which demonstrates that
F1 ⊆ F2 . Thus Dξ is compatible with the topology T ↿ Eξ .
From Lemma 3.6.18 and Proposition 3.6.19 it immediately follows that

(3.6.7)  b^{d(o,w_n)} D_{b,w_n}(y₁, y₂) →_{n,×} D_{b,ξ,o}(y₁, y₂)

whenever (w_n)₁^∞ ∈ ξ.
Remark 3.6.21. It is not clear whether a result analogous to Proposition 3.6.13 holds for the metametric D_{b,ξ,o}. A straightforward adaptation of the proof of Proposition 3.6.19 does not work, since

  b^{‖w_n‖} D̄_{b,w_n}(x, y) = min( b^{‖w_n‖} log(b) d(x, y), b^{‖w_n‖} D_{b,w_n}(x, y) )
    →_{n,×} min( ∞, D_{b,ξ,o}(x, y) ) = D_{b,ξ,o}(x, y).
We finish this chapter by describing the relation between the visual metametric
based at ∞ and the Euclidean metric on the boundary of the half-space model E.
Proposition 3.6.22 (Cf. Figure 3.6.1). Let X = E = Eα , let o = (1, 0) ∈ X,
and fix x, y ∈ E∞ = E ∪ B. We have
(3.6.8)
De,∞,o (x, y) ≍× max(x1 , y1 , ky − xk),
with equality if x, y ∈ B = ∂E \ {∞}.
Figure 3.6.1. The Hamenstädt distance De,∞,o (x, y) between
two points x, y ∈ E is coarsely asymptotic to the maximum of
the following three quantities: x1 , y1 , and ky − xk. Equivalently,
De,∞,o (x, y) is coarsely asymptotic to the length of the shortest
path which both connects x and y and touches B.
Proof. First suppose that x, y ∈ E. By (h) of Proposition 3.3.3,

  D_{e,∞,o}(x, y) = exp( (1/2)[ d(x, y) + B_∞(o, x) + B_∞(o, y) ] )
    = √(x₁y₁) exp( (1/2) cosh⁻¹( 1 + ‖y − x‖²/(2x₁y₁) ) )    (by (2.5.3) and Proposition 3.5.5)
    ≍× √(x₁y₁) √( 1 + ‖y − x‖²/(2x₁y₁) )    (since e^{t/2} ≍× √(cosh t))
    ≍× √( x₁y₁ + ‖y − x‖² )
    ≍× max( √(x₁y₁), ‖y − x‖ ).

Since √(x₁y₁) ≤ max(x₁, y₁), this demonstrates the ≲ direction of (3.6.8). Since y₁ ≤ x₁ + ‖y − x‖ and x₁ ≤ y₁ + ‖y − x‖, we have

  max(x₁, y₁) ≲× max( min(x₁, y₁), ‖y − x‖ ) ≤ max( √(x₁y₁), ‖y − x‖ ),

which demonstrates the reverse inequality. Thus (3.6.8) holds for x, y ∈ E; a continuity argument demonstrates (3.6.8) for x, y ∈ E_∞.
If x, y ∈ B, then

  D_{e,∞,o}(x, y) = lim_{a,b→0} √(ab) exp( (1/2) cosh⁻¹( 1 + ‖y − x‖²/(2ab) ) )
    = lim_{a,b→0} √(ab) · √2 · √( 1 + ‖y − x‖²/(2ab) )    (since lim_{t→∞} e^{t/2}/√(2 cosh t) = 1)
    = lim_{a,b→0} √( 2ab + ‖y − x‖² ) = √(‖y − x‖²) = ‖y − x‖,

where a and b denote the heights of points of E converging to x and y, respectively.
Corollary 3.6.23 (Cf. [165, Fig. 5]). For x, y ∈ E, we have

  e^{d(x,y)} ≍× max(x₁², y₁², ‖y − x‖²) / (x₁y₁).
Proof. The result follows from

  max(x₁², y₁², ‖y − x‖²) ≍× D_{e,∞,o}(x, y)² = exp( d(x, y) + B_∞(o, x) + B_∞(o, y) ) = x₁y₁ e^{d(x,y)},

which may be easily rearranged to complete the proof.
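Corollary 3.6.23 is easy to probe numerically. The sketch below (an illustration, not from the text) samples pairs in the upper half-plane and checks that e^{d(x,y)} and max(x₁², y₁², ‖y − x‖²)/(x₁y₁) stay within a bounded multiplicative constant of each other; the distance formula is the same one quoted in the proofs above.

```python
import math
import random

def d_h2(p, q):
    """Upper half-plane distance: cosh d = 1 + |q - p|^2 / (2 p1 q1)."""
    diff2 = (p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2
    return math.acosh(1 + diff2 / (2 * p[0] * q[0]))

random.seed(5)
ratios = []
for _ in range(5000):
    x = (random.uniform(0.05, 50.0), random.uniform(-50.0, 50.0))
    y = (random.uniform(0.05, 50.0), random.uniform(-50.0, 50.0))
    diff2 = (x[0] - y[0]) ** 2 + (x[1] - y[1]) ** 2
    rhs = max(x[0] ** 2, y[0] ** 2, diff2) / (x[0] * y[0])
    ratios.append(math.exp(d_h2(x, y)) / rhs)

# Since e^d = cosh d + sinh d and cosh d = (2 x1 y1 + |y-x|^2)/(2 x1 y1),
# the ratio is pinned between 1/2 and 3.
lo, hi = min(ratios), max(ratios)
```

The explicit constants 1/2 and 3 follow from cosh d ≤ e^d ≤ 2 cosh d together with max(x₁², y₁², ‖y − x‖²) ≤ 2x₁y₁ + ‖y − x‖² ≤ 3 max(x₁², y₁², ‖y − x‖²).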
CHAPTER 4
More about the geometry of hyperbolic metric
spaces
In this chapter we discuss various topics regarding the geometry of hyperbolic
metric spaces, including metric derivatives, the Rips condition for hyperbolicity,
construction of geodesic rays and lines in CAT(-1) spaces, “shadows at infinity”,
and some functions which we call “generalized polar coordinates”. We start by
introducing some conventions to apply in the remainder of the paper.
4.1. Gromov triples
The following definition is made for convenience of notation:
Definition 4.1.1. A Gromov triple is a triple (X, o, b), where X is a hyperbolic
metric space, o ∈ X, and b > 1 is close enough to 1 to guarantee for every w ∈ X
the existence of a visual metametric Db,w via Proposition 3.6.8 above.
Notation 4.1.2. Let (X, o, b) be a Gromov triple. Given w ∈ X and ξ ∈ ∂X, we will let D_w = D_{b,w} be the metametric defined in Proposition 3.6.8, we will let D̄_w = D̄_{b,w} be the metric defined in Proposition 3.6.13, and we will let D_{ξ,w} = D_{b,ξ,w} be the metametric defined in Proposition 3.6.19. If w = o, then we use the further shorthand D = D_o, D̄ = D̄_o, and D_ξ = D_{ξ,o}.

We will denote the diameter of a set S with respect to the metametric D by Diam(S).
Convention 7. For the remainder of the paper, with the exception of Chapter
5, all statements should be assumed to be universally quantified over Gromov triples
(X, o, b) unless context indicates otherwise.
Convention 8. For the remainder of the paper, whenever we make statements
of the form “Let X = Y ”, where Y is a hyperbolic metric space, we implicitly want
to “beef up” X into a Gromov triple (X, o, b) whose underlying hyperbolic metric
space is Y . For general Y , this may be done arbitrarily, but if Y is strongly
hyperbolic, we want to set b = e, and if Y is an algebraic hyperbolic space, then
we want to set o = [(1, 0)], o = 0, or o = (1, 0) depending on whether Y is the
hyperboloid model H, the ball model B, or the half-space model E, respectively.
For example, when saying “Let X = H = H∞ ”, we really mean “Let X = H =
H∞ , let o = [(1, 0)], and let b = e.”
Convention 9. The term “Standard Case” will always refer to the finite-dimensional situation where X = H^d for some 2 ≤ d < ∞.
4.2. Derivatives
4.2.1. Derivatives of metametrics. Let (Z, T) be a perfect topological space, and let D₁ and D₂ be two metametrics on Z. The metric derivative of D₁ with respect to D₂ is the function D₁/D₂ : Z → [0, ∞] defined by

  (D₁/D₂)(z) := lim_{w→z} D₁(z, w)/D₂(z, w),

assuming the limit exists. If the limit does not exist, then we can speak of the upper and lower derivatives; these will be denoted (D₁/D₂)^∗(z) and (D₁/D₂)_∗(z), respectively. Note that the chain rule for metric derivatives takes the following form:

  D₁/D₃ = (D₁/D₂) · (D₂/D₃),

assuming all limits exist.
We proceed to calculate the derivatives of the metametrics that were introduced
in Section 3.6:
Observation 4.2.1. Fix y₁, y₂ ∈ bord X.

(i) For all w₁, w₂ ∈ X, we have

(4.2.1)  D_{w₁}(y₁, y₂)/D_{w₂}(y₁, y₂) ≍× b^{−(1/2)[B_{y₁}(w₁,w₂) + B_{y₂}(w₁,w₂)]}.

(ii) For all w ∈ X and ξ ∈ ∂X, we have

(4.2.2)  D_{ξ,w}(y₁, y₂)/D_w(y₁, y₂) ≍× b^{−[⟨y₁|ξ⟩_w + ⟨y₂|ξ⟩_w]}.

(iii) For all w₁, w₂ ∈ X and ξ ∈ ∂X, we have

(4.2.3)  D_{ξ,w₁}(y₁, y₂)/D_{ξ,w₂}(y₁, y₂) ≍× b^{B_ξ(w₁,w₂)}.

In each case, equality holds if X is strongly hyperbolic.
Proof. (i) follows from (g) of Proposition 3.3.3, while (ii) is immediate from
(3.6.4) and (3.6.5). (iii) follows from (3.6.7).
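For finite points, part (i) is in fact an exact identity in any metric space, since ⟨y₁|y₂⟩_{w₂} − ⟨y₁|y₂⟩_{w₁} = −(1/2)[B_{y₁}(w₁, w₂) + B_{y₂}(w₁, w₂)] when B_y(w₁, w₂) = d(y, w₁) − d(y, w₂). The sketch below (our own check, not from the text; the sign convention for B is an assumption, cf. Proposition 3.3.3) verifies this on the ℝ-tree X = ℝ with D_w = e^{−⟨·|·⟩_w}, where b = e is permitted.

```python
import math
import random

def gp(x, y, w):
    """Gromov product on the R-tree (R, |.|)."""
    return 0.5 * (abs(x - w) + abs(y - w) - abs(x - y))

def B(y, w1, w2):
    """Busemann cocycle at a finite point, with the assumed sign
    convention B_y(w1, w2) = d(y, w1) - d(y, w2)."""
    return abs(y - w1) - abs(y - w2)

random.seed(6)
max_err = 0.0
for _ in range(1000):
    y1, y2, w1, w2 = (random.uniform(-10, 10) for _ in range(4))
    lhs = math.exp(-gp(y1, y2, w1)) / math.exp(-gp(y1, y2, w2))
    rhs = math.exp(-0.5 * (B(y1, w1, w2) + B(y2, w1, w2)))
    max_err = max(max_err, abs(lhs / rhs - 1))   # relative error
```

The asymptotic ≍× in (4.2.1) is only needed when y₁ or y₂ is a boundary point, where B and the Gromov product are themselves defined via limits.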
Combining with Lemma 3.4.22 yields the following:
Corollary 4.2.2. Suppose that bord X is perfect. Fix y ∈ bord X.

(i) For all w₁, w₂ ∈ X, we have

(4.2.4)  (D_{w₁}/D_{w₂})^∗(y) ≍× (D_{w₁}/D_{w₂})_∗(y) ≍× b^{−B_y(w₁,w₂)}.

(ii) For all w ∈ X and ξ ∈ ∂X, we have

(4.2.5)  (D_{ξ,w}/D_w)^∗(y) ≍× (D_{ξ,w}/D_w)_∗(y) ≍× b^{−2⟨y|ξ⟩_w}.

(iii) For all w₁, w₂ ∈ X and ξ ∈ ∂X, we have

(4.2.6)  (D_{ξ,w₁}/D_{ξ,w₂})^∗(y) ≍× (D_{ξ,w₁}/D_{ξ,w₂})_∗(y) ≍× b^{B_ξ(w₁,w₂)}.

In each case, equality holds if X is strongly hyperbolic.
Remark 4.2.3. In case bord X is not perfect, (4.2.4) - (4.2.6) may be taken as
definitions. We will ignore the issue henceforth.
Combining Observation 4.2.1 with Corollary 4.2.2 yields the following:
Proposition 4.2.4 (Geometric mean value theorem). Fix y₁, y₂ ∈ bord X.

(i) For all w₁, w₂ ∈ X, we have

  D_{w₁}(y₁, y₂)/D_{w₂}(y₁, y₂) ≍× [ (D_{w₁}/D_{w₂})(y₁) · (D_{w₁}/D_{w₂})(y₂) ]^{1/2}.

(ii) For all w ∈ X and ξ ∈ ∂X, we have

  D_{ξ,w}(y₁, y₂)/D_w(y₁, y₂) ≍× [ (D_{ξ,w}/D_w)(y₁) · (D_{ξ,w}/D_w)(y₂) ]^{1/2}.

(iii) For all w₁, w₂ ∈ X and ξ ∈ ∂X, we have

  D_{ξ,w₁}(y₁, y₂)/D_{ξ,w₂}(y₁, y₂) ≍× [ (D_{ξ,w₁}/D_{ξ,w₂})(y₁) · (D_{ξ,w₁}/D_{ξ,w₂})(y₂) ]^{1/2}.

In each case, equality holds if X is strongly hyperbolic.
4.2.2. Derivatives of maps. As before, let (Z, T) be a perfect topological space, and now fix just one metametric D on Z. For any map g : Z → Z, the metric derivative of g is the function g′ : Z → (0, ∞) defined by

  g′(z) := (D ∘ g / D)(z) = lim_{w→z} D(g(z), g(w))/D(z, w).

If the limit does not exist, the upper and lower metric derivatives will be denoted g′^∗ and g′_∗, respectively.
Remark 4.2.5. To avoid confusion, in what follows g ′ will always denote the
derivative of an isometry g ∈ Isom(X) with respect to the metametric D = Db,o ,
rather than with respect to any other metametric.
Proposition 4.2.6. For all g ∈ Isom(X),

  g′^∗(y) ≍× g′_∗(y) ≍× b^{−B_y(g^{−1}(o), o)}   ∀y ∈ bord X,

  D(g(y₁), g(y₂))/D(y₁, y₂) ≍× ( g′(y₁) g′(y₂) )^{1/2}   ∀y₁, y₂ ∈ bord X,

with equality if X is strongly hyperbolic.
Proof. This follows from (i) of Corollary 4.2.2, (i) of Proposition 4.2.4, and
the fact that D ◦ g = Dg−1 (o) .
Corollary 4.2.7. For any distinct y1 , y2 ∈ Fix(g) ∩ ∂X we have
g ′ (y1 )g ′ (y2 ) ≍× 1,
with equality if X is strongly hyperbolic.
The next proposition shows the relation between the derivative of an isometry
g ∈ Isom(X) at a point ξ ∈ Fix(g) and the action on the metametric space (Eξ , Dξ ):
Proposition 4.2.8. Fix g ∈ Isom(X) and ξ ∈ Fix(g). Then for all y₁, y₂ ∈ E_ξ,

  D_ξ(g(y₁), g(y₂))/D_ξ(y₁, y₂) ≍× 1/g′(ξ),

with equality if X is strongly hyperbolic.
Proof.

  D_ξ(g(y₁), g(y₂))/D_ξ(y₁, y₂) = D_{ξ,g^{−1}(o)}(y₁, y₂)/D_{ξ,o}(y₁, y₂)
    ≍× b^{−B_ξ(o, g^{−1}(o))}    (by (4.2.3))
    ≍× 1/g′(ξ).    (by Proposition 4.2.6)
Remark 4.2.9. Proposition 4.2.8 can be interpreted as a geometric mean value
theorem for the action of g on the metametric space (Eξ , Dξ ). Specifically, it tells
us that the derivative of g on this metametric space is identically 1/g ′ (ξ).
Remark 4.2.10. If g′(ξ) = 1, then Proposition 4.2.8 tells us that the bi-Lipschitz constant of g is independent of g, and that g is an isometry if X is strongly hyperbolic. This special case will be important in Chapter 11.
Example 4.2.11. Suppose that X = E = E^α is the half-space model of a real hyperbolic space, let B = ∂E \ {∞}, let g(x) = λT(x) + b be a similarity of B, and consider the Poincaré extension ĝ ∈ Isom(E) defined in Observation 2.5.6. Clearly
Figure 4.2.1. The derivative of g at ∞ is equal to the reciprocal
of the dilatation ratio of g. In particular, ∞ is an attracting fixed
point if and only if g is expanding, and ∞ is a repelling fixed point
if and only if g is contracting.
ĝ acts as a similarity on the metametric space (E_∞, D_∞) in the following sense: for all y₁, y₂ ∈ E_∞,

  D_∞(ĝ(y₁), ĝ(y₂)) = λ D_∞(y₁, y₂).

Comparing with Proposition 4.2.8 shows that ĝ′(∞) = 1/λ.
4.2.3. The dynamical derivative. We can interpret Corollary 4.2.2 as saying that the metric derivative is well-defined only up to an asymptotic in a general
hyperbolic metric space (although it is perfectly well defined in a strongly hyperbolic metric space). Nevertheless, if ξ is a fixed point of the isometry g, then we
can iterate in order to get arbitrary accuracy.
Proposition 4.2.12. Fix g ∈ Isom(X) and ξ ∈ Fix(g). Then

  g′(ξ) := lim_{n→∞} [ (gⁿ)′^∗(ξ) ]^{1/n} = lim_{n→∞} [ (gⁿ)′_∗(ξ) ]^{1/n}.

Furthermore,

  g′_∗(ξ) ≤ g′(ξ) ≤ g′^∗(ξ).
The number g ′ (ξ) will be called the dynamical derivative of g at ξ.
Proof of Proposition 4.2.12. The limits converge due to the submultiplicativity and supermultiplicativity of the expressions inside the radicals, respectively. To see that they converge to the same number, note that by Corollary 4.2.2

  lim_{n→∞} [ (gⁿ)′^∗(ξ) / (gⁿ)′_∗(ξ) ]^{1/n} ≤ lim_{n→∞} C^{1/n} = 1
for some constant C independent of n.
Remark 4.2.13. Let β_ξ denote the Busemann quasicharacter of [48, p.14]. Then β_ξ is related to the dynamical derivative via the following formula: g′(ξ) = b^{−β_ξ(g)}.
Note that although the dynamical derivative is “well-defined”, it is not necessarily the case that the chain rule holds for any two g, h ∈ Stab(Isom(X); ξ) (although it must hold up to a multiplicative coarse asymptotic). For a counterexample see [48, Example 3.12]. Note that this counterexample includes the possibility of two elements g, h ∈ Stab(Isom(X); ξ) such that g′(ξ) = h′(ξ) = 1 but (gh)′(ξ) ≠ 1. A sufficient condition for the chain rule to hold exactly is given in [48, Corollary 3.9].
Despite the failure of the chain rule, the following “iteration” version of the
chain rule holds:
Proposition 4.2.14. Fix g ∈ Isom(X) and ξ ∈ Fix(g). Then

  (gⁿ)′(ξ) = [g′(ξ)]ⁿ   ∀n ∈ Z.

In particular,

(4.2.7)  (g^{−1})′(ξ) = 1/g′(ξ).
Proof. The only difficulty lies in establishing (4.2.7):

  (g^{−1})′(ξ) = lim_{n→∞} [ (g^{−n})′(ξ) ]^{1/n} = exp_{1/b}( lim_{n→∞} (1/n) B_ξ(gⁿ(o), o) )
    = exp_{1/b}( lim_{n→∞} (1/n) B_ξ(o, g^{−n}(o)) )
    = exp_{1/b}( −lim_{n→∞} (1/n) B_ξ(g^{−n}(o), o) )
    = 1/g′(ξ).
Combining with Corollary 4.2.7 yields the following:
Corollary 4.2.15. For any distinct y1 , y2 ∈ Fix(g) ∩ ∂X we have
g ′ (y1 )g ′ (y2 ) = 1.
We end this section with the following result relating the dynamical derivative
with the Busemann function:
Proposition 4.2.16. Fix g ∈ Isom(X) and ξ ∈ Fix(g). Then for all x ∈ X and n ∈ Z,

  B_ξ(x, g^{−n}(x)) ≍+ n log_b g′(ξ),

with equality if X is strongly hyperbolic.
Proof. If x = o, then

  b^{B_ξ(o, g^{−n}(o))} ≍× b^{−B_ξ(g^{−n}(o), o)} ≍× (gⁿ)′^∗(ξ)    (by Proposition 4.2.6)
    ≍× (gⁿ)′(ξ) = (g′(ξ))ⁿ.

For the general case, we note that

  B_ξ(x, g^{−n}(x)) ≍+ B_ξ(x, o) + B_ξ(o, g^{−n}(o)) + B_ξ(g^{−n}(o), g^{−n}(x))
    ≍+ B_ξ(x, o) + n log_b g′(ξ) + B_ξ(o, x)
    ≍+ n log_b g′(ξ).
4.3. The Rips condition
In this section, in addition to assuming that X is a hyperbolic metric space (cf.
§4.1), we assume that X is geodesic. Recall (Section 3.2) that [x, y] denotes the
geodesic segment connecting two points x, y ∈ X.
Proposition 4.3.1.

(i) For all x, y, z ∈ X,

  d(z, [x, y]) ≍+ ⟨x|y⟩_z.

(ii) (Rips' thin triangles condition) For all x, y₁, y₂ ∈ X and for any z ∈ [y₁, y₂], we have

  min_{i=1,2} d(z, [x, y_i]) ≍+ 0.
In fact, the thin triangles condition is equivalent to hyperbolicity; see e.g. [39,
Proposition III.H.1.22].
Proof.

(i) By the intermediate value theorem, there exists w ∈ [x, y] such that ⟨x|z⟩_w = ⟨y|z⟩_w. Applying Gromov's inequality gives ⟨x|z⟩_w = ⟨y|z⟩_w ≲+ ⟨x|y⟩_w = 0. Now (k) of Proposition 3.3.3 shows that

  d(z, [x, y]) ≤ d(z, w) ≲+ ⟨x|y⟩_z.

The other direction is immediate, since for each w ∈ [x, y], we have ⟨x|y⟩_w = 0, and so (d) of Proposition 3.3.3 gives ⟨x|y⟩_z ≤ d(z, w).
4. MORE ABOUT THE GEOMETRY OF HYPERBOLIC METRIC SPACES
Figure 4.3.1. An illustration of Proposition 4.3.1(i).
(ii) This is immediate from (i), Gromov's inequality, and the equation ⟨y₁|y₂⟩_z = 0.
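In a simplicial tree, which is 0-hyperbolic, part (i) of Proposition 4.3.1 holds with equality. The following sketch checks d(z, [x, y]) = ⟨x|y⟩_z exhaustively on short words; the rooted binary tree with its path metric is a standard illustrative model, not a construction from the text.

```python
import itertools

def cp(u, v):
    # Length of the longest common prefix of two binary words.
    n = 0
    while n < min(len(u), len(v)) and u[n] == v[n]:
        n += 1
    return n

def d(u, v):
    # Path metric on the rooted binary tree (vertices = binary words).
    return len(u) + len(v) - 2*cp(u, v)

def geodesic(x, y):
    # [x, y]: descend from x to the common ancestor, then ascend to y.
    k = cp(x, y)
    down = [x[:i] for i in range(len(x), k-1, -1)]
    up = [y[:i] for i in range(k+1, len(y)+1)]
    return down + up

def gromov(x, y, z):
    # Gromov product <x|y>_z = (d(z,x) + d(z,y) - d(x,y)) / 2.
    return (d(z, x) + d(z, y) - d(x, y)) / 2

# In a tree, d(z, [x,y]) = <x|y>_z exactly (the 0-hyperbolic case of 4.3.1(i)).
words = [''.join(w) for n in range(5) for w in itertools.product('01', repeat=n)]
for x, y, z in itertools.product(words, repeat=3):
    assert min(d(z, v) for v in geodesic(x, y)) == gromov(x, y, z)
print("ok")
```

Both quantities equal the distance from z to the median of the triple {x, y, z}, which is why the coarse asymptotic of the proposition collapses to an identity in trees.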
The next lemma demonstrates the correctness of the intuitive notion that if
two points are close to each other, then the geodesic connecting them should not
be very large.
Lemma 4.3.2. Fix x₁, x₂ ∈ bord X. We have
Diam([x₁, x₂]) ≍× D(x₁, x₂).
Proof. It suffices to show that if y ∈ [x₁, x₂], then
D(y, {x₁, x₂}) ≲× D(x₁, x₂).
Indeed, by the thin triangles condition, we may without loss of generality suppose that d(y, [o, x₁]) ≍₊ 0. Write d(y, z) ≍₊ 0 for some z ∈ [o, x₁]. Then
D(x₁, y) ≍× D(x₁, z) = e^{−‖z‖} ≍× e^{−‖y‖} ≤ e^{−d(o,[x₁,x₂])} ≍× e^{−⟨x₁|x₂⟩_o} ≍× D(x₁, x₂).
4.4. Geodesics in CAT(-1) spaces
Observation 4.4.1. Any isometric embedding π : [t, ∞) → X extends uniquely
to a continuous map π : [t, ∞] → bord X. Similarly, any isometric embedding π :
(−∞, +∞) → X extends uniquely to a continuous map π : [−∞, +∞] → bord X.
Abusing terminology, we will also call the extended maps “isometric embeddings”.
Definition 4.4.2. Fix x ∈ X and ξ, η ∈ ∂X.
• A geodesic ray connecting x and ξ is the image of an isometric embedding
π : [0, ∞] → X satisfying
π(0) = x,
π(∞) = ξ.
• A geodesic line or bi-infinite geodesic connecting ξ and η is the image of
an isometric embedding π : [−∞, +∞] → X satisfying
π(−∞) = ξ,
π(+∞) = η.
When we do not wish to distinguish between geodesic segments (cf. Section 3.2),
geodesic rays, and geodesic lines, we shall simply call them geodesics. For x, y ∈
bord X, any geodesic connecting x and y will be denoted [x, y].
Notation 4.4.3. Extending Notation 3.1.6, if [x, ξ] is the image of the isometric
embedding π : [0, ∞] → X, then for t ∈ [0, ∞] we let [x, ξ]t = π(t), i.e. [x, ξ]t is the
unique point on the geodesic ray [x, ξ] such that d(x, [x, ξ]t ) = t.
The main goal of this section is to prove the following:
Proposition 4.4.4. Suppose that X is a complete CAT(-1) space. Then:
(i) For any two distinct points x, y ∈ bord X, there is a unique geodesic [x, y]
connecting them.
(ii) Suppose that (x_n)₁^∞ and (y_n)₁^∞ are sequences in bord X which converge to points x_n → x ∈ bord X and y_n → y ∈ bord X, with x ≠ y. Then [x_n, y_n] → [x, y] in the Hausdorff metric on (bord X, D). If x = y, then [x_n, y_n] → {x} in the Hausdorff metric.
Definition 4.4.5. A hyperbolic metric space X satisfying the conclusion of
Proposition 4.4.4 will be called regularly geodesic.
Remark 4.4.6. The existence of a geodesic connecting any two points in bord X
was proven in [42, Proposition 0.2] under the weaker hypothesis that X is a Gromov
hyperbolic complete CAT(0) space. However, this weaker hypothesis does not imply
the uniqueness of such a geodesic, nor does it imply (ii) of Proposition 4.4.4, as
shown by the following example:
Example 4.4.7 (A proper and uniquely geodesic hyperbolic CAT(0) space which is not regularly geodesic). Let
X = {x ∈ ℝ² : x₂ ∈ [0, 1]}
be interpreted as a subspace of ℝ² with the usual metric. Then X is hyperbolic, proper, and uniquely geodesic, but is not regularly geodesic.
Proof. It is hyperbolic since it is roughly isometric to ℝ. It is uniquely geodesic since it is a convex subset of ℝ². It is proper because it is a closed subset of ℝ². It is not regularly geodesic because if we write ∂X = {ξ₊, ξ₋}, then the two points ξ₊ and ξ₋ have infinitely many distinct geodesics connecting them: for each t ∈ [0, 1], ℝ × {t} is a geodesic connecting ξ₊ and ξ₋.
The proof of Proposition 4.4.4 will proceed through several lemmas, the first
of which is as follows:
Lemma 4.4.8. Fix ε > 0. There exists δ = δ_X(ε) > 0 such that if ∆ = ∆(x, y₁, y₂) is a geodesic triangle in X satisfying
(4.4.1)
D(y₁, y₂) ≤ δ,
then for all t ∈ [0, min_{i=1,2} d(x, y_i)], if z_i = [x, y_i]_t, then
(4.4.2)
D(z₁, z₂) ≤ ε.
Proof. We prove the assertion first for X = H² and then in general:
If X = H²: Let ε > 0, and by contradiction, suppose that for each δ = 1/n > 0 there exists a 5-tuple (x⁽ⁿ⁾, y₁⁽ⁿ⁾, y₂⁽ⁿ⁾, z₁⁽ⁿ⁾, z₂⁽ⁿ⁾) satisfying the hypotheses but not the conclusion of the theorem. Since bord H² is compact, there exists a convergent subsequence
(x^{(n_k)}, y₁^{(n_k)}, y₂^{(n_k)}, z₁^{(n_k)}, z₂^{(n_k)}) → (x, y₁, y₂, z₁, z₂) ∈ (bord H²)⁵.
Taking the limit of (4.4.1) as k → ∞ shows that D(y₁, y₂) = 0, so y₁ = y₂. Conversely, taking the limit of (4.4.2) shows that D(z₁, z₂) ≥ ε > 0, so z₁ ≠ z₂. Write y = y₁ = y₂.
We will take for granted that Proposition 4.4.4 holds when X = H². (This can be proven using the explicit form of geodesics in this space.) It follows that z_i ∈ [x, y] if x ≠ y, and z_i = x if x = y. The second case is clearly a contradiction, so we assume that x ≠ y.
Writing z_i^{(n_k)} = [x^{(n_k)}, y_i^{(n_k)}]_{t_k}, we observe that
t_k − ‖x^{(n_k)}‖ = ⟨z_i^{(n_k)}|y_i^{(n_k)}⟩_o − ⟨z_i^{(n_k)}|x^{(n_k)}⟩_o − ⟨x^{(n_k)}|y_i^{(n_k)}⟩_o → ⟨z_i|y⟩_o − ⟨z_i|x⟩_o − ⟨x|y⟩_o as k → ∞.
Figure 4.4.1. The triangle ∆(x, y₁, y₂).
Since the left hand side is independent of i, so is the right hand side. But the function
z ↦ ⟨z|y⟩_o − ⟨z|x⟩_o − ⟨x|y⟩_o
is an isometric embedding from [x, y] to [−∞, +∞]; it is therefore injective. Thus z₁ = z₂, a contradiction.
In general: Let ε > 0, and fix ε̃ > 0 to be determined, depending on ε. Let δ̃ = δ_{H²}(ε̃), and fix δ > 0 to be determined, depending on δ̃. Now suppose that ∆ = ∆(x, y₁, y₂) is a geodesic triangle in X satisfying (4.4.1), fix t ≥ 0, and let z_i = [x, y_i]_t. To complete the proof, we must show that D(z₁, z₂) ≤ ε.
By contradiction suppose not, i.e. suppose that D(z₁, z₂) > ε. Then D(x, z_i) > ε/2 for some i = 1, 2; without loss of generality suppose D(x, z₁) > ε/2. By Proposition 4.3.1 this implies d(o, [x, z₁]) ≍₊,ε 0; fix w₁ ∈ [x, z₁] with ‖w₁‖ ≍₊,ε 0. Let s = d(x, w₁) ≤ t, and let w₂ = [x, z₂]_s. (See Figure 4.4.1.)
Now let ∆̄ = ∆(x̄, ȳ₁, ȳ₂) be a comparison triangle for ∆(x, y₁, y₂), and let z̄₁, z̄₂, w̄₁, w̄₂ be the corresponding comparison points. Note that z̄_i = [x̄, ȳ_i]_t and w̄_i = [x̄, ȳ_i]_s. Without loss of generality, suppose that w̄₁ = o_H. Then ‖y₂‖ ≤ ‖w₁‖ + d(w₁, y₂) ≍₊,ε d(o_H, ȳ₂), and so ⟨y₁|y₂⟩_o ≲₊,ε ⟨ȳ₁|ȳ₂⟩_{o_H}, and thus
D(ȳ₁, ȳ₂) ≲×,ε D(y₁, y₂) ≤ δ.
Setting δ equal to δ̃ divided by the implied constant, we have
(4.4.3)
D(ȳ₁, ȳ₂) ≤ δ̃ = δ_{H²}(ε̃).
Thus D(z̄₁, z̄₂) ≤ ε̃ and D(w̄₁, w̄₂) ≤ ε̃.
– If d(z̄₁, z̄₂) ≤ ε̃, then the CAT(-1) inequality finishes the proof (as long as ε̃ ≤ ε). Thus, suppose that
d(z̄₁, z̄₂) > ε̃.
– If D(w̄₁, w̄₂) ≤ ε̃, then 0 = ⟨w̄₁|w̄₂⟩_{o_H} ≥ −log(ε̃), a contradiction for ε̃ sufficiently small. Thus, suppose that
(4.4.4)
d(w̄₁, w̄₂) ≤ ε̃.
By (4.4.3), we have d(o_H, z̄_i) ≥ −log(ε̃). Applying (4.4.4) gives
⟨z̄_i|ȳ_i⟩_{w̄_i} = d(w̄_i, z̄_i) = d(w_i, z_i) ≳₊ −log(ε̃).
Applying (4.4.4), the coarse asymptotic ‖w₁‖ ≍₊,ε 0, and the CAT(-1) inequality, we have
⟨z_i|y_i⟩_o ≳₊,ε −log(ε̃),
and thus D(z_i, y_i) ≲×,ε ε̃. Using the triangle inequality together with the assumption D(y₁, y₂) ≤ δ, we have
D(z₁, z₂) ≲×,ε max(δ, ε̃).
Setting ε̃ equal to ε divided by the implied constant, and decreasing δ if necessary, completes the proof.
Notation 4.4.9. If the map π : [t, s] → X is an isometric embedding, then the map π̃ : [−∞, +∞] → X is defined by the equation
π̃(r) = π(t ∨ r ∧ s).
Corollary 4.4.10. If ε, δ, and ∆(x, y₁, y₂) are as in Lemma 4.4.8, and if π₁ : [t, s₁] → [x, y₁] and π₂ : [t, s₂] → [x, y₂] are isometric embeddings, then
D(π̃₁(r), π̃₂(r)) ≲× ε ∀r ∈ [−∞, +∞].
Proof. If r ≤ t, then π̃₁(r) = x = π̃₂(r). If t ≤ r ≤ min(s₁, s₂), then π̃_i(r) = [x, y_i]_{r−t}, allowing us to apply Lemma 4.4.8 directly. Finally, suppose r ≥ r₀ := min(s₁, s₂). Without loss of generality suppose that s₁ ≤ s₂, so that r₀ = s₁. Applying the previous case to r₀, we have
D(y₁, w₂) ≤ ε,
where w₂ = π₂(s₁). Now π̃₁(r) = y₁, and π̃₂(r) ∈ [w₂, y₂], so Lemma 4.3.2 completes the proof.
Lemma 4.4.11. Suppose that (x_n)₁^∞ and (y_n)₁^∞ are sequences in X which converge to points x_n → x ∈ bord X and y_n → y ∈ bord X, with x ≠ y. Then there exists a geodesic [x, y] connecting x and y such that [x_n, y_n] → [x, y] in the Hausdorff metric. If x = y, then [x_n, y_n] → {x} in the Hausdorff metric.
Proof. We observe first that if x = y, then the conclusion follows immediately from Lemma 4.3.2. Thus we assume in what follows that x ≠ y.
For any pair p, q ∈ X, we define the standard parameterization of the geodesic [p, q] to be the unique isometry π : [−⟨o|q⟩_p, ⟨o|p⟩_q] → [p, q] sending −⟨o|q⟩_p to p and ⟨o|p⟩_q to q. For each n let π_n : [t_n, s_n] → [x_n, y_n] be the standard parameterization, and for each m, n ∈ ℕ let π_{m,n} : [t_{m,n}, s_{m,n}] → [x_n, y_m] be the standard parameterization. Let π̃_n : [−∞, +∞] → [x_n, y_n] and π̃_{m,n} : [−∞, +∞] → [x_n, y_m] be as in Notation 4.4.9. Note that
t_n − t_{m,n} = ⟨o|y_m⟩_{x_n} − ⟨o|y_n⟩_{x_n} = ⟨x_n|y_n⟩_o − ⟨x_n|y_m⟩_o → ⟨x|y⟩_o − ⟨x|y⟩_o = 0 as m, n → ∞.
(We have ⟨x|y⟩_o < ∞ since x ≠ y.) Thus
D(π̃_n(r), π̃_n(r − t_n + t_{m,n})) ≤ d(π̃_n(r), π̃_n(r − t_n + t_{m,n})) ≤ |t_n − t_{m,n}| → 0.
Here and below, the limit converges uniformly for r ∈ [−∞, +∞]. On the other hand, Corollary 4.4.10 implies that
D(π̃_n(r − t_n + t_{m,n}), π̃_{m,n}(r)) → 0 as m, n → ∞,
so the triangle inequality gives
D(π̃_n(r), π̃_{m,n}(r)) → 0 as m, n → ∞.
A similar argument shows that
D(π̃_{m,n}(r), π̃_m(r)) → 0 as m, n → ∞,
so the triangle inequality gives
D(π̃_n(r), π̃_m(r)) → 0 as m, n → ∞,
i.e. the sequence of functions (π̃_n)₁^∞ is uniformly Cauchy. Since (bord X, D) is complete, they converge uniformly to a function π̃ : [−∞, +∞] → X.
Clearly, [x_n, y_n] = π̃_n([−∞, +∞]) → π̃([−∞, +∞]) in the Hausdorff metric. We claim that π̃([−∞, +∞]) is a geodesic connecting x and y. Indeed,
t_n → t := ⟨x|y⟩_o − ‖x‖ and s_n → s := ‖y‖ − ⟨x|y⟩_o.
For all t < r₁ < r₂ < s, we have t_n < r₁ < r₂ < s_n for all sufficiently large n, which implies that
d(π̃(r₁), π̃(r₂)) = lim_{n→∞} d(π̃_n(r₁), π̃_n(r₂)) = lim_{n→∞} (r₂ − r₁) = r₂ − r₁,
i.e. π̃ ↾ (t, s) is an isometric embedding. Since π̃ is continuous (being the uniform limit of continuous functions), π := π̃ ↾ [t, s] is also an isometric embedding. A similar argument shows that π̃(r) = π(t) for all r ≤ t, and π̃(r) = π(s) for all r ≥ s;
thus π̃([−∞, +∞]) = π([t, s]) is a geodesic. To complete the proof, we must show that π(t) = x and π(s) = y. Indeed,
π(t) = π̃(−∞) = lim_{n→∞} π̃_n(−∞) = lim_{n→∞} x_n = x,
and a similar argument shows that π(s) = y. Thus the geodesic π([t, s]) connects x and y.
Using Lemma 4.4.11, we prove Proposition 4.4.4.
Proof of Proposition 4.4.4.
(i) Given distinct points x, y ∈ bord X, we may find sequences X ∋ x_n → x and X ∋ y_n → y. Applying Lemma 4.4.11 proves the existence of a geodesic connecting x and y. To show uniqueness, suppose that [x, y]₁ and [x, y]₂ are two geodesics connecting x and y. Fix sequences [x, y]₁ ∋ x_n^{(1)} → x, [x, y]₂ ∋ x_n^{(2)} → x, [x, y]₁ ∋ y_n^{(1)} → y, and [x, y]₂ ∋ y_n^{(2)} → y. By considering the intertwined sequences
x₁^{(1)}, x₁^{(2)}, x₂^{(1)}, x₂^{(2)}, …  and  y₁^{(1)}, y₁^{(2)}, y₂^{(1)}, y₂^{(2)}, …,
Lemma 4.4.11 shows that both sequences ([x_n^{(1)}, y_n^{(1)}])₁^∞ and ([x_n^{(2)}, y_n^{(2)}])₁^∞ converge in the Hausdorff metric to a common geodesic [x, y]. But clearly the former tend to [x, y]₁, and the latter tend to [x, y]₂; we must have [x, y]₁ = [x, y]₂.
(ii) Suppose that bord X ∋ x_n → x and bord X ∋ y_n → y. For each n, choose x̂_n, ŷ_n ∈ [x_n, y_n] ∩ X such that D(x̂_n, x_n), D(ŷ_n, y_n) ≤ 1/n. Then x̂_n → x and ŷ_n → y, so by Lemma 4.4.11 we have [x̂_n, ŷ_n] → [x, y] in the Hausdorff metric, or [x̂_n, ŷ_n] → {x} if x = y. To complete the proof it suffices to show that the Hausdorff distance between [x_n, y_n] and [x̂_n, ŷ_n] tends to zero as n tends to infinity. Indeed, [x̂_n, ŷ_n] ⊆ [x_n, y_n], and for each z ∈ [x_n, y_n], either z ∈ [x_n, x̂_n], z ∈ [x̂_n, ŷ_n], or z ∈ [ŷ_n, y_n]. In the first case, Lemma 4.3.2 shows that D(z, [x̂_n, ŷ_n]) ≤ D(z, x̂_n) ≲× D(x_n, x̂_n) ≤ 1/n → 0; the third case is treated similarly.
Having completed the proof of Proposition 4.4.4, in the remainder of this section
we prove that a version of the CAT(-1) equality holds for ideal triangles.
Definition 4.4.12. A geodesic triangle ∆ = ∆(x, y, z) consists of three distinct
points x, y, z ∈ bord X together with the geodesics [x, y], [y, z], and [z, x].
A geodesic triangle ∆̄ = ∆(x̄, ȳ, z̄) is called a comparison triangle for ∆ if ⟨x̄|ȳ⟩_{z̄} = ⟨x|y⟩_z, etc.
For any point p ∈ [x, y], its comparison point is defined to be the unique point p̄ ∈ [x̄, ȳ] such that
⟨x|z⟩_p − ⟨y|z⟩_p = ⟨x̄|z̄⟩_{p̄} − ⟨ȳ|z̄⟩_{p̄}.
We say that the geodesic triangle ∆ satisfies the CAT(-1) inequality if for all points p, q ∈ ∆ and for any comparison points p̄, q̄ ∈ ∆̄, we have d(p, q) ≤ d(p̄, q̄).
It should be checked that these definitions are consistent with those given in
Section 3.2.
Proposition 4.4.13. Any geodesic triangle (including ideal triangles) satisfies
the CAT(-1) inequality.
Proof. Let ∆ = ∆(x, y, z) be a geodesic triangle, and fix p, q ∈ ∆. Choose sequences x_n → x, y_n → y, and z_n → z. By Proposition 4.4.4, we have ∆_n = ∆(x_n, y_n, z_n) → ∆ in the Hausdorff metric, so we may choose p_n, q_n ∈ ∆_n so that p_n → p, q_n → q. For each n, let ∆̄_n = ∆(x̄_n, ȳ_n, z̄_n) be a comparison triangle for ∆_n. Without loss of generality, we may assume that
(4.4.5)
o ∈ [x̄_n, ȳ_n] and ⟨x̄_n|z̄_n⟩_o = ⟨ȳ_n|z̄_n⟩_o ≍₊ 0.
By extracting a convergent subsequence, we may without loss of generality assume that x̄_n → x̄, ȳ_n → ȳ, and z̄_n → z̄ for some points x̄, ȳ, z̄ ∈ bord H². By (4.4.5), the points x̄, ȳ, z̄ are distinct. Thus ∆̄ = ∆(x̄, ȳ, z̄) is a geodesic triangle, and is in fact a comparison triangle for ∆. If p̄, q̄ are comparison points for p, q, then p̄_n → p̄ and q̄_n → q̄. It follows that
d(p, q) = lim_{n→∞} d(p_n, q_n) ≤ lim_{n→∞} d(p̄_n, q̄_n) = d(p̄, q̄).
4.5. The geometry of shadows
4.5.1. Shadows in regularly geodesic hyperbolic metric spaces. Suppose that X is regularly geodesic. For each z ∈ X we consider the relation
πz ⊆ X × ∂X defined by
(x, ξ) ∈ πz ⇔ x ∈ [z, ξ]
(see Definition 4.4.2 for the definition of [z, ξ]). Note that if X is an algebraic
hyperbolic space, then the relation πz is a function when restricted to X \ {z}; in
particular, for x ∈ B = B_F^α with x ≠ 0 we have
π₀(x) = x/‖x‖.
Figure 4.5.1. The set π_z(B(x, σ)). Although this set is not equal to Shad_z(x, σ), they are approximately the same in regularly geodesic spaces by Corollary 4.5.5. In our drawings, we will draw the set π_z(B(x, σ)) to indicate the set Shad_z(x, σ) (since the latter is hard to draw).
However, in general the relation πz is not necessarily a function; R-trees provide a
good counterexample. The reason is that in an R-tree, there may be multiple ways
to extend a geodesic segment to a geodesic ray.
For any set S, we define its shadow with respect to the light source z to be the
set
πz (S) := {ξ ∈ ∂X : ∃x ∈ S (x, ξ) ∈ πz }.
4.5.2. Shadows in hyperbolic metric spaces. In regularly geodesic hyperbolic metric spaces, it is particularly useful to consider π_z(B(x, σ)) where x ∈ X and σ > 0. We would like to have an analogue for this set in the Gromov hyperbolic setting.
Definition 4.5.1. For each σ > 0 and x, z ∈ X, let
Shad_z(x, σ) = {η ∈ ∂X : ⟨z|η⟩_x ≤ σ}.
We say that Shad_z(x, σ) is the shadow cast by x from the light source z, with parameter σ. For shorthand we will write Shad(x, σ) = Shad_o(x, σ).
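Shadows are easy to compute explicitly in a rooted simplicial tree, where ⟨z|η⟩_x with z the root o reduces to |x| minus the length of the common prefix of x and η. The sketch below (the tree model, truncation depth, and choice of base b are illustrative assumptions, not conventions fixed in the text) also exhibits the diameter bound (4.5.8) of Lemma 4.5.8 below, with the visual metric D(ξ, η) = b^{−⟨ξ|η⟩_o}.

```python
import itertools

b, N = 2.0, 12                # base for the visual metric; truncation depth

def cp(u, v):
    # length of the longest common prefix
    n = 0
    while n < min(len(u), len(v)) and u[n] == v[n]:
        n += 1
    return n

# Boundary points ~ infinite binary words, truncated to length N.
boundary = [''.join(w) for w in itertools.product('01', repeat=N)]

def D(xi, eta):               # visual metric D = b^{-<xi|eta>_o}
    return 0.0 if xi == eta else b**(-cp(xi, eta))

def shadow(x, sigma):
    # <o|eta>_x = |x| - cp(x, eta) in the tree, so the shadow from the root
    # consists of the rays agreeing with x to length >= |x| - sigma.
    return [eta for eta in boundary if len(x) - cp(x, eta) <= sigma]

x, sigma = '010011', 1
S = shadow(x, sigma)
diam = max(D(xi, eta) for xi in S for eta in S)
# Diameter of Shadows bound: Diam(Shad_o(x, sigma)) <= b^sigma * b^{-d(o,x)}.
assert diam <= b**sigma * b**(-len(x))
print(diam)
```

Here the bound is attained with equality, since two rays in the shadow can branch exactly at depth |x| − σ; this is the tree case of the "≍" refinement in Lemma 4.5.8.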
The relation between πz (B(x, σ)) and Shadz (x, σ) in the case where X is a
regularly geodesic hyperbolic metric space will be made explicit in Corollary 4.5.5
below.
Let us establish up front some geometric properties of shadows.
Observation 4.5.2. For each x, z ∈ X and σ > 0 the set Shadz (x, σ) is closed.
Proof. This follows directly from Lemma 3.4.23.
Observation 4.5.3. If η ∈ Shad_z(x, σ), then
⟨x|η⟩_z ≍₊,σ d(z, x).
Proof. Follows directly from (b) of Proposition 3.3.3 together with the definition of Shadz (x, σ).
Lemma 4.5.4 (Intersecting Shadows Lemma). For each σ > 0, there exists τ = τ_σ > 0 such that for all x, y, z ∈ X satisfying d(z, y) ≥ d(z, x) and Shad_z(x, σ) ∩ Shad_z(y, σ) ≠ ∅, we have
(4.5.1)
Shad_z(y, σ) ⊆ Shad_z(x, τ)
and
(4.5.2)
d(x, y) ≍₊,σ d(z, y) − d(z, x).
Proof. Fix η ∈ Shad_z(x, σ) ∩ Shad_z(y, σ), so that by Observation 4.5.3
⟨x|η⟩_z ≍₊,σ d(z, x) and ⟨y|η⟩_z ≍₊,σ d(z, y) ≥ d(z, x).
Gromov's inequality along with (c) of Proposition 3.3.3 then gives
(4.5.3)
⟨x|y⟩_z ≍₊,σ d(z, x).
Rearranging yields (4.5.2). In order to show (4.5.1), fix ξ ∈ Shad_z(y, σ), so that ⟨y|ξ⟩_z ≍₊,σ d(z, y) ≥ d(z, x). Gromov's inequality and (4.5.3) then give
⟨x|ξ⟩_z ≍₊,σ d(z, x),
i.e. ξ ∈ Shad_z(x, τ) for some τ > 0 sufficiently large (depending on σ).
Corollary 4.5.5. Suppose that X is regularly geodesic. For every σ > 0, there exists τ = τ_σ > 0 such that for any x, z ∈ X we have
(4.5.4)
π_z(B(x, σ)) ⊆ Shad_z(x, σ) ⊆ π_z(B(x, τ)).
Proof. Suppose ξ ∈ π_z(B(x, σ)). Then there exists a point y ∈ B(x, σ) ∩ [z, ξ]. By (d) of Proposition 3.3.3,
⟨z|ξ⟩_x ≤ ⟨z|ξ⟩_y + d(x, y) ≤ ⟨z|ξ⟩_y + σ = σ,
Figure 4.5.2. In this figure, d(z, y) ≥ d(z, x) and Shad_z(x, σ) ∩ Shad_z(y, σ) ≠ ∅. The Intersecting Shadows Lemma (Lemma 4.5.4) provides a τ_σ > 0 such that the shadow cast from z about B(x, τ_σ) will capture Shad_z(y, σ).
i.e. ξ ∈ Shad_z(x, σ). This demonstrates the first inclusion of (4.5.4). On the other hand, suppose that ξ ∈ Shad_z(x, σ). Let y ∈ [z, ξ] be the unique point so that d(z, y) = d(z, x). Clearly ξ ∈ Shad_z(y, σ), so Shad_z(x, σ) ∩ Shad_z(y, σ) ≠ ∅; by the Intersecting Shadows Lemma 4.5.4 we have
d(x, y) ≍₊,σ d(z, y) − d(z, x) = 0,
i.e. d(x, y) ≤ τ for some τ = τ_σ > 0 depending only on σ. Then y ∈ B(x, τ) ∩ [z, ξ], which implies that ξ = π_z(y) ∈ π_z(B(x, τ)). This finishes the proof.
Lemma 4.5.6 (Bounded Distortion Lemma). Fix σ > 0. Then for every g ∈ Isom(X) and for every y ∈ Shad_{g⁻¹(o)}(o, σ) we have
(4.5.5)
g′(y) ≍×,σ b^{−‖g‖}.
Moreover, for every y₁, y₂ ∈ Shad_{g⁻¹(o)}(o, σ), we have
(4.5.6)
D(g(y₁), g(y₂)) / D(y₁, y₂) ≍×,σ b^{−‖g‖}.
Proof. We have g′(y) ≍× b^{B_y(o, g⁻¹(o))} ≍× b^{2⟨g⁻¹(o)|y⟩_o − ‖g‖} ≍×,σ b^{−‖g‖}, giving (4.5.5). Now (4.5.6) follows from (4.5.5) and the geometric mean value theorem (Proposition 4.2.4).
Figure 4.5.3. The Big Shadows Lemma 4.5.7 tells us that for any ε > 0, we may choose σ > 0 sufficiently large so that Diam(∂X \ Shad_z(o, σ)) ≤ ε for every z ∈ X.
Lemma 4.5.7 (Big Shadows Lemma). For every ε > 0, for every σ > 0 sufficiently large (depending on ε), and for every z ∈ X, we have
(4.5.7)
Diam(∂X \ Shad_z(o, σ)) ≤ ε.
Proof. If ξ, η ∈ ∂X \ Shad_z(o, σ), then ⟨z|ξ⟩_o > σ and ⟨z|η⟩_o > σ. Thus by Gromov's inequality we have
⟨ξ|η⟩_o ≳₊ σ.
Exponentiating gives D(ξ, η) ≲× b^{−σ}. Thus
Diam(∂X \ Shad_z(o, σ)) ≲× b^{−σ} → 0 as σ → ∞,
and the convergence is uniform in z.
Lemma 4.5.8 (Diameter of Shadows Lemma). For all σ > 0 sufficiently large, we have for all g ∈ Isom(X) and for all z ∈ X
(4.5.8)
Diam_z(Shad_z(g(o), σ)) ≲×,σ b^{−d(z,g(o))},
with ≍ if #(∂X) ≥ 3. Moreover, for every C > 0 there exists σ > 0 such that
(4.5.9)
B_z(x, Cb^{−d(z,x)}) ⊆ Shad_z(x, σ) ∀x, z ∈ X.
Figure 4.5.4. The Diameter of Shadows Lemma 4.5.8 says that the diameter of Shad(g(o), σ) is coarsely asymptotic to b^{−‖g‖}.
Proof. Let x = g(o). For any ξ, η ∈ Shad_z(x, σ), we have
D_z(ξ, η) ≍× b^{−⟨ξ|η⟩_z} ≲× b^{−min(⟨x|ξ⟩_z, ⟨x|η⟩_z)} ≲×,σ b^{−d(z,x)},
which demonstrates (4.5.8).
Now let us prove the converse of (4.5.8), assuming #(∂X) ≥ 3. Fix ξ₁, ξ₂, ξ₃ ∈ ∂X, let ε = min_{i≠j} D(ξ_i, ξ_j)/2, and fix σ > 0 large enough so that (4.5.7) holds for every z ∈ X. By (4.5.7) we have
Diam(∂X \ Shad_{g⁻¹(z)}(o, σ)) ≤ ε,
and thus
#{i = 1, 2, 3 : ξ_i ∈ Shad_{g⁻¹(z)}(o, σ)} ≥ 2.
Without loss of generality suppose that ξ₁, ξ₂ ∈ Shad_{g⁻¹(z)}(o, σ). By applying g, we have g(ξ₁), g(ξ₂) ∈ Shad_z(x, σ). Then
Diam_z(Shad_z(x, σ)) ≥ D_z(g(ξ₁), g(ξ₂))
  ≍× b^{−⟨g(ξ₁)|g(ξ₂)⟩_z} = b^{−⟨ξ₁|ξ₂⟩_{g⁻¹(z)}}
  ≳× b^{−⟨ξ₁|ξ₂⟩_o} b^{−‖g⁻¹(z)‖}
  ≍×,ξ₁,ξ₂ b^{−d(z,x)}.
Finally, given y ∈ B_z(x, Cb^{−d(z,x)}), we have
⟨x|y⟩_z ≳₊ −log_b(Cb^{−d(z,x)}) ≍₊ d(z, x),
and thus ⟨z|y⟩_x ≍₊ 0, demonstrating (4.5.9).
Figure 4.6.1. The quantities ‖x‖ and ∡(x) can be interpreted as "polar coordinates" of x.
4.6. Generalized polar coordinates
Suppose that X = E = E^α is the half-space model of a real hyperbolic space. Fix a point x ∈ E, and consider the numbers ‖x‖ and ∡(x) := cos⁻¹(x₁/‖x‖), i.e. the radial and unsigned angular coordinates of x. (The angular coordinate is computed with respect to the ray {(t, 0) : t ∈ [0, ∞)}; cf. Figure 4.6.1.) These "polar coordinates" of x do not completely determine x, but they are enough to compute certain important quantities depending on x, e.g. d_E(o, x), B_∞(o, x), and B_0(o, x). (We omit the details.) In this section we consider a generalization, in a loose sense, of these coordinates to an arbitrary hyperbolic metric space.
Let us note that the isometries of E which preserve the polar coordinate functions defined above are exactly those of the form T̂ where T ∈ O(E). Equivalently, these are the members of Isom(E) which preserve 0, o = (1, 0), and ∞. This suggests that our "coordinate system" is fixed by choosing a point in E and two distinct points in ∂E.
We now return to the general case of §4.1. Fix two distinct points ξ1 , ξ2 ∈ ∂X.
Definition 4.6.1. The generalized polar coordinate functions are the functions r = r_{ξ₁,ξ₂,o} and θ = θ_{ξ₁,ξ₂,o} : X → ℝ defined by
r(x) = (1/2)[B_{ξ₁}(x, o) − B_{ξ₂}(x, o)]
θ(x) = (1/2)[B_{ξ₁}(x, o) + B_{ξ₂}(x, o)] ≍₊ ⟨ξ₁|ξ₂⟩_x − ⟨ξ₁|ξ₂⟩_o.
The connection between generalized polar coordinates and classical polar coordinates is given in Proposition 4.6.4 below. For now, we list some geometrical
facts about generalized polar coordinates. Our first lemma says that the hyperbolic
distance from a point to the origin is essentially the sum of the “radial” distance
and the “angular” distance.
Lemma 4.6.2. For all x ∈ X we have
‖x‖ ≍₊,o,ξ₁,ξ₂ max_{i=1,2} B_{ξᵢ}(x, o) = |r(x)| + θ(x).
Proof. The equality is trivial, so we concentrate on the asymptotic. The ≳ direction follows directly from (f) of Proposition 3.3.3. On the other hand, by Gromov's inequality
‖x‖ − max_{i=1,2} B_{ξᵢ}(x, o) ≍₊ 2 min_{i=1,2} ⟨x|ξᵢ⟩_o ≲₊ 2⟨ξ₁|ξ₂⟩_o ≍₊,o,ξ₁,ξ₂ 0.
Our next lemma describes the effect of isometries on generalized polar coordinates.
Lemma 4.6.3. Fix g ∈ Isom(X) such that ξ₁, ξ₂ ∈ Fix(g). For all x ∈ X we have
(4.6.1)
r(g(x)) ≍₊ r(x) + log_b g′(ξ₁) = r(x) − log_b g′(ξ₂)
(4.6.2)
θ(g(x)) ≍₊ θ(x),
with equality if X is strongly hyperbolic. The implied constants are independent of g, ξ₁, and ξ₂.
Proof.
2[r(g(x)) − r(x)] = B_{ξ₁}(g(x), o) − B_{ξ₂}(g(x), o) − B_{ξ₁}(x, o) + B_{ξ₂}(x, o)
  = B_{ξ₁}(x, g⁻¹(o)) − B_{ξ₂}(x, g⁻¹(o)) − B_{ξ₁}(x, o) + B_{ξ₂}(x, o)
  ≍₊ B_{ξ₁}(o, g⁻¹(o)) − B_{ξ₂}(o, g⁻¹(o))
  ≍₊ log_b g′(ξ₁) − log_b g′(ξ₂).   (by Proposition 4.2.16)
Now (4.6.1) follows from Corollary 4.2.15.
On the other hand, by (g) of Proposition 3.3.3,
θ(g(x)) − θ(x) ≍₊ [⟨ξ₁|ξ₂⟩_{g(x)} − ⟨ξ₁|ξ₂⟩_o] − [⟨ξ₁|ξ₂⟩_x − ⟨ξ₁|ξ₂⟩_o]
  = ⟨g⁻¹(ξ₁)|g⁻¹(ξ₂)⟩_x − ⟨ξ₁|ξ₂⟩_x = 0,
proving (4.6.2).
We end this chapter by describing the relation between generalized polar coordinates and classical polar coordinates.
Proposition 4.6.4. If X = E, o = (1, 0), ξ₁ = 0, and ξ₂ = ∞, then
r(x) = log ‖x‖,
θ(x) = −log(x₁/‖x‖) = −log cos(∡(x)).
Thus the notations r and θ are slightly inaccurate, as they really represent the logarithm of the radius and the negative logarithm of the cosine of the angle, respectively.
Proof of Proposition 4.6.4. We consider first the case ‖x‖ = 1. Let us set g(y) = y/‖y‖², and note that g ∈ Isom(E), g(o) = o, and g(ξᵢ) = ξ_{3−i}. On the other hand, since ‖x‖ = 1 we have g(x) = x, and so
B_{ξ₁}(x, o) = B_{g(ξ₁)}(g(x), g(o)) = B_{ξ₂}(x, o).
It follows that r(x) = 0 and θ(x) = B_{ξ₂}(x, o) = B_∞(x, o). By Proposition 3.5.5, we have B_∞(x, o) = −log(x₁/o₁) = −log(x₁/‖x‖) = −log cos(∡(x)).
The general case follows upon applying Lemma 4.6.3 to maps of the form g_λ(x) = λx, λ > 0.
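The identities of Proposition 4.6.4 can be checked numerically in the half-plane (α = 2). The sketch below assumes the standard half-plane distance formula (with the first coordinate as the height), computes B_∞ as a limit of distance differences, and obtains B_0 via the inversion g(y) = y/‖y‖² used in the proof; it then recovers r(x) = log‖x‖ and θ(x) = −log cos(∡(x)).

```python
import math

def d(p, q):
    # hyperbolic distance in E = {(x1, x2) : x1 > 0}, x1 playing the height
    dx = (p[0]-q[0])**2 + (p[1]-q[1])**2
    return math.acosh(1 + dx / (2*p[0]*q[0]))

def B_inf(x, o, t=1e9):
    z = (t, 0.0)                        # z -> infinity "vertically"
    return d(x, z) - d(o, z)

def invert(x):                          # g(y) = y/|y|^2 swaps 0 and infinity
    n2 = x[0]**2 + x[1]**2
    return (x[0]/n2, x[1]/n2)

o = (1.0, 0.0)
x = (1.2, 1.6)                          # |x| = 2, cos(ang(x)) = 1.2/2 = 0.6

B0 = B_inf(invert(x), o)                # B_0(x,o) = B_inf(g(x), g(o)), g(o) = o
Binf = B_inf(x, o)
r = (B0 - Binf) / 2
theta = (B0 + Binf) / 2

assert abs(r - math.log(math.hypot(*x))) < 1e-3           # r = log|x|
assert abs(theta + math.log(x[0]/math.hypot(*x))) < 1e-3  # theta = -log cos(ang)
print("ok")
```

Here r and θ are computed purely from Busemann functions, exactly as in Definition 4.6.1 (with b = e), and agree with the closed forms of the proposition.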
CHAPTER 5
Discreteness
Let X be a metric space. In this chapter we discuss several different notions of what it means for a group or semigroup G ⊆ Isom(X) to be discrete. We show that these notions are equivalent in the Standard Case. Finally, we give examples to show that these notions are no longer equivalent when X = H^∞.
Throughout this chapter, the standing assumptions that X is a (not necessarily hyperbolic) metric space and that o ∈ X replace the paper's overarching standing assumption that (X, o, b) is a Gromov triple (cf. §4.1). Of course, if (X, o, b) is a Gromov triple then X is a metric space and o ∈ X, and therefore all theorems in this chapter can be used in other chapters without comment.
5.1. Topologies on Isom(X)
In this section we discuss different topologies that may be put on the isometry
group of the metric space X.
In the Standard Case, the most natural topology is the compact-open topology
(COT), i.e. the topology whose subbasic open sets are of the form
G(K, U ) = {f ∈ Isom(X) : f (K) ⊆ U }
where K ⊆ X is compact and U ⊆ X is open. When we replace X by a metric
space which is not proper, it is tempting to replace the compact-open topology with
a “bounded-open” topology. However, it is hard to define such a topology in a way
that does not result in pathologies. It turns out that the compact-open topology is
still the “right” topology for many applications in an arbitrary metric space. But
we are getting ahead of ourselves.
Let's start by considering the case where X is an algebraic hyperbolic space, i.e., X = H = H_F^α, and figure out what topology or topologies we can put on Isom(H). Recall from Theorem 2.3.3 that
(5.1.1)
Isom(H) ≡ PO*(L; Q) ≡ O*(L; Q)/∼
where L = H_F^{α+1}, Q is the quadratic form (2.2.1), and T₁ ∼ T₂ means that [T₁] = [T₂] (in the notation of Section 2.3). Thus Isom(H) is isomorphic to a quotient of a subspace of L(L), the set of bounded linear maps from L to itself. This indicates
that to define a topology or topologies on Isom(H), it may be best to start from the functional analysis point of view and look for topologies on L(L). In particular, we will be interested in the following widely used topologies on L(L):
• The uniform operator topology (UOT) is the topology on L(L) which comes from looking at it as a metric space with the metric
d(T₁, T₂) = ‖T₁ − T₂‖ = sup{‖(T₁ − T₂)x‖ : x ∈ L, ‖x‖ = 1}.
• The strong operator topology (SOT) is the topology on L(L) which comes from looking at it as a subspace of the product space L^L. Note that in this topology,
T_n → T ⇔ T_n x → T x ∀x ∈ L.
The strong operator topology is weaker than the uniform operator topology.
Remark 5.1.1. There are many other topologies used in functional analysis,
for example the weak operator topology, which we do not consider here.
Starting with either the uniform operator topology or the strong operator topology, we may restrict to the subspace O∗ (L; Q) and then quotient by ∼ to induce
a topology on Isom(H) using the identification (5.1.1). For convenience, we will
also call these induced topologies the uniform operator topology and the strong
operator topology, respectively.
We now return to the general case of a metric space X. Define the Tychonoff topology to be the topology on Isom(X) inherited from the product topology on X^X.
Proposition 5.1.2.
(i) The Tychonoff topology and the compact-open topology on Isom(X) are
identical.
(ii) If X is an algebraic hyperbolic space, then the strong operator topology
is identical to the Tychonoff topology (and thus also to the compact-open
topology).
Proof.
(i) Since subbasic sets in the Tychonoff topology take the form G({x}, U), it is clear that the compact-open topology is at least as fine as the Tychonoff topology. Conversely, suppose that G(K, U) is a subbasic open set in the compact-open topology, and fix f ∈ G(K, U). Let ε = d(f(K), X \ U) > 0, and let (x_i)₁ⁿ be a set of points in K such that K ⊆ ⋃₁ⁿ B(x_i, ε/3). Then
f ∈ 𝒰 := ⋂_{i=1}ⁿ G({x_i}, N_{ε/3}(f(K))).¹
The set 𝒰 is open in the Tychonoff topology; we claim that 𝒰 ⊆ G(K, U). Indeed, suppose that f̃ ∈ 𝒰. Then for x ∈ K, fix i with x ∈ B(x_i, ε/3); since f̃ is an isometry, d(f̃(x), f(K)) ≤ d(f̃(x), f̃(x_i)) + d(f̃(x_i), f(K)) ≤ 2ε/3 < ε. It follows that f̃(x) ∈ U; since x ∈ K was arbitrary, f̃ ∈ G(K, U).
(ii) It is clear that the strong operator topology is at least as fine as the Tychonoff topology. Conversely, suppose that a set 𝒰 ⊆ Isom(H) is open in the strong operator topology, and fix [T] ∈ 𝒰. Let T ∈ O*(L; Q) be a representative of [T]. There exist (v_i)₁ⁿ in L and ε > 0 such that for all T̃ ∈ O*(L; Q) satisfying ‖(T̃ − T)v_i‖ ≤ ε ∀i, we have [T̃] ∈ 𝒰. Let f₀ = e₀, and let V = ⟨f₀, v₁, …, v_n⟩. Extend {f₀} to an F-basis {f₀, f₁, …, f_k} of V with the property that B_Q(f_{j₁}, f_{j₂}) = 0 for all j₁ ≠ j₂. Without loss of generality, suppose that k ≥ 1. For each i = 1, …, n we have v_i = Σ_j f_j c_{i,j} for some c_{i,j} ∈ F, so there exists ε₂ > 0 such that for all T̃ ∈ O*(L; Q) satisfying ‖(T̃ − T)f_j‖ ≤ ε₂ ∀j and ‖σ_{T̃} − σ_T‖ ≤ ε₂, we have [T̃] ∈ 𝒰.
Let
(5.1.2)
I_F = {1} if F = R;  {1, i} if F = C;  {1, i, j, k} if F = Q,
and let
F = {e₀} ∪ {e₀ ± (1/2)f_j ℓ : j = 1, …, k, ℓ ∈ I_F}.
Fix ε₃ > 0 small to be determined, and for the remainder of this proof write A ∼ B if ‖A − B‖ is bounded by a constant which tends to zero as ε₃ → 0. Let
𝒱 = {[T̃] ∈ Isom(H) : ∀x ∈ F, ∃y_x ∈ [T̃]([x]) such that ‖y_x − Tx‖ < ε₃}.
For each x ∈ F, we have [x] ∈ H, so the set
{[y] ∈ H : ∃y ∈ [y] such that ‖y − Tx‖ < ε₃}
is open in the natural topology on H. It follows that 𝒱 is open in the Tychonoff topology. Moreover, [T] ∈ 𝒱. To complete the proof we show
¹Here and elsewhere N_ε(S) = {x ∈ X : d(x, S) ≤ ε}.
that 𝒱 ⊆ 𝒰. Indeed, fix [T̃] ∈ 𝒱, and let y = y_{e₀}. There exists a representative T̃ ∈ O*(L; Q) such that T̃e₀ = λy for some λ > 0. Since
−1 = Q(e₀) ∼ Q(y) = λ⁻² Q(λy) = −λ⁻²,
we have λ ∼ 1 and thus T̃e₀ ∼ Te₀.
Now for each x ∈ F \ {e₀}, there exists a_x ∈ F such that y_x = T̃(x a_x). Fix j = 1, …, k and ℓ ∈ I_F. Writing a_± = a_{e₀ ± (1/2)f_j ℓ}, we have
‖T(e₀ ± (1/2)f_j ℓ) − T̃((e₀ ± (1/2)f_j ℓ)a_±)‖ < ε₃,
i.e. T(e₀ ± (1/2)f_j ℓ) ∼ T̃((e₀ ± (1/2)f_j ℓ)a_±). Substituting ± = + and ± = − and adding the resulting equations gives
2Te₀ ∼ T̃(e₀(a₊ + a₋)) + (1/2)T̃(f_j ℓ(a₊ − a₋));
using Te₀ ∼ T̃e₀ and rearranging gives
T̃(e₀(2 − a₊ − a₋)) ∼ (1/2)T̃(f_j ℓ(a₊ − a₋)).
Now by Lemma 2.4.11, we have ‖T̃‖ ∼ 1, and thus e₀(2 − a₊ − a₋) ∼ (1/2)f_j ℓ(a₊ − a₋). Since ‖e₀ a + f_j ℓ b‖ ≍× max(|a|, |b|) for all a, b ∈ F, it follows that
2 − a₊ − a₋ ∼ ℓ(a₊ − a₋) ∼ 0,
from which we deduce a₊ ∼ a₋ ∼ 1. Thus
T̃(e₀ ± (1/2)f_j ℓ) ∼ T(e₀ ± (1/2)f_j ℓ).
Substituting ± = + and ± = −, subtracting the resulting equations, and using the fact that Te₀ ∼ T̃e₀ gives
T(f_j ℓ) ∼ T̃(f_j ℓ).
In particular, letting ℓ = 1 we have Tf_j ∼ T̃f_j. Thus
(Tf_j)(σ_T ℓ) ∼ (T̃f_j)(σ_{T̃} ℓ) ∼ (Tf_j)(σ_{T̃} ℓ).
Since this holds for all ℓ ∈ I_F, we have σ_T ∼ σ_{T̃}. By the definition of ∼, this means that we can choose ε₃ small enough so that ‖Tf_j ℓ − T̃f_j ℓ‖ ≤ ε₂ ∀j and ‖σ_{T̃} − σ_T‖ ≤ ε₂. Then [T̃] ∈ 𝒰, completing the proof.
Proposition 5.1.3. The compact-open topology makes Isom(X) into a topological group, i.e. the maps
(g, h) ↦ gh,   g ↦ g⁻¹
are continuous.
Proof. Fix g0 , h0 ∈ Isom(X), and let G({x}, U ) be a neighborhood of g0 h0 .
For some ε > 0, we have B(g0 h0 (x), ε) ⊆ U . We claim that
G({h0(x)}, B(g0h0(x), ε/2)) G({x}, B(h0(x), ε/2)) ⊆ G({x}, U).
Indeed, fix g ∈ G({h0 (x)}, B(g0 h0 (x), ε/2)) and h ∈ G({x}, B(h0 (x), ε/2)). Then
d(gh(x), g0 h0 (x)) ≤ d(h(x), h0 (x)) + d(gh0 (x), g0 h0 (x)) ≤ ε/2 + ε/2 = ε,
demonstrating that gh(x) ∈ U , and thus that the map (g, h) 7→ gh is continuous.
Now fix g0 ∈ Isom(X), and let G({x}, U) be a neighborhood of g0⁻¹. For some
ε > 0, we have B(g0⁻¹(x), ε) ⊆ U. We claim that
(G({g0⁻¹(x)}, B(x, ε)))⁻¹ ⊆ G({x}, U).
Indeed, fix g ∈ G({g0⁻¹(x)}, B(x, ε)). Then
d(g⁻¹(x), g0⁻¹(x)) = d(x, gg0⁻¹(x)) ≤ ε,
demonstrating that g⁻¹(x) ∈ U, and thus that the map g ↦ g⁻¹ is continuous.
Remark 5.1.4 ([109, 9.B(9), p.60]). If X is a separable complete metric space,
then the group Isom(X) with the compact-open topology is a Polish space.
5.2. Discrete groups of isometries
In this section we discuss several different notions of what it means for a group
G ≤ Isom(X) to be discrete, and then we show that they are equivalent in the
Standard Case. However, each of our notions will be distinct when X = H = H^α_F
for some infinite cardinal α.
Definition 5.2.1. Fix G ≤ Isom(X).
• G is called strongly discrete (SD) if for every bounded set B ⊆ X, we have
#{g ∈ G : g(B) ∩ B ≠ ∅} < ∞.
• G is called moderately discrete (MD) if for every x ∈ X, there exists an
open set U ∋ x such that
#{g ∈ G : g(U) ∩ U ≠ ∅} < ∞.
• G is called weakly discrete (WD) if for every x ∈ X, there exists an open
set U ∋ x such that
g(U) ∩ U ≠ ∅ ⇒ g(x) = x.
Remark 5.2.2. Strongly discrete groups are known in the literature as metrically proper, and moderately discrete groups are known as wandering.
Remark 5.2.3. We may equivalently give the definitions as follows:
• G is strongly discrete (SD) if for every R > 0 and x ∈ X,
(5.2.1)
#{g ∈ G : d(x, g(x)) ≤ R} < ∞.
• G is moderately discrete (MD) if for every x ∈ X, there exists ε > 0 such
that
(5.2.2)
#{g ∈ G : d(x, g(x)) ≤ ε} < ∞.
• G is weakly discrete (WD) if for every x ∈ X, there exists ε > 0 such that
(5.2.3)
G(x) ∩ B(x, ε) = {x}.
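To make the three counting conditions concrete, here is a minimal sketch (an illustration added here, not an example from the text): the group Z acting on R by translations satisfies the strong-discreteness count (5.2.1), while the dense subgroup Z + √2·Z fails even weak discreteness at 0, since nonzero elements m + n√2 come arbitrarily close to 0. The helper name `translations_within` and the finite samples of each group are assumptions of the sketch.

```python
import math

def translations_within(group_elems, x, R):
    """Count group elements g (acting on R by t -> t + g) with d(x, g(x)) <= R."""
    return sum(1 for g in group_elems if abs(g) <= R)

# Z acting on R by integer translations is strongly discrete:
# #{n in Z : |n| <= R} = 2*floor(R) + 1 < infinity for every R > 0.
Z_sample = range(-1000, 1001)
assert translations_within(Z_sample, 0.0, 10.5) == 21

# By contrast, the dense subgroup Z + sqrt(2)*Z is not discrete in any of the
# senses above: 99 - 70*sqrt(2) is a nonzero group element very close to 0,
# and the Pell-equation pattern produces such elements at every scale.
assert abs(99 - 70 * math.sqrt(2)) < 0.01
```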
As our naming suggests, the condition of strong discreteness is stronger than
the condition of moderate discreteness, which is in turn stronger than the condition
of weak discreteness.
Proposition 5.2.4. Any strongly discrete group is moderately discrete, and
any moderately discrete group is weakly discrete.
Proof. It is clear from the second formulation that strongly discrete groups
are moderately discrete. Let G ≤ Isom(X) be a moderately discrete group. Fix
x ∈ X, and let ε > 0 be such that (5.2.2) holds. Letting ε′ = ε ∧ min{d(x, g(x)) :
g(x) ≠ x, g(x) ∈ B(x, ε)}, we see that (5.2.3) holds.
The reverse directions, WD ⇒ MD and MD ⇒ SD, both fail in infinite dimensions. Examples 11.1.14 and 13.3.1-13.3.3 are moderately discrete groups which are
not strongly discrete, and Examples 13.5.2 and 13.4.1 are weakly discrete groups
which are not moderately discrete.
If X is a proper metric space, then the classes MD and SD coincide, but are still
distinct from WD. Example 13.4.1 is a weakly discrete group acting on a proper
metric space which is not moderately discrete. We show now that MD ⇔ SD when
X is proper:
Proposition 5.2.5. Suppose that X is proper. Then a subgroup of Isom(X) is
moderately discrete if and only if it is strongly discrete.
Proof. Let G ≤ Isom(X) be a moderately discrete subgroup. Fix x ∈ X, and
let ε > 0 satisfy (5.2.2). Fix R > 0 and let K be the closure of G(x) ∩ B(x, R); K is compact
since X is proper. The collection {B(g(x), ε) : g ∈ G} covers K, so there is a finite
subcover {B(gi(x), ε) : i = 1, . . . , n}. Now
#{g ∈ G : d(x, g(x)) ≤ R} ≤ Σ_{i=1}^n #{g ∈ G : g(x) ∈ B(gi(x), ε)} < ∞,
i.e. (5.2.1) holds.
5.2.1. Topological discreteness.
Definition 5.2.6. Let T be a topology on Isom(X). A group G ≤ Isom(X)
is T -discrete if it is discrete as a subspace of Isom(X) in the topology T .
Most of the time, we will let T be the compact-open topology (COT). The
relation between COT-discreteness and our previous notions of discreteness is as
follows:
Proposition 5.2.7.
(i) Any moderately discrete group is COT-discrete.
(ii) Any weakly discrete group that is acting on an algebraic hyperbolic space
is COT-discrete.
(iii) Any COT-discrete group that is acting on a proper metric space is strongly
discrete.
Proof.
(i) Let G ≤ Isom(X) be moderately discrete, and let ε > 0 satisfy (5.2.2).
Then the set U := G({o}, B(o, ε)) ⊆ Isom(X) satisfies #(U ∩ G) < ∞.
But U is a neighborhood of id in the compact-open topology. It follows
that G is COT-discrete.
(ii) Suppose that X = H = H^α_F. Let G ≤ Isom(H) be weakly discrete, and by
contradiction suppose it is not COT-discrete. For any finite set F ⊆ H,
let ε > 0 be small enough so that (5.2.3) holds for all x ∈ F ; since G is
not COTD, there exists g = gF ∈ G \ {id} such that d(x, g(x)) ≤ ε for all
x ∈ F , and it follows that g(x) = x for all x ∈ F . Now suppose that J is
a finite set of indices, and let F = {[e0 ]} ∪ {[e0 ± (1/2)ei ]ℓ : i ∈ J, ℓ ∈ IF },
where IF is as in (5.1.2). Then if TJ is a representative of gF satisfying
TJ e0 = e0, an argument similar to the proof of Proposition 5.1.2(ii) shows
that σTJ = I and TJ ei = ei for all i ∈ J.
Now we define an infinite sequence of indices (in)∞₁ as follows: If
i1, . . . , in−1 have been defined, let Tn = T{i1,...,in−1}, and let in be such
that ein ∉ Fix(Tn).
Choose a nonnegative summable sequence (tn)∞₁, and let x = e0 +
Σ_{n=1}^∞ tn ein. Then Tn x → x; since G is weakly discrete, it follows that
Tn x = x for all n sufficiently large. Fix such an n, and observe that
0 = Tn x − x = tn(Tn(en) − en) + Σ_{m>n} tm(Tn(em) − em);
the triangle inequality gives
tn ≤ (Σ_{m>n} 2tm) / ‖Tn en − en‖.
By choosing the sequence (tn)∞₁ to satisfy
tn+1 < (1/4)‖Tn en − en‖ tn ≤ (1/2) tn,
we arrive at a contradiction.
(iii) Let G be a COT-discrete group acting by isometries on a proper metric
space X. By contradiction, suppose that G is not strongly discrete. Then
there exists an infinite set A ⊆ G such that the set A(o) is bounded.
Without loss of generality we may suppose that A−1 = A. Note that for
each x ∈ X, the set A(x) is bounded and therefore precompact. Now
since X is a proper metric space, it is σ-compact and therefore separable.
Let S be a countable dense subset of X. Then
K := ( ∏_{q∈S} A(q) )²
is a compact metrizable space. For each g ∈ A let
φg := ((g(q))_{q∈S}, (g⁻¹(q))_{q∈S}) ∈ K.
Since A is infinite, there exists an infinite sequence (gn)∞₁ in A such that
φgn → ((yq(+))_{q∈S}, (yq(−))_{q∈S}) ∈ K.
Thus
gn±1(q) → yq(±) for all q ∈ S.
The density of S and the equicontinuity of the sequences (gn)∞₁ and
(gn⁻¹)∞₁ imply that for all x ∈ X, there exist yx(±) such that gn±1(x) → yx(±).
Thus, the sequence (gn)∞₁ converges in the Tychonoff topology to some
g(+) ∈ X^X. Similarly, the sequence (gn⁻¹)∞₁ converges to some g(−) ∈ X^X.
Again, the equicontinuity of the sequences (gn)∞₁ and (gn⁻¹)∞₁ allows us
to take limits and deduce that
g(+) g(−) = lim_{n→∞} gn gn⁻¹ = id.
Similarly, g(−) g(+) = id. Thus g(+) and g(−) are inverses, and in particular
g(+) ∈ Isom(X). Since gn → g(+) in the compact-open topology, the proof
is completed by the following lemma from topological group theory:
Lemma 5.2.8. Let H be a topological group, and let G be a subgroup
of H. Suppose there is a sequence (gn)∞₁ of distinct elements in G which
converges to an element of H. Then G is not discrete in the topology
inherited from H.
Proof. Suppose gn → h ∈ H. Then
gn gn+1⁻¹ → hh⁻¹ = id,
while on the other hand gn gn+1⁻¹ ≠ id (since the sequence (gn)∞₁ consists
of distinct elements). This demonstrates that G is not discrete in the
inherited topology.
⊳
If X is not an algebraic hyperbolic space, then it is possible for a weakly discrete
group to not be COT-discrete; see Example 13.4.1. Conversely, it is possible for a
COT-discrete group to not be weakly discrete; see Examples 13.4.9 and 13.5.1.
On the other hand, suppose that X is an algebraic hyperbolic space. The
uniform operator topology (abbreviated as UOT) is finer than the COT, i.e. it has
more open sets, and therefore it is easier for every subset of G to be relatively open
in that topology, which is exactly what it means to be discrete. Notice that there is
an “order switch” here; the UOT is finer than the COT, but the condition of being
COT-discrete is stronger than the condition of being UOT-discrete. We record this
for later use as the following
Observation 5.2.9. Let X be an algebraic hyperbolic space. If a subgroup
G ≤ Isom(X) is COT-discrete, then it is also UOT-discrete.
The inclusion in the previous observation is strict. A significant example of a
group acting on H∞ which is UOT-discrete but not COT-discrete is described in
Example 13.4.2.
The various relations between the distinct shades of discreteness are somewhat
subtle when first discerned. We speculate that it may be fruitful to study such
distinctions with a finer lens. For the reader’s ease, we summarize the relations
between our different notions of discreteness in Table 1 below.
5.2.2. Equivalence in finite dimensions.
Proposition 5.2.10. Suppose that X is a finite-dimensional Riemannian manifold. Then the notions of strong discreteness, moderate discreteness, weak discreteness, and COT-discreteness agree. If X is an algebraic hyperbolic space, these
notions also agree with the notion of UOT-discreteness.
Proof. By Propositions 5.2.4 and 5.2.7, the conditions of strong discreteness,
moderate discreteness, and COT-discreteness agree and imply weak discreteness.
Conversely, suppose that G ≤ Isom(X) is weakly discrete, and by contradiction
suppose that G is not COT-discrete. Since X is separable, so is Isom(X), and thus
there exists a sequence Isom(X) \ {id} ∋ gn → id in the compact-open topology. For
each n let Fn = {x ∈ X : gn(x) = x}. Since G is weakly discrete, X = ∪_{n=1}^∞ Fn, so by
the Baire category theorem, Fn has nonempty interior for some n. But then gn = id
on an open set; in particular there exists a point x0 ∈ X such that gn (x0 ) = x0
and gn′ (x0 ) is the identity map on the tangent space of x0 . By the naturality of the
exponential map, this implies that gn is the identity map, a contradiction.
Finally, suppose X = H = H^α_F is an algebraic hyperbolic space, and let
L = L^{α+1}_F. Since L is finite-dimensional, the SOT and UOT topologies on L(L) are
equivalent. This in turn demonstrates that the notions of COT-discreteness and
UOT-discreteness agree.
In such a setting, we shall call a group satisfying any of these equivalent definitions simply discrete.
5.2.3. Proper discontinuity.
Definition 5.2.11. A group G ≤ Isom(X) acts properly discontinuously (PrD)
on X if for every x ∈ X, there exists an open set U ∋ x with
g(U) ∩ U ≠ ∅ ⇒ g = id,
or equivalently, if
d(x, {g(x) : g ≠ id}) > 0.
Let us discuss the relations between proper discontinuity and some of our notions of discreteness. We begin by noting that even in finite dimensions, the notion
of proper discontinuity is not the same as the notion of discreteness; instead, a
group acts properly discontinuously if and only if it is both discrete and torsion-free.
We also remark that in finite dimensions Selberg’s lemma (see e.g. [8]) can be used
to pass from a discrete group to a finite-index subgroup that acts properly discontinuously. However, it is impossible to do this in infinite dimensions; cf. Example
11.2.18.
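As a concrete illustration of the gap between discreteness and proper discontinuity (a sketch in the Euclidean plane, not an example from the text): the two-element group {id, −id} acting on R² is finite, hence strongly discrete, but its torsion element −id fixes the origin, so the second condition in Definition 5.2.11 fails at x = (0, 0).

```python
# G = {id, g} with g = -id acting on R^2: a finite (hence strongly discrete)
# group containing a torsion element, so the action is NOT properly
# discontinuous: g fixes the origin, hence d(x, {g(x) : g != id}) = 0 there.
g = lambda p: (-p[0], -p[1])

x = (0.0, 0.0)
assert g(x) == x                       # g fixes the origin although g != id,
assert g(g((1.0, 2.0))) == (1.0, 2.0)  # and g has order 2 (a torsion element)
```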
Although no notion of discreteness implies proper discontinuity, the reverse is
true for certain types of discreteness. Namely, since #{id} = 1 < ∞, we have:
Observation 5.2.12. Any group which acts properly discontinuously is moderately discrete.
In particular, by combining with Proposition 5.2.5 we see that if X is proper
then any group which acts properly discontinuously is strongly discrete. This provides a connection between our results, in which strong discreteness is often a
hypothesis, and many results from the literature in which proper discontinuity and
properness are both hypotheses.
Observation 5.2.12 admits the following partial converse, which generalizes the
fact that in finite dimensions every discrete torsion-free group acts properly discontinuously:
Remark 5.2.13. If X is a proper CAT(0) space, then a group acts properly
discontinuously if and only if it is moderately discrete and torsion-free.
Proof. Suppose that G ≤ Isom(X) acts properly discontinuously. If g ∈
G \ {id} is a torsion element, then by Cartan’s lemma [39, II.2.8(1)], g has a fixed
point. This contradicts G acting properly discontinuously. Thus G is torsion-free.
Conversely, suppose that G ≤ Isom(X) is moderately discrete and torsion-free.
Given x ∈ X, let ε > 0 be as in (5.2.3), and by contradiction suppose that there
exists g 6= id such that d(x, g(x)) < ε. By (5.2.3), g(x) = x. But then by (5.2.2),
the set {g n : n ∈ Z} is finite, i.e. g is a torsion element. This is a contradiction, so
G acts properly discontinuously.
We summarize the relations between our various notions of discreteness, together with proper discontinuity, in the following table:
Finite dimensional Riemannian manifold:   PrD → SD ↔ MD ↔ WD ↔ COTD ↔ UOTD
General metric space:                     PrD → MD;  SD → MD → WD;  MD → COTD
Infinite dimensional algebraic
hyperbolic space:                         PrD → MD;  SD → MD → WD → COTD → UOTD
Proper metric space:                      PrD → MD;  SD ↔ MD ↔ COTD → WD

Table 1. The relations between different notions of discreteness.
COTD and UOTD stand for discrete with respect to the compact-open and uniform operator topologies respectively. All implications not listed have counterexamples; see Chapter 13.
5.2.4. Behavior with respect to restrictions. Fix G ≤ Isom(X), and
suppose Y ⊆ X is a subspace of X preserved by G, i.e. g(Y ) = Y for all g ∈ G.
Then G can be viewed as a group acting on the metric space (Y, d ↿Y ).
Observation 5.2.14.
(i) G is strongly discrete ⇔ G ↿ Y is strongly discrete
(ii) G is moderately discrete ⇒ G ↿ Y is moderately discrete
(iii) G is weakly discrete ⇒ G ↿ Y is weakly discrete
(iv) G is T -discrete ⇐ G ↿ Y is T ↿ Y -discrete
(v) G acts properly discontinuously on X ⇒ G acts properly discontinuously
on Y .
In particular, strong discreteness is the only concept which is “independent of
the space being acted on”. It is thus the most robust of all our definitions.
Note that for the notions of topological discreteness like COTD and UOTD,
the order of implication reverses; restricting to a subspace may cause a group to no
longer be discrete. Example 13.4.9 is an example of this phenomenon.
5.2.5. Countability of discrete groups. In finite dimensions, all discrete
groups are countable. In general, it depends on what type of discreteness you are
considering.
Proposition 5.2.15. Fix G ≤ Isom(X), and suppose that either
(1) G is strongly discrete, or
(2) X is separable and G is COT-discrete.
Then G is countable.
Proof. If G is strongly discrete, then
#(G) ≤ Σ_{n∈N} #{g ∈ G : ‖g‖ ≤ n} ≤ Σ_{n∈N} #(N) = #(N).
On the other hand, if X is a separable metric space, then by Remark 5.1.4 Isom(X)
is separable metrizable, so it contains no uncountable discrete subspaces.
Remark 5.2.16. An example of an uncountable UOT-discrete subgroup of
Isom(H∞ ) is given in Example 13.4.2, and an example of an uncountable weakly
discrete group acting on a separable R-tree is given in Example 13.4.1. An example
of an uncountable moderately discrete group acting on a (non-separable) R-tree is
given in Remark 13.3.4.
CHAPTER 6
Classification of isometries and semigroups
In this chapter we classify subsemigroups G ⊆ Isom(X) into six categories,
depending on the behavior of the orbit of the basepoint o ∈ X. We start by
classifying individual isometries, although it will turn out that the category into
which an isometry is classified is the same as the category of the cyclic group that
it generates.
We remark that if X is geodesic and G ≤ Isom(X) is a group, then the main
results of this chapter were proven in [88]. Moreover, our terminology is based on
[48, §3.A], where a similar classification was given based on [85, § 3.1].
6.1. Classification of isometries
Fix g ∈ Isom(X), and let
Fix(g) := {x ∈ bord X : g(x) = x}.
Consider ξ ∈ Fix(g) ∩ ∂X. Recall that g ′ (ξ) denotes the dynamical derivative of g
at ξ (see §4.2.3).
Definition 6.1.1. ξ is said to be
• a neutral or indifferent fixed point if g ′ (ξ) = 1,
• an attracting fixed point if g ′ (ξ) < 1, and
• a repelling fixed point if g ′ (ξ) > 1.
Definition 6.1.2. An isometry g ∈ Isom(X) is called
• elliptic if the orbit {g n (o) : n ∈ N} is bounded,
• parabolic if it is not elliptic and has a unique fixed point in ∂X, which is
neutral, and
• loxodromic if it has exactly two fixed points in ∂X, one of which is attracting and the other of which is repelling.
Remark 6.1.3. We use the terminology “loxodromic” rather than the more
common “hyperbolic” to avoid confusion with the many other meanings of the
word “hyperbolic”. In particular, when we get to classification of groups it would
be a bad idea to call any group “hyperbolic” if it is not hyperbolic in the sense of
Gromov.
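In the classical case X = H², where the orientation-preserving isometry group is PSL(2, R), the trichotomy of Definition 6.1.2 can be read off from the trace of a representing determinant-1 matrix: |tr| < 2 gives elliptic, |tr| = 2 parabolic (for non-identity elements), |tr| > 2 loxodromic. The sketch below illustrates this standard finite-dimensional fact; the function name `classify_mobius` is ours, not the text's.

```python
def classify_mobius(a, b, c, d, tol=1e-9):
    """Classify the isometry z -> (az+b)/(cz+d) of H^2, for a real matrix
    with determinant 1, by the absolute value of its trace."""
    assert abs(a * d - b * c - 1) < tol, "matrix must have determinant 1"
    t = abs(a + d)
    if t < 2 - tol:
        return "elliptic"
    if t > 2 + tol:
        return "loxodromic"
    return "parabolic"

assert classify_mobius(1, 1, 0, 1) == "parabolic"     # z -> z + 1, neutral fixed point at infinity
assert classify_mobius(2, 0, 0, 0.5) == "loxodromic"  # z -> 4z, repelling 0, attracting infinity
assert classify_mobius(0, -1, 1, 0) == "elliptic"     # z -> -1/z, fixes i, bounded orbits
```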
The categories of elliptic, parabolic, and loxodromic are clearly mutually exclusive.¹ In the converse direction we have the following:
Theorem 6.1.4. Any isometry is either elliptic, parabolic, or loxodromic.
The proof of Theorem 6.1.4 will proceed through several lemmas.
Lemma 6.1.5 (A corollary of [107, Proposition 5.1]). If g ∈ Isom(X) is not
elliptic, then Fix(g) ∩ ∂X ≠ ∅.
We include the proof for completeness.
Proof. For each t ∈ N, let nt be the smallest integer such that
‖g^{nt}‖ ≥ t.
The sequence (nt)∞₁ is nondecreasing. Given s, t ∈ N with s < t, we have
d(g^{ns}(o), g^{nt}(o)) = ‖g^{nt−ns}‖ < t,
and thus
⟨g^{ns}(o)|g^{nt}(o)⟩o > (1/2)[s + t − t] = s/2 → ∞ as s, t → ∞,
i.e. (g^{nt}(o))t is a Gromov sequence. Let ξ = [(g^{nt}(o))t], and note that
⟨ξ|g(ξ)⟩o = lim_{t→∞} ⟨g^{nt}(o)|g^{nt+1}(o)⟩o ≥ lim_{t→∞} [‖g^{nt}‖ − d(g^{nt}(o), g^{nt+1}(o))] = ∞.
Thus g(ξ) = ξ, i.e. ξ ∈ Fix(g) ∩ ∂X.
Remark 6.1.6 ([107, Proposition 5.2]). If g ∈ Isom(X) is elliptic and if X is
CAT(0), then Fix(g) ∩ X ≠ ∅ due to Cartan's lemma (Theorem 6.2.5 below). Thus
if X is a CAT(0) space, then any isometry of X has a fixed point in bord X.
Lemma 6.1.7. If g ∈ Isom(X) has an attracting or repelling periodic point,
then g is loxodromic.
Proof. Suppose that ξ ∈ ∂X is a repelling fixed point for g ∈ Isom(X), i.e.
g′(ξ) > 1. Recall from Proposition 4.2.8 that
Dξ(g^n(y1), g^n(y2)) ≤ C g′(ξ)^{−n} Dξ(y1, y2)   ∀y1, y2 ∈ Eξ ∀n ∈ Z
for some constant C > 0. Now let n be large enough so that g ′ (ξ)n > C; then
the above inequality shows that the map g n is a strict contraction of the complete
metametric space (Eξ , Dξ ) (cf. Proposition 3.6.19). Then by Theorem 3.6.2, g has
a unique fixed point η ∈ (Eξ )refl = ∂X \ {ξ}. By Corollary 4.2.15, η is an attracting
¹Proposition 4.2.16 can be used to show that loxodromic isometries are not elliptic.
fixed point. Corollary 4.2.15 also implies that g cannot have a third fixed point.
Thus g is loxodromic.
On the other hand, if g has an attracting fixed point, then by Proposition 4.2.14,
g −1 has a repelling fixed point. Thus g −1 is loxodromic, so applying Proposition
4.2.14 again, we see that g is loxodromic.
Proof of Theorem 6.1.4. By contradiction suppose that g is not elliptic
or loxodromic, and we will show that it is parabolic. By Lemma 6.1.5, we have
Fix(g) ∩ ∂X ≠ ∅; on the other hand, by Lemma 6.1.7, every fixed point of g in
∂X is neutral. It remains to show that #(Fix(g)) = 1. By contradiction, suppose
otherwise. Since g is not elliptic, we clearly have Fix(g) ∩ X = ∅. Thus we may
suppose that there are two distinct neutral fixed points ξ1, ξ2 ∈ ∂X.
Now for each n ∈ N, we have
B_{ξi}(o, g^n(o)) ≍+ n logb(g′(ξi)) = 0,   i = 1, 2
by Proposition 4.2.16. Let r = rξ1 ,ξ2 ,o and θ = θξ1 ,ξ2 ,o be as in Section 4.6. Then
by Lemma 4.6.3 we have r(g n (o)) ≍+ θ(g n (o)) ≍+ 0. Thus by Lemma 4.6.2 we
have
kg n k ≍+ |r(g n (o))| + θ(g n (o)) ≍+ 0,
i.e. the sequence {g n (o) : n ∈ N} is bounded. Thus g is elliptic, contradicting our
hypothesis.
Remark 6.1.8 (Cf. [51, Chapter 3, Theorem 1.4]). For R-trees, parabolic
isometries are impossible, so Theorem 6.1.4 shows that every isometry is elliptic or
loxodromic.
Proof. By contradiction suppose that X is an R-tree and that g ∈ Isom(X)
is a parabolic isometry with fixed point ξ ∈ ∂X. Let x = C(o, g(o), ξ) ∈ X; then
x = [o, ξ]t for some t ≥ 0. Now,
d(g(o), x) = kxk + B ξ (g(o), o) = t + 0 = t.
It follows that g(x) = [g(o), ξ]t = x. Thus g is elliptic, a contradiction.
6.1.1. More on loxodromic isometries.
Notation 6.1.9. Suppose g ∈ Isom(X) is loxodromic. Then g+ and g− denote
the attracting and repelling fixed points of g, respectively.
Theorem 6.1.10. Let g ∈ Isom(X) be loxodromic. Then
(6.1.1)   g′(g+) = 1/g′(g−).
Furthermore, for every x ∈ bord X \ {g−} and for every n ∈ N we have
(6.1.2)   D(g^n(x), g+) ≲× [g′(g+)]^n / (D(g−, g+) D(x, g−)),
with ≤ if X is strongly hyperbolic. In particular
x ≠ g− ⇒ g^n(x) → g+ as n → ∞,
and the convergence is uniform over any set whose closure does not contain g−.
Finally,
(6.1.3)   ‖g^n‖ ≍+ |n| logb g′(g−) = |n| logb(1/g′(g+)).
Proof. (6.1.1) follows directly from Corollary 4.2.15.
To demonstrate (6.1.2), note that
⟨x|g−⟩o + ⟨g^n(x)|g+⟩o ≳+ B_{g−}(o, x) + B_{g+}(o, g^n(x))   (by (j) of Proposition 3.3.3)
≍+ B_{g−}(o, x) + B_{g+}(o, x) − n logb g′(g+)   (by Proposition 4.2.16)
≍+ ⟨g−|g+⟩x − ⟨g−|g+⟩o − n logb g′(g+)   (by (g) of Proposition 3.3.3)
≥ −⟨g−|g+⟩o − n logb g′(g+).
Exponentiating and rearranging yields (6.1.2).
Finally, (6.1.3) follows directly from Lemmas 4.6.2 and 4.6.3.
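As a numerical sanity check on (6.1.3) in the simplest concrete setting (an illustration, not part of the text): in the upper half-plane model of H² with b = e, the loxodromic map g : z ↦ λz (λ > 1) has g− = 0 and g+ = ∞, and ‖gⁿ‖ = d(i, λⁿi) = |n| log λ exactly, matching |n| log_b g′(g−) with no additive error.

```python
import math

def dist_H2(z, w):
    """Hyperbolic distance in the upper half-plane model:
    d(z, w) = arccosh(1 + |z - w|^2 / (2 Im z Im w))."""
    return math.acosh(1 + abs(z - w) ** 2 / (2 * z.imag * w.imag))

lam = 3.0
o = 1j  # basepoint i
for n in range(1, 6):
    gn_o = (lam ** n) * o  # g^n(o) for the loxodromic map g : z -> lam * z
    # ||g^n|| = d(o, g^n(o)) equals n * log(lam) exactly in this model
    assert abs(dist_H2(o, gn_o) - n * math.log(lam)) < 1e-9
```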
6.1.2. The story for real hyperbolic spaces. If X is a real hyperbolic
space, then we may conjugate each g ∈ Isom(X) to a "normal form" whose
geometrical significance is clearer. The normal form will depend on the
classification of g as elliptic, parabolic, or hyperbolic.
Proposition 6.1.11. Let X be a real hyperbolic space, and fix g ∈ Isom(X).
(i) If g is elliptic, then g is conjugate to a map of the form T ↿ B for some
linear isometry T ∈ O∗ (H).
(ii) If g is parabolic, then g is conjugate to a map of the form x ↦ T̂x + p :
E → E, where T ∈ O∗(B) and p ∈ B. Here B = ∂E \ {∞} = H^{α−1}.
(iii) If g is hyperbolic, then g is conjugate to a map of the form x ↦ λT̂x :
E → E, where 0 < λ < 1 and T ∈ O∗(B).
Proof.
(i) If g is elliptic, then by Cartan’s lemma (Theorem 6.2.5 below), g has a
fixed point x ∈ X. Since Isom(X) acts transitively on X (Observation
2.3.2), we may conjugate to B in a way such that g(0) = 0. But then by
Proposition 2.5.4, g is of the form (i).
(ii) Let ξ be the neutral fixed point of g. Since Isom(X) acts transitively on
∂X (Proposition 2.5.9), we may conjugate to E in a way such that g(∞) = ∞.
Then by Proposition 2.5.8 and Example 4.2.11, g is of the form (ii).
(iii) Since Isom(X) acts doubly transitively on ∂X (Proposition 2.5.9), we
may conjugate to E in a way such that g+ = 0 and g− = ∞. Then by
Proposition 2.5.8 and Example 4.2.11, g is of the form (iii). (We have
p = 0 since 0 ∈ Fix(g).)
Remark 6.1.12. If g ∈ Isom(X) is elliptic or loxodromic, then the orbit
(g^n(o))∞₁ exhibits some "regularity": either it remains bounded forever, or it
diverges to the boundary. On the other hand, if g is parabolic then the orbit can
oscillate, both accumulating at infinity and returning infinitely often to a bounded
region. This is in sharp contrast to finite dimensions, where such behavior is impossible. We discuss such examples in detail in §11.1.2.
6.2. Classification of semigroups
Notation 6.2.1. We denote the set of global fixed points of a semigroup G ⊆ Isom(X) by
Fix(G) := ∩_{g∈G} Fix(g).
Definition 6.2.2. G is
• elliptic if G(o) is a bounded set.
• parabolic if G is not elliptic and has a global fixed point ξ ∈ Fix(G) such
that
g ′ (ξ) = 1 ∀g ∈ G,
i.e. ξ is neutral with respect to every element of G.
• loxodromic if it contains a loxodromic isometry.
Below we shall prove the following theorem:
Theorem 6.2.3. Every semigroup of isometries of a hyperbolic metric space is
either elliptic, parabolic, or loxodromic.
Observation 6.2.4. An isometry g is elliptic, parabolic, or loxodromic according to whether the cyclic group generated by it is elliptic, parabolic, or loxodromic.
A similar statement holds if “group” is replaced by “semigroup”. Thus, Theorem
6.1.4 is a special case of Theorem 6.2.3.
Before proving Theorem 6.2.3, let us say a bit about each of the different
categories in this classification.
6.2.1. Elliptic semigroups. Elliptic semigroups are the least interesting of
the semigroups we consider. Indeed, we observe that any strongly discrete elliptic semigroup is finite. We now consider the question of whether every elliptic
semigroup has a global fixed point.
Theorem 6.2.5 (Cartan’s lemma). If X is a CAT(0) space (and in particular
if X is a CAT(-1) space), then every elliptic subsemigroup G ⊆ Isom(X) has a
global fixed point.
We remark that if G is a group, then this result may be found as [39, Corollary
II.2.8(1)].
Proof. Since G(o) is a bounded set, it has a unique circumcenter [39, Proposition II.2.7], i.e. the minimum
min_{x∈X} sup_{g∈G} d(x, g(o))
is achieved at a single point x ∈ X. We claim that x is a global fixed point of G.
Indeed, for each h ∈ G we have
sup_{g∈G} d(h⁻¹(x), g(o)) = sup_{g∈G} d(x, hg(o)) ≤ sup_{g∈G} d(x, g(o));
since x is the circumcenter we deduce that h⁻¹(x) = x, or equivalently that h(x) = x.
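The circumcenter argument can be watched numerically in the simplest CAT(0) example, the Euclidean plane (an illustrative sketch with hypothetical helper names, not from the text): the orbit of a point under a finite rotation group is a regular polygon, and the point minimizing the maximal distance to the orbit is the common fixed point, the origin.

```python
import math

# Orbit of o = (1, 0) under the rotation group of order 3 acting on R^2:
# an equilateral triangle centered at the origin.
orbit = [(math.cos(2 * math.pi * k / 3), math.sin(2 * math.pi * k / 3))
         for k in range(3)]

def max_dist(x, pts):
    """Radius of the smallest ball centered at x containing pts."""
    return max(math.dist(x, p) for p in pts)

# The circumcenter (0, 0) is fixed by the whole group, and among the sampled
# candidate points it minimizes the maximal distance to the orbit.
center = (0.0, 0.0)
candidates = [(0.1 * i, 0.1 * j) for i in range(-5, 6) for j in range(-5, 6)]
best = min(candidates, key=lambda x: max_dist(x, orbit))
assert best == center
```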
On the other hand, if we do not restrict to CAT(0) spaces, then it is possible
to have an elliptic group with no global fixed point. We have the following simple
example:
Example 6.2.6. Let X = B \ BB (0, 1) and let g(x) = −x. Then X is a
hyperbolic metric space, g is an isometry of X, and G = {id, g} is an elliptic group
with no global fixed point.
6.2.2. Parabolic semigroups. Parabolic semigroups will be important in
Chapter 12 when we consider geometrically finite semigroups. In particular, we
make the following definition:
Definition 6.2.7. Let G ⊆ Isom(X). A point ξ ∈ ∂X is a parabolic fixed point
of G if the semigroup
Gξ := Stab(G; ξ) = {g ∈ G : g(ξ) = ξ}
is a parabolic semigroup.
In particular, if G is a parabolic semigroup then the unique global fixed point
of G is a parabolic fixed point.
Warning 6.2.8. A parabolic group does not necessarily contain a parabolic
isometry; see Example 11.2.18.
Note that Proposition 4.2.8 yields the following observation:
Observation 6.2.9. Let G ⊆ Isom(X), and let ξ be a parabolic fixed point of
G. Then the action of Gξ on (Eξ , Dξ ) is uniformly Lipschitz, i.e.
Dξ (g(y1 ), g(y2 )) ≍× Dξ (y1 , y2 ) ∀y1 , y2 ∈ Eξ ∀g ∈ G,
and the implied constant is independent of g ∈ G. Furthermore, if X is strongly
hyperbolic, then G acts isometrically on Eξ .
Observation 6.2.10. Let G ⊆ Isom(X), and let ξ be a parabolic fixed point
of G. Then for all g ∈ Gξ,
(6.2.1)   Dξ(o, g(o)) ≍× b^{(1/2)‖g‖},
with equality if X is strongly hyperbolic.
Proof. This is a direct consequence of (3.6.6), (h) of Proposition 3.3.3, and
Proposition 4.2.16.
As a corollary we have the following:
Observation 6.2.11. Let G ⊆ Isom(X), and let ξ be a parabolic fixed point
of G. Then for any sequence (gn)∞₁ in Gξ,
‖gn‖ → ∞ ⇔ gn(o) → ξ.
Proof. Indeed,
gn(o) → ξ ⇔ Dξ(o, gn(o)) → ∞ ⇔ ‖gn‖ → ∞.
Remark 6.2.12. If X is an R-tree, then any parabolic group must be infinitely
generated. This follows from a straightforward modification of the proof of Remark
6.1.8.
6.2.3. Loxodromic semigroups. We now come to loxodromic semigroups,
which are the most diverse out of these classes. In fact, they are so diverse that we
separate them into three subclasses.
Definition 6.2.13 ([48]). Let G ⊆ Isom(X) be a loxodromic semigroup. G is
• lineal if Fix(g) = Fix(h) for all loxodromic g, h ∈ G.
• of general type if it has two loxodromic elements g, h ∈ G with
Fix(g) ∩ Fix(h) = ∅.
• focal if #(Fix(G)) = 1.
(We remark that focal groups were called quasiparabolic by Gromov [85, §3, Case 4′].)
We observe that any cyclic loxodromic group or semigroup is lineal, so this
refined classification does not give any additional information for individual isometries.
Proposition 6.2.14. Any loxodromic semigroup is either lineal, focal, or of
general type.
Proof. Clearly, #(Fix(G)) ≤ 2 for any loxodromic semigroup G; moreover,
#(Fix(G)) = 2 if and only if G is lineal. So to complete the proof, it suffices
to show that #(Fix(G)) = 0 if and only if G is of general type. The backward
direction is obvious. Suppose that #(Fix(G)) = 0, but that G is not of general type.
Combinatorial considerations show that there exist three points ξ1 , ξ2 , ξ3 ∈ ∂X such
that Fix(g) ⊆ {ξ1, ξ2, ξ3} for all g ∈ G. But then the set {ξ1, ξ2, ξ3} would have to
be preserved by every element of G, which contradicts the definition of a loxodromic
isometry.
Let G be a focal semigroup, and let ξ be the global fixed point of G. The
dynamics of G will be different depending on whether or not g ′ (ξ) > 1 for any
g ∈ G.
Definition 6.2.15. G will be called outward focal if g ′ (ξ) > 1 for some g ∈ G,
and inward focal otherwise.
Note that an inward focal semigroup cannot be a group.
Proposition 6.2.16. For G ≤ Isom(X), the following are equivalent:
(A) G is focal.
(B) G has a unique global fixed point ξ ∈ ∂X, and g′(ξ) ≠ 1 for some g ∈ G.
(C) G has a unique global fixed point ξ ∈ ∂X, and there are two loxodromic
isometries g, h ∈ G so that g+ = h+ = ξ, but g− ≠ h−.
Proof. The implications (C) ⇒ (A) ⇔ (B) are straightforward. Suppose that
G is focal, and let g ∈ G be a loxodromic isometry. Since G is a group, we may
without loss of generality suppose that g′(ξ) < 1, so that g+ = ξ. Let j ∈ G be
such that g− ∉ Fix(j). By choosing n sufficiently large, we may guarantee that
(jg^n)′(ξ) < 1. Then if h = jg^n, then h is loxodromic and h+ = ξ. But g− ∉ Fix(h),
so g− ≠ h−.
6.3. Proof of the Classification Theorem
We begin by recalling the following definition from Section 4.5:
Definition 4.5.1. For each σ > 0 and x, y ∈ X, let
Shad_y(x, σ) = {η ∈ ∂X : ⟨y|η⟩x ≤ σ}.
We say that Shad_y(x, σ) is the shadow cast by x from the light source y, with
parameter σ. For shorthand we will write Shad(x, σ) = Shad_o(x, σ).
Lemma 6.3.1. For every σ > 0, there exists r > 0 such that for every g ∈
Isom(X) with ‖g‖ ≥ r, if there exists a nonempty closed set
Z ⊆ Shad_{g⁻¹(o)}(o, σ)
satisfying g(Z) ⊆ Z, then g is loxodromic and g+ ∈ Z.
Proof. Recall from the Bounded Distortion Lemma 4.5.6 that
(6.3.1)   D(g(y1), g(y2)) / D(y1, y2) ≤ C b^{−‖g‖}   ∀y1, y2 ∈ Z
for some C > 0 independent of g. Now choose r > 0 large enough so that Cb^{−r} < 1.
If g ∈ Isom(X) satisfies ‖g‖ ≥ r, we can conclude that g : Z → Z is a strict
contraction of the complete metametric space (Z, D). Then by Theorem 3.6.2, g
has a unique fixed point ξ ∈ Zrefl = Z ∩ ∂X.
To complete the proof we must show that g′(ξ) < 1 to prove that g is not
parabolic and that ξ = g+. Indeed, by the Bounded Distortion Lemma, we have
g′(ξ) ≲× b^{−‖g‖} ≤ b^{−r}, so choosing r sufficiently large completes the proof.
Corollary 6.3.2. For every σ > 0, there exists r = rσ > 0 such that for every
g ∈ Isom(X) with kgk ≥ r, if g is not loxodromic, then
(6.3.2)    hg(o)|g −1 (o)io ≥ σ.
Proof. Fix σ > 0, and let σ ′ = σ + δ, where δ is the implied constant in
Gromov’s inequality. Apply Lemma 6.3.1 to get r′ > 0. Let r = max(r′ , 2σ ′ ). Now
suppose that g ∈ Isom(X) satisfies kgk ≥ r ≥ r′ but is not loxodromic. Then by
Lemma 6.3.1, we have
Shad(g(o), σ ′ ) \ Shadg−1 (o) (o, σ ′ ) 6= ∅.
Let x be a member of this set. By definition this means that
ho|xig(o) ≤ σ ′ < hg −1 (o)|xio .
Since kgk ≥ r ≥ 2σ ′ , we have
hg(o)|xio = kgk − ho|xig(o) ≥ 2σ ′ − σ ′ = σ ′ .
Now by Gromov’s inequality we have
hg(o)|g −1 (o)io ≥ min(hg(o)|xio , hg −1 (o)|xio ) − δ ≥ σ ′ − δ = σ.
Lemma 6.3.3. Let G ⊆ Isom(X) be a semigroup which is not loxodromic, and
let (gn )∞1 be a sequence in G such that kgn k → ∞. Then (gn (o))∞1 is a Gromov
sequence.
Proof. Fix σ > 0 large, and let r = rσ be as in Corollary 6.3.2. Since G is
not loxodromic, (6.3.2) holds for every g ∈ G for which kgk ≥ r.
Fix n, m ∈ N with kgn k, kgm k ≥ r; Corollary 6.3.2 gives
(6.3.3)    hgn (o)|gn−1 (o)io ≥ σ
(6.3.4)    hgm (o)|gm−1 (o)io ≥ σ.
By contradiction, suppose that hgn (o)|gm (o)io ≤ σ/2; then Gromov’s inequality
together with (6.3.3) gives
(6.3.5)    hgn−1 (o)|gm (o)io ≍+ 0.
It follows that
kgn gm k = d(gn−1 (o), gm (o)) ≥ 2r − hgn−1 (o)|gm (o)io ≍+ 2r.
Choosing r sufficiently large, we have kgn gm k ≥ r. So by Corollary 6.3.2,
(6.3.6)    hgn gm (o)|gm−1 gn−1 (o)io ≥ σ.
Now
hgn (o)|gn gm (o)io = ho|gm (o)ign−1 (o)
= kgn k − hgn−1 (o)|gm (o)io
≍+ kgn k
(by (6.3.5))
≥ r,
i.e.
(6.3.7)
hgn (o)|gn gm (o)io &+ r.
A similar argument yields
(6.3.8)    hgm−1 (o)|gm−1 gn−1 (o)io &+ r.
Combining (6.3.4), (6.3.8), (6.3.6), and (6.3.7), together with Gromov’s inequality,
yields
hgn (o)|gm (o)io &+ min(σ, r).
This completes the proof.
Proof of Theorem 6.2.3. Suppose that G is neither elliptic nor loxodromic,
and we will show that it is parabolic. Since G is not elliptic, there is a sequence
(gn )∞1 in G such that kgn k → ∞. By Lemma 6.3.3, (gn (o))∞1 is a Gromov sequence;
let ξ ∈ ∂X be the limit point.
Note that ξ is uniquely determined by G; if (hn (o))∞1 were another Gromov
sequence, then we could let
jn := gn/2 if n is even, h(n−1)/2 if n is odd.
The sequence (jn (o))∞1 would tend to infinity, so by Lemma 6.3.3 it would be a
Gromov sequence. But that exactly means that the Gromov sequences (gn (o))∞1
and (hn (o))∞1 are equivalent. Moreover, it is easy to see that ξ does not depend on
the choice of the basepoint o ∈ X.
In particular, the fact that ξ is canonically determined by G implies that ξ is
a global fixed point of G. To complete the proof, we need to show that g ′ (ξ) = 1
for all g ∈ G. Suppose we have g ∈ G such that g ′ (ξ) 6= 1. Then g is loxodromic
by Lemma 6.1.7, a contradiction.
6.4. Discreteness and focal groups
Proposition 6.4.1. Fix G ≤ Isom(X), and suppose that either
(1) G is strongly discrete,
(2) X is CAT(-1) and G is moderately discrete, or
(3) X admits unique geodesic extensions (e.g. X is an algebraic hyperbolic
space) and G is weakly discrete.
Then G is not focal.
Strongly discrete case. Suppose that G is a focal group. Let ξ ∈ ∂X be
its global fixed point, and let g, h ∈ G be as in (C) of Proposition 6.2.16. Since
h−n (o) → h− 6= ξ, we have
hh−n (o)|ξio ≍+,h 0
and thus
hhn (o)|ξio ≍+ khn k − ho|ξihn (o) ≍+,h khn k.
Applying g we have
hghn (o)|ξio = hhn (o)|ξig−1 (o) ≍+,g hhn (o)|ξio ≍+,h khn k
and applying Gromov’s inequality we have
hhn (o)|ghn (o)io ≍+,g,h khn k ≍+,g kghn k.
Now
kh−n ghn k = d(hn (o), ghn (o))
= khn k + kghn k − 2hhn (o)|ghn (o)io ≍+,g,h 0.
Since G is strongly discrete, this implies that the collection {h−n ghn : n ∈ N} is
finite, and so for some n1 < n2 we have
h−n1 ghn1 = h−n2 ghn2
or
hn2 −n1 g = ghn2 −n1 ,
i.e. hn2 −n1 commutes with g. But then hn2 −n1 (g− ) = g− , contradicting that
g− 6= h− . This completes the proof of Proposition 6.4.1(1).
Moderately discrete case. Suppose that G is a focal group. Let ξ ∈ ∂X
be its global fixed point, and let g, h ∈ G be as in (C) of Proposition 6.2.16. Let
k = [g, h] = g −1 h−1 gh ∈ G.
We observe first that
(6.4.1)    k ′ (ξ) = (1/g ′ (ξ)) · (1/h′ (ξ)) · g ′ (ξ) · h′ (ξ) = 1.
Note that strong hyperbolicity is necessary to deduce equality in (6.4.1) rather than
merely a coarse asymptotic.
Next, we claim that k(g− ) 6= g− . Indeed, g− ∈/ Fix(h), so h(g− ) 6= g− . This
in turn implies that h(g− ) ∈/ Fix(g), so gh(g− ) 6= h(g− ). Now applying g −1 h−1 to
both sides shows that k(g− ) 6= g− .
Claim 6.4.2. g −n kg n (o) → o.
Proof. Indeed,
kg −n kg n k = d(g n (o), kg n (o)).
Let
x=ξ
y=o
z = k(o)
pn = g n (o)
qn = kg n (o).
Figure 6.4.1. The higher the point g n (o) is, the smaller its displacement under k is.
(See Figure 6.4.1.) Then pn , qn ∈ ∆ := ∆(x, y, z). By Proposition 4.4.13 d(pn , qn ) ≤
d(pn , q n ), where pn , q n are comparison points for pn , qn on the comparison triangle
∆ = ∆(x, y, z). Now notice that
B x (pn , q n ) = B ξ (g n (o), kg n (o)) = 0
by Proposition 4.2.16 and (6.4.1). On the other hand, pn , q n → x. An easy
calculation based on (2.5.3) and Proposition 3.5.5 (letting x = ∞) shows that
d(pn , q n ) → 0, and thus that kg −n kg n k → 0 i.e. g −n kg n (o) → o.
⊳
Since G is moderately discrete, this implies that the collection {g −n kg n : n ∈
N} is finite. As before (in the proof of the strongly discrete case), this implies
that g n and k commute for some n ∈ N. But (g n )− = g− , and k(g− ) 6= g− ,
which contradicts that g n and k commute. This completes the proof of Proposition
6.4.1(2).
Weakly discrete case. Suppose that G is a focal group. Let ξ, g, h, and k
be as above. Without loss of generality, suppose that o ∈ [g− , ξ].
Claim 6.4.3. g −n kg n (o) 6= o for all n ∈ N.
Proof. Fix n ∈ N. As observed above, k(g− ) 6= g− . On the other hand,
k(ξ) = ξ, and g n (o) ∈ [g− , ξ]. Since X admits unique geodesic extensions, it follows
that k(g n (o)) 6= g n (o), or equivalently that g −n kg n (o) 6= o.
⊳
Together with Claim 6.4.2, this contradicts that G is weakly discrete. This
completes the proof of Proposition 6.4.1(3).
CHAPTER 7
Limit sets
Throughout this chapter, we fix a subsemigroup G ⊆ Isom(X). We define
the limit set of G, along with various subsets. We then define several concepts in
terms of the limit set, including elementariness and compact type, while relating
other concepts to the limit set, such as the quasiconvex core and irreducibility of a
group action. We also prove that the limit set is minimal in an appropriate sense
(Propositions 7.4.1–7.4.6).
7.1. Modes of convergence to the boundary
We recall (Observation 3.4.20) that a sequence (xn )∞1 in X converges to a point
ξ ∈ ∂X if and only if
hxn |ξio −→ ∞ as n → ∞.
In this section we define more restricted modes of convergence. To get an intuition
let us consider the case where X = E = Eα is the half-space model of a real
hyperbolic space. Consider a sequence (xn )∞1 in E which converges to a point
ξ ∈ B := ∂E \ {∞} = Hα−1 . We say that xn → ξ conically if there exists θ > 0
such that if we let
C(ξ, θ) = {x ∈ E : x1 ≥ sin(θ)kx − ξk}
then xn ∈ C(ξ, θ) for all n ∈ N. We call C(ξ, θ) the cone centered at ξ with
inclination θ; see Figure 7.1.1.
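The cone condition can be checked numerically. The sketch below is our own illustration, not from the text; points of E are written as (height, horizontal position), and ξ is a boundary point at height 0. It contrasts a sequence that stays in a fixed cone C(ξ, θ) with one that approaches ξ tangentially and so eventually leaves every cone:

```python
import math

def in_cone(x, xi, theta):
    """Membership in C(xi, theta) = {x in E : x1 >= sin(theta) * ||x - xi||},
    where x = (height, horizontal) and xi is a boundary point (height 0)."""
    dist = math.hypot(x[0], x[1] - xi)  # Euclidean distance from x to xi
    return x[0] >= math.sin(theta) * dist

xi = 0.0
theta = math.pi / 6  # sin(theta) = 1/2

# x_n = (1/n, 1/n): height/distance ratio is constant (1/sqrt(2) > 1/2),
# so the sequence stays inside C(xi, pi/6) while converging to xi.
conical = [(1.0 / n, 1.0 / n) for n in range(1, 50)]

# x_n = (1/n**2, 1/n): the height decays faster than the distance to xi,
# so the ratio tends to 0 and the sequence leaves every fixed cone.
tangential = [(1.0 / n**2, 1.0 / n) for n in range(2, 50)]

print(all(in_cone(x, xi, theta) for x in conical))          # → True
print(any(in_cone(x, xi, theta) for x in tangential[10:]))  # → False
```

The ratio x1 /kx − ξk plays the role of sin of the inclination; conical convergence is exactly a positive lower bound on it.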
Proposition 7.1.1. Let (xn )∞1 be a sequence in E converging to a point ξ ∈ B.
Then the following are equivalent:
(A) (xn )∞1 converges conically to ξ.
(B) The sequence (xn )∞1 lies within a bounded distance of the geodesic ray [o, ξ].
(C) There exists σ > 0 such that for all n ∈ N,
ho|ξixn ≤ σ,
or equivalently
(7.1.1)
ξ ∈ Shad(xn , σ).
Figure 7.1.1. A sequence converging conically to ξ. For each
point x, the height of x is greater than sin(θ) times the distance
from x to ξ.
Moreover, the equivalence of (B) and (C) holds in all geodesic hyperbolic metric
spaces.
Proof. The equivalence of (B) and (C) follows directly from (i) of Proposition
4.3.1. Moreover, conditions (B) and (C) are clearly independent of the basepoint o.
Thus, in proving (A) ⇔ (B) we may without loss of generality suppose that ξ = 0
and o = (1, 0). Note that if θ > 0 is fixed, then
C(0, θ) = {x ∈ E : ∡(x) ≤ π/2 − θ} = {x ∈ E : θ(x) ≤ − log cos(π/2 − θ)},
where θ = θ0,∞,o is as in Proposition 4.6.4. Since − log cos(π/2 − θ) → ∞ as θ → 0,
we have (A) if and only if the sequence (θ(xn ))∞1 is bounded. But
θ(xn ) = h0|∞ixn ≍+ d(xn , [0, ∞])
(by (i) of Proposition 4.3.1)
= d(xn , [o, 0]),
(for n sufficiently large)
which completes the proof.
Condition (B) of Proposition 7.1.1 motivates calling this kind of convergence
radial ; we shall use this terminology henceforth. However, condition (C) is best
suited to a general hyperbolic metric space.
Definition 7.1.2. Let (xn )∞1 be a sequence in X converging to a point ξ ∈ ∂X.
We will say that (xn )∞1 converges to ξ
• σ-radially if (7.1.1) holds for all n ∈ N,
• radially if it converges σ-radially for some σ > 0,
• σ-uniformly radially if it converges σ-radially, x1 = o, and
d(xn , xn+1 ) ≤ σ ∀n ∈ N,
• uniformly radially if it converges σ-uniformly radially for some σ > 0.
Figure 7.1.2. A sequence converging horospherically but not radially to ξ.
Note that a sequence can converge σ-radially and uniformly radially without
converging σ-uniformly radially.
We next define horospherical convergence. Again, we motivate the discussion
by considering the case of a real hyperbolic space X = E = Eα . This time, however,
we will let ξ = ∞, and we will say that a sequence (xn )∞1 converges horospherically
to ξ if height(xn ) −→ ∞ as n → ∞,
where the height of a point x ∈ E is its first coordinate x1 . This terminology
comes from defining a horoball centered at ∞ to be a set of the form H∞,t = {x :
height(x) > et }; then xn → ∞ horospherically if and only if for every horoball
H∞,t centered at infinity, we have xn ∈ H∞,t for all sufficiently large n. (See also
Definition 12.1.1 below.)
Recalling (cf. Proposition 3.5.5) that
height(x) = bB∞ (o,x) ,
the above discussion motivates the following definition:
Definition 7.1.3. A sequence (xn )∞1 in X converges horospherically to a point
ξ ∈ ∂X if
B ξ (o, xn ) −→ +∞ as n → ∞.
Observation 7.1.4. If xn → ξ radially, then xn → ξ horospherically.
Proof. Indeed,
B ξ (o, xn ) ≍+ kxn k − 2ho|ξixn ≍+ kxn k −→ ∞.
The converse is false, as illustrated in Figure 7.1.2.
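The phenomenon of Figure 7.1.2 can be made quantitative in the upper half-plane. In the sketch below (our own illustration, not from the text), the sequence xₙ = (n, n²) has height n → ∞, so it converges horospherically to ∞, while its hyperbolic distance to the vertical ray [o, ∞] is arcsinh(n) → ∞, so the convergence is not radial:

```python
import math

def height(p):
    """First coordinate in the half-space model; x_n -> infinity
    horospherically iff height(x_n) -> infinity."""
    return p[0]

def dist_to_vertical_ray(p):
    """Hyperbolic distance from p = (y, x) to the geodesic {x = 0} in the
    upper half-plane: arcsinh(|x| / y). Radial convergence to infinity
    means exactly that this quantity stays bounded."""
    y, x = p
    return math.asinh(abs(x) / y)

# x_n = (n, n**2): the height n tends to infinity, but the horizontal
# drift n**2 grows faster, so the points escape every cone around the ray.
seq = [(n, n ** 2) for n in range(1, 200)]

print(height(seq[-1]))                # → 199 (heights unbounded: horospherical)
print(dist_to_vertical_ray(seq[-1]))  # grows like log n: not radial
```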
Observation 7.1.5. The concepts of convergence, radial convergence, uniformly
radial convergence, and horospherical convergence are independent of the basepoint
o, whereas the concepts of σ-radial convergence and σ-uniformly radial convergence
depend on the basepoint. (Regarding σ-radial convergence, this dependence on the
basepoint is not too severe; see Proposition 7.2.3 below.)
7.2. Limit sets
We define the limit set of G, a subset of ∂X which encodes geometric information
about G. We also define a few important subsets of the limit set.
Definition 7.2.1. Let
Λ(G) := {η ∈ ∂X : gn (o) → η for some (gn )∞1 ∈ GN }
Λr (G) := {η ∈ ∂X : gn (o) → η radially for some (gn )∞1 ∈ GN }
Λr,σ (G) := {η ∈ ∂X : gn (o) → η σ-radially for some (gn )∞1 ∈ GN }
Λur (G) := {η ∈ ∂X : gn (o) → η uniformly radially for some (gn )∞1 ∈ GN }
Λur,σ (G) := {η ∈ ∂X : gn (o) → η σ-uniformly radially for some (gn )∞1 ∈ GN }
Λh (G) := {η ∈ ∂X : gn (o) → η horospherically for some (gn )∞1 ∈ GN }.
These sets are respectively called the limit set, radial limit set, σ-radial limit set,
uniformly radial limit set, σ-uniformly radial limit set, and horospherical limit set
of the semigroup G.
Note that
Λr = ∪σ>0 Λr,σ ,   Λur = ∪σ>0 Λur,σ ,   and   Λur ⊆ Λr ⊆ Λh ⊆ Λ.
Observation 7.2.2. The sets Λ, Λr , Λur , and Λh are invariant1 under the
action of G, and are independent of the basepoint o. The set Λ is closed.
Proof. The first assertion follows from Observation 7.1.5 and the second follows directly from the definition of Λ as the intersection of ∂X with the set of
accumulation points of the set G(o).
1By invariant we always mean forward invariant.
Proposition 7.2.3 (Near-invariance of the sets Λr,σ ). For every σ > 0, there
exists τ > 0 such that for every g ∈ G, we have
(7.2.1)    g(Λr,σ ) ⊆ Λr,τ .
If X is strongly hyperbolic, then (7.2.1) holds for all τ > σ.
Proof. Fix ξ ∈ Λr,σ . There exists a sequence (hn )∞1 so that hn (o) → ξ
σ-radially, i.e.
ho|ξihn (o) ≤ σ ∀n ∈ N
and hn (o) → ξ. Now
ho|g −1 (o)ihn (o) ≥ khn k − kg −1 k −→ +∞.
Thus, for n sufficiently large, Gromov’s inequality gives
(7.2.2)    hg −1 (o)|ξihn (o) .+ σ,
i.e.
ho|g(ξ)ighn (o) .+ σ.
So ghn (o) → g(ξ) τ -radially, where τ is equal to σ plus the implied constant of this
asymptotic. Thus, g(ξ) ∈ Λr,τ .
If X is strongly hyperbolic, then by using (3.3.6) instead of Gromov’s inequality,
the implied constant of (7.2.2) can be made arbitrarily small. Thus τ may be taken
arbitrarily close to σ.
7.3. Cardinality of the limit set
In this section we characterize the cardinality of the limit set according to the
classification of the semigroup G.
Proposition 7.3.1 (Cardinality of the limit set by classification). Fix G ⊆
Isom(X).
(i) If G is elliptic, then Λ = ∅.
(ii) If G is parabolic or inward focal with global fixed point ξ, then Λ = {ξ}.
(iii) If G is lineal with fixed pair {ξ1 , ξ2 }, then Λ ⊆ {ξ1 , ξ2 }, with equality if G
is a group.
(iv) If G is outward focal or of general type, then #(Λ) ≥ #(R). Equality holds
if X is separable.
Case (i) is immediate, while case (iv) requires the theory of Schottky groups
and will be proven in Chapter 10 (see Proposition 10.5.4).
Proof of (ii). For g ∈ G, g ′ (ξ) ≤ 1, so by Proposition 4.2.16, we have
B ξ (g(o), o) .+ 0. In particular, by (h) of Proposition 3.3.3 we have
hx|ξio &+ (1/2)kxk ∀x ∈ G(o).
This implies that xn → ξ for any sequence (xn )∞1 in G(o) satisfying kxn k → ∞. It
follows that Λ = {ξ}.
Proof of (iii). By Lemma 4.6.3 we have
θ(g(o)) ≍+ θ(o) = 0 ∀g ∈ G,
where θ = θξ1 ,ξ2 ,o = θξ2 ,ξ1 ,o is as in Section 4.6. Thus
hξ1 |ξ2 ix ≍+ 0 ∀x ∈ G(o).
Fix a sequence G(o) ∋ xn → ξ ∈ Λ. By Gromov’s inequality, there exists i = 1, 2
such that
ho|ξi ixn ≍+ 0 for infinitely many n.
It follows that xn → ξi radially along some subsequence, and in particular ξ = ξi .
Thus Λ ⊆ {ξ1 , ξ2 }.
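A concrete instance of case (iii) (our own example, not from the text): the loxodromic isometry g(z) = 2z of the upper half-plane generates a lineal (cyclic) group with fixed pair {0, ∞}, and the orbit of o = i accumulates exactly on that pair:

```python
import math

def hyp_dist_i_axis(t):
    """Hyperbolic distance d(i, t*i) in the upper half-plane: |log t|."""
    return abs(math.log(t))

# g(z) = 2z fixes 0 and infinity; the group it generates is lineal.
# Heights of the forward orbit g^n(i) = 2^n i tend to infinity,
# heights of the backward orbit g^{-n}(i) = 2^{-n} i tend to 0,
# so the limit set is exactly the fixed pair {0, infinity}.
orbit_fwd = [2.0 ** n for n in range(1, 30)]
orbit_bwd = [2.0 ** -n for n in range(1, 30)]

print(hyp_dist_i_axis(orbit_fwd[-1]))  # → 29 * log 2 ≈ 20.1, so kg^29 k → ∞
print(orbit_bwd[-1] < 1e-8)            # → True: backward orbit converges to 0
```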
Definition 7.3.2. Fix G ⊆ Isom(X). G is called elementary if #(Λ) < ∞ and
nonelementary if #(Λ) = ∞.
Thus, according to Proposition 7.3.1, elliptic, parabolic, lineal, and inward
focal semigroups are elementary while outward focal semigroups and semigroups of
general type are nonelementary.
Remark 7.3.3. In the Standard Case, some authors (e.g. [148, §5.5]) define a
subgroup of Isom(X) to be elementary if there is a global fixed point or a global fixed
geodesic line. According to this definition, focal groups are considered elementary.
By contrast, we follow [48] and others in considering them to be nonelementary.
Another common definition in the Standard Case is that a group is elementary
if it is virtually abelian. This agrees with our definition, but beyond the Standard
Case this equivalence no longer holds (cf. Observation 11.1.4 and Remark 11.1.6).
7.4. Minimality of the limit set
Observation 7.2.2 identified the limit set Λ as a closed G-invariant subset of the
Gromov boundary ∂X. In this section, we give a characterization of Λ depending
on the classification of G.
Proposition 7.4.1 (Cf. [53, Théorème 5.1]). Fix G ⊆ Isom(X). Then any
closed G-invariant subset of ∂X containing at least two points contains Λ.
Proof. We begin with the following lemma, which will also be useful later:
Lemma 7.4.2. Let (xn )∞1 , (yn(1) )∞1 , (yn(2) )∞1 be sequences in bord X satisfying
hyn(1) |yn(2) ixn ≍+ 0
and xn → ξ ∈ ∂X. Then ξ lies in the closure of {yn(i) : n ∈ N, i = 1, 2}.
Proof. For n ∈ N fixed, by Gromov’s inequality there exists in = 1, 2 such
that
ho|yn(in ) ixn ≍+ 0.
It follows that
hxn |yn(in ) io ≍+ kxn k −→ ∞.
On the other hand hxn |ξio −→ ∞, so by Gromov’s inequality
hyn(in ) |ξio −→ ∞,
i.e. yn(in ) → ξ.
⊳
Now let F be a closed G-invariant subset of ∂X containing two points ξ1 6= ξ2 ,
and let η ∈ Λ. Then there exists a sequence (gn )∞1 so that gn (o) → η. Applying
Lemma 7.4.2 with xn = gn (o) and yn(i) = gn (ξi ) ∈ F completes the proof.
The proof of Proposition 7.4.1 may be compared to the proof of [73, Theorem
3.1], where a quantitative convergence result is proven assuming that η is in the
radial limit set (and assuming that G is a group).
Corollary 7.4.3. Let G ⊆ Isom(X) be nonelementary.
(i) If G is outward focal with global fixed point ξ, then Λ is the smallest closed
G-invariant subset of ∂X which contains a point other than ξ.
(ii) (Cf. [20, Theorem 5.3.7]) If G is of general type, then Λ is the smallest
nonempty closed G-invariant subset of ∂X.
Proof. Any G-invariant set containing a point which is not fixed by G contains
two points.
Corollary 7.4.4. Let G ⊆ Isom(X) be nonelementary. Then
Λ = Λr = Λur .
Proof. The inclusions ⊇ are clear. On the other hand, for each loxodromic
g ∈ G we have g+ ∈ Λur . Thus Λur 6= ∅, and Λur 6⊆ {ξ} if G is outward focal with
global fixed point ξ. By Proposition 7.3.1, G is either outward focal or of general
type. Applying Corollary 7.4.3, we have Λur ⊇ Λ.
Remark 7.4.5. If G is elementary, it is easily verified that Λ = Λr = Λur unless
G is parabolic, in which case Λr = Λur = ∅ ⫋ Λ.
If G is a nonelementary group, then Corollary 7.4.3 immediately implies that
the set of loxodromic fixed points of G is dense in Λ. However, if G is not a group
then this conclusion does not follow, since the set of attracting loxodromic fixed
points is not necessarily G-invariant. (The set of attracting fixed points is the right
set to consider, since the set of repelling fixed points is not necessarily a subset of
Λ.) Nevertheless, we have the following:
Proposition 7.4.6. Let G ⊆ Isom(X) be nonelementary. Then the set
Λ+ := {g+ : g ∈ G is loxodromic}
is dense in Λ.
Proof. First note that it suffices to show that Λ+ contains all elements of Λ
which are not global fixed points. Indeed, if this is true, then Λ+ is G-invariant,
and applying Corollary 7.4.3 completes the proof.
Fix ξ ∈ Λ which is not a global fixed point of G, and choose h ∈ G such that
h(ξ) 6= ξ. Fix ε > 0 small enough so that D(B, h(B)) > ε, where B = B(ξ, ε). Let
σ > 0 be large enough so that the Big Shadows Lemma 4.5.7 holds. Since ξ ∈ Λ,
there exists g ∈ G such that
Shad(g(o), σ) ⊆ B.
Let Z = g −1 (Shad(g(o), σ)) = Shadg−1 (o) (o, σ). Then by Lemma 4.5.7,
Diam(∂X \ Z) ≤ ε. Thus ∂X \ Z can intersect at most one of the sets B, h(B). So B ⊆ Z or
h(B) ⊆ Z. If B ⊆ Z then
g(B) ⊆ B and B ⊆ Shadg−1 (o) (o, σ),
whereas if h(B) ⊆ Z then
gh(B) ⊆ B and B ⊆ Shad(gh)−1 (o) (o, σ + khk).
So by Lemma 6.3.1, we have j+ ∈ B, where j = g or j = gh is a loxodromic
isometry.
The following improvement over Proposition 7.4.6 has a quite intricate proof:
Proposition 7.4.7 (Cf. [20, Theorem 5.3.8], [117, p.349]). Let G ⊆ Isom(X)
be of general type. Then
{(g+ , g− ) : g ∈ G is loxodromic}
is dense in Λ(G) × Λ(G−1 ). Here G−1 = {g −1 : g ∈ G}.
Proof.
Claim 7.4.8. Let g be a loxodromic isometry and fix ε > 0. There exists
δ = δ(ε, g) such that for all ξ1 , ξ2 ∈ ∂X with D(ξ2 , Fix(g)) ≥ ε,
#{i = 0, . . . , 4 : D(g i (ξ1 ), ξ2 ) ≤ δ} ≤ 1.
Proof. Suppose that D(g i (ξ1 ), ξ2 ) ≤ δ for two distinct values of i. Then
D(g i1 (ξ1 ), g i2 (ξ1 )) ≤ 2δ. For every n, we have
D(g n+i1 (ξ1 ), g n+i2 (ξ1 )) .× bkg n k δ
and thus by the triangle inequality
D(g i1 (ξ1 ), g n(i2 −i1 )+i1 (ξ1 )) .×,n δ.
By Theorem 6.1.10, if n is sufficiently large then D(g n(i2 −i1 )+i1 (ξ1 ), g+ ) ≤ ε/2,
which implies that
ε/2 ≤ D(ξ2 , Fix(g)) − D(g n(i2 −i1 )+i1 (ξ1 ), g+ ) ≤ D(ξ2 , g n(i2 −i1 )+i1 (ξ1 )) .×,n δ,
which is a lower bound on δ independent of ξ1 , ξ2 . Choosing δ less than this lower
bound yields a contradiction.
⊳
Claim 7.4.9. There exist ε, ρ > 0 such that for all ξ1 , ξ2 , ξ3 , ξ4 ∈ Λ, there exists
j ∈ G such that
(7.4.1)
D(j(ξk ), ξℓ ) ≥ ε ∀k = 1, 2 ∀ℓ = 3, 4 and kjk ≤ ρ.
Proof. Fix g, h ∈ G loxodromic with Fix(g) ∩ Fix(h) = ∅, and let
ρ = max0≤i,j≤4 kg i hj k.
Now fix ξ1 , ξ2 , ξ3 , ξ4 ∈ Λ. By Claim 7.4.8, for each k = 1, 2 and η ∈ Fix(g), we have
#{j = 0, . . . , 4 : D(hj (ξk ), η) ≤ δ1 := δ(D(Fix(g), Fix(h)), h)} ≤ 1.
It follows that there exists j ∈ {0, . . . , 4} such that D(hj (ξk ), η) ≥ δ1 for all k = 1, 2
and η ∈ Fix(g). Applying Claim 7.4.8 again, we see that for each k = 1, 2 and
ℓ = 3, 4, we have
#{i = 0, . . . , 4 : D(g −i (ξℓ ), hj (ξk )) ≤ δ2 := δ(δ1 , g −1 )} ≤ 1.
It follows that there exists i ∈ {0, . . . , 4} such that D(g −i (ξℓ ), hj (ξk )) ≥ δ2 for all
k = 1, 2 and ℓ = 3, 4. But then
D(g i hj (ξk ), ξℓ ) &× δ2 ,
completing the proof.
⊳
Now fix ξ+ ∈ Λ, ξ− ∈ Λ(G−1 ) distinct, and fix δ > 0 arbitrarily small. By the
definition of Λ, there exist g, h ∈ G such that
D(g(o), ξ+ ), D(h−1 (o), ξ− ) ≤ δ.
Let σ > 0 be large enough so that the Big Shadows Lemma 4.5.7 holds with ε/2
in place of ε, where ε is as in Claim 7.4.9. Then
Diam(∂X \ Shadg−1 (o) (o, σ)), Diam(∂X \ Shadh(o) (o, σ)) ≤ ε/2.
On the other hand,
Diam(Shad(g(o), σ)), Diam(Shad(h−1 (o), σ)) .× δ.
Since Shad(g(o), σ) is far from h−1 (o) and Shad(h−1 (o), σ) is far from g(o), the
Bounded Distortion Lemma 4.5.6 gives
Diam(h(Shad(g(o), σ))), Diam(g −1 (Shad(h−1 (o), σ))) .× δ.
Choose ξ1 ∈ h(Shad(g(o), σ)), ξ2 ∈ g −1 (Shad(h−1 (o), σ)), ξ3 ∈ ∂X\Shadg−1 (o) (o, σ)
and ξ4 ∈ ∂X \ Shadh(o) (o, σ). By Claim 7.4.9, there exists j ∈ G such that (7.4.1)
holds. Then
Diam(jh(Shad(g(o), σ))), Diam(j −1 g −1 (Shad(h−1 (o), σ))) .× δ,
and by choosing δ sufficiently small, we can make these diameters less than ε/2. It
follows that
jh(Shad(g(o), σ)) ⊆ Shadg−1 (o) (o, σ), and
j −1 g −1 (Shad(h−1 (o), σ)) ⊆ Shadh(o) (o, σ)
or equivalently that
gjh(Shad(g(o), σ)) ⊆ Shad(g(o), σ), and
(gjh)−1 (Shad(h−1 (o), σ)) ⊆ Shad(h−1 (o), σ).
By Lemma 6.3.1, it follows that gjh is a loxodromic isometry satisfying
(gjh)+ ∈ Shad(g(o), σ), (gjh)− ∈ Shad(h−1 (o), σ).
In particular D((gjh)+ , ξ+ ), D((gjh)− , ξ− ) .× δ. Since δ was arbitrary, this completes the proof.
7.5. Convex hulls
In this section, we assume that X is regularly geodesic (see Section 4.4). Recall
that for points x, y ∈ bord X, the notation [x, y] denotes the geodesic segment, line,
or ray joining x and y.
Definition 7.5.1. Given S ⊆ bord X, let
Hull1 (S) := ∪x,y∈S [x, y],
Hulln (S) := Hull1 ◦ · · · ◦ Hull1 (S) (n times),
Hull∞ (S) := the closure of ∪n∈N Hulln (S).
The set Hulln (S) will be called the nth convex hull of S. Moreover, Hull∞ (S) will
be called the convex hull of S, and Hull1 (S) will be called the quasiconvex hull of
S.
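As a simple illustration (our own example, assuming X = H²): for three distinct ideal points, the quasiconvex hull is not convex, while the convex hull fills in the region they bound:

```latex
% S = \{\xi_1, \xi_2, \xi_3\} \subseteq \partial \mathbb{H}^2 distinct ideal points:
\mathrm{Hull}_1(S) = [\xi_1,\xi_2] \cup [\xi_2,\xi_3] \cup [\xi_1,\xi_3]
% (the three sides of an ideal triangle, which is not convex), whereas
\mathrm{Hull}_\infty(S) = \text{the closed ideal triangle bounded by these sides,}
% since every interior point lies on a geodesic between two side points.
```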
The terminology “convex hull” comes from the following fact:
Proposition 7.5.2. Hull∞ (S) is the smallest closed set F ⊆ bord X such that
S ⊆ F and
(7.5.1)
[x, y] ⊆ F ∀x, y ∈ F.
A set F satisfying (7.5.1) will be called convex.
Proof. It is clear that S ⊆ Hull∞ (S) ⊆ bord X. To show that Hull∞ (S)
is convex, fix x, y ∈ Hull∞ (S). Then there exist sequences A ∋ xn → x and
A ∋ yn → y, where A = ∪n∈N Hulln (S). For each n, [xn , yn ] ⊆ A ⊆ Hull∞ (S). But
since X is regularly geodesic, [xn , yn ] → [x, y] in the Hausdorff metric on bord X.
Since Hull∞ (S) is closed, it follows that [x, y] ⊆ Hull∞ (S).
Conversely, if S ⊆ F ⊆ bord X is a closed convex set, then an induction
argument shows that F ⊇ Hulln (S) for all n. Since F is closed, we have F ⊇
Hull∞ (S).
Another connection between the operations Hull1 and Hull∞ is given by the
following proposition:
Proposition 7.5.3. Suppose that X is an algebraic hyperbolic space. Then there
exists τ > 0 such that for every set S ⊆ bord X we have
X ∩ Hull1 (S) ⊆ X ∩ Hull∞ (S) ⊆ Nτ (X ∩ Hull1 (S)).
(Recall that Nτ (S) denotes the τ -thickening of a set with respect to the hyperbolic
metric d.)
Proof. The proof will proceed using the ball model X = B = BαF . We will
need the following lemma:
Lemma 7.5.4. There exists a closed convex set F ⫋ bord B whose interior
intersects ∂B.
Proof. If α < ∞, this is a consequence of [11, Theorem 3.3].
We will use the finite-dimensional case to prove the infinite-dimensional case.
Suppose that α is infinite. Let Y = B3F ⊆ X. Then by the α < ∞ case of Lemma
7.5.4, there exists a closed convex set F2 ⫋ bord Y whose interior intersects ∂Y ,
say ξ ∈ Int(F2 ) ∩ ∂Y . Choose ε > 0 such that BY (ξ, ε) ⊆ F2 . Then
F1 := Hull∞ (BY (ξ, ε)) ⊆ F2 ⫋ bord Y
by Proposition 7.5.2. On the other hand, F1 is invariant under the action of the
group
G1 := {g ∈ Isom(Y ) : g(0) = 0, g(ξ) = ξ}.
Let
G = {g ∈ Isom(X) : g(0) = 0, g(ξ) = ξ},
and note that G(bord Y ) = bord X. Let F = G(F1 ), and note that F ∩bord Y = F1 .
We claim that F is convex. Indeed, suppose that x, y ∈ F ; then there exists g ∈ G
such that g(x), g(y) ∈ bord Y . (Note that in this step, we need all three dimensions
of Y .) Then g(x), g(y) ∈ F ∩ bord Y = F1 , so by the convexity of F1 we have
g([x, y]) = [g(x), g(y)] ⊆ F1 ⊆ F . Since F is G-invariant, we have [x, y] ⊆ F .
In addition to being convex, F is also closed and contains the set G(BY (ξ, ε)) =
BX (ξ, ε). Thus, ξ ∈ Int(F ). Finally, since F ∩ bord Y = F1 ⫋ bord Y , it follows
that F ⫋ bord X.
⊳
Let F be as in Lemma 7.5.4. Since F ⫋ bord B is a closed set, it follows that
B \ F 6= ∅. By the transitivity of Isom(B) (Observation 2.3.2), we may without loss
of generality assume that 0 ∈ B \ F . By the transitivity of Stab(Isom(B); 0) on ∂B,
we may without loss of generality assume that e1 ∈ Int(F ). Fix ε > 0 such that
B(e1 , ε) ⊆ F .
We now proceed with the proof of Proposition 7.5.3. It is clear from the
definitions that B ∩ Hull1 (S) ⊆ B ∩ Hull∞ (S). To prove the second inclusion, fix
z ∈ B \ Nτ (Hull1 (S)); we will show that z ∈/ Hull∞ (S). By the transitivity
of Isom(B), we may without loss of generality assume that z = 0. Now for every
x, y ∈ S, we have z = 0 ∈/ Nτ ([x, y]). By (i) of Proposition 4.3.1, we have
hx|yi0 &+ τ
and thus by (3.5.1),
ky − xk .× e−τ .
By choosing τ sufficiently large, this implies that
ky − xk ≤ ε/2 ∀x, y ∈ S.
Moreover, since d(0, x) = 2hx|xi0 &+ τ , by choosing τ sufficiently large we may
guarantee that
kxk ≥ 1 − ε/2 ∀x ∈ S.
Since the claim is trivial if S = ∅, assume that S 6= ∅ and choose x ∈ S. Without
loss of generality, assume that x = λe1 for some λ ≥ 2/3. Then S ⊆ BE (x, ε/2) ⊆
B(e1 , ε) ⊆ F . But then F is a closed convex set containing S, so by Proposition
7.5.2 Hull∞ (S) ⊆ F . Since z = 0 ∈/ F , it follows that z ∈/ Hull∞ (S).
Corollary 7.5.5. Suppose that X is an algebraic hyperbolic space. Then for
every closed set S ⊆ bord X, we have
Hull∞ (S) ∩ ∂X = S ∩ ∂X.
Proof. The inclusion ⊇ is immediate. Suppose that ξ ∈ Hull∞ (S) ∩ ∂X, and
find a sequence X ∩ Hull∞ (S) ∋ xn → ξ. By Proposition 7.5.3, for each n there
exist yn(1) , yn(2) ∈ S such that xn ∈ Nτ ([yn(1) , yn(2) ]); by Proposition 4.3.1 we have
hyn(1) |yn(2) ixn ≍+ 0. Applying Lemma 7.4.2 gives ξ ∈ S.
Remark 7.5.6. Corollary 7.5.5 was proven for the case where X is a pinched
(finite-dimensional) Hadamard manifold and S ⊆ ∂X by M. T. Anderson [11,
Theorem 3.3]. It was conjectured to hold whenever X is “strictly convex” by
Gromov [83, p.11], who observed that it holds in the Standard Case. However,
this conjecture was proven to be false independently by A. Ancona [9, Corollary
C] and A. Borbély [32, Theorem 1], who each constructed a three-dimensional
CAT(-1) manifold X and a point ξ ∈ ∂X such that for every neighborhood U of ξ,
Hull∞ (U ) = bord X.
Thus, although the ∞-convex hull has more geometric and intuitive appeal
based on Proposition 7.5.2, without more hypotheses there is no way to restrain its
geometry. The 1-convex hull is thus more useful for our applications. Proposition
7.5.3 indicates that in the case of an algebraic hyperbolic space, we are not losing
too much by the change.
Definition 7.5.7. The convex core of a semigroup G ⊆ Isom(X) is the set
CΛ := X ∩ Hull∞ (Λ),
and the quasiconvex core is the set
Co := X ∩ Hull1 (G(o)).
Observation 7.5.8. The convex core and quasiconvex core are both closed
G-invariant sets. The quasiconvex core depends on the distinguished point o. However:
Proposition 7.5.9. Fix x, y ∈ X. Then
Cx ⊆ NR (Cy )
for some R > 0.
Proof. Fix z ∈ Cx . Then z ∈ [g(x), h(x)] for some g, h ∈ G. It follows that
hg(y)|h(y)iz ≍+ hg(x)|h(x)iz = 0.
So by Proposition 4.3.1, d(z, [g(y), h(y)]) ≍+ 0. But [g(y), h(y)] ⊆ Cy , so d(z, Cy ) ≍+
0. Letting R be the implied constant completes the proof.
Remark 7.5.10. In many cases, we can get information about the action of G
on X by looking just at its restriction to CΛ or to Co . We therefore also remark
that if X is a CAT(-1) space, then CΛ is also a CAT(-1) space.
In the sequel the following notation will be useful:
Notation 7.5.11. For a set S ⊆ bord X let
(7.5.2)    S ′ = S ∩ ∂X.
Observation 7.5.12. (Co )′ = Λ.
Proof. Since Λ = (G(o))′ and G(o) ⊆ Co , we have (Co )′ ⊇ Λ. Suppose that
ξ ∈ (Co )′ , and let Co ∋ xn → ξ. By definition, for each n there exist yn(1) , yn(2) ∈ G(o)
such that xn ∈ [yn(1) , yn(2) ]. Lemma 7.4.2 completes the proof.
7.6. Semigroups which act irreducibly on algebraic hyperbolic spaces
Definition 7.6.1. Suppose that X is an algebraic hyperbolic space, and fix
G ⊆ Isom(X). We shall say that G acts reducibly on X if there exists a nontrivial
totally geodesic G-invariant subset S ⫋ bord X. Otherwise, we shall say that G
acts irreducibly on X.
Remark 7.6.2. A parabolic or focal subsemigroup of Isom(X) may act either
reducibly or irreducibly on X.
Proposition 7.6.3. Let G ⊆ Isom(X) be nonelementary. Then the following
are equivalent:
(A) G acts reducibly on X.
(B) There exists a nontrivial totally geodesic subset S ⫋ bord X such that
Λ ⊆ S.
(C) There exists a nontrivial totally geodesic subset S ⫋ bord X such that
CΛ ⊆ S.
(D) There exists a nontrivial totally geodesic subset S ⫋ bord X such that
Co ⊆ S for some o ∈ X.
Proof of (A) ⇒ (B). Let S ⫋ bord X be a nontrivial totally geodesic G-invariant subset. Fix o ∈ S ∩ X. Then Λ is contained in the closure of G(o), which is contained in S.
Proof of (B) ⇒ (C). If S is any totally geodesic set which contains Λ, then
S is a closed convex set containing Λ, so by Proposition 7.5.2, CΛ ⊆ S.
Proof of (C) ⇒ (D). Since G is nonelementary, CΛ 6= ∅. Fix o ∈ CΛ ; then
Co ⊆ CΛ .
Proof of (D) ⇒ (A). Let S be the smallest totally geodesic subset of X
which contains Co , i.e.
S := ∩{W : W ⊇ Co totally geodesic}.
Then our hypothesis implies that S ⫋ bord X. Since o ∈ S, S is nontrivial. It is
obvious from the definition that S is G-invariant. This completes the proof.
Remark 7.6.4. If G ⊆ Isom(X) is nonelementary, then Proposition 7.6.3 gives
us a way to find a nontrivial totally geodesic set on which G acts irreducibly; namely,
the smallest totally geodesic set containing Λ, or equivalently CΛ , will have this
property (cf. Lemma 2.4.5). On the other hand, there exists a parabolic group
G ≤ Isom(H∞ ) such that G does not act irreducibly on any nontrivial totally
geodesic subset S ⊆ bord H∞ (Remark 11.2.19).
7.7. Semigroups of compact type
Definition 7.7.1. We say that a semigroup G ⊆ Isom(X) is of compact type
if its limit set Λ is compact.
Proposition 7.7.2. For G ⊆ Isom(X), the following are equivalent:
(A) G is of compact type.
(B) Every sequence (xn )∞1 in G(o) with kxn k → ∞ has a convergent subsequence.
Furthermore, if X is regularly geodesic, then (A)-(B) are equivalent to:
(C) The set Co is a proper metric space.
and if X is an algebraic hyperbolic space, then they are equivalent to:
(D) The set CΛ is a proper metric space.
Proof of (A) ⇒ (B). Fix a sequence (gn )∞1 in G with kgn k → ∞. The
existence of such a sequence implies that G is not elliptic. If G is parabolic or inward
focal, then the proof of Proposition 7.3.1(ii) shows that gn (o) → ξ, where Λ = {ξ}.
So we may assume that G is lineal, outward focal, or of general type, in which case
Proposition 7.3.1 gives #(Λ) ≥ 2.
∞
Fix distinct ξ1 , ξ2 ∈ Λ, and let (nk )∞
1 be a sequence such that (gnk (ξi ))1
converges for i = 1, 2, and such that
hgn−1
(o)|ξ1 io ≤ hgn−1
(o)|ξ2 io
k
k
for all k. (If this is not possible, switch ξ1 and ξ2 .) We have
(o)|ξ1 io
(o)|ξ1 io , hgn−1
(o)|ξ2 io = hgn−1
0 ≍+,ξ1 ,ξ2 hξ1 |ξ2 io &+ min hgn−1
k
k
k
and thus
→ ∞.
hgnk (o)|gnk (ξ1 )io ≍+,ξ1 ,ξ2 kgnk k −
n
On the other hand, there exists η ∈ Λ such that gnk (ξ1 ) −
→ η, and thus
k
hgnk (ξ1 )|ηio −
→ ∞.
n
Applying Gromov’s inequality yields
hgnk (o)|ηio −
→∞
n
and thus gnk (o) −
→ η. This completes the proof.
k
Proof of (B) ⇒ (A). Fix a sequence (ξ_n)_1^∞ in Λ. For each n ∈ N, choose g_n ∈ G with
⟨g_n(o)|ξ_n⟩_o ≥ n.
In particular ‖g_n‖ ≥ n → ∞. Thus by our hypothesis, there exists a convergent subsequence g_{n_k}(o) → η ∈ Λ. Now
D(ξ_{n_k}, η) ≤ D(g_{n_k}(o), ξ_{n_k}) + D(g_{n_k}(o), η) ≲_× b^{-n_k} + D(g_{n_k}(o), η) → 0,
i.e. ξ_{n_k} → η.
Proof of (A) ⇒ (C). Let (x_n)_1^∞ be a bounded sequence in C_o. For each n ∈ N, there exist y_n^{(1)}, y_n^{(2)} ∈ G(o) such that x_n ∈ N_{1/n}([y_n^{(1)}, y_n^{(2)}]). Choose a sequence (n_k)_1^∞ on which y_{n_k}^{(1)} → α and y_{n_k}^{(2)} → β. Since X is regularly geodesic we have
[y_{n_k}^{(1)}, y_{n_k}^{(2)}] → [α, β].
For each k, choose z_k ∈ [y_{n_k}^{(1)}, y_{n_k}^{(2)}] with d(x_{n_k}, z_k) ≤ 1/n_k. Since the sequence (z_k)_1^∞ is bounded, it must have a subsequence which converges to a point in [α, β]; it follows that the corresponding subsequence of (x_{n_k})_1^∞ is also convergent. Thus every bounded sequence in C_o has a convergent subsequence, so C_o is proper.
Proof of (C) ⇒ (B). Obvious since G(o) ⊆ Co .
Proof of (A) ⇒ (D). Note first of all that we cannot get (A) ⇒ (D) immediately from Proposition 7.5.3, since the τ-thickening of a compact set is no longer compact.
By [35, Proposition 1.5], there exists a metric ρ on bord X compatible with the
topology such that the map F 7→ Hull1 (F ) is a semicontraction with respect to the
Hausdorff metric of (bord X, ρ). (Finite-dimensionality is not used in any crucial
way in the proof of [35, Proposition 1.5],2 and in any case for algebraic hyperbolic
spaces it can be proven by looking at finite-dimensional subsets, as we did in the
proof of Proposition 7.5.3.) We remark that if F = R, then such a metric ρ can
be prescribed explicitly: if X = B is the ball model, then the Euclidean metric on
bord B ⊆ H has this property, due to the fact that geodesics in the ball model are
line segments in H (cf. (2.2.3)).3
Now let us demonstrate (D). It suffices to show that bord CΛ = Hull∞ (Λ) is
compact. Since Hull∞ (Λ) is by definition closed, it suffices to show that Hull∞ (Λ)
is totally bounded with respect to the ρ metric. Indeed, fix ε > 0. Since Λ is
compact, there is a finite set Fε ⊆ Λ such that
Λ ⊆ Nε/2 (Fε ).
(In this proof, all neighborhoods are taken with respect to the ρ metric.) Let Xε ⊆
X be a finite-dimensional totally geodesic set containing Fε . Then Λ ⊆ Nε/2 (Xε ).
On the other hand, since Xε is compact, there exists a finite set Fε′ ⊆ Xε such that
Xε ⊆ Nε/2 (Fε′ ).
Now, our hypothesis on ρ implies that
Hull1 (Nε/2 (Xε )) ⊆ Nε/2 (Hull1 (Xε )) = Nε/2 (Xε ),
and thus that Nε/2 (Xε ) is convex. But Λ ⊆ Nε/2 (Xε ), so Hull∞ (Λ) ⊆ Nε/2 (Xε ).
Thus
Hull∞ (Λ) ⊆ Nε/2 (Xε ) ⊆ Nε (Fε′ ).
Since ε was arbitrary, this shows that Hull∞ (Λ) is totally bounded, completing the
proof.
Proof of (D) ⇒ (B). Since property (B) is clearly basepoint-independent,
we may without loss of generality suppose o ∈ CΛ . Then (D) ⇒ (C) ⇒ (B).
2 One should keep in mind that the Cartan–Hadamard theorem [119, IX, Theorem 3.8] can be used as a substitute for the Hopf–Rinow theorem in most circumstances.
3 Recall that our “ball model” B is the Klein model rather than the Poincaré model.
As an example of an application we prove the following corollary.
Corollary 7.7.3. Suppose that X is regularly geodesic. Then any moderately
discrete subgroup of Isom(X) of compact type is strongly discrete.
Proof. If G is a moderately discrete group, then G ↿ Co is moderately discrete
by Observation 5.2.14, and therefore strongly discrete by Propositions 5.2.5 and
7.7.2. Thus by Observation 5.2.14, G is strongly discrete.
A well-known characterization of the complement of the limit set in the Standard Case is that it is the set of points where the action of G is discrete. We extend
this characterization to hyperbolic metric spaces for groups of compact type:
Proposition 7.7.4. Let G ≤ Isom(X) be a strongly discrete group of compact type. Then the action of G on bord X \ Λ is strongly discrete in the following sense: for any set S ⊆ bord X \ Λ satisfying
(7.7.1) D(S, Λ) > 0,
we have
#{g ∈ G : g(S) ∩ S ≠ ∅} < ∞.
Proof. By contradiction, suppose that there exists a sequence of distinct (g_n)_1^∞ such that g_n(S) ∩ S ≠ ∅ for all n ∈ N. Since G is strongly discrete, we have ‖g_n‖ → ∞, and since G is of compact type there exist an increasing sequence (n_k)_1^∞ and ξ_+, ξ_− ∈ Λ such that g_{n_k}(o) → ξ_+ and g_{n_k}^{-1}(o) → ξ_−. In the remainder of the proof we restrict to this subsequence, so that g_n(o) → ξ_+ and g_n^{-1}(o) → ξ_−.
For each n, fix x_n ∈ g_n^{-1}(g_n(S) ∩ S), so that x_n, g_n(x_n) ∈ S. Then
D(x_n, ξ_−), D(g_n(x_n), ξ_+) ≥ D(S, Λ) ≍_× 1,
and so
⟨x_n|ξ_−⟩_o, ⟨g_n(x_n)|ξ_+⟩_o ≍_+ 0.
On the other hand, ⟨g_n^{-1}(o)|ξ_−⟩_o, ⟨g_n(o)|ξ_+⟩_o → ∞. Applying Gromov's inequality gives
⟨x_n|g_n^{-1}(o)⟩_o, ⟨g_n(x_n)|g_n(o)⟩_o ≍_+ 0
for all n sufficiently large. But then
‖g_n‖ = ⟨g_n(x_n)|o⟩_{g_n(o)} + ⟨g_n(x_n)|g_n(o)⟩_o ≍_+ 0,
a contradiction.
Part 2
The Bishop–Jones theorem
This part will be divided as follows: In Chapter 8, we motivate and define
the modified Poincaré exponent of a semigroup, which is used in the statement of
Theorem 1.2.3. In Chapter 9 we prove Theorem 1.2.3 and deduce Theorem 1.2.1
from Theorem 1.2.3.
CHAPTER 8
The modified Poincaré exponent
In this chapter we define the modified Poincaré exponent of a semigroup. We
first recall the classical notion of the Poincaré exponent, introduced in the Standard
Case by A. F. Beardon in [18]. Although it is usually defined only for groups, the
generalization to semigroups is trivial.
8.1. The Poincaré exponent of a semigroup
Definition 8.1.1. Fix G ⊆ Isom(X). For each s ≥ 0, the series
Σ_s(G) := Σ_{g∈G} b^{−s‖g‖}
is called the Poincaré series of the semigroup G in dimension s (or “evaluated at s”) relative to b. The number
δ_G = δ(G) := inf{s ≥ 0 : Σ_s(G) < ∞}
is called the Poincaré exponent of the semigroup G relative to b. Here, we let inf ∅ = ∞.
Remark 8.1.2. The Poincaré series is usually defined with a summand of e−skgk
rather than b−skgk . The change of exponents here is important because it relates
the Poincaré exponent to the metric D = Db,o defined in Proposition 3.6.8. In
the Standard Case, and more generally for CAT(-1) spaces, we have made the
convention that b = e (see §4.1), so in this case our series reduces to the classical
one.
Remark 8.1.3. Given G ⊆ Isom(X), we may define the orbital counting function of G to be the function
N_{X,G}(ρ) = #{g ∈ G : ‖g‖ ≤ ρ}.
The Poincaré series may be written as an integral over the orbital counting function as follows:
(8.1.1) Σ_s(G) = log(b^s) Σ_{g∈G} ∫_{‖g‖}^∞ b^{−sρ} dρ = log(b^s) ∫_0^∞ b^{−sρ} Σ_{g∈G} [‖g‖ ≤ ρ] dρ = log(b^s) ∫_0^∞ b^{−sρ} N_{X,G}(ρ) dρ.
The Poincaré exponent is written in terms of the orbital counting function as
(8.1.2) δ_G = limsup_{ρ→∞} (1/ρ) log_b N_{X,G}(ρ).
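Identity (8.1.1) can be sanity-checked exactly when the isometry set is finite: N_{X,G} is then a step function, so the integral splits into exactly computable pieces. A minimal numerical sketch (the norm values below are invented purely for illustration):

```python
import math

# Toy data: norms ‖g‖ of finitely many isometries (hypothetical values),
# together with a base b > 1 and an exponent s as in Definition 8.1.1.
norms = [0.0, 0.5, 1.0, 1.3, 2.0, 2.0, 3.7]
b, s = 2.0, 1.5

# Left-hand side of (8.1.1): the Poincare series Sigma_s(G).
poincare_series = sum(b ** (-s * r) for r in norms)

# Right-hand side: log(b^s) * integral_0^infty b^(-s*rho) N(rho) d(rho),
# where N(rho) = #{g : ‖g‖ <= rho} is a step function, so the integral
# splits into exact pieces plus a tail past the largest norm.
log_bs = s * math.log(b)
pts = sorted(set(norms))

def N(rho):
    return sum(1 for r in norms if r <= rho)

integral = 0.0
for lo, hi in zip(pts, pts[1:]):
    # On [lo, hi) the counting function is constant, equal to N(lo).
    integral += N(lo) * (b ** (-s * lo) - b ** (-s * hi)) / log_bs
integral += N(pts[-1]) * b ** (-s * pts[-1]) / log_bs  # tail to infinity

poincare_integral_form = log_bs * integral
```

The two sides agree up to floating-point error, as the telescoping in the derivation of (8.1.1) predicts.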
Definition 8.1.4. A semigroup G ⊆ Isom(X) with δ_G < ∞ is said to be of convergence type if Σ_{δ_G}(G) < ∞. Otherwise, it is said to be of divergence type. In the case where δ_G = ∞, we say that the semigroup is neither of convergence type nor of divergence type.
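As a toy illustration of Definitions 8.1.1 and 8.1.4 (our own example, not from the text), consider a free semigroup on two generators in which every word of length n is assumed to have norm exactly nℓ; real orbits only satisfy this up to bounded error, but the simplification keeps the computation transparent. With b = e one gets δ = log 2/ℓ, and the semigroup is of divergence type, since at s = δ every length-n layer of the Poincaré series contributes exactly 1:

```python
import math

# Hypothetical toy model: a free semigroup on two generators, where every
# word of length n is assumed to have norm exactly n*ell (illustration only).
ell = 0.7
b = math.e  # the convention b = e of Remark 8.1.2

def orbital_count(rho):
    # N(rho) = #{words w : len(w)*ell <= rho} = 2^(n+1) - 1, n = floor(rho/ell)
    n = int(rho // ell)
    return 2 ** (n + 1) - 1

# Estimate delta via (8.1.2): delta = limsup (1/rho) log_b N(rho).
rho = 200.0
delta_est = math.log(orbital_count(rho), b) / rho
delta_exact = math.log(2) / ell

# Divergence type: at s = delta each length-n layer contributes
# 2^n * b^(-delta*n*ell) = 1, so partial sums grow linearly in the depth.
partial = sum(2 ** n * b ** (-delta_exact * n * ell) for n in range(50))
```

Here `partial` comes out to roughly 50 (one unit per layer), reflecting divergence at the critical exponent.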
The most basic question about the Poincaré exponent is whether it is finite.
For groups, the finiteness of the Poincaré exponent is related to strong discreteness:
Observation 8.1.5. Fix G ≤ Isom(X). If G is not strongly discrete, then
δG = ∞.
Proof. Fix ρ > 0 such that #{g ∈ G : ‖g‖ ≤ ρ} = ∞. Then for all s ≥ 0 we have
Σ_s(G) ≥ Σ_{g∈G, ‖g‖≤ρ} b^{−s‖g‖} ≥ Σ_{g∈G, ‖g‖≤ρ} b^{−sρ} = ∞.
Since s was arbitrary, we have δ_G = ∞.
Remark 8.1.6. Although the converse to Observation 8.1.5 holds in the Standard Case, it fails for infinite-dimensional algebraic hyperbolic spaces; see Example
13.2.2.
Notation 8.1.7. The Poincaré exponent and type can be conveniently combined into a single mathematical object, the Poincaré set
∆_G := {s ≥ 0 : Σ_s(G) = ∞} = [0, δ_G] if G is of divergence type, [0, δ_G) if G is of convergence type, and [0, ∞) if δ_G = ∞.
8.2. The modified Poincaré exponent of a semigroup
From a certain perspective, Observation 8.1.5 indicates a flaw in the Poincaré exponent: if G ≤ Isom(X) is not strongly discrete, then the Poincaré exponent is always infinite even though there may be more geometric information to capture. In this section we introduce a modification of the Poincaré exponent which agrees with the Poincaré exponent in the case where G is strongly discrete, but can be finite even if G is not strongly discrete.
We begin by defining the modified Poincaré exponent of a locally compact group G ≤ Isom(X). Let µ be a Haar measure on G, and for each s consider the Poincaré integral
(8.2.1) I_s(G) := ∫ b^{−s‖g‖} dµ(g).
Definition 8.2.1. The modified Poincaré exponent of a locally compact group G ≤ Isom(X) is the number
δ̃_G = δ̃(G) := inf{s ≥ 0 : I_s(G) < ∞},
where I_s(G) is defined by (8.2.1).
Example 8.2.2. Let X = Hd for some 2 ≤ d < ∞, and let G ≤ Isom(X) be
a positive-dimensional Lie subgroup. Then G is locally compact, but not strongly
discrete. Although the Poincaré series diverges for every s, the exponent of convergence of the Poincaré integral (or “modified Poincaré exponent”) is equal to
the Hausdorff dimension of the limit set of G (Theorem 1.2.3 below), and so in
particular the Poincaré integral converges whenever s > d − 1.
We now proceed to generalize Definition 8.2.1 to the case where G ≤ Isom(X) is not necessarily locally compact. Fix ρ > 0, and consider a maximal ρ-separated1 subset S_ρ ⊆ G(o). Then we have
⋃_{x∈S_ρ} B(x, ρ/2) ⊆ G(o) ⊆ ⋃_{x∈S_ρ} B(x, ρ),
and the former union is disjoint. Now suppose that G is in fact locally compact, and let ν denote the image of Haar measure on G under the map g ↦ g(o). Then if f is a positive function on X whose logarithm is uniformly continuous, we have
Σ_{x∈S_ρ} f(x) ≍_{×,ρ,f} Σ_{x∈S_ρ} ∫_{B(x,ρ/2)} f dν ≤ ∫ f dν ≤ Σ_{x∈S_ρ} ∫_{B(x,ρ)} f dν ≍_{×,ρ,f} Σ_{x∈S_ρ} f(x).
Thus in some sense, the counting measure on S_ρ is a good approximation to the measure ν. In particular, taking f(x) = b^{−s‖x‖} gives
I_s(G) ≍_{×,ρ,s} Σ_{x∈S_ρ} b^{−s‖x‖}.
1 Here, as usual, a ρ-separated subset of a metric space X is a set S ⊆ X such that d(x, y) ≥ ρ for any distinct x, y ∈ S. The existence of a maximal ρ-separated subset of any metric space is guaranteed by Zorn's lemma.
Thus the integral I_s(G) converges if and only if the series Σ_{x∈S_ρ} b^{−s‖x‖} converges. But the latter series is well-defined even if G is not locally compact. This discussion shows that the definition of the “modified Poincaré exponent” given in Definition 8.2.1 agrees with the following definition:
Definition 8.2.3. Fix G ⊆ Isom(X).
• For each set S ⊆ X and s ≥ 0, let
Σ_s(S) = Σ_{x∈S} b^{−s‖x‖},
∆(S) = {s ≥ 0 : Σ_s(S) = ∞},
δ(S) = sup ∆(S).
• Let
(8.2.2) ∆̃_G = ⋂_{ρ>0} ⋂_{S_ρ} ∆(S_ρ),
where the second intersection is taken over all maximal ρ-separated sets S_ρ.
• The number δ̃_G = sup ∆̃_G is called the modified Poincaré exponent of G. If δ̃_G ∈ ∆̃_G, we say that G is of generalized divergence type,2 while if δ̃_G ∈ [0, ∞) \ ∆̃_G, we say that G is of generalized convergence type. Note that if δ̃_G = ∞, then G is neither of generalized convergence type nor of generalized divergence type.
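In a finite metric space, a maximal ρ-separated set as in (8.2.2) can be obtained greedily: scan the points and keep each one that is at distance ≥ ρ from everything already kept. The sketch below (with a made-up planar "orbit"; the text's setting is an infinite orbit, where Zorn's lemma is invoked instead) also verifies that maximality forces the ρ-net property used repeatedly later:

```python
import math

def dist(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

def greedy_separated(points, rho):
    """Greedily extract a maximal rho-separated subset of a finite point set."""
    kept = []
    for p in points:
        if all(dist(p, q) >= rho for q in kept):
            kept.append(p)
    return kept

# Made-up finite "orbit" standing in for G(o): a small planar grid.
orbit = [(0.3 * i, 0.3 * j) for i in range(10) for j in range(10)]
rho = 1.0
S_rho = greedy_separated(orbit, rho)

# rho-separated: distinct kept points are at distance >= rho.
separated = all(dist(p, q) >= rho
                for i, p in enumerate(S_rho) for q in S_rho[i + 1:])
# Maximality forces the rho-net property: every point lies within rho of S_rho.
net = all(min(dist(p, q) for q in S_rho) < rho for p in orbit)
```

Any point missed by the scan would have been kept, which is exactly why maximal ρ-separated sets are ρ-nets (cf. the footnote to Lemma 8.2.6).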
The basic properties of the modified Poincaré exponent are summarized as follows:
Proposition 8.2.4. Fix G ⊆ Isom(X).
(i) ∆̃_G ⊆ ∆_G. (In particular δ̃_G ≤ δ_G.)
(ii) If G satisfies
(8.2.3) sup_{x∈X} #{g ∈ G : d(g(o), x) ≤ ρ} < ∞ ∀ρ > 0,
then ∆̃_G = ∆_G. (In particular δ̃_G = δ_G.)
(iii) If δ̃_G < ∞, then there exist ρ > 0 and a maximal ρ-separated set S_ρ ⊆ G(o) such that #(S_ρ ∩ B) < ∞ for every bounded set B.
(iv) For all ρ > 0 sufficiently large and for every maximal ρ-separated set S_ρ ⊆ G(o), we have ∆(S_ρ) = ∆̃_G. (In particular δ(S_ρ) = δ̃(G).)
2 We use the adjective “generalized” rather than “modified” because all groups of convergence/divergence type are also of generalized convergence/divergence type; see Corollary 8.2.8 below.
Remark 8.2.5. If G is a group, then it is clear that (8.2.3) is equivalent to
the assertion that G is strongly discrete. If G is not a group, then by analogy we
will say that G is strongly discrete if (8.2.3) holds. (Recall that in Chapter 5, the
various notions of discreteness are defined only for groups.)
Proof of Proposition 8.2.4.
(i) Indeed, for every s ≥ 0, ρ > 0, and maximal ρ-separated set S_ρ we have Σ_s(S_ρ) ≤ Σ_s(G), and thus ∆̃(G) ⊆ ∆(S_ρ) ⊆ ∆(G).
(ii) Fix ρ > 0, and let S_ρ ⊆ G(o) be a maximal ρ-separated set. For every x ∈ G(o) there exists y_x ∈ S_ρ with d(x, y_x) ≤ ρ. Then for each y ∈ S_ρ, we have
#{x ∈ G(o) : y_x = y} ≤ M_ρ,
where M_ρ is the value of the supremum (8.2.3). Therefore for each s ≥ 0 we have
Σ_s(G) = Σ_{x∈G(o)} b^{−s‖x‖} ≍_× Σ_{x∈G(o)} b^{−s‖y_x‖} ≤ M_ρ Σ_{y∈S_ρ} b^{−s‖y‖} = M_ρ Σ_s(S_ρ).
In particular, Σ_s(G) < ∞ if and only if Σ_s(S_ρ) < ∞, i.e. ∆(G) = ∆(S_ρ). Intersecting over ρ > 0 and S_ρ ⊆ G(o) yields ∆̃(G) = ∆(G).
(iii) Take ρ and S_ρ such that δ(S_ρ) < ∞.
Before proving (iv), we need a lemma:
Lemma 8.2.6. Fix ρ_1, ρ_2 > 0 with ρ_2 ≥ 2ρ_1. Let S_1 ⊆ G(o) be a ρ_1-net,3 and let S_2 ⊆ G(o) be a ρ_2-separated set. Then
(8.2.4) ∆(S_2) ⊆ ∆(S_1).
Proof. Since S_1 is a ρ_1-net, for every y ∈ S_2 there exists x_y ∈ S_1 with d(y, x_y) < ρ_1. If x_y = x_z for some y, z ∈ S_2, then d(y, z) < 2ρ_1 ≤ ρ_2, and since S_2 is ρ_2-separated we have y = z. Thus the map y ↦ x_y is injective. It follows that for every s ≥ 0, we have
Σ_s(S_2) = Σ_{y∈S_2} b^{−s‖y‖} ≍_× Σ_{y∈S_2} b^{−s‖x_y‖} ≤ Σ_{x∈S_1} b^{−s‖x‖} = Σ_s(S_1),
demonstrating (8.2.4). ⊳
(iv) The statement is trivial if δ̃_G = ∞. So suppose that δ̃_G < ∞, and let ρ, S_ρ be as in (iii). Fix ρ′ ≥ 2ρ and a maximal ρ′-separated set S_{ρ′} ⊆ G(o), and we will show that ∆(S_{ρ′}) = ∆̃_G. The inclusion ⊇ follows by definition. To prove the reverse direction, fix ρ′′ > 0 and a maximal ρ′′-separated set S_{ρ′′}, and we will show that ∆(S_{ρ′}) ⊆ ∆(S_{ρ′′}).
Let F = S_ρ ∩ B(o, ρ′′ + ρ); then #(F) < ∞. We then set
S′ := ⋃_{x∈S_{ρ′′}} g_x(F),
where for each x ∈ S_{ρ′′}, g_x ∈ G is chosen so that x = g_x(o). Then for all s ≥ 0,
Σ_s(S_{ρ′′}) = Σ_{x∈S_{ρ′′}} b^{−s‖x‖} ≍_× Σ_{x∈S_{ρ′′}} Σ_{y∈F} b^{−s‖x‖} ≍_× Σ_{x∈S_{ρ′′}} Σ_{y∈F} b^{−s‖g_x(y)‖} = Σ_s(S′)
and therefore ∆(S_{ρ′′}) = ∆(S′). But S′ is a ρ-net, so by Lemma 8.2.6, we have ∆(S_{ρ′}) ⊆ ∆(S′). This completes the proof.
3 Here, as usual, a ρ-net in a metric space X is a subset S ⊆ X such that X = N_ρ(S). Note that every maximal ρ-separated set is a ρ-net (but not conversely).
Combining with Observation 8.1.5 yields the following:
Corollary 8.2.7. Suppose that G is a group. If ∆ ≠ ∆̃, then δ̃ < δ = ∞.
Corollary 8.2.8. If a group G is of convergence or divergence type, then it is also of generalized convergence or divergence type, respectively.
We will call a group G ≤ Isom(X) Poincaré regular if ∆̃_G = ∆_G, and Poincaré irregular otherwise. A list of sufficient conditions for Poincaré regularity is given in Proposition 9.3.1 below. Conversely, several examples of Poincaré irregular groups may be found in Section 13.4.
CHAPTER 9
Generalization of the Bishop–Jones theorem
In this chapter we prove Theorem 1.2.3, the first part of which states that if G ⊆ Isom(X) is a nonelementary semigroup, then
(1.2.2) dim_H(Λ_r) = dim_H(Λ_ur) = dim_H(Λ_ur ∩ Λ_{r,σ}) = δ̃
for some σ > 0. Our strategy is to prove that dim_H(Λ_ur ∩ Λ_{r,σ}) ≤ dim_H(Λ_ur) ≤ dim_H(Λ_r) ≤ δ̃ ≤ dim_H(Λ_ur ∩ Λ_{r,σ}) for some σ > 0. The first two inequalities are obvious. The third we prove now, and the proof of the fourth inequality will occupy §§9.1–9.2.
Lemma 9.0.9. For G ⊆ Isom(X), we have
dim_H(Λ_r) ≤ δ̃.
Proof. It suffices to show that for each σ > 0 and for each s > δ̃, we have dim_H(Λ_{r,σ}) ≤ s. Fix σ > 0 and s > δ̃. Then there exist ρ > 0 and a maximal ρ-separated set S_ρ ⊆ G(o) such that s > δ(S_ρ), which implies that Σ_s(S_ρ) < ∞. For each x ∈ S_ρ let P_x = Shad(x, σ + ρ).
Claim 9.0.10. ξ ∈ Λ_{r,σ} ⇒ ξ ∈ P_x for infinitely many x ∈ S_ρ.
Proof. Fix ξ ∈ Λ_{r,σ}. Then there exists a sequence g_n(o) → ξ such that for all n ∈ N we have ξ ∈ Shad(g_n(o), σ). For each n, let x_n ∈ S_ρ be such that d(g_n(o), x_n) ≤ ρ; such an x_n exists since S_ρ is maximal ρ-separated. Then by (d) of Proposition 3.3.3 we have ξ ∈ P_{x_n} = Shad(x_n, σ + ρ).
To complete the proof of Claim 9.0.10, we need to show that the collection (x_n)_1^∞ is infinite. Indeed, if x_n ∈ F for some finite F and for all n ∈ N, then we would have d(g_n(o), F) ≤ ρ for all n ∈ N. This would imply that the sequence (g_n(o))_1^∞ is bounded, contradicting that g_n(o) → ξ. ⊳
We next observe that by the Diameter of Shadows Lemma 4.5.8 we have
Σ_{x∈S_ρ} Diam^s(P_x) ≲_{×,σ,ρ} Σ_{x∈S_ρ} b^{−s‖x‖} = Σ_s(S_ρ) < ∞.
Thus by the Hausdorff–Cantelli lemma [24, Lemma 3.10], we have H^s(Λ_{r,σ}) = 0, and thus dim_H(Λ_{r,σ}) ≤ s.
9.1. Partition structures
In this section we introduce the notion of a partition structure, an important technical tool for proving Theorem 1.2.3. We state some theorems about these structures, which will be proven in subsequent sections, and then use them to prove Theorem 1.2.3.¹
Throughout this section, (Z, D) denotes a metric space. We will constantly have in mind the special case Z = ∂X, D = D_{b,o}.
Notation 9.1.1. Let
N* = ⋃_{n=0}^∞ N^n.
If ω ∈ N* ∪ N^N, then we denote by |ω| the unique element of N ∪ {∞} such that ω ∈ N^{|ω|}, and call |ω| the length of ω. For each r ∈ N, we denote the initial segment of ω of length r by
ω_1^r := (ω_n)_1^r ∈ N^r.
For two words ω, τ ∈ N^N, let ω ∧ τ denote their longest common initial segment, and let
d_2(ω, τ) = 2^{−|ω∧τ|}.
Then (N^N, d_2) is a metric space.
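The metric d_2 is in fact an ultrametric: since |ω ∧ υ| ≥ min(|ω ∧ τ|, |τ ∧ υ|), one gets d_2(ω, υ) ≤ max(d_2(ω, τ), d_2(τ, υ)). A small sketch, with finite tuples standing in for (truncations of) elements of N^N:

```python
def common_prefix_len(w, t):
    """|w ∧ t|: the length of the longest common initial segment."""
    n = 0
    for a, c in zip(w, t):
        if a != c:
            break
        n += 1
    return n

def d2(w, t):
    # For equal truncated words we return 0, matching d2 on equal infinite words.
    if w == t:
        return 0.0
    return 2.0 ** (-common_prefix_len(w, t))

# Sample truncated words (entries are arbitrary illustrative digits).
words = [(1, 2, 1, 1), (1, 2, 3, 1), (1, 2, 1, 4), (2, 2, 1, 1)]

# Check the ultrametric inequality d2(w,u) <= max(d2(w,t), d2(t,u)) on all triples.
ultra = all(d2(w, u) <= max(d2(w, t), d2(t, u))
            for w in words for t in words for u in words)
```

For instance, (1, 2, 1, 1) and (1, 2, 3, 1) share the prefix (1, 2), so their distance is 2^{−2} = 0.25.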
Definition 9.1.2. A tree on N is a set T ⊆ N∗ which is closed under initial
segments. (Not to be confused with the various notions of “trees” introduced in
Section 3.1.)
Notation 9.1.3. If T is a tree on N, then we denote its set of infinite branches by
T(∞) := {ω ∈ N^N : ω_1^n ∈ T ∀n ∈ N}.
On the other hand, for n ∈ N we let
T(n) := T ∩ N^n.
For each ω ∈ T, we denote the set of its children by
T(ω) := {a ∈ N : ωa ∈ T}.
1 Much of the material for this section has been taken (with modifications) from [73, §5]. In [73]
we also included as standing assumptions that G was strongly discrete and of general type (see
Definitions 5.2.1 and 6.2.13). Thus some propositions which appear to have the exact same statement are in fact stronger in this monograph than in [73]. Specifically, this applies to Proposition
9.1.9 and Lemmas 9.2.1 and 9.2.5.
Definition 9.1.4. A partition structure on Z consists of a tree T ⊆ N* together with a collection of closed subsets (P_ω)_{ω∈T} of Z, each having positive diameter and enjoying the following properties:
(I) If ω ∈ T is an initial segment of τ ∈ T, then P_τ ⊆ P_ω. If neither ω nor τ is an initial segment of the other, then P_ω ∩ P_τ = ∅.
(II) For each ω ∈ T let
D_ω = Diam(P_ω).
There exist κ > 0 and 0 < λ < 1 such that for all ω ∈ T and for all a ∈ T(ω), we have
(9.1.1) D(P_{ωa}, Z \ P_ω) ≥ κD_ω
and
(9.1.2) κD_ω ≤ D_{ωa} ≤ λD_ω.
Fix s > 0. The partition structure (P_ω)_{ω∈T} is called s-thick if for all ω ∈ T,
(9.1.3) Σ_{a∈T(ω)} D_{ωa}^s ≥ D_ω^s.
Definition 9.1.5. Given a partition structure (P_ω)_{ω∈T}, a substructure of (P_ω)_{ω∈T} is a partition structure of the form (P_ω)_{ω∈T̃}, where T̃ ⊆ T is a subtree.
Observation 9.1.6. Let (P_ω)_{ω∈T} be a partition structure on a complete metric space (Z, D). For each ω ∈ T(∞), the set
⋂_{n∈N} P_{ω_1^n}
is a singleton. If we define π(ω) to be the unique member of this set, then the map π : T(∞) → Z is continuous. (In fact, it was shown in [73, Lemma 5.11] that π is quasisymmetric.)
quasisymmetric.)
Definition 9.1.7. The set π(T (∞)) is called the limit set of the partition
structure.
We remark that a large class of examples of partition structures comes from the
theory of conformal iterated function systems [128] (or in fact even graph directed
Markov systems [129]) satisfying the strong separation condition (also known as the
disconnected open set condition [150]; see also [71], where the limit sets of iterated
function systems satisfying the strong separation condition are called dust-like).
Indeed, the notion of a partition structure was intended primarily to generalize
these examples. The difference is that in a partition structure, the sets (Pω )ω do
not necessarily have to be defined by dynamical means. We also note that if Z = R^d for some d ∈ N, and if (P_ω)_{ω∈T} is a partition structure on Z, then the tree T has bounded degree, meaning that there exists N < ∞ such that #(T(ω)) ≤ N for every ω ∈ T.
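To make the connection with iterated function systems concrete (our own illustration, not part of the text): the cylinders of the middle-thirds Cantor set form a partition structure over the full binary tree, with D_ω = 3^{−|ω|}. Conditions (9.1.1)–(9.1.2) hold with κ = λ = 1/3, and (9.1.3) holds with equality for s = log 2/log 3. The sketch below checks this with exact rational arithmetic on cylinders up to depth 6:

```python
import math
from fractions import Fraction
from itertools import product

def cylinder(word):
    """Interval [a, a + 3^-n] of the middle-thirds cylinder coded by word in {0,1}^n."""
    a, length = Fraction(0), Fraction(1)
    for digit in word:
        length /= 3
        a += 2 * length * digit
    return a, a + length

def interval_dist(I, J):
    (a, b), (c, d) = I, J
    if b < c:
        return c - b
    if d < a:
        return a - d
    return Fraction(0)

kappa = lam = Fraction(1, 3)
ok = True
for n in range(1, 6):
    for word in product((0, 1), repeat=n):
        lo, hi = cylinder(word)
        D_parent = hi - lo
        for digit in (0, 1):
            child = cylinder(word + (digit,))
            D_child = child[1] - child[0]
            # (9.1.2): kappa*D_w <= D_wa <= lambda*D_w (here both equal D_w/3).
            ok = ok and (kappa * D_parent <= D_child <= lam * D_parent)
            # (9.1.1): the child stays >= kappa*D_w away from every other
            # cylinder of the parent's generation (which covers Z \ P_w).
            for other in product((0, 1), repeat=n):
                if other != word:
                    ok = ok and (interval_dist(child, cylinder(other)) >= kappa * D_parent)

# (9.1.3) holds with equality for s = log 2 / log 3, since then 3^s = 2.
s = math.log(2) / math.log(3)
thick = abs(2 * (1.0 / 3.0) ** s - 1.0) < 1e-12
```

Here the strong separation of the two contractions x ↦ x/3 and x ↦ (x + 2)/3 is what makes (9.1.1) hold; an IFS with touching pieces would fail it.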
We will now state two propositions about partition structures and then use
them to prove Theorem 1.2.3. Theorem 9.1.8 will be proven below, and Proposition
9.1.9 will be proven in the following section.
Theorem 9.1.8 ([73, Theorem 5.12]). Fix s > 0. Then any s-thick partition structure (P_ω)_{ω∈T} on a complete metric space (Z, D) has a substructure (P_ω)_{ω∈T̃} whose limit set is Ahlfors s-regular. Furthermore, the tree T̃ can be chosen so that for each ω ∈ T̃, we have that T̃(ω) is an initial segment of T(ω), i.e. T̃(ω) = T(ω) ∩ {1, . . . , N_ω} for some N_ω ∈ N.
After these theorems about partition structures on an abstract metric space,
we return to our more geometric setting of a Gromov triple (X, o, b):
Proposition 9.1.9 (Cf. [73, Lemma 5.13], Footnote 1). Let G ⊆ Isom(X) be nonelementary. Then for all σ > 0 sufficiently large and for every 0 < s < δ̃_G, there exist τ > 0, a tree T on N, and an embedding T ∋ ω ↦ x_ω ∈ G(o) such that if
P_ω := Shad(x_ω, σ),
then (P_ω)_{ω∈T} is an s-thick partition structure on (∂X, D), whose limit set is a subset of Λ_{ur,τ} ∩ Λ_{r,σ}.
Proof of Theorem 1.2.3 using Theorem 9.1.8 & Proposition 9.1.9. We first demonstrate the “moreover” clause. Fix σ > 0 large enough such that Proposition 9.1.9 holds. Fix 0 < s < δ̃, and let (P_ω)_{ω∈T} be the partition structure guaranteed by Proposition 9.1.9. Since this structure is s-thick, applying Theorem 9.1.8 yields a substructure (P_ω)_{ω∈T̃} whose limit set J_s ⊆ Λ_{ur,τ} ∩ Λ_{r,σ} is Ahlfors s-regular, where τ > 0 is as in Proposition 9.1.9. Since 0 < s < δ̃ was arbitrary, this completes the proof of the “moreover” clause.
To demonstrate (1.2.2), note that the inequality dim_H(Λ_r) ≤ δ̃ has already been established (Lemma 9.0.9), and that the inequalities
dim_H(Λ_ur ∩ Λ_{r,σ}) ≤ dim_H(Λ_ur) ≤ dim_H(Λ_r)
are obvious. Thus it suffices to show that
dim_H(Λ_ur ∩ Λ_{r,σ}) ≥ δ̃.
But the mass distribution principle guarantees that
dim_H(Λ_ur ∩ Λ_{r,σ}) ≥ dim_H(J_s) ≥ s
for each 0 < s < δ̃. This completes the proof.
Proof of Theorem 9.1.8. We will recursively define a sequence of maps
µ_n : T(n) → [0, 1]
with the following consistency property:
(9.1.4) µ_n(ω) = Σ_{a∈T(ω)} µ_{n+1}(ωa).
The Kolmogorov consistency theorem will then guarantee the existence of a measure µ̃ on T(∞) satisfying
(9.1.5) µ̃([ω]) = µ_n(ω)
for each ω ∈ T(n).
Let c = 1 − λ^s > 0, where λ is as in (9.1.2). For each n ∈ N, we will demand of our function µ_n the following property: for all ω ∈ T(n), if µ_n(ω) > 0, then
(9.1.6) cD_ω^s ≤ µ_n(ω) < D_ω^s.
We now begin our recursion. For the case n = 0, let µ_0(∅) := cD_∅^s; (9.1.6) is clearly satisfied.
For the inductive step, fix n ∈ N and suppose that µ_n has been constructed satisfying (9.1.6). Fix ω ∈ T(n), and suppose that µ_n(ω) > 0. Formulas (9.1.3) and (9.1.6) imply that
Σ_{a∈T(ω)} D_{ωa}^s > µ_n(ω).
Let N_ω ∈ T(ω) be the smallest integer such that
(9.1.7) Σ_{a≤N_ω} D_{ωa}^s > µ_n(ω).2
Then the minimality of N_ω says precisely that
Σ_{a≤N_ω−1} D_{ωa}^s ≤ µ_n(ω).
Using the above, (9.1.7), and (9.1.2), we have
(9.1.8) µ_n(ω) < Σ_{a≤N_ω} D_{ωa}^s ≤ µ_n(ω) + D_{ωN_ω}^s ≤ µ_n(ω) + λ^s D_ω^s.
For each a ∈ T(ω) with a > N_ω, let µ_{n+1}(ωa) = 0, and for each a ≤ N_ω, let
µ_{n+1}(ωa) = D_{ωa}^s µ_n(ω) / Σ_{b≤N_ω} D_{ωb}^s.
Obviously, µ_{n+1} defined in this way satisfies (9.1.4). Let us prove that (9.1.6) holds (of course, with n + 1 in place of n). The second inequality follows directly from the definition of µ_{n+1} and from (9.1.7). Using (9.1.8), (9.1.6), and the equation c = 1 − λ^s, we deduce the first inequality as follows:
µ_{n+1}(ωa) ≥ D_{ωa}^s µ_n(ω)/(µ_n(ω) + λ^s D_ω^s)
= D_{ωa}^s (1 − λ^s D_ω^s/(µ_n(ω) + λ^s D_ω^s))
≥ D_{ωa}^s (1 − λ^s/(c + λ^s))
= cD_{ωa}^s.
The proof of (9.1.6) (with n + 1 in place of n) is complete. This completes the recursive step.
2 Obviously, this and similar sums are restricted to T(ω).
Let
T̃ = ⋃_{n=1}^∞ {ω ∈ T(n) : µ_n(ω) > 0}.
Clearly, the limit set of the partition structure (P_ω)_{ω∈T̃} is exactly the topological support of µ := π[µ̃], where µ̃ is defined by (9.1.5). Furthermore, for each ω ∈ T̃, we have T̃(ω) = T(ω) ∩ {1, . . . , N_ω}. Thus, to complete the proof of Theorem 9.1.8 it suffices to show that the measure µ is Ahlfors s-regular.
To this end, fix z = π(ω) ∈ Supp(µ) and 0 < r ≤ κD_∅, where κ is as in (9.1.1) and (9.1.2). For convenience of notation let
P_n := P_{ω_1^n}, D_n := Diam(P_n),
and let n ∈ N be the largest integer such that r < κD_n. We have
(9.1.9) κ²D_n ≤ κD_{n+1} ≤ r < κD_n.
(The first inequality comes from (9.1.2), whereas the latter two come from the choice of n.)
We now claim that
B(z, r) ⊆ P_n.
Indeed, by contradiction suppose that w ∈ B(z, r) \ P_n. By (9.1.1) we have
D(z, w) ≥ D(z, Z \ P_n) ≥ κD_n > r,
which contradicts the fact that w ∈ B(z, r).
Let k ∈ N be large enough so that λ^k ≤ κ². It follows from (9.1.9) and repeated applications of the second inequality of (9.1.2) that
D_{n+k} ≤ λ^k D_n ≤ κ²D_n ≤ r,
and thus
P_{n+k} ⊆ B(z, r) ⊆ P_n.
Thus, invoking (9.1.6), we get
(9.1.10) (1 − λ^s)D_{n+k}^s ≤ µ(P_{n+k}) ≤ µ(B(z, r)) ≤ µ(P_n) ≤ D_n^s.
On the other hand, it follows from (9.1.9) and repeated applications of the first inequality of (9.1.2) that
(9.1.11) D_{n+k} ≥ κ^k D_n ≥ κ^{k−1} r.
Combining (9.1.9), (9.1.10), and (9.1.11) yields
(1 − λ^s)κ^{s(k−1)} r^s ≤ µ(B(z, r)) ≤ κ^{−2s} r^s,
i.e. µ is Ahlfors s-regular. This completes the proof of Theorem 9.1.8.
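The recursion in the proof above is completely explicit and can be run on a concrete s-thick tree. In the sketch below (a toy instance of our own choosing) every node has three children with D_{ωa} = λD_ω, λ = 0.4 and s = 1, so that Σ_a D_{ωa}^s = 1.2 D_ω^s ≥ D_ω^s; the code verifies the consistency property (9.1.4) and the key estimate (9.1.6), with c = 1 − λ^s, at every constructed node:

```python
LAM, S, DEPTH, ARITY = 0.4, 1.0, 6, 3
C = 1 - LAM ** S  # the constant c = 1 - lambda^s from the proof

def diam(word):
    return LAM ** len(word)

def children_masses(word, mass):
    """Distribute mass over the children as in the proof of Theorem 9.1.8."""
    d = diam(word + (0,)) ** S          # all children share one diameter here
    # N_w = least number of children whose D^s-sum exceeds the parent's mass (9.1.7)
    N = 1
    while N * d <= mass:
        N += 1
    total = N * d
    return [d * mass / total] * N + [0.0] * (ARITY - N)

ok_914 = ok_916 = True
level = {(): C * diam(()) ** S}         # mu_0(empty word) = c * D^s
for _ in range(DEPTH):
    nxt = {}
    for word, mass in level.items():
        if mass == 0.0:
            continue
        ms = children_masses(word, mass)
        ok_914 = ok_914 and abs(sum(ms) - mass) < 1e-12        # (9.1.4)
        for a, m in enumerate(ms):
            nxt[word + (a,)] = m
            if m > 0.0:
                d = diam(word + (a,)) ** S
                ok_916 = ok_916 and (C * d - 1e-12 <= m < d)   # (9.1.6)
    level = nxt
```

Thickness guarantees that the required number of children N never exceeds the arity, so the while loop always terminates within the tree.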
9.2. A partition structure on ∂X
We begin by stating our key lemma.
Lemma 9.2.1 (Construction of children; cf. [73, Lemma 5.14], Footnote 1). Let G ⊆ Isom(X) be nonelementary. Then for all σ > 0 sufficiently large, for every 0 < s < δ̃_G, for every 0 < λ < 1, and for every w ∈ G(o), there exists a finite subset T(w) ⊆ G(o) (the children of w) such that if we let
P_x := Shad(x, σ), D_x := Diam(P_x),
then the following hold:
(i) The family (P_x)_{x∈T(w)} consists of pairwise disjoint shadows contained in P_w.
(ii) There exists κ > 0 independent of w such that for all x ∈ T(w),
D(P_x, ∂X \ P_w) ≥ κD_w,
κD_w ≤ D_x ≤ λD_w.
(iii) Σ_{x∈T(w)} D_x^s ≥ D_w^s.
It is not too hard to deduce Proposition 9.1.9 from Lemma 9.2.1. We do it now:
Proof of Proposition 9.1.9 assuming Lemma 9.2.1. Let σ > 0 be large enough so that Lemma 9.2.1 holds. Fix 0 < s < δ̃, and let λ = 1/2. For each w ∈ G(o), let (y_n(w))_{n=1}^{N(w)} be an enumeration of T(w). Define a tree T ⊆ N* and a collection (x_ω)_{ω∈T} inductively as follows:
x_∅ = o,
T(ω) = {1, . . . , N(x_ω)},
x_{ωa} = y_a(x_ω).

Figure 9.2.1. The strategy for the proof of Lemma 9.2.1. To construct a collection of “children” of the point w = g(o), we “pull back” the entire picture via g^{-1}. In the pulled-back picture, the Big Shadows Lemma 4.5.7 guarantees the existence of many points x ∈ G(o) such that Shad_z(x, σ) ⊆ Shad_z(o, σ), where z = g^{-1}(o). (Cf. Lemma 9.2.5 below.) These children can then be pushed forward via g to get children of w.

Then the conclusion of Lemma 9.2.1 precisely implies that (P_ω := P_{x_ω})_{ω∈T} is an s-thick partition structure on (∂X, D).
To complete the proof, we must show that the limit set of the partition structure (P_ω)_{ω∈T} is contained in Λ_{ur,τ} ∩ Λ_{r,σ} for some τ > 0. Indeed, fix ω ∈ T(∞). Then for each n ∈ N, π(ω) ∈ P_{ω_1^n} = Shad(x_{ω_1^n}, σ) and ‖x_{ω_1^n}‖ → ∞. So the sequence (x_{ω_1^n})_1^∞ converges σ-radially to π(ω). On the other hand,
d(x_{ω_1^n}, x_{ω_1^{n+1}}) ≍_{+,σ} B_o(x_{ω_1^{n+1}}, x_{ω_1^n}) (by Lemma 4.5.8)
≍_{+,σ} −log_b(D_{ω_1^{n+1}}/D_{ω_1^n}) (by (4.5.2))
≤ −log_b(κ) ≍_{+,κ} 0. (by (9.1.2))
Thus the sequence (x_{ω_1^n})_1^∞ converges to π(ω) τ-uniformly radially, where τ depends only on σ and κ (which in turn depends on s).
The proof of Lemma 9.2.1 will proceed through a series of lemmas.
Lemma 9.2.2 (Cf. [73, Lemma 5.15]). Fix τ > 0, and let S_τ ⊆ G(o) be a maximal τ-separated subset. Let B ⊆ bord X be an open set which intersects Λ. Then for every 0 < t < δ̃, the series
Σ_t(S_τ ∩ B)
diverges.

Figure 9.2.2. The sets C_n, for n ∈ Z.
Proof. By Proposition 7.4.6, there exists a loxodromic isometry g ∈ G such that g_+ ∈ B. Let ℓ(g) = log_b g′(g_−) = −log_b g′(g_+) > 0, and let the functions r = r_{g_+,g_−,o}, θ = θ_{g_+,g_−,o} be as in Section 4.6. Fix N ∈ N large to be determined, let κ = Nℓ(g), and for each n ∈ Z let
C_n = {x ∈ X : nκ ≤ r(x) < (n + 1)κ}
(cf. Figure 9.2.2). Let
C_{+,0} = ⋃_{n≥0 even} C_n, C_{+,1} = ⋃_{n≥0 odd} C_n,
C_{−,0} = ⋃_{n<0 even} C_n, C_{−,1} = ⋃_{n<0 odd} C_n.
Fix ρ > 0, and let S_ρ ⊆ G(o) be a maximal ρ-separated set. Since Σ_t(S_ρ) = ∞, one of the series Σ_t(S_ρ ∩ C_{+,0}), Σ_t(S_ρ ∩ C_{+,1}), Σ_t(S_ρ ∩ C_{−,0}), and Σ_t(S_ρ ∩ C_{−,1}) must diverge. By way of illustration let us consider the case where
Σ_t(S_ρ ∩ C_{−,0}) = ∞.
Let
A_ρ = ⋃_{n<0 even} g^{2N|n|}(C_n ∩ S_ρ).
Claim 9.2.3. Σ_t(A_ρ) = ∞.
Proof. Fix n = −m < 0 even and x ∈ C_n. Then by (4.6.1),
r(g^{2Nm}(x)) ≍_+ 2Nmℓ(g) + r(x) = 2mκ + r(x) ≍_{+,κ} 2mκ − mκ = mκ,
and thus
|r(g^{2Nm}(x))| ≍_{+,κ} |r(x)|.
On the other hand, by (4.6.2) we have θ(g^{2Nm}(x)) ≍_+ θ(x). Combining with Lemma 4.6.2 gives
d(o, x) ≍_{+,κ} d(o, g^{2Nm}(x)).
Thus
Σ_t(A_ρ) = Σ_{m>0 even} Σ_{x∈C_{−m}∩S_ρ} b^{−t‖g^{2Nm}(x)‖} ≍_{×,κ} Σ_{m>0 even} Σ_{x∈C_{−m}∩S_ρ} b^{−t‖x‖} = Σ_t(C_{−,0} ∩ S_ρ) = ∞. ⊳
Claim 9.2.4. A_ρ is a ρ-separated set.
Proof. Fix y_1, y_2 ∈ A_ρ. Then for some m_1, m_2 > 0 even, we have x_i := g^{−2Nm_i}(y_i) ∈ C_{−m_i} (i = 1, 2). If m_1 = m_2, then we have
d(y_1, y_2) = d(x_1, x_2) ≥ ρ,
since x_1, x_2 ∈ S_ρ and S_ρ is ρ-separated. So suppose m_1 ≠ m_2; without loss of generality we may assume m_1 > m_2. Then by (4.6.1) we have
r(y_1) − r(y_2) ≍_+ 2Nm_1ℓ(g) + r(x_1) − (2Nm_2ℓ(g) + r(x_2))
= 2κ(m_1 − m_2) + r(x_1) − r(x_2)
≥ 2κ(m_1 − m_2) + κ(−m_1) − κ(−m_2 + 1)
= κ(m_1 − m_2 − 1)
≥ κ = Nℓ(g).
By choosing N sufficiently large, we may guarantee that r(y_1) − r(y_2) ≥ ρ, which implies d(y_1, y_2) ≥ ρ. ⊳
For all x ∈ N_ρ(A_ρ), we have r(x) ≳_+ 0. Thus g_− ∉ N_ρ(A_ρ). So by Theorem 6.1.10, we can find n ∈ N such that N_ρ(g^n(A_ρ)) ⊆ B.
Let S_{ρ/2} ⊆ G(o) be a maximal ρ/2-separated set. By Lemma 8.2.6, we have
Σ_t(S_{ρ/2} ∩ B) ≳_× Σ_t(g^n(A_ρ)) ≍_× Σ_t(A_ρ) = ∞.
Since ρ > 0 was arbitrary, this completes the proof.
Lemma 9.2.5 (Cf. [73, Sublemma 5.17], Footnote 1). Let B ⊆ bord X be an open set which intersects Λ. For all σ > 0 sufficiently large, for all 0 < s < δ̃, and for every 0 < λ < 1, there exists a set S_B ⊆ G(o) ∩ B such that for all z ∈ X \ B:
(i) If
P_{z,x} := Shad_z(x, σ),
then the family (P_{z,x})_{x∈S_B} consists of pairwise disjoint shadows contained in P_{z,o} ∩ B.
(ii) There exists κ > 0 independent of z (but depending on s) such that for all x ∈ S_B,
(9.2.1) D_{b,z}(P_{z,x}, ∂X \ P_{z,o}) ≥ κ Diam_z(P_{z,o}),
(9.2.2) κ Diam_z(P_{z,o}) ≤ Diam_z(P_{z,x}) ≤ λ Diam_z(P_{z,o}).
(iii) Σ_{x∈S_B} Diam_z^s(P_{z,x}) ≥ Diam_z^s(P_{z,o}).
Proof. Let B ⊆ bord X be an open set which contains a point η ∈ Λ. Choose
ρ > 0 large enough so that
{x ∈ bord X : hx|ηio ≥ ρ} ⊆ B.
Then fix σ > 0 large to be determined, depending only on ρ. Fix ρ̃ ≥ ρ large to be
determined, depending only on ρ and σ.
Fix 0 < s < δ̃ and z ∈ X \ B. For all x ∈ X we have
0 ≍+,ρ hz|ηio &+ min(hx|ηio , hx|zio ).
Let
B̃ = {x ∈ X : ⟨x|η⟩_o ≥ ρ̃}.
If ρ̃ is chosen large enough, then we have
(9.2.3) ⟨x|z⟩_o ≍_{+,ρ} 0
for all x ∈ B̃. We emphasize that the implied constants of these asymptotics are
independent of both z and s.
For each n ∈ N let
An := B(o, n) \ B(o, n − 1)
146
9. GENERALIZATION OF THE BISHOP–JONES THEOREM
be the nth annulus centered at o. We shall need the following variant of the Intersecting Shadows Lemma:
Claim 9.2.6. There exists τ > 0 depending on ρ and σ such that for all n ∈ N
and for all x, y ∈ A_n ∩ B̃, if
P_{z,x} ∩ P_{z,y} ≠ ∅,
then
d(x, y) < τ.
Proof. Without loss of generality suppose d(z, y) ≥ d(z, x). Then by the
Intersecting Shadows Lemma 4.5.4 we have
d(x, y) ≍_{+,σ} B_z(y, x) = B_o(y, x) + 2⟨x|z⟩_o − 2⟨y|z⟩_o.
Now |B_o(y, x)| ≤ 1 since x, y ∈ A_n. On the other hand, since x, y ∈ B̃, we have
⟨x|z⟩_o ≍_{+,ρ} ⟨y|z⟩_o ≍_{+,ρ} 0.
Combining gives
d(x, y) ≍+,ρ,σ 0,
and letting τ be the implied constant finishes the proof.
⊳
Fix M > 0 large to be determined, depending on ρ and τ (and thus implicitly
on σ). Let S_τ ⊆ G(o) be a maximal τ-separated set. Fix t ∈ (s, δ̃); then by Lemma
9.2.2 we have
∞ = Σ_t(S_τ ∩ B̃) = Σ_{n=1}^∞ Σ_t(S_τ ∩ B̃ ∩ A_n)
= Σ_{n=1}^∞ Σ_{x∈S_τ∩B̃∩A_n} b^{−t‖x‖}
≍_× Σ_{n=1}^∞ b^{−(t−s)n} Σ_{x∈S_τ∩B̃∩A_n} b^{−s‖x‖}.
It follows that there exist arbitrarily large numbers n ∈ N such that
(9.2.4) Σ_{x∈S_τ∩B̃∩A_n} b^{−s‖x‖} ≥ M.
Fix such an n, also to be determined, depending on λ, ρ, ρ̃, and M (and thus
implicitly on τ and σ), and let S_B = S_τ ∩ B̃ ∩ A_n. To complete the proof, we
demonstrate (i)-(iii).
Proof of (i). In order to see that the shadows (P_{z,x})_{x∈S_B} are pairwise disjoint, suppose that x, y ∈ S_B are such that P_{z,x} ∩ P_{z,y} ≠ ∅. By Claim 9.2.6 we
have d(x, y) < τ. Since S_B is τ-separated, this implies x = y.
Fix x ∈ SB . Using (9.2.3) and the fact that x ∈ An , we have
ho|zix ≍+ kxk − hx|zio ≍+,ρ kxk ≍+ n.
Thus for all ξ ∈ Pz,x ,
0 ≍+,σ hz|ξix &+ min(ho|zix , ho|ξix ) ≍+ min(n, ho|ξix );
taking n sufficiently large (depending on σ), this gives
ho|ξix ≍+,σ 0,
from which it follows that
hx|ξio ≍+ d(o, x) − ho|ξix ≍+,σ n.
Therefore, since x ∈ B̃, we get
⟨ξ|η⟩_o ≳_+ min(⟨x|ξ⟩_o, ⟨x|η⟩_o) ≳_{+,σ} min(n, ρ̃).
Thus ξ ∈ B as long as ρ̃ and n are large enough (depending on σ). Thus P_{z,x} ⊆ B.
Finally, note that we do not need to prove that Pz,x ⊆ Pz,o , since it is implied
by (9.2.1) which we prove below.
⊳
Proof of (ii). Take any x ∈ SB . Then by (9.2.3), we have
(9.2.5)
d(x, z) − kzk = kxk − 2hx|zio ≍+,ρ kxk ≍+ n.
Combining with the Diameter of Shadows Lemma 4.5.8 gives
(9.2.6) Diam_z(P_{z,x}) / Diam_z(P_{z,o}) ≍_{×,σ} b^{−d(z,x)} / b^{−d(z,o)} ≍_{×,ρ} b^{−n}.
Thus by choosing n sufficiently large depending on σ, λ, and ρ (and satisfying
(9.2.4)), we guarantee that the second inequality of (9.2.2) holds. On the other
hand, once n is chosen, (9.2.6) guarantees that if we choose κ sufficiently small,
then the first inequality of (9.2.2) holds.
In order to prove (9.2.1), let ξ ∈ P_{z,x} and let γ ∈ ∂X \ P_{z,o}. We have
⟨x|ξ⟩_z ≍_+ d(x, z) − ⟨z|ξ⟩_x ≥ d(x, z) − σ,
⟨o|γ⟩_z ≍_+ ‖z‖ − ⟨z|γ⟩_o ≤ ‖z‖ − σ.
Also, by (9.2.3) we have
ho|xiz ≍+ kzk − hx|zio ≍+,ρ kzk.
Applying Gromov’s inequality twice and then applying (9.2.5) gives
kzk − σ &+ ho|γiz &+ min (ho|xiz , hx|ξiz , hξ|γiz )
&+,ρ min (kzk, d(x, z) − σ, hξ|γiz )
≍+ min (kzk, kzk + n − σ, hξ|γiz ) .
By choosing n and σ sufficiently large (depending on ρ), we can guarantee that neither of the first two expressions can represent the minimum without contradicting
the inequality. Thus
kzk − σ &+,ρ hξ|γiz ;
exponentiating and the Diameter of Shadows Lemma 4.5.8 give
Db,z (ξ, γ) &×,ρ b−(kzk−σ) ≍×,σ b−kzk ≍×,σ Diamz (Pz,o ).
Thus we may choose κ small enough, depending on ρ and σ, so that (9.2.1) holds.
⊳
Proof of (iii). We compute:
Σ_{x∈S_B} Diam_z^s(P_{z,x}) ≍_× Σ_{x∈S_B} b^{−s d(z,x)}   (by the Diameter of Shadows Lemma)
≍_{×,ρ} b^{−s d(z,o)} Σ_{x∈S_B} b^{−s‖x‖}   (by (9.2.5))
≥ M b^{−s d(z,o)}   (by (9.2.4))
≍_× M Diam_z^s(P_{z,o}).   (by the Diameter of Shadows Lemma)
Letting M be larger than the implied constant yields the result.
⊳
We may now complete the proof of Lemma 9.2.1:
Proof of Lemma 9.2.1. Let η1, η2 ∈ Λ be distinct points, and let B1 and B2
be disjoint neighborhoods of η1 and η2, respectively. Let σ > 0 be large enough so
that Lemma 9.2.5 holds for both B1 and B2. Fix 0 < s < δ̃, and let S1 ⊆ G(o) ∩ B1
and S2 ⊆ G(o) ∩ B2 be the sets guaranteed by Lemma 9.2.5. Now suppose that
w = g_w(o) ∈ G(o). Let z = g_w^{−1}(o). Then either z ∉ B1 or z ∉ B2; say z ∉ B_i. Let
T(w) = g_w(S_i); then (i)-(iii) of Lemma 9.2.5 exactly guarantee (i)-(iii) of Lemma
9.2.1.
9.3. Sufficient conditions for Poincaré regularity
We end this chapter by relating the modified Poincaré exponent to the classical
Poincaré exponent under certain additional assumptions, thus completing the proof
of Theorem 1.2.1.
Proposition 9.3.1. Let G ≤ Isom(X) be nonelementary, and assume either that
(1) X is regularly geodesic and G is moderately discrete,
(2) X is an algebraic hyperbolic space and G is weakly discrete, or that
(3) X is an algebraic hyperbolic space and G is COT-discrete and acts irreducibly.
Then G is Poincaré regular.
Remark 9.3.2. Example 13.4.2 shows that Proposition 9.3.1 cannot be improved by replacing “COT” with “UOT”, Example 13.4.9 shows that Proposition
9.3.1 cannot be improved by removing the assumption that G acts irreducibly, Example 13.4.1 shows that Proposition 9.3.1 cannot be improved by removing the
hypothesis that X is an algebraic hyperbolic space from (2), and Example 13.4.4
shows that Proposition 9.3.1 cannot be improved by removing the assumption that
X is regularly geodesic.
We begin with the following observation:
Observation 9.3.3. If (3) implies that G is Poincaré regular, then (2) does as
well.
Proof. Suppose (2) holds, and let S be the smallest totally geodesic subset of
bord X which contains Λ (cf. Lemma 2.4.5). Since G is nonelementary, V := S ∩ X
is nonempty; it is clear that V is G-invariant. By Observation 5.2.14, the action G ↿
V is weakly discrete. By Proposition 5.2.7, G ↿ V is COT-discrete. Furthermore, G
acts irreducibly on V because of the way V was defined (cf. Proposition 7.6.3). Thus
(3) holds for the action G ↿ V, which by our hypothesis implies ∆_G = ∆̃_G (since the
Poincaré set and modified Poincaré set are clearly stable under restrictions).
We now proceed to prove that (1) and (3) each imply that G is Poincaré regular.
By contradiction, let us suppose that G is Poincaré irregular. By Proposition
8.2.4(ii), we have that G is not strongly discrete and thus
δ̃_G < δ_G = ∞.
This gives us two contrasting behaviors: On one hand, by Proposition 8.2.4(iii),
there exist ρ > 0 and a maximal ρ-separated set S_ρ ⊆ G(o) so that S_ρ does not
contain a bounded infinite set. On the other hand, since G is not strongly discrete,
there exists σ > 0 such that #(Gσ ) = ∞, where
Gσ := {g ∈ G : g(o) ∈ B(o, σ)}.
Claim 9.3.4. For every ξ ∈ Λ, the orbit Gσ (ξ) is precompact.
Proof. Suppose not. Then the closure of G_σ(ξ) is complete (with respect to the
metric D) but not compact. It follows that this closure, and thus also G_σ(ξ) itself, is not
totally bounded. So there exists ε > 0 and an infinite ε-separated subset (g_n(ξ))_1^∞.
Fix L large to be determined. Since ξ ∈ Λ, we can find x ∈ G(o) such that
hx|ξio ≥ L.
Subclaim 9.3.5. By choosing L large enough we can ensure
d(g_m(x), g_n(x)) ≥ 2ρ for all distinct m, n ∈ N.
Proof. By (d) of Proposition 3.3.3,
hgn (x)|gn (ξ)io ≍+,σ hgn (x)|gn (ξ)ign (o) = hx|ξio ≥ L,
and thus
D(gn (x), gn (ξ)) .×,σ b−L .
If L is large enough, then this implies
D(gn (x), gn (ξ)) ≤ ε/3.
Since by construction the sequence (g_n(ξ))_1^∞ is ε-separated, we also have
D(gm (ξ), gn (ξ)) ≥ ε
and then the triangle inequality gives
D(gm (x), gn (x)) ≥ ε/3,
or, taking logarithms,
hgm (x)|gn (x)io .+ − logb (ε/3).
Now we also have
kgn (x)k ≍+,σ kxk ≥ hx|ξio ≥ L
and therefore
d(g_m(x), g_n(x)) = ‖g_m(x)‖ + ‖g_n(x)‖ − 2⟨g_m(x)|g_n(x)⟩_o ≳_{+,σ} 2L − 2(−log_b(ε/3)).
Thus by choosing L sufficiently large, we ensure that d(gm (x), gn (x)) ≥ 2ρ.
⊳
Recall that Sρ is a maximal ρ-separated set. Thus for each n ∈ N, we can find
yn ∈ Sρ with d(gn (x), yn ) < ρ. Then the subclaim implies yn 6= ym for n 6= m. But
on the other hand
kyn k ≤ kxk + σ + ρ ∀n ∈ N,
which implies that Sρ contains a bounded infinite set, a contradiction.
We now proceed to disprove the hypotheses (1) and (3) of Proposition 9.3.1.
Thus if either of these hypotheses are assumed, we have a contradiction which
finishes the proof.
Proof that (1) cannot hold. Since G is assumed to be nonelementary, we
can find distinct points ξ1, ξ2 ∈ Λ. By Claim 9.3.4, there exist a sequence (g_n)_1^∞ in
G_σ and points η1, η2 ∈ Λ such that
g_n(ξ_i) → η_i as n → ∞.
Next, choose a point x ∈ [ξ1 , ξ2 ]. For each n ∈ N, we have
gn (x) ∈ [gn (ξ1 ), gn (ξ2 )].
Thus since X is regularly geodesic there exist a sequence (n_k)_1^∞ and a point z ∈
[η1, η2] such that
g_{n_k}(x) → z as k → ∞.
Since g_n ∈ G_σ for all n, the sequence (g_n(x))_1^∞ is bounded and thus z ∈ X. By
contradiction, suppose that G is moderately discrete, and fix ε > 0 satisfying
(5.2.2). For all m, n ∈ N large enough so that g_m(x), g_n(x) ∈ B(z, ε/2), we have
d(x, g_m^{−1}g_n(x)) = d(g_m(x), g_n(x)) ≤ ε. Thus for some N ∈ N, we have
#{g_m^{−1}g_n : m, n ≥ N} < ∞.
This is clearly a contradiction.
Proof that (3) cannot hold. Now we assume that X is an algebraic hyperbolic space, say X = H = H^α_F, and that G acts irreducibly on X. Using the
identification
Isom(H) ≡ O*(L; Q)/∼
(Theorem 2.3.3), for each g ∈ G_σ let T_g ∈ O*(L; Q) be a representative of g. Recall
(Lemma 2.4.11) that
‖T_g‖ = ‖T_g^{−1}‖ = e^{‖g‖},
so since g ∈ G_σ we have ‖T_g‖ = ‖T_g^{−1}‖ ≤ b^σ. In particular, the family (T_g)_{g∈G_σ}
acts equicontinuously on L.
For simplicity of exposition, in the following proof we will assume that X
is separable. (In the non-separable case, the reader should use nets instead of
sequences.) It follows that Λ ⊆ ∂X is also separable; let (ξ_k = [x_k])_1^∞ be a dense
sequence, with x_k ∈ L, ‖x_k‖ = 1.
Claim 9.3.6. There exists a sequence of distinct elements (g_n)_1^∞ in G_σ such
that the following hold:
T_{g_n}(x_k) →_n y_k^{(+)} ∈ L \ {0},
T_{g_n}^{−1}(x_k) →_n y_k^{(−)} ∈ L \ {0},
σ(T_{g_n}) →_n σ ∈ Aut(F).
Proof. For each k ∈ N let
K_k = {y ∈ L \ {0} : [y] ∈ G_σ(ξ_k) and b^{−σ} ≤ ‖y‖ ≤ b^σ},
and let
K := (∏_{k∈N} K_k)^2 × Aut(F).
Then by Claim 9.3.4 (and general topology), K is a compact metrizable space, and
is in particular sequentially compact. Now for each g ∈ G_σ,
b^{−σ} ≤ ‖T_g(x_k)‖ ≤ b^σ and b^{−σ} ≤ ‖T_g^{−1}(x_k)‖ ≤ b^σ,
and thus
φ_g := ((T_g(x_k))_1^∞, (T_g^{−1}(x_k))_1^∞, σ(T_g)) ∈ K,
and so since #(G_σ) = ∞, there exists a sequence of distinct elements (g_n)_1^∞ in G_σ
so that the sequence (φ_{g_n})_1^∞ converges to a point
((y_k^{(+)})_1^∞, (y_k^{(−)})_1^∞, σ) ∈ K.
Writing out what this means yields the claim.
⊳
Let T_n = T_{g_n} and σ_n = σ(T_n) → σ. We claim that the sequence (T_n)_1^∞ is
convergent in the strong operator topology. Let
K = {a ∈ F : σ(a) = a},
V = {x ∈ L : the sequence (T_n(x))_1^∞ converges}.
Then K is an R-subalgebra of F, and V is a K-module. Given x, y ∈ V, by Observation 2.3.6 we have
σ_n(B_Q(x, y)) = B_Q(T_n x, T_n y) →_n B_Q(x, y),
so B_Q(x, y) ∈ K. Thus V satisfies (2.4.1). On the other hand, since the family
(T_n)_1^∞ acts equicontinuously on L, the set V is closed. Thus [V] ∩ bord X is totally
geodesic. But by construction, ξ_k ∈ [V] for all k, and so
Λ ⊆ [V].
Therefore, since by hypothesis G acts irreducibly, it follows that [V] = X, i.e.
V = L (Proposition 7.6.3). So for every x ∈ L, the sequence (T_n(x))_1^∞ converges.
Thus
T_n → T^{(+)} ∈ L(L)
in the strong operator topology. (The boundedness of the operator T^{(+)} follows
from the uniform boundedness of the operators (T_n)_1^∞.) We do not yet know that
T^{(+)} is invertible. But a similar argument yields that
T_n^{−1} → T^{(−)} ∈ L(L),
and since the sequences (T_n)_1^∞ and (T_n^{−1})_1^∞ are equicontinuous, we have
T^{(+)} T^{(−)} = lim_{n→∞} T_n T_n^{−1} = I,
and similarly T^{(−)} T^{(+)} = I. Thus T^{(+)} and T^{(−)} are inverses of each other and in
particular
T^{(+)} ∈ O*(L; Q).
Let h = [T^{(+)}] ∈ Isom(X). By Proposition 5.1.2, we have g_n → h in the compact-open topology. Thus, Lemma 5.2.8 completes the proof.
Part 3
Examples
This part will be divided as follows: In Chapter 10 we consider semigroups of
isometries which can be written as the “Schottky product” of two subsemigroups.
Next, we analyze in detail the class of parabolic groups of isometries in Chapter
11. In Chapter 12, we define a subclass of the class of groups of isometries which
we call geometrically finite, generalizing known results from the Standard Case.
In Chapter 13, we provide a list of examples whose main importance is that they
are counterexamples to certain implications; however, these examples are often
geometrically interesting in their own right. Finally, in Chapter 14, we consider
methods of constructing R-trees which admit natural group actions, including what
we call the “stapling method”.
CHAPTER 10
Schottky products
An important tool for constructing examples of discrete subgroups of Isom(X)
is the technique of Schottky products. Schottky groups are a special case of Schottky
products; cf. Definition 10.2.4. In this chapter we explain the basics of Schottky
products on hyperbolic metric spaces, and give several important examples. We
intend to give a more comprehensive account of Schottky products in [57], where
we will study their relation to pseudo-Markov systems (defined in [159]).
Remark 10.0.7. Throughout this chapter, E denotes an index set with at least
two elements. There are no other restrictions on the cardinality of E; in particular,
E may be infinite.
10.1. Free products
We provide a brief review of the theory of free products, mainly to fix notation.
Let (Γ_a)_{a∈E} be a collection of nontrivial abstract semigroups. Let
Γ_E = ∐_{a∈E} (Γ_a \ {e}) = ⋃_{a∈E} {a} × (Γ_a \ {e}).
Let (Γ_E)^* denote the set of finite words with letters in Γ_E, including the empty
word, which we denote by ∅. The free product of (Γ_a)_{a∈E}, denoted ∗_{a∈E} Γ_a, is the
set
{g = (a_1, γ_1) ⋯ (a_n, γ_n) ∈ (Γ_E)^* : a_i ≠ a_{i+1} ∀i = 1, …, n − 1, n ≥ 0},
together with the operation of multiplication defined as follows: To multiply two
words g, h ∈ ∗_{a∈E} Γ_a, begin by concatenating them. The concatenation may no
longer satisfy a_i ≠ a_{i+1} for all i; namely, this condition may fail at the point where
the two words are joined. Reduce the concatenated word g ∗ h using the rule
(a, γ_1)(a, γ_2) = (a, γ_1 γ_2) if γ_1 γ_2 ≠ e, and (a, γ_1)(a, γ_2) = ∅ (i.e. the two letters cancel) if γ_1 γ_2 = e.
The word may require multiple reductions in order to satisfy a_i ≠ a_{i+1}. The
reduced form of g ∗ h will be denoted gh.
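Because the reduction procedure above is purely mechanical, it can be captured in a short program. The sketch below is only an illustration (not from the text): it models every factor Γ_a as the group (Z, +), so the identity e is 0, and represents a word as a list of letters (a, γ).

```python
def reduce_word(word):
    """Repeatedly merge adjacent letters (a, g1)(a, g2) -> (a, g1+g2),
    dropping the merged letter entirely when g1+g2 is the identity 0."""
    stack = []
    for a, g in word:
        stack.append((a, g))
        # Keep reducing at the junction while the last two letters
        # come from the same factor.
        while len(stack) >= 2 and stack[-1][0] == stack[-2][0]:
            (a1, g1), (_, g2) = stack[-2], stack[-1]
            stack[-2:] = [] if g1 + g2 == 0 else [(a1, g1 + g2)]
    return stack

def multiply(g, h):
    """The product gh in the free product: concatenate, then reduce."""
    return reduce_word(g + h)
```

For example, multiplying (1, 2)(2, 3) by (2, −3)(1, 5) first cancels the middle letters and then merges the remaining two into the single letter (1, 7); multiplying a one-letter word by its inverse yields the empty word ∅ (here, the empty list).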
One verifies that the operation of multiplication defined above is associative, so
that the free product ∗a∈E Γa is a semigroup. If (Γa )a∈E are groups, then ∗a∈E Γa
is a group. The inclusion maps ιa : Γa → ∗a∈E Γa defined by ιa (γ) = (a, γ) are
homomorphisms, and ∗a∈E Γa = hιa (Γa )ia∈E .
An important fact about free products is their universal property: Given any
semigroup Γ and any collection of homomorphisms (πa : Γa → Γ), there exists a
unique homomorphism π : ∗a∈E Γa → Γ such that πa = π ◦ιa for all a. For example,
if (Γa )a∈E are subsemigroups of Γ and (πa )a∈E are the identity inclusions, then
π((a1 , γ1 ) · · · (an , γn )) = γ1 · · · γn . We will call the map π the natural map from
∗a∈E Γa to Γ.
Remark 10.1.1. We will use the notation Γ_1 ∗ ⋯ ∗ Γ_n to denote ∗_{a∈{1,…,n}} Γ_a.
The semigroups
F_n(Z) = Z ∗ ⋯ ∗ Z (n times) and F_n(N) = N ∗ ⋯ ∗ N (n times)
are called the free group on n elements and the free semigroup on n elements,
respectively.
10.2. Schottky products
Given a collection of semigroups G_a ≤ Isom(X), we can ask whether the semigroup ⟨G_a⟩_{a∈E} ≤ Isom(X) is isomorphic to the free product ∗_{a∈E} G_a. A sufficient
condition for this is that the groups (G_a)_{a∈E} are in Schottky position.
Definition 10.2.1. A collection of nontrivial semigroups (G_a ≤ Isom(X))_{a∈E}
is in Schottky position if there exist disjoint open sets U_a ⊆ bord X satisfying:
(I) For all a, b ∈ E distinct and g ∈ G_a \ {id}, g(U_b) ⊆ U_a.
(II) There exists o ∈ X \ ⋃_{a∈E} U_a satisfying
(10.2.1) g(o) ∈ U_a for all a ∈ E and all g ∈ G_a \ {id}.
Such a collection (Ua )a∈E is called a Schottky system for (Ga )a∈E . If the collection
(Ga )a∈E is in Schottky position, then we will call the semigroup G = hGa ia∈E the
Schottky product of (Ga )a∈E .
A Schottky system will be called global if for all a ∈ E and g ∈ Ga \ {id},
(10.2.2)
g(bord X \ Ua ) ⊆ Ua .
Remark. In most references (e.g. [56, §5]), (10.2.2) or a similar hypothesis
is taken as the definition of Schottky position. So what these references call a
“Schottky group”, we would call a “global Schottky group”. There are important
examples of Schottky semigroups which are not global; see e.g. (B) of Proposition
10.5.4. It should be noted that such examples tend to be semigroups rather than
groups, which explains why references which consider only groups can afford to
include globalness in their definition of Schottky position.
Remark. The above definition may be slightly confusing to someone familiar
with classical Schottky groups, since in that context the sets Ua in the above definition are not half-spaces but rather unions of pairs of half-spaces; cf. Definition
10.2.4.
The basic properties of Schottky products are summarized in the following
lemma:
Lemma 10.2.2. Let G = hGa ia∈E be a Schottky product. Then:
(i) (Ping-Pong Lemma) The natural map π : ∗a∈E Ga → G is an injection
(and therefore an isomorphism).
(ii) Fix g = (a_1, g_1)(a_2, g_2) ⋯ (a_n, g_n) ∈ ∗_{a∈E} G_a, and let g = π(g). Then
(10.2.3) g(o) ∈ U_{a_1}.
Moreover, for all b ≠ a_n,
(10.2.4) g(U_b) ⊆ U_{a_1},
and if the system (U_a)_{a∈E} is global,
(10.2.5) g(bord X \ U_{a_n}) ⊆ U_{a_1}.
(iii) If G is a group, then G is COT-discrete.
Proof. (10.2.3)-(10.2.5) may be proven by an easy induction argument. Now
(10.2.3) immediately demonstrates (i), since it implies that π(g) ≠ id. (iii) also
follows from (10.2.3), since it shows that ‖g‖ is bounded from below for all g ∈
G \ {id}.
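The injectivity statement in Lemma 10.2.2(i) can be checked numerically in a classical special case. The following sketch is only a sanity check, not part of the text: Sanov's parabolic matrices A = [[1,2],[0,1]] and B = [[1,0],[2,1]] are well known to be in Schottky position on the hyperbolic plane, so the group they generate is free, and no nonempty reduced word in A, B and their inverses evaluates to the identity. We verify this with exact integer arithmetic for all reduced words of length at most 5.

```python
import itertools

def mat_mul(X, Y):
    """Multiply two 2x2 integer matrices."""
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

I = [[1, 0], [0, 1]]
letters = {
    "a": [[1, 2], [0, 1]],   # A
    "A": [[1, -2], [0, 1]],  # A^{-1}
    "b": [[1, 0], [2, 1]],   # B
    "B": [[1, 0], [-2, 1]],  # B^{-1}
}
inverse_of = {"a": "A", "A": "a", "b": "B", "B": "b"}

def is_reduced(word):
    # A word is reduced if no letter is immediately followed by its inverse.
    return all(inverse_of[x] != y for x, y in zip(word, word[1:]))

def evaluate(word):
    M = I
    for c in word:
        M = mat_mul(M, letters[c])
    return M

nontrivial = 0
for n in range(1, 6):
    for word in itertools.product("aAbB", repeat=n):
        if is_reduced(word):
            assert evaluate(word) != I  # ping-pong: the word is nontrivial
            nontrivial += 1
print(nontrivial)  # 4 + 12 + 36 + 108 + 324 = 484 reduced words checked
```

Of course, a finite check is no proof; the point is that the Ping-Pong Lemma explains *why* every such word must move the basepoint and hence differ from the identity.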
Remark 10.2.3. Lemma 10.2.2(i) says that Schottky products are (isomorphic
to) free products. However, we warn the reader that the converse is not necessarily
true; cf. Lemma 13.4.6.
Two important classes of Schottky products are Schottky groups and Schottky
semigroups.
Definition 10.2.4. A Schottky group is the Schottky product of cyclic groups
G_a = (g_a)^Z with the following property: For each a ∈ E, U_a may be written as the
disjoint union of two sets U_a^+ and U_a^− satisfying
g_a(bord X \ U_a^−) ⊆ U_a^+.
A Schottky semigroup is simply the Schottky product of cyclic semigroups; no
additional hypotheses are needed.
Remark 10.2.5. In the classical theory of Schottky groups, the sets Ua± are
required to be half-spaces. A half-space in bord Hα is a connected component of the
complement of a totally geodesic subset of bord Hα of codimension one. Requiring
the sets Ua± to be half-spaces has interesting effects on the geometry of Schottky
groups.
Although the notion of half-spaces cannot be generalized to hyperbolic metric
spaces in general or even to nonreal algebraic hyperbolic spaces (since a totally
geodesic subspace of an algebraic hyperbolic space over F = C or Q always has real
codimension at least 2, so deleting it yields a connected set), it at least makes sense
over real hyperbolic spaces and in the context of R-trees. A half-space in an R-tree
X is a connected component of the complement of a point in X.
We hope to study the effect of requiring the sets Ua± to be half-spaces, both
in the case of real (but infinite-dimensional) algebraic hyperbolic spaces and in the
case of R-trees, in more detail in [57].
10.3. Strongly separated Schottky products
Many questions about Schottky products cannot be answered without some
additional information. For example, one can ask whether or not the Schottky
product of strongly (resp. moderately, weakly) discrete groups is strongly (resp.
moderately, weakly) discrete. One can also ask about the relation between the
Poincaré exponent of a Schottky group and the Poincaré exponent of its factors.
For the purposes of this monograph, we will be interested in Schottky products
which satisfy the following condition:
Definition 10.3.1. A Schottky product G = hGa ia∈E is said to be strongly
separated (with respect to a Schottky system (Ua )a∈E ) if there exists ε > 0 such
that for all a, b ∈ E distinct and g ∈ Ga \ {id},
(10.3.1) D(U_a ∪ g^{−1}(bord X \ U_a), U_b) ≥ ε.
Here D is as in Proposition 3.6.13. Abusing terminology, we will also call the
semigroup G and the Schottky system (Ua )a∈E strongly separated.
The product G = hGa ia∈E is weakly separated if (10.3.1) holds for a constant
ε > 0 which depends on a and b (but not on g).
Remark 10.3.2. There are many important examples of Schottky products
which are not strongly separated, and we hope to analyze these in more detail in
[57]. Some examples of Schottky products that do satisfy the condition are given
in Section 10.5.
Standing Assumptions 10.3.3. For the remainder of this chapter,
G = hGa ia∈E
denotes a strongly separated Schottky product and (Ua )a∈E denotes the corresponding Schottky system. Moreover, from now on we assume that the hyperbolic
metric space X is geodesic.
Notation 10.3.4. Let Γ denote the free product Γ = ∗a∈E Ga , and let π : Γ →
G denote the natural isomorphism. Whenever we have specified an element g ∈ Γ,
we denote its length by |g| and we write g = (a1 , g1 ) · · · (a|g| , g|g|). For h ∈ Γ, we
write h = (b1 , h1 ) · · · (b|h| , h|h| ).
Let o ∈ X satisfy (10.2.1). Let ε ≤ d(o, ⋃_a U_a) satisfy (10.3.1), and for each a ∈
E let V_a denote the closed ε/4-thickening of U_a with respect to the metric D. Then
the sets (Int(V_a))_{a∈E} are also a Schottky system for (G_a)_{a∈E}; they are strongly
separated with ε replaced by ε/2; moreover,
(10.3.2) D(U_a, bord X \ V_a) ≥ ε/2 for all a ∈ E.
Finally, let
X_a = bord X \ Int(V_a) if (U_a)_{a∈E} is global, and X_a = {o} ∪ ⋃_{b≠a} V_b otherwise,
so that
(10.3.3) g(X_a) ⊆ U_a for all a ∈ E and g ∈ G_a \ {id}.
Note that since the sets (Va )a∈E are ε/2-separated, they have no accumulation
points and thus Xa is closed for all a ∈ E.
The strong separation condition will allow us to relate the discreteness of the
groups Ga to the discreteness of their Schottky product G. It will also allow us
to relate the Poincaré exponents of Ga with the Poincaré exponent of G. The
underlying fact which will allow us to prove both of these relations is the following
lemma:
Lemma 10.3.5. There exist constants C, ε > 0 such that for all g ∈ Γ,
(10.3.4) Σ_{i=1}^{|g|} (‖g_i‖ − C) ∨ ε ≤ d(X \ V_{a_1}, π(g)(X_{a_{|g|}})) ≤ Σ_{i=1}^{|g|} ‖g_i‖.
In particular
(10.3.5) Σ_{i=1}^{|g|} (‖g_i‖ − C) ∨ ε ≤ ‖π(g)‖ ≤ Σ_{i=1}^{|g|} ‖g_i‖,
and thus
(10.3.6) ‖π(g)‖ ≍_× Σ_{i=1}^{|g|} 1 ∨ ‖g_i‖.
Figure 10.3.1. The geodesic segment [o, π(g)(o)] splits up naturally into four subsegments, which can then be rearranged by the
isometry group to form geodesic segments which connect o with
g_1(X_a), V_a with g_2(X_b), V_b with g_3(X_c), and V_c with o, respectively. Here g = (a, g_1)(b, g_2)(c, g_3).
Proof. The second inequality of (10.3.4) is immediate from the triangle inequality. For the first inequality, fix g ∈ Γ, x ∈ X \ V_{a_1}, and y ∈ X_{a_{|g|}}. Write
n = |g|. We have
π(g)(y) ∈ g_1 ⋯ g_n(X_{a_n}) ⊆ g_1 ⋯ g_{n−1}(V_{a_n}) ⊆ g_1 ⋯ g_{n−1}(X_{a_{n−1}})
⊆ ⋯ ⊆ g_1(V_{a_2}) ⊆ g_1(X_{a_1}) ⊆ V_{a_1} ∌ x.
Consequently, the geodesic [x, π(g)(y)] intersects the sets
∂Va1 , g1 (∂Xa1 ), g1 (∂Va2 ), . . . , g1 · · · gn−1 (∂Van ), g1 · · · gn (∂Xan )
in their respective orders. Thus
(10.3.7) d(x, π(g)(y)) ≥ Σ_{i=1}^{n} d(g_1 ⋯ g_{i−1}(∂V_{a_i}), g_1 ⋯ g_i(∂X_{a_i}))
= Σ_{i=1}^{n} d(∂V_{a_i}, g_i(∂X_{a_i}))
≥ Σ_{i=1}^{n} d(X \ V_{a_i}, g_i(X_{a_i})).
(Cf. Figure 10.3.1.) Now fix i = 1, . . . , n, and we will estimate the distance d(X \
Vai , gi (Xai )). For convenience of notation write a = ai and g = gi .
Fix z ∈ X \ Va and w ∈ g(Xa ). Combining (10.3.2) and (10.3.3) gives
D(z, w) ≥ ε/2
and in particular
d(z, w) ≥ ε/2.
(10.3.8)
On the other hand, converting the inequality D(z, w) ≥ ε/2 into a statement about
Gromov products shows that
d(z, w) ≍+,ε d(o, z) + d(o, w) ≥ d(o, w) ≥ d(g −1 (o), Xa ).
Since D(g −1 (o), Xa ) ≥ D(g −1 (bord X \ Va ), Xa ) ≥ ε/2, we have
d(g −1 (o), Xa ) &+,ε d(g −1 (o), o) = kgk.
Combining with (10.3.8) gives
d(z, w) ≥ (kgk − C) ∨ (ε/2)
for some C > 0 depending only on ε. Taking the infimum over all z, w gives
d(X \ Vai , gi (Xai )) ≥ (kgi k − C) ∨ (ε/2).
Summing over all i = 1, . . . , n and combining with (10.3.7) yields (10.3.4). Since o ∈
Xa|g| and o ∈ X \ Va1 , (10.3.5) follows immediately. Finally, the coarse asymptotic
(‖g_i‖ − C) ∨ ε ≍_{×,C,ε} 1 ∨ ‖g_i‖
implies (10.3.6).
Corollary 10.3.6. Suppose that #{a ∈ E : d(o, Ua ) ≤ ρ} < ∞ for all ρ > 0.
If the groups (Ga )a∈E are strongly discrete, then G is strongly discrete.
In fact, this corollary holds even if G is only weakly separated and not strongly
separated.
Proof. Since ‖g‖ ≥ d(o, U_a) for all a ∈ E and g ∈ G_a, our hypothesis implies
that
#{(a, g) ∈ Γ_E : ‖g‖ ≤ ρ} < ∞ for all ρ.
It follows that for all N ∈ N,
(10.3.9) #{g ∈ Γ : Σ_{i=1}^{|g|} 1 ∨ ‖g_i‖ ≤ N} ≤ Σ_{n=0}^{N} #{g ∈ (Γ_E)^n : ‖g_i‖ ≤ N ∀i = 1, …, n}
≤ Σ_{n=0}^{N} (#{(a, g) ∈ Γ_E : ‖g‖ ≤ N})^n < ∞.
Applying (10.3.6) completes the proof. If G is only weakly separated, then for
all ρ > 0 the Schottky product ⟨G_a⟩_{d(o,U_a)≤ρ} is still strongly separated, which is
enough to apply (10.3.6) in this context.
Proposition 10.3.7.
(i) If #(E) < ∞ and the groups G_a satisfy δ_{G_a} < ∞, then δ_G < ∞.
(ii) Suppose that for some a ∈ E, G_a is of divergence type. Then δ_G > δ_{G_a}.
(iii) Suppose that G is a group. If δ_{G_a} = ∞ for some a, and if G_b is infinite
for some b ≠ a, then δ̃_G = ∞.
(iv) If E = {a, b} and G_b = g^Z, then
lim_{n→∞} δ(G_a ∗ g^{nZ}) = δ(G_a).
Moreover, if G_a is of convergence type, then for all n sufficiently large,
G_a ∗ g^{nZ} is of convergence type.
Moreover, (ii) holds for any free product G = ⟨G_a⟩_{a∈E}, even if the product is not
Schottky.
Remark 10.3.8. Property (iii) tells us that an analogue of property (i) cannot
hold for the modified Poincaré exponent: if we take the Schottky product of two
groups G_1, G_2 with δ̃(G_i) < ∞ but δ(G_i) = ∞, then the product G will have
δ̃(G) = ∞.
Proof of (i). (10.3.9) shows that for some C > 0,
#{g ∈ G : ‖g‖ ≤ ρ} ≤ (#{(a, g) ∈ Γ_E : ‖g‖ ≤ Cρ})^{Cρ} for all ρ > 0.
Applying (8.1.2) completes the proof.
Proof of (ii). For all s ≥ 0,
Σ_s(G) = Σ_{g∈Γ} b^{−s‖π(g)‖} ≥ Σ_{g∈Γ} b^{−s Σ_{i=1}^{|g|} ‖g_i‖}
= Σ_{g∈Γ} ∏_{i=1}^{|g|} b^{−s‖g_i‖}
= Σ_{n=0}^∞ Σ_{a_1≠⋯≠a_n} Σ_{g_1∈G_{a_1}\{id}} ⋯ Σ_{g_n∈G_{a_n}\{id}} ∏_{i=1}^{n} b^{−s‖g_i‖}
= Σ_{n=0}^∞ Σ_{a_1≠⋯≠a_n} ∏_{i=1}^{n} Σ_{g∈G_{a_i}\{id}} b^{−s‖g‖}
= Σ_{n=0}^∞ Σ_{a_1≠⋯≠a_n} ∏_{i=1}^{n} (Σ_s(G_{a_i}) − 1).
To simplify further calculations, we will assume that #(E) = 2; specifically we will
let E = {1, 2}. Then
Σ_s(G) ≥ Σ_{n=0}^∞ c_n, where c_n = 2(∏_{a∈E} (Σ_s(G_a) − 1))^{n/2} if n is even and
c_n = (Σ_{a∈E} (Σ_s(G_a) − 1)) (∏_{a∈E} (Σ_s(G_a) − 1))^{(n−1)/2} if n is odd,
and Σ_{n=0}^∞ c_n ≍_× Σ_{n=0}^∞ (∏_{a∈E} (Σ_s(G_a) − 1))^{n/2}.
This series diverges if and only if
(10.3.10) ∏_{a∈E} (Σ_s(G_a) − 1) ≥ 1.
Now suppose that G1 is of divergence type, and let δ1 = δ(G1 ). By the monotone
convergence theorem,
lim_{s↘δ_1} ∏_{a∈E} (Σ_s(G_a) − 1) = ∏_{a∈E} (Σ_{δ_1}(G_a) − 1) = ∞ · (Σ_{δ_1}(G_2) − 1) = ∞.
(The last equality holds since G2 is nontrivial, see Definition 10.2.1.) So for s
sufficiently close to δ1 , (10.3.10) holds, and thus Σs (G) = ∞.
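To make the divergence criterion (10.3.10) concrete, here is a small numerical sketch. Everything in it is a hypothetical simplification for illustration, not a computation from the text: we pretend ‖π(g)‖ equals Σ_i ‖g_i‖ exactly, take b = e, and let each G_a be the cyclic semigroup generated by an isometry of translation length ℓ_a, so that Σ_s(G_a) − 1 is a geometric series. Bisecting on where the product Π_a(Σ_s(G_a) − 1) crosses 1 then locates the exponent below which the lower bound for Σ_s(G) diverges.

```python
import math

def factor_series(s, ell):
    """Sigma_s(G_a) - 1 for the cyclic semigroup generated by an isometry
    of translation length ell: the geometric series sum_{k>=1} e^(-s*k*ell)."""
    q = math.exp(-s * ell)
    return q / (1 - q)

def product_criterion(s, lengths):
    """prod_a (Sigma_s(G_a) - 1); by (10.3.10), the lower-bound series for
    Sigma_s(G) diverges exactly when this product is >= 1."""
    p = 1.0
    for ell in lengths:
        p *= factor_series(s, ell)
    return p

def crossing_exponent(lengths, lo=1e-9, hi=50.0, steps=200):
    """Bisect for the s at which the product crosses 1; in this toy model
    it is a lower bound for delta(G)."""
    for _ in range(steps):
        mid = 0.5 * (lo + hi)
        if product_criterion(mid, lengths) >= 1:
            lo = mid   # still divergent: the crossing point is to the right
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

For two factors of translation length 1 the condition (e^{−s}/(1 − e^{−s}))^2 = 1 gives s = log 2 ≈ 0.693, strictly larger than the exponent δ(G_a) = 0 of each cyclic factor — consistent with the strict inequality δ_G > δ_{G_a} of part (ii).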
Proof of (iii). Fix ρ > 0, and let h ∈ Gb satisfy d(h(o), Ua ) ≥ ρ. (This is
possible since Gb is non-elliptic and d(h(o), Ua ) ≍+ khk ∀h ∈ Gb .) Then the set
Sρ = {gh(o) : g ∈ Ga }
is ρ-separated, but δ(S_ρ) = δ(G_a) = ∞. Since ρ was arbitrary, it follows from
Proposition 8.2.4(iv) that δ̃(G) = ∞.
Proof of (iv). We will in fact show the following more general result:
Proposition 10.3.9. Suppose E = {a, b}, and fix s ∉ ∆(G_a) ∪ ∆(G_b). Then
there exists a finite set F ⊆ G_b such that for all H ≤ G_b, if H ∩ F = {id}, then
s ∉ ∆(G_a ∗ H).
Indeed, for such an s, the Poincaré series Σ_s(G_a ∗ H) can be estimated using (10.3.5)
as follows:
Σ_s(G_a ∗ H) = Σ_{g∈Γ} b^{−s‖π(g)‖} ≤ Σ_{g∈Γ} b^{−s Σ_{i=1}^{|g|} (‖g_i‖ − C)} = Σ_{g∈Γ} b^{sC|g|} b^{−s Σ_{i=1}^{|g|} ‖g_i‖}.
Continuing as in the proof of part (ii), we get
Σ_s(G_a ∗ H) ≲_× Σ_{n=0}^∞ b^{sCn} ((Σ_s(G_a) − 1)(Σ_s(H) − 1))^{n/2}.
Since Σ_s(H) − 1 ≤ Σ_s(G_b \ F), to show that Σ_s(G_a ∗ H) < ∞ it suffices to show
that
(10.3.11) b^{sC} ((Σ_s(G_a) − 1) Σ_s(G_b \ F))^{1/2} < 1.
But since the series Σ_s(G_a) and Σ_s(G_b) both converge by assumption, (10.3.11)
holds for all F ⊆ G_b sufficiently large.
We will sometimes find the following variant of Proposition 10.3.7(ii) more
useful than the original:
Proposition 10.3.10 (Cf. [55, Proposition 2]). Fix H ≤ G ≤ Isom(X), and
suppose that
(I) Λ_H ⊊ Λ_G,
(II) G is of general type, and
(III) H is of compact type and of divergence type.
Then δG > δH .
Proof. Fix ξ ∈ Λ_G \ Λ_H, and fix ε > 0 small enough so that B(ξ, ε) ∩ Λ_H = ∅.
Since G is of general type, by Proposition 7.4.7, there exists a loxodromic isometry
g ∈ G such that g+ , g− ∈ B(ξ, ε/4). After replacing g by an appropriate power, we
may assume that g n (bord X \ B(ξ, ε/2)) ⊆ B(ξ, ε/2) for all n ∈ Z \ {0}. Now let
U1 = B(ξ, ε/2)
U2 = Nε/4 (ΛH ).
Since H is of compact type (and strongly discrete by Observation 8.1.5), Proposition
7.7.4 shows that there exists a finite set F ⊆ H such that for all h ∈ H \ F ,
h(bord X \ U_2) ⊆ U_2. Let
S = ∐_{n≥0} ((H \ F) × (g^Z \ {id}))^n,
and define π : S → G via the formula
π((h_i, j_i)_{i=1}^n) = h_1 j_1 ⋯ h_n j_n.
A variant of the Ping-Pong Lemma shows that π is injective. On the other hand,
for all (h_i, j_i)_{i=1}^n ∈ S, the triangle inequality gives
d(o, π((h_i, j_i)_{i=1}^n)(o)) ≤ Σ_{i=1}^{n} [‖h_i‖ + ‖j_i‖].
Thus for all s > δ_H,
Σ_s(G) ≥ Σ_{g∈π(S)} e^{−s‖g‖}
≥ Σ_{(h_i,j_i)_{i=1}^n ∈ S} ∏_{i=1}^{n} e^{−s[‖h_i‖+‖j_i‖]}
= Σ_{n≥0} (Σ_{h∈H\F} Σ_{j∈g^Z\{id}} e^{−s[‖h‖+‖j‖]})^n
= Σ_{n≥0} (Σ_s(H \ F) Σ_s(g^Z \ {id}))^n,
which is = ∞ if Σ_s(H \ F) Σ_s(g^Z \ {id}) ≥ 1 and < ∞ if Σ_s(H \ F) Σ_s(g^Z \ {id}) < 1.
Now since H is of divergence type, by the monotone convergence theorem,
lim_{s↘δ_H} Σ_s(H \ F) Σ_s(g^Z \ {id}) = Σ_{δ_H}(H \ F) Σ_{δ_H}(g^Z \ {id}) = ∞ · (positive constant) = ∞.
Thus, for s sufficiently close to δH , Σs (G) = ∞. This shows that δG > δH .
Remark 10.3.11. The reason that we couldn’t deduce Proposition 10.3.10 directly from Proposition 10.3.7(ii) is that the group ⟨H, g^Z⟩ considered in the proof
of Proposition 10.3.10 is not necessarily a free product due to the existence of the
finite set F . In the Standard Case, this could be solved by taking a finite-index
subgroup of H whose intersection with F is trivial, but in general, it is not clear
that such a subgroup exists.
10.4. A partition-structure–like structure
For each g ∈ Γ, let
W_g = π(g)(X_{a_{|g|}}),
unless g = ∅, in which case let W_∅ = bord X.
Standing Assumption 10.4.1. In what follows, we assume that for each a ∈
E, either
(1) Ga is a group, or
(2) Ga ≡ N.
For g, h ∈ Γ, write g ≤ h if h = g ∗ k for some k ∈ Γ.
Lemma 10.4.2. Fix g, h ∈ Γ. If g ≤ h, then W_h ⊆ W_g. On the other hand, if
g and h are incomparable (g ≰ h and h ≰ g), then W_g ∩ W_h = ∅.
Proof. The first assertion follows from Lemma 10.2.2. For the second assertion, it suffices to show that if (a, g), (b, h) ∈ Γ_E are distinct, then W_{(a,g)} ∩ W_{(b,h)} = ∅.
Since W_{(a,g)} ⊆ U_a and the sets (U_a)_{a∈E} are disjoint, if a ≠ b then W_{(a,g)} ∩ W_{(b,h)} = ∅. So
suppose a = b. Assumption 10.4.1 guarantees that either g^{−1}h ∈ G_a or h^{−1}g ∈ G_a;
without loss of generality assume that g^{−1}h ∈ G_a. Then
W_{(a,g)} ∩ W_{(b,h)} = g(X_a) ∩ h(X_a) = g(X_a ∩ g^{−1}h(X_a)) ⊆ g(X_a ∩ U_a) = ∅.
Lemma 10.4.3. There exists σ > 0 such that for all g ∈ Γ,

(10.4.1)  W_g ⊆ Shad(π(g)(o), σ).

In particular,

(10.4.2)  Diam(W_g) ≲× b^{−‖π(g)‖}.
Proof. Let n = |g|, g = g_n, a = a_n, and z = π(g)⁻¹(o). Observe that
if g(z) ∈ V_a, then Lemma 10.2.2 implies that o ∈ V_{a_1}, a contradiction. Thus
z ∈ g⁻¹(X \ V_a). If X is not global, then (10.3.1) (with U_a = V_a and ε/2 in place of ε) implies that D(z, X_a) ≥ ε/2. On the other hand, if X is global then we have z ∈ U_a, so (10.3.2) implies
that D(z, X_a) ≥ ε/2. Either way, we have D(z, X_a) ≥ ε/2.
Let σ > 0 be large enough so that the Big Shadows Lemma 4.5.7 holds; then
we have X_a ⊆ Shad_z(o, σ). Applying π(g) yields (10.4.1), and combining with the
Diameter of Shadows Lemma 4.5.8 yields (10.4.2).
Let ∂Γ denote the set of all infinite words with letters in Γ_E such that a_i ≠ a_{i+1}
for all i. Given g ∈ ∂Γ, for each n we have g ↿ n ∈ Γ. Then Lemmas 10.4.2 and 10.4.3 show
that the sequence (W_{g↿n})_{n=0}^∞ is an infinite descending sequence of closed sets with
diameters tending to zero; thus there exists a unique point ξ ∈ ⋂_{n=0}^∞ W_{g↿n}, which
will be denoted π(g).
Lemma 10.4.4. For all g ∈ ∂Γ, π(g ↿ n)(o) → π(g) radially. In particular
π(∂Γ) ⊆ Λr (G).
Proof. This is immediate from (10.4.1), since by definition π(g) ∈ Wg↿n for
all n.
Lemma 10.4.5 (Cf. Klein's combination theorem [121, Theorem 1.1], [115]).
The set

(10.4.3)  D = bord X \ ⋃_{a∈E} ⋃_{g∈G_a} g(X_a) = bord X \ ⋃_{(a,g)∈Γ_E} W_(a,g)

satisfies G(D) = bord X \ π(∂Γ).
Before we begin the proof of this lemma, note that since D ∩ X is open (Lemma
10.4.6 below), the connectedness of X implies that g(D) ∩ D ≠ ∅ for some g ∈ G.
Thus D is not a fundamental domain.
Proof. Fix x ∈ bord X \ π(∂Γ), and consider the set

Γ_x := {g ∈ Γ : x ∈ W_g}.

By Lemma 10.4.2, Γ_x is totally ordered as a subset of Γ. If Γ_x is infinite, let g ∈ ∂Γ
be the unique word such that Γ_x = {g ↿ n : n ∈ N ∪ {0}}; Lemma 10.4.3 implies
that x = π(g) ∈ π(∂Γ), contradicting our hypothesis. Thus Γ_x is finite. If Γ_x = ∅,
we are done. Otherwise, let g be the largest element of Γ_x. Then x ∈ W_g, so
π(g)⁻¹(x) ∈ X_a, where a = a_{|g|}. The maximality of g implies that

π(g)⁻¹(x) ∉ W_(b,h) = h(X_b) for all b ∈ E \ {a} and all h ∈ G_b \ {id},

but on the other hand π(g)⁻¹(x) ∈ X_a ⊆ bord X \ U_a implies that π(g)⁻¹(x) ∉
W_(a,h) for all h ∈ G_a \ {id}. Thus π(g)⁻¹(x) ∈ D.
Lemma 10.4.6. Suppose that for each a ∈ E, G_a is strongly discrete. Then

D \ Int(D) ⊆ ⋃_{a∈E} Λ_a,

where Λ_a = Λ(G_a). In particular, D ∩ X is open.
Proof. Fix x ∈ D \ Int(D), and find a sequence (a_n, g_n) ∈ Γ_E such that
D(x, g_n(X_{a_n})) → 0. Since g_n(X_{a_n}) ⊆ U_{a_n}, (10.3.1) implies that a_n is constant for
all sufficiently large n, say a_n = a. On the other hand, if there is some g ∈ G_a
such that g_n = g for infinitely many n, then since g(X_a) is closed we would have
x ∈ g(X_a), contradicting that x ∈ D. Since G_a is strongly discrete, it follows that
‖g_n‖ → ∞, and thus Diam(g_n(X_a)) → 0 by Lemma 10.4.3. Since g_n(o) ∈ g_n(X_a),
it follows that g_n(o) → x, and thus x ∈ Λ_a.
Theorem 10.4.7.

Λ = π(∂Γ) ∪ ⋃_{g∈G} ⋃_{a∈E} g(Λ_a).
Proof. The ⊇ direction follows from Lemma 10.4.4, so let us show ⊆. It
suffices to show that Λ ∩ D ⊆ ⋃_{a∈E} Λ_a. Indeed, for all g ∈ Γ \ {∅}, Lemma 10.2.2
gives π(g)(o) ∈ g_1(X_{a_1}) ⊆ bord X \ D. Thus Λ ∩ D = D ∩ ∂D ⊆ ⋃_{a∈E} Λ_a by Lemma
10.4.6.
Corollary 10.4.8. If E is finite and each G_a is strongly discrete and of compact type, then G is strongly discrete and of compact type.

Proof. Strong discreteness follows from Corollary 10.3.6, so let us show that
G is of compact type. Let (ξ_n)_1^∞ be a sequence in Λ. For each n ∈ N, if ξ_n ∈ π(∂Γ),
write ξ_n = π(g_n) for some g_n ∈ ∂Γ; otherwise, write ξ_n = π(g_n)(η_n) for some
g_n ∈ Γ and η_n ∈ Λ_{a_n}. Either way, note that ξ_n ∈ W_h for all h ≤ g_n.
For each h ∈ Γ, let
Sh = {n ∈ N : h ≤ gn }.
Since Γ is countable, by extracting a subsequence we may without loss of generality
assume that for all h ∈ Γ, either n ∈ Sh for all but finitely many n, or n ∈ Sh for
only finitely many n. Let
Γ′ = {h ∈ Γ : n ∈ Sh for all but finitely many n}.
Then by Lemma 10.4.2, the set Γ′ is totally ordered. Moreover, ∅ ∈ Γ′. If Γ′ is
infinite, then choose g ∈ ∂Γ such that Γ′ = {g ↿ m : m ≥ 0}; by Lemma 10.4.3,
we have ξn → π(g). Otherwise, let g be the largest element of Γ′ . For each n,
either ξn ∈ Wg(bn ,hn ) for some (bn , hn ) ∈ ΓE , or ξn = π(g)(ηn ) for some an ∈ E
and ηn ∈ Λan . By extracting another subsequence, we may assume that either the
first case holds for all n, or the second case holds for all n. Suppose the first case
holds for all n. The maximality of g implies that for each (b, h) ∈ ΓE , there are
only finitely many n such that (bn , hn ) = (b, h). Since E is finite, by extracting
a further subsequence we may assume that bn = b for all n. Since Gb is strongly
discrete and of compact type, by extracting a further subsequence we may assume
that hn (o) → η for some η ∈ Λb . But then ξn → π(g)(η) ∈ Λ.
Suppose the second case holds for all n. Since Λ_E = ⋃_{a∈E} Λ_a is compact, by
extracting a further subsequence we may assume that η_n → η for some η ∈ Λ_E.
But then ξ_n → π(g)(η) ∈ Λ.
Corollary 10.4.9. If #(Γ_E) ≥ 3, then #(Λ) ≥ #(R).

Proof. The hypothesis #(Γ_E) ≥ 3 implies that for each g ∈ Γ,

#{(a, g)(b, h) ∈ Γ_E² : g(a, g)(b, h) ∈ Γ} = Σ_{b≠a≠a_{|g|}} (#(G_a) − 1)(#(G_b) − 1) ≥ 2.

Thus, the tree Γ has no terminal nodes or infinite isolated paths. It follows that
∂Γ is perfect and therefore has cardinality at least #(R); since π(∂Γ) ⊆ Λ, we have
#(Λ) ≥ #(∂Γ) ≥ #(R).
Proposition 10.4.10. Suppose that the Schottky system (Ua )a∈E is global.
Then if (Ga )a∈E are moderately (resp. weakly) discrete, then G is moderately (resp.
weakly) discrete. If (Ga )a∈E act properly discontinuously, then G acts properly discontinuously.
Proof. Let D be as in (10.4.3). Fix x ∈ D and g ∈ Γ, let n = |g|, and suppose
that π(g)(x) ∈ D. Then:
(A) For all i = 1, . . . , n, if gi+1 · · · gn (x) ∈ Xai then Lemma 10.2.2 would give
π(g)(x) ∈ g1 (Xa1 ), so gi+1 · · · gn (x) ∈ Vai .
(B) For all i = 0, . . . , n − 1, if gi+1 · · · gn (x) ∈ Xai+1 then Lemma 10.2.2 would
give x ∈ gn−1 (Xan ), so gi+1 · · · gn (x) ∈ Vai+1 .
If n ≥ 2, letting i = 1 in (A)-(B) yields a contradiction, so n = 0 or 1. Moreover,
if n = 1, plugging in i = 1 in (A) gives x ∈ Va1 .
To summarize, if we let

G_x = { G_a    if x ∈ V_a,
      { {id}   if x ∉ ⋃_{a∈E} V_a,

then

g(x) ∈ D ⇒ g ∈ G_x for all g ∈ G.

More concretely,

d(x, g(x)) < d(x, X \ D) ⇒ g ∈ G_x for all g ∈ G.
Comparing with the definitions of moderate and weak discreteness (Definition 5.2.1)
and the definition of proper discontinuity (Definition 5.2.11) completes the proof.
10.5. Existence of Schottky products
Proposition 10.5.1. Suppose that Λ_{Isom(X)} = ∂X, and let G_1, G_2 ≤ Isom(X)
be groups with the following property: for i = 1, 2, there exist ξ_i ∈ ∂X and ε > 0
such that

(10.5.1)  D(ξ_i, g(ξ_i)) ≥ ε for all g ∈ G_i \ {id}.
172
10. SCHOTTKY PRODUCTS
Then there exists φ ∈ Isom(X) such that the product hG1 , φ(G2 )i is a global strongly
separated Schottky product.
Proof. We begin the proof with the following
Claim 10.5.2. For each i = 1, 2, there exists an open set A_i ∋ ξ_i such that
g(A_i) ∩ A_i = ∅ for all g ∈ G_i \ {id}.

Proof. Fix i = 1, 2. Clearly, (10.5.1) implies that ξ_i ∉ Λ(G_i). Thus, there
exists δ > 0 such that D(g(o), ξ_i) ≥ δ for all g ∈ G_i. Applying the Big Shadows
Lemma 4.5.7, there exists σ > 0 such that B(ξ_i, δ/2) ⊆ Shad_{g⁻¹(o)}(o, σ) for all
g ∈ G_i. But then by the Bounded Distortion Lemma 4.5.6, we have

Diam(g(B(ξ_i, γ))) ≍_{×,σ} b^{−‖g‖} Diam(B(ξ_i, γ)) ≤ 2γ for all g ∈ G_i and all 0 < γ ≤ δ/2.

Choosing γ appropriately gives Diam(g(B(ξ_i, γ))) < ε/2 for all g ∈ G_i. Letting
A_i = B(ξ_i, γ) completes the proof of the claim. ⊳
For each i = 1, 2, let Ai be as above, and fix an open set Bi ∋ ξi such that
D(Bi , bord X \ Ai ) > 0. Since ΛIsom(X) = ∂X, there exists a loxodromic isometry
φ ∈ Isom(X) such that φ− ∈ B1 and φ+ ∈ B2 (Proposition 7.4.7). Then by
Theorem 6.1.10, φn → φ+ uniformly on bord X \ B1 , so φn (B1 ) ∪ B2 = bord X
for all n ∈ N sufficiently large. Fix such an n, and let U1 = φn (bord X \ A1 ),
U2 = bord X \ A2 , V1 = φn (bord X \ B1 ), V2 = bord X \ B2 . Then (V1 , V2 ) is
a global Schottky system for (G1 , φ(G2 )), which implies that (U1 , U2 ) is a global
strongly separated Schottky system for (G1 , φ(G2 )). This concludes the proof of
the proposition.
Remark 10.5.3. The hypotheses of the above theorem are satisfied if X is
an algebraic hyperbolic space and for each i = 1, 2, G_i is strongly discrete and of
compact type and Λ_i = Λ(G_i) ⊊ ∂X.
Proof. We have Λ_{Isom(X)} = ∂X by Observation 2.3.2. Fix i = 1, 2. Since
Λ_i ⊊ ∂X, the set ∂X \ Λ(G_i) is nonempty and open. For each g ∈ G_i \ {id}, the set
Fix(g) is totally geodesic (Theorem 2.4.7) and therefore nowhere dense; since G_i is
countable, it follows that ⋃_{g∈G_i\{id}} Fix(g) is a meager set, so the set

S_i = (∂X \ Λ(G_i)) \ ⋃_{g∈G_i\{id}} Fix(g)

is nonempty. Fix ξ_i ∈ S_i. By Proposition 7.7.4,

lim inf_{g∈G_i} D(ξ_i, g(ξ_i)) ≥ D(ξ_i, Λ(G_i)) > 0.
10.5. EXISTENCE OF SCHOTTKY PRODUCTS
173
On the other hand, for all g ∈ G_i \ {id} we have ξ_i ∉ Fix(g) and therefore
D(ξ_i, g(ξ_i)) > 0. Combining yields inf_{g∈G_i\{id}} D(ξ_i, g(ξ_i)) > 0 and thus (10.5.1).
This concludes the proof of the remark.
Proposition 10.5.4. For a semigroup G ≤ Isom(X), the following are equivalent:

(A) G is either outward focal or of general type.
(B) G contains a strongly separated Schottky subsemigroup.
(C) δ̃(G) > 0.
(D) #(Λ_G) ≥ #(R).
(E) #(Λ_G) ≥ 3, i.e. G is nonelementary.

If G is a group, then these are also equivalent to:

(F) G contains a global strongly separated Schottky subgroup.
The implications (C) ⇒ (E) ⇒ (A) have been proven elsewhere in the paper; see
Proposition 7.3.1 and Theorem 1.2.3. The implication (B) ⇒ (D) is an immediate
consequence of Corollary 10.4.9, and (D) ⇒ (E) and (F) ⇒ (B) are both trivial.
So it remains to prove (A) ⇒ (B) ⇒ (C), and that (A) ⇒ (F) if G is a group.
Proof of (A) ⇒ (B). Suppose first that G is outward focal with global fixed
point ξ. Then there exists g ∈ G with g′(ξ) > 1, and there exists h ∈ G such that
h₊ ≠ g₊. If we let j = gⁿh, then j′(ξ) > 1 (after choosing n sufficiently large), and
j₊ ≠ g₊.
So regardless of whether G is outward focal or of general type, there exist
loxodromic isometries g, h ∈ G such that g₊ ∉ Fix(h) and h₊ ∉ Fix(g). It follows
that there exists ε > 0 such that

B(g₊, ε) ∩ hⁿB(g₊, ε) = B(h₊, ε) ∩ gⁿB(h₊, ε) = ∅ for all n ≥ 1.
Let U1 = B(g+ , ε/2), U2 = B(h+ , ε/2), V1 = B(g+ , ε), and V2 = B(h+ , ε). By
Theorem 6.1.10, for all sufficiently large n we have g n (V1 ∪ V2 ) ⊆ U1 and hn (V1 ∪
V2 ) ⊆ U2 . It follows that (V1 , V2 ) is a Schottky system for ((g n )N , (hn )N ), and that
(U1 , U2 ) is a strongly separated Schottky system for ((g n )N , (hn )N ).
Proof of (B) ⇒ (C). Since a cyclic loxodromic semigroup is of divergence
type (an immediate consequence of (6.1.3)), Proposition 10.3.7(i),(ii) shows that
0 < δ(H) < ∞, where H ≤ G is a Schottky subsemigroup. Thus δ̃(H) > 0, and so
δ̃(G) > 0.
Proof of (A) ⇒ (F) for groups. Fix loxodromic isometries g, h ∈ G with
Fix(g) ∩ Fix(h) = ∅. Choose ε > 0 such that

B(Fix(g), ε) ∩ hⁿB(Fix(g), ε) = B(Fix(h), ε) ∩ gⁿB(Fix(h), ε) = ∅ for all n ≥ 1.
Let U1 = B(Fix(g), ε/2), U2 = B(Fix(h), ε/2), V1 = B(Fix(g), ε), and V2 =
B(Fix(h), ε). By Theorem 6.1.10, for all sufficiently large n we have g n (bord X \
B(g− , ε/2)) ⊆ B(g+ , ε/2) and hn (bord X \ B(h− , ε/2)) ⊆ B(h+ , ε/2). It follows
that (V1 , V2 ) is a global Schottky system for ((g n )Z , (hn )Z ), and that (U1 , U2 ) is a
global strongly separated Schottky system for ((g n )Z , (hn )Z ).
CHAPTER 11
Parabolic groups
In this chapter we study parabolic groups. We begin with a list of several examples of parabolic groups acting on E^∞, the half-space model of infinite-dimensional
real hyperbolic geometry. These examples include a counterexample to the infinite-dimensional analogue of Margulis's lemma, as well as a parabolic isometry that
generates a cyclic group which is not discrete in the ambient isometry group. The
latter example is the Poincaré extension of an example due to M. Edelstein. After
giving these examples of parabolic groups, we prove a lower bound on the Poincaré
exponent of a parabolic group in terms of its algebraic structure (Theorem 11.2.6).
We show that it is optimal by constructing explicit examples of parabolic groups
acting on E^∞ which come arbitrarily close to this bound.
11.1. Examples of parabolic groups acting on E∞
Let X = E = E^∞ be the half-space model of infinite-dimensional real hyperbolic
geometry (§2.5.2). Recall that B = ∂E \ {∞} is an infinite-dimensional Hilbert
space, and that Poincaré extension is the homomorphism ·̂ : Isom(B) → Isom(E)
given by the formula

ĝ(t, x) = (t, g(x))

(Observation 2.5.6). The image of ·̂ is the set {g ∈ Stab(Isom(E); ∞) : g′(∞) = 1}.
Thus, Poincaré extension provides a bijection between the class of subgroups of
Isom(B) and the class of subgroups of Isom(E) for which ∞ is a neutral global fixed
point. Given a group G ≤ Isom(B), we will denote its image under ·̂ by Ĝ. We may
summarize the relation between Ĝ and G as follows:
Observation 11.1.1.

(i) Ĝ is parabolic if and only if G(0) is unbounded; otherwise Ĝ is elliptic.
(ii) Ĝ is strongly (resp. moderately, weakly, COT) discrete if and only if G
is. Ĝ acts properly discontinuously if and only if G does.
(iii) Write Isom(B) = O(B) ⋉ B. Then the preimage of the uniform operator
topology under ·̂ is equal to the product of the uniform operator topology
on O(B) with the usual topology on B. Thus if we denote this topology
by UOT*, then Ĝ is UOT-discrete if and only if G is UOT*-discrete.
(iv) For all g ∈ G, we have

e^{‖ĝ‖} ≍× cosh ‖ĝ‖ = 1 + ‖(1, g(0)) − (1, 0)‖²/2 ≍× 1 ∨ ‖g(0)‖²

and thus for all s ≥ 0,

(11.1.1)  Σ_s(Ĝ) ≍× Σ_s(G) := Σ_{g∈G} (1 ∨ ‖g(0)‖)^{−2s}.

In what follows, we let δ(G) = inf{s : Σ_s(G) < ∞} = δ(Ĝ), and we say that G
is of convergence or divergence type if Ĝ is.
11.1.1. The Haagerup property and the absence of a Margulis lemma.
One question which has been well studied in the literature is the following: For
which abstract groups Γ can Γ be embedded as a strongly discrete subgroup of
Isom(B)? Such a group is said to have the Haagerup property.¹ For a detailed
account, see [50].
Remark 11.1.2. The following groups have the Haagerup property:
• [62, pp.73-74] Groups which admit a cocompact action on a proper R-tree.
In particular this includes Fn (Z) for every n.
• [101] Amenable groups. This includes solvable and nilpotent groups.
A class of examples of groups without the Haagerup property is the class of infinite
groups with Kazhdan's property (T). For example, if n ≥ 3 then SL_n(Z) does not
have the Haagerup property [22, §4.2].
The example of (virtually) nilpotent groups will be considered in more detail
in §11.2.3, since it turns out that every parabolic subgroup of Isom(E) which has
finite Poincaré exponent is virtually nilpotent.
Recall that Margulis’s lemma is the following lemma:
Proposition 11.1.3 (Margulis’s lemma, [61, p.126] or [15, p.101]). Let X be
a Hadamard manifold with curvature bounded away from −∞. Then there exists
ε = εX > 0 with the following property: For every discrete group G ≤ Isom(X) and
for every x ∈ X, the group

G_ε(x) := ⟨g ∈ G : d(x, g(x)) ≤ ε⟩

is virtually nilpotent.
For convenience, we will say that Margulis's lemma holds on a metric space
X if the conclusion of Proposition 11.1.3 holds, i.e. if there exists ε > 0 such
that for every strongly discrete group G ≤ Isom(X) and for every x ∈ X, G_ε(x)
is virtually nilpotent. It was proven recently by E. Breuillard, B. Green, and T.
C. Tao [38, Corollary 1.15] that Margulis's lemma holds on all metric spaces with
bounded packing in the sense of [38]. This result includes Proposition 11.1.3 as a
special case.

¹The Haagerup property can also be defined for locally compact groups, by replacing "finite" with "precompact" in the definition of strong discreteness. However, in this monograph we consider only discrete groups.
By contrast, in infinite dimensions we have the following:
Observation 11.1.4. Margulis's lemma does not hold on the space X = E = E^∞.
Proof. Since F_2(Z) has the Haagerup property, there exists a strongly discrete
group G ≤ Isom(B) isomorphic to F_2(Z), say G = (g_1)^Z ∗ (g_2)^Z. Let Ĝ be the
Poincaré extension of G. Fix ε > 0, and let

x = (t, 0) ∈ E

for t > 0 large to be determined. Then by (2.5.3),

d(x, ĝ_i(x)) = d((t, 0), (t, g_i(0))) ≍× ‖g_i(0)‖/t.

So if t is large enough, then d(x, ĝ_i(x)) ≤ ε. It follows that ĝ_1, ĝ_2 ∈ Ĝ_ε(x), and so
Ĝ_ε(x) = Ĝ ≡ F_2(Z) is not virtually nilpotent.
Remark 11.1.5. In view of the fact that in the finite-dimensional Margulis’s
lemma, εEd depends on the dimension d and tends to zero as d → ∞ (see e.g.
[23, Proposition 5.2]), we should not be surprised that the lemma fails in infinite
dimensions.
Remark 11.1.6. In some references (e.g. [148, Theorem 12.6.1]), the conclusion of Margulis’s lemma states that Gε (x) is elementary rather than virtually
nilpotent. The above example shows that the two statements should not be confused with each other. We will show (Example 13.1.5 below) that the alternative
formulation of Margulis’s lemma which states that Gε (x) is elementary also fails in
infinite dimensions.
Remark 11.1.7. Parabolic groups acting on proper CAT(-1) spaces must be
amenable [41, Proposition 1.6]. Therefore the existence of a parabolic subgroup
of Isom(H∞ ) isomorphic to F2 (Z) also distinguishes H∞ from the class of proper
CAT(-1) spaces.
11.1.2. Edelstein examples. One of the oldest results in the field of groups
acting by isometries on Hilbert space is the following example due to M. Edelstein:
Proposition 11.1.8 ([70, Theorem 2.1]). There exist an isometry g ∈ Isom(ℓ²(N; C))
and sequences (n_k^{(1)})_1^∞, (n_k^{(2)})_1^∞ such that

(11.1.2)  g^{n_k^{(1)}}(0) → 0 but ‖g^{n_k^{(2)}}(0)‖ → ∞ as k → ∞.
Since the specific form of Edelstein’s example will be important to us, we recall
the proof of Proposition 11.1.8, in a modified form suitable for generalization:
Proof of Proposition 11.1.8. For each k ∈ N let a_k = 1/k!, let b_k = 1,
and let

(11.1.3)  c_k = e^{2πi a_k},  d_k = b_k(1 − c_k).

Then

Σ_{k∈N} |d_k|² ≲× Σ_{k∈N} |a_k b_k|² = Σ_{k∈N} (1/k!)² < ∞,

so d = (d_k)_1^∞ ∈ ℓ²(N; C). Let g ∈ Isom(ℓ²(N; C)) be given by the formula

(11.1.4)  g(x)_k = c_k x_k + d_k.

Then

(11.1.5)  gⁿ(x)_k = c_kⁿ x_k + Σ_{i=0}^{n−1} c_k^i d_k = c_kⁿ x_k + ((1 − c_kⁿ)/(1 − c_k)) d_k = c_kⁿ x_k + b_k(1 − c_kⁿ).

In particular, gⁿ(0)_k = b_k(1 − c_kⁿ). So

(11.1.6)  ‖gⁿ(0)‖² = Σ_{k=1}^∞ |b_k(1 − c_kⁿ)|² = Σ_{k=1}^∞ |b_k(1 − e^{2πin a_k})|² ≍× Σ_{k=1}^∞ |b_k|² d(n a_k, Z)².
Now for each k ∈ N, let

n_k^{(1)} = k!,  n_k^{(2)} = (1/2) Σ_{j=1}^k j!.

Then

‖g^{n_k^{(1)}}(0)‖² ≍× Σ_{j=1}^∞ d(k!/j!, Z)² = (k!)² Σ_{j=k+1}^∞ (1/j!)² ≍× (k!/(k+1)!)² = 1/(k+1)² → 0 as k → ∞,
but on the other hand

‖g^{n_k^{(2)}}(0)‖² ≳× Σ_{j=1}^k d(n_k^{(2)}/(j+1)!, Z)² = (1/4) Σ_{j=1}^k (1 − Σ_{i=1}^j i!/(j+1)!)² ≥ (1/4) Σ_{j=1}^k (1 − 2/(j+1))² → ∞ as k → ∞.

This demonstrates (11.1.2).
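The decay along n_k^{(1)} = k! can be checked numerically. The following Python sketch is ours, not part of the text; it evaluates a truncation of the right-hand side of (11.1.6) with a_k = 1/k!, b_k = 1, using exact rational arithmetic so that the distances d(n a_k, Z) carry no rounding error:

```python
from fractions import Fraction
from math import factorial

def dist_to_Z(q):
    """Distance from the rational q to the nearest integer."""
    f = q - (q.numerator // q.denominator)  # fractional part in [0, 1)
    return min(f, 1 - f)

def norm_sq(n, K=25):
    """Truncation of sum_k d(n*a_k, Z)^2 with a_k = 1/k!, b_k = 1,
    comparable to ||g^n(0)||^2 by (11.1.6)."""
    return sum(dist_to_Z(Fraction(n, factorial(k))) ** 2 for k in range(1, K + 1))

# Along n = k! the quantity decays like 1/(k+1)^2, matching the displayed estimate.
for k in (3, 5, 7, 9):
    print(k, float(norm_sq(factorial(k))), 1 / (k + 1) ** 2)
```

Only the dominant term j = k+1 of the tail survives, which is why the printed values track 1/(k+1)² so closely.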
Remark 11.1.9. Let us explain the significance of Edelstein's example from
the point of view of hyperbolic geometry. Let g ∈ Isom(B) be as in Proposition
11.1.8, and let ĝ ∈ Isom(E^∞) be its Poincaré extension. By Observation 11.1.1, ĝ is
a parabolic isometry. But the orbit of o = (1, 0) (cf. §4.1) is quite irregular; indeed,

ĝ^{n_k^{(1)}}(o) → o but ĝ^{n_k^{(2)}}(o) → ∞ ∈ ∂E as k → ∞.

So the orbit (ĝⁿ(o))_1^∞ simultaneously tends to infinity on one subsequence while
remaining bounded on another subsequence. Such a phenomenon cannot occur in
proper metric spaces, as we demonstrate now:
Theorem 11.1.10. If X is a proper metric space and if G ≤ Isom(X) is cyclic,
then either G has bounded orbits or G is strongly discrete.

Proof. Write G = g^Z for some g ∈ Isom(X), and fix a point o ∈ X. For each
n ∈ Z write ‖n‖ = ‖gⁿ‖. Then ‖−n‖ = ‖n‖, and ‖m + n‖ ≤ ‖m‖ + ‖n‖.
Suppose that G is not strongly discrete. Then there exists R > 0 such that

(11.1.7)  #{n ∈ N : ‖n‖ ≤ R} = ∞.

Now let g^S(o) ⊆ g^Z(o) ∩ B(o, 2R) be a maximal R-separated set. Since X is proper,
S is finite. For each k ∈ S, choose ℓ_k > k such that ‖ℓ_k‖ ≤ R; such an ℓ_k exists by
(11.1.7).
Now let n ∈ N be arbitrary. Let 0 ≤ m ≤ n be the largest number for which
‖m‖ ≤ 2R. Since g^S(o) is a maximal R-separated subset of g^Z(o) ∩ B(o, 2R) ∋
g^m(o), there exists k ∈ S for which ‖m − k‖ ≤ R. Then

‖m − k + ℓ_k‖ ≤ R + R = 2R.

On the other hand, m − k + ℓ_k > m since ℓ_k > k by construction. Thus by the
maximality of m, we have m − k + ℓ_k > n. So

n − m < ℓ_k − k ≤ C := max_{k∈S}(ℓ_k − k).

It follows that

‖n‖ ≤ ‖m‖ + ‖n − m‖ ≤ 2R + C‖g‖,

i.e. ‖n‖ is bounded independent of n. Thus G has bounded orbits.
At this point, we shall use the different notions of discreteness introduced in
Chapter 5 to distinguish between different variations of Edelstein’s example. To
this end, we make the following definition:
Definition 11.1.11. An isometry g ∈ Isom(ℓ²(N; C)) is said to be Edelstein-type
if it is of the form (11.1.4), where the sequences (c_k)_1^∞ and (d_k)_1^∞ are of
the form (11.1.3), where (a_k)_1^∞ and (b_k)_1^∞ are sequences of positive real numbers
satisfying

Σ_{k=1}^∞ |a_k b_k|² < ∞.
Our proof of Proposition 11.1.8 shows that the isometry g is always well-defined
and satisfies (11.1.6). On the other hand, the conclusion of Proposition 11.1.8 does
not hold for all Edelstein-type isometries; it is possible that the cyclic group G = g Z
is strongly discrete, and it is also possible that this group has bounded orbits. (But
the two cannot happen simultaneously unless g is a torsion element.) In the sequel,
we will be interested in Edelstein-type isometries for which G has unbounded orbits
but is not necessarily strongly discrete. We will be able to distinguish between the
examples using our more refined notions of discreteness.
Edelstein-type Example 11.1.12. In our notation, Edelstein’s original example can be described as the Edelstein-type isometry g defined by the sequences
ak = 1/k!, bk = 1. Edelstein’s proof shows that G = g Z has unbounded orbits and
is not weakly discrete. However, we can show more:
Proposition 11.1.13. The cyclic group generated by the isometry in Edelstein’s
example is not UOT-discrete.
Proof. As in the proof of Proposition 11.1.8, we let n_k = k!, so that g^{n_k}(0) →
0. But if Tⁿ denotes the linear part of gⁿ, then

T^{n_k}(x) = (e^{2πik!/j!} x_j)_{j=1}^∞

and so

‖T^{n_k} − I‖ ≤ Σ_{j=k+1}^∞ |1 − e^{2πik!/j!}| ≍× 1/(k+1) → 0.

Thus T^{n_k} → I in the uniform operator topology, so by Observation 11.1.1(iii),
ĝ^{n_k} → id in the uniform operator topology. Thus ĝ^Z is not UOT-discrete.
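The convergence ‖T^{n_k} − I‖ → 0 can also be observed numerically. The sketch below is ours (the truncation level J is an assumption; the untruncated tail it omits is dominated by the terms shown):

```python
import cmath
from math import factorial, pi

def op_dist(k, J=40):
    """Truncated sum_{j=k+1}^J |1 - e^{2 pi i k!/j!}|, the upper bound
    for ||T^{k!} - I|| from the proof; it decays like 1/(k+1)."""
    total = 0.0
    for j in range(k + 1, J + 1):
        theta = 2 * pi * factorial(k) / factorial(j)  # 2*pi*k!/j!
        total += abs(1 - cmath.exp(1j * theta))
    return total

print([round(op_dist(k), 4) for k in (5, 10, 20)])  # decreasing toward 0
```

For j ≤ k the exponent 2πi k!/j! is a multiple of 2πi, so those terms vanish exactly; only the tail j > k contributes, and its leading term is |1 − e^{2πi/(k+1)}| ≈ 2π/(k+1).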
Edelstein-type Example 11.1.14. The Edelstein-type isometry g defined
by the sequences a_k = 1/2^k, b_k = 1. This example was considered by A. Valette
[174, Proposition 1.7]. It has unbounded orbits, and is moderately discrete (in fact
properly discontinuous) but not strongly discrete.
Proof. Letting n_k^{(1)} = 2^k, we have by (11.1.6)

‖g^{n_k^{(1)}}(0)‖² ≍× Σ_{j=1}^∞ d(2^k/2^j, Z)² = Σ_{j=k+1}^∞ (2^{k−j})² = 1/3,

and so g^Z is not strongly discrete. Letting n_k^{(2)} = ⌊2^k/3⌋, we have

‖g^{n_k^{(2)}}(0)‖² ≍× Σ_{j=1}^∞ d(⌊2^k/3⌋/2^j, Z)² ≥ Σ_{j=1}^k (d(2^{k−j}/3, Z) − 1/2^j)² ≥ Σ_{j=7}^k 1/100 ≍+,× k → ∞ as k → ∞,

and therefore g^Z has unbounded orbits.
Finally, we show that g^Z acts properly discontinuously. To begin with, we
observe that for all n ∈ N, we may write n = 2^k(2j + 1) for some j, k ≥ 0; then

‖gⁿ(0)‖² ≍× Σ_{i=1}^∞ d(2^k(2j + 1)/2^i, Z)² ≥ d(2^k(2j + 1)/2^{k+1}, Z)² = 1/4,

i.e. 0 is an isolated point of g^Z(0). So for some ε > 0,

‖gⁿ(0)‖ ≥ ε for all n ∈ N.

Now let x ∈ ℓ²(N; C) be arbitrary, and let N be large enough so that

‖(x_{N+1}, …)‖ ≤ ε/3.

Now for all n ∈ N, we have

‖g^{2^N n}(x) − x‖ = ‖g^{2^N n}(0, …, 0, x_{N+1}, …) − (0, …, 0, x_{N+1}, …)‖ ≥ ‖g^{2^N n}(0)‖ − 2ε/3 ≥ ε/3,

which implies that the set g^{2^N Z}(x) is discrete. But g^Z(x) is the union of finitely
many isometric images of g^{2^N Z}(x), so it must also be discrete.
Remark 11.1.15. It is not possible to differentiate further between unbounded
Edelstein-type isometries by considering separately the conditions of weak discreteness, moderate discreteness, and proper discontinuity. Indeed, if X is any metric
space and if G ≤ Isom(X) is any cyclic group with unbounded orbits, then the
following are equivalent: G is weakly discrete, G is moderately discrete, G acts
properly discontinuously. This can be seen as follows: every nontrivial subgroup
of G is of finite index, and therefore also has unbounded orbits; it follows that no
element of G\ {id} has a fixed point in X; it follows from this that the three notions
of discreteness are equivalent.
Example 11.1.16. Let g ∈ Isom(ℓ²(N; C)) be as in Proposition 11.1.8, let σ :
ℓ²(Z; C) → ℓ²(Z; C) be the shift map σ(x)_k = x_{k+1}, and let T : ℓ²(N; C) → ℓ²(N; C)
be given by the formula

T(x)_k = e^{2πi/k} x_k.

Then g_1 = g ⊕ σ has unbounded orbits and is COT-discrete but not weakly discrete;
g_2 = g ⊕ T has unbounded orbits and is UOT-discrete but not COT-discrete.

Proof. Since g has unbounded orbits and is not weakly discrete, the same
is true for both g_1 and g_2. Since the sequence (σⁿ(x))_1^∞ diverges for every x ∈
ℓ²(Z; C), the group generated by σ is COT-discrete, which implies that g_1 is as
well. Since the sequence (‖Tⁿ − I‖)_1^∞ is bounded from below, the group generated
by T is UOT-discrete, which implies that g_2 is as well. On the other hand, if we
let n_k = k!, then T^{n_k}(x) → x for all x ∈ ℓ²(N; C). But we showed in Proposition
11.1.13 that g^{n_k}(x) → x for all x ∈ ℓ²(N; C); it follows that g ⊕ T is not COT-discrete.
Remark 11.1.17. One might object to the above examples on the grounds that
the isometries g_1 and g_2 do not act irreducibly. However, Edelstein-type isometries
never act irreducibly: if g is defined by (11.1.3) and (11.1.4) for some sequences
(a_k)_1^∞ and (b_k)_1^∞, then for every k the affine hyperplane H_k = {x ∈ ℓ²(N; C) :
x_k = b_k} is invariant under g. In general, it is not even possible to find a minimal
subspace on which the restricted action of g is irreducible, since such a minimal
subspace would be given by the formula

⋂_k H_k = { ∅              if Σ_1^∞ |b_k|² = ∞,
          { {(b_k)_1^∞}    if Σ_1^∞ |b_k|² < ∞,

and if g has unbounded orbits (as in Examples 11.1.12 and 11.1.14), the first case
must hold.
We conclude this section with one more Edelstein-type example:
Edelstein-type Example 11.1.18. The Edelstein-type isometry g defined by
the sequences a_k = 1/2^k, b_k = log(1 + k). In this example, g^Z is strongly discrete
but has infinite Poincaré exponent.

Proof. To show that g^Z is strongly discrete, fix n ≥ 1, and let k be such that
2^k ≤ n < 2^{k+1}. Then 1/4 ≤ n/2^{k+2} < 1/2, so by (11.1.6),

‖gⁿ(0)‖² ≳× b_{k+2}² d(n/2^{k+2}, Z)² ≥ b_{k+2}²/16 → ∞ as n → ∞.
To show that δ(g^Z) = ∞, fix ℓ ≥ 0, and note that by (11.1.6),

‖g^{2^ℓ}(0)‖² ≍× Σ_{k=ℓ+1}^∞ (4^ℓ/4^k) |b_k|² ≍× |b_{ℓ+1}|² = log²(2 + ℓ).

It follows that

Σ_s(g^Z) ≥ Σ_{ℓ=0}^∞ (1 ∨ ‖g^{2^ℓ}(0)‖)^{−2s} ≍× Σ_{ℓ=0}^∞ log^{−2s}(2 + ℓ) = ∞ for all s ≥ 0.
11.2. The Poincaré exponent of a finitely generated parabolic group
In this section, we relate the Poincaré exponent δG of a parabolic group G
with its algebraic structure. We will show below that δG is infinite unless G is
virtually nilpotent (Theorem 11.2.6 below), so we begin with a digression on the
coarse geometry of finitely generated virtually nilpotent groups.
11.2.1. Nilpotent and virtually nilpotent groups. Recall that the lower
central series of an abstract group Γ is the sequence (Γ_i)_1^∞ defined recursively by
the equations

Γ_1 = Γ and Γ_{i+1} = [Γ, Γ_i].

Here [A, B] denotes the commutator of two sets A, B ⊆ Γ, i.e. [A, B] = ⟨aba⁻¹b⁻¹ :
a ∈ A, b ∈ B⟩. The group Γ is nilpotent if its lower central series terminates, i.e. if
Γ_{k+1} = {id} for some k ∈ N. The smallest integer k for which this equality holds
is called the nilpotency class of Γ, and we will denote it by k.
Note that a group is abelian if and only if it is nilpotent of class 1. The
fundamental theorem of finitely generated abelian groups says that if Γ is a finitely
generated abelian group, then Γ ≡ Zd × F for some d ∈ N ∪ {0} and some finite
abelian group F . The number d will be called the rank of Γ, denoted rank(Γ). Note
that the large-scale structure of Γ depends only on d and not on the finite group
F. Specifically, if d_Γ is a Cayley metric on Γ then

(11.2.1)  N_Γ(R) ≍× R^d for all R ≥ 1.
Here NΓ (R) = #{γ ∈ Γ : d(e, γ) ≤ R} is the orbital counting function of Γ
interpreted as acting on the metric space (Γ, dΓ ) (cf. Remark 8.1.3).
The following analogue of (11.2.1) was proven by H. Bass and independently
by Y. Guivarch:
184
11. PARABOLIC GROUPS
Theorem 11.2.1 ([16, 87]). Let Γ be a finitely generated nilpotent group with
lower central series (Γ_i)_1^∞ and nilpotency class k, and let

α_Γ = Σ_{i=1}^k i · rank(Γ_i/Γ_{i+1}).

Let d_Γ be a Cayley metric on Γ. Then for all R ≥ 1,

(11.2.2)  N_Γ(R) ≍× R^{α_Γ}.
The number αΓ will be called the (polynomial) growth rate of NΓ .
A group is virtually nilpotent if it has a nilpotent subgroup of finite index. The
following is an immediate corollary of Theorem 11.2.1:
Corollary 11.2.2. Let Γ be a finitely generated virtually nilpotent group. Let
Γ′ ≤ Γ be a nilpotent subgroup of finite index, and let d_Γ be a Cayley metric. Let
α_Γ = α_{Γ′}. Then for all R ≥ 1,

(11.2.3)  N_Γ(R) ≍× R^{α_Γ}.
Example 11.2.3. If Γ is abelian, then (11.2.2) reduces to (11.2.1).
Example 11.2.4. Let Γ be the discrete Heisenberg group, i.e.

Γ = { [[1, a, c], [0, 1, b], [0, 0, 1]] : a, b, c ∈ Z }.

We compute the growth rate of N_Γ. Note that Γ is nilpotent of class 2, and its
lower central series is given by Γ_1 = Γ,

Γ_2 = { [[1, 0, c], [0, 1, 0], [0, 0, 1]] : c ∈ Z }.

Thus

α_Γ = rank(Γ_1/Γ_2) + 2 rank(Γ_2) = 2 + 2 · 1 = 4.
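The growth rate α_Γ = 4 can be observed experimentally by breadth-first search on the Cayley graph. The sketch below is ours: the generating set {x^{±1}, y^{±1}} and the radius 16 are choices of ours, and the measured doubling exponent only approximates 4 at this finite scale.

```python
import math
from collections import deque

def heis_mul(p, q):
    """Multiply (a, b, c) and (a', b', c'), encoding the upper-triangular
    matrices of Example 11.2.4; the c-entry picks up the cross term a*b'."""
    a, b, c = p
    a2, b2, c2 = q
    return (a + a2, b + b2, c + c2 + a * b2)

GENS = [(1, 0, 0), (-1, 0, 0), (0, 1, 0), (0, -1, 0)]

def ball_sizes(R):
    """Return [N(0), ..., N(R)] for the word metric, via BFS from the identity."""
    dist = {(0, 0, 0): 0}
    queue = deque([(0, 0, 0)])
    while queue:
        g = queue.popleft()
        if dist[g] == R:
            continue  # do not expand past radius R
        for s in GENS:
            h = heis_mul(g, s)
            if h not in dist:
                dist[h] = dist[g] + 1
                queue.append(h)
    counts = [0] * (R + 1)
    for d in dist.values():
        counts[d] += 1
    out, total = [], 0
    for c in counts:
        total += c
        out.append(total)  # N(r) = #{g : |g| <= r}
    return out

N = ball_sizes(16)
# the doubling exponent log2(N(16)/N(8)) should be close to alpha_Gamma = 4
print(math.log(N[16] / N[8]) / math.log(2))
```

By contrast, the same experiment on Z² (drop the c-coordinate) yields an exponent near 2, in line with (11.2.1).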
Corollary 11.2.2 implies that finitely generated virtually nilpotent groups have
polynomial growth, meaning that the growth rate

(11.2.4)  α_Γ := lim_{R→∞} log N_Γ(R) / log(R)

exists and is finite. The converse assertion is a deep theorem of M. Gromov:

Theorem 11.2.5 ([155]). A finitely generated group Γ has polynomial growth if
and only if Γ is virtually nilpotent. Moreover, if Γ does not have polynomial growth
then the limit (11.2.4) exists and equals ∞.
Thus the limit (11.2.4) exists in all circumstances, so we may refer to it unambiguously.
11.2.2. A universal lower bound on the Poincaré exponent. Now let
G ≤ Isom(X) be a parabolic group. Recall that in the Standard Case, if a group
G is discrete then it is virtually abelian. Moreover, in this case δ_G = (1/2) rank(G).
If G is virtually nilpotent, then it is natural to replace this formula by the
formula δ_G = (1/2) α_G. However, in general equality does not hold in this formula, as
we will see below (Theorem 11.2.11). We show now that the ≥ direction always
holds. Precisely:

Theorem 11.2.6. Let G ≤ Isom(X) be a finitely generated parabolic group. Let
α_G be as in (11.2.4). Then

(11.2.5)  δ_G ≥ α_G/2.

Moreover, if equality holds and δ_G < ∞, then G is of divergence type.
Before proving Theorem 11.2.6, we make a few remarks:
Remark 11.2.7. In this theorem, it is crucial that b > 1 is chosen close enough
to 1 so that Proposition 3.6.8 holds (cf. §4.1). Indeed, by varying the parameter b
one may vary the Poincaré exponent at will (cf. (8.1.2)); in particular, by choosing
b large, one could make δG arbitrarily small. If X is strongly hyperbolic, then of
course we may let b = e.
Remark 11.2.8. Expanding on the above remark, we recall that if X is an
R-tree, then any value of b is permitted in Proposition 3.6.8 (Remark 3.6.12). This
demonstrates that if a finitely generated parabolic group acting on an R-tree has
finite Poincaré exponent, then its growth rate is zero. This may also be seen more
directly from Remark 6.2.12.
Remark 11.2.9. Let G ≤ Isom(X) be a group of general type, and let H ≤ G
be a finitely generated parabolic subgroup. Then combining Theorem 11.2.6 with
Proposition 10.3.10 shows that δG > αH /2. This generalizes a well-known theorem
of A. F. Beardon [18, Theorem 7].
Combining Theorems 11.2.5 and 11.2.6 gives the following corollary:
Corollary 11.2.10. Any finitely generated parabolic group with finite Poincaré
exponent is virtually nilpotent.
This corollary can be viewed very loosely as a generalization of Margulis’s lemma
(Proposition 11.1.3). As we have seen above (Observation 11.1.4), a strict analogue
of Margulis’s lemma fails in infinite dimensions.
11. PARABOLIC GROUPS
Proof of Theorem 11.2.6. Let g1, . . . , gn be a set of generators for G, and
let dG denote the corresponding Cayley metric. Let ξ ∈ ∂X denote the unique fixed
point of G. Fix g ∈ G, and write g = gi1 · · · gim. By the universal property of path
metrics (Remark 3.1.4), we have

Dξ(o, g(o)) .× dG(id, g).

Now we apply Observation 6.2.10 to get

b^((1/2)kgk) .× dG(id, g).

Letting C > 0 be the implied constant, we have

(11.2.6)    NX,G(ρ) ≥ NG(b^(ρ/2)/C)  ∀ρ > 0
(cf. Remark 8.1.3). In particular, by (8.1.2)

δG = lim_{ρ→∞} log_b NX,G(ρ) / ρ ≥ lim_{R→∞} log_b NG(R) / (2 log_b(R)) = αG / 2.
To demonstrate the final assertion of Theorem 11.2.6, suppose that equality holds
in (11.2.5) and that δG < ∞. Then by Theorem 11.2.5, G is virtually nilpotent. Combining (11.2.6) with (11.2.2) and then plugging into (8.1.1) gives us that
Σδ (G) = ∞, completing the proof.
11.2.3. Examples with explicit Poincaré exponents. Theorem 11.2.6
raises a natural question: do the exponents allowed by this theorem actually occur
as the Poincaré exponent of some parabolic group? More precisely, given a finitely
generated abstract group Γ and a number δ ≥ αΓ /2, does there exist a hyperbolic metric space X and an injective homomorphism Φ : Γ → Isom(X) such that
G = Φ(Γ) is a parabolic group satisfying δG = δ? If δ = αΓ /2, then the problem
appears to be difficult; cf. Remark 11.2.12. However, we can provide a complete
answer when δ > αΓ /2 by embedding Γ into Isom(B) and then using Poincaré
extension to get an embedding into Isom(E∞ ). Specifically, we have the following:
Theorem 11.2.11. Let Γ be a virtually nilpotent group, and let α = αΓ be the
growth rate of NΓ . Then for all δ > αΓ /2, there exists an injective homomorphism
Φ : Γ → Isom(B) such that
δ(Φ(Γ)) = δ.
Moreover, Φ(Γ) may be taken to be either of convergence type or of divergence type.
Remark 11.2.12. Theorem 11.2.11 raises the question of whether there exists
an injective homomorphism Φ : Γ → Isom(B) such that
(11.2.7)
δ(Φ(Γ)) = αΓ /2.
It is readily computed that if the map γ 7→ Φ(γ)(0) is bi-Lipschitz, then (11.2.7)
holds. In particular, if Γ = Zd for some d ∈ N, then such a Φ is given by
Φ(n)(x) = x + (n, 0). By contrast, if Γ is a virtually nilpotent group which is
not virtually abelian, then it is known [59, Theorem 1.3] that there is no quasi-isometric embedding ι : Γ → B (see also [145, Theorem A] for an earlier version
of this result which applied to nilpotent Lie groups). In particular, there is no homomorphism Φ : Γ → Isom(B) such that γ 7→ Φ(γ)(0) is a bi-Lipschitz embedding.
So this approach of constructing an injective homomorphism Φ satisfying (11.2.7)
is doomed to failure. However, it is possible that another approach will work. We
leave the question as an open problem.
Remark 11.2.13. Letting Γ = Z in Theorem 11.2.11, we have the following
corollary: For any δ > 1/2, there exists an isometry gδ ∈ Isom(B) such that the
cyclic group Gδ = (gδ)^Z satisfies δ(Gδ) = δ, and may be taken to be either of
convergence type or of divergence type. The isometries (gδ)_{δ>1/2} exhibit “intermediate” behavior between the isometry g_{1/2}(x) = x + e1 (which has Poincaré
exponent 1/2 as noted above) and the isometries described in the Edelstein-type
examples 11.1.12, 11.1.14, and 11.1.18: since δ > 1/2, the sequence (gδ^n(0))_{n=1}^∞
converges to infinity much more slowly than the sequence (g_{1/2}^n(0))_{n=1}^∞, but
since δ < ∞, the sequence converges faster than in Example 11.1.18, not to mention
Examples 11.1.12 and 11.1.14, where the sequence (gδ^n(0))_{n=1}^∞ does not converge
to infinity at all (although it converges along a subsequence).
Remark 11.2.14. Theorem 11.2.11 leaves open the question of whether there
is a homomorphism Φ : Γ → Isom(B) such that Φ(Γ) is strongly discrete but
δ(Φ(Γ)) = ∞. If Γ = Z, this is answered affirmatively by Example 11.1.18, and if Γ
contains Z as a direct summand, i.e. Γ ≅ Z × Γ′ for some Γ′ ≤ Γ, then the answer
can be achieved by taking the direct sum of Example 11.1.18 with an arbitrary
strongly discrete homomorphism from Γ′ to Isom(B). However, the Heisenberg
group does not contain Z as a direct summand. Thus, it is unclear whether or not
there is a homomorphism from the Heisenberg group to Isom(B) whose image is
strongly discrete with infinite Poincaré exponent.
Proof of Theorem 11.2.11. We will need the following variant of the Assouad embedding theorem:
Theorem 11.2.15. Let X be a doubling metric space,² and let F : (0, ∞) →
(0, ∞) be a nondecreasing function such that

(11.2.8)    0 < α_*(F) ≤ α^*(F) < 1.
² Recall that a metric space X is doubling if there exists M > 0 such that for all x ∈ X and ρ > 0,
the ball B(x, ρ) can be covered by M balls of radius ρ/2.
Here

α_*(F) := lim inf_{λ→∞} inf_{R>0} [log F(λR) − log F(R)] / log(λ),
α^*(F) := lim sup_{λ→∞} sup_{R>0} [log F(λR) − log F(R)] / log(λ).

Then there exist d ∈ N and a map ι : X → R^d such that for all x, y ∈ X,

(11.2.9)    kι(y) − ι(x)k ≍× F(d(x, y)).
Proof. The classical Assouad embedding theorem (see e.g. [91, Theorem
12.2]) gives the special case of Theorem 11.2.15 where F (t) = tε for some 0 < ε < 1.
It is possible to modify the standard proof of the classical version in order to
accommodate more general functions F satisfying (11.2.8); however, we prefer to
prove Theorem 11.2.15 directly as a consequence of the classical version.
Fix ε ∈ (α^*(F), 1), and let

F̂(t) = t^ε inf_{s≤t} F(s)/s^ε.

The inequality ε > α^*(F) implies that F̂ ≍× F, so we may replace F by F̂ without
affecting either the hypotheses or the conclusion of the theorem. Thus, we may
without loss of generality assume that the function t 7→ F(t)/t^ε is nonincreasing.
Let G(t) = F(t)^(1/ε), so that t 7→ G(t)/t is nonincreasing. It follows that

G(t + s) ≤ G(t) + G(s),

since G(t + s) = t · G(t+s)/(t+s) + s · G(t+s)/(t+s) ≤ t · G(t)/t + s · G(s)/s.
Combining with the fact that G is nondecreasing shows that G ◦ d is a metric on
X. On the other hand, since α_*(G) = α_*(F)/ε > 0, there exists λ > 0 such that
G(λt) ≥ 2G(t) for all t > 0. It follows that the metric G ◦ d is doubling. Thus we
may apply the classical Assouad embedding theorem to the metric space (X, G ◦ d)
and the function t 7→ tε , giving a map ι : X → Rd satisfying
kι(y) − ι(x)k ≍× G^ε ◦ d(x, y) = F(d(x, y)).
This completes the proof.
⊳
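The scaling exponents α_*(F) and α^*(F) in (11.2.8) can be probed numerically. The following sketch (ours, not from the text; the helper name `scaling_exponents` is hypothetical) approximates the inf/sup over a finite grid of λ and R: for a pure power law F(t) = t^a both exponents equal a, while a sum of two power laws is pinched between the two exponents.

```python
import math

def scaling_exponents(F, lambdas, Rs):
    """Crude finite-sample stand-in for the exponents in (11.2.8):
    the quotients [log F(lam*R) - log F(R)] / log(lam) over a grid."""
    quots = [(math.log(F(lam * R)) - math.log(F(R))) / math.log(lam)
             for lam in lambdas for R in Rs]
    return min(quots), max(quots)

lams = [10.0, 100.0, 1000.0]
Rs = [10.0 ** k for k in range(-3, 4)]

# Pure power law: the quotient is identically the exponent.
lo, hi = scaling_exponents(lambda t: t ** 0.5, lams, Rs)
print(lo, hi)  # both 0.5 up to rounding

# Mixture of two power laws: pinched inside [0.3, 0.7].
lo2, hi2 = scaling_exponents(lambda t: t ** 0.3 + t ** 0.7, lams, Rs)
print(lo2, hi2)
```

The finite grid only bounds the true liminf/limsup, but it already shows why condition (11.2.8) is a two-sided power-scaling constraint on F.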
Now let Γ be a virtually nilpotent group, and let dΓ be a Cayley metric on Γ.
Lemma 11.2.16. (Γ, dΓ ) is a doubling metric space.
Proof. For all γ ∈ Γ and R > 0, we have by Corollary 11.2.2

#(B(γ, R)) = #(γ(B(e, R))) = #(B(e, R)) ≍× (1 ∨ R)^αΓ.
Now let S ⊆ B(γ, 2R) be a maximal R-separated set. Then {B(β, R) : β ∈ S} is a
cover of B(γ, 2R). On the other hand, {B(β, R/2) : β ∈ S} is a disjoint collection
of subsets of B(γ, 3R), so

Σ_{β∈S} #(B(β, R/2)) ≤ #(B(γ, 3R)),

and thus

#(S) ≤ #(B(γ, 3R)) / min_{β∈S} #(B(β, R/2)) ≍× (1 ∨ 3R)^αΓ / (1 ∨ R/2)^αΓ ≍× 1,
i.e. #(S) ≤ M for some M independent of γ and R. But then B(γ, 2R) can be
covered by M balls of radius R, proving that Γ is doubling.
⊳
Now let f : [1, ∞) → [1, ∞) be a continuous increasing function satisfying

(11.2.10)    α < α_*(f) ≤ α^*(f) < ∞

and f(1) = 1. Let

F(R) = f^(−1)(R^α) for R ≥ 1,  and  F(R) = R^(1/2) for R ≤ 1.

Then

0 < α_*(F) = min(1/2, α/α^*(f)) ≤ α^*(F) = max(1/2, α/α_*(f)) < 1.
Thus F satisfies the hypotheses of Theorem 11.2.15, so there exists an embedding
ι : Γ → H satisfying (11.2.9). By [59, Proposition 4.4], we may without loss of
generality assume that ι(γ) = Φ(γ)(0) for some homomorphism Φ : Γ → Isom(B).
Now for all R ≥ 1,

NB,Φ(Γ)(R) = #{γ ∈ Γ : Dξ(o, Φ(γ)(o)) ≤ R}
           = #{γ ∈ Γ : F(dΓ(e, γ)) ≤ R}
           = NΓ(F^(−1)(R)) ≍× (F^(−1)(R))^α = f(R).
In particular, given δ > αΓ/2 and k ∈ {0, 2}, we can let

f(R) = R^(2δ) (1 + log(R))^(−k).

It is readily verified that α < α_*(f) = α^*(f) = 2δ < ∞, so in particular (11.2.10)
holds. By (8.1.2), δ(Φ(Γ)) = δ, and by (8.1.1), Φ(Γ) is of divergence type if and
only if k = 0.
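The role of the logarithmic factor mirrors the classical series dichotomy Σ 1/n = ∞ versus Σ 1/(n log² n) < ∞, which is what separates the k = 0 (divergence type) and k = 2 (convergence type) choices at the critical exponent. A quick numerical illustration (ours, not from the text):

```python
import math

# With f(R) = R^(2*delta) the Poincaré series at s = delta behaves like
# the harmonic series, while the extra (1 + log R)^(-2) factor (k = 2)
# behaves like sum 1/(n log^2 n), which converges.
harmonic = sum(1.0 / n for n in range(2, 10 ** 6))
logged = sum(1.0 / (n * math.log(n) ** 2) for n in range(2, 10 ** 6))
print(harmonic, logged)  # harmonic grows with the cutoff; logged stays bounded
```

Raising the cutoff makes `harmonic` grow like the logarithm of the cutoff, while `logged` approaches a finite limit.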
Remark 11.2.17. The above proof shows a little more than what was promised;
namely, it has been shown that
(i) for every function F : (0, ∞) → (0, ∞) satisfying (11.2.8), there exists an
injective homomorphism Φ : Γ → Isom(B) such that kΦ(γ)(0)k ≍× F(d(e, γ))
for all γ ∈ Γ, and that
(ii) for every function f : [1, ∞) → [1, ∞) satisfying (11.2.10), there exists a
group G ≤ Isom(B) isomorphic to Γ such that NB,G (R) ≍× f (R) for all
R ≥ 1.
The latter will be of particular interest in Chapter 17, in which the orbital counting
function of a parabolic subgroup of a geometrically finite group is shown to have
implications for the geometry of the Patterson–Sullivan measure via the Global
Measure Formula (Theorem 17.2.2).
We conclude this chapter by giving two examples of how the Poincaré exponents
of infinitely generated parabolic groups behave somewhat erratically.
Example 11.2.18 (A class of infinitely generated parabolic torsion groups).
Let (bn)_{n=1}^∞ be an increasing sequence of positive real numbers, and for each n ∈ N,
let gn ∈ Isom(B) be the reflection across the hyperplane Hn := {x : xn = bn}.
Then G := hgn : n ∈ Ni is a strongly discrete subgroup of Isom(B) consisting
of only torsion elements. It follows that its Poincaré extension Ĝ is a strongly
discrete parabolic subgroup of Isom(H∞) with no parabolic element. To compute
the Poincaré exponent of Ĝ, we use (11.1.1):
Σs(G) = Σ_{g∈G} (1 ∨ kg(0)k)^(−2s)
      = Σ_{S⊆N finite} (1 ∨ k(Π_{n∈S} gn)(0)k)^(−2s)
      = Σ_{S⊆N finite} (1 ∨ Σ_{n∈S} (2bn)^2)^(−s).
The special case bn = n gives

Σs(G) ≥ Σ_{S⊆{1,...,N}} (Σ_{n=1}^N (2n)^2)^(−s) ≍× 2^N N^(−3s) −→ ∞ (N → ∞)  ∀s ≥ 0

and thus δ = ∞, while the special case bn = n^n gives
Σs(G) ≤ Σ_{n=1}^∞ Σ_{S⊆N, max(S)=n} (n^n)^(−2s) = Σ_{n=1}^∞ 2^(n−1) (n^n)^(−2s) < ∞  ∀s > 0
and thus δ = 0. Intermediate values of δ can be achieved either by setting
bn = 2^(n/(2δ)), which gives a group of divergence type:

Σs(G) ≍× Σ_{S⊆N finite} (1 ∨ max_{n∈S} (2bn)^2)^(−s) ≍× Σ_{n=1}^∞ Σ_{S⊆N, max(S)=n} bn^(−2s)
      = Σ_{n=1}^∞ 2^(n−1) 2^(−ns/δ),

which is = ∞ for s ≤ δ and < ∞ for s > δ,
or by setting bn = 2^(n/(2δ)) n^(1/δ), which gives a group of convergence type:

Σs(G) ≍× Σ_{n=1}^∞ Σ_{S⊆N, max(S)=n} bn^(−2s) = Σ_{n=1}^∞ 2^(n−1) 2^(−ns/δ) n^(−2s/δ),

which is = ∞ for s < δ and < ∞ for s ≥ δ.
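The geometric-series dichotomy behind the case bn = 2^(n/(2δ)) is easy to check numerically; in the sketch below (ours, not from the text) the n-th term 2^(n−1) · bn^(−2s) = (1/2) · 2^(n(1−s/δ)) gives a geometric series with ratio 2^(1−s/δ).

```python
def partial_sum(s, delta, N):
    """Partial sums of sum_n 2^(n-1) * b_n^(-2s) with b_n = 2^(n/(2*delta))."""
    return sum(2.0 ** (n - 1) * 2.0 ** (-n * s / delta) for n in range(1, N + 1))

delta = 1.0
print(partial_sum(2.0, delta, 200))  # s > delta: bounded by the geometric tail
print(partial_sum(1.0, delta, 200))  # s = delta: every term equals 1/2, so sum = N/2
print(partial_sum(0.5, delta, 200))  # s < delta: blows up geometrically
```

The borderline case s = δ diverges linearly in the cutoff, which is exactly the divergence-type behavior claimed above.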
Remark 11.2.19. In Example 11.2.18, for each n the hyperplane Hn is a totally
geodesic subset of E∞ which is invariant under G. However, the intersection ∩_n Hn
is trivial, since no point x ∈ bord E∞ \ {∞} can satisfy xn = bn for all n. In
particular, G does not act irreducibly on any nontrivial totally geodesic set S ⊆
bord H∞.
Example 11.2.20 (A torsion-free infinitely generated parabolic group with finite Poincaré exponent). Let Γ = {n/2^k : n ∈ Z, k ≥ 0}. Then Γ is an infinitely
generated abelian group. For each k ∈ N let Bk = k^k, and define an action
Φ : Γ → Isom(ℓ2(N; C)) by the following formula:

Φ(q)(x0, x) = (x0 + q, (e^(2πi 2^k q) (xk − Bk) + Bk)_k),

i.e. Φ(q) is the direct sum of the Edelstein-type example (cf. Definition 11.1.11)
defined by the sequences ak = 2^k q, bk = Bk with the map R ∋ x0 7→ x0 + q. It is
readily verified that Φ is a homomorphism (cf. (11.1.5)). We have

kΦ(q)(0)k^2 = |q|^2 + Σ_k Bk^2 |e^(2πi 2^k q) − 1|^2
            ≍× |q|^2 + Σ_k Bk^2 d(2^k q, Z)^2
            ≍× max(|q|^2, B_{kq}^2),

where kq is the largest integer such that 2^(kq) q ∉ Z. Equivalently, kq is the unique
integer such that q = n/2^(kq+1) for some odd n.
To compute the Poincaré exponent of G = Φ(Γ), fix s > 1/2 and observe that

Σs(G) = Σ_{g∈G} (1 ∨ kg(0)k)^(−2s)
      = Σ_{q∈Γ} (|q| ∨ B_{kq})^(−2s)
      ≤ Σ_{k∈N} Σ_{n∈Z} (|n|/2^(k+1) ∨ Bk)^(−2s)
      ≍× Σ_{k∈N} ∫_0^∞ (x/2^(k+1) ∨ Bk)^(−2s) dx
      = Σ_{k∈N} [ ∫_0^(2^(k+1) Bk) Bk^(−2s) dx + ∫_(2^(k+1) Bk)^∞ (x/2^(k+1))^(−2s) dx ]
      = Σ_{k∈N} [ 2^(k+1) Bk^(1−2s) + (2^(k+1))^(2s) [x^(1−2s)/(1 − 2s)]_(x=2^(k+1) Bk)^∞ ]
      = Σ_{k∈N} [ 2^(k+1) Bk^(1−2s) + (1/(2s − 1)) 2^(k+1) Bk^(1−2s) ]
      ≍× Σ_{k∈N} 2^k Bk^(1−2s) = Σ_{k∈N} 2^k (k^k)^(1−2s) < ∞.
Thus δ(G) ≤ 1/2, but Theorem 11.2.6 guarantees that δ(G) ≥ δ(Φ(Z)) ≥ 1/2. So
δ(G) = 1/2.
CHAPTER 12
Geometrically finite and convex-cobounded groups
In this chapter we generalize the notion of geometrically finite groups to regularly geodesic strongly hyperbolic metric spaces, mainly CAT(-1) spaces. We
generalize finite-dimensional theorems such as the Beardon–Maskit theorem [21]
and Tukia’s isomorphism theorem [170, Theorem 3.3].
Standing Assumptions 12.0.21. Throughout this chapter, we assume that
(I) X is regularly geodesic and strongly hyperbolic, and that
(II) G ≤ Isom(X) is strongly discrete.
Recall that for x, y ∈ bord X, [x, y] denotes the geodesic segment, ray, or line
connecting x and y.
Note that we do not assume that G is nonelementary.
12.1. Some geometric shapes
To define geometrically finite groups requires three geometric concepts. The
first, the quasiconvex core Co of the group G, has already been introduced in Section
7.5. The remaining two concepts are horoballs and Dirichlet domains.
12.1.1. Horoballs.
Definition 12.1.1. A horoball is a set of the form
Hξ,t = {x ∈ X : B ξ (o, x) > t},
where ξ ∈ ∂X and t ∈ R. The point ξ is called the center of a horoball Hξ,t , and
will be denoted center(Hξ,t ). Note that for any horoball H, we have
H ∩ ∂X = {center(H)}.
(Cf. Figure 12.1.1.)
Lemma 12.1.2. For every horoball H ⊆ X, we have
Diam(H) ≍× b−d(o,H) .
Figure 12.1.1. Two pictures of the same horoball, in the ball
model and half-space model, respectively.
Proof. Write H = Hξ,t for some ξ ∈ ∂X, t ∈ R. If t < 0, then o ∈ H, so
d(o, H) = 0 and Diam(H) = 1. So suppose t ≥ 0. Then the intersection [o, ξ] ∩ ∂H
consists of a single point x0 satisfying kx0 k = t. It follows that d(o, H) ≤ kx0 k = t
and Diam(H) ≥ D(x0 , x0 ) = b−t . For the reverse directions, fix x ∈ H. Since
B_ξ(o, x) > t, we have kxk > t and

D(x, ξ) = b^(−hx|ξi_o) = b^(−[B_ξ(o,x) + ho|ξi_x]) ≤ b^(−B_ξ(o,x)) < b^(−t).

It follows that Diam(H) ≍× b^(−t) = b^(−d(o,H)).
Lemma 12.1.3 (Cf. Figure 12.1.2). Suppose that H is a horoball not containing
o. Then

Diam(H \ B(o, ρ)) ≤ 2e^(−(1/2)ρ).
Proof. Write H = Hξ,t for some ξ ∈ ∂X and t ∈ R; we have t ≥ 0 since
o ∉ H. Then for all x ∈ H \ B(o, ρ),

hx|ξi_o = (1/2)[kxk + B_ξ(o, x)] ≥ (1/2)[ρ + t] ≥ (1/2)ρ,

and so D(x, ξ) ≤ e^(−(1/2)ρ).
12.1.2. Dirichlet domains.
Definition 12.1.4. Let G be a group acting by isometries on a metric space
X. Fix z ∈ X. We define the Dirichlet domain for G centered at z by
(12.1.1)
Dz := {x : d(z, x) ≤ d(z, g(x)) ∀g ∈ G} = {x : B x (z, g −1 (z)) ≤ 0 ∀g ∈ G}.
Figure 12.1.2. The set H \ B(o, ρ) decreases in diameter as ρ → ∞.
The idea is that the Dirichlet domain is a “tile” whose iterates under G tile the
space X. This is made explicit in the following proposition:
Proposition 12.1.5. For all z ∈ X, G(Dz ) = X.
Proof. Fix x ∈ X. Since the group G is strongly discrete, the minimum
min_{g∈G} d(x, g(z)) is attained at some g ∈ G. Now for every h ∈ G, we have
d(x, g(z)) ≤ d(x, h(z)). Replacing h by gh, it follows that for every h ∈ G we have
d(x, g(z)) ≤ d(x, gh(z)), which is the same as d(g^(−1)(x), z) ≤ d(g^(−1)(x), h(z)). Thus
g^(−1)(x) ∈ Dz, i.e. x ∈ g(Dz).
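In the simplest case, G = Z acting on R by translations with z = 0, the Dirichlet domain is D_0 = [−1/2, 1/2], and Proposition 12.1.5 reduces to the fact that its integer translates cover R. A toy verification (ours, not from the text; the function name is hypothetical, and the group is truncated to finitely many elements):

```python
def in_dirichlet(x, z=0.0, shifts=range(-50, 51)):
    """Membership in D_z for G = Z acting on R by x -> x + n,
    tested against finitely many group elements."""
    return all(abs(x - z) <= abs(x - (z + n)) for n in shifts)

# As in the proof of Proposition 12.1.5: pick the g minimizing d(x, g(z))
# (here, the nearest integer n), and check that g^(-1)(x) lies in D_0.
for x in [i / 7.0 for i in range(-70, 71)]:
    n = round(x)
    assert in_dirichlet(x - n)
print("every sample point lies in some translate n + D_0")
```

The same nearest-orbit-point argument is exactly what makes the proof work in general, with strong discreteness guaranteeing that the minimum is attained.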
Corollary 12.1.6. Let S ⊆ X be a G-invariant set. The following are equiv-
alent:
(A) There exists a bounded set S0 ⊆ X such that S ⊆ G(S0 ).
(B) The set S ∩ Dz is bounded.
Proof of (A) ⇒ (B). Given x ∈ S ∩ Dz , fix g ∈ G with x ∈ g(S0 ). Then
d(z, x) ≤ d(z, g −1 (x)) ≍+ 0, i.e. x is in a bounded set.
Proof of (B) ⇒ (A). The set S0 = S ∩ Dz is such a set. Specifically, given
x ∈ S by Proposition 12.1.5 there exists g ∈ G such that x ∈ g(Dz ). Since S is
G-invariant, g −1 (x) ∈ S ∩ Dz = S0 .
Remark 12.1.7. It is tempting to define the Dirichlet domain of G centered at
z to be the set
Dz∗ := {x : d(z, x) < d(z, g(x)) ∀g ∈ G such that g(z) ≠ z},
Figure 12.1.3. The Cayley graph of Γ = F2(Z) = hγ1, γ2i. The
closure of the naive Dirichlet domain Dz∗ is the geodesic segment
[e, γ1]. Its orbit G(Dz∗) is the union of all geodesic segments
which appear as horizontal lines in this picture.
and then to try to prove that G(Dz∗ ) = X. However, there is a simple example
which disproves this hypothesis. Let X be the Cayley graph of Γ = F2 (Z) =
hγ1 , γ2 i, let Φ : Γ → Isom(X) be the natural action, and let G = Φ(Γ). If we let
z = ((e, γ1 ), 1/2), then Dz∗ = {((e, γ1 ), t) : t ∈ (0, 1)}, and
G(Dz∗ ) = {((g, gγ1 ), t) : g ∈ Γ, t ∈ [0, 1]}.
This set excludes all elements of the form ((g, gγ2 ), t), t ∈ (0, 1). (Cf. Figure
12.1.3.)
Remark 12.1.8. The assumption that G is strongly discrete is crucial for
Proposition 12.1.5. In general, tiling Hilbert spaces turns out to be a very subtle problem and has been studied (among others) by Klee [113, 114], Fonf and
Lindenstrauss [75] and most recently by Preiss [146].
12.2. Cobounded and convex-cobounded groups
Before studying geometrically finite groups, we begin by considering the simpler
case of cobounded and convex-cobounded groups. The theory of these groups will
provide motivation for the theory of geometrically finite groups.
Definition 12.2.1. Let G be a group acting by isometries on a metric space
X. We say that G is cobounded if there exists σ > 0 such that X = G(B(o, σ)).
It has been a long-standing conjecture to prove or disprove the existence of
cobounded subgroups of Isom(H∞ ) that are discrete in an appropriate sense. To
the best of our knowledge, this conjecture was first stated explicitly by D. P. Sullivan
in his IHÉS seminar on conformal dynamics [164, p.17]. We give here two partial
answers to this question, both negative. Our first partial answer is as follows:
Proposition 12.2.2. A strongly discrete subgroup of Isom(H∞ ) cannot be
cobounded.
Proof. Let us work in the ball model B∞ . Suppose that G ≤ Isom(B∞ ) is a
strongly discrete cobounded group, and choose σ > 0 so that B∞ = G(BB (0, σ)).
Since G is strongly discrete, we have #(F ) < ∞ where
F := {x ∈ G(0) : dB (0, x) ≤ 2σ + 1}.
Choose v ∈ ∂B∞ such that BE (v, z) = 0 for all z ∈ F , and let x = tv, where
0 < t < 1 is chosen to make
dB (0, x) = σ + 1.
Since x ∈ B, we have x ∈ BB (y, σ) for some y ∈ G(0). But then d(0, y) ≤ 2σ + 1,
which implies y ∈ F , and thus BE (x, y) = 0. On the other hand
dB (x, y) ≤ σ < σ + 1 = dB (0, x),
which contradicts (2.5.1).
Proposition 12.2.2 leaves open the question of whether there exist cobounded
subgroups of Isom(H) which satisfy a weaker discreteness condition than strong
discreteness. One way that we could try to construct such a group would be to
take the direct limit of a sequence of cobounded subgroups of Isom(Hd) as d → ∞.
The most promising candidate for such a direct limit has been the direct limit
of a sequence of arithmetic cocompact subgroups of Isom(Hd ). (See e.g. [23] for
the definition of an arithmetic subgroup of Isom(Hd ).) Nevertheless, such innocent
hopes are dashed by the following result:
Proposition 12.2.3. If Gd ≤ Isom(Hd ) is a sequence of arithmetic subgroups,
then the codiameter of Gd tends to infinity, that is, there is no σ > 0 such that
Gd (B(o, σ)) = Hd for every d.
Proof. It is known [23, Corollary 3.3] that the covolume of Gd tends to infinity
superexponentially fast as d → ∞. On the other hand, the volume of B(o, σ) in Hd
tends to zero superexponentially fast. Indeed, it is equal to

(2π^(d/2)/Γ(d/2)) ∫_0^σ sinh^(d−1)(r) dr ≍× π^(d/2) σ^(d−1) / Γ(d/2).

Thus, for sufficiently large d, the volume of B(o, σ) is less than the covolume of Gd,
which implies that Gd(B(o, σ)) ⊊ Hd.
Remark 12.2.4. Proposition 12.2.3 strongly suggests, but does not prove, that
it is impossible to get a cobounded subgroup of Isom(H∞ ) as the direct limit of
arithmetic subgroups of Isom(Hd ). One might ask whether one can get a cobounded
subgroup of Isom(H∞ ) as the direct limit of non-arithmetic subgroups of Isom(Hd );
the analogous known lower bounds on volume [1, 110] are insufficient to disprove
this. However, this approach seems unlikely to work, for two reasons: first of all,
the much worse lower bounds for the covolumes of non-arithmetic groups may just
be a failure of technique; there are no known examples of non-arithmetic groups
with volume lower than the bound which holds for arithmetic groups, and it is
conjectured that there are no such examples [23, p.9]. Second of all, even if such
groups exist, they are of no use to the problem unless an entire sequence of groups
may be found, each one of which is a subgroup of all its higher dimensional analogues. Such structure exists in the arithmetic case but it is unclear whether or not
it will also exist in the non-arithmetic case.
From Propositions 12.2.2 and 12.2.3, we see that the theory of cobounded
groups acting on H∞ will be rather limited. Consequently we focus on the weaker
condition of convex-coboundedness.
For the remainder of this chapter, we return to our standing assumption that
the group G is strongly discrete.
Definition 12.2.5. We say that G ≤ Isom(X) is convex-cobounded if its re-
striction to the quasiconvex core Co is cobounded, or equivalently if there exists
σ > 0 such that
Co ⊆ G(B(o, σ)).
We remark that whether or not G is convex-cobounded is independent of the
base point o (cf. Proposition 7.5.9).
From Proposition 7.5.3 we immediately deduce the following:
Observation 12.2.6. If X is an algebraic hyperbolic space and if G is nonelementary, then the following are equivalent:
(A) G is convex-cobounded.
(B) There exists σ > 0 such that CΛ ⊆ G(B(o, σ)).
In particular, when X is finite-dimensional, we see that the notion of convex-coboundedness coincides with the standard notion of convex-cocompactness.
12.2.1. Characterizations of convex-coboundedness. The property of
convex-coboundedness can be characterized in terms of the limit set. Precisely:
Theorem 12.2.7. The following are equivalent:
(A) G is convex-cobounded.
(B) G is of compact type and any of the following hold:
(B1) Λ(G) = Λur,σ (G) for some σ > 0.
(B2) Λ(G) = Λur (G).
(B3) Λ(G) = Λr (G).
(B4) Λ(G) = Λh (G).
Remark 12.2.8. (B1)-(B4) should be regarded as equivalent conditions which
also assume that G is of compact type, so that there are a total of 5 equivalent
conditions in this theorem.
The implications (B1) ⇒ (B2) ⇒ (B3) ⇒ (B4) follow immediately from the
definitions. We therefore proceed to prove (A) ⇒ (B1) and (B4) ⇒ (A).
Proof of (A) ⇒ (B1). The proof consists of two parts: showing that Λ(G) =
Λur,σ (G) for some σ > 0, and showing that Λ(G) is compact.
Proof that Λ(G) = Λur,σ (G) for some σ > 0. Fix ξ ∈ Λ(G), so that
[o, ξ] ⊆ Co ⊆ G(B(o, σ)).
For each n ∈ N, let xn = [o, ξ]_n, so that xn → ξ and d(xn, xn+1) = 1. Then for
each n, there exists gn ∈ G satisfying d(gn(o), xn) ≤ σ. Then

ho|ξi_{gn(o)} ≤ ho|ξi_{xn} + σ = σ;

moreover,

d(gn(o), gn+1(o)) ≤ d(xn, xn+1) + 2σ = 2σ + 1.

Thus the convergence gn(o) → ξ is (2σ + 1)-uniformly radial, so ξ ∈ Λur,2σ+1(G).
⊳
Proof that G is of compact type. By contradiction, suppose that G is
not of compact type. Then Λ is a complete metric space which is not compact,
which implies that there exist ε > 0 and an infinite ε-separated set I ⊆ Λ. Fix ρ > 0
large to be determined. For each ξ ∈ I, let xξ = [o, ξ]ρ . Then xξ ∈ Co ⊆ G(B(o, σ)),
so there exists gξ ∈ G such that d(gξ (o), xξ ) ≤ σ.
Claim 12.2.9. For ρ sufficiently large, the function ξ 7→ gξ (o) is injective.
Proof. Fix ξ1, ξ2 ∈ I distinct, and suppose gξ1(o) = gξ2(o). Then (cf. Figure
12.2.1) we have that

hξ1|ξ2i_o ≥ hx1|x2i_o = (1/2)[2ρ − d(x1, x2)] ≥ ρ − σ.

On the other hand, since I is ε-separated we have hξ1|ξ2i_o ≤ −log(ε). This is a
contradiction if ρ > σ − log(ε).
⊳
Figure 12.2.1. If gξ1(o) = gξ2(o), then ξ1 and ξ2 must be close
to each other.
The strong discreteness of G therefore implies
#(I) ≤ #{g ∈ G : kgk ≤ ρ + σ} < ∞,
which is a contradiction since #(I) = ∞ by assumption.
⊳
This completes the proof of (A) ⇒ (B1).
Proof of (B4) ⇒ (A). We use the notation (7.5.2).
Lemma 12.2.10. Λh ∩ Do′ = ∅.
(Lemma 12.2.10 is true even without assuming (B4); this fact will be used in
the proof of Theorem 12.4.5 below.)
Proof. By contradiction fix ξ ∈ Λh ∩ Do′. Since ξ ∈ (Do)′, (12.1.1) gives
B_ξ(o, g(o)) ≤ 0 for all g ∈ G (cf. Lemma 3.4.22). But then ξ ∉ Λh, since by
definition ξ ∈ Λh if and only if there exists a sequence (gn)_{n=1}^∞ satisfying
B_ξ(o, gn(o)) → +∞.
⊳
Now by (B4) and Observation 7.5.12, we have (Co ∩ Do)′ ⊆ Λ ∩ Do′ = Λh ∩ Do′,
and so (Co ∩ Do)′ = ∅. By (C) of Proposition 7.7.2, we get that Co ∩ Do is bounded,
and Corollary 12.1.6 finishes the proof.
The proof of Theorem 12.2.7 is now complete.
Remark 12.2.11. (B4) ⇒ (A) may also be deduced as a consequence of Theorem 12.4.5(B3)⇒(A) below; cf. Remark 12.4.11. However, the above proof is much
shorter. Alternatively, the above proof may be viewed as the “skeleton” of the proof
of Theorem 12.4.5(B3)⇒(A), which is made more complicated by the presence of
parabolic points.
12.2.2. Consequences of convex-coboundedness. The notion of convex-coboundedness also has several important consequences. In the following theorem,
G is endowed with an arbitrary Cayley metric (cf. Example 3.1.2).
Theorem 12.2.12 (Cf. [39, Proposition I.8.19]). Suppose that G is convex-cobounded. Then:
(i) G is finitely generated.
(ii) The orbit map g 7→ g(o) is a quasi-isometric embedding (cf. Definition
3.3.9).
(iii) δG < ∞.
We shall prove Theorem 12.2.12 as a corollary of a similar statement about
geometrically finite groups; cf. Theorem 12.4.14 and Observation 12.4.15 below.
For now, we list some corollaries of Theorem 12.2.12.
Corollary 12.2.13. Suppose that G is convex-cobounded. Then G is word-hyperbolic, i.e. G is a hyperbolic metric space with respect to any Cayley metric.
Proof. This follows from Theorem 12.2.12(ii) and Theorem 3.3.10.
Corollary 12.2.14. Suppose that G is convex-cobounded. Then
dimH (Λ) = δ < ∞.
Proof. This follows from Theorem 12.2.12(iii), Theorem 1.2.1, and Theorem
12.2.7.
12.3. Bounded parabolic points
The difference between groups that are geometrically finite and those that
are convex-cobounded is the potential presence of bounded parabolic points in the
former. In the Standard Case, a parabolic fixed point ξ in the limit set of a geometrically finite group G, is said to be bounded if (Λ \ {ξ})/ Stab(G; ξ) is compact
[34, p.272]. We will have to modify this definition a bit to make it work for arbitrary hyperbolic metric spaces, but we show that in the usual case, our definition
coincides with the standard one (Remark 12.3.7).
Fix ξ ∈ ∂X. Recall that Eξ denotes the set bord X \ {ξ}.
Definition 12.3.1. A set S ⊆ Eξ is ξ-bounded if ξ ∉ S̄ (the closure of S).
The motivation for this definition is that if X = Hd and ξ = ∞, then ξ-bounded
sets are exactly those which are bounded in the Euclidean metric. Actually, this
can be generalized as follows:
Observation 12.3.2. Fix S ⊆ Eξ. The following are equivalent:
(A) S is ξ-bounded.
(B) hx|ξi_o ≍+ 0 for all x ∈ S.
(C) Dξ(o, x) .× 1 for all x ∈ S.
(D) S has bounded diameter in the Dξ metametric.
Condition (D) motivates the terminology “ξ-bounded”.
Proof of Observation 12.3.2. (A) ⇔ (B) follows from the definition of the
topology on bord X, (B) ⇔ (C) follows from (3.6.6), and (C) ⇔ (D) is obvious.
Now fix G ≤ Isom(X), and let Gξ denote the stabilizer of ξ relative to G.
Recall (Definition 6.2.7) that ξ is said to be a parabolic fixed point of G if Gξ is a
parabolic group, i.e. if Gξ (o) is unbounded and
g ∈ Gξ ⇒ g ′ (ξ) = 1.
(Here g ′ (ξ) is the dynamical derivative of g at ξ; cf. Proposition 4.2.12.)
Observation 12.3.3. If ξ is a parabolic point then ξ ∈ Λ.
Proof. This follows directly from Observation 6.2.11.
Definition 12.3.4. A parabolic point ξ ∈ Λ is a bounded parabolic point if
there exists a ξ-bounded set S ⊆ Eξ such that
(12.3.1)
G(o) ⊆ Gξ (S).
We denote the set of bounded parabolic points by Λbp .
Lemma 12.3.5. Let G ≤ Isom(X), and fix ξ ∈ ∂X. The following are equiva-
lent:
(A) ξ is a bounded parabolic point.
(B) All three of the following hold:
(BI) ξ ∈ Λ,
(BII) g ′ (ξ) = 1 ∀g ∈ Gξ , and
(BIII) there exists a ξ-bounded set S ⊆ Eξ satisfying (12.3.1).
Proof. The only thing to show is that if (B) holds, then Gξ (o) is unbounded.
By contradiction suppose otherwise. Let S be a ξ-bounded set satisfying (12.3.1).
Then for all x ∈ G(o), we have x ∈ h(S) for some h ∈ Gξ , and so
hx|ξi_o = hh^(−1)(x)|ξi_{h^(−1)(o)} ≍+ hh^(−1)(x)|ξi_o   (since Gξ(o) is bounded)
        ≍+ 0                                            (since h^(−1)(x) ∈ S).

By Observation 12.3.2, the set G(o) is ξ-bounded and so ξ ∉ Λ, contradicting
(BI).
We now prove a lemma that summarizes a few geometric properties about
bounded parabolic points.
Lemma 12.3.6. Let ξ be a parabolic limit point of G. The following are equivalent:
(A) ξ is a bounded parabolic point, i.e. there exists a ξ-bounded set S ⊆ Eξ
such that
(12.3.2)
G(o) ⊆ Gξ (S).
(B) There exists a ξ-bounded set S ⊆ Eξ ∩ ∂X such that
(12.3.3)
Λ \ {ξ} ⊆ Gξ (S).
Moreover, if H is a horoball centered at ξ satisfying G(o) ∩ H = ∅, then (A)-(B)
are moreover equivalent to the following:
(C) There exists a ξ-bounded set S ⊆ Eξ such that
(12.3.4)
Co \ H ⊆ Gξ (S).
(D) There exists ρ > 0 such that
(12.3.5)
Co ∩ ∂H ⊆ Gξ (B(o, ρ)).
Remark 12.3.7. The equivalence of conditions (A) and (B) implies that in the
Standard Case, our definition of a bounded parabolic point coincides with the usual
one.
Proof of (A) ⇒ (B). This is immediate since Λ \ {ξ} ⊆ N1,e(G(o)), where
N1,e(S) denotes the 1-thickening of S with respect to the Euclidean metametric
Dξ.
Proof of (B) ⇒ (A). If #(Λ) = 1, then G = Gξ and there is nothing to
prove. Otherwise, let η1 , η2 ∈ Λ be distinct points.
Let S be as in (12.3.3). Fix x = gx(o) ∈ G(o). Since hgx(η1)|gx(η2)i_{gx(o)} ≍+ 0,
Gromov’s inequality implies that there exists i = 1, 2 such that hgx(ηi)|ξi_x ≍+ 0.
By (12.3.3), there exists hx ∈ Gξ such that hx^(−1) gx(ηi) ∈ S. We have

hhx^(−1) gx(ηi)|ξi_o ≍+ hhx^(−1) gx(ηi)|ξi_{hx^(−1)(x)} ≍+ 0.

By Proposition 4.3.1(i), this means that o and yx := hx^(−1)(x) are both within a
bounded distance of the geodesic line [hx^(−1) gx(ηi), ξ]. Since one of these two points
must lie closer to ξ than the other, we have either

(12.3.6)    hyx|ξi_o ≍+ 0  or  ho|ξi_{yx} ≍+ 0.
By contradiction, let us suppose that there exists a sequence xn ∈ G(o) such that
Dξ (o, yxn ) → ∞. (If no such sequence exists, then for some N ∈ N the set S =
{y ∈ X : Dξ (o, y) ≤ N } is a ξ-bounded set satisfying (12.3.2).) For n sufficiently
large, the first case of (12.3.6) cannot hold, so the second case holds. It follows
that yn := yxn → ξ radially. So ξ is a radial limit point of G. In the remainder of
the proof, we show that this yields a contradiction.
By Proposition 4.3.1(i), for each n ∈ N there exists a point zn ∈ [o, ξ] satisfying

(12.3.7)    d(yn, zn) ≍+ 0.
Now let ρ be the implied constant of (12.3.7), and let δ be the implied constant
of Proposition 4.3.1(ii). Since G is strongly discrete, M := #{g ∈ G : ‖g‖ ≤
2ρ + 2δ} < ∞. Let F ⊆ G_ξ be a finite set with cardinality strictly greater than M.
By Proposition 4.3.1(ii), there exists t > 0 such that for all y ∈ [o, ξ] with y > t,
we have d(y, [h(o), ξ]) ≤ δ for all h ∈ F.
Suppose zn > t. Then for all h ∈ F , we have d(zn , [h(o), ξ]) ≤ δ. On the other
hand, h(zn ) ∈ [h(o), ξ] and B ξ (zn , h(zn )) = 0; this implies that d(zn , h(zn )) ≤ 2δ
and thus d(yn , h(yn )) ≤ 2ρ + 2δ. But yn = gn (o) for some gn ∈ G, so we have
‖g_n^{-1} h g_n‖ ≤ 2ρ + 2δ. But since #(F) > M, this contradicts the definition of M.
It follows that z_n ≤ t. But then ‖y_n‖ ≤ ‖z_n‖ + ρ ≤ t + ρ, implying that the
sequence (y_n)_{n=1}^∞ is bounded, a contradiction.
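The argument above leans repeatedly on Gromov's inequality for the Gromov product ⟨x|y⟩_o = (1/2)(d(x, o) + d(y, o) − d(x, y)). The following minimal sketch (a made-up four-point tree, not part of the text) checks the inequality numerically in a 0-hyperbolic space, where it holds with additive constant 0:

```python
# Gromov product with basepoint o: <x|y>_o = (d(x,o) + d(y,o) - d(x,y)) / 2.
# Toy 0-hyperbolic example: leaves of a star-shaped metric tree, described by
# their distances to the center: d(u, v) = h[u] + h[v] for u != v.
h = {"o": 1.0, "x": 2.0, "y": 3.0, "z": 1.5}

def d(u, v):
    return 0.0 if u == v else h[u] + h[v]

def gp(x, y, o="o"):
    return (d(x, o) + d(y, o) - d(x, y)) / 2.0

# Gromov's inequality (tree case, additive constant 0):
# <x|z>_o >= min(<x|y>_o, <y|z>_o) for all triples.
pts = ["x", "y", "z"]
for a in pts:
    for b in pts:
        for c in pts:
            assert gp(a, c) >= min(gp(a, b), gp(b, c)) - 1e-12
```

In a general hyperbolic space the same inequality holds only up to a uniform additive constant, which is what the ≍_+ notation in the proof absorbs.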
For the remainder of the proof, we fix a horoball H = Hξ,t ⊆ X disjoint from
G(o).
Proof of (A) ⇒ (C). Let S be as in (12.3.2). Fix x ∈ Co \ H. Then there
exist g_1, g_2 ∈ G with x ∈ [g_1(o), g_2(o)]. We have ⟨g_1(o)|g_2(o)⟩_x = 0, so by Gromov's
inequality there exists i = 1, 2 such that ⟨g_i(o)|ξ⟩_x ≍_+ 0. By (3.6.6), we have
D_{ξ,x}(x, g_i(o)) ≍_× 1, and combining with (4.2.6) gives
D_ξ(x, g_i(o)) ≍_× e^{B_ξ(o,x)} ≤ e^t ≍_{×,H} 1.
Now by (12.3.2), there exists h ∈ Gξ such that h−1 (gi (o)) ∈ S. Then by Observation
6.2.9,
D_ξ(o, h^{-1}(x)) ≤ D_ξ(o, h^{-1}(g_i(o))) + D_ξ(x, g_i(o)) ≲_× 1.
Thus h−1 (x) lies in some ξ-bounded set which is independent of x.
Proof of (C) ⇒ (D). Let S be a ξ-bounded set satisfying (12.3.4). Then for
all x ∈ S ∩ ∂H, by (h) of Proposition 3.3.3 we have
‖x‖ = 2⟨x|ξ⟩_o − B_ξ(o, x) ≍_{+,H} 0,
since ⟨x|ξ⟩_o ≍_+ 0 (because x ∈ S) and B_ξ(o, x) = t (because x ∈ ∂H).

Figure 12.3.1. By moving x close to o with respect to the d
metric, h^{-1} also moves g(o) close to o with respect to the D_ξ
metametric.
Thus S ∩ ∂H ⊆ B(o, ρ) for sufficiently large ρ. Applying Gξ demonstrates (12.3.5).
Proof of (D) ⇒ (A). Let ρ be as in (12.3.5), and fix g ∈ G. Since by assumption
G(o) ∩ H = ∅, we have B_ξ(o, g(o)) ≤ t. Let x = [g(o), ξ]_{t−B_ξ(o,g(o))}, so
that x ∈ [g(o), ξ] ∩ ∂H (cf. Figure 12.3.1). By (12.3.5), there exists h ∈ G_ξ such
that x ∈ B(h(o), ρ). Then
⟨h^{-1}g(o)|ξ⟩_o = ⟨g(o)|ξ⟩_{h(o)} ≤ ⟨g(o)|ξ⟩_x + d(h(o), x)
 = d(h(o), x)   (since x ∈ [g(o), ξ])
 ≤ ρ.
This demonstrates that g(o) ∈ h(S) for some ξ-bounded set S.
Remark 12.3.8. The proof of (B) ⇒ (A) given above shows a little more than
asked for, namely that a parabolic point of a strongly discrete group cannot also
be a radial limit point.
It will also be useful to rephrase the above equivalent conditions in terms of a
Dirichlet domain of Gξ . Indeed, letting Do (Gξ ) denote such a Dirichlet domain, we
have the following analogue of Corollary 12.1.6:
Lemma 12.3.9. Let ξ be a parabolic point of G, and let S ⊆ Eξ be a Gξ -invariant
set. The following are equivalent:
(A) There exists a ξ-bounded set S0 ⊆ Eξ such that S ⊆ Gξ (S0 ).
(B) The set S ∩ Do (Gξ ) is ξ-bounded.
Proof. We first observe that for all x ∈ E_ξ and h ∈ G_ξ, (g) of Proposition
3.3.3 gives
⟨x|ξ⟩_o − ⟨x|ξ⟩_{h(o)} = (1/2)[B_x(o, h(o)) + B_ξ(o, h(o))] = (1/2) B_x(o, h(o)).
In particular
x ∈ D_o(G_ξ) ⇔ ⟨x|ξ⟩_o ≤ ⟨x|ξ⟩_{h(o)} ∀h ∈ G_ξ
 ⇔ D_ξ(x, ξ) ≤ D_ξ(h(x), ξ) ∀h ∈ G_ξ,
i.e. Do (Gξ ) is the Dirichlet domain of o for the action of Gξ on the metametric
space (Eξ , Dξ ). Note that this action is isometric (Observation 6.2.9) and strongly
discrete (Proposition 7.7.4). Modifying the proof of Corollary 12.1.6 now yields the
conclusion.
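To make the Dirichlet-domain condition concrete, here is a minimal sketch (a toy example with made-up data, not from the text): the group Z acting on R by integer translations, with the Dirichlet domain of the basepoint o = 0 consisting of the points at least as close to o as to every other orbit point.

```python
# Dirichlet domain of a basepoint o for a group action: the points x with
# d(x, o) <= d(x, g(o)) for every g in the group.
# Toy example (hypothetical): Z acting on R by translations, o = 0;
# the Dirichlet domain is the interval [-1/2, 1/2].

def in_dirichlet_domain(x, o=0.0, shifts=range(-10, 11)):
    return all(abs(x - o) <= abs(x - (o + n)) for n in shifts)

assert in_dirichlet_domain(0.4)
assert in_dirichlet_domain(-0.5)      # boundary points belong to the domain
assert not in_dirichlet_domain(0.7)   # 0.7 is closer to the orbit point 1
```

The lemma above plays the same game with the metametric D_ξ in place of the metric d.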
Corollary 12.3.10. In Lemma 12.3.6, the equivalent conditions (A)-(D) are
also equivalent to:
(A′ ) G(o) ∩ Do (Gξ ) is ξ-bounded.
(B′ ) Do (Gξ ) ∩ Λ \ {ξ} is ξ-bounded.
(C′ ) Co ∩ Do (Gξ ) \ H is ξ-bounded.
12.4. Geometrically finite groups
Definition 12.4.1. We say that G is geometrically finite if there exists a disjoint
G-invariant collection of horoballs H satisfying o ∉ ⋃H such that
(I) for every ρ > 0, the set
(12.4.1)  H_ρ := {H ∈ H : d(o, H) ≤ ρ}
is finite, and
(II) there exists σ > 0 such that
(12.4.2)  C_o ⊆ G(B(o, σ)) ∪ ⋃H.
Observation 12.4.2. Notice that the following implications hold:
G cobounded ⇒ G convex-cobounded ⇒ G geometrically finite.
Indeed, G is convex-cobounded if and only if it satisfies Definition 12.4.1 with
H = ∅.
Remark 12.4.3. It is not immediately obvious that the definition of geometrical
finiteness is independent of the basepoint o, but this follows from Theorems 12.4.5
and 12.4.14 below.
Remark 12.4.4. Geometrical finiteness is closely related to the notion of relative hyperbolicity of a group; see e.g. [37]. The main differences are:
1. Relative hyperbolicity is a property of an abstract group, whereas geometrical finiteness is a property of an isometric group action (equivalently, of
a subgroup of an isometry group)
2. The maximal parabolic subgroups of relatively hyperbolic groups are assumed to be finitely generated, whereas we do not make this assumption
(cf. Corollary 12.4.17(i)).
3. The relation between relative hyperbolicity and geometrical finiteness is
only available in retrospect, once one proves that both are equivalent
to a decomposition of the limit set into radial and bounded parabolic
limit points plus auxiliary assumptions (compare Theorem 12.4.5 with
[37, Definition 1]).
12.4.1. Characterizations of geometrical finiteness. We now state and
prove an analogue of Theorem 12.2.7 in the setting of geometrically finite groups.
In the Standard Case, the equivalence (A) ⇔ (B2) of the following theorem was
proven by A. F. Beardon and B. Maskit [21]. Note that while in Theorem 12.2.7,
one of the equivalent conditions involved the uniformly radial limit set, no such
characterization exists for geometrically finite groups. This is because for many
geometrically finite groups, the typical point on the limit set is neither parabolic
nor uniformly radial. (For example, the set of uniformly radial limit points of the
geometrically finite Fuchsian group SL2 (Z) is equal to the set of badly approximable
numbers; cf. e.g. [73, Observation 1.15 and Proposition 1.21].)
Theorem 12.4.5 (Generalization of the Beardon–Maskit Theorem; see also
[151, Proposition 1.10]). The following are equivalent:
(A) G is geometrically finite.
(B) G is of compact type and any of the following hold (cf. Remark 12.2.8):
(B1) Λ(G) = Λr,σ (G) ∪ Λbp (G) for some σ > 0.
(B2) Λ(G) = Λr (G) ∪ Λbp (G).
(B3) Λ(G) = Λh (G) ∪ Λbp (G).
Remark 12.4.6. Of the equivalent definitions of geometrical finiteness discussed in [34], it seems the above definitions most closely correspond with (GF1)
and (GF2).¹ It seems that definitions (GF3) and (GF5) cannot be generalized to
our setting. Indeed, (GF5) depends on the notion of volume, which does not exist
in infinite-dimensional spaces, while (GF3) already fails in the case of variable curvature; cf. [36]. It seems plausible that a version of (GF4) could be made to work
¹Cf. Remark 12.3.7 above regarding (GF2).
at least in the setting of algebraic hyperbolic spaces, but we do not study the issue
at this stage.
The implications (B1) ⇒ (B2) ⇒ (B3) follow immediately from the definitions.
We therefore proceed to prove (A) ⇒ (B1) and then the more difficult (B3) ⇒ (A).
Proof of (A) ⇒ (B1). The proof consists of two parts: showing that Λ(G) =
Λr,σ (G) ∪ Λbp (G) for some σ > 0, and showing that G is of compact type.
Proof that Λ(G) = Λr,σ (G) ∪ Λbp (G) for some σ > 0. Let H be the disjoint G-invariant collection of horoballs as defined in Definition 12.4.1, and let
σ > 0 be large enough so that (12.4.2) holds. Fix ξ ∈ Λ, and we will show that
ξ ∈ Λr,σ ∪ Λbp . For each t ≥ 0, recall that [o, ξ]t denotes the unique point on [o, ξ]
so that d(o, [o, ξ]_t) = t; since [o, ξ]_t ∈ C_o, by (12.4.2) either [o, ξ]_t ∈ G(B(o, σ)) or
[o, ξ]_t ∈ ⋃H.
Now if there exists a sequence tn → ∞ satisfying [o, ξ]tn ∈ G(B(o, σ)), then
S
ξ ∈ Λr,σ (Corollary 4.5.5). Assume not; then there exists t0 such that [o, ξ]t ∈ H
for all t > t0 . This in turn implies that the collection
{t > t0 : [o, ξ]t ∈ H} : H ∈ H
is a disjoint open cover of (t0 , ∞). Since (t0 , ∞) is connected, we have (t0 , ∞) =
{t > t0 : [o, ξ]t ∈ H} for some H ∈ H , or equivalently
[o, ξ]t ∈ H ∀t > t0 .
Therefore ξ = center(H). Now it suffices to show
Lemma 12.4.7. For every H ∈ H , if center(H) ∈ Λ, then center(H) ∈ Λbp .
Proof. Let ξ = center(H). For every g ∈ G_ξ, we have g(H) ∩ H ≠ ∅. Since
H is disjoint, this implies g(H) = H and thus g ′ (ξ) = 1. Thus ξ is neutral with
respect to every element of Gξ .
We will demonstrate equivalent condition (D) of Lemma 12.3.6. First of all, we
observe that G(o) is disjoint from ⋃H since o ∉ ⋃H. Fix x ∈ C_o ∩ ∂H ⊆ C_o \ H.
Then by (12.4.2), we have x ∈ g_x(B(o, σ)) for some g_x ∈ G. It follows that
g_x^{-1}(x) ∈ B(o, σ) and so g_x^{-1}(H) ∩ B(o, σ + ε) ≠ ∅ for every ε > 0. Equivalently,
g_x^{-1}(H) ∈ H_{σ+ε}, where H_ρ is defined as in (12.4.1). Therefore, by (I) of Definition
12.4.1, the set
{g_x^{-1}(H) : x ∈ C_o ∩ ∂H}
is finite. Let (g_{x_i}^{-1}(H))_{i=1}^n be an enumeration of this set. Then for any x ∈ C_o ∩ ∂H
there exists i = 1, . . . , n with g_x^{-1}(H) = g_{x_i}^{-1}(H). Then g_x g_{x_i}^{-1}(H) = H and so
g_x g_{x_i}^{-1}(ξ) = ξ. Equivalently, h_x := g_x g_{x_i}^{-1} ∈ G_ξ. Thus
d(x, G_ξ(o)) ≤ d(h_x(o), x) = d(g_{x_i}^{-1}(o), g_x^{-1}(x)) ≤ ‖g_{x_i}^{-1}‖ + ‖g_x^{-1}(x)‖ ≤ σ + max_{i=1,…,n} ‖g_{x_i}‖.
Letting ρ = σ + max_{i=1,…,n} ‖g_{x_i}‖, we have (12.3.5), which completes the proof. ⊳
The identity Λ(G) = Λ_{r,σ}(G) ∪ Λ_{bp}(G) has been proven. ⊳

Figure 12.4.1. If H_{ξ_1} = H_{ξ_2}, then ξ_1 and ξ_2 must be close to each other.
Proof that G is of compact type. By contradiction, suppose otherwise.
Then Λ is a complete metric space which is not compact, which implies that there
exist ε > 0 and an infinite ε-separated set I ⊆ Λ. Fix ρ > 0 large to be determined.
For each ξ ∈ I, let x_ξ = [o, ξ]_ρ. Then x_ξ ∈ C_o ⊆ G(B(o, σ)) ∪ ⋃H, so either
(1) there exists gξ ∈ G such that d(gξ (o), xξ ) ≤ σ, or
(2) there exists Hξ ∈ H such that xξ ∈ Hξ .
Claim 12.4.8. For ρ sufficiently large, the partial functions ξ ↦ g_ξ(o) and
ξ ↦ H_ξ are injective.
Proof. For the first partial function ξ ↦ g_ξ(o), see Claim 12.2.9. Now fix
ξ1 , ξ2 ∈ I distinct, and suppose that Hξ1 = Hξ2 (cf. Figure 12.4.1). Then xi :=
xξi ∈ Hξi \ B(o, ρ). By Lemma 12.1.3, this implies that
ε ≤ D(ξ_1, ξ_2) ≤ D(x_1, x_2) ≤ 2e^{−(1/2)ρ}.
For ρ > 2(log(2) − log(ε)), this is a contradiction. Thus the second partial function
ξ 7→ Hξ is also injective.
⊳
The strong discreteness of G and (12.4.1) therefore imply
#(I) ≤ #{H ∈ H : d(o, H) ≤ ρ} + #{g ∈ G : ‖g‖ ≤ ρ + σ} < ∞,
which is a contradiction since #(I) = ∞ by assumption.
⊳
This completes the proof of (A) ⇒ (B1).
Proof of (B3) ⇒ (A). Let F := (Co ∩Do )′ , where we use the notation (7.5.2).
By Lemma 12.2.10, Observation 7.5.12, and our hypothesis (B3), we have
(12.4.3)  F ⊆ Λ \ Λ_h ⊆ Λ_{bp}.
Claim 12.4.9. #(F ) < ∞.
Proof. Note that F is compact since G is of compact type and so it is enough
to show that F has no accumulation points. By contradiction, suppose there exists
ξ ∈ F such that ξ lies in the closure of F \ {ξ}. Then by (12.4.3), ξ ∈ Λ_{bp}, so by (B′) of Corollary
12.3.10, D_o(G_ξ) ∩ Λ \ {ξ} is ξ-bounded. But F \ {ξ} ⊆ D_o′ ∩ Λ \ {ξ} ⊆ D_o(G_ξ) ∩ Λ \ {ξ},
contradicting that ξ lies in the closure of F \ {ξ}.
⊳
Let P be a transversal of the partition of F into G-orbits. Fix t > 0 large to
be determined. For each p ∈ P let
Hp = Hp,t = {x : B p (o, x) > t},
and let
(12.4.4)
H := {g(Hp ) : p ∈ P, g ∈ G} .
Clearly, H is a G-invariant collection of horoballs. To finish the proof, we need to
show that:
(i) o ∉ ⋃H.
(ii) For t sufficiently large, H is a disjoint collection.
(iii) ((I) of Definition 12.4.1) For every ρ > 0 we have #(Hρ ) < ∞.
(iv) ((II) of Definition 12.4.1) There exists σ > 0 satisfying (12.4.2).
It turns out that (ii) is the hardest, so we prove it last.
Proof of (i). Fix g ∈ G and p ∈ P . Since p ∈ P ⊆ Do′ , we have
B p (o, g −1 (o)) ≤ 0 < t.
It follows that g^{-1}(o) ∉ H_p, or equivalently o ∉ g(H_p).
⊳
Proof of (iii). Fix H = g(H_p) ∈ H_ρ for some p ∈ P. Consider the point
x_H = [o, g(p)]_{d(o,H)} ∈ ∂H, and note that d(o, x_H) = d(o, H) ≤ ρ. Now g^{-1}(x_H) ∈
∂H_p, so by (D) of Lemma 12.3.6 there exists h ∈ G_p such that
d(h(o), g^{-1}(x_H)) ≍_+ 0.
Figure 12.4.2. Since g^{-1}(x_H) lies on the boundary of the
horoball H_p, an element of G_p can move it close to o.
(Cf. Figure 12.4.2.) Letting C be the implied constant, we have
‖gh‖ ≤ d(o, x_H) + d(x_H, gh(o)) ≤ ρ + C.
On the other hand, gh(Hp ) = g(Hp ) = H since h ∈ Gp . Summarizing, we have
H_ρ ⊆ {g(H_p) : p ∈ P, ‖g‖ ≤ ρ + C}.
But this set is finite because G is strongly discrete and because of Claim 12.4.9.
Thus #(Hρ ) < ∞.
⊳
Proof of (iv).
Claim 12.4.10. (C_o ∩ D_o \ ⋃H)′ = ∅.
Proof. By contradiction, suppose that there exists
(12.4.5)  ξ ∈ (C_o ∩ D_o \ ⋃H)′ ⊆ F ⊆ Λ_{bp}.
By the definition of P , there exist p ∈ P and g ∈ G so that g(p) = ξ. Then
Hξ := g(Hp ) ∈ H is centered at ξ, and so by (C′ ) of Corollary 12.3.10,
Co ∩ Do \ Hξ ⊆ Do (Gξ ) ∩ Co \ Hξ
is ξ-bounded, contradicting (12.4.5).
⊳
Since G is of compact type, Claim 12.4.10 implies that the set C_o ∩ D_o \ ⋃H
is bounded (cf. (C) of Proposition 7.7.2), and Corollary 12.1.6 finishes the proof.
⊳
Proof of (ii). Fix H1 , H2 ∈ H distinct, and write Hi = gi (Hξi ) for i =
1, 2. The distinctness of H_1 and H_2 implies that they have different centers, i.e.
g_1(ξ_1) ≠ g_2(ξ_2). (This is due to the inequivalence of distinct points in P.) By
contradiction, suppose that H_1 ∩ H_2 ≠ ∅. Without loss of generality, we may
suppose that g1 = id and that g2 (ξ2 ) ∈ Do (Gξ1 ). Otherwise, let h ∈ Gξ1 be such
that hg_1^{-1}g_2(ξ_2) ∈ D_o(G_{ξ_1}) (such an h exists by Proposition 12.1.5), and we have
H_{ξ_1} ∩ hg_1^{-1}g_2(H_{ξ_2}) ≠ ∅.
By (B′ ) of Corollary 12.3.10, we have
⟨ξ_1|g_2(ξ_2)⟩_o ≍_+ 0,
where the implied constant depends on ξ1 . Since there are only finitely many choices
for ξ1 , we may ignore this dependence.
Fix x ∈ H_1 ∩ H_2. We have
B_{g_2(ξ_2)}(o, x) = B_{ξ_2}(g_2^{-1}(o), o) + B_{ξ_2}(o, g_2^{-1}(x))
 > B_{ξ_2}(g_2^{-1}(o), o) + t   (since x ∈ H_2 = g_2(H_{ξ_2}))
 ≥ 0 + t.   (since ξ_2 ∈ D_o′)
On the other hand, B_{ξ_1}(o, x) > t since x ∈ H_1. Thus (g) of Proposition 3.3.3 gives
0 ≤ ⟨ξ_1|g_2(ξ_2)⟩_x = ⟨ξ_1|g_2(ξ_2)⟩_o − (1/2)[B_{ξ_1}(o, x) + B_{g_2(ξ_2)}(o, x)]
 ≤ ⟨ξ_1|g_2(ξ_2)⟩_o − (1/2)[t + t] ≍_+ −t.
This is a contradiction for sufficiently large t.
The implication (B3) ⇒ (A) has been proven.
⊳
The proof of Theorem 12.4.5 is now complete.
Remark 12.4.11. The implication (B4) ⇒ (A) of Theorem 12.2.7 follows directly
from the proof of the implication (B3) ⇒ (A) of Theorem 12.4.5, since if
there are no parabolic points then we have F = ∅ and so no horoballs will be
defined in (12.4.4).
Observation 12.4.12. The proof of Theorem 12.4.5 shows that if G ≤ Isom(X)
is geometrically finite, then the set G\Λbp (G) is finite. When X = H3 , this is a
special case of Sullivan’s Cusp Finiteness Theorem [162], which applies to all finitely
generated subgroups of Isom(H3 ) (not just the geometrically finite ones). However,
the Cusp Finiteness Theorem does not generalize to higher dimensions [105].
Proof. Let H be the collection of horoballs defined in the proof of (B3)
⇒ (A), i.e. H = {g(H_p) : p ∈ P, g ∈ G} for some finite set P. We claim that Λ_{bp} = G(P).
Indeed, fix ξ ∈ Λbp . By the proof of (A) ⇒ (B1), either ξ ∈ Λr or ξ = center(H)
for some H ∈ H. Since Λ_{bp} ∩ Λ_r = ∅ (Remark 12.3.8), the latter possibility holds.
Write H = g(Hp ); then ξ = g(p) ∈ G(P ).
The set G\Λbp (G) is called the set of cusps of G.
Definition 12.4.13. A complete set of inequivalent parabolic points for a geometrically finite group G is a transversal of G\Λ_{bp}(G), i.e. a set P such that
Λ_{bp} = G(P) but G(p_1) ∩ G(p_2) = ∅ for all p_1, p_2 ∈ P distinct.
Then Observation 12.4.12 can be interpreted as saying that any complete set
of inequivalent parabolic points for a geometrically finite group is finite.
12.4.2. Consequences of geometrical finiteness. Geometrical finiteness,
like convex-coboundedness, has some further geometric consequences. Recall (Theorem 12.2.12) that if G is convex-cobounded, then G is finitely generated, and for
any Cayley graph of G, the orbit map g ↦ g(o) is a quasi-isometric embedding. If
G is only geometrically finite rather than convex-cobounded, then in general neither of these things is true.2 Nevertheless, by considering a certain weighted Cayley
metric with infinitely many generators, we can recover the rough metric structure
of the orbit G(o).
Recall that the weighted Cayley metric of G with respect to a generating set
E_0 and a weight function ℓ_0 : E_0 → (0, ∞) is the metric
d_G(g_1, g_2) := inf { Σ_{i=1}^n ℓ_0(h_i) : n ∈ N, (h_i)_{i=1}^n ∈ E_0^n, g_1 = g_2 h_1 ··· h_n }
(Example 3.1.2). To describe the generating set and weight function that we want
to use, let P be a complete set of inequivalent parabolic points of G, and consider
the set
E := ⋃_{p∈P} G_p.
We will show that there exists a finite set F such that G is generated by E ∪ F .
Without loss of generality, we will assume that this set is symmetric, i.e. h−1 ∈ F
for all h ∈ F . For each h ∈ E ∪ F let
(12.4.6)  ℓ_0(h) := 1 ∨ ‖h‖.
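As a toy illustration of a weighted Cayley metric with a weight of the form ℓ_0(h) = 1 ∨ ‖h‖ (all data here is hypothetical and not from the text), take G = Z acting on itself by translations with o = 0, so that ‖n‖ = |n|. The metric d_G is then a weighted shortest path in the Cayley graph, which Dijkstra's algorithm computes:

```python
import heapq

# Toy model: G = Z, o = 0, ||n|| = |n|.  Hypothetical generating set {+-1, +-5}
# with weight l0(h) = max(1, ||h||).  d_G(g1, g2) is the least total weight of a
# word h1...hk with g1 = g2 h1...hk, i.e. a weighted shortest path.
GENS = [1, -1, 5, -5]
l0 = {h: max(1, abs(h)) for h in GENS}

def d_G(g1, g2, bound=100):
    dist, heap = {g2: 0}, [(0, g2)]
    while heap:
        c, g = heapq.heappop(heap)
        if g == g1:
            return c
        if c > dist.get(g, float("inf")):
            continue
        for h in GENS:
            if abs(g + h) <= bound and c + l0[h] < dist.get(g + h, float("inf")):
                dist[g + h] = c + l0[h]
                heapq.heappush(heap, (c + l0[h], g + h))
    return float("inf")

# With this weight no generator moves the orbit point further than its cost,
# so the orbit map n -> n(o) = n is here an exact isometry: d_G(0, n) = |n|.
assert d_G(0, 7) == 7 and d_G(0, -12) == 12
```

Theorem 12.4.14 below asserts a quasi-isometric version of this phenomenon in general.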
We then claim that when G is endowed with its weighted Cayley metric with respect
to (E ∪ F, ℓ_0), the orbit map is a quasi-isometric embedding. Specifically:
Theorem 12.4.14. If G is geometrically finite, then
2For examples of infinitely generated strongly discrete parabolic groups, see Examples 11.2.18 and
11.2.20; these examples can be extended to nonelementary examples by taking a Schottky product
with a lineal group. Theorem 11.2.6 guarantees that the orbit map of a parabolic group is never
a quasi-isometric embedding.
(i) There exists a finite set F such that G is generated by E ∪ F .
(ii) With the metric d_G as above, the orbit map g ↦ g(o) is a quasi-isometric
embedding.
Observation 12.4.15. Theorem 12.2.12 follows directly from Theorem 12.4.14,
since by Theorem 12.2.7 we have Λ_{bp} = ∅ if G is convex-cobounded.
We now begin the proof of Theorem 12.4.14. Of course, part (i) has been proven
already (Theorem 12.4.5).
Proof of (i) and (ii). Let H and σ be as in Definition 12.4.1. Without loss
of generality, we may suppose that H = {k(Hp,t ) : k ∈ G, p ∈ P } for some t > 0
(cf. the proof of Theorem 12.4.5).
Fix ρ > 2σ + 1 large to be determined, and let F = {g ∈ G : ‖g‖ ≤ ρ}. Then
F is finite since G is strongly discrete.
Claim 12.4.16. For all g ∈ G \ F , there exist h1 , h2 ∈ E ∪ F such that
‖g‖ − d(h_1h_2(o), g(o)) ≳_{×,ρ} 1 ∨ ‖h_1‖ ∨ ‖h_2‖ ≍_× ℓ_0(h_1) + ℓ_0(h_2).
Proof. Let γ : [0, ‖g‖] → [o, g(o)] be the unit speed parameterization. Let
I = [σ + 1, ρ − σ]. Then γ(I) ⊆ C_o, so by (12.4.2), either γ(I) ∩ h(B(o, σ)) ≠ ∅ for
some h ∈ G, or γ(I) ⊆ ⋃H.
Case 1: γ(I) ∩ h(B(o, σ)) ≠ ∅ for some h ∈ G. In this case, fix x ∈ γ(I) ∩
h(B(o, σ)). Then
‖h‖ ≤ ‖x‖ + d(x, h(o)) ≤ (ρ − σ) + σ = ρ,
so h ∈ F . On the other hand,
d(h(o), g(o)) ≤ d(h(o), x) + d(x, g(o))
 = d(h(o), x) + ‖g‖ − ‖x‖
 ≤ σ + ‖g‖ − (σ + 1)
 = ‖g‖ − 1,
so
‖g‖ − d(h(o), g(o)) ≥ 1 ≍_{×,ρ} ‖h‖.
The claim follows upon letting h1 = h and h2 = id.
Case 2: γ(I) ⊆ ⋃H. In this case, since γ(I) is connected and H is a disjoint
open cover of γ(I), there exists H ∈ H such that γ(I) ⊆ H. Since
γ(0), γ(‖g‖) ∈ G(o) ⊆ X \ H, there exist
0 < t_1 < σ + 1 < ρ − σ < t_2 < ‖g‖
so that γ(t_1), γ(t_2) ∈ ∂H. Let x_i = γ(t_i) for i = 1, 2 (cf. Figure 12.4.3).
Figure 12.4.3. Since j_1^{-1}j_2 ∈ E and kj_1 ∈ F, the points o, kj_1(o),
and kj_2(o) are connected to each other by edges in the weighted
Cayley graph. Since the distances from o to kj_2(o) and from kj_2(o)
to g(o) are both significantly less than the distance from o to g(o),
our recursive algorithm will eventually halt.
Since H ∈ H , we have H = k(Hp ) for some p ∈ P and k ∈ G. By
(D) of Lemma 12.3.6, there exist j1 , j2 ∈ Gp with
d(k −1 (xi ), ji (o)) ≤ ρp (i = 1, 2)
for some ρp > 0 depending only on p. Letting ρ0 = maxp∈P ρp , we have
‖kj_1‖ ≤ ‖x_1‖ + d(x_1, kj_1(o)) ≤ (σ + 1) + ρ_0.
Letting ρ = max(ρ_0 + σ + 1, 2σ + 2), we see that ‖kj_1‖ ≤ ρ, so h_1 := kj_1 ∈
F . On the other hand, h2 := j1−1 j2 ∈ E by construction, since j1 , j2 ∈ Gp .
Observe that h1 h2 = kj2 . Now
d(h1 h2 (o), g(o)) ≤ d(g(o), x2 ) + d(x2 , kj2 (o))
≤ (‖g‖ − t_2) + ρ_0,
and so
(12.4.7)  ‖g‖ − d(h_1h_2(o), g(o)) ≥ t_2 − ρ_0.
Now
t2 ≥ t2 − t1 = d(x1 , x2 )
≥ d(j1 (o), j2 (o)) − d(k −1 (x1 ), j1 (o)) − d(k −1 (x2 ), j2 (o))
≥ ‖j_1^{-1}j_2‖ − 2ρ_0 = ‖h_2‖ − 2ρ_0
and on the other hand
t2 ≥ ρ − σ ≥ ρ0 + 1.
Combining with (12.4.7), we see that
‖g‖ − d(h_1h_2(o), g(o)) ≥ (‖h_2‖ − 2ρ_0) ∨ (ρ_0 + 1) − ρ_0
 = (‖h_2‖ − 3ρ_0) ∨ 1
 ≍_× 1 ∨ ‖h_1‖ ∨ ‖h_2‖.
⊳
Fix j ∈ G, and define the sequence (h_i)_{i=1}^n in E ∪ F inductively as follows: If
h_1, . . . , h_{2i} have been defined for some i ≥ 0, then let
g = g_{2i} = h_{2i}^{-1} ··· h_1^{-1} j = (h_1 ··· h_{2i})^{-1} j.
(Note that g0 = j.) If g ∈ F , then let h2i+1 = g and let n = 2i + 1 (i.e. stop the
sequence here). Otherwise, by Claim 12.4.16 there exist h2i+1 , h2i+2 ∈ E ∪ F such
that
(12.4.8)  ‖g_{2i}‖ − d(h_{2i+1}h_{2i+2}(o), g_{2i}(o)) ≳_{×,ρ} ℓ_0(h_{2i+1}) + ℓ_0(h_{2i+2}).
This completes the inductive step, as now h1 , . . . , h2(i+1) have been defined. We
remark that a priori, this process could be infinite and so we could have n = ∞;
however, it will soon be clear that n is always finite.
We observe that (12.4.8) may be rewritten:
‖g_{2i}‖ − ‖g_{2(i+1)}‖ ≳_{×,ρ} ℓ_0(h_{2i+1}) + ℓ_0(h_{2i+2}).
Iterating yields
(12.4.9)  ‖j‖ − ‖g_{2m}‖ ≳_× Σ_{i=1}^{2m} ℓ_0(h_i)  ∀m ≤ n/2.
In particular, since ℓ0 (hi ) ≥ 1 for all i, we have
‖j‖ ≳_× ⌊n/2⌋ ≍_× n,
and thus n < ∞. This demonstrates that the sequence (hi )n1 is in fact a finite
sequence. In particular, since the only way the sequence can terminate is if g2i ∈ F
for some i ≥ 0, we have gn−1 ∈ F and hn = gn−1 . From the definition of gn−1 ,
it follows that j = h1 · · · hn . Since j was arbitrary and h1 , . . . , hn ∈ E ∪ F , this
demonstrates that E ∪ F generates G, completing the proof of (i).
To demonstrate (ii), we observe that by (12.4.9) we have
‖j‖ ≳_× Σ_{i=1}^{n−1} ℓ_0(h_i) ≍_+ Σ_{i=1}^{n} ℓ_0(h_i)   (since h_n ∈ F)
and Σ_{i=1}^{n} ℓ_0(h_i) ≥ d_G(id, j), where d_G denotes the weighted Cayley metric.
Conversely, if (h_i)_{i=1}^n is any sequence satisfying j = h_1 ··· h_n, then
‖j‖ ≤ Σ_{i=1}^n d(h_1 ··· h_{i−1}(o), h_1 ··· h_i(o)) = Σ_{i=1}^n ‖h_i‖ ≤ Σ_{i=1}^n ℓ_0(h_i),
and taking the infimum gives ‖j‖ ≤ d_G(id, j).
This finishes the proof of Theorem 12.4.14.
Corollary 12.4.17. If G is geometrically finite, then
(i) If for every ξ ∈ Λbp , Gξ is finitely generated, then G is finitely generated.
(ii) If for every ξ ∈ Λbp , δ(Gξ ) < ∞, then δ(G) < ∞.
Proof of (i). This is immediate from Theorem 12.4.14(i) and Observation
12.4.12.
Proof of (ii). Call a sequence (h_i)_{i=1}^n ∈ (E ∪ F)^n minimal if
(12.4.10)  Σ_{i=1}^n ℓ_0(h_i) = d_G(id, h_1 ··· h_n).
Then for each g ∈ G \ {id}, there exists a minimal sequence (hi )n1 ∈ (E ∪ F )n so
that g = h1 · · · hn .
Let C be the implied multiplicative constant of (12.4.10), so that for every
minimal sequence (h_i)_{i=1}^n, we have
(1/C) Σ_{i=1}^n ℓ_0(h_i) ≲_+ ‖h_1 ··· h_n‖.
Fix s > 0. Then
Σ_s(G) − 1 ≤ Σ_{g∈G\{id}} Σ_{n∈N} Σ_{(h_i)_{i=1}^n ∈ (E∪F)^n minimal, g = h_1···h_n} e^{−s‖g‖}
 = Σ_{n∈N} Σ_{(h_i)_{i=1}^n ∈ (E∪F)^n minimal} e^{−s‖h_1···h_n‖}
 ≲_× Σ_{n∈N} Σ_{(h_i)_{i=1}^n ∈ (E∪F)^n minimal} exp(−(s/C) Σ_{i=1}^n ℓ_0(h_i))
 ≤ Σ_{n∈N} Σ_{(h_i)_{i=1}^n ∈ (E∪F)^n} exp(−(s/C) Σ_{i=1}^n ℓ_0(h_i))
 = Σ_{n∈N} Σ_{(h_i)_{i=1}^n ∈ (E∪F)^n} Π_{i=1}^n e^{−(s/C) ℓ_0(h_i)}
 = Σ_{n∈N} Π_{i=1}^n Σ_{h∈E∪F} e^{−(s/C) ℓ_0(h)}
 = Σ_{n∈N} (Σ_{h∈E∪F} e^{−(s/C) ℓ_0(h)})^n.
In particular, if
λ_s := Σ_{h∈E∪F} e^{−(s/C) ℓ_0(h)} < 1,
then Σs (G) < ∞. Now when s/C > maxp∈P δ(Gp ), we have λs < ∞. On the other
hand, each term of the sum defining λs tends to zero as s → ∞. Thus λs → 0 as
s → ∞, and in particular there exists some value of s for which λs < 1. For this s,
Σs (G) < ∞ and so δG ≤ s < ∞.
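The convergence step above is a geometric series in disguise: once λ_s < 1, the bound Σ_n λ_s^n = λ_s/(1 − λ_s) is finite. A numerical sketch with made-up weights ℓ_0 and a made-up constant C (purely illustrative, not from the text):

```python
import math

# Hypothetical data: finitely many generator weights l0(h) >= 1 and a constant C.
weights = [1.0, 1.0, 2.0, 3.5, 5.0]   # stand-ins for l0(h), h in E u F
C = 2.0

def lam(s):
    # lambda_s = sum over h of exp(-(s/C) * l0(h))
    return sum(math.exp(-(s / C) * w) for w in weights)

# lambda_s is strictly decreasing in s and tends to 0, so some s gives
# lambda_s < 1; for that s the geometric-series bound
# Sigma_s(G) - 1 <= sum_{n >= 1} lambda_s^n = lambda_s / (1 - lambda_s)
# is finite.
assert lam(1.0) > lam(2.0) > lam(10.0)
s = next(s for s in (k * 0.5 for k in range(1, 100)) if lam(s) < 1)
assert lam(s) < 1 and lam(s) / (1 - lam(s)) < float("inf")
```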
12.4.3. Examples of geometrically finite groups. We conclude this section by giving some basic examples of geometrically finite groups. We begin with
the following observation:
Observation 12.4.18.
(i) Any elliptic or lineal group is convex-cobounded.
(ii) Any parabolic group is geometrically finite and is not convex-cobounded.
Proof. This follows directly from Theorems 12.2.7 and 12.4.5. It may also be
proven directly; we leave this as an exercise to the reader.
Proposition 12.4.19. The strongly separated Schottky product G = ⟨G_a⟩_{a∈E}
of a finite collection of geometrically finite groups is geometrically finite. Moreover,
if P_1 and P_2 are complete sets of inequivalent parabolic points for G_1 and G_2 respectively, then P = P_1 ∪ P_2 is a complete set of inequivalent parabolic points for
G. In particular, if the groups (G_a)_{a∈E} are convex-cobounded, then G is convex-cobounded.
Proof. This follows directly from Lemma 10.4.4, Theorem 10.4.7, Corollary
10.4.8, and Theorem 12.4.5.
Combining Observation 12.4.18 and Proposition 12.4.19 yields the following:
Corollary 12.4.20. The Schottky product of finitely many parabolic and/or
lineal groups is geometrically finite. If only lineal groups occur in the product, then
it is convex-cobounded.
CHAPTER 13
Counterexamples
In Chapter 5 we defined various notions of discreteness and demonstrated some
relations between them, and in Section 9.3 we related some of these notions to the
modified Poincaré exponent δ̃. In this chapter we give counterexamples to show
that the relations which we did not prove are in fact false. Specifically, we prove
that no more arrows can be added to Table 1 (reproduced below), and
that the discreteness hypotheses of Proposition 9.3.1 cannot be weakened.
[Table 1: implication diagrams among SD, MD, PrD, WD, COTD, and UOTD in
four settings: finite-dimensional Riemannian manifolds, infinite-dimensional
algebraic hyperbolic spaces, proper metric spaces, and general metric spaces.]
Table 1. The relations between different notions of discreteness.
COTD and UOTD stand for discrete with respect to the compact-open
and uniform operator topologies respectively. All implications
not listed have counterexamples, which are described below.
The examples are arranged roughly in order of discreteness level; the most
discrete examples are listed first.
We note that many of the examples below are examples of elementary groups.
In most cases, a nonelementary example can be achieved by taking the Schottky
product with an appropriate group; cf. Proposition 10.5.1.
The notations B = ∂E^∞ \ {∞} ≡ ℓ²(N) and ˆ· : Isom(B) → Isom(H^∞) will be
used without comment; cf. Section 11.1.
13.1. Embedding R-trees into real hyperbolic spaces
Many of the examples in this chapter are groups acting on R-trees, but it turns
out that there is a natural way to convert such an action into an action on a real
hyperbolic space. Specifically, we have the following:
Theorem 13.1.1 (Generalization of [40, Theorem 1.1]). Let X be a separable R-tree. Then for every λ > 1 there is an embedding Ψλ : X → H∞ and a
homomorphism πλ : Isom(X) → Isom(H∞ ) such that:
(i) The map Ψλ is πλ -equivariant and extends equivariantly to a boundary
map Ψλ : ∂X → ∂H∞ which is a homeomorphism onto its image.
(ii) For all x, y ∈ X we have
(13.1.1)
λ^{d(x,y)} = cosh d(Ψ_λ(x), Ψ_λ(y)).
(iii)
(13.1.2)
Hull_1(Ψ_λ(∂X)) ⊆ B(Ψ_λ(X), cosh^{−1}(√2)).
(iv) For any set S ⊆ X, the dimension of the smallest totally geodesic subspace [VS ] ⊆ H∞ containing Ψλ (S) is #(S) − 1. Here cardinalities are
interpreted in the weak sense: if #(S) = ∞, then dim([VS ]) = ∞ but S
may be uncountable even though [VS ] is separable.
Proof. Let V = {x ∈ R^X : x_v = 0 for all but finitely many v ∈ X}, and
define the bilinear form BQ on V via the formula
(13.1.3)  B_Q(x, y) = − Σ_{v,w∈X} λ^{d(v,w)} x_v y_w.
Claim 13.1.2. The associated quadratic form Q(x) = BQ (x, x) has signature
(ω, 1).
Proof. It suffices to show that Q ↿ e_{v_0}^⊥ is positive definite, where v_0 ∈ X is
fixed. Indeed, fix x ∈ e_{v_0}^⊥ \ {0}, and we will show that Q(x) > 0. Now, the set
X_0 = {v ∈ X : x_v ≠ 0} ∪ {v_0} is finite. It follows that the convex hull of X_0 can
be written in the form X(V, E, ℓ) for some finite acyclic weighted undirected graph
(V, E, ℓ). Consider the subspace
V_0 = {x ∈ e_{v_0}^⊥ : x_v = 0 for all v ∈ X \ V} ⊆ V,
which contains x. We will construct an orthogonal basis for V0 as follows. For each
edge (v, w) ∈ E, let
f_{v,w} = e_v − λ^{d(v,w)} e_w
if w ∈ [v0 , v]; otherwise let fv,w = fw,v . This vector has the following key property:
(13.1.4)
For all v ′ ∈ X, if [v, w] intersects [v0 , v ′ ] in
at most one point, then BQ (fv,w , ev′ ) = 0.
(The hypothesis implies that w ∈ [v ′ , v] and thus d(v, v ′ ) = d(v, w) + d(w, v ′ ).)
In particular, letting v′ = v_0 we see that f_{v,w} ∈ e_{v_0}^⊥. Moreover, the tree structure of (V, E) implies that for any two edges (v_1, w_1) ≁ (v_2, w_2), we have either
#([v_1, w_1] ∩ [v_0, v_2]) ≤ 1 or #([v_2, w_2] ∩ [v_0, v_1]) ≤ 1; either way, (13.1.4) implies
that B_Q(f_{v_1,w_1}, f_{v_2,w_2}) = 0. Finally, Q(f_{v,w}) = λ^{2d(v,w)} − 1 > 0 for all (v, w) ∈ E,
so Q ↿ V_0 is positive definite. Thus Q(x) > 0; since x ∈ e_{v_0}^⊥ was arbitrary, Q ↿ e_{v_0}^⊥
is positive definite. This concludes the proof of the claim. ⊳
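Claim 13.1.2 can be checked numerically on a small tree. The sketch below (a made-up three-point example, not from the text) builds the Gram matrix B_Q(e_{v_i}, e_{v_j}) = −λ^{d(v_i,v_j)} for the unit-edge path v_1 — v_2 — v_3 and reads the signature off the pivots of a symmetric elimination; as the claim predicts, exactly one pivot is negative.

```python
# Gram matrix of B_Q(e_v, e_w) = -lambda^{d(v,w)} on the 3-point tree
# v1 -- v2 -- v3 with unit edge lengths, so d = [[0,1,2],[1,0,1],[2,1,0]].
lam = 2.0
d = [[0, 1, 2], [1, 0, 1], [2, 1, 0]]
M = [[-(lam ** d[i][j]) for j in range(3)] for i in range(3)]

def signature(A):
    # Count signs of pivots in symmetric Gaussian elimination (LDL^T);
    # valid here since no zero pivots occur.
    A = [row[:] for row in A]
    n, pos, neg = len(A), 0, 0
    for k in range(n):
        p = A[k][k]
        pos, neg = pos + (p > 0), neg + (p < 0)
        for i in range(k + 1, n):
            f = A[i][k] / p
            for j in range(k, n):
                A[i][j] -= f * A[k][j]
    return pos, neg

assert signature(M) == (2, 1)   # (n - 1, 1): one negative direction, as claimed
```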
It follows that for any v ∈ X, the bilinear form
B_{Q_v}(x, y) = B_Q(x, y) + 2B_Q(x, e_v)B_Q(e_v, y)
is positive definite.
the norms induced by Qv1 and Qv2 are comparable. Let L be the completion of
V with respect to any of these norms, and (abusing notation) let BQ denote the
unique continuous extension of B_Q to L. Since the map X ∋ v ↦ e_v ∈ L is
continuous with respect to the norms in question, L is separable. On the other
hand, since these norms are nondegenerate, we have dim(⟨e_v : v ∈ S⟩) = #(S)
for all S ⊆ X, and in particular dim(L) = ∞. Thus L is isomorphic to L∞ , so
H := {[x] ∈ P(L) : Q(x) < 0} is isomorphic to H∞ .
We define the embedding Ψλ : X → H via the formula Ψλ (v) = [ev ]. (13.1.1)
now follows immediately from (13.1.3) and (2.2.2). In particular, we have
d(Ψλ (v), Ψλ (w)) ≍+ log(λ)d(v, w),
which implies that Ψ_λ extends naturally to a boundary map Ψ_λ : ∂X → ∂H which
is a homeomorphism onto its image. Given any g ∈ Isom(X), we let πλ (g) = [Tg ] ∈
Isom(H), where Tg ∈ OR (L; Q) is given by the formula Tg (ev ) = eg(v) . Then Ψλ
and its extension are both πλ -equivariant, demonstrating condition (i).
For S ⊆ X, we have dim(V_S) = dim(⟨e_v : v ∈ S⟩) = #(S) as noted above, and
thus dim([V_S]) = dim(V_S) − 1 = #(S) − 1. This demonstrates (iv).
It remains to show (iii). Fix ξ, η ∈ ∂X and [z] ∈ [Ψλ (ξ), Ψλ (η)]. Write Ψλ (ξ) =
[x] and Ψλ (η) = [y]. Since [x], [y] ∈ ∂H and [z] ∈ H, we have Q(x) = Q(y) = 0,
and we may choose x, y, and z to satisfy BQ (x, y) = BQ (x, z) = BQ (y, z) = −1.
Since [z] ∈ [[x], [y]], we have z = ax + by for some a, b ≥ 0; we must have a = b = 1
and thus Q(z) = −2.
Now, since Ψ_λ(w) = [e_w] → Ψ_λ(ξ) = [x] as w → ξ, there exists a function
f : X → R such that f(w)e_w → x as w → ξ. Fixing v ∈ [ξ, η], we have
B_Q(x, e_v) = lim_{w→ξ} f(w)B_Q(e_w, e_v) = − lim_{w→ξ} f(w)λ^{d(v,w)}.
In particular B_Q(x, e_{v_2}) = B_Q(x, e_{v_1})λ^{B_ξ(v_2,v_1)}, which implies that there exists
v ∈ [ξ, η] such that B_Q(x, e_v) = −1. Similarly, there exists a function g : X → R
such that g(w′)e_{w′} → y as w′ → η; we have
B_Q(y, e_v) = − lim_{w′→η} g(w′)λ^{d(v,w′)}
and
−1 = B_Q(x, y) = lim_{w→ξ, w′→η} f(w)g(w′)B_Q(e_w, e_{w′}) = − lim_{w→ξ, w′→η} f(w)g(w′)λ^{d(w,w′)} = −B_Q(x, e_v)B_Q(y, e_v).
It follows that
B_Q(x, e_v) = B_Q(y, e_v) = −1,
so ev = z + w for some w ∈ x⊥ ∩ y⊥ . Since Q(z) = −2 and Q(ev ) = −1, we have
Q(w) = 1 and thus
cosh d([e_v], [z]) = |B_Q(e_v, z)| / √(|Q(e_v)| · |Q(z)|) = 2/√(1 · 2) = √2.
In particular d([z], Ψ_λ(X)) ≤ cosh^{−1}(√2).
Definition 13.1.3. Given an R-tree X and a parameter λ > 1, the maps
Ψλ and πλ will be called the BIM embedding and the BIM representation with
parameter λ, respectively. (Here BIM stands for M. Burger, A. Iozzi, and N.
Monod, who proved the special case of Theorem 13.1.1 where X is an unweighted
simplicial tree.)
Remark 13.1.4. Let X, λ, Ψλ , and πλ be as in Theorem 13.1.1. Fix Γ ≤
Isom(X), and suppose that ΛΓ = ∂X. Let G = πλ (Γ) ≤ Isom(H∞ ).
(i) (13.1.2) implies that if Γ is convex-cobounded in the sense of Definition
12.2.5 below, then G is convex-cobounded as well. Moreover, we have
Λr (G) = ∂Ψλ (Λr (Γ)) and Λur (G) = ∂Ψλ (Λur (Γ)).
(ii) Since cosh(t) ≍× e^t for all t ≥ 0, (13.1.1) implies that

Σ_s(G) = ∑_{γ∈Γ} e^{−s‖πλ(γ)‖} ≍× ∑_{γ∈Γ} cosh^{−s}(‖πλ(γ)‖) = ∑_{γ∈Γ} λ^{−s‖γ‖} = Σ_{s log(λ)}(Γ)
for all s ≥ 0. In particular δ_G = δ_Γ / log(λ). A similar argument shows that δ̃_G = δ̃_Γ / log(λ), which implies that G is Poincaré regular if and only if Γ is.
(iii) G is strongly discrete (resp. COT-discrete) if and only if Γ is strongly
discrete (resp. COT-discrete). However, this fails for weak discreteness;
cf. Example 13.4.2 below.
Proof of (iii). The difficult part is showing that if G is COT-discrete, then
Γ is as well. Suppose that Γ is not COT-discrete. Then there exists a sequence Γ ∋
γn → id in the compact-open topology. Let gn = πλ (γn ) ∈ G ≤ Isom(H∞ ) ≡ O(L).
Then the set
{x ∈ L : gn (x) → x}
contains Ψλ(X). On the other hand, since the sequence (g_n)_1^∞ is equicontinuous
(Lemma 2.4.11), this set is a closed linear subspace of L. Clearly, the only such
subspace which contains Ψλ (X) is L. Thus gn (x) → x for all x ∈ H∞ , and so
gn → id in the compact-open topology. Thus G is not COT-discrete.
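To see Remark 13.1.4(ii) in action, one can compare partial Poincaré series numerically. The following Python sketch is our own illustration (not from the text): it takes Γ = F_2(Z) with the word metric, where the sphere of radius k has 4·3^{k−1} elements, and uses the identity cosh ‖πλ(γ)‖ = λ^{‖γ‖} (cf. Example 13.1.5 below); all function names are ours.

```python
import math

# Our illustration of Remark 13.1.4(ii), not from the text.
# Gamma = F_2(Z) with the word metric: the sphere of radius k has 4 * 3**(k-1)
# elements. Under the BIM representation, cosh ||pi_lam(gamma)|| = lam**||gamma||.

def sphere_size(k):
    """Number of elements of F_2(Z) of word length exactly k."""
    return 1 if k == 0 else 4 * 3 ** (k - 1)

def partial_series(s, weight, K):
    """Partial sum of Sigma_s = sum over gamma of exp(-s * weight(||gamma||))."""
    return sum(sphere_size(k) * math.exp(-s * weight(k)) for k in range(K + 1))

lam, s, K = 2.0, 1.0, 50
sigma_G = partial_series(s, lambda k: math.acosh(lam ** k), K)           # series of G
sigma_Gamma = partial_series(s * math.log(lam), lambda k: float(k), K)   # Sigma_{s log lam}(Gamma)

# Since k*log(lam) <= arccosh(lam**k) <= k*log(lam) + log(2), the partial sums
# agree up to a factor between 2**(-s) and 1, illustrating
# Sigma_s(G) comparable to Sigma_{s log lam}(Gamma), hence delta_G = delta_Gamma/log(lam).
print(sigma_G / sigma_Gamma)
```

The printed ratio stays bounded between 2^{−s} and 1 for any choice of λ > 1, which is exactly the multiplicative comparability asserted in (ii).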
We begin our list of examples with the following counterexample to an infinite-dimensional analogue of Margulis’s lemma suggested in Remark 11.1.6:
Example 13.1.5. Let Γ = F_2(Z) = ⟨γ_1, γ_2⟩, and let X be the Cayley graph of Γ.
Let Φ : Γ → Isom(X) be the natural action. Then H := Φ(Γ) is nonelementary and
strongly discrete. For each λ > 1, the image of H under the BIM representation πλ
is a nonelementary strongly discrete subgroup G = πλ (H) ≤ Isom(H∞ ) generated
by the elements g1 = πλ Φ(γ1 ), g2 = πλ Φ(γ2 ). But
cosh ‖g_i‖ = λ^{d(e,γ_i)} = λ,

so by an appropriate choice of λ, ‖g_i‖ can be made arbitrarily small. So for arbitrarily small ε, we can find a free group G ≤ Isom(H∞) such that G_ε(o) = G is nonelementary. This provides a counterexample to a hypothetical infinite-dimensional analogue of Margulis’s lemma, namely, the claim that there exists ε > 0 such that for every strongly discrete G ≤ Isom(H∞), G_ε(o) is elementary.
Remark 13.1.6. If H is a finite-dimensional algebraic hyperbolic space and
G ≤ Isom(H) is nonelementary, then a theorem of I. Kim [112] states that the
length spectrum of G
L = {log g ′ (g− ) : g ∈ G is loxodromic}
is not contained in any discrete subgroup of R. Example 13.1.5 shows that this result
does not generalize to infinite-dimensional algebraic hyperbolic spaces. Indeed, if
G ≤ Isom(H∞ ) is as in Example 13.1.5 and if g = πλ (γ) ∈ G, then (13.1.1) implies
that

log g′(g_−) = lim_{n→∞} (1/n) ‖g^n‖ = lim_{n→∞} (1/n) cosh^{−1}(λ^{‖γ^n‖}) = log(λ) lim_{n→∞} (1/n) ‖γ^n‖ = log(λ) log γ′(γ_−),
demonstrating that L is contained in the discrete subgroup log(λ)Z ≤ R.
13.2. Strongly discrete groups with infinite Poincaré exponent
We have already seen two examples of strongly discrete groups with infinite
Poincaré exponent, namely the Edelstein-type Example 11.1.18, and the parabolic
torsion Example 11.2.18. We give three more examples here.
Example 13.2.1 (A nonelementary strongly discrete group G acting on a
proper R-tree X and satisfying δG = ∞). Let Y = [0, ∞), let P = N, and for
each p = n ∈ P let
Γp = Z/n!Z
(or more generally, let Γp be any sufficiently large finite group). Let (X, G) be the
geometric product of Y with (Γp )p∈P , as defined below in Example 14.5.10. By
Proposition 14.5.12, X is proper, and G = hGp ip∈P is a global weakly separated
Schottky product. So by Corollary 10.3.6, G is strongly discrete. Clearly, G is
nonelementary. Finally, δG = ∞ because for all s ≥ 0,
Σ_s(G) ≥ ∑_{p∈P} ∑_{g∈Γ_p∖{e}} e^{−s‖g‖} = ∑_{p∈P} #(Γ_p ∖ {e}) e^{−2s‖p‖} = ∑_{n∈N} (n! − 1) e^{−2ns} = ∞.
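The divergence can also be checked term by term: since log(n!) grows like n log n, the summands (n! − 1)e^{−2ns} are eventually unbounded for every s ≥ 0. A small Python sketch (ours, with lgamma standing in for log n!):

```python
import math

# Our illustration: the terms (n! - 1) * exp(-2*n*s) of the lower bound above
# are unbounded for every s >= 0, because log(n!) ~ n*log(n) eventually beats
# 2*n*s. We use lgamma(n + 1) = log(n!) to avoid computing huge factorials.

def log_term(n, s):
    """Approximate log of the n-th summand: log(n!) - 2*n*s (ignoring the -1)."""
    return math.lgamma(n + 1) - 2 * n * s

for s in [0.5, 3.0, 10.0]:
    n = 10
    while log_term(n, s) < 10:   # search for a term larger than e**10
        n *= 2
    assert log_term(n, s) >= 10  # the terms, hence the series, blow up
print("series diverges for every s")
```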
Applying a BIM representation gives:
Example 13.2.2 (A nonelementary strongly discrete convex-cobounded group
acting on H∞ and satisfying δ = ∞). Cf. Remark 13.1.4 and the example above.
Example 13.2.3 (A parabolic strongly discrete group G acting on H∞ and
satisfying δG = ∞). Since F2 (Z) has the Haagerup property (Remark 11.1.2), there
is a homomorphism Φ : F2 (Z) → Isom(B) whose image G = Φ(F2 (Z)) is strongly
discrete. However, Ĝ must have infinite Poincaré exponent by Corollary 11.2.10.
13.3. Moderately discrete groups which are not strongly discrete
We have already seen one example of a moderately discrete group which is not
strongly discrete, namely the Edelstein-type Example 11.1.14 (parabolic acting on
H∞ ). We give three more examples here, and we will give one more example in
Section 13.4, namely Example 13.4.4. All five examples are also examples of
properly discontinuous actions, so they also demonstrate that proper discontinuity
does not imply strong discreteness. (The fact that moderate discreteness (or even
strong discreteness) does not imply proper discontinuity can be seen e.g. from
Examples 11.2.18, 13.2.1, and 13.2.2, all of which are generated by torsion elements.)
Example 13.3.1 (A parabolic group which acts properly discontinuously on H∞ but is not strongly discrete). Let Z∞ ⊆ B = ℓ^2(N) denote the set of all infinite
sequences in Z with only finitely many nonzero entries. Let
G := {x ↦ x + n : n ∈ Z∞} ⊆ Isom(B).
Then G acts properly discontinuously, since ‖(x + n) − x‖ ≥ 1 for all x ∈ B and n ∈ Z∞ \ {0}. On the other hand, G is not strongly discrete since ‖n‖ = 1 for infinitely many n ∈ Z∞. By Observation 11.1.1, these properties also hold for the Poincaré extension Ĝ ≤ Isom(H∞).
Example 13.3.2 (A nonelementary group G which acts properly discontinuously on a separable R-tree X but is not strongly discrete). Let X be the Cayley
graph of Γ = F∞ (Z) with respect to its standard generators, and let Φ : Γ →
Isom(X) be the natural action. Then G = Φ(Γ) acts properly discontinuously on
X. On the other hand, since by definition each generator g ∈ G satisfies ‖g‖ = 1,
G is not strongly discrete.
Applying a BIM representation gives:
Example 13.3.3 (A nonelementary group which acts properly discontinuously
on H∞ but is not strongly discrete). Let X and G be as in Example 13.3.2. Fix λ >
1 large to be determined, and let πλ : Isom(X) → Isom(H∞ ) be the corresponding
BIM representation. By Remark 13.1.4, the group πλ (G) is a nonelementary group
which acts isometrically on H∞ but is not strongly discrete. To complete the proof,
we must show that πλ (G) acts properly discontinuously. By Proposition 10.4.10,
it suffices to show that G = ∏_{i=1}^∞ πλ(γ_i)^Z is a global strongly separated Schottky
group. And indeed, if we denote the generators of Γ = F∞ (Z) by γi (i ∈ N), and if
we consider the balls U_i^± = B(Ψλ((γ_i)_±), 1/2) (taken with respect to the Euclidean metric), and if λ is sufficiently large, then the sets U_i = U_i^+ ∪ U_i^− form a global strongly separated Schottky system for G.
Remark 13.3.4. The groups of Examples 13.3.2-13.3.3 can be easily modified
to make the group G uncountable at the cost of separability: letting X be the Cayley graph of F_{#(R)}(Z) in Example 13.3.2 and applying (a modification of) Theorem 13.1.1 gives an action on H^{#(R)}.
Remark. By Proposition 9.3.1, the groups of Examples 13.3.2-13.3.3 are all
Poincaré regular and therefore satisfy dimH (Λur ) = ∞.
13.4. Poincaré irregular groups
We give six examples of Poincaré irregular groups, providing counterexamples
to many conceivable generalizations of Proposition 9.3.1.
Example 13.4.1 (A Poincaré irregular nonelementary group G acting on a
proper R-tree X which is weakly discrete but not COT-discrete). Let X be the
Cayley graph of V = F2 (Z) (equivalently, let X be the unique 3-regular unweighted
simplicial tree), and let G = Isom(X). Since #(Stab(G; e)) = ∞, G is not strongly
discrete, so by Proposition 5.2.7, G is also not COT-discrete. (The fact that G is
not COTD can also be deduced from Proposition 9.3.1, since we will soon show
that G is Poincaré irregular.)
On the other hand, suppose x ∈ X. Then either x ∈ V , or x = ((vx , wx ), tx )
for some (vx , wx ) ∈ E and tx ∈ (0, 1). In the first case, we observe that G(x) = V ,
while in the second we observe that
G(x) = {((v, w), tx ) : (v, w) ∈ E}.
In either case x is not an accumulation point of G(x). Thus G is weakly discrete.
To show that G is Poincaré irregular, we first observe that δ = ∞ since G is not strongly discrete. On the other hand, Proposition 8.2.4(iv) can be used to compute that δ̃ = log(2). (Alternatively, one may use Theorem 1.2.3 together with the fact that dim_H(∂X) = log(2).)
Remark. The group G in Example 13.4.1 is uncountable. However, if G is
replaced by a countable dense subgroup (cf. Remark 5.1.4) then the conclusions
stated above will not be affected. This remark applies also to Examples 13.4.2 and
13.4.4 below.
Applying a BIM representation to the group of Example 13.4.1 yields:
Example 13.4.2 (A Poincaré irregular nonelementary group acting irreducibly
on H∞ which is UOT-discrete but not COT-discrete). Let G ≤ Isom(X) be as in
Example 13.4.1 and let πλ : Isom(X) → Isom(H∞ ) ≡ O(L) be a BIM representation. Remark 13.1.4 shows that the group πλ (G) is Poincaré irregular and is not
COT-discrete. Note that it follows from either Proposition 5.2.7(ii) or Proposition
9.3.1 that πλ (G) is not weakly discrete, despite G being weakly discrete.
To complete the proof, we must show that πλ (G) is UOT-discrete. Let Ψλ :
X → H∞ ⊆ L be the BIM embedding corresponding to the BIM representation πλ ,
and write z = Ψλ (o); without loss of generality we may assume z = (1, 0), so that
Q(x) = ‖x‖^2 for all x ∈ z^⊥.
Now fix T = πλ(g) ∈ πλ(G) \ {id}; we will show that ‖T − I‖ ≥ min(√2, λ − 1) > 0. We consider two cases. If g(o) ≠ o, then ‖g‖ ≥ 1, which implies that
Figure 13.4.1. The point y is the center of the triangle
∆(o, x, g(x)). Both o and y are fixed by g. Intuitively, this means
that g (really, πλ (g)) must have a significant rotational component
in order to “swing up” the point x to the point g(x).
|B_Q(z, Tz)| ≥ λ and thus that ‖Tz − z‖ ≥ |B_Q(z, Tz − z)| ≥ λ − 1. So suppose
g(o) = o. Since g ≠ id, we have g(x) ≠ x for some x ∈ V; choose such an x so as to minimize ‖x‖. Letting y = [o, x]_{‖x‖−1}, the minimality of ‖x‖ implies that g(y) = y
(cf. Figure 13.4.1).
Let x̄ = Ψλ(x) and ȳ = Ψλ(y) (regarded as the vectors e_x, e_y ∈ L), so that T ȳ = ȳ but T x̄ ≠ x̄. Let w_1 = x̄ − λȳ and w_2 = Tw_1 = T x̄ − λȳ. An easy computation based on (13.1.1) and (2.2.2) gives B_Q(z, w_1) = B_Q(z, w_2) = B_Q(w_1, w_2) = 0 (cf. (13.1.4)). It follows that

‖(T − I)w_1‖ = ‖w_2 − w_1‖ = √(Q(w_2 − w_1)) = √(Q(w_2) + Q(w_1)) = √(2 Q(w_1)) = √2 ‖w_1‖,

and thus ‖T − I‖ ≥ √2.
Remark 13.4.3. Let G, πλ be as above and fix ξ ∈ ∂X. Then πλ (Gξ ) is a
focal group acting irreducibly on H∞ whose limit set is totally disconnected. This
contrasts with the finite-dimensional situation, where any nondiscrete group (and
thus any focal group) acting irreducibly on Hd is of the first kind [81, Theorem 2].
Example 13.4.4 (A Poincaré irregular nonelementary group G′ acting properly
discontinuously on a hyperbolic metric space X ′ ). Let G be the group described in
Example 13.4.1. Let X ′ = G and let
d′(g, h) := 1 ∨ d(g(o), h(o)) if g ≠ h, and d′(g, h) := 0 if g = h.
Since the orbit map X ′ ∋ g → g(o) ∈ X is a quasi-isometric embedding, (X ′ , d′ )
is a hyperbolic metric space. The left action of G on X ′ is isometric and properly
discontinuous. Denote its image in Isom(X′) by G′. Clearly δ_{G′} = δ_G and δ̃_{G′} = δ̃_G
(the Poincaré exponent and modified Poincaré exponent do not depend on whether
G is acting on X or on X ′ ), so G′ is Poincaré irregular.
The next set of examples have a somewhat different flavor.
Example 13.4.5 (A Poincaré irregular group G acting on Hd ). Fix 2 ≤ d < ∞,
and let G be any nondiscrete subgroup of Isom(Hd ). Then
δ̃_G = dim_H(Λ_r) ≤ dim_H(∂H^d) = d − 1.
On the other hand, since G is not strongly discrete we have δG = ∞. Thus G is
Poincaré irregular.
In Example 13.4.5, G could be a Lie subgroup with nontrivial connected component (e.g. G = Isom(H^d)), but this is not the only possibility: G can even be finitely generated, as we now show:
Lemma 13.4.6. Let H be a connected algebraic group which contains a copy
of the free group F_2(Z). Then there exist g_1, g_2 ∈ H such that G := ⟨g_1, g_2⟩ is a
nondiscrete group isomorphic to F2 (Z).
By Lemma 10.2.2, the group G cannot be a Schottky product; thus this lemma
provides an example of a free product which is not a Schottky product.
Proof. An orders-of-magnitude argument shows that there exists ε > 0 such
that for any h1 , h2 ∈ H with d(id, hi ) ≤ ε, we have
d(id, [h_1, h_2]) ≤ (1/2) max_i d(id, h_i),
where [h1 , h2 ] denotes the commutator of h1 and h2 . Thus for any g1 , g2 ∈ H such
that d(id, gi ) ≤ ε, letting
h_1 = g_1, h_2 = g_2, h_{n+2} = [h_n, h_{n+1}]
gives hn → id. But the elements hn are the images of nontrivial words in the free
group F2 (Z) under the natural homomorphism, so if this homomorphism is injective
then G is not discrete. For each element g ∈ F2 (Z), the set of homomorphisms
π : F2 (Z) → H such that π(g) = id is a proper algebraic subset of the set of all
homomorphisms, and therefore has measure zero. Thus for typical g_1, g_2 satisfying d(id, g_i) ≤ ε, G is a nondiscrete free group.
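The orders-of-magnitude argument can be illustrated concretely in the compact group SO(3) (our choice; any connected Lie group containing a free group would serve), where commutators of near-identity elements are quadratically closer to the identity:

```python
import numpy as np

# Our illustration of the orders-of-magnitude argument, in the compact group
# SO(3) with d(id, g) = ||g - I|| (Frobenius norm): commutators of elements
# near the identity are dramatically closer to the identity.

def rot(axis, theta):
    """Rotation by angle theta about coordinate axis 0, 1, or 2."""
    c, s = np.cos(theta), np.sin(theta)
    i, j = [(1, 2), (0, 2), (0, 1)][axis]
    R = np.eye(3)
    R[i, i] = c; R[j, j] = c; R[i, j] = -s; R[j, i] = s
    return R

def comm(a, b):
    """Commutator [a, b] = a b a^{-1} b^{-1} (inverse = transpose in SO(3))."""
    return a @ b @ a.T @ b.T

h1, h2 = rot(0, 0.01), rot(1, 0.01)
dists = []
for _ in range(3):
    h1, h2 = h2, comm(h1, h2)          # h_{n+2} = [h_n, h_{n+1}]
    dists.append(np.linalg.norm(h2 - np.eye(3)))
print(dists)  # each successive commutator is roughly quadratically smaller
```

The rapid decay of the printed distances is exactly the mechanism h_n → id used in the proof.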
Instead of a Lie subgroup of Isom(Hd ), we could also take a locally compact
subgroup of Isom(H∞ ); there are many interesting examples of such subgroups. In
particular, one such example is given by the following theorem:
Theorem 13.4.7 (Monod–Py representation theorem, [132, Theorems B and
C]). For any d ∈ N and 0 < t < 1, there exist an irreducible representation ρt :
Isom(Hd ) → Isom(H∞ ) and a ρt -equivariant embedding ft : bord Hd → bord H∞
such that
(13.4.1) d(f_t(x), f_t(y)) ≍+ t d(x, y) for all x, y ∈ H^d.
The pair (ρt , ft ) is unique up to conjugacy.
Example 13.4.8 (A Poincaré irregular nonelementary group G acting irreducibly on H∞ ). Fix d ∈ N and 0 < t < 1, and let ρt , ft be as in Theorem 13.4.7.
Let Γ = Isom(Hd ), and let G = ρt (Γ). As G is locally compact, the modified
Poincaré exponent of G can be computed using Definition 8.2.1:
δ̃_G = inf{ s ≥ 0 : ∫_G e^{−s‖g‖} dg < ∞ }
    = inf{ s ≥ 0 : ∫_Γ e^{−s‖ρ_t(γ)‖} dγ < ∞ }
    = inf{ s ≥ 0 : ∫_Γ e^{−st‖γ‖} dγ < ∞ }
    = δ̃_Γ / t = dim_H(Λ_Γ) / t = (d − 1) / t.
On the other hand, since G is convex-cobounded by [132, Theorem D], Theorem
12.2.12 shows that ΛG = Λr (G) = Λur (G). (It may be verified that the strong discreteness assumption is not needed for those directions.) Combining with Theorem
1.2.3, we have
dim_H(Λ_G) = dim_H(Λ_r(G)) = dim_H(Λ_ur(G)) = (d − 1)/t > d − 1 = dim_H(Λ_Γ).

In particular, it follows that the map f_t : Λ_Γ → Λ_G cannot be smooth or even Lipschitz. This contrasts with the smoothness of f_t in the interior (see [132, Theorem
C(2)]).
Remark. The Hausdorff dimension of ΛG may also be computed directly from
the formulas (13.4.1) and (3.6.4), which imply that the map ft ↿ ΛΓ and its inverse
are Hölder continuous of exponents t and 1/t, respectively. However, the computation above gives a nice application of the Poincaré irregular case of Theorem
1.2.3.
In Examples 13.4.5 and 13.4.8, the group G does not satisfy any of the discreteness conditions discussed in Chapter 5. Our next example satisfies a weak
discreteness condition:
Example 13.4.9 (A Poincaré irregular nonelementary COT-discrete group G
acting reducibly on H∞ which is not weakly discrete). Let Γ = F2 (Z) and let
ι1 : Γ → Isom(Hd ) ≡ O(Ld+1 ) be an injective homomorphism whose image is a
nondiscrete group; this is possible by Lemma 13.4.6. Define ι2 : Γ → O(HΓ ) by
letting
ι_2(γ)e_δ = e_{γδ}.
Note that ι_2(Γ) is COT-discrete, since ‖ι_2(γ)e_e − e_e‖ = √2 for all γ ∈ Γ \ {e}.
The direct sum ι := ι1 ⊕ ι2 : Γ → O(Ld+1 × HΓ ) is an isometric action of Γ on
H_{Γ ∪̇ {1,...,d}} ≡ H∞. Let G = ι(Γ). Since ι_1(Γ) is the restriction of G to the invariant totally geodesic subspace H^d, we have δ_G = δ_{ι_1(Γ)} = ∞ and δ̃_G = δ̃_{ι_1(Γ)} < ∞, so
G is Poincaré irregular. On the other hand, G is COT-discrete because ι2 (Γ) is.
Finally, the fact that G is not weakly discrete can be seen from either Observation
5.2.14 or Proposition 9.3.1.
13.5. Miscellaneous counterexamples
Our remaining examples include a COTD group which is not WD and a WD
group which is not MD.
Example 13.5.1 (A nonelementary COT-discrete group G which acts irreducibly on H∞ and satisfies δ_G = δ̃_G = ∞ but which is not weakly discrete). Let G_1 ≤ Isom(H∞) be as in Example 13.4.9, and let g be a loxodromic isometry whose fixed points are g_± = [e_0 ± e_e] ∈ ∂H_{Γ ∪̇ {1,...,d}} ⊆ P(L_{Γ ∪̇ {0,...,d}}). Then for n sufficiently large, the product G = ⟨G_1, (g^n)^Z⟩ is a global strongly separated Schottky
product. By Lemma 10.2.2, G is COT-discrete. Since G contains G1 , G is not
weakly discrete.
The fact that δ̃_G = ∞ follows from either Proposition 10.3.7(iii) or Proposition
9.3.1. So the only thing left to show is that G acts irreducibly. We assume that the
original group ι1 (Γ) acts irreducibly. Then if [V ] ⊆ H∞ is a G-invariant totally
geodesic subspace containing the limit set of G, then Ld+1 ⊆ V and so V =
Ld+1 ⊕ V2 for some V2 ⊆ HΓ . But [e0 + ee ] ∈ ΛG , so ee ∈ V2 . The G-invariance of
[V ] implies that V2 is ι2 (Γ)-invariant, and thus that V2 = HΓ and so [V ] = H∞ .
Remark. Example 13.5.1 gives a good example of how Theorem 1.2.3 gives
interesting information even when δ̃_G = ∞. Namely, in this example Theorem 1.2.3
tells us that dimH (Λr ) = dimH (Λur ) = ∞, which is not at all obvious simply from
looking at the group.
Example 13.5.2 (An elliptic group G acting on H∞ which is weakly discrete
but not moderately discrete). Let H = ℓ^2(Z), and let T ∈ O(H) be the shift map T(x) = (x_{n+1})_{n∈Z}. Let G be the cyclic group G = T^Z ≤ O(H) ≤ Isom(B∞). Since g(0) = 0 for all g ∈ G, G is not moderately discrete. On the other hand, fix x ∈ H \ {0}. Then T^n(x) → 0 weakly as n → ±∞, so #{n ∈ Z : ‖T^n(x) − x‖ ≤ ‖x‖/2} < ∞. Thus G is weakly discrete.
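The weak-limit computation behind this example is easy to reproduce for a finitely supported vector. In this Python sketch (ours), vectors in ℓ^2(Z) are modelled as dictionaries:

```python
import math

# Our coordinates for the weak-discreteness computation: a finitely supported
# x in l^2(Z) as a dict, and the shift (T^n x)_k = x_{k+n}. Since T^n(x) -> 0
# weakly, ||T^n(x) - x||^2 -> 2||x||^2, so only finitely many shifts stay near x.

x = {0: 1.0, 1: -0.5, 3: 2.0}                 # toy vector (our choice)
norm2 = sum(v * v for v in x.values())        # ||x||^2

def shifted_dist2(n):
    """||T^n(x) - x||^2."""
    support = set(x) | {k - n for k in x}
    return sum((x.get(k + n, 0.0) - x.get(k, 0.0)) ** 2 for k in support)

# Once |n| exceeds the diameter of the support, the supports are disjoint:
assert abs(shifted_dist2(10) - 2 * norm2) < 1e-12
close = [n for n in range(-50, 51)
         if math.sqrt(shifted_dist2(n)) <= math.sqrt(norm2) / 2]
print(close)  # -> [0]: only finitely many shifts lie within ||x||/2 of x
```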
CHAPTER 14
R-trees and their isometry groups
In this chapter we describe various ways to construct R-trees which admit
isometric actions. Section 14.1 describes the cone construction, in which one starts
with an ultrametric space (Z, D) and builds an R-tree X whose Gromov boundary
contains a point ∞ such that (Z, D) = (∂X \ {∞}, D∞,o). Sections 14.2 and 14.3
are preliminaries for Section 14.4, which describes the “stapling method” in which
one starts with a collection of R-trees (Xv )v∈V and staples them together to get
another R-tree. We give three very general examples of the stapling method in
which the resulting R-tree admits a natural isometric action.
We recall that whenever we have an example of an R-tree X with an isometric action Γ ≤ Isom(X), we can get a corresponding example of a group of
isometries of H∞ by applying a BIM representation (Theorem 13.1.1). Thus, the
examples of this chapter contribute to our goal of understanding the behavior of
isometry groups acting on H∞ .
14.1. Construction of R-trees by the cone method
The construction of hyperbolic metric spaces by cone methods has a long history; see e.g. [85, 1.8.A.(b)], [168], [31, §7]. The construction below does not
appear to be equivalent to any of those existing in the literature, although our formula (14.1.1) is similar to [31, 7.1] (with the difference that their + sign is replaced
by a ∨; this change only works because we assume that Z is ultrametric).
Let (Z, D) be a complete ultrametric space. Define an equivalence relation on Z × (0, ∞) by letting (z_1, r_1) ∼ (z_2, r_2) if D(z_1, z_2) ≤ r_1 = r_2, and denote the equivalence class of (z, r) by ⟨z, r⟩. Let X = Z × (0, ∞)/∼, and define a distance function on X:

(14.1.1) d(⟨z_1, r_1⟩, ⟨z_2, r_2⟩) = log( (r_1^2 ∨ r_2^2 ∨ D^2(z_1, z_2)) / (r_1 r_2) )
(cf. Corollary 3.6.23). We call (X, d) the cone of (Z, D). Note that
(14.1.2) ⟨⟨z_1, r_1⟩ | ⟨z_2, r_2⟩⟩_{⟨z_0,r_0⟩} = log( ((r_0 ∨ r_1 ∨ D(z_0, z_1)) (r_0 ∨ r_2 ∨ D(z_0, z_2))) / (r_0 (r_1 ∨ r_2 ∨ D(z_1, z_2))) ).
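Theorem 14.1.1 below shows that (X, d) is an R-tree. As a sanity check, the following Python sketch (our illustration, with toy data not from the text) verifies the four-point condition characterizing metric subspaces of R-trees for the cone metric (14.1.1) over a small ultrametric space:

```python
import itertools, math, random

# Our sanity check (toy data): the cone metric (14.1.1) over a small
# ultrametric space satisfies the four-point condition (0-hyperbolicity)
# that characterizes metric subspaces of R-trees.

random.seed(0)

def D(i, j):
    """A toy ultrametric on {0,...,4}: D(i, j) = 2**max(i, j) for i != j."""
    return 0.0 if i == j else float(2 ** max(i, j))

def cone_dist(p, q):
    """Formula (14.1.1): d(<z1, r1>, <z2, r2>)."""
    (z1, r1), (z2, r2) = p, q
    return math.log(max(r1, r2, D(z1, z2)) ** 2 / (r1 * r2))

points = [(z, random.uniform(0.5, 4.0)) for z in range(5)]

# Four-point condition: for any four points, the two largest of the three
# pairing sums are equal.
for p, q, s, t in itertools.combinations(points, 4):
    sums = sorted([cone_dist(p, q) + cone_dist(s, t),
                   cone_dist(p, s) + cone_dist(q, t),
                   cone_dist(p, t) + cone_dist(q, s)])
    assert abs(sums[2] - sums[1]) < 1e-9
print("four-point condition holds")
```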
Theorem 14.1.1. The cone (X, d) is an R-tree. Moreover, there exists a map
ι : Z → ∂X such that ∂X\ι(Z) consists of one point, ∞, and such that D = D∞,o ◦ι,
where o = ⟨z_0, 1⟩ for any z_0 ∈ Z.
Proof. Fix x_i = ⟨z_i, r_i⟩ ∈ X, i = 1, 2, let R = r_1 ∨ r_2 ∨ D(z_1, z_2), and let γ_i : [log(r_i), log(R)] → X be defined by γ_i(t) = ⟨z_i, e^t⟩. Then γ_i parameterizes a geodesic connecting x_i and ⟨z_i, R⟩. Since (z_1, R) ∼ (z_2, R), the geodesics γ_i can be concatenated, and their concatenation is a geodesic connecting x_1 and x_2.
be verified that the collection of such geodesics satisfies the conditions of Lemma
3.1.12. Thus (X, d) is an R-tree. (For an alternative proof that (X, d) is an R-tree,
see Example 14.5.1 below.)
Fix z0 ∈ Z. For all z1 , z2 ∈ Z and R > 0, (14.1.2) gives
lim_{r_1,r_2→0} ⟨⟨z_1, r_1⟩ | ⟨z_2, r_2⟩⟩_{⟨z_0,R⟩} = ∑_{i=1}^{2} [ log(√R) ∨ log(D(z_0, z_i)/√R) ] − log D(z_1, z_2).
In particular, if z_1 = z_2 = z, then this shows that the sequence (⟨z, 1/n⟩)_1^∞ is a Gromov sequence. Let ι(z) = [(⟨z, 1/n⟩)_1^∞]. Similarly, the sequence (⟨z_0, n⟩)_1^∞ is a Gromov sequence; let ∞ = [(⟨z_0, n⟩)_1^∞]. Then Lemma 3.4.22 gives
⟨ι(z_1)|ι(z_2)⟩_{⟨z_0,R⟩} = ∑_{i=1}^{2} [ log(√R) ∨ log(D(z_0, z_i)/√R) ] − log D(z_1, z_2)

and thus

− log D_{∞,o}(ι(z_1), ι(z_2)) = lim_{R→∞} [ ⟨ι(z_1)|ι(z_2)⟩_{⟨z_0,R⟩} − log(R) ] = − log D(z_1, z_2),

i.e. D_{∞,o} ≡ D.
To complete the proof we need to show that ∂X = ι(Z) ∪ {∞}. Indeed, fix
ξ = [(⟨z_n, r_n⟩)_1^∞] ∈ ∂X. Without loss of generality suppose that r_n → r ∈ [0, ∞] and D(z_0, z_n) → R ∈ [0, ∞]. If r = ∞ or R = ∞, then it follows from (14.1.2) that ⟨⟨z_n, r_n⟩|∞⟩_{⟨z_0,1⟩} → ∞, i.e. ξ = ∞. Otherwise, it follows from (14.1.2) that

∞ = lim_{n,m→∞} ⟨⟨z_n, r_n⟩|⟨z_m, r_m⟩⟩_{⟨z_0,1⟩} = 2 log(1 ∨ r ∨ R) − log( lim_{n,m→∞} (r_n ∨ r_m ∨ D(z_n, z_m)) ),

which implies that r_n ∨ r_m ∨ D(z_n, z_m) → 0 as n, m → ∞, i.e. r_n → 0 and (z_n)_1^∞ is a Cauchy sequence. Since Z is complete we can find a limit point z_n → z ∈ Z. Then (14.1.2) shows that ξ = ι(z).
Corollary 14.1.2. Every ultrametric space can be isometrically embedded into
an R-tree.
Proof. Let (Y, d) be an ultrametric space, and without loss of generality suppose that Y is complete. Let Z = Y, and let D(z_1, z_2) = e^{(1/2)d(z_1,z_2)}. Then (Z, D) is a complete ultrametric space. Let (X, d) be the cone of (Z, D); by Theorem 14.1.1, X is an R-tree. Now define an embedding ι : Y → X via ι(y) = ⟨y, 1⟩. Then

d(ι(y_1), ι(y_2)) = 0 ∨ log D^2(y_1, y_2) = d(y_1, y_2),

i.e. ι is an isometric embedding.
Remark 14.1.3. Corollary 14.1.2 can also be proven from [31, Theorem 4.1]
by verifying directly that an ultrametric space satisfies Gromov’s inequality with
an implied constant of zero, and then proving that every geodesic metric space
satisfying Gromov’s inequality with an implied constant of zero is an R-tree.
However, the proof of Corollary 14.1.2 yields the additional information that
the isometric image of (Y, d) is contained in a horosphere, i.e.
(14.1.3) B_∞(ι(y_1), ι(y_2)) = 0 for all y_1, y_2 ∈ Y,
where ∞ is as in Theorem 14.1.1.
Remark 14.1.4. The converse of the cone construction also holds: if (X, d) is
an R-tree and o ∈ X, ξ ∈ ∂X, then (∂X \ {ξ}, D_{ξ,o}) and ({x ∈ X : B_ξ(o, x) = 0}, d)
are both ultrametric spaces.
Proof. For all x, y ∈ E_ξ, we have D_ξ(x, y) = exp(B_ξ(o, C(x, y, ξ))), where
C(x, y, ξ) denotes the center of the geodesic triangle ∆(x, y, ξ) (cf. Definition
3.1.11). It can be verified by drawing appropriate diagrams (cf. Figure 3.3.1)
that for all x1 , x2 , x3 ∈ Eξ , there exists i such that C(xi , xj , ξ) = C(xi , xk , ξ) and
C(xj , xk , ξ) ∈ [ξ, C(xi , xj , ξ)] (where j, k are chosen so that {i, j, k} = {1, 2, 3}),
from which follows the ultrametric inequality for D_ξ. Since D_ξ = e^{(1/2)d} on {x ∈ X : B_ξ(o, x) = 0}, the space ({x ∈ X : B_ξ(o, x) = 0}, d) is also ultrametric.
Theorem 14.1.5. Given an unbounded function f : [0, ∞) → N, the following
are equivalent:
(A) f is right-continuous and satisfies
(14.1.4) for all R_1, R_2 ≥ 0 such that R_1 ≤ R_2, f(R_1) divides f(R_2).
(B) There exist an R-tree X (with a distinguished point o) and a parabolic group G ≤ Isom(X) such that N_{X,G} = f.
(C) There exist an R-tree X (with a distinguished point o) and a parabolic group G ≤ Isom(X) such that N_{E_p,G} = f, where p is the global fixed point of G.
236
14. R-TREES AND THEIR ISOMETRY GROUPS
Moreover, in (B) and (C) the R-tree X may be chosen to be proper.
Proof of (A) ⇒ (B). Let (λ_n)_1^∞ and (N_n)_1^∞ be sequences such that

f(ρ) = ∏_{n∈N : λ_n ≤ ρ} N_n.

The hypotheses on f guarantee that (N_n)_1^∞ can be chosen to be integers. Then for each n ∈ N, let Γ_n be a finite group of cardinality N_n, and let

Γ = { (γ_n)_1^∞ ∈ ∏_{n∈N} Γ_n : γ_n = e for all but finitely many n }.

For each (γ_n)_1^∞ ∈ Γ let

(14.1.5) ‖(γ_n)_1^∞‖ = max{ λ_n : n ∈ N, γ_n ≠ e },

with the understanding that ‖e‖ = 0. For each α, β ∈ Γ let d(α, β) = ‖α^{−1}β‖.
It is readily verified that d is an ultrametric on Γ. Thus by Corollary 14.1.2,
(Γ, d) can be isometrically embedded into an R-tree (X, d). Since Γ is proper, X
is proper. Moreover, the natural isometric action of Γ on itself extends naturally
to an isometric action on X. Denote this isometric action by φ, and let G = φ(Γ).
Then by (14.1.3), G is a parabolic group with global fixed point ∞. If we let o be
the image of e under the isometric embedding of Γ into X, then G satisfies
N_{X,G}(ρ) = #{γ ∈ Γ : ‖γ‖ ≤ ρ} = ∏_{n∈N : λ_n ≤ ρ} #(Γ_n) = f(ρ).
This completes the proof.
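The counting identity at the heart of this construction can be checked directly on a toy example. In the Python sketch below (our choices of λ_n and N_n, names ours), Γ is a finite truncation of the direct sum, ‖·‖ is the norm (14.1.5), and the orbital counting function reproduces the product formula for f:

```python
import itertools

# Our toy instance of the proof's construction: lambda_n and N_n = #(Gamma_n)
# chosen arbitrarily, Gamma realized as a finite direct product, and the norm
# (14.1.5) given by the largest lambda_n over the nontrivial coordinates.

lambdas = [1.0, 2.5, 4.0]
sizes = [2, 3, 2]                      # cardinalities N_n of the groups Gamma_n
elements = list(itertools.product(*[range(s) for s in sizes]))

def norm(gamma):
    """(14.1.5): max lambda_n over coordinates with gamma_n != e (0 for e)."""
    active = [lambdas[n] for n, g in enumerate(gamma) if g != 0]
    return max(active) if active else 0.0

def counting(rho):
    """Orbital counting function N(rho) = #{gamma : ||gamma|| <= rho}."""
    return sum(1 for gamma in elements if norm(gamma) <= rho)

def f(rho):
    """The product formula: product of #(Gamma_n) over n with lambda_n <= rho."""
    prod = 1
    for lam, s in zip(lambdas, sizes):
        if lam <= rho:
            prod *= s
    return prod

assert all(counting(rho) == f(rho) for rho in [0.0, 1.0, 2.0, 2.5, 3.0, 4.0, 5.0])
print([counting(r) for r in [0.0, 1.0, 2.5, 4.0]])  # -> [1, 2, 6, 12]
```

Note how the counting function jumps multiplicatively at each λ_n, which is exactly the divisibility condition (14.1.4).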
Proof of (B) ⇒ (A). For each ρ > 0 let
Gρ = {g ∈ G : d(o, g(o)) ≤ ρ}.
Since G(o) is an ultrametric space by Remark 14.1.4, Gρ is a subgroup of G. Thus
by Lagrange’s theorem, the function f(ρ) = N_{X,G}(ρ) = #(G_ρ) satisfies (14.1.4).
Since orbital counting functions are always right-continuous, this completes the
proof.
Proof of (A) ⇔ (C). Since the equation
N_{E_p,G}(R) = N_{X,G}(2 log(R))
holds for strongly hyperbolic spaces, including R-trees (Observation 6.2.10), and
since condition (A) is invariant under the transformation f ↦ (R ↦ f(2 log(R))),
the equivalence (A) ⇔ (B) directly implies the equivalence (A) ⇔ (C).
Remark 14.1.6. Applying a BIM representation (Theorem 13.1.1) shows that
if f : [0, ∞) → N is an unbounded function satisfying (A) of Theorem 14.1.5, then
there exists a parabolic group G ≤ Isom(H∞ ) such that NX,G = f . This improves
a previous result of two of the authors [73, Proposition A.2].
14.2. Graphs with contractible cycles
In Section 14.4, we will describe a method of stapling together a collection of
R-trees (Xv )v∈V based on some data. This data will include a collection of edge
pairings E ⊆ V × V \ {(v, v) : v ∈ V } that indicates which trees are to be stapled to
each other. In this section, we describe the criterion which this collection of edge
pairings needs to satisfy in order for the construction to work (Definition 14.2.1),
and we analyze that criterion.
Let (V, E) be an unweighted undirected graph, and let d_E denote the path metric of (V, E) (cf. Definition 3.1.1). A sequence (v_i)_0^n in V will be called a path if (v_i, v_{i+1}) ∈ E for all i < n. The path (v_i)_0^n is said to connect the vertices v_0 and v_n. The path (v_i)_0^n is called a geodesic if n = d_E(v_0, v_n), in which case it is denoted [v_0, v_n]. Note that a sequence is a geodesic if and only if [v_0, v_1] ∗ · · · ∗ [v_{n−1}, v_n] is a geodesic in the metrization X(V, E) (cf. Definition 3.1.1). Also, recall that a cycle in (V, E) is a finite sequence of distinct vertices v_1, . . . , v_n ∈ V, with n ≥ 3, such that (v_1, v_2), (v_2, v_3), . . . , (v_{n−1}, v_n), (v_n, v_1) ∈ E (cf. (3.1.4)).
Definition 14.2.1. The graph (V, E) is said to have contractible cycles if every cycle forms a complete graph, i.e. if for every cycle (v_i)_1^n we have (v_i, v_j) ∈ E for all i, j such that v_i ≠ v_j.
Standing Assumption 14.2.2. In the remainder of this section, (V, E) denotes
a connected graph with contractible cycles.
Lemma 14.2.3. For every v, w ∈ V there exists a unique geodesic [v, w] = (v_i)_0^n connecting v and w; moreover, if (w_j)_0^m is any path connecting v and w, then the vertices (v_i)_0^n appear in order (but not necessarily consecutively) in the sequence (w_j)_0^m.
Proof.
Claim 14.2.4. Let (v_i)_0^n be a geodesic, and let (w_j)_0^m be a path connecting v_0 and v_n. Suppose n ≥ 2. Then there exist i ∈ {1, . . . , n − 1} and j ∈ {1, . . . , m − 1} such that v_i = w_j.
Proof. By contradiction suppose not, and without loss of generality suppose that (w_j)_0^m is minimal with this property. Then the vertices (w_j)_0^m are distinct, since if we had w_{j_1} = w_{j_2} for some j_1 < j_2, we could replace (w_j)_0^m by
(w_0, . . . , w_{j_1−1}, w_{j_1} = w_{j_2}, w_{j_2+1}, . . . , w_m). Since n ≥ 2, it follows that the path (v_0, v_1, . . . , v_n = w_m, w_{m−1}, . . . , w_1, w_0 = v_0) is a cycle. But then (v_0, v_n) ∈ E, contradicting that (v_i)_0^n is a geodesic of length n ≥ 2.
⊳
Claim 14.2.5. Let (v_i)_0^n be a geodesic, and let (w_j)_0^m be a path connecting v_0 and v_n. Then the vertices (v_i)_0^n appear in order in the sequence (w_j)_0^m.
Proof. We proceed by induction on n. The cases n = 0, n = 1 are trivial. Suppose the claim is true for all geodesics of length less than n. By Claim 14.2.4, there exist i_0 ∈ {1, . . . , n − 1} and j_0 ∈ {1, . . . , m − 1} such that v_{i_0} = w_{j_0}. By the induction hypothesis, the vertices (v_i)_0^{i_0} appear in order in the sequence (w_j)_0^{j_0}, and the vertices (v_i)_{i_0}^n appear in order in the sequence (w_j)_{j_0}^m. Combining these facts yields the conclusion.
⊳
To finish the proof of Lemma 14.2.3, it suffices to observe that if (v_i)_0^n and (w_j)_0^m are two geodesics connecting the same vertices v and w, then by Claim 14.2.5 the vertices (v_i)_0^n appear in order in the sequence (w_j)_0^m, and the vertices (w_j)_0^m appear in order in the sequence (v_i)_0^n. It follows that (v_i)_0^n = (w_j)_0^m, so geodesics are unique.
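The uniqueness statement of Lemma 14.2.3 can also be tested computationally: on a graph with contractible cycles, a BFS that counts shortest paths should find exactly one geodesic between every pair of vertices. A Python sketch (ours, on a toy graph):

```python
from collections import deque

# Our toy check of Lemma 14.2.3: on a graph with contractible cycles (here a
# triangle, which is complete, with two pendant edges), BFS shortest-path
# counting finds exactly one geodesic between every pair of vertices.

E = {(0, 1), (1, 2), (0, 2), (0, 3), (2, 4)}
adj = {v: set() for v in range(5)}
for a, b in E:
    adj[a].add(b); adj[b].add(a)

def geodesic_count(src):
    """BFS returning, per vertex, the distance from src and the number of geodesics."""
    dist, count = {src: 0}, {src: 1}
    queue = deque([src])
    while queue:
        v = queue.popleft()
        for w in adj[v]:
            if w not in dist:
                dist[w] = dist[v] + 1; count[w] = count[v]; queue.append(w)
            elif dist[w] == dist[v] + 1:
                count[w] += count[v]
    return dist, count

for src in adj:
    _, count = geodesic_count(src)
    assert all(c == 1 for c in count.values())   # geodesics are unique
print("geodesics are unique")
```

Replacing the triangle by a 4-cycle (which is not complete, hence not contractible) makes the assertion fail, since opposite corners of a square are joined by two geodesics.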
Lemma 14.2.6 (Cf. Figure 14.2.1). Fix v_1, v_2, v_3 ∈ V distinct. Then either
(1) there exists w ∈ V such that for all i ≠ j, [v_i, v_j] = [v_i, w] ∗ [w, v_j], or
(2) there exists a cycle w_1, w_2, w_3 ∈ V such that for all i ≠ j, [v_i, v_j] = [v_i, w_i] ∗ [w_i, w_j] ∗ [w_j, v_j].
Proof. For each i = 1, 2, 3, let n_i be the number of initial vertices on which the geodesics [v_i, v_j] and [v_i, v_k] agree, i.e.

n_i = max{n : [v_i, v_j]_ℓ = [v_i, v_k]_ℓ for all ℓ = 0, . . . , n},

and let w_i = [v_i, v_j]_{n_i}. Here j, k are chosen such that {i, j, k} = {1, 2, 3}. Then
uniqueness of geodesics implies that the geodesics [w_i, w_j], i ≠ j, are disjoint except for their common endpoints. If (w_i)_1^3 are distinct, then the path [w_1, w_2] ∗ [w_2, w_3] ∗ [w_3, w_1] is a cycle, and since (V, E) has contractible cycles, this implies (w_1, w_2), (w_2, w_3), (w_3, w_1) ∈ E, completing the proof. Otherwise, we have w_i = w_j for some i ≠ j; letting w = w_i = w_j completes the proof.
Corollary 14.2.7 (Cf. Figure 14.2.2). Fix v1 , v2 , u ∈ V distinct such that
(v1 , v2 ) ∈ E. Then either v1 ∈ [u, v2 ], v2 ∈ [u, v1 ], or there exists w ∈ V such that
for each i = 1, 2, (w, vi ) ∈ E and w ∈ [u, vi ].
Proof. Write v3 = u, so that we can use the same notation as Lemma 14.2.6.
If we are in case (1), then the equation [v1 , v2 ] = [v1 , w] ∗ [w, v2 ] implies that
Figure 14.2.1. The two possibilities for a geodesic triangle in a
graph with contractible cycles. Lemma 14.2.6 states that either the
geodesic triangle looks like a triangle in an R-tree (right figure), or
there is 3-cycle in the “center” of the triangle (left figure).
v1
u
v1
v2
u
v2
v1
u
w
v2
Figure 14.2.2. When the vertices v1 and v2 are adjacent, Corollary 14.2.7 describes three possible pictures for the geodesic triangle ∆(u, v1 , v2 ). In the rightmost figure, w is the vertex adjacent
to both v1 and v2 from which the paths [u, v1 ] and [u, v2 ] diverge.
w ∈ {v1 , v2 }, and so either v1 = w ∈ [u, v2 ] or v2 = w ∈ [u, v1 ]. If we are in case
2, then the equation [v1 , v2 ] = [v1 , w1 ] ∗ [w1 , w2 ] ∗ [w2 , v2 ] implies that w1 = v1 and
w2 = v2 . Letting w = w3 completes the proof.
14.3. The nearest-neighbor projection onto a convex set
Let X be an R-tree, and let A ⊆ X be a nonempty closed convex set. Since
X is a CAT(-1) space, for each z ∈ X there is a unique point π(z) ∈ A such that
d(z, π(z)) = d(z, A), and the map z → π(z) is semicontracting (see e.g. [39]). Since
X is an R-tree, we can say more about this nearest-neighbor projection map π, as
well as providing a simpler proof of its existence. In the following theorems, X
denotes an R-tree.
Lemma 14.3.1. Let A ⊆ X be a nonempty closed convex set. Then for each
z ∈ X there exists a unique point π(z) ∈ A such that for all x ∈ A, π(z) ∈ [z, x].
Moreover, for all z1 , z2 ∈ X, we have
(14.3.1)  d(π(z1), π(z2)) = 0 ∨ (d(z1, z2) − d(z1, A) − d(z2, A)).
Proof. Since A is nonempty and closed, there exists a point π(z) ∈ A such that [z, π(z)] ∩ A = {π(z)}. Fix x ∈ A. Since C(x, z, π(z)) ∈ [z, π(z)] ∩ [x, π(z)] ⊆ [z, π(z)] ∩ A, we get C(x, z, π(z)) = π(z), i.e. ⟨x|z⟩_{π(z)} = 0, i.e. π(z) ∈ [z, x]. This completes the proof of existence; uniqueness is trivial.
To demonstrate the equation (14.3.1), we consider two cases:
Case 1: If [z1, z2] ∩ A ≠ ∅, then π(z1) and π(z2) both lie on the geodesic [z1, z2], so d(π(z1), π(z2)) = d(z1, z2) − d(z1, A) − d(z2, A) ≥ 0.
Case 2: Suppose that [z1, z2] ∩ A = ∅; we claim that π(z1) = π(z2). Indeed, by the definition of π(z2) we have π(z2) ∈ [z2, π(z1)], and by assumption we have π(z2) ∉ [z1, z2], so we must have π(z2) ∈ [z1, π(z1)]. But from the definition of π(z1), this can only happen if π(z1) = π(z2). The proof is completed by noting that the triangle inequality gives d(z1, z2) − d(z1, A) − d(z2, A) = d(z1, z2) − d(z1, π(z1)) − d(z2, π(z1)) ≤ 0.
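For intuition, formula (14.3.1) can be checked by hand in the simplest R-tree X = R, where a nonempty closed convex set is a closed interval and the nearest-neighbor projection is clamping. The following sketch is only an illustration of the formula; the helper names are ours, not from the text.

```python
# Toy illustration (not part of the monograph): in the R-tree X = R, a
# nonempty closed convex set A is a closed interval [a, b], the nearest-
# neighbor projection is clamping, and formula (14.3.1) reads
#   d(proj(z1), proj(z2)) = max(0, d(z1, z2) - d(z1, A) - d(z2, A)).

def proj(z, a, b):
    """Nearest-neighbor projection of z onto the interval A = [a, b]."""
    return min(max(z, a), b)

def dist_to_A(z, a, b):
    """Distance from z to A = [a, b]."""
    return abs(z - proj(z, a, b))

a, b = -1.0, 2.0
for z1 in [-5.0, -0.5, 0.0, 3.0, 7.5]:
    for z2 in [-5.0, -0.5, 0.0, 3.0, 7.5]:
        lhs = abs(proj(z1, a, b) - proj(z2, a, b))
        rhs = max(0.0, abs(z1 - z2) - dist_to_A(z1, a, b) - dist_to_A(z2, a, b))
        assert abs(lhs - rhs) < 1e-12  # (14.3.1) holds in every case
```

When z1 and z2 lie on the same side of A (Case 2 of the proof), the right-hand side clips to 0, matching π(z1) = π(z2).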
Lemma 14.3.2. Let A1, A2 ⊆ X be closed convex sets such that A1 ∩ A2 ≠ ∅. For each i let πi : X → Ai denote the nearest-neighbor projection map. Then for all z ∈ X, either π1(z) ∈ A2 or π2(z) ∈ A1. In particular, π1(A2) ⊆ A1 ∩ A2.
Proof. Let x1 = π1 (z) and x2 = π2 (z), and fix y ∈ A1 ∩ A2 . By Lemma
14.3.1, x1 , x2 ∈ [z, y]. Without loss of generality assume d(z, x1 ) ≤ d(z, x2 ), so that
x2 ∈ [x1 , y]. Since A1 is convex, x2 ∈ A1 .
Lemma 14.3.3. Let A1, A2 ⊆ X be closed convex sets such that A1 ∩ A2 ≠ ∅. Then A1 ∪ A2 is convex.
Proof. It suffices to show that if x1 ∈ A1 and x2 ∈ A2, then [x1, x2] ⊆ A1 ∪ A2. Since x2 ∈ A2, Lemma 14.3.1 shows that [x1, x2] contains the point π2(x1). By Lemma 14.3.2, π2(x1) ∈ A1 ∩ A2. But then the two subsegments [x1, π2(x1)] and [π2(x1), x2] are contained in A1 ∪ A2, so the entire geodesic [x1, x2] is contained in A1 ∪ A2.
14.4. Constructing R-trees by the stapling method
We now describe the “stapling method” for constructing R-trees. The following
definition is phrased for arbitrary metric spaces.
Definition 14.4.1. Let (V, E) be an unweighted undirected graph, let (Xv)_{v∈V} be a collection of metric spaces, and for each (v, w) ∈ E fix a set A(v, w) ⊆ Xv and an isometry ψ_{v,w} : A(v, w) → A(w, v) such that ψ_{w,v} = ψ_{v,w}^{−1}. Let ∼ be the equivalence relation on ∐_{v∈V} Xv defined by the relations

x ∼ ψ_{v,w}(x)  ∀(v, w) ∈ E, ∀x ∈ A(v, w).

Then the stapled union of the collection (Xv)_{v∈V} with respect to the sets (A(v, w))_{(v,w)∈E} and the bijections (ψ_{v,w})_{(v,w)∈E} is the set

X = ∐^{st}_{v∈V} Xv := ( ∐_{v∈V} Xv ) / ∼,

equipped with the path metric

(14.4.1)  d(⟨v, x⟩, ⟨w, y⟩) = inf { ∑_{i=0}^n d_{vi}(xi, yi) : v0, . . . , vn ∈ V; v0 = v, vn = w; x0 = x, yn = y; (vi, v_{i+1}) ∈ E, yi ∈ A(vi, v_{i+1}), x_{i+1} = ψ_{vi,v_{i+1}}(yi) ∀i < n }.
Note that d is finite as long as the graph (V, E) is connected. We leave it to the
reader to verify that in this case, d is a metric on X.
Example 14.4.2. If for each (v, w) ∈ E we fix a point p(v, w) ∈ Xv , then we
can let A(v, w) = {p(v, w)} and let ψv,w be the unique bijection between {p(v, w)}
and {p(w, v)}.
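In the situation of Example 14.4.2 the infimum in (14.4.1) is easy to evaluate: a path must walk to the staple point of each edge it crosses. A minimal sketch, assuming copies of R stapled along a path graph 0 — 1 — · · · — n (the function and variable names are illustrative, not from the text):

```python
# A sketch of formula (14.4.1) in the simplest situation: copies of R
# indexed by a path graph 0 -- 1 -- ... -- n, each staple a single point
# as in Example 14.4.2.  Since the edge path between two vertices of a
# path graph is unique, the infimum in (14.4.1) can be computed directly.

def stapled_distance(v, x, w, y, p):
    """Distance between <v, x> and <w, y> in the stapled union, where
    p[(i, j)] is the staple point of edge (i, j) inside the copy X_i."""
    if v == w:
        return abs(x - y)
    step = 1 if w > v else -1
    total, cur = 0.0, x
    for i in range(v, w, step):
        total += abs(cur - p[(i, i + step)])  # walk to the staple in X_i
        cur = p[(i + step, i)]                # cross over into X_{i+step}
    return total + abs(cur - y)               # finish inside X_w

# Three copies of R: X_0 -- X_1 -- X_2, stapled at the indicated points.
p = {(0, 1): 2.0, (1, 0): 0.0, (1, 2): 5.0, (2, 1): -1.0}
print(stapled_distance(0, 0.0, 2, 3.0, p))  # |0-2| + |0-5| + |-1-3| = 11.0
```

Note that backtracking edge paths only add length, so the unique monotone path realizes the infimum here.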
Intuitively, the stapled union ∐^{st}_{v∈V} Xv is the metric space that results from starting with the spaces (Xv)_{v∈V} and, for each (v, w) ∈ E, stapling the set A(v, w) ⊆ Xv to the set A(w, v) ⊆ Xw along the bijection ψ_{v,w}.
Definition 14.4.3 (Cf. Figure 14.4.1). We say that the consistency condition is satisfied if for every 3-cycle u, v, w ∈ V , we have
(I) A(u, v) ∩ A(u, w) ≠ ∅, and
(II) for all z ∈ A(u, v) ∩ A(u, w), we have
(a) ψ_{u,w}(z) ∈ A(w, v) and
(b) ψ_{w,v} ψ_{u,w}(z) = ψ_{u,v}(z).
Obviously, the consistency condition is satisfied whenever (V, E) has no cycles.
Theorem 14.5.5 and Examples 14.5.1-14.5.10 below show how it can be satisfied in
many reasonable circumstances. Now we prove the main theorem of this chapter:
for a connected graph with contractible cycles, the consistency condition implies
that the stapled union of R-trees is an R-tree, if the staples are taken along convex
sets. More precisely:
Figure 14.4.1. In this diagram, the arrows represent the bijections ψvi ,vj , while the ovals represent the sets A(vi , vj ). The consistency condition (Definition 14.4.3) states that (I) each of the
shaded regions is nonempty, (IIa) shaded regions go to shaded regions, and (IIb) if you start in a shaded region and traverse the
diagram, then you will get back to where you started.
Theorem 14.4.4. Let (V, E) be a connected graph with contractible cycles, let (Xv)_{v∈V} be a collection of R-trees, and for each (v, w) ∈ E let A(v, w) ⊆ Xv be a nonempty closed convex set and let ψ_{v,w} : A(v, w) → A(w, v) be an isometry such that ψ_{w,v} = ψ_{v,w}^{−1}. Assume that the consistency condition is satisfied. Then
(i) The stapled union X = ∐^{st}_{v∈V} Xv is an R-tree.
(ii) The infimum in (14.4.1) is achieved when
(a) (vi)_0^n = [v, w], and
(b) for each i < n, yi is the image of xi under the nearest-neighbor projection to A(vi, v_{i+1}).
Proof. We prove part (ii) first. For each (v, w) ∈ E, let π_{v,w} : Xv → A(v, w) be the nearest-neighbor projection; then π_{v,w} is 1-Lipschitz. Now fix v ∈ V arbitrary. We define a map πv : X → Xv as follows. Fix x = ⟨w, x⟩ ∈ X, so that x ∈ Xw. Let (vi)_0^n = [v, w], and let

πv(x) = πv(w, x) = ψ_{v1,v0} π_{v1,v0} · · · ψ_{vn,v_{n−1}} π_{vn,v_{n−1}}(x).
Claim 14.4.5. The map πv is well-defined.
Proof. Fix (u, w) ∈ E and x ∈ A(u, w) and let y = ψ_{u,w}(x); we need to show that πv(u, x) = πv(w, y). If w ∈ [v, u] or u ∈ [v, w] then the equality is trivial, so by Corollary 14.2.7 we are reduced to proving the case where there exists v′ ∈ V such that (v′, w), (v′, u) ∈ E and v′ ∈ [v, w] ∩ [v, u]. We have πv(u, x) = πv(v′, ψ_{u,v′} π_{u,v′}(x)) and πv(w, y) = πv(v′, ψ_{w,v′} π_{w,v′}(y)), so to complete the proof it suffices to show that

(14.4.2)  ψ_{u,v′} π_{u,v′}(x) = ψ_{w,v′} π_{w,v′}(y).

Since u, v′, w form a 3-cycle, part (I) of the consistency condition gives A(u, v′) ∩ A(u, w) ≠ ∅. By Lemma 14.3.2, we have x′ := π_{u,v′}(x) ∈ A(u, v′) ∩ A(u, w). Applying part (IIa) of the consistency condition gives y′′ := ψ_{u,w}(x′) ∈ A(w, v′) and thus d(x, A(u, v′)) = d(x, x′) = d(y, y′′) ≤ d(y, A(w, v′)). A symmetric argument gives d(y, A(w, v′)) ≤ d(x, A(u, v′)), so we have equality and thus y′′ = y′ := π_{w,v′}(y). Applying part (IIb) of the consistency condition gives ψ_{u,v′}(x′) = ψ_{w,v′}(y′), i.e. (14.4.2) holds.
⊳
Since for each w ∈ V the map Xw ∋ x 7→ πv (w, x) ∈ Xv is 1-Lipschitz, the
map πv : X → Xv is also 1-Lipschitz.
Fix x = ⟨v, x⟩, y = ⟨w, y⟩ ∈ X. Let (vi)_0^n, (xi)_0^n, and (yi)_0^n be as in (ii), i.e. (vi)_0^n = [v, w], where x0 = x, yi = π_{vi,v_{i+1}}(xi) ∀i < n, x_{i+1} = ψ_{vi,v_{i+1}}(yi) ∀i < n, and yn = y.
We define a function f : X → R^{n+1} as follows: for each z ∈ X, we let

f(z) = ( d_{vi}(xi, π_{vi}(z)) )_{i=0}^n.

Then f is 1-Lipschitz, when R^{n+1} is interpreted as having the max norm.
Claim 14.4.6. Fix z ∈ X and i = 0, . . . , n − 1. If f_{i+1}(z) > 0, then fi(z) ≥ ri := d_{vi}(xi, yi).
Proof. By contradiction, suppose that f_{i+1}(z) > 0 but fi(z) < d_{vi}(xi, yi). Then z_{i+1} := π_{v_{i+1}}(z) ≠ x_{i+1}, but zi := π_{vi}(z) ∈ B(xi, ri) \ {yi}. In particular, π_{vi,v_{i+1}}(zi) = yi, so

(14.4.3)  z_{i+1} ≠ ψ_{vi,v_{i+1}} π_{vi,v_{i+1}}(zi).

On the other hand, since zi ∉ A(vi, v_{i+1}), we have

(14.4.4)  zi ≠ ψ_{v_{i+1},vi} π_{v_{i+1},vi}(z_{i+1}).
Write z = ⟨w, z⟩. Then the definition of the maps (πv)_{v∈V} together with (14.4.3), (14.4.4) implies that vi ∉ [w, v_{i+1}] and v_{i+1} ∉ [w, vi]. Thus by Corollary 14.2.7, there exists w′ ∈ V such that (w′, vi), (w′, v_{i+1}) ∈ E and w′ ∈ [w, vi] ∩ [w, v_{i+1}]. Let z′ = π_{w′}(z), so that ψ_{w′,vi} π_{w′,vi}(z′) = zi and ψ_{w′,v_{i+1}} π_{w′,v_{i+1}}(z′) = z_{i+1}. Let F = ψ_{vi,w′}(B(xi, ri) ∩ A(vi, w′)), and let π_F : X_{w′} → F be the nearest-neighbor projection map. By Lemma 14.3.2, either π_F(z′) ∈ A(w′, v_{i+1}) or π_{w′,v_{i+1}}(z′) ∈ F.
Case 1: π_F(z′) ∈ A(w′, v_{i+1}). Since F ⊆ A(w′, vi) and π_{w′,vi}(z′) ∈ F, we have π_{w′,vi}(z′) = π_F(z′) ∈ A(w′, v_{i+1}), and then part (IIa) of the consistency condition gives zi = ψ_{w′,vi} π_{w′,vi}(z′) ∈ A(vi, v_{i+1}), a contradiction.
Case 2: π_{w′,v_{i+1}}(z′) ∈ F. Since F ⊆ A(w′, vi), part (IIa) of the consistency condition gives z_{i+1} = ψ_{w′,v_{i+1}} π_{w′,v_{i+1}}(z′) ∈ A(v_{i+1}, vi) and ψ_{v_{i+1},vi}(z_{i+1}) ∈ ψ_{w′,vi}(F) ⊆ B(xi, ri). But then ψ_{v_{i+1},vi}(z_{i+1}) = yi and thus z_{i+1} = x_{i+1}, a contradiction.
⊳
Thus f(X) is contained in the set

S = {(ti)_0^n : ∀i = 0, . . . , n − 1, t_{i+1} > 0 ⇒ ti ≥ ri} ⊆ R^{n+1}.

Now the function h : S → R defined by

h((ti)_0^n) = max { r0 + · · · + r_{i−1} + ti : i ∈ {0, . . . , n}, ti > 0 if i > 0 }

is 1-Lipschitz with respect to the path metric of the max norm. Thus, since X is a path-metric space, h ◦ f : X → R is 1-Lipschitz. Thus
d(x, y) ≥ h ◦ f(y) − h ◦ f(x) ≥ r0 + · · · + rn = ∑_{i=0}^n d_{vi}(xi, yi),

completing the proof of (ii).
For each x = ⟨v, x⟩, y = ⟨w, y⟩ ∈ X, let

[x, y] = [x0, y0]_{v0} ∗ · · · ∗ [xn, yn]_{vn},

where ∗ denotes the concatenation of geodesics, and (vi)_0^n, (xi)_0^n, and (yi)_0^n are as in (ii). Here [x, y]_v denotes the image of the geodesic [x, y] under the map Xv ∋ z 7→ ⟨v, z⟩ ∈ X. Then by (ii), [x, y] is a geodesic connecting x and y. Thus we have a family of geodesics ([x, y])_{x,y∈X}.
We now prove that X is an R-tree, using the criteria of Lemma 3.1.12. Condition (BII) is readily verified. So to complete the proof, we must demonstrate (BIII). Fix x1, x2, x3 ∈ X distinct, and we show that two of the geodesics [xi, xj] have a nontrivial intersection. Write xi = ⟨vi, xi⟩. If there is more than one possible choice, choose (vi)_1^3 so as to minimize ∑_{i≠j} d_E(vi, vj).
Let w1 , w2 , w3 ∈ V be as in Lemma 14.2.6, with the convention that w1 = w2 =
w3 = w if we are in Case 1 of Lemma 14.2.6.
Case A: For some i, vi ≠ wi. Choose j, k such that i, j, k are distinct. Then there exists a vertex w ∈ V adjacent to vi such that w ∈ [vi, vj] ∩ [vi, vk]. The choice of (vi)_1^3 guarantees that xi ∉ A(vi, w), so that [xi, π_{vi,w}(xi)]_{vi} forms a common initial segment of the geodesics [xi, xj] and [xi, xk].
Case B: For all i, vi = wi. Then either v1 = v2 = v3, or v1, v2, v3 form a cycle.
Case B1: Suppose that v1 = v2 = v3 = v. Then since Xv is an R-tree, there exist distinct i, j, k ∈ {1, 2, 3} such that the geodesics [xi, xj]_v and [xi, xk]_v have a common initial segment.
Case B2: Suppose that v1, v2, v3 form a cycle. Then by part (I) of the consistency condition A(v1, v2) ∩ A(v1, v3) ≠ ∅, so by Lemma 14.3.3 the set F = A(v1, v2) ∪ A(v1, v3) is convex. But the choice of (vi)_1^3 guarantees that x1 ∉ F, so that [x1, π_F(x1)]_{v1} forms a common initial segment of the geodesics [x1, x2] and [x1, x3].
14.5. Examples of R-trees constructed using the stapling method
We give three examples of ways to construct R-trees using the stapling method
so that the resulting R-tree admits a natural isometric action.
Example 14.5.1 (Cone construction again). Let (Z, D) be a complete ultrametric space, let V = Z and E = V × V \ {(v, v) : v ∈ V}, and for each v ∈ V let Xv = R. For each v, w ∈ V let A(v, w) = [log D(v, w), ∞), and let ψ_{v,w} be the identity map. Since (V, E) is a complete graph, it is connected and has contractible cycles. Part (IIa) of the consistency condition is equivalent to the ultrametric inequality for D, while parts (I) and (IIb) are obvious. Thus we can consider the stapled union X = ∐^{st}_{v∈V} Xv. One can verify that the stapled union is isometric to the R-tree X considered in the proof of Theorem 14.1.1. Indeed, the map ⟨z, t⟩ 7→ ⟨z, e^t⟩ provides the desired isometry. Note that the map ι constructed in Theorem 14.1.1 can be described in terms of the stapled union as follows: for each z ∈ Z, ι(z) is the image of −∞ under the isometric embedding of Xz ≡ R into X. (The image of +∞ is ∞.)
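Concretely, for v ≠ w a path from ⟨v, t⟩ to ⟨w, s⟩ must climb to a common height h ≥ log D(v, w) before it can cross over, which gives d(⟨v, t⟩, ⟨w, s⟩) = 2 max(t, s, log D(v, w)) − t − s. A small numerical sketch (the three-point ultrametric below is our own toy example, not from the text):

```python
import math
from itertools import product

# Illustration of Example 14.5.1 with a toy three-point ultrametric space.
# A path from <v, t> to <w, s> must climb to a common height
# h >= log D(v, w) before crossing, so
#   d(<v, t>, <w, s>) = 2*max(t, s, log D(v, w)) - t - s   for v != w.

D = {("a", "b"): 1.0, ("a", "c"): 2.0, ("b", "c"): 2.0}  # an ultrametric

def dist_Z(v, w):
    return D[tuple(sorted((v, w)))]

def d(p, q):
    (v, t), (w, s) = p, q
    if v == w:
        return abs(t - s)
    return 2 * max(t, s, math.log(dist_Z(v, w))) - t - s

pts = [(v, t) for v in "abc" for t in (-1.0, 0.0, 0.7)]
for p, q, r in product(pts, repeat=3):
    assert d(p, q) <= d(p, r) + d(r, q) + 1e-12  # triangle inequality holds
```

The ultrametric inequality for D is what prevents a detour through a third copy of R from being shorter than the direct path.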
Our next example is a type of Schottky product which we call a “pure Schottky
product”. To describe it, it will be convenient to introduce the following terminology:
Definition 14.5.2. If Γ is a group, a function ∥·∥ : Γ → [0, ∞) is called tree-geometric if there exist an R-tree X, a distinguished point o ∈ X, and an isometric action φ : Γ → Isom(X) such that

∥φ(γ)∥ = ∥γ∥  ∀γ ∈ Γ.
Example 14.5.3. Theorem 14.1.5 gives a sufficient but not necessary condition
for a function to be tree-geometric.
Remark 14.5.4. If the group Γ is countable, then whenever ∥·∥ : Γ → [0, ∞) is a tree-geometric function, the R-tree X can be chosen to be separable.
Proof. Without loss of generality, we may replace X by the convex hull of
Γ(o).
Theorem 14.5.5 (Cf. Figure 14.5.1). Let (Hj)_{j∈J} be a (possibly infinite) collection of groups, and for each j ∈ J let ∥·∥ : Hj → [0, ∞) be a tree-geometric function. Then the function ∥·∥ : G = ∗_{j∈J} Hj → [0, ∞) defined by

(14.5.1)  ∥h1 · · · hn∥ := ∥h1∥ + · · · + ∥hn∥

(assuming h1 · · · hn is given in reduced form) is a tree-geometric function.
Proof. For each j ∈ J write Hj ≤ Isom(Xj) and ∥h∥ = d(oj, h(oj)) ∀h ∈ Hj, for some R-tree Xj and some distinguished point oj ∈ Xj. Let V = J × G, and for each v = (j, g) ∈ V let Xv = Xj. Let

E1 = {((j, g), (k, g)) : j ≠ k, g ∈ G},
E2 = {((j, g), (j, gh)) : j ∈ J, g ∈ G, h ∈ Hj \ {e}},
E = E1 ∪ E2.
Claim 14.5.6. Any cycle in (V, E) is contained in a complete graph of one of
the following forms:
(14.5.2)
{(j, gh) : h ∈ Hj } (j ∈ J, g ∈ G fixed),
(14.5.3)
{(j, g) : j ∈ J} (g ∈ G fixed).
In particular, (V, E) is a graph with contractible cycles.
Proof. Let (vi )n0 be a cycle in V , and for each i = 0, . . . , n − 1 let ei =
(vi , vi+1 ). By contradiction suppose that (vi )n0 is not contained in a complete graph
of one of the forms (14.5.2),(14.5.3). Without loss of generality suppose that (vi )n0
is minimal with this property. Then no two consecutive edges ei , ei+1 can lie in the
same set Ek . After reindexing if necessary, we find ourselves in the position that
ei ∈ E2 for i even and ei ∈ E1 for i odd. Write v0 = (j1 , g); then
v0 = (j1 , g), v1 = (j1 , gh1 ), v2 = (j2 , gh1 ), v3 = (j2 , gh1 h2 ), [etc.]
with hi ∈ H_{ji}, ji ≠ j_{i+1}. Since G is a free product, this contradicts that vn = v0.
⊳
For each (v, w) = ((j, g), (k, g)) ∈ E1, we let A(v, w) = {oj} and we let ψ_{v,w}(oj) = ok. For each (v, w) = ((j, g), (j, gh)) ∈ E2, we let A(v, w) = Xj and we let ψ_{v,w} = h^{−1}. Claim 14.5.6 then implies the consistency condition. Consider the stapled union X = ∐^{st}_{(j,g)∈V} Xj = ( ∐_{(j,g)∈V} Xj ) / ∼. Elements of ∐_{(j,g)∈V} Xj consist of pairs ((j, g), x), where g ∈ G and x ∈ Xj. We will abuse notation by writing ((j, g), x) = (j, g, x) and ⟨(j, g), x⟩ = ⟨j, g, x⟩. Then the “staples” are given by the relations

(j, g, oj) ∼ (k, g, ok)  [g ∈ G, j, k ∈ J],
(j, gh, x) ∼ (j, g, h(x))  [g ∈ G, j ∈ J, h ∈ Hj, x ∈ Xj].
Now consider the following action of G on ∐_{(j,g)∈V} Xj:

g1(j, g2, x) = (j, g1 g2, x).

Since the “staples” are preserved by this action, it descends to an action on the stapled union X. To finish the proof, we need to show that d(o, g(o)) = ∥g∥ ∀g ∈ G, where o = ⟨j, e, oj⟩ ∀j ∈ J, and ∥·∥ is given by (14.5.1). Indeed, fix g ∈ G and write g = h1 · · · hn, where for each i = 1, . . . , n, hi ∈ H_{ji} \ {e} for some ji ∈ J, and ji ≠ j_{i+1} ∀i. For each i = 0, . . . , n let gi = h1 · · · hi, and for each i = 1, . . . , n let
vi^{(1)} = (ji, g_{i−1}),  vi^{(2)} = (ji, gi).

Then the sequence (v1^{(1)}, v1^{(2)}, v2^{(1)}, . . . , vn^{(1)}, vn^{(2)}) is a geodesic whose endpoints are (j1, e) and (jn, g). We compute the sequences (xi^{(k)}), (yi^{(k)}) as in Theorem 14.4.4(ii):

xi^{(1)} = o_{ji},  yi^{(1)} = o_{ji},  xi^{(2)} = hi^{−1}(o_{ji}),  yi^{(2)} = o_{ji}.

It follows that

d(o, g(o)) = ∑_{i=1}^n ∑_{k=1}^2 d(xi^{(k)}, yi^{(k)}) = ∑_{i=1}^n ∥hi∥ = ∥g∥,

which completes the proof.
Definition 14.5.7. Let (Hj)_{j∈J} and G be as in Theorem 14.5.5. If we write G ≤ Isom(X) and ∥g∥ = d(o, g(o)) ∀g ∈ G for some R-tree X and some distinguished point o ∈ X, then we call (X, G) the pure Schottky product of (Hj)_{j∈J}. (It is readily verified that every pure Schottky product is a Schottky product.)
Proposition 14.5.8. The Poincaré set of a pure Schottky product H1 ∗ H2 can
be computed by the formula
s ∈ ∆(H1 ∗ H2 ) ⇔ (Σs (H1 ) − 1)(Σs (H2 ) − 1) ≥ 1.
Figure 14.5.1. The Cayley graph of F2 (Z), interpreted as the
pure Schottky product H1 ∗ H2 , where H1 = H2 = Z is interpreted as acting on X1 = X2 = R by translation. The horizontal
lines correspond to copies of R which correspond to vertices of the
form (1, g), while the vertical lines correspond to copies of R which
correspond to vertices of the form (2, g). The intersection points
between horizontal and vertical lines are the staples which hold the
tree together.
Proof. Let

E = (H1 \ {id})(H2 \ {id}),

so that

G = ⋃_{n≥0} H2 E^n H1.

Then by (14.5.1), we have for all s ≥ 0

Σs(G) = ∑_{g∈G} e^{−s∥g∥} = ∑_{n=0}^∞ ∑_{h0∈H2} ∑_{g1,…,gn∈E} ∑_{h_{n+1}∈H1} e^{−s[∥h0∥ + ∑_{i=1}^n ∥gi∥ + ∥h_{n+1}∥]}
= Σs(H2) Σs(H1) ∑_{n=0}^∞ ( ∑_{g∈E} e^{−s∥g∥} )^n
= Σs(H2) Σs(H1) ∑_{n=0}^∞ ( (Σs(H1) − 1)(Σs(H2) − 1) )^n.

This completes the proof.
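As a sanity check of Proposition 14.5.8, take H1 = H2 = Z acting on X1 = X2 = R by unit translations with base points oj = 0 (the situation of Figure 14.5.1), so that ∥n∥ = |n| and Σs(Z) = 1 + 2e^{−s}/(1 − e^{−s}) for s > 0. The criterion then predicts that the Poincaré exponent of F2 = Z ∗ Z is log 3, matching the growth of the Cayley graph. The following sketch is our own verification, not from the text:

```python
import math

# Sanity check of Proposition 14.5.8: H1 = H2 = Z acting on R by unit
# translations, so ||n|| = |n| and, for s > 0,
#   Sigma_s(Z) = sum_{n in Z} e^{-s|n|} = 1 + 2e^{-s}/(1 - e^{-s}).
# The criterion (Sigma_s(Z) - 1)^2 >= 1 then holds iff s <= log 3,
# i.e. the Poincare exponent of F2 = Z * Z is log 3 -- matching the
# growth of the Cayley graph, which has 4 * 3^(n-1) elements of length n.

def sigma(s):
    q = math.exp(-s)
    return 1 + 2 * q / (1 - q)

def in_poincare_set(s):
    return (sigma(s) - 1) ** 2 >= 1

assert in_poincare_set(1.0)       # 1.0 < log 3 ~ 1.0986
assert not in_poincare_set(1.2)   # 1.2 > log 3
assert abs((sigma(math.log(3)) - 1) ** 2 - 1) < 1e-12  # sharp at s = log 3
```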
Proposition 14.5.8 generalizes to the case of more than two groups as follows:
Proposition 14.5.9. The Poincaré set of a finite pure Schottky product
G = ∗kj=1 Hj
can be computed by the formula

s ∈ ∆(G) ⇔ ρ(As) ≥ 1,

where ρ denotes spectral radius, and As denotes the matrix whose (j, j′)th entry is

(As)_{j,j′} = Σs(Hj) − 1 if j′ ≠ j, and (As)_{j,j′} = 0 if j′ = j.
Proof. Let J = {1, . . . , k}. Then

G = ⋃_{n=0}^∞ ⋃_{j1≠···≠jn} {h1 · · · hn : h1 ∈ H_{j1} \ {e}, . . . , hn ∈ H_{jn} \ {e}}.

So by (14.5.1), we have for all s ≥ 0

Σs(G) = ∑_{g∈G} e^{−s∥g∥} = ∑_{n=0}^∞ ∑_{j1≠···≠jn} ∑_{h1∈H_{j1}\{e}} · · · ∑_{hn∈H_{jn}\{e}} e^{−s ∑_{i=1}^n ∥hi∥}
= ∑_{n=0}^∞ ∑_{j1≠···≠jn} ∏_{i=1}^n (Σs(H_{ji}) − 1)
= 1 + ∑_{n=1}^∞ [1 · · · 1] As^{n−1} [Σs(H1) − 1, . . . , Σs(Hk) − 1]^T,

which is = ∞ if ρ(As) ≥ 1 and < ∞ if ρ(As) < 1.

This completes the proof.
Note that only the last step (the series converges or diverges according to
whether or not the spectral radius is at least one) uses the hypothesis that J is
finite.
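For a quick consistency check (our own illustration, assuming the off-diagonal (j, j′) entry of As is Σs(Hj) − 1, which is what the product formula in the proof requires): for k = 2 the spectral radius reduces to the geometric mean, recovering Proposition 14.5.8, and for k identical factors it is (k − 1)(Σs(H) − 1).

```python
import numpy as np

# Consistency check of the two criteria.  We read the (j, j') entry of
# A_s as Sigma_s(H_j) - 1 off the diagonal.  For k = 2,
#   rho(A_s) = sqrt((Sigma_s(H1) - 1)(Sigma_s(H2) - 1)),
# so rho(A_s) >= 1 is exactly the criterion of Proposition 14.5.8.

def spectral_radius(A):
    return max(abs(np.linalg.eigvals(A)))

a, b = 2.0, 0.5                        # stand-ins for Sigma_s(H_j) - 1
A = np.array([[0.0, a], [b, 0.0]])
assert abs(spectral_radius(A) - np.sqrt(a * b)) < 1e-12

# For k identical factors, rho(A_s) = (k - 1)(Sigma_s(H) - 1), so s lies
# in the Poincare set iff Sigma_s(H) >= 1 + 1/(k - 1).
k, c = 4, 0.5
A = c * (np.ones((k, k)) - np.eye(k))
assert abs(spectral_radius(A) - (k - 1) * c) < 1e-12
```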
Our last example of an R-tree constructed using the stapling method is similar
to the method of pure Schottky products, but differs in important ways:
Example 14.5.10 (Geometric products). Let Y be an R-tree, let P ⊆ Y be
a set, and let (Γp )p∈P be a collection of abstract groups. Let Γ = ∗p∈P Γp . Let
V = Γ, and let

E = {(γ, γα) : γ ∈ Γ, p ∈ P, α ∈ Γp \ {e}}.
For each v ∈ V , let Xv = Y . For each (v, w) = (γ, γα) ∈ E, where γ ∈ Γ and
α ∈ Γp \ {e}, we let A(v, w) = {p}, and we let ψv,w (p) = p. In a manner similar
to the proof of Claim 14.5.6, one can check that every cycle in (V, E) is contained
in one of the complete graphs γΓp ⊆ V (γ ∈ Γ, p ∈ P), so (V, E) has contractible cycles. The consistency condition is trivial. Thus we can consider the stapled union X = ∐^{st}_{v∈V} Xv, which admits a natural left action φ : Γ → Isom(X):

φ(γ)(⟨v, x⟩) = ⟨γv, x⟩.

We let G = φ(Γ), and we call the pair (X, G) the geometric product of Y with (Γp)_{p∈P}.
Note that if (X, G) is the geometric product of Y with (Γp)_{p∈P}, then for all g = (p1, γ1) · · · (pn, γn) ∈ G, we have

(14.5.4)  ∥g∥ = d(o, p1) + ∑_{i=1}^{n−1} d(pi, p_{i+1}) + d(pn, o).
To compare this formula with (14.5.1), we observe that if n = 1, then we get ∥(p, γ)∥ = 2 d(o, p), so that

∥(p1, γ1)∥ + · · · + ∥(pn, γn)∥ = ∑_{i=1}^n 2 d(o, pi) = d(o, p1) + ∑_{i=1}^{n−1} [d(o, pi) + d(o, p_{i+1})] + d(o, pn).
So if (X, G) is a geometric product, then the right-hand side of (14.5.1) exceeds the left-hand side by ∑_{i=1}^{n−1} 2⟨pi|p_{i+1}⟩_o. The formula (14.5.4) is more complicated to deal with because its terms depend on the relation between the neighboring points pi and p_{i+1}, rather than just on the individual terms pi. In particular, it is more difficult to compute the Poincaré exponent of a geometric product than it is to compute the Poincaré exponent of a group coming from Theorem 14.5.5. We will investigate the issue of computing Poincaré exponents of geometric products in [57], as well as other topics related to the geometry of these groups.
Example 14.5.11 (Cf. Figure 14.5.3). Let (an)_1^∞ be an increasing sequence of nonnegative real numbers, and let (bn)_1^∞ be a sequence of nonnegative real numbers. Let

Y = ([0, ∞) × {0}) ∪ ⋃_{n=1}^∞ ({an} × [0, bn])

with the path metric induced from R^2. Let P = {pn : n ∈ N}, where pn = (an, bn). Then

(14.5.5)  d(pn, pm) = bn + bm + |an − am|  ∀m ≠ n,
Figure 14.5.2. The geometric product of Y with (Γp )p∈P , where
Y = [0, 1], P = {0, 1}, Γ0 = {e, γ0 } ≡ Z2 , and Γ1 = {e, γ1 } ≡ Z2 .
In the left hand picture, copies of Y are drawn as horizontal lines
and identifications between points in different copies are drawn
as vertical lines. The right hand picture is the result of stapling
together certain pairs of points in the left hand picture.
so (14.5.4) would become

∥g∥ = b1 + a1 + ∑_{i=1}^{n−1} [bi + b_{i+1} + |a_{i+1} − ai|] + bn + an = ∑_{i=1}^n 2bi + a1 + ∑_{i=1}^{n−1} |a_{i+1} − ai| + an.
This formula clearly exhibits the fact that the relation between neighboring points pi and p_{i+1} is involved, via the appearance of the term |a_{i+1} − ai|.
Proposition 14.5.12. Let (X, G) be the geometric product of Y with (Γp)_{p∈P}, where P ⊆ Y.
(i) If

(14.5.6)  inf{d(y, z) : y, z ∈ P, y ≠ z} > 0,

then G = ⟨Gp⟩_{p∈P} is a global weakly separated Schottky product. If furthermore

(14.5.7)  inf{D(y, z) : y, z ∈ P, y ≠ z} > 0,

then G is strongly separated.
(ii) X is proper if and only if all three of the following hold: Y is proper, #(Γp) < ∞ for all p ∈ P, and #(P ∩ B(o, ρ)) < ∞ for all ρ > 0.
Proof of (i). Suppose that (14.5.6) holds, and for each p ∈ P, let

Up = {⟨g1 · · · gn, y⟩ ∈ X : g1 ∈ Gp} ∪ {⟨id, y⟩ : y ∈ B(p, ε)},
Figure 14.5.3. The set Y of Example 14.5.11. The points at the
tops of the vertical lines are “branch points” which correspond to
fixed points in the geometric product (X, G). If a geodesic in the
geometric product is projected down to Y , the result will be a
sequence of geodesics, each of which starts and ends at one of the
indicated points (either o, an element of P , or ∞).
where ε ≤ inf{d(y, z) : y, z ∈ P, y ≠ z}/2. Then (Up)_{p∈P} is a global Schottky system for G. If (14.5.7) also holds, then it is strongly separated, because

inf{D(Up, Uq) : p ≠ q} ≥ inf{D(y, z) : y, z ∈ P, y ≠ z} − 2ε

can be made positive if ε is sufficiently small. Finally, if we go back to assuming only that (14.5.6) holds, then (Up)_{p∈P} is still weakly separated, because (14.5.7) holds for finite subsets.
Proof of (ii). The necessity of these conditions is obvious; conversely, suppose they hold. Fix ρ > 0 and x = ⟨g, y⟩ ∈ B_X(o, ρ); by (14.4.1), we have

d(o, p1) + d(p1, p2) + · · · + d(p_{n−1}, pn) + d(pn, y) ≤ ρ,

where g = h1 · · · hn, hi ∈ G_{pi} \ {id}, pi ∈ P, and pi ≠ p_{i+1} for all i. It follows that d(o, pi) ≤ ρ for all i = 1, . . . , n, i.e. pi ∈ P ∩ B(o, ρ). In particular, letting ε = min{d(p, q) : p, q ∈ P ∩ B(o, ρ), p ≠ q}, we have (n − 1)ε ≤ ρ, or equivalently n ≤ 1 + ρ/ε. It follows that

g ∈ ⋃_{n ≤ 1+ρ/ε} ⋃_{p1,…,pn ∈ P∩B(o,ρ)} (G_{p1} \ {id}) · · · (G_{pn} \ {id}),

a finite set. Thus, B_X(o, ρ) is contained in the union of finitely many compact sets of the form B_Y(o, ρ) × {g} ⊆ X, and is therefore compact.
Part 4
Patterson–Sullivan theory
This part will be divided as follows: In Chapter 15 we recall the definition of
quasiconformal measures, and we prove basic existence and non-existence results.
In Chapter 16, we prove Theorem 1.4.1 (Patterson–Sullivan theorem for groups
of divergence type). In Chapter 17, we investigate the geometry of quasiconformal
measures of geometrically finite groups, and we prove a generalization of the Global
Measure Formula (Theorem 17.2.2) as well as giving various necessary and/or sufficient conditions for the Patterson–Sullivan measure of a geometrically finite group
to be doubling (§17.4) or exact dimensional (§17.5).
CHAPTER 15
Conformal and quasiconformal measures
Standing Assumption. Throughout the final part of the monograph, i.e. in
Chapters 15-17, we fix (X, d, o, b) as in §4.1, and a group G ≤ Isom(X).
15.1. The definition
Conformal measures, introduced by S. J. Patterson [142] and D. P. Sullivan [161], are an important tool in studying the geometry of the limit set of a Kleinian group.
group. Their definition can be generalized directly to the case of a group acting
on a strongly hyperbolic metric space, but for a hyperbolic metric space which is
not strongly hyperbolic, a multiplicative error term is required. Thus we make the
following definition (cf. [53, Definition 4.1]):
Definition 15.1.1. For each s ≥ 0, a nonzero measure¹ µ on ∂X is called s-quasiconformal² if

(15.1.1)  µ(g(A)) ≍× ∫_A [g′(ξ)]^s dµ(ξ)

for every g ∈ G and for every Borel set A ⊆ ∂X. If X is strongly hyperbolic and if equality holds in (15.1.1), then µ is called s-conformal.
Remark 15.1.2. For two measures µ1 , µ2 , write µ1 ≍× µ2 if µ1 and µ2 are in
the same measure class and if the Radon–Nikodym derivative dµ1 /dµ2 is bounded
from above and below. Then a measure µ is s-quasiconformal if and only if
µ ◦ g ≍× [g ′ (ξ)]s µ,
and is s-conformal if X is strongly hyperbolic and if equality holds.
Remark 15.1.3. One might ask whether it is possible to generalize the notions
of conformal and quasiconformal measures to semigroups. However, this appears
to be difficult. The issue is that the condition (15.1.1) is sometimes impossible to
satisfy for measures supported on Λ – for example, it may happen that there exist
g1, g2 ∈ G such that g1(Λ) ∩ g2(Λ) = ∅, in which case letting A = ∂X \ Λ in (15.1.1)
shows both that Supp(µ) ⊆ g1 (Λ) and that Supp(µ) ⊆ g2 (Λ), and thus that µ = 0.
1In this monograph, “measure” always means “nonnegative finite Borel measure”.
2Not to be confused with the concept of a quasiconformal map, cf. [92].
One may try to fix this by changing the formula (15.1.1) somehow, but it is not
clear what the details of this should be.
15.2. Conformal measures
Before discussing quasiconformal measures, let us consider the relation between
conformal measures and quasiconformal measures. Obviously, every conformal measure is quasiconformal. In the converse direction we have:
Proposition 15.2.1. Suppose that G is countable and that X is strongly hyperbolic. Then for every s ≥ 0, if µ is an s-quasiconformal measure, then there
exists an s-conformal measure ν satisfying ν ≍× µ.
Proof. For each g ∈ G, let fg : ∂X → (0, ∞) be a Radon–Nikodym derivative
of µ ◦ g with respect to µ. Since µ is s-quasiconformal, we have for µ-a.e. ξ ∈ ∂X
(15.2.1)
fg (ξ) ≍× [g ′ (ξ)]s .
Since G is countable, the set of ξ ∈ ∂X for which (15.2.1) holds for all g ∈ G is of full µ-measure. In particular, if

f(ξ) = sup_{g∈G} fg(ξ) / [g′(ξ)]^s,

then f(ξ) ≍× 1 for µ-a.e. ξ ∈ ∂X. Now for each g, h ∈ G, the equality µ ◦ (gh) = (µ ◦ g) ◦ h implies that
fgh (ξ) = fg (h(ξ))fh (ξ) for µ-a.e. ξ ∈ ∂X.
Combining with the chain rule for metric derivatives, we have

f_{gh}(ξ) / [(gh)′(ξ)]^s = ( fg(h(ξ)) / [g′(h(ξ))]^s ) · ( fh(ξ) / [h′(ξ)]^s )  for µ-a.e. ξ ∈ ∂X.
Note that we are using the strong hyperbolicity assumption here to get equality
rather than a coarse asymptotic. Taking the supremum over all g gives
f(ξ) = f(h(ξ)) · fh(ξ) / [h′(ξ)]^s  for µ-a.e. ξ ∈ ∂X.
We now claim that ν := f µ is an s-conformal measure. Indeed,

(dν ◦ g / dν)(ξ) = ( f(g(ξ)) / f(ξ) ) · (dµ ◦ g / dµ)(ξ) = ( f(g(ξ)) / f(ξ) ) fg(ξ) = [g′(ξ)]^s  for µ-a.e. ξ ∈ ∂X.
15.3. Ergodic decomposition
Let M(∂X) denote the set of all measures on ∂X, and let M1 (∂X) denote the
set of all probability measures on ∂X.
Definition 15.3.1. A measure µ ∈ M(∂X) is ergodic if for every G-invariant
Borel set A ⊆ ∂X, we have µ(A) = 0 or µ(∂X \ A) = 0.
It is often useful to be able to write a non-ergodic measure as the convex
combination of ergodic measures. To make this rigorous, suppose that X is complete
and separable, so that bord X and ∂X are Polish spaces. Then ∂X together with its
Borel σ-algebra forms a standard Borel space. Let B denote the smallest σ-algebra
on M(∂X) with the following property:
Property 15.3.2. For every bounded Borel-measurable function f : ∂X → R, the function

µ 7→ ∫ f dµ

is a B-measurable map from M(∂X) to R.
Then (M(∂X), B) is a standard Borel space. We may now state the following
theorem:
Proposition 15.3.3 (Ergodic decomposition of quasiconformal measures). We suppose that G is countable and that X is separable. Fix s ≥ 0.
(i) For every s-quasiconformal measure µ, there is a measure µ̂ on M1(∂X) which satisfies

(15.3.1)  µ(A) = ∫ ν(A) dµ̂(ν)  for every Borel set A ⊆ ∂X

and gives full measure to the set of ergodic s-quasiconformal measures.³
(ii) If X is strongly hyperbolic, then for every s-conformal measure µ, there is a unique measure µ̂ on M(∂X) which satisfies (15.3.1) and which gives full measure to the set of ergodic s-conformal measures.
Remark 15.3.4. Note that we have uniqueness in (ii) but not in (i).
Proof of Proposition 15.3.3. Both cases of the proposition are essentially special cases of [82, Theorem 1.4], as we now demonstrate:
(i) Let µ be an s-quasiconformal measure. Let ̺ : G × ∂X → R satisfy [82, (1.1)-(1.3)]. Then by [82, Theorem 1.4], there is a measure µ̂ satisfying (15.3.1) supported on the set of ergodic probability measures which are “̺-admissible” (in the terminology of [82]). But by [82, (1.1)], we have b^{̺(g,ξ)} ≍× [g′(ξ)]^s for µ-a.e. ξ ∈ ∂X, say for all ξ ∈ ∂X \ S, where µ(S) = 0. Then every ̺-admissible measure ν satisfying ν(S) = 0 is s-quasiconformal. But by (15.3.1), ν(S) = 0 for µ̂-a.e. ν, so µ̂-a.e. ν is s-quasiconformal.
3If A is a non-measurable set, then a measure µ gives full measure to A if and only if A contains a
measurable set of full µ-measure. Thus we do not need to check whether or not the set of ergodic
s-quasiconformal measures is a measurable set in M1 (∂X).
(ii) Let µ be an s-conformal measure. Let ̺ : G × ∂X → R satisfy [82, (1.1)-(1.3)]. Then by [82, (1.1)], we have b^{̺(g,ξ)} = [g′(ξ)]^s for µ-a.e. ξ ∈ ∂X, say for all ξ ∈ ∂X \ S, where µ(S) = 0. Then for every measure ν satisfying ν(S) = 0, ν is ̺-admissible if and only if ν is s-conformal. By [82, Theorem 1.4], there is a unique measure µ̂ satisfying (15.3.1) supported on the set of ̺-admissible ergodic probability measures; such a measure is also unique with respect to satisfying (15.3.1) and being supported on the set of s-conformal ergodic measures.
Corollary 15.3.5. Suppose that G is countable and that X is separable, and fix s ≥ 0. If there is an s-(quasi)conformal measure, then there is an ergodic s-(quasi)conformal measure.
In the sequel, we will be concerned with when an s-quasiconformal measure is
unique up to coarse asymptotic. This is closely connected with ergodicity:
Proposition 15.3.6. Suppose that G is countable and that X is separable, and fix s ≥ 0. Suppose that there is an s-quasiconformal measure µ. The following are equivalent:
(A) µ is unique up to coarse asymptotic, i.e. µ ≍× µ̃ for any s-quasiconformal measure µ̃.
(B) Every s-quasiconformal measure is ergodic.
If in addition X is strongly hyperbolic, then (A)-(B) are equivalent to
(C) There is exactly one s-conformal probability measure.
Proof of (A) ⇒ (B). If µ is a non-ergodic s-quasiconformal measure, then
there exists a G-invariant set A ⊆ ∂X such that µ(A), µ(∂X \ A) > 0. But then
ν1 = µ ↿ A and ν2 = µ ↿ ∂X \ A are non-asymptotic s-quasiconformal measures, a
contradiction.
Proof of (B) ⇒ (A). Suppose that µ1, µ2 are two s-quasiconformal measures. Then the measure µ = µ1 + µ2 is also s-quasiconformal, and therefore ergodic. Let fi be a Radon–Nikodym derivative of µi with respect to µ. Then for all g ∈ G,
(15.3.2)  fi ◦ g(ξ) = (d(µi ◦ g)/d(µ ◦ g))(ξ) ≍× ([g′(ξ)]^s dµi / [g′(ξ)]^s dµ)(ξ) = fi(ξ) for µ-a.e. ξ ∈ ∂X.
It follows that
hi(ξ) := sup_{g∈G} fi ◦ g(ξ) ≍× fi(ξ) for µ-a.e. ξ ∈ ∂X.
But the functions hi are G-invariant, so since µ is ergodic, they are constant µ-a.e., say hi = ci. It follows that µi ≍× ci µ; since µi ≠ 0, we have ci > 0 and thus µ1 ≍× µ2.
Proof of (B) ⇒ (C). The existence of an s-conformal measure is guaranteed
by Proposition 15.2.1. If µ1 , µ2 are two s-conformal measures, then the Radon–
Nikodym derivatives fi = dµi /d(µ1 + µ2 ) satisfy (15.3.2) with equality, so fi = ci
for some constants ci . It follows that µ1 = (c1 /c2 )µ2 , and so if µ1 , µ2 are probability
measures then µ1 = µ2 .
Proof of (C) ⇒ (A). Follows immediately from Proposition 15.2.1.
15.4. Quasiconformal measures
We now turn to the deeper question of when a quasiconformal measure exists in
the first place. To approach this question we begin with a fundamental geometrical
lemma about quasiconformal measures:
Lemma 15.4.1 (Sullivan’s Shadow Lemma, cf. [161, Proposition 3], [152, §1.1]). Fix s ≥ 0, and let µ be an s-quasiconformal measure on ∂X which is not a pointmass. Then for all σ > 0 sufficiently large and for all g ∈ G,
µ(Shad(g(o), σ)) ≍×,σ,µ b^{−s‖g‖}.
Proof. We have
µ(Shad(g(o), σ)) ≍×,µ ∫_{g⁻¹(Shad(g(o),σ))} (g′)^s dµ   (by the definition of s-quasiconformality)
 = ∫_{Shad_{g⁻¹(o)}(o,σ)} (g′)^s dµ
 ≍×,σ ∫_{Shad_{g⁻¹(o)}(o,σ)} b^{−s‖g‖} dµ   (by the Bounded Distortion Lemma 4.5.6)
 = b^{−s‖g‖} µ(Shad_{g⁻¹(o)}(o, σ)).
Thus, to complete the proof, it is enough to show that
µ(Shad_{g⁻¹(o)}(o, σ)) ≍×,µ,σ 1,
assuming σ is sufficiently large (depending on µ). The upper bound is automatic since µ is finite. Now, since by assumption µ is not a pointmass, we have #(Supp(µ)) ≥ 2. Choose distinct ξ1, ξ2 ∈ Supp(µ), and let ε = D(ξ1, ξ2)/3. By the
Big Shadows Lemma 4.5.7, we have
Diam(∂X \ Shad_{g⁻¹(o)}(o, σ)) ≤ ε
for all σ > 0 sufficiently large (independent of g). Now since
D(B(ξ1, ε), B(ξ2, ε)) ≥ ε,
it follows that B(ξi, ε) ⊆ Shad_{g⁻¹(o)}(o, σ) for some i ∈ {1, 2}, and thus
µ(Shad_{g⁻¹(o)}(o, σ)) ≥ min_{i=1,2} µ(B(ξi, ε)) > 0.
The right hand side is independent of g, which completes the proof.
Sullivan’s Shadow Lemma suggests that in the theory of quasiconformal measures, there is a division between those measures which are pointmasses and those
which are not. Let us first consider the easier case of a pointmass quasiconformal measure, and then move on to the more interesting theory of non-pointmass
quasiconformal measures.
15.4.1. Pointmass quasiconformal measures.
Proposition 15.4.2. A pointmass δξ is s-quasiconformal if and only if
(I) ξ ∈ ∂X is a global fixed point of G, and
(II) either
(IIA) ξ is neutral with respect to every g ∈ G, or
(IIB) s = 0.
Proof. To begin we recall that g ′ (ξ) denotes the dynamical derivative, cf.
Proposition 4.2.12. For each ξ ∈ ∂X,
δξ is s-quasiconformal ⇔ δξ ◦ g ≍× (g′)^s δξ for all g ∈ G
⇔ g(ξ) = ξ and [g′(ξ)]^s ≍× 1 for all g ∈ G
⇔ g(ξ) = ξ and [g′(ξ)]^s = 1 for all g ∈ G
⇔ g(ξ) = ξ and (g′(ξ) = 1 or s = 0) for all g ∈ G.
Corollary 15.4.3.
(i) If G is of general type, then no pointmass is s-quasiconformal for any
s ≥ 0.
(ii) If G is loxodromic, then no pointmass is s-quasiconformal for any s > 0.
15.4.2. Non-pointmass quasiconformal measures. Next we will ask the
following question: Given a group G, for what values of s does a non-pointmass
quasiconformal measure exist, and when is it unique up to coarse asymptotic?
We first recall the situation in the Standard Case, where the answers are well known. The first result is the Patterson–Sullivan theorem [161, Theorem 1], which
states that any discrete subgroup G ≤ Isom(Hd ) admits a δG -conformal measure
supported on Λ. It is unique up to a multiplicative constant if G is of divergence
type ([138, Theorem 8.3.5] together with Proposition 15.3.6). The next result is
negative, stating that if s < δG , then G admits no non-pointmass s-conformal
measure. From these results and from Corollary 15.4.3, it follows that if G is
of general type, then δG is the infimum of s for which there exists an s-conformal
measure [161, Corollary 4]. Finally, for s > δG , an s-conformal measure on Λ exists
if and only if G is not convex-cocompact ([10, Theorem 4.1] for ⇐, [138, Theorem
4.4.1] for ⇒); no nontrivial conditions are known which guarantee uniqueness in
this case.
We now generalize the above results to the setting of hyperbolic metric spaces, replacing the Poincaré exponent δG with the modified Poincaré exponent δ̃G, and the notion of divergence type with the notion of generalized divergence type. By Proposition 8.2.4(ii), our theorems will reduce to the known results in the case of a strongly discrete group.
We begin with the negative result, as its proof is the easiest:
Proposition 15.4.4 (cf. [161, p.178]). For any s < δ̃G, there does not exist a non-pointmass s-quasiconformal measure.
Proof. By contradiction, suppose that µ is a non-pointmass s-quasiconformal measure. Let σ > 0 be large enough so that Sullivan’s Shadow Lemma 15.4.1 holds, and let τ > 0 be the implied constant of (4.5.2) from the Intersecting Shadows Lemma 4.5.4. Let S_{τ+1} be a maximal (τ + 1)-separated subset of G(o). Fix n ∈ N, and let A_n be the nth annulus A_n = B(o, n) \ B(o, n − 1). Now by the Intersecting Shadows Lemma 4.5.4, the shadows (Shad(x, σ))_{x∈S_{τ+1}∩A_n} are disjoint, and so by Sullivan’s Shadow Lemma 15.4.1
1 ≍×,µ µ(∂X) ≥ Σ_{x∈S_{τ+1}∩A_n} µ(Shad(x, σ)) ≍×,σ,µ Σ_{x∈S_{τ+1}∩A_n} b^{−s‖x‖} ≍× b^{−sn} #(S_{τ+1} ∩ A_n).
Thus for all t > s,
Σ_t(S_{τ+1}) ≍× Σ_{n∈N} b^{−tn} #(S_{τ+1} ∩ A_n) ≲×,σ,µ Σ_{n∈N} b^{(s−t)n} < ∞.
But this implies that δ̃G ≤ t (cf. (8.2.2)); letting t ց s gives δ̃G ≤ s, contradicting our hypothesis.
Remark 15.4.5. The above proof shows that if there exists a non-pointmass δ̃-conformal measure, then
#(S_{τ+1} ∩ A_n) ≲× b^{δ̃n} for all n ≥ 1.
In particular, if δ̃ > 0 then summing over n = 1, . . . , N gives
#(S_{τ+1} ∩ B(o, N)) ≲× b^{δ̃N} for all N ≥ 1.
If G is strongly discrete, then for all ρ > 0,
N_{X,G}(ρ) = #{g ∈ G : ‖g‖ ≤ ρ} ≲× #(S_{τ+1} ∩ B(o, ρ + τ + 1)) ≲× b^{δ⌈ρ+τ+1⌉} ≍× b^{δρ}.
The bound N_{X,G}(ρ) ≲× b^{δρ} in fact holds without assuming the existence of a δ-conformal measure; see Corollary 16.7.1.
Next we study hypotheses which guarantee the existence of a δ̃G-quasiconformal measure. In particular, we will show that if δ̃G < ∞ and if G is of compact type or of generalized divergence type, then there exists a δ̃G-quasiconformal measure. We consider the first case now, while the case of a group of generalized divergence type will be considered in Chapter 16.
Theorem 15.4.6 (cf. [53, Théorème 5.4]). Assume that G is of compact type and that δ̃ < ∞. Then there exists a δ̃-quasiconformal measure supported on Λ. If X is strongly hyperbolic, then there exists a δ̃-conformal measure supported on Λ.
Remark 15.4.7. Any group acting on a proper geodesic hyperbolic metric
space is of compact type, so Theorem 15.4.6 includes the case of proper geodesic
hyperbolic metric spaces.
Remark 15.4.8. Combining Theorem 15.4.6 with Proposition 15.4.4 and Corollary 15.4.3 shows that for G nonelementary of compact type,
δ̃ = inf{s > 0 : there exists an s-quasiconformal measure supported on Λ},
thus giving another geometric characterization of δ̃ (the first being Theorem 1.2.3).
Before proving Theorem 15.4.6, we recall the following lemma due to Patterson:
Lemma 15.4.9 ([142, Lemma 3.1]). Let A = (a_n)_{n=1}^∞ be a sequence of positive real numbers, and let
δ = δ(A) = inf{ s ≥ 0 : Σ_{n=1}^∞ a_n^{−s} < ∞ }.
Then there exists an increasing continuous function k : (0, ∞) → (0, ∞) such that:
(i) The series
Σ_{s,k}(A) = Σ_{n=1}^∞ k(a_n) a_n^{−s}
converges for s > δ and diverges for s ≤ δ.
(ii) There exists a decreasing function ε : (0, ∞) → (0, ∞) such that for all y > 0 and x > 1,
(15.4.1)  k(xy) ≤ x^{ε(y)} k(y),
and such that lim_{y→∞} ε(y) = 0.
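To see what Lemma 15.4.9 buys, here is a numerical sketch on a hypothetical convergence-type series (our example, not the text’s): group the terms into annuli with weight 3^m/m² at radius m, base b = 3. The exponent of convergence is δ = 1, but the plain series already converges at s = δ; a slowly growing correction of Patterson’s kind, here k(3^m) = m^{3/2}, restores divergence exactly at s = δ while preserving convergence for s > δ. (We have not verified (15.4.1) for this k formally; it is plausible since m^{3/2} grows more slowly than any positive power of 3^m.)

```python
import math

# Hypothetical convergence-type series, grouped into "annuli" m = 1, 2, ...:
# the m-th annulus contributes weight 3**m / m**2 at radius m (base b = 3).
# Sigma(s) = sum_m (3**m / m**2) * 3**(-s*m); the exponent of convergence is
# delta = 1, yet Sigma(1) = sum_m 1/m**2 < oo ("convergence type").
def sigma(s, M, k=lambda m: 1.0):
    return sum(k(m) * 3 ** ((1 - s) * m) / m ** 2 for m in range(1, M + 1))

# Patterson-style correction: k(3**m) = m**1.5 is subpolynomial in 3**m
# (compare (15.4.1)), yet it forces divergence exactly at s = delta = 1.
patterson_k = lambda m: m ** 1.5

assert sigma(1, 100000) < math.pi ** 2 / 6           # plain series: bounded
assert sigma(1, 10000, patterson_k) > 100            # corrected partial sums grow...
assert sigma(1, 40000, patterson_k) > 1.9 * sigma(1, 10000, patterson_k)
assert sigma(1.1, 1000, patterson_k) - sigma(1.1, 500, patterson_k) < 1e-10
# ...while for s > delta the corrected series still converges (tiny tail above).
```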
Proof of Theorem 15.4.6. By Proposition 8.2.4, there exist ρ > 0 and a maximal ρ-separated set S_ρ ⊆ G(o) such that δ̃(G) = δ(S_ρ); moreover, this ρ may be chosen large enough so that S_{ρ/2} does not contain a bounded infinite set, where S_{ρ/2} is a ρ/2-separated set. Let A = (a_n)_{n=1}^∞ be any indexing of the sequence (b^{‖x‖})_{x∈S_ρ}, and let k : (0, ∞) → (0, ∞) be the function given by Lemma 15.4.9. For shorthand let
k(x) = k(b^{‖x‖}),  ε(x) = ε(b^{‖x‖}),  Σ_{s,k} = Σ_{s,k}(A) = Σ_{x∈S_ρ} k(x) b^{−s‖x‖}.
Then Σ_{s,k} < ∞ if and only if s > δ̃; moreover, the function s ↦ Σ_{s,k} is continuous. For each s > δ̃G, let
(15.4.2)  µ_s = (1/Σ_{s,k}) Σ_{x∈S_ρ} k(x) b^{−s‖x‖} δ_x ∈ M₁(S_ρ ∪ Λ).
Now since G is of compact type, the set S_ρ ∪ Λ is compact (cf. (B) of Proposition 7.7.2). Thus by the Banach–Alaoglu theorem, the set M₁(S_ρ ∪ Λ) is compact in the weak-* topology. So there exists a sequence s_n ց δ̃ so that if we let µ_n = µ_{s_n}, then µ_n → µ ∈ M₁(S_ρ ∪ Λ). We will show that µ is δ̃G-quasiconformal and that Supp(µ) = Λ.
Claim 15.4.10. Supp(µ) ⊆ Λ.
Proof. Fix R > 0. Since δ(S_ρ) < ∞, we have #(S_ρ ∩ B(o, R)) < ∞. Thus,
µ(B(o, R)) ≤ limsup_{s ց δ̃} µ_s(B(o, R)) ≤ limsup_{s ց δ̃} #(S_ρ ∩ B(o, R)) k(b^R) b^{−δ̃R} / Σ_{s,k} = #(S_ρ ∩ B(o, R)) k(b^R) b^{−δ̃R} / ∞ = 0.
Letting R → ∞ shows that µ(X) = 0; thus Supp(µ) ⊆ (S_ρ ∪ Λ) \ X = Λ. ⊳
To complete the proof, we must show that µ is δ̃-quasiconformal. Fix g ∈ G, and let
ν_g = [(g′)^{δ̃} µ] ◦ g⁻¹.
We want to show that ν_g ≍× µ.
Claim 15.4.11. For every continuous function f : bord X → (0, ∞), we have
(15.4.3)  ∫ f dν_g ≍× ∫ f dµ.
Proof. Since S_ρ ∪ Λ is compact, log_b(f) is uniformly continuous on S_ρ ∪ Λ with respect to the metric D. Let φ_f denote the modulus of continuity of log_b(f), so that
(15.4.4)  D(x, y) ≤ r ⇒ f(x)/f(y) ≤ b^{φ_f(r)} for all x, y ∈ S_ρ ∪ Λ.
For each n ∈ N let
ν_{g,n} = [(g′)^{s_n} µ_n] ◦ g⁻¹,
so that ν_{g,n} −→_{n,×} ν. Then
ν_{g,n} = (1/Σ_{s_n,k}) Σ_{x∈S_ρ} k(x) b^{−s_n‖x‖} [(g′)^{s_n} δ_x] ◦ g⁻¹
 ≍× (1/Σ_{s_n,k}) Σ_{x∈S_ρ} k(x) b^{−s_n‖x‖} b^{s_n[‖x‖−‖g(x)‖]} [δ_x ◦ g⁻¹]
 = (1/Σ_{s_n,k}) Σ_{x∈S_ρ} b^{−s_n‖g(x)‖} k(x) δ_{g(x)}
 = (1/Σ_{s_n,k}) Σ_{x∈g(S_ρ)} b^{−s_n‖x‖} k(g⁻¹(x)) δ_x,
and so
(15.4.5)  ∫ f dν_{g,n} / ∫ f dµ_n ≍× [Σ_{x∈g(S_ρ)} b^{−s_n‖x‖} k(g⁻¹(x)) f(x)] / [Σ_{y∈S_ρ} b^{−s_n‖y‖} k(y) f(y)].
For each x ∈ g(S_ρ) ⊆ G(o), there exists y_x ∈ S_ρ such that d(x, y_x) ≤ ρ.
Observation 15.4.12. #{x : yx = y} is bounded independent of y and g.
Proof. Write y = h(o); then
#{x : y_x = y} ≤ #(g(S_ρ) ∩ B(y, ρ)) = #(h⁻¹g(S_ρ) ∩ B(o, ρ)).
But S_ρ′ := h⁻¹g(S_ρ) is a ρ-separated set. For each x ∈ S_ρ′, choose z_x ∈ S_{ρ/2} such that d(x, z_x) < ρ/2; then the map x ↦ z_x is injective, so
#(S_ρ′) ≤ #(S_{ρ/2} ∩ B(o, 2ρ)),
which is bounded independent of y and g. ⊳
Now
D(x, y_x) ≤ b^{−⟨x|y_x⟩_o} ≤ b^{ρ−‖y_x‖};
applying (15.4.4) gives
f(x) ≤ b^{φ_f(b^{ρ−‖y_x‖})} f(y_x).
On the other hand, by (15.4.1) we have
k(g⁻¹(x)) ≤ b^{ε(y_x)[ρ+‖g‖]} k(y_x),
and we also have
b^{−s_n‖x‖} ≤ b^{s_nρ} b^{−s_n‖y_x‖}.
Combining everything gives
Σ_{x∈g(S_ρ)} b^{−s_n‖x‖} k(g⁻¹(x)) f(x)
 ≤ Σ_{x∈g(S_ρ)} exp_b(s_nρ + ε(y_x)[ρ + ‖g‖] + φ_f(b^{ρ−‖y_x‖})) b^{−s_n‖y_x‖} k(y_x) f(y_x)
 ≲× Σ_{y∈S_ρ} exp_b(ε(y)[ρ + ‖g‖] + φ_f(b^{ρ−‖y‖})) b^{−s_n‖y‖} k(y) f(y),
and taking the limit as n → ∞ we have
∫ f(x) dν(x) ≲× ∫ exp_b(ε(y)[ρ + ‖g‖] + φ_f(b^{ρ−‖y‖})) f(y) dµ(y) = ∫ f(y) dµ(y),
since φ_f(b^{ρ−‖y‖}) = ε(y) = 0 for all y ∈ ∂X. A symmetric argument gives the converse direction. ⊳
Now let C be the implied constant of (15.4.3). Then for every continuous function f : X → (0, ∞),
C∫f dν − ∫f dµ ≥ 0  and  C∫f dµ − ∫f dν ≥ 0,
i.e. the linear functionals I₁[f] = C∫f dν − ∫f dµ and I₂[f] = C∫f dµ − ∫f dν are positive. Thus by the Riesz representation theorem, there exist measures γ₁, γ₂ such that I_{γ_i} = I_i (i = 1, 2). The uniqueness assertion of the Riesz representation theorem then guarantees that
(15.4.6)  γ₁ + µ = Cν and γ₂ + ν = Cµ.
In particular, Cν ≥ µ, and Cµ ≥ ν. This completes the proof.
CHAPTER 16
Patterson–Sullivan theorem for groups of divergence type
In this chapter, we prove Theorem 1.4.1, which states that a nonelementary group of generalized divergence type possesses a δ̃-quasiconformal measure.
16.1. Samuel–Smirnov compactifications
We begin by summarizing the theory of Samuel–Smirnov compactifications,
which will be used in the proof of Theorem 1.4.1.
Proposition 16.1.1. Let (Z, D) be a complete metric space. Then there exists a compact Hausdorff space Ẑ together with a homeomorphic embedding ι : Z → Ẑ with the following property:
Property 16.1.2. If A, B ⊆ Z, then Ā ∩ B̄ ≠ ∅ if and only if D(A, B) = 0. Here Ā and B̄ denote the closures of A and B relative to Ẑ.
The pair (Ẑ, ι) is unique up to homeomorphism. Moreover, if Z₁, Z₂ are two complete metric spaces and if f : Z₁ → Z₂ is uniformly continuous, then there exists a unique continuous map f̂ : Ẑ₁ → Ẑ₂ such that ι ◦ f = f̂ ◦ ι. The reverse is also true: if f admits such an extension, then f is uniformly continuous.
The space Ẑ will be called the Samuel–Smirnov compactification of Z.
Proof of Proposition 16.1.1. The metric D induces a proximity on Z in the sense of [136, Definition 1.7]. Then the existence and uniqueness of a pair (Ẑ, ι) for which Property 16.1.2 holds is guaranteed by [136, Theorem 7.7]. The assertions concerning uniformly continuous maps follow from [136, Theorem 7.10] and [136, Theorem 4.4], respectively (cf. [136, Remark 4.8] and [136, Definition 4.10]).
Remark 16.1.3. The Samuel–Smirnov compactification may be compared with the Stone–Čech compactification, which is usually larger. The difference is that instead of Property 16.1.2, the Stone–Čech compactification has the property that for all A, B ⊆ Z, Ā ∩ B̄ ≠ ∅ if and only if Ā ∩ B̄ ∩ Z ≠ ∅. Moreover, in the remarks following Property 16.1.2, “uniformly continuous” should be replaced with just “continuous”.
We remark that if δG < ∞ (i.e. if G is of divergence type rather than of
generalized divergence type), then the proof below works equally well if the Samuel–
Smirnov compactification is replaced by the Stone–Čech compactification. This is
not the case for the general proof; cf. Remark 16.3.5.
To prove Theorem 1.4.1, we will consider the Samuel–Smirnov compactification of the complete metric space (bord X, D) (cf. Proposition 3.6.13), which we will denote by X̂. For convenience of notation we will assume that bord X is a subset of X̂ and that ι : bord X → X̂ is the inclusion map. As a point of terminology we will call points in bord X “standard points” and points in X̂ \ bord X “nonstandard points”.
Remark 16.1.4. Since D ≍× D_x for all x ∈ X, the Samuel–Smirnov compactification X̂ is independent of the basepoint o.
At this point we can give a basic outline of the proof of Theorem 1.4.1: First we will construct a measure µ̂ on X̂ which satisfies the transformation equation (15.1.1). We will call such a measure µ̂ a quasiconformal measure, although it is not a priori a quasiconformal measure in the sense of Definition 15.1.1, as it is not necessarily supported on the set of standard points. Then we will use Thurston’s proof of the Hopf–Tsuji–Sullivan theorem [4, Theorem 4 of Section VII] (see also [138, Theorem 2.4.6]) to show that µ̂ is supported on the nonstandard analogue of the radial limit set. Finally, we will show that the nonstandard analogue of the radial limit set is actually a subset of bord X, i.e. we will show that radial limit points are automatically standard. This demonstrates that µ̂ is a measure on bord X, and is therefore a bona fide quasiconformal measure.
We now begin the preliminaries to the proof of Theorem 1.4.1. As always, (X, o, b) denotes a Gromov triple. Let X̂ be the Samuel–Smirnov compactification of bord X.
Remark 16.1.5. Throughout this chapter, S̄ denotes the closure of a set S taken with respect to X̂, not bord X.
16.2. Extending the geometric functions to X̂
We begin by extending the geometric functions d(·, ·), ⟨·|·⟩, and B(·, ·) to the Samuel–Smirnov compactification X̂. Extending d(·, ·) is the easiest:
Observation 16.2.1. If x ∈ X is fixed, then the function f_x : bord X → [0, 1] defined by f_x(y) = b^{−d(x,y)} is uniformly continuous by Remark 3.6.15. Thus by Proposition 16.1.1, there exists a unique continuous extension f̂_x : X̂ → [0, 1]. We write
d̂(x, ŷ) = − log_b f̂_x(ŷ).
We define the extended boundary of X to be the set
∂̂X := {ξ̂ ∈ X̂ : d̂(o, ξ̂) = ∞}.
Note that d̂(x, y) = d(x, y) if x, y ∈ X, and ∂̂X ∩ bord X = ∂X.
Warning. It is possible that ∂̂X ≠ ∂X.
On the other hand, extending the Gromov product to X̂ presents some difficulty, since the Gromov product is not necessarily continuous (cf. Example 3.4.6). Our solution is as follows: Fix x ∈ X and y ∈ bord X. Then by Remark 3.6.15, the map bord X ∋ z ↦ D_x(y, z) is uniformly continuous, so by Proposition 16.1.1 it extends to a continuous map X̂ ∋ ẑ ↦ D̂_x(y, ẑ). We define the Gromov product in X̂ via the formula
⟨y|ẑ⟩_x = − log_b D̂_x(y, ẑ).
Note that if ẑ ∈ bord X, then this notation conflicts with the previous definition of the Gromov product, but by Proposition 3.6.8 the harm is only an additive asymptotic. We will ignore this issue in what follows.
Observation 16.2.2. Using (j) of Proposition 3.3.3 we may define for each x, y ∈ X the Busemann function
B̂_ẑ(x, y) = ⟨x|ẑ⟩_y − ⟨y|ẑ⟩_x.
Again, if ẑ ∈ bord X, then this definition conflicts with the previous one, but again the harm is only an additive asymptotic.
Remark 16.2.3. We note that an appropriate analogue of Proposition 3.3.3 (cf. also Corollary 3.4.12) holds on X̂. Specifically, each formula of Proposition 3.3.3 holds with an additive asymptotic, as long as all expressions are defined. Note in particular that we have not defined the value of expressions which contain more than one nonstandard point. Such a definition would present additional difficulties (namely, noncommutativity of limits) which we choose to avoid.
We are now ready to define the nonstandard analogue of the radial limit set:
Definition 16.2.4 (cf. Definitions 4.5.1 and 7.1.2). Given x ∈ X and σ > 0, let
Ŝhad(x, σ) = {ξ̂ ∈ X̂ : ⟨o|ξ̂⟩_x ≤ σ},
so that Ŝhad(x, σ) ∩ bord X = Shad(x, σ). A sequence (x_n)_{n=1}^∞ in X will be said to converge to a point ξ̂ ∈ ∂̂X σ-radially if ‖x_n‖ → ∞ and if ξ̂ ∈ Ŝhad(x_n, σ) for all n ∈ N. Note that in the definition of σ-radial convergence, we do not require that x_n → ξ̂ in the topology on X̂, although this can be seen from the proof of Lemma 16.2.5 below.
We conclude this section with the following lemma:
Lemma 16.2.5 (Every radial limit point is a standard point). Suppose that a sequence (x_n)_{n=1}^∞ converges to a point ξ̂ ∈ ∂̂X σ-radially for some σ > 0. Then ξ̂ ∈ ∂X.
Proof. We observe first that
⟨x_n|ξ̂⟩_o ≍+ ‖x_n‖ − ⟨o|ξ̂⟩_{x_n} ≍+,σ ‖x_n‖ −→_n d̂(o, ξ̂) = ∞.
Together with Gromov’s inequality ⟨x_n|x_m⟩_o ≳+ min(⟨x_n|ξ̂⟩_o, ⟨x_m|ξ̂⟩_o), this implies that (x_n)_{n=1}^∞ is a Gromov sequence.
By the definition of the Gromov boundary, it follows that there exists a (standard) point η ∈ ∂X such that the sequence (x_n)_{n=1}^∞ converges to η. Gromov’s inequality now implies that ⟨η|ξ̂⟩_o = ∞. We claim now that ξ̂ = η, so that ξ̂ is standard. By contradiction, suppose ξ̂ ≠ η. Since X̂ is a Hausdorff space, it follows that there exist disjoint open sets U, V ⊆ X̂ containing ξ̂ and η, respectively. Since V contains a neighborhood of η, the function f_{o,η}(z) = ⟨η|z⟩_o is bounded from above on bord X \ V. By continuity, f̂_{o,η} is bounded from above on the closure of bord X \ V. In particular, ξ̂ does not lie in the closure of bord X \ V. On the other hand, ξ̂ does not lie in the closure of V, since ξ̂ is in the open set U which is disjoint from V. It follows that ξ̂ does not lie in the closure of bord X, which is all of X̂, a contradiction.
Remark 16.2.6. In fact, the above proof shows that if
(16.2.1)  ⟨x_n|ξ̂⟩_o → ∞
for some sequence (x_n)_{n=1}^∞ in X and some ξ̂ ∈ ∂̂X, then ξ̂ ∈ ∂X. However, there may be a sequence (x_n)_{n=1}^∞ such that x_n → ξ̂ in the topology on X̂ but for which (16.2.1) does not hold. In this case, we could have ξ̂ ∉ ∂X.
16.3. Quasiconformal measures on X̂
We define the notion of a quasiconformal measure on X̂ as follows:
Definition 16.3.1 (cf. Definition 15.1.1, Proposition 4.2.6). For each s ≥ 0, a Radon probability measure µ̂ on ∂̂X is called s-quasiconformal if
µ̂(ĝ⁻¹(A)) ≍× ∫_A b^{s B̂_η̂(o, g⁻¹(o))} dµ̂(η̂)
for every g ∈ G and for every Borel set A ⊆ ∂̂X. Here ĝ denotes the unique continuous extension of g to X̂ (cf. Proposition 16.1.1).
Remark 16.3.2. Note that we have added here the assumption that the measure µ̂ is Radon. Since the phrase “Radon measure” seems to have no generally accepted meaning in the literature, we should make clear that for us a (finite, nonnegative, Borel) measure µ on a compact Hausdorff space Z is Radon if the following two conditions hold (cf. [74, §7]):
µ(A) = inf{µ(U) : U ⊇ A, U open} for every Borel set A ⊆ Z,
µ(U) = sup{µ(K) : K ⊆ U, K compact} for every open set U ⊆ Z.
The assumption of Radonness was not needed in Definition 15.1.1, since every measure on a compact metric space is Radon [74, Theorem 7.8]. However, the assumption is important in the present proof, since X̂ is not necessarily metrizable, and so it may have non-Radon measures.
On the other hand, the Radon condition itself is of no importance to us, except
for the following facts:
(i) The image of a Radon measure under a homeomorphism is Radon.
(ii) Every measure absolutely continuous with respect to a Radon measure is Radon.
(iii) The sum of two Radon measures is Radon.
(iv) (Riesz representation theorem, [74, Theorem 7.2]) Let Z be a compact Hausdorff space. For each measure µ on Z, let Iµ denote the nonnegative linear functional
Iµ[f] := ∫ f dµ.
Then for every nonnegative linear functional I : C(Z) → R, there exists a unique Radon measure µ on Z such that Iµ = I. (If µ1 and µ2 are not both Radon, it is possible that Iµ1 = Iµ2 while µ1 ≠ µ2.)
We now state two lemmas which are nonstandard analogues of lemmas proven
in Chapter 15. We omit the parts of the proofs which are the same as in the
standard case, reminding the reader that the important point is that no function is
ever used which takes two nonstandard points as inputs. We begin by proving an
analogue of Sullivan’s shadow lemma:
Lemma 16.3.3 (Sullivan’s Shadow Lemma on X̂; cf. Lemma 15.4.1). Fix s ≥ 0, and let µ̂ ∈ M(∂̂X) be an s-quasiconformal measure which is not a pointmass supported on a standard point. Then for all σ > 0 sufficiently large and for all g ∈ G, we have
µ̂(Ŝhad(g(o), σ)) ≍× b^{−s‖g‖}.
Proof. Obvious modifications¹ to the proof of Lemma 15.4.1 yield
µ̂(Ŝhad(g(o), σ)) ≍×,µ,σ b^{−s‖g‖} µ̂(Ŝhad_{g⁻¹(o)}(o, σ)).
So to complete the proof, we need to show that
µ̂(Ŝhad_{g⁻¹(o)}(o, σ)) ≍×,µ,σ 1,
assuming σ > 0 is sufficiently large (depending on µ̂). By contradiction, suppose that for each n ∈ N there exists g_n ∈ G such that
µ̂(Ŝhad_{g_n⁻¹(o)}(o, n)) ≤ 1/2^n.
Then for µ̂-a.e. ξ̂ ∈ X̂, we have ξ̂ ∉ Ŝhad_{g_n⁻¹(o)}(o, n) for all but finitely many n, which implies
⟨g_n⁻¹(o)|ξ̂⟩_o ≳+ n −→_n ∞.
By Remark 16.2.6, it follows that ξ̂ ∈ ∂X and g_n⁻¹(o) → ξ̂. This implies that µ̂ is a pointmass supported on the standard point lim_{n→∞} g_n⁻¹(o), contradicting our hypothesis.
Lemma 16.3.4 (cf. Theorem 15.4.6). Assume that δ̃ = δ̃G < ∞. Then there exists a δ̃-quasiconformal measure supported on ∂̂X.
Proof. Let the measures µ_s be as in (15.4.2). The compactness of X̂ replaces the assumption that G is of compact type which occurs in Theorem 15.4.6, so there exists a sequence s_n ց δ̃ such that µ_n := µ_{s_n} → µ̂ for some Radon measure µ̂ ∈ M(X̂). Claim 15.4.10 shows that µ̂ is supported on ∂̂X.
To complete the proof, we must show that µ̂ is δ̃-quasiconformal. Fix g ∈ G and a continuous function f : X̂ → (0, ∞). The final assertion of Proposition 16.1.1 guarantees that log(f) ↿ bord X is uniformly continuous, so the proof of Claim 15.4.11 shows that (15.4.3) holds.
The equation (15.4.6) deserves some comment; it depends on the uniqueness
assertion of the Riesz representation theorem, which, now that we are no longer in
a metric space, holds only for Radon measures. But by Remark 16.3.2, all measures
involved in (15.4.6) are Radon, so (15.4.6) still holds.
Remark 16.3.5. In this lemma we used the final assertion of Proposition 16.1.1 in a nontrivial way. The proof of this lemma would not work for the Stone–Čech compactification, except in the case δ < ∞, in which case the uniform continuity of f is not necessary in the proof of Theorem 15.4.6.
¹We remark that the expression g′(ξ) occurring in the proof of Lemma 15.4.1 should be replaced by b^{−B̂_ξ̂(o, g⁻¹(o))}, as per Proposition 4.2.6; of course, the expression g′(ξ̂) makes no sense, since X̂ is not a metric space.
Lemma 16.3.6 (Intersecting Shadows Lemma on X̂; cf. Lemma 4.5.4). For each σ > 0, there exists τ = τ_σ > 0 such that for all x, y, z ∈ X satisfying d(z, y) ≥ d(z, x) and Ŝhad_z(x, σ) ∩ Ŝhad_z(y, σ) ≠ ∅, we have
(16.3.1)  Ŝhad_z(y, σ) ⊆ Ŝhad_z(x, τ)
and
(16.3.2)  d(x, y) ≍+,σ d(z, y) − d(z, x).
Proof. The proof of Lemma 4.5.4 goes through with no modifications needed.
16.4. The main argument
Proposition 16.4.1 (Generalization/nonstandard version of Theorem 1.4.2, (A) ⇒ (B)). Let µ̂ be a δ̃-quasiconformal measure on ∂̂X which is not a pointmass supported on a standard point. If G is of generalized divergence type, then µ̂(Λr(G)) > 0.
Proof. Fix σ > 0 large enough so that Sullivan’s Shadow Lemma 16.3.3 holds. Let ρ > 0 be large enough so that there exists a maximal ρ-separated set S_ρ ⊆ G(o) which has finite intersection with bounded sets (cf. Proposition 8.2.4(iii)). Let (x_n)_{n=1}^∞ be an indexing of S_ρ. By Lemma 16.2.5, we have
⋂_{N∈N} ⋃_{n≥N} Ŝhad(x_n, σ + ρ) ⊆ Λr(G).
By contradiction suppose that µ̂(Λr(G)) = 0. Fix ε > 0 small to be determined. Then there exists N ∈ N such that
µ̂(⋃_{n≥N} Ŝhad(x_n, σ + ρ)) ≤ ε.
Let R = ρ + max_{n<N} ‖x_n‖. Then
µ̂(⋃_{g∈G : ‖g‖>R} Ŝhad(g(o), σ)) ≤ ε.
We shall prove the following.
Observation 16.4.2. If A ⊆ G(o) is any subcollection satisfying
(I) ‖x‖ > R for all x ∈ A, and
(II) the shadows (Ŝhad(x, σ))_{x∈A} are disjoint,
then
(16.4.1)  Σ_{x∈A} b^{−s‖x‖} ≲× ε.
Proof. The disjointness condition guarantees that
Σ_{x∈A} µ̂(Ŝhad(x, σ)) ≤ µ̂(⋃_{g∈G : ‖g‖>R} Ŝhad(g(o), σ)) ≤ ε.
Combining with Sullivan’s Shadow Lemma 16.3.3 yields (16.4.1). ⊳
Now choose R′ > R and σ′ > σ large to be determined. Let S_{R′} be a maximal R′-separated subset of G(o). For convenience we assume o ∈ S_{R′}. By Proposition 8.2.4(iv), if R′ is sufficiently large then Σ_{δ̃}(S_{R′}) = ∞ if and only if G is of generalized divergence type. So to complete the proof, it suffices to show that
Σ_{δ̃}(S_{R′}) < ∞.
Notation 16.4.3. Let (x_i)_{i=1}^∞ be an indexing of S_{R′} such that i < j implies ‖x_i‖ ≤ ‖x_j‖. For x_i, x_j ∈ S_{R′} distinct, we write x_i < x_j if
(I) i < j and
(II) Ŝhad(x_i, σ′) ∩ Ŝhad(x_j, σ′) ≠ ∅.
(This is just a notation; it does not mean that < is a partial order on S_{R′}.)
Lemma 16.4.4. If R′ and σ′ are sufficiently large (with σ′ chosen first), then
x < y ⇒ Ŝhad_x(y, σ) ⊆ Ŝhad(y, σ′).
Proof. Suppose x < y; then Ŝhad(x, σ′) ∩ Ŝhad(y, σ′) ≠ ∅. By the Intersecting Shadows Lemma 16.3.6, we have d(x, y) ≍+,σ′ ‖y‖ − ‖x‖. On the other hand, since S_{R′} is R′-separated we have d(x, y) ≥ R′. Thus
⟨o|x⟩_y ≳+,σ′ R′.
Now for any ξ̂ ∈ X̂, we have
⟨x|ξ̂⟩_y ≳+ min(⟨o|ξ̂⟩_y, ⟨o|x⟩_y).
Thus if ξ̂ ∈ Ŝhad_x(y, σ), then
σ ≳+ ⟨o|ξ̂⟩_y or σ ≳+,σ′ R′.
Let σ′ be σ plus the implied constant of the first asymptotic, and then let R′ be σ + 1 plus the implied constant of the second asymptotic. Then the second asymptotic is automatically impossible, so
⟨o|ξ̂⟩_y ≤ σ′,
i.e. ξ̂ ∈ Ŝhad(y, σ′). ⊳
⊳
If x ∈ SR′ is fixed, let us call y ∈ SR′ an immediate successor of x if x < y
but there is no z such that x < z < y. We denote by SR′ (x) the collection of all
immediate successors of x.
Lemma 16.4.5. For each z ∈ S_{R′}, we have
(16.4.2)  Σ_{y∈S_{R′}(z)} b^{−s‖y‖} ≲× ε b^{−s‖z‖}.
Proof. We claim first that the collection (Ŝhad(y, σ′))_{y∈S_{R′}(z)} consists of mutually disjoint sets. Indeed, if Ŝhad(y₁, σ′) ∩ Ŝhad(y₂, σ′) ≠ ∅ for some distinct y₁, y₂ ∈ S_{R′}(z), then we would have either z < y₁ < y₂ or z < y₂ < y₁, contradicting the definition of immediate successor. Combining with Lemma 16.4.4, we see that the collection (Ŝhad_z(y, σ))_{y∈S_{R′}(z)} also consists of mutually disjoint sets.
Fix g ∈ G such that g(o) = z. We claim that the collection
A = g⁻¹(S_{R′}(z))
satisfies the hypotheses of Observation 16.4.2. Indeed, as o ∉ A (since z ∉ S_{R′}(z)) and as g is an isometry of X, (I) follows from the fact that S_{R′} is R′-separated and R′ > R. Since Ŝhad(g⁻¹(y), σ) = g⁻¹(Ŝhad_z(y, σ)) for all y ∈ S_{R′}(z), the collection (Ŝhad(x, σ))_{x∈A} consists of mutually disjoint sets, meaning that (II) holds. Thus, by Observation 16.4.2, we have
Σ_{x∈A} b^{−s‖x‖} ≲× ε,
or, since g is an isometry of X and z = g(o),
Σ_{y∈S_{R′}(z)} b^{−s d(z,y)} ≲× ε.
Inserting (16.3.2) into the last inequality yields (16.4.2). ⊳
Using Lemma 16.4.5, we complete the proof. Define the sequence (S_n)_{n=0}^∞ inductively as follows:
S₀ = {o},  S_{n+1} = ⋃_{x∈S_n} S_{R′}(x).
Clearly, all immediate successors of all points of ⋃_{n≥0} S_n belong to ⋃_{n≥0} S_n. We claim that
S_{R′} = ⋃_{n≥0} S_n.
Indeed, let (x_i)_{i=1}^∞ be the indexing of S_{R′} considered in Notation 16.4.3, and by induction suppose that x_i ∈ ⋃_{n≥0} S_n for all i < j. If x_j = o, then x_j ∈ S₀. Otherwise, let i < j be maximal satisfying x_i < x_j. Then x_j is an immediate successor of x_i ∈ ⋃_{n≥0} S_n, so x_j ∈ ⋃_{n≥0} S_n.
Summing (16.4.2) over all x ∈ S_n, we have
Σ_{y∈S_{n+1}} b^{−s‖y‖} ≲× ε Σ_{x∈S_n} b^{−s‖x‖}.
Set ε equal to 1/2 divided by the implied constant, so that
Σ_{y∈S_{n+1}} b^{−s‖y‖} ≤ (1/2) Σ_{x∈S_n} b^{−s‖x‖}.
Applying the Ratio Test, we see that the series Σ_{δ̃}(S_{R′}) converges, contradicting that G was of generalized divergence type.
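The closing step is a geometric-series bound, which can be sketched numerically with hypothetical level sums T_n standing in for Σ_{x∈S_n} b^{−s‖x‖} (a toy stand-in, not the text’s argument): once each generation contributes at most half the weight of the previous one, the total is at most twice the first term, hence finite.

```python
# Toy stand-in for the final step: T_n plays the role of sum_{x in S_n} b^{-s||x||}.
# The inequality proved above gives T_{n+1} <= (1/2) T_n, so the total is
# dominated by the geometric series T_0 * sum_n (1/2)**n = 2 * T_0.
T0 = 1.0
T = T0
levels = []
for n in range(60):
    levels.append(T)
    T *= 0.5  # one application of the halving inequality

total = sum(levels)
assert total <= 2 * T0  # the full series is finite in this toy model
```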
Corollary 16.4.6. Let µ̂ be a δ̃-quasiconformal measure on ∂̂X. If G is of generalized divergence type, then µ̂(Λr(G)) = 1.
Proof. By contradiction suppose not. Then ν̂ := µ̂ ↿ (∂̂X \ Λr(G)) is a δ̃-quasiconformal measure on ∂̂X which gives zero measure to Λr(G), contradicting Proposition 16.4.1.
16.5. End of the argument
We now complete the proof of Theorem 1.4.1:

Proof of Theorem 1.4.1. Let \(\widehat\mu\) be the \(\widetilde\delta\)-quasiconformal measure supported on \(\widehat{\partial X}\) guaranteed by Lemma 16.3.4. By Corollary 15.4.3, \(\widehat\mu\) is not a pointmass supported on a standard point. By Corollary 16.4.6, \(\widehat\mu\) is supported on \(\Lambda_{\mathrm r}(G) \subseteq \partial X\). This completes the proof of the existence assertion.
Suppose that \(\mu_1, \mu_2\) are two \(\widetilde\delta\)-quasiconformal measures on ∂X. By Corollary 16.4.6, \(\mu_1\) and \(\mu_2\) are both supported on \(\Lambda_{\mathrm r}(G)\).

Suppose first that \(\mu_1, \mu_2\) are supported on \(\Lambda_{\mathrm r,\sigma}\) for some σ > 0. Fix an open set U ⊆ ∂X. By the Vitali covering theorem, there exists a collection of disjoint shadows \((\operatorname{Shad}(g(o), \sigma))_{g \in A}\) contained in U such that
\[ \mu_1\Big(U \setminus \bigcup_{g \in A} \operatorname{Shad}(g(o), \sigma)\Big) = 0. \]
Then we have
\[ \mu_1(U) = \sum_{g \in A} \mu_1(\operatorname{Shad}(g(o), \sigma)) \asymp_{\times,\mu_1} \sum_{g \in A} b^{-s\|g\|} \asymp_{\times,\mu_2} \sum_{g \in A} \mu_2(\operatorname{Shad}(g(o), \sigma)) \le \mu_2(U). \]
A similar argument shows that \(\mu_2(U) \lesssim_\times \mu_1(U)\). Since U was arbitrary, a standard approximation argument shows that \(\mu_1 \asymp_\times \mu_2\). It follows that any individual measure µ supported on \(\Lambda_{\mathrm r,\sigma}\) is ergodic, because if A is an invariant set with 0 < µ(A) < 1, then \(\frac{1}{\mu(A)}\mu \upharpoonright A\) and \(\frac{1}{1-\mu(A)}\mu \upharpoonright (\Lambda_{\mathrm r} \setminus A)\) are two measures which are not asymptotic, a contradiction.
In the general case, define the function \(f : \Lambda_{\mathrm r} \to [0, \infty)\) by
\[ f(\xi) = \sup\{\sigma > 0 : \exists g \in G \;\; g(\xi) \in \Lambda_{\mathrm r,\sigma}\}. \]
By Proposition 7.2.3, f(ξ) < ∞ for all \(\xi \in \Lambda_{\mathrm r}\). On the other hand, f is G-invariant. Now let µ be a \(\widetilde\delta\)-quasiconformal measure on \(\Lambda_{\mathrm r}\). Then for each σ₀ < ∞ the measure \(\mu \upharpoonright f^{-1}([0, \sigma_0])\) is supported on \(\Lambda_{\mathrm r,\sigma_0}\), and is therefore ergodic; thus f is constant \(\mu \upharpoonright f^{-1}([0, \sigma_0])\)-a.s. It is clear that this constant value is independent of σ₀ for large enough σ₀, so f is constant µ-a.s. Thus there exists σ > 0 such that µ is supported on \(\Lambda_{\mathrm r,\sigma}\), and we can reduce to the previous case.
16.6. Necessity of the generalized divergence type assumption
The proof of Theorem 1.4.1 makes crucial use of the generalized divergence type assumption, just as the proof of Theorem 15.4.6 made crucial use of the compact type assumption. What happens if neither of these assumptions holds? Then there may not be any \(\widetilde\delta\)-quasiconformal measure supported on the limit set, as we now show:
Proposition 16.6.1. There exists a strongly discrete group of general type
G ≤ Isom(H∞ ) satisfying δ < ∞, such that there does not exist any quasiconformal
measure supported on Λ.
Proof. The idea is to first construct such a group in an ℝ-tree, and then to use a BIM embedding (Theorem 13.1.1) to get an example in \(\mathbb H^\infty\). Fix a sequence of numbers \((a_k)_1^\infty\). For each k let \(\Gamma_k = \{e, \gamma_k\} \equiv \mathbb Z_2\), and let \(\|\cdot\| : \Gamma_k \to \mathbb R\) be defined by \(\|\gamma_k\| = a_k\), \(\|e\| = 0\). Clearly, the function \(\|\cdot\|\) is tree-geometric in the sense of Definition 14.5.2, so by Theorem 14.5.5, the function \(\|\cdot\| : \Gamma \to [0, \infty)\) defined by (14.5.1) is tree-geometric, where \(\Gamma = *_{k \in \mathbb N} \Gamma_k\). So there exist an ℝ-tree X and a homomorphism \(\phi : \Gamma \to \operatorname{Isom}(X)\) such that \(\|\phi(\gamma)\| = \|\gamma\|\) for all \(\gamma \in \Gamma\). Let \(G = \phi(\Gamma)\).
Claim 16.6.2. If the sequence \((a_k)_1^\infty\) is chosen appropriately, then G is of convergence type.
Proof. For s ≥ 0 we have
\[ \Sigma_s(G) - 1 = \sum_{g \in G \setminus \{\mathrm{id}\}} e^{-s\|g\|} = \sum_{(k_1,\gamma_1)\cdots(k_n,\gamma_n) \in (\Gamma^E)^* \setminus \{\emptyset\}} \exp\big({-s}(a_{k_1} + \ldots + a_{k_n})\big) \]
\[ = \sum_{n \in \mathbb N} \;\sum_{k_1 \ne k_2 \ne \cdots \ne k_n} \;\sum_{\gamma_1 \in \Gamma_{k_1} \setminus \{e\}} \cdots \sum_{\gamma_n \in \Gamma_{k_n} \setminus \{e\}} \exp\big({-s}(a_{k_1} + \ldots + a_{k_n})\big) = \sum_{n \in \mathbb N} \;\sum_{k_1 \ne k_2 \ne \cdots \ne k_n} \;\prod_{i=1}^{n} e^{-s a_{k_i}}, \]
and consequently
\[ \Sigma_s(G) \le 1 + \sum_{n \in \mathbb N} \Big(\sum_{k \in \mathbb N} e^{-s a_k}\Big)^{n}, \qquad \Sigma_s(G) \ge 1 + \sum_{k \in \mathbb N} e^{-s a_k}. \]
Thus, letting
\[ P_s = \sum_{k \in \mathbb N} e^{-s a_k}, \]
we have
(16.6.1)
\[ \Sigma_s(G) < \infty \ \text{ if } P_s < 1, \qquad \Sigma_s(G) = \infty \ \text{ if } P_s = \infty. \]
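The two cases of (16.6.1) follow directly from the bounds above; spelled out (a routine check, since the upper bound is a geometric series in \(P_s\)):

```latex
% Upper bound: if P_s < 1, the geometric series converges.
\Sigma_s(G) \;\le\; 1 + \sum_{n \in \mathbb{N}} P_s^{\,n}
            \;=\; 1 + \frac{P_s}{1 - P_s} \;<\; \infty
            \qquad (P_s < 1);
% Lower bound: the single-syllable words already contribute P_s.
\Sigma_s(G) \;\ge\; 1 + P_s \;=\; \infty
            \qquad (P_s = \infty).
```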
Now clearly, there exists a sequence \((a_k)_1^\infty\) such that \(P_{1/2} < 1\) but \(P_s = \infty\) for all s < 1/2; for example, take \(a_k = 2\log(k) + 4\log\log(k) + C\) for sufficiently large C.
⊳
Claim 16.6.3. \(\Lambda(G) = \Lambda_{\mathrm r}(G)\).

Proof. For all ξ ∈ Λ, the path traced by the geodesic ray [o, ξ] in X/G is the concatenation of infinitely many paths of the form [o, g(o)], where \(g \in \bigcup_{n \in \mathbb N} \phi(\Gamma_n)\). Each such path crosses o, so the path traced by the geodesic ray [o, ξ] in X/G crosses o infinitely often. Equivalently, the geodesic ray [o, ξ] crosses G(o) infinitely often. By Proposition 7.1.1, this implies that \(\xi \in \Lambda_{\mathrm r}(G)\).
⊳
Now let \(\widetilde G\) be the image of G under a BIM representation (cf. Theorem 13.1.1). By Remark 13.1.4, \(\widetilde G\) is of convergence type and \(\Lambda(\widetilde G) = \Lambda_{\mathrm r}(\widetilde G)\). The proof is completed by the following lemma:
Lemma 16.6.4. If the group G is of generalized convergence type and µ is a \(\widetilde\delta\)-quasiconformal measure, then \(\mu(\Lambda_{\mathrm r}) = 0\).
Proof. Fix σ > 0 large enough so that Sullivan's Shadow Lemma 15.4.1 holds. Fix ρ > 0 and a maximal ρ-separated set \(S_\rho \subseteq G(o)\) such that \(\Sigma_{\widetilde\delta}(S_\rho) < \infty\). Then
\[ \sum_{x \in S_\rho} \mu(\operatorname{Shad}(x, \rho + \sigma)) \asymp_{\times,\rho,\sigma} \sum_{x \in S_\rho} b^{-\widetilde\delta\|x\|} < \infty. \]
On the other hand, \(\Lambda_{\mathrm r,\sigma} \subseteq \limsup_{x \in S_\rho} \operatorname{Shad}(x, \rho + \sigma)\). So by the Borel–Cantelli lemma, \(\mu(\Lambda_{\mathrm r,\sigma}) = 0\). Since σ was arbitrary, \(\mu(\Lambda_{\mathrm r}) = 0\).
⊳
Combining Theorem 1.4.1 and Lemma 16.6.4 yields the following:

Proposition 16.6.5. Let G ≤ Isom(X) be a nonelementary group with \(\widetilde\delta < \infty\). Then the following are equivalent:
(A) G is of generalized divergence type.
(B) There exists a \(\widetilde\delta\)-conformal measure µ on Λ satisfying \(\mu(\Lambda_{\mathrm r}) > 0\).
(C) Every \(\widetilde\delta\)-conformal measure µ on Λ satisfies \(\mu(\Lambda_{\mathrm r}) = 1\).
(D) There exists a unique \(\widetilde\delta\)-conformal measure µ on Λ, and it satisfies \(\mu(\Lambda_{\mathrm r}) = 1\).
16.7. Orbital counting functions of nonelementary groups
Theorem 1.4.2 allows us to prove the following result which, on the face of it,
does not involve quasiconformal measures at all:
Corollary 16.7.1. Let G ≤ Isom(X) be nonelementary and satisfy δ < ∞. Then
\[ N_{X,G}(\rho) \lesssim_\times b^{\delta\rho} \quad \forall \rho \ge 0. \]
Proof. If G is of convergence type, then the bound is obvious, as
\[ b^{-\delta\rho} N_{X,G}(\rho) \le \sum_{\substack{g \in G \\ \|g\| \le \rho}} b^{-\delta\|g\|} \le \Sigma_\delta(G) < \infty. \]
On the other hand, if G is of divergence type, then by Theorem 1.4.1, there exists a δ-conformal measure µ on Λ, which is not a pointmass by Corollary 15.4.3 and Proposition 10.5.4(C). Remark 15.4.5 finishes the proof.
We contrast this with a philosophically related result, whose proof uses the Ahlfors regular measures constructed in the proof of Theorem 1.2.1:

Proposition 16.7.2. Let G ≤ Isom(X) be nonelementary and strongly discrete. Then
\[ \frac{\log_b N_{X,G}(\rho)}{\rho} \xrightarrow[\rho \to \infty]{} \delta. \]
Proof. Fix s < δ. By Theorem 1.2.1, there exist τ > 0 and an Ahlfors s-regular measure µ supported on \(\Lambda_{\mathrm{ur},\tau}\). Now fix ρ > 0. By definition, for all \(\xi \in \Lambda_{\mathrm{ur},\tau}\) there exists g ∈ G such that \(\rho - \tau \le \|g\| \le \rho\) and \(\langle o|\xi\rangle_{g(o)} \le \tau\). Equivalently,
\[ \Lambda_{\mathrm{ur},\tau} \subseteq \bigcup_{\substack{g \in G \\ \rho - \tau \le \|g\| \le \rho}} \operatorname{Shad}(g(o), \tau). \]
Applying µ to both sides gives
\[ 1 \le \sum_{\substack{g \in G \\ \rho - \tau \le \|g\| \le \rho}} \mu\big(\operatorname{Shad}(g(o), \tau)\big). \]
By the Diameter of Shadows Lemma, \(\operatorname{Diam}(\operatorname{Shad}(g(o), \tau)) \lesssim_{\times,s} b^{-\|g\|}\), and thus since µ is Ahlfors s-regular,
\[ \mu\big(\operatorname{Shad}(g(o), \tau)\big) \lesssim_{\times,s} b^{-s\|g\|} \asymp_{\times,s} b^{-s\rho}. \]
So
\[ 1 \lesssim_{\times,s} b^{-s\rho}\, \#\{g \in G : \rho - \tau \le \|g\| \le \rho\} \]
and thus
\[ \#\{g \in G : \|g\| \le \rho\} \gtrsim_{\times,s} b^{s\rho}. \]
Since s < δ was arbitrary, we get
\[ \liminf_{\rho \to \infty} \frac{\log_b \#\{g \in G : \|g\| \le \rho\}}{\rho} \ge \delta. \]
Combining with (8.1.2) completes the proof.
CHAPTER 17
Quasiconformal measures of geometrically finite
groups
In this chapter we investigate the δ-quasiconformal measure or measures associated to a geometrically finite group. Note that since geometrically finite groups
are of compact type (Theorem 12.4.5), Theorem 15.4.6 guarantees the existence
of a δ-quasiconformal measure µ on Λ. However, this measure is not necessarily
unique (Corollary 17.1.8); a sufficient condition for uniqueness is that G is of divergence type (Theorem 1.4.1). In Section 17.1, we generalize a theorem of Dal’bo,
Otal, and Peigné [55, Théorème A], which shows that "most" geometrically finite
groups are of divergence type. In Sections 17.2-17.5 we investigate the geometry of
δ-conformal measures; specifically, in Sections 17.2-17.3 we prove a generalization
of the Global Measure Formula (Theorem 17.2.2), in Sections 17.4 and 17.5 we
investigate the questions of when the δ-conformal measure of a geometrically finite
group is doubling and exact dimensional, respectively.
Standing Assumptions 17.0.3. In this chapter, we assume that
(I) X is regularly geodesic and strongly hyperbolic,
(II) G ≤ Isom(X) is nonelementary and geometrically finite, and δ < ∞.¹
Moreover, we fix a complete set of inequivalent parabolic points \(P \subseteq \Lambda_{\mathrm{bp}}\), and for each p ∈ P we write \(\delta_p = \delta(G_p)\), and let \(S_p \subseteq E_p\) be a p-bounded set satisfying (A)-(C) of Lemma 12.3.6. Finally, we choose a number t₀ > 0 large enough so that if
\[ H_p = H_{p,t_0} = \{x \in X : \mathcal B_p(o, x) > t_0\}, \qquad \mathcal H = \{g(H_p) : p \in P,\ g \in G\}, \]
then the collection \(\mathcal H\) is disjoint (cf. proof of Theorem 12.4.5(B3) ⇒ (A)).
17.1. Sufficient conditions for divergence type
In the Standard Case, all geometrically finite groups are of divergence type
[165, Proposition 2]; however, once one moves to the more general setting of pinched
Hadamard manifolds, one has examples of geometrically finite groups of convergence
¹Note that by Corollary 12.4.17(ii), we have δ < ∞ if and only if \(\delta_p < \infty\) for all p ∈ P.
type [55, Théorème C]. On the other hand, Proposition 16.6.5 shows that for every
δ-conformal measure µ, G is of divergence type if and only if µ(Λ \ Λr) = 0. Now by
Theorem 12.4.5, Λ \ Λr = Λbp = G(P ), so the condition µ(Λ \ Λr ) = 0 is equivalent
to the condition µ(P ) = 0. To summarize:
Observation 17.1.1. The following are equivalent:
(A) G is of divergence type.
(B) There exists a δ-conformal measure µ on Λ satisfying µ(P ) = 0.
(C) Every δ-conformal measure µ on Λ satisfies µ(P ) = 0.
(D) There exists a unique δ-conformal measure µ on Λ, and it satisfies µ(P ) =
0.
In particular, every convex-cobounded group is of divergence type.
It is of interest to ask for sufficient conditions which are not phrased in terms
of measures. We have the following:
Theorem 17.1.2 (Cf. [165, Proposition 2], [55, Théorème A]). If δ > δp for
all p ∈ P , then G is of divergence type.
Proof. We will demonstrate (B) of Observation 17.1.1. Let µ be the measure
constructed in the proof of Theorem 15.4.6, fix p ∈ P , and we will show that
µ(p) = 0. In what follows, we use the same notation as in the proof of Theorem
15.4.6. Since G is strongly discrete, we can let ρ be small enough so that Sρ = G(o).
For any neighborhood U of p, we have
(17.1.1)
\[ \mu(p) \le \liminf_{s \searrow \delta} \mu_s(U) = \liminf_{s \searrow \delta} \frac{1}{\Sigma_{s,k}} \sum_{x \in G(o) \cap U} k(x)\, e^{-s\|x\|}. \]
Lemma 17.1.3. \(\langle h(o)|x\rangle_o \asymp_+ 0\) for all \(x \in S_p\).

Proof. Since \(S_p\) is p-bounded, Gromov's inequality implies that
\[ \langle h(o)|x\rangle_o \wedge \langle h(o)|p\rangle_o \asymp_+ 0 \]
for all \(h \in G_p\) and \(x \in S_p\). Denote the implied constant by σ. For all \(h \in G_p\) such that \(\langle h(o)|p\rangle_o > \sigma\), we have \(\langle h(o)|x\rangle_o \le \sigma\) for all \(x \in S_p\). Since this applies to all but finitely many \(h \in G_p\), (c) of Proposition 3.3.3 completes the proof.
⊳
Let T be a transversal of \(G_p \backslash G\) such that \(T(o) \subseteq S_p\). Then by Lemma 17.1.3,
\[ \|h(x)\| \asymp_+ \|h\| + \|x\| \quad \forall h \in G_p \ \forall x \in T(o). \]
Thus for all s > δ and V ⊆ X,
(17.1.2)
\[ \sum_{x \in G(o) \cap V} k(x)\, e^{-s\|x\|} = \sum_{h \in G_p} \sum_{x \in hT(o) \cap V} k(e^{\|x\|})\, e^{-s\|x\|} \asymp_\times \sum_{h \in G_p} \sum_{x \in T(o) \cap h^{-1}(V)} k(e^{\|h\| + \|x\|})\, e^{-s[\|h\| + \|x\|]}. \]
Now fix \(0 < \varepsilon < \delta - \delta_p\), and note that by (15.4.1),
\[ k(R) \le k(\lambda R) \lesssim_{\times,\varepsilon} \lambda^\varepsilon k(R) \quad \forall \lambda > 1 \ \forall R \ge 1. \]
Thus setting V = U in (17.1.2) gives
\[ \sum_{x \in G(o) \cap U} k(x)\, e^{-s\|x\|} \lesssim_{\times,\varepsilon} \sum_{\substack{h \in G_p \\ h(S_p) \cap U \ne \emptyset}} e^{-(s-\varepsilon)\|h\|} \sum_{x \in T(o)} k(x)\, e^{-s\|x\|}, \]
while setting V = X gives
\[ \Sigma_{s,k} = \sum_{x \in G(o)} k(x)\, e^{-s\|x\|} \gtrsim_\times \sum_{h \in G_p} e^{-s\|h\|} \sum_{x \in T(o)} k(x)\, e^{-s\|x\|}. \]
Dividing these inequalities and combining with (17.1.1) gives
\[ \mu(p) \lesssim_{\times,\varepsilon} \liminf_{s \searrow \delta} \frac{1}{\Sigma_s(G_p)} \sum_{\substack{h \in G_p \\ h(S_p) \cap U \ne \emptyset}} e^{-(s-\varepsilon)\|h\|} = \frac{1}{\Sigma_\delta(G_p)} \sum_{\substack{h \in G_p \\ h(S_p) \cap U \ne \emptyset}} e^{-(\delta-\varepsilon)\|h\|}. \]
Note that the right-hand series converges since \(\delta - \varepsilon > \delta_p\) by construction. As the neighborhood U shrinks, the series converges to zero. This completes the proof.
Combining Theorem 17.1.2 with Proposition 10.3.10 gives the following immediate corollary:
Corollary 17.1.4. If for all p ∈ P , Gp is of divergence type, then G is of
divergence type.
Thus in some sense divergence type can be “checked locally” just like the properties of finite generation and finite Poincaré exponent (cf. Corollary 12.4.17).
Corollary 17.1.5. Every convex-cobounded group is of divergence type.
Remark 17.1.6. It is somewhat awkward that it seems to be difficult or impossible to prove Theorem 17.1.2 via any of the equivalent conditions of Observation
17.1.1 other than (B). Specifically, the fact that the above argument works for the
measure constructed in Theorem 15.4.6 (the “Patterson–Sullivan measure”) but
not for other δ-conformal measures seems rather asymmetric. However, after some
thought one realizes that it would be impossible for a proof along similar lines to
work for every δ-conformal measure. This is because the above proof shows that
the Patterson–Sullivan measure µ satisfies
(17.1.3)
\[ \mu(p) = 0 \ \text{ for all } p \in P \text{ satisfying } \delta > \delta_p, \]
but there are geometrically finite groups for which (17.1.3) does not hold for all δ-conformal measures µ. Specifically, one may construct geometrically finite groups of convergence type (cf. [55, Théorème C]) such that \(\delta_p < \delta\) for some p ∈ P; the following proposition shows that there exists a δ-conformal measure for which (17.1.3) fails:
Proposition 17.1.7. If G is of convergence type, then for each p ∈ P there exists a δ-conformal measure supported on G(p).

Proof. Let
\[ \mu = \sum_{g(p) \in G(p)} [g'(p)]^\delta\, \delta_{g(p)}; \]
clearly µ is a δ-conformal measure, but we may have µ(∂X) = ∞. To prove that this is not the case, as before we let T be a transversal of \(G_p \backslash G\) such that \(T(o) \subseteq S_p\). Then
\[ \mu(\partial X) = \sum_{g(p) \in G(p)} [g'(p)]^\delta = \sum_{g \in T^{-1}} [g'(p)]^\delta \asymp_\times \sum_{g \in T^{-1}} e^{-\delta\|g\|} \le \Sigma_\delta(G) < \infty. \]
Proposition 17.1.7 yields the following characterization of when there exists a
unique δ-conformal measure:
Corollary 17.1.8. The following are equivalent:
(A) There exists a unique δ-conformal measure on Λ.
(B) Either G is of divergence type, or #(P ) = 1.
17.2. The global measure formula
In this section and the next, we fix a δ-quasiconformal measure µ, and ask the
following geometrical question: Given η ∈ Λ and r > 0, can we estimate µ(B(η, r))?
If G is convex-cobounded, then we can show that µ is Ahlfors δ-regular (Corollary
17.2.3), but in general the measure µ(B(η, r)) will depend on the point η, in a
manner described by the global measure formula. To describe the global measure
formula, we need to introduce some notation:
Notation 17.2.1. Given ξ = g(p) ∈ Λbp , let tξ > 0 be the unique number such
that
Hξ = Hξ,tξ = g(Hp ) = g(Hp,t0 ),
Figure 17.2.1. A possible (approximate) graph of the functions t ↦ b(η, t) and t ↦ log m(η, t) (cf. (17.2.1) and (17.2.6)). The graph indicates that there are at least two inequivalent parabolic points p₁, p₂ ∈ P, which satisfy \(N_{p_i}(R) \asymp_\times R^{2\delta} I_{p_i}(R) \asymp_\times R^{k_i}\) for some k₁ < 2δ < k₂. The dotted line in the second graph is just the line y = −δt. Note the relation between the two graphs, which may be either direct or inverted depending on the functions \(N_p\). Specifically, the relation is direct for the first cusp but inverted for the second cusp.
i.e. \(t_\xi = t_0 + \mathcal B_\xi(o, g(o))\). (Note that \(t_p = t_0\) for all p ∈ P.) Fix θ > 0 large, to be determined below (cf. Proposition 17.2.5). For each η ∈ Λ and t > 0, let \(\eta_t = [o, \eta]_t\), and write
(17.2.1)
\[ m(\eta, t) = \begin{cases} e^{-\delta t} & \eta_t \notin \bigcup(\mathcal H) \\ e^{-\delta t_\xi}\,[I_p(e^{t - t_\xi - \theta}) + \mu(p)] & \eta_t \in H_\xi \text{ and } t \le \langle\xi|\eta\rangle_o \\ e^{-\delta(2\langle\xi|\eta\rangle_o - t_\xi)}\, N_p(e^{2\langle\xi|\eta\rangle_o - t - t_\xi - \theta}) & \eta_t \in H_\xi \text{ and } t > \langle\xi|\eta\rangle_o \end{cases} \]
(cf. Figure 17.2.1). Here we use the notation
\[ I_p(R) = \sum_{\substack{h \in G_p \\ \|h\|_p \ge R}} \|h\|_p^{-2\delta}, \qquad N_p(R) = N_{E_p,G_p}(R) = \#\{h \in G_p : \|h\|_p \le R\}, \]
where
\[ \|h\|_p = D_p(o, h(o)) = e^{(1/2)\|h\|} \quad \forall h \in G_p. \]
Theorem 17.2.2 (Global Measure Formula; cf. [160, Theorem 2] and [153, Théorème 3.2]). For all η ∈ Λ and t > 0,
(17.2.2)
\[ m(\eta, t + \sigma) \lesssim_\times \mu(B(\eta, e^{-t})) \lesssim_\times m(\eta, t - \sigma), \]
where σ > 0 is independent of η and t (but may depend on θ).
Corollary 17.2.3. If G is convex-cobounded, then
(17.2.3)
\[ \mu(B(\eta, r)) \asymp_\times r^\delta \quad \forall \eta \in \Lambda \ \forall 0 < r \le 1, \]
i.e. µ is Ahlfors δ-regular.

Proof. If G is convex-cobounded then \(\mathcal H = \emptyset\), so \(m(\eta, t) = e^{-\delta t}\) for all η, t, and thus (17.2.2) reduces to (17.2.3).
Remark 17.2.4. Corollary 17.2.3 can be deduced directly from Lemma 17.3.7
below.
We will prove Theorem 17.2.2 in the next section. For now, we investigate more
closely the function t 7→ m(η, t) defined by (17.2.1). The main result of this section
is the following proposition, which will be used in the proof of Theorem 17.2.2:
Proposition 17.2.5. If θ is chosen sufficiently large, then for all η ∈ Λ and 0 < t₁ < t₂,
(17.2.4)
\[ m(\eta, t_2) \lesssim_{\times,\theta} m(\eta, t_1). \]
The proof of Proposition 17.2.5 itself requires several lemmas.
Lemma 17.2.6. Fix ξ, η ∈ ∂X and t > 0, and let \(x = \eta_t\). Then
(17.2.5)
\[ \mathcal B_\xi(o, x) \asymp_+ t \wedge (2\langle\xi|\eta\rangle_o - t). \]

Proof. Since \(\langle o|\eta\rangle_x = 0\), Gromov's inequality gives \(\langle o|\xi\rangle_x \wedge \langle\xi|\eta\rangle_x \asymp_+ 0\).

Case 1: \(\langle o|\xi\rangle_x \asymp_+ 0\). In this case, by (h) of Proposition 3.3.3,
\[ \mathcal B_\xi(o, x) = -\mathcal B_\xi(x, o) = -[2\langle o|\xi\rangle_x - \|x\|] \asymp_+ \|x\| = t, \]
while (g) of Proposition 3.3.3 gives
\[ \langle\xi|\eta\rangle_o = \langle\xi|\eta\rangle_x + \frac12[\mathcal B_\xi(o, x) + \mathcal B_\eta(o, x)] \gtrsim_+ \frac12[t + t] = t; \]
thus \(\mathcal B_\xi(o, x) \asymp_+ t \asymp_+ t \wedge (2\langle\xi|\eta\rangle_o - t)\).

Case 2: \(\langle\xi|\eta\rangle_x \asymp_+ 0\). In this case, (g) of Proposition 3.3.3 gives
\[ \langle\xi|\eta\rangle_o \asymp_+ \frac12[\mathcal B_\xi(o, x) + \mathcal B_\eta(o, x)] = \frac12[\mathcal B_\xi(o, x) + t] \lesssim_+ \frac12[t + t] = t; \]
thus \(\mathcal B_\xi(o, x) \asymp_+ 2\langle\xi|\eta\rangle_o - t \asymp_+ t \wedge (2\langle\xi|\eta\rangle_o - t)\).
Corollary 17.2.7. The function
(17.2.6)
\[ b(\eta, t) = \begin{cases} 0 & \eta_t \notin \bigcup(\mathcal H) \\ t \wedge (2\langle\xi|\eta\rangle_o - t) - t_\xi & \eta_t \in H_\xi \end{cases} \]
satisfies
(17.2.7)
\[ b(\eta, t + \tau) \asymp_{+,\tau} b(\eta, t - \tau). \]

Proof. Indeed, by Lemma 17.2.6,
\[ b(\eta, t) \asymp_+ \begin{cases} 0 & \eta_t \notin \bigcup(\mathcal H) \\ \mathcal B_\xi(o, \eta_t) - t_\xi & \eta_t \in H_\xi \end{cases} \;=\; 0 \vee \max_{\xi \in \Lambda_{\mathrm{bp}}} \big(\mathcal B_\xi(o, \eta_t) - t_\xi\big). \]
The right-hand side is 1-Lipschitz continuous with respect to t, which demonstrates (17.2.7).
Lemma 17.2.8. For all \(\xi \in G(p) \subseteq \Lambda_{\mathrm{bp}}\), p ∈ P, there exists g ∈ G such that
(17.2.8)
\[ \xi = g(p), \quad \|g\| \asymp_+ t_\xi, \quad \text{and} \quad \{\eta \in \partial X : [o, \eta] \cap H_\xi \ne \emptyset\} \subseteq \operatorname{Shad}(g(o), \sigma), \]
where σ > 0 is independent of ξ.

Proof. Write ξ = g(p) for some g ∈ G. Since \(x := \xi_{t_\xi} \in \partial H_\xi\), Lemma 12.3.6(D) shows that
\[ d(g^{-1}(x), h(o)) \asymp_+ 0 \]
for some \(h \in G_p\). We claim that gh is the desired isometry. Clearly \(\|gh\| \asymp_+ \|x\| = t_\xi\). Fix η ∈ ∂X such that \([o, \eta] \cap H_\xi \ne \emptyset\), say \(\eta_t \in H_\xi\). By Lemma 17.2.6, we have
\[ \|x\| = t_\xi < \mathcal B_\xi(o, \eta_t) \asymp_+ t \wedge (2\langle\xi|\eta\rangle_o - t) \le \langle\xi|\eta\rangle_o \le \langle x|\eta\rangle_o, \]
i.e. \(\eta \in \operatorname{Shad}(x, \sigma) \subseteq \operatorname{Shad}(g(o), \sigma + \tau)\) for some σ, τ > 0.
Proof of Proposition 17.2.5. Fix η ∈ Λ and 0 < t₁ < t₂.

Case 1: \(\eta_{t_1}, \eta_{t_2} \in H_\xi\) for some ξ = g(p) ∈ Λbp, with g satisfying (17.2.8). In this case, (17.2.4) follows immediately from (17.2.1) unless \(t_1 \le \langle\xi|\eta\rangle_o < t_2\). If the latter holds, then
\[ m(\eta, t_1) \ge \lim_{t \nearrow \langle\xi|\eta\rangle_o} m(\eta, t) = e^{-\delta t_\xi}\,[I_p(e^{\langle\xi|\eta\rangle_o - t_\xi - \theta}) + \mu(p)], \]
\[ m(\eta, t_2) \le \lim_{t \searrow \langle\xi|\eta\rangle_o} m(\eta, t) = e^{-\delta(2\langle\xi|\eta\rangle_o - t_\xi)}\, N_p(e^{\langle\xi|\eta\rangle_o - t_\xi - \theta}). \]
Consequently, to demonstrate (17.2.4) it suffices to show that
(17.2.9)
\[ N_p(e^t) \lesssim_{\times,\theta} e^{2\delta t} I_p(e^t), \]
where \(t := \langle\xi|\eta\rangle_o - t_\xi - \theta > 0\).
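To see why (17.2.9) suffices, write \(h = \langle\xi|\eta\rangle_o\), so that \(t = h - t_\xi - \theta\); dividing the two one-sided limits gives (a sketch of the bookkeeping, with constants depending on θ):

```latex
\frac{m(\eta, t_2)}{m(\eta, t_1)}
  \;\le\; \frac{e^{-\delta(2h - t_\xi)}\, N_p(e^{t})}
               {e^{-\delta t_\xi}\,\bigl[I_p(e^{t}) + \mu(p)\bigr]}
  \;\le\; e^{-2\delta(h - t_\xi)}\, \frac{N_p(e^{t})}{I_p(e^{t})}
  \;=\; e^{-2\delta(t + \theta)}\, \frac{N_p(e^{t})}{I_p(e^{t})}
  \;\lesssim_{\times,\theta}\; 1,
% the last step being exactly (17.2.9).
```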
To demonstrate (17.2.9), let \(\zeta = g^{-1}(\eta) \in \Lambda\). We have
\[ \langle p|\zeta\rangle_o = \langle\xi|\eta\rangle_{g(o)} \asymp_+ \langle\xi|\eta\rangle_o - \|g\| \asymp_+ \langle\xi|\eta\rangle_o - t_\xi = t + \theta \]
and thus
\[ D_p(o, \zeta) \asymp_\times e^{t+\theta}. \]
Since p is a bounded parabolic point, there exists \(h_\zeta \in G_p\) such that \(D_p(h_\zeta(o), \zeta) \lesssim_\times 1\). Denoting all implied constants by C, we have
\[ C^{-1} e^{t+\theta} - C \le D_p(o, \zeta) - D_p(h_\zeta(o), \zeta) \le \|h_\zeta\|_p \le D_p(o, \zeta) + D_p(h_\zeta(o), \zeta) \le C e^{t+\theta} + C. \]
Choosing θ ≥ log(4C), we have
\[ 2e^t \le \|h_\zeta\|_p \le 2C e^{t+\theta} \quad \text{unless } e^{t+\theta} \le 2C^2. \]
If \(2e^t \le \|h_\zeta\|_p \le 2Ce^{t+\theta}\), then for all \(h \in G_p\) satisfying \(\|h\|_p \le e^t\) we have \(e^t \le \|h_\zeta h\|_p \lesssim_{\times,\theta} e^t\); it follows that
\[ I_p(e^t) \ge \sum_{\substack{h \in G_p \\ \|h\|_p \le e^t}} \|h_\zeta h\|_p^{-2\delta} \asymp_{\times,\theta} e^{-2\delta t} N_p(e^t), \]
thus demonstrating (17.2.9). On the other hand, if \(e^{t+\theta} \le 2C^2\), then both sides of (17.2.9) are bounded from above and below independent of t.
Case 2: No such ξ exists. In this case, for each i write \(\eta_{t_i} \in H_{\xi_i}\) for some \(\xi_i = g_i(p_i) \in \Lambda_{\mathrm{bp}}\), if such a \(\xi_i\) exists. If ξ₁ exists, let s₁ > t₁ be the smallest number such that \(\eta_{s_1} \in \partial H_{\xi_1}\), and if ξ₂ exists, let s₂ < t₂ be the largest number such that \(\eta_{s_2} \in \partial H_{\xi_2}\). If \(\xi_i\) does not exist, let \(s_i = t_i\). Then t₁ ≤ s₁ ≤ s₂ ≤ t₂. Since \(m(\eta, s_i) = e^{-\delta s_i}\), we have m(η, s₂) ≤ m(η, s₁), so to complete the proof it suffices to show that
\[ m(\eta, s_1) \lesssim_{\times,\theta} m(\eta, t_1) \quad \text{and} \quad m(\eta, s_2) \gtrsim_{\times,\theta} m(\eta, t_2). \]
By Case 1, it suffices to show that
\[ m(\eta, s_1) \lesssim_\times \lim_{t \nearrow s_1} m(\eta, t) \ \text{ if } \xi_1 \text{ exists, and} \qquad m(\eta, s_2) \gtrsim_\times \lim_{t \searrow s_2} m(\eta, t) \ \text{ if } \xi_2 \text{ exists.} \]
Comparing with (17.2.1), we see that the desired formulas are
\[ e^{-\delta s_1} \lesssim_\times e^{-\delta(2\langle\xi_1|\eta\rangle_o - t_{\xi_1})}\, N_p(e^{2\langle\xi_1|\eta\rangle_o - s_1 - t_{\xi_1} - \theta}), \qquad e^{-\delta s_2} \gtrsim_\times e^{-\delta t_{\xi_2}}\,[I_p(e^{s_2 - t_{\xi_2} - \theta}) + \mu(p)], \]
which follow upon observing that the definitions of s₁ and s₂ imply that \(s_1 \asymp_+ 2\langle\xi_1|\eta\rangle_o - t_{\xi_1}\) and \(s_2 \asymp_+ t_{\xi_2}\) (cf. Lemma 17.2.6).
17.3. Proof of the global measure formula
Although we have finished the proof of Proposition 17.2.5, we still need a few lemmas before we can begin the proof of Theorem 17.2.2. Throughout these lemmas, we fix p ∈ P, and let
\[ R_p = \sup_{x \in S_p} D_p(o, x) < \infty. \]
Here \(S_p \subseteq E_p\) is a p-bounded set satisfying \(\Lambda \setminus \{p\} \subseteq G_p(S_p)\), as in Standing Assumptions 17.0.3.
Lemma 17.3.1. For all \(A \subseteq G_p\),
(17.3.1)
\[ \mu\Big(\bigcup_{h \in A} h(S_p)\Big) \asymp_\times \sum_{h \in A} e^{-\delta\|h\|} = \sum_{h \in A} \|h\|_p^{-2\delta}. \]
Proof. As the equality follows from Observation 6.2.10, we proceed to demonstrate the asymptotic. By Lemma 17.1.3, there exists σ > 0 such that \(S_p \subseteq \operatorname{Shad}_{h^{-1}(o)}(o, \sigma)\) for all \(h \in G_p\). Then by the Bounded Distortion Lemma 4.5.6,
\[ \mu(h(S_p)) = \int_{S_p} (h')^\delta \,d\mu \asymp_{\times,\sigma} e^{-\delta\|h\|} \mu(S_p) \asymp_\times e^{-\delta\|h\|}. \]
(In the last asymptotic, we have used the fact that \(\mu(S_p) > 0\), which follows from the fact that \(\Lambda \setminus \{p\} \subseteq G_p(S_p)\) together with the fact that µ is not a pointmass (Corollary 15.4.3).) Combining with the subadditivity of µ gives the ≲ direction of the first asymptotic of (17.3.1). To get the ≳ direction, we observe that since \(S_p\) is p-bounded, the strong discreteness of \(G_p\) implies that \(S_p \cap h(S_p) \ne \emptyset\) for only finitely many \(h \in G_p\); it follows that the function \(\eta \mapsto \#\{h \in G_p : \eta \in h(S_p)\}\) is bounded, and thus
\[ \mu\Big(\bigcup_{h \in A} h(S_p)\Big) \asymp_\times \int \#\{h \in A : \eta \in h(S_p)\} \,d\mu(\eta) = \sum_{h \in A} \mu(h(S_p)) \asymp_\times \sum_{h \in A} e^{-\delta\|h\|}. \]
Figure 17.3.1. Cusp excursion in the ball model (left) and upper half-space model (right). Since ξ = g(p) ∈ B(η, e⁻ᵗ), our estimate of µ(B(η, e⁻ᵗ)) is based on the function \(I_p\), which captures information "at infinity" about the cusp p. In the right-hand picture, the measure of B(η, e⁻ᵗ) can be estimated by considering the measure from the perspective of g(o) of a small ball around ξ.
Corollary 17.3.2. For all r > 0,
(17.3.2)
\[ I_p\Big(\frac{2}{r}\Big) \lesssim_\times \mu\big(B(p, r) \setminus \{p\}\big) \lesssim_\times I_p\Big(\frac{1}{2r}\Big). \]

Proof. Since
\[ \bigcup_{\substack{h \in G_p \\ \|h\|_p \ge R + R_p}} h(S_p) \subseteq B(p, 1/R) \setminus \{p\} = E_p \setminus B_p(o, R) \subseteq \bigcup_{\substack{h \in G_p \\ \|h\|_p \ge R - R_p}} h(S_p), \]
Lemma 17.3.1 gives
\[ I_p\Big(\frac{1}{r} + R_p\Big) \lesssim_\times \mu(B(p, r)) \lesssim_\times I_p\Big(\frac{1}{r} - R_p\Big), \]
thus proving the lemma if \(r \le 1/(2R_p)\). But when \(r > 1/(2R_p)\), all terms of (17.3.2) are bounded from above and below independent of r.
Adding µ(p) to all sides of (17.3.2) gives
(17.3.3)
\[ I_p\Big(\frac{2}{r}\Big) + \mu(p) \lesssim_\times \mu(B(p, r)) \lesssim_\times I_p\Big(\frac{1}{2r}\Big) + \mu(p). \]
Corollary 17.3.3 (Cf. Figure 17.3.1). Fix η ∈ Λ and t > 0 such that \(\eta_t \in H_\xi\) for some ξ = g(p) ∈ Λbp satisfying \(t \le \langle\xi|\eta\rangle_o - \log(2)\). Then
\[ e^{-\delta t_\xi}\,[I_p(e^{t - t_\xi + \sigma}) + \mu(p)] \lesssim_\times \mu\big(B(\eta, e^{-t})\big) \lesssim_\times e^{-\delta t_\xi}\,[I_p(e^{t - t_\xi - \sigma}) + \mu(p)], \]
where σ > 0 is independent of η and t.
Proof. The inequality \(\langle\xi|\eta\rangle_o \ge t + \log(2)\) implies that
\[ B(\xi, e^{-t}/2) \subseteq B(\eta, e^{-t}) \subseteq B(\xi, 2e^{-t}). \]
Without loss of generality suppose that g satisfies (17.2.8). Since \(t > t_\xi\), (4.5.9) guarantees that \(B(\xi, 2e^{-t}) \subseteq \operatorname{Shad}(g(o), \sigma_0)\) for some σ₀ > 0 independent of η and t. Then by the Bounded Distortion Lemma 4.5.6, we have
\[ B\big(p, e^{-(t - t_\xi)}/(2C)\big) \subseteq g^{-1}(B(\xi, e^{-t}/2)) \subseteq g^{-1}(B(\eta, e^{-t})) \subseteq g^{-1}(B(\xi, 2e^{-t})) \subseteq B\big(p, 2Ce^{-(t - t_\xi)}\big) \]
for some C > 0, and thus
\[ e^{-\delta t_\xi}\,\mu\big(B(p, e^{-(t - t_\xi)}/(2C))\big) \lesssim_\times \mu(B(\eta, e^{-t})) \lesssim_\times e^{-\delta t_\xi}\,\mu\big(B(p, 2Ce^{-(t - t_\xi)})\big). \]
Combining with (17.3.3) completes the proof.
Lemma 17.3.4. For all η ∈ Λ \ {p} and \(3R_p \le R \le D_p(o, \eta)/2\),
\[ D_p(o, \eta)^{-2\delta} N_p(R/2) \lesssim_\times \mu(B_p(\eta, R)) \lesssim_\times D_p(o, \eta)^{-2\delta} N_p(2R). \]

Proof. Since \(\eta \in \Lambda \setminus \{p\} \subseteq G_p(S_p)\), there exists \(h_\eta \in G_p\) such that \(\eta \in h_\eta(S_p)\). Since
\[ \bigcup_{\substack{h \in G_p \\ \|h\|_p \le R - R_p}} h_\eta h(S_p) \subseteq B_p(\eta, R) \subseteq \bigcup_{\substack{h \in G_p \\ \|h\|_p \le R + R_p}} h_\eta h(S_p), \]
Lemma 17.3.1 gives
\[ \sum_{\substack{h \in G_p \\ \|h\|_p \le R - R_p}} \|h_\eta h\|_p^{-2\delta} \lesssim_\times \mu(B_p(\eta, R)) \lesssim_\times \sum_{\substack{h \in G_p \\ \|h\|_p \le R + R_p}} \|h_\eta h\|_p^{-2\delta}. \]
The proof will be complete if we can show that for each \(h \in G_p\) such that \(\|h\|_p \le R + R_p\), we have
(17.3.4)
\[ \|h_\eta h\|_p \asymp_\times D_p(o, \eta). \]
And indeed,
\[ D_p(\eta, h_\eta h(o)) \le D_p(\eta, h_\eta(o)) + \|h\|_p \le R_p + (R + R_p) \le \frac{5}{6} D_p(o, \eta), \]
demonstrating (17.3.4) with an implied constant of 6.
Corollary 17.3.5. For all η ∈ Λ \ {p} and \(6R_p D(p, \eta)^2 \le r \le D(p, \eta)/4\), we have
\[ D(p, \eta)^{2\delta}\, N_p\Big(\frac{r}{4D(p, \eta)^2}\Big) \lesssim_\times \mu(B(\eta, r)) \lesssim_\times D(p, \eta)^{2\delta}\, N_p\Big(\frac{4r}{D(p, \eta)^2}\Big). \]
Proof. By (4.2.2), for every \(\zeta \in B_p\big(\eta, \frac{r}{D(p,\eta)(D(p,\eta)+r)}\big)\) we have that
\[ D(\eta, \zeta) = \frac{D_p(\eta, \zeta)}{D_p(o, \eta) D_p(o, \zeta)} \le \frac{D_p(\eta, \zeta)}{D_p(o, \eta)\big(D_p(o, \eta) - D_p(\eta, \zeta)\big)} \le \frac{\frac{r}{D(p,\eta)(D(p,\eta)+r)}}{D_p(o, \eta)\Big(D_p(o, \eta) - \frac{r}{D(p,\eta)(D(p,\eta)+r)}\Big)} = r. \]
Analogously, (4.2.2) also implies that for every ζ ∈ B(η, r) we have
\[ D_p(\eta, \zeta) = \frac{D(\eta, \zeta)}{D(p, \eta) D(p, \zeta)} \le \frac{r}{D(p, \eta)(D(p, \eta) - r)}. \]
Combining these inequalities gives us that
\[ B_p\Big(\eta, \frac{r}{D(p, \eta)(D(p, \eta) + r)}\Big) \subseteq B(\eta, r) \subseteq B_p\Big(\eta, \frac{r}{D(p, \eta)(D(p, \eta) - r)}\Big). \]
Now since r ≤ D(p, η)/4, we have
\[ B_p\Big(\eta, \frac{r}{2D(p, \eta)^2}\Big) \subseteq B(\eta, r) \subseteq B_p\Big(\eta, \frac{2r}{D(p, \eta)^2}\Big). \]
On the other hand, since \(6R_p D(p, \eta)^2 \le r \le D(p, \eta)/4\), we have
\[ 3R_p \le \frac{r}{2D(p, \eta)^2} \le \frac{2r}{D(p, \eta)^2} \le \frac{1}{2D(p, \eta)}, \]
whereupon Lemma 17.3.4 completes the proof.
Corollary 17.3.6 (Cf. Figure 17.3.2). Fix η ∈ Λ and t > 0 such that \(\eta_t \in H_\xi\) for some ξ = g(p) ∈ Λbp. If
(17.3.5)
\[ \langle\xi|\eta\rangle_o + \tau \le t \le 2\langle\xi|\eta\rangle_o - t_\xi - \tau, \]
then
(17.3.6)
\[ e^{-\delta(2\langle\xi|\eta\rangle_o - t_\xi)}\, N_p(e^{2\langle\xi|\eta\rangle_o - t_\xi - t - \sigma}) \lesssim_\times \mu\big(B(\eta, e^{-t})\big) \lesssim_\times e^{-\delta(2\langle\xi|\eta\rangle_o - t_\xi)}\, N_p(e^{2\langle\xi|\eta\rangle_o - t_\xi - t + \sigma}), \]
where σ, τ > 0 are independent of η and t.
Proof. Without loss of generality suppose that g satisfies (17.2.8), and write \(\zeta = g^{-1}(\eta)\). Since \(t > t_\xi\), (4.5.9) guarantees that \(B(\eta, e^{-t}) \subseteq \operatorname{Shad}(g(o), \sigma_0)\) for some σ₀ > 0 independent of η and t. Then by the Bounded Distortion Lemma 4.5.6, we have
\[ B(\zeta, e^{-(t - t_\xi)}/C) \subseteq g^{-1}(B(\eta, e^{-t})) \subseteq B(\zeta, Ce^{-(t - t_\xi)}) \]
Figure 17.3.2. Cusp excursions in the ball model (left) and upper half-space model (right). Since ξ = g(p) ∉ B(η, e⁻ᵗ), our estimate of µ(B(η, e⁻ᵗ)) is based on the function \(N_p\), which captures "local" information about the cusp p. In the right-hand picture, the measure of B(η, e⁻ᵗ) can be estimated by considering the measure from the perspective of g(o) of a large ball around η taken with respect to the \(D_\xi\)-metametric.
for some C > 0, and thus
\[ e^{-\delta t_\xi}\,\mu\big(B(\zeta, e^{-(t - t_\xi)}/C)\big) \lesssim_\times \mu(B(\eta, e^{-t})) \lesssim_\times e^{-\delta t_\xi}\,\mu\big(B(\zeta, Ce^{-(t - t_\xi)})\big). \]
If
(17.3.7)
\[ 6R_p D(p, \zeta)^2 \le \frac{e^{-(t - t_\xi)}}{C} \le Ce^{-(t - t_\xi)} \le \frac{D(p, \zeta)}{4}, \]
then Corollary 17.3.5 guarantees that
\[ e^{-\delta t_\xi} D(p, \zeta)^{2\delta}\, N_p\Big(\frac{e^{-(t - t_\xi)}}{4CD(p, \zeta)^2}\Big) \lesssim_\times \mu(B(\eta, e^{-t})) \lesssim_\times e^{-\delta t_\xi} D(p, \zeta)^{2\delta}\, N_p\Big(\frac{4Ce^{-(t - t_\xi)}}{D(p, \zeta)^2}\Big). \]
On the other hand, since ξ, η ∈ Shad(g(o), σ₀), the Bounded Distortion Lemma 4.5.6 guarantees that
\[ D(p, \zeta) \asymp_\times e^{t_\xi} D(\xi, \eta) = e^{-(\langle\xi|\eta\rangle_o - t_\xi)}. \]
Denoting the implied constant by K, we deduce (17.3.6) with σ = log(4CK²). The proof is completed upon observing that if \(\tau = \log(4CK \vee 6R_p CK^2)\), then (17.3.5) implies (17.3.7).
Lemma 17.3.7 (Cf. Lemma 15.4.1). Fix η ∈ Λ and t > 0 such that \(\eta_t \notin \bigcup(\mathcal H)\). Then
\[ \mu(B(\eta, e^{-t})) \asymp_\times e^{-\delta t}. \]

Proof. By (12.4.2), there exists g ∈ G such that \(d(g(o), \eta_t) \asymp_+ 0\). By (4.5.9), we have \(B(\eta, e^{-t}) \subseteq \operatorname{Shad}(g(o), \sigma)\) for some σ > 0 independent of η, t. It follows that
\[ \mu(B(\eta, e^{-t})) \asymp_\times e^{-\delta t}\,\mu\big(g^{-1}(B(\eta, e^{-t}))\big). \]
To complete the proof it suffices to show that \(\mu(g^{-1}(B(\eta, e^{-t})))\) is bounded from below. By the Bounded Distortion Lemma 4.5.6,
\[ g^{-1}(B(\eta, e^{-t})) \supseteq B(g^{-1}(\eta), \varepsilon) \]
for some ε > 0 independent of η, t. Now since G is of compact type, we have
\[ \inf_{x \in \Lambda} \mu(B(x, \varepsilon)) \ge \min_{x \in S_{\varepsilon/2}} \mu(B(x, \varepsilon/2)) > 0, \]
where \(S_{\varepsilon/2}\) is a maximal ε/2-separated subset of Λ. This completes the proof.
We are now ready to prove Theorem 17.2.2:

Proof of Theorem 17.2.2. Let σ₀ denote the implied constant of (17.2.5). Then by (17.2.1), for all η ∈ Λ, t > 0, and ξ ∈ Λbp,
(17.3.8)
\[ m(\eta, t) = \begin{cases} e^{-\delta t_\xi}\,[I_p(e^{t - t_\xi - \theta}) + \mu(p)] & t_\xi + \sigma_0 \le t \le \langle\xi|\eta\rangle_o \\ e^{-\delta(2\langle\xi|\eta\rangle_o - t_\xi)}\, N_p(e^{2\langle\xi|\eta\rangle_o - t - t_\xi - \theta}) & \langle\xi|\eta\rangle_o < t \le 2\langle\xi|\eta\rangle_o - t_\xi - \sigma_0 \\ \text{unknown} & \text{otherwise.} \end{cases} \]
Applying this formula to Corollaries 17.3.3 and 17.3.6 yields the following:
Lemma 17.3.8. There exists τ ≥ σ₀ such that for all η ∈ Λ and t > 0:
(i) If for some ξ, \(t_\xi + \tau \le t \le \langle\xi|\eta\rangle_o - \tau\), then (17.2.2) holds.
(ii) If for some ξ, \(\langle\xi|\eta\rangle_o + \tau \le t \le 2\langle\xi|\eta\rangle_o - t_\xi - \tau\), then (17.2.2) holds.

Now fix η ∈ Λ, and let
\[ A = \Big\{t > 0 : \eta_t \notin \bigcup(\mathcal H)\Big\} \cup \bigcup_{\xi \in \Lambda_{\mathrm{bp}}} [t_\xi + \tau, \langle\xi|\eta\rangle_o - \tau] \cup \bigcup_{\xi \in \Lambda_{\mathrm{bp}}} [\langle\xi|\eta\rangle_o + \tau, 2\langle\xi|\eta\rangle_o - t_\xi - \tau]. \]
Then by Lemmas 17.3.7 and 17.3.8, (17.2.2) with σ = τ holds for all t ∈ A.
Claim 17.3.9. Every interval of length 2τ intersects A.

Proof. If [s − τ, s + τ] does not intersect A, then by connectedness, there exists ξ ∈ Λbp such that \(\eta_t \in H_\xi\) for all t ∈ [s − τ, s + τ]. By Lemma 17.2.6, the fact that \(\eta_{s \pm \tau} \in H_\xi\) implies that \(t_\xi \le s \le 2\langle\xi|\eta\rangle_o - t_\xi\) (since τ ≥ σ₀). If \(s \le \langle\xi|\eta\rangle_o\), then \([s - \tau, s + \tau] \cap [t_\xi + \tau, \langle\xi|\eta\rangle_o - \tau] \ne \emptyset\), while if \(s \ge \langle\xi|\eta\rangle_o\), then \([s - \tau, s + \tau] \cap [\langle\xi|\eta\rangle_o + \tau, 2\langle\xi|\eta\rangle_o - t_\xi - \tau] \ne \emptyset\).
⊳

Thus for all t > 0, there exist \(t_\pm \in A\) such that \(t - 2\tau \le t_- \le t \le t_+ \le t + 2\tau\); then
\[ m(\eta, t + 3\tau) \lesssim_\times m(\eta, t_+ + \tau) \lesssim_\times \mu\big(B(\eta, e^{-t_+})\big) \le \mu\big(B(\eta, e^{-t})\big) \le \mu\big(B(\eta, e^{-t_-})\big) \lesssim_\times m(\eta, t_- - \tau) \lesssim_\times m(\eta, t - 3\tau), \]
i.e. (17.2.2) with σ = 3τ holds.
17.4. Groups for which µ is doubling
Recall that a measure µ is said to be doubling if for all η ∈ Supp(µ) and r > 0,
µ(B(η, 2r)) ≍× µ(B(η, r)). In the Standard Case, the Global Measure Formula
implies that the δ-conformal measure of a geometrically finite group is always doubling (Example 17.4.11). However, in general there are geometrically finite groups
whose δ-conformal measures are not doubling (Example 17.4.12). It is therefore of
interest to determine necessary and sufficient conditions on a geometrically finite
group for its δ-conformal measure to be doubling. The Global Measure Formula
immediately yields the following criterion:
Lemma 17.4.1. µ is doubling if and only if the function m satisfies
(17.4.1)
\[ m(\eta, t + \tau) \asymp_{\times,\tau} m(\eta, t - \tau) \quad \forall \eta \in \Lambda \ \forall t, \tau > 0. \]

Proof. If (17.4.1) holds, then (17.2.2) reduces to
(17.4.2)
\[ \mu(B(\eta, e^{-t})) \asymp_\times m(\eta, t), \]
and then (17.4.1) shows that µ is doubling. On the other hand, if µ is doubling, then (17.2.2) implies that
\[ m(\eta, t - \tau) \lesssim_\times \mu\big(B(\eta, e^{-(t - \tau - \sigma)})\big) \asymp_\times \mu\big(B(\eta, e^{-(t + \tau + \sigma)})\big) \lesssim_\times m(\eta, t + \tau); \]
combining with Proposition 17.2.5 shows that (17.4.1) holds.
Of course, the criterion (17.4.1) is not very useful by itself, since it refers to the complicated function m. In what follows we find more elementary necessary and sufficient conditions for doubling. First we must introduce some terminology.

Definition 17.4.2. A function f : [1, ∞) → [1, ∞) is called doubling if there exists β > 1 such that
(17.4.3)
\[ f(\beta R) \lesssim_{\times,\beta} f(R) \quad \forall R \ge 1, \]
and codoubling if there exists β > 1 such that
(17.4.4)
\[ f(\beta R) - f(R) \gtrsim_{\times,\beta} f(R) \quad \forall R \ge 1. \]
Observation 17.4.3. If there exists β > 1 such that
\[ N_p(\beta R) > N_p(R) \quad \forall R \ge 1, \]
then \(N_p\) is codoubling.

Proof. Fix R ≥ 1; there exists \(h \in G_p\) such that \(2R < \|h\|_p \le 2\beta R\). We have
\[ h\{j \in G_p : \|j\|_p \le R\} \subseteq \{j \in G_p : R < \|j\|_p \le (2\beta + 1)R\}, \]
and taking cardinalities gives
\[ N_p(R) \le N_p\big((2\beta + 1)R\big) - N_p(R). \]
We are now ready to state a more elementary characterization of when µ is doubling:

Proposition 17.4.4. µ is doubling if and only if all of the following hold:
(I) For all p ∈ P, \(N_p\) is both doubling and codoubling.
(II) For all p ∈ P and R ≥ 1,
(17.4.5)
\[ I_p(R) \asymp_\times R^{-2\delta} N_p(R). \]
(III) G is of divergence type.
Moreover, (II) can be replaced by
(II′) For all p ∈ P and R ≥ 1,
(17.4.6)
\[ \widetilde I_p(R) := \sum_{k=0}^{\infty} e^{-2\delta k} N_p(e^k R) \asymp_\times N_p(R). \]
Proof that (I)-(III) imply µ doubling. Fix η ∈ Λ and t, τ > 0; we will demonstrate (17.4.1). By (II), (III), and Observation 17.1.1, we have
(17.4.7)
\[ m(\eta, t) \asymp_\times \begin{cases} e^{-\delta t} & \eta_t \notin \bigcup(\mathcal H) \\ e^{-\delta t_\xi}\, e^{-2\delta(t - t_\xi - \theta)}\, N_p(e^{t - t_\xi - \theta}) & \eta_t \in H_\xi \text{ and } t \le \langle\xi|\eta\rangle_o \\ e^{-\delta(2\langle\xi|\eta\rangle_o - t_\xi)}\, N_p(e^{2\langle\xi|\eta\rangle_o - t - t_\xi - \theta}) & \eta_t \in H_\xi \text{ and } t > \langle\xi|\eta\rangle_o \end{cases} \]
\[ \asymp_\times e^{-\delta t} \begin{cases} 1 & \eta_t \notin \bigcup(\mathcal H) \\ e^{-\delta b(\eta,t)}\, N_p(e^{b(\eta,t) - \theta}) & \eta_t \in H_{g(p)}, \end{cases} \]
where b(η, t) is as in (17.2.6). Let \(t_\pm = t \pm \tau\). We split into two cases:
Case 1: ηt+ , ηt− ∈ Hg(p) for some g(p) ∈ Λbp . In this case, (17.4.1) follows from
Corollary 17.2.7 together with the fact that Np is doubling.
Case 2: η_{t+s} ∉ ⋃(H) for some s ∈ [−τ, τ]. In this case, Corollary 17.2.7 shows that b(η, t±) ≍+,τ 0 and thus
m(η, t+) ≍×,τ e^{−δt} ≍×,τ m(η, t−).
Before continuing the proof of Proposition 17.4.4, we observe that
Ip(R) + R^{−2δ} Np(R) ≍× Σ_{h∈Gp} (R ∨ ‖h‖p)^{−2δ}
  ≍× Σ_{h∈Gp} Σ_{k=1}^∞ (e^k R)^{−2δ} [e^k R ≥ ‖h‖p]
  = Σ_{k=1}^∞ (e^k R)^{−2δ} Np(e^k R)
  = R^{−2δ} Ĩp(R).
In particular, it follows that (17.4.6) is equivalent to
(17.4.8) Ip(R) .× R^{−2δ} Np(R).
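The comparison Ip(R) + R^{−2δ} Np(R) ≍× R^{−2δ} Ĩp(R) displayed above can be sanity-checked numerically on a hypothetical toy model in which the parabolic "group" has exactly one element of each integer norm and δ = 1; the bounded ratio below illustrates the two-sided bound (truncations and the sample norm sequence are assumptions of the sketch, not part of the text):

```python
import math

DELTA = 1.0
NMAX = 100_000
NORMS = range(1, NMAX + 1)  # hypothetical norms ‖h‖_p = 1, 2, ..., NMAX

def N_p(R):
    """Orbital counting function: #{h : ‖h‖_p <= R} (capped at NMAX)."""
    return min(NMAX, int(R))

def I_p(R):
    """Tail sum over norms exceeding R (truncated at NMAX)."""
    return sum(n ** (-2 * DELTA) for n in NORMS if n > R)

def I_tilde_p(R, kmax=30):
    """Ĩ_p(R) = Σ_{k>=0} e^{-2δk} N_p(e^k R), truncated at kmax."""
    return sum(math.exp(-2 * DELTA * k) * N_p(math.exp(k) * R)
               for k in range(kmax + 1))

def ratio(R):
    lhs = I_p(R) + R ** (-2 * DELTA) * N_p(R)
    rhs = R ** (-2 * DELTA) * I_tilde_p(R)
    return lhs / rhs
```

For this model both sides are of order 1/R, so the ratio stays within a fixed constant band as R varies.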
Proof that (I) and (II′ ) imply (II). Since Np is codoubling, let β > 1 be as
in (17.4.4). Then
Ip(R) ≥ Σ_{h∈Gp : R<‖h‖p≤βR} (βR)^{−2δ} = (βR)^{−2δ} (Np(βR) − Np(R)) &×,β R^{−2δ} Np(R).
Combining with (17.4.8) completes the proof.
Proof that µ doubling implies (I)-(III) and (II′ ). Since a doubling measure whose topological support is a perfect set cannot have an atomic part, we must
have µ(P ) = 0 and thus by Observation 17.1.1, (III) holds. Since
m(p, t) ≍×,p Ip(e^{t−t_0−θ}) + µ(p) = Ip(e^{t−t_0−θ})
for all sufficiently large t, setting η = p in (17.4.1) shows that the function Ip is
doubling.
Fix η ∈ Λ \ {p}. Let σ0 > 0 denote the implied constant of (17.2.5). For
s ∈ [t_0 + σ_0 + τ, ⟨p|η⟩o − τ], plugging t = 2⟨p|η⟩o − s into (17.4.1) and simplifying
using (17.3.8)ξ=p shows that
(17.4.9) Np(e^{s−τ−t_0−θ}) ≍×,τ Np(e^{s+τ−t_0−θ}).
Since ⟨p|η⟩o can be made arbitrarily large, (17.4.9) holds for all s ≥ t_0 + σ_0 + τ. It
follows that Np is doubling.
Next, we compare the values of m(η, ⟨p|η⟩o ± τ). This gives (assuming ⟨p|η⟩o > t_0 + σ_0 + τ)
e^{−δt_0} Ip(e^{⟨p|η⟩o−τ−t_0−θ}) ≍× e^{−δ(2⟨p|η⟩o−t_0)} Np(e^{⟨p|η⟩o−τ−t_0−θ}).
Letting R_η = exp(⟨p|η⟩o − τ − t_0 − θ), we have
(17.4.10) Ip(R_η) ≍× R_η^{−2δ} Np(R_η).
Now fix ζ ∈ Λ \ {p} and h ∈ Gp, and let η = h(ζ). Then Dp(h(o), η) ≍+,ζ 0, and thus the triangle inequality gives
1 ≤ Dp(o, η) ≍+,ζ ‖h‖p ≥ 1,
and so R_η ≍× Dp(o, η) ≍×,ζ ‖h‖p. Combining with (17.4.10) and the fact that the functions Ip and Np are doubling, we have
(17.4.11) Ip(‖h‖p) ≍× ‖h‖p^{−2δ} Np(‖h‖p)
for all h ∈ Gp .
Now fix 1 ≤ R1 < R2 such that ‖h_i‖p = R_i for some h_1, h_2 ∈ Gp, but such that the formula R1 < ‖h‖p < R2 is not satisfied for any h ∈ Gp. Then
lim_{R↘R1} Ip(R) = lim_{R↗R2} Ip(R) and lim_{R↘R1} Np(R) = lim_{R↗R2} Np(R).
On the other hand, applying (17.4.11) with h = h1 , h2 gives
Ip(R_i) ≍× R_i^{−2δ} Np(R_i).
Since Ip and Np are doubling, we have
R_1^{−2δ} ≍× Ip(R_1)/Np(R_1) ≍× (lim_{R↘R1} Ip(R))/(lim_{R↘R1} Np(R)) = (lim_{R↗R2} Ip(R))/(lim_{R↗R2} Np(R)) ≍× Ip(R_2)/Np(R_2) ≍× R_2^{−2δ},
and thus R1 ≍× R2 . Since R1 , R2 were arbitrary, Observation 17.4.3 shows that
Np is codoubling. This completes the proof of (I).
It remains to demonstrate (II) and (II′). Given any R ≥ 1, since Np is codoubling, we may find h ∈ Gp such that ‖h‖p ≍× R; combining with (17.4.11) and the fact that Ip and Np are doubling gives (17.4.5) and (17.4.8), demonstrating (II) and (II′).
We note that the proof actually shows the following (cf. (17.4.7)):
Corollary 17.4.5. If µ is doubling, then
µ(B(η, e^{−t})) ≍× e^{−δt} { 1  if η_t ∉ ⋃(H),
                             e^{−δ b(η,t)} Np(e^{b(η,t)})  if η_t ∈ H_{g(p)} }
for all η ∈ Λ, t > 0. Here b(η, t) is as in (17.2.6).
Although Proposition 17.4.4 is the best necessary and sufficient condition we
can give for doubling, in what follows we give necessary conditions and sufficient
conditions which are more elementary (Proposition 17.4.8), although the necessary
conditions are not the same as the sufficient conditions. In practice these conditions
are usually powerful enough to determine whether any given measure is doubling.
To state the result, we need the concept of the polynomial growth rate of a
function:
Definition 17.4.6 (Cf. (11.2.4)). The (polynomial) growth rate of a function f : [1, ∞) → [1, ∞) is the limit
α(f) := lim_{λ,R→∞} (log f(λR) − log f(R)) / log(λ)
if it exists. If the limit does not exist, then the numbers
α^*(f) := lim sup_{λ,R→∞} (log f(λR) − log f(R)) / log(λ)
α_*(f) := lim inf_{λ,R→∞} (log f(λR) − log f(R)) / log(λ)
are the upper and lower polynomial growth rates of f, respectively.
Lemma 17.4.7. Let f : [1, ∞) → [1, ∞).
(i) f is doubling if and only if α^*(f) < ∞.
(ii) f is codoubling if and only if α_*(f) > 0.
(iii) α_*(f) ≤ lim inf_{λ→∞} log f(λ)/log(λ) ≤ lim sup_{λ→∞} log f(λ)/log(λ) ≤ α^*(f).
In particular, α_*(Np) ≤ 2δp ≤ α^*(Np).
Proof of (i). Suppose that f is doubling, and let C > 1 denote the implied
constant of (17.4.3). Iterating gives
f(β^n R) ≤ C^n f(R) ∀n ∈ N ∀R ≥ 1
and thus
f(λR) .× λ^{log_β(C)} f(R) ∀λ > 1 ∀R ≥ 1.
It follows that α^*(f) ≤ log_β(C) < ∞. The converse direction is trivial.
Proof of (ii). Suppose that f is codoubling, and let C > 1 denote the implied
constant of (17.4.4). Then
f(βR) ≥ (1 + C^{−1}) f(R) ∀R ≥ 1.
Iterating gives
f(β^n R) ≥ (1 + C^{−1})^n f(R) ∀n ∈ N ∀R ≥ 1
and thus
f(λR) &× λ^{log_β(1+C^{−1})} f(R) ∀λ > 1 ∀R ≥ 1.
It follows that α_*(f) ≥ log_β(1 + C^{−1}) > 0. The converse direction is trivial.
Proof of (iii). Let Rn → ∞. For each n ∈ N,
lim sup_{λ→∞} (log f(λRn) − log f(Rn)) / log(λ) = s̄ := lim sup_{λ→∞} log f(λ)/log(λ).
Thus given s < s̄, we may find a large number λn > 1 such that
(log f(λn Rn) − log f(Rn)) / log(λn) > s.
Since λn, Rn → ∞ as n → ∞, it follows that α^*(f) ≥ s; since s was arbitrary, α^*(f) ≥ s̄. A similar argument shows that α_*(f) ≤ s̲ := lim inf_{λ→∞} log f(λ)/log(λ).
Finally, when f = Np, the equality s̲ = s̄ = 2δp is a consequence of (8.1.2) and Observation 6.2.10.
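Definition 17.4.6 and Lemma 17.4.7 can be illustrated numerically: for a pure power f(R) = R^s the incremental exponent (log f(λR) − log f(R))/log λ equals s exactly, and a logarithmic correction leaves the polynomial growth rate unchanged. A minimal sketch, with hypothetical sample functions:

```python
import math

def growth_exponents(f, lams=(1e2, 1e4, 1e6), Rs=(1e3, 1e6, 1e9)):
    """Sampled incremental exponents (log f(lam*R) - log f(R)) / log(lam);
    their sup/inf over large lam, R approximate the upper/lower growth rates."""
    return [(math.log(f(lam * R)) - math.log(f(R))) / math.log(lam)
            for lam in lams for R in Rs]

pure = lambda R: R ** 3.0                       # growth rate exactly 3
logcorr = lambda R: R ** 2.0 * math.log(R + math.e)  # growth rate still 2
```

The second example mirrors the profile used later in Example 17.5.14: multiplying by a power of log moves f inside the "narrow window" without changing α(f).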
We can now state our final result regarding criteria for doubling:
Proposition 17.4.8. In the following list, (A) ⇒ (B) ⇒ (C):
(A) For all p ∈ P , 0 < α_*(Np) ≤ α^*(Np) < 2δ.
(B) µ is doubling.
(C) For all p ∈ P , 0 < α_*(Np) ≤ α^*(Np) ≤ 2δ.
Proof of (A) ⇒ (B). Suppose that (A) holds. Then by Lemma 17.4.7, (I) of Proposition 17.4.4 holds. Since δp ≤ α^*(Np)/2 < δ for all p ∈ P , Theorem 17.1.2 implies that (III) of Proposition 17.4.4 holds. To complete the proof, we need to show that (II′) of Proposition 17.4.4 holds. Fix s ∈ (α^*(Np), 2δ). Since s > α^*(Np), we have
Np(λR) .×,s λ^s Np(R) ∀λ > 1, R ≥ 1
and thus
Np(R) ≤ Ĩp(R) .× Σ_{k=0}^∞ e^{−2δk} e^{sk} Np(R) ≍× Np(R),
demonstrating (17.4.6) and completing the proof.
Proof of (B) ⇒ (C). Suppose µ is doubling. By (I) of Proposition 17.4.4, α_*(Np) > 0. On the other hand, by (17.4.6) we have
λ^{−2δ} Np(λR) .× Np(R) ∀λ > 1, R ≥ 1
and thus α^*(Np) ≤ 2δ.
Proposition 17.4.4 shows that if G is a geometrically finite group with δ-conformal measure µ, then the question of whether µ is doubling is determined
entirely by its parabolic subgroups (Gp )p∈P and its Poincaré set ∆G . A natural
question is when the second input can be removed, that is: if we are told what
the parabolic subgroups (Gp )p∈P are, can we sometimes determine whether µ is
doubling without looking at ∆G? A trivial example is that if α_*(Np) = 0 or α^*(Np) = ∞ for some p ∈ P , then we automatically know that µ is not doubling.
Conversely, the following definition and proposition describe when we can deduce
that µ is doubling:
Definition 17.4.9. A parabolic group H ≤ Isom(X) with global fixed point
p ∈ ∂X is pre-doubling if
(17.4.12) 0 < α_*(N_{E_p,H}) ≤ α^*(N_{E_p,H}) = 2δ_H < ∞
and H is of divergence type.
Proposition 17.4.10.
(i) If Gp is pre-doubling for every p ∈ P , then µ is doubling.
(ii) Let H ≤ Isom(X) be a parabolic subgroup, and let g ∈ Isom(X) be a loxodromic isometry such that hg, Hi is a strongly separated Schottky product.
Then the following are equivalent:
(A) H is pre-doubling.
(B) For every n ∈ N, the δn -quasiconformal measure µn of Gn = hg n , Hi
is doubling. Here we assume that δn := δ(Gn ) < ∞.
Proof of (i). For all p ∈ P , the fact that Gp is of divergence type implies
that δ > δp (Proposition 10.3.10); combining with (17.4.12) gives 0 < α_*(Np) ≤ α^*(Np) < 2δ. Proposition 17.4.8 completes the proof.
Proof of (ii). Since (up to equivalence) the only parabolic point of Gn is the
global fixed point of H (Proposition 12.4.19), the implication (A) ⇒ (B) follows
from part (i). Conversely, suppose that (B) holds. Then by Proposition 17.4.8, we
have
0 < α_*(N_{E_p,H}) ≤ α^*(N_{E_p,H}) ≤ 2δn < ∞.
Since δn → δH as n → ∞ (Proposition 10.3.7(iv)), taking the limit and combining
with the inequality 2δ_H ≤ α^*(N_{E_p,H}) yields (17.4.12). On the other hand, by
Proposition 17.4.4, for each n, Gn is of divergence type, so applying Proposition
10.3.7(iv) again, we see that H is of divergence type.
Example 17.4.11. If
(17.4.13) Np(R) ≍× R^{2δp} ∀p ∈ P,
then the groups (Gp )p∈P are pre-doubling, and thus by Proposition 17.4.10(i), µ is
doubling. Combining with Corollary 17.4.5 gives
µ(B(η, e^{−t})) ≍× e^{−δt} { 1  if η_t ∉ ⋃(H),
                             e^{(2δ_ξ−δ) b(η,t)}  if η_t ∈ H_ξ }.
This generalizes B. Schapira’s global measure formula [153, Théorème 3.2] to the
setting of regularly geodesic strongly hyperbolic metric spaces.
We remark that the asymptotic (17.4.13) is satisfied whenever X is a finite-dimensional algebraic hyperbolic space; see e.g. [137, Lemma 3.5]. In particular, specializing Schapira’s global measure formula to the settings of finite-dimensional algebraic hyperbolic spaces and finite-dimensional real hyperbolic spaces gives the global measure formulas of Newberger [137, Main Theorem] and Stratmann–Velani–Sullivan [160, Theorem 2], [165, Theorem on p.271], respectively.
By contrast, when X = H = H∞ , the asymptotic (17.4.13) is usually not
satisfied. Let us summarize the various behaviors that we have seen for the orbital
counting functions of parabolic groups acting on H∞ , and their implications for
doubling:
Examples 17.4.12 (Examples of doubling and non-doubling Patterson–Sullivan
measures of geometrically finite subgroups of Isom(H∞ )).
1. In the proof of Theorem 11.2.11 (cf. Remark 11.2.12), we saw that if Γ is
a finitely generated virtually nilpotent group and if f : [1, ∞) → [1, ∞) is
a function satisfying
α_Γ < α_*(f) ≤ α^*(f) < ∞,
then there exists a parabolic group H ≤ Isom(H∞ ) isomorphic to Γ whose
orbital counting function is asymptotic to f . Now, a group H constructed
in this way may or may not be pre-doubling; it depends on the chosen
function f . We note that by applying Proposition 17.4.10(ii) to such a
group, one can construct examples of geometrically finite subgroups of
Isom(H∞ ) whose Patterson–Sullivan measures are not doubling. On the
other hand, for any parabolic group H constructed in this way, if H is embedded into a geometrically finite group G with sufficiently large Poincaré
exponent (namely 2δ_G > α^*(f)), then the Patterson–Sullivan measure of
G may be doubling (assuming that no other parabolic subgroups of G are
causing problems).
2. In Theorem 14.1.5, we showed that if f : [0, ∞) → N satisfies the condition
∀ 0 ≤ R1 ≤ R2 : f(R1) divides f(R2),
then there exists a parabolic subgroup of Isom(H∞ ) whose orbital counting
function is equal to f . This provides even more examples of parabolic
groups which are not pre-doubling. In particular, it provides examples of
parabolic groups H which satisfy either α_*(N_H) = 0 or α^*(N_H) = ∞ (cf.
Example 11.2.18); such groups cannot be embedded into any geometrically
finite group with a doubling Patterson–Sullivan measure.
Note that example 2 can be used to construct a geometrically finite group
acting isometrically on an R-tree which does not have a doubling Patterson–Sullivan
measure. On the other hand, example 1 has no analogue in R-trees by Remark 6.1.8.
17.5. Exact dimensionality of µ
We now turn to the question of the fractal dimensions of the measure µ. We
recall that the Hausdorff dimension and packing dimension of a measure µ on ∂X
are defined by the formulas
dimH (µ) = inf {dimH (A) : µ(∂X \ A) = 0}
dimP (µ) = inf {dimP (A) : µ(∂X \ A) = 0} .
If G is of convergence type, then µ is atomic, so dimH (µ) = dimP (µ) = 0. Consequently, for the remainder of this chapter we make the
Standing Assumption 17.5.1. G is of divergence type.
Given this assumption, it is natural to expect that dimH (µ) = dimP (µ) = δ.
Indeed, the inequality dimH (µ) ≤ δ follows immediately from Theorems 1.2.1 and
12.4.5, and in the Standard Case equality holds [160, Proposition 4.10]. Even
stronger than the equalities dimH (µ) = dimP (µ) = δ, it is natural to expect that
µ is exact dimensional :
Definition 17.5.2. A measure µ on a metric space (Z, D) is called exact dimensional of dimension s if the limit
(17.5.1) dµ(η) := lim_{t→∞} (1/t) log(1/µ(B(η, e^{−t})))
exists and equals s for µ-a.e. η ∈ Z.
For example, every Ahlfors s-regular measure is exact dimensional of dimension s.
If the limit in (17.5.1) does not exist, then we denote the lim inf by d̲µ(η) and the lim sup by d̄µ(η).
Proposition 17.5.3 ([127, §8]). For any measure µ on a metric space (Z, D),
dim_H(µ) = ess sup_{η∈Z} d̲µ(η) and dim_P(µ) = ess sup_{η∈Z} d̄µ(η).
In particular, if µ is exact dimensional of dimension s, then
dimH (µ) = dimP (µ) = s.
Combining Proposition 17.5.3 with Lemma 17.3.7 and Observation 17.1.1 immediately yields the following:
Observation 17.5.4. If µ is the Patterson–Sullivan measure of a geometrically
finite group of divergence type, then
dimH (µ) ≤ δ ≤ dimP (µ).
In particular, if µ is exact dimensional, then µ is exact dimensional of dimension δ.
It turns out that µ is not necessarily exact dimensional (Example 17.5.14), but
counterexamples to exact dimensionality must fall within a very narrow window
(Theorem 17.5.9), and in particular if µ is doubling then µ is exact dimensional
(Corollary 17.5.12). As a first step towards these results, we will show that exact
dimensionality is equivalent to a certain Diophantine condition. For this, we need
to recall some results from [73].
17.5.1. Diophantine approximation on Λ. Classically, Diophantine approximation is concerned with the approximation of a point x ∈ R \ Q by a rational
number p/q ∈ Q. The two important quantities are the error term |x − p/q| and
the height q. Given a function Ψ : N → [0, ∞), the point x ∈ R \ Q is said to be
Ψ-approximable if
|x − p/q| ≤ Ψ(q) for infinitely many p/q ∈ Q.
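The classical definition above can be sketched with a brute-force search. The choices x = √2 and Ψ(q) = q^{−2} below are hypothetical samples (Dirichlet's theorem guarantees infinitely many solutions for this Ψ), and the search recovers the continued-fraction convergents of √2 among the Ψ-approximations:

```python
import math

def psi_solutions(x, qmax, psi=lambda q: q ** -2):
    """All rationals p/q with 1 <= q <= qmax and |x - p/q| <= psi(q).
    For each q it suffices to test the nearest integer p = round(x*q)."""
    sols = []
    for q in range(1, qmax + 1):
        p = round(x * q)
        if abs(x - p / q) <= psi(q):
            sols.append((p, q))
    return sols

sols = psi_solutions(math.sqrt(2), 1000)
```

The denominators that appear (1, 2, 5, 12, 29, 70, 169, 408, 985, ...) are essentially those of the convergents of √2, illustrating that Ψ-approximability is a statement about infinitely many good rational approximations.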
In the setting of a group acting on a hyperbolic metric space, we can instead
talk about dynamical Diophantine approximation, which is concerned with the
approximation of a point η ∈ Λ by points g(ξ) ∈ G(ξ), where ξ ∈ Λ is a distinguished
point. For this to make sense, one needs a new definition of error and height: the
error term is defined to be D(g(ξ), η), and the height is defined to be b^{‖g‖}. (If there
is more than one possibility for g, it may be chosen so as to minimize the height.)
Some motivation for these definitions comes from considering classical Diophantine
approximation as a special case of dynamical Diophantine approximation which
occurs when X = H2 and G = SL2 (Z); see e.g. [73, Observation 1.15] for more
details. Given a function Φ : [0, ∞) → (0, ∞), the point η ∈ Λ is said to be Φ, ξ-well
approximable if for every K > 0,
D(g(ξ), η) ≤ Φ(K b^{‖g‖}) for infinitely many g ∈ G
(cf. [73, Definition 1.36]). Moreover, η is said to be ξ-very well approximable if
ω_ξ(η) := lim sup_{g∈G, g(ξ)→η} (−log_b D(g(ξ), η)) / ‖g‖ > 1
(cf. [73, p.9]). The set of Φ, ξ-well approximable points is denoted WAΦ,ξ , while
the set of ξ-very well approximable points is denoted VWAξ . Finally, a point η is
said to be Liouville if ωξ (η) = ∞; the set of Liouville points is denoted Liouvilleξ .
In the following theorems, we return to the setting of Standing Assumptions
17.0.3 and 17.5.1.
Theorem 17.5.5 (Corollary of [73, Theorem 8.1]). Fix p ∈ P , and let Φ :
[0, ∞) → (0, ∞) be a function such that the function t 7→ tΦ(t) is nonincreasing.
Then
(i) µ(WAΦ,p ) = 0 or 1 according to whether the series
(17.5.2) Σ_{g∈G} e^{−δ‖g‖} Ip(1/(e^{‖g‖} Φ(K e^{‖g‖})))
converges for some K > 0 or diverges for all K > 0, respectively.
(ii) µ(VWAp ) = 0 or 1 according to whether the series
(17.5.3) Σdiv(p, κ) := Σ_{g∈G} e^{−δ‖g‖} Ip(e^{κ‖g‖})
converges for all κ > 0 or diverges for some κ > 0, respectively.
(iii) µ(Liouvillep ) = 0 or 1 according to whether the series Σdiv (p, κ) converges
for some κ > 0 or diverges for all κ > 0, respectively.
Proof. Standing Assumption 17.5.1, Theorem 1.4.1, and Observation 17.1.1
imply that µ is ergodic and that µ(p) = 0, thus verifying the hypotheses of [73,
Theorem 8.1]. Theorem 17.2.2 shows that
Ip (C1 /r) .×,p µ(B(p, r)) .×,p Ip (C2 /r)
for some constants C1 ≥ 1 ≥ C2 > 0. Thus for all K > 0,
Σ_{g∈G} e^{−δ‖g‖} Ip(1/(e^{‖g‖} Φ(KC_1 e^{‖g‖})))
  ≤ Σ_{g∈G} e^{−δ‖g‖} Ip(C_1/(e^{‖g‖} Φ(K e^{‖g‖})))
  .× [73, (8.1)]
  .× Σ_{g∈G} e^{−δ‖g‖} Ip(C_2/(e^{‖g‖} Φ(K e^{‖g‖})))
  ≤ Σ_{g∈G} e^{−δ‖g‖} Ip(1/(e^{‖g‖} Φ((K/C_1) e^{‖g‖}))).
Thus, [73, (8.1)] diverges for all K > 0 if and only if (17.5.2) diverges for all
K > 0. This completes the proof of (i). To demonstrate (ii) and (iii), simply note that VWA_p = ⋃_{c>0} WA_{Φ_c,p} and Liouville_p = ⋂_{c>0} WA_{Φ_c,p}, where Φ_c(t) = t^{−(1+c)}, and apply (i). The constant K may be absorbed by a slight change of κ.
Theorem 17.5.6 (Corollary of [73, Theorem 7.1]). For all ξ ∈ Λ and c > 0,
dim_H(WA_{Φ_c,ξ}) ≤ δ/(1 + c),
where Φ_c(t) = t^{−(1+c)} as above. In particular, dim_H(Liouville_ξ) = 0, and VWA_ξ
can be written as the countable union of sets of Hausdorff dimension strictly less
than δ.
(No proof is needed as this follows directly from [73, Theorem 7.1].)
There is a relation between dynamical Diophantine approximation by the orbits
of parabolic points and the lengths of cusp excursions along geodesics. A well-known
example is that a point η ∈ Λ is dynamically badly approximable with respect to
every parabolic point if and only if the geodesic [o, η] has bounded cusp excursion
lengths [73, Proposition 1.21]. The following observation is in a similar vein:
Observation 17.5.7. For η ∈ Λ, we have:
η ∈ ⋃_{p∈P} VWA_p ⇔ lim sup_{ξ∈Λbp, t_ξ→∞} (⟨ξ|η⟩ − t_ξ)/t_ξ > 0 ⇔ lim sup_{t→∞} b(η, t)/t > 0,
η ∈ ⋃_{p∈P} Liouville_p ⇔ lim sup_{ξ∈Λbp, t_ξ→∞} (⟨ξ|η⟩ − t_ξ)/t_ξ = ∞ ⇔ lim sup_{t→∞} b(η, t)/t = 1.
Proof. If ξ = g(p) ∈ Λbp, then ‖g‖ &+ t_ξ, with ≍+ for at least one value of g (Lemma 17.2.8). Thus
max_{p∈P} ω_p(η) = max_{p∈P} lim sup_{g∈G, g(p)→η} (−log_b D(g(p), η)) / ‖g‖ = lim sup_{ξ∈Λbp, ξ→η} ⟨ξ|η⟩/t_ξ,
so
(17.5.4) lim sup_{ξ∈Λbp, t_ξ→∞} (⟨ξ|η⟩ − t_ξ)/t_ξ = max_{p∈P} ω_p(η) − 1.
On the other hand, it is readily verified that if [o, η] intersects H_ξ, then the function f(t) = b(η, t)/t attains its maximum at t = ⟨ξ|η⟩o, at which f(t) = (⟨ξ|η⟩o − t_ξ)/⟨ξ|η⟩o. Thus we have that
(17.5.5) lim sup_{t→∞} b(η, t)/t = lim sup_{ξ∈Λbp, t_ξ→∞} sup_{t>0 : η_t∈H_ξ} b(η, t)/t = lim sup_{ξ∈Λbp, t_ξ→∞} (⟨ξ|η⟩o − t_ξ)/⟨ξ|η⟩o = 1 − 1/max_{p∈P} ω_p(η).
Since
max_{p∈P} ω_p(η) { = ∞  if η ∈ ⋃_{p∈P} Liouville_p,
                   ∈ (1, ∞)  if η ∈ ⋃_{p∈P} VWA_p \ ⋃_{p∈P} Liouville_p,
                   = 1  if η ∉ ⋃_{p∈P} VWA_p },
applying (17.5.4) and (17.5.5) completes the proof.
We are now ready to state our main theorem regarding the relation between
exact dimensionality and dynamical Diophantine approximation:
Theorem 17.5.8. The following are equivalent:
(A) µ(VWAp ) = 0 ∀p ∈ P .
(B) µ is exact dimensional.
(C) dimH (µ) = δ.
(D) µ(VWAξ ) = 0 ∀ξ ∈ Λ.
The implication (B) ⇒ (C) is part of Proposition 17.5.3, while (C) ⇒ (D) is
an immediate consequence of Theorem 17.5.6, and (D) ⇒ (A) is trivial. Thus we
demonstrate (A) ⇒ (B):
Proof of (A) ⇒ (B). Fix η ∈ Λ \ ⋃_{p∈P} VWA_p and t > 0. Suppose that η_t ∈ H_ξ for some ξ ∈ Λbp. Let t− < t < t+ satisfy
t− ≍+ t_ξ,  t+ ≍+ 2⟨ξ|η⟩o − t_ξ,  and  η_{t±} ∉ ⋃(H).
Then by Lemma 17.3.7,
µ(B(η, e^{−t±})) ≍× e^{−δt±}.
In particular,
(17.5.6) δt− .+ log(1/µ(B(η, e^{−t}))) .+ δt+.
Now, by Observation 17.5.7, we have
(t+ − t−)/t ≤ 2(⟨ξ|η⟩o − t_ξ + (constant))/t_ξ → 0 as t → ∞.
Since t− < t < t+ , it follows that t− /t, t+ /t → 1 as t → ∞. Combining with
(17.5.6) gives dµ (η) = δ (cf. (17.5.1)). But by assumption (A), this is true for
µ-a.e. η ∈ Λ. Thus µ is exact dimensional.
17.5.2. Examples and non-examples of exact dimensional measures.
Combining Theorems 17.5.8 and 17.5.5 gives a necessary and sufficient condition
for µ to be exact dimensional in terms of the convergence or divergence of a family
of series. We can ask how often this condition is satisfied. Our first result shows
that it is almost always satisfied:
Theorem 17.5.9. If for all p ∈ P , the series
(17.5.7) Σ_{h∈Gp} e^{−δ‖h‖} ‖h‖ ≍× Σ_{h∈Gp} ‖h‖p^{−2δ} log ‖h‖p ≍× Σ_{k=0}^∞ e^{−2δk} k Np(e^k)
converges, then µ is exact dimensional.
Proof. Fix p ∈ P and κ > 0. We have
Σdiv(p, κ) = Σ_{g∈G} e^{−δ‖g‖} Σ_{h∈Gp : ‖h‖≥κ‖g‖/2} e^{−δ‖h‖}
  = Σ_{h∈Gp} e^{−δ‖h‖} Σ_{g∈G : ‖g‖≤2‖h‖/κ} e^{−δ‖g‖}
  ≍× Σ_{h∈Gp} e^{−δ‖h‖} Σ_{k≤2‖h‖/κ+1} e^{−δk} #{g ∈ G : k − 1 ≤ ‖g‖ < k}
  ≤ Σ_{h∈Gp} e^{−δ‖h‖} Σ_{k≤2‖h‖/κ+1} e^{−δk} N_{X,G}(k)
  .× Σ_{h∈Gp} e^{−δ‖h‖} Σ_{k≤2‖h‖/κ+1} 1   (by Corollary 16.7.1)
  ≍× Σ_{h∈Gp} e^{−δ‖h‖} ‖h‖.
So if (17.5.7) converges, so does Σdiv (p, κ), and thus by Theorems 17.5.5 and 17.5.8,
µ is exact dimensional.
Corollary 17.5.10. If for all p ∈ P , δp < δ, then µ is exact dimensional.
Proof. In this case, the series (17.5.7) converges, as it is dominated by Σs (Gp )
for any s ∈ (δp , δ).
Remark 17.5.11. Combining with Proposition 10.3.10 shows that if µ is not
exact dimensional, then
Σ_{h∈Gp} e^{−δ‖h‖} < ∞ = Σ_{h∈Gp} e^{−δ‖h‖} ‖h‖
for some p ∈ P . Equivalently,
Σ_{k=0}^∞ e^{−2δk} Np(e^k) < ∞ = Σ_{k=0}^∞ e^{−2δk} k Np(e^k).
This creates a very “narrow window” for the orbital counting function Np .
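To see how narrow the window is, take the hypothetical borderline profile Np(e^k) ≈ e^{2δk}/(k+1)^2 (the shape used in Example 17.5.14 below): the unweighted series converges while the weighted one diverges like a harmonic sum. A short numerical illustration:

```python
def partial_sums(kmax):
    """Partial sums of the two series of Remark 17.5.11 for the borderline
    profile N_p(e^k) = e^{2δk}/(k+1)^2, for which e^{-2δk} N_p(e^k) = 1/(k+1)^2."""
    s_plain, s_weighted = 0.0, 0.0
    for k in range(kmax + 1):
        term = 1.0 / (k + 1) ** 2     # e^{-2δk} N_p(e^k)
        s_plain += term
        s_weighted += k * term        # e^{-2δk} k N_p(e^k)
    return s_plain, s_weighted
```

Between kmax = 10^4 and 10^5 the plain sum barely moves, while the weighted sum grows by roughly log 10 ≈ 2.3, matching its logarithmic divergence.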
Corollary 17.5.12. If µ is doubling, then µ is exact dimensional.
Proof. If µ is doubling, then
Σ_{k=0}^∞ e^{−2δk} k Np(e^k) = Σ_{k=1}^∞ Σ_{ℓ=0}^∞ e^{−2δ(k+ℓ)} Np(e^{k+ℓ})
  = Σ_{k=1}^∞ e^{−2δk} Ĩp(e^k)
  ≍× Σ_{k=1}^∞ e^{−2δk} Np(e^k)   (by Proposition 17.4.4).
Remark 17.5.11 completes the proof.
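The first equality in the chain above is a finite rearrangement that can be checked exactly: each total index m = k + ℓ with k ≥ 1, ℓ ≥ 0 occurs exactly m times. A small numerical check with an arbitrary sample sequence a_m (the particular δ and weights are assumptions of the sketch):

```python
import math

def check_rearrangement(delta=0.7, M=200):
    """Verify Σ_{m=1}^M m·a_m = Σ_{k=1}^M Σ_{ℓ=0}^{M-k} a_{k+ℓ} for a sample a_m."""
    a = [math.exp(-2 * delta * m) * (1 + m) for m in range(M + 1)]
    lhs = sum(m * a[m] for m in range(1, M + 1))
    rhs = sum(a[k + l] for k in range(1, M + 1) for l in range(0, M - k + 1))
    return lhs, rhs
```

Since the terms are nonnegative, the same rearrangement is valid for the infinite series whether or not they converge, which is exactly how it is used in the proof.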
Our next theorem shows that in certain circumstances, the converse holds in
Theorem 17.5.9. Specifically:
Theorem 17.5.13. Suppose that X is an R-tree and that G is the pure Schottky
product (cf. Definition 14.5.7) of a parabolic group H with a lineal group J. Let p be
the global fixed point of H, so that P = {p} is a complete set of inequivalent parabolic
points for G (Proposition 12.4.19). Suppose that the series (17.5.7) diverges. Then
µ is not exact dimensional; moreover, µ(Liouvillep ) = 1 and dimH (µ) = 0.
Example 17.5.14. To see that the hypotheses of this theorem are not vacuous, fix δ > 0 and let
f(R) = R^{2δ}/log²(R),
or more generally, let f be any increasing function such that Σ_{k=1}^∞ e^{−2δk} k f(e^k) diverges but Σ_{k=1}^∞ e^{−2δk} f(e^k) converges. By Theorem 14.1.5, there exists an R-tree X and a parabolic group H ≤ Isom(X) such that N_{E_p,H} ≍× f. Then the series (17.5.7) diverges, but Σδ(H) < ∞. Thus, there exists a unique r > 0 such that
2 Σ_{n=1}^∞ e^{−δrn} = 1/(Σδ(H) − 1).
Let J = rZ, interpreted as a group acting by translations on the R-tree R, and let G be the pure Schottky product of H and J. Then Σδ(J) − 1 = 2 Σ_{n=1}^∞ e^{−δrn}, so (Σδ(H) − 1)(Σδ(J) − 1) = 1. Since the map s ↦ (Σs(H) − 1)(Σs(J) − 1) is decreasing, it follows from Proposition 14.5.8 that ∆(G) = [0, δ]. In particular, G is of divergence type, so Standing Assumption 17.5.1 is satisfied.
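The defining equation for r can be solved in closed form via the geometric series: writing x = e^{−δr}, the equation 2x/(1 − x) = 1/(Σδ(H) − 1) gives r = log(2Σδ(H) − 1)/δ. A quick numerical check, using a hypothetical sample value of Σδ(H):

```python
import math

def solve_r(S, delta):
    """Closed-form solution of 2·Σ_{n>=1} e^{-δrn} = 1/(S - 1), S = Σ_δ(H) > 1."""
    return math.log(2 * S - 1) / delta

def lhs(r, delta, nmax=10_000):
    """Truncated left-hand side 2·Σ_{n=1}^{nmax} e^{-δrn}."""
    return 2 * sum(math.exp(-delta * r * n) for n in range(1, nmax + 1))
```

For S = 2 and δ = 1 this gives r = log 3, and the truncated series indeed sums to 1/(S − 1) = 1.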
Remark 17.5.15. Applying a BIM embedding allows us to construct an example acting on H∞.
Proof of Theorem 17.5.13. As in the proof of Proposition 14.5.8 we let
E = (H \ {id})(J \ {id}),
so that
G=
[
JE n H.
n≥0
Define a measure θ on E via the formula
θ = Σ_{g∈E} e^{−δ‖g‖} δ_g.
By Proposition 14.5.8, the fact that G is of divergence type (Standing Assumption 17.5.1), and the fact that Σδ (J), Σδ (H) < ∞ (Proposition 10.3.10), θ is a
probability measure. The Patterson–Sullivan measure of G is related to θ by the
formula
µ = (1/(Σδ(J) − 1)) Σ_{j∈J} e^{−δ‖j‖} j_* π_*[θ^N],
where π : E N → ΛG is the coding map.
Next, we use a theorem proven independently by H. Kesten and A. Raugi,2
which we rephrase here in the language of measure theory:
Theorem 17.5.16 ([111]; see also [149]). Let θ be a probability measure on a
set E, and let f : E → R be a function such that
∫ |f(x)| dθ(x) = ∞.
Then for θ^N-a.e. (x_n)_{1}^{∞} ∈ E^N,
lim sup_{n→∞} |f(x_{n+1})| / |Σ_{i=1}^n f(x_i)| = ∞.
2We are grateful to “cardinal” of http://mathoverflow.net and J. P. Conze, respectively, for
these references.
Letting f(g) = ‖g‖, the theorem applies to our measure θ, because our assumption that (17.5.7) diverges is equivalent to the assertion that ∫ f(x) dθ(x) = ∞.
Now fix j ∈ J, and let (g_n)_{1}^{∞} ∈ E^N be a θ^N-typical point. Then the limit point
η = lim_{n→∞} j g_1 ⋯ g_n(o)
represents a typical point with respect to the Patterson–Sullivan measure µ. By Theorem 17.5.16, we have
lim sup_{n→∞} ‖g_{n+1}‖ / Σ_{i=1}^n ‖g_i‖ = ∞.
Write g_i = h_i j_i for each i. Then ‖g_i‖ = ‖h_i‖ + ‖j_i‖. Since ∫ ‖j‖ dθ(hj) < ∞, the law of large numbers implies that
lim_{n→∞} ‖j_{n+1}‖ / Σ_{i=1}^n ‖j_i‖ < ∞,
so
lim sup_{n→∞} ‖h_{n+1}‖ / Σ_{i=1}^n ‖g_i‖ = ∞.
But ‖h_{n+1}‖ represents the length of the excursion of the geodesic [o, η] into the cusp corresponding to the parabolic point g_1 ⋯ g_n(p). Combining with Observation 17.5.7 shows that η ∈ Liouville_p. Since η was a µ-typical point, this shows that µ(Liouville_p) = 1. By Theorem 17.5.6, this implies that dim_H(µ) = 0. By Observation 17.5.4, µ is not exact dimensional.
APPENDIX A
Open problems
Problem 1 (Cf. Chapter 8, Section 13.4). Do there exist a hyperbolic metric space X and a group G such that δ̃(G) < δ(G) = ∞, but δ̃(G) cannot be “computed from” the modified Poincaré exponent of a locally compact group via Definition 8.2.1? This question is vague because a more precise version might be contradicted by Example 13.4.9, in which a group G is constructed such that δ̃(G) < δ(G) = ∞ but the closure of G (in the compact-open topology) is not locally compact. In this case, δ̃(G) cannot be computed from δ̃(G̅), but there is still a locally compact group “hidden” in the argument, namely the closure of G ↿ H^d = ι_1(Γ). Is there any Poincaré irregular group whose construction is not somehow “based on” a locally compact group?
Problem 2 (Cf. Theorem 1.2.3). If G is a Poincaré irregular parabolic group, does the modified Poincaré exponent δ̃(G) have a geometric significance? Theorem 1.2.3 does not apply directly since G is elementary. It is tempting to claim that
(A.1) δ̃(G) = inf{dim_H(Λr(H)) : H ≥ G nonelementary}
(under some reasonable hypotheses about the isometry group of the space in question), but it seems that the right hand side is equal to infinity in most cases due to Proposition 10.3.7(iii). Note that by contrast, (A.1) is usually true for Poincaré regular groups; for example, it holds in the Standard Case [18].
Problem 3 (Cf. Chapter 11, Remark 11.2.12). Given a virtually nilpotent
group Γ which is not virtually abelian, determine whether there exists a homomorphism Φ : Γ → Isom(B) such that δ(Φ(Γ)) = α(Γ)/2, where both quantities are
defined in Section 11.2. Intuitively, this corresponds to the existence of an equivariant embedding of Γ into B which approaches infinity “as fast as possible”. It is
known [59, Theorem 1.3] that such an embedding cannot be quasi-isometric, but
this by itself does not imply the non-existence of a homomorphism with the desired
property.
Problem 4 (Cf. Chapter 11, Remark 11.2.14). Does there exist a strongly
discrete parabolic subgroup of Isom(H∞ ) isomorphic to the Heisenberg group which
has infinite Poincaré exponent? 1
Problem 5 (Cf. Chapter 12, Section 12.2). Is there any form of discreteness for which there exists a cobounded subgroup of Isom(H) (for example, UOT
discreteness)? If so, what is the strongest such form of discreteness?
Problem 6 (Cf. Chapter 17). Can Theorem 17.5.13 be improved as follows?
Conjecture. Let X be a hyperbolic metric space and let G ≤ Isom(X) be a
geometrically finite group such that for some p ∈ Λbp , the series (17.5.7) diverges.
Then the δ-quasiconformal measure µ is not exact dimensional.
What if some of the hypotheses of this conjecture are strengthened, e.g. X is
strongly hyperbolic (e.g. X = H∞ ), or G is a Schottky product of a parabolic group
with a lineal group?
1It has been pointed out to us by X. Xie that this question is answered affirmatively by [59,
Proposition 3.10], letting the function f in that proposition be any function whose growth is
sublinear.
APPENDIX B
Index of defined terms
See also Conventions 1-9 on pages xix, 21, and 59.
• acts irreducibly: Definition 7.6.1, p.122
• acts properly discontinuously: Definition 5.2.11, p.92
• acts reducibly: Definition 7.6.1, p.122
• algebraic hyperbolic space: Definition 2.2.5, p.7
• attracting fixed point : Definition 6.1.1, p.95
• ball model : §2.5.1, p.18
• bi-infinite geodesic: Definition 4.4.2, p.67
• BIM embedding: Definition 13.1.3, p.224
• BIM representation: Definition 13.1.3, p.224
• bordification: Definition 3.4.2, p.34
• ξ-bounded : Definition 12.3.1, p.201
• bounded parabolic point : Definition 12.3.4, p.202
• Busemann function: (3.3.5), p.31
• CAT(-1) inequality: (3.2.1), p.28
• CAT(-1) space: Definition 3.2.1, p.28
• Cayley graph: Example 3.1.2, p.24
• Cayley hyperbolic plane: Remark 2.1.1, p.3
• Cayley metric: Example 3.1.2, p.24
• center (of a triangle in an R-tree): Definition 3.1.11, p.26
• center (of a horoball): Definition 12.1.1, p.193
• cobounded : Definition 12.2.1, p.196
• codoubling (function): Definition 17.4.2, p.295
• convergence type: Definition 8.1.4, p.130
• compact-open topology (COT): p.83
• compact type, semigroup of : Definition 7.7.1, p.123
• comparison point : p.27, Definition 4.4.12, p.72
• comparison triangle: Example 3.1.9, p.25; Definition 4.4.12, p.72
• compatible (regarding a metametric and a topology): Definition 3.6.4, p.49
• complete set of inequivalent parabolic points: Definition 12.4.13, p.213
• cone: (14.1.1), p.233
• conformal measure: Definition 15.1.1, p.255
• conical convergence: p.109
• connected graph: Definition 3.1.1, p.23
• contractible cycles (property of a graph): Definition 14.2.1, p.237
• convex-cobounded : Definition 12.2.5, p.198
• convex hull : Definition 7.5.1, p.119
• convex : (7.5.1), p.119
• convex core: Definition 7.5.7, p.121
• cycle: (3.1.4), p.25
• Dirichlet domain: Definition 12.1.4, p.194
• divergence type: Definition 8.1.4, p.130
• domain of reflexivity: Definition 3.6.1, p.48
• doubling (metric space): Footnote 2, p.187
• doubling (function): Definition 17.4.2, p.295
• doubling (measure): Section 17.4, p.295
• dynamical derivative: Proposition 4.2.12, p.63
• Edelstein-type isometry: Definition 11.1.11, p.180
• elementary: Definition 7.3.2, p.114
• elliptic isometry: Definition 6.1.2, p.95
• elliptic semigroup: Definition 6.2.2, p.99
• ergodic: Definition 15.3.1, p.257
• equivalent (for Gromov sequences): Definition 3.4.1, p.34
• extended visual metric: Proposition 3.6.13, p.52
• fixed point (neutral/attracting/repelling): Definition 6.1.2, p.95
• fixed point (parabolic): Definition 6.2.7, p.100
• focal semigroup: Definition 6.2.13, p.101
• free group: Remark 10.1.1, p.158
• free product : Section 10.1, p.157
• free semigroup: Remark 10.1.1, p.158
• general type, semigroup of : Definition 6.2.13, p.101
• generalized convergence type: Definition 8.2.3, p.132
• generalized divergence type: Definition 8.2.3, p.132
• generalized polar coordinate functions: Definition 4.6.1, p.79
• geodesic metric space: Remark 3.1.5, p.24
• geodesic segment : Remark 3.1.5, p.24
• geodesic triangle: p.27, Definition 4.4.12, p.72
• geometric product : Example 14.5.10, p.249
• geodesic path: Section 14.2, p.237
• geodesic ray/line: Definition 4.4.2, p.67
• geometric realization: Definition 3.1.1, p.23
• geometric graph: Definition 3.1.1, p.23
• geometrically finite: Definition 12.4.1, p.206
• Gromov boundary: Definition 3.4.2, p.34
• Gromov hyperbolic: Definition 3.3.2, p.30
• Gromov’s inequality: (3.3.4), p.30
• Gromov product: (3.3.2), p.30
• Gromov sequence: Definition 3.4.1, p.34
• Gromov triple: Definition 4.1.1, p.59
• global fixed points: Notation 6.2.1, p.99
• growth rate: (11.2.2), p.184; Definition 17.4.6, p.299
• global Schottky product: Definition 10.2.1, p.158
• group of isometries: p.8
• Haagerup property: §11.1.1, p.176
• half-space: Remark 10.2.5, p.160
• half-space model : §2.5.2, p.19
• horoball : Definition 12.1.1, p.193
• horospherical convergence: Definition 7.1.3, p.111
• horospherical limit set: Definition 7.2.1, p.112
• hyperbolic: Definition 3.3.2, p.30
• hyperboloid model : §2.2, p.4
• implied constant: Convention 1, p.xix
• inward focal: Definition 6.2.15, p.102
• irreducible action: Definition 7.6.1, p.122
• isomorphism (between pairs (X, bord X) and (Y, bord Y )): p.14
• length spectrum: Remark 13.1.6, p.225
• limit set (of a semigroup): Definition 7.2.1, p.112
• limit set (of a partition structure): Definition 9.1.7, p.137
• lineal semigroup: Definition 6.2.13, p.101
• Lorentz boosts: (2.3.3), p.8
• lower central series: §11.2.1, p.183
• lower polynomial growth rate: Definition 17.4.6, p.299
• loxodromic isometry: Definition 6.1.2, p.95
• loxodromic semigroup: Definition 6.2.2, p.99
• Margulis’s lemma: Proposition 11.1.3, p.176
• metametric: Definition 3.6.1, p.48
• metric derivative: p.60, p.61
• moderately discrete (MD): Definition 5.2.1, p.87
• modified Poincaré exponent: Definition 8.2.3, p.132
• natural action: (on a Cayley graph) Remark 3.1.3, p.24
• natural map (from a free product): Section 10.1, p.157
• ρ-net: Footnote 3, p.133
• neutral fixed point: Definition 6.1.1, p.95
• nilpotent: §11.2.1, p.183
• nilpotency class: §11.2.1, p.183
• nonelementary: Definition 7.3.2, p.114
• orbital counting function: Remark 8.1.3, p.129
• outward focal: Definition 6.2.15, p.102
• parabolic isometry: Definition 6.1.2, p.95
• parabolic fixed point: Definition 6.2.7, p.100
• parabolic semigroup: Definition 6.2.2, p.99
• parameterization (of a geodesic): Remark 3.1.5, p.24
• partition structure: Definition 9.1.4, p.137
• path: Section 14.2, p.237
• path metric: Definition 3.1.1, p.23, (14.4.1), p.241
• Poincaré exponent: Definition 8.1.1, p.129
• Poincaré extension: Observation 2.5.6, p.20
• Poincaré integral: (8.2.1), p.131
• Poincaré regular/irregular : p.134
• Poincaré set: Notation 8.1.7, p.130
• Poincaré series: Definition 8.1.1, p.129
• polynomial growth rate: (11.2.2), p.184; Definition 17.4.6, p.299
• pre-doubling (parabolic group): Definition 17.4.9, p.301
• proper : Remark 1.1.3, p.xxii
• properly discontinuous (PrD): Definition 5.2.11, p.92
• pure Schottky product: Definition 14.5.7, p.247
• quasiconformal measure: Definition 15.1.1, p.255
• quasiconvex core: Definition 7.5.7, p.121
• quasi-isometry/quasi-isometric: Definition 3.3.9, p.33
• radial convergence: Definition 7.1.2, p.110
• radial limit set: Definition 7.2.1, p.112
• Radon: Remark 16.3.2, p.271
• rank (of an abelian group): §11.2.1, p.183
• reducible action: Definition 7.6.1, p.122
• regularly geodesic: Definition 4.4.5, p.67
• repelling fixed point : Definition 6.1.1, p.95
• Samuel–Smirnov compactification: Proposition 16.1.1, p.267
• Schottky group: Definition 10.2.4, p.159
• Schottky position: Definition 10.2.1, p.158
• Schottky product: Definition 10.2.1, p.158
• Schottky semigroup: Definition 10.2.4, p.159
• Schottky system: Definition 10.2.1, p.158
• ρ-separated set: Footnote 1, p.131
• sesquilinear form: p.3
• shadow : Definition 4.5.1, p.74
• similarity: Observation 2.5.6, p.20
• simplicial tree: Definition 3.1.7, p.25
• F-skew linear: (2.3.4), p.9
• skew-symmetric: p.3
• Standard Case: Convention 9, p.60
• standard parameterization: p.70
• stapled union: Definition 14.4.1, p.240
• strong operator topology (SOT): p.83
• strongly discrete (SD): Definition 5.2.1, p.87, Remark 8.2.5, p.133
• strongly (Gromov) hyperbolic: Definition 3.3.6, p.32
• strongly separated Schottky group/product/system: Definition 10.3.1, p.160
• substructure (of a partition structure): Definition 9.1.5, p.137
• s-thick: Definition 9.1.4, p.137
• topological discreteness: Definition 5.2.6, p.89
• totally geodesic subset: Definition 2.4.2, p.14
• tree, simplicial : Definition 3.1.7, p.25
• tree (on N): Definition 9.1.2, p.136
• R-tree: Definition 3.1.10, p.25
• Z-tree: Definition 3.1.7, p.25
• tree-geometric: Definition 14.5.2, p.245
• tree triangle: p.27
• Tychonoff topology: p.84
• uniform operator topology (UOT): p.83
• uniformly radial convergence: Definition 7.1.2, p.110
• uniformly radial limit set: Definition 7.2.1, p.112
• uniquely geodesic metric space: Remark 3.1.5, p.24
• unweighted simplicial tree: Definition 3.1.7, p.25
• upper polynomial growth rate: Definition 17.4.6, p.299
• virtually nilpotent: §11.2.1, p.183
• visual metric: p.50
• weakly discrete (WD): Definition 5.2.1, p.87
• weakly separated Schottky group/product/system: Definition 10.3.1, p.160
• weighted Cayley graph: Example 3.1.2, p.24
• weighted undirected graph: Definition 3.1.1, p.23
Computing the projected reachable set of switched
affine systems: an application to systems biology
arXiv:1705.00400v1 [] 1 May 2017
Francesca Parise, Maria Elena Valcher and John Lygeros
Abstract—A fundamental question in systems biology is what
combinations of mean and variance of the species present in
a stochastic biochemical reaction network are attainable by
perturbing the system with an external signal. To address this
question, we show that the moments evolution in any generic
network can be either approximated or, under suitable assumptions, computed exactly as the solution of a switched affine
system. Motivated by this application, we propose a new method
to approximate the reachable set of switched affine systems. A
remarkable feature of our approach is that it allows one to easily
compute projections of the reachable set for pairs of moments of
interest, without requiring the computation of the full reachable
set, which can be prohibitive for large networks. As a second
contribution, we also show how to select the external signal in
order to maximize the probability of reaching a target set. To
illustrate the method we study a renowned model of controlled gene
expression and we derive estimates of the reachable set, for the
protein mean and variance, that are more accurate than those
available in the literature and consistent with experimental data.
I. INTRODUCTION
One of the most impressive results achieved by synthetic
biology in the last decade is the introduction of externally
controllable modules in biochemical reaction networks. These
are biochemical circuits that react to external signals, as for
example light pulses [1], [2], [3] or concentration signals [4],
[5], allowing researchers to influence and possibly control the
behavior of cells in vivo. To fully exploit these tools, it is
important to first understand what range of behaviors they
can exhibit under different choices of the external signal. For
deterministic systems, this amounts to computing the set of
states that can be reached by the controlled system trajectories
starting from a known initial configuration [6], [7]. Since
chemical species are often present in low copy numbers
inside the cell, biochemical reaction networks can however be
inherently stochastic [8]. In other words, if we apply the same
signal to a population of identical cells, then every cell will
have a different evolution (with different likelihood), requiring
a probabilistic analysis.
If we interpret each cell as an independent realization, we
can then study the effect of the external signal on a population
of cells by characterizing how such a signal influences the
F. Parise is with the Laboratory for Information and Decision
Systems, MIT, Cambridge, MA: [email protected], M.E.
Valcher is with the Department of Information Engineering, University
of Padova, Italy: [email protected] and J. Lygeros is
with the Automatic Control Laboratory, ETH, Zurich, Switzerland:
[email protected]. We thank M. Khammash for
allowing us to perform the experiments in the CTSB Laboratory, ETH, and
J. Ruess and A.M. Argeitis for their help in collecting the data of Fig. 5a).
This work was supported by the SNSF grant number P2EZP2 168812.
moments of the underlying stochastic process. Specifically, in
this paper we pose the following question:
“What combinations of moments of the stochastic process
can be achieved by applying the external signal?”
This approach is motivated for example by biotechnology
applications, where one would like to control the average
behavior of the cells in large populations, instead of each
cell individually. More on the theoretical side, this perspective
can be useful to investigate fundamental questions on noise
suppression in biochemical reaction networks, as in [9].
The cornerstone of our approach is the observation that
while the number of copies in each cell is stochastic, the
evolution of the moments is deterministic and can either
be described or approximated by a switched affine system.
Consequently, the above question can be reformulated as a
reachability problem in the moment space. Computing the
exact reachable set of a switched affine system is in general
far from trivial, see [10], [11]. We thus start our analysis by
proposing a new method to approximate the reachable set of a
switched affine system. This is an extension of the hyperplane
method for linear systems suggested in [12] and is of interest
on its own. We then show how to apply the proposed approach
to biochemical reaction networks by distinguishing two cases:
1) If all the reactions follow the laws of mass action kinetics
and are at most of order one, the system of moments
equations is switched affine. Consequently, for this class
of networks, the above question can be solved by directly
applying the newly suggested hyperplane method in the
moments space;
2) For all other reaction networks the moments equations
are in general non-closed (i.e., the evolution of mean and
variance depends on higher order moments). We show
however that the evolution of the probability of being in
a given state can be described by an infinite dimensional
switched system and that the desired moments can be
computed as the output of such system. We then show:
i) How to approximate such an infinite dimensional
system with a finite dimensional one, by extending the
finite state projection method [13] to controllable networks, ii) How to compute the reachable set of the finite
dimensional system by applying the newly suggested
hyperplane method in the probability space, and iii) How
to recover an approximation of the original reachable set
from the reachable set of the finite dimensional system.
In the last part of the paper, we change perspective and,
instead of focusing on population properties, we consider the
behaviour of a single cell (i.e., a single realization of the
process), given a fixed initial condition or an initial probability
distribution. Such perspective has been commonly employed
for the case without external signals, see e.g. [14], [15], [16],
[13]. Our objective is to show how the external signal can be
used to control single cell realizations by posing the following
question
“What external signal should be applied to maximize the
probability that the cell trajectory reaches a prespecified
subset of the state space at the end of the experiment?”
We show that such a problem can be addressed by using
similar tools as those derived for the population analysis.
Comparison with the literature: A vast literature has been devoted to the analysis of the reachable set of piecewise-affine systems in the context of hybrid systems, see e.g. [10], [17], [18], [19], [20], [21], [22] among many. Our results
are different because we exploit the specific structure of the
problem at hand, that is, the fact that the switching signal is
a control variable and that the dynamics in each mode are
autonomous and affine. In other words, we consider switched
affine systems for which the switching signal is the only
control action. We also note that many different methods have
been proposed in the literature to compute the reachable set
of generic nonlinear systems. Among these there are level set
methods [23], ellipsoidal methods [24] and sensitivity based
methods [25]. For example, we became aware at the time
of submission that the authors of [26] extended our previous
works [27], [28] by suggesting the use of ellipsoidal methods.
It is important to stress that the choice of a method that
scales well with the system size is essential in our context,
since biochemical networks are typically very large. Moreover,
biologists are often interested in analyzing the behavior of
only a few chemical species of the possibly many involved
in the network. Consequently, one is usually interested in
computing the projection of the reachable set (which is a
high-dimensional object) on some low-dimensional space of
interest. The hyperplane method that we propose stands out
in this respect since, by using a method tailored for switched
systems, it allows one to compute directly the projections of
the reachable set, without requiring the computation of the
full high-dimensional reachable set first. We thus avoid the
curse of dimensionality that characterises all the previously
mentioned methods. We note that part of the results of this
paper appeared in our previous works [28], [29]. Specifically, in [28] we first suggested the use of the hyperplane method to compute the reachable set of biochemical networks with linear moment equations, which we then adapted in [29] to the case
of switched affine moment equations. As better detailed in
Section IV-A, the assumptions made both in [28] and [29]
do not allow for bimolecular reactions, which are instead
present in the vast majority of biochemical networks. The key
contribution of this paper is the generalisation of our analysis
to any biochemical network by using the approach described
in point 2) above. The analysis of single cell realizations is
also entirely new.
Outline: In Section II we present the hyperplane method.
In Section III-A we review how to compute the hyperplane
constants for linear systems, while in Section III-B we propose
a new procedure for switched affine systems. In Section IV
we introduce stochastic biochemical reaction networks and the
controlled chemical master equation (CME). Additionally, we
recap how to derive the moments equations from the CME
(Section IV-A) and we derive an extension of the finite state
projection method to controlled biochemical networks (Section
IV-B). In Section V we show how to compute the reachable
set of biochemical networks and in Section VI we derive the
results on single cell realizations. Section VII illustrates our
theoretical results on a gene expression case study.
Notation: Given a < b ∈ N, we set N[a, b] := {a, a + 1, . . . , b}. Given a set S, the symbol ∂S denotes its boundary, conv(S) its convex hull and |S| its cardinality. For a vector x ∈ R^n, x_p := [x]_p denotes its p-th component, |x| := [|x_1|, . . . , |x_n|]^T and ‖x‖_∞ := max_{p=1,...,n} |x_p| denotes the infinity norm. 1 denotes a vector of all ones. Given two random variables Z_1, Z_2, we denote by V[Z_1] and V[Z_1, Z_2] their variance and covariance, respectively.
II. REACHABILITY TOOLS
A. The reachable set and the hyperplane method
Consider the n-dimensional nonlinear control system
ẋ(t) = f(x(t), σ(t)),  t ≥ 0,   (1)
where x is the n-dimensional state and σ the m-dimensional
input function. Set a final time T > 0 and let S be the set of
admissible input functions that we assume to be a subset of
the set of all measurable functions that map [0, T ] into Rm .
We assume that the function f : Rn × Rm → Rn is such that,
for every initial condition x(0) ∈ Rn and every input function
σ ∈ S, the solution of (1), denoted by x(t; x(0), σ), t ≥ 0, is
well defined and unique at every time t ≥ 0. The reachable
set of system (1) at time T is defined as the set of all states
x ∈ Rn that can be reached at time T , starting from x(0), by
using an admissible input function σ ∈ S.
Definition 1 (Reachable set at time T ). The reachable set at
time T > 0 from x(0) = x0 , for system (1) with admissible
input set S, is
R_T(x_0) := {x ∈ R^n | ∃ σ ∈ S : x = x(T; x_0, σ)}.   (2)
From now on we will assume that the set RT (x0 ) is
compact, since this will be the case for all the systems of
interest analysed in the following. Computing such a reachable
set for nonlinear systems is in general a very difficult task. For
the case of linear systems with bounded inputs a method to
construct an outer approximation of RT (x0 ) as the intersection
of a family of half-spaces that are tangent to its boundary (see
Fig. 1) was proposed in [12].
We present here a generalisation of this method to system (1). For a given direction c ∈ R^n, let us define

v_T(c) := max_{x ∈ R_T(x_0)} c^T x,   (3)
where, for simplicity, we omitted the dependence of vT (c) on
the initial condition x0 . Let
H_T(c) := {x ∈ R^n | c^T x = v_T(c)}   (4)
Fig. 1. Illustration of the hyperplane method for a convex reachable set R_T(x_0) (in blue). The external parallelogram is the outer approximation, the region in between the dotted lines is the inner approximation.

be the corresponding hyperplane. By definition of the constant v_T(c), the associated half-space

H_T(c) := {x ∈ R^n | c^T x ≤ v_T(c)}   (5)

is a superset of R_T(x_0). We note that if ∂R_T(x_0) is smooth, then H_T(c) is the tangent plane to ∂R_T(x_0). By evaluating the above hyperplanes and half-spaces for various directions, one can construct an outer approximation of the reachable set, as illustrated in the next theorem. If the reachable set is convex then an inner approximation can also be derived.

Theorem 1 (The hyperplane method [12]). Given system (1), an initial condition x_0 ∈ R^n, a fixed time T > 0, an integer number D ≥ 2, and a set of D directions C := {c^1, . . . , c^D}, define the half-spaces H_T(c^d) as in (5), for d = 1, . . . , D.
1) The set
R_T^out(x_0) := ∩_{d=1}^D H_T(c^d)
is an outer approximation of the reachable set R_T(x_0) at time T starting from x_0.
2) If the set R_T(x_0) is convex and for each d = 1, 2, . . . , D, we select a (tangent) point
x_T^*(c^d) ∈ R_T(x_0) ∩ H_T(c^d)   (6)
then the set
R_T^in(x_0) := conv{x_T^*(c^d), d = 1, 2, . . . , D}
is an inner approximation of the reachable set R_T(x_0) at time T starting from x_0.

Remark 1. We note that by construction the outer approximation R_T^out(x_0) is a convex object. Specifically, when the number of hyperplanes tends to infinity R_T^out(x_0) coincides with the convex hull of R_T(x_0). Similarly, for any set R_T(x_0), the set R_T^in(x_0) is an inner approximation of the convex hull of R_T(x_0). However, the inner approximation of the convex hull of a set is an inner approximation of the set itself only if such set is convex, as assumed in the previous theorem.

The main advantage of this method is that hyperplanes are very easy objects to handle and visualise. The main disadvantage is that the higher the dimension n of the state space, the higher in general is the number of directions D required to obtain a good characterisation of the reachable set. In the next subsection we show how to avoid this curse of dimensionality, in cases when only the projection of the reachable set on a plane of interest is needed.

B. The output reachable set

Let the output of system (1) be

y(t) = Lx(t),   (7)

for L ∈ R^{p×n}, and the output reachable set be the set of all output values that can be generated at time T from x(0) = x_0, by using an admissible input function σ ∈ S.
Definition 2 (Output reachable set at time T). The output reachable set R_T^y(x_0) from x_0 at time T > 0, for system (1) with admissible input set S and output as in (7), is

R_T^y(x_0) := {y ∈ R^p | ∃ x ∈ R_T(x_0) : y = Lx}.
For simplicity, in the following we restrict our discussion to the case of a two-dimensional output vector, that is

y(t) = Lx(t) = [l_1^T x(t), l_2^T x(t)]^T ∈ R^2,   (8)

for some l_1, l_2 ∈ R^n; the generalization to higher dimensions is however immediate. Note that, for any pair of indices i, j ∈ {1, . . . , n}, i ≠ j, one can recover the projection of the reachable set R_T(x_0) onto an (x_i, x_j)-plane of interest by imposing l_1 = e_i and l_2 = e_j. The two-dimensional output vector case can therefore be applied to study the relation between the mean behavior of two species or between mean and variance of a single species in large biochemical networks.
In the following theorem we show that inner and outer
approximations of RyT (x0 ) can be efficiently computed by
selecting only hyperplanes that are perpendicular to the plane
of interest.
Theorem 2 (Projection on a two-dimensional subspace). Consider system (1), with output (8) and initial condition x_0 ∈ R^n. Let T > 0 be a fixed time, D ≥ 2 an integer number and choose D values γ^d ∈ R. Set c^d := l_2 − γ^d l_1 ∈ R^n and

H_T^y(γ^d) := {y ∈ R^2 | y_2 ≤ γ^d y_1 + v_T(c^d)},

where v_T(c^d) is as in (3). Set y_T^*(γ^d) := L x_T^*(c^d), where x_T^*(c^d) is defined as in (6). Then the set

R_T^{y,out}(x_0) := ∩_{d=1}^D H_T^y(γ^d)   (9)

is an outer approximation of R_T^y(x_0). Moreover, if R_T(x_0) is convex then the set

R_T^{y,in}(x_0) := conv{y_T^*(γ^d), d = 1, 2, . . . , D}   (10)

is an inner approximation of R_T^y(x_0).
Proof: By definition, for any ȳ ∈ R_T^y(x_0) there exists an x̄ ∈ R_T(x_0) such that ȳ^T = [l_1^T x̄, l_2^T x̄]. By Theorem 1, for any direction c^d it holds that R_T(x_0) ⊂ H_T(c^d). Consequently, x̄ ∈ R_T(x_0) implies x̄ ∈ H_T(c^d). By substituting the definition of c^d given in the statement we get

x̄ ∈ H_T(c^d) ⇔ (c^d)^T x̄ ≤ v_T(c^d) ⇔ (l_2 − γ^d l_1)^T x̄ ≤ v_T(c^d) ⇔ l_2^T x̄ ≤ γ^d l_1^T x̄ + v_T(c^d).

The last inequality implies ȳ^T = [l_1^T x̄, l_2^T x̄] ∈ H_T^y(γ^d). Consequently, R_T^y(x_0) ⊆ H_T^y(γ^d) for any γ^d and therefore R_T^y(x_0) ⊆ R_T^{y,out}(x_0). If R_T(x_0) is convex, then R_T^y(x_0) is convex as well. The points y_T^*(γ^d) belong to R_T^y(x_0) by construction. Consequently, by convexity, it must hold that R_T^{y,in}(x_0) ⊆ R_T^y(x_0).
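As a toy illustration of Theorems 1 and 2 (ours, not the paper's implementation), the sketch below assumes the support values v_T(c) in (3) are known exactly, taking a disk as a stand-in for the reachable set; in the paper v_T(c) must instead be computed from the dynamics, which is the subject of Section III. The membership test checks c^T x ≤ v_T(c) for every sampled direction.

```python
import numpy as np

def support_disk(c, center, radius):
    # Stand-in for (3): exact support value of a disk,
    # v_T(c) = c^T center + radius * ||c||.
    return float(c @ center + radius * np.linalg.norm(c))

def in_outer_approx(x, directions, values, tol=1e-9):
    # Theorem 1 membership test: x lies in the outer approximation iff
    # c^T x <= v_T(c) for every sampled direction c.
    return all(c @ x <= v + tol for c, v in zip(directions, values))

# Sample D directions on the unit circle and the corresponding support values.
D = 32
dirs = [np.array([np.cos(t), np.sin(t)])
        for t in np.linspace(0.0, 2 * np.pi, D, endpoint=False)]
center, radius = np.array([1.0, 0.5]), 1.0
vals = [support_disk(c, center, radius) for c in dirs]

print(in_outer_approx(center, dirs, vals))                         # True
print(in_outer_approx(center + np.array([1.5, 0.0]), dirs, vals))  # False
```

With 32 directions the intersection of half-spaces is a circumscribed polygon only slightly larger than the disk, so a point well outside the disk is correctly rejected.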
III. COMPUTING THE TANGENT HYPERPLANES

The success of the hyperplane method hinges on the possibility of efficiently evaluating, for any given direction c, the constant v_T(c) in (3). Note that this problem is equivalent to the following finite time optimal control problem

v_T(c) := max_{σ ∈ S} c^T x(T)   (11)
s.t. ẋ(t) = f(x(t), σ(t)), ∀t ∈ [0, T],
     x(0) = x_0.

In the rest of this section, we aim at solving (11). To this end, we start by recalling the linear case, for which the hyperplane method was originally derived in [12].
A. Linear systems with bounded input
The hyperplane method was originally proposed for linear
systems with bounded inputs
ẋ(t) = Ax(t) + Bσ(t),   (12)

where x(t) ∈ R^n, A ∈ R^{n×n}, B ∈ R^{n×m} and σ(t) ∈ R^m.
Since biological signals are non-negative and bounded, we
here make the following assumption on the input set S.
Assumption 1. The input function σ belongs to the admissible set S_Σ := {σ | σ(t) ∈ Σ, ∀t ∈ [0, T]}, where Σ = Σ_1 × . . . × Σ_m. Moreover, there exist σ̄_r > 0, r ∈ N[1, m], such that either (a) every set Σ_r is the interval Σ_r^c := [0, σ̄_r] (continuous and bounded input set), or (b) for every set Σ_r there exists 2 ≤ q_r < +∞ such that Σ_r^d := {0 = σ_r^1 < σ_r^2 < . . . < σ_r^{q_r} = σ̄_r} ⊂ R≥0 (finite input set). We set Σ^c := Σ_1^c × . . . × Σ_m^c, Σ^d := Σ_1^d × . . . × Σ_m^d, and denote by S_{Σ^c} and S_{Σ^d} the corresponding admissible sets.
In the case of a continuous and bounded input set, i.e. under
Assumption 1-(a), it was shown in [12] that it is possible to
solve the control problem in (11) in closed form by using the
Maximum Principle [30].
Proposition 1 (Tangent hyperplanes for linear systems with bounded and continuous inputs). Consider system (12) and suppose that Assumption 1-(a) holds. Define the following admissible input function, expressed component-wise for every r-th entry, r = 1, . . . , m, as

σ_r^*(t) := σ̄_r,            if c^T e^{A(T−t)} b_r > 0;
            0,              if c^T e^{A(T−t)} b_r < 0;
            any 0 ≤ σ_r ≤ σ̄_r, if c^T e^{A(T−t)} b_r = 0;   (13)

where b_r denotes the r-th column of B. Then

v_T(c) = c^T e^{AT} x_0 + Σ_{r=1}^m σ̄_r ∫_0^T [c^T e^{A(T−t)} b_r]_+ dt,   (14)

where [g(t)]_+ denotes the positive part of the function, namely [g(t)]_+ = g(t) when g(t) > 0 and zero otherwise. Suppose additionally that the pair (A, b_r) is reachable, for every r ∈ N[1, m]. Then there exists no interval [τ_1, τ_2], with 0 ≤ τ_1 < τ_2 ≤ T, such that c^T e^{A(T−t)} b_r = 0 for every t ∈ [τ_1, τ_2]. Consequently, a tangent point can be obtained as

x_T^*(c) := e^{AT} x_0 + ∫_0^T e^{A(T−t)} B σ^*(t) dt.   (15)
The proof follows the same lines as [12, Lemma 2.1 and
Theorem 2.1] and is omitted for the sake of brevity.
By using the explicit characterisation given in Proposition 1
together with Theorems 1 and 2, one can efficiently construct
both an inner and an outer approximation of the (output)
reachable set for linear systems with continuous and bounded
input set Σc , as summarised in the next corollary. Therein we
also show how the same result can be extended to finite input
sets Σd .
Corollary 1 (The hyperplane method for linear systems). Consider system (12) and suppose that either Assumption 1-(a) or Assumption 1-(b) holds. Let v_T(c^d) and x_T^*(c^d) be computed as in (14) and (15). Then R_T^out(x_0) and R_T^in(x_0) (R_T^{y,out}(x_0) and R_T^{y,in}(x_0), resp.) as defined in Theorem 1 (Theorem 2, resp.) are outer and inner approximations of R_T(x_0) (of R_T^y(x_0), resp.).
Proof: In the case of continuous and bounded input, that is, under Assumption 1-(a), the reachable set R_T(x_0) is convex and the statement is a trivial consequence of Theorems 1 and 2 and Proposition 1. We here show that the same result holds also under Assumption 1-(b). The proof of this second part follows from the fact that the reachable set R_T^c(x_0), obtained by using the continuous input set Σ^c, and the reachable set R_T^d(x_0), obtained by using the discrete input set Σ^d, coincide. To prove this, let R_T^bb(x_0) be the reachable set obtained using Σ_r^bb := {0, σ̄_r} for any r, that is, the set of vertices of Σ^c. Consider now an arbitrary point x̄ ∈ R_T^c(x_0), which is a compact set. By definition there exists an admissible input function in Σ^c that steers x_0 to x̄ in time T. Since Σ^c is a convex polyhedron, by [31, Theorem 8.1.2], system (12) with input set Σ^c has the bang-bang with bound on the number of switchings (BBNS) property. That is, for each x̄ ∈ R_T^c(x_0) there exists a bang-bang input function in Σ^bb that reaches x̄ in the same time T with a finite number of discontinuities. Thus x̄ ∈ R_T^bb(x_0). Since this is true for any x̄ ∈ R_T^c(x_0), we get R_T^c(x_0) ⊆ R_T^bb(x_0). From Σ^bb ⊆ Σ^d ⊆ Σ^c we get R_T^bb(x_0) ⊆ R_T^d(x_0) ⊆ R_T^c(x_0), concluding the proof.
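To make formula (14) concrete, the following sketch (an illustrative implementation of ours, with the trapezoidal quadrature being our choice, not the paper's) evaluates v_T(c) for a linear system by numerically integrating the positive part [c^T e^{A(T−t)} b_r]_+. For the scalar test system ẋ = −x + σ, σ ∈ [0, 1], x_0 = 0, the closed form is v_T = 1 − e^{−T}, which the quadrature reproduces.

```python
import numpy as np
from scipy.linalg import expm

def v_T_linear(c, A, B, x0, sigma_bar, T, steps=2000):
    # Evaluates eq. (14): v_T(c) = c^T e^{AT} x0
    #   + sum_r sigma_bar_r * int_0^T [c^T e^{A(T-t)} b_r]_+ dt,
    # with the time integral computed by the trapezoidal rule.
    ts = np.linspace(0.0, T, steps + 1)
    val = float(c @ expm(A * T) @ x0)
    for r in range(B.shape[1]):
        g = np.array([float(c @ expm(A * (T - t)) @ B[:, r]) for t in ts])
        gp = np.maximum(g, 0.0)  # positive part [.]_+
        val += sigma_bar[r] * float(np.sum(0.5 * (gp[1:] + gp[:-1]) * np.diff(ts)))
    return val

# Scalar check: xdot = -x + sigma, sigma in [0, 1], x0 = 0, T = 1.
v = v_T_linear(np.array([1.0]), np.array([[-1.0]]), np.array([[1.0]]),
               np.array([0.0]), [1.0], T=1.0)
print(abs(v - (1.0 - np.exp(-1.0))) < 1e-6)  # True
```

The tangent point (15) could be recovered analogously by integrating e^{A(T−t)} B σ*(t) with the bang-bang input (13).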
B. Switched affine systems
In this section, we propose an extension of the hyperplane
method to the case of a switched affine system of the form
ẋ(t) = A_{σ(t)} x(t) + b_{σ(t)},   (16)
where the switching signal σ(t) ∈ N[1, I] is the input
function, I ≥ 2 is the number of modes, x(t) ∈ Rn and
Ai ∈ Rn×n , bi ∈ Rn for all i ∈ N[1, I]. We make the
following assumption.
Assumption 2. The switching signal σ(t) switches K times within the finite set N[1, I] at fixed switching instants 0 = t_0 < . . . < t_{K+1} = T, that is, σ ∈ S_I^K, where

S_I^K := {σ | σ(t) = i_k ∈ N[1, I], ∀t ∈ [t_k, t_{k+1}), k ∈ N[0, K]}.

For every k ∈ N[0, K] and i ∈ N[1, I] we define Ā_i^k := e^{A_i(t_{k+1}−t_k)} and b̄_i^k := [∫_0^{t_{k+1}−t_k} e^{A_i τ} dτ] b_i. Moreover, we set x_k := x(t_k). Note that under Assumption 2 the reachable set of system (16) consists of a finite number of points that can be computed by solving the state equations for each possible switching signal. Since the cardinality of the set S_I^K grows exponentially with K, this approach is however computationally infeasible even for small systems. We here show that, on the other hand, the hyperplane constants defined in (11) can be computed by solving a mixed integer linear program (MILP), thus allowing us to exploit the sophisticated software that has been developed to solve large MILPs in the last years.

Proposition 2 (Tangent hyperplanes for switched affine systems). Consider system (16) and suppose that Assumption 2 holds. Take a vector M ∈ R^n such that M ≥ |x_k| component-wise for all k ∈ N[0, K]. Then

v_T(c) = max_{x_k, z_i^k, γ_i^k} c^T x_{K+1}   (17)
s.t. z_i^{k+1} ≤ (Ā_i^k x_k + b̄_i^k) + M(1 − γ_i^k),
     z_i^{k+1} ≥ (Ā_i^k x_k + b̄_i^k) − M(1 − γ_i^k),
     z_i^{k+1} ≥ −M γ_i^k,  z_i^{k+1} ≤ M γ_i^k,
     z_i^k ∈ R^n, ∀k ∈ N[1, K+1], ∀i ∈ N[1, I],
     γ_i^k ∈ {0, 1}, ∀k ∈ N[0, K], ∀i ∈ N[1, I],
     x_k = Σ_{i=1}^I z_i^k ∈ R^n, ∀k ∈ N[1, K+1],
     Σ_{i=1}^I γ_i^k = 1, ∀k ∈ N[0, K],
     x_0 ∈ R^n assigned.

Proof: To prove the statement we follow a procedure similar to the one in [32, Section IV.A]. Under Assumption 2 the switching signal σ(t) is such that σ(t) = i_k, ∀t ∈ [t_k, t_{k+1}), ∀k ∈ N[0, K]. Therefore, the finite time optimal control problem in (11) can be rewritten as

v_T(c) = max_{i_k ∈ {1,...,I}} c^T x_{K+1}   (18)
s.t. x_{k+1} = Ā_{i_k}^k x_k + b̄_{i_k}^k, ∀k ∈ N[0, K],
     x_0 ∈ R^n assigned.

Let us introduce the binary variables γ_i^k ∈ {0, 1} defined so that, for each i ∈ N[1, I] and k ∈ N[0, K], γ_i^k = 1 if and only if the system is in mode i in the time interval [t_k, t_{k+1}). Moreover, let us introduce a copy of the state vector for each possible update of the system in each possible mode: z_i^{k+1} = (Ā_i^k x_k + b̄_i^k)γ_i^k. Then (18) is equivalent to the following optimisation problem

v_T(c) := max_{x_k, z_i^k, γ_i^k} c^T x_{K+1}   (19)
s.t. z_i^{k+1} = (Ā_i^k x_k + b̄_i^k)γ_i^k, ∀i ∈ N[1, I],
     Σ_{i=1}^I γ_i^k = 1, ∀k ∈ N[0, K],
     x_k = Σ_{i=1}^I z_i^k, ∀k ∈ N[1, K+1],
     x_0 ∈ R^n assigned.

Finally, by using the big-M method in [33, Eq. (5b)], the first equality constraint in the optimization problem (19) can be equivalently replaced by

z_i^{k+1} ≤ (Ā_i^k x_k + b̄_i^k) + M(1 − γ_i^k),
z_i^{k+1} ≥ (Ā_i^k x_k + b̄_i^k) − M(1 − γ_i^k),
z_i^{k+1} ≥ −M γ_i^k,  z_i^{k+1} ≤ M γ_i^k,

leading to the equivalent reformulation given in (17).

We summarize our results on the hyperplane method for switched affine systems in the next corollary, which is an immediate consequence of Proposition 2 and Theorems 1, 2.

Corollary 2 (The hyperplane method for switched affine systems). Given system (16), let x_0 ∈ R^n be the initial state and suppose that Assumption 2 holds. Let v_T(c^d) be computed as in (17). Then R_T^out(x_0) and R_T^{y,out}(x_0) as defined in Theorems 1 and 2 are outer approximations of R_T(x_0) and R_T^y(x_0), respectively.

Note that in the case of switched affine systems it is not possible to recover an inner approximation, since there is no guarantee in general that the reachable set is convex. By computing the convex hull of the points x_{K+1} in (17) for each direction c one could however recover an inner approximation of the convex hull of R_T(x_0).

IV. CONTROLLED STOCHASTIC BIOCHEMICAL REACTION NETWORKS

A biochemical reaction network is a system comprising S molecular species Z_1, ..., Z_S that interact through R reactions. Let Z(t) = [Z_1(t), ..., Z_S(t)]^T be the vector describing the number of molecules present in the network for each species at time t, that is, the state of the network at time t. Since each reaction r is a stochastic event [8], Z(t) is a stochastic process. In the following, we always use the upper case to denote a process and the lower case to denote its realizations. For example, z = [z_1, ..., z_S]^T denotes a particular realization of the state Z(t) of the stochastic process at time t.

A typical reaction r ∈ N[1, R] can be expressed as

ν'_{1r} Z_1 + . . . + ν'_{Sr} Z_S → ν''_{1r} Z_1 + . . . + ν''_{Sr} Z_S,   (20)

where ν'_{1r}, . . . , ν'_{Sr} ∈ N and ν''_{1r}, . . . , ν''_{Sr} ∈ N are the coefficients that determine how many molecules for each species are respectively consumed and produced by the reaction. The net effect of each reaction can thus be summarized with the stoichiometric vector ν_r ∈ Z^S, whose components are ν''_{sr} − ν'_{sr} for s = 1, . . . , S. We say that a reaction is of order k if it involves k reactant units (i.e., Σ_{s=1}^S ν'_{sr} = k) and we distinguish two classes of reactions:
- uncontrolled reactions that happen, in the infinitesimal interval [t, t + dt], with probability

α_r(θ_r, z)dt := θ_r · h_r(z) · dt,   (21)

where h_r(z) is a given function of the available molecules z and θ_r ∈ R≥0 is the so-called rate parameter;
- controlled reactions for which there exists an external signal u_r(t) such that the reaction fires at time t with probability

u_r(t) · α_r(θ_r, z)dt.   (22)

In the following we refer to α_r(θ_r, z) as the propensity of the reaction and without loss of generality we assume that the controlled reactions are the first Q ones. If h_r(z) := Π_{s=1}^S (z_s choose ν'_{sr}) we say that reaction r follows the laws of mass action kinetics as derived in [8]. Our analysis can however be applied to generic functions h_r(z), allowing us to model different types of kinetics, as the Michaelis-Menten [34, Section 7.3].

To illustrate the following results, we consider a model of gene expression as running example.

Example 1 (Gene expression reaction network). Consider a biochemical network consisting of two species, the mRNA (M) and the corresponding protein (P), and the following reactions

∅ −α_1(k_r, z)→ M,   M −α_2(γ_r, z)→ ∅,
M −α_3(k_p, z)→ M + P,   P −α_4(γ_p, z)→ ∅,

where the parameters k_r and k_p are the mRNA and protein production rates, while γ_r and γ_p are the mRNA and protein degradation rates, respectively. The empty set notation is used whenever a certain species is produced or degrades without involving the other species. In this context, Z = [M, P]^T, z = [m, p]^T, θ = [θ_1, θ_2, θ_3, θ_4]^T := [k_r, γ_r, k_p, γ_p]^T and the stoichiometric matrix is

ν := [ν_1, ν_2, ν_3, ν_4] = [ 1  −1  0   0
                              0   0  1  −1 ].

In the case of mass action kinetics the propensities α_r(θ_r, z) can be further specified as α_1(k_r, z) = k_r, α_2(γ_r, z) = γ_r · m, α_3(k_p, z) = k_p · m, α_4(γ_p, z) = γ_p · p.

Note that since the propensity of each reaction depends only on the current state of the system, the process Z(t) is Markovian. Let p(t, z) := P[Z(t) = z] be the probability that the realization of the process Z at time t is z. Following the same procedure as in [8] one can derive a set of equations, known as chemical master equation (CME), describing the evolution of p(z, t) as a function of the external signal u(t)

ṗ(z, t) = Σ_{r=1}^Q [p(z − ν_r, t) α_r(θ_r, z − ν_r) − p(z, t) α_r(θ_r, z)] u_r(t)
        + Σ_{r=Q+1}^R [p(z − ν_r, t) α_r(θ_r, z − ν_r) − p(z, t) α_r(θ_r, z)], ∀z ∈ N^S.   (23)

Since the previous set of equations depends on the external signal u we refer to it as the controlled CME. Typical biochemical reaction networks involve many different species, whose counts can theoretically grow unbounded. Consequently, the controlled CME in (23) is a system of infinitely many coupled ordinary differential equations that cannot be solved, even for very simple systems. Several analytical and computational methods have been proposed in the literature to circumvent this difficulty, see [34], [35], [36] for a comprehensive review. In the following we limit our discussion to two methods: moment equations [37] and finite state projection (FSP) [13].

A. The moment equations

We start by considering the case when all the reactions follow the laws of mass action kinetics and are at most of order one. In this case for each reaction r the propensity h_r(z) is affine in the molecule counts vector z and one can show that the moments equations are closed (i.e., the dynamics of moments up to any order k do not depend on higher order moments), see for example [38]. Let x_{≤2}(t) be a vector whose components are the moments of Z(t) up to second order. From [38, Equations (6) and (7)] one gets

ẋ_{≤2}(t) = A(u(t)) x_{≤2}(t) + b(u(t)).   (24)

Example 2. Consider the gene expression model of Example 1. Assume that the reactions follow the mass action kinetics and that an external input signal influencing the first reaction, that is the mRNA production, is available (as in [1], [2], [3], [4], [5]), so that α_1(k_r, z) := k_r · u(t). Set

x_{≤2} := [E[M], E[P], V[M], V[M, P], V[P]]^T.

Then the moments evolution over time is expressed as

ẋ_{≤2}(t) = A x_{≤2}(t) + B u(t),   (25)

where

A = [ −γ_r    0      0        0         0
       k_p  −γ_p     0        0         0
       γ_r    0    −2γ_r      0         0
        0     0     k_p   −(γ_r+γ_p)    0
       k_p   γ_p     0       2k_p     −2γ_p ],

B = [k_r, 0, k_r, 0, 0]^T.
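As a sanity check on the matrices of Example 2 (a sketch of ours, with hypothetical rate values chosen only for the check, and u ≡ 1), the stationary moments solve A x + B = 0. Mass action theory predicts E[M] = k_r/γ_r with Poissonian mRNA statistics, V[M] = E[M], which the solve below reproduces.

```python
import numpy as np

# Hypothetical rate values for the gene expression model of Examples 1-2.
kr, gr, kp, gp = 2.0, 1.0, 3.0, 0.5

# A and B as in Example 2, state x = [E[M], E[P], V[M], V[M,P], V[P]].
A = np.array([
    [-gr,  0.0,  0.0,       0.0,    0.0],
    [ kp, -gp,   0.0,       0.0,    0.0],
    [ gr,  0.0, -2.0 * gr,  0.0,    0.0],
    [0.0,  0.0,  kp, -(gr + gp),    0.0],
    [ kp,  gp,   0.0,  2.0 * kp, -2.0 * gp],
])
B = np.array([kr, 0.0, kr, 0.0, 0.0])

# Stationary moments for constant input u = 1: solve A x + B = 0.
x_ss = np.linalg.solve(-A, B)
print(np.isclose(x_ss[0], kr / gr))   # E[M] = kr/gr
print(np.isclose(x_ss[2], x_ss[0]))   # V[M] = E[M] (Poissonian mRNA)
```

The same solve gives E[P] = k_p k_r/(γ_r γ_p), consistent with the second row of A.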
Since the input u(t) may appear in the entries of the A
matrix, the moment equations (24) are in general nonlinear.
To overcome this issue we introduce the following assumption
on the external signal u(t).
Assumption 3. The external signal u(t) can switch at most
K times within the set Σd , as defined in Assumption 1, at
preassigned switching instants 0 = t0 < . . . < tK+1 = T .
Assumption 3 imposes that the number of switchings and their timing during a given experiment are fixed a priori.
This assumption can be motivated by the fact that changes
in the external stimulus are costly and/or stressful for the
cells. Moreover, it is trivially satisfied if the stimulus can
only be changed simultaneously with some fixed events, such
as culture dilution or measurements. The great advantage of
Assumption 3 is that, as illustrated in the following remark, it
allows us to rewrite the nonlinear moment equations (24) as a
switched affine system so that the theoretical tools described
in Section III-B can be applied.
Remark 2. The set Σ^d has finite cardinality I := Π_{r=1}^m q_r and we can enumerate its elements as u^i, i ∈ N[1, I].
Consequently, for any fixed external signal u(t) satisfying
Assumption 3 we can construct a sequence of indices in N[1, I]
such that, at any time t, σ(t) = i if and only if u(t) = ui .
Such switching sequence σ satisfies Assumption 2.
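The enumeration of Remark 2 is straightforward to implement. The sketch below (with a hypothetical two-channel input, q1 = q2 = 2) builds the I = q1 · · · qm modes as a Cartesian product and maps an input value back to its switching index:

```python
from itertools import product

# Enumerate Sigma_d = Sigma_d,1 x ... x Sigma_d,m as in Remark 2.
# Hypothetical example: m = 2 channels, each with q_r = 2 admissible values.
channel_values = [(0.0, 1.0), (0.5, 1.0)]

modes = list(product(*channel_values))   # u^1, ..., u^I
I = len(modes)                           # I = q_1 * ... * q_m = 4

def sigma_of(u):
    """Index i (1-based) such that u = u^i, i.e. sigma(t) = i iff u(t) = u^i."""
    return modes.index(tuple(u)) + 1
```

Any input signal satisfying Assumption 3 then induces a switching sequence sigma_of(u(t)) that satisfies Assumption 2.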
B. The finite state projection
Let us introduce a total ordering {z^j}_{j=1}^∞ in the set of all possible state realizations z ∈ N^S. For the system in Example 1, we could for instance use the mapping

z¹ = (0, 0), z² = (1, 0), z³ = (0, 1), z⁴ = (2, 0),
z⁵ = (1, 1), z⁶ = (0, 2), z⁷ = (3, 0), z⁸ = (2, 1), . . .
where (m, p) denotes the state with m mRNA copies and p
proteins (see Fig. 2).
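The ordering above is a diagonal enumeration of N² (states listed by increasing total copy number m + p, and by decreasing mRNA count m within each diagonal); it can be generated programmatically:

```python
def state(j):
    """Return z^j = (m, p) for the 1-based index j under the diagonal
    ordering: increasing total count m + p, decreasing m within a diagonal."""
    total, count = 0, 0
    while True:
        for m in range(total, -1, -1):
            count += 1
            if count == j:
                return (m, total - m)
        total += 1
```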
Fig. 2. State space for the gene expression system of Example 1.

Following the same steps as in [13] and setting¹ Pj(t) := p(z^j, t), the controlled CME in (23) can be rewritten as the nonlinear infinite dimensional system

Ṗ(t) = F(u(t)) P(t),   (26)

where P(t) is an infinite dimensional vector with entries in [0, 1]. If the signal u(t) satisfies Assumption 3, then (26) can be rewritten as an infinite dimensional linear switched system

Ṗ(t) = Fσ(t) P(t),   (27)

with switching signal σ(t) constructed from u(t) as detailed in Remark 2, I = Π_{r=1}^m q_r modes and matrices Fi := F(u^i). Note that system (27) can also be thought of as a Markov chain with countably many states z^j ∈ N^S and time-varying transition matrix Fσ(t).

As in the FSP method for the uncontrolled CME [13], one can try to approximate the behavior of the infinite Markov chain in (27) by constructing a reduced Markov chain that keeps track of the probability of visiting only the states indexed in a suitable set J. To this end, let us define the reduced order system

P̄̇J(t) = [Fσ(t)]J P̄J(t),   P̄J(0) = PJ(0),   (28)

where PJ(0) is the subvector of P(0) corresponding to the indices in J, and [F]J denotes the submatrix of F obtained by selecting only the rows and columns with indices in J. Note that while the full matrix Fσ(t) is stochastic, the reduced matrix [Fσ(t)]J is substochastic. Consequently, the probability mass is in general not preserved in (28) (i.e. 1⊤ P̄J(t) may decrease with time). From now on, we denote by P(T; σ) and P̄J(T; σ) the solutions at time T of system (27) and system (28), respectively, when the switching signal σ is applied. The dependence on the initial conditions P(0) and PJ(0) is omitted to keep the notation compact. As in the uncontrolled case, the truncated system (28) is a good approximation of the original system (27) if most of the probability mass lies in J. In the controlled case, however, we need to guarantee that this happens for all possible switching signals. This intuition is formalized in the following assumption.

Assumption 4. For a given finite set of state indices J, an initial condition PJ(0), a given tolerance ε > 0 and a finite instant T > 0,

1⊤ P̄J(T; σ) ≥ 1 − ε,   ∀σ ∈ S_I^K.   (29)

Note that Assumption 4 holds if and only if

1 − ε ≤ min_{σ ∈ S_I^K} 1⊤ P̄J(T; σ)
        s.t. P̄̇J(t; σ) = [Fσ(t)]J P̄J(t; σ),  P̄J(0) = PJ(0).

This problem has the same structure as (11). Therefore, as illustrated in Section III-B, Assumption 4 can be checked by solving the MILP (17) for the switched affine system (28) by setting c = 1 and M = 1. Under Assumption 4, the following relation between the solutions of (27) and (28) holds.

¹ Not to be confused with the symbol used to denote the amount of protein.

Proposition 3 (FSP for controlled CME). If Assumptions 2 and 4 hold, then for every switching signal σ ∈ S_I^K, it holds

Pj(T; σ) ≥ P̄j(T; σ),   ∀j ∈ J,
‖PJ(T; σ) − P̄J(T; σ)‖₁ ≤ ε.
Proof: This result has been proven in [13] for linear
systems. We extend it here to the case of switched systems
with K switchings. Note that for any i ∈ N[1, I], Fi := F (ui )
has non-negative off diagonal elements [13]. Hence, using the
same argument as in [13, Theorem 2.1] it can be shown that
for any index set J and any τ ≥ 0,

[exp(Fi τ)]J ≥ exp([Fi]J τ) ≥ 0,   ∀i ∈ N[1, I].
Consider an arbitrary switching signal σ ∈ S_I^K. We have

PJ(T; σ) = [Π_{k=0}^K exp(Fik (tk+1 − tk)) · P(0)]J
         ≥ Π_{k=0}^K [exp(Fik (tk+1 − tk))]J · PJ(0)                 (30)
         ≥ Π_{k=0}^K exp([Fik]J (tk+1 − tk)) · PJ(0) = P̄J(T; σ).

Moreover, from 1 = Σ_{j=1}^∞ Pj(T; σ) ≥ Σ_{j∈J} Pj(T; σ) = 1⊤ PJ(T; σ) and Assumption 4, we get

1⊤ P̄J(T; σ) ≥ 1 − ε ≥ 1⊤ PJ(T; σ) − ε.   (31)

Combining (30) and (31) yields 0 ≤ 1⊤ PJ(T; σ) − 1⊤ P̄J(T; σ) ≤ ε, thus ‖PJ(T; σ) − P̄J(T; σ)‖₁ ≤ ε.
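The content of Proposition 3 can be illustrated on a toy birth-death chain (this is not the gene expression network; the rates are hypothetical, and the "full" chain is itself cut off at a size the probability mass never reaches). The truncated generator keeps the diagonal out-flow at the boundary of J, so it is substochastic: the truncated solution loses mass, yet remains an entrywise lower bound on the full solution and satisfies the ℓ₁ bound of the proposition.

```python
# Illustration of Proposition 3 on a toy birth-death chain: birth rate
# lam, death rate mu*i.  The FSP keeps J = {0,...,nJ-1}; its generator
# retains the diagonal out-flow at the boundary and is substochastic,
# so P_bar loses mass yet lower-bounds the full solution entrywise.
lam, mu = 1.0, 0.5
T, steps = 2.0, 20_000
dt = T / steps
N, nJ = 60, 8        # "full" chain size (proxy for infinite) and truncation

def propagate(n, truncated):
    """Forward Euler for P' = F P on states 0..n-1, started in state 0.
    truncated=True keeps the boundary out-flow (substochastic FSP matrix);
    truncated=False suppresses it, so that mass is conserved."""
    P = [0.0] * n
    P[0] = 1.0
    for _ in range(steps):
        Q, P = P, []
        for i in range(n):
            birth_in = lam * Q[i - 1] if i > 0 else 0.0
            death_in = mu * (i + 1) * Q[i + 1] if i < n - 1 else 0.0
            birth_out = lam if (i < n - 1 or truncated) else 0.0
            P.append(Q[i] + dt * (birth_in + death_in
                                  - (birth_out + mu * i) * Q[i]))
    return P

P_full = propagate(N, truncated=False)   # proxy for the full system (27)
P_bar = propagate(nJ, truncated=True)    # FSP system (28)
eps = 1.0 - sum(P_bar)                   # probability mass lost in J
```

Here eps plays the role of ε in Assumption 4, and one can check numerically that P_full dominates P_bar entrywise on J with ℓ₁ gap at most eps.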
V. ANALYSIS OF THE REACHABLE SET
We here show how the reachability tools of Sections II
and III can be applied to the moment equation and FSP
reformulations derived in Sections IV-A and IV-B, under
different assumptions. Fig. 3 presents a conceptual scheme
of this section.
Fig. 3. Conceptual scheme for the reachable set analysis of biochemical networks.
A. Reachable set of networks with affine propensities via
moment equations
The methods developed in Sections II and III can be applied
to the moments equations in (24) to approximate the desired
projected reachable set. To illustrate the proposed procedure,
we distinguish two cases depending on whether the external
signal u(t) influences reactions of order zero or one.
1) Linear moments equations: We start by considering the case when all and only the reactions of order zero are controlled, so that hr(z) = 1 for r ∈ N[1, Q] and hr(z) = νr0⊤ z for r ∈ N[Q + 1, R]. This is the simplest scenario, since the system of moment equations given in (24) becomes linear:

ẋ≤2(t) = A x≤2(t) + B u(t),   (32)
see [38, Equations (6) and (7)]. Consequently, the theoretical
results of Section III-A can be applied to (32) by setting
σ(t) ≡ u(t). If the external signal u ≡ σ satisfies Assumption
1, both inner and outer approximations of the reachable set
can be computed by using Corollary 1.
2) Switched affine moments equations: If reactions of order
one are controlled then the external input u(t) appears also in
the entries of the A matrix and system (24) is nonlinear. To
overcome this issue we exploit Assumption 3. Specifically, let
σ(t) be the switching signal associated with u(t) as described
in Remark 2. Then (24) can be equivalently rewritten as the
switched affine system
ẋ≤2 (t) = Aσ(t) x≤2 (t) + bσ(t) ,
(33)
with matrices Ai := A(ui ), bi := b(ui ), for all i ∈ N[1, I].
Consequently, the theoretical results of Section III-B can be
applied to (33) and an outer approximation of the reachable
set can be computed by using Corollary 2.
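Under Assumption 3 the state of a switched affine system at time T is determined by the mode applied on each of the K + 1 intervals, so for small I and K the reachable set can be computed exactly by enumerating all I^(K+1) mode sequences; this is the brute-force alternative to the MILP-based hyperplane method (Section VII reports how quickly it becomes impractical). A scalar sketch with hypothetical modes:

```python
from itertools import product
from math import exp

# Exact brute-force reachable set of a scalar switched affine system
# x' = a_i x + b_i under Assumption 3: one mode per interval, K switches,
# hence I**(K+1) candidate mode sequences.  Modes are hypothetical.
modes = [(-1.0, 1.0), (-0.2, 0.0)]   # (a_i, b_i)
K, Delta = 3, 0.5                    # horizon T = (K+1)*Delta = 2

def step(x, a, b, h):
    """Exact flow of x' = a*x + b over a time h (a != 0)."""
    e = exp(a * h)
    return e * x + (e - 1.0) / a * b

endpoints = set()
for seq in product(range(len(modes)), repeat=K + 1):
    x = 0.0                          # initial condition x(0) = 0
    for i in seq:
        x = step(x, *modes[i], Delta)
    endpoints.add(round(x, 12))

# With a finite input set and fixed switching times, the reachable set at
# time T is exactly this finite set of endpoints.
```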
B. Reachable set of networks with generic propensities via
finite state projection
If the network contains reactions of order higher than one or
if the reactions do not follow the laws of mass action kinetics,
then hr (z) might be non-affine. In such cases, the arguments
illustrated in the previous subsection cannot be applied. We
here show how the FSP approximation of the CME derived in
Section IV-B can be used to overcome this problem.
Firstly note that, from system (27), one can compute the evolution of the uncentered moments of Z(t) as a linear function of P(t).² For example, if we let z_s^j be the amount of species Zs in the state z^j, then the mean E[Zs] of any species s can be obtained as l⊤ P(t) by setting l := [z_s^1, z_s^2, . . .]⊤, and the second uncentered moment E[Zs²] can be obtained as l⊤ P(t) by setting l := [(z_s^1)², (z_s^2)², . . .]⊤. Consequently, the desired projected reachable set coincides with the output reachable set of the infinite dimensional linear switched system (27) with linear output

y(t) = [l¹, l²]⊤ P(t),   (34)

where l¹ and l² are the infinite vectors associated with any desired pair of moments. Note that l¹ and l² are non-negative.
Example 1 (cont.) With the ordering introduced at the beginning of the section, the uncentered protein moments up to order two can be computed as the output of (27) by setting

l¹ = [0 0 1 0 1 2 0 1 . . .]⊤,
l² = [0 0 1 0 1 4 0 1 . . .]⊤.   (35)
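The vectors in (35) follow mechanically from the state ordering: entry j of l¹ is the protein count of z^j, and entry j of l² is its square. A sketch that rebuilds their first eight entries:

```python
# Construct the first entries of l^1 and l^2 in (35) from the state
# ordering z^1 = (0,0), z^2 = (1,0), z^3 = (0,1), ... (diagonal
# enumeration of (m, p), decreasing m within each diagonal).
def states(n_states):
    out, total = [], 0
    while len(out) < n_states:
        for m in range(total, -1, -1):
            out.append((m, total - m))
        total += 1
    return out[:n_states]

Z = states(8)
l1 = [p for (m, p) in Z]        # E[P]   = l1 . P(t)
l2 = [p * p for (m, p) in Z]    # E[P^2] = l2 . P(t)
```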
Let l¹j and l²j be the j-th components of the vectors l¹ and l², respectively, as defined in (34). For a given species of interest s and set J, we denote by

y1(t; σ) := Σ_{j∈J} l¹j Pj(t; σ) / Σ_{j∈J} Pj(t; σ),
y2(t; σ) := Σ_{j∈J} l²j Pj(t; σ) / Σ_{j∈J} Pj(t; σ)   (36)

the moments associated with l¹ and l² conditioned on the fact that Z(t) is in J and the switching signal σ is applied. For example, if one is interested in the mean and second order moment of a specific species Zs(t) we get y1(t; σ) = E[Zs(t) | Z(t) ∈ J, σ(·)] and y2(t; σ) = E[Zs²(t) | Z(t) ∈ J, σ(·)]. The aim of this section is to obtain an outer approximation of the output reachable set of the infinite system (27) with the nonlinear output (36), by using computations involving only the finite dimensional system (28). To this end, we define the two entries of the output of the finite dimensional system as

ȳ1(t; σ) := Σ_{j∈J} l¹j P̄j(t; σ) =: (l̄¹)⊤ P̄J(t; σ),
ȳ2(t; σ) := Σ_{j∈J} l²j P̄j(t; σ) =: (l̄²)⊤ P̄J(t; σ).   (37)
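Numerically, (36) and (37) differ only by the renormalization with the probability mass retained in J; since that mass is at least 1 − ε, the two outputs stay close, which is what Theorem 3 quantifies. A sketch with a hypothetical truncated distribution P̄J:

```python
# Conditioned moments (36) versus the FSP outputs (37) for a hypothetical
# truncated probability vector P_J over the first 8 states of Example 1.
l1 = [0, 0, 1, 0, 1, 2, 0, 1]    # protein count per state, as in (35)
l2 = [0, 0, 1, 0, 1, 4, 0, 1]    # squared protein count
P_J = [0.30, 0.25, 0.20, 0.10, 0.08, 0.04, 0.02, 0.005]  # mass 0.995

mass = sum(P_J)
y1 = sum(a * p for a, p in zip(l1, P_J)) / mass   # E[P   | Z in J], eq. (36)
y2 = sum(a * p for a, p in zip(l2, P_J)) / mass   # E[P^2 | Z in J]
ybar1 = sum(a * p for a, p in zip(l1, P_J))       # eq. (37): no renormalization
ybar2 = sum(a * p for a, p in zip(l2, P_J))
```

Because the retained mass is below one, ȳ1 ≤ y1 and ȳ2 ≤ y2, matching the inequalities used in the proof of Theorem 3.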
Theorem 3. Suppose Assumptions 3 and 4 hold. Let R_T^y(x0) be the output reachable set at time T > 0 of system (27) with output (36). Choose D values γ^d ∈ R and set c^d := l̄² − γ^d l̄¹ ∈ R^n, with l̄¹, l̄² as in (37). Set

H_T^y(γ^d) := {w ∈ R² | w₂ ≤ γ^d w₁ + v̄_T(c^d) + δ(γ^d)},

where v̄_T(c^d) is the constant that makes the hyperplane H_T(c^d) in (5) tangent to the reachable set of the finite system (28) (i.e. v̄_T(c^d) can be computed as in (17)) and

δ(γ^d) := 2ε/(1 − ε) · (max{0, −γ^d} · ‖l̄¹‖∞ + ‖l̄²‖∞),

with ε as in Assumption 4. Then the set R_T^{y,out}(x0) := ∩_{d=1}^D H_T^y(γ^d) is an outer approximation of R_T^y(x0).

² The reachable set for the centered moments can be immediately computed from the reachable set of the uncentered ones, since there is a bijective relation between the sets of centered and uncentered moments up to any desired order.

Proof: Firstly note that if the external signal u satisfies Assumption 3 then the corresponding switching signal σ(t) (constructed as in Remark 2) satisfies Assumption 2. Let R̄_T^y(x0) be the output reachable set of the finite dimensional system (28) with output (37). Proposition 2 guarantees that for any direction c^d the constant v̄_T(c^d) that makes

H̄_T^y(γ^d) := {w ∈ R² | w₂ ≤ γ^d w₁ + v̄_T(c^d)}

tangent to R̄_T^y(x0) can be computed by solving the MILP (17) for system (28). The main idea of the proof is to show that if we shift the halfspace H̄_T^y(γ^d) by a suitably defined constant δ(γ^d) we can guarantee that the original reachable set R_T^y(x0) is a subset of the shifted halfspace H_T^y(γ^d) defined in the statement. The result then follows since R_T^{y,out}(x0) is defined as the intersection of halfspaces containing R_T^y(x0).

To derive the constant δ(γ^d) we start by focusing on the first component of the output and, for simplicity, we omit the dependence on (T; σ) in Pj, P̄j, y and ȳ. Take any switching signal σ ∈ S_I^K. By taking into account that: (1) l¹j ≥ 0 for all j ∈ J; (2) Pj ≥ P̄j for all j ∈ J, due to Proposition 3; and (3) Σ_{j∈J} Pj ≤ 1, we get y₁ ≥ ȳ₁. Consequently, at time t = T we have

|y₁ − ȳ₁| = y₁ − ȳ₁ = Σ_{j∈J} l¹j Pj / Σ_{j∈J} Pj − Σ_{j∈J} l¹j P̄j
  ≤ Σ_{j∈J} l¹j Pj / (1 − ε) − Σ_{j∈J} l¹j P̄j
  = (1 + ε/(1 − ε)) Σ_{j∈J} l¹j Pj − Σ_{j∈J} l¹j P̄j
  = ε/(1 − ε) Σ_{j∈J} l¹j Pj + Σ_{j∈J} l¹j (Pj − P̄j)
  ≤ ‖l̄¹‖∞ ( ε/(1 − ε) Σ_{j∈J} Pj + Σ_{j∈J} (Pj − P̄j) )
  ≤ ‖l̄¹‖∞ ( ε/(1 − ε) + ‖PJ − P̄J‖₁ ) ≤ ‖l̄¹‖∞ · 2ε/(1 − ε),

where we used Σ_{j∈J} Pj ≥ Σ_{j∈J} P̄j ≥ 1 − ε (due to Assumption 4), and Pj ≥ P̄j, ‖PJ − P̄J‖₁ ≤ ε (following from Proposition 3). To summarize, ȳ₁ ≤ y₁ ≤ ȳ₁ + ‖l̄¹‖∞ · 2ε/(1 − ε). Similarly, it can be proven that ȳ₂ ≤ y₂ ≤ ȳ₂ + ‖l̄²‖∞ · 2ε/(1 − ε).

Consider any pair (y₁, y₂) ∈ R_T^y(x0) and the associated pair (ȳ₁, ȳ₂) ∈ R̄_T^y(x0) (i.e. the two output pairs obtained from (27) and (28) when the same σ is applied). Note that (ȳ₁, ȳ₂) ∈ R̄_T^y(x0) implies (ȳ₁, ȳ₂) ∈ H̄_T^y(γ^d) for any γ^d. The previous relations then imply that if γ^d ≥ 0,

y₂ ≤ ȳ₂ + ‖l̄²‖∞ · 2ε/(1 − ε) ≤ γ^d ȳ₁ + v̄_T(c^d) + ‖l̄²‖∞ · 2ε/(1 − ε)
   ≤ γ^d y₁ + v̄_T(c^d) + ‖l̄²‖∞ · 2ε/(1 − ε) = γ^d y₁ + v̄_T(c^d) + δ(γ^d).

On the other hand, when γ^d < 0,

y₂ ≤ ȳ₂ + ‖l̄²‖∞ · 2ε/(1 − ε) ≤ γ^d ȳ₁ + v̄_T(c^d) + ‖l̄²‖∞ · 2ε/(1 − ε)
   ≤ γ^d y₁ + v̄_T(c^d) + (‖l̄²‖∞ − γ^d ‖l̄¹‖∞) · 2ε/(1 − ε) = γ^d y₁ + v̄_T(c^d) + δ(γ^d).

Therefore for every signal σ and every γ^d it holds y₂(T; σ) ≤ γ^d y₁(T; σ) + v̄_T(c^d) + δ(γ^d) and consequently [y₁(T; σ), y₂(T; σ)]⊤ ∈ H_T^y(γ^d).

VI. ANALYSIS OF SINGLE CELL REALIZATIONS

The previous analysis focused on characterising what combinations of moments of the stochastic biochemical reaction network are achievable by using the available external input. In this section, we change perspective and, instead of looking at population properties, we focus on single cell trajectories. Specifically, we are interested in characterising the probability that a single realization of the stochastic process will satisfy a specific property at the final time T (e.g. the number of copies of a certain species is higher/lower than a certain threshold) when starting from an initial condition P(0). Note that we can start either deterministically from a given state z^i (by setting P(0) = e_i) or stochastically from any state according to a generic vector of probabilities P(0). To define the problem, let us call T the target set, that is, the set of all indices i associated with a state z^i in the Markov chain (26) that satisfies the desired property. Note that this set might be of infinite size. We restrict our analysis to external signals satisfying Assumption 3, so that we can map the external signal u to the switching signal σ, as detailed in Remark 2. For a fixed signal σ the solution of (27) immediately allows one to compute the probability that the state at time T belongs to T, and thus has the desired property, as P_T(σ) := 1_T⊤ P(T; σ), where 1_T is an infinite vector that has the i-th component equal to 1 if i ∈ T and 0 otherwise. Our objective is to select the switching signal σ(t) (and thus the external signal u(t)) that maximizes the probability P_T(σ).³ That is, we aim at solving

P_T* := max_{σ ∈ S_I^K} P_T(σ),    σ* := arg max_{σ ∈ S_I^K} P_T(σ),   (38)

where I is the cardinality of Σd as by Remark 2. Note that P_T(σ) in (38) is computed according to P(T; σ), which is an infinite dimensional vector. In the next theorem we show how to overcome this issue and approximately solve (38) by using the FSP approach of Proposition 3 and the reformulation as MILP given in Proposition 2. To this end, let

σ̄* := arg max_{σ ∈ S_I^K} P̄_T(σ),   (39)

where P̄_T(σ) := 1̄_T⊤ P̄J(T; σ) is the probability that the final state of the reduced Markov chain (28) belongs to T ∩ J at time T given the switching signal σ, and 1̄_T is a vector of size |J| that has 1 in the positions corresponding to states of J that belong also to T, and 0 otherwise.

³ Note that one can use the same tools to maximize the probability of avoiding a given set D by maximizing the probability of being in T = D^c.

Theorem 4. Suppose that Assumptions 3 and 4 hold. Then

P_T(σ̄*) ≥ P_T* − 2ε.

Moreover, (39) can be solved by solving the MILP in (17) for system (28) with c = 1̄_T and M = 1.
Proof: Under Assumptions 3 and 4, for any set T and any signal σ, we get

1_T⊤ P = Σ_{i∈T} Pi ≤ Σ_{i∈T∩J} Pi + Σ_{i∉J} Pi ≤ Σ_{i∈T∩J} Pi + ε
       ≤ Σ_{i∈T∩J} P̄i + Σ_{i∈T∩J} |Pi − P̄i| + ε
       ≤ Σ_{i∈T∩J} P̄i + ‖PJ − P̄J‖₁ + ε = 1̄_T⊤ P̄ + 2ε,

and 1_T⊤ P = Σ_{i∈T} Pi ≥ Σ_{i∈T∩J} Pi ≥ Σ_{i∈T∩J} P̄i = 1̄_T⊤ P̄, where we used Assumption 4 and Proposition 3 and we omitted (T; σ) for simplicity. To sum up, for each σ,

P̄_T(σ) ≤ P_T(σ) ≤ P̄_T(σ) + 2ε.

By imposing σ = σ* we get P_T* = P_T(σ*) ≤ P̄_T(σ*) + 2ε ≤ P̄_T(σ̄*) + 2ε. By imposing σ = σ̄* we get P̄_T(σ̄*) ≤ P_T(σ̄*). Combining the last two inequalities we get the desired bound. The last result can be proven as in Proposition 2. Note that P̄ is a vector of probabilities, hence we can set M = 1.
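For small I and K, problem (39) can also be solved by brute force instead of the MILP (17), which provides a useful cross-check. The sketch below does this for a hypothetical two-state chain with two modes and target set T = {1}: holding the mode that pushes probability toward state 1 is optimal.

```python
from itertools import product

# Brute-force solution of (38)/(39) for a toy two-state chain.  Generators
# F[i] (columns sum to zero) are hypothetical: mode 0 drifts toward state 0,
# mode 1 toward state 1.  Target set T = {1}.
F = [[[-0.2, 1.0], [0.2, -1.0]],   # mode 0
     [[-2.0, 0.1], [2.0, -0.1]]]   # mode 1
K, Delta, steps = 2, 1.0, 1000     # K switches -> K+1 intervals of length Delta
dt = Delta / steps

def propagate(P, Fi):
    """Forward Euler for P' = Fi P over one interval."""
    for _ in range(steps):
        P = [P[0] + dt * (Fi[0][0] * P[0] + Fi[0][1] * P[1]),
             P[1] + dt * (Fi[1][0] * P[0] + Fi[1][1] * P[1])]
    return P

best_prob, best_seq = -1.0, None
for seq in product((0, 1), repeat=K + 1):   # one mode per interval
    P = [1.0, 0.0]                          # start in state 0
    for i in seq:
        P = propagate(P, F[i])
    if P[1] > best_prob:                    # P_T(sigma) with T = {1}
        best_prob, best_seq = P[1], seq
```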
VII. THE GENE EXPRESSION NETWORK CASE STUDY
To illustrate our method we consider again the gene expression model of Example 1 and determine what combinations of
the protein mean and variance are achievable starting from the
zero state, under different assumptions on the external signal.
A. Single input
Consider the gene expression model with one external
signal and reactions following the mass action kinetics, as
described in Example 2. In this case, the moments equations
are linear and the protein mean and variance can be obtained
by assuming as output matrix for the linear system (25)

L := [ 0 1 0 0 0
       0 0 0 0 1 ] .
Depending on the experimental setup, the external signal u(t)
may take values in the set Σd := {0, 1}, if the input is of
the ON-OFF type [1], [2], [3], [5], or in the interval Σc :=
[0, 1], if the input is continuous [4]. Corollary 1 guarantees
the validity of the following results both for Σd and Σc . The
problem of computing an outer approximation of the reachable
set of this system was studied in [27] using ad hoc methods. In
Fig. 4 we compare the outer approximation obtained therein
(magenta dashed/dotted line) with the inner (solid red) and
outer (dashed blue) approximations that we obtained using
the methods for linear moment equations of Section V-A1.
We used the parameters kr = 0.0236, γr = 0.0503, kp =
0.18, γp = 0.0121 (all in units of min−1 ) and set T = 360
min. Figure 4 shows that the outer approximation computed
using the hyperplane method is more accurate than the one
previously obtained in the literature. Moreover, since inner and
outer approximations practically coincide, this method allows
one to effectively recover the reachable set.
Fig. 4. Comparison of the inner (red solid) and outer (blue dashed) approximations of the reachable set for the protein mean and variance, according to model (25), computed using the hyperplane method, and the outer approximation computed according to [27] (magenta dashed/dotted).

B. Single input and saturation

As a second case study we consider again Example 2, but we now assume that not all the reactions follow the laws of mass action kinetics. Specifically, we are interested in investigating how the reachable set changes if we assume that the number of ribosomes in the cell is limited and consequently impose a saturation on the translation propensity. Following [39], we assume that the translation rate follows the Michaelis-Menten kinetics, so that

α3(kp, z) = k̃p · a·m / (b + a·m)

instead of α3(kp, z) = kp · m.
For the simulations we impose k̃p = 0.7885, b = 0.06, a =
0.02, so that the maximum reachable protein mean is the
same as in the case without saturation analysed in the previous subsection. The corresponding propensity function is
illustrated in Fig. 5a). All the other propensities are assumed
as in Section VII-A. Note that in this case the propensities
are not affine. Consequently, we estimate the reachable set by
using the FSP approach derived in Theorem 3. Specifically
we consider as set J the indices corresponding to states with
less than 6 mRNA copies and 40 protein copies. By assuming
T = 360 min and that u can switch every 30 minutes in the set Σd = {0, 1}, we obtain an error ε = 2.84 · 10−4. Fig. 5b)
shows the comparison of the reachable sets obtained for the
cases with and without saturation. From this plot it emerges
that, for the chosen values of parameters, saturation leads to
a decrease of variability in the population.
Fig. 5. Comparison of the reachable set for the protein mean and variance, according to the model in Example 2, when all the reactions follow the mass action kinetics (as in Fig. 4) and when the translation is saturated. Figure a): h3(z) when the translation reaction follows the mass action kinetics (dashed blue) or the Michaelis-Menten kinetics (dash-dotted green). Figure b): comparison of the outer approximations of the reachable sets in the two cases. The blue dashed line is as in Fig. 4. The grey line is the outer approximation of the reachable set of the FSP system (28); the green dash-dotted line is the outer approximation of the original system (27) according to Theorem 3.
C. Fluorescent protein and the two inputs case
Consider again Example 1, but now assume that:
1) mRNA production and degradation can both be controlled, so that the vector of propensities is α(z) = [kr · u1(t), γr · m · u2(t), kp · m, γp · p]⊤ and u(t) := [u1(t), u2(t)]⊤;
2) the protein P can mature into a fluorescent protein F according to the additional maturation and degradation reactions

P −−α5(kf, z)−→ F,    F −−α6(γp, z)−→ ∅,

where α5(kf, z) := kf · p, α6(γp, z) := γp · f and kf > 0 is the maturation rate. For simplicity, the degradation rate of F is assumed to be the same as that of P;
3) the fluorescence intensity I(t) of each cell can be measured and is proportional to the amount of fluorescence
protein, that is, I(t) = rF (t) for a fixed scaling parameter
r > 0.
Since all the propensities are affine, the system describing the
evolution of means and variances of the augmented network
is
ẋ≤2 (t) = Af (u(t))x≤2 (t) + bf (u(t)),
(40)
where the state vector is

x≤2 = [E[M], E[P], E[F], V[M,P], V[M,F], V[P], V[P,F], V[F]]⊤

and Af(u(t)), bf(u(t)) are the corresponding moment dynamics matrices, with input-dependent and constant diagonal abbreviations

d1(u2(t)) = −γr u2(t),  d2 = −(γp + kf),  d3 = −kf,
d4(u2(t)) = −(γr u2(t) + γp + kf),  d5(u2(t)) = −(γr u2(t) + kf),
d6 = −2(γp + kf),  d7 = −(2kf + γp),  d8 = −2kf,

and with u1(t) entering only bf, through entries of the form kr u1(t).
System (40) depends on the parameter vector θ =
[kr , γr , kp , γp , kf , r] (for more details see [40, Supplementary
Information pg. 16]). For the parameters we use the MAP
estimates identified in [28] (all in min−1 )
kr
kf
= 0.0236
= 0.0212
γr = 0.0503 kp = 178.398
γp = 0.0121 r−1 = 646.86
(41)
and we set

Lf := [ 0 0 r 0 0 0 0 0
        0 0 0 0 0 0 0 r² ] ,   (42)

which selects r E[F] and r² V[F], to compute the mean and variance reachable set for the fluorescence intensity.
Our first aim is to compare the reachable set of this extended model with experimental data, when only one external signal ("1in") is available. In the case of one input, (40) is a linear system and the methods of Section V-A1 can be applied. Fig. 6a) shows the estimated reachable set compared with the real data collected in [2].
Fig. 6. Output reachable set of system (40) with output as in (42) and parameters as in (41). Figure a) [1 external signal]: comparison between the inner (red contour) and outer (blue lines) approximations of the output reachable set, when the set of possible modes is Σ1in, and the measured data. Different colors refer to data collected in different experiments. Figure b) [2 external signals]: outer approximation of the output reachable set, when the set of possible modes is Σ2in. The two green dots represent the outputs when u(t) = [1, 0.5]⊤ ∀t and u(t) = [1, 1]⊤ ∀t, respectively. The black crosses represent the output for random signals in Σ2in. Figure c) [Comparison]: the red solid line is the outer approximation obtained for u(t) ∈ Σ1in, as in Fig. a); the blue dashed line is the one for u(t) ∈ Σ2in, as in Fig. b).

Our second goal is to investigate how the reachable set changes when both mRNA production and degradation are controlled ("2in"), as studied in [41]. Note that in this case, system (40) is nonlinear. We therefore set T = 300 min, assume that switchings can occur every 20 min, so that Assumption 3 is satisfied with K = 15, and use the hyperplane method as described in Section V-A2 with input sets
Σ2in := { [0, 1]⊤, [0, 0.5]⊤, [1, 1]⊤, [1, 0.5]⊤ }, so that I = 4,
Σ1in := { [0, 1]⊤, [1, 1]⊤ }, so that I = 2,
respectively. Note that we set the minimum input for the
mRNA degradation to 0.5 > 0 to avoid unboundedness. With
these input choices it is intuitive that the largest possible state
is reached when the mRNA production is at its maximum and
the mRNA degradation is at its minimum. Therefore, in the MILPs we can use the bounds M = x(T; 0, u(t) = [1, 0.5]⊤ ∀t) for the case of two inputs and M = x(T; 0, u(t) = [1, 1]⊤ ∀t) for the case of one input. Fig. 6b) shows the output reachable set
for the case of two inputs. The simulation time for computing
the outer approximation with the hyperplane method was 5.6
hrs. Computing the exact reachable set by simulating all the
possible switching signals, assuming that one simulation takes
10−4 sec and neglecting the time needed to enumerate all
possible signals, would take 29.8 hrs. The black crosses in
Fig. 6b) are obtained by simulating the output of the system for
5000 randomly constructed input signals. This simulation illustrates that random approaches might significantly underestimate the reachable set. Fig. 6c) shows a comparison
of the reachable sets obtained in Fig. 6a) and b) when the
input set is Σ1in and Σ2in , respectively.
VIII. CONCLUSION
In the paper we have: i) proposed a method to approximate
the projected reachable set of switched affine systems with
fixed switching times, ii) extended the FSP approach to
controllable networks, iii) illustrated how these new theoretical
tools can be used to analyse generic networks both from a
population and single cell perspective and iv) provided an
extensive gene expression case study using both in silico
and in vivo data. Even though our analysis is motivated by
biochemical reaction networks, our results can actually be
applied to study the moments of any Markov chain with
transition rates that switch among I possible configurations at K fixed instants of time. Our results hold both for finite and for infinite state spaces. Moreover, while we have assumed here that cells are identical, we showed in [29] that equations describing the moment evolution can also be derived for heterogeneous populations. The reachable set of such populations can be obtained, as described in this paper, by applying Corollary 1 or 2 to the resulting system.
REFERENCES
[1] A. Milias-Argeitis, S. Summers, J. Stewart-Ornstein, I. Zuleta, D. Pincus,
H. El-Samad, M. Khammash, and J. Lygeros, “In silico feedback for
in vivo regulation of a gene expression circuit,” Nature Biotechnology,
vol. 29, pp. 1114–1116, 2011.
[2] J. Ruess, F. Parise, A. Milias-Argeitis, M. Khammash, and J. Lygeros,
“Iterative experiment design guides the characterization of a light-inducible gene expression circuit,” National Academy of Sciences of the
USA, vol. 112, no. 26, pp. 8148–8153, 2015.
[3] E. J. Olson, L. A. Hartsough, B. P. Landry, R. Shroff, and J. J.
Tabor, “Characterizing bacterial gene circuit dynamics with optically
programmed gene expression signals.” Nature Methods, vol. 11, pp.
449–455, 2014.
[4] J. Uhlendorf, A. Miermont, T. Delaveau, G. Charvin, F. Fages, S. Bottani, G. Batt, and P. Hersen, “Long-term model predictive control of gene
expression at the population and single-cell levels.” National Academy
of Sciences of the USA, vol. 109, no. 35, pp. 14 271–14 276, 2012.
[5] F. Menolascina, M. Di Bernardo, and D. Di Bernardo, “Analysis, design
and implementation of a novel scheme for in-vivo control of synthetic
gene regulatory networks,” Automatica, pp. 1265–1270, 2011.
[6] G. Batt, H. De Jong, M. Page, and J. Geiselmann, “Symbolic reachability
analysis of genetic regulatory networks using discrete abstractions,”
Automatica, vol. 44, no. 4, pp. 982–989, 2008.
[7] N. Chabrier and F. Fages, “Symbolic model checking of biochemical
networks,” in Computational Methods in Systems Biology, C. Priami,
Ed. Springer, 2003, pp. 149–162.
[8] D. T. Gillespie, “A rigorous derivation of the chemical master equation,”
Physica A: Statistical Mechanics and its Applications, pp. 404–425,
1992.
[9] I. Lestas, G. Vinnicombe, and J. Paulsson, “Fundamental limits on the
suppression of molecular fluctuations,” Nature, pp. 174–178, 2010.
[10] Z. Sun and S. S. Ge, Switched linear systems: Control and design.
Springer Science & Business Media, 2006.
[11] C. Altafini, “The reachable set of a linear endogenous switching system,”
Systems & control letters, vol. 47, no. 4, pp. 343–353, 2002.
[12] T. J. Graettinger and B. H. Krogh, “Hyperplane method for reachable
state estimation for linear time-invariant systems,” Journal of optimization theory and applications, vol. 69, no. 3, pp. 555–588, 1991.
[13] B. Munsky and M. Khammash, “The finite state projection algorithm for
the solution of the chemical master equation.” The Journal of chemical
physics, vol. 124, no. 4, p. 044104, 2006.
[14] C. Baier, B. Haverkort, H. Hermanns, and J.-P. Katoen, “Model-checking algorithms for continuous-time Markov chains,” IEEE Trans.
on Software Engineering, vol. 29, no. 6, pp. 524–541, 2003.
[15] A. Abate, J.-P. Katoen, J. Lygeros, and M. Prandini, “Approximate model
checking of stochastic hybrid systems,” European Journal of Control,
vol. 16, no. 6, pp. 624–641, 2010.
[16] H. El-Samad, S. Prajna, A. Papachristodoulou, J. Doyle, and M. Khammash, “Advanced methods and algorithms for biological networks analysis,” IEEE, vol. 94, no. 4, pp. 832–853, 2006.
[17] R. Alur, C. Courcoubetis, T. A. Henzinger, and P.-H. Ho, “Hybrid
automata: An algorithmic approach to the specification and verification
of hybrid systems,” in Hybrid systems. Springer, 1993, pp. 209–229.
[18] R. Alur, T. A. Henzinger, G. Lafferriere, and G. J. Pappas, “Discrete
abstractions of hybrid systems,” Proceedings of the IEEE, vol. 88, no. 7,
pp. 971–984, 2000.
[19] X. D. Koutsoukos and P. J. Antsaklis, “Safety and reachability of piecewise linear hybrid dynamical systems based on discrete abstractions,”
Discrete Event Dynamic Systems, vol. 13, no. 3, pp. 203–243, 2003.
[20] R. Ghosh and C. Tomlin, “Symbolic reachable set computation of piecewise affine hybrid automata and its application to biological modelling:
Delta-notch protein signalling,” Systems Biology, pp. 170–183, 2004.
[21] L. Habets, P. J. Collins, and J. H. van Schuppen, “Reachability and
control synthesis for piecewise-affine hybrid systems on simplices,”
IEEE Trans. on Automatic Control, vol. 51, no. 6, pp. 938–948, 2006.
[22] A. Hamadeh and J. Goncalves, “Reachability analysis of continuous-time piecewise affine systems,” Automatica, pp. 3189–3194, 2008.
[23] I. M. Mitchell and J. A. Templeton, “A toolbox of Hamilton-Jacobi
Francesca Parise Francesca Parise was born in
Verona, Italy, in 1988. She received the B.Sc. and
M.Sc. degrees (cum Laude) in Information and Automation Engineering from the University of Padova,
Italy, in 2010 and 2012, respectively. She conducted
her master thesis research at Imperial College London, UK, in 2012. She graduated from the Galilean
School of Excellence, University of Padova, Italy,
in 2013. She defended her PhD at the Automatic
Control Laboratory, ETH Zurich, Switzerland in
2016 and she is currently a Postdoctoral researcher
at the Laboratory for Information and Decision Systems, M.I.T., USA.
Her research focuses on identification, analysis and control of complex
systems, with application to distributed multi-agent networks and systems
biology.
Maria Elena Valcher Maria Elena Valcher received
the Laurea degree and the PhD from the University
of Padova, Italy. Since January 2005 she has been full
professor of Automatica at the University of Padova.
She is author/co-author of 76 papers published
in international journals, 92 conference papers, 2
text-books and several book chapters. Her research
interests include multidimensional systems theory,
polynomial matrix theory, behavior theory, convolutional coding, fault detection, delay-differential
systems, positive systems, positive switched systems
and Boolean control networks.
She has been involved in the Organizing Committees and in the Program
Committees of several conferences. In particular, she was the Program Chair
of the CDC 2012. She was in the Editorial Board of the IEEE Transactions
on Automatic Control (1999-2002), Systems and Control Letters (2004-2010)
and she is currently in the Editorial Boards of Automatica (2006-today),
Multidimensional Systems and Signal Processing (2004-today), SIAM J. on
Control and Optimization (2012-today), European Journal of Control (2013-today) and IEEE Access (2014-today).
She was Appointed Member of the CSS BoG (2003); Elected Member of
the CSS BoG (2004-2006; 2010-2012); Vice President Member Activities of
the CSS (2006-2007); Vice President Conference Activities of the CSS (2008-2010); CSS President (2015). She is presently a Distinguished Lecturer of the
IEEE CSS and IEEE CSS Past President. She received the 2011 IEEE CSS
Distinguished Member Award and she is an IEEE Fellow since 2012.
John Lygeros John Lygeros completed a B.Eng. degree in electrical engineering in 1990 and an M.Sc.
degree in Systems Control in 1991, both at Imperial
College of Science Technology and Medicine, London, U.K.. In 1996 he obtained a Ph.D. degree from
the Electrical Engineering and Computer Sciences
Department, University of California, Berkeley. During the period 1996-2000 he held a series of research
appointments at the National Automated Highway
Systems Consortium, Berkeley, the Laboratory for
Computer Science, M.I.T., and the Electrical Engineering and Computer Sciences Department at U.C. Berkeley. Between 2000
and 2003 he was a University Lecturer at the Department of Engineering,
University of Cambridge, U.K., and a Fellow of Churchill College. Between
2003 and 2006 he was an Assistant Professor at the Department of Electrical
and Computer Engineering, University of Patras, Greece. In July 2006 he
joined the Automatic Control Laboratory at ETH Zurich, first as an Associate
Professor, and since January 2010 as a Full Professor. Since 2009 he has been
serving as the Head of the Automatic Control Laboratory and since 2015
as the Head of the Department of Information Technology and Electrical
Engineering. His research interests include modelling, analysis, and control of
hierarchical, hybrid, and stochastic systems, with applications to biochemical
networks, automated highway systems, air traffic management, power grids
and camera networks. He teaches classes in the area of systems and control
both at the undergraduate and at the graduate level at ETH Zurich, notably
the 4th semester class Signals and Systems II, which he delivers in a flipped
classroom format. John Lygeros is a Fellow of the IEEE, and a member of the
IET and the Technical Chamber of Greece. He served as an Associate Editor
of the IEEE Transactions on Automatic Control and in the IEEE Control
Systems Society Board of Governors; he is currently serving as the Treasurer
of the International Federation of Automatic Control.
Learning Hybrid Algorithms for Vehicle Routing Problems
Yves Caseau1, Glenn Silverstein2 , François Laburthe1
1 BOUYGUES e-Lab., 1 av. E. Freyssinet, 78061 St. Quentin en Yvelines cedex, FRANCE
ycs;[email protected]
2 Telcordia Technologies, 445 South Street, Morristown, NJ, 07960, USA
[email protected]
Abstract
This paper presents a generic technique for improving hybrid algorithms through the
discovery and tuning of meta-heuristics. The idea is to represent, with an algebra, a
family of "push/pull" heuristics that are based upon inserting and removing tasks in a
current solution. We then let a learning algorithm search for the best possible
algebraic term, which represents a hybrid algorithm for a given set of problems and an
optimization criterion. In a previous paper, we described this algebra in detail and
provided a set of preliminary results demonstrating the utility of this approach, using
vehicle routing with time windows (VRPTW) as a domain example. In this paper we
expand upon our results providing a more robust experimental framework and learning
algorithms, and report on some new results using the standard Solomon benchmarks. In
particular, we show that our learning algorithm is able to achieve results similar to the
best-published algorithms using only a fraction of the CPU time. We also show that the
automatic tuning of the best hybrid combination of such techniques yields a better
solution than hand tuning, with considerably less effort.
1. Introduction
Recent years have seen a rise in the use of hybrid algorithms in many fields such as
scheduling and routing, as well as generic techniques that seem to prove useful in many
different domains (e.g., Limited Discrepancy Search (LDS) [HG95] and Large Neighborhood
Search (LNS) [Sha98]). Hybrid Algorithms are combinatorial optimization algorithms that
incorporate different types of techniques to produce higher quality solutions. Although
hybrid algorithms and approaches have achieved many interesting results, they are not a
panacea yet as they generally require a large amount of tuning and are often not robust
enough: i.e., a combination that works well for a given data set does poorly on another one.
In addition, the application to the “real world” of an algorithm that works well on academic
benchmarks is often a challenging task.
The field of Vehicle Routing is an interesting example. Many real-world applications rely
primarily on insertion algorithms, which are known to be poor heuristics, but have two major
advantages: they are incremental by nature and they can easily support the addition of
domain-dependent side constraints, which can be utilized by a constraint solver to produce a
high-quality insertion [CL99]. We can abstract the routing aspect and suppose that we are
solving a multi-resource scheduling problem, and that we know how to incrementally insert a
new task into a current solution/schedule, or how to remove one of the current tasks. A
simple approach is to derive a greedy heuristic (insert all the tasks, using a relevant order); a
simple optimization loop is then to try 2-opt moves where pairs of tasks (one in and one out)
are swapped. The goal of our work is to propose a method to (1) build more sophisticated
hybrid strategies that use the same two push and pull operations; (2) automatically tune the
hybrid combination of meta-heuristics. In [CSL99] we described such a framework for
discovering and tuning hybrid algorithms for optimization problems such as vehicle routing
based on a library of problem independent meta-methods. In this paper, we expand upon the
results in [CSL99], providing a more robust experimentation framework and experimentation
methodology with more emphasis on the automated learning and tuning of new terms.
This paper is organized as follows. Section 2 presents a generic framework that we propose
for some optimization problems, based on the separation between domain-specific low-level
methods, for which constraint solving is ideally suited, and generic meta-heuristics. We then
recall the definition of a Vehicle Routing Problem and show what type of generic meta-heuristics may be applied. Section 3 provides an overview of the algebra of terms
representing the hybrid methods. Section 4 describes a learning algorithm for inventing and
tuning terms representing problem solutions. Section 5 describes an experimentation
framework, which is aimed at discovering the relative importance of various learning
techniques (mutation, crossover, and invention) and how they, along with experiment
parameters such as the pool size and number of iterations, affect the convergence rate and
stability. Sections 6 and 7 provide the results of various experiments along with the
conclusions.
2. Meta-Heuristics and Vehicle Routing
2.1 A Framework for Problem Solving
The principle of this work is to build a framework that produces efficient problem solving
algorithms for a large class of problems at reduced development cost, while using state-of-the-art meta-heuristics. The main idea is the separation between two parts, as explained in
Figure 1, that respectively contain a domain-dependent implementation of two simple
operations (push and pull) and a set of meta-heuristics plus a combination engine, which
are far more sophisticated but totally generic. Obviously, the first postulate is that many
problems may actually fit this scheme. This postulate is based on our experience with a large
number of industrial problems, where we have found ourselves re-using "the same set of
tricks" with surprisingly few differences. Here is a list of such problems:
(1) Vehicle Routing Problems. As we shall later see, such problems come in all kinds of
flavor, depending on the objective function (what to minimize) and the side-constraints (on the trucks, on the clients, etc.). The key method is the insertion of a
new task into a route, which is precisely the resolution of a small TSP with side
constraints.
(2) Assignment Problems. We have worked on different types of assignment problems,
such as broadcast airspace optimization or workflow resource optimization that may
be seen as assigning tasks to constrained resources. The unit operations are the
insertion and the removal of one task into one resource.
(3) Workforce Scheduling. Here the goal is to construct a set of schedules, one for each
operator, so that a given workload, such as a call center distribution, is handled
optimally. The algorithm that has produced the best overall results and that is being
used today in our software product is based on decomposing the workload into small
units that are assigned to workers. The unit “push” operation is the resolution of a
small “one-machine” scheduling problem, which can be very tricky because of labor
regulations. Constraint solving is a technique of choice for solving such problems.
(4) Frequency Allocation. We participated in the 2001 ROADEF challenge [Roa01] and
applied this decomposition to a frequency allocation problem. Here the unit operation
is the insertion of a new path into the frequency plan. This problem is solved using a
constraint-based approach similar to jobshop scheduling algorithms [CL95].
For all these problems, we have followed a similar route: first, build a greedy approach that is
quickly extended into a branch-and-bound scheme, which must be limited because of the size
of the problem. Then local-optimization strategies are added, using a swap (push/pull)
approach, which are extended towards large neighborhood search methods.
[Figure omitted: an automated combination of generic framework components (limited branching/search, remove-a-fragment-and-rebuild, an ejection method that forces a task into the solution, selection heuristics) layered over the problem-dependent push and pull operations and side-constraints.]
Figure 1: A Framework for Meta-Heuristics
The problem that we want to address with this paper is twofold:
1. How can we build a library of meta-heuristics that is totally problem-independent, so that
any insertion algorithm based on a constraint solver can be plugged?
2. How can we achieve the necessary tuning to produce at a low cost a robust solution for
each different configuration?
The first question is motivated by the fact that the meta-heuristic aspect (especially with local
optimization) is the part of the software that is most difficult to maintain when new
constraints are added. There is a tremendous value in confining the domain-dependent part to
where constraint-programming techniques can be used. The second question is drawn from
our practical experience: the speed of the hardware, the runtime constraints, the objective
functions (total travel, number of routes, priority respect,...) all have a strong influence when
designing a hybrid algorithm. For instance, it is actually a difficult problem to re-tune a
hybrid algorithm when faster hardware is installed.
2.2 Application to the Vehicle Routing Problem
2.2.1 Vehicle Routing Problems
A vehicle routing problem is defined by a set of tasks (or nodes) i and a distance matrix
(d[i,j]). Additionally, each task may be given a duration (in which case the matrix d denotes
travel times) and a load, which represents some weight to be picked in a capacitated VRP.
The goal is to find a set of routes that start from a given node (called the depot) and return to
it, so that each task is visited only once and so that each route obeys some constraints about
its maximal length or the maximum load (the sum of the weights of the visited tasks). A VRP
is an optimization problem, where the objective is to minimize the sum of the lengths of the
routes and/or the number of routes [Lap92].
We report experiments with the Solomon benchmarks [Sol87], which are both relatively small
(100 customers) and simple (truck capacity and time-windows). Real-world routing problems
include a lot of side constraints (not every truck can go everywhere at anytime, drivers have
breaks and meals, some tasks have higher priorities, etc.). Because of this additional
complexity, the most commonly used algorithm is an insertion algorithm, one of the simplest
algorithms for solving a VRP.
Let us briefly describe the VRPTW problem that is proposed in the Solomon benchmarks
more formally.
- We have N = 100 customers, defined by a load c_i, a duration d_i and a time window [a_i, b_i].
- d(i,j) is the distance (in time) to go from the location of customer i to the location of customer j (all trucks travel at the same speed). For convenience, 0 represents the initial location of the depot where the trucks start their routes.
- A solution is given by a route/time assignment: r_i is the truck that will service the customer i and t_i is the time at which this customer will be serviced. We define prev(i) = j if j is the customer that precedes i in the route r_i, and prev(i) = 0 if i is the first customer in r_i. We also define last(r) as the last customer for route r.
- The set of constraints that must be satisfied is the following:
  o ∀i, t_i ∈ [a_i, b_i] ∧ t_i ≥ d(0,i)
  o ∀i,j, r_i = r_j ⇒ (t_j − t_i ≥ d_i + d(i,j)) ∨ (t_i − t_j ≥ d_j + d(j,i))
  o ∀r, Σ_{i : r_i = r} c_i ≤ C
- The goal is to minimize E = max_{i ≤ N} r_i (the total number of routes) and D = Σ_{r ≤ K} length(r), with length(r) = Σ_{i : r_i = r} d(prev(i), i) + d(last(r), 0)
There are two reasons for using the Solomon benchmarks instead of other larger problems.
First, they are the only problems for which there are many published results. Larger problems,
including our own combined benchmarks (with up to 1000 nodes [CL99]), have not yet received as
much attention, and we will see that it is important to have a good understanding of
competitive methods (i.e., quality vs. run-time trade-off) to evaluate how well our learning
approach is doing. Second, we have found using large routing problems from the
telecommunication maintenance industry that these benchmarks are fairly representative:
techniques that produced improvements on the Solomon benchmarks actually showed similar
improvement on larger problems, provided that their run-time complexity was not prohibitive.
Thus, our goal in this paper is to focus on the efficient resolution of Solomon problems, with
algorithms that could later scale up to larger problems (which explains our interest for finding
good solutions within a short span of time, from a few seconds to a minute).
2.2.2 Insertion and Incremental Local Optimization
Let us first describe an insertion-based greedy algorithm. The tasks (i.e., customers) to be
visited are placed in a stack that may be sorted statically (once) or dynamically, and a set of
empty routes is created. For each task, a set of candidate routes is selected and the feasibility
of the insertion is evaluated. The task is inserted into the best route found during this
evaluation. This loop is run until all tasks have been inserted. Notice that an important
parameter is the valuation of the feasible route. A common and effective strategy is to pick
the route for which the increase in length due to the insertion is minimal.
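The greedy loop just described can be sketched as follows. This is a simplified sketch that ignores side-constraints and uses the cheapest-insertion cost stated above (the increase in route length); the helper names are ours:

```python
def insertion_cost(route, pos, node, d):
    """Increase in route length if node is inserted at position pos (0 is the depot)."""
    p = route[pos - 1] if pos > 0 else 0
    n = route[pos] if pos < len(route) else 0
    return d[p][node] + d[node][n] - d[p][n]

def greedy_insert(tasks, d, n_routes):
    """Insert each task into the route/position of minimal extra travel."""
    routes = [[] for _ in range(n_routes)]
    for task in tasks:
        # evaluate every (route, position) candidate and keep the cheapest
        best = min(((insertion_cost(r, pos, task, d), ri, pos)
                    for ri, r in enumerate(routes)
                    for pos in range(len(r) + 1)),
                   key=lambda x: x[0])
        routes[best[1]].insert(best[2], task)
    return routes
```

In the real algorithm the candidate evaluation also runs the side-constraint checks (the role of the constraint solver).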
The key component is the node insertion procedure (i.e., the push operation of Figure 1),
since it must check all the side constraints. CP techniques can be used either through the full
resolution of the one-vehicle problem, which is the resolution of a small TSP with side-constraints
[CL97][RGP99], or it can be used to supplement a simple insertion heuristic by doing all the
side-constraint checking. We have shown in [CL99] that using a CP solver for the node
insertion increases the quality of the global algorithm, whether this global algorithm is a
simple greedy insertion algorithm or a more complex tree search algorithm.
The first hybridization that we had proposed in [CL99] is the introduction of incremental
local optimization. This is a very powerful technique, since we have shown that it is much
more efficient than applying local optimization as a post-treatment, and that it scales very
well to large problems (many thousands of nodes). The interest of ILO is that it is defined
with primitive operations for which the constraint propagation can be easily implemented.
Thus, it does not violate the principle of separating the domain-dependent part of the problem
from the optimization heuristics.
Instead of applying the local moves once the first solution is built, the principle of
incremental local optimization is to apply them after each insertion and only for those moves
that involve the new node that got inserted. The idea of incremental local optimization within
an insertion algorithm had already brought good results in [GHL94], [Rus95] and [KB95].
We have applied ILO to large routing problems (up to 5000 nodes) as well as call center
scheduling problems.
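The ILO pattern, restricted for simplicity to a single route and to 2-opt reversals involving the new node, can be sketched as follows (a toy sketch with hypothetical names; the paper's actual moves are the inter-route exchanges described next):

```python
def tour_len(route, d):
    """Length of a route that starts and ends at the depot (node 0)."""
    path = [0] + route + [0]
    return sum(d[path[k]][path[k + 1]] for k in range(len(path) - 1))

def ilo_insert(route, node, d):
    """Append `node`, then try only 2-opt reversals that involve it,
    keeping the first improvement (incremental, not a full post-pass)."""
    route.append(node)
    j = len(route) - 1                          # position of the new node
    for i in range(j):
        cand = route[:i] + route[i:j + 1][::-1] + route[j + 1:]
        if tour_len(cand, d) < tour_len(route, d):
            route[:] = cand
            break                               # new node moved; stop this pass
    return route
```

The point of the pattern is that each insertion only pays for a local neighborhood of moves, which is why ILO scales to thousands of nodes.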
Our ILO algorithm uses three moves, which are all 2- or 3-edge exchanges, applied once
the insertion is performed. These moves are performed in the neighborhood of
the inserted node, to see if some chains from another route would be better if moved into the
same route. They include a 2-edge exchange for crossing routes (see Figure 2), a 3-edge
exchange for transferring a chain from one route to another and a simpler node transfer move
(a limited version of the chain transfer).
The 2-edge move (i.e., exchange between (x,y) and (i,i’)) is defined as follows. To perform
the exchange, we start a branch where we perform the edge substitution by linking i to y and
i’ to x. We then compute the new length of the route r and check side constraints if any apply.
If the move is illegal we backtrack to the previous state. Otherwise, we perform a route
optimization on the two modified routes (we apply 3-opt moves within the route r). We also
recursively continue looking for 2-opt moves and we apply the greedy 3-opt optimization to
r’ that will be defined later.
[Figure omitted: the two inter-route moves: an exchange (2-opt / 2 routes) swapping the edges (x,y) and (i,i’) between routes r and r’, and a transfer (3-opt / 2 routes) moving the chain y…y’ from r’ into r after node i.]
Figure 2: edge exchange moves used for ILO
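The 2-edge exchange amounts to swapping the tails of the two routes; a minimal sketch, omitting the side-constraint checks and the subsequent intra-route 3-opt optimization (helper names are ours):

```python
def route_len(route, d):
    """Length of a route starting and ending at the depot (node 0)."""
    path = [0] + route + [0]
    return sum(d[path[k]][path[k + 1]] for k in range(len(path) - 1))

def two_opt_exchange(r1, r2, p1, p2, d):
    """Exchange move as a tail swap: r1 keeps its first p1 nodes and takes
    r2's tail from p2, and conversely; the move is kept only if the total
    length decreases (backtracking to the previous state otherwise)."""
    n1, n2 = r1[:p1] + r2[p2:], r2[:p2] + r1[p1:]
    if route_len(n1, d) + route_len(n2, d) < route_len(r1, d) + route_len(r2, d):
        return n1, n2
    return r1, r2
```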
The second move transfers a chain (y → y’) from a route r’ to a route r right after the node i.
We use the same implementation technique and create a branch where we perform the 3-edge
exchange by linking i to y, y’ to i’ and the predecessor x of y to the successor z of y’. We then
optimize the augmented route r (assuming the new route does not violate any constraints) and
check that the sum of the lengths of the two resulting routes has decreased. If this is not the
case, we backtrack; otherwise, we apply the same optimization procedures to r’ as in the
previous routine.
We also use a more limited version of the transfer routine that we can apply to a whole route
(as opposed to the neighborhood of a new node i). A "greedy optimization" looks for nodes
outside the route r that are close to one node of the route and that could be inserted with a
gain in the total length. The algorithm is similar to the previous one, except that we do not look for a
chain to transfer, but simply a node.
In the rest of the paper, we will call INSERT(i) the insertion heuristic obtained by applying
greedily the node insertion procedure and a given level of ILO depending on the value i:
i = 0 ⇔ no ILO
i = 1 ⇔ perform only 2-opt moves (exchange)
5
i = 2 ⇔ performs 2 and 3-opt moves (exchange and transfer)
i = 3 ⇔ perform 2 and 3-opt moves, plus greedy optimization
i = 4 ⇔ similar, but in addition, when the insertion fail we try to reconstruct the route by
inserting the new node first.
2.3 Meta-Heuristics for Insertion Algorithms
In the rest of this section we present a set of well-known meta-heuristics that have in common
the property that they only rely on inserting and removing nodes from routes. Thus, if we can
use a node insertion procedure that checks all domain-dependent side-constraints, we can
apply these meta-heuristics freely. It is important to note that not all meta-heuristics have this
property (e.g., splitting and recombining routes does not), but the ones that do form a large
subset, and we will show that these heuristics, together with ILO, yield powerful hybrid
combinations.
2.3.1 Limited Discrepancy Search
Limited Discrepancy Search is an efficient technique that has been used for many different
problems. In this paper, we use the term LDS loosely to describe the following idea:
transform a greedy heuristic into a search algorithm by branching only in a few (i.e., a limited
number of) cases where the heuristic is not "sure" about the best insertion. A classical complete
search (i.e., trying recursively all insertions for all nodes) is impossible because of the size of
the problem and a truncated search (i.e., limited number of backtracks) yields poor
improvements. The beauty of LDS is to focus the “power of branching” to those nodes for
which the heuristic decision is the least compelling. Here the choice heuristic is to pick the
feasible route for which the increase in travel is minimal. Applying the idea of LDS, we
branch when two routes have very similar “insertion costs” and pick the obvious choice when
one route clearly dominates the others. There are two parameters in our LDS scheme: the
maximum number of branching points along a path in the search tree and the threshold for
branching. A low threshold will provoke a lot of branching in the earlier part of the search
process, whereas a high threshold will move the branching points further down. These two
parameters control the shape of the search tree and have a definite impact on the quality of
the solutions.
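The control structure of this LDS scheme can be sketched as follows. For readability the sketch abstracts the insertion costs into precomputed per-task options; in the real algorithm each cost depends on the partial solution built so far (all names are ours):

```python
def lds_search(options, max_disc, threshold):
    """LDS over insertion choices: options[k] lists (cost, route) pairs for
    task k, best first. We branch to the runner-up only when its cost is
    within `threshold` of the best, spending one of `max_disc` discrepancies."""
    best = [float("inf"), None]

    def walk(k, total, picks, disc):
        if k == len(options):
            if total < best[0]:
                best[:] = [total, picks]
            return
        opts = options[k]
        # always follow the heuristic choice
        walk(k + 1, total + opts[0][0], picks + [opts[0][1]], disc)
        # branch only when the runner-up is nearly as good
        if (disc < max_disc and len(opts) > 1
                and opts[1][0] - opts[0][0] <= threshold):
            walk(k + 1, total + opts[1][0], picks + [opts[1][1]], disc + 1)

    walk(0, 0, [], 0)
    return best
```

A low threshold concentrates the branching early in the tree, a high one pushes it down, exactly as described above.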
2.3.2 Ejection Chains and Trees
The search for ejection chains is a technique that was proposed a few years ago for Tabu
search approaches [RR96]. An ejection link is an edge between a and b that represents the
fact that a can be inserted in the route that contains b if b is removed. An ejection chain is a
chain of ejection edges where the last node is free, which means that it can be inserted freely
in a route that does not intersect the ejection chain, without removing any other node. Each
time an ejection chain is found, we can compute its cost, which is the difference in total
length once all the substitutions have been performed (which also implies the insertion of the
root node).
The implementation is based on a breadth-first search algorithm that explores the set of
chains starting from a root x. We use a marker for each node n to recall the cost of the
cheapest ejection chain that was found from x to n, and a reverse pointer to the parent in the
chain. The search for ejection chains was found to be an efficient technique in [CL98] to
minimize the number of routes by calling it each time no feasible insertion was found during
the greedy insertion. However, it is problem-dependent since it only works well when nodes
are of similar importance (as in the Solomon benchmarks). When nodes have different
processing times and characteristics, one must move to ejection trees.
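The breadth-first search with cost markers and parent pointers can be sketched as follows (the link and cost structures are hypothetical stand-ins for the real insertion tests):

```python
from collections import deque

def cheapest_ejection_chain(root, links, free):
    """BFS over ejection links: links[a] = [(b, cost), ...] means a can be
    inserted in b's route if b is removed, at the given cost. A chain ends
    at a free node (insertable without ejecting anyone). Returns the
    cheapest (cost, end node) pair and the parent pointers."""
    cost, parent = {root: 0}, {root: None}
    queue = deque([root])
    complete = []
    while queue:
        a = queue.popleft()
        if a in free and a != root:
            complete.append((cost[a], a))    # a free node ends the chain
            continue
        for b, c in links.get(a, []):
            if cost[a] + c < cost.get(b, float("inf")):
                cost[b] = cost[a] + c        # cheapest chain from root to b
                parent[b] = a
                queue.append(b)
    return (min(complete) if complete else None), parent
```

Following the parent pointers back from the end node recovers the chain of substitutions to perform.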
An ejection tree is similar to an ejection chain but we allow multiple edges from one node a
to b1, .. , bn to represent the fact that the “forced insertion” of a into a route r causes the
ejection of b1,..,bn. For one node a and a route r, there are usually multiple subsets of such
{b1, .., bn} so we use a heuristic to find a set as small as possible. An ejection tree is then a
tree of root a such that all leaves are free nodes that can be inserted into different routes that
all have an empty intersection with the tree. There are many more ejection trees than there are
chains, and the search for ejection trees with lowest possible cost must be controlled with
topological parameters (maximum depth, maximum width, etc.). The use of ejection trees is
very similar to the use of ejection chains, i.e. we can use it to insert a node that has no
feasible route, or as a meta-heuristic by removing and re-inserting nodes.
The implementation is more complex because we cannot recursively enumerate all trees
using a marking algorithm. Thus we build a search tree using a stack of ejected nodes. When
we start, the stack contains the root node; then each step can be described as follows:
• Pick a node n from the stack.
• For all routes r into which a forced insertion is possible, create a branch of the search tree.
In this branch, perform the insertion (n into r) and produce a subset of nodes that got
“ejected”. To keep that set as small as possible, every free node is immediately reinserted. The set of ejected nodes is then placed into the stack.
• If the stack is empty we register the value of the current tree (each node in the search tree
corresponds to an ejection tree)
To make this algorithm work, it is necessary to put an upper bound on the depth of the tree
and on the branching factor. We select no more than k routes for the forced insertion, by
filtering the k best routes once all possible routes have been tried. Furthermore, we use a LDS
scheme: each time we use a route that was not the best route found (the valuation is simply
the weight of the ejected set), we count one discrepancy. The LDS approach simply means to
cut all trees that would require more than D (a fixed parameter) discrepancies.
2.3.3 Large Neighborhood Search
Large Neighborhood Search (LNS) is the name given by Shaw [Sha98] to the application of
shuffling [CL95] to routing. The principle is to forget (remove) a fragment of the current
solution and to rebuild it using a limited search algorithm. For jobshop scheduling, we have
developed a large variety of heuristics to determine the fragment that is forgotten and we use
them in rotation until a fix-point is reached. We have used a truncated search to re-build the
solution, using the strength of branch-and-bound algorithms developed for jobshop
scheduling. In his paper, Shaw introduced a heuristic randomized criterion for computing the
“forgotten” set and proposed to use LDS to re-build the solution. Since he obtained excellent
results with this approach, we have implemented the same heuristic to select the set of n (an
integer parameter) nodes that are removed from the current solution. His procedure is based
on a relatedness criteria and a pseudo-random selection of successive “neighbors”. A
parameter is used to vary the heuristic from deterministic to totally random. We have
extended this heuristic so that nodes that are already without a route are picked first (when
they exist).
The implementation of LNS is then straightforward: select a set of k nodes using Shaw’s
procedure and then remove them from the current solution. These nodes are then re-inserted
using a LDS insertion algorithm. There are, therefore, four parameters needed to describe this
algorithm: two for LDS (number of discrepancies and threshold), the randomness parameter
and the number of nodes to be reinserted. As we shall later see, the procedure for reconstructing the solution could be anything, which opens many possible combinations.
Notice that the heuristic for selecting the fragment to remove is clearly problem-dependent.
For VRP, we use Shaw’s technique, for frequency allocation, we had to come up with a fairly
complex new method that computes the number of constraint violations for all tasks that
could not be inserted.
3. An Algebra of Hybrid Algorithms
3.1 Representing the Combination of Meta-Heuristics
We represent hybrid algorithms obtained through the composition of meta-heuristics with
algebraic formulas (terms). As for any term algebra, the grammar of all possible terms is
derived from a fixed set of operators, each of them representing one of the
resolution/optimization techniques that we presented in the previous section. There are two
kinds of terms in the grammar: <Build> terms represent algorithms that create a solution
(starting from an empty set of routes) and <Optimize> terms for algorithms that improve a
solution (we replace a set of routes with another one). A typical algorithm is, therefore, the
composition of one <Build> term and many <Optimize> terms.
More precisely, we may say that a build algorithm has no parameter and returns a solution
object that represents a set of routes. Each route is defined as a linked list of nodes. An
optimize algorithm has one input parameter, which is a current solution and returns another
set of routes. In addition, a global object is used to represent the optimization context, which
tells which valuation function should be used depending on the optimization criterion.
<Build> ::= INSERT(i) | <LDS> | DO(<Build>, <Optimize>) | FORALL(<LDS>, <Optimize>)
<Optimize> ::= CHAIN(n,m) | TREE(n,m,k) | LNS(n,h,<Build>) | LOOP(n,<Optimize>) | THEN(<Optimize>, …, <Optimize>)
<LDS> ::= LDS(i,n,l)
Figure 3: A grammar for hybrid algorithms
The definition of the elementary operators is straightforward:
• INSERT(i) builds a solution by applying a greedy insertion approach and a
varying level of ILO according to the parameter i (cf. Section 2.2, 0 means no ILO
and 4 means full ILO)
• LDS(i,n,l) builds a solution by applying a limited discrepancy search on top of the
INSERT(i) greedy heuristic. The parameter n represents the maximum number of
discrepancies (number of branching points for one solution) and l represents the
threshold. A LDS term can also be used as a generator of different solutions when
it is used in a FORALL.
• FORALL(t1, t2) produces all the solutions that can be built with t1 (as opposed to only the best), where t1 is necessarily a LDS, and applies the post-optimization step t2 to each of them. The result is the best solution that was found.
• CHAIN(n,m) is a post-optimization step that selects n nodes using the heuristic represented by m, successively removes them (one at a time) and tries to reinsert them using an ejection chain. We did not come up with any significant selection heuristic, so we mostly use the one presented in Section 2.3.2.
• TREE(n,m,k) is similar but uses an ejection tree strategy for the post-optimization.
The extra-parameter represents the number of discrepancies for the LDS search (of
the ejection tree).
• LNS(n,h,t) applies Large Neighborhood Search as a post-optimization step. We select n nodes using Shaw's heuristic with the h randomness parameter and we rebuild the solution using the algorithm represented by t, which must be a <Build>. Notice that we do not restrict ourselves to a simple LDS term.
• DO(t1,t2) simply applies t1 to build a solution and t2 to post-optimize it
• THEN(t1,t2) is the composition of two optimization algorithms t1 and t2
• LOOP(n,t) repeats the optimization algorithm t n times. This is used with an optimization algorithm whose repetition will incrementally improve the value of the current solution.
Here are some examples of algebraic terms.
LDS(3,3,100) represents a LDS search using the 3rd level of ILO (every move is tried) but with the regular insertion procedure, trying 2^3 solutions (3 choice points when the difference between the two best routes is less than 100) and returning the best.
DO(INSERT(2),CHAIN(80,2)) is an algorithm obtained by combining a regular greedy heuristic with the 2nd level of ILO with a post-optimization phase of 80 removal/reinsertions through an ejection chain.
FORALL(LDS(0,4,100),LOOP(3,TREE(5,2))) is an algorithm that performs a LDS search with no ILO and 2^4 branching points and then applies 3 times an ejection tree post-optimization step for each intermediate solution.
3.2 Evaluation
To evaluate the algorithms represented by the terms, we have defined a small interpreter to
apply the algorithm represented by the operator (a <Build> or an <Optimize>) to the current
problem (and solution for an <Optimize>). The metric for complexity that we use is the
number of calls to the insertion procedure. This is a reasonable metric since the CPU time is
roughly linear in the number of insertions and has the advantage that it is machine
independent and is easier to predict based on the structure of the term. To evaluate the quality
of a term, we run it on a set of test files and average the results. The generic objective
function is defined as the sum of the total lengths plus a penalty for the excess in the number
of routes over a pre-defined objective. In the rest of the paper, we report the number of
insertions and the average value of the objective function. When it is relevant (in order to
compare with other approaches) we will translate them into CPU (s) and (number of routes,
travel).
In [CSL99] we used the algebra to test a number of hand-generated terms, to evaluate the contribution of ILO and of the other four search meta-heuristics, comparing algorithms both with and without each of the meta-heuristics. The results are summarized in
Table 1, which shows different examples of algebraic terms that represent hybrid
combinations of meta-heuristics. For instance, the first two terms (INSERT(3) and
INSERT(0)) are a simple comparison of the basic greedy insertion algorithm with and
without ILO. In a previous paper [CL99] we had demonstrated that applying 2- and 3-opt
moves (with a hill-climbing strategy) as a post-processing step was both much slower and
less effective than ILO. Here we tried a different set of post-optimization techniques, using
two different objective functions: the number of routes and the total travel time. In this
section, the result is the average for the 12 R1* Solomon benchmarks.
We measure the run-time of each algorithm by counting the number of calls to the insertion
sub-procedure. The ratio with CPU time is not really constant but this measure is independent
of the machine and makes for easier comparisons. For instance, on a PentiumIII-500Mhz,
1000 insertions (first term) translate into 0.08 s of CPU time and 100K (4th term) translates
into 8s.
In order to experiment with different optimization goals, we have used the following
objective function:
f = (1 / |test_cases|) × Σ_{t ∈ test_cases} [ E × 1000000 × min(1, max(0, E − E_opt,t)) + D + Σ_{i ≤ N} d_i ]
Notice that we use Eopt,t to represent the optimal number of routes that is known for test case t.
If we use Eopt,t = 25, we simply optimize the total travel time for the solution. The constant
term (sum of the durations) comes from using this formula to evaluate a partial solution. In
the following table, we will use this function to compare different algorithms, but also report
the average number of routes when appropriate. Columns 2 and 3 correspond to minimizing
the number of trucks, whereas columns 4 and 5 correspond to minimizing the total travel time.
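This objective can be transcribed directly; the dictionary keys (E for the number of routes, E_opt for the known optimum, D for the total travel, d for the durations) are our own hypothetical encoding of the per-test quantities, not the paper's:

```python
def objective(test_cases):
    """Average penalized cost over a data set (a sketch; the dict keys
    are our own naming convention)."""
    total = 0.0
    for t in test_cases:
        # one million per test case whose route count exceeds the known
        # optimum, capped at a single unit by the min/max clamp
        excess = min(1, max(0, t["E"] - t["E_opt"]))
        total += t["E"] * 1_000_000 * excess + t["D"] + sum(t["d"])
    return total / len(test_cases)
```

With E_opt set to 25 (a value never exceeded), the penalty term vanishes and only the travel time, plus the constant duration term, is optimized.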
| Term | Value (# of trucks) | # of insertions | Value (travel) | # of insertions |
|------|---------------------|-----------------|----------------|-----------------|
| INSERT(0) | 14692316 | 1000 | 25293 | 1000 |
| INSERT(3) | 13939595 | 1151 | 22703 | 1167 |
| LDS(3,8,50) | 49789 (12.83) | 162K | 22314 | 182K |
| DO(LDS(4,3,100),CHAIN(80,2)) | 52652 (12.91) | 100K | 22200 | 100K |
| DO(LDS(3,3,100), LOOP(30,LNS(10,4,LDS(4,4,1000)))) | 41269 (12.66) | 99K | 22182 | 104K |
| FORALL(LDS(3,2,100),CHAIN(20,2)) | 54466 | 101K | 22190 | 100K |
| DO(INSERT(3),CHAIN(90,2)) | 10855525 | 97K | 22219 | 98K |
| DO(LDS(3,2,100),LOOP(2,TREE(40,2))) | 43564 | 90K | 22393 | 63K |
| DO(LDS(3,2,100),LOOP(6,CHAIN(25,2))) | 56010 | 101K | 22154 | 108K |
| SuccLNS = DO(LDS(3,0,100), THEN(LOOP(50,LNS(4,4,LDS(3,3,1000))), LOOP(40,LNS(6,4,LDS(3,3,1000))), LOOP(30,LNS(8,4,LDS(3,3,1000))), LOOP(20,LNS(10,4,LDS(3,3,1000))), LOOP(10,LNS(12,4,LDS(3,3,1000))))) | 40181 | 26K | 22066 | 50K |

Table 1. A few hand-generated terms
In these preliminary experiments, LNS dominated as the technique of choice (coupled with LDS), and ejection tree optimization works well on larger problems. Additional experiments in
[CSL99] showed that the combination of these techniques with ILO worked better than LNS
alone. We can also notice here that the different heuristics are more-or-less suited for
different objective functions, which will become more obvious in the next section. We use
the name “succLNS” to represent the last (large) term, which is the best from this table and
was generated as a (simplified) representation of the strategy proposed in [Sha98], with the
additional benefit of ILO.
4. A Learning Algorithm for discovering New Terms
4.1 Tools for learning
The primary tools for learning are the invention of new terms, the mutation of existing terms, and the crossing of existing terms with each other. Throughout learning, a pool of the best n terms is maintained (the pool size n is a fixed parameter of the experiment), to which new terms created from existing ones are added. The invention of new terms is defined by structural
induction from the grammar definition. The choice among the different subclasses (e.g. what
to pick when we need an <Optimize>) and the values for their parameters are made using a
random number generator and a pre-determined distribution. The result is that we can create
terms with an arbitrary complexity (there are no boundaries on the level of recursion, but the
invention algorithm terminates with probability 1). One of the key experimental parameters used to guide invention is a bound on the complexity of the term (i.e., the complexity goal). The complexity of a term can be estimated from its structure and only those terms that satisfy the complexity goal will be allowed to participate in the pool of terms.
Complexity is an estimate of the number of insertions that will be made when running one of
the hybrid algorithms. For some terms, this number can be computed exactly, for others it is
difficult to evaluate precisely. However, since we use complexity as a guide when inventing a
new term, a crude estimate is usually enough. Here is the definition that we used.
• complexity(INSERT(i)) = 1000.
• complexity(LDS(i,n,l)) = (if (i = 4) 6000 else 1000) × 2^n.
• complexity(FORALL(t1,t2)) = complexity(t1) + complexity(t2).
• complexity(CHAIN(n,m)) = 1500 × n.
• complexity(TREE(n,m,k)) = 600 × n × 2^k.
• complexity(LNS(n,h,t)) = (complexity(t) × n) / 100.
• complexity(DO(t1,t2)) = complexity(t1) + complexity(t2).
• complexity(THEN(t1,t2)) = complexity(t1) + complexity(t2).
• complexity(LOOP(n,t)) = n × complexity(t).
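By way of illustration, these rules can be implemented by structural induction over terms encoded as nested tuples. The tuple encoding and the reading of the LNS rule as a n/100 scaling of the rebuild cost are our assumptions:

```python
def complexity(t):
    """Estimated number of insertions for a term encoded as a nested
    tuple, e.g. ("DO", ("INSERT", 3), ("CHAIN", 90, 2))."""
    op, *args = t
    if op == "INSERT":
        return 1000
    if op == "LDS":
        i, n, l = args
        return (6000 if i == 4 else 1000) * 2 ** n
    if op == "CHAIN":
        return 1500 * args[0]
    if op == "TREE":
        n, m, k = args
        return 600 * n * 2 ** k
    if op == "LNS":
        n, h, build = args
        return complexity(build) * n // 100   # rebuild cost scaled by n/100
    if op == "LOOP":
        return args[0] * complexity(args[1])
    if op in ("DO", "THEN", "FORALL"):
        return sum(complexity(x) for x in args)
    raise ValueError(f"unknown operator {op}")

# DO(INSERT(3), CHAIN(90,2)): 1000 + 1500 * 90 = 136000 insertions
print(complexity(("DO", ("INSERT", 3), ("CHAIN", 90, 2))))  # → 136000
```

Such a crude estimate is enough to decide whether a freshly invented term should be admitted to the pool.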
Mutation is also defined by structural induction according to two parameters. The first
parameter tells if we want a shallow modification, an average modification or a deep
modification. In the first case, the structure of the term does not change and the mutation only
changes the leaf constants that are involved in the term (integers). Moreover, only small
changes for these parameters are supported. In the second case, the type (class) does not
change, but large changes in the parameters are allowed and, with a given probability, some sub-terms can be replaced by terms of other classes. In the last case, a complete
substitution with a different term is allowed (with a given probability). The second parameter
gives the actual performance of the term, as measured in the experiment. The mutation
algorithm tries to adjust the term in such a way that the (real) complexity of the new term is
as close as possible to the global objective. This compensates the imprecision of the
complexity estimate quite effectively. The definition of the mutation operator is a key
component of this approach. If mutation is too timid, the algorithm quickly gets stuck in local optima. The improvements shown in this paper compared to our earlier work [CSL99] are
largely due to a better tuning of the mutation operator. For instance, although we guide the
mutation according to the observed complexity (trying to raise or lower it), we randomly
decide (10% of the time) to ignore this guiding indication.
The complete description of the mutation operator is too long to be given in this paper, but the following is the CLAIRE [CL96] method that we use to mutate an AND term. The two
parameters are respectively a Boolean that tells if the current term x is larger or smaller than
the complexity goal and an integer (i) which is the previously mentioned mutation level. The
AND object to which mutation is applied (x = AND(t1,t2)) has two slots, x.optim = t1 and
x.post = t2. We can notice that the default strategy is simply to recursively apply mutation to
t1 and t2, but we randomly select more aggressive strategies either to simplify the term or to
get a more complex one.
mutate(x:AND,s:boolean,i:integer) : Term
-> let y := random(100 / i), y2 := random(100) in
(if ((s & y2 > 20) | y > 90)
(if (y < 10)
// try to get a more complex term
THEN(x,invent(Optimizer))
else if (y < 20)THEN(optim(x),LOOP(randomIn(3,10), post(x)))
else if (y < 30)THEN(LOOP(randomIn(3,10),optim(x)), post(x))
else
// recursive application of mutation
THEN(mutate(optim(x),s,i), mutate(post(x),s,i)))
else
// try to get a simpler term
(if ((i = 3 & y < 50) | (i > 1 & y < 10)) optim(x)
else if ((i = 3) | (i > 1 & y < 20)) post(x)
else THEN(mutate(optim(x),s,i), mutate(post(x),s,i))))
Finally, crossover is also defined structurally and is similar to mutation. A crossover method
is defined for crossing integers, for crossing each of the terms, and recursively crossing their
components. A term can be crossed directly with another term or one of its sub-components.
The idea of crossover is to find an average, a middle point between two values. For an integer,
this is straightforward. For two terms from different classes, we pick a “middle” class based
on the hierarchical representation of the algebra tree. When this makes sense, we use the subterms that are available to fill the newly generated class. For two terms from the same class,
we apply the crossover operator recursively.
The influence of genetic algorithms [Gol89][Ree93] in our learning loop is obvious, since we
use a pool of terms to which we apply crossover and Darwinian selection. However,
crossover is mixed with mutation, which is the equivalent of parallel randomized hill-climbing. Hill-climbing reflects the fact that we select the best terms at each iteration (cf. next section). Randomized reflects the fact that the neighborhood of a term is very complex, so we simply pick one neighbor at random. Parallel comes from the use of a pool to explore different paths simultaneously. The use of a mutation level index shows that we use three
neighborhood structures at the same time. We shall see in Section 5 that using a pure genetic
algorithm framework turned out to be more difficult to tune, which is why we are also
applying a hybrid approach at the learning level.
4.2 The Learning Loop
The learning process is performed as a series of iterations and works with a pool of terms (i.e. algorithms) of size M. During each iteration, terms are evaluated and the K best ones are
selected and mutated and crossed in different ways. Experimental parameters govern how
many terms will be generated via mutation, crossover, and invention along with how many of
the very best ones from the previous generation will be kept (K). We shall see in Section 5.2
the relative influence of these different techniques. After completing all experiments, the best
terms are rated by running tests using different data sets and averaging over all runs. This
second step of evaluation, described as “more thorough” in Figure 3, is explained in the
following sub-section. The goal is to identify the best term from the pool in a more robust
(stable) manner.
The best term is always kept in the pool. For each of the K “excellent” terms, we apply
different levels of mutation (1,2 and 3), we then cross them to produce (K * (K – 1) / 2) new
terms and finally we inject a few invented terms (M – 1 – K * 3 – (K * (K – 1) / 2)). The new
pool replaces the old one and this process (the “learning loop”) is iterated I times.
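One iteration of the loop can be sketched as follows; evaluate, mutate, cross and invent stand for the problem-specific operators described above and are assumed given rather than taken from the authors' code:

```python
import itertools

def iterate(pool, evaluate, mutate, cross, invent, K):
    """One pass of the learning loop: keep the best term, mutate each of
    the K 'excellent' terms at the 3 levels, cross all pairs of excellent
    terms, and fill the rest of the pool with inventions."""
    ranked = sorted(pool, key=evaluate)           # lower value = better
    best, excellent = ranked[0], ranked[:K]
    new_pool = [best]                             # the best term is kept
    for t in excellent:
        new_pool += [mutate(t, level) for level in (1, 2, 3)]
    for t1, t2 in itertools.combinations(excellent, 2):
        new_pool.append(cross(t1, t2))            # K*(K-1)/2 crossovers
    while len(new_pool) < len(pool):              # M-1-3K-K(K-1)/2 inventions
        new_pool.append(invent())
    return new_pool
```

With M = 16 and K = 3 this yields 1 kept term, 9 mutations, 3 crossovers and 3 inventions, matching the breakdown of the default setting described later.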
We have tried to increase the efficiency of the search by taking the distance to the best term into account when choosing the other “excellent” terms, but this was not successful. The use of this distance to increase the diversity in the pool subset resulted in a lower ability to finely explore the local neighborhood. The use of a tabu list is another direction that we are currently exploring, but the difficulty is to define the semantics of this list in a world of potentially infinite terms.
The number of iterations I is one of a number of parameters that need to be set for a series of experiments. Additional parameters include the pool size M and a set of parameters
governing the break down of the pool (i.e., how many terms generated via invention,
mutation, and cross-over kept from the previous generation). In Section 5, we describe a set
of experiments aimed at determining “good” values for each of these parameters.
To avoid getting stuck with a local minimum, we do not run the learning loop for a very large
number of iterations. When we started studying the stability of the results and the influence
of the number of iterations, we noticed that the improvements obtained after 20 iterations are not really significant, while the possibility of getting stuck with a local minimum of poor
quality is still high. Thus we decided to divide the set of N iterations of the previous learning loop as follows (we pick N = 50 to illustrate our point).
- We compute the number R of rounds with R = N / 12 (e.g., 50 / 12 = 4).
- For each round, we run the learning loop for 10 iterations, starting from a pool of randomly generated terms, and we keep the best term of the round. (Note: if N is less than 24, only a single round is executed.)
- We create a new pool with these R best terms, completed with random invention, and apply the learning loop for the remaining number of iterations (N − R × 10).
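As a sketch, this round structure reads as follows, with run_learning_loop(pool, iterations) returning the best term of a pool and invent() producing a random term (both assumed given):

```python
def learn(N, M, run_learning_loop, invent):
    """Split N iterations into R = N // 12 rounds of 10 iterations from
    fresh random pools, then a final round of N - 10R iterations seeded
    with the round winners (a sketch of the scheme described above)."""
    if N < 24:                                    # single round
        return run_learning_loop([invent() for _ in range(M)], N)
    R = N // 12
    winners = [run_learning_loop([invent() for _ in range(M)], 10)
               for _ in range(R)]
    final_pool = winners + [invent() for _ in range(M - R)]
    return run_learning_loop(final_pool, N - 10 * R)
```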
The following figure summarizes the structure of the learning algorithm.
[Figure: structure of the learning algorithm. An input pool of M terms is created by randomized invention. Each of R rounds runs the learning loop for I = 10 iterations: evaluate the M terms and pick the K best, evaluate those K terms more thoroughly to keep the best, then refill the pool by mutation, crossover and invention. The R round winners, completed with fresh inventions, seed a last round of I = N − 10R iterations, whose best term is the final result.]

Figure 3. The Learning Algorithm
This figure makes it clear that the algorithm is more complex than the preliminary version
that was presented in [CSL99]. The main reason for this complexity is the increase in
robustness. We shall see in Section 6.1 that this more sophisticated algorithm produces better terms, but the most important improvement is the stability and statistical significance
of the results that will be shown in Section 5. In particular, we discuss the importance of the
training set in Section 5.4. We have used our approach in two types of situations: a machine
learning paradigm where the algorithm is trained on a data set and tested on a different data
set, and also a data mining paradigm where the algorithm is trained on the same set onto
which it is applied. Section 5 will also deal with the tuning of the learning loop parameters (N,
R, K, …).
4.3 Randomization and Further Tuning
Randomized algorithms such as LNS have a built-in instability, in the sense that different
runs will produce different results for the same algorithm. Thus it is difficult to compare
terms using one run only, even though the results are already averaged using a data set. A
better approach is to perform multiple runs for each term, but this is expensive and slows the
learning process quite a bit. The compromise that we make is that we first sort the whole pool
of terms using a single run to select the subset of “excellent terms”, and then run multiple
experiments on these selected terms to sort them more precisely. This is important to increase
the probability of actually selecting the best term of the pool.
Another issue is the minimization of the standard deviation associated with a randomized
algorithm. Once an algorithm (i.e., a term) incorporates a randomized technique, its
performance is not simply measured by the value of one run or the average of multiple runs,
but also by the standard deviation of the values during these multiple runs. In order to ensure
that we minimize this deviation as well as the average value, we enrich the semantic of the
evaluation of the sub-pool of “excellent” terms. When we perform multiple runs on these
selected terms, the final value associated to each term is the average of the worst value and
the average value. Thus, we discriminate against terms that have a high deviation.
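The ranking value used for an "excellent" term can thus be sketched as follows (assuming minimization):

```python
def robust_value(run_values):
    """Value used to rank an 'excellent' term after multiple runs: the
    average of the worst value and the mean value, so that terms with a
    high deviation are penalized."""
    worst = max(run_values)
    avg = sum(run_values) / len(run_values)
    return (worst + avg) / 2

# A term with the same mean but a worse worst case ranks lower:
print(robust_value([90, 100, 110]))   # → 105.0
print(robust_value([60, 100, 140]))   # → 120.0
```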
The balancing between minimizing the average and the standard deviation is problem-dependent and the definition of a unique quality indicator only makes sense in a given industrial setting. For some problems such as on-line optimization, standard deviation is
important since a poor solution translates into customer non-satisfaction. For some batch
problems, the goal is purely to save money and minimizing the average value is quite
sufficient. Thus, we report three numbers for each algorithm:
- The average value E(v), which represents the average of the value of the term produced by the algorithm over multiple learning experiments. If we define v_i as the valuation of the term produced at the end of the i-th learning loop, according to the definition given in Section 3.2, we have:

  E(v) = (1/L) × Σ_{i ≤ L} v_i

  assuming that we ran the learning process L times and obtained the values v_1, …, v_L.
- The standard deviation σ(v), which represents the standard deviation of the previously defined value v over the multiple learning runs:

  σ(v)² = (1/L) × Σ_{i ≤ L} v_i² − E(v)²
- Last, we also measure the average standard deviation E(σ), since we recall that the terms produced by our learning algorithm represent randomized algorithms and that their value v_i is simply an average. More precisely, to produce a value for a term, we make 10 runs of the algorithm on each test file of the data set and we record both the average value v_i and the standard deviation σ_i. We may then define E(σ) to represent the average of this standard deviation, when multiple learning experiments are made:

  E(σ) = (1/L) × Σ_{i ≤ L} σ_i
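The three indicators can be computed as follows; the nested-list encoding of the runs is our own convention:

```python
from math import sqrt

def learning_stats(runs):
    """runs[i] holds the per-file values recorded for the term produced
    by the i-th of L learning experiments; returns (E(v), σ(v), E(σ))."""
    L = len(runs)
    v = [sum(r) / len(r) for r in runs]            # average value per run
    s = [sqrt(sum(x * x for x in r) / len(r) - (sum(r) / len(r)) ** 2)
         for r in runs]                            # std dev within a run
    Ev = sum(v) / L
    sigma_v = sqrt(sum(x * x for x in v) / L - Ev ** 2)
    E_sigma = sum(s) / L
    return Ev, sigma_v, E_sigma
```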
Obviously, reporting a standard deviation falls short of characterizing a distribution.
However, our experience is that standard deviation is a reasonable first-order indicator for all
practical purposes. For instance, the difficulties that we experienced with the early approach
of [CSL99], which we report in the next section, did translate into a high standard deviation, and the reduction of E(σ) was indeed a symptom of the removal of the problem.
The three techniques that we use for creating new terms (invention, mutation and crossover)
have the potential to develop terms that are overly complex. The complexity bound that
guides each of these tools can prevent the creation of terms that are too expensive to evaluate.
However, it is still possible to generate terms that meet the complexity bound but are overly long and complex. Large terms are undesirable for two reasons: they are difficult to read and they tend to get out of hand quickly (large terms crossed with large terms generally produce even larger terms). For this reason, we have introduced a “diet” function, which takes a bound on the physical size (i.e., number of sub-terms and components) of the term. This function is quite simple: if the term is too large (too many sub-terms), the sub-terms are truncated at an arbitrary distance from the root (of the tree that describes the term).
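A minimal sketch of such a truncation, on a nested-tuple encoding of terms; the depth cutoff and the use of INSERT(0) as filler are our simplifications (a faithful version would substitute a minimal term of the matching <Build> or <Optimize> class):

```python
def diet(term, depth):
    """Truncate every sub-term found below the given depth, replacing it
    with the smallest term, INSERT(0) (an illustrative choice only)."""
    if depth == 0:
        return ("INSERT", 0)
    return term[:1] + tuple(
        diet(x, depth - 1) if isinstance(x, tuple) else x
        for x in term[1:])
```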
5. Learning Characteristics
5.1 Convergence Rate and Stability
The algorithms defined by the terms have a number of randomness issues. As noted above,
algorithms such as LNS are randomized algorithms and hence, testing a term over a single
data set could yield different results. Also, the process of learning itself is randomized. To
deal with these issues, we must be careful to average each of our experiments over multiple
runs, recording the standard deviation over all runs, as explained in the previous section.
The learning process itself runs over a number of iterations. Showing that there is in fact a “convergence” (i.e., that the stability and quality of the results improve as the number of iterations grows) and determining the number of iterations that achieves the best tradeoff between processing time and result quality are two key issues. To determine this we have run a series of experiments over the same problem, running from 10 iterations to 50 iterations
of the learning loop. For each experiment we report the three measures: E(v), σ(v), and E(σ),
as explained earlier. We ran these experiments twice, using two different complexity goals of
respectively 50K (columns 2 to 4) and 200K insertions (columns 5 to 7). The goal here is to
minimize the number of trucks. The results are described in Table 2:
| Goal | E(v) 50K | σ(v) 50K | E(σ) 50K | E(v) 200K | σ(v) 200K | E(σ) 200K |
|------|----------|----------|----------|-----------|-----------|-----------|
| 10 iterations | 39706 | 5842 | 1092 | 34600 | 4679 | 1193 |
| 20 iterations | 34753 | 5433 | 775 | 31303 | 4040 | 740 |
| 30 iterations | 35020 | 4663 | 1426 | 31207 | 2090 | 755 |
| 40 iterations | 34347 | 4623 | 1142 | 30838 | 3698 | 866 |
| 50 iterations | 32173 | 2069 | 1217 | 29869 | 1883 | 862 |

Table 2. Influence of the number of iterations
A number of observations can be made from these results. We see that the quality of the
terms is good, since they are clearly better than those that were hand-generated. This point
will be further developed in the next section. We also see that the standard deviation of the
generated algorithms is fairly small, but tends to increase as the number of iterations rises. This is due to the fact that better terms make heavier use of the randomized LNS. Last,
we see that the standard deviation of the average value is quite high, which means that our
learning algorithm is not very robust, even though it is doing a good job at finding terms.
It is interesting to notice that this aspect was even worse with the simpler version presented in
[CSL99], where we did not partition the set of iterations into rounds. In that case, while the
average value goes down with a higher number of iterations, the worst-case value is pretty
much constant, which translates into a standard deviation that increases with the number of
iterations.
In practice, the remedy to this instability is to run the algorithm with a much larger number of
iterations, or to take the best result of many separate runs, which is what we shall do in the
last section where we attempt to invent higher-quality terms.
5.2 Relative contribution of Mutation, crossover, and invention
The next set of experiments compares four different settings of our learning loop, each emphasizing one of the term creation techniques more heavily. All settings use the same pool size; the difference comes from the way the pool is recombined during each iteration:
- al2 is our default setting, the pool size is 16 terms, out of which 3 “excellent terms” are
selected, producing 9 mutated terms and 3 products of crossover, completed by 3 newly
invented terms.
- am2 is a setting that uses a sub-pool of 4 “excellent” terms, yielding 12 mutated terms, but
only one crossover (the two best terms) and 2 inventions.
- ag2 is a setting that uses the crossover more extensively, with only one mutation (of the best
term), but 10 crossovers produced from the 5 “excellent” terms.
- ai2 is a setting that uses 6 mutated terms and 1 crossover, leaving room for 8 invented terms.
The next table reports our results using the 50K goals for term complexity and the same set of
data tests. We report results when the number of iterations is set to 20 (one large round) and 40 (three rounds), respectively.
| # of iterations | E(v) 20 | σ(v) 20 | E(σ) 20 | E(v) 40 | σ(v) 40 | E(σ) 40 |
|-----------------|---------|---------|---------|---------|---------|---------|
| al2 | 34753 | 5433 | 775 | 31303 | 4040 | 740 |
| am2 | 34996 | 3778 | 1091 | 33832 | 2363 | 355 |
| ag2 | 40488 | 5627 | 1060 | 36974 | 4557 | 1281 |
| ai2 | 37209 | 4698 | 566 | 32341 | 2581 | 374 |

Table 3. Comparing Mutation, Crossover and Invention
We can see that the best approach is to use a combination of techniques (the al* family), which is the setting we selected for the rest of the experiments.
These results also suggest that mutation is the strongest technique, which means that randomized hill-climbing is better suited for our learning problem than a regular genetic algorithm approach. This result is probably due to the inadequacy of our crossover operator. It should be noticed that when we introduced crossover, it brought a significant improvement before we refined our mutation operator as explained in Section 4.1 (i.e., at that time, al* was much better than am*). We also notice that invention works pretty well, given time (ai2 is a blend of mutation and invention). The conclusion is that the randomization of the
“evolution” technique is very important, due to the size of the search space. A logical step,
which will be explored in the future, is to use a more randomized crossover.
5.3 Number of Iterations versus Pool Size
There are two simple ways to improve the quality of the learning algorithm: we can increase
the number of iterations, or we can increase the size of the pool. The next set of experiments
compares 4 settings, which are roughly equivalent in terms of complexity but use different pool-size vs. number-of-iterations compromises:
- al3, which is our regular setting (cf. al2) with 30 iterations (whereas al2 uses 20 iterations).
- ap0, with a smaller pool size (10), where only the two best terms are picked to be mutated and crossed. The number of iterations is raised to 50.
- ap1, with a pool size of 35, with 6 best terms chosen for mutation and 5 for crossovers. The number of iterations is reduced to 15.
- ap2, with a pool size of 24, with 5 best terms chosen for mutation, and 4 for crossovers. The number of iterations is set to 20.
These experiments are made with two complexity objectives, respectively 50 000 and 200
000 insertions.
| Goal | E(v) 50K | σ(v) 50K | E(σ) 50K | E(v) 200K | σ(v) 200K | E(σ) 200K |
|------|----------|----------|----------|-----------|-----------|-----------|
| al3 | 35020 | 4663 | 1426 | 31207 | 2090 | 755 |
| ap0 | 34745 | 3545 | 829 | 31735 | 1912 | 189 |
| ap1 | 35101 | 4920 | 751 | 30205 | 1905 | 556 |
| ap2 | 37122 | 4910 | 979 | 31625 | 2401 | 468 |

Table 4. Pool Size versus Number of Iterations
These results show that a compromise must indeed be found, since neither using a very large pool nor using a small one seems a good idea. Our experience is that a size around 20 seems the best trade-off, but this is based on our global bound on the number of iterations,
which is itself limited by the total CPU time. Many of these experiments represent more than
a full day of CPU. When faster machines are available, one should re-examine the issue with
a number of iterations in the hundreds.
5.4 Importance of training set
Finally, we need to study the importance of the training set. It is a well-known fact for
researchers who develop hybrid algorithms for combinatorial optimization that a large set of
benchmarks is necessary to judge the value of a new idea. For instance, in the world of job-shop scheduling, the set of ORB benchmarks is interesting because techniques that
significantly improve the resolution of one problem often degrade the resolution of others.
The same lesson applies here: if the data set is too small, the learning algorithm will discover
“tricks” of little value since they do not apply generally. To measure the importance of this
phenomenon, we have run the following experiments, using the same corpus of the 12 R1*
Solomon benchmarks:
• rc1: train on 12 data samples, and measure on the same 12 data samples
• rc2: train on the first 6, but measure on the whole set
• rc3: train on the 12 data samples, but measure only on the last 6 (reference point)
• rc4: train on the first 6, measure on the last 6
• rc5: train on last 6 and measure on the last 6.
We use our “al2” setting for the learning algorithm, with a complexity goal of 50K insertions.
|     | E(v)  | σ(v) | E(σ) |
|-----|-------|------|------|
| rc1 | 37054 | 4154 | 1136 |
| rc2 | 34753 | 5433 | 983 |
| rc3 | 35543 | 3362 | 1636 |
| rc4 | 34753 | 5433 | 1418 |
| rc5 | 29933 | 4053 | 2015 |

Table 5a. Impact of the training set (I)
These results can be appreciated in different manners. On the one hand, it is clear that a large
training set is better than a smaller one, but this is more a robustness issue, as shown by the
degradation of the standard deviation with a smaller training set. Surprisingly, the average
value is actually better with a smaller subset. We also made experiments with training sets of
size 2 or 3, and the results were really not good, from which we concluded that there is a
minimal size around 5. On the other hand, the sensitivity to the training set may be a
desirable feature, and the degradation from learning with 6 vs. 12 data samples is not very
large. This is even true in the worst case (rc4) where the term is trained from different
samples than those that are used to test the result. The only result that is significantly
different is rc5, which corresponds to the case where we allow the algorithm to discover
“tricks” that will only work for a few problems. The gain is significant, but the algorithm is
less robust. The other four results are quite similar.
The following table shows the results obtained with training and testing respectively on the
R1*, C1* and RC1* data sets from the Solomon benchmarks. We added a fourth line with a
term that represents our “hand-tuned” hybrid algorithm, which we use as a reference point to
judge the quality of the invented terms.
evaluation →        R1*      C1*      RC1*
train on R1*        41555    98267    65369
train on C1*        61238    98266    79128
train on RC1*       42504    98266    66167
hand-tuned term     50442    98297    78611

Table 5b. Impact of the training set (II)
If we put the difference into perspective using the reference algorithm, we see that the
learning algorithm is not overly sensitive to the training set. With the exception of the C1*
data set, which contains very specific problems with clusters, the algorithms found by using
R1* and RC1* respectively do well on the other data sets. In the experiments reported
previously in this section, we have used the same set for training and evaluating. For
industrial problems, for which we have hundreds of samples, we can afford to use separate
sets for training and evaluation, but the results confirm that the differences between a proper
machine learning setting (different sets) and a “data mining” setting (same set) are not
significant.
Our conclusion is that we find the learning algorithm to be reasonably robust with respect to
the training set, and the ability to exploit specific features of the problem is precisely what
makes this approach interesting for an industrial application. For instance, a routing
algorithm for garbage pick-up may work on problems that have a specific type of graph, and
for which some meta-heuristic combinations are well suited, while they are less relevant for
the general case. The goal of this work is precisely to take this specificity into account.
6. Learning Experiments
6.1 Hand-crafted vs. discovered terms
The first experiment that we can make is to check whether the terms that are produced by the
learning algorithm are indeed better than we could produce directly using the algebra as a
component box. Here we try to build a term with a complexity of approximately 50Ki (which
translates into a run-time of 2s on a PIII-500MHz), which minimizes the total travel time.
The data set is the whole R1* set. In the following table, we compare:
- four terms that were hand-generated in an honest effort to produce a good contender.
- the term succLNS that we have shown in Figure 1 to be the best among our introduction
examples, which tries to emulate the LNS strategy of [Sha98].
- the term (Ti1) that we presented in [CSL99], that is the output of our first version of the
learning algorithm.
- the term (Ti2) that is obtained with the new learning algorithm with the al2 settings and
40 iterations.
Term                                                Objective                   Run-time (i)
LDS(3,5,0)                                          22669                       1.3Ki
DO(LDS(3,3,100),
   LOOP(8,LNS(10,4,LDS(4,4,1000))))                 22080                       99Ki
DO(LDS(3,2,100),TREE(20,2))                         22454                       23Ki
FORALL(LDS(3,2,100),CHAIN(8,2))                     22310                       59Ki
SuccLNS (cf. Table 1)                               21934                       59Ki
Ti1: FORALL(LDS+(0,2,2936,CHAIN(1,1)),
   LOOP(48,LOOP(4,
      LNS(3,16,LDS(3,1,287)))))                     21951 (invented [CSL99])    57Ki
Ti2: DO(INSERT(0),
   THEN(LOOP(22,THEN(
      LOOP(26,LNS(6,22,LDS(3,1,196))),
      LNS(5,23,LDS(2,1,207)))),
   LNS(4,26,LDS(1,1,209))))                         21880 (invented!)           57Ki

Table 6. Inventing new terms (travel minimization)
These results are quite good, since not only is the newly invented term clearly better than the
hand-generated examples, but it is even better than the complex SuccLNS term that is the
implementation with our algebra of a strategy found in [Sha98], which is itself the result of
careful tuning.
The term shown in the table is the best term found in 5 runs. The average value for term Ti2
was 21980, which is still much better than the hand-generated terms. It is also interesting to
note that the results of the algorithms have been improved significantly by the tuning of the
mutation operator and the introduction of a new control strategy for the Learning Loop, as
explained in Section 3.
6.2 Changing the Objective function
We now report a different set of experiments where we have changed the objective function.
The goal is now to minimize the number of trucks (and then to minimize travel for a given
number of trucks). We compare the same set of terms, together with two new terms that are
invented using this different objective function:
- the term (Ti3) that we presented in [CSL99], that is the output of our first version of the
learning algorithm.
- the term (Ti4) that is obtained with the new learning algorithm with the al2 settings and
  40 iterations.
Term                                                Objective                   Run-time (i)
LDS(3,5,0)                                          68872                       1.3Ki
DO(LDS(3,3,100),
   LOOP(8,LNS(10,4,LDS(4,4,1000))))                 47000                       99Ki
DO(LDS(3,2,100),TREE(20,2))                         47098                       23Ki
FORALL(LDS(3,2,100),CHAIN(8,2))                     54837                       59Ki
SuccLNS (cf. Table 1)                               40603                       59Ki
Ti2 (cf. Table 6)                                   525000                      59Ki
Ti3: DO(LDS(2,5,145),
   THEN(CHAIN(5,2),TREE(9,2,2)))                    36531 (invented [CSL99])    57Ki
Ti4: DO(LDS(2,4,178),
   THEN(LOOP(3,LOOP(81,LNS(7,2,LDS(2,1,534)))),
      LNS(3,7,LDS(2,1,534))),
   LOOP(3,LOOP(83,LNS(4,3,LDS(2,1,534)))),
   LNS(3,25,LDS(0,2,983)))))                        28006 (invented!)           57Ki

Table 7. Inventing new terms (truck minimization)
These results are even more interesting since the invented term is much better than the other
one, showing a very significant improvement since [CSL99]. The fact that the strategy used
to build SuccLNS was proposed to minimize the number of trucks makes the difference
between the terms quite surprising. The average number of routes corresponding to Ti4 is
12.33, which is excellent, as we shall discuss in the next section. The term Ti4 is the best
found in a set of 5 experiments and is not totally representative, since the average value
found was 32600, which is still much better than the hand-generated terms.
It is interesting to notice that the term Ti3 is not good either as far as travel optimization is
concerned (the objective function strongly accentuates any decrease in the quality of the
solution, because the value of an ideal solution is subtracted from it). The
specialization of the term according to the objective function is thus proven to be an
important feature. This is even more so in an industrial application context, where the
objective function also includes the satisfaction of soft constraints such as operator
preferences. These constraints are business-dependent and they change rapidly over the years.
6.3 Changing the complexity goal
Here we report the results found with different complexity goals, ranging from 50K to 1200K
insertions. These experiments were made on a PIII-800Mhz machine.
Complexity goal          50K      200K     600K     1200K
Value                    28006    29732    26000    24459
Equivalent # of trucks   12.33    12.44    12.26    12.25
Complexity               56K      211K     460K     1200K
Equivalent run-time      2s       10s      20s      40s

Table 8. Inventing new terms
These results are impressive, especially with a limited amount of CPU time. In a previous
paper, using the best combination of LNS and ILO and restricting ourselves to 5 minutes of
CPU time, we obtained an average number of routes of 12.31 on the R1* set of Solomon
benchmarks, whereas [TGB95] and [Sha98] respectively got 12.64 and 12.45, using 10
minutes for [Sha98] or 30 minutes for [TGB95] (the adjustment in CPU time is due to the
difference in hardware). Here we obtain a much better solution (12.25 routes on R1*) in less
than 1 minute of CPU time. This is even better than what was previously obtained when the
available CPU time was augmented by a factor of 3: 12.39 [Sha98] and 12.33 [TGB95]. This
clearly shows that the algorithms found by our learning methods produce state-of-the-art results.
The improvement with respect to [CSL99] is also obtained on the RC1* set, since the term
obtained with complexity 600K has an average number of routes of 11.875 in 30s of CPU
time, whereas our best strategy in [CSL99] obtained 12.0 routes in 5 minutes, and whereas
[TGB95] and [Sha98] respectively got 12.08 and 12.05.
6.4 Future Directions
This paper has shown that this learning method produces precise tuning for hybrid algorithms
applied to medium-sized problems and short to medium run-times (seconds or minutes).
Depending on the parameters and the size of the data set, the ratio between the learning loop
and the final algorithm run-times varies from 1000 to 10000, with a practical value of at least
10000 to get stable and meaningful results. For larger problems, for which a longer run-time
is expected, this is a limiting factor. For instance, we use a hybrid algorithm for the ROADEF
challenge [Roa01] that can indeed be described with our algebra. However, the target run-time proposed in the challenge is one hour, which translates into more than a year for the
learning process. Needless to say, we could not apply this approach and used manual tuning
that produced very competitive results [Roa01].
Thus, we either need to find a way to train on smaller data sets and running times, or to shorten
the training cycle. The first direction is difficult; we found that going from medium-sized
VRPs to larger ones seemed to preserve the relative ordering of hybrid algorithms, but this is
not true of frequency assignment problems given in the ROADEF challenge. On the other
hand, using techniques found for smaller problems is still a good idea, even though
parameters need to be re-tuned. Thus we plan to focus on “warm start” loops, trying to cut the
number of cycles by feeding the loop with terms that are better than random inventions.
Our second direction for future work is the extension of the algebra. There are two operators
that are natural to add in our framework. First, we may add a MIN(n,t) operator that applies n
times a build-or-optimize term t and selects the best iteration. This is similar to LOOP(n,t)
except that LOOP applies t recursively to the current solution, whereas MIN applies it n
times “in parallel”. The other extension is the introduction of a simple form of tabu search
[Glo86] as one of the meta-heuristics in the “tool box”, which could be used instead of LNS.
The interest of a tabu search is to explore a large number of simple moves, as opposed to
LNS, which explores a small number of complex moves. In our framework, the natural tabu
exploration is to apply push and pull as unit moves.
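The difference between the existing LOOP and the proposed MIN operator can be made explicit with a small sketch (a hypothetical Python rendering of the semantics described above; `t` stands for any build-or-optimize transformation and `value` for the objective function, neither of which is specified by the algebra itself):

```python
import copy

def LOOP(n, t, solution):
    # LOOP(n,t): apply t recursively, each iteration starting
    # from the result of the previous one.
    for _ in range(n):
        solution = t(solution)
    return solution

def MIN(n, t, solution, value):
    # Proposed MIN(n,t): apply t n times "in parallel", each time
    # from the same starting solution, and keep the best result.
    candidates = [t(copy.deepcopy(solution)) for _ in range(n)]
    return min(candidates, key=value)
```

LOOP thus performs a sequential descent, while MIN performs independent restarts from the current solution and selects the best of the n outcomes.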
7. Conclusion
We presented an approach for learning hybrid algorithms that can be expressed as a
combination of meta-heuristics. Although these experiments were only applied to the field of
Vehicle Routing with Time Windows, hybrid algorithms have shown very strong results in
many fields of practical combinatorial optimization. Thus, we believe that the technique that
we propose here has a wide range of applicability. We showed how to represent a hybrid
program with a term from an algebra and presented a learning algorithm that discovers good
algorithms that can be expressed with this algebra. We showed that this learning algorithm is
stable and robust, and produces terms that are better than those that a user would produce
through manual tuning.
The idea of learning new optimization algorithm is not new. One of the most interesting
approaches is the work of S. Minton [Min97], which has inspired our own work. However,
we differ through the use of a rich program algebra, which is itself inspired from the work on
SALSA[LC98], and which enables to go much further in the scope of the invention. The
result is that we can create terms that truly represent state-of-the-art algorithms in a very
competitive field. The invention in [Min97] is mostly restricted to the choice of a few
heuristics and control parameters in a program template. Although it was already shown that
learning does a better job than hand tuning in such a case, we think that the
application of learning to a hybrid program algebra is a major contribution to this field. The
major contribution of this paper is to provide a robust and stable algorithm, which happens to
be significantly better than our preliminary work in [CSL99].
This work on automated learning of hybrid algorithms is significant because we found that the
combination of heuristics is a difficult task. This may not be a problem for solving well-established hard problems such as benchmarks, but it makes using hybrid algorithms for
industrial application less appealing. The hardware on which these algorithms are running is
changed every 2 or 3 years with machines that are twice as fast; the objective function of the
optimization task changes each time a new soft constraint is being added, which happens on a
continuous basis. For many businesses, the size of the problems is also evolving rapidly as
they need to service more customers. All these observations mean that the optimization
algorithm needs to be maintained regularly by experienced developers, or it will become
rapidly become quite far from the state-of-the-art (which is what happens most of the time, according
to our own industrial experience). Our goal is to incorporate our learning algorithm to
optimization applications as a self-adjusting feature.
To achieve this ambitious goal, more work is still needed. First, we need to demonstrate the
true applicability of this work with other domains. We are planning to evaluate our approach
on another (large) routing problem and on a complex bin-packing problem. We also need to
continue working on the learning algorithm to make sure that it is not wasting its computing
time and that similar or even better terms could not be found with a different approach.
Another promising future direction is the use of a library of terms and/or relational clichés.
By collecting the best terms from learning and using them as a library of terms and/or
deriving a set of term abstractions like the clichés of [SP91], it will be possible to bootstrap
the learning algorithm with a tool box of things that have worked in the past.
Acknowledgments
The authors are thankful to Jon Kettenring for pointing out the importance of a statistical
analysis on the learning process. Although this task is far from complete, many insights have
been gained through the set of systematic experiments that are presented here.
The authors are equally thankful to the anonymous reviewers who provided numerous
thought-provoking remarks and helped us to enrich considerably the content of this paper.
References
[CL95] Y. Caseau, F. Laburthe. Disjunctive Scheduling with Task Intervals. LIENS
Technical Report 95-25, École Normale Supérieure, France, 1995
[CL96] Y. Caseau, F. Laburthe. Introduction to the Claire Programming Language. LIENS
Technical Report 96-15, Ecole Normale Supérieure, 1996.
[CL97] Y. Caseau, F. Laburthe. Solving small TSPs with Constraints. Proc. of the 14th
International Conference on Logic Programming, The MIT Press, 1997.
[CL99]. Y. Caseau, F. Laburthe. Heuristics for Large Constrained Vehicle Routing Problems,
Journal of Heuristics 5 (3), Kluwer, Oct. 1999.
[CSL99] Y. Caseau, G. Silverstein, F. Laburthe. A Meta-Heuristic Factory for Vehicle
Routing Problems, Proc. of the 5th Int. Conference on Principles and Practice of Constraint
Programming CP’99, LNCS 1713, Springer, 1999.
[Glo86] F. Glover. Future paths for integer programming and links to artificial intelligence.
Computers and Operations Research, vol. 5, p. 533-549, 1986.
22
[Gol89] D.E. Goldberg. Genetic Algorithms in Search, Optimization, and Machine Learning.
Addison-Wesley, 1989.
[GHL94] M. Gendreau, A. Hertz, G. Laporte. A Tabu Search Heuristic for the Vehicle
Routing Problem, Management Science, vol. 40, p. 1276-1290, 1994.
[HG95] W. Harvey, M. Ginsberg. Limited Discrepancy Search, Proceedings of the 14th
IJCAI, p. 607-615, Morgan Kaufmann, 1995.
[KB95] G. Kontoravdis, J. Bard. A GRASP for the Vehicle Routing Problem with Time
Windows, ORSA Journal on Computing, vol. 7, N. 1, 1995.
[Lap92] G. Laporte. The Vehicle Routing Problem: an overview of Exact and Approximate
Algorithms, European Journal of Operational Research 59, p. 345-358, 1992.
[LC98] F. Laburthe, Y. Caseau. SALSA : A Language for Search Algorithms, Proc. of
Constraint Programming’98, M.Maher, J.-F. Puget eds., Springer, LNCS 1520, p.310-324,
1998.
[Min97] S. Minton. Configurable Solvers : Tailoring General Methods to Specific
Applications, Proc. of Constraint Programming, G. Smolka ed., Springer, LNCS 1330, p. 372-374, 1997.
[Ree93] C. Reeves (ed.). Modern Heuristic Techniques for Combinatorial Problems. Halsted
Press, Wiley, 1993.
[RR96] C. Rego, C. Roucairol. A Parallel Tabu Search Algorithm Using Ejection Chains for
the Vehicle Routing Problem, in Meta-Heuristics: Theory and Applications, Kluwer, 1996.
[RGP99] L.-M. Rousseau, M. Gendreau, G. Pesant. Using Constraint-Based Operators with
Variable Neighborhood Search to solve the Vehicle Routing Problem with Time Windows,
CP-AI-OR’99 workshop, Ferrara, February 1999.
[Roa01] The 2001 ROADEF Challenge. http://www.roadef.org.
[Rus95] R. Russell. Hybrid Heuristics for the Vehicle Routing Problem with Time Windows,
Transportation Science, vol. 29, n. 2, may 1995.
[SP91] G. Silverstein, M. Pazzani. Relational clichés: Constraining constructive induction
during relational learning, Machine Learning Proceedings of the Eighth International
Workshop (ML91), p. 203-207, Morgan Kaufmann 1991.
[Sol87] M. Solomon. Algorithms for the Vehicle Routing and Scheduling Problems with
Time Window Constraints, Operations Research, vol. 35, n. 2, 1987.
[Sha98] P. Shaw. Using Constraint Programming and Local Search Methods to Solve Vehicle
Routing Problems, Principles and Practice of Constraint Programming, proceedings of CP’98,
LNCS 1520, Springer, 1998.
[TGB95] E. Taillard, P. Badeau, M. Gendreau, F. Guertain, J.-Y. Rousseau. A New
Neighborhood Structure for the Vehicle Routing Problem with Time Windows, technical
report CRT-95-66, Université de Montréal, 1995.
IEEE TRANSACTIONS ON AUTOMATIC CONTROL
Parametric Identification Using
Weighted Null-Space Fitting
arXiv:1708.03946v3 [] 26 Mar 2018
Miguel Galrinho, Cristian R. Rojas, Member, IEEE, and Håkan Hjalmarsson, Fellow, IEEE
Automatic Control Lab and ACCESS Linnaeus Center, School of Electrical Engineering, KTH Royal Institute of Technology, SE-100 44 Stockholm, Sweden. (e-mail: {galrinho, crro, hjalmars}@kth.se.)
This work was supported by the Swedish Research Council under contracts 2015-05285 and 2016-06079.

Abstract—In identification of dynamical systems, the prediction error method using a quadratic cost function provides asymptotically efficient estimates under Gaussian noise and additional mild assumptions, but in general it requires solving a non-convex optimization problem. An alternative class of methods uses a non-parametric model as intermediate step to obtain the model of interest. Weighted null-space fitting (WNSF) belongs to this class. It is a weighted least-squares method consisting of three steps. In the first step, a high-order ARX model is estimated. In a second least-squares step, this high-order estimate is reduced to a parametric estimate. In the third step, weighted least squares is used to reduce the variance of the estimates. The method is flexible in parametrization and suitable for both open- and closed-loop data. In this paper, we show that WNSF provides estimates with the same asymptotic properties as PEM with a quadratic cost function when the model orders are chosen according to the true system. Also, simulation studies indicate that WNSF may be competitive with state-of-the-art methods.

Index Terms—System identification, least squares.

I. INTRODUCTION

For parametric system identification, the prediction error method (PEM) is the reference in the field. With open-loop data, consistency is guaranteed if the model can describe the system dynamics, irrespective of the used noise model. For Gaussian noise¹ and with a noise model able to describe the noise spectrum, PEM with a quadratic cost function corresponds to maximum likelihood (ML), and is asymptotically efficient with respect to the used model structure [1], meaning that the covariance of the estimate asymptotically achieves the Cramér-Rao (CR) lower bound: the best possible covariance for consistent estimators.

¹When maximum likelihood and asymptotic efficiency are discussed in the following, the standard assumption is that the noise is Gaussian.

There are two issues that may hinder successful application of PEM. The first—and most critical—is the risk of converging to a non-global minimum of the cost function, which is in general not convex. Thus, PEM requires local non-linear optimization algorithms and good initialization points. The second issue concerns closed-loop data. In this case, PEM is biased unless the noise model is flexible enough. For asymptotic efficiency, the noise model must be of correct order and estimated simultaneously with the dynamic model.

During the half decade since the publication of [2], alternatives to PEM/ML have appeared, addressing one or both of the aforementioned issues. We will not attempt to fully review this vast field, but below we highlight some of the milestones.

Instrumental variable (IV) methods [3] allow consistency to be obtained in a large variety of settings without the issue of non-convexity. Asymptotic efficiency can be obtained for some problems using iterative algorithms [4,5]. However, IV methods cannot achieve the CR bound with closed-loop data [6].

Realization-based methods [7], which later evolved into subspace methods [8], are non-iterative and thus attractive for their computational efficiency. The bias issue for closed-loop data has been overcome by more recent algorithms [9]–[12]. However, structural information is difficult to incorporate, and—even if a complete analysis is still unavailable (significant contributions have been provided [12]–[14])—subspace methods are in general not believed to be as accurate as PEM.

Some methods are based on fixing some parameters in certain places of the cost function but not others to obtain a quadratic cost function, so that the estimate can be obtained by (weighted) least squares. Then, the fixed coefficients are replaced by an estimate from the previous iteration in the weighting or in a filtering step. This leads to iterative methods, which date back to [15]. Some of these methods have been denoted iterative quadratic maximum likelihood (IQML), originally developed for filter design [16,17] and later applied to dynamical systems [18]–[20]. Another classical example is the Steiglitz-McBride method [21] for estimating output-error models, which is equivalent to IQML for an impulse-input case [22]. In the identification field, weightings or filterings have not been determined by statistical considerations. In this perspective, the result in [23], showing that the Steiglitz-McBride method is not asymptotically efficient, is not surprising.

Another approach is to estimate, in an intermediate step, a more flexible model, followed by a model reduction step to recover a model with the desired structure. The motivation for this procedure is that, in some cases, each step corresponds to a convex optimization problem or a numerically reliable procedure. To guarantee asymptotic efficiency, it is important that the intermediate model is a sufficient statistic and the model reduction step is performed in a statistically sound way. Indirect PEM [24] formalizes the requirements starting with an over-parametrized model of fixed order and uses ML in the model reduction step. The latter step corresponds in general to a weighted non-linear least-squares problem.

It has also been recognized that the intermediate model does not need to capture the true system perfectly, but only with sufficient accuracy. Subspace algorithms can be interpreted in
this way: for example, SSARX [11] estimates an ARX model
followed by a singular-value-decomposition (SVD) step and
least-squares estimation. For spectral estimation, the THREE-like approach [25] is also a two-step procedure that first
obtains a non-parametric spectral estimate and then reduces
it to a parametric estimate that in some sense is closest to
the non-parametric one, and whose optimization function is
convex [26].
In the field of time-series estimation, methods based on an
intermediate high-order time series have also been suggested
as alternative to ML, whose properties were studied in [27].
Durbin’s first method [28] for auto-regressive moving-average
(ARMA) time series uses an intermediate high-order AR time
series to simulate the innovations sequence, which allows
obtaining the ARMA parameters by least squares. This method
does not have the same asymptotic properties as ML, unlike
Durbin’s second method [28]. The latter is an extension of [29],
which had been proposed for MA time series, whose parameters can be estimated from the high-order AR estimates using
least squares, with an accuracy that can be made arbitrarily
close to the Cramér-Rao bound by increasing the AR-model
order. When applied to ARMA time series, the idea to achieve
efficiency is to iterate between estimating the AR and MA
polynomials using this procedure, initialized with Durbin’s
first method. Another way to achieve efficiency from Durbin’s
first method as starting point was proposed in [30] by using an
additional filtering step with the MA estimates from Durbin’s
first method, and then re-estimating the ARMA-parameters.
The asymptotic properties of these methods have been
analyzed by considering the high order tending to infinity, but
“small” compared to the sample size. A preferable analysis
should handle the relation between the high order and the
sample size formally, as done in [31] to prove consistency
of the method in [30], where the high order is assumed to
tend to infinity as function of the sample size at a particular
rate. This class of methods has become popular for vector
ARMA time series, with several available algorithms using
different procedures for obtaining the asymptotic efficient
model parameter estimates from the estimated innovations
(e.g., [32]–[34], with further information in references in these
papers). Despite sharing the same asymptotic properties, which
have been analyzed with the high order as a function of the
sample size, these algorithms may have different computational requirements and finite sample properties.
For identification of dynamical systems, instead of using
the high-order model to estimate the innovations, it has been
suggested that identification of the model of interest can be
done by applying asymptotic ML directly to the high-order
model [35]. The ASYM method [36] is an instantiation of this
approach. Because an ARX-model estimate and its covariance
constitute a sufficient statistic as the model order grows,
this approach can produce asymptotically efficient estimates.
However, the plant and noise models are estimated separately,
preventing asymptotic efficiency for closed-loop data. Also,
although such model reduction procedures may have numerical
advantages over direct application of PEM [36], this approach
still requires local non-linear optimization techniques. The
Box-Jenkins Steiglitz-McBride (BJSM) method [37] instead
2
uses the Steiglitz-McBride method in the model reduction step,
resulting in asymptotically efficient estimates of the plant in
open loop. Two drawbacks of BJSM are that the number of
iterations is required to tend to infinity (as for the Steiglitz-McBride method) and that, similarly to [35] and [36], the CR
bound cannot be attained in closed loop. The Model Order
Reduction Steiglitz-McBride (MORSM) method solves the
first drawback of BJSM, but not the second [38].
In this contribution, we focus on weighted null-space fitting
(WNSF), introduced in [39]. This method uses two of the
features of the methods above: i) an intermediate high-order
ARX model; ii) the high-order model is directly used for estimating the low-order model using ML-based model reduction.
However, instead of an explicit minimization of the model-reduction cost function—as in indirect PEM (directly via the
model parameters), ASYM (in the time domain), and [35]
(in the frequency domain)—the model reduction step consists
of a weighted least-squares problem. Asymptotic efficiency
requires that the weighting depend on the (to be estimated)
model parameters. To handle this, an additional least-squares
step is introduced. Consisting of three (weighted) least-squares
steps, WNSF has attractive computational properties in comparison with, for example, PEM, ASYM, and BJSM. More
steps may be added to this standard procedure, using an
iterative weighted least-squares algorithm, which may improve
the estimate for finite sample size.
Another interesting feature of WNSF is that, unlike many of
the methods above (including MORSM), the dynamic model
and the noise model are estimated jointly. If this is not done,
an algorithm cannot be asymptotically efficient for closed-loop data [40]. Nevertheless, in some applications, the noise
model may be of no concern. WNSF can then be simplified
and a noise model not estimated, still maintaining asymptotic
efficiency for open-loop data. In closed loop, consistency
is still maintained because the high-order model captures
the noise spectrum consistently, while the resulting accuracy
corresponds to the covariance of PEM with an infinite-order
noise model [40]. Thus, besides the attractive numerical properties, WNSF has theoretical properties matched only by PEM.
However, WNSF has the additional benefit that an explicit
noise model is not required to obtain consistency with closed-loop data.
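The model-reduction core of the method can be illustrated with a schematic numpy sketch for the noise-free output-error case only. This is our illustration, not the authors' algorithm: the first (high-order ARX) step is replaced by an exact impulse response standing in for the high-order estimate, and the third, variance-reducing weighted step is omitted. The point is that, given a high-order estimate, the low-order parameters satisfy a relation that is linear in (f, l):

```python
import numpy as np

def impulse_response(l, f, n):
    # Impulse response g_1..g_n of G(q) = L(q)/F(q), with
    # L(q) = l_1 q^-1 + ... and F(q) = 1 + f_1 q^-1 + ...
    g = np.zeros(n + 1)          # g[0] = 0: G is strictly proper
    for k in range(1, n + 1):
        lk = l[k - 1] if k <= len(l) else 0.0
        g[k] = lk - sum(f[j - 1] * g[k - j]
                        for j in range(1, min(k, len(f)) + 1))
    return g

def fit_low_order(g, mf, ml):
    # "Null-space" relation F(q)G(q) - L(q) = 0: for each k,
    #   g_k + sum_j f_j g_{k-j} - l_k = 0   (with l_k = 0 for k > ml),
    # which is linear in (f, l) and solvable by least squares.
    n = len(g) - 1
    A = np.zeros((n, mf + ml))
    b = np.zeros(n)
    for k in range(1, n + 1):
        b[k - 1] = -g[k]
        for j in range(1, mf + 1):
            if k - j >= 0:
                A[k - 1, j - 1] = g[k - j]
        if k <= ml:
            A[k - 1, mf + k - 1] = -1.0
    theta, *_ = np.linalg.lstsq(A, b, rcond=None)
    return theta[:mf], theta[mf:]
```

With g computed from, say, l = [0.5] and f = [-0.8], `fit_low_order(g, 1, 1)` recovers the parameters exactly; with a noisy high-order estimate, the residuals of this linear relation are correlated, which is where the statistical weighting of the third WNSF step matters.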
In [39], some theoretical properties of WNSF are claimed
and supported by simulations, but with no formal proof. The
robust performance that the method has shown has provided
the motivation to extend the simulation study and deepen
the theoretical analysis. Take Fig. 1 as an example, showing
the FITs (see (V-C) for a definition of this quality measure)
for estimates obtained in closed loop from 100 Monte Carlo
runs with the following methods: PEM with default MATLAB
implementation (PEM-d), the subspace method SSARX [11],
WNSF, and PEM initialized at the true parameters as benchmark (PEM-t). Here, the default MATLAB initialization for
PEM is often not accurate enough, and the non-convex cost
function of PEM converges to non-global minima, while
the low FIT of SSARX indicates that this method is not
a viable alternative to deal with the non-convexity of PEM
for the situation at hand. On the other hand, WNSF has a
performance close to PEM initialized at the true parameters, suggesting that the weighted least-squares procedure applied to a non-parametric estimate may be more robust than an explicit minimization of the PEM cost function in regards to convergence issues.

[Figure 1: box plots of the FIT values (0–100 scale) for PEM-d, SSARX, WNSF, and PEM-t.]
Fig. 1. FITs from 100 Monte Carlo runs with a highly resonant system.

In this paper, we provide a theoretical and experimental analysis of WNSF applied to stable single-input single-output (SISO) Box-Jenkins (BJ) systems, which may operate in closed loop. Our main contributions are to establish conditions for consistency and asymptotic efficiency. A major effort of the analysis is to keep track of the model errors induced by using an ARX model on data generated by a system of BJ type. It is a delicate matter to determine how the ARX-model order should depend on the sample size such that it is ensured that these errors vanish as the sample size grows: to this end, the results in [41] have been instrumental. We finally conduct a finite sample simulation study where WNSF shows competitive performance with state-of-the-art methods.

The paper is organized as follows. In Section II, we introduce definitions, assumptions, and background. In Section III, we review the WNSF algorithm. In Section IV, we provide the theoretical analysis; in Section V, the experimental analysis.

II. PRELIMINARIES

A. Notation

• ||x||_p = (Σ_{k=1}^{n} |x_k|^p)^{1/p}, with x_k the k-th entry of the n × 1 vector x, and p ∈ N (for simplicity ||x|| := ||x||_2).
• ||A||_p = sup_{x≠0} ||Ax||_p / ||x||_p, with A a matrix, x a vector of appropriate dimensions, and p ∈ N (for simplicity ||A|| := ||A||_2); also, ||A||_∞ = ||A^⊤||_1.
• C and N̄ denote any constant, which need not be the same in different expressions, and may be random variables.
• Γ_n(q) = [q^{-1} · · · q^{-n}]^⊤, where q^{-1} is the backward time-shift operator.
• A* is the complex conjugate transpose of the matrix A.

B. Definitions and Assumptions

Assumption 1 (Model and true system). The model has input {u_t}, output {y_t} and is subject to the noise {e_t}, all real-valued, related by

  y_t = G(q, θ) u_t + H(q, θ) e_t.    (1)

The transfer functions G(q, θ) and H(q, θ) are rational functions in q^{-1}, according to

  G(q, θ) := L(q, θ) / F(q, θ) = (l_1 q^{-1} + · · · + l_{m_l} q^{-m_l}) / (1 + f_1 q^{-1} + · · · + f_{m_f} q^{-m_f}),
  H(q, θ) := C(q, θ) / D(q, θ) = (1 + c_1 q^{-1} + · · · + c_{m_c} q^{-m_c}) / (1 + d_1 q^{-1} + · · · + d_{m_d} q^{-m_d}),

where θ is the parameter vector to be estimated, given by

  θ = [f^⊤ l^⊤ c^⊤ d^⊤]^⊤ ∈ R^{m_f + m_l + m_c + m_d},    (2)

with f = [f_1 · · · f_{m_f}]^⊤, l = [l_1 · · · l_{m_l}]^⊤, c = [c_1 · · · c_{m_c}]^⊤, d = [d_1 · · · d_{m_d}]^⊤.

If the noise model is not of interest, we consider that we want to obtain an estimate G(q, θ̄), where θ̄ = [f^⊤ l^⊤]^⊤.

The true system is described by (1) when θ = θ_o. The transfer functions G_o := G(q, θ_o) and H_o := H(q, θ_o) are assumed to be stable, and H_o inversely stable. The polynomials L_o := L(q, θ_o) and F_o := F(q, θ_o), as well as C_o := C(q, θ_o) and D_o := D(q, θ_o), do not share common factors.

Because we allow for data to be collected in closed loop, the input {u_t} is allowed to have a stochastic part. Then, let F_{t−1} be the σ-algebra generated by {e_s, u_s, s ≤ t − 1}. For the noise, the following assumption applies.

Assumption 2 (Noise). The noise sequence {e_t} is a stochastic process that satisfies

  E[e_t | F_{t−1}] = 0,  E[e_t^2 | F_{t−1}] = σ_o^2,  E[|e_t|^{10}] ≤ C,  ∀t.

Before stating the assumption on the input sequence, we introduce the following definitions, used in [41].

Definition 1 (f_N-quasi-stationarity). Let f_N be a decreasing sequence of positive scalars, with f_N → 0 as N → ∞, and

  R_vv^N(τ) = (1/N) Σ_{t=τ+1}^{N} v_t v_{t−τ}^⊤,   for 0 ≤ τ < N,
  R_vv^N(τ) = (1/N) Σ_{t=1}^{N+τ} v_t v_{t−τ}^⊤,   for −N < τ ≤ 0,
  R_vv^N(τ) = 0,                                   otherwise.
Tn,m (X(q)) is the Toeplitz matrix of size n × m (m ≤ The vector sequence {vt } is fN -quasi-stationary if
n) with first column [x0 · · ·P xn−1 ]⊤ and first row
i) There exists Rvv (τ ) such that
∞
[x0 01×m−1 ], where X(q) = k=0 xk q −k . The dimenN
sup|τ |≤N Rvv
(τ ) − Rvv (τ ) ≤ C1 fN ,
P
sion n may be infinity, denoted T∞,m (X(q)).
N
2
1
ii) N t=−N kvt k ≤ C2
Ex denotes expectation
of the random vector x.
P
N
for all N large enough, where C1 and C2 are finite constants.
Ēxt := lim N1 t=1 Ext .
N →∞
xN = O(fN ): the function xN tends to zero at a rate not
This definition allows us to work with some stochastic
slower than fN , as N → ∞, w.p.1.
signals that have deterministic components, as in [1]. In
xN ∼ AsN (a, P ): the random variable xN is normally addition to the standard definition of quasi-stationarity, a rate
distributed with mean a and covariance P as N → ∞.
of convergence for the sample covariances is defined.
Definition 2 (f_N-stability). A filter G(q) = Σ_{k=0}^∞ g_k q^{-k} is f_N-stable if Σ_{k=0}^∞ |g_k| / f_k < ∞.

Definition 3 (Power spectral density). The power spectral density of an f_N-quasi-stationary sequence {v_t} is given by Φ_v(z) = Σ_{τ=−∞}^∞ R_vv(τ) z^{−τ}, if the sum exists for |z| = 1.

For the input, the following assumption applies.

Assumption 3 (Input). The input sequence {u_t} is defined by u_t = −K(q)y_t + r_t under the following conditions.
i) The sequence {r_t} is independent of {e_t}, f_N-quasi-stationary with f_N = √(log N / N), and uniformly bounded.
ii) With Φ_r(z) = F_r(z)F_r(z^{−1}) the spectral factorization of {r_t} and F_r(z) causal, F_r(q) is BIBO stable.
iii) The closed-loop system is f_N-stable with f_N = 1/√N.
iv) The transfer function K(z) is bounded on the unit circle.
v) The spectral density of {[r_t e_t]^⊤} is coercive (i.e., bounded from below by the matrix δI, for some δ > 0).

Operation in open loop is obtained by taking K(q) = 0. The choice of f_N in iii) guarantees that the impulse responses of the closed-loop system have a minimum rate of decay, necessary to derive the results in [41]. This minimum decay rate is trivially satisfied here, as the system is stable and finite dimensional, and hence has exponentially decaying impulse responses.

C. The Prediction Error Method

The prediction error method minimizes a cost function of prediction errors, which, for the model structure (1), are

ε_t(θ) = (D(q, θ)/C(q, θ)) [y_t − (L(q, θ)/F(q, θ)) u_t].

Using a quadratic cost function, the PEM estimate of θ is obtained by minimizing

J(θ) = (1/N) Σ_{t=1}^N ε_t^2(θ),    (3)

where N is the sample size. Assuming that θ belongs to an appropriate domain [1, Def. 4.3], when the data set is informative [1, Def. 8.1] and under appropriate technical conditions [1, Chap. 8], the global minimizer θ̂_N^PEM of (3) is asymptotically distributed as [1, Theorem 9.1]

√N (θ̂_N^PEM − θ_o) ∼ AsN(0, σ_o^2 M_CR^{−1}),    (4)

where

M_CR = (1/2π) ∫_{−π}^{π} Ω(e^{iω}) Φ_z(e^{iω}) Ω*(e^{iω}) dω,    (5)

with (for notational simplicity, we omit the argument e^{iω})

Ω = [ −(G_o/(H_o F_o)) Γ_{m_f}    0
       (1/(H_o F_o)) Γ_{m_l}       0
       0                           (1/C_o) Γ_{m_c}
       0                          −(1/D_o) Γ_{m_d} ],    (6)

and Φ_z the spectrum of [u_t e_t]^⊤. When the error sequence is Gaussian, PEM with a quadratic cost function is asymptotically efficient, with (4) corresponding to the CR bound [1, Chap. 9].

In open loop, the asymptotic covariance of the dynamic-model parameters is the top-left block of (4) even if the noise-model orders m_c and m_d are larger than the true ones; if smaller, the dynamic-model estimates are consistent but not asymptotically efficient. In closed loop, the covariance of the dynamic-model estimates only corresponds to the top-left block of (4) if the noise-model orders are the true ones; if smaller, the dynamic-model estimates are biased; if larger, they are consistent and the asymptotic covariance matrix can be bounded by σ_o^2 M_CL^{−1}, where [40]

M_CL = (1/2π) ∫_{−π}^{π} Ω̄(e^{iω}) Φ_ru(e^{iω}) Ω̄*(e^{iω}) dω,    (7)

with Φ_ru the spectrum of the input due to the reference. This corresponds to the case with infinite noise-model order.

The main drawback with PEM is that minimizing (3) is in general a non-convex optimization problem. Therefore, the global minimizer θ̂_N^PEM is not guaranteed to be found. An exception is the ARX model.

D. High-Order ARX Modeling

The true system can alternatively be written as

A_o(q) y_t = B_o(q) u_t + e_t,    (8)

where the transfer functions

A_o(q) := 1/H_o(q) =: 1 + Σ_{k=1}^∞ a_k^o q^{−k},
B_o(q) := G_o(q)/H_o(q) =: Σ_{k=1}^∞ b_k^o q^{−k}    (9)

are stable (Assumption 1). Therefore, the ARX model

A(q, η^n) y_t = B(q, η^n) u_t + e_t,    (10)

where

η^n = [a_1 · · · a_n b_1 · · · b_n]^⊤,
A(q, η^n) = 1 + Σ_{k=1}^n a_k q^{−k},  B(q, η^n) = Σ_{k=1}^n b_k q^{−k},

can approximate (8) arbitrarily well if the model order n is chosen arbitrarily large.

Because the prediction errors for the ARX model (10), ε_t(η^n) = A(q, η^n) y_t − B(q, η^n) u_t, are linear in the model parameters η^n, the corresponding PEM cost function (3) can be minimized with least squares. This is done as follows. First, re-write (10) in regression form as

y_t = (ϕ_t^n)^⊤ η^n + e_t,    (11)

where

ϕ_t^n = [−y_{t−1} · · · −y_{t−n} u_{t−1} · · · u_{t−n}]^⊤.

Then, the least-squares estimate of η^n is obtained by

η̂_N^{n,ls} = [R_N^n]^{−1} r_N^n,    (12)

where

R_N^n = (1/N) Σ_{t=n+1}^N ϕ_t^n (ϕ_t^n)^⊤,    (13)
r_N^n = (1/N) Σ_{t=n+1}^N ϕ_t^n y_t.    (14)
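The first WNSF step, eqs. (11)-(14), is ordinary least squares on the ARX regression. Below is a minimal pure-Python sketch; the normal-equation solver is for illustration only, and a numerically robust routine would be used in practice.

```python
def solve(A, b):
    """Solve A x = b by Gaussian elimination with partial pivoting."""
    n = len(b)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            for k in range(c, n + 1):
                M[r][k] -= f * M[c][k]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][k] * x[k] for k in range(r + 1, n))) / M[r][r]
    return x

def arx_ls(y, u, n):
    """Least-squares ARX estimate, eqs. (11)-(14): returns [a_1..a_n, b_1..b_n]."""
    N = len(y)
    phis = [[-y[t - k] for k in range(1, n + 1)] + [u[t - k] for k in range(1, n + 1)]
            for t in range(n, N)]                                                   # phi_t^n, eq. (11)
    ys = y[n:]
    d = 2 * n
    R = [[sum(p[i] * p[j] for p in phis) / N for j in range(d)] for i in range(d)]  # R_N^n, eq. (13)
    r = [sum(p[i] * yt for p, yt in zip(phis, ys)) / N for i in range(d)]           # r_N^n, eq. (14)
    return solve(R, r)                                                              # eq. (12)
```

On noise-free data generated by a first-order ARX system, this estimator recovers the true coefficients exactly (up to numerical precision), which is a useful check before moving to the stochastic setting.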
As the sample size increases, we have [41]

R_N^n → R̄^n := Ē[ϕ_t^n (ϕ_t^n)^⊤], as N → ∞, w.p.1,
r_N^n → r̄^n := Ē[ϕ_t^n y_t], as N → ∞, w.p.1.    (15)

Consequently,

η̂_N^{n,ls} → η̄^n := [R̄^n]^{−1} r̄^n, as N → ∞, w.p.1.    (16)

For future reference, we define

η_o^n := [a_1^o · · · a_n^o b_1^o · · · b_n^o]^⊤,  η_o := [a_1^o a_2^o · · · b_1^o b_2^o · · ·]^⊤.    (17)

The attractiveness of ARX modeling is the simplicity of estimation while approximating more general classes of systems with arbitrary accuracy. However, as the order n typically has to be taken large, the estimated ARX model will have high variance. Nevertheless, this estimate can in principle be used as a means to obtain an asymptotically efficient estimate of the Box-Jenkins (BJ) model (1) when the measurement noise is Gaussian. To understand intuitively why this is the case, we observe that if we neglect the truncation error from approximating the true system (8) by the model (10), η̂_N^{n,ls} and R_N^n constitute a sufficient statistic for our problem. Therefore, they can replace the data without loss of information. If ML is used for the subsequent estimation, we need to solve a non-convex optimization problem [35]. An accurate estimate to initialize the optimization procedure is then crucial; a standard result is that, if initialized with a strongly consistent estimate, one Gauss-Newton iteration provides an asymptotically efficient estimate (e.g., [42, Chap. 23]).

III. WEIGHTED NULL-SPACE FITTING METHOD

The idea of weighted null-space fitting [39] is to avoid the burden of a non-convex optimization by using weighted least squares, but maintaining the properties of maximum likelihood. The method consists of three steps. In the first step, a high-order ARX model is estimated with least squares. In the second step, the parametric model is estimated from the high-order ARX model with least squares, providing a consistent estimate. In the third step, the parametric model is re-estimated with weighted least squares. Because the optimal weighting depends on the true parameters, we replace these by the consistent estimate obtained in the previous step. Similarly to maximum likelihood with an optimization algorithm initialized at a consistent estimate, this provides an asymptotically efficient estimate. We now proceed to detail each of these steps.

The first step consists in estimating η̂_N^{n,ls} from (12). As discussed before, η̂_N^{n,ls} and R_N^n are almost a sufficient statistic for our problem, if the ARX-model truncation error is small enough (later, this will be treated formally). Then, we will use η̂_N^{n,ls} and R_N^n instead of data to estimate the model of interest.

The second step implements this as follows. Re-write (8) as

C_o(q) A_o(q) − D_o(q) = 0,
F_o(q) B_o(q) − L_o(q) A_o(q) = 0.    (18)

Then, (18) can be expanded as

(1 + c_1^o q^{−1} + · · · + c_{m_c}^o q^{−m_c})(1 + Σ_{k=1}^∞ a_k^o q^{−k}) − (1 + d_1^o q^{−1} + · · · + d_{m_d}^o q^{−m_d}) = 0,    (19a)
(1 + f_1^o q^{−1} + · · · + f_{m_f}^o q^{−m_f}) Σ_{k=1}^∞ b_k^o q^{−k} − (l_1^o q^{−1} + · · · + l_{m_l}^o q^{−m_l})(1 + Σ_{k=1}^∞ a_k^o q^{−k}) = 0.    (19b)

To express θ_o in terms of η_o, we can do so in vector form. Because a power-series product (e.g., Σ_{k=0}^∞ α_k q^{−k} Σ_{k=0}^∞ β_k q^{−k}) can be written as the (Toeplitz-)matrix-vector product

[α_0 0 0 · · ·; α_1 α_0 0 · · ·; α_2 α_1 α_0 · · ·; · · ·][β_0; β_1; β_2; · · ·] = [β_0 0 0 · · ·; β_1 β_0 0 · · ·; β_2 β_1 β_0 · · ·; · · ·][α_0; α_1; α_2; · · ·],    (20)

we may write (19) as (keeping the first n equations)

η_o^n − Q_n(η_o^n) θ_o = 0,    (21)

with θ_o defined by (2) evaluated at the true parameters and

Q_n(η^n) = [ 0                0             −Q_n^c(η^n)   Q_n^d
             −Q_n^f(η^n)      Q_n^l(η^n)    0             0     ],    (22)

where, when evaluated at the true parameters η_o^n,

Q_n^c(η_o^n) = T_{n,m_c}(A(q, η_o^n)),  Q_n^l(η_o^n) = T_{n,m_l}(A(q, η_o^n)),
Q_n^f(η_o^n) = T_{n,m_f}(B(q, η_o^n)),  Q_n^d = [I_{m_d,m_d}; 0_{n−m_d,m_d}].

Motivated by (21), we replace η_o^n by its estimate η̂_N^{n,ls}, obtaining an over-determined system of equations, which may be solved for θ using, for example, least squares:

θ̂_N^LS = [Q_n^⊤(η̂_N^{n,ls}) Q_n(η̂_N^{n,ls})]^{−1} Q_n^⊤(η̂_N^{n,ls}) η̂_N^{n,ls}.    (23)

In (23), invertibility follows from convergence of η̂_N^{n,ls} to η_o^n, which is of larger dimension than θ, and the block-Toeplitz structure of Q_n(η^n) (this is treated formally in Lemma 1).

With (23), we have not accounted for the residuals in (21) when η̂_N^{n,ls} replaces η_o^n. The third step remedies this by re-estimating θ in a statistically sound way. For some η^n, and using the same logic as (20), we can write (21) as

η^n − Q_n(η^n) θ_o = T_n(θ_o)(η^n − η_o^n) =: δ_n(η^n, θ_o),    (24)

where

T_n(θ) = [ T_n^c(θ)     0
           −T_n^l(θ)    T_n^f(θ) ],    (25)

with T_n^c(θ_o) = T_{n,n}(C(q, θ_o)), T_n^l(θ_o) = T_{n,n}(L(q, θ_o)), and T_n^f(θ_o) = T_{n,n}(F(q, θ_o)). The objective is then to estimate the θ that minimizes the residuals δ_n(η̂_N^{n,ls}, θ). If we neglect the bias error from truncation of the ARX model, which should be close to zero for sufficiently large n, we have that, approximately,

√N (η̂_N^{n,ls} − η_o^n) ∼ AsN(0, σ_o^2 [R̄^n]^{−1}).    (26)
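The identity (20) — that a truncated power-series product can be written with either factor as the Toeplitz matrix — is easy to check numerically. A small illustrative sketch in pure Python:

```python
def toeplitz_apply(a, b):
    """First len(a) coefficients of the series product a(q) * b(q),
    i.e. the lower-triangular Toeplitz product T(a) b of eq. (20);
    a and b are coefficient lists of equal length."""
    return [sum(a[i - j] * b[j] for j in range(i + 1)) for i in range(len(a))]
```

Since the coefficients of a product of power series do not depend on the order of the factors, T(a) b = T(b) a; this symmetry is what allows (19), which is bilinear in the high- and low-order coefficients, to be rearranged into the linear-in-θ form (21).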
Then, using (24) and (26), we may write that, approximately,

δ_n(η̂_N^{n,ls}) ∼ AsN(0, T_n(θ_o) σ_o^2 [R̄^n]^{−1} T_n^⊤(θ_o)).    (27)

Because the residuals we want to minimize, given by δ_n(η̂_N^{n,ls}, θ) = η̂_N^{n,ls} − Q_n(η̂_N^{n,ls})θ, are asymptotically distributed by (27), the estimate of θ with minimum variance is given by the weighted least-squares estimate

θ̂_N^{WLSo} = [Q_n^⊤(η̂_N^{n,ls}) W̄_n(θ_o) Q_n(η̂_N^{n,ls})]^{−1} Q_n^⊤(η̂_N^{n,ls}) W̄_n(θ_o) η̂_N^{n,ls},

where the weighting matrix

W̄_n(θ_o) = [T_n(θ_o) σ_o^2 [R̄^n]^{−1} T_n^⊤(θ_o)]^{−1}    (28)

is the inverse of the covariance of the residuals [43]. Because θ_o and R̄^n are not available, we replace them by θ̂_N^LS and R_N^n, respectively (σ_o^2 can be disregarded, because the weighting can be scaled arbitrarily without influencing the solution). Thus, the third step consists in re-estimating θ by

θ̂_N^WLS = [Q_n^⊤(η̂_N^{n,ls}) W_n(θ̂_N^LS) Q_n(η̂_N^{n,ls})]^{−1} Q_n^⊤(η̂_N^{n,ls}) W_n(θ̂_N^LS) η̂_N^{n,ls},    (29)

where (we take the inverses of the matrices individually)

W_n(θ̂_N^LS) = T_n^{−⊤}(θ̂_N^LS) R_N^n T_n^{−1}(θ̂_N^LS),    (30)

with T_n(θ̂_N^LS) obtained using (25). Invertibility in (29) follows from (besides what was mentioned for Step 2) invertibility of W_n(θ̂_N^LS), which in turn follows from the lower-Toeplitz structure of T_n(θ), convergence of θ̂_N^LS to θ_o, and convergence of R_N^n to R̄^n (this is treated formally in Lemmas 2 and 3). Because θ̂_N^LS is a consistent estimate of θ_o with an error decaying sufficiently fast, using θ̂_N^LS in the weighting should not change the asymptotic properties of θ̂_N^WLS. Analogously to taking one Gauss-Newton iteration for a maximum likelihood cost function initialized at a strongly consistent estimate, θ̂_N^WLS is an asymptotically efficient estimate, as will be proven in the next section.

In summary, WNSF consists of the following three steps:
1) estimate a high-order ARX model with least squares (12);
2) reduce the high-order ARX model to the model of interest with least squares (23);
3) re-estimate the model of interest by weighted least squares (29) using the weighting (30).

Two notes can be made about this procedure. First, the objective of the second step is to obtain a consistent estimate to construct the weighting; hence, the choice of least squares is arbitrary, and weighted least squares with any invertible weighting (e.g., W_n = R_N^n) can be used. Second, although θ̂_N^WLS is asymptotically efficient, it is possible to continue iterating, which may improve the estimate for finite sample size.
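One reason the weighting (30) is cheap to form is that T_n(θ) is built from lower-triangular Toeplitz blocks, whose inverses can be applied by forward substitution rather than by explicit matrix inversion. A small illustrative sketch in pure Python (not the paper's implementation):

```python
def toeplitz_lower(x, n):
    """n x n lower-triangular Toeplitz matrix T_{n,n}(X(q)) with first column x
    (padded with zeros if x is shorter than n)."""
    col = (list(x) + [0.0] * n)[:n]
    return [[col[i - j] if i >= j else 0.0 for j in range(n)] for i in range(n)]

def forward_solve(T, b):
    """Solve T z = b for lower-triangular T, i.e. apply T^{-1} without forming it."""
    z = []
    for i in range(len(b)):
        z.append((b[i] - sum(T[i][j] * z[j] for j in range(i))) / T[i][i])
    return z
```

For example, with F(q) = 1 + 0.5 q^{-1}, solving T_{3,3}(F) z = e_1 returns the first three impulse-response coefficients of 1/F(q), namely 1, -0.5, 0.25.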
Other Settings

Despite having been presented for a fully parametrized SISO BJ model, we point out that the method is flexible in parametrization. For example, it is possible to fix some parameters in θ if they are known, or to impose linear relations between parameters. Hence, other common model structures (e.g., OE, ARMA, ARMAX) may also be used, as well as multi-input multi-output (MIMO) versions of such structures. The requirement is that a relation between the high- and low-order parameters can be written in the form (21).

Moreover, a parametric noise model does not need to be estimated. In this case, the first equation in (18) is disregarded and only the second is used. The subsequent steps can then be derived similarly. This approach is presented in detail and analyzed in [44]. In open loop, it provides asymptotically efficient estimates of the dynamic model; in closed loop, the estimates are consistent and with asymptotic covariance corresponding to (7).

IV. ASYMPTOTIC PROPERTIES

We now turn to the asymptotic analysis of WNSF. Here, we make a distinction between the main algorithm presented here and the aforementioned case without a low-order noise-model estimate. Although apparently simpler because of the smaller dimension of the problem, the case without a noise-model estimate requires additional care in the analysis. The reason is that the corresponding T_n(θ) in that case will not be square. Then, inverting the weighting as in (30) (a relation that will be used for the analysis in this paper) will not be valid, requiring another approach. Including under-parametrized noise models in this paper is then not possible due to space concerns. Thus, the asymptotic analysis in this paper considers the dynamic and noise models correctly parametrized, in which case the algorithm is consistent and asymptotically efficient. The case with an under-parametrized noise model (in particular, the limit case where the noise-model equation is neglected and no noise model is estimated) is considered in [44].

Because the ARX model (10) is a truncation of the true system (8), its estimate (and the respective covariance) will not be a sufficient statistic for finite order, and some information will be lost in this step. Then, we need to make sure that, as N grows, the truncation error will be sufficiently small so that, asymptotically, no information is lost. To keep track of the truncation error in the analysis (see appendices), we let the model order n depend on the sample size N—denoting n = n(N)—according to the following assumption.

Assumption 4 (ARX-model order). It holds that
D1. n(N) → ∞, as N → ∞;
D2. n^{4+δ}(N)/N → 0, for some δ > 0, as N → ∞.

Condition D1 implies that, as the sample size N tends to infinity, so does the model order n. Condition D2 establishes a maximum rate at which the model order n is allowed to grow, as we cannot use too high an order compared with the number of observations. A consequence of Condition D2 is that [41]

n^2(N) log(N)/N → 0, as N → ∞.    (31)

Moreover, defining d(N) := Σ_{k=n(N)+1}^∞ (|a_k^o| + |b_k^o|), we have

√N d(N) → 0, as N → ∞,    (32)

as a consequence of the stability and rational description of the true system in Assumption 1. Although (31) and (32) follow from
other assumptions, they are stated explicitly as they will be required to show our theoretical results.

To facilitate the statistical analysis, the results in this section consider, instead of (12), a regularized estimate

η̂_N^{n,reg} := η̂_N^n = [R_reg^n(N)]^{−1} r_N^n,    (33)

where

R_reg^n(N) = R_N^n, if ||[R_N^n]^{−1}|| < 2/δ;  R_reg^n(N) = R_N^n + (δ/2) I_{2n}, otherwise,

for some small δ > 0. Asymptotically, the first and second order properties of η̂_N^{n,ls} and η̂_N^n are identical [41].

When we let n = n(N) according to Assumption 4, we use η̂_N := η̂_N^{n(N)}. We will also denote η̄^{n(N)} and η_o^{n(N)}, defined in (16) and (17), respectively. Concerning the matrices (13), (22), (25), (28), and (30), for notational simplicity we maintain the subscript n even if n = n(N).

Some of the technical assumptions used in this paper differ from those used for the asymptotic analysis of PEM [1]. For example, the bound in Assumption 2 is stronger than what is required for PEM. On the other hand, for PEM the parameter vector θ is required to belong to a compact set, which is not imposed here. However, such differences in technical assumptions have little relevance in practice.

We have the following result for consistency of θ̂_N^LS.

Theorem 1. Let Assumptions 1, 2, 3, and 4 hold, and θ̂_N^LS be defined by (23). Then,

θ̂_N^LS → θ_o, as N → ∞, w.p.1.

Moreover, we have that

||θ̂_N^LS − θ_o|| = O( n(N) √(log N / N) [1 + d(N)] ).    (34)

Proof. See Appendix B.

We have the following result for consistency of θ̂_N^WLS.

Theorem 2. Let Assumptions 1, 2, 3, and 4 hold, and θ̂_N^WLS be defined by (29). Then,

θ̂_N^WLS → θ_o, as N → ∞, w.p.1.

Proof. See Appendix C.

We have the following result for the asymptotic distribution and covariance of θ̂_N^WLS.

Theorem 3. Let Assumptions 1, 2, 3, and 4 hold, and θ̂_N^WLS be defined by (29). Then,

√N (θ̂_N^WLS − θ_o) ∼ AsN(0, σ_o^2 M_CR^{−1}),

where M_CR is given by (5).

Proof. See Appendix D.

Theorem 3 implies, comparing with (4), that WNSF has the same asymptotic properties as PEM. For Gaussian noise, this corresponds to an asymptotically efficient estimate.

V. SIMULATION STUDIES

In this section, we perform simulation studies and discuss practical issues. First, we illustrate the asymptotic properties of the method. Second, we consider how to choose the order of the non-parametric model. Third, we exemplify with two difficult scenarios for PEM how WNSF can be advantageous in terms of robustness against convergence to non-global minima and convergence speed. Fourth, we perform a simulation with random systems to test the robustness of the method compared with other state-of-the-art methods.

Although WNSF and the approach in [38] are different algorithms, they share the similarities of using high-order models and iterative least squares. However, [38] is only applicable in open loop. Here, to differentiate WNSF as a more general approach that is applicable in open or closed loop without changing the algorithm, we focus on the typically more challenging closed-loop setting, for which many standard methods are not consistent.

A. Illustration of Asymptotic Properties

The first simulation has the purpose of illustrating that the method is asymptotically efficient. Here, we consider only the case where we estimate a correct noise model (the case where a low-order noise model is not estimated is illustrated in [44]). We perform open- and closed-loop simulations, where the closed-loop data are generated by

u_t = (1/(1 + K(q)G_o(q))) r_t − (K(q)H_o(q)/(1 + K(q)G_o(q))) e_t,
y_t = (G_o(q)/(1 + K(q)G_o(q))) r_t + (H_o(q)/(1 + K(q)G_o(q))) e_t,    (35)

and the open-loop data by

u_t = (1/(1 + K(q)G_o(q))) r_t,  y_t = G_o(q) u_t + H_o(q) e_t,

where {r_t} and {e_t} are independent Gaussian white sequences with unit variance, K(q) = 1, and

G_o(q) = (q^{−1} + 0.1q^{−2})/(1 − 0.5q^{−1} + 0.75q^{−2}),  H_o(q) = (1 + 0.7q^{−1})/(1 − 0.9q^{−1}).

We perform 1000 Monte Carlo runs, with sample sizes N ∈ {300, 600, 1000, 3000, 6000, 10000}. We apply WNSF with an ARX model of order 50 to the open- and closed-loop data. Performance is evaluated by the mean-squared error of the estimated parameter vector of the dynamic model, MSE = ||θ̄̂_N^WLS − θ̄_o||^2, where θ̄ contains only the elements of θ contributing to G(q, θ). As this simulation has the purpose of illustrating asymptotic properties, initial conditions are zero and assumed known—that is, the sums in (13) and (14) start at t = 1 instead of t = n + 1.

The results are presented in Fig. 2, with the average MSE over 1000 Monte Carlo runs plotted as function of the sample size (closed loop in solid line, open loop in dash-dotted line), where we also plot the corresponding CR bounds (closed loop in dashed line, open loop in dotted line). The respective CR bounds are attained as the sample size increases.
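The closed-loop data (35) can equivalently be generated by simulating the feedback loop u_t = r_t − K y_t sample by sample, since G_o has at least one sample of delay. A minimal pure-Python sketch with the constant controller K(q) = 1 and the G_o, H_o of this section (illustrative only, not the paper's simulation code):

```python
def closed_loop_sim(r, e, K=1.0):
    """Simulate y_t = G_o u_t + H_o e_t with u_t = r_t - K y_t, equivalent to (35).
    G_o = (q^-1 + 0.1 q^-2)/(1 - 0.5 q^-1 + 0.75 q^-2) has one delay, so y_t
    is computable from past inputs before u_t closes the loop at time t."""
    u, y, w, v = [], [], [], []    # w = G_o u, v = H_o e
    for t in range(len(r)):
        # v_t = e_t + 0.7 e_{t-1} + 0.9 v_{t-1}
        vt = e[t] + (0.7 * e[t - 1] if t >= 1 else 0.0) + (0.9 * v[t - 1] if t >= 1 else 0.0)
        # w_t = u_{t-1} + 0.1 u_{t-2} + 0.5 w_{t-1} - 0.75 w_{t-2}
        wt = ((u[t - 1] if t >= 1 else 0.0) + (0.1 * u[t - 2] if t >= 2 else 0.0)
              + (0.5 * w[t - 1] if t >= 1 else 0.0) - (0.75 * w[t - 2] if t >= 2 else 0.0))
        yt = wt + vt
        u.append(r[t] - K * yt)
        y.append(yt)
        w.append(wt)
        v.append(vt)
    return u, y
```

With e ≡ 0 and an impulse reference, the returned y is the impulse response of the closed-loop transfer function G_o/(1 + G_o), which can be checked against a few hand-computed samples.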
[Figure: average MSE (10^{-4} to 10^{-2}) versus sample size N (10^3 to 10^4), log-log axes.] Fig. 2. Illustration of asymptotic properties: CR bounds in closed loop (dashed) and open loop (dotted), and average MSE for the dynamic-model parameter estimates as function of sample size obtained with WNSF in closed loop (solid) and open loop (dash-dotted).

B. Practical Issues

In the previous simulation, an ARX model of order 50 was estimated in the first step. Although the order of this model should, in theory, tend to infinity at some maximum rate to attain efficiency (Assumption 4), a fixed order was sufficient to illustrate the asymptotic properties of WNSF in this particular scenario. This suggests that when the number of data samples increases, a non-parametric model of fixed order with sufficiently low bias error may be enough for practical purposes. However, for fixed sample size, the question remains how to choose the most appropriate non-parametric model order: a too small n will introduce bias, and a too large n will introduce unnecessary noise in the non-parametric model estimate, which may affect the accuracy of the parametric model estimate. Some previous knowledge about the speed of the system may help in choosing this order, but the most appropriate value may also depend on sample size and signal-to-noise ratio. In this paper, we use the PEM cost function (3) as criterion to choose n: we compute θ̂_N^WLS for several n, and choose the estimate that minimizes (3). Also, θ̂_N^WLS need not be used as the final estimate, as, for finite sample size, performance may improve by iterating. However, because WNSF does not minimize the cost function (3) explicitly, it is not guaranteed that subsequent iterations correspond to a lower cost-function value than previous ones. Here, we will also use the cost function (3) as criterion to choose the best model among the iterations performed.

C. Comparison with PEM

One of the main limitations of PEM is the non-convex cost function, which may make the method sensitive to the initialization point. Here, we provide examples illustrating how WNSF may be a more robust method than PEM regarding initialization: in cases where the PEM cost function is highly non-convex, WNSF may require fewer iterations and be more robust against convergence to non-global minima.

We consider a system where H_o(q) = 1, K(q) = 0.3, and

G_o(q) = (1.0q^{−1} − 1.2q^{−2})/(1 − 2.5q^{−1} + 2.4q^{−2} − 0.88q^{−3}),

with data generated according to (35), where

r_t = ((1 + 0.7q^{−1})/(1 − 0.9q^{−1})) r_t^w,

with {e_t} and {r_t^w} Gaussian white-noise sequences with variances 4 and 0.25, respectively. The sample size is N = 2000. We estimate an OE model with the following algorithms:
• WNSF with a non-parametric model of order n = 250;
• PEM with default MATLAB initialization (MtL) and the Gauss-Newton (GN) algorithm;
• PEM with default MATLAB initialization (MtL) and the Levenberg-Marquardt (LM) algorithm;
• WNSF with a non-parametric model of order n = 250, where the weighting matrix, instead of being initialized with θ̂_N^LS (23), is initialized with the default MATLAB initialization (MtL);
• PEM initialized with θ̂_N^LS (LS) and the GN algorithm;
• PEM initialized with θ̂_N^LS (LS) and the LM algorithm;
• PEM initialized at the true parameters (true).

All the methods use a maximum of 100 iterations, but stop early upon convergence (default settings for PEM, 10^{−4} as tolerance for the normalized relative change in the parameter estimates), and initial conditions are zero.

Performance is evaluated by the FIT of the impulse response of the estimated OE model G(q, θ̂_N^WLS), given in percent by

FIT = 100 (1 − ||g_o − ĝ|| / ||g_o − mean(g_o)||),    (36)

where g_o is a vector with the impulse response parameters of G_o(q), and similarly ĝ for the estimated model. In (36), sufficiently long impulse responses are taken to make sure that the truncation of their tails does not affect the FIT.

The average FITs for 100 Monte Carlo runs are shown in Table I.

TABLE I
COMPARISON WITH PEM: AVERAGE FITs WITH DIFFERENT METHODS (Meth) AND INITIALIZATIONS (Init).

Init | WNSF | PEM GN | PEM LM
MtL  |  98  |   74   |   98
LS   |  98  |   87   |   85
true |  --  |   98   |   98

For PEM, the results depend on the optimization method and the initialization point: as a consequence of the non-convexity of PEM, the algorithms do not always converge to the global optimum. Among the PEM implementations, the average FIT is the same as for PEM started at the true parameters only with default MATLAB initialization and the LM algorithm. For WNSF, the average FIT is the same as for PEM started at the true parameters independently of the initialization point used in the weighting matrix, suggesting robustness to different initial weighting matrices.

In this simulation, PEM was most robust with the LM algorithm and the default MATLAB initialization, having on average the same accuracy as WNSF. Then, it is appropriate to compare the performance of these methods by iteration when WNSF is also initialized with the same parameter values. In Fig. 3, we plot the average FITs for these methods as function of the maximum number of iterations. Here, WNSF reaches an average FIT of 98 after two iterations, while PEM with LM takes 20 iterations to reach the same value. This suggests that, even if WNSF and some PEM implementation start and
For the simulation, we use 100 systems with structure
100
average FIT
80
Go (q) =
60
40
20
WNSF
PEM LM
0
−20
0
5
10
15
max iterations
20
25
Fig. 3. Comparison with PEM: average FIT from 100 Monte Carlo runs
function of the maximum number of iterations.
converge to the same value, WNSF may do it faster than
standard optimization methods for PEM.
The robustness of WNSF against convergence to non-global
minima compared with different instances of PEM can be even
more evident than in Table I, as WNSF seems to be appropriate
for modeling systems with many resonant peaks, for which the
PEM cost function can be highly non-linear. Take the example
in Fig. 1, based on 100 Monte Carlo runs for a system with
Lo (q) = q −1 − 3.4q −2 + 4.8q −3 − 3.3q −4 + 0.96q −5 ,
As we have observed, PEM may have difficulties with slow
resonant systems: therefore, it is for this class of systems that
WNSF may be most beneficial. With this purpose, we generate
the polynomial coefficients in the following way. The poles are
located in an annulus with the radius uniformly distributed
between 0.88 and 0.98, and the phase uniformly distributed
between 0 and 90◦ (and respective complex conjugates). One
pair of zeros is generated in the same way, and a third real
zero is uniformly distributed between −1.2 and 1.2 (this
allows for non-minimum-phase systems). The noise models
have structure
1 + co1 q −1 + co2 q −2
,
Ho (q) =
1 + do1 q −1 + do2 q −2
with the poles and zeros having uniformly distributed radius
between 0 and 0.95, and uniformly distributed phase between
0 and 180◦ (and respective complex conjugates).
The data are generated in closed loop by
Fo (q) = 1 − 5.4q −1 + 13.5q −2 − 20.1q −3 + 19.5q −4
− 12.1q −5 + 4.5q −6 ,
and data generated according to (V-A) with K(q) = −0.05,
0.05
rt =
rw ,
1 − 0.99q −1 t
{rtw }
where
and {et } are Gaussian white sequences with
unit variance. Here, initial conditions are not assumed zero:
PEM estimates initial conditions by backcasting and WNSF
uses the approach in [45]. In this scenario, PEM with the
LM algorithm and default initialization fails in most runs
to find the global optimum. Subspace methods, often used
to avoid the non-convexity of PEM, may not help in this
scenario: SSARX [11], a subspace method that is consistent
in closed loop, provides an average FIT around 20% (default
MATLAB implementation). Here, WNSF with n between 100
and 600 spaced with intervals of 50 performs similarly to
PEM initialized at the true parameters, accurately capturing
the resonance peaks of the system.
l1o q −1 + · · · + l4o q −4
.
o q −6
1 + f6o q −6 + · · · + fm
K(q)
K(q)Ho (q)
rt −
et ,
1 + K(q)Go (q)
1 + K(q)Go (q)
Ho (q)
K(q)Go (q)
rt +
et ,
yt =
1 + K(q)Go (q)
1 + K(q)Go (q)
ut =
where
rt =
1 − 1.273q −1 + 0.81q −2 w
r
1 − 1.559q −1 + 0.81q −2 t
with {rtw } a Gaussian white-noise sequence with unit variance, {et } a Gaussian white-noise sequence with the variance
chosen such that the signal-to-noise ratio (SNR) is
PN h K(q)Go (q) i2
t=1 1+K(q)Go (q) rt
SNR =
= 2,
PN
2
t=1 [Ho (q)et ]
and the controller K(q) is obtained using a Youlaparametrization to have an integrator and a closed-loop transfer
function that has the same poles as the open loop except that
the radius of the slowest open-loop pole pair is reduced by
80%. The sample size is N = 2000 and we perform 100 Monte
Carlo runs (one for each system; different noise realizations).
D. Random Systems

In order to test the robustness of the method, we now
perform a simulation with random systems. Also, closed-loop
data often introduces additional difficulties: for example, many
standard methods are not consistent. Thus, we perform a
simulation with these settings and compare the performance of
WNSF with other methods available in the Mathworks MATLAB
System Identification Toolbox. For a fair comparison, we
only use methods that are consistent in closed loop and only
use input and output data. From the subspace class, we use
SSARX, as this method is competitive with other subspace
algorithms such as CVA [46,47] and N4SID [48], while it is
consistent in closed loop [11]. IV methods are not considered,
as in closed loop they require the reference signal to construct
the instruments.

We compare the following methods:
• PEM initialized at the true parameters (PEMt);
• PEM with default MATLAB initialization (PEMd);
• SSARX with the default MATLAB options;
• WNSF using the approach in Section V-B to choose n from the grid {50, 100, 150, 200, 250, 300};
• PEM initialized with WNSF (PEMw).

All methods estimate a fully parametrized noise model. We
use the MATLAB 2016b System Identification Toolbox implementation of SSARX and PEM. For PEM, the optimization
algorithm is LM. For SSARX, the horizons are chosen automatically by MATLAB, based on the Akaike Information
Criterion. WNSF and PEM use a maximum of 100 iterations,
but stop earlier upon convergence (default settings for PEM,
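The FIT values compared throughout this section follow the standard goodness-of-fit measure reported by MATLAB's `compare` (stated here as an assumption about the metric, since the text does not define it explicitly):

```python
import numpy as np

def fit_percent(y, y_hat):
    """FIT = 100 * (1 - ||y - y_hat|| / ||y - mean(y)||), the normalized
    root-mean-square fit used by MATLAB's compare(); 100 = perfect match,
    0 = no better than predicting the output mean."""
    y, y_hat = np.asarray(y, float), np.asarray(y_hat, float)
    return 100.0 * (1.0 - np.linalg.norm(y - y_hat) / np.linalg.norm(y - y.mean()))

y = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
assert fit_percent(y, y) == 100.0                      # perfect prediction
assert fit_percent(y, np.full(5, y.mean())) == 0.0     # predicting the mean
```

Negative FIT values (visible in the figures of this section) simply mean the model predicts worse than the output mean.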
IEEE TRANSACTIONS ON AUTOMATIC CONTROL

10^{-4} as tolerance for the normalized relative change in the
parameter estimates). PEM estimates initial conditions by backcasting and WNSF truncates them ([45] does not apply to BJ
models).

The FITs obtained in this simulation are presented in Fig. 4.
In this scenario, PEM with default MATLAB initialization
(PEMd) often fails to find a point close to the global optimum,
which can be concluded by comparison with PEM initialized at
the true parameters (PEMt). Also, SSARX is not an alternative
for achieving better performance. WNSF can be an appropriate
alternative, failing only once to provide an acceptable estimate,
and having otherwise a performance close to the practically
infeasible PEMt. The estimate obtained with WNSF may be
used to initialize PEM. This provides only a small improvement,
suggesting that the estimates obtained with WNSF are
already close to a (local) minimum of the PEM cost function.

Fig. 4. Random systems: FITs from 100 Monte Carlo runs. [Box plot lost in extraction: FIT axis from −25 to 100, one box per method (PEMd, SSARX, WNSF, PEMw, PEMt); "↓3" marks runs falling below the axis.]

VI. CONCLUSION

Methods for parameter estimation based on an intermediate
unstructured model have a long history in system identification (e.g., [11,35]–[37]). Here, we believe to have taken a
significant step further in this class of methods, with a method
that is flexible in parametrization and provides consistent and
asymptotically efficient estimates in open and closed loop
without using non-convex optimization or iterations.

In this paper, we provided a theoretical and experimental
analysis of this method, named weighted null-space fitting
(WNSF). Theoretically, we showed that the method is consistent and asymptotically efficient for stable Box-Jenkins systems. Experimentally, we performed Monte Carlo simulations,
comparing PEM, subspace, and WNSF under settings where
PEM typically performs poorly. The simulations suggest that
WNSF is competitive with these methods, being a viable
alternative to PEM or a way to provide initialization points for PEM.

Although WNSF was presented here for SISO BJ models,
it was also pointed out that the flexibility in parametrization
allows a wider range of structures to be used, as well as
the incorporation of structural information (e.g., fixing specified
parameters). Moreover, based on the analysis in [44], WNSF
does not require a parametric noise model to achieve asymptotic efficiency in open loop and consistency in closed loop.

An extension that was not covered in this paper is the
MIMO case, where subspace or IV methods are typically
used [49], as PEM often has difficulty with the estimation of such
systems. Based on the theoretical foundation provided in this
contribution, this important extension is already in preparation.
Future work also includes extensions to dynamic networks and
non-linear model structures.

APPENDIX A
AUXILIARY RESULTS

In this appendix, we present some results that will be
applied in the remainder of the paper.

Proposition 1. Let Assumptions 1, 2, and 3 hold. Also, let η̄^n
be defined by (II-D) and η_o^n by (II-D). Then,

‖η̄^n − η_o^n‖ ≤ C Σ_{k=n+1}^∞ (|a_k^o| + |b_k^o|) → 0, as n → ∞.

Proof. The result follows from [41, Lemma 5.1] and (IV).

Proposition 2. Let Assumptions 1, 2, 3, and 4 hold. Also, let
η̂_N := η̂_N^{n(N)} be defined by (IV) and η̄^{n(N)} by (II-D). Then,

‖η̂_N − η̄^{n(N)}‖ = O( √(n(N) log N / N) [1 + d(N)] ),   (37)

and ‖η̂_N − η̄^{n(N)}‖ → 0, as N → ∞, w.p.1.

Proof. For the first part, see [41, Theorem 5.1]. The second
part follows from (IV) and (IV).

Proposition 3. Let Assumptions 1, 2, 3, and 4 hold. Then,

‖R_N^{n(N)} − R̄^{n(N)}‖ = O( √(n²(N) log N / N) ) + C n²(N)/N.

Proof. See [41, Lemma 4.1].

Proposition 4. Let Assumptions 1, 2, 3, and 4 hold. Also, let
Υ^n be an m × 2n deterministic matrix, with m fixed. Then,

√N Υ^n (η̂_N − η̄^{n(N)}) ∼ AsN(0, P),

where P = σ_o² lim_{n→∞} Υ^n [R̄^n]^{−1} (Υ^n)^⊤, if the limit exists.

Proof. See [41, Theorem 7.3].
Proposition 5. Consider the product ∏_{i=1}^p X_N^(i), where p
is finite and X_N^(i) are stochastic matrices of appropriate
dimensions (possibly a function of N) such that

‖X_N^(i) − X̄^(i)‖ → 0, as N → ∞, w.p.1,

and ‖X̄^(i)‖ < C_i. Then, we have that

‖∏_{i=1}^p X_N^(i) − ∏_{i=1}^p X̄^(i)‖ → 0, as N → ∞, w.p.1.   (38)

Proof. We show this by induction. First, let p = 2 and define
Δ_N^(i) := X_N^(i) − X̄^(i). Then, we can write

X_N^(1) X_N^(2) − X̄^(1) X̄^(2) = Δ_N^(1) X̄^(2) + X̄^(1) Δ_N^(2) + Δ_N^(1) Δ_N^(2),   (39)

which yields, using the assumptions,

‖X_N^(1) X_N^(2) − X̄^(1) X̄^(2)‖ ≤ ‖Δ_N^(1)‖ ‖X̄^(2)‖ + ‖X̄^(1)‖ ‖Δ_N^(2)‖ + ‖Δ_N^(1)‖ ‖Δ_N^(2)‖ → 0, as N → ∞, w.p.1.   (40)

Second, we consider an arbitrary p, and assume that

‖∏_{i=1}^{p−1} X_N^(i) − ∏_{i=1}^{p−1} X̄^(i)‖ → 0, as N → ∞, w.p.1.   (41)
Then, using a similar procedure as in (39), we have

‖∏_{i=1}^p X_N^(i) − ∏_{i=1}^p X̄^(i)‖ ≤ ‖Δ_N^(p)‖ ‖∏_{i=1}^{p−1} X̄^(i)‖ + ‖X̄^(p)‖ ‖∏_{i=1}^{p−1} Δ_N^(i)‖ + ‖Δ_N^(p)‖ ‖∏_{i=1}^{p−1} Δ_N^(i)‖,   (42)

which, in turn, is bounded by

‖Δ_N^(p)‖ ∏_{i=1}^{p−1} ‖X̄^(i)‖ + ‖X̄^(p)‖ ∏_{i=1}^{p−1} ‖Δ_N^(i)‖ + ‖Δ_N^(p)‖ ∏_{i=1}^{p−1} ‖Δ_N^(i)‖ → 0, as N → ∞, w.p.1,

where the convergence follows by assumption. Then, (38) is
verified when assuming (41), which, considering also (40) and
an induction argument, concludes the proof.
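A quick numerical illustration of Proposition 5 (a sketch: deterministic perturbations Δ_N = N^{−1/2}·1 stand in for the stochastic errors, and three arbitrary 2×2 matrices play the role of the limits X̄^(i)):

```python
import numpy as np

def product_error(N, mats):
    """‖Π X_N^(i) − Π X̄^(i)‖ with X_N^(i) = X̄^(i) + Δ_N, Δ_N = ones/√N."""
    delta = np.ones((2, 2)) / np.sqrt(N)
    noisy = np.linalg.multi_dot([M + delta for M in mats])
    clean = np.linalg.multi_dot(mats)
    return np.linalg.norm(noisy - clean, 2)

# Fixed matrices standing in for the limits X̄^(i) of the stochastic factors.
mats = [np.array([[1.0, 0.2], [0.0, 0.5]]),
        np.array([[0.3, 0.1], [0.4, 1.0]]),
        np.array([[0.8, 0.0], [0.2, 0.6]])]

errs = [product_error(N, mats) for N in (10**2, 10**4, 10**6)]
# As the per-factor perturbations vanish, so does the product error.
assert errs[0] > errs[1] > errs[2]
```

This mirrors the proof: the error is a sum of terms, each containing at least one vanishing factor Δ_N alongside bounded factors.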
APPENDIX B
CONSISTENCY OF STEP 2

The main purpose of this appendix is to prove Theorem 1.
However, before we do so, we introduce some results regarding
the norm of some vectors and matrices.

• ‖η̂_N − η_o^{n(N)}‖ tends to zero, as N tends to infinity, w.p.1
Consider the estimated parameter vector η̂_N := η̂_N^{n(N)} (IV),
and the truncated true parameter vector η_o^{n(N)} (II-D). Using
the triangle inequality, we have

‖η̂_N − η_o^{n(N)}‖ ≤ ‖η̂_N − η̄^{n(N)}‖ + ‖η̄^{n(N)} − η_o^{n(N)}‖,   (43)

where η̄^n is defined by (II-D). Then, from Proposition 1,
the second term on the right side of (43) tends to zero as
n(N) → ∞. From Proposition 2, the first term on the right
side of (43) tends to zero, as N → ∞, w.p.1. Thus,

‖η̂_N − η_o^{n(N)}‖ → 0, as N → ∞, w.p.1.   (44)

• ‖Q_n(η̂_N) − Q_n(η_o^{n(N)})‖ tends to zero, as N tends to infinity, w.p.1
Consider Q_n(η_o^{n(N)}), given by (III) evaluated at the truncated
true parameter vector η_o^{n(N)}, and the matrix Q_n(η̂_N), given
by (III) evaluated at the estimated parameters η̂_N. We have

‖Q_n(η̂_N) − Q_n(η_o^{n(N)})‖ ≤ ‖Q_n^c(η̂_N) − Q_n^c(η_o^{n(N)})‖ + ‖Q_n^f(η̂_N) − Q_n^f(η_o^{n(N)})‖ + ‖Q_n^l(η̂_N) − Q_n^l(η_o^{n(N)})‖ ≤ C ‖η̂_N − η_o^{n(N)}‖.   (45)

Then, using (44), we conclude that

‖Q_n(η̂_N) − Q_n(η_o^{n(N)})‖ → 0, as N → ∞, w.p.1.   (46)

• ‖Q_n(η_o^n)‖ is bounded for all n
We have that

‖Q_n(η_o^n)‖ ≤ ‖Q_n^c(η_o^n)‖ + ‖Q_n^l(η_o^n)‖ + ‖Q_n^f(η_o^n)‖ + ‖Q_n^d‖ ≤ C(‖η_o^n‖ + 1) ≤ C(‖η_o‖ + 1), ∀n,   (47)

which is bounded, by stability of the true system.

• ‖Q_n(η̂_N)‖ is bounded for large N, w.p.1
Using the triangle inequality, we have

‖Q_n(η̂_N)‖ ≤ ‖Q_n(η̂_N) − Q_n(η_o^{n(N)})‖ + ‖Q_n(η_o^{n(N)})‖.   (48)

Using now (46) and (47), the first term on the right side of (48)
can be made arbitrarily small as N increases, while the second
term is bounded for all n(N). Then, there exists N̄ such that

‖Q_n(η̂_N)‖ ≤ C, ∀N > N̄.   (49)

• ‖T_n(θ_o)‖ is bounded for all n
Consider the matrix T_n(θ_o), given by (III). First, we introduce
the following result. Let X(q) = Σ_{k=0}^∞ x_k q^{−k} and define

T[X(q)] := [ x_0  0    0    · · ·
             x_1  x_0  0    · · ·
             x_2  x_1  x_0  · · ·
              ⋮    ⋱    ⋱        ].   (50)

If √(Σ_{k=0}^∞ |x_k|²) < C_1, we have that [50]

‖T[X(q)]‖ ≤ C.   (51)

When X(q) can be written as a rational transfer function, (51)
follows from X(q) having all poles strictly inside the unit
circle, as, in this case, the sum of squares of its impulse
response coefficients is bounded.
We observe that the blocks of T_n(θ_o) satisfy that T_n^f(θ_o),
T_n^c(θ_o), and T_n^l(θ_o) are sub-matrices of T[F_o(q)], T[C_o(q)],
and T[L_o(q)], respectively. Then, we have that

‖T_n(θ_o)‖ ≤ ‖T_n^f(θ_o)‖ + ‖T_n^c(θ_o)‖ + ‖T_n^l(θ_o)‖
           ≤ ‖T[F_o(q)]‖ + ‖T[C_o(q)]‖ + ‖T[L_o(q)]‖ ≤ C, ∀n,   (52)

where the last inequality follows from (51) and from F(q),
C(q), and L(q) being finite order polynomials.
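To make (50)–(51) concrete, the sketch below builds finite truncations of T[X(q)] for an illustrative stable transfer function X(q) = 1/(1 − 0.5q^{−1}) (an assumption for the example; its impulse response is x_k = 0.5^k) and checks that their spectral norms stay bounded as the dimension grows:

```python
import numpy as np

def toeplitz_operator(x):
    """n x n lower-triangular Toeplitz matrix T[X(q)] built from the
    impulse-response coefficients x = (x_0, ..., x_{n-1})."""
    n = len(x)
    T = np.zeros((n, n))
    for i in range(n):
        for j in range(i + 1):
            T[i, j] = x[i - j]
    return T

# Impulse response of X(q) = 1/(1 - 0.5 q^{-1}): x_k = 0.5^k (pole at 0.5).
norms = []
for n in (10, 50, 200):
    x = 0.5 ** np.arange(n)
    norms.append(np.linalg.norm(toeplitz_operator(x), 2))

# The norms grow with n but remain uniformly bounded; a crude bound is
# sum_k |x_k| = 2, since both the 1- and inf-norms are at most that sum.
assert norms[0] <= norms[1] <= norms[2]
assert all(v <= 2.0 for v in norms)
```

The monotonicity holds because each truncation is a principal submatrix of the next, and the uniform bound is exactly what makes ‖T_n(θ_o)‖ and, later, ‖T_n^{−1}(θ_o)‖ controllable for all n.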
The following lemma is useful for the invertibility of the least-squares problem (III).

Lemma 1. Let Assumption 1 hold and

M(η_o) := lim_{n→∞} Q_n^⊤(η_o^n) Q_n(η_o^n),   (53)

where Q_n(η_o^n) is given by (III) evaluated at η_o^n, defined
in (II-D). Then, M(η_o) is invertible.
Proof. First, we observe that the limit in (53) is well defined,
because the entries of M(η_o^n) := Q^⊤(η_o^n)Q(η_o^n) are either
zero or sums of the form

Σ_{k=1}^n a_k^o a_{k+p}^o,   Σ_{k=1}^n a_k^o b_{k+p}^o,   Σ_{k=1}^n b_k^o b_{k+p}^o,

for some finite integers p, and the coefficients a_k^o and b_k^o are
stable sequences. Thus, these sums converge as n → ∞. For
simplicity of notation, let Q_∞(η_o) := lim_{n→∞} Q_n(η_o^n); that is,
Q_∞(η_o) is block Toeplitz according to (III), with each block
having an infinite number of rows and given by

Q_∞^c(η_o) = T_{∞,m_c}(A(q, η_o)),   Q_∞^l(η_o) = T_{∞,m_l}(A(q, η_o)),
Q_∞^f(η_o) = T_{∞,m_f}(B(q, η_o)),   Q_∞^d = [ I_{m_d,m_d} ; 0_{∞,m_d} ].

We can then write M(η_o) = Q_∞^⊤(η_o)Q_∞(η_o). From this
factorization, we observe that M(η_o) is singular if and only if
Q_∞(η_o) has a non-trivial right null-space. Moreover, the block
anti-diagonal structure of Q_∞(η_o) implies that Q_∞ has full
column rank if and only if both matrices [−Q_∞^f(η_o) Q_∞^l(η_o)]
and [−Q_∞^c(η_o) Q_∞^d(η_o)] have full column rank. We proceed
by contradiction. Suppose that

[−Q_∞^f(η_o) Q_∞^l(η_o)] [α ; β] = −Q_∞^f(η_o)α + Q_∞^l(η_o)β = 0,   (54)

where α and β are some vectors α = [α_0 . . . α_{m_f−1}]^⊤ and
β = [β_0 . . . β_{m_l−1}]^⊤. Then, (54) implies

B(q, η_o)α(q) = A(q, η_o)β(q) ⇔ L(q, θ_o)α(q) = F(q, θ_o)β(q),   (55)

where α(q) = Σ_{k=0}^{m_f−1} α_k q^{−k} and β(q) = Σ_{k=0}^{m_l−1} β_k q^{−k}.
Because L(q, θ_o) and F(q, θ_o) are co-prime by Assumption 1
and polynomials of order m_l − 1 and m_f, and α(q) and
β(q) are polynomials of orders at most m_f − 1 and m_l − 1,
(55) can only be satisfied if α(q) ≡ 0 ≡ β(q). Hence,
[−Q_∞^f(η_o) Q_∞^l(η_o)] has full column rank.
Analogously for [−Q_∞^c(η_o) Q_∞^d(η_o)]: this matrix has full
column rank if and only if C(q, θ_o)α(q) = D(q, θ_o)β(q) is
satisfied only for α(q) ≡ 0 ≡ β(q), where here we have
α(q) = Σ_{k=0}^{m_c−1} α_k q^{−k} and β(q) = Σ_{k=0}^{m_d−1} β_k q^{−k}. This is the
case, as C(q, θ_o) and D(q, θ_o) are co-prime and polynomials of
higher order than α(q) and β(q). Hence, [−Q_∞^f(η_o) Q_∞^l(η_o)]
and [−Q_∞^c(η_o) Q_∞^d(η_o)] have full column rank, implying that
Q_∞(η_o) has a trivial right null-space and M(η_o) is invertible.
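The rank argument above is, in finite dimensions, a Sylvester-matrix/coprimeness condition. The sketch below (with illustrative polynomials, not the paper's) builds convolution matrices for two polynomials and checks that the stacked matrix has full column rank exactly when the polynomials share no root, i.e., when B(q)α(q) = A(q)β(q) forces α ≡ 0 ≡ β:

```python
import numpy as np

def conv_matrix(p, k):
    """Matrix of the map c -> p * c for coefficient vectors c of length k;
    columns are shifted copies of p (polynomial convolution)."""
    m = len(p) + k - 1
    M = np.zeros((m, k))
    for j in range(k):
        M[j:j + len(p), j] = p
    return M

def sylvester_rank_deficiency(a, b):
    """Columns span {b*alpha - a*beta}; full column rank iff a, b coprime."""
    S = np.hstack([conv_matrix(b, len(a) - 1), -conv_matrix(a, len(b) - 1)])
    return S.shape[1] - np.linalg.matrix_rank(S)

a = np.array([1.0, -1.5, 0.56])   # (1 - 0.7 q^-1)(1 - 0.8 q^-1)
b = np.array([1.0, -0.5])         # coprime with a
assert sylvester_rank_deficiency(a, b) == 0   # coprime -> full column rank

b_shared = np.array([1.0, -0.7])  # shares the root 0.7 with a
assert sylvester_rank_deficiency(a, b_shared) == 1   # common factor -> rank drop
```

The rank deficiency equals the degree of the greatest common divisor, which is why co-primeness of (L_o, F_o) and (C_o, D_o) in Assumption 1 is exactly what the lemma needs.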
Finally, we have the necessary results to prove Theorem 1.

Proof of Theorem 1: We start by using (III) to write

θ̂_N^LS − θ_o = [Q_n^⊤(η̂_N)Q_n(η̂_N)]^{−1} Q_n^⊤(η̂_N) η̂_N − θ_o
            = [Q_n^⊤(η̂_N)Q_n(η̂_N)]^{−1} Q_n^⊤(η̂_N) [η̂_N − Q_n(η̂_N)θ_o]
            = [Q_n^⊤(η̂_N)Q_n(η̂_N)]^{−1} Q_n^⊤(η̂_N) T_n(θ_o) [η̂_N − η_o^{n(N)}],   (56)

where the last equality follows from (III). If n were fixed,
consistency would follow if η̂_N − η_o^n would approach zero as
N → ∞, provided the inverse of Q_n^⊤(η̂_N)Q_n(η̂_N) existed for
sufficiently large N. However, n = n(N) increases according
to Assumption 4. This implies that the dimensions of the
vectors η̂_N and η_o^{n(N)}, and of the matrices Q_n(η̂_N) (number
of rows) and T_n(θ_o) (number of rows and columns), become
arbitrarily large. Therefore, extra requirements are necessary.
In particular, we use (56) to write

‖θ̂_N^LS − θ_o‖ = ‖M^{−1}(η̂_N) Q_n^⊤(η̂_N) T_n(θ_o) (η̂_N − η_o^{n(N)})‖
             ≤ ‖M^{−1}(η̂_N)‖ ‖Q_n(η̂_N)‖ ‖T_n(θ_o)‖ ‖η̂_N − η_o^{n(N)}‖,   (57)

where M(η̂_N) := Q_n^⊤(η̂_N)Q_n(η̂_N). Consistency is achieved
if the last factor on the right side of the inequality in (57)
approaches zero, as N → ∞, w.p.1, and the remaining factors
are bounded for sufficiently large N, w.p.1. This can be shown
using (44), (49), and (52), but we need additionally that M(η̂_N)
is invertible for sufficiently large N, w.p.1.
With this purpose, we write

‖M(η̂_N) − M(η_o^{n(N)})‖ = ‖Q_n^⊤(η̂_N)Q_n(η̂_N) − Q_n^⊤(η_o^{n(N)})Q_n(η_o^{n(N)})‖.

Using (46), (47), (49), and Proposition 5, and because
M(η_o^n) → M(η_o) as n → ∞, we have that

M(η̂_N) → M(η_o), as N → ∞, w.p.1.   (58)

As M(η_o) is invertible (Lemma 1), by (58) and because the map
from the entries of a matrix to its eigenvalues is continuous,
there is N̄ such that M(η̂_N) is invertible for all N > N̄, w.p.1.
Returning to (57), we may now write

‖θ̂_N^LS − θ_o‖ ≤ C ‖η̂_N − η_o^{n(N)}‖ → 0, as N → ∞, w.p.1, ∀N > N̄.   (59)

Moreover, using (43), we can re-write (59) as

‖θ̂_N^LS − θ_o‖ ≤ C ( ‖η̂_N − η̄^{n(N)}‖ + ‖η̄^{n(N)} − η_o^{n(N)}‖ ).

From Proposition 1, we have ‖η̄^{n(N)} − η_o^{n(N)}‖ ≤ C d(N),
which thus approaches zero faster than ‖η̂_N − η̄^{n(N)}‖, whose
decay rate is according to (37). For the decay rate of ‖θ̂_N^LS − θ_o‖,
it suffices then to take the rate of the slowest-decaying term,
which is given by (37), as we wanted to show.

APPENDIX C
CONSISTENCY OF STEP 3

The main purpose of this appendix is to prove Theorem 2.
However, before we do so, we introduce some results regarding
the norm of some vectors and matrices.

• ‖R_N^n‖ is bounded for all n and sufficiently large N, w.p.1
Let R_N^n be defined as in (II-D). Then, from Lemma 4.2 in [41],
we have that there exists N̄ such that, w.p.1,

‖R_N^n‖ ≤ C, ∀n, ∀N > N̄.   (60)

• ‖T_n^{−1}(θ_o)‖ is bounded for all n
We observe that, with T_n(θ) given by (III), the inverse of
T_n(θ) is given by

T_n^{−1}(θ) = [ T_n^c(θ)^{−1}                           0
               −T_n^f(θ)^{−1} T_n^l(θ) T_n^c(θ)^{−1}    T_n^f(θ)^{−1} ],   (61)

evaluated at the true parameters θ_o. Also, T_n^f(θ_o)^{−1}, T_n^c(θ_o)^{−1},
and T_n^l(θ_o) are sub-matrices of T[1/F_o(q)], T[1/C_o(q)], and
T[L_o(q)], respectively, where T[X(q)] is defined by (50). Then,

‖T_n^{−1}(θ_o)‖ ≤ ‖T_n^f(θ_o)^{−1}‖ + ‖T_n^c(θ_o)^{−1}‖ + ‖T_n^f(θ_o)^{−1}‖ ‖T_n^l(θ_o)‖ ‖T_n^c(θ_o)^{−1}‖
              ≤ ‖T[1/F_o(q)]‖ + ‖T[1/C_o(q)]‖ + ‖T[1/F_o(q)]‖ ‖T[L_o(q)]‖ ‖T[1/C_o(q)]‖ ≤ C, ∀n,   (62)

where the last inequality follows from (51) and the fact that
1/F_o(q), 1/C_o(q), and L_o(q) are stable transfer functions.

• ‖T_n^{−1}(θ̂_N^LS)‖ is bounded for all n and sufficiently large N
Consider the term ‖T_n^{−1}(θ̂_N^LS)‖, where T_n^{−1}(θ̂_N^LS) is given by (61)
evaluated at θ̂_N^LS. We have that, proceeding as in (62),

‖T_n^{−1}(θ̂_N^LS)‖ ≤ ‖T[1/C(q, θ̂_N^LS)]‖ + ‖T[1/F(q, θ̂_N^LS)]‖ + ‖T[1/C(q, θ̂_N^LS)]‖ ‖T[L(q, θ̂_N^LS)]‖ ‖T[1/F(q, θ̂_N^LS)]‖

for all n. This will be bounded if F(q, θ̂_N^LS) and C(q, θ̂_N^LS) have
all roots strictly inside the unit circle. From Theorem 1 and
stability of the true system by Assumption 1, we conclude that
there exists N̄ such that F(q, θ̂_N^LS) and C(q, θ̂_N^LS) have all roots
strictly inside the unit circle for all N > N̄. Thus, we have

‖T_n^{−1}(θ̂_N^LS)‖ ≤ C, ∀n, ∀N > N̄, w.p.1.   (63)
• ‖T_n^{−1}(θ̂_N^LS) − T_n^{−1}(θ_o)‖ tends to zero, as N tends to infinity, w.p.1
For the term ‖T_n^{−1}(θ̂_N^LS) − T_n^{−1}(θ_o)‖, with n = n(N), we have

‖T_n^{−1}(θ̂_N^LS) − T_n^{−1}(θ_o)‖ ≤ ‖T_n^{−1}(θ̂_N^LS)‖ ‖T_n(θ̂_N^LS) − T_n(θ_o)‖ ‖T_n^{−1}(θ_o)‖.   (64)

Because ‖X‖ ≤ √(‖X‖_1 ‖X‖_∞), where X is an arbitrary
matrix, we have that

‖T_n(θ̂_N^LS) − T_n(θ_o)‖ ≤ Σ_{k=1}^{m_f+m_l+m_c} |θ̂_N^{k,LS} − θ_o^k| ≤ C ‖θ̂_N^LS − θ_o‖,

with superscript k denoting the k-th element of the vector. We
can use Theorem 1 to show that

‖T_n(θ̂_N^LS) − T_n(θ_o)‖ = O( √(n(N) log N / N) [1 + d(N)] ).   (65)

From (IV) and (IV),

n²(N) √(log N / N) [1 + d(N)] → 0, as N → ∞.   (66)

Together with (62), (63), and (64), this implies that

‖T_n(θ̂_N^LS) − T_n(θ_o)‖ → 0, as N → ∞, w.p.1,

and thus

‖T_n^{−1}(θ̂_N^LS) − T_n^{−1}(θ_o)‖ → 0, as N → ∞, w.p.1.

The following two lemmas are useful for the invertibility of
the weighted least-squares problem (III).

Lemma 2. Let Assumption 1 hold and

M̄(η_o, θ_o) := lim_{n→∞} Q_n^⊤(η_o^n) W̄_n(θ_o) Q_n(η_o^n),   (67)

where W̄_n(θ_o) is given by (III), and Q_n(η_o^n) is defined by (III)
at the true parameters η_o^n. Then, M̄(η_o, θ_o) is invertible.

Proof. Using (II-D) and (III), we re-write (67) as

M̄(η_o, θ_o) = lim_{n→∞} Q_n^⊤(η_o^n) T_n^{−⊤}(θ_o) Ē{φ_t^n (φ_t^n)^⊤} T_n^{−1}(θ_o) Q_n(η_o^n).   (68)

Re-writing φ_t^n, defined in (II-D), as

φ_t^n = [ −Γ_n y_t ; Γ_n u_t ] = [ −Γ_n G_o(q)  −Γ_n H_o(q) ; Γ_n  0 ] [ u_t ; e_t ],

we can then write

Ē{φ_t^n (φ_t^n)^⊤} = (1/2π) ∫_{−π}^{π} Λ_n(e^{iω}) Φ_z Λ_n^*(e^{iω}) dω,

where

Λ_n(q) = [ −Γ_n G_o(q)  −Γ_n H_o(q) ; Γ_n  0 ].

Then, we can re-write (68) as

M̄(η_o, θ_o) = (1/2π) ∫_{−π}^{π} lim_{n→∞} Q_n^⊤(η_o^n) T_n^{−⊤}(θ_o) Λ_n(e^{iω}) Φ_z Λ_n^*(e^{iω}) T_n^{−1}(θ_o) Q_n(η_o^n) dω.   (69)

Moreover, we have

Q_n^⊤(η_o^n) T_n^{−⊤}(θ_o) Λ_n =
  [ −T_{n,m_f}^⊤(B_o/(F_o A_o)) Γ_n                                          0
     T_{n,m_l}^⊤(A_o/F_o) Γ_n                                                0
     T_{n,m_c}^⊤(A_o/C_o) Γ_n G_o − T_{n,m_c}^⊤(L_o A_o/(F_o C_o)) Γ_n       T_{n,m_c}^⊤(A_o/C_o) Γ_n H_o
    −T_{n,m_d}^⊤(1/C_o) Γ_n G_o + T_{n,m_d}^⊤(L_o/(F_o C_o)) Γ_n            −T_{n,m_d}^⊤(1/C_o) Γ_n H_o ],

where the argument q of the polynomials was dropped for
notational simplicity. In turn, we can also write

Q_n^⊤(η_o^n) T_n^{−⊤}(θ_o) =
  [  0                           −T_{n,m_f}^⊤(B_o/(F_o A_o))
     0                            T_{n,m_l}^⊤(A_o/F_o)
    −T_{n,m_c}^⊤(A_o/C_o)       −T_{n,m_c}^⊤(L_o A_o/(F_o C_o))
     T_{n,m_d}^⊤(1/C_o)          T_{n,m_d}^⊤(L_o/(F_o C_o)) ].

It is possible to observe that, for some polynomial
X(q) = Σ_{k=0}^∞ x_k q^{−k}, lim_{n→∞} T_{n,m}^⊤(X(q)) Γ_n = X(q) Γ_m. Then,
using also (II-D), we have

lim_{n→∞} Q_n^⊤(η_o^n) T_n^{−⊤}(θ_o) Λ_n = Ω,

where Ω is given by (II-C). This allows us to re-write (69) as

M̄(η_o, θ_o) = (1/2π) ∫_{−π}^{π} Ω Φ_z Ω^* dω = M_CR,   (70)

which is invertible because the CR bound exists for an
informative experiment [1].
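The matrix-norm interpolation inequality used above, ‖X‖ ≤ √(‖X‖_1 ‖X‖_∞), is standard; a quick numerical sanity check on random matrices:

```python
import numpy as np

rng = np.random.default_rng(7)
for _ in range(100):
    X = rng.standard_normal((5, 8))
    two = np.linalg.norm(X, 2)        # spectral norm
    one = np.linalg.norm(X, 1)        # max absolute column sum
    inf = np.linalg.norm(X, np.inf)   # max absolute row sum
    assert two <= np.sqrt(one * inf) + 1e-12
```

For banded matrices such as T_n(θ̂_N^LS) − T_n(θ_o), both the 1- and ∞-norms reduce to sums of a fixed number of coefficient errors, which is exactly how the bound by C‖θ̂_N^LS − θ_o‖ is obtained.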
Lemma 3. Let Assumptions 1, 2, 3, and 4 hold.
Also, let M(η̂_N, θ̂_N^LS) := Q_n^⊤(η̂_N) W_n(θ̂_N^LS) Q_n(η̂_N), where
η̂_N := η̂_N^{n(N)} is defined by (IV), Q_n(η̂_N) is defined by (III)
evaluated at the estimated parameters η̂_N, and W_n(θ̂_N^LS) is
defined by (III). Then,

M(η̂_N, θ̂_N^LS) → M̄(η_o, θ_o), as N → ∞, w.p.1.   (71)

Proof. For the purpose of showing (71), we will show that

‖M(η̂_N, θ̂_N^LS) − Q_n^⊤(η_o^{n(N)}) W̄_n(θ_o) Q_n(η_o^{n(N)})‖ → 0, as N → ∞, w.p.1.   (72)

We apply Proposition 5, whose conditions can be verified
using (46), (49), (62), and (63), but additionally we need to verify
that ‖W̄_n(θ_o)‖ is bounded and that ‖W_n(θ̂_N^LS) − W̄_n(θ_o)‖ tends to
zero. For the first, we have that

‖W̄_n(θ_o)‖ ≤ ‖T_n^{−1}(θ_o)‖² ‖R̄^n‖ ≤ C,   (73)

following from (62) and the boundedness of ‖R̄^n‖. For the
second, with W_n(θ̂_N^LS) given by (III) and W̄_n(θ_o) by (III), the
conditions in Proposition 5 are satisfied using (60), (62), (63),
the convergence of T_n^{−1}(θ̂_N^LS) to T_n^{−1}(θ_o), and Proposition 3,
from where it follows that

‖W_n(θ̂_N^LS) − W̄_n(θ_o)‖ → 0, as N → ∞, w.p.1.   (74)

Having shown (73) and (74), the assumptions of Proposition 5
are verified, from which (72) follows and implies (71).
We now have the necessary results to prove Theorem 2.
Proof of Theorem 2: Similarly to (56), we write

θ̂_N^WLS − θ_o = M^{−1}(η̂_N, θ̂_N^LS) Q_n^⊤(η̂_N) W_n(θ̂_N^LS) T_n(θ_o) (η̂_N − η_o^{n(N)}),   (75)

and analyze

‖θ̂_N^WLS − θ_o‖ ≤ ‖M^{−1}(η̂_N, θ̂_N^LS)‖ ‖W_n(θ̂_N^LS)‖ ‖Q_n(η̂_N)‖ ‖T_n(θ_o)‖ ‖η̂_N − η_o^{n(N)}‖.

From Lemma 3, M(η̂_N, θ̂_N^LS) converges to M̄(η_o, θ_o), which
is invertible from Lemma 2. Hence, because the map from
the entries of the matrix to its eigenvalues is continuous,
M(η̂_N, θ̂_N^LS) is invertible for sufficiently large N, and therefore
its norm is bounded, as it is a matrix of fixed dimensions.
Also, from (49), ‖Q_n(η̂_N)‖ is bounded for sufficiently large
N. Moreover, we have that, making explicit that n = n(N),

‖W_{n(N)}(θ̂_N^LS)‖ ≤ ‖T_{n(N)}^{−1}(θ̂_N^LS)‖² ‖R_N^{n(N)}‖.

Then, from (60) and (63), we have

‖W_{n(N)}(θ̂_N^LS)‖ ≤ C, ∀N > N̄.

Finally, using also (44), (49), and (52), we conclude that

‖θ̂_N^WLS − θ_o‖ → 0, as N → ∞, w.p.1.

APPENDIX D
ASYMPTOTIC DISTRIBUTION AND COVARIANCE OF STEP 3

The purpose of this appendix is to prove Theorem 3: the
asymptotic distribution and covariance of

√N (θ̂_N^WLS − θ_o) = √N Υ_n(η̂_N, θ̂_N^LS) (η̂_N − η_o^{n(N)}),   (76)

which is re-written from (75), where

Υ_n(η̂_N, θ̂_N^LS) = [Q_n^⊤(η̂_N) W_n(θ̂_N^LS) Q_n(η̂_N)]^{−1} Q_n^⊤(η̂_N) W_n(θ̂_N^LS) T_n(θ_o).

If Υ_n(η̂_N, θ̂_N^LS) were of fixed dimensions, the standard idea
would be to show that Υ_n(η̂_N, θ̂_N^LS) converges w.p.1 to a
deterministic matrix, as a consequence of η̂_N and θ̂_N^LS being
consistent estimates of η_o and θ_o, respectively. Then, for
computing the asymptotic distribution and covariance of (76),
one can consider the asymptotic distribution and covariance
of √N (η̂_N − η_o^{n(N)}) while Υ_n(η̂_N, θ̂_N^LS) can be replaced by
the deterministic matrix it converges to. This standard result
follows from [51, Lemma B.4], but it is not applicable here
because the dimensions of Υ_n(η̂_N, θ̂_N^LS) and η̂_N − η_o^{n(N)} are
not fixed. In this scenario, Proposition 4 must be used instead.
However, (76) is not ready to be used with Proposition 4,
because it requires η̂_N − η_o^n to be pre-multiplied by a deterministic
matrix. The key idea of proving Theorem 3 is to show
that (76) has the same asymptotic distribution and covariance
as an expression of the form Ῡ^n [η̂_N − η_o^{n(N)}], where Ῡ^n
is a deterministic matrix, and then apply Proposition 4. The
following result will be useful for this purpose.

Proposition 6. Let x̂_N = √N Â_N B̂_N δ̂_N be a finite-dimensional
vector, where Â_N and B̂_N are stochastic matrices
and δ̂_N is a stochastic vector of compatible dimensions. The
dimensions may increase to infinity as a function of N, except for
the number of rows of Â_N, which is fixed. We assume that there
is N̄ such that ‖Â_N‖ < C for all N > N̄, there is B̄ such that
‖B̂_N − B̄‖ → 0 as N → ∞ w.p.1, and ‖δ̂_N‖ → 0 as N → ∞
w.p.1. Then, if √N ‖B̂_N − B̄‖ ‖δ̂_N‖ → 0 as N → ∞ w.p.1,
x̂_N and √N Â_N B̄ δ̂_N have the same asymptotic distribution
and covariance.

Proof. We begin by writing

x̂_N = √N Â_N B̄ δ̂_N + √N Â_N (B̂_N − B̄) δ̂_N.   (77)

Although some of the matrix and vector dimensions may
increase to infinity with N, the number of rows of Â_N is fixed,
which makes x̂_N finite dimensional, to which [51, Lemma B.4]
may be applied. Then, it is a consequence of this lemma that
x̂_N and √N Â_N B̄ δ̂_N have the same asymptotic distribution
and covariance if the second term on the right side of (77)
tends to zero with probability one. By assumption, we have

‖√N Â_N (B̂_N − B̄) δ̂_N‖ ≤ √N ‖Â_N‖ ‖B̂_N − B̄‖ ‖δ̂_N‖ → 0, as N → ∞, w.p.1,

which completes the proof.

We now have the necessary results to prove Theorem 3.

Proof of Theorem 3: We start by re-writing (76) as

√N (θ̂_N^WLS − θ_o) = M^{−1}(η̂_N, θ̂_N^LS) x(η̂_N, θ̂_N^LS),

where

M(η̂_N, θ̂_N^LS) = Q_n^⊤(η̂_N) W_n(θ̂_N^LS) Q_n(η̂_N),
x(η̂_N, θ̂_N^LS) = √N Q_n^⊤(η̂_N) W_n(θ̂_N^LS) T_n(θ_o) (η̂_N − η_o^{n(N)}).

Both M(η̂_N, θ̂_N^LS) and x(η̂_N, θ̂_N^LS) are of fixed dimension, and
we have from (70) and (71) that

M^{−1}(η̂_N, θ̂_N^LS) → M_CR^{−1}, as N → ∞, w.p.1.   (78)

Then, if we assume that

x(η̂_N, θ̂_N^LS) ∼ AsN(0, P),   (79)

we have that, from [51, Lemma B.4],

√N (θ̂_N^WLS − θ_o) ∼ AsN( 0, M_CR^{−1} P M_CR^{−1} ).

We will proceed to show that (79) is verified with

P = σ_o² lim_{n→∞} Q_n^⊤(η_o^n) W̄_n(θ_o) Q_n(η_o^n) = σ_o² M_CR,   (80)

where the second equality follows directly from (67) and (70).
We now proceed to show the first equality.
In the following arguments, we will apply Proposition 6
repeatedly to x(η̂_N, θ̂_N^LS). This requires the boundedness of
some matrices; however, because all the matrices in x(η̂_N, θ̂_N^LS)
have been shown to be bounded for sufficiently large N w.p.1,
for readability we will refrain from referring to this every time
Proposition 6 is applied.
Because it is more convenient to work with η̄^{n(N)} than
η_o^{n(N)}, we start by re-writing x(η̂_N, θ̂_N^LS) as

x(η̂_N, θ̂_N^LS) = √N Q_n^⊤(η̂_N) W_n(θ̂_N^LS) T_n(θ_o) (η̂_N − η̄^{n(N)})
              + Q_n^⊤(η̂_N) W_n(θ̂_N^LS) T_n(θ_o) √N (η̄^{n(N)} − η_o^{n(N)}).
Using Proposition 1, we have, for sufficiently large N w.p.1,

‖Q_n^⊤(η̂_N) W_n(θ̂_N^LS) T_n(θ_o) √N (η̄^{n(N)} − η_o^{n(N)})‖ ≤ C √N d(N) → 0, as N → ∞.

Using an identical argument to Proposition 6, we have that
x(η̂_N, θ̂_N^LS) and

√N Q_n^⊤(η̂_N) W_n(θ̂_N^LS) T_n(θ_o) (η̂_N − η̄^{n(N)})   (81)

have the same asymptotic distribution and covariance, so we
will analyze (81) instead.
Expanding W_n(θ̂_N^LS) in (81), we obtain

√N Q_n^⊤(η̂_N) W_n(θ̂_N^LS) T_n(θ_o) (η̂_N − η̄^{n(N)})
  = √N Q_n^⊤(η̂_N) T_n^{−⊤}(θ̂_N^LS) R_N^n T_n^{−1}(θ̂_N^LS) T_n(θ_o) (η̂_N − η̄^{n(N)}).   (82)

Using Proposition 6, we conclude that (82) and

√N Q_n^⊤(η̂_N) T_n^{−⊤}(θ̂_N^LS) R_N^n T_n^{−1}(θ_o) T_n(θ_o) (η̂_N − η̄^{n(N)})
  = √N Q_n^⊤(η̂_N) T_n^{−⊤}(θ̂_N^LS) R_N^n (η̂_N − η̄^{n(N)})   (83)

have the same asymptotic properties if

√N ‖T_n^{−1}(θ̂_N^LS) − T_n^{−1}(θ_o)‖ ‖η̂_N − η̄^{n(N)}‖ → 0, as N → ∞, w.p.1.
Using (62), (63), and (64) to write

√N ‖T_n^{−1}(θ̂_N^LS) − T_n^{−1}(θ_o)‖ ‖η̂_N − η̄^{n(N)}‖ ≤ C √N ‖T_n(θ̂_N^LS) − T_n(θ_o)‖ ‖η̂_N − η̄^{n(N)}‖,

we have from (65) and Proposition 2 that

√N ‖T_n(θ_o) − T_n(θ̂_N^LS)‖ ‖η̂_N − η̄^{n(N)}‖ = O( (n(N) log N / √N) [1 + d(N)]² ),   (84)

where

n(N) log N / √N = [n^{3+δ}(N)/N]^{1/(3+δ)} · log N / N^{(1+δ)/(2(3+δ))} → 0, as N → ∞,

due to Condition D2 in Assumption 4. This implies that (82)
and (83), and in turn x(η̂_N, θ̂_N^LS), have the same asymptotic
distribution and covariance. Repeating this procedure, it can
be shown that these, in turn, have the same asymptotic
distribution and covariance as

√N Q_n^⊤(η̂_N) T_n^{−⊤}(θ_o) R_N^n (η̂_N − η̄^{n(N)}).   (85)
There are two stochastic matrices left in (85), which we
need to replace by deterministic matrices that do not affect
the asymptotic properties. Using Proposition 3,

√N ‖R_N^n − R̄^n‖ ‖η̂_N − η̄^{n(N)}‖ = O( (n^{3/2}(N) log N / √N) [1 + d(N)] )
  + C √(n²(N) log N / N) √(n³(N)/N) [1 + d(N)],

where the first term tends to zero by applying Condition D2
in Assumption 4 to

n^{3/2}(N) log N / √N = [n^{4+δ}(N)/N]^{3/(2(4+δ))} · log N / N^{(1+δ)/(2(4+δ))} → 0, as N → ∞,

and the second because of Condition D2 in Assumption 4
and (IV). Then, from Proposition 6, we have that (85) and

√N Q_n^⊤(η̂_N) T_n^{−⊤}(θ_o) R̄^n (η̂_N − η̄^{n(N)}),   (86)

and in turn x(η̂_N, θ̂_N^LS), have the same asymptotic distribution
and covariance, so we will analyze (86).
Applying again Proposition 6, we have that (86) and

√N Q_n^⊤(η̄^{n(N)}) T_n^{−⊤}(θ_o) R̄^n (η̂_N − η̄^{n(N)})   (87)

have the same asymptotic properties, since

√N ‖Q_n(η̂_N) − Q_n(η̄^{n(N)})‖ ‖η̂_N − η̄^{n(N)}‖ ≤ C √N ‖η̂_N − η̄^{n(N)}‖²
  = O( (n(N) log N / √N) [1 + d(N)]² )

by using (45) and Proposition 2, which tends to zero as
N → ∞, identically to (84).
In (87), the matrix multiplying η̂_N − η̄^{n(N)} is finally deterministic,
but it will be more convenient to work with Q_n(η_o^{n(N)}).
With this purpose, Proposition 1 can be used to show that (87)
and

√N Q_n^⊤(η_o^{n(N)}) T_n^{−⊤}(θ_o) R̄^n (η̂_N − η̄^{n(N)})   (88)

have the same asymptotic properties, as

√N ‖Q_n(η̄^{n(N)}) − Q_n(η_o^n)‖ ‖η̂_N − η̄^{n(N)}‖ ≤ C √N ‖η̄^{n(N)} − η_o^{n(N)}‖ ‖η̂_N − η̄^{n(N)}‖
  = O( √(n(N) log N / N) [1 + d(N)] √N d(N) ),

which tends to zero due to (IV) and (IV). Thus, x(η̂_N, θ̂_N^LS)
and (88) have the same asymptotic distribution and covariance,
so we will analyze (88) instead.
Let Υ^n := Q_n^⊤(η_o^n) T_n^{−⊤}(θ_o) R̄^n. Then, using Proposition 4,

√N Υ^n (η̂_N − η̄^{n(N)}) ∼ AsN(0, P),

where P is given by (80). Finally, using (78), (79), and (80):

√N (θ̂_N^WLS − θ_o) ∼ AsN(0, σ_o² M_CR^{−1}).
REFERENCES
[1] L. Ljung, System Identification: Theory for the User. Prentice-Hall, 1999.
[2] K. Åström and T. Bohlin. Numerical identification of linear dynamic
systems from normal operating records. In IFAC Symposium on Self-Adaptive Systems, Teddington, United Kingdom, 1965.
[3] T. Söderström and P. Stoica. Instrumental variable methods for system
identification. Springer-Verlag, 1983.
[4] P. Stoica and T. Söderström. Optimal instrumental variable estimation
and approximate implementations. IEEE Transactions on Automatic
Control, vol. 28:757–772, July 1983.
[5] P. Young. The refined instrumental variable method: Unified estimation
of discrete and continuous-time transfer function models. Journal
Européen des Systèmes Automatisés, 42:149–179, 2008.
[6] M. Gilson and P. van den Hof. Instrumental variable methods for closed-loop system identification. Automatica, 41(2):241–249, 2005.
[7] S. Y. Kung. A new identification and model reduction algorithm via
singular value decomposition. In Asilomar Conference on Circuits,
Systems and Computers, pages 705–714, Pacific Grove, USA, 1978.
[8] P. van Overschee and B. de Moor. Subspace identification for linear systems: theory—implementation—applications. Kluwer Academic, Boston,
1996.
[9] M. Verhaegen. Application of a subspace model identification technique to identify LTI systems operating in closed-loop. Automatica,
29(4):1027–1040, 1993.
[10] S. Qin and L. Ljung. Closed-loop subspace identification with innovation
estimation. In 13th IFAC Symposium on System Identification, pages
861–866, Rotterdam, The Netherlands, 2003.
[11] M. Jansson. Subspace identification and ARX modeling. In 13th IFAC
Symposium on System Identification, Rotterdam, The Netherlands, 2003.
[12] A. Chiuso and G. Picci. Consistency analysis of some closed-loop
subspace identification methods. Automatica, 41(3):377–391, 2005.
[13] M. Jansson and B. Wahlberg. A linear regression approach to state-space
subspace system identification. Signal Processing, 52:103–129, 1996.
[14] D. Bauer. Asymptotic properties of subspace estimators. Automatica,
41(3):359 – 376, 2005.
[15] C. Sanathanan and J. Koerner. Transfer function synthesis as a ratio
of two complex polynomials. IEEE Transactions on Automatic Control,
8(1):56–58, 1963.
[16] A. G. Evans and R. Fischl. Optimal least squares time-domain synthesis
of recursive digital filters. IEEE Transactions on Audio and Electroacoustics, 21(1):61–65, 1973.
[17] Y. Bresler and A. Macovski. Exact maximum likelihood parameter estimation of superimposed exponential signals in noise. IEEE Transactions
on Signal Processing, 34(5):1081–1089, 1986.
[18] A. K. Shaw. Optimal identification of discrete-time systems from
impulse response data. IEEE Trans. on Signal Processing, 42(1):113–
120, 1994.
[19] A. K. Shaw, P. Misra, and R. Kumaresan. Identification of a class of
multivariable systems from impulse response data: Theory and computational algorithm. Circuits, Systems and Signal Processing, 13(6):759–
782, 1994.
[20] P. Lemmerling, L. Vanhamme, S. van Huffel, and B. de Moor. IQML-like algorithms for solving structured total least squares problems: a
unified view. Signal Processing, 81:1935–1945, 2001.
[21] K. Steiglitz and L. E. McBride. A technique for the identification of
linear systems. IEEE Trans. on Automatic Control, 10:461–464, 1965.
[22] J. H. McClellan and D. Lee. Exact equivalence of the Steiglitz-McBride iteration and IQML. IEEE Transactions on Signal Processing,
39(2):509–512, 1991.
[23] P. Stoica and T. Söderström. The Steiglitz-McBride identification
algorithm revisited–convergence analysis and accuracy aspects. IEEE
Transactions on Automatic Control, 26(3):712–717, 1981.
[24] T. Söderström, P. Stoica, and B. Friedlander. An indirect prediction error
method for system identification. Automatica, 27(1):183–188, 1991.
[25] C. L. Byrnes, T. Georgiou, and A. Lindquist. A new approach to
spectral estimation: A tunable high-resolution spectral estimator. IEEE
Transactions on Signal Processing, 48(11):3189–3205, 2000.
[26] M. Zorzi. An interpretation of the dual problem of the THREE-like
approaches. Automatica, 62:87–92, 2015.
[27] E. J. Hannan. The asymptotic theory of linear time-series models.
Journal of Applied Probability, 10(1):130–145, 1973.
[28] J. Durbin. The fitting of time-series models. Revue de l’Institut
International de Statistique, pages 233–244, 1960.
[29] James Durbin. Efficient estimation of parameters in moving-average
models. Biometrika, 46(3/4):306–316, 1959.
[30] D. Mayne and F. Firoozan. Linear identification of ARMA processes.
Automatica, 18(4):461–466, 1982.
[31] E. J. Hannan and L. Kavalieris. Linear estimation of ARMA processes.
Automatica, 19(4):447–448, 1983.
[32] E. Hannan and L. Kavalieris. Multivariate linear time series models.
Advances in Applied Probability, 16(3):492–561, 1984.
[33] G. Reinsel, S. Basu, and S. Yap. Maximum likelihood estimators in the
multivariate autoregressive moving-average model from a generalized
least squares viewpoint. J. Time Series Anal., 13(2):133–145, 1992.
[34] J. M. Dufour and T. Jouini. Asymptotic distributions for quasi-efficient
estimators in echelon VARMA models. Computational Statistics & Data
Analysis, 73:69–86, 2014.
[35] B. Wahlberg. Model reduction of high-order estimated models: the
asymptotic ML approach. Int. Journal of Control, 49(1):169–192, 1989.
16
[36] Y. Zhu. Multivariable System Identification for Process Control. Pergamon, 2001.
[37] Y. Zhu and H. Hjalmarsson. The Box-Jenkins Steiglitz-McBride algorithm. Automatica, 65:170–182, 2016.
[38] N. Everitt, M. Galrinho, and H. Hjalmarsson. Open-loop asymptotically
efficient model reduction with the Steiglitz–McBride method. Automatica, 89:221–234, 2018.
[39] M. Galrinho, C. R. Rojas, and H. Hjalmarsson. A weighted least-squares
method for parameter estimation in structured models. In IEEE Conf.
on Decision and Control, pages 3322–3327, Los Angeles, USA, 2014.
[40] U. Forssell and L. Ljung. Closed-loop identification revisited. Automatica, 35:1215–1241, 1999.
[41] L. Ljung and B. Wahlberg. Asymptotic properties of the least-squares
method for estimating transfer functions and disturbance spectra. Advances in Applied Probabilities, 24:412–440, 1992.
[42] C. Gourieroux and A. Monfort. Statistics and econometric models,
volume 2. Cambridge University Press, 1995.
[43] T. Kailath, A. H. Sayed, and B. Hassibi. Linear Estimation. PrenticeHall, 2000.
[44] M. Galrinho, C. R. Rojas, and H. Hjalmarsson. Estimating models with
high-order noise dynamics using semi-parametric Weighted Null-Space
Fitting. to be submitted (arXiv 1708.03947).
[45] M. Galrinho, C. R. Rojas, and H. Hjalmarsson. On estimating initial
conditions in unstructured models. In IEEE Conference on Decision and
Control, pages 2725–2730, Osaka, Japan, 2015.
[46] W. Larimore. System identification, reduced-order filtering and modeling
via canonical variate analysis. In American Control Conference, 1983.
[47] K. Peternell, W. Scherrer, and M. Deistler. Statistical analysis of novel
subspace identification methods. Signal Processing, 52:161–177, 1996.
[48] P. van Overschee and B. de Moor. N4SID: Subspace algorithms for the
identification of combined deterministic-stochastic systems. Automatica,
30:75–93, 1994.
[49] P. Stoica and M. Jansson. MIMO system identification: state-space
and subspace approximations versus transfer function and instrumental
variables. IEEE Trans. on Signal Processing, 48(11):3087–3099, 2002.
[50] V. Peller. Hankel operators and their applications. Springer, 2003.
[51] T. Söderström and P. Stoica. System Identification. Prentice Hall, 1989.
Miguel Galrinho was born in 1988. He received his
M.S. degree in aerospace engineering in 2013 from
Delft University of Technology, The Netherlands,
and the Licentiate degree in electrical engineering
in 2016 from KTH Royal Institute of Technology,
Stockholm, Sweden.
He is currently a PhD student at KTH, with the
Department of Automatic Control, School of Electrical Engineering, under supervision of Professor
Håkan Hjalmarsson. His research is on least-squares
methods for identification of structured models.
Cristian R. Rojas (M’13) was born in 1980. He
received the M.S. degree in electronics engineering
from the Universidad Técnica Federico Santa María,
Valparaíso, Chile, in 2004, and the Ph.D. degree in
electrical engineering at The University of Newcastle, NSW, Australia, in 2008.
Since October 2008, he has been with the Royal
Institute of Technology, Stockholm, Sweden, where
he is currently Associate Professor at the Department
of Automatic Control, School of Electrical Engineering. His research interests lie in system identification
and signal processing.
IEEE TRANSACTIONS ON AUTOMATIC CONTROL
Håkan Hjalmarsson (M’98–SM’11–F’13) was born in 1962. He received the M.S. degree in electrical engineering in 1988, and the Licentiate and Ph.D. degrees in automatic control in 1990 and 1993, respectively, all from Linköping University, Linköping, Sweden.
He has held visiting research positions at California Institute of Technology, Louvain University, and at the University of Newcastle, Australia. His research interests include system identification, signal processing, control and estimation in communication networks and automated tuning of controllers.
Dr. Hjalmarsson has served as an Associate Editor for Automatica (1996–
2001) and for the IEEE Transactions on Automatic Control (2005–2007),
and has been Guest Editor for the European Journal of Control and Control
Engineering Practice. He is a Professor at the School of Electrical Engineering,
KTH, Stockholm, Sweden. He is a Chair of the IFAC Coordinating Committee
CC1 Systems and Signals. In 2001, he received the KTH award for outstanding
contribution to undergraduate education. He is co-recipient of the European
Research Council advanced grant.
Age Minimization in Energy Harvesting
Communications: Energy-Controlled Delays
Ahmed Arafa
Sennur Ulukus
arXiv:1712.03945v1 [] 11 Dec 2017
Department of Electrical and Computer Engineering
University of Maryland, College Park, MD 20742
[email protected] [email protected]
Abstract— We consider an energy harvesting source that is collecting measurements from a physical phenomenon and sending
updates to a destination within a communication session time.
Updates incur transmission delays that are a function of the energy
used in their transmission. The more transmission energy used
per update, the faster it reaches the destination. The goal is to
transmit updates in a timely manner, namely, such that the total
age of information is minimized by the end of the communication
session, subject to energy causality constraints. We consider
two variations of this problem. In the first setting, the source
controls the number of measurement updates, their transmission
times, and the amounts of energy used in their transmission
(which govern their delays, or service times, incurred). In the
second setting, measurement updates externally arrive over time,
and therefore the number of updates becomes fixed, at the
expense of adding data causality constraints to the problem. We
characterize age-minimal policies in the two settings, and discuss
the relationship of the age of information metric to other metrics
used in the energy harvesting literature.
I. INTRODUCTION
A source collects measurements from a physical phenomenon and sends information updates to a destination.
The source relies solely on energy harvested from nature to
communicate, and the goal is to send these updates in a timely
manner during a given communication session time, namely,
such that the total age of information is minimized by the end
of the session time. The age of information is the time elapsed
since the freshest update has reached the destination.
Power scheduling in energy harvesting communication systems has been extensively studied in the recent literature.
Earlier works [1]–[4] consider the single-user setting under different battery capacity assumptions, with and without
fading. References [5]–[8] extend this to multiuser settings:
broadcast, multiple access, and interference channels; and [9]–
[13] consider two-hop, relay, and two-way channels.
Minimizing the age of information metric has been studied mostly in a queuing-theoretic framework; [14] studies a
source-destination link under random and deterministic service
times. This is extended to multiple sources in [15]. References
[16]–[18] consider variations of the single source system, such
as randomly arriving updates, update management and control,
and nonlinear age metrics, while [19] shows that last-come-first-serve policies are optimal in multi-hop networks.
This work was supported by NSF Grants CNS 13-14733, CCF 14-22111,
CCF 14-22129, and CNS 15-26608.
Our work is most closely related to [20], [21], where age
minimization in single-user energy harvesting systems is considered; the difference of these works from energy harvesting
literature in [1]–[13] is that the objective is age of information
as opposed to throughput or transmission completion time,
and the difference of them from age minimization literature
in [14]–[19], [22] is that sending updates incurs energy expenditure where energy becomes available intermittently. [20]
considers random service time (time for the update to take
effect) and [21] considers zero service time. Recently in [23],
we considered a fixed non-zero service time in two-hop and
single-hop settings. In our work here, we consider an energy-controlled (variable) service time in a single-user setting.
We consider a source-destination pair where the source
relies on energy harvested from nature to send information
updates to the destination. Different from [20], [21], updates’
service times depend on the amounts of energy used to send
them; the higher the energy used to send an update, the faster
it reaches the destination. Hence, a tradeoff arises; given an
amount of energy available at the source, it can either send a
small number of updates with relatively small service times, or
it can send a larger number of updates with relatively higher
service times. In this paper, we investigate this tradeoff and
characterize the optimal solution in the offline setting. We
formulate the most general setting of this problem where the
source decides on the number of updates to be sent, when
to send them, and the amounts of energy consumed in their
transmission (and therefore the amounts of service times or
delays they incur), such that the total age of information is
minimized by the end of the session time, subject to energy
causality constraints. We present some structural insights of
the optimal solution in this general setting, and propose an
iterative solution. Our results show that the optimal number
of updates depends on the parameters of the problem: the
amounts and times of the harvested energy, delay-energy
consumption relationship, and the session time.
We also consider the scenario where update arrival times
at the source (measurement times) cannot be controlled; they
arrive during the communication session. Thus, two main
changes occur to the previously mentioned model. First, the
total number of updates gets fixed; and second, data causality
constraints are enforced, since the source cannot transmit an
update before receiving it. We formulate the problem in this
setting and characterize its optimal solution.
Fig. 1. Age evolution versus time in a controlled measurement times system, with N = 3 updates. [Figure omitted: age grows linearly, drops at each delivery time t_i + d_i; the area decomposes into trapezoids Q_1, Q_2, Q_3, a triangle L, and inter-update times x_1, ..., x_4.]

Fig. 2. Age evolution versus time in a system where N = 3 update measurements arrive during communication. [Figure omitted: arrivals at a_1, a_2, a_3, deliveries at t_i + d_i.]

II. SYSTEM MODEL AND PROBLEM FORMULATION

A source node acquires measurement updates from some physical phenomenon and sends them to a destination during a communication session of duration T time units. Updates need to be sent as timely as possible, i.e., such that the total age of information is minimized by time T. The age of information metric is defined as

a(t) ≜ t − U(t), ∀t   (1)

where U(t) is the time stamp of the latest received information (measurement) update, i.e., the time at which it was acquired at the source. Without loss of generality, we assume a(0) = 0. The objective is to minimize the following quantity

A_T ≜ ∫_0^T a(t) dt   (2)

The source powers itself using energy harvested from nature, and is equipped with an infinite battery to store its incoming energy. Energy is harvested in packets of sizes E_j at times s_j, 1 ≤ j ≤ M. Without loss of generality, we assume s_1 = 0. The total energy harvested by time t is

E(t) = ∑_{j: s_j ≤ t} E_j   (3)

We denote by e_i the energy used in transmitting update i, and denote by d_i its transmission delay (service time) until it reaches the destination. These are related as follows

e_i = f(d_i)   (4)

where f is a decreasing convex function¹. Let t_i denote the transmission time of update i. The following then holds

∑_{i=1}^k f(d_i) ≤ ∑_{i=1}^k E(t_i),  ∀k   (5)

which represent the energy causality constraints [1], which mean that energy cannot be used in transmission prior to being harvested. We also have the service time constraints

t_i + d_i ≤ t_{i+1},  ∀i   (6)

which ensure that there can be only one transmission at a time.

¹This relationship is valid, for instance, if the channel is AWGN. With normalized bandwidth and noise variance, we have f(d) = d(2^{2B/d} − 1), with B denoting the size of the update packet in bits [24].
A. Controlled Measurements

In this setting, the source controls when to take a new measurement update, and the goal is to choose the total number of updates N, transmission times {t_i}_{i=1}^N, and delays {d_i}_{i=1}^N, such that A_T is minimized, subject to energy causality constraints in (5) and service time constraints in (6). We note that the source should start the transmission of an update measurement whenever it is acquired. Otherwise, its age can only increase. In Fig. 1, an example run of the age evolution versus time is presented in a system with N = 3 updates.

The area under the age curve is given by the sum of the areas of the three trapezoids Q_1, Q_2, and Q_3, plus the area of the triangle L. The area of Q_2 for instance is given by (1/2)(t_2 + d_2 − t_1)² − (1/2)d_2². Computing the area for a general N updates, we formulate the problem as follows

min_{N,t,d}  ∑_{i=1}^N [(t_i + d_i − t_{i−1})² − d_i²] + (T − t_N)²
s.t.  t_i + d_i ≤ t_{i+1},  1 ≤ i ≤ N
      ∑_{i=1}^k f(d_i) ≤ ∑_{i=1}^k E(t_i),  1 ≤ k ≤ N   (7)

with t_0 ≜ 0 and t_{N+1} ≜ T.
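As a quick numerical sanity check of the trapezoid decomposition behind (7), here is a short sketch (the function names and helper structure are ours, not the paper's) comparing direct piecewise integration of a(t) with half the objective of (7), which equals A_T:

```python
def age_integral(t, d, T):
    """A_T by piecewise integration of a(s) = s - U(s): between deliveries
    the age grows with slope 1; at delivery time t_i + d_i the time stamp
    U(s) jumps to t_i."""
    events = [0.0] + [ti + di for ti, di in zip(t, d)] + [T]
    stamps = [0.0] + list(t)  # U(s) on each inter-delivery segment
    area = 0.0
    for j in range(len(events) - 1):
        e0, e1, u = events[j], events[j + 1], stamps[j]
        area += 0.5 * ((e1 - u) ** 2 - (e0 - u) ** 2)
    return area

def age_closed_form(t, d, T):
    """Half the objective of problem (7): trapezoids plus the final
    triangle, with t_0 = 0."""
    tprev = [0.0] + list(t[:-1])
    s = sum((t[i] + d[i] - tprev[i]) ** 2 - d[i] ** 2 for i in range(len(t)))
    return 0.5 * (s + (T - t[-1]) ** 2)
```

For instance, for t = [1, 3], d = [0.5, 0.5], T = 5 both evaluate to 6.0.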
B. Externally Arriving Measurements

In this setting, measurement updates arrive during the communication session at times {a_i}_{i=1}^N, where N is now fixed. We now have the following constraints

t_i ≥ a_i,  ∀i   (8)

representing the data causality constraints [1], which mean that updates cannot be transmitted prior to being received at the source. In Fig. 2, we show an example of the age evolution in a system with N = 3 arriving updates. The area of Q_2 in this case is given by (1/2)(t_2 + d_2 − a_1)² − (1/2)(t_2 + d_2 − a_2)² and the area of L is the constant term (1/2)(T − a_3)². Computing the area for general N update arrivals, we write the objective function as ∑_{i=1}^N [(t_i + d_i − a_{i−1})² − (t_i + d_i − a_i)²], with a_0 ≜ 0. This can be further simplified after some algebra to get the following problem formulation²

min_{t,d}  ∑_{i=1}^N (a_i − a_{i−1})(t_i + d_i)
s.t.  t_i + d_i ≤ t_{i+1},  1 ≤ i ≤ N
      ∑_{i=1}^k f(d_i) ≤ ∑_{i=1}^k E(t_i),  1 ≤ k ≤ N
      t_i ≥ a_i,  1 ≤ i ≤ N   (9)
We note that both problems (7) and (9) are non-convex.
One main reason is that the total energy arriving up to time
t, E(t), is not concave in t. Henceforth, in the next sections,
we solve the two problems when all the energy packets arrive
at the beginning of communication, i.e., when M = 1 energy
arrival. In this case E(t) = E, ∀t. The solutions in the case
of multiple energy arrivals follow similar structures.
III. CONTROLLED MEASUREMENTS

In this section, we focus on problem (7) with a single energy arrival. We first have the following lemma.

Lemma 1 In problem (7), all energy is consumed by the end of communication.

Proof: By direct first derivatives, we observe that the objective function is increasing in {d_i}. Thus, if not all energy is consumed, then one can simply use the remaining amount to decrease the last service time and achieve lower age.

Next, we apply the change of variables x_1 ≜ t_1 + d_1, x_i ≜ t_i + d_i − t_{i−1} for 2 ≤ i ≤ N, and x_{N+1} ≜ T − t_N. Then, we must have ∑_{i=1}^{N+1} x_i = T + ∑_{i=1}^N d_i, which reflects the dependence relationship between the variables. This can also be seen geometrically in Fig. 1. Then, the problem becomes

min_{N,x,d}  ∑_{i=1}^{N+1} x_i² − ∑_{i=1}^N d_i²
s.t.  x_1 ≥ d_1
      x_i ≥ d_i + d_{i−1},  2 ≤ i ≤ N
      x_{N+1} ≥ d_N
      ∑_{i=1}^{N+1} x_i = T + ∑_{i=1}^N d_i
      ∑_{i=1}^N f(d_i) ≤ E   (10)

The variables {x_i}_{i=1}^{N+1} control the inter-update times, which are lower bounded by the service times {d_i}_{i=1}^N, which are in turn controlled by the amount of harvested energy, E. We propose an iterative algorithm to find the optimal inter-update times {x_i^*} given the optimal number of updates N^* and the optimal service times {d_i^*}. This is described as follows.

Let {x̄_i}_{i=1}^{N+1} denote the output of this algorithm, and let us define the stopping condition to be when ∑_{i=1}^{N^*+1} x̄_i = T + ∑_{i=1}^{N^*} d_i^*. We initialize by setting x̄_1 = d_1^*; x̄_i = d_i^* + d_{i−1}^*, 2 ≤ i ≤ N; and x̄_{N+1} = d_N^*. We then check the stopping condition. If it is not satisfied, we compute m_1 ≜ arg min_i x̄_i, and increase x̄_{m_1} until either the stopping condition is satisfied, or x̄_{m_1} is equal to min_{i≠m_1} x̄_i. In the latter case, we compute m_2 ≜ arg min_{i≠m_1} x̄_i, and increase both x̄_{m_1} and x̄_{m_2} simultaneously until either the stopping condition is satisfied, or they are both equal to min_{i∉{m_1,m_2}} x̄_i. In the latter case, we compute m_3 and proceed similarly as above until the stopping condition is satisfied. Note that if m_k is not unique at some stage k of the algorithm, we increase the whole set {x̄_i, i ∈ m_k} simultaneously.

The above algorithm has a water-filling flavor; it evens out the x_i's to the extent allowed by the service times d_i's and the session time T, while keeping them as low as possible. The next lemma shows its optimality.

Lemma 2 In problem (10), given N^* and {d_i^*}_{i=1}^N, the optimal x_i^* = x̄_i, 1 ≤ i ≤ N + 1.

Proof: First, note that the algorithm initializes the x_i's by their least possible values. If this satisfies the stopping (feasibility) condition, then it is optimal. Otherwise, since we need to increase at least one of the x_i's, the algorithm chooses the least one; this gives the least objective function since y < z implies (y + ε)² < (z + ε)² for y, z ≥ 0 and ε > 0. Next, observe that while increasing one of the x_i's, if the stopping condition is satisfied, then we have reached the minimal feasible solution. Otherwise, if two x_i's become equal, then by convexity of the square function, it is optimal to increase both of them simultaneously [25]. This shows that each step of the algorithm is optimal, and hence it achieves the age-minimal solution.

We note that the above algorithm is essentially a variation of the solution of the single-hop problem in [23]. There, all the inter-update delays are fixed, while here they can be different.

Next, we present an example to show how the choice of the number of updates and inter-update delays affects the solution, in a specific scenario. In particular, we focus on the case where the inter-update delays are fixed for all update packets, i.e., d_i = d, ∀i. In this case, by Lemma 1, for a given N, the optimal inter-update delay is given by d = f^{−1}(E/N). We can then use the algorithm above to find the optimal x_i's, as shown in Lemma 2. For example, we consider a system with energy E = 20 energy units, with f(d) = d(2^{2/d} − 1), and T = 10 time units. We plot the optimal age in this case versus N in Fig. 3. We see that the optimal number of updates is equal to 5; it is not optimal to send too few or too many updates (the maximum feasible is 7 in this example). This echoes the early results in [14], where the optimal rate of updating is not the maximum (throughput-wise) or the minimum (delay-wise), but rather lies in between.

²An inherent assumption in this model is that 0 < a_1 < a_2 < · · · < a_N. Otherwise the areas of the trapezoids become 0 and the problem becomes degenerate. Also, the parameters of the problem are such that it is feasible.
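The evening-out procedure above can be sketched in code as follows (a minimal illustration with names of our own choosing; it assumes the given service times are feasible for the session time T):

```python
def inter_update_times(d, T, eps=1e-9):
    """Water-filling-style search for x_1, ..., x_{N+1}: start from the
    lower bounds x_1 = d_1, x_i = d_i + d_{i-1}, x_{N+1} = d_N, then
    repeatedly raise the currently smallest entries together until
    sum(x) = T + sum(d)."""
    N = len(d)
    x = [d[0]] + [d[i] + d[i - 1] for i in range(1, N)] + [d[-1]]
    budget = T + sum(d) - sum(x)  # slack still to be distributed
    assert budget >= -eps, "infeasible: service times exceed the session time"
    while budget > eps:
        m = min(x)
        low = [i for i in range(N + 1) if x[i] <= m + eps]   # argmin set
        rest = [x[i] for i in range(N + 1) if x[i] > m + eps]
        ceiling = min(rest) if rest else float("inf")        # next level up
        target = min(ceiling, m + budget / len(low))
        for i in low:
            x[i] = target
        budget -= len(low) * (target - m)
    return x
```

For d = [1, 1, 1] and T = 10 the output is [3.25, 3.25, 3.25, 3.25], which sums to T + ∑ d_i = 13; when the lower bounds are unequal, only the smaller entries are raised, exactly as in the description above.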
Fig. 3. Age of information versus number of updates. [Plot omitted: the age decreases from N = 1, attains its minimum at N = 5, and increases up to the maximum feasible N = 7.]

IV. EXTERNALLY ARRIVING MEASUREMENTS

In this section, we solve problem (9) with a single energy arrival. We observe that the problem in this case is convex and can be solved by standard techniques [25]. We first have the following lemma.

Lemma 3 In problem (9), the optimal update times satisfy

t_1^* = a_1   (11)
t_i^* = max{ a_i, a_{i−1} + d_{i−1}^*, . . . , a_1 + ∑_{j=1}^{i−1} d_j^* },  i ≥ 2   (12)

Proof: This follows directly from the constraints of problem (9); the optimal update times should always be equal to their lower bounds. Hence, we have t_1^* = a_1, t_2^* = max{a_2, t_1^* + d_1^*} = max{a_2, a_1 + d_1^*}, t_3^* = max{a_3, t_2^* + d_2^*} = max{a_3, a_2 + d_2^*, a_1 + d_1^* + d_2^*}, and so on.

By the previous lemma, the problem now reduces to finding the optimal inter-update delays {d_i^*}. We note that starting from t_1^* = a_1, we have two choices for t_2^*; either a_2 or a_1 + d_1^*. Once t_2^* is fixed, t_3^* in turn has two choices; either a_3 or t_2^* + d_2^*. Now observe that once a choice pattern is fixed, the objective function of problem (9) will be given by ∑_{i=1}^N c_i d_i where c_i > 0 is a constant that depends on the choice pattern. For instance, for N = 3, choosing the pattern t_2^* = a_1 + d_1^* and t_3^* = a_3 gives c_1 = a_2, c_2 = a_2 − a_1, and c_3 = a_3 − a_2. We introduce the following Lagrangian for this problem [25]

L = ∑_{i=1}^N c_i d_i + λ ( ∑_{i=1}^N f(d_i) − E )   (13)

where λ is a non-negative Lagrange multiplier. The KKT conditions are

c_i = −λ f′(d_i)   (14)

Hence, the optimal λ^* is given by the unique solution of

∑_{i=1}^N h(−c_i/λ^*) = E   (15)

where h ≜ f ∘ g and g ≜ (f′)^{−1}. To see this, note that since f is convex, it follows that g exists and is increasing. By (14), we then have d_i^* = g(−c_i/λ^*). Substituting in the energy constraint, which has to be satisfied with equality, gives (15). By monotonicity of f and g, h is also monotone, and therefore (15) has a unique solution in λ^*.
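Since h is monotone, (15) lends itself to bisection. The sketch below (our own naming, not from the paper) inverts −f′ numerically instead of forming g in closed form, and the usage example plugs in the AWGN-style f from footnote 1 with B = 1, for which f′(d) = 2^{2/d}(1 − (2 ln 2)/d) − 1:

```python
import math

def solve_kkt(c, E, f, fprime, lam_lo=1e-9, lam_hi=1e9, iters=200):
    """Find lambda* with sum_i f(d_i(lambda)) = E, where d_i(lambda)
    solves c_i = -lambda * f'(d_i).  Assumes f is decreasing and strictly
    convex, so -f' is positive and decreasing and the consumed energy is
    decreasing in lambda; bisection then applies."""
    def d_of(lam, ci, d_lo=1e-6, d_hi=1e9):
        for _ in range(iters):  # invert -f'(d) = ci / lam by bisection
            mid = 0.5 * (d_lo + d_hi)
            if -fprime(mid) > ci / lam:
                d_lo = mid
            else:
                d_hi = mid
        return 0.5 * (d_lo + d_hi)
    def energy_used(lam):
        return sum(f(d_of(lam, ci)) for ci in c)
    for _ in range(iters):
        lam = 0.5 * (lam_lo + lam_hi)
        if energy_used(lam) > E:  # delays too short -> raise lambda
            lam_lo = lam
        else:
            lam_hi = lam
    lam = 0.5 * (lam_lo + lam_hi)
    return lam, [d_of(lam, ci) for ci in c]
```

For weights c = [1.0, 0.5] and E = 3 the returned delays satisfy the energy constraint with equality, and the more heavily weighted update gets the shorter delay, as (14) dictates.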
Therefore, we solve problem (9) by first fixing a choice
pattern for the update times, which gives us a set of constants
{ci } allowing us to solve for λ∗ using (15). We go through
all possible choice patterns and choose the one that is feasible
and gives minimal age.
We finally note that the measurements’ arrival times can be
so close to each other that the optimal solution is such that
t∗i > ai+l for some i and l ≥ 1. That is, there would be l + 1
measurements waiting in the data queue before t∗i . If the total
number of updates can be changed, then this solution can be
made better by transmitting only the freshest, i.e., the (i+l)th,
measurement packet at t∗i and ignoring all the rest. This strictly
improves the age and saves some energy as well. The solution
can be further optimized by re-solving the problem with Ñ =
N − l arriving measurements at times ã1 = a1 , . . . , ãi−1 =
ai−1 , ãi = ai+l , . . . , ãÑ = aN .
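The recursion of Lemma 3, t_1^* = a_1 and t_i^* = max{a_i, t_{i−1}^* + d_{i−1}^*}, together with the weighted objective of (9), is easy to evaluate for a candidate delay vector. A minimal sketch (the helper names are ours):

```python
def update_times(a, d):
    """Earliest feasible transmission times (Lemma 3): each update is sent
    as soon as it has both arrived and the previous transmission ended."""
    t = [a[0]]
    for i in range(1, len(a)):
        t.append(max(a[i], t[-1] + d[i - 1]))
    return t

def age_objective(a, d):
    """Objective of problem (9): sum_i (a_i - a_{i-1}) (t_i + d_i), a_0 = 0."""
    t = update_times(a, d)
    gaps = [a[0]] + [a[i] - a[i - 1] for i in range(1, len(a))]
    return sum(g * (ti + di) for g, ti, di in zip(gaps, t, d))
```

For a = [1, 2, 3] and d = [0.5, 0.5, 0.5] the updates go out at t = [1, 2, 3] and the objective evaluates to 7.5; for a = [1, 1.2, 3] and d = [1, 1, 1], the second arrival waits in queue and t = [1, 2, 3] again, illustrating the max in (12).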
V. DISCUSSION: RELATIONSHIP TO OTHER METRICS
In this section we discuss the relationship between the proposed problems in this work and other well-known problems
in the energy harvesting literature: transmission completion
time minimization, and delay minimization. Reference [1]
introduced the transmission completion time minimization
problem. In this problem, given some amounts of data arriving
during the communication session, the objective is to minimize
the time by which all the data is delivered to the destination,
subject to energy and data causality constraints.
Reference [26] studies this problem from a different perspective. Instead of minimizing the completion time of all the
data, the objective is to minimize the delay experienced by
each bit, which is equal to the difference between the time
of its reception at the receiver and the time of its arrival
at the transmitter. Delay-minimal policies are fundamentally
different than those minimizing completion time. For instance,
in [1], due to the concave rate-power relationship, transmitting
with constant powers in between energy harvests is optimal.
While in [26], the optimal delay-minimal powers are decreasing over time in between energy harvests, since earlier arriving
bits contribute more to the cumulative delay and are thus given
higher priorities (transmission powers and rates).
We note that the age of information minimization problem is similar to the delay minimization problem formulated in [26]. In both problems, there is a time counter that counts time between data transmissions and receptions. In the age of information problem, the time counter starts increasing from the beginning of the communication session. While in the delay problem, the time counter is bit-dependent; it starts increasing only from the moment a new bit enters the system and stops when it reaches the destination.

Fig. 4. Cumulative data packets arriving (blue) and departing (black) versus time with N = 3 data packets. The shaded area in yellow between the two curves represents the total delay of the system. [Figure omitted.]

The delay minimization problem was previously formulated in [27] for the case where the delay is computed per packet, as opposed to per bit in [26] (note that the age is also computed per packet and not per bit). The transmitter in [27] was energy constrained but not harvesting energy over time, which models the case where all energy packets arrive at the beginning of communication. For the sake of comparison, we extend the delay minimization problem in [27] to the energy harvesting case as in [26] and relate it to the age minimization problem considered in this work.

Following the model in Section II-B, the ith arriving data packet waits for t_i − a_i time in queue, and then gets served in d_i time units. Following [26], the total delay is defined as the area in between the cumulative departing data curve and the cumulative arriving data curve. In Fig. 4, we show an example realization using the same transmission, arrival, and service times used in Fig. 2. The solid blue curve represents the cumulative received data packets over time; the dotted black curve represents cumulative departed (served) data packets over time; and the shaded area in yellow represents the total delay D_T. The delay of the first data packet for instance is given by B(t_1 − a_1) + (1/2)B d_1, where B is the length of the data packet in bits. Computing the area for general N arrivals, the delay minimization problem is given by

min_{t,d}  ∑_{i=1}^N (2 t_i + d_i)
s.t.  problem (9) constraints   (16)

We see that minimizing delay in problem (16) is almost the same as minimizing age in problem (9). The main difference is that to minimize age, transmission and service times are weighted by arrival times, while this is not the case when minimizing delay. The reason lies in the definitions of age and delay; the delay of a packet arriving at time a stays the same if it arrives at time a + δ, provided that its service time is the same, and that its transmission time relative to its arrival time is the same. The age of a packet on the other hand is directly affected by changing its arrival time as it represents the time stamp of when the packet arrived, and hence transmission and service times need to change if arrival times change in order to achieve the same age.

REFERENCES
[1] J. Yang and S. Ulukus. Optimal packet scheduling in an energy
harvesting communication system. IEEE Trans. Comm., 60(1):220–230,
January 2012.
[2] K. Tutuncuoglu and A. Yener. Optimum transmission policies for
battery limited energy harvesting nodes. IEEE Trans. Wireless Comm.,
11(3):1180–1189, March 2012.
[3] O. Ozel, K. Tutuncuoglu, J. Yang, S. Ulukus, and A. Yener. Transmission
with energy harvesting nodes in fading wireless channels: Optimal
policies. IEEE JSAC, 29(8):1732–1743, September 2011.
[4] C. K. Ho and R. Zhang. Optimal energy allocation for wireless
communications with energy harvesting constraints. IEEE Trans. Signal
Proc., 60(9):4808–4818, September 2012.
[5] J. Yang, O. Ozel, and S. Ulukus. Broadcasting with an energy harvesting
rechargeable transmitter. IEEE Trans. Wireless Comm., 11(2):571–583,
February 2012.
[6] O. Ozel, J. Yang, and S. Ulukus. Optimal broadcast scheduling for an
energy harvesting rechargeable transmitter with a finite capacity battery.
IEEE Trans. Wireless Comm., 11(6):2193–2203, June 2012.
[7] J. Yang and S. Ulukus. Optimal packet scheduling in a multiple access
channel with energy harvesting transmitters. Journal of Comm. and
Networks, 14(2):140–150, April 2012.
[8] K. Tutuncuoglu and A. Yener. Sum-rate optimal power policies for
energy harvesting transmitters in an interference channel. Journal
Comm. Networks, 14(2):151–161, April 2012.
[9] C. Huang, R. Zhang, and S. Cui. Throughput maximization for the
Gaussian relay channel with energy harvesting constraints. IEEE JSAC,
31(8):1469–1479, August 2013.
[10] D. Gunduz and B. Devillers. Two-hop communication with energy
harvesting. In IEEE CAMSAP, December 2011.
[11] B. Gurakan and S. Ulukus. Cooperative diamond channel with energy
harvesting nodes. IEEE JSAC, 34(5):1604–1617, May 2016.
[12] B. Varan and A. Yener. Delay constrained energy harvesting networks
with limited energy and data storage. IEEE JSAC, 34(5):1550–1564,
May 2016.
[13] A. Arafa, A. Baknina, and S. Ulukus. Energy harvesting two-way
channels with decoding and processing costs. IEEE Trans. Green Comm.
and Networking, 1(1):3–16, March 2017.
[14] S. Kaul, R. Yates, and M. Gruteser. Real-time status: How often should
one update? In IEEE Infocom, March 2012.
[15] R. Yates and S. Kaul. Real-time status updating: Multiple sources. In
IEEE ISIT, July 2012.
[16] C. Kam, S. Kompella, and A. Ephremides. Age of information under
random updates. In IEEE ISIT, July 2013.
[17] M. Costa, M. Codreanu, and A. Ephremides. On the age of information
in status update systems with packet management. IEEE Trans. Info.
Theory, 62(4):1897–1910, April 2016.
[18] A. Kosta, N. Pappas, A. Ephremides, and V. Angelakis. Age and value
of information: Non-linear age case. In IEEE ISIT, June 2017.
[19] A. M. Bedewy, Y. Sun, and N. B. Shroff. Age-optimal information
updates in multihop networks. In IEEE ISIT, June, 2017.
[20] R. D. Yates. Lazy is timely: Status updates by an energy harvesting
source. In IEEE ISIT, June 2015.
[21] B. T. Bacinoglu, E. T. Ceran, and E. Uysal-Biyikoglu. Age of information under energy replenishment constraints. In UCSD ITA, February
2015.
[22] Y. Sun, E. Uysal-Biyikoglu, R. Yates, C. E. Koksal, and N. B. Shroff.
Update or wait: How to keep your data fresh. In IEEE Infocom, April
2016.
[23] A. Arafa and S. Ulukus. Age-minimal transmission in energy harvesting
two-hop networks. In IEEE Globecom, December 2017.
[24] T. Cover and J. A. Thomas. Elements of Information Theory. 2006.
[25] S. P. Boyd and L. Vandenberghe. Convex Optimization. 2004.
[26] T. Tong, S. Ulukus, and W. Chen. Optimal packet scheduling for delay
minimization in an energy harvesting system. In IEEE ICC, June 2015.
[27] J. Yang and S. Ulukus. Delay-minimal transmission for energy constrained wireless communications. In IEEE ICC, 2008.
arXiv:1701.08350v1 [math.DS] 29 Jan 2017
FURSTENBERG ENTROPY OF INTERSECTIONAL
INVARIANT RANDOM SUBGROUPS
YAIR HARTMAN AND ARIEL YADIN
Abstract. We study the Furstenberg-entropy realization problem for stationary actions. It is shown that for finitely supported probability measures on free
groups, any a-priori possible entropy value can be realized as the entropy of an
ergodic stationary action. This generalizes results of Bowen.
The stationary actions we construct arise via invariant random subgroups
(IRSs), based on ideas of Bowen and Kaimanovich. We provide a general framework for constructing a continuum of ergodic IRSs for a discrete group under
some algebraic conditions, which gives a continuum of entropy values. Our tools apply, for example, to certain extensions of the group of finitely supported permutations and to lamplighter groups, hence establishing full realization results for these groups.
For the free group, we construct the IRSs via a geometric construction of
subgroups, by describing their Schreier graphs. The analysis of the entropy
of these spaces is obtained by studying the random walk on the appropriate
Schreier graphs.
1. Introduction
In this paper we study the Furstenberg-entropy realization problem, which we
will describe below. We propose a new generic construction of invariant random
subgroups and we are able to analyse the Furstenberg-entropy via the random
walk and its properties. We apply this construction to free groups and lamplighter groups, generalizing results of Bowen [4] and of the first named author
with Tamuz [16]. In addition, we apply it to the group of finitely supported
infinite permutations and to certain extensions of this group, establishing a full
realization result for this class of groups. Let us introduce the notions and results
precisely.
Let G be a countable discrete group and let µ be a probability measure on G. We will always assume that µ is generating and of finite entropy; that is, the support of µ generates G as a semigroup and the Shannon entropy of µ is finite,
H(µ) := − ∑_g µ(g) log µ(g) < ∞.
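For concreteness, the finite-entropy condition can be computed directly for a finitely supported measure; the following minimal Python sketch (our illustration, not from the paper; the measure is a toy example on the generators of a free group) does so:

```python
import math

def shannon_entropy(mu):
    # H(mu) = -sum_g mu(g) * log(mu(g)), summed over the support of mu
    return -sum(p * math.log(p) for p in mu.values() if p > 0)

# a finitely supported measure on free-group generators (hypothetical toy data)
mu = {"a": 0.25, "a^-1": 0.25, "b": 0.25, "b^-1": 0.25}
print(shannon_entropy(mu))  # log 4 ≈ 1.3862943611198906
```

For a finitely supported µ the sum is finite automatically; the condition H(µ) < ∞ only becomes a restriction for infinitely supported measures.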
2010 Mathematics Subject Classification. Primary .
Y. Hartman is supported by the Fulbright Post-Doctoral Scholar Program. A. Yadin is
supported by the Israel Science Foundation (grant no. 1346/15).
We would like to thank Lewis Bowen and Yair Glasner for helpful discussions.
Suppose that G acts measurably on a standard probability space (X, ν). This induces an action on the probability measure ν given by gν(·) = ν(g^{−1}·). The action G ↷ (X, ν) is called (G, µ)-stationary if ∑_{g∈G} µ(g)gν = ν. Thus, the measure ν is not necessarily invariant, but only “invariant on average”. An important quantity associated to a stationary space is the Furstenberg-entropy, given by
hµ(X, ν) := − ∑_{g∈G} µ(g) ∫_X log (dg^{−1}ν/dν)(x) dν(x).
It is easy to see that hµ (X, ν) ≥ 0 and that equality holds if and only if the
measure ν is invariant (invariant means gν = ν for all g ∈ G). A classical result
of Kaimanovich-Vershik [20] asserts that the Furstenberg-entropy of any (G, µ)-stationary space is bounded above by the random walk entropy
hRW(G, µ) := lim_{n→∞} (1/n) H(µ^n),
where µ^n denotes the n-th convolution power of µ. As a first step of classification of the possible (G, µ)-stationary actions for a given µ, one may consider the following definitions.
Definition 1.1. We say that (G, µ) has an entropy gap if there exists some
c > 0 such that whenever hµ (X, ν) < c for an ergodic (G, µ)-stationary action
(X, ν) then hµ (X, ν) = 0. Otherwise we say that (G, µ) has no-gap.
We say that (G, µ) admits a full realization if any number in [0, hRW (G, µ)]
can be realized as the Furstenberg-entropy of some ergodic (G, µ)-stationary space.
Let us remark that choosing only ergodic actions G y (X, ν) is important.
Otherwise the definitions are non-interesting, since by taking convex combinations
one can always realize any value.
The motivation for the gap definition comes from Nevo’s result in [24] showing
that any discrete group with property (T) admits an entropy gap, for any µ. The
converse however is not true (see e.g. Section 7.2 in [7]) and it is interesting to
describe the class of groups (and measures) with no-gap.
Our main result is the following.
Theorem 1.2. Let µ be a finitely supported, generating probability measure on
the free group Fr on 2 ≤ r < ∞ generators. Then (Fr , µ) admits a full realization.
As will be explained below (see Remark 4.9), the restriction to finitely supported measures µ is technical and it seems that the result should hold in a wider
generality. The case where µ is the simple random walk (uniform measure on
the generators and their inverses) was done by Bowen [4]. We elaborate on the
similarity and differences with Bowen’s result below.
Our next result provides a solution for lamplighter groups.
Theorem 1.3. Let G = (⊕B L) ⋊ B be a lamplighter group, where L and B are non-trivial countable discrete groups. Let µ be a generating probability measure with finite entropy on G and denote by µ̄ the projection of µ onto the quotient
B ≅ G/((⊕B L) ⋊ {e}).
Then, whenever (B, µ̄) is Liouville, (G, µ) admits a full realization.
Here, Liouville is just the property that the random walk entropy is zero. For
example, if the base group B is virtually nilpotent (equivalently, of polynomial
growth in the finitely generated context), then B is Liouville for any measure µ
(see Theorem 2.4 below).
In [16], the first named author and Tamuz prove that a dense set of values can
be realized, under an additional technical condition on the random walk.
In addition, our tools apply also to nilpotent extensions of Sym∗ (X), the group
of finitely supported permutations of an infinite countable set X. For simplicity,
we state here the result for Sym∗ (X) itself, and refer the reader to Section 3.4 for
the precise statement and proof of the general result.
Theorem 1.4. Let µ be a generating probability measure with finite entropy on
Sym∗ (X). Then (Sym∗ (X), µ) admits a full realization.
Following the basic approach of Bowen in [4], our realization results use only a
specific type of stationary actions known as Poisson bundles that are constructed
out of invariant random subgroups. An invariant random subgroup, or IRS,
of G, is a random subgroup whose law is invariant under the natural G-action by conjugation. IRSs serve as a stochastic generalization of a normal subgroup and arise
naturally as the stabilizers of probability measure preserving actions. In fact, any
IRS is obtained in that way (see Proposition 14 in [3] for the discrete case, and [1]
for the general case). Since the term IRS was coined in [2], they have featured in
many works and inspired much research.
In this paper we construct explicit families of IRSs for the free group, the lamplighter group and Sym∗ (X). For further discussion about the structure of IRSs on
these groups the reader is referred to the works of Bowen [5] for the free group,
Bowen, Grigorchuk and Kravchenko [6] for lamplighter groups and Vershik [28]
for the Sym∗ (X) case.
Given an ergodic IRS of a group G, one can construct an ergodic stationary
space which is called a Poisson bundle. These were introduced by Kaimanovich
in [19] and were further studied by Bowen in [4]. In particular, it was shown in
[4] that the Furstenberg-entropy of a Poisson bundle constructed using an IRS,
equals the random walk entropy of the random walk on the associated coset space
(this will be made precise below). Hence our main results can be interpreted as
realization results for the random walk entropies of coset spaces associated with
ergodic IRSs.
Given a subgroup with an infinite conjugacy class, we develop a general tool
to construct a family of ergodic IRSs, that we call intersectional IRSs. This
family of IRSs is obtained by randomly intersecting subgroups from the conjugacy
class (see Section 3). Furthermore, we prove that under a certain condition related to the measure µ and the conjugacy class, the random walk entropy varies
continuously along this family of IRSs (see Section 3.2). In fact, in some cases we
may find a specific conjugacy class that satisfies the above mentioned condition
for any measure µ. An example of such is a conjugacy class that we call “locally co-nilpotent”. These are conjugacy classes whose associated Schreier graphs satisfy a geometric condition which we describe in Section 4.5.
Using such families of IRSs we get our main tool for constructing a continuum
of entropy values.
Proposition 1.5. Let G be a countable discrete group. Assume that there exists a
conjugacy class of subgroups which is locally co-nilpotent. (See just before Corollary
3.6 and Section 4.5 for the precise definition of locally co-nilpotent.) Denote its
normal core by N ◁ G.
Then, for any finite entropy, generating, probability measure µ on G, if G/N
has positive µ-random walk entropy, then (G, µ) has no-gap.
Furthermore, if the normal core N is trivial, then (G, µ) admits a full realization.
Note that this proposition reveals a structural condition on G and its subgroups
that allows one to conclude realization results for many different measures µ at the
same time. Furthermore, these realization results do not depend on the description
of the Furstenberg-Poisson boundary.
By constructing conjugacy classes in lamplighter groups and in Sym∗ (X) with
the relevant properties, we obtain Theorems 1.3 and 1.4. However, the case of the
free group is more complicated. In Section 4.3 we show how to construct many
subgroups of the free group with a conjugacy class which is locally co-nilpotent,
and such that G/N has positive random walk entropy where N is the normal core
of the conjugacy class. This is enough to realize an interval of entropy values
around 0, showing in particular, that the free group admits no-gap for any µ with
finite entropy. The fact that the free group admits no-gap for any finite first
moment µ was proved in [16].
Theorem 1.6. Let µ be a finite entropy generating probability measure on the
free group Fr on 2 ≤ r < ∞ generators. Then (Fr , µ) has an interval of the form
[0, c] of realizable values.
However, there is no single (self-normalizing) subgroup of the free group which
has a locally co-nilpotent conjugacy class and a trivial normal core (see Lemma 3.7).
The importance of the requirement that the normal core N is trivial, is to ensure
that the random walk entropy on G/N is equal to the full random walk entropy.
Hence, to prove a full realization for the free group, we approximate this property.
That is, we find a sequence of conjugacy classes which are all locally co-nilpotent,
and such that their normal cores satisfy that the random walk entropy on the
quotients tends to hRW (G, µ).
Finding such a sequence is quite delicate, and proving it is a bit technical.
The difficulty comes from the fact that entropy is semi-continuous, but in the
“wrong direction” (see Theorem 2.2 and the discussion after). It is for this analysis
that we need to assume that µ is finitely supported, although it is plausible that
this condition can be relaxed. By that we mean that the very same conjugacy
classes we construct in this paper might satisfy that random walk entropy on the
quotients by their normal cores tends to hRW (G, µ), even for infinitely supported
µ as well (perhaps under some further regularity conditions, such as some moment
conditions).
1.1. Intersectional IRSs. We now explain how to construct intersectional IRSs.
Let K ≤ G be a subgroup with an infinite conjugacy class |K G | = ∞ (we
use the notation K g = g −1 Kg for conjugation). We may index this conjugacy
class by the integers: K G = (Kz )z∈Z . It is readily checked that the G-action (by
conjugation) on the conjugacy class K G is translated to an action by permutations
on Z. Namely, for g ∈ G, z ∈ Z, we define g.z to be the unique integer such that
(Kz )g = Kg.z . Thus, G also acts on subsets S ⊂ Z element-wise.
Given a non-empty subset S ⊂ Z, we may associate to S a subgroup CoreS(K) = ⋂_{s∈S} Ks. If S is chosen to be random, with a law invariant under the G-action, then
the random subgroup CoreS (K) is an IRS. Moreover, it is quite simple to get such
a law: for a parameter p ∈ (0, 1] let S be p-percolation on Z; that is, every element
z ∈ Z belongs to S with probability p independently.
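As a toy illustration of this percolation construction (ours, not from the paper, and with a small finite group standing in for the infinite conjugacy class the paper requires), one can sample a p-percolated subset of the conjugacy class of a subgroup of S4 and intersect the sampled conjugates:

```python
import itertools, random

def compose(p, q):
    # permutation composition: (p ∘ q)(i) = p[q[i]]
    return tuple(p[i] for i in q)

def inverse(p):
    q = [0] * len(p)
    for i, pi in enumerate(p):
        q[pi] = i
    return tuple(q)

G = list(itertools.permutations(range(4)))   # the symmetric group S4
K = {(0, 1, 2, 3), (1, 0, 2, 3)}             # subgroup generated by the transposition (0 1)

# the conjugacy class K^G = { g^{-1} K g : g in G }
conjugates = {frozenset(compose(inverse(g), compose(k, g)) for k in K) for g in G}

random.seed(0)
p = 0.5
# p-percolation on the conjugacy class (kept nonempty for the intersection)
S = [C for C in conjugates if random.random() < p] or list(conjugates)
core = set.intersection(*(set(C) for C in S))  # the subgroup Core_S(K)
print(len(conjugates), len(core))
```

In this finite example the intersection of two or more distinct transposition subgroups is already trivial; in the paper's setting the conjugacy class is infinite and the resulting random subgroup Core_S(K) is an IRS.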
There are a few issues with this construction.
• It is not immediately clear what should be associated to the “empty intersection” when S = ∅. In Section 3.1 we give an appropriate definition,
which follows from a construction of an associated “semi-norm” on the
group G. This will turn out to be a normal subgroup of G that we denote
by Core∅ (K).
• Note that p = 1 corresponds to the non-random subgroup CoreG (K), which
is just the normal core of the conjugacy class K G . This is also a normal
subgroup, and CoreG(K) ◁ Core∅(K).
• It seems very reasonable that as p varies, certain quantities related to the
IRS should vary continuously. In Section 3.2 we provide a condition on
the conjugacy class K G that guarantees that the Furstenberg-entropy is
indeed continuous in p.
After establishing continuity of the Furstenberg-entropy in p, one wishes to
show that the full interval [0, hRW (G, µ)] of possible entropy values is realized.
As mentioned above, in the free group we must use a sequence of subgroups
Kn (or conjugacy classes), such that the random walk entropy of the quotient
G/CoreG (Kn ) is large, and push this quotient entropy all the way up to the full
random walk entropy hRW (G, µ).
This leads to a study of CoreG (Kn ), the normal cores of a sequence of subgroups
(Kn )n , with the objective of showing that CoreG (Kn ) are becoming “smaller”
in some sense. As will be discussed below, a naive condition such as having
CoreG (Kn ) converge to the trivial group does not suffice. What is required, is to
approximate the Furstenberg-Poisson boundary of the random walk on the full
group by the boundaries of the quotients. By that we mean that we need to find a
sequence such that one can recover the point in the boundary of the random walk
on the free group, by observing the projections in the boundaries of the quotients
G/CoreG (Kn ).
However, we construct the subgroups Kn geometrically, by specifying their associated Schreier graphs (and not by specifying the algebraic properties of the
subgroup). Hence the structure of CoreG(Kn) and in particular the Furstenberg-Poisson boundary of G/CoreG(Kn) is somewhat mysterious. A further geometric
argument using the random walk on the aforementioned Schreier graph (specifically transience of this random walk) allows us to “sandwich” the entropy of the
G/CoreG (Kn ) and thus show that these entropies tend to the correct limiting
value. This is done in Section 4.
It may be interesting for some readers to note that in the simple random walk
case considered by Bowen [4], he also shows a full realization by growing intervals
of possible entropy values. However, Bowen obtains the intervals of realization differently than us. In fact, he proves that the entropy is continuous when restricting
to a specific family of subgroups which are “tree-like”. This is a global property
of the Schreier graph that enables him to analyze the random walk entropy of
the simple random walk and to prove a continuity result. Next, within this class
of tree-like subgroups he finds paths of IRSs obtaining realization of intervals of
the form [εn , hRW (G, µ)] for εn → 0 (“top down”). While it is enough for a full
realization for the simple random walk, the condition of being tree-like is quite
restrictive, and difficult to apply to other measures µ on the free group.
In contrast, based on Proposition 1.5, our results provide realization on intervals
of the form [0, hRW (G, µ) − εn ] for εn → 0 (“bottom up”). In order to “push” the
entropy to the top we also use a geometric condition, related to trees. However,
this condition is more “local” in nature, and considers only finite balls in the
Schreier graph. In particular, we show in Section 4.6 that it is easy to construct
many subgroups that satisfy our conditions. Also, it enables us to work with any
finitely supported measure.
1.2. Related results. The first realization result is due to Nevo-Zimmer [25],
where they prove that connected semi-simple Lie groups with finite center and
R-rank at least 2 admit only finitely many entropy values under some mixing
condition on the action. They also show that this fails for lower rank groups,
such as P SL2 (R), by finding infinitely many entropy values. However, no other
guarantees are given regarding these values.
The breakthrough of the realization results is the mentioned result of Bowen [4],
where he makes a use of IRSs for realization purposes. He proves a full realization
for simple random walks on free groups Fr where 2 ≤ r < ∞.
In [16], the first named author and Tamuz prove a dense realization for lamplighter groups with Liouville base. Furthermore, it is shown that virtually free
groups, such as SL2 (Z), admit no-gap.
In a recent paper [8], Burton, Lupini and Tamuz prove that the Furstenberg-entropy is an invariant of weak equivalence of stationary actions.
2. Preliminaries
2.1. Stationary spaces. Throughout the paper we assume that G is a discrete
countable group and that µ is a generating probability measure on G.
A measurable action G ↷ (X, ν), where (X, ν) is a standard probability space, is called (G, µ)-stationary if ∑_{g∈G} µ(g)gν = ν. The basic example of a non-invariant (G, µ)-stationary action is the action on the Furstenberg-Poisson boundary, Π(G, µ) (see e.g. [12–15] and references therein for more on this topic, which has an extensive body of literature associated to it).
The Kakutani fixed point theorem implies that whenever G acts continuously on
a compact space X, there is always some µ-stationary measure on X. However,
there are not many explicit descriptions of stationary measures. One way to
construct stationary actions is via the Poisson bundle constructed from an IRS:
Denote by SubG the space of all subgroups of G (recall that G is discrete). This
space admits a topology, induced by the product topology on subsets of G, known
as the Chabauty topology. Under this topology, SubG is a compact space, with
a natural right action by conjugation (K g = g −1 Kg where K ∈ SubG ), which
is continuous. An Invariant Random Subgroup (IRS) is a Borel probability
measure λ on SubG which is invariant under this conjugation action. It is useful
to think of an IRS also as a random subgroup with a conjugation invariant law λ.
For more background regarding IRSs we refer the interested reader to [2, 3].
Fix a probability measure µ on G. Given an IRS λ of G, one can consider the
Poisson bundle over λ, denoted by Bµ (λ). This is a probability space with a (G, µ)
stationary action, and when λ is an ergodic IRS, the action on Bµ (λ) is ergodic.
Poisson bundles were introduced in a greater generality by Kaimanovich in
[19]. In [4] Bowen relates Poisson bundles to IRSs. Although the stationary
spaces that we construct in this paper are all Poisson bundles, we do not elaborate
regarding this correspondence, as this will not be important for understanding our
results. Rather, we refer the interested reader to [4, 16, 17, 19], and mention here
the important features of Poisson bundles that are relevant to our context. To
describe these features, we now turn to describe Schreier graphs.
2.2. Schreier graphs. Since we will use Schreier graphs only in the context of
the free group, we assume in this section that G is a free group. To keep the presentation simple, we consider G = F2 the free group on two generators, generated
by a and b. The generalization to higher rank is straightforward.
Let K ∈ SubG . The Schreier graph associated with K is a rooted oriented
graph, with edges labeled by a and b, which is defined as follows. The set of
vertices is the coset space K\G and the set of edges is {(Kg, Kga), (Kg, Kgb) :
Kg ∈ K\G}. Edges of the type (Kg, Kga) are labeled by a and of the type
(Kg, Kgb) are labeled by b. It may be the case that there are multiple edges
between vertices, as well as self loops. The root of the graph is defined to be the
trivial coset K.
From this point on, it is important that we are working with the free group,
and not a general group which is generated by two elements. Consider an abstract
(connected) rooted graph with oriented edges. Suppose that every vertex has
exactly two incoming edges labeled by a and b, and two outgoing edges labeled by
a and b. We may then consider this graph as a Schreier graph of the free group.
Indeed, given such a graph Γ one can recover the subgroup K: Given any
element g = w1 · · · wn , of the free group (where w1 · · · wn is a reduced word in
{a, a−1 , b, b−1 }) and a vertex v in the graph, one can “apply the element g to the
vertex v” to get another vertex denoted v.g, as follows:
• Given wi ∈ {a, b} move from v to the adjacent vertex u = v.wi such that
the oriented edge (v, u) is labeled by wi .
• Given wi ∈ {a−1 , b−1 } move from v to the adjacent vertex u = v.wi such
that the oriented edge (u, v) is labeled wi−1 (that is, follow the label wi
using the reverse orientation).
• Given g = w1 · · · wn written as a reduced word in the generators, apply the
generators one at a time, to get v.g = (· · · ((v.w1 ).w2 ) . . .).wn .
Thus, we arrive at a right action of G on the vertex set. Note that this action is
not by graph automorphisms, rather only by permutations of the vertex set. We
say that g preserves v if v.g = v.
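The right action just described can be sketched in code (our toy graph, not one from the paper): a 3-vertex graph whose a-edges form a cycle and whose b-edges are self-loops, so the stabilizer of the root consists of the words whose total a-exponent is divisible by 3:

```python
# a-edges: i -> i+1 (mod 3); b-edges: self-loops at every vertex
a = {0: 1, 1: 2, 2: 0}
b = {0: 0, 1: 1, 2: 2}
MOVE = {"a": a, "b": b,
        "A": {v: k for k, v in a.items()},  # "A" stands for a^{-1}
        "B": {v: k for k, v in b.items()}}  # "B" stands for b^{-1}

def act(v, word):
    # the right action v.g: apply the generators of a reduced word one at a time
    for c in word:
        v = MOVE[c][v]
    return v

# K = all words preserving the root 0, i.e. with total a-exponent ≡ 0 (mod 3)
root = 0
in_K = [w for w in ["aaa", "a", "b", "aab", "abaaB"] if act(root, w) == root]
print(in_K)  # -> ['aaa', 'b', 'abaaB']
```

Rooting the same graph at vertex 1 instead of 0 recovers a conjugate of K, matching the discussion of root changes below.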
To recover the subgroup K ≤ F2 from a graph Γ as above, let K be the set of all
elements of the free group that preserve the root vertex. Moreover, changing the
root - that is, recovering the graph which is obtained by taking the same labeled
graph but when considering a different vertex as the root - yields a conjugation
of K as the corresponding subgroup. Namely, if the original root was v with a
corresponding subgroup K, and if the new root is v.g for some g, then one gets the
corresponding subgroup K g . Hence when ignoring the root, one can think of the
Schreier graph as representing the conjugacy class K G . In particular, if K is the
subgroup associated with a vertex v, and g is such that when rooting the Schreier
graph in v and in v.g we get two isomorphic labeled graphs, then g belongs to the
normalizer NG (K) = {g ∈ G : K g = K}.
The subgroups of the free group that we will construct will be described by their
Schreier graph. In the following section we explain the connection between Schreier
graphs and Poisson bundles, which is why using Schreier graphs to construct
subgroups is more adapted to our purpose.
2.3. Random walks on Schreier graphs. Fix a generating probability measure
µ on a group G. Let (Xt)_{t≥1} be i.i.d. random variables taking values in G with
law µ, and let Zt = X1 · · · Xt be the position of the µ-random walk at time t.
Let K be a subgroup. Although this discussion holds for any group, since we
defined Schreier graphs in the context of the free group, assume that G = F2 .
The Markov chain (KZt )t has a natural representation on the Schreier graph
associated with K, by just considering the position on the graph at time t to be
v.Zt where v is a root associated with K. (This is the importance of considering
left-Schreier graphs and right-random walks.) This is a general correspondence,
not special to the free group.
The boundary of the Markov chain KZt where K is fixed, is the boundary
of the random walk on the graph that starts in v, namely, the collection of tail
events of this Markov chain. The Poisson bundle is a bundle of such boundaries,
or alternatively, it is the boundary of the Markov chain KZt where K is a random
subgroup distributed according to λ. This perspective was our intuition behind
the construction in Section 4.
2.4. Random walk entropy. Recall that we assume throughout that all random walk measures µ have finite entropy, that is, H(Z1) = − ∑_g P[Z1 = g] log(P[Z1 = g]) < ∞, where Zt as usual is the position of the µ-random walk at time t. The sequence of numbers H(Zt) is sub-additive, so the following limit exists and is finite.
Definition 2.1. The random walk entropy of (G, µ) is given by
hRW(G, µ) = lim_{t→∞} (1/t) H(Zt).
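This limit can be approached numerically; the following sketch (our illustration, not part of the paper) computes H(Zt)/t exactly for the simple random walk on F2 by convolving the step distribution on reduced words. By sub-additivity and Fekete's lemma, every term H(Zt)/t is an upper bound for hRW:

```python
import math
from collections import defaultdict

INV = {"a": "A", "A": "a", "b": "B", "B": "b"}  # letter -> its inverse

def mul(u, v):
    # multiply two reduced words in F2, cancelling at the junction
    out = list(u)
    for c in v:
        if out and out[-1] == INV[c]:
            out.pop()
        else:
            out.append(c)
    return "".join(out)

def convolve(mu, nu):
    # distribution of the product XY with X ~ mu and Y ~ nu independent
    res = defaultdict(float)
    for u, p in mu.items():
        for v, q in nu.items():
            res[mul(u, v)] += p * q
    return dict(res)

def shannon(mu):
    return -sum(p * math.log(p) for p in mu.values() if p > 0)

# simple random walk step: uniform on the four generators a, a^-1, b, b^-1
step = {g: 0.25 for g in "aAbB"}
dist, ests = step, []
for t in range(1, 7):
    ests.append(shannon(dist) / t)  # H(Z_t)/t, an upper bound for h_RW
    dist = convolve(dist, step)
print(ests)
```

For the simple random walk on F2 the limit is known to equal (1/2) log 3 ≈ 0.55, so the printed sequence decreases from log 4 toward that value (slowly; convergence is not fast in t).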
The Furstenberg-entropy of a (G, µ)-stationary space (X, ν) is given by
hµ(X, ν) = − ∑_{g∈G} µ(g) ∫_X log (dg^{−1}ν/dν)(x) dν(x).
By Jensen’s inequality, hµ (X, ν) ≥ 0 and equality holds if and only if the action
is a measure preserving action. Furthermore, Kaimanovich and Vershik proved
in [20] that hµ (X, ν) ≤ hRW (G, µ) for any stationary action (X, ν), and equality
holds for the Furstenberg-Poisson boundary, namely
(2.1) hµ(Π(G, µ)) = hRW(G, µ),
where Π(G, µ) denotes the Furstenberg-Poisson boundary of (G, µ).
If hRW (G, µ) = 0 the pair (G, µ) is called Liouville, which is equivalent to
triviality of the Furstenberg-Poisson boundary.
The Furstenberg-entropy realization problem is to find the exact values
in the interval [0, hRW (G, µ)] which are realized as the Furstenberg entropy of
ergodic stationary actions.
In a similar way to the Kaimanovich-Vershik formula (2.1), Bowen proves the
following:
Theorem 2.2 (Bowen, [4]). Let λ be an IRS of G. Then
(2.2) hµ(Bµ(λ)) = lim_{t→∞} (1/t) ∫_{SubG} H(KZt) dλ(K) = inf_t (1/t) ∫_{SubG} H(KZt) dλ(K).
Moreover, the (G, µ)-stationary space is ergodic if (and only if ) the IRS λ is
ergodic.
This reduces the problem of finding (G, µ)-stationary actions with many Furstenberg entropy values in [0, hRW (G, µ)], to finding IRSs λ with many different random walk entropy values.
Regarding continuity, recall that the space SubG is equipped with the Chabauty
topology. For finitely generated groups this topology is induced by the natural
metric for which two subgroups are close if they share the exact same elements
from a large ball in G. One concludes that the Shannon entropy, H, as a function
on SubG is a continuous function. The topology on SubG induces a weak* topology
on Prob(SubG), the space of IRSs of G. By the continuity of H and the second
equality in (2.2) we get that the Furstenberg-entropy of Poisson bundles of the
form Bµ (λ) is an upper semi-continuous function of the IRS. Since the entropy is
non-negative we conclude:
Corollary 2.3 (Bowen, [4]). If λn → λ in the weak* topology on Prob(SubG),
such that hµ (Bµ (λ)) = 0 then hµ (Bµ (λn )) → 0.
However, it is important to notice that the Furstenberg-entropy is far from
being a continuous function of the IRS. Indeed, consider the free group, which is
residually finite. There exists a sequence of normal subgroups Nn , such that Nn \F2
is finite for any n, which can be chosen so that, when considered as IRSs λn = δNn ,
we have λn → δ{e} . In that case hµ (Bµ (λn )) = 0 for all n (because H(Nn Zt ) is
bounded in t), but Bµ (δ{e} ) is the Furstenberg-Poisson boundary of the free group,
and its Furstenberg entropy is positive for any µ (follows for example from the
non-amenability of the free group).
We want to point out that this lack of continuity makes it delicate to find
conditions on quotients of the free group to have large entropy. In Section 4
we will deal with this problem by introducing a tool to prove that the entropy
does converge along a sequence of subgroups which satisfy some geometric
conditions.
The free group is also residually nilpotent, so one can replace the finite quotient
from the discussion above by infinite nilpotent quotients. We arrive at a similar
discontinuity of the entropy function, due to the following classical result. As we
will use it in the sequel, we quote it here for future reference. The original case of
Abelian groups is due to Choquet-Deny [9] and the nilpotent case appears in [27].
Theorem 2.4 (Choquet-Deny-Raugi). Let G be a nilpotent group. For any probability measure µ on G with finite entropy, the pair (G, µ) is Liouville (i.e. hRW (G, µ) =
0).
We mention that the triviality of the Furstenberg-Poisson boundary holds for
general µ without any entropy assumption, although in this paper we consider only
µ with finite entropy. The proof of the Choquet-Deny Theorem follows from the
fact that the center of the group always acts trivially on the Furstenberg-Poisson
boundary and hence Abelian groups are always Liouville. An induction argument
extends this to nilpotent groups as well.
3. Intersectional IRSs
In this section we show how to construct a family of IRSs, given an infinite
conjugacy class.
Let G be a countable discrete group, and let K ≤ G be a subgroup with infinitely
many different conjugates |K G | = ∞. Equivalently, the normalizer NG (K) := {g ∈
G : K g = K} is of infinite index in G.
Recall that the G-action on K^G is the right action K^g = g^{−1}Kg. Since K^{ng} = g^{−1}n^{−1}Kng = g^{−1}Kg = K^g for any n ∈ NG(K), the K-conjugation depends only on the coset NG(K)g. That is, the conjugation K^θ for a right coset θ ∈ NG(K)\G is well defined.
We use 2^{NG(K)\G} to denote the family of all subsets of NG(K)\G. This is identified canonically with {0, 1}^{NG(K)\G}. Note that G acts from the right on subsets in 2^{NG(K)\G}, via Θ.g = {θg : θ ∈ Θ}. Given a non-empty subset Θ ⊂ NG(K)\G we define the subgroup
CoreΘ(K) = ⋂_{θ∈Θ} K^θ.
Claim 3.1. The map ϕ : 2NG (K)\G → SubG defined by ϕ(Θ) = CoreΘ (K) is
G-equivariant.
In particular, if we denote by λp,K the ϕ-push-forward of the Bernoulli-p product
measure on 2NG (K)\G , then λp,K is an ergodic IRS for any p ∈ (0, 1).
Proof. Note that
(CoreΘ(K))^g = (⋂_{θ∈Θ} K^θ)^g = ⋂_{θ∈Θ} (K^θ)^g = ⋂_{θ′∈Θ.g} K^{θ′} = Core_{Θ.g}(K).
It follows that we can push forward any G-invariant measure on 2^{NG(K)\G} to get IRSs. For any p ∈ (0, 1) consider the Bernoulli-p product measure on 2^{NG(K)\G} ≅ {0, 1}^{NG(K)\G}; namely, each element of NG(K)\G is chosen to be in Θ independently with probability p. These measures are clearly ergodic invariant measures.
It follows that the push-forward measures λp,K are ergodic invariant measures on
SubG .
We continue to determine the limits of λp,K as p → 0 and p → 1. When the
subgroup K is clear from the context, we write λp = λp,K .
Lemma 3.2. Given K ≤ G with |K^G| = ∞, there exist two normal subgroups Core∅(K), CoreG(K) ◁ G such that λp,K → δ_{Core∅(K)} as p → 0 and λp,K → δ_{CoreG(K)} as p → 1, where the convergence is in the weak* topology on Prob(SubG).
While the definition of CoreG (K) is apparent, it is less obvious how to define
Core∅ (K). We now give an intrinsic description of the normal subgroup Core∅ (K).
3.1. The subgroup Core∅ (K). For any element g ∈ G, let Ωg ⊂ NG (K)\G be
the set of all cosets, such that g belongs to the corresponding conjugation: Ωg =
{θ ∈ NG (K)\G | g ∈ K θ }. For example, for g ∈ K we have that NG (K) ∈ Ωg .
We observe that:
(1) Ωg ∩ Ωh ⊂ Ω_{gh}
(2) Ω_{g^{−1}} = Ωg
(3) Ω_{g^γ} = Ωg γ^{−1}
The first two follow from the fact that K^θ is a group, and the third since γK^θγ^{−1} = K^{θγ^{−1}}.
Let fg be the complement of Ωg in NG(K)\G. That is, fg = {θ ∈ NG(K)\G : θ ∉ Ωg}. Define ‖g‖K = |fg|. The properties above show that
(1) ‖gh‖K ≤ ‖g‖K + ‖h‖K
(2) ‖g^{−1}‖K = ‖g‖K
(3) ‖g^γ‖K = ‖g‖K
Two normal subgroups of G naturally arise: the subgroup of all elements with
zero norm and the subgroup of all elements with finite norm. By (1) and (2)
above, these are indeed subgroups and normality follows from (3).
The first subgroup is, by definition, the normal core CoreG(K) = {g : ‖g‖K = 0}. We claim that the second subgroup is the appropriate definition for Core∅(K). That is, we define Core∅(K) = {g : ‖g‖K < ∞} and prove that λp,K → δ_{Core∅(K)} as p → 0.
Proof of Lemma 3.2. By definition, to show that λp → δ_{Core∅(K)} as p → 0 we need to show that for any g ∈ Core∅(K), we have λp({F ∈ SubG | g ∈ F}) → 1 as p → 0, and for any g ∉ Core∅(K), we have λp({F ∈ SubG | g ∈ F}) → 0 as p → 0.
Note that for any Θ ≠ ∅, by the definition of CoreΘ(K), we have that g ∈ CoreΘ(K) if and only if Θ ∩ fg = ∅.
Now consider Θ to be a random subset according to the Bernoulli-p product measure on 2^{NG(K)\G}. By the above, we have equality of the events {g ∈ CoreΘ(K)} = {fg ∩ Θ = ∅}. The latter (and hence also the former) has probability (1 − p)^{|fg|} = (1 − p)^{‖g‖K}. This includes the case where |fg| = ‖g‖K = ∞ (whence we have 0 probability for {fg ∩ Θ = ∅}). We conclude that for any g ∈ G,
λp({F ∈ SubG | g ∈ F}) = (1 − p)^{‖g‖K},
which equals 0 if ‖g‖K = ∞, and tends to 1 as p → 0 if ‖g‖K < ∞. It follows that λp → δ_{Core∅(K)} as p → 0.
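This probability computation is easy to check by simulation (illustrative only; a finite value of ‖g‖K stands in for |fg|):

```python
import random

random.seed(0)
p, k, trials = 0.3, 5, 200_000   # k plays the role of |f_g| (finite toy value)
# event {f_g ∩ Θ = ∅}: none of the k relevant cosets is selected by p-percolation
empty = sum(all(random.random() >= p for _ in range(k)) for _ in range(trials))
print(empty / trials, (1 - p) ** k)  # Monte Carlo estimate vs. exact (1-p)^k
```

The two printed values agree up to sampling error, matching (1 − p)^{‖g‖K} above.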
we show that λp → δCoreG (K) as p → 1, where we define CoreG (K) =
T Finally,
g
g K = CoreNG (K)\G (K). Clearly, for any p ∈ (0, 1), we have that CoreG (K) ⊂ F
for λp -almost every F , since CoreG (K) ≤ CoreΘ (K) for any non-empty subset Θ.
On the other hand, fix some g ∉ CoreG(K). By definition, there exists some θ ∈ NG(K)\G such that g ∉ K^θ. Whenever Θ is such that θ ∈ Θ, we have that
g ∉ CoreΘ(K). The probability that θ ∉ Θ is (1 − p). Hence, for any g with ‖g‖K > 0,
λp({F ∈ SubG | g ∈ F}) ≤ (1 − p) → 0 as p → 1.
We conclude that λp → δCoreG (K) as p → 1.
3.2. Continuity of the entropy along intersectional IRSs. Lemma 3.2 above
shows that the IRSs λp,K interpolate between the two Dirac measures concentrated
on the normal subgroups Core∅ (K) and CoreG (K). In this section we provide a
condition under which this interpolation holds for the random walk entropy as
well.
Fix a random walk µ on G with finite entropy. Let h : [0, 1] → R+ be the
entropy map defined by h(p) = hµ(Bµ(λp)) for p ∈ (0, 1), with h(0) and h(1) defined as the µ-random walk entropies on the quotient groups G/Core∅(K) and G/CoreG(K) respectively. In particular, the condition that h(0) = 0 is equivalent
to the Liouville property holding for the projected µ-random walk on G/Core∅ (K).
Proposition 3.3. If h(0) = 0 then h is continuous.
Proof. Let
Hn(p) = ∫_{SubG} H(F Zn) dλp(F).
So h(p) = lim_{n→∞} (1/n)Hn(p). First, we use a standard coupling argument to prove that for q = p + ε,
(3.1)    0 ≤ Hn(q) − Hn(p) ≤ Hn(ε)
YAIR HARTMAN AND ARIEL YADIN
Indeed, let (Uθ )θ∈NG (K)\G be i.i.d. uniform-[0, 1] random variables, one for each
coset of NG (K). We use P, E to denote the probability measure and expectation
with respect to these random variables. Define three subsets
Θp := {θ : Uθ ≤ p}
Θε := {θ : p < Uθ ≤ q}
and Θq := Θp ∪ Θε . We shorthand Cx := CoreΘx (K) for x ∈ {p, ε, q}. Note that
for x ∈ {p, ε, q} the law of Cx is λx . Thus, Hn (x) = E[H(Cx Zn )] (to be clear, the
entropy is with respect to the random walk Zn , and expectation is with respect
to the random subgroup Cx ).
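The coupling can be written out directly; the toy sketch below (hypothetical names) draws one uniform label per coset and checks that the thresholded subsets satisfy Θq = Θp ∪ Θε with Θp and Θε disjoint, which is what makes Cq = Cp ∩ Cε:

```python
import random

def coupled_subsets(cosets, p, eps, rng):
    """One sample of the coupling: an i.i.d. uniform U_theta per coset,
    thresholded at p, (p, p+eps] and p+eps, giving Theta_p, Theta_eps,
    Theta_q with the marginal inclusion probabilities p, eps and q."""
    q = p + eps
    u = {theta: rng.random() for theta in cosets}
    theta_p = {t for t, x in u.items() if x <= p}
    theta_eps = {t for t, x in u.items() if p < x <= q}
    theta_q = {t for t, x in u.items() if x <= q}
    return theta_p, theta_eps, theta_q

rng = random.Random(0)
for _ in range(1000):
    tp, te, tq = coupled_subsets(range(20), 0.3, 0.2, rng)
    assert tq == tp | te and not (tp & te)   # holds by construction
```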
Now, by definition
Cq = ⋂_{θ∈Θq} K^θ = ⋂_{θ∈Θp} K^θ ∩ ⋂_{θ∈Θε} K^θ = Cp ∩ Cε.
Since the map (F1 g, F2 g) 7→ (F1 ∩ F2 )g is well defined, we have that Cq Zn is
determined by the pair (Cp Zn , Cε Zn ). Also, since Cq ≤ Cp , we have that Cp Zn is
determined by Cq Zn . Thus, P-a.s.,
H(Cp Zn ) ≤ H(Cq Zn ) ≤ H(Cp Zn ) + H(Cε Zn ).
Taking expectation (under E) we obtain (3.1), which then immediately leads to
0 ≤ h(q) − h(p) ≤ h(ε).
Corollary 2.3 asserts that the entropy function is upper semi-continuous, so lim sup_{ε→0} h(ε) ≤ h(0). Hence, if h(0) = 0, then lim_{ε→0} h(ε) = 0 and the entropy function is continuous.
3.3. Applications to entropy realizations. Let µ be a probability measure
on a group G and N ◁ G a normal subgroup. We use µ̄ to denote the projected
measure on the quotient group G/N .
Corollary 3.4. Let µ be a generating finite entropy probability measure on a
discrete group G. Assume that there exists some K ≤ G such that
• hRW (G/Core∅ (K), µ̄) = 0
• hRW (G/CoreG (K), µ̄) = hRW (G, µ)
Then (G, µ) has a full realization.
More generally, if only the first condition is satisfied, we get a realization of the
interval [0, hRW (G/CoreG (K), µ̄)].
Proof. Consider the path of IRSs λp,K defined in Claim 3.1. Let h(p) be defined
as above, before Proposition 3.3. Since h(0) = hRW (G/Core∅ (K), µ̄) = 0, by
Proposition 3.3 we have that h is continuous, and thus we have a realization of
the interval [h(0), h(1)] = [0, hRW (G/CoreG (K), µ̄)].
In the above corollary, the subgroup K may depend on the measure µ. However,
it is possible in some cases to find a subgroup K that provides realization for any
measure µ, and we now discuss alternative conditions for the corollary.
The conditions in Corollary 3.4 imply that Core∅(K) is a “large” subgroup and CoreG(K) is “small”, at least in the sense of the random walk entropy. The best scenario is hence when Core∅(K) = G and CoreG(K) = {1}. The prototypical example is the following:
example is the following:
Example 3.5. Let Sym∗ (X) denote the group of all finitely supported permutations of some infinite countable set X. Fix some x0 ∈ X and define K to be
the subgroup of all the finitely supported permutations of X that stabilize x0 ; i.e.
K = stab(x0) = {π ∈ Sym∗(X) : π(x0) = x0}. The conjugates of K are of the form stab(x) for x ∈ X.
It follows that CoreG (K) = {e}, because Sym∗ (X) acts transitively on X and
only the trivial permutation stabilizes all points x ∈ X.
Since each element of Sym∗ (X) is a finitely supported permutation, it stabilizes
all but finitely many points in X. Thus, Core∅ (K) = Sym∗ (X).
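Concretely, identifying the cosets of NG(K) = stab(x0) with points of X, the conjugates of K are the stabilizers stab(x), so fg = {x : g(x) ≠ x} and ‖g‖K = |supp(g)|. A minimal sketch under these assumptions (dict-based permutations with hypothetical names; missing keys are fixed points):

```python
def support(perm):
    """Support of a finitely supported permutation given as a dict
    (points not appearing as keys are fixed)."""
    return {x for x, y in perm.items() if x != y}

def norm_K(perm):
    """|f_g| for K = stab(x0): g lies outside stab(x) exactly when
    g(x) != x, so the norm equals the size of the support."""
    return len(support(perm))

three_cycle = {0: 1, 1: 2, 2: 0}   # moves three points of X
transposition = {5: 7, 7: 5}       # moves two points of X
print(norm_K(three_cycle), norm_K(transposition))   # 3 2
```

Since every finitely supported permutation has finite norm, this recovers Core∅(K) = Sym∗(X) by direct computation.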
This is actually a proof of Theorem 1.4: (Sym∗ (X), µ) admits full realization
for any generating, finite entropy, probability measure µ.
Note that Sym∗ (X) is a countable group which is not finitely generated. However, by adding some elements to it, we can get a finitely generated group with a
similar result, as we explain below in Section 3.4.
To continue our discussion, we want to weaken the condition Core∅ (K) = G to
some algebraic condition that will still guarantee that hRW (G/Core∅ (K), µ̄) = 0
for any µ.
Given a subgroup K (or a conjugacy class K^G) we say that K is locally co-nilpotent in G if G/Core∅(K) is nilpotent. (The reason for this name will become more apparent in Section 4.5.) With this definition, Proposition 1.5 follows immediately, as may be seen by the following.
Corollary 3.6. Assume that G admits a subgroup K ≤ G such that G/Core∅ (K)
is a nilpotent group. Then, for any generating finite entropy probability measure
µ, the pair (G, µ) admits a realization of the interval [0, hRW (G/CoreG (K), µ̄)].
If, in addition hRW (G/CoreG (K), µ̄) = hRW (G, µ) (e.g. if CoreG (K) = {e})
then (G, µ) admits full realization.
Proof. When G/Core∅ (K) is nilpotent, the Choquet-Deny Theorem (Theorem 2.4)
tells us that the first condition of Corollary 3.4 is satisfied.
In Section 3.4 we apply this to obtain full realizations for a class of lamplighter
groups as well as to some extensions of the group of finitely supported permutations. However, one should not expect to find a subgroup satisfying the conditions
of Corollary 3.6 in every group. These conditions are quite restrictive; for example, we have the following:
Lemma 3.7. If G is a group with a self-normalizing subgroup K that satisfies the two conditions of Corollary 3.4 (or 3.6), then G is amenable.
Proof. First, observe that hRW(G/Core∅(K), µ̄) = 0 implies that Core∅(K) is a co-amenable (normal) subgroup of G. Indeed, it is well known that a group admitting a Liouville random walk is amenable; here this applies to G/Core∅(K).
Next, the condition hRW (G/CoreG (K), µ̄) = hRW (G, µ) implies that CoreG (K)
is an amenable group. This follows from Kaimanovich’s amenable extension theorem [18]: Let µ be a random walk on G. The Furstenberg-Poisson boundary
of G/CoreG(K) with the µ-projected random walk is a (G, µ)-boundary, or, in other words, a G-factor of the Furstenberg-Poisson boundary Π(G, µ). Since its entropy equals the random walk entropy, this factor is actually isomorphic to Π(G, µ). Now Kaimanovich's theorem asserts that CoreG(K) is an amenable group.
To conclude that G is amenable we show that Core∅ (K)/CoreG (K) is amenable,
showing that the normal subgroup CoreG (K) is both amenable and co-amenable
in G.
In general, we claim that Core∅ (K)/CoreG (NG (K)) is always an amenable
group. This will complete the proof, because we assumed that NG (K) = K.
Indeed, G acts by permutations on the conjugacy class K G . Note that any
element in Core∅ (K) has finite support as a permutation on K G . Also, it may be
checked that the elements of Core∅ (K) that stabilize every K γ are precisely the
elements of CoreG (NG (K)). Thus, Core∅ (K)/CoreG (NG (K)) is isomorphic to a
subgroup of the amenable group Sym∗ (X).
The subgroups of the free group that we will consider in Section 4 (to prove
Theorem 1.2) are all self-normalizing. Since the free group is non-amenable, we
cannot obtain the conditions of Corollary 3.4 simultaneously in this fashion. For
this reason, we approximate the second condition using a sequence of subgroups.
Corollary 3.8. Let µ be a generating finite entropy probability measure on a
discrete group G. Assume that there exists a sequence of subgroups Kn ≤ G such
that:
(1) hRW (G/Core∅ (Kn ), µ̄) = 0 (e.g. whenever G/Core∅ (Kn ) are nilpotent),
(2) hRW (G/CoreG (Kn ), µ̄) → hRW (G, µ)
Then (G, µ) admits a full realization.
Proof. By the first condition, for any fixed n, the entropy along the path λp,Kn gives
a realization of the interval [0, hRW (G/CoreG (Kn ), µ̄)]. By the second condition,
we conclude that any number in [0, hRW(G, µ)] is realized as the entropy of some IRS of the form λp,Kn.
3.4. Examples: lamplighter groups and permutation groups. We are now
ready to apply our tools to get Theorem 1.3, that is, full realization for lamplighter
groups with Liouville base.
Proof of Theorem 1.3. Choose the subgroup K = (⊕_{B\{e}} L) ⋊ {e}, where e is the trivial element in the base group B. It may be simply verified that CoreG(K) = {e} and (⊕_B L) ⋊ {e} ⊂ Core∅(K).
Whenever (B, µ̄) is Liouville, so is (G/Core∅(K), µ̄). This is because B ≅ G/((⊕_B L) ⋊ {e}), so G/Core∅(K) is a quotient group of B. Thus, we have that
hRW (G/Core∅ (K), µ̄) = 0, and we obtain a realization of the full interval
[0, hRW (G/CoreG (K), µ̄)] = [0, hRW (G, µ)].
We now turn to discuss extensions of the group of finitely supported permutations. Let X be a countable set, and denote by Sym(X) the group of all permutations of X. Recall that Sym∗(X) is its subgroup of all finitely supported permutations. While Sym∗(X) is not finitely generated, one can add elements from Sym(X) to get a finitely generated group. A standard example is adding a “shift” element σ ∈ Sym(X), that is, σ acts as a free transitive permutation of X. Let Σ = ⟨σ⟩ be the subgroup generated by σ. Formally, the element σ acts on Sym∗(X) by conjugation (recall that both groups are subgroups of Sym(X)), and we get a finitely generated group Sym∗(X) ⋊ Σ.
One can replace Σ by any other subgroup of Sym(X). The reason we might want to replace Σ by a “larger” group is to obtain a non-Liouville group Sym∗(X) ⋊ Σ.
Theorem 3.9. Let Σ ≤ Sym(X) and consider G = Sym∗(X) ⋊ Σ. Let µ be a generating finite entropy probability measure on G, such that the projected measure µ̄ on Σ ≅ G/(Sym∗(X) ⋊ {e}) is Liouville. Then (G, µ) has full realization.
In particular, if Σ is a nilpotent group, (G, µ) admits full realization for any µ.
Proof. Fix some point x0 ∈ X and let K = stab(x0) ⋊ {e}, where stab(x0) ≤ Sym∗(X) is the stabilizer of x0. It is easy to verify that CoreG(K) = {e} and that Core∅(K) = Sym∗(X) ⋊ {e}, hence by the assumption hRW(G/Core∅(K), µ̄) = 0 and by Corollary 3.4 we get a full realization.
Remark 3.10. Depending on Σ and µ, it might be that (G, µ) is Liouville, where G = Sym∗(X) ⋊ Σ. In that case, this full realization result is trivial, as the only realizable value is 0.
However, it is easy to find a nilpotent Σ such that G is finitely generated and (G, µ) is not Liouville (say, for any generating, symmetric, finitely supported µ).
4. Full realization for the free group
We now turn to discuss our realization results for G = Fr, the free group of rank r, for any r ≥ 2. It suffices to prove that the free group satisfies the conditions of
Corollary 3.8. We construct the subgroups Kn by describing their Schreier graphs.
For simplicity of the presentation we describe the result only for the free group on
two generators, F2 = ha, bi. The generalization to higher rank is straightforward.
4.1. Schreier graphs. Recall our notations for Schreier graphs from Section 2.2.
It will be convenient to use |v| to denote the graph distance of a vertex v to the
root vertex in a fixed Schreier graph.
4.2. Fixing. In this section we describe a condition on Schreier graphs that we call
fixing. For the normal core of the associated subgroup, this condition will ensure
that the random walk entropy of the quotient approximates the full random walk
entropy as required by Condition 2 in Corollary 3.8 (see Proposition 4.7 below).
Let Λ be a Schreier graph with root o.
For a vertex v in Λ, the shadow of v, denoted shd(v), is the set of vertices u in Λ such that any path from u to the root o must pass through v.
For an integer n define the finite subtree (F2 )≤n to be the induced (labeled)
subgraph on {g ∈ F2 : |g| ≤ n}. We say that Λ is n-tree-like if the ball of
radius n about o in Λ is isomorphic to (F2 )≤n , and if every vertex at distance n
from o has a non-empty shadow. Informally, an n-tree-like graph is one that is
made up by gluing disjoint graphs to the leaves of a depth-n regular tree.
Lemma 4.1. Let C ≤ K ≤ F2 be subgroups. Let ΛC , ΛK be the Schreier graphs
of C, K respectively.
If ΛK is n-tree-like, then ΛC is also n-tree-like.
Proof. This follows from the fact that C ≤ K implies that ΛC is a cover of ΛK
(see e.g. [21, Lemma 20]).
For an n-tree-like Λ, since any vertex is connected to the root, every vertex at
distance greater than n from the root o is in the shadow of some vertex at distance
precisely n. For a vertex v in Λ with |v| > n, and k ≤ n we define the k-prefix
of v in Λ to be pref k (v) = pref k,Λ (v) = u for the unique u such that both |u| = k
and v ∈ shd(u). If |v| ≤ k then for concreteness we define pref k (v) = v.
Since n-tree-like graphs look like F2 up to distance n, the behavior of pref k on
n-tree-like graphs where n ≥ k is the same as on F2 . This is the content of the
next elementary lemma, whose proof is left as an exercise.
Lemma 4.2. Let Λ be an n-tree-like Schreier graph. Let ϕ : Λ → F2 be a function mapping the ball of radius n in Λ isomorphically to (F2)≤n.
Then, for any vertex v in Λ with |v| ≤ n we have that
pref_{k,Λ}(v) = ϕ⁻¹ pref_{k,F2}(ϕ(v)).
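On F2 itself, pref_k amounts to truncating the reduced word: vertices of the Cayley graph are reduced words, |v| is the word length, and every path from v to the identity passes through each prefix of v. A minimal sketch under these assumptions (uppercase letters denote inverses; names hypothetical):

```python
def reduce_word(w):
    """Freely reduce a word over {a, b, A, B}, where A = a^{-1}, B = b^{-1}."""
    out = []
    for c in w:
        if out and out[-1] == c.swapcase():
            out.pop()          # cancel an adjacent inverse pair
        else:
            out.append(c)
    return "".join(out)

def pref_k(w, k):
    """k-prefix of the vertex represented by the word w: truncate the
    reduced word, or return the vertex itself if |v| <= k."""
    v = reduce_word(w)
    return v if len(v) <= k else v[:k]

print(pref_k("abBa", 1))   # "abBa" reduces to "aa", whose 1-prefix is "a"
```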
We use (Zt )t to denote the µ-random walk on F2 and (Z̃t )t the projected walk on
a Schreier graph Λ with root o. (So Z̃t = KZt where K is the associated subgroup
to the root o in the Schreier graph Λ.)
Another notion we require is the k-prefix at infinity. Define the random
variable pref k (Z̃∞ ) = pref k,Λ (Z̃∞ ), taking values in the set of vertices of distance
k from o in Λ, as follows: If the sequence pref k (Z̃t )t stabilizes at u, that is, if there
exists t0 such that for all t > t0 we have pref k (Z̃t ) = pref k (Z̃t−1 ) = u, then define
pref k (Z̃∞ ) := u. Otherwise, define pref k (Z̃∞ ) = o. Note that for an n-tree-like
graph with n > 0 we have that pref k (Z̃∞ ) = o (almost surely) if and only if the
walk (Z̃t )t returns to the ball of radius k infinitely many times.
Definition 4.3. Let 0 < k < n be natural numbers and α ∈ (0, 1). We say that
Λ is (k, n, α)-fixing if Λ is n-tree-like and for any |v| ≥ n,
P[∀ t , pref k (Z̃t ) = pref k (Z̃0 ) | Z̃0 = v] ≥ α.
That is, with probability at least α, for any |v| ≥ n, the projected random
walk started at v never leaves shd(pref k (v)). Hence, the random walk started at
depth n in the graph fixes the k-prefix with probability at least α. Note that this
definition depends on the random walk µ.
A good example of a fixing graph is the Cayley graph of the free group itself.
Lemma 4.4. Let µ be a finitely supported generating probability measure on F2 .
Then, there exists α = α(µ) > 0 such that the Cayley graph of F2 is (k, k + 1, α)-fixing for any k > 0.
Proof. It is well known that a µ-random walk (Zt )t on F2 is transient (for example,
this follows from non-amenability of F2). Let r = max{|g| : µ(g) > 0} be the
maximal jump possible by the µ-random walk. In order to change the k-prefix, the
walk must return to distance at most k + r from the origin. That is, if |Zt | > k + r
for all t > t0 , we have that pref k (Zt ) = pref k (Zt0 ) for all t > t0 .
By forcing finitely many initial steps (depending on r, µ), there exists t0 and
α > 0 such that P[A | |Z0 | = k + 1] ≥ α where
A = {∀0 < t ≤ t0 , pref k (Zt ) = pref k (Z0 ) and |Zt0 | > k + r}.
By perhaps making α smaller, using transience, we also have that
P[∀ t > 0 , |Zt | ≥ |Z0 |] ≥ α.
Combining these two estimates, with the Markov property at time t0 ,
P[∀t, pref k (Zt ) = pref k (Z0 ) | |Z0 | = k + 1] ≥ P[A and {∀ t > t0 , |Zt | ≥ |Zt0 |}]
≥ P[A] · P[∀ t > 0 , |Zt | ≥ |Z0 |] ≥ α2 .
This is the definition of F2 being (k, k + 1, α)-fixing.
A very useful property of fixing graphs is that when α is close enough to 1, there exists a finite random time (the first time the random walk leaves the tree area) at which we can already guess, with high accuracy, the k-prefix at infinity of the random walk. A precise formulation is the following.
Lemma 4.5. Let Λ be a (k, n, α)-fixing Schreier graph. Let (Zt )t be a µ-random
walk on F2 and (Z̃t )t the projected walk on Λ. Define the stopping time
T = inf{t : |Z̃t | ≥ n}.
Then,
H(Z1 | pref k (Z̃T )) − H(Z1 | pref k (Z̃∞ )) ≤ ε(α) + 2(log 4)(1 − α)k,
where 0 < ε(α) → 0 as α → 1.
Proof. First note that a general entropy inequality is
H(X | A) − H(X | B) ≤ H(A | B) + H(B | A),
which for us provides the inequality
H(Z1 | pref k (Z̃T )) − H(Z1 | pref k (Z̃∞ ))
≤ H(pref k (Z̃T ) | pref k (Z̃∞ )) + H(pref k (Z̃∞ ) | pref k (Z̃T )).
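The general entropy inequality invoked above is a consequence of the chain rule for conditional entropy; a short derivation, included only for convenience:

```latex
\begin{align*}
H(X \mid A) &\le H(X, B \mid A) = H(B \mid A) + H(X \mid A, B)\\
            &\le H(B \mid A) + H(X \mid B),
\end{align*}
% so H(X | A) - H(X | B) <= H(B | A) <= H(A | B) + H(B | A).
```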
We now bound this last quantity using Fano’s inequality (see e.g. [11, Section
2.10]). Taking α̃ = P[pref_k(Z̃T) = pref_k(Z̃∞)], since the supports of both pref_k(Z̃∞) and pref_k(Z̃T) have size 4 · 3^{k−1} < 4^k,
H(pref_k(Z̃∞) | pref_k(Z̃T)) ≤ H(α̃, 1 − α̃) + (1 − α̃) log 4^k,
H(pref_k(Z̃T) | pref_k(Z̃∞)) ≤ H(α̃, 1 − α̃) + (1 − α̃) log 4^k,
where here we use the usual notation H(α̃, 1 − α̃) to denote the Shannon entropy
of the probability vector (α̃, 1 − α̃).
Since p ↦ H(p, 1 − p) decreases for p ≥ 1/2, it suffices to prove that α̃ = P[pref_k(Z̃T) = pref_k(Z̃∞)] ≥ α.
But now, the very definition of (k, n, α)-fixing implies that for any relevant u,
P[pref_k(Z̃∞) = u | pref_k(Z̃T) = u] ≥ P[∀ t > 0, pref_k(Z̃_{T+t}) ∈ shd(pref_k(u)) | pref_k(Z̃T) = u] ≥ α,
where we have used the strong Markov property at the stopping time T . Averaging
over the relevant u implies that α̃ ≥ α.
The following is the main technical estimate of this subsection.
Lemma 4.6. Fix a generating probability measure µ on F2 with finite support.
Let (Γj )j be a sequence of Schreier graphs such that Γj is (kj , nj , αj )-fixing. Let
Kj be the subgroup corresponding to Γj .
Suppose that kj → ∞, nj − kj → ∞ and (1 − αj )kj → 0 as j → ∞. Then,
lim sup_{j→∞} hRW(F2/CoreF2(Kj), µ̄) = hRW(F2, µ).
Proof. Let (Zt )t be a µ-random walk on F2 .
Take j large enough so that nj − kj is much larger than r = max{|g| : µ(g) > 0}.
To simplify the presentation we use Γj = Γ, K = Kj , C = Cj = CoreF2 (Kj ) and
k = kj , n = nj , α = αj , omitting the subscript j when it is clear from the context.
Let Λ = Λj be the Schreier graph corresponding to C = Cj.
Let TC = ⋂_t σ(CZt, CZt+1, . . .) and T = ⋂_t σ(Zt, Zt+1, . . .) denote the tail σ-algebras of the random walk on Λ and on F2 respectively. Kaimanovich and Vershik (eq. 13 on p. 465 in [20]) show that
hRW (F2 /C, µ̄) = H(CZ1 ) − H(CZ1 | TC )
and
hRW (G, µ) = H(Z1 ) − H(Z1 | T ).
Now, since C ≤ K, and since Γ is n-tree-like, also Λ is (Lemma 4.1). Since Λ is
n-tree-like, CZ1 and Z1 determine one another (Lemma 4.2), implying H(CZ1 ) =
H(Z1 ) and H(CZ1 | TC ) = H(Z1 | TC ). Since TC ⊂ T , the inequality H(Z1 | TC ) ≥
H(Z1 | T ) is immediate. Hence, we only need to show that lim inf j H(Z1 | TCj ) ≤
H(Z1 | T ).
Set T = inf{t : |CZt | ≥ n} as in Lemma 4.5. Since Λ is n-tree-like,
pref k (CZT −1 ) and pref k (ZT −1 ) determine one another (Lemma 4.2). Also, since
n − k is much larger than r, and specifically much larger than any jump the
walk can make in one step, we know that pref k (CZT ) = pref k (CZT −1 ) and
pref k (ZT ) = pref k (ZT −1 ). All in all, H(Z1 | pref k (CZT )) = H(Z1 | pref k (ZT )).
Now, set ε = ε(α, k) = 2H(α, 1 − α) + 2(log 4)(1 − α)k. Since pref k (CZ∞ ) is
measurable with respect to TC , by using Lemma 4.5 twice, once for Λ and once
for the tree, we get the bound
H(Z1 | TC ) ≤ H(Z1 | pref k (CZ∞ )) ≤ H(Z1 | pref k (CZT )) + ε
= H(Z1 | pref k (ZT )) + ε ≤ H(Z1 | pref k (Z∞ )) + 2ε.
Now, under the assumptions of the lemma, ε → 0 as j → ∞. Also, since
kj → ∞ we have that
H(Z1 | pref kj (Z∞ )) → H(Z1 | T ).
Thus, taking a limit on the above provides us with
lim sup_{j→∞} H(Z1 | TCj) ≤ lim sup_{j→∞} H(Z1 | pref_{kj}(Z∞)) = H(Z1 | T).
Hence, as mentioned, we conclude that lim supj H(Z1 | TCj ) = H(Z1 | T ), and
since H(Z1 ) = H(CZ1 ) we get that
lim sup_{j→∞} hRW(F2/CoreF2(Kj), µ̄) = hRW(F2, µ).
To complete the relevance of fixing to convergence of the random walk entropies,
we conclude with the following.
Proposition 4.7. Fix a generating probability measure µ on F2 with finite support.
Let (Γj )j be a sequence of Schreier graphs such that Γj is (kj , nj , αj )-fixing. Let
Kj be the subgroup corresponding to Γj .
Suppose that kj → ∞ and αj → 1 as j → ∞. Then,
lim sup_{j→∞} hRW(F2/CoreF2(Kj), µ̄) = hRW(F2, µ).
Proof. Since we are only interested in lim sup we may pass to a subsequence
and show that the parameters along this subsequence satisfy the assumptions
of Lemma 4.6. Notice that the assumption kj → ∞ implies in particular that
nj → ∞.
Note that by definition, if Λ is (k, n, α)-fixing, then it is also (k 0 , n0 , α0 )-fixing
for any k 0 ≤ k, n0 ≥ n, α0 ≤ α.
For any m choose jm large enough so that for all i ≥ jm we have both αi > 1 − m⁻² and ki > 2m. The subsequence of graphs (Γjm)m satisfies that Γjm is (m, njm, 1 − m⁻²)-fixing. Since njm − m ≥ kjm − m > m, Lemma 4.6 is applicable.
4.3. Gluing graphs. Let Λ be a Schreier graph of the free group F2 = ha, bi and
let S = {a, a⁻¹, b, b⁻¹}. We say that the pair (Λ, ) is s-marked if is an (oriented) edge in Λ labeled by s ∈ S.
For s ∈ S we define Ns to be the following graph: the vertices are the non-negative integers N. The edges are given by (x + 1, x), each labeled by s, together with self-loops labeled by ξ ∈ {a, b} \ {s, s⁻¹} at each vertex. In order to be a Schreier graph, this graph is missing one outgoing edge from 0 labeled by s.
Figure 1. Part of the labeled graph Na.
Given a s-marked pair (Λ, ) and an integer n > 0, we construct a Schreier
graph Γn (Λ, ):
The Cayley graph of F2 is the 4-regular tree. For an integer n and a label
(generator) s recall the finite subtree (F2 )≤n which is the induced subgraph on
{g ∈ F2 : |g| ≤ n}, and let (F2 )n be the set of vertices {g : |g| = n}. For g ∈
(F2 )n there is exactly one outgoing edge incident to g in (F2 )≤n . If this edge is
labeled by ξ ∈ S we say that g is a ξ-leaf.
Figure 2. The subgraph (F2)≤3. Edges labeled a are dotted and edges labeled b are solid. The vertices x, y, z, w are a-, a⁻¹-, b-, b⁻¹-leaves respectively.
For a ξ-leaf g, in order to complete (F2 )≤n into a Schreier graph, we need
to specify 3 more outgoing edges, with the labels ξ −1 and the two labels from
S \ {ξ, ξ −1 }.
Now, recall our s-marked pair (Λ, ). Let g be a ξ-leaf for ξ 6∈ {s, s−1 }. Let
(Λg , g ) be a copy of (Λ, ), and suppose that g = (xg , yg ). Connect Λg to g by
deleting the edge g from Λg and adding the directed edges (xg , g), (g, yg ) labeling
them both s. Also, let Ng be a copy of Nξ above, and connect this copy to g by a
directed edge (0g , g) labeled by ξ (here 0g is the copy of 0 in Ng ). This takes care
of ξ-leaves for ξ 6∈ {s, s−1 }.
If ξ ∈ {s, s−1 }, the construction is simpler: For any ξ-leaf γ (where ξ ∈ {s, s−1 }),
let Nγ be a copy of Nξ (with 0γ the copy of 0 as before), and connect Nγ by an
edge (0γ , γ) labeled by ξ. Finally add an additional self-loop with the missing
labels at γ (i.e. the labels in S \ {s, s−1 } each in one direction along the loop).
By adding all the above copies to all the leaves in (F2 )≤n , we obtain a Schreier
graph, which we denote Γn (Λ, ). See Figure 3 for a visualization (which in this
case is probably more revealing than the written description). It is immediate
from the construction that Γn (Λ, ) is n-tree-like.
4.4. Marked pairs and transience. Recall the definition of a transient random
walk (see e.g. [22, 26]): We say that a graph Λ is µ-transient if for any vertex v
in Λ there is some positive probability that the µ-random walk started at v will
never return to v.
Figure 3. A visualization of gluing (Λ, ) to (F2)≤3. The dotted edges correspond to the label a, and the solid edges to b. The vertex g is an a-leaf. Thus, a copy of Λ, denoted Λg, is connected by removing the edge (dashed) and adding two edges (xg, g), (g, yg) labeled b (solid). A copy of Na, denoted Ng, is connected via an edge (0g, g) labeled a (dotted). Also, to the b-leaf γ, a copy of Nb, denoted Nγ, is added via an edge (0γ, γ) labeled b (solid), and an additional self-loop labeled a (dotted) is added at γ.
We now connect the transience of Λ to the fixing property introduced in the previous subsection.
Proposition 4.8. Let µ be a finitely supported generating probability measure on
F2 . Let (Λ, ) be a µ-transient s-marked pair.
Then, for any ε > 0 and k, there exists n > k such that the graph Γn (Λ, ) is
(k, n, 1 − ε)-fixing.
The proof of this proposition is quite technical. For the benefit of the reader,
before proving the proposition we first provide a rough sketch, outlining the main
idea of the proof.
Proof Sketch. Consider the graph Γn (Λ, ). This is essentially a finite tree with
copies of Λ and copies of N glued to the (appropriate) leaves. At any vertex v with
|v| > k, there is a fixed positive probability α of escaping to the leaves without
changing the k-prefix (because F2 is (k, k + 1, α)-fixing).
Once near the leaves, with some fixed probability δ the walk reaches a copy
of Λ without going back and changing the k-prefix. Note that although there
are copies of the recurrent Nγ (e.g. for symmetric random walks) glued to some
leaves, nonetheless, since k is much smaller than n, there is at least one copy of
the transient Λ in the shadow of v.
Finally, once in the copy of Λ, because of transience, the walk has some positive
probability β to escape to infinity without ever leaving Λ, and thus without ever
changing the k-prefix.
All together, with probability at least αδβ the walk will never change the k-prefix.
If this event fails, the walk may jump back some distance, but not too much
since we assume that µ has finite support. So there is some r for which the walk
(deterministically) cannot retreat more than distance r, even conditioned on the
above event failing.
Thus, starting at |v| > k + ℓr for large enough ℓ, there are at least ℓ attempts to escape to infinity without changing the k-prefix, each with a conditional probability of success at least αδβ. By taking ℓ large enough, we can make the probability of changing the k-prefix as small as desired.
We now proceed with the actual proof.
Proof of Proposition 4.8. Step I. Let (Z̃t)t denote the projected µ-random walk on the Schreier graph Λ. Assume that = (x, y), and note that since Λ is µ-transient,
P[∀ t, Z̃t ≠ x | Z̃0 = y] + P[∀ t, Z̃t ≠ y | Z̃0 = x] > 0.
By possibly changing to (y, x), without loss of generality we may assume that
β := P[∀ t, Z̃t ≠ x | Z̃0 = y] > 0.
Let ℓ > 0 be large enough, to be chosen below. Set n = k + (ℓ + 2)r + 1. We
now change notation and use (Z̃t )t to denote the projected walk on the Schreier
graph Γn (Λ, ). Denote r = max{|g| : µ(g) > 0}, the maximal possible jump of
the µ-random walk. A key fact we will use throughout is that: in order to change
the k-prefix, the walk must at some point reach a vertex u with |u| ≤ k + r.
Let τ = inf{t : n − r ≤ |Z̃t | ≤ n}. If we start from |Z̃0 | ≤ n this is a.s. finite,
since the jumps cannot be larger than r. Now, by Lemma 4.4, F2 is (k, k + 1, η)-fixing for all k and some η = η(µ) > 0. Since Γn (Λ, ) is n-tree-like, if v is a vertex
in Γn (Λ, ) with k < |v| < n, then
(4.1)
P[pref k (Z̃τ ) = pref k (v) | Z̃0 = v] ≥ η
For s ∈ S let ts be the smallest number such that µ^{ts}(s) > 0. Let tS = max{ts : s ∈ S}. Let δ = min{µ^{ts}(s) : s ∈ S}. Let v, u be two adjacent vertices in Γn (Λ, ), and assume that the label of (v, u) is s ∈ S. Note that the definitions above ensure that
P[Z̃_{ts} = u | Z̃0 = v] ≥ δ.
Thus, when |v| > k + tS r, with probability at least δ we can move from v to
u in at most tS steps without changing the k-prefix. This holds specifically for
|v| ≥ n − r > k + tS r by our choice of n (as long as ℓ ≥ tS).
Consider the graph Γn (Λ, ). Recall that it contains many copies of Λ (glued to
the appropriate leaves). Let Λ1 , . . . , Λm be the list of these copies, and denote by
j = (xj , yj ) the corresponding copies of in each. Define Y = {y1 , . . . , ym }.
Now, define stopping times
Um = inf{t : |Z̃t| < m}   and   T = inf{t : Z̃t ∈ Y}
(where inf ∅ = ∞). Any vertex v in Γn (Λ, ) of depth |v| < n must have some
copy of (Λ, ) in its shadow shd(v). So there exists some path of length at most
n−|v|+1 from v into some copy of (Λ, ), ending in some vertex in Y . If |v| ≥ n−r
then we can use the strong Markov property at the first time the walk gets to some
vertex u with |u| < n. Since this u must have |u| ≥ n − r, we obtain that for any
|v| ≥ n − r,
P[T < Un−r | Z̃0 = v] ≥ δ^{r+1}.
By µ-transience of Λ, we have that starting from any yj ∈ Y , with probability at
least β the walk never crosses (yj , xj ), and so never leaves Λj (and thus always
stays at distance at least n from the root). We conclude that for any |v| ≥ n − r,
P[pref_k(Z̃∞) = pref_k(Z̃0) | Z̃0 = v] ≥ P[Un−r = ∞ | Z̃0 = v]
≥ P[T < Un−r | Z̃0 = v] · inf_{yj∈Y} P[∀ t, Z̃t ∈ Λj | Z̃0 = yj]
≥ δ^{r+1}β.
(where we have used that k + r < n − r). Combining this with (4.1), using the
strong Markov property at time τ , we obtain that for any |v| > k,
(4.2)    P[pref_k(Z̃∞) = pref_k(v) | Z̃0 = v] ≥ ηδ^{r+1}β.
Step II. Now, define the events
Aj = {pref_k(Z̃∞) ≠ pref_k(Z̃_{U_{n−jr}})}.
When Z̃0 = v for |v| ≥ n = k + (ℓ + 2)r + 1, the event {pref_k(Z̃∞) ≠ pref_k(Z̃0)} implies that
U_{|v|−r} < U_{|v|−2r} < · · · < U_{|v|−ℓr} < U_{k+r} < ∞.
But, at time U_{|v|−jr} we have that
|Z̃_{U_{|v|−jr}}| ≥ |Z̃_{U_{|v|−jr}−1}| − r ≥ |v| − (j + 1)r > k + r.
Thus, by (4.2), using the strong Markov property at time U_{|v|−jr}, we have
P[A_{j+1} | Z̃0, . . . , Z̃_{U_{|v|−jr}}] ≤ 1 − ηδ^{r+1}β,
implying that
P[A_{j+1} | (A1)^c ∩ · · · ∩ (Aj)^c] ≤ 1 − ηδ^{r+1}β.
Thus, as long as |v| ≥ n we have that
P[pref_k(Z̃∞) ≠ pref_k(v) | Z̃0 = v] ≤ P[A1 ∪ · · · ∪ Aℓ | Z̃0 = v] ≤ (1 − ηδ^{r+1}β)^ℓ.
Choosing ℓ such that (1 − ηδ^{r+1}β)^ℓ < ε, we obtain that for any |v| ≥ n,
P[pref_k(Z̃∞) ≠ pref_k(v) | Z̃0 = v] < ε.
That is, the graph Γn (Λ, ) is (k, n, 1 − ε)-fixing.
Remark 4.9. Proposition 4.8 is the only place we require the measure µ to have
finite support. Indeed, we believe that the proposition should hold with weaker
assumptions on µ, perhaps even only assuming that µ is generating and has finite
entropy. If this is the case, we could obtain full realization results for the free
group for such measures.
4.5. Local properties. Recall from Section 3.1 that given a subgroup K ≤ F2 we define ‖g‖K as the number of cosets b ∈ NG(K)\G of the normalizer for which g ∉ K^b. Recall also that Core∅(K) = {g : ‖g‖K < ∞}.
In this subsection we will provide conditions on K under which Core∅ (K) contains a given subgroup of F2 . This will be useful in determining properties of
F2 /Core∅ (K) (specifically, whether F2 /Core∅ (K) is nilpotent).
First, some notation: Let g ∈ F2 . Let g = w1 · · · w|g| be a reduced word with
wi ∈ S for all i. Let Λ be a Schreier graph with corresponding subgroup K ≤ F2 .
Fix a vertex v in Λ. Let v0 (g) = v and inductively define vn+1 (g) to be the unique
vertex of Λ such that (vn (g), vn+1 (g)) is labeled by wn+1 . In other words, if v = Kγ
then vi (g) = Kγw1 · · · wi .
Recall that g ∈ K if and only if v|g|(g) = v0(g) for v = o, the root of Λ. Hence, for any
g ∈ K, we have v|g|(g) = v0(g). From this we can deduce that when v = Kγ, then
v|g|(g) = v0(g) ⟺ γgγ^{−1} ∈ K ⟺ g ∈ K^γ.
Definition 4.10. Let Γ1 , . . . , Γn be some Schreier graphs with roots ρ1 , . . . , ρn .
Let BΛ (v, r) be the ball of radius r around v in the graph Λ (and similarly for
Γj ). We say that Λ (or sometimes K) is locally-(Γ1 , . . . , Γn ) if for any r > 0
there exists R > 0 such that for all vertices v in Λ with |v| > R we have that
the ball BΛ (v, r) is isomorphic (as a labeled graph) to at least one of the balls
BΓ1 (ρ1 , r), . . . , BΓn (ρn , r).
YAIR HARTMAN AND ARIEL YADIN
That is, a subgroup K is locally-(Γ1 , . . . , Γn ) if, after ignoring some finite area,
locally we see one of the graphs Γ1 , . . . , Γn .
The main purpose of this subsection is to prove:
Proposition 4.11. Let K ≤ F2 . Let G1 , . . . , Gn ≤ F2 be subgroups with corresponding Schreier graphs Γ1 , . . . , Γn .
If K is locally-(Γ1 , . . . , Γn ) then G1 ∩ · · · ∩ Gn ≤ Core∅ (K).
Proof. Let g ∈ G1 ∩ · · · ∩ Gn. We need to prove that ‖g‖_K < ∞.
Let Λ be the Schreier graph of K. Let r = |g|. Let R be large enough so that for
|v| > R in Λ the ball BΛ (v, r) is isomorphic to one of BΓj (ρj , r) (where as before,
ρj is the root vertex of Γj ).
Let |v| > R and let j be such that BΛ (v, r) is isomorphic to BΓj (ρj , r). Set
Γ = Γj , ρ = ρj .
Consider the paths ρ = ρ0 (g), . . . , ρr (g) in Γ and v = v0 (g), . . . , vr (g) in Λ. Since
these each sit in BΓ (ρ, r) and BΛ (v, r) respectively, we have that ρr (g) = ρ0 (g) if
and only if vr(g) = v0(g). Thus, g ∈ Gj implies that g ∈ K^γ where v = Kγ.
In conclusion, we have shown that if γ is such that |Kγ| > R then g ∈ K^γ.
Hence, there are only finitely many γ ∈ F2 such that g ∉ K^γ, which implies that
‖g‖_K < ∞.
4.6. Full realization. In this subsection we prove Theorem 1.2.
In light of Corollary 3.8 and Proposition 4.7, in order to prove full realization
for F2 , we need to find a sequence of subgroups Kn with Schreier graphs Γn such
that the following properties hold:
• Γn is (kn, kn′, αn)-fixing, with kn → ∞ and αn → 1.
• The subgroups Core∅ (Kn ) are co-nilpotent in F2 .
To show the first property, we will use the gluing construction from Proposition
4.8, by finding a suitable marked pair (Λ, ) that is µ-transient.
Lemma 4.12. Let N be a normal co-nilpotent subgroup in F2 , and let Λ be its
associated Schreier graph. Choose some edge in Λ labeled with s ∈ S and consider
(Λ, ) as a marked pair. For any n > 0 let Kn ≤ F2 be the subgroup corresponding
to the root of the Schreier graph Γn (Λ, ).
Then, for any n > 0, the normal subgroup Core∅ (Kn ) is co-nilpotent in F2 .
Proof. Let G0 = F2 , Gj+1 = [Gj , F2 ] be the descending central series of F2 . Since
N is co-nilpotent, there exists m such that Gm ◁ N.
By definition of Γn (Λ, ), outside a ball of radius n in Γn (Λ, ), we only have
glued copies of Λ or of Ns for s ∈ S = {a, a−1 , b, b−1 } (recall Subsection 4.3).
Let Zs denote the Schreier graph with vertices in Z, edges (x + 1, x) labeled by
s and a self loop with label ξ ∉ {s, s^{−1}} at each vertex. Let ϕs : F2 → Z be the
homomorphism defined via s ↦ −1, s^{−1} ↦ 1 and ξ ↦ 0 for ξ ∉ {s, s^{−1}}. Then,
the subgroup corresponding to Zs is ker ϕs. Since F2 / ker ϕs is abelian, we have
that Gm ◁ [F2, F2] ◁ ker ϕs.
It is now immediate that Kn is locally-(Λ, Za, Za−1, Zb, Zb−1). By Proposition
4.11, this implies that
Gm ≤ N ∩ ⋂_{s∈S} ker ϕs ≤ Core∅(Kn).
Thus, Core∅ (Kn ) is co-nilpotent.
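The homomorphisms ϕs used above admit a quick mechanical check. The following sketch is our own illustration, not part of the paper; the letter encoding ('a' for a, 'A' for a^{−1}, etc.) is an assumption of the example. It verifies that ϕs is additive under concatenation of words, so F2 / ker ϕs is a quotient of Z and hence abelian, as used in the proof.

```python
# Illustration (not from the paper): phi_s : F_2 -> Z sends s to -1,
# s^{-1} to +1, and every other generator to 0. We encode a^{-1} as 'A'
# and b^{-1} as 'B' (an assumption of this sketch).

def phi(word, s):
    weight = {s: -1, s.upper(): 1}
    return sum(weight.get(letter, 0) for letter in word)

# phi is a homomorphism: it adds under concatenation of words,
# even before any free reduction is performed.
u, v = list('abA'), list('baB')
assert phi(u, 'a') + phi(v, 'a') == phi(u + v, 'a')

# ker(phi_s) contains [F_2, F_2]; e.g. the commutator a b a^{-1} b^{-1}
# maps to 0 under both phi_a and phi_b.
assert phi(list('abAB'), 'a') == 0 and phi(list('abAB'), 'b') == 0
```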
We are now ready to prove our main result, Theorem 1.2.
Proof of Theorem 1.2. By Corollary 3.8, Proposition 4.7 and Lemma 4.12 it suffices to find N ◁ F2 such that F2 /N is nilpotent and transient. Since any 2-generated
nilpotent group is a quotient of F2, it suffices to find a 2-generated nilpotent group
whose Cayley graph is transient, and then take N to be the kernel of the canonical
projection from F2 onto our nilpotent group.
It is well known that recurrent groups must have at most quadratic volume
growth (this follows, for instance, by combining the Coulhon-Saloff-Coste inequality
[10] and the method of evolving sets [23]; see e.g. [26, Chapters 5 & 8]). Thus, it
suffices to find a 2-generated nilpotent group that has volume growth larger than
quadratic. This is not difficult, and there are many possibilities.
For a concrete example, we may consider H3 (Z), the Heisenberg group (over Z).
This is the group whose elements are 3 × 3 matrices of the form

    ( 1 a c )
    ( 0 1 b )
    ( 0 0 1 )

with a, b, c ∈ Z. It is well known that H3 (Z) is two-step nilpotent, with the commutator
subgroup [H3 (Z), H3 (Z)] containing the matrices of the form

    ( 1 0 c )
    ( 0 1 0 )
    ( 0 0 1 ).

It follows that [[H3 (Z), H3 (Z)], H3 (Z)] = {1} and H3 (Z)/[H3 (Z), H3 (Z)] ≅ Z^2. This structure
shows that H3 (Z) has volume growth at least like a degree 3 polynomial (in fact
the volume growth is like r^4), and thus must be transient.
Finally, H3 (Z) is generated as a group by the two elements

    ( 1 1 0 )        ( 1 0 0 )
    ( 0 1 0 )  and   ( 0 1 1 ),
    ( 0 0 1 )        ( 0 0 1 )

so it can be seen as a quotient of F2, i.e. H3 (Z) ≅ F2 /N for some N. This N will satisfy
our requirements, and this finishes the proof for F2.
Let us note that for Fr with r > 2 the construction is even simpler: since Z^r
is abelian and transient for any r > 2, we can just take N to be the commutator
subgroup of Fr.
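The two-step nilpotency invoked in the proof above can also be checked mechanically. The sketch below is our own illustration, not part of the paper: it multiplies integer unitriangular matrices and confirms that the commutator of the two generators of H3 (Z) displayed above is central, so the second term of the lower central series vanishes.

```python
# Sanity check (not from the paper): the two generators of H_3(Z) used
# above have a central commutator, so H_3(Z) is two-step nilpotent.

def mul(A, B):
    return tuple(tuple(sum(A[i][k] * B[k][j] for k in range(3))
                       for j in range(3)) for i in range(3))

def inv(A):
    # inverse of an upper unitriangular 3x3 integer matrix [[1,a,c],[0,1,b],[0,0,1]]
    a, c, b = A[0][1], A[0][2], A[1][2]
    return ((1, -a, a * b - c), (0, 1, -b), (0, 0, 1))

def comm(A, B):
    return mul(mul(A, B), mul(inv(A), inv(B)))

I = ((1, 0, 0), (0, 1, 0), (0, 0, 1))
X = ((1, 1, 0), (0, 1, 0), (0, 0, 1))   # first generator
Y = ((1, 0, 0), (0, 1, 1), (0, 0, 1))   # second generator

Z = comm(X, Y)   # lies in the commutator subgroup [H_3(Z), H_3(Z)]
assert Z == ((1, 0, 1), (0, 1, 0), (0, 0, 1))
# Z commutes with both generators, hence [[H,H],H] = {1}.
assert comm(Z, X) == I and comm(Z, Y) == I
```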
4.7. Proof of Theorem 1.6.
Proof of Theorem 1.6. Let N be a normal co-nilpotent subgroup in F2 , and let Λ
be its associated Schreier graph. Choose some edge in Λ labeled with s ∈ S and
consider (Λ, ) as a marked pair. For any n > 0 let Kn ≤ F2 be the subgroup
corresponding to the root of the Schreier graph Γn (Λ, ), where Γn (Λ, ) is defined
in Section 4.3.
We have already seen in the proof of Lemma 4.12 that Core∅ (Kn ) is co-nilpotent
in F2, so the entropy function is continuous and the interval of values [0, c] is
realizable for c = hRW (G/CoreG (Kn ), µ̄), by Corollary 3.6.
So we only need to prove that hRW (G/CoreG (Kn ), µ̄) > 0 for some n. (Of course
we proved that when µ has finite support this converges to the random walk
entropy, but here we are dealing with any finite entropy generating probability
measure µ, not necessarily with finite support.)
By the well known Kaimanovich-Vershik entropy criterion [20], if there exists a
non-constant bounded harmonic function on the Schreier graph Γn (Λ, ), then the
entropy hRW (G/CoreG (Kn ), µ̄) is positive.
If the graph (Λ, ) is transient, then considering the random walk on Γn (Λ, ),
for each glued copy of (Λ, ) the random walk has positive probability to eventually
end up in that copy. In other words, there exists 0 < α < 1 such that for any
|v| = n we have
P[∃ t0 : ∀ t > t0 Z̃t ∈ shd(v)] ≥ α.
Thus, for a fixed |v| = n, the function
h(g) = hv (g) := Pg [∃ t0 : ∀ t > t0 Z̃t ∈ shd(v) | Z̃0 = g]
is a non-constant bounded harmonic function on G.
Finding a transient Schreier graph (Λ, ) with corresponding co-nilpotent subgroup
N ◁ F2 can be done exactly as in the proof of Theorem 1.2.
References
[1] Miklos Abert, Nicolas Bergeron, Ian Biringer, Tsachik Gelander, Nikolay Nikolov, Jean
Raimbault, and Iddo Samet, On the growth of L2 -invariants for sequences of lattices in Lie
groups, arXiv:1210.2961 (2012).
[2] Miklos Abert, Yair Glasner, and Balint Virag, The measurable Kesten theorem,
arXiv:1111.2080 (2011).
[3] Miklós Abért, Yair Glasner, and Bálint Virág, Kesten's theorem for invariant random subgroups, Duke Mathematical Journal 163 (2014), no. 3, 465–488.
[4] Lewis Bowen, Random walks on random coset spaces with applications to Furstenberg entropy., Inventiones mathematicae 196 (2014), no. 2.
[5] Lewis Bowen, Invariant random subgroups of the free group, Groups, Geometry, and Dynamics 9 (2015), no. 3, 891–916.
[6] Lewis Bowen, Rostislav Grigorchuk, and Rostyslav Kravchenko, Invariant random subgroups
of lamplighter groups, Israel Journal of Mathematics 207 (2015), no. 2, 763–782.
[7] Lewis Bowen, Yair Hartman, and Omer Tamuz, Generic stationary measures and actions,
Transactions of the American Mathematical Society (2017).
[8] Peter Burton, Martino Lupini, and Omer Tamuz, Weak equivalence of stationary actions
and the entropy realization problem, arXiv preprint arXiv:1603.05013 (2016).
[9] Gustave Choquet and Jacques Deny, Sur l'équation de convolution µ = µ ∗ σ, Comptes rendus hebdomadaires des séances de l'Académie des sciences 250 (1960), no. 5, 799–801.
[10] Thierry Coulhon and Laurent Saloff-Coste, Isopérimétrie pour les groupes et les variétés,
Rev. Mat. Iberoamericana 9 (1993), no. 2, 293–314.
[11] Thomas M Cover and Joy A Thomas, Elements of information theory, John Wiley & Sons,
2012.
[12] Alex Furman, Random walks on groups and random transformations, Handbook of dynamical systems 1 (2002), 931–1014.
[13] H. Furstenberg, Random walks and discrete subgroups of Lie groups, Advances in Probability
and Related Topics 1 (1971), 1–63.
[14] H Furstenberg, Boundary theory and stochastic processes on homogeneous spaces, Harmonic
analysis on homogeneous spaces 26 (1973), 193–229.
[15] H. Furstenberg and E. Glasner, Stationary dynamical systems, Dynamical numbers—
interplay between dynamical systems and number theory, 2010, pp. 1–28. MR2762131
[16] Yair Hartman and Omer Tamuz, Furstenberg entropy realizations for virtually free groups
and lamplighter groups, Journal d’Analyse Mathématique 126 (2015), no. 1, 227–257.
[17] Yair Hartman and Omer Tamuz, Stabilizer rigidity in irreducible group actions, Israel Journal of Mathematics 216 (2016), no. 2, 679–705.
[18] Vadim A Kaimanovich, The Poisson boundary of amenable extensions, Monatshefte für Mathematik 136 (2002), no. 1, 9–15.
[19] Vadim A Kaimanovich, Amenability and the Liouville property, Israel Journal of Mathematics 149 (2005), no. 1, 45–85.
[20] Vadim A Kaimanovich and Anatoly M Vershik, Random walks on discrete groups: boundary
and entropy, The annals of probability (1983), 457–490.
[21] Paul-Henry Leemann, Schreier graphs: Transitivity and coverings, International Journal of
Algebra and Computation 26 (2016), no. 01, 69–93.
[22] Russell Lyons and Yuval Peres, Probability on trees and networks, Vol. 42, Cambridge University Press, 2016.
[23] Ben Morris and Yuval Peres, Evolving sets, mixing and heat kernel bounds, Probability
Theory and Related Fields 133 (2005), no. 2, 245–266.
[24] Amos Nevo, The spectral theory of amenable actions and invariants of discrete groups,
Geometriae Dedicata 100 (2003), no. 1, 187–218.
[25] Amos Nevo and Robert J Zimmer, Rigidity of Furstenberg entropy for semisimple Lie group
actions, Annales scientifiques de l’ecole normale supérieure, 2000, pp. 321–343.
[26] Gábor Pete, Probability and geometry on groups, available at: http://math.bme.hu/~gabor/PGG.pdf.
[27] Albert Raugi, A general Choquet–Deny theorem for nilpotent groups, Annales de l’IHP
probabilités et statistiques 40 (2004), no. 6, 677–683.
[28] Anatolii Moiseevich Vershik, Totally nonfree actions and the infinite symmetric group, Mosc.
Math. J 12 (2012), no. 1, 193–212.
YH: Northwestern University, Evanston, IL
E-mail address: [email protected]
AY: Ben-Gurion University of the Negev, Be’er Sheva ISRAEL
E-mail address: [email protected]
Palindromic automorphisms of right-angled Artin groups
Neil J. Fullarton and Anne Thomas
arXiv:1510.03939v2 [] 28 Oct 2016
April 11, 2018
Abstract
We introduce the palindromic automorphism group and the palindromic Torelli
group of a right-angled Artin group AΓ . The palindromic automorphism group ΠAΓ
is related to the principal congruence subgroups of GL(n, Z) and to the hyperelliptic
mapping class group of an oriented surface, and sits inside the centraliser of a certain
hyperelliptic involution in Aut(AΓ ). We obtain finite generating sets for ΠAΓ and for
this centraliser, and determine precisely when these two groups coincide. We also find
generators for the palindromic Torelli group.
1
Introduction
Let Γ be a finite simplicial graph, with vertex set V = {v1 , . . . , vn }. Let E ⊂ V × V be the
edge set of Γ. The graph Γ defines the right-angled Artin group AΓ via the presentation
AΓ = ⟨ vi ∈ V | [vi , vj ] = 1 iff (vi , vj ) ∈ E ⟩.
One motivation, among many, for studying right-angled Artin groups and their automorphisms (see Agol [1] and Charney [3] for others) is that the groups AΓ and Aut(AΓ )
allow us to interpolate between families of groups that are classically well-studied: we
may pass between the free group Fn and free abelian group Zn , between their automorphism groups Aut(Fn ) and Aut(Zn ) = GL(n, Z), and even between the mapping class
group Mod(Sg ) of the oriented surface Sg of genus g and the symplectic group Sp(2g, Z)
(this last interpolation is explained in [8]). See Section 2 for background on right-angled
Artin groups and their automorphisms.
In this paper, we introduce a new subgroup of Aut(AΓ ) consisting of so-called ‘palindromic’
automorphisms of AΓ , which allows us a further interpolation, between certain previously
well-studied subgroups of Aut(Fn ) and of GL(n, Z). An automorphism α ∈ Aut(AΓ ) is said
to be palindromic if α(v) ∈ AΓ is a palindrome for each v ∈ V ; that is, each α(v) may be
expressed as a word u1 . . . uk on V ±1 such that u1 . . . uk and its reverse uk . . . u1 are identical
as words. The collection ΠAΓ of palindromic automorphisms is, a priori, only a subset of
Aut(AΓ ). While it is easy to see that ΠAΓ is closed under composition, it is not obvious
that it is closed under inversion. In Corollary 3.5 we prove that ΠAΓ is in fact a subgroup
of Aut(AΓ ). We thus refer to ΠAΓ as the palindromic automorphism group of AΓ .
When AΓ is free, the group ΠAΓ is equal to the palindromic automorphism group ΠAn
of Fn , which was introduced by Collins [5]. Collins proved that ΠAn is finitely presented and
1
provided an explicit finite presentation. The group ΠAn has also been studied by Glover–
Jensen [10], who showed, for instance, that it has virtual cohomological dimension n − 1.
At the other extreme, when AΓ is free abelian, the group ΠAΓ is the principal level 2
congruence subgroup Λn [2] of GL(n, Z). Thus ΠAΓ enables us to interpolate between these
two classes of groups.
Let ι be the automorphism of AΓ that inverts each v ∈ V . In the case that AΓ is free,
it is easy to verify that the palindromic automorphism group ΠAΓ = ΠAn is equal to the
centraliser CΓ (ι) of ι in Aut(AΓ ) (hence ΠAn is a group). For a general AΓ , we prove
that ΠAΓ is a finite index subgroup of CΓ (ι), by first considering the finite index subgroup
of ΠAΓ consisting of ‘pure’ palindromic automorphisms; see Theorem 3.3 and Corollary 3.5.
The index of ΠAΓ in CΓ (ι) depends entirely on connectivity properties of the graph Γ, and
we give conditions on Γ that are equivalent to the groups ΠAΓ and CΓ (ι) being equal, in
Proposition 3.6. In particular, there are non-free AΓ such that ΠAΓ = CΓ (ι).
The order 2 automorphism ι is the obvious analogue in Aut(AΓ ) of the hyperelliptic involution s of an oriented surface Sg , since ι and s act as −I on H1 (AΓ , Z) and H1 (Sg , Z),
respectively. The group ΠAΓ also allows us to generalise a comparison made by the first
author in [9, Section 1] between ΠAn ≤ Aut(Fn ) and the centraliser in Mod(Sg ) of the hyperelliptic involution s, which demonstrated a deep connection between these groups. Our
study of ΠAΓ is thus motivated by its appearance in both algebraic and geometric settings.
The main result of this paper finds a finite generating set for ΠAΓ . Our generating set
includes the so-called diagram automorphisms of AΓ , which are induced by graph symmetries of Γ, and the inversions ιj ∈ Aut(AΓ ), with ιj mapping vj to vj^{−1} and fixing every
vk ∈ V \ {vj }. The function Pij : V → AΓ sending vi to vj vi vj and vk to vk (k ≠ i) induces a well-defined automorphism of AΓ , also denoted Pij , whenever certain connectivity
properties of Γ hold (see Section 3.2). We establish that these three types of palindromic
automorphisms suffice to generate ΠAΓ .
Theorem A. The group ΠAΓ is generated by the finite set of diagram automorphisms,
inversions and well-defined automorphisms Pij .
We also obtain a finite generating set for the centraliser CΓ (ι), in Corollary 3.8, by combining
the generating set given by Theorem A with a short exact sequence involving CΓ (ι) and
the pure palindromic automorphism group (see Theorem 3.3). Our generating set for CΓ (ι)
consists of the generators of ΠAΓ , along with all well-defined automorphisms of AΓ that
map vi to vi vj and fix every vk ∈ V \ {vi }, for some i 6= j with [vi , vj ] = 1 in AΓ .
Further, for any re-indexing of the vertex set V and each k = 1, . . . , n, we provide a finite generating set for the subgroup ΠAΓ (k) of ΠAΓ which fixes the vertices v1 , . . . , vk , as
recorded in Theorem 3.11. The so-called partial basis complex of AΓ , which is an analogue
of the curve complex, has as its vertices (conjugacy classes of) the images of members of V
under automorphisms of Aut(AΓ ). This complex has not, to our knowledge, appeared in the
literature, but its definition is an easy generalisation of the free group version introduced by
Day–Putman [6] in order to generate the Torelli subgroup of Aut(Fn ). A ‘palindromic’ partial basis complex was also used in [9] to approach the study of palindromic automorphisms
of Fn . Theorem 3.11 is thus a first step towards understanding stabilisers of simplices in
the palindromic partial basis complex of AΓ .
2
We prove Theorem A and our other finite generation results in Section 3, using machinery
developed by Laurence [16] for his proof that Aut(AΓ ) is finitely generated. The added
constraint for us that our automorphisms be expressed as a product of palindromic generators forces a more delicate treatment. In addition, our proof uses Servatius’ Centraliser
Theorem [18], and a generalisation to AΓ of arguments used by Collins [5, Proposition 2.2]
to generate ΠAn . Throughout this paper, we employ a decomposition into block matrices
of the image of Aut(AΓ ) in GL(n, Z) under the canonical map induced by abelianising AΓ ;
this decomposition was observed by Day [7] and by Wade [19].
We also in this work introduce the palindromic Torelli group PI Γ of AΓ , which we define to
consist of the palindromic automorphisms of AΓ that induce the identity automorphism on
H1 (AΓ ) = Zn . The group PI Γ is the right-angled Artin group analogue of the hyperelliptic
Torelli group SI g of an oriented surface Sg , which has applications to Burau kernels of braid
groups [2] and to the Torelli space quotient of the Teichmüller space of Sg [12]. Analogues
of these objects exist for right-angled Artin groups (see, for example, [4]), but are not yet
well-developed. We expect that the palindromic Torelli group will play a role in determining
their structure.
Even in the free group case, where PI Γ is denoted by PI n , little seems to be known
about the palindromic Torelli group. Collins [5] observed that PI n is non-trivial, and
Jensen–McCammond–Meier [14, Corollary 6.3] proved that PI n is not homologically finite
if n ≥ 3. An infinite generating set for PI n was obtained recently in [9, Theorem A],
and this is made up of so-called doubled commutator transvections and separating π-twists.
In Section 4 we recall and then generalise the definitions of these two classes of free group
automorphisms, to give two classes of palindromic automorphisms of a general AΓ , which
we refer to by the same names. As a first step towards understanding the structure of PI Γ ,
we obtain an explicit generating set as follows.
Theorem B. The group PI Γ is generated by the set of all well-defined doubled commutator
transvections and separating π-twists in ΠAΓ .
The generating set we obtain in Theorem B compares favourably with the generators obtained in [9] in the case that AΓ is free. Specifically, the generators given by Theorem B
are the images in Aut(AΓ ) of those generators of PI n that descend to well-defined automorphisms of AΓ (viewing AΓ as a quotient of the free group Fn on the set V ).
The proof of Theorem B in Section 4 combines our results from Section 3 with results
for PI n from [9]. More precisely, as a key step towards the proof of Theorem A, we find
a finite generating set for the pure palindromic subgroup of ΠAΓ (Theorem 3.7). We then
use these generators to determine a finite presentation for the image Θ of this subgroup
under the canonical map Aut(AΓ ) → GL(n, Z) (Theorem 4.2). In order to find this finite
presentation for Θ ≤ GL(n, Z), we also need Corollary 1.1 from [9], which leverages the
generating set for PI n from [9] to obtain a finite presentation for the principal level 2
congruence subgroup Λn [2] ≤ GL(n, Z). Finally, using a standard argument, we lift the
relators of Θ to obtain a normal generating set for PI Γ .
Acknowledgements. The authors would like to thank Tara Brendle and Stuart White,
for encouraging their collaboration on this paper. We also thank an anonymous referee for
spotting a gap in the proof of Proposition 3.1 in an earlier version, and for other helpful
comments.
3
2
Preliminaries
In this section we give definitions and some brief background on right-angled Artin groups
and their automorphisms. Throughout this section and the rest of the paper, we continue to
use the notation introduced in Section 1. We will also frequently use vi ∈ V to denote both
a vertex of the graph Γ and a generator of AΓ , and when discussing a single generator we
may omit the index i. Section 2.1 recalls definitions related to the graph Γ and Section 2.2
recalls some useful combinatorial results about words in the group AΓ . In Section 2.3 we
recall a finite generating set for Aut(AΓ ) and some important subgroups of Aut(AΓ ), and in
Section 2.4 we recall a matrix block decomposition for the image of Aut(AΓ ) in GL(n, Z).
2.1
Graph-theoretic notions
We briefly recall some graph-theoretic definitions, in particular the domination relation on
vertices of Γ.
The link of a vertex v ∈ V , denoted lk(v), consists of all vertices adjacent to v, and the star
of v ∈ V , denoted st(v), is defined to be lk(v) ∪ {v}. We define a relation ≤ on V , with
u ≤ v if and only if lk(u) ⊂ st(v). In this case, we say v dominates u, and refer to ≤ as the
domination relation [15], [16]. Figure 1 demonstrates the link of one vertex being contained
in the star of another. Note that when u ≤ v, the vertices u and v may be adjacent in Γ,
but need not be. To distinguish these two cases, we will refer to adjacent and non-adjacent
domination.
Figure 1: An example of a vertex u being dominated by a vertex v. The dashed edge is meant to
emphasise that u and v may be adjacent, but need not be.
Domination in the graph Γ may be used to define an equivalence relation ∼ on the vertex
set V , as follows. We say vi ∼ vj if and only if vi ≤ vj and vj ≤ vi , and write [vi ]
for the equivalence class of vi ∈ V under ∼. We also define an equivalence relation ∼′
by vi ∼′ vj if and only if [vi ] = [vj ] and vi vj = vj vi , writing [vi ]′ for the equivalence class
of vi ∈ V under ∼′ . We refer to [vi ] as the domination class of vi and to [vi ]′ as the adjacent
domination class of vi . Note that the vertices in [vi ] necessarily span either an edgeless or
a complete subgraph of Γ; in the former case, we will call [vi ] a free domination class, while
in the latter, where [vi ] = [vi ]′ , we will call [vi ] an abelian domination class.
4
2.2
Word combinatorics in right-angled Artin groups
In this section we recall some useful properties of words on V ±1 , which give us a measure of
control over how we express group elements of AΓ . We include the statement of Servatius’
Centraliser Theorem [18] and of a useful proposition of Laurence from [16].
First, a word on V ±1 is reduced if there is no shorter word representing the same element
of AΓ . Unless otherwise stated, we shall always use reduced words when representing
members of AΓ . Now let w and w′ be words on V ±1 . We say that w and w′ are
shuffle-equivalent if we can obtain one from the other via repeatedly exchanging subwords of the
form uv for vu when u and v are adjacent vertices in Γ. Hermiller–Meier [13] proved
that two reduced words w and w′ are equal in AΓ if and only if w and w′ are
shuffle-equivalent, and also showed that any word can be made reduced by a sequence of these
shuffles and cancellations of subwords of the form u^ε u^{−ε} (u ∈ V , ε ∈ {±1}). This allows us
to define the length of a group element w ∈ AΓ to be the number of letters in a reduced word
representing w, and the support of w ∈ AΓ , denoted supp(w), to be the set of vertices v ∈ V
such that v or v^{−1} appears in a reduced word representing w. We say w ∈ AΓ is cyclically
reduced if it cannot be written in reduced form as v w′ v^{−1} , for some v ∈ V ±1 , w′ ∈ AΓ .
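The shuffle-and-cancel procedure just described can be sketched computationally. The following is a minimal illustration of our own (not from the paper, and not an efficient normal-form algorithm): it reduces words by greedily cancelling inverse pairs across commuting letters, and uses this to test whether a word equals its reverse in the group, i.e. whether it is a palindrome in AΓ.

```python
# Illustrative sketch (not part of the paper): equality of words in a
# right-angled Artin group via greedy shuffling-and-cancelling, in the
# spirit of the Hermiller-Meier rewriting described above.
# A letter is a pair (vertex, exponent) with exponent +1 or -1.

def reduce_word(word, commutes):
    """Repeatedly cancel u^e u^{-e} pairs separated by commuting letters."""
    w, changed = list(word), True
    while changed:
        changed, out = False, []
        for g, e in w:
            j, cancelled = len(out) - 1, False
            while j >= 0:
                h, f = out[j]
                if h == g and f == -e:     # shuffle past commuting letters,
                    out.pop(j)             # then cancel the inverse pair
                    cancelled = changed = True
                    break
                if not commutes(h, g):
                    break
                j -= 1
            if not cancelled:
                out.append((g, e))
        w = out
    return w

def equal_in_group(u, v, commutes):
    inv_v = [(g, -e) for g, e in reversed(v)]
    return reduce_word(list(u) + inv_v, commutes) == []

def is_palindrome(w, commutes):
    return equal_in_group(w, list(reversed(w)), commutes)

# Example graph: a - b is an edge, c is isolated.
adjacent = {('a', 'b'), ('b', 'a')}
commutes = lambda x, y: x == y or (x, y) in adjacent

# 'ab' is a palindrome in this A_Gamma (a and b commute); 'ac' is not.
assert is_palindrome([('a', 1), ('b', 1)], commutes)
assert not is_palindrome([('a', 1), ('c', 1)], commutes)
# The image b a b of a under the map P_ij is a palindrome on the nose.
assert is_palindrome([('b', 1), ('a', 1), ('b', 1)], commutes)
```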
Servatius [18, Section III] analysed centralisers of elements in arbitrary AΓ , showing that
the centraliser of any w ∈ AΓ is again a (well-defined) right-angled Artin group, say A∆ .
Laurence [16] defined the rank of w ∈ AΓ to be the number of vertices in the graph ∆
defining A∆ . We denote the rank of w ∈ AΓ by rk(w).
In order to state his theorem on centralisers in AΓ , Servatius [18] introduced a canonical
form for any cyclically reduced w ∈ AΓ , which Laurence [16] calls a basic form of w. For
this, partition the support of w into its connected components in Γc , the complement graph
of Γ, writing
supp(w) = V1 ⊔ · · · ⊔ Vk ,
where each Vi is such a connected component. Then we write
w = w1^{r1} · · · wk^{rk} ,
where each ri ∈ Z and each wi ∈ ⟨Vi⟩ is not a proper power in AΓ (that is, each |ri | is
maximal). Note that by construction, [wi , wj ] = 1 for 1 ≤ i < j ≤ k. Thus the basic form
of w is unique up to permuting the order of the wi , and shuffling within each wi . With this
terminology in place, we now state Servatius’ ‘Centraliser Theorem’ for later use.
Theorem 2.1 (Servatius, [18]). Let w be a cyclically-reduced word on V ±1 representing
an element of AΓ . Writing w = w1^{r1} · · · wk^{rk} in basic form, the centraliser of w in AΓ is
isomorphic to
⟨w1⟩ × · · · × ⟨wk⟩ × ⟨lk(w)⟩,
where lk(w) denotes the subset of V of vertices which are adjacent to each vertex in supp(w).
We will also make frequent use of the following result, due to Laurence [16], and so state it
now for reference.
Proposition 2.2 (Proposition 3.5, Laurence [16]). Let w ∈ AΓ be cyclically reduced, and
write w = w1^{r1} · · · wk^{rk} in basic form, with Vi := supp(wi ). Then:
5
1. rk(v) ≥ rk(w) for all v ∈ supp(w); and
2. if rk(v) = rk(w) for some v ∈ Vi , then:
(a) v ≤ u for all u ∈ supp(w);
(b) each Vj is a singleton (j 6= i); and
(c) v does not commute with any vertex of Vi \ {v}.
Recall that a clique in a graph Γ is a complete subgraph. If ∆ is a clique in Γ then A∆ is
free abelian of rank equal to the number of vertices of ∆, so any word supported on ∆ can
be written in only finitely many reduced ways. The set of cliques in ∆ is partially ordered
by inclusion, giving rise to the notion of a maximal clique in a graph Γ.
2.3
Automorphisms of right-angled Artin groups
In this section we recall a finite generating set for Aut(AΓ ). This generating set was obtained
by Laurence [16], confirming a conjecture of Servatius [18], who had verified that the set
generates Aut(AΓ ) in certain special cases.
In the following list, the action of each generator of Aut(AΓ ) is given on v ∈ V , with the
convention that if a vertex is omitted from discussion, it is fixed by the automorphism.
There are four types of generators:
1. Diagram automorphisms φ: each φ ∈ Aut(Γ) induces an automorphism of AΓ , which
we also denote by φ, mapping v ∈ V to φ(v).
2. Inversions ιj : for each vj ∈ V , ιj maps vj to vj^{−1} .
3. Dominated transvections τij : for vi , vj ∈ V , whenever vi is dominated by vj , there
is an automorphism τij mapping vi to vi vj . We refer to a (well-defined) dominated
transvection τij as an adjacent transvection if [vi , vj ] = 1; otherwise, we say τij is a
non-adjacent transvection.
4. Partial conjugations γvi ,D : fix vi ∈ V , and select a connected component D of Γ \ st(vi )
(see Figure 2). The partial conjugation γvi ,D maps every d ∈ D to vi d vi^{−1} .
We denote by DΓ , IΓ and PC(AΓ ) the subgroups of Aut(AΓ ) generated by diagram automorphisms, inversions and partial conjugations, respectively, and by Aut0 (AΓ ) the subgroup
of Aut(AΓ ) generated by all inversions, dominated transvections and partial conjugations.
2.4
A matrix block decomposition
Now we recall a useful decomposition into block matrices of an image of Aut(AΓ ) inside
GL(n, Z). This decomposition was observed by Day [7] and by Wade [19].
Let Φ : Aut(AΓ ) → GL(n, Z) be the canonical homomorphism induced by abelianising AΓ .
Note that since DΓ normalises Aut0 (AΓ ), any φ ∈ Aut(AΓ ) may be written (non-uniquely,
in general), as φ = δβ, where δ ∈ DΓ and β ∈ Aut0 (AΓ ).
6
Figure 2: When we remove the star of v, we leave three connected components D, D′ and D″.
By ordering the vertices of Γ appropriately, matrices in Φ(Aut0 (AΓ )) ≤ GL(n, Z) will have
a particularly tractable lower block-triangular decomposition, which we now describe. The
domination relation ≤ on V descends to a partial order, also denoted ≤, on the set of
domination classes V / ∼, which we (arbitrarily) extend to a total order,
[u1 ] < · · · < [uk ]
where [ui ] ∈ V / ∼. This total order may be lifted back up to V by specifying an arbitrary
total order on each domination class [ui ] ∈ V / ∼. We reindex the vertices of Γ if necessary
so that the ordering v1 , v2 , . . . , vn is this specified total order on V . Let ni denote the size
of the domination class [ui ] ∈ V / ∼. Under this ordering, any matrix M ∈ Φ(Aut0 (AΓ ))
has block decomposition:
    M1   0    0    · · ·   0
    ∗    M2   0    · · ·   0
    ∗    ∗    M3   · · ·   0
    ⋮    ⋮    ⋮     ⋱     ⋮
    ∗    ∗    ∗    · · ·   Mk  ,
where Mi ∈ GL(ni , Z) and the (i, j) block ∗ (j < i) may only be non-zero if uj is dominated by ui in Γ. This triangular decomposition becomes apparent when the images of the
generators of Aut0 (AΓ ) are considered inside GL(n, Z). The diagonal blocks may be any
Mi ∈ GL(ni , Z), as by definition each domination class gives rise to all ni (ni − 1) transvections in GL(ni , Z), which, together with the appropriate inversions, generate GL(ni , Z).
A diagonal block corresponding to a free domination class will also be called free, and a
diagonal block corresponding to an abelian domination class will be called abelian.
This block decomposition descends to an analogous decomposition of the image of Aut0 (AΓ )
under the canonical map Φ2 to GL(n, Z/2), as this map factors through the homomorphism
GL(n, Z) → GL(n, Z/2) that reduces matrix entries mod 2.
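The ordering underlying this block decomposition is entirely combinatorial, so it is easy to compute. The sketch below is our own illustration (not from the paper): it computes the domination relation u ≤ v, i.e. lk(u) ⊆ st(v), and the domination classes of a small graph, whose sizes ni give the diagonal block sizes above.

```python
# Illustrative sketch (not from the paper): the domination relation
# u <= v  (defined by lk(u) ⊆ st(v)) and the domination classes of a graph.

def domination_data(vertices, edges):
    adj = {v: set() for v in vertices}
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    lk = {v: adj[v] for v in vertices}            # link of each vertex
    st = {v: adj[v] | {v} for v in vertices}      # star of each vertex
    dominates = {(u, v) for u in vertices for v in vertices
                 if lk[u] <= st[v]}               # pairs with u <= v
    # domination classes: u ~ v iff u <= v and v <= u; each class spans
    # an edgeless ("free") or complete ("abelian") subgraph of the graph.
    classes, seen = [], set()
    for u in vertices:
        if u in seen:
            continue
        cls = {v for v in vertices
               if (u, v) in dominates and (v, u) in dominates}
        classes.append(cls)
        seen |= cls
    return dominates, classes

# Example: the path 1 - 2 - 3. Both endpoints are dominated by the
# centre 2, and 1 ~ 3 (a free domination class of size n_i = 2).
dom, classes = domination_data([1, 2, 3], [(1, 2), (2, 3)])
assert (1, 2) in dom and (3, 2) in dom
assert {1, 3} in classes
```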
3
Palindromic automorphisms
Our main goal in this section is to prove Theorem A, which gives a finite generating set for
the group of palindromic automorphisms ΠAΓ . First of all, in Section 3.1, we derive a normal
7
form for group elements α(v) ∈ AΓ where v ∈ V and α lies in the centraliser CΓ (ι). In
Section 3.2 we introduce the pure palindromic automorphisms PΠAΓ , and prove that PΠAΓ
is a group by showing that it is a kernel inside CΓ (ι). We then show that ΠAΓ is a group,
and determine when the groups CΓ (ι) and ΠAΓ are equal. The proof of Theorem A is
carried out in Section 3.3, where the main step is to find a finite generating set for PΠAΓ .
We also provide finite generating sets for CΓ (ι) and for certain stabiliser subgroups of ΠAΓ .
3.1
The centraliser CΓ (ι) and a clique-palindromic normal form
In this section we prove Proposition 3.1, which provides a normal form for reduced words
w = u1 . . . uk (ui ∈ V ±1 ) that are equal (in the group AΓ ) to their reverse,
wrev := uk . . . u1 .
We then in Corollary 3.2 derive implications for the diagonal blocks in the matrix decomposition discussed in Section 2.4. The results of this section will be used in Section 3.2
below.
Green, in her thesis [11], established a normal form for elements of AΓ , by iterating an
algorithm that takes a word w0 on V ±1 and rewrites it as w0 = pw1 in AΓ , where p is a
word consisting of all the letters of w0 that may be shuffled (as in Section 2.2) to be the
initial letter of w0 , and w1 is the word remaining after shuffling each of these letters into
the initial segment p. We now use a similar idea for palindromes.
Let ι denote the automorphism of AΓ that inverts each v ∈ V . We refer to ι as the (preferred)
hyperelliptic involution of AΓ . Denote by CΓ (ι) the centraliser in Aut(AΓ ) of ι. Note that
this centraliser is far from trivial: it contains all diagram automorphisms, inversions and
adjacent transvections in Aut(AΓ ), and also contains all palindromic automorphisms. The
following proposition gives a normal form for the image of v ∈ V under the action of
some α ∈ CΓ (ι).
Proposition 3.1 (Clique-palindromic normal form). Let α ∈ CΓ (ι) and v ∈ V . Then we
may write
α(v) = w1 . . . wk−1 wk wk−1 . . . w1 ,
where wi is a word supported on a clique in Γ (1 ≤ i ≤ k), and if k ≥ 3 then [wi , wi+1 ] ≠ 1
(1 ≤ i ≤ k − 2). Moreover, this expression for α(v) is unique up to the finitely many
rewritings of each word wi in AΓ .
We refer to this normal form as clique-palindromic because the words under consideration,
while equal to their reverses in the group AΓ as genuine palindromes are, need only be
palindromic ‘up to cliques’, as in the expression in the statement of the proposition.
Proof. Suppose α ∈ CΓ (ι) and v ∈ V . Write α(v) = u1 . . . ur in reduced form, where each
ui is in V ±1 . Since αι(v) = ια(v), we have that
u1 . . . ur = ur . . . u1
(1)
in AΓ . If α(v) is supported on a clique, then there is nothing to show. Otherwise, put
A1 = α(v) and let Z1 be the (possibly empty) subset of V consisting of the vertices in
supp(A1 ) which commute with every vertex in supp(A1 ). We note that Z1 is supported on
a clique, and that Z1 is, by assumption, a proper subset of supp(A1 ).
We now rewrite A1 = u1 . . . ur as w1 u′1 . . . u′s , where u′j ∈ V ±1 (1 ≤ j ≤ s), and w1 ∈ AΓ
is the word consisting of all the ui which are not in Z1±1 and which may be shuffled to the
start of u1 . . . ur . That is, w1 consists of all letters ui ∉ Z1±1 so that if i ≥ 1, the letter ui
commutes with each of u1 , . . . , ui−1 . Notice that w1 is nonempty since the first ui which is
not in Z1 will be in w1 . By construction, w1 is supported on a clique in Γ.
Now any ui that may be shuffled to the start of u1 . . . ur may also be shuffled to the end
of ur . . . u1 , by (1). Hence we may also rewrite A1 as u′′1 . . . u′′s w1 for the same word w1 .
Since the support of w1 is disjoint from Z1 , the letters of A1 used in the copy of w1 at the
start of w1 u′1 . . . u′s are disjoint from the letters of A1 used in the copy of w1 at the end of
u′′1 . . . u′′s w1 . We thus obtain that
A1 = α(v) = w1 u′′1 . . . u′′t w1
in AΓ , with u′′i ∈ V ±1 . Since αι(v) = ια(v), it must be the case that u′′1 . . . u′′t = u′′t . . . u′′1
in AΓ .
Now put A2 = u′′1 . . . u′′t , so that A1 = w1 A2 w1 . Note that supp(A2 ) contains Z1 . If A2 is
supported on a clique, for example if supp(A2 ) = Z1 , then we put w2 = A2 and are done.
(In this case, supp(A2 ) = Z1 if and only if w1 and w2 commute.) If A2 is not supported
on a clique, we define Z2 to be the vertices in supp(A2 ) which commute with the entire
support of A2 , and iterate the process described above. Since each word wi constructed by
this process is nonempty, the word Ai+1 is shorter than Ai , hence the process terminates
after finitely many steps. Notice also that Z1 ⊆ Z2 ⊆ · · · ⊆ Zi ⊆ supp(Ai+1 ), so any letters
of Ai which lie in Zi become part of the word Ai+1 . In particular, any letter of A1 = α(v)
which is in some Zi , for example a letter in Z(AΓ ), will end up in the word wk when the
process terminates.
By construction, each wi is supported on a clique in Γ. Now the word Ai+1 is not supported
on a clique if and only if a further iteration is needed, which occurs if and only if i ≤ k − 2.
In this case, Zi must be a proper subset of supp(Ai+1 ) and so wi+1 does not commute with
wi (the word wk may or may not commute with wk−1 ). Thus the expression obtained for
α(v) when this process terminates is as in the statement of the proposition. Moreover, this
expression is unique up to rewriting each of the wi , as they were defined in a canonical
manner. This completes the proof.
This normal form gives us the following corollary regarding the structure of diagonal blocks
in the lower block-triangular decomposition of the image of α ∈ CΓ (ι) under the canonical
map Φ : Aut(AΓ ) → GL(n, Z), discussed in Section 2.4. Recall that Λk [2] denotes the
principal level 2 congruence subgroup of GL(k, Z).
Corollary 3.2. Write α ∈ CΓ (ι) as α = δβ, for some β ∈ Aut0 (AΓ ) and δ ∈ DΓ . Let M be
the matrix appearing in a diagonal block of rank k in the lower block-triangular decomposition
of Φ(β) ∈ GL(n, Z). Then:
1. if the diagonal block is abelian, then M may be any matrix in GL(k, Z); and
2. if the diagonal block is free then M must lie in Λk [2], up to permuting columns.
Proof. First, note that since DΓ ≤ CΓ (ι), we must have that β ∈ CΓ (ι). We deal with the
abelian block case first. The group CΓ (ι) ∩ Aut0 (AΓ ) contains all the adjacent transvections
and inversions necessary to generate GL(k, Z) under Φ, so the matrix M in this diagonal
block may be any member of GL(k, Z).
Now, suppose that the diagonal block is free. Suppose the column of M corresponding to
v ∈ V contains two odd entries, in turn corresponding to vertices u1 , u2 ∈ [v], say. This
implies that β(v) has odd exponent sum of u1 and of u2 . Use Proposition 3.1 to write
β(v) = w1 . . . wk . . . w1
in normal form, with each wi ∈ AΓ being supported on some clique in Γ. It must be the
case that wk has odd exponent sum of u1 and of u2 , since all other wi (i ≠ k) appear twice
in the normal form expression. Thus u1 and u2 commute. This contradicts the assumption
that the diagonal block is free, so there must be precisely one odd entry in each column
of M . Hence up to permuting columns, we have M ∈ Λk [2].
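The column condition used in this proof is easy to test computationally. The following Python sketch (the helper and the sample matrices are ours, not from the paper) checks whether a matrix reduces mod 2 to a permutation matrix, i.e. whether it lies in Λk [2] up to permuting columns.

```python
import numpy as np

# Sketch of the column condition in Corollary 3.2: a matrix lies in the
# level 2 congruence group Lambda_k[2] up to a column permutation exactly
# when its mod 2 reduction is a permutation matrix, i.e. every column
# (and row) contains exactly one odd entry.
def is_level2_up_to_column_permutation(M):
    P = M % 2
    return bool((P.sum(axis=0) == 1).all() and (P.sum(axis=1) == 1).all())

# In Lambda_2[2] itself: congruent to the identity mod 2.
M1 = np.array([[3, 2],
               [4, -1]])
# A column swap of M1: still one odd entry per column.
M2 = M1[:, [1, 0]]
# Two odd entries in the first column: ruled out for a free block.
M3 = np.array([[1, 2],
               [1, 1]])

assert is_level2_up_to_column_permutation(M1)
assert is_level2_up_to_column_permutation(M2)
assert not is_level2_up_to_column_permutation(M3)
```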
3.2 Pure palindromic automorphisms
In this section we introduce the pure palindromic automorphisms PΠAΓ , which we will see
form an important finite index subgroup of ΠAΓ . In Theorem 3.3 we prove that PΠAΓ is a
group, by showing that it is the kernel of the map from the centraliser CΓ (ι) to GL(n, Z/2)
induced by mod 2 abelianisation. Proposition 3.4 then says that any element of ΠAΓ can
be expressed as a product of an element of PΠAΓ with a diagram automorphism, and as
Corollary 3.5 we obtain that the collection of palindromic automorphisms ΠAΓ is in fact a
group. This section concludes by establishing a necessary and sufficient condition on the
graph Γ for the groups ΠAΓ and CΓ (ι) to be equal, in Proposition 3.6.
We define PΠAΓ ⊂ ΠAΓ to be the subset of palindromic automorphisms of AΓ such that for
each v ∈ V , the word α(v) may be expressed as a palindrome whose middle letter is either v
or v^{−1} . For instance, IΓ ⊂ PΠAΓ but DΓ ∩ PΠAΓ is trivial. If vi ≤ vj , there is a well-defined
pure palindromic automorphism Pij := (ιτij )^2 , which sends vi to vj vi vj and fixes every other
vertex in V . We refer to Pij as a dominated elementary palindromic automorphism of AΓ .
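For readers who wish to experiment, the following Python sketch realises Pij = (ιτij )^2 in the free group case. Words are tuples of nonzero integers and only free reduction is implemented, so the commutation relations of a general AΓ are ignored; the convention τij : vi ↦ vi vj and the composition order are our assumptions, not the paper's.

```python
# Free-group sketch of P_ij = (iota . tau_ij)^2 sending v_i to v_j v_i v_j.
# A word is a tuple of nonzero ints: k means v_k, -k means v_k^{-1}.

def reduce_word(w):
    # Stack-based free reduction: cancel adjacent x, x^{-1} pairs.
    out = []
    for x in w:
        if out and out[-1] == -x:
            out.pop()
        else:
            out.append(x)
    return tuple(out)

def apply_map(images, w):
    # images[k] is the image word of generator v_k; extend to inverses.
    result = []
    for x in w:
        img = images[abs(x)]
        result.extend(img if x > 0 else [-y for y in reversed(img)])
    return reduce_word(result)

def compose(f, g):
    # (f . g)(v) = f(g(v)); keys are the generator indices.
    return {k: apply_map(f, w) for k, w in g.items()}

n = 3
identity = {k: (k,) for k in range(1, n + 1)}
iota = {k: (-k,) for k in range(1, n + 1)}       # v -> v^{-1}
tau_12 = dict(identity)
tau_12[1] = (1, 2)                               # assumed: v_1 -> v_1 v_2

P_12 = compose(iota, compose(tau_12, compose(iota, tau_12)))
assert P_12[1] == (2, 1, 2)                      # v_1 -> v_2 v_1 v_2
assert P_12[2] == (2,) and P_12[3] == (3,)       # all other generators fixed
```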
The following theorem shows that PΠAΓ is a group, by establishing that it is a kernel
inside CΓ (ι). We will thus refer to PΠAΓ as the pure palindromic automorphism group
of AΓ .
Theorem 3.3. There is an exact sequence
1 −→ PΠAΓ −→ CΓ (ι) −→ GL(n, Z/2).
(2)
Moreover, the image of CΓ (ι) in GL(n, Z/2) is generated by the images of all diagram
automorphisms and adjacent dominated transvections in Aut(AΓ ).
Proof. Let Φ2 : Aut(AΓ ) → GL(n, Z/2) be the map induced by the mod 2 abelianisation
map AΓ → (Z/2)n . We will show that PΠAΓ is the kernel of the restriction of Φ2 to CΓ (ι).
Let α ∈ CΓ (ι). Note that for each v ∈ V , the element α(v) necessarily has odd length,
since α(v) must survive under the mod 2 abelianisation map AΓ → (Z/2)n . Now for
each v ∈ V , write α(v) in clique-palindromic normal form w1 . . . wk . . . w1 , as in Proposition 3.1. Both the index k and the word wk here depend upon v, so we write w(v) for the
central clique word in the clique-palindromic normal form for α(v). Then each word w(v)
is a palindrome of odd length which is supported on a clique in Γ. It follows that the
automorphism α lies in PΠAΓ if and only if for each v ∈ V , the exponent sum of v in the
word w(v) is odd, and every other exponent sum is even. Thus PΠAΓ is precisely the kernel
of the restriction of Φ2 .
We now derive the generating set for Φ2 (CΓ (ι)) in the statement of the theorem. Given
α ∈ CΓ (ι), write α = δβ, where δ ∈ DΓ and β ∈ Aut0 (AΓ ). We map β into GL(n, Z/2) using
the canonical map Φ2 , and give Φ2 (β) the lower block-triangular decomposition discussed
in Section 2.4.
By Corollary 3.2, we can reduce each diagonal block of Φ2 (β) to an identity matrix by
composing Φ2 (β) with appropriate members of Φ2 (CΓ (ι)): permutation matrices (in the
case of a free block), or images of adjacent transvections (in the case of an abelian block).
The resulting matrix N ∈ Φ2 (CΓ (ι)) lifts to some α′ ∈ CΓ (ι).
If N has an off-diagonal 1 in its ith column, this corresponds to α′ (vi ) having odd exponent
sum of both vi and vj , say. Writing α′ (vi ) in clique-palindromic normal form w1 . . . wk . . . w1 ,
we must have that vi and vj both have odd exponent sum in wk , and hence commute, by
Proposition 3.1. The presence of the 1 in the (j, i) entry of N implies that vi ≤ vj , and so
we can use the image of the (adjacent) transvection τij to clear it.
Thus we conclude that Φ2 (β) may be written as a product of images of diagram automorphisms and adjacent transvections. Hence Φ2 (CΓ (ι)) is also generated by these automorphisms.
We now use Theorem 3.3 to prove that the collection of palindromic automorphisms ΠAΓ
is a subgroup of Aut(AΓ ). We will require the following result.
Proposition 3.4. Let α ∈ Aut(AΓ ) be palindromic. Then α can be expressed as α = δγ
where γ ∈ PΠAΓ and δ ∈ DΓ .
Proof. Let α ∈ ΠAΓ . Define a function δ : V → V by letting δ(v) be the middle letter
of a reduced palindromic word representing α(v). Note that δ is well-defined, because all
reduced expressions for α(v) are shuffle-equivalent, and in any such reduced expression there
is exactly one letter with odd exponent sum. The map δ must be bijective, otherwise the
image of α in GL(n, Z/2) would have two identical columns. We now show that δ induces
a diagram automorphism of AΓ , which by abuse of notation we also denote δ.
Since δ : V → V is a bijection and Γ is simplicial, it suffices to show that δ induces a
graph endomorphism of Γ. Suppose that u, v ∈ V are joined by an edge in Γ. Then
[α(v), α(u)] = 1, and so we apply Servatius’ Centraliser Theorem (Theorem 2.1). Write
α(u) in basic form w1^{r1} . . . ws^{rs} (see Section 2.2). Since α(u) is a palindrome, all but one
of these wi will be an even length palindrome, and exactly one will be an odd length
palindrome, with odd exponent sum of δ(u). We know by the Centraliser Theorem that
α(v) lies in
⟨w1 ⟩ × · · · × ⟨ws ⟩ × ⟨lk(α(u))⟩.
Since δ(v) ≠ δ(u), the only way α(v) can have an odd exponent sum of δ(v) is if δ(v) ∈ lk(α(u)).
In particular, [δ(v), δ(u)] = 1. Thus δ preserves adjacency in Γ and hence induces a diagram
automorphism.
The proposition now follows, setting γ = δ −1 α ∈ PΠAΓ .
The following corollary is immediate.
Corollary 3.5. The set ΠAΓ forms a group. Moreover, this group splits as PΠAΓ ⋊ DΓ .
We are now able to determine precisely when the groups ΠAΓ and CΓ (ι) appearing in the
exact sequence (2) in the statement of Theorem 3.3 are equal.
Proposition 3.6. The groups ΠAΓ and CΓ (ι) are equal if and only if Γ has no adjacent
domination classes.
Proof. If Γ has an adjacent domination class, then the adjacent transvections to which it
gives rise are in CΓ (ι) but not in ΠAΓ .
For the converse, suppose α ∈ CΓ (ι)\ΠAΓ . Write α = δβ, where δ ∈ DΓ and β ∈ Aut0 (AΓ ),
as in the proof of Theorem 3.3. Note that since DΓ ≤ CΓ (ι) we have that β ∈ CΓ (ι). There
must be a v ∈ V such that β(v) has at least two letters of odd exponent sum, say u1 and u2 ,
as otherwise α would lie in ΠAΓ . Recall that u1 and u2 must commute, as they both must
appear in the central clique word of the clique-palindromic normal form of β(v), in order
to have odd exponent sum.
Consider Φ(β) in GL(n, Z) under our usual lower block-triangular matrix decomposition,
discussed in Section 2.4. It must be the case that both u1 and u2 dominate v. This is
because the odd entries in the column of Φ(β) corresponding to v that arise due to u1
and u2 either lie in the diagonal block containing v, or below this block. In the former
case, this gives u1 , u2 ∈ [v], while in the latter, the presence of non-zero entries below the
diagonal block of v forces u1 , u2 ≥ v (as discussed in Section 2.4). If v dominates u1 , say,
in return, then we obtain u1 ≤ v ≤ u2 , and so by transitivity u1 is (adjacently) dominated
by u2 , proving the proposition in this case.
Now consider the case that neither u1 nor u2 is dominated by v. By Corollary 3.2, we may
carry out some sequence of row operations to Φ(β) corresponding to the images of inversions,
adjacent transvections, or Pij in Φ(CΓ (ι)), to reduce the diagonal block corresponding to [v]
to the identity matrix. The resulting matrix lifts to some β ′ ∈ CΓ (ι), such that β ′ (v) has
exponent sum 1 of v, and odd exponent sums of u1 and of u2 . As we argued in the proof of
Corollary 3.2, this means u1 , u2 and v pairwise commute, and so v is adjacently dominated
by u1 (and u2 ). This completes the proof.
3.3 Finite generating sets
In this section we prove Theorem A of the introduction, which gives a finite generating set
for the palindromic automorphism group ΠAΓ . The main step is Theorem 3.7, where we
determine a finite set of generators for the pure palindromic automorphism group PΠAΓ .
We also obtain finite generating sets for the centraliser CΓ (ι) in Corollary 3.8, and for
certain stabiliser subgroups of ΠAΓ in Theorem 3.11.
Theorem 3.7. The group PΠAΓ is generated by the finite set comprising the inversions
and the dominated elementary palindromic automorphisms.
Before proving Theorem 3.7, we state a corollary obtained by combining Theorems 3.3
and 3.7.
Corollary 3.8. The group CΓ (ι) is generated by diagram automorphisms, adjacent dominated transvections and the generators of PΠAΓ .
Our proof of Theorem 3.7 is an adaptation of Laurence’s proof [16] of finite generation
of Aut(AΓ ). First, in Lemma 3.9 below, we show that any α ∈ PΠAΓ may be precomposed with suitable products of our proposed generators to yield what we refer to as a
‘simple’ automorphism of AΓ (defined below). The simple palindromic automorphisms may
then be understood by considering subgroups of PΠAΓ that fix certain free product subgroups inside AΓ ; we define and obtain generating sets for these subgroups in Lemma 3.10.
Combining these results, we complete our proof of Theorem 3.7.
For each v ∈ V , we define α ∈ PΠAΓ to be v-simple if supp(α(v)) is connected in Γc . We
say that α ∈ PΠAΓ is simple if α is v-simple for all v ∈ V . Laurence’s definition of a
v-simple automorphism φ ∈ Aut(AΓ ) is more general and differs from ours; however, the
two definitions are equivalent when φ ∈ PΠAΓ .
Let S denote the set of inversions and dominated elementary palindromic automorphisms
in ΠAΓ (that is, the generating set for PΠAΓ proposed by Theorem 3.7). We say that
α, β ∈ PΠAΓ are π-equivalent if there exists θ ∈ ⟨S⟩ such that α = βθ. In other words,
α, β ∈ PΠAΓ are π-equivalent if β^{−1} α ∈ ⟨S⟩.
Lemma 3.9. Every α ∈ PΠAΓ is π-equivalent to some simple automorphism χ ∈ PΠAΓ .
Proof. Suppose α ∈ PΠAΓ . We note once and for all that the palindromic word α(u) is
cyclically reduced, for any u ∈ V .
Select a vertex v ∈ V of maximal rank for which α(v) is not v-simple. Now write
α(v) = w1^{r1} . . . ws^{rs}
in basic form, reindexing if necessary so that v ∈ supp(w1 ). The ranks of v and α(v) are
equal, since α induces an isomorphism from the centraliser in AΓ of v to that of α(v). Hence
by Proposition 2.2, parts 2(b) and 2(a) respectively, each wi ∈ AΓ (for i > 1) is some vertex
generator in V , and wi ≥0 v. Moreover, for i > 1, each ri is even, since α(v) is palindromic.
Now, for i > 1, suppose wi ≥0 v but [v]0 ≠ [wi ]0 . By Servatius’ Centraliser Theorem
(Theorem 2.1), we know that the centraliser of a vertex is generated by its star, and hence
conclude that rk(wi ) > rk(v). This gives that α is wi -simple, by our assumption on the
maximality of the rank of v. In basic form, then,
α(wi ) = p^ℓ ,
where ℓ ∈ Z, p ∈ AΓ , and supp(p) is connected in Γc . Note also that supp(p) contains wi ,
since α ∈ PΠAΓ .
Suppose there exists t ∈ supp(p) \ {wi }. As for v before, by Proposition 2.2, we have t ≥ wi ,
since rk(α(wi )) = rk(wi ). We know wi ≥0 v, and so t ≥ v. Since wi , v and t are pairwise
distinct, this forces wi and t to be adjacent, which contradicts Proposition 2.2, part 2(c).
So
α(wi ) = wi^ℓ ,
and necessarily ℓ = ±1. Knowing this, we replace α with αβi where βi ∈ ⟨S⟩ is the
palindromic automorphism of the form
v ↦ wi^{ℓri/2} v wi^{ℓri/2} .
By doing this for each such wi , we ensure that any wi that strictly dominates v is not in
the support of αβi (v). Note α(v′) = αβi (v′) for all v′ ≠ v.
If s = 1, then α is v-simple, so by our assumption on v, we must have s > 1. Because
we have reduced to the case where wi ∈ [v]0 for i > 1, we must have w1 = v ±1 , otherwise
we get a similar adjacency contradiction as in the previous paragraph: if there exists t ∈
supp(w1 ) \ {v}, then, as before, t ≥ v, and since [wi ]0 = [v]0 , this would force t and v to
be adjacent. Thus α(v) ∈ ⟨[v]0 ⟩. Indeed, the discussion in the previous two paragraphs
goes through for any u ∈ [v]0 , so we may assume that α(u) ∈ ⟨[v]0 ⟩ for any u ∈ [v]0 . Thus
α⟨[v]0 ⟩ ≤ ⟨[v]0 ⟩, with equality holding by [16, Proposition 6.1].
The group ⟨[v]0 ⟩ is free abelian, and by considering exponent sums, we see that the restriction
of α to the group ⟨[v]0 ⟩ is a member of the level 2 congruence subgroup Λk [2], where k = |[v]0 |.
We know that Theorem 3.7 holds in the special case of these congruence groups (see [9,
Lemma 2.4], for example), so we can precompose α with the appropriate automorphisms in
the set S so that the new automorphism obtained, α′ , is the identity on ⟨[v]0 ⟩, and acts the
same as α on all other vertices in V . The automorphisms α and α′ are π-equivalent, and α′
is v-simple (indeed: α′ (v) = v).
From here, we iterate this procedure, selecting a vertex u ∈ V \ {v} of maximal rank for
which α′ is not u-simple, and so on, until we have exhausted the vertices of Γ preventing α
from being simple.
Now, for each v ∈ V , define Γv to be the set of vertices that dominate v but are not adjacent
to v. Further define Xv := {v = v1 , . . . , vr } ⊆ Γv to be the vertices of Γv that are also
dominated by v. Partition Γv into its connected components in the graph Γ \ lk(v). This
partition is of the form
(Γ1 ⊔ · · · ⊔ Γt ) ⊔ ({v1 } ⊔ · · · ⊔ {vr }) ,
where Γ1 ⊔ · · · ⊔ Γt = Γv \ Xv . Letting Hi = ⟨Γi ⟩, we see that
H := ⟨Γv ⟩ = H1 ∗ · · · ∗ Ht ∗ ⟨Xv ⟩,
(3)
where Fr := ⟨Xv ⟩ is a free group of rank r. Notice that H is itself a right-angled Artin
group.
The final step in proving Theorem 3.7 requires a generating set for a certain subgroup of
palindromic automorphisms in Aut(H), which we now define. Let Y denote the subgroup of
Aut(H) consisting of the pure palindromic automorphisms of H that restrict to the identity
on each Hi . The following lemma says that this group is generated by its intersection with
the finite list of generators stated in Theorem 3.7. In the special case when there are no Hi
factors in the free product (3) above, this result was established by Collins [5]. Our proof
is a generalisation of his.
Lemma 3.10. The group Y is generated by the inversions of the free group Fr and the
elementary palindromic automorphisms of the form P (s, t) : s ↦ tst, where t ∈ Γv and
s ∈ Xv .
Proof. For α ∈ Y, we define its length l(α) to be the sum of the lengths of α(vi ) for each
vi ∈ Xv . We induct on this length. The base case is l(α) = r, in which case α is a product
of inversions of Fr . From now on, assume l(α) > r.
Let L(w) denote the length of a word w in the right-angled Artin group H, with respect to
the vertex set Γv . Suppose for all εi , εj ∈ {±1} and distinct ai , aj ∈ α(Γv ) we have
L(ai^{εi} aj^{εj}) > L(ai ) + L(aj ) − 2(⌊L(ai )/2⌋ + 1),
(4)
where ⌊x⌋ is the integer part of x ∈ [0, ∞). Conceptually, we are assuming that for every
expression ai^{εi} aj^{εj}, whatever cancellation occurs between the words ai^{εi} and aj^{εj}, more than
half of ai^{εi} and more than half of aj^{εj} survives after all cancellation is complete.
Fix vi ∈ Xv so that ai := α(vi ) satisfies L(ai ) > 1. Such a vertex vi must exist, as we are
assuming that l(α) > r. Notice that since L(ai ) > 1, we have vi ≠ ai^{±1} . Now, any reduced
word in H of length m with respect to the generating set α(Γv ) has length at least m with
respect to the vertex generators Γv , due to our cancellation assumption. Since vi ≠ ai^{±1} ,
the generator vi must have length strictly greater than 1 with respect to α(Γv ), and so vi
must have length strictly greater than 1 with respect to Γv . But vi is an element of Γv ,
which is a contradiction. Therefore, the above inequality (4) fails at least once.
We now argue each case separately. Let ai , aj ∈ α(Γv ) be distinct and write
ai = α(vi ) = wi vi^{ηi} wi^{rev}
and aj = α(vj ) = wj vj^{ηj} wj^{rev} ,
where vi , vj ∈ Γv , wi , wj ∈ H and ηi , ηj ∈ {±1}. Suppose the inequality (4) fails for this
pair when εi = εj = 1. Then it must be the case that wj = (wi^{rev} )^{−1} vi^{−ηi} z, for some z ∈ H,
since H is a free product. In this case, replacing α with αP (vj , vi ) = αPji decreases the
length of the automorphism. We reduce the length of α in the remaining cases as follows:
• For εi = εj = −1, replace α with αιj P (vj , vi )^{−1} = αιj Pji^{−1} .
• For εi = −1 and εj = 1, or vice versa, replace α with αιj P (vj , vi ) = αιj Pji .
By induction, we have thus established the proposed generating set for the group Y.
We now prove Theorem 3.7, obtaining a finite generating set for the group PΠAΓ .
Proof of Theorem 3.7. Let S denote the set of inversions and dominated elementary palindromic automorphisms in PΠAΓ . By Lemma 3.9, all we need do is write any simple
α ∈ PΠAΓ as a product of members of S ±1 .
Let v be a vertex of maximal rank that is not fixed by α. Define Γv , its partition, and the
free product it generates using the same notation as in the discussion before the statement
of Lemma 3.10. By maximality of the rank of v, any vertex of any Γi must be fixed by α
(since it has rank higher than that of v). By Lemma 5.5 of Laurence and its corollary [16],
we conclude that (for this v we have chosen), α(H) = H.
This establishes that α restricted to H lies in the group Y ≤ Aut(H), for which Lemma 3.10
gives a generating set. Thus we are able to precompose α with the appropriate members
of S ±1 to obtain a new automorphism α′ that is the identity on H, and which agrees with α
on Γ \ Γv . In particular, α′ fixes v. We now iterate this procedure until all vertices of Γ are
fixed, and have thus proved the theorem.
With Theorem 3.7 established, we are now able to prove our first main result, Theorem A,
and so obtain our finite generating set for ΠAΓ .
Proof of Theorem A. By Corollary 3.5, we have that ΠAΓ splits as
ΠAΓ ≅ PΠAΓ ⋊ DΓ ,
and so to generate ΠAΓ , it suffices to combine the generating set for PΠAΓ given by Theorem 3.7 with the diagram automorphisms of AΓ . Thus the group ΠAΓ is generated by
the set of all diagram automorphisms, inversions and well-defined dominated elementary
palindromic automorphisms.
We end this section by remarking that the proof techniques we used in establishing Theorem A allow us to obtain finite generating sets for a more general class of palindromic
automorphism groups of AΓ . Having chosen an indexing v1 , . . . , vn of the vertex set V of Γ,
denote by ΠAΓ (k) the subgroup of ΠAΓ that fixes each of the vertices v1 , . . . , vk . Note that
a reindexing of V will, in general, produce non-isomorphic stabiliser groups. We are able
to show that each ΠAΓ (k) is generated by its intersection with the finite set S.
Theorem 3.11. The stabiliser subgroup ΠAΓ (k) is generated by the set of diagram automorphisms, inversions and dominated elementary palindromic automorphisms that fix each
of v1 , . . . , vk .
Throughout the proof of Theorem 3.7, each time that we precomposed some α ∈ PΠAΓ
by an inversion ιi , an elementary palindromic automorphism Pij , or its inverse Pij^{−1} , it was
because the generator vi was not fixed by α. If vj ∈ V was already fixed by α, we had no
need to use ιj or any of the Pjk^{±1} (j ≠ k) in this way. (That this claim holds in the
second-last paragraph of the proof of Lemma 3.9, where we are working in the group Λk [2], follows
from [9, Lemma 3.5].) The same is true when we extend PΠAΓ to ΠAΓ using diagram
automorphisms, in the proof of Theorem A. Thus by following the same method as in our
proof of Theorem A, we are also able to obtain the more general result, Theorem 3.11: our
approach had already written α ∈ ΠAΓ (k) as a product of the generators proposed in the
statement of Theorem 3.11.
4 The palindromic Torelli group
Recall that we defined the palindromic Torelli group PI Γ to consist of the palindromic
automorphisms of AΓ that act trivially on H1 (AΓ , Z). Our main goal in this section is to
prove Theorem B, which gives a generating set for PI Γ . For this, in Section 4.1 we obtain a
finite presentation for the image in GL(n, Z) of the pure palindromic automorphism group.
Using the relators from this presentation, we then prove Theorem B in Section 4.2.
4.1 Presenting the image in GL(n, Z) of the pure palindromic automorphism group
In this section we prove Theorem 4.2, which establishes a finite presentation for the image
of the pure palindromic automorphism group PΠAΓ in GL(n, Z), under the canonical map
induced by abelianising AΓ . Corollary 4.3 then gives a splitting of PΠAΓ .
Recall that Λn [2] denotes the principal level 2 congruence subgroup of GL(n, Z). We start
by recalling a finite presentation for Λn [2] due to the first author. For 1 ≤ i 6= j ≤ n, let
Sij ∈ Λn [2] be the matrix that has 1s on the diagonal and 2 in the (i, j) position, with 0s
elsewhere, and let Zi ∈ Λn [2] differ from the identity matrix only in having −1 in the (i, i)
position. Theorem 4.1 gives a finite presentation for Λn [2] in terms of these matrices.
Theorem 4.1 (Fullarton [9]). The principal level 2 congruence group Λn [2] is generated by
{Sij , Zi | 1 ≤ i 6= j ≤ n},
subject to the defining relators
1. Zi^2
2. [Zi , Zj ]
3. (Zi Sij )^2
4. (Zj Sij )^2
5. [Zi , Sjk ]
6. [Ski , Skj ]
7. [Sij , Skl ]
8. [Sji , Ski ]
9. [Skj , Sji ] Ski^{−2}
10. (Sij Sik^{−1} Ski Sji Sjk Skj^{−1} )^2
where 1 ≤ i, j, k, l ≤ n are pairwise distinct.
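These relators can be checked numerically. The following Python sketch (ours, not part of the paper) instantiates the matrices Sij and Zi in GL(4, Z) with exact integer arithmetic and verifies that every relator of Theorem 4.1 evaluates to the identity.

```python
import numpy as np

n = 4
I = np.eye(n, dtype=int)

def S(i, j, e=1):
    # S_ij^e: identity with 2e in position (i, j); e = -1 gives the inverse.
    M = I.copy(); M[i, j] = 2 * e; return M

def Z(i):
    # Z_i: identity with -1 in position (i, i); note Z_i is its own inverse.
    M = I.copy(); M[i, i] = -1; return M

def comm(A, Ainv, B, Binv):
    # Commutator [A, B] = A B A^{-1} B^{-1}, with inverses supplied exactly.
    return A @ B @ Ainv @ Binv

def sq(M):
    return M @ M

i, j, k, l = 0, 1, 2, 3  # pairwise distinct indices
relators = [
    sq(Z(i)),                                          # 1. Zi^2
    comm(Z(i), Z(i), Z(j), Z(j)),                      # 2. [Zi, Zj]
    sq(Z(i) @ S(i, j)),                                # 3. (Zi Sij)^2
    sq(Z(j) @ S(i, j)),                                # 4. (Zj Sij)^2
    comm(Z(i), Z(i), S(j, k), S(j, k, -1)),            # 5. [Zi, Sjk]
    comm(S(k, i), S(k, i, -1), S(k, j), S(k, j, -1)),  # 6. [Ski, Skj]
    comm(S(i, j), S(i, j, -1), S(k, l), S(k, l, -1)),  # 7. [Sij, Skl]
    comm(S(j, i), S(j, i, -1), S(k, i), S(k, i, -1)),  # 8. [Sji, Ski]
    comm(S(k, j), S(k, j, -1), S(j, i), S(j, i, -1))
        @ S(k, i, -1) @ S(k, i, -1),                   # 9. [Skj, Sji] Ski^{-2}
    sq(S(i, j) @ S(i, k, -1) @ S(k, i)
        @ S(j, i) @ S(j, k) @ S(k, j, -1)),            # 10.
]
assert all(np.array_equal(R, I) for R in relators)
```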
We will use this presentation of Λn [2] to obtain a finite presentation of the image of PΠAΓ
in GL(n, Z). Observe that ιj ↦ Zj and Pij ↦ Sji (vi ≤ vj ) under the canonical map
Φ : Aut(AΓ ) → GL(n, Z). Let RΓ be the set of words obtained by taking all the relators in
Theorem 4.1 and removing those that include a letter Sji with vi 6≤ vj .
Theorem 4.2. The image of PΠAΓ in GL(n, Z) is a subgroup of Λn [2], with finite presentation
⟨{Zk , Sji : 1 ≤ k ≤ n, vi ≤ vj } | RΓ ⟩ .
Proof. By Theorem 3.7, we know that PΠAΓ ≤ Aut0 (AΓ ), and so matrices in Θ :=
Φ(PΠAΓ ) ≤ GL(n, Z) may be written in the lower-triangular block decomposition discussed in Section 2.4. Moreover, the matrix in a diagonal block of rank k in some A ∈ Θ
must lie in Λk [2].
We now use this block decomposition to obtain the presentation of Θ in the statement of
the theorem. Observe that we have a forgetful map F defined on Θ, where we forget the
first k := |[v1 ]| rows and columns of each matrix. This is a well-defined homomorphism,
since the determinant of a lower block-triangular matrix is the product of the determinants
of its diagonal blocks. Let Q denote the image of this forgetful map, and K its kernel. We
have K = Λk [2] × Z^t , where t is the number of dominated transvections that are forgotten
under the map F, and the Λk [2] factor is generated by the images of the inversions and
dominated elementary palindromic automorphisms that preserve the subgroup h[v1 ]i.
The group Θ splits as K ⋊ Q, with the relations corresponding to the semi-direct product
action, and those in the obvious presentation of K, all lying in RΓ . Now, we may define a
similar forgetful map on the matrix group Q, so by induction Θ is an iterated semi-direct
product, with a complete set of relations given by RΓ .
Using the above presentation, we are able to obtain the following corollary, regarding a
splitting of the group PΠAΓ . Recall that IΓ is the subgroup of Aut(AΓ ) generated by inversions. We denote by EΠAΓ the subgroup of PΠAΓ generated by all dominated elementary
palindromic automorphisms.
Corollary 4.3. The group PΠAΓ splits as EΠAΓ ⋊ IΓ .
Proof. The group PΠAΓ is generated by EΠAΓ and IΓ by Theorem 3.7, and IΓ normalises EΠAΓ . We now establish that EΠAΓ ∩ IΓ is trivial. Suppose α ∈ EΠAΓ ∩ IΓ .
By Theorem 4.2, the image of α under the canonical map Φ : Aut(AΓ ) → GL(n, Z) lies in
the principal level 2 congruence group Λn [2]. This implies that Φ(α) is trivial, since Λn [2]
is itself a semi-direct product of groups containing the images of the groups EΠAΓ and IΓ ,
respectively: this is verified by examining the presentation of Λn [2] given in Theorem 4.1.
So the automorphism α must lie in the palindromic Torelli group PI Γ , which has trivial
intersection with IΓ , and hence α is trivial.
4.2 A generating set for the palindromic Torelli group
Using the relators in the presentation given by Theorem 4.1, we are now able to obtain an
explicit generating set for the palindromic Torelli group PI Γ , and so prove Theorem B.
Recall that when AΓ is a free group, the elementary palindromic automorphism Pij is well-defined for every distinct i and j. The first author defined doubled commutator transvections
and separating π-twists in Aut(Fn ) (n ≥ 3) to be conjugates in ΠAn of, respectively, the automorphisms [P12 , P13 ] and (P23 P13^{−1} P31 P32 P12 P21^{−1} )^2 . The latter of these two may seem
cumbersome; we refer to [9, Section 2] for a simple, geometric interpretation of separating
π-twists.
The definitions of these generators extend easily to the general right-angled Artin groups
setting, as follows. Suppose vi ∈ V is dominated by vj and by vk , for distinct i, j and k.
Then
χ1 (i, j, k) := [Pij , Pik ] ∈ Aut(AΓ )
is well-defined, and we define a doubled commutator transvection in Aut(AΓ ) to be a conjugate in ΠAΓ of any well-defined χ1 (i, j, k). Similarly, suppose [vi ] = [vj ] = [vk ] for distinct
i, j and k. Then
χ2 (i, j, k) := (Pjk Pik^{−1} Pki Pkj Pij Pji^{−1} )^2 ∈ Aut(AΓ )
is well-defined, and we define a separating π-twist in Aut(AΓ ) to be a conjugate in ΠAΓ of
any well-defined χ2 (i, j, k).
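As a quick sanity check (ours, not part of the paper), one can confirm numerically that χ1 and χ2 act trivially on H1 (AΓ , Z): computing their images under Φ via Pab ↦ Sba from Section 4.1 yields identity matrices, so both automorphisms do define elements of the palindromic Torelli group.

```python
import numpy as np

n = 3
I = np.eye(n, dtype=int)

def P(a, b, e=1):
    # Image of P_ab^e under Phi: the matrix S_ba^e = I + 2e*E_ba.
    M = I.copy(); M[b, a] = 2 * e; return M

i, j, k = 0, 1, 2
# Phi(chi_1) = Phi([P_ij, P_ik]) = [S_ji, S_ki].
chi1 = P(i, j) @ P(i, k) @ P(i, j, -1) @ P(i, k, -1)
# Phi(chi_2): the image of (P_jk P_ik^{-1} P_ki P_kj P_ij P_ji^{-1})^2.
chi2_word = (P(j, k) @ P(i, k, -1) @ P(k, i)
             @ P(k, j) @ P(i, j) @ P(j, i, -1))
chi2 = chi2_word @ chi2_word

assert np.array_equal(chi1, I)
assert np.array_equal(chi2, I)
```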
We now prove Theorem B, showing that PI Γ is generated by these two types of automorphisms.
Proof of Theorem B. Recall that Θ := Φ(PΠAΓ ) ≤ GL(n, Z). The images in Θ of our
generating set for PΠAΓ (Theorem 3.7) form the generators in the presentation for Θ
given in Theorem 4.2. Thus using a standard argument (see, for example, the proof of
[17, Theorem 2.1]), we are able to take the obvious lifts of the relators of Θ as a normal
generating set of PI Γ in PΠAΓ , via the short exact sequence
1 −→ PI Γ −→ PΠAΓ −→ Θ −→ 1.
The only such lifts and their conjugates that are not trivial in PΠAΓ are the ones of the
form stated in the theorem.
References
[1] Ian Agol. The virtual Haken conjecture. Doc. Math., 18:1045–1087, 2013. With an
appendix by Agol, Daniel Groves, and Jason Manning.
[2] Tara Brendle, Dan Margalit, and Andrew Putman. Generators for the hyperelliptic
Torelli group and the kernel of the Burau representation at t = −1. Invent. Math.,
200(1):263–310, 2015.
[3] Ruth Charney. An introduction to right-angled Artin groups. Geom. Dedicata, 125:141–
158, 2007.
[4] Ruth Charney, Nathaniel Stambaugh, and Karen Vogtmann. Outer space for right-angled Artin groups I. Pre-print, arXiv:1212.4791.
[5] Donald J. Collins. Palindromic automorphisms of free groups. In Combinatorial and
geometric group theory (Edinburgh, 1993), volume 204 of London Math. Soc. Lecture
Note Ser., pages 63–72. Cambridge Univ. Press, Cambridge, 1995.
[6] Matthew Day and Andrew Putman. The complex of partial bases for Fn and finite
generation of the Torelli subgroup of Aut(Fn ). Geom. Dedicata, 164:139–153, 2013.
[7] Matthew B. Day. Peak reduction and finite presentations for automorphism groups of
right-angled Artin groups. Geom. Topol., 13(2):817–855, 2009.
[8] Matthew B. Day. Symplectic structures on right-angled Artin groups: between the
mapping class group and the symplectic group. Geom. Topol., 13(2):857–899, 2009.
[9] Neil J. Fullarton. A generating set for the palindromic Torelli group. To appear in
Algebr. Geom. Topol.
[10] Henry H. Glover and Craig A. Jensen. Geometry for palindromic automorphism groups
of free groups. Comment. Math. Helv., 75(4):644–667, 2000.
[11] E.R. Green. Graph products of groups. PhD thesis, University of Leeds, 1990.
[12] Richard Hain. Finiteness and Torelli spaces. In Problems on mapping class groups
and related topics, volume 74 of Proc. Sympos. Pure Math., pages 57–70. Amer. Math.
Soc., Providence, RI, 2006.
[13] Susan Hermiller and John Meier. Algorithms and geometry for graph products of
groups. J. Algebra, 171(1):230–257, 1995.
[14] Craig Jensen, Jon McCammond, and John Meier. The Euler characteristic of
the Whitehead automorphism group of a free product. Trans. Amer. Math. Soc.,
359(6):2577–2595, 2007.
[15] Ki Hang Kim, L. Makar-Limanov, Joseph Neggers, and Fred W. Roush. Graph algebras. J. Algebra, 64(1):46–51, 1980.
[16] Michael R. Laurence. A generating set for the automorphism group of a graph group.
J. London Math. Soc. (2), 52(2):318–334, 1995.
[17] Wilhelm Magnus, Abraham Karrass, and Donald Solitar. Combinatorial group theory.
Dover Publications, Inc., New York, revised edition, 1976. Presentations of groups in
terms of generators and relations.
[18] Herman Servatius. Automorphisms of graph groups. J. Algebra, 126(1):34–60, 1989.
[19] Richard D. Wade. Johnson homomorphisms and actions of higher-rank lattices on
right-angled Artin groups. J. Lond. Math. Soc. (2), 88(3):860–882, 2013.
An optimal hierarchical clustering approach to
segmentation of mobile LiDAR point clouds
Sheng Xu, Ruisheng Wang, Han Zheng
Department of Geomatics Engineering, University of Calgary, Alberta, Canada
arXiv:1703.02150v2 [] 9 Nov 2017
Abstract
This paper proposes a hierarchical clustering approach for the segmentation of mobile LiDAR point clouds.
We perform the hierarchical clustering on unorganized point clouds based on a proximity matrix. The dissimilarity
measure in the proximity matrix is calculated by the Euclidean distances between clusters and the difference of
normal vectors at given points. The main contribution of this paper is that we optimize the combination of clusters in the hierarchical clustering: the combination is represented as a matching in a bipartite graph and optimized by solving the minimum-cost perfect matching problem. Results show that the proposed optimal hierarchical clustering (OHC) segments multiple individual objects automatically and outperforms the state-of-the-art LiDAR point cloud segmentation approaches.
Key words: Segmentation, MLS, Hierarchical clustering, Bipartite graph.
I. INTRODUCTION
Segmentation is the process of partitioning input data into multiple individual objects. In general, an
individual object is defined as a region with uniform attribute. In point cloud segmentation, the results
can be divided into three levels, namely the supervoxel level [1], primitive level (e.g. line [2], plane [3]
and cylinder [4]), and object level [5]–[8].
Because point clouds are noisy, uneven and unorganized, segmentation accuracy is often far from satisfactory, especially in the segmentation of multiple overlapping objects. This paper proposes a hierarchical clustering approach for the object-level segmentation of mobile LiDAR point clouds. To obtain the optimal combination of clusters, we represent the combination as a matching in a graph and achieve the optimal solution by solving the minimum-cost perfect matching in a bipartite graph.
This paper is organized as follows. Section II reviews merits and demerits of the related segmentation
methods. Section III gives an overview of the proposed optimal hierarchical clustering approach. Section IV describes the proximity matrix that stores the dissimilarity of clusters. Section V presents details of the clustering optimization. Section VI shows experiments to evaluate the performance of the proposed
algorithm. The conclusions are outlined in Section VII.
II. RELATED WORK
Segmentation of point clouds collected by laser sensors, with their accurate 3D information, plays an important role in environmental analysis, 3D modeling and object tracking. In the following, related methods proposed for the segmentation of point clouds are reviewed and analyzed.
Douillard et al. [6] propose a Cluster-All method for the segmentation of dense point clouds and achieve a good trade-off between simplicity, accuracy and computation time. Cluster-All is based on the k-nearest neighbors approach. The principle of KNNiPC (k-nearest neighbors in point clouds) is to select, for a given point, a number of points at the nearest Euclidean distance and assign them the same group index. To decrease the under-segmentation rate, the variance and mean of the Euclidean distances of the points within a group are restricted to be less than a preset threshold. A similar approach appears in the work of Klasing et al. [8], who present a radially nearest neighbor strategy to bound the region of neighbor points, which enhances the robustness of the neighbor selection. KNNiPC works well in different terrain areas and does not require any prior knowledge of the location of objects. The problem is that the results of KNNiPC depend heavily on the selection of a “good value” for k.
In the segmentation of point clouds, the main challenge is to separate two overlapping objects. Yu et al. [5] propose an algorithm for segmenting street light poles from mobile point clouds. To separate a cluster containing more than one object, they add elevation information to extend the 2D normalized-cut method [9]. Their 3DNCut (3D normalized-cut) partitions input points into two disjoint groups by minimizing the similarity within each group and maximizing the dissimilarity between groups. 3DNCut obtains an optimal solution to the binary segmentation problem, which makes it a promising method for mobile LiDAR point cloud segmentation. The shortcoming is that the number of objects has to be preset manually in the multi-label segmentation task. Golovinskiy et al. [7] present a min-cut based method (MinCut) for segmenting objects from point clouds. MinCut partitions input points into two disjoint groups, i.e. background and foreground, by minimizing an energy function. The solution cut, which separates the input scene into background and foreground, is obtained by the graph cut method [10]. MinCut obtains competitive segmentation results in terms of optimality and accuracy. However, MinCut requires the location of each object in the multi-object segmentation task: to achieve the desired performance, users have to set the center point and radius for every object manually.
Clustering is a well-known technique in data mining. It aims to find borders between groups based on defined measures, such as density or Euclidean distance, so that data in the same group are more similar to each other than to those in other groups. If an “object distance” is used as the dissimilarity measure, the clustering result coincides with the object segmentation of the point cloud. The clustering approach may thus provide a new perspective on the segmentation of individual objects.
In general, clustering methods are divided into partitioning, hierarchical, density-based and grid-based methods, or combinations of these [11]. Both density-based [12], [13] and grid-based methods [14] rely heavily on the assumption that cluster centers have a higher density than their neighbors and that each center is far from points with higher densities. Most density- and grid-based algorithms require users to specify density thresholds for choosing cluster centers. However, the density of LiDAR data depends on the sensor and the beam travel distance. Points within an object may have various densities; thus, density- and grid-based clustering methods are not suitable for multi-object segmentation from LiDAR point clouds.
The classical partitioning method is the k-means approach, which has been used for point cloud segmentation in [15], [16]. The idea of KMiPC (k-means in point clouds) is to partition points into different sets so as to minimize the sum of distances of each point in a cluster to its center. Lavoue et al. [15] use k-means for the segmentation of regions of interest (ROI) from mesh points. The problem is that KMiPC tends to produce redundant segments, as shown in [15]; therefore, a merging process is required to reduce the over-segmentation rate. Moreover, KMiPC requires the number of clusters to be set manually and tends to segment groups evenly, as shown in [16]. Feng et al. [17] propose a plane extraction method based on the agglomerative clustering approach (PEAC), which segments planes from point clouds efficiently. PEAC can be used for individual object segmentation by merging clusters belonging to the same object. The shortcoming is that the input point clouds are required to be well organized.
This paper proposes a new bottom-up hierarchical clustering algorithm for individual object segmentation from LiDAR point clouds. Our agglomerative clustering starts with a number of clusters, each containing only one point. A series of merging operations then combines similar clusters to move up the hierarchy, so that each individual object ends up in the same group at the top of the hierarchy. The proposed algorithm requires neither presetting the number of clusters nor inputting the location of each object, which is significant in the segmentation of multiple objects. The combination of clusters in the hierarchical clustering is optimized by solving the minimum-cost perfect matching of a bipartite graph.
III. METHODOLOGY
This section will show details of our optimal hierarchical clustering (OHC) algorithm for the point cloud
segmentation. First, we show the procedure of the proposed hierarchical clustering to give an overview of
our methodology. Then, we focus on the calculation of the dissimilarity to form the proximity matrix for
optimizing the cluster combination. Finally, we use the matching in a graph to indicate the combination
solution and achieve the optimal combination via solving the minimum-cost perfect matching in a bipartite
graph.
A. The procedure of the proposed OHC
Assume that the input point set is P = {p1 , p2 , ..., pn } and the cluster set is C = {c1 , c2 , ..., ci , ..., cj , ..., ct }, where n is the number of points in P and t is the number of clusters in C. Each cluster ci contains one or more points from P , and ci ∩ cj = ∅ for i ≠ j. The goal of OHC is to optimize the set C so that each ci ∈ C is a cluster of points of an individual object. A brief overview of OHC is as follows.
(1) Start with a cluster set C and each cluster ci consists of a point pi ∈ P ;
(2) Calculate the proximity matrix by measuring the dissimilarity of clusters in C;
(3) Calculate the optimal solution for merging clusters by

    arg min_Ω  Σ_{Mi ∈ Ω} D(Mi)                  (1)

where Mi = {ca , cb } denotes a pair of clusters with ca ∈ C and cb ∈ C, Ω = {M1 , M2 , ..., Mi , ...} is the set of cluster pairs with Mi ∩ Mj = ∅, and D(·) is the dissimilarity of two clusters stored in the proximity matrix;
(4) Combine clusters in each Mi ∈ Ω into one cluster and use those combined clusters to update the
set C;
(5) Repeat (2)-(4) until C converges.
In the initialization, each cluster ci contains only one point from P , t is equal to n, and the size of the proximity matrix is t × t. As similar clusters are combined into one cluster, the value of t decreases during the clustering process. The key steps of the procedure are (2) and (3), which are discussed in the following sections.
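As a rough sketch, the loop (1)-(5) can be written in Python. The `dissimilarity` callback and the `merge_threshold` diagonal value are placeholders for the paper's proximity-matrix entries and the cutoff SM, and the matching step is done by brute force over permutations rather than the Kuhn-Munkres algorithm, so it is only suitable for toy inputs:

```python
import math
from itertools import permutations

def min_link(points, a, b):
    """Single-linkage distance between two clusters of point indices."""
    return min(math.dist(points[i], points[j]) for i in a for j in b)

def ohc(points, dissimilarity, merge_threshold):
    """Sketch of the OHC loop: start from singleton clusters, repeatedly
    merge the cluster pairs chosen by a minimum-cost perfect matching,
    and stop at a fixed point."""
    clusters = [[i] for i in range(len(points))]          # step (1)
    while True:
        t = len(clusters)
        PM = [[merge_threshold] * t for _ in range(t)]    # step (2)
        for i in range(t):
            for j in range(i + 1, t):
                PM[i][j] = PM[j][i] = dissimilarity(points, clusters[i], clusters[j])
        # step (3): brute-force minimum-cost perfect matching (toy sizes only)
        best = min(permutations(range(t)),
                   key=lambda p: sum(PM[i][p[i]] for i in range(t)))
        merged, used = [], set()                          # step (4)
        for i, j in enumerate(best):
            if i in used or j in used:
                continue
            used.update({i, j})
            merged.append(clusters[i] if i == j else clusters[i] + clusters[j])
        merged += [clusters[i] for i in range(t) if i not in used]
        if len(merged) == t:                              # step (5): converged
            return clusters
        clusters = merged
```

With a single-linkage `dissimilarity` and a sensible threshold, two well-separated groups of points end up in two clusters after a few iterations.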
B. The calculation of the proximity matrix
For a set of t clusters, we define a t × t symmetric matrix called the proximity matrix. The (i, j)-th element of the matrix is the dissimilarity measure for the i-th and j-th clusters (i, j = 1, 2, ..., t). In the hierarchical clustering, two clusters with a low dissimilarity are preferred for combination into one cluster.
In our work, the calculation of the dissimilarity contains a distance measure α(pi , pj , ci , cj ) and a direction measure β(pi , pj , ci , cj ). The distance term is formed as

    α(pi , pj , ci , cj ) = d(pi , pj ) / max(m(ci ), m(cj ))        (2)
where d(·, ·) is the Euclidean distance between two points, and ci and cj are two different clusters in the set C. The points pi and pj are obtained by

    (pi , pj ) = arg min_{(p, p′)} { d(p, p′) : p ∈ ci , p′ ∈ cj }        (3)
The density of LiDAR points depends on the scanner sensor and the beam travel distance; therefore, the magnitude of point distances varies between groups. In the calculation of α(pi , pj , ci , cj ), we use m(·), based on a median filter Med and defined by Eq.(4), to normalize the Euclidean distances of points in different groups:

    m(ci ) = Med_{p ∈ ci} { min_{p′ ∈ ci , p′ ≠ p} d(p, p′) }        (4)
For example, if we have a cluster c0 = {p1 , p2 , p3 } and the minimal distance between the point pi ∈ c0
and other points is di , our m(c0 ) is obtained by the median value of {d1 , d2 , d3 }.
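A minimal Python version of the median filter of Eq. (4) and the distance term of Eq. (2) might look as follows; the names `m` and `alpha` are ours, and `math.dist` stands in for d(·, ·):

```python
import math
import statistics

def m(cluster):
    """Median nearest-neighbour distance within a cluster (Eq. 4)."""
    return statistics.median(
        min(math.dist(p, q) for q in cluster if q is not p) for p in cluster
    )

def alpha(pi, pj, ci, cj):
    """Normalised distance term of Eq. (2); (pi, pj) is the closest pair
    between the two clusters, as chosen by Eq. (3)."""
    return math.dist(pi, pj) / max(m(ci), m(cj))
```

For the cluster c0 above, `m` returns exactly the median of {d1, d2, d3}.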
Our direction measure is based on normal vector information. The normal vector at a point is approximated by the normal to its k-neighborhood surface, obtained by performing PCA (Principal Component Analysis) on the neighborhood's covariance matrix [18]. The direction term β(pi , pj , ci , cj ) is formed as
formed as
β (pi , pj , ci , cj ) = 1 − |V(pi ) · V(pj )|
(5)
where V(pi ) and V(pj ) are normal vectors estimated from the k-nearest neighbor points of pi and pj ,
respectively. Points pi and pj are obtained in Eq.(3).
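The PCA-based normal estimate and the direction term of Eq. (5) can be sketched with NumPy; the eigenvector of the neighbourhood covariance matrix with the smallest eigenvalue approximates the surface normal (the function names are ours):

```python
import numpy as np

def normal_vector(neighbors):
    """Unit normal of a k-neighbourhood: the eigenvector of the covariance
    matrix with the smallest eigenvalue (PCA, as in [18])."""
    pts = np.asarray(neighbors, dtype=float)
    cov = np.cov(pts - pts.mean(axis=0), rowvar=False)
    w, v = np.linalg.eigh(cov)     # eigenvalues in ascending order
    return v[:, 0]                 # eigenvector of the smallest eigenvalue

def beta(vi, vj):
    """Direction term of Eq. (5): 1 - |V(pi) . V(pj)|."""
    return 1.0 - abs(float(np.dot(vi, vj)))
```

For a coplanar neighbourhood the estimated normal is perpendicular to the plane, and `beta` vanishes for parallel normals.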
According to prior knowledge, an object's exterior points are likely to be the overlapping points of different objects, while interior points mostly belong to the same object. Thus, the proximity matrix PM is defined as

    PM(ci , cj ) = ((λ−1)/λ) α(pi , pj , ci , cj ) + (1/λ) β(pi , pj , ci , cj ),     if δ_{pi,pj} = 1
                   (1/λ) α(pi , pj , ci , cj ) + ((λ−1)/λ) β(pi , pj , ci , cj ),     if δ_{pi,pj} = 0
                   (1/2) α(pi , pj , ci , cj ) + (1/2) β(pi , pj , ci , cj ),         if δ_{pi,pj} = 0.5        (6)
where λ is a user-defined weight coefficient that balances the distance measure α(pi , pj , ci , cj ) and the direction measure β(pi , pj , ci , cj ), and δ_{pi,pj} indicates the region type of pi and pj . If both pi and pj are interior points, which are assumed to be in the same cluster, δ_{pi,pj} = 1; if both are exterior points, which are assumed to be in different clusters, δ_{pi,pj} = 0; in all other cases, δ_{pi,pj} = 0.5. In our work, the dissimilarity of two clusters is small if they are spatially close or if the normal vectors at the given points are consistent. When both clusters consist of interior points, the dissimilarity is dominated by the distance measure; when both consist of exterior points, the direction measure dominates.
To calculate δ_{pi,pj}, we propose a local 3D convex hull test to mark the exterior and interior points. The test proceeds as follows: (1) initialize all input points as unlabeled; (2) check whether a point is not yet labeled as an interior point; (3) pick its k-nearest neighbor points to construct a local 3D convex hull; (4) label all points inside the hull as interior points; (5) repeat (2)-(4) until all input points are tested; (6) mark the remaining unlabeled points as exterior points.
To implement the testing, we form a local 3D convex hull using four vertices, namely v1 , v2 , v3 and
v4 , as shown in Fig.1(a). The target is to test if a vertex v0 is an interior point or not. As shown in Fig.1,
the vector g0 = (v0 − v1 ) can be represented by the sum of vectors g1 = (v2 − v1 ), g2 = (v3 − v1 ) and
g3 = (v4 − v1 ) as
g0 = u × g1 + v × g2 + w × g3
(7)
Based on Eq.(7), one can conclude that if u ≥ 0, v ≥ 0, w ≥ 0 and u + v + w < 1, then v0 lies inside the 3D convex hull. The test relies on the values of u, v and w, which can be solved efficiently by

    [u, v, w]^T = [g1 , g2 , g3 ]^{-1} · g0
Fig. 1. Illustration of the local 3D convex hull testing. (a) The testing of a vertex v0 . (b) A close-up view of a local 3D convex hull. (c)
Extracted exterior points (red) and interior points (blue).
If v0 lies inside the 3D convex hull, it is labeled as an interior point. Each input point is selected as a vertex and tested in a local convex hull constructed from its k-nearest neighbor points. In building the 3D convex hull, v1 is chosen as the point furthest from v0 ; v2 is the point whose g1 has the largest projection onto the g0 direction (i.e. arg max_{v2} (|g1 | · cos<g1 , g0 >)); v3 is the point furthest from the line through v1 and v2 ; and v4 is the point furthest from the plane through v1 , v2 and v3 . A close-up view of a local 3D convex hull is shown in Fig.1(b), and Fig.1(c) shows the extracted interior points (blue) and exterior points (red) from a test set.
C. The optimization of the clustering
Usually, hierarchical clustering uses a greedy strategy for the combination of clusters: find the two most similar clusters according to the proximity matrix, combine them into one cluster, and repeat until all objects are grouped. The greedy strategy easily gets trapped in local optima. This section aims to optimize the combination of clusters globally.
Denote our graph as G = {Vx , Vy , E}, where the node set Vx = {c1 , c2 , ..., ci , ..., cn } describes the
current clusters in the set C and Vy shows clusters for the combination. In the hierarchical clustering,
any two clusters can be combined into one cluster, therefore, we let Vy = Vx . The edge set E =
{e1,1 , e1,2 , ...., e1,n , e2,1 , ...., e2,n , ..., ei,j , ..., en,1 , ..., en,n } shows connections between the cluster ci ∈ Vx
and cj ∈ Vy . The edge between ci and cj is denoted as ei,j : ci ↔ cj . There is no edge inside Vx or Vy ,
therefore, our graph G can be regarded as a bipartite graph as shown in Fig.2.
In each hierarchy of the hierarchical clustering, a cluster can be combined with no more than one
cluster. In our work, the combination of clusters is indicated by the perfect matching in a bipartite graph.
The matching of a graph is a set of edges without common vertices. The perfect matching means that
every cluster is connected with a different cluster in the matching. The result of the hierarchical clustering
can be determined by a perfect matching as shown in Fig.3. In our perfect-matching-based representation,
the edge ei,j : ci ↔ cj , which is between ci ∈ Vx and cj ∈ Vy under i 6= j, means that the cluster ci and
cj will be combined into one cluster and the edge ei,i : ci ↔ ci means that the cluster ci is not chosen
for the combination in the current hierarchy.
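The merge semantics of a perfect matching (Fig. 3) can be made concrete with a small helper; this is our own illustration, with the matching given as a 0-indexed permutation (cluster index to matched cluster index):

```python
def merges_from_matching(matching):
    """Interpret a perfect matching as the merge operations of one
    hierarchy level: an edge i <-> j with i != j merges c_i and c_j,
    while i <-> i leaves c_i untouched in the current hierarchy."""
    merges, used = [], set()
    for i, j in enumerate(matching):
        if i in used or j in used:
            continue
        used.update({i, j})
        if i != j:
            merges.append((i, j))
    return merges
```

Applied to the matching Φ = {e1,2 , e2,1 , e3,3 , e4,4 , e5,6 , e6,5 } of the toy example (0-indexed: [1, 0, 2, 3, 5, 4]), it yields exactly the two merges c1 with c2 and c5 with c6.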
Fig. 2. The modeled bipartite graph G = {Vx , Vy , E}.
Fig. 3. Clustering results indicated by the perfect matching. (a) The combination solution: {c1 }, {c2 , c3 }, {c3 , c4 }, {c4 , c2 }, {c5 , c6 }. (b)
The combination solution: {c1 , c2 }, {c3 , c4 }, {c5 , c6 }. (c) The combination solution: {c1 }, {c2 }, {c3 }, {c4 }, {c5 }, {c6 }.
The above-mentioned Eq.(1) aims to find the optimal combination which achieves the minimum sum of
the dissimilarity in the clustering. In our work, the optimization is achieved by solving the minimum-cost
perfect matching in a bipartite graph G.
Denote the capacity of a perfect matching as Cost_PM, which gives the total cost of the combination and is calculated as

    Cost_PM = Σ_{e_{i,j} ∈ Φ} PM(ci , cj ),   e_{i,j} : ci ↔ cj        (8)
where Φ is the set of edges in the obtained perfect matching, and PM(ci , cj ) gives the cost of merging the clusters connected in the matching. Our goal is to find the minimal Cost_PM for the optimal combination in the hierarchical clustering. PM(ci , cj ) is obtained from the proximity matrix calculated above. It is worth noting that PM(ci , ci ) is weighted by a user-defined cutoff value SM rather than 0. Details of our optimal hierarchical clustering (OHC) via the minimum-cost perfect matching are shown in Algorithm 1.
In Algorithm 1, steps 1-3 model a bipartite graph G = {Vx , Vy , E}. Edges are initialized in full connection, i.e. each ci ∈ Vx is connected with every cj ∈ Vy . Step 4 calculates the dissimilarity between every two clusters to form the proximity matrix. Step 5 solves the optimization of Eq.(8). Step 6 combines the connected clusters in the obtained perfect matching into one cluster. Step 7 updates Vx based on the achieved optimal combination to move up the hierarchy.
In the task of individual object segmentation, spatially non-adjacent clusters are not expected to be combined; thus, connections are only placed between adjacent clusters. The following is a toy example of the segmentation based on Algorithm 1. As shown in Fig.4(a), the input P is {p1 , p2 , p3 , p4 , p5 , p6 }. Step 1: initialize the node set Vx as {c1 , c2 , c3 , c4 , c5 , c6 }, where c1 = {p1 }, c2 = {p2 }, c3 = {p3 }, c4 = {p4 }, c5 = {p5 } and c6 = {p6 }; Step 2: form the node set Vy the same as Vx ; Step 3: connect spatially adjacent clusters between Vx and Vy to form the edge set E as shown in Fig.4(b); Step 4: calculate the proximity matrix PM as
Algorithm 1 Method of the proposed OHC via the minimum-cost Perfect Matching
Input: The point cloud set: P = {p1 , p2 , p3 , ..., pn }.
Output: The cluster set C = {c1 , c2 , ..., ct }.
Step 1: Initialize Vx = {c1 , c2 , ..., cn } by setting c1 = {p1 }, c2 = {p2 }, ..., cn = {pn };
Repeat
Step 2: Let Vy = Vx ;
Step 3: Connect the cluster ci ∈ Vx with cj ∈ Vy to form the edge ei,j , where 1 ≤ i, j ≤ n;
Step 4: Compute the proximity matrix PM using Eq.(6);
Step 5: Optimize Eq.(8) by solving the minimum-cost perfect matching of G = {Vx , Vy , E} using the
Kuhn-Munkres algorithm [19];
Step 6: Combine every connected clusters in Φ into one cluster;
Step 7: Update Vx using those combined clusters from Step 6;
Until Vx converges;
return C = Vx .
       | SM            D({c1 ,c2 })   D({c1 ,c3 })   D({c1 ,c4 })   D({c1 ,c5 })   D({c1 ,c6 }) |
       | D({c2 ,c1 })  SM             .              .              .              D({c2 ,c6 }) |
       | D({c3 ,c1 })  .              SM             .              .              D({c3 ,c6 }) |
PM =   | D({c4 ,c1 })  .              .              SM             .              D({c4 ,c6 }) |
       | D({c5 ,c1 })  .              .              .              SM             D({c5 ,c6 }) |
       | D({c6 ,c1 })  D({c6 ,c2 })   D({c6 ,c3 })   D({c6 ,c4 })   D({c6 ,c5 })   SM           | 6×6
Step 5: solve the minimum-cost perfect matching of G as shown in Fig.4(c). The edges in the obtained perfect matching are Φ = {e1,2 , e2,1 , e3,3 , e4,4 , e5,6 , e6,5 }; Step 6: combine clusters c1 ∈ Vx and c2 ∈ Vy into c1 , and c5 ∈ Vx and c6 ∈ Vy into c4 . The clusters c3 and c4 are unchanged and renamed c2 and c3 , respectively. Step 7: update Vx as {c1 , c2 , c3 , c4 }. The first iteration is now done. Repeat Steps 2-7 until Vx converges to {c1 , c2 , c3 } as shown in Fig.4(i); Step 8: return Vx as the clustering C. The dendrogram of the above hierarchical clustering is shown in Fig.5.
IV. EXPERIMENTAL RESULTS
A. Evaluation methods
Suppose that the clustering result is C = {c1 , c2 , ..., ci , ..., ct } and the ground truth is C′ = {c′1 , c′2 , ..., c′j , ..., c′t′ }. Each ci or c′j is a cluster of points; there are t clusters in C and t′ clusters in C′. To evaluate our segmentation results, we define the completeness ncom and correctness ncor as in Eq.(9).
    ncom = (1/t) · Σ_{i=1}^{t} ( max_{j=1,...,t′} |ci ∩ c′j | / |ci | )

    ncor = (1/t′) · Σ_{j=1}^{t′} ( max_{i=1,...,t} |c′j ∩ ci | / |c′j | )        (9)
There are two steps in the calculation of ncom . The first step is to take the maximum of |ci ∩ c′j |/|ci |, where i is fixed and j ranges from 1 to t′ ; this gives the completeness of each cluster in C. The second step is to take the mean of the values from the first step. Similar steps are used to calculate ncor . The
completeness ncom is to measure the ratio between the correctly grouped points and the total points in
7
Fig. 4. A toy example of the proposed OHC algorithm. (a) The initial clusters. (b) The modeled bipartite graph based on (a). (c) The
minimum-cost perfect matching of (b). (d) Clusters after the combination based on (c). (e) The modeled bipartite graph based on (d). (f)
The minimum-cost perfect matching of (e). (g) Clusters after the combination based on (f). (h) The modeled bipartite graph based on (g).
(i) The minimum-cost perfect matching of (h).
Fig. 5. The dendrogram of the example clustering.
the result. The correctness ncor is to measure the ratio between the correctly grouped points and the total
points in the ground truth. Both the completeness and correctness range from 0 to 1.
Our completeness and correctness are similar to the purity index [20], an evaluation measure based on the similarity between two clusterings. The difference is that we focus on the ratio of correctly grouped points to total points, a commonly used criterion in segmentation evaluation, as in [21], [22], [23]. A problem with Eq.(9) is that if there is only one cluster in the ground truth C′, ncom is always 1, and if there is only one cluster in the result C, ncor is always 1. To address this, we choose the minimum of ncom and ncor as the segmentation accuracy, i.e. nacc = min(ncom , ncor ), to measure the difference in points between C and C′. To combine the completeness and correctness, the F1 -score, i.e. nF1 = 2 × (ncor · ncom )/(ncor + ncom ), is used; it favors algorithms with a higher sensitivity and challenges those with a higher specificity [24].
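The four criteria can be computed directly from Eq. (9); a sketch with clusters represented as Python sets of point indices (the function name is ours):

```python
def evaluate(C, C_truth):
    """Completeness and correctness of Eq. (9), plus the derived
    accuracy n_acc = min(n_com, n_cor) and the F1-score."""
    n_com = sum(max(len(c & g) for g in C_truth) / len(c) for c in C) / len(C)
    n_cor = sum(max(len(g & c) for c in C) / len(g) for g in C_truth) / len(C_truth)
    n_acc = min(n_com, n_cor)
    n_f1 = 2 * n_cor * n_com / (n_cor + n_com)
    return n_com, n_cor, n_acc, n_f1
```

This also illustrates the degenerate case discussed above: an over-merged result against a single ground-truth cluster has perfect completeness but low correctness, and the accuracy picks up the latter.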
Fig. 6. Performances of different segmentation methods. (a) HouseSet. (b) BushSet. (c) LamppostSet. (d) TreeSet. (e) PowerlinesSet. Row 1:
the segmentation ground truth of each scene. Row 2: performance of KMiPC [16]. Row 3: performance of KNNiPC [6]. Row 4: performance
of 3DNCut [5]. Row 5: performance of MinCut [7]. Row 6: performance of PEAC [17]. Row 7: performance of the proposed OHC.
B. Comparisons of methods
This section evaluates the performance of KMiPC [16], KNNiPC [6], 3DNCut [5], MinCut [7], PEAC [17] and the proposed OHC on five typical scenes. The selected test scenes are shown in Fig.6: the HouseSet (2 labels), a single object; the BushesSet (3 labels), two separated sparse objects; the LamppostSet (4 labels), two connected rigid objects; the TreesSet (3 labels), two connected non-rigid objects; and the PowerlinesSet (7 labels), a complex scene with different objects.
The first row of Fig.6 shows the segmentation ground truth of each scene, obtained by manually segmenting visually independent objects. The second through last rows show the performance of KMiPC, KNNiPC, 3DNCut, MinCut, PEAC and the proposed OHC, respectively. In the visualization, different colors represent distinct segments. KMiPC, KNNiPC and MinCut are implemented with the Point Cloud Library (www.pointclouds.org/), 3DNCut is extended from the normalized-cut method (www.cis.upenn.edu/ jshi/software/), and PEAC is based on the software of Feng et al. (www.merl.com/research/?research=license-request&sw=PEAC).
KMiPC is suitable for segmenting symmetric objects and works well on the BushesSet and TreesSet; however, it fails to split connected objects. KNNiPC relies heavily on density, which causes an object to be clustered into different groups, as shown in the TreesSet. 3DNCut tries to normalize the difference of the points' Euclidean distances in each group and therefore tends to group points evenly, as shown in the HouseSet and TreesSet. MinCut performs well in most cases when the required center point and radius of each object are set properly. PEAC clusters points based on plane information and requires a region-growing process to obtain an individual object.
Fig. 7. Evaluation of different methods. (a) Accuracy nacc . (b) F1 -score nF 1 .
The proposed OHC is not sensitive to the density of points, as shown in the BushesSet. Moreover, it does not require presetting the number of clusters or the location of each object to achieve multi-object segmentation. The attached traffic sign in the LamppostSet is segmented successfully using
the proposed OHC. The segmentation of the overlapping region between two non-rigid objects is rather
difficult due to the rapid change of normal vectors as shown in the TreesSet. The quantitative evaluation
is shown in Fig.7. One can observe that our OHC is more accurate than other methods in most experiment
scenes.
In the implementation of OHC, there are three parameters: k, to select the nearest neighbor points; λ, to balance the weights of α(pi , pj , ci , cj ) and β(pi , pj , ci , cj ); and SM, for calculating the proximity matrix. A large k or SM may cause under-segmentation, while a small k or SM increases the over-segmentation rate. A large λ works well for segmenting connected objects, and a small λ is preferred when there are fewer overlapping objects. Evaluation experiments on the parameters are shown in Fig.8. In our work, λ is 4, k is 40 and SM is 0.4.
To deal with a large-scale scene, we present a framework to increase the efficiency of OHC: 1) remove ground points using the Cloth Simulation Filter (CSF) approach [25]; 2) down-sample the off-ground points into a sparse point set; 3) apply the proposed OHC to the down-sampled data; 4) assign each unlabeled point in the original off-ground point set the same label as its nearest labeled point. An example is shown in Fig.9: Fig.9(a) shows the input point cloud; Fig.9(b) and (c) are the filtered ground and off-ground points, respectively; Fig.9(d) shows the down-sampled points of Fig.9(c); Fig.9(e) shows the performance of OHC on the down-sampled points; and Fig.9(f) shows the final results for the input scene. Fig.10 shows the performance of the proposed OHC on a large-scale residential point set (558 MB, 12,551,837 points) obtained by the RIEGL scanner, and Fig.11 shows our results on a large-scale urban point set (528 MB, 10,870,886 points) collected by the Optech scanner.
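Step 4 of this framework is a nearest-neighbour label transfer; a brute-force sketch (in practice a k-d tree would replace the linear scan):

```python
import math

def propagate_labels(sample_pts, sample_labels, all_pts):
    """Each original off-ground point takes the label of its nearest
    down-sampled (already labelled) point."""
    labels = []
    for q in all_pts:
        nearest = min(range(len(sample_pts)),
                      key=lambda i: math.dist(sample_pts[i], q))
        labels.append(sample_labels[nearest])
    return labels
```

The brute-force scan is O(N·M) for N original and M sampled points, which is why the down-sampling in step 2 matters for large scenes.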
Fig. 8. Parameters setting. k is chosen from {15, 25, 40, 65} and SM is chosen from {0.1, 0.15, 0.25, 0.4, 0.65}. (a) λ = 1. (b) λ = 2.
(c) λ = 3. (d) λ = 4. (e) λ = 5.
Fig. 9. An example of segmenting a large-scale point set. (a) Input scene. (b) Ground points. (c) Off-ground points. (d) Down-sampling
data (10% points of (c)). (e) Performance of the proposed OHC on (d). (f) Segmentation results of (a).
Fig. 10. Results of a large-scale residential point set.
Fig. 11. Results of a large-scale urban point set.
C. Discussion of the proposed OHC
The proposed OHC algorithm works automatically in the multi-object segmentation. We do not need
to preset any parameters manually, e.g. the initial number of clusters or the location of each object.
Our performance is competitive with the state-of-the-art methods. The proposed optimal hierarchical
clustering is general for the segmentation of point clouds, but this generality comes at a cost in speed.
The complexity of OHC relies on the Kuhn-Munkres algorithm which is O(N 3 ). The above-mentioned
experiments were done on a Windows 10 Home 64-bit, Intel Core i7-4790 3.6GHz processor with 16 GB
of RAM and computations were carried on Matlab R2017a. It took us around four hours to achieve the
segmentation of the large-scale residential point set (12,551,837 points) and the urban point set (10,870,886
points).
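The minimum-cost perfect matching at the core of the cluster combination can be illustrated on a toy cost matrix. The brute-force enumeration below is for illustration only; it is exponential, whereas the Kuhn-Munkres algorithm used in OHC solves the same problem in O(N³):

```python
from itertools import permutations

def min_cost_matching(cost):
    """Brute-force minimum-cost perfect matching of a square cost matrix
    (illustration only; Kuhn-Munkres solves this in O(N^3))."""
    n = len(cost)
    best = min(permutations(range(n)),
               key=lambda p: sum(cost[i][p[i]] for i in range(n)))
    return list(best), sum(cost[i][best[i]] for i in range(n))

cost = [[4, 1, 3],
        [2, 0, 5],
        [3, 2, 2]]
print(min_cost_matching(cost))  # -> ([1, 0, 2], 5)
```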
V. C ONCLUSIONS
This paper investigates a new optimal hierarchical clustering method (OHC) for segmenting multiple
individual objects from 3D mobile LiDAR point clouds. The combination of clusters is represented by the
matching of a graph. The optimal combination solution is obtained by solving the minimum-cost perfect
matching of a bipartite graph. The proposed algorithm achieves multi-object segmentation
automatically. We test our OHC on both a residential point set and an urban point set. Experiments show
that the proposed method is effective in different scenes and superior to the state-of-the-art methods in
terms of accuracy.
Future work will focus on adding dissimilarity measures based on the intensity and color information of
points to the calculation of the proximity matrix. In addition, we will try to improve the efficiency of the
combination step by using the supervoxel technique in the clustering.
R EFERENCES
[1] H. Wang, C. Wang, H. Luo, P. Li, Y. Chen, and J. Li, “3-d point cloud object detection based on supervoxel neighborhood with hough
forest framework,” IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, vol. 8, no. 4, pp. 1570–1581,
2015.
[2] S. Xia and R. Wang, “A fast edge extraction method for mobile lidar point clouds,” IEEE Geoscience and Remote Sensing Letters,
vol. 14, no. 8, pp. 1288–1292, 2017.
[3] L. Nan and P. Wonka, “Polyfit: Polygonal surface reconstruction from point clouds,” in Proceedings of the IEEE Conference on
Computer Vision and Pattern Recognition, 2017, pp. 2353–2361.
[4] R. Qiu, Q.-Y. Zhou, and U. Neumann, “Pipe-run extraction and reconstruction from point clouds,” in European Conference on Computer
Vision. Springer, 2014, pp. 17–30.
[5] Y. Yu, J. Li, H. Guan, C. Wang, and J. Yu, “Semiautomated extraction of street light poles from mobile lidar point-clouds,” IEEE
Transactions on Geoscience and Remote Sensing, vol. 53, no. 3, pp. 1374–1386, 2015.
[6] B. Douillard, J. Underwood, N. Kuntz, V. Vlaskine, A. Quadros, P. Morton, and A. Frenkel, “On the segmentation of 3d lidar point
clouds,” in Robotics and Automation (ICRA), 2011 IEEE International Conference on. IEEE, 2011, pp. 2798–2805.
[7] A. Golovinskiy, V. G. Kim, and T. Funkhouser, “Shape-based recognition of 3d point clouds in urban environments,” in Computer
Vision, 2009 IEEE 12th International Conference on. IEEE, 2009, pp. 2154–2161.
[8] K. Klasing, D. Wollherr, and M. Buss, “A clustering method for efficient segmentation of 3d laser data.” in ICRA, 2008, pp. 4043–4048.
[9] J. Shi and J. Malik, “Normalized cuts and image segmentation,” IEEE Transactions on Pattern Analysis and Machine Intelligence,
vol. 22, no. 8, pp. 888–905, 2000.
[10] Y. Boykov and G. Funka-Lea, “Graph cuts and efficient N-D image segmentation,” International Journal of Computer Vision, vol. 70,
no. 2, pp. 109–131, 2006.
[11] J. Han, J. Pei, and M. Kamber, Data mining: concepts and techniques. Elsevier, 2011.
[12] M. Du, S. Ding, and H. Jia, “Study on density peaks clustering based on k-nearest neighbors and principal component analysis,”
Knowledge-Based Systems, vol. 99, pp. 135–145, 2016.
[13] J. Xie, H. Gao, W. Xie, X. Liu, and P. W. Grant, “Robust clustering by detecting density peaks and assigning points based on fuzzy
weighted k-nearest neighbors,” Information Sciences, vol. 354, pp. 19–40, 2016.
[14] Z. Yanchang and S. Junde, “Gdilc: a grid-based density-isoline clustering algorithm,” in Info-tech and Info-net, 2001. Proceedings.
ICII 2001-Beijing. 2001 International Conferences on, vol. 3. IEEE, 2001, pp. 140–145.
[15] G. Lavoué, F. Dupont, and A. Baskurt, “A new cad mesh segmentation method, based on curvature tensor analysis,” Computer-Aided
Design, vol. 37, no. 10, pp. 975–987, 2005.
[16] R. A. Kuçak, E. Özdemir, and S. Erol, “The segmentation of point clouds with k-means and ann (artifical neural network),” ISPRS
- International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences, vol. XLII-1/W1, pp. 595–598,
2017. [Online]. Available: http://www.int-arch-photogramm-remote-sens-spatial-inf-sci.net/XLII-1-W1/595/2017/
[17] C. Feng, Y. Taguchi, and V. R. Kamat, “Fast plane extraction in organized point clouds using agglomerative hierarchical clustering,”
in Robotics and Automation (ICRA), 2014 IEEE International Conference on. IEEE, 2014, pp. 6218–6225.
[18] R. B. Rusu, “Semantic 3d object maps for everyday manipulation in human living environments,” KI-Künstliche Intelligenz, vol. 24,
no. 4, pp. 345–348, 2010.
[19] J. Munkres, “Algorithms for the assignment and transportation problems,” Journal of the Society for Industrial and Applied Mathematics,
vol. 5, no. 1, pp. 32–38, 1957.
[20] D. Manning, “Introduction,” in Introduction to Industrial Minerals. Springer, 1995, pp. 1–16.
[21] B. Yang, L. Fang, and J. Li, “Semi-automated extraction and delineation of 3d roads of street scene from mobile laser scanning point
clouds,” ISPRS Journal of Photogrammetry and Remote Sensing, vol. 79, pp. 80–93, 2013.
[22] A. Boyko and T. Funkhouser, “Extracting roads from dense point clouds in large scale urban environment,” ISPRS Journal of
Photogrammetry and Remote Sensing, vol. 66, no. 6, pp. S2–S12, 2011.
[23] P. Kumar, C. P. McElhinney, P. Lewis, and T. McCarthy, “An automated algorithm for extracting road edges from terrestrial mobile
lidar data,” ISPRS Journal of Photogrammetry and Remote Sensing, vol. 85, pp. 44–55, 2013.
[24] C. Goutte and E. Gaussier, “A probabilistic interpretation of precision, recall and f-score, with implication for evaluation,” in Advances
in information retrieval. Springer, 2005, pp. 345–359.
[25] W. Zhang, J. Qi, P. Wan, H. Wang, D. Xie, X. Wang, and G. Yan, “An easy-to-use airborne lidar data filtering method based on cloth
simulation,” Remote Sensing, vol. 8, no. 6, p. 501, 2016.
arXiv:1409.4812v2 [] 13 Jan 2015
Evaluation of the Spectral Finite Element Method With the Theory of Phononic Crystals
Nicolás Guarı́n-Zapata and Juan Gomez
Departamento de Ingenierı́a Civil
Universidad EAFIT
Medellı́n,
Colombia
[email protected]
January 15, 2015
Abstract
We evaluated the performance of the classical and spectral finite element
method in the simulation of elastodynamic problems. We used as a quality measure their ability to capture the actual dispersive behavior of the material. Four
different materials are studied: a homogeneous non-dispersive material, a bilayer
material, and composite materials consisting of an aluminum matrix and brass
inclusions or voids. To obtain the dispersion properties, spatial periodicity is
assumed, so the analysis is conducted using Floquet-Bloch principles. The effects on the dispersion properties of lumping the mass matrices
resulting from the classical finite element method are also investigated, since lumping
is a common practice when the problem is solved with explicit time-marching
schemes. At high frequencies the predictions of the spectral technique exactly
match the analytical dispersion curves, while those of the classical method do not. This
holds even at equal computational demands. At low frequencies, however,
the results from both the classical (consistent or mass-lumped) and spectral finite
element coincide with the analytically determined curves.
Keywords: spectral finite elements, numerical dispersion, phononic crystals.
1 Introduction
During recent years, and with particular interest stemming from the earthquake
engineering community, the spectral finite element method (SFEM) has emerged as a
powerful computational tool for numerical simulation of large scale wave propagation
problems (Seriani et al., 1995; Faccioli et al., 1997; Komatitsch et al., 2004; Magnoni
et al., 2012; Cupillard et al., 2012; Shani-Kadmiel et al., 2012; Kudela et al., 2007; Zhu
et al., 2011; Davies et al., 2009; Luo et al., 2009; Luo & Liu, 2009a,b; Balyan et al.,
2012). The origins of the currently used SFEM, can be traced back to the spectral
techniques introduced by Orszag (1980), Patera (1984) and Maday & Patera (1989),
initially proposed for fluid dynamics problems and Gazdag (1981), Kosloff & Baysal
(1982) and Kosloff et al. (1990) in elastodynamics. In seismic wave propagation, the
spectral element methods were first introduced by Priolo et al. (1994) and Faccioli et al.
(1996). Later Komatitsch & Vilotte (1998) replaced the polynomials used by Priolo
et al. (1994) by Legendre polynomials and incorporated a Gauss-Lobatto-Legendre
quadrature, leading to the currently known version of the method in wave propagation
problems. In these methods π nodal points per wavelength are necessary to resolve
the wave (compared to 8-10 in the classical FEM) (Ainsworth & Wajid, 2009);
they are also less sensitive to numerical anisotropy and element distortion; exhibit a
smaller conditioning number and lead by construction, to diagonal mass matrices. The
fundamental idea behind the SFEM is the use of higher order Lagrange interpolation at
non-equidistant nodal points, reducing the so-called Runge-phenomenon and producing
exponential convergence rates. If the nodal points are also the quadrature points
corresponding to a Gauss-Lobatto-Legendre integration rule, the resulting scheme yields
diagonal mass matrices, whereby the global equilibrium equations from explicit time
marching schemes become uncoupled. In contrast, one of the drawbacks of the spectral
technique, is the fact that special meshing methodologies must be developed, since in the
SFEM algorithm the nodal points must be placed at certain (non-standard) positions
within the element. It is thus of interest to identify the range of frequencies for which
the spectral technique results truly advantageous with respect to classical displacement
based formulations. Although there are several works dealing with the performance
of the spectral approach (Marfurt, 1984; Dauksher & Emery, 2000; De Basabe & Sen,
2007; Ainsworth, 2005; Seriani & Oliveira, 2008; Ainsworth & Wajid, 2009; Mazzieri &
Rapetti, 2012), these have been mainly restrained to problems with particular boundary
conditions or else, have been conducted using strongly simplified assumptions.
In this article we evaluate the capabilities of the SFEM in the simulation of mechanical wave propagation problems. We compare its performance with the classical finite
element algorithm using as a quality index, numerically obtained dispersion curves of
the propagation material, computed with the aid of the theory of phononic crystals.
This is an objective performance metric, since it qualifies the ability of the method to
describe the material and not a particular wave propagation problem, thus the effects
of geometry and boundary conditions appearing in the solution of a finite model are not
taken into account in this model. For purposes of the evaluation, we implemented classical and spectral finite element algorithms and found numerical dispersion curves for
four different materials, namely a homogeneous material cell, a bilayer material cell, a
porous material cell and a composite material cell. We contrasted the numerical curves
obtained with both methods, with analytic solutions and identified non-dimensional frequency ranges, where the classical and spectral finite element methods deviated from
each other and from the analytic solution. We also evaluated the error, introduced
in the dispersion properties of the material by the artificial diagonalization process
of the finite element mass matrix. This practice is commonly encountered in explicit
time marching schemes implemented with the classical algorithm. From our study we
found that the spectral finite element method yields reliable results at much higher
frequencies than the classical technique even at the same computational costs. At low
frequencies however both methods give the same precision which clearly favours the
classical treatment.
In the first part of the article we briefly describe the implementation of the spectral
finite element method. We then discuss general aspects regarding the numerical formulation of the eigenvalue problem corresponding to the imposition of the Floquet-Bloch
boundary conditions for a spatially periodic material. In the final part of the article we
present results, in terms of dispersion curves for different material cells all of which are
solved numerically and analytically.
2 The Spectral Element Method
The Spectral Finite Element Method (SFEM) is a finite element technique based on high
order Lagrange interpolation (Zienkiewicz & Taylor, 2005) but with a non-equidistant
nodal distribution. The above difference in the spatial sampling, with respect to classical algorithms, seeks to minimize the Runge phenomenon (Pozrikidis, 2005). This
corresponds to the spurious oscillations appearing at the extremes of an interpolation
interval when–in an attempt to perform p-refinement–the number of sampling points is
increased while keeping an equidistant nodal distribution. The problem is illustrated in
Figure 1, where a Runge function is interpolated with 3 different Lagrange polynomials
of order 10, but with a different distribution of the sampling points. We used equidistant nodal points (as in the classical FEM algorithm) and also Lobatto and Chebyshev
nodes (as in the SFEM algorithm).
The Runge phenomenon corresponds to the strong oscillations observed towards
the ends of the interval associated with the equidistant nodal distribution. In contrast,
when the interpolation interval is sampled at the Lobatto and Chebyshev nodes, the
approximation describes closely the Runge function throughout the interval. Since
the method uses high-order interpolation for each element, it in fact corresponds to
a p-refinement version of the classical technique, but with a different spatial sampling
inside the elements. The term spectral refers to the rate of convergence since the method
converges faster than any power of 1/p, where p is the order of the polynomial expansion
(Pozrikidis, 2005).
Figure 1. Lagrange interpolating polynomials of order 10 for the Runge function 1/(1 + 25x²).
There are two nodal distributions commonly used in spectral finite element methods: Chebyshev nodes and Lobatto nodes. The Chebyshev nodes are the roots of the
Chebyshev polynomials of the first kind. The resulting interpolation polynomial not
only minimizes the Runge phenomenon but it also provides the best approximating
polynomial under the maximum norm (Burden & Faires, 2011; Kreyzsig, 1989). The
Chebyshev polynomials of the first kind Tn (x) are defined as the unique polynomial
satisfying
T_n(x) = cos[n arccos x],   for each n ≥ 0 and x ∈ [−1, 1],

and with roots corresponding to

x_k = cos((2k − 1)π / (2n)).
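The effect described above is easy to reproduce numerically. The sketch below interpolates the Runge function of Figure 1 with a degree-10 polynomial at equidistant versus Chebyshev nodes; polynomial fitting is used here as a convenient stand-in for Lagrange interpolation (the two coincide when the number of nodes equals degree + 1):

```python
import numpy as np

runge = lambda x: 1.0 / (1.0 + 25.0 * x**2)
n = 11  # 11 nodes -> a degree-10 interpolant, as in Figure 1

equi = np.linspace(-1.0, 1.0, n)
cheb = np.cos((2.0 * np.arange(1, n + 1) - 1.0) * np.pi / (2.0 * n))  # roots of T_11

xs = np.linspace(-1.0, 1.0, 1001)

def max_interp_error(nodes):
    # fit the unique degree-(n-1) polynomial through the nodes, then
    # measure the worst-case deviation over the interval
    coeffs = np.polyfit(nodes, runge(nodes), n - 1)
    return np.max(np.abs(np.polyval(coeffs, xs) - runge(xs)))

err_equi, err_cheb = max_interp_error(equi), max_interp_error(cheb)
print(f"equidistant: {err_equi:.3f}   Chebyshev: {err_cheb:.3f}")
```

The equidistant error grows past the function's own range near the interval ends (the Runge phenomenon), while the Chebyshev error stays small.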
Similarly, the Lobatto nodes are the roots of the Lobatto polynomials, which are
the first derivative of the Legendre polynomials
Lo_{i−1}(x) = P′_i(x),   x ∈ [−1, 1],

with

P_i(x) = (1 / (2^i i!)) d^(i)(x² − 1)^i / dx^(i),
being the ith Legendre Polynomial (Abramowitz & Stegun, 1965). The Lobatto nodes
are also the Fekete points for the line, the square and the cube. The Fekete points
in an interpolation scheme correspond to the nodal distribution that maximizes the
determinant of the Vandermonde matrix (Bos et al., 2001; Pozrikidis, 2005). If the
nodes (corresponding to the sampling points in the interpolation scheme) are also used
as quadrature points in the computation of volume and surface integrals arising in the
FEM, diagonal mass matrices are obtained due to the discrete orthogonality condition
(Pozrikidis, 2005). This last feature of the SFEM makes it very attractive in explicit
time marching schemes, since this condition uncouples the equilibrium equations. In
the classical displacement-based FEM this lumping process is artificially enforced, while
in the SFEM it is a natural result. The nodal distributions corresponding to the 6 × 6
classical and Lobatto-spectral elements are shown in Figure 2.
(a) Classical 6 × 6
element.
(b)
Lobattospectral 6 × 6
element.
Figure 2. Comparison of a classical and a Lobatto-spectral element.
In this work we use the SFEM to solve the reduced elastodynamic frequency domain
equation. The formulation follows from the minimization of the total potential energy
functional Π = Π(ui ) (Reddy, 1991) leading to the principle of virtual displacements
at the frequency ω given by
∫_Ω σ*_rs(x) δε_rs(x) dΩ − ω² ∫_Ω ρ(x) u*_r(x) δu_r(x) dΩ − ∫_Γ t*_r(x) δu_r(x) dΓ − ∫_Ω f*_r(x) δu_r(x) dΩ = 0   (1)
and where Ω = solution domain, Γ = boundary, σ_rs = stress tensor, t_r = tractions
vector, ε_rs = strain tensor, u_r = displacement vector, f_r = body force vector and x =
field point vector. The superscripts * appearing in (1) refer to complex conjugate variables. This is a requirement of the weak form that produces a consistent inner product
between complex variables and leads to self-adjoint operators and therefore Hermitian
finite element matrices: this last condition guarantees the existence of real eigenvalues.
The discrete finite element equations follow after writing in (1) the displacements (and
related variables) in terms of basis functions or its derivatives as
ui (x) = NiQ (x)UQ
where ui (x) is the displacement vector evaluated at point x, NiQ (x) is the shape function
for the Qth node and UQ is the nodal displacement vector at the Qth node. In our
notation we retain the index i in the shape function just to indicate that a vectorial
variable is being interpolated. The final finite element equations are of the familiar
form
[K − ω²M]{U} − {F} = {0}.   (2)
where [K] and [M] are complex-valued Hermitian stiffness and mass matrices and F
is the nodal force vector comprising boundary tractions and body forces. In 2D the
shape functions for the Q-th node can be computed as the product of independent
Lagrange polynomials like
N_iQ(x, y) = ℓ_Qx(x) ℓ_Qy(y).   (3)
Similarly, the derivatives of the shape functions are
∂N_iQ(x, y)/∂x = ℓ′_Qx(x) ℓ_Qy(y),   ∂N_iQ(x, y)/∂y = ℓ_Qx(x) ℓ′_Qy(y),   (4)
with
ℓ_Q(x) = ∏_{P=1, P≠Q}^{M+1} (x − x_P)/(x_Q − x_P),   (5)

ℓ′_Q(x) = ∑_{P=1, P≠Q}^{M+1} (1/(x_Q − x_P)) ∏_{R=1, R≠P,Q}^{M+1} (x − x_R)/(x_Q − x_R),   (6)
where M is the order of the Lagrange polynomial.
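Formulas (5)-(6) translate directly into code. Below is a minimal 1D sketch; the node set and evaluation points are illustrative:

```python
import numpy as np

def lagrange(xq, nodes, x):
    """1D Lagrange basis l_Q(x) attached to node xq, as in eq. (5)."""
    others = [xp for xp in nodes if xp != xq]
    return np.prod([(x - xp) / (xq - xp) for xp in others])

def dlagrange(xq, nodes, x):
    """Derivative l'_Q(x), as in eq. (6): a sum over P != Q of products
    over the remaining nodes R != P, Q."""
    others = [xp for xp in nodes if xp != xq]
    s = 0.0
    for xp in others:
        rest = [xr for xr in others if xr != xp]
        s += np.prod([(x - xr) / (xq - xr) for xr in rest]) / (xq - xp)
    return s

nodes = [-1.0, 0.0, 1.0]
print(lagrange(0.0, nodes, 0.0))    # Kronecker property: 1 at its own node
print(dlagrange(0.0, nodes, 0.5))   # basis for node 0 is 1 - x^2, slope -1 at 0.5
```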
If the integrals in (1) are numerically integrated using the Gauss-Lobatto nodes,
which at the same time are the element nodal points, a diagonal mass matrix is obtained.
Although this is advantageous in the case of an explicit time marching scheme, only
polynomials of order 2N − 3 (Abramowitz & Stegun, 1965; Pozrikidis, 2005) or smaller
are integrated exactly. This accuracy should be contrasted with the one in the standard
Gauss-Legendre quadrature, that integrates exactly polynomials of order 2N − 1 or
smaller (Abramowitz & Stegun, 1965; Bathe, 1995). This reduction in the integration
accuracy is due to the inclusion of nodes in the extremes of the interval. Since the
order of exact integration decreases in the Gauss-Lobatto case, that selection is useful
for interpolations with order higher than N = 3, because the nodal distribution in both
methods is the same for N = 1, 2. For the Gauss-Lobatto quadrature the nodes are the
zeros of the completed Lobatto polynomials of order N
(1 − x²) L′_N(x) = 0
with weights
w_1 = w_{N+1} = 2/(N(N + 1)),

w_i = (2/(N(N + 1))) · 1/[L_N(x_i)]²   for i = 2, 3, · · · , N.
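These nodes and weights are easy to generate with numpy's Legendre helpers; the sketch below is an illustration of the rule (not the paper's implementation) and uses the Legendre polynomial L_N in the weight formula above:

```python
import numpy as np
from numpy.polynomial import legendre as leg

def gauss_lobatto(n):
    """The n+1 Gauss-Lobatto-Legendre points: the endpoints plus the
    roots of L'_n, with weights from the formula above."""
    cn = np.zeros(n + 1)
    cn[-1] = 1.0                     # coefficient vector of L_n
    x = np.concatenate(([-1.0], leg.legroots(leg.legder(cn)), [1.0]))
    w = 2.0 / (n * (n + 1) * leg.legval(x, cn) ** 2)
    return x, w

x, w = gauss_lobatto(4)              # 5-point rule, exact up to degree 2*5 - 3 = 7
print("sum of weights:", w.sum())                 # integral of 1 over [-1, 1] is 2
print("integral of x^6:", np.sum(w * x**6))       # exact value is 2/7
print("integral of x^8:", np.sum(w * x**8))       # degree 8 > 7: no longer exact
```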
3 Phononic Crystals
Phononic crystals may be synthetic or natural materials, formed by the spatially periodic repetition of a microstructural feature. Such materials consist of a number of
identical structural components joined together in an identical fashion into an elementary cell that repeats periodically throughout space, forming the complete material and
described by a lattice construction (Mead, 1996). When subjected to waves propagating
at high enough frequencies, the microstructures behave as micro-scatterers, translating into dispersive propagation behaviour at the macroscopic level and sometimes even
leading to states of imaginary group velocity, i.e., ranges of frequency for which waves do
not propagate, known as band gaps. At even higher frequencies, additional propagation
modes may be triggered once a certain cut-off value of the frequency is reached. The
relation between the frequencies at which propagation is possible and the material periodic microstructure–described in terms of the wave vector–can be elegantly described
in terms of the dispersion curve.
In the last few years a strong effort has been devoted to the study of propagation of waves in periodic materials. In particular, methods to determine–via numerical
techniques–the dispersion curves for a specific material, have emerged from earlier work
in the field of solid-state physics, Brillouin (1953). The dispersion curve can be obtained
through the analysis of a single cell, after having identified the microstructure that repeats periodically. Within that context, the unit cell is described by lattice base vectors
indicating the directions that must be followed, in order to cover the complete space
with the application of successive translation operations. As described in Srikantha
et al. (2006), the joints of any lattice structure can be envisioned as a collection of
points (lattice points). These, at the same time are associated with a set of basis vectors ai . The entire direct lattice is then obtained after tessellating the elemental cell
along the basis vectors ai . In this way the location of any point in the periodic material
can be identified with ni integers, defining the number of translations of the elemental
cell along the ai directions of the basis vectors.
With the aid of the fundamental tool provided by Bloch’s theorem (Brillouin, 1953)
and extracted from the theory of solid-state physics, the dispersive properties of the
material can be obtained via finite element analysis of the elementary cell. That theorem
states that the change in wave amplitude occurring from cell to cell does not depend
on the wave location within the periodic system (Ruzzene et al., 2003). The final
statement corresponding to Bloch’s theorem, is written in terms of phase shifts between
displacements and tractions on opposite surfaces of the elemental cell like
u_i(x + a) = u_i(x) e^{ik·a},   t_i(x + a) = −t_i(x) e^{ik·a}   (7)
and where k is the wave vector and a = a1 n1 + a2 n2 + a3 n3 is the lattice translation
vector, formed after scaling the basis vectors by the translation integers ni . The factor
eik·a corresponds to the phase shift between opposite sides of the cell. The real and
imaginary components of the wave vector differ by this factor, while the magnitude
remains the same.
3.1 Bloch-periodicity in discrete methods
The formulation of the finite element equations for the determination of the dispersion
properties of an elemental cell requires the assembly of the discrete equations for the
complete cell, the subsequent reduction of the equations after imposition of the Bloch-periodic boundary conditions stated in (7), and the solution of a series of generalized
eigenvalue problems to find the dispersion curve. The eigenvalue problem, or numerical
dispersion relation, can be written as
D̂(k, ω)U = 0   (8)
where k is the wave vector which is progressively assigned successive values, in such a
way that the first Brillouin zone is fully covered. Each application of the wave vector
and the solution of the related eigenvalue problem yields a duple (k, ω) representing a
plane wave propagating at frequency ω.
The basic steps are further elaborated with reference to Figure 3, describing a unit
cell and where 2da and 2db are respectively the cell width and cell height. The subindices labelled as 9 (or i) are related to the internal degrees of freedom, while those
labelled 1 − 4 ( or alternatively l, r, b, t) are used for degrees of freedom over the boundary and 5 − 8 (or lb, rb, lt, rt) correspond to those over the corners of the unit cell
(Srikantha et al., 2006). In the initial step we assembly the complete discrete finite element equations for the unit cell using the following block vectors of nodal displacements
and forces
U = [ul ur ub ut ulb urb ult urt ui ]T ≡ [u1 u2 u3 u4 u5 u6 u7 u8 u9 ]T ,
F = [fl fr fb ft flb frb flt frt fi ]T ≡ [f1 f2 f3 f4 f5 f6 f7 f8 f9 ]T .
(a) Original cell. (b) Reduced cell. (c) The black circles now correspond to single degrees of freedom. Notice that the nodal numbering does not match the original numbering for groups of degrees of freedom.
Figure 3. Relevant sets of DOF for a schematic unitary cell discretized with several
finite elements. Each circle represents a family of degrees of freedom for a typical
mesh and not a single degree of freedom. In part a) we show all the degrees of
freedom for the unitary cell before imposing the relevant Bloch-periodic boundary
conditions. Part b) shows the reduced cell where the white circles enclosed by the
dark square represent the reference nodes containing the information from the image
nodes which will be eventually deleted from the system. Part c) shows an example
of node grouping for a mesh of 4 × 4 elements.
In the FEM equations the Bloch conditions take the form of constraint relationships
among the degrees of freedom in opposite boundaries of the cell. This allows reduction
of the system of equations via elimination of the subset of image nodes and leaving only
the subset of reference nodes. These constraints are written like
u_2 = e^{iψ_x} u_1,   u_4 = e^{iψ_y} u_3,   u_6 = e^{iψ_x} u_5,   u_7 = e^{iψ_y} u_5,   u_8 = e^{i(ψ_x+ψ_y)} u_5,
where ψx = 2kx da and ψy = 2ky db define phase shifts in x and y directions respectively
and (kx , ky ) = k corresponds to the wave vector. Similarly, using the corresponding
relationships between the nodal forces we have
f_2 + e^{iψ_x} f_1 = 0,   f_4 + e^{iψ_y} f_3 = 0,   f_8 + e^{iψ_x} f_6 + e^{iψ_y} f_7 + e^{i(ψ_x+ψ_y)} f_5 = 0.
If we assume that the problem does not involve internal forces, i.e. f9 = 0, we get
the reduced generalized eigenvalue problem to be solved for each frequency and related
propagation mode and given by
[K_R − ω²M_R]{U_R} = {0}.   (9)
In a finite element implementation the Bloch-periodic boundary conditions can be considered either including directly the phase shifts in the basis functions at the outset
or performing row and column operations in the assembled matrices, Sukumar & Pask
(2009). The Hermitian matrices KR and MR in the reduced system are function of the
wave vector and angular frequency as indicated in (8).
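As a minimal illustration of this reduction, consider a hypothetical 1D cell: one spring k between a reference boundary node and its image, with a lumped mass m/2 at each node. Eliminating the image DOF with a Bloch transformation matrix T (so that K_R = T^H K T and M_R = T^H M T) reproduces the well-known monatomic-chain dispersion:

```python
import numpy as np

# 1D toy cell: a spring k between the left (reference) and right (image)
# boundary nodes, lumped mass m/2 at each node -- illustrative values only
k, m = 1.0, 1.0
K = k * np.array([[1.0, -1.0], [-1.0, 1.0]], dtype=complex)
M = (m / 2.0) * np.eye(2, dtype=complex)

def bloch_omega(psi):
    """Reduce K, M with the Bloch constraint u_image = e^{i psi} u_ref and
    solve the resulting 1x1 problem [K_R - w^2 M_R] U_R = 0."""
    T = np.array([[1.0], [np.exp(1j * psi)]])
    KR = (T.conj().T @ K @ T)[0, 0]   # = k (2 - 2 cos psi), Hermitian -> real
    MR = (T.conj().T @ M @ T)[0, 0]   # = m
    return np.sqrt((KR / MR).real)

for psi in np.linspace(0.1, np.pi, 5):
    exact = 2.0 * np.sqrt(k / m) * abs(np.sin(psi / 2.0))
    print(f"psi={psi:.2f}  omega={bloch_omega(psi):.4f}  analytic={exact:.4f}")
```

Each phase shift ψ yields one (k, ω) duple of the dispersion curve, matching the closed-form chain result 2√(k/m)|sin(ψ/2)|.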
4 Results
In order to test the performance of the spectral finite element method, we implemented
an in-house code to assemble and solve the generalized eigenvalue problem stated in
(9). We considered a homogeneous (non-dispersive material), a bilayer material and
a composite material made out of an aluminium matrix with a square inclusion. In
particular we studied the case of a pore and a brass inclusion. From the above set, the
homogeneous and bilayer material cells had closed form solutions, so we were able to
compare our numerical results with the analytical dispersion curves.
In all the considered cases we solved the problem using both the classical finite
element technique and the spectral finite element method. On the other hand, and
since one of the reported advantages of the spectral technique, with respect to the
classical approach, is the fact that in this method a diagonal mass matrix is obtained,
we also tested the effect of using artificially lumped mass matrices in the classical FEM.
With this goal in mind, we also solved the homogeneous cell and the bilayer material
cell, using the classical FEM with a consistent mass matrix and a lumped mass matrix.
The method used to make the matrices diagonal is diagonal scaling (see Zienkiewicz
et al. (2005)), where each of the terms on the diagonal is scaled by a factor under
the constraint of keeping the total mass of the element constant. In our case, the factor is
the total mass of the element (M_tot) divided by the trace of the matrix (Tr(M)), i.e.

M_ii^(lumped) = (M_tot / Tr(M)) M_ii   (no summation on i).
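A minimal sketch of this diagonal-scaling rule, applied to the consistent mass matrix of a hypothetical 2-node element:

```python
import numpy as np

def lump_by_diagonal_scaling(M):
    """Diagonal scaling: keep only the diagonal of M, rescaled so that the
    total element mass (the sum of all entries of M) is preserved."""
    m_tot = M.sum()            # total mass of the element
    d = np.diag(M).copy()
    return np.diag(m_tot * d / d.sum())   # d.sum() equals Tr(M)

# consistent mass matrix of a toy 2-node element with total mass 6
M = np.array([[2.0, 1.0],
              [1.0, 2.0]])
ML = lump_by_diagonal_scaling(M)
print(np.diag(ML))   # -> [3. 3.]; total mass of 6 is preserved
```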
The results are presented next in terms of dimensionless frequency vs dimensionless
wave number.
4.1 Homogeneous material
In an ideal homogeneous material a plane wave propagates non-dispersively according to the
linear relation

ω = c‖k‖.   (10)
If the wave is travelling parallel to the y-axis (i.e., vertical incidence) and we consider
a square cell of side 2d the above relation particularizes to
ω_i = c_i √[(k + nπ/d)² + (mπ/d)²],   (11)
where ci is the wave propagation velocity for the P wave (i = 1) or SV wave (i = 2). In
this relation n and m are integers indicating the relative position of any given cell with
respect the reference cell. When n is different from zero, the dispersion branches become
hyperbolas due to the wave vector whose components are determined as modulus 2π and
could be present in any Brillouin zone. These hyperbolas result from the intersection
of a plane parallel to the x−axis and a cone, Langlet (1993); Langlet et al. (1995);
Guarı́n-Zapata (2012).
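The folded branch structure of (11) is easy to tabulate directly; a sketch with unit (illustrative) wave speed and half cell size:

```python
import numpy as np

c, d = 1.0, 1.0                       # illustrative wave speed and half cell size
ks = np.linspace(0.0, np.pi / d, 9)   # sweep over the first Brillouin zone

def branch(n, m):
    # one folded branch of eq. (11) for translation integers n, m
    return c * np.sqrt((ks + n * np.pi / d) ** 2 + (m * np.pi / d) ** 2)

for n, m in [(0, 0), (-1, 0), (1, 0), (0, 1)]:
    w = branch(n, m)
    print(f"n={n:+d}, m={m}: w(0)={w[0]:.3f}, w(pi/d)={w[-1]:.3f}")
```

The n = m = 0 branch is the linear relation ω = ck; the n ≠ 0 branches are the folded copies, and the m ≠ 0 branches are the hyperbolas mentioned above.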
Figure 4 presents the results for a square cell discretized with 4-th and 7-th order
complete Lagrange polynomials with equidistant and Lobatto sampling nodes. The
analytical and numerical dispersion curves are shown in dotted and continuous lines
respectively. In all the considered cases the differences between the numerical and analytical results are due to numerical dispersion. The results from the classical 5×5
element are accurate only at normalized frequencies below 3.0, while the analogous
analysis with the spectral method predicts even modes of higher multiplicity as seen
from the point near (1,4). Moreover, the classical 8×8 element only captures the first
three modes, while this same result was already reached with the 5×5 spectral element.
This of course is a result of the improved convergence properties of the spectral technique. Similarly the 8×8 spectral element captures the solution exactly up to the sixth
mode.
Figure 4. Dispersion curves from a single homogeneous material cell: (a) classical 5 × 5 element; (b) spectral 5 × 5 element; (c) classical 8 × 8 element; (d) spectral 8 × 8 element.
4.2 Bilayer material
Figure 5 describes a composite material formed by two layers of different properties.
When the material cell is submitted to an incident wave in the direction perpendicular to
the layers the dispersion curve can be found in closed-form. The problem was studied by
Lord Rayleigh in the case of electromagnetic waves which, in this case, is mathematically
equivalent to the mechanical problem (Rayleigh, 1888).

Figure 5. Cell for a bilayer material.

Of particular interest in this material is the presence of band gaps in addition to its dispersive behaviour. The
analytic dispersion relation as obtained by Rayleigh (1888) is given by
cos(2dk) = cos(ωd/c1) cos(ωd/c2) − [(ρ1 c1)² + (ρ2 c2)²] / (2ρ1 ρ2 c1 c2) · sin(ωd/c1) sin(ωd/c2),
where ci is the longitudinal/transversal wave speed of the i-th layer and ρi its density. In the current analysis the material properties are EA = 7.31 × 10¹⁰ Pa, νA = 0.325, ρA = 2770 kg/m³ and EB = 9.2 × 10¹⁰ Pa, νB = 0.33, ρB = 8270 kg/m³ for aluminium and brass respectively. Figure 6 shows the comparison between the results obtained with classical FEM (with consistent and lumped mass matrices), SFEM, and the analytical solution. Notice that the same number of modes (5 modes) was found with the classical 8 × 8 finite element and with the lower-order 4 × 4 spectral element. Furthermore, the spectral element, as shown in Figure 7, reproduces exactly up to 11 modes, once again showing the improved convergence properties of the spectral method. It is evident that in the considered cases the lumped-mass approximation does not introduce considerable error in the dispersive properties as compared with the full consistent matrix.
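As a sketch, the closed-form relation above can be evaluated numerically to locate the band gaps: a real Bloch wavenumber k exists only where the right-hand side lies in [−1, 1]. The function names below are illustrative (not from the paper) and NumPy is assumed:

```python
import numpy as np

def rayleigh_rhs(omega, d, rho1, c1, rho2, c2):
    """Right-hand side of Rayleigh's relation for a bilayer cell:
    cos(w d/c1) cos(w d/c2)
      - ((rho1 c1)^2 + (rho2 c2)^2) / (2 rho1 rho2 c1 c2)
        * sin(w d/c1) sin(w d/c2)."""
    a, b = omega * d / c1, omega * d / c2
    coupling = ((rho1 * c1) ** 2 + (rho2 * c2) ** 2) / (2.0 * rho1 * rho2 * c1 * c2)
    return np.cos(a) * np.cos(b) - coupling * np.sin(a) * np.sin(b)

def bloch_k(omega, d, rho1, c1, rho2, c2):
    """Bloch wavenumber in the first Brillouin zone, or NaN inside a
    band gap (where |cos(2 d k)| would have to exceed 1)."""
    rhs = rayleigh_rhs(omega, d, rho1, c1, rho2, c2)
    return np.arccos(rhs) / (2.0 * d) if abs(rhs) <= 1.0 else float("nan")
```

A convenient check: when the two layers are identical the coupling factor reduces to 1 and the relation collapses to cos(2dk) = cos(2ωd/c), i.e. the non-dispersive line k = ω/c.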
Figure 6. Dispersion curves for a bilayer material; each layer is meshed with a single element of homogeneous material, the materials being aluminium and brass: (a) classical 4 × 4 element; (b) classical 8 × 8 element; (c) classical 4 × 4 lumped element; (d) classical 8 × 8 lumped element; (e) spectral 4 × 4 element; (f) spectral 8 × 8 element.
Figure 7. Results for 12 branches of the dispersion relation for the bilayer material, computed with a single 8 × 8 SFEM element per layer. The materials are aluminium and brass.
4.3 Matrix with pore/elastic inclusion
As a final case we considered the cell of an aluminium matrix with a square pore and a square brass inclusion. Figure 8 shows the dispersion curves with the 8 × 8 spectral element used to capture the 11 modes in the previous case. This result is also reported in Langlet (1993). There is a bandgap with a cut-off frequency close to 3.0 in the cell with the pore. The bandgap, however, completely closes in the case of the inclusion, since it presents a smaller impedance contrast.
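The impedance-contrast argument can be made quantitative with the material data of the bilayer example. A minimal sketch, assuming the standard plane-strain P-wave speed c = √(E(1 − ν)/(ρ(1 + ν)(1 − 2ν))); the function names are illustrative:

```python
import math

def p_wave_speed(E, nu, rho):
    """Longitudinal (P) wave speed of an isotropic solid, plane strain."""
    return math.sqrt(E * (1.0 - nu) / (rho * (1.0 + nu) * (1.0 - 2.0 * nu)))

def impedance(E, nu, rho):
    """Acoustic impedance Z = rho * c."""
    return rho * p_wave_speed(E, nu, rho)

Z_al = impedance(7.31e10, 0.325, 2770.0)   # aluminium matrix
Z_br = impedance(9.2e10, 0.33, 8270.0)     # brass inclusion
contrast = Z_br / Z_al                      # roughly 2
```

The brass/aluminium contrast is only about 2, while a pore behaves as an essentially infinite impedance mismatch, consistent with the gap closing for the inclusion but not for the pore.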
Figure 8. Dispersion relations for a square brass inclusion and a pore in a square cell of aluminium: (a) pore; (b) inclusion. Both were computed with 8 × 8 SFEM elements.
5 Conclusions
We have implemented a higher-order classical finite element method and a spectral finite element method to find the dispersion relations for in-plane P and SV waves propagating in periodic materials, i.e., phononic crystals. The implemented solution method is based on the Floquet-Bloch theorem from solid state physics, which allows the determination of the dispersion curve from the analysis of a single cell. The numerical dispersion curves of the classical and spectral approaches were used to qualify the performance of each technique in the simulation of mechanical wave propagation problems. Since the dispersion curve simultaneously describes properties of the propagation medium and the external excitation, it is an objective engineering measure to assess the performance of a numerical method and, particularly, its ability to correctly capture the dispersive properties of different material microstructures.
The considered materials correspond to a homogeneous non-dispersive material, a dispersive bilayer material, and a composite material exhibiting bandgaps; the composite material is formed by a matrix with a square pore/inclusion. In general, it was found that the spectral finite element method yields reliable results at much higher frequencies than the classical technique, even at the same computational cost. At low frequencies, however, the performance of both methods is equivalent and the analysis could rely on the classical algorithm. One of the main advantages of the SFEM is that, having used Gauss-Lobatto-Legendre nodes, the resulting mass matrix is diagonal by formulation. In the case of an analysis using Bloch-periodic boundary conditions, a diagonal mass matrix allows reducing the generalized eigenvalue problem to a standard eigenvalue problem; this can be accomplished via Cholesky decomposition.
As a secondary goal, we also evaluated the effects of enforcing diagonality of the mass matrix resulting from a classical algorithm. The dispersion curves obtained with the classical method using a consistent and a lumped mass matrix were, in practical terms, the same, which allows such an approximation to be used with confidence.
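The reduction of the generalized eigenvalue problem K u = ω² M u to standard form, mentioned in the conclusions, is immediate when M is diagonal: the Cholesky factor of M is simply M^{1/2}, and A = M^{-1/2} K M^{-1/2} is symmetric with the same eigenvalues. A minimal sketch (NumPy assumed; the 2-dof matrices are illustrative, not from the paper):

```python
import numpy as np

def standard_form(K, m_diag):
    """Transform K u = w^2 M u, with M = diag(m_diag), into the standard
    symmetric problem A y = w^2 y, where A = M^{-1/2} K M^{-1/2} and
    y = M^{1/2} u.  For a diagonal M the Cholesky factor is M^{1/2}."""
    s = 1.0 / np.sqrt(m_diag)
    A = K * np.outer(s, s)        # entrywise: A_ij = s_i * K_ij * s_j
    return 0.5 * (A + A.T)        # symmetrize against round-off

# Tiny two-degree-of-freedom illustration.
K = np.array([[2.0, -1.0], [-1.0, 2.0]])
m = np.array([1.0, 4.0])
w2 = np.sort(np.linalg.eigvalsh(standard_form(K, m)))
```

With a consistent (non-diagonal) mass matrix one would instead factor M = L Lᵀ and work with A = L⁻¹ K L⁻ᵀ, at noticeably higher cost.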
6 Acknowledgements
This work was supported by Colciencias and Universidad EAFIT under the Young Researcher and Innovators Program “Virginia Gutiérrez de Pineda”. The authors would like to thank Professor Anne Christine Hladky-Hennion from the Institut d'électronique de microélectronique et de nanotechnologie, Université Lille Nord, France, for so kindly answering our conceptual queries and for providing us with a copy of Langlet (1993).
References
Abramowitz, M. & Stegun, I., 1965. Handbook of mathematical functions: with formulas, graphs, and mathematical tables, vol. 55, Dover publications.
Ainsworth, M., 2005. Discrete dispersion relation for hp-version finite element approximation at high wave number, SIAM Journal on Numerical Analysis, 42(2), 553–575.
Ainsworth, M. & Wajid, H., 2009. Dispersive and dissipative behavior of the spectral
element method, SIAM Journal on Numerical Analysis, 47(5), 3910–3937.
Balyan, L. K., Dutt, P. K., & Rathore, R., 2012. Least squares h-p spectral element method for elliptic eigenvalue problems, Applied Mathematics and Computation, 218(19), 9596 – 9613.
Bathe, K.-J., 1995. Finite Element Procedures, Prentice Hall, 2nd edn.
Bos, L., Taylor, M., & Wingate, B., 2001. Tensor product Gauss-Lobatto points are
Fekete points for the cube, Mathematics of Computation, 70(236), 1543–1548.
Brillouin, L., 1953. Wave propagation in periodic structures: Electric filters and crystal
lattices, Dover Publications, 1st edn.
Burden, R. & Faires, J., 2011. Numerical Analysis, Brooks/Cole, 9th edn.
Cupillard, P., Delavaud, E., Burgos, G., Festa, G., Vilotte, J.-P., Capdeville, Y., & Montagner, J.-P., 2012. RegSEM: a versatile code based on the spectral element method
to compute seismic wave propagation at the regional scale, Geophysical Journal International , 188(3), 1203–1220.
Dauksher, W. & Emery, A., 2000. The solution of elastostatic and elastodynamic problems with chebyshev spectral finite elements, Computer methods in applied mechanics
and engineering, 188(1), 217–233.
Davies, R., Morgan, K., & Hassan, O., 2009. A high order hybrid finite element method
applied to the solution of electromagnetic wave scattering problems in the time domain, Computational Mechanics, 44, 321–331.
De Basabe, J. D. & Sen, M. K., 2007. Grid dispersion and stability criteria of some
common finite-element methods for acoustic and elastic wave equations, Geophysics,
72(6), T81–T95.
Faccioli, E., Maggio, F., Quarteroni, A., & Tagliani, A., 1996. Spectral-domain decomposition methods for the solution of acoustic and elastic wave equations, Geophysics,
61(4), 1160–1174.
Faccioli, E., Maggio, F., Paolucci, R., & Quarteroni, A., 1997. 2d and 3d elastic
wave propagation by a pseudo-spectral domain decomposition method, Journal of
seismology, 1(3), 237–251.
Gazdag, J., 1981. Modeling of the acoustic wave equation with transform methods,
Geophysics, 46(6), 854–859.
Guarı́n-Zapata, N., 2012. Simulación Numérica de Problemas de Propagación de Ondas:
Dominios Infinitos y Semi-infinitos, Master’s thesis, Universidad EAFIT.
Komatitsch, D. & Vilotte, J., 1998. The spectral element method: An efficient tool
to simulate the seismic response of 2d and 3d geological structures, Bulletin of the
Seismological Society of America, 88(2), 368–392.
Komatitsch, D., Liu, Q., Tromp, J., Süss, P., Stidham, C., & Shaw, J. H., 2004. Simulations of Ground Motion in the Los Angeles Basin Based upon the Spectral-Element Method, Bulletin of the Seismological Society of America, 94(1), 187–206.
Kosloff, D. & Baysal, E., 1982. Forward modeling by a fourier method, Geophysics,
47(10), 1402–1412.
Kosloff, D., Kessler, D., Tessmer, E., Behle, A., Strahilevitz, R., et al., 1990. Solution
of the equations of dynamic elasticity by a chebychev spectral method, Geophysics,
55(6), 734–747.
Kreyzsig, E., 1989. Introductory Functional Analysis with Applications, Wiley, 1st edn.
Kudela, P., Zak, A., Krawczuk, M., & Ostachowicz, W., 2007. Modelling of wave
propagation in composite plates using the time domain spectral element method,
Journal of Sound and Vibration, 302(4-5), 728 – 745.
Langlet, P., 1993. Analyse de la Propagation des Ondes Acoustiques dans les Materiaux
Periodiques a l’aide de la Methode des Elements Finis, Ph.D. thesis, L’Universite de
Valenciennes et du Hainaut-Cambresis.
Langlet, P., Hladky-Hennion, A.-C., & Decarpigny, J.-N., 1995. Analysis of the propagation of plane acoustic waves in passive periodic materials using the finite element
method, Journal of the Acoustical Society of America, 98(5), 2792–2800.
Luo, M. & Liu, Q. H., 2009a. Accurate determination of band structures of two-dimensional dispersive anisotropic photonic crystals by the spectral element method, Journal of The Optical Society of America, 26(7), 1598–1605.
Luo, M. & Liu, Q. H., 2009b. Spectral element method for band structures of three-dimensional anisotropic photonic crystals, Physical Review E, 80, 056702.
Luo, M., Liu, Q. H., & Li, Z., 2009. Spectral element method for band structures of
two-dimensional anisotropic photonic crystals, Physical Review E , 79, 026705.
Maday, Y. & Patera, A. T., 1989. Spectral element methods for the incompressible Navier-Stokes equations, in State-of-the-Art Surveys on Computational Mechanics, American Society of Mechanical Engineers, New York, pp. 71–143.
Magnoni, F., Casarotti, E., Michelini, A., Piersanti, A., Komatitsch, D., Peter, D., & Tromp, J., 2012. Spectral-element and adjoint 3D tomography for the lithosphere of central Italy, in EGU General Assembly Conference Abstracts, vol. 14, p. 7506.
Marfurt, K. J., 1984. Accuracy of finite-difference and finite-element modeling of the
scalar and elastic wave equations, Geophysics, 49(5), 533–549.
Mazzieri, I. & Rapetti, F., 2012. Dispersion analysis of triangle-based spectral element
methods for elastic wave propagation, Numerical Algorithms, 60(4), 631–650.
Mead, D., 1996. Wave propagation in continuous periodic structures: research contributions from southampton, 1964–1995, Journal of Sound and Vibration, 190(3),
495–524.
Orszag, S., 1980. Spectral methods for problems in complex geometries, Journal of
Computational Physics, 37(1), 70–92.
Patera, A. T., 1984. A spectral element method for fluid dynamics: Laminar flow in a channel expansion, Journal of Computational Physics, 54, 468–488. (This was the first paper on spectral element methods.)
Pozrikidis, C., 2005. Introduction To Finite And Spectral Element Methods Using Matlab, Chapman & Hall/CRC.
Priolo, E., Carcione, J., & Seriani, G., 1994. Numerical simulation of interface waves
by high-order spectral modeling techniques, The Journal of the Acoustical Society of
America, 95, 681.
Rayleigh, L., 1888. On the remarkable phenomenon of crystalline reflexion described
by Prof. Stokes, Philosophical Magazine Series 5 , 26(160), 256–265.
Reddy, J., 1991. Applied Functional Analysis and Variational Methods in Engineering,
Krieger Publishing, 1st edn.
Ruzzene, M., Scarpa, F., & Soranna, F., 2003. Wave beaming effects in two-dimensional
cellular structures, Smart Materials and Structures, 12, 363–372.
Seriani, G. & Oliveira, S., 2008. Dispersion analysis of spectral element methods for
elastic wave propagation, Wave Motion, 45(6), 729–744.
Seriani, G., Priolo, E., & Pregarz, A., 1995. Modelling waves in anisotropic media by
a spectral element method, in Proceedings of the third international conference on
mathematical and numerical aspects of wave propagation, pp. 289–298, Siam.
Shani-Kadmiel, S., Tsesarsky, M., Louie, J., & Gvirtzman, Z., 2012. Simulation of
seismic-wave propagation through geometrically complex basins: The dead sea basin,
Bulletin of the Seismological Society of America, 102(4), 1729–1739.
Srikantha, A., Woodhouse, J., & Flecka, N., 2006. Wave propagation in two-dimensional
periodic lattices, Journal of the Acoustical Society of America, 119, 1995–2005.
Sukumar, N. & Pask, J. E., 2009. Classical and enriched Finite element formulations
for Bloch-periodic boundary conditions, International Journal of Numerical Methods
in Engineering, 77(8), 1121–1138.
Zhu, C., Qin, G., & Zhang, J., 2011. Implicit chebyshev spectral element method for
acoustics wave equations, Finite Elements in Analysis and Design, 47(2), 184 – 194.
Zienkiewicz, O. & Taylor, R., 2005. The Finite Element Method for Solid and Structural
Mechanics, vol. 2, Butterworth-Heinemann, 6th edn.
Zienkiewicz, O., Taylor, R., & Zhu, J., 2005. The Finite Element Method: Its Basis
and Fundamentals, vol. 1, Butterworth-Heinemann, 6th edn.
FINDING GENERATORS AND RELATIONS FOR
GROUPS ACTING ON THE HYPERBOLIC BALL
arXiv:1701.02452v1 [] 10 Jan 2017
DONALD I. CARTWRIGHT
TIM STEGER
Abstract. In order to enumerate the fake projective planes, as announced in [2], we found explicit generators and a presentation for each maximal arithmetic subgroup Γ̄ of PU(2, 1) for which the (appropriately normalized) covolume equals 1/N for some integer N ≥ 1. Prasad and Yeung [4, 5] had given a list of all such Γ̄ (up to equivalence).
The generators were found by a computer search which uses the natural action of PU(2, 1) on the unit ball B(C2) in C2. Our main results here give criteria which ensure that the computer search has found sufficiently many elements of Γ̄ to generate Γ̄, and describe a family of relations amongst the generating set sufficient to give a presentation of Γ̄.
We give an example illustrating details of how this was done in the case of
a particular Γ̄ (for which N = 864). While there are no fake projective planes
in this case, we exhibit a torsion-free subgroup Π of index N in Γ̄, and give
some properties of the surface Π\B(C2 ).
1. Introduction
Suppose that P is a fake projective plane. Its Euler characteristic χ(P ) is 3. The fundamental group Π = π1 (P ) embeds as a cocompact arithmetic lattice subgroup of PU(2, 1), and so acts on the unit ball B(C2) = {(z1 , z2 ) ∈ C2 : |z1 |² + |z2 |² < 1} in C2, endowed with the hyperbolic metric. Let F be a fundamental domain for this action. There is a normalization of the hyperbolic volume vol on B(C2) and of the Haar measure µ on PU(2, 1) so that χ(P ) = 3 vol(F ) = 3µ(PU(2, 1)/Π). So µ(PU(2, 1)/Π) = 1. Let Γ̄ ≤ PU(2, 1) be maximal arithmetic, with Π ≤ Γ̄. Then µ(PU(2, 1)/Γ̄) = 1/N and [Γ̄ : Π] = N for some integer N ≥ 1.
The fundamental group Π of a fake projective plane must also be torsion-free
and Π/[Π, Π] must be finite (see [2] for example).
As announced in [2], we have found all groups Π with these properties, up to
isomorphism. Our method was to find explicit generators and an explicit presentation for each Γ̄, so that the question of finding all subgroups Π of Γ̄ with index N
and the additional required properties, as just mentioned, can be studied.
In Section 2, we give results about finding generators and relations for groups Γ
acting on quite general metric spaces X. The main theorem gives simple conditions
which ensure that a set S of elements of Γ, all of which move a base point 0 by at
most a certain distance r1 , are all the elements of Γ with this property.
In Section 3 we specialize to the case X = B(C2 ), and treat in detail a particular
group Γ. This Γ is one of the maximal arithmetic subgroups Γ̄ ≤ P U (2, 1) listed
in [4, 5] with covolume of the form 1/N , N an integer. In this case N = 864.
Consider the action of Γ on B(C2 ), and let 0 denote the origin in B(C2 ). Two
elements, denoted u and v, generate the stabilizer K of 0 in Γ. Another element b
of Γ was found by a computer search looking for elements g ∈ Γ for which d(g.0, 0)
is small. We use the results of Section 2 to show that the computer search did not
miss any such g, and to get a simple presentation of Γ. It turns out that this Γ is
one of the Deligne-Mostow groups (see Parker [3]).
We apply this presentation of Γ to exhibit a torsion-free subgroup Π of index 864. The abelianization Π/[Π, Π] is Z², and so Π is not the fundamental group of a fake projective plane. However, the ball quotient Π\B(C2) is a compact complex surface with interesting properties, some of which we describe. By a lengthy computer search not discussed here, we showed that any torsion-free subgroup of index 864 in Γ is conjugate to Π, and so no fake projective planes arise in this context.
In Section 4, we calculate the value of r0 for the example of the previous section.
2. General results
Let Γ be a group acting by isometries on a simply-connected geodesic metric
space X. Let S ⊂ Γ be a finite symmetric generating set for Γ. Fix a point 0 ∈ X,
and suppose that d(0, x) is bounded on the set
(2.1)    FS = {x ∈ X : d(0, x) ≤ d(g.0, x) for all g ∈ S}.
Define
r0 = sup{d(0, x) : x ∈ FS }.
Theorem 2.1. Suppose that there is a number r1 > 2r0 such that
(a) if g ∈ S, then d(g.0, 0) ≤ r1 ,
(b) if g, g ′ ∈ S and d((gg ′ ).0, 0) ≤ r1 , then gg ′ ∈ S.
Then S = {g ∈ Γ : d(g.0, 0) ≤ r1 }.
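As a toy illustration of the theorem (this example is an assumption of ours, not from the paper): take Γ = Z acting on X = R by translations g.x = x + g, with base point 0 and S = {−1, 0, 1}. Then FS = [−1/2, 1/2], so r0 = 1/2, and any r1 with 2r0 < r1 < 2 satisfies hypotheses (a) and (b):

```python
# Gamma = Z acting on X = R by g.x = x + g; base point 0; S = {-1, 0, 1}.
# Then F_S = [-1/2, 1/2], so r0 = 1/2.  Any r1 with 2*r0 < r1 < 2 works.
r0, r1 = 0.5, 1.5
S = {-1, 0, 1}

# Hypothesis (a): every g in S moves 0 by at most r1.
assert all(abs(g) <= r1 for g in S)

# Hypothesis (b): if g, g' in S and d((g g').0, 0) <= r1, then g g' in S.
assert all(g + h in S for g in S for h in S if abs(g + h) <= r1)

# Conclusion of the theorem, checked on a window of the group:
assert S == {g for g in range(-100, 101) if abs(g) <= r1}
```

Taking r1 ≥ 2 would break hypothesis (b), since then 1 + 1 = 2 moves 0 by at most r1 but does not lie in S.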
Corollary 2.1. For Γ, S as in Theorem 2.1, FS is equal to the Dirichlet fundamental domain F = {x ∈ X : d(0, x) ≤ d(g.0, x) for all g ∈ Γ} of Γ centered
at 0.
Proof. Clearly F ⊂ FS . If x ∈ FS \ F , pick a g ∈ Γ so that d(g.0, x) < d(0, x). Now
d(0, x) ≤ r0 because x ∈ FS , and so d(0, g.0) ≤ d(0, x)+d(x, g.0) ≤ 2d(0, x) ≤ 2r0 ≤
r1 , so that g ∈ S, by Theorem 2.1. But then d(0, x) ≤ d(g.0, x), a contradiction.
Corollary 2.2. Assume Γ, S are as in Theorem 2.1, and that the action of each
g ∈ Γ \ {1} is nontrivial, so that Γ may be regarded as a subgroup of the group C(X)
of continuous maps X → X. Then Γ is discrete for the compact open topology.
Proof. Let V1 = {f ∈ C(X) : f (0) ∈ Br1 (0)}, where Br (x) = {y ∈ X : d(x, y) < r}.
Theorem 2.1 shows that Γ ∩ V1 ⊂ S. For each g ∈ S \ {1}, choose xg ∈ X so that
g.xg 6= xg , let rg = d(g.xg , xg ), and let Vg = {f ∈ C(X) : f (xg ) ∈ Brg (xg )}. Then
g 6∈ Vg . Hence the intersection V of the sets Vg , g ∈ S, is an open neighborhood
of 1 in C(X) such that Γ ∩ V = {1}.
Before starting the proof of Theorem 2.1, we need some lemmas, which have the
same hypotheses as Theorem 2.1.
Lemma 2.1. The group Γ is generated by S0 = {g ∈ S : d(g.0, 0) ≤ 2r0 }.
Proof. As S is finite, there is a δ > 0 so that d(g.0, 0) ≥ 2r0 + δ for all g ∈ S \ S0 .
Let Γ0 denote the subgroup of Γ generated by S0 , and assume that Γ0 $ Γ. Since
Γ is generated by S, there are g ∈ S \ Γ0 , and we choose such a g with d(g.0, 0) as
small as possible. Then d(g.0, 0) > 2r0 , since otherwise g ∈ S0 ⊂ Γ0 . In particular,
g.0 6∈ FS . Since 0 ∈ FS , there is a last point ξ belonging to FS on the geodesic
from 0 to g.0. Choose any point ξ ′ on that geodesic which is outside FS but satisfies
d(ξ, ξ ′ ) < δ/2. As ξ ′ 6∈ FS , there is an h ∈ S such that d(h.0, ξ ′ ) < d(0, ξ ′ ). Hence
d(h.0, g.0) ≤ d(h.0, ξ ′ ) + d(ξ ′ , g.0) < d(0, ξ ′ ) + d(ξ ′ , g.0) = d(0, g.0),
so that d((h−1 g).0, 0) < d(g.0, 0). So h−1 g cannot be in S \ Γ0 , by choice of g. Also,
d(h.0, 0) ≤ d(h.0, ξ ′ ) + d(ξ ′ , 0) < 2d(0, ξ ′ ) ≤ 2(d(0, ξ) + d(ξ, ξ ′ )) < 2r0 + δ.
Hence h ∈ S0 , by definition of δ. Now h−1 g ∈ S by hypothesis (b) above, since
h−1 , g ∈ S and d((h−1 g).0, 0) < d(0, g.0) ≤ r1 . So h−1 g must be in Γ0 . But then
g = h(h−1 g) ∈ Γ0 , contradicting our assumption.
Lemma 2.2. If x ∈ X and if d(0, x) ≤ r0 + ǫ, where 0 < ǫ ≤ (r1 − 2r0 )/2, then
there exists g ∈ S such that g.x ∈ FS , and in particular, d(0, g.x) ≤ r0 .
Proof. Since S is finite, we can choose g ∈ S so that d(0, g.x) is minimal. If g.x ∈
FS , there is nothing to prove, and so assume that g.x 6∈ FS . There exists h ∈ S
such that d(h.0, g.x) < d(0, g.x), and so d(0, (h−1 g).x) = d(h.0, g.x) < d(0, g.x).
By the choice of g, h−1 g cannot be in S, and also d(0, g.x) ≤ d(0, x). So
d(0, (h−1 g).0) ≤ d(0, (h−1 g).x) + d((h−1 g).x, (h−1 g).0) = d(0, (h−1 g).x) + d(x, 0)
< d(0, g.x) + d(x, 0)
≤ 2d(0, x)
≤ 2r0 + 2ǫ ≤ r1 .
But this implies that h−1 g ∈ S by (b) again, since g, h ∈ S. This is a contradiction,
and so g.x must be in FS .
Remark 1. If in Lemma 2.2 we also assume that ǫ < δ/2, where δ is as in the
proof of Lemma 2.1, then the g in the last lemma can be chosen in S0 . For then
d(0, g.0) ≤ d(0, g.x) + d(g.x, g.0) ≤ 2d(0, x) ≤ 2(r0 + ǫ) < 2r0 + δ.
To prove Theorem 2.1, we consider a g ∈ Γ such that d(g.0, 0) ≤ r1 . By Lemma 2.1, we can write g = y1 y2 · · · yn , where yi ∈ S0 for each i. Since 1 ∈ S0 ,
we may suppose that n is even, and write n = 2m. Moreover, we can assume that
m is odd. Form an equilateral triangle ∆ in the Euclidean plane with horizontal
base whose sides are each divided into m equal segments, marked off by vertices
v0 , v1 , . . . , v3m−1 . We use these to partition ∆ into m2 subtriangles. We illustrate
in the case m = 3:
v3
•
v2 •
• v4
• v5
v1 •
v0 •
•
v8
•
v7
• v6
We define a continuous function ϕ from the boundary of ∆ to X which maps v0
to 0 and vi to (y1 · · · yi ).0 for i = 1, . . . , n, which for i = 1, . . . , n maps the segment
[vi−1 , vi ] to the geodesic from ϕ(vi−1 ) to ϕ(vi ), and which maps the bottom side
of ∆ to the geodesic from g.0 = (y1 · · · yn ).0 to 0.
Because X is simply-connected, we can extend ϕ to a continuous map ϕ : ∆ → X,
where ∆ here refers to the triangle and its interior.
Let ǫ > 0 be as in Lemma 2.2. For a sufficiently large integer r, which we choose
to be odd, by partitioning each of the above subtriangles of ∆ into r2 congruent
subtriangles, we have d(ϕ(t), ϕ(t′ )) < ǫ for all t, t′ in the same (smaller) subtriangle.
We now wish to choose elements x(v) ∈ Γ for each vertex v of each of the
subtriangles, so that
(i) d(x(v).0, ϕ(v)) ≤ r0 for each v not in the interior of the bottom side of ∆,
(ii) d(x(v).0, ϕ(v)) ≤ r1 /2 for the v 6= v0 , vn on the bottom side of ∆.
We shall also define elements y(e) for each directed edge e of each of the subtriangles
in such a way that
(iii) x(w) = x(v)y(e) if e is the edge from v to w.
Note that the y(e)’s are completely determined by the x(v)’s, and the x(v)’s are
completely determined from the y(e)’s provided that one x(v) is specified.
We first choose x(v) for the original vertices v = vi on the left and right sides of ∆
by setting x(v0 ) = 1 and x(vi ) = y1 · · · yi for i = 1, . . . , n. Thus d(x(v).0, ϕ(v)) =
0 ≤ r0 for these v’s. Now if w0 = vi−1 , w1 , . . . , wr = vi are the r + 1 equally spaced
vertices of the edge from vi−1 to vi , we set x(wj ) = x(vi−1 ) for 1 ≤ j < r/2 and
x(wj ) = x(vi ) for r/2 < j ≤ r − 1. Then if 1 ≤ j < r/2, we have
d(x(wj ).0, ϕ(wj )) = d(x(vi−1 ).0, ϕ(wj )) = d(ϕ(vi−1 ), ϕ(wj ))
(∗)                 ≤ (1/2) d(ϕ(vi−1 ), ϕ(vi ))
                    = (1/2) d(0, yi .0) ≤ r0 ,
where the inequality (∗) holds as ϕ on the segment from vi−1 to vi is the geodesic
from ϕ(vi−1 ) to ϕ(vi ). In the same way, d(x(wj ).0, ϕ(wj )) ≤ r0 for r/2 < j ≤ r − 1.
Having chosen x(v) for the v on the left and right sides of ∆, we set y(e) =
x(v)−1 x(w) if e is the edge from v to w on one of those sides. So of the r edges
in the segment from vi−1 to vi , we have y(e) = 1 except for the middle edge, for
which y(e) = yi .
For the vertices v on the bottom side of ∆, we set x(v) = 1 if v is closer to v0
than to vn , and set x(v) = g = y1 · · · yn otherwise, and we set y(e) = x(v)−1 x(w)
if e is the edge from v to w. Since rm is odd, there is no middle vertex on the side,
so there is no ambiguity in the definition of x(v), but there is a middle edge e, and
y(e) = g if e is directed from left to right. For the other edges e′ on the bottom
side, y(e′ ) = 1. If v is a vertex of the bottom side of ∆ closer to v0 than to vn , then
d(x(v).0, ϕ(v)) = d(0, ϕ(v)) ≤ (1/2) d(0, g.0) ≤ r1 /2,
since ϕ on the bottom side is just the geodesic from 0 to g.0. Similarly, if v is a
vertex of the bottom side of ∆ closer to vn , then again d(x(v).0, ϕ(v)) ≤ r1 /2.
We now choose x(v) for vertices which are not on the sides of ∆ as follows. We
successively define x(v) as we move from left to right on a horizontal line. Suppose
that v is a vertex for which x(v) has not yet been chosen, but for which x(v ′ ) has
been chosen for the vertex v ′ immediately to the left of v. Now d(ϕ(v), ϕ(v ′ )) ≤ ǫ
and d(x(v ′ ).0, ϕ(v ′ )) ≤ r0 , so that d(ϕ(v), x(v ′ ).0) ≤ r0 + ǫ. By Lemma 2.2, applied
to x = x(v ′ )−1 ϕ(v), there is a g ∈ S so that d(gx(v ′ )−1 ϕ(v), 0) ≤ r0 . So we set
x(v) = x(v ′ )g −1 . If e is the edge from v ′ to v, we set y(e) = g −1 ∈ S.
We have now defined x(v) for each vertex of the partitioned triangle ∆ so that
(i) and (ii) hold, and these determine y(e) for each directed edge e so that (iii)
holds. We have seen that y(e) ∈ S if e is an edge on the left or right side of ∆ or if
e is a horizontal edge not lying on the bottom side of ∆ whose right hand endpoint
does not lie on the right side of ∆. Also, y(e) = 1 ∈ S for all edges e in the bottom
side of ∆ except the middle one, for which y(e) = g. The theorem will be proved
once we check that y(e) ∈ S for each edge e.
Lemma 2.3. Suppose that v, v ′ and v ′′ are the vertices of a subtriangle in ∆, and
that e, e′ and e′′ are the edges from v to v ′ , v ′ to v ′′ , and v to v ′′ , respectively.
Suppose that y(e) and y(e′ ) are in S. Then y(e′′ ) ∈ S too.
Proof. We have y(e) = x(v)−1 x(v ′ ), y(e′ ) = x(v ′ )−1 x(v ′′ ) and y(e′′ ) = x(v)−1 x(v ′′ ),
and so y(e′′ ) = y(e)y(e′ ). If e′′ is the middle edge of the bottom side of ∆, then
d(y(e′′ ).0, 0) = d(g.0, 0) ≤ r1 by hypothesis, and so y(e′′ ) ∈ S by (b) above. If e′′
is any other edge of the bottom side of ∆, then y(e′′ ) = 1 ∈ S. So we may assume
that e′′ is not on the bottom side of ∆. Hence at most one of v and v ′′ lies on that
bottom side. Hence
d(0, y(e′′ ).0) = d(x(v).0, x(v)y(e′′ ).0)
               = d(x(v).0, x(v ′′ ).0)
               ≤ d(x(v).0, ϕ(v)) + d(ϕ(v), ϕ(v ′′ )) + d(ϕ(v ′′ ), x(v ′′ ).0)
               < d(x(v).0, ϕ(v)) + d(ϕ(v ′′ ), x(v ′′ ).0) + ǫ
(†)            ≤ r0 + r1 /2 + ǫ
               ≤ r1 ,
where the inequality (†) holds because at least one of d(x(v).0, ϕ(v)) ≤ r0 and
d(ϕ(v ′′ ), x(v ′′ ).0) ≤ r0 holds, since at most one of v and v ′′ is a vertex of the bottom
side of ∆, and if say v is such a point, then we still have d(x(v).0, ϕ(v)) ≤ r1 /2.
Conclusion of the proof of Theorem 2.1. We must show that y(e) ∈ S for all edges.
We use Lemma 2.3, working down from the top of ∆, and moving from left to
right. So in the order indicated in the next diagram we work down ∆, finding that
y(e) ∈ S in each case, until we get to the lowest row of triangles.
[Figure: the partitioned triangle ∆ with edges labelled e1 , . . . , e9 indicating the order in which Lemma 2.3 is applied, working down from the apex and from left to right within each row of subtriangles.]
Then working from the left and from the right, we find that y(e) ∈ S for all the
diagonal edges in the lowest row.
Finally, we get to the middle triangle in the lowest row. For the diagonal edges
e, e′ of that triangle, we have found that y(e), y(e′ ) ∈ S, and so y(e′′ ) ∈ S for the
horizontal edge e′′ of that triangle too, by Lemma 2.3.
If we make the extra assumption that the set of values d(g.0, 0), g ∈ Γ, is discrete,
the following result is a consequence of [1, Theorem I.8.10] (applied to the open set
U = {x ∈ X : d(x, 0) < r0 + ǫ} for ǫ > 0 small). We shall sketch a proof along the
lines of that of Theorem 2.1 which does not make that extra assumption.
Theorem 2.2. With the hypotheses of Theorem 2.1, let S0 = {g ∈ S : d(g.0, 0) ≤
2r0 }, as before. Then the set of generators S0 , and the relations g1 g2 g3 = 1, where
the gi are each in S0 , give a presentation of Γ.
Proof. Let S̃0 be a set with a bijection f : s̃ 7→ s from S̃0 to S0 . Let F be the free
group on S̃0 , and denote also by f the induced homomorphism F → Γ. Then f is
surjective by Lemma 2.1. Let ỹ1 , . . . , ỹn ∈ S̃0 , and suppose that g̃ = ỹ1 · · · ỹn ∈ F
is in the kernel of f . We must show that g̃ is in the normal closure H of the set of
elements of F of the form s̃1 s̃2 s̃3 , where s̃i ∈ S̃0 for each i, and s1 s2 s3 = 1 in Γ.
As 1 ∈ S0 , we may assume that n is a multiple of 3, say n = 3m, and form a triangle ∆
partitioned into m2 congruent subtriangles, as in the proof of Theorem 2.1. The
vertices are again denoted vi , and we write v3m = v0 , and yi = f (ỹi ) for each i.
We again define a continuous function ϕ from the boundary of ∆ to X which
maps v0 to 0 and vi to (y1 · · · yi ).0 for i = 1, . . . , 3m, and which for i = 1, . . . , 3m
maps the segment [vi−1 , vi ] to the geodesic from ϕ(vi−1 ) to ϕ(vi ). From y1 y2 · · · yn = 1 we see that ϕ(v3m ) = (y1 · · · yn ).0 = 0 = ϕ(v0 ). This time the bottom side of ∆ is mapped
in the same way as the other two sides.
Let ǫ > 0, with ǫ < min{δ/2, (r1 − 2r0 )/2}, and as before, we partition ∆ into subtriangles so that whenever t, t′ are in the same subtriangle, d(ϕ(t), ϕ(t′ )) < ǫ holds.
Using Lemma 2.2 and the remark after it, we can again choose elements x(v) ∈ Γ
for each vertex v of each of the subtriangles, so that d(x(v).0, ϕ(v)) ≤ r0 for each v,
without the complications about the bottom side of ∆ which had to be dealt with
in the proof of Theorem 2.1. If e is the edge from v ′ to v, where v ′ is immediately
to the left of v, and v is not on the right side of ∆, then y(e) = x(v ′ )−1 x(v) ∈ S0 .
We then define elements y(e) for each directed edge e of each of the subtriangles so
that x(w) = x(v)y(e) if e is the edge from v to w. Again using Lemma 2.3, with S
there replaced by S0 , and arguing as in the conclusion of the proof of Theorem 2.1,
we deduce that y(e) ∈ S0 for each edge e.
Of the r edges e in the segment from vi−1 to vi , we have y(e) = 1 except for the
middle edge, for which y(e) = yi . So y(e1 )y(e2 ) · · · y(e3mr ) = 1, where e1 , . . . , e3mr
are the successive edges as we traverse the sides of ∆ in a clockwise direction starting
at v0 , and g̃ = ỹ(e1 ) · · · ỹ(e3mr ).
It is now easy to see that g̃ ∈ H, by using y(e′′ ) = y(e)y(e′ ) in the situation
of (1), so that ỹ(e′′ ) = ỹ(e)ỹ(e′ )h for some h ∈ H. Then in the situation of (2), for
example, y(e)y(e′ ) = y(e)(y(e∗ )y(e′′′ )) = (y(e)y(e∗ ))y(e′′′ ) = y(e′′ )y(e′′′ ),
[Figure: local edge configurations (1) and (2). In (1), a single subtriangle has edges e, e′ and e′′ . In (2), two adjacent subtriangles share the edge e∗ , with the remaining edges labelled e, e′ , e′′ and e′′′ .]
so that ỹ(e)ỹ(e′ ) = ỹ(e)(ỹ(e∗ )ỹ(e′′′ )h) = (ỹ(e)ỹ(e∗ ))ỹ(e′′′ )h = (ỹ(e′′ )h′ )ỹ(e′′′ )h =
ỹ(e′′ )ỹ(e′′′ )h′′ for some h, h′ , h′′ ∈ H. We can, for example, successively use this
device to remove the right hand strip of triangles from ∆, reducing the size of the
triangle being treated, and then repeat this process.
In the next proposition, we use [1, Theorem I.8.10] to show that under extra
hypotheses (satisfied by the example in Section 3), we can omit the g ∈ S0 for
which d(g.0, 0) = 2r0 , and still get a presentation.
Proposition 2.1. Let X = B(C2 ), and suppose that the set of values d(g.0, 0),
g ∈ Γ, is discrete. Assume also the hypotheses of Theorem 2.1. Then the set
S0∗ = {g ∈ S : d(g.0, 0) < 2r0 } is a set of generators of Γ, and the relations
g1 g2 g3 = 1, where the gi are each in S0∗ , give a presentation of Γ.
Proof. For each g ∈ S such that d(g.0, 0) = 2r0 , let m be the midpoint of the
geodesic from 0 to g.0. Let M be the set of these midpoints. Let δ1 > 0 be so small
that 2r0 + 2δ1 < r1 . Since M and S are finite, we can choose a positive δ < δ1 so
that if m, m′ ∈ M and g ∈ S, and if d(g.m, m′ ) < 2δ, then g.m = m′ . So if γ, γ ′ ∈ Γ,
m, m′ ∈ M and B(γ.m, δ) ∩ B(γ ′ .m′ , δ) 6= ∅, then d(γ.0, γ ′ .0) < 2r0 + 2δ < r1 , so
that γ −1 γ ′ ∈ S by Theorem 2.1, and d(m, γ −1 γ ′ .m′ ) < 2δ, so that γ.m = γ ′ .m′ by
choice of δ. Thus if Y is the union of the Γ-orbits of all m ∈ M , then the balls
B(y, δ), y ∈ Y , are pairwise disjoint. Now let X ′ denote the subset of X obtained
by removing all these balls. It follows (because the ambient dimension is > 2) that
X ′ is still simply connected.
Let U = {x ∈ X′ : d(x, 0) < r0 + δ′}, for some δ′ > 0. The proposition will
follow from [1, Theorem I.8.10], applied to this U , once we show that if δ ′ > 0 is
small enough, then any g ∈ Γ such that g(U) ∩ U ≠ ∅ must satisfy d(g.0, 0) < 2r0.
Clearly d(g.0, 0) < 2r0 + 2δ ′ holds, and because of the discreteness hypothesis, if
δ ′ > 0 is small enough, one even has d(g.0, 0) ≤ 2r0 . Now suppose d(g.0, 0) = 2r0 ,
let m be the midpoint on the geodesic from 0 to g.0, and suppose x ∈ g(U ) ∩ U .
Then d(0, x) < r0 + δ ′ and d(g.0, x) < r0 + δ ′ . Using the CAT(0) property of X,
FINDING GENERATORS AND RELATIONS
this shows that d(x, m)² + r0² ≤ (r0 + δ′)², so that

    d(x, m) ≤ √((r0 + δ′)² − r0²),
and this last can be made less than δ if δ ′ is chosen small enough. This contradicts
the hypothesis that x ∈ U ⊂ X ′ .
The fact that X ′ is simply connected could also be used to modify the version
of the proof going through the triangle-shaped simplicial complex.
3. An example
Let ℓ = Q(ζ), where ζ is a primitive 12-th root of unity. Then ζ⁴ = ζ² − 1, so
that ℓ is a degree 4 extension of Q. Let r = ζ + ζ⁻¹ and k = Q(r). Then r and ζ³
are square roots of 3 and −1, respectively, and if ζ = e^{2πi/12} then r = +√3 and
ζ³ = i. Let
    F = \begin{pmatrix} -r-1 & 1 & 0 \\ 1 & 1-r & 0 \\ 0 & 0 & 1 \end{pmatrix},    (3.1)
and form the group
Γ = {g ∈ M3×3 (Z[ζ]) : g ∗ F g = F }/{ζ ν I : ν = 0, 1, . . . , 11}.
We shall use the results of Section 2 to find a presentation for Γ. Let us first
motivate the choice of this example. Now ι(g) = F −1 g ∗ F defines an involution of
the second kind on the simple algebra M3×3 (ℓ). We can define an algebraic group
G over k so that
G(k) = {g ∈ M3×3 (ℓ) : ι(g)g = I and det(g) = 1}.
For the corresponding adjoint group G,
G(k) = {g ∈ M3×3 (ℓ) : ι(g)g = I}/{tI : t ∈ ℓ and t̄t = 1}.
Now k has two archimedean places v+ and v−, corresponding to the two embeddings
k ↪ R mapping r to +√3 and −√3, respectively. The eigenvalues of F are 1 and
−r ± √2. So the form F is definite for v− but not for v+. Hence

    G(k_{v−}) ≅ PU(3)   and   G(k_{v+}) ≅ PU(2, 1).
Letting Vf denote the set of non-archimedean places of k, if v ∈ Vf, then either
G(kv) ≅ PGL(3, kv) if v splits in ℓ, or G(kv) ≅ PU_F(3, kv(i)) if v does not split
in ℓ. With a suitable choice of maximal parahorics Pv in G(kv), the following group

    Γ̄ = G(k) ∩ ∏_{v∈Vf} Pv
is one of the maximal arithmetic subgroups of P U (2, 1) whose covolume has the
form 1/N , N an integer. Prasad and Yeung [4, 5] have described all such subgroups,
up to k-equivalence. In this case N = 864. As in [2], lattices can be used to describe
concretely maximal parahorics. We can take Pv = {g ∈ G(kv) : g.xv = xv}, where,
in the cases when v splits in ℓ, xv is the homothety class of the ov-lattice ov³ ⊂ kv³,
where ov is the valuation ring of kv. When v does not split, xv is the lattice
ov³, where now ov is the valuation ring of kv(i). With this particular choice of
parahorics, Γ̄ is just Γ.
The action of Γ on the unit ball X = B(C²) in C² is described as follows, making
explicit the isomorphism G(k_{v+}) ≅ PU(2, 1). Let
    γ0 = \begin{pmatrix} 1 & 0 & 0 \\ 1 & 1-r & 0 \\ 0 & 0 & 1 \end{pmatrix},   F_{diag} = \begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1-r \end{pmatrix},   and   F_0 = \begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & -1 \end{pmatrix}.    (3.2)
DONALD I. CARTWRIGHT TIM STEGER
Then γ0ᵗF_{diag}γ0 = (1 − r)F, so a matrix g is unitary with respect to F if and only if
g′ = γ0gγ0⁻¹ is unitary with respect to F_{diag}. Now let D be the diagonal matrix with
diagonal entries 1, 1 and √(√3 − 1). Taking r = +√3, if g′ is unitary with respect
to F_{diag}, then g̃ = Dg′D⁻¹ is unitary with respect to F_0; that is, g̃ ∈ U(2, 1). If
Z = {ζ^ν I : ν = 0, …, 11}, then for an element gZ of Γ the action of gZ on B(C²) is
given by the usual action of g̃. That is,

    (gZ).(z, w) = (z′, w′)   if   g̃ \begin{pmatrix} z \\ w \\ 1 \end{pmatrix} = λ \begin{pmatrix} z′ \\ w′ \\ 1 \end{pmatrix}   for some λ ∈ C.
Now let u and v be the matrices

    u = \begin{pmatrix} ζ³+ζ²-ζ & 1-ζ & 0 \\ ζ³+ζ²-1 & ζ-ζ³ & 0 \\ 0 & 0 & 1 \end{pmatrix}   and   v = \begin{pmatrix} ζ³ & 0 & 0 \\ ζ³+ζ²-ζ-1 & 1 & 0 \\ 0 & 0 & 1 \end{pmatrix},
respectively. They have entries in Z[ζ], are unitary with respect to F , and satisfy
    u³ = I,   v⁴ = I,   and   (uv)² = (vu)².
They (more precisely, uZ and vZ) generate a subgroup K of Γ of order 288. Magma
shows that an abstract group with presentation ⟨u, v : u³ = v⁴ = 1, (uv)² = (vu)²⟩
has order 288, and so K has this presentation.
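As a sanity check, the stated properties of u and v can be confirmed numerically. The sketch below (Python/NumPy; the matrix entries are the ones printed above, as reconstructed from the source, so this is a consistency check rather than part of any proof) verifies the relations, the F-unitarity, and the order 288 of K by closing {u, v} under multiplication modulo the scalars ζ^ν I:

```python
import numpy as np

zeta = np.exp(2j * np.pi / 12)   # primitive 12-th root of unity
r = np.sqrt(3.0)                 # r = zeta + zeta^{-1}

# The Hermitian form F of (3.1) and the generators u, v of K.
F = np.array([[-r - 1, 1, 0], [1, 1 - r, 0], [0, 0, 1]], dtype=complex)
u = np.array([[zeta**3 + zeta**2 - zeta, 1 - zeta, 0],
              [zeta**3 + zeta**2 - 1, zeta - zeta**3, 0],
              [0, 0, 1]])
v = np.array([[zeta**3, 0, 0],
              [zeta**3 + zeta**2 - zeta - 1, 1, 0],
              [0, 0, 1]])

I3 = np.eye(3, dtype=complex)
mp = np.linalg.matrix_power

# Relations u^3 = v^4 = 1 and (uv)^2 = (vu)^2, and unitarity g* F g = F.
assert np.allclose(mp(u, 3), I3) and np.allclose(mp(v, 4), I3)
assert np.allclose(mp(u @ v, 2), mp(v @ u, 2))
for g in (u, v):
    assert np.allclose(g.conj().T @ F @ g, F)

# |K| = 288: closure of {u, v} modulo the scalars zeta^nu I.
def canon(g):
    # canonical key of gZ: minimum over the 12 scalar multiples of g
    keys = []
    for n in range(12):
        h = np.round((zeta**n * g).ravel(), 6)
        keys.append(tuple((x.real + 0.0, x.imag + 0.0) for x in h))
    return min(keys)

seen, frontier = {canon(I3)}, [I3]
while frontier:
    nxt = []
    for g in frontier:
        for h in (u, v):
            p = g @ h
            c = canon(p)
            if c not in seen:
                seen.add(c)
                nxt.append(p)
    frontier = nxt
print(len(seen))   # order of K
```

The closure computation identifies two matrices when they differ by one of the twelve scalars ζ^ν, i.e. it enumerates the image of ⟨u, v⟩ in Γ.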
Let us write simply 0 for the origin (0, 0) ∈ B(C2 ), and g.0 in place of (gZ).(0, 0).
Lemma 3.1. For the action of Γ on X, K is the stabilizer of 0.
Proof. It is easy to see that gZ ∈ Γ fixes 0 if and only if gZ has a matrix representative

    \begin{pmatrix} g11 & g12 & 0 \\ g21 & g22 & 0 \\ 0 & 0 & 1 \end{pmatrix},

for suitable gij ∈ Z[ζ]. Since the 2 × 2 block matrix in the upper left of F is definite
when r = +√3, it is now routine to determine all such gij.
The next step is to find g ∈ Γ \ K for which d(g.0, 0) is small, where d is the
hyperbolic metric on B(C²). Now

    cosh²(d(z, w)) = |1 − ⟨z, w⟩|² / ((1 − |z|²)(1 − |w|²)),    (3.3)

(see [1, Page 310] for example) where ⟨z, w⟩ = z1w̄1 + z2w̄2 and |z| = √(|z1|² + |z2|²)
for z = (z1, z2) and w = (w1, w2) in B(C²).
In particular, writing 0 for the origin in B(C²), and using g.0 = (g13/g33, g23/g33)
and |g13|² + |g23|² = |g33|² − 1 for g = (gij) ∈ U(2, 1), we see that

    cosh²(d(0, g.0)) = |g33|²    (3.4)

for g ∈ U(2, 1). Notice that for gZ ∈ Γ, the (3, 3)-entry of g is equal to the
(3, 3)-entry of the g̃ ∈ U(2, 1) defined above, and so (3.4) holds also for gZ ∈ Γ.
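The chain of coordinate changes g ↦ g′ = γ0gγ0⁻¹ ↦ g̃ = Dg′D⁻¹ and formula (3.4) can be checked numerically. In the sketch below (Python/NumPy) the F-unitary matrix used is the element b of the next display, with its entries as reconstructed there; the only assumptions are those reconstructed entries:

```python
import numpy as np

zeta = np.exp(2j * np.pi / 12)
r = np.sqrt(3.0)

F = np.array([[-r - 1, 1, 0], [1, 1 - r, 0], [0, 0, 1]], dtype=complex)
F0 = np.diag([1., 1., -1.]).astype(complex)
gamma0 = np.array([[1, 0, 0], [1, 1 - r, 0], [0, 0, 1]], dtype=complex)
D = np.diag([1, 1, np.sqrt(r - 1)]).astype(complex)

# an F-unitary matrix: the element b of this section (reconstructed entries)
b = np.array([[1, 0, 0],
              [-2*zeta**3 - zeta**2 + 2*zeta + 2, zeta**3 + zeta**2 - zeta - 1, -zeta**3 - zeta**2],
              [zeta**2 + zeta, -zeta**3 - 1, -zeta**3 + zeta + 1]])
assert np.allclose(b.conj().T @ F @ b, F)

# g unitary w.r.t. F  ==>  g~ = D gamma0 g gamma0^{-1} D^{-1} lies in U(2,1),
# and has the same (3,3)-entry as g.
gt = D @ gamma0 @ b @ np.linalg.inv(gamma0) @ np.linalg.inv(D)
assert np.allclose(gt.conj().T @ F0 @ gt, F0)
assert np.isclose(gt[2, 2], b[2, 2])

# (3.4): cosh^2 d(0, g.0) = |g33|^2, using (3.3) with w = 0.
z = gt[:2, 2] / gt[2, 2]                     # g.0 = (g13/g33, g23/g33)
cosh2 = 1.0 / (1.0 - np.linalg.norm(z)**2)   # (3.3) with w = 0
assert np.isclose(cosh2, abs(b[2, 2])**2)
print(np.arccosh(abs(b[2, 2])))              # d(b.0, 0) ≈ 1.2767
```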
The matrix

    b = \begin{pmatrix} 1 & 0 & 0 \\ -2ζ³-ζ²+2ζ+2 & ζ³+ζ²-ζ-1 & -ζ³-ζ² \\ ζ²+ζ & -ζ³-1 & -ζ³+ζ+1 \end{pmatrix}
is unitary with respect to F. We shall see below that u, v and b generate Γ, and use
the results of Section 2 to show that some relations they satisfy give a presentation
of Γ. This b was found by a computer search for g ∈ Γ \ K for which d(g.0, 0) is
small.
Notice that d(g.0, 0) is constant on each double coset KgK. Calculations using
(3.4) showed that amongst the 288 elements g ∈ Γ of the form bkb, k ∈ K,
there are ten different values of d(g.0, 0). Representatives γj of the 20 double
cosets KgK in which the g ∈ bKb lie were chosen. The smallest few |(γj)33|² and
the corresponding d(γj.0, 0) (rounded to 4 decimal places) are as follows:

    j    γj               |(γj)33|²    d(γj.0, 0)
    1    1                1            0
    2    b                √3 + 2       1.2767
    3    bu⁻¹b            2√3 + 4      1.6629
    4    bu⁻¹v⁻¹u⁻¹b      3√3 + 6      1.8778
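The |(γj)33|² column of the table can be recomputed directly from the matrices. A numerical sketch (Python/NumPy; the entries of u, v and b are the reconstructed ones, so this is a consistency check):

```python
import numpy as np

zeta = np.exp(2j * np.pi / 12)
u = np.array([[zeta**3 + zeta**2 - zeta, 1 - zeta, 0],
              [zeta**3 + zeta**2 - 1, zeta - zeta**3, 0],
              [0, 0, 1]])
v = np.array([[zeta**3, 0, 0],
              [zeta**3 + zeta**2 - zeta - 1, 1, 0],
              [0, 0, 1]])
b = np.array([[1, 0, 0],
              [-2*zeta**3 - zeta**2 + 2*zeta + 2, zeta**3 + zeta**2 - zeta - 1, -zeta**3 - zeta**2],
              [zeta**2 + zeta, -zeta**3 - 1, -zeta**3 + zeta + 1]])

ui, vi = np.linalg.inv(u), np.linalg.inv(v)
gammas = {2: b, 3: b @ ui @ b, 4: b @ ui @ vi @ ui @ b}

# |(gamma_j)_{33}|^2 and, by (3.4), d(gamma_j.0, 0) = cosh^{-1} |(gamma_j)_{33}|;
# this reproduces the table: sqrt(3)+2, 2*sqrt(3)+4, 3*sqrt(3)+6 and
# 1.2767, 1.6629, 1.8778.
for j, g in gammas.items():
    m2 = abs(g[2, 2])**2
    print(j, round(m2, 6), round(float(np.arccosh(np.sqrt(m2))), 4))
```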
We use Theorem 2.1 to show that our computer search has not missed any g ∈ Γ for
which d(g.0, 0) is small. Let S = K ∪ Kγ2K ∪ Kγ3K, consisting of the g ∈ Γ found
by the computer search to satisfy |g33|² ≤ 2√3 + 4. To verify that S generates Γ, we
need to make a numerical estimate. A somewhat longer direct proof that ⟨S⟩ = Γ
is given in the next section.
Proposition 3.1. For the given S ⊂ Γ, the normalized hyperbolic volume vol(FS )
of FS = {x ∈ B(C2 ) : d(x, 0) ≤ d(x, g.0) for all g ∈ S} satisfies vol(FS ) < 2/864.
The set S generates Γ.
Proof. Standard numerical integration methods show that (up to several decimal
place accuracy) vol(FS) equals 1/864, but all we need is that vol(FS) < 2/864. Let
us make explicit the normalization of the hyperbolic volume element in B(C²) which
makes the formula χ(Γ\B(C²)) = 3vol(F) true. For z ∈ B(C²), write

    t = tanh⁻¹|z| = ½ log((1 + |z|)/(1 − |z|)) = d(0, z),
    Θ = z/|z| ∈ S1(C²),
    dΘ = the usual measure on S1(C²), having total volume 2π²,
    dvol(z) = (2/π²) sinh³(t) cosh(t) dt dΘ.
Let Γ′ = ⟨S⟩. Then the Dirichlet fundamental domains F and F′ of Γ and Γ′ satisfy
F ⊂ F′ ⊂ FS. By [4, §8.2, the C11 case], vol(F) = 1/864. Let M = [Γ : Γ′]. Then
vol(F′) = M vol(F) (and vol(F′) = ∞ if M = ∞), so that M vol(F) = vol(F′) ≤
vol(FS) < 2vol(F) implies that M = 1 and Γ′ = Γ.
Proposition 3.2. For the given S ⊂ Γ, the value of r0 = sup{d(x, 0) : x ∈ FS}
is ½ d(γ3.0, 0) = ½ cosh⁻¹(√3 + 1).
Proof. We defer the proof of this result to the next section.
One may verify that if g, g′ ∈ S and gg′ ∉ S, then |(gg′)33|² ≥ 3√3 + 6. So
if r1 satisfies 2√3 + 4 < cosh²(r1) < 3√3 + 6, then S satisfies the conditions of
Theorem 2.1.
Remark 2. By Corollary 2.1, FS = F , so that vol(FS ) equals 1/864.
Having verified all the conditions of Theorem 2.1, we now know that S contains
all elements g ∈ Γ satisfying d(g.0, 0) ≤ r1 . The double cosets Kγj K, j = 1, 2, 3,
are symmetric because (buvu)²v = ζ⁻¹I, and (bu⁻¹)⁴ = I (see below). The sizes of
these KγjK are 288, 288²/4 and 288²/3, respectively, adding up to 48,672, because
{k ∈ K : γ2⁻¹kγ2 ∈ K} = ⟨v⟩, and {k ∈ K : γ3⁻¹kγ3 ∈ K} = ⟨u⟩. Proposition 2.1
gives a presentation of Γ which we now simplify.
Proposition 3.3. A presentation of Γ is given by the generators u, v and b and
the relations

    u³ = v⁴ = b³ = 1,   (uv)² = (vu)²,   vb = bv,   (buv)³ = (buvu)²v = 1.    (3.5)
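Each relation in (3.5) can be tested on the explicit matrices, bearing in mind that equality in Γ means equality of matrices up to a scalar ζ^ν I. A numerical sketch (Python/NumPy, with the entries as reconstructed earlier in this section):

```python
import numpy as np

zeta = np.exp(2j * np.pi / 12)
u = np.array([[zeta**3 + zeta**2 - zeta, 1 - zeta, 0],
              [zeta**3 + zeta**2 - 1, zeta - zeta**3, 0],
              [0, 0, 1]])
v = np.array([[zeta**3, 0, 0],
              [zeta**3 + zeta**2 - zeta - 1, 1, 0],
              [0, 0, 1]])
b = np.array([[1, 0, 0],
              [-2*zeta**3 - zeta**2 + 2*zeta + 2, zeta**3 + zeta**2 - zeta - 1, -zeta**3 - zeta**2],
              [zeta**2 + zeta, -zeta**3 - 1, -zeta**3 + zeta + 1]])

I3 = np.eye(3, dtype=complex)
mp = np.linalg.matrix_power

def is_scalar(g):
    # g is a scalar multiple of the identity, i.e. trivial in Gamma
    return np.allclose(g, g[0, 0] * I3)

# the relations (3.5), up to scalars where necessary
assert np.allclose(mp(u, 3), I3) and np.allclose(mp(v, 4), I3) and np.allclose(mp(b, 3), I3)
assert np.allclose(mp(u @ v, 2), mp(v @ u, 2))
assert np.allclose(v @ b, b @ v)
assert is_scalar(mp(b @ u @ v, 3))            # (buv)^3 = 1 in Gamma
assert is_scalar(mp(b @ u @ v @ u, 2) @ v)    # (buvu)^2 v = 1 in Gamma
```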
Proof. By Proposition 2.1, the set S0∗ = {g ∈ Γ : d(g.0, 0) < 2r0} of generators,
and the relations g1g2g3 = 1, where g1, g2, g3 ∈ S0∗, give a presentation of Γ. By
Theorem 2.1, S0∗ is the union of the two double cosets Kγ1K = K and Kγ2K. So
these relations have the form (k1′γi1k1′′)(k2′γi2k2′′)(k3′γi3k3′′) = 1, where kν′, kν′′ ∈ K
and i1, i2, i3 ∈ {1, 2}. Using the known presentation of K, and cyclic permutations,
the relations of the form γi1k1γi2k2γi3k3 = 1, where 1 ≤ i1 ≤ i2, i3 ≤ 2, are
sufficient to give a presentation. After finding a word in u and v for each such ki,
we obtained a list of words in u, v and b coming from these relations. Magma's
routine Simplify can be used to complete the proof, but it is not hard to see this
more directly as follows. When i1 = 1, we need only include in the list of words
all those coming from the relations of the form (i): bkb = k′, for k, k′ ∈ K. When
i1 = 2 (so that i2 = i3 = 2 too), we need only include, for each k1 ∈ K such that
bk1b ∈ KbK, a single pair (k2, k3) such that bk1bk2bk3 = 1, since the other such
relations follow from this one and the relations (i). The only relations of the form (i)
are the relations bv^ν uvub = v^{ν−1}u⁻¹v⁻¹u⁻¹, ν = 0, 1, 2, 3, which follow from the
relations vb = bv and (buvu)²v = 1 in (3.5). Next, matrix calculations show that
there are 40 elements k ∈ K such that bkb ∈ KbK, giving 40 relations bkb = k′bk′′.
We need only show that all of these are deducible from the relations given in (3.5).
Using bv = vb, any equation bkb = k′bk′′ gives equations bv^i kv^j b = v^i k′bk′′v^j. So
we needed only deduce from (3.5) the five relations

    b1b = uvuvbuvu,   bub = ubu,   buvu⁻¹b = v⁻¹u⁻¹v⁻¹bu⁻¹,
    buvub = (vu)⁻²,   and   buvu⁻¹vub = v²u⁻¹v⁻¹u⁻¹bu⁻¹v⁻¹u⁻¹.
Firstly, (buvu)²v = 1 and (vu)² = (uv)² imply that b⁻¹ = vuvubuvu, and this
and b³ = 1 imply the first relation. The relations (bvu)³ = 1 and bv = vb imply
that bub = (vubvuv)⁻¹, and this and b⁻¹ = vuvubuvu give bub = ubu. To get the
third relation, use vb = bv to see that v(buvu⁻¹b)u equals

    b(vuvu)ubu = b(vu)²bub = b(vu)²b(vu)²(vu)⁻²ub = v(uv)⁻²ub = u⁻¹v⁻¹b.
The fourth relation is immediate from (buvu)²v = 1 and bv = vb. Finally, from
(uv)² = (vu)² and our formula for b⁻¹ we have

    (uvu)⁻¹b⁻¹(uvu) = (uvu)⁻¹(uvuvbuvu)(uvu) = vbuvu²vu = vbk

for k = uvu⁻¹vu, using u³ = 1. So vbk has order 3. Hence bkb = v⁻¹(kvbv)⁻¹v⁻¹,
and the fifth relation easily follows.
As mentioned above, (bu⁻¹)⁴ = 1. By Proposition 3.3, this is a consequence of
the relations in (3.5). Explicitly,

    (bu⁻¹)⁴ = bu(ubu)(ubu)(ubu)u = bu(bub)(bub)(bub)u = b(ubu)bbubb(ubu)
            = b(bub)bbubb(bub) = bbuuub = 1.
Let us record here the connection between Γ and a Deligne–Mostow group whose
presentation (see Parker [3]) is

    Γ3,4 = ⟨J, R1, A1 : J³ = R1³ = A1⁴ = 1, A1 = (JR1⁻¹J)², A1R1 = R1A1⟩.
Using the fact that the orbifold Euler characteristics of Γ3,4 \B(C2 ) and Γ\B(C2 )
are both equal to 1/288, several experts, including John Parker, Sai-Kee Yeung and
Martin Deraux, knew that Γ and Γ3,4 are isomorphic. The following proof based on
presentations is a slight modification of one communicated to us by John Parker.
It influenced our choice of the generators u and v for K.
Proposition 3.4. There is an isomorphism ψ : Γ → Γ3,4 such that

    ψ(u) = JR1J⁻¹,   ψ(v) = A1,   and   ψ(b) = R1.    (3.6)

Its inverse satisfies ψ⁻¹(J) = buv, ψ⁻¹(R1) = b and ψ⁻¹(A1) = v.
Proof. Setting R2 = JR1J⁻¹, we have R1R2A1 = J and A1JR2 = JR1⁻¹J. So

    (ψ(u)ψ(v))² = (R1⁻¹J)² = R1⁻¹·A1JR2 = A1R1⁻¹JR2 = A1R2A1R2 = (ψ(v)ψ(u))².

Next, ψ(b)ψ(u)ψ(v) = R1R2A1 = J, which implies that (ψ(b)ψ(u)ψ(v))³ = 1. Now
(ψ(b)ψ(u)ψ(v)ψ(u))²ψ(v) = (R1R2A1R2)²A1 = (JR2)²A1 = JR2JR1⁻¹J = 1. So
there is a homomorphism ψ : Γ → Γ3,4 satisfying (3.6). We similarly check that
we have a homomorphism ψ̃ : Γ3,4 → Γ mapping J, R1 and A1 to buv, b and v,
respectively, and that ψ and ψ̃ are mutually inverse.
We now exhibit a torsion-free subgroup of Γ having index 864. It has three
generators, all in KbK. The elements of K are most neatly expressed if we use not
only the generators u and v, but also j = (uv)², which is the diagonal matrix with
diagonal entries ζ, ζ and 1, and which generates the center of K.
Lemma 3.2. The non-trivial elements of finite order in Γ have order dividing 24.
(i) Any element of order 2 is conjugate to one of v², j⁶ or (bu⁻¹)².
(ii) Any element of order 3 is conjugate to one of u, j⁴, uj⁴, buv, or their inverses.
Proof. By [1, Corollary II.2.8(1)] for example, any g ∈ U (2, 1) of finite order fixes
at least one point of B(C2 ), and so in particular this holds for any g ∈ Γ of finite
order. Conjugating g, we may assume that the fixed point is in the fundamental
domain F of Γ, and so d(g.0, 0) ≤ 2r0 . Thus g lies in K ∪ Kγ2 K ∪ Kγ3 K, and so
is conjugate to an element of K ∪ γ2 K ∪ γ3 K. Checking these 864 elements, we
see that g must have order dividing 24. After listing the elements of order 2 and 3
amongst them, routine calculations verify (i) and (ii).
Proposition 3.5. The elements

    a1 = vuv⁻¹j⁴buvj²,   a2 = v²ubuv⁻¹uv²j   and   a3 = u⁻¹v²uj⁹bv⁻¹uv⁻¹j⁸

generate a torsion-free subgroup Π of index 864, for which Π/[Π, Π] ≅ Z².

Proof. Using our presentation of Γ, Magma's Index command verifies that Π has
index 864 in Γ, and the AbelianQuotientInvariants command verifies that it has
abelianization Z².
We now check that Π is torsion-free. Suppose that Π contains an element π ≠ 1
of finite order. By Lemma 3.2, we can assume that π has order 2 or 3. So for one
of the elements t listed in (i) and (ii) of Lemma 3.2, there is a g ∈ Γ such that
gtg⁻¹ ∈ Π. Using Index, one verifies that the 864 elements bμk, μ = 0, 1, 2, k ∈ K,
form a transversal for Π in Γ. So we may assume that g = bμk. But now Index
verifies that none of the elements bμkt(bμk)⁻¹ is in Π.
We conclude this section by mentioning some other properties of Π.
Let us first note that Π cannot be lifted to a subgroup of SU(2, 1). The determinants
of a1, a2 and a3 are ζ³, ζ³ and −1, respectively, and so the aν could be replaced
by ζ⁻¹ω^{i1}a1, ζ⁻¹ω^{i2}a2 and −ω^{i3}a3, where ω = e^{2πi/3} and i1, i2, i3 ∈ Z, to obtain
generators with determinant 1. But the relation a2⁻³a3³a1⁻¹a2⁻¹a3⁻¹a2a3a1a2a1a3a1 = ζ⁻⁴I is
unchanged by any choice of the integers i1, i2 and i3, as the number of aν's appearing
in the product on the left is equal to the number of aν⁻¹'s, for each ν. So we get
a relation in P U (2, 1) but not in SU (2, 1). It was found using Magma’s Rewrite
command, which derives a presentation of Π from that of Γ̄.
Magma shows that the normalizer of Π in Γ contains Π as a subgroup of index 3,
and is generated by Π and j⁴. One may verify that

    j⁴a1j⁻⁴ = ζ³a3³a2⁻³a3a1,   j⁴a2j⁻⁴ = ζ⁻¹a3⁻¹,   and
    j⁴a3j⁻⁴ = ζ⁻¹a1⁻¹a2⁻¹a1⁻¹a2⁻¹a1a2²a1a3⁻¹a1⁻¹a2a1.
Let us show that j⁴ induces a non-trivial action on Π/[Π, Π]. By Proposition 3.5,
there is an isomorphism ϕ : Π/[Π, Π] → Z², so we have a surjective homomorphism
f : Π → Π/[Π, Π] ≅ Z². Using the relation a2²a1⁻¹a2⁻¹a1a3a1a2⁻³a3³a1a3³a1 = ζ³I, we
see that 3f(a1) − 2f(a2) + 7f(a3) = (0, 0), and since Π/[Π, Π] ≅ Z², this must be
the only condition on the f(aν). So we can choose the isomorphism ϕ so that f
maps a1, a2 and a3 to (1, 3), (−2, 1) and (−1, −1), respectively, and then
maps a1 , a2 and a3 to (1, 3), (−2, 1) and (−1, −1), respectively, and then
0 −1
4
−4
f (π) = (m, n)
=⇒ f (j πj ) = (m, n)
for all π ∈ Π.
(3.7)
1 −1
Next consider the ball quotient X = Π\B(C2 ). Now j 4 induces an automorphism
of X. Let us show that this automorphism has precisely 9 fixed points.
Proposition 3.6. The automorphism of X induced by j 4 has exactly 9 fixed points.
These are the three points Π(bµ .0), µ = 0, 1, −1, and six points Π(hi .z0 ), where
hi ∈ Γ for i = 1, . . . , 6, and where z0 ∈ B(C2 ) is the unique fixed point of buv.
Proof. If Π(j⁴.z) = Πz, then πj⁴.z = z for some π ∈ Π. This implies that πj⁴
has finite order. It cannot be trivial, since Π is torsion-free. If π ∈ Π, then
π′ = (πj⁴)³ = (π)(j⁴πj⁸)(j⁸πj⁴) is also in Π. Since the possible orders of the
elements of Γ are the divisors of 24, if πj⁴ has finite order, then 1 = (πj⁴)²⁴ = (π′)⁸,
so π′ must be 1, so that (πj⁴)³ must be 1. So πj⁴ must have order 3. So for one
of the eight elements t listed in Lemma 3.2(ii), πj⁴ = gtg⁻¹ for some g ∈ Γ. Thus
gtg⁻¹j⁻⁴ ∈ Π. Since the elements bμk, μ = 0, 1, −1 and k ∈ K, form a set of coset
representatives for Π in Γ, and since j⁴ normalizes Π, we can assume that g = bμk
for some μ and k.
When t = j⁴, we have bμktk⁻¹b⁻μj⁻⁴ = bμj⁴b⁻μj⁻⁴, independent of k. We find
that these three elements are in Π. Explicitly, bμj⁴b⁻μj⁻⁴ = πμ for

    π0 = 1,   π1 = ζ⁻⁴a2a1⁻²a3⁻³a1⁻¹   and   π−1 = a2⁻¹a1a3²a1,    (3.8)

and these equations mean that the three points Π(bμ.0) are fixed by j⁴.
Write πj⁴ = gtg⁻¹ for some g ∈ Γ, where t is one of the eight elements listed
in Lemma 3.2(ii). In the notation of (3.8), and writing g = π′bμk, where π′ ∈ Π,
μ ∈ {0, 1, −1}, and k ∈ K, we get

    πj⁴ = π′bμktk⁻¹b⁻μπ′⁻¹ = π′bμktk⁻¹(j⁻⁴b⁻μπμj⁴)π′⁻¹ = π′(bμk)(tj⁻⁴)(bμk)⁻¹(πμj⁴π′⁻¹j⁻⁴)j⁴.

So (bμk)(tj⁻⁴)(bμk)⁻¹ is in Π, and therefore either t = j⁴ or tj⁻⁴ has infinite
order. In particular, apart from t = j⁴, our t cannot be in K, and so must be buv
or (buv)⁻¹.
We find that bµ ktk −1 b−µ j −4 ∈ Π never occurs when t = (buv)−1 . For t = buv,
we find that bµ ktk −1 b−µ j −4 ∈ Π for only 18 pairs (µ, k). This means that j 4 fixes
Π(bµ k.z0 ) for these 18 (µ, k)’s. If (µ, k) satisfies bµ ktk −1 b−µ j −4 ∈ Π, then so does
(µ, kj 4 ), since we can write bµ j 4 = πµ j 4 bµ for some πµ ∈ Π, as we have just seen.
Moreover, Π(bµ kj 4 .z0 ) = Π(bµ k.z0 ), since kj 4 = j 4 k and so
Π(bµ kj 4 .z0 ) = Π(πµ j 4 bµ k.z0 ) = Π(j 4 bµ k.z0 ) = Π(bµ k.z0 ).
So we need only consider six of the (μ, k)'s, and correspondingly setting

    h1 = b⁻¹vuj³,   h2 = u⁻¹vj,   h3 = buv²j²,   h4 = b⁻¹v²uj³,   h5 = vj²,   h6 = bvu⁻¹v,

we have hi(buv)hi⁻¹j⁻⁴ = πi′ ∈ Π for i = 1, …, 6; explicitly,

    π1′ = ζ⁴a2²a1a3³,   π2′ = j⁸a1j⁴,   π3′ = ζ²j⁸a1a3²j⁴a2a1a2⁻²a1⁻¹,
    π4′ = ζ⁻⁵a3³a1²a3³,   π5′ = ζ⁻¹j⁴a1⁻¹a2⁻¹j⁸,   π6′ = ζa2a1⁻¹.
The six points Π(hi.z0) are distinct, as we see by checking that (a) the nontrivial
g ∈ Γ fixing z0 are just (buv)±1, and (b) (bμ′k′)(buv)ǫ(bμk)⁻¹ is not in Π for
ǫ = 0, 1, 2, when (μ′, k′) and (μ, k) in the above list of six pairs are distinct.
Finally, we show that Π is a congruence subgroup of Γ.
The prime 3 ramifies in Q(ζ) (as does 2), and F9 = Z[ζ]/rZ[ζ] is a field of order 9.
Let ρ : Z[ζ] → F9 be the natural map, and write i for ρ(ζ). Then i² = −1, and
F9 = F3(i). Applying ρ to matrix entries, we map Γ to a group of matrices over F9,
modulo ⟨i⟩. The image ρ(g) of any g ∈ M3×3(Z[ζ]) unitary with respect to the F
of (3.1) is unitary with respect to ρ(F), and so if we conjugate by C = ρ(γ0), where
γ0 is as in (3.2), then ρ′(g) = Cρ(g)C⁻¹ is unitary in the “usual” way.
So ρ′ maps Γ to the group PU(3, F9) of unitary matrices with entries in F3(i),
modulo scalars. This map is surjective. In fact, ρ′(Γ1) = PU(3, F9), where Γ1
is the normal index 3 subgroup of Γ consisting of the gZ ∈ Γ having a matrix
representative g of determinant 1. One may check that Γ1 = ⟨v, bu⁻¹, u⁻¹b⟩, and
that ⟨ρ′(v), ρ′(bu⁻¹), ρ′(u⁻¹b)⟩ = PU(3, F9).
The given generators a1, a2 and a3 of Π have determinants ζ³, ζ³ and −1, respectively,
and so Π ⊂ Γ1. Now −ζa2 and −a1a2 are mapped by ρ′ to the matrices

    R = \begin{pmatrix} -i & -i-1 & i \\ 1 & i-1 & -1 \\ i-1 & 0 & i-1 \end{pmatrix}   and   M = \begin{pmatrix} i & -i & i+1 \\ -i-1 & i & -i \\ i & -i-1 & i \end{pmatrix},

respectively, which satisfy R⁷ = I, M³ = I and MRM⁻¹ = R². Moreover, −a3
is mapped to R⁻¹. Hence Π is mapped onto the subgroup ⟨R, M⟩ of PU(3, F9),
which has order 21. Now |PU(3, F9)| = 6048 = 288 × 21, and so the conditions on
a gZ ∈ Γ to be in Π are that gZ ∈ Γ1 and that ρ′(g) ∈ ⟨R, M⟩.
4. Calculation of r0 .
For any symmetric set S ⊂ U (2, 1), the following lemma simplifies the description
of the set FS defined in (2.1) in the case X = B(C2 ).
Lemma 4.1. If g ∈ U(2, 1) and z = (z1, z2) ∈ B(C²), then d(0, z) ≤ d(0, g.z) if
and only if |g31z1 + g32z2 + g33| ≥ 1.

Proof. Since U(2, 1) acts transitively on B(C²), we may write z = h.0 for some
h ∈ U(2, 1). So by (3.4), d(0, z) ≤ d(0, g.z) if and only if |h33| ≤ |(gh)33|. Since
zν = hν3/h33 for ν = 1, 2, we have (gh)33 = (g31z1 + g32z2 + g33)h33, and the
result follows.
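Lemma 4.1 lends itself to a direct numerical test: for a matrix in U(2, 1) (below, the U(2, 1) version of the element b of Section 3, assuming the entries reconstructed there) and random sample points z, the inequality d(0, z) ≤ d(0, g.z) computed from the metric (3.3) holds exactly when |g31z1 + g32z2 + g33| ≥ 1. A sketch:

```python
import numpy as np

zeta = np.exp(2j * np.pi / 12)
r = np.sqrt(3.0)
gamma0 = np.array([[1, 0, 0], [1, 1 - r, 0], [0, 0, 1]], dtype=complex)
D = np.diag([1, 1, np.sqrt(r - 1)]).astype(complex)
b = np.array([[1, 0, 0],
              [-2*zeta**3 - zeta**2 + 2*zeta + 2, zeta**3 + zeta**2 - zeta - 1, -zeta**3 - zeta**2],
              [zeta**2 + zeta, -zeta**3 - 1, -zeta**3 + zeta + 1]])
g = D @ gamma0 @ b @ np.linalg.inv(gamma0) @ np.linalg.inv(D)   # lies in U(2,1)

def dist(z, w):
    # the metric (3.3) on B(C^2)
    num = abs(1 - (z[0]*w[0].conjugate() + z[1]*w[1].conjugate()))**2
    den = (1 - np.linalg.norm(z)**2) * (1 - np.linalg.norm(w)**2)
    return np.arccosh(np.sqrt(num / den))

def act(g, z):
    h = g @ np.array([z[0], z[1], 1.0])
    return h[:2] / h[2]

zero = np.zeros(2, dtype=complex)
rng = np.random.default_rng(0)
for _ in range(100):
    z = rng.uniform(-0.4, 0.4, 2) + 1j * rng.uniform(-0.4, 0.4, 2)
    lhs = dist(zero, z) <= dist(zero, act(g, z))
    rhs = abs(g[2, 0]*z[0] + g[2, 1]*z[1] + g[2, 2]) >= 1
    assert lhs == rhs
```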
Now let Γ and S = K ∪ KbK ∪ Kbu⁻¹bK ⊂ Γ be as in Section 3. Write r
for +√3. For 1 < ρ < (r + 1)√2, let Uρ denote the union of the 12 open discs
in C of radius 1 with centers ρζ^λ, λ = 0, 1, …, 11. Let Bρ denote the bounded
component of C \ Uρ. The conditions on ρ ensure that Bρ exists. See the diagram
below.
Let B1 and B2 denote Bρ for ρ = (r + 1)/√2 and ρ = r + 1, respectively. In
the diagram, ρ′ and ρ′′ are the two solutions t > 0 of |te^{iπ/12} − ρ| = 1. When
ρ = (r + 1)/√2, we have ρ′ = 1 and ρ′′ = r + 1. When ρ = r + 1, we have
ρ′ = (r + 1)/√2 and ρ′′ = r(r + 1)/√2.
Write κ for the square root of r − 1.
Lemma 4.2. Let (w1, w2) ∈ C². Then (w1, w2) ∈ FS if and only if
(i) u1w1 + u2w2 ∈ B1 for each of the pairs (u1, u2) = (√(r+1), 0), (0, √(r+1))
and (κ⁻¹e^{−iπ/12}, κ⁻¹ζ^{3ν}e^{−iπ/12}) for ν = 0, 1, 2, 3, and
(ii) u1w1 + u2w2 ∈ B2 for each of the pairs (u1, u2) = (κ⁻¹, κ⁻¹(ζ+1)ζ^{1+3ν})
and (κ⁻¹(ζ+1)ζ^{1+3ν}, κ⁻¹) for ν = 0, 1, 2, 3,
in which case |w1|, |w2| ≤ 1/√(r+1).
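The conditions of Lemma 4.2 are straightforward to evaluate numerically. As an illustration, the sketch below (Python/NumPy, with the reconstructed matrix entries) tests them for the point m = (r − 1)γ3.0 used in Lemma 4.3 below; only the necessary condition that each u1w1 + u2w2 avoids the twelve open discs is checked here:

```python
import numpy as np

zeta = np.exp(2j * np.pi / 12)
r = np.sqrt(3.0)
kappa = np.sqrt(r - 1)

u = np.array([[zeta**3 + zeta**2 - zeta, 1 - zeta, 0],
              [zeta**3 + zeta**2 - 1, zeta - zeta**3, 0],
              [0, 0, 1]])
b = np.array([[1, 0, 0],
              [-2*zeta**3 - zeta**2 + 2*zeta + 2, zeta**3 + zeta**2 - zeta - 1, -zeta**3 - zeta**2],
              [zeta**2 + zeta, -zeta**3 - 1, -zeta**3 + zeta + 1]])
gamma0 = np.array([[1, 0, 0], [1, 1 - r, 0], [0, 0, 1]], dtype=complex)
D = np.diag([1, 1, kappa]).astype(complex)

def ball_point(g):                      # g.0 for the U(2,1) version of g
    gt = D @ gamma0 @ g @ np.linalg.inv(gamma0) @ np.linalg.inv(D)
    return gt[:2, 2] / gt[2, 2]

m = (r - 1) * ball_point(b @ np.linalg.inv(u) @ b)   # midpoint of [0, gamma3.0]
w1, w2 = m

e = np.exp(-1j * np.pi / 12)
pairs1 = [(np.sqrt(r + 1), 0), (0, np.sqrt(r + 1))] + \
         [(e / kappa, zeta**(3*nu) * e / kappa) for nu in range(4)]
pairs2 = [(1/kappa, (zeta + 1) * zeta**(1 + 3*nu) / kappa) for nu in range(4)] + \
         [((zeta + 1) * zeta**(1 + 3*nu) / kappa, 1/kappa) for nu in range(4)]

def min_gap(pt, rho):                   # min distance to the 12 disc centres
    return min(abs(pt - rho * zeta**lam) for lam in range(12))

for (u1, u2), rho in [(p, (r + 1)/np.sqrt(2)) for p in pairs1] + \
                     [(p, r + 1) for p in pairs2]:
    assert min_gap(u1*w1 + u2*w2, rho) >= 1 - 1e-9
```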
[Diagram: the twelve discs of radius 1 with centers ρζ^λ, the bounded component Bρ, and the radii ρ′ and ρ′′ marked along the ray through e^{iπ/12}.]
Proof. Given w = (w1 , w2 ) ∈ B(C2 ), to verify that w ∈ FS , we must show that
d(0, w) ≤ d(0, (bk).w) and that d(0, w) ≤ d(0, (bu−1 bk).w) for all k ∈ K. Since b
commutes with v, and bu−1 b commutes with u, we must check 288/4 + 288/3 = 168
conditions.
Let γ0 be as in (3.2), and let D, as before, be the diagonal matrix with diagonal
entries 1, 1 and κ. The g ∈ U (2, 1) to which we apply Lemma 4.1 are the matrices
(bk)˜= Dγ0 bkγ0−1 D−1 and (bu−1 bk)˜= Dγ0 bu−1 bkγ0−1 D−1 , where k ∈ K. Now
(bk)˜3i = κ(γ0 bkγ0−1 )3i
for
i = 1, 2,
and (bk)˜33 = (γ0 bkγ0−1 )33 ,
and similarly with b replaced by bu−1 b. Note also that for λ ∈ Z,
(γ0 bkj λ γ0−1 )3i = (γ0 bkγ0−1 )3i ζ λ for i = 1, 2, and (γ0 bkj λ γ0−1 )33 = (γ0 bkγ0−1 )33 ,
and similarly with b replaced by bu−1 b. So the conditions for w = (w1 , w2 ) ∈ FS
to hold have the form
|κ(g31 w1 + g32 w2 )ζ λ + g33 | ≥ 1 for λ = 0, . . . , 11,
for 6 matrices g of the form γ0 bkγ0−1 , and 8 matrices g of the form γ0 bu−1 bkγ0−1 .
If g = γ0 bkγ0−1 , then g33 = (ζ + 1)/ζ, and if g = γ0 bu−1 bkγ0−1 , then g33 = r + 1.
By taking k = uv 2 u−1 j, j −2 , and uv 2−ν j 3(ν−1) , for ν = 0, 1, 2, 3, respectively, we
get from g = γ0 bkγ0−1 the triples (g31 , g32 , g33 ) equal to ((ζ + 1)ζ −1 , 0, (ζ + 1)ζ −1 ),
(0, (ζ + 1)ζ −1 , (ζ + 1)ζ −1 ) and ((r + 1)ζ −1 /2, (r + 1)ζ −1 ζ 3ν /2, (ζ + 1)ζ −1 ). Using
√ eiπ/12 and κ(r + 1)/2 = κ−1 , and replacing λ by 6 − λ, we see that
ζ + 1 = r+1
2
the conditions coming from the six g√of the form γ0 bkγ0−1 are just the conditions
u1 w1 + u2 w2 6∈ Uρ for
√ ρ = (r + 1)/ 2 for the six√(u1 , u2 ) listed in (i). Taking
the case (u1 , u2 ) = ( r + 1,
√0), if u1 w1 + u2 w2 = r + 1 w1 is in the unbounded
component of √
C \ Uρ , then r + 1 |w1 | ≥ r + 1 (since ρ′′ equals r + 1 in this
√ case),
2
,
w
)
∈
B(C
).
So
r + 1 w1
and so |w1 | ≥ r + 1 > 1, which is impossible for (w
√1 2
′
is in the bounded component
B
,
and
so
|w
|
≤
1/
r
+
1
(since
ρ
equals
1
in this
ρ
1
√
case). Similarly, |w2 | ≤ 1/ r + 1 for all (w1 , w2 ) ∈ FS .
By taking k = v 1−ν j 3ν−1 , and k = vu−1 v 2+ν j 9 , for ν = 0, 1, 2, 3, respectively,
we get from g = γ0 bu−1 bkγ0−1 the triples (g31 , g32 , g33 ) equal to ((r + 1)/2, (r +
1)(ζ + 1)ζ 1+3ν /2, r + 1) and ((r + 1)(ζ + 1)ζ 1+3ν /2, (r + 1)/2, r + 1), ν = 0, 1, 2, 3.
Replacing λ by 6 − λ, we see that the conditions coming from the eight g of the
FINDING GENERATORS AND RELATIONS
15
form γ0 bu−1 bkγ0−1 are just the conditions u1 w1 +√u2 w2 6∈ Uρ for ρ = r + 1 for the
eight (u1 , u2 ) listed in (ii). Using |w1 |, |w2 | ≤ 1/ r + 1 for (w1 , w2 ) ∈ FS , we see
that u1 w1 + u2 w2 is in the bounded component Bρ of C \ Uρ in each case.
So calculation of r0 in this case is equivalent to calculation of the maximum
value ρ0 , say, of |w| on the set of w = (w1 , w2 )∈ C2 satisfying the conditions (i)
1+ρ0
. As we have seen, |w1 |, |w2 | ≤
and (ii) in Lemma 4.2, and r0 = 12 log 1−ρ
0
√
√
1/ r + 1 for (w1 , w2 ) ∈ FS . So FS is compact, and ρ0 ≤ r − 1.
We can now show that the value of r0 is 12 d(γ3 .0, 0) = 21 cosh−1 (r + 1), where
γ3 = bu−1 b. We first prove that this is a lower bound for r0 .
Lemma 4.3. For the above S, we have r0 ≥ 12 d(γ3 .0, 0) =
p
is, ρ0 ≥ (r − 1) r/2.
1
2
cosh−1 (r + 1). That
Proof. Consider the geodesic [0, γ3 .0] from 0 to γ3 .0, and let m be the point
on [0, γ3 .0] equidistant between 0 and γ3 .0. Let us show that m ∈ FS . If m 6∈ FS ,
there is a g ∈ S so that d(g.0, m) < d(0, m). Now g.0 6= 0, so that g 6∈ K. Also,
d(g.0, 0) ≤ d(g.0, m) + d(m, 0) < 2d(m, 0) = d(γ3 .0, 0),
and so g 6∈ Kγ3 K. So g must be in Kγ2 K = KbK. Since m ∈ [0, γ3 .0],
d(g.0, γ3 .0) ≤ d(g.0, m) + d(m, γ3 .0) < d(0, m) + d(m, γ3 .0) = d(0, γ3 .0),
so that g −1 γ3 ∈ K ∪ KbK. Now g −1 γ3 6∈ K, since otherwise g.0 = γ3 .0, so that m
is closer to γ3 .0 than to 0. Since KbK is symmetric, we have γ3−1 g ∈ KbK. Thus
g must be in G = KbK ∩ γ3 KbK. One may verify that G = (huibK) ∪ (huib−1 K).
Since u is in K and commutes with γ3 , it fixes [0, γ3 .0], and so d(g.0, m) is constant
on both double cosets huibK and huib−1 K. Note that uγ3−1 = γ3 u−1 is an element
in Γ of order 2 which interchanges 0 and γ3 .0, and so fixes m. The map f : g 7→
uγ3−1 g is an involution of G, and d(f (g).0, m) = d(g.0, m). Also, f (b) = ub−1 u, so
that f interchanges the two double cosets, and so d(g.0, m) is constant on G. So to
show the result, we need only check that
not hold. Now
√ d(b.0, m) < d(0, m) does √
γ3 .0 = (z1 , z2 ) for z1 = −(r − 1)ζ 2 /(2 r − 1) and z2 = (i +p1)ζ 2 /(2 r − 1). Write
mt =p(tz1 , tz2 ) for 0 ≤ t ≤ 1. Then m = mt for t = (1 − 1 − |γ3 .0|2 )/|γ3 .0|2 =
(1 − 1 − r/2 )/(r/2) = r − 1. Some routine calculations show that b.m = m, and
so d(b.0, m) = d(0, m). So m ∈ FS , and r0 ≥ d(0, m) = 12 d(0, γ3 .0).
Proposition 4.1. If (w1 , w2 ) ∈ FS , then |w1 |2 + |w2 |2 ≤ 2r − 3 = 2r (r − 1)2 .
Proof. Let FS ∗ = {z ∈ B(C2 ) : d(0, z) ≤ d(g.0, z) for all g ∈ S ∗ } for S ∗ = K∪KbK.
Since S ∗ ⊂ S, we have FS ⊂ FS ∗ . We shall in fact show that |w1 |2 + |w2 |2 ≤ 2r − 3
for (w1 , w2 ) ∈ FS ∗ .
Suppose that w = (w1 , w2 ) ∈ FS ∗ and |w1 |2 + |w2 |2 is maximal. Then d(0, w) =
d(0, g.w) for some g ∈ KbK. Replacing w by k.w for some k ∈ K, if necessary,
we may suppose that g = buv 2 u−1 j 7 . Let g̃ = Dγ0 gγ0−1 D−1 ∈ U (2, 1) for this g.
−1
Since g̃31 = −κ(ζ + 1)ζ −1 , g̃32 = 0 and
4.1 shows that
√
√ g̃33 = (ζ + 1)ζ , Lemma
r
−
1
w
−
1|.
Since
r
+
1
w1 ∈ B1 , this
1 = |g̃31 w1 +
g̃
w
+
g̃
|
=
|ζ
+
1|
|
1
33
√ 32 2
means that r + 1 w1 is on the rightmost arc of the boundary of B1 . That is,
1
1
−√
e−iθ for some θ ∈ [−π/12, π/12].
w1 = √
r−1
r+1
Fixing w1 , we see from Lemma 4.2(i) that w2 must lie on or outside
√ various circles,
including, for ν = 0, 1, 2, 3 and ǫ = ±, the circles Cν,ǫ of radius r − 1 and center
√
√
αν,ǫ = −iν w1 − r + 1 eǫiπ/12 = iν eǫiπ/4 + e−iθ / r + 1 .
In the following diagram, we have taken θ = π/24:
16
DONALD I. CARTWRIGHT TIM STEGER
C1,+
C1,−
C2,−
C0,+
p
•
0
C2,+
P
q
C0,−
Q
C3,−
C3,+
Let Uǫ (θ) denote the union of the four open discs bounded by the circles Cν,ǫ ,
ν = 0, 1, 2, 3. Using 0 < cos(θ+ǫπ/4) < 1, we see that C\Uǫ (θ) has two components,
the bounded one containing 0. So the complement of U (θ) = U+ (θ) ∪ U− (θ) has
a bounded component containing 0. The set U (θ) is obviously invariant under the
rotations z 7→ iλ z, λ = 0, 1, 2, 3. It is also invariant under the reflection Rν,ǫ in the
line through 0 and αν,ǫ , for each ν and ǫ. For if 0 6= α ∈ C, the reflection in the
line through 0 and α is the map Rα : z 7→ αz̄/ᾱ. It is then easy to check that
Rν,ǫ (αν ′ ,ǫ′ ) = αν ′′ ,ǫ′
for ν ′′ = 2ν − ν ′ + (ǫ − ǫ′ )/2 (mod 4).
Let p and P be the points of intersection of C0,+ and C0,− . Then R0,− (C0,+ ) = C3,+
and R0,− (C0,− ) = C0,− , so that R0,− (p) and R0,− (P ) are the points q and Q of
intersection of C0,− and C3,+ . In particular, |q| = |p| and |Q| = |q|.
It is now clear that |z| ≤ |p| for all z in the bounded component of C \ U (θ), and
that |z| ≥ |P | for all z in the unbounded component.
We now evaluate |p| and |P |. Let z ∈ C0,+ ∩ C0,− . Then
z + w1 −
√
√
√
r + 1 eiπ/12 = r − 1 = z + w1 − r + 1 e−iπ/12 .
For α, β ∈ C with β 6∈ R, |α √
− β| = |α − β̄| if and only if α ∈ R. So z + w1 must be
real, and writing z + w1 = t r + 1, with t ∈ R, we have
√
√
√
r − 1 = t r + 1 − r + 1 eiπ/12 ,
so that
r+1
t2 − √ t + r − 1 = 0.
2
√
√
The solutions of this are t = (r − 1)/ 2 and t = 2. Taking the smaller of these,
√
r − 1√
r + 1 = r − 1.
p + w1 = √
2
√ √
√
√
So |p| = |√r √
− 1 − w1 |. Taking
instead
√
√ t = 2, we see that |P | = | √2 r + 1 − w1 |.
So |P | ≥ 2 r + 1 − 1/ r + 1 > 1/ r + 1, and therefore |z| > 1/ r + 1 for all z
in the unbounded component of C \ U (θ).
FINDING GENERATORS AND RELATIONS
17
that w2 is in the bounded component of C \ U (θ), and
So (w1 , w2 ) ∈ FS ∗ implies
√
therefore |w2 | ≤ |p| = | r − 1 − w1 |. Thus
√
|w1 |2 + |w2 |2 ≤ |w1 |2 + | r − 1 − w1 |2
√
= r − 1 + 2|w1 |2 − 2 r − 1 Re(w1 )
1
√
1
=r−1+2
+
− 2 cos θ
r−1 r+1
1
√
1
−√
cos θ
−2 r−1 √
r−1
r+1
√
= 3(r − 1) − r(r − 1) 2 cos θ
√ r+1
≤ 3(r − 1) − r(r − 1) 2 √
8
= 2r − 3.
We conclude by giving a direct proof that the set S generates Γ. The following
lemma uses a modification of an argument shown to us by Gopal Prasad.
√
Lemma 4.4. If g ∈ Γ \ K, then d(g.0, 0) ≥ d(b.0, 0) = cosh−1 ( r + 2 ).
Proof. By considering the (3, 3)-entry of g ∗ F g − F = 0, we see that
|g13 |2 + |g13 − (r − 1)g23 |2 = (r − 1) |g33 |2 − 1 .
(4.1)
Write α = g13 , β = g13 − (r − 1)g23 and γ = g33 . By hypothesis, g.0 6= 0, and so
g13 6= 0 or g23 6= 0. Hence
α, β, γ ∈ Z[ζ], |α|2 + |β|2 = (r − 1) |γ|2 − 1 , and α, β are not both 0. (4.2)
We claim that under conditions (4.2), |γ|² ≥ r + 2 must hold. Writing α = a0 + a1ζ + a2ζ² + a3ζ³, we have |α|² = P(α) + rQ(α), where P(α) = a0² + a0a2 + a2² + a1² + a1a3 + a3² and Q(α) = a0a1 + a1a2 + a2a3. Our hypothesis is that
    P(α) + P(β) + P(γ) = 3Q(γ) + 1   and   P(γ) = Q(α) + Q(β) + Q(γ) + 1,   (4.3)
and we want to show that P(γ) + rQ(γ) ≥ 2 + r. Now P is a positive definite form, γ ≠ 0, and either α ≠ 0 or β ≠ 0. So the left hand side of the first equation in (4.3) is at least 2, so that Q(γ) > 0. Since Q(γ) ∈ Z, we have Q(γ) ≥ 1. So all we have to do is show that P(γ) ≥ 2. Now Q(γ) ≠ 0 implies that a0a1, a1a2 or a2a3 ≠ 0. If a0a1 ≠ 0, then P(γ) = (a2 + a0/2)² + 3a0²/4 + (a3 + a1/2)² + 3a1²/4 ≥ 3(a0² + a1²)/4 ≥ 3/2 > 1, so that P(γ) ≥ 2. The other two cases are similar.
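The identity |α|² = P(α) + rQ(α) driving this argument can be checked numerically, under the assumption (consistent with the forms of P and Q above, but not stated in this excerpt) that ζ = e^{iπ/6} is a primitive 12th root of unity and r = √3:

```python
import cmath
import itertools
import math

zeta = cmath.exp(1j * math.pi / 6)   # assumed: primitive 12th root of unity
r = math.sqrt(3)                     # assumed value of r

def P(a):
    a0, a1, a2, a3 = a
    return a0**2 + a0*a2 + a2**2 + a1**2 + a1*a3 + a3**2

def Q(a):
    a0, a1, a2, a3 = a
    return a0*a1 + a1*a2 + a2*a3

# |a0 + a1 zeta + a2 zeta^2 + a3 zeta^3|^2 == P + r Q on a small integer grid.
for a in itertools.product(range(-2, 3), repeat=4):
    alpha = sum(coef * zeta**k for k, coef in enumerate(a))
    assert abs(abs(alpha)**2 - (P(a) + r * Q(a))) < 1e-9
```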
The following result implies that Γ is generated by K and b.
Lemma 4.5. If g ∈ Γ \ K, then there exists k ∈ K so that d(bkg.0, 0) < d(g.0, 0).
Proof. If there is no such k ∈ K, then d(s.(g.0), 0) ≥ d(g.0, 0) for all s ∈ S ∗ , and
so g.0 ∈ F_{S∗}, in the notation of the proof of Proposition 4.1, and so d(g.0, 0) ≤ r0. But by Lemma 4.4, d(g.0, 0) ≥ d(b.0, 0) = cosh^{−1}(√(r + 2)), which is greater than ½ cosh^{−1}(r + 1) = r0.
Maximum likelihood estimation for the Fréchet
distribution based on block maxima extracted
from a time series
AXEL BÜCHER1 and JOHAN SEGERS2
arXiv:1511.07613v2 [] 16 Sep 2016
1
Fakultät für Mathematik, Ruhr-Universität Bochum, Universitätsstr. 150, 44780 Bochum, Germany.
E-mail: [email protected]
2
Université catholique de Louvain, Institut de Statistique, Biostatistique et Sciences Actuarielles, Voie
du Roman Pays 20, B-1348 Louvain-la-Neuve, Belgium. E-mail: [email protected]
The block maxima method in extreme-value analysis proceeds by fitting an extreme-value distribution
to a sample of block maxima extracted from an observed stretch of a time series. The method is usually
validated under two simplifying assumptions: the block maxima should be distributed exactly according
to an extreme-value distribution and the sample of block maxima should be independent. Both assumptions are only approximately true. The present paper validates that the simplifying assumptions can in
fact be safely made.
For general triangular arrays of block maxima attracted to the Fréchet distribution, consistency
and asymptotic normality is established for the maximum likelihood estimator of the parameters of
the limiting Fréchet distribution. The results are specialized to the common setting of block maxima
extracted from a strictly stationary time series. The case where the underlying random variables are
independent and identically distributed is further worked out in detail. The results are illustrated by
theoretical examples and Monte Carlo simulations.
Keywords: block maxima method, maximum likelihood estimation, asymptotic normality, heavy tails,
triangular arrays, stationary time series.
1. Introduction
For the analysis of extreme values, two fundamental approaches can be distinguished. First, the
peaks-over-threshold method consists of extracting those values from the observation period which
exceed a high threshold. To model such threshold excesses, asymptotic theory suggests the use of
the Generalized Pareto distribution (Pickands, 1975). Second, the block maxima method consists
of dividing the observation period into a sequence of non-overlapping intervals and restricting
attention to the largest observation in each time interval. Thanks to the extremal types theorem,
the probability distribution of such block maxima is approximately Generalized Extreme-Value
(GEV), popularized by Gumbel (1958). The block maxima method is particularly common in
environmental applications, since appropriate choices of the block size yield a simple but effective
way to deal with seasonal patterns.
For both methods, honest theoretical justifications must take into account two distinct features. First, the postulated models for either threshold excesses or block maxima arise from
asymptotic theory and are not necessarily accurate at sub-asymptotic thresholds or at finite
block lengths. Second, if the underlying data exhibit serial dependence, then the same will likely
be true for the extreme values extracted from those data.
How to deal with both issues is well-understood for the peaks-over-threshold method. The
model approximation can be justified under a second-order condition (see, e.g., de Haan and Ferreira,
2006 for a vast variety of applications), while serial dependence is taken care of in Hsing (1991);
Drees (2000) or Rootzén (2009), among others. Excesses over large thresholds often occur in clusters, and such serial dependence usually has an impact on the asymptotic variances of estimators
based on these threshold excesses.
Surprisingly, perhaps, no comparable analysis has yet been done for the block maxima method. With the exception of some recent articles, which we will discuss in the next paragraph,
the commonly used assumption is that the block maxima constitute an independent random
sample from a GEV distribution. The heuristic justification for assuming independence over
time, even for block maxima extracted from time series data, is that for large block sizes, the
occurrence times of the consecutive block maxima are likely to be well separated.
A more accurate framework is that of a triangular array of block maxima extracted from
a sequence of random variables, the block size growing with the sample size. While Dombry
(2015) shows consistency of the maximum likelihood estimator (Prescott and Walden, 1980) for
the parameters of the GEV distribution, Ferreira and de Haan (2015) show both consistency and
asymptotic normality of the probability weighted moment estimators (Hosking, Wallis and Wood,
1985). In both papers, however, the random variables from which the block maxima are extracted
are supposed to be independent and identically distributed. In many situations, this assumption
is clearly violated. To the best of our knowledge, Bücher and Segers (2014) is the only reference
treating both the approximation error and the time series character, providing large-sample theory of nonparametric estimators of extreme-value copulas based on samples of componentwise
block maxima extracted out of multivariate stationary time series.
The aim of the paper is to show the consistency and asymptotic normality of the maximum
likelihood estimator for more general sampling schemes, including the common situation of extracting block maxima from an underlying stationary time series. For technical reasons explained
below, we restrict attention to the heavy-tailed case. The block maxima paradigm then suggests using the two-parameter Fréchet distribution as a model for a sample of block maxima extracted
from that time series.
The first (quite general) main result, Theorem 2.5, is that for triangular arrays of random variables whose empirical measures, upon rescaling, converge in an appropriate sense to a Fréchet
distribution, the maximum likelihood estimator for the Fréchet parameters based on those variables is consistent and asymptotically normal. The theorem can be applied to the common set-up
discussed above of block maxima extracted from an underlying time series, and the second main
result, Theorem 3.6, shows that, in this case, the asymptotic variance matrix is the inverse of the
Fisher information of the Fréchet family: asymptotically, it is as if the data were an independent
random sample from the Fréchet attractor. In this sense, our theorem confirms the soundness of
the common simplifying assumption that block maxima can be treated as if they were serially
independent. Interestingly enough, the result allows for time series of which the strong mixing
coefficients are not summable, allowing for some long range dependence scenarios.
Restricting attention to the heavy-tailed case is done because of the non-standard nature
of the three-parameter GEV distribution. The issue is that the support of a GEV distribution
depends on its parameters. Even for the maximum likelihood estimator based on an independent
random sample from a GEV distribution, asymptotic normality has not yet been established. The
article usually cited in this context is Smith (1985), although no formal result is stated therein.
Even the differentiability in quadratic mean of the three-parameter GEV is still to be proven;
Marohn (1994) only shows differentiability in quadratic mean for the one-parameter GEV family
(shape parameter only) at the Gumbel distribution. We feel that solving all issues simultaneously
(irregularity of the GEV model, finite block size approximation error and serial dependence) is
a far too ample program for one paper. For that reason, we focus on the analytically simpler
Fréchet family, while thoroughly treating the triangular nature of the array of block maxima
and the issue of serial dependence within the underlying time series. In a companion paper
(Bücher and Segers, 2016), we consider the maximum likelihood estimator in the general GEV model based on independent and identically distributed random variables sampled directly from
the GEV distribution. The main focus of that paper is devoted to resolving the considerable
technical issues arising from the dependence of the GEV support on its parameters.
We will build up the theory in three stages. First, we consider general triangular arrays of
observations that asymptotically follow a Fréchet distribution in Section 2. Second, we apply
the theory to the set-up of block maxima extracted from a strictly stationary time series in
Section 3. Third, we further specialize the results to the special case of block maxima formed
from independent and identically distributed random variables in Section 4. This section can
hence be regarded as a continuation of Dombry (2015) by reinforcing consistency to asymptotic
normality, albeit for the Fréchet domain of attraction only. We work out an example and present
finite-sample results from a simulation study in Section 5. The main proofs are deferred to
Appendix A, while some auxiliary results concerning the Fréchet distribution are mentioned in
Appendix B. The proofs of the less central results are postponed to a supplement.
2. Triangular arrays of block maxima
In this section, we summarize results concerning the maximum likelihood estimator for the parameters of the Fréchet distribution: given a sample of observations which are not all tied, the
Fréchet likelihood admits a unique maximum (Subsection 2.1). If the observations are based on a
triangular array which is approximately Fréchet distributed in the sense that certain functionals
admit a weak law of large numbers or a central limit theorem, the maximum likelihood estimator
is consistent or asymptotically normal, respectively (Subsections 2.2 and 2.3). Proofs are given
in Subsection A.1.
2.1. Existence and uniqueness
Let Pθ denote the two-parameter Fréchet distribution on (0, ∞) with parameter θ = (α, σ) ∈ (0, ∞)² = Θ, defined through its cumulative distribution function
    Gθ(x) = exp{−(x/σ)^{−α}},   x > 0.
Its probability density function is equal to
    pθ(x) = (α/σ) exp{−(x/σ)^{−α}} (x/σ)^{−α−1},   x > 0,
with log-likelihood function
    ℓθ(x) = log(α/σ) − (x/σ)^{−α} − (α + 1) log(x/σ),   x > 0,
and score functions ℓ̇θ = (ℓ̇θ,1, ℓ̇θ,2)^T, with
    ℓ̇θ,1(x) = ∂α ℓθ(x) = α^{−1} + ((x/σ)^{−α} − 1) log(x/σ),   (2.1)
    ℓ̇θ,2(x) = ∂σ ℓθ(x) = (1 − (x/σ)^{−α}) α/σ.   (2.2)
Let x = (x1, . . . , xk) ∈ (0, ∞)^k be a sample vector to which the Fréchet distribution is to be fitted. Consider the log-likelihood function
    L(θ | x) = Σ_{i=1}^{k} ℓθ(xi).   (2.3)
Further, define
    Ψk(α | x) = 1/α + ( (1/k) Σ_{i=1}^{k} xi^{−α} log(xi) ) / ( (1/k) Σ_{i=1}^{k} xi^{−α} ) − (1/k) Σ_{i=1}^{k} log(xi),   (2.4)
    σ(α | x) = ( (1/k) Σ_{i=1}^{k} xi^{−α} )^{−1/α}.   (2.5)
Lemma 2.1. (Existence and uniqueness) If the scalars x1 , . . . , xk ∈ (0, ∞) are not all equal
(k ≥ 2), then there exists a unique maximizer
    θ̂(x) = (α̂(x), σ̂(x)) = arg max_{θ∈Θ} L(θ | x).
We have σ̂(x) = σ(α̂(x) | x), while α̂(x) is the unique zero of the strictly decreasing function α ↦ Ψk(α | x):
    Ψk(α̂(x) | x) = 0.   (2.6)
It is easily verified that the estimating equation for α is scale invariant: for any c ∈ (0, ∞),
we have Ψk (α | cx) = Ψk (α | x). As a consequence, the maximum likelihood estimator for the
shape parameter is scale invariant:
α̂(cx) = α̂(x).
Moreover, the estimator for σ is a scale parameter in the sense that
σ̂(cx) = σ(α̂(cx) | cx) = c σ(α̂(x) | x) = c σ̂(x).
Until now, the maximum likelihood estimator is defined only in case not all xi values are
identical. For definiteness, if x1 = . . . = xk , define α̂(x) = ∞ and σ̂(x) = min(x1 , . . . , xk ) = x1 .
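As an illustration of Lemma 2.1, the estimator can be computed by bisecting the strictly decreasing map α ↦ Ψk(α | x) of (2.4) and then evaluating (2.5). The following sketch (function names and the bracketing interval are ours, not the paper's) checks it on simulated Fréchet data:

```python
import math
import random

def psi_k(alpha, x):
    """Estimating function (2.4): 1/alpha + sum(x^-a * log x)/sum(x^-a) - mean(log x)."""
    w = [xi ** (-alpha) for xi in x]
    lx = [math.log(xi) for xi in x]
    return 1.0 / alpha + sum(wi * li for wi, li in zip(w, lx)) / sum(w) - sum(lx) / len(x)

def frechet_mle(x, lo=1e-3, hi=50.0, tol=1e-12):
    """Maximum likelihood estimator of Lemma 2.1: bisect the strictly
    decreasing map alpha -> Psi_k(alpha | x), then plug into (2.5)."""
    while hi - lo > tol * (1.0 + hi):
        mid = 0.5 * (lo + hi)
        if psi_k(mid, x) > 0:
            lo = mid
        else:
            hi = mid
    alpha = 0.5 * (lo + hi)
    sigma = (sum(xi ** (-alpha) for xi in x) / len(x)) ** (-1.0 / alpha)
    return alpha, sigma

# Frechet(alpha0, sigma0) sample via inverse transform: sigma0 * E^{-1/alpha0}, E ~ Exp(1).
random.seed(1)
alpha0, sigma0 = 2.0, 3.0
sample = [sigma0 * random.expovariate(1.0) ** (-1.0 / alpha0) for _ in range(5000)]
a_hat, s_hat = frechet_mle(sample)
assert abs(a_hat - alpha0) < 0.1 and abs(s_hat - sigma0) < 0.2
```

Bisection is a deliberately simple choice here: since Ψk is strictly decreasing with a unique zero, any sign-based root finder converges without tuning.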
2.2. Consistency
We derive a general condition under which the maximum likelihood estimator for the parameters
of the Fréchet distribution is consistent. The central result, Theorem 2.3 below, shows that, apart
from a not-all-tied condition, the only thing that is required for consistency is a weak law of large
numbers for the functions appearing in the estimating equation (2.6) for the shape parameter.
Suppose that for each positive integer n, we are given a random vector Xn = (Xn,1 , . . . , Xn,kn )
taking values in (0, ∞)kn , where kn ≥ 2 is a positive integer sequence such that kn → ∞
as n → ∞. One may think of Xn,i as being (approximately) Fréchet distributed with shape
parameter α0 > 0 and scale parameter σn > 0. This statement is made precise in Condition 2.2
below. On the event that the kn variables Xn,i are not all equal, Lemma 2.1 allows us to define
α̂n = α̂(Xn ),
(2.7)
the unique zero of the function 0 < α 7→ Ψkn (α | Xn ). Further, as in (2.5), put
    σ̂n = σ(α̂n | Xn) = ( (1/kn) Σ_{i=1}^{kn} X_{n,i}^{−α̂n} )^{−1/α̂n}.   (2.8)
For definiteness, put α̂n = ∞ and σ̂n = Xn,1 on the event {Xn,1 = . . . = Xn,kn }. Subsequently,
we will assume that this event is asymptotically negligible:
    lim_{n→∞} Pr(X_{n,1} = . . . = X_{n,kn}) = 0.   (2.9)
We refer to (α̂n , σ̂n ) as the maximum likelihood estimator.
The fundamental condition guaranteeing consistency of the maximum likelihood estimator
concerns the asymptotic behavior of sample averages of f (Xn,i /σn ) for certain functions f . For
0 < α− < α+ < ∞, consider the function class
    F1(α−, α+) = {x ↦ log x} ∪ {x ↦ x^{−α} : α− < α < α+} ∪ {x ↦ x^{−α} log x : α− < α < α+},   (2.10)
all functions being from (0, ∞) into R. Let the arrow ‘⇝’ denote weak convergence.
Condition 2.2. There exist 0 < α− < α0 < α+ < ∞ and a positive sequence (σn )n∈N such
that, for all f ∈ F1 (α− , α+ ),
    (1/kn) Σ_{i=1}^{kn} f(X_{n,i}/σn) ⇝ ∫_0^∞ f(x) p_{α0,1}(x) dx,   n → ∞.   (2.11)
Theorem 2.3. (Consistency) Let Xn = (Xn,1 , . . . , Xn,kn ) be a sequence of random vectors in
(0, ∞)kn , where kn → ∞. Assume that Equation (2.9) and Condition 2.2 hold. On the complement of the event {Xn,1 = . . . = Xn,kn }, the random vector (α̂n , σ̂n ) is the unique maximizer of
the log-likelihood (α, σ) 7→ L(α, σ | Xn,1 , . . . , Xn,kn ). Moreover, the maximum likelihood estimator
is consistent in the sense that
    (α̂n, σ̂n/σn) ⇝ (α0, 1),   n → ∞.
2.3. Asymptotic distribution
We formulate a general condition under which the estimation error of the maximum likelihood
estimator for the Fréchet parameter vector converges weakly. The central result is Theorem 2.5
below.
For 0 < α− < α+ < ∞, recall the function class F1 (α− , α+ ) in (2.10) and define another one:
    F2(α−, α+) = F1(α−, α+) ∪ {x ↦ x^{−α} (log x)² : α− < α < α+}.   (2.12)
Furthermore, fix α0 > 0 and consider the following triple of real-valued functions on (0, ∞):
    H = {f1, f2, f3} = {x ↦ x^{−α0} log(x), x ↦ x^{−α0}, x ↦ log x}.   (2.13)
The following condition strengthens Condition 2.2.
Condition 2.4. There exist α0 ∈ (0, ∞) and a positive sequence (σn )n∈N such that the following
two statements hold:
(i) There exist 0 < α− < α0 < α+ < ∞ such that Equation (2.11) holds for all f ∈ F2 (α− , α+ ).
(ii) There exists a sequence 0 < vn → ∞ and a random vector Y = (Y1 , Y2 , Y3 )T such that,
denoting
    Gn f = vn ( (1/kn) Σ_{i=1}^{kn} f(X_{n,i}/σn) − ∫_0^∞ f(x) p_{α0,1}(x) dx ),   (2.14)
we have, for fj as in (2.13),
    (Gn f1, Gn f2, Gn f3)^T ⇝ Y,   n → ∞.   (2.15)
Let Γ be the Euler gamma function and let γ = −Γ′ (1) = 0.5772 . . . be the Euler–Mascheroni
constant. Recall Γ″(2) = (1 − γ)² + π²/6 − 1. Define the matrix
    M(α0) = (6/π²) ( α0²     α0(1 − γ)          −α0²
                     γ − 1   −(Γ″(2) + 1)/α0    1 − γ ),   α0 ∈ (0, ∞).   (2.16)
Theorem 2.5. (Asymptotic distribution) Let Xn = (Xn,1 , . . . , Xn,kn ) be a sequence of random vectors in (0, ∞)kn , where kn → ∞. Assume that Equation (2.9) and Condition 2.4 hold.
As n → ∞, the maximum likelihood estimator (α̂n , σ̂n ) satisfies
    ( vn(α̂n − α0), vn(σ̂n/σn − 1) )^T = M(α0) ( Gn[x^{−α0} log(x)], Gn[x^{−α0}], Gn[log(x)] )^T + op(1) ⇝ M(α0) Y,   (2.17)
where Y = (Y1 , Y2 , Y3 )T and M (α0 ) are given in Equations (2.15) and (2.16), respectively.
For block maxima extracted from a strongly mixing stationary time series, Condition 2.4 with vn = √kn, where kn denotes the number of blocks, will be derived from the Lindeberg
central limit theorem. In that case, the distribution of Y is trivariate Gaussian with some mean
vector µY (possibly different from 0, see Theorem 3.6 below for details) and covariance matrix
    ΣY = (1/α0²) ( 1 − 4γ + γ² + π²/3   α0(γ − 2)   π²/6 − γ
                   α0(γ − 2)            α0²         −α0
                   π²/6 − γ             −α0         π²/6 ).   (2.18)
According to Lemma B.2 below, the right-hand side in (2.18) coincides with the covariance
matrix of the random vector (X^{−α0} log(X), X^{−α0}, log(X))^T, where X is Fréchet distributed
with parameter (α0 , 1). From Lemma B.3, recall the inverse of the Fisher information matrix of
the Fréchet family at (α, σ) = (α0 , 1):
    I^{−1}_{(α0,1)} = (6/π²) ( α0²     γ − 1
                               γ − 1   α0^{−2} {(1 − γ)² + π²/6} ).   (2.19)
Addendum 2.6. If Y is normally distributed with covariance matrix ΣY as in (2.18), then the limit M(α0)Y in Theorem 2.5 is also normally distributed and its covariance matrix is equal to the inverse of the Fisher information matrix of the Fréchet family, M(α0) ΣY M(α0)^T = I^{−1}_{(α0,1)}.
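Addendum 2.6 can be verified numerically; the following sketch (plain Python lists, an arbitrary test value of α0) multiplies out M(α0) ΣY M(α0)^T as given in (2.16) and (2.18) and compares it with the inverse Fisher information (2.19):

```python
import math

g = 0.5772156649015329                      # Euler-Mascheroni constant
a0 = 1.7                                    # arbitrary test value of alpha_0
G2 = (1 - g) ** 2 + math.pi ** 2 / 6 - 1    # Gamma''(2)
c = 6 / math.pi ** 2

# M(alpha_0), the 2x3 matrix of (2.16)
M = [[c * a0 ** 2, c * a0 * (1 - g), -c * a0 ** 2],
     [c * (g - 1), -c * (G2 + 1) / a0, c * (1 - g)]]

# Sigma_Y, the 3x3 matrix of (2.18)
S = [[(1 - 4 * g + g ** 2 + math.pi ** 2 / 3) / a0 ** 2, (g - 2) / a0, (math.pi ** 2 / 6 - g) / a0 ** 2],
     [(g - 2) / a0, 1.0, -1.0 / a0],
     [(math.pi ** 2 / 6 - g) / a0 ** 2, -1.0 / a0, (math.pi ** 2 / 6) / a0 ** 2]]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B))) for j in range(len(B[0]))]
            for i in range(len(A))]

MT = [[M[i][j] for i in range(2)] for j in range(3)]       # transpose of M
V = matmul(matmul(M, S), MT)                               # M Sigma_Y M^T

# Inverse Fisher information (2.19)
Iinv = [[c * a0 ** 2, c * (g - 1)],
        [c * (g - 1), c * ((1 - g) ** 2 + math.pi ** 2 / 6) / a0 ** 2]]

for i in range(2):
    for j in range(2):
        assert abs(V[i][j] - Iinv[i][j]) < 1e-10
```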
3. Block maxima extracted from a stationary time series
Let (ξt )t∈Z be a strictly stationary time series, that is, for any k ∈ N and τ, t1 , . . . , tk ∈ Z, the
distribution of (ξ_{t1+τ}, . . . , ξ_{tk+τ}) is the same as the distribution of (ξ_{t1}, . . . , ξ_{tk}). For positive integers i and r, consider the block maximum
    M_{r,i} = max(ξ_{(i−1)r+1}, . . . , ξ_{ir}).
Abbreviate Mr,1 = Mr . The classical block maxima method consists of choosing a sufficiently
large block size r and fitting an extreme-value distribution to the sample of block maxima
Mr,1 , . . . , Mr,k . The likelihood is constructed under the simplifying assumption that the block
maxima are independent. The present section shows consistency and asymptotic normality of
this method in an appropriate asymptotic framework.
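The extraction of disjoint block maxima described above can be sketched as follows (an illustrative helper, not code from the paper):

```python
import random

def block_maxima(series, r):
    """Disjoint block maxima M_{r,i} = max(xi_{(i-1)r+1}, ..., xi_{ir});
    a trailing incomplete block is dropped."""
    k = len(series) // r
    return [max(series[(i - 1) * r: i * r]) for i in range(1, k + 1)]

random.seed(0)
xs = [random.random() for _ in range(1000)]
m = block_maxima(xs, 100)
assert len(m) == 10
assert m[0] == max(xs[:100]) and m[-1] == max(xs[900:])
```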
For the block maxima distribution to approach its extreme-value limit, the block sizes must
increase to infinity. Moreover, consistency can only be achieved when the number of blocks grows
to infinity too. Hence, we consider a positive integer sequence rn , to be thought of as a sequence
of block sizes. The number of disjoint blocks of size rn that fit into a sample of size n is equal to
kn = ⌊n/rn ⌋, where ⌊x⌋ denotes the integer part of a real number x. Assume that both rn → ∞
and kn → ∞ as n → ∞.
The theory will be based on an application of Theorem 2.5 to the sample of left-truncated
block maxima Xn,i = Mrn ,i ∨ c (i = 1, . . . , kn ), for some positive constant c specified below. The
estimators α̂n and σ̂n are thus the ones in (2.7) and (2.8), respectively. The reason for the left-truncation is that otherwise, some of the block maxima could be zero or negative. Asymptotically,
such left-truncation does not matter, since all maxima will simultaneously diverge to infinity in
probability (Condition 3.2 below).
In Section 4 below, we will specialize things further to the case where the random variables
ξt are independent. In particular, we will simplify the list of conditions given in this section.
The basic assumption is that the distribution of rescaled block maxima is asymptotically
Fréchet. The sequence of scaling constants should possess a minimal degree of regularity. The
assumption is satisfied in case the stationary distribution of the series is in the Fréchet domain
of attraction and the series possesses a positive extremal index; see Remark 3.7 below.
Condition 3.1. (Domain of attraction) The time series (ξt )t∈Z is strictly stationary and there
exists a sequence (σn )n∈N of positive numbers with σn → ∞ and a positive number α0 such that
    Mn/σn ⇝ Fréchet(α0, 1),   n → ∞.   (3.1)
Moreover, σmn /σn → 1 for any integer sequence (mn )n∈N such that mn /n → 1 as n → ∞.
The domain-of-attraction condition implies that, for every scalar c, we have Pr[Mn ≤ c] =
Pr[Mn /σn ≤ c/σn ] → 0 as n → ∞. In words, the block maxima become unboundedly large as
the sample size grows to infinity. Still, out of a sample of kn block maxima, the smallest of the
maxima might still be small, especially when the number of blocks is large, or, equivalently, the
block sizes are not large enough. The following condition prevents this from happening.
Condition 3.2. (All block maxima diverge) For every c ∈ (0, ∞), we have
    lim_{n→∞} Pr[min(M_{rn,1}, . . . , M_{rn,kn}) ≤ c] = 0.
To control the serial dependence within the time series, we require that the Rosenblatt mixing
coefficients decay sufficiently fast: for positive integer ℓ, put
    α(ℓ) = sup{ |Pr(A ∩ B) − Pr(A) Pr(B)| : A ∈ σ(ξt : t ≤ 0), B ∈ σ(ξt : t ≥ ℓ) },
where σ( · ) denotes the σ-field generated by its argument.
Condition 3.3. (α-Mixing with rate) We have limℓ→∞ α(ℓ) = 0. Moreover, there exists ω > 0
such that
    kn^{1+ω} α(rn) → 0,   n → ∞.   (3.2)
Condition 3.3 can be interpreted as requiring the block sizes rn to be sufficiently large. For
instance, if α(ℓ) = O(ℓ^{−a}) for some a > 0, then (3.2) is satisfied as soon as rn is of larger
order than n^{(1+ε)/(1+a)} for some 0 < ε < a; in that case, one may choose ω = ε. Note that the
exponent a is allowed to be smaller than one, in which case the sequence of mixing coefficients
is not summable.
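The rate calculation above can be checked by computing the exponent of n in kn^{1+ω} α(rn) for the polynomial mixing rate α(ℓ) = ℓ^{−a} and rn ∼ n^{(1+ε)/(1+a)} with ω = ε (a toy computation, including the non-summable cases a < 1):

```python
def exponent(a, eps):
    """Exponent of n in k_n^{1+omega} alpha(r_n) for alpha(l) = l^{-a},
    r_n ~ n^{(1+eps)/(1+a)} (hence k_n ~ n^{(a-eps)/(1+a)}) and omega = eps."""
    kn_exp = (a - eps) / (1 + a)              # k_n ~ n^{kn_exp}
    return (1 + eps) * kn_exp - a * (1 + eps) / (1 + a)

# A negative exponent means k_n^{1+omega} alpha(r_n) -> 0; this also holds
# for a < 1, i.e. when the mixing coefficients are not summable.
for a in (0.5, 1.0, 3.0):
    for eps in (0.1, 0.4):
        if eps < a:
            assert exponent(a, eps) < 0
```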
In order to be able to integrate (3.1) to the limit, we require an asymptotic bound on certain
moments of the block maxima; more precisely, on negative power moments in the left tail and
on logarithmic moments in the right tail.
Condition 3.4. (Moments) There exists some ν > 2/ω with ω from Condition 3.3 such that
    lim sup_{n→∞} E[ g_{ν,α0}((Mn ∨ 1)/σn) ] < ∞,   (3.3)
where g_{ν,α0}(x) = {x^{−α0} 1(x ≤ e) + log(x) 1(x > e)}^{2+ν}.
An elementary argument shows that if Condition 3.4 holds, then Mn ∨ 1 in the lim sup may be replaced by Mn ∨ c, for arbitrary c > 0. Moreover, note that the limiting Fréchet distribution satisfies ∫_0^∞ x^β p_{α0,1}(x) dx < ∞ if and only if β is less than α0. In some scenarios, e.g., for the iid case considered in Section 4 or for the moving maximum process considered in Section 5.1, it can be shown that the following sufficient condition for Condition 3.4 is true:
    lim sup_{n→∞} E[ ((Mn ∨ c)/σn)^β ] < ∞   (3.4)
for all c > 0 and all β ∈ (−∞, α0). In that case, Condition 3.4 is easily satisfied for any ν > 0.
By Condition 3.2 and Lemma A.5, the probability that all block maxima Mrn ,1 , . . . , Mrn ,kn
are larger than some positive constant c and that they are not all equal tends to unity. On
this event, we can study the maximum likelihood estimators (α̂n , σ̂n ) for the parameters of the
Fréchet distribution based on the sample of block maxima.
Fix c ∈ (0, ∞) and put
Xn,i = Mrn ,i ∨ c.
Let Gn be the empirical process associated to X_{n,1}/σ_{rn}, . . . , X_{n,kn}/σ_{rn} as in (2.14) with vn = √kn. The empirical process is not necessarily centered, which is why we need a handle on its
expectation.
Condition 3.5. (Bias) There exists c ∈ (0, ∞) such that for every function f in H defined in
(2.13), the following limit exists:
    lim_{n→∞} √kn ( E[ f((M_{rn} ∨ c)/σ_{rn}) ] − ∫_0^∞ f(x) p_{α0,1}(x) dx ) = B(f).   (3.5)
Theorem 3.6. Suppose that Conditions 3.1 up to 3.5 are satisfied and fix c as in Condition 3.5.
Then, with probability tending to one, there exists a unique maximizer (α̂n , σ̂n ) of the Fréchet
log-likelihood (2.3) based on the block maxima Mrn ,1 , . . . , Mrn ,kn , and we have, as n → ∞,
    ( √kn (α̂n − α0), √kn (σ̂n/σ_{rn} − 1) )^T = M(α0) ( Gn[x^{−α0} log(x)], Gn[x^{−α0}], Gn[log(x)] )^T + op(1) ⇝ N2( M(α0) B, I^{−1}_{(α0,1)} ).
Here, M(α0) and I^{−1}_{(α0,1)} are defined in Equations (2.16) and (2.19), respectively, while B = (B(f1), B(f2), B(f3))^T, where B(f) is the limit in (3.5) and where f1, f2, f3 are defined in (2.13).
The proof of Theorem 3.6 is given in Subsection A.2. The conditions imposed in Theorem 3.6
are rather high-level. In the setting of a sequence of independent and identically distributed
random variables, they can be brought down to analytical conditions on the tail of the stationary distribution function (Theorem 4.2). Moreover, all conditions will be worked out in a
moving maximum model in Section 5.1. Still, we admit that for more common time series models, such as linear time series with heavy-tailed innovations or solutions to stochastic recurrence
equations, checking the conditions in Theorem 3.6 may not be an easy matter. Especially the
bias Condition 3.5, which requires quite detailed knowledge on the distribution of the sample
maximum, may be hard to verify. Even in the i.i.d. case, where the distribution of the sample
maximum is known explicitly, checking Condition 3.5 occupies more than three pages in the
proof of Theorem 4.2 below.
Interestingly, the asymptotic covariance matrix in Theorem 3.6 is unaffected by serial dependence, and the asymptotic standard deviation of √kn (α̂n − α0) is always equal to (√6/π) × α0 ≈ 0.7797 × α0. The reason for this invariance is that even for time series, maxima over large disjoint
blocks are asymptotically independent because of the strong mixing condition.
Remark 3.7. (Domain-of-attraction condition for positive extremal index) Let F be
the cumulative distribution function of ξ1 . Assume that there exist 0 < an → ∞ and α0 ∈ (0, ∞)
such that
    lim_{n→∞} F^n(an x) = exp(−x^{−α0}),   x ∈ (0, ∞).
Moreover, assume that the sequence (ξt)_{t∈Z} has extremal index ϑ ∈ (0, 1] (Leadbetter, 1983): if un → ∞ is such that F^n(un) converges, then
    Pr(Mn ≤ un) = F^{nϑ}(un) + o(1),   n → ∞.
Note that we assume that ϑ > 0. Putting σn = ϑ^{1/α0} an, we obtain that Condition 3.1 is satisfied: for every x ∈ (0, ∞), we have
    Pr(Mn/σn ≤ x) = F^{nϑ}(σn x) + o(1) → exp(−ϑ (ϑ^{1/α0} x)^{−α0}) = exp(−x^{−α0}),   n → ∞.
4. Block maxima extracted from an iid sample
We specialize Theorem 3.6 to the case where the random variables ξ1 , ξ2 , . . . are independent and
identically distributed with common distribution function F . In this setting, fitting extremevalue distributions to block maxima is also considered in Dombry (2015) (consistency of the
maximum likelihood estimator in the GEV-family with γ > −1) and Ferreira and de Haan (2015)
(asymptotic normality of the probability weighted moment estimator in the GEV-family with
γ < 1/2). Assume that F is in the maximum domain of attraction of the Fréchet distribution
with shape parameter α0 ∈ (0, ∞): there exists a positive scalar sequence (an )n∈N such that, for
every x ∈ (0, ∞),
    F^n(an x) → e^{−x^{−α0}},   n → ∞.   (4.1)
Because of serial independence, the conditions in Theorem 3.6 can be simplified considerably.
In addition, the mean vector of the asymptotic bivariate normal distribution of the maximum
likelihood estimator can be made explicit. Required is a second-order reinforcement of (4.1) in
conjunction with a growth restriction on the number of blocks.
Equation (4.1) is equivalent to regular variation of − log F at infinity with index −α0 (Gnedenko,
1943): we have F (x) < 1 for all x ∈ R and
    lim_{u→∞} (− log F(ux)) / (− log F(u)) = x^{−α0},   x ∈ (0, ∞).   (4.2)
The scaling constants in (4.1) may be chosen as any sequence (an )n∈N that satisfies
    lim_{n→∞} n {− log F(an)} = 1.   (4.3)
Being constructed from the asymptotic inverse of a regularly varying function with non-zero
index, the sequence (an )n∈N is itself regularly varying at infinity with index 1/α0 .
The following condition reinforces (4.2) and thus (4.1) from regular variation to second-order
regular variation (Bingham, Goldie and Teugels, 1987, Section 3.6). With − log F replaced by
1 − F , it appears for instance in de Haan and Ferreira (2006, Theorem 3.2.5) in the context of
the asymptotic distribution of the Hill estimator. For τ ∈ R, define hτ : (0, ∞) → R by
    hτ(x) = ∫_1^x y^{τ−1} dy = (x^τ − 1)/τ  if τ ≠ 0,   and   hτ(x) = log(x)  if τ = 0.   (4.4)
Condition 4.1. (Second-Order Condition) There exist α0 ∈ (0, ∞), ρ ∈ (−∞, 0], and a real
function A on (0, ∞) of constant, non-zero sign such that limu→∞ A(u) = 0 and such that, for
all x ∈ (0, ∞),
    lim_{u→∞} (1/A(u)) { (− log F(ux)) / (− log F(u)) − x^{−α0} } = x^{−α0} hρ(x).   (4.5)
The function A can be regarded as capturing the speed of convergence in (4.2). The form of
the limit function in (4.5) may seem unnecessarily specific, but actually, it is not, as explained
in Remark 4.3 below.
Let ψ = Γ′ /Γ denote the digamma function and recall the Euler–Mascheroni constant γ =
−Γ′ (1) = 0.5772 . . .. To express the asymptotic bias of the maximum likelihood estimators, we
will employ the functions b1 and b2 defined by
    b1(x) = (1 + x) Γ(x) {γ + ψ(1 + x)}  if x > 0,   and   b1(0) = π²/6,   (4.6)
and
    b2(x) = −π²/(6x) + (1 + x) Γ(x) {Γ″(2) + γ + (γ − 1) ψ(1 + x)}  if x > 0,   and   b2(0) = 0.   (4.7)
See Figure 1 for the graphs of these two functions. For (α0 , ρ) ∈ (0, ∞) × (−∞, 0], define the bias
function
    B(α0, ρ) = −(6/π²) ( b1(|ρ|/α0), b2(|ρ|/α0)/α0² )^T.   (4.8)
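The bias functions (4.6) and (4.7) involve only the gamma and digamma functions. A standard-library sketch (using the elementary series ψ(1 + x) = −γ + Σ_{k≥1} x/(k(k + x)), which is not from the paper) reproduces the boundary value b1(0) = π²/6 and the exact value b1(1) = 2(γ + ψ(2)) = 2:

```python
import math

EULER_GAMMA = 0.5772156649015329

def digamma1p(x, terms=200_000):
    """psi(1 + x) via the series psi(1 + x) = -gamma + sum_{k>=1} x / (k (k + x))."""
    return -EULER_GAMMA + sum(x / (k * (k + x)) for k in range(1, terms + 1))

def b1(x):
    if x == 0:
        return math.pi ** 2 / 6
    return (1 + x) * math.gamma(x) * (EULER_GAMMA + digamma1p(x))

def b2(x):
    if x == 0:
        return 0.0
    G2 = (1 - EULER_GAMMA) ** 2 + math.pi ** 2 / 6 - 1   # Gamma''(2)
    return (-math.pi ** 2 / (6 * x)
            + (1 + x) * math.gamma(x) * (G2 + EULER_GAMMA + (EULER_GAMMA - 1) * digamma1p(x)))

# b1 is continuous at 0 with limit pi^2/6; at x = 1, b1(1) = 2 (gamma + psi(2)) = 2 exactly.
assert abs(b1(1e-4) - math.pi ** 2 / 6) < 1e-2
assert abs(b1(1.0) - 2.0) < 1e-4
```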
The proof of the following theorem is given in Section A.3.
Theorem 4.2. Let ξ1, ξ2, . . . be independent random variables with common distribution function F satisfying Condition 4.1. Let the block sizes rn be such that rn → ∞ and kn = ⌊n/rn⌋ → ∞ as n → ∞, and assume that
    lim_{n→∞} √kn A(a_{rn}) = λ ∈ R.   (4.9)
Then, with probability tending to one, there exists a unique maximizer (α̂n, σ̂n) of the Fréchet log-likelihood (2.3) based on the block maxima M_{rn,1}, . . . , M_{rn,kn}, and we have
    √kn ( α̂n − α0, σ̂n/a_{rn} − 1 )^T ⇝ N2( λ B(α0, ρ), I^{−1}_{(α0,1)} ),   n → ∞,   (4.10)
Figure 1. Graphs of the functions b1 and b2 in (4.6) and (4.7).
where I^{−1}_{(α0,1)} denotes the inverse of the Fisher information of the Fréchet family as in (2.19) and with B(α0, ρ) as in (4.8).
We conclude this section with a series of remarks on the second-order Condition 4.1 and its
link to the block-size condition in (4.9) and the mean vector of the limiting distribution in (4.10).
Remark 4.3. (Second-order regular variation) Let F satisfy (4.2). For x > 0 sufficiently
large such that F (x) > 0, define L(x) by
    − log F(x) = x^{−α0} L(x).   (4.11)
In view of (4.2), the function L is slowly varying at infinity, that is,
    lim_{u→∞} L(ux)/L(u) = 1,   x ∈ (0, ∞).
A second-order refinement of this would be that there exist A : (0, ∞) → (0, ∞) and h : (0, ∞) →
R, the latter not identically zero, such that limu→∞ A(u) = 0 and
    lim_{u→∞} (1/A(u)) { L(ux)/L(u) − 1 } = h(x),   x ∈ (0, ∞).
Writing g(u) = A(u) L(u), Theorem B.2.1 in de Haan and Ferreira (2006) (see also Bingham, Goldie and Teugels,
1987, Section 3.6) implies that there exists ρ ∈ R such that g and thus A = g/L are regularly
varying at infinity with index ρ. Since A vanishes at infinity, necessarily ρ ≤ 0. Furthermore,
there exists κ ∈ R\{0} such that h(x) = κ hρ (x) for x ∈ (0, ∞), with hρ as in (4.4). Incorporating
the constant κ into the function A, we can assume without loss of generality that κ = 1 and
we arrive at Condition 4.1. The function A then possibly takes values in (−∞, 0) rather than in
(0, ∞).
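As a concrete illustration of this second-order structure (our own example, not taken from the paper), consider the standard Pareto distribution F(x) = 1 − 1/x for x ≥ 1. Then −log F(x) = x⁻¹ L(x) with L(x) = −x log(1 − 1/x) = 1 + 1/(2x) + O(x⁻²), so α0 = 1, and the candidate auxiliary function A(u) = −1/(2u) is regularly varying with index ρ = −1; as noted at the end of Remark 4.3, A may indeed be negative. A minimal numerical check of the limit defining h = h_ρ:

```python
import math

# Standard Pareto: F(x) = 1 - 1/x for x >= 1, hence -log F(x) = x^{-1} L(x).
def L(x):
    return -x * math.log1p(-1.0 / x)   # log1p for numerical accuracy

def A(u):
    # Candidate auxiliary function A(u) = -1/(2u); the negative sign is
    # permitted, as explained at the end of Remark 4.3.
    return -1.0 / (2.0 * u)

def h(x, rho=-1.0):
    # h_rho(x) = (x^rho - 1)/rho as in (4.4)
    return (x ** rho - 1.0) / rho

u = 1e6
xs = (0.5, 2.0, 5.0)
ratios = [(L(u * x) / L(u) - 1.0) / A(u) for x in xs]
targets = [h(x) for x in xs]
```

For large u the quotients `ratios` agree with `targets` up to O(1/u), matching the displayed limit with ρ = −1.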
Remark 4.4. (Asymptotic mean squared error) According to (4.9) and (4.10), the distribution of the estimation error α̂n − α0 is approximately equal to

    N( −A(arn) (6/π²) b1(|ρ|/α0), (rn/n) (6/π²) α0² ).
A. Bücher and J. Segers
The asymptotic mean squared error is therefore equal to

    AMSE(α̂n) = ABias²(α̂n) + AVar(α̂n) = |A(arn)|² (36/π⁴) b1(|ρ|/α0)² + (rn/n) (6/π²) α0².
The choice of the block size rn (or, equivalently, the number of blocks kn ), thus involves a bias–
variance trade-off; see Section 5. Alternatively, if ρ and A(arn ) could be estimated, then one
could construct bias-reduced estimators, just as in the case of the Hill estimator (see, e.g., Peng,
1998, among others) or probability weighted moment estimators (Cai, de Haan and Zhou, 2013).
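The bias–variance trade-off of Remark 4.4 can be sketched numerically. The snippet below assumes, purely for illustration, A(ar) = c · r^{ρ/α0} (consistent with the regular variation in Remark 4.3) and a placeholder value for b1(|ρ|/α0), whose definition (4.6) lies outside this excerpt; balancing the two terms of the AMSE gives an optimal block size of the order n^{α0/(α0+2|ρ|)}.

```python
import math

def amse(r, n, alpha0, rho, c=1.0, b1=1.0):
    # AMSE of alpha_hat as in Remark 4.4, under the illustrative assumptions
    # A(a_r) = c * r^(rho/alpha0) and a placeholder constant b1 (the true
    # b1(|rho|/alpha0) is defined in (4.6), outside this excerpt).
    bias2 = (c * r ** (rho / alpha0)) ** 2 * (36.0 / math.pi ** 4) * b1 ** 2
    var = (r / n) * (6.0 / math.pi ** 2) * alpha0 ** 2
    return bias2 + var

n, alpha0, rho = 10 ** 6, 1.0, -1.0
# brute-force scan over candidate block sizes
r_opt = min(range(2, 5000), key=lambda r: amse(r, n, alpha0, rho))
```

For these illustrative constants the minimizer sits near (2 · 36/π⁴ · n · π²/6)^{1/3} ≈ 107, i.e. of the order n^{1/3} = n^{α0/(α0+2|ρ|)}.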
Remark 4.5. (On the number of blocks) A version of (4.9) is used in Ferreira and de Haan (2015) to prove asymptotic normality of probability weighted moment estimators. Equation (4.9) also implies the following limit relation, which is imposed in Dombry (2015) and which we will be needing later on as well:

    lim_{n→∞} kn log(kn)/n = 0.    (4.12)

Indeed, in view of Remark 4.3 and regular variation of (an)_{n∈ℕ}, the sequence (|A(ar)|)_{r∈ℕ} is regularly varying at infinity. Potter's theorem (Bingham, Goldie and Teugels, 1987, Theorem 1.5.6) then implies that there exists β > 0 such that r^{−β} = o(|A(ar)|) as r → ∞. But then √kn rn^{−β} = √kn o(|A(arn)|) = o(1) by (4.9), and thus kn^{1/2+β}/n^β = o(1) as n → ∞. We obtain kn^{1+1/(2β)}/n = o(1), which implies (4.12).
Remark 4.6. (No asymptotic bias) If λ = 0 in (4.9), then the limiting normal distribution in
(4.10) is centered and the maximum likelihood estimator is said to be asymptotically unbiased. If
the index, ρ, of regular variation of the auxiliary function |A| is strictly negative (see Remark 4.3),
then a sufficient condition for λ = 0 to occur is that kn = O(nβ ) for some β < |ρ|/(α0 /2 + |ρ|).
5. Examples and finite-sample results
5.1. Verification of conditions in a moving maximum model
For many stationary time series models, the distribution of the sample maximum is a difficult
object to work with. This is true even for linear time series models, since the maximum operator is
non-linear. In such cases, checking the conditions of Section 3 may be a hard or even impossible
task. An exception occurs for moving maximum models, where the sample maximum can be
linked directly to maxima of the innovation sequence.
Let (Zt )t∈Z be a sequence of independent and identically distributed random variables with
common distribution function F in the domain of attraction of the Fréchet distribution with
shape parameter α0 > 0, that is, such that (4.1) is satisfied for some sequence an → ∞. Let
p ∈ ℕ, p ≥ 2, be fixed and let b1, . . . , bp be nonnegative constants, b1 ≠ 0 ≠ bp, such that Σ_{i=1}^p bi = 1. We consider the moving maximum process ξt of order p, defined by

    ξt = max{ b1 Zt, b2 Zt−1, . . . , bp Zt−p+1 },    t ∈ ℤ.

A simple calculation (see also the proof of Lemma 5.1 for the stationary distribution of ξt) shows that the extremal index of (ξt)_{t∈ℤ} is equal to

    θ = { Σ_{i=1}^p bi^{α0} }⁻¹ b(p)^{α0},

where b(p) = max_{i=1,...,p} bi. Let σn = b(p) an. The proof of the following lemma is given in Section D in the supplementary material.
Lemma 5.1. The stationary time series (ξt )t∈Z satisfies Conditions 3.1, 3.3 and 3.4. If additionally (4.12) is met, then Condition 3.2 is satisfied as well. Finally, if F satisfies the Second-Order
Condition 4.1, if (4.9) is met and if kn = o(n2/3 ) as n → ∞, then Condition 3.5 is also satisfied,
with B(f ) denoting the same limit as in the iid case, that is, B(f ) = β with β as in (A.23).
As a consequence, Theorem 3.6 may be applied and the asymptotic bias of the maximum
likelihood estimator is the same as specified in Theorem 4.2 for the case of independent and
identically distributed random variables.
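A short simulation illustrates the moving maximum process (a sketch under the stated model, not code from the paper; sample size and repetition count are our choices). With unit-Fréchet innovations (α0 = 1) and Σbi = 1, the stationary law of ξt is again unit Fréchet, and the extremal index reduces to θ = b(p), so Pr[Mn/n ≤ x] → exp(−θ/x):

```python
import math, random

random.seed(42)
p, b = 4, [0.1, 0.2, 0.3, 0.4]    # b_j = j/10 as used later in Section 5.2
theta = max(b) / sum(b)           # {sum b_i^a0}^{-1} b_(p)^{a0} with alpha0 = 1

def sample_max(n):
    # Maximum of the moving maximum process over a stretch of length n,
    # driven by unit-Frechet innovations Z_t (inverse-transform sampling).
    z = [(-math.log(random.random())) ** (-1.0) for _ in range(n + p - 1)]
    return max(max(b[j] * z[t + j] for j in range(p)) for t in range(n))

n, reps, x = 500, 2000, 1.0
emp = sum(sample_max(n) <= n * x for _ in range(reps)) / reps
limit = math.exp(-theta / x)      # limiting probability exp(-theta/x)
```

The empirical frequency `emp` should be close to `limit` = exp(−0.4) ≈ 0.67, up to Monte Carlo error and a small finite-block boundary effect.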
5.2. Simulation results
We report on the results of a simulation study, highlighting some interesting features regarding
the finite-sample performance of the maximum likelihood estimator. Attention is restricted to
the estimation of the shape parameter, and particular emphasis is given to a comparison with
the common Hill estimator, which is based on the competing peaks-over-threshold method. Its
variance is of the order O(k −1 ), where k is the number of upper order statistics taken into
account for its calculation. The Hill estimator’s asymptotic variance is given by α20 , which is larger
than the asymptotic variance (6/π 2 ) × α20 of the block maxima maximum likelihood estimator.
Furthermore, numerical experiments (not shown) involving the probability weighted moment
estimator showed a variance that was higher, in all cases considered, than the one of the maximum
likelihood estimator.
We consider three time series models for (ξt )t∈Z : independent and identically distributed
random variables, the moving maximum process from Section 5.1, and the absolute values of a
GARCH(1,1) time series. In the first two models, three choices are considered for the distribution
function F of the variables ξt in the first model and of the innovations Zt in the second model: the absolute value of a Cauchy distribution, the standard Pareto distribution and the Fréchet(1,1)
distribution itself. All three distribution functions are attracted to the Fréchet distribution with
α0 = 1. For the moving maximum process, we fix p = 4 and bj = j/10 for j ∈ {1, 2, 3, 4}. The
GARCH(1,1) model is based on standard normal innovations, that is, ξt = |Zt|, where Zt is the stationary solution of the equations

    Zt = εt σt,    σt² = λ0 + λ1 Z²_{t−1} + λ2 σ²_{t−1},    (5.1)
with εt , t ∈ Z, independent standard normal random variables. The parameter vector (λ0 , λ1 , λ2 )
is set to either (0.5, 0.367, 0.367) or (0.5, 0.08, 0.91). By Mikosch and Stărică (2000), the stationary distribution associated to any of these two models is attracted to the Fréchet distribution
with shape parameter being (approximately) equal to α0 = 5.
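The GARCH(1,1) data-generating mechanism in (5.1) can be sketched as follows (the burn-in length, seed and starting value are our own choices, not prescribed by the paper):

```python
import math, random

def garch_abs(n, lam, burn=1000, seed=0):
    # Simulate xi_t = |Z_t| for the GARCH(1,1) recursion (5.1):
    #   Z_t = eps_t * sigma_t,  sigma_t^2 = lam0 + lam1*Z_{t-1}^2 + lam2*sigma_{t-1}^2,
    # with independent standard normal innovations eps_t.
    lam0, lam1, lam2 = lam
    rng = random.Random(seed)
    s2, z = lam0 / (1.0 - lam1 - lam2), 0.0   # start at the unconditional variance
    out = []
    for t in range(n + burn):
        s2 = lam0 + lam1 * z * z + lam2 * s2
        z = rng.gauss(0.0, 1.0) * math.sqrt(s2)
        if t >= burn:                          # discard the burn-in segment
            out.append(abs(z))
    return out

xi = garch_abs(10000, (0.5, 0.367, 0.367))     # first parameter vector above
```

The burn-in is used so that the retained stretch is approximately a draw from the stationary distribution; both parameter vectors satisfy λ1 + λ2 < 1, so the unconditional variance exists.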
We generate samples from all of the aforementioned models for a fixed sample size of n = 1 000. Based on N = 3 000 Monte Carlo repetitions, we obtain empirical estimates of the finite-sample bias, variance and mean squared error (MSE) of the competing estimators. The results
are summarized in Figure 2 for the iid and the moving maxima model, and in Figure 3 for
the GARCH-model. Additional details for the case of independent random sampling from the
absolute value of a Cauchy distribution are provided in the Supplement, Section F.
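For reference, a minimal implementation of the competing Hill estimator (the standard peaks-over-threshold formula based on log-spacings; the Pareto test sample is our own choice, matching one of the innovation models above):

```python
import math, random

def hill(data, k):
    # Hill estimator of the tail index alpha based on the k largest order
    # statistics: 1/alpha_hat is the average log-spacing above X_{(k+1)}.
    xs = sorted(data, reverse=True)
    gamma = sum(math.log(xs[i] / xs[k]) for i in range(k)) / k
    return 1.0 / gamma

random.seed(3)
# standard Pareto sample (tail index alpha0 = 1) via inverse transform
data = [1.0 / random.random() for _ in range(1000)]
a_hill = hill(data, 200)
```

For the exact Pareto model the Hill estimator is the Pareto maximum likelihood estimator and carries no asymptotic bias, which is why it is the natural benchmark in the Pareto rows of Figure 2.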
In general, (most of) the graphs nicely reproduce the bias-variance tradeoff, its characteristic
form however varying from model to model. Consider the iid scenario: since the Hill estimator
is essentially the maximum likelihood estimator in the Pareto family, it is to be expected that
it outperforms the block maxima estimator. On the other hand, by max-stability of the Fréchet
family, the block maxima estimator should outperform the Hill estimator for that family. These
expectations are confirmed by the simulation results in the left column of Figure 2. For the
Cauchy distribution, it turns out that the block maxima maximum likelihood estimator shows a
better performance.
Now, consider the moving maxima time series scenarios (right column in Figure 2). Compared
to the iid case, we observe an increase in the mean squared error (note that the scale on the
axis of ordinates is row-wise identical). The block maxima method clearly outperforms the Hill
estimator, except for the Pareto model. The big increase in relative performance is perhaps
not too surprising, as the data points from a moving maximum process are already (weighted)
maxima, which principally favors the block maxima method with small block sizes.
Finally, consider the GARCH models in Figure 3. While, in line with the theoretical findings, the variance of the block maxima estimator is smaller than that of the Hill estimator, the squared bias turns out to be substantially higher for a large range of values of k. The MSE-optimal point is smaller for the Hill estimator.
Appendix A: Proofs
A.1. Proofs for Section 2
Proof of Lemma 2.1. The proof extends the development in Section 2 of Balakrishnan and Kateri (2008). First, fix α > 0 and consider the function (0, ∞) ∋ σ ↦ L(α, σ | x). By Equation (2.2), its derivative is equal to

    ∂σ L(α, σ | x) = Σ_{i=1}^k ∂σ ℓθ(xi) = (α/σ) ( k − σ^α Σ_{i=1}^k xi^{−α} ).

We find that ∂σ L(α, σ | x) is positive, zero, or negative according to whether σ is smaller than, equal to, or larger than σ(α | x), respectively. In particular, for fixed α, the expression L(α, σ | x) is maximal at σ equal to σ(α | x). Hence we need to find the maximum of the function (0, ∞) ∋ α ↦ L(α, σ(α | x) | x). By (2.1), its derivative is given by

    (d/dα) L(α, σ(α | x) | x) = Σ_{i=1}^k ∂α ℓα,σ(xi) |_{σ=σ(α|x)} + { Σ_{i=1}^k ∂σ ℓα,σ(xi) |_{σ=σ(α|x)} } × (d/dα) σ(α | x).

The second sum is equal to zero, by definition of σ(α | x). We obtain

    (d/dα) L(α, σ(α | x) | x) = k Ψk(α | x),

with Ψk as in (2.4). This is the same expression as Eq. (2.3) in Balakrishnan and Kateri (2008), with their xi replaced by our xi⁻¹. Differentiating once more with respect to α, we obtain that

    (d²/dα²) L(α, σ(α | x) | x) = −k/α² − k { Σ_{i=1}^k xi^{−α}(log(xi))² · Σ_{i=1}^k xi^{−α} − ( Σ_{i=1}^k xi^{−α} log(xi) )² } / ( Σ_{i=1}^k xi^{−α} )².    (A.1)

By the Cauchy–Schwarz inequality, the numerator of the big fraction is nonnegative, whence

    (d²/dα²) L(α, σ(α | x) | x) ≤ −k/α² < 0.
Figure 2. Simulation results (Section 5.2). Effective sample size refers to the number of blocks (block maxima MLE) or the number of upper order statistics (Hill estimator). Time series models: iid (left) and moving maximum model (right). Innovations: absolute values of Cauchy (top), unit Fréchet (middle) and unit Pareto (bottom) random variables. Block sizes r ∈ {2, 3, . . . , 24}, resulting in k ∈ {500, 333, . . . , 41} blocks. Each panel shows squared bias, variance and MSE of the block maxima MLE and the Hill estimator.
Figure 3. Simulation results (Section 5.2). Effective sample size refers to the number of blocks (block maxima MLE) or the number of upper order statistics (Hill estimator). Both panels refer to the GARCH(1,1) model in (5.1), with (λ0, λ1, λ2) equal to (0.5, 0.367, 0.367) on the left and to (0.5, 0.08, 0.91) on the right.
Hence, α ↦ Ψk(α | x) is strictly decreasing. For α → 0, this function diverges to ∞, whereas for α → ∞, it converges to log(min(x1, . . . , xk)) − k⁻¹ Σ_{i=1}^k log(xi), which is less than zero given the assumptions on x1, . . . , xk. Hence, there exists a unique α̂(x) ∈ (0, ∞) such that this function is zero. We conclude that the function θ ↦ L(θ | x) admits a unique maximum at θ̂(x).
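The constructive content of Lemma 2.1 — profile out σ via σ(α | x) and locate the unique zero of the strictly decreasing function α ↦ Ψk(α | x) — can be turned into a small numerical sketch. Bisection is our own choice of root finder; the paper does not prescribe one.

```python
import math, random

def psi_k(alpha, x):
    # Profile score Psi_k(alpha | x) as in (2.4); the shape MLE is its unique zero.
    w = [xi ** (-alpha) for xi in x]
    return (1.0 / alpha
            + sum(wi * math.log(xi) for wi, xi in zip(w, x)) / sum(w)
            - sum(math.log(xi) for xi in x) / len(x))

def frechet_mle(x, tol=1e-10):
    """Maximum likelihood estimate (alpha_hat, sigma_hat) for a Frechet sample."""
    # Psi_k diverges to +inf as alpha -> 0 and is eventually negative, so a
    # bracket [lo, hi] with a sign change always exists.
    lo, hi = 1e-6, 1.0
    while psi_k(hi, x) > 0:
        hi *= 2.0
    for _ in range(200):                     # bisection on the decreasing score
        mid = 0.5 * (lo + hi)
        if psi_k(mid, x) > 0:
            lo = mid
        else:
            hi = mid
        if hi - lo < tol:
            break
    alpha = 0.5 * (lo + hi)
    # sigma(alpha | x) solves the first-order condition in sigma:
    # sigma = (k / sum_i x_i^{-alpha})^{1/alpha}
    sigma = (len(x) / sum(xi ** (-alpha) for xi in x)) ** (1.0 / alpha)
    return alpha, sigma

random.seed(1)
# Frechet(alpha0=2, sigma0=1) sample via inverse transform: X = (-log U)^(-1/alpha0)
sample = [(-math.log(random.random())) ** (-0.5) for _ in range(5000)]
a_hat, s_hat = frechet_mle(sample)
```

On a large simulated Fréchet(2, 1) sample, (a_hat, s_hat) should land close to the true parameters, in line with the consistency statement of Theorem 2.3.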
Fix α0 ∈ (0, ∞). Let P denote the Fréchet distribution with parameter θ0 = (α0, 1), with support X = (0, ∞). The tentative limit of the functions α ↦ Ψk(α | x) is the function

    Ψ(α) = 1/α + ( ∫₀^∞ x^{−α} log(x) dP(x) ) / ( ∫₀^∞ x^{−α} dP(x) ) − ∫₀^∞ log(x) dP(x).

Let Γ be the gamma function and let ψ = Γ′/Γ be the digamma function.
Lemma A.1. Fix α0 ∈ (0, ∞). We have

    Ψ(α) = (1/α0) ( ψ(1) − ψ(α/α0) ),    α ∈ (0, ∞).    (A.2)

As a consequence, Ψ : (0, ∞) → ℝ is a decreasing bijection with a unique zero at α = α0.
Proof of Lemma A.1. By Lemma B.1,

    Ψ(α) = 1/α + (−α0⁻¹) Γ′(1 + α/α0) / Γ(1 + α/α0) − (−α0⁻¹) Γ′(1) = (1/α0) ( (α/α0)⁻¹ − ψ(1 + α/α0) + ψ(1) ).

The digamma function satisfies the recurrence relation ψ(x + 1) = ψ(x) + 1/x. Equation (A.2) follows. The final statement follows from the fact that the digamma function ψ : (0, ∞) → ℝ is an increasing bijection.
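Lemma A.1 can be checked by Monte Carlo (a sketch of ours, not from the paper): replacing the population integrals in Ψ by sample averages over a large Fréchet(α0, 1) sample, the resulting function should be positive below α0, negative above it, and close to zero at α0.

```python
import math, random

def psi_hat(alpha, x):
    # Empirical version of Psi(alpha): the integrals under the Frechet(alpha0, 1)
    # law are replaced by sample averages.
    w = [xi ** (-alpha) for xi in x]
    logs = [math.log(xi) for xi in x]
    return (1.0 / alpha
            + sum(wi * li for wi, li in zip(w, logs)) / sum(w)
            - sum(logs) / len(x))

random.seed(7)
alpha0 = 3.0
# Frechet(alpha0, 1) sample via inverse transform
x = [(-math.log(random.random())) ** (-1.0 / alpha0) for _ in range(100000)]
# Lemma A.1: Psi is decreasing with its unique zero at alpha = alpha0
vals = [psi_hat(a, x) for a in (2.0, 3.0, 4.0)]
```

Evaluating at α ∈ {2, 3, 4} with α0 = 3, the three values should be positive, approximately zero, and negative, respectively.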
Proof of Theorem 2.3. By Lemma 2.1, we only have to show the claimed convergence. Define a random function Ψn on (0, ∞) by

    Ψn(α) = Ψkn(α | Xn) = Ψkn(α | Xn/σn),    (A.3)

with Ψk(· | ·) as in (2.4). Recall Ψ in (A.2). The hypotheses imply that, for each α ∈ (α−, α+),

    Ψn(α) ⇝ Ψ(α),    n → ∞.

By Lemma A.1, the limit Ψ(α) is positive, zero, or negative according to whether α is less than, equal to, or greater than α0. Moreover, the function Ψn is decreasing and Ψn(α̂n) = 0; see the proof of Lemma 2.1.

Let δ > 0 be such that α− < α0 − δ < α0 + δ < α+. Since Ψn(α0 − δ) ⇝ Ψ(α0 − δ) > 0 as n → ∞, we find

    Pr[α̂n ≤ α0 − δ] ≤ Pr[Ψn(α0 − δ) ≤ 0] → 0,    n → ∞.

Similarly, Pr[α̂n ≥ α0 + δ] → 0 as n → ∞. We can choose δ > 0 arbitrarily small, thereby concluding that α̂n ⇝ α0 as n → ∞.
Second, Condition 2.2 also implies that, for each α ∈ (α−, α+) and as n → ∞,

    (1/σn) ( (1/kn) Σ_{i=1}^{kn} Xn,i^{−α} )^{−1/α} = ( (1/kn) Σ_{i=1}^{kn} (Xn,i/σn)^{−α} )^{−1/α} ⇝ ( ∫₀^∞ x^{−α} pα0,1(x) dx )^{−1/α} = Γ(1 + α/α0)^{−1/α},

where we used Lemma B.1 for the last identity. Both the left-hand and right-hand sides are continuous, nonincreasing functions of α. Since α̂n ⇝ α0 as n → ∞ and since the right-hand side evaluates to unity at α = α0, a standard argument then yields

    σ̂n/σn = (1/σn) ( (1/kn) Σ_{i=1}^{kn} Xn,i^{−α̂n} )^{−1/α̂n} ⇝ 1,    n → ∞.
The proof of Theorem 2.5 is decomposed into a sequence of lemmas. Recall Ψn and Ψ in (A.3) and (A.2), respectively, and define Ψ̇n(α) = (d/dα)Ψn(α) and Ψ̇(α) = (d/dα)Ψ(α). By (A.1),

    Ψ̇n(α) = −1/α² − { Pn[x^{−α}(log x)²] · Pn x^{−α} − (Pn x^{−α} log x)² } / (Pn x^{−α})²,    (A.4)

where Pn denotes the empirical distribution of the points (Xn,i/σn)_{i=1}^{kn} and where

    Pn f = (1/kn) Σ_{i=1}^{kn} f(Xn,i/σn).

The asymptotic distribution of vn(α̂n − α0) can be derived from the asymptotic behavior of Ψ̇n and vn Ψn, which are the subjects of the next two lemmas, respectively.
Lemma A.2. (Slope) Let Xn = (Xn,1, . . . , Xn,kn) be a sequence of random vectors in (0, ∞)^{kn}, where kn → ∞. Suppose that Equation (2.9) and Condition 2.4(i) are satisfied. If α̃n is a random sequence in (0, ∞) such that α̃n ⇝ α0 as n → ∞, then

    Ψ̇n(α̃n) ⇝ Ψ̇(α0) = −π²/(6α0²),    n → ∞.
Proof. For α ∈ (0, ∞) and m ∈ {0, 1, 2}, define

    fm,α(x) = x^{−α} (log x)^m,    x ∈ (0, ∞),

with (log x)⁰ = 1 for all x ∈ (0, ∞). Suppose that we could show that, for m ∈ {0, 1, 2} and some ε > 0,

    sup_{α : |α−α0| ≤ ε} | Pn fm,α − ∫₀^∞ fm,α(x) pα0(x) dx | ⇝ 0,    n → ∞.    (A.5)

Then from weak convergence of α̃n to α0, Slutsky's lemma (van der Vaart, 1998, Lemma 2.8) and Lemma B.1 below, it would follow that

    Ψ̇n(α̃n) ⇝ −1/α0² − { α0⁻² Γ″(2) Γ(2) − (α0⁻¹ Γ′(2))² } / (Γ(2))²,    n → ∞.

Since Γ(2) = 1, Γ′(2) = 1 − γ and Γ″(2) = (1 − γ)² + π²/6 − 1, the conclusion would follow.

It remains to show (A.5). We consider the three cases m ∈ {0, 1, 2} separately. Let ε > 0 be small enough such that α− < α0 − ε < α0 + ε < α+.

First, let m = 0. The maps α ↦ (Pn f0,α)^{1/α} and α ↦ (∫₀^∞ f0,α pα0,1)^{1/α} are monotone by Lyapounov's inequality [i.e., ‖f‖r ≤ ‖f‖s for 0 < r < s, where ‖f‖r = (∫X |f|^r dμ)^{1/r} denotes the L^r-norm of some real-valued function f on a measurable space (X, μ)], and the second one is also continuous by Lemma B.1. Pointwise convergence of monotone functions to a monotone, continuous limit implies locally uniform convergence (Resnick, 1987, Section 0.1). This property easily extends to weak convergence, provided the limit is nonrandom. We obtain

    sup_{α : |α−α0| ≤ ε} | (Pn f0,α)^{1/α} − ( ∫₀^∞ f0,α(x) pα0(x) dx )^{1/α} | ⇝ 0,    n → ∞.

Uniform continuity of the map (y, α) ↦ y^α on compact subsets of (0, ∞)² then yields (A.5) for m = 0.

Second, let m = 1. The maps α ↦ Pn f1,α and α ↦ ∫₀^∞ f1,α pα0,1 are continuous and nonincreasing (their derivatives are nonpositive). Pointwise weak convergence at each α ∈ (α−, α+) then yields (A.5) for m = 1.

Finally, let m = 2. With probability tending to one, not all variables Xn,i are equal to σn, and thus Pn (log x)² > 0. On the latter event, we have

    Pn x^{−α} (log x)² = Pn (log x)² · { ( Pn x^{−α} (log x)² / Pn (log x)² )^{1/α} }^α.

By Lyapounov's inequality, the expression in curly braces is nondecreasing in α. For each α ∈ (α−, α+), it converges weakly to {Γ″(1 + α/α0)/Γ″(1)}^{1/α}, which is nondecreasing and continuous in α; see Lemma B.1. It follows that

    sup_{α : |α−α0| ≤ ε} | ( Pn x^{−α} (log x)² / Pn (log x)² )^{1/α} − ( Γ″(1 + α/α0) / Γ″(1) )^{1/α} | ⇝ 0,    n → ∞.

Equation (A.5) for m = 2 follows.
Lemma A.3. Assume Condition 2.4. Then, as n → ∞,

    vn Ψn(α0) = Gn x^{−α0} log(x) + ((1−γ)/α0) Gn x^{−α0} − Gn log(x) + op(1).    (A.6)

The expression on the right converges weakly to Y ≡ Y1 + ((1−γ)/α0) Y2 − Y3.
Proof. Recall that

    Ψn(α0) = Ψkn(α0 | Xn/σn) = 1/α0 + Pn x^{−α0} log(x) / Pn x^{−α0} − Pn log(x).

Define φ : ℝ × (0, ∞) × ℝ → ℝ by

    φ(y1, y2, y3) = 1/α0 + y1/y2 − y3.

The previous two displays allow us to write

    Ψn(α0) = φ( Pn x^{−α0} log(x), Pn x^{−α0}, Pn log(x) ).

Recall Lemma B.1 and put

    y0 = ( −α0⁻¹ Γ′(2), Γ(2), −α0⁻¹ Γ′(1) ) = ( α0⁻¹(γ − 1), 1, α0⁻¹ γ ).

As already noted in the proof of Lemma A.1, we have φ(y0) = α0⁻¹ + α0⁻¹(γ − 1) − α0⁻¹ γ = 0. As a consequence,

    vn Ψn(α0) = vn { φ( Pn x^{−α0} log(x), Pn x^{−α0}, Pn log(x) ) − φ(y0) }.

In view of Condition 2.4 and the delta method, as n → ∞,

    vn Ψn(α0) = φ̇1(y0) Gn x^{−α0} log(x) + φ̇2(y0) Gn x^{−α0} + φ̇3(y0) Gn log(x) + op(1),

where φ̇j denotes the first-order partial derivative of φ with respect to yj for j ∈ {1, 2, 3}. Elementary calculations yield

    φ̇1(y0) = 1,    φ̇2(y0) = α0⁻¹ (1 − γ),    φ̇3(y0) = −1.

The conclusion follows by Slutsky's lemma.
Proposition A.4. (Asymptotic expansion for the shape parameter) Assume that the conditions of Theorem 2.5 hold. Then, with Y as defined in Lemma A.3,

    vn(α̂n − α0) = (6α0²/π²) vn Ψn(α0) + op(1) ⇝ (6α0²/π²) Y,    n → ∞.    (A.7)
Proof. Recall that, with probability tending to one, α̂n is the unique zero of the random function α ↦ Ψn(α). Recall that Ψ̇n in (A.4) is the derivative of Ψn. With probability tending to one, we have, by virtue of the mean-value theorem,

    0 = Ψn(α̂n) = Ψn(α̂n) − Ψn(α0) + Ψn(α0) = (α̂n − α0) Ψ̇n(α̃n) + Ψn(α0);

here α̃n is a convex combination of α̂n and α0. Since Ψ̇n(α) ≤ −1/α² < 0 (argument as in the proof of Lemma 2.1), we can write

    vn(α̂n − α0) = − (1/Ψ̇n(α̃n)) vn Ψn(α0).

By weak consistency of α̂n, we have α̃n ⇝ α0 as n → ∞. Lemma A.2 then gives Ψ̇n(α̃n) ⇝ −π²/(6α0²) as n → ∞. Apply Lemma A.3 and Slutsky's lemma to conclude.
Proof of Theorem 2.5 and Addendum 2.6. Combining Equations (A.7) and (A.6) yields

    vn(α̂n − α0) = (6α0²/π²) { Gn x^{−α0} log(x) + ((1−γ)/α0) Gn x^{−α0} − Gn log(x) } + op(1)

as n → ∞. This yields the first row in (2.17).

By definition of σ̂n, we have (σ̂n/σn)^{−α̂n} = Pn x^{−α̂n}. Consider the decomposition

    vn { (σ̂n/σn)^{−α̂n} − 1 } = vn { Pn x^{−α̂n} − Pn x^{−α0} } + vn { Pn x^{−α0} − 1 }.    (A.8)

By the mean value theorem, there exists a convex combination, α̃n, of α̂n and α0 such that

    Pn x^{−α̂n} − Pn x^{−α0} = −(α̂n − α0) Pn x^{−α̃n} log(x).

By the argument for the case m = 1 in the proof of Lemma A.2, we have

    Pn x^{−α̃n} log(x) ⇝ −(1/α0) Γ′(2) = −(1−γ)/α0,    n → ∞.

By Proposition A.4 and Lemma A.3, it follows that, as n → ∞,

    vn { Pn x^{−α̂n} − Pn x^{−α0} } = vn(α̂n − α0) (1−γ)/α0 + op(1)
        = (6α0(1−γ)/π²) vn Ψn(α0) + op(1)
        = (6α0(1−γ)/π²) { Gn x^{−α0} log(x) + ((1−γ)/α0) Gn x^{−α0} − Gn log(x) } + op(1).

This expression in combination with (A.8) yields, as n → ∞,

    vn { (σ̂n/σn)^{−α̂n} − 1 } = (6α0(1−γ)/π²) { Gn x^{−α0} log(x) + ((1−γ)/α0) Gn x^{−α0} − Gn log(x) } + Gn x^{−α0} + op(1).    (A.9)

Write Zn = (σ̂n/σn)^{−α̂n}, which converges weakly to 1 as n → ∞. By the mean value theorem,

    vn(σ̂n/σn − 1) = vn(Zn^{−1/α̂n} − 1) = vn(Zn − 1) (−1/α̂n) Z̃n^{−1/α̂n − 1},

where Z̃n is a random convex combination of Zn and 1. But then Z̃n ⇝ 1 as n → ∞, whence, by consistency of α̂n and Slutsky's lemma,

    vn(σ̂n/σn − 1) = (−1/α0) vn { (σ̂n/σn)^{−α̂n} − 1 } + op(1),    n → ∞.

Combining this with (A.9), we find

    vn(σ̂n/σn − 1) = −(6(1−γ)/π²) { Gn x^{−α0} log(x) + ((1−γ)/α0) Gn x^{−α0} − Gn log(x) } − α0⁻¹ Gn x^{−α0} + op(1)

as n → ∞. This is the second row in (2.17).

The proof of Addendum 2.6 follows from a tedious but straightforward calculation.
A.2. Proofs for Section 3
Lemma A.5. (Block maxima rarely show ties) Under Conditions 3.1 and 3.3, for every
c ∈ (0, ∞), we have Pr[Mrn ,1 ∨ c = Mrn ,3 ∨ c] → 0 as n → ∞.
Proof of Lemma A.5. By the domain-of-attraction condition combined with the strong mixing property, the sequence of random vectors ((Mrn ,1 ∨ c)/σrn , (Mrn ,3 ∨ c)/σrn ) converges weakly
to the product of two independent Fréchet(α0 , 1) random variables. Apply the Portmanteau
lemma – the set {(x, y) ∈ R2 : x = y} is closed and has zero probability in the limit.
Lemma A.6. (Moments of block maxima converge) Under Conditions 3.1 and 3.4, we have, for every c ∈ (0, ∞),

    lim_{n→∞} E[ f( (Mn ∨ c)/σn ) ] = ∫₀^∞ f(x) pα0,1(x) dx

for every measurable function f : (0, ∞) → ℝ which is continuous almost everywhere and for which there exist 0 < η < ν such that |f(x)| ≤ gη,α0(x) = { x^{−α0} 1(x ≤ e) + log(x) 1(x > e) }^{2+η}.
Proof of Lemma A.6. An elementary argument shows that we may replace Mn ∨1 by Mn ∨c in
(3.3). Since c/σn → 0 as n → ∞, the sequence (Mn ∨c)/σn converges weakly to the Fréchet(α0 , 1)
distribution in view of Condition 3.1. The result follows from Example 2.21 in van der Vaart
(1998).
In order to separate maxima over consecutive blocks by a time lag of at least ℓ, we clip off the final ℓ − 1 variables within each block:

    M^{[ℓ]}_{r,i} = max{ ξt : (i − 1)r + 1 ≤ t ≤ ir − ℓ + 1 }.    (A.10)

Clearly, M_{r,i} ≥ M^{[ℓ]}_{r,i}. The probability that the maximum over a block of size r is attained by any of the final ℓ − 1 variables should be small; see Lemma A.8 below.
Lemma A.7. (Short blocks are small) Assume Condition 3.1. If ℓn = o(rn) and if α(ℓn) = o(ℓn/rn) as n → ∞, then for all ε > 0,

    Pr[ Mℓn ≥ ε σrn ] = O(ℓn/rn),    n → ∞.    (A.11)

Proof of Lemma A.7. Let Fr be the cumulative distribution function of Mr. By Bücher and Segers (2014, Lemma 7.1), for every u > 0,

    Pr[ Frn(Mℓn) ≥ u ] = O(ℓn/rn),    n → ∞.    (A.12)

Fix ε > 0. By assumption,

    lim_{n→∞} Frn(ε σrn) = exp(−ε^{−α0}).

For sufficiently large n, we have

    Pr[ Mℓn ≥ ε σrn ] ≤ Pr[ Frn(Mℓn) ≥ Frn(ε σrn) ] ≤ Pr[ Frn(Mℓn) ≥ exp(−ε^{−α0})/2 ].

Set u = exp(−ε^{−α0})/2 in (A.12) to arrive at (A.11).
Lemma A.8. (Clipping doesn't hurt) Assume Condition 3.1. If ℓn = o(rn) and if α(ℓn) = o(ℓn/rn) as n → ∞, then

    Pr[ Mrn > M_{rn−ℓn} ] → 0,    n → ∞.    (A.13)

Proof of Lemma A.8. Recall Lemma A.7. For every ε > 0 we have, by stationarity,

    Pr[ Mrn > M_{rn−ℓn} ] ≤ Pr[ M_{rn−ℓn} ≤ ε σrn ] + Pr[ Mℓn > ε σrn ].
Since σrn −ℓn /σrn → 1 as a consequence of Condition 3.1 and the fact that ℓn = o(rn ) as n → ∞,
the first term converges to exp(−ε−α0 ) as n → ∞, whereas the second one converges to 0 by
Lemma A.7. Since ε > 0 was arbitrary, Equation (A.13) follows.
Proof of Theorem 3.6. We apply Theorem 2.5 and Addendum 2.6 to the array Xn,i = Mrn,i ∨ c and vn = √kn, where c ∈ (0, ∞) is arbitrary and i ∈ {1, . . . , kn}. By Condition 3.2, we have

    lim_{n→∞} Pr[ ∀ i = 1, . . . , kn : Xn,i = Mrn,i ] = 1.
The not-all-tied property (2.9) has been established in Lemma A.5.
We need to check Condition 2.4, and in particular that the distribution of the random vector
Y in (2.15) is N3 (B, ΣY ) with B as in the statement of Theorem 3.6 and ΣY as in (2.18).
Essentially, the proof employs the Bernstein big-block-small-block method in combination with the Lindeberg central limit theorem.

Let ℓn = max{ sn, ⌊rn √α(sn)⌋ }, where sn = ⌊√rn⌋. Clearly,

    ℓn → ∞,    ℓn = o(rn)    and    α(ℓn) = o(ℓn/rn),    as n → ∞.    (A.14)
Consider the truncated and rescaled block maxima

    Z_{r,i} = (M_{r,i} ∨ c)/σr,    Z^{[ℓn]}_{r,i} = (M^{[ℓn]}_{r,i} ∨ c)/σr,

with M^{[ℓ]}_{r,i} as in (A.10). Consider the following empirical and population probability measures:

    ℙn f = (1/kn) Σ_{i=1}^{kn} f(Z_{rn,i}),    ℙ^{[ℓn]}_n f = (1/kn) Σ_{i=1}^{kn} f(Z^{[ℓn]}_{rn,i}),
    Pn f = E[ f(Z_{rn,i}) ],    P^{[ℓn]}_n f = E[ f(Z^{[ℓn]}_{rn,i}) ].

Abbreviate the tentative limit distribution by P = Fréchet(α0, 1). We will also need the following empirical processes:

    Gn = √kn (ℙn − P)    (uncentered),
    G̃n = √kn (ℙn − Pn)    (centered),
    G̃^{[ℓn]}_n = √kn (ℙ^{[ℓn]}_n − P^{[ℓn]}_n)    (centered).

Finally, the bias arising from the finite block size is quantified by the operator

    Bn = √kn (Pn − P).
Proof of Condition 2.4(i). Choose η ∈ (2/ω, ν) and 0 < α− < α0 < α+. Additional constraints on α+ will be imposed below, while the values of η and α− do not matter. Recall the function class F2(α−, α+) in (2.12). For every f ∈ F2(α−, α+), we just need to show that

    ℙn f = P f + op(1),    n → ∞.

The domain-of-attraction property (Condition 3.1) and the asymptotic moment bound (Condition 3.4) yield

    E[ℙn f] = Pn f → P f,    n → ∞,

by uniform integrability, see Lemma A.6 (note that |f| is bounded by a multiple of g0,α0 if α+ is chosen suitably small: α+ < 2α0 must be satisfied). Further,

    ℙn f − Pn f = (1/√kn) G̃n f.

Below, see (A.16), we will show that

    G̃n f = G̃^{[ℓn]}_n f + op(1) = Op(1) + op(1) = Op(1),    n → ∞.    (A.15)

It follows that, as required,

    ℙn f = (ℙn f − Pn f) + Pn f = op(1) + P f + o(1) = P f + op(1),    n → ∞.
Proof of Condition 2.4(ii). We can decompose the empirical process Gn in a stochastic term and a bias term:

    Gn = √kn (ℙn − Pn) + √kn (Pn − P) = G̃n + Bn.

For f ∈ H = {f1, f2, f3}, the bias term Bn f converges to B(f) thanks to Condition 3.5. It remains to treat the stochastic term G̃n f, for all f ∈ F2(α−, α+) [in view of the proof of item (i); see (A.15) above]. We will show that the finite-dimensional distributions of G̃n converge to those of a P-Brownian bridge, G, i.e., a zero-mean Gaussian stochastic process with covariance function given by

    cov(Gf, Gg) = P{ (f − P f)(g − P g) } = cov_P( f(X), g(X) ),    f, g ∈ F2(α−, α+).

Decompose the stochastic term in two parts:

    G̃n = G̃^{[ℓn]}_n + ∆n.    (A.16)

We will show that ∆n converges to zero in probability and that the finite-dimensional distributions of G̃^{[ℓn]}_n converge to those of G.

First, we treat the main term, G̃^{[ℓn]}_n. By the Cramér–Wold device, it suffices to show that G̃^{[ℓn]}_n g ⇝ Gg as n → ∞, where g is an arbitrary linear combination of functions f ∈ F2(α−, α+). Define

    φni(t) = exp( −i t kn^{−1/2} { g(Z^{[ℓn]}_{rn,i}) − P^{[ℓn]}_n g } ),

with i the imaginary unit. Note that the characteristic function of G̃^{[ℓn]}_n g can be written as t ↦ E[ Π_{i=1}^{kn} φni(t) ]. Successively applying Lemma 3.9 in Dehling and Philipp (2002), we obtain that

    | E[ Π_{i=1}^{kn} φni(t) ] − Π_{i=1}^{kn} E[ φni(t) ] | ≤ 2π kn max_{1≤i≤kn} α( σ{φni(t)}, σ{ Π_{j=i+1}^{kn} φnj(t) } ),

where α(A1, A2) denotes the alpha-mixing coefficient between the sigma-fields A1 and A2. Since the maxima Z^{[ℓn]}_{r,i} over different blocks are based on observations that are at least ℓn observations apart, the expression on the right-hand side of the last display is of the order O(kn α(ℓn)), which converges to 0 as a consequence of Equation (3.2). We can conclude that the weak limit of G̃^{[ℓn]}_n g is the same as the one of

    H̃^{[ℓn]}_n g = √kn { (1/kn) Σ_{i=1}^{kn} g(Z̄^{[ℓn]}_{rn,i}) − P^{[ℓn]}_n g },

where the Z̄^{[ℓn]}_{rn,i} are independent over i ∈ ℕ and have the same distribution as Z^{[ℓn]}_{rn,i}. By the classical central limit theorem for rowwise independent triangular arrays, the weak limit of H̃^{[ℓn]}_n g is Gg: first, its variance

    Var(H̃^{[ℓn]}_n g) = P^{[ℓn]}_n g² − (P^{[ℓn]}_n g)²

converges to Var(Gg) by Lemma A.6. Note that the square of any linear combination g of functions f ∈ F2(α−, α+) can be bounded by a multiple of gη,α0, after possibly decreasing the value of α+ > α0. Second, the Lyapunov condition is satisfied: for all δ > 0,

    (1/kn^{1+δ/2}) Σ_{i=1}^{kn} E[ | g(Z̄^{[ℓn]}_{rn,i}) − P^{[ℓn]}_n g |^{2+δ} ]

converges to 0 as n → ∞, again as a consequence of Lemma A.6, as |g|^{2+δ} can also be bounded by a multiple of gη,α0 if δ ∈ (0, η) and α+ > α0 are chosen sufficiently small.
[ℓ ]
Now, consider the remainder term ∆n in (A.16). Since G̃n f and G̃n n f are centered, so is
∆n f , and
!
kn
X
1
[ℓn ]
2
E[(∆n f ) ] = var(∆n f ) =
∆rn ,i f ,
var
kn
i=1
[ℓ ]
[ℓ ]
where ∆r,in f = f (Zr,i ) − f (Zr,in ). By stationarity and the Cauchy–Schwartz inequality,
kn −1
2 X
[ℓ ]
[ℓ ]
[ℓ ]
(kn − h) cov ∆rnn,1 f, ∆rnn,1+h f
E[(∆n f )2 ] = var ∆rnn,1 f +
kn
h=1
[ℓ ]
≤ 3 var ∆rnn,1 f + 2
kX
n −1
h=2
[ℓ ]
[ℓ ]
cov ∆rnn,1 f, ∆rnn,1+h f .
(A.17)
Please note that we left the term h = 1 out of the sum; whence the factor three in front of the
variance term.
Since ℓn = o(rn ) as n → ∞ by Condition 3.3, we have σrn −ℓn +1 /σrn → 1 as n → ∞ by
Condition 3.1. The asymptotic moment bound in Condition 3.4 then ensures that we may choose
δ ∈ (2/ω, ν) and α+ > α0 such that, for every f ∈ F2 (α− , α+ ), we have, by Lemma A.6,
2+δ
[ℓn ]
lim sup E ∆rn ,1 f
< ∞.
(A.18)
n→∞
[ℓ ]
On the event that Mrn ,1 = Mrn −ℓn +1 , we have ∆rnn,1 f = 0. The mixing rate in (A.14) together
with Lemma A.8 then imply
[ℓ ]
∆rnn,1 f = op (1),
n → ∞.
Lyapounov’s inequality and the asymptotic moment bound (A.18) then ensure that
2+δ
[ℓn ]
lim E ∆rn ,1 f
= 0,
f ∈ F2 (α− , α+ ).
n→∞
(A.19)
Maximum likelihood estimation for the Fréchet distribution
25
Recall Lemma 3.11 in Dehling and Philipp (2002): for random variables ξ and η and for
numbers p, q ∈ [1, ∞] such that 1/p + 1/q < 1,
|cov(ξ, η)| ≤ 10 kξkp kηkq {α(σ(ξ), σ(η))}1−1/p−1/q ,
(A.20)
where α(A1 , A2 ) denotes the strong mixing coefficient between two σ-fields A1 and A2 . Use
inequality (A.20) with p = q = 2 + δ to bound the covariance terms in (A.17):
[ℓ ]
[ℓ ]
E[(∆n f )2 ] ≤ 3 k∆rnn,1 f k22 + 20 kn k∆rnn,1 f k22+δ {α(rn )}δ/(2+δ) .
In view of (A.19) and Condition 3.3, the right-hand side converges to zero since ω < 2/δ.
A.3. Proof of Theorem 4.2
Proof of Theorem 4.2. We apply Theorem 3.6. To this end, we verify its conditions.
Proof of Condition 3.1. The second-order regular variation condition (4.5) implies the first-order one in (4.2), which is in turn equivalent to weak convergence of partial maxima as in (4.1).
Condition 3.1 follows with scaling sequence σn = an . The latter sequence is regularly varying
(Resnick, 1987, Proposition 1.11) with index 1/α0 , which implies that limn→∞ amn /an = 1
whenever limn→∞ mn /n = 1.
Proof of Condition 3.2. For any real c we have, since log F(c) < 0 and since log(kn) = o(rn) by (4.12),

    Pr[ min(Mrn,1, . . . , Mrn,kn) ≤ c ] ≤ kn F^{rn}(c) = exp{ log(kn) + rn log F(c) } → 0,    n → ∞.
Proof of Condition 3.3. Trivial, since α(ℓ) = 0 for integer ℓ ≥ 1.
Proof of Condition 3.4. This follows from Lemma C.1 in the supplementary material (which
in turn is a variant of Proposition 2.1(i) in Resnick, 1987), where we prove that the sufficient
Condition (3.4) is satisfied.
Proof of Condition 3.5. Recall Remark 4.3 and therein the functions L and g(u) = A(u)L(u).
We begin by collecting some non-asymptotic bounds on the function L. Fix δ ∈ (0, α0 ). Potter’s
theorem (Bingham, Goldie and Teugels, 1987, Theorem 1.5.6) implies that there exists some
constant x′ (δ) > 0 such that, for all u ≥ x′ (δ) and x ≥ x′ (δ)/u,
\[
\frac{L(u)}{L(ux)} \le (1+\delta) \max(x^{-\delta}, x^{\delta}). \tag{A.21}
\]
As a consequence of Theorem B.2.18 in de Haan and Ferreira (2006), accredited to Drees
(1998), there exists some further constant x′′ (δ) > 0 such that, for all u ≥ x′′ (δ) and x ≥ x′′ (δ)/u,
\[
\biggl| \frac{L(ux) - L(u)}{g(u)} \biggr| \le c(\delta) \max(x^{\rho-\delta}, x^{\rho+\delta}), \tag{A.22}
\]
for some constant c(δ) > 0. Define x(δ) = max{x′ (δ), x′′ (δ), 1}.
We are going to show Condition 3.5 for c = x(δ) and σrn = arn . For i = 1, . . . , kn , define
X_{n,i} = M_{r_n,i} ∨ x(δ). Let P_n denote the common distribution of the rescaled, truncated block maxima X_{n,i}/a_{r_n} and let P denote the Fréchet(α0, 1) distribution. Write $B_n = \sqrt{k_n}\,(P_n - P)$ and define the three-by-one vector β by
\[
\beta = \frac{\lambda}{|\rho|\,\alpha_0}
\begin{pmatrix}
2 - \gamma - \Gamma\bigl(2 + \tfrac{|\rho|}{\alpha_0}\bigr) - \Gamma'\bigl(2 + \tfrac{|\rho|}{\alpha_0}\bigr) \\[2pt]
\alpha_0\, \Gamma\bigl(2 + \tfrac{|\rho|}{\alpha_0}\bigr) - \alpha_0 \\[2pt]
1 - \Gamma\bigl(1 + \tfrac{|\rho|}{\alpha_0}\bigr)
\end{pmatrix} \tag{A.23}
\]
A. Bücher and J. Segers
if ρ < 0 and by
\[
\beta = \frac{\lambda}{\alpha_0^2}
\begin{pmatrix}
\gamma - (1-\gamma)^2 - \pi^2/6 \\
\alpha_0 (1 - \gamma) \\
\gamma
\end{pmatrix}
\]
if ρ = 0. We will show that
\[
\lim_{n\to\infty} \bigl( B_n(x^{-\alpha_0} \log x),\; B_n(x^{-\alpha_0}),\; B_n(\log x) \bigr)^T = \beta. \tag{A.24}
\]
Elementary calculations yield that M (α0 ) β = λ B(α0 , ρ) as required in (4.8).
Equation (A.24) can be shown coordinatewise. We begin by some generalities. For any f ∈ H
as in (2.13), we can write, for arbitrary x, x0 ∈ (0, ∞),
\[
f(x) =
\begin{cases}
f(x_0) - \int_x^{x_0} f'(y)\,dy, & 0 < x \le x_0, \\
f(x_0) + \int_{x_0}^{x} f'(y)\,dy, & x_0 < x < \infty.
\end{cases}
\]
By Fubini’s theorem, with Gn and G denoting the cdf-s of Pn and P, respectively,
\[
\begin{aligned}
Pf &= \int_{(0,x_0]} f(x)\,dP(x) + \int_{(x_0,\infty)} f(x)\,dP(x) \\
&= f(x_0) - \int_{x\in(0,x_0]} \int_{y=x}^{x_0} f'(y)\,dy\,dP(x) + \int_{x\in(x_0,\infty)} \int_{y=x_0}^{x} f'(y)\,dy\,dP(x) \\
&= f(x_0) - \int_{y=0}^{x_0} \int_{x\in(0,y]} dP(x)\, f'(y)\,dy + \int_{y=x_0}^{\infty} \int_{x\in(y,\infty)} dP(x)\, f'(y)\,dy \\
&= f(x_0) - \int_0^{x_0} G(y)\, f'(y)\,dy + \int_{x_0}^{\infty} \{1 - G(y)\}\, f'(y)\,dy,
\end{aligned}
\]
and the same formula holds with P and G replaced by Pn and Gn, respectively. We find that
\[
B_n f = \sqrt{k_n}\,(P_n - P)f = -\int_0^{\infty} \sqrt{k_n}\, \{G_n(y) - G(y)\}\, f'(y)\,dy.
\]
Note that
\[
G(y) = \exp(-y^{-\alpha_0})\,\mathbf{1}_{(0,\infty)}(y), \qquad
G_n(y) = F^{r_n}(a_{r_n} y)\,\mathbf{1}_{[x(\delta)/a_{r_n},\,\infty)}(y).
\]
From the definition of L in (4.11), we can write, for y ≥ x(δ)/a_{r_n},
\[
G_n(y) = \exp\Bigl( -y^{-\alpha_0}\, r_n\{-\log F(a_{r_n})\}\, \frac{L(a_{r_n} y)}{L(a_{r_n})} \Bigr).
\]
L(arn )
For the sake of brevity, we will only carry out the subsequent parts of the proof in the case where
F is ultimately continuous, so that rn {− log F (arn )} = 1 for all sufficiently large n. In that case,
B_n f = J_{n1}(f) + J_{n2}(f) where
\[
\begin{aligned}
J_{n1}(f) &= \sqrt{k_n} \int_0^{x(\delta)/a_{r_n}} \exp(-y^{-\alpha_0})\, f'(y)\,dy, \\
J_{n2}(f) &= -\sqrt{k_n} \int_{x(\delta)/a_{r_n}}^{\infty} \Bigl[ \exp\Bigl( -y^{-\alpha_0}\, \frac{L(a_{r_n} y)}{L(a_{r_n})} \Bigr) - \exp(-y^{-\alpha_0}) \Bigr] f'(y)\,dy.
\end{aligned}
\]
Let us first show that Jn1 (f ) converges to 0 for any f ∈ H. For that purpose, note that any
f ∈ H satisfies |f ′ (x)| ≤ Kx−α0 −ε−1 for any ε < 1 and for some constant K = K(ε) > 0. As a
consequence, by (4.9), for sufficiently large n,
\[
\max_{f \in H} |J_{n1}(f)| \le \{\lambda + o(1)\}\, \frac{K}{A(a_{r_n})} \int_0^{x(\delta)/a_{r_n}} \exp(-y^{-\alpha_0})\, y^{-\alpha_0-\varepsilon-1}\,dy.
\]
Since A(x) is bounded from below by a multiple of xρ−ε for sufficiently large x (by Remark 4.3
and Potter’s theorem), the expression on the right-hand side of the last display can be easily
seen to converge to 0 for n → ∞.
For the treatment of J_{n2}, note that
\[
J(f, \rho) \equiv \int_0^\infty h_\rho(y)\, \exp(-y^{-\alpha_0})\, y^{-\alpha_0}\, f'(y)\,dy
=
\begin{cases}
\int_0^\infty h_\rho(y)\, \exp(-y^{-\alpha_0})\, y^{-2\alpha_0-1}(1 - \alpha_0 \log y)\,dy, & f(y) = y^{-\alpha_0} \log y, \\
\int_0^\infty h_\rho(y)\, \exp(-y^{-\alpha_0})\, (-\alpha_0\, y^{-2\alpha_0-1})\,dy, & f(y) = y^{-\alpha_0}, \\
\int_0^\infty h_\rho(y)\, \exp(-y^{-\alpha_0})\, y^{-\alpha_0-1}\,dy, & f(y) = \log y
\end{cases}
\]
\[
=
\begin{cases}
E\bigl[ h_\rho(Y)\, Y^{-\alpha_0} (\alpha_0^{-1} - \log Y) \bigr], & f(y) = y^{-\alpha_0} \log y, \\
-E\bigl[ h_\rho(Y)\, Y^{-\alpha_0} \bigr], & f(y) = y^{-\alpha_0}, \\
\alpha_0^{-1}\, E\bigl[ h_\rho(Y) \bigr], & f(y) = \log y,
\end{cases}
\]
where Y denotes a Fréchet(α0 , 1) random variable. By Lemma B.1 this implies
\[
\begin{aligned}
J(x^{-\alpha_0} \log x, \rho)
&= \frac{1}{\rho \alpha_0}\Bigl\{ \Gamma\bigl(2 + \tfrac{|\rho|}{\alpha_0}\bigr) + \Gamma'\bigl(2 + \tfrac{|\rho|}{\alpha_0}\bigr) - 1 - \Gamma'(2) \Bigr\} \\
&= \frac{1}{|\rho|\,\alpha_0}\Bigl\{ 2 - \gamma - \Gamma\bigl(2 + \tfrac{|\rho|}{\alpha_0}\bigr) - \Gamma'\bigl(2 + \tfrac{|\rho|}{\alpha_0}\bigr) \Bigr\}, \\
J(x^{-\alpha_0}, \rho)
&= \frac{1}{\rho}\Bigl\{ \Gamma(2) - \Gamma\bigl(2 + \tfrac{|\rho|}{\alpha_0}\bigr) \Bigr\}
 = \frac{1}{|\rho|}\Bigl\{ \Gamma\bigl(2 + \tfrac{|\rho|}{\alpha_0}\bigr) - 1 \Bigr\}, \\
J(\log x, \rho)
&= \frac{1}{\rho \alpha_0}\Bigl\{ \Gamma\bigl(1 + \tfrac{|\rho|}{\alpha_0}\bigr) - 1 \Bigr\}
 = \frac{1}{|\rho|\,\alpha_0}\Bigl\{ 1 - \Gamma\bigl(1 + \tfrac{|\rho|}{\alpha_0}\bigr) \Bigr\}
\end{aligned}
\]
for ρ < 0 and
\[
\begin{aligned}
J(x^{-\alpha_0} \log x, 0) &= -\frac{1}{\alpha_0^2}\{\Gamma'(2) + \Gamma''(2)\} = \frac{1}{\alpha_0^2}\bigl\{ \gamma - (1-\gamma)^2 - \pi^2/6 \bigr\}, \\
J(x^{-\alpha_0}, 0) &= \frac{1}{\alpha_0}\,\Gamma'(2) = \frac{1-\gamma}{\alpha_0}, \\
J(\log x, 0) &= -\frac{1}{\alpha_0^2}\,\Gamma'(1) = \frac{\gamma}{\alpha_0^2}.
\end{aligned}
\]
Hence, $\beta = \lambda\,\bigl( J(x^{-\alpha_0} \log x, \rho),\, J(x^{-\alpha_0}, \rho),\, J(\log x, \rho) \bigr)^T$ and it is therefore sufficient to show that, for any f ∈ H,
\[
J_{n2}(f) \to \lambda\, J(f, \rho)
\]
as n → ∞. By the mean value theorem, we can write J_{n2}(f) as
\[
J_{n2}(f) = \sqrt{k_n}\, A(a_{r_n}) \int_{x(\delta)/a_{r_n}}^{\infty} \frac{L(a_{r_n} y) - L(a_{r_n})}{A(a_{r_n})\, L(a_{r_n})}\, \exp\bigl( -y^{-\alpha_0}\, \xi_n(y) \bigr)\, y^{-\alpha_0}\, f'(y)\,dy \tag{A.25}
\]
for some ξn (y) between L(arn y)/L(arn ) and 1. For n → ∞, the factor in front of this integral
converges to λ by assumption (4.9), while the integrand in this integral converges to
hρ (y) exp −y −α0 y −α0 f ′ (y),
pointwise in y ∈ (0, ∞), by Condition 4.1. Hence, the convergence in (A.25) follows from dominated convergence if we show that
\[
f_n(y) = \mathbf{1}\Bigl( y > \frac{x(\delta)}{a_{r_n}} \Bigr)\, \frac{L(a_{r_n} y) - L(a_{r_n})}{A(a_{r_n})\, L(a_{r_n})}\, \exp\bigl( -y^{-\alpha_0}\, \xi_n(y) \bigr)\, y^{-\alpha_0}\, f'(y)
\]
can be bounded by an integrable function on (0, ∞). We split the proof into two cases.
First, for any 1 ≥ y ≥ x(δ)/a_{r_n},
\[
\biggl| \frac{L(a_{r_n} y) - L(a_{r_n})}{A(a_{r_n})\, L(a_{r_n})} \biggr| \le c(\delta)\, y^{\rho-\delta}
\]
from (A.22) and
\[
\xi_n(y) \ge \min\Bigl( 1, \frac{L(a_{r_n} y)}{L(a_{r_n})} \Bigr) \ge (1+\delta)^{-1}\, y^{\delta}
\]
from (A.21). Moreover, for any f ∈ H, the function f′(y) is bounded by a multiple of y^{−α0−δ−1} for y ≤ 1. Therefore, for any y ∈ (0, 1),
\[
f_n(y) \le c'(\delta)\, \exp\bigl\{ -(1+\delta)^{-1}\, y^{-\alpha_0+\delta} \bigr\}\, y^{-2\alpha_0-2\delta-1+\rho}
\]
and the function on the right is integrable on (0, 1) since δ < α0.
Second, for y ∈ [1, ∞), we have
\[
\biggl| \frac{L(a_{r_n} y) - L(a_{r_n})}{A(a_{r_n})\, L(a_{r_n})} \biggr| \le c(\delta)\, y^{\rho+\delta}
\]
from (A.22) and
\[
\xi_n(y) \ge \min\Bigl( 1, \frac{L(a_{r_n} y)}{L(a_{r_n})} \Bigr) \ge (1+\delta)^{-1}\, y^{-\delta}
\]
from (A.21). Moreover, f′(y) is bounded by a multiple of y^{−1} for any y ≥ 1 and any f ∈ H. Therefore,
\[
f_n(y) \le c''(\delta)\, y^{-\alpha_0-1+\rho+\delta},
\]
which is easily integrable on [1, ∞).
Appendix B: Auxiliary results
Let $\Gamma(x) = \int_0^\infty t^{x-1} e^{-t}\,dt$ be the gamma function and let Γ′ and Γ′′ be its first and second
derivative, respectively. All proofs for this section are given in Section E in the supplementary
material.
Lemma B.1. (Moments) Let P denote the Fréchet distribution with parameter vector (α0 , 1),
for some α0 ∈ (0, ∞). For all α ∈ (−α0 , ∞),
\[
\begin{aligned}
\int_0^\infty x^{-\alpha}\,dP(x) &= \Gamma(1 + \alpha/\alpha_0), \\
\int_0^\infty x^{-\alpha} \log(x)\,dP(x) &= -\frac{1}{\alpha_0}\, \Gamma'(1 + \alpha/\alpha_0), \\
\int_0^\infty x^{-\alpha} (\log(x))^2\,dP(x) &= \frac{1}{\alpha_0^2}\, \Gamma''(1 + \alpha/\alpha_0).
\end{aligned}
\]
Lemma B.2. (Covariance matrix) Let X be a random variable whose distribution is Fréchet
with parameter vector (α0, 1). The covariance matrix of the random vector Y = (Y1, Y2, Y3)^T = (X^{−α0} log(X), X^{−α0}, log(X))^T is equal to
\[
\operatorname{cov}(Y) = \frac{1}{\alpha_0^2}
\begin{pmatrix}
1 - 4\gamma + \gamma^2 + \pi^2/3 & \alpha_0(\gamma - 2) & \pi^2/6 - \gamma \\
\alpha_0(\gamma - 2) & \alpha_0^2 & -\alpha_0 \\
\pi^2/6 - \gamma & -\alpha_0 & \pi^2/6
\end{pmatrix}.
\]
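One entry of this matrix can likewise be spot-checked numerically. The helper below and its tolerances are our own illustration, not part of the paper.

```python
import math
import random

# Monte Carlo spot check of cov(Y2, Y3) = cov(X^{-alpha0}, log X) = -1/alpha0
# for X ~ Fréchet(alpha0, 1), sampled via X = Y^{-1/alpha0}, Y ~ Exp(1).
def mc_cov(alpha0, n=200_000, seed=3):
    rng = random.Random(seed)
    s2 = s3 = s23 = 0.0
    for _ in range(n):
        # guard against an (astronomically unlikely) zero exponential draw
        x = max(rng.expovariate(1.0), 1e-300) ** (-1.0 / alpha0)
        y2, y3 = x ** (-alpha0), math.log(x)
        s2 += y2
        s3 += y3
        s23 += y2 * y3
    return s23 / n - (s2 / n) * (s3 / n)

print(mc_cov(2.0))  # close to -1/2
```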
Lemma B.3. (Fisher information) Let Pθ denote the Fréchet distribution with parameter
θ = (α, σ) ∈ (0, ∞)2 . The Fisher information Iθ = Pθ (ℓ̇θ ℓ̇Tθ ) is given by
\[
I_\theta =
\begin{pmatrix} \iota_{11} & \iota_{12} \\ \iota_{21} & \iota_{22} \end{pmatrix}
=
\begin{pmatrix}
\{(1-\gamma)^2 + \pi^2/6\}/\alpha^2 & (1-\gamma)/\sigma \\
(1-\gamma)/\sigma & \alpha^2/\sigma^2
\end{pmatrix}.
\]
Its inverse is given by
\[
I_\theta^{-1} = \frac{6}{\pi^2}
\begin{pmatrix}
\alpha^2 & (\gamma - 1)\sigma \\
(\gamma - 1)\sigma & (\sigma/\alpha)^2 \{(1-\gamma)^2 + \pi^2/6\}
\end{pmatrix}.
\]
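Since the stated inverse can be checked mechanically, here is a small numeric verification (our own sketch, with an arbitrary parameter choice) that the two displayed matrices multiply to the identity.

```python
import math

# Check that I_theta @ I_theta^{-1} = identity at (alpha, sigma) = (1.7, 2.3).
gamma_e = 0.5772156649015329  # Euler-Mascheroni constant
alpha, sigma = 1.7, 2.3

I = [[((1 - gamma_e) ** 2 + math.pi ** 2 / 6) / alpha ** 2, (1 - gamma_e) / sigma],
     [(1 - gamma_e) / sigma, alpha ** 2 / sigma ** 2]]

c = 6 / math.pi ** 2
Iinv = [[c * alpha ** 2, c * (gamma_e - 1) * sigma],
        [c * (gamma_e - 1) * sigma,
         c * (sigma / alpha) ** 2 * ((1 - gamma_e) ** 2 + math.pi ** 2 / 6)]]

prod = [[sum(I[i][k] * Iinv[k][j] for k in range(2)) for j in range(2)]
        for i in range(2)]
print(prod)
```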
Acknowledgments
The authors would like to thank two anonymous referees and an Associate Editor for their
constructive comments on an earlier version of this manuscript, and in particular for suggesting a
sharpening of Conditions 2.2 and 2.4 and for pointing out the connection between Equations (4.9)
and (4.12).
The research by A. Bücher has been supported by the Collaborative Research Center “Statistical modeling of nonlinear dynamic processes” (SFB 823, Project A7) of the German Research
Foundation, which is gratefully acknowledged. Parts of this paper were written when A. Bücher
was a visiting professor at TU Dortmund University.
J. Segers gratefully acknowledges funding by contract “Projet d’Actions de Recherche Concertées” No. 12/17-045 of the “Communauté française de Belgique” and by IAP research network
Grant P7/06 of the Belgian government (Belgian Science Policy).
References
Balakrishnan, N. and Kateri, M. (2008). On the maximum likelihood estimation of parameters of Weibull distribution based on complete and censored data. Statistics & Probability
Letters 78 2971–2975.
30
A. Bücher and J. Segers
Bingham, N. H., Goldie, C. M. and Teugels, J. L. (1987). Regular Variation. Cambridge
University Press, Cambridge.
Bücher, A. and Segers, J. (2014). Extreme value copula estimation based on block maxima
of a multivariate stationary time series. Extremes 17 495–528.
Bücher, A. and Segers, J. (2016). On the maximum likelihood estimator for the Generalized
Extreme-Value distribution. ArXiv e-prints.
Cai, J.-J., de Haan, L. and Zhou, C. (2013). Bias correction in extreme value statistics with
index around zero. Extremes 16 173–201.
de Haan, L. and Ferreira, A. (2006). Extreme Value Theory: An Introduction. Springer Series
in Operations Research and Financial Engineering. Springer, New York.
Dehling, H. and Philipp, W. (2002). Empirical process techniques for dependent data. In
Empirical process techniques for dependent data 3–113. Birkhäuser Boston, Boston, MA.
Dombry, C. (2015). Existence and consistency of the maximum likelihood estimators for the
extreme value index within the block maxima framework. Bernoulli 21 420–436.
Drees, H. (1998). On smooth statistical tail functionals. Scand. J. Statist. 25 187–210.
Drees, H. (2000). Weighted approximations of tail processes for β-mixing random variables.
Ann. Appl. Probab. 10 1274–1301.
Ferreira, A. and de Haan, L. (2015). On the block maxima method in extreme value theory:
PWM estimators. Ann. Statist. 43 276–298.
Gnedenko, B. (1943). Sur la distribution limite du terme maximum d’une série aléatoire. Ann.
of Math. (2) 44 423–453.
Gumbel, E. J. (1958). Statistics of extremes. Columbia University Press, New York.
Hosking, J. R. M., Wallis, J. R. and Wood, E. F. (1985). Estimation of the generalized
extreme-value distribution by the method of probability-weighted moments. Technometrics 27
251–261.
Hsing, T. (1991). On Tail Index Estimation Using Dependent Data. Ann. Statist. 19 1547–1569.
Leadbetter, M. R. (1983). Extremes and local dependence in stationary sequences. Z.
Wahrsch. Verw. Gebiete 65 291–306.
Marohn, F. (1994). On testing the Exponential and Gumbel distribution. In Extreme Value
Theory and Applications 159–174. Kluwer Academic Publishers.
Mikosch, T. and Stărică, C. (2000). Limit theory for the sample autocorrelations and extremes of a GARCH (1, 1) process. Ann. Statist. 28 1427–1451.
Peng, L. (1998). Asymptotically unbiased estimators for the extreme-value index. Statistics &
Probability Letters 38 107–115.
Pickands, J. (1975). Statistical Inference Using Extreme Order Statistics. Ann. Statist. 3 119–
131.
Prescott, P. and Walden, A. T. (1980). Maximum likelihood estimation of the parameters
of the generalized extreme-value distribution. Biometrika 67 723–724.
Resnick, S. I. (1987). Extreme values, regular variation, and point processes. Applied Probability.
A Series of the Applied Probability Trust 4. Springer-Verlag, New York.
Rootzén, H. (2009). Weak convergence of the tail empirical process for dependent sequences.
Stochastic Processes and their Applications 119 468–490.
Smith, R. L. (1985). Maximum Likelihood Estimation in a Class of Nonregular Cases.
Biometrika 72 67–90.
van der Vaart, A. W. (1998). Asymptotic Statistics. Cambridge Series in Statistical and
Probabilistic Mathematics 3. Cambridge University Press, Cambridge.
Supplementary Material on
“Maximum likelihood estimation for the Fréchet distribution
based on block maxima extracted from a time series”
AXEL BÜCHER and JOHAN SEGERS
Ruhr-Universität Bochum and Université catholique de Louvain
This supplementary material contains a lemma on moment convergence of block maxima used in
the proof of Theorem 4.2 (in Section C), the proof of Lemma 5.1 (in Section D) and the proofs of
auxiliary lemmas from Section B (in Section E) from the main paper. Furthermore, we present
additional Monte Carlo simulation results to quantify the finite-sample bias and variance of the
maximum likelihood estimator (in Section F).
Appendix C: Moment convergence of block maxima
The following Lemma is a variant of Proposition 2.1(i) in Resnick (1987). It is needed in the
proof of Theorem 4.2.
Lemma C.1. Let ξ1 , ξ2 , . . . be independent random variables with common distribution function
F satisfying (4.2). Let Mn = max(ξ1 , . . . , ξn ). For every β ∈ (−∞, α0 ) and any constant c > 0,
we have
\[
\limsup_{n\to\infty} E\bigl[ \{ (M_n \vee c)/a_n \}^{\beta} \bigr] < \infty.
\]
Proof of Lemma C.1. Since the case β = 0 is trivial, there are two cases to be considered:
β ∈ (−∞, 0) and β ∈ (0, α0 ). Write Zn = (Mn ∨ c)/an and note that
\[
\Pr[Z_n < y] = \Pr[M_n \vee c < a_n y] = F^n(a_n y)\, \mathbf{1}_{(c/a_n,\infty)}(y).
\]
Case β ∈ (−∞, 0). We have
\[
E[Z_n^\beta] = \int_0^\infty \Pr[Z_n^\beta > x]\,dx = \int_0^\infty \Pr[Z_n < x^{1/\beta}]\,dx
= \int_0^\infty \Pr[Z_n < y]\, |\beta|\, y^{\beta-1}\,dy
= \int_{c/a_n}^\infty F^n(a_n y)\, |\beta|\, y^{\beta-1}\,dy.
\]
We split the integration domain in two pieces. For y ∈ (1, ∞), the integrand is bounded by
|β| y β−1 , which integrates to unity. Hence we only need to consider the integral over y ∈ (c/an , 1].
We have
\[
F^n(a_n y) = \exp\{ n \log F(a_n y) \} = \exp\Bigl( -n\{-\log F(a_n)\}\, \frac{-\log F(a_n y)}{-\log F(a_n)} \Bigr).
\]
− log F (an )
Fix δ ∈ (0, α0 ). By (4.3), we have n{− log F (an )} ≥ 1 − δ for all n larger than some n(δ). By
Potter’s theorem (Bingham, Goldie and Teugels, 1987, Theorem 1.5.6), there exists x(δ) > 0
such that, for all n such that an ≥ x(δ) and for all y ∈ (x(δ)/an , 1],
\[
\frac{-\log F(a_n)}{-\log F(a_n y)} \le (1+\delta)\, y^{\alpha_0-\delta}.
\]
Without loss of generality, assume x(δ) > c. For all y ∈ (c/an , x(δ)/an ], we have
\[
\frac{-\log F(a_n)}{-\log F(a_n y)} \le \frac{-\log F(a_n)}{-\log F(x(\delta))} \le (1+\delta)\, (x(\delta)/a_n)^{\alpha_0-\delta} \le (1+\delta)\, x(\delta)^{\alpha_0-\delta}\, c^{\delta-\alpha_0}\, y^{\alpha_0-\delta}.
\]
Combining the previous two displays, we see that there exists a constant c(δ) > 0 such that
\[
\frac{-\log F(a_n y)}{-\log F(a_n)} \ge c(\delta)\, y^{-\alpha_0+\delta}
\]
for all y ∈ (c/a_n, 1]. We conclude that, for all sufficiently large n and all y ∈ (c/a_n, 1],
\[
F^n(a_n y) \le \exp\bigl\{ -c(\delta)\, y^{-\alpha_0+\delta} \bigr\},
\]
where c(δ) is a positive constant, possibly different from the one in the previous equation. For
such n, we have
\[
\int_{c/a_n}^{1} F^n(a_n y)\, |\beta|\, y^{\beta-1}\,dy \le \int_0^1 \exp\bigl\{ -c(\delta)\, y^{-\alpha_0+\delta} \bigr\}\, |\beta|\, y^{\beta-1}\,dy < \infty.
\]
Case β ∈ (0, α0). Let δ > 0 be sufficiently small such that β + δ < α0. Let x(δ) > 0 be as
in Potter’s theorem. Let n(δ) be sufficiently large such that an ≥ x(δ) ∨ c for all n ≥ n(δ). Put
K = supn≥1 n{1 − F (an )}, which is finite by (4.3) and the fact that − log F (x) ∼ 1 − F (x) for
x → ∞. For n ≥ n(δ), we have
\[
\begin{aligned}
E[Z_n^\beta] &= \int_0^\infty \Pr[Z_n > x^{1/\beta}]\,dx = \int_0^\infty \Pr[M_n \vee c > a_n x^{1/\beta}]\,dx \\
&\le 1 + \int_1^\infty \Pr[M_n > a_n x^{1/\beta}]\,dx \\
&\le 1 + \int_1^\infty n\{ 1 - F(a_n x^{1/\beta}) \}\,dx \\
&\le 1 + K \int_1^\infty \frac{1 - F(a_n x^{1/\beta})}{1 - F(a_n)}\,dx.
\end{aligned}
\]
By Potter’s theorem, the integral on the last line is bounded by
\[
(1+\delta) \int_1^\infty (x^{1/\beta})^{-\alpha_0+\delta}\,dx.
\]
The latter integral is finite, since (−α0 + δ)/β < −1.
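The conclusion of Lemma C.1 can be illustrated numerically for the |Cauchy| distribution treated in Section F, for which α0 = 1 and a_n = 2n/π. The helper below is our own sketch (names and parameters ours); it samples M_n exactly by inverse transform, M_n = F^{−1}(U^{1/n}) = tan((π/2) U^{1/n}).

```python
import math
import random

# E[{(M_n v c)/a_n}^beta] stays bounded in n for beta < alpha0 = 1.
# For beta = -1/2 the values approach E[Z^beta] = Gamma(1 - beta) = Gamma(3/2)
# with Z ~ Fréchet(1, 1).
def moment(n, beta=-0.5, c=1.0, reps=50_000, seed=7):
    rng = random.Random(seed)
    a_n = 2 * n / math.pi
    total = 0.0
    for _ in range(reps):
        m = math.tan(0.5 * math.pi * rng.random() ** (1.0 / n))
        total += (max(m, c) / a_n) ** beta
    return total / reps

print([round(moment(n), 3) for n in (10, 100, 1000)])
```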
Appendix D: Proofs for Section 5
Proof of Lemma 5.1. We only give a sketch proof for the case p = 2, the general case being
similar, but notationally more involved. Set b1 = b and b2 = 1 − b, so that b(2) = b ∨ (1 − b).
Clearly,
\[
\begin{aligned}
\Pr(M_n \le x) &= \Pr\{ Z_0 \le x(1-b)^{-1},\, Z_1 \le x b_{(2)}^{-1},\, \dots,\, Z_{n-1} \le x b_{(2)}^{-1},\, Z_n \le x b^{-1} \} \\
&= F(x(1-b)^{-1}) \cdot F(x b^{-1}) \cdot F^{n-1}(x b_{(2)}^{-1}).
\end{aligned}
\]
As a consequence, with b_{(1)} = b ∧ (1 − b),
\[
\begin{aligned}
H_n(x) = \Pr(M_n \le x\, b_{(2)}\, a_n)
&= F\Bigl( a_n x\, \tfrac{b_{(2)}}{1-b} \Bigr) \cdot F\Bigl( a_n x\, \tfrac{b_{(2)}}{b} \Bigr) \cdot F^{n-1}(a_n x) \\
&= F\Bigl( a_n x\, \tfrac{b_{(2)}}{b_{(1)}} \Bigr) \cdot F^{n}(a_n x).
\end{aligned}
\]
Since, by assumption, F n (xan ) → exp(−x−α0 ), Condition 3.1 is satisfied.
Condition 3.3 is trivial, since the process is p-dependent.
The proof of Condition 3.4 can be carried out along the lines of the proof of Lemma C.1.
For β < 0, simply use that
\[
\Pr\{ (M_n \vee c)/\sigma_n \le x \} = H_n(x)\, \mathbf{1}(x \ge c/\sigma_n) \le F^n(x a_n)\, \mathbf{1}(x \ge c/\sigma_n),
\]
while, for β > 0,
\[
\Pr(M_n > \sigma_n x^{1/\beta}) \le 2n \cdot \Pr\bigl( Z_1 > \sigma_n x^{1/\beta} / b_{(2)} \bigr) = 2n\{ 1 - F(a_n x^{1/\beta}) \},
\]
for any x > 1.
Since log kn = o(rn ), Condition 3.2 follows from
\[
\Pr[\min(M_{r_n,1}, \dots, M_{r_n,k_n}) \le c] \le k_n \Pr(M_{r_n} \le c)
= \exp\{ \log k_n + (r_n - 1) \log F(c\, b_{(2)}^{-1}) \} \cdot F(c(1-b)^{-1}) \cdot F(c\, b^{-1}).
\]
Finally, consider Condition 3.5. As in the proof of Theorem 4.2, write
\[
\sqrt{k_n}\, \Bigl( E\bigl[ f\bigl( (M_{r_n} \vee c)/\sigma_{r_n} \bigr) \bigr] - Pf \Bigr) = -\int_0^\infty \sqrt{k_n}\, \{ \tilde H_n(y) - G(y) \}\, f'(y)\,dy,
\]
where G(y) = exp(−y^{−α0}) and where
\[
\tilde H_n(y) = \Pr\{ (M_{r_n} \vee c)/\sigma_{r_n} \le y \} = A_n(y)\, G_n(y)
\]
with
\[
A_n(y) = F\Bigl( a_{r_n} y\, \tfrac{b_{(2)}}{b_{(1)}} \Bigr), \qquad
G_n(y) = F^{r_n}(y\, a_{r_n})\, \mathbf{1}(y \ge c/\sigma_{r_n}).
\]
Write
\[
\begin{aligned}
\int_0^\infty \sqrt{k_n}\, \{ \tilde H_n(y) - G(y) \}\, f'(y)\,dy
&= -\int_0^{c/\sigma_{r_n}} \sqrt{k_n}\, G(y)\, f'(y)\,dy \\
&\quad + \int_{c/\sigma_{r_n}}^\infty \sqrt{k_n}\, A_n(y)\, \{ G_n(y) - G(y) \}\, f'(y)\,dy \\
&\quad + \int_{c/\sigma_{r_n}}^\infty \sqrt{k_n}\, \{ A_n(y) - 1 \}\, G(y)\, f'(y)\,dy. \qquad \text{(D.1)}
\end{aligned}
\]
The first integral converges to 0 as shown in the proof of Theorem 4.2, treatment of Jn1 (f ). The
integrand of the second integral converges pointwise to the same limit as in the iid case. The
integrand can further be bounded by an integrable function as shown in the treatment of Jn2 in
the proof of Theorem 4.2, after splitting the integration domain at 1. Hence, the limit of that
integral is the same as in the iid case by dominated convergence.
Consider the last integral in the latter display. Decompose
\[
\sqrt{k_n}\, |A_n(y) - 1| = \frac{\sqrt{k_n}}{r_n} \cdot \frac{1 - A_n(y)}{-\log A_n(y)} \cdot \frac{-\log A_n(y)}{-\log F(a_{r_n})},
\]
where we used the fact that rn {− log F (arn )} = 1. The second factor is bounded by 1, since
log(x) ≤ x − 1 for all x > 0. Consider the third factor. With L(x) = {−log F(x)} x^{α0}, we have
\[
\frac{-\log A_n(y)}{-\log F(a_{r_n})} = \Bigl( \frac{y\, b_{(2)}}{b_{(1)}} \Bigr)^{-\alpha_0} \frac{L(a_{r_n}\, y\, b_{(2)}/b_{(1)})}{L(a_{r_n})}.
\]
The fraction on the right-hand side is bounded by a multiple of y^δ ∨ y^{−δ} by Potter’s theorem, for some 0 < δ < α0. Further note that, up to a factor, f′(y) ≤ y^{−α0−δ−1} for y ≤ 1 and f′(y) ≤ y^{−1} for y > 1. We obtain that the integrand of the third integral on the right-hand side of (D.1) is bounded by a multiple of
\[
\sqrt{k_n}/r_n \cdot \exp(-y^{-\alpha_0})\, y^{-2\alpha_0-2\delta-1}
\]
for y ≤ 1 and by a multiple of
\[
\sqrt{k_n}/r_n \cdot y^{-\alpha_0-1+\delta}
\]
for y > 1. Both functions are integrable on their respective domains. Since $k_n = o(n^{2/3})$ is equivalent to $\sqrt{k_n} = o(r_n)$, the third integral converges to 0. Hence, Condition 3.5 is satisfied.
Appendix E: Proofs for Section B
Proof of Lemma B.1. If Y is a unit exponential random variable, then the law of Y^{−1/α0} is equal to P. The integrals stated in the lemma are equal to E[Y^{α/α0}], −α0^{−1} E[Y^{α/α0} log(Y)], and α0^{−2} E[Y^{α/α0} (log Y)^2], respectively. First,
\[
\int_0^\infty x^{-\alpha}\,dP(x) = \int_0^\infty y^{\alpha/\alpha_0} \exp(-y)\,dy = \Gamma(1 + \alpha/\alpha_0).
\]
Second,
\[
\int_0^\infty x^{-\alpha} \log(x)\,dP(x) = -\frac{1}{\alpha_0} \int_0^\infty \log(y)\, y^{\alpha/\alpha_0} \exp(-y)\,dy = -\frac{1}{\alpha_0}\, \Gamma'(1 + \alpha/\alpha_0).
\]
Third,
\[
\int_0^\infty x^{-\alpha} (\log x)^2\,dP(x) = \frac{1}{\alpha_0^2} \int_0^\infty (\log y)^2\, y^{\alpha/\alpha_0} \exp(-y)\,dy = \frac{1}{\alpha_0^2}\, \Gamma''(1 + \alpha/\alpha_0).
\]
Proof of Lemma B.2. Recall a few special values of the first two derivatives of the Gamma function:
\[
\begin{aligned}
&\Gamma'(1) = -\gamma, \qquad \Gamma'(2) = 1 - \gamma, \qquad \Gamma'(3) = 3 - 2\gamma, \\
&\Gamma''(1) = \gamma^2 + \pi^2/6, \qquad \Gamma''(2) = (1-\gamma)^2 + \pi^2/6 - 1, \qquad \Gamma''(3) = 2\{(3/2-\gamma)^2 + \pi^2/6 - 5/4\}.
\end{aligned}
\]
Applying the formulas in Lemma B.1 with α ∈ {0, α0, 2α0}, we find
\[
\begin{aligned}
\operatorname{var}(Y_1) &= \alpha_0^{-2}\bigl\{ \Gamma''(3) - (\Gamma'(2))^2 \bigr\} = \alpha_0^{-2}(1 - 4\gamma + \gamma^2 + \pi^2/3), \\
\operatorname{var}(Y_2) &= \Gamma(3) - (\Gamma(2))^2 = 1, \\
\operatorname{var}(Y_3) &= \alpha_0^{-2}\bigl\{ \Gamma''(1) - (\Gamma'(1))^2 \bigr\} = \alpha_0^{-2}\, \pi^2/6,
\end{aligned}
\]
as well as
\[
\begin{aligned}
\operatorname{cov}(Y_1, Y_2) &= \alpha_0^{-1}\bigl\{ (-\Gamma'(3)) - (-\Gamma'(2))\,\Gamma(2) \bigr\} = \alpha_0^{-1}(\gamma - 2), \\
\operatorname{cov}(Y_1, Y_3) &= \alpha_0^{-2}\bigl\{ \Gamma''(2) - (-\Gamma'(2))(-\Gamma'(1)) \bigr\} = \alpha_0^{-2}(\pi^2/6 - \gamma), \\
\operatorname{cov}(Y_2, Y_3) &= \alpha_0^{-1}\bigl\{ (-\Gamma'(2)) - \Gamma(2)\,(-\Gamma'(1)) \bigr\} = -\alpha_0^{-1}.
\end{aligned}
\]
Proof of Lemma B.3. If X ∼ P_{(α,σ)}, then Z = X/σ ∼ P_{(α,1)}. Therefore, by (2.1) and Lemma B.1,
\[
\begin{aligned}
\iota_{11} &= E\bigl[ \{ \alpha^{-1} + (Z^{-\alpha} - 1) \log Z \}^2 \bigr] \\
&= \frac{1}{\alpha^2}\bigl[ 1 - 2\{ \Gamma'(2) - \Gamma'(1) \} + \{ \Gamma''(3) - 2\Gamma''(2) + \Gamma''(1) \} \bigr] \\
&= \frac{1}{\alpha^2}\{ (1-\gamma)^2 + \pi^2/6 \}.
\end{aligned}
\]
Similarly, by (2.1) and (2.2),
\[
\begin{aligned}
\iota_{12} &= \frac{\alpha}{\sigma}\, E\bigl[ (1 - Z^{-\alpha})\{ \alpha^{-1} + (Z^{-\alpha} - 1) \log Z \} \bigr] \\
&= \frac{\alpha}{\sigma}\bigl[ \alpha^{-1}\{ \Gamma(1) - \Gamma(2) \} + \alpha^{-1}\{ \Gamma'(1) - 2\Gamma'(2) + \Gamma'(3) \} \bigr] \\
&= \frac{1-\gamma}{\sigma}.
\end{aligned}
\]
Finally,
\[
\iota_{22} = \frac{\alpha^2}{\sigma^2}\, E[(1 - Z^{-\alpha})^2] = \frac{\alpha^2}{\sigma^2}\{ \Gamma(1) - 2\Gamma(2) + \Gamma(3) \} = \frac{\alpha^2}{\sigma^2}.
\]
Appendix F: Finite-sample bias and variance
We work out the second-order Condition 4.1 and the expressions for the asymptotic bias and
variance of the maximum likelihood estimator of the Fréchet shape parameter for the case of
block maxima extracted from an independent random sample from the absolute value of a Cauchy
distribution. Furthermore, we compare these expressions to those obtained in finite samples from
Monte Carlo simulations.
If the random variable ξ is Cauchy-distributed, then |ξ| has distribution function
\[
F(x) = \Pr\{ |\xi| \le x \} = \frac{2}{\pi} \arctan(x)\, \mathbf{1}(x > 0), \qquad x \in \mathbb{R}.
\]
Based on the asymptotic expansion
\[
-\log \arctan(x) = \log\frac{2}{\pi} + \frac{2}{\pi x} + \frac{2}{\pi^2 x^2} + O\Bigl(\frac{1}{x^3}\Bigr), \qquad x \to \infty,
\]
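This expansion can be checked numerically: the remainder after the three displayed terms should decay at the cubic rate. The helper name `remainder` is ours.

```python
import math

# Remainder of -log(arctan x) after log(2/pi) + 2/(pi*x) + 2/(pi^2 * x^2);
# it should shrink roughly like 1/x^3.
def remainder(x):
    exact = -math.log(math.atan(x))
    approx = math.log(2 / math.pi) + 2 / (math.pi * x) + 2 / (math.pi ** 2 * x ** 2)
    return abs(exact - approx)

print(remainder(10), remainder(100))
```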
one can show that −log F is regularly varying at infinity with index −α0 = −1 and that the limit relation
\[
\lim_{u\to\infty} \frac{1}{A(u)} \Bigl\{ \frac{-\log F(ux)}{-\log F(u)} - x^{-1} \Bigr\} = x^{-1} h_\rho(x)
\]
is satisfied for
\[
\rho = -1 \qquad \text{and} \qquad A(u) = -\frac{1}{1 + \pi u}.
\]
In addition, the normalizing sequence (a_n)_{n∈ℕ} can be chosen as a_n = 2n/π.
By Theorem 4.2, these facts imply that the theoretical bias and variance of α̂_n are given by
\[
\mathrm{Bias} = -A(a_{r_n})\, \frac{6}{\pi^2}\, b_1(|\rho|) = \frac{12}{\pi^2 (1 + 2 r_n)}, \qquad
\mathrm{Variance} = \frac{1}{k_n}\, \frac{\pi^2}{6}.
\]
In particular, the mean squared error is of the order O(1/r_n^2) + O(1/k_n), which can be minimized by balancing the block size r_n and the number of blocks k_n, that is, by choosing r_n = O(n^{1/3}) and k_n = O(n^{2/3}) so that r_n^2 ≈ k_n. More precisely, the equations n = kr and $\bigl\{ \tfrac{12}{\pi^2(1+2r)} \bigr\}^2 = \tfrac{1}{k}\, \tfrac{\pi^2}{6}$ imply that $\tfrac{864}{\pi^6}\, n = r(1+2r)^2$, which for n = 1 000 implies that r ≈ 6 and k ≈ 174. These values are quite close to the optimal finite-sample values of r = 4 and k = 250 to be observed in the upper-left panel of Figure 2.
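The balancing equation can be solved numerically; the sketch below (function name and bisection tolerances ours) reproduces r ≈ 6 and k ≈ 174 for n = 1 000.

```python
import math

# Solve (864 / pi^6) * n = r * (1 + 2r)^2 for r by bisection and set k = n/r,
# i.e. equate the squared bias and the variance of the shape estimator.
def optimal_blocks(n):
    target = 864.0 / math.pi ** 6 * n
    lo, hi = 1e-6, float(n)
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if mid * (1.0 + 2.0 * mid) ** 2 < target:
            lo = mid
        else:
            hi = mid
    r = 0.5 * (lo + hi)
    return r, n / r

r, k = optimal_blocks(1000)
print(round(r), round(k))  # -> 6 174
```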
In Figure 4, we depict results of a Monte-Carlo simulation study on the finite-sample approximation of the theoretical bias, multiplied by r, and of the theoretical variance, multiplied by k.
Three scenarios have been considered:
• fixed number of blocks k = 200 and block sizes r = 4, . . . , 50;
• fixed block size r = 25 and number of blocks k = 40, . . . , 400;
• block sizes r = 8, 9, . . . , 32 and number of blocks k = r2 .
We find that the variance approximation improves with increasing r or k. For the bias approximation to improve, both r and k must increase.
Figure 4. Simulation results in the iid Cauchy model (see Section F). Theoretical bias multiplied by r and theoretical variance multiplied by k, together with finite-sample approximations based on N = 5 000 simulation runs. In the upper left picture, the number of blocks is fixed at k = 200; in the upper right picture, the size of the blocks is fixed at r = 25; in the lower picture, finally, the number of blocks k and the block size r satisfy r^2 = k, as suggested by (approximately) minimizing the mean squared error.
Continuation-passing Style Models Complete for
Intuitionistic Logic
arXiv:1102.1061v2 [math.LO] 3 Mar 2011
Danko Ilik
University “Goce Delčev” – Štip
Address: Faculty of Informatics, PO Box 201, Štip, Macedonia
E-mail: [email protected]
Abstract
A class of models is presented, in the form of continuation monads polymorphic for
first-order individuals, that is sound and complete for minimal intuitionistic predicate
logic. The proofs of soundness and completeness are constructive and the computational content of their composition is, in particular, a β-normalisation-by-evaluation
program for simply typed lambda calculus with sum types. Although the inspiration
comes from Danvy’s type-directed partial evaluator for the same lambda calculus, the essential use there of delimited control operators (i.e. computational effects) is avoided.
The role of polymorphism is crucial – dropping it allows one to obtain a notion of
model complete for classical predicate logic. The connection between ours and Kripke
models is made through a strengthening of the Double-negation Shift schema.
Key words: intuitionistic logic, completeness, Kripke models, Double-negation Shift,
normalization by evaluation
2000 MSC: 03B20, 03B35, 03B40, 68N18, 03F55, 03F50, 03B55
1. Introduction
Although Kripke models are standard semantics for intuitionistic logic, there is
as yet no (simple) constructive proof of their completeness when one considers all
logical connectives. While Kripke’s original proof [20] was classical, Veldman gave an
intuitionistic one [26] by using Brouwer’s Fan Theorem to handle disjunction and the
existential quantifier. To see what the computational content behind Veldman’s proof
is, one might consider a realisability interpretation of the Fan Theorem (for example
[3]), but, all known realisers being defined by general recursion, due to the absence of
an elementary proof of their termination, it is not clear whether one can think of the
program using them as a constructive proof or not.
On the other hand, a connection between normalisation-by-evaluation (NBE) [4]
for simply typed lambda calculus, λ→ , and completeness for Kripke models for the
fragment {∧, ⇒, ∀} has been made [6, 15]. We review this connection in Section 2.
There we also look at Danvy’s extension [8] of NBE from λ→ to λ→∨ , simply typed
lambda calculus with sum types. Even though Danvy’s algorithm is simple and elegant, he uses the full power of delimited control operators which do not yet have a
Preprint submitted to Elsevier
March 25, 2018
typing system that permits to understand them logically. We deal with that problem
in Section 3, by modifying the notion of Kripke model so that we can give a proof
of completeness for full intuitionistic logic in continuation-passing style, that is, without relying on having delimited control operators in our meta-language. In Section 4,
we extract the algorithm behind the given completeness proof, a β-NBE algorithm for
λ→∨ . In Section 5, we stress the importance of our models being dependently typed, by
comparing them to similar models that are complete for classical logic [18]. We there
also relate our and Kripke models by showing that the two are equivalent in presence
of a strengthening of the Double-negation Shift schema [24, 25]. We conclude with
Section 6 by mentioning related work.
The proofs of Section 3 have been formalised in the Coq proof assistant in [16],
which also represents an implementation of the NBE algorithm.
2. Normalisation-by-Evaluation as Completeness
In [4], Berger and Schwichtenberg presented a proof of normalisation of λ→ which
does not involve reasoning about the associated reduction relation. Instead, they interpret λ-terms in a domain, or ambient meta-language, using an evaluation function,
J−K : Λ → D,
and then they define an inverse to this function, which from the denotation in D directly
extracts a term in βη-long normal form. The inverse function ↓, called reification, is
defined by recursion on the type τ of the term, at the same time defining an auxiliary
function ↑, called reflection:
↓τ : D → Λ-nf
↓τ := a 7→ a                             (τ atomic)
↓τ→σ := S 7→ λa. ↓σ (S · ↑τ a)           (a fresh)

↑τ : Λ-ne → D
↑τ := a 7→ a                             (τ atomic)
↑τ→σ := e 7→ S 7→ ↑σ (e (↓τ S))
Here, S ranges over members of D, and we used 7→ and · for abstraction and application at the meta-level. The subclasses of normal and neutral λ-terms are given by the
following inductive definition.
Λ-nf ∋ r := λaτ .rσ | eτ          (λ-terms in normal form)
Λ-ne ∋ e := aτ | eτ→σ rτ          (neutral λ-terms)
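For readers who prefer executable notation, here is a minimal sketch of the reify/reflect pair and the evaluator for the arrow-only fragment, transcribed into Python. All names and the term encoding are our own, not from [4] or [6]; values of arrow type are represented by host-language functions.

```python
import itertools

# Types: ('atom',) or ('arrow', t, s).  Terms: ('var', x), ('lam', x, body),
# ('app', f, a).  Semantic values of arrow type are Python functions;
# values of atomic type are (neutral) terms.
_counter = itertools.count()

def fresh():
    return f"a{next(_counter)}"

def reify(ty, val):            # "down arrow": value -> normal form
    if ty[0] == 'atom':
        return val
    _, t, s = ty
    a = fresh()
    return ('lam', a, reify(s, val(reflect(t, ('var', a)))))

def reflect(ty, term):         # "up arrow": neutral term -> value
    if ty[0] == 'atom':
        return term
    _, t, s = ty
    return lambda v: reflect(s, ('app', term, reify(t, v)))

def evaluate(term, env):
    tag = term[0]
    if tag == 'var':
        return env[term[1]]
    if tag == 'lam':
        _, x, body = term
        return lambda v: evaluate(body, {**env, x: v})
    _, f, arg = term           # application
    return evaluate(f, env)(evaluate(arg, env))

def nbe(ty, term):
    return reify(ty, evaluate(term, {}))

ident = ('lam', 'x', ('var', 'x'))
arrow = ('arrow', ('atom',), ('atom',))
print(nbe(arrow, ('app', ('lam', 'f', ('var', 'f')), ident)))
```

Running `nbe` on the redex (λf.f)(λx.x) at type τ → τ produces an η-long, β-normal identity, without ever inspecting a reduction relation.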
It was a subsequent realisation of Catarina Coquand [6], that the evaluation algorithm J·K is also the one underlying the Soundness Theorem for minimal intuitionistic
logic (with ⇒ as the sole logical connective) with respect to Kripke models, and that
the reification algorithm ↓ is also the one underlying the corresponding Completeness
Theorem.
Definition 2.1. A Kripke model is given by a preorder (K, ≤) of possible worlds, a binary relation of forcing w ⊩ X between worlds and atomic formulae, and a family of domains of quantification D(−), such that,

  for all w′ ≥ w, w ⊩ X → w′ ⊩ X, and
  for all w′ ≥ w, D(w) ⊆ D(w′).

The relation of forcing is then extended from atomic to composite formulae by the clauses:

  w ⊩ A ∧ B := w ⊩ A and w ⊩ B
  w ⊩ A ∨ B := w ⊩ A or w ⊩ B
  w ⊩ A ⇒ B := for all w′ ≥ w, w′ ⊩ A ⇒ w′ ⊩ B
  w ⊩ ∀x.A(x) := for all w′ ≥ w and t ∈ D(w′), w′ ⊩ A(t)
  w ⊩ ∃x.A(x) := for some t ∈ D(w), w ⊩ A(t)
  w ⊩ ⊥ := false
  w ⊩ ⊤ := true
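As an executable illustration of these clauses (our own sketch; propositional connectives only, since the quantifier clauses additionally need the domains D(w)), the classic two-world model below fails to force p ∨ ¬p at the root:

```python
# Worlds, the relation w' >= w, and a monotone atomic forcing relation.
WORLDS = ['w0', 'w1']
GEQ = {'w0': ['w0', 'w1'], 'w1': ['w1']}
ATOMS = {'w0': set(), 'w1': {'p'}}

def forces(w, phi):
    tag = phi[0]
    if tag == 'atom':
        return phi[1] in ATOMS[w]
    if tag == 'bot':
        return False
    if tag == 'and':
        return forces(w, phi[1]) and forces(w, phi[2])
    if tag == 'or':
        return forces(w, phi[1]) or forces(w, phi[2])
    # 'imp': quantify over all future worlds, as in the clause for A => B
    return all(not forces(v, phi[1]) or forces(v, phi[2]) for v in GEQ[w])

p = ('atom', 'p')
lem = ('or', p, ('imp', p, ('bot',)))
print(forces('w0', lem), forces('w1', lem))  # -> False True
```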
More precisely, the following well-known statements hold and their proofs have
been machine-checked [7, 15] for the logic fragment generated by the connectives {⇒, ∧, ∀}.
Theorem 2.2 (Soundness). If Γ ⊢ p : A then, in any Kripke model, for any world w, if w ⊩ Γ then w ⊩ A.
Proof. By a simple induction on the length of the derivation.
Theorem 2.3 (Model Existence or Universal Completeness). There is a model U (the “universal model”) such that, given a world w of U, if w ⊩ A, then there exists a term p and a derivation in normal form w ⊢ p : A.
Proof. The universal model U is built by setting:
• K to be the set of contexts Γ;
• “≤” to be the subset relation of contexts;
• “Γ ⊩ X” to be the set of derivations in normal form Γ ⊢nf X, for X an atomic formula.

One then proves simultaneously, by induction on the complexity of A, that the two functions defined above, reify (↓) and reflect (↑), are correct, that is, that ↓ maps a member of Γ ⊩ A to a normal proof term (derivation) Γ ⊢ p : A, and that ↑ maps a neutral term (derivation) Γ ⊢ e : A to a member of Γ ⊩ A.
Corollary 2.4 (Completeness (usual formulation)). If in any Kripke model, at any world w, w ⊩ Γ implies w ⊩ A, then there exists a term p and a derivation Γ ⊢ p : A.
Proof. If w ⊩ Γ → w ⊩ A in any Kripke model, then also w ⊩ Γ → w ⊩ A in the model U above. Since from the ↑-part of Theorem 2.3 we have that Γ ⊩ Γ, then from the ↓-part of the same theorem there exists a term p such that Γ ⊢ p : A.
If one wants to extend this technique for proving completeness for Kripke models to
the rest of the intuitionistic connectives, ⊥, ∨ and ∃, the following meta-mathematical
problems appear, which have been investigated in the middle of the last century. At
that time, Kreisel, based on observations of Gödel, showed (Theorem 1 of [19]) that
for a wide range of intuitionistic semantics, into which Kripke’s can also be fit:
• If one can prove the completeness for the negative fragment of formulae (built
using ∧, ⊥, ⇒, ∀, and negated atomic formulae, X ⇒ ⊥) then one can prove
Markov’s Principle. In view of Theorem 2.3, this implies that having a completeness proof cover ⊥ means being able to prove Markov’s Principle – which
is known to be independent of many constructive logical systems, like Heyting
Arithmetic or Constructive Type Theory.
• If one can prove the completeness for all connectives, i.e. including ∨ and ∃,
then one can prove a strengthening¹ of the Double-negation Shift schema on Σ⁰₁-formulae, which is also independent because it implies Markov’s Principle.
We mentioned that Veldman [26] used Brouwer’s Fan Theorem to handle ∨ and ∃,
but to handle ⊥ he included in his version of Kripke models an “exploding node” predicate on worlds, and defined w ⊩ ⊥ to hold exactly at the exploding nodes. We remark in passing that Veldman’s modification does not defy Kripke’s original definition, but only makes it more regular: if in Definition 2.1 one considers ⊥ as an atomic formula, rather than a composite one, one falls back to Veldman’s definition.
One can also try to straightforwardly extend the NBE-Completeness proof to cover
disjunction (the existential quantifier is analogous) and see what happens. If one does
that, one sees that a problem appears in the case of reflection of sum, ↑A∨B . There,
given a neutral λ-term that derives A ∨ B, one is supposed to prove that w ⊩ A ∨ B holds, which by definition means to prove that either w ⊩ A or w ⊩ B holds. But,
since the input λ-term is neutral, it represents a blocked computation from which we
will only be able to see whether A or B was derived, once we substitute values for the
contained free variables that block the computation.
That is where the solution of Olivier Danvy appears. In [8], he used the full power2
of the delimited control operators shift (Sk.p) and reset (#) [10] to give the following
1 A special case of D-DNS+ from page 13.
2 We say “full power” because his usage of delimited control operators is strictly more powerful than what
is possible with (non-delimited) control operators like call/cc. Danvy’s program makes non-tail calls with
continuations, while in the CPS translation of a program that uses call/cc all continuation calls are tail calls.
normalisation-by-evaluation algorithm for λ→∨ :
↓τ : D → Λ-nf
↓τ   := a ↦ a                                            (τ atomic)
↓τ→σ := S ↦ λa. # ↓σ (S · ↑τ a)                          (a fresh)
↓τ∨σ := S ↦ ι1(↓τ S′) if S = inl·S′ ;  ι2(↓σ S′) if S = inr·S′

↑τ : Λ-ne → D
↑τ   := a ↦ a                                            (τ atomic)
↑τ→σ := e ↦ S ↦ ↑σ (e (↓τ S))
↑τ∨σ := e ↦ Sκ. case e of (a1. #(κ·(inl·(↑τ a1))) ‖ a2. #(κ·(inr·(↑σ a2))))   (ai fresh)
We characterise explicitly normal and neutral λ-terms by the following inductive definitions.
Λ-nf ∋ r := eτ | λaτ .rσ | ιτ1 r | ιτ2 r
Λ-ne ∋ e := aτ | eτ→σ rτ | case eτ∨σ of aτ1 .r1ρ kaσ2 .r2ρ
Given Danvy’s NBE algorithm, which is simple and appears correct3, does this
mean that we can obtain a constructive proof of completeness for Kripke models if
we permit delimited control operators in our ambient meta-language? Unfortunately,
not, or not yet, because the available typing systems for them are either too complex
(type-and-effect systems [10] change the meaning of implication), or do not permit to
type-check the algorithm as a completeness proof (for example the typing system from
[12], or the one from Chapter 4 of [17]).
3. Kripke-CPS Models and Their Completeness
However, there is a close connection between shift and reset and the continuation-passing style (CPS) translations [11]. We can thus hope to give a normalisation-by-evaluation proof for full intuitionistic logic in continuation-passing style.
In this section we present a notion of model that we developed following this idea,
by suitably inserting continuations into the notion of Kripke model. We prove that the
new models are sound and complete for full intuitionistic predicate logic.
3 For more details on the computational behaviour of shift/reset and the algorithm itself, we refer the
reader to the original paper [8] and to Section 3.2 of [17].

Definition 3.1. An Intuitionistic Kripke-CPS model (IK-CPS) is given by:
• a preorder (K, ≤) of possible worlds;
• a binary relation (−) ⊥(−) between worlds and formulae, labelling a world as exploding for a formula (notation w ⊥C);
• a binary relation (−) ⊩s (−) of strong forcing between worlds and atomic formulae, such that
  for all w′ ≥ w, w ⊩s X → w′ ⊩s X;
• and a domain of quantification D(w) for each world w, such that
  for all w′ ≥ w, D(w) ⊆ D(w′).
The relation (−) ⊩s (−) of strong forcing is extended from atomic to composite formulae
inductively and by simultaneously defining one new relation, (non-strong) forcing:
⋆ A formula A is forced in the world w (notation w ⊩ A) if, for any formula C,
  ∀w′ ≥ w. (∀w′′ ≥ w′. w′′ ⊩s A → w′′ ⊥C) → w′ ⊥C;
• w ⊩s A ∧ B if w ⊩ A and w ⊩ B;
• w ⊩s A ∨ B if w ⊩ A or w ⊩ B;
• w ⊩s A ⇒ B if for all w′ ≥ w, w′ ⊩ A implies w′ ⊩ B;
• w ⊩s ∀x.A(x) if for all w′ ≥ w and all t ∈ D(w′), w′ ⊩ A(t);
• w ⊩s ∃x.A(x) if w ⊩ A(t) for some t ∈ D(w).
Remark 3.2. Certain details of the definition have been put into boxes to facilitate the
comparison carried out in Section 5.
Lemma 3.3. Strong forcing and (non-strong) forcing are monotone in any IK-CPS
model, that is, given w′ ≥ w, w ⊩s A implies w′ ⊩s A, and w ⊩ A implies w′ ⊩ A.
Proof. Monotonicity of strong forcing is proved by induction on the complexity of the
formula, while that of forcing is by definition. The proof is easy and available in the
Coq formalisation.
Lemma 3.4. The following monadic operations are definable for IK-CPS models:
“unit”  η(·) : w ⊩s A → w ⊩ A
“bind”  (·)∗(·) : (∀w′ ≥ w. w′ ⊩s A → w′ ⊩ B) → w ⊩ A → w ⊩ B
Proof. Easy, using Lemma 3.3. If we leave implicit the handling of formulae C,
worlds, and monotonicity, we have the following procedures behind the proofs.
η(α) = κ ↦ κ · α
(φ)∗(α) = κ ↦ α · (β ↦ φ · β · κ)
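Read computationally, w ⊩ A is a computation expecting a continuation from strong-forcing values to answers. The two procedures are then the unit and bind of the continuation monad. The following Python sketch is ours, for illustration only; worlds, monotonicity, and the parameter C are elided, just as in the proof above (the run operation anticipates Lemma 3.9):

```python
# A "forced" value is a computation: a function from a continuation
# kappa (strong-forcing value -> answer) to an answer.

def unit(alpha):
    # eta(alpha) = kappa |-> kappa . alpha
    return lambda kappa: kappa(alpha)

def bind(phi, alpha):
    # (phi)*(alpha) = kappa |-> alpha . (beta |-> phi . beta . kappa)
    return lambda kappa: alpha(lambda beta: phi(beta)(kappa))

def run(alpha):
    # mu: instantiate the answer type to the value type, pass identity
    return alpha(lambda x: x)
```

For instance, `run(bind(lambda b: unit(b + 1), unit(41)))` evaluates to 42.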
(Ax)   (a : A) ∈ Γ  ⟹  Γ ⊢ a : A
(∧I)   Γ ⊢ p : A1  and  Γ ⊢ q : A2  ⟹  Γ ⊢ (p, q) : A1 ∧ A2
(∧E)   Γ ⊢ p : A1 ∧ A2  ⟹  Γ ⊢ πi p : Ai
(∨iI)  Γ ⊢ p : Ai  ⟹  Γ ⊢ ιi p : A1 ∨ A2
(∨E)   Γ ⊢ p : A1 ∨ A2,  Γ, a1 : A1 ⊢ q1 : C,  Γ, a2 : A2 ⊢ q2 : C  ⟹  Γ ⊢ case p of (a1.q1 ‖ a2.q2) : C
(⇒I)   Γ, a : A1 ⊢ p : A2  ⟹  Γ ⊢ λa.p : A1 ⇒ A2
(⇒E)   Γ ⊢ p : A1 ⇒ A2  and  Γ ⊢ q : A1  ⟹  Γ ⊢ pq : A2
(∀I)   Γ ⊢ p : A(x),  x fresh  ⟹  Γ ⊢ λx.p : ∀x.A(x)
(∀E)   Γ ⊢ p : ∀x.A(x)  ⟹  Γ ⊢ pt : A(t)
(∃I)   Γ ⊢ p : A(t)  ⟹  Γ ⊢ (t, p) : ∃x.A(x)
(∃E)   Γ ⊢ p : ∃x.A(x),  Γ, a : A(x) ⊢ q : C,  x fresh  ⟹  Γ ⊢ dest p as (x.a) in q : C

Table 1: Proof term annotation for the natural deduction system of minimal intuitionistic predicate logic (MQC)
With Table 1, we fix a derivation system and proof term notation for minimal intuitionistic predicate logic. There are two kinds of variables, proof term variables a, b, . . .
and individual (quantifier) variables x, y, . . .. Individual constants are denoted by t. We
rely on these conventions to resolve the apparent ambiguity of the syntax: the abstraction λa.p is a proof term for implication, while λx.p is a proof term for ∀; (p, q) is a
proof term for ∧, while (t, q) is a proof term for ∃.
We supplement the characterisation of normal and neutral terms from page 5:
Λ-nf ∋ r := e | λa.r | ι1 r | ι2 r | (r1, r2) | λx.r | (t, r)
Λ-ne ∋ e := a | er | case e of (a1.r1 ‖ a2.r2) | π1 e | π2 e | et | dest e as (x.a) in r
As before, let w ⊩ Γ denote that all formulae from Γ are forced.
Theorem 3.5 (Soundness). If Γ ⊢ p : A, then, in any world w of any IK-CPS model, if
w ⊩ Γ, then w ⊩ A.
Proof. This is proved by a simple induction on the length of the derivation. We give
the algorithm behind it in Section 4.
Remark 3.6. The condition “for any formula C” in Definition 3.1 is only necessary for
the soundness proof to go through, more precisely, for the cases of the elimination rules for ∨
and ⇒. The completeness proof goes through even if we define forcing by
∀w′ ≥ w. (∀w′′ ≥ w′. w′′ ⊩s A → w′′ ⊥A) → w′ ⊥A.
Definition 3.7. The Universal IK-CPS model U is obtained by setting:
• K to be the set of contexts Γ of MQC;
• Γ ≤ Γ′ iff Γ ⊆ Γ′;
• Γ ⊩s X iff there is a derivation in normal form of Γ ⊢ X in MQC, where X is an
  atomic formula;
• Γ ⊥C iff there is a derivation in normal form of Γ ⊢ C in MQC;
• for any w, D(w) is a set of individuals for MQC (that is, D(−) is a constant
  function from worlds to sets of individuals).
(−) ⊩s (−) is monotone because of the weakening property for intuitionistic “⊢”.
Remark 3.8. The difference between strong forcing “⊩s” and the exploding node predicate “⊥C” in U is that the former is defined on atomic formulae, while the latter is
defined on any kind of formulae.
Lemma 3.9. We can also define the monadic “run” operation on the universal model
U, for atomic formulae X:
µ(·) : w ⊩ X → w ⊩s X.
Proof. By setting C := X and applying the identity function.
Theorem 3.10 (Completeness for U). For any closed formula A and closed context Γ,
the following hold for U:
Γ ⊩ A −→ {p | Γ ⊢ p : A}    (“reify”, ↓)
Γ ⊢ e : A −→ Γ ⊩ A          (“reflect”, ↑)
Moreover, the target of (↓) is a normal term, while the source of (↑) is a neutral term.
Proof. We prove simultaneously the two statements by induction on the complexity of
formula A.
We skip writing the proof term annotations, and write just Γ ⊢ A instead of “there
exists p such that Γ ⊢ p : A”, in order to decrease the level of detail. The algorithm
behind this proof that concentrates on proof terms is given in Section 4.
Base case. (↓) is by “run” (Lemma 3.9), (↑) is by “unit” (Lemma 3.4).
Induction case for ∧. Let Γ ⊩ A ∧ B, i.e.
∀C. ∀Γ′ ≥ Γ. (∀Γ′′ ≥ Γ′. (Γ′′ ⊩ A and Γ′′ ⊩ B) → Γ′′ ⊢ C) → Γ′ ⊢ C.
We apply this hypothesis by setting C := A ∧ B and Γ′ := Γ, and then, given Γ′′ ≥ Γ s.t.
Γ′′ ⊩ A and Γ′′ ⊩ B, we have to derive Γ′′ ⊢ A ∧ B. But this is immediate by applying
the ∧I rule and the induction hypothesis (↓) twice, for A and for B.
Let Γ ⊢ A ∧ B be a neutral derivation. We prove Γ ⊩ A ∧ B by applying unit (Lemma
3.4), and then applying the induction hypothesis (↑) to the two projections (∧E) of the hypothesis.
Induction case for ∨. Let Γ ⊩ A ∨ B, i.e.
∀C. ∀Γ′ ≥ Γ. (∀Γ′′ ≥ Γ′. (Γ′′ ⊩ A or Γ′′ ⊩ B) → Γ′′ ⊢ C) → Γ′ ⊢ C.
We apply this hypothesis by setting C := A ∨ B and Γ′ := Γ, and then, given Γ′′ ≥ Γ
s.t. Γ′′ ⊩ A or Γ′′ ⊩ B, we have to derive Γ′′ ⊢ A ∨ B. But this is immediate, after a
case distinction, by applying the ∨iI rule and the induction hypothesis (↓).
We now consider the only case (besides ↑∃xA(x) below) where using shift and reset,
or our Kripke-style models, is crucial. Let Γ ⊢ A ∨ B be a neutral derivation. Let a
formula C and Γ′ ≥ Γ be given, and let
∀Γ′′ ≥ Γ′. (Γ′′ ⊩ A or Γ′′ ⊩ B) → Γ′′ ⊢ C.    (#)
We prove Γ′ ⊢ C by ∨E, applied to the hypothesis Γ ⊢ A ∨ B (weakened to Γ′ ⊢ A ∨ B)
and to the two branches:
• from A ∈ (A, Γ′) we get A, Γ′ ⊢ A by Ax; by (↑), A, Γ′ ⊩ A, hence A, Γ′ ⊩ A or A, Γ′ ⊩ B; by (#), A, Γ′ ⊢ C;
• symmetrically, B, Γ′ ⊢ B by Ax; by (↑), B, Γ′ ⊩ B, hence B, Γ′ ⊩ A or B, Γ′ ⊩ B; by (#), B, Γ′ ⊢ C.
Induction case for ⇒. Let Γ ⊩ A ⇒ B, i.e.
∀C. ∀Γ′ ≥ Γ. (∀Γ′′ ≥ Γ′. (∀Γ3 ≥ Γ′′. Γ3 ⊩ A → Γ3 ⊩ B) → Γ′′ ⊢ C) → Γ′ ⊢ C.
We apply this hypothesis by setting C := A ⇒ B and Γ′ := Γ, and then, given Γ′′ ≥ Γ
s.t.
∀Γ3 ≥ Γ′′. Γ3 ⊩ A → Γ3 ⊩ B    (#)
we have to derive Γ′′ ⊢ A ⇒ B. This follows by applying (⇒I), the IH for (↓), then (#),
and finally the IH for (↑) with the Ax rule.
Let Γ ⊢ A ⇒ B be a neutral derivation. We prove Γ ⊩ A ⇒ B by applying unit
(Lemma 3.4), and then, given Γ′ ≥ Γ and Γ′ ⊩ A, we have to show that Γ′ ⊩ B. This is
done by applying the IH for (↑) on the (⇒E) rule, with the IH for (↓) applied to Γ′ ⊩ A.
Induction case for ∀. We recall that the domain function D(−) is constant in the
universal model U. Let Γ ⊩ ∀xA(x), i.e.
∀C. ∀Γ′ ≥ Γ. (∀Γ′′ ≥ Γ′. (∀Γ3 ≥ Γ′′. ∀t ∈ D. Γ3 ⊩ A(t)) → Γ′′ ⊢ C) → Γ′ ⊢ C.
We apply this hypothesis by setting C := ∀xA(x) and Γ′ := Γ, and then, given Γ′′ ≥ Γ
s.t.
∀Γ3 ≥ Γ′′. ∀t ∈ D. Γ3 ⊩ A(t)    (#)
we have to derive Γ′′ ⊢ ∀xA(x). This follows by applying (∀I), the IH for (↓), and then
(#).
Let Γ ⊢ ∀xA(x) be a neutral derivation. We prove Γ ⊩ ∀xA(x) by applying unit
(Lemma 3.4), and then, given Γ′ ≥ Γ and t ∈ D, we have to show that Γ′ ⊩ A(t). This
is done by applying the IH for (↑) on the (∀E) rule and the hypothesis Γ ⊢ ∀xA(x).
Induction case for ∃. Let Γ ⊩ ∃xA(x), i.e.
∀C. ∀Γ′ ≥ Γ. (∀Γ′′ ≥ Γ′. (∃t ∈ D. Γ′′ ⊩ A(t)) → Γ′′ ⊢ C) → Γ′ ⊢ C.
We apply this hypothesis by setting C := ∃xA(x) and Γ′ := Γ, and then, given Γ′′ ≥ Γ
s.t. ∃t ∈ D. Γ′′ ⊩ A(t), we have to derive Γ′′ ⊢ ∃xA(x). This follows by applying (∃I)
with t ∈ D, and the IH for (↓).
Let Γ ⊢ ∃xA(x) be a neutral derivation. Let a formula C and Γ′ ≥ Γ be given, and
let
∀Γ′′ ≥ Γ′. (∃t ∈ D. Γ′′ ⊩ A(t)) → Γ′′ ⊢ C.    (#)
We prove Γ′ ⊢ C by ∃E (with x fresh), applied to the hypothesis Γ ⊢ ∃xA(x) (weakened
to Γ′ ⊢ ∃xA(x)) and to the branch: from A(x) ∈ (A(x), Γ′) we get A(x), Γ′ ⊢ A(x) by Ax;
by (↑), A(x), Γ′ ⊩ A(x); by (#) (with t := x), A(x), Γ′ ⊢ C.
The result of reification “↓” is in normal form. By inspection of the proof.
4. Normalisation by Evaluation in IK-CPS Models
In this section we give the algorithm that we manually extracted from the Coq
formalisation, for the restriction to the interesting propositional fragment that involves
implication and disjunction. The algorithm extracted automatically by Coq contains
too many details to be instructive.
The following evaluation function for λ→∨-terms is behind the proof of Theorem 3.5:

⟦Γ ⊢ p : A⟧ : w ⊩ Γ → w ⊩ A

⟦a⟧ρ := ρ(a)
⟦λa.p⟧ρ := κ ↦ κ · (α ↦ ⟦p⟧ρ,a↦α) = η(α ↦ ⟦p⟧ρ,a↦α)
⟦pq⟧ρ := κ ↦ ⟦p⟧ρ · (φ ↦ φ · ⟦q⟧ρ · κ)
⟦ι1 p⟧ρ := κ ↦ κ · (inl·⟦p⟧ρ) = η(inl·⟦p⟧ρ)
⟦ι2 p⟧ρ := κ ↦ κ · (inr·⟦p⟧ρ) = η(inr·⟦p⟧ρ)
⟦case p of (a1.q1 ‖ a2.q2)⟧ρ := κ ↦ ⟦p⟧ρ · (γ ↦ ⟦q1⟧ρ,a1↦α · κ if γ = inl·α ;  ⟦q2⟧ρ,a2↦β · κ if γ = inr·β)
The following is the algorithm behind Theorem 3.10:

↓ΓA : Γ ⊩ A → {p ∈ Λ-nf | Γ ⊢ p : A}
↑ΓA : {e ∈ Λ-ne | Γ ⊢ e : A} → Γ ⊩ A

↓ΓX := α ↦ µ(α)                                                        (X atomic)
↑ΓX := e ↦ η(e)                                                        (X atomic)
↓ΓA⇒B := α ↦ α · (φ ↦ λa. ↓Γ,a:A B (φ · ↑Γ,a:A A a))                   (a fresh)
↑ΓA⇒B := e ↦ η(α ↦ ↑ΓB (e (↓ΓA α)))
↓ΓA∨B := α ↦ α · (γ ↦ ι1(↓ΓA α′) if γ = inl·α′ ;  ι2(↓ΓB β) if γ = inr·β)
↑ΓA∨B := e ↦ κ ↦ case e of (a1. κ·(inl·(↑Γ,a1:A A a1)) ‖ a2. κ·(inr·(↑Γ,a2:B B a2)))   (ai fresh)
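To make the control flow of these two functions concrete, here is a small self-contained Python transcription of the evaluator and of reify/reflect for the ⇒/∨ fragment over atomic types. This is our illustrative sketch, not the paper’s Coq extraction: forcing values are functions expecting a continuation, and reflection at a sum type is the point where that continuation is duplicated into the two case branches, which is what shift does in Danvy’s algorithm.

```python
# Terms: ('var',a) ('lam',a,body) ('app',f,x) ('inl',t) ('inr',t)
#        ('case',e,a1,t1,a2,t2)      Types: ('at',X) ('arr',A,B) ('sum',A,B)

fresh_counter = [0]
def fresh(hint):
    fresh_counter[0] += 1
    return f"{hint}{fresh_counter[0]}"

def ev(t, env):
    """Evaluation: maps a term to a forcing value (a continuation consumer)."""
    tag = t[0]
    if tag == 'var':
        return env[t[1]]
    if tag == 'lam':                       # strong value: a Python function
        _, a, body = t
        return lambda k: k(lambda alpha: ev(body, {**env, a: alpha}))
    if tag == 'app':
        _, f, x = t
        return lambda k: ev(f, env)(lambda phi: phi(ev(x, env))(k))
    if tag == 'inl':
        return lambda k: k(('inl', ev(t[1], env)))
    if tag == 'inr':
        return lambda k: k(('inr', ev(t[1], env)))
    if tag == 'case':
        _, e, a1, t1, a2, t2 = t
        return lambda k: ev(e, env)(
            lambda g: ev(t1, {**env, a1: g[1]})(k) if g[0] == 'inl'
                      else ev(t2, {**env, a2: g[1]})(k))

def reify(ty, alpha):
    """reify: forcing value at type ty -> normal term."""
    if ty[0] == 'at':
        return alpha(lambda s: s)          # run with the identity continuation
    if ty[0] == 'arr':
        A, B = ty[1], ty[2]
        a = fresh('a')
        return alpha(lambda phi:
            ('lam', a, reify(B, phi(reflect(A, ('var', a))))))
    if ty[0] == 'sum':
        A, B = ty[1], ty[2]
        return alpha(lambda s:
            ('inl', reify(A, s[1])) if s[0] == 'inl' else ('inr', reify(B, s[1])))

def reflect(ty, e):
    """reflect: neutral term -> forcing value at type ty."""
    if ty[0] == 'at':
        return lambda k: k(e)
    if ty[0] == 'arr':
        A, B = ty[1], ty[2]
        return lambda k: k(lambda alpha: reflect(B, ('app', e, reify(A, alpha))))
    if ty[0] == 'sum':                     # the continuation k is duplicated
        A, B = ty[1], ty[2]
        a1, a2 = fresh('a'), fresh('a')
        return lambda k: ('case', e, a1, k(('inl', reflect(A, ('var', a1)))),
                                      a2, k(('inr', reflect(B, ('var', a2)))))

def nbe(ty, t):
    return reify(ty, ev(t, {}))
```

Normalising the identity λa.a at type (X ∨ Y) ⇒ (X ∨ Y) η-expands it into λa. case a of (a1. ι1 a1 ‖ a2. ι2 a2), as Danvy’s algorithm does.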
5. Variants and Relation to Kripke Models
5.1. “Call-by-value” Models
Defining forcing on composite formulae in Definition 3.1 proceeds analogously
to defining the call-by-name CPS translation [23], or Kolmogorov’s double-negation
translation [25, 22]. A definition analogous to the “call-by-value” CPS translation [23]
is also possible, by instead extending strong forcing to composite formulae as follows:
• w ⊩s A ∧ B if w ⊩s A and w ⊩s B;
• w ⊩s A ∨ B if w ⊩s A or w ⊩s B;
• w ⊩s A ⇒ B if for all w′ ≥ w, w′ ⊩s A implies w′ ⊩ B;
• w ⊩s ∀x.A(x) if for all w′ ≥ w and all t ∈ D(w′), w′ ⊩ A(t);
• w ⊩s ∃x.A(x) if w ⊩s A(t) for some t ∈ D(w).
One can prove this variant of IK-CPS models sound and complete, similarly to
Section 3, except for two differences. Firstly, in the statement of Soundness, one needs
to put w ⊩s Γ in place of w ⊩ Γ. Secondly, due to the first difference, the composition
of soundness and completeness that gives normalisation works for closed terms only.
5.2. Classical Models
In [16, 17, 18], we presented the following notion of model which is complete for
classical predicate logic and represents an NBE algorithm for it.
Definition 5.1. A Classical Kripke-CPS model (CK-CPS) is given by:
• a preorder (K, ≤) of possible worlds;
• a unary relation on worlds (−)⊥ labelling a world as exploding;
• a binary relation (−) ⊩s (−) of strong forcing between worlds and atomic formulae, such that
  for all w′ ≥ w, w ⊩s X → w′ ⊩s X;
• and a domain of quantification D(w) for each world w, such that
  for all w′ ≥ w, D(w) ⊆ D(w′).
The relation (−) ⊩s (−) of strong forcing is extended from atomic to composite formulae
inductively and by simultaneously defining two new relations, refutation and (non-strong) forcing:
⋆ A formula A is refuted in the world w (notation w : A ⊩ ⊥) if any world w′ ≥ w
  which strongly forces A is exploding;
⋆ A formula A is forced in the world w (notation w ⊩ A) if any world w′ ≥ w
  which refutes A is exploding;
• w ⊩s A ∧ B if w ⊩ A and w ⊩ B;
• w ⊩s A ∨ B if w ⊩ A or w ⊩ B;
• w ⊩s A ⇒ B if for all w′ ≥ w, w′ ⊩ A implies w′ ⊩ B;
• w ⊩s ∀x.A(x) if for all w′ ≥ w and all t ∈ D(w′), w′ ⊩ A(t);
• w ⊩s ∃x.A(x) if w ⊩ A(t) for some t ∈ D(w).
The differences between Definition 3.1 and Definition 5.1 are marked with boxes.
We can also present CK-CPS using binary exploding nodes, by defining w ⊩s ⊥ :=
∀C. w ⊥C. Then, we get the following statement of forcing in CK-CPS,
∀w′ ≥ w. (∀w′′ ≥ w′. w′′ ⊩s A → ∀I. w′′ ⊥I) → ∀O. w′ ⊥O,
versus forcing in IK-CPS,
∀C. ∀w′ ≥ w. (∀w′′ ≥ w′. w′′ ⊩s A → w′′ ⊥C) → w′ ⊥C.
The difference between forcing in the intuitionistic and classical models is, then,
that: 1) the dependency on C is necessary in the intuitionistic case, while it is optional
in the classical case; 2) the continuation (the internal implication) in classical forcing is allowed to change the parameter C upon application, whereas in intuitionistic
forcing the parameter is not local to the continuation, but to the continuation of the
continuation.
At this point we also remark that the use of dependent types to handle the parameter
C is determined by the fact that we formalise our definitions in Intuitionistic Type Theory. Otherwise, the quantification ∀C. · · · is quantification over first-order individuals,
for example natural numbers.
5.3. Kripke Models
Let A(n) be an arbitrary first-order formula and let X(n, m) be a Σ01-formula. Denote
the following arithmetic schema by (D-DNS+), for “dependent Double-negation Shift
schema, strengthened”:
(∀m. ∀n1 ≥ n. (∀n2 ≥ n1. A(n2) → X(n2, m)) → X(n1, m)) → A(n)    (D-DNS+)
Proposition 5.2. Let K = (K, ≤, D, ⊩s, ⊥) be any structure such that ⊨ denotes forcing
in the standard Kripke model arising from K, and ⊩ denotes (non-strong) forcing in
the IK-CPS model arising from the same K.
Then, in the presence of (D-DNS+) at the meta-level, for all formulae A and any w ∈ K,
w ⊨ A ←→ w ⊩ A.
Proof. The proof is by induction on A, using (D-DNS+) to prove
(∀C. ∀w1 ≥ w. (∀w2 ≥ w1. (w2 ⊨ A or w2 ⊨ B) → w2 ⊥C) → w1 ⊥C) → (w ⊨ A or w ⊨ B),
needed in the case for disjunction, and similarly for the existential quantifier.
Corollary 5.3. Completeness of full intuitionistic predicate logic with respect to standard Kripke models is provable constructively, in the presence of D-DNS+ .
Remark 5.4. It is the other direction of this implication that Kreisel proved, for a specialisation of D-DNS+ (Section 2). To investigate more precisely whether D-DNS+
captures exactly the constructive provability of completeness for Kripke models remains
future work.
6. Conclusion
We emphasised that our algorithm is β-NBE because, were we able to identify βη-equal terms of λ→∨ through our NBE function, we would have solved the problem of
the existence of canonical η-long normal forms for λ→∨. However, as shown by [14],
due to the connection with Tarski’s High School Algebra Problem [5, 27], the notion
of such a normal form is not finitely axiomatisable. If one looks at examples of λ→∨-terms which are βη-equal but are not normalised to the same term by Danvy’s (and
our) algorithm, one can see that in the Coq type theory these terms are interpreted as
denotations that involve commutative cuts.
In recent unpublished work [9], Danvy also developed a version of his NBE algorithm directly in CPS, without using delimited control operators.
In [2], Barral gives a program for NBE of λ-calculus with sums by just using the
exceptions mechanism of a programming language, which is something a priori strictly
weaker than using delimited control operators.
In [1], Altenkirch, Dybjer, Hofmann, and Scott, give a topos theoretic proof of
NBE for a typed λ-calculus with sums, by constructing a sheaf model. The connection
between sheaves and Beth semantics4 is well known. While the proof is constructive,
due to their use of topos theory, we were unable to extract an algorithm from it.
In [21], Macedonio and Sambin present a notion of model for extensions of Basic
logic (a sub-structural logic more primitive than Linear logic), which, for intuitionistic
logic, appears to be related to our notion of model. However, they demand that their
set of worlds K be saturated, while we do not, and we can hence also work with finite
models.
In [13], Filinski proves the correctness of an NBE algorithm for Moggi’s computational λ-calculus, including sums. We found out about Filinski’s paper right before
finishing our own. He also evaluates the input terms in a domain based on continuations.
Acknowledgements
To Hugo Herbelin for inspiring discussions and, in particular, for suggesting to try
polymorphism, viz. Remark 3.6. To Olivier Danvy for suggesting reference [13].
References
[1] Thorsten Altenkirch, Peter Dybjer, Martin Hofmann, and Philip J. Scott. Normalization by evaluation for typed lambda calculus with coproducts. In LICS, pages
303–310, 2001.
[2] Freiric Barral. Exceptional NbE for sums. In Olivier Danvy, editor, Informal
proceedings of the 2009 Workshop on Normalization by Evaluation, August 15th
2009, Los Angeles, California, pages 21–30, 2009.
4 We remark that, for the fragment {⇒, ∀, ∧}, NBE can also be seen as completeness for Beth semantics,
since forcing in Beth and Kripke models is the same thing on that fragment.
[3] U. Berger and P. Oliva. Modified bar recursion and classical dependent choice.
In M. Baaz, S.D. Friedman, and J. Kraijcek, editors, Logic Colloquium ’01, Proceedings of the Annual European Summer Meeting of the Association for Symbolic Logic, held in Vienna, Austria, August 6 - 11, 2001, volume 20 of Lecture
Notes in Logic, pages 89–107. Springer, 2005.
[4] Ulrich Berger and Helmut Schwichtenberg. An inverse of the evaluation functional for typed lambda-calculus. In LICS, pages 203–211. IEEE Computer Society, 1991.
[5] Stanley Burris and Simon Lee. Tarski’s high school identities. The American
Mathematical Monthly, 100(3):231–236, 1993.
[6] Catarina Coquand. From semantics to rules: A machine assisted analysis. In CSL
’93, volume 832 of Lecture Notes in Computer Science, pages 91–105. Springer,
1993.
[7] Catarina Coquand. A formalised proof of the soundness and completeness of a
simply typed lambda-calculus with explicit substitutions. Higher Order Symbol.
Comput., 15(1):57–90, 2002.
[8] Olivier Danvy. Type-directed partial evaluation. In POPL, pages 242–257, 1996.
[9] Olivier Danvy. A call-by-name normalization function for the simply typed
lambda-calculus with sums and products. manuscript, 2008.
[10] Olivier Danvy and Andrzej Filinski. A functional abstraction of typed contexts. Technical report, Computer Science Department, University of Copenhagen, 1989. DIKU Rapport 89/12.
[11] Olivier Danvy and Andrzej Filinski. Abstracting control. In LISP and Functional
Programming, pages 151–160, 1990.
[12] Andrzej Filinski. Controlling Effects. PhD thesis, School of Computer Science,
Carnegie Mellon University, 1996. Technical Report CMU-CS-96-119 (144pp.).
[13] Andrzej Filinski. Normalization by evaluation for the computational lambdacalculus. In Samson Abramsky, editor, Typed Lambda Calculi and Applications,
volume 2044 of Lecture Notes in Computer Science, pages 151–165. Springer
Berlin / Heidelberg, 2001.
[14] Marcelo P. Fiore, Roberto Di Cosmo, and Vincent Balat. Remarks on isomorphisms in typed lambda calculi with empty and sum types. Ann. Pure Appl.
Logic, 141(1-2):35–50, 2006.
[15] Hugo Herbelin and Gyesik Lee. Forcing-based cut-elimination for Gentzen-style
intuitionistic sequent calculus. In Hiroakira Ono, Makoto Kanazawa, and Ruy J.
G. B. de Queiroz, editors, WoLLIC, volume 5514 of Lecture Notes in Computer
Science, pages 209–217. Springer, 2009.
[16] Danko Ilik. Formalisation of completeness for Kripke-CPS models, 2009.
http://www.lix.polytechnique.fr/~danko/code/kripke_completeness/.
[17] Danko Ilik. Constructive Completeness Proofs and Delimited Control. PhD thesis, École Polytechnique, October 2010.
[18] Danko Ilik, Gyesik Lee, and Hugo Herbelin. Kripke models for classical logic.
Annals of Pure and Applied Logic, 161(11):1367 – 1378, 2010. Special Issue:
Classical Logic and Computation (2008).
[19] Georg Kreisel. On weak completeness of intuitionistic predicate logic. J. Symb.
Log., 27(2):139–158, 1962.
[20] Saul Kripke. Semantical considerations on modal and intuitionistic logic. Acta
Philos. Fennica, 16:83–94, 1963.
[21] Damiano Macedonio and Giovanni Sambin. From meta-level to semantics via
reflection: a model for basic logic and its extensions. available from the authors.
[22] Chetan Murthy. Extracting Classical Content from Classical Proofs. PhD thesis,
Department of Computer Science, Cornell University, 1990.
[23] G. D. Plotkin. Call-by-name, call-by-value and the λ-calculus. Theoretical Computer Science, 1(2):125–159, 1975.
[24] Clifford Spector. Provably recursive functionals of analysis: a consistency proof
of analysis by an extension of principles formulated in current intuitionistic mathematics. In Proc. Sympos. Pure Math., Vol. V, pages 1–27. American Mathematical Society, Providence, R.I., 1962.
[25] A. S. Troelstra and D. van Dalen. Constructivism in mathematics. Vol. I, volume
121 of Studies in Logic and the Foundations of Mathematics. North-Holland
Publishing Co., Amsterdam, 1988. An introduction.
[26] Wim Veldman. An intuitionistic completeness theorem for intuitionistic predicate
logic. J. Symb. Log., 41(1):159–166, 1976.
[27] A. J. Wilkie. On exponentiation - a solution to Tarski’s high school algebra problem. Technical report, Mathematical Institute, Oxford, UK, 2001.
First-Class Functions for First-Order Database Engines
Torsten Grust
Alexander Ulrich
Universität Tübingen, Germany
{torsten.grust, alexander.ulrich}@uni-tuebingen.de
arXiv:1308.0158v1 [cs.DB] 1 Aug 2013
Abstract
We describe query defunctionalization which enables off-the-shelf
first-order database engines to process queries over first-class functions. Support for first-class functions is characterized by the ability
to treat functions like regular data items that can be constructed
at query runtime, passed to or returned from other (higher-order)
functions, assigned to variables, and stored in persistent data structures. Query defunctionalization is a non-invasive approach that
transforms such function-centric queries into the data-centric operations implemented by common query processors. Experiments with
XQuery and PL/SQL database systems demonstrate that first-order
database engines can faithfully and efficiently support the expressive
“functions as data” paradigm.
1.
Functions Should be First-Class
Since the early working drafts of 2001, XQuery’s syntax and
semantics have followed a functional style:1 functions are applied to
form complex expressions in a compositional fashion. The resulting
XQuery script’s top-level expression is evaluated to return a sequence
of items, i.e., atomic values or XML nodes [8].
Ten years later, with the upcoming World Wide Web Consortium
(W3C) XQuery 3.0 Recommendation [28], functions themselves
now turn into first-class items. Functions, built-in or user-defined,
may be assigned to variables, wrapped in sequences, or supplied as
arguments to and returned from higher-order functions. In effect,
XQuery finally becomes a full-fledged functional language. Many
useful idioms are concisely expressed in this “functions as data”
paradigm. We provide examples below and argue that support for
first-class functions benefits other database languages, PL/SQL in
particular, as well.
This marks a significant change for query language implementations, specifically for those built on top of (or right into) database
kernels. While atomic values, sequences, or XML nodes are readily
represented in terms of the widespread first-order database data
models [9], this is less obvious for function items. Database kernels
typically lack a runtime representation of functional values at all.
We address this challenge in the present work.
In query languages, the “functions as data” principle can surface
in various forms.
Functions as Values. XQuery 3.0 introduces name#n as notation to refer to the n-ary function named name: math:pow#2
refers to exponentiation while fn:concat#2 denotes string concatenation, for example. The values of these expressions are
functions—their types are of the form function(t1 ) as t2
or, more succinctly, t1 → t2 —which may be bound to variables and applied to arguments. The evaluation of the expression
let $exp := math:pow#2 return $exp(2,3) yields 8, for example.
declare function fold-right(
  $f as function(item(), item()*) as item()*,
  $z as item()*, $seq as item()*) as item()*
{
  if (empty($seq)) then $z else
  $f(fn:head($seq), fold-right($f, $z, fn:tail($seq)))
};

Figure 1: Higher-order function fold-right (XQuery 3.0).
Higher-Order Functions. In their role as regular values, functions may be supplied as parameters to and returned from other
functions. These higher-order functions can capture recurring patterns of computation and thus make for ideal building
blocks in query library designs. Higher-order function fold-right
is a prime example here—entire query language designs have
been based on its versatility [13, 18]. The XQuery 3.0 variant fold-right($f, $z, $seq) is defined in Figure 1: it reduces
a given input sequence $seq = (e1 ,e2 ,. . . ,en ) to the value
$f(e1 ,$f(e2 ,$f(. . . ,$f(en ,$z)· · · ))). Different choices for
the functional parameter $f and $z configure fold-right to perform a variety of computations:
fold-right(math:pow#2, 1, (e1,e2,. . . ,en))

(with numeric ei) computes the exponentiation tower e1^(e2^(· · ·^en)), while the expression

fold-right(fn:concat#2, "", (e1,e2,. . . ,en))

will return the concatenation of the n strings ei.

1 “[. . . ] XQuery is a functional language in which a query is represented as an expression.” [11, §2]
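The two configurations above can be transcribed directly. A Python sketch of the reduction scheme (ours, for illustration; not XQuery):

```python
def fold_right(f, z, seq):
    # fold_right(f, z, [e1, e2, ..., en]) = f(e1, f(e2, ... f(en, z) ...))
    acc = z
    for e in reversed(seq):
        acc = f(e, acc)
    return acc

# the two configurations from the text
tower  = fold_right(lambda a, b: a ** b, 1, [2, 3, 2])   # 2 ** (3 ** (2 ** 1))
concat = fold_right(lambda a, b: a + b, "", ["x", "y", "z"])
```

Here `tower` is 512 and `concat` is `"xyz"`.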
Function Literals. Queries may use function(x) { e } to denote
a literal function (also: inline function or λ-expression λx.e). Much
like the literals of regular first-order types (numbers, strings, . . . ),
function literals are pervasive if we adopt a functional mindset:
A map, or associative array, is a function from keys to values.
Figure 2 takes this definition literally and implements maps2 in terms
of functions. Empty maps (created by map:empty) are functions
that, for any key $x, will return the empty result (). A map with
entry ($k,$v) is a function that yields $v if a key $x = $k
is looked up (and otherwise will continue to look for $x in the
residual map $map). Finally, map:new($es) builds a complex
map from a sequence of entries $es—an entry is added through
application to the residual map built so far. As a consequence of
this implementation in terms of functions, lookups are idiomatically
performed by applying a map to a key, i.e., we may write
let $m := map:new((map:entry(1,"one"), map:entry(2,"two")))
return $m(2)    (: "two" :)
2 Our design follows Michael Kay’s proposal for maps in XSLT 3.0. Of
two entries under the same key, we return the entry inserted first (this is
implementation-dependent: http://www.w3.org/TR/xslt-30/#map).
declare function map:empty() {
  function($x) { () }
};

declare function map:entry($k,$v) {
  function($map) {
    function($x) { if ($x = $k) then $v else $map($x) }
  }
};

declare function map:new($es) {
  fold-right(function($f,$x) { $f($x) }, map:empty(), $es)
};

declare function map:remove($map,$k) {
  function($x) { if ($x = $k) then () else $map($x) }
};

Figure 2: Direct implementation of maps as functions from keys
to values (XQuery 3.0).
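The “a map is a function” reading of Figure 2 carries over to any language with closures. A Python sketch (ours, for illustration; `None` stands in for the empty XQuery sequence `()`):

```python
def map_empty():
    # the empty map finds nothing for any key
    return lambda x: None

def map_entry(k, v):
    # an entry extends a residual map; the outermost key is checked first
    def extend(residual):
        return lambda x: v if x == k else residual(x)
    return extend

def map_new(entries):
    # fold-right of function application over the empty map:
    # e1(e2(... en(empty) ...)), so the entry inserted first wins
    m = map_empty()
    for e in reversed(entries):
        m = e(m)
    return m

def map_remove(m, k):
    return lambda x: None if x == k else m(x)
```

Lookup is function application, as in the XQuery version: `map_new([map_entry(1, "one"), map_entry(2, "two")])(2)` returns `"two"`, and of two entries under the same key the one inserted first is returned, matching the footnoted XSLT semantics.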
(: wrap(), unwrap(): see Section 4.1 :)

declare function map:empty() {
  element map {}
};

declare function map:entry($k, $v) {
  element entry {
    element key { wrap($k) }, element val { wrap($v) }
  }
};

declare function map:new($es) {
  element map { $es }
};

declare function map:get($map, $k) {
  unwrap($map/child::entry[child::key = $k][1]/
         child::val/child::node())
};

declare function map:remove($map, $k) {
  element map { $map/child::entry[child::key != $k] }
};

Figure 3: A first-order variant of XQuery maps.

-- Based on (an excerpt of) the TPC-H schema:
--   ORDERS(o_orderkey, o_orderstatus, o_orderdate, . . . )
--   LINEITEM(l_orderkey, l_shipdate, l_commitdate, . . . )

-- determines the completion date of an order based on its items
CREATE FUNCTION item_dates(comp FUNCTION(DATE,DATE) RETURNS DATE)
  RETURNS (FUNCTION(ORDERS) RETURNS DATE) AS
BEGIN
  RETURN FUNCTION(o)
         BEGIN RETURN (SELECT comp(MAX(li.l_commitdate),
                                   MAX(li.l_shipdate))
                       FROM   LINEITEM li
                       WHERE  li.l_orderkey = o.o_orderkey);
         END;
END;

-- find completion date of an order based on its status
CREATE TABLE COMPLETION (
  c_orderstatus CHAR(1),
  c_completion  FUNCTION(ORDERS) RETURNS DATE);

INSERT INTO COMPLETION VALUES
  ('F', FUNCTION(o) BEGIN RETURN o.o_orderdate; END),
  ('P', FUNCTION(o) BEGIN RETURN NULL; END),
  ('O', item_dates(GREATEST));

-- determine the completion date of all orders
SELECT o.o_orderkey,
       o.o_orderstatus,
       c.c_completion(o) AS completion
FROM   ORDERS o, COMPLETION c
WHERE  o.o_orderstatus = c.c_orderstatus;

Figure 4: Using first-class functions in PL/SQL.
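For comparison, in a host language where functions are already first-class, the configuration of Figure 4 collapses to storing closures in an ordinary data structure. The following Python sketch is ours; the sample rows are hypothetical, with names mirroring the TPC-H columns, and Python’s `max` plays the role of `GREATEST`:

```python
from datetime import date

# rows of ORDERS and LINEITEM as plain dicts (hypothetical sample data)
orders = [
    {"o_orderkey": 1, "o_orderstatus": "F", "o_orderdate": date(1996, 1, 2)},
    {"o_orderkey": 2, "o_orderstatus": "O", "o_orderdate": date(1996, 12, 1)},
]
lineitem = [
    {"l_orderkey": 2, "l_commitdate": date(1997, 1, 10),
                      "l_shipdate": date(1997, 2, 1)},
]

def item_dates(comp):
    # returns a function ORDERS -> DATE, configured by comp
    def completion(o):
        rows = [li for li in lineitem
                if li["l_orderkey"] == o["o_orderkey"]]
        commit = max(li["l_commitdate"] for li in rows)
        ship = max(li["l_shipdate"] for li in rows)
        return comp(commit, ship)
    return completion

# the COMPLETION "table": status -> function ORDERS -> DATE
completion = {
    "F": lambda o: o["o_orderdate"],
    "P": lambda o: None,                 # NULL completion date
    "O": item_dates(max),                # GREATEST ~ Python max
}

# the final join/dispatch query of Figure 4
result = [(o["o_orderkey"], completion[o["o_orderstatus"]](o))
          for o in orders]
```

Here `result` pairs each order with its completion date: the finalized order keeps its order date, while the open order gets the most recent of its items’ commitment and shipment dates.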
An alternative, regular first-order implementation of maps is
shown in Figure 3. In this variant, map entries are wrapped in
pairs of key/val XML elements. A sequence of such pairs under a
common map parent element forms a complex map. Map lookup now
requires an additional function map:get—e.g., with $m as above:
map:get($m,2)—that uses XPath path expressions to traverse
the resulting XML element hierarchy. (We come back to wrap
and unwrap in Section 4.1.)
We claim that the functional variant in Figure 2 is not only shorter
but also clearer and arguably more declarative, as it represents a
direct realization of the “a map is a function” premise. Further, once
we study their implementation, we will see that the functional and
first-order variants ultimately lead the query processor to construct
and traverse similar data structures (Section 4.1). We gain clarity
and elegance and retain efficiency.
Functions in Data Structures. Widely adopted database programming languages, notably PL/SQL [4], treat functions as second-class
citizens: in particular, regular values may be stored in table cells
while functions may not. This precludes a programming style in
which queries combine tables of functions and values in a concise
and natural fashion.
The code of Figure 4 is written in a hypothetical dialect of
PL/SQL in which this restriction has been lifted. In this dialect,
the function type t1 → t2 reads FUNCTION(t1) RETURNS t2 and FUNCTION(x) BEGIN e END denotes a literal function with argument x and body e.³
The example code augments a TPC-H database [34] with
a configurable method to determine order completion dates.
In lines 18 to 25, table COMPLETION is created and populated
with one possible configuration that maps an order status (column c_orderstatus) to its particular method of completion date
computation. These methods are specified as functions of type
FUNCTION(ORDERS) RETURNS DATE4 held in c_completion, a
functional column: while we directly return its o_orderdate
value for a finalized order (status ’F’) and respond with an
undefined NULL date for orders in processing (’P’), the completion date of an open order (’O’) is determined by function item_dates(GREATEST): this function consults the commitment and shipment dates of the order’s items and then returns
the most recent of the two (since argument comp is GREATEST).5
Function item_dates itself has been designed to be configurable. Its higher-order type
(DATE × DATE → DATE) → (ORDERS → DATE)
indicates that item_dates returns a function to calculate order
completion dates once it has been supplied with a suitable date
comparator (e.g., GREATEST in line 25). This makes item_dates a
curried function which consumes its arguments successively (date
comparator first, order second)—a prevalent idiom in function-centric programming [6].
Note that the built-in and user-defined functions GREATEST
and item_dates are considered values as are the two literal func-
³We are not keen to propose syntax here. Any notation that promotes first-class functions would be fine.
⁴Type ORDERS denotes the type of the records in table ORDERS.
⁵Built-in SQL function GREATEST (LEAST) returns the larger (smaller) of its two arguments.
declare function group-by($seq as item()*,
$key as function(item()*) as item()*)
as (function() as item()*)*
{
let $keys := for $x in $seq return $key($x)
for $k in distinct-values($keys)
return
function() { $seq[$key(.) = $k] }
};
let $fib := (0,1,1,2,3,5,8,13,21,34)
for $g in group-by($fib, function($x) { $x mod 2 })
return
element group { $g() }
Figure 5: A grouping function that represents the individual
groups in terms of closures (XQuery 3.0).
tions in lines 23 and 24. As such they may be stored in table cells—
e.g., in column c_completion of table COMPLETION—and then
accessed by SQL queries. The query in lines 28 to 32 exercises the
latter and calculates the completion dates for all orders based on the
current configuration in COMPLETION.
Once more we obtain a natural solution in terms of first-class
functions—this time in the role of values that populate tables.
Queries can then be used to combine functions and their arguments
in flexible ways. We have demonstrated further use cases for PL/SQL
defunctionalization (including offbeat examples, e.g., the simulation
of algebraic data types) in [20].
Contributions. The present work shows that off-the-shelf database
systems can faithfully and efficiently support expressive query languages that promote first-class functions. Our specific contributions
are these:
• We apply defunctionalization to queries, a source transformation
that trades functional values for first-order values which existing
query engines can process efficiently.
• We discuss representations of closures that fit database data models
and take size and sharing issues into account.
• We demonstrate how these techniques apply to widely adopted
query languages (XQuery, PL/SQL) and established systems (e.g.,
Oracle and PostgreSQL).
• We show that defunctionalization introduces a tolerable runtime
overhead (first-order queries are not affected at all) and how simple
optimizations further reduce the costs.
Defunctionalization is an established technique in programming
languages and it deserves to be better known in the database systems
arena.
The approach revolves around the concept of closure which we
discuss briefly in Section 2. Section 3 shows how defunctionalization maps queries over first-class functions to regular first-order
constructs. We focus on XQuery first and then carry over to PL/SQL
in Section 3.1. Issues of efficient closure representation are addressed in Section 4. Section 5 assesses the space and time overhead
of defunctionalization and discusses how costs may be kept in check.
Section 6 reviews related efforts before we conclude in Section 7.
2. Functions as Values: Closures
This work deliberately pursues a non-invasive approach that enables
off-the-shelf database systems to support the function-centric style
of queries we have advocated in Section 1. If these existing first-order query engines are to be used for evaluation, it follows that we
require a first-order representation of functional values. Closures [5,
23] provide such a representation. We very briefly recall the concept
here.
The XQuery 3.0 snippet of Figure 5 defines the higher-order
grouping function group-by which receives the grouping criterion
in terms of the functional argument $key: a group is the sequence of
those items $x in $seq that map to the same key value $key($x).
Since XQuery implicitly flattens nested sequences, group-by cannot directly yield the sequence of all groups. Instead, group-by
returns a sequence of functions each of which, when applied to zero
arguments, produces “its” group. The sample code in lines 11 to 14
uses group-by to partition the first few elements of the Fibonacci series into odd/even numbers and then wraps the two resulting groups
in XML group elements.
Closures. Note that the inline function definition in line 8
captures the values of the free variables $k, $key, and $seq which is
just the information required to produce the group for key $k. More
generally, the language implementation will represent a functional
value f as a bundle that comprises
(1) the code of f ’s body and
(2) its environment, i.e., the bindings of the body’s free variables at
the time f was defined.
Together, code and environment define the closure for function f .
In the sequel, we will use

  ⟨` | x1 · · · xn⟩

to denote a closure whose environment contains n > 0 free variables
v1, . . . , vn bound to the values x1, . . . , xn.⁶ Label ` identifies the
code of the function’s body (in the original work on closures, code
pointers were used instead [5]). In the example of Figure 5, two
closures are constructed at line 8 (there are two distinct grouping
keys $k = 0, 1) that represent instances of the literal function. If we
order the free variables as $k, $key, $seq, these closures read

  ⟨`1 | 0 ⟨`2⟩ (0,1,1,2,. . . )⟩   and   ⟨`1 | 1 ⟨`2⟩ (0,1,1,2,. . . )⟩

(the two closures share label `1 since both refer to the same body
code $seq[$key(.) = $k]). Observe that
• closures may be nested: $key is bound to closure ⟨`2⟩ with empty
environment, representing the literal function($x) { $x mod 2 }
(defined in line 12) whose body has no free variables, and
• closures may contain and share data of significant size: both
closures contain a copy of the $fib sequence (since free variable $seq was bound to $fib).
We will address issues of closure nesting, sharing, and size in Sections 4 and 5.
The key idea of defunctionalization, described next, is to trade
functional values for their closure representation—ultimately, this
leaves us with an equivalent first-order query.
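To make the label-plus-environment bundle concrete outside the query setting, the following is a minimal Python sketch; the `Closure` class and its field names are ours, not part of the paper's XQuery formalism:

```python
# A closure is a plain first-order record: a code label plus an
# environment holding the values of the function's free variables.
from dataclasses import dataclass

@dataclass(frozen=True)
class Closure:
    label: str   # identifies the code of the body (the paper's label `)
    env: tuple   # bound values of the body's free variables

# Mirroring Figure 5: for each distinct key $k, group-by builds a
# closure capturing ($k, $key, $seq).
seq = (0, 1, 1, 2, 3, 5, 8, 13, 21, 34)
key = Closure("l2", ())   # function($x) { $x mod 2 }: no free variables
groups = [Closure("l1", (k, key, seq)) for k in (0, 1)]

assert groups[0].env[0] == 0 and groups[1].env[0] == 1
assert groups[0].env[2] is seq   # both closures refer to the input sequence
```

The two `Closure` records correspond directly to the two boxed closures shown above: same label, shared body code, differing only in the captured key.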
3. Query Defunctionalization
Query defunctionalization is a source-level transformation that
translates queries over first-class functions into equivalent first-order queries. Here, our discussion revolves around XQuery but
defunctionalization is readily adapted to other query languages, e.g.,
PL/SQL (see Section 3.1).
The source language is XQuery 3.0, restricted to the constructs
that are admitted by the grammar of Figure 6 (these restrictions aid
brevity—defunctionalization is straightforwardly extended to cover
the full XQuery 3.0 specification). Notably, the language subset
includes
• two kinds of expressions that yield functional values (literal
functions of the form function($x1 ,. . . ,$xn ) { e } as well as
named function references name#n), and
⁶If we agree on a variable order, there is no need to save the variable names vi in the environment.
Program  → FunDecl∗ Expr
FunDecl  → declare function QName($Var∗) { Expr };
Expr     → for $Var in Expr return Expr
         | let $Var := Expr return Expr
         | $Var
         | if (Expr) then Expr else Expr
         | (Expr∗)
         | Expr/Axis::NodeTest
         | element QName { Expr }
         | Expr[Expr]
         | .
         | QName(Expr∗)
         | function ($Var∗) { Expr }
         | QName#IntegerLiteral
         | Expr(Expr∗)
         | ···
Var      → QName
Figure 6: Relevant XQuery subset (source language), excerpt of
the XQuery 3.0 Candidate Recommendation [28].
Expr → [ constructs of Figure 6 ]
     | function ($Var∗) { Expr }     (removed)
     | QName#IntegerLiteral          (removed)
     | Expr(Expr∗)                   (removed)
     | ⟨` | Expr · · · Expr⟩         (new)
     | case Expr of Case+            (new)
Case → ⟨` | $Var · · · $Var⟩ ⇒ Expr
Figure 7: Target language: functional values and dynamic function calls are removed. New: closure construction and elimination.
• dynamic function calls of the form e(e1 ,. . . ,en ), in which
expression e evaluates to an n-ary function that is subsequently
applied to the appropriate number of arguments.
The transformation target is a first-order dialect of XQuery 1.0
to which we add closure construction and elimination. A closure
constructor ⟨` | x1 · · · xn⟩ builds a closure with label ` and an
environment of values x1, . . . , xn. Closure elimination, expressed
using case · · · of, discriminates on a closure’s label and then extracts
the environment contents: from the b branches in the expression

  case e of
    ⟨`1 | $v1,1 · · · $v1,n1⟩ ⇒ e1
    ..
    ⟨`b | $vb,1 · · · $vb,nb⟩ ⇒ eb ,

if e evaluates to the closure ⟨`i | x1 · · · xn⟩, case · · · of will pick
the ith branch and evaluate ei with the variables $vi,j bound to
the values xj. We discuss ways to express the construction and
elimination of closures in terms of regular query language constructs
in Section 4.
Figure 7 shows the relevant excerpt of the resulting target
language. In a sense, this modified grammar captures the essence of
defunctionalization: functional values and dynamic function calls
are traded for the explicit construction and elimination of first-order
closures. The translation can be sketched as follows:
(1) A literal function is replaced by a closure constructor whose
environment is populated with the bindings of the free variables
referenced in the function’s body. The body’s code is wrapped
inside a new top-level surrogate function ` whose name also
serves as the closure label.
(2) A reference to a function named ` is replaced by a closure
constructor with empty environment and label `.
declare function `2($x) { $x mod 2 };
declare function `1($k, $key, $seq) {
  $seq[(dispatch_1($key, .)) = $k]
};
declare function dispatch_0($clos) {
  case $clos of
    ⟨`1 | $k $key $seq⟩ ⇒ `1($k, $key, $seq)
};
declare function dispatch_1($clos, $b1) {
  case $clos of
    ⟨`2⟩ ⇒ `2($b1)
};
declare function group-by($seq, $key) {
  let $keys := for $x in $seq return dispatch_1($key, $x)
  for $k in distinct-values($keys)
  return ⟨`1 | $k $key $seq⟩
};

let $fib := (0,1,1,2,3,5,8,13,21,34)
for $g in group-by($fib, ⟨`2⟩)
return
  element group { dispatch_0($g) }
Figure 8: Defunctionalized first-order variant of the XQuery group-by example in Figure 5.
(3) A dynamic function call (now equivalent to an application of
a closure with label ` to zero or more arguments) is translated
into a static function call to a generated dispatcher function. The
dispatcher receives the closure as well as the arguments and then
uses closure elimination to forward the call to function `, passing
the environment contents (if any) along with the arguments.
Appendix A elaborates the details of this transformation, including
the generation of dispatchers, for the XQuery case. A syntax-directed
top-down traversal identifies the relevant spots in a given program
at which closure introduction or elimination has to be performed
according to the cases (1) to (3) above. All other program constructs
remain unchanged. The application of defunctionalization to the
XQuery program of Figure 5 yields the code of Figure 8. We find the
expected surrogate functions `1,2 , dispatchers (dispatch_n), and
static dispatcher invocations. Overall, the resulting defunctionalized
query adheres to the target language of Figure 7, i.e., the query is
first-order. Once we choose a specific implementation for closure
construction and elimination, we obtain a query that may be executed
by any XQuery 1.0 processor.
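The three-step transformation can be mimicked in any first-order setting. The following is an illustrative Python transliteration of the defunctionalized group-by of Figure 8 (function and label names mirror the figure; the tuple encoding of closures is ours):

```python
# Literal functions become labelled closures (label, environment);
# dynamic calls become static calls to generated dispatchers.

def l2(x):            # surrogate for function($x) { $x mod 2 }
    return x % 2

def l1(k, key, seq):  # surrogate for function() { $seq[$key(.) = $k] }
    return [x for x in seq if dispatch_1(key, x) == k]

def dispatch_0(clos):           # dispatcher for zero-argument closures
    label, env = clos
    if label == "l1":
        return l1(*env)
    raise ValueError(label)

def dispatch_1(clos, b1):       # dispatcher for unary closures
    label, env = clos
    if label == "l2":
        return l2(b1)
    raise ValueError(label)

def group_by(seq, key):
    keys = [dispatch_1(key, x) for x in seq]
    # one closure per distinct key, each capturing (k, key, seq)
    return [("l1", (k, key, seq)) for k in dict.fromkeys(keys)]

fib = [0, 1, 1, 2, 3, 5, 8, 13, 21, 34]
groups = [dispatch_0(g) for g in group_by(fib, ("l2", ()))]
# groups == [[0, 2, 8, 34], [1, 1, 3, 5, 13, 21]]
```

The program is entirely first-order: no function value is ever passed around, only labelled data that the dispatchers interpret.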
3.1 Query Defunctionalization for PL/SQL
Query defunctionalization does not need to be reinvented if we carry
it over to PL/SQL. Much like for XQuery, the defunctionalization
transformation for a PL/SQL dialect with first-class functions builds
on three core cases (see above and Figure 21 in Appendix A):
(1) the creation of function literals (applies in lines 9, 23, and 24 of
the PL/SQL example in Figure 4),
(2) references to named function values (GREATEST in line 25), and
(3) dynamic function application (applies in lines 10 and 30).
Applied to the example of Figure 4 (order completion dates),
defunctionalization generates the output of Figure 9. The resulting
code executes on vanilla PL/SQL hosts; we show a PostgreSQL 9
dialect here, minor adaptations yield syntactic compatibility with
Oracle.
PL/SQL operates over typed tables and values and thus requires
the generation of typed closures. In the present example, we use
τt1 →t2 to denote the type of closures that represent functions of
type t1 → t2 . (For now, τ is just a placeholder—Section 4 discusses
suitable relational implementations of this type.) As expected, we
find higher-order function item_dates to accept and return values
of such types τ (line 35).
CREATE FUNCTION `1(o ORDERS, comp τDATE×DATE→DATE) RETURNS DATE AS
BEGIN
  RETURN (SELECT dispatch_2(comp, MAX(li.l_commitdate),
                                  MAX(li.l_shipdate))
          FROM LINEITEM li
          WHERE li.l_orderkey = o.o_orderkey);
END;

CREATE FUNCTION `2(o ORDERS) RETURNS DATE AS
BEGIN
  RETURN o.o_orderdate;
END;

CREATE FUNCTION `3(o ORDERS) RETURNS DATE AS
BEGIN
  RETURN NULL;
END;

CREATE FUNCTION dispatch_1(clos τORDERS→DATE, b1 ORDERS)
RETURNS DATE AS
BEGIN
  case clos of
    ⟨`1 | comp⟩ ⇒ `1(b1, comp)
    ⟨`2⟩        ⇒ `2(b1)
    ⟨`3⟩        ⇒ `3(b1)
END;

CREATE FUNCTION dispatch_2(clos τDATE×DATE→DATE, b1 DATE, b2 DATE)
RETURNS DATE AS
BEGIN
  case clos of
    ⟨`4⟩ ⇒ GREATEST(b1, b2)
END;

CREATE FUNCTION item_dates(comp τDATE×DATE→DATE)
RETURNS τORDERS→DATE AS
BEGIN
  RETURN ⟨`1 | comp⟩;
END;

CREATE TABLE COMPLETION (
  c_orderstatus CHAR(1),
  c_completion τORDERS→DATE);

INSERT INTO COMPLETION VALUES
  (’F’, ⟨`2⟩),
  (’P’, ⟨`3⟩),
  (’O’, item_dates(⟨`4⟩));

SELECT o.o_orderkey,
       o.o_orderstatus,
       dispatch_1(c.c_completion, o) AS completion
FROM ORDERS o, COMPLETION c
WHERE o.o_orderstatus = c.c_orderstatus;
Figure 9: PL/SQL code of Figure 4 after defunctionalization.
COMPLETION
c_orderstatus   c_completion
’F’             ⟨`2⟩
’P’             ⟨`3⟩
’O’             ⟨`1 | ⟨`4⟩⟩
Figure 10: Table of functions: COMPLETION holds closures of
type τORDERS→DATE in column c_completion.
Likewise, PL/SQL defunctionalization emits typed dispatchers
dispatch_i each of which implements dynamic function invocation for closures of a particular type:⁷ the dispatcher associated with functions of type t1 → t2 has the PL/SQL signature
FUNCTION(τt1→t2, t1) RETURNS t2. With this typed representation
come opportunities to improve efficiency. We turn to these in the
next section.

⁷Since PL/SQL lacks parametric polymorphism, we may assume that the ti
denote concrete types. Type specialization [33] could pave the way for a
polymorphic variant of PL/SQL, one possible thread of future work.

Tables of Functions. After defunctionalization, functional values
equate to first-order closure values. This becomes apparent with a look
at table COMPLETION after it has been populated with three functions
(in lines 45 to 48 of Figure 9). Column c_completion holds the
associated closures (Figure 10). The closures with labels `2 and `3
represent the function literals in lines 23 and 24 of Figure 4: both
are closed and have an empty environment. Closure `1, representing
the function literal defined at line 9 of Figure 4, carries the value of
free variable comp which itself is a (date comparator) function. We
thus end up with a nested closure.

Tables of functions may persist in the database like regular
first-order tables. To guarantee that closure labels and environment
contents are interpreted consistently when such tables are queried,
update and query statements need to be defunctionalized together,
typically as part of the same PL/SQL package [4, §10] (whole-query
transformation, see Appendix A). Still, query defunctionalization
is restricted to operate in a closed world: the addition of new literal
functions or named function references requires the package to be
defunctionalized anew.

4. Representing (Nested) Closures
While the defunctionalization transformation nicely carries over
to query languages, we face the challenge to find closure representations that fit query runtime environments. Since we operate
non-invasively, we need to devise representations that can be expressed within the query language’s data model itself. (We might
benefit from database engine adaptations but such invasive designs
are not in the scope of the present paper.)

Defunctionalization is indifferent to the exact method of closure
construction and elimination provided that the implementation can
(a) discriminate on the code labels ` and
(b) hold any value of the language’s data model in the environment.
If the implementation is typed, we need to
(c) ensure that all constructed closures for a given function
type t1 → t2 share a common representation type τt1→t2
(cf. our discussion in Section 3.1).

Since functions can assume the role of values, (b) implies that
closures may be nested. We encountered nested closures of depth 2
in Figure 10 where the environment of closure `1 holds a closure
labeled `4. For particular programs, the nesting depth may be
unbounded, however. The associative map example of Section 1
creates closures of the form

  ⟨`1 | k1 v1 ⟨`1 | k2 v2 · · · ⟨`1 | kn vn ⟨`3⟩⟩ · · ·⟩⟩    (∗)

where the depth is determined by the number n of key/value pairs
(ki, vi) stored in the map.
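The growth of this nesting can be reproduced in a few lines. The following is an illustrative Python sketch of the "map as function" idiom, with closures again encoded as (label, environment) tuples; the function names `empty`, `entry`, and `lookup` are ours:

```python
# Each map entry wraps the previous map closure one level deeper,
# yielding exactly the shape of closure (*).

def empty():
    return ("l3", ())              # the empty map: lookups find nothing

def entry(k, v, rest):
    return ("l1", (k, v, rest))    # capture key, value, and the rest of the map

def lookup(clos, key):
    label, env = clos              # closure elimination: discriminate on label
    while label == "l1":
        k, v, rest = env
        if k == key:
            return v
        label, env = rest          # descend one nesting level
    return None                    # reached l3: key absent

m = entry(1, "one", entry(2, "two", empty()))  # depth grows with each entry
# lookup(m, 2) == "two"; lookup(m, 7) is None
```

Note that the nesting depth of `m` is exactly the number of entries, matching the unbounded-depth observation above.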
Here, we discuss closure implementation variants in terms of
representation functions CJ·K that map closures to regular language
constructs. We also point out several refinements.
4.1 XQuery: Tree-Shaped Closures
For XQuery, one representation that equates closure construction
with XML element construction is given in Figure 11. A closure
with label ` maps to an outer element with tag ` that holds the
environment contents in a sequence of env elements. In the environment, atomic items are tagged with their dynamic type such that
closure elimination can restore value and type (note the calls to
function wrap() and its definition in Figure 12): item 1 of type
xs:integer is held as <atom><integer>1</integer></atom>.
Item sequences map into sequences of their wrapped items, XML
nodes are not wrapped at all.
CJ⟨` | x1 · · · xn⟩K = element ` { element env { CJx1K }, . . . ,
                                   element env { CJxnK } }
CJ⟨`⟩K              = element ` {}
CJxK                = wrap(x)

Figure 11: XQuery closure representation in terms of XML
fragments. Function wrap() is defined in Figure 12a.
declare function wrap($xs)
{
  for $x in $xs return
    typeswitch ($x)
      case xs:anyAtomicType
        return wrap-atom($x)
      case attribute(*)
        return element attr {$x}
      default return $x
};
(a)

declare function wrap-atom($a)
{
  element atom {
    typeswitch ($a)
      case xs:integer
        return element integer {$a}
      case xs:string
        return element string {$a}
      [. . . more atomic types. . . ]
      default return element any {$a}
  }
};
(b)

Figure 12: Preserving value and dynamic type of environment
contents through wrapping.
Closure elimination turns into an XQuery typeswitch() on the
outer tag name while values in the environment are accessed via
XPath child axis steps (Figure 13). Auxiliary function unwrap()
(obvious, thus not shown) uses the type tags to restore the original
atomic items held in the environment.
In this representation, closures nest naturally. If we apply CJ·K to
the closure (∗) that resulted from key/value map construction, we
obtain the XML fragment of Figure 14 whose nested shape directly
reflects that of the input closure.
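The element-based mapping CJ·K of Figure 11 can be imitated with any XML library. The following is an illustrative Python sketch using the standard `xml.etree.ElementTree` module; the helper names `wrap` and `closure` echo the paper's, but the code is ours and simplifies the type-tagging to Python type names:

```python
# A closure becomes an element tagged with its label; each environment
# slot becomes an <env> child; atomic values are wrapped with a type tag.
import xml.etree.ElementTree as ET

def wrap(x):
    if isinstance(x, ET.Element):   # nodes (here: nested closures)
        return x                    # are not wrapped
    atom = ET.Element("atom")
    ET.SubElement(atom, type(x).__name__).text = str(x)
    return atom

def closure(label, *env):
    c = ET.Element(label)
    for x in env:
        ET.SubElement(c, "env").append(wrap(x))
    return c

inner = closure("l3")                    # empty environment: <l3/>
outer = closure("l1", "k1", "v1", inner) # nested closure, as in Figure 14

assert outer.find("env/atom/str").text == "k1"
assert outer.find("env/l3") is not None  # nesting reflected in the tree
```

Closure elimination would then discriminate on the outer tag and read the `env` children back, exactly as Figure 13 does with typeswitch() and child axis steps.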
Refinements. The above closure representation builds on inherent
strengths of the underlying XQuery processor—element construction and tree navigation—but has its shortcomings: XML nodes held
in the environment lose their original tree context due to XQuery’s
copy semantics of node construction. If this affects the defunctionalized queries, an environment representation based on by-fragment
semantics [36], preserving document order and ancestor context, is
a viable alternative.
Further options build on XQuery’s other aggregate data type:
the item sequence: closures then turn into non-empty sequences
of type item()+. While the head holds label `, the tail can hold
the environment’s contents: (`,x1 ,. . . ,xn ). In this representation,
neither atomic items nor nodes require wrapping as value, type, and
tree context are faithfully preserved. Closure elimination accesses
the xi through simple positional lookup into the tail. Indeed, we
have found this implementation option to perform particularly
well (Section 5). Due to XQuery’s implicit sequence flattening,
this variant requires additional runtime effort in the presence of
sequence-typed xi or closure nesting, though (techniques for the flat
representation of nested sequences apply [25]).
Lastly, invasive approaches may build on engine-internal support
for aggregate data structures. Saxon [25], for example, implements
an appropriate tuple structure that can serve to represent closures.8
4.2 PL/SQL: Typed Closures
Recall that we require a fully typed closure representation to meet
the PL/SQL semantics (Section 3.1). A direct representation of
closures of, in general, unbounded depths would call for a recursive
representation type. Since the PL/SQL type system reflects the flat
relational data model, recursive types are not permitted, however.
8 http://dev.saxonica.com/blog/mike/2011/07/#000186
case e1 of
  ..
  ⟨` | $v1 · · · $vn⟩ ⇒ e2

        ⇓

typeswitch (e1)
  ..
  case element(`) return
    let $env := e1/env
    let $v1 := unwrap($env[1]/node())
    ..
    let $vn := unwrap($env[n]/node())
    return e2

Figure 13: XQuery closure elimination: typeswitch() discriminates on the label, axis steps access the environment.
<`1>
  <env><atom><tkey>k1</tkey></atom></env>
  <env><atom><tval>v1</tval></atom></env>
  <env>
    <`1>
      <env><atom><tkey>k2</tkey></atom></env>
      <env><atom><tval>v2</tval></atom></env>
      <env>
        ···
        <`1>
          <env><atom><tkey>kn</tkey></atom></env>
          <env><atom><tval>vn</tval></atom></env>
          <env><`3/></env>
        </`1>
        ···
      </env>
    </`1>
  </env>
</`1>

Figure 14: XML representation of the nested closure (∗). tkey
and tval denote the types of keys and values, respectively.
CJ⟨` | x1 · · · xn⟩K = ROW(`,γ)        ENVt1→t2
CJ⟨`⟩K              = ROW(`,NULL)       id | env
CJxK                = x                 ·· | ··
                                        γ  | ROW(CJx1K,. . . ,CJxnK)

Figure 15: Relational representation for closures, general approach (γ denotes an arbitrary but unique key value).

ENVtkey→tval
id    | env
γn    | ROW(k1,v1,ROW(`1,γn−1))
γn−1  | ROW(k2,v2,ROW(`1,γn−2))
··    | ··
γ1    | ROW(kn,vn,ROW(`3,NULL))

Figure 16: Environment table built to represent closure (∗).
Instead, we represent closures as row values, built by constructor ROW(), i.e., native aggregate record structures provided
by PL/SQL. Row values are first-class citizens in PL/SQL and, in
particular, may be assigned to variables, can contain nested row
values, and may be stored in table cells (these properties are covered by feature S024 “support for enhanced structured types” of the
SQL:1999 standard [31]).
Figure 15 defines function CJ·K that implements a row value-based
representation. A closure ⟨` | x1 · · · xn⟩ of type τt1→t2 maps to
the expression ROW(`,γ). If the environment is non-empty, CJ·K
constructs an additional row to hold the environment contents. This
row, along with key γ, is then appended to binary table ENVt1→t2
which collects the environments of all functions of type t1 → t2.
Notably, we represent non-closure values x as is (CJxK = x), saving
the program from performing wrap()/unwrap() calls at runtime.
CJ⟨` | x1 · · · xn⟩K = ROW(`,ROW(CJx1K,. . . ,CJxnK))
CJ⟨`⟩K              = ROW(`,NULL)
CJxK                = x

Figure 17: Relational representation of closures with fixed nesting depth: environment contents inlined into closure.
This representation variant yields a flat relational encoding regardless of closure nesting depth. Figure 16 depicts the table of
environments that results from encoding closure (∗). The overall
top-level closure is represented by ROW(`1 ,γn ): construction proceeds inside-out with a new outer closure layer added whenever a
key/value pair is added to the map. This representation of closure environments matches well-known relational encodings of tree-shaped
data structures [14].
Environment Sharing. ENV tables create opportunities for environment sharing. This becomes relevant if function literals are
evaluated under invariable bindings (recall our discussion of function group-by in Figure 5). A simple, yet dynamic implementation of environment sharing is obtained if we alter the behavior
of CJ ` x1 · · · xn K: when the associated ENV table already carries an environment of the same contents under a key γ, we return ROW(`,γ) and do not update the table—otherwise a new environment entry is appended as described before. Such upsert operations are a native feature of recent SQL dialects (cf. MERGE [31,
§14.9]) and benefit if column env of the ENV table is indexed. The
resulting many-to-one relationship between closures and environments closely resembles the space-efficient safely linked closures
as described by Shao and Appel in [29]. We return to environment
sharing in Section 5.
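The upsert-based sharing of environments amounts to hash-consing the ENV table. The following is an illustrative Python sketch (the dictionaries standing in for the ENV table and its index on column env, and the function name `close`, are ours):

```python
# Before appending a new row to the ENV table, check whether an
# identical environment already exists and reuse its key gamma.
ENV = {}          # key gamma -> environment row
BY_CONTENT = {}   # environment row -> key gamma (index on column env)

def close(label, *env):
    if not env:
        return (label, None)         # ROW(label, NULL): empty environment
    gamma = BY_CONTENT.get(env)
    if gamma is None:                # unseen environment: insert (upsert)
        gamma = len(ENV) + 1
        ENV[gamma] = env
        BY_CONTENT[env] = gamma
    return (label, gamma)            # many closures may share one environment

seq = (0, 1, 1, 2, 3, 5, 8, 13, 21, 34)
a = close("l1", 0, ("l2", None), seq)
b = close("l1", 0, ("l2", None), seq)

assert a == b and len(ENV) == 1      # identical environment stored only once
```

As in the text, sharing pays off only when literals are evaluated under invariable bindings: a second closure with a different captured key would still get its own environment row.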
Closure Inlining. Storing environments separately from their closures also incurs an overhead during closure elimination, however.
Given a closure encoding ROW(`,γ) with γ ≠ NULL, the dispatcher
(1) discriminates on `, e.g., via PL/SQL’s CASE· · · WHEN· · · END CASE,
then (2) accesses the environment through an ENV table lookup with
key γ.
With typed closures, the representation types τt1→t2 are comprised of (or: depend on) typed environment contents. For the large
class of programs—or parts thereof—which nest closures to a statically known, limited depth, these representation types will be non-recursive. Below, the type dependencies for the examples of Figures 2 and 4 are shown on the left and right, respectively (read → as “has environment contents of type”):

  τtkey→tval → tkey, tval, τtkey→tval        τORDERS→DATE → τDATE×DATE→DATE
Note how the loop on the left coincides with the recursive shape of
closure (∗). If these dependencies are acyclic (as they are for the
order completion date example), environment contents may be kept
directly with their containing closure: separate ENV tables are not
needed and lookups are eliminated entirely. Figure 17 defines a variant of CJ·K that implements this inlined closure representation. With
this variant, we obtain CJ⟨`1 | ⟨`4⟩⟩K = ROW(`1,ROW(`4,NULL))
(see Figure 10).
We quantify the savings that come with closure inlining in the
upcoming section.
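In tuple form, the inlined mapping of Figure 17 is a one-liner. The following is an illustrative Python sketch (the function name `inline` is ours), mirroring CJ⟨`1 | ⟨`4⟩⟩K:

```python
# Closure inlining: nest the environment row directly inside the
# closure row, so no ENV-table lookup is needed at elimination time.
def inline(label, *env):
    # ROW(label, ROW(...)) if the environment is non-empty, else ROW(label, NULL)
    return (label, tuple(env) or None)

clos = inline("l1", inline("l4"))
assert clos == ("l1", (("l4", None),))   # matches ROW(l1, ROW(l4, NULL))
```

This only type-checks in a relational setting when the dependency graph sketched above is acyclic, which is exactly the condition stated in the text.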
5. Does it Function? (Experiments)
Adding native support for first-class functions to a first-order query
processor calls for disruptive changes to its data model and the associated set of supported operations. With defunctionalization and its
declare function group-by($seq as item()*,
                          $key as function(item()*) as item()*)
  as (function() as item()*)*
{
  let $keys := for $x in $seq return $key($x)
  for $k in distinct-values($keys)
  let $group := $seq[$key(.) = $k]   (: changed from Figure 5 :)
  return
    function() { $group }            (: changed from Figure 5 :)
};

let $fib := (0,1,1,2,3,5,8,13,21,34)
for $g in group-by($fib, function($x) { $x mod 2 })
return
  element group { $g() }

Figure 18: Hoisting invariant computation out of the body of
the literal function at line 9 affects closure size.
non-invasive source transformation, these changes are limited to the
processor’s front-end (parser, type checker, query simplification).
Here, we explore this positive aspect but also quantify the performance penalty that the non-native defunctionalization approach
incurs.
XQuery 3.0 Test Suite. Given the upcoming XQuery 3.0 standard, defunctionalization can help to carry forward the significant development effort that has been put into XQuery 1.0 processors. To make this point, we subjected three such processors—
Oracle 11g (release 11.1) [24], Berkeley DB XML 2.5.16 [1] and
Sedna 3.5.161 [15]—to relevant excerpts of the W3C XQuery 3.0
Test Suite (XQTS).9 All three engines are database-supported XQuery processors; native support for first-class functions would require
substantial changes to their database kernels.
Instead, we fed the XQTS queries into a stand-alone preprocessor
that implements the defunctionalization transformation as described
in Section 3. The test suite featured, e.g.,
• named references to user-defined and built-in functions, literal
functions, sequences of functions, and
• higher-order functions accepting and returning functions.
All three systems were able to successfully pass these tests.
Closure Size. We promote a function-centric query style in this
work, but ultimately all queries have to be executed by datacentric database query engines. Defunctionalization implements
this transition from functions to data, i.e., closures, under the hood.
This warrants a look at closure size.
Turning to the XQuery grouping example of Figure 5 again, we
see that the individual groups in the sequence returned by group-by
are computed on-demand: a group’s members will be determined
only once its function is applied ($g() in line 14). Delaying the
evaluation of expressions by wrapping them into (argument-less)
functions is another useful idiom available in languages with first-class functions [7], but there are implications for closure size: each
group’s closure captures the environment required to determine its
group members. Besides $key and $k, each environment includes
the contents of free variable $seq (the input sequence) such that
the overall closure space requirements are in O(g · |$seq|) where g
denotes the number of distinct groups. A closure representation that
allows the sharing of environments (Section 4.2) would bring the
space requirements down to O(|$seq|) which marks the minimum
size needed to partition the sequence $seq.
Alternatively, in the absence of sharing, evaluating the expression $seq[$key(.) = $k] outside the wrapping function computes
groups eagerly. Figure 18 shows this alternative approach in which
⁹A pre-release is available at http://dev.w3.org/cvsweb/2011/QT3-test-suite/misc/HigherOrderFunctions.xml.
1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
BaseX
<map>
<entry>
<key><atom><tkey >k1</tkey ></atom></key>
<val><atom><tval >v1</tval ></atom></val>
</entry>
<entry>
<key><atom><tkey >k2</tkey ></atom></key>
<val><atom><tval >v2</tval ></atom></val>
</entry>
···
<entry>
<key><atom><tkey >kn</tkey ></atom></key>
<val><atom><tval >vn</tval ></atom></val>
</entry>
</map>
# free variables
native
dispatch
PostgreSQL
10 500
11 860
2 414
8 271
(a) Unary PL/SQL function.
native
dispatch
BaseX
Saxon
394
448
1 224
755
Saxon
10
1
5
10
native
402
396
467
1 144
1 451
1 725
node
sequence
2 132
743
7 685
1 527
14 535
2 485
2 133
854
7 347
1 526
12 992
2 350
# Calls
Line
1
1 500 000
732 044
732 044
732 044
729 413
38 543
50
19
1
3
28
9
14
Query/Function
SELECT o_orderkey,· · ·
dispatch_1()
`1 ()
SELECT dispatch_2(· · ·
dispatch_2()
`2 ()
`3 ()
ENV
Inline
47 874
44 249
22 748
9 270
3 554
2 942
155
40 093
35 676
22 120
9 363
3 450
2 856
149
Table 3: Profiles for the PL/SQL program of Figure 9: environment tables vs. closure inlining. Averaged cumulative time measured in ms. Line numbers refer to Figure 9.
(b) Literal XQuery function.
Table 1: Performing 106 invocations of closed functions (native
vs. dispatched calls). Wall-clock time measured in ms.
the bracketed part has been changed from Figure 5. A group’s closure now only includes the group’s members (free variable $group,
line 9 in Figure 18) and the overall closure sizes add up to O(|$seq|)
as desired. Closure size thus should be looked at with care during
query formulation—such “space leaks” are not specific to the present
approach, however [30].
With defunctionalization, queries lose functions but gain data. This
does not imply that defunctionalized queries use inappropriate
amounts of space, though. In our experiments we have found
function-centric queries to implicitly generate closures whose size
matches those of the data structures that are explicitly built by
equivalent first-order formulations.
To illustrate, recall the two XQuery map variants of Section 1.
Given n key/value pairs (ki , vi ), the function-centric variant of Figure 2 implicitly constructs the nested closure shown in Figure 14: a
non-empty map of n entries will yield a closure size of 10 · n XML
nodes. In comparison, the first-order map variant of Figure 3 explicitly builds a key/value list of similar size, namely 1 + 9 · n nodes
(Figure 19). Further, key lookups in the map incur almost identical
XPath navigation efforts in both variants, either through closure
elimination or, in the first-order case, the required calls to map:get.
Native vs. Dispatched Function Calls. As expected, the invocation
of functions through closure label discrimination by dispatchers
introduces measurable overhead if compared to native function
calls.10 To quantify these costs, we performed experiments in which
106 native and dispatched calls were timed. We report the averaged
wall-clock times of 10 runs measured on a Linux host, kernel
version 3.5, with Intel Core i5 CPU (2.6 GHz) and 8 GB of primary
memory.
Both, function invocation itself and closure manipulation contribute to the overhead. To assess their impact separately, a first
round of experiments invoked closed functions (empty environment). Table 1a documents the cost of a dispatched PL/SQL function call—i.e., construction of an empty closure, static call to the
10 Remember
5
Table 2: 106 invocations and elimination of closures of varying
size (1/5/10 free variables). Wall-clock time measured in ms.
Figure 19: Key-value map representation generated by the firstorder code of Figure 3 (compare with the closure of Figure 14).
Oracle
1
that this overhead only applies to dynamic function calls—static
calls are still performed natively.
dispatch function, closure label discrimination, static call to a
surrogate function. While dispatched function calls minimally affect Oracle 11g performance—hinting at a remarkably efficient
implementation of its PL/SQL interpreter—the cost is apparent in
PostgreSQL 9.2 (factor 3.5). In the XQuery case, we executed the
experiment using BaseX 7.3 [17] and Saxon 9.4 [3]—both engines
provide built-in support for XQuery 3.0 and thus allow a comparison
of the costs of a native versus a defunctionalized implementation
of first-class functions. BaseX, for example, employs a Java-based
implementation of closure-like structures that refer to an expression
tree and a variable environment. For the dynamic invocation of a
closed literal function, BaseX shows a moderate increase of 14 %
(Table 1b) when dispatching is used. For Saxon, we see a decrease
of 38 % from which we conclude that Saxon implements static
function calls (to dispatch and the surrogate function in this case)
considerably more efficient than dynamic calls. The resulting performance advantage of defunctionalization has also been reported
by Tolmach and Oliva [33].
In a second round of experiments, we studied the dynamic invocation of XQuery functions that access 1, 5, or 10 free variables of
type xs:integer. The defunctionalized implementation shows the
expected overhead that grows with the closure size (see Table 2):
the dispatcher needs to extract and unwrap 1, 5, or 10 environment
entries from its closure argument $clos before these values can
be passed to the proper surrogate function (Section 3). As anticipated in Section 4.1, however, a sequence-based representation of
closures can offer a significant improvement over the XML nodebased variant—both options are shown in Table 2 (rows “node” vs.
“sequence”). If this option is applicable, the saved node construction
and XPath navigation effort allows the defunctionalized invocation
of non-closed functions perform within a factor of 1.36 (Saxon) or 5
(BaseX) of the native implementation.
Environment Tables vs. Closure Inlining. Zooming out from the
level of individual function calls, we assessed the runtime contribution of dynamic function calls and closure elimination in the
context of a complete PL/SQL program (Figure 9). To this end, we
recorded time profiles while the program was evaluated against a
TPC-H instance of scale factor 1.0 (the profiles are based on PostgreSQL’s pg_stat_statements and pg_stat_user_functions
views [2]). Table 3 shows the cumulative times (in ms) over all query
and function invocations: one evaluation of dispatch_1(), includ-
ing the queries and functions it invokes, takes 44 429 ms/1 500 000 ≈
0.03 ms on average (column ENV). The execution time of the toplevel SELECT statement defines the overall execution time of the
program. Note that the cumulative times do not add up perfectly
since the inevitable PL/SQL interpreter overhead and the evaluation
of built-in functions are not reflected in these profiles.
Clearly, dispatch_1() dominates the profile as it embodies
the core of the configurable completion date computation. For
more than 50 % of the overall 1 500 000 orders, the dispatcher
needs to eliminate a closure of type τORDERS→DATE and extract the
binding for free variable comp from its environment before it can
invoke surrogate function `1 (). According to Section 4.2, closure
inlining is applicable here and column Inline indeed shows a
significant reduction of execution time by 18 % (dispatch_2()
does not benefit since it exclusively processes closures with empty
environments.)
Simplifications. A series of simplifications help to further reduce
the cost of queries with closures:
• Identify ` and ` (do not build closures with empty environment).
This benefits dynamic calls to closed and built-in functions.
• If Dispatch(n) is a singleton set, dispatch_n becomes superfluous as it is statically known which case branch will be taken.
• When constructing ` e1 · · · en , consult the types of the ei
to select the most efficient closure representation (recall our
discussion in Section 4).
For the PL/SQL program
of Figure 9, these simpliQuery/Function
Simplified
fications lead to the reSELECT o_orderkey,· · ·
36 010
moval of dispatch_2()
dispatch_1()
31 851
since the functional argu`1 ()
18 023
ment comp is statically
SELECT GREATEST(· · ·
4 770
known to be GREATEST in
`2 ()
2 923
the present example. Exe`3 ()
154
cution time is reduced by
an additional 11 % (see
column Simplified above). We mention that the execution time
now is within 19 % of a first-order formulation of the program—
this first-order variant is less flexible as it replaces the join with
(re-)configurable function table COMPLETION by an explicit hardwired CASE statement, however.
Avoiding Closure Construction. A closer look at the “native” row
of Table 2 shows that a growing number of free variables only has
moderate impact on BaseX’ and Saxon’s native implementations
of dynamic function calls: in the second-round experiments, both
processors expand the definitions of free variables inside the called
function’s body, effectively avoiding the need for an environment.
Unfolding optimizations of this kind can also benefit defunctionalization.
The core of such an inlining optimizer is a source-level query
rewrite in which closure construction and elimination cancel each
other out:
case ` e1 · · · en of
..
.
` $v1 · · · $vn ⇒ e
..
.
let $v1 := e1
..
.
$vn := en
return e
As this simplification depends on the closure label ` and the environment contents e1 , . . . , en to be statically known at the case · · · of
site, the rewrite works in tandem with unfolding transformations:
• Replace let-bound variables by their definitions if the latter are
considered simple (e.g., literals or closures with simple environment contents).
1
2
3
4
5
6
7
8
let $fib := (0,1,1,2,3,5,8,13,21,34)
let $keys := for $x in $fib return $x mod 2
for $x in for $k in distinct-values($keys)
let $group := $fib[((.) mod 2) = $k]
return `1 $group
return element group { case $x of
`1 $group ⇒ $group
}
Figure 20: First-order XQuery code for the example of Figure 18 (defunctionalization and unfolding rewrite applied).
defunctionalization
+ unfolding
+ simplifications
Oracle
Berkeley DB
Sedna
5.03
4.99
1.28
20.60
9.29
7.45
2.56
1.31
0.98
Table 4: Impact of unfolding and simplifications on the evaluation of group-by($seq, function($x) { $x mod 100 }) for
|$seq| = 104 . Averaged wall-clock time measured in seconds.
• Replace applications of function literals or calls to user-defined
non-recursive functions by the callee’s body in which function
arguments are let-bound.
Defunctionalization and subsequent unfolding optimization transform the XQuery group-by example of Figure 18 into the firstorder query of Figure 20. In the optimized query, the dispatchers dispatch_0 and dispatch_1 (cf. Figure 8) have been inlined.
The construction and elimination of closures with label `2 canceled
each other out.
Finally, the above mentioned simplifications succeed in removing
the remaining closures labeled `1 , leaving us with closure-less code.
Table 4 compares evaluation times for the original defunctionalized
group-by code and its optimized variants—all three XQuery 1.0
processors clearly benefit.
6.
More Related Work
Query defunctionalization as described here builds on a body of
work on the removal of higher-order functions in programs written in
functional programming languages. The representation of closures
in terms of first-order records has been coined as closure-passing
style [5]. Dispatchers may be understood as mini-interpreters that
inspect closures to select the next program step (here: surrogate
function) to execute, a perspective due to Reynolds [27]. Our
particular formulation of defunctionalization relates to Tolmach
and Oliva and their work on translating ML to Ada [33] (like the
target query languages we consider, Ada 83 lacks code pointers).
The use of higher-order functions in programs can be normalized
away if specific restrictions are obeyed. Cooper [12] studied such
a translation that derives SQL queries from programs that have a
flat list (i.e., tabular) result type—this constraint rules out tables of
functions, in particular. Program normalization is a runtime activity,
however, that is not readily integrated with existing query engine
infrastructure.
With HOMES [35], Benedikt and Vu have developed higherorder extensions to relational algebra and Core XQuery that add
abstraction (admitting queries of function type that accept queries
as parameters) as well as dynamic function calls (applying queries
to queries). HOMES’ query processor alternates between regular
database-supported execution of query blocks inside PostgreSQL or
BaseX and graph-based β-reduction outside a database system. In
contrast, defunctionalized queries may be executed while staying
within the context of the database kernel.
From the start, the design of FQL [10] relied on functions as
the primary query building blocks: following Backus’ FP language,
FQL offers functional forms to construct new queries out of existing
functions. Buneman et al. describe a general implementation technique that evaluates FQL queries lazily. The central notion is that of
suspensions, pairs hf, xi that represent the yet unevaluated application of function f to argument x. Note how group-by in Figure 8
mimics suspension semantics by returning closures (with label `1 )
that only get evaluated (via dispatch_0) once a group’s members
are required.
A tabular data model that permits function-valued columns has
been explored by Stonebraker et al. [32]. Such columns hold QUEL
expressions, represented either as query text or compiled plans.
Variables may range over QUEL values and an exec(e) primitive is
available that spawns a separate query processor instance to evaluate
the QUEL-valued argument e at runtime.
Finally, the Map-Reduce model [13] for massively distributed
query execution successfully adopts a function-centric style of query
formulation. Functions are not first-class, though: first-order userdefined code is supplied as arguments to two built-in functions map
and reduce—Map-Reduce builds on higher-order function constants
but lacks function variables.
Defunctionalized XQuery queries that rely on an element-based
representation of closures create XML fragments (closure construction) whose contents are later extracted via child axis steps (closure
elimination). When node construction and traversal meet like this,
the creation of intermediate fragments can be avoided altogether.
Such fusion techniques have been specifically described for XQuery [22]. Fusion, jointly with function inlining as proposed in [16],
thus can implement the case · · · of cancellation optimization discussed in Section 5. If cancellation is not possible, XQuery processors can still benefit from the fact that node identity and document
order are immaterial in the remaining intermediate fragments [19].
7.
Closure
We argue that a repertoire of literal function values, higher-order
functions, and functions in data structures can lead to particularly
concise and elegant formulations of queries. Query defunctionalization enables off-the-shelf first-order database engines to support
such a function-centric style of querying. Cast in the form of a
syntax-directed transformation of queries, defunctionalization is
non-invasive and affects the query processor’s front-end only (a
simple preprocessor will also yield a workable implementation).
Experiments show that the technique does not introduce an undue
runtime overhead.
Query defunctionalization applies to any query language that
(1) offers aggregate data structures suitable to represent closures and
(2) implements case discrimination based on the contents of such
aggregates. These are light requirements met by many languages
beyond XQuery and PL/SQL. It is hoped that our discussion of query
defunctionalization is sufficiently self-contained such that it can be
carried over to other languages and systems.
Acknowledgment. We dedicate this work to the memory of John C.
Reynolds ( April 2013).
References
[1] Oracle Berkeley DB XML. http://www.oracle.com/technetwork/
products/berkeleydb/index-083851.html.
[2] PostgreSQL 9.2. http://www.postgresql.org/docs/9.2/.
[3] Saxon. http://saxon.sourceforge.net/.
[4] Oracle Database PL/SQL Language Reference—11g Release 1 (11.1),
2009.
[5] A. Appel and T. Jim. Continuation-Passing, Closure-Passing Style. In
Proc. POPL, 1989.
[6] R. Bird and P. Wadler. Introduction to Functional Programming.
Prentice Hall, 1988.
[7] A. Bloss, P. Hudak, and J. Young. Code Optimizations for Lazy
Evaluation. Lisp and Symbolic Computation, 1(2), 1988.
[8] S. Boag, D. Chamberlin, M. Fernández, D. Florescu, J. Robie, and
J. Siméon. XQuery 1.0: An XML Query Language. W3C Recommendation, 2010.
[9] P. Boncz, T. Grust, M. van Keulen, S. Manegold, J. Rittinger, and
J. Teubner. MonetDB/XQuery: A Fast XQuery Processor Powered by
a Relational Engine. In Proc. SIGMOD, 2006.
[10] P. Buneman, R. Frankel, and R. Nikhil. An Implementation Technique
for Database Query Languages. ACM TODS, 7(2), 1982.
[11] D. Chamberlin, D. Florescu, J. Robie, J. Siméon, and M. Stefanescu.
XQuery: A Query Language for XML. W3C Working Draft, 2001.
[12] E. Cooper. The Script-Writers Dream: How to Write Great SQL in
Your Own Language, and be Sure it Will Succeed. In Proc. DBPL,
2009.
[13] J. Dean and S. Ghemawat. MapReduce: Simplified Data Processing
on Large Clusters. In Proc. OSDI, 2004.
[14] D. Florescu and D. Kossmann. Storing and Querying XML Data Using
an RDBMS. IEEE Data Engineering Bulletin, 22(3), 1999.
[15] A. Fomichev, M. Grinev, and S. Kuznetsov. Sedna: A Native XML
DBMS. In Proc. SOFSEM, 2006.
[16] M. Grinev and D. Lizorkin. XQuery Function Inlining for Optimizing
XQuery Queries. In Proc. ADBIS, 2004.
[17] C. Grün, A. Holupirek, and M. Scholl. Visually Exploring and
Querying XML with BaseX. In Proc. BTW, 2007. http://basex.
org.
[18] T. Grust. Monad Comprehensions: A Versatile Representation for
Queries. In The Functional Approach to Data Management – Modeling,
Analyzing and Integrating Heterogeneous Data. Springer, 2003.
[19] T. Grust, J. Rittinger, and J. Teubner. eXrQuy: Order Indifference in
XQuery. In Proc. ICDE, 2007.
[20] T. Grust, N. Schweinsberg, and A. Ulrich. Functions are Data Too
(Software Demonstration). In Proc. VLDB, 2013.
[21] T. Johnsson. Lambda Lifting: Transforming Programs to Recursive
Equations. In Proc. IFIP, 1985.
[22] H. Kato, S. Hidaka, Z. Hu, K. Nakano, and I. Yasunori. ContextPreserving XQuery Fusion. In Proc. APLAS, 2010.
[23] P. Landin. The Mechanical Evaluation of Expressions. The Computer
Journal, 6(4):308–320, 1964.
[24] Z. Liu, M. Krishnaprasad, and A. V. Native XQuery Processing in
Oracle XMLDB. In Proc. SIGMOD, 2005.
[25] S. Melnik, A. Gubarev, J. Long, G. Romer, S. Shivakumar, M. Tolton,
and T. Vassilakis. Dremel: Interactive Analysis of Web-Scale Datasets.
PVLDB, 3(1), 2010.
[26] F. Pottier and N. Gauthier. Polymorphic Typed Defunctionalization.
In Proc. POPL, 2004.
[27] J. Reynolds. Definitional Interpreters for Higher-Order Programming
Languages. In Proc. ACM, 1972.
[28] J. Robie, D. Chamberlin, J. Siméon, and J. Snelson. XQuery 3.0: An
XML Query Language. W3C Candidate Recommendation, 2013.
[29] Z. Shao and A. Appel. Space-Efficient Closure Representations. In
Proc. Lisp and Functional Programming, 1994.
[30] Z. Shao and A. Appel. Efficient and Safe-for-Space Closure Conversion. ACM TOPLAS, 22(1), 2000.
[31] Database Language SQL—Part 2: Foundation (SQL/Foundation).
ANSI/ISO/IEC 9075, 1999.
[32] M. Stonebraker, E. Anderson, E. Hanson, and B. Rubenstein. QUEL
as a Data Type. In Proc. SIGMOD, 1984.
[33] A. Tolmach and D. Oliva. From ML to Ada: Strongly-Typed Language
Interoperability via Source Translation. J. Funct. Programming, 8(4),
1998.
[34] Transaction Processing Performance Council. TPC-H, a DecisionSupport Benchmark. http://tpc.org/tpch/.
[35] H. Vu and M. Benedikt. HOMES: A Higher-Order Mapping Evalution
System. PVLDB, 4(12), 2011.
[36] Y. Zhang and P. Boncz. XRPC: Interoperable and Efficient Distributed
XQuery. In Proc. VLDB, 2007.
Appendix
A.
Defunctionalization for XQuery
This appendix elaborates the details of defunctionalization for XQuery 3.0. The particular formulation we follow here is a deliberate
adaptation of the transformation as it has been described by Tolmach
and Oliva [33].
We specify defunctionalization in terms of a syntax-directed
traversal, QJeK, over a given XQuery 3.0 source query e (conforming to Figure 6). In general, e will contain a series of function
declarations which precede one main expression to evaluate. Q
calls on the auxiliary DJ·K and EJ·K traversals to jointly transform
declarations and expressions—this makes Q a whole-query transformation [26] that needs to see the input query in its entirety. All
three traversal schemes are defined in Figure 21.
E features distinct cases for each of the syntactic constructs in
the considered XQuery 3.0 subset. However, all cases but those
labeled (1)–(3) merely invoke the recursive traversal of subexpressions, leaving their input expression intact otherwise. The three
cases implement the transformation of literal functions, named function references, and dynamic function calls. We will now discuss
each of them in turn.
Case (1): Literal Functions. Any occurrence of a literal function,
say f = function($x1 ,. . . ,$xn ) { e }, is replaced by a closure
constructor. Meta-level function label () generates a unique label `
which closure elimination will later use to identify f and evaluate its
body expression e; see case (3) below. The evaluation of e depends
on its free variables, i.e., those variables that have been declared in
the lexical scope enclosing f . We use meta-level function fv () to
identify these variables $v1 , . . . , $vm and save their values in the
closure’s environment. At runtime, when the closure constructor is
encountered in place of f , the closure thus captures the state required
to properly evaluate subsequent applications of f (recall Section 2).
Note that defunctionalization does not rely on functions to be pure:
side-effects caused by body e will also be induced by EJeK.
To illustrate, consider the following XQuery 3.0 snippet, taken
from the group-by example in Figure 5:
for $k in distinct-values($keys)
return
function() { $seq[$key(.) = $k] } .
We have fv (function() { $seq[$key(.) = $k] }) = $k, $key,
$seq. According to E and case (1) in particular, the snippet thus
defunctionalizes to
for $k in distinct-values($keys)
return
`1 $k $key $seq
where `1 denotes an arbitrary yet unique label.
If we assume that the free variables are defined as in the example
of Figure 5, the defunctionalized variant of the snippet will evaluate
to a sequence of two closures:
( `1 0
`2
(0,1,1,2,. . . ) ,
`1
1
`2
(0,1,1,2,. . . ) ) .
These closures capture the varying values 0, 1 of the free iteration
variable $k as well as the invariant values of $key (bound to a
function and thus represented in terms of a closure with label `2 )
and $seq (= (0,1,1,2,. . . )).
Since we will use label `1 to identify the body of the function
literal function() { $seq[$key(.) = $k] }, case (1) saves this
label/body association in terms of a case · · · of branch (see the
assignment to branch in Figure 21). We will shed more light
on branch and lifted when we discuss case (3) below.
Case (2): Named Function References. Any occurrence of an
expression name#n, referencing function name of arity n, is replaced by a closure constructor with a unique label `. In XQuery,
named functions are closed as they are exclusively declared in a
query’s top-level scope—either in the query prolog or in an imported module [28]—and do not contain free variables. In case (2),
the constructed closures thus have empty environments. As before, a case · · · of branch is saved that associates label ` with function name.
Case (3): Dynamic Function Calls. In case of a dynamic function
call e(e1 ,. . . ,en ), we know that expression e evaluates to some
functional value (otherwise e may not occur in the role of a function
and be applied to arguments).11 Given our discussion of cases (1)
and (2), in a defunctionalized query, e will thus evaluate to a closure,
say ` x1 · · · xm (m > 0), that represents some function f .
In the absence of code pointers, we delegate the invocation of the
function associated with label ` to a dispatcher, an auxiliary routine
that defunctionalization adds to the prolog of the transformed query.
The dispatcher
(i) receives the closure as well as e1 , . . . , en (the arguments of
the dynamic call) as arguments, and then
(ii) uses case · · · of to select the branch associated with label `.
(iii) The branch unpacks the closure environment to extract the
bindings of the m free variables (if any) that were in place
when f was defined, and finally
(iv) invokes a surrogate function that contains the body of the
original function f , passing the e1 , . . . , en along with the
extracted bindings (the surrogate function thus has arity n+m).
Re (i) and (ii). In our formulation of defunctionalization for
XQuery, a dedicated dispatcher is declared for all literal functions and named function references that are of the same arity.
The case · · · of branches for the dispatcher for arity n are collected
in set Dispatch(n) while E traverses the input query (cases (1)
and (2) in Figure 21 add a branch to Dispatch(n) when an n-ary
functional value is transformed). Once the traversal is complete, Q
adds the dispatcher routine to the prolog of the defunctionalized
query through declare dispatch(n, Dispatch(n)). This meta-level
function, defined in Figure 22, emits the routine dispatch_n which
receives closure $clos along with the n arguments of the original
dynamic call. Discrimination on the label ` stored in $clos selects
the associated branch. Because dispatch_n dispatches calls to any
n-ary function in the original query, we declare it with a polymorphic signature featuring XQuery’s most polymorphic type item()*.
The PL/SQL variant of defunctionalization, discussed in Section 3.1,
relies on an alternative approach that uses typed dispatchers.
Any occurrence of a dynamic function call e(e1 ,. . . ,en ) is replaced by a static call to the appropriate dispatcher dispatch_n.
Figure 8 (in the main text) shows the defunctionalized query
for the XQuery group-by example of Figure 5. The original query
contained literal functions of arity 0 (in line 8) as well as arity 1
(in line 12). Following case (1), both have been replaced by closure
constructors (with labels `1 and `2 , respectively, see lines 16
and 20 in Figure 8). function($x) { $x mod 2 } is closed: its
closure (label `2 ) thus contains an empty environment. Dynamic
calls to both functions have been replaced by static calls to the
dispatchers dispatch_0 or dispatch_1. For the present example,
Dispatch(0) and Dispatch(1) were singleton sets such that both
dispatchers contain case · · · of expressions with one branch only.
(For an example of a dispatcher with three branches, refer to the
PL/SQL function dispatch_1 in Figure 9, line 19.)
11 Note
that EJ·K defines a separate case for static function calls of the
form name(e1 ,. . . ,en ).
DJdeclare function name($x1 , . . . ,$xn ) { e }K
=
declare function name($x1 , . . . ,$xn ) { EJeK }
EJfor $v in e1 return e2 K
EJlet $v := e1 return e2 K
EJ$vK
EJif (e1 ) then e2 else e3 K
EJ(e1 , . . . ,en )K
EJe/a::tK
EJelement n { e }K
EJe1 [e2 ]K
EJ.K
EJname(e1 , . . . ,en )K
EJfunction($x1 as t1 , . . . ,$xn as tn ) as t { e }K
=
=
=
=
=
=
=
=
=
=
=
EJname#nK
=
for $v in EJe1 K return EJe2 K
let $v := EJe1 K return EJe2 K
$v
if (EJe1 K) then EJe2 K else EJe3 K
(EJe1 K, . . . ,EJen K)
EJeK/a::t
element n { EJeK }
EJe1 K[EJe2 K]
.
name(EJe1 K, . . . ,EJen K)
` $v1 · · · $vm
()
Dispatch(n) Dispatch(n) ∪ {branch}
Lifted Lifted ∪ {lifted}
where
` = label(n)
$v1 , . . . , $vm = fv (function($x1 as t1 , . . . ,$xn as tn ) as t { e })
branch = ` $v1 · · · $vm ⇒
`($b1 , . . . ,$bn ,$v1 , . . . ,$vm )
lifted = declare function `(
$x1 as t1 , . . . ,$xn as tn ,$v1 , . . . ,$vm ) as t
{ EJeK };
`
()
Dispatch(n) Dispatch(n) ∪ {branch}
where
` = label(n)
branch = ` ⇒ name($b1 , . . . ,$bn )
EJe(e1 , . . . ,en )K
=
dispatch_n(EJeK,EJe1 K, . . . ,EJen K)
QJd1 ; . . . ;dn ; eK
=
∀ i ∈ dom(Dispatch): declare dispatch(i, Dispatch(i))
Lifted
DJd1 K; . . . ;DJdn K; EJeK
()
Figure 21: Defunctionalization of XQuery 3.0 function declarations (D), expressions (E) and queries (Q).
declare dispatch(n, {case 1 , . . . , case k }) ≡
1
2
3
4
5
6
7
8
9
declare function dispatch_n(
$clos as closure,
$b1 as item()*,. . . , $bn as item()*) as item()*
{
case $clos of
case
.. 1
.
case k
};
Figure 22: Declaring a dispatcher for n-ary functional values.
Re (iii) and (iv). Inside its dispatcher, the case branch for the closure
` x1 · · · xm for function f invokes the associated surrogate
function, also named `. The original arguments e1 , . . . , en are
passed along with the x1 , . . . , xm . Surrogate function ` incorporates
f ’s body expression and can thus act as a “stand-in” for f . We
declare the surrogate function with the same argument and return
types as f —see the types t and t1 , . . . , tn in case (3) of Figure 21.
The specific signature for ` ensures that the original semantics of f
are preserved (this relates to XQuery’s function conversion rules [28,
§3.1.5.2]).
While f contained m free variables, ` is a closed function as
it receives the m bindings as explicit additional function parameters (surrogate function ` is also known as the lambda-lifted variant
of f [21]). When case (1) transforms a literal function, we add its surrogate to the set Lifted of function declarations. When case (2) transforms the named reference name#n, Lifted remains unchanged:
the closed function name acts as its own surrogate because there
are no additional bindings to pass. Again, once the traversal is complete, Q adds the surrogate functions in set Lifted to the prolog of
the defunctionalized query. Returning to Figure 8, we find the two
surrogate functions `1 and `2 at the top of the query prolog (lines 1
to 4).
| 6 |
Sensor Placement for Optimal Kalman Filtering: Fundamental Limits, Submodularity, and
Algorithms✩
Vasileios Tzoumas∗, Ali Jadbabaie, George J. Pappas
arXiv:1509.08146v3 [math.OC] 31 Jan 2016
Department of Electrical and Systems Engineering, University of Pennsylvania, Philadelphia, PA 19104-6228 USA
Abstract
In this paper, we focus on sensor placement in linear dynamic estimation, where the objective is to place a small number of sensors in a system of interdependent states so as to design an estimator with a desired estimation performance. In particular, we consider a linear time-variant system that is corrupted with process and measurement noise, and study how the selection of its sensors affects the estimation error of the corresponding Kalman filter over a finite observation interval. Our contributions are threefold: First, we prove that the minimum mean square error of the Kalman filter decreases only linearly as the number of sensors increases. That is, adding extra sensors so as to reduce this estimation error is ineffective, a fundamental design limit. Similarly, we prove that the number of sensors grows linearly with the system's size for fixed minimum mean square error and number of output measurements over an observation interval; this is another fundamental limit, especially for systems where the system's size is large. Second, we prove that the log det of the error covariance of the Kalman filter, which captures the volume of the corresponding confidence ellipsoid, with respect to the system's initial condition and process noise is a supermodular and non-increasing set function in the choice of the sensor set. Therefore, it exhibits the diminishing returns property. Third, we provide efficient approximation algorithms that select a small number of sensors so as to optimize the Kalman filter with respect to this estimation error; the worst-case performance guarantees of these algorithms are provided as well. Finally, we illustrate the efficiency of our algorithms using the problem of surface-based monitoring of CO2 sequestration sites studied in Weimer et al. (2008).
Keywords: Least-Squares Linear Estimator, Process and Measurement Noise, Minimal Sensor Placement, Greedy Algorithms.
1. Introduction
In this paper, we aim to monitor dynamic, interdependent phenomena, that is, phenomena with temporal and spatial correlations —with the term “spatial” we refer to any kind of interdependencies between the phenomena. For example, the temperature at any point of an indoor environment depends across time —temporal correlation— on the temperatures of the adjacent points —spatial correlation (Madsen & Holst (1995)). These correlations allow us to monitor such phenomena using a reduced number of sensors; this is an important observation when operational constraints, such as limited bandwidth and communication power, necessitate the design of estimators that use a small number of sensors (Rowaihy et al. (2007); Hero III & Cochran (2011)). Hence, in this paper we consider placing a few sensors so as to monitor this kind of phenomena. To this end, we also account for unknown interdependencies and inputs in their dynamics (Bertsekas (2005)) and, as a result, we consider the presence of process noise, i.e., noise that directly affects these dynamics. In addition, we account for noisy sensor measurements (Kailath et al. (2000)) and, as a result, we consider the presence of measurement noise.
Specifically, we consider phenomena modelled as linear time-variant systems corrupted with process and measurement noise. Our first goal is to study how the placement of their sensors affects the minimum mean square error of their Kalman filter over a finite observation interval (Kailath et al. (2000)). Moreover, we aim to select a small number of sensors so as to minimize the volume of the corresponding confidence ellipsoid of this estimation error. This study is an important distinction in the minimal sensor placement literature (Clark et al. (2012); Lin et al. (2013); Olshevsky (2014); Pequito et al. (2014); Pasqualetti et al. (2014); Summers et al. (2014); Tzoumas et al. (2015); Matni & Chandrasekaran (2014); Pequito et al. (2015); Tzoumas et al. (July 2015); Yan et al. (2015); Zhao & Cortés (2015)), since the Kalman filter is the optimal linear estimator —in the minimum mean square sense— given a sensor set (Kalman (1960)).
Our contributions are threefold:
Fundamental limits. First, we identify fundamental limits in the design of the Kalman filter with respect to its sensors. In particular, given a fixed number of output measurements over any finite observation interval, we prove that the minimum mean square error of the Kalman filter decreases only linearly as the number of sensors increases. That is, adding extra sensors so as to reduce this estimation error of the Kalman filter is ineffective, a fundamental design limit. Similarly, we prove that the number of sensors grows linearly with the system's size for fixed minimum mean square error; this is another fundamental limit, especially for systems where the system's size is large. On the contrary, given a sensor set of fixed cardinality, we prove that the number of output measurements increases only logarithmically with the system's size for fixed estimation error. Overall, our novel results quantify the trade-off between the number of sensors and that of output measurements so as to achieve a specified value for the minimum mean square error.
✩ This paper was not presented at any IFAC meeting. Some of the results in this paper will appear in preliminary form in Tzoumas et al. (2016).
∗ Corresponding author. Tel.: +1 (215) 898 5814; fax: +1 (215) 573 2048. Email addresses: [email protected] (Vasileios Tzoumas), [email protected] (Ali Jadbabaie), [email protected] (George J. Pappas).
Preprint submitted to Elsevier. February 2, 2016.
These results are the first to characterize the effect of the sensor set on the minimum mean square error of the Kalman filter. In particular, in Pasqualetti et al. (2014), the authors quantify only the trade-off between the total energy of the consecutive output measurements and the number of its selected sensors. Similarly, in Yan et al. (2015), the authors consider only the maximum-likelihood estimator for the system's initial condition, and only for a special class of stable linear time-invariant systems. Moreover, they consider systems that are corrupted merely with measurement noise, which is white and Gaussian. Finally, they also assume an infinite observation interval, that is, an infinite number of consecutive output measurements. In contrast, we assume a finite observation interval and study the Kalman estimator both for the system's initial condition and for the system's state at the time of the last output measurement. In addition, we consider general linear time-variant systems that are corrupted with both process and measurement noise, of any distribution (with zero mean and finite variance). Overall, our results characterize the effect of the cardinality of the sensor set on the minimum mean square error of the Kalman filter, that is, the optimal linear estimator.
Submodularity. Second, we identify properties of the log det of the error covariance of the Kalman filter —which captures the volume of the corresponding confidence ellipsoid— with respect to the system's initial condition and process noise over a finite observation interval, as a sensor set function; the design of an optimal Kalman filter with respect to the system's initial condition and process noise implies the design of an optimal Kalman filter with respect to the system's state. Specifically, we prove that it is a supermodular and non-increasing set function in the choice of the sensor set.
In Krause et al. (2008), the authors study sensor placement for monitoring static phenomena with only spatial correlations. To this end, they prove that the mutual information between the chosen and non-chosen locations is submodular. Notwithstanding, we consider dynamic phenomena with both spatial and temporal correlations, and as a result, we characterize as submodular a richer class of estimation performance metrics. Furthermore, in the sensor scheduling literature (Gupta et al. (2006)), the log det of the error covariance of the Kalman filter has been proven submodular, but only for special cases of systems with zero process noise (Shamaiah et al. (2010); Jawaid & Smith (2015)); in contrast, we consider the presence of process noise and prove our result for the general case.¹
¹ In Jawaid & Smith (2015); Zhang et al. (2015), the authors prove with a counterexample in the context of sensor scheduling that the minimum mean square error of the Kalman filter with respect to the system's state is not in general a supermodular set function. We can extend this counterexample to the context of minimal sensor placement as well: the minimum mean square error of the Kalman filter with respect to the system's state is not in general a supermodular set function with respect to the choice of the sensor set.
Algorithms. Third, we consider two problems of sensor placement so as to optimize the log det of the error covariance of the Kalman estimator with respect to the system's initial condition and process noise —henceforth, we refer to this error as the log det error. First, we consider the problem of designing an estimator that guarantees a specified log det error and uses a minimal number of sensors —we refer to this problem as P1. Second, we consider the problem of designing an estimator that uses at most r sensors and minimizes the log det error —we refer to this problem as P2. Naturally, P1 and P2 are combinatorial, and in particular, they involve the minimization of the supermodular log det error. Because the minimization of a general supermodular function is NP-hard (Feige (1998)), we provide efficient approximation algorithms for their general solution, along with their worst-case performance guarantees. Specifically, we first provide an efficient algorithm for P1 that returns a sensor set that satisfies the estimation guarantee of P1 and has cardinality up to a multiplicative factor from the minimum cardinality sensor sets that meet the same estimation bound. More importantly, this multiplicative factor depends only logarithmically on the problem's P1 parameters. Next, we provide an efficient algorithm for P2 that returns a sensor set of cardinality l ≥ r (l is chosen by the designer) and achieves a near-optimal value for increasing l. Specifically, for l = r, it achieves a worst-case approximation factor of 1 − 1/e.²
² Such algorithms, which involve the minimization of supermodular set functions, are also used in the machine learning (Krause & Guestrin), path planning for information acquisition (Singh et al. (2009); Atanasov et al. (2014)), leader selection (Clark et al. (2012, 2014b,a)), sensor scheduling (Shamaiah et al. (2010); Jawaid & Smith (2015); Zhang et al. (2015)), actuator placement (Olshevsky (2014); Summers et al. (2014); Tzoumas et al. (2015); Tzoumas et al. (July 2015,D); Zhao & Cortés (2015)) and sensor placement in static phenomena (Krause et al. (2008); Jiang et al. (2015)) literature. Their popularity is due to their simple implementation —they are greedy algorithms— and provable worst-case approximation factors, which are the best one can achieve in polynomial time for several classes of functions (Nemhauser & Wolsey (1978); Feige (1998)).
In comparison, the related literature has focused either a) only on the optimization of the log det error for special cases of systems with zero process noise, or b) on heuristic algorithms that provide no worst-case performance guarantees, or c) on static phenomena. In particular, in Joshi & Boyd (2009), the authors minimize the log det error for the case where there is no process noise in the system's dynamics —we assume both process and measurement noise. Moreover, to this end they use convex relaxation techniques that provide no performance guarantees. Furthermore, in Dhingra et al. (2014) and Munz et al. (2014), the authors design an H2-optimal estimation gain with a small number of non-zero columns. To this end, they also use convex relaxation techniques that provide no performance guarantees. In addition, in Belabbas (2015), the author designs an output matrix with a desired norm so as to minimize the minimum mean square error of the corresponding Kalman estimator; nonetheless, the author does not minimize the number of selected sensors. Finally, in Das & Kempe (2008), the authors consider the problem of sensor placement for monitoring static phenomena with only spatial correlation. To this end, they place a small number of sensors so as to minimize a worst-case estimation error of an aggregate function, such as the average. In contrast, we consider dynamic phenomena with both spatial and temporal correlations, and we minimize the log det error of the Kalman filter; as a result, we do not focus on a worst-case estimation error, and we can efficiently estimate any function. Overall, in this paper we optimize the log det error for the general case of dynamic systems and, at the same time, we provide worst-case performance guarantees for the corresponding algorithms.
The remainder of this paper is organized as follows. In Section 2, we introduce our model, our estimation and sensor placement framework, and our sensor placement problems. In Section 3, we provide a series of design and performance limits and characterize the properties of the Kalman estimator with respect to its sensor set; in Section 4, we prove that the log det estimation error of the Kalman filter with respect to the system's initial condition and process noise is a supermodular and non-increasing set function in the choice of the sensor set; and in Section 5, we provide efficient approximation algorithms for selecting a small number of sensors so as to design an optimal Kalman filter with respect to its log det error —the worst-case performance guarantees of these algorithms are provided as well. Finally, in Section 6, we illustrate our analytical findings, and test the efficiency of our algorithms, using simulation results from an integrator chain network and the problem of surface-based monitoring of CO2 sequestration sites studied in Weimer et al. (2008). Section 7 concludes the paper.
2. Problem Formulation
Notation. We denote the set of natural numbers {1, 2, . . .} as N, the set of real numbers as R, and the set {1, 2, . . . , n} as [n], where n ∈ N. Given a set X, |X| is its cardinality. Matrices are represented by capital letters and vectors by lower-case letters. For a matrix A, A⊤ is its transpose and Aij is its element located at the i-th row and j-th column. ‖A‖₂ ≡ √λmax(A⊤A) is its spectral norm, and λmin(A) and λmax(A) are its minimum and maximum eigenvalues, respectively. Moreover, if A is positive semi-definite or positive definite, we write A ⪰ 0 or A ≻ 0, respectively. Furthermore, I is the identity matrix —its dimension is inferred from the context; similarly for the zero matrix 0. Finally, for a random variable x ∈ Rⁿ, E(x) is its expected value, and C(x) ≡ E([x − E(x)][x − E(x)]⊤) is its covariance. The rest of our notation is introduced when needed.
2.1. Model and Estimation Framework
For k ≥ 0, we consider the linear time-variant system
xk+1 = Ak xk + wk,
yk = Ck xk + vk,  (1)
where xk ∈ Rⁿ (n ∈ N) is the state vector, yk ∈ Rᶜ (c ∈ [n]) the output vector, wk the process noise and vk the measurement noise —without loss of generality, the input vector in (1) is assumed zero. The initial condition is x0.
Assumption 1 (For all k ≥ 0, the initial condition, the process noise and the measurement noise are uncorrelated random variables). x0 is a random variable with covariance C(x0) ≻ 0. Moreover, for all k ≥ 0, C(wk) ≻ 0 and C(vk) = σ²I, where σ > 0. Finally, for all k, k′ ≥ 0 such that k ≠ k′, x0, wk and vk, as well as wk, wk′, vk and vk′, are uncorrelated.³
³ This assumption is common in the related literature (Joshi & Boyd (2009)), and it translates to a worst-case scenario for the problem we consider in this paper.
Moreover, for k ≥ 0, consider the vector of measurements ȳk, the vector of process noises w̄k and the vector of measurement noises v̄k, defined as follows: ȳk ≡ (y0⊤, y1⊤, . . . , yk⊤)⊤, w̄k ≡ (w0⊤, w1⊤, . . . , wk⊤)⊤, and v̄k ≡ (v0⊤, v1⊤, . . . , vk⊤)⊤, respectively; the vector ȳk is known, while w̄k and v̄k are not.
Definition 1 (Observation interval and its length). The interval [0, k] ≡ {0, 1, . . . , k} is called the observation interval of (1). Moreover, k + 1 is its length.
Evidently, the length of an observation interval [0, k] equals the number of measurements y0, y1, . . . , yk.
In this paper, given an observation interval [0, k], we consider the minimum mean square linear estimators for xk′, for any k′ ∈ [0, k] (Kailath et al. (2000)). In particular, (1) implies
ȳk = Ok zk−1 + v̄k,  (2)
where Ok is the c(k+1) × n(k+1) matrix [L0⊤C0⊤, L1⊤C1⊤, . . . , Lk⊤Ck⊤]⊤, L0 the n × n(k+1) matrix [I, 0], Li, for i ≥ 1, the n × n(k+1) matrix [Ai−1 · · · A0, Ai−1 · · · A1, . . . , Ai−1, I, 0], and zk−1 ≡ (x0⊤, w̄k−1⊤)⊤. As a result, the minimum mean square linear estimate of zk−1 is
ẑk−1 ≡ E(zk−1) + C(zk−1)Ok⊤ (Ok C(zk−1)Ok⊤ + σ²I)⁻¹ (ȳk − Ok E(zk−1) − E(v̄k));
its error covariance is
Σzk−1 ≡ E((zk−1 − ẑk−1)(zk−1 − ẑk−1)⊤)
      = C(zk−1) − C(zk−1)Ok⊤ (Ok C(zk−1)Ok⊤ + σ²I)⁻¹ Ok C(zk−1),  (3)
and its minimum mean square error is
mmse(zk−1) ≡ E((zk−1 − ẑk−1)⊤(zk−1 − ẑk−1)) = tr(Σzk−1).  (4)
As a result, the corresponding minimum mean square linear estimator of xk′, for any k′ ∈ [0, k], is
x̂k′ = Lk′ ẑk−1  (5)
(since xk′ = Lk′ zk−1), with minimum mean square error
mmse(xk′) ≡ tr(Lk′ Σzk−1 Lk′⊤).  (6)
In particular, the recursive implementation of (5) results in the Kalman filtering algorithm (Bertsekas (2005)).
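To make the batch estimator above concrete, the following is a minimal plain-Python sketch of equations (2)-(4) for a scalar system (n = 1) over the observation interval [0, 1]. The gains a, c and the covariances p0, q, sigma are illustrative assumptions of ours, not values from the paper; the sketch also checks numerically that the covariance form (3) agrees with its Woodbury rearrangement, which the paper uses later in Proposition 1.

```python
# Scalar-system sketch of the batch MMSE estimator of Section 2.1.
# n = 1 state, observation interval [0, 1]; a, c, p0, q, sigma are
# illustrative assumptions, not values taken from the paper.

def mat2_inv(M):
    """Inverse of a 2x2 matrix [[m00, m01], [m10, m11]]."""
    (m00, m01), (m10, m11) = M
    d = m00 * m11 - m01 * m10
    return [[m11 / d, -m01 / d], [-m10 / d, m00 / d]]

def mat2_mul(A, B):
    return [[sum(A[i][t] * B[t][j] for t in range(2)) for j in range(2)]
            for i in range(2)]

def transpose(A):
    return [[A[j][i] for j in range(2)] for i in range(2)]

a, c = 0.9, 1.0    # dynamics x_{k+1} = a x_k + w_k, output y_k = c x_k + v_k
p0, q = 2.0, 1.0   # C(x0) and C(w0)
sigma = 0.5        # measurement-noise standard deviation

# z0 = (x0, w0); L0 = [1, 0], L1 = [a, 1]; O stacks c*L0 and c*L1, per (2).
O = [[c * 1.0, c * 0.0], [c * a, c * 1.0]]
Cz = [[p0, 0.0], [0.0, q]]
OT = transpose(O)

# Equation (3): Sigma_z = Cz - Cz O' (O Cz O' + sigma^2 I)^{-1} O Cz.
G = mat2_mul(mat2_mul(O, Cz), OT)
G = [[G[i][j] + (sigma ** 2 if i == j else 0.0) for j in range(2)] for i in range(2)]
K = mat2_mul(mat2_mul(Cz, OT), mat2_inv(G))
OCz = mat2_mul(O, Cz)
Sigma_3 = [[Cz[i][j] - sum(K[i][t] * OCz[t][j] for t in range(2))
            for j in range(2)] for i in range(2)]

# Woodbury rearrangement: Sigma_z = sigma^2 (O'O + sigma^2 Cz^{-1})^{-1}.
OTO = mat2_mul(OT, O)
Czi = mat2_inv(Cz)
M = [[OTO[i][j] + sigma ** 2 * Czi[i][j] for j in range(2)] for i in range(2)]
Sigma_W = [[sigma ** 2 * e for e in row] for row in mat2_inv(M)]

mmse_z = Sigma_3[0][0] + Sigma_3[1][1]   # equation (4): tr(Sigma_z)
```

The two covariance formulas agree to floating-point precision, and tr(Sigma_z) is strictly smaller than the prior uncertainty p0 + q, as measurements can only reduce the error.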
In this paper, in addition to the minimum mean square error of x̂k′, we also consider, per (5), an estimation error metric that is related to the η-confidence ellipsoid of zk−1 − ẑk−1 (Joshi & Boyd (2009)). Specifically, this is the minimum volume ellipsoid that contains zk−1 − ẑk−1 with probability η, that is, Eǫ ≡ {z : z⊤Σzk−1⁻¹z ≤ ǫ}, where ǫ ≡ Fχ²(n(k+1))⁻¹(η), the inverse cumulative distribution function of a χ-squared random variable with n(k+1) degrees of freedom, evaluated at η (Venkatesh (2012)). Therefore, the volume of Eǫ,
vol(Eǫ) ≡ ((ǫπ)^(n(k+1)/2) / Γ(n(k+1)/2 + 1)) det(Σzk−1^(1/2)),  (7)
where Γ(·) denotes the Gamma function (Venkatesh (2012)), quantifies the estimation error of ẑk−1, and as a result, for any k′ ∈ [0, k], of x̂k′ as well, since per (5) the optimal estimator for zk−1 defines the optimal estimator for xk′.
Henceforth, we consider the logarithm of (7),
log vol(Eǫ) = β + (1/2) log det(Σzk−1);  (8)
β is a constant that depends only on n(k+1) and ǫ, in accordance with (7), and as a result, we refer to log det(Σzk−1) as the log det estimation error of the Kalman filter of (1):
Definition 2 (log det estimation error of the Kalman filter). Given an observation interval [0, k], log det(Σzk−1) is called the log det estimation error of the Kalman filter of (1).
In the following paragraphs, we present our sensor placement framework, which leads to our sensor placement problems.
2.2. Sensor Placement Framework
In this paper, we study, among others, the effect of the selected sensors in (1) on mmse(x0) and mmse(xk). This translates to the following conditions on Ck, for all k ≥ 0, in accordance with the minimal sensor placement literature (Olshevsky (2014)).
Assumption 2 (C is a full row-rank constant zero-one matrix). For all k ≥ 0, Ck = C ∈ R^(c×n), where C is a zero-one constant matrix. Specifically, each row of C has one element equal to one, and each column at most one, such that C has rank c.
In particular, when for some i, Cij is one, the j-th state of xk is measured; otherwise, it is not. Therefore, the number of non-zero elements of C coincides with the number of placed sensors in (1).
Definition 3 (Sensor set and sensor placement). Consider a C per Assumption 2 and define S ≡ {i : i ∈ [n] and Cji = 1, for some j ∈ [c]}; S is called a sensor set or a sensor placement, and each of its elements a sensor.
2.3. Sensor Placement Problems
We introduce three objectives that we use to define the sensor placement problems we consider in this paper.
Objective 1 (Fundamental limits in optimal sensor placement). Given an observation interval [0, k], i ∈ {0, k} and a desired mmse(xi), identify fundamental limits in the design of the sensor set.
As an example of a fundamental limit, we prove that the number of sensors grows linearly with the system's size for fixed estimation error mmse(xi) —this is clearly a major limitation, especially when the system's size is large. This result, as well as the rest of our contributions with respect to Objective 1, is presented in Section 3.
Objective 2 (log det estimation error as a sensor set function). Given an observation interval [0, k], identify properties of log det(Σzk−1) as a sensor set function.
We address this objective in Section 4, where we prove that log det(Σzk−1) is a supermodular and non-increasing set function with respect to the choice of the sensor set —the basic definitions of supermodular set functions are presented in that section as well.
Objective 3 (Algorithms for optimal sensor placement). Given an observation interval [0, k], identify a sensor set S that solves either the minimal sensor placement problem:
minimize_{S⊆[n]} |S|  subject to  log det(Σzk−1) ≤ R;  (P1)
or the cardinality-constrained sensor placement problem for minimum estimation error:
minimize_{S⊆[n]} log det(Σzk−1)  subject to  |S| ≤ r.  (P2)
That is, with P1 we design an estimator that guarantees a specified error and uses a minimal number of sensors, and with P2 an estimator that uses at most r sensors and minimizes log det(Σzk−1). The corresponding algorithms are provided in Section 5.
3. Fundamental Limits in Optimal Sensor Placement
In this section, we present our contributions with respect to Objective 1. In particular, given any finite observation interval, we prove that the minimum mean square error decreases only linearly as the number of sensors increases. That is, adding extra sensors so as to reduce the minimum mean square estimation error of the Kalman filter is ineffective, a fundamental design limit. Similarly, we prove that the number of sensors grows linearly with the system's size for fixed minimum mean square error; this is another fundamental limit, especially for systems where the system's size is large. On the contrary, given a sensor set of fixed cardinality, we prove that the length of the observation interval increases only logarithmically with the system's size for fixed minimum mean square error. Overall, our novel results quantify the trade-off between the number of sensors and that of output measurements so as to achieve a specified value for the minimum mean square error.
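As an aside, the sensor placement framework of Assumption 2 and Definition 3 is straightforward to realize in code. The sketch below is ours, with illustrative n and S: it builds the zero-one selection matrix C from a sensor set and recovers the set back from C.

```python
# Sketch of Assumption 2 / Definition 3: a sensor set S corresponds to a
# zero-one selection matrix C with one 1 per row and at most one 1 per
# column. Pure Python; n and S below are illustrative choices.

def selection_matrix(S, n):
    """Return C (a list of rows) measuring the states in sensor set S."""
    rows = []
    for i in sorted(S):          # one row per placed sensor
        row = [0] * n
        row[i - 1] = 1           # states are indexed 1..n
        rows.append(row)
    return rows

def sensor_set(C):
    """Recover S = {i : C_{ji} = 1 for some row j} from C (Definition 3)."""
    return {i + 1 for row in C for i, e in enumerate(row) if e == 1}

n = 5
S = {2, 4}
C = selection_matrix(S, n)
assert sensor_set(C) == S
# Each row sums to 1 and each column to at most 1, so C has full row rank |S|.
assert all(sum(row) == 1 for row in C)
assert all(sum(col) <= 1 for col in zip(*C))
```

The number of rows of C equals the number of placed sensors, matching the remark after Assumption 2.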
To this end, given i ∈ {0, k}, we first determine a lower and an upper bound for mmse(xi).⁴
Theorem 1 (A lower and upper bound for the estimation error with respect to the number of sensors and the length of the observation interval). Consider a sensor set S, any finite observation interval [0, k], and a non-zero σ. Moreover, let µ ≡ max_{m∈[0,k]} ‖Am‖₂ and assume µ ≠ 1. Finally, denote the maximum diagonal element of C(x0), C(x0)⁻¹, and C(wk′), among all k′ ∈ [0, k], as σ0², σ0⁻², and σw², respectively. Given i ∈ {0, k},
nσ²li / ( |S|(1 − µ^(2(k+1)))/(1 − µ²) + σ²σ0⁻² ) ≤ mmse(xi) ≤ nui,  (9)
where l0 = 1, u0 = σ0², lk = λmin(Lk⊤Lk) and uk = (k + 1)λmax(Lk⊤Lk) max(σ0², σw²).
Proof. We first prove the lower bound in (9): observe first that mmse(x0) ≥ mmse(x0)^{w·=0}, where mmse(x0)^{w·=0} is the minimum mean square error of x0 when the process noise wk in (1) is zero for all k ≥ 0. To express mmse(x0)^{w·=0} in a closed form similar to (15), note that in this case (2) becomes ȳk = Õk x0 + v̄k, where Õk ≡ [C0⊤, Φ1⊤C1⊤, . . . , Φk⊤Ck⊤]⊤ and Φm ≡ Am−1 · · · A0, for m > 0, and Φm ≡ I, for m = 0. Thereby, from Corollary E.3.5 of Bertsekas (2005), the minimum mean square linear estimate of x0, denoted as x̂k0^{w·=0}, has error covariance
Σk0^{w·=0} ≡ E((x0 − x̂k0^{w·=0})(x0 − x̂k0^{w·=0})⊤)
           = C(x0) − C(x0)Õk⊤ (Õk C(x0)Õk⊤ + σ²I)⁻¹ Õk C(x0)  (10)
and minimum mean square error
mmse(x0)^{w·=0} ≡ tr(Σk0^{w·=0})
               = σ² tr((Õk⊤Õk + σ²C(x0)⁻¹)⁻¹)  (11)
               ≡ σ² tr((Õk + σ²C(x0)⁻¹)⁻¹),  (12)
where we deduce (11) from (10) using the Woodbury matrix identity (Corollary 2.8.8 of Bernstein (2009)), and (12) from (11) using the notation Õk ≡ Õk⊤Õk. In particular, Õk is the observability matrix Õk = Σ_{m=0}^{k} Φm⊤Cm⊤CmΦm of (1) (Chen (1998)).
Hence, mmse(x0) ≥ σ² tr((Õk + σ²C(x0)⁻¹)⁻¹), and since the arithmetic mean of a finite set of positive numbers is at least as large as their harmonic mean, using (12),
mmse(x0) ≥ n²σ² / (tr(Õk) + nσ²σ0⁻²).
Now, for i ∈ [n], let I^(i) be the n × n matrix where Iii is one, while Ijk is zero, for all (j, k) ≠ (i, i). Then, tr(Õk) = tr(Σ_{m=0}^{k} Φm⊤C⊤CΦm) = Σ_{i=1}^{n} si tr(Σ_{m=0}^{k} Φm⊤I^(i)Φm); now, tr(Σ_{m=0}^{k} Φm⊤I^(i)Φm) ≤ n λmax(Σ_{m=0}^{k} Φm⊤I^(i)Φm) ≤ n ‖Σ_{m=0}^{k} Φm⊤I^(i)Φm‖₂ ≤ n Σ_{m=0}^{k} ‖Φm‖₂², because ‖I^(i)‖₂ = 1, and from the definition of Φm and Proposition 9.6.1 of Bernstein (2009), Σ_{m=0}^{k} ‖Φm‖₂² ≤ (1 − µ^(2(k+1)))/(1 − µ²). Therefore, tr(Õk) ≤ Σ_{i=1}^{n} si n(1 − µ^(2(k+1)))/(1 − µ²) = n|S|(1 − µ^(2(k+1)))/(1 − µ²), and as a result, the lower bound in (9) for mmse(x0) follows.
Next, we prove the upper bound in (9), using (18), which is proved in the proof of Proposition 1, and (6) for k′ = 0: Ok + σ²C(zk−1)⁻¹ ⪰ σ²C(zk−1)⁻¹, and as a result, from Proposition 8.5.5 of Bernstein (2009), (Ok + σ²C(zk−1)⁻¹)⁻¹ ⪯ σ⁻²C(zk−1). Hence, mmse(x0) ≤ tr(L0 C(zk−1)L0⊤) ≤ nσ0².
Finally, to derive the lower and upper bounds for mmse(xk), observe that mmse(x0) ≤ mmse(zk−1) and mmse(zk−1) ≤ n(k + 1) max(σ0², σw²) —the proof follows using similar steps as above. Then, from Theorem 1 of Fang et al. (1994),
λmin(Lk⊤Lk) mmse(zk−1) ≤ mmse(xk) ≤ λmax(Lk⊤Lk) mmse(zk−1).
The combination of these inequalities completes the proof.
The upper bound corresponds to the case where no sensors have been placed (C = 0). On the other hand, the lower bound corresponds to the case where |S| sensors have been placed. As expected, the lower bound in (9) decreases as the number of sensors or the length of the observation interval increases; the increase of either should push the estimation error downwards. Overall, this lower bound quantifies fundamental performance limits in the design of the Kalman estimator: first, this bound decreases only inversely proportionally to the number of sensors. Therefore, the estimation error of the optimal linear estimator decreases only linearly as the number of sensors increases. That is, adding extra sensors so as to reduce the minimum mean square estimation error of the Kalman filter is ineffective, a fundamental design limit. Second, this bound increases linearly with the system's size. This is another fundamental limit, especially for systems where the system's size is large. Finally, for fixed and non-zero λmin(Lk⊤Lk), these scaling results extend to mmse(xk) as well, for any finite k.
⁴ The extension of Theorem 1 to the case µ = 1 is straightforward, yet notationally involved; as a result, we omit it.
Corollary 1 (Trade-off among the number of sensors, estimation error and the length of the observation interval). Consider any finite observation interval [0, k], a non-zero σ, and for i ∈ {0, k}, that the desired value for mmse(xi) is α (α > 0). Moreover, let µ ≡ max_{m∈[0,k]} ‖Am‖₂ and assume µ ≠ 1. Finally, denote the maximum diagonal element of C(x0)⁻¹ as σ0⁻². Any
sensor set S that achieves mmse(xi) = α satisfies:
|S| ≥ (nσ²li/α − σ²σ0⁻²) (1 − µ²)/(1 − µ^(2(k+1))),  (13)
where l0 = 1 and lk = λmin(Lk⊤Lk).
The above corollary shows that the number of sensors increases as the minimum mean square error or the number of output measurements decreases. More importantly, it shows that the number of sensors increases linearly with the system's size for fixed minimum mean square error. This is again a fundamental design limit, especially when the system's size is large.⁵
⁵ For fixed and non-zero λmin(Lk⊤Lk), the comments of this paragraph extend to mmse(xk) as well, for any finite k —on the other hand, if λmin(Lk⊤Lk) varies with the system's size, since λmin(Lk⊤Lk) ≤ 1, the number of sensors can increase sub-linearly with the system's size for fixed mmse(xk).
Corollary 2 (Trade-off among the length of the observation interval, estimation error and the number of sensors). Consider any finite observation interval [0, k], a sensor set S, a non-zero σ, and for i ∈ {0, k}, that the desired value for mmse(xi) is α (α > 0). Moreover, let µ ≡ max_{m∈[0,k]} ‖Am‖₂ and assume µ ≠ 1. Finally, denote the maximum diagonal element of C(x0)⁻¹ as σ0⁻². Any observation interval [0, k] that achieves mmse(xi) = α satisfies:
k ≥ log(1 − (nσ²li/α − σ²σ0⁻²)(1 − µ²)/|S|) / (2 log µ) − 1,  (14)
where l0 = 1 and lk = λmin(Lk⊤Lk).
As expected, the number of output measurements increases as the minimum mean square error or the number of sensors decreases. Moreover, in contrast to our comments on Corollary 1 and the number of sensors, Corollary 2 indicates that the number of output measurements increases only logarithmically with the system's size for fixed error and number of sensors. On the other hand, it also decreases logarithmically with the number of sensors, and this is our final design limit result.
4. Submodularity in Optimal Sensor Placement
In this section, we present our contributions with respect to Objective 2. In particular, we first derive a closed formula for log det(Σzk−1) and then prove that it is a supermodular and non-increasing set function in the choice of the sensor set. This implies that greedy algorithms for the solution of P1 and P2 return efficient approximate solutions (Feige (1998); Nemhauser & Wolsey (1978)). In Section 5, we use this supermodularity result, and known results from the literature on submodular function maximization (Nemhauser & Wolsey (1988)), to provide efficient algorithms for the solution of P1 and P2.
In the following paragraphs, we prove that the log det error of the Kalman filter is a supermodular and non-increasing set function in the choice of the sensor set. Then, we use this result to provide efficient algorithms for the solution of P1 and P2.
We now give the definition of a supermodular set function, as well as that of a non-decreasing set function —we follow Wolsey (1982) for this material. Denote as 2^[n] the power set of [n].
Definition 4 (Submodularity and supermodularity). A function h : 2^[n] ↦ R is submodular if for any sets S and S′, with S ⊆ S′ ⊆ [n], and any a ∉ S′,
h(S ∪ {a}) − h(S) ≥ h(S′ ∪ {a}) − h(S′).
A function h : 2^[n] ↦ R is supermodular if (−h) is submodular.
An alternative definition of a submodular function is based on the notion of non-increasing set functions.
Definition 5 (Non-increasing and non-decreasing set function). A function h : 2^[n] ↦ R is a non-increasing set function if for any S ⊆ S′ ⊆ [n], h(S) ≥ h(S′). Moreover, h is a non-decreasing set function if (−h) is a non-increasing set function.
Therefore, a function h : 2^[n] ↦ R is submodular if, for any a ∈ [n], the function ha : 2^([n]\{a}) ↦ R, defined as ha(S) ≡ h(S ∪ {a}) − h(S), is a non-increasing set function. This property is also called the diminishing returns property.
The first major result of this section follows, where we let Ok ≡ Ok⊤Ok, given an observation interval [0, k].
Proposition 1 (Closed formula for the log det error as a sensor set function). Given an observation interval [0, k] and a non-zero σ, irrespective of Assumption 2,
log det(Σzk−1) = 2n(k + 1) log σ − log det(Ok + σ²C(zk−1)⁻¹).  (15)
Proof. From (3),
Σzk−1 = C(zk−1) − C(zk−1)Ok⊤ (Ok C(zk−1)Ok⊤ + σ²I)⁻¹ Ok C(zk−1)  (16)
      = (σ⁻²Ok⊤Ok + C(zk−1)⁻¹)⁻¹  (17)
      = σ²(Ok + σ²C(zk−1)⁻¹)⁻¹,  (18)
where we deduce (17) from (16) using the Woodbury matrix identity (Corollary 2.8.8 of Bernstein (2009)), and (18) from (17) using the fact that Ok = Ok⊤Ok.
Therefore, log det(Σzk−1) depends on the sensor set through Ok. Now, the main result of this section follows, where we make explicit the dependence of Ok on the sensor set S.
Theorem 2 (The log det error is a supermodular and non-increasing set function with respect to the choice of the sensor set). Given an observation interval [0, k], the function
log det(Σzk−1, S) = 2n(k + 1) log σ − log det(Ok,S + σ²C(zk−1)⁻¹),  S ∈ 2^[n] ↦ R,
is a supermodular and non-increasing set function with respect to the choice of the sensor set S.
In particular, the log det error exhibits the diminishing returns property: its rate of reduction with respect to newly placed sensors decreases as the cardinality of the already placed sensors increases. On the one hand, this property implies another fundamental design limit, in accordance with that of Theorem 1: adding new sensors, after a first few, becomes ineffective for the reduction of the estimation error. On the other hand, it also implies that a greedy approach for solving P1 and P2 is effective (Nemhauser & Wolsey (1978); Feige (1998)). Thereby, we next use results from the literature on submodular function maximization (Nemhauser & Wolsey (1988)) and provide efficient algorithms for P1 and P2.
Proof. For i ∈ [n], let I^(i) be the n × n matrix where Iii is one, while Ijk is zero, for all (j, k) ≠ (i, i). Also, let C̄ ≡ σ²C(zk−1)⁻¹. Now, to prove that log det(Σzk−1, S) is non-increasing, observe that
Ok,S = Σ_{m=1}^{n} sm Σ_{j=0}^{k} Lj⊤I^(m)Lj = Σ_{m=1}^{n} sm Ok,{m}.  (19)
Then, for any S1 ⊆ S2 ⊆ [n], (19) and Ok,{1}, Ok,{2}, . . . , Ok,{n} ⪰ 0 imply Ok,S1 ⪯ Ok,S2, and as a result, Ok,S1 + C̄ ⪯ Ok,S2 + C̄. Therefore, from Theorem 8.4.9 of Bernstein (2009), log det(Ok,S1 + C̄) ≤ log det(Ok,S2 + C̄), and as a result, log det(Σzk−1, S) is non-increasing.
Next, to prove that log det(Σzk−1, S) is a supermodular set function, it suffices to prove that log det(Ok,S + C̄) is a submodular one. In particular, recall that a function h : 2^[n] ↦ R is submodular if and only if, for any a ∈ [n], the function ha : 2^([n]\{a}) ↦ R, where ha(S) ≡ h(S ∪ {a}) − h(S), is a non-increasing set function. Therefore, to prove that h(S) = log det(Ok,S + C̄) is submodular, we may prove that ha(S) is a non-increasing set function. To this end, we follow the proof of Theorem 6 in Summers et al. (2014): first, observe that
5. Algorithms for Optimal Sensor Placement
In this section, we present our contributions with respect to
Objective 3: P1 and P2 are combinatorial, and in Section 4 we
proved that they involve the minimization of the supermodular set function log det error. In particular, because the minimization of a general supermodular function is NP-hard (Feige
(1998)), in this section we provide efficient approximation algorithms for the general solution of P1 , and P2 , along with their
worst-case performance guarantees.
Specifically, we first provide an efficient algorithm for P1
that returns a sensor set that satisfies the estimation bound of
P1 and has cardinality up to a multiplicative factor from the
minimum cardinality sensor sets that meet the same estimation
bound. More importantly, this multiplicative factor depends
only logarithmically on the problem’s P1 parameters. Next, we
provide an efficient algorithm for P2 that returns a sensor set of
cardinality l ≥ r (l is chosen by the designer) and achieves a
near optimal value for increasing l.
To this end, we first present a fact from the supermodular
functions minimization literature that we use so to construct an
approximation algorithm for P1 —we follow Wolsey (1982)
for this material. In particular, consider the following problem,
which is of similar structure to P1 , where h : 2[n] 7→ R is a
supermodular and non-increasing set function:
ha (S) = log det(Ok,S∪{a} + C̄) − log det(Ok,S + C̄)
= log det(Ok,S + Ok,{a} + C̄) − log det(Ok,S + C̄).
Now, for any S1 ⊆ S2 ⊆ [n] and t ∈ [0, 1], define O(t) ≡ C̄+
Ok,S1 +t(Ok,S2 −Ok,S1 ) and h̄(t) ≡ log det O(t) + Ok,{a} −
log det (O(t)) ; it is h̄(0) = ha (S1 ) and h̄(1) = ha (S2 ). More
over, since d log det(O(t)))/dt = tr O(t)−1 dO(t)/dt (cf.
equation (43) in Petersen & Pedersen (2012)),
i
h
−1
dh̄(t)
= tr
O(t) + Ok,{a}
− O(t)−1 Ok,21 ,
dt
where Ok,21 ≡ Ok,S2 − Ok,S1 . From Proposition 8.5.5 of
minimize |S|
−1
S⊆[n]
(P)
Bernstein (2009), O(t) + Ok,{a}
O(t)−1 , because O(t) ≻
subject to h(S) ≤ R.
0 for all t ∈ [0, 1], since C̄ ≻ 0, Ok,S1 0, and Ok,S2 Ok,S1 .
Thereby, from Corollary 8.3.6 of Bernstein (2009),
The following greedy algorithm has been proposed for its
approximate solution, for which, the subsequent fact is true.
−1
O(t) + Ok,{a}
− O(t)−1 Ok,21 0.
Algorithm 1 Approximation Algorithm for P.
As a result, dh̄(t)/dt ≤ 0, and
Input: h, R.
Output: Approximate solution for P.
Z 1
dh̄(t)
S←∅
ha (S2 ) = h̄(1) = h̄(0) +
dt ≤ h̄(0) = ha (S1 ).
dt
while h(S) > R do
0
ai ← a′ ∈ arg maxa∈[n]\S (h(S) − h(S ∪ {a}))
Therefore, ha (S) is a non-increasing set function, and the proof
S ← S ∪ {ai }
is complete.
end while
The above theorem states that for any finite observation interval, the log det error of the Kalman filter is a supermodular
and non-increasing set function with respect to the choice of
the sensor set. Hence, it exhibits the diminishing returns prop-
Fact 1. Denote as S ⋆ a solution to P and as S0 , S1 , . . . the
sequence of sets picked by Algorithm 1. Moreover, let l be the
7
smallest index such that h(Sl ) ≤ R. Then,
at most n(k + 1) matrices must be computed so that the
arg max log det Σzk−1 , S − log det Σzk−1 , S ∪ {a}
l
h([n]) − h(∅)
≤ 1 + log
.
|S ⋆ |
h([n]) − h(Sl−1 )
a∈[n]\S
can be computed. Furthermore, O(n) time is required to find a
maximum element between n available. Therefore, the computational complexity of Algorithm 2 is O(n2 (nk)3 ).
For several classes of submodular functions, this is the best
approximation factor one can achieve in polynomial time (Feige
(1998)). Therefore, we use this result to provide the approximation Algorithm 2 for P
1 , where we make explicit the dependence of log det Σzk−1 on the selected sensor set S. Moreover, its performance is quantified with Theorem 3.
Therefore, Algorithm 2 returns a sensor set that meets the
estimation bound of P1 . Moreover, the cardinality of this set
is up to a multiplicative factor of Fi from the minimum cardinality sensor sets that meet the same estimation bound —that
is, Fi is a worst-case approximation guarantee for Algorithm 2.
Additionally, Fi depends only logarithmically on the problem’s
P1 parameters. Finally, the dependence of Fi on n, R and
2
) is expected from a design perspective: increasing
max(σ02 , σw
the network size n, requesting a better estimation guarantee by
decreasing R, or incurring a noise of greater variance, should
all push the cardinality of the selected sensor set upwards.
From a computational perspective, the matrix inversion is
the only intensive procedure of Algorithm 2 —and it is necessary for computing the minimum mean square error of x̂i . In
particular, it requires O((nk)3 ) time if we use the Gauss-Jordan
elimination decomposition, since Ok in (15) is an n(k + 1) ×
n(k +1) matrix. On the other hand, we can speed up this procedure using the Coppersmith-Winograd algorithm (Coppersmith & Winogr
(1987)), which requires O(n2.376 ) time for n × n matrices. Alternatively, we can use numerical methods, which efficiently
compute an approximate inverse of a matrix even if its size
is of several thousands (Reusken (2001)). Moreover, we can
speed up Algorithm 2 using the method proposed in Minoux
(1978), which avoids the computation of log det(Σzk−1 , S) −
log det(Σzk−1 , S ∪ {a}) for unnecessary choices of a towards
the computation of the
arg max log det Σzk−1 , S − log det Σzk−1 , S ∪ {a} .
Algorithm 2 Approximation Algorithm for P1 .
For h(S) = log det Σzk−1 , S , where S ⊆ [n], Algorithm 2
is the same as Algorithm 1.
Theorem 3 (A Submodular Set Coverage Optimization for P1 ).
Denote a solution to P1 as S ⋆ and the selected set by Algorithm 2 as S. Moreover, denote the maximum diagonal element
2
,
of C(x0 ) and C(wk′ ), among all k ′ ∈ [0, k], as σ02 and σw
respectively. Then,
log det Σzk−1 , S ≤ R,
(20)
log det Σzk−1 , ∅ − log det Σzk−1 , [n]
|S|
≤ 1 + log
|S ⋆ |
R − log det Σzk−1 , [n]
≡ Fi ,
(21)
2
where log det Σzk−1 , ∅ ≤ n(k+1) log max(σ02 , σw
). Finally,
the computational complexity of Algorithm 2 is O(n2 (nk)3 ).
Proof. Let S0 , S1 , . . . be the sequence of sets selected by Algo
rithm 2 and l the smallest index such that log det Σzk−1 , Sl ≤
R. Therefore, Sl is the set that Algorithm 2 returns, and this
proves (20). Moreover, from Fact 1,
log det Σzk−1 , ∅ − log det Σzk−1 , [n]
l
.
≤ 1 + log
|S ⋆ |
log det Σzk−1 , Sl−1 − log det Σzk−1 , [n]
Now, l is the first time that
log det Σzk−1 , Sl ≤ R, and a
result log det Σzk−1 , Sl−1 > R. This implies (21).
Furthermore, log det Σzk−1 , ∅ = log det Czk−1 , and so
from the geometric-arithmetic mean inequality,
a∈[n]\S
Next, we develop our approximation algorithm for P2 . To
this end, we first present a relevant fact from the supermodular
functions minimization literature —we follow Wolsey (1982)
for this material. In particular, consider the following problem,
which is of similar structure to P2 , where h : 2[n] 7→ R is a
supermodular, non-increasing and non-positive set function:
minimize
tr(Czk−1 )
log det Czk−1 ≤ n(k + 1) log
n(k + 1)
2
n(k + 1) max(σ02 , σw
)
≤ n(k + 1) log
n(k + 1)
S⊆[n]
subject to
h(S)
(P ′ )
|S| ≤ r.
Algorithm 3 has been proposed for its approximate solution,
where l ≥ r, for which, the subsequent fact is true.
2
= n(k + 1) log max(σ02 , σw
).
Fact 2. Denote as S ⋆ a solution to P ′ and as S0 , S1 , . . . , Sl the
sequence of sets picked by Algorithm 3. Then, for all l ≥ r,
h(Sl ) ≤ 1 − e−l/r h(S ⋆ ).
Finally, with respect to the computational complexity of Algorithm 2, note that the while loop is repeated for at most n
times. Moreover, the complexity to compute the determinant of
an n(k + 1) × n(k + 1) matrix, using Gauss-Jordan elimination decomposition, is O((nk)3 ) (this is also the complexity to
multiply two such matrices). Additionally, the determinant of
In particular, for l = r, h(Sl ) ≤ (1 − 1/e) h(S ⋆ ).
8
Thus, Algorithm 3 constructs a solution of cardinality l instead of r and achieves an approximation factor 1 − e−l/r instead of 1 − 1/e. For example, for l = r, 1 − 1/e ∼
= .63, while
for l = 5r, 1 − e−l/r ∼
= .99; that is, for l = 5r, Algorithm 3
returns an approximate solution that although violates the cardinality constraint r, it achieves a value for h that is near to the
optimal one.
Moreover, for several classes of submodular functions, this
is the best approximation factor one can achieve in polynomial
time (Nemhauser & Wolsey (1978)). Therefore, we use this result to provide the approximation Algorithm 4 forP2 , where
we make explicit the dependence of log det Σzk−1 on the selected sensor set S. Theorem 4 quantifies its performance.
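The greedy scheme of Algorithm 1 is straightforward to sketch. In the snippet below, the objective h is a toy stand-in for the log det error: the matrices `O[m]` are random PSD terms playing the role of the per-sensor summands Ok,{m} of (19), and `C_bar` plays the role of σ²C(zk−1)⁻¹; none of this reproduces a specific system from the paper.

```python
import numpy as np

def greedy_min_card(h, n, R):
    """Greedy scheme of Algorithm 1: grow S until h(S) <= R.

    h maps a frozenset S of {0, ..., n-1} to a real number; it is assumed
    supermodular and non-increasing, as guaranteed by Theorem 2.
    """
    S = frozenset()
    while h(S) > R:
        # pick the sensor with the largest marginal reduction of h
        a_best = max(set(range(n)) - S, key=lambda a: h(S) - h(S | {a}))
        S = S | {a_best}
    return sorted(S)

# Toy stand-in objective: h(S) = -log det(O_S + C_bar), with O_S the sum of
# per-sensor PSD terms as in (19); the O_m here are random PSD matrices,
# not the observability terms of any particular system.
rng = np.random.default_rng(0)
n, d = 6, 4
O = [M @ M.T for M in (rng.standard_normal((d, d)) for _ in range(n))]
C_bar = np.eye(d)  # plays the role of sigma^2 C(z_{k-1})^{-1}

def h(S):
    return -np.linalg.slogdet(sum((O[m] for m in S), start=C_bar))[1]

R = h(frozenset(range(n))) + 1.0  # a feasible error bound
S = greedy_min_card(h, n, R)
print(S, h(frozenset(S)))
```

Because h is non-increasing and R is chosen feasible, the while loop always terminates, at worst with the full sensor set.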
Figure 1: A 5-node integrator chain.
6. Examples and Discussion
6.1. Integrator Chain Network
We first illustrate the efficiency of Algorithms 2 and 4 using
the integrator chain in Fig. 1, where for all i ∈ [5], Aii is −1,
Ai+1,i is 1, and the rest of the elements of A are 0.
We run Algorithm 2 for k ← 5, C(x0), C(wk′), C(vk′) ← I, for all k′ ∈ [0, k], and R ← log det(Σ(z4, {2, 4})). The algorithm returned the sensor set {3, 5}. Specifically, {3, 5} is the best such set, as follows by comparing the values of log det(Σ(z4, ·)) for all sets of cardinality at most two using MATLAB: for all i ∈ [5], log det(Σ(z4, {i})) > R, while for all i, j ∈ [5] such that {i, j} ≠ {3, 5}, log det(Σ(z4, {i, j})) > log det(Σ(z4, {3, 5})). Therefore, no singleton set satisfies the bound R, while {3, 5} not only satisfies it, it also achieves the smallest log det error among all other sets of cardinality two. Hence, {3, 5} is the optimal minimal sensor set that achieves the error bound R. Similarly, Algorithm 2 returned the optimal minimal sensor set for every other value of R in the feasible region of P1, [log det(Σ(z4, [5])), log det(Σ(z4, ∅))].
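As a rough numerical companion to this experiment, the chain and a sensor-dependent log det error can be set up in a few lines. This is a sketch under assumptions: we score a sensor set by −log det(G_S + I), where G_S is the observability-style Gramian over [0, k] — a stand-in for the paper's Σ(z4, ·) — so it does not reproduce the exact values reported above, only the qualitative behavior (monotonicity and diminishing returns).

```python
import numpy as np

# 5-node integrator chain of Fig. 1 (our reading: A_ii = -1, A_{i+1,i} = 1).
n, k = 5, 5
A = -np.eye(n) + np.diag(np.ones(n - 1), -1)

def gramian(S):
    """Observability-style Gramian over [0, k] for sensor set S.

    Assumption: this stands in for O_{k,S}; we do not reproduce the
    paper's exact Sigma(z4, S).
    """
    G = np.zeros((n, n))
    Aj = np.eye(n)
    for _ in range(k + 1):
        for m in S:
            # row m of A^j is e_m^T A^j, so this adds (A^j)^T e_m e_m^T A^j
            G += np.outer(Aj[m], Aj[m])
        Aj = A @ Aj
    return G

def logdet_err(S):
    return -np.linalg.slogdet(gramian(S) + np.eye(n))[1]

# Error along a nested chain of sensor sets {}, {0}, {0,1}, ...
e = [logdet_err(set(range(i))) for i in range(n + 1)]
print(e)
```

The list `e` is non-increasing (more sensors never hurt), and marginal reductions shrink as the base set grows, in line with Theorem 2.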
We also run Algorithm 4 for k ← 5, C(x0), C(wk′), C(vk′) ← I, for all k′ ∈ [0, k], and r equal to 1, 2, . . . , 5, respectively. For all values of r, the chosen set coincided with the set of the same size that also minimizes log det(Σ(z4, ·)). Thereby, we again observe optimal performance from our algorithms. Finally, by increasing r from 0 to 5, the corresponding minimum value of log det(Σ(z4, ·)) decreases only linearly from 0 to −31. This is in agreement with Theorem 1, since for any M ∈ R^{m×m}, log det(M) ≤ tr(M) − m (Lemma 6, Appendix D in Weimer (2010)), and as a result, for M equal to Σ(zk−1),

log det Σ(zk−1) ≤ mmse(zk−1) − n(k + 1),    (23)

while, for any k ≥ 0, mmse(x0) ≤ mmse(zk−1).

Algorithm 4 Approximation Algorithm for P2.
For h(S) = log det Σ(zk−1, S), where S ⊆ [n], Algorithm 4 is the same as Algorithm 3.

Theorem 4 (A Submodular Set Coverage Optimization for P2). Denote a solution to P2 as S⋆ and the sequence of sets picked by Algorithm 4 as S0, S1, . . . , Sl. Moreover, denote the maximum diagonal element of C(x0) and C(wk′), among all k′ ∈ [0, k], as σ0² and σw², respectively. Then, for all l ≥ r,

log det Σ(zk−1, Sl) ≤ (1 − e^(−l/r)) log det Σ(zk−1, S⋆) + e^(−l/r) log det Σ(zk−1, ∅),    (22)

where log det Σ(zk−1, ∅) ≤ n(k + 1) log max(σ0², σw²). Finally, the computational complexity of Algorithm 4 is O(n²(nk)³).

Proof. log det Σ(zk−1, S) − log det Σ(zk−1, ∅) is a supermodular, non-increasing and non-positive set function. Thus, from Fact 2 we derive (22). That log det Σ(zk−1, ∅) ≤ n(k + 1) log max(σ0², σw²), as well as the computational complexity of Algorithm 4, follows as in the proof of Theorem 3.

Algorithm 4 returns a sensor set of cardinality l ≥ r, and achieves a near-optimal value for increasing l. Moreover, the dependence of its approximation level on n, r and max(σ0², σw²) is expected from a design perspective: increasing the network size n, requesting a smaller sensor set by decreasing r, or incurring a noise of greater variance should all push the quality of the approximation level downwards.

Finally, from a computational perspective, our comments on Algorithm 2 apply here as well.
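Algorithm 3's greedy scheme, together with the guarantee of Fact 2, can be checked numerically on a toy instance. As before, h(S) = −log det(O_S + I) with random PSD terms is an assumed stand-in for the (supermodular, non-increasing, non-positive) log det error; `h_star` is the brute-force optimum over all sets of cardinality r.

```python
import numpy as np
from itertools import combinations

def greedy_card_constrained(h, n, l):
    """Greedy scheme of Algorithm 3: pick l sensors one at a time."""
    S = frozenset()
    for _ in range(min(l, n)):
        a_best = max(set(range(n)) - S, key=lambda a: h(S) - h(S | {a}))
        S = S | {a_best}
    return S

# Toy non-positive, non-increasing, supermodular objective, as Fact 2 requires:
# h(S) = -log det(O_S + I), with random PSD O_m standing in for O_{k,{m}}.
rng = np.random.default_rng(1)
n, d, r = 8, 3, 2
O = [M @ M.T for M in (rng.standard_normal((d, d)) for _ in range(n))]

def h(S):
    return -np.linalg.slogdet(sum((O[m] for m in S), start=np.eye(d)))[1]

# Exact optimum over |S| <= r (h is non-increasing, so size exactly r suffices):
h_star = min(h(frozenset(c)) for c in combinations(range(n), r))
S_r = greedy_card_constrained(h, n, r)
print(h(S_r), h_star)
```

For l = r the greedy value should satisfy the (1 − 1/e) bound of Fact 2, and letting l grow beyond r can only improve it.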
6.2. CO2 Sequestration Sites

We now illustrate the efficiency of Algorithms 2 and 4 using the problem of surface-based monitoring of CO2 sequestration sites introduced in Weimer et al. (2008), for an instance that includes 81 possible sensor placement locations. Moreover, we exemplify the fundamental design limit presented in Theorem 1, that the estimation error of the optimal linear estimator decreases only linearly as the number of sensors increases.

Algorithm 3 Approximation Algorithm for P′.
Input: h, l.
Output: Approximate solution for P′.
S ← ∅, i ← 0
while i < l do
    ai ← a′ ∈ arg max_{a ∈ [n]\S} (h(S) − h(S ∪ {a}))
    S ← S ∪ {ai}, i ← i + 1
end while

CO2 sequestration sites are used to reduce the emissions of CO2 from power generation plants that burn fossil fuels. In particular, the problem of monitoring CO2 sequestration sites is important due to potential leaks, which we need to identify using a set of sensors. Specifically, since these sites cover large areas, power and communication constraints necessitate a minimal sensor placement for monitoring these leaks, as we undertake next: following Weimer et al. (2008), we consider a) that the sequestration sites form a 9 × 9 grid (81 possible sensor locations), and b) the onset of constant unknown leaks. Then, for all k ≥ 0, the CO2 concentration between the sequestration sites is described with the linear time-variant system

xk+1 = Ak xk,
yk = [C, 0] xk + vk,    (24)

where xk ≡ (dk⊤, lk⊤)⊤, dk ∈ R^81 is the vector of the CO2 concentrations in the grid, and lk ∈ R^81 is the vector of the corresponding leak rates; Ak describes the dependence of the CO2 concentrations among the adjacent sites in the grid, and with relation to the occurring leaks —it is constructed as in Appendix C of Weimer (2010); finally, C ∈ R^{81×81} is as in Assumption 2. Hence, we run Algorithms 2 and 4 so as to efficiently estimate x0 of (24) using a small number of sensors.

Figure 2: Outputs of Algorithms 2 and 4 for the CO2 sequestration sites application of Section 6.2. In the left plot (achieved number of sensors versus error bound R), the output of Algorithm 2 is depicted for k ← 100, C(x̄0), C(vk′) ← I, for all k′ ∈ [0, k], and R that ranged from log det(Σ(z99, [81])) ≅ −62 to log det(Σ(z99, ∅)) = 0 with step size ten. In the right plot (achieved log det error versus number of sensors r), the output of Algorithm 4 is depicted for k ← 100, C(x̄0), C(vk′) ← I, for all k′ ∈ [0, k], as well, and r that ranged from 0 to 81 with step size ten.
In particular, we run Algorithm 2 for k ← 100, C(x0), C(vk′) ← I, for all k′ ∈ [0, k], and R that ranged from log det(Σ(z99, [81])) ≅ −62 to log det(Σ(z99, ∅)) = 0 with step size ten —since in (24) the process noise is zero, z99 = x0. The corresponding number of sensors that Algorithm 2 achieved with respect to R is shown in the left plot of Fig. 2: as R increases, the number of sensors decreases, as one would expect when the estimation error bound of P1 is relaxed.
For the same values of k, C(x0) and C(vk′), for all k′ ∈ [0, k], we also ran Algorithm 4 for r ranging from 0 to 81 with step size ten. The corresponding achieved values for Problem P2 with respect to r are found in the right plot of Fig. 2: as the number of available sensors r increases, the minimum achieved value decreases, as expected by the monotonicity and supermodularity of log det(Σ(z99, ·)). In the same plot, we compared these values with the achieved minima over a random sample of 80,000 sensor sets for the various r (10,000 distinct sets for each r, where r ranged from 0 to 81 with step size ten): the achieved minimum random values were larger than or at most equal to those of Algorithm 4. Therefore, we observe effective performance from Algorithm 4.
Now, to compare the outputs of Algorithms 2 and 4: for error bounds R larger than −20, Algorithm 2 returns a sensor set that solves P1, yet does not minimize log det(Σ(z99, ·)); notwithstanding, the difference in the achieved value with respect to the output of Algorithm 4 is small. Moreover, for R less than −20, Algorithm 2 returns a sensor set that not only satisfies this bound but also minimizes log det(Σ(z99, ·)). Overall, both Algorithms 2 and 4 outperform the theoretical guarantees of Theorems 3 and 4, respectively.
Finally, as in the example of Section 6.1, the minimum value of log det(Σ(z99, ·)) —right plot of Fig. 2— decreases only linearly, from 0 to −62, as the number of sensors r increases from 0 to 81. This is in agreement with Theorem 1, in relation to (23) and the fact that z99 = x0. Specifically, both the example of this section and that of Section 6.1 exemplify the fundamental design limit presented in Theorem 1: the estimation error of the optimal linear estimator decreases only linearly as the number of sensors increases. That is, adding extra sensors to reduce the minimum mean square estimation error of the Kalman filter is ineffective, a fundamental design limit.
7. Concluding Remarks
We considered a linear time-variant system and studied the properties of its Kalman estimator given an observation interval and a sensor set. Our contributions were threefold. First, in Section 3 we presented several design limits. For example, we proved that the minimum mean square error decreases only linearly as the number of sensors increases; that is, adding extra sensors to reduce the minimum mean square estimation error of the Kalman filter is ineffective, a fundamental design limit. Similarly, we proved that the number of sensors grows linearly with the system's size for fixed minimum mean square error; this is another fundamental limit, especially for large-scale systems. Second, in Section 4 we proved that the log det estimation error of the system's initial condition and process noise is a supermodular and non-increasing set function with respect to the choice of the sensor set. Third, in Section 5, we used this result to provide efficient approximation algorithms for the solution of P1 and P2, along with their worst-case performance guarantees. For example, for P1 we provided an efficient algorithm that returns a sensor set whose cardinality is up to a multiplicative factor from that of the corresponding optimal solutions; more importantly, this factor depends only logarithmically on the problem's parameters. For P2, we provided an efficient algorithm that returns a sensor set of cardinality l ≥ r and achieves a near-optimal value for increasing l. Finally, in Section 6, we illustrated our analytical findings and tested the efficiency of our algorithms using simulation results from an integrator chain network and the problem of surface-based monitoring of CO2 sequestration sites studied in Weimer et al. (2008) —another application that fits the context of minimal sensor placement for effective monitoring is thermal control of indoor environments, such as large offices and buildings (Oldewurtel (2011); Sturzenegger (2014)). Our future work focuses on extending the results of this paper to the problem of sensor scheduling.
Acknowledgements
This work was supported in part by TerraSwarm, one of six centers of STARnet, a Semiconductor Research Corporation program sponsored by MARCO and DARPA, in part by the AFOSR Complex Networks Program, and in part by AFOSR MURI CHASE.

References
Atanasov, N., Le Ny, J., Daniilidis, K., & Pappas, G. J. (2014). Information acquisition with sensing robots: Algorithms and error bounds. In IEEE International Conference on Robotics and Automation (ICRA) (pp. 6447–6454).
Belabbas, M.-A. (2015). Geometric methods for optimal sensor design. arXiv preprint arXiv:1503.05968.
Bernstein, D. S. (2009). Matrix mathematics: theory, facts, and formulas. Princeton University Press.
Bertsekas, D. P. (2005). Dynamic Programming and Optimal Control, Vol. I (3rd ed.). Athena Scientific.
Chen, C.-T. (1998). Linear System Theory and Design (3rd ed.). New York, NY, USA: Oxford University Press.
Clark, A., Alomair, B., Bushnell, L., & Poovendran, R. (2014a). Minimizing convergence error in multi-agent systems via leader selection: A supermodular optimization approach. IEEE Transactions on Automatic Control, 59, 1480–1494.
Clark, A., Bushnell, L., & Poovendran, R. (2012). On leader selection for performance and controllability in multi-agent systems. In IEEE 51st Annual Conference on Decision and Control (CDC) (pp. 86–93).
Clark, A., Bushnell, L., & Poovendran, R. (2014b). A supermodular optimization framework for leader selection under link noise in linear multi-agent systems. IEEE Transactions on Automatic Control, 59, 283–296.
Coppersmith, D., & Winograd, S. (1987). Matrix multiplication via arithmetic progressions. In Proceedings of the nineteenth annual ACM Symposium on Theory of Computing (pp. 1–6).
Das, A., & Kempe, D. (2008). Sensor selection for minimizing worst-case prediction error. In Information Processing in Sensor Networks (IPSN '08) (pp. 97–108).
Dhingra, N. K., Jovanovic, M. R., & Luo, Z.-Q. (2014). An ADMM algorithm for optimal sensor and actuator selection. In IEEE 53rd Annual Conference on Decision and Control (CDC) (pp. 4039–4044).
Fang, Y., Loparo, K., & Feng, X. (1994). Inequalities for the trace of matrix product. IEEE Transactions on Automatic Control, 39, 2489–2490.
Feige, U. (1998). A threshold of ln n for approximating set cover. Journal of the ACM, 45, 634–652.
Gupta, V., Chung, T. H., Hassibi, B., & Murray, R. M. (2006). On a stochastic sensor selection algorithm with applications in sensor scheduling and sensor coverage. Automatica, 42, 251–260.
Hero III, A. O., & Cochran, D. (2011). Sensor management: Past, present, and future. IEEE Sensors Journal, 11, 3064–3075.
Jawaid, S. T., & Smith, S. L. (2015). Submodularity and greedy algorithms in sensor scheduling for linear dynamical systems. Automatica, 61, 282–288.
Jiang, C., Chai Soh, Y., & Li, H. (2015). Sensor placement by maximal projection on minimum eigenspace for linear inverse problem. arXiv preprint arXiv:1506.00747.
Joshi, S., & Boyd, S. (2009). Sensor selection via convex optimization. IEEE Transactions on Signal Processing, 57, 451–462.
Kailath, T., Sayed, A. H., & Hassibi, B. (2000). Linear Estimation. Prentice Hall, Upper Saddle River, NJ.
Kalman, R. E. (1960). A new approach to linear filtering and prediction problems. Journal of Fluids Engineering, 82, 35–45.
Krause, A., & Guestrin, C. (). Beyond convexity: Submodularity in machine learning.
Krause, A., Singh, A., & Guestrin, C. (2008). Near-optimal sensor placements in Gaussian processes: Theory, efficient algorithms and empirical studies. The Journal of Machine Learning Research, 9, 235–284.
Lin, F., Fardad, M., & Jovanovic, M. R. (2013). Design of optimal sparse feedback gains via the alternating direction method of multipliers. IEEE Transactions on Automatic Control, 58, 2426–2431.
Madsen, H., & Holst, J. (1995). Estimation of continuous-time models for the heat dynamics of a building. Energy and Buildings, 22, 67–79.
Matni, N., & Chandrasekaran, V. (2014). Regularization for design. In IEEE 53rd Annual Conference on Decision and Control (CDC) (pp. 1111–1118).
Minoux, M. (1978). Accelerated greedy algorithms for maximizing submodular set functions. In Optimization Techniques (pp. 234–243). Springer.
Munz, U., Pfister, M., & Wolfrum, P. (2014). Sensor and actuator placement for linear systems based on H2 and H∞ optimization. IEEE Transactions on Automatic Control, 59, 2984–2989.
Nemhauser, G. L., & Wolsey, L. A. (1978). Best algorithms for approximating the maximum of a submodular set function. Mathematics of Operations Research, 3, 177–188.
Nemhauser, G. L., & Wolsey, L. A. (1988). Integer and Combinatorial Optimization. New York, NY, USA: Wiley-Interscience.
Oldewurtel, F. (2011). Stochastic model predictive control for energy efficient building climate control. Ph.D. thesis, Eidgenössische Technische Hochschule (ETH) Zürich, Switzerland.
Olshevsky, A. (2014). Minimal controllability problems. IEEE Transactions on Control of Network Systems, 1, 249–258.
Pasqualetti, F., Zampieri, S., & Bullo, F. (2014). Controllability metrics, limitations and algorithms for complex networks. IEEE Transactions on Control of Network Systems, 1, 40–52.
Pequito, S., Kar, S., & Aguiar, A. (2015). A framework for structural input/output and control configuration selection in large-scale systems. IEEE Transactions on Automatic Control, PP, 1–1.
Pequito, S., Ramos, G., Kar, S., Aguiar, A. P., & Ramos, J. (2014). On the exact solution of the minimal controllability problem. arXiv preprint arXiv:1401.4209.
Petersen, K. B., & Pedersen, M. S. (2012). The Matrix Cookbook.
Reusken, A. (2001). Approximation of the determinant of large sparse symmetric positive definite matrices. SIAM Journal on Matrix Analysis and Applications, 23, 799–818.
Rowaihy, H., Eswaran, S., Johnson, M., Verma, D., Bar-Noy, A., Brown, T., & La Porta, T. (2007). A survey of sensor selection schemes in wireless sensor networks. In Defense and Security Symposium (pp. 65621A–65621A). International Society for Optics and Photonics.
Shamaiah, M., Banerjee, S., & Vikalo, H. (2010). Greedy sensor selection: Leveraging submodularity. In 49th IEEE Conference on Decision and Control (CDC) (pp. 2572–2577).
Singh, A., Krause, A., Guestrin, C., & Kaiser, W. J. (2009). Efficient informative sensing using multiple robots. Journal of Artificial Intelligence Research, 707–755.
Sturzenegger, D. C. T. (2014). Model Predictive Building Climate Control: Steps Towards Practice. Ph.D. thesis, Eidgenössische Technische Hochschule (ETH) Zürich, Switzerland.
Summers, T. H., Cortesi, F. L., & Lygeros, J. (2014). On submodularity and controllability in complex dynamical networks. arXiv preprint arXiv:1404.7665.
Tzoumas, V., Jadbabaie, A., & Pappas, G. J. (December 2015). Minimal reachability problems. In 54th IEEE Conference on Decision and Control (CDC).
Tzoumas, V., Jadbabaie, A., & Pappas, G. J. (2016). Sensor placement for optimal Kalman filtering: Fundamental limits, submodularity, and algorithms. In American Control Conference (ACC). To appear.
Tzoumas, V., Rahimian, M. A., Pappas, G. J., & Jadbabaie, A. (2015). Minimal actuator placement with bounds on control effort. IEEE Transactions on Control of Network Systems. In press.
Tzoumas, V., Rahimian, M. A., Pappas, G. J., & Jadbabaie, A. (July 2015). Minimal actuator placement with optimal control constraints. In Proceedings of the American Control Conference (pp. 2081–2086).
Venkatesh, S. (2012). The Theory of Probability: Explorations and Applications. Cambridge University Press.
Weimer, J. E. (2010). Large-scale multiple-source detection using wireless sensor networks. Ph.D. thesis, Carnegie Mellon University, Pittsburgh, PA, USA.
Weimer, J. E., Sinopoli, B., & Krogh, B. H. (2008). A relaxation approach to dynamic sensor selection in large-scale wireless networks. In Distributed Computing Systems Workshops (ICDCS '08) (pp. 501–506).
Wolsey, L. A. (1982). An analysis of the greedy algorithm for the submodular set covering problem. Combinatorica, 2, 385–393.
Yan, G., Tsekenis, G., Barzel, B., Slotine, J.-J., Liu, Y.-Y., & Barabási, A.-L. (2015). Spectrum of controlling and observing complex networks. Nature Physics, 779–786.
Zhang, H., Ayoub, R., & Sundaram, S. (2015). Sensor selection for optimal filtering of linear dynamical systems: Complexity and approximation. In IEEE Conference on Decision and Control (CDC).
Zhao, Y., & Cortés, J. (2015). Gramian-based reachability metrics for bilinear networks. arXiv preprint arXiv:1509.02877.
arXiv:1610.05581v2 [math.RA] 8 Nov 2016
2-CAPABILITY AND 2-NILPOTENT MULTIPLIER OF FINITE
DIMENSIONAL NILPOTENT LIE ALGEBRAS
PEYMAN NIROOMAND AND MOHSEN PARVIZI
Abstract. In the present paper, we obtain some further results about the 2-nilpotent multiplier M(2)(L) of a finite dimensional nilpotent Lie algebra L. For instance, we characterize the structure of M(2)(H) when H is a Heisenberg Lie algebra. Moreover, we give some inequalities on dim M(2)(L) that reduce a well-known upper bound on the 2-nilpotent multiplier as much as possible. Finally, we show that H(m) is 2-capable if and only if m = 1.
1. Introduction
For a finite group G, let G be the quotient of a free group F by a normal subgroup R; then the c-nilpotent multiplier M(c)(G) is defined as

R ∩ γc+1(F)/γc+1[R, F],

in which γc+1[R, F] = [γc[R, F], F] for c ≥ 1. It is a special case of the Baer invariant [3] with respect to the variety of nilpotent groups of class at most c. When c = 1, the abelian group M(G) = M(1)(G) is better known as the Schur multiplier of G, and it has been studied much more extensively, for instance in [11, 14, 18].
Since determining the c-nilpotent multiplier of groups can be used for the classification of groups into isoclinism classes (see [2]), there are multiple papers concerning this subject.
Recently, several authors have worked to extend results from group theory to Lie algebras. In [22], analogously to the c-nilpotent multiplier of groups, for a given Lie algebra L the c-nilpotent multiplier of L is defined as

M(c)(L) = R ∩ F^{c+1}/[R, F]^{c+1},

in which L is presented as the quotient of a free Lie algebra F by an ideal R, F^{c+1} = γc+1(F) and [R, F]^{c+1} = γc+1[R, F]. Similarly, for the case c = 1, the abelian Lie algebra M(L) = M(1)(L) has been studied by the first author and others (see for instance [4, 5, 6, 8, 9, 10, 15, 16, 17, 24]).
The c-nilpotent multiplier of a finite dimensional nilpotent Lie algebra L is a new field of interest in the literature. The present paper concerns the 2-nilpotent multiplier of a finite dimensional nilpotent Lie algebra L, and its aim is divided into several steps. In [22, Corollary 2.8], by a result parallel to the group theory one, it is shown that every finite dimensional nilpotent Lie algebra L of dimension n satisfies

(1.1)    dim(M(2)(L)) + dim(L3) ≤ (1/3) n(n − 1)(n + 1).

Date: April 9, 2018.
Key words and phrases. 2-nilpotent multiplier; Schur multiplier; Heisenberg algebras; derived subalgebra; 2-capable Lie algebra.
Mathematics Subject Classification 2010. Primary 17B30; Secondary 17B60, 17B99.
Here we prove that only abelian Lie algebras attain the bound (1.1). This shows that Ker θ = 0 always holds in [22, Corollary 2.8 (ii)a].
Since Heisenberg algebras H(m) (Lie algebras of dimension 2m + 1 with L2 = Z(L) and dim(L2) = 1) are of interest in several areas of Lie theory, similarly to the results of [5, Example 3] and [13, Theorem 24], but in a quite different way, we give the explicit structure of the 2-nilpotent multiplier of these algebras. Among the other results, since the Lie algebras attaining the upper bound (1.1) are completely described in Lemma 2.6 (they are just the abelian Lie algebras), by obtaining some new inequalities on dim M(2)(L) we reduce the bound (1.1) for non-abelian Lie algebras as much as possible.
Finally, among the class of Heisenberg algebras, we show which of them are 2-capable, that is, which of them are isomorphic to H/Z2(H) for some Lie algebra H. For more information about the capability of Lie algebras see [18, 21]. This generalizes recent results for the group theory case in [19].
2. Further investigation of the 2-nilpotent multiplier of finite dimensional nilpotent Lie algebras
The present section obtains further results on the 2-nilpotent multiplier of a finite dimensional nilpotent Lie algebra. First we recall basic definitions and known results for the convenience of the reader.
Let F be a free Lie algebra on an arbitrary totally ordered set X. Recall from [25] that the basic commutators on the set X, defined as follows, form a basis of F.
The elements of X are the basic commutators of length one, ordered according to the total order previously chosen. Suppose all the basic commutators a_i of length less than k have been defined and ordered, where k > 1. Then the basic commutators of length k are defined as all commutators of the form [a_i, a_j] such that the sum of the lengths of a_i and a_j is k, a_i > a_j, and, if a_i = [a_s, a_t], then a_j ≥ a_t. The number of basic commutators of length n on a set X of d elements, denoted l_d(n), is given by the Witt formula

l_d(n) = (1/n) Σ_{m | n} µ(m) d^{n/m},

where µ is the Möbius function.
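As a sanity check, the Witt formula can be evaluated numerically (a small Python sketch; the function names are our own). For d = 2 generators it gives l_2(3) = 2 and l_2(4) = 3, the counts of basic commutators of weights 3 and 4 used later in the proof of Theorem 2.10.

```python
def mobius(m):
    # Möbius function mu(m) computed by trial division
    result, p = 1, 2
    while p * p <= m:
        if m % p == 0:
            m //= p
            if m % p == 0:
                return 0  # m has a squared prime factor
            result = -result
        p += 1
    if m > 1:
        result = -result
    return result

def witt(d, n):
    # l_d(n) = (1/n) * sum over divisors m of n of mu(m) * d**(n/m)
    total = sum(mobius(m) * d ** (n // m) for m in range(1, n + 1) if n % m == 0)
    return total // n
```

For instance, witt(2, 3) + witt(2, 4) = 5, matching the five basic commutators listed in the proof of Theorem 2.10.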
Following [8], let F be a fixed field and let L, K be two Lie algebras, with [ , ] denoting the Lie bracket. By an action of L on K we mean an F-bilinear map (l, k) ∈ L × K ↦ ^l k ∈ K satisfying

^{[l,l']}k = ^l(^{l'}k) − ^{l'}(^l k)  and  ^l[k, k'] = [^l k, k'] + [k, ^l k'],  for all l, l' ∈ L, k, k' ∈ K.
When L is a subalgebra of a Lie algebra P and K is an ideal in P, then L acts on K by Lie multiplication, ^l k = [l, k]. A crossed module is a Lie homomorphism σ : K → L together with an action of L on K such that

σ(^l k) = [l, σ(k)]  and  ^{σ(k)}k' = [k, k']  for all k, k' ∈ K and l ∈ L.
Let σ : L → M and η : K → M be two crossed modules, where L and K act on each other and on themselves by the Lie bracket. These actions are called compatible provided that

^{(^k l)}k' = ^k(^l k')  and  ^{(^l k)}l' = ^l(^k l').
2-CAPABILITY AND 2- NILPOTENT MULTIPLIER OF FINITE DIMENSIONAL NILPOTENT LIE ALGEBRAS
The non-abelian tensor product L ⊗ K of L and K is the Lie algebra generated by the symbols l ⊗ k with defining relations

c(l ⊗ k) = cl ⊗ k = l ⊗ ck,
(l + l') ⊗ k = l ⊗ k + l' ⊗ k,    l ⊗ (k + k') = l ⊗ k + l ⊗ k',
[l, l'] ⊗ k = l ⊗ ^{l'}k − l' ⊗ ^l k,    l ⊗ [k, k'] = ^{k'}l ⊗ k − ^k l ⊗ k',
[l ⊗ k, l' ⊗ k'] = −^k l ⊗ ^{l'}k',

for all c ∈ F, l, l' ∈ L, k, k' ∈ K.
The non-abelian tensor square of L is the special case of the tensor product L ⊗ K in which K = L. We write L ⊗_Z K for the usual abelian tensor product, used when L and K are abelian and the actions are trivial.
Let L□K be the submodule of L ⊗ K generated by the elements l ⊗ k such that σ(l) = η(k). The factor Lie algebra L ∧ K ≅ L ⊗ K/L□K is called the exterior product of L and K, and the image of l ⊗ k is denoted by l ∧ k for all l ∈ L, k ∈ K. Throughout the paper, Γ denotes the universal quadratic functor (see [8]).
Recall from [18] the exterior centre Z^∧(L) = {l ∈ L | l ∧ l' = 0_{L∧L} for all l' ∈ L} of a Lie algebra L. It is shown in [18] that the exterior centre of L is a central ideal of L which allows one to decide when the Lie algebra L is capable, that is, whether L ≅ H/Z(H) for some Lie algebra H.
The following lemma is a consequence of [18, Lemma 3.1].
Lemma 2.1. A finite dimensional Lie algebra L is capable if and only if Z^∧(L) = 0.
The next two lemmas are special cases of [22, Proposition 2.1 (i)] for c = 2 and are useful for proving the next theorem.
Lemma 2.2. Let I be an ideal in a Lie algebra L. Then the following sequences are exact.
(i) Ker(µ²_I) → M^{(2)}(L) → M^{(2)}(L/I) → (I ∩ L³)/[[I, L], L] → 0.
(ii) (I ∧ L/L³) ∧ L/L³ → M^{(2)}(L) → M^{(2)}(L/I) → I ∩ L³ → 0, when [[I, L], L] = 0.
Lemma 2.3. Let I be an ideal of L, and put K = L/I. Then
(i) dim M^{(2)}(K) ≤ dim M^{(2)}(L) + dim (I ∩ L³)/[[I, L], L].
(ii) Moreover, if I is a 2-central subalgebra, then
(a) the sequence (I ∧ L) ∧ L → M^{(2)}(L) → M^{(2)}(K) → I ∩ L³ → 0 is exact;
(b) dim M^{(2)}(L) + dim(I ∩ L³) ≤ dim M^{(2)}(K) + dim((I ⊗ L/L³) ⊗ L/L³).
Proof. (i) Use Lemma 2.2 (i).
(ii)(a) Since [I, L] ⊆ Z(L), we have Ker µ²_I = (I ∧ L) ∧ L and [[I, L], L] = 0, so the result follows from Lemma 2.2.
(ii)(b) Since there is a natural epimorphism (I ⊗ L/L³) ⊗ L/L³ → (I ∧ L/L³) ∧ L/L³, the result follows from Lemma 2.2 (ii). □
The following theorem gives the explicit structure of the Schur multiplier of all Heisenberg algebras.
Theorem 2.4 ([5, Example 3] and [13, Theorem 24]). Let H(m) be the Heisenberg algebra of dimension 2m + 1. Then
(i) M(H(1)) ≅ A(2);
(ii) M(H(m)) ≅ A(2m² − m − 1) for all m ≥ 2.
The following result comes from [20, Theorem 2.8] and describes the behavior of the 2-nilpotent multiplier with respect to the direct sum of two Lie algebras.
Theorem 2.5. Let A and B be finite dimensional Lie algebras. Then

M^{(2)}(A ⊕ B) ≅ M^{(2)}(A) ⊕ M^{(2)}(B) ⊕ ((A/A² ⊗_Z A/A²) ⊗_Z B/B²) ⊕ ((B/B² ⊗_Z B/B²) ⊗_Z A/A²).
The following theorem is proved in [22] and will be used in what follows. We give a short proof, by a quite different method from that of [22, Proposition 1.2].
Theorem 2.6. Let L = A(n) be an abelian Lie algebra of dimension n. Then

M^{(2)}(L) ≅ A((1/3) n(n − 1)(n + 1)).

Proof. We proceed by induction on n. Assume n = 2. Then Theorem 2.5 allows us to conclude that

M^{(2)}(L) ≅ M^{(2)}(A(1)) ⊕ M^{(2)}(A(1)) ⊕ ((A(1) ⊗_Z A(1)) ⊗_Z A(1)) ⊕ ((A(1) ⊗_Z A(1)) ⊗_Z A(1)) ≅ A(1) ⊕ A(1) ≅ A(2).

Now assume the result for n − 1 and write L ≅ A(n) ≅ A(n − 1) ⊕ A(1). By the induction hypothesis and Theorem 2.5, we have

M^{(2)}(A(n − 1) ⊕ A(1)) ≅ M^{(2)}(A(n − 1)) ⊕ ((A(n − 1) ⊗_Z A(n − 1)) ⊗_Z A(1)) ⊕ ((A(1) ⊗_Z A(1)) ⊗_Z A(n − 1))
≅ A((1/3) n(n − 1)(n − 2)) ⊕ A((n − 1)²) ⊕ A(n − 1)
≅ A((1/3) n(n − 1)(n + 1)). □
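The induction step above amounts to the numerical identity (1/3)n(n − 1)(n − 2) + (n − 1)² + (n − 1) = (1/3)n(n − 1)(n + 1), which can be checked directly (a small Python sketch; the function name is ours):

```python
def dim_m2_abelian(n):
    # dim M^{(2)}(A(n)) = n(n-1)(n+1)/3 (Theorem 2.6); always an integer,
    # since one of n-1, n, n+1 is divisible by 3
    return n * (n - 1) * (n + 1) // 3

# induction step of Theorem 2.6: passing from A(n-1) to A(n) = A(n-1) + A(1)
# adds (n-1)^2 + (n-1) dimensions via the direct-sum formula of Theorem 2.5
for n in range(2, 50):
    assert dim_m2_abelian(n) == dim_m2_abelian(n - 1) + (n - 1) ** 2 + (n - 1)
```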
The main strategy in what follows is to give an analogue of Theorem 2.4 for the 2-nilpotent multiplier. In the first theorem, we obtain the structure of M^{(2)}(L) when L is a non-capable Heisenberg algebra.
Theorem 2.7. Let L = H(m) be a non-capable Heisenberg algebra. Then

M^{(2)}(H(m)) ≅ A((8m³ − 2m)/3).

Proof. Since L is non-capable, Lemma 2.1 implies Z^∧(L) = L² = Z(L). Invoking Lemma 2.3 with I = Z^∧(L), we have M^{(2)}(H(m)) ≅ M^{(2)}(H(m)/H(m)²). Now the result follows from Theorem 2.6. □
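Since H(m)/H(m)² is abelian of dimension 2m, Theorem 2.6 gives dim M^{(2)}(H(m)) = (1/3)(2m)(2m − 1)(2m + 1), which indeed equals (8m³ − 2m)/3 (a quick Python check; names are ours):

```python
for m in range(2, 20):
    ab = 2 * m  # dimension of the abelianization H(m)/H(m)^2
    # Theorem 2.6 applied to A(2m) versus the closed form in Theorem 2.7
    assert ab * (ab - 1) * (ab + 1) // 3 == (8 * m ** 3 - 2 * m) // 3
```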
The following theorem, from [18, Theorem 3.4], identifies the capable algebras within the class of all Heisenberg algebras.
Theorem 2.8. H(m) is capable if and only if m = 1.
Corollary 2.9. H(m) is not 2-capable for any m ≥ 2.
Proof. Since every 2-capable Lie algebra is capable, the result follows from Theorem 2.8. □
Since H(m) is not 2-capable for any m ≥ 2, it remains only to discuss the 2-capability of H(1). Here we obtain the 2-nilpotent multiplier of H(1), and in the next section we show that H(1) is 2-capable.
Theorem 2.10. Let L = H(1). Then

M^{(2)}(H(1)) ≅ A(5).

Proof. We know that H(1) is in fact the free nilpotent Lie algebra of rank 2 and class 2, that is, H(1) ≅ F/F³, where F is the free Lie algebra on two letters x, y. The 2-nilpotent multiplier of H(1) is (R ∩ F³)/[[R, F], F] with R = F³, which equals F³/F⁵, and the latter is the abelian Lie algebra on the set of all basic commutators of weights 3 and 4, namely {[y, x, x], [y, x, y], [y, x, x, x], [y, x, x, y], [y, x, y, y]}. So the result holds. □
We summarize our results as follows.
Theorem 2.11. Let H(m) be the Heisenberg algebra of dimension 2m + 1. Then
(i) M^{(2)}(H(1)) ≅ A(5);
(ii) M^{(2)}(H(m)) ≅ A((8m³ − 2m)/3) for all m ≥ 2.
The following lemma lets us obtain the structure of the 2-nilpotent multiplier of all nilpotent Lie algebras with dim L² = 1.
Lemma 2.12 ([15, Lemma 3.3]). Let L be an n-dimensional Lie algebra with dim L² = 1. Then

L ≅ H(m) ⊕ A(n − 2m − 1).

Theorem 2.13. Let L be an n-dimensional Lie algebra with dim L² = 1, so that L ≅ H(m) ⊕ A(n − 2m − 1). Then

M^{(2)}(L) ≅ A((1/3) n(n − 1)(n − 2))      if m > 1,
M^{(2)}(L) ≅ A((1/3) n(n − 1)(n − 2) + 3)  if m = 1.
Proof. By Lemma 2.12, we have L ≅ H(m) ⊕ A(n − 2m − 1). Using the behavior of the 2-nilpotent multiplier with respect to direct sums (Theorem 2.5),

M^{(2)}(L) ≅ M^{(2)}(H(m)) ⊕ M^{(2)}(A(n − 2m − 1)) ⊕ ((H(m)/H(m)² ⊗_Z H(m)/H(m)²) ⊗_Z A(n − 2m − 1)) ⊕ ((A(n − 2m − 1) ⊗_Z A(n − 2m − 1)) ⊗_Z H(m)/H(m)²).

First assume that m = 1. Then, by Theorems 2.6 and 2.11,

M^{(2)}(H(1)) ≅ A(5)  and  M^{(2)}(A(n − 3)) ≅ A((1/3)(n − 2)(n − 3)(n − 4)).

Thus

M^{(2)}(L) ≅ A(5) ⊕ A((1/3)(n − 2)(n − 3)(n − 4)) ⊕ ((A(2) ⊗_Z A(2)) ⊗_Z A(n − 3)) ⊕ ((A(n − 3) ⊗_Z A(n − 3)) ⊗_Z A(2))
≅ A((1/3) n(n − 1)(n − 2) + 3).

The case m > 1 is obtained in a similar fashion. □
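Both cases of Theorem 2.13 can be verified numerically against the direct-sum decomposition of Theorem 2.5 (a Python sketch; the function names are ours):

```python
def dim_m2_heisenberg(m):
    # Theorem 2.11: dim M^{(2)}(H(1)) = 5, dim M^{(2)}(H(m)) = (8m^3 - 2m)/3 for m >= 2
    return 5 if m == 1 else (8 * m ** 3 - 2 * m) // 3

def dim_m2_abelian(n):
    # Theorem 2.6: dim M^{(2)}(A(n)) = n(n-1)(n+1)/3
    return n * (n - 1) * (n + 1) // 3

def dim_m2(m, n):
    # dim M^{(2)}(H(m) + A(n-2m-1)) via the direct-sum formula of Theorem 2.5;
    # the abelianization H(m)/H(m)^2 has dimension a = 2m, and b = n - 2m - 1
    a, b = 2 * m, n - 2 * m - 1
    return dim_m2_heisenberg(m) + dim_m2_abelian(b) + a * a * b + b * b * a

for n in range(3, 40):
    assert dim_m2(1, n) == n * (n - 1) * (n - 2) // 3 + 3   # the m = 1 case
for m in range(2, 8):
    for n in range(2 * m + 1, 40):
        assert dim_m2(m, n) == n * (n - 1) * (n - 2) // 3   # the m > 1 case
```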
Theorem 2.14. Let L be an n-dimensional nilpotent Lie algebra with dim L² = m (m ≥ 1). Then

dim M^{(2)}(L) ≤ (1/3)(n − m)[(n + 2m − 2)(n − m − 1) + 3(m − 1)] + 3.

In particular, dim M^{(2)}(L) ≤ (1/3) n(n − 1)(n − 2) + 3, with equality in the last inequality if and only if L ≅ H(1) ⊕ A(n − 3).
Proof. We proceed by induction on m. For m = 1, the result follows from Theorem 2.13. Let m ≥ 2 and take I a one-dimensional central ideal of L. Since I and L/L³ act on each other trivially, we have

(I ⊗ L/L³) ⊗ L/L³ ≅ (I ⊗_Z (L/L³)/(L/L³)²) ⊗_Z (L/L³)/(L/L³)².

Thus, by Lemma 2.3 (ii)(b),

dim M^{(2)}(L) + dim(I ∩ L³) ≤ dim M^{(2)}(L/I) + dim((I ⊗_Z (L/L³)/(L/L³)²) ⊗_Z (L/L³)/(L/L³)²).

Since, by the induction hypothesis,

dim M^{(2)}(L/I) ≤ (1/3)(n − m)[(n + 2m − 5)(n − m − 1) + 3(m − 2)] + 3,

we have

dim M^{(2)}(L) ≤ (1/3)(n − m)[(n + 2m − 5)(n − m − 1) + 3(m − 2)] + 3 + (n − m)²
= (1/3)(n − m)[(n + 2m − 2)(n − m − 1) + 3(m − 1)] + 3,

as required. □
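The algebra in the induction step reduces to the identity (n − m)[(n + 2m − 2)(n − m − 1) + 3(m − 1)] = (n − m)[(n + 2m − 5)(n − m − 1) + 3(m − 2)] + 3(n − m)², which can be checked directly (a Python sketch; everything is scaled by 3 to stay in integers, and the function name is ours):

```python
def triple_bound(n, m):
    # 3 * (bound of Theorem 2.14): (n-m)[(n+2m-2)(n-m-1) + 3(m-1)] + 9
    return (n - m) * ((n + 2 * m - 2) * (n - m - 1) + 3 * (m - 1)) + 9

for n in range(4, 40):
    for m in range(2, n - 1):
        # induction step: the (n-1, m-1) bound plus 3(n-m)^2 gives the (n, m) bound
        assert triple_bound(n, m) == triple_bound(n - 1, m - 1) + 3 * (n - m) ** 2
    # at m = 1 the bound coincides with n(n-1)(n-2)/3 + 3, the "in particular" claim
    assert triple_bound(n, 1) == n * (n - 1) * (n - 2) + 9
```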
The following corollary shows that the converse of [22, Proposition 1.2] for c = 2 is also true. In fact, it proves that Ker θ = 0 always holds in [22, Corollary 2.8 (ii)a].
Corollary 2.15. Let L be an n-dimensional nilpotent Lie algebra. If dim M^{(2)}(L) = (1/3) n(n − 1)(n + 1), then L ≅ A(n).
3. 2-capability of Lie algebras
Following the terminology of [7] for groups, a Lie algebra L is said to be 2-capable provided that L ≅ H/Z₂(H) for some Lie algebra H. The concept Z₂*(L) was defined in [23], where it was proved that if π : F/[[R, F], F] → F/R is the natural Lie epimorphism, then

Z₂*(L) = π(Z₂(F/[[R, F], F])).

The following proposition describes the close relation between 2-capability and Z₂*(L).
Proposition 3.1. A Lie algebra L is 2-capable if and only if Z₂*(L) = 0.
Proof. Let L have a free presentation F/R and suppose Z₂*(L) = 0. Consider the natural epimorphism π : F/[[R, F], F] ↠ F/R. Obviously

Ker π = R/[[R, F], F] = Z₂(F/[[R, F], F]),

and hence L ≅ (F/[[R, F], F])/Z₂(F/[[R, F], F]), so L is 2-capable.
Conversely, let L be 2-capable, so that H/Z₂(H) ≅ L for some Lie algebra H. Put F/R ≅ H and Z₂(H) ≅ S/R. There is a natural epimorphism η : F/[[S, F], F] ↠ F/S ≅ L. Since Z₂(F/[[R, F], F]) ⊆ Ker η, we get Z₂*(L) = 0, as required. □
The following theorem gives an instrumental tool for proving the main result.
Theorem 3.2. Let I be an ideal of L such that I ⊆ Z₂*(L). Then the natural Lie homomorphism M^{(2)}(L) → M^{(2)}(L/I) is a monomorphism.
Proof. Let F/R be a free presentation of L and write I = S/R. Consider the natural homomorphism

φ : M^{(2)}(L) ≅ (R ∩ F³)/[[R, F], F] → M^{(2)}(L/I) ≅ (S ∩ F³)/[[S, F], F].

This map, together with the fact that S/R ⊆ Z₂(F/R), shows that φ has trivial kernel. The result follows. □
Theorem 3.3. A Heisenberg Lie algebra H(m) is 2-capable if and only if m = 1.
Proof. For m ≥ 2, Corollary 2.9 shows that H(m) is not 2-capable. Hence we may assume L ≅ H(1). Suppose, for contradiction, that L is not 2-capable, so that Z₂*(L) ≠ 0, and let I ⊆ Z₂*(L) be an ideal of L of dimension 1. Then L/I is abelian of dimension 2, and hence dim M^{(2)}(L/I) = 2. On the other hand, Theorem 2.11 implies dim M^{(2)}(L) = 5, so by Theorem 3.2 the map M^{(2)}(L) → M^{(2)}(L/I) cannot be a monomorphism, a contradiction. Thus H(1) is 2-capable. □
References
[1] R. Baer, Representations of groups as quotient groups, I, II, and III. Trans. Amer. Math.
Soc. 58 (1945) 295-419.
[2] F.R. Beyl, J. Tappe, Group Extensions, Representations and the Schur Multiplicator, Lecture
Notes in Math., vol. 958, Springer- Verlag, Berlin, 1982.
[3] P. Batten, Multipliers and covers of Lie algebras, dissertation, North Carolina State University
(1993).
[4] P. Batten, K. Moneyhun, and E. Stitzinger, On characterizing nilpotent Lie algebras by their
multipliers, Comm. Algebra, 24 (1996), 4319–4330.
[5] P. Batten and E. Stitzinger, On covers of Lie algebras, Comm. Algebra, 24 (1996) 4301–4317.
[6] L. Bosko, On Schur multipliers of Lie algebras and groups of maximal class, Internat. J. Algebra Comput., 20 (2010), 807-821.
[7] J. Burns, G. Ellis, On the nilpotent multipliers of a group, Math. Z. 226 (1997) 405-428.
[8] G. Ellis, A non abelian tensor product of Lie algebras, Glasgow Math. J., 33 (1991) 101- 120.
[9] P. Hardy, On characterizing nilpotent Lie algebras by their multipliers III, Comm. Algebra,
33 (2005) 4205-4210.
[10] P. Hardy and E. Stitzinger, On characterizing nilpotent Lie algebras by their multipliers t(L) = 3, 4, 5, 6, Comm. Algebra, 26 (1) (1998), 3527-3539.
[11] G. Karpilovsky, The Schur multiplier, Oxford University Press, Oxford, 1987.
[12] M. R. R. Moghaddam, The Baer invariant of a direct product, Arch. Math. 33 (1980) 504-511.
[13] K. Moneyhun, Isoclinisms in Lie algebras, Algebras Groups Geom., 11 (1994), 9-22.
[14] P. Niroomand, On the order of Schur multiplier of non abelian p–groups, J. Algebra, 322
(2009), 4479–4482.
[15] P. Niroomand, On dimension of the Schur multiplier of nilpotent Lie algebras, Cent. Eur. J.
Math., 9 (2011), 57-64.
[16] P. Niroomand and F. G. Russo. A note on the Schur multiplier of a nilpotent Lie algebra,
Comm. Algebra, 39, 4,(2011) 1293-1297.
[17] P. Niroomand, F. G. Russo, A restriction on the Schur multiplier of nilpotent Lie algebras,
Electron. J. Linear Algebra, 22 (2011),1-9.
[18] P. Niroomand, M. Parvizi, F.G. Russo, Some criteria for detecting capable Lie algebras, J.
Algebra, 384 (2013), 36-44.
[19] P. Niroomand, M. Parvizi, On the 2-nilpotent multiplier of finite p-groups, Glasgow Math J.
1, 57, (2015), 201-210.
[20] P. Niroomand and M. Parvizi, 2-Nilpotent multipliers of a direct product of Lie algebras, to
appear in Rend. Circ. Mat. Palermo.
[21] A.R. Salemkar, V. Alamian, H. Mohammadzadeh, Some properties of the Schur multiplier
and covers of Lie Algebras, Comm. Algebra 36 (2008) 697-707.
[22] A. R. Salemkar, B. Edalatzadeh and M. Araskhan, Some inequalities for the dimension of
the c-nilpotent multiplier of Lie algebras. J. Algebra 322 (2009), no. 5, 1575-1585.
[23] A. R. Salemkar and Z. Riyahi, Some properties of the c-nilpotent multiplier of Lie algebras,
J. Algebra 370 (2012), 320-325.
[24] B. Yankoski. On the multiplier of a Lie Algebra, Journal of Lie Theory, 13 (2003) 1-6.
[25] A.I. Shirshov, On bases of free Lie algebras, Algebra Logika 1 (1) (1962) 14-19.
School of Mathematics and Computer Science, Damghan University, Damghan, Iran
E-mail address: [email protected], p [email protected]
Department of Pure Mathematics, Ferdowsi University of Mashhad, Mashhad, Iran.
E-mail address: [email protected]
ON ADAPTIVE ESTIMATION OF NONPARAMETRIC FUNCTIONALS
arXiv:1608.01364v3 [] 5 Oct 2017
By Rajarshi Mukherjee∗ , Eric Tchetgen Tchetgen† , and James Robins‡
Abstract. We provide general adaptive upper bounds for estimating nonparametric functionals based on second order U-statistics arising from finite dimensional approximation of the infinite dimensional models using projection type kernels. An accompanying general adaptive lower bound tool is provided, yielding bounds on the chi-square divergence between mixtures of product measures. We then provide examples of functionals for which the theory produces rate-optimal matching adaptive upper and lower bounds.
1. Introduction. Estimation of functionals of the data generating distribution has always been of central interest in statistics. In nonparametric statistics, where data generating distributions are
parametrized by functions in infinite dimensional spaces, there exists a comprehensive literature
addressing such questions. In particular, a large body of research has been devoted to explore
minimax estimation of linear and quadratic functionals in density and white noise models. We
do not attempt to survey the extensive literature in this area. However, the interested reader can
find a comprehensive snapshot of the literature in Hall and Marron (1987), Bickel and Ritov (1988),
Donoho, Liu and MacGibbon (1990), Donoho and Nussbaum (1990), Fan (1991), Kerkyacharian and Picard
(1996), Laurent (1996), Cai and Low (2003), Cai and Low (2004), Cai and Low (2005a), and other
references therein. Although the question of more general nonparametric functionals has received
relatively less attention, some fundamental insights regarding estimation of non-linear integral
functionals in density and white noise models can be found in Ibragimov and Has'minskii (2013),
Kerkyacharian and Picard (1996), Nemirovski (2000), and references therein.
A general feature of the results obtained while estimating "smooth" nonparametric functionals is an elbow effect in the rate of estimation, depending on the smoothness of the underlying function classes. For example, while estimating quadratic functionals in a d-dimensional density model, √n-efficient estimation can be achieved as soon as the Hölder exponent β of the underlying density exceeds d/4, whereas the optimal rate of estimation is n^{−4β/(4β+d)} (in the root mean squared sense) for β < d/4. A similar elbow in the rate of estimation exists for estimation of non-linear integral functionals as well. For the density model this was demonstrated by Birgé and Massart (1995) and Kerkyacharian and Picard (1996). For the signal or white noise model, the problem of general integrated non-linear functionals was studied by Nemirovski (2000), but mostly in the √n regime. However, for more complex nonparametric models, the approach to constructing minimax optimal procedures for general non-linear functionals in non-√n regimes has been rather case specific. Motivated by this, in recent times, Robins et al. (2008, 2016) and Mukherjee, Newey and Robins (2017) have developed a theory of inference for non-linear functionals in parametric, semi-parametric, and non-parametric models based on higher order influence functions.
Most minimax rate optimal estimators proposed in the literature, however, depend explicitly on
the knowledge of the smoothness indices. Thus, it becomes of interest to understand the question
of adaptive estimation i.e. the construction and analysis of estimators without prior knowledge of
∗ Assistant Professor, Division of Biostatistics, University of California, Berkeley
† Professor, Department of Biostatistics, Harvard University
‡ Professor, Department of Biostatistics, Harvard University
AMS 2000 subject classifications: Primary 62G10, 62G20, 62C20
Keywords and phrases: Adaptive Minimax Estimation, Lepski’s Method, Higher Order U-Statistics
the smoothness. The question of adaptation of linear and quadratic functionals has been studied in
detail in the context of density, white noise, and nonparametric additive Gaussian noise regression
models (Low (1992), Efromovich and Low (1994), Efromovich and Low (1996), Tribouley (2000),
Efromovich and Samarov (2000), Klemela and Tsybakov (2001), Laurent and Massart (2000), Cai and Low
(2005b), Cai and Low (2006), Giné and Nickl (2008)). However, adaptive estimation of more general non-linear functionals in more complex nonparametric models has received much less attention. This paper is motivated by taking a step in that direction.
In particular, we suppose we observe i.i.d. copies of a random vector O = (W, X) ∈ R^{m+d} with unknown distribution P on each of n study subjects. The variable X represents a random vector
of baseline covariates such as age, height, weight, etc. Throughout X is assumed to have compact
support and a density with respect to the Lebesgue measure in Rd . The variable W ∈ Rm can be
thought of as a combination of outcome and treatment variables in some of our examples. In the
above set up, we are interested in estimating certain “smooth” functionals φ(P ) in the sense that
under finite dimensional parametric submodels, they admit derivatives which can be represented
as inner products of first order influence functions with score functions (Bickel et al., 1993). For
some classic examples of these functionals, we provide matching upper and lower bounds on the
rate of adaptive minimax estimation over a varying class of smoothness of the underlying functions,
provided the marginal design density of X is sufficiently smooth.
The contributions of this paper are as follows. Extending the theory from density estimation and Gaussian white noise models, we provide a step towards an adaptation theory for non-linear functionals in more complex nonparametric models in the non-√n regime. The crux of our arguments relies on the observation that, when the non-adaptive minimax estimators can be written as a sum of empirical mean type statistics and second order U-statistics, one can provide a unified theory of selecting the "best" data driven estimator using Lepski type arguments (Lepski, 1991, 1992). Indeed, under certain assumptions on the data generating mechanism P, the non-adaptive minimax estimators have the desired structure for a large class of problems (Robins et al., 2008). This enables us to produce a class of examples where a single method provides the desired answer. In order to prove a lower bound for the rate of adaptation, we provide a general tool for bounding the chi-square divergence between two mixtures of suitable product measures. This extends the results in Birgé and Massart (1995) and Robins et al. (2009), where similar results were obtained for the Hellinger distance. Our results are provided in the low regularity regime, i.e. when it is not possible to achieve a √n-efficient estimator in an asymptotically minimax sense. This typically happens when the "average smoothness" of the function classes in consideration is below d/4. A discussion of obtaining a corresponding √n-efficient estimator for regularity above d/4 is provided in Section 4.
The rest of the paper is organized as follows. In Section 2 we provide the main results of the paper in a general form. Section 3 is devoted to applications of the main results in specific examples. A discussion of some issues left unanswered is provided in Section 4. In Section 5 we provide a brief discussion of some basic wavelet and function space theory and notation, which we use extensively. Finally, Section 6 and Appendices A and B are devoted to the proofs of the theorems and to collecting useful technical lemmas.
1.1. Notation. For data arising from an underlying distribution P, we denote by P_P and E_P the probability of an event and expectation under P, respectively. For any positive integer m ≥ 2^d, let j(m) denote the largest integer j such that 2^{jd} ≤ m, i.e. j = ⌊(1/d)(log m / log 2)⌋, so that 2^{j(m)d} ≥ m/2^d. For a two variable function h(O₁, O₂), let S(h(O₁, O₂)) = (1/2)[h(O₁, O₂) + h(O₂, O₁)] be the symmetrization of h. The results in this paper are mostly asymptotic (in n) in nature and thus require some standard asymptotic notation. If a_n and b_n are two sequences of real numbers, then a_n ≫ b_n (and a_n ≪ b_n) means that a_n/b_n → ∞ (and a_n/b_n → 0) as n → ∞, respectively. Similarly, a_n ≳ b_n (and a_n ≲ b_n)
implies that lim inf an /bn = C for some C ∈ (0, ∞] (and lim sup an /bn = C for some C ∈ [0, ∞)).
Alternatively, an = o(bn ) will also imply an ≪ bn and an = O(bn ) will imply that lim sup an /bn = C
for some C ∈ [0, ∞)). Finally we comment briefly on the various constants appearing throughout
the text and proofs. Given that our primary results concern convergence rates of various estimators,
we will not emphasize the role of constants throughout and rely on fairly generic notation for such
constants. In particular, for any fixed tuple v of real numbers, C(v) will denote a positive real number which depends on the elements of v only. Finally, for any linear subspace L ⊆ L²[0,1]^d, let Π(h|L) denote the orthogonal projection of h onto L under the Lebesgue measure. Also, for a function h defined on [0,1]^d and 1 ≤ q < ∞, we let ‖h‖_q := (∫_{[0,1]^d} |h(x)|^q dx)^{1/q} denote the L_q semi-norm of h, and ‖h‖_∞ := sup_{x∈[0,1]^d} |h(x)| the L_∞ semi-norm of h. We say h ∈ L_q[0,1]^d for q ∈ [1, ∞] if ‖h‖_q < ∞. Typical functions arising in this paper will be assumed to belong to certain Hölder balls H(β, M) (see Section 5 for details). This implies that the functions are uniformly bounded by a number depending on M. However, to make the dependence of our results on the uniform upper bound of the functions more clear, we will typically assume a bound B_U over the function classes, and for the sake of compactness will avoid the notational dependence of B_U on M.
2. Main Results. We divide the main results of the paper into three main parts. First we
discuss a general recipe for producing a “best” estimator from a sequence of estimators based on
second order U-statistics constructed from compactly supported wavelet based projection kernels
(defined in Section 5). Next we provide a general tool for bounding chi-square divergence between
mixtures of product measures. This serves as a basis of using a version of constrained risk inequality
(Cai and Low, 2011) for producing matching adaptive lower bounds in the context of estimation of the non-linear functionals considered in this paper. Finally, we provide estimators of the design density as well as regression functions in the L∞ norm which adapt over Hölder type smoothness classes (defined in Section 5).
2.1. Upper Bound.
Consider i.i.d. data O_i = (W_i, X_i) ∼ P, W_i ∈ R^m, X_i ∈ [0,1]^d, i = 1, …, n, and a real valued functional of interest φ(P). Given this sample of size n ≥ 1, consider further a sequence of estimators {φ̂_{n,j}}_{j≥1} of φ(P) defined as follows:

φ̂_{n,j} = (1/n) Σ_{i=1}^{n} L₁(O_i) − (1/(n(n−1))) Σ_{i₁ ≠ i₂} S[L_{2l}(O_{i₁}) K_{V_j}(X_{i₁}, X_{i₂}) L_{2r}(O_{i₂})],

where K_{V_j}(X_{i₁}, X_{i₂}) is a resolution 2^j wavelet projection kernel defined in Section 5 and L₁, L_{2l}, L_{2r} are measurable functions such that for all O one has

max{|L₁(O)|, |L_{2l}(O)|, |L_{2r}(O)|} ≤ B

for a known constant B. Also assume that |g(x)| ≤ B for all x, g being the marginal density of X with respect to Lebesgue measure. Such a sequence of estimators can be thought of as a bias corrected version of a usual first order estimator arising from standard first order influence function theory
for "smooth" functionals φ(P) (Bickel et al., 1993; Van der Vaart, 2000). In particular, the linear empirical mean type term (1/n) Σ_{i=1}^{n} L₁(O_i) typically derives from the classical influence function of φ(P), and the quadratic U-statistic term (1/(n(n−1))) Σ_{i₁ ≠ i₂} S[L_{2l}(O_{i₁}) K_{V_j}(X_{i₁}, X_{i₂}) L_{2r}(O_{i₂})] corrects for higher order bias terms. While the specific examples in Section 3 will make the structure of these sequences of estimators more clear, the interested reader can find a more detailed theory in Robins et al. (2008, 2016).
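To make the structure concrete, here is a toy numerical sketch of φ̂_{n,j} for d = 1, using a Haar projection kernel in place of the general wavelet projection kernel of Section 5; the choices of L₁, L_{2l}, L_{2r} below are purely illustrative and not taken from the paper.

```python
import numpy as np

def haar_kernel(x, j):
    # Resolution-2^j Haar projection kernel on [0, 1]:
    # K(x1, x2) = 2^j if x1 and x2 lie in the same dyadic interval of length 2^-j, else 0
    b = np.minimum((x * 2 ** j).astype(int), 2 ** j - 1)
    return (2.0 ** j) * (b[:, None] == b[None, :])

def phi_hat(W, X, j, L1, L2l, L2r):
    # phi_hat_{n,j} = (1/n) sum_i L1(O_i)
    #   - (1/(n(n-1))) sum_{i1 != i2} S[L2l(O_{i1}) K(X_{i1}, X_{i2}) L2r(O_{i2})];
    # summing the symmetrized kernel S(.) over all ordered pairs i1 != i2 equals
    # summing the unsymmetrized term, so no explicit symmetrization is needed here.
    n = len(X)
    K = haar_kernel(X, j)
    np.fill_diagonal(K, 0.0)  # exclude the diagonal i1 == i2
    quad = L2l(W, X) @ K @ L2r(W, X) / (n * (n - 1))
    return np.mean(L1(W, X)) - quad

# toy run with illustrative choices L1(O) = W^2, L2l = L2r = W
rng = np.random.default_rng(0)
X = rng.uniform(size=400)
W = np.sin(2 * np.pi * X) + 0.1 * rng.standard_normal(400)
est = phi_hat(W, X, 3, lambda w, x: w ** 2, lambda w, x: w, lambda w, x: w)
```

The kernel is symmetric and blockwise constant, and the resolution j plays exactly the role of the truncation level in Properties (A) and (B) below.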
The quality of such a sequence of estimators will be judged against models for the data generating mechanism P. To this end, assume P ∈ P_θ, where P_θ is a class of data generating distributions indexed by θ, which in turn can vary over an index set Θ. The choices of such a Θ will be clear from our specific examples in Section 3, and will typically correspond to smoothness indices of various infinite dimensional functions parametrizing the data generating mechanisms. Further assume that there exist positive real valued functions f₁ and f₂ defined on Θ such that the sequence of estimators {φ̂_{n,j}}_{j≥1} satisfies the following bounds with known constants C₁ > 0 and C₂ > 0 whenever n ≤ 2^{jd} ≤ n².
Property (A): Bias Bound

sup_{P ∈ P_θ} |E_P[φ̂_{n,j}] − φ(P)| ≤ C₁ (2^{−2jd·f₁(θ)/d} + n^{−f₂(θ)}).

Property (B): Variance Bound

sup_{P ∈ P_θ} E_P (φ̂_{n,j} − E_P[φ̂_{n,j}])² ≤ C₂ 2^{jd}/n².
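Balancing the two bounds gives the non-adaptive choice of resolution: with k = 2^{jd} and β = f₁(θ), the squared bias term k^{−4β/d} and the variance term k/n² are equalized at k ≍ n^{2/(1+4β/d)}, yielding the mean squared error rate n^{−8β/(4β+d)}. A quick numerical illustration of this trade-off (the values of n, β, d below are assumed toy choices):

```python
import numpy as np

def mse_proxy(k, n, beta, d):
    # squared bias ~ k^(-4*beta/d) plus variance ~ k/n^2, per Properties (A)-(B);
    # the n^{-f_2(theta)} bias term is ignored, assuming it is of smaller order
    return k ** (-4.0 * beta / d) + k / n ** 2

n, beta, d = 10 ** 6, 0.2, 1                            # beta < d/4: non-root-n regime
ks = np.logspace(np.log10(n), np.log10(n ** 2), 4000)   # grid over n <= 2^{jd} <= n^2
k_best = ks[np.argmin(mse_proxy(ks, n, beta, d))]
k_theory = n ** (2.0 / (1.0 + 4.0 * beta / d))          # bias-variance balancing point
```

The grid minimizer agrees with the balancing point up to constants, as expected from equating the two terms.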
Given properties (A) and (B), we employ a Lepski type argument to choose an "optimal" estimator from the collection {φ̂_{n,j}}_{j≥1}. To this end, as in Efromovich and Low (1996), for a given choice of c > 1, let N be the largest integer such that c^{N−1} ≤ n^{1 − 2/log log n}. Denote k(j) = 2^{jd}, which, according to the definition in Section 1.1, implies that k(j(m)) ≤ m. For l = 0, …, N − 1, let β_l be the solution of k(j_l) = n^{2/(1+4β_l/d)}, where j_l = j(c^l n), i.e. k(j_l) = 2^{j(c^l n)d}. Note that

k(j_{N−1})/n² = 2^{j(c^{N−1} n)d}/n² ≤ c^{N−1} n/n² ≤ n^{−2/log log n} = o(1).

By our choice of discretization, k(j₀) ≤ k(j₁) ≤ … ≤ k(j_{N−1}). Also, there exist constants c₁, c₂ such that for all 0 ≤ l ≤ l' ≤ N − 1 one has

β_l/d − β_{l'}/d ∈ [c₁ (l' − l)/log n, c₂ (l' − l)/log n].

For l = 0, …, N − 1, let

k*(j_l) = (n²/log n)^{1/(1+4β_l/d)}  and  R(k*(j_l)) = k*(j_l)/n² = n^{−(8β_l/d)/(1+4β_l/d)} (log n)^{−1/(1+4β_l/d)}.

This implies that for l' > l there exists a constant c ≥ 0 such that

k*(j_{l'})/k*(j_l) ≥ (n²/log n)^{(4β_l/d − 4β_{l'}/d)/((1+4β_l/d)(1+4β_{l'}/d))} ≥ e^c ≥ 1.
Finally, letting s*(n) be the smallest l such that k*(j_l) ≥ n, define

(2.1)    l̂ := min{ l : (φ̂_{n,j(k*(j_l))} − φ̂_{n,j(k*(j_{l'}))})² ≤ C_opt R(k*(j_{l'})) log n  for all l' ≥ l, s* ≤ l' ≤ N − 1 },

where C_opt is a deterministic constant to be specified later.
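In pseudocode form, the selection rule (2.1) scans resolutions from coarse to fine and stops at the first index that is consistent, up to noise level, with all finer ones (a Python sketch; the function name and the toy inputs are ours):

```python
def lepski_select(estimates, risks, c_opt, log_n):
    # estimates[l]: phi_hat at the l-th resolution (resolutions increase with l)
    # risks[l]:     the variance proxy R(k*(j_l)) at that resolution
    # Return the smallest l whose estimate differs from every finer estimate l' >= l
    # by at most c_opt * R(k*(j_{l'})) * log n, mimicking the rule (2.1).
    N = len(estimates)
    for l in range(N):
        if all((estimates[l] - estimates[lp]) ** 2 <= c_opt * risks[lp] * log_n
               for lp in range(l, N)):
            return l
    return N - 1  # fall back to the finest resolution

# toy run: the first estimate is badly biased, the rest agree within noise level
chosen = lepski_select([0.0, 0.50, 0.52, 0.53],
                       [1e-4, 1e-3, 1e-2, 1e-1], c_opt=1.0, log_n=4.6)
```

In this toy run the rule skips the over-smoothed first estimate and selects index 1, the coarsest resolution compatible with all finer ones.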
With the notation and definitions as above, we now have the following theorem, which is the main result of this paper in the direction of adaptive upper bounds.

Theorem 2.1. Assume β < d/4. Then there exists a positive C depending on B, C₁, C₂, and the choice of wavelet bases ψ⁰_{0,0}, ψ¹_{0,0} (defined in Section 5) such that

sup_{P ∈ P_θ: f₁(θ) = β, f₂(θ) > 4β/(4β+d)} E_P (φ̂_{n,j(k*(l̂))} − φ(P))² ≤ C (√(log n)/n)^{8β/(4β+d)}.
A few remarks are in order regarding the implications of Theorem 2.1. In particular, provided one has knowledge of the data generating θ, and therefore of f₁(θ) and f₂(θ), one can use the bias and variance properties to achieve an optimal trade-off and subsequently obtain a mean squared error in estimating φ(P) over P ∈ P_θ which scales as n^{−8β/(4β+d)}. Theorem 2.1 demonstrates a logarithmic price paid by the estimator φ̂_{n,j(k*(l̂))} in terms of estimating φ(P) over the class of data generating mechanisms {P_θ : f₁(θ) = β, f₂(θ) > 4β/(4β+d)}, β < d/4. As will be demonstrated in Section 3, the condition f₁(θ) = β usually drives the minimax rate of estimation, whereas f₂(θ) > 4β/(4β+d) is a regularity condition which typically relates to the marginal density of covariates in observational studies. Moreover, in our examples, the range β < d/4 will be necessary to guarantee the non-existence of √n-consistent estimators of φ(P) over P ∈ P_θ in a minimax sense.
Finally, it therefore remains to be explored whether the logarithmic price paid in Theorem 2.1 is indeed necessary. Using a chi-square divergence inequality developed in the next section, along with a suitable version of the constrained risk inequality (see Section B), we shall argue that the logarithmic price of Theorem 2.1 is indeed necessary for a class of examples naturally arising in many observational studies.
2.2. Lower Bound.
We provide a bound on the chi-square divergence between two mixtures of product measures, thereby extending results of Birgé and Massart (1995) and Robins et al. (2009). Since both Birgé and Massart (1995) and Robins et al. (2009) demonstrated bounds on the Hellinger divergence between mixtures of product measures, their results do not automatically provide bounds on the corresponding chi-square divergences. However, such bounds are essential to explore adaptive lower bounds in a mean squared error sense. To this end, as in Robins et al. (2009), let O1, ..., On be a random sample from a density p with respect to a measure µ on a sample space (χ, A). For k ∈ N, let χ = ∪_{j=1}^{k} χj be a measurable partition of the sample space. Given a vector λ = (λ1, ..., λk) in some product measurable space Λ = Λ1 × ··· × Λk, let Pλ and Qλ be probability measures on χ such that
• Pλ(χj) = Qλ(χj) = pj for every λ and some (p1, ..., pk) in the k-dimensional simplex.
• The restrictions of Pλ and Qλ to χj depend on the j-th coordinate λj of λ = (λ1, ..., λk) only.
For pλ and qλ densities of the measures Pλ and Qλ respectively that are jointly measurable in the parameters λ and the observations, and π a probability measure on Λ, define p = ∫ pλ dπ(λ) and q = ∫ qλ dπ(λ), and set

a = max_j sup_λ (1/pj) ∫_{χj} (pλ − p)²/p dµ,

b = max_j sup_λ (1/pj) ∫_{χj} (pλ − p)²/pλ dµ,

c̃ = max_j sup_λ (1/pj) ∫_{χj} p²/pλ dµ,

δ = max_j sup_λ (1/pj) ∫_{χj} (qλ − pλ)²/pλ dµ.
With the notations and definitions as above, we now have the following theorem, which is the main result in the direction of adaptive lower bounds of this paper.

Theorem 2.2. Suppose that npj(1 ∨ a ∨ b ∨ c̃) ≤ A for all j and for all λ, and B ≤ pλ ≤ B̄ for positive constants A, B, B̄. Then there exists a C > 0 that depends only on A, B, B̄ such that, for any product probability measure π = π1 ⊗ ··· ⊗ πk, one has

χ²( ∫ P_λ^n dπ(λ), ∫ Q_λ^n dπ(λ) ) ≤ e^{Cn²(max_j pj)(b² + ab) + Cnδ} − 1,

where χ²(ν1, ν2) = ∫ (dν2/dν1)² dν1 − 1 is the chi-square divergence between two probability measures ν1 and ν2 with ν2 ≪ ν1.
2.3. L∞-Adaptive Estimation of Density and Regression Functions.
We provide adaptive estimators of density and regression functions in L∞ using Lepski-type arguments (Lepski, 1992). Consider i.i.d. data Oi = (Wi, Xi) ∼ P for a scalar variable W such that |W| ≤ BU and E_P(W|X) = f(X) almost surely, where X ∈ [0,1]^d has density g such that 0 < BL ≤ g(x) ≤ BU < ∞ for all x ∈ [0,1]^d. Although to be precise we should attach the subscript P to f and g, we omit it since the context of their use is clear. We assume Hölder-type smoothness (defined in Section 5) on f and g, and let P(β, γ) = {P : (f, g) ∈ H(β, M) × H(γ, M), |f(x)| ≤ BU, BL ≤ g(x) ≤ BU ∀ x ∈ [0,1]^d} denote classes of data generating mechanisms indexed by the smoothness indices. Then we have the following theorem, which considers adaptive estimation of f and g in the L∞ norm over (β, γ) ∈ (βmin, βmax) × (γmin, γmax), for given positive real numbers βmin, βmax, γmin, γmax.
Theorem 2.3. If γmin > βmax, then there exist f̂ and ĝ depending on M, BL, BU, βmin, βmax, γmin, γmax, and the choice of wavelet bases ψ^0_{0,0}, ψ^1_{0,0} (defined in Section 5), such that the following hold for every (β, γ) ∈ (βmin, βmax) × (γmin, γmax), with a large enough C > 0 depending possibly on M, BL, BU, and γmax:

sup_{P∈P(β,γ)} E_P ‖f̂ − f‖∞ ≤ C (n/log n)^{−β/(2β+d)},

sup_{P∈P(β,γ)} E_P ‖ĝ − g‖∞ ≤ C (n/log n)^{−γ/(2γ+d)},

sup_{P∈P(β,γ)} P_P( f̂ ∉ H(β, C) ) ≤ 1/n²,

sup_{P∈P(β,γ)} P_P( ĝ ∉ H(γ, C) ) ≤ 1/n²,

|f̂(x)| ≤ 2BU and BL/2 ≤ ĝ(x) ≤ 2BU for all x ∈ [0,1]^d.
ADAPTIVE ESTIMATION OF NONPARAMETRIC FUNCTIONALS

Remark 2.4. A close look at the proof of Theorem 2.3 shows that the proof continues to hold for βmin = 0. Moreover, although we did not keep track of our constants, the purpose of keeping them in the form above is to show that the multiplicative constants are not arbitrarily big when β is large.

Theorems of the flavor of Theorem 2.3 are not uncommon in the literature (see (Giné and Nickl, 2015, Chapter 8) for details). In particular, results stating that ĝ ∈ H(γ, C) with high probability uniformly over P(β, γ) for a suitably large constant C are often very easy to demonstrate. However, our proof yields a suitably bounded estimator ĝ which adapts over smoothness and satisfies ĝ ∈ H(γ, C) with probability larger than 1 − 1/n^κ uniformly over P(β, γ), for any κ > 0 and a correspondingly large enough C. This in turn turns out to be crucial for the purpose of controlling suitable bias terms in our functional estimation problems. Additionally, the results
concerning f̂ are relatively less common in an unknown design density setting. Indeed, adaptive estimation of the regression function with random design over Besov-type smoothness classes has been obtained by model selection type techniques in Baraud (2002) for the case of Gaussian errors. Our results in contrast hold for any regression model with bounded outcomes and compactly supported covariates having a suitable marginal design density. Before proceeding, we note that the results of this paper can be extended to include the whole class of doubly robust functionals considered in Robins et al. (2008). However, we only provide specific examples here to demonstrate the clear necessity of paying a sharp poly-logarithmic penalty for adaptation in low regularity regimes.
3. Examples. In this section we discuss applications of Theorem 2.1 and Theorem 2.2 in producing rate optimal adaptive estimators of certain nonparametric functionals commonly arising in the statistical literature. The proofs of the results in this section can be found in Mukherjee, Tchetgen Tchetgen and Robins (2017).
3.1. Average Treatment Effect. In this subsection, we consider estimating the “treatment effect” of a treatment on an outcome in the presence of multi-dimensional confounding variables (Crump et al., 2009; Robins, Mark and Newey, 1992). To be more specific, we consider a binary treatment A, a response Y, and a d-dimensional covariate vector X, and let τ be the variance weighted average treatment effect defined as

τ := E[Var(A|X) c(X)] / E[Var(A|X)] = E[Cov(Y, A|X)] / E[Var(A|X)].

Above,

c(x) = E(Y|A = 1, X = x) − E(Y|A = 0, X = x),   (3.1)
and under the assumption of no unmeasured confounding, c(x) is often referred to as the average treatment effect among subjects with covariate value X = x. The reason for referring to τ as the average treatment effect can be further understood by considering the semi-parametrically constrained model

c(x) = φ∗ for all x,   (3.2)

or specifically the model

E(Y|A, X) = φ∗A + b(X).

It is clear that under (3.2), τ equals φ∗. Moreover, inference on τ is closely related to the estimation of E(Cov(Y, A|X)) (Robins et al., 2008). Specifically, point and interval estimators for τ can be constructed from point and interval estimators of E(Cov(Y, A|X)). To be
more specific, for any fixed τ∗ ∈ R, one can define Y∗(τ∗) = Y − τ∗A and consider

φ(τ∗) = E((Y∗(τ∗) − E(Y∗(τ∗)|X))(A − E(A|X))) = E(Cov(Y∗(τ∗), A|X)).

It is easy to check that τ is the unique solution of φ(τ∗) = 0. Consequently, if we can construct an estimator φ̂(τ∗) of φ(τ∗), then τ̂ satisfying φ̂(τ̂) = 0 is an estimator of τ with desirable properties. Moreover, a (1 − α) confidence set for τ can be constructed as the set of values of τ∗ for which a (1 − α) interval estimator of φ(τ∗) contains the value 0. Finally, since E(Cov(Y, A|X)) = E(AY) − E(E(Y|X)E(A|X)), and E(AY) is easily estimable at a parametric rate, the crucial part of the problem hinges on the estimation of E(E(Y|X)E(A|X)).
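The estimating-equation reduction above can be sketched numerically. In the toy illustration below (our own, with a discrete covariate standing in for the paper's nonparametric setting), φ(t) = E[Cov(Y, A|X)] − t·E[Var(A|X)] is linear in t, so the root τ̂ of the empirical equation φ̂(t) = 0 has a closed form:

```python
import random

# Plug-in sketch of the reduction above for a discrete covariate X
# (a simplifying assumption of ours; the paper's estimators instead use
# wavelet-projection U-statistics). Since phi(t) is linear in t, the root of
# phi_hat(t) = 0 is the ratio of the two plug-in expectations below.
def tau_hat(y, a, x):
    strata = {}
    for yi, ai, xi in zip(y, a, x):
        strata.setdefault(xi, []).append((yi, ai))
    n = len(y)
    num = 0.0  # plug-in estimate of E[Cov(Y, A | X)]
    den = 0.0  # plug-in estimate of E[Var(A | X)]
    for obs in strata.values():
        m = len(obs)
        ybar = sum(yi for yi, _ in obs) / m
        abar = sum(ai for _, ai in obs) / m
        num += (m / n) * sum((yi - ybar) * (ai - abar) for yi, ai in obs) / m
        den += (m / n) * sum((ai - abar) ** 2 for _, ai in obs) / m
    return num / den

# Data from the semiparametric model E(Y | A, X) = tau * A + b(X) with tau = 1.5:
rng = random.Random(0)
x = [rng.randrange(3) for _ in range(20000)]
a = [1 if rng.random() < 0.2 + 0.2 * xi else 0 for xi in x]
y = [1.5 * ai + xi + rng.gauss(0.0, 1.0) for ai, xi in zip(a, x)]
print(tau_hat(y, a, x))  # close to the true tau = 1.5
```

The same root-finding viewpoint underlies the confidence-set construction in the text: collect all t for which an interval estimator of φ(t) covers 0.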
Henceforth, for the rest of the section, we assume that we observe n i.i.d. copies of O = (Y, A, X) ∼ P and we want to estimate φ(P) = E_P(Cov_P(Y, A|X)). We assume that the marginal distribution of X has a density with respect to Lebesgue measure on R^d that has a compact support, which we assume to be [0,1]^d, and let g be the marginal density of X (i.e. E_P(h(X)) = ∫_{[0,1]^d} h(x)g(x)dx for all P-integrable functions h), a(X) := E_P(A|X), b(X) := E_P(Y|X), and c(X) = E_P(Y|A = 1, X) − E_P(Y|A = 0, X). Although to be precise we should attach subscripts P to a, b, g, c, we omit them since the context of their use is clear. Let Θ := {θ = (α, β, γ) : (α + β)/2 < d/4, γmax ≥ γ > γmin ≥ 2(1 + ǫ) max{α, β}} for some fixed ǫ > 0, and let Pθ denote all data generating mechanisms P
satisfying the following conditions for known positive constants M, BL , BU .
(a) max{|Y |, |A|} ≤ BU a.s. P .
(b) a ∈ H(α, M ), b ∈ H(β, M ), and g ∈ H(γ, M ).
(c) 0 < BL < g(x) < BU for all x ∈ [0, 1]d .
Note that we do not put any assumptions on the function c. Indeed, for Y and A binary random variables, the functions a, b, g, c are variation independent. Following our discussion above, we will discuss adaptive estimation of φ(P) = E_P(Cov_P(Y, A|X)) = E_P((Y − b(X))(A − a(X))) over P ∈ Pθ. In particular, we summarize our results on upper and lower bounds for the adaptive minimax estimation of φ(P) in the following theorem.
Theorem 3.1. Assume (a)-(c) and (α, β, γ) ∈ Θ. Then the following hold for positive C, C′ depending on M, BL, BU, γmax.

(i) (Upper Bound) There exists an estimator φ̂, depending only on M, BL, BU, γmax, such that

sup_{P∈P(α,β,γ)} E_P( φ̂ − φ(P) )² ≤ C (log n/√n)^{(4α+4β)/(2α+2β+2d)}.

(ii) (Lower Bound) Suppose (A, Y) ∈ {0,1}². If one has

sup_{P∈P(α,β,γ)} E_P( φ̂ − φ(P) )² ≤ C (log n/√n)^{(4α+4β)/(2α+2β+2d)}

for an estimator φ̂ of φ(P), then there exists a class of distributions P(α′, β′, γ′) such that

sup_{P′∈P(α′,β′,γ′)} E_{P′}( φ̂ − φ(P′) )² ≥ C′ (log n/√n)^{(4α′+4β′)/(2α′+2β′+2d)}.
Theorem 3.1 describes the adaptive minimax estimation of the treatment effect functional in the low regularity regime ((α + β)/2 < d/4), i.e. when √n-rate estimation is not possible. By assuming more smoothness on the marginal density g (i.e. a larger value of γ) it is possible to include the case (α + β)/2 ≥ d/4 as well. In particular, if the set of (α, β) includes the case (α + β)/2 > d/4 as well as (α + β)/2 < d/4, one should be able to obtain an adaptive and semi-parametrically efficient √n-consistent estimator of the treatment effect for (α + β)/2 > d/4. Similar to quadratic functionals (Giné and Nickl, 2008), the case (α + β)/2 = d/4 will however incur an additional logarithmic penalty over the usual √n-rate of
convergence. All these extensions can definitely be incorporated into the design of the Lepski method in Section 2. However, we do not pursue this in this paper and refer to Section 4 for more discussion of the relevant issues. Finally, it is worth noting that if the set of (α, β) only includes the case (α + β)/2 > d/4, one can indeed obtain adaptive and even semiparametrically efficient estimation of the functionals studied here without effectively any assumption on g. The interested reader can find the details in Mukherjee, Newey and Robins (2017); Robins et al. (2016).
3.2. Mean Response in Missing Data Models. Suppose we have n i.i.d. observations of O = (YA, A, X) ∼ P, for a response variable Y ∈ R which is conditionally independent of the missingness indicator variable A ∈ {0,1} given covariate information X. In the literature, this assumption is typically known as the missing at random (MAR) model, and under this assumption our quantity of interest φ(P) = E_P(Y) is identifiable as E(E(Y|A = 1, X)) from the observed data. This model is a canonical example of a study with missing response variable, and to make this assumption reasonable the covariates must contain the information on possible dependence between response and missingness. We refer the interested reader to Tsiatis (2007) for the history of the statistical analysis of MAR and related models.
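The MAR identification step can be verified exactly on a toy discrete law (all values below are illustrative assumptions of ours, not taken from the paper): when Y is independent of A given X, the observed-data functional E[E(Y|A = 1, X)] recovers the full-data mean E(Y).

```python
from fractions import Fraction as F

# Exact (rational-arithmetic) check of E(Y) = E[E(Y | A = 1, X)] under MAR.
p_x = {0: F(2, 5), 1: F(3, 5)}   # marginal of X
a_x = {0: F(1, 4), 1: F(3, 4)}   # P(A = 1 | X = x), bounded away from 0
b_x = {0: F(1, 3), 1: F(2, 3)}   # P(Y = 1 | X = x)

# Joint law of the observed data (Y*A, A, X) when Y is independent of A given X.
joint = {}
for x in p_x:
    for a in (0, 1):
        for y in (0, 1):
            pr = p_x[x] * (a_x[x] if a else 1 - a_x[x]) * (b_x[x] if y else 1 - b_x[x])
            key = (y * a, a, x)
            joint[key] = joint.get(key, F(0)) + pr

def mean_y_given_a1(x):
    # E(Y | A = 1, X = x), computed from the observed-data law only
    num = sum(p for (ya, a, xx), p in joint.items() if a == 1 and xx == x and ya == 1)
    den = sum(p for (ya, a, xx), p in joint.items() if a == 1 and xx == x)
    return num / den

e_y = sum(p_x[x] * b_x[x] for x in p_x)                     # full-data target E(Y)
identified = sum(p_x[x] * mean_y_given_a1(x) for x in p_x)  # observed-data functional
print(e_y == identified, e_y)  # True 8/15
```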
To lay down the mathematical formalism for minimax adaptive estimation of φ(P) in this model, let f be the marginal density of X (i.e. E_P(h(X)) = ∫_{[0,1]^d} h(x)f(x)dx for all P-integrable functions h), a^{−1}(X) := E_P(A|X), b(X) := E_P(Y|A = 1, X) = E_P(Y|X), and g(X) = f(X)/a(X) (with the convention of the value +∞ when dividing by 0). Although to be precise we should attach subscripts P to a, b, g, we omit them since the context of their use is clear. Let Θ := {θ = (α, β, γ) : (α + β)/2 < d/4, γmax ≥ γ > γmin ≥ 2(1 + ǫ) max{α, β}} for some fixed ǫ > 0, and let Pθ denote all data generating mechanisms P satisfying the following conditions for known positive constants M, BL, BU.
(a) |Y| ≤ BU.
(b) a ∈ H(α, M), b ∈ H(β, M), and g ∈ H(γ, M).
(c) BL < g(x), a(x) < BU for all x ∈ [0,1]^d.
We then have the following result.
Theorem 3.2. Assume (a)-(c) and (α, β, γ) ∈ Θ. Then the following hold for positive C, C′ depending on M, BL, BU, γmax.

(i) (Upper Bound) There exists an estimator φ̂, depending only on M, BL, BU, γmax, such that

sup_{P∈P(α,β,γ)} E_P( φ̂ − φ(P) )² ≤ C (log n/√n)^{(4α+4β)/(2α+2β+2d)}.

(ii) (Lower Bound) Suppose (A, Y) ∈ {0,1}². If one has

sup_{P∈P(α,β,γ)} E_P( φ̂ − φ(P) )² ≤ C (log n/√n)^{(4α+4β)/(2α+2β+2d)}

for an estimator φ̂ of φ(P), then there exists a class of distributions P(α′, β′, γ′) such that

sup_{P′∈P(α′,β′,γ′)} E_{P′}( φ̂ − φ(P′) )² ≥ C′ (log n/√n)^{(4α′+4β′)/(2α′+2β′+2d)}.
Once again, Theorem 3.2 describes the adaptive minimax estimation of the average outcome in missing at random models in the low regularity regime ((α + β)/2 < d/4), i.e. when √n-rate estimation is not possible. Extensions to include (α + β)/2 ≥ d/4 are possible by a similar Lepski type method with an additional smoothness assumption on g. However, we do not pursue this in this paper and refer to Section 4 for more discussion of the relevant issues.
3.3. Quadratic and Variance Functionals in Regression Models. Consider observing data which are n i.i.d. copies of O = (Y, X) ∼ P, where the functional of interest is the expected value of the square of the regression of Y on X. Specifically, suppose we want to estimate φ(P) = E_P[{E_P(Y|X)}²]. Assume that the distribution of X has a density with respect to Lebesgue measure on R^d that has a compact support, which we assume to be [0,1]^d for the sake of simplicity. Let g be the marginal density of X, and b(X) := E_P(Y|X). The classes of distributions are indexed by Θ := {(β, γ) : β < d/4, γmax ≥ γ > γmin ≥ 2(1 + ǫ)β} for some fixed ǫ > 0, where by P(β, γ) we denote all data generating mechanisms P satisfying the following conditions for known positive constants M, BL, BU.
(a) |Y| ≤ BU.
(b) b ∈ H(β, M), and g ∈ H(γ, M).
(c) 0 < BL < g(x) < BU for all x ∈ [0,1]^d.
Theorem 3.3. Assume (a)-(c) and (β, γ) ∈ Θ. Then the following hold for positive C, C′ depending on M, BL, BU, γmax.

(i) (Upper Bound) There exists an estimator φ̂, depending only on M, BL, BU, γmax, such that

sup_{P∈P(β,γ)} E_P( φ̂ − φ(P) )² ≤ C (log n/√n)^{8β/(4β+d)}.

(ii) (Lower Bound) Suppose Y ∈ {0,1}. If one has

sup_{P∈P(β,γ)} E_P( φ̂ − φ(P) )² ≤ C (log n/√n)^{8β/(4β+d)}

for an estimator φ̂ of φ(P), then there exists a class of distributions P(β′, γ′) such that

sup_{P′∈P(β′,γ′)} E_{P′}( φ̂ − φ(P′) )² ≥ C′ (log n/√n)^{8β′/(4β′+d)}.
Remark 3.4. Although Theorem 3.3 and the discussion before it are made in the context of estimating a particular quadratic functional in a regression framework, it is worth noting that the result extends to estimating classical quadratic functionals in density models (Efromovich and Low, 1996; Giné and Nickl, 2008).
One can also consider, in the same set up, the estimation of functionals related to the conditional variance of Y under such a regression model, which has been studied in detail by Brown and Levine (2007); Cai and Wang (2008); Fan and Yao (1998); Hall and Carroll (1989); Ruppert et al. (1997). Whereas the minimax optimal and adaptive results in Brown and Levine (2007); Cai and Wang (2008) are in an equi-spaced fixed design setting, one can use an analogue of Theorem 3.3 to demonstrate a rate adaptive estimator and a corresponding matching lower bound, with a mean-squared error of the order of (√n/log n)^{−8β/(4β+d)} for estimating E_P(Var_P(Y|X)) adaptively over Hölder balls of regularity β < d/4. As noted by Robins et al. (2008), this rate is higher than the rate of estimating the conditional variance in mean-squared error for the equispaced design (Cai and Wang, 2008). In a similar vein, one can also obtain similar results for the estimation of the conditional variance under the assumption of homoscedasticity, i.e. σ² := Var(Y|X = x) for all x ∈ [0,1]^d. In particular, one can once again obtain an estimator with mean-squared error of the order of (√n/log n)^{−8β/(4β+d)} for estimating σ² over any Hölder ball of regularity β < d/4. A candidate sequence of φ̂_{n,j}'s for this purpose was constructed in Robins et al. (2008), and in essence equals

φ̂_{n,j} = (1/(n(n−1))) Σ_{i1≠i2} ((Y_{i1} − Y_{i2})²/2) K_{Vj}(X_{i1}, X_{i2}),

whose properties can be analyzed by a technique similar to the proof of Theorem 3.3. It is however worth noting that, even in this very easy to state classical problem, matching lower bounds, even in a non-adaptive minimax sense, are not available. Therefore, our results related to homoscedastic variance estimation can be interpreted from a point of view of selecting a “best” candidate from a collection of estimators.
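A minimal d = 1 sketch of the pairwise candidate estimator quoted above can be given with the Haar projection kernel and a uniform design density (both simplifying assumptions of ours, so that no design-density correction is needed); the factor (Y_{i1} − Y_{i2})²/2 is an unbiased proxy for σ² when the two covariates fall in the same small cell:

```python
import random

# Toy version of phi_hat_{n,j} with the Haar projection kernel
# K_{V_j}(x1, x2) = 2^j * 1{x1, x2 in the same dyadic cell} and X ~ Uniform[0,1].
def phi_hat(y, x, j):
    n = len(y)
    cells = [min(int(xi * 2**j), 2**j - 1) for xi in x]
    total = 0.0
    for i1 in range(n):
        for i2 in range(i1 + 1, n):
            if cells[i1] == cells[i2]:
                # each unordered pair appears twice in the sum over i1 != i2
                total += 2.0 * 0.5 * (y[i1] - y[i2]) ** 2 * 2**j
    return total / (n * (n - 1))

rng = random.Random(1)
n, j, sigma = 1200, 4, 0.5
x = [rng.random() for _ in range(n)]
y = [xi**2 + rng.gauss(0.0, sigma) for xi in x]  # f(x) = x^2, sigma^2 = 0.25
print(phi_hat(y, x, j))  # approximately sigma^2 = 0.25, up to small bias and noise
```

The within-cell squared differences pick up σ² plus a smoothing bias of order 2^{−2jβ/d} from the regression function, which is exactly the bias-variance trade-off that the choice of j (and the Lepski selection of Section 2) balances.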
4. Discussion. In this paper, we have extended the results for adaptive estimation of nonlinear integral functionals from density estimation and Gaussian white noise models towards adaptive estimation of non-linear functionals in more complex nonparametric models. Our results are provided, and are most interesting, in the low regularity regime, i.e. when it is not possible to achieve a √n-efficient estimator in an asymptotically minimax sense. This typically happens when the “average smoothness” of the function classes in consideration is below d/4. The reason for focusing on the low regularity region is twofold. Firstly, this regime corresponds to situations where adaptation to smoothness is not possible without paying a logarithmic penalty, making it an interesting case to study. Secondly, as noted in Robins et al. (2008), the appropriate nonadaptive minimax sequences of estimators of the functionals considered in this paper, which attain √n-efficiency, rely either on high regularity of the marginal density of the covariates X in our examples or on correcting higher order bias by using U-statistics of degree 3 and higher. Indeed,
under stringent assumptions on the smoothness of the density of X, our results carry through to yield adaptive √n-efficient estimators. However, under more relaxed conditions on the density of X, although we can in principle employ a Lepski-type framework similar to the one implemented in Theorem 2.1, the mathematical analysis of such a method requires sharp control of the tails of
U-statistics of degree 3 and higher. The structure of the higher order U-statistics considered in the
estimators constructed in Robins et al. (2008) makes such an analysis delicate, and we plan to focus
on these issues in a future paper. However, it is worth noting that with the additional knowledge
of smoothness exceeding d/4 one can indeed obtain adaptive and even semiparametrically efficient
estimation of the functionals studied here without effectively any assumption on g. The interested
reader can find the details in Mukherjee, Newey and Robins (2017); Robins et al. (2016). Finally,
the results of this paper can be extended to include the whole class of doubly robust functionals
considered in Robins et al. (2008). However we only provide specific examples here to demonstrate
the clear necessity to pay a poly-logarithmic penalty for adaptation in low regularity regimes.
5. Wavelets, Projections, and Hölder Spaces.
We work with certain Besov-Hölder type spaces which we define in terms of moduli of wavelet coefficients of continuous functions. For d > 1, consider expansions of functions h ∈ L²[0,1]^d on an orthonormal basis of compactly supported bounded wavelets of the form

h(x) = Σ_{k∈Z^d} ⟨h, ψ^0_{0,k}⟩ ψ^0_{0,k}(x) + Σ_{l=0}^{∞} Σ_{k∈Z^d} Σ_{v∈{0,1}^d∖{0}^d} ⟨h, ψ^v_{l,k}⟩ ψ^v_{l,k}(x),
where the base functions ψ^v_{l,k} are orthogonal for different indices (l, k, v) and are scaled and translated versions of the 2^d S-regular base functions ψ^v_{0,0} with S > β, i.e.,

ψ^v_{l,k}(x) = 2^{ld/2} ψ^v_{0,0}(2^l x − k) = Π_{j=1}^{d} 2^{l/2} ψ^{v_j}_{0,0}(2^l x_j − k_j)

for k = (k1, ..., kd) ∈ Z^d and v = (v1, ..., vd) ∈ {0,1}^d, with ψ^0_{0,0} = φ and ψ^1_{0,0} = ψ being the scaling function and mother wavelet of regularity S respectively, as defined in the one dimensional case. As our choice of wavelets, we will throughout use compactly supported scaling and wavelet functions of Cohen-Daubechies-Vial type with S first null moments (Cohen, Daubechies and Vial, 1993). In view of the compact support of the wavelets, for each resolution level l and index v, only O(2^{ld}) base elements ψ^v_{l,k} are non-zero on [0,1]^d; let us denote the corresponding set of indices k by Z_l, obtaining the representation
h(x) = Σ_{k∈Z_{J0}} ⟨h, ψ^0_{J0,k}⟩ ψ^0_{J0,k}(x) + Σ_{l=J0}^{∞} Σ_{k∈Z_l} Σ_{v∈{0,1}^d∖{0}^d} ⟨h, ψ^v_{l,k}⟩ ψ^v_{l,k}(x),   (5.1)

where J0 = J0(S) ≥ 1 is such that 2^{J0} ≥ S (Cohen, Daubechies and Vial, 1993; Giné and Nickl, 2015). Thereafter, for any h ∈ L²[0,1]^d, let ‖⟨h, ψ_{l′,·}⟩‖₂ be the vector L² norm of the vector (⟨h, ψ^v_{l′,k′}⟩ : k′ ∈ Z_{l′}, v ∈ {0,1}^d).
We will be working with projections onto subspaces defined by truncating expansions as above at certain resolution levels. For example, letting

V_j := span{ ψ^v_{l,k} : J0 ≤ l ≤ j, k ∈ Z_l, v ∈ {0,1}^d },  j ≥ J0,   (5.2)

one immediately has the orthogonal projection kernel onto V_j,

K_{V_j}(x1, x2) = Σ_{k∈Z_{J0}} ψ^0_{J0,k}(x1) ψ^0_{J0,k}(x2) + Σ_{l=J0}^{j} Σ_{k∈Z_l} Σ_{v∈{0,1}^d∖{0}} ψ^v_{l,k}(x1) ψ^v_{l,k}(x2).   (5.3)

Owing to the MRA property of the wavelet basis, it is easy to see that K_{V_j} has the equivalent representation

K_{V_j}(x1, x2) = Σ_{k∈Z_j} Σ_{v∈{0,1}^d} ψ^v_{j,k}(x1) ψ^v_{j,k}(x2).   (5.4)
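For the Haar basis in d = 1 (an S = 1 instance, used here purely for illustration), the projection kernel collapses to K_{V_j}(x1, x2) = 2^j · 1{x1, x2 in the same dyadic cell}, and Π(h|V_j)(x) = ∫ K_{V_j}(x, t) h(t) dt is the cell-wise average of h:

```python
# Haar sketch of the projection kernel and the associated projection operator.
def k_haar(j, x1, x2):
    return float(2**j) if int(x1 * 2**j) == int(x2 * 2**j) else 0.0

def project(h, j, x, grid=2**12):
    # midpoint-rule approximation of the projection integral over [0, 1]
    return sum(k_haar(j, x, (t + 0.5) / grid) * h((t + 0.5) / grid)
               for t in range(grid)) / grid

# On the cell [0, 1/4) at level j = 2, the projection of h(t) = t is the
# cell average 1/8 = 0.125.
print(project(lambda t: t, 2, 0.1))  # 0.125
```

The same kernel viewpoint is what makes the U-statistic estimators in Section 3 computable: evaluating K_{V_j} at pairs of data points implements a projection onto V_j without ever forming the basis explicitly.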
Thereafter, using S-regular scaling and wavelet functions of Cohen-Daubechies-Vial type with S > β, let

H(β, M) := { h ∈ C[0,1]^d : 2^{J0(β+d/2)} ‖⟨h, ψ^0_{J0,·}⟩‖∞ + sup_{l≥0, k∈Z^d, v∈{0,1}^d∖{0}} 2^{l(β+d/2)} |⟨h, ψ^v_{l,k}⟩| ≤ M },   (5.5)

with C[0,1]^d being the set of all continuous bounded functions on [0,1]^d. It is a standard result in the theory of wavelets that H(β, M) is related to the classical Hölder-Zygmund spaces with equivalent norms (see (Giné and Nickl, 2015, Chapter 4) for details). For 0 < β < 1, for example, H(β, M) consists of all functions f in C[0,1]^d such that

‖f‖∞ + sup_{x1,x2∈[0,1]^d} |f(x1) − f(x2)| / ‖x1 − x2‖^β ≤ C(M).

For non-integer β > 1, H(β, M) consists of all functions f in C[0,1]^d such that f^{(⌊β⌋)} ∈ C[0,1]^d for any partial derivative f^{(⌊β⌋)} of order ⌊β⌋ of f and

‖f‖∞ + sup_{x1,x2∈[0,1]^d} |f^{(⌊β⌋)}(x1) − f^{(⌊β⌋)}(x2)| / ‖x1 − x2‖^{β−⌊β⌋} ≤ C(M).

Therefore, the functions in H(β, M) are automatically uniformly bounded by a number depending on the radius M.
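The coefficient-decay definition of H(β, M) can be checked numerically in a simple case. The sketch below (our own illustration; the Haar basis has regularity S = 1, so only β < 1 is meaningful here) scales the d = 1 Haar coefficients of the (1/2)-Hölder function h(t) = √t by 2^{l(β+1/2)} with β = 1/2 and confirms that the supremum stays bounded:

```python
import math

# Haar coefficient <h, psi_{l,k}> by midpoint quadrature on the cell
# [k/2^l, (k+1)/2^l), with psi = +1 on the left half and -1 on the right half.
def haar_coef(h, l, k, grid=2**12):
    lo, width = k / 2**l, 1.0 / 2**l
    mid = lo + width / 2
    s = 0.0
    for t in range(grid):
        u = lo + (t + 0.5) * width / grid
        s += (1.0 if u < mid else -1.0) * h(u)
    return 2 ** (l / 2) * s * width / grid

beta = 0.5
sup = max(2 ** (l * (beta + 0.5)) * abs(haar_coef(math.sqrt, l, k))
          for l in range(8) for k in range(2**l))
print(sup < 1.0)  # True: the scaled coefficients stay bounded
```

The largest scaled coefficient sits at k = 0, where √t is roughest, and is essentially constant in l, which is exactly the 2^{−l(β+d/2)} decay that (5.5) encodes.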
6. Proof of Main Theorems.
Proof of Theorem 2.1.
Proof. In this proof we repeatedly use the fact that for any fixed m ∈ N and real numbers a1, ..., am, one has by Hölder's inequality |a1 + ··· + am|^p ≤ C(m, p)(|a1|^p + ··· + |am|^p) for p > 1. Suppose β ∈ (β_{l+1}, β_l] for some l = 0, ..., N − 2. Indeed, this l depends on β and n, and therefore, to be very precise, we should write it as l(β, n). However, for the sake of brevity we omit such notation. We immediately have the following lemma.
Lemma 6.1. For l′ ≥ l + 2 and Copt large enough depending on C1, C2, B, ψ^0_{0,0}, ψ^1_{0,0},

sup_{P∈Pθ : f1(θ)=β, f2(θ)>4β/(4β+d)} P_P( l̂ = l′ ) ≤ C/n,

where C > 0 is a universal constant.
Proof. Throughout this proof, all suprema over P are taken over {P ∈ Pθ : f1(θ) = β, f2(θ) > 4β/(4β+d)}. First,

sup P_P( l̂ ≥ l + 2 )
≤ sup P_P( ∃ l′ > l + 1 : ( φ̂_{n,j(k*(j_{l+1}))} − φ̂_{n,j(k*(j_{l′}))} )² > C²opt log n R(k*(j_{l′})) )
≤ Σ_{l′=l+1}^{N} sup P_P( |φ̂_{n,j(k*(j_{l+1}))} − φ̂_{n,j(k*(j_{l′}))}| > Copt √(log n R(k*(j_{l′}))) ).

For any fixed l + 2 ≤ l′ ≤ N, using that R(k*(j_{l+1})) ≤ R(k*(j_{l′})) and

sup |E_P φ̂_{n,j(k*(j_{l+1}))} − E_P φ̂_{n,j(k*(j_{l′}))}| ≤ C( k*(j_{l+1})^{−2β_{l+1}/d} + n^{−f2(θ)} ) ≤ 2C √(log n R(k*(j_{l+1}))),

we have that

sup P_P( |φ̂_{n,j(k*(j_{l+1}))} − φ̂_{n,j(k*(j_{l′}))}| > Copt √(log n R(k*(j_{l′}))) )
≤ sup P_P( |φ̂_{n,j(k*(j_{l+1}))} − E_P φ̂_{n,j(k*(j_{l+1}))}| > (Copt/2) √(log n R(k*(j_{l+1}))) )
+ sup P_P( |φ̂_{n,j(k*(j_{l′}))} − E_P φ̂_{n,j(k*(j_{l′}))}| > (Copt/2) √(log n R(k*(j_{l′}))) − |E_P φ̂_{n,j(k*(j_{l+1}))} − E_P φ̂_{n,j(k*(j_{l′}))}| ).

Now,

sup |E_P φ̂_{n,j(k*(j_{l+1}))} − E_P φ̂_{n,j(k*(j_{l′}))}| ≤ sup 2C1( 2^{−2j(k*(j_{l+1}))f1(θ)} + n^{−f2(θ)} )
≤ 2C1 n^{−4β/(4β+d)} + 2C1 k*(j_{l+1})^{−2β/d}
≤ 2C1 n^{−4β/(4β+d)} + 2C1 2^{d²/4} (n²/log n)^{−2β/(4β_{l+1}+d)}.

The last quantity in the above display is smaller than (Copt/4) √(log n R(k*(j_{l′}))) for Copt chosen large enough (depending on C1 and d). This implies that for Copt properly chosen based on the given parameters,

sup P_P( |φ̂_{n,j(k*(j_{l+1}))} − φ̂_{n,j(k*(j_{l′}))}| > Copt √(log n R(k*(j_{l′}))) )
≤ sup P_P( |φ̂_{n,j(k*(j_{l+1}))} − E_P φ̂_{n,j(k*(j_{l+1}))}| > (Copt/2) √(log n R(k*(j_{l+1}))) )
+ sup P_P( |φ̂_{n,j(k*(j_{l′}))} − E_P φ̂_{n,j(k*(j_{l′}))}| > (Copt/4) √(log n R(k*(j_{l′}))) )
≤ C/n,

for a universal constant C. The last inequality can now be obtained by a standard Hoeffding decomposition and subsequent application of Lemmas B.2 and B.5 to the second and first order degenerate parts respectively. The control thereafter is standard by our choice of n ≤ 2^{jd} ≤ n². For similar calculations we refer to Mukherjee and Sen (2016).
Returning to the proof of Theorem 2.1, we have (with all suprema again over {P ∈ Pθ : f1(θ) = β, f2(θ) > 4β/(4β+d)})

sup E_P( φ̂_{n,j(k*(j_{l̂}))} − φ(P) )²
≤ sup E_P[ ( φ̂_{n,j(k*(j_{l̂}))} − φ(P) )² I( l̂ ≤ l + 1 ) ] + sup E_P[ ( φ̂_{n,j(k*(j_{l̂}))} − φ(P) )² I( l̂ ≥ l + 2 ) ]
= T1 + T2.

Control of T1. Using the definition of l̂ we have the following string of inequalities:

T1 = sup E_P[ I( l̂ ≤ l + 1 ) ( φ̂_{n,j(k*(j_{l̂}))} − φ(P) )² ]
≤ 2 sup E_P[ I( l̂ ≤ l + 1 ) ( ( φ̂_{n,j(k*(j_{l̂}))} − φ̂_{n,j(k*(j_{l+1}))} )² + ( φ̂_{n,j(k*(j_{l+1}))} − φ(P) )² ) ]
≤ 2 C²opt log n R(k*(j_{l+1})) + 2 sup E_P( φ̂_{n,j(k*(j_{l+1}))} − φ(P) )²
≤ (4C + C²opt) [ (2^{j(k*(j_{l+1}))d}/n²) log n + 2^{−4j(k*(j_{l+1}))β/d} + 2C n^{−2f2(θ)} ]
≤ C′ (n²/log n)^{−(4β_{l+1}/d)/(1+4β_{l+1}/d)}
≤ e^{c} C′ (n²/log n)^{−(4β/d)/(1+4β/d)}.

Above we have used the property of r(β) and the definition of j(k*(l)) to conclude that 2^{j(k*(l))d} ≤ k*(l), and also the definition and properties of β_l, together with the fact that β_{l+1} ≤ β < β_l implies β/d − β_{l+1}/d ≤ c/log n for some fixed constant c.

Control of T2.

T2 ≤ Σ_{l′=l+2}^{N} sup E_P[ I( l̂ = l′ ) ( φ̂_{n,j(k*(j_{l′}))} − φ(P) )² ]
≤ Σ_{l′=l+2}^{N} sup [ P_P( l̂ = l′ ) ]^{1/p} [ E_P( φ̂_{n,j(k*(j_{l′}))} − φ(P) )^{2q} ]^{1/q}
≤ Σ_{l′=l+2}^{N} (C/n)^{1/p} [ (2^{j(k*(j_{l′}))d}/n²) log n + 2^{−4j(k*(j_{l′}))β/d} + n^{−8β/(4β+d)} ],

where the second to last inequality above follows from Lemma 6.1, Lemma B.2, and Lemma B.5, and the last inequality follows from the choice N ≲ log n. Therefore, for p sufficiently close to 1, we have the desired control over T2.
Proof of Theorem 2.2.
Proof. We closely follow the proof strategy of Robins et al. (2009). In particular, consider P^n := ∫ P_λ^n dπ(λ) and Q^n := ∫ Q_λ^n dπ(λ) to be the distributions of O1, ..., On, and define probability measures on χj by P_{j,λj} = 1_{χj} dPλ / pj and Q_{j,λj} = 1_{χj} dQλ / pj. Therefore, if pλ and qλ are densities of Pλ and Qλ respectively with respect to some common dominating measure, then one has

χ²( ∫ P_λ^n dπ(λ), ∫ Q_λ^n dπ(λ) )
= E_{P^n}[ ( Π_{j=1}^{k} ∫ Π_i I(Oi ∈ χj) qλ(Oi) dπj(λj) / Π_{j=1}^{k} ∫ Π_i I(Oi ∈ χj) pλ(Oi) dπj(λj) )² ] − 1
= E_{P^n}[ Π_{j=1}^{k} ( ∫ Π_{i: Oi∈χj} q_{j,λj}(Oi) dπj(λj) / ∫ Π_{i: Oi∈χj} p_{j,λj}(Oi) dπj(λj) )² ] − 1.   (6.1)
Define variables I1, ..., In such that Ii = j if Oi ∈ χj for i = 1, ..., n, j = 1, ..., k, and let Nj = |{i : Ii = j}|. Now note that the measure P^n arises as the distribution of O1, ..., On if this vector is generated in two steps as follows. First one chooses λ ∼ π, and then, given λ, O1, ..., On are generated independently from pλ. This implies that, given λ, (N1, ..., Nk) is distributed as Multinomial(Pλ(χ1), ..., Pλ(χk)) = Multinomial(p1, ..., pk); hence (N1, ..., Nk) is independent of λ, and under P^n, (N1, ..., Nk) ∼ Multinomial(p1, ..., pk) unconditionally. Similarly, given λ, I1, ..., In are independent, and the event Ii = j has probability pj, which is again free of λ. This in turn implies that (I1, ..., In) is independent of λ under P^n. The conditional distribution of O1, ..., On given λ and (I1, ..., In) can also be described as follows. For each partitioning set χj, generate Nj variables independently from Pλ restricted and renormalized to χj, i.e. from the measure P_{j,λj}. We can do so independently across the partitioning sets and attach correct labels {1, ..., n} which are consistent with (I1, ..., In). The conditional distribution of O1, ..., On under P^n given I1, ..., In is the mixture of this distribution relative to the conditional distribution of λ given I1, ..., In, which, by virtue of the independence of I1, ..., In and λ under P^n, was seen to be the unconditional distribution π. Thus we can obtain a sample from the conditional distribution under P^n of O1, ..., On given I1, ..., In by generating, for each partitioning set χj, a set of Nj variables from the measure ∫ P_{j,λj} dπ(λj), independently across the partitioning sets, and next attaching labels consistent with I1, ..., In. The
above discussion allows us to write the right hand side of (6.1) as

E_{P^n}[ Π_{j=1}^{k} E_{P^n}( ( ∫ Π_{i: Ii=j} q_{j,λj}(Oi) dπj(λj) / ∫ Π_{i: Ii=j} p_{j,λj}(Oi) dπj(λj) )² | I1, ..., In ) ] − 1
= E[ Π_{j=1}^{k} E_{∫ P_{j,λj}^{Nj} dπj(λj)}( ( ∫ Q_{j,λj}^{Nj} dπj(λj) / ∫ P_{j,λj}^{Nj} dπj(λj) )² ) ] − 1.

Arguing similarly to Lemma 5.2 of Robins et al. (2009), we have, with c̃ = max_j sup_λ (1/pj) ∫_{χj} p²/pλ dµ and c = max{c̃, 1}, that

E[ Π_{j=1}^{k} E_{∫ P_{j,λj}^{Nj} dπj(λj)}( ( ∫ Q_{j,λj}^{Nj} dπj(λj) / ∫ P_{j,λj}^{Nj} dπj(λj) )² ) ] − 1
≤ E[ Π_{j=1}^{k} ( 1 + 2( Σ_{r=2}^{Nj} \binom{Nj}{r} b^r + 2Nj² Σ_{r=1}^{Nj−1} \binom{Nj−1}{r} a^r b + 2Nj² c̃^{Nj−1} δ ) ) ] − 1
= E[ Π_{j=1}^{k} ( 1 + 2( (1+b)^{Nj} − 1 − Nj b + Nj²((1+a)^{Nj−1} − 1) b + 2Nj² c̃^{Nj−1} δ ) ) ] − 1
≤ E[ Π_{j=1}^{k} ( 1 + 2( (1+b)^{Nj} − 1 − Nj b + Nj²((1+a)^{Nj−1} − 1) b + 2Nj² c^{Nj−1} δ ) ) ] − 1.

To bound the last quantity in the above display we use Shao (2000) to note that, since (N1, ..., Nk) is a multinomial random vector, for any increasing function f on the range of the Nj's, E[ Π_{j=1}^{k} f(Nj) ] ≤ Π_{j=1}^{k} E[f(Nj)]. In this context, noting that

1 + 2( (1+b)^{Nj} − 1 − Nj b + Nj²((1+a)^{Nj−1} − 1) b + 2Nj² c^{Nj−1} δ )

is an increasing function of Nj (on the range of Nj), we therefore have by Shao (2000) that the last display is bounded by

Π_{j=1}^{k} E[ 1 + 2( (1+b)^{Nj} − 1 − Nj b + Nj²((1+a)^{Nj−1} − 1) b + 2Nj² c^{Nj−1} δ ) ] − 1
= Π_{j=1}^{k} [ 1 + 2( (1+bpj)^n − 1 − nbpj + ( npj(1+apj)^{n−2}(1 + napj + npj − pj) − npj(1−pj) − n²pj² ) b + 2δ npj (cpj + 1 − pj)^{n−2} (cnpj + 1 − pj) ) ] − 1
≤ Π_{j=1}^{k} [ 1 + C( (npj b)² + (npj)² ab + npj δ ) ] − 1
≤ Π_{j=1}^{k} [ 1 + Cn(max_j pj)( npj b² + npj ab ) + Cnpj δ ] − 1
≤ e^{ Σ_{j=1}^{k} ( Cn(max_j pj)(npj b² + npj ab) + Cnpj δ ) } − 1 = e^{ Cn²(max_j pj)(b² + ab) + Cnδ } − 1,

where we have used the fact that n(max_j pj)(1 ∨ a ∨ b ∨ c̃) ≤ A for a positive constant A, along with Σ_{j=1}^{k} pj = 1.
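The multinomial comparison inequality invoked from Shao (2000) can be checked exactly in a small case by full enumeration (our own sanity check; f below is an arbitrary increasing nonnegative function):

```python
from math import comb

# Exact finite check: for a multinomial vector (N_1, ..., N_k) and increasing
# f >= 0, E[prod_j f(N_j)] <= prod_j E[f(N_j)], where N_j ~ Binomial(n, p_j)
# marginally.
n, p = 4, [0.2, 0.3, 0.5]
f = lambda m: 1 + m**2

def multinomial_pmf(counts):
    pr, rem, coef = 1.0, n, 1
    for c, pj in zip(counts, p):
        coef *= comb(rem, c)
        rem -= c
        pr *= pj ** c
    return coef * pr

lhs = sum(multinomial_pmf((a, b, n - a - b)) * f(a) * f(b) * f(n - a - b)
          for a in range(n + 1) for b in range(n + 1 - a))
rhs = 1.0
for pj in p:
    rhs *= sum(comb(n, m) * pj**m * (1 - pj) ** (n - m) * f(m) for m in range(n + 1))
print(lhs <= rhs)  # True
```

The gap between the two sides reflects the negative association of the multinomial counts, which is exactly what lets the proof pass from the product over cells to a product of binomial moments.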
6.1. Proof of Theorem 2.3.
Proof. Let
jmin d
1
2βmax /d+1
n
⌋,
=⌊
log n
lmin d
1
2γmax /d+1
n
=⌊
⌋,
log n
2
2
jmax d
2
lmax d
2
1
2βmin /d+1
n
=⌊
⌋,
log n
1
2γmin /d+1
n
=⌊
⌋.
log n
Without loss of generality assume that we have data {xi , yi }2n
i=1 . We split it into two equal parts
and use the second part to construct the estimator ĝ of the design density g and us the resulting
ĝ to construct the adaptive estimate of the regression function from the first half of the sample.
Throughout the proof, EP,i [·] will denote the expectation with respect to the ith half of the sample,
with the other half held fixed, under the distribution P . Throughout we choose the regularity of
our wavelet bases to be larger than γmax for the desired approximation and moment properties to
hold. As a result our constants depend on γmax .
Define T_1 = [j_min, j_max] ∩ ℕ and T_2 = [l_min, l_max] ∩ ℕ. For l ∈ T_2, let ĝ_l(x) = (1/n) Σ_{i=n+1}^{2n} K_{V_l}(X_i, x). Now, let

    l̂ = min{ j ∈ T_2 : ‖ĝ_j − ĝ_l‖_∞ ≤ C* √(2^{ld} ld / n)  ∀ l ∈ T_2 s.t. l ≥ j },

where C* is a constant (depending on γ_max, B_U) that can be determined from the proof hereafter. Thereafter, consider the estimator g̃ := ĝ_{l̂}.
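The selection rule for l̂ above is a Lepski-type balancing step: starting from the coarsest level, pick the first level whose estimate is within the stochastic tolerance of every finer candidate. A minimal sketch of this rule, assuming precomputed candidate estimates on a common grid and an illustrative constant `c_star` (none of the numerical choices below come from the paper):

```python
import numpy as np

def lepski_select(estimates, n, d=1, c_star=1.0):
    """Pick l_hat = min{ j : ||g_j - g_l||_inf <= c_star * sqrt(2^{l d} l d / n)
    for all candidate levels l >= j }.  `estimates` maps a resolution level l to
    the values of g_hat_l on a common evaluation grid."""
    levels = sorted(estimates)
    for j in levels:
        ok = all(
            np.max(np.abs(estimates[j] - estimates[l]))
            <= c_star * np.sqrt(2.0 ** (l * d) * l * d / n)
            for l in levels if l >= j
        )
        if ok:
            return j
    return levels[-1]  # fall back to the finest level

# toy check: candidate estimates that agree closely select the coarsest level
rng = np.random.default_rng(0)
grid = {l: np.zeros(100) + 1e-4 * rng.standard_normal(100) for l in range(2, 7)}
assert lepski_select(grid, n=10_000) == 2
```

The rule never looks at the unknown bias directly; agreement of ĝ_j with all finer ĝ_l within the stochastic band is used as a proxy for the bias being dominated by the noise level.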
Fix a P := (f, g) ∈ P(β, γ). To analyze the estimator g̃, we begin with a standard bias–variance type analysis for the candidate estimators ĝ_l. First note that for any x ∈ [0,1]^d, using standard facts about compactly supported wavelet bases having regularity larger than γ_max (Härdle et al., 1998), one has, for a constant C_1 depending only on the wavelet basis used,

    |E_P(ĝ_l(x)) − g(x)| = |Π(g|V_l)(x) − g(x)| ≤ C_1 M 2^{−ld(γ/d)}.                    (6.2)

Above we have used the fact that

    sup_{h ∈ H(γ,M)} ‖h − Π(h|V_l)‖_∞ ≤ C_1 M 2^{−lγ}.                                  (6.3)

Also, by standard arguments about compactly supported wavelet bases having regularity larger than γ_max (Giné and Nickl, 2015), one has, for a constant C_2 := C(B_U, ψ^0_{0,0}, ψ^1_{0,0}, γ_max),

    E_P(‖ĝ_l − E_P(ĝ_l)‖_∞) ≤ C_2 √(2^{ld} ld / n).                                     (6.4)
Therefore, by (6.2), (6.4), and the triangle inequality,

    E_{P,2}‖ĝ_l − g‖_∞ ≤ C_1 M 2^{−ld(γ/d)} + C_2 √(2^{ld} ld / n).

Define

    l* := min{ l ∈ T_2 : C_1 M 2^{−ld(γ/d)} ≤ C_2 √(2^{ld} ld / n) }.
ADAPTIVE ESTIMATION OF NONPARAMETRIC FUNCTIONALS
The definition of l* implies that for n sufficiently large,

    (C_1 M / C_2)^{2d/(2γ+d)} (n/log n)^{d/(2γ+d)} ≤ 2^{l*d} ≤ 2^{d+1} (C_1 M / C_2)^{2d/(2γ+d)} (n/log n)^{d/(2γ+d)}.
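The oracle level l* equates the approximation term C_1 M 2^{−ld(γ/d)} with the stochastic term C_2 √(2^{ld}ld/n), so that 2^{l*d} grows like (n/log n)^{d/(2γ+d)}. A quick numeric sketch of this balance, with the constants C_1 = C_2 = M = 1 chosen purely for illustration:

```python
import math

def oracle_level(n, gamma, d=1, c1=1.0, c2=1.0, m=1.0, l_max=60):
    # l* = min{ l : c1*m*2^{-l*gamma} <= c2*sqrt(2^{l d} * l * d / n) }
    for l in range(1, l_max):
        bias = c1 * m * 2.0 ** (-l * gamma)
        stoch = c2 * math.sqrt(2.0 ** (l * d) * l * d / n)
        if bias <= stoch:
            return l
    return l_max

n, gamma, d = 10**6, 2.0, 1
l_star = oracle_level(n, gamma, d)
rate = (n / math.log(n)) ** (d / (2 * gamma + d))   # predicted order of 2^{l* d}
ratio = 2.0 ** (l_star * d) / rate
assert 2.0 ** -6 < ratio < 2.0 ** 6                 # within a generous constant factor
```

The assertion only checks the order of magnitude, which is all the sandwich bound above asserts; the exact constants depend on C_1, C_2, M.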
The error analysis of g̃ can now be carried out as follows:

    E_{P,2}‖g̃ − g‖_∞ = E_{P,2}[‖g̃ − g‖_∞ 1{l̂ ≤ l*}] + E_{P,2}[‖g̃ − g‖_∞ 1{l̂ > l*}]      (6.5)
                     := I + II.                                                            (6.6)
We first control term I as follows:

    I = E_{P,2}[‖g̃ − g‖_∞ 1{l̂ ≤ l*}]
      ≤ E_{P,2}[‖ĝ_{l̂} − ĝ_{l*}‖_∞ 1{l̂ ≤ l*}] + E_{P,2}[‖ĝ_{l*} − g‖_∞ 1{l̂ ≤ l*}]
      ≤ C* √(2^{l*d} l*d / n) + C_1 M 2^{−l*d(γ/d)} + C_2 √(2^{l*d} l*d / n)
      ≤ (C* + 2C_2) √(2^{l*d} l*d / n)
      ≤ 2^{d+1} (C_1 M / C_2)^{d/(2γ+d)} (C* + 2C_2) (n/log n)^{−γ/(2γ+d)}.               (6.7)
The control of term II is easier if one has suitable bounds on ‖ĝ_l − g‖_∞. To this end note that, for any fixed x ∈ [0,1]^d, there exists a constant C_3 := C(ψ^0_{0,0}, ψ^1_{0,0}, γ_max) such that

    |ĝ_l(x)| ≤ (1/n) Σ_{i=1}^n Σ_{k∈Z_l} Σ_{v∈{0,1}^d} |ψ^v_{l,k}(X_i)| |ψ^v_{l,k}(x)| ≤ C_3 2^{ld}.

This, along with the fact that ‖g‖_∞ ≤ B_U, implies that for n sufficiently large,

    ‖ĝ_l − g‖_∞ ≤ C_3 2^{ld} + B_U ≤ 2C_3 2^{ld}.

In the above display the last inequality follows since l ≥ l_min and 2^{l_min d} ≍ (n/log n)^{1/(2γ_max/d+1)} diverges. Therefore,

    II ≤ C_3 Σ_{l=l*+1}^{l_max} 2^{ld} P(l̂ = l).                                          (6.8)
We now complete the control of II by suitably bounding P(l̂ = l). To this end, note that for any l > l*,

    P_{P,2}(l̂ = l)
    ≤ Σ_{l>l*} P_{P,2}( ‖ĝ_l − ĝ_{l*}‖_∞ > C* √(2^{ld}ld/n) )
    ≤ Σ_{l>l*} [ P_{P,2}( ‖ĝ_{l*} − E_{P,2}(ĝ_{l*})‖_∞ > (C*/2)√(2^{ld}ld/n) − ‖E_{P,2}(ĝ_l) − E_{P,2}(ĝ_{l*})‖_∞ )
               + P_{P,2}( ‖ĝ_l − E_{P,2}(ĝ_l)‖_∞ > C_2 √(2^{ld}ld/n) ) ]
    ≤ Σ_{l>l*} [ P_{P,2}( ‖ĝ_{l*} − E_{P,2}(ĝ_{l*})‖_∞ > (C*/2)√(2^{ld}ld/n) − ‖Π(g|V_l) − g‖_∞ − ‖Π(g|V_{l*}) − g‖_∞ )
               + P_{P,2}( ‖ĝ_l − E_{P,2}(ĝ_l)‖_∞ > C_2 √(2^{ld}ld/n) ) ]
    ≤ Σ_{l>l*} [ P_{P,2}( ‖ĝ_{l*} − E_{P,2}(ĝ_{l*})‖_∞ > (C*/2)√(2^{ld}ld/n) − 2C_2 √(2^{l*d}l*d/n) )
               + P_{P,2}( ‖ĝ_l − E_{P,2}(ĝ_l)‖_∞ > C_2 √(2^{ld}ld/n) ) ]
    ≤ Σ_{l>l*} [ P_{P,2}( ‖ĝ_{l*} − E_{P,2}(ĝ_{l*})‖_∞ > ((C*/2) − 2C_2)√(2^{ld}ld/n) )
               + P_{P,2}( ‖ĝ_l − E_{P,2}(ĝ_l)‖_∞ > C_2 √(2^{ld}ld/n) ) ]
    ≤ Σ_{l>l*} 2 exp(−Cld).                                                               (6.9)

In the fourth and fifth of the above series of inequalities we have used (6.3) and the definition of l*, respectively. The last inequality in the above display holds for a C > 0 depending on B_U, ψ^0_{0,0}, ψ^1_{0,0}, and follows from Lemma B.6 provided we choose C* large enough depending on M, B_U, ψ^0_{0,0}, ψ^1_{0,0}, γ_max. In particular, this implies that choosing C* large enough will guarantee that there exists an η > 3 such that, for large enough n, one has for any l > l*,

    P(l̂ = l) ≤ n^{−η}.                                                                   (6.10)
This along with (6.8) and the choice of l_max implies that

    II ≤ C_3 Σ_{l>l*} 2^{ld} n^{−η} = C_3 Σ_{l>l*} (2^{ld}/n) n^{−η+1} ≤ l_max / n^{η−1} ≤ (log n)/n.   (6.11)
Finally, combining equations (6.7) and (6.11), we have the existence of an estimator g̃ depending on M, B_U, and γ_max (once we have fixed our choice of father and mother wavelets), such that for every (β,γ) ∈ [β_min, β_max] × [γ_min, γ_max],

    sup_{P ∈ P(β,γ)} E_P ‖g̃ − g‖_∞ ≤ C (n/log n)^{−γ/(2γ+d)},

with a large enough positive C depending on M, B_U, and γ_max.
We next show that uniformly over P ∈ P(β,γ), g̃ belongs to H(γ,C) with probability at least 1 − 1/n², for a large enough constant C depending on M, B_U, and γ_max. Towards this end, note that for any C > 0 and l' > 0 (letting, for any h ∈ L²[0,1]^d, ‖⟨h, ψ_{l',·}⟩‖_2 denote the Euclidean norm of the vector (⟨h, ψ^v_{l',k'}⟩ : k' ∈ Z_{l'}, v ∈ {0,1}^d \ {0}^d)), we have
    P_{P,2}( 2^{l'(γ+d/2)} ‖⟨g̃, ψ_{l',·}⟩‖_∞ > C )
    = Σ_{l=l_min}^{l_max} P_{P,2}( 2^{l'(γ+d/2)} ‖⟨ĝ_l, ψ_{l',·}⟩‖_∞ > C, l̂ = l ) 1{l' ≤ l}
    = Σ_{l=l_min}^{l*} P_{P,2}( 2^{l'(γ+d/2)} ‖⟨ĝ_l, ψ_{l',·}⟩‖_∞ > C, l̂ = l ) 1{l' ≤ l}
      + Σ_{l=l*+1}^{l_max} P_{P,2}( 2^{l'(γ+d/2)} ‖⟨ĝ_l, ψ_{l',·}⟩‖_∞ > C, l̂ = l ) 1{l' ≤ l}
    ≤ Σ_{l=l_min}^{l*} P_{P,2}( 2^{l'(γ+d/2)} ‖⟨ĝ_l, ψ_{l',·}⟩‖_∞ > C ) 1{l' ≤ l} + Σ_{l>l*} P_{P,2}(l̂ = l) 1{l' ≤ l}
    ≤ Σ_{l=l_min}^{l*} P_{P,2}( 2^{l'(γ+d/2)} ‖⟨ĝ_l, ψ_{l',·}⟩‖_∞ > C ) 1{l' ≤ l} + Σ_{l>l*} n^{−η},      (6.12)

where the last inequality follows from (6.10) for some η > 3, provided C* is chosen large enough as before. Now,
    P_{P,2}( 2^{l'(γ+d/2)} ‖⟨ĝ_l, ψ_{l',·}⟩‖_∞ > C )
    ≤ P_{P,2}( 2^{l'(γ+d/2)} ‖⟨ĝ_l, ψ_{l',·}⟩ − E_{P,2}⟨ĝ_l, ψ_{l',·}⟩‖_∞ > C/2 )
      + P_{P,2}( 2^{l'(γ+d/2)} ‖E_{P,2}⟨ĝ_l, ψ_{l',·}⟩‖_∞ > C/2 )
    = P_{P,2}( 2^{l'(γ+d/2)} ‖⟨ĝ_l, ψ_{l',·}⟩ − E_{P,2}⟨ĝ_l, ψ_{l',·}⟩‖_∞ > C/2 )

if C > 2M (by the definition (5.5)). Therefore, from (6.12), one has for any C > 2M,

    P_{P,2}( 2^{l'(γ+d/2)} ‖⟨g̃, ψ_{l',·}⟩‖_∞ > C )
    ≤ Σ_{l=l_min}^{l*} P_{P,2}( 2^{l'(γ+d/2)} ‖⟨ĝ_l, ψ_{l',·}⟩ − E_{P,2}⟨ĝ_l, ψ_{l',·}⟩‖_∞ > C/2 ) 1{l' ≤ l}
      + Σ_{l=l*+1}^{l_max} n^{−3} 1{l' ≤ l}.                                              (6.13)
Considering the first term of the last summand of the above display, we have

    Σ_{l=l_min}^{l*} P_{P,2}( 2^{l'(γ+d/2)} ‖⟨ĝ_l, ψ_{l',·}⟩ − E_{P,2}⟨ĝ_l, ψ_{l',·}⟩‖_∞ > C/2 ) 1{l' ≤ l}
    ≤ Σ_{l=l_min}^{l*} Σ_{k∈Z_{l'}} Σ_{v∈{0,1}^d} P_{P,2}( |(1/n) Σ_{i=n+1}^{2n} (ψ^v_{l',k}(X_i) − E_{P,2}ψ^v_{l',k}(X_i))| > (C/2) 2^{−l'(γ+d/2)} ) 1{l' ≤ l}.
By Bernstein's inequality, for any λ > 0,

    P_{P,2}( |(1/n) Σ_{i=n+1}^{2n} (ψ^v_{l',k}(X_i) − E_{P,2}ψ^v_{l',k}(X_i))| > λ ) ≤ 2 exp( − nλ² / (2(σ² + ‖ψ^v_{l',k}‖_∞ λ/3)) ),

where σ² = E_{P,2}(ψ^v_{l',k}(X_i) − E_{P,2}ψ^v_{l',k}(X_i))². Indeed, there exists a constant C_4 depending on ψ^0_{0,0}, ψ^1_{0,0}, γ_max such that σ² ≤ C_4 and ‖ψ^v_{l',k}‖_∞ ≤ C_4 2^{l'd/2}. Therefore,
    Σ_{l=l_min}^{l*} Σ_{k∈Z_{l'}} Σ_{v∈{0,1}^d} P_{P,2}( |(1/n) Σ_{i=n+1}^{2n} (ψ^v_{l',k}(X_i) − E_{P,2}ψ^v_{l',k}(X_i))| > (C/2) 2^{−l'(γ+d/2)} ) 1{l' ≤ l}
    ≤ 2 Σ_{l=l_min}^{l*} Σ_{k∈Z_{l'}} Σ_{v∈{0,1}^d} exp( − n C² 2^{−2l'(γ+d/2)} / (8C_4(1 + C 2^{l'd/2} 2^{−l'(γ+d/2)})) ) 1{l' ≤ l}
    = 2 Σ_{l=l_min}^{l*} Σ_{k∈Z_{l'}} Σ_{v∈{0,1}^d} exp( − C² n 2^{−2l'γ} / (8C_4 2^{l'd} + C 2^{l'(d−γ)}) ) 1{l' ≤ l}
    ≤ 2 Σ_{l=l_min}^{l*} Σ_{k∈Z_{l'}} Σ_{v∈{0,1}^d} exp( − C² n 2^{−2l'γ} / (8(1+C_2)C_4 2^{l'd}) ) 1{l' ≤ l}
    = 2 Σ_{l=l_min}^{l*} Σ_{k∈Z_{l'}} Σ_{v∈{0,1}^d} exp( − (C² n 2^{−2l*γ} / (8(1+C_2)C_4 2^{l*d} l*d)) 2^{(l*−l')(d+2γ)} l*d ) 1{l' ≤ l}
    ≤ 2 Σ_{l=l_min}^{l*} Σ_{k∈Z_{l'}} Σ_{v∈{0,1}^d} exp( − (C² C_2 / (2^{d+3} C_1 (1+C_2) C_4)) 2^{(l*−l')(d+2γ)} l*d ) 1{l' ≤ l}
    ≤ 2 Σ_{l=l_min}^{l*} Σ_{k∈Z_{l'}} Σ_{v∈{0,1}^d} exp( − (C² C_2 / (2^{d+3} C_1 (1+C_2) C_4)) l'd ) 1{l' ≤ l}
    ≤ 2 Σ_{l=l_min}^{l*} C(ψ^0_{0,0}, ψ^1_{0,0}) 2^{l'd} exp( − (C² C_2 / (2^{d+3} C_1 (1+C_2) C_4)) l'd ) 1{l' ≤ l}
    ≤ 2 Σ_{l=l_min}^{l*} C(ψ^0_{0,0}, ψ^1_{0,0}) exp( − ( C² C_2 / (2^{d+3} C_1 (1+C_2) C_4) − 1 ) l'd ) 1{l' ≤ l}
    ≤ 2 l_max C(ψ^0_{0,0}, ψ^1_{0,0}) exp( − ( C² C_2 / (2^{d+3} C_1 (1+C_2) C_4) − 1 ) l_min d ) 1{l' ≤ l_max},   (6.14)
if C² C_2 / (2^{d+3} C_1 (1+C_2) C_4) ≥ 1. The above inequality (6.14) uses the definition of l*. Indeed, choosing C large enough, one can guarantee ( C² C_2 / (2^{d+3} C_1 (1+C_2) C_4) − 1 ) l_min d ≥ 4 log n. Such a choice of C implies that
    Σ_{l=l_min}^{l*} Σ_{k∈Z_{l'}} Σ_{v∈{0,1}^d} P_{P,2}( |(1/n) Σ_{i=n+1}^{2n} (ψ^v_{l',k}(X_i) − E_{P,2}ψ^v_{l',k}(X_i))| > (C/2) 2^{−l'(γ+d/2)} ) 1{l' ≤ l}
    ≤ ( C(ψ^0_{0,0}, ψ^1_{0,0}) / n³ ) 1{l' ≤ l_max},

which in turn implies that, for C sufficiently large (depending on M, ψ^0_{0,0}, ψ^1_{0,0}), one has

    P_{P,2}( 2^{l'(γ+d/2)} ‖⟨g̃, ψ_{l',·}⟩‖_2 > C ) ≤ ( (C(ψ^0_{0,0}, ψ^1_{0,0}) + 1) / n³ ) 1{l' ≤ l_max}.
This, along with the logarithmic in n size of l_max, implies that for sufficiently large n, uniformly over P ∈ P(β,γ), g̃ belongs to H(γ,C) with probability at least 1 − 1/n², for a large enough constant C depending on M, B_U, and γ_max (the choice of ψ^0_{0,0}, ψ^1_{0,0} being fixed by specifying a regularity S > γ_max).
However this g̃ does not satisfy the desired pointwise bounds. To achieve this, let ψ be a C^∞ function such that ψ(x) = x on [B_L, B_U] while B_L/2 ≤ ψ(x) ≤ 2B_U for all x. Finally, consider the estimator ĝ(x) = ψ(g̃(x)). We note that |g(x) − ĝ(x)| ≤ |g(x) − g̃(x)|; thus ĝ is adaptive to the smoothness of the design density. The boundedness of the constructed estimator follows from the construction. Finally, we wish to show that, almost surely, the constructed estimator belongs to the Hölder space with the same smoothness, possibly of a different radius. This is captured by the next lemma, whose proof can be completed by following arguments similar to the proof of Lemma 3.1 in Mukherjee and Sen (2016). In particular:
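A C^∞ truncation ψ with ψ(x) = x on [B_L, B_U] and B_L/2 ≤ ψ(x) ≤ 2B_U can be built from the classical smooth bump e^{−1/t}. The construction below is one concrete possibility, not taken from the paper: it blends the identity with a hard clamp through a C^∞ weight that equals 1 on [B_L, B_U] and vanishes outside (B_L/2, 2B_U).

```python
import math

def _f(t):
    # building block of a C-infinity step: 0 for t <= 0, e^{-1/t} for t > 0
    return math.exp(-1.0 / t) if t > 0 else 0.0

def _step(t):
    # C-infinity transition: equals 0 for t <= 0 and 1 for t >= 1
    return _f(t) / (_f(t) + _f(1.0 - t))

def make_psi(bl, bu):
    def psi(x):
        # C-infinity weight: 1 on [bl, bu], 0 outside (bl/2, 2*bu)
        w = _step((x - bl / 2) / (bl / 2)) * _step((2 * bu - x) / bu)
        clamp = min(max(x, bl), bu)
        return w * x + (1.0 - w) * clamp
    return psi

psi = make_psi(bl=0.5, bu=2.0)
assert all(abs(psi(x) - x) < 1e-12 for x in (0.5, 1.0, 2.0))   # identity on [BL, BU]
assert all(0.25 <= psi(x) <= 4.0 for x in
           [i / 50 - 1.0 for i in range(0, 301)])              # values stay in [BL/2, 2BU]
```

On [B_L, B_U] the weight is exactly 1, so ψ is exactly the identity there; outside, the output is a convex combination of x and its clamp, which keeps the range inside [B_L/2, 2B_U].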
Lemma 6.2. For all h ∈ H(β, M ), ψ(h) ∈ H(β, C(M, β)), where C(M, β) is a universal constant dependent only on M, β and independent of h ∈ H(β, M ).
Now, the construction of f̂ satisfying the desired properties of Theorem 2.3 can be done following ideas from the proof of Theorem 1.1 of Mukherjee and Sen (2016). In particular, construct the estimator ĝ of the design density g as above from the second part of the sample, and for j ∈ T_1 let

    f̂_j(x) = (1/n) Σ_{i=1}^n (W_i / ĝ(X_i)) K_{V_j}(X_i, x).

Now, let

    ĵ = min{ j ∈ T_1 : ‖f̂_j − f̂_{j'}‖_∞ ≤ C** √(2^{j'd} j'd / n)  ∀ j' ∈ T_1 s.t. j' ≥ j },

where C** depends only on the known parameters of the problem and can be determined from the proof hereafter. Thereafter, consider the estimator f̃ := f̂_ĵ.
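For the Haar basis in d = 1 the projection kernel is explicit, K_{V_j}(x,y) = 2^j 1{⌊2^j x⌋ = ⌊2^j y⌋}, so each candidate f̂_j is a dyadic-cell average weighted by 1/ĝ. A sketch under these simplifying assumptions (Haar wavelets, d = 1, and a plug-in density ĝ supplied by the caller; the paper's general construction uses smoother bases):

```python
import numpy as np

def fhat_j(x_eval, X, W, ghat, j):
    """f_hat_j(x) = (1/n) * sum_i (W_i / ghat(X_i)) * K_{V_j}(X_i, x)
    with the Haar projection kernel K_{V_j}(x, y) = 2^j * 1{same dyadic cell}."""
    n = len(X)
    cells_X = np.floor(2**j * np.asarray(X)).astype(int)
    cells_x = np.floor(2**j * np.asarray(x_eval)).astype(int)
    weights = np.asarray(W) / ghat(np.asarray(X))
    out = np.empty(len(cells_x))
    for m, c in enumerate(cells_x):
        out[m] = 2**j * weights[cells_X == c].sum() / n
    return out

# toy check with a uniform design (g = 1) and noise-free responses W = f(X)
rng = np.random.default_rng(1)
X = rng.uniform(size=20_000)
f = lambda x: 2.0 + np.sin(2 * np.pi * x)
W = f(X)
est = fhat_j([0.3, 0.7], X, W, ghat=lambda x: np.ones_like(x), j=4)
assert np.max(np.abs(est - f(np.array([0.3, 0.7])))) < 0.35
```

With g ≡ 1 and noise-free W the estimator reduces to a local average of f over a dyadic cell, so the error is of the order of the cell oscillation of f plus a sampling fluctuation; the Lepski rule above then picks j to balance these two contributions.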
Now define

    j* := min{ j ∈ T_1 : 2^{−jd(β/d)} ≤ √(2^{jd} jd / n) }.
Therefore,

    E_P‖f̃ − f‖_∞ ≤ E_P[‖f̂_ĵ − f‖_∞ 1(ĵ ≤ j*)] + E_P[‖f̂_ĵ − f‖_∞ 1(ĵ > j*)].

Thereafter, using Lemma B.6 and (6.3), we have

    E_P[‖f̂_ĵ − f‖_∞ 1(ĵ ≤ j*)]
    ≤ E_P[‖f̂_ĵ − f̂_{j*}‖_∞ 1(ĵ ≤ j*)] + E_P‖f̂_{j*} − f‖_∞
    ≤ C** √(2^{j*d} j*d / n) + E_{P,2}‖f̂_{j*} − E_{P,1}(f̂_{j*})‖_∞ + E_{P,2}‖E_{P,1}(f̂_{j*}) − f‖_∞
    ≤ (C** + C(B_U, B_L, ψ^0_{0,0}, ψ^1_{0,0})) √(2^{j*d} j*d / n)
      + E_{P,2}‖Π(f(g/ĝ − 1)|V_{j*})‖_∞ + ‖f − Π(f|V_{j*})‖_∞                             (6.15)
    ≤ (C** + C(B_U, B_L, ψ^0_{0,0}, ψ^1_{0,0})) √(2^{j*d} j*d / n)
      + C(M, ψ^0_{0,0}, ψ^1_{0,0}) 2^{−j*β} + E_{P,2}‖Π(f(g/ĝ − 1)|V_{j*})‖_∞.             (6.16)

Now, by standard computations involving compactly supported wavelet bases and the properties of ĝ,

    E_{P,2}‖Π(f(g/ĝ − 1)|V_{j*})‖_∞ ≤ C(ψ^0_{0,0}, ψ^1_{0,0}) E_{P,2}‖f(g/ĝ − 1)‖_∞
    ≤ C(B_U, B_L, ψ^0_{0,0}, ψ^1_{0,0}) E_{P,2}‖ĝ − g‖_∞
    ≤ C(B_U, B_L, M, γ_max, ψ^0_{0,0}, ψ^1_{0,0}) (n/log n)^{−γ/(2γ+d)}.                  (6.17)

Combining (6.16), (6.17), the definition of j*, and the fact that γ > β, we have

    E_P[‖f̂_ĵ − f‖_∞ 1(ĵ ≤ j*)] ≤ C(B_U, B_L, M, γ_max, ψ^0_{0,0}, ψ^1_{0,0}) (n/log n)^{−β/(2β+d)},   (6.18)
provided C** is chosen depending only on the known parameters of the problem. Now, using arguments similar to those leading to (6.8), we have

    E_P[‖f̂_ĵ − f‖_∞ 1(ĵ > j*)] ≤ C(B_U, B_L, ψ^0_{0,0}, ψ^1_{0,0}) Σ_{j>j*} 2^{jd} P_P(ĵ = j).   (6.19)
We now complete the control by suitably bounding P_P(ĵ = j). To this end, note that for any j > j*,

    P_P(ĵ = j)
    ≤ Σ_{j>j*} P_P( ‖f̂_j − f̂_{j*}‖_∞ > C** √(2^{jd}jd/n) )
    ≤ Σ_{j>j*} E_{P,2}[ P_{P,1}( ‖f̂_{j*} − E_{P,1}f̂_{j*}‖_∞ > (C**/2)√(2^{jd}jd/n) − ‖E_{P,1}f̂_{j*} − E_{P,1}f̂_j‖_∞ )
                      + P_{P,1}( ‖f̂_j − E_{P,1}f̂_j‖_∞ > (C**/2)√(2^{jd}jd/n) ) ]
    ≤ Σ_{j>j*} E_{P,2}[ P_{P,1}( ‖f̂_{j*} − E_{P,1}f̂_{j*}‖_∞ > (C**/2)√(2^{jd}jd/n) − ‖Π(f(g/ĝ)|V_{j*}) − Π(f(g/ĝ)|V_j)‖_∞ )
                      + P_{P,1}( ‖f̂_j − E_{P,1}f̂_j‖_∞ > (C**/2)√(2^{jd}jd/n) ) ].

Now,

    ‖Π(f(g/ĝ)|V_{j*}) − Π(f(g/ĝ)|V_j)‖_∞ ≤ C(M, ψ^0_{0,0}, ψ^1_{0,0}) 2^{−j*β} + C(B_U, B_L, ψ^0_{0,0}, ψ^1_{0,0}) ‖ĝ − g‖_∞.

Using the fact that √(2^{jd}jd/n) > √(2^{j*d}j*d/n) for j > j*, we have, by the definition of j*, that there exist C, C' > 0 depending on M, B_U, B_L, ψ^0_{0,0}, ψ^1_{0,0} such that

    P_P(ĵ = j) ≤ Σ_{j>j*} { E_{P,2}[ P_{P,1}( ‖f̂_{j*} − E_{P,1}f̂_{j*}‖_∞ > ((C**/2) − C)√(2^{jd}jd/n) )
                                  + P_{P,1}( ‖f̂_j − E_{P,1}f̂_j‖_∞ > (C**/2)√(2^{jd}jd/n) ) ]
                          + P_{P,2}( ‖ĝ − g‖_∞ > C' √(2^{j*d}j*d/n) ) }.                   (6.20)
Now, provided C** > 2C is chosen large enough (depending on B_U, B_L, ψ^0_{0,0}, ψ^1_{0,0}), there exists a large enough C'' (depending on B_U, B_L, ψ^0_{0,0}, ψ^1_{0,0}) such that

    P_{P,1}( ‖f̂_{j*} − E_{P,1}f̂_{j*}‖_∞ > ((C**/2) − C)√(2^{jd}jd/n) )
    + P_{P,1}( ‖f̂_j − E_{P,1}f̂_j‖_∞ > (C**/2)√(2^{jd}jd/n) ) ≤ 2e^{−C''jd}.               (6.21)
Henceforth, whenever required, C, C', C'' will be chosen large enough depending on the known parameters of the problem, which in turn will imply that C** can be chosen large enough depending on the known parameters of the problem as well. First note that the last term in the above display can be bounded rather crudely using the following lemma.

Lemma 6.3. Assume γ_min > β_max. Then for C', C_1, C_2 > 0 (chosen large enough depending on B_U, ψ^0_{0,0}, ψ^1_{0,0}) one has

    sup_{P ∈ P(β,γ)} P_{P,2}( ‖ĝ − g‖_∞ > C' √(2^{j*d} j*d / n) ) ≤ C_1 (l_max − l_min) e^{−C_2 l_min d}.
The proof of Lemma 6.3 can be argued as follows. Indeed, ĝ = ψ(g̃), where ψ(x) is a C^∞ function which is identically equal to x on [B_L, B_U] and has universally bounded first derivative. Therefore, it is enough to prove Lemma 6.3 for g̃ instead of ĝ, invoking a simple first-order Taylor series argument along with the fact that ψ(g) ≡ g owing to the bounds on g. The crux of the argument for proving Lemma 6.3 is that, by Lemma B.6, any ĝ_l for l ∈ T_2 suitably concentrates around g in a radius of the order of √(2^{ld}ld/n). The proof of the lemma is therefore very similar to the proof of adaptivity of ĝ (by dividing into the cases where the chosen l̂ is larger and smaller than l*, respectively, and thereafter invoking Lemma B.6), and therefore we omit the details.
Plugging the result of Lemma 6.3 into (6.20), and thereafter using the facts that γ_min > β_max and that l_max, j_max are both poly-logarithmic in nature, along with equations (6.15), (6.18), (6.19), and (6.21), we have the existence of an estimator f̃ depending on M, B_U, B_L, β_min, β_max, γ_max, such that for every (β,γ) ∈ [β_min, β_max] × [γ_min, γ_max],

    sup_{P ∈ P(β,γ)} E_P‖f̃ − f‖_∞ ≤ C (n/log n)^{−β/(2β+d)},

with a large enough positive constant C depending on M, B_U, B_L, β_min, γ_max, ψ^0_{0,0}, ψ^1_{0,0}.
However this f̃ does not satisfy the desired pointwise bounds. To achieve this, as before, let ψ be a C^∞ function such that ψ(x) = x on [B_L, B_U] while B_L/2 ≤ ψ(x) ≤ 2B_U for all x. Finally, consider the estimator f̂(x) = ψ(f̃(x)). We note that |f(x) − f̂(x)| ≤ |f(x) − f̃(x)|; thus f̂ is adaptive to the smoothness of the regression function. The boundedness of the constructed estimator follows from the construction. Finally, the proof of the fact that the constructed estimator belongs to the Hölder space with the same smoothness, possibly of a different radius, follows once again from Lemma 6.2.
References.
Baraud, Y. (2002). Model selection for regression on a random design. ESAIM: Probability and Statistics 6 127–146.
Bickel, P. J. and Ritov, Y. (1988). Estimating integrated squared density derivatives: sharp best order of convergence estimates. Sankhyā: The Indian Journal of Statistics, Series A 381–393.
Bickel, P. J., Klaassen, C. A. J., Ritov, Y. and Wellner, J. A. (1993). Efficient and adaptive estimation for semiparametric models. Johns Hopkins University Press, Baltimore.
Birgé, L. and Massart, P. (1995). Estimation of integral functionals of a density. The Annals of Statistics 11–29.
Brown, L. D. and Levine, M. (2007). Variance estimation in nonparametric regression via the difference sequence
method. The Annals of Statistics 35 2219–2232.
Brown, L. D. and Low, M. G. (1996). A constrained risk inequality with applications to nonparametric functional
estimation. The annals of Statistics 24 2524–2535.
Bull, A. D. and Nickl, R. (2013). Adaptive confidence sets in L². Probability Theory and Related Fields 156 889–919.
Cai, T. T. and Low, M. G. (2003). A note on nonparametric estimation of linear functionals. Annals of statistics
1140–1153.
Cai, T. T. and Low, M. G. (2004). Minimax estimation of linear functionals over nonconvex parameter spaces.
Annals of statistics 552–576.
Cai, T. T. and Low, M. G. (2005a). Nonquadratic estimators of a quadratic functional. The Annals of Statistics
2930–2956.
Cai, T. T. and Low, M. G. (2005b). On adaptive estimation of linear functionals. The Annals of Statistics 33
2311–2343.
Cai, T. T. and Low, M. G. (2006). Optimal adaptive estimation of a quadratic functional. The Annals of Statistics
34 2298–2325.
Cai, T. T. and Low, M. G. (2011). Testing composite hypotheses, Hermite polynomials and optimal estimation of
a nonsmooth functional. The Annals of Statistics 39 1012–1041.
Cai, T. T. and Wang, L. (2008). Adaptive variance function estimation in heteroscedastic nonparametric regression.
The Annals of Statistics 36 2025–2054.
Cohen, A., Daubechies, I. and Vial, P. (1993). Wavelets on the interval and fast wavelet transforms. Applied and
computational harmonic analysis 1 54–81.
Crump, R. K., Hotz, V. J., Imbens, G. W. and Mitnik, O. A. (2009). Dealing with limited overlap in estimation
of average treatment effects. Biometrika asn055.
Donoho, D. L., Liu, R. C. and MacGibbon, B. (1990). Minimax risk over hyperrectangles, and implications. The
Annals of Statistics 1416–1437.
Donoho, D. L. and Nussbaum, M. (1990). Minimax quadratic estimation of a quadratic functional. Journal of
Complexity 6 290–323.
Efromovich, S. and Low, M. G. (1994). Adaptive estimates of linear functionals. Probability theory and related
fields 98 261–275.
Efromovich, S. and Low, M. (1996). On optimal adaptive estimation of a quadratic functional. The Annals of
Statistics 24 1106–1125.
Efromovich, S. and Samarov, A. (2000). Adaptive estimation of the integral of squared regression derivatives.
Scandinavian journal of statistics 27 335–351.
Fan, J. (1991). On the estimation of quadratic functionals. The Annals of Statistics 1273–1294.
Fan, J. and Yao, Q. (1998). Efficient estimation of conditional variance functions in stochastic regression. Biometrika
85 645–660.
Giné, E., Latala, R. and Zinn, J. (2000). Exponential and moment inequalities for U-statistics. In High Dimensional
Probability II 13–38. Springer.
Giné, E. and Nickl, R. (2008). A simple adaptive estimator of the integrated square of a density. Bernoulli 47–61.
Giné, E. and Nickl, R. (2015). Mathematical foundations of infinite-dimensional statistical models. Cambridge
Series in Statistical and Probabilistic Mathematics.
Hall, P. and Carroll, R. (1989). Variance function estimation in regression: the effect of estimating the mean.
Journal of the Royal Statistical Society. Series B (Methodological) 3–14.
Hall, P. and Marron, J. S. (1987). Estimation of integrated squared density derivatives. Statistics & Probability
Letters 6 109–115.
Härdle, W., Kerkyacharian, G., Tsybakov, A. and Picard, D. (1998). Wavelets, approximation, and statistical
applications. Springer.
Houdré, C. and Reynaud-Bouret, P. (2003). Exponential inequalities, with constants, for U-statistics of order
two. 55–69.
Ibragimov, I. A. and Has’minskii, R. Z. (2013). Statistical estimation: asymptotic theory 16. Springer Science &
Business Media.
Kerkyacharian, G. and Picard, D. (1996). Estimating nonquadratic functionals of a density using Haar wavelets.
The Annals of Statistics 24 485–507.
Klemela, J. and Tsybakov, A. B. (2001). Sharp adaptive estimation of linear functionals. The Annals of statistics
1567–1600.
Laurent, B. (1996). Efficient estimation of integral functionals of a density. The Annals of Statistics 24 659–681.
Laurent, B. and Massart, P. (2000). Adaptive estimation of a quadratic functional by model selection. Annals of
Statistics 1302–1338.
Lepski, O. (1991). On a problem of adaptive estimation in Gaussian white noise. Theory of Probability & Its Applications 35 454–466.
Lepski, O. V. (1992). On problems of adaptive estimation in white Gaussian noise. Topics in nonparametric estimation 12 87–106.
Low, M. G. (1992). Renormalization and white noise approximation for nonparametric functional estimation problems. The Annals of Statistics 545–554.
Mukherjee, R., Newey, W. K. and Robins, J. M. (2017). Semiparametric Efficient Empirical Higher Order
Influence Function Estimators. arXiv preprint arXiv:1705.07577.
Mukherjee, R. and Sen, S. (2016). Optimal Adaptive Inference in Random Design Binary Regression. Bernoulli
(To Appear).
Mukherjee, R., Tchetgen Tchetgen, E. and Robins, J. (2017). Supplement to “Adpative Estimation of Nonparametric Functionals”.
Nemirovski, A. (2000). Topics in non-parametric statistics. École d’Été de Probabilités de Saint-Flour 28 85.
Petrov, V. V. (1995). Limit Theorems of Probability Theory: Sequences of Independent Random Variables, vol. 4 of Oxford Studies in Probability.
Robins, J. M., Mark, S. D. and Newey, W. K. (1992). Estimating exposure effects by modelling the expectation
of exposure conditional on confounders. Biometrics 479–495.
Robins, J., Li, L., Tchetgen, E. and van der Vaart, A. (2008). Higher order influence functions and minimax
estimation of nonlinear functionals. In Probability and Statistics: Essays in Honor of David A. Freedman 335–421.
Institute of Mathematical Statistics.
Robins, J., Tchetgen, E. T., Li, L. and van der Vaart, A. (2009). Semiparametric minimax rates. Electronic
Journal of Statistics 3 1305–1321.
Robins, J., Li, L., Mukherjee, R., Tchetgen, E. T. and van der Vaart, A. (2016). Higher Order Estimating
Equations for High-dimensional Models. The Annals of Statistics (To Appear).
Ruppert, D., Wand, M. P., Holst, U. and Hössjer, O. (1997). Local polynomial variance-function estimation. Technometrics 39 262–273.
Shao, Q.-M. (2000). A comparison theorem on moment inequalities between negatively associated and independent
random variables. Journal of Theoretical Probability 13 343–356.
Tribouley, K. (2000). Adaptive estimation of integrated functionals. Mathematical Methods of Statistics 9 19–38.
Tsiatis, A. (2007). Semiparametric theory and missing data. Springer Science & Business Media.
Van der Vaart, A. W. (2000). Asymptotic statistics 3. Cambridge university press.
APPENDIX A: PROOF OF REMAINING THEOREMS
Proof of Theorem 3.1.
Proof.
(i) Proof of Upper Bound
The general scheme of proof involves identifying a non-adaptive minimax estimator of φ(P) under the knowledge of P ∈ P_{(α,β,γ)}, demonstrating suitable bias and variance properties of this sequence of estimators, and thereafter invoking Theorem 2.1 to conclude. This routine can be carried out as follows. Without loss of generality assume that we have 3n samples {Y_i, A_i, X_i}_{i=1}^{3n}. Divide the samples into 3 equal parts (with the l-th part being indexed by {(l−1)n+1, …, ln} for l ∈ {1,2,3}), estimate g by ĝ adaptively from the third part (as in Theorem 2.3), and estimate a and b by â and b̂ respectively, adaptively from the second part (as in Theorem 2.3). Let E_{P,S} denote the expectation with the samples with indices in S held fixed, for S ⊂ {1,2,3}. A first-order influence function for φ(P) at P is given by (Y − b(X))(A − a(X)) − φ(P), and a resulting first-order estimator of φ(P) is (1/n) Σ_{i=1}^n (Y_i − b̂(X_i))(A_i − â(X_i)). This estimator has bias

    E_{P,{2,3}}[ ∫ (b(x) − b̂(x))(a(x) − â(x)) g(x) dx ].

Indeed, for (α+β)/2 < d/2, this bias turns out to be suboptimal compared to the minimax rate of convergence of n^{−(4α+4β)/(2α+2β+d)} in mean squared loss. The most intuitive way to proceed is to estimate and correct for the bias. If there existed a "dirac-kernel" K(x_1,x_2) ∈ L²([0,1]^d × [0,1]^d) such that ∫ h(x_2)K(x_1,x_2) dx_2 = h(x_1) for almost every x_1, for all h ∈ L²[0,1]^d, then one could estimate the bias term by

    (1/(n(n−1))) Σ_{1≤i_1≠i_2≤n} ((Y_{i_1} − b̂(X_{i_1}))/√g(X_{i_1})) K(X_{i_1}, X_{i_2}) ((A_{i_2} − â(X_{i_2}))/√g(X_{i_2})),

provided the marginal density g were known. Indeed, there are two concerns with the above suggestion. The first is the knowledge of g. This can be relatively easy to deal with by plugging in a suitable estimate ĝ, although there are some subtleties involved (refer to Section 4 for more on this). The primary concern, though, is the non-existence of a "dirac-kernel" of the above sort as an element of L²([0,1]^d × [0,1]^d). This necessitates the following modification, where one works with projection kernels on a suitable finite-dimensional linear subspace L of L²[0,1]^d, which guarantees the existence of such kernels when the domain space is restricted to L. In particular, we work with the linear subspace V_j (defined in Section 5), where the choice of j is guided by the balance between the bias and variance properties of the resulting estimator. In particular, the choice of 2^j is guided by the knowledge of the parameter space P_{(α,β,γ)}. For any j such that n ≤ 2^{jd} ≤ n², our bias-corrected second-order estimator of φ(P) is given by
    φ̂_{n,j} = (1/n) Σ_{i=1}^n (Y_i − b̂(X_i))(A_i − â(X_i))
             − (1/(n(n−1))) Σ_{1≤i_1≠i_2≤n} ((Y_{i_1} − b̂(X_{i_1}))/√ĝ(X_{i_1})) K_{V_j}(X_{i_1}, X_{i_2}) ((A_{i_2} − â(X_{i_2}))/√ĝ(X_{i_2})).
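To make the structure of φ̂_{n,j} concrete, the sketch below implements it in d = 1 with the Haar projection kernel and a known uniform design (ĝ ≡ 1), with the nuisance estimates â, b̂ taken as given functions. This is an illustrative reduction of the display above, not the paper's full construction.

```python
import numpy as np

def phi_hat(Y, A, X, a_hat, b_hat, j, g_hat=lambda x: np.ones_like(x)):
    """Second-order bias-corrected estimator of E[cov(Y, A | X)] with a Haar
    projection kernel K_{V_j}(x, y) = 2^j * 1{floor(2^j x) == floor(2^j y)}."""
    n = len(Y)
    ry = (Y - b_hat(X)) / np.sqrt(g_hat(X))    # left residuals
    ra = (A - a_hat(X)) / np.sqrt(g_hat(X))    # right residuals
    linear = np.mean((Y - b_hat(X)) * (A - a_hat(X)))
    # U-statistic over distinct pairs, grouped by dyadic cell
    cells = np.floor(2**j * X).astype(int)
    quad = 0.0
    for c in np.unique(cells):
        idx = cells == c
        sy, sa = ry[idx].sum(), ra[idx].sum()
        diag = (ry[idx] * ra[idx]).sum()
        quad += 2**j * (sy * sa - diag)        # off-diagonal pairs only
    return linear - quad / (n * (n - 1))

rng = np.random.default_rng(2)
n = 5_000
X = rng.uniform(size=n)
a = lambda x: 0.3 + 0.4 * x                    # true propensity
A = rng.binomial(1, a(X)).astype(float)
Y = A + X + rng.normal(scale=0.1, size=n)      # so cov(Y, A | X) = a(X)(1 - a(X))
truth = np.mean(a(X) * (1 - a(X)))
est = phi_hat(Y, A, X, a_hat=a, b_hat=lambda x: a(x) + x, j=5)
# with correct nuisances the estimator should be close to the truth
assert abs(est - truth) < 0.05
```

With correct nuisances the quadratic correction has conditional mean zero, so the estimate tracks the linear term; the correction matters precisely when â, b̂ are only slowly convergent, which is the regime the chain of bias bounds below quantifies.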
Note that division by ĝ is permitted by the properties guaranteed by Theorem 2.3. Indeed, this sequence of estimators is of the form considered in Theorem 2.1 with

    L_1(O) = (Y − b̂(X))(A − â(X)),   L_{2l}(O) = (Y − b̂(X))/√ĝ(X),   L_{2r}(O) = (A − â(X))/√ĝ(X),

where by Theorem 2.3, max{|L_1(O)|, |L_{2l}(O)|, |L_{2r}(O)|} ≤ C(B_L, B_U). Therefore it remains to show that the sequence φ̂_{n,j} satisfies the bias and variance properties (A) and (B) necessary for the application of Theorem 2.1.
We first verify the bias property. Utilizing the representation of the first-order bias as stated above, we have

    |E_P(φ̂_{n,j}) − φ(P)|
    = | E_{P,{2,3}}[ ∫ (b(x) − b̂(x))(a(x) − â(x)) g(x) dx ]
      − E_P[ ((Y_1 − b̂(X_1))/√ĝ(X_1)) K_{V_j}(X_1, X_2) ((A_2 − â(X_2))/√ĝ(X_2)) ] |.       (A.1)
Now, using the notation δb(x) = b(x) − b̂(x) and δa(x) = a(x) − â(x), we have

    E_P[ ((Y_1 − b̂(X_1))/√ĝ(X_1)) K_{V_j}(X_1, X_2) ((A_2 − â(X_2))/√ĝ(X_2)) ]
    = E_{P,{2,3}}[ ∫∫ (δb(x_1)g(x_1)/√ĝ(x_1)) K_{V_j}(x_1, x_2) (δa(x_2)g(x_2)/√ĝ(x_2)) dx_1 dx_2 ]
    = E_{P,{2,3}}[ ∫ (δb(x_1)g(x_1)/√ĝ(x_1)) Π(δa g/√ĝ | V_j)(x_1) dx_1 ]
    = E_{P,{2,3}}[ ∫ (δb(x_1)g(x_1)/√ĝ(x_1)) (δa(x_1)g(x_1)/√ĝ(x_1)) dx_1 ]
      − E_{P,{2,3}}[ ∫ (δb(x_1)g(x_1)/√ĝ(x_1)) Π^⊥(δa g/√ĝ | V_j)(x_1) dx_1 ]
    = E_{P,{2,3}}[ ∫ (b(x) − b̂(x))(a(x) − â(x)) g(x) dx ]
      + E_{P,{2,3}}[ ∫ δa(x_1)δb(x_1) g²(x_1) (1/ĝ(x_1) − 1/g(x_1)) dx_1 ]
      − E_{P,{2,3}}[ ∫ (δb(x_1)g(x_1)/√ĝ(x_1)) Π^⊥(δa g/√ĝ | V_j)(x_1) dx_1 ].              (A.2)

Plugging (A.2) into (A.1), we get

    |E_P(φ̂_{n,j}) − φ(P)|
    = | E_{P,{2,3}}[ ∫ δa(x_1)δb(x_1) g²(x_1) (1/ĝ(x_1) − 1/g(x_1)) dx_1 ]
      − E_{P,{2,3}}[ ∫ (δb(x_1)g(x_1)/√ĝ(x_1)) Π^⊥(δa g/√ĝ | V_j)(x_1) dx_1 ] |.            (A.3)
Now, by repeatedly applying the Cauchy–Schwarz inequality and invoking the results in Theorem 2.3, we have

    | E_{P,{2,3}}[ ∫ δa(x_1)δb(x_1) g²(x_1) (1/ĝ(x_1) − 1/g(x_1)) dx_1 ] |
    ≤ ( E_{P,{3}}[ ∫ (g⁴(x_1)/(g²(x_1)ĝ²(x_1))) (ĝ(x_1) − g(x_1))² dx_1 ] )^{1/2}
      × ( E_{P,{2,3}}[ ∫ (â(x_1) − a(x_1))⁴ dx_1 ] )^{1/4} ( E_{P,{2,3}}[ ∫ (b̂(x_1) − b(x_1))⁴ dx_1 ] )^{1/4}
    ≤ (B_U²/B_L²) ( E_{P,{3}}‖ĝ − g‖_2² )^{1/2} ( E_{P,{2,3}}‖â − a‖_4⁴ )^{1/4} ( E_{P,{2,3}}‖b̂ − b‖_4⁴ )^{1/4}
    ≤ (B_U²/B_L²) C (n/log n)^{ −γ/(2γ+d) − α/(2α+d) − β/(2β+d) }.                          (A.4)
Moreover,

    | E_{P,{2,3}}[ ∫ (δb(x_1)g(x_1)/√ĝ(x_1)) Π^⊥(δa g/√ĝ | V_j)(x_1) dx_1 ] |
    = | E_{P,{2,3}}[ ∫ Π^⊥(δa g/√ĝ | V_j)(x_1) Π^⊥(δb g/√ĝ | V_j)(x_1) dx_1 ] |
    ≤ ( E_{P,{2,3}}‖Π^⊥(δa g/√ĝ | V_j)‖_2² )^{1/2} ( E_{P,{2,3}}‖Π^⊥(δb g/√ĝ | V_j)‖_2² )^{1/2}
    ≤ C ( 2^{−2jα} + 1/n² )^{1/2} ( 2^{−2jβ} + 1/n² )^{1/2},                                (A.5)
where the last line follows for some constant C (depending on M, B_U, B_L, γ_max) by Theorem 2.3, the definition (5.5), and noting that ‖Π(h|V_j)‖_∞ ≤ C(B_U) if ‖h‖_∞ ≤ B_U. Therefore, if n ≤ 2^{jd} ≤ n² along with (α+β)/2 < d/4, one has, combining (A.3), (A.4), and (A.5), that for a constant C (depending on M, B_U, B_L, γ_min, γ_max) and γ_min(ε) := γ_min/(1+ε),
    |E_P(φ̂_{n,j}) − φ(P)|
    ≤ C[ (n/log n)^{−α/(2α+d) − β/(2β+d) − γ/(2γ+d)} + (2^{−2jα} + 1/n²)^{1/2} (2^{−2jβ} + 1/n²)^{1/2} ]
    ≤ C[ (n/log n)^{−α/(2α+d) − β/(2β+d) − γ/(2γ+d)} + 2^{−2jd(α+β)/(2d)} + 3n^{−3/2} ]
    ≤ 4C[ n^{−α/(2α+d) − β/(2β+d) − γ_min(ε)/(2γ_min(ε)+d)} n^{γ_min(ε)/(2γ_min(ε)+d) − γ/(2γ+d)} log n + 2^{−2jd(α+β)/(2d)} ]
    ≤ 4C[ n^{−α/(2α+d) − β/(2β+d) − γ_min(ε)/(2γ_min(ε)+d)} n^{−(γ−γ_min(ε))/(2γ+d)} log n + 2^{−2jd(α+β)/(2d)} ]
    ≤ 4C[ n^{−α/(2α+d) − β/(2β+d) − γ_min(ε)/(2γ_min(ε)+d)} n^{−εγ_min/((1+ε)(2γ_max+d))} log n + 2^{−2jd(α+β)/(2d)} ].
Now, letting θ = (α, β, γ), f_1(θ) = (α+β)/2 and f_2(θ) = α/(2α+d) + β/(2β+d) + γ_min(ε)/(2γ_min(ε)+d), we have that the bias property corresponding to Theorem 2.1 holds with the given choice of f_1 and f_2 and a constant C depending on M, B_U, B_L, γ_max for {P ∈ P_θ : f_1(θ) = (α+β)/2, f_2(θ) > (2α+2β)/(2α+2β+d)}. The proof of the validity of the variance property corresponding to Theorem 2.1 is easy to derive by a standard Hoeffding decomposition of φ̂_{n,j} followed by applications of the moment bounds in Lemmas B.2 and B.5. For calculations of similar flavor, refer to the proof of Theorem 1.3 in Mukherjee and Sen (2016). Note that this is the step where we have used the fact that (α+β)/2 ≤ d/4, since otherwise the linear term dominates, resulting in a variance of order 1/n. Consequently, Theorem 2.1 yields
    sup_{P ∈ P_θ : f_1(θ)=τ, f_2(θ)>4τ/(4τ+d)} E_P( φ̂_{n,j(k*(l̂))} − φ(P) )² ≤ 8C (√log n / n)^{(8τ/d)/(1+4τ/d)}.

Noting that for θ ∈ Θ, since γ_min > 2(1+ε) max{α,β}, one automatically has f_2(θ) > (2α+2β)/(2α+2β+d), which completes the proof of the upper bound.
(ii) Proof of Lower Bound
To prove a lower bound matching the upper bound above, note that φ(P) = E_P(cov_P(Y,A|X)) = E_P(AY) − E_P(a(X)b(X)). Indeed, E_P(AY) can be estimated at a √n-rate by the sample average of A_iY_i. Therefore, it suffices to prove a lower bound for adaptive estimation of E_P(a(X)b(X)).
Let c(X) = E_P(Y|A=1,X) − E_P(Y|A=0,X), which implies, owing to the binary nature of A, that E_P(Y|A,X) = c(X)(A − a(X)) + b(X). For the purpose of the lower bound it is convenient to parametrize the data generating mechanism by (a,b,c,g), which implies that φ(P) = ∫ a(x)b(x)g(x) dx. With this parametrization, we show that the same lower bound holds in a smaller class of problems where g ≡ 1 on [0,1]^d. Specifically, consider

    Θ_sub = { P = (a,b,c,g) : a ∈ H(α,M), b ∈ H(β,M), (α+β)/2 < d/4, g ≡ 1, (a(x),b(x)) ∈ [B_L,B_U]² ∀x ∈ [0,1]^d }.

The likelihood of O ∼ P for P ∈ Θ_sub can then be written as

    a(X)^A (1 − a(X))^{1−A}
    × ( c(X)(1 − a(X)) + b(X) )^{YA} ( 1 − c(X)(1 − a(X)) − b(X) )^{(1−Y)A}
    × ( −c(X)a(X) + b(X) )^{Y(1−A)} ( 1 + c(X)a(X) − b(X) )^{(1−Y)(1−A)}.                  (A.6)
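As a sanity check on the parametrization (a, b, c, g), note that at a fixed x the four probabilities in (A.6) must sum to one and reproduce E(Y|A,X) = c(A − a) + b, hence cov(Y,A|X) = a(1−a)c. The short verification below uses arbitrary illustrative values of (a, b, c):

```python
# joint pmf of (A, Y) at a fixed covariate value, per the likelihood (A.6)
def joint(a, b, c):
    return {
        (1, 1): a * (c * (1 - a) + b),
        (1, 0): a * (1 - c * (1 - a) - b),
        (0, 1): (1 - a) * (-c * a + b),
        (0, 0): (1 - a) * (1 + c * a - b),
    }

a, b, c = 0.4, 0.5, 0.2          # illustrative values keeping all masses in (0, 1)
p = joint(a, b, c)
assert abs(sum(p.values()) - 1.0) < 1e-12
# E[Y | A = A0] equals c*(A0 - a) + b
for A0 in (0, 1):
    pA = p[(A0, 1)] + p[(A0, 0)]
    assert abs(p[(A0, 1)] / pA - (c * (A0 - a) + b)) < 1e-12
# cov(Y, A) at this x equals a*(1-a)*c
EA = p[(1, 1)] + p[(1, 0)]
EY = p[(1, 1)] + p[(0, 1)]
assert abs((p[(1, 1)] - EA * EY) - a * (1 - a) * c) < 1e-12
```

In particular E[Y|X] = b, so the functional ∫ab g dx is exactly the product-of-marginals part of the covariance, which is what the two-prior construction below separates.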
Suppose that for some (α, β, γ) tuple in the original problem Θ, one has

    sup_{P ∈ P(α,β,γ)} E_P( φ̂ − φ(P) )² ≤ C (√log n / n)^{(4α+4β)/(d+2α+2β)}.

Now, let H : [0,1]^d → ℝ be a C^∞ function supported on [0, 1/2]^d such that ∫H(x)dx = 0 and ∫H²(x)dx = 1, and for k ∈ ℕ (to be decided later) let Ω_1, …, Ω_k be translates of the cube k^{−1/d}[0, 1/2]^d that are disjoint and contained in [0,1]^d. Let x_1, …, x_k denote the bottom-left corners of these cubes.
Assume first that α < β. We set, for λ = (λ_1, …, λ_k) ∈ {−1,+1}^k and α ≤ β' < β,

    a_λ(x) = 1/2 + (1/k)^{α/d} Σ_{j=1}^k λ_j H((x − x_j)k^{1/d}),
    b_λ(x) = 1/2 + (1/k)^{β'/d} Σ_{j=1}^k λ_j H((x − x_j)k^{1/d}),
    c_λ(x) = (1/2 − b_λ(x)) / (1 − a_λ(x)).

A properly chosen H guarantees a_λ ∈ H(α,M) and b_λ ∈ H(β',M) for all λ. Let

    Θ_0 = { P^n : P = (a_λ, 1/2, 0, 1), λ ∈ {−1,+1}^k },

and

    Θ_1 = { P^n : P = (a_λ, b_λ, c_λ, 1), λ ∈ {−1,+1}^k }.
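The two priors are separated because the cross terms involving ∫H vanish while the λ_j² terms accumulate: ∫a_λb_λ dx − 1/4 = k^{−(α+β')/d}. The d = 1 sketch below builds such an H numerically (a normalized difference of two C^∞ bumps supported in (0, 1/2), an illustrative choice) and checks this separation identity.

```python
import numpy as np

# A mean-zero, L2-normalized C-infinity bump H supported in (0, 1/2); the
# explicit shape is an illustrative choice: any H with these moments works.
t = np.linspace(0.0, 0.5, 20_001)
dt = t[1] - t[0]

def bump(u):
    # C-infinity bump supported on (0, 1)
    out = np.zeros_like(u)
    inside = (u > 0) & (u < 1)
    out[inside] = np.exp(-1.0 / (u[inside] * (1.0 - u[inside])))
    return out

raw = bump(4 * t) - bump(4 * t - 1)          # two opposite-sign bumps inside (0, 1/2)
H = raw / np.sqrt((raw**2).sum() * dt)       # normalize: integral of H^2 equals 1

assert abs(H.sum() * dt) < 1e-8              # integral of H equals 0
assert abs((H**2).sum() * dt - 1.0) < 1e-9

# separation: integral of a_lam * b_lam minus 1/4 equals k^{-(alpha+beta')/d}
k, alpha, beta_p, d = 8, 0.6, 0.8, 1
lam = np.array([1, -1, 1, 1, -1, 1, -1, -1])
x = np.linspace(0.0, 1.0, 160_001)
hx = x[1] - x[0]
perturb = np.zeros_like(x)
for j in range(k):                            # lam_j * H((x - x_j) k^{1/d}) on cube j
    perturb += lam[j] * np.interp(k * (x - j / k), t, H, left=0.0, right=0.0)
a_lam = 0.5 + k ** (-alpha / d) * perturb
b_lam = 0.5 + k ** (-beta_p / d) * perturb
sep = (a_lam * b_lam).sum() * hx - 0.25
assert abs(sep - k ** (-(alpha + beta_p) / d)) < 1e-3
```

The separation is independent of the signs λ, which is what makes the mixture over λ indistinguishable at first order while shifting the functional by a fixed amount.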
Finally, let Θ_test = Θ_0 ∪ Θ_1, and let π_0 and π_1 be uniform priors on Θ_0 and Θ_1 respectively. It is easy to check that, by our choice of H, φ(P) = 1/4 on Θ_0 and φ(P) = 1/4 + (1/k)^{(α+β')/d} for P ∈ Θ_1. Therefore, using the notation of Lemma B.1, μ_1 = 1/4, μ_2 = 1/4 + (1/k)^{(α+β')/d}, and σ_1 = σ_2 = 0. Since Θ_0 ⊆ P(α,β,γ), the worst-case error of estimation over Θ_0 must be bounded by C(√log n/n)^{(4α+4β)/(d+2α+2β)}. Therefore, the π_0-average bias over Θ_0 is also bounded by C(√log n/n)^{(2α+2β)/(d+2α+2β)}. This implies, by Lemma B.1, that the π_1-average bias over Θ_1 (and hence the worst-case bias over Θ_1) is bounded below by

    (1/k)^{(α+β')/d} − C(√log n/n)^{(2α+2β)/(d+2α+2β)} − C(√log n/n)^{(2α+2β)/(d+2α+2β)} η,   (A.7)

where η is the chi-square divergence between the probability measures ∫P^n dπ_0(P^n) and ∫P^n dπ_1(P^n). We now bound η using Theorem 2.2.
To put ourselves in the notation of Theorem 2.2, let for λ ∈ {−1, +1}k , Pλ and Qλ be the
probability measures identified from Θ0 and Θ1 respectively.
Therefore, with χj = {0, 1}×{0, 1}×Ωj , we indeed have for all j = 1, . . . , k, Pλ (χj ) = Qλ (χj ) = pj
where there exists a constant c such that pj = kc .
Letting π be the uniform prior over {−1,+1}^k, it is immediate that η = χ²( ∫P_λ dπ(λ), ∫Q_λ dπ(λ) ).
It now follows by calculations similar to proof of Theorem 4.1 in Robins et al. (2009), that for a
constant C ′ > 0
  χ²( ∫P_λ dπ(λ), ∫Q_λ dπ(λ) ) ≤ exp{ C′ n² ( k^{−4β′/d − 1} + k^{−4(α+β′)/(2d) − 1} ) } − 1.

Now choosing k = (n/√(c_* log n))^{2d/(d+2α+2β′)}, we have

  χ²( ∫P_λ dπ(λ), ∫Q_λ dπ(λ) ) ≤ n^{2C′c_*} − 1.

Therefore, choosing c_* such that 2C′c_* + (2α+2β′)/(2α+2β′+d) < (2α+2β)/(2α+2β+d), we have the desired result by (A.7).
The proof for α > β is similar after changing various quantities to:

  a_λ(x) = 1/2 + (1/k)^{α′/d} Σ_{j=1}^k λ_j H((x − x_j) k^{1/d}),   with β ≤ α′ < α,
  b_λ(x) = 1/2 + (1/k)^{β/d} Σ_{j=1}^k λ_j H((x − x_j) k^{1/d}),
  c_λ(X) = (1/2 − a_λ(X)) b_λ(X) / ( a_λ(X)(1 − a_λ(X)) ),
  Θ_0 = { P^n : P = (1/2, b_λ, 0, 1) : λ ∈ {−1,+1}^k },
ADAPTIVE ESTIMATION OF NONPARAMETRIC FUNCTIONALS
and

  Θ_1 = { P^n : P = (a_λ, b_λ, c_λ, 1) : λ ∈ {−1,+1}^k }.
For the case of α = β, choose α′ < β and thereafter work with

  a_λ(x) = 1/2 + (1/k)^{α′/d} Σ_{j=1}^k λ_j H((x − x_j) k^{1/d}),
  b_λ(x) = 1/2 + (1/k)^{β/d} Σ_{j=1}^k λ_j H((x − x_j) k^{1/d}),
  c_λ(x) = (1/2 − b_λ(x)) / (1 − a_λ(x)),
  Θ_0 = { P^n : P = (a_λ, 1/2, 0, 1) : λ ∈ {−1,+1}^k },

and

  Θ_1 = { P^n : P = (a_λ, b_λ, c_λ, 1) : λ ∈ {−1,+1}^k }.
This completes the proof of the lower bound.
Proof of Theorem 3.2.
Proof. (i) Proof of Upper Bound
The general scheme of the proof is the same as that of Theorem 3.1 and involves identifying a non-adaptive minimax estimator of φ(P) under the knowledge of P ∈ P(α,β,γ), demonstrating suitable bias and variance properties of this sequence of estimators, and thereafter invoking Theorem 2.1 to conclude. This routine can be carried out as follows. Without loss of generality assume that we have 3n samples {Y_iA_i, A_i, X_i}_{i=1}^{3n}. Divide the samples into 3 equal parts (with the l-th part indexed by {(l−1)n+1, …, ln} for l ∈ {1,2,3}), estimate f by f̂ adaptively from the third part (as in Theorem 2.3), and estimate E(A|x) and b by Ê(A|x) and b̂(x) := Ê(Y|A = 1, x) respectively, adaptively from the second part (as in Theorem 2.3). Let E_{P,S} denote the expectation with the samples with indices in S held fixed, for S ⊂ {1,2,3}. Note that g(X) = f(X|A = 1)P(A = 1). Therefore, we also estimate P(A = 1) by π̂ := (1/n) Σ_{i=2n+1}^{3n} A_i, i.e. the sample average of the A's from the third part of the sample, and f̂_1 is an estimator of f(X|A = 1) obtained from the third part of our sample by density estimation among the observations with A = 1. Finally, our estimates of a and g are â(x) = 1/Ê(A|x) and ĝ = f̂_1 π̂ respectively. In the following, we will freely use Theorem 2.3 for the desired properties of â, b̂, and ĝ. In particular, following the proof of Theorem 2.3, we can assume that our choice of ĝ also satisfies the necessary conditions of boundedness away from 0 and ∞, as well as membership in H(γ, C) with high probability for a large enough C > 0.
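The three-fold sample splitting used above can be sketched as follows; `three_fold_indices` is a hypothetical helper (not from the paper) that just reproduces the index sets {(l−1)n+1, …, ln} for l = 1, 2, 3.

```python
def three_fold_indices(n):
    # Split the 3n sample indices {1, ..., 3n} into three consecutive folds,
    # as in the proof: fold l is {(l-1)n + 1, ..., l*n} for l = 1, 2, 3
    # (1-based to match the text).  Fold 2 trains b-hat, fold 3 trains g-hat,
    # fold 1 is used to evaluate the estimator.
    return [list(range((l - 1) * n + 1, l * n + 1)) for l in (1, 2, 3)]

folds = three_fold_indices(5)
print(folds[0], folds[2][0], folds[2][-1])  # [1, 2, 3, 4, 5] 11 15
```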
A first order influence function for φ(P) at P is given by Aa(X)(Y − b(X)) + b(X) − φ(P), and a resulting first order estimator for φ(P) is (1/n) Σ_{i=1}^n [ A_i â(X_i)(Y_i − b̂(X_i)) + b̂(X_i) ]. This estimator has a bias −E_{P,{2,3}} ∫ (b(x) − b̂(x))(a(x) − â(x)) g(x)dx. Indeed, for (α+β)/2 < d/4, this bias turns out to be suboptimal compared to the minimax rate of convergence n^{−(4α+4β)/(2α+2β+d)} in mean squared loss. Similar to the proof of Theorem 3.1, we use a second order bias corrected estimator as follows.
Once again we work with the linear subspace V_j (defined in (5)), where the choice of j is guided by the balance between the bias and variance properties of the resulting estimator. In particular, the choice of 2^j is guided by the knowledge of the parameter space P(α,β,γ). For any j such that n ≤ 2^{jd} ≤ n², our bias corrected second order estimator of φ(P) is given by

  φ̂_{n,j} = (1/n) Σ_{i=1}^n [ A_i â(X_i)(Y_i − b̂(X_i)) + b̂(X_i) ]
           + (1/(n(n−1))) Σ_{1≤i_1≠i_2≤n} S[ ( A_{i_1}(Y_{i_1} − b̂(X_{i_1})) / √(ĝ(X_{i_1})) ) K_{V_j}(X_{i_1}, X_{i_2}) ( (A_{i_2} â(X_{i_2}) − 1) / √(ĝ(X_{i_2})) ) ].
Note that division by ĝ is permitted by the properties guaranteed by Theorem 2.3. Indeed this sequence of estimators is in the form of those considered by Theorem 2.1 with

  L_1(O) = Aâ(X)(Y − b̂(X)) + b̂(X),
  L_{2l}(O) = − A(Y − b̂(X)) / √(ĝ(X)),
  L_{2r}(O) = (Aâ(X) − 1) / √(ĝ(X)),

where by Theorem 2.3, max{|L_1(O)|, |L_{2l}(O)|, |L_{2r}(O)|} ≤ C(B_L, B_U). Therefore it remains to show that this sequence φ̂_{n,j} satisfies the bias and variance properties (A) and (B) necessary for the application of Theorem 2.1. Using the conditional independence of Y and A given X, one has, by calculations exactly parallel to those in the proof of Theorem 3.1, that for a constant C (depending on M, B_U, B_L, γ_min, γ_max),
  |E_P(φ̂_{n,j}) − φ(P)| ≤ C ( n^{−α/(2α+d) − β/(2β+d) − γ_min(ε)/(2γ_min(ε)+d) − εγ_min/((1+ε)(2γ_max+d))} log n + 2^{−j(α+β)} ),

where γ_min(ε) := γ_min/(1+ε). Now, letting θ = (α, β, γ), f_1(θ) = (α+β)/2 and f_2(θ) = α/(2α+d) + β/(2β+d) + γ_min(ε)/(2γ_min(ε)+d), we have that the bias property corresponding to Theorem 2.1 holds with the given choice of f_1 and f_2 and a constant C depending on M, B_U, B_L, γ_max for {P ∈ P_θ : f_1(θ) = (α+β)/2, f_2(θ) > (2α+2β)/(2α+2β+d)}. The validity of the variance property corresponding to Theorem 2.1 is once again easy to derive by a standard Hoeffding decomposition of φ̂_{n,j} followed by applications of the moment bounds in Lemmas B.2 and B.5. An application of Theorem 2.1 then yields

  sup_{P ∈ P_θ : f_1(θ) = τ, f_2(θ) > 4τ/(4τ+d)} E_P( φ̂_{n,j(k_*(l̂))} − φ(P) )² ≤ 8C ((√log n)/n)^{(8τ/d)/(1+4τ/d)}.

Noting that for θ ∈ Θ, since γ_min > 2(1+ε) max{α, β}, one automatically has f_2(θ) > (2α+2β)/(2α+2β+d), this completes the proof of the upper bound.
(ii) Proof of Lower Bound
First note that we can parametrize our distributions by the tuple of functions (a, b, g). We show
that the same lower bound holds in a smaller class of problems where g ≡ 1/2 on [0,1]^d. Specifically, consider

  Θ_sub = { P = (a, b, g) : a ∈ H(α, M), b ∈ H(β, M), (α+β)/2 < d/4, g ≡ 1/2, (a(x), b(x)) ∈ [B_L, B_U]² ∀x ∈ [0,1]^d }.
The observed data likelihood of O ∼ P for P ∈ Θ_sub can then be written as

  a(X)^A (1 − a(X))^{1−A} b(X)^Y (1 − b(X))^{1−Y}.        (A.8)

Suppose that for some (α, β, γ) tuple in the original problem Θ one has

  sup_{P ∈ P(α,β,γ)} E_P(φ̂ − φ(P))² ≤ C ((√log n)/n)^{(4α+4β)/(d+2α+2β)}.
Now, let H : [0,1]^d → R be a C^∞ function supported on [0, 1/2]^d such that ∫H(x)dx = 0 and ∫H²(x)dx = 1, and let, for k ∈ N (to be decided later), Ω_1, …, Ω_k be the translates of the cube k^{−1/d}[0, 1/2]^d that are disjoint and contained in [0,1]^d. Let x_1, …, x_k denote the bottom left corners of these cubes.
Assume first that α < β. For λ = (λ_1, …, λ_k) ∈ {−1,+1}^k and α ≤ β′ < β, we set

  a_λ(x) = 1/2 + (1/k)^{α/d} Σ_{j=1}^k λ_j H((x − x_j) k^{1/d}),
  b_λ(x) = 1/2 + (1/k)^{β′/d} Σ_{j=1}^k λ_j H((x − x_j) k^{1/d}).
A properly chosen H guarantees a_λ ∈ H(α, M) and b_λ ∈ H(β′, M) for all λ. Let

  Θ_0 = { P^n : P = (a_λ, 1/2, 1/2) : λ ∈ {−1,+1}^k },

and

  Θ_1 = { P^n : P = (a_λ, b_λ, 1/2) : λ ∈ {−1,+1}^k }.
Finally let Θtest = Θ0 ∪ Θ1 . Let π0 and π1 be uniform priors on Θ0 and Θ1 respectively. It is
α+β ′
1
1 1
1
easy to check that by our choice of H, φ(P ) = 2 on Θ0 and φ(P ) = 2 + 2 k d for P ∈ Θ1 .
α+β ′
Therefore, using notation from Lemma B.1, µ1 = 21 , µ2 = 12 + 21 k1 d , and σ1 = σ2 = 0.
Since Θ0 ⊆ P (α, β, γ), we must have that worst case error of estimation over Θ0 is bounded by
4α+4β
2α+2β
√
√
log n d+2α+2β
log n d+2α+2β
C
.
Therefore,
the
π
average
bias
over
Θ
is
also
bounded
by
C
.
0
0
n
n
This implies by Lemma B.1, that the π1 average bias over Θ1 (and hence the worst case bias over
Θ1 ) is bounded below by a constant multiple of
α+β ′ √
2α+2β
√
2α+2β
d
log n d+2α+2β
log n d+2α+2β
1
−
−
η,
k
n
n
(A.9)
where η is the chi-square divergence between the probability measures ∫P^n dπ_0(P^n) and ∫P^n dπ_1(P^n). We now bound η using Theorem 2.2.
To put ourselves in the notation of Theorem 2.2, let for λ ∈ {−1, +1}k , Pλ and Qλ be the
probability measures identified from Θ0 and Θ1 respectively.
Therefore, with χj = {0, 1}×{0, 1}×Ωj , we indeed have for all j = 1, . . . , k, Pλ (χj ) = Qλ (χj ) = pj
where there exists a constant c such that pj = kc .
Letting π be the uniform prior over {−1,+1}^k, it is immediate that η = χ²( ∫P_λ dπ(λ), ∫Q_λ dπ(λ) ).
It now follows by calculations similar to proof of Theorem 4.1 in Robins et al. (2009), that for a
constant C ′ > 0
  χ²( ∫P_λ dπ(λ), ∫Q_λ dπ(λ) ) ≤ exp{ C′ n² ( k^{−4β′/d − 1} + k^{−4(α+β′)/(2d) − 1} ) } − 1.

Now choosing k = (n/√(c_* log n))^{2d/(d+2α+2β′)}, we have

  χ²( ∫P_λ dπ(λ), ∫Q_λ dπ(λ) ) ≤ n^{2C′c_*} − 1.

Therefore, choosing c_* such that 2C′c_* + (2α+2β′)/(2α+2β′+d) < (2α+2β)/(2α+2β+d), we have the desired result by (A.9).
The proof for α > β is similar after changing various quantities to:

  a_λ(x) = 1/2 + (1/k)^{α′/d} Σ_{j=1}^k λ_j H((x − x_j) k^{1/d}),   with β ≤ α′ < α,
  b_λ(x) = 1/2 + (1/k)^{β/d} Σ_{j=1}^k λ_j H((x − x_j) k^{1/d}),
  Θ_0 = { P^n : P = (1/2, b_λ, 1/2) : λ ∈ {−1,+1}^k },

and

  Θ_1 = { P^n : P = (a_λ, b_λ, 1/2) : λ ∈ {−1,+1}^k }.
For the case of α = β, choose α′ < β and thereafter work with

  a_λ(x) = 1/2 + (1/k)^{α′/d} Σ_{j=1}^k λ_j H((x − x_j) k^{1/d}),
  b_λ(x) = 1/2 + (1/k)^{β/d} Σ_{j=1}^k λ_j H((x − x_j) k^{1/d}),
  Θ_0 = { P^n : P = (a_λ, 1/2, 1/2) : λ ∈ {−1,+1}^k },

and

  Θ_1 = { P^n : P = (a_λ, b_λ, 1/2) : λ ∈ {−1,+1}^k }.
This completes the proof of the lower bound.
Proof of Theorem 3.3.
Proof. (i) Proof of Upper Bound
Without loss of generality assume that we have 3n samples {Y_i, A_i, X_i}_{i=1}^{3n}. Divide the samples into 3 equal parts (with the l-th part indexed by {(l−1)n+1, …, ln} for l ∈ {1,2,3}), estimate g by ĝ adaptively from the third part (as in Theorem 2.3), and estimate b by b̂ adaptively from the second part (as in Theorem 2.3). Let E_{P,S} denote the expectation with the samples with indices in S held fixed, for S ⊂ {1,2,3}. For any j such that n ≤ 2^{jd} ≤ n², consider

  φ̂_{n,j} = (1/n) Σ_{i=1}^n (2Y_i − b̂(X_i)) b̂(X_i)
           + (1/(n(n−1))) Σ_{1≤i_1≠i_2≤n} ( (Y_{i_1} − b̂(X_{i_1})) / √(ĝ(X_{i_1})) ) K_{V_j}(X_{i_1}, X_{i_2}) ( (Y_{i_2} − b̂(X_{i_2})) / √(ĝ(X_{i_2})) ).
Indeed this sequence of estimators is in the form of those considered by Theorem 2.1 with

  L_1(O) = (2Y − b̂(X)) b̂(X),
  L_{2l}(O) = − (Y − b̂(X)) / √(ĝ(X)),
  L_{2r}(O) = (Y − b̂(X)) / √(ĝ(X)),
where by Theorem 2.3, max{|L_1(O)|, |L_{2l}(O)|, |L_{2r}(O)|} ≤ C(B_L, B_U). Therefore it remains to show that this sequence φ̂_{n,j} satisfies the bias and variance properties (A) and (B) necessary for the application of Theorem 2.1. We first verify the bias property. Utilizing the representation of the first order bias as stated above, we have

  |E_P(φ̂_{n,j}) − φ(P)| = | E_{P,{2,3}} ∫ (b(x) − b̂(x))² g(x)dx − E_P S[ ( (Y_1 − b̂(X_1)) / √(ĝ(X_1)) ) K_{V_j}(X_1, X_2) ( (Y_2 − b̂(X_2)) / √(ĝ(X_2)) ) ] |.        (A.10)

Now, by calculations similar to the proof of Theorem 3.1, one can show that for a constant C (depending on M, B_U, B_L, γ_min, γ_max),

  |E_P(φ̂_{n,j}) − φ(P)| ≤ C ( n^{−2β/(2β+d) − γ_min(ε)/(2γ_min(ε)+d) − εγ_min/((1+ε)(2γ_max+d))} log n + 2^{−2jβ} ),   γ_min(ε) := γ_min/(1+ε).

Now, letting θ = (β, γ), f_1(θ) = β and f_2(θ) = 2β/(2β+d) + γ_min(ε)/(2γ_min(ε)+d), the rest of the proof follows along the lines of the proof of Theorem 3.1.
(ii) Proof of Lower Bound
The proof of the lower bound is very similar to that of the lower bound proof in Theorem 3.1, after
identifying Y = A almost surely P, and hence is omitted.
APPENDIX B: TECHNICAL LEMMAS
B.1. Constrained Risk Inequality. A main tool for producing adaptive lower bound arguments is a general version of the constrained risk inequality due to Cai and Low (2011), obtained as an extension of Brown and Low (1996). For the sake of completeness, we begin with a summary of these results. Suppose Z has distribution P_θ, where θ belongs to some parameter space Θ. Let Q̂ = Q̂(Z) be an estimator of a function Q(θ) based on Z, with bias B(θ) := E_θ(Q̂) − Q(θ). Now suppose that Θ_0 and Θ_1 form a disjoint partition of Θ with priors π_0 and π_1 supported on them respectively. Also, let μ_i = ∫Q(θ)dπ_i and σ_i² = ∫(Q(θ) − μ_i)²dπ_i, i = 0, 1, be the mean and variance of Q(θ) under the two priors π_0 and π_1. Letting γ_i be the marginal density (with respect to some common dominating measure) of Z under π_i, i = 0, 1, let us denote by E_{γ_0}(g(Z)) the expectation of g(Z) with respect to the marginal density of Z under prior π_0, and distinguish it from E_θ(g(Z)), which is the expectation under P_θ. Lastly, denote the chi-square divergence between γ_0 and γ_1 by χ = ( E_{γ_0}[ (γ_1/γ_0 − 1)² ] )^{1/2}. Then we have the following result.
Lemma B.1 (Cai and Low (2011)). If ∫ E_θ( Q̂(Z) − Q(θ) )² dπ_0(θ) ≤ ε², then

  ∫ B(θ) dπ_1(θ) − ∫ B(θ) dπ_0(θ) ≥ |μ_1 − μ_0| − (ε + σ_0)χ.
Since the maximum risk is always at least as large as the average risk, this immediately yields a
lower bound on the minimax risk.
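The way Lemma B.1 is invoked around (A.7) and (A.9) can be packaged as a small numerical sketch. The helper `bias_lower_bound` and all parameter values below are illustrative assumptions (not from the paper), chosen only to show that when β′ < β the separation (1/k)^{(α+β′)/d} dominates the assumed adaptive rate over Θ_0.

```python
import math

def bias_lower_bound(mu0, mu1, eps, sigma0, chi):
    # Right-hand side of Lemma B.1: a lower bound on the difference of average
    # biases under pi_1 and pi_0 (illustrative helper, not from the paper).
    return abs(mu1 - mu0) - (eps + sigma0) * chi

# Shape of the argument: |mu_1 - mu_0| is the perturbation size
# (1/k)^((alpha+beta')/d); eps is the assumed adaptive rate over Theta_0;
# sigma_0 = 0 since phi is constant on Theta_0; chi = 1 stands in for the
# chi-square term controlled via Theorem 2.2.
n, d, alpha, beta, beta_p = 10_000, 1, 0.25, 0.5, 0.3
k = int((n / math.sqrt(math.log(n))) ** (2 * d / (d + 2 * alpha + 2 * beta_p)))
sep = (1.0 / k) ** ((alpha + beta_p) / d)
rate = (math.sqrt(math.log(n)) / n) ** ((2 * alpha + 2 * beta) / (d + 2 * alpha + 2 * beta))
lb = bias_lower_bound(1 / 4, 1 / 4 + sep, eps=rate, sigma0=0.0, chi=1.0)
print(lb > 0)  # with beta' < beta the separation dominates the assumed rate
```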
B.2. Tail and Moment Bounds. The U-statistics appearing in this paper are mostly based on projection kernels sandwiched between arbitrary bounded functions. This necessitates generalizing the U-statistic bounds obtained in Bull and Nickl (2013), as in Mukherjee and Sen (2016).

Lemma B.2. Let O_1, …, O_n ∼ P be iid random vectors of observations such that X_i ∈ [0,1]^d is a sub-vector of O_i for each i. There exists a constant C := C(B, B_U, J_0) > 0 such that the following hold:

(i)
  P( | (1/(n(n−1))) Σ_{i_1≠i_2} R(O_{i_1}, O_{i_2}) − E(R(O_1, O_2)) | ≥ t ) ≤ e^{−Cnt²} + e^{−Ct²/a_1²} + e^{−Ct/a_2} + e^{−C√(t/a_3)},        (B.1)

(ii)
  E| (1/(n(n−1))) Σ_{i_1≠i_2} R(O_{i_1}, O_{i_2}) − E(R(O_1, O_2)) |^{2q} ≤ ( C 2^{jd}/n² )^q,

where a_1 = 2^{jd/2}/(n−1), a_2 = (1/(n−1))( √(2^{jd}/n) + 1 ), a_3 = (1/(n−1))( √(2^{jd}/n) + 2^{jd}/n ), R(O_1, O_2) = S[ L_{2l}(O_1) K_{V_j}(X_1, X_2) L_{2r}(O_2) ] with max{|L_{2l}(O)|, |L_{2r}(O)|} ≤ B almost surely, and the X_i ∈ [0,1]^d are iid with density g such that g(x) ≤ B_U for all x ∈ [0,1]^d.
Proof. The proof of part (i) can be found in Mukherjee and Sen (2016); for the sake of completeness, we provide it here again. We give the proof for the special case where L_{2l} = L_{2r} = L; the details of the argument show that the proof continues to hold for symmetrized U-statistics as defined here.

The proof hinges on the following tail bound for second order degenerate U-statistics (Giné and Nickl, 2015), which is due to Giné, Latala and Zinn (2000) with constants by Houdré and Reynaud-Bouret (2003), and which is crucial for our calculations.

Lemma B.3. Let U_n be a degenerate U-statistic of order 2 with kernel R based on an i.i.d. sample W_1, …, W_n. Then there exists a constant C independent of n, such that

  P[ | Σ_{i≠j} R(W_i, W_j) | ≥ C( Λ_1 √u + Λ_2 u + Λ_3 u^{3/2} + Λ_4 u² ) ] ≤ 6 exp(−u),

where

  Λ_1² = (n(n−1)/2) E[R²(W_1, W_2)],
  Λ_2 = n sup{ E[R(W_1, W_2) ζ(W_1) ξ(W_2)] : E[ζ²(W_1)] ≤ 1, E[ξ²(W_1)] ≤ 1 },
  Λ_3 = ‖ n E[R²(W_1, ·)] ‖_∞^{1/2},
  Λ_4 = ‖R‖_∞.
We use this lemma to establish Lemma B.2. By Hoeffding's decomposition one has

  (1/(n(n−1))) Σ_{i_1≠i_2} R(O_{i_1}, O_{i_2}) − E(R(O_1, O_2))
    = (2/n) Σ_{i_1=1}^n [ E_{O_{i_1}}( R(O_{i_1}, O_{i_2}) ) − ER(O_{i_1}, O_{i_2}) ]
    + (1/(n(n−1))) Σ_{i_1≠i_2} [ R(O_{i_1}, O_{i_2}) − E_{O_{i_1}} R(O_{i_1}, O_{i_2}) − E_{O_{i_2}} R(O_{i_1}, O_{i_2}) + ER(O_{i_1}, O_{i_2}) ]
    := T_1 + T_2.
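The Hoeffding decomposition is an exact algebraic identity, which can be checked numerically for a toy kernel. The choice R(O_1, O_2) = O_1 O_2 with Bernoulli observations below is an illustrative assumption (the paper's kernel is the projection kernel), picked so that E(R | O_i) and ER are available in closed form.

```python
import numpy as np

# Toy setting: O_i ~ Bernoulli(p), kernel R(O1, O2) = O1 * O2, so that
# E(R | O_i) = O_i * p and E(R) = p^2 in closed form.
rng = np.random.default_rng(0)
n, p = 50, 0.3
O = rng.binomial(1, p, size=n).astype(float)

# Full centered U-statistic (mean over ordered pairs i1 != i2).
S = O.sum()
ustat = (S * S - np.dot(O, O)) / (n * (n - 1))
lhs = ustat - p ** 2

# T1: linear (Hajek) projection; T2: degenerate remainder, computed directly
# from its definition.
T1 = (2.0 / n) * np.sum(O * p - p ** 2)
T2 = sum((O[i] * O[j] - O[i] * p - O[j] * p + p ** 2)
         for i in range(n) for j in range(n) if i != j) / (n * (n - 1))
print(np.isclose(lhs, T1 + T2))  # True: the decomposition is exact
```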
i1 6=i2
P
B.2.1. Analysis of T1 . Noting that T1 = n2 ni1 =1 H(Oi1 ) where H(Oi1 ) = E (R (Oi1 , Oi2 |Oi1 ))−
ER (Oi1 , Oi2 ) we control T1 by standard Hoeffding’s Inequality. First note that,
|H(Oi1 )|
X X h
2 i
v
v
v
|
=|
L (Oi1 ) ψjk
(Xi1 ) E ψjk
(Xi2 ) L (Oi2 ) − E ψjk
(Xi2 ) L (Oi2 )
k∈Zj v∈{0,1}d
≤
+
X
X
k∈Zj v∈{0,1}d
X
X
v
v
|L (Oi1 ) ψjk
(Xi1 ) E ψjk
(Xi2 ) L (Oi2 ) |
v
E ψjk
(Xi2 ) L (Oi2 )
k∈Zj v∈{0,1}d
2
First, by standard compactness argument for the wavelet bases,
v
(X) L(O) | ≤
|E ψjk
Z
d
jd Y
vl j
(2 xl − kl ) ||g(x)|dx
ψ00
|E (L(O)|X = x) 2 2
l=1
40
jd
≤ C(B, BU , J0 )2− 2 .
(B.2)
Therefore,
X
X
k∈Zj v∈{0,1}d
2
v
E ψjk
(Xi2 ) L (Oi2 )
≤ C(B, BU , J0 )
(B.3)
Also, using the fact that for each fixed x ∈ [0, 1]d , the number indices k ∈ Zj such that x belongs to
v is bounded by a constant depending only on ψ 0 and ψ 1 . Therefore
support of at least one of ψjk
00
00
combining (B.2) and (B.3),
X X
v
v
|L (Oi1 ) ψjk
(Xi1 ) E ψjk
(Xi2 ) L (Oi2 ) |
k∈Zj v∈{0,1}d
jd
jd
≤ C(B, BU , J0 )2− 2 2 2 = C(B, BU , J0 ).
(B.4)
Therefore, by (B.4) and Hoeffding’s Inequality,
2
P (|T1 | ≥ t) ≤ 2e−C(B,BU ,J0 )nt .
(B.5)
B.2.2. Analysis of T_2. Since T_2 is a degenerate U-statistic, its analysis is based on Lemma B.3. In particular,

  T_2 = (1/(n(n−1))) Σ_{i_1≠i_2} R^*(O_{i_1}, O_{i_2}),

where

  R^*(O_{i_1}, O_{i_2}) = Σ_{k∈Z_j} Σ_{v∈{0,1}^d} ( L(O_{i_1}) ψ^v_{jk}(X_{i_1}) − E[ ψ^v_{jk}(X_{i_1}) E(L(O_{i_1})|X_{i_1}) ] ) × ( L(O_{i_2}) ψ^v_{jk}(X_{i_2}) − E[ ψ^v_{jk}(X_{i_2}) E(L(O_{i_2})|X_{i_2}) ] ).

Letting Λ_i, i = 1, …, 4, be the relevant quantities as in Lemma B.3, we have the following lemma.

Lemma B.4. There exists a constant C = C(B, B_U, J_0) such that

  Λ_1² ≤ C (n(n−1)/2) 2^{jd},   Λ_2 ≤ Cn,   Λ_3² ≤ C n 2^{jd},   Λ_4 ≤ C 2^{jd}.
Proof. First we control Λ_1. To this end, note that by simple calculations, using the bounds on L and g and the orthonormality of the ψ^v_{jk}, we have

  Λ_1² = (n(n−1)/2) E{R^*(O_1, O_2)}² ≤ 3n(n−1) E( R²(O_1, O_2) )
       = 3n(n−1) E( L²(O_1) K²_{V_j}(X_1, X_2) L²(O_2) )
       ≤ 3n(n−1) B⁴ ∫∫ [ Σ_{k∈Z_j} Σ_{v∈{0,1}^d} ψ^v_{jk}(x_1) ψ^v_{jk}(x_2) ]² g(x_1) g(x_2) dx_1 dx_2
       ≤ 3n(n−1) B⁴ B_U² ∫∫ [ Σ_{k∈Z_j} Σ_{v∈{0,1}^d} ψ^v_{jk}(x_1) ψ^v_{jk}(x_2) ]² dx_1 dx_2
       = 3n(n−1) B⁴ B_U² Σ_{k∈Z_j} Σ_{v∈{0,1}^d} ∫ ( ψ^v_{jk}(x_1) )² dx_1 ∫ ( ψ^v_{jk}(x_2) )² dx_2
       ≤ C(B, B_U, J_0) n(n−1) 2^{jd}.
Next we control

  Λ_2 = n sup{ E( R^*(O_1, O_2) ζ(O_1) ξ(O_2) ) : E(ζ²(O_1)) ≤ 1, E(ξ²(O_2)) ≤ 1 }.

To this end, we first control

  |E( L(O_1) K_{V_j}(X_1, X_2) L(O_2) ζ(O_1) ξ(O_2) )|
    = | ∫∫ E(L(O_1)ζ(O_1)|X_1 = x_1) K_{V_j}(x_1, x_2) E(L(O_2)ξ(O_2)|X_2 = x_2) g(x_1) g(x_2) dx_1 dx_2 |
    = | ∫ E(L(O)ζ(O)|X = x) Π( E(L(O)ξ(O)|X = x) g(x) | V_j ) g(x) dx |
    ≤ ( ∫ E²(L(O)ζ(O)|X = x) g²(x) dx )^{1/2} ( ∫ Π( E(L(O)ξ(O)|X = x) g(x) | V_j )² dx )^{1/2}
    ≤ ( ∫ E(L²(O)ζ²(O)|X = x) g²(x) dx )^{1/2} ( ∫ E(L²(O)ξ²(O)|X = x) g²(x) dx )^{1/2}
    ≤ B² B_U √( E(ζ²(O_1)) E(ξ²(O_2)) ) ≤ B² B_U.

Above we have used the Cauchy-Schwarz inequality, Jensen's inequality, and the fact that projections contract norms. Also,

  |E( E( L(O_1) K_{V_j}(X_1, X_2) L(O_2) | O_1 ) ζ(O_1) ξ(O_2) )|
    = |E[ L(O_1) Π( E(L(O)|X = ·) g(·) | V_j )(X_1) ζ(O_1) ξ(O_2) ]|
    = |E[ L(O_1) Π( E(L(O)|X = ·) g(·) | V_j )(X_1) ζ(O_1) ]| |E(ξ(O_2))|
    ≤ | ∫ Π( E(L(O)ζ(O)|X = x) g(x) | V_j ) Π( E(L(O)|X = x) g(x) | V_j ) dx | ≤ B² B_U,

where the last step once again uses the contraction property of projections, Jensen's inequality, and the bounds on L and g. Finally, by the Cauchy-Schwarz inequality and (B.3),

  |E( E( L(O_1) K_{V_j}(X_1, X_2) L(O_2) ) ζ(O_1) ξ(O_2) )| ≤ Σ_{k∈Z_j} Σ_{v∈{0,1}^d} E²( L(O) ψ^v_{jk}(X) ) ≤ C(B, B_U, J_0).
This completes the proof of Λ_2 ≤ C(B, B_U, J_0) n. Turning to Λ_3 = ‖ n E( R^*(O_1, ·) )² ‖_∞^{1/2}, we have that

  ( R^*(O_1, o_2) )² ≤ 2 [ R(O_1, o_2) − E(R(O_1, O_2)|O_1) ]² + 2 [ E(R(O_1, O_2)|O_2 = o_2) − E(R(O_1, O_2)) ]².

Now,

  E[ R(O_1, o_2) − E(R(O_1, O_2)|O_1) ]²
    ≤ 2 E( L²(O_1) K²_{V_j}(X_1, x_2) L²(o_2) ) + 2 E( Σ_{k∈Z_j} Σ_{v∈{0,1}^d} L(O_1) ψ^v_{jk}(X_1) E( ψ^v_{jk}(X_2) L(O_2) ) )²
    ≤ 2 B⁴ B_U² Σ_{k∈Z_j} Σ_{v∈{0,1}^d} ( ψ^v_{jk}(x_2) )² + 2 E(H²(O_2)) ≤ C(B, B_U, J_0) 2^{jd},

where the last inequality follows from arguments along the lines of (B.4). Also, using inequalities (B.3) and (B.4),

  [ E(R(O_1, O_2)|O_2 = o_2) − E(R(O_1, O_2)) ]²
    = ( Σ_{k∈Z_j} Σ_{v∈{0,1}^d} E( L(O_1) ψ^v_{jk}(X_1) ) [ ψ^v_{jk}(x_2) L(o_2) − E( L(O_1) ψ^v_{jk}(X_1) ) ] )² ≤ C(B, B_U, J_0).

This completes the control of Λ_3. Finally, using the compactness of the wavelet basis,

  ‖R(·, ·)‖_∞ ≤ B² sup_{x_1, x_2} Σ_{k∈Z_j} Σ_{v∈{0,1}^d} |ψ^v_{jk}(x_1)| |ψ^v_{jk}(x_2)| ≤ C(B, B_U, J_0) 2^{jd}.

Combining this with arguments similar to those leading to (B.4), we have Λ_4 ≤ C(B, B_U, J_0) 2^{jd}.
Therefore, using Lemma B.3 and Lemma B.4, we have

  P( |T_2| ≥ (C(B, B_U, J_0)/(n−1)) [ 2^{jd/2} √t + t + √(2^{jd}/n) t^{3/2} + (2^{jd}/n) t² ] ) ≤ 6 e^{−t}.

Finally, using 2t^{3/2} ≤ t + t², we have

  P[ |T_2| > a_1 √t + a_2 t + a_3 t² ] ≤ 6 e^{−t},        (B.6)

where a_1 = C(B, B_U, J_0) 2^{jd/2}/(n−1), a_2 = (C(B, B_U, J_0)/(n−1))( √(2^{jd}/n) + 1 ), and a_3 = (C(B, B_U, J_0)/(n−1))( √(2^{jd}/n) + 2^{jd}/n ). Now if h(t) is such that a_1 √(h(t)) + a_2 h(t) + a_3 h²(t) ≤ t, then one has by (B.6),

  P[ |T_2| ≥ t ] ≤ P[ |T_2| ≥ a_1 √(h(t)) + a_2 h(t) + a_3 h²(t) ] ≤ 6 e^{−h(t)}.

Indeed, there exists such an h, namely h(t) = b_1 t² ∧ b_2 t ∧ b_3 √t, where b_1 = C(B, B_U, J_0)/a_1², b_2 = C(B, B_U, J_0)/a_2, and b_3 = C(B, B_U, J_0)/√(a_3). Therefore, there exists C = C(B, B_U, J_0) such that

  P[ |T_2| ≥ t ] ≤ e^{−Ct²/a_1²} + e^{−Ct/a_2} + e^{−C√(t/a_3)}.        (B.7)

B.2.3. Combining Bounds on T_1 and T_2. Applying a union bound along with (B.5) and (B.7) completes the proof of Lemma B.2 part (i).

For the proof of part (ii), note that with the notation of the proof of part (i) we have, by the Hoeffding decomposition,

  E| (1/(n(n−1))) Σ_{i_1≠i_2} R(O_{i_1}, O_{i_2}) − E(R(O_1, O_2)) |^{2q} ≤ 2^{2q−1} ( E|T_1|^{2q} + E|T_2|^{2q} ).

The proof will be completed by individual control of the two moments above.
B.2.4. Analysis of E|T_1|^{2q}. Recall that T_1 = (2/n) Σ_{i_1=1}^n H(O_{i_1}), where H(O_{i_1}) = E( R(O_{i_1}, O_{i_2}) | O_{i_1} ) − ER(O_{i_1}, O_{i_2}) and |H(O)| ≤ C(B, B_U, J_0) almost surely. Therefore, by Rosenthal's inequality (Lemma B.5) we have

  E|T_1|^{2q} ≤ (2/n)^{2q} [ Σ_{i=1}^n E|H(O_i)|^{2q} + ( Σ_{i=1}^n E|H(O_i)|² )^q ]
             ≤ ( 2C(B, B_U, J_0)/n )^{2q} (n + n^q) ≤ C(B, B_U, J_0)^q n^{−q}.

B.2.5. Analysis of E|T_2|^{2q}. Recall that

  P[ |T_2| ≥ t ] ≤ P[ |T_2| ≥ a_1 √(h(t)) + a_2 h(t) + a_3 h²(t) ] ≤ 6 e^{−h(t)},

where h(t) = b_1 t² ∧ b_2 t ∧ b_3 √t with b_1 = C(B, B_U, J_0)/a_1², b_2 = C(B, B_U, J_0)/a_2, and b_3 = C(B, B_U, J_0)/√(a_3). Therefore,

  E(|T_2|^{2q}) = 2q ∫_0^∞ x^{2q−1} P(|T_2| ≥ x) dx
    ≤ 2q ∫_0^∞ x^{2q−1} P( |T_2| ≥ a_1 √(h(x)) + a_2 h(x) + a_3 h²(x) ) dx
    ≤ 12q ∫_0^∞ x^{2q−1} e^{−h(x)} dx
    = 12q ∫_0^∞ x^{2q−1} e^{−( b_1 x² ∧ b_2 x ∧ b_3 √x )} dx
    ≤ 12q ( ∫_0^∞ x^{2q−1} e^{−b_1 x²} dx + ∫_0^∞ x^{2q−1} e^{−b_2 x} dx + ∫_0^∞ x^{2q−1} e^{−b_3 √x} dx )
    = 12q ( Γ(q)/(2 b_1^q) + Γ(2q)/b_2^{2q} + 2Γ(4q)/b_3^{4q} ) ≤ ( C 2^{jd}/n² )^q

for a constant C = C(B, B_U, J_0), by our choices of b_1, b_2, b_3.
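The three Gamma-function identities used in the last display (the third via the substitution u = √x) can be verified numerically; the constants b_1, b_2, b_3 below are arbitrary illustrative values, not the ones from the proof.

```python
import math

def integrate(f, a, b, m=200000):
    # Simple midpoint rule on [a, b]; the upper limits below are chosen so the
    # truncated tails are negligible.
    h = (b - a) / m
    return h * sum(f(a + (i + 0.5) * h) for i in range(m))

q, b1, b2, b3 = 2, 1.3, 0.7, 0.9

# int_0^inf x^{2q-1} e^{-b1 x^2} dx = Gamma(q) / (2 b1^q)
i1 = integrate(lambda x: x ** (2 * q - 1) * math.exp(-b1 * x * x), 0.0, 20.0)
# int_0^inf x^{2q-1} e^{-b2 x} dx = Gamma(2q) / b2^{2q}
i2 = integrate(lambda x: x ** (2 * q - 1) * math.exp(-b2 * x), 0.0, 200.0)
# int_0^inf x^{2q-1} e^{-b3 sqrt(x)} dx = 2 Gamma(4q) / b3^{4q}  (u = sqrt(x))
i3 = integrate(lambda x: x ** (2 * q - 1) * math.exp(-b3 * math.sqrt(x)), 0.0, 4000.0)

print(round(i1 / (math.gamma(q) / (2 * b1 ** q)), 3),
      round(i2 / (math.gamma(2 * q) / b2 ** (2 * q)), 3),
      round(i3 / (2 * math.gamma(4 * q) / b3 ** (4 * q)), 3))
```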
Since the estimators arising in this paper also have a linear term, we will need the following standard Bernstein and Rosenthal type tail and moment bounds (Petrov, 1995).

Lemma B.5. If O_1, …, O_n ∼ P are iid random vectors such that |L(O)| ≤ B almost surely P, then for q ≥ 2 one has, for large enough constants C(B) and C(B, q),

  P( | (1/n) Σ_{i=1}^n ( L(O_i) − E(L(O_i)) ) | ≥ t ) ≤ 2 e^{−nt²/C(B)},

and

  E( | Σ_{i=1}^n ( L(O_i) − E(L(O_i)) ) |^q )
    ≤ C(B, q) [ Σ_{i=1}^n E( |L(O_i) − E(L(O_i))|^q ) + ( Σ_{i=1}^n E|L(O_i) − E(L(O_i))|² )^{q/2} ]
    ≤ C(B, q) n^{q/2}.
We will also need the following concentration inequality for linear estimators based on wavelet projection kernels, the proof of which can be carried out along the lines of the proofs of Theorems 5.1.5 and 5.1.13 of Giné and Nickl (2015).

Lemma B.6. Consider i.i.d. observations O_i = (Y, X)_i, i = 1, …, n, where X_i ∈ [0,1]^d with marginal density g. Let m̂(x) = (1/n) Σ_{i=1}^n L(O_i) K_{V_l}(X_i, x), and suppose max{‖g‖_∞, ‖L‖_∞} ≤ B_U. If 2^{ld} ld/n ≤ 1, there exist C, C_1, C_2 > 0, depending on B_U and the scaling functions ψ^1_{0,0}, ψ^0_{0,0} respectively, such that

  E( ‖m̂ − E(m̂)‖_∞ ) ≤ C √( 2^{ld} ld / n ),

and for any x > 0,

  P( n ‖m̂ − E(m̂)‖_∞ > n E(‖m̂ − E(m̂)‖_∞) + C_1 √( n 2^{ld} x ) + C_2 2^{ld} x ) ≤ e^{−x}.
Unrelated Machine Scheduling of Jobs with Uniform Smith Ratios
Christos Kalaitzis∗
Ola Svensson∗
Jakub Tarnawski∗
arXiv:1607.07631v2 [] 3 Nov 2016
November 4, 2016
Abstract
We consider the classic problem of scheduling jobs on unrelated machines so as to minimize
the weighted sum of completion times. Recently, for a small constant ε > 0, Bansal et al.
gave a (3/2 − ε)-approximation algorithm improving upon the “natural” barrier of 3/2 which
follows from independent randomized rounding. In simplified terms, their result is obtained by
an enhancement of independent randomized rounding via strong negative correlation properties.
In this work, we take a different approach and propose to use the same elegant rounding
scheme for the weighted completion time objective as devised by Shmoys and Tardos for optimizing a linear function subject to makespan constraints. Our main result is a 1.21-approximation
algorithm for the natural special case where the weight of a job is proportional to its processing
time (specifically, all jobs have the same Smith ratio), which expresses the notion that each unit
of work has the same weight. In addition, as a direct consequence of the rounding, our algorithm
also achieves a bi-criteria 2-approximation for the makespan objective. Our technical contribution is a tight analysis of the expected cost of the solution compared to the one given by the
Configuration-LP relaxation – we reduce this task to that of understanding certain worst-case
instances which are simple to analyze.
∗
School of Computer and Communication Sciences, EPFL.
Email: {christos.kalaitzis,ola.svensson,jakub.tarnawski}@epfl.ch.
Supported by ERC Starting Grant 335288-OptApprox.
1
Introduction
We study the classic problem of scheduling jobs on unrelated machines so as to minimize the
weighted sum of completion times. Formally, we are given a set M of machines, and a set J of
jobs, each with a weight wj ≥ 0, such that the processing time (also called size) of job j on machine
i is p_{ij} ≥ 0. The objective is to find a schedule which minimizes the weighted completion time, that is Σ_{j∈J} w_j C_j, where C_j denotes the completion time of job j in the constructed schedule. In the three-field notation used in scheduling literature [GLLK79], this problem is denoted as R||Σ w_j C_j.
The weighted completion time objective, along with makespan and flow time minimization,
is one of the most relevant and well-studied objectives for measuring the quality of service in
scheduling. Already in 1956, Smith [Smi56] showed a simple rule for minimizing this objective on a
single machine: schedule the jobs in non-increasing order of wj /pj (where pj denotes the processing
time of job j on the single machine). This order is often referred to as the Smith ordering of the jobs
and the ratio wj /pj is called the Smith ratio of job j. In the case of parallel machines, the problem
becomes significantly harder. Already for identical machines (the processing time of a job is the
same on all machines), it is strongly NP-hard, and for the more general unrelated machine model
that we consider, the problem is NP-hard to approximate within 1 + ε, for a small ε > 0 [HSW01].
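Smith's rule is easy to state in code. The sketch below (illustrative, not from the paper) schedules jobs by non-increasing Smith ratio w_j/p_j and checks, by brute force on a tiny instance, that the resulting weighted completion time is optimal on a single machine.

```python
from itertools import permutations

def weighted_completion(order, p, w):
    # sum_j w_j * C_j for jobs processed in the given order on one machine
    t, total = 0, 0
    for j in order:
        t += p[j]
        total += w[j] * t
    return total

def smith_order(p, w):
    # schedule jobs in non-increasing order of the Smith ratio w_j / p_j
    return sorted(range(len(p)), key=lambda j: w[j] / p[j], reverse=True)

p = [3, 1, 4, 2]
w = [1, 5, 2, 2]
best = min(weighted_completion(list(o), p, w) for o in permutations(range(4)))
smith = weighted_completion(smith_order(p, w), p, w)
print(smith == best)  # True: Smith ordering is optimal on a single machine
```

The optimality follows from a simple exchange argument: swapping two adjacent jobs that violate the Smith order never decreases the objective.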
Skutella and Woeginger [SW99b] settled the approximability for identical machines by developing a polynomial time approximation scheme. That is, for every ε > 0, they gave a (1 + ε)approximation algorithm for minimizing the weighted sum of completion times on identical parallel
machines.
In contrast, it remains a notorious open problem in scheduling theory to settle the approximability in the unrelated machine model (see e.g. “open problem 8” in [SW99a]). First, Schulz and
Skutella [SS02] and independently Chudak [Chu99] came up with (3/2 + ǫ)-approximation algorithms, employing a time-indexed LP relaxation for the problem. Shortly thereafter, the approximation guarantee was improved to 3/2 by Skutella [Sku01] and Sethuraman and Squillante [SS99]
using a clever convex quadratic programming relaxation. All these results relied on designing a
convex relaxation and then applying independent randomized rounding with the marginal probabilities that were returned by the convex relaxation solution. The analysis of these algorithms is
in fact tight: it is not hard to see that any algorithm using independent randomized rounding cannot achieve a better approximation guarantee than 3/2. Recently, Bansal et al. [BSS16] overcame
this barrier by designing a randomized rounding scheme that, informally, enhances independent
randomized rounding by introducing strong negative correlation properties. Their techniques yield
a (3/2 − ε)-approximation algorithm with respect to either a semidefinite programming relaxation
introduced by them or the Configuration-LP relaxation introduced in [SW13]. Their rounding and
analysis improve and build upon methods used previously for independent randomized rounding.
So a natural question motivating this work is: can a different rounding approach yield significant
improvements of the approximation guarantee?
1.1
Our results
Departing from previous rounding approaches, we propose to use the same elegant rounding scheme
for the weighted completion time objective as devised by Shmoys and Tardos [ST93] for optimizing
a linear function subject to makespan constraints on unrelated machines. We give a tight analysis
which shows that this approach gives a significantly improved approximation guarantee in the
special case where the Smith ratios of all jobs that can be processed on a machine are uniform:
1
that is, we have pij ∈ {wj , ∞} for all i ∈ M and j ∈ J .1
This restriction, which has not been studied previously, captures the natural notion that any unit of work (processing time) on a fixed machine has the same weight. It corresponds to the class of instances where the order of jobs on a machine does not matter. Compared to another natural restriction of R||Σ w_j C_j – namely, the unweighted sum of completion times R||Σ C_j – it is both computationally harder (in fact, whereas the unweighted version is polynomial-time solvable [Hor73, BCS74], our problem inherits all the known hardness characteristics of the general weighted version: see Section 1.2) and more intuitive (it is reasonable to expect that larger jobs have larger significance). Despite the negative results, our main theorem indicates that we obtain far better understanding of this version than what is known for the general case.
To emphasize that we are considering the case where the weight of a job is proportional to its processing time, we refer to this problem as R||Σ p_j C_j (with p_j as opposed to w_j). With this notation, our main result can be stated as follows:
Theorem 1.1. For any small ε > 0, there exists a ((1+√2)/2 + ε) < 1.21-approximation algorithm for R||Σ p_j C_j. Moreover, the analysis is tight: there exists an instance for which our algorithm returns a schedule with objective value at least ((1+√2)/2 − ε) times the optimum value.
We remark that the ε in the approximation guarantee arises because we can only solve the
Configuration-LP relaxation (see Section 2) up to any desired accuracy.
Interestingly enough, a similar problem (namely, scheduling jobs with uniform Smith ratios on
identical parallel machines) was studied by Kawaguchi and Kyan [KK86].2 They achieve the same
approximation ratio as we do, by using ideas in a similar direction as ours to analyze a natural
heuristic algorithm.
As we use the rounding algorithm by Shmoys and Tardos, a pleasant side-effect is that our algorithm can also serve as a bi-criteria ((1+√2)/2 + ε)-approximation for the Σ p_j C_j objective and 2-approximation for the makespan objective.3 This bi-objective setting was previously studied by Kumar et al. [KMPS09], who gave a bi-criteria 3/2-approximation for the general weighted completion time objective and 2-approximation for the makespan objective.
Our main technical contribution is a tight analysis of the algorithm with respect to the strong
Configuration-LP relaxation. Configuration-LPs have been used to design approximation algorithms for multiple important allocation problems, often with great success; therefore, as first
noted by Sviridenko and Wiese [SW13], they constitute a promising direction to explore in search for better algorithms for R||Σ w_j C_j. We hope that our analysis can give further insights as to how
the Configuration-LP can be used to achieve this.
On a high level, our analysis proceeds as follows. A fractional solution to the ConfigurationLP defines, for each machine, a local probability distribution of the set of jobs (configuration)
that will be processed by that machine. At the same time, the rounding algorithm (naturally)
produces a global distribution over such assignments, which inherits certain constraints on the
local distribution for each machine. Therefore, focusing on a single machine, we will compare
1 This restriction could be seen as close to the restricted assignment problem. However, we remark that all our
results also apply to the more general (but also more notation-heavy) case where the weight of a job may also depend
on the machine. A general version of our assumption then becomes p_ij ∈ {α_i w_ij, ∞} for some machine-dependent
α_i > 0. Our results apply to this version because our analysis will be done locally for each machine i, and therefore
we will only require that the Smith ratios be uniform for each machine separately.
2 We would like to thank the anonymous SODA 2017 reviewer for bringing this work to our attention.
3 More precisely, given any makespan threshold T > 0, our algorithm will return a schedule with makespan at most
2T + ε and cost (i.e., sum of weighted completion times) within a factor (1 + √2)/2 + ε of the lowest-cost schedule
among those with makespan at most T.
the local input distribution (i.e., the one defined by the Configuration-LP) to the worst possible
(among ones satisfying said constraints) local output distribution that could be returned by our
randomized rounding.4 In order to analyze this ratio, we will have both distributions undergo
a series of transformations that can only worsen the guarantee, until we bring them into such a
form that computing the exact approximation ratio is possible. As the final form also naturally
corresponds to a scheduling instance, the tightness of our analysis follows immediately.
1.2 Lower Bounds and Hardness
All the known hardness features of the general problem R||∑ w_j C_j transfer to our version
R||∑ p_j C_j. First, it is implicit in the work of Skutella that both problems are APX-hard.5
Furthermore, complementing the (1 + √2)/2 upper bound on the integrality gap of the Configuration-LP
which follows from our algorithm, we have the following lower bound, proved in Section 5:

Theorem 1.2. The integrality gap of the Configuration-LP for R||∑ p_j C_j is at least 1.08.
Finally, recall that the 3/2-approximation algorithms for the general problem R||∑_j w_j C_j by
Skutella [Sku01] and by Sethuraman and Squillante [SS99] are based on independent randomized
rounding of a fractional solution to a convex programming relaxation. It was shown by Bansal et
al. [BSS16] that this relaxation has an integrality gap of 3/2 and that, moreover, no independent
randomized rounding algorithm can have an approximation ratio better than 3/2. We note that
both of these claims also apply to our version. Indeed, the integrality gap example of Bansal et
al. can be modified by having the job of size k² and weight 1 instead have size k and weight k
(see [BSS16, Claim 2.1]). For the second claim, their problematic instance already has only unit
sizes and weights. Thus, to get better than 3/2-approximation to our version, one can use neither
independent randomized rounding nor the relaxation of [Sku01, SS99].
1.3 Outline
The paper is structured as follows. In Section 2, we start by defining the Configuration-LP. Then,
in Section 3, we describe the randomized rounding algorithm by Shmoys and Tardos [ST93] applied
to our setting. We analyze it in Section 4. Finally, in Section 5 we present the proof of the lower
bound on the integrality gap.
2 The Configuration-LP
As we are considering the case where p_ij ∈ {w_j, ∞}, we let p_j = w_j. We further let J_i = {j ∈ J :
p_ij = p_j} denote the set of jobs that can be assigned to machine i ∈ M.
For intuition, consider an optimal schedule of the considered instance. Observe that the schedule
partitions the jobs into |M| disjoint sets J = C_1 ∪ C_2 ∪ ··· ∪ C_|M|, where the jobs in C_i are scheduled
on machine i (so C_i ⊆ J_i). As described in the introduction, the jobs in C_i are scheduled in an
optimal way on machine i by using a Smith ordering and, since we are considering the case where
4 For example, if we consider all distributions that assign 2 jobs to a machine, each with probability 1/2, then the
distribution which assigns either both jobs together or no job at all, each with probability 1/2, is the worst possible
distribution, i.e., the one that maximizes the expected cost.
5 APX-hardness for R||∑ w_j C_j was first proved by Hoogeveen et al. [HSW01]. Skutella [Sku01, Section 7] gives a
different proof, where the reduction generates instances with all jobs having weights equal to processing times.
all jobs have the same Smith ratio, any ordering is optimal. The cost of scheduling the set of jobs
C_i on machine i can therefore be written as

    cost(C_i) = ∑_{j∈C_i} p_j² + ∑_{j≠j′∈C_i} (p_j p_{j′})/2 .

To see this, note that if we pick a random schedule/permutation of C_i, then the expected completion
time of job j is p_j + ∑_{j′≠j∈C_i} p_{j′}/2. The total cost of the considered schedule is ∑_{i∈M} cost(C_i).
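As a quick sanity check of this formula, the following Python sketch (our own illustration; the function names are ours) verifies that with uniform Smith ratios (w_j = p_j) every ordering of a configuration attains exactly cost(C_i):

```python
from itertools import permutations

def cost(config):
    """cost(C) = sum_j p_j^2 + sum_{j != j'} p_j * p_j' / 2 (uniform Smith ratios)."""
    return sum(p * p for p in config) + sum(
        p * q / 2
        for a, p in enumerate(config)
        for b, q in enumerate(config) if a != b)

def schedule_cost(order):
    """Sum of weighted completion times for a fixed order, with w_j = p_j."""
    elapsed, total = 0, 0
    for p in order:
        elapsed += p
        total += p * elapsed
    return total

jobs = [3, 1, 2]
# With uniform Smith ratios, every ordering is optimal and matches cost(C):
assert all(schedule_cost(o) == cost(jobs) for o in permutations(jobs))
```

The order-independence holds because the sum of weighted completion times equals ∑_{j⪯j′} p_j p_{j′}, which is symmetric in the jobs.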
The Configuration-LP models, for each machine, the decision of which configuration (set of
jobs) that machine should process. Formally, we have a variable yiC for each machine i ∈ M and
each configuration C ⊆ Ji of jobs. The intended meaning of yiC is that it takes value 1 if C is the
set of jobs that machine i processes, and it takes value 0 otherwise. The constraints of a solution are
that each machine should process at most one configuration and that each job should be processed
exactly once. The Configuration-LP can be compactly stated as follows:
X X
min
yiC cost(C)
i∈M C⊆Ji
s.t.
X
C⊆Ji
X
X
yiC ≤ 1
∀i ∈ M ,
yiC = 1
∀j ∈ J ,
yiC ≥ 0
∀i ∈ M, C ⊆ Ji .
i∈M C⊆Ji :j∈C
This linear program has an exponential number of variables and it is therefore non-trivial to solve;
however, Sviridenko and Wiese [SW13] showed that, for any ε > 0, there exists a polynomial-time
algorithm that gives a feasible solution to the relaxation whose cost is at most a factor (1 + ε) more
than the optimum. Hence, the Configuration-LP becomes a powerful tool that we use to design a
good approximation algorithm for our problem.
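Since the LP has exponentially many variables, actually solving it requires the machinery of [SW13]; for intuition, though, a tiny instance can be handled by brute force over the integral solutions, each of which corresponds to a 0/1 setting of the y_iC. A minimal sketch (the instance and names are our own illustration):

```python
from itertools import product

def cost(config):
    # cost(C) = sum p^2 + sum over unordered pairs p*p'
    #         = (sum p^2 + (sum p)^2) / 2
    s = sum(config)
    return (sum(p * p for p in config) + s * s) / 2

jobs = [1, 2]          # processing times, p_j = w_j
machines = [0, 1]      # here every machine may run every job (J_i = J)

# Enumerate all integral solutions (0/1 settings of the y_iC):
best = min(
    sum(cost([p for p, m in zip(jobs, assign) if m == i]) for i in machines)
    for assign in product(machines, repeat=len(jobs))
)
assert best == 5       # splitting the jobs (1 + 4) beats co-locating them (7)
```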
3
The Rounding Algorithm
Here, we describe our approximation algorithm. We will analyze it in the next section, yielding
Theorem 1.1. The first step of our algorithm is to solve the Configuration-LP (approximately)
to obtain a fractional solution y ⋆ . We then round this solution in order to retrieve an integral
assignment of jobs to machines. The rounding algorithm that we employ is the same as that used
by Shmoys and Tardos [ST93] (albeit applied to a fractional solution to the Configuration-LP,
instead of the so-called Assignment-LP). For completeness, we describe the rounding scheme in
Algorithm 1; see also Figure 1.
The first step is to define x_ij = ∑_{C⊆J_i : j∈C} y⋆_iC. Intuitively, x_ij denotes the marginal probability
that job j should be assigned to machine i, according to y⋆. Note that, by the constraint that y⋆
assigns each job once (fractionally), we have ∑_{i∈M} x_ij = 1 for each job j ∈ J.
In the next steps, we round the fractional solution randomly so as to satisfy these marginals, i.e.,
so that the probability that job j is assigned to i is x_ij. In addition, the number of jobs assigned
to a machine i will closely match the expectation ∑_{j∈J} x_ij: our rounding will assign either
⌊∑_{j∈J} x_ij⌋ or ⌈∑_{j∈J} x_ij⌉ jobs to machine i. This is enforced by creating ⌈∑_{j∈J} x_ij⌉ “buckets” for each
machine i, and then matching the jobs to these buckets. More formally, this is modeled by the
complete bipartite graph G = (U ∪ V, E) constructed in Step 2 of Algorithm 1, where vertex u_{i,t} ∈ U
corresponds to the t-th bucket of machine i.
Input: Solution y⋆ to the Configuration-LP
Output: Assignment of jobs to machines

1) Define x ∈ R^{M×J} as follows: x_ij = ∑_{C⊆J_i : j∈C} y⋆_iC.

2) Let G = (U ∪ V, E) be the complete bipartite graph where
   • the right-hand side consists of one vertex for each job j, i.e., V = {v_j : j ∈ J},
   • the left-hand side consists of ⌈∑_{j∈J} x_ij⌉ vertices for each machine i, i.e.,
     U = ⋃_{i∈M} {u_{i,t} : 1 ≤ t ≤ ⌈∑_{j∈J} x_ij⌉}.

3) Define a fractional solution z to the bipartite matching LP for G (initially set to z = 0)
   by repeating the following procedure for every machine i ∈ M:
   • Let k = ⌈∑_{j∈J} x_ij⌉, and let t be a variable originally set to 1.
   • Iterate over all j ∈ J in non-increasing order in terms of p_j:
     If x_ij + ∑_{j′∈J} z_{u_{i,t} v_{j′}} ≤ 1, then set z_{u_{i,t} v_j} = x_ij.
     Else, set z_{u_{i,t} v_j} = 1 − ∑_{j′∈J} z_{u_{i,t} v_{j′}}, increment t, and set z_{u_{i,t} v_j} = x_ij − z_{u_{i,t−1} v_j}.

4) Decompose z into a convex combination of integral matchings z = ∑_t λ_t z_t and sample
   one integral matching z∗ by choosing the matching z_t with probability λ_t.

5) Schedule j ∈ J on i ∈ M iff z∗_{u_{i,t} v_j} = 1 for some 1 ≤ t ≤ ⌈∑_{j∈J} x_ij⌉.

Algorithm 1: Randomized rounding
Observe that any integral matching in G that matches all the “job” vertices in V naturally
corresponds to an assignment of jobs to machines. Now, Step 3 prescribes a distribution on such
matchings by defining a fractional matching z. The procedure is as follows: for each machine i,
we iterate over the jobs j ∈ J in non-increasing order in terms of their size, and we insert items
of size x_ij into the first bucket until adding the next item would overflow the bucket;
then, we split that item between the first bucket and the second, and we proceed likewise for all
jobs until we fill up all buckets (except possibly for the last bucket). Having completed this process
for all machines i, we end up with a fractional matching z in G with the following properties:

• Every “job” vertex v_j ∈ V is fully matched in z, i.e., ∑_{i,t} z_{u_{i,t} v_j} = 1.
• For every “bucket” vertex u_{i,t} ∈ U, we have ∑_j z_{u_{i,t} v_j} ≤ 1, with equality if t < ⌈∑_j x_ij⌉.
• The fractional matching preserves the marginals, i.e., x_ij = ∑_t z_{u_{i,t} v_j} for all j ∈ J and i ∈ M.
• We have the following bucket structure: if z_{u_{i,t} v_j} > 0 and z_{u_{i,t′} v_{j′}} > 0 with t′ > t, then
  p_j ≥ p_{j′}.
The last property follows because Step 3 considered the jobs in non-increasing order of their processing times; this will be important in the analysis (see the last property of Fact 4.4).
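A minimal Python sketch of Step 3 for a single machine (our own illustration, with a small floating-point tolerance added for robustness):

```python
import math

def fill_buckets(x, p, eps=1e-9):
    """Greedily pack the marginals x[j] into unit-capacity buckets,
    visiting jobs in non-increasing order of p[j] (Step 3, one machine)."""
    k = math.ceil(sum(x.values()) - eps)   # number of buckets
    z = {}                                 # z[(t, j)] = fractional edge weight
    t, load = 1, 0.0
    for j in sorted(x, key=lambda j: -p[j]):
        if load + x[j] <= 1 + eps:
            z[(t, j)] = x[j]
            load += x[j]
        else:
            z[(t, j)] = 1 - load           # top off bucket t ...
            t += 1                         # ... and spill the rest into t+1
            z[(t, j)] = x[j] - z[(t - 1, j)]
            load = z[(t, j)]
    return k, z

p = {'a': 5, 'b': 3, 'c': 2, 'd': 1}
x = {'a': 0.6, 'b': 0.5, 'c': 0.5, 'd': 0.4}
k, z = fill_buckets(x, p)
assert k == 2
# Marginals are preserved and no bucket is overloaded:
for j in x:
    assert abs(sum(w for (t, jj), w in z.items() if jj == j) - x[j]) < 1e-9
for t in range(1, k + 1):
    assert sum(w for (tt, j), w in z.items() if tt == t) <= 1 + 1e-9
```

In this example job 'b' is split between the two buckets (0.4 in the first, 0.1 in the second), and both buckets end up exactly full, as guaranteed for all buckets but the last.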
Now, we want to randomly select a matching for G which satisfies the marginals of z (remember
that such a matching corresponds to an assignment of all the jobs to machines). We know that the
bipartite matching LP is integral and that z is a feasible solution for the bipartite matching LP
of G; therefore, using an algorithmic version of Carathéodory’s theorem (see e.g. Theorem 6.5.11
[Figure 1: a sample execution shown in four panels: the input distribution on configurations (patterns) of machine i⋆ (y^in, g); the bucketing of machine i⋆; the resulting fractional matching; and its decomposition into a convex combination of matchings, giving the output distribution on configurations of machine i⋆ (y^out, f).]
Figure 1: A sample execution of our rounding algorithm, restricted to a single machine i⋆ . Jobs
are represented by a rectangle; its height is the job’s processing time and its width is its
fractional assignment to i⋆ . Starting from an input distribution over configurations for
i⋆ , we extract the fractional assignment of each job to i⋆ , we create a bipartite graph
consisting of 3 copies of i⋆ and the jobs that are fractionally assigned to it, and then
we connect the jobs to the copies of i⋆ by iterating through the jobs in non-increasing
order of pj . Finally, we decompose the resulting fractional matching into a convex
combination of integral matchings and we sample one of them. The shown output
distribution is a worst-case distribution in the sense of Section 4.2: it maximizes the
variance of makespan, subject to the marginal probabilities and the bucket structure
enforced by the algorithm.
in [GLS93]), we can decompose z into a convex combination z = ∑_t λ_t z_t of polynomially many
integral matchings, and sample the matching z_t with probability λ_t. Then, if z∗ is the matching
we have sampled, we simply assign job j to machine i iff z∗_{u_{i,t} v_j} = 1 for some t. Since x_ij = ∑_t z_{u_{i,t} v_j}
and ∑_{i∈M} x_ij = 1 for all jobs j, z∗ will match all “job” vertices.6 The above steps are described
in Steps 4 and 5 of Algorithm 1.
The entire rounding algorithm is depicted in Figure 1.
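The decomposition in Step 4 can be carried out by greedily peeling integral matchings off z: repeatedly find a matching in the support of z that covers all jobs (one exists by Hall's condition, since z fractionally saturates every job), and subtract the minimum edge weight used. A hedged Python sketch on a toy fractional matching (our own illustration, not the algorithmic Carathéodory machinery of [GLS93]):

```python
def saturating_matching(support, jobs):
    """Find a matching covering every job via augmenting paths.
    support: dict job -> set of buckets with positive z-weight."""
    match = {}                       # bucket -> job
    def augment(j, seen):
        for b in support[j]:
            if b not in seen:
                seen.add(b)
                if b not in match or augment(match[b], seen):
                    match[b] = j
                    return True
        return False
    for j in jobs:
        assert augment(j, set())     # exists by Hall's condition
    return {j: b for b, j in match.items()}

def decompose(z, jobs, eps=1e-9):
    """Peel z[(bucket, job)] into a convex combination [(lam, matching)]."""
    z = dict(z)
    combination = []
    while sum(z.values()) > eps:
        support = {j: {b for (b, jj), w in z.items() if jj == j and w > eps}
                   for j in jobs}
        m = saturating_matching(support, jobs)
        lam = min(z[(m[j], j)] for j in jobs)
        combination.append((lam, m))
        for j in jobs:
            z[(m[j], j)] -= lam
    return combination

# Toy fractional matching: two jobs, one bucket per machine, all weights 1/2.
z = {('u11', 'a'): 0.5, ('u11', 'b'): 0.5, ('u21', 'a'): 0.5, ('u21', 'b'): 0.5}
dec = decompose(z, ['a', 'b'])
assert abs(sum(lam for lam, _ in dec) - 1.0) < 1e-9    # lambdas sum to 1
assert all(len(set(m.values())) == 2 for _, m in dec)  # each is a matching
```

Each iteration zeroes out at least one edge, so the peeling produces at most |E| matchings.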
6 We remark that this is the only part of the algorithm that employs randomness; in fact, we can derandomize the
algorithm by choosing the matching z_t that minimizes the cost of the resulting assignment.
4 Analysis
Throughout the analysis, we fix a single machine i⋆ ∈ M. We will show that the expected cost
of our algorithm on this machine is at most (1 + √2)/2 times the cost of the Configuration-LP on this
machine. This clearly implies (by linearity of expectation) that the expected cost of the produced
solution (on all machines) is at most (1 + √2)/2 times the cost of the LP solution (which is in turn within
a factor (1 + ε) of the fractional optimum).
Let C_1, C_2, ..., C_{2^{|J|}} be all possible configurations sorted by non-increasing cost, i.e., cost(C_1) ≥ ... ≥
cost(C_{2^{|J|}}). To simplify notation, in this section we let J denote the set of jobs that can be processed
on machine i⋆ (i.e., J_{i⋆}).

Recall that the solution y⋆ to the Configuration-LP gives us an input distribution y^in on configurations
assigned to machine i⋆, i.e., it gives us a vector y^in ∈ [0, 1]^{2^{|J|}} such that ∑_i y^in_i = 1. With
this notation, we can write the cost of the Configuration-LP on machine i⋆ as

    ∑_i y^in_i cost(C_i).
In order to compare this expression with the expected cost of our algorithm on machine i⋆,
we observe that our rounding algorithm also gives a distribution on configurations. We denote
this output distribution by y^out ∈ [0, 1]^{2^{|J|}} (where ∑_i y^out_i = 1). Hence, the expected cost of our
algorithm on machine i⋆ is

    ∑_i y^out_i cost(C_i).
The result of this section that implies the approximation guarantee of Theorem 1.1 can now be
stated as follows.

Theorem 4.1. We have

    (∑_i y^out_i cost(C_i)) / (∑_i y^in_i cost(C_i)) ≤ (1 + √2)/2 .
Our strategy for bounding this ratio is, broadly, to work on this pair of distributions by transforming it to another pair of distributions of special form, whose ratio we will be able to bound.
We transform the pair in such a way that the ratio can only increase. In other words, we prove
that no pair of distributions has a worse ratio than a certain worst-case kind of pair, and we bound
the ratio in that worst case.
After these transformations, our pair of distributions may no longer correspond to the original
scheduling problem instance, so it will be convenient for us to work with a more abstract notion
that we define now.
4.1 Compatible Function Pairs
Given a distribution y ∈ [0, 1]^{2^{|J|}} with ∑_i y_i = 1, we can build a corresponding function f from
[0, 1) to multisets of positive numbers as follows: define f(x) for x ∈ [0, y_1) to be the multiset of
processing times of jobs in C_1, f(x) for x ∈ [y_1, y_1 + y_2) to be the multiset of processing times of
jobs in C_2, and so on.7 If we do this for both y^out – obtaining a function f – and y^in – obtaining a
function g (see Figure 1 for an illustration of f and g), we will have produced a function pair:
7 Recall that C_1, C_2, ... are sorted by non-increasing cost. Thus f can be thought of as a quantile function (inverse
cumulative distribution function) of the distribution y, except in reverse order (i.e., f(1 − x) is a quantile function).
Definition 4.2. A function pair is a pair (f, g) of stepwise-constant functions from the interval
[0, 1) to multisets of positive numbers. We will call these multisets patterns and the numbers they
contain elements (or processing times).

Notation. If f is such a function, define:

• f_1 : [0, 1) → R_+ as the maximum element: f_1(x) = max f(x) (set 0 if f(x) = ∅),
• size_f : [0, 1) → R_+ as size_f(x) = size(f(x)), where size(f(x)) = ∑_{p∈f(x)} p,
• f_r as the total size of the multiset after the removal of the maximum: f_r(x) = size_f(x) − f_1(x),
• cost(f) = ∫_0^1 cost(f(x)) dx as the fractional (expected) cost, where

    cost(f(x)) = ∑_{p∈f(x)} p · ∑_{q∈f(x), q⪯p} q

for an arbitrary linear order ⪯ on f(x).8
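The construction of f from a distribution y, and the fact that cost(f) is just the expected configuration cost under y, can be made concrete in a few lines of Python (our own toy illustration):

```python
def pattern_cost(pattern):
    """cost(f(x)) = sum_p p * (sum of all q preceding or equal to p);
    the choice of linear order does not matter."""
    total, prefix = 0.0, 0.0
    for p in pattern:
        prefix += p
        total += p * prefix
    return total

def build_f(y, configs):
    """Turn a distribution y over configurations into the step function f,
    encoded as (interval length, pattern) pieces covering [0, 1)."""
    return [(y_i, c) for y_i, c in zip(y, configs) if y_i > 0]

def cost_f(f):
    # integral over [0, 1) of cost(f(x)) dx
    return sum(length * pattern_cost(c) for length, c in f)

configs = [[2, 1], [2], [1], []]
y = [0.25, 0.25, 0.25, 0.25]
f = build_f(y, configs)
# cost(f) equals the expected configuration cost under y (cf. Fact 4.4):
assert cost_f(f) == sum(yi * pattern_cost(c) for yi, c in zip(y, configs))
# The order inside a pattern does not matter (footnote 8):
assert pattern_cost([2, 1]) == pattern_cost([1, 2])
```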
Function pairs we work with will have special properties that follow from the algorithm. We
argue about them in Fact 4.4. One such property comes from our algorithm preserving the marginal
probabilities of jobs:

Definition 4.3. We say that a function pair (f, g) is a compatible function pair (CFP) if the
fractional number of occurrences of any element is the same in f and in g.9
Fact 4.4. Let (f, g) be a function pair obtained from (y^out, y^in) as described above. Then:

• (f, g) is a CFP.
• cost(f) = ∑_i y^out_i cost(C_i) and cost(g) = ∑_i y^in_i cost(C_i).
• f has the following bucket structure: for any two patterns P and Q in the image of f and
  for any i, the i-th largest element of P is no smaller than the (i + 1)-th largest element of Q.

Proof. That (f, g) is a CFP follows because our algorithm satisfies the marginals of the involved
jobs. Namely, we know that for each job j ∈ J, both distributions y^in and y^out have the machine i⋆
process a fraction x_{i⋆j} = ∑_{C∋j} y⋆_{i⋆C} of this job (where y⋆ is the global Configuration-LP solution).
For any p > 0, summing this up over all jobs j ∈ J with processing time p_j = p gives the
compatibility condition.
The equalities of costs are clear from the definition of cost(f ) and cost(g).
For the bucket structure of f , recall the algorithm: a matching is found between jobs and
buckets, in which each bucket is matched with a job (except potentially for the last bucket of each
machine). For any pattern P in the image of f and for any i, the i-th processing time in P is
drawn from the i-th bucket that was constructed by our algorithm. Moreover, all processing times
in the i-th bucket are no smaller than those in the (i + 1)-th bucket, because the algorithm orders
jobs non-increasingly by processing times. (See Figure 1 for an illustration of this process and of a
function f satisfying this bucket structure.)
8 This expression does not depend on the choice of ⪯, since the Smith ratios are uniform, and it is equal to the cost of a
configuration giving rise to f(x).
9 Formally, for each p > 0 we have: ∫_0^1 (multiplicity of p in f(x)) dx = ∫_0^1 (multiplicity of p in g(x)) dx.
This was the last point in the analysis where we reasoned about how the algorithm rounds the
LP solution. From now on, we will think about elements, patterns and CFPs rather than jobs and
configurations.
To prove Theorem 4.1, we need to show that cost(f)/cost(g) ≤ (1 + √2)/2. As indicated above, we will do this
by proving that there is another CFP (f′, g′) with special properties and such that cost(f)/cost(g) ≤ cost(f′)/cost(g′).
We will actually construct a series of such CFPs in a series of lemmas, obtaining more and more
desirable properties, until we can bound the ratio. Our final objective is a CFP like the pair (f′, g′)
depicted in Figure 3.
4.2 The Worst-Case Output
As a first step, we look at how costly an output distribution of our algorithm can be (while still
satisfying the aforementioned bucket structure and the marginal probabilities, i.e., the compatibility
condition). Intuitively, the maximum-cost f is going to maximize the variance of the total processing
time, which means that larger-size patterns should select larger processing times from each bucket.
(See Figure 1 for an illustration and the proof of Lemma 4.5 for details.) From this, we extract
that the largest processing time in a pattern (the function f1 ) should be non-increasing, and this
should also hold for the second-largest processing time, the third-largest, and so on. This implies
the following properties:
Lemma 4.5. If (f, g) is a CFP where f has the bucket structure described in Fact 4.4, then there
exists another CFP (f′, g) such that cost(f)/cost(g) ≤ cost(f′)/cost(g) and the functions f′_1 and f′_r are non-increasing
and size(f′(1)) ≥ f′_r(0).
The proof is a simple swapping argument.
Proof. For i = 1, 2, ..., let f_i(x) always denote the i-th largest element of f(x). As suggested above,
we will make sure that for each i, the function f_i is non-increasing.

Namely, we repeat the following procedure: as long as there exist x, y and i such that
size(f(x)) > size(f(y)) but f_i(x) < f_i(y), swap the i-th largest elements in f(x) and f(y).10
Once this is no longer possible, we finish by “sorting” f so as to make size_f non-increasing.

Let us verify that once this routine finishes, yielding the function f′, we have the desired
properties:

• The function f′_1 is non-increasing.
• The same holds for the function f′_r, since f′_r = f′_2 + f′_3 + ... and each f′_i is non-increasing.
• The procedure maintains the bucket structure, which implies that f′_i(x) ≥ f′_{i+1}(y) for all i, x
  and y. Thus

    size(f′(1)) = f′_1(1) + f′_2(1) + ... ≥ f′_2(0) + f′_3(0) + ... = f′_r(0).
• It remains to show that cost(f′) ≥ cost(f). Without loss of generality, assume there was only
  a single swap (as the sorting step is insignificant for the cost). For computing the cost of the
  involved patterns, we will think that the involved elements went last (since the order does
  not matter); let R_x = size(f(x)) − f_i(x) and R_y = size(f(y)) − f_i(y) be the total sizes of the
  elements not involved. Then R_x ≥ R_y and

    Δcost(f(x)) = f_i(y)(R_x + f_i(y)) − f_i(x)(R_x + f_i(x)) = (f_i(y) − f_i(x))R_x + f_i(y)² − f_i(x)²,
    Δcost(f(y)) = f_i(x)(R_y + f_i(x)) − f_i(y)(R_y + f_i(y)) = (f_i(x) − f_i(y))R_y + f_i(x)² − f_i(y)²,

  thus

    Δcost(f(x)) + Δcost(f(y)) = (f_i(y) − f_i(x))(R_x − R_y) > 0.

10 Formally, choose τ > 0 such that f is constant on [x, x + τ) and on [y, y + τ) and perform the swap in these
patterns.
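The swap computation is easy to confirm numerically. In this sketch (our own example), size(P) > size(Q) but max(P) < max(Q), and swapping the largest elements increases the total cost by exactly (f_i(y) − f_i(x))(R_x − R_y):

```python
def pattern_cost(pattern):
    # sum_p p * (prefix sum up to p); the order does not matter
    total, prefix = 0.0, 0.0
    for p in pattern:
        prefix += p
        total += p * prefix
    return total

# size(P) = 6 > size(Q) = 5.5, but the largest elements are inverted:
P, Q = [3, 3], [5, 0.5]
before = pattern_cost(P) + pattern_cost(Q)
# Swap the largest elements (i = 1 in the proof's notation):
after = pattern_cost([5, 3]) + pattern_cost([3, 0.5])
# The increase equals (f_i(y) - f_i(x)) * (R_x - R_y) = (5 - 3) * (3 - 0.5):
assert abs((after - before) - 2 * 2.5) < 1e-9
```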
4.3 Liquification
One of the most important operations we will employ is called liquification. It is the process of
replacing an element (processing time) with many tiny elements of the same total size. These new
elements will all have a size of ε and will be called liquid elements. Elements of size larger than
ε are called solid. One should think that ε is arbitrarily small, much smaller than any pj ; we will
usually work in the limit ε → 0.
The intuition behind applying this process to our pair is that in the ideal worst-case setting
which we are moving towards, there are only elements of two sizes: large and infinitesimally small.
We will keep a certain subset of the elements intact (to play the role of large elements), and liquify
the rest in Lemma 4.7.
Our main claim of this section is that replacing an element with smaller ones of the same total
size (in both f and g) can only increase the ratio of costs. Thus we are free to liquify elements in
our analysis as long as we make sure to liquify the same amount of every element in both f and g
(f and g remain compatible).
Fact 4.6. Let (f, g) be a CFP and p, p_1, p_2 > 0 with p = p_1 + p_2. Suppose (f′, g′) is a CFP obtained
from (f, g) by replacing p by p_1 and p_2 in subsets of patterns in f and g of equal measures.11 Then
cost(f)/cost(g) ≤ cost(f′)/cost(g′).
Proof. Consider a pattern P in which p was replaced. We calculate the change in cost Δ :=
cost(P \ {p} ∪ {p_1, p_2}) − cost(P). As the order does not matter, the cost can be analyzed with p
(or p_1, p_2) being first, so

    Δ = (p_1² + p_2(p_1 + p_2)) − p² = (p_1² + p_1 p_2 + p_2²) − (p_1 + p_2)² = −p_1 p_2 ≤ 0

and it does not depend on the other elements in P. Thus, if we make the replacement in a fraction
τ of patterns, then we have

    cost(f′)/cost(g′) = (cost(f) + τΔ)/(cost(g) + τΔ) ≥ cost(f)/cost(g)

since cost(f)/cost(g) ≥ 1 to begin with (otherwise we are done) and Δ ≤ 0.
As a corollary, we can also replace an element of size p with p/ε liquid elements (of size ε each).
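The computation Δ = −p_1 p_2 is easy to confirm numerically; a small sketch (our own illustration) checking that the change is independent of the rest of the pattern:

```python
def pattern_cost(pattern):
    total, prefix = 0.0, 0.0
    for p in pattern:
        prefix += p
        total += p * prefix
    return total

def delta(rest, p1, p2):
    """Cost change when an element p = p1 + p2 is split into p1 and p2,
    with 'rest' holding the other elements of the pattern."""
    p = p1 + p2
    return pattern_cost([p1, p2] + rest) - pattern_cost([p] + rest)

# Delta = -p1*p2, independent of the rest of the pattern:
for rest in ([], [7], [1, 2, 3]):
    assert abs(delta(rest, 2.0, 3.0) - (-6.0)) < 1e-9
```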
11 Formally, suppose I_f, I_g ⊆ [0, 1) are finite unions of disjoint intervals of equal total length such that all patterns
f(x) and g(y) for x ∈ I_f, y ∈ I_g contain p; in all these patterns, remove p and add p_1, p_2.
[Figure 2: two plots of f over [0, 1), before and after moving liquid elements; the labeled quantities include f_1(x), f_r(0), f_1(x) + f_r(0), size(f(x)), size(f(x)) − f_r(0), and the threshold m.]
Figure 2: The main step in the proof of Lemma 4.7. The left picture shows f after the liquification
of all elements except one largest element (f1 ) for x ∈ [0, m). The right picture shows
f after the movement of liquid elements between the striped regions. In both pictures,
the light-gray areas contain single large elements, and the dark-gray areas are liquid
elements. Note that there are two pairs of parallel lines in the pictures; the vertical
distance between each pair is fr (0). The threshold m is defined so that the striped
regions have equal areas (thus fr′ is made constant by the movement of liquid elements)
and so that moving the liquid elements can only increase the cost. The height of the
upper striped area is f1 (x) + fr (0) − size(f (x)) = fr (0) − fr (x) for x ∈ [0, m) and the
height of the lower striped area is size(f (x)) − fr (0) for x ∈ [m, 1). All functions are
stepwise-constant, but drawn here using straight downward lines for simplicity; also,
the rightmost dark-gray part will “fall down” (forming a (1 − m) × fr (0) rectangle).
4.4 The Main Transformation
In this section we describe the central transformation in our analysis. It uses liquification and
rearranges elements in f and g so as to obtain two properties: that fr is constant and that f1 = g1 .
(The process is explained in Figure 2, and a resulting CFP is shown in the upper part of Figure 3.)
This greatly simplifies the setting and brings us quite close to our ideal CFP (depicted in the lower
part of Figure 3).
Lemma 4.7. If (f, g) is a CFP with f_1 and f_r non-increasing and size(f(1)) ≥ f_r(0), then there
exists another CFP (f′, g′) with cost(f)/cost(g) ≤ cost(f′)/cost(g′) such that:

(a) f′_r is constant.
(b) f′_1 = g′_1 and it is non-increasing.
(c) There exists m ∈ [0, 1] such that:
   • for x ∈ [0, m), f′(x) has liquid elements and exactly one solid element,
   • for x ∈ [m, 1), f′(x) has only liquid elements.
Proof. We begin with an overview of the proof. See also Figure 2.

Our procedure to obtain such a CFP has three stages:

1. We liquify all elements except for the largest element of every pattern f(x) for x ∈ [0, m), for
   a certain threshold m; at this point, f_r(x) becomes essentially the size of liquid elements in
   f(x) for x ∈ [0, 1) (since no pattern contains two solid elements).

2. We move liquid elements in f from right to left, i.e., from f(y) for y ∈ [m, 1) to f(x) for
   x ∈ [0, m) (which only increases the cost of f), so as to make f_r constant.
3. We rearrange elements in g so as to satisfy the condition f′_1 = g′_1.

Now let us proceed to the details. First, we explain how to choose the threshold m. It is defined
so as to ensure that after the liquification, patterns in f have liquid elements of total size f_r(0)
on average. Thus we can make all f_r(x) equal to f_r(0). More importantly, we can do this by
only moving the liquid elements “higher”, so that the cost of f only goes up. (See Figure 2 for an
illustration of this move.)

More precisely, m ∈ [0, 1] is chosen so that

    ∫_0^m (f_r(0) − f_r(x)) dx = ∫_m^1 (size(f(x)) − f_r(0)) dx    (1)

(which implies f_r(0) = ∫_0^m f_r(x) dx + ∫_m^1 size(f(x)) dx: the right-hand expression is the size of
elements which will be liquified next). Such an m is guaranteed to exist by nonnegativity of
the functions under both integrals in (1), which follows from our assumptions on f (we have
size(f(x)) ≥ size(f(1)) ≥ f_r(0), because size_f = f_1 + f_r is non-increasing).
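For stepwise-constant f_r and size_f, the threshold m can be found by bisection, since the left-hand side of (1) is non-decreasing in m while the right-hand side is non-increasing. A hedged sketch (our own illustration, with step functions encoded as (start, end, value) pieces):

```python
def integral(steps, a, b):
    """Integrate a stepwise-constant function over [a, b].
    steps: list of (start, end, value) pieces covering [0, 1)."""
    return sum(v * max(0.0, min(b, e) - max(a, s)) for s, e, v in steps)

def find_m(f_r, size_f, tol=1e-12):
    """Bisection for equation (1); the difference of the two sides is
    non-decreasing in m, so it crosses zero exactly once."""
    fr0 = f_r[0][2]                       # f_r(0)
    def gap(m):
        lhs = fr0 * m - integral(f_r, 0.0, m)
        rhs = integral(size_f, m, 1.0) - fr0 * (1.0 - m)
        return lhs - rhs
    lo, hi = 0.0, 1.0
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if gap(mid) < 0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

# Toy step functions: f_r drops from 2 to 0 at x = 1/2, size_f from 6 to 3.
f_r    = [(0.0, 0.5, 2.0), (0.5, 1.0, 0.0)]
size_f = [(0.0, 0.5, 6.0), (0.5, 1.0, 3.0)]
m = find_m(f_r, size_f)
# Sanity check of the implied identity: f_r(0) = ∫_0^m f_r + ∫_m^1 size_f
assert abs(integral(f_r, 0, m) + integral(size_f, m, 1) - 2.0) < 1e-6
```

For this toy input the equation balances at m = 2/3: the left side of (1) is 2(m − 1/2) and the right side is 1 − m.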
Now we can perform the sequence of steps:

1. The liquification: for each element p > 0 which appears in a pattern f(x) \ {f_1(x)} for
   x ∈ [0, m) or in a pattern f(x) for x ∈ [m, 1), we liquify all its such occurrences, as well as
   their counterparts in g.12 We have a bound on the ratio of costs by Fact 4.6, and f_1 remains
   non-increasing.
2. Rearranging liquid elements in f: while there exists x ∈ [0, m) with f_r(x) < f_r(0), find
   y ∈ [m, 1) with f_r(y) > f_r(0) and move a liquid element from f(y) to f(x).13 (Since we
   want to make f_r constant and equal to f_r(0), we move elements from where f_r is too large
   to where it is too small. Note that since we liquified all elements in f(x) for x ∈ [m, 1), now
   size_f is almost equal to f_r on [m, 1).) Once this process finishes, f_r will be constant by the
   definition of m.14 (See the right side of Figure 2.) Note that size_f was non-increasing at the
   beginning of this step, and so we are always moving elements from patterns of smaller total
   size to patterns of larger total size – this can only increase the cost of f and thus the ratio of
   costs increases. This step preserves f_1.
3. Rearranging elements in g: at this point, since f and g are compatible, g only has solid elements
   that appear as f_1(x) for x ∈ [0, m), but they might be arranged differently in g than in f. We
   want them to have the same arrangement (i.e., f′_1 = g′_1) so that we can compare f to g more
   easily. As a first step, we make sure that every pattern in g has at most one solid element. To
   this end, while there exists x ∈ [0, 1) such that g(x) contains two or more solid elements, let
   p > 0 be one of them, and find y ∈ [0, 1) such that g(y) contains only liquid elements15. Now
   there are two cases: if size(g(y)) ≥ p, then we move p to g(y) and move liquid elements of the
   same total size back to g(x). This preserves cost(g) (think that these elements went first in
12 Formally, let I_f = {x ∈ [0, m) : p ∈ f(x) \ {f_1(x)}} ∪ {x ∈ [m, 1) : p ∈ f(x)} and I_g = {y ∈ [0, 1) : p ∈ g(y)}; by
compatibility of f and g, the measure of I_g is no less than that of I_f. We liquify p in I_f and in a subset of I_g of the
right measure. (If there were patterns where p appeared multiple times, we repeat.)
13 Formally, let τ > 0 be such that f_r(x′) < f_r(0) for x′ ∈ [x, x + τ) and x + τ ≤ m, and f_r(y′) > f_r(0) for
y′ ∈ [y, y + τ). Move the liquid elements between these patterns f(y′) and f(x′).
14 Since we operate in the limit ε → 0, we ignore issues such as f_r being ±ε off.
15 Such a y exists because f and g are compatible: the total measure of patterns with solid elements in f is m and
these patterns contain only one solid element each, so g must have patterns with no solid elements.
   the linear orders on both g(x) and g(y)). On the other hand, if size(g(y)) < p, then we move
   p to g(y) and move all liquid elements from g(y) to g(x). This even decreases cost(g).16

   At this point, f and g have the same solid elements, each of which appears as the only solid
   element in patterns containing it in both f and g. Thus we can now sort g so that for each
   solid element p > 0, it appears in the same positions in f and in g. This operation preserves
   cost(g), and thus the entire third step does not decrease the ratio of costs.
4.5 The Final Form
In our last transformation, we will guarantee that in g, each large element is the only member
of a pattern which contains it. (Intuitively, we do this because such CFPs are the ones which
maximize the ratio of costs.) Namely, we prove Lemma 4.8, a strengthened version of Lemma 4.7;
note that condition (c’) below is stronger than condition (c) of Lemma 4.7 (there, g′ could have
had patterns with both liquid and solid elements), and that condition (d) is new. Figure 3 shows
the difference between CFPs postulated by Lemmas 4.7 and 4.8.
Lemma 4.8. Let (f, g) be as postulated in Lemma 4.7. Then there exists another CFP (f′, g′) such
that:

(a) f′_r is constant.
(b) f′_1 = g′_1.
(c') There exists t ∈ [0, 1] such that:
   • for x ∈ [0, t), f′(x) has liquid elements and exactly one solid element,
   • for x ∈ [t, 1), f′(x) has only liquid elements,
   • for x ∈ [0, t), g′(x) has exactly one solid element (and no liquid ones),
   • for x ∈ [t, 1), g′(x) has only liquid elements.
(d) The function size_{g′} is constant on [t, 1) (i.e., the liquid part of g′ is uniform).

Moreover,

    cost(f′)/cost(g′) ≥ min( 2, cost(f)/cost(g) ).
Proof. Let us begin by giving a proof outline; see also Figure 3 for an illustration.

• Our primary objective is to make g_1 equal to size_g on [0, t) (where t is to be determined).
  To this end, we increase the sizes of the solid elements in g(x) for x ∈ [0, t) and at the same
  time decrease the total sizes of liquid elements for these g(x) (which keeps size_g unchanged).
  To offset this change, we decrease the sizes of solid elements in g(y) for y ∈ [t, m) (and also
  increase the total sizes of liquid elements there). We also modify the sizes of solid elements
  in f so as to keep f and g compatible and preserve the properties (a)-(b).
Footnote 16: Formally, select τ > 0 so that g is constant on [x, x + τ) and on [y, y + τ); for x′ ∈ [x, x + τ), replace g(x′) with g(x′) \ {p} ∪ g(y), and for y′ ∈ [y, y + τ), replace g(y′) with {p}. The cost of g then decreases by τ(size(g(x)) − p)(p − size(g(y))) > 0.
[Figure 3 appears here: plots of the CFPs (f, g) (top) and (f′, g′) (bottom), showing f1, fr, g1 = f1 and the threshold t on [0, 1).]
Figure 3: An example of two CFPs: (f, g) is produced by Lemma 4.7, whereas (f ′ , g′ ) is produced
by Lemma 4.8. In this picture, the height of the plot corresponds to size(f (x)) for
each x, while the shaded and wavy parts correspond to the contributions of f1 and fr
to sizef ; similarly for g. The wavy parts are liquid. In Lemma 4.8 we want to make
f1 = g1 equal to sizeg on an interval [0, t), so we increase sizes of solid elements in g
on that interval, while decreasing those on the interval [t, m). The striped regions in
the pictures of g correspond to these changes. (We repeat the same changes in f , and
we also move liquid elements in g to keep sizeg unchanged.) The threshold t ∈ [0, m] is
chosen so that g1 becomes equal to sizeg on [0, t) while the solid elements on [t, m) are
eradicated (i.e., so that the areas of the striped regions in g are equal).
• The threshold t is defined so that after we finish this process, the solid elements in g(x)
for x ∈ [0, t) will have filled out the entire patterns g(x), and the solid elements in g(y) for
y ∈ [t, m) will have disappeared.
• Our main technical claim is that this process does not invalidate the ratio of costs.
• Finally, we can easily ensure condition (d) by levelling g on [t, 1), which only decreases its
cost.
Now we proceed to the details. First we define the threshold t ∈ [0, m] as a solution to the equation

∫_0^t gr(x) dx = ∫_t^m g1(x) dx,

which exists because both integrands are nonnegative: as t grows from 0 to m, the left-hand side increases continuously from 0, while the right-hand side decreases continuously to 0. The left-hand side will
be the total increase in sizes of solid elements in g(x) for x ∈ [0, t) and the right-hand side will
be the total decrease in sizes of solid elements in g(y) for y ∈ [t, m) (these elements will disappear
completely).
Now we will carry out the process that we have announced in the outline. Namely, while there
exists x ∈ [0, t) with gr (x) > 0, do the following:
• find y ∈ [t, m) with g1 (y) > ε (i.e., g(y) where the solid element has not been eradicated yet),
• increase the size of the solid element in g(x) by ε,
• do the same in f ,
• decrease the size of the solid element in g(y) by ε,
• do the same in f ,
• move one liquid element (of size ε) from g(x) to g(y).
Formally, as usual, we find τ > 0 such that f and g are constant on [x, x + τ ) and on [y, y + τ )
and we do this in all of these patterns.
Note that the following invariants are satisfied after each iteration:
• f1 = g1 ,
• fr does not change,
• sizeg does not change,
• ∫_0^m g1(x) dx does not change,
• g1 can only increase on [0, t) and it can only decrease on [t, m),
• f1(x) ≥ f1(y) for all x ∈ [0, t) and y ∈ [t, m) (see footnote 17).
By the definition of t, when this process ends, the patterns g(x) for x ∈ [0, t) contain only a
single solid element, while g(x) for x ∈ [t, m) (thus also for x ∈ [t, 1)) contain no solid elements.
Since f1 = g1 , the patterns f (x) also have only liquid elements for x ∈ [t, 1). Thus properties (a),
(b) and (c’) are satisfied. We reason about the ratio of costs in the following two technical claims:
Claim 1. In a single iteration, cost(f ) increases by 2α and cost(g) increases by α, for some α ≥ 0.
Proof. The patterns have changed so that:
• f (x) had f1 (x) increased by ε,
• f (y) had f1 (y) decreased by ε,
• g(x) had g1 (x) increased by ε and one liquid element removed,
• g(y) had g1 (y) decreased by ε and one liquid element added.
Footnote 17: This is because f1 = g1 was initially non-increasing and since then it has increased on [0, t) and decreased on [t, m).
Since the order of elements does not matter, in computing cost(f ) we think that the solid element
goes last in the linear order:
∆ cost(f (x)) = (f1 (x) + ε)(size(f (x)) + ε) − f1 (x) size(f (x)) = ε (size(f (x)) + f1 (x) + ε) ,
∆ cost(f (y)) = (f1 (y) − ε)(size(f (y)) − ε) − f1 (y) size(f (y)) = ε (− size(f (y)) − f1 (y) + ε) ,
∆ cost(f (x)) + ∆ cost(f (y)) = ε (size(f (x)) − size(f (y)) + f1 (x) − f1 (y) + 2ε)
= 2ε (f1 (x) − f1 (y) + ε) ,
where in the last line we used that size(f (x)) − size(f (y)) = f1 (x) + fr (x) − f1 (y) − fr (y) =
f1 (x) − f1 (y) since fr is constant.
In computing cost(g), we think that the solid element and the one liquid element which was
added or removed go first (and other elements are unaffected since sizeg is preserved):
∆ cost(g(x)) = (g1 (x) + ε)2 − (g1 (x)2 + ε(g1 (x) + ε)) = εg1 (x),
∆ cost(g(y)) = ((g1 (y) − ε)2 + εg1 (y)) − g1 (y)2 .
Adding up we have:
∆ cost(g(x)) + ∆ cost(g(y)) = ε (g1 (x) − g1 (y) + ε) = ε (f1 (x) − f1 (y) + ε) ,
where we used that f1 = g1 . Thus we have that
∆ cost(f (x)) + ∆ cost(f (y)) = 2 [∆ cost(g(x)) + ∆ cost(g(y))]
and we get the statement by setting
α = τ (∆ cost(g(x)) + ∆ cost(g(y))) = τ ε (f1 (x) − f1 (y) + ε) ≥ 0
(recall that τ is the fraction of patterns where we increase g1 ; nonnegativity follows by the last
invariant above).
Claim 2. Let (f′, g′) be the CFP obtained at this point and (f, g) be the original CFP. Then

cost(f′)/cost(g′) ≥ min(2, cost(f)/cost(g)).
Proof. By Claim 1, we have cost(f′) = cost(f) + 2β and cost(g′) = cost(g) + β for some β ≥ 0 (which is the sum of the α's from Claim 1). Now there are two cases:
• if cost(f)/cost(g) ≤ 2, then (cost(f) + 2β)/(cost(g) + β) ≥ cost(f)/cost(g);
• if cost(f)/cost(g) ≥ 2, then (cost(f) + 2β)/(cost(g) + β) ≥ 2 (even though the ratio decreases, it stays above 2).
Finally, as the last step, we equalize the total sizes of liquid elements in g(x) for x ∈ [t, 1) (by
moving liquid elements from larger patterns to smaller patterns, until all are equal), thus satisfying
property (d). Clearly, this can only decrease the cost of g (by minimizing the variance of sizeg on
the interval [t, 1)), so the ratio increases and Lemma 4.8 follows.
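The case analysis in Claim 2 can likewise be checked numerically; a quick sketch over arbitrary test values:

```python
# Check of Claim 2: if cost(f') = A + 2*beta and cost(g') = B + beta with
# beta >= 0, then (A + 2*beta)/(B + beta) >= min(2, A/B).
def ratio_holds(A, B, beta):
    return (A + 2 * beta) / (B + beta) >= min(2, A / B) - 1e-12

cases = [(A, B, beta) for A in (0.5, 1.0, 3.0, 10.0)
         for B in (0.4, 1.0, 2.5)
         for beta in (0.0, 0.1, 1.0, 50.0)]
assert all(ratio_holds(A, B, beta) for A, B, beta in cases)
```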
Note that Lemma 4.8 does not guarantee that the ratio of costs increases; we only claim that
it either increases, or it is now more than 2. However, we will shortly show in Lemma 4.9 that the
ratio is actually much below 2, so the latter is in fact impossible.
Now that we have our ideal CFP, we can finally bound its cost ratio.
Lemma 4.9. Given a CFP (f, g) as postulated in Lemma 4.8 (see the lower part of Figure 3), we have

cost(f)/cost(g) ≤ (1 + √2)/2.
The proof proceeds in two simple steps: first, we argue that we can assume without loss of generality
that there is only a single large element (i.e., f1 = g1 is constant on [0, t)). Then, for such pairs of
functions, the ratio is simply a real function of three variables whose maximum is easy to compute.
Proof. As a first step, we assume without loss of generality that there is only a single large element (i.e., f1 = g1 is constant on [0, t)). This is due to the fact that both f and g can be written as a weighted average of functions with a single large element. Formally, let ℓ1, ℓ2, ... be the step lengths of f1 on [0, t), so that Σ_i ℓi = t and f1 is constant on [0, ℓ1), on [ℓ1, ℓ1 + ℓ2) and so on. Define f^i to be f with the whole f1 on [0, t) replaced by the i-th step of f1, i.e.,

f^i(x) = f(ℓ1 + ... + ℓ_{i−1})  for x ∈ [0, t),
f^i(x) = f(x)                   for x ∈ [t, 1).

Define g^i similarly. Then

cost(f) = Σ_i ℓi · cost(f(ℓ1 + ... + ℓ_{i−1})) + (1 − t) · cost(f(1))
        = Σ_i (ℓi/t) · [t · cost(f(ℓ1 + ... + ℓ_{i−1})) + (1 − t) · cost(f(1))]
        = Σ_i (ℓi/t) · cost(f^i)

and similarly cost(g) = Σ_i (ℓi/t) · cost(g^i). Thus, if we have cost(f^i)/cost(g^i) ≤ (1 + √2)/2 for each i, then

cost(f)/cost(g) = [Σ_i (ℓi/t) cost(f^i)] / [Σ_i (ℓi/t) cost(g^i)] ≤ [Σ_i (ℓi/t) · ((1 + √2)/2) · cost(g^i)] / [Σ_i (ℓi/t) cost(g^i)] = (1 + √2)/2.
So we assume that f1 is constant on [0, t) (i.e., the shaded areas in Figure 3 are rectangles). Let γ = f1(0) be the large element and λ be the total mass of liquid elements (the same in f as in g), i.e., λ = f(1) = (1 − t)g(1). In the limit ε → 0 we have

cost(f)/cost(g) = [t(γ² + ∫_0^λ (γ + x) dx) + (1 − t) ∫_0^λ x dx] / [tγ² + (1 − t) ∫_0^{g(1)} x dx]
               = [t(γ² + γλ + λ²/2) + (1 − t)λ²/2] / [tγ² + (1 − t)(λ/(1 − t))²/2]
               = [tγ² + tγλ + λ²/2] / [tγ² + λ²/(2(1 − t))]
√
and we need to prove that this expression is at most 1+2 2 for all t ∈ [0, 1), γ ≥ 0 and λ ≥ 0. So
we want to show
√
1+ 2
λ2
λ2
2
2
≤
tγ + tγλ +
tγ +
,
2
2
2(1 − t)
that is,

((1 + √2)/(4(1 − t)) − 1/2) · λ² − tγ · λ + ((√2 − 1)/2) · tγ² ≥ 0.
Note that (1 + √2)/(4(1 − t)) − 1/2 > 0 for t ∈ [0, 1), so this is a quadratic polynomial in λ whose minimum value (over λ ∈ R) is

((√2 − 1)/2) · tγ² − t²γ² / (4 · ((1 + √2)/(4(1 − t)) − 1/2)) = tγ² · ((√2 − 1)/2 − t / ((1 + √2)/(1 − t) − 2))
and we should prove that this is nonnegative. If t = 0 or γ = 0, then this is clearly true; otherwise we multiply both sides of the inequality by ((1 − t)/(tγ²)) · ((1 + √2)/(1 − t) − 2) (a positive number) and after some calculations we are left with showing

t² + (√2 − 2) · t + (3 − 2√2)/2 ≥ 0,

but this is again a quadratic polynomial, whose minimum is 0.
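As a numerical cross-check of the bound just proved, one can evaluate the ratio h(t, γ, λ) = (tγ² + tγλ + λ²/2)/(tγ² + λ²/(2(1 − t))) on a grid and at the claimed extremal point; a small sketch:

```python
import math

# Grid check that the cost ratio derived above never exceeds (1 + sqrt(2))/2,
# and that equality holds at t = 1 - 1/sqrt(2), gamma = 1/2, lam = (sqrt(2)-1)/2.
def h(t, g, lam):
    return (t * g * g + t * g * lam + lam * lam / 2) / \
           (t * g * g + lam * lam / (2 * (1 - t)))

bound = (1 + math.sqrt(2)) / 2
grid = [h(t / 64, g / 64, lam / 64)
        for t in range(1, 64) for g in range(1, 65) for lam in range(1, 65)]
assert max(grid) <= bound + 1e-9
t_s, g_s, l_s = 1 - 1 / math.sqrt(2), 0.5, (math.sqrt(2) - 1) / 2
assert abs(h(t_s, g_s, l_s) - bound) < 1e-9
```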
Proof of Theorem 4.1. Now it is straightforward to see that

(1 + √2)/2 ≥ cost(f′)/cost(g′) ≥ min(2, cost(f)/cost(g)) = min(2, Σ_i y_i^out · cost(Ci) / Σ_i y_i^in · cost(Ci)),

where (f, g) is produced from (y^out, y^in) as in Fact 4.4 and (f′, g′) is produced from (f, g) by applying Lemmas 4.5, 4.7 and 4.8; the first inequality is by Lemma 4.9. It follows that either (1 + √2)/2 ≥ 2 (which is false) or Σ_i y_i^out · cost(Ci) / Σ_i y_i^in · cost(Ci) ≤ (1 + √2)/2.
On the other hand, to conclude the proof of Theorem 1.1, we have the following lemma regarding
the tightness of our analysis of Algorithm 1:
Lemma 4.10. For any δ > 0, there is an instance I of R||Σ pj Cj whose optimal value is c, and an optimal Configuration-LP solution whose objective value is also c, such that the rounded solution returned by Algorithm 1 has cost at least ((1 + √2)/2 − δ) · c.
Proof. The intuitive explanation is that the bound in Lemma 4.9 is tight and thus there exists a CFP in the final form (as postulated by Lemma 4.8) with ratio exactly (1 + √2)/2. Furthermore, this CFP indeed almost corresponds to an instance of R||Σ pj Cj (except for the fact that the parameters which maximize the ratio are irrational). We make this intuition formal in the following.
Let

h(t, γ, λ) = (tγ² + tγλ + λ²/2) / (tγ² + λ²/(2(1 − t)))

be the function specifying the ratio of CFPs in the final form. In Lemma 4.9, we proved that h(t, γ, λ) ≤ 1/2 + 1/√2 for all t, γ, λ ∈ [0, 1]. To begin, let us fix the following maximizer (t⋆, λ⋆, γ⋆) of h: t⋆ = 1 − 1/√2, γ⋆ = 1/2 and λ⋆ = (√2 − 1)/2; we have h(t⋆, λ⋆, γ⋆) = 1/√2 + 1/2.

Let us choose a small rational η. Next, let us fix rationals t̃ ∈ [t⋆ − η, t⋆], λ̃ ∈ [λ⋆ − η, λ⋆] and γ̃ ∈ [γ⋆, γ⋆ + η] such that h(t̃, γ̃, λ̃) ≥ (1 + √2)/2 − η. Then, there exist positive integers k, T and Λ such that T = t̃k and Λ = λ̃k. Finally, select a small rational ε ≤ η such that ε = λ̃/(k1(1 − t̃)) for some integer k1 > 0, and ε = λ̃/k2 for some integer k2 > 0.
Next, we create an instance Iε of R||Σ pj Cj which consists of k machines, a set T of T jobs of size γ̃ each, and a set L of Λ/ε jobs of size ε each; any job can be assigned to any machine. An optimal solution to this instance will assign the jobs from T alone on T machines, and distribute the jobs from L evenly on the rest of the machines (i.e., these machines will all receive λ̃/((1 − t̃)ε) jobs of size ε each). The fact that this is an optimal solution follows in a straightforward manner from the following two observations:
• A solution which assigns a job from L on the same machine as a job from T is sub-optimal:
indeed, the average makespan is less than γ̃ (in fact, it is exactly t̃γ̃ + λ̃, which is at most γ̃,
due to the fact that t⋆ γ ⋆ + λ⋆ < γ ⋆ and due to the intervals which we choose t̃, γ̃ and λ̃ from),
which implies that we can always reassign such a job to a machine with smaller makespan,
thus decreasing the solution cost.
• Similarly, in any optimal solution, jobs from T are not assigned on the same machine.
Now, consider the Configuration-LP solution yε which assigns to every machine a configuration which consists of a single job from T (i.e., of cost γ̃²) with probability t̃, and a configuration which consists of λ̃/((1 − t̃)ε) jobs from L (i.e., of cost Σ_{1≤i≤λ̃/((1−t̃)ε)} Σ_{1≤j≤i} ε² = λ̃²/(2(1 − t̃)²) + λ̃ε/(2(1 − t̃))) with probability 1 − t̃; clearly, the cost of this LP solution is equal to that of any optimal integral solution (in fact, the LP solution is a convex combination of all integral optimal solutions). Furthermore, this LP solution is optimal (one can see this by applying the reasoning we used for the integral optimum to all the configurations in the support of a fractional solution).
Algorithm 1 will assign to any machine a configuration which consists of a single job from T and λ̃/ε jobs from L (i.e., of cost γ̃(γ̃ + λ̃) + Σ_{1≤i≤λ̃/ε} Σ_{1≤j≤i} ε² = γ̃(γ̃ + λ̃) + λ̃²/2 + λ̃ε/2) with probability t̃, and a configuration which consists of λ̃/ε jobs from L (i.e., of cost Σ_{1≤i≤λ̃/ε} Σ_{1≤j≤i} ε² = λ̃²/2 + λ̃ε/2) with probability 1 − t̃. To see this, first observe that every machine has a total fractional assignment of jobs from T equal to t̃, and a total fractional assignment of jobs from L equal to λ̃/ε. Therefore, the first bucket created by Algorithm 1 for any machine will contain a t̃-fraction of T-jobs and a (1 − t̃)-fraction of L-jobs, and the rest of the buckets will be filled up with L-jobs (since λ̃/ε is an integer, the last bucket will be filled up to a t̃-fraction). This implies that, in a worst-case output distribution, with probability t̃ any machine receives a T-job and L-jobs of total size λ̃, and with probability (1 − t̃) it receives L-jobs of total size λ̃.
Now, the ratio of the expected cost of the returned solution to the LP cost, for any machine, is then

(t̃γ̃² + t̃γ̃λ̃ + λ̃²/2 + (λ̃/2)ε) / (t̃γ̃² + λ̃²/(2(1 − t̃)) + (λ̃/2)ε)
≥ (t̃γ̃² + t̃γ̃λ̃ + λ̃²/2) / (t̃γ̃² + λ̃²/(2(1 − t̃)) + (λ̃/2)ε)
= h(t̃, γ̃, λ̃) · (t̃γ̃² + λ̃²/(2(1 − t̃))) / (t̃γ̃² + λ̃²/(2(1 − t̃)) + (λ̃/2)ε),

which is at least (1 + √2)/2 − δ if we pick ε and η small enough; since the cost of the LP solution is equal to that of any optimal integral solution, the claim follows.
It is interesting to note that, given the instance and LP solution from the proof of Lemma 4.10,
any random assignment produced by Algorithm 1 will assign the same amount of small jobs to all the
machines, while it will assign a large job to a t̃-fraction of the machines. Therefore derandomizing
Algorithm 1 by picking
the best possible matching (instead of picking one at random) will not improve upon the (1 + √2)/2 ratio.
Theorem 4.1 and Lemma 4.10 together imply Theorem 1.1.
[Figure 4 appears here: four machines M1–M4 (black circles) and six jobs (boxes) J12, J13, J14, J23, J24, J34, each job connected to its two allowed machines.]

Figure 4: The 13/12-integrality gap instance. In this picture, black circles correspond to machines, gray boxes correspond to jobs of size 3, and white boxes correspond to jobs of size 1. An edge between a circle and a box means that the corresponding job can be assigned to the corresponding machine.
5 Integrality Gap Lower Bound
First of all, observe that Theorem 1.1, apart from establishing the existence of a 1.21-approximation algorithm for R||Σ_j pj Cj, also implies an upper bound on the integrality gap of its Configuration-LP. Hence, we accompany our main result with a lower bound on the integrality gap of the Configuration-LP for R||Σ_j pj Cj:

Theorem 1.2. The integrality gap of the Configuration-LP for R||Σ pj Cj is at least 1.08.
Proof. Consider the following instance on 4 machines M1 , M2 , M3 , M4 : for every pair {Mi , Mj } of
machines there is one job Jij which can be processed only on these two machines. Jobs J12 and
J34 are large: they have weight and size 3, while the other four jobs are small and have weight and
size 1. (See Figure 4 for an illustration.)
First we show that any integral schedule has cost at least 26. Without loss of generality, the
large job J12 is assigned to machine M1 and the other large job J34 is assigned to machine M3 .
The small job J13 must also be assigned to one of them, say to M1 . This costs 9 + 9 + 4 = 22. The
remaining three small jobs J23, J14 and J24 cannot all be assigned to distinct machines with zero
makespan (since only M2 and M4 are such), so they will incur at least 1 + 1 + 2 = 4 units of cost.
On the other hand, the Configuration-LP has a solution of cost 24. Namely, it can assign to each machine Mi two configurations, each with fractional value 1/2: the singleton large job that can be processed on that machine, or the two small jobs that can be processed on that machine. Then each job is processed with fractional value 1/2 by each of the two machines that can process it. The cost is 4 · (1/2 · 9 + 1/2 · (1 + 2)) = 24. Thus the integrality gap is at least 26/24 = 13/12 > 1.08.
On Design Mining:
Coevolution and Surrogate Models
arXiv:1506.08781v6 [] 23 Nov 2016
Richard J. Preen∗ and Larry Bull
Department of Computer Science and Creative Technologies
University of the West of England, Bristol, UK
November 24, 2016
Abstract
Design mining is the use of computational intelligence techniques to iteratively search
and model the attribute space of physical objects evaluated directly through rapid prototyping to meet given objectives. It enables the exploitation of novel materials and
processes without formal models or complex simulation. In this paper, we focus upon
the coevolutionary nature of the design process when it is decomposed into concurrent
sub-design threads due to the overall complexity of the task. Using an abstract, tuneable model of coevolution we consider strategies to sample sub-thread designs for whole
system testing and how best to construct and use surrogate models within the coevolutionary scenario. Drawing on our findings, the paper then describes the effective design
of an array of six heterogeneous vertical-axis wind turbines.
Keywords— 3D printing, coevolution, shape optimisation, surrogate models, turbine,
wind energy
∗ Contact author. E-mail: [email protected]
R. J. Preen and L. Bull: On Design Mining: Coevolution and Surrogate Models
1 Introduction
Design mining [54, 55, 56] is the use of computational intelligence techniques to iteratively
search and model the attribute space of physical objects evaluated directly through rapid
prototyping to meet given objectives. It enables the exploitation of novel materials and
processes without formal models or complex simulation, whilst harnessing the creativity
of both computational and human design methods. A sample-model-search-sample loop
creates an agile/flexible approach, i.e., primarily test-driven, enabling a continuing process
of prototype design consideration and criteria refinement by both producers and users.
Computational intelligence techniques have long been used in design, particularly for
optimisation within simulations/models. Recent developments in additive-layer manufacturing (3D printing) mean that it is now possible to work with over a hundred different materials, from ceramics to cells. In the simplest case, design mining assumes no
prior knowledge and builds an initial model of the design space through the testing of 3D
printed designs, whether specified by human and/or machine. Optimisation techniques,
such as evolutionary algorithms (EAs), are then used to find the optima within the data
mining model of the collected data; the model which maps design specifications to performance is inverted and suggested good solutions identified. These are then 3D printed
and tested. The resulting data are added to the existing data and the process repeated.
Over time the model—built solely from physical prototypes tested appropriately for the
task requirements—captures the salient features of the design space, thereby enabling the
discovery of high-quality (novel) solutions. Such so-called surrogate models have also
long been used in optimisation for cases when simulations are computationally expensive.
Their use with 3D printing opens new ways to exploit optimisation in the design of physical objects directly, whilst raising a number of new issues over the simulation case.
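The sample-model-search-sample loop described above can be sketched in a few lines. Everything below is illustrative rather than the authors' implementation: the "physical test" is a toy function standing in for printing and measuring a design, and the surrogate is a simple nearest-neighbour model.

```python
import random

random.seed(0)

def fabricate_and_test(x):
    # stand-in for 3D printing a design x and physically measuring its performance
    return -(x - 0.7) ** 2

def fit_surrogate(data):
    # nearest-neighbour surrogate: predict the measured fitness of the
    # closest design tested so far
    def model(x):
        return min(data, key=lambda d: abs(d[0] - x))[1]
    return model

data = [(x, fabricate_and_test(x)) for x in (0.0, 0.5, 1.0)]  # initial sample
for _ in range(20):  # sample-model-search-sample loop
    model = fit_surrogate(data)
    # "search": perturb previously tested designs, keep the model's favourite
    candidates = [min(max(x + random.gauss(0, 0.1), 0.0), 1.0)
                  for x, _ in data]
    suggested = max(candidates, key=model)
    data.append((suggested, fabricate_and_test(suggested)))  # test and archive

best_x, best_y = max(data, key=lambda d: d[1])
```

Over time the archive `data` plays the role of the growing model-training sample, and the best archived design only improves.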
This approach of constantly producing working prototypes from the beginning of the
design process resembles agile software engineering [43]: requirements are
identified at the start, even if only partially, and then corresponding tests created, which
are then used to drive the design process via rapid iterations of solution creation and
evaluation. The constant supply of (tangible) prototypes enables informed sharing with,
and hence feedback from, those involved in other stages of the process, such as those in
manufacture or the end user. This feedback enables constant refinement of the requirements/testing and also means that aspects of the conceptual and detailed design stages
become blended. Moreover, due to the constant production of better (physical) prototypes,
aspects of the traditional manufacturing stage become merged with the design phase. The
data mining models created provide new sources of knowledge, enabling designers, manufacturers, or users, to do what-if tests during the design process to suggest solutions, the
sharing of timely/accurate information when concurrent sub-design threads are being exploited, etc. Thereafter, they serve as sources of information for further adaptive designs,
the construction of simulators/models, etc.
In contrast to human designers, who typically arrive at solutions by refining building
blocks that have been identified in highly constrained ways, computational intelligence offers a much more unconstrained and unbiased approach to exploring potential solutions.
Thus, by creating the designs directly in hardware there is the potential that complex and
subtle physical interactions can be utilised in unexpected ways where the operational principles were previously unknown. These physical effects may simply be insufficiently understood or absent from a simulator and thus otherwise unable to be exploited. Design
mining is therefore ideally suited to applications involving highly complex environments
and/or materials.
The design of modern wind farms typically begins with the blade profile optimisation
of a single isolated wind turbine through the use of computational fluid dynamics (CFD)
simulations [68], followed by optimising the site positioning of multiple copies of the same
design to minimise the negative effects of inter-turbine wake interactions [25]. Whilst CFD
simulations have been successfully applied, they are extremely computationally expensive; consequently most numerical studies perform only 2D analysis, e.g., [21], and it is
currently infeasible to perform accurate 3D simulations of a large array. Moreover, various
assumptions must be made, and accurately modelling the complex inter-turbine wake interactions is an extremely challenging task where different turbulence models can have a
dramatic effect on turbine performance [33]. CFD studies have also presented significant
differences between results even with identical geometric and flow conditions due to the
complexity of performing accurate numerical analysis [1].
In our initial pilot study we used the design mining approach to discover a pair of
novel, heterogeneous vertical-axis wind turbine (VAWT) designs through cooperative coevolution [55]. Accurate and computationally efficient modelling of the inter-turbine interactions is extremely difficult and therefore the area is ideally suited to the design mining
approach. More recently, we have begun to explore the performance of relevant techniques
from the literature within the context of design mining. Following [10], the pilot study used
multi-layered perceptrons (MLPs) [59] for the surrogate modelling. Using the data from
that study, we have subsequently shown that MLPs appear a robust approach in comparison to a number of well-known techniques [56]. That is, MLPs appear efficient at capturing
the underlying structure of a design space from the relatively small amount of data points
a physical sampling process can be expected to generate. In this paper we begin by continuing the line of enquiry, here focusing upon the coevolutionary nature of the design
process when it is decomposed into concurrent sub-design threads due to the overall complexity of the task. Using an abstract, tuneable model of coevolution we consider strategies
to sample sub-thread designs for whole system testing and how best to construct and use
surrogate models within the coevolutionary scenario. Drawing on our findings, the paper
then describes the effective design of a more complex array of VAWT than our pilot study.
2 Background

2.1 Evolving Physical Systems
As we have reviewed elsewhere [55], there is a small amount of work considering the
evolutionary design of physical systems directly, stretching back to the origins of the discipline [8, 51, 19, 57]. Well-known examples include robot controller design [45]; the evolution of vertebrate tail stiffness in swimming robots [42]; adaptive antenna arrays [4]; electronic circuit design using programmable hardware [66]; product design via human provided fitness values [28]; chemical systems [65]; unconventional computers [27]; robot embodied evolution [22]; drug discovery [64]; functional genomics [39]; adaptive optics [63];
quantum control [36]; fermentation optimisation [18]; and the optimisation of analytical
instrumentation [46]. A selection of multiobjective case studies can be found in [40]. Examples of EAs incorporating the physical aerodynamic testing of candidate solutions include
the optimisation of jet nozzles [57, 62], as well as flapping [2, 31, 48] and morphing [7]
wings. More recent fluid dynamics examples include [50, 23, 5].
Lipson and Pollack [41] were the first to exploit the use of 3D printing in conjunction
with an EA, printing mobile robots with embodied neural controllers that were evolved using a simulation of the mechanics and control. Rieffel and Sayles’ [58] use of an interactive
EA to 3D print simple shapes is particularly relevant to the work presented here. As noted
above, in this paper we adopt an approach where relatively simple and tuneable simulations of the basic evolutionary design scenario are used to explore the general performance
of different algorithmic approaches before moving to the physical system. This can be seen
as somewhat akin to the minimalist approach proposed by Jakobi [34] for addressing the
so-called reality gap in evolutionary robotics, although further removed from the full details of the physical system. Surrogate models are then used to capture the underlying
characteristics of the system to guide design.
2.2 Cooperative Coevolution and Surrogates
Cooperative coevolution decomposes a global task into multiple interacting sub-component
populations and optimises each in parallel. In the context of a design process, this can be
seen as directly analogous to the use of concurrent sub-threads. The first known use of
cooperative coevolution considered a job-shop scheduling task [32]. Here solutions for individual machines were first evaluated using a local fitness function before being partnered
with solutions of equal rank in the other populations to create a global solution for evaluation. Bull and Fogarty [11] subsequently presented a more general approach wherein the
corresponding newly created offspring solutions from each population are partnered and
evaluated. Later, Potter and De Jong [53] introduced a round-robin approach with each
population evolved in turn, which has been adopted widely. They explored using the current best individual from each of the other populations to create a global solution, before
extending it to using the best and a random individual from the other population(s). These
two partnering strategies, along with others, were compared under their round-robin approach and found to be robust across function/problem types [9]. We used the round-robin
approach and partnering with the best individual in our pilot study and return to it here
with the focus on learning speed, i.e., how to minimise the number of (timely/costly) fitness evaluations whilst learning effectively.
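A minimal sketch of this round-robin scheme with best-individual partnering, using a toy additive objective in place of a real design task (population sizes, mutation rate and the ONEMAX-style fitness are illustrative assumptions):

```python
import random

random.seed(1)

N, POP = 8, 10
def global_fitness(a, b):           # toy whole-system objective (ONEMAX)
    return sum(a) + sum(b)

# two populations, each evolving one sub-component of the global solution
pops = [[[random.randint(0, 1) for _ in range(N)] for _ in range(POP)]
        for _ in range(2)]
fits = [[global_fitness(ind, random.choice(pops[1 - s]))
         for ind in pops[s]] for s in range(2)]

start_best = max(fits[0])
for _ in range(50):
    for s in (0, 1):                # round-robin: evolve each population in turn
        best_partner = max(zip(pops[1 - s], fits[1 - s]), key=lambda t: t[1])[0]
        parent = max(zip(pops[s], fits[s]), key=lambda t: t[1])[0]
        child = [g ^ (random.random() < 0.1) for g in parent]  # bit-flip mutation
        f = global_fitness(child, best_partner)  # evaluate with the best partner
        worst = min(range(POP), key=lambda i: fits[s][i])
        if f > fits[s][worst]:
            pops[s][worst], fits[s][worst] = child, f  # replace the worst

assert max(fits[0]) >= start_best   # elite fitness never degrades
```

The number of global evaluations consumed is one per offspring, which is exactly the quantity a surrogate model would aim to reduce.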
As EAs have been applied to ever more complex tasks, surrogate models (also known as meta-models) have been used to reduce the optimisation time. A surrogate model, y = f(x), can be formed from a sample D of N evaluated designs, where x is the genotype describing the design morphology and y is the fitness/performance. The model is then used to compute the fitness of unseen data points x ∉ D, thereby providing a cheap approximation of the real fitness function for the EA to use. Evaluations with the real fitness function must continue to be performed periodically, otherwise the model may lead to premature convergence on local optima (see Jin [35] for an overview). There has been
very little prior work on the use of surrogates in a coevolutionary context: they have been
shown capable of solving computationally expensive optimisation problems with varying
degrees of epistasis more efficiently than conventional coevolutionary EAs (CEAs) through
the use of radial basis functions [49] and memetic algorithms [24]. Remarkably, in 1963
Dunham et al. [19], in describing the evolutionary design of physical logic circuits/devices,
briefly note (without giving details): “It seemed better to run through many ‘generations’
with only approximate scores indicating progress than to manage a very few ‘evolutions’
with rather exact statements of position.”
Our aforementioned pilot study is the first known use of coevolutionary design without
R. J. Preen and L. Bull: On Design Mining: Coevolution and Surrogate Models
simulation. As noted above, we have recently compared different modelling techniques by
which to construct surrogates for coevolution. In this paper we further consider how best
to train and use such models.
2.3 The NKCS Model
Kauffman and Johnsen [38] introduced the abstract NKCS model to enable the study of
various aspects of coevolution. In their model, an individual is represented by a genome
of N (binary) genes, each of which depends epistatically upon K other randomly chosen
genes in its genome. Thus increasing K, with respect to N , increases the epistatic linkage,
increasing the ruggedness of the fitness landscapes by increasing the number of fitness
peaks, which increases the steepness of the sides of fitness peaks and decreases their typical heights. Each gene is also said to depend upon C randomly chosen traits in each of the other X species with which it interacts, where there are S species in total.
The adaptive moves by one species may deform the fitness landscape(s) of its partner(s).
Altering C, with respect to N , changes how dramatically adaptive moves by each species
deform the landscape(s) of its partner(s). The model assumes all inter- and intragenome
interactions are so complex that it is appropriate to assign random values to their effects on
fitness. Therefore, for each of the possible K + (X × C) interactions, a table of 2^(K+(X×C)+1)
fitnesses is created for each gene, with all entries in the range 0.0 to 1.0, such that there is
one fitness for each combination of traits. The fitness contribution of each gene is found
from its table; these fitnesses are then summed and normalised by N to give the selective
fitness of the total genome for that species. Such tables are created for each species (see
example in Figure 1; the reader is referred to Kauffman [37] for full details). This tuneable model has previously been used to explore coevolutionary optimisation, particularly
in the aforementioned comparison of partnering strategies [9]. We similarly use it here to
systematically compare various techniques for the design mining approach.
That is, each species is cast as a sub-thread of an overall design task, thereby enabling
examination of the effects from varying their number (S), their individual complexity (K),
and the degree of interdependence between them (C). The fitness calculations of each
species are combined to give a global system performance.
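To make the construction above concrete, the following sketch builds random NKCS fitness tables and computes the selective fitness of one species' genome. Kauffman and Johnsen do not prescribe a data structure, so the link selection and table layout here are illustrative choices of our own.

```python
import itertools
import random

def make_nkcs(N, K, C, S, X, seed=0):
    """Random NKCS fitness tables: for every gene of every species, one
    fitness entry per combination of its own value, its K local links and
    C links into each of X partners (2^(1+K+X*C) entries, drawn from [0,1))."""
    rng = random.Random(seed)
    return [[{bits: rng.random()
              for bits in itertools.product((0, 1), repeat=1 + K + X * C)}
             for _ in range(N)]
            for _ in range(S)]

def make_links(N, K, C, S, X, seed=0):
    """Randomly choose, for each gene, its K local links and its C links
    into each of the X partner species."""
    rng = random.Random(seed)
    local = [[rng.sample([j for j in range(N) if j != n], K)
              for n in range(N)] for _ in range(S)]
    cross = [[[rng.sample(range(N), C) for _ in range(X)]
              for n in range(N)] for _ in range(S)]
    return local, cross

def species_fitness(tables, local, cross, s, genome, partners):
    """Selective fitness of species s: per-gene table lookups, summed and
    normalised by N (partners = the X collaborating genomes)."""
    N = len(genome)
    total = 0.0
    for n in range(N):
        bits = (genome[n],)
        bits += tuple(genome[j] for j in local[s][n])        # K local links
        for x, partner in enumerate(partners):               # X partners
            bits += tuple(partner[j] for j in cross[s][n][x])  # C links each
        total += tables[s][n][bits]
    return total / N

# The example parameters of Figure 1: N=3, K=1, C=1, S=2, X=1.
N, K, C, S, X = 3, 1, 1, 2, 1
tables = make_nkcs(N, K, C, S, X)
local, cross = make_links(N, K, C, S, X)
f = species_fitness(tables, local, cross, 0, (1, 0, 1), [(1, 1, 0)])
```

Since each gene contributes a value in [0, 1) and the sum is normalised by N, the genome fitness also lies in [0, 1).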
2.4 Evolving Wind Farms
As we have reviewed elsewhere [56], techniques such as EAs have been used to design
wind turbine blades using CFD simulations, some in conjunction with surrogate models,
e.g., Chen et al. [14]. EAs have also been extensively used to optimise the turbine positioning within wind farms, e.g., Mosetti et al. [44]. Most work has focused on arrays
of homogeneous turbines; however, wind farms of heterogeneous height have recently
gained attention as a means to improve the overall power output for a given number of
turbines [15, 20, 13]. Chamorro et al. [12] explored horizontal-axis wind turbine (HAWT)
farms with large and small turbines positioned alternately. They found that size heterogeneity has positive effects on turbulent loading as a result of the larger turbines facing a
more uniform turbulence distribution and the smaller turbines operating under lower turbulence levels. Craig et al. [17] have demonstrated a similar potential for heterogeneous
height VAWT wind farms. Chowdhury et al. [16] optimised layouts of HAWT with heterogeneous rotor diameters using particle swarm optimisation and found that the optimal
combination of turbines with differing rotor diameters significantly improved the wind farm efficiency.

Figure 1: The NKCS model (example with N = 3, K = 1, C = 1, S = 2, X = 1): each gene is connected to K randomly chosen local genes (solid lines) and to C randomly chosen genes in each of the X other species (dashed lines). A random fitness is assigned to each possible set of combinations of genes. The fitness of each gene is summed and normalised by N to give the fitness of the genome. Example fitness tables are provided for species s1, where the s1 genome fitness is 0.416 when s1 = [101] and s2 = [110]. (Diagram and fitness tables not reproduced.)

Recently, Xie et al. [67] have performed simulations of wind farms with
collocated VAWT and HAWT, showing the potential to increase the efficiency of existing
HAWT wind farms by adding VAWTs.
Conventional offshore wind farms require support structures fixed rigidly to the seabed,
which currently limits their deployment to depths below 50 m. However, floating wind
farms can be deployed in deep seas where the wind resources are strongest, away from
shipping lanes and wind obstructions [52]. See Borg et al. [6] for a recent review of floating
wind farms. They note that floating VAWT have many advantages over HAWT, e.g., lower
centre of gravity, increased stability, and increased tolerance of extreme conditions. The design of floating wind farms is especially challenging since platform oscillations also need
to be considered. EAs are beginning to be used to explore the design of floating support
structures, e.g., Hall et al. [26] optimised HAWT platforms using a simple computational
model to provide fitness scores. Significantly, all of these works have involved the use of
CFD simulations with varying degrees of fidelity.
Our pilot study found that asymmetrical pairs of VAWT can be more efficient than similar symmetrical designs. In this paper, we extend our initial work to the heterogeneous design of an array of 6 closely positioned VAWT. The approach performs optimisation in the presence of non-uniform wind velocity, complex inter-turbine wake effects, and multi-directional wind flow from nearby obstacles, conditions that are currently effectively beyond the capabilities of accurate 3D CFD simulation. In addition, whereas previously the combined rotational speed was simply used as the objective measure, here we use the total angular kinetic energy of the array, which includes both mass and speed of rotation, and we use a more flexible spline representation that enables the potential exploitation of both drag and lift forces in conjunction with inter-turbine flow and turbulence from nearby obstacles.

Algorithm 1: Coevolutionary genetic algorithm

    for each species do
        initialise population;
        select a random representative for each other species;
        for each individual in population do
            evaluate;
        end
    end
    while evaluation budget not exhausted do
        for each species do
            create an offspring using genetic operators;
            select a representative for each other species;
            evaluate the offspring;
            add offspring to species population;
        end
    end
3 Surrogate-assisted Coevolution
The basic coevolutionary genetic algorithm (CGA) is outlined in Algorithm 1. Initially all
individuals in each of the species/populations must be evaluated. Since no initial fitness
values are known, a random individual is chosen in each of the other populations to form
a global solution; however, if there is a known good individual then that individual can
be used instead. The CGA subsequently cycles between populations, selecting parents via
tournament and creating offspring with mutation and/or crossover. The offspring are then
evaluated using representative members from each of the other populations. At any point
during evolution, each individual is assigned the maximum team fitness achieved by any
team in which it has been evaluated, where the team fitness is the sum of the fitness scores
of each collaborating member.
For the basic surrogate-assisted CGA (SCGA) used in this paper, the CGA runs as normal except that each time a parent is chosen, λm offspring are created and then evaluated with an artificial neural network surrogate model; the single offspring with the highest approximated fitness is then evaluated on the real fitness function in collaboration with the fittest solution (best partner) in each of the other populations. See the outline in Algorithm 2. The model is trained using backpropagation for T epochs, where an epoch consists of randomly selecting, without replacement, all individuals from a species population archive and updating the model weights at a learning rate β. The model weights are (randomly) reinitialised each time before training due to the temporal nature of the collaborating scheme.
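A minimal numpy sketch of this pre-selection step follows. The single-hidden-layer tanh network and the bit-flip `mutate` operator are illustrative stand-ins rather than the authors' exact implementation; the defaults mirror the H = 10, T = 50, β = 0.1 settings reported later.

```python
import numpy as np

class MLPSurrogate:
    """Single-hidden-layer regression network trained by backpropagation.

    Weights are freshly (randomly) reinitialised on every call to train(),
    mirroring the scheme described above."""

    def __init__(self, n_inputs, H=10, T=50, beta=0.1, seed=0):
        self.n_inputs, self.H, self.T, self.beta = n_inputs, H, T, beta
        self.rng = np.random.default_rng(seed)

    def train(self, X, y):
        self.W1 = self.rng.normal(0, 0.1, (self.n_inputs, self.H))
        self.W2 = self.rng.normal(0, 0.1, self.H)
        for _ in range(self.T):                       # T epochs
            for i in self.rng.permutation(len(X)):    # without replacement
                h = np.tanh(X[i] @ self.W1)
                err = h @ self.W2 - y[i]              # linear output unit
                grad_h = err * self.W2 * (1 - h ** 2)
                self.W2 -= self.beta * err * h
                self.W1 -= self.beta * np.outer(X[i], grad_h)

    def predict(self, X):
        return np.tanh(X @ self.W1) @ self.W2

def preselect(parent, model, lam, mutate):
    """Create lam mutants of parent; return the one the surrogate rates best
    (only this one would then be evaluated on the real fitness function)."""
    offspring = np.array([mutate(parent) for _ in range(lam)])
    return offspring[np.argmax(model.predict(offspring))]

# Toy usage on binary genomes with a stand-in fitness (mean of the bits):
rng = np.random.default_rng(1)
archive_X = rng.integers(0, 2, (30, 20)).astype(float)
archive_y = archive_X.mean(axis=1)
model = MLPSurrogate(n_inputs=20)
model.train(archive_X, archive_y)

def mutate(g, p=0.05):
    flips = rng.random(len(g)) < p
    return np.where(flips, 1 - g, g)

child = preselect(archive_X[0], model, lam=1000, mutate=mutate)
```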
For both CGA and SCGA, a tournament of size 3 is used for both selection and replacement. A limited form of elitism is used whereby the current fittest member of the population is given immunity from deletion.

Algorithm 2: Surrogate-assisted coevolutionary genetic algorithm

    for each species do
        initialise population;
        select a random representative for each other species;
        for each individual in population do
            evaluate;
            archive;
        end
    end
    while evaluation budget not exhausted do
        for each species do
            initialise model;
            train model on species archive;
            select parent(s) using tournament selection;
            for λm number of times do
                create an offspring using genetic operators;
                predict offspring fitness using the model;
            end
            select the offspring with the highest model predicted fitness;
            select a representative for each other species;
            evaluate the offspring;
            add offspring to species population;
            archive offspring;
        end
    end
4 NKCS Experimentation
For the physical experiments performed in this paper, 6 VAWT are positioned in a row.
Therefore, to simulate this interacting system, we explore the case where S = 6 and each
species is affected by its proximate neighbours, i.e., X = 1 for the first and sixth species,
and X = 2 for all others. Figure 2 illustrates the simulated topology. For all NKCS simulations performed, P = 20, N = 20, per allele mutation probability µ = 5%, and crossover
probability is 0%. Where a surrogate model is used, the model parameters are: N input
neurons, H = 10 hidden neurons, 1 output neuron, λm = 1000, T = 50, β = 0.1. All
results presented are an average of 100 experiments consisting of 10 coevolutionary runs
on 10 randomly generated NKCS functions. The performance of all algorithms is shown for four different K and C values, each representing a different point in the range of inter- and intra-population dependence.
Figure 2: NKCS topology (species s1 to s6 in a row). Arrows indicate inter-species connectivity (X).
4.1 Coevolution
We begin by comparing the traditional approach of partnering with the elite member in
each other species (CGA-b) with performing additional evaluations (CGA-br), and explore
any benefits to overall learning speed from refreshing the population fitness values as the
fitness landscapes potentially shift; that is, all individuals in the other species populations
are re-evaluated in collaboration with the current elite members each time a new fittest
individual is found (CGA-re).
As noted above, after Potter and De Jong [53], traditionally CEAs consider each population in turn. Thus, if S = 10 and each species population creates one offspring per turn,
then 10 evaluations are required for the whole system. However, at the other end of this
scale each population simultaneously generates a new individual each turn, and evaluates
all offspring at once [11], therefore requiring only one evaluation for the whole system.
Varying the number of offspring collaborators in this way is much like varying the system-level mutation rate. Therefore, we explore the case where all species offspring are created
and tested simultaneously (CGA-o).
In summary, the NKCS model is used to examine the following four different collaboration schemes:
• CGA-b: each offspring is evaluated in collaboration with the current best individual
in each of the other species populations.
• CGA-br: each offspring is evaluated as in CGA-b, and additionally with a random
member in each of the other populations.
• CGA-re: each offspring is evaluated as in CGA-b, and all populations are re-evaluated
when one makes progress.
• CGA-o: offspring are created in each species simultaneously and evaluated together.
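The way a global solution is assembled under the first two schemes can be sketched as follows; the `Individual` container and the toy fitness values are illustrative, not the experimental setup.

```python
import random
from dataclasses import dataclass

@dataclass
class Individual:
    genome: tuple
    fitness: float = float("-inf")

def team_with_best(offspring, s, populations):
    """CGA-b partnering: the offspring of species s joins the current
    elite member of every other species to form a global solution."""
    team = [max(pop, key=lambda ind: ind.fitness) for pop in populations]
    team[s] = offspring
    return team

def team_with_random(offspring, s, populations, rng=random):
    """The additional team evaluated by CGA-br: random partners, not elites."""
    team = [rng.choice(pop) for pop in populations]
    team[s] = offspring
    return team

# Toy example: two species, two individuals each (hypothetical fitnesses).
pops = [
    [Individual((0, 0), 0.2), Individual((1, 1), 0.8)],
    [Individual((0, 1), 0.5), Individual((1, 0), 0.3)],
]
child = Individual((1, 0))
team = team_with_best(child, 0, pops)
```

The global fitness of `team` would then be credited back to `child` if it is the best team `child` has appeared in.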
Figure 3 and Table 1 present the performance of the collaboration schemes. As can be
seen, during the early stages of evolution, the mean best fitness of CGA-b is significantly greater than that of CGA-br and CGA-re for all tested K, C values, showing that performing additional evaluations results in lower fitness at this stage compared with collaborating only with the elite members. By the end of the experiments, the three approaches generally reach approximately similar performance, suggesting that there is no penalty for this increase in early learning speed. In the case where both inter- and intra-population epistasis are lower, CGA-br performs better, which supports findings reported elsewhere [53, 9]. The approach of evaluating all offspring simultaneously (CGA-o) appears to be detrimental to performance under the simulated conditions.
4.2 Surrogate-assisted Coevolution
In this section we compare the performance of CGA-b with the standard surrogate-assisted
version (SCGA-b). In addition, we compare the performance of the standard surrogate
approach, where the models are presented only the N genes from their own species (SCGA-b), with the case where the models are presented all N × S partner genes (SCGA-a). We
also compare the standard approach of evaluating the most promising of λm offspring
stemming from a single parent (SCGA-b) with searching the same number of offspring
where λm tournaments are performed to select parents that each create a single offspring (SCGA-p).

Figure 3: CGA mean best fitness on the NKCS model (panels: N = 20, S = 6, with K ∈ {2, 6} crossed with C ∈ {2, 8}; mean best fitness versus evaluations, 0–4000). Results are an average of 100 experiments consisting of 10 coevolutionary runs of 10 random NKCS functions. CGA-b (triangle), CGA-br (circle), CGA-re (square), and CGA-o (diamond).
Furthermore, because the individuals undergoing evaluation must partner with the elite members in each of the other species, the training data is highly temporal, and surrogate model performance may degrade when the entire data set is used for training. For example, individuals from the initial population may perform very differently when partnered with the elite individuals from later generations. A windowed approach of using only the most recent P evaluated individuals in each species for training seemed promising in a prior experiment coevolving a pair of VAWT, but the improvement was not statistically significant in practice [56]. Here we explore the effect for larger numbers of species, where the temporal variance is potentially much higher (SCGA-bw).
In summary, the algorithms tested are:
• SCGA-b: standard SCGA.
• SCGA-a: global surrogate model construction.
• SCGA-p: λm parents are selected via tournaments, each creating a single offspring; the most promising, as suggested by the model, is evaluated.
• SCGA-bw: most recent P evaluated individuals used for training.
Table 1: CGA best fitnesses after 480 and 3600 evaluations (averages of 100). The mean is highlighted in boldface where it is significantly different from CGA-b using a Mann-Whitney U test at the 95% confidence interval.

                            CGA-b    CGA-br   CGA-re   CGA-o
    After 480 evaluations:
    K2C2                    3.8449   3.7141   3.6066   3.8163
    K2C8                    3.7767   3.6937   3.6387   3.6424
    K6C2                    3.8338   3.7359   3.6423   3.7617
    K6C8                    3.7626   3.6765   3.6105   3.5761
    After 3600 evaluations:
    K2C2                    4.1464   4.1536   3.9757   4.1417
    K2C8                    4.0700   4.0949   4.0469   4.0269
    K6C2                    4.1395   4.1321   4.0254   4.1320
    K6C8                    4.0390   4.0403   3.9926   3.9279
Table 2: SCGA best fitnesses after 480 and 3600 evaluations (averages of 100). The mean is highlighted in boldface where it is significantly different from SCGA-b using a Mann-Whitney U test at the 95% confidence interval.

                            SCGA-b   CGA-b    SCGA-a   SCGA-p   SCGA-bw
    After 480 evaluations:
    K2C2                    3.9094   3.8449   3.8633   3.9070   3.9071
    K2C8                    3.8214   3.7767   3.7537   3.8477   3.7778
    K6C2                    3.9160   3.8338   3.8175   3.8987   3.8794
    K6C8                    3.7847   3.7626   3.7750   3.7851   3.7252
    After 3600 evaluations:
    K2C2                    4.1392   4.1464   4.1521   4.1578   4.2027
    K2C8                    4.0974   4.0700   4.0558   4.0970   4.1383
    K6C2                    4.1733   4.1395   4.1254   4.1657   4.2244
    K6C8                    4.0596   4.0390   4.0571   4.0630   4.0849
The results are presented in Figure 4 and Table 2. As can be seen, the use of the surrogate model to identify more promising offspring clearly increases learning speed early in the search. For example, the mean best fitness of SCGA-b is significantly greater than that of CGA-b for all tested K, C values, with the exception of very high inter- and intra-population epistasis. At the end of the experiments, similar optima are reached, showing that there is no penalty for this increase in early learning speed. The benefit of the divide-and-conquer strategy to model building can be seen by comparing SCGA-b with SCGA-a: the mean best fitness of SCGA-b is significantly greater than that of SCGA-a for all four K, C values after 480 evaluations, with the exception of very high K and C, showing that purely local models are both efficient and scalable. Comparing SCGA-b with SCGA-p shows that the simple method of using the model presented in our pilot study is quite robust, as there is no significant difference. Finally, the windowed training scheme (SCGA-bw) was found to be significantly worse than using all data (SCGA-b) during the early stages of evolution, but later in the experiments reached higher optima.
Figure 4: SCGA mean best fitness on the NKCS model (panels: N = 20, S = 6, with K ∈ {2, 6} crossed with C ∈ {2, 8}; mean best fitness versus evaluations, 0–4000). Results are an average of 100 experiments consisting of 10 coevolutionary runs of 10 random NKCS functions. SCGA-b (triangle), SCGA-a (circle), SCGA-p (square), SCGA-bw (star), and CGA-b (diamond).
5 VAWT Wind Farm Design

5.1 Methodology
A single 2-stage 2-blade VAWT candidate with end plates is here created as follows. End
plates are drawn in the centre of a Cartesian grid with a diameter of 35 mm and thickness
of 1 mm. A central shaft 70 mm tall, 1 mm thick, and with a 1 mm hollow diameter is also
drawn in the centre of the grid in order to mount the VAWT for testing. The 2D profile of a
single blade on 1 stage is represented using 5 (x, y) co-ordinates on the Cartesian grid, i.e.,
10 genes, x1 , y1 , ..., x5 , y5 . A spline is drawn from (x1 , y1 ) to (x3 , y3 ) as a quadratic Bézier
curve, with (x2 , y2 ) acting as the control point. The process is then repeated from (x3 , y3 )
to (x5 , y5 ) using (x4 , y4 ) as control. The thickness of the spline is a fixed size of 1 mm. The
co-ordinates of the 2D blade profile are only restricted by the plate diameter; that is, the
start and end position of the spline can be located anywhere on the plate.
To enable z-axis variation, 3 additional co-ordinates (i.e., 6 genes) are used to compute
cubic Bézier curves in the xz and yz planes that offset the 2D profile. The xz-plane offset
curve is formed from an x offset=0 on the bottom plate to an x offset=0 on the top plate
using control points (zx1 , z1 ) and (zx2 , z2 ). The yz-plane offset curve is formed in the same
way with zy1 and zy2 control points, however reusing z1 and z2 to reduce the number of
parameters to optimise.
Furthermore, an extra gene, r1, specifies the degree of rotation whereby the blades of one stage are rotated from one end plate to the next through the z-axis, by between 0° and 180°. Thus, a total of 17 genes specify the design of a candidate VAWT. The blade is then duplicated and rotated 180° to form a 2-bladed design. The entire stage is then duplicated and rotated 90° to form the second stage of the VAWT; see the example design in Figure 5(a). When physical instantiation is required, the design is fabricated by a 3D printer (Replicator 2; MakerBot Industries LLC, USA) using a polylactic acid (PLA) bioplastic at 0.3 mm resolution. Figure 5(b) shows the VAWT after fabrication.

Figure 5: Example VAWT. (a) Seed design; genome: x1 = 15.1, y1 = 15.1, x2 = 22.1, y2 = 15.1, x3 = 25.7, y3 = 15.9, x4 = 32.1, y4 = 16.1, x5 = 32.1, y5 = 27.1, zx1 = 0, zx2 = 0, zy1 = 0, zy2 = 0, z1 = 20, z2 = 27.2, r1 = 0. (b) Seed design printed by a 3D printer; 35 mm diameter; 70 mm tall; 7 g; 45-minute printing time at 0.3 mm resolution.
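The genome decoding described above might be sketched as follows. The function names and sampling density are our own; only the 2D profile and the xz-plane offset curve are shown (the yz curve is formed analogously, with zy1 and zy2 reusing z1 and z2).

```python
import numpy as np

def quad_bezier(p0, p1, p2, t):
    """Quadratic Bezier curve sampled at parameters t (p1 = control point)."""
    t = np.asarray(t, dtype=float)[:, None]
    return (1 - t) ** 2 * p0 + 2 * (1 - t) * t * p1 + t ** 2 * p2

def cubic_bezier(p0, p1, p2, p3, t):
    t = np.asarray(t, dtype=float)[:, None]
    return ((1 - t) ** 3 * p0 + 3 * (1 - t) ** 2 * t * p1
            + 3 * (1 - t) * t ** 2 * p2 + t ** 3 * p3)

def blade_profile(genome, samples=50):
    """Decode genes x1..y5 into the 2D blade spline: two joined quadratic
    Bezier segments, (x1,y1)->(x3,y3) with control (x2,y2), then
    (x3,y3)->(x5,y5) with control (x4,y4)."""
    pts = np.array(genome[:10], dtype=float).reshape(5, 2)
    t = np.linspace(0.0, 1.0, samples)
    seg1 = quad_bezier(pts[0], pts[1], pts[2], t)
    seg2 = quad_bezier(pts[2], pts[3], pts[4], t)
    return np.vstack([seg1, seg2[1:]])  # drop the duplicated joint point

def xz_offset_curve(genome, height=70.0, samples=50):
    """Cubic Bezier x-offset along z: zero offset at both end plates,
    shaped by control points (zx1, z1) and (zx2, z2)."""
    zx1, zx2, _, _, z1, z2 = genome[10:16]
    t = np.linspace(0.0, 1.0, samples)
    return cubic_bezier(np.array([0.0, 0.0]), np.array([zx1, z1]),
                        np.array([zx2, z2]), np.array([0.0, height]), t)

# Seed genome from Figure 5(a):
seed = [15.1, 15.1, 22.1, 15.1, 25.7, 15.9, 32.1, 16.1, 32.1, 27.1,
        0, 0, 0, 0, 20, 27.2, 0]
profile = blade_profile(seed)
offsets = xz_offset_curve(seed)
```

With the seed's zero z-offset control points, the offset curve stays at x = 0 for all z, i.e., the blade is not swept along the shaft.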
In order to provide sufficient training data for the surrogate model, initially CGA-b
proceeds for 3 generations before the model is used, i.e., a total of 360 physical array evaluations with 60 evaluated individuals in each species. S = 6 species are explored, each with
P = 20 individuals, a per-allele mutation probability µ = 25%, a mutation step size σ1 = 3.6 mm for co-ordinates and σ2 = 18° for r1, and a crossover probability of 0%. Each
species population is initialised with the example design in Figure 5(a) and 19 variants mutated with µ = 100%. The individuals in each species population are initially evaluated in
collaboration with the seed individuals in the other species populations. Thereafter, CGA-b
runs as normal by alternating between species after a single offspring is formed and evaluated with the elite members from the other species. After 3 generations, SCGA-b is used for
an additional generation. To explore whether there is any benefit in windowing the training data, the SCGA is subsequently rerun for 1 generation starting with the same previous
CGA-b populations, but using only the current species population for model training (SCGA-bw). The model parameters are: 17 input neurons, H = 10 hidden neurons, 1 output neuron, λm = 1000, T = 1000, β = 0.1. Each VAWT is treated separately by the evolution and approximation techniques, i.e., heterogeneous designs could therefore emerge.
The fitness, f, of each individual is the total angular kinetic energy of the collaborating array,

    f = Σ_{i=1}^{S} KE_i    (1)
Figure 6: Experimental setup with 6 VAWT. Frame width 275 mm, height 235 mm. Vertical support
pillars 10 × 15 × 235 mm. Upper and lower cross bars each with height 40 mm, thickness 1 mm, and
protruding 8 mm. VAWT freely rotate on rigid metal pins 1 mm in diameter and positioned 42.5 mm
adjacently.
where the angular kinetic energy (J), KE, of an individual VAWT is

    KE = (1/2) I ω²    (2)

with angular velocity ω = (rpm/60) · 2π (rad/s) and moment of inertia I = (1/2) m r² (kg·m²), where m is the mass (kg) and r the radius (m).
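As a worked example of Equation (2), consider a hypothetical turbine with the seed design's mass (7 g) and radius (17.5 mm); the 600 rpm figure is illustrative only, not a measured result.

```python
import math

def kinetic_energy(rpm, mass_kg, radius_m):
    """Angular kinetic energy KE = 0.5 * I * w^2 of one VAWT, with the
    rotor modelled as a solid cylinder: I = 0.5 * m * r^2."""
    omega = rpm / 60.0 * 2.0 * math.pi        # angular velocity, rad/s
    inertia = 0.5 * mass_kg * radius_m ** 2   # moment of inertia, kg*m^2
    return 0.5 * inertia * omega ** 2         # joules

# Hypothetical: seed-sized turbine (7 g, 17.5 mm radius) at an assumed 600 rpm.
ke = kinetic_energy(rpm=600, mass_kg=0.007, radius_m=0.0175)  # ~2.1 mJ
```

The array fitness of Equation (1) is then simply the sum of such per-turbine values across the S = 6 collaborating designs.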
The rotational speed (rpm) is here measured using a digital photo laser tachometer
(PCE-DT62; PCE Instruments UK Ltd) by placing a 10 × 2 mm strip of reflecting tape on
the outer tip of each VAWT and recording the maximum achieved over the period of ∼ 30 s
during the application of wind generated by a propeller fan.
Figure 6 shows the test environment with the 30 W, 3500 rpm, 304.8 mm propeller fan,
which generates 4.4 m/s wind velocity, and 6 turbines mounted on rigid metal pins 1 mm
in diameter and positioned 42.5 mm adjacently and 30 mm from the propeller fan. That is,
there is an end plate separation distance of 0.2 diameters between turbines. It is important
to note that the wind generated by the fan is highly turbulent with non-uniform velocity and direction across the test platform, i.e., each turbine position receives a different
amount of wind energy from different predominant directions and wind reflecting from
the test frame may cause multi-directional wind flow. Thus, the designs evolved under
such conditions will adapt to these exact environmental circumstances.
5.2 Results
Each generation consisted of 120 fabrications and array evaluations. After evaluating all
individuals in the initial species populations, no mutants were found to produce a greater
total kinetic energy than the seed array. After 1 evolved CGA-b generation, the fittest array
combination generated a greater total kinetic energy of 7.6 mJ compared with the initial
seed array, which produced 5.9 mJ. A small increase in total mass, from 42 g to 44.8 g, was also
observed. After 2 evolved CGA-b generations, the fittest array generated a total kinetic
energy of 10 mJ with a further small increase in total mass to 45.6 g. SCGA-b was then
used for 1 additional generation and produced a total kinetic energy of 12.2 mJ with a
further increase in mass to 49.3 g.
The fittest SCGA-bw array produced a total kinetic energy of 14.8 mJ, greater than that of SCGA-b. Furthermore, the SCGA-bw mean kinetic energy (M = 12.27, SD = 1.29, N = 120) was significantly greater than that of SCGA-b (M = 10.56, SD = 0.93, N = 120) using a two-tailed Mann-Whitney test (U = 2076, p ≤ 0.001), showing that windowing the model training data was beneficial in this experiment. SCGA-b and SCGA-bw appear to have predominantly exploited different components of the fitness measure, with SCGA-b finding heavier turbine designs (+8%) that maintain approximately the same rpm (+3%), whereas SCGA-bw discovered designs of approximately the same mass (+0.6%) with significantly increased rpm (+22%).
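For intuition, the Mann-Whitney U statistic can be sketched by direct pair counting; the reported U = 2076 was presumably computed with standard statistical software, and the p-value additionally requires the normal approximation or critical-value tables, omitted here.

```python
def mann_whitney_u(a, b):
    """U statistic by direct pair counting: each pair (x in a, y in b)
    contributes 1 if x > y, 0.5 on a tie, and 0 otherwise; the reported U
    is the smaller of the two directional statistics."""
    u_a = sum((x > y) + 0.5 * (x == y) for x in a for y in b)
    return min(u_a, len(a) * len(b) - u_a)

# Toy samples (not the experimental data):
u = mann_whitney_u([1, 2, 3], [2, 3, 4])  # -> 2.0
```

With N = 120 per group, a U as small as 2076 (out of 120 × 120 = 14400 possible pairs) lies far in the tail of the null distribution, hence the reported p ≤ 0.001.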
Figure 7(a) shows the total angular kinetic energy of the fittest arrays each generation;
Figure 7(b) shows the total mass, and Figure 7(c) the total rpm. The cross sections of the
fittest array designs can be seen in Figures 8–12. When the position of the final evolved SCGA-b array was inverted, i.e., the first species design swapped with the sixth, the second with the fifth, etc., a decrease in total rpm of 17.8% was observed, causing a reduction in total KE of 36.7%. A similar test was performed for the final evolved SCGA-bw array: the total rpm decreased by 14%, with a consequential decrease in total KE of 22%, showing that evolution has exploited position-specific characteristics.
It is interesting to note the similarity of some of the evolved VAWTs to human-engineered designs. Bach [3] performed one of the earliest morphological studies of Savonius VAWT and found increased aerodynamic performance with a blade profile consisting of a 2/3 flattened trailing section and a larger blade overlap to reduce the effect of the central shaft, which is similar to the fourth species designs in Figures 10(d) and 12(d). The evolved VAWT in the second species, e.g., Figures 10(b) and 12(b), overall appear more rounded and similar to the classic S-shape Savonius design [60]. There appears to be little twist rotation along the z-axis of the evolved designs, which may be a consequence of the initial seeding or of the test conditions having strong and persistent wind velocity from a single direction; that is, starting torque in low-wind conditions, for which twisted designs may be more beneficial, is not a component of fitness in these experiments.
6 Conclusions
Design mining represents a methodology through which the capabilities of computational
intelligence can be exploited directly in the physical world through 3D printing. Our previous pilot study [55] considered the parallel design of two interacting objects. In this paper
we have used a well-known abstract model of coevolution to explore and extend various
techniques from the literature, with an emphasis on reducing the number of fitness function evaluations.

Figure 7: Performance of the fittest evolved VAWT arrays: (a) array angular kinetic energy (mJ), (b) array mass (g), and (c) array rotation speed (rpm), each versus the number of fabrications. CGA-b (circle), SCGA-b (triangle), SCGA-bw (square). The SCGAs are used only after 360 evaluations (i.e., 3 generations) of the CGA since sufficient training data is required.

Our results suggest the round-robin evaluation process using the best solution in each other population is robust [53], as is our previously presented sampling method of surrogate models built using strictly population-specific data. It has also been
shown that the same techniques remain effective when scaling to 6 interacting species.
These findings were then applied to a more complex version of the wind turbine design
task considered in our pilot study, primarily moving from designing a pair of heterogeneous VAWT to a system of 6 turbines. As noted above, the SCGA remains robust to an
increasing number of turbines since the number of inputs to the models remains constant.
Indeed, we are unaware of a real-world coevolutionary optimisation task of this scale with
or without surrogate models.
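The round-robin, best-partner evaluation scheme summarised above can be sketched as follows. This is a minimal illustration only: `team_fitness` is a hypothetical stand-in for the physical turbine-array test, and the population sizes and candidate encodings are illustrative, not those used in the experiments.

```python
# Sketch of round-robin evaluation with best-partner collaboration, in the
# spirit of cooperative coevolution (Potter & De Jong, 1994). Each candidate
# of one species is tested alongside the current best of every other species,
# so one "fabrication" is spent per candidate.
def team_fitness(team):
    # Placeholder objective standing in for the physical array test; it
    # rewards teams whose members contribute beyond the single best design.
    return sum(team) - max(team)

def evaluate_species(populations, s):
    """Score each candidate of species s in collaboration with the
    current best member of every other species."""
    best = [max(pop, key=lambda c: c["fit"]) for pop in populations]
    for cand in populations[s]:
        team = [cand["x"] if i == s else best[i]["x"]
                for i in range(len(populations))]
        cand["fit"] = team_fitness(team)

# Toy run: 3 species of 4 candidates each, evaluated in round-robin order.
pops = [[{"x": x, "fit": 0.0} for x in range(1, 5)] for _ in range(3)]
for s in range(len(pops)):
    evaluate_species(pops, s)
```

With this toy objective every candidate ends up with the same team score, but the structure — fixing the best partner from each other population while one species is assessed — is the point being illustrated.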
The VAWT spline representation used here is also much more flexible than the simple
integer approach used previously, enabling the exploration of designs where the blades
are not attached to the central shaft. This has enabled designs to emerge that not only
exploit or compensate for the wind interaction with the central shaft, but also the effect of
mass and vibrational forces as the turbines freely rotate around the mounted pins at high
speed. That is, it has been shown possible to exploit the fan-generated wind conditions
in the environment, including the complex inter-turbine turbulent flow conditions, and
position-specific wind velocity, to design an array of 6 different turbines that work together
collaboratively to produce the maximum angular kinetic energy. Note that the starting
design for the turbines was based upon a standard commercial design and performance in
the array was seen to double over the time allowed.
3D printing provides a very flexible way to rapid-prototype designs for testing. One
of the most significant benefits of the technology is a ‘complexity paradox’ wherein the
complexity of the printed shape is highly uncorrelated with production time/cost. With
conventional manufacturing, the more complex an object is, the more it costs to fabricate
(especially when sub-components require complex assembly processes). However, with
3D printing, the time and cost to fabricate an object mostly depends on the amount of material used. Moreover, the more complex a shape is, the more numerous the spaces (voids)
that exist between components, and thus the smaller the quantity of material required.
There is thus a synergy between computational intelligence techniques that can search a
wide variety of complex shapes in a complex environment whilst also exploring the effects
of novel materials. Here only PLA plastic was used to fabricate designs, however there are
now over a hundred different materials that 3D printers can use, ranging from cells to titanium. Future work may explore the use of flexible materials and multi-material designs,
which may result in very different designs of future wind farms. In addition, 3D printing
can produce designs at different fidelity, such as slower more accurate prints for subtle optimisation or rapid coarse designs for quick evaluation. Fabrication can also be parallelised
with multiple printers, e.g., a different printer for each species.
Future work may also use the power generated as the objective under different wind
conditions specific to the target environment, e.g., low cut-in speed; the design of larger
wind farms, including turbine location, multiple rows of turbines, and collocation of VAWT
and HAWT; the exploration of alternative surrogate models to reduce the number of fabrications required; alternative shape representations that can enable increased morphological freedom, including varying the number of blades, e.g., supershapes [54]; and the use
of novel fabrication materials. In addition, future avenues of research may include arrays
of collaborating variable-speed wind turbines, turbines located on roof tops, and floating
wind turbines. The use of adaptive design representations, allowing the number of shape
parameters to increase as necessary (e.g., [47]), which will involve adaptive/coevolved
surrogate models (e.g., [61]) will also be of interest.
The issue of scalability remains an important area of future research. Changes in dimensionality may greatly affect performance; however, it remains to be seen how the performance will change in the presence of other significant factors such as turbine wake interactions. Larger 3D printing and testing capabilities could be used to design larger turbines
using the same method, although with longer fabrication and testing times. However,
3D printing is a rapidly developing technology capable of fabricating increasingly bigger
parts with decreasing production times; for example, the EBAM 300 (Sciaky Ltd., USA)
can produce a 10 ft long titanium aircraft structure in 48 hrs. On the micro-scale, turbines
with a rotor diameter smaller than 2 mm can be used to generate power, e.g., for wireless sensors [29], and in this case high precision 3D printers would be required. Recently,
3D printing capabilities have been added to aerial robots to create flying 3D printers [30],
which may eventually enable swarms of robots to rapidly create, test, and optimise designs
in areas that are difficult to access.
The design mining approach outlined here provides a general and flexible framework
for engineering design with applications that cannot be simulated due to the complexity of
materials or environment. We anticipate that in the future, such approaches will be used
to create highly unintuitive yet efficient bespoke solutions for a wide range of complex
engineering design applications.
Figure 8: Cross sections (a)–(f) of turbines s1–s6 of the fittest VAWT array after 1 CGA-b generation, i.e., the initial population. No mutants resulted in greater total KE than the seed array. Total KE = 5.9 mJ, m = 42 g, 2332 rpm.
Figure 9: Cross sections (a)–(f) of turbines s1–s6 of the fittest evolved VAWT array after 2 CGA-b generations. Total KE = 7.6 mJ, m = 44.8 g, 2677 rpm.
Figure 10: Cross sections (a)–(f) of turbines s1–s6 of the fittest evolved VAWT array after 3 CGA-b generations. Total KE = 10 mJ, m = 45.6 g, 3004 rpm.
Figure 11: Cross sections (a)–(f) of turbines s1–s6 of the fittest evolved VAWT array after 3 CGA-b generations plus 1 SCGA-b generation. Total KE = 12.2 mJ, m = 49.3 g, 3094 rpm.
Figure 12: Cross sections (a)–(f) of turbines s1–s6 of the fittest evolved VAWT array after 3 CGA-b generations plus 1 SCGA-bw generation. Total KE = 14.8 mJ, m = 45.9 g, 3668 rpm.
Acknowledgements
This work was supported by the Engineering and Physical Sciences Research Council under Grant EP/N005740/1, and the Leverhulme Trust under Grant RPG-2013-344. The data
used to generate the graphs are available at: http://researchdata.uwe.ac.uk/166.
References
1. Akwa, J. V., Vielmo, H. A., & Petry, A. P. (2012). A review on the performance of Savonius wind turbines. Renewable and Sustainable Energy Reviews, 16(5), 3054–3064.
2. Augustsson, P., Wolff, K., & Nordin, P. (2002). Creation of a learning, flying robot by means of evolution. In Proceedings of the Genetic and Evolutionary Computation Conference, (pp. 1279–1285). San Francisco, CA, USA: Morgan Kaufmann.
3. Bach, G. (1931). Untersuchungen über Savonius-rotoren und verwandte strömungsmaschinen. Forschung auf dem Gebiet des Ingenieurwesens A, 2(6), 218–231.
4. Becker, J. M., Lohn, J. D., & Linden, D. (2014). In-situ evolution of an antenna array with hardware fault recovery. In Proceedings of the IEEE International Conference on Evolvable Systems, (pp. 69–76). Piscataway, NJ, USA: IEEE Press.
5. Benard, N., Pons-Prats, J., Periaux, J., Bugeda, G., Braud, P., Bonnet, J. P., & Moreau, E. (2016). Turbulent separated shear flow control by surface plasma actuator: Experimental optimization by genetic algorithm approach. Experiments in Fluids, 57(2), 22.
6. Borg, M., Shires, A., & Collu, M. (2014). Offshore floating vertical axis wind turbines, dynamics modelling state of the art. Part I: Aerodynamics. Renewable and Sustainable Energy Reviews, 39, 1214–1225.
7. Boria, F., Stanford, B., Bowman, S., & Ifju, P. (2009). Evolutionary optimization of a morphing wing with wind-tunnel hardware in the loop. AIAA Journal, 47(2), 399–409.
8. Box, G. E. P. (1957). Evolutionary operation: A method for increasing industrial productivity. Journal of the Royal Statistical Society: Series C (Statistical Methodology), 6(2), 81–101.
9. Bull, L. (1997a). Evolutionary computing in multi-agent environments: Partners. In T. Bäck (Ed.) Proceedings of the 7th International Conference on Genetic Algorithms, (pp. 370–377). San Francisco, CA, USA: Morgan Kaufmann.
10. Bull, L. (1997b). Model-based evolutionary computing: A neural network and genetic algorithm
architecture. In Proceedings of the IEEE International Conference on Evolutionary Computation, (pp.
611–616). Piscataway, NJ, USA: IEEE Press.
11. Bull, L., & Fogarty, T. (1993). Coevolving communicating classifier systems for tracking. In R. F.
Albrecht, C. R. Reeves, & N. C. Steele (Eds.) Artificial Neural Networks and Genetic Algorithms, (pp.
522–527). Berlin, Germany: Springer.
12. Chamorro, L. P., Tobin, N., Arndt, R. E. A., & Sotiropoulos, F. (2014). Variable-sized wind turbines
are a possibility for wind farm optimization. Wind Energy, 17(10), 1483–1494.
13. Chen, K., Song, M. X., Zhang, X., & Wang, S. F. (2016). Wind turbine layout optimization with
multiple hub height wind turbines using greedy algorithm. Renewable Energy, 96(A), 676–686.
14. Chen, Y., Hord, K., Prater, R., Lian, Y., & Bai, L. (2012). Design optimization of a vertical axis
wind turbine using a genetic algorithm and surrogate models. In 12th AIAA Aviation Technology,
Integration, and Operations (ATIO) Conference and 14th AIAA/ISSMO Multidisciplinary Analysis and
Optimization Conference. Reston, VA, USA: AIAA.
15. Chen, Y., Li, H., Jin, K., & Song, Q. (2013). Wind farm layout optimization using genetic algorithm
with different hub height wind turbines. Energy Conversion and Management, 70, 56–65.
16. Chowdhury, S., Zhang, J., Messac, A., & Castillo, L. (2012). Unrestricted wind farm layout optimization (UWFLO): Investigating key factors influencing the maximum power generation. Renewable Energy, 38(1), 16–30.
17. Craig, A. E., Dabiri, J. O., & Koseff, J. R. (2016). Flow kinematics in variable-height rotating
cylinder arrays. Journal of Fluids Engineering, 138(11), 111203.
18. Davies, Z. S., Gilbert, R. J., Merry, R. J., Kell, D. B., Theodorou, M. K., & Griffith, G. W. (2000).
Efficient improvement of silage additives by using genetic algorithms. Applied and Environmental
Microbiology, 66(4), 1435–1443.
19. Dunham, B., Fridshal, R., & North, J. H. (1963). Design by natural selection. Synthese, 15(2),
254–259.
20. DuPont, B., Cagan, J., & Moriarty, P. (2016). An advanced modeling system for optimization of
wind farm layout and wind turbine sizing using a multi-level extended pattern search algorithm.
Energy, 106, 802–814.
21. Ferreira, C. S., & Geurts, B. (2015). Aerofoil optimization for vertical-axis wind turbines. Wind
Energy, 18(8), 1371–1385.
22. Ficici, S. G., Watson, R. A., & Pollack, J. B. (1999). Embodied evolution: A response to challenges
in evolutionary robotics. In J. L. Wyatt, & J. Demiris (Eds.) Proceedings of the European Workshop
on Learning Robots, vol. 1812 of LNCS, (pp. 14–22). Berlin, Germany: Springer.
23. Gautier, N., Aider, J.-L., Duriez, T., Noack, B. R., Segond, M., & Abel, M. (2015). Closed-loop
separation control using machine learning. Journal of Fluid Mechanics, 770, 442–457.
24. Goh, C. K., Lim, D., Ma, L., Ong, Y. S., & Dutta, P. S. (2011). A surrogate-assisted memetic coevolutionary algorithm for expensive constrained optimization problems. In Proceedings of the
IEEE Congress on Evolutionary Computation, (pp. 744–749). Piscataway, NJ, USA: IEEE Press.
25. González, J. S., Payán, M. B., Santos, J. M. R., & González-Longatt, F. (2014). A review and
recent developments in the optimal wind-turbine micro-siting problem. Renewable and Sustainable
Energy Reviews, 30, 133–144.
26. Hall, M., Buckham, B., & Crawford, C. (2013). Evolving offshore wind: A genetic algorithm-based support structure optimization framework for floating wind turbines. In OCEANS - Bergen,
2013 MTS/IEEE, (pp. 1–10). Piscataway, NJ, USA: IEEE Press.
27. Harding, S. L., & Miller, J. F. (2004). Evolution in materio: Initial experiments with liquid crystal.
In Proceedings of the NASA/DoD Workshop on Evolvable Hardware, (pp. 289–299). Washington DC,
USA.
28. Herdy, M. (1996). Evolution strategies with subjective selection. In Proceedings of the 4th Conference
on Parallel Problem Solving from Nature, (pp. 22–26). Berlin, Germany: Springer.
29. Howey, D. A., Bansal, A., & Holmes, A. S. (2011). Design and performance of a centimetre-scale
shrouded wind turbine for energy harvesting. Smart Materials and Structures, 20(8), 085021.
30. Hunt, G., Mitzalis, F., Alhinai, T., Hooper, P. A., & Kovac, M. (2014). 3D printing with flying
robots. In Proceedings of the IEEE International Conference on Robotics and Automation, (pp. 4493–
4499). Piscataway, NJ, USA: IEEE Press.
31. Hunt, R., Hornby, G. S., & Lohn, J. D. (2005). Toward evolved flight. In Proceedings of the Genetic
and Evolutionary Computation Conference, (pp. 957–964). New York, NY, USA: ACM.
32. Husbands, P., & Mill, F. (1991). Simulated coevolution as the mechanism for emergent planning
and scheduling. In R. K. Belew, & L. B. Booker (Eds.) Proceedings of the 4th International Conference
on Genetic Algorithms, (pp. 264–270). San Francisco, CA, USA: Morgan Kaufmann.
33. Islam, M. R., Ting, D. S.-K., & Fartaj, A. (2008). Aerodynamic models for Darrieus-type straight-bladed vertical axis wind turbines. Renewable and Sustainable Energy Reviews, 12(4), 1087–1109.
34. Jakobi, N. (1997). Half-baked, ad-hoc and noisy: Minimal simulations for evolutionary robotics.
In Proceedings of the 4th European Conference on Artificial Life, (pp. 348–357). Cambridge, MA, USA:
MIT Press.
35. Jin, Y. (2011). Surrogate-assisted evolutionary computation: Recent advances and future challenges. Swarm and Evolutionary Computation, 1(2), 61–70.
36. Judson, R. S., & Rabitz, H. (1992). Teaching lasers to control molecules. Physical Review Letters,
68(10), 1500–1503.
37. Kauffman, S. A. (1993). The origins of order: Self-organisation and selection in evolution. New York,
NY, USA: Oxford University Press.
38. Kauffman, S. A., & Johnsen, S. (1991). Coevolution to the edge of chaos: Coupled fitness landscapes, poised states and coevolutionary avalanches. Journal of Theoretical Biology, 149(4), 467–505.
39. King, R. D., Whelan, K. E., Jones, F. M., Reiser, P. G. K., Bryant, C. H., Muggleton, S. H., Kell,
D. B., & Olive, S. G. (2004). Functional genomic hypothesis generation and experimentation by a
robot scientist. Nature, 427(6971), 247–252.
40. Knowles, J. (2009). Closed-loop evolutionary multiobjective optimization. IEEE Computational
Intelligence Magazine, 4(3), 77–91.
41. Lipson, H., & Pollack, J. (2000). Automatic design and manufacture of robotic lifeforms. Nature,
406(6799), 974–978.
42. Long, J. H., Koob, T. J., Irving, K., Combie, K., Engel, V., Livingston, N., Lammert, A., & Schumacher, J. (2006). Biomimetic evolutionary analysis: Testing the adaptive value of vertebrate tail
stiffness in autonomous swimming robots. Journal of Experimental Biology, 209, 4732–4746.
43. Martin, J. (1991). Rapid Application Development. New York, NY, USA: Macmillan.
44. Mosetti, G., Poloni, C., & Diviacco, B. (1994). Optimization of wind turbine positioning in large
windfarms by means of a genetic algorithm. Journal of Wind Engineering and Industrial Aerodynamics, 51(1), 105–116.
45. Nolfi, S. (1992). Evolving non-trivial behaviours on real-robots: A garbage collecting robot.
Robotics and Autonomous Systems, 22, 187–198.
46. O’Hagan, S., Dunn, W. B., Brown, M., Knowles, J. D., & Kell, D. B. (2005). Closed-loop, multiobjective optimization of analytical instrumentation: Gas chromatography/time-of-flight mass
spectrometry of the metabolomes of human serum and of yeast fermentations. Analytical Chemistry, 77(1), 290–303.
47. Olhofer, M., Jin, Y., & Sendhoff, B. (2001). Adaptive encoding for aerodynamic shape optimization using evolution strategies. In Proceedings of the IEEE Congress on Evolutionary Computation,
(pp. 576–583). Piscataway, NJ, USA: IEEE Press.
48. Olhofer, M., Yankulova, D., & Sendhoff, B. (2011). Autonomous experimental design optimization of a flapping wing. Genetic Programming and Evolvable Machines, 12(1), 23–47.
49. Ong, Y. S., Keane, A. J., & Nair, P. B. (2002). Surrogate-assisted coevolutionary search. In Proceedings of the 9th International Conference on Neural Information Processing, vol. 3, (pp. 1140–1145).
Piscataway, NJ, USA: IEEE Press.
50. Parezanović, V., Laurentie, J.-C., Fourment, C., Delville, J., Bonnet, J.-P., Spohn, A., Duriez, T., Cordier, L., Noack, B. R., Abel, M., Segond, M., Shaqarin, T., & Brunton, S. L. (2015). Mixing layer manipulation experiment—From open-loop forcing to closed-loop machine learning control. Flow, Turbulence and Combustion, 94(1), 155–173.
51. Pask, G. (1958). Physical analogues to the growth of a concept. In Mechanisation of Thought Processes, no. 10 in National Physical Laboratory Symposium. London, UK: Her Majesty’s Stationery
Office.
52. Paulsen, U. S., Vita, L., Madsen, H. A., Hattel, J., Ritchie, E., Leban, K. M., Berthelsen, P. A., &
Carstensen, S. (2012). 1st DeepWind 5 MW baseline design. Energy Procedia, 24, 27–35.
53. Potter, M. A., & De Jong, K. A. (1994). A cooperative coevolutionary approach to function optimization. In Y. Davidor, H.-P. Schwefel, & R. Männer (Eds.) Proceedings of the 3rd Conference on
Parallel Problem Solving from Nature, (pp. 249–257). Berlin, Germany: Springer.
54. Preen, R. J., & Bull, L. (2014). Towards the evolution of vertical-axis wind turbines using supershapes. Evolutionary Intelligence, 7(3), 155–187.
55. Preen, R. J., & Bull, L. (2015). Toward the coevolution of novel vertical-axis wind turbines. IEEE
Transactions on Evolutionary Computation, 19(2), 284–294.
56. Preen, R. J., & Bull, L. (2016). Design mining interacting wind turbines. Evolutionary Computation,
24(1), 89–111.
57. Rechenberg, I. (1971). Evolutionsstrategie—Optimierung technischer systeme nach prinzipien der biologischen evolution. Ph.D. thesis, Department of Process Engineering, Technical University of
Berlin, Berlin, Germany.
58. Rieffel, J., & Sayles, D. (2010). EvoFAB: A fully embodied evolutionary fabricator. In G. Tempesti,
A. M. Tyrrell, & J. F. Miller (Eds.) Proceedings of the 9th International Conference on Evolvable Systems,
(pp. 372–380). Berlin, Germany: Springer.
59. Rosenblatt, F. (1962). Principles of Neurodynamics: Perceptrons and the Theory of Brain Mechanisms.
Washington DC, USA: Spartan Books.
60. Savonius, S. J. (1930). Wind rotor. US Patent 1 766 765, June 24.
61. Schmidt, M. D., & Lipson, H. (2008). Coevolution of fitness predictors. IEEE Transactions on
Evolutionary Computation, 12(6), 736–749.
62. Schwefel, H.-P. (1975). Evolutionsstrategie und numerische Optimierung. Ph.D. thesis, Department
of Process Engineering, Technical University of Berlin, Berlin, Germany.
63. Sherman, L., Ye, J. Y., Albert, O., & Norris, T. B. (2002). Adaptive correction of depth-induced
aberrations in multiphoton scanning microscopy using a deformable mirror. Journal of Microscopy, 206(1), 65–71.
64. Singh, J., Ator, M. A., Jaeger, E. P., Allen, M. P., Whipple, D. A., Soloweij, J. E., Chowdhary,
S., & Treasurywala, A. M. (1996). Application of genetic algorithms to combinatorial synthesis:
A computational approach to lead identification and lead optimization. Journal of the American
Chemical Society, 118(7), 1669–1676.
65. Theis, M., Gazzola, G., Forlin, M., Poli, I., Hanczyc, M., & Bedau, M. (2006). Optimal formulation
of complex chemical systems with a genetic algorithm. In Proceedings of the European Conference
on Complex Systems, (p. 50). Saïd Business School, University of Oxford, Oxford, UK.
66. Thompson, A. (1998). Hardware Evolution: Automatic design of electronic circuits in reconfigurable
hardware by artificial evolution. Berlin, Germany: Springer.
67. Xie, S., Archer, C. L., Ghaisas, N., & Meneveau, C. (in press). Benefits of collocating vertical-axis
and horizontal-axis wind turbines in large wind farms. Wind Energy. doi:10.1002/we.1990.
68. Xudong, W., Shen, W. Z., Zhu, W. J., Sørensen, J. N., & Jin, C. (2009). Shape optimization of wind
turbine blades. Wind Energy, 12(8), 781–803.
DeepSIC: Deep Semantic Image Compression
arXiv:1801.09468v1 [] 29 Jan 2018
Sihui Luo
Zhejiang University
Hangzhou, China
Yezhou Yang
Arizona State University
Abstract— Incorporating semantic information into the
codecs during image compression can significantly reduce
the repetitive computation of fundamental semantic analysis
(such as object recognition) in client-side applications. The
same practice also enables the compressed code to carry the
image semantic information during storage and transmission.
In this paper, we propose a concept called Deep Semantic
Image Compression (DeepSIC) and put forward two novel
architectures that aim to reconstruct the compressed image
and generate corresponding semantic representations at the
same time. The first architecture performs semantic analysis
in the encoding process by reserving a portion of the bits
from the compressed code to store the semantic representations.
The second performs semantic analysis in the decoding step
with the feature maps that are embedded in the compressed
code. In both architectures, the feature maps are shared by
the compression and the semantic analytics modules. To validate our approaches, we conduct experiments on the publicly
available benchmarking datasets and achieve promising results.
We also provide a thorough analysis of the advantages and
disadvantages of the proposed technique.
I. I NTRODUCTION
As the era of smart cities and Internet of Things (IoT)
unfolds, the increasing number of real-world applications
require image and video transmission services that handle compression and semantic encoding at the same time, a need hitherto not addressed by conventional systems.
Traditionally, image compression is merely a form of data compression applied to digital images to reduce their storage and transmission cost. Almost
all the image compression algorithms stay at the stage of
low-level image representation in which the representation
is arrays of pixel values. For example, the most widely used
compression methods compress images through pixel-level
transformation and entropy encoding [32], [5]. However,
these conventional methods do not consider encoding the
semantic information (such as the object labels, attributes,
and scenes), beyond low-level arrays of pixel values. At the
same time, the semantic information is critical for high-level
reasoning over the image.
Within the last decade, Deep Neural Networks (DNN)
and Deep Learning (DL) have laid the foundation for going
beyond hand-crafted features in visual recognition, with a
significant performance boost in topics ranging from object
recognition [17], scene recognition [30], [35], [10], action
recognition [33], to image captioning [4], [14], [31], [3] and
visual question answering [1], [36], [19]. Recently, several
significant efforts of deep learning based image compression
Mingli Song
Zhejiang University
Hangzhou, China

Fig. 1. General semantic image compression framework
methods have been proposed to improve the compression
performance [2], [27], [28], [7], [26], [23].
As Rippel and Bourdev [23] point out, generally speaking,
image compression is highly related to the procedure of
pattern recognition. In other words, if a system can discover
the underlying structure of the input, it can eliminate the
redundancy and represent the input more succinctly. Recent
deep learning based compression approaches discover the
structure of images by training a compression model and
then convert it to binary code [2], [27], [28], [7], [26],
[23]. Nevertheless, to the best of our knowledge, a deep
learning based image compression approach incorporating
semantic representations has not yet been explored in the
literature. These existing DL-based compression codecs, like
the conventional codecs, also only compress the images at
pixel level, and do not consider their semantics. Currently, when client-side applications require the semantic
information of an image, they must first reconstruct the image from the codec and then perform additional computation to obtain the semantic information.
Can a system conduct joint-optimization of the objectives
for both compression and the semantic analysis? In this
paper, we make the first attempt to approach this challenging
task which stands between the computer vision and multimedia information processing fields, by introducing the Deep
Semantic Image Compression (DeepSIC). Here, our DeepSIC framework aims to encode the semantic information in
the codecs, and thus significantly reduces the computational
resources needed for the repetitive semantic analysis on the
client side.
We depict the DeepSIC framework in Figure 1, which
aims to incorporate the semantic representation within the
codec while maintaining the ability to reconstruct visually
pleasing images. Two architectures of our proposed DeepSIC
framework are given for the joint analysis of pixel information together with the semantic representations for lossy
image compression. One is pre-semantic DeepSIC, which
integrates the semantic analysis module into the encoder part and reserves several bits in the compressed code to
represent the semantics. The other is post-semantic DeepSIC,
which only encodes the image features during the encoding
process and conducts the semantic analysis process during
the reconstruction phase. The feature retained by decoding is
further adopted in the semantic analysis module to achieve
the semantic representation.
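A toy sketch of the resulting code layouts: pre-semantic DeepSIC prepends a semantic field to the feature bits, whereas post-semantic DeepSIC would transmit the feature bits alone and derive semantics at decode time. The 8-bit label field and the function names here are illustrative assumptions, not the paper's actual bitstream format.

```python
# Hypothetical bit-packing for the pre-semantic architecture: a fixed-width
# class-label field precedes the compressed feature bits, so the semantics
# can be read without reconstructing the image.
def pack_pre_semantic(label, feature_bits, label_bits=8):
    assert 0 <= label < (1 << label_bits)
    header = format(label, "0{}b".format(label_bits))  # zero-padded binary
    return header + feature_bits

def unpack_pre_semantic(code, label_bits=8):
    # The label is recovered directly from the header; the remaining bits
    # are passed on to the reconstruction network.
    label = int(code[:label_bits], 2)
    return label, code[label_bits:]

code = pack_pre_semantic(5, "101101")
label, feats = unpack_pre_semantic(code)
```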
In summary, we make the following contributions:
• We propose a concept called Deep Semantic Image
Compression that aims to provide a novel scheme
to compress and reconstruct both the visual and the
semantic information in an image at the same time. To
our best knowledge, this is the first work to incorporate
semantics in image compression procedure.
• We put forward two novel architectures of the proposed
DeepSIC: pre-semantic DeepSIC and post-semantic
DeepSIC.
• We conduct experiments over multiple datasets to validate our framework and compare the two proposed
architectures.
The rest of this paper is organized as follows: In section II,
the conventional and DL-based image compression methods
are briefly reviewed. Section III describes the proposed
DeepSIC framework and the two architectures of DeepSIC in
detail. Our experiment results and discussions are presented
in section IV. Finally, we conclude the paper and look into
the future work in section V.
II. R ELATED WORK
Standard codecs such as JPEG [32] and JPEG2000 [21]
compress images via a pipeline which roughly breaks down
to 3 modules: transformation, quantization, and entropy
encoding. It is the mainstream pipeline for lossy image
compression codec. Although advances in the training of
neural networks have helped improving performance in existing codecs of lossy compression, recent learning-based
approaches still follow the same pipeline as well [2], [27],
[28], [7], [26], [23].
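As a rough illustration of this three-module pipeline, the sketch below uses a one-level Haar transform as a stand-in for the DCT (or a learned transform), a uniform quantizer, and run-length coding as a stand-in for the Huffman/arithmetic entropy coder; none of these choices is specific to any cited codec.

```python
# Minimal transform -> quantize -> entropy-code sketch of a lossy codec.
def haar_1d(x):
    # Pairwise averages (low-pass) followed by pairwise differences
    # (high-pass): a toy decorrelating transform.
    avgs = [(a + b) / 2 for a, b in zip(x[::2], x[1::2])]
    diffs = [(a - b) / 2 for a, b in zip(x[::2], x[1::2])]
    return avgs + diffs

def quantize(coeffs, step):
    # Uniform quantization: the only lossy step in the pipeline.
    return [round(c / step) for c in coeffs]

def run_length(symbols):
    # Trivial lossless entropy-coding stand-in: (symbol, count) pairs.
    out, prev, n = [], symbols[0], 1
    for s in symbols[1:]:
        if s == prev:
            n += 1
        else:
            out.append((prev, n))
            prev, n = s, 1
    out.append((prev, n))
    return out

signal = [10, 10, 10, 10, 12, 12, 12, 12]
code = run_length(quantize(haar_1d(signal), step=2))  # [(5, 2), (6, 2), (0, 4)]
```

The smooth input produces many zero high-pass coefficients, which the entropy coder collapses into a single pair — the redundancy-elimination idea the paragraph above describes.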
These approaches typically utilize the neural networks to
retain the visual features of the image and then convert
them into binary code through quantization. Some of these
approaches may further compress the binary code via an
entropy encoder. The feature extraction process replaces
the transformation module in JPEG, which automatically
discovers the structure of the image instead of requiring engineers to design it manually. Toderici et al. [27], [28] explore various transformations
for binary feature extraction based on different types of
recurrent neural networks and compressed the binary representations with entropy encoding. [2] introduce a nonlinear
transformation in their convolutional compression network
for joint-optimization of rate and distortion, which effectively
improves the visual quality.
Many recent works utilize autoencoders to reduce the
dimensionality of images. Promising results of autoencoders
have been achieved in converting an image to compressed
code for retrieval [16]. Autoencoders also have the potential
to address the increasing need for flexible lossy compression
algorithms [26]. Other methods expand the basic autoencoder
structure and generate the binary representation of the image
by quantizing the bottleneck layer.
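A minimal illustration of such a quantized bottleneck, using a fixed (hand-picked, not learned) linear encoder and 1-bit quantization; real systems train the weights end-to-end, so everything here is a sketch of the structure only.

```python
# Binarized-bottleneck autoencoder sketch: a linear encoder projects the
# input to a short code, the sign function quantizes it to bits, and the
# transposed weights reconstruct an approximation.
W = [[0.5, 0.5, 0.0, 0.0],   # 2x4 encoder: each row pools two "pixels"
     [0.0, 0.0, 0.5, 0.5]]

def encode(x):
    z = [sum(w * v for w, v in zip(row, x)) for row in W]
    return [1 if v >= 0 else 0 for v in z]      # 1-bit quantization

def decode(bits):
    z = [1.0 if b else -1.0 for b in bits]       # de-quantize to +/-1
    return [sum(W[k][j] * z[k] for k in range(len(W))) for j in range(4)]

bits = encode([0.8, 0.6, -0.7, -0.9])   # -> [1, 0]
recon = decode(bits)                    # signs match the input pattern
```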
More recently, since Goodfellow [6] introduced Generative
Adversarial Networks(GANs) for image generation, GANs
have been demonstrated to be promising in producing fine
details of the images [13], [12]. In the compression field,
GANs are usually employed to generate reconstructed images that look natural, with the naturalness measured by a
binary classifier. The intuition is, if it is hard to distinguish
the generated images from the original, then the generated
images are “natural” enough for humans. Some works have
achieved significant progress in generating smaller compressed code size but more visually pleasing reconstructed
images. Gregor et al. [7] introduce a homogeneous deep
generative model in latent variable image modeling. Rippel
and Bourdev’s method [23] contains an autoencoder featuring pyramidal analysis and supplement their approach with
adversarial training. They achieve real-time compression
with pleasing visual quality at a rather low bit rate.
In general, these existing DL-based compression codecs,
like the conventional codecs, also only compress the images
at pixel level, and do not consider their semantics.
III. D EEP S EMANTIC I MAGE C OMPRESSION
In the proposed deep semantic image compression framework (DeepSIC), the compression scheme is similar to that of an autoencoder. The encoder-decoder image compression pipeline
commonly maps the target image through a bitrate bottleneck
with an autoencoder and train the model to minimize a loss
function penalizing it from its reconstruction result. For our
DeepSIC, this requires a careful construction of a feature
extractor and reconstruction module for the encoder and
decoder, a good selection of an appropriate optimization
objective for the joint-optimization of semantic analysis
and compression, and an entropy coding module to further
compress the fixed-sized feature map to gain codes with
variable lengths.
Figure 2 shows the two proposed DeepSIC architectures:
pre-semantic DeepSIC and post-semantic DeepSIC. The pre-semantic architecture places the semantic
analysis in the encoding process, which is implemented by
reserving a portion of the bits in the compressed code to store
semantic information. Hence the code innately reflects the
semantic information of the image. For post-semantic image
compression, the feature retained from decoding is used for
the semantic analysis module to get the class label and for
reconstruction network to synthesize the target image. Both
architectures have modules of feature extraction, entropy
coding, reconstruction from features and semantic analysis.
Here, a brief introduction of the modules in our DeepSIC
is given.
Fig. 2. Two architectures of our semantic compression network: (a) Image compression with pre semantic analysis in the encoder; (b) Image compression
with post semantic analysis in the decoder.
Feature Extraction: Features represent different types of
structure in images across scales and input channels. Feature
extraction module in image compression aims to reduce the
redundancy while maintaining max-entropy of the containing
information. We adopt Convolutional Neural Network (CNN)
as the feature extraction network. Given x as the input image,
f (x) denotes the feature extraction output.
Entropy Coding: The feature output is firstly quantized to
a lower bit precision. We then apply the CABAC [20] encoding method to losslessly exploit the redundancy remaining in the data. It encodes the output of quantization y into the final
binary code sequence ŷ.
Reconstruction from Features: By entropy decoding,
we retrieve image features from the compressed code. The
inverse process of feature extraction is performed in the
decoding process to reconstruct the target image.
Semantic Analysis: The semantic analysis module operates on the extracted feature map. As mentioned
in section I, there are many forms of semantic analysis
such as semantic segmentation, object recognition, and image
captioning. We adopt object classification as the semantic
analysis module in this paper.
In general, we encode the input images through feature
extraction, and then quantize and code them into binary
codes for storing or transmitting to the decoder. The reconstruction network then creates an estimate of the original
input image based on the received binary code. We further
train the network with semantic analysis both in the encoder
and in the decoder (reconstruction) network. This procedure
is repeated under the loss of distortion between the original
image and the reconstruction target image, together with the
error rate of the semantic analysis. The specific descriptions
of each module are presented in the subsequent subsections.
A. Feature Extraction
The output of the feature extraction module is the feature
map of an image, which contains the significant structure of
the image. CNNs that create short paths from early layers
to later layers allow feature reuse throughout the network,
and thus allow the training of very deep networks. These
CNNs can represent the image better and are demonstrated
to be good feature extractors for various computer vision
tasks [24], [18], [25], [9], [11], [29]. The strategy of reusing
the feature throughout the network helps the training of
deeper network architectures. The feature extraction model of the
compressive autoencoder [26] adopts a similar strategy. In our feature
networks we adopt the operation of adding the batch-normalized output of the
previous layer to the subsequent layer. Furthermore, we also observe that the dense connections
have a regularizing effect, which reduces overfitting on tasks
with small training set sizes.

Fig. 3. The feature extraction module and the reconstruction from features module: they are both formed by a four-stage convolutional network.
Our model extracts image features through a convolutional
network illustrated in Figure 3. Given the input image as
x ∈ RC×H×W , we denote the feature extraction output as
f (x).
Specifically, our feature extraction module consists of four
stages, each stage comprises a convolution, a subsampling,
and a batch normalization layer. Each subsequent stage
utilizes the output of the previous layers, and each stage
begins with an affine convolution to increase the receptive
field of the image. This is followed by 4 × 4, 2 × 2, or 1 × 1
downsampling to reduce the information. Each stage then
concludes by a batch normalization operation.
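As a rough illustration of how the spatial size shrinks across the four stages, one possible downsampling schedule can be traced as follows (the per-stage factors are our own assumption, chosen among the 4×4, 2×2 and 1×1 options above, not the paper's exact setting):

```python
def feature_map_size(h, w, factors=(4, 2, 2, 1)):
    """Trace the spatial size of the feature map through four stages.

    `factors` is a hypothetical per-stage downsampling schedule chosen
    among the 4x4, 2x2 and 1x1 options; it is an illustrative assumption.
    """
    for f in factors:
        h, w = h // f, w // f
    return h, w
```

With 128 × 128 training patches and this schedule, the encoder output would be 8 × 8 spatially; choosing other factors yields the variable code lengths discussed later.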
B. Entropy Coding
Given the extracted tensor f (x) ∈ RC×H×W , before
entropy coding the tensor, we first perform quantization. The
feature tensor is optimally quantized to a lower bit precision
B:

Q(f(x)) = round(2^(B−1) f(x)) / 2^(B−1).   (1)
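A minimal numpy sketch of this quantizer, assuming the bracket lost in extraction denotes rounding to the nearest integer:

```python
import numpy as np

def quantize(f, B=6):
    """Quantize a feature tensor to B-bit precision:
    Q(f) = round(2**(B-1) * f) / 2**(B-1).
    The rounding interpretation is our assumption."""
    scale = 2.0 ** (B - 1)
    return np.round(f * scale) / scale
```

Values land on a grid of spacing 2^(1−B), so the per-element quantization error is at most 2^(−B).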
We train a classifier to predict the value of each bit from its
context feature, and then use the resulting belief distribution
to compress b.
Given y = Q(f (x)) denotes the quantized code, after
entropy encoding y into its binary representation ŷ, we
retrieve the compression code sequence.
During decoding, we decompress the code by performing
the inverse operation. Namely, we alternate between computing the context of a particular bit from the values of
previously decoded bits and using the obtained context
to retrieve the activation probability of the bit to be decoded.
Note that this constrains the context of each bit to only
involve features composed of bits already decoded.
C. Reconstruction From Features
The module of reconstruction from features mirrors the
structure of the feature extraction module and is likewise
formed of four stages. Each stage comprises a convolutional layer
and an upsampling layer. The output of each previous layer
is passed on to the subsequent layer through two paths, one
is the deconvolutional network, and the other is a straightforward upsampling to target size through interpolation. After
reconstruction, we obtain the output decompressed image x̂.
The quantization precision B we use here is 6 bits. After
quantization, the output is converted to a binary tensor.
The entropy of the binary code generated during the feature
extraction and quantization stages is not maximal, because
the network is not explicitly designed to maximize entropy
in its code, and the model does not necessarily exploit visual
redundancy over a large spatial extent.
We exploit this low entropy through lossless compression;
specifically, we implement an entropy coder based on the context-adaptive binary arithmetic coding
(CABAC) framework proposed in [20]. Arithmetic entropy
codes are designed to compress discrete-valued data to bit
rates closely approaching the entropy of the representation,
assuming that the probability model used to design the code
approximates the data well. We associate each bit location
in Q(f (x)) with a context, which comprises a set of features
indicating the bit value. These features are based on the
position of the bit as well as the values of neighboring bits.
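The full CABAC coder is specified in [20]; the following toy model (all names and the context definition are our own) only illustrates the core idea of predicting each bit from previously coded bits and measuring the resulting ideal code length:

```python
import math
from collections import defaultdict

def ideal_code_length(bits, context_size=3):
    """Estimate the ideal arithmetic-code length (in bits) of a bit
    sequence, with an adaptive model: Laplace-smoothed 0/1 counts
    kept per context of the `context_size` previously coded bits."""
    counts = defaultdict(lambda: [1, 1])   # [zeros, ones] per context
    ctx = (0,) * context_size
    total = 0.0
    for b in bits:
        c0, c1 = counts[ctx]
        p = (c1 if b else c0) / (c0 + c1)  # model's belief in the observed bit
        total += -math.log2(p)             # ideal arithmetic-code contribution
        counts[ctx][b] += 1
        ctx = ctx[1:] + (b,)
    return total
```

A redundant stream such as one hundred zeros costs only a few bits, which is exactly the low entropy the CABAC stage exploits; during decoding, the same counts can be rebuilt from already-decoded bits, matching the constraint described above.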
x̂ = g(Q⁻¹(Q(f(x))))   (2)
Although arithmetic entropy encoding is lossless, quantization
introduces some loss in accuracy: the result
of Q⁻¹(Q(f(x))) is not exactly the same as the output of
feature extraction, but an approximation of f(x).
D. Semantic Analysis
As mentioned above, semantic analysis can take many forms. Classification is the task most commonly used
to evaluate deep learning networks [24], [18], [25], [9],
[11]. Thus we select object classification for experiments
in this paper. Our semantic analysis module
consists of a sequence of convolutions followed by two fully
connected layers and a softmax layer.
Figure 4 presents the structure of our semantic analysis
module. It is position-optional and can be placed in either the
encoding or the decoding process, yielding the two different architectures. We denote it as h(∗), operating on the extracted
Therefore, R can be calculated as

R = E[ Σᵢ log₂ P_ŷᵢ(n) ].   (6)
D measures the distortion introduced by coding and decoding. It is calculated by the distance between the original
image and the reconstructed image. We take MSE as the
distance metric for training, thus D is defined as

D = E[ ‖xᵢ − x̂ᵢ‖₂² ].   (7)

Fig. 4. The structure of the semantic analysis module.
feature map f(x). Thus the output semantic representations
are h(f(x)).
For the classification task in the semantic analysis part,
we adjust the learning rate using the related name-value
pair arguments when creating the fully connected layer.
Moreover, a softmax function is utilized as the output unit
activation function after the last fully connected layer for the
multi-class classification.
We set the cross entropy of the classification results as
the semantic analysis loss Lsem in this module. Denote the
weight matrices of the two fully connected layers as Wfc1 and
Wfc2 respectively. Lsem is calculated as follows:
Lsem = E[ softmax( Wfc2 ∗ (Wfc1 ∗ f(x)) ) ]   (3)
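A hedged numpy sketch of this classification head and its cross-entropy loss (the flattened shapes and the two-matrix head are illustrative assumptions, not the paper's exact layers):

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)      # subtract max for stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def semantic_loss(features, W_fc1, W_fc2, labels):
    """Cross-entropy of softmax(W_fc2 * (W_fc1 * f(x))) against labels.

    features: (batch, d) flattened feature maps; labels: int class ids.
    Shapes are illustrative, not the paper's exact head.
    """
    logits = features @ W_fc1 @ W_fc2
    probs = softmax(logits)
    picked = probs[np.arange(len(labels)), labels]
    return float(np.mean(-np.log(picked + 1e-12)))
```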
It is worth noting that the inputs of the semantic analysis
module in the two proposed architectures are slightly different. The input feature maps of semantic analysis module
in pre-semantic DeepSIC are under floating point precision.
Differently, the input feature maps of semantic analysis
module in post-semantic DeepSIC are under fixed-point
precision due to quantization and entropy coding.
E. Joint Training of Compression and Semantic Analysis
We implement end-to-end training for the proposed DeepSIC, jointly optimizing the two objectives of the semantic
analysis and image reconstruction modules. We define
the loss as the weighted combination of the compression rate R,
the distortion D and the semantic analysis loss Lsem in
Equation (4).
L = R + λ1 D + λ2 Lsem,   (4)

where the probability mass of each quantized symbol is

P_ŷᵢ(n) = ∫_{n−1/2}^{n+1/2} p_ŷᵢ(t) dt.   (5)
IV. E XPERIMENT
In this section, we present experimental results over multiple datasets to demonstrate the effectiveness of the proposed
semantic image compression.
A. Experimental Setup
Datasets For training, we jointly optimized the full set of
parameters over ILSVRC 2012, a subset of
ImageNet. The ILSVRC 2012 classification dataset consists
of 1.2 million images for training, and 50,000 for validation
from 1, 000 classes. A data augmentation scheme is adopted
for training images and resize them to 128 × 128 at training
time. We report the classification accuracy on the validation
set. Performance tests on Kodak PhotoCD dataset are also
present to enable comparison with other image compression
codecs. Kodak PhotoCD dataset [5] is an uncompressed set
of images which is popularly used for testing compression
performances.
Metric To assess the visual quality of reconstructed images, we adopt Multi-Scale Structural Similarity Index
Metric (MS-SSIM) [34] and Peak Signal-to-Noise Ratio (PSNR) [8] for comparing original, uncompressed images
to compressed, degraded ones. We train our model on the
MSE metric and evaluate all reconstruction models on MS-SSIM. MS-SSIM is a representative perceptual metric which
has been specifically designed to match the human visual
system. The distortion of reconstructed images quantified by
PSNR is also provided to compare our method with JPEG,
JPEG2000 and DL-based methods. Moreover, we report the
classification accuracy over validation and testing sets to
evaluate the performance of semantic analysis module.
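The two pixel-level metrics can be sketched in a few lines of numpy (MS-SSIM [34] is considerably more involved and is omitted here):

```python
import numpy as np

def mse(x, x_hat):
    """Mean squared error between two images."""
    d = x.astype(np.float64) - x_hat.astype(np.float64)
    return float(np.mean(d ** 2))

def psnr(x, x_hat, peak=255.0):
    """Peak signal-to-noise ratio in dB for images with the given peak value."""
    m = mse(x, x_hat)
    return float("inf") if m == 0 else 10.0 * np.log10(peak ** 2 / m)
```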
Here, λ1 and λ2 govern the trade-offs among the three
terms. Since the quantization operation is non-differentiable,
the marginal density of ŷᵢ is approximated by
discrete probability masses with weights equal to the
probability mass function of ŷᵢ, where the index i runs over all
elements of the vectors, including channels, image width and
height.
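The probability masses of the quantized symbols can be estimated empirically; a toy sketch (using the conventional −log₂ for code length, and integer symbol levels as an assumption):

```python
import numpy as np

def pmf(symbols, levels):
    """Empirical P(n): fraction of quantized symbols equal to each level n,
    playing the role of integrating the density over [n - 1/2, n + 1/2)."""
    counts = np.array([(symbols == n).sum() for n in levels], dtype=np.float64)
    return counts / counts.sum()

def rate_bits_per_symbol(symbols, levels):
    """Expected ideal code length in bits per symbol: E[-log2 P(y)]."""
    p = pmf(symbols, levels)
    cost = dict(zip(levels, -np.log2(np.maximum(p, 1e-12))))
    return float(np.mean([cost[int(s)] for s in symbols]))
```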
B. Implementation and Training Details
We conduct training and testing on a NVIDIA Quadro
M6000 GPU. All models are trained with 128×128 patches
sampled from the ILSVRC 2012 dataset. All network architectures are trained using the Tensorflow API, with the
Adam [15] optimizer. We set B = 32 as the batch size, and
C = 3 as the number of color channels. The extracted feature
dimensions are variable due to different subsampling settings,
yielding compressed codes of variable length. This optimization
is performed separately for each weight, yielding separate
transforms and marginal probability models.
We use 128 filters (size 7×7) in the first stage, each
subsampled by a factor of 4 or 2 vertically and horizontally,
and followed up with 128 filters (size 5×5) with a stride
of 2 or 1. The remaining two stages retain the number of
channels, but use filters operating across all input channels
(3×3×128), with outputs subsampled by a factor of 2 or 1 in
each dimension. The structure of the reconstruction module
mirrors the structure of the feature extraction module.
The initial learning rate is set to 0.003, decaying
twice by a factor of 5 during training. We train each model
for a total of 8,000,000 iterations.

TABLE I
ACCURACY OVER DIFFERENT COMPRESSION RATIOS (MEASURED BY BPP) ON ILSVRC VALIDATION, WITH COMPARISONS TO THE STATE-OF-THE-ART CLASSIFICATION METHODS. PRE-SA IS SHORT FOR PRE-SEMANTIC DEEPSIC AND POST-SA IS SHORT FOR POST-SEMANTIC DEEPSIC.

Method                        | Top-1 acc. | Top-5 acc.
Pre-SA (0.25bpp)              | 52.2%      | 72.7%
Post-SA (0.25bpp)             | 51.6%      | 71.4%
Pre-SA (0.5bpp)               | 63.2%      | 82.2%
Post-SA (0.5bpp)              | 61.9%      | 81.4%
Pre-SA (1.0bpp)               | 68.7%      | 89.4%
Post-SA (1.0bpp)              | 68.8%      | 89.9%
Pre-SA (1.5bpp)               | 67.1%      | 90.1%
Post-SA (1.5bpp)              | 68.9%      | 89.9%
VGG-16 [24] (10-crops)        | 71.9%      | 90.7%
Yolo Darknet [22]             | 76.5%      | 93.3%
DenseNet-121 [11] (10-crops)  | 76.4%      | 93.4%

Fig. 5. Examples of reconstructed image parts by different codecs (JPEG, JPEG 2000, ours, Toderici [28] and Rippel [23]) for very low bits per pixel (bpp) values. The uncompressed size is 24 bpp, so the examples represent compression by around 120 and 250 times. The test images are from the Kodak PhotoCD dataset.

Fig. 6. Summary rate-distortion curves, computed by averaging results over the 24 images in the Kodak test set. JPEG and JPEG 2000 results are averaged over images compressed with identical quality settings.

C. Experimental Results
Since semantic compression is a new concept, there are no
direct baseline comparisons. We conduct many experiments
and present their results as follows:
To evaluate image compression quality, we
compare DeepSIC against standard commercial compression
techniques JPEG, JPEG2000, as well as recent deep-learning
based compression work [2], [28], [23]. We show results for
images in the test dataset and in every selected available
compression rate. Figure 5 shows visual examples of some
images compressed using the proposed DeepSIC optimized for
a low value around 0.25 bpp, compared to JPEG, JPEG
2000 and DL-based methods at equal or greater
bit rates. The average rate-distortion curves for the luma
component of images in the Kodak PhotoCD dataset are shown
in Figure 6. Additionally, we take average MS-SSIM and
PSNR over Kodak PhotoCD dataset as the functions of the
bpp fixed for the testing images, shown in Figure 7.
To evaluate the performance of semantic analysis, we
demonstrate some examples of the results of DeepSIC with
reconstructions and their semantics, shown in Figure 8.
Comparisons of the semantic analysis result of the two
proposed architectures on classification accuracy are given in
Table I. Furthermore, as different compression ratios directly
affect the performance of compression, we compare the semantic
analysis results of the proposed architectures at several fixed
compression ratios with mainstream classification methods in Table I,
which also shows how the compression ratio affects the performance of semantic analysis.
Although we use MSE as a distortion metric for training
and incorporate semantic analysis with compression, the appearance of the compressed images is substantially comparable
with JPEG and JPEG 2000, and slightly inferior to the
DL-based image compression methods. Consistent with the
appearance of these example images, we retain the semantic
representation of them through the compression procedure.
Although the performance of our method is neither the best
on the visual quality of the reconstructed images nor on the
classification accuracy, the result is still comparable with the
state-of-the-art methods.
Fig. 7. Average perceptual quality and compression ratio curves for the luma component of the images from Kodak dataset. Our DeepSIC is compared
with JPEG, JPEG2000, Ballé [2], Toderici [28] and Rippel [23]. Left: perceptual quality, measured with multi-scale structural similarity (MS-SSIM). Right:
peak signal-to-noise ratio (PSNR). We have no access to reconstructions by Rippel[23], so we carefully transcribe their results, only available in MS-SSIM,
from the graphs in their paper.
Fig. 8.
Examples of results with our DeepSIC: Images in the first row are the original images. The reconstructed images of the two architectures of
DeepSIC are shown in the second and the third row followed by the output representation of the semantic analysis module.
D. Discussion
We perform object classification as the semantic analysis
module and represent the semantics with the identifier code
of the class in this paper. Nevertheless, semantics itself is
complicated, and the length of the compressed code is far from limitless. The
problem of how to efficiently organize the semantic representation of multiple objects needs careful consideration and
exploration.
Yolo9000 [22] performed classification with WordTree,
a hierarchical model of visual concepts that can merge
different datasets together by mapping the classes in the
dataset to synsets in the tree. This suggests that models like
WordTree could also be applied to hierarchically encode the
semantics. We can set up a variety of levels to represent
the semantic information. For example, from the low-level
semantic representation, you can know “there is a cat in the
image”. While from the high-level one, you can know not
only “there’s a cat” but also “what kind of cat it is”. Such
schemes represent the semantics of images more efficiently.
V. C ONCLUSION AND F UTURE W ORK
In this paper, we propose an image compression scheme
incorporating semantics, which we refer to as Deep Semantic
Image Compression (DeepSIC). The proposed DeepSIC aims
to reconstruct the images and generate corresponding semantic representations at the same time. We put forward two
novel architectures: pre-semantic DeepSIC and post-semantic DeepSIC. To validate our approach, we conduct
experiments on Kodak PhotoCD and ILSVRC datasets and
achieve promising results. We also compare the performance of the two proposed architectures of DeepSIC.
Despite incorporating semantics, the proposed DeepSIC
remains comparable with state-of-the-art methods in our
experiments.
This practice opens up a new thread of challenges and
has the potential to immediately impact a wide range of
applications such as semantic compression of surveillance
streams for the future smart cities, and fast post-transmission
semantic image retrieval in the Internet of Things (IoT)
applications.
Despite the challenges to explore, deep semantic image
compression is still an inspiring new direction which breaks
through the boundary of multi-media and pattern recognition.
Nevertheless, it is unrealistic to explore all the challenges at
once. Instead, we mainly focus on the framework of deep
semantic image compression in this paper. The proposed
DeepSIC opens a promising research avenue, and we plan to
further explore possible solutions to the aforementioned
challenges.
R EFERENCES
[1] S. Antol, A. Agrawal, J. Lu, M. Mitchell, D. Batra, C. L. Zitnick,
and D. Parikh. Vqa: Visual question answering. In International
Conference on Computer Vision (ICCV), 2015.
[2] J. Ballé, V. Laparra, and E. P. Simoncelli. End-to-end optimized image
compression. arXiv preprint arXiv:1611.01704, 2016.
[3] X. Chen and C. L. Zitnick. Learning a recurrent visual representation
for image caption generation. arXiv preprint arXiv:1411.5654, 2014.
[4] J. Donahue, L. A. Hendricks, S. Guadarrama, M. Rohrbach, S. Venugopalan, K. Saenko, and T. Darrell. Long-term recurrent convolutional
networks for visual recognition and description. arXiv preprint
arXiv:1411.4389, 2014.
[5] R. Franzen. Kodak lossless true color image suite. Source: http://r0k.
us/graphics/kodak, 4, 1999.
[6] I. J. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley,
S. Ozair, A. Courville, and Y. Bengio. Generative adversarial nets. In
International Conference on Neural Information Processing Systems,
pages 2672–2680, 2014.
[7] K. Gregor, F. Besse, D. J. Rezende, I. Danihelka, and D. Wierstra.
Towards conceptual compression. In Advances In Neural Information
Processing Systems, pages 3549–3557, 2016.
[8] P. Gupta, P. Srivastava, S. Bhardwaj, and V. Bhateja. A modified psnr
metric based on hvs for quality assessment of color images. In International Conference on Communication and Industrial Application,
pages 1–4, 2012.
[9] K. He, X. Zhang, S. Ren, and J. Sun. Deep residual learning for image
recognition. In IEEE Conference on Computer Vision and Pattern
Recognition, pages 770–778, 2016.
[10] L. Herranz, S. Jiang, and X. Li. Scene recognition with cnns: Objects,
scales and dataset bias. In IEEE Conference on Computer Vision and
Pattern Recognition, pages 571–579, 2016.
[11] G. Huang, Z. Liu, K. Q. Weinberger, and L. van der Maaten. Densely
connected convolutional networks. arXiv preprint arXiv:1608.06993,
2016.
[12] D. J. Im, C. D. Kim, H. Jiang, and R. Memisevic. Generating images
with recurrent adversarial networks. arXiv preprint arXiv:1602.05110,
2016.
[13] J. Johnson, A. Alahi, and L. Fei-Fei. Perceptual losses for realtime style transfer and super-resolution. In European Conference on
Computer Vision, pages 694–711. Springer, 2016.
[14] A. Karpathy and F.-F. Li. Deep visual-semantic alignments for
generating image descriptions. arXiv preprint arXiv:1412.2306, 2014.
[15] D. Kingma and J. Ba. Adam: A method for stochastic optimization.
arXiv preprint arXiv:1412.6980, 2014.
[16] A. Krizhevsky and G. E. Hinton. Using very deep autoencoders for
content-based image retrieval. In European Symposium on Artificial
Neural Networks, Bruges, Belgium, 2011.
[17] A. Krizhevsky, I. Sutskever, and G. E. Hinton. Imagenet classification
with deep convolutional neural networks. In Advances in neural
information processing systems, pages 1097–1105, 2012.
[18] J. Long, E. Shelhamer, and T. Darrell. Fully convolutional networks
for semantic segmentation. In IEEE Conference on Computer Vision
and Pattern Recognition, pages 3431–3440, 2015.
[19] J. Lu, J. Yang, D. Batra, and D. Parikh. Hierarchical questionimage co-attention for visual question answering. arXiv preprint
arXiv:1606.00061, 2016.
[20] D. Marpe, H. Schwarz, and T. Wiegand. Context-based adaptive binary
arithmetic coding in the H.264/AVC video compression standard. IEEE
Transactions on Circuits and Systems for Video Technology, 13(7):620–
636, 2003.
[21] M. Rabbani and R. Joshi. An overview of the jpeg 2000 still image
compression standard. Signal processing: Image communication,
17(1):3–48, 2002.
[22] J. Redmon and A. Farhadi. Yolo9000: better, faster, stronger. arXiv
preprint arXiv:1612.08242, 2016.
[23] O. Rippel and L. Bourdev. Real-time adaptive image compression.
arXiv preprint arXiv:1705.05823, 2017.
[24] K. Simonyan and A. Zisserman. Very deep convolutional networks for
large-scale image recognition. arXiv preprint arXiv:1409.1556, 2014.
[25] C. Szegedy, W. Liu, Y. Jia, P. Sermanet, S. Reed, D. Anguelov,
D. Erhan, V. Vanhoucke, and A. Rabinovich. Going deeper with
convolutions. In IEEE Conference on computer vision and pattern
recognition, pages 1–9, 2015.
[26] L. Theis, W. Shi, A. Cunningham, and F. Huszár. Lossy image compression with compressive autoencoders. arXiv preprint
arXiv:1703.00395, 2017.
[27] G. Toderici, S. M. O’Malley, S. J. Hwang, D. Vincent, D. Minnen,
S. Baluja, M. Covell, and R. Sukthankar. Variable rate image compression with recurrent neural networks. arXiv preprint arXiv:1511.06085,
2015.
[28] G. Toderici, D. Vincent, N. Johnston, S. J. Hwang, D. Minnen, J. Shor,
and M. Covell. Full resolution image compression with recurrent
neural networks. 2017.
[29] T. Tong, G. Li, X. Liu, and Q. Gao. Image super-resolution using dense
skip connections. In IEEE Conference on Computer Vision and Pattern
Recognition.
[30] K. E. A. van de Sande, T. Gevers, and C. G. M. Snoek. Evaluating
color descriptors for object and scene recognition. IEEE Transactions
on Pattern Analysis and Machine Intelligence, 32(9):1582–1596, 2010.
[31] O. Vinyals, A. Toshev, S. Bengio, and D. Erhan. Show and tell: A
neural image caption generator. arXiv preprint arXiv:1411.4555, 2014.
[32] G. K. Wallace. The JPEG still picture compression standard. Communications of the ACM, 38(1):xviii–xxxiv, 1992.
[33] H. Wang, A. Kläser, C. Schmid, and C. L. Liu. Action recognition
by dense trajectories. In IEEE Conference on Computer Vision and
Pattern Recognition, pages 3169–3176, 2011.
[34] Z. Wang, E. P. Simoncelli, and A. C. Bovik. Multiscale structural
similarity for image quality assessment. In Signals, Systems and
Computers, 2004. Conference Record of the Thirty-Seventh Asilomar
Conference on, pages 1398–1402 Vol.2, 2003.
[35] B. Zhou, A. Lapedriza, J. Xiao, A. Torralba, and A. Oliva. Learning deep features for scene recognition using places database. In
Z. Ghahramani, M. Welling, C. Cortes, N. D. Lawrence, and K. Q.
Weinberger, editors, Advances in Neural Information Processing Systems 27, pages 487–495. Curran Associates, Inc., 2014.
[36] Y. Zhu, O. Groth, M. Bernstein, and L. Fei-Fei. Visual7w: Grounded
question answering in images. arXiv preprint arXiv:1511.03416, 2015.
A New Numerical Abstract Domain
Based on Difference-Bound Matrices
arXiv:cs/0703073v2 [] 16 Mar 2007
Antoine Miné
École Normale Supérieure de Paris, France,
[email protected],
http://www.di.ens.fr/~mine
Abstract. This paper presents a new numerical abstract domain for
static analysis by abstract interpretation. This domain allows us to represent invariants of the form (x − y ≤ c) and (±x ≤ c), where x and y
are variable values and c is an integer or real constant.
Abstract elements are represented by Difference-Bound Matrices, widely
used by model-checkers, but we had to design new operators to meet the
needs of abstract interpretation. The result is a complete lattice of infinite
height featuring widening, narrowing and common transfer functions.
We focus on giving an efficient O(n2 ) representation and graph-based
O(n3 ) algorithms—where n is the number of variables—and claim that
this domain always performs more precisely than the well-known interval
domain.
To illustrate the precision/cost tradeoff of this domain, we have implemented simple abstract interpreters for toy imperative and parallel languages which allowed us to prove some non-trivial algorithms correct.
1
Introduction
Abstract interpretation has proved to be a useful tool for eliminating bugs in software because it allows the design of automatic and sound analyzers for real-life
programming languages. While abstract interpretation is a very general framework, we will be interested here only in discovering numerical invariants, that is
to say, arithmetic relations that hold between numerical variables in a program.
Such invariants are useful for tracking common errors such as division by zero
and out-of-bound array access.
In this paper we propose practical algorithms to discover invariants of the
form (x − y ≤ c) and (±x ≤ c)—where x and y are numerical program variables
and c is a numeric constant. Our method works for integers, reals and even
rationals.
For the sake of brevity, we will omit proofs of theorems in this paper.
Complete proofs of all theorems can be found in the author’s MS thesis [12].
Previous and Related Work. Static analysis has developed approaches to
automatically find numerical invariants based on numerical abstract domains
representing the form of the invariants we want to find. Famous examples are
the lattice of intervals (described in, for instance, Cousot and Cousot’s ISOP’76
paper [4]) and the lattice of polyhedra (described in Cousot and Halbwachs’s
POPL’78 paper [8]) which represent respectively invariants of the form (v ∈
[c1 , c2 ]) and (α1 v1 + · · · + αn vn ≤ c). Whereas the interval analysis is very
efficient—linear memory and time cost—but not very precise, the polyhedron
analysis is much more precise but has a huge memory cost—exponential in the
number of variables.
Invariants of the form (x−y ≤ c) and (±x ≤ c) are widely used by the modelchecking community. A special representation, called Difference-Bound Matrices
(DBMs), was introduced, as well as many operators in order to model-check
timed automata (see Yovine’s ES’98 paper [14] and Larsen, Larsson, Pettersson
and Yi’s RTSS’97 paper [10]). Unfortunately, most operators are tied to modelchecking and are of little interest for static analysis.
Our Contribution. This paper presents a new abstract numerical domain
based on the DBM representation, together with a full set of new operators and
transfer functions adapted to static analysis.
Sections 2 and 3 present a few well-known results about potential constraint
sets and introduce briefly the Difference-Bound Matrices. Section 4 presents operators and transfer functions that are new—except for the intersection operator—
and adapted to abstract interpretation. In Section 5, we use these operators to
build lattices, which can be complete under certain conditions. Section 6 shows
some practical results we obtained with an example implementation and Section
7 gives some ideas for improvement.
2
Difference-Bound Matrices
Let V = {v1 , . . . , vn } be a finite set of variables with values in a numerical set I
(which can be the set Z of integers, the set Q of rationals or the set R of reals).
We focus, in this paper, on the representation of constraints of the form
(vj − vi ≤ c), (vi ≤ c) and (vi ≥ c), where vi , vj ∈ V and c ∈ I. By choosing one
variable to be always equal to 0, we can represent the above constraints using only
potential constraints, that is to say, constraints of the form (vj − vi ≤ c). From
now on, we will choose v2 , . . . , vn to be program variables, and v1 to be the constant
0 so that (vi ≤ c) and (vi ≥ c) are rewritten (vi − v1 ≤ c) and (v1 − vi ≤ −c). We
assume we now work only with potential constraints over the set {v1 , . . . , vn }.
Difference-Bound Matrices. We extend I to I = I∪{+∞} by adding the +∞
element. The standard operations ≤, =, +, min and max are extended to I as
usual (we will not use operations, such as − or ∗, that may lead to indeterminate
forms).
Any set C of potential constraints over V can be represented uniquely by an n ×
n matrix in I—provided we assume, without loss of generality, that there do not
exist two potential constraints (vj − vi ≤ c) in C with the same left member and
different right members. The matrix m associated with the potential constraint
set C is called a Difference-Bound Matrix (DBM) and is defined as follows:
△
c
if (vj − vi ≤ c) ∈ C,
mij =
+∞
elsewhere .
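As a small illustration (plain Python, with float('inf') playing the role of +∞; the encoding of constraints as triples is our own):

```python
INF = float("inf")

def dbm_from_constraints(n, constraints):
    """Build the n x n DBM m of a potential constraint set C.

    `constraints` lists triples (j, i, c), 0-based, encoding vj - vi <= c;
    following the definition, m[i][j] = c, and +inf where no constraint
    exists (keeping the tightest bound if a pair appears twice).
    """
    m = [[INF] * n for _ in range(n)]
    for j, i, c in constraints:
        m[i][j] = min(m[i][j], c)
    return m
```

For the running example below (v1 is the constant 0): v2 ≤ 4 is v2 − v1 ≤ 4, −v2 ≤ −1 is v1 − v2 ≤ −1, and so on.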
Potential Graphs. A DBM m can be seen as the adjacency matrix of a directed
graph G = (V, A, w) with edges weighted in I. V is the set of nodes, A ⊆ V 2 is
the set of edges and w ∈ A 7→ I is the weight function. G is defined by:
(vi , vj ) ∉ A if mij = +∞;  (vi , vj ) ∈ A and w(vi , vj ) = mij if mij ≠ +∞.
We will denote by ⟨i1 , . . . , ik ⟩ a finite set of nodes representing a path from
node vi1 to node vik in G. A cycle is a path such that i1 = ik .
V-Domain and V 0 -Domain. We call the V-domain of a DBM m, denoted by
D(m), the set of points in Iⁿ that satisfy all potential constraints:
D(m) ≜ {(x1 , . . . , xn ) ∈ Iⁿ | ∀i, j, xj − xi ≤ mij }.
Now, remember that the variable v1 has a special semantics: it is always
equal to 0. Thus, it is not the V-domain which is of interest, but the V 0 -domain
(which is a sort of intersection-projection of the V-domain) denoted by D0 (m)
and defined by:
D0 (m) ≜ {(x2 , . . . , xn ) ∈ Iⁿ⁻¹ | (0, x2 , . . . , xn ) ∈ D(m)}.
We will call V-domain and V 0 -domain any subset of In or In−1 which is
respectively the V-domain or the V 0 -domain of some DBM. Figure 1 shows an
example DBM together with its corresponding potential graph, constraint set,
V-domain and V 0 -domain.
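A direct membership test for the V 0 -domain follows the definitions above (plain Python sketch; v1 is pinned to 0):

```python
INF = float("inf")

def in_v0_domain(m, point):
    """Check whether (x2, ..., xn) lies in D0(m): prepend 0 for v1 and
    verify xj - xi <= m[i][j] for all pairs (i, j)."""
    x = (0.0,) + tuple(point)
    n = len(m)
    return all(x[j] - x[i] <= m[i][j] for i in range(n) for j in range(n))
```

With the DBM of Figure 1, the point (x2, x3) = (2, 2) satisfies every constraint, while (4, 1) violates x2 − x3 ≤ 1.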
⊑ Order. The ≤ order on I induces a point-wise order ⊑ on the set of DBMs:

m ⊑ n ⟺ ∀i, j, mij ≤ nij.

This order is partial. It is also complete if I has least upper bounds, i.e., if I is R
or Z, but not Q. We will denote by = the associated equality relation, which is
simply matrix equality.
We have m ⊑ n =⇒ D0 (m) ⊆ D0 (n) but the converse is not true. In
particular, we do not have D0 (m) = D0 (n) =⇒ m = n (see Figure 2 for a
counter-example).
(a) Constraint set: v2 ≤ 4,  −v2 ≤ −1,  v3 ≤ 3,  −v3 ≤ −1,  v2 − v3 ≤ 1.

(b) DBM:
        v1    v2    v3
  v1   +∞     4     3
  v2   −1    +∞    +∞
  v3   −1     1    +∞

(c) Potential graph: edges v1 → v2 (weight 4), v1 → v3 (weight 3), v2 → v1 (weight −1), v3 → v1 (weight −1), v3 → v2 (weight 1).

(d), (e) Drawings of the corresponding V-domain and V 0 -domain (figures not reproduced here).

Fig. 1. A constraint set (a), its corresponding DBM (b) and potential graph (c),
its V-domain (d) and V 0 -domain (e).
(a)
         v1    v2    v3
    v1   +∞    4     3
    v2   −1    +∞    +∞
    v3   −1    1     +∞

(b)
         v1    v2    v3
    v1   0     5     3
    v2   −1    +∞    +∞
    v3   −1    1     +∞

(c)
         v1    v2    v3
    v1   0     4     3
    v2   −1    0     2
    v3   −1    1     0

Fig. 2. Three different DBMs with the same V 0 -domain as in Figure 1. Remark
that (a) and (b) are not even comparable with respect to P.
3 Closure, Emptiness, Inclusion and Equality Tests
We saw in Figure 2 that two different DBMs can represent the same V 0 -domain.
In this section, we show that there exists a normal form for any DBM with a
non-empty V 0 -domain and present an algorithm to find it. The existence and
computability of a normal form is very important since it is, as often in abstract
representations, the key to equality testing used in fixpoint computation. In the
case of DBMs, it will also allow us to carry out an analysis of the precision of the
operators defined in the next section.
Emptiness Testing. We have the following graph-oriented theorem:
Theorem 1.
A DBM has an empty V 0 -domain if and only if there exists, in its associated
potential graph, a cycle with a strictly negative total weight.
Checking for cycles with a strictly negative weight is done using the well-known
Bellman-Ford algorithm which runs in O(n³). This algorithm can be found in
Cormen, Leiserson and Rivest’s classical algorithmics textbook [2, §25.3].
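As an illustration (not part of the original presentation), the emptiness test of Theorem 1 can be sketched in Python. The sketch assumes DBMs are stored as square lists of lists over floats, with float('inf') for +∞ and 0-based indices, v1 being index 0; a virtual source connected to every node lets one round of extra relaxation detect a strictly negative cycle:

```python
INF = float('inf')

def is_empty(m):
    """Emptiness test for a DBM (Theorem 1) via negative-cycle detection.

    Runs Bellman-Ford on the potential graph: n-1 rounds of relaxation
    starting from a virtual source at distance 0 from every node, then
    one extra round; any further improvement reveals a cycle with a
    strictly negative total weight, i.e. an empty V0-domain.  O(n^3).
    """
    n = len(m)
    dist = [0.0] * n  # virtual source reaches every node with weight 0
    for _ in range(n - 1):
        for i in range(n):
            for j in range(n):
                if m[i][j] != INF and dist[i] + m[i][j] < dist[j]:
                    dist[j] = dist[i] + m[i][j]
    for i in range(n):
        for j in range(n):
            if m[i][j] != INF and dist[i] + m[i][j] < dist[j]:
                return True  # negative cycle => empty V0-domain
    return False
```

For instance, the two constraints v2 − v1 ≤ 1 and v1 − v2 ≤ −2 form a cycle of weight −1, hence an empty V 0 -domain, while the DBM of Figure 1 is non-empty.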
Closure and Normal Form. Let m be a DBM with a non-empty V 0 -domain
and G its associated potential graph. Since G has no cycle with a strictly negative
weight, we can compute its shortest path closure G ∗ , the adjacency matrix of
which will be denoted by m∗ and defined by:
    m∗ii ≜ 0 ,
    m∗ij ≜ min over all paths ⟨i = i1 , i2 , . . . , iN = j⟩, 1 ≤ N, of Σk=1..N−1 mik ik+1 ,   if i ≠ j .
The idea of closure relies on the fact that, if ⟨i = i1 , i2 , . . . , iN = j⟩ is a path from vi to vj , then the constraint vj − vi ≤ Σk=1..N−1 mik ik+1 can be derived from m by adding the potential constraints vik+1 − vik ≤ mik ik+1 , 1 ≤ k ≤ N − 1.
This is an implicit potential constraint which does not appear directly in the
DBM m. When computing the closure, we replace each potential constraint
vj − vi ≤ mij , i 6= j in m by the tightest implicit constraint we can find, and
each diagonal element by 0 (which is indeed the smallest value vi − vi can reach).
In Figure 2 for instance, (c) is the closure of both the (a) and (b) DBMs.
Theorem 2.
1. m∗ = inf P {n | D0 (n) = D0 (m)}.
2. D0 (m) saturates m∗ , that is to say:
∀i, j, such that m∗ij < +∞, ∃(x1 = 0, x2 , . . . , xn ) ∈ D(m), xj − xi = m∗ij .
Theorem 2.1 states that m∗ is the smallest DBM—with respect to P—that
represents a given V 0 -domain, and thus the closed form is a normal form. Theorem 2.2 states a crucial property used to prove the accuracy of some operators defined in the
next section.
Any shortest-path graph algorithm can be used to compute the closure of
a DBM. We suggest the straightforward Floyd-Warshall, which is described in
Cormen, Leiserson and Rivest’s textbook [2, §26.2], and has an O(n³) time cost.
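The closure computation can be sketched as follows (an illustrative Python sketch under the same assumptions as before: DBMs as square lists of lists with float('inf') for +∞, and no strictly negative cycle in the potential graph):

```python
INF = float('inf')

def close(m):
    """Shortest-path closure m* of a DBM (Floyd-Warshall, O(n^3)).

    Each off-diagonal entry is replaced by the tightest implicit
    constraint derivable from paths in the potential graph, and each
    diagonal element is set to 0.
    """
    n = len(m)
    c = [row[:] for row in m]
    for k in range(n):
        for i in range(n):
            for j in range(n):
                if c[i][k] + c[k][j] < c[i][j]:
                    c[i][j] = c[i][k] + c[k][j]
    for i in range(n):
        c[i][i] = 0
    return c
```

On the DBM of Figure 1, this yields the closed DBM (c) of Figure 2: in particular the implicit constraint v3 − v2 ≤ (v1 − v2) + (v3 − v1) ≤ −1 + 3 = 2 appears.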
Equality and Inclusion Testing. The case where m or n or both have an
empty V 0 -domain is easy; in all other cases we use the following theorem—which
is a consequence of Theorem 2.1:
Theorem 3.
1. If m and n have non-empty V 0 -domain, D0 (m) = D0 (n) ⇐⇒ m∗ = n∗ .
2. If m and n have non-empty V 0 -domain, D0 (m) ⊆ D0 (n) ⇐⇒ m∗ P n.
Besides the emptiness test and closure, we may need, in order to test equality or inclusion, to compare matrices with respect to the point-wise ordering P. This can be done with an O(n²) time cost.
Projection. We define the projection π|vk (m) of a DBM m with respect to a
variable vk to be the interval containing all possible values of v ∈ I such that
there exists a point (x2 , . . . , xn ) in the V 0 -domain of m with xk = v:
π|vk (m) ≜ {x ∈ I | ∃(x2 , . . . , xn ) ∈ D0 (m) such that x = xk } .
The following theorem, which is a consequence of the saturation property of the
closure, gives an algorithmic way to compute the projection:
Theorem 4.
If m has a non-empty V 0 -domain, then π|vk (m) = [−m∗k1 , m∗1k ]
(interval bounds are included only if finite).
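Theorem 4 translates directly into code. The following illustrative Python sketch (same assumed representation: lists of lists, float('inf') for +∞, v1 at index 0) recomputes the closure inline and reads the projection off the first row and column:

```python
INF = float('inf')

def project(m, k):
    """Projection of a DBM with a non-empty V0-domain onto variable k
    (0-based index; index 0 stands for v1, which always equals 0).

    By Theorem 4, the interval of possible values is
    [-m*[k][0], m*[0][k]] where m* is the closure of m.
    """
    n = len(m)
    c = [row[:] for row in m]
    for p in range(n):  # Floyd-Warshall closure
        for i in range(n):
            for j in range(n):
                c[i][j] = min(c[i][j], c[i][p] + c[p][j])
    return (-c[k][0], c[0][k])
```

On the DBM of Figure 1 this gives π|v2 = [1, 4] and π|v3 = [1, 3].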
4 Operators and Transfer Functions
In this section, we define some operators and transfer functions to be used in
abstract semantics. Except for the intersection operator, they are new. The operators are basically point-wise extensions of the standard operators defined over
the domain of intervals [4].
Most algorithms presented here are either constant time, or point-wise, i.e.,
quadratic time.
Intersection. Let us define the point-wise intersection DBM m ∧ n by:
(m ∧ n)ij ≜ min(mij , nij ) .
We have the following theorem:
Theorem 5.
D0 (m ∧ n) = D0 (m) ∩ D0 (n).
stating that the intersection is always exact. However, the resulting DBM is
seldom closed, even if the arguments are closed.
Least Upper Bound. The set of V 0 -domains is not stable by union¹, so we introduce here a union operator which over-approximates its result. We define the point-wise least upper bound DBM m ∨ n by:
(m ∨ n)ij ≜ max(mij , nij ) .
m ∨ n is indeed the least upper bound with respect to the P order. The
following theorem tells us about the effect of this operator on V 0 -domains:
Theorem 6.
1. D0 (m ∨ n) ⊇ D0 (m) ∪ D0 (n).
2. If m and n have non-empty V 0 -domains, then
(m∗ ) ∨ (n∗ ) = inf_P {o | D0 (o) ⊇ D0 (m) ∪ D0 (n)}
and, as a consequence, D0 ((m∗ ) ∨ (n∗ )) is the smallest V 0 -domain (with
respect to the ⊆ ordering) which contains D0 (m) ∪ D0 (n).
3. If m and n are closed, then so is m ∨ n.
Theorem 6.1 states that D0 (m ∨ n) is an upper bound in the set of V 0 -domains
with respect to the ⊆ order. If precision is a concern, we need to find the least
upper bound in this set. Theorem 6.2—which is a consequence of the saturation
property of the closure—states that we have to close both arguments before
applying the ∨ operator to get this most precise union over-approximation. If
one argument has an empty V 0 -domain, the least upper bound we want is simply
the other argument. Emptiness tests and closure add an O(n³) time cost.
¹ V 0 -domains are always convex, but the union of two V 0 -domains may not be convex.
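The two point-wise operators can be sketched as follows (illustrative Python, same list-of-lists representation with float('inf'); for the most precise union of Theorem 6.2, both arguments should be closed before calling the join):

```python
INF = float('inf')

def meet(m, n):
    """Point-wise intersection m ∧ n; always exact (Theorem 5)."""
    return [[min(a, b) for a, b in zip(rm, rn)] for rm, rn in zip(m, n)]

def join(m, n):
    """Point-wise least upper bound m ∨ n; an over-approximation of the
    union of the V0-domains (Theorem 6.1)."""
    return [[max(a, b) for a, b in zip(rm, rn)] for rm, rn in zip(m, n)]
```

Both are quadratic-time, as stated above; the join of two closed DBMs is closed (Theorem 6.3), while the meet seldom is.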
Widening. When computing the semantics of a program, one often encounters
loops leading to fixpoint computation involving infinite iteration sequences. In
order to compute in finite time an upper approximation of a fixpoint, widening
operators were introduced in P. Cousot’s thesis [3, §4.1.2.0.4]. Widening is a sort
of union for which every increasing chain is stationary after a finite number of
iterations. We define the point-wise widening operator ▽ by:
(m▽n)ij ≜ mij if nij ≤ mij , +∞ elsewhere .
The following properties prove that ▽ is indeed a widening:
Theorem 7.
1. D0 (m▽n) ⊇ D0 (m) ∪ D0 (n).
2. Finite chain property:
∀m and ∀(ni )i∈N , the chain defined by:
x0 ≜ m ,   xi+1 ≜ xi ▽ni ,
is increasing for P and ultimately stationary. The limit l is such that l Q m
and ∀i, l Q ni .
The widening operator has some intriguing interactions with closure. Like the
least upper bound, the widening operator gives more precise results if its right
argument is closed, so it is rewarding to change xi+1 = xi ▽ni into xi+1 =
xi ▽(ni ∗ ). This is not the case for the first argument: we can have sometimes
D0 (m▽n) ⊊ D0 ((m∗ )▽n). Worse, if we try to force the closure of the first
argument by changing xi+1 = xi ▽ni into xi+1 = (xi ▽ni )∗ , the finite chain
property (Theorem 7.2) is no longer satisfied, as illustrated in Figure 3.
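The widening itself is a one-line point-wise test (illustrative Python sketch, same assumed representation as above): entries that keep growing are pushed to +∞, which is what ensures the finite chain property of Theorem 7.2:

```python
INF = float('inf')

def widen(m, n):
    """Point-wise widening m ▽ n: keep an entry of m if the
    corresponding entry of n is not larger, otherwise give up and
    push it to +inf so that the iteration chain is finite."""
    return [[mij if nij <= mij else INF
             for mij, nij in zip(rm, rn)]
            for rm, rn in zip(m, n)]
```

For instance, widening [[0, 1], [-1, 0]] with [[0, 2], [-1, 0]] pushes the growing upper bound to +∞, and any further iterate with a growing upper bound is stationary.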
Originally [4], Cousot and Cousot defined the widening over intervals ▽ by:
[a, b] ▽ [c, d] ≜ [e, f ] ,  where:
e ≜ a if a ≤ c, −∞ elsewhere ;   f ≜ b if b ≥ d, +∞ elsewhere .
The following theorem proves that the sequence computed by our widening is
always more precise than with the standard widening over intervals:
Theorem 8.
If we have the following iterating sequences:
x0 ≜ m∗ ,   xk+1 ≜ xk ▽(nk ∗ ) ,
[y0 , z0 ] ≜ π|vi (m) ,   [yk+1 , zk+1 ] ≜ [yk , zk ] ▽ π|vi (nk ) ,
[Figure 3: the DBM potential graphs for m, ni , x2i and x2i+1 are omitted here.]
Fig. 3. Example of an infinite strictly increasing chain defined by x0 = m∗ , xi+1 = (xi ▽ni )∗ .
then the sequence (xk )k∈N is more precise than the sequence ([yk , zk ])k∈N in the
following sense:
∀k, π|vi (xk ) ⊆ [yk , zk ] .
Remark that the technique described in Cousot and Cousot’s PLILP’92 paper [7] for improving the precision of the standard widening over intervals can also be applied to our widening ▽. It allows, for instance, deriving a widening that always gives better results than a simple sign analysis (which is not the case of either widening alone). The resulting widening over DBMs will remain more precise than the resulting widening over intervals.
Narrowing. Narrowing operators were introduced in P. Cousot’s thesis [3,
§4.1.2.0.11] in order to restore, in a finite time, some information that may
have been lost by widening applications. We define here a point-wise narrowing
operator △ by:
(m△n)ij ≜ nij if mij = +∞ , mij elsewhere .
The following properties prove that △ is indeed a narrowing:
Theorem 9.
1. If D0 (n) ⊆ D0 (m), then D0 (n) ⊆ D0 (m△n) ⊆ D0 (m).
2. Finite decreasing chain property:
∀m and for any chain (ni )i∈N decreasing for P, the chain defined by:
x0 ≜ m ,   xi+1 ≜ xi △ni ,
is decreasing and ultimately stationary.
Given a sequence (nk )k∈N such that the chain (D0 (nk ))k∈N is decreasing
for the ⊆ partial order (but not (nk )k∈N for the P partial order), one way
to ensure the best accuracy as well as the finiteness of the chain (xk )k∈N is
to force the closure of the right argument by changing xi+1 = xi △ni into
xi+1 = xi △(ni ∗ ). Unlike widening, forcing all elements in the chain to be
closed with xi+1 = (xi △ni )∗ poses no problem.
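The narrowing is equally short (illustrative Python sketch, same assumed representation): it only refines entries of the left argument that are +∞, which guarantees the finite decreasing chain property:

```python
INF = float('inf')

def narrow(m, n):
    """Point-wise narrowing m △ n: refine only the +inf entries of m,
    leaving all finite entries untouched so that decreasing chains
    are ultimately stationary."""
    return [[nij if mij == INF else mij
             for mij, nij in zip(rm, rn)]
            for rm, rn in zip(m, n)]
```

Typically, a bound that widening pushed to +∞ is restored from the right argument, while already-finite bounds are kept as they are.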
Forget. Given a DBM m and a variable vk , the forget operator m\vk computes a DBM where all information about vk is lost. It is the opposite of the projection
operator π|vk . We define this operator by:
(m\vk )ij ≜ min(mij , mik + mkj ) if i ≠ k and j ≠ k , 0 if i = j = k , +∞ elsewhere .
The V 0 -domain of m\vk is obtained by projecting D0 (m) on the subspace orthogonal to the vk axis, and then extruding the result in the direction of vk :
Theorem 10.
D0 (m\vk ) =
{(x2 , . . . , xn ) ∈ In−1 | ∃x ∈ I, (x2 , . . . , xk−1 , x, xk+1 , . . . , xn ) ∈ D0 (m)}.
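A sketch of the forget operator (illustrative Python, same assumed representation; for best precision it should be applied to a closed DBM, so that the implicit constraints passing through v_k are preserved before its row and column are erased):

```python
INF = float('inf')

def forget(m, k):
    """Forget operator m\\v_k (0-based index k): erase all constraints
    on v_k, but keep the constraints between the other variables that
    v_k implied (paths of length two through node k)."""
    n = len(m)
    r = [[INF] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            if i != k and j != k:
                r[i][j] = min(m[i][j], m[i][k] + m[k][j])
    r[k][k] = 0
    return r
```

On the closed DBM of Figure 1, forgetting v3 keeps the constraints 1 ≤ v2 ≤ 4 and drops everything involving v3.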
Guard. Given an arithmetic equality or inequality g over {v2 , . . . , vn }—which
we call a guard—and a DBM m, the guard transfer function tries to find a new
DBM m(g) the V 0 -domain of which is {s ∈ D0 (m) | s satisfies g}. Since this is,
in general, impossible, we will only try to have:
Theorem 11.
D0 (m(g) ) ⊇ {s ∈ D0 (m) | s satisfies g}.
Here is an example definition:
Definition 12.
1. If g = (vj0 − vi0 ≤ c) with i0 ≠ j0 , then:
(m(vj0 −vi0 ≤c) )ij ≜ min(mij , c) if i = i0 and j = j0 , mij elsewhere .
The cases g = (vj0 ≤ c) and g = (−vi0 ≤ c) are settled by choosing respectively i0 = 1 and j0 = 1.
2. If g = (vj0 − vi0 = c) with i0 ≠ j0 , then:
m(vj0 −vi0 =c) ≜ (m(vj0 −vi0 ≤c) )(vi0 −vj0 ≤−c) .
The case g = (vj0 = c) is a special case where i0 = 1.
3. In all other cases, we simply choose:
m(g) ≜ m .
In all but the last—general—cases, the guard transfer function is exact.
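The exact cases of Definition 12 amount to tightening single matrix entries (illustrative Python sketch, same assumed representation; indices are 0-based with v1 at index 0, and the function names guard_le and guard_eq are ours, not the paper's):

```python
INF = float('inf')

def guard_le(m, j0, i0, c):
    """Guard transfer function for v_{j0} - v_{i0} <= c (Definition 12.1):
    tighten the single entry m[i0][j0], which bounds x_{j0} - x_{i0}."""
    r = [row[:] for row in m]
    r[i0][j0] = min(r[i0][j0], c)
    return r

def guard_eq(m, j0, i0, c):
    """Guard transfer function for v_{j0} - v_{i0} = c (Definition 12.2):
    the conjunction of the two inequality guards."""
    return guard_le(guard_le(m, j0, i0, c), i0, j0, -c)
```

For example, adding the guard v3 − v2 ≤ 1 to the DBM of Figure 1 tightens the single entry at row v2, column v3; the guard v2 = 2 tightens two entries through the special index of v1.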
Assignment. An assignment vk ← e(v2 , . . . , vn ) is defined by a variable vk and
an arithmetic expression e over {v2 , . . . , vn }.
Given a DBM m representing all possible values that the variable set {v2 , . . . , vn } can take at a program point, we look for a DBM, denoted by m(vk ←e) , representing the possible values of the same variable set after the assignment vk ← e. This is not possible in the general case, so the assignment transfer
function will only try to find an upper approximation of this set:
Theorem 13.
D0 (m(vk ←e) ) ⊇
{(x2 , . . . , xk−1 , e(x2 , . . . , xn ), xk+1 , . . . , xn ) | (x2 , . . . , xn ) ∈ D0 (m)} .
For instance, we can use the following definition for m(vi0 ←e) :
Definition 14.
1. If e = vi0 + c, then:
(m(vi0 ←vi0 +c) )ij ≜ mij − c if i = i0 , j ≠ i0 ; mij + c if i ≠ i0 , j = i0 ; mij elsewhere .
2. If e = vj0 + c with i0 ≠ j0 , then we use the forget operator and the guard transfer function:
m(vi0 ←vj0 +c) ≜ ((m\vi0 )(vi0 −vj0 ≤c) )(vj0 −vi0 ≤−c) .
The case e = c is a special case where we choose j0 = 1.
3. In all other cases, we use a standard interval arithmetic to find an interval [−e− , e+ ], e+ , e− ∈ I, such that
[−e− , e+ ] ⊇ e(π|v2 (m), . . . , π|vn (m))
and then we define:
(m(vi0 ←e) )ij ≜ e+ if i = 1 and j = i0 ; e− if j = 1 and i = i0 ; (m\vi0 )ij elsewhere .
In all but the last—general—cases, the assignment transfer function is exact.
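The exact shift case of Definition 14.1 can be sketched as follows (illustrative Python, same assumed representation; k is the 0-based index of the assigned variable, and the name assign_shift is ours):

```python
INF = float('inf')

def assign_shift(m, k, c):
    """Assignment transfer function for v_k <- v_k + c (Definition 14.1).

    A constraint x_j - x_k <= m[k][j] on the old value of v_k becomes
    x_j - x_k <= m[k][j] - c on the new value, and symmetrically a
    constraint x_k - x_i <= m[i][k] becomes x_k - x_i <= m[i][k] + c.
    Note that +inf - c stays +inf under float arithmetic.
    """
    n = len(m)
    r = [row[:] for row in m]
    for j in range(n):
        if j != k:
            r[k][j] = m[k][j] - c
            r[j][k] = m[j][k] + c
    return r
```

On the DBM of Figure 1, the assignment v2 ← v2 + 1 shifts the interval of v2 from [1, 4] to [2, 5] and relaxes v2 − v3 ≤ 1 to v2 − v3 ≤ 2, exactly.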
Comparison with the Abstract Domain of Intervals. Most of the time, the
precision of numerical abstract domains can only be compared experimentally
on example programs (see Section 6 for such an example). However, we claim
that the DBM domain always performs better than the domain of intervals.
To legitimate this assertion, we compare informally the effect of all abstract
operations in the DBM and in the interval domains. Thanks to Theorems 5 and
6.2, and Definitions 12 and 14, the intersection and union abstract operators
and the guard and assignment transfer functions are more precise than their
interval counterpart. Thanks to Theorem 8, approximate fixpoint computation
with our widening ▽ is always more accurate than with the standard widening
over intervals ▽ and one could prove easily that each iteration with our narrowing
is more precise than with the standard narrowing over intervals. This means that
any abstract semantics based on the operators and transfer functions we defined
is always more precise than the corresponding interval-based abstract semantics.
5 Lattice Structures
In this section, we design two lattice structures: one on the set of DBMs and one
on the set of closed DBMs. The first one is useful to analyze fixpoint transfer
between abstract and concrete semantics and the second one allows us to design
a meaning function—or even a Galois Connection—linking the set of abstract
V 0 -domains to the concrete lattice P({v2 , . . . , vn } 7→ I), following the abstract
interpretation framework described in Cousot and Cousot’s POPL’79 paper [5].
DBM Lattice. The set M of DBMs, together with the order relation P and the
point-wise least upper bound ∨ and greatest lower bound ∧, is almost a lattice.
It only needs a least element ⊥, so we extend P, ∨ and ∧ to M⊥ = M ∪ {⊥} in
an obvious way to get ⊑, ⊔ and ⊓. The greatest element ⊤ is the DBM with all
its coefficients equal to +∞.
Theorem 15.
1. (M⊥ , ⊑, ⊓, ⊔, ⊥, ⊤) is a lattice.
2. This lattice is complete if (I, ≤) is complete (Z or R, but not Q).
There are, however, two problems with this lattice. First, we cannot easily
assimilate this lattice to a sub-lattice of P({v2 , . . . , vn } 7→ I) as two different
DBMs can have the same V 0 -domain. Then, the least upper bound operator ⊔
is not the most precise upper approximation of the union of two V 0 -domains
because we do not force the arguments to be closed.
Closed DBM Lattice. To overcome these difficulties, we build another lattice
based on closed DBMs. First, consider the set M∗⊥ of closed DBMs M∗ with a
least element ⊥∗ added. Now, we define a greatest element ⊤∗ , a partial order
relation ⊑∗ , a least upper bound ⊔∗ and a greatest lower bound ⊓∗ in M∗⊥ by:
⊤∗ij ≜ 0 if i = j , +∞ elsewhere .
m ⊑∗ n ⇐⇒ either m = ⊥∗ , or m ≠ ⊥∗ , n ≠ ⊥∗ and m P n .
m ⊔∗ n ≜ m if n = ⊥∗ , n if m = ⊥∗ , m ∨ n elsewhere .
m ⊓∗ n ≜ ⊥∗ if m = ⊥∗ or n = ⊥∗ or D0 (m ∧ n) = ∅ , (m ∧ n)∗ elsewhere .
Thanks to Theorem 2.1, every non-empty V 0 -domain has a unique representation in M∗ ; ⊥∗ is the representation for the empty set. We build a meaning function γ which is an extension of D0 (·) to M∗⊥ :
γ(m) ≜ ∅ if m = ⊥∗ , D0 (m) elsewhere .
Theorem 16.
1. (M∗⊥ , ⊑∗ , ⊓∗ , ⊔∗ , ⊥∗ , ⊤∗ ) is a lattice and γ is one-to-one.
2. If (I, ≤) is complete, this lattice is complete and γ is meet-preserving:
γ(⊓∗ X) = ∩ {γ(x) | x ∈ X}. We can—according to Cousot and Cousot [6, Prop. 7]—build a canonical Galois Insertion between P({v2 , . . . , vn } ↦ I) and M∗⊥ , with meaning function γ and abstraction function α defined by:
α(D) = ⊓∗ { m ∈ M∗⊥ | D ⊆ γ(m) } .
The M∗⊥ lattice features a nice meaning function and a precise union approximation; thus, it is tempting to force all our operators and transfer functions to
live in M∗⊥ by forcing closure on their result. However, we saw this does not work
for widening, so fixpoint computation must be performed in the M⊥ lattice.
6 Results
The algorithms on DBMs presented here have been implemented in OCaml and
used to perform forward analysis on toy—yet Turing-equivalent—imperative and
parallel languages with only numerical variables and no procedure.
We present here neither the concrete and abstract semantics, nor the actual
forward analysis algorithm used for our analyzers. They follow exactly the abstract interpretation scheme described in Cousot and Cousot’s POPL’79 paper
[5] and Bourdoncle’s FMPA’93 paper [1] and are detailed in the author’s MS thesis [12]. Theorems 1, 3, 5, 6, 11 and 13 prove that all the operators and transfer
functions we defined are indeed abstractions on the domain of DBMs of the usual
operators and transfer functions on the concrete domain P({v2 , . . . , vn } 7→ I),
which, as shown by Cousot and Cousot [5], is sufficient to prove soundness for
analyses.
Imperative Programs. Our toy forward analyzer for imperative language follows almost exactly the analyzer described in Cousot and Halbwachs’s POPL’78
paper [8], except that the abstract domain of polyhedra has been replaced by
our DBM-based domain. We tested our analyzer on the well-known Bubble Sort
and Heap Sort algorithms and managed to prove automatically that they do
not produce out-of-bound errors while accessing array elements. Although we did
not find as many invariants as Cousot and Halbwachs for these two examples, it
was sufficient to prove the correctness. We do not detail these common examples
here for the sake of brevity.
Parallel Programs. Our toy analyzer for parallel language allows analyzing a
fixed set of processes running concurrently and communicating through global
variables. We use the well-known nondeterministic interleaving method in order
to analyze all possible control flows. In this context, we managed to prove automatically that the Bakery algorithm, introduced in 1974 by Lamport [9] for synchronizing two parallel processes, never lets the two processes be at the same time in their critical sections. We now detail this example.
The Bakery Algorithm. After the initialization of two global shared variables
y1 and y2, two processes p1 and p2 are spawned. They synchronize through the
variables y1 and y2, representing the priority of p1 and p2, so that only one
process at a time can enter its critical section (Figure 4).
Our analyzer for parallel processes is fed with the initialization code (y1 = 0;
y2 = 0) and the control flow graphs for p1 and p2 (Figure 5). Each control graph
is a set of control point nodes and some edges labeled with either an action
performed when the edge is taken (the assignment y1 ← y2 + 1, for example) or
a guard imposing a condition for taking the edge (the test y1 6= 0, for example).
The analyzer then computes the nondeterministic interleaving of p1 and p2
which is the product control flow graph. Then, it computes iteratively the abstract invariants holding at each product control point. It outputs the invariants
shown in Figure 6.
The state (2, c) is never reached, which means that p1 and p2 cannot be
at the same time in their critical section. This proves the correctness of the
Bakery algorithm. Remark that our analyzer also discovered some non-obvious
invariants, such as y1 = y2 + 1 holding in the (1, c) state.
y1 = 0; y2 = 0;
(p1)
while true do
    y1 = y2 + 1;
    while y2 ≠ 0 and y1 > y2 do done;
    --- critical section ---
    y1 = 0;
done
(p2)
while true do
    y2 = y1 + 1;
    while y1 ≠ 0 and y2 ≥ y1 do done;
    --- critical section ---
    y2 = 0;
done
Fig. 4. Pseudo-code for the Bakery algorithm.
[Figure 5: the graphical control flow graphs are omitted here. Process p1 has control points 0, 1, 2 with edges: 0 → 1 labeled y1 ← y2 + 1; a self-loop at 1 guarded by (y2 ≠ 0 and y1 > y2); 1 → 2 guarded by (y2 = 0 or y1 ≤ y2), point 2 being the critical section; and 2 → 0 labeled y1 ← 0. Process p2 is symmetric, with control points a, b, c, the assignments y2 ← y1 + 1 and y2 ← 0, and the guards (y1 ≠ 0 and y2 ≥ y1) and (y1 = 0 or y2 < y1).]
Fig. 5. Control flow graphs of processes p1 and p2 in the Bakery algorithm.
(0, a): y1 = 0, y2 = 0
(0, b): y1 = 0, y2 ≥ 1
(0, c): y1 = 0, y2 ≥ 1
(1, a): y1 ≥ 1, y2 = 0
(1, b): y1 ≥ 1, y2 ≥ 1
(1, c): y1 ≥ 2, y2 ≥ 1, y1 − y2 = 1
(2, a): y1 ≥ 1, y2 = 0
(2, b): y1 ≥ 1, y2 ≥ 1, y1 − y2 ∈ [−1, 0]
(2, c): ⊥
Fig. 6. Result of our analyzer on the nondeterministic interleaving product graph of p1 and p2 in the Bakery algorithm.
7 Extensions and Future Work
Precision improvement. In our analysis, we only find a coarse set of the invariants that hold in a program, since finding all invariants of the form (x − y ≤ c) and (±x ≤ c) for all programs is non-computable. Possible losses of precision have three causes: non-exact union, widening in loops, and non-exact assignment and guard transfer functions.
We made crude approximations in the last—general—case of Definitions 12
and 14 and there is room for improving assignment and guard transfer functions,
even though exactness is impossible. When the DBM lattices are complete, there exist most precise transfer functions such that Theorems 11 and 13 hold; however, these functions may be difficult to compute.
Finite Union of V 0 -domains. One can imagine representing finite unions of
V 0 -domains, using a finite set of DBMs instead of a single one as abstract state.
This allows an exact union operator but it may lead to memory and time cost
explosion as abstract states contain more and more DBMs, so one may need
from time to time to replace a set of DBMs by their union approximation.
The model-checker community has also developed specific structures to represent finite unions of V-domains that are less costly than sets. Clock-Difference
Diagrams (introduced in 1999 by Larsen, Weise, Yi and Pearson [11]) and Difference Decision Diagrams (introduced in Møller, Lichtenberg, Andersen and
Hulgaard’s CSL’99 paper [13]) are tree-based structures made compact thanks
to the sharing of isomorphic sub-trees; however, the existence of normal forms for such structures is only a conjecture at the time of writing, and only local or path reduction algorithms exist. One can imagine adapting such structures to abstract interpretation the way we adapted DBMs in this paper.
Space and Time Cost Improvement. Space is often a big concern in abstract
interpretation. The DBM representation we proposed in this paper has a fixed
O(n²) memory cost—where n is the number of variables in the program. In the
actual implementation, we decided to use the graph representation—or hollow
matrix—which stores only edges with a finite weight and observed a great space
gain as most DBMs we use have many +∞. Most algorithms are also faster
on hollow matrices and we chose to use the more complex, but more efficient,
Johnson shortest-path closure algorithm—described in Cormen, Leiserson and
Rivest’s textbook [2, §26.3]—instead of the Floyd-Warshall algorithm.
Larsen, Larsson, Pettersson and Yi’s RTSS’97 paper [10] presents a minimal
form algorithm which finds a DBM with the fewest finite edges representing a
given V 0 -domain. This minimal form could be useful for memory-efficient storing,
but cannot be used for direct computation with algorithms requiring closed
DBMs.
Representation Improvement. The invariants we manipulate are, in terms of precision and complexity, between interval and polyhedron analysis. It is interesting to look for domains allowing the representation of more forms of invariants
than DBMs in order to increase the granularity of numerical domains. We are
currently working on an improvement of DBMs that allows us to represent, with
a small time and space complexity overhead, invariants of the form (±x± y ≤ c).
8
Conclusion
We presented in this paper a new numerical abstract domain inspired from the
well-known domain of intervals and the Difference-Bound Matrices. This domain
allows us to manipulate invariants of the form (x − y ≤ c), (x ≤ c) and (x ≥ c)
with an O(n²) worst case memory cost per abstract state and an O(n³) worst case time cost per abstract operation (where n is the number of variables in the
program).
Our approach made it possible for us to prove the correctness of some nontrivial algorithms beyond the scope of interval analysis, for a much smaller cost
than polyhedron analysis. We also proved that this analysis always gives better
results than interval analysis, for a slightly greater cost.
Acknowledgments. I am grateful to J. Feret, C. Hymans, D. Monniaux, P.
Cousot, O. Danvy and the anonymous referees for their helpful comments and
suggestions.
References
[1] F. Bourdoncle. Efficient chaotic iteration strategies with widenings. In FMPA’93,
number 735 in LNCS, pages 128–141. Springer-Verlag, 1993.
[2] T. Cormen, C. Leiserson, and R. Rivest. Introduction to Algorithms. The MIT
Press, 1990.
[3] P. Cousot. Méthodes itératives de construction et d’approximation de points fixes
d’opérateurs monotones sur un treillis, analyse sémantique de programmes. Thèse
d’état ès sciences mathématiques, Université scientifique et médicale de Grenoble,
France, 1978.
[4] P. Cousot and R. Cousot. Static determination of dynamic properties of programs.
In Proc. of the 2d Int. Symposium on Programming, pages 106–130. Dunod, Paris,
France, 1976.
[5] P. Cousot and R. Cousot. Systematic design of program analysis frameworks. In
ACM POPL’79, pages 269–282. ACM Press, 1979.
[6] P. Cousot and R. Cousot. Abstract interpretation and application to logic programs. Journal of Logic Programming, 13(2–3):103–179, 1992.
[7] P. Cousot and R. Cousot. Comparing the Galois connection and widening/narrowing approaches to abstract interpretation, invited paper. In PLILP’92,
number 631 in LNCS, pages 269–295. Springer-Verlag, August 1992.
[8] P. Cousot and N. Halbwachs. Automatic discovery of linear restraints among
variables of a program. In ACM POPL’78, pages 84–97. ACM Press, 1978.
[9] L. Lamport. A new solution of Dijkstra’s concurrent programming problem. Communications of the ACM, 17(8):453–455, August 1974.
[10] K. Larsen, F. Larsson, P. Pettersson, and W. Yi. Efficient verification of real-time
systems: Compact data structure and state-space reduction. In IEEE RTSS’97,
pages 14–24. IEEE CS Press, December 1997.
[11] K. Larsen, C. Weise, W. Yi, and J. Pearson. Clock difference diagrams. Nordic
Journal of Computing, 6(3):271–298, October 1999.
[12] A. Miné. Representation of two-variable difference or sum constraint set and application to automatic program analysis. Master’s thesis, ENS-DI, Paris, France,
2000. http://www.eleves.ens.fr:8080/home/mine/stage_dea/.
[13] J. Møller, J. Lichtenberg, H. R. Andersen, and H. Hulgaard. Difference decision
diagrams. In CSL’99, volume 1683 of LNCS, pages 111–125. Springer-Verlag,
September 1999.
[14] S. Yovine. Model-checking timed automata. In Embedded Systems, number 1494
in LNCS, pages 114–152. Springer-Verlag, October 1998.
arXiv:1801.04223v1 [] 12 Jan 2018
Machine Intelligence Techniques for
Next-Generation Context-Aware Wireless
Networks
Tadilo Endeshaw Bogale1,2 , Xianbin Wang1 and Long Bao Le2
Western University, London, Canada1
Institute National de la Recherche Scientifique (INRS),
Université du Québec, Montréal, Canada2
Email: {tadilo.bogale, long.le}@emt.inrs.ca, [email protected]
Abstract
The next generation wireless networks (i.e. 5G and beyond), which will be extremely dynamic and complex due to the ultra-dense deployment of heterogeneous networks (HetNets), pose many critical challenges for network planning, operation, management and troubleshooting. At the same time, generation
and consumption of wireless data are becoming increasingly distributed with ongoing paradigm shift from
people-centric to machine-oriented communications, making the operation of future wireless networks even
more complex. In mitigating the complexity of future network operation, new approaches of intelligently
utilizing distributed computational resources with improved context-awareness become extremely important.
In this regard, the emerging fog (edge) computing architecture, aiming to distribute computing, storage, control, communication, and networking functions closer to end users, has great potential for enabling efficient operation of future wireless networks. These promising architectures make the adoption of artificial intelligence (AI) principles, which incorporate learning, reasoning and decision-making mechanisms, a natural choice for designing a tightly integrated network. Towards this end, this article provides a comprehensive survey
on the utilization of AI integrating machine learning, data analytics and natural language processing (NLP)
techniques for enhancing the efficiency of wireless network operation. In particular, we provide comprehensive
discussion on the utilization of these techniques for efficient data acquisition, knowledge discovery, network
planning, operation and management of the next generation wireless networks. A brief case study utilizing the
AI techniques for this network has also been provided.
Keywords– 5G and beyond, Artificial (machine) intelligence, Context-aware-wireless, ML, NLP, Ontology
I. I NTRODUCTION
The advent of the fifth generation (5G) wireless
network and its convergence with vertical applications
constitute the foundation of future connected society
which is expected to support 125 billion devices by
2030 (IHS Markit). As these applications and devices
are featured by ubiquitous connectivity requirements,
future 5G and beyond networks are becoming more
complex. Besides the complexity increase of base
stations (BSs) and user equipments (UEs), significant
challenges arise from the initial network planning to
the deployment and situation-dependent operation and
management stages.
The network architecture of 5G and beyond will
be inevitably heterogeneous and multi-tier with ultra-
dense deployment of small cells to achieve the anticipated 1000-fold capacity increase cost-effectively.
For instance, the mixed use of planned and centrally
controlled macro-BSs and randomly deployed wireless
fidelity (WiFi) access points or femto-BSs in the ultra-dense heterogeneous network (HetNet) raises several
unexpected operation scenarios, which are not possible
to envision at the network design stage. This requires
future wireless networks to have self-organizing, self-configuring and self-healing capabilities based on the operational condition, through tight coordination among
different nodes, tiers and communication layers. These
challenges make the existing network design strategies, which utilize fairly simple statistics, experience unacceptable performance (for example, in terms of spectrum
[Figure 1: diagram omitted. Four favorable conditions surround the AI opportunities for wireless networks: network interoperability; distributed & affordable computing resources; predictive network characteristics; and massive data availability.]
Fig. 1. Favorable conditions for the adoption of machine intelligence techniques in the next generation wireless networks.
and energy efficiency, coverage, delay and cost) [1],
[2].
The rapidly growing number of machine-type communication (MTC) devices contributes considerably to the complexity of this ultra-dense network. Many of the future MTC applications supported
by 5G and beyond will require the underlying wireless
networks to achieve high availability, reliability and
security, very short transit time and low latency [3].
Furthermore, in such use cases, uninterrupted and
safe operation is often the top priority (for instance,
connected vehicles). Taking an MTC application offline
for any reason can cause significant business loss or
non-tolerable customer experience, and many of the
MTC devices are resource-constrained and will not be
able to rely solely on their own limited resources to
fulfill their processing demands [4].
As a result, these latency-critical applications cannot
be moved to the network controller or cloud due to delay, bandwidth, or other constraints. Moreover, the data
sets generated from these devices will be extremely
diverse and may have large-scale missing (inaccurate)
values [5]. In addition, a number of new data-hungry
immersive MTC use cases will arise, including wearables, virtual reality, and intelligent product and support systems, most of which will use built-in back-end data infrastructure and analytics engines to provide
context-aware services. All these necessitate the next
generation network (i.e., 5G and beyond) to adopt an
intelligent and context-aware approach for network
planning, design, analysis, and optimization.
We are in the beginning phase of an intelligent era
that has been driven by the rapid evolution of semiconductor industries, computing technologies, and diverse
use cases. This is witnessed by the tight integration of
networked information systems, sensing and communication devices, data sources, decision making, and
cyber-physical infrastructures. The proliferation of tiny
wireless sensors, MTC devices, and smartphones
also provides clear evidence of the exceptional processing capability and cost-effectiveness of semiconductor devices. These promising developments facilitate
distributed computing resources not only in the cloud
but also in the fog and edge nodes. Both fog and
edge computing attempt to push the intelligence and
processing capabilities down closer to where the data
originates.
Edge computing aims to integrate intelligence
and processing capabilities closest to the original data source. The edge node, for example an intelligent programmable automation controller (PAC),
determines which raw input data should be stored
locally or sent to the fog (cloud) for further analysis.
On the other hand, in the fog computing, all the raw
input data will first be converted to the appropriate
Internet protocol (such as HTTP) before sending to
the fog nodes. Higher-level data contents are then
processed, stored, and sent to the cloud for further analysis in the fog devices (for example, intelligent routers,
access points, and Internet of Things (IoT) gateways). Thus,
the edge and fog enabled network allows distributed
computing, storage, control, communication, and networking functions by reducing the data transmitted and
work load of the cloud, latency and system response
time especially for applications demanding localized
and location-dependent information [6]. Moreover, each
node, user, sensor, or MTC device is potentially capable of generating raw and processed data at different
granularity levels, which ultimately helps the network
to have a massive amount of data likely exhibiting
some pattern. This will help different nodes to leverage data mining and analytics techniques to predict
relevant network metrics such as user mobility, traffic
behavior, network load fluctuation, channel variations,
and interference levels.
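To make this kind of prediction concrete, the following minimal sketch fits an autoregressive (AR) model by least squares to a synthetic cell-load trace and produces a one-step-ahead forecast. The trace, the AR order, and the least-squares formulation are illustrative assumptions, not a method taken from any of the surveyed works.

```python
import numpy as np

def fit_ar(series, order=3):
    """Least-squares fit of an AR(order) model: x[t] ~ sum_k a[k] * x[t-1-k]."""
    X = np.column_stack(
        [series[order - 1 - k:len(series) - 1 - k] for k in range(order)])
    y = series[order:]
    coeffs, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coeffs

def predict_next(series, coeffs):
    """One-step-ahead forecast from the most recent samples."""
    order = len(coeffs)
    recent = series[-1:-order - 1:-1]  # last `order` samples, newest first
    return float(recent @ coeffs)

# Synthetic diurnal-like cell-load trace (illustrative only)
t = np.arange(200)
load = 50 + 20 * np.sin(2 * np.pi * t / 48)

a = fit_ar(load, order=3)
print(round(predict_next(load, a), 1))
```

The same fit-then-forecast pattern applies to any of the metrics listed above (traffic, load, interference) once a suitable history is available.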
All these opportunities enable efficient and flexible
resource allocation and management, protocol stack
configuration, and signaling procedure and physical
layer optimization, and facilitate existing devices to
harness the powers of sensors, edge, fog and cloud
based computing platforms, and data analytics engines
[7]–[9]. These also create favorable conditions to engineer a tightly integrated wireless network by adopting
the AI principles (see Fig. 1) incorporating learning,
reasoning and decision making mechanisms which are
crucial to realize context-awareness capability. A typical next generation network utilizing the AI principles
at different nodes is shown in Fig. 2. Towards this end,
the current paper provides a comprehensive survey
on the utilization of AI integrating machine learning,
data analytics and natural language processing (NLP)
techniques for enhancing the efficiency of wireless
systems. We particularly focus on the utilization of
these techniques for efficient wireless data acquisition
and knowledge discovery, planning, and operation and
management of the next generation wireless networks.
A brief case study showing the utilization of AI
techniques for this network has also been provided.
The paper is organized as follows. In Section II,
we discuss data acquisition and knowledge discovery
approaches used in AI-enabled wireless networks. Then, a comprehensive discussion on how this knowledge can be
used in network planning, operation and management
of the next generation wireless networks is given in Sections
III and IV. A case study discussing the applications
of AI techniques for channel impulse response (CIR)
prediction and context-aware data transmission is then
provided in Section V. Finally, conclusions are drawn
in Section VI.
II. DATA ACQUISITION AND KNOWLEDGE DISCOVERY
Efficient data acquisition and knowledge discovery
is one of the requirements of future wireless networks
as it helps to realize situation-aware and optimized
decisions, as shown in Fig. 3. The gathered data may
need to be processed efficiently to extract relevant
knowledge. Furthermore, as the available data may
contain a large amount of erroneous (missing) values,
a robust knowledge discovery may need to be devised
[5].
A. Data Acquisition
AI-based tools relying on machine learning could be
applied for input data mining and knowledge model
extraction at different levels [10]. This includes
the cell level, cell cluster level and user level. In
general, one can collect data from three main sources:
network, user, and external. The network data characterizes the network behavior, including outage and
usage statistics of services or nodes, and load of a
cell. The user data could comprise user subscription
information and user device type. The external
data contains user-specific information obtained
from different sources such as sensors and channel
measurements [11].
One way of collecting wireless data is by employing
content caching where the idea is to store popular
contents at the network edge (at BSs, devices, or
other intermediate locations). In this regard, one can
enable proactive caching if the traffic-learning algorithm predicts that the same content will be
requested in the near future [12], [13]. Moreover,
since different users may request the same content
with different qualities, each edge node may need to
cache the same content in different granularity (for
example, caching video data with different resolution).
This further requires the edge device to apply coded
(adaptive) caching technique based on the quality of
service (QoS) requirement of the requester [14]. Coded
caching also enables devices to create multicasting
opportunities for certain contents via coded multicast
transmissions [15].
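A minimal sketch of the proactive-caching idea described above: contents are scored by an exponentially decayed request frequency, and the predicted top-k items are pre-cached. The decay factor, cache capacity, and request trace are illustrative assumptions rather than the specific schemes of [12]–[15].

```python
from collections import defaultdict

class ProactiveCache:
    """Pre-caches the contents with the highest decayed request frequency."""

    def __init__(self, capacity, decay=0.9):
        self.capacity = capacity
        self.decay = decay            # older requests count less
        self.scores = defaultdict(float)
        self.cached = set()

    def record_request(self, content_id):
        # Decay all scores, then reward the requested content.
        for c in self.scores:
            self.scores[c] *= self.decay
        self.scores[content_id] += 1.0

    def refresh(self):
        # Keep the top-`capacity` contents by predicted popularity.
        ranked = sorted(self.scores, key=self.scores.get, reverse=True)
        self.cached = set(ranked[:self.capacity])

    def hit(self, content_id):
        return content_id in self.cached

cache = ProactiveCache(capacity=2)
for c in ["a", "b", "a", "c", "a", "b"]:
    cache.record_request(c)
cache.refresh()
print(sorted(cache.cached))
```

The decayed score is a crude popularity predictor; a deployed system would replace it with a learned request-distribution model.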
In some cases, a given edge (fog) node may collect
data from more than one source, each with different connectivity criteria [16]. In a fog enabled wireless network,
this is facilitated by employing IoT devices which
leverage a multitude of radio-access technologies such
as wireless local area network (WLAN) and cellular
networks. In this regard, context-aware data collection
from multiple sources, possibly in a compressed format, can be enabled by employing appropriate dimensionality reduction techniques under imperfect statistical knowledge
of the data, while simultaneously optimizing multiple
objective functions such as delay and transmission
power [13].
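As one way to realize the compressed, multi-source collection just described, the sketch below applies principal component analysis (PCA) via the SVD to reduce stacked sensor readings to a few components before transmission; the data shape, noise level, and number of components are illustrative assumptions.

```python
import numpy as np

def pca_compress(X, n_components):
    """Project rows of X (samples x features) onto the top principal components."""
    mean = X.mean(axis=0)
    Xc = X - mean
    # Right singular vectors give the principal directions.
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    components = Vt[:n_components]
    return Xc @ components.T, components, mean

def pca_decompress(Z, components, mean):
    """Approximate reconstruction at the receiving fog/cloud node."""
    return Z @ components + mean

rng = np.random.default_rng(0)
# 100 readings from 8 correlated sensors (low-rank structure + small noise)
latent = rng.normal(size=(100, 2))
X = latent @ rng.normal(size=(2, 8)) + 0.01 * rng.normal(size=(100, 8))

Z, comps, mu = pca_compress(X, n_components=2)
X_hat = pca_decompress(Z, comps, mu)
print(Z.shape)
```

Here the 8-dimensional readings compress to 2 values per sample with little reconstruction loss, because the sensors are strongly correlated.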
B. Knowledge Discovery
Efficient knowledge discovery is critical for optimized operation and management of the network. The
network may need to use a novel learning technique
such as deep learning to extract the hidden contextual
information of the network, which is crucial for
knowledge base (KB) creation. In general, context is
related to any information used to characterize the
situation of an entity, including surrounding location,
identity, preferences, and activities. Context may affect
the operation and management procedures of complex
systems at various levels, from the physical device
level to the communication level, up to the application level [17]. For instance, uncovering the relation
between the device and network information (user
location, velocity, battery level, and other medium
access control (MAC) and higher layer aspects) would
permit adaptive communication and processing capabilities based on the changes in the environment and
application [2], [17].
[Fig. 2 diagram: cloud, fog nodes, microwave and mmWave BSs, and IoT, cellular, and ITS edges.]
Fig. 2. Typical next generation network adopting AI principles with learning, reasoning and decision making.
[Fig. 3 flow: network, user, and external data → data acquisition → knowledge discovery → reasoning and decision → optimized network.]
Fig. 3. Optimized network design with AI techniques.
Analyzing wireless data contextually (semantically)
helps wireless operators to optimize their network
traffic. To realize semantic aware traffic optimization,
however, the network may need to understand the
content of the signal. One way of understanding the
information content of data is by creating a semantic-aware ontology using a predefined vocabulary of terms
and concepts [18]. The ontology specification can
provide an expressive language with many logical
constructs to define classes, properties and their relationships. In this regard, the authors of [18] propose
a semantic open data model for sensor data, called
MyOntoSens, written using the Web Ontology Language 2 (OWL 2) description logic. The proposed KB has
been implemented using Protégé and pre-validated
with the Pellet reasoner. In a similar context, an ontology
for wireless sensor networks (WSNs) dedicated to the
description of sensor features and observation has been
presented in [19], [20].
Understanding the context also helps to produce
a context-aware compressed (summary) information
which will utilize less radio resource for transmission.
For instance, if a BS would like to transmit text
information to a user, the BS can transmit only its
contextual coded data. The user will then extract
the desired content from the context by utilizing
an appropriate decoder and big-data analytics techniques
such as NLP. As context differs from the user’s world
knowledge about the content, the coding and decoding
technique may vary among users [17]. In general,
two types of content summarizing approaches are
commonly adopted: abstractive and extractive. The
extractive approach uses only the relevant content
from the original information, whereas the abstractive
approach may use new words (expressions or contents) as part of the summary information [21], [22].
Although most of the existing methods can extract
useful information for the summary, they are very
far from generating human-understandable summaries.
One of the main reasons is the loose associations
and unordered information distribution, which make it
hard to extract syntactically correct and semantically
coherent information from the summary. In fact,
modeling the coherence of information summary is
one of the active areas of research [21].
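As a minimal illustration of the extractive approach, the sketch below scores each sentence by the average corpus frequency of its words and keeps the top-ranked sentences in their original order. The scoring rule and the toy text are illustrative assumptions, not a method from [21], [22].

```python
import re
from collections import Counter

def extractive_summary(text, n_sentences=1):
    """Keep the n highest-scoring sentences, preserving their original order.
    A sentence's score is the mean corpus frequency of its words."""
    sentences = re.split(r"(?<=[.!?])\s+", text.strip())
    freq = Counter(re.findall(r"[a-z']+", text.lower()))

    def score(s):
        toks = re.findall(r"[a-z']+", s.lower())
        return sum(freq[t] for t in toks) / max(len(toks), 1)

    ranked = sorted(range(len(sentences)),
                    key=lambda i: score(sentences[i]),
                    reverse=True)[:n_sentences]
    return " ".join(sentences[i] for i in sorted(ranked))

text = ("The network predicts traffic. The network predicts traffic and load. "
        "Cats are unrelated here.")
print(extractive_summary(text, 1))
```

Frequency scoring favors sentences built from recurring terms; it ignores the coherence problem noted above, which is exactly where more advanced methods are needed.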
III. NETWORK PLANNING
One of the most critical aspects determining the performance of a wireless network is the initial planning.
This includes infrastructure (node deployment), frequency, number of parameters and their configuration
setting procedures, and energy consumption to run the
network in both idle (no data communication takes
place between the network and a user) and active (data
communication takes place between the network and
a user) conditions. A well planned network may need
to guarantee satisfactory user QoS (in terms of data
rate, reliability and latency) and the network operator
requirements (in terms of cost). The machine learning
technique can be used for planning different parts of
the network by utilizing the network and user data.
A. Node Deployment, Energy Consumption and RF
Planning
The future generation wireless networks will be
extremely dense and heterogeneous, likely equipped
with moving and flying BSs, and featured by continually varying network conditions [23]. This makes
the existing network planning techniques, which are
mainly static and designed from expensive field tests,
unsuitable for future wireless networks [2], [24].
The utilization of AI techniques for network planning
has recently received interest in the research community. For instance, a machine learning technique
is suggested for content popularity and request distribution prediction of a cache, whose design in
the network considers several factors, including the cache
placement and update strategy, which are determined
by utilizing the users' content request distribution and
frequency, and mobility pattern [25].
In [26], an AI-based system that leverages graph
theory based problem formulations is proposed to
automate the planning process of fiber-to-the-home
networks. To solve the problems, mixed integer linear
programming (MILP), ant colony optimization (ACO)
and genetic algorithm (GA) have been applied. The
authors of [10] employ the principles of AI for radio
access network (RAN) planning, which includes new
cell, radio frequency (RF), and spectrum planning for 5G
wireless networks. The analysis is performed by processing input data from multiple sources, through learning based
classification, prediction and clustering models, and
extracting relevant knowledge to drive the decisions
made by 5G network.
Wireless networks contribute an increasing share of
the energy consumption of the information and communications technology (ICT) infrastructure. Over 80%
of a wireless network power consumption is used by
the RANs, especially at the BS, since the present BS
deployment is designed on the basis of peak traffic
loads and generally stays active irrespective of the
huge variations in traffic load [27]. This makes the
current energy planning inefficient for future smart
cities aiming to realize green communication. To enable energy-efficient wireless networks, different AI techniques
have been suggested. For instance, the authors of
[28] propose a method to realize the efficient use
of electricity by autonomously controlling network
equipment, such as servers and air-conditioners, in an
integrated manner. Furthermore, the authors of [29]
suggest predictive models, including neural network
and Markov decision models, for the energy consumption of
IoT in smart cities. Along this line, a BS switching
solution for traffic aware greener cellular networks
using AI techniques has also been discussed in [27].
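A minimal sketch of the traffic-aware BS switching idea: a moving-average predictor forecasts each small cell's load, and lightly loaded cells whose traffic fits within the macro cell's spare capacity are switched off. The predictor, threshold, and load values are illustrative assumptions, not the algorithm of [27].

```python
def predict_load(history, window=3):
    """Moving-average forecast of the next-interval load for one small cell."""
    recent = history[-window:]
    return sum(recent) / len(recent)

def plan_sleep_modes(loads, macro_headroom, threshold=0.2):
    """Switch off lightly loaded small cells whose traffic fits in the macro cell."""
    asleep = []
    for cell, history in sorted(loads.items()):
        forecast = predict_load(history)
        if forecast < threshold and forecast <= macro_headroom:
            asleep.append(cell)
            macro_headroom -= forecast  # macro absorbs the offloaded traffic
    return asleep

loads = {
    "sc1": [0.50, 0.45, 0.40],  # busy: stays on
    "sc2": [0.10, 0.05, 0.06],  # idle: candidate for sleep
    "sc3": [0.15, 0.12, 0.09],  # idle: candidate for sleep
}
print(plan_sleep_modes(loads, macro_headroom=0.25))
```

Checking the forecast against the remaining macro headroom (rather than the threshold alone) keeps the macro cell from being overloaded by offloaded traffic.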
B. Configuration Parameter and Service Planning
The number of configurable parameters in cellular networks increases considerably when moving from one
generation to the next. For instance, typical 3G
and 4G nodes have around 1000 and
1500 such parameters, respectively. It is anticipated that this trend will
continue and that the recently suggested 5G network node
will likely have 2000 or more parameters. In addition,
unlike the current and previous generation networks,
which provide static services, the next generation network may need to support the continuously evolving
new services and use cases, and establish sufficient
network resource and provisioning mechanisms while
ensuring agility and robustness. These necessitate
the next generation network to understand parameter variations, learn uncertainties, configure network
parameters, forecast immediate and future challenges,
and provide timely solutions by interacting with the
environment [27]. In this direction, the utilization of
big data analytics has been discussed for protocol stack
configuration, signaling procedure and physical layer
procedure optimizations in [9].
Future smart cities require well planned wired and
wireless networks with ubiquitous broadband connectivity, and flexible, real time and distributed data processing capability. Although most modern cities have
multiple cellular networks that provide adequate coverage and data processing capability, these networks
often have limited capacity and peak bandwidth and
fail to meet the real-time constraints of different emerging tactile applications. These make the realization
of advanced delay critical municipal services envisioned in a smart city (e.g., real-time surveillance,
public safety, on-time advisories, and smart buildings)
challenging [1]. One way of addressing this challenge
would be by deploying an AI integrated fog based
wireless architecture which allows data processing of
the network using a number of distributed nodes.
This will help for analyzing network status, detecting
anticipated faults and planning new node deployment
using AI techniques [1].
IV. NETWORK OPERATION AND MANAGEMENT
Energy and spectrum efficiency, latency, reliability,
and security are the key parameters that are taken
into account during the network operation stage.
Properly optimizing these parameters usually yields
satisfactory performance for both the service providers
and end users. In addition, these optimization parameters usually require simple and real-time learning and
decision making algorithms.
A. Resource Allocation and Management
Different AI techniques have been proposed for
resource allocation, management and optimization of
wireless networks such as cellular, wearable, WSN,
and body area networks (BANs) [24]. In [30], the potential of AI in solving the channel allocation problem in wireless communication is considered. It is
demonstrated that the AI-based approach performs
better than randomized heuristic
and genetic algorithms (GAs). In [31], radio access technology (RAT) selection utilizing the
Hopfield neural networks as a decision making tool
while leveraging the ability of AI reasoning and the
use of multi-parameter decision by exploiting the
options of the IEEE 802.21 protocol is proposed. Machine learning based techniques, including supervised,
unsupervised, and reinforcement learning,
have been exploited to manage the packet routing in
many different network scenarios [32]. Specifically, in
[33], [34], a deep learning approach for shortest traffic
route identification to reduce network congestion is
presented. A deep learning technique aiming to shift
the computing needs from rule-based route computation to machine learning based route estimation
for high throughput packet processing is proposed in
[32]. Along this line, a fog computing based radio-access network which exploits the advantage of local
radio signal processing, cooperative radio resource
management, and distributed storage capability of fog
has been suggested to decrease the load on the front
haul and avoid large scale radio signal processing in
the centralized baseband controllers [1].
The utilization of unlicensed spectrum as a complement to the licensed one has received interest for offloading network traffic through the carrier aggregation
framework, while critical control signaling, mobility,
voice and control data will always be transmitted on
the licensed bands. In this aspect, the authors of [35]
propose a hopfield neural network scheme for multiradio packet scheduling. The problem of resource
allocation with uplink-downlink decoupling in a long
term evolution-unlicensed (LTE-U) system has been
investigated in [36] for which the authors propose a
decentralized scheme based on neural network. The
authors in [37] propose a distributed approach based
on Q-learning for the problem of channel selection
in an LTE-U system. Furthermore, in a multi-RAT
scenario, machine learning techniques can allow the
smart use of different RATs wherein a BS can learn
when to transmit on each type of frequency band based
on the network conditions. For instance, one can apply
machine learning to predict the availability of a line
of sight (LoS) link, by considering the users’ mobility
pattern and antenna tilt, thus allowing the transmission
over the millimeter wave band.
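To illustrate the Q-learning-based channel selection mentioned above in its simplest form, the sketch below uses a stateless (bandit-style) Q-update with epsilon-greedy exploration over three unlicensed channels. The channel success probabilities and learning parameters are illustrative assumptions, not the scheme of [37].

```python
import random

random.seed(1)

N_CHANNELS = 3
# True collision-free probability of each unlicensed channel (unknown to the BS)
QUALITY = [0.2, 0.9, 0.5]

q = [0.0] * N_CHANNELS          # Q-value per channel (single-state problem)
alpha, epsilon = 0.1, 0.1       # learning rate and exploration rate

for step in range(5000):
    if random.random() < epsilon:
        ch = random.randrange(N_CHANNELS)                   # explore
    else:
        ch = max(range(N_CHANNELS), key=q.__getitem__)      # exploit
    reward = 1.0 if random.random() < QUALITY[ch] else 0.0  # transmission outcome
    q[ch] += alpha * (reward - q[ch])                       # stateless Q-update

print(max(range(N_CHANNELS), key=q.__getitem__))
```

After enough interactions, the Q-values track each channel's success probability and the BS settles on the best channel while still occasionally probing the others.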
B. Security and Privacy Protection
The inherent shared nature of radio propagation
environment makes wireless transmissions vulnerable
to malicious attacks, including eavesdropping and
jamming. For this reason, security and privacy protection are fundamental concerns of today’s wireless
communication systems. Wireless networks generally
adopt separate security levels at different layers of the
communication protocol stack. Furthermore, different
applications usually require different encryption methods [42]. The utilization of AI techniques for wireless
security has received significant interest.
TABLE I
MAIN ISSUES IN AI-ENABLED WIRELESS NETWORK

Data acquisition and knowledge discovery
• Context-aware data acquisition from single/multiple sources [13]
• Coded (adaptive) caching [14], [15]
• Semantic-aware ontology (KB) creation from network data [18]–[20]
• Robust knowledge discovery from erroneous (missing) data [5]

Network planning
• Node deployment and radio frequency allocation [10], [26]
• Caching and computing placement and content update [38]
• Energy consumption modeling and prediction (idle/active) [27], [28]
• Parameter and service configuration procedure [1], [9]

Network operation and management
• Resource allocation: RAT and channel selection [30], [31];
  packet routing, distributed storage and processing [1], [32];
  multi-RAT packet scheduling [35]
• Security: spoofing attack and intrusion detection [39]–[41]
• Latency: context-aware edge computing and scheduling [5], [38]

[Fig. 4 plot: average SE (bps/Hz), roughly 6.6–7.4, versus CIR prediction index (OFDM blocks 1–8), comparing predicted CIR via analytical (RLS), predicted CIR via ML (linear regression), and perfect CIR.]
In [39], a spoofing attack detection scheme using
random key distribution based artificial immune system (AIS) has been proposed. Along similar lines, an
approach based on GA and AIS, called GAAIS, for
dynamic intrusion detection in mobile ad-hoc network
(MANETs) is suggested in [41]. In [40], an advanced
detection of intrusions on sensor networks (ADIOS)
based intrusion detection and prevention system is
developed. The ADIOS is designed to mitigate denial-of-service attacks in wireless sensor networks by capturing and analyzing network events using AI and
an expert system developed with the C Language
Integrated Production System (CLIPS) tool. In a similar work,
the authors of [43] propose an AI-based scheme to secure
the communication protocol of connected vehicles.
C. Latency Optimization for Tactile Applications
The next generation wireless networks feature several mission-critical (tactile) applications,
such as lane switching in automated vehicles. For
vehicular networks, different levels of automation have
been defined by the U.S. Department of Transportation
(DOT), ranging from simple driver assistance (level
1) to full automation (level 5). For this
application, one can apply different message representations including warning alarm, picture and audio
information to request intervening. In fact, recently it
is demonstrated via experiment that the use of natural
language generation techniques from imprecise data
improves the human decision making accuracy. Such
a linguistic description of data could be designed by
modeling vague expressions such as small and large,
which are norms in daily life conversation, using fuzzy
logic theory [44]. All these facilitate the utilization of
predictive machine learning, as in [45], [46].
Fig. 4. Comparison of analytical and machine learning (ML)
approaches in terms of achieved average SE for different future
OFDM-block CIR prediction indices.
From the computing side, edge devices can be
used for effective low-latency computations, using the
emerging paradigm of mobile edge computing. However, optimizing mobile edge computing faces many
challenges such as computing placement, computational resource allocation, computing task assignment,
end-to-end latency, and energy consumption. In this
aspect, a machine learning technique can be used to
address these challenges by utilizing historical data.
Predicting computational requirements enables the network devices to schedule computational resources
in advance to minimize the global latency. In this
aspect, the authors of [38] propose a cross-system
learning framework in order to optimize the long-term
performance of multi-mode BSs by steering delay-tolerant traffic towards WiFi. Furthermore, in a fog
enabled wireless system, latency can be addressed by
exploiting different levels of awareness at each edge
network. In fact, a number of learning techniques can
be applied to achieve this awareness, including incremental, divide-and-conquer, parallel, and hierarchical learning
[5]. A brief summary of different issues in the AI-enabled wireless network is presented in Table I.
V. DESIGN CASE STUDIES
This section discusses typical design case studies
in which the AI techniques can be applied for the
context-aware wireless network.
A. Machine Learning for CIR Prediction
This study demonstrates the utilization of machine learning tools for optimizing wireless system
resources. In this aspect, we select the wireless CIR
prediction as the design objective. To solve this design
objective, the first possibility could be to apply different analytical CIR prediction techniques (for example,
the recursive least square (RLS) prediction proposed in
[45]). The second possibility could be to predict future
CIRs by leveraging the past experience. The former
possibility is very expensive, particularly when real-time prediction is needed. Furthermore, in most cases,
the analytical prediction approach may fail whenever
there is a modeling error or uncertainty. The latter
possibility, however, is simple as it employs the past
experience and applies standard vector multiplication
and addition operations [47]. This simulation compares the performances of RLS and machine learning
prediction approaches. For the machine learning, we
employ the well-known multivariate linear regression.
For the comparison, we consider an orthogonal
frequency division multiplexing (OFDM) transmission
scheme where a BS equipped with N antennas is
serving a single antenna IoT device. The CIR is
modeled by considering a typical scenario of the IEEE
802.11 standard with channel correlation both spatially
and temporarily. The spatial channel covariance matrix
is modeled by considering the uniform linear array
(ULA) structure, and the temporal channel correlation
is designed by the well-known Jakes model [48].
The simulation parameters are: number of multipath taps L = 4, fast Fourier
transform (FFT) size M = 64, N = 8, OFDM
symbol period Ts = 166 µs, RLS window size Sb = 8,
forward prediction window size Sf = 8, carrier frequency 5.6 GHz, and IoT device mobility speed
30 km/h. The signal-to-noise ratio (SNR) for each sub-carrier is set to 10 dB. With these settings, Fig. 4 shows
the average spectrum efficiency (SE) obtained by the
RLS and machine learning approaches for sub-carrier
s = 4. In both cases, the achieved SE decreases as
the future OFDM-block index increases. This is expected since the number of unknown CIR coefficients
increase as the future block index increases leading
to a degraded CIR prediction quality. However, for
a fixed future prediction index, the machine learning
approach yields better performance than the RLS one
(similar average performance is observed for the other
sub-carriers).
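The machine-learning branch of this comparison can be sketched as follows: past CIR samples form the regression features, and the next sample is the target of a least-squares (multivariate linear regression) fit. In this sketch an AR(1) process stands in for the Jakes-model temporal correlation, and all parameter values are illustrative assumptions rather than the exact simulation setup above.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic temporally correlated flat-fading gain: an AR(1) process stands in
# for the Jakes-model correlation used in the actual simulation.
rho, T = 0.98, 600
h = np.zeros(T, dtype=complex)
for t in range(1, T):
    innov = (rng.normal() + 1j * rng.normal()) / np.sqrt(2)
    h[t] = rho * h[t - 1] + np.sqrt(1 - rho**2) * innov

Sb = 8  # backward window: past samples used as regression features
X = np.array([h[t - Sb:t] for t in range(Sb, T)])
y = h[Sb:]

# Multivariate linear regression: least-squares fit of the next CIR sample
# on the Sb previous ones, trained on the first 400 examples.
w, *_ = np.linalg.lstsq(X[:400], y[:400], rcond=None)
y_hat = X[400:] @ w
nmse = np.mean(np.abs(y_hat - y[400:]) ** 2) / np.mean(np.abs(y[400:]) ** 2)
print(round(float(nmse), 3))
```

Once trained, prediction reduces to one inner product per future block, which is the computational advantage over recomputing an analytical predictor in real time.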
[Fig. 5 pie chart: sample traffic data with shares of 56%, 19%, 13%, 9%, and 3% across the five sets.]
Fig. 5. Data traffic of a sample abstract: sets 1 to 5 denote
Background, Objective, Method and Result,
Conclusion, and Related and Future Works, respectively.
B. Context-Aware Data Transmission using NLP Techniques
Context (semantic) aware information transmission
is crucial in the future generation network. To validate
this, we employ abstract texts from scientific articles
[49]. According to that paper, each scientific abstract
text consists of different types of information, including
research background, methodology, and main results.
Fig. 5 shows the expert-annotated data sizes of the different information types for 2000 biomedical article abstracts. As can
be seen from this figure, different information types
use different portions of the overall data set. For
a given user, one can then transmit the desired information
according to the context. For instance, for a user who
is interested only in the basics of the article, transmitting the background information, which accounts for only
9% of the total traffic, could be sufficient. This
shows that semantically enabled data transmission
will reduce the network traffic while simultaneously
maintaining the desired QoS experience of users.
Such a transmission, however, is realized when
sentences of similar types are clustered correctly for
each abstract. In scientific papers, the location and
voice (part of speech) of a sentence are crucial features
for identifying its class set [49]. We have employed these
features with the commonly used data clustering algorithms (i.e., K-means and Agglomerative) and present
the accuracy achieved by these algorithms for each
type as shown in Table II. As can be seen from this
table, different clustering algorithms yield different
accuracy. One can also notice from this table that
significant research work may need to be undertaken
TABLE II
ACCURACY OF DIFFERENT CLUSTERING METHODS

Clustering Method   Set 1   Set 2   Set 3   Set 4   Set 5
K-Means             0.34    0.17    0.35    0.31    0.16
Agglomerative       0.21    0.18    0.38    0.30    0.15
to reach the ideal performance.
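A minimal sketch of the feature-based clustering just described: each sentence is represented by its relative position in the abstract and a (hypothetical) passive-voice flag, and a small hand-rolled k-means groups the sentences. The features and toy data are illustrative assumptions, not the exact pipeline evaluated in Table II.

```python
import numpy as np

def kmeans(X, k, iters=50, seed=0):
    """Plain k-means: returns cluster labels for the rows of X."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(iters):
        # Assign each point to its nearest center.
        labels = np.argmin(((X[:, None] - centers[None]) ** 2).sum(-1), axis=1)
        # Move each center to the mean of its assigned points.
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(axis=0)
    return labels

# Toy features per sentence: [relative position in abstract, passive-voice flag]
X = np.array([
    [0.0, 0.0], [0.1, 0.0],   # early, active  -> background-like
    [0.5, 1.0], [0.6, 1.0],   # middle, passive -> method/result-like
    [0.9, 0.0], [1.0, 0.0],   # late, active   -> conclusion-like
])
labels = kmeans(X, k=3)
print(labels)
```

On well-separated toy data the pairs cluster cleanly; the low accuracies in Table II reflect how much harder the real feature space is.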
VI. CONCLUSIONS
The next generation wireless networks, which will
be more dynamic and complex, with dense deployment
of BSs of different types and access technologies,
pose many design challenges for network planning,
management, and troubleshooting procedures. Nevertheless, wireless data can be generated from different sources, including networked information systems,
and sensing and communication devices. Furthermore,
the emerging fog computing architecture aiming for
distributed computing, storage, control, communication, and networking functions closer to end users
contributes to the efficient realization of wireless systems. This article provides a comprehensive survey
on the utilization of AI integrating machine learning,
data analytics and NLP techniques for enhancing the
efficiency of wireless networks. We have given a
comprehensive discussion on the utilization of these
techniques for efficient wireless data acquisition and
knowledge discovery, planning, operation and management of the next generation wireless networks. A brief
case study showing the utilization of AI techniques has
also been provided.
REFERENCES
[1] A. Munir, P. Kansakar, and S. U. Khan, “IFCIoT: integrated
Fog Cloud IoT: A novel architectural paradigm for the future
internet of things,” IEEE Consumer Electronics Magazine,
vol. 6, no. 3, pp. 74–82, July 2017.
[2] T. E. Bogale and L. Le, “Massive MIMO and mmWave for
5G wireless HetNet: Potential benefits and challenges,” IEEE
Vehic. Techno. Mag., vol. 11, no. 1, pp. 64 – 75, Mar. 2016.
[3] M. Weiner, M. Jorgovanovic, A. Sahai, and B. Nikolić,
“Design of a low-latency, high-reliability wireless communication system for control applications,” in Proc. IEEE Int.
Conf. Commun. (ICC), 2014, pp. 3829 – 3835.
[4] M. Chiang and T. Zhang, “Fog and IoT: An overview of
research opportunities,” IEEE Internet of Things, vol. 3, no.
6, pp. 854 – 864, Dec. 2016.
[5] X. Wang and Y. He, “Learning from uncertainty for big data:
Future analytical challenges and strategies,” IEEE Systems,
Man, and Cybernetics Magazine, vol. 2, no. 2, pp. 26–31,
April 2016.
[6] R. Deng, R. Lu, C. Lai, T. H. Luan, and H. Liang, "Optimal workload allocation in fog-cloud
computing toward balanced delay and power consumption,”
IEEE Internet of Things Journal, vol. 3, no. 6, pp. 1171–
1181, 2016.
[7] M. Chiang, S. Ha, C. L. I, F. Risso, and T. Zhang, “Clarifying
Fog computing and networking: 10 questions and answers,”
IEEE Communications Magazine, vol. 55, no. 4, pp. 18–20,
April 2017.
[8] Y. Sun, H. Song, A. J. Jara, and R. Bie, “Internet of things
and big data analytics for smart and connected communities,”
IEEE Access, vol. 4, pp. 766 – 773, 2016.
[9] S. Han, C. L. I, G. Li, S. Wang, and Q. Sun, “Big data
enabled mobile network design for 5G and beyond,” IEEE
Communications Magazine, vol. PP, no. 99, pp. 2–9, 2017.
[10] J. Prez-Romero, O. Sallent, R. Ferrs, and R. Agust,
“Knowledge-based 5G radio access network planning and
optimization,” in 2016 International Symposium on Wireless
Communication Systems (ISWCS), Sept 2016, pp. 359–365.
[11] J. Prez-Romero, O. Sallent, R. Ferrs, and R. Agust, “Artificial intelligence-based 5G network capacity planning and
operation,” in 2015 International Symposium on Wireless
Communication Systems (ISWCS), Aug 2015, pp. 246–250.
[12] E. Bastug, M. Bennis, and M. Debbah, “Living on the edge:
The role of proactive caching in 5G wireless networks,” IEEE
Communications Magazine, vol. 52, no. 8, pp. 82–89, Aug
2014.
[13] E. Bastug, M. Bennis, M. Medard, and M. Debbah, “Toward
interconnected virtual reality: Opportunities, challenges, and
enablers,” IEEE Communications Magazine, vol. 55, no. 6,
pp. 110–117, 2017.
[14] G. Paschos, E. Bastug, I. Land, G. Caire, and M. Debbah,
“Wireless caching: Technical misconceptions and business
barriers,” IEEE Communications Magazine, vol. 54, no. 8,
pp. 16–22, August 2016.
[15] Y. Fadlallah, A. M. Tulino, D. Barone, G. Vettigli, J. Llorca,
and J. M. Gorce, “Coding for caching in 5G networks,” IEEE
Commun. Magazine, vol. 55, no. 2, pp. 106 – 113, 2017.
[16] D. Ohmann, A. Awada, I. Viering, M. Simsek, and G. P.
Fettweis, “Achieving high availability in wireless networks
by inter-frequency multi-connectivity,” in 2016 IEEE International Conference on Communications (ICC), May 2016,
pp. 1–7.
[17] F. Chiti, R. Fantacci, M. Loreti, and R. Pugliese, “Contextaware wireless mobile autonomic computing and communications: Research trends and emerging applications,” IEEE
Wireless Commun., vol. 23, no. 2, pp. 86 – 92, Apr. 2016.
[18] L. Nachabe, M. Girod-Genet, and B. El Hassan, “Unified
data model for wireless sensor network,” IEEE Sensors
Journal, vol. 15, no. 7, pp. 3657–3667, July 2015.
[19] R. Bendadouche, C. Roussey, G. De Sousa, J. Chanet, and
K-M. Hou, “Extension of the semantic sensor network ontology for wireless sensor networks: The stimulus-WSNnodecommunication pattern,” in Proc. 5th Int. Conf. Semantic
Sensor Networks - Vol. 904, 2012, pp. 49–64.
[20] M. Compton, A. A. Henson, L. Lefort, H. Neuhaus, and A. P.
Sheth, “A survey of the semantic specification of sensors,”
in Proc. CEUR Workshop, 2009, pp. 17–32.
[21] E. Cambria and B. White, “Jumping NLP curves: A review
of natural language processing research,” IEEE Comp.
Intelligence Mag., pp. 48 – 57, May 2014.
[22] T. Hirao, M. Nishino, Y. Yoshida, J. Suzuki, N. Yasuda,
and M. Nagata, “Summarizing a document by trimming the
[23]
[24]
[25]
[26]
[27]
[28]
[29]
[30]
[31]
[32]
[33]
[34]
[35]
discourse tree,” IEEE/ACM Trans. Audio, Speech, and Lang.
Process, vol. 23, no. 11, pp. 2081 – 2092, Nov. 2015.
M. Alzenad, A. El-Keyi, F. Lagum, and H. Yanikomeroglu,
“3-D placement of an unmanned aerial vehicle base station
(UAV-BS) for energy-efficient maximal coverage,” IEEE
Wireless Communications Letters, vol. 6, no. 4, pp. 434–437,
Aug 2017.
Z. Fadlullah, F. Tang, B. Mao, N. Kato, O. Akashi, T. Inoue,
and K. Mizutani, “State-of-the-art deep learning: Evolving
machine intelligence toward tomorrow’s intelligent network
traffic control systems,” IEEE Communications Surveys
Tutorials, vol. PP, no. 99, pp. 1–1, 2017.
M. Chen, M. Mozaffari, W. Saad, C. Yin, M. Debbah, and
C. S. Hong, “Caching in the sky: Proactive deployment
of cache-enabled unmanned aerial vehicles for optimized
quality-of-experience,” IEEE JSAC, vol. 35, no. 5, pp. 1046
– 1061, 2017.
K. F. Poon, A. Chu, and A. Ouali, “An AI-based system
for telecommunication network planning,” in 2012 IEEE
International Conference on Industrial Engineering and Engineering Management, Dec 2012, pp. 874–878.
R. Li, Z. Zhao, X. Zhou, G. Ding, Y. Chen, Z. Wang, and
H. Zhang, “Intelligent 5G: When cellular networks meet
artificial intelligence,” IEEE Wireless Communications, vol.
PP, no. 99, pp. 2–10, 2017.
K. Awahara, S. Izumi, T. Abe, and T. Suganuma, “Autonomous control method using AI planning for energyefficient network systems,” in 2013 Eighth International
Conference on Broadband and Wireless Computing, Communication and Applications, Oct 2013, pp. 628–633.
W. Ejaz, M. Naeem, A. Shahid, A. Anpalagan, and M. Jo,
“Efficient energy management for the internet of things in
smart cities,” IEEE Communications Magazine, vol. 55, no.
1, pp. 84–91, January 2017.
S. I. Suliman, G. Kendall, and I. Musirin, “Artificial
immune algorithm in solving the channel assignment task,”
in 2014 IEEE International Conference on Control System,
Computing and Engineering (ICCSCE 2014), Nov 2014, pp.
153–158.
V. Rakovic and L. Gavrilovska, “Novel RAT selection mechanism based on Hopfield neural networks,” in International
Congress on Ultra Modern Telecommunications and Control
Systems, Oct 2010, pp. 210–217.
B. Mao, Z. M. Fadlullah, F. Tang, N. Kato, O. Akashi,
T. Inoue, and K. Mizutani, “Routing or computing? the
paradigm shift towards intelligent computer network packet
transmission based on deep learning,” IEEE Transactions on
Computers, vol. PP, no. 99, pp. 1–1, 2017.
N. Kato, Z. M. Fadlullah, B. Mao, F. Tang, O. Akashi,
T. Inoue, and K. Mizutani, “The deep learning vision for
heterogeneous network traffic control: Proposal, challenges,
and future perspective,” IEEE wireless commun. Mag, vol.
24, no. 3, pp. 146–153, 2017.
F. Tang, B. Mao, Z. M. Fadlullah, N. Kato, O. Akashi,
T. Inoue, and K. Mizutani, “On removing routing protocol
from future wireless networks: A real-time deep learning
approach for intelligent traffic control,” IEEE Wireless
Commun. Mag, 2017.
Y. Cui, Y. Xu, R. Xu, and X. Sha, “A multi-radio packet
scheduling algorithm for real-time traffic in a heterogeneous
[36]
[37]
[38]
[39]
[40]
[41]
[42]
[43]
[44]
[45]
[46]
[47]
[48]
[49]
wireless network environment,” Information Technology
Journal, vol. 10, pp. 182 – 188, Oct. 2010.
M. Chen, W. Saad, C. Yin, and M. Debbah, “Echo state
networks for proactive caching in cloud-based radio access
networks with mobile users,” IEEE Trans. Wireless Commun., vol. 16, no. 6, pp. 3520 – 3535, Jun. 2017.
J. Perez-Romero, O. Sallent, R. Ferrus, and R. Agusti, “A robustness analysis of learning-based coexistence mechanisms
for LTE-U operation in non-stationary conditions,” in Proc.
of IEEE Vehicular Technology Conference (VTC Fall), Sep.
2015.
M. Bennis, M. Simsek, A. Czylwik, W. Saad, W. Valentin,
and M. Debbah, “When cellular meets WiFi in wireless small
cell networks,” IEEE Commun. Magazine, vol. 51, no. 6, pp.
44 – 50, 2013.
E. S. Kumar, S. M. Kusuma, and B. P. V. Kumar, “A
random key distribution based artificial immune system for
security in clustered wireless sensor networks,” in Electrical,
Electronics and Computer Science (SCEECS), 2014 IEEE
Students’ Conference on, March 2014, pp. 1–7.
V. F. Taylor and D. T. Fokum, “Securing wireless sensor networks from denial-of-service attacks using artificial
intelligence and the clips expert system tool,” in 2013
Proceedings of IEEE Southeastcon, April 2013, pp. 1–6.
F. Barani, “A hybrid approach for dynamic intrusion detection in ad hoc networks using genetic algorithm and artificial
immune system,” in 2014 Iranian Conference on Intelligent
Systems (ICIS), Feb 2014, pp. 1–6.
Y. Zou, J. Zhu, X. Wang, and L. Hanzo, “A survey on
wireless security: Technical challenges, recent advances, and
future trends,” Proceedings of IEEE, vol. 104, no. 9, pp.
1727 – 1765, Sept. 2016.
P. Sharma, H. Liu, H. Wang, and S. Zhang, “Securing
wireless communications of connected vehicles with artificial
intelligence,” in 2017 IEEE International Symposium on
Technologies for Homeland Security (HST), April 2017, pp.
1–7.
D. Gkatzia, O. Lemon, and V. Rieser, “Data-to-text generation improves decision-making under uncertainty,” IEEE
Computational Intelligence Magazine, vol. 12, no. 3, pp. 10–
17, Aug 2017.
T. E. Bogale, X. Wang, and L. B. Le, “Adaptive channel
prediction, beamforming and scheduling design for 5G V2I
network,” in Proc. IEEE Veh. Technol. Conf (VTC-Fall).,
Sep. 2017.
K. Zhang, Y. Mao, S. Leng, Y. He, and Y. ZHANG, “Mobileedge computing for vehicular networks: A promising network paradigm with predictive off-loading,” IEEE Vehicular
Technology Magazine, vol. 12, no. 2, pp. 36–44, June 2017.
T. E. Bogale, X. Wang, and L. B. Le, “Adaptive channel
prediction, beamforming and feedback design for 5G V2I
network,” in IEEE (In preparation for submission), 2017.
D. Schafhuber and G. Matz, “MMSE and adaptive prediction
of time-varying channels for OFDM systems,” IEEE Tran.
Wireless Commun., vol. 4, no. 2, pp. 593–602, March 2005.
Y. Guo, A. Korhonen, I. Silins, and U. Stenius, “A weaklysupervised approach to argumentative zoning of scientific
documents,” in Proc. of Conf. on Empir. Methods in Nat.
Lang. Process., 2011, pp. 273 – 283.
| 7 |
Penalized Estimation in Additive Regression with High-Dimensional Data

Zhiqiang Tan¹ & Cun-Hui Zhang¹

arXiv:1704.07229v1 [] 24 Apr 2017

April 25, 2017
Abstract.
Additive regression provides an extension of linear regression by modeling the
signal of a response as a sum of functions of covariates of relatively low complexity. We study
penalized estimation in high-dimensional nonparametric additive regression where functional
semi-norms are used to induce smoothness of component functions and the empirical L2 norm
is used to induce sparsity. The functional semi-norms can be of Sobolev or bounded variation
types and are allowed to be different amongst individual component functions. We establish
new oracle inequalities for the predictive performance of such methods under three simple
technical conditions: a sub-gaussian condition on the noise, a compatibility condition on the
design and the functional classes under consideration, and an entropy condition on the functional classes. For random designs, the sample compatibility condition can be replaced by its
population version under an additional condition to ensure suitable convergence of empirical
norms. In homogeneous settings where the complexities of the component functions are of the
same order, our results provide a spectrum of explicit convergence rates, from the so-called
slow rate, without requiring the compatibility condition, to the fast rate under hard sparsity or certain Lq sparsity conditions that allow many small components in the true regression function. These
results significantly broaden and sharpen existing ones in the literature.
Key words and phrases.
Additive model; Bounded variation space; ANOVA model; High-
dimensional data; Metric entropy; Penalized estimation; Reproducing kernel Hilbert space;
Sobolev space; Total variation; Trend filtering.
¹ Department of Statistics & Biostatistics, Rutgers University. Address: 110 Frelinghuysen Road, Piscataway, NJ 08854. E-mail: [email protected], [email protected]. The research of Z. Tan was supported in part by PCORI grant ME-1511-32740. The research of C.-H. Zhang was supported in part by NSF grants DMS-1513378, IIS-1250985, and IIS-1407939.
1  Introduction
Additive regression is an extension of linear regression where the signal of a response can be written as a sum of functions of covariates of relatively low complexity. Let $(Y_i, X_i)$, $i = 1, \ldots, n$, be a set of $n$ independent (possibly non-identically distributed) observations, where $Y_i \in \mathbb{R}$ is a response variable and $X_i \in \mathbb{R}^d$ is a covariate (or design) vector. Consider an additive regression model, $Y_i = g^*(X_i) + \varepsilon_i$ with

$$g^*(x) = \sum_{j=1}^p g_j^*(x^{(j)}), \qquad (1)$$

where $\varepsilon_i$ is a noise with mean 0 given $X_i$, $x^{(j)}$ is a vector composed of a small subset of the components of $x \in \mathbb{R}^d$, and $g_j^*$ belongs to a certain functional class $\mathcal{G}_j$. That is, $g^*(x)$ lies in the space of additive functions $\mathcal{G} = \{\sum_{j=1}^p g_j(x^{(j)}) : g_j \in \mathcal{G}_j,\ j = 1, \ldots, p\}$. A function $g \in \mathcal{G}$ may admit the decomposition $g(x) = \sum_{j=1}^p g_j(x^{(j)})$ for multiple choices of $(g_1, \ldots, g_p)$. In what follows, such choices are considered equivalent, but a favorite decomposition can be used to evaluate properties of the components of $g \in \mathcal{G}$.
In a classical setting (e.g., Stone 1985), each $g_j^*$ is a univariate function and $x^{(j)}$ is the $j$th component of $x \in [0,1]^d$, so that $p = d$. We take a broad view of additive regression, and our analysis will accommodate the general setting where $g_j^*$ can be multivariate with $X_i^{(j)}$ being a block of covariates, possibly overlapping across different $j$ as in functional ANOVA (e.g., Gu 2002). However, most concrete examples will be given in the classical setting.
Additive modeling has been well studied in the setting where the number of components
p is fixed. See Hastie & Tibshirani (1990) and references therein. Recently, building upon
related works in penalized linear regression, there has been considerable progress in the
development of theory and methods for sparse additive regression in high-dimensional settings
where p can be of greater order than the sample size n but the number of significant components
is still smaller than n. See, for example, Lin & Zhang (2006), Meier et al. (2009), Ravikumar
et al. (2009), Huang et al. (2010), Koltchinskii & Yuan (2010), Raskutti et al. (2012), Suzuki
& Sugiyama (2013), Petersen et al. (2016), and Yuan & Zhou (2016).
In this article, we study a penalized estimator $\hat g$ with a specific associated decomposition $\hat g = \sum_{j=1}^p \hat g_j$, defined as a minimizer of a penalized loss

$$\|Y - g\|_n^2/2 + \sum_{j=1}^p \big( \rho_{nj} \|g_j\|_{F,j} + \lambda_{nj} \|g_j\|_n \big)$$

over $g \in \mathcal{G}$ and decompositions $g = \sum_{j=1}^p g_j$, where $(\lambda_{nj}, \rho_{nj})$ are tuning parameters, $\|\cdot\|_n$ is the empirical $L_2$ norm based on the data points, e.g. $\|Y - g\|_n^2 = n^{-1}\sum_{i=1}^n \{Y_i - g(X_i)\}^2$, and $\|g_j\|_{F,j}$ is a semi-norm describing the complexity of $g_j \in \mathcal{G}_j$. For simplicity, the association of $\|g_j\|_n$ and $\|g_j\|_{F,j}$ with $X_i^{(j)}$ is typically suppressed.
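As a quick numerical illustration (not part of the paper), the penalized loss above can be evaluated for component functions represented by their fitted values at the design points. The semi-norms $\|g_j\|_{F,j}$ are left abstract and supplied by the caller, and the shared scalar tuning parameters `lam` and `rho` are simplifying assumptions in place of the per-component $(\lambda_{nj}, \rho_{nj})$.

```python
import numpy as np

def penalized_loss(Y, comps, lam, rho):
    """Evaluate ||Y - sum_j g_j||_n^2 / 2 + sum_j (rho ||g_j||_{F,j} + lam ||g_j||_n).

    comps: list of (values, seminorm) pairs, where `values` holds g_j at the
    design points and `seminorm` is a caller-supplied surrogate for ||g_j||_{F,j}
    (e.g. a discrete total variation).
    """
    fit = np.sum([v for v, _ in comps], axis=0)
    loss = np.mean((Y - fit) ** 2) / 2.0
    pen = sum(rho * s + lam * np.sqrt(np.mean(v ** 2)) for v, s in comps)
    return loss + pen

# A perfectly fitting single component with semi-norm 2:
Y = np.array([1.0, 1.0])
val = penalized_loss(Y, [(np.array([1.0, 1.0]), 2.0)], lam=0.5, rho=0.25)
# loss = 0, penalty = 0.25 * 2 + 0.5 * 1 = 1.0
```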
In the above penalty function, the primary role of the empirical norm $\|\cdot\|_n$ is to induce sparsity, whereas the primary role of the functional semi-norm $\|\cdot\|_{F,j}$ is to induce smoothness of the estimated regression function. For example, $\|g_j\|_{F,j} = \{\int_0^1 (g_j^{(m)})^2\, \mathrm{d}z\}^{1/2}$ when $\mathcal{G}_j$ is the $L_2$-Sobolev space $W_2^m$ on $[0,1]$, where $g_j^{(m)}$ denotes the $m$th derivative of $g_j$.
We consider both fixed and random designs and establish oracle inequalities for the predictive performance of $\hat g$ under three simple technical conditions: a sub-gaussian condition on the noises, a compatibility condition on the design and the functional classes $\mathcal{G}_j$, and an entropy condition on $\mathcal{G}_j$. The compatibility condition is similar to the restricted eigenvalue condition used in the analysis of the Lasso, and for random designs, the empirical compatibility condition can be replaced by its population version under an additional condition to ensure suitable convergence of empirical norms. For the Sobolev and bounded variation classes, the entropy condition on $\mathcal{G}_j$ follows from standard results in the literature (e.g., Lorentz et al. 1996).
The implications of our oracle inequalities can be highlighted in the classical homogeneous setting where $X_i^{(j)}$ is the $j$th component of $X_i$ and $\mathcal{G}_j = \mathcal{G}_0$ for all $j$, where $\mathcal{G}_0$ is either an $L_r$-Sobolev space $W_r^m$ or a bounded variation space $V^m$ of univariate functions on $[0,1]$, where $r \ge 1$ and $m \ge 1$ are shape and smoothness indices of the space, and $r = 1$ for $V^m$. In this setting, it is natural to set $(\lambda_{nj}, \rho_{nj}) = (\lambda_n, \rho_n)$ for all $j$. Consider random designs, and suppose that for some choice of $(g_1^*, \ldots, g_p^*)$ satisfying (1),

$$\sum_{j=1}^p \|g_j^*\|_{F,j} \le C_1 M_F, \qquad \sum_{j=1}^p \|g_j^*\|_Q^q \le C_1^q M_q, \qquad (2)$$

where $\|f\|_Q^2 = n^{-1}\sum_{i=1}^n E\{f^2(X_i)\}$ for a function $f(x)$, $C_1 > 0$ is a scaling constant depending only on the moments of $\varepsilon_i$, and $0 \le q \le 1$, $M_q > 0$ and $M_F > 0$ are allowed to depend on $(n, p)$. In the case of hard sparsity, $q = 0$ and $M_0 = \#\{j : g_j^* \neq 0\}$. As a summary, the following result can be easily deduced from Propositions 3, 5, and 7.
Let $\beta_0 = 1/m$ and define

$$w_n^*(q) = \max\Big\{ n^{-\frac{1}{2+\beta_0(1-q)}},\ (\log(p)/n)^{\frac{1-q}{2}} \Big\},$$
$$\gamma_n^*(q) = \min\Big\{ n^{-\frac{1}{2+\beta_0(1-q)}},\ n^{-1/2} (\log(p)/n)^{-\frac{(1-q)\beta_0}{4}} \Big\}.$$
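The two rate quantities can be transcribed directly into code; the sketch below (illustrative, with $\beta_0 = 1/m$ derived from the smoothness index `m`) computes $w_n^*(q)$ and $\gamma_n^*(q)$ for given $(n, p, q, m)$, following the displayed definitions as reconstructed above.

```python
import numpy as np

def w_star(n, p, q, m):
    # w_n*(q) = max{ n^{-1/(2 + b0(1-q))}, (log(p)/n)^{(1-q)/2} }, with b0 = 1/m
    b0 = 1.0 / m
    return max(n ** (-1.0 / (2.0 + b0 * (1.0 - q))),
               (np.log(p) / n) ** ((1.0 - q) / 2.0))

def gamma_star(n, p, q, m):
    # gamma_n*(q) = min{ n^{-1/(2 + b0(1-q))}, n^{-1/2} (log(p)/n)^{-(1-q) b0/4} }
    b0 = 1.0 / m
    return min(n ** (-1.0 / (2.0 + b0 * (1.0 - q))),
               n ** -0.5 * (np.log(p) / n) ** (-(1.0 - q) * b0 / 4.0))

# At q = 1 both reduce to familiar quantities: w_n*(1) = 1 and gamma_n*(1) = n^{-1/2}.
```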
For simplicity, we restrict to the case where $1 \le r \le 2$. For $rm > 1$, we assume that the average marginal densities of $(X_1^{(j)}, \ldots, X_n^{(j)})$ are uniformly bounded away from 0 and, if $q \neq 1$, also uniformly bounded from above for all $j = 1, \ldots, p$. The assumption of marginal densities bounded from above, as well as the restriction $1 \le r \le 2$, can be relaxed under slightly different technical conditions (see Propositions 3, 4, and 6). For $r = m = 1$, neither the lower bound nor the upper bound of the marginal densities needs to be assumed.
Proposition 1. Let $\mathcal{G}_0$ be a Sobolev space $W_r^m$ with $1 \le r \le 2$ and $m \ge 1$ or a bounded variation space $V^m$ with $r = 1$ and $m \ge 1$. Suppose that the noises are sub-gaussian and $\log(p) = o(n)$. Let $\tau_0 = 1/(2m + 1 - 2/r)$, $\Gamma_n = 1$ for $rm > 1$ and $\Gamma_n = \sqrt{\log n}$ for $r = m = 1$.

(i) Let $q = 1$ and $\lambda_n = \rho_n = A_0\{\log(p)/n\}^{1/2}$ for a sufficiently large constant $A_0$. If $p \to \infty$, then

$$\|\hat g - g^*\|_Q^2 = O_p(1)\, C_1^2 (M_F^2 + M_1^2) \Big\{ n^{-1/2}\Gamma_n + \sqrt{\log(p)/n} \Big\}. \qquad (3)$$

(ii) Let $q = 0$, $\lambda_n = A_0[\gamma_n^*(0) + \{\log(p)/n\}^{1/2}]$ and $\rho_n = \lambda_n w_n^*(0)$. Suppose that

$$w_n^*(0)^{-\tau_0} \sqrt{\log(np)/n}\, (1 + M_F + M_0) = o(1) \qquad (4)$$

and a population compatibility condition (Assumption 5) holds. Then,

$$\|\hat g - g^*\|_Q^2 = O_p(1)\, C_1^2 (M_F + M_0) \Big\{ n^{-\frac{1}{2+\beta_0}} + \sqrt{\log(p)/n} \Big\}^2. \qquad (5)$$

(iii) Let $0 < q < 1$, $\lambda_n = A_0[\gamma_n^*(q) + \{\log(p)/n\}^{1/2}]$ and $\rho_n = \lambda_n w_n^*(q)$. Suppose that

$$w_n^*(q)^{-\tau_0} (\log(np)/n)^{\frac{1-q}{2}} (1 + M_F + M_q) = O(1)$$

and a population compatibility condition (Assumption 7) holds. Then,

$$\|\hat g - g^*\|_Q^2 = O_p(1)\, C_1^2 (M_F + M_q) \Big\{ n^{-\frac{1}{2+\beta_0(1-q)}} + \sqrt{\log(p)/n} \Big\}^{2-q}. \qquad (6)$$
There are several important features achieved by the foregoing result, distinct from existing results. First, our results are established for additive regression with Sobolev spaces of general shape and bounded variation spaces. An important innovation in our proofs involves a delicate application of maximal inequalities based on the metric entropy of a particular choice of bounded subsets of $\mathcal{G}_0$ (see Lemma 1). All previous results seem to be limited to the $L_2$-Sobolev spaces or similar reproducing kernel Hilbert spaces, except for Petersen et al. (2016), who studied additive regression with the bounded variation space $V^1$ and obtained the rate $\{\log(np)/n\}^{1/2}$ for in-sample prediction under assumption (2) with $q = 1$. In contrast, our analysis in the case of $q = 1$ yields the sharper, yet standard, rate $\{\log(p)/n\}^{1/2}$ for in-sample prediction (see Proposition 3), whereas $\{\log(np)/n\}^{1/2}$ for out-of-sample prediction by (3).
Second, the restricted parameter set (2) represents an $L_1$ ball in the $\|\cdot\|_F$ semi-norm (inducing smoothness) but an $L_q$ ball in the $\|\cdot\|_Q$ norm (inducing sparsity) for the component functions $(g_1^*, \ldots, g_p^*)$. That is, the parameter set (2) decouples conditions for sparsity and smoothness in additive regression: it can encourage sparsity at different levels $0 \le q \le 1$ while enforcing smoothness only to a limited extent. Accordingly, our result leads to a spectrum of convergence rates, which are easily seen to slow down as $q$ increases from 0 to 1, corresponding to weaker sparsity assumptions. While most previous results are obtained under exact sparsity ($q = 0$), Yuan & Zhou (2016) studied additive regression with reproducing kernel Hilbert spaces under an $L_q$ ball in the Hilbert norm $\|\cdot\|_H$: $\sum_{j=1}^p \|g_j^*\|_H^q \le M_q$. This parameter set induces smoothness and sparsity simultaneously and is in general more restrictive than (2). As a result, the minimax rate of estimation obtained by Yuan & Zhou (2016), based on constrained least squares with known $M_q$ instead of penalized estimation, is faster than (6), in the form $n^{-2/(2+\beta_0)} + \{\log(p)/n\}^{(2-q)/2}$, unless $q = 0$ or 1.
Third, in the case of $q = 1$, our result (3) shows that the rate $\{\log(p)/n\}^{1/2}$, with an additional $\{\log(n)/n\}^{1/2}$ term for the bounded variation space $V^1$, can be achieved via penalized estimation without requiring a compatibility condition. This generalizes a slow-rate result for constrained least squares (instead of penalization) with known $(M_1, M_F)$ in additive regression with the Sobolev Hilbert space in Ravikumar et al. (2009). Both are related to earlier results for linear regression (Greenshtein & Ritov 2004; Bunea et al. 2007).
Finally, compared with previous results giving the same rate of convergence (5) under exact sparsity ($q = 0$) for Hilbert spaces, our results are stronger in requiring much weaker technical conditions. The penalized estimation procedures in Koltchinskii & Yuan (2010) and Raskutti et al. (2012), while minimizing a similar criterion as $K_n(g)$, involve additional constraints: Koltchinskii & Yuan (2010) assumed that the sup-norm of possible $g^*$ is bounded by a known constant, whereas Raskutti et al. (2012) assumed $\max_j \|g_j\|_H$ is bounded by a known constant. Moreover, Raskutti et al. (2012) assumed that the covariates $(X_i^{(1)}, \ldots, X_i^{(p)})$ are independent of each other. These restrictions were relaxed in Suzuki & Sugiyama (2013), but only explicitly under the assumption that the noises $\varepsilon_i$ are uniformly bounded by a constant. Moreover, our rate condition (4) on the sizes of $(M_0, M_F)$ is much weaker than in Suzuki & Sugiyama (2013), due to improved analysis of convergence of empirical norms and the more careful choices of $(\lambda_n, \rho_n)$. For example, if $(M_0, M_F)$ are bounded, then condition (4) holds whenever $\log(p)/n = o(1)$ for Sobolev Hilbert spaces, whereas the condition previously required amounts to $\log(p) n^{-1/2} = o(1)$. Finally, the seemingly faster rate in Suzuki & Sugiyama (2013) can be deduced from our results when $(\lambda_n, \rho_n)$ is allowed to depend on $(M_0, M_F)$. See Remarks 8 and 14–16 for relevant discussion.
The rest of the article is organized as follows. Section 2 gives a review of univariate functional classes and entropies. Section 3 presents general results for fixed designs (Section 3.1) and random designs (Section 3.2), and then provides specific results with Sobolev and bounded variation spaces (Section 3.4) after a study of convergence of empirical norms (Section 3.3). Section 4 concludes with a discussion. Due to space limitations, all proofs are collected in Section S1 and technical tools are stated in Section S2 of the Supplementary Material.
2  Functional classes and entropies
As a building block of additive regression, we discuss two broad choices for the function space $\mathcal{G}_j$ and the associated semi-norm $\|g_j\|_{F,j}$ in the context of univariate regression. For concreteness, we consider a fixed function space, say $\mathcal{G}_1$, although our discussion is applicable to $\mathcal{G}_j$ for $j = 1, \ldots, p$. For $r \ge 1$, the $L_r$ norm of a function $f$ on $[0,1]$ is defined as $\|f\|_{L_r} = \{\int_0^1 |f(z)|^r\, \mathrm{d}z\}^{1/r}$.
Example 1 (Sobolev spaces). For $r \ge 1$ and $m \ge 1$, let $W_r^m = W_r^m([0,1])$ be the Sobolev space of all functions $g_1 : [0,1] \to \mathbb{R}$ such that $g_1^{(m-1)}$ is absolutely continuous and the norm $\|g_1\|_{W_r^m} = \|g_1\|_{L_r} + \|g_1^{(m)}\|_{L_r}$ is finite, where $g_1^{(m)}$ denotes the $m$th (weak) derivative of $g_1$. To describe the smoothness, a semi-norm $\|g_1\|_{F,1} = \|g_1^{(m)}\|_{L_r}$ is often used for $g_1 \in W_r^m$.
In the statistical literature, a major example of Sobolev spaces is $W_2^m = \{g_1 : \|g_1\|_{L_2} + \|g_1^{(m)}\|_{L_2} < \infty\}$, which is a reproducing kernel Hilbert space (e.g., Gu 2002). Consider a univariate regression model

$$Y_i = g_1(X_i^{(1)}) + \varepsilon_i, \qquad i = 1, \ldots, n. \qquad (7)$$

The Sobolev space $W_2^m$ is known to lead to polynomial smoothing splines through penalized estimation: there exists a unique solution, in the form of a spline of order $(2m-1)$, when minimizing over $g_1 \in W_2^m$ the following criterion

$$\frac{1}{2n} \sum_{i=1}^n \big\{ Y_i - g_1(X_i^{(1)}) \big\}^2 + \rho_{n1} \|g_1\|_{F,1}. \qquad (8)$$
This solution can be made equivalent to the standard derivation of smoothing splines, where the penalty in (8) is $\rho'_{n1} \|g_1\|_{F,1}^2$ for a different tuning parameter $\rho'_{n1}$. In particular, cubic smoothing splines are obtained with the choice $m = 2$.
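In a discretized setting the squared-penalty variant of (8) has a closed form. The sketch below (an illustration under evenly spaced design points, not the paper's estimator) replaces the integrated squared second derivative by a ridge penalty on second differences of the fitted values, a standard discrete analog of a cubic smoothing spline ($m = 2$).

```python
import numpy as np

def discrete_smoother(y, rho):
    """Minimize (1/2n) sum_i (y_i - theta_i)^2 + rho * ||D2 theta||_2^2,
    where D2 is the second-difference matrix; the first-order condition
    gives the linear system (I + 2 n rho D2' D2) theta = y."""
    n = len(y)
    D2 = np.diff(np.eye(n), n=2, axis=0)  # (n-2) x n matrix with rows (1, -2, 1)
    return np.linalg.solve(np.eye(n) + 2.0 * n * rho * D2.T @ D2, y)

# Linear trends have zero second differences, so they pass through unchanged:
y = np.arange(5.0)
theta = discrete_smoother(y, rho=1.0)
```

The heavier the penalty `rho`, the closer the fit is pushed toward an affine function, mirroring the null space of the derivative penalty in the continuous problem.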
Example 2 (Bounded variation spaces). For a function $f$ on $[0,1]$, the total variation (TV) of $f$ is defined as

$$\mathrm{TV}(f) = \sup\Big\{ \sum_{i=1}^k |f(z_i) - f(z_{i-1})| : z_0 < z_1 < \cdots < z_k \text{ is any partition of } [0,1] \Big\}.$$

If $f$ is differentiable, then $\mathrm{TV}(f) = \int_0^1 |f^{(1)}(z)|\, \mathrm{d}z$. For $m \ge 1$, let $V^m = V^m([0,1])$ be the bounded variation space that consists of all functions $g_1 : [0,1] \to \mathbb{R}$ such that $g_1^{(m-2)}$, if $m \ge 2$, is absolutely continuous and the norm $\|g_1\|_{V^m} = \|g_1\|_{L_1} + \mathrm{TV}(g_1^{(m-1)})$ is finite. For $g_1 \in V^m$, the semi-norm $\|g_1\|_{F,1} = \mathrm{TV}(g_1^{(m-1)})$ is often used to describe smoothness. The bounded variation space $V^m$ includes as a strict subset the Sobolev space $W_1^m$, where the semi-norms also agree: $\mathrm{TV}(g_1^{(m-1)}) = \|g_1^{(m)}\|_{L_1}$ for $g_1 \in W_1^m$.
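The supremum in the TV definition is approached under refinement of the partition, and for a monotone function any partition already attains $|f(1) - f(0)|$ by telescoping. A small illustrative sketch:

```python
import numpy as np

def total_variation(f, grid):
    """Partition sum sum_i |f(z_i) - f(z_{i-1})| over the given grid;
    a lower bound on TV(f) that converges to TV(f) under refinement."""
    vals = f(np.asarray(grid))
    return float(np.sum(np.abs(np.diff(vals))))

grid = np.linspace(0.0, 1.0, 1001)
tv_monotone = total_variation(lambda z: z ** 2, grid)         # = f(1) - f(0) = 1
tv_vshape = total_variation(lambda z: np.abs(z - 0.5), grid)  # = 0.5 + 0.5 = 1
```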
For univariate regression (7) with bounded variation spaces, TV semi-norms can be used as penalties in (8) for penalized estimation. This leads to a class of TV splines, which are shown to adapt well to spatially inhomogeneous smoothness (Mammen & van de Geer 1997). For $m = 1$ or 2, a minimizer of (8) over $g_1 \in V^m$ can always be chosen as a spline of order $m$, with the knots in the set of design points $\{X_i^{(1)} : i = 1, \ldots, n\}$. But, as a complication, this is in general not true for $m \ge 3$.

Recently, another smoothing method related to TV splines has been proposed, called trend filtering (Kim et al. 2009), where (8) is minimized over all possible values $\{g_1(X_i^{(1)}) : i = 1, \ldots, n\}$ with $\|g_1\|_{F,1}$ replaced by the $L_1$ norm of $m$th-order differences of these values. This method is equivalent to TV splines only for $m = 1$ or 2. But when the design points are evenly spaced, it achieves the minimax rate of convergence over functions of bounded variation for general $m \ge 1$, similarly to TV splines (Tibshirani 2014).
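The trend-filtering penalty itself is simple to state for fitted values at evenly spaced points. The sketch below computes only the penalty; the full minimization of (8) with this penalty requires a convex solver, which is omitted here.

```python
import numpy as np

def trend_filter_penalty(theta, m):
    """L1 norm of the mth-order differences of fitted values theta;
    for m = 1 this coincides with the discrete total variation of theta."""
    return float(np.sum(np.abs(np.diff(theta, n=m))))

theta = np.array([0.0, 1.0, 1.0, 0.0])
p1 = trend_filter_penalty(theta, 1)  # |1| + |0| + |-1| = 2
p2 = trend_filter_penalty(theta, 2)  # first diffs (1, 0, -1) -> (-1, -1), sum = 2
```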
The complexity of a functional class can be described by its metric entropy, which plays an important role in the study of empirical processes (van der Vaart & Wellner 1996). For a subset $\mathcal{F}$ in a metric space endowed with norm $\|\cdot\|$, the covering number $N(\delta, \mathcal{F}, \|\cdot\|)$ is defined as the smallest number of balls of radius $\delta$ in the $\|\cdot\|$-metric needed to cover $\mathcal{F}$, i.e., the smallest value of $N$ such that there exist $f_1, \ldots, f_N \in \mathcal{F}$ satisfying $\min_{j=1,\ldots,N} \|f - f_j\| \le \delta$ for any $f \in \mathcal{F}$. The entropy of $(\mathcal{F}, \|\cdot\|)$ is defined as $H(\delta, \mathcal{F}, \|\cdot\|) = \log N(\delta, \mathcal{F}, \|\cdot\|)$.

For the analysis of regression models, our approach involves using entropies of functional classes for empirical norms based on design points, for example, $\{X_i^{(1)} : i = 1, \ldots, n\}$ for subsets of $\mathcal{G}_1$. One type of such norms is the empirical $L_2$ norm, $\|g_1\|_n = \{n^{-1}\sum_{i=1}^n g_1^2(X_i^{(1)})\}^{1/2}$. Another is the empirical supremum norm, $\|g_1\|_{n,\infty} = \max_{i=1,\ldots,n} |g_1(X_i^{(1)})|$. If $\mathcal{F}$ is the unit ball in the Sobolev space $W_r^m$ or the bounded variation space $V^m$ on $[0,1]$, the general picture is $H(\delta, \mathcal{F}, \|\cdot\|) \lesssim \delta^{-1/m}$ for commonly used norms. See Section S2.5 for more.
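For a finite family of functions represented by their values at the design points, a $\delta$-cover under the empirical $L_2$ norm can be constructed greedily: the retained centers form a valid $\delta$-cover, so their count upper-bounds $N(\delta, \mathcal{F}, \|\cdot\|_n)$ and its logarithm upper-bounds the entropy. A brute-force illustrative sketch:

```python
import numpy as np

def empirical_l2(u, v):
    # ||u - v||_n = { n^{-1} sum_i (u_i - v_i)^2 }^{1/2}
    return float(np.sqrt(np.mean((u - v) ** 2)))

def greedy_cover_size(F, delta):
    """Greedy delta-cover of a finite family F (values at design points).
    Every f in F ends up within delta of a retained center, so the count
    is an upper bound on the covering number N(delta, F, ||.||_n)."""
    centers = []
    for f in F:
        if all(empirical_l2(f, c) > delta for c in centers):
            centers.append(f)
    return len(centers)

F = [np.array([0.0, 0.0]), np.array([0.0, 0.1]), np.array([10.0, 10.0])]
```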
3  Main results
As in Section 1, consider the estimator

$$\hat g = \operatorname*{argmin}_{g \in \mathcal{G}} K_n(g), \qquad K_n(g) = \|Y - g\|_n^2/2 + A_0 R_n(g), \qquad (9)$$

where $A_0 > 1$ is a constant, $\mathcal{G} = \{g = \sum_{j=1}^p g_j : g_j \in \mathcal{G}_j\}$ and the penalty is of the form

$$R_n(g) = \sum_{j=1}^p R_{nj}(g_j) = \sum_{j=1}^p \rho_{nj} \|g_j\|_{F,j} + \lambda_{nj} \|g_j\|_n$$

for any decomposition $g = \sum_{j=1}^p g_j$ with $g_j \in \mathcal{G}_j$, with certain functional penalties $\|f_j\|_{F,j}$ and the empirical $L_2$ penalty $\|f_j\|_n$. Here the regularization parameters $(\lambda_{nj}, \rho_{nj})$ are of the form

$$\rho_{nj} = \lambda_{nj} w_{nj}, \qquad \lambda_{nj} = C_1\Big\{ \gamma_{nj} + \sqrt{\log(p/\epsilon)/n} \Big\},$$

where $C_1 > 0$ is a noise level depending only on parameters in Assumption 1 below, $0 < \epsilon < 1$ is a tail probability for the validity of error bounds, $0 < w_{nj} \le 1$ is a rate parameter, and

$$\gamma_{nj} = n^{-1/2} \psi_{nj}(w_{nj})/w_{nj} \qquad (10)$$

for a function $\psi_{nj}(\cdot)$ depending on the entropy of the unit ball of the space $\mathcal{G}_j$ under the associated functional penalty. See Assumption 2 or 4 below.
Before the theoretical analysis, we briefly comment on the computation of $\hat g$. By standard properties of norms and semi-norms, the objective function $K_n(g)$ is convex in $g$. Moreover, there are at least two situations where the infinite-dimensional problem of minimizing $K_n(g)$ can be reduced to a finite-dimensional one. First, if each class $\mathcal{G}_j$ is a reproducing kernel Hilbert space such as $W_2^m$, then a solution $\hat g = \sum_{j=1}^p \hat g_j$ can be obtained such that each $\hat g_j$ is a smoothing spline with knots in the design points $\{X_i^{(j)} : i = 1, \ldots, n\}$ (e.g., Meier et al. 2009). Second, by the following proposition, the optimization problem can also be reduced to a finite-dimensional one when each class $\mathcal{G}_j$ is the bounded variation space $V^1$ or $V^2$. As a result, the algorithm in Petersen et al. (2016) can be directly used to find $\hat g$ when all classes $(\mathcal{G}_1, \ldots, \mathcal{G}_p)$ are $V^1$.
Proposition 2. Suppose that the functional class $\mathcal{G}_j$ is $V^m$ for some $1 \le j \le p$ and $m = 1$ or 2. Then a solution $\hat g = \sum_{j=1}^p \hat g_j$ can be chosen such that $\hat g_j$ is piecewise constant with jump points only in $\{X_i^{(j)} : i = 1, \ldots, n\}$ if $m = 1$, or $\hat g_j$ is continuous and piecewise linear with break points only in $\{X_i^{(j)} : i = 1, \ldots, n\}$ if $m = 2$.
By Example 2, it can be challenging to compute ĝ when some classes Gj are V m with m ≥ 3.
However, this issue may be tackled using trend filtering (Kim et al. 2009) as an approximation.
3.1  Fixed designs
For fixed designs, the covariates $(X_1, \ldots, X_n)$ are fixed as observed, whereas $(\varepsilon_1, \ldots, \varepsilon_n)$ and hence $(Y_1, \ldots, Y_n)$ are independent random variables. The responses are to be predicted when new observations are drawn with covariates from the sample $(X_1, \ldots, X_n)$. The predictive performance of $\hat g$ is measured by $\|\hat g - g^*\|_n^2$.

Consider the following three assumptions. First, we assume sub-gaussian tails for the noises. This condition can be relaxed, but with increasing technical complexity and possible modification of the estimators, which we will not pursue here.

Assumption 1 (Sub-gaussian noises). Assume that the noises $(\varepsilon_1, \ldots, \varepsilon_n)$ are mutually independent and uniformly sub-gaussian: for some constants $D_0 > 0$ and $D_1 > 0$,

$$\max_{i=1,\ldots,n} D_0\, E \exp(\varepsilon_i^2/D_0) \le D_1.$$

We will also impose this assumption for random designs, with the interpretation that the above probability and expectation are taken conditionally on $(X_1, \ldots, X_n)$.
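For example, standard Gaussian noise satisfies this condition: for $\varepsilon \sim N(0,1)$ and $D_0 > 2$, $E\exp(\varepsilon^2/D_0) = (1 - 2/D_0)^{-1/2}$ is finite. A quick Monte Carlo check (illustrative; the seed and sample size are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(0)
eps = rng.standard_normal(200_000)

D0 = 8.0
mc = float(np.mean(np.exp(eps ** 2 / D0)))
exact = (1.0 - 2.0 / D0) ** -0.5   # = 2/sqrt(3) for D0 = 8

# The empirical moment matches the closed form, so D1 can be taken as
# any constant >= D0 * exact.
assert abs(mc - exact) < 0.02
```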
Second, we impose an entropy condition which describes the relationship between the function $\psi_{nj}(\cdot)$ in the definition of $\gamma_{nj}$ and the complexity of bounded subsets in $\mathcal{G}_j$. Although entropy conditions are widely used to analyze nonparametric regression (e.g., Section 10.1, van de Geer 2000), the subset $\mathcal{G}_j(\delta)$ in our entropy condition below is carefully aligned with the penalty $R_{nj}(g_j) = \lambda_{nj}(w_{nj}\|g_j\|_{F,j} + \|g_j\|_n)$. This leads to a delicate use of maximal inequalities so as to relax, and in some cases remove, some restrictions in previous studies of additive models. See Lemma 1 in the Supplement and Raskutti et al. (2012, Lemma 1).
Assumption 2 (Entropy condition for fixed designs). For $j = 1, \ldots, p$, let $\mathcal{G}_j(\delta) = \{f_j \in \mathcal{G}_j : \|f_j\|_{F,j} + \|f_j\|_n/\delta \le 1\}$ and let $\psi_{nj}(\delta)$ be an upper bound of the entropy integral as follows:

$$\psi_{nj}(\delta) \ge \int_0^\delta H^{1/2}(u, \mathcal{G}_j(\delta), \|\cdot\|_n)\, \mathrm{d}u, \qquad 0 < \delta \le 1. \qquad (11)$$

In general, $\mathcal{G}_j(\delta)$ and the entropy $H(\cdot, \mathcal{G}_j(\delta), \|\cdot\|_n)$ may depend on the design points $\{X_i^{(j)}\}$.
The third assumption is a compatibility condition, which resembles the restricted eigenvalue
condition used in high-dimensional analysis of Lasso in linear regression (Bickel et al. 2009).
Similar compatibility conditions were used by Meier et al. (2009) and Koltchinskii & Yuan
(2010) in their analysis of penalized estimation in high-dimensional additive regression.
Assumption 3 (Empirical compatibility condition). For a certain subset $S \subset \{1, 2, \ldots, p\}$ and constants $\kappa_0 > 0$ and $\xi_0 > 1$, assume that

$$\Big( \sum_{j \in S} \lambda_{nj} \|f_j\|_n \Big)^2 \le \kappa_0^2 \Big( \sum_{j \in S} \lambda_{nj}^2 \Big) \|f\|_n^2$$

for any functions $\{f_j \in \mathcal{G}_j : j = 1, \ldots, p\}$ and $f = \sum_{j=1}^p f_j \in \mathcal{G}$ satisfying

$$\sum_{j=1}^p \lambda_{nj} w_{nj} \|f_j\|_{F,j} + \sum_{j \in S^c} \lambda_{nj} \|f_j\|_n \le \xi_0 \sum_{j \in S} \lambda_{nj} \|f_j\|_n.$$
Remark 1. The subset S can be different from {1 ≤ j ≤ p : gj∗ ≠ 0}. In fact, S is arbitrary in the sense that a larger S leads to a smaller compatibility coefficient κ0, which appears as a factor in the denominator of the “noise” term in the prediction error bound below, whereas a smaller S leads to a larger “bias” term. Assumption 3 is automatically satisfied for the choice S = ∅. In this case, it is possible to take ξ0 = ∞ and any κ0 > 0, provided that we treat summation over an empty set as 0 and ∞ × 0 as 0.
Our main result for fixed designs is an oracle inequality stated in Theorem 1 below, where ḡ = Σ_{j=1}^p ḡj ∈ G as an estimation target is an additive function but the true regression function g∗ may not be additive. Denote as a penalized prediction loss
Dn(ĝ, ḡ) = (1/2)‖ĝ − g∗‖n² + (1/2)‖ĝ − ḡ‖n² + (A0 − 1) Rn(ĝ − ḡ).
For a subset S ⊂ {1, 2, . . . , p}, write as a bias term for the target ḡ
Δn(ḡ, S) = (1/2)‖ḡ − g∗‖n² + 2A0 ( Σ_{j=1}^p ρnj ‖ḡj‖F,j + Σ_{j∈S^c} λnj ‖ḡj‖n ).
The bias term is small when ḡ is smooth and sparse and predicts g∗ well.
Theorem 1. Suppose that Assumptions 1, 2, and 3 hold. Then for any A0 > (ξ0 + 1)/(ξ0 − 1),
we have with probability at least 1 − ǫ,
Dn(ĝ, ḡ) ≤ ξ1^{−1} Δn(ḡ, S) + 2ξ2² κ0^{−2} Σ_{j∈S} λnj²,  (12)
where ξ1 = 1 − 2A0/{(ξ0 + 1)(A0 − 1)} ∈ (0, 1] and ξ2 = (ξ0 + 1)(A0 − 1).
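The constraint A0 > (ξ0 + 1)/(ξ0 − 1) is exactly what makes ξ1 positive. As an illustrative sanity check (ours, not part of the paper's development), a few lines of code confirm that ξ1 ∈ (0, 1] and ξ2 > 0 whenever A0 exceeds this threshold:

```python
# Illustrative sanity check (not from the paper): for A0 above the threshold
# (xi0 + 1)/(xi0 - 1), the constants of Theorem 1 satisfy xi1 in (0, 1] and xi2 > 0.
def theorem1_constants(A0, xi0):
    xi1 = 1.0 - 2.0 * A0 / ((xi0 + 1.0) * (A0 - 1.0))
    xi2 = (xi0 + 1.0) * (A0 - 1.0)
    return xi1, xi2

checks = []
for xi0 in (1.5, 2.0, 5.0, 10.0):
    threshold = (xi0 + 1.0) / (xi0 - 1.0)
    for margin in (1e-3, 0.5, 2.0, 10.0):
        xi1, xi2 = theorem1_constants(threshold + margin, xi0)
        checks.append(0.0 < xi1 <= 1.0 and xi2 > 0.0)

all_ok = all(checks)
```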
Remark 2. As seen from our proofs, Theorem 1 and subsequent corollaries are directly applicable to functional ANOVA modeling, where each function gj may depend on Xi^{(j)}, a block of covariates, and the variable blocks are allowed to overlap across different j. The entropy associated with the functional class Gj needs to be determined accordingly.
Remark 3. Using ideas from Bellec and Tsybakov (2016), it is possible to refine the oracle inequality for ĝ, such that the scaling parameter ǫ is fixed, for example, ǫ = 1/2 in the definition of ĝ in (9), but at any level 0 < ǫ̃ < 1, (12) holds with probability 1 − ǫ̃ when an additional term of the form log(1/ǫ̃)/n is included on the right-hand side.
Taking S = ∅ and ξ0 = ∞ leads to the following corollary, which explicitly does not require
the compatibility condition (Assumption 3).
Corollary 1. Suppose that Assumptions 1 and 2 hold. Then for any A0 > 1, we have with probability at least 1 − ǫ,
Dn(ĝ, ḡ) ≤ Δn(ḡ, ∅) = (1/2)‖ḡ − g∗‖n² + 2A0 Rn(ḡ).  (13)
The following result can be derived from Theorem 1 through the choice S = {1 ≤ j ≤ p : ‖ḡj‖n > C0 λnj} for some constant C0 > 0.
Corollary 2. Suppose that Assumptions 1, 2, and 3 hold with S = {1 ≤ j ≤ p : ‖ḡj‖n > C0 λnj} for some constant C0 > 0. Then for any 0 ≤ q ≤ 1 and A0 > (ξ0 + 1)/(ξ0 − 1), we have with probability at least 1 − ǫ,
Dn(ĝ, ḡ) ≤ O(1) { ‖ḡ − g∗‖n² + Σ_{j=1}^p ( ρnj ‖ḡj‖F,j + λnj^{2−q} ‖ḡj‖n^q ) },
where O(1) depends only on (q, A0, C0, ξ0, κ0).
It is instructive to examine the implications of Corollary 2 in a homogeneous situation where, for some constants B0 > 0 and 0 < β0 < 2,
max_{j=1,...,p} ∫_0^δ H^{1/2}(u, Gj(δ), ‖·‖n) du ≤ B0 δ^{1−β0/2},  0 < δ ≤ 1.  (14)
That is, we assume ψnj(δ) = B0 δ^{1−β0/2} in (11). For j = 1, . . . , p, let
wnj = wn(q) = {γn(q)}^{1−q},  γnj = γn(q) = B0^{2/{2+β0(1−q)}} n^{−1/{2+β0(1−q)}},  (15)
which are determined by balancing the two rates, ρnj = λnj^{2−q}, that is, wnj = λnj^{1−q}, along with the definition γnj = B0 n^{−1/2} wnj^{−β0/2} by (10). For g = Σ_{j=1}^p gj ∈ G, denote ‖g‖F,1 = Σ_{j=1}^p ‖gj‖F,j and ‖g‖n,q = Σ_{j=1}^p ‖gj‖n^q. For simplicity, we also assume that g∗ is an additive function and set ḡ = g∗ for Corollary 3.
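The closed form of γn(q) in (15) can be checked against the balancing relation it is derived from. The following snippet (an illustrative check under the stated definitions, not part of the paper) verifies numerically that γ = B0 n^{−1/2} w^{−β0/2} with w = γ^{1−q} is solved by the displayed formula:

```python
# Illustrative check (not from the paper): the closed form for gamma_n(q) in (15)
# solves the balancing relation gamma = B0 n^{-1/2} w^{-beta0/2} with w = gamma^{1-q}.
def gamma_n(q, B0, beta0, n):
    e = 2.0 + beta0 * (1.0 - q)
    return B0 ** (2.0 / e) * n ** (-1.0 / e)

residuals = []
for q in (0.0, 0.3, 0.7, 1.0):
    for beta0 in (0.5, 1.0, 1.5):
        B0, n = 2.0, 10_000
        g = gamma_n(q, B0, beta0, n)
        w = g ** (1.0 - q)                     # w_n(q) = gamma_n(q)^{1-q}
        residuals.append(abs(g - B0 * n ** -0.5 * w ** (-beta0 / 2.0)))

max_residual = max(residuals)
```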
Corollary 3. Assume that (1) holds and ‖g∗‖F,1 ≤ C1 MF and ‖g∗‖n,q ≤ C1^q Mq for 0 ≤ q ≤ 1, Mq > 0, and MF > 0, possibly depending on (n, p). In addition, suppose that (14) and (15) hold, and Assumptions 1 and 3 are satisfied with S = {1 ≤ j ≤ p : ‖gj∗‖n > C0 λnj} for some constant C0 > 0. If 0 < wn(q) ≤ 1 for sufficiently large n, then for any A0 > (ξ0 + 1)/(ξ0 − 1), we have with probability at least 1 − ǫ,
Dn(ĝ, g∗) = ‖ĝ − g∗‖n² + (A0 − 1) Rn(ĝ − g∗) ≤ O(1) C1² (MF + Mq) { γn(q) + (log(p/ǫ)/n)^{1/2} }^{2−q},  (16)
where O(1) depends only on (q, A0, C0, ξ0, κ0).
Remark 4. There are several interesting features in the convergence rate (16). First, (16) presents a spectrum of convergence rates in the form
{ n^{−1/{2+β0(1−q)}} + (log(p)/n)^{1/2} }^{2−q},
which are easily shown to become slower as q increases from 0 to 1; that is, the exponent (2 − q)/{2 + β0(1 − q)} is decreasing in q for 0 < β0 < 2. The rate (16) gives the slow rate {log(p)/n}^{1/2} for q = 1, or the fast rate n^{−2/(2+β0)} + log(p)/n for q = 0, as previously obtained for additive regression with reproducing kernel Hilbert spaces. We defer to Section 3.4 the comparison with existing results in random designs. Second, the rate (16) is in general at least as fast as
{ n^{−1/(2+β0)} + (log(p)/n)^{1/2} }^{2−q}.
Therefore, weaker sparsity (larger q) leads to a slower rate of convergence, but not as slow as the fast rate {n^{−2/(2+β0)} + log(p)/n} raised to the power of (2 − q)/2. This is in contrast with previous results on penalized estimation over Lq sparsity balls, for example, the rate {k/n + log(p)/n}^{(2−q)/2} obtained for group Lasso estimation in linear regression (Negahban et al. 2012), where k is the group size. Third, the rate (16) is in general not as fast as the following rate (unless q = 0 or 1),
n^{−2/(2+β0)} + {log(p)/n}^{(2−q)/2},
which was obtained by Yuan & Zhou (2016) using constrained least squares for additive regression with reproducing kernel Hilbert spaces under an Lq ball in the Hilbert norm: Σ_{j=1}^p ‖gj∗‖H^q ≤ Mq. This difference can be explained by the fact that an Lq ball in ‖·‖H norm is more restrictive than one in ‖·‖n or ‖·‖Q norm as used for our results.
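The monotonicity claim in Remark 4 can be confirmed numerically; the following illustrative snippet (ours, not part of the paper) checks that the exponent (2 − q)/{2 + β0(1 − q)} decreases over a grid of q for several values of β0:

```python
# Illustrative check of Remark 4 (ours, not from the paper): the exponent
# (2 - q)/{2 + beta0 (1 - q)} is decreasing in q on [0, 1] for 0 < beta0 < 2.
def rate_exponent(q, beta0):
    return (2.0 - q) / (2.0 + beta0 * (1.0 - q))

monotone = True
for beta0 in (0.25, 1.0, 1.99):
    grid = [i / 1000.0 for i in range(1001)]
    vals = [rate_exponent(q, beta0) for q in grid]
    monotone = monotone and all(a >= b for a, b in zip(vals, vals[1:]))
```

At the endpoints, the exponent is 2/(2 + β0) for q = 0 and 1/2 for q = 1, matching the fast and slow rates quoted above.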
3.2 Random designs

For random designs, prediction of the responses can be sought when new observations are randomly drawn with covariates from the distributions of (X1, . . . , Xn), instead of within the sample (X1, . . . , Xn) as in Section 3.1. For such out-of-sample prediction, the performance of ĝ is measured by ‖ĝ − g∗‖Q², where ‖·‖Q denotes the theoretical norm: ‖f‖Q² = n^{−1} Σ_{i=1}^n E{f²(Xi)} for a function f(x).
Consider the following two extensions of Assumptions 2 and 3, in which the dependency on the empirical norm ‖·‖n, and hence on (X1, . . . , Xn), is removed.
Assumption 4 (Entropy condition for random designs). For some constant 0 < η0 < 1 and j = 1, . . . , p, let ψnj(δ) be an upper bound of the entropy integral, independent of the realizations {Xi^{(j)} : i = 1, . . . , n}, as follows:
ψnj(δ) ≥ ∫_0^δ H^{∗1/2}((1 − η0)u, Gj∗(δ), ‖·‖n) du,  0 < δ ≤ 1,  (17)
where Gj∗(δ) = {fj ∈ Gj : ‖fj‖F,j + ‖fj‖Q/δ ≤ 1} and
H^∗(u, Gj∗(δ), ‖·‖n) = sup_{(X1^{(j)},...,Xn^{(j)})} H(u, Gj∗(δ), ‖·‖n).
Assumption 5 (Theoretical compatibility condition). For some subset S ⊂ {1, 2, . . . , p} and constants κ0∗ > 0 and ξ0∗ > 1, assume that for any functions {fj ∈ Gj : j = 1, . . . , p} and f = Σ_{j=1}^p fj ∈ G, if
Σ_{j=1}^p λnj wnj ‖fj‖F,j + Σ_{j∈S^c} λnj ‖fj‖Q ≤ ξ0∗ Σ_{j∈S} λnj ‖fj‖Q,  (18)
then
κ0∗² ( Σ_{j∈S} λnj ‖fj‖Q )² ≤ ( Σ_{j∈S} λnj² ) ‖f‖Q².  (19)
Remark 5. As in Remark 1 about the empirical compatibility condition, Assumption 5 is also automatically satisfied for the choice S = ∅, in which case it is possible to take ξ0∗ = ∞ and any κ0∗ > 0.
To tackle random designs, our approach relies on establishing appropriate convergence of the empirical norm ‖·‖n to ‖·‖Q uniformly over the space of additive functions G, similarly as in Meier et al. (2009) and Koltchinskii & Yuan (2010). For clarity, we postulate the following assumption on the rate of such convergence to develop a general analysis of ĝ. We will study convergence of empirical norms specifically for Sobolev and bounded variation spaces in Section 3.3, and then provide corresponding results on the performance of ĝ in Section 3.4. For g = Σ_{j=1}^p gj ∈ G, denote
Rn∗(g) = Σ_{j=1}^p Rnj∗(gj),  Rnj∗(gj) = λnj (wnj ‖gj‖F,j + ‖gj‖Q),
as the population version of the penalty Rn(g), with ‖gj‖Q in place of ‖gj‖n.
Assumption 6 (Convergence of empirical norms). Assume that
P( sup_{g∈G} |‖g‖n² − ‖g‖Q²| / Rn∗²(g) > φn ) ≤ π,  (20)
where 0 < π < 1 and φn > 0 such that for sufficiently large n, one or both of the following conditions are valid.
(i) φn (max_{j=1,...,p} λnj²) ≤ η0², where η0 is from Assumption 4.
(ii) For some constant 0 ≤ η1 < 1, we have
φn (ξ0∗ + 1)² κ0∗^{−2} Σ_{j∈S} λnj² ≤ η1²,  (21)
where S is the subset of {1, 2, . . . , p} used in Assumption 5.
Our main result, Theorem 2, gives an oracle inequality for random designs, where the predictive performance of ĝ is compared with that of an arbitrary additive function ḡ = Σ_{j=1}^p ḡj ∈ G, but the true regression function g∗ may not be additive, similarly as in Theorem 1 for fixed designs. For a subset S ⊂ {1, 2, . . . , p}, denote
Δn∗(ḡ, S) = (1/2)‖ḡ − g∗‖n² + 2A0(1 − η0) ( Σ_{j=1}^p ρnj ‖ḡj‖F,j + Σ_{j∈S^c} λnj ‖ḡj‖Q ),
which, unlike Δn(ḡ, S), involves ‖ḡj‖Q and η0 from Assumptions 4 and 6(i).
Theorem 2. Suppose that Assumptions 1, 4, 5 and 6(i)–(ii) hold with 0 < η0 < (ξ0∗ − 1)/(ξ0∗ +
1). Let A(ξ0∗ , η0 ) = {ξ0∗ + 1 + η0 (ξ0∗ + 1)}/{ξ0∗ − 1 − η0 (ξ0∗ + 1)} > (1 + η0 )/(1 − η0 ). Then for
any A0 > A(ξ0∗ , η0 ), we have with probability at least 1 − ǫ − π,
(1/2)‖ĝ − g∗‖n² + (1/2)‖ĝ − ḡ‖n² + (1 − η1) A1 Rn∗(ĝ − ḡ) ≤ ξ1∗^{−1} Δn∗(ḡ, S) + 2ξ2∗² κ0∗^{−2} Σ_{j∈S} λnj²,  (22)
where A1 = (A0 − 1) − η0(A0 + 1) > 0, ξ1∗ = 1 − 2A0/{(ξ0∗ + 1)A1} ∈ (0, 1] and ξ2∗ = (ξ0∗ + 1)A1. Moreover, we have with probability at least 1 − ǫ − π,
Dn∗(ĝ, ḡ) := (1/2)‖ĝ − g∗‖n² + (1/2)‖ĝ − ḡ‖Q² + A2 Rn∗(ĝ − ḡ) ≤ ξ3∗^{−1} Δn∗(ḡ, S) + 2ξ4∗² κ0∗^{−2} Σ_{j∈S} λnj² + {φn/(2A1A2)} ξ3∗^{−2} Δn∗²(ḡ, S),  (23)
where A2 = A1/(1 − η1²), ξ3∗ = ξ1∗(1 − η1²), and ξ4∗ = ξ2∗/(1 − η1²).
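As an illustrative check (ours, not part of the paper's formal development), one can verify numerically that A(ξ0∗, η0) indeed exceeds (1 + η0)/(1 − η0), so that any admissible A0 > A(ξ0∗, η0) makes A1 = (A0 − 1) − η0(A0 + 1) positive:

```python
# Illustrative check (ours): A(xi, eta) = {xi + 1 + eta(xi + 1)}/{xi - 1 - eta(xi + 1)}
# exceeds (1 + eta)/(1 - eta), so any A0 > A(xi, eta) gives A1 = (A0 - 1) - eta(A0 + 1) > 0.
def A_threshold(xi, eta):
    return (xi + 1.0 + eta * (xi + 1.0)) / (xi - 1.0 - eta * (xi + 1.0))

ok = True
for xi in (1.5, 3.0, 10.0):
    eta_max = (xi - 1.0) / (xi + 1.0)          # admissible range: 0 < eta < eta_max
    for eta in (0.0, 0.5 * eta_max, 0.9 * eta_max):
        A = A_threshold(xi, eta)
        ok = ok and A > (1.0 + eta) / (1.0 - eta)
        A0 = A + 0.1
        ok = ok and (A0 - 1.0) - eta * (A0 + 1.0) > 0.0
```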
Remark 6. Similarly as in Remark 2, we emphasize that Theorem 2 and subsequent corollaries are also applicable to functional ANOVA modeling (e.g., Gu 2002). For example, consider model (1) studied in Yang & Tokdar (2015), where each gj∗ is assumed to depend only on d0 of a total of d covariates and lie in a Hölder space with smoothness level α0. Then p = (d choose d0), and the entropy condition (27) holds with β0 = d0/α0. Under certain additional conditions, Corollary 6 with q = 0 shows that penalized estimation studied here achieves a convergence rate M0 n^{−2/(2+β0)} + M0 log(p)/n under exact sparsity of size M0, where n^{−2/(2+β0)} is the rate for estimation of a single regression function in the Hölder class in dimension d0 with smoothness β0^{−1}, and log(p)/n ≍ d0 log(d/d0)/n is the term associated with handling p regressors. This result agrees with the minimax rate derived in Yang & Tokdar (2015), but can be applied when more general functional classes are used, such as multi-dimensional Sobolev spaces. In addition, Yang & Tokdar (2015) considered adaptive Bayes estimators which are nearly minimax with some extra logarithmic factor in n.
Taking S = ∅, ξ0∗ = ∞, and η1 = 0 leads to the following corollary, which explicitly
does not require the theoretical compatibility condition (Assumption 5) or the rate condition,
Assumption 6(ii), for convergence of empirical norms.
Corollary 4. Suppose that Assumptions 1, 4, and 6(i) hold. Then for any A0 > (1 + η0)/(1 − η0), we have with probability at least 1 − ǫ − π,
(1/2)‖ĝ − g∗‖n² + (1/2)‖ĝ − ḡ‖n² + A1 Rn∗(ĝ − ḡ) ≤ Δn∗(ḡ, ∅) = (1/2)‖ḡ − g∗‖n² + 2A0(1 − η0) Rn∗(ḡ).  (24)
Moreover, we have with probability at least 1 − ǫ − π,
(1/2)‖ĝ − g∗‖n² + (1/2)‖ĝ − ḡ‖Q² + A1 Rn∗(ĝ − ḡ) ≤ Δn∗(ḡ, ∅) + {φn/(2A1²)} Δn∗²(ḡ, ∅).  (25)
The preceding results deal with both in-sample and out-of-sample prediction. For space limitation, except in Proposition 3, we hereafter focus on the more challenging out-of-sample prediction. Under some rate condition on φn in (20), the additional term involving φn Δn∗²(ḡ, S) can be absorbed into the first term, as shown in the following corollary. Two possible scenarios are accommodated. On one hand, taking ḡ = g∗ directly gives high-probability bounds on the prediction error ‖ĝ − g∗‖Q² provided that g∗ is additive, that is, model (1) is correctly specified. On the other hand, the error ‖ĝ − g∗‖Q² can also be bounded, albeit in probability, in terms of an arbitrary additive function ḡ ∈ G, while allowing g∗ to be non-additive.
Corollary 5. Suppose that the conditions of Theorem 2 hold with S = {1 ≤ j ≤ p : ‖ḡj‖Q > C0∗ λnj} for some constant C0∗ > 0, and (20) holds with φn > 0 also satisfying
φn ( Σ_{j=1}^p ρnj ‖ḡj‖F,j + Σ_{j∈S^c} λnj ‖ḡj‖Q ) ≤ η2,  (26)
for some constant η2 > 0. Then for any 0 ≤ q ≤ 1 and A0 > A(ξ0∗, η0), we have with probability at least 1 − ǫ − π,
Dn∗(ĝ, ḡ) ≤ { O(1) + φn ‖ḡ − g∗‖n² } { ‖ḡ − g∗‖n² + Σ_{j=1}^p ( ρnj ‖ḡj‖F,j + λnj^{2−q} ‖ḡj‖Q^q ) },
where O(1) depends only on (q, A0∗, C0∗, ξ0∗, κ0∗, η0, η1, η2). In addition, suppose that φn ‖ḡ − g∗‖Q² is bounded by a constant and ǫ = ǫ(n, p) tends to 0 in the definition of ĝ in (9). Then for any 0 ≤ q ≤ 1, we have
‖ĝ − g∗‖Q² ≤ Op(1) { ‖ḡ − g∗‖Q² + Σ_{j=1}^p ( ρnj ‖ḡj‖F,j + λnj^{2−q} ‖ḡj‖Q^q ) }.
Similarly as for Corollary 3, it is useful to deduce the following result in a homogeneous situation where, for some constants B0∗ > 0 and 0 < β0 < 2,
max_{j=1,...,p} ∫_0^δ H^{∗1/2}((1 − η0)u, Gj∗(δ), ‖·‖n) du ≤ B0∗ δ^{1−β0/2},  0 < δ ≤ 1.  (27)
That is, we assume ψnj(δ) = B0∗ δ^{1−β0/2} in (17). For j = 1, . . . , p, let
wnj = wn∗(q) = max{ γn(q)^{1−q}, νn^{1−q} },  (28)
γnj = γn∗(q) = min{ γn(q), B0∗ n^{−1/2} νn^{−(1−q)β0/2} },  (29)
where νn = {log(p/ǫ)/n}^{1/2}, and wn(q) = γn(q)^{1−q} and
γn(q) = B0∗^{2/{2+β0(1−q)}} n^{−1/{2+β0(1−q)}} ≍ n^{−1/{2+β0(1−q)}}
are determined from the relationship (10), that is, γn(q) = B0∗ n^{−1/2} wn(q)^{−β0/2}. The reason why (wn∗(q), γn∗(q)) are used instead of the simpler choices (wn(q), γn(q)) is that the rate condition (30) needed below would become stronger if γn∗(q) were replaced by γn(q). The rate of convergence, however, remains the same even if γn∗(q) is substituted for γn(q) in (31). See Remark 16 for further discussion. For g = Σ_{j=1}^p gj ∈ G, denote ‖g‖F,1 = Σ_{j=1}^p ‖gj‖F,j and ‖g‖Q,q = Σ_{j=1}^p ‖gj‖Q^q.
Corollary 6. Assume that (1) holds and ‖g∗‖F,1 ≤ C1 MF and ‖g∗‖Q,q ≤ C1^q Mq for 0 ≤ q ≤ 1, Mq > 0, and MF > 0, possibly depending on (n, p). In addition, suppose that (27), (28), and (29) hold, Assumptions 1, 5, and 6(i) are satisfied with 0 < η0 < (ξ0∗ − 1)/(ξ0∗ + 1) and S = {1 ≤ j ≤ p : ‖gj∗‖Q > C0∗ λnj} for some constant C0∗ > 0, and (20) holds with φn > 0 satisfying
φn C1² (MF + Mq) { γn∗(q) + (log(p/ǫ)/n)^{1/2} }^{2−q} = o(1).  (30)
Then for sufficiently large n, depending on (MF, Mq) only through the convergence rate in (30), and any A0 > A(ξ0∗, η0), we have with probability at least 1 − ǫ − π,
Dn∗(ĝ, g∗) ≤ O(1) C1² (MF + Mq) { γn(q) + (log(p/ǫ)/n)^{1/2} }^{2−q},  (31)
where O(1) depends only on (q, A0∗, C0∗, ξ0∗, κ0∗, η0).
In the case of q 6= 0, Corollary 6 can be improved by relaxing the rate condition (30) but
requiring the following compatibility condition, which is seemingly stronger than Assumption 5,
and also more aligned with those used in related works on additive regression (Meier et al. 2009;
Koltchinskii & Yuan 2010).
Assumption 7 (Monotone compatibility condition). For some subset S ⊂ {1, 2, . . . , p} and constants κ0∗ > 0 and ξ0∗ > 1, assume that for any functions {fj ∈ Gj : j = 1, . . . , p} and f = Σ_{j=1}^p fj ∈ G, if (18) holds then
κ0∗² Σ_{j∈S} ‖fj‖Q² ≤ ‖f‖Q².  (32)
Remark 7. By the Cauchy–Schwarz inequality, (32) implies (19), and hence Assumption 7 is stronger than Assumption 5. However, there is a monotonicity in S for the validity of Assumption 7 with (32) used. In fact, for any subset S′ ⊂ S and any functions {fj′ ∈ Gj : j = 1, . . . , p} and f′ = Σ_{j=1}^p fj′ ∈ G, if
Σ_{j=1}^p λnj wnj ‖fj′‖F,j + Σ_{j∈S′^c} λnj ‖fj′‖Q ≤ ξ0∗ Σ_{j∈S′} λnj ‖fj′‖Q,
then (18) holds with fj = fj′, j = 1, . . . , p, and hence, via (32), implies
‖f′‖Q² ≥ κ0∗² Σ_{j∈S} ‖fj′‖Q² ≥ κ0∗² Σ_{j∈S′} ‖fj′‖Q².
Therefore, if Assumption 7 holds for a subset S, then it also holds for any subset S′ ⊂ S with the same constants (ξ0∗, κ0∗).
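The Cauchy–Schwarz step behind the implication (32) ⇒ (19) can be illustrated numerically; the snippet below (ours, for illustration only) checks the inequality (Σ_j λj aj)² ≤ (Σ_j λj²)(Σ_j aj²) on random nonnegative inputs, which is what turns a bound on Σ_{j∈S} ‖fj‖Q² into the bound (19):

```python
import random

# Illustrative check (ours) of the Cauchy-Schwarz step in Remark 7:
# (sum_j lam_j a_j)^2 <= (sum_j lam_j^2)(sum_j a_j^2), which turns the bound (32)
# on sum_{j in S} ||f_j||_Q^2 into the bound (19).
random.seed(0)
holds = True
for _ in range(1000):
    size = random.randint(1, 20)
    lam = [random.uniform(0.0, 2.0) for _ in range(size)]
    a = [random.uniform(0.0, 2.0) for _ in range(size)]
    lhs = sum(l * x for l, x in zip(lam, a)) ** 2
    rhs = sum(l * l for l in lam) * sum(x * x for x in a)
    holds = holds and lhs <= rhs + 1e-9
```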
Corollary 7. Suppose that the conditions of Corollary 6 are satisfied with 0 < q ≤ 1 (excluding
q = 0), Assumption 7 holds instead of Assumption 5, and the following condition holds instead
of (30),
φn C1² (MF + Mq) { γn∗(q) + (log(p/ǫ)/n)^{1/2} }^{2−q} ≤ η3,  (33)
for some constant η3 > 0. If 0 < wn∗(q) ≤ 1 for sufficiently large n, then for any A0 > A(ξ0∗, η0), inequality (31) holds with probability at least 1 − ǫ − π, where O(1) depends only on (q, A0∗, C0∗, ξ0∗, κ0∗, η0, η3).
To demonstrate the flexibility of our approach and compare with related results, notably Suzuki & Sugiyama (2013), we provide another result in the context of Corollary 6 with (wnj, γnj) allowed to depend on (MF, Mq), in contrast with the choices (28)–(29) independent of (MF, Mq). For j = 1, . . . , p, let
wnj = wn†(q) = max{ wn′(q), νn^{1−q} (Mq/MF) },  (34)
γnj = γn†(q) = min{ γn′(q), B0∗ n^{−1/2} νn^{−(1−q)β0/2} (Mq/MF)^{−β0/2} },  (35)
where wn′(q) = γn′(q)^{1−q} (Mq/MF) and
γn′(q) = B0∗^{2/{2+β0(1−q)}} n^{−1/{2+β0(1−q)}} (Mq/MF)^{−β0/{2+β0(1−q)}}
are determined along with the relationship γn′(q) = B0∗ n^{−1/2} wn′(q)^{−β0/2} by (10). These choices are picked to balance the two rates λn wn MF and λn^{2−q} Mq, where wn and λn denote the common values of wnj and λnj for j = 1, . . . , p.
Corollary 8. Suppose that the conditions of Corollary 6 are satisfied except that (wnj, γnj) are defined by (34)–(35), and the following condition holds instead of (30),
φn C1² Mq { γn†(q) + (log(p/ǫ)/n)^{1/2} }^{2−q} = o(1).  (36)
Then for sufficiently large n, depending on (MF, Mq) only through the convergence rate in (36), and any A0 > A(ξ0∗, η0), we have with probability at least 1 − ǫ − π,
Dn∗(ĝ, g∗) ≤ O(1) C1² { Mq^{(2−β0)/{2+β0(1−q)}} MF^{(2−q)β0/{2+β0(1−q)}} n^{−(2−q)/{2+β0(1−q)}} + Mq νn^{2−q} },  (37)
where O(1) depends only on (q, B0∗, A0∗, C0∗, ξ0∗, κ0∗, η0).
Remark 8. In the special case of q = 0 (exact sparsity), the convergence rate (37) reduces to M0^{(2−β0)/(2+β0)} MF^{2β0/(2+β0)} n^{−2/(2+β0)} + M0 νn². The same rate was obtained in Suzuki & Sugiyama (2013) under
Σ_{j=1}^p ‖gj∗‖Q^0 ≤ M0,  Σ_{j=1}^p ‖gj∗‖H ≤ MF ≤ c M0,  (38)
with a constant c, for additive regression with reproducing kernel Hilbert spaces, where ‖gj∗‖H is the Hilbert norm. As one of their main points, this rate was argued to be faster than (M0 + MF) n^{−2/(2+β0)} + M0 νn², that is, the rate (31) with q = 0 under (38). Our analysis sheds new light on the relationship between the rates (31) and (37): their difference mainly lies in whether the tuning parameters (wnj, γnj) are chosen independently of (MF, M0) or depending on (MF, M0).
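The reduction of (37) at q = 0 quoted in Remark 8 is a direct substitution; the following illustrative snippet (ours) checks the exponents, and also records the easily verified fact that the exponents of Mq and MF in the first term of (37) sum to 1 for every q — an observation of ours, not a claim of the paper:

```python
# Illustrative check (ours): exponents of Mq, MF, and n in the first term of (37),
# and their reduction at q = 0 as quoted in Remark 8.
def exponents(q, beta0):
    e = 2.0 + beta0 * (1.0 - q)
    return (2.0 - beta0) / e, (2.0 - q) * beta0 / e, -(2.0 - q) / e

beta0 = 0.5
a, b, c = exponents(0.0, beta0)
reduces = (abs(a - (2.0 - beta0) / (2.0 + beta0)) < 1e-12
           and abs(b - 2.0 * beta0 / (2.0 + beta0)) < 1e-12
           and abs(c + 2.0 / (2.0 + beta0)) < 1e-12)
# The exponents of Mq and MF sum to 1 for every q in [0, 1].
sums_to_one = all(abs(sum(exponents(q, beta0)[:2]) - 1.0) < 1e-12
                  for q in (0.0, 0.25, 0.5, 1.0))
```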
3.3 Convergence of empirical norms
We provide two explicit results on the convergence of empirical norms as needed for Assumption 6. These results can also be useful for other applications.
Our first result, Theorem 3, is applicable (but not limited) to Sobolev and bounded variation
spaces in general. For clarity, we postulate another entropy condition, similar to Assumption 4
but with the empirical supremum norms.
Assumption 8 (Entropy condition in supremum norms). For j = 1, . . . , p, let ψnj,∞(·, δ) be an upper envelope of the entropy integral, independent of the realizations {Xi^{(j)} : i = 1, . . . , n}, as follows:
ψnj,∞(z, δ) ≥ ∫_0^z H^{∗1/2}(u/2, Gj∗(δ), ‖·‖n,∞) du,  z > 0, 0 < δ ≤ 1,
where Gj∗(δ) = {fj ∈ Gj : ‖fj‖F,j + ‖fj‖Q/δ ≤ 1} as in Assumption 4 and
H^∗(u, Gj∗(δ), ‖·‖n,∞) = sup_{(X1^{(j)},...,Xn^{(j)})} H(u, Gj∗(δ), ‖·‖n,∞).
We also make use of the following two conditions about metric entropies and sup-norms. Suppose that for j = 1, . . . , p, ψnj(δ) and ψnj,∞(z, δ) in Assumptions 4 and 8 are in the polynomial forms
ψnj(δ) = Bnj δ^{1−βj/2},  0 < δ ≤ 1,  (39)
ψnj,∞(z, δ) = Bnj,∞ z^{1−βj/2},  z > 0, 0 < δ ≤ 1,  (40)
where 0 < βj < 2 is a constant, and Bnj > 0 and Bnj,∞ > 0 are constants, possibly depending on n. Denote Γn = max_{j=1,...,p} (Bnj,∞/Bnj). In addition, suppose that for j = 1, . . . , p,
‖gj‖∞ ≤ (C4,j/2) ( ‖gj‖F,j + ‖gj‖Q )^{τj} ‖gj‖Q^{1−τj},  gj ∈ Gj,  (41)
where C4,j ≥ 1 and 0 < τj ≤ (2/βj − 1)^{−1} are constants. Let γnj = n^{−1/2} ψnj(wnj)/wnj = n^{−1/2} Bnj wnj^{−βj/2} by (10) and γ̃nj = n^{−1/2} wnj^{−τj} for j = 1, . . . , p. As a function of wnj, the quantity γ̃nj in general differs from γnj even up to a multiplicative constant unless τj = βj/2, as in the case where Gj is an L2-Sobolev space; see (43) below.
Theorem 3. Suppose that Assumptions 4 and 8 hold with ψnj(δ) and ψnj,∞(z, δ) in the forms (39) and (40), and condition (41) holds. In addition, suppose that for sufficiently large n, γnj ≤ wnj ≤ 1 and Γn γnj^{1−βj/2} ≤ 1 for j = 1, . . . , p. Then for any 0 < ǫ′ < 1 (for example, ǫ′ = ǫ), inequality (20) holds with π = ǫ′² and φn > 0 such that
φn = O(1) { n^{1/2} Γn max_j (γnj/λnj) max_j (γ̃nj wnj^{τj β_{p+1}/2}/λnj) + max_j (γ̃nj/λnj) max_j (γ̃nj (log(p/ǫ′))^{1/2}/λnj) + max_j (γ̃nj² log(p/ǫ′)/λnj²) },  (42)
where β_{p+1} = min_{j=1,...,p} βj, and O(1) depends only on (C2, C3) from Lemmas 13 and 14 and C4 = max_{j=1,...,p} C4,j from condition (41).
To facilitate justification of conditions (39), (40), and (41), consider the following assumption on the marginal densities of the covariates, as commonly imposed when handling random
designs (e.g., Stone 1982).
Assumption 9 (Non-vanishing marginal densities). For j = 1, . . . , p, denote by qj(x^{(j)}) the average marginal density function of (X1^{(j)}, . . . , Xn^{(j)}), that is, the density function associated with the probability measure n^{−1} Σ_{i=1}^n Q_{Xi^{(j)}}, where Q_{Xi^{(j)}} is the marginal distribution of Xi^{(j)}. For some constant 0 < ̺0 ≤ 1, assume that qj(x^{(j)}) is bounded from below by ̺0 simultaneously for j = 1, . . . , p.
Remark 9. Conditions (39), (40), and (41) are satisfied under Assumption 9, when each Gj is a Sobolev space W_{rj}^{mj} for rj ≥ 1 and mj ≥ 1, or a bounded variation space V^{mj} for rj = 1 and mj ≥ 1, on [0, 1]. Let βj = 1/mj. First, (41) is implied by the interpolation inequalities for Sobolev spaces (Nirenberg 1966) with
τj = (2/βj + 1 − 2/rj)^{−1}  (43)
and C4,j = ̺0^{−1} C4∗(mj, rj) as stated in Lemma 21 of the Supplement. Moreover, if fj ∈ Gj∗(δ) with 0 < δ ≤ 1, then ‖fj‖F,j ≤ 1 and ‖fj‖Q ≤ δ, and hence ‖fj‖_{L_{rj}} ≤ ‖fj‖∞ ≤ C4,j by (41). By rescaling the entropy estimates for Sobolev and bounded variation spaces (Lorentz et al. 1996) as in Lemmas 19 and 20 of the Supplement, Assumptions 4 and 8 are satisfied such that (39) and (40) hold with Bnj independent of n, and Bnj,∞ = O(1) Bnj if rj > βj or Bnj,∞ = O(log^{1/2}(n)) Bnj if rj = βj = 1.
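The formula (43) can be checked directly; the snippet below (illustrative only, ours) confirms that τj = βj/2 in the L2-Sobolev case rj = 2, and that τj always satisfies the constraint 0 < τj ≤ (2/βj − 1)^{−1} from (41) when rj ≥ 1:

```python
# Illustrative check (ours) of formula (43): tau_j = (2/beta_j + 1 - 2/r_j)^{-1}.
def tau(beta, r):
    return 1.0 / (2.0 / beta + 1.0 - 2.0 / r)

ok = True
for m in (1, 2, 3):                 # smoothness level; beta_j = 1/m_j
    beta = 1.0 / m
    # L2-Sobolev case r_j = 2 gives tau_j = beta_j / 2
    ok = ok and abs(tau(beta, 2.0) - beta / 2.0) < 1e-12
    for r in (1.0, 1.5, 2.0, 4.0):
        # constraint from (41): 0 < tau_j <= (2/beta_j - 1)^{-1}
        ok = ok and 0.0 < tau(beta, r) <= 1.0 / (2.0 / beta - 1.0) + 1e-12
```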
Remark 10. Assumption 9 is not needed for the justification of (39), (40), and (41) when each class Gj is W_1^1 or V^1 on [0, 1], that is, rj = mj = 1. In this case, condition (41) directly holds with τj = 1, because ‖gj‖∞ ≤ TV(gj) + ‖gj‖Q. Then (39) and (40) easily follow from the entropy estimates in Lemmas 19 and 20.
Our second result provides a sharper rate than in Theorem 3, applicable (but not limited) to Sobolev and bounded variation spaces, provided that the following conditions hold. For gj ∈ Gj, assume that gj(·) can be written as Σ_{ℓ=1}^∞ θjℓ ujℓ(·) for certain coefficients θjℓ and basis functions ujℓ(·) on a set Ω. In addition, for certain positive constants C5,1, C5,2, C5,3, 0 < τj < 1, and 0 < wnj ≤ 1, assume that for all 1 ≤ j ≤ p,
sup{ Σ_{ℓ=1}^k ujℓ²(x)/k : x ∈ Ω, k ≥ ℓj0 } ≤ C5,1,  (44)
max_{k≥1} ℓjk^{1/τj} Σ_{ℓj,k−1<ℓ≤ℓjk} θjℓ² ≤ C5,2 ( ‖gj‖F,j + wnj^{−1} ‖gj‖Q )²,  (45)
wnj^{−2} Σ_{ℓ=1}^{ℓj0} θjℓ² ≤ C5,2 ( ‖gj‖F,j + wnj^{−1} ‖gj‖Q )²,  (46)
with ℓjk = ⌈(2^k/wnj)^{2τj}⌉ for k ≥ 0 and ℓj,−1 = 0, and for all 1 ≤ j ≤ p and k ≥ 0,
sup{ ‖ Σ_{ℓj,k−1<ℓ≤ℓjk} θjℓ ujℓ ‖Q² : Σ_{ℓj,k−1<ℓ≤ℓjk} θjℓ² = 1 } ≤ C5,3.  (47)
Theorem 4. Suppose that (44), (45), (46) and (47) hold as above, and max_{j=1,...,p} {e^{2/(1−τj)} + 2 wnj^{−τj}} ≤ n. Then for any 0 < ǫ′ < 1 (for example, ǫ′ = ǫ), inequality (20) holds with π = ǫ′² and φn > 0 such that
φn = O(1) { max_j (γ̃nj/{(1 − τj) λnj}) max_j (γ̃nj (log(np/ǫ′))^{1/2}/λnj) + max_j (γ̃nj² log(np/ǫ′)/{(1 − τj)² λnj²}) },
where γ̃nj = n^{−1/2} wnj^{−τj} and O(1) depends only on {C5,1, C5,2, C5,3}.
Remark 11. Let Gj be a Sobolev space W_{rj}^{mj} with rj ≥ 1, mj ≥ 1, and (rj ∧ 2)mj > 1, or a bounded variation space V^{mj} with rj = 1 and mj > 1 (excluding mj = 1), on [0, 1]. Condition (44) holds for commonly used Fourier, wavelet and spline bases in L2. For any L2 orthonormal bases {ujℓ, ℓ ≥ 1}, condition (46) follows from Assumption 9 when C5,2 ≥ ̺0^{−1}, and condition (47) is also satisfied under an additional assumption that the average marginal density of {Xi^{(j)} : i = 1, . . . , n} is bounded from above by C5,3 for all j. In the proof of Proposition 5 we verify (44) and (45) for suitable wavelet bases with τj = 1/{2mj + 1 − 2/(rj ∧ 2)}, which satisfies τj < 1 because (rj ∧ 2)mj > 1. In fact, Gj is allowed to be a Besov space B_{rj,∞}^{mj}, which contains W_{rj}^{mj} for rj ≥ 1 and V^{mj} for rj = 1 (e.g., DeVore & Lorentz 1993).
Remark 12. The convergence rate of φn in Theorem 4 is no slower than (42) in Theorem 3 if 1 ≤ rj ≤ 2 and (1 − τj)^{−1} {log(n)/n}^{1/2} = O(γ̃nj), the latter of which is valid whenever τj is bounded away from 1 and γ̃nj = n^{−1/2} wnj^{−τj} is of a slower polynomial order than n^{−1/2}. However, Theorem 4 requires an additional side condition (47) along with the requirement of τj < 1, which excludes for example the bounded variation space V^1 on [0, 1]. See Equations (52) and (53) for implications of these rates when used in Assumption 6.
3.4 Results with Sobolev and bounded variation spaces

We combine the results in Sections 3.2 and 3.3 (with ǫ′ = ǫ) to deduce a number of concrete results on the performance of ĝ. For simplicity, consider a fully homogeneous situation where each class Gj is a Sobolev space W_{r0}^{m0} for some constants r0 ≥ 1 and m0 ≥ 1, or a bounded variation space V^{m0} for r0 = 1 and m0 ≥ 1, on [0, 1]. Let β0 = 1/m0. By Remark 9, if r0 > β0, then Assumptions 4 and 8 are satisfied such that ψnj(δ) = B0∗ δ^{1−β0/2} and ψnj,∞(z, δ) = O(1) B0∗ z^{1−β0/2} for z > 0 and 0 < δ ≤ 1 under Assumption 9 (non-vanishing marginal densities), where B0∗ > 0 is a constant depending on ̺0 among others. On the other hand, by Remark 10, if r0 = β0 = 1, then Assumptions 4 and 8 are satisfied such that ψnj(δ) = B0∗ δ^{1/2} and ψnj,∞(z, δ) = O(log^{1/2}(n)) B0∗ z^{1/2} for z > 0 and 0 < δ ≤ 1, even when Assumption 9 does not hold. That is, Γn in Theorem 3 reduces to
Γn = O(1) if r0 > β0, or Γn = O(log^{1/2}(n)) if r0 = β0 = 1.  (48)
We present our results in three cases, where the underlying function g∗ = Σ_{j=1}^p gj∗ is assumed to satisfy (2) with q = 1, q = 0, or 0 < q < 1. As discussed in Section 1, the parameter set (2) decouples sparsity and smoothness, inducing sparsity at different levels through an Lq ball in ‖·‖Q norm for 0 ≤ q ≤ 1, while only enforcing smoothness through an L1 ball in ‖·‖F norm on the components (g1∗, . . . , gp∗).
The first result deals with the case q = 1 for the parameter set (2).
Proposition 3. Assume that (1) holds and ‖g∗‖F,1 ≤ C1 MF and ‖g∗‖Q,1 ≤ C1 M1 for MF > 0 and M1 > 0, possibly depending on (n, p). Let wnj = 1 and γnj = γn∗(1) ≍ n^{−1/2} by (28)–(29). Suppose that Assumptions 1 and 9 hold, and log(p/ǫ) = o(n). Then for sufficiently large n, independently of (MF, M1), and any A0 > (1 + η0)/(1 − η0), we have with probability at least 1 − 2ǫ,
‖ĝ − g∗‖n² + A1 Rn∗(ĝ − g∗) ≤ O(1) C1² (MF + M1) (log(p/ǫ)/n)^{1/2},
where O(1) depends only on (B0∗, A0, η0, ̺0). Moreover, we have
(1/2)‖ĝ − g∗‖Q² + A1 Rn∗(ĝ − g∗) ≤ O(1) C1² (MF² + M1²) { n^{−1/2} Γn + (log(p/ǫ)/n)^{1/2} },
with probability at least 1 − 2ǫ, where Γn is from (48) and O(1) depends only on (B0∗, A0, η0, ̺0) and (C2, C3, C4) as in Theorem 3. If r0 = β0 = 1, then the results are valid even when Assumption 9 and hence ̺0 are removed.
Remark 13 (Comparison with existing results). Proposition 3 leads to the slow rate {log(p)/n}^{1/2} under L1-ball sparsity in ‖·‖Q norm, as previously obtained for additive regression with Sobolev Hilbert spaces in Ravikumar et al. (2009), except in the case where r0 = β0 = 1, that is, each class Gj is W_1^1 or V^1. In the latter case, Proposition 3 shows that the convergence rate is {log(np)/n}^{1/2} for out-of-sample prediction, but remains {log(p)/n}^{1/2} for in-sample prediction. Previously, only the slower rate, {log(np)/n}^{1/2}, was obtained for in-sample prediction in additive regression with the bounded variation space V^1 by Petersen et al. (2016).
The second result deals with the case q = 0 for the parameter set (2).
Proposition 4. Assume that (1) holds and ‖g∗‖F,1 ≤ C1 MF and ‖g∗‖Q,0 ≤ M0 for MF > 0 and M0 > 0, possibly depending on (n, p). By (28)–(29), let
wnj = wn∗(0) = max{ B0∗^{2/(2+β0)} n^{−1/(2+β0)}, (log(p/ǫ)/n)^{1/2} },
γnj = γn∗(0) = min{ B0∗^{2/(2+β0)} n^{−1/(2+β0)}, B0∗ n^{−1/2} (log(p/ǫ)/n)^{−β0/4} }.
Suppose that Assumptions 1, 5, and 9 hold with 0 < η0 < (ξ0∗ − 1)/(ξ0∗ + 1) and S = {1 ≤ j ≤ p : ‖gj∗‖Q > C0∗ λnj} for some constant C0∗ > 0, and
{ Γn wn∗(0)^{−(1−β0/2)τ0} γn∗(0) + wn∗(0)^{−τ0} (log(p/ǫ)/n)^{1/2} } (1 + MF + M0) = o(1),  (49)
where τ0 = 1/(2/β0 + 1 − 2/r0). Then for sufficiently large n, depending on (MF, M0) only through the convergence rate in (49), and any A0 > A(ξ0∗, η0), we have
Dn∗(ĝ, g∗) ≤ O(1) C1² (MF + M0) { n^{−1/(2+β0)} + (log(p/ǫ)/n)^{1/2} }²,  (50)
with probability at least 1 − 2ǫ, where O(1) depends only on (B0∗, A0∗, C0∗, ξ0∗, κ0∗, η0, ̺0). If r0 = β0 = 1, then the results are valid even when Assumption 9 and hence ̺0 are removed.
Condition (49) is based on Theorem 3 for convergence of empirical norms. By Remark 12,
a weaker condition can be obtained using Theorem 4 when 1 ≤ r0 ≤ 2 and τ0 < 1 (that is,
r0 > β0 ). It is interesting to note that (49) reduces to (51) below in the case r0 = β0 = 1.
Proposition 5. Proposition 4 is also valid with (49) replaced by the weaker condition
{ wn∗(0)^{−τ0} (log(np/ǫ)/n)^{1/2} } (1 + MF + M0) = o(1),  (51)
in the case where 1 ≤ r0 ≤ 2, r0 > β0, and the average marginal density of (X1^{(j)}, . . . , Xn^{(j)}) is bounded from above for all j.
Remark 14 (Comparison with existing results). Propositions 4 and 5 yield the fast rate n^{−2/(2+β0)} + log(p)/n under L0-ball sparsity in ‖·‖Q norm. Previously, the same rate was obtained for high-dimensional additive regression only with reproducing kernel Hilbert spaces (including the Sobolev space W_2^m) by Koltchinskii & Yuan (2010) and Raskutti et al. (2012), but under more restrictive conditions. They studied hybrid penalized estimation procedures, which involve additional constraints such that the Hilbert norms of (g1, . . . , gp) are bounded by known constants when minimizing a penalized criterion. Moreover, Koltchinskii & Yuan (2010) assumed a constant bound on the sup-norm of possible g∗, whereas Raskutti et al. (2012) assumed the independence of the covariates (Xi^{(1)}, . . . , Xi^{(p)}) for each i. These restrictions were relaxed in subsequent work by Suzuki & Sugiyama (2013), but only explicitly under the assumption that the noises εi are uniformly bounded by a constant. Moreover, our condition (49) is much weaker than related ones in Suzuki & Sugiyama (2013), as discussed in Remarks 15 and 16 below. See also Remark 8 for a discussion about the relationship between our results and the seemingly faster rate in Suzuki & Sugiyama (2013).
Remark 15. To justify Assumptions 6(i)–(ii) on convergence of empirical norms, our rate
condition (49) is much weaker than previous ones used. If each class Gj is a Sobolev Hilbert
space (r0 = 2), then τ0 = β0 /2 and (49) becomes
o
n
p
2
n1/2 wn∗ (0)β0 /4 γn∗ (0)2 + γn∗ (0) log(p/ǫ) (1 + MF + M0 ) = o(1).
(52)
Moreover, by Proposition 5, condition (49) can be weakened to (51), that is,
p
γn∗ (0) log(np/ǫ)(1 + MF + M0 ) = o(1),
(53)
(j)
(j)
under an additional condition that the average marginal density of (X1 , . . . , Xn ) is bounded
from above for all j. Either condition (52) or (53) is much weaker than those in related analysis
24
with reproducing kernel Hilbert spaces. In fact, techniques based on the contraction inequality
(Ledoux & Talagrand 1991) as used in Meier et al. (2009) and Koltchinskii & Yuan (2010),
lead to a rate condition such as
$$n^{1/2}\{\gamma_n^2(0) + \nu_n^2\}(1 + M_F + M_0) = o(1), \qquad (54)$$
where $\gamma_n(0) = B_0^{*\,2/(2+\beta_0)} n^{-1/(2+\beta_0)}$ and $\nu_n = \{\log(p/\epsilon)/n\}^{1/2}$. This amounts to condition (6) assumed in Suzuki & Sugiyama (2013), in addition to the requirement $n^{-1/2}(\log p) \le 1$. But condition
(54) is even stronger than the following condition:
$$n^{1/2}\bigl\{\gamma_n(0)^{2+\beta_0^2/4} + \gamma_n(0)\nu_n\bigr\}(1 + M_F + M_0) = o(1), \qquad (55)$$
because $\Gamma_n\gamma_n(0)^{2+\beta_0^2/4} + \gamma_n(0)\nu_n \ll \gamma_n^2(0) + \nu_n^2$ if either $\gamma_n(0) \gg \nu_n$ or $\gamma_n(0) \ll \nu_n$. Condition (55) implies (52) and (53), as we explain in the next remark.
Remark 16. Our rate condition (49) is in general weaker than the corresponding condition with $(w_n^*(0), \gamma_n^*(0))$ replaced by $(w_n(0), \gamma_n(0))$, that is,
$$\Gamma_n\bigl\{\gamma_n(0)^{1-(1-\beta_0/2)\tau_0} + \gamma_n(0)^{-\tau_0}\nu_n\bigr\}(1 + M_F + M_0) = o(1). \qquad (56)$$
This demonstrates the advantage of using the more careful choices $(w_n^*(0), \gamma_n^*(0))$ and also explains why (55) implies (52) in Remark 15. In fact, if $\gamma_n(0) \ge \nu_n$ then (49) and (56) are identical to each other. On the other hand, if $\gamma_n(0) < \nu_n$, then $w_n^*(0) = \nu_n > \gamma_n(0)$ and $w_n^*(0)^{-(1-\beta_0/2)\tau_0}\gamma_n^*(0) = B_0^* n^{-1/2} w_n^*(0)^{-(1-\beta_0/2)\tau_0 - \beta_0/2} < \gamma_n(0)^{1-(1-\beta_0/2)\tau_0}$. This also shows that if $\gamma_n(0) \ll \nu_n$, then (49) is much weaker than (56). For illustration, if $r_0 = 2$ and hence $\tau_0 = \beta_0/2$, then (56) or equivalently (55) requires at least $\gamma_n(0)^{-\beta_0/2}\nu_n = o(1)$, that is, $(\log p)\, n^{-2/(2+\beta_0)} = o(1)$, and (54) requires at least $n^{1/2}\nu_n^2 = o(1)$, that is, $\log(p)\, n^{-1/2} = o(1)$. In contrast, the corresponding requirement for (49), $w_n^*(0)^{-\beta_0/2}\nu_n = o(1)$, is automatically valid as long as $\nu_n = o(1)$, that is, $\log(p)/n = o(1)$.
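The ordering of these three requirements can be checked numerically. The sketch below uses illustrative values only ($\beta_0 = 1$ and $p = n^2$ are assumptions for this example, not choices from the paper) and evaluates the quantities that must vanish under (55)/(56), (54), and (49).

```python
import math

# Illustrative setting: r0 = 2, so tau0 = beta0/2 as in Remark 15;
# beta0 = 1 and p = n^2 are hypothetical choices for this sketch.
beta0 = 1.0

def requirements(n, p):
    """Quantities that must vanish under (55)/(56), (54), and (49)."""
    req_55_56 = math.log(p) * n ** (-2.0 / (2.0 + beta0))  # (log p) n^{-2/(2+beta0)}
    req_54 = math.log(p) * n ** (-0.5)                     # (log p) n^{-1/2}
    req_49 = math.log(p) / n                               # (log p) / n
    return req_55_56, req_54, req_49

# The requirement implied by (49) is always the smallest of the three.
for n in (10**3, 10**6, 10**9):
    r5556, r54, r49 = requirements(n, p=n**2)
    assert r49 < r5556 < r54
```

All three quantities vanish here because $p$ grows only polynomially in $n$, but the gap between them widens with $n$, consistent with (49) being the weakest condition.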
The following result deals with the case 0 < q < 1 for the parameter set (2).
Proposition 6. Assume that (1) holds and $\|g^*\|_{F,1} \le C_1 M_F$ and $\|g^*\|_{Q,q} \le C_1^q M_q$ for $0 < q < 1$, $M_q > 0$, and $M_F > 0$, possibly depending on $(n, p)$. Let $w_{nj} = w_n^*(q)$ and $\gamma_{nj} = \gamma_n^*(q)$ by (28)–(29). Suppose that Assumptions 1, 7, and 9 hold with $0 < \eta_0 < (\xi_0^* - 1)/(\xi_0^* + 1)$ and $S = \{1 \le j \le p : \|g_j^*\|_Q > C_0^* \lambda_{nj}\}$ for some constant $C_0^* > 0$, $\log(p/\epsilon) = o(n)$, and
$$\Gamma_n\bigl\{w_n^*(q)^{-(1-\beta_0/2)\tau_0}\gamma_n^*(q)^{1-q} + w_n^*(q)^{-\tau_0}\nu_n^{1-q}\bigr\}(1 + M_F + M_q) \le \eta_4, \qquad (57)$$
for some constant $\eta_4 > 0$, where $\nu_n = \{\log(p/\epsilon)/n\}^{1/2}$. Then for sufficiently large $n$, independently of $(M_F, M_q)$, and any $A_0 > A(\xi_0^*, \eta_0)$, we have
$$D_n^*(\hat g, g^*) \le O(1)\, C_1^2 (M_F + M_q)\bigl\{n^{-1/\{2+\beta_0(1-q)\}} + \sqrt{\log(p/\epsilon)/n}\bigr\}^{2-q},$$
with probability at least $1 - 2\epsilon$, where $O(1)$ depends only on $(q, B_0^*, A_0^*, C_0^*, \xi_0^*, \kappa_0^*, \eta_0, \varrho_0, \eta_4)$ and $(C_2, C_3, C_4)$ as in Theorem 3. If $r_0 = \beta_0 = 1$, then the results are valid even when Assumption 9 and hence $\varrho_0$ are removed.
Similarly as in Propositions 4 and 5, condition (57) can be weakened as follows when $1 \le r_0 \le 2$ and $\tau_0 < 1$ (that is, $r_0 > \beta_0$). It should also be noted that (57) is equivalent to (58) below (with different $\eta_4$ in the two equations) in the case $r_0 = \beta_0 = 1$, because $\gamma_n^*(q)$ with $q < 1$ is of a slower polynomial order than $n^{-1/2}$ and hence $\{\log(n)/n\}^{1/2}\gamma_n^*(q)^{-1} = o(1)$.
Proposition 7. Proposition 6 is also valid with (57) replaced by the weaker condition
$$w_n^*(q)^{-\tau_0}\{\log(np/\epsilon)/n\}^{(1-q)/2}(1 + M_F + M_0) \le \eta_4, \qquad (58)$$
for some constant $\eta_4 > 0$, in the case where $1 \le r_0 \le 2$, $r_0 > \beta_0$, and the average marginal density of $(X_1^{(j)}, \ldots, X_n^{(j)})$ is bounded from above for all $j$.
Remark 17. Propositions 6 and 7 yield, under $L_q$-ball sparsity in the $\|\cdot\|_Q$ norm, a convergence rate interpolating the slow and fast rates smoothly from $q = 1$ to $q = 0$, similarly as in fixed designs (Section 3.1). However, the rate condition (57) involved does not always exhibit a smooth transition to those for the slow and fast rates. In the extreme case $q = 1$, condition (57) with $q = 1$ cannot be satisfied when $M_1$ is unbounded or when $M_1$ is bounded but $\Gamma_n$ is unbounded with $r_0 = \beta_0 = 1$. In contrast, Proposition 3 allows for unbounded $M_1$ and the case $r_0 = \beta_0 = 1$. This difference is caused by the need to justify Assumption 6(ii) with $q \ne 1$. In the extreme case $q = 0$, condition (57) with $q = 0$ also differs drastically from (49) in Proposition 4. As seen from the proof of Corollary 7, this difference arises because Assumption 6(ii) can be justified by exploiting the fact that $z^q \to \infty$ as $z \to \infty$ for $q > 0$ (but not $q = 0$).
For illustration, Table 1 gives the convergence rates from Propositions 3–6 in the simple
situation where (MF , Mq ) are bounded from above, independently of (n, p). The rate conditions
(49) and (57) are easily seen to hold in all cases except that (49) is not satisfied for q = 0
when $r_0 = \beta_0 = 1$ but $\nu_n \ne o(\gamma_n(0))$. In this case, we show in the following result that the
convergence rate {γn (0) + νn }2 can still be achieved, but with the tuning parameters (wnj , γnj )
Table 1: Convergence rates for out-of-sample prediction under parameter set (2) with (MF, Mq) bounded from above

                    r0 > β0             r0 = β0 = 1
                    0 ≤ q ≤ 1           q = 1                0 < q < 1            q = 0, νn = o(γn(0))   q = 0, otherwise
  scale-adaptive    yes                 yes                  yes                  yes                    no
  rate              {γn(q) + νn}^{2−q}  {log(n)/n}^{1/2} + νn  {γn(q) + νn}^{2−q}   {γn(0) + νn}^2         {γn(0) + νn}^2

Note: γn(q) ≍ n^{−1/{2+β0(1−q)}} and νn = {log(p/ǫ)/n}^{1/2}. Scale-adaptiveness means the convergence rate is
achieved with (wnj, γnj) chosen independently of (MF, Mq).
chosen suitably depending on the upper bound of (MF , Mq ). This is in contrast with the other
cases in Table 1 where the convergence rates are achieved by our penalized estimators in a
scale-adaptive manner: (wnj , γnj ) = (wn∗ (q), γn∗ (q)) are chosen independently of (MF , Mq ) or
their upper bound.
Proposition 8. Assume that $r_0 = \beta_0 = 1$, and $M_F$ and $M_0$ are bounded from above by a constant $M > 0$. Suppose that the conditions of Proposition 4 are satisfied except with (49) and Assumption 9 removed, and Assumption 7 holds instead of Assumption 5. Let $\hat g'$ be the estimator with $(w_{nj}, \gamma_{nj})$ replaced by $w_{nj}' = K_0 w_n^*(0)$ and $\gamma_{nj}' = K_0^{-\beta_0/2}\gamma_n^*(0)$ for $K_0 > 0$. Then $K_0$ can be chosen, depending on $M$ but independently of $(n, p)$, such that for sufficiently large $n$, depending on $M$, and any $A_0 > A(\xi_0^*, \eta_0)$, we have
$$D_n^*(\hat g', g^*) \le O(1)\, C_1^2 (M_F + M_0)\bigl\{n^{-1/(2+\beta_0)} + \sqrt{\log(p/\epsilon)/n}\bigr\}^{2},$$
with probability at least $1 - 2\epsilon$, where $O(1)$ depends only on $(M, B_0^*, A_0^*, C_0^*, \xi_0^*, \kappa_0^*, \eta_0)$ and $(C_2, C_3, C_4)$ as in Theorem 3.
4 Discussion
For additive regression with high-dimensional data, we have established new convergence results on the predictive performance of penalized estimation when each component function can lie in a Sobolev space or a bounded-variation space. A number of open problems remain to be
fully investigated. First, our results provide minimax upper bounds for estimation when the
component functions are restricted within an L1 ball in k · kF,j semi-norm and an Lq ball in
k · kQ norm. It is desirable to study whether these rates would match minimax lower bounds.
Second, while the penalized estimators have been shown under certain conditions to be adaptive to the sizes of L1 (k · kF ) and Lq (k · kQ ) balls for fixed q, we are currently investigating
adaptive estimation over such balls with varying q simultaneously. Finally, it is interesting to
study variable selection and inference about component functions for high-dimensional additive
regression, in addition to predictive performance studied here.
References
Bellec, P.C. and Tsybakov, A.B. (2016) Bounds on the prediction error of penalized least
squares estimators with convex penalty. Festschrift in Honor of Valentin Konakov, to
appear.
Bickel, P., Ritov, Y., and Tsybakov, A.B. (2009) Simultaneous analysis of Lasso and Dantzig
selector, Annals of Statistics, 37, 1705–1732.
Bunea, F., Tsybakov, A.B., and Wegkamp, M. (2007) Sparsity oracle inequalities for the
Lasso, Electronic Journal of Statistics, 1, 169–194.
DeVore, R. A. and Lorentz, G.G. (1993) Constructive Approximation, Springer: New York,
NY.
Greenshtein, E. and Ritov, Y. (2004) Persistency in high-dimensional linear predictor selection
and the virtue of over-parametrization, Bernoulli 10, 971–988.
Gu, C. (2002) Smoothing Spline ANOVA Models, Springer: New York, NY.
Hastie, T. and Tibshirani, R. (1990) Generalized Additive Models, Chapman & Hall: New
York, NY.
Huang, J., Horowitz, J.L., and Wei, P. (2010) Variable selection in nonparametric additive
models, Annals of Statistics, 38, 2282–2313.
Kim, S.-J., Koh, K., Boyd, S., and Gorinevsky, D. (2009) ℓ1 trend filtering, SIAM Review 51,
339–360.
Koltchinskii, V. and Yuan, M. (2010) Sparsity in multiple kernel learning, Annals of Statistics,
38, 3660–3695.
Ledoux, M. and Talagrand, M. (1991) Probability in Banach Spaces: Isoperimetry and Processes, Springer: Berlin.
Lin, Y. and Zhang, H.H. (2006) Component selection and smoothing in multivariate nonparametric regression, Annals of Statistics, 34, 2272–2297.
Lorentz, G.G., Golitschek, M.v. and Makovoz, Y. (1996) Constructive Approximation: Advanced Problems, Springer: New York, NY.
Mammen, E. and van de Geer, S. (1997) Locally adaptive regression splines, Annals of Statistics, 25, 387–413.
Meier, L., van de Geer, S., and Bühlmann, P. (2009) High-dimensional additive modeling,
Annals of Statistics, 37, 3779–3821.
Negahban, S.N., Ravikumar, P., Wainwright, M.J., and Yu, B. (2012) A unified framework
for high-dimensional analysis of M-estimators with decomposable regularizers, Statistical
Science, 27, 538–557.
Nirenberg, L. (1966) An extended interpolation inequality, Annali della Scuola Normale Superiore di Pisa, Classe di Scienze, 20, 733–737.
Petersen, A., Witten, D., and Simon, N. (2016) Fused Lasso additive model, Journal of
Computational and Graphical Statistics, 25, 1005–1025.
Raskutti, G., Wainwright, M.J., and Yu, B. (2012) Minimax-optimal rates for sparse additive models over kernel classes via convex programming, Journal of Machine Learning
Research 13, 389–427.
Ravikumar, P., Liu, H., Lafferty, J., and Wasserman, L. (2009) SPAM: Sparse additive models,
Journal of the Royal Statistical Society, Series B, 71, 1009–1030.
Stone, C.J. (1982) Optimal global rates of convergence for nonparametric regression. Annals
of Statistics, 10, 1040–1053.
Stone, C.J. (1985) Additive regression and other nonparametric models. Annals of Statistics,
13, 689–705.
Suzuki, T. and Sugiyama, M. (2013) Fast learning rate of multiple kernel learning: Trade-off
between sparsity and smoothness, Annals of Statistics, 41, 1381–1405.
Tibshirani, R.J. (2014) Adaptive piecewise polynomial estimation via trend filtering, Annals
of Statistics, 42, 285–323.
van de Geer, S. (2000) Empirical Processes in M-Estimation, Cambridge University Press.
van der Vaart, A.W. and Wellner, J. (1996) Weak Convergence and Empirical Processes,
Springer: New York, NY.
Yang, Y. and Tokdar, S.T. (2015) Minimax-optimal nonparametric regression in high dimensions, Annals of Statistics, 43, 652–674.

Yuan, M. and Zhou, D.-X. (2016) Minimax optimal rates of estimation in high dimensional additive models, Annals of Statistics, 44, 2564–2593.
Supplementary Material for “Penalized Estimation in
Additive Regression with High-Dimensional Data”
Zhiqiang Tan & Cun-Hui Zhang
S1 Proofs

S1.1 Proof of Proposition 2
Without loss of generality, assume that $k = 1$ and $0 \le X_1^{(1)} < \cdots < X_n^{(1)} \le 1$.

Consider the case $m = 1$. For any $g = \sum_{j=1}^p g_j$ with $g_1 \in V^1$, define $\tilde g_1$ as a piecewise constant function: $\tilde g_1(z) = g_1(X_1^{(1)})$ for $0 \le z < X_2^{(1)}$, $\tilde g_1(z) = g_1(X_i^{(1)})$ for $X_i^{(1)} \le z < X_{i+1}^{(1)}$, $i = 2, \ldots, n-1$, and $\tilde g_1(z) = g_1(X_n^{(1)})$ for $X_n^{(1)} \le z \le 1$. Let $\tilde g = \tilde g_1 + \sum_{j=2}^p g_j$. Then $\tilde g(X_i) = g(X_i)$ for $i = 1, \ldots, n$, but $\mathrm{TV}(\tilde g_1) \le \mathrm{TV}(g_1)$ and hence $R_n(\tilde g) \le R_n(g)$, which implies the desired result for $m = 1$.

Consider the case $m = 2$. For any $g = \sum_{j=1}^p g_j$ with $g_1 \in V^2$, define $\tilde g_1$ such that $\tilde g_1(X_i^{(1)}) = g_1(X_i^{(1)})$, $i = 1, \ldots, n$, and $\tilde g_1(z)$ is linear in the intervals $[0, X_2^{(1)}]$, $[X_i^{(1)}, X_{i+1}^{(1)}]$, $i = 2, \ldots, n-2$, and $[X_{n-1}^{(1)}, 1]$. Then $\mathrm{TV}(\tilde g_1') = \sum_{i=1}^{n-1}|b_{i+1} - b_i|$, where $b_i$ is the slope of $\tilde g_1$ between $[X_i^{(1)}, X_{i+1}^{(1)}]$. On the other hand, by the mean-value theorem, there exists $z_i \in [X_i^{(1)}, X_{i+1}^{(1)}]$ such that $g_1'(z_i) = b_i$ for $i = 1, \ldots, n-1$. Then $\mathrm{TV}(g_1') \ge \sum_{i=1}^{n-1}|b_{i+1} - b_i|$. Let $\tilde g = \tilde g_1 + \sum_{j=2}^p g_j$. Then $\tilde g(X_i) = g(X_i)$ for $i = 1, \ldots, n$, but $R_n(\tilde g) \le R_n(g)$, which implies the desired result for $m = 2$.
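The $m = 1$ reduction can be illustrated numerically: the total variation of the piecewise-constant interpolant is the sum of jumps between consecutive design values, which never exceeds the total variation of $g_1$ itself. The component function and the design points below are illustrative assumptions, not objects from the paper.

```python
import numpy as np

def total_variation(values):
    # TV of a function represented by its values on an ordered grid.
    return float(np.sum(np.abs(np.diff(values))))

rng = np.random.default_rng(0)
grid = np.linspace(0.0, 1.0, 2001)        # fine grid approximating [0, 1]
g1 = np.sin(6 * np.pi * grid)             # an illustrative component function
design = np.sort(rng.choice(len(grid), size=25, replace=False))

# TV of the piecewise-constant interpolant equals the sum of jumps at
# the design points, and is dominated by TV(g1) (triangle inequality).
tv_interp = total_variation(g1[design])
tv_full = total_variation(g1)
assert tv_interp <= tv_full
```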
S1.2 Proofs of Theorem 1 and corollaries
We split the proof of Theorem 1 and Corollary 1 into five lemmas. The first one provides a probability inequality controlling the magnitude of $\langle\varepsilon, h_j\rangle_n$ in terms of the semi-norm $\|h_j\|_{F,j}$ and the norm $\|h_j\|_n$ for all $h_j \in G_j$ with a single $j$.
Lemma 1. For fixed $j \in \{1, \ldots, p\}$, let
$$A_{nj} = \bigcup_{h_j \in G_j}\bigl\{|\langle\varepsilon, h_j\rangle_n|/C_1 > \gamma_{nj,t}w_{nj}\|h_j\|_{F,j} + \gamma_{nj,t}\|h_j\|_n\bigr\},$$
where $\gamma_{nj,t} = \gamma_{nj} + \sqrt{t/n}$ for $t > 0$, $\gamma_{nj} = n^{-1/2}\psi_{nj}(w_{nj})/w_{nj}$, and $w_{nj} \in (0, 1]$. Under Assumptions 1 and 2, we have
$$P(A_{nj}) \le \exp(-t).$$

Proof. In the event $A_{nj}$, we renormalize $h_j$ by letting $f_j = h_j/(\|h_j\|_{F,j} + \|h_j\|_n/w_{nj})$. Then $\|f_j\|_{F,j} + \|f_j\|_n/w_{nj} = 1$ and hence $f_j \in G_j(w_{nj})$. By Lemma 12 with $F_1 = G_j(w_{nj})$ and $\delta = w_{nj}$, we have for $t > 0$,
$$P(A_{nj}) \le P\Bigl\{\sup_{f_j \in G_j(w_{nj})}|\langle\varepsilon, f_j\rangle_n|/C_1 > \gamma_{nj,t}w_{nj}\Bigr\} = P\Bigl\{\sup_{f_j \in G_j(w_{nj})}|\langle\varepsilon, f_j\rangle_n|/C_1 > n^{-1/2}\psi_{nj}(w_{nj}) + w_{nj}\sqrt{t/n}\Bigr\} \le \exp(-t).$$
By Lemma 1 and the union bound, we obtain a probability inequality controlling the magnitude of $\langle\varepsilon, h_j\rangle_n$ for $h_j \in G_j$ simultaneously over $j = 1, \ldots, p$.

Lemma 2. For each $j \in \{1, \ldots, p\}$, let
$$A_{nj} = \bigcup_{h_j \in G_j}\bigl\{|\langle\varepsilon, h_j\rangle_n| > \lambda_{nj}w_{nj}\|h_j\|_{F,j} + \lambda_{nj}\|h_j\|_n\bigr\},$$
where $\lambda_{nj}/C_1 = \gamma_{nj} + \{\log(p/\epsilon)/n\}^{1/2}$. Under Assumptions 1 and 2, we have
$$P(\cup_{j=1}^p A_{nj}) \le \epsilon.$$

Proof. By Lemma 1 with $t = \log(p/\epsilon)$, we have $P(A_{nj}) \le \exp(-t) = \epsilon/p$ for $j = 1, \ldots, p$. Applying the union bound yields the desired inequality.
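The choice $t = \log(p/\epsilon)$ is exactly what makes the union bound close: each of the $p$ events then has probability $\epsilon/p$. A one-line numeric check with illustrative values of $p$ and $\epsilon$:

```python
import math

p, eps = 500, 0.05                 # illustrative values of p and epsilon
t = math.log(p / eps)              # the choice of t in Lemma 2
per_event = math.exp(-t)           # bound on P(A_nj) from Lemma 1
assert math.isclose(per_event, eps / p)
assert p * per_event <= eps + 1e-12   # union bound over j = 1, ..., p
```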
If $g^* \in G$, then $K_n(\hat g) \le K_n(g^*)$ directly gives the basic inequality:
$$\frac{1}{2}\|\hat g - g^*\|_n^2 + A_0 R_n(\hat g) \le \langle\varepsilon, \hat g - g^*\rangle_n + A_0 R_n(g^*). \qquad (S1)$$
By exploiting the convexity of the regularizer Rn (·), we provide a refinement of the basic
inequality (S1), which relates the estimation error of ĝ to that of any additive function ḡ ∈ G
and the corresponding regularization Rn (ḡ).
Lemma 3. The fact that $\hat g$ is a minimizer of $K_n(g)$ implies that for any function $\bar g(x) = \sum_{j=1}^p \bar g_j(x^{(j)}) \in G$,
$$\frac{1}{2}\|\hat g - g^*\|_n^2 + \frac{1}{2}\|\hat g - \bar g\|_n^2 + A_0 R_n(\hat g) \le \frac{1}{2}\|\bar g - g^*\|_n^2 + \langle\varepsilon, \hat g - \bar g\rangle_n + A_0 R_n(\bar g). \qquad (S2)$$

Proof. For any $t \in (0, 1]$, the fact that $K_n(\hat g) \le K_n((1-t)\hat g + t\bar g)$ implies
$$\frac{t^2}{2}\|\hat g - \bar g\|_n^2 + R_n(\hat g) \le \langle Y - ((1-t)\hat g + t\bar g), t(\hat g - \bar g)\rangle_n + R_n((1-t)\hat g + t\bar g) \le \langle Y - ((1-t)\hat g + t\bar g), t(\hat g - \bar g)\rangle_n + (1-t)R_n(\hat g) + tR_n(\bar g),$$
by similar calculation leading to the basic inequality (S1) and by the convexity of $R_n(\cdot)$: $R_n((1-t)\hat g + t\bar g) \le (1-t)R_n(\hat g) + tR_n(\bar g)$. Using $Y = g^* + \varepsilon$, simple manipulation of the preceding inequality shows that for any $t \in (0, 1]$,
$$\langle\hat g - g^*, \hat g - \bar g\rangle_n - \frac{t}{2}\|\hat g - \bar g\|_n^2 + R_n(\hat g) \le \langle\varepsilon, \hat g - \bar g\rangle_n + R_n(\bar g),$$
which reduces to
$$\frac{1}{2}\|\hat g - g^*\|_n^2 + \frac{1-t}{2}\|\hat g - \bar g\|_n^2 + R_n(\hat g) \le \frac{1}{2}\|\bar g - g^*\|_n^2 + \langle\varepsilon, \hat g - \bar g\rangle_n + R_n(\bar g)$$
by the fact that $2\langle\hat g - g^*, \hat g - \bar g\rangle_n = \|\hat g - g^*\|_n^2 + \|\hat g - \bar g\|_n^2 - \|\bar g - g^*\|_n^2$. Letting $t \searrow 0$ yields the desired inequality (S2).
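The polarization identity used in the last step can be verified numerically on arbitrary vectors of function values (the random vectors below are purely illustrative):

```python
import numpy as np

# Check: 2<a - c, a - b>_n = ||a - c||_n^2 + ||a - b||_n^2 - ||b - c||_n^2,
# with a = ghat, b = gbar, c = gstar and <.,.>_n the empirical inner product.
rng = np.random.default_rng(1)
ghat, gbar, gstar = rng.normal(size=(3, 50))

lhs = 2 * np.mean((ghat - gstar) * (ghat - gbar))
rhs = (np.mean((ghat - gstar) ** 2) + np.mean((ghat - gbar) ** 2)
       - np.mean((gbar - gstar) ** 2))
assert abs(lhs - rhs) < 1e-12
```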
From Lemma 3, we obtain an upper bound on the estimation error of $\hat g$ when the magnitudes of $\langle\varepsilon, \hat g_j - \bar g_j\rangle_n$, $j = 1, \ldots, p$, are controlled by Lemma 2.

Lemma 4. Let $A_n = \cup_{j=1}^p A_{nj}$ with $h_j = \hat g_j - \bar g_j$ in Lemma 2. In the event $A_n^c$, we have for any subset $S \subset \{1, 2, \ldots, p\}$,
$$\frac{1}{2}\|\hat g - g^*\|_n^2 + \frac{1}{2}\|\hat g - \bar g\|_n^2 + (A_0 - 1)R_n(\hat g - \bar g) \le \Delta_n(\bar g, S) + 2A_0\sum_{j \in S}\lambda_{nj}\|\hat g_j - \bar g_j\|_n, \qquad (S3)$$
where
$$\Delta_n(\bar g, S) = \frac{1}{2}\|\bar g - g^*\|_n^2 + 2A_0\Bigl\{\sum_{j=1}^p \rho_{nj}\|\bar g_j\|_{F,j} + \sum_{j \in S^c}\lambda_{nj}\|\bar g_j\|_n\Bigr\}.$$

Proof. By the refined basic inequality (S2), we have in the event $A_n^c$,
$$\frac{1}{2}\|\hat g - g^*\|_n^2 + \frac{1}{2}\|\hat g - \bar g\|_n^2 + A_0 R_n(\hat g) \le \frac{1}{2}\|\bar g - g^*\|_n^2 + R_n(\hat g - \bar g) + A_0 R_n(\bar g).$$
Applying to the preceding inequality the triangle inequalities,
$$\|\hat g_j\|_{F,j} \ge \|\hat g_j - \bar g_j\|_{F,j} - \|\bar g_j\|_{F,j}, \quad j = 1, \ldots, p,$$
$$\|\hat g_j\|_n \ge \|\hat g_j - \bar g_j\|_n - \|\bar g_j\|_n, \quad j \in S^c,$$
$$\|\hat g_j\|_n \ge \|\bar g_j\|_n - \|\hat g_j - \bar g_j\|_n, \quad j \in S,$$
and rearranging the result leads directly to (S3).
Taking $S = \emptyset$ in (S3) yields (13) in Corollary 1. In general, we derive implications of (S3) by invoking the compatibility condition (Assumption 3).

Lemma 5. Suppose that Assumption 3 holds. If $A_0 > (\xi_0 + 1)/(\xi_0 - 1)$, then (S3) implies (12) in Theorem 1.

Proof. For the subset $S$ used in Assumption 3, write
$$Z_n = \frac{1}{2}\|\hat g - g^*\|_n^2 + \frac{1}{2}\|\hat g - \bar g\|_n^2, \quad T_{n1} = \sum_{j=1}^p \rho_{nj}\|\hat g_j - \bar g_j\|_{F,j} + \sum_{j \in S^c}\lambda_{nj}\|\hat g_j - \bar g_j\|_n, \quad T_{n2} = \sum_{j \in S}\lambda_{nj}\|\hat g_j - \bar g_j\|_n.$$
Inequality (S3) can be expressed as
$$Z_n + (A_0 - 1)(T_{n1} + T_{n2}) \le \Delta_n(\bar g, S) + 2A_0 T_{n2},$$
which leads to two possible cases: either
$$\xi_1\{Z_n + (A_0 - 1)(T_{n1} + T_{n2})\} \le \Delta_n(\bar g, S), \qquad (S4)$$
or $(1 - \xi_1)\{Z_n + (A_0 - 1)(T_{n1} + T_{n2})\} \le 2A_0 T_{n2}$, that is,
$$Z_n + (A_0 - 1)(T_{n1} + T_{n2}) \le \frac{2A_0}{1 - \xi_1}T_{n2} = (\xi_0 + 1)(A_0 - 1)T_{n2} = \xi_2 T_{n2}, \qquad (S5)$$
where $\xi_1 = 1 - 2A_0/\{(\xi_0 + 1)(A_0 - 1)\} \in (0, 1]$ because $A_0 > (\xi_0 + 1)/(\xi_0 - 1)$. If (S5) holds, then $T_{n1} \le \xi_0 T_{n2}$, which, by Assumption 3 with $f_j = \hat g_j - \bar g_j$, implies
$$T_{n2} \le \kappa_0^{-1}\Bigl(\sum_{j \in S}\lambda_{nj}^2\Bigr)^{1/2}\|\hat g - \bar g\|_n. \qquad (S6)$$
Combining (S5) and (S6) and using $\|\hat g - \bar g\|_n^2/2 \le Z_n$ yields
$$Z_n + (A_0 - 1)(T_{n1} + T_{n2}) \le 2\xi_2^2\kappa_0^{-2}\sum_{j \in S}\lambda_{nj}^2. \qquad (S7)$$
Therefore, inequality (S3), through (S4) and (S7), implies
$$Z_n + (A_0 - 1)(T_{n1} + T_{n2}) \le \xi_1^{-1}\Delta_n(\bar g, S) + 2\xi_2^2\kappa_0^{-2}\sum_{j \in S}\lambda_{nj}^2.$$
Finally, combining Lemmas 2, 4 and 5 completes the proof of Theorem 1.
Proof of Corollary 2. The result follows from upper bounds of $\sum_{j \in S}\lambda_{nj}^2$ and $\sum_{j \in S^c}\lambda_{nj}\|\bar g_j\|_n$ by the definition $S = \{1 \le j \le p : \|\bar g_j\|_n > C_0\lambda_{nj}\}$. First, because $\sum_{j=1}^p \lambda_{nj}^{2-q}\|\bar g_j\|_n^q \ge \sum_{j \in S}\lambda_{nj}^{2-q}(C_0^+)^q\lambda_{nj}^q$, we have
$$\sum_{j \in S}\lambda_{nj}^2 \le (C_0^+)^{-q}\sum_{j=1}^p \lambda_{nj}^{2-q}\|\bar g_j\|_n^q, \qquad (S8)$$
where for $z \ge 0$, $(z^+)^q = z^q$ if $q > 0$ or $= 1$ if $q = 0$. Second, because $\sum_{j \in S^c}\lambda_{nj}\|\bar g_j\|_n \le \sum_{j=1}^p \lambda_{nj}(C_0\lambda_{nj})^{1-q}\|\bar g_j\|_n^q$, we have
$$\sum_{j \in S^c}\lambda_{nj}\|\bar g_j\|_n \le C_0^{1-q}\sum_{j=1}^p \lambda_{nj}^{2-q}\|\bar g_j\|_n^q. \qquad (S9)$$
Inserting (S8) and (S9) into (12) yields the desired inequality.
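Both bounds are elementary consequences of the thresholding definition of $S$ and can be checked numerically (the values of $\lambda_{nj}$ and $\|\bar g_j\|_n$ below are illustrative):

```python
import numpy as np

rng = np.random.default_rng(2)
p, q, C0 = 200, 0.5, 1.5                 # illustrative dimension, q, and C0
lam = rng.uniform(0.01, 0.1, size=p)     # lambda_nj
gnorm = rng.uniform(0.0, 1.0, size=p)    # ||gbar_j||_n
S = gnorm > C0 * lam                     # S = {j : ||gbar_j||_n > C0 lambda_nj}

bound = np.sum(lam ** (2 - q) * gnorm ** q)
assert np.sum(lam[S] ** 2) <= C0 ** (-q) * bound             # (S8)
assert np.sum(lam[~S] * gnorm[~S]) <= C0 ** (1 - q) * bound  # (S9)
```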
Proof of Corollary 3. The result follows directly from Corollary 2, because $\lambda_{nj}^{2-q} = C_1^{2-q}\{\gamma_n(q) + \nu_n\}^{2-q}$ and $\rho_{nj} = C_1\{\gamma_n(q) + \nu_n\}\gamma_n^{1-q}(q) \le C_1\{\gamma_n(q) + \nu_n\}^{2-q}$, where $\nu_n = \{\log(p/\epsilon)/n\}^{1/2}$.
S1.3 Proofs of Theorem 2 and corollaries
Write $h_j = \hat g_j - \bar g_j$ and $h = \hat g - \bar g$ and, for the subset $S$ used in Assumption 5,
$$Z_n = \frac{1}{2}\|\hat g - g^*\|_n^2 + \frac{1}{2}\|h\|_n^2, \quad T_{n1}^* = \sum_{j=1}^p \rho_{nj}\|h_j\|_{F,j} + \sum_{j \in S^c}\lambda_{nj}\|h_j\|_Q, \quad T_{n2}^* = \sum_{j \in S}\lambda_{nj}\|h_j\|_Q.$$
Compared with the definitions in Section S1.2, $Z_n$ is the same as before, and $T_{n1}^*$ and $T_{n2}^*$ are similar to $T_{n1}$ and $T_{n2}$, but with $\|h_j\|_Q$ used instead of $\|h_j\|_n$.

Let
$$\Omega_{n1} = \Bigl\{\sup_{g \in G}\frac{\|g\|_n^2 - \|g\|_Q^2}{R_n^{*2}(g)} \le \phi_n\Bigr\}.$$
Then $P(\Omega_{n1}) \ge 1 - \pi$. In the event $\Omega_{n1}$, we have by Assumption 6(i),
$$\max_{j=1,\ldots,p}\sup_{g_j \in G_j}\frac{|\|g_j\|_n - \|g_j\|_Q|}{w_{nj}\|g_j\|_{F,j} + \|g_j\|_Q} \le \lambda_{n,p+1}\phi_n^{1/2} \le \eta_0. \qquad (S10)$$
By direct calculation, (S10) implies that if $\|g_j\|_{F,j} + \|g_j\|_n/w_{nj} \le 1$ then $\|g_j\|_{F,j} + \|g_j\|_Q/w_{nj} \le (1 - \eta_0)^{-1}$ and hence (S10) implies that
$$H(u, G_j(w_{nj}), \|\cdot\|_n) \le H((1 - \eta_0)u, G_j^*(w_{nj}), \|\cdot\|_n),$$
and $\psi_{nj}(w_{nj})$ satisfying (17) also satisfies (11) for $\delta = w_{nj}$. Let $\Omega_{n2} = A_n^c$ in Lemma 4. Then conditionally on $X_{1:n} = (X_1, \ldots, X_n)$ for which $\Omega_{n1}$ occurs, we have $P(\Omega_{n2}|X_{1:n}) \ge 1 - \epsilon$ by Lemma 2. Therefore, $P(\Omega_{n1} \cap \Omega_{n2}) \ge (1 - \epsilon)(1 - \pi) \ge 1 - \epsilon - \pi$.
In the event $\Omega_{n2}$, recall that (S3) holds, that is,
$$Z_n + (A_0 - 1)R_n(h) \le \Delta_n(\bar g, S) + 2A_0\sum_{j \in S}\lambda_{nj}\|h_j\|_n. \qquad (S11)$$
In the event $\Omega_{n1} \cap \Omega_{n2}$, simple manipulation of (S11) using (S10) shows that
$$Z_n + A_1 R_n^*(h) \le \Delta_n^*(\bar g, S) + 2A_0\sum_{j \in S}\lambda_{nj}\|h_j\|_Q, \qquad (S12)$$
where $A_1 = (A_0 - 1) - \eta_0(A_0 + 1) > 0$ because $A_0 > (1 + \eta_0)/(1 - \eta_0)$. In the following, we restrict to the event $\Omega_{n1} \cap \Omega_{n2}$ with probability at least $1 - \epsilon - \pi$.
Proof of Corollary 4. Taking $S = \emptyset$ in (S12) yields (24), that is,
$$Z_n + A_1 R_n^*(h) \le \Delta_n^*(\bar g, g^*, \emptyset).$$
As a result, $R_n^*(h) \le A_1^{-1}\Delta_n^*(\bar g, g^*, \emptyset)$ and hence $\|h\|_Q^2 \le \|h\|_n^2 + \phi_n R_n^{*2}(h) \le \|h\|_n^2 + \phi_n A_1^{-2}\Delta_n^{*2}(\bar g, g^*, \emptyset)$. Inequality (25) then follows from (24).
Proof of Theorem 2. Inequality (S12) can be expressed as
$$Z_n + A_1(T_{n1}^* + T_{n2}^*) \le \Delta_n^*(\bar g, g^*, S) + 2A_0 T_{n2}^*,$$
which leads to two possible cases: either
$$\xi_1^*\{Z_n + A_1(T_{n1}^* + T_{n2}^*)\} \le \Delta_n^*(\bar g, S), \qquad (S13)$$
or $(1 - \xi_1^*)\{Z_n + A_1(T_{n1}^* + T_{n2}^*)\} \le 2A_0 T_{n2}^*$, that is,
$$Z_n + A_1(T_{n1}^* + T_{n2}^*) \le \frac{2A_0}{1 - \xi_1^*}T_{n2}^* = (\xi_0^* + 1)A_1 T_{n2}^* = \xi_2^* T_{n2}^*, \qquad (S14)$$
where $\xi_1^* = 1 - 2A_0/\{(\xi_0^* + 1)A_1\} \in (0, 1]$ because $A_0 > \{\xi_0^* + 1 + \eta_0(\xi_0^* + 1)\}/\{\xi_0^* - 1 - \eta_0(\xi_0^* + 1)\}$. If (S14) holds, then $T_{n1}^* \le \xi_0^* T_{n2}^*$, which, by the theoretical compatibility condition (Assumption 5) with $f_j = \hat g_j - \bar g_j$, implies
$$T_{n2}^* \le \kappa_0^{*-1}\Bigl(\sum_{j \in S}\lambda_{nj}^2\Bigr)^{1/2}\|h\|_Q \qquad (S15)$$
$$\le \kappa_0^{*-1}\Bigl(\sum_{j \in S}\lambda_{nj}^2\Bigr)^{1/2}\bigl\{\|h\|_n + \phi_n^{1/2}(T_{n1}^* + T_{n2}^*)\bigr\}. \qquad (S16)$$
By Assumption 6(ii), we have $\phi_n\xi_2^{*2}\kappa_0^{*-2}(\sum_{j \in S}\lambda_{nj}^2) \le \eta_1^2 A_1^2$. Combining this fact, (S14) and (S16) and simple manipulation yields
$$Z_n + (1 - \eta_1)A_1(T_{n1}^* + T_{n2}^*) \le \xi_2^*\kappa_0^{*-1}\Bigl(\sum_{j \in S}\lambda_{nj}^2\Bigr)^{1/2}\|h\|_n,$$
which, by the fact that $\|h\|_n^2/2 \le Z_n$, implies
$$Z_n + (1 - \eta_1)A_1(T_{n1}^* + T_{n2}^*) \le 2\xi_2^{*2}\kappa_0^{*-2}\sum_{j \in S}\lambda_{nj}^2. \qquad (S17)$$
Therefore, inequality (S12), through (S13) and (S17), implies (22):
$$Z_n + (1 - \eta_1)A_1(T_{n1}^* + T_{n2}^*) \le \xi_1^{*-1}\Delta_n^*(\bar g, S) + 2\xi_2^{*2}\kappa_0^{*-2}\sum_{j \in S}\lambda_{nj}^2.$$
To demonstrate (23), we return to the two possible cases, (S13) or (S14). On one hand, if (S13) holds, then $A_1 R_n^*(h) = A_1(T_{n1}^* + T_{n2}^*)$ is also bounded from above by the right hand side of (S13) and hence
$$\|h\|_Q^2 \le \|h\|_n^2 + \phi_n R_n^{*2}(h) \le \|h\|_n^2 + \frac{\phi_n}{A_1^2}\xi_1^{*-2}\Delta_n^{*2}(\bar g, g^*, S). \qquad (S18)$$
Simple manipulation of (S13) using (S18) yields
$$\frac{1}{2}\|\hat g - g^*\|_n^2 + \frac{1}{2}\|h\|_Q^2 + A_1(T_{n1}^* + T_{n2}^*) \le \xi_1^{*-1}\Delta_n^*(\bar g, S) + \frac{\phi_n}{2A_1^2}\xi_1^{*-2}\Delta_n^{*2}(\bar g, g^*, S). \qquad (S19)$$
On the other hand, combining (S14) and (S15) yields
$$Z_n + A_1(T_{n1}^* + T_{n2}^*) \le \xi_2^*\kappa_0^{*-1}\Bigl(\sum_{j \in S}\lambda_{nj}^2\Bigr)^{1/2}\|h\|_Q. \qquad (S20)$$
As a result, $A_1 R_n^*(h) = A_1(T_{n1}^* + T_{n2}^*)$ is also bounded from above by the right hand side of (S20) and hence by Assumption 6(ii),
$$\|h\|_Q^2 \le \|h\|_n^2 + \phi_n R_n^{*2}(h) \le \|h\|_n^2 + \frac{\phi_n}{A_1^2}\xi_2^{*2}\kappa_0^{*-2}\Bigl(\sum_{j \in S}\lambda_{nj}^2\Bigr)\|h\|_Q^2 \le \|h\|_n^2 + \eta_1^2\|h\|_Q^2. \qquad (S21)$$
Simple manipulation of (S20) using (S21) yields
$$\frac{1}{2}\|\hat g - g^*\|_n^2 + \frac{1 - \eta_1^2}{2}\|h\|_Q^2 + A_1(T_{n1}^* + T_{n2}^*) \le \xi_2^*\kappa_0^{*-1}\Bigl(\sum_{j \in S}\lambda_{nj}^2\Bigr)^{1/2}\|h\|_Q,$$
which, when squared on both sides, implies
$$\frac{1}{2}\|\hat g - g^*\|_n^2 + \frac{1 - \eta_1^2}{2}\|h\|_Q^2 + A_1(T_{n1}^* + T_{n2}^*) \le \frac{2}{1 - \eta_1^2}\xi_2^{*2}\kappa_0^{*-2}\sum_{j \in S}\lambda_{nj}^2. \qquad (S22)$$
Therefore, inequality (S12), through (S19) and (S22), implies
$$\frac{1}{2}\|\hat g - g^*\|_n^2 + \frac{1 - \eta_1^2}{2}\|h\|_Q^2 + A_1(T_{n1}^* + T_{n2}^*) \le \xi_1^{*-1}\Delta_n^*(\bar g, S) + \frac{2}{1 - \eta_1^2}\xi_2^{*2}\kappa_0^{*-2}\sum_{j \in S}\lambda_{nj}^2 + \frac{\phi_n}{2A_1^2}\xi_1^{*-2}\Delta_n^{*2}(\bar g, g^*, S),$$
which yields (23) after dividing both sides by $1 - \eta_1^2$.
Proof of Corollary 5. We use the following upper bounds, obtained from (S8) and (S9) with $S = \{1 \le j \le p : \|\bar g_j\|_Q > C_0^*\lambda_{nj}\}$,
$$\sum_{j \in S}\lambda_{nj}^2 \le (C_0^{*+})^{-q}\sum_{j=1}^p \lambda_{nj}^{2-q}\|\bar g_j\|_Q^q, \qquad (S23)$$
and
$$\sum_{j \in S^c}\lambda_{nj}\|\bar g_j\|_Q \le C_0^{*1-q}\sum_{j=1}^p \lambda_{nj}^{2-q}\|\bar g_j\|_Q^q. \qquad (S24)$$
Equations (21) and (26) together imply $\phi_n\Delta_n^*(\bar g, g^*, S) = O(1) + \phi_n\|\bar g - g^*\|_n^2/2$. Inserting this into (23) and applying (S23) and (S24) yields the high-probability result about $D_n^*(\hat g, \bar g)$. The in-probability result follows by combining the facts that $\epsilon \to 0$, $\|\bar g - g^*\|_n^2 = O_p(1)\|\bar g - g^*\|_Q^2$ by the Markov inequality, and $\|\hat g - g^*\|_Q^2 \le 2(\|\hat g - \bar g\|_Q^2 + \|\bar g - g^*\|_Q^2)$ by the triangle inequality.
Proof of Corollary 6. First, we show
$$w_n^*(q) \le \{\gamma_n^*(q) + \nu_n\}^{1-q}. \qquad (S25)$$
In fact, if $\gamma_n(q) \ge \nu_n$, then $\gamma_n^*(q) = \gamma_n(q)$ and $w_n^*(q) = \gamma_n(q)^{1-q} \le \{\gamma_n^*(q) + \nu_n\}^{1-q}$. If $\gamma_n(q) < \nu_n$, then $w_n^*(q) = \nu_n^{1-q} \le \{\gamma_n^*(q) + \nu_n\}^{1-q}$. By (S23), (S24), and (S25), inequality (30) implies that for any constants $0 < \eta_1 < 1$ and $\eta_2 > 0$, (21) and (26) are satisfied for sufficiently large $n$. The desired result follows from Corollary 5 with $\bar g = g^*$, because $\lambda_{nj}^{2-q} = C_1^{2-q}\{\gamma_n^*(q) + \nu_n\}^{2-q} \le C_1^{2-q}\{\gamma_n(q) + \nu_n\}^{2-q}$ and, by (S25), $\rho_{nj} = C_1 w_n^*(q)\{\gamma_n^*(q) + \nu_n\} \le C_1\{\gamma_n(q) + \nu_n\}^{2-q}$.
Proof of Corollary 7. For a constant $0 < \eta_1 < 1$, we choose and fix $C_0^{*\prime} \ge C_0^*$ sufficiently large, depending on $q > 0$, such that
$$(\xi_0^* + 1)^2\kappa_0^{*-2}\eta_3 \le (C_0^{*\prime})^q\eta_1^2.$$
Let $S' = \{1 \le j \le p : \|g_j^*\|_Q > C_0^{*\prime}\lambda_{nj}\}$. Then (21) is satisfied with $S$ replaced by $S'$, due to (33), (S23), and the definition $\lambda_{nj} = C_1\{\gamma_n^*(q) + \nu_n\}$. Similarly, (26) is satisfied with $S$ replaced by $S'$ for $\eta_2 = M^q + (C_0^{*\prime})^{1-q}M^q$, by (S24) and simple manipulation. By Remark 7, Assumption 7 implies Assumption 5 and remains valid when $S$ is replaced by $S' \subset S$. The desired result follows from Corollary 5 with $\bar g = g^*$.
Proof of Corollary 8. The proof is similar to that of Corollary 6. First, we show
$$w_n^\dagger(q)M_F \le \{\gamma_n^\dagger(q) + \nu_n\}^{1-q}M_q. \qquad (S26)$$
In fact, if $\gamma_n'(q) \ge \nu_n$, then $\gamma_n^\dagger(q) = \gamma_n'(q)$ and $w_n^\dagger(q)M_F = \gamma_n'(q)^{1-q}M_q \le \{\gamma_n^\dagger(q) + \nu_n\}^{1-q}M_q$. If $\gamma_n'(q) < \nu_n$, then $w_n^\dagger(q)M_F = \nu_n^{1-q}M_q \le \{\gamma_n^\dagger(q) + \nu_n\}^{1-q}M_q$. Then (36) implies that for any constants $0 < \eta_1 < 1$ and $\eta_2 > 0$, (21) and (26) are satisfied for sufficiently large $n$. The desired result follows from Corollary 5 with $\bar g = g^*$, because $\lambda_{nj}^{2-q}M_q = C_1^{2-q}\{\gamma_n^\dagger(q) + \nu_n\}^{2-q}M_q \le C_1^{2-q}\{\gamma_n'(q) + \nu_n\}^{2-q}M_q$ and, by (S26), $\rho_{nj}M_F = C_1 w_n^\dagger(q)\{\gamma_n^\dagger(q) + \nu_n\}M_F \le C_1\{\gamma_n'(q) + \nu_n\}^{2-q}M_q$.
S1.4 Proof of Theorem 3
We split the proof into three lemmas. First, we provide maximal inequalities on convergence
of empirical inner products in functional classes with polynomial entropies.
Lemma 6. Let $F_1$ and $F_2$ be two functional classes such that
$$\sup_{f_j \in F_j}\|f_j\|_Q \le \delta_j, \qquad \sup_{f_j \in F_j}\|f_j\|_\infty \le b_j, \qquad j = 1, 2.$$
Suppose that for some $0 < \beta_j < 2$ and $B_{nj,\infty} > 0$, condition (S45) holds with
$$\psi_{n,\infty}(z, F_j) = B_{nj,\infty}z^{1-\beta_j/2}, \qquad j = 1, 2. \qquad (S27)$$
Then we have
$$E\Bigl\{\sup_{f_1 \in F_1, f_2 \in F_2}|\langle f_1, f_2\rangle_n - \langle f_1, f_2\rangle_Q|/C_2\Bigr\} \le 2\Bigl(\delta_1 + \frac{2C_2\psi_{n,\infty}(b_1, F_1)}{\sqrt n}\Bigr)^{1-\beta_1/2}\Bigl(\delta_2 + \frac{2C_2\psi_{n,\infty}(b_2, F_2)}{\sqrt n}\Bigr)^{\beta_1/2}\frac{\psi_{n,\infty}(b_2, F_1)}{\sqrt n} + 2\Bigl(\delta_2 + \frac{2C_2\psi_{n,\infty}(b_2, F_2)}{\sqrt n}\Bigr)^{1-\beta_2/2}\Bigl(\delta_1 + \frac{2C_2\psi_{n,\infty}(b_1, F_1)}{\sqrt n}\Bigr)^{\beta_2/2}\frac{\psi_{n,\infty}(b_1, F_2)}{\sqrt n}. \qquad (S28)$$
Moreover, we have for any $t > 0$,
$$\sup_{f_1 \in F_1, f_2 \in F_2}|\langle f_1, f_2\rangle_n - \langle f_1, f_2\rangle_Q|/C_3 \le E\Bigl\{\sup_{f_1 \in F_1, f_2 \in F_2}|\langle f_1, f_2\rangle_n - \langle f_1, f_2\rangle_Q|\Bigr\} + \delta_1 b_2\sqrt{\frac{t}{n}} + b_1 b_2\frac{t}{n}, \qquad (S29)$$
with probability at least $1 - \mathrm{e}^{-t}$.
Proof. For any functions $f_1, f_1' \in F_1$ and $f_2, f_2' \in F_2$, we have by triangle inequalities,
$$\|f_1f_2 - f_1'f_2'\|_n \le \hat\delta_2\|f_1 - f_1'\|_{n,\infty} + \hat\delta_1\|f_2 - f_2'\|_{n,\infty}.$$
As a result, we have for $u > 0$,
$$H(u, F_1 \times F_2, \|\cdot\|_n) \le H\{u/(2\hat\delta_2), F_1, \|\cdot\|_{n,\infty}\} + H\{u/(2\hat\delta_1), F_2, \|\cdot\|_{n,\infty}\}, \qquad (S30)$$
where $F_1 \times F_2 = \{f_1f_2 : f_1 \in F_1, f_2 \in F_2\}$.

By the symmetrization inequality (van der Vaart & Wellner 1996),
$$E\Bigl\{\sup_{f_1 \in F_1, f_2 \in F_2}|\langle f_1, f_2\rangle_n - \langle f_1, f_2\rangle_Q|\Bigr\} \le 2E\Bigl\{\sup_{f_1 \in F_1, f_2 \in F_2}|\langle\sigma, f_1f_2\rangle_n|\Bigr\}.$$
Let $\hat\delta_{12} = \sup_{f_1 \in F_1, f_2 \in F_2}\|f_1f_2\|_n \le \min(\hat\delta_1 b_2, \hat\delta_2 b_1)$. By Dudley's inequality (Lemma 13) conditionally on $X_{1:n} = (X_1, \ldots, X_n)$, we have
$$E\Bigl\{\sup_{f_1 \in F_1, f_2 \in F_2}|\langle\sigma, f_1f_2\rangle_n|\,\Big|\,X_{1:n}\Bigr\}/C_2 \le E\Bigl\{\int_0^{\hat\delta_{12}}H^{1/2}(u, F_1 \times F_2, \|\cdot\|_n)\,\mathrm{d}u\,\Big|\,X_{1:n}\Bigr\}.$$
Taking expectations over $X_{1:n}$, we have by (S30), (S45), and the definition of $H^*(\cdot)$,
$$E\Bigl\{\sup_{f_1 \in F_1, f_2 \in F_2}|\langle f_1, f_2\rangle_n - \langle f_1, f_2\rangle_Q|/C_2\Bigr\} \le E\Bigl[\int_0^{\hat\delta_1 b_2}H^{*1/2}\{u/(2\hat\delta_2), F_1, \|\cdot\|_{n,\infty}\}\,\mathrm{d}u + \int_0^{\hat\delta_2 b_1}H^{*1/2}\{u/(2\hat\delta_1), F_2, \|\cdot\|_{n,\infty}\}\,\mathrm{d}u\Bigr] \le E\bigl[\hat\delta_2\psi_{n,\infty}(\hat\delta_1 b_2/\hat\delta_2, F_1) + \hat\delta_1\psi_{n,\infty}(\hat\delta_2 b_1/\hat\delta_1, F_2)\bigr]. \qquad (S31)$$
By (S27) and the Hölder inequality, we have
$$E\bigl\{\hat\delta_2\psi_{n,\infty}(\hat\delta_1 b_2/\hat\delta_2, F_1)\bigr\} \le B_{n1,\infty}b_2^{1-\beta_1/2}E\bigl\{\hat\delta_2^{\beta_1/2}\hat\delta_1^{1-\beta_1/2}\bigr\} \le B_{n1,\infty}b_2^{1-\beta_1/2}E^{\beta_1/2}(\hat\delta_2)E^{1-\beta_1/2}(\hat\delta_1) \le B_{n1,\infty}b_2^{1-\beta_1/2}E^{\beta_1/4}(\hat\delta_2^2)E^{(2-\beta_1)/4}(\hat\delta_1^2),$$
and similarly
$$E\bigl\{\hat\delta_1\psi_{n,\infty}(\hat\delta_2 b_1/\hat\delta_1, F_2)\bigr\} \le B_{n2,\infty}b_1^{1-\beta_2/2}E^{\beta_2/4}(\hat\delta_1^2)E^{(2-\beta_2)/4}(\hat\delta_2^2).$$
Then inequality (S28) follows from (S31) and Lemma 16. Moreover, inequality (S29) follows from Talagrand's inequality (Lemma 14) because $\|f_1f_2\|_Q \le \delta_1 b_2$ and $\|f_1f_2\|_\infty \le b_1 b_2$ for $f_1 \in F_1$ and $f_2 \in F_2$.
By application of Lemma 6, we obtain the following result on uniform convergence of empirical inner products under conditions (39), (40), and (41).

Lemma 7. Suppose the conditions of Theorem 3 are satisfied for $j = 1, 2$ and $p = 2$. Let $F_j = G_j^*(w_{nj})$ for $j = 1, 2$. Then we have
$$E\Bigl\{\sup_{f_1 \in F_1, f_2 \in F_2}|\langle f_1, f_2\rangle_n - \langle f_1, f_2\rangle_Q|/C_2\Bigr\} \le 2(1 + 2C_2C_4)C_4\,n^{1/2}\Gamma_n w_{n1}w_{n2}\bigl(\gamma_{n1}\tilde\gamma_{n2}w_{n2}^{\beta_1\tau_2/2} + \gamma_{n2}\tilde\gamma_{n1}w_{n1}^{\beta_2\tau_1/2}\bigr),$$
where $0 < \tau_j \le (2/\beta_j - 1)^{-1}$, $C_4 = \max_{j=1,2}C_{4,j}$ from condition (41), and $\tilde\gamma_{nj} = n^{-1/2}w_{nj}^{-\tau_j}$. Moreover, we have for any $t > 0$,
$$\sup_{f_1 \in F_1, f_2 \in F_2}|\langle f_1, f_2\rangle_n - \langle f_1, f_2\rangle_Q|/C_3 \le E\Bigl\{\sup_{f_1 \in F_1, f_2 \in F_2}|\langle f_1, f_2\rangle_n - \langle f_1, f_2\rangle_Q|\Bigr\} + w_{n1}w_{n2}\bigl(C_4 t^{1/2}\tilde\gamma_{n2} + C_4^2 t\tilde\gamma_{n1}\tilde\gamma_{n2}\bigr),$$
with probability at least $1 - \mathrm{e}^{-t}$.

Proof. For $f_j \in F_j$ with $w_{nj} \le 1$, we have $\|f_j\|_{F,j} \le 1$ and $\|f_j\|_Q \le w_{nj}$, and hence $\|f_j\|_\infty \le C_4 w_{nj}^{1-\tau_j}$ by (41). Let $\psi_{n,\infty}(\cdot, F_j) = \psi_{nj,\infty}(\cdot, w_{nj})$ from (40), that is, in the form (S27) such that (S45) is satisfied. We apply Lemma 6 with $\delta_j = w_{nj}$ and $b_j = C_4 w_{nj}^{1-\tau_j}$. By simple manipulation, we have
$$n^{-1/2}\psi_{n,\infty}(b_j, F_j) = n^{-1/2}\psi_{nj,\infty}(C_4 w_{nj}^{1-\tau_j}, w_{nj}) \le C_4 B_{nj,\infty}n^{-1/2}w_{nj}^{-\beta_j/2}w_{nj}^{1-(1-\beta_j/2)\tau_j} \le C_4\Gamma_n\gamma_{nj}w_{nj}^{1-\beta_j/2} \le C_4 w_{nj},$$
where $C_4 \ge 1$ is used in the second step, $B_{nj,\infty} \le \Gamma_n B_{nj}$ and $(1 - \beta_j/2)\tau_j \le \beta_j/2$ in the third step, and $\gamma_{nj} \le w_{nj}$ and $\Gamma_n\gamma_{nj}w_{nj}^{-\beta_j/2} \le \Gamma_n\gamma_{nj}^{1-\beta_j/2} \le 1$ in the fourth step. Therefore, inequality (S28) yields
$$E\Bigl\{\sup_{f_1 \in F_1, f_2 \in F_2}|\langle f_1, f_2\rangle_n - \langle f_1, f_2\rangle_Q|/C_2\Bigr\} \le 2(1 + 2C_2C_4)n^{-1/2}w_{n1}^{1-\beta_1/2}w_{n2}^{\beta_1/2}\psi_{n1,\infty}(C_4 w_{n2}^{1-\tau_2}, w_{n1}) + 2(1 + 2C_2C_4)n^{-1/2}w_{n2}^{1-\beta_2/2}w_{n1}^{\beta_2/2}\psi_{n2,\infty}(C_4 w_{n1}^{1-\tau_1}, w_{n2}) \le 2(1 + 2C_2C_4)C_4 n^{-1/2}w_{n1}^{1-\beta_1/2}B_{n1,\infty}w_{n2}w_{n2}^{-\tau_2+\beta_1\tau_2/2} + 2(1 + 2C_2C_4)C_4 n^{-1/2}w_{n2}^{1-\beta_2/2}B_{n2,\infty}w_{n1}w_{n1}^{-\tau_1+\beta_2\tau_1/2},$$
which leads to the first desired inequality because $B_{nj,\infty} \le \Gamma_n B_{nj}$. Moreover, simple manipulation gives
$$\delta_1 b_2\sqrt{\frac{t}{n}} = C_4 w_{n1}w_{n2}^{1-\tau_2}\sqrt{\frac{t}{n}} = C_4 t^{1/2}w_{n1}w_{n2}\tilde\gamma_{n2}, \qquad b_1 b_2\frac{t}{n} = C_4^2 w_{n1}^{1-\tau_1}w_{n2}^{1-\tau_2}\frac{t}{n} = C_4^2 t\,w_{n1}\tilde\gamma_{n1}w_{n2}\tilde\gamma_{n2}.$$
The second desired inequality follows from (S29).
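The two closing identities of this proof are pure algebra in $(w_{n1}, w_{n2}, \tau_1, \tau_2, t, n)$ and can be confirmed numerically; the concrete values below are illustrative only.

```python
import math

n, t, C4 = 10_000, 3.2, 2.0              # illustrative values
wn1, wn2, tau1, tau2 = 0.4, 0.6, 0.3, 0.7
gt1 = n ** -0.5 * wn1 ** -tau1           # gamma~_n1 = n^{-1/2} w_n1^{-tau_1}
gt2 = n ** -0.5 * wn2 ** -tau2           # gamma~_n2 = n^{-1/2} w_n2^{-tau_2}
delta1 = wn1                             # delta_1 = w_n1
b1 = C4 * wn1 ** (1 - tau1)              # b_1 = C4 w_n1^{1-tau_1}
b2 = C4 * wn2 ** (1 - tau2)              # b_2 = C4 w_n2^{1-tau_2}

assert math.isclose(delta1 * b2 * math.sqrt(t / n),
                    C4 * math.sqrt(t) * wn1 * wn2 * gt2)
assert math.isclose(b1 * b2 * t / n,
                    C4 ** 2 * t * wn1 * gt1 * wn2 * gt2)
```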
The following result concludes the proof of Theorem 3.

Lemma 8. In the setting of Theorem 3, let
$$\phi_n = 4C_2C_3(1 + 2C_2C_4)C_4\,n^{1/2}\Gamma_n\Bigl(\max_j\frac{\gamma_{nj}}{\lambda_{nj}}\Bigr)\Bigl(\max_j\frac{\tilde\gamma_{nj}w_{nj}^{\beta_{p+1}\tau_j/2}}{\lambda_{nj}}\Bigr) + 2C_3C_4\Bigl(\max_j\frac{\tilde\gamma_{nj}}{\lambda_{nj}}\Bigr)\Bigl(\max_j\frac{\sqrt{2\log(p/\epsilon')}}{\lambda_{nj}}\Bigr) + 2C_3C_4^2\max_j\frac{2\tilde\gamma_{nj}^2\log(p/\epsilon')}{\lambda_{nj}^2},$$
where $\tilde\gamma_{nj} = n^{-1/2}w_{nj}^{-\tau_j}$ and $\beta_{p+1} = \min_{j=1,\ldots,p}\beta_j$. Then
$$P\Bigl(\sup_{g \in G}\frac{\|g\|_n^2 - \|g\|_Q^2}{R_n^{*2}(g)} > \phi_n\Bigr) \le \epsilon'.$$

Proof. For $j = 1, \ldots, p$, let $r_{nj}^*(g_j) = \|g_j\|_{F,j} + \|g_j\|_Q/w_{nj}$ and $f_j = g_j/r_{nj}^*(g_j)$. Then $\|f_j\|_{F,j} + \|f_j\|_Q/w_{nj} = 1$ and hence $f_j \in G_j^*(w_{nj})$. By the decomposition $\|g\|_n^2 = \sum_{j,k}\langle g_j, g_k\rangle_n$, $\|g\|_Q^2 = \sum_{j,k}\langle g_j, g_k\rangle_Q$, and the triangle inequality, we have
$$\|g\|_n^2 - \|g\|_Q^2 \le \sum_{j,k}|\langle g_j, g_k\rangle_n - \langle g_j, g_k\rangle_Q| = \sum_{j,k}r_{nj}^*(g_j)r_{nk}^*(g_k)|\langle f_j, f_k\rangle_n - \langle f_j, f_k\rangle_Q|.$$
Because $R_n^{*2}(g) = \sum_{j,k}r_{nj}^*(g_j)r_{nk}^*(g_k)w_{nj}\lambda_{nj}w_{nk}\lambda_{nk}$, we have
$$\Bigl\{\sup_{g = \sum_{j=1}^p g_j}\frac{\|g\|_n^2 - \|g\|_Q^2}{R_n^{*2}(g)} > \phi_n\Bigr\} \subset \bigcup_{j,k}\Bigl\{\sup_{f_j \in G_j^*(w_{nj}),\,f_k \in G_k^*(w_{nk})}|\langle f_j, f_k\rangle_n - \langle f_j, f_k\rangle_Q| > \phi_n w_{nj}\lambda_{nj}w_{nk}\lambda_{nk}\Bigr\}.$$
By Lemma 7 with $F_1 = G_j^*(w_{nj})$, $F_2 = G_k^*(w_{nk})$, and $t = \log(p^2/\epsilon'^2)$, we have with probability no greater than $\epsilon'^2/p^2$,
$$\sup_{f_j \in G_j^*(w_{nj}),\,f_k \in G_k^*(w_{nk})}|\langle f_j, f_k\rangle_n - \langle f_j, f_k\rangle_Q|/C_3 > 4C_2(1 + 2C_2C_4)C_4\,n^{1/2}\Gamma_n W_n w_{nj}\gamma_{nj}w_{nk}\gamma_{nk} + C_4 n^{1/2}V_n w_{nj}w_{nk}\gamma_{nk}\sqrt{\log(p^2/\epsilon'^2)/n} + C_4^2 V_n^2\log(p^2/\epsilon'^2)w_{nj}\gamma_{nj}w_{nk}\gamma_{nk}.$$
Therefore, we have by the definition of $\phi_n$,
$$P\Bigl(\sup_{f_j \in G_j^*(w_{nj}),\,f_k \in G_k^*(w_{nk})}|\langle f_j, f_k\rangle_n - \langle f_j, f_k\rangle_Q| > \phi_n w_{nj}\lambda_{nj}w_{nk}\lambda_{nk}\Bigr) \le \frac{\epsilon'^2}{p^2}.$$
The desired result follows from the union bound.

S1.5 Proofs of Propositions 3, 4, 6, and 8
Denote $w_{nj} = w_{n,p+1}$ and $\gamma_{nj} = \gamma_{n,p+1}$ for $j = 1, \ldots, p$. By direct calculation, (42) implies that for any $0 \le q \le 1$,
$$\phi_n(\gamma_{n,p+1} + \nu_n)^{2-q} \le O(1)\bigl\{n^{1/2}\Gamma_n W_n\gamma_{n,p+1}^{2-q} + n^{1/2}V_n\min(\gamma_{n,p+1}\nu_n^{1-q},\,\gamma_{n,p+1}^{1-q}\nu_n) + nV_n^2\min(\gamma_{n,p+1}^2\nu_n^{2-q},\,\gamma_{n,p+1}^{2-q}\nu_n^2)\bigr\}, \qquad (S32)$$
where
$$V_n = w_{n,p+1}^{\beta_0/2-\tau_0}, \qquad W_n = w_{n,p+1}^{\beta_0/2-\tau_0+\beta_0\tau_0/2}. \qquad (S33)$$
We verify that the technical conditions hold as needed for Theorem 3, with $w_{nj} = w_n^*(q)$ and $\gamma_{nj} = \gamma_n^*(q)$ for $0 \le q \le 1$. First, we verify $\gamma_{nj} \le w_{nj}$ for sufficiently large $n$. It suffices to show that $\gamma_n^*(q) \le w_n^*(q)$ whenever $\gamma_n(q) \le 1$ and $\nu_n \le 1$. In fact, if $\gamma_n(q) \ge \nu_n$, then $w_n^*(q) = \gamma_n(q)^{1-q}$ and $\gamma_n^*(q) = \gamma_n(q) \le \gamma_n(q)^{1-q}$ provided $\gamma_n(q) \le 1$. If $\gamma_n(q) < \nu_n$, then $w_n^*(q) = \nu_n^{1-q}$ and $\gamma_n^*(q) = B_0^* n^{-1/2}\nu_n^{-(1-q)\beta_0/2} \le \nu_n \le \nu_n^{1-q}$ provided $\nu_n \le 1$. Moreover, we have $\Gamma_n\gamma_n^*(q)^{1-\beta_0/2} \le 1$ for sufficiently large $n$, because $\Gamma_n$ is no greater than $O(\log^{1/2}(n))$ and $\gamma_n^*(q)^{1-\beta_0/2} \le \gamma_n(q)^{1-\beta_0/2}$ decreases polynomially in $n^{-1}$ for $0 < \beta_0 < 2$.
Proof of Proposition 3. For $w_{nj} = 1$ and $\gamma_{nj} = \gamma_n^*(1) \asymp n^{-1/2}$, inequality (S32) with $q = 0$ and $\nu_n = o(1)$ gives
$$\phi_n\{\gamma_n^*(1) + \nu_n\}^2 \le O(1)\bigl\{n^{1/2}\Gamma_n\gamma_n^{*2}(1) + n^{1/2}\gamma_n^*(1)\nu_n + n\gamma_n^{*2}(1)\nu_n^2\bigr\} = O(1)\bigl\{n^{-1/2}\Gamma_n + \nu_n\bigr\},$$
and Assumption 6(i) holds because $\Gamma_n$ is no greater than $O(\log^{1/2}(n))$. Inserting the above inequality into (25) in Corollary 4 yields the out-of-sample prediction result. The in-sample prediction result follows directly from Corollary 4.
Proof of Proposition 4. For $\gamma_{nj} = \gamma_n^*(0)$, inequality (S32) with $q = 0$ gives
\[
\phi_n \{\gamma_n^*(0) + \nu_n\}^2 \le O(1)\big\{ n^{1/2}\Gamma_n W_n \gamma_n^{*2}(0) + n^{1/2} V_n \gamma_n^*(0)\nu_n + n V_n^2 \gamma_n^{*2}(0)\nu_n^2 \big\}. \tag{S34}
\]
By (S33) and $\gamma_n^*(0) = B_0^* n^{-1/2} w_n^*(0)^{-\beta_0/2}$, simple manipulation gives
\[
n^{1/2} V_n \gamma_n^*(0)\nu_n = B_0^* w_n^*(0)^{-\tau_0}\nu_n, \qquad
n^{1/2} W_n \gamma_n^*(0)^2 = B_0^* w_n^*(0)^{-(1-\beta_0/2)\tau_0}\gamma_n^*(0). \tag{S35}
\]
Then (49) and (S34) directly imply that Assumption 6(i) holds for sufficiently large $n$ and also (30) holds. The desired result follows from Corollary 6 with $q = 0$.
Proof of Proposition 6. For $\gamma_{nj} = \gamma_n^*(q)$, inequality (S32) with $q = 0$ gives
\[
\phi_n \{\gamma_n^*(q) + \nu_n\}^2 \le O(1)\big\{ n^{1/2}\Gamma_n W_n \gamma_n^{*2}(q) + n^{1/2} V_n \gamma_n^*(q)\nu_n + n V_n^2 \gamma_n^{*2}(q)\nu_n^2 \big\}. \tag{S36}
\]
By (S33) and $\gamma_n^*(q) = B_0^* n^{-1/2} w_n^*(q)^{-\beta_0/2}$, simple manipulation gives
\[
n^{1/2} V_n \gamma_n^*(q)\nu_n^{1-q} = B_0^* w_n^*(q)^{-\tau_0}\nu_n^{1-q}, \qquad
n^{1/2} W_n \gamma_n^*(q)^{2-q} = B_0^* w_n^*(q)^{-(1-\beta_0/2)\tau_0}\gamma_n^*(q)^{1-q}.
\]
Then (57) and (S36) imply that Assumption 6(i) holds for sufficiently large $n$, along with the fact that $\nu_n = o(1)$, $\gamma_n(q) = o(1)$, and $q > 0$. Moreover, (57) and (S32) with $\gamma_{nj} = \gamma_n^*(q)$ directly yield (33). The desired result follows from Corollary 7.
Proof of Proposition 8. Denote by $\gamma'_{n,p+1}$, $V'_n$, $W'_n$, etc., the corresponding quantities based on $(w'_{nj}, \gamma'_{nj})$. By (S33) and (S35) with $\tau_0 = 1$, we have $n^{1/2} V'_n \gamma'_{n,p+1}\nu_n = K_0^{-1}(n^{1/2} V_n \gamma_{n,p+1}\nu_n)$ and $n^{1/2} V_n \gamma_{n,p+1}\nu_n = B_0^* \min\{\nu_n\gamma_n^{-1}(0), 1\} \le B_0^*$. Moreover, we have $n^{1/2}\Gamma_n W'_n \gamma'^2_{n,p+1} = o(1)$ for a constant $K_0$, because $W'_n = 1$, $\Gamma_n$ is no greater than $O(\log^{1/2}(n))$, and $n^{1/2}\gamma_n^2(0)$ decreases polynomially in $n^{-1}$. For a constant $0 < \eta_1 < 1$, we choose and fix $K_0 \ge 1$ sufficiently large, depending on $M$ but independently of $(n, p)$, such that Assumptions 6(i)–(ii) are satisfied, with $(w_{nj}, \gamma_{nj})$ replaced by $(w'_{nj}, \gamma'_{nj})$, for sufficiently large $n$, due to (S23), (S34), and the definition $\lambda'_{nj} = C_1(\gamma'_{nj} + \nu_n)$. Moreover, by (S25), $\rho'_{nj} = \lambda'_{nj} w'_{nj} \le K_0^{1-\beta_0/2}\lambda_{nj} w_{nj} \le K_0^{1-\beta_0/2} C_1\{\gamma_n^*(0)+\nu_n\}^2$, which together with (S24) implies that (26) is satisfied for some constant $\eta_2 > 0$. Assumption 7 is also satisfied with $C_0^*$ replaced by $C_0^{*\prime} = C_0^* K_0^{\beta_0/2}$ and $S$ replaced by $\{1 \le j \le p : \|g_j^*\|_Q > C_0^{*\prime}\lambda'_{nj}\} \subset S$ for $K_0 \ge 1$, due to monotonicity in $S$ for the validity of Assumption 7 by Remark 7, and with $(w_{nj}, \gamma_{nj})$ replaced by $(w'_{nj}, \gamma'_{nj})$, because (18) after the modification implies (18) itself, with $w'_{nj} \ge w_{nj}$ for $K_0 \ge 1$ and $\lambda_{nj}$ constant in $j$. The desired result follows from Corollary 5 with $\bar g = g^*$.
S1.6

Proof of Theorem 4

We use the non-commutative Bernstein inequality (Lemma 15) to prove Theorem 4. Suppose that $(X_1, \ldots, X_n)$ are independent variables in a set $\Omega$. First, consider finite-dimensional functional classes $F_j$ with elements of the form
\[
f_j(x) = u_j^{\mathrm{T}}(x)\theta_j, \qquad \forall\, \theta_j \in \mathbb{R}^{d_j},\ j = 1, 2, \tag{S37}
\]
where $u_j(x)$ is a vector of basis functions from $\Omega$ to $\mathbb{R}^{d_j}$, and $\theta_j$ is a coefficient vector. Let $U_j = \{u_j(X_1), \ldots, u_j(X_n)\}^{\mathrm{T}}$, and $\Sigma_{jj'} = E(U_j^{\mathrm{T}} U_{j'})/n \in \mathbb{R}^{d_j \times d_{j'}}$. The population inner product is $\langle f_j, f_{j'}\rangle_Q = \theta_j^{\mathrm{T}}\Sigma_{jj'}\theta_{j'}$, $j, j' = 1, 2$. The difference between the sample and population inner products can be written as
\[
\sup_{\|\theta_j\|=\|\theta_{j'}\|=1} \big|\langle f_j, f_{j'}\rangle_n - \langle f_j, f_{j'}\rangle_Q\big|
= \sup_{\|\theta_j\|=\|\theta_{j'}\|=1} \big|\theta_j^{\mathrm{T}}(U_j^{\mathrm{T}} U_{j'}/n - \Sigma_{jj'})\theta_{j'}\big|
= \|U_j^{\mathrm{T}} U_{j'}/n - \Sigma_{jj'}\|_S.
\]
Lemma 9. Let $f_j$ be as in (S37). Assume that for a constant $C_{5,1}$,
\[
\sup_{x\in\Omega} \|u_j(x)\|^2 \le C_{5,1}\ell_j, \qquad \forall j = 1, 2.
\]
Then for all $t > 0$,
\[
\|U_j^{\mathrm{T}} U_{j'}/n - \Sigma_{jj'}\|_S \le \sqrt{(\ell_j\|\Sigma_{j'j'}\|_S) \vee (\ell_{j'}\|\Sigma_{jj}\|_S)}\,\sqrt{\frac{2C_{5,1}t}{n}} + C_{5,1}\sqrt{\ell_j\ell_{j'}}\,\frac{4t}{3n}
\]
with probability at least $1 - (d_j + d_{j'})e^{-t}$.
Proof. Let $M_i = u_j(X_i)u_{j'}^{\mathrm{T}}(X_i) - E\{u_j(X_i)u_{j'}^{\mathrm{T}}(X_i)\}$. Because $u_j(X_i)u_{j'}^{\mathrm{T}}(X_i)$ is of rank 1, $\|M_i\|_S \le 2\sup_{x\in\Omega}\{\|u_j(x)\|\,\|u_{j'}(x)\|\} \le 2C_{5,1}\sqrt{\ell_j\ell_{j'}}$. Hence we set $s_0 = 2C_{5,1}\sqrt{\ell_j\ell_{j'}}$ in Lemma 15. Similarly, $\|\Sigma_{\mathrm{col}}\|_S \le C_{5,1}\ell_{j'}\|\Sigma_{jj}\|_S$ because
\[
E(M_iM_i^{\mathrm{T}}) \le E\{u_j(X_i)u_{j'}^{\mathrm{T}}(X_i)u_{j'}(X_i)u_j^{\mathrm{T}}(X_i)\} \le C_{5,1}\ell_{j'}\,E\{u_j(X_i)u_j^{\mathrm{T}}(X_i)\},
\]
and $\|\Sigma_{\mathrm{row}}\|_S \le C_{5,1}\ell_j\|\Sigma_{j'j'}\|_S$. Thus, (S44) gives the desired result.
Now consider functional classes $F_j$ such that $f_j \in F_j$ admits an expansion
\[
f_j(\cdot) = \sum_{\ell=1}^{\infty} \theta_{j\ell}\, u_{j\ell}(\cdot),
\]
where $\{u_{j\ell}(\cdot) : \ell = 1, 2, \ldots\}$ are basis functions and $\{\theta_{j\ell} : \ell = 1, 2, \ldots\}$ are the associated coefficients.
Lemma 10. Let $0 < \tau_j < 1$, $0 < w_{nj} \le 1$ and
\[
B_j = \Big\{ f_j : \sum_{k/4 < \ell \le k} \theta_{j\ell}^2 \le k^{-1/\tau_j}\ \ \forall\, k \ge (1/w_{nj})^{2\tau_j},\quad \sup_{0 \le \ell < \infty} \ell^{1/\tau_j}\theta_{j,\ell+1}^2 \le w_{nj}^2 \Big\}.
\]
Suppose that (44) and (47) hold with certain positive constants $C_{5,1}$, $C_{5,3}$. Then, for a certain constant $C_{5,4}$ depending on $\{C_{5,1}, C_{5,3}\}$ only,
\[
\sup_{f_j \in B_j,\, f_{j'} \in B_{j'}} \big|\langle f_j, f_{j'}\rangle_n - \langle f_j, f_{j'}\rangle_Q\big|
\le C_{5,4} w_{nj} w_{nj'} \Big[ \big(\mu_j w_{nj}^{-\tau_j} + \mu_{j'} w_{nj'}^{-\tau_{j'}}\big)\sqrt{\big\{\mu_j + \mu_{j'} + \log(w_{nj}^{-\tau_j} + w_{nj'}^{-\tau_{j'}}) + t\big\}/n}
+ \big\{\mu_j + \mu_{j'} + \log(w_{nj}^{-\tau_j} + w_{nj'}^{-\tau_{j'}}) + t\big\}\big(\mu_j w_{nj}^{-\tau_j}\big)\big(\mu_{j'} w_{nj'}^{-\tau_{j'}}\big)/n \Big]
\]
with probability at least $1 - e^{-t}$ for all $t > 0$, where $\mu_j = 1/(1-\tau_j)$.
Proof. Let $\ell_{jk} = \lceil (2^k/w_{nj})^{2\tau_j} \rceil$. We group the basis functions and coefficients as follows:
\[
u_{j,G_{jk}}(x) = (u_{j\ell}(x),\ \ell \in G_{jk})^{\mathrm{T}}, \qquad \theta_{j,G_{jk}} = (\theta_{j\ell},\ \ell \in G_{jk})^{\mathrm{T}}, \qquad k = 0, 1, \ldots,
\]
where $G_{j0} = \{1, \ldots, \ell_{j0}\}$ of size $|G_{j0}| = \ell_{j0}$ and $G_{jk} = \{\ell_{j,k-1}+1, \ldots, \ell_{jk}\}$ of size $|G_{jk}| = \ell_{jk} - \ell_{j,k-1} \le (2^k/w_{nj})^{2\tau_j}$ for $k \ge 1$. Define $\tilde\theta_j$, a rescaled version of $\theta_j$, by
\[
\tilde\theta_{j,G_{jk}} = (\tilde\theta_{j\ell},\ \ell \in G_{jk}) = 2^k w_{nj}^{-1}\,\theta_{j,G_{jk}}.
\]
It follows directly from (45) and (46) that
\[
\|\tilde\theta_{j,G_{j0}}\|^2 \le 1, \qquad \|\tilde\theta_{j,G_{jk}}\| \le (2^k/w_{nj})\,\ell_{jk}^{-1/(2\tau_j)} \le 1 \quad \forall\, k \ge 1,\ \forall\, f_j \in B_j.
\]
Let $U_{jk} = \{u_{j,G_{jk}}(X_1), \ldots, u_{j,G_{jk}}(X_n)\}^{\mathrm{T}} \in \mathbb{R}^{n\times|G_{jk}|}$. We have
\[
\sup_{f_j\in B_j,\, f_{j'}\in B_{j'}} \big|\langle f_j, f_{j'}\rangle_n - \langle f_j, f_{j'}\rangle_{L_2}\big|
= \sup_{f_j\in B_j,\, f_{j'}\in B_{j'}} \Big| \sum_{k=0}^{\infty}\sum_{\ell=0}^{\infty} \theta_{j,G_{jk}}^{\mathrm{T}}\big(U_{jk}^{\mathrm{T}} U_{j',\ell}/n - E\,U_{jk}^{\mathrm{T}} U_{j',\ell}/n\big)\theta_{j',G_{j',\ell}} \Big|
\]
\[
\le w_{nj} w_{nj'} \sum_{k=0}^{\infty}\sum_{\ell=0}^{\infty} 2^{-k} 2^{-\ell} \sup_{\|\tilde\theta_j\|\vee\|\tilde\theta_{j'}\|\le 1} \big|\tilde\theta_{j,G_{jk}}^{\mathrm{T}}\big(U_{jk}^{\mathrm{T}} U_{j',\ell}/n - E\,U_{jk}^{\mathrm{T}} U_{j',\ell}/n\big)\tilde\theta_{j',G_{j',\ell}}\big|
\le w_{nj} w_{nj'} \sum_{k=0}^{\infty}\sum_{\ell=0}^{\infty} 2^{-k} 2^{-\ell} \big\| U_{jk}^{\mathrm{T}} U_{j',\ell}/n - E\,U_{jk}^{\mathrm{T}} U_{j',\ell}/n \big\|_S. \tag{S38}
\]
Let $a_k = 1/\{(k+1)(k+2)\}$. By (44), $\sup_{x\in\Omega}\|u_{j,G_{jk}}(x)\|^2 \le \sup_{x\in\Omega}\sum_{\ell=1}^{\ell_{jk}} u_{j\ell}^2(x) \le C_{5,1}\ell_{jk}$ for $k \ge 0$. By (47), $\|E\,U_{jk}^{\mathrm{T}} U_{jk}/n\|_S \le C_{5,3}$. Because $|G_{jk}| \le \ell_{jk}$, it follows from Lemma 9 that
\[
\big\|U_{jk}^{\mathrm{T}} U_{j',\ell}/n - E\,U_{jk}^{\mathrm{T}} U_{j',\ell}/n\big\|_S
\le \sqrt{\big\{\log(\ell_{jk}+\ell_{j',\ell}) - \log(a_k a_\ell) + t\big\}\, 2C_{5,1}C_{5,3}(\ell_{jk}\vee\ell_{j',\ell})/n}
+ \big\{\log(\ell_{jk}+\ell_{j',\ell}) - \log(a_k a_\ell) + t\big\}\,(4/3)C_{5,1}\sqrt{\ell_{jk}\ell_{j',\ell}}/n \tag{S39}
\]
with probability at least $1 - a_k a_\ell e^{-t}$ for any fixed $k \ge 0$ and $\ell \ge 0$. By the union bound and the fact that $\sum_{k=0}^{\infty} a_k = 1$, inequality (S39) holds simultaneously for all $k \ge 0$ and $\ell \ge 0$ with probability at least $1 - e^{-t}$. Because $\ell_{jk} = \lceil(2^k/w_{nj})^{2\tau_j}\rceil$, we rewrite (S39) as
\[
\big\|U_{jk}^{\mathrm{T}} U_{j',\ell}/n - E\,U_{jk}^{\mathrm{T}} U_{j',\ell}/n\big\|_S
\le C_{5,4}\Big[ \big(2^{\tau_j k} w_{nj}^{-\tau_j} + 2^{\tau_{j'}\ell} w_{nj'}^{-\tau_{j'}}\big)\sqrt{\big\{k+\ell+\log(w_{nj}^{-\tau_j}+w_{nj'}^{-\tau_{j'}})+t\big\}/n}
+ \big\{k+\ell+\log(w_{nj}^{-\tau_j}+w_{nj'}^{-\tau_{j'}})+t\big\}\big(2^{\tau_j k} w_{nj}^{-\tau_j}\big)\big(2^{\tau_{j'}\ell} w_{nj'}^{-\tau_{j'}}\big)/n \Big], \tag{S40}
\]
where $C_{5,4}$ is a constant depending only on $\{C_{5,1}, C_{5,3}\}$. For any $\alpha \ge 0$, $\sum_{k=0}^{\infty} k^\alpha 2^{-k(1-\tau_j)} \le C_\alpha\mu_j^{\alpha+1}$, where $C_\alpha$ is a numerical constant and $\mu_j = 1/(1-\tau_j)$. Using this fact and inserting (S40) into (S38) yields the desired result.
Finally, the following result concludes the proof of Theorem 4.
Lemma 11. In the setting of Theorem 4, let
\[
\phi_n = C_{5,2}C_{5,4}\Big[ \max_j \frac{\mu_j\tilde\gamma_{nj}\sqrt{2\log(np/\epsilon')}}{\lambda_{nj}} + \max_j \frac{2\log(np/\epsilon')\,\mu_j^2\tilde\gamma_{nj}^2}{\lambda_{nj}^2} \Big],
\]
where $\tilde\gamma_{nj} = n^{-1/2}w_{nj}^{-\tau_j}$, $\mu_j = 1/(1-\tau_j)$, and $C_{5,4}$ is a constant depending only on $\{C_{5,1}, C_{5,3}\}$ as in Lemma 10. Then
\[
P\Big\{ \sup_{g\in\mathcal{G}} \big|\|g\|_n^2 - \|g\|_Q^2\big| > \phi_n R_n^{*2}(g) \Big\} \le \epsilon'.
\]
Proof. Recall that $\ell_{jk} = \lceil(2^k/w_{nj})^{2\tau_j}\rceil$. For $g_j = \sum_{\ell=1}^{\infty}\theta_{j\ell}u_{j\ell}$, define $r_{nj}(g_j)$ by
\[
r_{nj}^2(g_j) = \Big(\sum_{\ell=1}^{\ell_{j0}} \theta_{j\ell}^2/w_{nj}^2\Big) \vee \max_{k\ge 1}\ \ell_{jk}^{1/\tau_j} \sum_{\ell_{j,k-1} < \ell \le \ell_{jk}} \theta_{j\ell}^2.
\]
Let $f_j = g_j/r_{nj}(g_j)$ and $\mu_j = 1/(1-\tau_j)$. Then $f_j \in B_j$ as in Lemma 10 and
\[
\|g\|_n^2 - \|g\|_Q^2 = \sum_{j=1}^{p}\sum_{j'=1}^{p} \big(\langle g_j, g_{j'}\rangle_n - \langle g_j, g_{j'}\rangle_Q\big)
= \sum_{j=1}^{p}\sum_{j'=1}^{p} r_{nj}(g_j)\, r_{nj'}(g_{j'}) \big(\langle f_j, f_{j'}\rangle_n - \langle f_j, f_{j'}\rangle_Q\big).
\]
Because $\sum_{j=1}^{p} w_{nj}\lambda_{nj} r_{nj}(g_j) \le \sum_{j=1}^{p} C_{5,2}\lambda_{nj}\big(w_{nj}^{1/2}\|g_j\|_{F,j}^{1/2} + \|g_j\|_Q\big) = C_{5,2}R_n^*(g)$ by (45),
\[
\Big\{ \sup_{g\in\mathcal{G}} \big|\|g\|_n^2 - \|g\|_Q^2\big| > \phi_n R_n^{*2}(g) \Big\}
\subset \bigcup_{j,j'} \Big\{ \sup_{f_j\in B_j,\, f_{j'}\in B_{j'}} \big|\langle f_j, f_{j'}\rangle_n - \langle f_j, f_{j'}\rangle_Q\big| > C_{5,2}^{-1}\phi_n w_{nj}\lambda_{nj} w_{nj'}\lambda_{nj'} \Big\}. \tag{S41}
\]
By Lemma 10 with $t = \log(p^2/\epsilon'^2)$ and $e^{2\mu_j} + 2w_{nj}^{-\tau_j} \le n$, we have
\[
\sup_{f_j\in B_j,\, f_{j'}\in B_{j'}} \big|\langle f_j, f_{j'}\rangle_n - \langle f_j, f_{j'}\rangle_Q\big|
\le C_{5,4} w_{nj} w_{nj'}\Big[ \big(\mu_j w_{nj}^{-\tau_j} + \mu_{j'} w_{nj'}^{-\tau_{j'}}\big)\sqrt{\big\{\mu_j+\mu_{j'}+\log(w_{nj}^{-\tau_j}+w_{nj'}^{-\tau_{j'}})+\log(p^2/\epsilon'^2)\big\}/n}
+ \big\{\mu_j+\mu_{j'}+\log(w_{nj}^{-\tau_j}+w_{nj'}^{-\tau_{j'}})+\log(p^2/\epsilon'^2)\big\}\big(\mu_j w_{nj}^{-\tau_j}\big)\big(\mu_{j'} w_{nj'}^{-\tau_{j'}}\big)/n \Big]
\]
\[
\le C_{5,4} w_{nj} w_{nj'}\Big\{ \big(\mu_j w_{nj}^{-\tau_j} + \mu_{j'} w_{nj'}^{-\tau_{j'}}\big)\sqrt{2\log(np/\epsilon')/n} + 2\log(p/\epsilon')\big(\mu_j w_{nj}^{-\tau_j}\big)\big(\mu_{j'} w_{nj'}^{-\tau_{j'}}\big)/n \Big\},
\]
with probability at least $1 - \epsilon'^2/p^2$. By the definition of $\phi_n$, we have
\[
P\Big\{ \sup_{f_j\in B_j,\, f_{j'}\in B_{j'}} \big|\langle f_j, f_{j'}\rangle_n - \langle f_j, f_{j'}\rangle_Q\big| \le C_{5,2}^{-1}\phi_n w_{nj} w_{nj'}\lambda_{nj}\lambda_{nj'} \Big\} \ge 1 - \frac{\epsilon'^2}{p^2}.
\]
The conclusion follows from the union bound using (S41).
S1.7
Proof of Proposition 5
Here we verify explicitly the conditions of Theorem 4 for Sobolev spaces $W_{r_i}^{m_i}$ and bounded variation spaces $V^{m_i}$ with $r_i = 1$ on $[0,1]$ in the case of $\tau_j < 1$, where $\tau_i = 1/(2m_i+1-2/(r_i\wedge 2))$. Because conditions (44), (45), (46) and (47) depend on $(m_j, r_j)$ only through $\tau_j$, we assume without loss of generality $1 \le r_j \le 2$. When the average marginal density of $\{X_i^{(j)} : i = 1, \ldots, n\}$ is uniformly bounded away from 0 and $\infty$, the norms $\|g_j\|_Q$ and $\|g_j\|_{L_2}$ are equivalent, so that conditions (46) and (47) hold for any $L_2$-orthonormal bases $\{u_{j\ell} : \ell \ge 1\}$. Let $u_0(x)$ be a mother wavelet with $m$ vanishing moments, e.g., $u_0(x) = 0$ for $|x| > c_0$, $\int u_0^2(x)\,dx = 1$, $\int x^m u_0(x)\,dx = 0$ for $m = 0, \ldots, \max_j m_j$, and $\{u_{0,k\ell}(x) = 2^{k/2}u_0(2^k x - \ell) : \ell = 1, \ldots, 2^k,\ k = 0, 1, \ldots\}$ is $L_2$-orthonormal. We shall identify $\{u_{j\ell} : \ell \ge 1\}$ as $\{u_{0,11}, u_{0,21}, u_{0,22}, u_{0,31}, \ldots\}$. Because
\[
\#\{\ell : u_{0,k\ell}(x) \ne 0\} \le 2c_0 \quad \forall x, \qquad
\sum_{\ell=2^k}^{2^{k+1}-1} u_{j\ell}^2(x) = \sum_{\ell=1}^{2^k} u_{0,k\ell}^2(x) \le 2c_0\, 2^k\|u_0\|_\infty^2 \quad \forall x,
\]
condition (44) holds. Suppose $g_j(x) = \sum_{\ell=1}^{\infty}\theta_{j\ell}u_{j\ell}(x) = \sum_{k=0}^{\infty}\sum_{\ell=1}^{2^k}\theta_{jk\ell}u_{0,k\ell}(x)$. Define $u_0^{(-m)}$ as the $m$-th integral of $u_0$, $u_0^{(-m)}(x) = \int_{-\infty}^{x} u_0^{(-m+1)}(t)\,dt$, and $g_j^{(m)}(x) = (d/dx)^m g_j(x)$. Because $u_0$ has vanishing moments, $\int u_0^{(-m)}(x)\,dx = 0$ for $m = 0, \ldots, \max_j m_j$, so that $u_0^{(-m)}(x) = 0$ for $|x| > c_0$. Due to the orthonormality of the basis functions, for $1 \le \ell \le 2^k$, we have
\[
2^{m_j k}\theta_{jk\ell} = 2^{m_j k}\int g_j(x)u_{0,k\ell}(x)\,dx = (-1)^{m_j}\int g_j^{(m_j)}(x)\,u_{0,m_j k\ell}(x)\,dx
\]
with $u_{0,mk\ell}(x) = 2^{k/2}u_0^{(-m)}(2^k x - \ell)$. By the Hölder inequality,
\[
\sum_{\ell=2^{k-1}}^{2^k-1} 2^{m_j k r_j}|\theta_{j\ell}|^{r_j}
\le \sum_{\ell=2^{k-1}}^{2^k-1} \int \big|g_j^{(m_j)}(x)\big|^{r_j}\big|u_{0,m_j k\ell}(x)\big|^{r_j(1-2(1-1/r_j))}\,dx\; \big\|u_{0,m_j k\ell}\big\|_{L_2}^{2r_j(1-1/r_j)}
\le \big\|g_j^{(m_j)}\big\|_{L_{r_j}}^{r_j}\, 2c_0\big\|u_0^{(-m_j)}\big\|_\infty^{r_j(1-2(1-1/r_j))}\, 2^{(k/2)r_j(1-2(1-1/r_j))}\big\|u_0^{(-m_j)}\big\|_{L_2}^{2r_j(1-1/r_j)}.
\]
Because $2^{m_j k - (k/2)(1-2(1-1/r_j))} = 2^{k(m_j+1/2-1/r_j)} = 2^{k/(2\tau_j)}$ and $1 \le r_j \le 2$, we have
\[
\Big(\sum_{\ell=2^{k-1}}^{2^k-1} 2^{k/\tau_j}\theta_{j\ell}^2\Big)^{1/2}
\le \Big(\sum_{\ell=2^{k-1}}^{2^k-1} 2^{k r_j/(2\tau_j)}|\theta_{j\ell}|^{r_j}\Big)^{1/r_j}
\le \big\|g_j^{(m_j)}\big\|_{L_{r_j}} (2c_0)^{1/r_j}\big\|u_0^{(-m_j)}\big\|_\infty^{2/r_j-1}\big\|u_0^{(-m_j)}\big\|_{L_2}^{2-2/r_j}.
\]
Because $\ell_{jk}^{1/(2\tau_j)} \ge 2^k/w_{nj}$ and $\ell_{jk} \le 1 + 2^{2\tau_j}\ell_{j,k-1}$ with $\tau_j < 1$, we have $\ell_{jk} \le 4\ell_{j,k-1}$, so that $\{\ell_{j,k-1}+1, \ldots, \ell_{jk}\}$ involves at most three resolution levels. Thus, condition (45) follows from the above inequality. For the bounded variation class, we have
\[
2^{m_j k}\theta_{jk\ell} = 2^{m_j k}\int g_j(x)u_{0,k\ell}(x)\,dx = (-1)^{m_j}\int u_{0,m_j k\ell}(x)\,dg_j^{(m_j-1)}(x),
\]
so that (45) follows from the same proof with $r_j = 1$.
S2

Technical tools

S2.1

Sub-gaussian maximal inequalities
The following maximal inequality can be obtained from van de Geer (2000, Corollary 8.3), or
directly derived using Dudley’s inequality for sub-gaussian variables and Chernoff’s tail bound
(see Proposition 9.2, Bellec et al. 2016).
Lemma 12. For $\delta > 0$, let $F_1$ be a functional class such that $\sup_{f_1\in F_1}\|f_1\|_n \le \delta$, and
\[
\psi_n(\delta, F_1) \ge \int_0^{\delta} H^{1/2}(u, F_1, \|\cdot\|_n)\,du. \tag{S42}
\]
Let $(\varepsilon_1, \ldots, \varepsilon_n)$ be independent variables. Under Assumption 1, we have for any $t > 0$,
\[
P\Big\{ \sup_{f_1\in F_1} |\langle \varepsilon, f_1\rangle_n|/C_1 > n^{-1/2}\psi_n(\delta, F_1) + \delta\sqrt{t/n} \Big\} \le \exp(-t),
\]
where $C_1 = C_1(D_0, D_1) > 0$ is a constant, depending only on $(D_0, D_1)$.
S2.2
Dudley and Talagrand inequalities
The following inequalities are due to Dudley (1967) and Talagrand (1996).
Lemma 13. For $\delta > 0$, let $F_1$ be a functional class such that $\sup_{f_1\in F_1}\|f_1\|_n \le \delta$ and (S42) holds. Let $(\sigma_1, \ldots, \sigma_n)$ be independent Rademacher variables, that is, $P(\sigma_i = 1) = P(\sigma_i = -1) = 1/2$. Then for a universal constant $C_2 > 0$,
\[
E\Big\{ \sup_{f_1\in F_1} |\langle \sigma, f_1\rangle_n| \Big\}\Big/ C_2 \le n^{-1/2}\psi_n(\delta, F_1).
\]
Lemma 14. For $\delta > 0$ and $b > 0$, let $(X_1, \ldots, X_n)$ be independent variables, and $F$ be a functional class such that $\sup_{f\in F}\|f\|_Q \le \delta$ and $\sup_{f\in F}\|f\|_\infty \le b$. Define
\[
Z_n = \sup_{f\in F} \Big| \frac{1}{n}\sum_{i=1}^{n} \{f(X_i) - Ef(X_i)\} \Big|.
\]
Then for a universal constant $C_3 > 0$, we have
\[
P\Big\{ Z_n/C_3 > E(Z_n) + \delta\sqrt{\frac{t}{n}} + b\,\frac{t}{n} \Big\} \le \exp(-t), \qquad t > 0.
\]
S2.3
Non-commutative Bernstein inequality
We state the non-commutative Bernstein inequality (Tropp, 2011) as follows.

Lemma 15. Let $\{M_i : i = 1, \ldots, n\}$ be independent random matrices in $\mathbb{R}^{d_1\times d_2}$ such that $E(M_i) = 0$ and $P\{\|M_i\|_S \le s_0\} = 1$, $i = 1, \ldots, n$, for a constant $s_0 > 0$, where $\|\cdot\|_S$ denotes the spectral norm of a matrix. Let $\Sigma_{\mathrm{col}} = \sum_{i=1}^{n} E(M_iM_i^{\mathrm{T}})/n$ and $\Sigma_{\mathrm{row}} = \sum_{i=1}^{n} E(M_i^{\mathrm{T}}M_i)/n$. Then, for all $t > 0$,
\[
P\Big\{ \Big\|\frac{1}{n}\sum_{i=1}^{n} M_i\Big\|_S > t \Big\} \le (d_1+d_2)\exp\Big( \frac{-nt^2/2}{\|\Sigma_{\mathrm{col}}\|_S \vee \|\Sigma_{\mathrm{row}}\|_S + s_0 t/3} \Big). \tag{S43}
\]
Consequently, for all $t > 0$,
\[
P\Big\{ \Big\|\frac{1}{n}\sum_{i=1}^{n} M_i\Big\|_S > \sqrt{\big(\|\Sigma_{\mathrm{col}}\|_S \vee \|\Sigma_{\mathrm{row}}\|_S\big)\,2t/n} + (s_0/3)\,2t/n \Big\} \le (d_1+d_2)e^{-t}. \tag{S44}
\]
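The passage from (S43) to (S44) is a standard quadratic-root calculation; we sketch it here for completeness (this derivation is our addition, not spelled out in the source):

```latex
\text{Write } \sigma^2 = \|\Sigma_{\mathrm{col}}\|_S \vee \|\Sigma_{\mathrm{row}}\|_S
\ \text{ and set } t^* = \sqrt{2\sigma^2 t/n} + (s_0/3)\,2t/n. \\
\text{Since } \sqrt{a+b} \le \sqrt{a}+\sqrt{b}, \text{ the larger root of }
(n/2)x^2 - (s_0 t/3)\,x - \sigma^2 t = 0 \text{ satisfies} \\
\frac{s_0 t/3 + \sqrt{(s_0 t/3)^2 + 2n\sigma^2 t}}{n}
\;\le\; \frac{2 s_0 t}{3n} + \sqrt{\frac{2\sigma^2 t}{n}} \;=\; t^*, \\
\text{so } \frac{n}{2}t^{*2} - \frac{s_0 t}{3}\,t^* - \sigma^2 t \ge 0,
\quad\text{equivalently}\quad \frac{n t^{*2}/2}{\sigma^2 + s_0 t^*/3} \ge t. \\
\text{Hence (S43) evaluated at } t^* \text{ gives }
P\Big\{\Big\|n^{-1}\sum_i M_i\Big\|_S > t^*\Big\} \le (d_1+d_2)e^{-t},
\text{ which is (S44).}
```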
S2.4
Convergence of empirical norms
For $\delta > 0$ and $b > 0$, let $F_1$ be a functional class such that
\[
\sup_{f_1\in F_1}\|f_1\|_Q \le \delta, \qquad \sup_{f_1\in F_1}\|f_1\|_\infty \le b,
\]
and let $\psi_{n,\infty}(\cdot, F_1)$ be an upper envelope of the entropy integral:
\[
\psi_{n,\infty}(z, F_1) \ge \int_0^{z} H^{*\,1/2}(u/2, F_1, \|\cdot\|_{n,\infty})\,du, \qquad z > 0, \tag{S45}
\]
where $H^*(u, F_1, \|\cdot\|_{n,\infty}) = \sup_{(X_1,\ldots,X_n)} H(u, F_1, \|\cdot\|_{n,\infty})$. Let $\hat\delta = \sup_{f_1\in F_1}\|f_1\|_n$. The following result can be obtained from Guedon et al. (2007) and, in its present form, van de Geer (2014), Theorem 2.1.
Lemma 16. For the universal constant $C_2$ in Lemma 13, we have
\[
E\Big\{ \sup_{f_1\in F_1} \big|\|f_1\|_n^2 - \|f_1\|_Q^2\big| \Big\} \le \frac{2\delta C_2\,\psi_{n,\infty}(b, F_1)}{\sqrt{n}} + \frac{4C_2^2\,\psi_{n,\infty}^2(b, F_1)}{n}.
\]
Moreover, we have
\[
\sqrt{E(\hat\delta^2)} \le \delta + \frac{2C_2\,\psi_{n,\infty}(b, F_1)}{\sqrt{n}}.
\]

S2.5
Metric entropies
For $r \ge 1$ and $m > 0$ (possibly non-integral), let $W_r^m = \{f : \|f\|_{L_r} + \|f^{(m)}\|_{L_r} \le 1\}$. The following result is taken from Theorem 5.2, Birman & Solomjak (1967).

Lemma 17. If $rm > 1$ and $1 \le q \le \infty$, then
\[
H(u, W_r^m, \|\cdot\|_{L_q}) \le B_1 u^{-1/m}, \qquad u > 0,
\]
where $B_1 = B_1(m, r) > 0$ is a constant depending only on $(m, r)$. If $rm \le 1$ and $1 \le q < r/(1-rm)$, then
\[
H(u, W_r^m, \|\cdot\|_{L_q}) \le B_2 u^{-1/m}, \qquad u > 0,
\]
where $B_2 = B_2(m, r, q) > 0$ is a constant depending only on $(m, r, q)$.
For $m \ge 1$, let $V^m = \{f : \|f\|_{L_1} + \mathrm{TV}(f^{(m-1)}) \le 1\}$. The following result can be obtained from Theorem 15.6.1, Lorentz et al. (1996), on the metric entropy of the ball $\{f : \|f\|_{L_r} + [f]_{\mathrm{Lip}(m,L_r)} \le 1\}$, where $[f]_{\mathrm{Lip}(m,L_r)}$ is a semi-norm in the Lipschitz space $\mathrm{Lip}(m, L_r)$. By Theorem 9.9.3, DeVore & Lorentz (1993), the space $\mathrm{Lip}(m, L_1)$ is equivalent to $V^m$, with the semi-norm $[f]_{\mathrm{Lip}(m,L_1)}$ equal to $\mathrm{TV}(f^{(m-1)})$, up to suitable modification of function values at (countable) discontinuity points. However, it should be noted that the entropy of $V^1$ endowed with the norm $\|\cdot\|_{L_\infty}$ is infinite.

Lemma 18. If $m \ge 2$ and $1 \le q \le \infty$, then
\[
H(u, V^m, \|\cdot\|_{L_q}) \le B_3 u^{-1/m}, \qquad u > 0,
\]
where $B_3 = B_3(m) > 0$ is a constant depending only on $m$. If $1 \le q < \infty$, then
\[
H(u, V^1, \|\cdot\|_{L_q}) \le B_4 u^{-1}, \qquad u > 0,
\]
where $B_4 = B_4(q) > 0$ is a constant depending only on $q$.
By the continuity of functions in Wrm for m ≥ 1 and V m for m ≥ 2, the maximum entropies
of these spaces in k · kn,∞ and k · kn norms over all possible design points can be derived from
Lemmas 17 and 18.
Lemma 19. If $rm > 1$, then for $B_1 = B_1(m, r)$,
\[
H^*(u, W_r^m, \|\cdot\|_n) \le H^*(u, W_r^m, \|\cdot\|_{n,\infty}) \le B_1 u^{-1/m}, \qquad u > 0,
\]
and hence (S42) and (S45) hold with $\psi_n(z, W_r^m) \asymp \psi_{n,\infty}(z, W_r^m) \asymp z^{1-1/(2m)}$. If $m \ge 2$, then for $B_3 = B_3(m)$,
\[
H^*(u, V^m, \|\cdot\|_n) \le H^*(u, V^m, \|\cdot\|_{n,\infty}) \le B_3 u^{-1/m}, \qquad u > 0,
\]
and hence (S42) and (S45) hold with $\psi_n(z, V^m) \asymp \psi_{n,\infty}(z, V^m) \asymp z^{1-1/(2m)}$.
The maximum entropies of $V^1$ over all possible design points can be obtained from Section 5, Mammen (1991) for the norm $\|\cdot\|_n$ and Lemma 2.2, van de Geer (2000) for the norm $\|\cdot\|_{n,\infty}$. In fact, the proof of van de Geer shows that for $F$ the class of nondecreasing functions $f : [0,1] \to [0,1]$, $H^*(u, F, \|\cdot\|_{n,\infty}) \le n\log(n+u^{-1})$ if $u \le n^{-1}$, or $\le u^{-1}\log(n+u^{-1})$ if $u > n^{-1}$. But if $u \le n^{-1}$, then $n\log(n+u^{-1}) \le n(\log n + n^{-1}u^{-1}) \le (1+\log n)u^{-1}$. If $u > n^{-1}$, then $u^{-1}\log(n+u^{-1}) \le u^{-1}\log(2n)$. Combining the two cases gives the stated result about $H^*(u, V^1, \|\cdot\|_{n,\infty})$, because each function in $V^1$ can be expressed as a difference of two nondecreasing functions.

Lemma 20. For a universal constant $B_5 > 0$, we have
\[
H^*(u, W_1^1, \|\cdot\|_n) \le H^*(u, V^1, \|\cdot\|_n) \le B_5 u^{-1}, \qquad u > 0,
\]
and hence (S42) holds with $\psi_n(z, W_1^1) \asymp \psi_n(z, V^1) \asymp z^{1/2}$. Moreover, for a universal constant $B_6 > 0$, we have
\[
H^*(u, W_1^1, \|\cdot\|_{n,\infty}) \le H^*(u, V^1, \|\cdot\|_{n,\infty}) \le B_6\,\frac{1+\log n}{u}, \qquad u > 0,
\]
and hence (S45) holds with $\psi_{n,\infty}(z, W_1^1) \asymp \psi_{n,\infty}(z, V^1) \asymp (1+\log n)^{1/2}(z/2)^{1/2}$.
S2.6
Interpolation inequalities
The following inequality (S46) can be derived from the Gagliardo–Nirenberg inequality for Sobolev spaces (Theorem 1, Nirenberg 1966). Inequality (S47) can be shown by approximating $f \in V^m$ by functions in $W_1^m$.

Lemma 21. For $r \ge 1$ and $m \ge 1$, we have for any $f \in W_r^m$,
\[
\|f\|_\infty \le (C_4/2)\big\{\|f^{(m)}\|_{L_r} + \|f\|_{L_2}\big\}^{\tau}\,\|f\|_{L_2}^{1-\tau}, \tag{S46}
\]
where $\tau = (2m+1-2/r)^{-1} \le 1$ and $C_4 = C_4(m, r) \ge 1$ is a constant depending only on $(m, r)$. In addition, we have for any $f \in V^m$,
\[
\|f\|_\infty \le (C_4/2)\big\{\mathrm{TV}(f^{(m-1)}) + \|f\|_{L_2}\big\}^{\tau}\,\|f\|_{L_2}^{1-\tau}. \tag{S47}
\]
From this result, $\|f\|_\infty$ can be bounded in terms of $\|f\|_{L_2}$ and $\|f^{(m)}\|_{L_r}$ or $\mathrm{TV}(f^{(m-1)})$ in a convenient manner. For $f \in W_r^m$ and $0 < \delta \le 1$, if $\|f\|_{L_2} \le \delta$ and $\|f^{(m)}\|_{L_r} \le 1$, then $\|f\|_\infty \le C_4\delta^{1-1/(2m+1-2/r)}$. Similarly, for $f \in V^m$ and $0 < \delta \le 1$, if $\|f\|_{L_2} \le \delta$ and $\mathrm{TV}(f^{(m-1)}) \le 1$, then $\|f\|_\infty \le C_4\delta^{1-1/(2m-1)}$.
References
Bellec, P.C., Lecue, G., Tsybakov, A.B. (2016) Slope meets Lasso: Improved oracle bounds
and optimality, arXiv:1605.08651.
Birman, M.S. and Solomjak, M.Z. (1967) Piecewise-polynomial approximations of functions of the classes $W_p^\alpha$, Mathematics of the USSR–Sbornik, 2, 295–317.
Dudley, R.M. (1967) The sizes of compact subsets of Hilbert space and continuity of Gaussian
processes, Journal of Functional Analysis, 1, 290–330.
Guedon, O., Mendelson, S., Pajor, A., and Tomczak-Jaegermann, N. (2007) Subspaces and
orthogonal decompositions generated by bounded orthogonal systems, Positivity, 11,
269–283.
Mammen, E. (1991) Nonparametric regression under qualitative smoothness assumptions,
Annals of Statistics, 19, 741–759.
Talagrand, M. (1996) New concentration inequalities in product spaces. Inventiones Mathematicae 126, 505–563.
Tropp, J.A. (2011) Freedman’s inequality for matrix martingales, Electronic Communications
in Probability, 16, 262–270.
van de Geer, S. (2014) On the uniform convergence of empirical norms and inner products,
with application to causal inference, Electronic Journal of Statistics, 8, 543–574.
Towards Improving Brandes’ Algorithm for Betweenness
Centrality
arXiv:1802.06701v1 [] 19 Feb 2018
Matthias Bentert
Alexander Dittmann
Leon Kellerhals
André Nichterlein
Rolf Niedermeier
Institut für Softwaretechnik und Theoretische Informatik, TU Berlin, Germany,
{alexander.dittmann,leon.kellerhals}@campus.tu-berlin.de
{matthias.bentert,andre.nichterlein,rolf.niedermeier}@tu-berlin.de
Abstract
Betweenness centrality, measuring how many shortest paths pass through a vertex, is one
of the most important network analysis concepts for assessing the (relative) importance of
a vertex. The famous state-of-art algorithm of Brandes [2001] computes the betweenness
centrality of all vertices in O(mn) worst-case time on an n-vertex and m-edge graph. In
practical follow-up work, significant empirical speedups were achieved by preprocessing degree-one vertices. We extend this by showing how to also deal with degree-two vertices (turning
out to be much richer in mathematical structure than the case of degree-one vertices). For
our new betweenness centrality algorithm we prove the running time upper bound O(kn),
where k is the size of a minimum feedback edge set of the input graph.
1
Introduction
One of the most important building blocks in network analysis is to determine a vertex’s relative
importance in the network. A key concept herein is betweenness centrality as introduced in 1977
by Freeman [6]; it measures centrality based on shortest paths. Intuitively, for each vertex, betweenness centrality counts the (relative) number of shortest paths that pass through the vertex.
A straightforward algorithm for computing betweenness centrality on undirected (unweighted) n-vertex graphs runs in Θ(n³) time, and improving this to O(n^{3−ε}) time for any ε > 0 would break
the APSP-conjecture [1]. In 2001, Brandes [4] presented the to date theoretically fastest algorithm,
improving the running time to O(mn) for graphs with m edges. As many real-world networks
are sparse, this is a tremendous improvement, having a huge impact also in practice.1 Since betweenness centrality is a measure of outstanding importance in network science, it finds numerous
applications in diverse areas, e.g. in data mining [5, 15] or neuroscience [10, 11]. Speeding up
betweenness centrality computations is the ultimate goal of our research. To this end, we extend
previous work and provide a rigorous mathematical analysis that yields a new upper bound on its
worst-case running time.
Our work is in line with numerous research efforts concerning the development of algorithms
for computing betweenness centrality, including approximation algorithms [2, 7, 14], parallel and
distributed algorithms [16, 17], and streaming and incremental algorithms [9, 13].
Formally, we study the following problem:
Betweenness Centrality
Input: An undirected graph G.
Task: Compute the betweenness centrality CB(v) := Σ_{s,t∈V(G)} σst(v)/σst for each vertex v ∈ V(G).

1 According to Google Scholar, accessed in February 2018, Brandes' paper [4] has been cited almost 3000 times.
Herein, σst is the number of shortest paths in G from vertex s to vertex t, and σst (v) is the number
of shortest paths from s to t that additionally pass through v.
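To make the quantities σst and σst(v) concrete, the following sketch computes CB directly from the definition with one breadth-first search per vertex. It assumes the convention that the sum ranges over ordered pairs (s, t), so on an undirected graph each unordered pair contributes twice; it also uses the standard fact that v lies on a shortest s–t path exactly when d(s, v) + d(v, t) = d(s, t), in which case σst(v) = σsv · σvt. This is an illustration of the definition, not the paper's algorithm.

```python
from collections import deque
from itertools import permutations

def count_shortest_paths(adj, s):
    """BFS from s: return (dist, sigma), where sigma[v] = #shortest s-v paths."""
    dist = {s: 0}
    sigma = {v: 0 for v in adj}
    sigma[s] = 1
    queue = deque([s])
    while queue:
        u = queue.popleft()
        for w in adj[u]:
            if w not in dist:          # first time w is reached
                dist[w] = dist[u] + 1
                queue.append(w)
            if dist[w] == dist[u] + 1:  # u precedes w on a shortest path
                sigma[w] += sigma[u]
    return dist, sigma

def naive_betweenness(adj):
    """C_B(v) = sum over ordered pairs (s, t) of sigma_st(v) / sigma_st."""
    cb = {v: 0.0 for v in adj}
    info = {s: count_shortest_paths(adj, s) for s in adj}
    for s, t in permutations(adj, 2):
        dist_s, sigma_s = info[s]
        dist_t, sigma_t = info[t]
        if t not in dist_s:             # t unreachable from s
            continue
        d = dist_s[t]
        for v in adj:
            if v in (s, t) or v not in dist_s or v not in dist_t:
                continue
            # v is on a shortest s-t path iff d(s,v) + d(v,t) = d(s,t);
            # then sigma_st(v) = sigma_sv * sigma_vt.
            if dist_s[v] + dist_t[v] == d:
                cb[v] += sigma_s[v] * sigma_t[v] / sigma_s[t]
    return cb
```

On the path 0–1–2, for instance, only vertex 1 receives a nonzero value, namely 2 (once for the pair (0, 2) and once for (2, 0)).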
Extending previous work of Baglioni et al. [3], our main result is an algorithm for Betweenness Centrality that runs in O(kn) time, where k denotes the feedback edge number of the
input graph G. The feedback edge number of G is the minimum number of edges one needs to
delete from G in order to make it a forest.2 Obviously, by depth-first search one can compute k
in linear time; in addition, typically k ≈ m − n, so there is no asymptotic improvement over
Brandes’ algorithm for most graphs. When the input graph is very tree-like, however, our new
algorithm can be much faster than Brandes’s algorithm. Indeed, Baglioni et al. [3], building on
Brandes’ algorithm and basically shrinking the input graph by deleting degree-one vertices (in a
preprocessing step), report on significant speedups in comparison with Brandes’ basic algorithm
in tests with real-world social networks.
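For intuition on the parameter: a spanning forest of a graph with n vertices and c connected components has n − c edges, so the feedback edge number is exactly m − n + c, computable by one depth-first search. A minimal sketch (the adjacency-dictionary encoding is our choice, not the paper's):

```python
def feedback_edge_number(adj):
    """k = m - n + (#connected components): the edges outside a spanning forest."""
    n = len(adj)
    m = sum(len(neighbors) for neighbors in adj.values()) // 2
    seen, comps = set(), 0
    for s in adj:
        if s in seen:
            continue
        comps += 1                      # new component found; explore it by DFS
        stack, seen = [s], seen | {s}
        while stack:
            u = stack.pop()
            for w in adj[u]:
                if w not in seen:
                    seen.add(w)
                    stack.append(w)
    return m - n + comps
```

A tree yields 0, a single cycle yields 1, matching the definition of deleting the fewest edges to obtain a forest.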
Our technical contribution is to extend the work of Baglioni et al. [3] by, roughly speaking,
additionally dealing with degree-two vertices. These vertices are much harder to deal with and
to analyze since, other than degree-one vertices, they may lie on shortest paths between two
vertices. Thus, we have to significantly refine and extend the approaches underlying Brandes [4]
and Baglioni et al. [3]. Moreover, to prove correctness and our “improved” asymptotic upper
bound we perform an extensive mathematical analysis. Notably, the work of Baglioni et al. [3]
mainly focuses on the empirical side and the algorithmic techniques there would not suffice to
prove our upper bound O(kn).
Our work is purely theoretical in spirit. To the best of our knowledge, it provides the first
theoretical “improvement” over Brandes’ algorithm. It is conceivable that our (more complicated)
algorithms finally can be turned into practically useful tools in the spirit of Brandes [4] and
Baglioni et al. [3].
Notation. We use mostly standard graph notation. Given a graph G, V (G) and E(G)
denote the vertex respectively edge set of G with n = |V (G)| and m = |E(G)|. We denote the
vertices of degree one, two, and at least three by V =1 (G), V =2 (G), and V ≥3 (G), respectively. A
path P = v0 . . . va is a graph with V (P ) = {v0 , . . . , va } and E(P ) = {{vi , vi+1 } | 0 ≤ i < a}. The
length of the path P is |E(P )|. Adding the edge {va , v0 } to P gives a cycle C = v0 . . . va v0 . The
distance dG (s, t) between vertices s, t ∈ V (G) is the length of the shortest path between s and t
in G. The number of shortest s-t–paths is denoted by σst . The number of shortest s-t–paths
containing v is denoted by σst (v). We set σst (v) = 0 if s = v or t = v (or both).
We set [j, k] := {j, j + 1, . . . , k} and denote, for a set X, by Xi the set of all size-i subsets of X.
2
Algorithm Overview
In this section, we review our algorithmic strategy to compute the betweenness centrality of each
vertex. Before doing so, since we build on the works of Brandes [4] and Baglioni et al. [3], we first
give the high-level ideas behind their algorithmic approaches. Then, we describe the ideas behind
our extension.
We remark that we assume throughout our paper that the input graph is connected. Otherwise,
we can process the connected components one after another.
The algorithms of Brandes [4] and Baglioni et al. [3]. Brandes [4] developed an O(nm)-time algorithm which essentially runs heavily modified breadth-first searches (BFS) from each
vertex of the graph. In each of these modified BFS, Brandes’ algorithm computes the “effect” that
the starting vertex s of the modified BFS has on the betweenness centrality values of all other
2 Notably, Betweenness Centrality computations have also been studied when the input graph is a tree [17],
indicating the practical relevance of this special case. We mention in passing that in recent work [12] we employed
the parameter “feedback edge number” in terms of theoretically analyzing practically useful data reduction rules
for computing maximum-cardinality matchings.
vertices. More formally, the modified BFS starting at vertex s computes for each vertex v ∈ V(G) the quantity Σ_{t∈V(G)} σst(v)/σst.
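Brandes' two-phase scheme — a BFS that records shortest-path counts and predecessors, followed by accumulating the dependencies δs(v) = Σt σst(v)/σst in reverse BFS order — can be sketched as follows. This is a compact rendition of the published algorithm (again counting ordered pairs), not the authors' exact code:

```python
from collections import deque

def brandes_betweenness(adj):
    """Brandes' O(nm) algorithm: one augmented BFS per source vertex."""
    cb = {v: 0.0 for v in adj}
    for s in adj:
        # Phase 1 (BFS): shortest-path counts sigma and predecessor lists.
        pred = {v: [] for v in adj}
        sigma = {v: 0 for v in adj}
        sigma[s] = 1
        dist = {s: 0}
        order = []                     # vertices in non-decreasing distance
        queue = deque([s])
        while queue:
            u = queue.popleft()
            order.append(u)
            for w in adj[u]:
                if w not in dist:
                    dist[w] = dist[u] + 1
                    queue.append(w)
                if dist[w] == dist[u] + 1:
                    sigma[w] += sigma[u]
                    pred[w].append(u)
        # Phase 2: accumulate dependencies delta(s, .) in reverse BFS order.
        delta = {v: 0.0 for v in adj}
        for w in reversed(order):
            for u in pred[w]:
                delta[u] += sigma[u] / sigma[w] * (1 + delta[w])
            if w != s:
                cb[w] += delta[w]
    return cb
```

Each of the n augmented searches costs O(m), giving the O(nm) total that the preprocessing steps discussed below try to beat in practice.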
In order to reduce the number of performed modified BFS in Brandes’ algorithm, Baglioni
et al. [3] removed in a preprocessing step all degree-one vertices from the graph. Since removing
vertices influences the betweenness centrality of the remaining vertices, Baglioni et al. [3] modified Brandes’ algorithm (without changing the worst-case running time). Concerning degree-one
vertices, observe that in each shortest path P starting at degree-one vertex v, the second vertex
in P is the single neighbor u of v. Hence, after deleting v, one only needs to store the information
that u had a neighboring degree-one vertex. By repeatedly removing degree-one vertices whole
“pending trees” can be deleted. Baglioni et al. [3] added for each vertex w of the graph a counter
Pen[w] that stores the size of the subtree that was pending on w.
Remark. In contrast to Baglioni et al. [3], we initialize for each vertex v ∈ V the value Pen[v] with
one instead of zero (so we count v as well). This is a minor technical change that simplifies most
of our formulas.
The idea of the preprocessing step of Baglioni et al. [3] is given in the next observation.
Observation 1 ([3]). Let G be a graph, let s ∈ V (G) be a degree-one vertex, and let v ∈ V (G) be
the neighbor of s. Then,
Σ_{t∈V(G)\{s,v}} σst(v)/σst = Pen[v] · Σ_{t∈V(G)\{s,v}} Pen[t].
Hence, the influence of a degree-one vertex on the betweenness centrality of its neighbor can be computed in O(1) time, as Σ_{w∈V(G)\{s,v}} Pen[w] = norig − Pen[v] − Pen[s], where norig denotes the number of vertices in the original graph. Baglioni et al. [3] also adapted Brandes' algorithm in order to compute the betweenness centrality values of the remaining vertices using the counters Pen[·].
To this end, they showed that the betweenness centrality of a vertex v that is not removed in the
preprocessing step is
CB(v) = Σ_{s,t∈V(G)} Pen[s] · Pen[t] · σst(v)/σst = Σ_{s,t∈V(G)} γ(s, t, v).    (1)
Here, we use the following notation, which simplifies many formulas in this work.

Definition 1. Let γ(s, t, v) = Pen[s] · Pen[t] · σst(v)/σst.
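The Pen[·] counters are, in essence, subtree sizes, and on a tree subtree sizes already determine betweenness: counting ordered pairs, CB(v) = (n − 1)² − Σi s²i, where the si are the component sizes of T − v, since exactly the ordered pairs with endpoints in different components are separated by v. The sketch below is our illustration of this standard identity, not the authors' bookkeeping:

```python
def tree_betweenness(adj, root=None):
    """Betweenness (ordered pairs) on a tree, from subtree sizes alone."""
    n = len(adj)
    if root is None:
        root = next(iter(adj))
    # BFS order from the root, then subtree sizes bottom-up.
    size = {v: 1 for v in adj}
    parent = {root: None}
    order = [root]
    for u in order:
        for w in adj[u]:
            if w not in parent:
                parent[w] = u
                order.append(w)
    for u in reversed(order[1:]):
        size[parent[u]] += size[u]
    cb = {}
    for v in adj:
        # Components of T - v: one per child subtree, plus the rest of the tree.
        comps = [size[w] for w in adj[v] if parent.get(w) == v]
        comps.append(n - 1 - sum(comps))
        cb[v] = (n - 1) ** 2 - sum(s * s for s in comps)
    return cb
```

On a star with three leaves the center scores (3)² − 3 = 6, i.e. the 3 · 2 ordered leaf pairs routed through it, while every leaf scores 0.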
Our algorithm—basic idea. Starting with the graph where all degree-one vertices are
removed [3], our algorithm copes with degree-two vertices. In contrast to degree-one vertices,
degree-two vertices can lie on shortest paths between two other vertices. This difference makes
degree-two vertices harder to handle: Removing a degree-two vertex v in a similar way as done
with degree-one vertices (see Observation 1) affects many other shortest paths that neither start
nor end in v. Hence, we deal with degree-two vertices in a different manner. Instead of removing
vertices one-by-one as Baglioni et al. [3], we process multiple degree-two vertices at once. To
this end we exploit that consecutive degree-two vertices behave similarly. We distinguish two
possibilities for consecutive degree-two vertices captured by the following definitions.
Definition 2. Let G be a graph. A path P = v0 v1 . . . vℓ is a maximal path in G if ℓ ≥ 2, the
inner vertices v1 , v2 , . . . , vℓ−1 all have degree two in G, but the endpoints v0 and vℓ do not, that
is, degG (v1 ) = . . . = degG (vℓ−1 ) = 2, degG (v0 ) 6= 2, and degG (vℓ ) 6= 2. Moreover, P max is the set
of all maximal paths in G.
Definition 3. Let G be a graph. A cycle C = v0 v1 . . . vℓ v0 is a pending cycle in G if at most one
vertex in C does not have degree two in G. Moreover, C pen is the set of all pending cycles in G.
Section 3 presents a preprocessing step that deals with pending cycles. We first show how to
break the graph into two independent parts at any cut vertex (a vertex whose removal makes the
graph disconnected). These two parts can then be solved independently. Second, we show that
the betweenness centrality can be computed in O(n) time on a cycle with n vertices where each
vertex is equipped with a value Pen[·]. Putting both results together we can deal with pending
cycles: First break the graph into two parts with one part being the cycle. Then, the pending cycle
is just a cycle and we can compute the betweenness centrality for all its vertices. The remaining
part of the graph then has one pending cycle less and we can repeat this process. Overall, the
preprocessing step removes all pending cycles from the graph in O(n) time.
The rest of the algorithm deals with maximal paths (described in Section 4). Maximal paths
turn out to be more challenging than pending cycles and we are not able to remove or shrink them
in a preprocessing step. However, using standard arguments, one can show that the number of
maximal paths is upper-bounded in the feedback edge number k of the input graph:
Lemma 1. Let G be a graph with feedback edge number k containing no degree-one vertices. Then
the cardinalities |V ≥3 (G)|, |C pen |, and |P max | are upper-bounded by O(min{n, k}).
Proof. Let X ⊆ E(G) be a feedback edge set of G of size at most k. Since G does not contain any
degree-one vertices, in the forest G′ = (V (G), E(G) \ X) there are at most 2k leaves, since each
leaf must be incident to an edge in X. Since G′ is a forest, it has fewer vertices of degree at
least three than leaves, thus |V ≥3 (G′ )| < 2k. Observe that every edge in X can increase the degree
of its two incident vertices in G by one and thus we may obtain at most 2k additional vertices
of degree at least three in G. With this, and the fact that there are at most n vertices of degree at least three, we have |V ≥3 (G)| ≤ |V ≥3 (G′ )| + 2k ∈ O(min{k, n}).
Since every pending cycle contains an edge in X, it follows that |C pen| ∈ O(min{k, n}). For a
maximal path P max observe that the path contains an edge e ∈ X, or its endpoints are in V ≥3 (G′ ).
In the latter case, for every P max there exists a unique leaf to which it is leading, otherwise G′
contains a cycle. With this we obtain |P max | ∈ O(min{k, n}).
Our algorithm will process the maximal paths one by one and compute several additional
values in O(n) time per maximal path. Using these additional values, the algorithm runs a
modified version of Brandes’ algorithm starting the modified BFS only in vertices with degree at
least three. Since there are at most O(min{k, n}) maximal paths and vertices of degree at least
three, this results in a running time of O(kn).
Our algorithm—further details. We now provide more details for our algorithm—see
Algorithm 1 for pseudocode. To this end, we use our main results in Sections 3 and 4 mainly as
black boxes and concentrate on describing the interaction between the different parts.
As mentioned earlier, we start with the preprocessing of Baglioni et al. [3] (see Lines 4 to 7).
For a degree-one vertex v we update the betweenness centrality of its neighbor u. The idea behind
this is that if Pen[v] = 1, then the betweenness centrality of v is zero. Otherwise, the betweenness
centrality of v was computed in a previous iteration of the loop in Line 4. Refer to Baglioni
et al. [3] for the proof that the betweenness centrality values of all removed vertices are computed
correctly. As a consequence of the preprocessing, we have to consider the Pen[·] values and use
Equation (1) for the computation of the betweenness centrality values.
Next we compute all pending cycles and maximal paths in linear time (see Line 9).
Lemma 2. Let G be a graph. Then the set P max of all maximal paths and the set C pen of all
pending cycles can be computed in O(n + m) time.
Proof. Iterate through all vertices v ∈ V (G). If v ∈ V =2 (G), then iteratively traverse the two
edges incident to v to discover adjacent degree-two vertices until finding endpoints vℓ , vr ∈ V ≥3 (G).
If vℓ = vr , then add C pen = vℓ . . . vr vℓ to C pen. Otherwise, add P max = vℓ . . . vr to P max .
Algorithm 1: Algorithm for computing the betweenness centrality of a graph.

Input: An undirected graph G with feedback edge number k.
Output: The betweenness centrality values of all vertices.

 1  foreach v ∈ V(G) do                                  // initialization
 2      BC[v] ← 0        // in the end the table BC will contain the betweenness centrality values
 3      Pen[v] ← 1
 4  while ∃ v ∈ V(G) : deg_G(v) = 1 do                   // remove degree-one vertices as done by Baglioni et al. [3]
 5      u ← neighbor of v
 6      BC[u] ← BC[u] + 2 · Pen[v] · (∑_{w∈V(G)\{u,v}} Pen[w])       // takes O(1) time [3]
 7      Pen[u] ← Pen[u] + Pen[v]; delete v from G
 8  F ← feedback edge set of G                           // in O(n + m) time with a BFS
 9  P max ← all maximal paths of G; C pen ← all pending cycles of G   // by Lemma 2 computable in O(n + m) time
10  foreach C pen ∈ C pen do                             // dealing with pending cycles in O(n) time, see Section 3
11      update BC for all u ∈ C pen                      // in O(|C pen|) time, see Proposition 5 and Lemma 5
12      v ← vertex of C pen with degree at least three (if existing)
13      Pen[v] ← Pen[v] + |C pen| − 1; remove each vertex u ∈ C pen \ {v} from G
14  foreach s ∈ V≥3(G) do                                // some precomputations, takes O(kn) time by Lemma 4
15      compute d_G(s,t) and σ_st for each t ∈ V(G) \ {s}
16      Inc[s,t] ← 2 · Pen[s] · Pen[t]/σ_st for each t ∈ V=2(G)
17      Inc[s,t] ← Pen[s] · Pen[t]/σ_st for each t ∈ V≥3(G) \ {s}
18  foreach x_0 x_1 … x_a = P max ∈ P max do             // initialize W left and W right; takes O(n) time
19      W left[x_0] ← Pen[x_0]; W right[x_a] ← Pen[x_a]
20      for i = 1 to a do W left[x_i] ← W left[x_{i−1}] + Pen[x_i]
21      for i = a − 1 to 0 do W right[x_i] ← W right[x_{i+1}] + Pen[x_i]
22  foreach x_0 x_1 … x_a = P_1^max ∈ P max do           // case s ∈ V=2(P_1^max), see Section 4
23      foreach y_0 y_1 … y_b = P_2^max ∈ P max \ {P_1^max} do       // case t ∈ V=2(P_2^max), see Section 4.1
            /* update BC for the case v ∈ V(P_1^max) ∪ V(P_2^max) (cf. Definition 1) */
24          foreach v ∈ V(P_1^max) ∪ V(P_2^max) do BC[v] ← BC[v] + γ(s, t, v)
            /* now deal with the case v ∉ V(P_1^max) ∪ V(P_2^max) */
25          update Inc[x_0, y_0], Inc[x_a, y_0], Inc[x_0, y_b], and Inc[x_a, y_b]
        /* now deal with the case that t ∈ V=2(P_1^max), see Section 4.2 */
26      foreach v ∈ V(P_1^max) do BC[v] ← BC[v] + γ(s, t, v)
27      update Inc[x_0, x_a]                             // this deals with the case v ∉ V(P_1^max)
28  foreach s ∈ V≥3(G) do                                // perform modified BFS from s, see Section 4.3
29      foreach t, v ∈ V(G) do BC[v] ← BC[v] + Inc[s, t] · σ_st(v)
30  return BC
For every H1 , H2 ∈ C pen ∪ P max it holds that either V =2 (H1 ) ∩ V =2 (H2 ) = ∅ or H1 = H2 .
Hence, we do not need to reconsider any degree-two vertex found in the traversal above; thus we
require O(m + n) time to find all maximal paths and pending cycles.
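The traversal described in this proof can be sketched as follows (a simplified sketch, assuming a graph without degree-one vertices; it only discovers chains containing at least one degree-two vertex, and all names are ours):

```python
def paths_and_cycles(edges):
    """Sketch of the traversal in the proof of Lemma 2: walk chains of
    degree-two vertices to collect maximal paths and pending cycles.
    Assumes a graph without degree-one vertices."""
    adj = {}
    for u, v in edges:
        adj.setdefault(u, []).append(v)
        adj.setdefault(v, []).append(u)
    deg = {v: len(ns) for v, ns in adj.items()}
    seen = set()            # degree-two vertices already assigned to a chain
    paths, cycles = [], []

    def extend(prev, cur):
        # follow degree-two vertices until a vertex of degree >= 3
        # (or an already-visited vertex, for graphs that are pure cycles)
        out = []
        while deg[cur] == 2 and cur not in seen:
            seen.add(cur)
            out.append(cur)
            a, b = adj[cur]
            prev, cur = cur, (b if a == prev else a)
        out.append(cur)
        return out

    for v in adj:
        if deg[v] != 2 or v in seen:
            continue
        seen.add(v)
        left = extend(v, adj[v][0])
        if left[-1] == v:                    # walked all the way around: pure cycle
            cycles.append([v] + left)
            continue
        right = extend(v, adj[v][1])
        chain = left[::-1] + [v] + right     # v_l ... v ... v_r
        (cycles if chain[0] == chain[-1] else paths).append(chain)
    return paths, cycles
```

Each degree-two vertex is visited exactly once, so the running time is linear in the size of the graph.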
Having computed all pending cycles, we can process them one after another (see Lines 10 to 13).
The main idea herein is that if a connected graph contains a vertex v whose removal disconnects
the graph, then we can split the graph into at least two parts whose vertex sets overlap in v. After
the splitting, we can compute the betweenness centrality values of the two parts independently.
Applying this idea to a pending cycle C pen yields the following. Let v be the connection of C pen to
the rest of the graph. Then, we can split C pen from G and update the betweenness centrality values
of all vertices in C pen (see Line 11). After that, we can remove all vertices of C pen except v (see
Line 12). To store the information that there was a pending cycle connected to v, we update Pen[v]
(see Line 13). In Section 3 we provide the details for the splitting of the graph and the algorithm
that computes the betweenness centrality values in a cycle.
Lastly, we need to deal with the maximal paths (see Lines 14 to 29). This part of the algorithm
is more involved and requires its own pre- and postprocessing (see Lines 14 to 21 and Lines 28
to 29 respectively). In the preprocessing, we initialize tables used frequently in the main part (of
Section 4). The postprocessing computes the final betweenness centrality values of each vertex as
this computation is too time-consuming to be executed for each maximal path. When explaining
our basic ideas, we first present the postprocessing, as this explains why certain values need to be
computed beforehand.
Recall that we want to compute ∑_{s,t∈V(G)} γ(s, t, v) for each v ∈ V(G) (see Equation (1)).
Using the following observations, we split Equation (1) into different parts:
Observation 2. For s, t, v ∈ V (G) it holds that γ(s, t, v) = γ(t, s, v).
Proof. Since G is undirected, for every path v1 v2 . . . vℓ there also exists its reverse vℓ . . . v2 v1 .
Hence, we have σst = σts and σst (v) = σts (v), and thus γ(s, t, v) = γ(t, s, v).
Observation 3. Let G be a graph without degree-one vertices and without pending cycles. Let v ∈ V(G). Then,

    ∑_{s∈V(G), t∈V(G)} γ(s, t, v) = ∑_{s∈V≥3(G), t∈V(G)} γ(s, t, v) + ∑_{s∈V=2(G), t∈V≥3(G)} γ(t, s, v)
        + ∑_{P_1^max ≠ P_2^max ∈ P max} ∑_{s∈V=2(P_1^max), t∈V=2(P_2^max)} γ(s, t, v)
        + ∑_{P max ∈ P max} ∑_{s,t∈V=2(P max)} γ(s, t, v).
Proof. The first two sums cover all pairs of vertices in which at least one of the two vertices is of
degree at least three. The other two sums cover all pairs of vertices which both have degree two.
As all vertices of degree two must be part of some maximal path we have V=2(G) = V=2(∪ P max).
Two vertices of degree two can thus either be in two different maximal paths (third sum) or in
the same maximal path (fourth sum).
In the remaining graph, by Lemma 1, there are at most O(min{k, n}) vertices of degree at least
three and at most O(k) maximal paths. This implies that we can afford to run the modified BFS
(similar to Brandes' algorithm) from each vertex s ∈ V≥3(G) in O(min{k, n} · (n + k)) = O(kn)
time. This computes the first summand and, by Observation 2, also the second summand of the
sum given in Observation 3. However, we cannot afford to run such a BFS from every vertex
in V=2(G). Thus, we need to compute the third and fourth summand in a different way.
Notice that the only factor in γ(s, t, v) that depends on v is the term σst (v). Our goal is then
to precompute the factor γ(s, t, v)/σst (v) = Pen[s] · Pen[t]/σst for as many vertices as possible. We
will store precomputed values in table Inc[·, ·] (see Lines 17, 25 and 27). Then, we plug this factor
into the next lemma which provides our postprocessing and will be proven in Section 4.3.
Lemma 3. Let s be a vertex and let f : V(G)² → ℕ be a function such that for each u, v ∈ V(G)
the value f(u, v) can be computed in O(τ) time. Then, one can compute ∑_{t∈V(G)} f(s, t) · σ_st(v)
for all v ∈ V(G) in overall O(n · τ + m) time.
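The proof of Lemma 3 is deferred to Section 4.3; one plausible realization in the spirit of Brandes' accumulation is sketched below (all names ours; it uses that σ_st(v) = σ_sv · σ_vt whenever v lies on a shortest s-t path):

```python
from collections import deque

def weighted_path_counts(adj, s, f):
    """Sketch of the computation behind Lemma 3: returns A[v] =
    sum over t of f(s, t) * sigma_st(v), for all v, using one BFS from s
    plus a Brandes-style backward accumulation."""
    dist, sigma = {s: 0}, {s: 1}
    preds = {v: [] for v in adj}
    order, queue = [], deque([s])
    while queue:                       # forward phase: distances and path counts
        u = queue.popleft()
        order.append(u)
        for w in adj[u]:
            if w not in dist:
                dist[w] = dist[u] + 1
                sigma[w] = 0
                queue.append(w)
            if dist[w] == dist[u] + 1:
                sigma[w] += sigma[u]
                preds[w].append(u)
    # backward phase: delta[v] = sum over shortest-path successors w
    # of (f(s, w) + delta[w]); then A[v] = sigma_sv * delta[v]
    delta = {v: 0.0 for v in adj}
    for w in reversed(order):
        for v in preds[w]:
            delta[v] += f(s, w) + delta[w]
    result = {v: sigma.get(v, 0) * delta[v] for v in adj}
    result[s] = 0.0                    # v = s is never an inner vertex
    return result
```

One evaluation of f per vertex and one pass over the edges give the O(n · τ + m) bound claimed in the lemma.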
Our strategy is to start the algorithm behind Lemma 3 only from vertices in V≥3(G) (see
Line 29). Since the term τ in the above lemma will be constant, we obtain a running time
of O(kn) for this postprocessing. The most technical part will be to precompute the factors in
Inc[·, ·] (see Lines 25 and 27). We defer the details to Sections 4.1 and 4.2. In these parts, we need
the tables W left[·] and W right[·]. These tables store values depending on the maximal path a vertex
is in. More precisely, for a vertex x_k in a maximal path P max = x_0 x_1 … x_a, we store in W left[x_k]
the sum of the Pen[·]-values of the vertices "left of" x_k in P max; formally, W left[x_k] = ∑_{j=1}^{k} Pen[x_j].
Similarly, we have W right[x_k] = ∑_{j=k}^{a−1} Pen[x_j]. The reason for having these tables is easy to see:
Assume for the vertex x_k ∈ P max that the shortest paths to t ∉ V(P max) leave P max through x_0.
Then, it is equivalent to just consider the shortest path(s) starting in x_0 and simulate the vertices
between x_k and x_0 in P max by "temporarily increasing" Pen[x_0] by W left[x_k]. This is also the
idea behind the argument that we only need to increase the values Inc[·, ·] for the endpoints of the
maximal paths in Line 25.
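The tables can be filled by two prefix scans, mirroring the initialization in Lines 18 to 21 of Algorithm 1 (a sketch; list indices stand for the vertices x_0, …, x_a):

```python
def w_tables(pen):
    """Prefix tables over a maximal path x_0 ... x_a, following the
    initialization of Lines 18-21 of Algorithm 1: w_left[k] accumulates
    Pen from the left end up to x_k, w_right[k] from x_k to the right end."""
    a = len(pen) - 1
    w_left, w_right = [0] * (a + 1), [0] * (a + 1)
    w_left[0], w_right[a] = pen[0], pen[a]
    for i in range(1, a + 1):
        w_left[i] = w_left[i - 1] + pen[i]
    for i in range(a - 1, -1, -1):
        w_right[i] = w_right[i + 1] + pen[i]
    return w_left, w_right
```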
This leaves us with the remaining part of the preprocessing, the computation of the distances dG (s, t), the number of shortest paths σst , and Inc[s, t] for s ∈ V ≥3 (G), t ∈ V (G) (see
Lines 14 to 17). This can be done in O(kn) time as well:
Lemma 4. The initialization in the for-loop in Lines 14 to 17 of Algorithm 1 can be done in O(kn)
time.
Proof. Following Brandes [4, Corollary 4], computing the distances and the number of shortest paths from a fixed vertex s to every t ∈ V (G) takes O(m) = O(n + k) time. Once these
values are computed for a fixed s, computing Inc[s, t] for t ∈ V (G) takes O(n) time since the values Pen[s], Pen[t] and σst are known. Since by Lemma 1 there are O(min{k, n}) vertices of degree
at least three, it takes O(min{k, n} · (n + k + n)) = O(kn) time to compute Lines 14 to 17.
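The per-source precomputation of Lines 14 and 15 is a breadth-first search that also counts shortest paths, cf. Brandes [4]; a minimal sketch (names ours):

```python
from collections import deque

def count_shortest_paths(adj, s):
    """Single-source BFS computing d_G(s, t) and sigma_st for all
    reachable t (the precomputation of Lines 14-15, cf. Brandes [4])."""
    dist, sigma = {s: 0}, {s: 1}
    queue = deque([s])
    while queue:
        u = queue.popleft()
        for w in adj[u]:
            if w not in dist:              # first time w is discovered
                dist[w] = dist[u] + 1
                sigma[w] = 0
                queue.append(w)
            if dist[w] == dist[u] + 1:     # u precedes w on a shortest path
                sigma[w] += sigma[u]
    return dist, sigma
```

On a four-cycle, the vertex opposite the source is reached by two shortest paths.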
Putting all parts together, we arrive at our main theorem (see Section 4.3 for the proof).
Theorem 4. Betweenness Centrality can be solved in O(kn) time, where k is the feedback
edge number of the input graph.
3    Preprocessing for pending cycles
In this section, we describe our preprocessing that removes all pending cycles from the input
graph. To this end, let C pen be a pending cycle. There are two cases: Either C pen has one vertex
in V ≥3 (G) or C pen has no such vertex. In the latter case the graph is a cycle. Our strategy is as
follows: If C pen has no vertex in V ≥3 (G), then we compute all betweenness centrality values in
linear time using a dynamic program. Otherwise, we split the graph into two parts with one part
being C pen (now without a vertex in V ≥3 (G)) and the other part being the rest. After splitting
the graph, we can process the two parts independently.
We next describe the basic idea behind the splitting operation. Let C pen be a pending cycle
with v ∈ V ≥3 (G) being its connection to the rest of the graph. Obviously, every shortest path P
that starts in C pen and ends outside C pen has to pass through v. Thus, for the betweenness
centrality values of the vertices outside C pen it is not important where exactly P starts. Similarly,
for the betweenness centrality values of the vertices inside C pen it is not important where exactly P
ends. The only relevant information is the size of C pen and the rest as this determines how many
paths start in C pen and end outside C pen . Hence, we can split the graph at v (where each part
gets a copy of v). All we need to take care of is to increase the Pen[·]-value of each copy of v by
the size of the respective part. Generalizing this idea, we arrive at the following lemma.
Lemma 5 (Split Lemma). Let G be a connected graph. If there is a vertex v ∈ V(G) such that
removing v (and all incident edges) from G yields k ≥ 2 connected components C_1, C_2, …, C_k,
then remove v, add a new vertex v_i to each component C_i, make it adjacent to all vertices
in the respective component that were adjacent to v, and set Pen[v_i] = ∑_{w∈V(G)\V(C_i)} Pen[w].
Computing the betweenness centrality of each connected component independently, increasing the
betweenness centrality of v by

    ∑_{i=1}^{k} ( C_B^{C_i}(v_i) + (∑_{s∈V(C_i)\{v_i}} Pen[s]) · (Pen[v_i] − Pen[v]) ),

and ignoring all new vertices v_i is the same as computing the betweenness centrality in G, that is,

    C_B^G(u) = C_B^{C_i}(u)    if u ∈ V(C_i) \ {v_i};
    C_B^G(v) = ∑_{i=1}^{k} ( C_B^{C_i}(v_i) + (∑_{s∈V(C_i)\{v_i}} Pen[s]) · (Pen[v_i] − Pen[v]) ).
Proof. First, we show that C_B^{C_i}(u) = C_B^G(u) if u ∈ V(C_i) \ {v_i}, and afterwards that the
computation for v is correct. Let u ∈ V(G) \ {v} be any vertex other than v and let C_i be the
connected component that u is contained in after removing v from G. By definition of γ it holds
that C_B^G(u) = ∑_{s,t∈V(G)} γ(s, t, u). We show how this can be rewritten to C_B^{C_i}(u) and afterwards
justify the equalities.

    ∑_{s,t∈V(G)} γ(s, t, u)
      = ∑_{s,t∈V(C_i)\{v_i}} γ(s, t, u) + ∑_{s∈V(C_i)\{v_i}, t∈V(G)\V(C_i)} γ(s, t, u)
        + ∑_{s∈V(G)\V(C_i), t∈V(C_i)\{v_i}} γ(s, t, u) + ∑_{s,t∈V(G)\V(C_i)} γ(s, t, u)    (2)
      = ∑_{s,t∈V(C_i)\{v_i}} γ(s, t, u) + 2 · ∑_{s∈V(C_i)\{v_i}, t∈V(G)\V(C_i)} γ(s, t, u)    (3)
      = C_B^{C_i}(u) − 2 · ∑_{s∈V(C_i)\{v_i}} γ(s, v_i, u) + 2 · ∑_{s∈V(C_i)\{v_i}, t∈V(G)\V(C_i)} γ(s, t, u)    (4)
      = C_B^{C_i}(u) + 2 · ∑_{s∈V(C_i)\{v_i}} Pen[s] · ( (∑_{t∈V(G)\V(C_i)} Pen[t] · σ_st(u)/σ_st) − Pen[v_i] · σ_{s v_i}(u)/σ_{s v_i} )    (5)
      = C_B^{C_i}(u).

We now prove the correctness of the equalities above. First, observe that each vertex of V(G) \ {v}
is either contained in V(C_i) \ {v_i} or in V(G) \ V(C_i), and therefore Equation (2) is only a simple
case distinction. Second, each shortest path between two vertices that are not in V(C_i) never passes
through any vertex in V(C_i) \ {v_i}, as it would have to pass v twice. Hence the fourth sum in
Equation (2) is 0. The factor 2 in Equation (3) is due to the symmetry of γ. To see that Equation (4)
is correct, notice that the first sum in Equation (3) is the betweenness centrality of u in the connected
component C_i without v_i, that is, C_B^{C_i}(u) minus the γ-contributions of the pairs involving v_i.
Lastly, observe that by definition Pen[v_i] = ∑_{w∈V(G)\V(C_i)} Pen[w] and that v lies on all shortest
paths from a vertex s ∈ V(C_i) to any vertex t ∈ V(G) \ V(C_i). Thus, σ_st = σ_{s v_i} and
σ_st(u) = σ_{s v_i}(u), so the term in parentheses in Equation (5) vanishes and the chain collapses
to C_B^{C_i}(u).
We now show that the computation for v is correct as well. Again, we explain the equalities
after stating them. First observe that each copy of v is in a different component and therefore we
only need to consider s, t ∈ V(G) \ {v}. Hence, it holds that

    C_B^G(v) = ∑_{s,t∈V(G)\{v}} γ(s, t, v)
      = ∑_{i=1}^{k} ∑_{s,t∈V(C_i)\{v_i}} γ(s, t, v) + ∑_{i=1}^{k} ∑_{j=1, j≠i}^{k} ∑_{s∈V(C_i)\{v_i}} ∑_{t∈V(C_j)\{v_j}} γ(s, t, v)    (6)
      = ∑_{i=1}^{k} C_B^{C_i}(v_i) + ∑_{i=1}^{k} ∑_{s∈V(C_i)\{v_i}} Pen[s] · ∑_{j=1, j≠i}^{k} ∑_{t∈V(C_j)\{v_j}} Pen[t] · σ_st(v)/σ_st    (7)
      = ∑_{i=1}^{k} C_B^{C_i}(v_i) + ∑_{i=1}^{k} ∑_{s∈V(C_i)\{v_i}} Pen[s] · (Pen[v_i] − Pen[v]).

Notice that Equation (6) is only a case distinction between the cases that s and t are in the
same connected component or in different ones. To see that Equation (7) is correct, observe
that v lies on every shortest path between vertices of different components, so σ_st(v)/σ_st = 1, and
that ∑_{j=1, j≠i}^{k} ∑_{t∈V(C_j)\{v_j}} Pen[t] = ∑_{t∈(V(G)\V(C_i))\{v}} Pen[t] = Pen[v_i] − Pen[v] by the
definition of Pen[v_i]. This concludes the proof.
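The Split Lemma can be checked numerically on a toy instance, two triangles glued at a cut vertex v, by comparing a brute-force γ-sum on the whole graph with the combined per-component values (all names ours; not the paper's implementation):

```python
from collections import deque
from itertools import product

def betweenness(adj, pen):
    """Brute-force Pen-weighted betweenness:
    C_B(v) = sum over ordered pairs (s, t) of Pen[s]*Pen[t]*sigma_st(v)/sigma_st."""
    def bfs(src):
        dist, sigma = {src: 0}, {src: 1}
        q = deque([src])
        while q:
            u = q.popleft()
            for w in adj[u]:
                if w not in dist:
                    dist[w] = dist[u] + 1; sigma[w] = 0; q.append(w)
                if dist[w] == dist[u] + 1:
                    sigma[w] += sigma[u]
        return dist, sigma
    D, S = {}, {}
    for v in adj:
        D[v], S[v] = bfs(v)
    cb = {v: 0.0 for v in adj}
    for s, t, v in product(adj, repeat=3):
        if v in (s, t) or s == t:
            continue
        if D[s][v] + D[v][t] == D[s][t]:   # v lies on a shortest s-t path
            cb[v] += pen[s] * pen[t] * S[s][v] * S[v][t] / S[s][t]
    return cb

# G: two triangles a-b-v and c-d-v glued at the cut vertex v, Pen == 1
G = {"a": ["b", "v"], "b": ["a", "v"], "c": ["d", "v"], "d": ["c", "v"],
     "v": ["a", "b", "c", "d"]}
whole = betweenness(G, {u: 1 for u in G})

# split at v: each component gets a copy v_i with Pen[v_i] = Pen of the rest
C1 = {"a": ["b", "v1"], "b": ["a", "v1"], "v1": ["a", "b"]}
C2 = {"c": ["d", "v2"], "d": ["c", "v2"], "v2": ["c", "d"]}
bc1 = betweenness(C1, {"a": 1, "b": 1, "v1": 3})
bc2 = betweenness(C2, {"c": 1, "d": 1, "v2": 3})

# combine according to the Split Lemma (Pen[v] = 1)
combined_v = (bc1["v1"] + (1 + 1) * (3 - 1)) + (bc2["v2"] + (1 + 1) * (3 - 1))
assert whole["v"] == combined_v
assert whole["a"] == bc1["a"] and whole["c"] == bc2["c"]
```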
Observe that we can split all pending cycles off the graph using Lemma 5. It is easy to see that
these splits can be done in overall linear time. If the graph is split into multiple components in
the process, then we can just solve them independently and increase BC[v] by
∑_{i=1}^{k} ∑_{s∈V(C_i)\{v_i}} Pen[s] · (Pen[v_i] − Pen[v]) in linear time. Notice
that ∑_{i=1}^{k} |V(C_i) \ {v_i}| ≤ n. Having split the pending cycle from
the remaining graph, it remains to show that the betweenness centrality values of the vertices in
a cycle can be computed in linear time. To this end, notice that the vertices in the cycle can have
different betweenness centrality values as they can have different Pen[·]-values.
Proposition 5. Let C = x0 . . . xa x0 be a cycle. Then, the betweenness centrality of the vertices
in C can be computed in O(a) time.
Proof. We first introduce some notation needed for the proof, then we show how to
compute BC[v] for v ∈ V(C) efficiently. Finally, we prove the running time.

We denote by [x_i, x_j], for 0 ≤ i, j ≤ a, the vertex set {x_i, x_{i+1 mod (a+1)}, x_{i+2 mod (a+1)}, …, x_j}.
Similarly to W left[·] and W[·, ·] on maximal paths, we define W left[x_i] = ∑_{k=0}^{i} Pen[x_k] and

    W[x_i, x_j] = Pen[x_i],                                              if i = j;
                  W left[x_j] − W left[x_i] + Pen[x_i],                  if i < j;
                  W left[x_a] − W left[x_i] + W left[x_j] + Pen[x_i],    if i > j.

The value W[x_i, x_j] is the sum of all values Pen[·] from x_i to x_j, clockwise. Further, we
denote by ϕ(i) = ((a+1)/2 + i) mod (a+1) the index that is "opposite" to i on the cycle. Notice that
if ϕ(i) ∈ ℕ, then x_{ϕ(i)} is the unique vertex in C to which there are two shortest paths from x_i,
one visiting x_{i+1 mod (a+1)} and one visiting x_{i−1 mod (a+1)}. Otherwise, if ϕ(i) ∉ ℕ, then there is only
one shortest path from x_i to any t ∈ V(C). For the sake of convenience in the next parts of the
proof, if ϕ(i) ∉ ℕ, we say that Pen[x_{ϕ(i)}] = 0. Further, by ϕ left(i) = (⌈ϕ(i)⌉ − 1) mod (a+1) we
denote the index of the vertex to the left of index ϕ(i), and by ϕ right(i) = (⌊ϕ(i)⌋ + 1) mod (a+1)
the index of the vertex to the right of index ϕ(i).
We now lay out the algorithm. For every vertex x_k, 0 ≤ k ≤ a, we need to compute

    BC[x_k] := ∑_{s,t∈V(C)} γ(s, t, x_k) = ∑_{i=0}^{a} ∑_{t∈V(C)} γ(x_i, t, x_k).
We compute these values in a dynamic programming manner. We first show how to compute the
betweenness centrality of x0 , and then present how to compute values BC[xk+1 ] for 0 ≤ k ≤ a − 1
given the value BC[xk ].
Towards computing BC[x0 ], observe that γ(xi , t, x0 ) = 0 if xi = x0 or t = x0 . Also, for every
shortest path starting in xϕ(0) and ending in some xj , 1 ≤ j ≤ a, it holds that dC (xϕ(0) , xj ) <
dC (xϕ(0) , x0 ). Thus there is no shortest path starting in xϕ(0) that visits x0 . Hence, we do not
need to consider the cases i = 0 or i = ϕ(0) and we have

    BC[x_0] = ∑_{i=0, 0≠i≠ϕ(0)}^{a} ∑_{t∈V(C)} γ(x_i, t, x_0)
      = ∑_{i=1}^{ϕ left(0)} ∑_{t∈V(C)} γ(x_i, t, x_0) + ∑_{i=ϕ right(0)}^{a} ∑_{t∈V(C)} γ(x_i, t, x_0)
      = ∑_{i=1}^{ϕ left(0)} ∑_{t∈V(C)} Pen[x_i] · Pen[t] · σ_{x_i t}(x_0)/σ_{x_i t}
        + ∑_{i=ϕ right(0)}^{a} ∑_{t∈V(C)} Pen[x_i] · Pen[t] · σ_{x_i t}(x_0)/σ_{x_i t}.

By definition of ϕ(i) we have that d_C(x_i, x_{ϕ left(i)}) = d_C(x_i, x_{ϕ right(i)}) < (a+1)/2. Hence, there is a
unique shortest path from x_i to x_{ϕ left(i)} visiting x_{i+1 mod (a+1)}, and there is a unique shortest path
from x_i to x_{ϕ right(i)} visiting x_{i−1 mod (a+1)}. This gives us that in the equation above, in the first
sum, all shortest paths from x_i to t ∈ [x_{ϕ right(i)}, x_a] visit x_0, and in the second sum, all shortest
paths from x_i to t ∈ [x_1, x_{ϕ left(i)}] visit x_0. If ϕ(i) ∈ ℕ, then there are two shortest paths from x_i
to x_{ϕ(i)}, and one of them visits x_0. With this we can rewrite the sum as follows:

    BC[x_0] = ∑_{i=1}^{ϕ left(0)} ( Pen[x_i] · Pen[x_{ϕ(i)}] · 1/2 + ∑_{t∈[x_{ϕ right(i)}, x_a]} Pen[x_i] · Pen[t] )
            + ∑_{i=ϕ right(0)}^{a} ( Pen[x_i] · Pen[x_{ϕ(i)}] · 1/2 + ∑_{t∈[x_1, x_{ϕ left(i)}]} Pen[x_i] · Pen[t] )
      = ∑_{i=1}^{ϕ left(0)} Pen[x_i] · ( (1/2) · Pen[x_{ϕ(i)}] + W[x_{ϕ right(i)}, x_a] )
      + ∑_{i=ϕ right(0)}^{a} Pen[x_i] · ( (1/2) · Pen[x_{ϕ(i)}] + W[x_1, x_{ϕ left(i)}] ).
Since the values Pen[·] are given, the values W left [·] can be precomputed in O(a) time and
thus, when computing BC[x0 ], the values W [·, ·] can be obtained in constant time. The values ϕ(i), ϕleft (i) and ϕright (i) can be computed in constant time as well, and thus it takes O(a)
time to compute BC[x0 ].
Assume now that we have computed BC[x_k]. Then we claim that BC[x_{k+1}], 0 ≤ k < a, can
be computed as follows:

    BC[x_{k+1}] = BC[x_k] − Pen[x_{k+1}] · ( Pen[x_{ϕ(k+1)}] + 2 · W[x_{ϕ right(k+1)}, x_{k−1 mod (a+1)}] )
                + Pen[x_k] · ( Pen[x_{ϕ(k)}] + 2 · W[x_{k+2 mod (a+1)}, x_{ϕ left(k)}] ).    (8)

To this end, observe that all shortest paths in C that visit x_k also visit x_{k+1}, except for those
paths that start or end in x_{k+1}. Likewise, all shortest paths in C that visit x_{k+1} also visit x_k,
except for those paths that start or end in x_k. Hence, to compute BC[x_{k+1}] from BC[x_k], we
need to subtract the γ-values for shortest paths starting in x_{k+1} and visiting x_k, and we need to
add the γ-values for shortest paths starting in x_k and visiting x_{k+1}. Since, by Observation 2, each
path contributes the same value to the betweenness centrality as its reverse, we have

    BC[x_{k+1}] = BC[x_k] + 2 · ∑_{t∈V(C)} ( γ(x_k, t, x_{k+1}) − γ(x_{k+1}, t, x_k) ).    (9)

With a similar argumentation as above for the computation of BC[x_0] one can show that shortest
paths starting in x_k and visiting x_{k+1} must end in t ∈ [x_{k+2}, x_{ϕ left(k)}], or in x_{ϕ(k)}. Shortest
paths starting in x_{k+1} and visiting x_k must end in t ∈ [x_{ϕ right(k+1)}, x_{k−1}], or in x_{ϕ(k+1)}. As
stated above, for both i = k and i = k+1, only half of the shortest paths from x_i to x_{ϕ(i)} visit
the respective fixed vertex. With the arguments above we can rewrite Equation (9) to obtain
the claimed Equation (8).
running time of O(a).
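The exchange argument behind Equation (9) can be verified by brute force on a small weighted cycle (a sketch with our names; γ is evaluated directly from BFS path counts):

```python
from collections import deque

n = 6                            # even cycle: antipodal pairs have two shortest paths
adj = {i: [(i - 1) % n, (i + 1) % n] for i in range(n)}
pen = [1, 2, 1, 3, 1, 2]

def bfs(src):
    dist, sigma = {src: 0}, {src: 1}
    q = deque([src])
    while q:
        u = q.popleft()
        for w in adj[u]:
            if w not in dist:
                dist[w] = dist[u] + 1; sigma[w] = 0; q.append(w)
            if dist[w] == dist[u] + 1:
                sigma[w] += sigma[u]
    return dist, sigma

D, S = zip(*(bfs(i) for i in range(n)))

def gamma(s, t, v):
    """gamma(s, t, v) = Pen[s] * Pen[t] * sigma_st(v) / sigma_st."""
    if v in (s, t) or s == t or D[s][v] + D[v][t] != D[s][t]:
        return 0.0
    return pen[s] * pen[t] * S[s][v] * S[v][t] / S[s][t]

BC = [sum(gamma(s, t, v) for s in range(n) for t in range(n)) for v in range(n)]

# Equation (9): stepping from x_k to x_{k+1} exchanges exactly the paths
# that start in one of the two vertices and pass through the other
for k in range(n - 1):
    delta = 2 * sum(gamma(k, t, k + 1) - gamma(k + 1, t, k) for t in range(n))
    assert abs(BC[k + 1] - (BC[k] + delta)) < 1e-9
```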
4    Dealing with maximal paths
In this section, we put our focus on degree-two vertices contained in maximal paths in order to
compute the betweenness centrality CB (v) (cf. Equation (1)) for all v ∈ V (G) in overall O(kn)
time. In the end of this section, we also prove Theorem 4.
Based on Observation 3, we compute Equation (1) in three steps. By starting a modified BFS
from vertices in V≥3(G) similar to Baglioni et al. [3] and Brandes [4], we can compute

    ∑_{s∈V≥3(G), t∈V(G)} γ(s, t, v) + ∑_{s∈V=2(G), t∈V≥3(G)} γ(t, s, v)

in O(kn) time. We show in the next two subsections how to compute the remaining two parts
given in Observation 3. In the last subsection we describe the postprocessing and prove our main
theorem.
4.1    Paths with endpoints in different maximal paths
In this subsection, we look at shortest paths whose endpoints lie in two different maximal
paths P_1^max = x_0 … x_a and P_2^max = y_0 … y_b, and show how to efficiently compute how
these paths affect the betweenness centrality. Our goal is to prove the following:
Proposition 6. The following can be computed for all v ∈ V(G) in overall O(kn) time:

    ∑_{P_1^max ≠ P_2^max ∈ P max} ∑_{s∈V=2(P_1^max), t∈V=2(P_2^max)} γ(s, t, v).
Notice that in the course of the algorithm, we first gather all increments to Inc[·, ·] and in the
final step compute for each s, t ∈ V ≥3 (G) the values Inc[s, t] · σst (v) in O(m) time (Lemma 3).
This postprocessing (see Lines 28 and 29 in Algorithm 1) can be done in overall O(kn) time. To
keep the following proofs simple we assume that these values Inc[s, t] · σst (v) can be computed in
constant time for every s, t ∈ V ≥3 (G) and v ∈ V (G).
For every pair P1max 6= P2max ∈ P max of maximal paths, we consider two cases: First, we look
at how the shortest paths between vertices in P1max and P2max affect the betweenness centrality of
those vertices that are not contained in the two maximal paths, and second, how they affect the
betweenness centrality of those vertices that are contained in the two maximal paths. Finally, we
prove Proposition 6.
Throughout the following proofs, we will need the following definitions. Let t ∈ P_2^max. Then we
choose vertices x_t^left, x_t^right ∈ V=2(P_1^max) such that shortest paths from t
to s ∈ {x_1, x_2, …, x_t^left} =: X_t^left enter P_1^max only via x_0, and shortest paths from t
to s ∈ {x_t^right, …, x_{a−2}, x_{a−1}} =: X_t^right enter P_1^max only via x_a. There may exist a
vertex x_t^mid to which there are shortest paths both via x_0 and via x_a. For computing the indices
of those vertices, we determine the index i such that d_G(x_0, t) + i = d_G(x_a, t) + a − i, which is
equivalent to i = (1/2) · (a − d_G(x_0, t) + d_G(x_a, t)). If i is integral, then x_t^mid = x_i,
x_t^left = x_{i−1}, and x_t^right = x_{i+1}. Otherwise, x_t^mid does not exist,
and x_t^left = x_{i−1/2} and x_t^right = x_{i+1/2}. For easier argumentation, if x_t^mid does not exist, then we
say that Pen[x_t^mid] = σ_{t x_t^mid}(v)/σ_{t x_t^mid} = 0, and hence γ(x_t^mid, t, v) = 0. For computing the indices
of x_t^left, x_t^mid, and x_t^right, note that t = y_j for some 1 ≤ j < b. Since we compute the distances
between all pairs of vertices of degree at least three beforehand, the value i can be computed in
constant time.
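The index computation for x_t^left, x_t^mid, and x_t^right can be sketched directly (hypothetical helper; it returns the three path indices, with None when x_t^mid does not exist):

```python
def split_indices(a, d0, da):
    """Given a maximal path x_0 ... x_a and the distances d0 = d_G(x_0, t)
    and da = d_G(x_a, t), solve d0 + i = da + (a - i) for the balance
    point i and return the indices of x_left, x_mid, x_right."""
    twice_i = a - d0 + da                      # = 2 * i
    if twice_i % 2 == 0:                       # i integral: x_mid exists
        i = twice_i // 2
        return i - 1, i, i + 1
    return (twice_i - 1) // 2, None, (twice_i + 1) // 2
```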
4.1.1    Vertices outside of the maximal paths
We now show how to compute how the shortest paths between two fixed maximal paths P1max
and P2max affect the betweenness centrality of vertices that are not contained in P1max or in P2max ,
that is v ∈ V (G) \ (V (P1max ) ∪ V (P2max )). Recall that in the final step of the algorithm we compute
the values Inc[s, t] · σs,t (v) for every s, t ∈ V ≥3 (G) and v ∈ V (G). Thus, to keep the following
proof simple, we assume that these values can be computed in constant time.
Lemma 6. Let P_1^max ≠ P_2^max ∈ P max, P_1^max = x_0 … x_a, P_2^max = y_0 … y_b. Then, assuming
that the values d_G(s, t), W left[v], and W right[v] are known for s, t ∈ V≥3(G) and v ∈ V=2(G),
respectively, and that the values Inc[s, t] · σ_st(v) can be computed in constant time for every s, t ∈
V≥3(G) and v ∈ V(G), the following can be computed for all v ∈ V(G) \ (V(P_1^max) ∪ V(P_2^max))
in O(b) time:

    ∑_{s∈V=2(P_1^max), t∈V=2(P_2^max)} γ(s, t, v).    (10)
Proof. We fix P_1^max ≠ P_2^max ∈ P max with P_1^max = x_0 … x_a and P_2^max = y_0 … y_b. We show how to
compute ∑_{s∈V=2(P_1^max)} γ(s, t, v) for a fixed t ∈ V=2(P_2^max) and v ∈ V(G) \ (V(P_1^max) ∪ V(P_2^max)).
Afterwards we analyze the time required.

Since we can split the vertices in V=2(P_1^max) into those that can be reached from t via both x_0
and x_a, only via x_0, or only via x_a, we have

    ∑_{s∈V=2(P_1^max)} γ(s, t, v) = γ(x_t^mid, t, v) + ∑_{s∈X_t^left} γ(s, t, v) + ∑_{s∈X_t^right} γ(s, t, v).    (11)

By definition of maximal paths, all shortest paths from s ∈ V=2(P_1^max) to t visit either y_0 or y_b.
For each ψ ∈ {x_0, x_a} let S(t, ψ) be a maximal subset of {y_0, y_b} such that for each ϕ ∈ S(t, ψ) there
is a shortest s-t-path via ψ and ϕ. Then, for s ∈ X_t^left, all s-t-paths visit x_0 and some ϕ ∈ S(t, x_0).
Hence, we have σ_st = ∑_{ϕ∈S(t,x_0)} σ_{x_0 ϕ} and σ_st(v) = ∑_{ϕ∈S(t,x_0)} σ_{x_0 ϕ}(v). Analogously, for s ∈ X_t^right
we have σ_st = ∑_{ϕ∈S(t,x_a)} σ_{x_a ϕ}, and the similar equality holds for σ_st(v). Paths from t to x_t^mid may
visit x_0 and ϕ ∈ S(t, x_0), or x_a and ϕ ∈ S(t, x_a). Hence, σ_{t x_t^mid} = ∑_{ϕ∈S(t,x_0)} σ_{x_0 ϕ} + ∑_{ϕ∈S(t,x_a)} σ_{x_a ϕ},
and the equality holds analogously for σ_{t x_t^mid}(v). With this at hand, we can simplify the computation
of the first sum of Equation (11):

    ∑_{s∈X_t^left} γ(s, t, v) = ∑_{s∈X_t^left} Pen[s] · Pen[t] · σ_st(v)/σ_st
      = ∑_{s∈X_t^left} Pen[s] · Pen[t] · (∑_{ϕ∈S(t,x_0)} σ_{x_0 ϕ}(v)) / (∑_{ϕ∈S(t,x_0)} σ_{x_0 ϕ})
      = W left[x_t^left] · Pen[t] · (∑_{ϕ∈S(t,x_0)} σ_{x_0 ϕ}(v)) / (∑_{ϕ∈S(t,x_0)} σ_{x_0 ϕ}).    (12)

Analogously,

    ∑_{s∈X_t^right} γ(s, t, v) = W right[x_t^right] · Pen[t] · (∑_{ϕ∈S(t,x_a)} σ_{x_a ϕ}(v)) / (∑_{ϕ∈S(t,x_a)} σ_{x_a ϕ}),    (13)

and

    γ(x_t^mid, t, v) = Pen[x_t^mid] · Pen[t] · (∑_{ϕ∈S(t,x_0)} σ_{x_0 ϕ}(v) + ∑_{ϕ∈S(t,x_a)} σ_{x_a ϕ}(v)) / (∑_{ϕ∈S(t,x_0)} σ_{x_0 ϕ} + ∑_{ϕ∈S(t,x_a)} σ_{x_a ϕ}).    (14)

With this, and Equations (12), (13) and (14), we can rewrite Equation (11) as

    ∑_{s∈V=2(P_1^max)} γ(s, t, v)
      = (W left[x_t^left] · Pen[t]) / (∑_{ϕ∈S(t,x_0)} σ_{x_0 ϕ}) · ∑_{ϕ∈S(t,x_0)} σ_{x_0 ϕ}(v)
        + (W right[x_t^right] · Pen[t]) / (∑_{ϕ∈S(t,x_a)} σ_{x_a ϕ}) · ∑_{ϕ∈S(t,x_a)} σ_{x_a ϕ}(v)
        + Pen[x_t^mid] · Pen[t] · (∑_{ϕ∈S(t,x_0)} σ_{x_0 ϕ}(v) + ∑_{ϕ∈S(t,x_a)} σ_{x_a ϕ}(v)) / (∑_{ϕ∈S(t,x_0)} σ_{x_0 ϕ} + ∑_{ϕ∈S(t,x_a)} σ_{x_a ϕ}).

By joining the values σ_{x_0 ϕ}(v) and σ_{x_a ϕ}(v) we obtain

    ∑_{s∈V=2(P_1^max)} γ(s, t, v)
      = ( (W left[x_t^left] · Pen[t]) / (∑_{ϕ∈S(t,x_0)} σ_{x_0 ϕ})
          + (Pen[x_t^mid] · Pen[t]) / (∑_{ϕ∈S(t,x_0)} σ_{x_0 ϕ} + ∑_{ϕ∈S(t,x_a)} σ_{x_a ϕ}) ) · ∑_{ϕ∈S(t,x_0)} σ_{x_0 ϕ}(v)    (15)
      + ( (W right[x_t^right] · Pen[t]) / (∑_{ϕ∈S(t,x_a)} σ_{x_a ϕ})
          + (Pen[x_t^mid] · Pen[t]) / (∑_{ϕ∈S(t,x_0)} σ_{x_0 ϕ} + ∑_{ϕ∈S(t,x_a)} σ_{x_a ϕ}) ) · ∑_{ϕ∈S(t,x_a)} σ_{x_a ϕ}(v)    (16)
      =: X_1 · ∑_{ϕ∈S(t,x_0)} σ_{x_0 ϕ}(v) + X_2 · ∑_{ϕ∈S(t,x_a)} σ_{x_a ϕ}(v).

We need to increase the betweenness centrality of all vertices on shortest paths from s to t
via x_0 by the value of Term (15), and of all vertices on shortest paths via x_a by the value of
Term (16). By Lemma 3, increasing Inc[s, t] by some value A ensures the increment of the betweenness
centrality of v by A · σ_st(v) for all vertices v that are on a shortest path between s and t. Hence,
increasing Inc[x_0, ϕ] for every ϕ ∈ S(t, x_0) by X_1 is equivalent to increasing the betweenness centrality
of v by the value of Term (15). Analogously, increasing Inc[x_a, ϕ] for every ϕ ∈ S(t, x_a) by X_2 is
equivalent to increasing the betweenness centrality of v by the value of Term (16).

We have now incremented Inc[ψ, ϕ] for ψ ∈ {x_0, x_a} and ϕ ∈ {y_0, y_b} by certain values, and
we have shown that this increment is correct if the vertices that are on the shortest paths from ψ
to ϕ are not contained in V(P_1^max) or V(P_2^max). We still need to show that we increase Inc[ψ, ϕ]
only if there is no shortest path between ϕ and ψ visiting inner vertices of P_1^max or P_2^max. Let ϕ̄ ≠
ϕ be the second endpoint of P_2^max, and suppose that d_G(ψ, ϕ) = d_G(ψ, ϕ̄) + d_G(ϕ̄, ϕ), that is, there is
a shortest path between ψ and ϕ that visits the inner vertices of P_2^max. Then, for all 1 ≤
i < b, d_G(ψ, ϕ) + d_G(ϕ, y_i) > d_G(y_i, ϕ̄) + d_G(ϕ̄, ψ), that is, there is no shortest path from y_i
to ψ via ϕ, and thus Inc[ψ, ϕ] will not be increased. The same holds if d_G(ψ, ϕ) = d_G(ψ, ψ̄) +
d_G(ψ̄, ϕ), where ψ̄ ≠ ψ is the second endpoint of P_1^max. Increasing Inc[ψ, ϕ] also does not affect
the betweenness centrality of the vertices ψ and ϕ themselves, since σ_{ψϕ}(ψ) = σ_{ψϕ}(ϕ) = 0.

Finally, we analyze the running time. The values W left[·], W right[·], and Pen[·] as well as the
distances and numbers of shortest paths between all pairs of vertices of degree at least three are
assumed to be known. With this, S(t, x_0) and S(t, x_a) can be computed in constant time. Hence,
the values X_1 and X_2 can be computed in constant time for a fixed t ∈ V=2(P_2^max).
Thus the running time needed to compute the increments of Inc[·, ·] is upper-bounded by O(b).
4.1.2    Vertices inside the maximal paths
We now consider how the betweenness centrality of vertices inside of a pair P1max 6= P2max ∈ P max
of maximal paths is affected by shortest paths between vertices in the two maximal paths.
Observe that, when iterating through all pairs P1max 6= P2max ∈ P max , one will encounter the
pair (P1max , P2max ) and its reverse (P2max , P1max ). Since our graph is undirected, instead of looking
at the betweenness centrality of the vertices in both maximal paths, it suffices to consider only
the vertices inside the second maximal path of the pair. This is shown in the following lemma.
Lemma 7. Computing, for every P_1^max ≠ P_2^max ∈ P max and for each v ∈ V(P_1^max) ∪ V(P_2^max),

    ∑_{s∈V=2(P_1^max), t∈V=2(P_2^max)} γ(s, t, v)    (17)

is equivalent to computing, for every P_1^max ≠ P_2^max ∈ P max and for each v ∈ V(P_2^max),

    X_v = ∑_{s∈V=2(P_1^max), t∈V=2(P_2^max)} γ(s, t, v),        if v ∈ V(P_1^max) ∩ V(P_2^max);
    X_v = 2 · ∑_{s∈V=2(P_1^max), t∈V=2(P_2^max)} γ(s, t, v),    otherwise.
Proof. We will first assume that V (P1max ) ∩ V (P2max ) = ∅ for every P1max 6= P2max ∈ P max , and
will discuss the special case V (P1max ) ∩ V (P2max ) 6= ∅ afterwards.
For every fixed $\{P_1^{\max}, P_2^{\max}\} \in \binom{\mathcal{P}^{\max}}{2}$, for every $v \in V(P_2^{\max})$, we compute
$$\sum_{s \in V^{=2}(P_1^{\max}),\, t \in V^{=2}(P_2^{\max})} \gamma(s,t,v) + \sum_{s \in V^{=2}(P_2^{\max}),\, t \in V^{=2}(P_1^{\max})} \gamma(s,t,v),$$
and since by Observation 2, $\gamma(s,t,v) = \gamma(t,s,v)$, this is equal to
$$2 \cdot \sum_{s \in V^{=2}(P_1^{\max}),\, t \in V^{=2}(P_2^{\max})} \gamma(s,t,v). \tag{18}$$
Analogously, for every $w \in V(P_1^{\max})$, we compute
$$2 \cdot \sum_{s \in V^{=2}(P_1^{\max}),\, t \in V^{=2}(P_2^{\max})} \gamma(s,t,w).$$
Thus, computing Sum (18) for v ∈ V(P2max) for every pair P1max ≠ P2max ∈ P max is equivalent to computing Sum (17) for v ∈ V(P1max) ∪ V(P2max) for every pair P1max ≠ P2max ∈ P max, since when iterating over pairs of maximal paths we will encounter both the pairs (P1max, P2max) and (P2max, P1max).
If there exists a v ∈ V(P1max) ∩ V(P2max), then it is covered once when performing the computations for (P1max, P2max), and once when performing the computations for (P2max, P1max). Hence, we are doing the computations twice. We compensate for this by dividing the values of the computations in half, that is, we compute
$$\sum_{s \in V^{=2}(P_1^{\max}),\, t \in V^{=2}(P_2^{\max})} \gamma(s,t,v)$$
for all P1max ≠ P2max, for vertices v ∈ V(P1max) ∩ V(P2max).
With this at hand we can show how to compute Xv for each v ∈ V(P2max), for a pair P1max ≠ P2max ∈ P max of maximal paths. To this end, we show the following lemma.
Lemma 8. Let $P_1^{\max} \neq P_2^{\max} \in \mathcal{P}^{\max}$, $P_1^{\max} = x_0 \ldots x_a$, $P_2^{\max} = y_0 \ldots y_b$. Then, given that the values $d_G(s,t)$, $W^{\mathrm{left}}[v]$, and $W^{\mathrm{right}}[v]$ are known for $s,t \in V^{\geq 3}(G)$ and $v \in V^{=2}(G)$ respectively, the following can be computed for $v \in V(P_2^{\max})$ in $O(b)$ time:
$$\sum_{s \in V^{=2}(P_1^{\max}),\, t \in V^{=2}(P_2^{\max})} \gamma(s,t,v). \tag{19}$$
Proof. Let $P_1^{\max} = x_0 \ldots x_a$, $P_2^{\max} = y_0 \ldots y_b$. For $v = y_i$, $0 \leq i \leq b$, we need to compute
$$\sum_{\substack{s \in V^{=2}(P_1^{\max})\\ t \in V^{=2}(P_2^{\max})}} \gamma(s,t,y_i) = \sum_{s \in V^{=2}(P_1^{\max})} \sum_{k=1}^{b-1} \gamma(s, y_k, y_i) = \sum_{k=1}^{b-1} \sum_{s \in V^{=2}(P_1^{\max})} \gamma(s, y_k, y_i). \tag{20}$$
For easier reading, we define for $0 \leq i \leq b$ and for $1 \leq k < b$
$$\lambda(y_k, y_i) = \sum_{s \in V^{=2}(P_1^{\max})} \gamma(s, y_k, y_i).$$
Recall that all shortest paths from $y_k$ to $s \in X^{\mathrm{left}}_{y_k}$ visit $x_0$ and all shortest paths from $y_k$ to $s \in X^{\mathrm{right}}_{y_k}$ visit $x_a$. Recall also that there may exist a unique vertex $x^{\mathrm{mid}}_{y_k}$ from which there are shortest paths to $y_k$ via $x_0$ and via $x_a$.
With this at hand, we have
$$\begin{aligned}
\lambda(y_k, y_i) &= \gamma(x^{\mathrm{mid}}_{y_k}, y_k, y_i) + \sum_{s \in X^{\mathrm{left}}_{y_k}} \gamma(s, y_k, y_i) + \sum_{s \in X^{\mathrm{right}}_{y_k}} \gamma(s, y_k, y_i)\\
&= \mathrm{Pen}[x^{\mathrm{mid}}_{y_k}] \cdot \mathrm{Pen}[y_k] \cdot \frac{\sigma_{y_k x^{\mathrm{mid}}_{y_k}}(y_i)}{\sigma_{y_k x^{\mathrm{mid}}_{y_k}}} + \sum_{s \in X^{\mathrm{left}}_{y_k}} \mathrm{Pen}[s] \cdot \mathrm{Pen}[y_k] \cdot \frac{\sigma_{s y_k}(y_i)}{\sigma_{s y_k}} + \sum_{s \in X^{\mathrm{right}}_{y_k}} \mathrm{Pen}[s] \cdot \mathrm{Pen}[y_k] \cdot \frac{\sigma_{s y_k}(y_i)}{\sigma_{s y_k}}\\
&= \mathrm{Pen}[y_k] \cdot \mathrm{Pen}[x^{\mathrm{mid}}_{y_k}] \cdot \frac{\sigma_{y_k x^{\mathrm{mid}}_{y_k}}(y_i)}{\sigma_{y_k x^{\mathrm{mid}}_{y_k}}} + \mathrm{Pen}[y_k] \cdot W^{\mathrm{left}}[x^{\mathrm{left}}_{y_k}] \cdot \frac{\sigma_{x_0 y_k}(y_i)}{\sigma_{x_0 y_k}} + \mathrm{Pen}[y_k] \cdot W^{\mathrm{right}}[x^{\mathrm{right}}_{y_k}] \cdot \frac{\sigma_{x_a y_k}(y_i)}{\sigma_{x_a y_k}},
\end{aligned} \tag{21}$$
where the last step uses that $\sigma_{s y_k}(y_i)/\sigma_{s y_k} = \sigma_{x_0 y_k}(y_i)/\sigma_{x_0 y_k}$ for all $s \in X^{\mathrm{left}}_{y_k}$ (all such shortest paths enter $P_1^{\max}$ via $x_0$), and analogously via $x_a$ for all $s \in X^{\mathrm{right}}_{y_k}$.
Next, we rewrite λ in such a way that we can compute it in constant time. For this, we need to make the values σ independent of s and yi. To this end, note that if k < i, yi is visited only by those shortest paths from yk to s ∈ V =2(P1max) that also visit yb. If k > i, then yi is only visited by those paths that also visit y0. Hence, we need to know whether there are shortest paths from yk to some endpoint of P1max via either y0 or yb. For this we define η(yk, ϕ, ψ), which informally tells us whether there is a shortest path from yk, 1 ≤ k < b, to ψ ∈ {x0, xa} via ϕ ∈ {y0, yb}. Formally,
$$\eta(y_k, \varphi, \psi) = \begin{cases} 1, & \text{if } d_G(y_k, \varphi) + d_G(\varphi, \psi) = d_G(y_k, \psi);\\ 0, & \text{otherwise.} \end{cases}$$
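To make the definition concrete, η is a simple indicator that can be evaluated in constant time from precomputed distances. The following sketch is our own illustration (the function and variable names are not from Algorithm 1), assuming a table d of pairwise shortest-path distances:

```python
def eta(d, y_k, phi, psi):
    """Indicator: is there a shortest path from y_k to psi that visits phi?

    d is a dict-of-dicts of precomputed shortest-path distances
    (assumption: d[u][v] is available for all relevant vertex pairs).
    """
    return 1 if d[y_k][phi] + d[phi][psi] == d[y_k][psi] else 0


# Tiny worked example on a path 0-1-2-3, where distances are |u - v|.
verts = range(4)
d = {u: {v: abs(u - v) for v in verts} for u in verts}
```

On this path, a shortest path from 1 to 3 does visit 2, while no shortest path from 2 to 3 visits 0.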
We now show how to compute $\sigma_{s y_k}(y_i)/\sigma_{s y_k}$. Let $\omega = y_b$ if $k < i$, and $\omega = y_0$ if $k > i$. As stated above, for $y_i$ to be on a shortest path from $y_k$ to $s \in V^{=2}(P_1^{\max})$, the path must visit $\omega$. If we have $s \in X^{\mathrm{left}}_{y_k}$, then the shortest paths enter $P_1^{\max}$ via $x_0$, and $\sigma_{s y_k}(y_i)/\sigma_{s y_k} = \sigma_{x_0 y_k}(y_i)/\sigma_{x_0 y_k}$ for $s \in V^{=2}(P_1^{\max})$. Notice that there may be shortest $s y_k$-paths that go via $y_0$ and $s y_k$-paths that go via $y_b$. Thus we have
$$\frac{\sigma_{x_0 y_k}(y_i)}{\sigma_{x_0 y_k}} = \frac{\eta(y_k, \omega, x_0)\,\sigma_{x_0 \omega}}{\eta(y_k, y_0, x_0)\,\sigma_{x_0 y_0} + \eta(y_k, y_b, x_0)\,\sigma_{x_0 y_b}}. \tag{22}$$
With $\sigma_{x_0 y_k}(y_i)$ we count the number of shortest $x_0 y_k$-paths visiting $y_i$. Notice that any such path must visit $\omega$. If there is such a shortest path visiting $\omega$, then all shortest $x_0 y_k$-paths visiting $\omega$ also visit $y_i$, and since there is only one shortest $\omega y_k$-path, the number of shortest $x_0 y_k$-paths visiting $\omega$ is equal to the number of shortest $x_0 \omega$-paths, which is $\sigma_{x_0 \omega}$.
If $s \in X^{\mathrm{right}}_{y_k}$, then
$$\frac{\sigma_{s y_k}(y_i)}{\sigma_{s y_k}} = \frac{\eta(y_k, \omega, x_a)\,\sigma_{x_a \omega}}{\eta(y_k, y_0, x_a)\,\sigma_{x_a y_0} + \eta(y_k, y_b, x_a)\,\sigma_{x_a y_b}}. \tag{23}$$
Shortest paths from $y_k$ to $x^{\mathrm{mid}}_{y_k}$ may visit any $\varphi \in \{y_0, y_b\}$ and $\psi \in \{x_0, x_a\}$, and thus
$$\frac{\sigma_{y_k x^{\mathrm{mid}}_{y_k}}(y_i)}{\sigma_{y_k x^{\mathrm{mid}}_{y_k}}} = \frac{\sum_{\psi \in \{x_0, x_a\}} \eta(y_k, \omega, \psi)\,\sigma_{\psi \omega}}{\sum_{\varphi \in \{y_0, y_b\}} \sum_{\psi \in \{x_0, x_a\}} \eta(y_k, \varphi, \psi)\,\sigma_{\psi \varphi}}. \tag{24}$$
With this, we made the values of $\sigma$ independent of $s$ and $y_i$. We define
$$\kappa(y_k, \omega) = \mathrm{Pen}[y_k] \cdot \Bigg( \mathrm{Pen}[x^{\mathrm{mid}}_{y_k}] \cdot \frac{\sum_{\psi \in \{x_0,x_a\}} \eta(y_k,\omega,\psi)\,\sigma_{\psi\omega}}{\sum_{\varphi \in \{y_0,y_b\}} \sum_{\psi \in \{x_0,x_a\}} \eta(y_k,\varphi,\psi)\,\sigma_{\psi\varphi}} + W^{\mathrm{left}}[x^{\mathrm{left}}_{y_k}] \cdot \frac{\eta(y_k,\omega,x_0)\,\sigma_{x_0\omega}}{\eta(y_k,y_0,x_0)\,\sigma_{x_0 y_0} + \eta(y_k,y_b,x_0)\,\sigma_{x_0 y_b}} + W^{\mathrm{right}}[x^{\mathrm{right}}_{y_k}] \cdot \frac{\eta(y_k,\omega,x_a)\,\sigma_{x_a\omega}}{\eta(y_k,y_0,x_a)\,\sigma_{x_a y_0} + \eta(y_k,y_b,x_a)\,\sigma_{x_a y_b}} \Bigg).$$
Recall that $W^{\mathrm{left}}[x_j] = \sum_{i=1}^{j} \mathrm{Pen}[x_i]$ for $1 \leq j < a$. We can observe that if $k < i$ (so that $\omega = y_b$), then
$$\begin{aligned}
\lambda(y_k, y_i) &= \mathrm{Pen}[y_k] \cdot \mathrm{Pen}[x^{\mathrm{mid}}_{y_k}] \cdot \frac{\sigma_{y_k x^{\mathrm{mid}}_{y_k}}(y_i)}{\sigma_{y_k x^{\mathrm{mid}}_{y_k}}} + \sum_{s \in X^{\mathrm{left}}_{y_k}} \mathrm{Pen}[s] \cdot \mathrm{Pen}[y_k] \cdot \frac{\sigma_{s y_k}(y_i)}{\sigma_{s y_k}} + \sum_{s \in X^{\mathrm{right}}_{y_k}} \mathrm{Pen}[s] \cdot \mathrm{Pen}[y_k] \cdot \frac{\sigma_{s y_k}(y_i)}{\sigma_{s y_k}}\\
&\overset{(22),(23),(24)}{=} \mathrm{Pen}[y_k] \cdot \Bigg( \mathrm{Pen}[x^{\mathrm{mid}}_{y_k}] \cdot \frac{\sum_{\psi \in \{x_0,x_a\}} \eta(y_k, y_b, \psi)\,\sigma_{\psi y_b}}{\sum_{\varphi \in \{y_0,y_b\}} \sum_{\psi \in \{x_0,x_a\}} \eta(y_k, \varphi, \psi)\,\sigma_{\psi\varphi}} + W^{\mathrm{left}}[x^{\mathrm{left}}_{y_k}] \cdot \frac{\eta(y_k, y_b, x_0)\,\sigma_{x_0 y_b}}{\eta(y_k, y_0, x_0)\,\sigma_{x_0 y_0} + \eta(y_k, y_b, x_0)\,\sigma_{x_0 y_b}} + W^{\mathrm{right}}[x^{\mathrm{right}}_{y_k}] \cdot \frac{\eta(y_k, y_b, x_a)\,\sigma_{x_a y_b}}{\eta(y_k, y_0, x_a)\,\sigma_{x_a y_0} + \eta(y_k, y_b, x_a)\,\sigma_{x_a y_b}} \Bigg) = \kappa(y_k, y_b).
\end{aligned}$$
Analogously, if $k > i$, then $\lambda(y_k, y_i) = \kappa(y_k, y_0)$. Notice that if $k = i$, then $\sigma_{s y_k}(y_i) = 0$, and thus $\gamma(s, y_k, y_i) = \lambda(y_k, y_i) = 0$. Hence, we can rewrite Sum (20) as
$$\sum_{k=1}^{b-1} \sum_{s \in V^{=2}(P_1^{\max})} \gamma(s, y_k, y_i) = \sum_{\substack{k=1\\ k \neq i}}^{b-1} \lambda(y_k, y_i) = \sum_{k=1}^{i-1} \lambda(y_k, y_i) + \sum_{k=i+1}^{b-1} \lambda(y_k, y_i) = \sum_{k=1}^{i-1} \kappa(y_k, y_b) + \sum_{k=i+1}^{b-1} \kappa(y_k, y_0).$$
Now we can show how to compute Sum (19) in $O(b)$ time. For $0 \leq i \leq b$ we define $\rho_i = \sum_{k=1}^{i-1} \kappa(y_k, y_b) + \sum_{k=i+1}^{b-1} \kappa(y_k, y_0)$, which is equal to Sum (19). Then, $\rho_0 = \sum_{k=1}^{b-1} \kappa(y_k, y_0)$ and $\rho_{i+1} = \rho_i + \kappa(y_i, y_b) - \kappa(y_{i+1}, y_0)$. The value of ρ0 can be computed in O(b) time. Every ρi, 1 ≤ i ≤ b, can then be computed in constant time, since the values of Pen[·], W left[·], W right[·] and the distances and number of shortest paths between all pairs of vertices of degree at least three are assumed to be known. Hence, computing all ρi, 0 ≤ i ≤ b, and thus computing Sum (19) for every v ∈ V(P2max), takes O(b) time.
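The ρ-recurrence can be sketched as follows. Here κ is treated as a black box (in the algorithm it is evaluated in constant time from the precomputed tables), and the function name and the string labels 'y0'/'yb' are our own illustration, not part of the paper's pseudocode:

```python
def all_rho(b, kappa):
    """Compute rho[i] = sum_{k=1}^{i-1} kappa(k, 'yb') + sum_{k=i+1}^{b-1} kappa(k, 'y0')
    for all 0 <= i <= b with O(b) kappa-evaluations, via the recurrence
    rho[i+1] = rho[i] + kappa(i, 'yb') - kappa(i+1, 'y0')."""
    def kap(k, w):
        # kappa(y_k, .) is only defined for 1 <= k < b; treat everything else as 0
        return kappa(k, w) if 1 <= k <= b - 1 else 0

    rho = [0] * (b + 1)
    rho[0] = sum(kap(k, 'y0') for k in range(1, b))  # computed once in O(b) time
    for i in range(b):
        rho[i + 1] = rho[i] + kap(i, 'yb') - kap(i + 1, 'y0')
    return rho
```

Checking the recurrence against the direct definition of ρ confirms that each step only moves the term for index i from the y0-sum into the yb-sum.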
We are now ready to combine Lemmata 6 to 8 to prove Proposition 6. As mentioned above,
to keep the proposition simple, we assume that the values Inc[s, t] · σst (v) can be computed in
constant time for every s, t ∈ V ≥3 (G) and v ∈ V (G). In fact, these values are computed in the
last step of the algorithm taking overall O(kn) time (see Lines 28 and 29 in Algorithm 1 and
Lemma 3).
Proposition 6 (Restated). The following value can be computed for all $v \in V(G)$ in overall $O(kn)$ time:
$$\sum_{P_1^{\max} \neq P_2^{\max} \in \mathcal{P}^{\max}} \;\sum_{s \in V^{=2}(P_1^{\max}),\, t \in V^{=2}(P_2^{\max})} \gamma(s,t,v).$$
Proof. Let $P_1^{\max} \neq P_2^{\max} \in \mathcal{P}^{\max}$. Then, for each $v \in V(G) = (V(G) \setminus (V(P_1^{\max}) \cup V(P_2^{\max}))) \cup (V(P_1^{\max}) \cup V(P_2^{\max}))$, we need to compute
$$\sum_{s \in V^{=2}(P_1^{\max}),\, t \in V^{=2}(P_2^{\max})} \gamma(s,t,v). \tag{25}$$
We first compute in O(kn) time the values dG(s, t) and σst for every s, t ∈ V ≥3(G), as well as the values W left[v] and W right[v] for every v ∈ V =2(G), see Lines 14 to 21 in Algorithm 1. By Lemma 6 we then can compute Sum (25) in O(b) time for v ∈ V(G) \ (V(P1max) ∪ V(P2max)). Given the values ρi of Lemma 8 we can compute the values Xv of Lemma 7 for v = yi ∈ V(P2max) as follows:
$$X_v = X_{y_i} = \begin{cases} \rho_i, & \text{if } v \in V(P_1^{\max}) \cap V(P_2^{\max});\\ 2\rho_i, & \text{otherwise.} \end{cases}$$
This can be done in constant time for a single v ∈ V(P2max), and hence, in O(b) time overall. Thus, by Lemma 7, we can compute Sum (25) for V(P1max) ∪ V(P2max) in O(b) = O(|V(P2max)|) time.
Sum (25) must be computed for every pair P1max ≠ P2max ∈ P max. Thus overall we require
$$O\Big( \sum_{P_1^{\max} \neq P_2^{\max} \in \mathcal{P}^{\max}} |V(P_2^{\max})| \Big) \subseteq O\Big( \sum_{P_1^{\max} \in \mathcal{P}^{\max}} \sum_{\substack{P_2^{\max} \in \mathcal{P}^{\max}\\ P_1^{\max} \neq P_2^{\max}}} \big( |V^{=2}(P_2^{\max})| + |V^{\geq 3}(P_2^{\max})| \big) \Big) \subseteq O\Big( \sum_{P_1^{\max} \in \mathcal{P}^{\max}} \big( n + \min\{k,n\} \big) \Big) \subseteq O\big(k(n + \min\{k,n\})\big) = O(kn) \tag{26}$$
time. By Lemma 1 there are O(k) vertices of degree at least three and O(k) maximal paths. Since the number of vertices of degree at least three can also be upper-bounded by O(n), Equation (26) holds.
4.2 Paths with endpoints in the same maximal path
In this subsection, we look at shortest paths starting and ending in a maximal path P max =
x0 . . . xa and how to efficiently compute how these paths affect the betweenness centrality. Our
goal is to prove the following:
Proposition 7. The following value can be computed for each $v \in V(G)$ in overall $O(kn)$ time:
$$\sum_{P^{\max} \in \mathcal{P}^{\max}} \;\sum_{s,t \in V^{=2}(P^{\max})} \gamma(s,t,v).$$
Notice that in the course of the algorithm, as in Section 4.1, we first gather all increments
to Inc[·, ·] and in the final step compute for each s, t ∈ V ≥3 (G) the values Inc[s, t] · σst (v) in O(m)
time (Lemma 3). This postprocessing (see Lines 28 and 29 in Algorithm 1) can be done in overall O(kn) time (see Lemma 3). To keep the following proofs simple we assume that these values Inc[s, t] · σst (v) can be computed in constant time for every s, t ∈ V ≥3 (G) and v ∈ V (G).
Let $P^{\max} = x_0 x_1 \ldots x_a$, where $x_0, x_a \in V^{\geq 3}(G)$ and $x_i \in V^{=2}(G)$ for $1 \leq i \leq a-1$. Then
$$\sum_{s,t \in V^{=2}(P^{\max})} \gamma(s,t,v) = \sum_{i,j \in [1,a-1]} \gamma(x_i, x_j, v) = 2 \cdot \sum_{i=1}^{a-1} \sum_{j=i+1}^{a-1} \gamma(x_i, x_j, v).$$
For the sake of readability we say that $v \in [x_p, x_q]$, $p < q$, if there exists a $k \in [p,q]$ such that $v = x_k$, and $v \notin [x_p, x_q]$ otherwise. We will distinguish between two different cases that we then treat in corresponding subsubsections: either $v \in [x_i, x_j]$ or $v \notin [x_i, x_j]$. We will show that both cases can be solved in overall $O(a)$ time for $P^{\max}$. Doing this for all maximal paths results in an overall running time of $\sum_{P^{\max} \in \mathcal{P}^{\max}} |V^{=2}(P^{\max})| \in O(n)$. In the calculations we will distinguish between the two main cases (all shortest $x_i x_j$-paths are fully contained in $P^{\max}$, or all shortest $x_i x_j$-paths leave $P^{\max}$) and the corner case that there are some shortest paths inside $P^{\max}$ and some that partially leave it. Observe that for any fixed pair $i < j$ the distance between $x_i$ and $x_j$ is given by $d_{\mathrm{in}} = j - i$ if the shortest path is contained in $P^{\max}$ and by $d_{\mathrm{out}} = i + d_G(x_0, x_a) + a - j$ if shortest $x_i x_j$-paths leave $P^{\max}$. The corner case that there are shortest paths both inside and outside of $P^{\max}$ occurs when $d_{\mathrm{in}} = d_{\mathrm{out}}$. In this case it holds that $j - i = i + d_G(x_0,x_a) + a - j$, which is equivalent to
$$j = i + \frac{d_G(x_0, x_a) + a}{2}, \tag{27}$$
where j is an integer smaller than a. For convenience, we will again use a notion of “mid-elements” for a fixed starting vertex $x_i$. We distinguish between the two cases that this mid-element has a higher index in $P^{\max}$ or a lower one. Formally, we say that $i^{+}_{\mathrm{mid}} = i + \frac{d_G(x_0,x_a)+a}{2}$ and $j^{-}_{\mathrm{mid}} = j - \frac{d_G(x_0,x_a)+a}{2}$. We next analyze the factor $\sigma_{x_i x_j}(v)/\sigma_{x_i x_j}$, distinguishing between the cases $v \in V(P^{\max})$ and $v \notin V(P^{\max})$. Observe that
$$\frac{\sigma_{x_i x_j}(v)}{\sigma_{x_i x_j}} = \begin{cases}
0, & \text{if } d_{\mathrm{out}} < d_{\mathrm{in}} \wedge v \in [x_i,x_j], \text{ or } d_{\mathrm{in}} < d_{\mathrm{out}} \wedge v \notin [x_i,x_j];\\
1, & \text{if } d_{\mathrm{in}} < d_{\mathrm{out}} \wedge v \in [x_i,x_j];\\
1, & \text{if } d_{\mathrm{out}} < d_{\mathrm{in}} \wedge v \notin [x_i,x_j] \wedge v \in V(P^{\max});\\
\frac{\sigma_{x_0 x_a}(v)}{\sigma_{x_0 x_a}}, & \text{if } d_{\mathrm{out}} < d_{\mathrm{in}} \wedge v \notin V(P^{\max});\\
\frac{1}{\sigma_{x_0 x_a}+1}, & \text{if } d_{\mathrm{in}} = d_{\mathrm{out}} \wedge v \in [x_i,x_j];\\
\frac{\sigma_{x_0 x_a}}{\sigma_{x_0 x_a}+1}, & \text{if } d_{\mathrm{in}} = d_{\mathrm{out}} \wedge v \notin [x_i,x_j] \wedge v \in V(P^{\max});\\
\frac{\sigma_{x_0 x_a}(v)}{\sigma_{x_0 x_a}+1}, & \text{if } d_{\mathrm{in}} = d_{\mathrm{out}} \wedge v \notin V(P^{\max}).
\end{cases} \tag{28}$$
The denominator $\sigma_{x_0 x_a} + 1$ is correct since there are $\sigma_{x_0 x_a}$ shortest paths from $x_0$ to $x_a$ (and therefore $\sigma_{x_0 x_a}$ shortest paths from $x_i$ to $x_j$ that leave $P^{\max}$) and one shortest path from $x_i$ to $x_j$ within $P^{\max}$. Notice that if there are shortest paths that are not contained in $P^{\max}$, then $d_G(x_0, x_a) < a$, as we consider the case that $0 < i < j < a$. Thus, $P^{\max}$ is not a shortest path from $x_0$ to $x_a$.
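The case distinction (28), restricted to vertices v = x_l that lie on the maximal path itself, can be written as a small constant-time function. The following sketch uses hypothetical names of our own; d_0a and sigma_0a stand for d_G(x_0, x_a) and the number of shortest x_0 x_a-paths outside P max:

```python
from fractions import Fraction

def ratio_on_path(i, j, l, a, d_0a, sigma_0a):
    """sigma_{x_i x_j}(x_l) / sigma_{x_i x_j} for a vertex x_l on the maximal path
    P = x_0 ... x_a, following Equation (28).  Assumptions: 0 < i < j < a,
    0 <= l <= a, and l not in {i, j} (endpoints contribute 0 anyway)."""
    d_in = j - i                     # length of the path inside P
    d_out = i + d_0a + (a - j)       # length of the paths leaving P
    inside = i <= l <= j             # is x_l in the segment [x_i, x_j]?
    if d_in < d_out:                 # all shortest paths stay inside P
        return Fraction(1) if inside else Fraction(0)
    if d_out < d_in:                 # all shortest paths leave P
        return Fraction(0) if inside else Fraction(1)
    # corner case d_in == d_out: sigma_0a outside paths plus one inside path
    return Fraction(1, sigma_0a + 1) if inside else Fraction(sigma_0a, sigma_0a + 1)
```

For instance, with a short chord (d_0a small) the outside paths dominate and interior vertices of [x_i, x_j] receive ratio 0, while in the corner case the ratio splits as 1/(σ+1) versus σ/(σ+1).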
4.2.1 Paths completely contained in a maximal path
We will now compute the value for all paths that only consist of vertices in $P^{\max}$; that is, for each $x_k$ we will compute, by a dynamic program in overall $O(a)$ time, the contribution of all pairs $i < k < j$ to
$$2 \cdot \sum_{i=1}^{a-1} \sum_{j=i+1}^{a-1} \gamma(x_i, x_j, x_k).$$
This can be simplified to
$$2 \cdot \sum_{\substack{i \in [1,a-1]\\ i < k}} \;\sum_{\substack{j \in [i+1,a-1]\\ k < j}} \gamma(x_i, x_j, x_k) = 2 \cdot \sum_{i \in [1,k-1]} \sum_{j \in [k+1,a-1]} \gamma(x_i, x_j, x_k).$$
Lemma 9. For a fixed maximal path $P^{\max} = x_0 x_1 \ldots x_a$, for all $x_k$ with $0 \leq k \leq a$ we can compute the following in overall $O(a)$ time:
$$2 \cdot \sum_{i \in [1,k-1]} \sum_{j \in [k+1,a-1]} \gamma(x_i, x_j, x_k).$$
Proof. For the sake of readability we define
$$\alpha_{x_k} = 2 \cdot \sum_{i \in [1,k-1]} \sum_{j \in [k+1,a-1]} \gamma(x_i, x_j, x_k).$$
Notice that $i \geq 1$ and $k > i$ and thus for $x_0$ we have $\alpha_{x_0} = 2 \sum_{i \in \emptyset} \sum_{j \in [1,a-1]} \gamma(x_i, x_j, x_0) = 0$. This will be the anchor of the dynamic program.
For every vertex $x_k$ with $1 \leq k < a$ it holds that
$$\alpha_{x_k} = 2 \cdot \sum_{\substack{i \in [1,k-1]\\ j \in [k+1,a-1]}} \gamma(x_i,x_j,x_k) = 2 \cdot \sum_{\substack{i \in [1,k-2]\\ j \in [k+1,a-1]}} \gamma(x_i,x_j,x_k) + 2 \cdot \sum_{j \in [k+1,a-1]} \gamma(x_{k-1}, x_j, x_k).$$
Similarly, for $x_k$ with $1 < k \leq a$ it holds that
$$\alpha_{x_{k-1}} = 2 \cdot \sum_{\substack{i \in [1,k-2]\\ j \in [k,a-1]}} \gamma(x_i,x_j,x_{k-1}) = 2 \cdot \sum_{\substack{i \in [1,k-2]\\ j \in [k+1,a-1]}} \gamma(x_i,x_j,x_{k-1}) + 2 \cdot \sum_{i \in [1,k-2]} \gamma(x_i, x_k, x_{k-1}).$$
Next, notice that any path from $x_i$ to $x_j$ with $i \leq k-2$ and $j \geq k+1$ that contains $x_k$ also contains $x_{k-1}$ and vice versa. Substituting this into the equations above yields
$$\alpha_{x_k} = \alpha_{x_{k-1}} + 2 \cdot \sum_{j \in [k+1,a-1]} \gamma(x_{k-1}, x_j, x_k) - 2 \cdot \sum_{i \in [1,k-2]} \gamma(x_i, x_k, x_{k-1}).$$
Lastly, we prove that $2 \cdot \sum_{j \in [k+1,a-1]} \gamma(x_{k-1}, x_j, x_k)$ and $2 \cdot \sum_{i \in [1,k-2]} \gamma(x_i, x_k, x_{k-1})$ can be computed in constant time once $W^{\mathrm{left}}[\cdot]$ and $W^{\mathrm{right}}[\cdot]$ are precomputed, see Lines 18 to 21 in Algorithm 1. (These tables can also be computed in $O(a)$ time.) For convenience we say that $\gamma(x_i, x_j, x_k) = 0$ if $i$ or $j$ are not integral or are not in $[1, a-1]$, and define $W[x_i, x_j] = \sum_{\ell=i}^{j} \mathrm{Pen}[x_\ell] = W^{\mathrm{left}}[x_j] - W^{\mathrm{left}}[x_{i-1}]$. Then we can use Equations (27) and (28) to show that
$$\begin{aligned}
\sum_{j \in [k+1,a-1]} \gamma(x_{k-1}, x_j, x_k) &= \sum_{j \in [k+1,a-1]} \mathrm{Pen}[x_{k-1}] \cdot \mathrm{Pen}[x_j] \cdot \frac{\sigma_{x_{k-1} x_j}(x_k)}{\sigma_{x_{k-1} x_j}}\\
&= \gamma(x_{k-1}, x_{(k-1)^{+}_{\mathrm{mid}}}, x_k) + \sum_{j \in [k+1,\, \min\{\lceil (k-1)^{+}_{\mathrm{mid}} \rceil - 1,\, a-1\}]} \mathrm{Pen}[x_{k-1}] \cdot \mathrm{Pen}[x_j]\\
&= \begin{cases}
\mathrm{Pen}[x_{k-1}] \cdot W[x_{k+1}, x_{a-1}], & \text{if } (k-1)^{+}_{\mathrm{mid}} \geq a;\\
\mathrm{Pen}[x_{k-1}] \cdot W[x_{k+1}, x_{\lceil (k-1)^{+}_{\mathrm{mid}} \rceil - 1}], & \text{if } (k-1)^{+}_{\mathrm{mid}} < a \wedge (k-1)^{+}_{\mathrm{mid}} \notin \mathbb{Z};\\
\mathrm{Pen}[x_{k-1}] \cdot \big( \mathrm{Pen}[x_{(k-1)^{+}_{\mathrm{mid}}}] \cdot \frac{1}{\sigma_{x_0 x_a}+1} + W[x_{k+1}, x_{(k-1)^{+}_{\mathrm{mid}}-1}] \big), & \text{otherwise.}
\end{cases}
\end{aligned}$$
Herein we use $(k-1)^{+}_{\mathrm{mid}} \notin \mathbb{Z}$ to say that $(k-1)^{+}_{\mathrm{mid}}$ is not integral. Analogously,
$$\begin{aligned}
\sum_{i \in [1,k-2]} \gamma(x_i, x_k, x_{k-1}) &= \sum_{i \in [1,k-2]} \mathrm{Pen}[x_i] \cdot \mathrm{Pen}[x_k] \cdot \frac{\sigma_{x_i x_k}(x_{k-1})}{\sigma_{x_i x_k}}\\
&= \gamma(x_{k^{-}_{\mathrm{mid}}}, x_k, x_{k-1}) + \sum_{i \in [\max\{1,\, \lfloor k^{-}_{\mathrm{mid}} \rfloor + 1\},\, k-2]} \mathrm{Pen}[x_i] \cdot \mathrm{Pen}[x_k]\\
&= \begin{cases}
\mathrm{Pen}[x_k] \cdot W[x_1, x_{k-2}], & \text{if } k^{-}_{\mathrm{mid}} < 1;\\
\mathrm{Pen}[x_k] \cdot W[x_{\lfloor k^{-}_{\mathrm{mid}} \rfloor + 1}, x_{k-2}], & \text{if } k^{-}_{\mathrm{mid}} \geq 1 \wedge k^{-}_{\mathrm{mid}} \notin \mathbb{Z};\\
\mathrm{Pen}[x_k] \cdot \big( \mathrm{Pen}[x_{k^{-}_{\mathrm{mid}}}] \cdot \frac{1}{\sigma_{x_0 x_a}+1} + W[x_{k^{-}_{\mathrm{mid}}+1}, x_{k-2}] \big), & \text{otherwise.}
\end{cases}
\end{aligned}$$
This completes the proof since $(k-1)^{+}_{\mathrm{mid}}$, $k^{-}_{\mathrm{mid}}$, an arbitrary entry in $W[\cdot]$, and all other variables in the equations above can be computed in constant time once $W^{\mathrm{left}}[\cdot]$ is computed, and therefore, computing $\alpha_{x_i}$ for each vertex $x_i$ in $P^{\max}$ takes constant time. As there are $a$ vertices in $P^{\max}$, the computations for the whole maximal path $P^{\max}$ take $O(a)$ time.
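The dynamic program of Lemma 9 has the following skeleton. Here γ is a black box obeying the stated convention γ(x_i, x_j, x_k) = 0 for i or j outside [1, a−1]; in the algorithm the two correction sums are evaluated in constant time via W[·, ·], while this illustrative sketch (names are our own) simply evaluates them directly:

```python
def alphas(a, gamma):
    """alpha[k] = 2 * sum_{i=1}^{k-1} sum_{j=k+1}^{a-1} gamma(i, j, k) for the
    maximal path x_0 ... x_a, via the recurrence of Lemma 9.  Convention:
    gamma(i, j, k) == 0 whenever i or j lies outside [1, a-1]."""
    alpha = [0] * (a + 1)  # alpha[0] = 0 anchors the dynamic program
    for k in range(1, a + 1):
        add = sum(gamma(k - 1, j, k) for j in range(k + 1, a))
        sub = sum(gamma(i, k, k - 1) for i in range(1, k - 1))
        alpha[k] = alpha[k - 1] + 2 * add - 2 * sub
    return alpha
```

The recurrence is exact for any γ in which a path from x_i to x_j with i ≤ k−2 and j ≥ k+1 contains x_k exactly when it contains x_{k−1}, which is the substitution property used in the proof.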
4.2.2 Paths partially contained in a maximal path
We will now compute the value for all paths that partially leave P max . We will again assume
that Inc[s, t] · σst (v) can be computed in constant time for fixed vertices s and t to simplify the
analysis.
Lemma 10. Let $P^{\max} = x_0 x_1 \ldots x_a \in \mathcal{P}^{\max}$. Then, assuming that $\mathrm{Inc}[s,t] \cdot \sigma_{st}(v)$ can be computed in constant time for some $s, t \in V^{\geq 3}(G)$, one can compute
$$\beta_v = \sum_{i \in [1,a-1]} \sum_{j \in [i+1,a-1]} \gamma(x_i, x_j, v)$$
in $O(a)$ time, where $v \in V(G) \setminus [x_i, x_j]$.
Proof. We start with all vertices $v \notin V(P^{\max})$ and then compute the value for all vertices $x_k$ with $k \leq i$ or $k \geq j$. As stated above, the distance from $x_i$ to $x_{i^{+}_{\mathrm{mid}}}$ (if existing) is the boundary such that all shortest paths to vertices $x_j$ with $j > i^{+}_{\mathrm{mid}}$ leave $P^{\max}$, and the unique shortest path to any $x_j$ with $i < j < i^{+}_{\mathrm{mid}}$ is $x_i x_{i+1} \cdots x_j$. Thus we can use Equations (27) and (28) to show that for each $v \notin V(P^{\max})$ and each fixed $i \in [1, a-1]$ it holds that
$$\begin{aligned}
\sum_{j \in [i+1,a-1]} \gamma(x_i, x_j, v) &= \sum_{j \in [i+1,a-1]} \mathrm{Pen}[x_i] \cdot \mathrm{Pen}[x_j] \cdot \frac{\sigma_{x_i x_j}(v)}{\sigma_{x_i x_j}}\\
&= \begin{cases}
0, & \text{if } i^{+}_{\mathrm{mid}} > a-1;\\
\sum_{j \in [\lfloor i^{+}_{\mathrm{mid}} \rfloor + 1,\, a-1]} \mathrm{Pen}[x_i] \cdot \mathrm{Pen}[x_j] \cdot \frac{\sigma_{x_0 x_a}(v)}{\sigma_{x_0 x_a}}, & \text{if } i^{+}_{\mathrm{mid}} \leq a-1 \wedge i^{+}_{\mathrm{mid}} \notin \mathbb{Z};\\
\mathrm{Pen}[x_i] \cdot \mathrm{Pen}[x_{i^{+}_{\mathrm{mid}}}] \cdot \frac{\sigma_{x_0 x_a}(v)}{\sigma_{x_0 x_a}+1} + \sum_{j \in [i^{+}_{\mathrm{mid}}+1,\, a-1]} \mathrm{Pen}[x_i] \cdot \mathrm{Pen}[x_j] \cdot \frac{\sigma_{x_0 x_a}(v)}{\sigma_{x_0 x_a}}, & \text{otherwise;}
\end{cases}\\
&= \begin{cases}
0, & \text{if } i^{+}_{\mathrm{mid}} > a-1;\\
\mathrm{Pen}[x_i] \cdot W^{\mathrm{right}}[x_{\lfloor i^{+}_{\mathrm{mid}} \rfloor + 1}] \cdot \frac{\sigma_{x_0 x_a}(v)}{\sigma_{x_0 x_a}}, & \text{if } i^{+}_{\mathrm{mid}} \leq a-1 \wedge i^{+}_{\mathrm{mid}} \notin \mathbb{Z};\\
\mathrm{Pen}[x_i] \cdot \mathrm{Pen}[x_{i^{+}_{\mathrm{mid}}}] \cdot \frac{\sigma_{x_0 x_a}(v)}{\sigma_{x_0 x_a}+1} + \mathrm{Pen}[x_i] \cdot W^{\mathrm{right}}[x_{i^{+}_{\mathrm{mid}}+1}] \cdot \frac{\sigma_{x_0 x_a}(v)}{\sigma_{x_0 x_a}}, & \text{otherwise.}
\end{cases}
\end{aligned}$$
All variables except for $\sigma_{x_0 x_a}(v)$ can be computed in constant time once $W^{\mathrm{right}}$ and $\sigma_{x_0 x_a}$ are computed. Thus we can compute in overall $O(a)$ time the value
$$X = \frac{2 \cdot \sum_{i \in [1,a-1]} \sum_{j \in [i+1,a-1]} \gamma(x_i, x_j, v)}{\sigma_{x_0 x_a}(v)}, \tag{29}$$
which, by the case distinction above, does not depend on the choice of $v \notin V(P^{\max})$.
Due to the postprocessing (see Lines 28 and 29 in Algorithm 1) it is sufficient to add $X$ to $\mathrm{Inc}[x_0, x_a]$. This ensures that $X \cdot \sigma_{x_0 x_a}(v)$ is added to the betweenness centrality of each vertex $v \notin V(P^{\max})$. Notice that if $X > 0$, then $d_G(x_0, x_a) < a$ and thus the betweenness centrality of any vertex $v \in V^{=2}(P^{\max})$ is not affected by $\mathrm{Inc}[x_0, x_a]$. This also holds for $x_0$ and $x_a$ since $\sigma_{x_0 x_a}(v) = 0$ for $v \in \{x_0, x_a\}$.
Next, we will compute $\beta_v$ for all vertices $v \in V(P^{\max})$ (recall that $v \notin [x_i, x_j]$). We start with the simple observation that all paths that leave $P^{\max}$ at some point have to contain $x_0$. Thus $\beta_{x_0}$ is equal to $X$ from Equation (29). We will use this as the anchor for a dynamic program that iterates through $P^{\max}$ and computes $\beta_{x_k}$ for each vertex $k \in [0,a]$ in constant time.
Similarly to Section 4.2.1 we observe that
$$\beta_{x_k} = 2\Bigg( \sum_{i \in [1,k-1]} \sum_{j \in [i+1,k-1]} \gamma(x_i,x_j,x_k) + \sum_{i \in [k+1,a-1]} \sum_{j \in [i+1,a-1]} \gamma(x_i,x_j,x_k) \Bigg)$$
$$= 2\Bigg( \sum_{i \in [1,k-1]} \sum_{j \in [i+1,k-1]} \gamma(x_i,x_j,x_k) + \sum_{i \in [k+2,a-1]} \sum_{j \in [i+1,a-1]} \gamma(x_i,x_j,x_k) + \sum_{j \in [k+2,a-1]} \gamma(x_{k+1}, x_j, x_k) \Bigg) \quad \text{and}$$
$$\beta_{x_{k+1}} = 2\Bigg( \sum_{i \in [1,k]} \sum_{j \in [i+1,k]} \gamma(x_i,x_j,x_{k+1}) + \sum_{i \in [k+2,a-1]} \sum_{j \in [i+1,a-1]} \gamma(x_i,x_j,x_{k+1}) \Bigg)$$
$$= 2\Bigg( \sum_{i \in [1,k-1]} \sum_{j \in [i+1,k-1]} \gamma(x_i,x_j,x_{k+1}) + \sum_{i \in [k+2,a-1]} \sum_{j \in [i+1,a-1]} \gamma(x_i,x_j,x_{k+1}) + \sum_{i \in [1,k-1]} \gamma(x_i, x_k, x_{k+1}) \Bigg).$$
Furthermore, observe that every $st$-path with $s, t \neq x_k, x_{k+1}$ that contains $x_k$ also contains $x_{k+1}$ and vice versa. Thus we can conclude that
$$\beta_{x_{k+1}} = \beta_{x_k} + 2\Bigg( \sum_{i \in [1,k-1]} \gamma(x_i, x_k, x_{k+1}) - \sum_{j \in [k+2,a-1]} \gamma(x_{k+1}, x_j, x_k) \Bigg).$$
It remains to show that $\sum_{i \in [1,k-1]} \gamma(x_i, x_k, x_{k+1})$ and $\sum_{j \in [k+2,a-1]} \gamma(x_{k+1}, x_j, x_k)$ can be computed in constant time once $W^{\mathrm{left}}[\cdot]$ and $W^{\mathrm{right}}[\cdot]$ are computed. Using Equations (27) and (28) we get that
$$\sum_{i \in [1,k-1]} \gamma(x_i, x_k, x_{k+1}) = \begin{cases}
0, & \text{if } k^{-}_{\mathrm{mid}} < 1;\\
\mathrm{Pen}[x_k] \cdot W^{\mathrm{left}}[x_{\lfloor k^{-}_{\mathrm{mid}} \rfloor}], & \text{if } k^{-}_{\mathrm{mid}} \geq 1 \wedge k^{-}_{\mathrm{mid}} \notin \mathbb{Z};\\
\mathrm{Pen}[x_k] \cdot \Big( W^{\mathrm{left}}[x_{k^{-}_{\mathrm{mid}}-1}] + \mathrm{Pen}[x_{k^{-}_{\mathrm{mid}}}] \cdot \frac{\sigma_{x_0 x_a}}{\sigma_{x_0 x_a}+1} \Big), & \text{otherwise;}
\end{cases}$$
and
$$\sum_{j \in [k+2,a-1]} \gamma(x_{k+1}, x_j, x_k) = \begin{cases}
0, & \text{if } (k+1)^{+}_{\mathrm{mid}} > a-1;\\
\mathrm{Pen}[x_{k+1}] \cdot W^{\mathrm{right}}[x_{\lceil (k+1)^{+}_{\mathrm{mid}} \rceil}], & \text{if } (k+1)^{+}_{\mathrm{mid}} \leq a-1 \wedge (k+1)^{+}_{\mathrm{mid}} \notin \mathbb{Z};\\
\mathrm{Pen}[x_{k+1}] \cdot \Big( W^{\mathrm{right}}[x_{(k+1)^{+}_{\mathrm{mid}}+1}] + \mathrm{Pen}[x_{(k+1)^{+}_{\mathrm{mid}}}] \cdot \frac{\sigma_{x_0 x_a}}{\sigma_{x_0 x_a}+1} \Big), & \text{otherwise.}
\end{cases}$$
Since all variables in these two equalities can be computed in constant time, this concludes the
proof.
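The resulting dynamic program can be sketched as follows, again with γ as a black box (the two correction sums are evaluated in constant time in the algorithm; this illustrative sketch evaluates them directly, and all names are our own). The convention is that γ(x_i, x_j, x_k) = 0 whenever x_k lies in the segment [x_i, x_j] or i, j fall outside [1, a−1]:

```python
def betas(a, gamma):
    """beta[k] = 2 * sum_{i=1}^{a-1} sum_{j=i+1}^{a-1} gamma(i, j, k) for paths
    counted only at vertices x_k outside the segment [x_i, x_j]."""
    beta = [0] * (a + 1)
    # anchor: beta[0] computed directly (in the algorithm it equals X from Eq. (29))
    beta[0] = 2 * sum(gamma(i, j, 0) for i in range(1, a) for j in range(i + 1, a))
    for k in range(a):
        corr = (sum(gamma(i, k, k + 1) for i in range(1, k))
                - sum(gamma(k + 1, j, k) for j in range(k + 2, a)))
        beta[k + 1] = beta[k] + 2 * corr
    return beta
```

Moving from x_k to x_{k+1} only the pairs with an endpoint at x_k or x_{k+1} change their contribution, which is exactly what the two correction sums capture.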
4.3 Postprocessing and algorithm summary
In this section we discuss our postprocessing that finishes our algorithm. For this we will state a
generalized version of Brandes’ algorithm. We will then describe how we use it to compute the
betweenness centrality correctly.
Lemma 11 (Restated). Let $s$ be a vertex and let $f \colon V(G)^2 \to \mathbb{N}$ be a function such that for each $u, v \in V(G)$ the value $f(u,v)$ can be computed in $O(\tau)$ time. Then, one can compute $\sum_{t \in V(G)} f(s,t) \cdot \sigma_{st}(v)$ for all $v \in V$ in overall $O(n \cdot \tau + m)$ time.
Proof. This proof generally follows the structure of the proof by Brandes [4, Theorem 6, Corollary 7]. Analogously to Brandes we define $\sigma_{st}(v,w)$ as the number of shortest paths from $s$ to $t$ that contain the edge $\{v,w\}$ and $S_s(v)$ as the set of successors of a vertex $v$ on shortest paths from $s$, that is, $S_s(v) = \{w \in V(G) \mid \{v,w\} \in E \wedge d_G(s,w) = d_G(s,v) + 1\}$. For the sake of readability we also define $\chi_{sv} = \sum_{t \in V(G)} f(s,t) \cdot \sigma_{st}(v)$. We will first show a series of equations that show how to compute $\chi_{sv}$. Afterwards we will give reasoning for Equations (30) and (31).
$$\begin{aligned}
\chi_{sv} &= \sum_{t \in V(G)} f(s,t) \cdot \sigma_{st}(v)\\
&= \sum_{t \in V(G)} f(s,t) \sum_{w \in S_s(v)} \sigma_{st}(v,w) = \sum_{w \in S_s(v)} \sum_{t \in V(G)} f(s,t) \cdot \sigma_{st}(v,w) && (30)\\
&= \sum_{w \in S_s(v)} \Bigg( \sum_{t \in V(G) \setminus \{w\}} f(s,t) \cdot \sigma_{st}(v,w) + f(s,w) \cdot \sigma_{sw}(v,w) \Bigg)\\
&= \sum_{w \in S_s(v)} \Bigg( \sum_{t \in V(G) \setminus \{w\}} f(s,t) \cdot \sigma_{st}(w) \cdot \frac{\sigma_{sv}}{\sigma_{sw}} + f(s,w) \cdot \sigma_{sv} \Bigg)\\
&= \sum_{w \in S_s(v)} \Bigg( \chi_{sw} \cdot \frac{\sigma_{sv}}{\sigma_{sw}} + f(s,w) \cdot \sigma_{sv} \Bigg). && (31)
\end{aligned}$$
We will now show that Equations (30) and (31) are correct. All other equalities are simple arithmetics. To see that Equation (30) is correct, observe that each shortest path from $s$ to any other vertex $t$ that contains $v$ either ends in $v$, that is, $t = v$, or contains exactly one edge $\{v,w\}$, where $w \in S_s(v)$. If $t = v$, then $\sigma_{st}(v) = 0$ and therefore $\sum_{t \in V} \sigma_{st}(v) = \sum_{t \in V} \sum_{w \in S_s(v)} \sigma_{st}(v,w)$.
To see that Equation (31) is correct, observe the following: first, that the number of shortest paths from $s$ to $t$ that contain a vertex $v$ is
$$\sigma_{st}(v) = \begin{cases} 0, & \text{if } d_G(s,v) + d_G(v,t) > d_G(s,t);\\ \sigma_{sv} \cdot \sigma_{vt}, & \text{otherwise;} \end{cases}$$
second, that the number of shortest $st$-paths that contain an edge $\{v,w\}$, $w \in S_s(v)$, is
$$\sigma_{st}(v,w) = \begin{cases} 0, & \text{if } d_G(s,v) + d_G(w,t) + 1 > d_G(s,t);\\ \sigma_{sv} \cdot \sigma_{wt}, & \text{otherwise;} \end{cases}$$
and third, that the number of shortest $sw$-paths that contain $v$ is equal to the number of shortest $sv$-paths. Combining these results yields $\sigma_{st}(v,w) = \sigma_{sv} \cdot \sigma_{wt} = \sigma_{sv} \cdot \sigma_{st}(w)/\sigma_{sw}$.
We next show how to compute $\chi_{sv}$ for each $v \in V$ in overall $O(m + n \cdot \tau)$ time. First, order the vertices in non-increasing distance to $s$ and compute the set of all successors of each vertex in $O(m)$ time using breadth-first search. Notice that the number of successors of all vertices is at most $m$ since each edge defines at most one successor-predecessor relation. Then compute $\chi_{sv}$ for each vertex by a dynamic program that iterates over the ordered list of vertices and computes $\sum_{w \in S_s(v)} \big( \chi_{sw} \cdot \frac{\sigma_{sv}}{\sigma_{sw}} + f(s,w) \cdot \sigma_{sv} \big)$ in overall $O(m + n \cdot \tau)$ time. This can be done by first computing $\sigma_{st}$ for all $t \in V$ in overall $O(m)$ time by Brandes [4, Corollary 4] and $f(s,t)$ for all $t \in V(G)$ in $O(n \cdot \tau)$ time, and then using the already computed values $S_s(v)$ and $\chi_{sw}$ to compute $\chi_{sv} = \sum_{w \in S_s(v)} \big( \chi_{sw} \cdot \frac{\sigma_{sv}}{\sigma_{sw}} + f(s,w) \cdot \sigma_{sv} \big)$ in $O(|S_s(v)|)$ time. Notice that $\sum_{v \in V} |S_s(v)| \leq O(m)$. This concludes the proof.
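A sketch of this generalized accumulation in code (our own illustration, not the paper's Algorithm 1; f is an arbitrary function on vertex pairs, and the integer division is exact because χ_{sw} is always divisible by σ_{sw}):

```python
from collections import deque

def chi_for_source(adj, s, f):
    """chi[v] = sum_t f(s, t) * sigma_st(v), computed in O(m + n*tau) time
    in the spirit of Lemma 11; adj is the adjacency list of an unweighted,
    undirected graph."""
    n = len(adj)
    dist, sigma = [-1] * n, [0] * n
    dist[s], sigma[s] = 0, 1
    order, queue = [], deque([s])
    while queue:                       # BFS: distances and path counts sigma_sv
        v = queue.popleft()
        order.append(v)
        for w in adj[v]:
            if dist[w] == -1:
                dist[w] = dist[v] + 1
                queue.append(w)
            if dist[w] == dist[v] + 1:
                sigma[w] += sigma[v]
    chi = [0] * n
    for v in reversed(order):          # non-increasing distance from s
        for w in adj[v]:
            if dist[w] == dist[v] + 1:  # w is a successor of v
                chi[v] += (chi[w] // sigma[w]) * sigma[v] + f(s, w) * sigma[v]
    return chi


# Example: diamond 0-1-3 / 0-2-3 with tail 3-4, source 0, f(s, t) = t + 1.
example = chi_for_source([[1, 2], [0, 3], [0, 3], [1, 2, 4], [3]], 0,
                         lambda s, t: t + 1)
```

For each v the result matches the direct definition Σ_t f(s, t)·σ_st(v) with the endpoint convention σ_st(t) = 0.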
We are now ready to combine all parts and prove our main theorem.
Theorem 4 (Restated). Betweenness Centrality can be solved in O(kn) time, where k is
the feedback edge number of the input graph.
Proof. We show that Algorithm 1 computes
$$C_B(v) = \sum_{s,t \in V(G)} \mathrm{Pen}[s] \cdot \mathrm{Pen}[t] \cdot \frac{\sigma_{st}(v)}{\sigma_{st}} = \sum_{s,t \in V(G)} \gamma(s,t,v)$$
for all $v \in V(G)$ in $O(kn)$ time. To this end, we use Observation 3 to split the sum as follows.
$$\sum_{s,t \in V(G)} \gamma(s,t,v) = \sum_{\substack{s \in V^{\geq 3}(G)\\ t \in V(G)}} \gamma(s,t,v) + \sum_{\substack{s \in V^{=2}(G)\\ t \in V^{\geq 3}(G)}} \gamma(t,s,v) + \sum_{P_1^{\max} \neq P_2^{\max} \in \mathcal{P}^{\max}} \;\sum_{\substack{s \in V^{=2}(P_1^{\max})\\ t \in V^{=2}(P_2^{\max})}} \gamma(s,t,v) + \sum_{P^{\max} \in \mathcal{P}^{\max}} \;\sum_{s,t \in V^{=2}(P^{\max})} \gamma(s,t,v).$$
By Propositions 6 and 7, we can compute the third and fourth summand in $O(kn)$ time, provided that the term $\mathrm{Inc}[s,t] \cdot \sigma_{st}(v)$ will be computed for every $s,t \in V^{\geq 3}(G)$ and $v \in V(G)$ in a postprocessing step (see Lines 22 to 27). We incorporate this postprocessing into the computation of the first two summands in the equation, that is, we next show that for all $v \in V(G)$ the following value can be computed in overall $O(kn)$ time:
$$\sum_{\substack{s \in V^{\geq 3}(G)\\ t \in V(G)}} \gamma(s,t,v) + \sum_{\substack{s \in V^{=2}(G)\\ t \in V^{\geq 3}(G)}} \gamma(s,t,v) + \sum_{\substack{s \in V^{\geq 3}(G)\\ t \in V^{\geq 3}(G)}} \mathrm{Inc}[s,t] \cdot \sigma_{st}(v).$$
To this end, observe that
$$\begin{aligned}
&\sum_{\substack{s \in V^{\geq 3}(G)\\ t \in V(G)}} \gamma(s,t,v) + \sum_{\substack{s \in V^{=2}(G)\\ t \in V^{\geq 3}(G)}} \gamma(s,t,v) + \sum_{\substack{s \in V^{\geq 3}(G)\\ t \in V^{\geq 3}(G)}} \mathrm{Inc}[s,t] \cdot \sigma_{st}(v)\\
&= \sum_{\substack{s \in V^{\geq 3}(G)\\ t \in V^{\geq 3}(G)}} 2 \cdot \mathrm{Pen}[s] \cdot \mathrm{Pen}[t] \cdot \frac{\sigma_{st}(v)}{\sigma_{st}} + \sum_{\substack{s \in V^{\geq 3}(G)\\ t \in V^{=2}(G)}} \mathrm{Pen}[s] \cdot \mathrm{Pen}[t] \cdot \frac{\sigma_{st}(v)}{\sigma_{st}} + \sum_{\substack{s \in V^{\geq 3}(G)\\ t \in V^{\geq 3}(G)}} \mathrm{Inc}[s,t] \cdot \sigma_{st}(v)\\
&= \sum_{s \in V^{\geq 3}(G)} \Bigg( \sum_{t \in V^{=2}(G)} \mathrm{Pen}[s]\,\mathrm{Pen}[t]\,\frac{\sigma_{st}(v)}{\sigma_{st}} + \sum_{t \in V^{\geq 3}(G)} \sigma_{st}(v) \Big( 2\,\frac{\mathrm{Pen}[s]\,\mathrm{Pen}[t]}{\sigma_{st}} + \mathrm{Inc}[s,t] \Big) \Bigg).
\end{aligned}$$
Notice that we initialize Inc[s, t] in Lines 16 and 17 in Algorithm 1 with 2 · Pen[s] Pen[t]/σst
and Pen[s] Pen[t]/σst respectively. Thus we can use the algorithm described in Lemma 3 for each
vertex s ∈ V ≥3 (G) with X(s, t) = Inc[s, t].
Since Pen[s], Pen[t], σst and Inc[s, t] can all be looked up in constant time, the algorithm only takes O(n + m) time for each vertex s (see Lines 28 and 29). By Lemma 1 there are O(min{k, n}) vertices of degree at least three. Thus the algorithm altogether takes O(min{n, k} · m) = O(min{n, k} · (n + k)) = O(kn) time.
5 Conclusion
Lifting the processing of degree-one vertices due to Baglioni et al. [3] to a technically much more
demanding processing of degree-two vertices, we derived a new algorithm for Betweenness Centrality running in O(kn) worst-case time (k is the feedback edge number of the input graph).
Our work focuses on algorithm theory; in a follow-up step empirical work with implementing and
testing our algorithm should be done. Given the practical success of Baglioni et al. [3] in accelerating the algorithm of Brandes [4], we expect this to be a fruitful line of future research. In
this context, also the use of further data reduction techniques should be investigated. Moving
back to theory, it would be of high interest to identify parameterizations “beyond” feedback edge
number that might help to get more results in the spirit of our work. Indeed, our work can be
seen as a further systematic contribution in making use of parameterized complexity analysis for
polynomial-time solvable problems as advocated by Giannopoulou et al. [8].
References
[1] Amir Abboud, Fabrizio Grandoni, and Virginia Vassilevska Williams. Subcubic equivalences
between graph centrality problems, APSP and diameter. In Proceedings of the 26th Annual
ACM-SIAM Symposium on Discrete Algorithms (SODA 2015), pages 1681–1697. SIAM, 2015.
1
[2] David A Bader, Shiva Kintali, Kamesh Madduri, and Milena Mihail. Approximating betweenness centrality. In Proceedings of the 5th International Workshop on Algorithms and Models
for the Web-Graph (WAW 2007), pages 124–137. Springer, 2007. 1
[3] Miriam Baglioni, Filippo Geraci, Marco Pellegrini, and Ernesto Lastres. Fast exact computation of betweenness centrality in social networks. In Proceedings of the 4th International
Conference on Advances in Social Networks Analysis and Mining (ASONAM 2012), pages
450–456. IEEE Computer Society, 2012. 2, 3, 4, 5, 11, 23, 24
[4] Ulrik Brandes. A faster algorithm for betweenness centrality. Journal of Mathematical Sociology, 25(2):163–177, 2001. 1, 2, 7, 11, 22, 24
[5] Dóra Erdős, Vatche Ishakian, Azer Bestavros, and Evimaria Terzi. A divide-and-conquer algorithm for betweenness centrality. In Proceedings of the 15th SIAM International Conference
on Data Mining (SDM 2015), pages 433–441. SIAM, 2015. 1
[6] Linton Freeman. A set of measures of centrality based on betweenness. Sociometry, 40:35–41,
1977. 1
[7] Robert Geisberger, Peter Sanders, and Dominik Schultes. Better approximation of betweenness centrality. In Proceedings of the Meeting on Algorithm Engineering & Expermiments
(ALENEX 2008), pages 90–100. SIAM, 2008. 1
[8] Archontia C. Giannopoulou, George B. Mertzios, and Rolf Niedermeier. Polynomial fixedparameter algorithms: A case study for longest path on interval graphs. Theoretical Computer
Science, 689:67–95, 2017. 24
[9] Oded Green, Robert McColl, and David A Bader. A fast algorithm for streaming betweenness
centrality. In International Conference on Privacy, Security, Risk and Trust (PASSAT 2012),
pages 11–20. IEEE, 2012. 1
[10] Ali Khazaee, Ata Ebrahimzadeh, and Abbas Babajani-Feremi. Identifying patients with
Alzheimer’s disease using resting-state fMRI and graph theory. Clinical Neurophysiology, 126
(11):2132–2141, 2015. 1
[11] John D. Medaglia. Graph theoretic analysis of resting state functional MR imaging. Neuroimaging Clinics of North America, 27(4):593–607, 2017. 1
[12] George B. Mertzios, André Nichterlein, and Rolf Niedermeier. The power of linear-time data
reduction for maximum matching. In Proceedings of the 42nd International Symposium on
Mathematical Foundations of Computer Science (MFCS 2017), volume 83 of LIPIcs, pages
46:1–46:14. Schloss Dagstuhl - Leibniz-Zentrum fuer Informatik, 2017. 2
[13] Meghana Nasre, Matteo Pontecorvi, and Vijaya Ramachandran. Betweenness centrality – incremental and faster. In Proceedings of the 39th International Symposium on Mathematical Foundations of Computer Science (MFCS 2014), volume 8634 of LNCS, pages 577–588. Springer, 2014. 1
[14] Matteo Riondato and Evgenios M Kornaropoulos. Fast approximation of betweenness centrality through sampling. Data Mining and Knowledge Discovery, 30(2):438–475, 2016. 1
[15] Ahmet Erdem Sariyüce, Kamer Kaya, Erik Saule, and Ümit V. Çatalyürek. Graph manipulations for fast centrality computation. ACM Transactions on Knowledge Discovery from
Data, 11(3):26:1–26:25, 2017. 1
[16] Guangming Tan, Dengbiao Tu, and Ninghui Sun. A parallel algorithm for computing betweenness centrality. In Proceedings of the 38th International Conference on Parallel Processing
(ICPP 2009), pages 340–347. IEEE Computer Society, 2009. 1
[17] Wei Wang and Choon Yik Tang. Distributed computation of node and edge betweenness on
tree graphs. In Proceedings of the 52nd IEEE Conference on Decision and Control (CDC
2013), pages 43–48. IEEE, 2013. 1, 2
Automatically Generating Features for
Learning Program Analysis Heuristics
Kwonsoo Chae
Hakjoo Oh
arXiv:1612.09394v1 [] 30 Dec 2016
Korea University
{kchae,hakjoo oh}@korea.ac.kr
Kihong Heo
Hongseok Yang
Seoul National University
[email protected]
University of Oxford
[email protected]
Abstract
driven approaches, where a static analysis uses a parameterized heuristic and the parameter values that maximize the
analysis performance are learned automatically from existing codebases via machine learning techniques [10, 18, 22,
40]; the learned heuristic is then used for analyzing previously unseen programs. The approaches have been used
to generate various cost-effective analysis heuristics automatically, for example, for controlling the degree of flow
or context-sensitivity [40], or determining where to apply
relational analysis [22], or deciding the threshold values of
widening operators [10].
However, these data-driven approaches have one serious
drawback. Their successes crucially depend on the qualities
of so called features, which convert analysis inputs, such as
programs and queries, to the kind of inputs that machine
learning techniques understand. Designing a right set of features requires a nontrivial amount of knowledge and efforts
of domain experts. Furthermore, the features designed for
one analysis do not usually generalize to others. For example, in [40], a total of 45 features were manually designed for
controlling flow-sensitivity, but a new set of 38 features were
needed for controlling context-sensitivity. This manual task
of crafting features is a major impediment to the widespread
adoption of data-driven approaches in practice, as in other
applications of machine learning techniques.
In this paper, we present a technique for automatically
generating features for data-driven static program analyses.
From existing codebases, a static analysis with our technique
learns not only an analysis heuristic but also features necessary to learn the heuristic itself. In the first phase of this
learning process, a set of features appropriate for a given
analysis task is generated from given codebases. The next
phase uses the generated features and learns an analysis
heuristic from the codebases. Our technique is underpinned
by two key ideas. The first idea is to run a generic program reducer (e.g., C-Reduce [46]) on the codebases with
a static analysis as a subroutine, and to synthesize automatically feature programs, small pieces of code that minimally describe when it is worth increasing the precision
of the analysis. Intuitively these feature programs capture
programming patterns whose analysis results benefit greatly
We present a technique for automatically generating features
for data-driven program analyses. Recently data-driven approaches for building a program analysis have been proposed, which mine existing codebases and automatically
learn heuristics for finding a cost-effective abstraction for
a given analysis task. Such approaches reduce the burden
of the analysis designers, but they do not remove it completely; they still leave the highly nontrivial task of designing so-called features in the hands of the designers. Our
technique automates this feature design process. The idea
is to use programs as features after reducing and abstracting them. Our technique goes through selected program-query pairs in codebases, and it reduces and abstracts the
program in each pair to a few lines of code, while ensuring
that the analysis behaves similarly for the original and the
new programs with respect to the query. Each reduced program serves as a boolean feature for program-query pairs.
This feature evaluates to true for a given program-query pair
when (as a program) it is included in the program part of the
pair. We have implemented our approach for three real-world
program analyses. Our experimental evaluation shows that
these analyses with automatically-generated features perform comparably to those with manually crafted features.
1. Introduction
In an ideal world, a static program analysis adapts to a given
task automatically, so that it uses expensive techniques for
improving analysis precision only when those techniques are
absolutely necessary. In a real world, however, most static
analyses are not capable of doing such automatic adaptation. Instead, they rely on fixed manually-designed heuristics for deciding when these precision-improving but costly
techniques should be applied. These heuristics are usually
suboptimal and brittle. More importantly, they are the outcomes of a substantial amount of laborious engineering efforts of analysis designers.
Addressing these concerns with manually-designed heuristics has been the goal of a large body of research in the
program-analysis community [4, 11, 17, 19–21, 25, 35, 37,
51, 53, 54]. Recently researchers started to explore data-driven approaches.
2017/1/2
from the increased precision of the analysis. The second idea
is to generalize these feature programs and represent them by
abstract data-flow graphs. Such a graph becomes a boolean
predicate on program slices, which holds for a slice when
the graph is included in it. We incorporate these ideas into
a general framework that is applicable to various parametric
static analyses.
We show the effectiveness and generality of our technique
by applying it to three static analyses for the C programming
language: partially flow-sensitive interval and pointer analyses, and partial Octagon analysis. Our technique successfully
generated features relevant to each analysis, which were
then used for learning an effective analysis heuristic. The
experimental results show that the heuristics learned with
automatically-generated features have performance comparable to those with hand-crafted features by analysis experts.
that may influence the query: it computes the data-flow
slice of the query and tracks the variables in the slice flow-sensitively. On the other hand, if the prediction is negative,
the analysis applies flow-insensitivity (FI) to the variables
on which the query depends.
For example, consider the following program:
1  x = 0; y = 0; z = input(); w = 0;
2  y = x; y++;
3  assert (y > 0);   // Query 1
4  assert (z > 0);   // Query 2
5  assert (w == 0);  // Query 3
The first query needs FS to prove, and the second is impossible to prove because the value of z comes from the external
input. The last query is easily proved even with FI. Ideally,
we want the classifier to give positive prediction only to the
first query, so that our analysis keeps flow-sensitive results
only for the variables x and y, on which the first query depends, and analyzes other variables flow-insensitively. That
is, we want the analysis to compute the following result:
line   flow-sensitive result         flow-insensitive result
       abstract state                abstract state
 1     {x ↦ [0, 0], y ↦ [0, 0]}
 2     {x ↦ [0, 0], y ↦ [1, 1]}      {z ↦ ⊤, w ↦ [0, 0]}
 3     {x ↦ [0, 0], y ↦ [1, 1]}
 4     {x ↦ [0, 0], y ↦ [1, 1]}
 5     {x ↦ [0, 0], y ↦ [1, 1]}

Note that for x and y, the result keeps a separate abstract state at each program point, but for the other variables z and w, it has just one abstract state for all program points.

Contributions We summarize our contributions below.
• We present a framework for automatically generating features for learning analysis heuristics. The framework is general enough to be used for various kinds of analyses for the C programming language such as interval, pointer, and Octagon analyses.
• We present a novel method that uses a program reducer for generating good feature programs, which capture important behaviors of static analysis.
• We introduce the notion of abstract data-flow graphs and show how they can serve as generic features for data-driven static analyses.
• We provide extensive experimental evaluations with three different kinds of static analyses.
Outline We informally describe our approach in Section 2.
The formal counterparts of this description take up the next
four sections: Section 3 for the definition of parametric static
analyses considered in the paper, Section 4 for an algorithm
that learns heuristics for choosing appropriate parameter values for a given analysis task, Section 5 for our technique for
automatically generating features, and Section 6 for instance
analyses designed according to our approach. In Section 7,
we report the findings of the experimental evaluation of our
approach. In Sections 8 and 9, we explain the relationship
between our approach and other prior works, and finish the
paper with concluding remarks.
2.1 Learning a Classifier
The performance of the analysis crucially depends on the
quality of its classifier C. Instead of designing the classifier
manually, we learn it from given codebases automatically.
Let us illustrate this learning process with a simple codebase
of just one program P .
The input to the classifier learning is the collection {(q^(i), b^(i))}_{i=1}^n of queries in P labeled with values 0 and 1. The label b^(i) indicates whether the corresponding query q^(i) can be proved with FS but not with FI. These labeled data are automatically generated by analyzing the codebase {P} and identifying the queries that are proved with FS but not with FI.
Given such data {(q^(i), b^(i))}_{i=1}^n, we learn a classifier C in two steps. First, we represent each query q^(i) by a feature vector, which encodes essential properties of the query q^(i) in the program P and helps learning algorithms to achieve good generalization. Formally, we transform the original data {(q^(i), b^(i))}_{i=1}^n to {(v^(i), b^(i))}_{i=1}^n, where v^(i) ∈ B^k = {0, 1}^k is a binary feature vector of query q^(i). The dimension k of the feature vectors is the number of features. Second, to this transformed data set {(v^(i), b^(i))}_{i=1}^n, we ap-
2. Overview
We illustrate our approach using its instantiation with a partially flow-sensitive interval analysis.
Our interval analysis is query-based and partially flow-sensitive. It uses a classifier C that predicts, for each query in
a given program, whether flow-sensitivity is crucial for proving the query: the query can be proved with flow-sensitivity
but not without it. If the prediction is positive, the analysis applies flow-sensitivity (FS) to the program variables
(a) Original program:
    1  a = 0;
    2  while (1) {
    3    b = unknown();
    4    if (a > b)
    5      if (a < 3)
    6        assert (a < 5);
    7    a++;
    8  }

(b) Feature program:
    1  a = 0;
    2  while (1) {
    3    if (a < 3)
    4      assert (a < 5);
    5    a++;
    6  }

(c) Abstract data-flow graph (feature), with nodes
    x := x + c,  x := c,  x < c,  Q(x < c)

Figure 1. Example program and feature.
in codebases that require FS to prove. Then, for every query
in the set, we run the reducer on the program containing
the query under the predicate that the query in the reduced
program continues to be FS-provable but FI-unprovable. The
reducer removes all the parts from the program that are
irrelevant to the FS-provability and the FI-unprovability of
the query, leading to a feature program.
For example, consider the example program in Figure 1(a). The assertion at line 6 can be proved by the flow-sensitive interval analysis but not by the flow-insensitive
one; with FS, the value of a is restricted to the interval [0, 3]
because of the condition at line 5. With FI, a has [0, +∞]
at all program points. We reduce this program as long as
the flow-sensitive analysis proves the assertion while the
flow-insensitive one does not, resulting in the program in
Figure 1(b). Note that the reduced program only contains
the key reasons (i.e., loop and if (a < 3)) for why FS
works. For example, the command if (a > b) is removed
because even without it, the flow-sensitive analysis proves
the query. Running the reducer this way automatically removes these irrelevant parts of the original program.
In experiments, we used C-Reduce [46], which has been
used for generating small test cases that trigger compiler
bugs. The original program in Figure 1(a) is too simplistic
and does not fully reflect the amount of slicing done by C-Reduce for real programs. In our experiments, we found that
C-Reduce is able to transform programs with >1KLOC to
those with just 5–10 lines, similar to the one in Figure 1(b).
ply an off-the-shelf classification algorithm (such as decision
tree) and learn a classifier C : Bk → B, which takes a feature
vector of a query and makes a prediction.
The success of this learning process relies heavily on
how we convert queries to feature vectors. If the feature
vector of a query ignores important information about the
query for prediction, learning a good classifier is impossible, irrespective of the learning algorithm used. In previous work [10, 22, 40], this feature engineering is done manually by analysis designers. For a specific static analysis, they defined a set of features and used them to convert a query to a feature vector. But as in other applications of machine learning, this feature engineering requires considerable domain expertise and engineering effort. Our goal is to automatically generate high-quality features for this program-analysis application.
2.2 Automatic Feature Generation
We convert queries to feature vectors using a set of features
Π = {π1 , . . . , πk } and a procedure match. A feature πi encodes a property about queries. The match procedure takes a
feature π, a query q0 and a program P0 containing the query,
and checks whether the slice of P0 that may affect q0 satisfies
the property encoded by π. If so, it returns 1, and otherwise,
0. Using Π and match, we transform every query q in the
program P of our codebase into a feature vector v:
v = ⟨match(π1, q, P), . . . , match(πk, q, P)⟩.
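This conversion can be sketched in a few lines of Python (a toy rendering: the names are hypothetical, and match is simplified to substring containment on precomputed slice text rather than the paper's graph-based check):

```python
# Toy sketch of turning a query into a boolean feature vector.
# `match` here just checks whether a feature's code pattern occurs in
# the slice text for the query; the paper's match works on abstract
# data-flow graphs instead.

def match(feature, query, program_slices):
    # 1 iff the slice that may affect `query` exhibits `feature`.
    return 1 if feature in program_slices[query] else 0

def feature_vector(features, query, program_slices):
    return [match(f, query, program_slices) for f in features]

features = ["x := c", "x := x + c", "x < c"]
slices = {
    "q1": "x := c; x := x + c; x < c",  # slice of the program for q1
    "q2": "x := c",                     # slice of the program for q2
}
assert feature_vector(features, "q1", slices) == [1, 1, 1]
assert feature_vector(features, "q2", slices) == [1, 0, 0]
```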
2.2.1 Feature Generation
Representing Feature Programs by Abstract Data-flow
Graphs The second idea is to represent the feature programs by abstract data-flow graphs. We build graphs that
describe the data flows of the feature programs. Then, we
abstract individual atomic commands in the graphs, for instance, by replacing some constants and variables with the
same fixed symbols c and x, respectively. The built graphs
form the collection of features Π.
For example, the feature program in Figure 1(b) is represented by the graph in Figure 1(c). The graph captures the
data flows of the feature program that influence the query.
At the same time, the graph generalizes the program by abstracting its atomic commands. All the variables are replaced
by the same symbol x, and all integers by c, which in particu-
The unique aspect of our approach lies in our technique for
generating the feature set Π from given codebases automatically.¹ Two ideas make this automatic generation possible.
Generating Feature Programs Using a Reducer The first
idea is to use a generic program reducer. A reducer (e.g., C-Reduce [46]) takes a program and a predicate, and iteratively
removes parts of the program as long as the predicate holds.
We use a reducer to generate a collection of small code
snippets that describe cases where the analysis can prove a
query with FS but not with FI. We first collect a set of queries
¹ In our implementation, we partition the codebases into two groups. Programs in the first group are used for feature generation and learning, and those in the other group are used for cross validation.
lar makes the conditions a < 3 and a < 5 the same abstract
condition x < c.
How much should we abstract commands of the feature
program? The answer depends on the static analysis. If we abstract commands aggressively, this would introduce a strong inductive bias, so that the algorithm for learning a classifier might have a hard time finding a good classifier for the given codebases but would require less data for generalization. Otherwise, the opposite situation would occur. Our technique considers multiple abstraction levels, and automatically picks one for a given static analysis using a combination of search and cross-validation (Section 5).
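The selection step can be sketched as follows (a minimal stand-in: `learn`, the data, and the candidate rule sets are hypothetical, and abstraction levels are treated as opaque candidates scored by validation accuracy):

```python
# Sketch of choosing an abstraction level by cross-validation: learn a
# classifier per candidate on a training split and keep the candidate
# with the best accuracy on the validation split.

def pick_abstraction(candidates, train, valid, learn):
    def accuracy(classifier, data):
        return sum(classifier(v) == b for v, b in data) / len(data)
    return max(candidates, key=lambda R: accuracy(learn(R, train), valid))

# Toy learner: the "fine" abstraction yields a useful classifier,
# the "coarse" one collapses everything to a constant prediction.
learn = lambda R, train: (lambda v: v[0]) if R == "fine" else (lambda v: 0)
valid = [([1], 1), ([0], 0)]
assert pick_abstraction(["coarse", "fine"], [], valid, learn) == "fine"
```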
query, but this is not a viable option because reducing is just
too expensive to perform every time we analyze a program.
Instead, we take a (less expensive) alternative based on the
transitive closure of the graph of the query.
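The inclusion check based on the transitive closure can be sketched in Python (the node and arc sets below are reconstructed from the Section 2 example, so the exact arc set is an assumption):

```python
# Sketch of the relaxed graph-inclusion check: every node of the
# feature must occur in the query graph, and every arc of the feature
# must be realized by a *path* in the query graph.

def transitive_closure(edges):
    closure = set(edges)
    changed = True
    while changed:
        changed = False
        for (a, b) in list(closure):
            for (c, d) in list(closure):
                if b == c and (a, d) not in closure:
                    closure.add((a, d))
                    changed = True
    return closure

def included(feature_nodes, feature_arcs, query_nodes, query_arcs):
    reach = transitive_closure(query_arcs)
    return (set(feature_nodes) <= set(query_nodes)
            and all(arc in reach for arc in feature_arcs))

# The feature arc x:=x+c -> x<c is realized by the query path
# x:=x+c -> x>x -> x<c, as in the Section 2 example.
q_nodes = ["x:=T", "x>x", "x:=x+c", "x:=c", "x<c", "Q(x<c)"]
q_arcs = {("x:=x+c", "x>x"), ("x>x", "x<c"),
          ("x:=c", "x<c"), ("x<c", "Q(x<c)")}
f_nodes = ["x:=x+c", "x:=c", "x<c", "Q(x<c)"]
f_arcs = {("x:=x+c", "x<c"), ("x:=c", "x<c"), ("x<c", "Q(x<c)")}
assert included(f_nodes, f_arcs, q_nodes, q_arcs)
```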
3. Setting
Parametric Static Analysis We use the setting for parametric static analyses in [29]. Let P ∈ P be a program to analyze. We assume that a set QP of queries (i.e., assertions) in P is given together with the program. The goal of the analysis is to prove as many queries as possible. A static analysis is parameterized by a set of program components: we assume a set JP of program components that represent parts of P. For instance, in our partially flow-sensitive analysis, JP is the set of program variables. The parameter space is defined by (AP, ⊑), where AP = B^JP = {0, 1}^JP is the set of binary vectors over JP with the pointwise ordering. We sometimes regard a parameter a ∈ AP as a function from JP to B, or as the set a = {j ∈ JP | aj = 1}. In the latter case, we write |a| for the size of the set. We define two constants in AP: 0 = λj ∈ JP. 0 and 1 = λj ∈ JP. 1, which represent the most imprecise and the most precise abstractions, respectively. We omit the subscript P when there is no confusion. A parametric static analysis is a function F : P × A → ℘(Q), which takes a program to analyze and a parameter, and returns the set of queries proved by the analysis under that parameter. In static analysis of C programs, using a more refined parameter typically improves the precision of the analysis but increases the cost.
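A toy rendering of this interface (the analysis F below is invented for illustration; it only shows the shape P × A → ℘(Q) and the intended effect of refining the parameter):

```python
# Sketch of a parametric analysis F(P, a): the parameter a is a
# bitvector over program components (here, variables tracked
# flow-sensitively). More precision proves more queries.

def F(P, a):
    # Toy rule: a query is proved if it is trivial, or if every
    # variable it depends on is tracked flow-sensitively (a[j] == 1).
    proved = set()
    for q, (deps, trivial) in P["queries"].items():
        if trivial or all(a[j] == 1 for j in deps):
            proved.add(q)
    return proved

P = {"queries": {"q1": (["x", "y"], False),   # needs FS on x and y
                 "q2": (["z"], False),        # needs FS on z (toy)
                 "q3": ([], True)}}           # provable even with FI
bot = {"x": 0, "y": 0, "z": 0}                # parameter 0, least precise
top = {"x": 1, "y": 1, "z": 1}                # parameter 1, most precise
assert F(P, bot) == {"q3"}
assert F(P, bot) <= F(P, top) == {"q1", "q2", "q3"}
```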
2.2.2 Matching Algorithm
By using the technique explained so far, we generate an abstract data-flow graph for each FS-provable but FI-unprovable
query in given codebases. These graphs form the set of features, Π = {π1 , . . . , πk }.
The match procedure takes a feature (i.e., abstract data-flow graph) πi ∈ Π, a query q0, and a program P0 containing
q0 . Then, it checks whether the slice of P0 that may affect
q0 includes a piece of code described by πi . Consider the
query in the original program in Figure 1(a) and the feature
π in Figure 1(c). Checking whether the slice for the query
includes the feature is done in the following two steps.
We first represent the query in Figure 1(a) itself by an
abstract data-flow graph:
x := ⊤   x > x   x := x + c   x := c   x < c   Q(x < c)
Analysis Heuristic that Selects a Parameter The parameter of the analysis is selected by an analysis heuristic H :
P → A. Given a program P , the analysis first applies the
heuristic to P , and then uses the resulting parameter H(P )
to analyze the program. That is, it computes F (P, H(P )). If
the heuristic is good, running the analysis with H(P ) would
give results close to those of the most precise abstraction
(F (P, 1)), while the analysis cost is close to that of the least
precise abstraction (F (P, 0)). Previously, such a heuristic
was designed manually (e.g., [25, 37, 54]), which requires
a large amount of engineering efforts of analysis designers.
Note that the graph is similar to the one in Figure 1(c) but it
contains all the parts of the original program. For instance,
it has the node x > x and the edge from this node to x < c,
both of which are absent in the feature. The unknown value,
such as the return value of unknown(), is represented by ⊤.
Next, we use a variant of graph inclusion to decide
whether the query includes the feature. We check whether
every vertex of the feature is included in the graph of the
query and whether every arc of the feature is included in the
transitive closure of the graph. The answers to both questions are yes. For instance, the path for the arc x:=x+c →
x<c in the feature is x:=x+c → x>x → x<c in the
graph of the query.
Note that we use a variant of graph inclusion where an
arc of one graph is allowed to be realized by a path of its
including graph, not necessarily by an arc as in the usual
definition. This variation is essential for our purpose. When
we check a feature against a query, the feature is reduced but
the query is not. Thus, even when the query here is the one
from which the feature is generated, this checking is likely
to fail if we use the usual notion of graph inclusion (i.e.,
G1 = (V1 , E1 ) is included in G2 = (V2 , E2 ) iff V1 ⊆ V2
and E1 ⊆ E2 ). In theory, we could invoke a reducer on the
4. Learning an Analysis Heuristic
In a data-driven approach, an analysis heuristic H is automatically learned from given codebases. In this section, we
describe our realization of this approach while assuming that
a set of features is given; this assumption will be discharged
in Section 5. We denote our method for learning a heuristic
by learn(F, Π, P), which takes a static analysis F , a set Π of
features, and codebases P, and returns a heuristic H.
Learning a Classifier In our method, learning a heuristic
H boils down to learning a classifier C, which predicts for
each query whether the query can be proved by a static
analysis with increased precision but not without it. Suppose
4
2017/1/2
that we are given codebases P = {P1 , . . . , Pn }, a set of
features Π = {π1 , . . . , πk }, and a procedure match. The
precise definitions of Π and match will be given in the next
section. For now, it is sufficient just to know that a feature
πi ∈ Π describes a property about queries and match checks
whether a query satisfies this property.
Using Π and match, we represent a query q ∈ QP in a
program P by a feature vector Π(q, P ) ∈ Bk such that the
ith component of the vector is the result of match(πi , q, P ).
This vector representation enables us to employ the standard
tools for learning and using a binary classifier. In our case,
a classifier is just a map C : Bk → B and predicts whether
the query can be proved by the analysis under high precision
(such as flow-sensitivity) but not with low precision (such as
flow-insensitivity). To use such a classifier, we just need to
call it with Π(q, P ). To learn it from codebases, we follow
the two steps described below:
called feature programs from given codebases (Section 5.1),
and then converts all the generated programs to abstract data-flow graphs (Section 5.2). The obtained graphs enable the
match procedure to transform queries to feature vectors so
that a classifier can be applied to these queries (Section 5.3).
In this section, we will explain all these aspects of our feature generation algorithm.
5.1 Generation of Feature Programs
Given a static analysis F and codebases P, gen fp(P, F )
generates feature programs in two steps.
First, it collects the set of queries in P that can be proved
by the analysis F under high precision (i.e., F (−, 1)) but
not with low precision (i.e., F (−, 0)). We call such queries
positive, and the remaining queries negative. The
negative queries are either too hard in the sense that they
cannot be proved even with high precision, or too easy in the
sense that they can be proved even with low precision. Let
Ppos be the set of positive queries and their host programs:
1. We generate labeled data D ⊆ Bk ×B from the codebases:
D = {⟨Π(q, Pi), bi⟩ | Pi ∈ P ∧ q ∈ QPi }, where
bi = (q ∈ F (Pi , 1)\F (Pi , 0)). That is, for each program
Pi ∈ P and a query q in Pi , we represent the query by
a feature vector and label it with 1 if q can be proved
by the analysis under the most precise setting but not
under the least precise setting. When it is infeasible to run
the most precise analysis (e.g., the Octagon analysis), we
instead run an approximate version of it. In experiments
with the partial Octagon analysis, we used the impact preanalysis [37] as an approximation.
Ppos = {(P, q) | P ∈ P ∧ q ∈ QP ∧ φq (P )}
where QP is the set of queries in P and φq is defined by:
φq (P ) = (q ∉ F (P, 0) ∧ q ∈ F (P, 1)).   (1)
Second, gen fp(P, F ) shrinks the positive queries collected in the first step by using a program reducer. A program reducer (e.g., C-Reduce [46]) is a function of the type:
reduce : P × (P → B) → P. It takes a program P
and a predicate pred, and removes parts of P as much as
possible while preserving the original result of the predicate. At the end, it returns a minimal program P ′ such that
pred(P ′ ) = pred(P ). Our procedure gen fp(P, F ) runs a
reducer and shrinks programs in Ppos as follows:
2. Then, we learn a classifier from the labeled data D by invoking an off-the-shelf learning algorithm, such as logistic regression, decision tree, and support vector machine.
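The two steps can be sketched as follows (F, Pi, and the one-feature "stump" learner are toy stand-ins; a real instantiation would call an off-the-shelf learner such as a decision tree):

```python
# Sketch of the two learning steps: build labeled data D from the
# codebases, then fit a (deliberately trivial) classifier.

def build_labeled_data(codebases, F, queries, Pi):
    # D = {<Pi(q, P), b> | P in codebases, q in queries(P)},
    # with b = 1 iff q is proved only under the most precise setting.
    D = []
    for P in codebases:
        proved_hi, proved_lo = F(P, 1), F(P, 0)
        for q in queries(P):
            b = int(q in proved_hi and q not in proved_lo)
            D.append((Pi(q, P), b))
    return D

def learn_stump(D):
    # Trivial learner: pick the single feature index whose value best
    # agrees with the labels, and predict with that feature alone.
    k = len(D[0][0])
    best = max(range(k), key=lambda i: sum(v[i] == b for v, b in D))
    return lambda v: v[best]

# Toy instantiation: one program "P" with three queries.
F = lambda P, a: {"q1", "q3"} if a == 1 else {"q3"}   # q1 needs precision
queries = lambda P: ["q1", "q2", "q3"]
Pi = lambda q, P: {"q1": [1, 0], "q2": [0, 1], "q3": [0, 0]}[q]

D = build_labeled_data(["P"], F, queries, Pi)
C = learn_stump(D)
assert C(Pi("q1", "P")) == 1 and C(Pi("q2", "P")) == 0
```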
Building an Analysis Heuristic We construct an analysis heuristic H : P → A from a learned classifier C as follows:
H(P) = ⋃ {req(q) | q ∈ QP ∧ C(Π(q, P)) = 1}
Pfeat = {(reduce(P, φq ), q) | (P, q) ∈ Ppos }.
Pfeat is the collection of the reduced programs paired with
queries. We call these programs feature programs. Because
of the reducer, each feature program contains only those
parts related to the reason that high precision is effective for
proving its query. Intuitively, the reducer removes noise in
the positive examples (P, q) ∈ Ppos , until the examples contain only the reasons that high precision of the analysis helps
prove their queries. The result of gen fp(P, F ) is Pfeat .
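The reducer contract reduce : P × (P → B) → P can be illustrated with a deliberately naive line-level reducer (C-Reduce is far more sophisticated; this sketch only shows "remove parts while the predicate still holds"):

```python
# Naive line-level sketch of reduce(P, pred): repeatedly delete one
# line at a time, keeping a deletion only if the predicate still holds.

def reduce_program(lines, pred):
    lines = list(lines)
    changed = True
    while changed:
        changed = False
        for i in range(len(lines)):
            candidate = lines[:i] + lines[i + 1:]
            if pred(candidate):        # query still FS-provable, FI-unprovable
                lines = candidate
                changed = True
                break
    return lines

# Toy predicate standing in for phi_q: the query stays "positive"
# exactly when the two relevant lines survive.
pred = lambda ls: "if (a < 3)" in ls and "assert (a < 5);" in ls
original = ["a = 0;", "b = unknown();", "if (a > b)",
            "if (a < 3)", "assert (a < 5);", "a++;"]
assert reduce_program(original, pred) == ["if (a < 3)", "assert (a < 5);"]
```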
The heuristic iterates over every query q ∈ QP in the
program P , and selects the ones that get mapped to 1 by the
classifier C. For each of these selected queries, the heuristic
collects the parts of P that may affect the analysis result
of the query. This collection is done by the function req :
Q → A, which satisfies that q ∈ F (P, 1) =⇒ q ∈
F (P, req(q)) for all queries q in P . This function should
be specified by an analysis designer, but according to our
experience, this is rather a straightforward task. For instance,
our instance analyses (namely, two partially flow-sensitive
analyses and partial Octagon analysis) implement req via a
simple dependency analysis. In particular, in our partially
flow-sensitive analysis, req(q) is just the set of all program
variables in the dependency slice of P for the query q. The
result of H(P ) is the union of all the selected parts of P .
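The construction of H can be sketched directly from the definition (req, Pi, and C below are stand-ins for the paper's components):

```python
# Sketch of building H from a learned classifier C: take the union of
# req(q) over the queries that the classifier predicts need precision.

def H(P, queries, C, Pi, req):
    selected = set()
    for q in queries(P):
        if C(Pi(q, P)) == 1:
            selected |= req(q)   # parts of P that may affect q
    return selected

queries = lambda P: ["q1", "q2"]
Pi = lambda q, P: [1] if q == "q1" else [0]
C = lambda v: v[0]                               # predicts only q1
req = lambda q: {"x", "y"} if q == "q1" else {"z"}
assert H("P", queries, C, Pi, req) == {"x", "y"}
```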
Improvement 1: Preserving Analysis Results A program
reducer such as C-Reduce [46] is powerful and is able to reduce C programs of thousands of LOC to just a few lines of feature programs. However, some additional care is needed in
order to prevent C-Reduce from removing too aggressively
and producing trivial programs.
For example, suppose we analyze the following code
snippet (excerpted and simplified from bc-1.06) with a
partially flow-sensitive interval analysis:
5. Automatic Feature Generation
We now present the main contribution of this paper, our feature generation algorithm. The algorithm first generates so
1  yychar = 1; yychar = input(); // external input
2  if (yychar < 0) exit(1);
3  if (yychar <= 289)
4    assert(0 <= yychar < 290); // query q
5  yychar++;
The flow-sensitive interval analysis proves the assertion because it infers the interval [0, 0] for pos. Note that this analysis result crucially relies on the condition !pos. Before the
condition, the value of pos is [0, +∞], but the condition refines the value to [0, 0]. However, reducing the program under φq in (1) leads to the following program:
The predicate φq in (1) holds for this program. The analysis can prove the assertion at line 4 with flow-sensitivity,
because in that case, it computes the interval [0, 289] for
yychar at line 4. But it cannot prove the assertion with flow-insensitivity, because it computes the interval [−∞, +∞] for
yychar that holds over the entire program.
Reducing the program under the predicate φq may produce the following program:
pos=0; assert(pos==0); pos++;
This reduced program no longer captures the importance of refining an abstract value with the condition !pos. Demanding
the preservation of the analysis result does not help, because
the value of pos is also [0, 0] in the reduced program.
We fight against this undesired behavior of the reducer by
approximating commands of a program P before passing it
to the reducer. Specifically, for every positive query (P, q)
and for each command in P that initializes a variable with
a constant value, we replace the constant by Top (an expression that denotes the largest abstract value ⊤) as long as this
replacement does not make the query negative. For instance,
we transform our example to the following program:
yychar=1; assert(0<=yychar<290); yychar++;
It is proved by flow-sensitivity but not by flow-insensitivity.
An ordinary flow-insensitive interval analysis computes the
interval [1, +∞] because of the increment of yychar at the
end. Thus the resulting program still satisfies φq . However,
this reduced program does not contain the genuine reason
that the original program needed flow-sensitivity: in the original program, the if commands at lines 2 and 3 are analyzed
accurately only under flow-sensitivity, and the accurate analysis of these commands is crucial for proving the assertion.
To mitigate the problem, we run the reducer with a
stronger predicate that additionally requires the preservation of analysis result. In the flow-sensitive analysis of our
original program, the variable yychar has the interval value
[0, 289] at the assertion. In the above reduced program, on
the other hand, it has the value [1, 1]. The strengthened predicate φ′q in this example is:
pos = Top; // 0 is replaced by Top
while (1) { if (!pos) assert(pos==0); pos++; }
Note that pos = 0 is replaced by pos = Top. Then, we
apply the reducer to this transformed program, and obtain:
pos = Top; if (!pos) assert(pos==0);
Note that the reduced program keeps the condition !pos; because of the change in the initialization of pos, the analysis
cannot prove the assertion without using the condition !pos
in the original program.
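This transformation can be sketched as a loop over constant initializations (the program encoding and is_positive are hypothetical simplifications of the analyzer interface):

```python
# Sketch of Improvement 2: for each constant initialization, replace
# the constant by Top and keep the change only if the query remains
# positive (FS-provable but FI-unprovable).

def approximate_inits(lines, is_positive):
    lines = list(lines)
    for i, line in enumerate(lines):
        if ":= const" in line:  # a variable initialized with a constant
            candidate = lines[:i] + [line.replace("const", "Top")] + lines[i + 1:]
            if is_positive(candidate):
                lines = candidate   # keep the over-approximation
    return lines

prog = ["pos := const",
        "while (1) { if (!pos) assert(pos == 0); pos++; }"]
is_positive = lambda ls: any("assert" in l for l in ls)  # toy check
result = approximate_inits(prog, is_positive)
assert result[0] == "pos := Top"
```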
φ′q (P ) = (φq (P ) ∧ value of yychar at q is [0, 289]). (2)
1  pos = 0;
2  while (1) { if (!pos) assert(pos==0); pos++; }
Running the reducer with this new predicate results in:
yychar = input();
if (yychar < 0) exit(1);
if (yychar <= 289) assert(0 <= yychar < 290);
5.2 Transformation to Abstract Data-Flow Graphs
The irrelevant commands (yychar = 1, yychar++) in the
original program are removed by the reducer, but the important if commands at lines 2 and 3 remain in the reduced
program. Without these if commands, it is impossible to
satisfy the new condition (2), so that the reducer has to preserve them in the final outcome. This idea of preserving the
analysis result during reduction was essential to generate diverse feature programs. Also, it can be applied to any program analysis easily.
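The strengthened predicate can be sketched as a conjunction of φq with a value-preservation check (F, value_at, and the programs below are toy stand-ins for the analysis interface):

```python
# Sketch of Improvement 1: strengthen phi_q so that a reduction is
# accepted only if the abstract value observed at the query matches
# the value in the original program.

def phi(P, q, F):
    return q not in F(P, 0) and q in F(P, 1)

def phi_strong(P, q, F, value_at, target):
    return phi(P, q, F) and value_at(P, q) == target

# Toy analysis: the query is always FS-provable and FI-unprovable,
# but only the original program yields the value [0, 289] at q.
F = lambda P, a: {"q"} if a == 1 else set()
value_at = lambda P, q: (0, 289) if P == "orig" else (1, 1)
target = value_at("orig", "q")
assert phi_strong("orig", "q", F, value_at, target)
assert not phi_strong("reduced", "q", F, value_at, target)
```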
Our next procedure is build dfg(Pfeat , R̂), which converts
feature programs in Pfeat to their data-flow graphs where
nodes are labeled with the abstraction of atomic commands
in those programs. We call such graphs abstract data-flow
graphs. These graphs act as what people call features in the
applications of machine learning. The additional parameter
R̂ to the procedure controls the degree of abstraction of the
atomic commands in these graphs. A method for finding an
appropriate parameter R̂ will be presented in Section 5.4.
Improvement 2: Approximating Variable Initialization
Another way for guiding a reducer is to replace commands
in a program by their overapproximations and to call the
reducer on the approximated program. The rationale is that
approximating commands would prevent the reducer from
accidentally identifying a reason that is too specific to a
given query and does not generalize well. Approximation
would help remove such reasons, so that the reducer is more
likely to find general reasons.
Consider the following code snippet (from spell-1.0):
Step 1: Building Data-Flow Graphs The first step of
build dfg(Pfeat , R̂) is to build data-flow graphs for feature
programs in Pfeat and to slice these graphs with respect to
queries in those programs.
The build dfg procedure constructs and slices such data-flow graphs using standard recipes. Assume a feature program P ∈ Pfeat represented by a control-flow graph (C, ↪), where C is the set of program points annotated with atomic commands and (↪) ⊆ C × C is the control-flow relation between those program points. The data-flow graph
6
2017/1/2
R1 : c → lv := e | lv := alloc(e) | assume(e1 ≺ e2)
R2 : e → c | e1 ⊕ e2 | lv | &lv,    lv → x | ∗e | e1[e2]
R3 : ⊕ → + | − | ∗ | / | << | >>,   ≺ → < | ≤ | > | ≥ | = | ≠
R4 : c → 0 | 1 | 2 | · · ·,         x → x | y | z | · · ·
rules R̂ backwards to parse trees until none of the rules
becomes applicable. For example, the expression x + 1 is
abstracted into x ⊕ c as follows:
x + 1   ⇒   x + 1   ⇒   x ⊕ 1   ⇒   x ⊕ c
We first apply the rule x → x backwards to the parse tree
(leftmost) and collapse the leaf node x with its parent. Next,
we apply ⊕ → + where + is collapsed to ⊕. Finally, we
apply the rule c → 1, getting the rightmost tree. The result
is read off from the last tree, and is the abstract expression
x ⊕ c. Similarly, y − 2 gets abstracted to x ⊕ c.
Figure 2. The set R of grammar rules for C-like languages
for P reuses the node set C of the control-flow graph, but
it uses a new arc relation ❀: c ❀ c′ iff there is a def-use chain in P from c to c′ on a memory location or variable l (that is, c ↪+ c′, l is defined at c, l is used at c′, and l is not re-defined at the intermediate program points between c and c′). For each query q in the program P, its slice (Cq, ❀q) is just the restriction of the data-flow graph (C, ❀) with respect to the nodes that may reach the query (i.e., Cq = {c ∈ C | c ❀* cq}).
For each data-flow slice computed in the first step, our
build dfg(Pfeat , R̂) procedure applies the αR̂ function to the
atomic commands in the slice. Then, it merges nodes in the
slice to a single node if they have the same label (i.e., the
same abstract atomic command). The nodes after merging
inherit the arcs from the original slice. We call the resulting
graphs abstract data-flow graphs. These graphs describe
(syntactic and semantic) program properties, such as the
ones used in [40]. For example, the abstract data-flow graph
(x < c) ❀ (x := alloc(x)) says that a program variable is
compared with a constant expression before being used as an
argument of a memory allocator (which corresponds to the
features #9 and #11 for selective flow-sensitivity in [40]).
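The node-merging step can be sketched as follows (a simplification: nodes are keyed by their abstract labels, and arcs between originals that collapse to the same label are dropped):

```python
# Sketch of collapsing slice nodes that share the same abstract
# command label; the merged nodes inherit the arcs of the originals.

def collapse(labels, arcs):
    # labels: node -> abstract command; arcs: set of (node, node)
    merged_arcs = {(labels[a], labels[b]) for (a, b) in arcs
                   if labels[a] != labels[b]}
    return set(labels.values()), merged_arcs

# Two distinct conditions (a < 3, a < 5) abstract to the same x < c
# and therefore collapse into one node, as in the Figure 1 example.
labels = {1: "x := c", 2: "x < c", 3: "x < c", 4: "Q(x < c)"}
arcs = {(1, 2), (1, 3), (2, 4), (3, 4)}
nodes, merged = collapse(labels, arcs)
assert nodes == {"x := c", "x < c", "Q(x < c)"}
assert merged == {("x := c", "x < c"), ("x < c", "Q(x < c)")}
```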
We write {π1 , . . . , πk } for the abstract data-flow graphs
generated by build dfg(Pfeat , R̂). We sometimes call πi a feature, especially when we want to emphasize its role in our
application of machine learning techniques.
We point out that the performance of a data-driven analysis in our approach depends on the choice of the parameter R̂ to the build dfg(Pfeat , R̂) procedure. For example,
the Octagon analysis can track certain binary operators such
as addition precisely but not other binary operators such as
multiplication and shift. Thus, in this case, we need to use
R̂ that at least differentiates these two kinds of operators. In
Section 5.4, we describe a method for automatically choosing R̂ from data via iterative cross validation.
Step 2: Abstracting Atomic Commands The second step
of build dfg(Pfeat , R̂) is to abstract atomic commands in
the data-flow graphs obtained in the first step and to collapse nodes labeled with the same abstract command. This
abstraction is directed by the parameter R̂, and forms the
most interesting part of the build dfg procedure.
Our abstraction works on the grammar for the atomic
commands shown in Figure 2. The command lv := e assigns
the value of e into the location of lv, and lv := alloc(e) allocates an array of size e. The assume command assume(e1 ≺
e2 ) allows the program to continue only when the condition
evaluates to true. An expression may be a constant integer
(c), a binary operator (e1 ⊕ e2 ), an l-value expression (lv), or
an address-of expression (&lv). An l-value may be a variable
(x), a pointer dereference (∗e), or an array access (e1 [e2 ]).
Let R be the set of grammar rules in Figure 2. The second parameter R̂ of build dfg(Pfeat , R̂) is a subset of R. It
specifies how each atomic command should be abstracted.
Intuitively, each rule in R̂ says that if a part of an atomic
command matches the RHS of the rule, it should be represented abstractly by the nonterminal symbol in the LHS
of the rule. For example, when R̂ = {⊕ → + | −}, both
x = y + 1 and x = y − 1 are represented abstractly by the
same x = y⊕1, where + and − are replaced by ⊕. Formally,
build dfg(Pfeat , R̂) transforms the parse tree of each atomic
command in Pfeat by repeatedly applying the grammar rules
in R̂ backwards to the tree until a fixed point is reached. This
transformation loses information about the original atomic
command, such as the name of a binary operator. We denote
it by a function αR̂ . The following example illustrates this
transformation using a simplified version of our grammar.
5.3 Abstract Data-flow Graphs and Queries
Abstract data-flow graphs encode properties about queries.
These properties are checked by our matchR̂ procedure parameterized by R̂. The procedure takes an abstract data-flow graph π, a query q, and a program P that contains the query. Given such inputs, it works in four steps. First, matchR̂ (π, q, P ) normalizes P syntactically so that some syntactically different yet semantically equivalent programs become identical. Specifically, the procedure eliminates temporary variables (e.g., converts tmp = b + 1; a = tmp; to a = b + 1), removes double negations (e.g., converts assume (!(!(x==1))) to assume (x==1)), and makes
Example 1. Consider the grammar: R = {e → x | c |
e1 ⊕ e2 , x → x | y, c → 1 | 2, ⊕ → + | −}. Let R̂ =
{x → x | y, c → 1 | 2, ⊕ → + | −}. Intuitively, R̂ specifies
that we should abstract variables, constants, and operators
in atomic commands and expressions by nonterminals x, c,
and ⊕, respectively. The abstraction is done by applying
7
2017/1/2
of our approach depends on a set R̂ of grammar rules, which
determines how much atomic commands get abstracted. In
each iteration of the loop, the algorithm chooses R̂ ⊆ R according to the strategy that we will explain shortly, and calls
build dfg(Pfeat , R̂) to generate a new candidate set of features Π. Then, using this candidate set, the algorithm invokes
an off-the-shelf learning algorithm (line 7) for learning an
analysis heuristic HC from the training data Ptr ; the subscript C denotes a classifier built by the learning algorithm.
The quality of the learned heuristic is evaluated on the validation set Pva (line 8) by computing the F1 -score2 of C.
If this evaluation gives a better score than the current best
sbest , the set Π becomes a new current best Πbest (lines 9–
10). To save computation, before running our algorithm, we
run the static analysis F for all programs in the codebases P
with highest precision 1 and again with lowest precision 0,
and record the results as labels for all queries in P. This preprocessing lets us avoid calling F in learn and evaluate. As
a result, after feature programs Pfeat are computed at line 2,
building data-flow graphs and learning/evaluating the heuristic do not invoke the static analysis, so that each iteration of
the loop in Algorithm 1 runs fast.
Our algorithm chooses a subset R̂ ⊆ R of grammar rules
using a greedy bottom-up search. It partitions the grammar
rules in Figure 2 into four groups R = R1 ⊎ R2 ⊎ R3 ⊎ R4
such that R1 contains the rules for the nonterminal c for
commands, R2 those for the nonterminals e, lv for expressions, R3 the rules for the nonterminals ⊕, ≺ for operators,
and R4 those for the remaining nonterminals x, c for variables and constants. These sets form a hierarchy with Ri
above Ri+1 for i ∈ {1, 2, 3} in the following sense: for a
typical derivation tree of the grammar, an instance of a rule
in Ri usually appears nearer to the root of the tree than that
of a rule in Ri+1 . Algorithm 1 begins by choosing a subset of R3 randomly and setting the current rule set R̂ to
the union of this subset and R4 . Including the rules in R4
has the effect of making the generated features (i.e., abstract
data-flow graphs) forget variable names and constants that
are specific to programs in the training set, so that they generalize well across different programs. This random choice
is repeated for a fixed number of times (without choosing
previously chosen abstractions), and the best R̂3 in terms of
its score s is recorded. Then, Algorithm 1 similarly tries different randomly-chosen subsets of R2 but this time using the
best R̂3 found, instead of R4 , as the set of default rules to include. The best choice R̂2 is again recorded. Repeating this
process with R1 and the best R̂2 gives the final result R̂1 ,
which leads to the result of Algorithm 1.
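The hierarchical search can be sketched as follows (a simplified version of ours: `score` stands in for the learn-then-evaluate step returning an F1-score, and unlike the actual algorithm we do not deduplicate previously tried subsets):

```python
import random

# Sketch (ours) of the greedy bottom-up search over the rule hierarchy
# R1 above R2 above R3 above R4. R4 is always included so that generated
# features forget program-specific variable names and constants.
def greedy_rule_search(R1, R2, R3, R4, score, tries=10, seed=0):
    rng = random.Random(seed)

    def best_subset(pool, default):
        """Try random subsets of pool (always unioned with default)."""
        best, best_s = default, score(default)
        for _ in range(tries):
            cand = {r for r in pool if rng.random() < 0.5} | default
            s = score(cand)
            if s > best_s:
                best, best_s = cand, s
        return best

    best3 = best_subset(R3, set(R4))   # choose operator rules, keep R4
    best2 = best_subset(R2, best3)     # then expression rules
    return best_subset(R1, best2)      # finally command rules: R-hat
```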
Algorithm 1 Automatic Feature Generation
Input: codebases P, static analysis F , grammar rules R
Output: a set of features Π
 1: partition P into Ptr and Pva          ⊲ training/validation sets
 2: Pfeat ← gen fp(Ptr , F )              ⊲ generate feature programs
 3: sbest , Πbest ← −1, ∅
 4: repeat
 5:     R̂ ← choose a subset of R (i.e., R̂ ⊆ R)
 6:     Π ← build dfg(Pfeat , R̂)          ⊲ build data-flow graphs
 7:     HC ← learn(F, Π, Ptr )
 8:     s ← evaluate(F, C, Pva )          ⊲ evaluate F1 -score of C
 9:     if s > sbest then
10:         sbest , Πbest ← s, Π
11:     end if
12: until timeout
13: return Πbest
explicit conditional expressions (e.g., convert assume(x)
to assume(x!=0)). Second, matchR̂ (π, q, P ) constructs a
data-flow graph of P , and computes the slice of the graph
that may reach q. Third, it builds an abstract data-flow graph
from this slice. That is, it abstracts the atomic commands in
the slice, merges nodes in the slice that are labeled with the
same (abstract) atomic command, and induces arcs between
nodes after merging in the standard way. Let (Nq , ❀q ) be
the resulting abstract data-flow graph, and (N0 , ❀0 ) the
node and arc sets of π. In both cases, nodes are identified
with their labels, so that Nq and N0 are the sets of (abstract)
atomic commands. Finally, matchR̂ (π, q, P ) returns 0 or
1 according to the criterion: matchR̂ (π, q, P ) = 1 ⇐⇒
N0 ⊆ Nq ∧ (❀0 ) ⊆ (❀∗q ). The criterion means that the
checking of our procedure succeeds if all the atomic commands in N0 appear in Nq and their dependencies encoded
in ❀0 are respected by the transitive dependencies ❀∗q in
the query. Taking the transitive closure (❀∗q ) here is important. It enables matchR̂ (π, q, P ) to detect whether the programming pattern encoded in π appears somewhere in the
program slice for q, even when the slice contains commands
not related to the pattern.
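The final check can be sketched directly from the criterion (our own illustration; a naive transitive closure suffices for the small pattern and query graphs involved):

```python
# Sketch (ours) of the final check of match_R-hat: every pattern node must
# occur in the query's abstract graph, and every pattern arc must be realized
# by a transitive dependency (~>*_q) in the query's graph.
def match(pattern_nodes, pattern_arcs, query_nodes, query_arcs):
    closure = set(query_arcs)            # transitive closure of ~>_q
    changed = True
    while changed:
        changed = False
        for (a, b) in list(closure):
            for (c, d) in list(closure):
                if b == c and (a, d) not in closure:
                    closure.add((a, d))
                    changed = True
    return int(set(pattern_nodes) <= set(query_nodes)
               and set(pattern_arcs) <= closure)
```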
5.4 Final Algorithm
Algorithm 1 shows the final algorithm for feature generation.
It takes codebases P = {P1 , . . . , Pn }, a static analysis F
(Section 3), and a set R of grammar rules for the target
programming language. Then, it returns the set Π of features.
The algorithm begins by splitting the codebases P into
a training set Ptr = {P1 , . . . , Pr } and a validation set
Pva = {Pr+1 , . . . , Pn } (line 1). In our experiments, we set
r to the nearest integer to 0.7n. Then, the algorithm calls
gen fp with Ptr and the static analysis, so as to generate
feature programs. Next, it initializes the score sbest to −1,
and the set of features Πbest to the empty set. At lines 4–
12, the algorithm repeatedly improves Πbest until it hits the
limit of the given time budget. Recall that the performance
6. Instance Analyses
We have applied our feature-generation algorithm to three
parametric program analyses: partially flow-sensitive inter-
2 F1-score = 2 · precision · recall/(precision + recall).
eter a ∈ A = {0, 1}J consists of pairs of program variables. Intuitively, a specifies which two variables should be
tracked together by the analysis. Given such a, the analysis
defines the smallest partition Γ of variables such that every
(x, y) ∈ a is in the same block of Γ. Then, it defines a grouped Octagon domain OΓ = ∏γ∈Γ Oγ , where Oγ is the usual Octagon domain for the variables in the block γ.
The abstract domain of the analysis is C → OΓ , the collection of maps from program points to grouped Octagons.
The analysis performs fixed-point computation on this domain using adjusted transfer functions of the standard Octagon analysis. The details can be found in [22].
We have to adjust the learning engine and our featuregeneration algorithm for this partial Octagon slightly. This
is because the full Octagon analysis on a program P (that is,
F (P, 1)) does not work usually when the size of P is large
(≥20KLOC in our experiments). Whenever the learning part
in Section 4 and our feature-generation algorithm have to run
F (P, 1) originally, we run the impact pre-analysis in [22]
instead. This pre-analysis is a fully relational analysis that
works on a simpler abstract domain than the full Octagon,
and estimates the behavior of the full Octagon; it defines a
function F ♯ : P → ℘(Q), which takes a program and returns
a set of queries in the program that are likely to be verified
by the full Octagon. Formally, we replaced the predicate φq in Section 5.1 by φq (P ) = (q ∉ F (P, 0) ∧ q ∈ F ♯ (P )).
val and pointer analyses, and a partial Octagon analysis.
These analyses are designed following the data-driven approach of Section 4, so they are equipped with engines for
learning analysis heuristics from given codebases. Our algorithm generates features required by these learning engines.
In this section, we describe the instance analyses. In these
analyses, a program is given by its control-flow graph (C, ֒→
), where each program point c ∈ C is associated with an
atomic command in Figure 2. We assume heap abstraction
based on allocation sites and the existence of a variable
for each site in a program. This lets us treat dynamically
allocated memory cells simply as variables.
Two Partially Flow-sensitive Analyses We use partially
flow-sensitive interval and pointer analyses that are designed
according to the recipe in [40]. These analyses perform the
sparse analysis [8, 13, 32, 39] in the sense that they work
on data-flow graphs. Their flow-sensitivity is controlled by a
chosen set of program variables; only the variables in the set
are analyzed flow-sensitively. In terms of the terminologies
of Section 3, the set of program components J is the set of variables Var, and an analysis parameter a ∈ A = {0, 1}J specifies a subset of Var.
Both interval and pointer analyses define functions F : P×
A → ℘(Q) that take a program and a set of variables and return proved queries in the program. They compute mappings
D ∈ D = C → S from program points to abstract states,
where an abstract state s ∈ S itself is a map from program
variables to values, i.e., S = Var → V. In the interval analysis, V consists of intervals, and in the pointer analysis, V
consists of sets of the addresses of program variables.
Given an analysis parameter a, the analyses compute the
mappings D ∈ D as follows. First, they construct a data-flow graph for the variables in a. For each program point c ∈ C,
let D(c) ⊆ Var and U(c) ⊆ Var be the definition and use
sets. Using these sets, the analyses construct a data-flow
relation (❀a ) ⊆ C × Var × C: c0 ❀xa cn holds if there
exists a path [c0 , c1 , . . . , cn ] in the control-flow graph such
that x is defined at c0 (i.e., x ∈ D(c0 )) and used at cn (i.e.,
x ∈ U(cn )), but it is not re-defined at any of the intermediate
points ci , and the variable x is included in the parameter
a. Second, the analyses perform flow-insensitive analyses
on the given program, and store the results in sI ∈ S.
Finally, they compute fixed points of the function Fa (D) =
λc.fc (s′ ) where fc is a transfer function at a program point
c, and the abstract state s′ is the following combination of
D and sI at c: s′ (x) = sI (x) for x ∉ a, and s′ (x) = ⊔{D(c0 )(x) | c0 ❀xa c} for x ∈ a. Note that Fa treats variables not in a flow-insensitively by using sI . When a =
Var, the analyses become ordinary flow-sensitive analyses,
and when a = ∅, they are just flow-insensitive analyses.
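The combination of sI and D can be sketched as follows (our own illustration; `join`, `bottom`, and `defuse_preds` abstract the domain's join, its least element, and the c0 ❀xa c relation, all assumptions of this sketch):

```python
# Sketch (ours) of the combined state s' used by the parameterized analysis:
# variables outside the parameter `a` read the flow-insensitive result s_I;
# variables inside `a` join the flow-sensitive facts over def-use arcs.
def combined_state(x, c, a, s_I, D, defuse_preds, join, bottom):
    """Value of variable x in the combined state s' at program point c."""
    if x not in a:
        return s_I[x]                 # not selected: flow-insensitive result
    val = bottom
    for c0 in defuse_preds(c, x):     # definition points c0 with c0 ~>^x_a c
        val = join(val, D[c0][x])     # join the flow-sensitive facts
    return val
```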
7. Experiments
We evaluated our feature-generation algorithm with the instance analyses in Section 6. We used the interval and Octagon analyses for proving the absence of buffer overruns, and the pointer analysis for proving the absence of null dereferences.
The three analyses are implemented on top of our analysis framework for the C programming language [38]. The
framework provides a baseline analysis that uses heap abstraction based on allocation sites and array smashing,
is field-sensitive but context-insensitive, and performs the
sparse analysis [8, 13, 32, 39]. We extended this baseline
analysis to implement the three analyses. Our pointer analysis uses Andersen’s algorithm [3]. The Octagon analysis
is implemented by using the OptOctagons and Apron libraries [24, 50]. Our implementation of the feature-generation
algorithm in Section 5 and the learning part in Section 4 is
shared by the three analyses except that the analyses use
slight variants of the req function in Section 4, which converts a query to program components. In all three analyses,
req first computes a dependency slice of a program for a
given query. Then, it collects program variables in the slice
for the interval and pointer analyses, and pairs of all program variables in the slice for the Octagon analysis. The
computation of the dependency slice is approximate in that it
estimates dependency using a flow-insensitive pointer analysis and ignores atomic commands too far away from the
Partial Octagon Analysis We use the partial Octagon analysis formulated in [22]. Let m be the number of variables
in the program, and write Var = {x1 , . . . , xm }. The set of
program components J is Var × Var, so an analysis param-
query when the size of the slice goes beyond a threshold.3
This approximation ensures that the cost of computing req
is significantly lower than that of the main analyses. We
use the same dependency analysis in match. Our implementation uses C-Reduce [46] for generating feature programs
and a decision tree algorithm [42] for learning a classifier for
queries. Our evaluation aims at answering four questions:
• Effectiveness: Does our feature-generation algorithm en-
able the learning of good analysis heuristics?
• Comparison with manually-crafted features: How
does our approach of learning with automatically-generated
features compare with the existing approaches of learning with manually-crafted features?
• Impact of reducing and learning: Does reducing a pro-
gram really help for generating good features? Given a
set of features, does learning lead to a classifier better
than a simple (disjunctive) pattern matcher?
• Generated features: Does our feature-generation algo-
rithm produce informative features?
Effectiveness We compared the performance of our three
instance analyses with their standard counterparts:
• Flow-insensitive(FI I) & -sensitive(FS I) interval analyses
program sets. The average numbers of generated features
over the five trials were 38 (interval), 45 (pointer), and 44
(Octagon). C-Reduce took 0.5–24 minutes to generate a feature program from a query. All experiments were done on an Ubuntu machine with an Intel Xeon CPU (2.4GHz) and 192GB
of memory.
Table 1 shows the performance of the learned heuristics
on the test programs for the interval analysis. The learned
classifier for queries (Section 4) was able to select 75.7%
of FS-provable but FI-unprovable queries on average (i.e.,
75.7% recall) and 74.5% of the selected queries were actually proved under FS only (i.e., 74.5% precision). With
the analysis heuristic on top of this classifier, our partially
flow-sensitive analysis could prove 80.2% of queries that require flow-sensitivity, while increasing the time of the flow-insensitive analysis by 2.0x on average. We got 80.2% (higher than 75.7%) because the analysis parameter is the set of all the
program components for queries selected by the classifier
and this parameter may make the analysis prove queries not
selected by the classifier. The fully flow-sensitive analysis
increased the analysis time by 46.5x. We got similar results
for the other two analyses (Tables 2 and 3).
Comparison with Manually-Crafted Features We compared our approach with those in Oh et al. [40] and Heo
et al. [22], which learn analysis heuristics using manuallycrafted features. The last two columns of Table 1 present the
performance of the partially flow-sensitive interval analysis
in [40], and those of Table 3 the performance of the partial
Octagon analysis in [22].
The five trials in the tables use the splits of training and
test programs in the corresponding entries of Tables 1 and 3.
Our implementation of Oh et al.’s approach used their 45
manually-crafted features, and applied their Bayesian optimization algorithm to our benchmark programs. Their approach requires the choice of a threshold value k, which
determines how many variables should be treated flowsensitively. For each trial and each program in that trial, we
set k to the number of variables selected by our approach,
so that both approaches induce similar overhead in analysis
time. Our implementation of Heo et al.’s approach used their
30 manually-crafted features, and applied their supervised
learning algorithm to our benchmark programs.
The results show that our approach is on a par with the
existing ones, while not requiring the manual feature design.
For the interval analysis, our approach consistently proved
more queries than Oh et al.’s (80.2% vs 55.1% on average).
For Octagon, Heo et al.’s approach proved more queries
than ours (81.1% vs 96.2%). We caution the reader that these
are just end-to-end comparisons and it is difficult to draw
a clear conclusion, as the learning algorithms of the three
approaches are different. However, the overall results show
that using automatically-generated features is competitive with using features crafted manually by analysis designers.
• Flow-insensitive(FI P) & -sensitive(FS P) pointer analyses
• Flow-sensitive interval analysis (FS I ) and partial Oc-
tagon analysis by impact pre-analysis (IMPCT) [37].
We did not include the Octagon analysis [33] in the list because the analysis did not scale to medium-to-large programs
in our benchmark set.
In experiments, we used 60 programs (ranging 0.4–109.6
KLOC) collected from Linux and GNU packages. The programs are shown in Table 4. To evaluate the performance
of learned heuristics for the interval and pointer analyses,
we randomly partitioned the 60 programs into 42 training
programs (for feature generation and learning) and 18 test
programs (for cross validation). For the Octagon analysis,
we used only 25 programs out of 60 because for some programs, Octagon and the interval analysis prove the same
set of queries. We selected these 25 by running the impact
pre-analysis [22] for the Octagon on all the 60 programs
and choosing the ones that may benefit from Octagon according to the results of this pre-analysis. The 25 programs
are shown in Table 5. We randomly partitioned the 25 programs into 17 training programs and 8 test programs. From
the training programs, we generated features and learned a
heuristic based on these features.4 The learned heuristic was
used for analyzing the test programs. We repeated this procedure for five times with different partitions of the whole
3 In our implementation, going beyond a threshold means having more than
200 program variables.
4 We followed the practice used in representation learning [6], where both
feature generation and learning are done with the same dataset.
        Query Prediction     #Proved Queries               Analysis Cost (sec)            Quality          Oh et al. [40]
Trial   Precision  Recall    FI I (a)  FS I (b)  Ours (c)   FI I (d)  FS I      Ours (e)   Prove    Cost    Prove    Cost
1       71.5 %     78.8 %    6,537     7,126     7,019      26.7      569.0     52.0       81.8 %   1.9x    56.6 %   2.0x
2       60.9 %     75.1 %    4,127     4,544     4,487      58.3      654.2     79.9       86.3 %   1.4x    49.2 %   2.4x
3       78.2 %     74.0 %    6,701     7,532     7,337      50.9      6,175.2   167.5      76.5 %   3.3x    51.1 %   3.4x
4       72.9 %     76.1 %    4,399     4,956     4,859      36.9      385.1     44.9       82.6 %   1.2x    54.8 %   1.2x
5       83.2 %     75.3 %    5,676     6,277     6,140      31.7      1,740.3   61.6       77.2 %   1.9x    65.6 %   1.8x
TOTAL   74.5 %     75.7 %    27,440    30,435    29,842     204.9     9,523.9   406.1      80.2 %   2.0x    55.1 %   2.3x

Table 1. Effectiveness of partially flow-sensitive interval analysis. Quality: Prove = (c − a)/(b − a), Cost = e/d
        Query Prediction     #Proved Queries             Analysis Cost (sec)           Quality
Trial   Precision  Recall    FI P     FS P     Ours       FI P     FS P       Ours      Prove    Cost
1       79.1 %     76.8 %    4,399    6,346    6,032      48.3     3,705.0    150.0     83.9 %   3.1x
2       78.3 %     77.1 %    7,029    8,650    8,436      48.9     651.4      74.0      86.8 %   1.5x
3       74.5 %     75.0 %    8,781    10,352   10,000     41.5     707.0      59.4      77.6 %   1.4x
4       73.8 %     75.9 %    10,559   12,914   12,326     51.1     4,107.0    164.3     75.0 %   3.2x
5       78.0 %     82.5 %    4,205    5,705    5,482      23.0     847.2      56.7      85.1 %   2.5x
TOTAL   76.6 %     77.3 %    34,973   43,967   42,276     212.9    10,017.8   504.6     81.2 %   2.4x

Table 2. Effectiveness of partially flow-sensitive pointer analysis
        Query Prediction     #Proved Queries           Analysis Cost (sec)             Quality           Heo et al. [22]
Trial   Precision  Recall    FS I    IMPCT   Ours      FS I      IMPCT      Ours       Prove    Cost     Prove     Cost
1       74.8 %     81.3 %    3,678   3,806   3,789     140.7     389.8      230.5      86.7 %   1.6x     100.0 %   3.0x
2       84.1 %     82.6 %    5,845   6,004   5,977     613.5     18,022.9   782.9      83.0 %   1.3x     94.3 %    1.8x
3       82.8 %     73.0 %    1,926   2,079   2,036     315.2     2,396.9    416.0      71.9 %   1.3x     92.2 %    1.1x
4       77.6 %     85.2 %    2,221   2,335   2,313     72.7      495.1      119.9      80.7 %   1.6x     100.0 %   2.0x
5       71.6 %     78.4 %    2,886   2,962   2,944     148.9     557.2      209.7      76.3 %   1.4x     96.1 %    2.3x
TOTAL   79.0 %     79.9 %    16,556  17,186  17,067    1,291.0   21,861.9   1,759.0    81.1 %   1.4x     96.2 %    1.8x

Table 3. Effectiveness of partial Octagon analysis
Impact of Reducing and Learning In order to see the role
of a reducer in our approach, we generated feature programs
without calling the reducer in our experiment with the interval analysis. These unreduced feature programs were then
converted to abstract data-flow graphs or features, which
enabled the learning of a classifier for queries. The generated features were too specific to training programs, and the
learned classifier did not generalize well to unseen test programs; removing a reducer dropped the average recall of the
classifier from 75.7% to 58.2% for test programs.
In our approach, a feature is a reduced and abstracted program slice that illustrates when high precision of an analysis
is useful for proving a query. Thus, one natural approach is to
use the disjunction of all features as a classifier for queries.
Intuitively, this classifier attempts to pattern-match each of
these features against a given program and a query, and it
returns true if some attempt succeeds. We ran our experiment on the interval analysis with this disjunctive classifier, instead of the original decision tree learned from training programs. This change of the classifier increased the recall from 75.7% to 79.6%, but dropped the precision significantly from 74.5% to 10.4%. The result shows the benefit
of going beyond the simple disjunction of features and using a more sophisticated boolean combination of them (as
encoded by a decision tree). One possible explanation is that
the matching of multiple features suggests the high complexity of a given program, which typically makes the analysis
lose much information even under the high-precision setting.
Generated Features We ranked generated features in our
experiments according to their Gini scores [9] in the learned
decision tree, which measure the importance of these features for prediction. For each instance analysis, we show two
features that rank consistently high in the five trials of experiments. For readability, we present them in terms of their
feature programs, rather than as abstract data-flow graphs.
The top-two feature programs for the interval analysis
and the pointer analysis are:
// Feature Program 1 for Interval
int buf[10];
for (i=0;i<7;i++) { buf[i]=0; /* Query */ }
// Feature Program 2 for Interval
i=255; p=malloc(i);
while (i>0) { *(p+i)=0; /* Query */
i--; }
// Feature Program 1 for Pointer
i=128; p=malloc(i);
if (p==0) return; else *p=0; /* Query */
// Feature Program 2 for Pointer
p=malloc(i); p=&a; *p=0; /* Query */
The feature programs for the interval analysis describe cases
where a consecutive memory region is accessed in a loop
through increasing or decreasing indices and these indices
are bounded by a constant from above or from below because of a loop condition. The programs for the pointer analysis capture cases where the safety of pointer dereference is
guaranteed by null check or preceding strong update. All of
these programs are typical showcases of flow-sensitivity.
The top-two feature programs for the partial Octagon
analysis are:
graphs exist for the target language, which does not hold for,
e.g., JavaScript.
8. Related Work
These feature programs allocate an array of a positive size
and access the array using an index that is related to this size
in a simple linear way. They correspond to our expectation
about when the Octagon analysis is more effective than the
interval analysis.
When converting feature programs to abstract data-flow
graphs, our approach automatically identifies the right abstraction level of commands for each instance analysis (Algorithm 1). In the interval analysis, the abstraction found
by our approach was to merge all the comparison operators (e.g., <, ≤, >, ≥, =) but to differentiate all the arithmetic operators (e.g., +, −, ∗). This is because, in the interval analysis, comparison with constants is generally a good
signal for improving precision regardless of a specific comparison operator used, but the analysis behaves differently
when analyzing different commands involving + or −. With
other abstractions, we obtained inferior performance; for
example, when we differentiate comparison operators and
abstract away arithmetic operators, the recall dropped from 75.7% to 54.5%. In the pointer analysis, the best abstraction was to merge all arithmetic and comparison operators while differentiating the equality operators (=, ≠). In the
Octagon analysis, our approach identified an abstraction that
merges all comparison and binary operators while differentiating addition/subtraction operators from them.
Parametric Static Analysis In the past decade, a large
amount of research in static analysis has been devoted to
developing an effective heuristic for finding a good abstraction. Several effective techniques based on counterexampleguided abstraction refinement (CEGAR) [4, 11, 17, 19, 21,
53, 54] have been developed, which iteratively refine the
abstraction based on the feedback from the previous runs.
Other techniques choose an abstraction using dynamic analyses [20, 35] or pre-analyses [37, 51]. Or they simply use a
good manually-designed heuristic, such as the one for controlling the object sensitivity for Java programs [25]. In all
of these techniques, heuristics for choosing an abstraction
cannot automatically extract information from one group of
programs, and generalize and apply it to another group of
programs. This cross-program generalization in those techniques is, in a sense, done manually by analysis designers.
Recently, researchers have proposed new techniques for
finding effective heuristics automatically rather than manually [10, 18, 22, 40]. In these techniques, heuristics themselves are parameterized by hyperparameters, and an effective hyperparameter is learned from existing codebases by
machine learning algorithms, such as Bayesian optimization
and decision tree learning [10, 22, 40]. In [18], a learned
hyperparameter determines a probabilistic model, which is
then used to guide an abstraction-refinement algorithm.
Our work improves these recent techniques by addressing
the important issue of feature design. The current learningbased techniques assume well-designed features, and leave
the obligation of discharging this nontrivial assumption to
analysis designers [10, 22, 40]. The only exception is [18],
but the technique there applies only to a specific class of
program analyses written in Datalog. In particular, it does
not apply to the analyses with infinite abstract domains,
such as interval and Octagon analyses. Our work provides a
new automatic way of discharging the assumption on feature
design, which can be applicable to a wide class of program
analyses because our approach uses program analysis as
a black box and generates features (i.e., abstracted small
programs) not tied to the internals of the analysis.
Limitations Our current implementation has two limitations. First, because we approximately generate data dependency (in req and match) within a threshold, we cannot apply our approach to instances that require computing
long dependency chains, e.g., context-sensitive analysis. We
found that computing dependency slices of queries beyond
procedure boundaries efficiently with enough precision is
hard to achieve in practice. Second, our method may not
be applicable to program analyses for other programming
languages (e.g., oop, functional), as we assume that a powerful program reducer and a way to efficiently build data-flow
Application of Machine Learning in Program Analysis
Recently machine learning techniques have been applied
for addressing challenging problems in program analysis.
They have been used for generating candidate invariants
from data collected from testing [14, 15, 36, 48, 49], for
discovering intended behaviors of programs (e.g., preconditions of functions, API usages, types, and informative variable names) [1, 5, 16, 26, 27, 30, 34, 41, 44, 47, 55, 56], for
finding semantically similar pieces of code [12], and for synthesizing programs (e.g., code completion and patch generation) [2, 7, 23, 31, 43, 45]. Note that the problems solved by
// Feature Program 1 for Octagon
size=POS_NUM; arr=malloc(size);
arr[size-1]=0; /* Query */
// Feature Program 2 for Octagon
size=POS_NUM; arr=malloc(size);
for(i=0;i<size;i++){ arr[i]=0; /* Query */ }
these applications are different from ours, which is to learn good analysis heuristics from existing codebases and to generate good features that enable such learning.

References
[1] Miltiadis Allamanis, Earl T. Barr, Christian Bird, and Charles
Sutton. Suggesting accurate method and class names. In FSE,
2015.
Feature Learning in Machine Learning Our work can
be seen as a feature learning technique specialized to the
program-analysis application. Automating the feature-design
process has been one of the holy grails in the machine learning community, and a large body of research has been done
under the name of representation learning or feature learning [6]. Deep learning [28] is perhaps the most successful
feature-learning method, which simultaneously learns features and classifiers through multiple layers of representations. It has been recently applied to programming tasks
(e.g. [2]). A natural question is, thus, whether deep learning
can be used to learn program analysis heuristics as well. In
fact, we trained a character-level convolutional network in
Zhang et al. [52] for predicting the need for flow-sensitivity
in interval analysis. We represented each query by the 300
characters around the query in the program text, and used
pairs of character-represented query and its provability as
training data. We tried a variety of settings (varying, e.g.,
#layers, width, #kernels, kernel size, activation functions,
#output units, etc) of the network, but the best performance
we could achieve was 93% of recall with disappointing 27%
of precision on test data. Achieving these numbers was
highly nontrivial, and we could not find intuitive explanation about why a particular setting of the network leads to
better results than others. We think that going beyond 93%
recall and 27% precision in this application is challenging
and requires expertise in deep learning.
[2] Miltiadis Allamanis, Hao Peng, and Charles A. Sutton. A
convolutional attention network for extreme summarization of
source code. In ICML, 2016.
[3] L. O. Andersen. Program Analysis and Specialization for the
C Programming Language. PhD thesis, DIKU, University of
Copenhagen, May 1994. (DIKU report 94/19).
[4] T. Ball and S. Rajamani. The SLAM project: Debugging
system software via static analysis. In POPL, 2002.
[5] Nels E. Beckman and Aditya V. Nori. Probabilistic, modular
and scalable inference of typestate specifications. In PLDI,
pages 211–221, 2011.
[6] Yoshua Bengio, Aaron Courville, and Pascal Vincent. Representation learning: A review and new perspectives. IEEE
Transactions on Pattern Analysis Machine Intelligence, 35(8),
August 2013.
[7] Pavol Bielik, Veselin Raychev, and Martin Vechev. PHOG:
probabilistic model for code. In ICML, 2016.
[8] Sam Blackshear, Bor-Yuh Evan Chang, and Manu Sridharan.
Selective control-flow abstraction via jumping. In OOPSLA,
2015.
[9] Leo Breiman. Random Forests. Machine Learning, 2001.
[10] Sooyoung Cha, Sehun Jeong, and Hakjoo Oh. Learning a
strategy for choosing widening thresholds from a large codebase. In APLAS, 2016.
[11] E. Clarke, O. Grumberg, S. Jha, Y. Lu, and H. Veith.
Counterexample-guided abstraction refinement for symbolic
model checking. JACM, 50(5), 2003.
[12] Yaniv David, Nimrod Partush, and Eran Yahav. Statistical
similarity of binaries. In PLDI, 2016.
9. Conclusion
We have presented an algorithm that mines existing codebases and generates features for a data-driven parametric
static analysis. The generated features enable the learning
of an analysis heuristic from the codebases, which decides
whether each part of a given program should be analyzed
under high precision or under low precision. The key ideas
behind the algorithm are to use abstracted code snippets as
features, and to generate such snippets using a program reducer. We applied the algorithm to partially flow-sensitive
interval and pointer analyses and partial Octagon analysis.
Our experiments with these analyses and 60 programs from
Linux and GNU packages show that the learned heuristics
with automatically-generated features achieve performance
comparable to those with manually-crafted features.
Designing a good set of features is a nontrivial and costly
step in most applications of machine learning techniques.
We hope that our algorithm for automating this feature design for data-driven program analyses or its key ideas help
attack this feature-design problem in the ever-growing applications of machine learning for program analysis, verification, and other programming tasks.
[13] Azadeh Farzan and Zachary Kincaid. Verification of parameterized concurrent programs by modular reasoning about data
and control. In POPL, 2012.
[14] Pranav Garg, Christof Löding, P. Madhusudan, and Daniel
Neider. Ice: A robust framework for learninginvariants. In
CAV, 2014.
[15] Pranav Garg, Daniel Neider, P. Madhusudan, and Dan Roth.
Learning invariants using decision trees and implication counterexamples. In POPL, 2016.
[16] Timon Gehr, Dimitar Dimitrov, and Martin Vechev. Learning
commutativity specifications. In CAV, 2015.
[17] S. Grebenshchikov, A. Gupta, N. Lopes, C. Popeea, and
A. Rybalchenko. HSF(C): A software verifier based on Horn
clauses. In TACAS, 2012.
[18] Radu Grigore and Hongseok Yang. Abstraction refinement
guided by a learnt probabilistic model. In POPL, 2016.
[19] B. Gulavani, S. Chakraborty, A. Nori, and S. Rajamani. Automatically refining abstract interpretations. In TACAS, 2008.
[20] Ashutosh Gupta, Rupak Majumdar, and Andrey Rybalchenko.
From tests to proofs. STTT, 15(4):291–303, 2013.
13
2017/1/2
[21] T. Henzinger, R. Jhala, R. Majumdar, and K. McMillan. Abstractions from proofs. In POPL, 2004.
[41] Saswat Padhi, Rahul Sharma, and Todd Millstein. Data-driven
precondition inference with learned features. In PLDI, 2016.
[22] Kihong Heo, Hakjoo Oh, and Hongseok Yang. Learning
a variable-clustering strategy for Octagon from labeled data
generated by a static analysis. In SAS, 2016.
[42] Fabian Pedregosa, Gaël Varoquaux, Alexandre Gramfort, Vincent Michel, Bertrand Thirion, Olivier Grisel, Mathieu Blondel, Peter Prettenhofer, Ron Weiss, Vincent Dubourg, Jake
Vanderplas, Alexandre Passos, David Cournapeau, Matthieu
Brucher, Matthieu Perrot, and Édouard Duchesnay. Scikitlearn: Machine Learning in Python. The Journal of Machine
Learning Research, 2011.
[23] Abram Hindle, Earl T. Barr, Zhendong Su, Mark Gabel, and
Premkumar Devanbu. On the naturalness of software. In
ICSE, 2012.
[24] B. Jeannet and A. Miné. Apron: A library of numerical
abstract domains for static analysis. In CAV, pages 661–667,
2009.
[43] Veselin Raychev, Pavol Bielik, Martin Vechev, and Andreas
Krause. Learning programs from noisy data. In POPL, 2016.
[25] George Kastrinis and Yannis Smaragdakis. Hybrid contextsensitivity for points-to analysis. In PLDI, 2013.
[44] Veselin Raychev, Martin Vechev, and Andreas Krause. Predicting program properties from ”big code”. In POPL, 2015.
[26] Omer Katz, Ran El-Yaniv, and Eran Yahav. Estimating types
in binaries using predictive modeling. In POPL, 2016.
[45] Veselin Raychev, Martin Vechev, and Eran Yahav. Code completion with statistical language models. In PLDI, 2014.
[27] Sulekha Kulkarni, Ravi Mangal, Xin Zhang, and Mayur Naik.
Accelerating program analyses by cross-program training. In
OOPSLA, 2016.
[46] John Regehr, Yang Chen, Pascal Cuoq, Eric Eide, Chucky
Ellison, and Xuejun Yang. Test-case reduction for C compiler
bugs. In PLDI, 2012.
[28] Yann LeCun, Yoshua Bengio, and Geoffrey Hinton. Deep
learning. Nature, 521(7553):436–444, 2015.
[47] Sriram Sankaranarayanan, Swarat Chaudhuri, Franjo Ivančić,
and Aarti Gupta. Dynamic inference of likely data preconditions over predicates by tree learning. In ISSTA, 2008.
[29] Percy Liang, Omer Tripp, and Mayur Naik. Learning minimal
abstractions. In POPL, 2011.
[48] Rahul Sharma, Saurabh Gupta, Bharath Hariharan, Alex
Aiken, Percy Liang, and Aditya V. Nori. A data driven approach for algebraic loop invariants. In ESOP, 2013.
[30] V. Benjamin Livshits, Aditya V. Nori, Sriram K. Rajamani,
and Anindya Banerjee. Merlin: specification inference for
explicit information flow problems. In PLDI, 2009.
[49] Rahul Sharma, Aditya V. Nori, and Alex Aiken. Interpolants
as classifiers. In CAV, 2012.
[31] Fan Long and Martin Rinard. Automatic patch generation by
learning correct code. In POPL, 2016.
[50] Gagandeep Singh, Markus Püschel, and Martin Vechev. Making numerical program analysis fast. In PLDI, 2015.
[32] Magnus Madsen and Anders Møller. Sparse dataflow analysis
with pointers and reachability. In SAS, 2014.
[51] Yannis Smaragdakis, George Kastrinis, and George Balatsouras. Introspective analysis: Context-sensitivity, across the
board. In PLDI, 2014.
[33] A. Miné. The Octagon Abstract Domain. Higher-Order and
Symbolic Computation, 19(1):31–100, 2006.
[34] Alon Mishne, Sharon Shoham, and Eran Yahav. Typestatebased semantic code search over partial programs. In OOPSLA, pages 997–1016, 2012.
[52] Xiang Zhang, Junbo Zhao, and Yann LeCun. Character-level
convolutional networks for text classification. In NIPS, 2015.
[53] Xin Zhang, Ravi Mangal, Radu Grigore, Mayur Naik, and
Hongseok Yang. On abstraction refinement for program analyses in datalog. In PLDI, 2014.
[35] Mayur Naik, Hongseok Yang, Ghila Castelnuovo, and Mooly
Sagiv. Abstractions from tests. In POPL, 2012.
[36] Aditya V. Nori and Rahul Sharma. Termination proofs from
tests. In FSE, 2013.
[54] Xin Zhang, Mayur Naik, and Hongseok Yang. Finding optimum abstractions in parametric dataflow analysis. In PLDI,
2013.
[37] Hakjoo Oh, , Wonchan Lee, Kihong Heo, Hongseok Yang,
and Kwangkeun Yi. Selective context-sensitivity guided by
impact pre-analysis. In PLDI, 2014.
[55] He Zhu, Aditya V. Nori, and Suresh Jagannathan. Learning
refinement types. In ICFP, 2015.
[38] Hakjoo
Oh,
Kihong
Heo,
Wonchan
Lee,
Woosuk Lee, and Kwangkeun Yi.
Sparrow.
http://ropas.snu.ac.kr/sparrow.
[56] He Zhu, Gustavo Petri, and Suresh Jagannathan. Automatically learning shape specifications. In PLDI, 2016.
[39] Hakjoo Oh, Kihong Heo, Wonchan Lee, Woosuk Lee, and
Kwangkeun Yi. Design and implementation of sparse global
analyses for C-like languages. In PLDI, 2012.
A. Benchmark Programs
[40] Hakjoo Oh, Hongseok Yang, and Kwangkeun Yi. Learning a
strategy for adapting a program analysis via Bayesian optimisation. In OOPSLA, 2015.
Table 4 and 5 show the benchmark programs for the partially
flow-sensitive interval and pointer analyses, and the partial
Octagon analysis, respectively.
Programs                 LOC
brutefir-1.0f            398
consol calculator        1,124
dtmfdial-0.2+1           1,440
id3-0.15                 1,652
polymorph-0.4.0          1,764
unhtml-2.3.9             2,057
spell-1.0                2,284
mp3rename-0.6            2,466
mp3wrap-0.5              2,752
ncompress-4.2.4          2,840
pgdbf-0.5.0              3,135
mcf-spec2000             3,407
acpi-1.4                 3,814
unsort-1.1.2             4,290
checkmp3-1.98            4,450
cam-1.05                 5,459
bottlerocket-0.05b3      5,509
129.compress             6,078
e2ps-4.34                6,222
httptunnel-3.3           7,472
mpegdemux-0.1.3          7,783
barcode-0.96             7,901
stripcc-0.2.0            8,914
xfpt-0.07                9,089
man-1.5h1                11,059
cjet-0.8.9               11,287
admesh-0.95              11,439
hspell-1.0               11,520
juke-0.7                 12,518
gzip-spec2000            12,980
mpage-2.5.6              14,827
bc-1.06                  16,528
ample-0.5.7              17,098
irmp3-ncurses-0.5.3.1    17,195
tnef-1.4.6               18,172
ecasound2.2-2.7.0        18,236
gzip-1.2.4a              18,364
unrtf-0.19.3             19,019
jwhois-3.0.1             19,375
archimedes               19,552
aewan-1.0.01             28,667
tar-1.13                 30,154
normalize-audio-0.7.7    30,984
less-382                 31,623
tmndec-3.2.0             31,890
gbsplay-0.0.91           34,002
flake-0.11               35,951
enscript-1.6.5           38,787
twolame-0.3.12           48,223
mp3c-0.29                52,620
bison-2.4                59,955
tree-puzzle-5.2          62,302
icecast-server-1.3.12    68,571
dico-2.0                 69,308
aalib-1.4p5              73,413
pies-1.2                 84,649
rnv-1.7.10               93,858
mpg123-1.12.1            101,701
raptor-1.4.21            109,053
lsh-2.0.4                109,617

Table 4. 60 benchmark programs for our partially flow-sensitive interval and pointer analyses.
Programs                 LOC
brutefir-1.0f            398
consol calculator        1,124
dtmfdial-0.2+1           1,440
id3-0.15                 1,652
spell-1.0                2,284
mp3rename-0.6            2,466
e2ps-4.34                6,222
httptunnel-3.3           7,472
mpegdemux-0.1.3          7,783
barcode-0.96             7,901
juke-0.7                 12,518
bc-1.06                  16,528
irmp3-ncurses-0.5.3.1    17,195
ecasound2.2-2.7.0        18,236
unrtf-0.19.3             19,019
jwhois-3.0.1             19,375
less-382                 31,623
flake-0.11               35,951
mp3c-0.29                52,620
bison-2.4                59,955
icecast-server-1.3.12    68,571
dico-2.0                 69,308
pies-1.2                 84,649
raptor-1.4.21            109,053
lsh-2.0.4                109,617

Table 5. 25 benchmark programs for our partial Octagon analysis.
Communication Using Eigenvalues of Higher
Multiplicity of the Nonlinear Fourier Transform
arXiv:1802.07456v2 [] 26 Feb 2018
Javier García
Institute for Communications Engineering
Technical University of Munich
[email protected]
(This work has been submitted to the IEEE/OSA Journal of Lightwave Technology for possible publication.
Copyright may be transferred without notice, after which this version may no longer be accessible.)
Abstract—Eigenvalues of higher multiplicity of the Nonlinear
Fourier Transform (NFT) are considered for information transmission over fiber optic channels. The effects of phase, time or
frequency shifts on this generalized NFT are derived, as well
as an expression for the signal energy. These relations are used
to design transmit signals and numerical algorithms to compute
the direct and inverse NFTs, and to numerically demonstrate
communication using a soliton with one double eigenvalue.
Index Terms—Inverse Scattering Transform, Nonlinear
Fourier Transform, optical fiber, higher multiplicity eigenvalues,
spectral efficiency
I. INTRODUCTION
Current optical transmission systems exhibit a peak in the
achievable rate due to the Kerr nonlinearity of the Nonlinear
Schrödinger Equation (NLSE) [1]. Several techniques have
been proposed to attempt to overcome this limit, of which
the Inverse Scattering Transform (IST) [2], or the Nonlinear
Fourier Transform (NFT) [3], has attracted considerable attention. Numerous algorithms have been developed to compute
the direct [4], [5] and inverse [5], [6], [7] NFT.
Information transmission using the NFT has been demonstrated both numerically and experimentally in several works,
such as [8], [9], [10]. For purely discrete spectrum modulation, the spectral efficiencies obtained so far are not very
high [11]. In this paper, eigenvalues of higher multiplicity in
the discrete spectrum are considered for communication. The
theory for these eigenvalues has been developed in [12], [13],
but its applications to communications have to the best of our
knowledge not been explored yet.
The paper is organized as follows. In Section II, we introduce the NLSE model. Section III briefly describes the NFT.
In Section IV, we explain the theory of higher multiplicity
eigenvalues from [12], [13], and we prove some properties
of this generalized NFT. In Section V, we show how to
compute the direct and inverse NFT with multiple eigenvalues.
Section VI numerically demonstrates information transmission
using a double eigenvalue, and Section VII concludes the
paper.
Date of current version February 27, 2018. J. García was supported by the German Research Foundation under Grant KR 3517/8-1.
II. SYSTEM MODEL
Consider the slowly varying component Q(Z, T ) of an
electrical field propagating along an optical fiber, where Z
is distance and T is time. The field obeys the NLSE, which
is expressed as [14, Eq. (2.3.46)]:
$$\frac{\partial}{\partial Z} Q(Z,T) = -j\frac{\beta_2}{2}\frac{\partial^2}{\partial T^2} Q(Z,T) + j\gamma |Q(Z,T)|^2 Q(Z,T) + N(Z,T) \qquad (1)$$
where β2 is the group velocity dispersion (GVD) parameter, and γ is the nonlinear coefficient. We neglect attenuation in (1) because we assume that fiber loss is exactly compensated by distributed Raman amplification. The noise term N(Z, T) is the formal derivative of a band-limited Wiener process W(Z, T), i.e., we have
$$\int_0^Z N(Z',T)\, dZ' = \sqrt{N_{\mathrm{ase}}}\, W(Z,T) \qquad (2)$$
where $N_{\mathrm{ase}} = \alpha h \nu_s K_T$ is the distributed noise spectral density, α is the attenuation coefficient, $h \approx 6.626 \cdot 10^{-34}\,\mathrm{m^2\,kg\,s^{-1}}$ is Planck's constant, νs is the signal center frequency, and KT is the phonon occupancy factor, which is approximately 1.13 for Raman amplification [1]. Note that, unlike [1], we do not include the distance in the definition of Nase. The Wiener process W(Z, T) may be defined as
$$W(Z,T) = \lim_{K\to\infty} \frac{1}{\sqrt{K}} \sum_{k=1}^{\lfloor KZ \rfloor} W_k(T) \qquad (3)$$
where the Wk(T) are independent and identically distributed (i.i.d.) circularly symmetric complex Gaussian processes with zero mean, bandwidth B, and autocorrelation
$$E\left[W_k(T) W_k^*(T')\right] = B\, \mathrm{sinc}\!\left(B(T-T')\right) \qquad (4)$$
where sinc(x) ≜ sin(πx)/(πx).
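The sinc autocorrelation in (4) can be checked with a concrete construction. The sketch below is our own illustration (not from the paper): a process with autocorrelation B sinc(B(T − T')) is obtained by sinc-interpolating i.i.d. CN(0, 1) taps taken at rate B, since the taps' sinc pulses are orthonormal.

```python
# Sketch (our construction, not the paper's): build one realization of W_k as
#   W_k(T) = sum_m Z_m * sqrt(B) * sinc(B*T - m),  Z_m ~ i.i.d. CN(0,1),
# which has E[W_k(T) W_k*(T')] = B * sum_m sinc(B*T - m) * sinc(B*T' - m)
#                              = B * sinc(B*(T - T'))   (Eq. (4)).
import numpy as np

def make_Wk(B, n_taps=2001, seed=None):
    """Return a callable sample path of W_k (truncated sinc interpolation)."""
    rng = np.random.default_rng(seed)
    m = np.arange(n_taps) - n_taps // 2
    z = (rng.standard_normal(n_taps) + 1j * rng.standard_normal(n_taps)) / np.sqrt(2)
    return lambda T: np.sqrt(B) * np.sum(z * np.sinc(B * T - m))

def autocorr_from_taps(B, T, Tp, n_taps=2001):
    """Deterministic autocorrelation implied by the truncated construction."""
    m = np.arange(n_taps) - n_taps // 2
    return B * np.sum(np.sinc(B * T - m) * np.sinc(B * Tp - m))
```

The truncation to `n_taps` terms introduces an O(1/n_taps) error in the tails, which is why the check below uses a loose tolerance.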
III. THE NONLINEAR FOURIER TRANSFORM
In this section, we briefly introduce the steps involved in
the NFT. For more detail, the reader is referred to [3].
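The end product of this section is a multiplicative channel description: in the NFT domain, propagation only multiplies each spectral quantity by a deterministic factor (Eq. (7), with the explicit evolution rules given later in this section). As a toy preview (our own illustration, assuming the simple-eigenvalue evolution rule of this section), equalization then amounts to one complex rotation per spectral amplitude:

```python
# Minimal sketch of the multiplicative NFT channel: a simple eigenvalue's
# spectral amplitude rotates by exp(4j*lam^2*z), so a receiver that knows z
# can equalize by multiplying with the inverse factor.
import cmath

def propagate_Qd(Qd0, lam, z):
    """Qd(z, lam) = Qd(0, lam) * exp(4j*lam^2*z) for a simple eigenvalue."""
    return Qd0 * cmath.exp(4j * lam * lam * z)

def equalize_Qd(Qdz, lam, z):
    """Undo the deterministic channel factor at the receiver."""
    return Qdz * cmath.exp(-4j * lam * lam * z)
```

For a purely imaginary eigenvalue the factor is a pure phase, so |Qd| is preserved; for a general complex eigenvalue the factor also scales the magnitude, which the equalizer inverts exactly.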
By applying the following change of variables:
$$T = T_0 t, \qquad Z = \frac{2T_0^2}{|\beta_2|}\, z, \qquad Q(Z,T) = \frac{1}{T_0}\sqrt{\frac{|\beta_2|}{\gamma}}\, q(z,t),$$
$$E\left[N(Z,T)N^*(Z',T')\right] = \frac{\beta_2^2}{2\gamma T_0^3}\, E\left[n(z,t)n^*(z',t')\right] \qquad (5)$$
the NLSE (1) is normalized to
$$\frac{\partial}{\partial z} q(z,t) = -j\,\mathrm{sign}(\beta_2)\,\frac{\partial^2}{\partial t^2} q(z,t) + j2|q(z,t)|^2 q(z,t) + n(z,t) \qquad (6)$$
and we choose β2 < 0 to focus on the case of anomalous GVD [14, p. 131]. The parameter T0 can be freely chosen, and provides an additional degree of freedom for the normalization. The nonlinear term |q|²q causes inter-channel interference.
The IST or NFT provides a domain in which the NLSE channel is multiplicative, i.e., there exists a channel transfer function H(z, λ) such that
$$\tilde{Q}(z,\lambda) = H(z,\lambda)\,\tilde{Q}(0,\lambda) \qquad (7)$$
where Q̃(z, λ) is the NFT of q(z, t). The NFT is based on the existence of a Lax pair (L, M) of operators that satisfies the following condition:
$$\frac{\partial L}{\partial z} = ML - LM. \qquad (8)$$
As shown in [2, Section 1.4], the eigenvalues λ of L are invariant in z, and the eigenvectors v of L satisfy:
$$v_z = M v \qquad (9)$$
$$v_t = P v \qquad (10)$$
where a subscript indicates a derivative with respect to that variable. For the NLSE (6), L, M and P are given by:
$$L(z,t) = j \begin{pmatrix} \frac{\partial}{\partial t} & -q \\ -q^* & -\frac{\partial}{\partial t} \end{pmatrix} \qquad (11)$$
$$M(z,t,\lambda) = \begin{pmatrix} 2j\lambda^2 - j|q|^2 & -2\lambda q - jq_t \\ 2\lambda q^* - jq_t^* & -2j\lambda^2 + j|q|^2 \end{pmatrix} \qquad (12)$$
$$P(z,t,\lambda) = \begin{pmatrix} -j\lambda & q \\ -q^* & j\lambda \end{pmatrix}. \qquad (13)$$
The NFT is calculated by solving the Zakharov-Shabat system (10). In the following, we often drop the dependence on z to simplify notation. Two solutions v¹(t, λ) and v²(t, λ) that are bounded in the upper complex half plane (λ ∈ C⁺) are obtained using the boundary conditions
$$v^1(t,\lambda) \to \begin{pmatrix} 0 \\ 1 \end{pmatrix} e^{j\lambda t}, \quad t \to +\infty \qquad (14a)$$
$$v^2(t,\lambda) \to \begin{pmatrix} 1 \\ 0 \end{pmatrix} e^{-j\lambda t}, \quad t \to -\infty. \qquad (14b)$$
We define the adjoint of a vector v = (v₁, v₂)ᵀ as ṽ = (v₂*, −v₁*)ᵀ. Two additional solutions ṽ¹(t, λ*) and ṽ²(t, λ*) of (10) are calculated by solving v_t(t, λ*) = P(t, λ*)v(t, λ*) using boundary conditions adjoint to (14), and taking the adjoint of the solutions. The four canonical eigenvectors v¹(t, λ), v²(t, λ), ṽ¹(t, λ), and ṽ²(t, λ) satisfy
$$v^2(t,\lambda) = a(\lambda)\,\tilde{v}^1(t,\lambda^*) + b(\lambda)\,v^1(t,\lambda) \qquad (15a)$$
$$\tilde{v}^2(t,\lambda) = -b^*(\lambda^*)\,\tilde{v}^1(t,\lambda^*) + a^*(\lambda^*)\,v^1(t,\lambda) \qquad (15b)$$
where a(λ) and b(λ) do not depend on t. The NFT of the signal q(z, t) is made up of two spectra:
• the continuous spectrum Qc(λ) = b(λ)/a(λ), for λ ∈ R;
• the discrete spectrum Qd(λk) = b(λk)/aλ(λk), for the K eigenvalues {λk ∈ C⁺ : a(λk) = 0}
where aλ = da/dλ. To compute the NFT, the following relations are useful
$$a(\lambda) = \lim_{t\to\infty} v_1^2(t,\lambda)\, e^{j\lambda t} \qquad (16a)$$
$$b(\lambda) = \lim_{t\to\infty} v_2^2(t,\lambda)\, e^{-j\lambda t}. \qquad (16b)$$
Given a signal q(z, t) propagating according to the NLSE (6), its NFT evolves in z according to the following multiplicative relations:
$$Q_c(z,\lambda) = Q_c(0,\lambda)\, e^{4j\lambda^2 z} \qquad (17a)$$
$$\lambda_k(z) = \lambda_k(0) \qquad (17b)$$
$$Q_d(z,\lambda_k) = Q_d(0,\lambda_k)\, e^{4j\lambda_k^2 z}. \qquad (17c)$$

IV. EIGENVALUES OF HIGHER MULTIPLICITY

The relations (15) are consistent only for λ ∈ R ∪ {λk} ∪ {λk*}, because the boundary condition (14a) on ṽ¹(t, λ*) is unbounded outside this region. As the eigenvalues λk come in complex conjugate pairs, the spectrum at λ ∈ R ∪ {λk} is enough to determine q(t) uniquely. However, to the best of our knowledge, all the work on the NFT for optical communication assumes that all the eigenvalues λk have multiplicity 1, i.e., the zeros of a(λ) are simple. There has been, however, some work [12], [13] on the mathematical theory of higher multiplicity eigenvalues, which we summarize in this section.
For a multiple zero λk of a(λ), we have aλ(λk) = 0, and the above definition of the discrete spectrum is not valid anymore. If the multiplicity of the eigenvalue λk is Lk, we need Lk constants qk0, . . . , q_{k,(Lk−1)} to determine the discrete spectrum. In [13], these norming constants are calculated in several intermediate steps.
• Dependency constants
The dependency constants γkℓ, k ∈ {1, . . . , K}, ℓ ∈ {0, . . . , Lk − 1} are defined by the following equation [13, Eq. (3.1)]:
$$v^{2,(m)}(t,\lambda_k) = \sum_{\ell=0}^{m} \binom{m}{\ell} \gamma_{k,(m-\ell)}\, v^{1,(\ell)}(t,\lambda_k) \qquad (18)$$
where v^{i,(m)} denotes the m-th derivative of v^i with respect to λ. Taking the m-th derivative of (15), and using a^{(ℓ)}(t, λ) = 0 for ℓ ≤ Lk − 1, we have
$$\gamma_{k,\ell} = b^{(\ell)}(\lambda_k). \qquad (19)$$
• Generalized residues
The generalized residues tkℓ, k ∈ {1, . . . , K}, ℓ ∈ {1, . . . , Lk}, are the coefficients of the expansion of 1/a(λ) − 1 in inverse powers of λ − λk [13, Eq. (4.3)]:
$$\frac{1}{a(\lambda)} - 1 = \frac{t_{kL_k}}{(\lambda-\lambda_k)^{L_k}} + \cdots + \frac{t_{k1}}{\lambda-\lambda_k} + O(1). \qquad (20)$$
They can be computed as
$$t_{k\ell} = \frac{1}{(L_k-\ell)!} \lim_{\lambda\to\lambda_k} \frac{d^{L_k-\ell}}{d\lambda^{L_k-\ell}} \left[ \frac{(\lambda-\lambda_k)^{L_k}}{a(\lambda)} \right]. \qquad (21)$$
• Norming constants
The norming constants qkℓ, k ∈ {1, . . . , K}, ℓ ∈ {0, . . . , Lk − 1}, are given by [13, Eq. (4.1)]:
$$q_{k\ell} = j^\ell \sum_{m=0}^{L_k-\ell-1} \frac{b^{(m)}(\lambda_k)}{m!}\, t_{k,(\ell+m+1)}. \qquad (22)$$
The generalization of the distance evolution equation (17c) to the case Lk > 1 is given by [13, Eq. (4.9)]:
$$\begin{pmatrix} q_{k,(L_k-1)}(z) & \cdots & q_{k0}(z) \end{pmatrix} = \begin{pmatrix} q_{k,(L_k-1)}(0) & \cdots & q_{k0}(0) \end{pmatrix} e^{-4j\Lambda_k^2 z} \qquad (23)$$
for all k ∈ {1, . . . , K}, where
$$\Lambda_k = \begin{pmatrix} -j\lambda_k & -1 & 0 & \cdots & 0 \\ 0 & -j\lambda_k & -1 & \cdots & 0 \\ \vdots & \ddots & \ddots & \ddots & \vdots \\ 0 & \cdots & 0 & -j\lambda_k & -1 \\ 0 & \cdots & 0 & 0 & -j\lambda_k \end{pmatrix} \in \mathbb{C}^{L_k \times L_k}. \qquad (24)$$
We write the NFT as
$$\mathrm{NFT}\{q(t)\} = \left( Q_c(\lambda), \{\lambda_k\}, \{q_{k\ell}\} \right). \qquad (25)$$

A. Properties of the NFT with Higher Multiplicity Eigenvalues
We prove the following properties in Appendix A.
1) Phase shift:
$$\mathrm{NFT}\left\{ q(t)e^{j\phi_0} \right\} = \left( Q_c(\lambda)e^{-j\phi_0}, \{\lambda_k\}, \{q_{k\ell}e^{-j\phi_0}\} \right). \qquad (26)$$
2) Time shift: if q′(t) = q(t − t0) then
$$\mathrm{NFT}\{q'(t)\} = \left( Q_c'(\lambda), \{\lambda_k'\}, \{q_{k\ell}'\} \right) \qquad (27)$$
satisfies
$$Q_c'(\lambda) = Q_c(\lambda)\, e^{-2j\lambda t_0} \qquad (28a)$$
$$\lambda_k' = \lambda_k \qquad (28b)$$
$$\begin{pmatrix} q_{k,(L_k-1)}' & \cdots & q_{k0}' \end{pmatrix} = \begin{pmatrix} q_{k,(L_k-1)} & \cdots & q_{k0} \end{pmatrix} e^{2\Lambda_k t_0}. \qquad (28c)$$
3) Frequency shift:
$$\mathrm{NFT}\left\{ q(t)e^{-2j\omega_0 t} \right\} = \left( Q_c(\lambda-\omega_0), \{\lambda_k+\omega_0\}, \{q_{k\ell}\} \right). \qquad (29)$$
4) Time dilation: for T > 0
$$\mathrm{NFT}\left\{ \frac{1}{T}\, q\!\left(\frac{t}{T}\right) \right\} = \left( Q_c(T\lambda), \left\{\frac{\lambda_k}{T}\right\}, \left\{\frac{q_{k\ell}}{T^{\ell+1}}\right\} \right). \qquad (30)$$
5) Parseval's theorem:
$$\int_{-\infty}^{\infty} |q(t)|^2\, dt = \frac{1}{\pi} \int_{-\infty}^{\infty} \log\!\left( 1 + |Q_c(\lambda)|^2 \right) d\lambda + 4 \sum_{k=1}^{K} L_k\, \Im\{\lambda_k\}. \qquad (31)$$

V. NUMERICAL COMPUTATION OF THE (I)NFT WITH HIGHER MULTIPLICITY EIGENVALUES

Using the theory from Section IV, we extend the existing numerical algorithms that compute the (I)NFT to include multiple eigenvalues.

A. Direct NFT
Most algorithms that compute the direct NFT discretize the Zakharov-Shabat system (10) to find a(λ) and b(λ) from (16). Let u = (u1, u2)ᵀ, where u1(t, λ) = v₁²(t, λ)e^{jλt} and u2(t, λ) = v₂²(t, λ)e^{−jλt}. Then from (10) we have
$$u_t(t,\lambda) = \begin{pmatrix} 0 & q(t)\,e^{2j\lambda t} \\ -q^*(t)\,e^{-2j\lambda t} & 0 \end{pmatrix} u(t,\lambda) \qquad (32)$$
and from (16) we have
$$a(\lambda) = \lim_{t\to\infty} u_1(t,\lambda) \qquad (33a)$$
$$b(\lambda) = \lim_{t\to\infty} u_2(t,\lambda). \qquad (33b)$$
To compute the NFT of q(t), we discretize the time axis in the interval t ∈ [t1, t2]. Let tn = t1 + nε, qn = q(tn), where n ∈ {0, . . . , N − 1}, N is the number of samples, and ε = (t2 − t1)/(N − 1) is the step size. Similarly, let u[n] = u(t1 + nε, λ). Starting at u[0] = (1, 0)ᵀ (see (14)), the following update step is applied iteratively:
$$u[n+1] = A[n]\, u[n], \qquad n \in \{0, \ldots, N-2\} \qquad (34)$$
and we have a(λ) = u1[N − 1] and b(λ) = u2[N − 1]. The kernel A[n] varies according to the discretization algorithm. A few options are given in [4]. In this work, we consider the trapezoidal kernel proposed in [5]:
$$A[n] = \begin{pmatrix} \cos(|q_n|\epsilon) & \sin(|q_n|\epsilon)\, e^{j(\theta_n + 2\lambda t_n)} \\ -\sin(|q_n|\epsilon)\, e^{-j(\theta_n + 2\lambda t_n)} & \cos(|q_n|\epsilon) \end{pmatrix} \qquad (35)$$
where θn = arg qn. However, the following analysis is valid for any kernel A[n]. To obtain the norming constants qkℓ, we need to calculate higher order λ-derivatives of a(λ) and b(λ). More specifically, from (22) we need the first Lk − 1 derivatives of b(λ). In the case of a(λ), we obtain an upper bound on the order of the required derivatives.

Lemma 1. The value of tkℓ in (21) depends on λk only through the functions a^{(m)}(λk) for m ∈ {Lk, . . . , 2Lk − ℓ}.
Proof. See Appendix B.

For an eigenvalue of multiplicity Lk, we need to compute the first 2Lk − 1 derivatives of u[N − 1]. We do this by setting the initial conditions
$$u^{(m)}[0] = \begin{pmatrix} 0 \\ 0 \end{pmatrix}, \qquad m \in \{1, \ldots, 2L_k - 1\} \qquad (36)$$
and applying the update steps
$$u^{(m)}[n+1] = \sum_{r=0}^{m} \binom{m}{r} A^{(r)}[n]\, u^{(m-r)}[n] \qquad (37)$$
where A^{(r)}[n], the r-th order λ-derivative of A[n], is obtained in closed form. For the trapezoidal kernel (35) we have
$$A^{(r)}[n] = (2jt_n)^r \sin(|q_n|\epsilon) \begin{pmatrix} 0 & e^{j(\theta_n + 2\lambda t_n)} \\ (-1)^{r+1}\, e^{-j(\theta_n + 2\lambda t_n)} & 0 \end{pmatrix}. \qquad (38)$$
Once we have the required values of a, b and their derivatives, we use (22) and (21) to compute the norming constants. In (21), the derivative is evaluated in closed form, and then L'Hôpital's rule is applied repetitively to obtain an expression for tkℓ that depends only on nonzero derivatives of a. See (75)-(77) in Appendix B for details. For Lk = 2, this gives
$$q_{k1} = \frac{j2b(\lambda_k)}{a_{\lambda\lambda}(\lambda_k)} \qquad (39a)$$
$$q_{k0} = \frac{2b_\lambda(\lambda_k)}{a_{\lambda\lambda}(\lambda_k)} - \frac{2}{3}\,\frac{b(\lambda_k)\, a_{\lambda\lambda\lambda}(\lambda_k)}{a_{\lambda\lambda}(\lambda_k)^2} \qquad (39b)$$
and for Lk = 3 we have
$$q_{k2} = \frac{-6b(\lambda_k)}{a_{\lambda\lambda\lambda}(\lambda_k)} \qquad (40a)$$
$$q_{k1} = \frac{j6b_\lambda(\lambda_k)}{a_{\lambda\lambda\lambda}(\lambda_k)} - jb(\lambda_k)\,\frac{3a_{\lambda\lambda\lambda\lambda}(\lambda_k)}{2a_{\lambda\lambda\lambda}(\lambda_k)^2} \qquad (40b)$$
$$q_{k0} = \frac{6b_{\lambda\lambda}(\lambda_k)}{a_{\lambda\lambda\lambda}(\lambda_k)} - b_\lambda(\lambda_k)\,\frac{3a_{\lambda\lambda\lambda\lambda}(\lambda_k)}{2a_{\lambda\lambda\lambda}(\lambda_k)^2} + b(\lambda_k)\,\frac{15a_{\lambda\lambda\lambda\lambda}(\lambda_k)^2 - 12a_{\lambda\lambda\lambda}(\lambda_k)\,a_{\lambda\lambda\lambda\lambda\lambda}(\lambda_k)}{20a_{\lambda\lambda\lambda}(\lambda_k)^3}. \qquad (40c)$$
Forward-Backward Method: This technique was proposed in [5] to improve numerical stability. We write (34) as
$$\begin{pmatrix} a(\lambda) \\ b(\lambda) \end{pmatrix} = A[N-1]\cdots A[1]A[0] \begin{pmatrix} 1 \\ 0 \end{pmatrix} = RL \begin{pmatrix} 1 \\ 0 \end{pmatrix} \qquad (41)$$
where R = A[N − 1] · · · A[n0] and L = A[n0 − 1] · · · A[0], and n0 is chosen according to some criterion to minimize the numerical error. The iterative procedure (34) is run forward up to n0 − 1 to obtain
$$\begin{pmatrix} l_1 \\ l_2 \end{pmatrix} = L \begin{pmatrix} 1 \\ 0 \end{pmatrix} = \begin{pmatrix} L_{11} \\ L_{21} \end{pmatrix} \qquad (42)$$
and backward from r[N − 1] = (0, 1)ᵀ down to r[n0 − 1]:
$$\begin{pmatrix} r_1 \\ r_2 \end{pmatrix} = R^{-1} \begin{pmatrix} 0 \\ 1 \end{pmatrix} = \begin{pmatrix} -R_{12} \\ R_{11} \end{pmatrix}. \qquad (43)$$
The kernel A[n] is used to compute (43): for the trapezoidal case this amounts to replacing ε with −ε in (35). Note that (43) is valid only for kernels with unit determinant.
Using (38), we obtain r1, r2, l1, l2, and their derivatives up to order 2Lk − ℓ. From (41) we have
$$a(\lambda) = R_{11}L_{11} + R_{12}L_{21} \qquad (44)$$
and we compute
$$a^{(\ell)}(\lambda_k) = \sum_{m=0}^{\ell} \binom{\ell}{m} \left( r_2^{(m)} l_1^{(\ell-m)} - r_1^{(m)} l_2^{(\ell-m)} \right). \qquad (45)$$
To obtain b^{(ℓ)}(λk), note that
$$b(\lambda_k) = R_{21}L_{11} + R_{22}L_{21} = \frac{R_{21}}{R_{11}}\left( R_{11}L_{11} + R_{12}L_{21} \right) + \frac{L_{21}}{R_{11}} = \frac{R_{21}}{R_{11}}\, a(\lambda_k) + \frac{L_{21}}{R_{11}} \qquad (46)$$
where we used R22 = (1 + R12R21)/R11. The ℓ-th derivative of the left summand in (46) is 0 for ℓ ≤ Lk − 1, because a^{(ℓ)}(λk) = 0 for ℓ ≤ Lk − 1. Therefore, we have
$$b^{(\ell)}(\lambda_k) = \left. \frac{d^\ell}{d\lambda^\ell}\, \frac{l_2}{r_2} \right|_{\lambda=\lambda_k} \qquad (47)$$
which can be written in closed form using (72) below. Equations (47) and (45), together with (22) (or (39) or (40)), let us compute the direct NFT with higher multiplicity eigenvalues from the forward-backward method.

B. Inverse NFT
The inverse NFT can be computed using the generalized version of the Gelfand-Levitan-Marchenko equation (GLME) [7]:
$$K(t,y) - \Omega^*(t+y) + \int_t^\infty dx \int_t^\infty ds\; K(t,s)\,\Omega^*(s+x)\,\Omega(x+y) = 0. \qquad (48)$$
The kernel Ω(y) is given by [12]:
$$\Omega(y) = \frac{1}{2\pi} \int_{-\infty}^{\infty} Q_c(\lambda)\, e^{j\lambda y}\, d\lambda + \sum_{k=1}^{K} \sum_{\ell=0}^{L_k-1} q_{k\ell}\, \frac{y^\ell}{\ell!}\, e^{j\lambda_k y}. \qquad (49)$$
The inverse NFT is then obtained as
$$q(t) = -2K(t,t). \qquad (50)$$
A numerical procedure to solve the GLME (48) is given in [7, Section 4.2]: it suffices to replace F(y) in the reference by Ω(y) from (49).
When there is no continuous spectrum, a closed-form expression is given in [12] for the generalized K-solitons:
$$q(z,t) = -2B^H e^{-\Lambda^H t} \left( I + M(z,t)N(t) \right)^{-1} e^{-\Lambda^H t + 4j(\Lambda^H)^2 z}\, C^H \qquad (51)$$
where
$$\Lambda = \begin{pmatrix} \Lambda_1 & 0 & \cdots & 0 \\ 0 & \Lambda_2 & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & \Lambda_K \end{pmatrix}, \qquad (52)$$
Λk is given by (24),
$$B = \begin{pmatrix} B_1 & \cdots & B_K \end{pmatrix}, \qquad B_k = \begin{pmatrix} 0 & \cdots & 0 & 1 \end{pmatrix}^T \in \{0,1\}^{L_k \times 1}, \qquad (53)$$
$$C = \begin{pmatrix} C_1 & \cdots & C_K \end{pmatrix}, \qquad C_k = \begin{pmatrix} q_{k,(L_k-1)} & \cdots & q_{k0} \end{pmatrix}, \qquad (54)$$
I is an identity matrix of size Σk Lk, and
$$M(z,t) = \int_t^\infty e^{-\Lambda^H s + 4j(\Lambda^H)^2 z}\, C^H C\, e^{-\Lambda s - 4j\Lambda^2 z}\, ds \qquad (55)$$
$$N(t) = \int_t^\infty e^{-\Lambda x}\, B B^H\, e^{-\Lambda^H x}\, dx. \qquad (56)$$
We compute the integrals (55) and (56) numerically, and then apply (51) to obtain the inverse NFT of a purely discrete spectrum with higher multiplicity eigenvalues.

[Figure 1. Propagation of a double soliton: |q(z, t)|² over t ∈ [−20, 20] and z ∈ [0, 0.8].]

C. Example: Double Soliton
A soliton q(z, t) with a 2nd order eigenvalue at λ = ξ + jη, with norming constants q11 and q10, can be derived in closed form from (51). The result is
$$q(z,t) = \frac{h(z,t)}{f(z,t)} \qquad (57)$$
where
$$h(z,t) = -j4\eta\, e^{-j\arg q_{11}}\, e^{-j2\xi t}\, e^{-j4(\xi^2-\eta^2)z} \Big\{ e^{-X} \left[ -|q_{11}|^2 \left( 2\eta t + 8\eta(\xi+j\eta)z + 2 \right) - \eta\, q_{11} q_{10}^* \right] + e^{X} \left[ |q_{11}|^2 \left( 2\eta t + 8\eta(\xi-j\eta)z \right) + \eta\, q_{11}^* q_{10} \right] \Big\} \qquad (58)$$
$$f(z,t) = |q_{11}|^2 \left[ \cosh(2X) + 1 \right] + 2 \left| q_{10}\eta + q_{11}\left( 2\eta t + 8\eta(\xi+j\eta)z + 1 \right) \right|^2 \qquad (59)$$
and
$$X = 2\eta t + 8\eta\xi z - \log\frac{|q_{11}|}{4\eta^2}. \qquad (60)$$
We refer to this soliton as a double soliton. The evolution of the norming constants (23) reduces to
$$q_{11}(z) = q_{11}(0)\, e^{4j\lambda^2 z} \qquad (61a)$$
$$q_{10}(z) = \left( q_{10}(0) + 8\lambda z\, q_{11}(0) \right) e^{4j\lambda^2 z}. \qquad (61b)$$
Note from (61b) that a soliton with higher multiplicity eigenvalues does not exhibit periodic (breathing) behavior in z. Only the first norming constant q_{k,(Lk−1)} evolves periodically if the eigenvalue is purely imaginary. In particular, the ratio |q10|/|q11| is unbounded with distance, which causes the unbounded pulse broadening that can be inferred from Figure 1.
We remark that the following closed form expression for the center time of the double soliton seems to be valid based on our experiments, although we have not found a proof:
$$\frac{\int_{-\infty}^{\infty} t\, |q(t)|^2\, dt}{\int_{-\infty}^{\infty} |q(t)|^2\, dt} = \frac{1}{2\eta} \log\frac{|q_{11}|}{4\eta^2}. \qquad (62)$$

VI. INFORMATION TRANSMISSION USING HIGHER MULTIPLICITY EIGENVALUES

We simulated a communications system with the parameters given in Table I. Three different launch signals were compared:
• a double-soliton with an eigenvalue at λ = 1.25i,
• a 2-soliton with eigenvalues λ1 = 1.5i and λ2 = 1i,
• and a 1-soliton with an eigenvalue at λ0 = 2.5i.
From (31), the three signals have the same pulse energy in the normalized domain. The 1-soliton uses multi-ring modulation on q0 = Qd(λ0) with 32 rings and 128 phases per ring. The 2-soliton has the two spectral amplitudes q1 = Qd(λ1) and q2 = Qd(λ2), while the double-soliton system has the two norming constants q11 and q10. Both the 2-soliton and the double-soliton have 4 rings and 16 phases per spectral amplitude. With this design, the three launch signals can transmit up to 12 bits per channel use. The amplitudes of the rings have been heuristically optimized to obtain a small time-bandwidth product (TBP) of 10.5 sec·Hz that is approximately the same for the three signals. The ring amplitudes for the 1-soliton are given by
$$|q_0| \in \left\{ 0.088754 \cdot 1.6142^k : k \in \{0, \ldots, 31\} \right\}. \qquad (63)$$
The ring amplitudes for the two-soliton and double-soliton are given in Tables II and III, respectively. The phases are uniformly spaced in [0, 2π), starting at 0 for q0 and q11, at π/128 for q2, and at π/16 for q10. The optimal criterion for choosing ring amplitudes has not yet been found, but expressions such as (62) suggest that geometric progressions are better suited than arithmetic progressions.
The free normalization parameter T0 in (5) was used to obtain the desired powers. This means that the pulse duration and bandwidth are different for each power value. This is unavoidable in soliton systems. The TBP of 10.5 is, however, constant for all values of power.
Propagation according to (1) was simulated using the split-step Fourier method. In all systems, the transmitter used closed-form expressions to generate the solitons, and the receiver used forward-backward computation with the trapezoidal kernel to obtain the norming constants. Equalization was performed by inverting (23). The mutual information of the transmitted and received symbols was measured and
6
Table III
R ING AMPLITUDES FOR THE DOUBLE - SOLITON SYSTEM
Table I
S IMULATION PARAMETERS
Symbol
β2
γ
z
Nase
|q11 |
|q10 |
Value
−21.667 ps2 /km
1.2578 W−1 km−1
4000 km
6.4893 · 10−24 Ws/m
Table II
R ING AMPLITUDES FOR THE 2- SOLITON SYSTEM
|q1 |(λ = 1.5i)
|q2 |(λ = 1i)
2.5355
0.2662
2.8364
1.0211
3.1730
3.9173
3.5496
15.0283
normalized by the TBP to obtain the spectral efficiency. In the
2-soliton and double-soliton systems, the mutual information
was computed jointly for the two spectral amplitudes,
(T X) (T X) (RX) (RX)
I(q1
, q2
; q1
, q2
)
(T X)
Spectral efficiency (bits/s/Hz)
Parameter
Dispersion coefficient
Nonlinear coefficient
Fiber length
Noise spectral density
5.3785
34.3750
6.5708
45.0440
7.2627
51.5625
1-soliton
2-soliton
Double soliton
1
0.5
0
−40
−30
−20
(64)
−10
0
P (dBm)
(RX)
where qk
refers to the transmitted symbols and qk
refers to the received symbols after equalization. The effective
number of transmitted symbols was 614440.
Figure 2 shows the spectral efficiency for the three systems.
The double soliton performs better than the 1-soliton, but
worse than the 2-soliton. This is expected: the higher order
derivatives of a(λ) in (39) lead to a loss of accuracy. However,
the experiment demonstrates that the generalized NFT with
multiple zeros can be used to transmit information.
VII. C ONCLUSION
We started from the theory in [12], [13], and proved some
properties of the generalized NFT that are useful for communications. We designed and implemented algorithms to compute
the generalized NFT, and we numerically demonstrated the
potential of higher multiplicity eigenvalues for information
transmission. With this, we extend the class of signals that
admit an NFT, and we show that there are additional degrees
of freedom for NFT-based optical communications systems.
There are several directions for future work. For example,
an extension of the Darboux algorithm to the generalized NFT
would speed up the computation of the INFT. More insight into
the duration, bandwidth and robustness to noise of multiple
eigenvalue signals would be useful.
Figure 2. Spectral efficiency (bits/s/Hz) versus launch power P (dBm) for the three solitonic signals (1-soliton, 2-soliton, and double soliton).

APPENDIX A
PROOF OF THE PROPERTIES OF THE NFT WITH HIGHER MULTIPLICITY EIGENVALUES

In the following, all primed variables (a′) refer to the spectral functions of the shifted signal q′(t).

1) Phase shift: replacing q with q e^{jφ_0} in (32), we have a′(λ) = a(λ) and b′(λ) = b(λ) e^{−jφ_0}. The property then follows from (17a) and (22).

2) Time shift: the change of variable t → t − t_0 in (10) proves that a′(λ) = a(λ) e^{jλ t_0} and b′(λ) = b(λ) e^{−jλ t_0}. The expressions (28a) and (28b) follow immediately. Defining

    t_{kℓ}(λ) = \frac{1}{(L_k−ℓ)!} \frac{d^{L_k−ℓ}}{dλ^{L_k−ℓ}} \left[ \frac{(λ−λ_k)^{L_k}}{a(λ)} \right]    (65)

we have

    t_{kL_k}(λ) = \frac{(λ−λ_k)^{L_k}}{a(λ)}    (66a)

    t_{kℓ}(λ) = \frac{1}{(L_k−ℓ)!} \frac{d^{L_k−ℓ}}{dλ^{L_k−ℓ}} t_{kL_k}(λ).    (66b)

Using a′ = a e^{jλ t_0}, we have t′_{kL_k}(λ) = t_{kL_k}(λ) e^{−jλ t_0} and, from (66b),

    t′_{kℓ}(λ) = e^{−jλ t_0} \sum_{i=0}^{L_k−ℓ} \frac{1}{i!} (−j t_0)^i \, t_{k,ℓ+i}(λ).    (67)

From b′ = b e^{−jλ t_0} we have

    (b′)^{(ℓ)}(λ) = e^{−jλ t_0} \sum_{i=0}^{ℓ} \binom{ℓ}{i} (−j t_0)^{ℓ−i} b^{(i)}(λ).    (68)

Using (67) and (68) in (22), we have

    q′_{kℓ} = j^ℓ e^{−2jλ_k t_0} \sum_{m=0}^{L_k−ℓ−1} \frac{1}{m!} \sum_{i=0}^{m} \binom{m}{i} b^{(i)} (−j t_0)^{m−i} \sum_{r=0}^{L_k−ℓ−m−1} \frac{1}{r!} (−j t_0)^r \, t_{k,ℓ+m+r+1}.    (69)

The change of variable m = u + i − r yields

    q′_{kℓ} = e^{−2jλ_k t_0} \sum_{u=0}^{L_k−ℓ−1} (−t_0)^u \left[ \sum_{r=0}^{u} \frac{1}{r!\,(u−r)!} \right] \sum_{i=0}^{L_k−ℓ−u−1} \frac{j^{ℓ+u}}{i!} \, b^{(i)} \, t_{k,ℓ+u+i+1}
          = e^{−2jλ_k t_0} \sum_{u=0}^{L_k−ℓ−1} \frac{1}{u!} (−2 t_0)^u \, q_{k,ℓ+u}    (70)

which is the same as (28c).
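The second equality in (70) relies on the binomial identity Σ_{r=0}^{u} 1/(r!(u−r)!) = 2^u/u!. A quick numerical check of this step (illustrative, not part of the paper):

```python
from math import factorial

# Identity used going from (69) to (70):
#   sum_{r=0}^{u} 1/(r! (u-r)!) = 2^u / u!
# (it is the binomial theorem evaluated at 1+1, divided by u!).
def lhs(u):
    return sum(1.0 / (factorial(r) * factorial(u - r)) for r in range(u + 1))

def rhs(u):
    return 2.0**u / factorial(u)
```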
3) Frequency shift: using the change of variable λ → λ−ω0
in (32), we have a′ (λ) = a(λ − ω0 ) and b′ (λ) = b(λ − ω0 ),
from which the property follows.
4) Time dilation: the change of variable t → t/T in (32)
proves that a′ (λ) = a(T λ) and b′ (λ) = b(T λ). Using this
in (21) and changing λ → Tλ shows that t′_{kℓ} = t_{kℓ}/T^ℓ. Together with (22), this proves (30).
5) Parseval’s theorem: this is a particular case (n = 0) of
the more general trace formula:
    C_n = \frac{1}{π} \int_{−∞}^{∞} (2jλ)^n \log\left(1 + |Q_c(λ)|^2\right) dλ + \frac{4\,(2j)^n}{n+1} \sum_{k=0}^{K} L_k \, ℑ\{λ_k^{n+1}\}    (71)
where Cn are the constants of motion, of which C0 is the
signal energy (31). The proof for simple eigenvalues is given
in [2, Sec. 1.6]. The result is extended to multiple eigenvalues
by allowing several ζm in [2, Eq. (1.6.18)] to be equal.
APPENDIX B
PROOF OF LEMMA 1
Applying Faà di Bruno’s formula [15, pp. 43-44] to 1/a(λ),
and then the generalized product rule to c(λ)·(1/a(λ)), for an
arbitrary c(λ), we can write a quotient rule for higher order
derivatives:
    \frac{d^n}{dλ^n} \frac{c}{a} = \sum_{m=0}^{n} \binom{n}{m} c^{(n−m)} \sum_{p∈P(m)} \frac{(−1)^{|p|} m! \, |p|!}{p_1!\,1!^{p_1} \cdots p_m!\,m!^{p_m}} \, \frac{1}{a^{|p|+1}} \prod_{i=1}^{m} \left(a^{(i)}\right)^{p_i}.    (72)

Recall that a^{(i)} denotes an i-th order derivative. Here, P(m) denotes the set of partitions p of m:

    p = [p_1, \cdots, p_m], \qquad \sum_{i=1}^{m} i\,p_i = m, \qquad p_i ∈ ℕ ∪ \{0\},    (73)

and |p| = \sum_{i=1}^{m} p_i is the cardinality of p. Using (72) in (21) we have

    t_{kℓ} = \lim_{λ→λ_k} \frac{g(λ)}{a(λ)^{L_k−ℓ+1}}    (74)
where

    g(λ) = \sum_{m=0}^{L_k−ℓ} \binom{L_k−ℓ}{m} \frac{L_k!}{(ℓ+m)!} (λ−λ_k)^{ℓ+m} \sum_{p∈P(m)} \frac{(−1)^{|p|} m! \, |p|! \, a^{L_k−ℓ−|p|}}{p_1!\,1!^{p_1} \cdots p_m!\,m!^{p_m}} \prod_{i=1}^{m} \left(a^{(i)}\right)^{p_i}.    (75)
Note that a has a zero of order Lk , and therefore a(m) (λk ) =
0 for m ∈ {0, . . . , Lk − 1}. To compute tkℓ , we repeatedly
apply L’Hôpital’s rule until the numerator and the denominator
become nonzero in the limit:
    t_{kℓ} = \frac{g^{(r)}(λ_k)}{\left[ d^r a(λ)^{L_k−ℓ+1} / dλ^r \right]\big|_{λ=λ_k}}.    (76)

The number r of times we need to differentiate is equal to the order of the zero in the denominator:

    r = L_k (L_k − ℓ + 1).    (77)
The summands in g^{(r)}(λ) are of the form

    K_s \, \frac{d^{(L_k−ℓ+1)L_k}}{dλ^{(L_k−ℓ+1)L_k}} \left[ (λ−λ_k)^{ℓ+m} \, a^{L_k−ℓ−|p|} \prod_{i=1}^{m} \left(a^{(i)}\right)^{p_i} \right]    (78)

where s is an index, and K_s is a constant independent of λ. If we apply the generalized product rule to (78), we see that the only summands that will become nonzero for λ = λ_k are those that apply ℓ + m differentiations to the factor (λ − λ_k)^{ℓ+m}. This leaves (L_k − ℓ + 1)L_k − ℓ − m differentiations for the other factors. Note that the derivative of a term such as

    g_s(λ) = K_s \, a^{L_k−ℓ−|p|} \prod_{i=1}^{m} \left(a^{(i)}\right)^{p_i}    (79)

is a sum of terms of the same form. Each new term has the same number \sum_i p_i of a-factors as the original (an a-factor here refers to a or one of its derivatives), while the number of differentiations \sum_i i\,p_i in the a-factors is increased by 1. We conclude that g_s(λ) is made up of summands that contain
• L_k − ℓ − |p| + \sum_i p_i = L_k − ℓ a-factors, which carry
• (L_k − ℓ + 1)L_k − ℓ − m + \sum_i i\,p_i = (L_k − ℓ + 1)L_k − ℓ differentiations.
Only the summands where all the a-factors are differentiated at least L_k times will be nonzero. We want to know the highest-order derivative of a that appears in a nonzero term. The worst case occurs when we have L_k − ℓ − 1 a-factors with an L_k-th order derivative. The remaining a-factor must then have a derivative of order [(L_k − ℓ + 1)L_k − ℓ] − [L_k(L_k − ℓ − 1)] = 2L_k − ℓ.
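As a sanity check, the quotient rule (72) can be verified numerically: the helper below evaluates the right-hand side of (72) from derivative values of c and a at a point. The partition enumeration and function names are ours, for illustration only.

```python
from math import comb, factorial
from itertools import product as iproduct

def partitions(m):
    # All p = [p1, ..., pm] with sum(i * pi) = m, as in (73).
    for p in iproduct(*(range(m // i + 1) for i in range(1, m + 1))):
        if sum(i * pi for i, pi in enumerate(p, 1)) == m:
            yield p

def quotient_derivative(c_derivs, a_derivs, n):
    # Right-hand side of (72): the n-th derivative of c/a at a point,
    # computed from the derivative values c^{(k)}, a^{(k)} at that point.
    total = 0.0
    for m in range(n + 1):
        inner = 0.0
        for p in partitions(m):
            size = sum(p)                       # |p|
            coeff = (-1) ** size * factorial(m) * factorial(size)
            denom = 1
            prod = 1.0
            for i, pi in enumerate(p, 1):
                denom *= factorial(pi) * factorial(i) ** pi
                prod *= a_derivs[i] ** pi       # product of (a^{(i)})^{p_i}
            inner += coeff / denom * prod / a_derivs[0] ** (size + 1)
        total += comb(n, m) * c_derivs[n - m] * inner
    return total
```

An independent reference value follows from the product rule applied to c = (c/a)·a, which yields the derivatives of c/a recursively without any partition sums.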
ACKNOWLEDGMENT
The author wishes to thank Prof. G. Kramer and B. Leible
for useful comments and proofreading the paper.
REFERENCES
[1] R. J. Essiambre, G. Kramer, P. J. Winzer, G. J. Foschini, and B. Goebel,
“Capacity Limits of Optical Fiber Networks,” J. Lightw. Technol.,
vol. 28, no. 4, pp. 662–701, Feb 2010.
[2] M. Ablowitz and H. Segur, Solitons and the Inverse Scattering Transform. Society for Industrial and Applied Mathematics, 1981. [Online].
Available: http://epubs.siam.org/doi/abs/10.1137/1.9781611970883
[3] M. I. Yousefi and F. R. Kschischang, “Information Transmission Using
the Nonlinear Fourier Transform, Part I: Mathematical Tools,” IEEE
Trans. Inf. Theory, vol. 60, no. 7, pp. 4312–4328, July 2014.
[4] ——, “Information Transmission Using the Nonlinear Fourier Transform, Part II: Numerical Methods,” IEEE Trans. Inf. Theory, vol. 60,
no. 7, pp. 4329–4345, July 2014.
[5] V. Aref, “Control and Detection of Discrete Spectral Amplitudes in
Nonlinear Fourier Spectrum,” ArXiv e-prints, May 2016.
[6] M. I. Yousefi and F. R. Kschischang, “Information Transmission Using
the Nonlinear Fourier Transform, Part III: Spectrum Modulation,” IEEE
Trans. Inf. Theory, vol. 60, no. 7, pp. 4346–4369, July 2014.
[7] S. T. Le, J. E. Prilepsky, and S. K. Turitsyn, “Nonlinear inverse synthesis
for high spectral efficiency transmission in optical fibers,” Opt. Express,
vol. 22, no. 22, pp. 26 720–26 741, Nov 2014. [Online]. Available:
http://www.opticsexpress.org/abstract.cfm?URI=oe-22-22-26720
[8] M. I. Yousefi and X. Yangzhang, “Linear and Nonlinear FrequencyDivision Multiplexing,” in 2016 Eur. Conf. Optical Commun. (ECOC),
Sept 2016, pp. 1–3.
[9] S. T. Le, H. Buelow, and V. Aref, “Demonstration of 64 0.5Gbaud
nonlinear frequency division multiplexed transmission with 32QAM,”
in 2017 Optical Fiber Commun. Conf. and Exhib. (OFC), March 2017,
pp. 1–3.
[10] V. Aref, H. Blow, K. Schuh, and W. Idler, “Experimental demonstration
of nonlinear frequency division multiplexed transmission,” in 2015 Eur.
Conf. Optical Commun. (ECOC), Sept 2015, pp. 1–3.
[11] S. Hari, M. I. Yousefi, and F. R. Kschischang, “Multieigenvalue Communication,” J. Lightw. Technol., vol. 34, no. 13, pp. 3110–3117, July
2016.
[12] T. Aktosun, F. Demontis, and C. van der Mee, “Exact solutions to the focusing nonlinear Schrödinger equation,” Inverse Problems, vol. 23, no. 5, p. 2171, 2007. [Online]. Available: http://stacks.iop.org/0266-5611/23/i=5/a=021
[13] T. N. B. Martines, “Generalized inverse scattering transform for the nonlinear Schrödinger equation for bound states with higher multiplicities,”
Electronic J. Differential Equations, vol. 2017, no. 179, pp. 1–15, July
2017.
[14] G. P. Agrawal, Nonlinear Fiber Optics, 4th ed. Academic Press, October
2012.
[15] L. Arbogast, Du calcul des dérivations, 1800.
Reversible Image Watermarking for Health Informatics Systems
Using Distortion Compensation in Wavelet Domain
Hamidreza Zarrabi, Mohsen Hajabdollahi, S.M.Reza Soroushmehr,
Nader Karimi, Shadrokh Samavi, Kayvan Najarian
Abstract— Reversible image watermarking guarantees restoration of both the original cover and the watermark logo from the watermarked image. Capacity and distortion of the image under reversible watermarking are two important parameters. In this study, a reversible watermarking method is investigated, focusing on increasing the embedding capacity and reducing the distortion in medical images. The integer wavelet transform is used for embedding, where in each iteration one watermark bit is embedded in one transform coefficient. We devise a novel approach in which, when a coefficient is modified in one iteration, the produced distortion is compensated in the next iteration. This distortion compensation method results in a low distortion rate. The proposed method is tested on four types of medical images: MRI of the brain, cardiac MRI, MRI of the breast, and intestinal polyp images. Using a one-level wavelet transform, a maximum capacity of 1.5 BPP is obtained. Experimental results demonstrate that the proposed method is superior to state-of-the-art works in terms of capacity and distortion.
I. INTRODUCTION
Medical image analysis is extensively used to help
physicians in medical applications and improves their
diagnostic capabilities. Transmitting medical information through a public network such as the internet may lead to security problems such as modification and unauthorized access. In this regard, much research has addressed these problems and provided solutions for content authentication. Digital image watermarking addresses these problems, but watermarking modifies and distorts the original image. Even a small distortion in a medical image may negatively affect the physician's diagnosis; hence, the original image content must be preserved. In recent years, reversible watermarking has been introduced as an effective way to restore both the original image and the watermark information. In this manner, the physician's diagnostic process and treatment are not affected, and patient privacy is preserved. Reversible image watermarking can be considered an essential part of a health information system (HIS).

H. R. Zarrabi, M. Hajabdollahi, and N. Karimi are with the Department of Electrical and Computer Engineering, Isfahan University of Technology, Isfahan 84156-83111, Iran.
S. M. R. Soroushmehr is with the Department of Computational Medicine and Bioinformatics and the Michigan Center for Integrative Research in Critical Care, University of Michigan, Ann Arbor, MI, U.S.A.
S. Samavi is with the Department of Electrical and Computer Engineering, Isfahan University of Technology, Isfahan 84156-83111, Iran. He is also with the Department of Emergency Medicine, University of Michigan, Ann Arbor, MI, U.S.A.
K. Najarian is with the Department of Computational Medicine and Bioinformatics, the Department of Emergency Medicine, and the Michigan Center for Integrative Research in Critical Care, University of Michigan, Ann Arbor, MI, U.S.A.
Recently, reversible watermarking has attracted a lot of attention in the research community. Many recent studies on transform-domain watermarking have focused on the wavelet transform to achieve better robustness. For example, in [1] the Haar discrete wavelet transform was used for reversible watermarking. In [2], difference-expansion-based reversible watermarking in the Haar discrete wavelet domain was used. Kumar et al. employed singular-value-decomposition-based reversible watermarking in the discrete wavelet transform domain [3]. In [4], the fourth level of the discrete wavelet transform with a quantization function was used for embedding, and the watermark information was encoded with BCH codes for more security. Selvam et al. [5] first used the integer wavelet transform to generate a transformed image, then applied the discrete Gould transform to the transformed coefficients, reaching a capacity of 0.25 bits per pixel. In [6], the Cohen-Daubechies-Feauveau integer wavelet transform was used; to prevent overflow/underflow, the histogram was first pre-processed and additional side information was created. Companding was applied for compression and decompression of the transformed coefficients; since the companding process can introduce distortion, additional side information was created for it as well.
Some previous studies have investigated reversible watermarking with other transforms and techniques. In [7], the mean value of slantlet-transformed coefficients was utilized for embedding; to increase security, the watermark was transformed to the Arnold domain, and overflow/underflow was prevented by post-processing. In [8], regions of interest and non-interest were first separated automatically using an adaptive threshold detector algorithm; embedding was then implemented in each region separately using bin histograms. In [9], [10] and [11], reversible image watermarking based on prediction-error expansion was proposed. In [9], diagonal neighbors were considered as embedding locations when overflow/underflow occurred. In [12], histogram shifting was applied dynamically, directly on pixels or on prediction errors. In [13], two intelligent techniques, genetic algorithms and particle swarm optimization, were utilized for watermarking with interpolation-error expansion. In [14], intermediate-significant-bit substitution was used as the watermark embedding process; a fragile watermark was embedded for tamper detection, and the watermark bits were encrypted before embedding to increase security.
Reversible watermarking can also be useful for encrypted images. In [15], probabilistic properties of the Paillier cryptosystem were used for encrypted images. In [16], block histogram shifting was used for encrypted images.
In this paper, a novel reversible watermarking method for medical images in telemedicine applications is proposed. The integer wavelet domain is utilized, and watermark bits are embedded in each sub-band in two iterations. In the first iteration, coefficients are modified by embedding. In the second iteration, coefficients are modified in such a way that they move back toward their original values. In this manner, it is possible to embed two bits while keeping the net modification small: the coefficient modification of the first iteration is compensated in the second iteration.
The rest of this paper is organized as follows. In Section II, the proposed reversible watermarking based on the integer wavelet transform is presented. Section III is dedicated to the experimental results. Finally, in Section IV, our concluding remarks are presented.
Figure 1. Block diagram of the embedding procedure.
II. PROPOSED METHOD
In recent studies, the discrete wavelet transform has been used extensively for transform-domain reversible image watermarking. In the discrete wavelet transform, integer pixel values are converted to floating-point coefficients. Because the embedding phase changes coefficient values and involves truncation, preservation of the original integer values cannot be guaranteed. To address this problem, the proposed reversible image watermarking exploits an integer-to-integer wavelet transform. The proposed method consists of two phases, embedding and extraction. In the proposed algorithm, wavelet coefficients are converted to a binary map by:
    Q(x) = mod(⌊x/2⌋, 2)    (1)
where "mod" computes the remainder and ⌊·⌋ is the floor function. According to the watermark bit and the corresponding binary-map value, the original coefficient value is changed by a constant. This process may produce the same value for different coefficients and cause ambiguity during reconstruction of the original image. To avoid this ambiguity, a tracker key is produced as side information, which makes the algorithm reversible. In the following, the embedding and extraction phases are presented.
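A useful consequence of (1) is that changing a coefficient by ±2 always flips its binary-map value; this parity flip is the mechanism the ±2 modifications in the embedding steps exploit. A minimal sketch (the standalone helper name is ours):

```python
import math

def Q(x):
    # Equation (1): binary map of a coefficient, Q(x) = mod(floor(x/2), 2).
    return math.floor(x / 2) % 2

# Adding or subtracting 2 shifts floor(x/2) by exactly one,
# so Q(x +/- 2) == 1 - Q(x) for every integer coefficient x.
```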
A. Embedding phase
An overview of the proposed embedding phase is presented in Fig. 1. The system inputs are the cover image and the watermark logo, and the outputs are the watermarked image and the tracking key. First, the cover image is transformed by a one-level integer wavelet transform, and the four wavelet sub-bands LL, LH, HL and HH are computed. The LL sub-band is highly sensitive to the human visual system, and embedding in this low-frequency sub-band can lead to perceptual distortion. Therefore, according to the capacity requirements, the three high-frequency sub-bands (LH, HL, and HH) are selected for embedding. The watermark bits are divided into two equal parts for each selected sub-band. The embedding phase has two iterations: in the first iteration, each bit of the first part is embedded in a coefficient; in the second iteration, the second part of the watermark is embedded in the previously embedded coefficients.
The embedding procedure for the two iterations is given in Algorithm 1. The embedded image in the wavelet domain is transformed back to the spatial domain with the inverse integer wavelet transform. Embedding may produce a value outside the acceptable range of image values: underflow for pixel values smaller than 0 and overflow for values greater than 255. In these cases, the pixel values are truncated, and their locations as well as their original values are stored as side information. Suppose that c(u, v) is a wavelet coefficient, tkey(i), i = 1, ..., (watermark length), is the tracker key stored as side information, w is the watermark, and c^w(u, v) is the watermarked coefficient. The embedding procedure (Algorithm 1) is as follows.
Algorithm 1: Embedding
1:  Calculate Q(c(u, v)) by equation (1)
2:  If Q(c(u, v)) == 1
3:      tkey(i) = 1
4:      In the first iteration, apply subroutine 1:
5:          If w(i) = 0 then c^w(u, v) = c(u, v) + 2
6:          Else c^w(u, v) = c(u, v)
7:      In the second iteration, apply subroutine 2:
8:          If w(i) = 0 then c^w(u, v) = c(u, v) − 2
9:          Else c^w(u, v) = c(u, v)
10: Else If Q(c(u, v)) == 0
11:     tkey(i) = 0
12:     In the first iteration, apply subroutine 1:
13:         If w(i) = 1 then c^w(u, v) = c(u, v) + 2
14:         Else c^w(u, v) = c(u, v)
15:     In the second iteration, apply subroutine 2:
16:         If w(i) = 1 then c^w(u, v) = c(u, v) − 2
17:         Else c^w(u, v) = c(u, v)
The watermark information is embedded in iteration 2 in such a way that the variations due to iteration 1 are compensated. The embedding process has three advantages. First, the number of modified coefficients is small, which leads to low perceptual distortion. Second, coefficient variations due to iteration 1 can be compensated in iteration 2. Third, since each sub-band (LH, HL, HH) can carry embedded bits, the embedding capacity can be increased; hence the maximum capacity is 1.5 BPP. Also, using simple methods such as duplication, more robustness can be obtained.
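To make the two-pass compensation concrete, here is a round-trip sketch (not the authors' code). It assumes one tracker-key bit is recorded per embedded bit, with Q evaluated at each iteration — one reading of Algorithms 1 and 2 under which the scheme is exactly reversible; the flat-list layout and names are illustrative.

```python
import math

def Q(x):
    # binary map, equation (1)
    return math.floor(x / 2) % 2

def embed(coeffs, bits):
    # Two-pass embedding sketch: pass 1 adds 2 to a coefficient whose
    # binary map disagrees with the bit; pass 2 subtracts 2 instead,
    # so a change made in pass 1 can be compensated in pass 2.
    assert len(bits) == 2 * len(coeffs)
    c = list(coeffs)
    half = len(c)
    tkey = [0] * len(bits)
    for it, delta in ((0, +2), (1, -2)):
        for u in range(half):
            i = it * half + u
            tkey[i] = Q(c[u])            # side information for this bit
            if bits[i] != Q(c[u]):
                c[u] += delta            # afterwards Q(c[u]) == bits[i]
    return c, tkey

def extract(cw, tkey):
    # Reverse order: first undo pass 2, then pass 1.
    c = list(cw)
    half = len(c)
    bits = [0] * (2 * half)
    for it, delta in ((1, +2), (0, -2)):
        for u in range(half):
            i = it * half + u
            bits[i] = Q(c[u])            # the embedded bit is the binary map
            if bits[i] != tkey[i]:
                c[u] += delta            # undo the modification of that pass
    return c, bits
```

In this sketch a coefficient's net change after both passes is at most ±2, which is consistent with the low-distortion claim.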
B. Extraction phase
An overview of the proposed extraction phase is presented in Fig. 2. The inputs are the watermarked image and the tracking key, and the outputs are the recovered original image and the extracted watermark logo. First, as a pre-processing stage, we apply the side information concerning overflow or underflow and retrieve the spatial-domain version of the watermarked image. After pre-processing, the integer wavelet transform is applied to the watermarked image, and four sub-bands are obtained. The watermark information is extracted from the embedded sub-bands in two iterations by the compensation algorithm; in each iteration, one bit is extracted from each wavelet coefficient. The order in which the watermark is extracted is the inverse of the embedding order, so the watermark embedded in the second iteration is extracted in the first iteration. Finally, the original image is recovered by the inverse integer wavelet transform. For the extraction phase, suppose that c^w(u, v) is a watermarked coefficient, tkey(i), i = 1, ..., (watermark length), is the tracker key side information, w^e is the extracted watermark, and c^r(u, v) is the recovered coefficient. The extraction procedure (Algorithm 2) is as follows.
Algorithm 2: Extraction
1:  Calculate Q(c^w(u, v)) by equation (1)
2:  w^e(i) = Q(c^w(u, v))
3:  If tkey(i) = 1 then
4:      In the first iteration, apply subroutine 1:
5:          If Q(c^w(u, v)) = 0 then c^r(u, v) = c^w(u, v) + 2
6:          Else c^r(u, v) = c^w(u, v)
7:      In the second iteration, apply subroutine 2:
8:          If Q(c^w(u, v)) = 0 then c^r(u, v) = c^w(u, v) − 2
9:          Else c^r(u, v) = c^w(u, v)
10: Else If tkey(i) = 0 then
11:     In the first iteration, apply subroutine 1:
12:         If Q(c^w(u, v)) = 1 then c^r(u, v) = c^w(u, v) + 2
13:         Else c^r(u, v) = c^w(u, v)
14:     In the second iteration, apply subroutine 2:
15:         If Q(c^w(u, v)) = 1 then c^r(u, v) = c^w(u, v) − 2
16:         Else c^r(u, v) = c^w(u, v)

Figure 2. Block diagram of the extraction procedure.

III. EXPERIMENTAL RESULTS
The performance of the proposed method is tested on four grayscale medical image datasets: brain MRI [17], cardiac MRI [18], intestinal polyp [19], and breast MRI [20]. Sample images from the four datasets are shown in Fig. 3. All images have been resized to 512×512. The input watermark is a binary image with an almost equal number of ones and zeros (49% ones and 51% zeros).

Table 1 reports the capacity-distortion results of the proposed method for the four medical image datasets. The brain MRI, cardiac MRI, intestinal polyp, and breast MRI datasets contain 80, 70, 100, and 100 images, respectively; Table 1 gives the average result for each dataset. The simulation results show that the maximum capacity of 1.5 BPP is obtained with low distortion. Table 1 also shows that increasing the capacity from 0.1 to 1.5 BPP does not significantly affect the visual quality.

To compare the proposed method with related methods, an experiment is performed on the Lena image. A capacity-PSNR chart is shown in Fig. 4. For the same capacity, our method achieves better PSNR than the six compared methods. Also, the visual quality after a relatively large amount of embedding is acceptable. Note that in the proposed algorithm, more capacity can be obtained with more transformation levels or iterations. Finally, to evaluate the visual quality of the watermarked images, Fig. 5 shows the original and watermarked images. Fig. 5b shows a sample watermark logo for embedding, and Fig. 5c shows a watermarked image with a capacity of 1.5 BPP. Fig. 5 shows that the watermarked images are not visually different from the originals.

Figure 3. (a) Brain MRI, (b) Cardiac MRI, (c) Intestinal Polyp, (d) Breast MRI.
TABLE 1. Average capacity (BPP) vs. distortion (PSNR, dB) for four medical image datasets.

  Capacity (BPP)   Brain MRI   Cardiac MRI   Intestinal Polyps   Breast MRI
  0.1              58.92       58.74         58.13               59.52
  0.2              55.65       55.71         55.12               56.54
  0.3              53.80       53.97         53.36               54.72
  0.7              50.17       50.33         49.68               51.08
  1.0              48.60       48.81         48.13               49.53
  1.3              47.46       47.65         46.98               48.40
  1.4              47.12       47.32         46.66               48.06
  1.5              43.83       47.03         46.36               47.75
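For reference, the PSNR figures reported above follow the standard definition for 8-bit images, 10·log10(255²/MSE); a minimal computation (illustrative helper, not the authors' code):

```python
import math

def psnr(original, watermarked, peak=255.0):
    # Standard PSNR for 8-bit images: 10 * log10(peak^2 / MSE).
    n = len(original)
    mse = sum((a - b) ** 2 for a, b in zip(original, watermarked)) / n
    return float('inf') if mse == 0 else 10.0 * math.log10(peak ** 2 / mse)
```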
Figure 4. Comparison of capacity-distortion of the proposed method with six other works for reversible embedding in the Lena image.

Figure 5. Visual quality of watermarked images using the proposed embedding method: (a) original image, (b) logo image, (c) watermarked image.

IV. CONCLUSION
A novel reversible image watermarking method for medical images based on the integer wavelet transform was presented. The proposed method aims to improve both distortion and embedding capacity. Embedding is performed in two iterations with a compensation method: a coefficient modified in the first iteration can recover its original value in the second iteration. One bit is embedded in each iteration; hence a maximum capacity of 1.5 BPP is obtained. Simulation results demonstrated that the proposed reversible image watermarking provides a suitable capacity-distortion trade-off in comparison with other methods.

REFERENCES
[1] L. C. Huang, T. H. Feng, and M. S. Hwang, "A New Lossless Embedding Techniques Based on HDWT," IETE Tech. Rev., vol. 34, no. 1, pp. 40–47, 2017.
[2] J. Park, S. Yu, and S. Kang, "Non-fragile High Quality Reversible Watermarking for Compressed PNG Image Format Using Haar Wavelet Transforms and Constraint Difference Expansions," Int. J. Appl. Eng. Res., vol. 12, no. 5, pp. 582–590, 2017.
[3] M. Kumar, S. Agrawal, and T. Pant, "SVD-Based Fragile Reversible Data Hiding Using DWT," in Proc. Fifth Int. Conf. on Soft Computing for Problem Solving, pp. 743–756, 2016.
[4] M. P. Turuk and A. P. Dhande, "A Novel Reversible Multiple Medical Image Watermarking for Health Information System," J. Med. Syst., vol. 40, no. 12, 2016.
[5] P. Selvam, S. Balachandran, S. P. Iyer, and R. Jayabal, "Hybrid Transform Based Reversible Watermarking Technique for Medical Images in Telemedicine Applications," Opt. - Int. J. Light Electron Opt., pp. 655–671, 2017.
[6] M. Arsalan, A. S. Qureshi, A. Khan, and M. Rajarajan, "Protection of medical images and patient related information in healthcare: Using an intelligent and reversible watermarking technique," Appl. Soft Comput. J., vol. 51, pp. 168–179, 2017.
[7] I. A. Ansari, M. Pant, and C. W. Ahn, "Artificial bee colony optimized robust-reversible image watermarking," Multimed. Tools Appl., vol. 76, no. 17, pp. 18001–18025, 2017.
[8] Y. Yang, W. Zhang, D. Liang, and N. Yu, "A ROI-based high capacity reversible data hiding scheme with contrast enhancement for medical images," Multimed. Tools Appl., pp. 1–23, 2017.
[9] I. Dragoi and D. Coltuc, "Towards Overflow/Underflow Free PEE Reversible Watermarking," in Eur. Signal Processing Conf., pp. 953–957, 2016.
[10] I. C. Dragoi and D. Coltuc, "Local-prediction-based difference expansion reversible watermarking," IEEE Trans. Image Process., vol. 23, no. 4, pp. 1779–1790, 2014.
[11] X. Li, B. Yang, and T. Zeng, "Efficient Reversible Watermarking Based on Adaptive Prediction-Error Expansion and Pixel Selection," IEEE Trans. Image Process., vol. 20, no. 12, pp. 3524–3533, 2011.
[12] H. Y. Wu, "Reversible Watermarking Based on Invariant Image Classification and Dynamic Histogram Shifting," IEEE Trans. Inf. Forensics Secur., vol. 8, no. 1, pp. 111–120, 2013.
[13] T. Naheed, I. Usman, T. M. Khan, A. H. Dar, and M. F. Shafique, "Intelligent reversible watermarking technique in medical images using GA and PSO," Opt. - Int. J. Light Electron Opt., vol. 125, no. 11, pp. 2515–2525, 2014.
[14] S. A. Parah, F. Ahad, J. A. Sheikh, and G. M. Bhat, "Hiding clinical information in medical images: A new high capacity and reversible data hiding technique," J. Biomed. Inform., vol. 66, pp. 214–230, 2017.
[15] S. Xiang and X. Luo, "Reversible Data Hiding in Homomorphic Encrypted Domain by Mirroring Ciphertext Group," IEEE Trans. Circuits Syst. Video Technol., pp. 1–12, 2017.
[16] Z. Yin et al., "Reversible Data Hiding in Encrypted Image Based on Block Histogram Shifting," in ICASSP, pp. 2129–2133, 2016.
[17] Brain MRI images, available at http://overcode.yak.net/15
[18] Cardiac MRI dataset, available at http://www.cse.yorku.ca/~mridataset/
[19] ASU-Mayo Clinic Colonoscopy Video (c) Database, available at https://polyp.grand-challenge.org/site/Polyp/AsuMayo/
[20] PEIPA, the Pilot European Image Processing Archive, available at http://peipa.essex.ac.uk/pix/mias